12.0.4-31
Updated 10/21/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-97015 | EON | A race occurred in which Vertica scheduled a storage container for removal while restore/replicate was installing the same storage container; after installation completed, the storage was removed by the tombstone, causing file-not-found (FNF) errors. The previous design prevented this by disabling the tombstone before installing storages, re-enabling it afterward, and clearing all reaper queues, but that approach could leak files. Although leaked files can be removed with the clean_communal_storage() function (see the example after this table), restore/replicate has been improved to avoid leaking files in the first place. The fix scans all storages in the snapshot being restored or replicated and removes only those storages from the tombstone and reaper queues, leaving the others untouched, so that neither FNF errors nor leaked files occur. |
| VER-97039 | Optimizer | Between v12 and v24, a previous bugfix caused null rows to pass hash SIP filters. This led to a performance drop for queries that relied on SIPs to filter out nulls early. This has been resolved; SIP filters now remove null rows again. |
| VER-97113 | UI - Management Console | In MC v11.0, the Query Profile page worked without any issues. MC v12.0 was localized and all labels were translated; however, the Query Profile page was not localized due to an oversight. This issue has been resolved. |
| VER-97238 | Data load / COPY | Previously, querying certain ORC/Parquet files in a certain way could cause a hang. This issue has now been fixed. |
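As a minimal sketch of the workaround mentioned for VER-97015, the call below shows how leaked communal-storage files can be removed; the boolean argument follows the documented CLEAN_COMMUNAL_STORAGE signature, and our reading that 'false' only reports rather than deletes is an assumption worth verifying.

```sql
-- Remove invalid/leaked files from communal storage.
-- 'true' deletes the files; 'false' is assumed to only report them.
SELECT CLEAN_COMMUNAL_STORAGE('true');
```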
12.0.4-30
Updated 10/01/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-88127 | EON | The sync_catalog function failed when MinIO communal storage did not meet read-after-write and list-after-write consistency guarantees. A check was added to bypass this restriction; however, where possible, users should ensure that their MinIO storage is configured for read-after-write and list-after-write consistency (see the example after this table). |
| VER-95966 | Machine Learning | In a corner case, an orphan blob could remain in a session when the training of an ML model was cancelled. This orphan blob could cause a crash if a model with the same name was later trained in the same session. This issue has been resolved. |
| VER-96247 | Admin Tools | An internal bug that occurred while fetching an error message has been resolved. |
| VER-96255 | EON | Previously, in certain cases when a cancel occurred during Vertica uploads to the communal storage, the node would crash. This issue has been resolved. |
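For reference, the manual catalog sync mentioned for VER-88127 can be triggered as shown below; this is the standard Eon-mode function, with the optional node-name argument omitted.

```sql
-- Manually flush the catalog to communal storage on all nodes.
SELECT SYNC_CATALOG();
```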
12.0.4-29
Updated 09/19/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-95553 | Execution Engine | An issue that caused a crash when using the WITHIN GROUP () clause with LISTAGG has been fixed (see the example after this table). |
| VER-95665 | Execution Engine | Due to a bug in the numeric division code, the mod operator returned wrong results for some numeric values with large precision. This issue has been resolved. |
| VER-95823 | Execution Engine | An error in expression analysis for REGEXP_SUBSTR would sometimes lead to a crash when that function appeared in a join condition. This issue has been resolved. |
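A minimal sketch of the LISTAGG / WITHIN GROUP combination referenced in VER-95553; the employees table and its dept and name columns are hypothetical.

```sql
-- Build an ordered, comma-separated list of names per department.
SELECT dept,
       LISTAGG(name) WITHIN GROUP (ORDER BY name) AS members
FROM employees
GROUP BY dept;
```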
12.0.4-28
Updated 07/24/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-95196 | Optimizer | Under certain circumstances, partition statistics could be used in place of full table statistics, leading to suboptimal plans. This issue has been resolved. |
| VER-95252 | Optimizer | FK-PK joins over projections with derived expressions would place the PK input on the inner side even when it was much larger than the FK input, which degraded performance in some scenarios. The issue has been fixed. |
12.0.4-27
Updated 07/10/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-94469 | Backup/DR | If a user restored a backup to a cluster whose communal location path contained only a bucket name and no trailing slash, Vertica produced the wrong metadata path when one was needed. This has been fixed by appending a slash to the communal location path when writing a catalog object. |
| VER-95107 | Optimizer | If ARGMAX_AGG and DISTINCT were both used in a query, an internal error was raised. This issue has been resolved. Now, this case raises an "unsupported" error message that includes a hint on how to rework the SQL query to avoid the error. |
12.0.4-26
Updated 05/29/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-93937 | Client Drivers - ODBC | The Windows DSN configuration utility no longer sets "vertica" as the default KerberosServiceName value when editing a DSN. Starting with version 11.1, providing a value causes the ODBC driver to assume the connection uses Kerberos authentication and to tell the server that it prefers that authentication method, provided the user has been granted a Kerberos authentication method. The KerberosServiceName value might have been set by earlier versions of Windows ODBC DSNs; clearing the value resolves the issue. This issue only affects users who have been granted a Kerberos authentication method with a lower priority than other authentication methods and who use the DSN configuration utility to set up a DSN on Windows. |
| VER-94260 | Basics | Fixed a regression that occurred when the trace profiling frequency flag was set to 0. |
| VER-94319 | Catalog Engine | Because dc_catalog_refcounts is useful only for debugging purposes, it is now disabled by default. |
| VER-94338 | Depot | Depot fetch previously issued an AWS list request; it no longer does. |
12.0.4-25
Updated 05/14/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-93576 | Optimizer | In versions 12.0.4-23 and 12.0.4-24, queries that reused views containing WITH clauses would sometimes fail after several executions of the same query. This issue has been resolved. |
| VER-93927 | Execution Engine | Whether LIKE ANY / ALL read strings as UTF-8 character sequences or as binary byte arrays depended on whether the collation of the current locale was binary, leading to incorrect results when reading multi-character UTF-8 strings in binary-collated locales. This has been resolved. Now, LIKE ANY / ALL always reads UTF-8 character sequences, regardless of the current locale's collation (see the example after this table). |
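A hedged illustration of the LIKE ANY form affected by VER-93927; the table t and column name are hypothetical. After the fix, the multi-byte patterns below match as UTF-8 character sequences even in a binary-collated locale.

```sql
-- Match rows whose name starts with either multi-byte prefix.
SELECT name
FROM t
WHERE name LIKE ANY (ARRAY['北京%', 'Mün%']);
```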
12.0.4-24
Updated 04/23/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-93205 | Data load / COPY | When the Avro parser read a byte array of at most 8 bytes into a numeric-typed target, it accepted only a single-word numeric as the target type. This has been resolved; the Avro parser now supports reading short byte arrays into multi-word numeric targets. |
| VER-93328 | Execution Engine | User-defined aggregates did not work with a single distinct built-in aggregate in the same query when the input was not sorted on the grouping columns plus the distinct aggregate column. The issue has been resolved. |
| VER-93448 | Backup/DR | LocalStorageLocator did not implement the construct_new() method. When called, it fell back to the StorageLocation.construct_new() method, which raised an error. This issue has been resolved: LocalStorageLocator.construct_new() is now implemented. |
12.0.4-23
Updated 04/01/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-87087 | UI - Management Console | The Spring Security package has been upgraded from version 5.7.4 to version 5.7.5 in this release. |
| VER-89518 | Procedural Languages | Fixed memory leaks that could occur with certain stored procedures. |
| VER-89633 | UI - Management Console | The HTTP Strict-Transport-Security (HSTS) response header was added to all MC responses. This header informs the browser that the site should be accessed through HTTPS only, and that any HTTP connections should automatically be converted to HTTPS. |
| VER-91795 | Execution Engine | In rare situations, a logic error in the execution engine "ABuffer" operator would lead to buffer overruns resulting in undefined behavior. This issue has been fixed. |
| VER-91820 | Execution Engine | Vertica's execution engine pre-fetches data from disk to reduce wait time during query execution. Memory for the pre-fetch buffers was not reserved with the resource manager, and in some situations a pre-fetch buffer could grow very large and bloat the memory footprint of a query until it completed. Queries now account for this pre-fetch memory in their requests to the resource manager, and several internal changes mitigate the long-term memory footprint of larger-than-average pre-fetch buffers. |
| VER-92111 | DDL - Projection | When Vertica scanned a projection sorted by two columns (ORDER BY a, b) and materialized only the second column in the sort order (b), it mistakenly assumed the scan was sorted by that column for the purposes of collecting column statistics. This could lead to incorrect results when predicate analysis was enabled, and has now been resolved. |
| VER-92115 | Catalog Engine | Previously, syslog notifiers could cause the node to go down when attached to certain DC tables. This issue has been resolved. |
| VER-92126 | Optimizer | Queries using the same views repeatedly would sometimes return errors if those views included WITH clauses. The issue has been resolved. |
| VER-92289 | Sessions | The ALTER USER statement could not set the idle timeout for a user to the default value, which is defined by the DefaultIdleSessionTimeout configuration parameter; if the empty string was specified, the idle timeout was set to unlimited. This issue has been resolved. You can now set the idle timeout to the DefaultIdleSessionTimeout value by specifying 'default' in the ALTER USER statement (see the example after this table). |
| VER-92678 | Scrutinize | The scrutinize utility produces a tar file of the data it collects. Previously, scrutinize could fail to create this tar file if it encountered a broken symbolic link. This issue has been resolved, and the size of the tar file is now logged to scrutinize_collection.log. |
| VER-92750 | HTTP | Changing the Vertica server certificate triggers an automatic restart of the built-in HTTPS server. When this happened on a busy system, the nodes could sometimes go down. The issue has been fixed. |
| VER-92822 | Data load / COPY | In COPY, missing error checks meant that certain invalid input could crash the database. This has been resolved. |
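A minimal sketch of the new VER-92289 behavior, assuming a user named alice; specifying 'default' maps the idle timeout back to the DefaultIdleSessionTimeout configuration parameter.

```sql
-- Reset alice's idle session timeout to the database-wide default.
ALTER USER alice IDLESESSIONTIMEOUT 'default';
```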
12.0.4-22
Updated 02/16/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-91669 | ResourceManager | If the default resource pool, defined by the DefaultResourcePoolForUsers configuration parameter, was set to a value other than 'general', the users view incorrectly reported the non-general resource pool as the default pool for users who did not have that pool set in their profile. This issue has been resolved; the default pool in such cases is now correctly reported as 'general'. |
| VER-91744 | Execution Engine | The NULLIF function inferred its output type based only on its first argument. This led to type compatibility errors when the first argument was a small numeric type and the second argument was a much larger numeric type. This has been resolved; numeric NULLIF now accounts for the types of both arguments when inferring its output type. |
| VER-91827 | Optimizer | Queries with identical-looking predicates of very different selectivity on different tables, used in different subqueries, could produce bad query plans and degraded performance due to incorrect estimates on those tables. The issue has been resolved. |
| VER-91837 | Optimizer | When a node went down in Eon mode, the buddy node handling double duty did not adjust the resource calculation. The behavior is now consistent with the Enterprise-mode node-down scenario. |
12.0.4-21
Updated 01/23/2024
| Issue Key | Component | Description |
|---|---|---|
| VER-90537 | Optimizer | UPDATE statements with subqueries in SET clauses would sometimes return an error. The issue has been resolved. |
| VER-91191 | Optimizer | In version 10.1, Vertica updated its execution engine to sample the execution times and selectivity of query predicates and join predicates so that it can run them in the most efficient order. This was disruptive for users whose queries depended on a particular evaluation order, specifically that single-table predicates would be evaluated before join conditions. In particular, queries whose single-table predicates filter out data that would raise a coercion error in the join condition could, after this change, raise an error because the join condition was evaluated first. This experience has been improved by ensuring that join conditions do not raise type-coercion errors when they are evaluated before single-table predicates. |
| VER-91236 | Backup/DR | On HDFS, vbr tried to delete storage files from the wrong fan-out directory. This issue has been resolved. |
12.0.4-20
Updated 12/27/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-90626 | Optimizer | When the database query debugging configuration parameter "QueryAssertEnabled" was set to 1, replay delete query plans could raise INTERNAL errors and fail to run. This issue has been resolved. |
| VER-90858 | Optimizer | CREATE TABLE AS SELECT statements with repeated occurrences of now() and similar functions inserted incorrect results into the target table. The issue has been resolved. |
| VER-91151 | Data load / COPY | The upgrade of the C++ AWS SDK in 12.0.2 caused Vertica to make repeated calls to the metadata server for IAM authentication, affecting performance when accessing S3. Vertica now resets the timestamp to prevent excessive pulling. |
12.0.4-19
Updated 12/04/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-89770 | Scrutinize | The --log-limit parameter determines the maximum size of the vertica log that is preserved when running scrutinize. The limit applies to the vertica.log file on all nodes in the cluster. The default value has changed from 1GB to unlimited. |
| VER-89779 | Security | The following improvements have been made to LDAPLink: LDAP synchronizations have been optimized and are now much faster for nested groups, and query profiling now works with LDAP dryrun functions. |
| VER-89845 | Data Collector | If a notifier was set for some DC tables and then subsequently dropped, it still remained present in those DC table policies. This could cause a very large number of messages in vertica.log and potential node crashes. The issue was resolved by adding CASCADE support to "DROP NOTIFIER" (see the example after this table). Without CASCADE, the drop fails for notifiers still used by DC tables. |
| VER-89909 | Security | Previously, when a certificate chain longer than a root CA certificate plus a client certificate was configured for internode TLS, the configuration was applied successfully but caused the cluster to shut down. This has been fixed. |
| VER-89915 | Backup/DR | Backups to S3 object storage and Google Cloud Storage failed and returned a "Temp path" error. This issue has been resolved. |
| VER-89988 | Kafka Integration | When a notifier was set for the NotifierErrors or NotifierStats Data Collector (DC) tables, notifications sent with a Kafka notifier could cause a loop that produced an infinite stream of notifications, which could severely degrade node performance. This issue has been resolved: notifications are now disabled for these DC tables, and any existing notifiers have been removed from them. |
| VER-90066 | ComplexTypes, Data load / COPY | A logic gap in the source code could lead to an infinite loop when loading complex arrays with thousands of elements, causing the DML statement to never complete. This issue has been fixed. |
| VER-90091 | Security | In cases of intermittent network connectivity to an LDAP server, Vertica now retries bind operations. |
| VER-90107 | Catalog Engine | Queries now run correctly when the files of delete vectors are in different storage locations. |
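A sketch of the CASCADE behavior added in VER-89845, assuming a notifier named my_notifier that is still referenced by DC table policies.

```sql
-- Fails if DC table policies still reference the notifier:
DROP NOTIFIER my_notifier;
-- Detaches the notifier from any DC table policies, then drops it:
DROP NOTIFIER my_notifier CASCADE;
```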
12.0.4-18
Updated 10/26/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-89735 | Recovery | Certain rare sequences of memory allocations crashed Vertica. This occurred when loading a table with many fields while a node was down and the MaxTieredPoolScale configuration parameter was set to 22 or lower. This issue has been resolved. |
| VER-89775 | Execution Engine | When casting a negative numeric value to an integer, if the result of the cast would be 0, Vertica incorrectly returned an "out of range" error. This issue has been resolved. |
| VER-89784 | Optimizer | In some circumstances, a UNION query that grouped by an expression coercing a value to a common data type returned an error. This issue has been resolved. |
12.0.4-17
Updated 10/17/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-88925 | UI - Management Console | When you provisioned a new database on Amazon Web Services, the operation failed. This issue has been resolved. |
| VER-89567 | Tuple Mover | When the node with the lowest OID became secondary (for example, during cluster demotion), there might have been an increased number of deadlocks and timeouts due to Data Manipulation Language (DML) statements and internal Tuple Mover tasks. This issue has been resolved. |
12.0.4-16
Updated 10/06/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-89275 | Data load / COPY | If a Parquet query or load was interrupted (for example, by a LIMIT clause, an exception during execution, or user cancellation) while the configuration parameter "ParquetColumnReaderSize" was set to zero, Vertica could crash. This issue has been fixed. |
| VER-89336 | Data Collector | In some environments the io_stats system view was empty. The monitoring functionality has been improved with better detection of I/O devices. |
| VER-89488 | EON, Execution Engine | A LIKE ANY or LIKE ALL expression with a non-constant pattern argument on the right-hand side sometimes resulted in a crash or an incorrect internal error. This issue has been resolved. Now, this type of pattern argument results in a normal error. |
12.0.4-15
Updated 09/21/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-88957 | Optimizer | In some circumstances, queries with outer joins or cross joins that also utilized Top-k projections caused a server error. This issue has been resolved. |
| VER-89102 | Admin Tools | On SUSE Linux Enterprise Server 15, the systemctl status verticad command failed. This issue has been resolved. |
| VER-89179 | Spread | Previously, if Vertica received unexpected UDP traffic from its client port, the node could go down. This issue has been resolved. |
| VER-89276 | Security | The following improvements have been made to LDAPLink: LDAP synchronizations have been optimized and now are much faster for nested groups. Query profiling now works with LDAP dryrun functions. |
12.0.4-14
Updated 08/29/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87968 | SDK | Previously, Vertica did not allow Vertica UDX builds using gcc compiler versions 13 or higher. This restriction has been removed. |
| VER-87971 | Kafka Integration | In some circumstances, there were long timeouts, or the process could hang indefinitely, when the KafkaAvroParser accessed the Avro Schema Registry. This issue has been resolved. |
| VER-88497 | Client Drivers - ODBC | Previously, the connection property FastCursorClose was set to false by default, which prevented you from canceling SQLFetch(). You had to set it to true with conn.addToConnString("FastCursorClose=1"); to cancel requests. FastCursorClose is now set to true by default. |
| VER-88526 | Backup/DR | Every time Vertica tried to load a snapshot, it checked all the storage files. This file check was time-consuming and does not need to run so often, so it is now disabled. |
| VER-88548 | Optimizer | Queries that contained a WITH query referenced more than once, and that also contained multiple distinct aggregates, failed with a system error. This issue has been resolved. |
| VER-88632 | Admin Tools | When you shut down a database with the admintools stop_db option, the command failed and returned an active-sessions error. This issue has been resolved. |
| VER-88656 | Optimizer | Queries that contained a WITH query referenced more than once, and that also contained joins on tables with segmented projections and SELECT DISTINCT or LIMIT subqueries, sometimes produced an incorrect result. This issue has been resolved. |
12.0.4-13
Updated 08/28/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-88116 | Client Drivers - ODBC | Previously, the ODBC driver could return 64-bit FLOATs with an incorrect value in the last bit, which is not IEEE-compliant. This has been fixed. |
| VER-88206 | Optimizer | In some query plans with segmentation across multiple nodes, an internal optimizer error occurred when trying to prune unused data edges from the plan. This issue has been resolved. |
| VER-88239 | EON | In rare circumstances, the automatic sync of catalog files to the communal storage stopped working on some nodes. Users could still sync manually with sync_catalog(). The issue has been resolved. |
| VER-88282 | Performance tests | In some cases, the NVL2 function caused Vertica to crash when it returned an array type. This issue has been resolved. |
12.0.4-12
Updated 07/27/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87963 | Tuple Mover | The Tuple Mover logged a large number of PURGE requests on a projection while another MERGEOUT job was running on the same projection. This issue has been resolved. |
| VER-87966 | Optimizer | Queries with outer joins over subqueries whose WHERE clauses contain AND expressions with constant terms sometimes returned an error. This issue has been resolved. |
| VER-87976 | Optimizer | When you created a UDx side process, Vertica required the current time zone to have a name. This caused a crash when a UDx side process was created under a time zone with a GMT offset rather than a name. This issue has been resolved. |
| VER-88006 | Execution Engine | Queries on large tables could stop the database because the indices that Vertica uses to navigate the tables consumed too much RAM. This issue has been resolved, and the indices now use less RAM. |
12.0.4-11
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87254 | ComplexTypes, Execution Engine | The optimization that makes EXPLODE on complex types materialize only the fields needed by the query was not applied to the similar UNNEST function. This issue has been resolved, and UNNEST now similarly prunes unused fields from scans/loads. |
| VER-87776 | Admin Tools, Data Collector | If you revived a database while the EnableDataCollector parameter was set to 1, you could not start the database after it was revived. This issue has been resolved. To start the database, disable the cluster lease check. |
| VER-87800 | Optimizer | During the planning stage, updates on tables with thousands of columns using thousands of SET USING clauses took a long time. Planning performance for these updates has been improved. |
| VER-87804 | ComplexTypes, Execution Engine | When rewriting a CROSS JOIN UNNEST query into an equivalent query that puts the UNNEST in a subquery, requesting scalar columns from a table with larger complex columns could lead to an INTERNAL error. This has been resolved. |
| VER-87821 | Execution Engine | When casting a NUMERIC type to an INTEGER type, the bounds of acceptable values were based on the NUMERIC(18, 0) type rather than the INTEGER type, so valid 64-bit integers with 19 digits returned an error. This issue has been resolved, and casting a NUMERIC type to an INTEGER type now uses the correct bounds for the INTEGER type. |
| VER-87877 | Catalog Engine | Previously, when a cluster lost quorum and switched to read-only mode or stopped, some transaction commits in the queue might still be processed. However, due to the loss of quorum, these commits might not have been persisted. Such "transient transactions" were reported as successful but were lost when the cluster restarted. Now, when Vertica detects a transient transaction, it issues a WARNING so you can diagnose the problem and creates an event in ACTIVE_EVENTS that describes what happened (see the query after this table). |
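To look for the transient-transaction events described in VER-87877, you can inspect the ACTIVE_EVENTS system table; the unfiltered query below is deliberately broad, since the exact event text is not specified here.

```sql
-- Inspect active events (e.g., after a quorum-loss restart).
SELECT * FROM v_monitor.active_events;
```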
12.0.4-10
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87119 | Kafka Integration, Security | Previously, a Kafka notifier configured with SASL_SSL or SASL_PLAINTEXT would incorrectly use SSL instead. This has been resolved. |
| VER-87255 | Optimizer | Merge queries with an INTO…USING clause that calls a subquery would sometimes return an error when merging into a table with SET USING or DEFAULT query columns. The issue has been resolved. |
| VER-87258 | Execution Engine | Because the EXPLODE function is a 1:N transform function, using ORDER BY in its OVER clause has an undefined effect. Previously, using an ORDER BY clause in the OVER clause of EXPLODE could result in an INTERNAL error if the configuration parameter TryPruneUnusedDataEdges was set to 1. This issue has been resolved. |
| VER-87294 | Optimizer | Queries eligible for Top-K projections that were also eligible for elimination of no-op joins would sometimes exit with an internal error. The issue has been resolved. |
| VER-87297 | Backup/DR | When each node in a cluster pointed to a different backup location, the backup location was non-deterministic, and there were inconsistent failures. This issue was resolved. |
| VER-87422 | Optimizer | In some cases, using SQL macros that return string types could result in core dumps. The issue was resolved. |
| VER-87433 | ComplexTypes | The flex and Kafka parsers erroneously ignored the "reject_on_materialized_type_error" parameter in cases where an array was too large for the target column and no element was rejected; previously, such values were always rejected. This has been corrected: if "reject_on_materialized_type_error" is false, those values are now set to NULL instead. |
| VER-87443 | Data load / COPY | In some circumstances, Parquet file row groups whose metadata field "TotalBytes" had the value 0 might not load. This issue has been resolved. |
12.0.4-9
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87058 | Catalog Engine | Truncating a local temporary table unnecessarily required a global catalog lock, even though temporary tables are session-scoped. This issue has been resolved. |
| VER-87085 | Execution Engine | When evaluating check constraints on tables with multiple projections with different sort orders, Vertica would sometimes read the data from the table incorrectly. This issue has been resolved. |
| VER-87102 | ComplexTypes | Previously, the flex table and Kafka parsers could crash if they tried to load array data that was too large for the target table. This behavior was fixed, but the fix introduced a change whereby those array values caused the whole row to be rejected instead of setting the array value to NULL. Now, the default behavior is to set the data cell to NULL if the array value is too large. This can be overridden with the "reject_on_materialized_type_error" parameter, which causes such rows to be rejected instead (see the example after this table). |
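A hedged example of overriding the new VER-87102 default with the flex JSON parser; the table name flex_events and the file path are hypothetical.

```sql
-- With the parameter set to true, rows whose array values are too large
-- for the target column are rejected instead of loaded as NULL.
COPY flex_events FROM '/data/events.json'
PARSER fjsonparser(reject_on_materialized_type_error=true);
```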
12.0.4-8
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-87058 | Catalog Engine | Truncating a local temporary table unnecessarily required a global catalog lock, even though temporary tables are session-scoped. This issue has been resolved. |
| VER-86483 | UI - Management Console | In version 12.0.3, the LDAP port number is required to log in to the Management Console (MC) with LDAP authentication; it was not required in some previous versions. If authentication fails, set the default port in the LDAP URL or in MC Settings > Authentication. |
| VER-87053 | Catalog Engine, Performance tests | In version 12.0.0, querying system tables could be slower than in previous versions. Version 12.0.4-8 adjusts the system table segmentation to improve system table queries. |
| VER-87082 | Optimizer | If a user-defined SQL function that returns a string was nested within a call to TRIM, which was in turn nested within a call to NULLIF (for example: "NULLIF(TRIM(user_function(value), ' '))"), Vertica could return an invalid result or the error "ERROR: ICU locale operation error: 'U_BUFFER_OVERFLOW_ERROR'". This issue has been resolved. |
12.0.4-7
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86936 | Security | Previously, adapter_parameters values in the NOTIFIER system table were truncated if they exceeded 128 characters. This limit has been increased to 8196 characters. |
| VER-86939 | Data Export, S3 | Export to Parquet sometimes logged errors in a DC table for successful exports. This has been corrected. |
12.0.4-6
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86935 | EON | Under certain circumstances, depending on the frequency and duration of depot fetching activity, a file could not be re-fetched after its eviction (whether automatic or manually cleared) unless the node was restarted. This issue has been resolved. |
12.0.4-5
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86877 | Execution Engine | In some circumstances, the database crashed with errors when you upgraded from Vertica version 11.1.1 or higher to Vertica version 12.0.4. This issue has been resolved. |
12.0.4-4
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86828 | Catalog Engine, Security | When upgrading a database, any user or role with the same name as a predefined role is renamed. |
12.0.4-3
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86721 | Client Drivers – Python, Sessions | Loading certain data could sometimes cause an empty string to not actually be empty, which could lead to a variety of errors. This issue has been resolved. |
| VER-86729 | Optimizer | In some circumstances, queries that had valid scalar data types were returning a VIAssert error. This issue has been resolved. |
12.0.4-2
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86559 | FlexTable | In some cases, COMPUTE_FLEXTABLE_KEYS assigned non-string data types to keys where a string data type was more suitable. The algorithm has been improved to prefer string types in those cases (see the example after this table). |
| VER-86613 | Execution Engine | LIKE operators qualified by ANY and ALL did not correctly evaluate multiple string constant arguments. This issue has been resolved. |
| VER-86650 | ComplexTypes | The flex JSON and Avro parsers did not always correctly handle excessively large ARRAY[VARCHAR] inputs. In certain cases this led to undefined behavior resulting in a crash. This issue has been fixed. |
| VER-86730 | Machine Learning | Loading certain data could sometimes lead to an empty string not being empty, which presented as a variety of errors. This issue has been resolved. |
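A minimal sketch of the function referenced in VER-86559, assuming a flex table named flex_logs; after the improvement, ambiguous keys should be typed as strings. The flex_logs_keys table is the standard output table that COMPUTE_FLEXTABLE_KEYS populates.

```sql
-- Infer virtual-column keys and their data types for a flex table.
SELECT COMPUTE_FLEXTABLE_KEYS('flex_logs');
-- Review the inferred key names and type guesses.
SELECT key_name, data_type_guess FROM flex_logs_keys;
```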
12.0.4-1
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86280 | Execution Engine | When pushing down predicates in a query where a WITH clause was turned into a shared temp relation, an IS NULL predicate on the preserved side of a left outer join was pushed below the join. As a result, rows that should have been filtered out were erroneously included in the result set. This issue has been resolved by updating the predicate pushdown logic (see the sketch after this table). |
| VER-86282 | Execution Engine | Expressions resembling expr = ANY(string_to_array(list_of_string_constants)) had a logic error that resulted in undefined behavior. This issue has been resolved. |
| VER-86318 | Recovery | When you applied a swap partition event to one table, the other table involved in the swap partition event was removed from the dirty transactions list. This issue has been resolved, and now both tables involved in the swap partition event are in the dirty transactions list. |
| VER-86340 | AP-Geospatial | If you nested multiple geospatial functions, an issue finding usable memory could crash the database. This issue has been resolved. |
| VER-86346 | Admin Tools, Security | Paramiko has been upgraded to 2.10.1 to address CVE-2022-24302. |
| VER-86385 | Execution Engine | When a database had many storage locations, queries and other operations such as analyze_statistics() were sometimes slow. This issue has been resolved. |
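An illustrative query of roughly the shape affected by VER-86280; the tables a and b and their columns are hypothetical, and this is a sketch of the pattern rather than a reproduction of the bug.

```sql
-- The WITH relation may be materialized as a shared temp relation.
WITH w AS (SELECT id, flag FROM a)
SELECT w.id
FROM w LEFT JOIN b ON w.id = b.id
WHERE w.flag IS NULL;  -- IS NULL predicate on the preserved side
```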
12.0.4-0
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-83522 | Admin Tools | The root cause of this issue was addressed by resolving VER-84721. |
| VER-84116 | ComplexTypes | Previously, the system tables jdbc_columns and odbc_columns looked up and displayed type information from v_catalog.types, which does not include information on complex types. This issue has been resolved: these tables now retrieve complex type data from the system table v_catalog.complex_types. |
| VER-84293 | Optimizer | Queries with deeply nested expressions caused nodes to crash due to stack overflow. This issue has been resolved: now, if stack overflow occurs during a non-essential stage of query processing (for example, pretty-printing expressions), the query runs normally; otherwise, the query fails with an error message stating that it contains an expression too large to analyze. |
| VER-84540 | Security | Enabling data-channel encryption with a self-signed certificate could cause the database to go down. This issue has been resolved. |
| VER-84610 | Security | An LDAP exception occurred when an LDAP user name was modified manually in Vertica and conflicted with a new LDAP user, causing the initiator node to go down. This issue has been resolved. |
| VER-84653 | Data load / COPY | Multiple-row insert is implemented with the explode() function. If explode() could not be found in the active session's search path, the multiple-row insert failed. This issue has been resolved. |
| VER-84721 | Admin Tools | Installing an external procedure wrote the copied file as strings rather than bytes. Length calculation errors occurred when Japanese, Chinese, or other multibyte characters were inserted, causing discrepancies between byte and string lengths. This issue has been resolved by writing the copied file in bytes. |
| VER-84723 | SDK, UDX | Previously, running UDT functions with large parameters allocated large amounts of memory. The issue has been resolved. |
| VER-84842 | Backup/DR | Full restore in Eon mode sometimes failed if some nodes were down during backup. This issue has been resolved. |
| VER-84897 | Data Export | Previously, all exports to Parquet/ORC/delimited/JSON failed if the exported query invoked a multi-part query plan. This issue has been resolved. |
| VER-84907 | Client Drivers - ADO | When using the ADO.NET driver, canceling a request while load balancing was enabled sometimes caused an exception. This issue has been resolved. |
| VER-84977 | Data load / COPY | A check to prevent TOCTOU (time-of-check to time-of-use) privilege escalations issued false positives in cases where a file was appended to during a COPY. This issue has been resolved: the check has been updated so that it no longer issues a false positive in such situations. |
| VER-84978 | Security | The "starttls" LDAP authentication parameter is no longer deprecated and should generally be used when the LDAPAuth TLS Configuration is not granular enough to handle your environment. For example, if you have several LDAPAuth servers and only some of them can handle TLS, use ALTER AUTHENTICATION to set "starttls" to "soft" in your authentication record to make TLS a preference rather than a requirement (see the example after this table). |
| VER-85005 | Security | Patched libtar to protect against CVE-2021-33643, CVE-2021-33644, CVE-2021-33645, and CVE-2021-33646. |
| VER-85017 | Client Drivers - JDBC, Sessions | Previously, if a JDBC client required TLS and the server did not support TLS, the JDBC driver failed to close the socket on the client side, which caused the server to keep its own socket open. As a result, clients sometimes failed to connect with the error "SSL negotiation failed", or server shutdown took a long time. This issue has been resolved. |
| VER-85092 | Admin Tools | Previously, multibyte characters in LDAPLinkFilterGroup and LDAPLinkFilterUser could prevent the database from starting. This issue has been resolved. |
| VER-85197 | Optimizer | Queries containing WITH statements whose sorted output was referenced in different ways throughout the query sometimes returned erroneous results. The issue has been resolved. |
| VER-85285 | Optimizer | LIMIT k OVER (…) clauses incorrectly estimated output rows as k, where k was calculated for every partition in the OVER clause. This issue has been resolved: the estimate of output rows is now derived from the OVER clause. |
| VER-85364 | Database Designer Core | DESIGNER_DESIGN_PROJECTION_ENCODINGS returned an error if a period was embedded in the design name. This issue has been resolved. |
| VER-85372 | Tuple Mover | The mergeout strata algorithm had a hidden overflow issue due to incorrect type casting for wide projections with more than 4095 columns. This issue has been resolved. |
| VER-86519 | DevOps | Fixed RPM digests by installing a newer version of RPM on our build container when building RPMs. |
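A hedged sketch of the "starttls" preference described in VER-84978, assuming an existing LDAP authentication record named v_ldap_auth.

```sql
-- Prefer, but do not require, TLS when contacting LDAP servers.
ALTER AUTHENTICATION v_ldap_auth SET starttls='soft';
```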
12.0.3-4
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-86368 | Execution Engine | When a database had many storage locations, query and other operations such as analyze_statistics() were sometimes slow. This issue has been resolved. |
12.0.3-3
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-85643 | Database Designer Core | DESIGNER_DESIGN_PROJECTION_ENCODINGS returned an error if a period was embedded in the design name. This issue has been resolved. |
12.0.3-1
Updated 07/18/2023
| Issue Key | Component | Description |
|---|---|---|
| VER-85014 | Security | Enabling data-channel encryption with a self-signed certificate could cause the database to go down. This issue has been resolved. |
| VER-85070 | UI - Management Console | Two MC Activity pages, Table Utilization and Table Details, erroneously referenced system table columns that were removed in an earlier release. This issue has been resolved. |
| VER-85236 | Security | The "starttls" LDAP authentication parameter is no longer deprecated and should generally be used when the LDAPAuth TLS Configuration is not granular enough to handle your environment. For example, if you have several LDAPAuth servers and only some of them can handle TLS, set "starttls" to "soft" (using ALTER AUTHENTICATION) in your authentication record to make TLS a preference rather than a requirement. |
