24.2.0-6
Updated 10/21/2024
Issue Key | Component | Description |
---|---|---|
VER-97025 | Optimizer | Between v12 and v24, a previous bugfix made it so that null rows passed hash SIP filters. This led to a performance drop for queries that relied on SIP filters to eliminate nulls early. This has been resolved; SIP filters now remove null rows again. |
VER-97234 | Data load / COPY | Previously, querying certain ORC/Parquet files in a particular way could cause a hang. This issue has now been fixed. |
VER-97339 | Optimizer | A join spill occurred when processing the optimized-delete part of a plan, which disallows the Filter/Filter join distribution: the join inner was too large to fit in memory, resulting in an error with a hint to retry with Filter/Filter enabled, even though it was already enabled in the session. This issue has been resolved: if the EnableFilterFilter hint is encountered when it is already enabled, Vertica checks whether the plan is an optimized delete and, if so, retries with optimized delete disabled, allowing the optimizer to choose a Filter/Filter join distribution. |
24.2.0-5
Updated 10/21/2024
Issue Key | Component | Description |
---|---|---|
VER-96758 | UI - Management Console | The Management Console now allows adding LDAP users with numeric usernames. |
VER-96808 | UI - Management Console | Entering email addresses for federated accounts is meant to be optional, but some users who could not alter their LDAP objects to add email attributes were blocked by the requirement. This issue has been resolved; the attribute is no longer mandatory. |
VER-96867 | Security | Clients that do not send packets during TLS connection initialization no longer cause CPU usage to spike. |
VER-97116 | UI - Management Console | After users upgraded their cluster from v11 to v12, the Management Console did not properly show profile data for running queries. This issue has been resolved. |
24.2.0-4
Updated 09/10/2024
Issue Key | Component | Description |
---|---|---|
VER-95661 | Execution Engine | Due to a bug in the numeric division code, evaluating the mod operator on some numeric values with large precision returned an incorrect result. This issue has been resolved. |
VER-95819 | Execution Engine | An error in expression analysis for REGEXP_SUBSTR could lead to a crash when that function appeared in a join condition. This error has been resolved. |
VER-95834 | UI - Management Console | URL parsing failed when redirecting from the Management Console to Keycloak if the URL contained the special "OR" operator character (|). This issue has been resolved. |
VER-95962 | Machine Learning | In a corner case, an orphan blob could remain in a session when the training of an ML model was cancelled. This orphan blob could cause a crash on a later attempt to train a model with the same name in the same session. This issue has been resolved. |
VER-96126 | Security | install_vertica no longer prefers the system OpenSSL library to the one shipped with Vertica on non-FIPS systems. If you would like to use the system OpenSSL library, delete the OpenSSL libraries located at /opt/vertica/lib. |
VER-96228 | Backup/DR | Server-based replication with a target namespace could fail due to namespace-name case sensitivity. This issue has been resolved. |
VER-96250 | EON | Previously, in certain cases when a cancel occurred during a Vertica upload to communal storage, the node would crash. This issue has now been resolved. |
VER-96327 | Catalog Engine | Qualified schema names with namespaces had some restrictions, which could lead to no results being returned. This issue has been resolved. Note: this issue occurred only when displaying the system table. |
VER-96386 | Backup/DR | VBR no longer requires compat-openssl11 to be installed on RHEL9 systems. |
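The VER-96126 workaround above (prefer the system OpenSSL by deleting the bundled libraries) can be sketched as a small shell loop. This is only a sketch under assumptions: the bundled library names (libssl*/libcrypto*) are assumed, and the demo below runs against a temporary stand-in directory rather than a live /opt/vertica/lib.

```shell
# Sandbox demo of the VER-96126 workaround. On a real node, inspect
# /opt/vertica/lib first and run the removal with sudo; file names here
# are assumptions for illustration.
VERTICA_LIB=$(mktemp -d)                 # stand-in for /opt/vertica/lib
touch "$VERTICA_LIB/libssl.so.1.1" "$VERTICA_LIB/libcrypto.so.1.1"

for f in "$VERTICA_LIB"/libssl* "$VERTICA_LIB"/libcrypto*; do
  if [ -e "$f" ]; then
    echo "removing bundled OpenSSL library: $(basename "$f")"
    rm "$f"
  fi
done
```

Keep the bundled copies in place on FIPS systems; this deletion only makes sense when you deliberately want the system OpenSSL.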
24.2.0-3
Updated 07/26/2024
Issue Key | Component | Description |
---|---|---|
VER-94595 | FlexTable | Copying multiple JSON files to a Vertica table with fjsonparser() could bring down the initiator node. This issue has been fixed, and such copies now succeed. |
VER-95103 | Optimizer | If ARGMAX_AGG and DISTINCT were both used in a query, an internal error was raised. This issue has been resolved. Now, this case raises an "unsupported" error message that includes a hint on how to rework the SQL query to avoid the error. |
VER-95108 | Data load / COPY | fjsonparser now handles a null file, as well as a null found outside the JSON object. |
VER-95200 | Optimizer | Under certain circumstances, partition statistics could be used in place of full-table statistics, leading to suboptimal plans. This issue has been resolved. |
VER-95248 | Optimizer | FK-PK joins over projections with derived expressions would place the PK input on the inner side even when it was much larger than the FK input, which hurt performance in some scenarios. The issue has been resolved. |
VER-95549 | Execution Engine | Some users experienced nodes going down when using more than two WITHIN GROUP clauses with LISTAGG. This issue has been resolved. |
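To illustrate VER-95103: combining ARGMAX_AGG with a DISTINCT aggregate in a single query now fails with a clear "unsupported" message instead of an internal error, and the usual rework is to split the aggregates. A hypothetical sketch (the sales table, its columns, and the ARGMAX_AGG argument order are illustrative assumptions, not taken from the release notes):

```sql
-- Previously raised an internal error; now raises an "unsupported"
-- error that includes a rework hint:
SELECT ARGMAX_AGG(price, item) AS priciest_item,
       COUNT(DISTINCT customer_id) AS n_customers
FROM sales;

-- One possible rework: compute the two aggregates in separate
-- subqueries and combine the single-row results.
SELECT a.priciest_item, d.n_customers
FROM (SELECT ARGMAX_AGG(price, item) AS priciest_item FROM sales) a
CROSS JOIN (SELECT COUNT(DISTINCT customer_id) AS n_customers FROM sales) d;
```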
24.2.0-2
Updated 07/10/2024
Issue Key | Component | Description |
---|---|---|
VER-93609 | Installation Program | Previously, the install_vertica script returned an error if it could not determine the device ID, which occurred if any file under /sys/block/<sd_dir>/dev/ was missing. This has been fixed; install_vertica now skips missing dev files, if any. |
VER-93923 | Execution Engine | Whether LIKE ANY / ALL read strings as UTF-8 character sequences or as binary byte arrays depended on whether the collation of the current locale was binary, leading to incorrect results when reading multi-character UTF-8 strings in binary-collated locales. This has been resolved. LIKE ANY / ALL now always reads UTF-8 character sequences, regardless of the current locale's collation. |
VER-93930 | Client Drivers - ODBC | The Windows DSN configuration utility no longer sets "vertica" as the default KerberosServiceName value when editing a DSN. Starting with version 11.1, providing a value causes the ODBC driver to assume the connection uses Kerberos authentication and to tell the server that it prefers that authentication method, assuming the user has a grant to a Kerberos authentication method. The KerberosServiceName value might have been set by earlier versions of Windows ODBC DSNs; clearing the value resolves the issue. This issue applies only to users who have a Kerberos authentication method granted with a lower priority than other authentication methods and who use the DSN configuration utility to set up a DSN on Windows. |
VER-94142 | Client Drivers - JDBC | Previously, the JDBC driver used the com.google.gson package as packaged by the original Maven build. The classes are now shaded under com.vertica to avoid collisions with other versions of gson in use. |
VER-94204 | Depot | The subcluster-level depot pin policy became the cluster-level policy after upgrading to version 23.4 or later. This issue has been resolved. Users already on version 23.4 or later must recreate their subcluster-level depot policies. |
VER-94352 | Depot | Depot fetches used to issue an AWS list request. This has been resolved; they no longer do. |
VER-94400 | Installation: Server RPM/Deb | Running vioperf on some RHEL9 machines could fail with an out-of-memory (OOM) error. This issue has been resolved. |
VER-94569 | Data load / COPY | In rare cases, copying JSON to a table using FJsonParser or KafkaJsonParser could cause the server to go down. This issue has been resolved. |
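Relatedly, for VER-93930, affected users can clear the stale KerberosServiceName value themselves. A hypothetical DSN fragment follows (the DSN name and connection values are placeholders; on Windows these entries live in the registry under the ODBC.INI key rather than in a file, but the keys are the same):

```ini
[VerticaDSN]
Driver=Vertica
ServerName=vertica-host.example.com
Database=VMart
; Leave empty unless this DSN should actually negotiate Kerberos authentication:
KerberosServiceName=
```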
24.2.0-1
Updated 04/25/2024
Issue Key | Component | Description |
---|---|---|
VER-93249 | EON, S3 | Previously, FIPS-enabled databases crashed when Vertica accessed an S3 bucket. This issue has been resolved. |
VER-93444 | Backup/DR | LocalStorageLocator did not implement the construct_new() method. When called, it fell back to the StorageLocation.construct_new() method, which raised an error. This issue has been resolved. LocalStorageLocator.construct_new() is now implemented. |
VER-93526 | ComplexTypes, Kafka Integration | Loading JSON/Avro data with Kafka and Flex parsers into tables with many columns suffered from performance degradation. This issue has been resolved. |
24.2.0-0
Updated 04/25/2024
Issue Key | Component | Description |
---|---|---|
VER-84807 | Data load / COPY | In COPY, some missing error checks allowed certain invalid input to crash the database. This has been resolved. |
VER-85160 | Client Drivers - VSQL | In some cases, setting the VSQL_CLIENT_LABEL environment variable did not properly set the client label for the session. This has been fixed. You can verify the label for your current session by querying the system table V_MONITOR.SESSIONS. |
VER-85379 | Spread | Previously, when a Vertica node went down, the status of standby nodes in the [NODES](https://docs.vertica.com/latest/en/sql-reference/system-tables/v-catalog-schema/nodes/) system table could be shown as DOWN for a short period before switching back to STANDBY. This has been fixed. The status of standby nodes now either remains STANDBY or can change temporarily to UNKNOWN if Spread is disrupted for a long period of time. |
VER-85497 | Data load / COPY | When the Avro parser read a byte array of at most 8 bytes into a numeric-typed target, it accepted only a single-word numeric as the target type. This has been resolved; the Avro parser now supports reading short byte arrays into multi-word numeric targets. |
VER-87864 | Procedural Languages | Fixed memory leaks that could occur with certain stored procedures. |
VER-88209 | Execution Engine | Vertica's execution engine pre-fetches data from disk to reduce wait time during query execution. Memory for the pre-fetch buffers was not reserved with the resource manager, and in some situations a pre-fetch buffer could grow large and bloat the memory footprint of a query until it completed. Queries now account for this pre-fetch memory in their requests to the resource manager, and several internal changes mitigate the long-term memory footprint of larger-than-average pre-fetch buffers. |
VER-88425 | Execution Engine | User-defined aggregates did not work with a single DISTINCT built-in aggregate in the same query when the input was not sorted on the grouping columns plus the DISTINCT aggregate column. The issue has been resolved. |
VER-88529 | Installation Program | When you installed Vertica on RHEL 9 with the install_vertica script, a message about a missing ] character appeared. This issue has been resolved. |
VER-88896 | Procedural Languages | Previously, running certain types of queries inside a stored procedure could cause the database to go down. This issue has been resolved. |
VER-89117 | Data load / COPY | The upgrade of the C++ AWS SDK in 12.0.2 caused Vertica to make repeated calls to the metadata server for IAM authentication, affecting performance when accessing S3. Vertica now resets the timestamp to prevent the excessive pulling. |
VER-89166 | Execution Engine | A new view called statement_outcomes has been added. This view contains one record per session statement describing the outcome: success or failure and, on failure, the reason behind it. For example, if an INSERT query succeeds but the subsequent related constraint check fails, the record for the statement has success=false with the appropriate description in the "error" column. The query_requests view has not changed; there, for example, an INSERT statement with a constraint check still has two records: success for the query and success for the related constraint check. The new view is a consolidated statement view; for example, the execution time it shows for a statement includes the time of any retries. |
VER-89170 | DDL - Table | Vertica has two ways of defining the maximum size of an array type: ARRAY[type, max_elements] and ARRAYtype. Complex array bounds could previously be defined only with the first syntax; the second syntax is now supported as well. |
VER-89469 | Backup/DR | Users can now specify a target namespace in the REPLICATE function to replicate data from a migrated Eon Mode database to other Eon Mode databases. |
VER-89555 | HTTP | Changing the Vertica server certificate triggers an automatic restart of the built-in HTTPS server. When this happened on a busy system, nodes could sometimes go down. The issue has been resolved. |
VER-89804 | DDL - Projection | When scanning a projection sorted by two columns (ORDER BY a,b) while materializing only the second column in the sort order (b), Vertica mistakenly assumed the scan was sorted by that column for the purposes of collecting column statistics. This could lead to incorrect results when predicate analysis was enabled, and has now been resolved. |
VER-89806 | Data Export | Previously, large (chunked) file uploads from Vertica to GCS (such as exports of Parquet to GCS) would fail if the uploaded files included a special character in their path. This issue has been resolved. |
VER-90079 | Installation Program | The install_vertica script did not display information about the HTTPS service settings. This issue has been resolved. |
VER-90081 | Optimizer | CREATE TABLE AS SELECT statements with repeated occurrences of now() and similar functions inserted incorrect results into the target table. The issue has been resolved. |
VER-90084 | Optimizer | UPDATE statements with subqueries in SET clauses would sometimes return an error. This issue has been resolved. |
VER-90186 | Execution Engine | When a hash join on unique keys spilled, the value columns sometimes had alignment issues between how the hash table was written and how it was read by the spill code. If these value columns were string types, this could lead to a crash. This alignment issue has been resolved. |
VER-90402 | Execution Engine | In rare situations, a logic error in the execution engine's "ABuffer" operator could lead to buffer overruns, resulting in undefined behavior. This issue has been resolved. |
VER-90504 | Client Drivers - ADO | Previously, the ADO.NET driver could give the "Invalid type: Guid" error when filtering for or querying UUID columns. This issue has been resolved. |
VER-90525 | Sessions | The ALTER USER statement could not set the idle timeout for a user to the default value, which is defined by the DefaultIdleSessionTimeout configuration parameter; if the empty string was specified, the idle timeout was set to unlimited. This issue has been resolved. You can now set the idle timeout to the DefaultIdleSessionTimeout value by specifying 'default' in the ALTER USER statement. |
VER-90590 | Execution Engine | Since version 11.1SP1, in some cases an optimization in the query plan caused queries running under the COMPUTE_OPTIMIZED crunch-scaling mode to produce wrong results. This issue has been fixed. |
VER-90947 | ResourceManager | If the default resource pool, defined by the DefaultResourcePoolForUsers configuration parameter, was set to a value other than 'general', the user's view incorrectly reported that non-general resource pool as the default pool when the user did not have it set in their profile. This issue has been resolved. The default pool in such cases is now correctly reported as 'general'. |
VER-90974 | Build and Release, Client Drivers - ODBC, Client Drivers - VSQL | The Linux ODBC driver and vsql client binaries have been stripped, significantly reducing their sizes. To get non-stripped binaries, please contact support. |
VER-91232 | Catalog Engine | Previously, syslog notifiers could cause the node to go down when attached to certain DC tables. This issue has been resolved. |
VER-91245 | Execution Engine | Queries with WITH clauses that refer to the temporary relation at least N times, where N is the value of the configuration parameter EnableWITHTempRelReuseLimit, could abruptly abort the entire Vertica process if an exception (such as running out of temp space) occurred while materializing the temporary relation. The exception handling of this routine has been improved to avoid crashing the entire process, returning a normal error to the user when possible, or at least handling a PANIC in the fall-back scenario. |
VER-91271 | Optimizer | Queries using the same views repeatedly would sometimes return errors if those views included WITH clauses. The issue has been resolved. |
VER-91423 | Execution Engine | When converting strings to numeric using the binary scale notation, some very large powers caused internal calculations to overflow, bypass some syntax checks, and crash. This has been resolved: the syntax checks now account for those large powers. |
VER-91426 | UI - Management Console | The Management Console failed to start on Red Hat 8 because the default timeout for starting an application with systemctl is not long enough. To resolve this issue, set the TimeoutStartSec service property for the vertica-consoled service to 300 seconds. |
VER-91430 | Execution Engine | The NULLIF function inferred its output type from only the first argument. This led to type compatibility errors when the first argument was a small numeric type and the second argument was a much larger numeric type. This has been resolved; numeric NULLIF now accounts for the types of both arguments when inferring its output type. |
VER-91432 | Logging | The LogRotate metafunction and timer service now support dbLog files. |
VER-91553 | Optimizer | Queries with identical-looking predicates on different tables, used in different subqueries where the predicates have very different selectivity, could get bad query plans and worse performance due to incorrect estimates on those tables. The issue has been resolved. |
VER-91563 | Kafka Integration | Defining a cluster with the vkconfig command without providing the host parameter produced an unclear error. This issue has been resolved. |
VER-91696 | Optimizer | When a node went down in Eon Mode, the buddy node handling double duty did not adjust the resource calculation. The behavior is now consistent with the Enterprise Mode node-down scenario. |
VER-91797 | Client Drivers - Misc | In SQLTools Vertica driver versions 0.0.2 and later, the sessions table populates the following columns: client_pid, client_type, client_version, client_os, client_os_user_name, client_os_hostname. |
VER-92058 | Scrutinize | The scrutinize utility produces a tar file of the data it collects. Previously, scrutinize could fail to create this tar file if it encountered a broken symbolic link. This issue has been resolved, and the size of the tar file is now logged to scrutinize_collection.log. |
VER-92223 | Data load / COPY | Previously, Vertica had poor performance when loading wide tables using a RecordParser that performs case-insensitive comparisons. This is now resolved. |
VER-92298 | Execution Engine | Queries that filtered data into a JOIN statement could be processed incorrectly or return an error. This issue has been resolved. |
VER-92538 | UI - Management Console | After upgrading the Management Console from version 12.0.4 to 23.3.0, logging in to the Management Console failed and an error message was displayed. This issue has been resolved. |
VER-92542 | UI - Management Console | When you upgrade the Management Console from version 12.0.4 to version 23.3.0, all users are migrated to Keycloak, and the LDAP password is saved to mconsole.log. |
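Two of the fixes above change user-visible SQL behavior and are easy to exercise. Sketches under assumptions: the user name analyst is hypothetical, and the IDLESESSIONTIMEOUT parameter name is assumed from Vertica's ALTER USER syntax rather than stated in these notes.

```sql
-- VER-90525: 'default' now maps the idle timeout back to the
-- DefaultIdleSessionTimeout configuration parameter.
ALTER USER analyst IDLESESSIONTIMEOUT 'default';

-- VER-85160: after setting the VSQL_CLIENT_LABEL environment variable,
-- confirm the label was applied by inspecting the sessions system table.
SELECT session_id, client_label FROM v_monitor.sessions;
```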
Last modified: 2024-11-04 09:37:05