Problem Description
Hi, which of the two approaches below should perform better? The UPDATE and the INSERT/DELETE take almost the same time ... Is this expected, or am I doing something wrong? I know about the limitations of Oracle in-database archiving, as well as the plan change for queries that previously only needed to scan the index, but what surprises me is that the UPDATE takes about as long as the DELETE plus INSERT.

~Regards

-- Oracle in-database archiving (row archival)

SQL> UPDATE table1
  2     SET ora_archive_state = dbms_ilm.archivestatename(1)
  3   WHERE column_id = 50000
  4     AND end_time BETWEEN TO_DATE ('01-JAN-2015 01.00.00 AM'
  5                                  , 'DD-MON-YYYY HH:MI:SS AM')
  6                      AND TO_DATE ('01-FEB-2016 01.00.00 AM'
  7                                  , 'DD-MON-YYYY HH:MI:SS AM');

35133 rows updated.

Elapsed: 00:01:26.90

Execution Plan
----------------------------------------------------------
Plan hash value: 479008084

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                             |           | 76103 |   151M|     1   (0)| 00:00:01 |       |       |
|   1 |  UPDATE                                      | table1    |       |       |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                   |           | 76103 |   151M|     1   (0)| 00:00:01 |     2 |   398 |
|   3 |    PARTITION HASH SINGLE                     |           | 76103 |   151M|     1   (0)| 00:00:01 |     8 |     8 |
|*  4 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| table1    | 76103 |   151M|     1   (0)| 00:00:01 |       |       |
|*  5 |      INDEX RANGE SCAN                        | PK_table1 |     1 |       |     1   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("table1"."ORA_ARCHIVE_STATE"='0')
   5 - access("column_id"=50000 AND "END_TIME">=TIMESTAMP' 2015-01-01 01:00:00'
              AND "END_TIME"<=TIMESTAMP' 2016-02-01 01:00:00')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

Statistics
----------------------------------------------------------
      20897  recursive calls
      72338  db block gets
      98732  consistent gets
      39275  physical reads
   17051344  redo size
        554  bytes sent via SQL*Net to client
        883  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
        619  sorts (memory)
          0  sorts (disk)
      35133  rows processed

SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.32

-- Traditional insert and delete

SQL> INSERT INTO TABLE_ARCHIVE
  2  SELECT * FROM TABLE WHERE COLUMN = 65001
  3     AND end_time BETWEEN TO_DATE ('01-JAN-2015 01.00.00 AM'
  4                                  , 'DD-MON-YYYY HH:MI:SS AM')
  5                      AND TO_DATE ('01-FEB-2016 01.00.00 AM'
  6                                  , 'DD-MON-YYYY HH:MI:SS AM');

35133 rows created.
Elapsed: 00:01:18.71

Execution Plan
----------------------------------------------------------
Plan hash value: 1128396010

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                             |               | 60882 |   128M|     1   (0)| 00:00:01 |       |       |
|   1 |  LOAD TABLE CONVENTIONAL                     | TABLE_ARCHIVE |       |       |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                   |               | 60882 |   128M|     1   (0)| 00:00:01 |     2 |   398 |
|   3 |    PARTITION HASH SINGLE                     |               | 60882 |   128M|     1   (0)| 00:00:01 |     2 |     2 |
|*  4 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| TABLE         | 60882 |   128M|     1   (0)| 00:00:01 |       |       |
|*  5 |      INDEX RANGE SCAN                        | PK_TABLE      |     1 |       |     1   (0)| 00:00:01 |       |       |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("COLUMN"=65001 AND "end_time">=TIMESTAMP' 2015-01-01 01:00:00'
              AND "end_time"<=TIMESTAMP' 2016-02-01 01:00:00')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

Statistics
----------------------------------------------------------
      19651  recursive calls
       9262  db block gets
      90084  consistent gets
      37886  physical reads
    8476348  redo size
        554  bytes sent via SQL*Net to client
        862  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
        542  sorts (memory)
          0  sorts (disk)
      35133  rows processed

SQL> DELETE TABLE WHERE COLUMN = 65001
  2     AND end_time BETWEEN TO_DATE ('01-JAN-2015 01.00.00 AM'
  3                                  , 'DD-MON-YYYY HH:MI:SS AM')
  4                      AND TO_DATE ('01-FEB-2016 01.00.00 AM'
  5                                  , 'DD-MON-YYYY HH:MI:SS AM');

35133 rows deleted.
Elapsed: 00:00:02.91

Execution Plan
----------------------------------------------------------
Plan hash value: 3384627874

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT                             |          | 60882 |   121M|     1   (0)| 00:00:01 |       |       |
|   1 |  DELETE                                      | TABLE    |       |       |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                   |          | 60882 |   121M|     1   (0)| 00:00:01 |     2 |   398 |
|   3 |    PARTITION HASH SINGLE                     |          | 60882 |   121M|     1   (0)| 00:00:01 |     2 |     2 |
|*  4 |     TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| TABLE    | 60882 |   121M|     1   (0)| 00:00:01 |       |       |
|*  5 |      INDEX RANGE SCAN                        | PK_TABLE |     1 |       |     1   (0)| 00:00:01 |       |       |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("COLUMN"=65001 AND "end_time">=TIMESTAMP' 2015-01-01 01:00:00'
              AND "end_time"<=TIMESTAMP' 2016-02-01 01:00:00')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

Statistics
----------------------------------------------------------
         41  recursive calls
      38257  db block gets
      36548  consistent gets
        135  physical reads
   13487984  redo size
        560  bytes sent via SQL*Net to client
        822  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          4  sorts (memory)
          0  sorts (disk)
      35133  rows processed

SQL> COMMIT;

Please find the table details below. We have partitions covering almost two years, holding almost 5 billion rows in total.

CREATE TABLE TABLENAME
(
  COLUMN    NUMBER            NOT NULL,
  END_TIME  TIMESTAMP(0)      NOT NULL,
  COL2      NUMBER(21,6)      NOT NULL,
  COL3      NUMBER(21,6),
  COL4      VARCHAR2(3 BYTE),
  COL5      VARCHAR2(3 BYTE),
  COL6      NUMBER,
  COL7      CHAR(1 BYTE),
  COL8      CHAR(1 BYTE),
  COL9      TIMESTAMP(0),
  COL10     NUMBER            NOT NULL,
  COL11     NUMBER,
  COL12     NUMBER            NOT NULL,
  COL13     NUMBER,
  COL14     NUMBER,
  COL15     NUMBER,
  COL16     NUMBER            NOT NULL,
  COL17     NUMBER            NOT NULL,
  COL18     TIMESTAMP(0)      NOT NULL,
  COL19     NUMBER            NOT NULL
)
NOCOMPRESS
PARTITION BY RANGE (END_TIME)
  INTERVAL ( NUMTODSINTERVAL(1,'DAY') )
SUBPARTITION BY HASH (COLUMN)
  SUBPARTITION TEMPLATE
  ( SUBPARTITION S1,
    SUBPARTITION S2,
    SUBPARTITION S3,
    SUBPARTITION S4,
    SUBPARTITION S5,
    SUBPARTITION S6,
    SUBPARTITION S7,
    SUBPARTITION S8
  )
(
  PARTITION P_01012015 VALUES LESS THAN (TIMESTAMP' 2015-01-01 00:00:00')
);

CREATE UNIQUE INDEX P_TABLENAME ON TABLENAME (COLUMN, END_TIME)
LOCAL
( PARTITION P_01012015
  ( SUBPARTITION P1_S1,
    SUBPARTITION P1_S2,
    SUBPARTITION P1_S3,
    SUBPARTITION P1_S4,
    SUBPARTITION P1_S5,
    SUBPARTITION P1_S6,
    SUBPARTITION P1_S7,
    SUBPARTITION P1_S8
  )
)
NOPARALLEL COMPRESS 1;

ALTER TABLE TABLENAME ADD (
  CONSTRAINT P_TABLENAME
  PRIMARY KEY (COLUMN, END_TIME)
  USING INDEX LOCAL
  ENABLE VALIDATE
);
Expert Answer
Was this table created with row archival, or was row archival added afterwards?
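For context, here is a minimal sketch of the two ways a table can end up with the hidden ORA_ARCHIVE_STATE column; the table names are illustrative, not from the original post:

-- Row archival declared at creation time: ORA_ARCHIVE_STATE exists
-- before any rows are inserted.
create table t_created_with_archival (
  id          number,
  description varchar2(100)
) ROW ARCHIVAL;

-- Row archival added afterwards: the hidden column is bolted onto a
-- table that may already hold data.
create table t_archival_added_later (
  id          number,
  description varchar2(100)
);
alter table t_archival_added_later ROW ARCHIVAL;

The distinction matters for the reason demonstrated below.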
I'm wondering whether you are suffering from row migration. For example:
SQL> create table tab1 (
  2    id          number,
  3    description varchar2(100),
  4    col         varchar2(1)
  5  )
  6  pctfree 0 tablespace no_assm;

Table created.

SQL> insert into tab1
  2  select rownum, rpad('x',100), 'A'
  3  from dual
  4  connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL> set autotrace on stat
SQL> update tab1
  2  set col = 'B';

10000 rows updated.

Statistics
----------------------------------------------------------
         17  recursive calls
      11128  db block gets
        436  consistent gets
          0  physical reads
    3799600  redo size
        867  bytes sent via SQL*Net to client
        945  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      10000  rows processed
So an in-place update costs me roughly 11128 db block gets and 436 consistent gets. Now I'll add row archival:
SQL> alter table tab1 ROW ARCHIVAL;

Table altered.

SQL> set autotrace on stat
SQL> update tab1
  2  set ora_archive_state = '1';

10000 rows updated.

Statistics
----------------------------------------------------------
         24  recursive calls
      49708  db block gets
       1282  consistent gets
          0  physical reads
    8880524  redo size
        867  bytes sent via SQL*Net to client
        960  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          4  sorts (memory)
          0  sorts (disk)
      10000  rows processed
See the huge jump in work? Now, that is not due to row archival itself, but to the fact that I added a column (which consumed no space at the time) and then updated the rows to be larger. So we relocated a bunch of them:
SQL> analyze table tab1 compute statistics;

Table analyzed.

SQL> select chain_cnt from user_tables
  2  where table_name = 'TAB1';

 CHAIN_CNT
----------
       413
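If migration does turn out to be the culprit in your case, a minimal cleanup sketch against the demo table above would look like this (on a large partitioned table you would move the affected (sub)partitions individually instead of the whole table):

-- Rewrite the table so migrated rows are stored contiguously again.
alter table tab1 move;

-- A move leaves any indexes on the table UNUSABLE; they would need
-- "alter index ... rebuild" (tab1 in this demo has none).

-- Re-check: chain_cnt should drop back to (or near) zero.
analyze table tab1 compute statistics;
select chain_cnt from user_tables
where table_name = 'TAB1';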
Perhaps that, rather than the table having been *created* with archiving, is what is making your update cost more than expected. Compare the same test against a table created with ROW ARCHIVAL from the start:
SQL> create table tab1 (
  2    id          number,
  3    description varchar2(100),
  4    col         varchar2(2)
  5  )
  6  pctfree 0 tablespace no_assm ROW ARCHIVAL;

Table created.

SQL> insert into tab1
  2  select rownum, rpad('x',100), 'A'
  3  from dual
  4  connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL> set autotrace on stat
SQL> update tab1
  2  set ora_archive_state = '1';

10000 rows updated.

Statistics
----------------------------------------------------------
         15  recursive calls
        265  db block gets
        223  consistent gets
          0  physical reads
     943652  redo size
        867  bytes sent via SQL*Net to client
        960  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      10000  rows processed
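One follow-up worth noting: once ora_archive_state is set to '1', those rows disappear from ordinary queries. Visibility of archived rows is a session-level switch; a sketch using the standard in-database archiving syntax, against the demo table above:

-- By default only active rows (ora_archive_state = '0') are returned.
select count(*) from tab1;   -- 0, since every row was just archived

-- Expose archived rows as well.
alter session set ROW ARCHIVAL VISIBILITY = ALL;
select count(*) from tab1;   -- 10000

-- Restore the default.
alter session set ROW ARCHIVAL VISIBILITY = ACTIVE;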