
A customer reported that an impdp import on Oracle 12.1 was extremely slow: the import was started at 18:00 at the end of the workday and was still running at 08:00 the next morning. As a temporary workaround we forced the in-progress streams pool shrink to complete, then changed the streams_pool_size parameter and restarted the database. The restart aborted the import with an error, but after the database was reopened the job was resumed from the interruption point and ran quickly.
Import command:
impdp system/password dumpfile=expdat.dmp directory=dmp cluster=no logfile=impdp.log

First, check the database alert log for error messages such as a full tablespace or insufficient disk space. Also check the status of the Data Pump job; the job name can be found in the import log above.
col owner_name for a20
col job_name for a20
col state for a20
set linesize 1000
col operation for a20
col job_mode for a20
select * from dba_datapump_jobs where job_name = 'SYS_IMPORT_FULL_01';
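The location of the alert log is not given above; on 12c it can be looked up in v$diag_info (a supplementary check, not part of the original steps):

```sql
-- Directory containing the text alert log (alert_<SID>.log) and the XML alert log
select name, value from v$diag_info where name in ('Diag Trace', 'Diag Alert');
```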
Next, check the memory parameters to see whether automatic memory management is enabled, and check whether the streams pool is currently in a shrink phase.
show parameter sga_target
show parameter memory_target
show parameter shared_pool_size
select shrink_phase_knlasg from X$KNLASG;
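To see how large the streams pool currently is relative to the other auto-tuned components, you can also query v$sga_dynamic_components (a supplementary check, not part of the original walkthrough):

```sql
-- Current, minimum and maximum sizes of selected auto-tuned SGA components, in MB
select component,
       current_size/1024/1024 as current_mb,
       min_size/1024/1024     as min_mb,
       max_size/1024/1024     as max_mb
from   v$sga_dynamic_components
where  component in ('streams pool', 'DEFAULT buffer cache', 'shared pool');
```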
Then examine the database wait events and session information during the import window. Method 1: query the dba_hist_active_sess_history view. Method 2: generate an AWR report for an hour within the import window. Method 3: enable 10046 event tracing. Whichever method you choose, you will see a large number of "Streams AQ: enqueue blocked on low memory" wait events.
Method 1: query the dba_hist_active_sess_history view
(1) Create a working table
SQL> conn / as sysdba
SQL> create table system.m_ash as select * from dba_hist_active_sess_history
where SAMPLE_TIME between TO_TIMESTAMP ('2024-02-25 18:00:00', 'YYYY-MM-DD HH24:MI:SS')
and TO_TIMESTAMP ('2024-02-26 08:00:00', 'YYYY-MM-DD HH24:MI:SS');
(2) Run quick queries against the working table
select event, count(*) from system.m_ash group by event order by 2;
EVENT COUNT(*)
---------------------------------------------------------------- ----------
db file scattered read 17158
KSV master wait 46015
Streams AQ: enqueue blocked on low memory 84427
(3) Identify the sessions behind the "Streams AQ: enqueue blocked on low memory" wait event
select session_id, count(*) from system.m_ash where event='Streams AQ: enqueue blocked on low memory'
group by session_id order by 2;
SESSION_ID COUNT(*)
---------- ----------
384 4147
4088 4165
99 4242
3144 5172
3519 5233
576 7726
4374 8690
3 36834
(4) Checking these sessions shows that they all belong to Data Pump Worker processes.
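The check in step (4) is not spelled out above; one way to do it is via the program column that dba_hist_active_sess_history (and hence our working table) carries (a sketch, not part of the original note):

```sql
-- Which programs the blocked sessions belong to;
-- Data Pump workers typically show as oracle@<host> (DWnn)
select session_id, program, count(*)
from   system.m_ash
where  event = 'Streams AQ: enqueue blocked on low memory'
group  by session_id, program
order  by 3;
```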
Method 2: generate an AWR report covering an hour of the import window
@?/rdbms/admin/awrrpt.sql
Method 3: enable 10046 SQL tracing.
On:  ALTER SYSTEM SET events '10046 trace name context forever, level 12';
-- run import
Off: ALTER SYSTEM SET events '10046 trace name context off';
The Data Pump master and worker trace files were then analyzed with:
$ tkprof <DW_TRACE_FILE>.trc <DW_OUTPUT_FILE>.out waits=y sort=exeela
Finally, searching MOS for the "Streams AQ: enqueue blocked on low memory" wait event turns up several notes. They describe how, under AMM/ASMM, pressure on the buffer cache causes the streams pool to shrink, and the resulting enqueue blocking makes Data Pump jobs take much longer. Each occurrence of this wait event adds a one-minute delay; Oracle recommends increasing the streams_pool_size parameter.
文档 ID 1596645.1
Buffered messages memory is not freed quickly, which can result in enqueue blocked on low memory.
If the size of the streams_pool is being modified, then this can result in excessive waits
for 'Streams AQ: enqueue blocked on low memory'.
Every occurrence of this event causes a 1 minute delay.
文档 ID 2386566.1
Under load on the buffer cache, streams pool memory is being moved to the buffer cache.
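To confirm the shrink/grow churn these notes describe, the SGA resize history can be inspected (a supplementary query, not part of the original walkthrough):

```sql
-- Recent automatic resize operations against the streams pool
select component, oper_type, oper_mode,
       initial_size/1024/1024 as initial_mb,
       final_size/1024/1024   as final_mb,
       status, end_time
from   v$sga_resize_ops
where  component = 'streams pool'
order  by end_time;
```

Many alternating SHRINK/GROW rows over the import window are consistent with the behavior described in the MOS notes above.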

1. There are two ways to speed the import back up.
Method 1: manually force the streams pool shrink to complete, as a temporary workaround;
connect / as sysdba
select shrink_phase_knlasg from X$KNLASG;
A return value of 1 means the streams pool is in its shrink phase; once the shrink completes, the query should return 0. Force the shrink to complete with:
alter system set events 'immediate trace name mman_create_def_request level 6';
Method 2: set the static streams_pool_size parameter to 300M, disable automatic tuning of the streams pool, and restart the database;
CONNECT / as sysdba
ALTER SYSTEM SET streams_pool_size=300m SCOPE=spfile;
ALTER SYSTEM SET "_disable_streams_pool_auto_tuning"=TRUE SCOPE=spfile;
SHUTDOWN IMMEDIATE
STARTUP
2. While the database restarts, the running import aborts with an error.
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "PETER"."OBJ2" 163.5 MB 1236887 rows
. . imported "PETER"."OBJ1" 153.8 MB 1164128 rows <<< database restarted here
UDI-01089: operation generated ORACLE error 1089
ORA-01089: immediate shutdown or close in progress - no operations are permitted
ORA-06512: at "SYS.DBMS_AQ", line 1127
ORA-06512: at "SYS.KUPC$QUE_INT", line 556
ORA-06512: at "SYS.KUPC$QUE_INT", line 1703
ORA-06512: at line 1
Process ID: 9138
Session ID: 73 Serial number: 27898
3. After the database is reopened, the import can be resumed from the interruption point.
When an impdp job has been interrupted, reattach to it with impdp attach=<job_name> and then issue continue_client:
impdp system/oracle attach=SYSTEM.SYS_IMPORT_FULL_01
The import then resumes:
Import> continue_client <<<<<<<<<<<<continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Wed Jul 4
Restarting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=expdat.dmp directory=dmp
. . imported "PETER"."TEST1" 76.91 MB 581776 rows
. . imported "PETER"."TEST" 9.616 MB 72641 rows
. . imported "PETER"."M_ASH" 35.54 MB 84313 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed
This time the import completed quickly.

To wrap up, here is a simulation exercise you can run yourself: how to handle a Data Pump export or import that is interrupted midway.
1. Create the test environment: user, test table, and test data.
connect / as sysdba
create or replace directory tmp as '/tmp';
create tablespace test_tbs datafile '/tmp/test_tbs_01.dbf' size 100m autoextend on;
create user test_usr identified by test_usr default tablespace test_tbs temporary tablespace temp;
grant connect, resource to test_usr;
alter user test_usr quota unlimited on test_tbs;
connect test_usr/test_usr
-- create one partitioned table with 5 partitions
create table parttab001
(
col001 number,
col002 varchar2(1000)
) partition by range (col001)
(
partition p001 values less than (1000001),
partition p002 values less than (2000001),
partition p003 values less than (3000001),
partition p004 values less than (4000001),
partition p005 values less than (5000001)
);
-- populate table, 1000000 rows per partition
declare
stmt varchar2(2000);
begin
for j in 1..3000000 loop
stmt := 'insert into parttab001 values ('||to_char (j)||', lpad (to_char ('||to_char (j)||'), 1000, '||'''0'''||'))';
execute immediate stmt;
-- commit after every 100000 rows
if mod (j, 100000) = 0 then
commit;
end if;
end loop;
commit;
end;
/
2. Simulate interrupting an export and resuming it.
#> expdp system/<password> directory=tmp dumpfile=parttab001_%u.dmp logfile=expdp_parttab001.log
tables=test_usr.parttab001 job_name=test_usr parallel=2
connect / as sysdba
col owner_name for a20
col job_name for a20
col state for a20
set linesize 1000
col operation for a20
col job_mode for a20
select * from dba_datapump_jobs where job_name = 'TEST_USR';
The job state shows: EXECUTING
Press Ctrl-C to drop into the interactive prompt, then stop the job: Export> stop_job
The job state shows: STOP PENDING
Or stop it immediately: Export> STOP_JOB=IMMEDIATE
The job state shows: NOT RUNNING
Resume the interrupted export: #> expdp system/<password> attach=test_usr
Export> continue_client
3. Simulate interrupting an import and resuming it.
impdp system/<password> job_name=test_usr directory=tmp dumpfile=parttab001_%u.dmp tables=test_usr.parttab001 logfile=impdp_parttab001.log parallel=2
Press Ctrl-C to drop into the interactive prompt, then: Import> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): y
Resume the interrupted import: #> impdp system/<password> attach=test_usr
Import> continue_client
SQL> connect test_usr/test_usr
SQL> select count (*) from parttab001 partition (p001);
COUNT(*)
----------
1000000
SQL> select count (*) from parttab001 partition (p002);
COUNT(*)
----------
1000000
SQL> select count (*) from parttab001 partition (p003);
COUNT(*)
----------
1000000
4. Add the LOGTIME parameter to the Data Pump command to see the exact time of each step of the operation.
With LOGTIME=ALL, both the console output and the log file carry timestamps.
$ expdp test/test DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp logfile=expdat.log SCHEMAS=test LOGTIME=ALL

References:
EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory (Doc ID 1596645.1)
Datapump Expdp Or Impdp Slowdown Due To Frequent Waits On "Streams AQ: Enqueue Blocked On Low Memory" (Doc ID 2386566.1)
How To Break And Restart A DataPump Export Or Import Job (Doc ID 1400974.1)
Expdp Is Very Slow After Upgrade From 11.2.0.3 To 11.2.0.4 With Wait On AQ: enqueue blocked on low memory (Doc ID 1990633.1)
Bug 17365043 - Session hangs on "Streams AQ: enqueue blocked on low memory" (Doc ID 17365043.8)
Bug 21286665 - "Streams AQ: enqueue blocked on low memory" waits with fix 18828868 - superseded (Doc ID 21286665.8)
Bug 27634991 - Datapump Frequently Waits On 'Streams AQ: enqueue blocked on low memory' (Doc ID 27634991.8)
Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) (Doc ID 453895.1)
DataPump Import (IMPDP) Performance Known Problems (Doc ID 1948188.1)
That concludes this performance-tuning case on a very slow impdp import and on resuming an interrupted import. I hope it helps.
Follow the WeChat account "数据库运维之道" to learn database technology together!
You are also welcome to add me on WeChat to discuss database solutions; please include your name and company in the request. Thanks!
