How to manage lost Datapump jobs
I’ve got a customer who runs export scripts for his most important tables using Oracle Data Pump export. This is a pretty good solution for backing up his important tables to dump files, either for fast recovery of specific tables (it is in no way a proper database backup) or for transferring them to his development environments. The problem is that the tables we export are very large, and the export can take a very long time to complete (at least a few hours).
Sometimes we need to peek into the export process and figure out what it is doing: is it stuck, which table is it currently exporting, and how much time is left until it completes. Since our scripts run in the background (from a timed cron job), there is no output we can watch except the log files, and the output that goes into the log file is usually not enough to answer all the questions the customer is asking.
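One thing that does help while the job is running: Data Pump registers its overall progress in V$SESSION_LONGOPS, so we can get a rough estimate of how much work is left. A query along these lines (a sketch; the OPNAME filter assumes a default-named export job) can be run from any SQL*Plus session:

-- Rough progress estimate for a running Data Pump job (sketch).
-- The OPNAME filter assumes the default job name prefix SYS_EXPORT.
SELECT opname,
       sofar,
       totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  opname LIKE 'SYS_EXPORT%'
AND    totalwork > 0
AND    sofar < totalwork;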
In another scenario, we run an export (a full export, for example) that takes a very long time, and then our connection to the server is closed for some reason. It can be a server policy, or we closed it by mistake, but the most common reason for me is simply losing the network connection between my client and the server.
In this post we will look at the solution the Data Pump architecture gives us, and see how the fact that we are using Data Pump jobs actually makes these issues very simple to solve.
Finding the Job name
When we run the expdp command (or impdp for that matter, but let’s talk about expdp for now), we are actually using a client wrapper that tells the database to create a Data Pump job: a server-side job that runs PL/SQL inside the database and performs the actual export. When our export starts, it prints the name of the job it created at the beginning of the output. When we need to get to the management interface, we can press Ctrl+C at the expdp client. This sends the client into interactive mode and gives us the management interface prompt.
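To make the "wrapper" idea concrete, here is a minimal sketch of roughly what the expdp client asks the server to do, written directly against the DBMS_DATAPUMP PL/SQL API (the job name MY_EXPORT_JOB is made up for the example, and it assumes the DATA_PUMP_DIR directory object exists):

-- Minimal sketch of a server-side export job created via DBMS_DATAPUMP
DECLARE
  l_handle NUMBER;
BEGIN
  -- Create a server-side export job; the handle identifies it for this session
  l_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                                 job_mode  => 'FULL',
                                 job_name  => 'MY_EXPORT_JOB');

  -- Register the dump file and log file in the directory object
  DBMS_DATAPUMP.ADD_FILE(handle    => l_handle,
                         filename  => 'zohar.dmp',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle    => l_handle,
                         filename  => 'zohar.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Start the job; it keeps running on the server even if we disconnect
  DBMS_DATAPUMP.START_JOB(l_handle);

  -- Detach our session from the job without stopping it
  DBMS_DATAPUMP.DETACH(l_handle);
END;
/

Because the job lives on the server, the expdp client is just a window onto it, which is exactly why we can detach and reattach freely.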
If we’ve lost our client connection, we can re-connect (or re-attach) to the management interface for that job.
First of all, let’s find the job name (in this case, the job is called SYS_EXPORT_FULL_01):
[oracle@lnx-oracle66-db11g ~]$ expdp full=y directory=DATA_PUMP_DIR dumpfile=zohar.dmp logfile=zohar.log

Export: Release 11.2.0.4.0 - Production on Thu Feb 19 11:12:37 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: zohar
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ZOHAR"."SYS_EXPORT_FULL_01":  zohar/******** full=y directory=DATA_PUMP_DIR dumpfile=zohar.dmp logfile=zohar.log
We can find the same output in our log file (which in this case is “zohar.log”).
If our window no longer shows the output, we can use this query from the SQL*Plus prompt to find the job name:
SQL> SELECT owner_name, job_name, operation, job_mode, state
  2  FROM dba_datapump_jobs;

OWNER_NAME   JOB_NAME             OPERATION   JOB_MODE   STATE
------------ -------------------- ----------- ---------- ------------
ZOHAR        SYS_EXPORT_FULL_01   EXPORT      FULL       EXECUTING
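If we also want to see which database sessions are attached to the job, we can join DBA_DATAPUMP_SESSIONS to V$SESSION (a sketch using the commonly available columns):

-- Sessions currently attached to Data Pump jobs (sketch)
SELECT s.sid, s.serial#, s.username, d.job_name
FROM   dba_datapump_sessions d
JOIN   v$session s ON s.saddr = d.saddr;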
Reattach to a running job
Now that we have the name of the job, we can attach our client to it. To do this we run the expdp command with the ATTACH parameter:
[oracle@lnx-oracle66-db11g ~]$ expdp zohar attach=SYS_EXPORT_FULL_01
This command connects us back to the management interface and automatically shows the job status:
[oracle@lnx-oracle66-db11g ~]$ expdp zohar attach=SYS_EXPORT_FULL_01

Export: Release 11.2.0.4.0 - Production on Thu Feb 19 11:19:20 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_01
  Owner: ZOHAR
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 0F6E661235961287E0530100007FB4DA
  Start Time: Thursday, 19 February, 2015 11:17:03
  Mode: FULL
  Instance: orcldg
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        zohar/******** full=y directory=DATA_PUMP_DIR dumpfile=zohar.dmp logfile=zohar.log
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/zohar.dmp
    bytes written: 4,096

Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Schema: SYSMAN
  Object Name: EM_TARGET
  Object Type: DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
  Completed Objects: 76
  Worker Parallelism: 1

Export>
At this point we can issue several commands: change the job to use parallel workers, add dump files, check the current status, stop the export process, or even return the client to logging mode.
Export> help
------------------------------------------------------------------------------

The following commands are valid while in interactive mode.
Note: abbreviations are allowed.

ADD_FILE              Add dumpfile to dumpfile set.
CONTINUE_CLIENT       Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT           Quit client session and leave job running.
FILESIZE              Default filesize (bytes) for subsequent ADD_FILE commands.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
REUSE_DUMPFILES       Overwrite destination dump file if it exists [N].
START_JOB             Start or resume current job.
                      Valid keyword values are: SKIP_CURRENT.
STATUS                Frequency (secs) job status is to be monitored where
                      the default [0] will show new status when available.
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      Valid keyword values are: IMMEDIATE.
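For example, to add a second dump file and raise the number of active workers, the interactive session would look roughly like this (the file name and parallel degree are only illustrative):

Export> add_file=zohar_02.dmp
Export> parallel=2
Export> status

When raising PARALLEL it usually makes sense to have at least as many dump files as workers, either by adding files like this or by using the %U substitution variable in the original DUMPFILE parameter.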
Stopping the export process
If we want to stop the export process, we need to distinguish between two types of stop commands. The first is a regular stop (STOP_JOB, or STOP_JOB=IMMEDIATE to shut down without waiting for the current task), after which we can resume the job later. The other is KILL_JOB, which detaches all sessions and deletes the job, so it cannot be resumed. Once we run one of these commands, we will be asked to confirm that we really want to stop the export, and the job will stop, either immediately or after a few minutes. For example:
Export> kill
Are you sure you wish to stop this job ([yes]/no): y
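An orderly stop that we can resume later looks almost the same (the confirmation prompt may vary slightly between versions):

Export> stop_job
Are you sure you wish to stop this job ([yes]/no): yes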
Resuming the export process
If we performed a regular stop (in fact, we only suspended the job), we can reconnect to it and resume it. To do that, we need to find the name of the job we stopped (note that the following query now shows it as NOT RUNNING):
SQL> r
  1* SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs

OWNER_NAME   JOB_NAME             OPERATION   JOB_MODE   STATE
------------ -------------------- ----------- ---------- ------------
ZOHAR        SYS_EXPORT_FULL_01   EXPORT      FULL       NOT RUNNING
And attach to the job again:
[oracle@lnx-oracle66-db11g ~]$ expdp attach=SYS_EXPORT_FULL_01

Export: Release 11.2.0.4.0 - Production on Thu Feb 19 11:33:06 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: zohar
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_01
  Owner: ZOHAR
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 0F6E8EA07E7A1310E0530100007F8CBB
  Start Time: Thursday, 19 February, 2015 11:32:17
  Mode: FULL
  Instance: orcldg
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        zohar/******** full=y directory=DATA_PUMP_DIR dumpfile=zohar.dmp logfile=zohar.log
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/zohar.dmp
    bytes written: 4,096

Worker 1 Status:
  Process Name: DW00
  State: UNDEFINED

Export>
Once we start the job again, we can check that it is in EXECUTING state:
Export> start

Export> status

Job: SYS_EXPORT_FULL_01
  Operation: EXPORT
  Mode: FULL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/zohar.dmp
    bytes written: 94,208

Worker 1 Status:
  Process Name: DW00
  State: EXECUTING

Export>
And then we can return the client to logging mode as well; it will be just as if we had run the expdp command from that session:
Export> cont
Restarting "ZOHAR"."SYS_EXPORT_FULL_01":  zohar/******** full=y directory=DATA_PUMP_DIR dumpfile=zohar.dmp logfile=zohar.log
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
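By the way, the same resume can be done without the expdp client at all, straight from SQL*Plus, using the DBMS_DATAPUMP API (a minimal sketch; it assumes the stopped job and its master table still exist):

-- Resume a stopped Data Pump job from PL/SQL (sketch)
DECLARE
  l_handle NUMBER;
BEGIN
  -- Attach to the existing, stopped job by name and owner
  l_handle := DBMS_DATAPUMP.ATTACH(job_name  => 'SYS_EXPORT_FULL_01',
                                   job_owner => 'ZOHAR');

  -- Resume execution; the job keeps writing to its original dump file
  DBMS_DATAPUMP.START_JOB(l_handle);

  -- Detach and leave the job running in the background
  DBMS_DATAPUMP.DETACH(l_handle);
END;
/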
Using impdp
Obviously, all the things I’ve shown here can also be done with impdp:
Import> help
------------------------------------------------------------------------------

The following commands are valid while in interactive mode.
Note: abbreviations are allowed.

CONTINUE_CLIENT       Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
START_JOB             Start or resume current job.
                      Valid keywords are: SKIP_CURRENT.
STATUS                Frequency (secs) job status is to be monitored where
                      the default [0] will show new status when available.
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      Valid keywords are: IMMEDIATE.
And the import status:
Import> stat

Job: SYS_IMPORT_SCHEMA_01
  Operation: IMPORT
  Mode: SCHEMA
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u01/app/oracle/product/11.2.0/dbhome_11204/rdbms/log/zohar.dmp

Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Schema: ZOHAR2
  Object Name: MGMT_METRIC_DETAILS_ARRAY
  Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
  Completed Objects: 312
  Worker Parallelism: 1
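For example, re-attaching to the import job shown above works exactly like the export case (SYS_IMPORT_SCHEMA_01 is the job name from the status output):

[oracle@lnx-oracle66-db11g ~]$ impdp zohar attach=SYS_IMPORT_SCHEMA_01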