Hi DBA mates, hope you are all doing well!
What Is Data Pump Export?
Data Pump Export is a utility for unloading data and metadata
into a set of operating system files called a dump file set. The
dump file set can be imported only by the Data Pump Import utility. The dump
file set can be imported on the same system or it can be moved to another
system and loaded there.
What Is Data Pump Import?
Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
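Before running any of the examples below, the directory object they reference (dmpdir) must exist and the exporting user must have access to it. A minimal setup sketch, assuming the same path shown in this post's logs (adjust the path and grantee for your system):

```shell
# Write a setup script for the dmpdir directory object used in the
# examples below. The path matches this post's logs; change it as needed.
cat > create_dmpdir.sql <<'EOF'
-- Run as a DBA user (e.g. via: sqlplus / as sysdba @create_dmpdir.sql)
CREATE OR REPLACE DIRECTORY dmpdir AS '/u01/oracle/admin/ora-data/dpdump';
GRANT READ, WRITE ON DIRECTORY dmpdir TO scott;
EOF
# Then run it as SYSDBA (requires an Oracle client on this host):
# sqlplus / as sysdba @create_dmpdir.sql
```

Note that the operating-system directory itself must already exist; Oracle only records the mapping.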
1. Today we will discuss schema-level export and import with an example:
$ expdp dumpfile=schema_scott.dmp logfile=schema_scott.log directory=dmpdir schemas=scott
Export: Release 10.2.0.1.0 - 64bit Production on Wednesday, 21 September, 2011 11:23:28
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: scott
Password:scott_password
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** dumpfile=schema_scott.dmp
logfile=schema_scott.log directory=dmpdir schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT" 5.664 KB 4 rows
. . exported "SCOTT"."EMP" 7.820 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
/u01/oracle/admin/ora-data/dpdump/schema_scott.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:23:44
* Example of a schema-level import from SCOTT to a new user (KUMAR):
$ impdp directory=dmpdir dumpfile=schema_scott.dmp logfile=scott_kumar.log exclude=grant remap_schema=scott:kumar
Import: Release 10.2.0.1.0 - 64bit Production on Wednesday, 21 September, 2011 11:25:47
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: scott
Password:scott_password
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "SCOTT"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_FULL_01": scott/******** directory=dmpdir dumpfile=schema_scott.dmp
logfile=scott_kumar.log exclude=grant remap_schema=scott:kumar
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "KUMAR"."DEPT" 5.664 KB 4 rows
. . imported "KUMAR"."EMP" 7.820 KB 14 rows
. . imported "KUMAR"."SALGRADE" 5.585 KB 5 rows
. . imported "KUMAR"."BONUS" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SCOTT"."SYS_IMPORT_FULL_01" successfully completed at 11:26:02
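When the option list grows, the same import can be driven from a parameter file, which avoids shell quoting issues and documents the job. A sketch equivalent to the impdp command above (the parfile name here is made up for illustration):

```shell
# Hypothetical parameter file holding the same options as the
# command-line impdp run shown above.
cat > scott_to_kumar.par <<'EOF'
DIRECTORY=dmpdir
DUMPFILE=schema_scott.dmp
LOGFILE=scott_kumar.log
EXCLUDE=GRANT
REMAP_SCHEMA=scott:kumar
EOF
# impdp system parfile=scott_to_kumar.par
```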
* For comparison, these were the database users before the import:
SQL> show user
USER is "SYS"
SQL> select username from dba_users;
USERNAME
------------------------------
MGMT_VIEW
SYS
SYSTEM
DBSNMP
SYSMAN
SCOTT
ROHIT
RAJESH
PETER
OUTLN
MDSYS
USERNAME
------------------------------
ORDSYS
CTXSYS
ANONYMOUS
EXFSYS
DMSYS
WMSYS
XDB
ORDPLUGINS
SI_INFORMTN_SCHEMA
OLAPSYS
MDDATA
USERNAME
------------------------------
DIP
TSMSYS
24 rows selected.
* The query above returns 24 rows.
* And here are the details after the import:
USERNAME
------------------------------
MGMT_VIEW
SYS
SYSTEM
DBSNMP
SYSMAN
KUMAR
SCOTT
ROHIT
RAJESH
PETER
OUTLN
USERNAME
------------------------------
MDSYS
ORDSYS
EXFSYS
DMSYS
WMSYS
CTXSYS
ANONYMOUS
XDB
ORDPLUGINS
SI_INFORMTN_SCHEMA
OLAPSYS
USERNAME
------------------------------
TSMSYS
MDDATA
DIP
25 rows selected.
* Now the query returns 25 rows; the count has increased by one because the import created the new user KUMAR.
SQL> show user
USER is "SYS"
SQL> alter user kumar identified by kumar;
User altered.
SQL> conn kumar
Enter password: kumar
Connected.
SQL> show user
USER is "KUMAR"
SQL> select * from cat;
TABLE_NAME TABLE_TYPE
------------------------------ -----------
DEPT TABLE
BONUS TABLE
SALGRADE TABLE
EMP TABLE
2. How to do a full database export and import in Oracle Database:
$expdp dumpfile=full_datebase_export.dmp logfile=full.log directory=dmpdir full=y
Export: Release 10.2.0.1.0 - 64bit Production on Wednesday, 21 September, 2011 10:30:39
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."SYS_EXPORT_FULL_01": sys/******** AS SYSDBA dumpfile=full_datebase_export.dmp
logfile=full.log directory=dmpdir full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 15.25 MB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
..............................................................................
...............................................................................
. . exported "SYSTEM"."REPCAT$_REPPROP" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_REPSCHEMA" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION_STATISTICS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RESOL_STATS_CONTROL" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_RUNTIME_PARMS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_SITES_NEW" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_SITE_OBJECTS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_SNAPGROUP" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_OBJECTS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_PARMS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_REFGROUPS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_SITES" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS" 0 KB 0 rows
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES" 0 KB 0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE" 0 KB 0 rows
. . exported "TSMSYS"."SRS$" 0 KB 0 rows
. . exported "SYSMAN"."MGMT_POLICY_RULE_CRITERIA" 6.578 KB 7 rows
. . exported "SYSMAN"."MGMT_POLICY_RULE_DEF_COLUMNS" 16.86 KB 111 rows
. . exported "SYSMAN"."MGMT_POLICY_SNAPSHOT_CRITERIA" 6.437 KB 18 rows
. . exported "SYSMAN"."MGMT_POLICY_TARGET_CRITERIA" 6.546 KB 9 rows
. . exported "SYSMAN"."MGMT_POLICY_VIOLATIONS" 6.273 KB 6 rows
. . exported "SYSMAN"."MGMT_POLICY_VIOLATION_ROWS" 8.601 KB 18 rows
. . exported "SYSMAN"."MGMT_POLICY_VIOLATION_VALUES" 7.945 KB 39 rows
. . exported "SYSMAN"."MGMT_PRIVS" 7.539 KB 19 rows
. . exported "SYSMAN"."MGMT_PRIV_GRANTS" 7.023 KB 10 rows
. . exported "SYSMAN"."MGMT_PRIV_INCLUDES" 5.617 KB 12 rows
. . exported "SYSMAN"."MGMT_PURGE_POLICY" 9.203 KB 12 rows
. . exported "SYSMAN"."MGMT_PURGE_POLICY_GROUP" 5.867 KB 7 rows
. . exported "SYSMAN"."MGMT_PURGE_POLICY_TARGET_STATE" 8.703 KB 44 rows
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
/u01/oracle/admin/ora-data/dpdump/full_datebase_export.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 10:32:03
* Now suppose a schema or table is accidentally dropped some days later. Because we already have the full database dump file, we can simply import the lost schema or table back from it.
So, let us see how to import the schema with an example:
SQL> show user
USER is "SYS"
SQL>
SQL> drop user kumar cascade;
User dropped.
SQL>
Now we will import the schema KUMAR from the earlier full database export:
$ impdp dumpfile=full_database.dmp logfile=full_1.log directory=dmpdir schemas=kumar exclude=grant
Import: Release 10.2.0.1.0 - 64bit Production on Wednesday, 21 September, 2011 13:58:51
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: system
Password: manager (the default SYSTEM password; change it in production)
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** dumpfile=full_database.dmp
logfile=full_1.log directory=dmpdir schemas=kumar exclude=grant
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "KUMAR"."DEPT" 5.664 KB 4 rows
. . imported "KUMAR"."EMP" 7.820 KB 14 rows
. . imported "KUMAR"."SALGRADE" 5.585 KB 5 rows
. . imported "KUMAR"."BONUS" 0 KB 0 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at 13:59:08
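Before re-running an import like this against a live system, it can help to preview what the dump would create. Data Pump Import's SQLFILE parameter writes the DDL to a text file instead of executing it. A hedged sketch (the parfile and output file names are hypothetical, the dump and schema names follow this example):

```shell
# Hypothetical preview run: extract the DDL for schema KUMAR from the
# full dump into kumar_ddl.sql without importing anything.
cat > preview_kumar.par <<'EOF'
DIRECTORY=dmpdir
DUMPFILE=full_database.dmp
SCHEMAS=kumar
SQLFILE=kumar_ddl.sql
EOF
# impdp system parfile=preview_kumar.par
```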
* After the import, we check the details:
* Connect as SYS user:
$sqlplus / as sysdba
SQL> show user
USER is "SYS"
SQL>
* Changing the password of the user:
SQL> alter user kumar identified by kumar;
User altered.
SQL> conn kumar
Enter password:kumar
Connected.
SQL> show user
USER is "KUMAR"
SQL>
SQL> select * from cat;
TABLE_NAME TABLE_TYPE
------------------------------ -----------
DEPT TABLE
BONUS TABLE
SALGRADE TABLE
EMP TABLE
If we want to limit the size of each dump file piece, we can use the FILESIZE parameter together with the %U substitution variable in the dump file name, so Data Pump creates additional files as each one fills up:
expdp schema_name/schema_password TABLES=table_name1,table_name2,table_name3,table_name4 DIRECTORY=<directory_name> DUMPFILE=<dumpfile_name>%U.dmp FILESIZE=2G LOGFILE=<logfile_name>.log COMPRESSION=ALL
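As a concrete sketch of the above (all table, directory, and file names here are placeholders, not from an actual system): with FILESIZE=2G and %U in DUMPFILE, Data Pump would create emp_dept_01.dmp, emp_dept_02.dmp, and so on as each piece reaches 2 GB.

```shell
# Hypothetical parfile for a size-limited table-mode export.
cat > table_export.par <<'EOF'
TABLES=emp,dept
DIRECTORY=dmpdir
DUMPFILE=emp_dept_%U.dmp
FILESIZE=2G
LOGFILE=emp_dept_exp.log
EOF
# expdp scott parfile=table_export.par
```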
Hope these examples are clear. If you have any queries, please leave a comment and we will try to clear your doubts.