Scenarios of expdp and impdp:
1. Export the whole orcl database.
Export Parfile content:
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation
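The parfile above can be saved to disk and the export launched from the command line. A minimal sketch, assuming the parfile is saved as expfulldp.par (the filename is illustrative; the credentials and the dumplocation directory object are the ones used in this example):

```shell
# Write the example parfile to disk; expfulldp.par is an illustrative name.
cat > expfulldp.par <<'EOF'
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation
EOF
# The export itself would then be started with:
#   expdp parfile=expfulldp.par
wc -l < expfulldp.par
```

Note that the directory object dumplocation must already exist in the database and the exporting user must have read/write access on it.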
2. Export the scott schema from orcl and import it into the ordb database. During the import, exclude some objects (sequence, view, package, cluster, and the LOAD_EXT table) and load the objects that came from the RES tablespace into the USERS tablespace in the target database.
Export Parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott
Import parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"
3. Export the part_emp table from the scott schema at the orcl instance and import it into the ordb instance.
Expdp parfile content:
userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.part_emp
dumpfile=tableexpdb.dmp
Impdp parfile content:
userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE
4. Export only specific partitions (part10 and part20) of the part_emp table from the scott schema at orcl and import them into the ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20
Impdp parfile content: If we want to overwrite the existing data in the target database, we first need to delete the rows for deptno in (10,20) from the part_emp table.
scott@ordb> delete part_emp where deptno=10;
786432 rows deleted.
scott@ordb> delete part_emp where deptno=20;
1310720 rows deleted.
scott@ordb> commit;
Commit complete.
userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=append
5. Export only the tables in the scott schema at orcl and import them into the ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
6. Export only the rows belonging to departments 10 and 20 in the emp and dept tables from the orcl database. Import the dump file into the ordb database, loading only deptno 10 into the target.
Expdp parfile content:
userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"
Impdp parfile content:
userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND
7. Export the scott schema from the orcl database, splitting the dump into 50MB files. Import the dump files into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile, schemaexp_split_01.dmp is created first. Once that file reaches 50MB, the next file, schemaexp_split_02.dmp, is created, and so on. If the total dump size is 500MB, ten dump files are created, each 50MB in size.
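The arithmetic behind the file count can be sketched as follows (the 500MB total is the illustrative figure from the paragraph above):

```shell
# Data Pump starts a new dump file whenever the current one reaches FILESIZE,
# so the number of files is the total dump size divided by the chunk size,
# rounded up.
total_mb=500
filesize_mb=50
files=$(( (total_mb + filesize_mb - 1) / filesize_mb ))
echo "$files dump files of ${filesize_mb}MB each"
```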
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
8. Export the scott schema from the orcl database, splitting the dump across four files with parallel=4. Import the dump files into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table
As per the above parfile content, four files are created initially: schemaexp_split_01.dmp, schemaexp_split_02.dmp, schemaexp_split_03.dmp and schemaexp_split_04.dmp. Each occurrence of the %U substitution variable is incremented in turn. Since there is no FILESIZE parameter, no additional files are created.
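A rough illustration of how the %U substitution variable expands (Data Pump itself generates the two-digit counter, 01 through 99):

```shell
# With parallel=4 and dumpfile=schemaexp_split_%U.dmp, the four worker
# processes get these file names; %U is replaced by an incrementing
# two-digit number.
for i in 01 02 03 04; do
  echo "schemaexp_split_${i}.dmp"
done
```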
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
9. Export the scott schema from the orcl database, splitting the dump into three sets of files stored in three different locations. This method is especially useful if you do not have enough space in one file system to hold the complete expdp output. After the export succeeds, import the dump files into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile content, the dump files are placed in three different locations. If the entire expdp dump is 1500MB, it creates 30 dump files (each 50MB) and places 10 files in each file system.
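The distribution across the three directories can be sketched with the same arithmetic (the 1500MB total is the illustrative figure from the paragraph above):

```shell
# Data Pump writes to the listed directories in round-robin order, so with
# three destinations the files end up spread evenly across them.
total_mb=1500
filesize_mb=50
locations=3
files=$(( total_mb / filesize_mb ))      # total number of 50MB dump files
per_location=$(( files / locations ))    # files landing in each directory
echo "$files files total, $per_location in each of $locations directories"
```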
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace
10. We are on the orcl database server. Export the ordb data and place the dump file on the orcl database server; after the expdp succeeds, import the dump file into the orcl database. When we use network_link, the expdp user and the source database schema user should have identical privilege levels. If the privileges do not match, we get the error below.
C:\impexpdp>expdp parfile=networkexp1.par
Export: Release 11.2.0.1.0 - Production on Sunday, 17 May, 2018 12:06:40
Copyright (c) 2003, 2013, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user
Expdp parfile content:
userid=scott/tiger@orcl
logfile=networkexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb
As per the above parfile, the expdp utility exports the ordb database data and places the dump file on the orcl server, since we are running expdp on the orcl server. This is essentially exporting data from a remote database.
Impdp parfile content:
userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
11. Export the scott schema in orcl and import it into ordb without writing a dump file on the server at all. In this case there is no separate export step; the data can be imported directly without ever creating a dump file.
Here we run impdp on the ordb server; it contacts the orcl database, extracts the data, and imports it into the ordb database. If there is not enough space in the file system to hold a dump file, this option can be used to load the data.
Impdp parfile content:
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace
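Note that this parfile has no dumpfile parameter at all; impdp pulls the rows straight across the database link named in network_link. A quick sanity check of the parfile, saved here under the assumed name networkimp2.par, before running `impdp parfile=networkimp2.par`:

```shell
# Recreate the example parfile; networkimp2.par is an illustrative filename.
cat > networkimp2.par <<'EOF'
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace
EOF
# A network-mode import is identified by the presence of network_link and
# the absence of any dumpfile parameter.
grep -q 'network_link' networkimp2.par && ! grep -q 'dumpfile' networkimp2.par \
  && echo "network import parfile: no dump file will be written"
```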
12. Export the scott schema from orcl and import the dump file into the training schema in the ordb database.
Expdp parfile content:
userid=scott/tiger@orcl
logfile=networkexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
Impdp parfile content:
userid=system/password@ordb
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
remap_schema=scott:training
13. Export a table from the orcl database and import it into ordb, exporting only 20 percent of the table data. We use the SAMPLE parameter to accomplish this task.
The SAMPLE parameter allows you to export a subset of data by specifying the percentage of data to be sampled and exported. The sample_percent value indicates the probability that a block of rows will be selected as part of the sample; it does not mean that the database will retrieve exactly that proportion of rows from the table. The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.
If no table is specified, then the sample_percent value applies to the entire export job. The SAMPLE parameter is not valid for network exports.
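Since sample_percent is a block-selection probability rather than an exact row count, only the expected number of exported rows can be computed up front. A sketch with an illustrative 2,000,000-row table:

```shell
# SAMPLE=20 selects each block of rows with probability 20%, so the actual
# exported row count varies around this expected value rather than matching
# it exactly.
total_rows=2000000
sample_percent=20
expected=$(( total_rows * sample_percent / 100 ))
echo "expected rows exported: $expected"
```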
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20
As per the above expdp parfile, only about 20 percent of the data in the part_emp table is exported.
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
14. Let us start a job, stop it in the middle, resume it, and after some time kill it, checking the job status after each step.
We can find which jobs are currently running in the database by using the query below.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01
SQL>
C:\impexpdp>impdp parfile=schemaimp1.par
Import: Release 11.2.0.1.0 - Production on Sunday, 17 May, 2018 14:06:51
Copyright (c) 2003, 2013, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Import> stop_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
When we want to stop the job, we press Control-C to return to the Import> prompt. Once back at the prompt, we can stop the job with the stop_job command, as shown above.
After the job is stopped, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
NOT RUNNING SYS_IMPORT_FULL_01
SQL>
Now we attach to the job again. Attaching to the job does not restart it.
C:\impexpdp>impdp system/password@ordb attach=SYS_IMPORT_FULL_01
Import: Release 11.2.0.1.0 - Production on Sunday, 17 May, 2018 14:17:11
Copyright (c) 2003, 2013, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 11.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Job: SYS_IMPORT_FULL_01
Owner: SYSTEM
Operation: IMPORT
Creator Privs: FALSE
GUID: 54AD9D6CF9B54FC4823B1AF09C2DC723
Start Time: Sunday, 17 May, 2018 14:17:12
Mode: FULL
Instance: ordb
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND parfile=schemaexp1.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND parfile=schemaimp1.par
TABLE_EXISTS_ACTION REPLACE
State: IDLING
Bytes Processed: 1,086,333,016
Percent Done: 44
Current Parallelism: 1
Job Error Count: 0
Dump File: c:/impexpdp\networkexp1.dmp
Worker 1 Status:
State: UNDEFINED
Import>
After attaching the job, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
IDLING SYS_IMPORT_FULL_01
SQL>
Attaching to the job does not resume it. Now we resume the job.
Import> continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Sunday, 17 May, 2018 14:17
Restarting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01
SQL>
Now we kill the same job. Before issuing kill_job, we press Control-C to return to the Import> prompt.
Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
Now the job has disappeared from the database.
SQL> select state,job_name from dba_datapump_jobs;
no rows selected