AWS RDS Oracle

   

The import process using Oracle Data Pump and the DBMS_FILE_TRANSFER package has the following steps (steps 3 and 4 are sketched after the list):

Step 1: Grant privileges to user on source database

Step 2: Use DBMS_DATAPUMP to create a dump file

Step 3: Create a database link to the target DB instance

Step 4: Use DBMS_FILE_TRANSFER to copy the exported dump file to the Amazon RDS instance

Step 5: Import the dump file into a database on the Amazon RDS instance

Step 6: Clean up
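
For steps 3 and 4, here is a minimal sketch. The link name, endpoint, user, and password below are hypothetical placeholders, not values from this guide:

-- Step 3: create a database link from the source database to the target RDS instance
-- (host, port, SID, user, and password are placeholder values)
CREATE DATABASE LINK to_rds
  CONNECT TO admin IDENTIFIED BY "target_password"
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=target-db.example.us-east-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))';

-- Step 4: copy the exported dump file over the link into the target's DATA_PUMP_DIR
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'your_file_name',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'your_file_name',
    destination_database         => 'to_rds');
END;
/

The rest of this document uses the S3-based transfer (rdsadmin_s3_tasks) instead of DBMS_FILE_TRANSFER.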

Importing an Oracle Data Pump file

Create any tablespaces the import needs. Each can be created with:

CREATE TABLESPACE MY_TABLESPACE DATAFILE SIZE 5G AUTOEXTEND ON NEXT 1G;

CREATE TABLESPACE "DF_DATA" DATAFILE AUTOEXTEND ON NEXT 100M;

Create the necessary schema (user) and grant it the following privileges:

CREATE USER DEV1 IDENTIFIED BY "MY_PASSWORD";

GRANT CREATE SESSION TO DEV1;
GRANT CONNECT TO DEV1;
GRANT RESOURCE TO DEV1;
GRANT DBA TO DEV1;
GRANT UNLIMITED TABLESPACE TO DEV1;
GRANT EXP_FULL_DATABASE TO DEV1;
GRANT IMP_FULL_DATABASE TO DEV1;
GRANT SELECT ANY DICTIONARY TO DEV1;
GRANT CREATE MATERIALIZED VIEW TO DEV1;
GRANT CREATE DATABASE LINK TO DEV1;
GRANT CREATE VIEW TO DEV1;
GRANT CREATE SYNONYM TO DEV1;

Also, grant a quota on every tablespace you created:

ALTER USER DEV1 QUOTA 100M ON MY_TABLESPACE;
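
If you created several tablespaces, you can generate the quota statements instead of typing each one. A sketch (DEV1 and the 100M quota are example values; adjust to your needs):

-- Generates one ALTER USER ... QUOTA statement per permanent tablespace
SELECT 'ALTER USER DEV1 QUOTA 100M ON ' || tablespace_name || ';'
FROM dba_tablespaces
WHERE contents = 'PERMANENT';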

To initiate the dump file copy from the S3 bucket, execute the following query:

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
    p_bucket_name    => 'your_s3_bucket_name',
    p_s3_prefix      => '',
    p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;

--------------------------------------------------------------------------------

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
    p_bucket_name    => 'testenv-db-dump',
    p_s3_prefix      => 'Singletabledump/',
    p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;

This query returns a task ID, which can be used to track the transfer status:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<task_id>.log'));

Replace <task_id> with the value returned from the previous query.
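
For example, if the previous query returned the task ID 1234567890123-42 (a made-up value), the call would be:

SELECT text
FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-1234567890123-42.log'));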

You can list all transferred files using the following query:

select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by filename;

Note: sometimes you need to delete a transferred file. You can do that with the following command:

exec utl_file.fremove('DATA_PUMP_DIR','your_file_name');

Once the file has been transferred from the S3 bucket to the Oracle instance, you can start the import job:

DECLARE
  hdnl NUMBER;
BEGIN
  hdnl := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => null);
  DBMS_DATAPUMP.ADD_FILE(handle => hdnl, filename => 'your_file_name', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE(handle => hdnl, filename => 'imp.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.METADATA_FILTER(hdnl, 'SCHEMA_EXPR', 'IN (''your_schema_name'')');
  DBMS_DATAPUMP.START_JOB(hdnl);
END;
/

Replace your_file_name and your_schema_name with your values.

To check the results of the import, move the log file to the S3 bucket:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
    p_bucket_name    => 'your_s3_bucket_name',
    p_prefix         => 'imp.log',
    p_s3_prefix      => '',
    p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;
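
The upload call also returns a task ID; as with the download, you can monitor it through the BDUMP task log (replace <task_id> with the returned value):

SELECT text
FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-<task_id>.log'));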


To check the status of your job, execute the following query:

col owner_name for a30
col job_name for a30
col operation for a30
col job_mode for a30
col degree for a30
col state for a30

SELECT owner_name, job_name, operation, job_mode, degree, state FROM dba_datapump_jobs WHERE state = 'EXECUTING';


Read the import log file to get more information about errors or unexpected results:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','imp.log'));

Exporting an Oracle Data Pump file

Note: The following export job may not export ALL of your tables; tables with no extents allocated (deferred segment creation) are skipped. Generate a script to allocate extents for those tables:

SELECT 'ALTER TABLE '||table_name||' ALLOCATE EXTENT;' FROM user_tables WHERE segment_created = 'NO';

Run the generated statements before executing the export to get a full dump.
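
A minimal SQL*Plus sketch of this generate-and-run step (the spool file name is arbitrary):

-- Write the generated ALTER TABLE statements to a script, then run it
set heading off feedback off echo off
spool allocate_extents.sql
SELECT 'ALTER TABLE '||table_name||' ALLOCATE EXTENT;' FROM user_tables WHERE segment_created = 'NO';
spool off
set heading on feedback on
@allocate_extents.sql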

To produce the Oracle Data Pump file, run the export job:


DECLARE
  hdnl NUMBER;
BEGIN
  hdnl := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => null);
  DBMS_DATAPUMP.ADD_FILE(handle => hdnl, filename => 'your_file_name', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE(handle => hdnl, filename => 'exp.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.METADATA_FILTER(hdnl, 'SCHEMA_EXPR', 'IN (''your_schema_name'')');
  DBMS_DATAPUMP.START_JOB(hdnl);
END;
/


Replace your_file_name and your_schema_name with your desired values.

To check the status and progress of your job, execute the following queries:

SELECT owner_name, job_name, operation, job_mode, degree, state FROM dba_datapump_jobs WHERE state = 'EXECUTING';

-------------------------------------------------------

SELECT opname, sid, serial#, context, sofar, totalwork,
       round(sofar/totalwork*100, 2) "%_COMPLETE"
FROM v$session_longops
WHERE opname IN (SELECT d.job_name
                 FROM v$session s, v$process p, dba_datapump_sessions d
                 WHERE p.addr = s.paddr AND s.saddr = d.saddr)
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork;


SELECT dp.job_name, dp.owner_name, dp.operation, dp.job_mode, dp.state,
       sl.sid, sl.serial#,
       lpad((round(sl.sofar*100/sl.totalwork, 0) || '%'), 7) AS "% Done",
       sl.totalwork
FROM v$session_longops sl, v$datapump_job dp
WHERE sl.opname = dp.job_name
AND sl.sofar != sl.totalwork;

-------------------------------------------------------

You can also read exp.log while the export is running:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','exp.log'));

As soon as the export finishes, you can copy the exported file to the S3 bucket:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
    p_bucket_name    => 'your_s3_bucket_name',
    p_prefix         => '',
    p_s3_prefix      => '',
    p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;

And again, to check the upload status, execute the following query, replacing <task_id> as before:

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<task_id>.log'));

  

----------------------------------------------------------------------
Useful queries to verify the instance and the imported schema
----------------------------------------------------------------------

select instance_name,status from v$instance; 

select owner, count(1), object_type
from dba_objects
group by owner, object_type
having owner in (select username from dba_users where default_tablespace = 'USERS')
and object_type = 'TABLE';


set linesize 1000    

select username, default_tablespace from dba_users;

select count(*) from dba_tables where owner = 'DEV1';
select count(1) from sys.all_views where owner = 'DEV1';

select owner, object_name, object_type from dba_objects where owner = 'DEV1';

select count(1)  from dba_objects where OWNER = 'DEV1' and object_type = 'PROCEDURE';

select count(1) from dba_objects where OWNER = 'DEV1' and object_type = 'PACKAGE';

select count(1) from dba_objects where OWNER = 'DEV1' and object_type ='FUNCTION';

select count(1) from dba_objects where OWNER = 'DEV1' and object_type = 'PACKAGE BODY' ;   

select sequence_owner, sequence_name from dba_sequences where sequence_owner='DEV1'; 

select count(1)  from dba_sequences where sequence_owner='DEV1';

select * from user_cons_columns;
select * from user_constraints where table_name = '<your table name>';

select * from ALL_OBJECTS WHERE OBJECT_TYPE IN ('FUNCTION','PROCEDURE','PACKAGE') and OWNER='DEV1';

----------------------------------------------------------------------

select * from dba_datapump_jobs;       

SQL> select table_name,tablespace_name,owner from dba_tables where owner in ('DEV1','DEV2');  

select count (1) from dba_tables where owner ='DEV1';  


 SELECT a.username  ,COUNT(b.object_name)  FROM sys.dba_users a LEFT JOIN sys.dba_objects b ON a.username = b.owner GROUP BY a.username ORDER BY a.username; 

 select count(*) from ALL_INDEXES where table_owner in ('DEV1','DEV2'); 

select * from ALL_INDEXES where table_owner ='DEV1'; 

select OWNER,INDEX_NAME,TABLE_OWNER,TABLE_NAME from ALL_INDEXES where table_owner ='DEV1';  

SQL> select count(*) from dba_segments where owner='DEV1';  

----------------------------------------------------------------------------    

To check how many segments a schema has, and their sizes.

 ---------------------------------------------------------------------------- 

SQL> select segment_name,bytes/1024/1024 from dba_segments where owner='DEV1'; 

select segment_name,bytes/1024/1024 from dba_segments where owner IN ('DEV1','TD');  

------------------------------------------------------------------------ 

How to check the size of a particular schema

--------------------------------------------------------- 

select owner,sum(bytes)/1024/1024/1024 as "SIZE in GB" from dba_segments where owner='DEV1' group by owner;  

--------------------------------------------------------------------------- 

How to check Database Size, Used and Free Space in a Database

 --------------------------------------------------------------------------- 

col "Database Size" format a20 col "Free space" format a20 col "Used space" format a20 select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size" , round(sum(used.bytes) / 1024 / 1024 / 1024 ) - round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space" , round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space" from (select bytes from v$datafile union all select bytes from v$tempfile union all select bytes from v$log) used , (select sum(bytes) as p from dba_free_space) free group by free.p /  Database Size        Used space          

 Free space -40 GB    9 GB  31 GB  

-----------Query to Check Database Growth in Oracle -----------------------------  

col "USERS" format a20 col "FILE_NAME" format a20 Select a.tablespace_name, a.file_name, a.bytes allocated_bytes,b.free_bytes FROM dba_data_files  a,(SELECT file_id, SUM(bytes) free_bytes FROM dba_free_space b GROUP BY file_id) b WHERE a.file_id=b.file_id order by a.tablespace_name;   select b.tablespace_name, tbs_size SizeMb, a.free_space FreeMb from (select tablespace_name, round(sum(bytes)/1024/1024 ,2) as free_space from dba_free_space group by tablespace_name) a,(select tablespace_name, sum(bytes)/1024/1024 as tbs_size from dba_data_files group by tablespace_name) b where a.tablespace_name(+)=b.tablespace_name;  

----------------------------------------------------------------------------

 To check Tablespaces that are >=80% full, and how much to add to make them 80% again

 ---------------------------------------------------------------------------------  

set pages 999 lines 100
col "Tablespace" for a50
col "Size MB" for 999999999
col "%Used" for 999
col "Add (80%)" for 999999

select tsu.tablespace_name "Tablespace",
       ceil(tsu.used_mb) "Size MB",
       100 - floor(tsf.free_mb/tsu.used_mb*100) "%Used",
       ceil((tsu.used_mb - tsf.free_mb) / .8) - tsu.used_mb "Add (80%)"
from (select tablespace_name, sum(bytes)/1024/1024 used_mb
      from dba_data_files group by tablespace_name) tsu,
     (select ts.tablespace_name, nvl(sum(bytes)/1024/1024, 0) free_mb
      from dba_tablespaces ts, dba_free_space fs
      where ts.tablespace_name = fs.tablespace_name (+)
      group by ts.tablespace_name) tsf
where tsu.tablespace_name = tsf.tablespace_name (+)
and 100 - floor(tsf.free_mb/tsu.used_mb*100) >= 80
order by 3, 4;

select table_name, num_rows counter
from dba_tables
where owner = 'DEV1'
order by table_name;

select table_name,
       to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select count(*) c from '||table_name)), '/ROWSET/ROW/C')) count
from user_tables
order by table_name;

select owner as schema_name,  view_name from sys.all_views WHERE  owner ='DEV1' order by owner, view_name;    

---------------- To check the status of your job, execute the following query ----------------

SELECT owner_name, job_name, operation, job_mode, degree, state FROM dba_datapump_jobs WHERE state = 'EXECUTING';

Check the largest segments present in the database:

col owner for a6
col segment_name for a26

select *
from (select owner, segment_name || '~' || partition_name segment_name, segment_type, bytes/(1024*1024) size_m
      from dba_segments
      order by blocks desc)
where rownum < 11;

Get the list of LOB objects present in the database:

select owner, table_name, column_name, segment_name, index_name from dba_lobs;

https://aws.amazon.com/premiumsupport/knowledge-center/rds-oracle-storage-optimization/           

-- List tables and sizes

set lines 300 pages 300
col tablespace_name for a15
col owner for a15

select s.owner, s.segment_name, s.tablespace_name, s.bytes/1024/1024 size_in_MB
from dba_segments s
where s.owner = 'DEV1' and s.segment_type = 'TABLE'
order by s.bytes/1024/1024 desc;

Note: lists the tables and their sizes in the DEV1 schema (LOB sizes are not captured).

-- List indexes and size 

set lines 300 pages 300
col tablespace_name for a15
col owner for a15

select s.owner, s.segment_name, s.tablespace_name, s.bytes/1024/1024 size_in_MB
from dba_segments s
where s.owner = 'DEV1' and s.segment_type = 'INDEX'
order by s.bytes/1024/1024 desc;

-- Get data file details 

set pages 999
set lines 400
col FILE_NAME format a75

select d.tablespace_name, d.file_name, d.bytes/1024/1024 size_mb, d.autoextensible,
       d.maxbytes/1024/1024 maxsize_mb, d.increment_by*(v.block_size/1024)/1024 increment_by_mb
from dba_data_files d, v$datafile v
where d.file_id = v.file#
order by d.tablespace_name, d.file_name;

-- Get temp file details 

set pages 999
set lines 400
col FILE_NAME format a75

select d.tablespace_name, d.file_name, d.bytes/1024/1024 size_mb, d.autoextensible,
       d.maxbytes/1024/1024 maxsize_mb, d.increment_by*(v.block_size/1024)/1024 increment_by_mb
from dba_temp_files d, v$tempfile v
where d.file_id = v.file#
order by d.tablespace_name, d.file_name;

-- Get online log file details 

set lines 300 pages 300
col REDOLOG_FILE_NAME format a100

SELECT a.group#, a.thread#, a.sequence#, a.archived, a.status,
       b.member AS redolog_file_name, (a.bytes/1024/1024) AS size_mb
FROM v$log a
JOIN v$logfile b ON a.group# = b.group#
ORDER BY a.group#;

--------------------------------------

 Determine the spread of the space that's allocated to different components of the Oracle database:

set pages 200

select '===========================================================' || chr(10) ||
       'Total Database Physical Size = ' || round(redolog_size_gb + dbfiles_size_gb + tempfiles_size_gb + ctlfiles_size_gb, 2) || ' GB' || chr(10) ||
       '===========================================================' || chr(10) ||
       ' Redo Logs Size : ' || round(redolog_size_gb, 3) || ' GB' || chr(10) ||
       ' Data Files Size : ' || round(dbfiles_size_gb, 3) || ' GB' || chr(10) ||
       ' Temp Files Size : ' || round(tempfiles_size_gb, 3) || ' GB' || chr(10) ||
       ' Archive Log Size - Approx only : ' || round(archlog_size_gb, 3) || ' GB' || chr(10) ||
       ' Control Files Size : ' || round(ctlfiles_size_gb, 3) || ' GB' || chr(10) ||
       '===========================================================' || chr(10) ||
       ' Used Database Size : ' || used_db_size_gb || ' GB' || chr(10) ||
       ' Free Database Size : ' || free_db_size_gb || ' GB' || chr(10) ||
       ' Data Pump Directory Size : ' || dpump_db_size_gb || ' GB' || chr(10) ||
       ' BDUMP Directory Size : ' || bdump_db_size_gb || ' GB' || chr(10) ||
       ' ADUMP Directory Size : ' || adump_db_size_gb || ' GB' || chr(10) ||
       '===========================================================' || chr(10) ||
       'Total Size (including Dump and Log Files) = ' ||
       round(round(redolog_size_gb, 2) + round(dbfiles_size_gb, 2) + round(tempfiles_size_gb, 2) + round(ctlfiles_size_gb, 2) +
             round(adump_db_size_gb, 2) + round(dpump_db_size_gb, 2) + round(bdump_db_size_gb, 2), 2) || ' GB' || chr(10) ||
       '===========================================================' as summary
FROM (SELECT sys_context('USERENV', 'DB_NAME') db_name,
             (SELECT SUM(bytes) / 1024 / 1024 / 1024 redo_size FROM v$log) redolog_size_gb,
             (SELECT SUM(bytes) / 1024 / 1024 / 1024 data_size FROM dba_data_files) dbfiles_size_gb,
             (SELECT nvl(SUM(bytes), 0) / 1024 / 1024 / 1024 temp_size FROM dba_temp_files) tempfiles_size_gb,
             (SELECT SUM(blocks * block_size / 1024 / 1024 / 1024) size_gb
              FROM v$archived_log
              WHERE first_time >= SYSDATE - ((SELECT value FROM rdsadmin.rds_configuration
                                              WHERE name = 'archivelog retention hours') / 24)) archlog_size_gb,
             (SELECT SUM(block_size * file_size_blks) / 1024 / 1024 / 1024 controlfile_size FROM v$controlfile) ctlfiles_size_gb,
             round(SUM(used.bytes) / 1024 / 1024 / 1024, 3) db_size_gb,
             round(SUM(used.bytes) / 1024 / 1024 / 1024, 3) - round(free.f / 1024 / 1024 / 1024) used_db_size_gb,
             round(free.f / 1024 / 1024 / 1024, 3) free_db_size_gb,
             (SELECT round(SUM(filesize) / 1024 / 1024 / 1024, 3) FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'))) bdump_db_size_gb,
             (SELECT round(SUM(filesize) / 1024 / 1024 / 1024, 3) FROM TABLE(rdsadmin.rds_file_util.listdir('ADUMP'))) adump_db_size_gb,
             (SELECT round(SUM(filesize) / 1024 / 1024 / 1024, 3) FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))) dpump_db_size_gb
      FROM (SELECT bytes FROM v$datafile UNION ALL SELECT bytes FROM v$tempfile) used,
           (SELECT SUM(bytes) AS f FROM dba_free_space) free
      GROUP BY free.f);

To view information about temporary tablespace usage, run the following query on the view DBA_TEMP_FREE_SPACE:

SQL> SELECT * FROM dba_temp_free_space;

To resize the temporary tablespace (for example, to 10 GB), run the following query based on the output of the tablespace usage query:

SQL> ALTER TABLESPACE temp RESIZE 10g;

This command can fail if the tablespace has allocated extents beyond the 10-GB threshold.

If the command fails, then shrink space on the temporary tablespace by running the following command:

SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 10g;

To check for long-running sessions that are performing active sorting to disk and have temporary segments allocated, run the following query:

SQL> SELECT * FROM v$sort_usage;
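
V$SORT_USAGE identifies sessions only by address; to map the temporary segments back to the owning sessions, a join against V$SESSION (a sketch) can be used:

-- Map temporary segment usage back to the owning sessions
SELECT s.sid, s.serial#, s.username, su.tablespace, su.segtype, su.blocks
FROM v$sort_usage su
JOIN v$session s ON su.session_addr = s.saddr;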