A) INSTALL DB 10.2.0.1 BINARIES AND UPGRADE TO 10.2.0.5 ON LINUX NODE
The installation procedure for the 10g binaries on Linux is the same as described in section 10. Note that a separate set of binaries must be downloaded for AIX and for Linux. Refer to the earlier step performed on AIX.
B) Run transportable tablespaces script
Patch 6158038: Export/import patch 9/09 including transportable tablespaces
This patch contains the transportable tablespaces script auxttspre.sql with the latest export/import patch (9/09 - 7120092).
Apply this patch. It copies the required SQL files listed below:
copy au patch/115/driver dpost_imp.pl 115.7
copy au patch/115/import auexpdp.dat 115.4
copy au patch/115/import auexpimp.dat 115.4
copy au patch/115/sql auclondb.sql 115.1
copy au patch/115/sql audb1110.sql 115.0
copy au patch/115/sql audb1110_nt.sql 115.0
copy au patch/115/sql audb1120.sql 115.0
copy au patch/115/sql audb1120_nt.sql 115.0
copy au patch/115/sql aujv1110.sql 115.2
copy au patch/115/sql aujv1110_nt.sql 115.2
copy au patch/115/sql aujv1120.sql 115.0
copy au patch/115/sql aujv1120_nt.sql 115.0
copy au patch/115/sql aumsc1110.sql 115.0
copy au patch/115/sql aumsc1110_nt.sql 115.0
copy au patch/115/sql aumsc1120.sql 115.0
copy au patch/115/sql aumsc1120_nt.sql 115.0
copy au patch/115/sql auque1.sql 115.4
copy au patch/115/sql ausy1110.sql 115.0
copy au patch/115/sql ausy1110_nt.sql 115.0
copy au patch/115/sql ausy1120.sql 115.0
copy au patch/115/sql ausy1120_nt.sql 115.0
copy au patch/115/sql auxttspre.sql 115.4
Run the auxttspre.sql script
SQL> @auxttspre.sql
This is the first stage of migrating the database through transportable tablespaces. The script checks for violations in the transportable set and creates the scripts needed for the rest of the process.
It generates the following scripts:
aucrdb.sql Script that creates the target database instance with the appropriate tablespace and file structure
auxttsread.sql Script that converts the transportable tablespace set in the source database to read only
auxttswrite.sql Script that converts the transportable tablespace set in the source database back to read write.
audeftbsp.sql Script that assigns users their corresponding default tablespaces.
auexpxtts.dat Export parameter file for the transportable tablespace set.
auexpglb.dat Export parameter file for the global temporary tables.
auexpsys.dat Export parameter file for the SYSTEM tablespace schemas.
auexpmsc.dat Export parameter file for the other objects.
aurman.dat Recovery manager parameter file to convert the endian format of the data files. (Not used in this procedure; the conversion script is generated separately in step K.)
auimpxtts.dat Import parameter file for the transportable tablespace set.
auimpmsc.dat Import parameter file for the other objects
auimpglb.dat Import parameter file for the global temporary tables.
auimpsys.dat Import parameter file for the SYSTEM tablespace schemas.
All export and import parameter files need to be updated with the correct Data Pump directory (see steps I and P).
Go through the auxttspre.log file, fix any transport set violations it lists, and re-do the instructions in this step before proceeding with the rest of this document.
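If you want to re-check the transportable set manually before re-running auxttspre.sql, a minimal sketch using the standard DBMS_TTS package (the tablespace list here is a placeholder; use the tablespaces in your transportable set):
SQL> -- check the candidate set for self-containment, including constraints
SQL> exec dbms_tts.transport_set_check('APPS_TS_TX_DATA,APPS_TS_TX_IDX', TRUE);
SQL> -- any rows returned are violations that must be resolved
SQL> select * from transport_set_violations;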
C) Record Advanced Queue settings
Advanced Queue settings are not propagated in the target database instance during the export/import process. Therefore, you must record them beforehand and enable them in the target database instance afterwards. Patch 4872830 contains auque1.sql, which generates a script called auque2.sql. You can use auque2.sql to enable the settings in the target database instance.
Copy the auque1.sql script from the $AU_TOP/patch/115/sql directory on the source administration server node to the working directory on the source database server node. Then, on the source database server node (as the owner of the source database server file system and database instance), use SQL*Plus to connect to the source database as SYSDBA and run the auque1.sql script. It generates auque2.sql in the current directory.
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> @auque1.sql
auque1.sql is a script that creates another script (auque2.sql), which re-activates the queues to match the database against which auque1.sql was run.
D) Create data dump directory
Create a data dump directory by creating a directory on the file system and registering it in the database.
mkdir /u01/exp_directory
As sysdba
SQL> create directory dmptts as '/u01/exp_directory';
Directory created.
E) Drop unused Columns
Select all tables having unused columns
select * from sys.dba_unused_col_tabs;
OWNER TABLE_NAME COUNT
------------------------------ ------------------------------ ----------
AP AP_CHECKS_ALL 3
AR HZ_PARTIES 8
CN CN_POSTING_DETAILS_SUM_ALL 7
BOM BOM_DEPARTMENTS 1
INV MTL_TXN_REQUEST_LINES 8
INV MTL_MATERIAL_TRANSACTIONS 6
INV MTL_MVT_STATS_RULE_SETS_TL 2
INV MTL_MATERIAL_TRANSACTIONS_TEMP 2
JTF JTF_RS_ROLES_TL 1
MSC MSC_REGIONS 2
MSC MSC_ST_REGIONS 2
ONT OE_AUDIT_ATTR_HISTORY 4
OSM AS_LEADS_ALL 4
OSM AS_SALES_CREDITS_DENORM 2
Drop the unused columns using:
Alter table <owner>.<table_name> drop unused columns;
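If many tables are affected, the ALTER statements can be generated from the dictionary view instead of typing them one by one; a minimal sketch (the spool file name is arbitrary):
set head off pages 0 lines 200
spool drop_unused_cols.sql
select 'alter table '||owner||'.'||table_name||' drop unused columns;' from sys.dba_unused_col_tabs;
spool off
-- review drop_unused_cols.sql, then run it as SYSDBA
@drop_unused_cols.sql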
F) Purge Recycle Bin
SQL> select owner, count(*) from dba_recyclebin group by owner;
SQL> purge recyclebin;
G) Grant privilege to source system schema
Grant the exempt access policy privilege to system by using SQL*Plus to connect to the database as SYSDBA and run the following command:
grant EXEMPT ACCESS POLICY to system;
H) Make Tablespaces Read only
Step B above generated a script named auxttsread.sql.
Use SQL*Plus to connect to the database as SYSDBA and run auxttsread.sql to mark the tablespaces within the transportable set as read only:
Run
SQL> spool auxttsread.lst
SQL> @auxttsread.sql
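To confirm the transportable set is read only before exporting, a quick check (SYSTEM, SYSAUX, undo, and temporary tablespaces are expected to remain read write):
SQL> select tablespace_name, status from dba_tablespaces order by tablespace_name;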
I) Modify export files
The .dat export parameter files generated in step B require minor modification to suit the environment.
The customizable parameters are:
1. auexpxtts.dat (Export file for Transport TS set)
Changes required:
Change the Data Pump directory to the one created in step D:
directory=dmptts
Change the dump file names:
dumpfile=xttsmeta1%U.dmp,xttsmeta2%U.dmp,xttsmeta3%U.dmp
Change the file size:
filesize=4048576000
2. auexpsys.dat (Export file for the SYSTEM tablespace)
Change the following:
directory=dmptts
dumpfile=xttssys1%U.dmp,xttssys2%U.dmp
filesize=4048576000
3. auexpmsc.dat (Export file for miscellaneous data)
Change the following:
directory=dmptts
parallel=2
dumpfile=xttsmsc1%U.dmp,xttsmsc2%U.dmp
J) Running Exports
1. Export the transportable tablespace set
Start an export session on the source database server node using the auexpxtts.dat parameter file. Run the following command to export the tablespace metadata:
expdp system/<system_pw> parfile=auexpxtts.dat
---Time 3 hours approx
2. Export the SYSTEM tablespace schemas
Export the SYSTEM tablespace schemas by starting an export session on the source database server node using the auexpsys.dat parameter file. Use the following command:
expdp system/<system_pw> parfile=auexpsys.dat
--- 15 Mins approx
3. Export the global temporary tables
Global temporary table definitions are not included in the transportable tablespace export, because these tables do not reside in any tablespace. Start an export session on the source database server node using the auexpglb.dat parameter file to export them. Note that this step uses the traditional exp utility, not expdp:
exp system/<system_pw> parfile=auexpglb.dat
--- Time: approx. 30 mins
4. Export miscellaneous data
Start an export session on the source database server node using the auexpmsc.dat parameter file to export other objects. Use the following command:
expdp system/<system_pw> parfile=auexpmsc.dat
--Time 3 hours approx
K) Generate RMAN convert datafile script on the source
As SYSDBA, run the dbf_convert.sql script shown below:
spool df_convert.new.rman
set serveroutput on;
declare
fname varchar(513);
i number := 0;
begin
dbms_output.put_line('# Sample RMAN script to perform file conversion on all user datafiles');
dbms_output.put_line('# Datafile names taken from DBA_DATA_FILES');
dbms_output.put_line('# Please review and edit before using');
dbms_output.put_line('CONVERT DATAFILE ');
for df in
(select replace(file_name,'stage','/<dbf-directory-location>/files_to_convert') file_name
from dba_tablespaces a, dba_data_files b
where a.tablespace_name = b.tablespace_name
and a.tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT'
order by a.tablespace_name)
loop
if (i!=0) then
dbms_output.put_line(''''||fname||''',');
end if;
i := 1;
fname := df.file_name;
end loop;
dbms_output.put_line(''''||fname||'''');
dbms_output.put_line('FROM PLATFORM ''AIX-Based Systems (64-bit)''');
dbms_output.put_line('PARALLELISM 6');
dbms_output.put_line(';');
end;
/
spool off
This spools df_convert.new.rman; clean it up (remove any SQL*Plus noise) and save it as df_convert.rman, the RMAN script that will convert the datafiles on the target Linux server.
Change datafile names to reflect the datafile location in the target system where the files will be copied.
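If the path change is a simple prefix substitution, it can be scripted; a sketch, assuming the source paths start with the hypothetical prefix /oradata_aix and the staged files will sit under /u01/stage/files_to_convert on the target:
sed -i 's|/oradata_aix|/u01/stage/files_to_convert|g' df_convert.rman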
L) Copy Data Files and export dump from AIX to Linux
Shutdown the database and copy the export dump files and all the data files generated from the source database server node to the target database server node.
This step will take time depending on the size of the database and the copy method used.
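Any reliable transfer method works; a sketch using scp (host names and paths here are placeholders for this environment):
# export dump files (including the traditional-export dump for global temporary tables)
scp /u01/exp_directory/*.dmp oracle@linux-target:/u01/imp_dir/
# staged datafiles to be converted
scp /copy-dir/files_to_convert/* oracle@linux-target:/u01/stage/files_to_convert/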
M) Create the target database instance
1. Copy database preparation scripts to target Oracle home
The database preparation scripts that you applied to the source administration server node earlier contain four scripts that are needed on the target database server node. Copy the following files from the $APPL_TOP/admin directory of the source AIX administration server node to the target Linux database server node into the working directory.
aucrdb.sql ( this script was generated in step B)
addb1020.sql
adsy1020.sql
adjv1020.sql
admsc1020.sql
2. Modify aucrdb.sql to reflect the target database server node, making any necessary changes to the directory structures for the log files, data files, and tablespaces.
3. Start the target 10.2.0.5 instance in NOMOUNT mode.
4. Make sure that the environment of your session on the target database server node is set up properly for the target database instance, especially the ORACLE_HOME, ORACLE_SID, and ORA_NLS10 environment settings. (ORACLE_SID must be set to the same value as the db_name parameter in the init[SID].ora file.) Then, use the following commands to run aucrdb.sql and create the target database instance:
SQL> startup nomount pfile=<INIT FILE PATH>/init<ENV>.ora
ORACLE instance started.
Total System Global Area 8136949760 bytes
Fixed Size 2109128 bytes
Variable Size 2868904248 bytes
Database Buffers 5251268608 bytes
Redo Buffers
SQL> @aucrdb.sql
N) Setup Target Database
1. Set up the SYS schema
Run addb1020.sql copied in previous step as sys user.
The addb1020.sql script sets up the SYS schema for use with the Applications. On the target database server node, use SQL*Plus to connect to the target database instance as SYSDBA and run addb1020.sql (UNIX/Linux)
$ sqlplus "/ as sysdba" @addb1020.sql
2. Set up the SYSTEM schema
Run adsy1020.sql to set up the SYSTEM schema. Run it as the SYSTEM user (change the SYSTEM password first if required).
The adsy1020.sql script sets up the SYSTEM schema for use with the Applications. On the target database server node, use SQL*Plus to connect to the target database instance as SYSTEM and run adsy1020.sql (UNIX/Linux)
$ sqlplus system/[system password] @adsy1020.sql
3. Install Java Virtual Machine
Install the Java components by running adjv1020.sql as SYSTEM. This takes over an hour to complete and must be run only once, so it is best run from a VNC session to avoid connection disruption.
The adjv1020.sql script installs the Java Virtual Machine (JVM) in the database. On the target database server node, use SQL*Plus to connect to the target database instance as SYSTEM and run adjv1020.sql
$ sqlplus system/[system password] @adjv1020.sql
ATTENTION: This script can be run only once in a given database instance, because the scripts that it calls are not rerunnable.
4. Install other required components
Install the other components by running admsc1020.sql as SYSTEM, providing the following arguments:
admsc1020.sql FALSE SYSAUX TEMP
The admsc1020.sql or admsc1020_nt.sql script installs the following required components in the database: ORD, Spatial, XDB, OLAP, Data Mining, interMedia, and ConText. On the target database server node, use SQL*Plus to connect to the target database instance as SYSTEM and run admsc1020.sql (UNIX/Linux) or admsc1020_nt.sql (Windows). You must pass the following arguments to the script, in the order specified:
$ sqlplus system/[system password] @admsc1020.sql FALSE SYSAUX TEMP
Attention: All of the components are created in the SYSAUX tablespace, regardless of where they were installed in the source database.
5. Disable automatic gathering of statistics
Copy $APPL_TOP/admin/adstats.sql from the administration server node to the target database server node.
$ sqlplus "/ as sysdba"
SQL> shutdown normal;
SQL> startup restrict;
SQL> @adstats.sql
SQL> exit;
6. Restart the database
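Since adstats.sql was run with the database in restricted mode, restart the database normally; a minimal sketch:
$ sqlplus "/ as sysdba"
SQL> shutdown immediate;
SQL> startup;
SQL> exit;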
O) Run RMAN datafile conversion on the target server
a) Ensure that the tablespaces in the transportable set are all in read-only mode.
b) Ensure that the tablespaces were placed in read/write mode at least once after the database upgrade.
c) Ensure that the COMPATIBLE parameter is set to 10.2.0.
These checks can be confirmed with the queries sketched below.
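A minimal verification sketch from SQL*Plus (run against the database holding the tablespaces being converted):
SQL> show parameter compatible
SQL> select tablespace_name, status from dba_tablespaces order by tablespace_name;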
Go to the source server and
1) Run dbf_copy_script.sql as SYSDBA:
set head off;
set lines 500
spool dbf_copy.sh
select 'cp '|| file_name|| ' /copy-dir/files_to_convert'
from dba_tablespaces a, dba_data_files b
where a.tablespace_name = b.tablespace_name
and a.tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT'
order by a.tablespace_name;
spool off;
This will generate dbf_copy.sh.
2) Run dbf_convert.sql as SYSDBA (this is the same script generated in step K, shown again here):
spool df_convert.new.rman
set serveroutput on;
declare
fname varchar(513);
i number := 0;
begin
dbms_output.put_line('# Sample RMAN script to perform file conversion on all user datafiles');
dbms_output.put_line('# Datafile names taken from DBA_DATA_FILES');
dbms_output.put_line('# Please review and edit before using');
dbms_output.put_line('CONVERT DATAFILE ');
for df in
(select replace(file_name,'stage','/<STAGING-DIRECTORY>/files_to_convert') file_name
from dba_tablespaces a, dba_data_files b
where a.tablespace_name = b.tablespace_name
and a.tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT'
order by a.tablespace_name)
loop
if (i!=0) then
dbms_output.put_line(''''||fname||''',');
end if;
i := 1;
fname := df.file_name;
end loop;
dbms_output.put_line(''''||fname||'''');
dbms_output.put_line('FROM PLATFORM ''AIX-Based Systems (64-bit)''');
dbms_output.put_line('PARALLELISM 6');
dbms_output.put_line(';');
end;
/
spool off
This spools df_convert.new.rman; clean it up and save it as df_convert.rman.
df_convert.rman has to be run on the target Linux server, so open it and check whether all the paths are appropriate.
In the target Linux DB run (from a VNC session)
rman target / @df_convert.rman log='dbf_convert_log.log'
# will take 3-4 hrs depending on database size
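Because the conversion runs for hours, it can also be launched in the background so that a dropped terminal session does not kill it; a sketch:
nohup rman target / cmdfile=df_convert.rman log=dbf_convert_log.log &
tail -f dbf_convert_log.log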
P) Import Metadata to Target DB
a) Create an import directory in the database
SQL> create directory impdir as '/u01/imp_dir';
Directory created.
b) Copy all the import parameter files generated in earlier step #B from the source database server node to the target database server node. Use a text editor to modify the auimpxtts.dat, auimpmsc.dat, and auimpsys.dat data pump parameter files to reflect the target environment and other customized parameters.
auimpxtts.dat
auimpsys.dat
auimpmsc.dat
auimpglb.dat
The customizable parameters are:
c) Modify auimpmsc.dat and run import
directory=impdir ===<Directory Name Created in above step>
dumpfile=xttsmsc101.dmp,xttsmsc102.dmp,xttsmsc201.dmp =======<Dump file name>
full=y
logfile=impmsc.log ========<Log File Name>
Run the import command as follows:
impdp system/[system password] parfile=auimpmsc.dat
# Took 2 hours
d) Modify auimpxtts.dat and run import
1 Copy the original auimpxtts.dat as auimpxtts.dat.orig
2 Delete all the lines under "transport_datafiles="
3 Modify the following lines:
directory=impdir -===========<Directory Name Created in above step>
dumpfile=xttsmeta101.dmp,xttsmeta201.dmp,xttsmeta301.dmp =======<Dump file name>
logfile=impxtts.log
exclude=TYPE
transport_datafiles=
To build the list of transport datafile paths, run the following from the Linux prompt (the directory shown is this environment's converted-datafile location; adjust as needed):
for i in `ls -1 /d01/db_CRP1/data01/datafile/CRP1/datafile`;
do
echo "'/<converted datafile directory path>/"$i"'," >> dbf1.txt;
done
Append the contents of dbf1.txt after the transport_datafiles= parameter in auimpxtts.dat and remove the trailing comma from the last line.
Run the impdp command
impdp system/[system password] parfile=auimpxtts.dat
# Took 8 hours
e) Modify auimpglb.dat and run import
This is a parameter file for the traditional imp utility, not impdp.
cat auimpglb.dat
file=/d01/db_CRP1/imp_dir/gtab.dmp =================<Full Path for the dmp file>
filesize=1048576000
full=y
log=impgtab.log
Run the import:
imp system/[system password] parfile=auimpglb.dat
#Took 15 Minutes
f) Run the import for auimpmsc_procobj.dat
impdp system/[system password] parfile=auimpmsc_procobj.dat
g) Run the import for auimpmsc_pre_system_action.dat
impdp system/[system password] parfile=auimpmsc_pre_system_action.dat
h) Modify auimpsys.dat and run import
Edit it as follows:
directory=impdir ===========<Directory Name Created in above step>
dumpfile=xttssys101.dmp,xttssys201.dmp=======<Dump file name>
full=y
logfile=impsys.log
impdp system/[system password] parfile=auimpsys.dat
# Took 10 Minutes
Q) Convert Tablespaces into R/W mode
Copy auxttswrite.sql, generated in step B, to the target database server node. Then, as the owner of the target database server file system and database instance, use SQL*Plus to connect to the target database as SYSDBA and run the auxttswrite.sql script.
sqlplus "/ as sysdba" @auxttswrite.sql
R) Grant quota to Users
Check quotas on the target system by issuing the following query in SQL*Plus:
SQL> select username, count(*) from dba_ts_quotas group by username order by username;
Manually compare this result with the same query run on the source machine. If there are any discrepancies, set the missing quotas, for example:
SQL> alter user [user] quota unlimited on [tablespace];
Alternatively, the quota statements can be generated dynamically on the source system and run on the target, as sketched below.
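A minimal sketch of generating the quota statements on the source from DBA_TS_QUOTAS (MAX_BYTES = -1 means an unlimited quota); spool the output and run it on the target:
set head off pages 0 lines 200
spool grant_quotas.sql
select 'alter user '||username||' quota '||
       decode(max_bytes, -1, 'unlimited', to_char(max_bytes))||
       ' on '||tablespace_name||';'
from dba_ts_quotas;
spool off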
S) Post Migration Tasks
1) Fix CTXSYS adctxpkg.sql
After migration to Linux, the AD_CTX_DDL package is missing; this will cause patches to fail. To prevent this issue, create the package.
Check whether the package exists:
select owner, object_type, status from dba_objects where upper(object_name)=upper('ad_ctx_ddl');
If it is missing, create it using:
@adctxpkg.sql <SYSTEM Password> CTXSYS apps
adctxpkg.sql should be copied from $AD_TOP/patch/115/sql (11i)
Reference: Note 438194.1
2) Make Datafiles auto extend ON
Run the following script, spool the output, and then run the spooled script as SYSDBA:
set head off
set lines 500
spool auto_extend_prod.sql
SELECT 'alter database datafile '''||file_name||''' autoextend on;' from dba_data_files;
spool off
@auto_extend_prod.sql
3) Activate Advanced Queue
Copy the auque2.sql script that was generated in step C to the target database server node. Then, on the target database server node, as the owner of the Oracle 10g file system and database instance, use SQL*Plus to connect to the target database as SYSDBA and run the auque2.sql script to enable the Advanced Queue settings that were lost during the process. The script creates a log file in the current directory.
As SYSDBA, run @auque2.sql (generated earlier in step C).
4) Drop the MRC schema
Run this from a VNC session; it takes about 1 hour:
sqlplus SYSTEM/<system_password> @addrpmrc.sql <applsys-schema-name> FORCE
<applsys-schema-name> is normally applsys
5) Korean lexer fix
Use SQL*Plus to connect to the database as SYSDBA, and run drkorean.sql using the following command:
$ sqlplus "/ as sysdba" @$ORACLE_HOME/ctx/sample/script/drkorean.sql
6) Compile invalids
On the target database server node, as the owner of the Oracle 10g file system and database instance, use SQL*Plus to connect to the target database as SYS and run the $ORACLE_HOME/rdbms/admin/utlrp.sql script to compile invalid objects.
$ sqlplus "/ as sysdba" @$ORACLE_HOME/rdbms/admin/utlrp.sql
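After utlrp.sql completes, a quick check of any remaining invalid objects:
SQL> select owner, count(*) from dba_objects where status = 'INVALID' group by owner order by owner;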
7) Re-create grants and synonyms
Oracle Database 10g Release 2 (10.2) contains new functionality for grants and synonyms compared to previous database releases.
As a result, you must re-create the grants and synonyms in the APPS schema. On the administration server node, as the owner of the Applications file system, run AD Administration and select the "Recreate grants and synonyms for APPS schema" task from the Maintain Applications Database Objects menu.
8) Take database backup
It is a good idea to take a backup at this stage
Shut down the database and start it up in mount mode.
Take a level 0 backup using RMAN or any other suitable method; a sketch follows.
After the backup, open the database.
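A minimal RMAN level 0 backup sketch (backup destination and channel configuration are site specific):
$ rman target /
RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> backup incremental level 0 database;
RMAN> sql 'alter database open';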