Usage of different users in Oracle APPS
GUEST
The GUEST user account is used internally by the application. One of its major uses is decrypting the APPS password (which is stored in encrypted form in the application tables): the GUEST username/password, obtained via the "Guest User Password" profile option, is used to accomplish this task.
You can find more details about the GUEST account in the dbc file (under $FND_TOP/secure) and in the context file ($APPL_TOP/admin/<SID>_<hostname>.xml); the GUEST password must match in these two files.
The GUEST user is used whenever application users need to connect to the database internally as the APPS user.
APPLSYSPUB
This account is used by Oracle Applications to connect to the database initially and to check the user's password during the signon process, before the user is fully connected.
APPLSYSPUB schema has sufficient privileges to perform the authentication of an Applications User (a.k.a. FND user), which includes running PL/SQL packages to verify the username/password combination and the privilege to record the success or failure of a login attempt. There is no need to change the password for APPLSYSPUB.
You can find the APPLSYSPUB password in the dbc file. The dbc file currently in use can be identified from the profile option "Applications Database ID".
However, should you choose to change this password, you must use FNDCPASS:
$ FNDCPASS APPS/<apps_pwd> 0 Y SYSTEM/<system_pwd> ORACLE APPLSYSPUB <new_pwd>
After changing the APPLSYSPUB password you must propagate the change to the application tier configuration files. If you use AutoConfig, edit the context file on each tier before running AutoConfig: locate the variable "s_gwyuid_pass", set it to the new password, then run AutoConfig.
If you are not using AutoConfig, you must manually edit the following configuration files:
1) $FND_TOP/resource/appsweb.cfg
2) $OA_HTML/bin/appsweb.cfg
3) $FND_TOP/secure/<host_name>_<dbname>.dbc
All application tier processes (e.g., Apache) must be restarted after the password change has been propagated.
Note id 189367.1
Note: APPLSYSPUB has no data objects, only synonyms to APPS.
GUEST_USER_PWD (Guest/Oracle) is used by the JDBC thin client, whereas GWYUID is used by thick clients, such as Forms connections.
GWYUID stands for Gateway User ID and password, and is usually APPLSYSPUB/PUB.
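As a quick illustration of where these entries live, the following sketch writes a sample dbc-style file (the values below are illustrative, not from a real instance) and greps out the authentication entries; on a real apps tier you would grep the actual dbc file under $FND_TOP/secure instead:

```shell
# Write a sample dbc-style file, then grep the authentication entries.
# Real dbc files live under $FND_TOP/secure on an application tier.
cat > /tmp/sample.dbc <<'EOF'
GUEST_USER_PWD=GUEST/ORACLE
GWYUID=APPLSYSPUB/PUB
APPL_SERVER_ID=EA99227ED75CFE4EE030149077C45154
EOF
grep -E 'GUEST_USER_PWD|GWYUID' /tmp/sample.dbc
```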
APPS
APPS is a schema that does not contain any tables of its own; it contains synonyms for all the tables in Oracle Apps, plus the code objects for all products (triggers, views, packages, etc.).
APPLSYS
The APPLSYS schema contains all the tables required for administrative purposes; the default password is apps. The database objects of all the technical products are consolidated into this single schema.
Why should Apps & Applsys passwords always be the same?
The Apps and Applsys passwords must be the same because, when you sign on to the applications, the connection is initially made to a public schema called APPLSYSPUB. This validates the AOL username and password that you enter (e.g., operations/welcome), using the GUEST user account. Once this is verified you select a responsibility, which is validated by the APPLSYS schema, and the connection then switches to the APPS schema.
Since both APPLSYS and APPS are used during the signon process, their passwords are expected to be identical. If you change the APPS password to something else and try to log in, validation fails at the last stage and the application login fails. APPS is a universal schema with synonyms to all base product tables and sequences; it also holds the code objects for all products (triggers, views, packages, etc.). The APPLSYS schema holds the applications technology layer products, such as FND and AD.
Purge Programs and their significance
1) Purge Concurrent Requests and/or Manager Data
We had a customer who, when we first started working with them, complained that whenever they submitted a request, they had to wait about half a minute for the request to commit, and when they tried to query the status of records in the concurrent manager, they often waited more than a minute. Our investigation showed that they hadn't run the "Purge Concurrent Requests and/or Manager Data" concurrent program in the three years they had been running the applications. Performance improved dramatically once they started running this program. The "Purge Concurrent Requests and/or Manager Data" concurrent program deletes information about completed concurrent requests from a set of tables. It can also delete the log and output files for those concurrent requests from your UNIX file system. Most companies run this report and delete all records older than 7-31 days.
If you choose Entity ALL, the purge program deletes rows for completed requests from the
FND_CONCURRENT_REQUESTS,
FND_RUN_REQUESTS,
FND_CONC_REQUEST_ARGUMENTS,
FND_CONC_STAT_LIST,
FND_CONCURRENT_PROCESSES,
FND_CONC_STAT_SUMMARY,
FND_CONC_PP_ACTIONS,
FND_RUN_REQ_PP_ACTIONS
and FND_DUAL tables.
It also deletes your older log and out files. Using this program rather than deleting log and out files with UNIX commands works better, because the purge checks against the FND tables and only deletes files for requests with a status of Completed.
In the example, we've chosen ALL for Entity. Valid choices are ALL, MANAGER and REQUEST. If you choose MANAGER, the log information stored in APPLLOG will be deleted, but the output files stored in APPLOUT will not. If you choose REQUEST, the output files will be deleted but the log files will not, and the associated rows in the FND_ENV_CONTEXT table will not be deleted. Rows are written to FND_ENV_CONTEXT whenever a new concurrent manager process or service starts, so this table can get very large if you don't select ALL for Entity. For Mode, we chose Age so we could delete files older than the number of days specified in Mode Value.
You could also choose Count for Mode. For example, after cloning an instance you might use a Mode of Count and a Mode Value of 0 to delete all the old completed concurrent request data carried over from the source system. There are two schools of thought on the best way to clean out the concurrent manager tables after cloning an instance. Some folks recommend simply truncating the FND_CONCURRENT_REQUESTS table on the target instance after cloning and before starting the concurrent manager. We prefer to run the "Purge Concurrent Requests" concurrent program rather than truncate the table, because truncating removes all of the scheduled programs along with all the completed ones. We put all scheduled requests on hold before cloning to preserve them, and then let users either cancel the scheduled requests or take them off hold after cloning, on the theory that this is easier for users than resubmitting all of their requests.
An Applications System Administrator would schedule the "Purge Concurrent Request and/or Manager Data" program to run nightly, deleting all completed requests older than 7 days. Since you run nightly backups (you do, don't you?), deleting data and files shouldn't be a problem; you can always retrieve them from tape if necessary.
What if you delete too much data? It certainly can happen. You need to keep completed concurrent request information on your system long enough for users to see the output. Don't delete everything older than one day, then, or the user who schedules a request to run on Saturday won't get to see the results on Monday when they come in to work. Generally folks keep 7-31 days' worth of concurrent request output.
What if you want to hold onto historical information to assess performance? Say your manager wants to know what performance is like during the month-end close, or you need to tune the different concurrent managers and want to know how they are performing. You have two options: hold onto the data for a longer period, which risks performance issues for everyone who submits or queries requests, or save the historical information elsewhere. You could, for example, create a delete trigger on the fnd_concurrent_requests table that copies data into a historical table whenever a delete occurs. Whenever the purge runs, the data would then land in another table, say fnd_concurrent_requests_history, before being deleted from fnd_concurrent_requests. You could then run programs that gather performance information against a view that joins fnd_concurrent_requests with fnd_concurrent_requests_history.
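A minimal sketch of such a history trigger follows. The history table, its column subset, and the trigger name (xx_ prefix) are all hypothetical choices for illustration; in a standard install FND_CONCURRENT_REQUESTS is owned by APPLSYS, so the trigger is created against that base table, not the APPS synonym:

```sql
-- Hypothetical history table; keep whichever columns you report on.
CREATE TABLE fnd_concurrent_requests_history AS
  SELECT request_id, phase_code, status_code,
         requested_start_date, actual_start_date, actual_completion_date,
         concurrent_program_id, requested_by
  FROM   applsys.fnd_concurrent_requests
  WHERE  1 = 2;

-- Copy each row into the history table just before the purge deletes it.
CREATE OR REPLACE TRIGGER xx_fnd_conc_req_hist
  BEFORE DELETE ON applsys.fnd_concurrent_requests
  FOR EACH ROW
BEGIN
  INSERT INTO fnd_concurrent_requests_history
    (request_id, phase_code, status_code,
     requested_start_date, actual_start_date, actual_completion_date,
     concurrent_program_id, requested_by)
  VALUES
    (:OLD.request_id, :OLD.phase_code, :OLD.status_code,
     :OLD.requested_start_date, :OLD.actual_start_date, :OLD.actual_completion_date,
     :OLD.concurrent_program_id, :OLD.requested_by);
END;
/
```

Note that a trigger on this heavily used table adds a small cost to every delete, so keep the history table lean and purge it on its own schedule.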
2) Purge Obsolete Workflow Runtime Data
Similar to the FND tables that are purged with the "Purge Concurrent Request and/or Manager Data" concurrent program, the workflow tables also need to be purged, depending on the complexity of the workflows and how often they are used. Oracle provides the WF_PURGE package to assist with this effort, and a concurrent program called "Purge Obsolete Workflow Runtime Data". MetaLink Note 132254.1, "Speeding up and Purging Workflow v2.5 & 2.6", gives more details on how to improve workflow performance.
Workflow records data about each step of a running workflow in the
WF_ITEMS,
WF_ITEM_ACTIVITY_STATUSES,
WF_ITEM_ACTIVITY_STATUSES_H,
WF_ACTIVITY_ATTR_VALUES,
WF_NOTIFICATIONS, and
WF_NOTIFICATION_ATTRIBUTES
tables. These tables grow very rapidly and, if not managed, can cause performance degradation. Running the "Purge Obsolete Workflow Runtime Data" concurrent program is very similar to purging concurrent manager history with the "Purge Concurrent Requests and/or Manager Data" concurrent program. When you first set up workflow, you can choose scheduling timings and a history retention period matching those of your concurrent programs. Even if you are running the "Purge Obsolete Workflow Runtime Data" concurrent program, you should continue to monitor the size of your tables, watch for performance issues, and adjust your parameters accordingly.
The parameters for the “Purge Obsolete Workflow Runtime Data” concurrent program are:
a) Item Type - You can specify Item Type when you are trying to get rid of a specific workflow. For example, the FA account generator is seeded to be permanent; if you are debugging it, you would run the purge specifying the item type, Age = 0, and Persistence Type = Permanent to get rid of all the debug history.
b) Item Key - We haven't really found a reason to use the Item Key, which removes a specific instance of a specific workflow. Since this program only works on completed workflows, there isn't much call for removing data for a specific instance. For a "generic" purge, Item Type and Item Key should be left blank.
c) Age - The Age field is how long AFTER the workflow finishes you want to keep the data before purging it. That depends on what a company does with the workflow data: are they gathering statistics on the number of workflows finished? Are they moving the approval history to other tables where Oracle didn't keep one? We believe the history ought to be purged often, at least for data older than 30 days; many companies purge data older than a week. If your company gathers statistics or moves approval history, it will do so with custom programs, which can be run prior to the purge by including them and the purge in a request set.
d) Persistence Type - Persistence type is assigned to a workflow when you define it (look at the properties page of the Item Type in the Workflow Builder tool). The choices are Temporary or Permanent. Most of the Oracle-seeded workflows have a persistence type of Temporary.
Just as you don't keep concurrent manager data forever, you don't want to keep workflow data forever either. It makes administration burdensome to have to sort through all that history. The System Administrator should set up the “Purge Obsolete Workflow Runtime Data” concurrent program for items with a Persistence Type of Temporary, and should schedule this program to run daily, choosing an appropriate Age for data retention, and leaving Item Type and Item Key blank.
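The same purge can also be driven directly from SQL*Plus with the WF_PURGE package mentioned above. A sketch that purges runtime data for all completed items that ended more than 30 days ago (the parameter names follow the documented WF_PURGE.TOTAL signature; the 30-day window is an illustrative retention choice):

```sql
-- Purge runtime data for all completed items older than 30 days.
-- NULL itemtype/itemkey means "all item types, all item keys".
BEGIN
  wf_purge.total(itemtype => NULL,
                 itemkey  => NULL,
                 enddate  => SYSDATE - 30);
END;
/
```

The concurrent program is still preferable for routine use, since it can be scheduled and logged like any other request.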
If you've been running the applications for some time and did not know you were supposed to run the "Purge Obsolete Workflow Runtime Data" concurrent program, your database administrator may need to resize the affected tables once a significant amount of obsolete data has been removed. Your database administrator may conclude that exporting and importing the workflow tables is necessary to release the empty space that the purge frees up. This is not a difficult task, but it does require that the database be made unavailable to users temporarily while your DBA runs the export/import processes.
Each company must decide how long to keep the history of completed workflows. While for some workflows (such as Journal Batch Approval) the workflow runtime tables provide the only history of the approvals, these tables are not the appropriate place to store approval history. If you wish to save the approval history, the approval records can be extracted to custom tables prior to purging the workflow history; to extract this history, you will need to know the internal name of the activity that asks for approval. If you only need to keep approval history for a short time, just adjust the Age parameter.
3) Purge Signon Audit data
The "Purge Signon Audit data" concurrent program deletes audit information from the
FND_LOGINS
FND_UNSUCCESSFUL_LOGINS
FND_LOGIN_RESPONSIBILITIES
FND_LOGIN_RESP_FORMS
tables. You should schedule this program to run daily. You should pick a date for the Audit Date based on your company’s security data retention requirements. Most companies pick a date a week or two prior to the current date.
Check the Increment Date Parameters box so that the Audit date will advance a day each time the program runs.
What happens if you don't run the purge? We had one customer who ran the applications for several years without running the "Purge Signon Audit data" program. After migrating from RDBMS Version 8.0.6 to Version 8.1.7.4, they found that the System Administrator's Security | User | Monitor screen hung trying to pull up records. Testing showed the problem was caused by the FND_LOGINS table having more than 1.5 million rows in it. Once we ran the "Purge Signon Audit data" program, performance returned to normal.
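A quick way to gauge whether this purge is overdue is to check the row counts of the four audit tables. A sketch (run as a user with select access to the APPS objects; table names are the ones listed above):

```sql
SELECT 'FND_LOGINS' tab, COUNT(*) cnt FROM apps.fnd_logins
UNION ALL
SELECT 'FND_UNSUCCESSFUL_LOGINS', COUNT(*) FROM apps.fnd_unsuccessful_logins
UNION ALL
SELECT 'FND_LOGIN_RESPONSIBILITIES', COUNT(*) FROM apps.fnd_login_responsibilities
UNION ALL
SELECT 'FND_LOGIN_RESP_FORMS', COUNT(*) FROM apps.fnd_login_resp_forms;
```

Counts in the hundreds of thousands or more suggest the purge has not been running.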
4) Delete data from temporary tables
This program deletes data from the
icx_sessions,
icx_transactions,
icx_text,
icx_context_results_temp,
icx_failures, and
icx_requisitioner_info tables.
The program hard-codes the deletes to remove all data older than the current time minus 4 hours, though comments in the code note that you can change this window by modifying it. Of course, that would be a customization, subject to being overwritten in the future, so unless you're very uncomfortable with the 4-hour window, we recommend leaving the code, located in $ICX_TOP/sql/ICXDLTMP.sql, alone. These tables get very large over time; the icx_sessions table has an entry for every login to the self-service web applications. You should schedule this program to run daily.
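To see how much stale session data has accumulated, a sketch of a count of old rows in ICX_SESSIONS (the 4-hour window mirrors the program's hard-coded cutoff; the ICX schema owner is assumed to be the standard one):

```sql
-- Sessions whose last activity predates the program's 4-hour cutoff.
SELECT COUNT(*)
FROM   icx.icx_sessions
WHERE  last_connect < SYSDATE - 4/24;
```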
5) Delete temporary data of PO revisions in Self Service Applications
The "Delete temporary data of PO revisions in Self Service Applications" concurrent program deletes records from the temporary table ICX_PO_REVISIONS_TEMP, where all the records for differences are stored. When you run this concurrent request, it asks for a date and purges all records older than that date. We recommend scheduling this report to run nightly, with the date set back one or two weeks. When you schedule it, set it up to run daily and check "Increment date parameters each run" so the date advances each time the program runs. You should work with your Purchasing functional users to ensure that they concur on how much data to save.
While both the "Delete data from temporary tables" and "Delete temporary data of PO revisions in Self Service Applications" concurrent programs can be run from the Self Service Web Applications Manager responsibility, it might be easier to run all administrative reports such as these from the System Administrator responsibility. To add these reports to the System Administrator's report group, choose Security | Responsibility | Request and query the group "System Administrator Reports".
Conclusion
If you haven’t been running these five concurrent programs in your environment, there’s a good chance you’ll look like a hero once you discover them. You should schedule these concurrent programs, understand what happens if they don’t run, and document their significance so that they don’t slip through the cracks if you move on to another position. Also, if the affected tables have grown very large before you began purging from them, consider exporting/importing those tables to bring their high water mark back down.
Q: What happens if you don't give a cache size while defining a concurrent manager?
Most often, when a request goes "Inactive/No Manager" and is then processed a short time later, the solution is either to increase the cache size for your Standard manager or to increase the number of Standard Manager processes that can run. Cache Size is set in the Concurrent | Manager | Define form. Basically, it regulates how many requests a manager picks up for each sleep cycle.
Increasing the cache size, within reason, allows you to increase the sleep time: your managers will wake up and scan the request tables less frequently, which reduces the amount of work they do as well as the system resources they use. We see sleep cycles set to 5 seconds at some customer sites and recommend increasing the value, since the managers are waking up and re-reading the same table far too frequently. For reports that mostly take at least a minute to run, the queue wakes up 12 times to check for runnable processes before even one running request completes.
Sleep Time - The sleep time parameter is the number of seconds a manager waits between checks for requests that are waiting to run. The default is 60, but you can lower this number if you see many requests waiting (Pending/Normal). However, reducing it to a very low value may cause excessive CPU utilization.
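To review the current settings across all managers, a sketch querying FND_CONCURRENT_QUEUES (run as APPS; the column names are the standard ones on this table):

```sql
SELECT concurrent_queue_name,
       max_processes,
       sleep_seconds,
       cache_size
FROM   apps.fnd_concurrent_queues
ORDER  BY concurrent_queue_name;
```

Managers with a low sleep_seconds and a small or NULL cache_size are the usual tuning candidates.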
Q: What scripts can be used to compile the apps schema, and which one does adadmin's Compile APPS Schema option use?
adadmin in turn calls the procedure UTL_RECOMP.RECOMP_PARALLEL, which may issue the following commands depending on the object type:
alter package <package_name> compile;
alter package <package_name> compile body;
alter view <view_name> compile;
You can also use the script $AD_TOP/sql/adcompsc.pls.
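To check what remains invalid after a compile run, a standard sketch against the data dictionary:

```sql
SELECT owner, object_type, object_name
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER  BY owner, object_type, object_name;
```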
How do you find out the version without logging into EBS or the database? Check the context file:
<config_option type="techstack" oa_var="s_techstack">as1013</config_option>
<config_option type="techstack" oa_var="s_tnsmode">generateTNS</config_option>
<config_option type="adx" oa_var="s_apps_version">12.1.3</config_option>
<config_option type="techstack" oa_var="s_apache_mode">NORMAL</config_option>
<ias_version oa_var="s_ias_version">10.1.3.5.0</ias_version>
...
<!-- Forms Servlet and Forms Server -->
<forms_version oa_var="s_forms_version">10.1.2.3.0</forms_version>
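The same values can be pulled out of the context file with grep. A self-contained sketch against a sample file with the same structure (the oa_var names are the standard ones; the /tmp path is illustrative, the real file being $APPL_TOP/admin/<SID>_<hostname>.xml):

```shell
# Build a tiny sample with the same structure as a context file,
# then grep out the version variables.
cat > /tmp/ctx_sample.xml <<'EOF'
<config_option type="adx" oa_var="s_apps_version">12.1.3</config_option>
<ias_version oa_var="s_ias_version">10.1.3.5.0</ias_version>
<forms_version oa_var="s_forms_version">10.1.2.3.0</forms_version>
EOF
grep -E 's_apps_version|s_ias_version|s_forms_version' /tmp/ctx_sample.xml
```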
Issues faced
1. You have encountered an unexpected error. Please contact the System Administrator for assistance (418130.1)
Cause
Un-synchronized data in APPLSYS.WF_LOCAL_USER_ROLES:HZ_PARTY
Solution
To implement the solution, please execute the following steps:
Run the concurrent manager job "Synchronize Workflow Local Tables" as SYSADMIN.
As you may be unable to login to applications to run the program you may also try this approach to run the request manually:
--> If you are NOT running the Oracle Applications Tablespace Model, run:
SQL> begin
       apps.wf_local_synch.BULKSYNCHRONIZATION(
         P_ORIG_SYSTEM        => 'ALL',
         P_PARALLEL_PROCESSES => 2,
         P_LOGGING            => 'LOGGING',
         P_RAISEERRORS        => TRUE,
         P_TEMPTABLESPACE     => 'APPLSYSX');
     end;
     /
--> If you ARE running the Oracle Applications Tablespace Model, run:
SQL> begin
       apps.wf_local_synch.BULKSYNCHRONIZATION(
         P_ORIG_SYSTEM        => 'ALL',
         P_PARALLEL_PROCESSES => 2,
         P_LOGGING            => 'LOGGING',
         P_RAISEERRORS        => TRUE,
         P_TEMPTABLESPACE     => 'APPS_TS_TX_DATA');
     end;
     /
2. Node ID does not exist for the current application server ID errors
Courtesy: Note 359998.1
Symptoms
On entering Oracle Applications on the web, the following error appears:
Node ID does not exist for the current application server id. The URL on which the above error occurs is as follows:
http://myhost.domain:port/dev60cgi/f60cgi
This error also occurs at the time of starting / stopping Apache.
adapcctl.sh start
adapcctl.sh stop
Changes
No Changes
Cause
There is a mismatch between the server_id in the FND_NODES table and the APPL_SERVER_ID in the dbc file, hence the "node id does not exist" error.
Unpublished Bug 4051279 describes this error in detail.
Solution
Do a select on the server_id in the fnd_nodes table:
Here is an example of what it could look like:
server_id=EA99227ED75CFE4EE030149077C4515496138833635529486962342698410274
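A sketch of the lookup itself (run as APPS; NODE_NAME and SERVER_ID are standard columns on this table):

```sql
SELECT node_name, server_id
FROM   apps.fnd_nodes;
```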
Do a search on APPL_SERVER_ID in the .dbc file:
It could show something like:
APPL_SERVER_ID=EA99220924EF823FE030149077C450C096138833635529486962342698410274
Manually comment out the APPL_SERVER_ID line in the .dbc file.
Then re-run adgendbc.sh; it will seed the node and generate a good server ID.
If the server_id is missing from the FND_NODES table (i.e., the above SQL returns nothing), follow Note 260887.1, "Steps to Clean Nonexistent Nodes or IP Addresses from FND_NODES", to get FND_NODES repopulated with the correct server ID.
3. Login Fails Intermittently With: Application: Fnd, Message Name: oracle.apps.fnd.framework.OAException: Application: FND, Message Name: FND_NO_DATABASE_CONNECTION [ID 1298103.1]
Issue may manifest when performing various actions including:
1) logging in to the applications homepage
2) editing favorites on the homepage
3) selecting a responsibility from the homepage
4) navigating back to the homepage
During your actions you may receive an error as follows:
oracle.apps.fnd.framework.OAException: Application: FND, Message Name: FND_NO_DATABASE_CONNECTION
The key here is the FND_NO_DATABASE_CONNECTION error.
Changes
You may have upgraded to Release 12.1.2 or 12.1.3.
Cause
This issue has been identified as Bug 9908921 (FND, MESSAGE NAME: FND_NO_DATABASE_CONNECTION) and is associated with the new homepage functionality shipped with 12.1.2.
Solution
12.1.2 and 12.1.1 Customers
Apply and test patch 9908921
Be sure to stop and start application tier services after the patch is applied
12.1.3 Customers
Apply and test patch 11832737
Be sure to stop and start application tier services after the patch is applied
4. AOLJTest to Verify DBC File Settings Test [ID 304780.1]
Run AOLJTest using the link:
http://<hostname:port>/OA_HTML/jsp/fnd/aoljtest.jsp
5. Find a Forms session's details
SELECT SUBSTR(d.user_name, 1, 30) "User Name"
     , a.pid
     , b.sid
     , b.serial#
     , b.machine, b.terminal, b.program, b.type
     , b.sql_hash_value, b.module, b.action
     , b.wait_class, b.state, a.background
FROM   v$process a, v$session b, fnd_logins c, fnd_user d
WHERE  a.pid = c.pid
--AND  c.pid = &PID
AND    d.user_name = UPPER('&USER_NAME')
--AND  TO_DATE(c.start_time) = TO_DATE('&START_DATE')
AND    d.user_id = c.user_id
AND    a.addr = b.paddr
AND    c.end_time IS NULL
/
6. How to check, which EBS Techstack Patchsets have been applied on 11i or R12 ?
Note ID: 390864.1
7. How To Audit An Oracle Applications' User?
ID 395849.1
8. Gather Concurrent Manager Related Data
ID 847839.1
9. Enabling direct login to Forms via f60cgi in 11.5.10 and above
It is possible to log in this way; however, this method should only be used when debugging problems.
1. Backup and open $APPL_TOP/admin/<SID>_<hostname>.xml context file
2. Update the context variable:
s_appserverid_authentication
By default in 11.5.10, this is set to SECURE.
In previous 11i versions, this was set to OFF.
For debug purposes, you can use ON or OFF.
Modes:
- ON : activates partial server security
- SECURE : activates full server security (SECURE mode)
- OFF : deactivates server security
3. Run Autoconfig to instantiate the change.
You should now be able to access forms directly again using the f60cgi call.
4. After you have finished your Forms debugging, please reset s_appserverid_authentication to SECURE and re-run Autoconfig.
10. Required Support Diagnostics for Hanging Databases
ID 452358.1