Some Important Tables for the Concurrent Manager
FND_NODES
FND_CONCURRENT_PROCESSES
FND_CONCURRENT_REQUESTS
FND_CONCURRENT_QUEUES
FND_CONCURRENT_PROGRAMS
FND_EXECUTABLES
FND_CP_SERVICES
FND_CONCURRENT_QUEUE_SIZE
FND_CONCURRENT_QUEUE_CONTENT
FND_CONCURRENT_PROGRAM_SERIAL
FND_CONCURRENT_TIME_PERIODS
FND_CONCURRENT_PROCESSORS
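Most day-to-day monitoring boils down to querying these tables. As a minimal sketch (column names follow the standard FND definitions; verify them against your release), this shows each manager queue's target versus actual process count:

```sql
-- Sketch: current state of each manager queue -- target vs. actual running
-- processes. Column names per the standard FND definitions; verify in your
-- release.
SELECT concurrent_queue_name,
       max_processes     target_processes,
       running_processes actual_processes
  FROM fnd_concurrent_queues
 ORDER BY concurrent_queue_name;
```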
The Concurrent Managers
One of the most attractive features of the Oracle Applications software is the concurrent manager. Basically, this is a batch job scheduling system for running reports and background tasks. From the concurrent managers you can manage queues, work shifts, access and security, job priorities, job log and output, job automation, and assorted job compatibility (or incompatibility) rules. This feature is one of the key areas that can consume much of the Oracle Fin-Apps DBA/SYSADMIN's time. To find more complete instructions on how to set up and use the concurrent managers and the jobs that they run, refer to the AOL Reference Manual.
Basic Tuning of the Concurrent Manager
We go back to the age-old concepts of computer tuning and load balancing for OLTP versus batch processing. OLTP (on-line transaction processing, or "real-time" computing) is where you have end-users doing their work on the screen needing quick, real-time results -- especially if they are servicing clients in person or on the phone. These requests need to be completed as soon as possible so as not to disrupt the business and revenue flow! An example of these transactions may be your Order Entry people in customer service.
Note: Just because an on-line transaction submits a job to the concurrent manager (or the "background") does not necessarily mean it qualifies as a "batch-processing" job.
On the other hand, batch-type jobs can afford to be completed later than when they are initially entered. They usually can be grouped together (batched) and processed outside of normal business hours. Examples of these types of jobs are financial reports, summary reports, end-of-day processing, etc. Some jobs assist the on-line transaction processing but can still be batched (like a sales forecast or open ticket report), as long as they complete prior to the day's activities rather than after. You may be in a 7x24 shop where OLTP is always a priority, in which case balancing your OLTP versus batch jobs may be a little more complicated. Still, your objective is to reduce the impact of the non-critical, resource-hungry jobs against the OLTP transactions. The batch jobs will just have to work when OLTP demands drop. You do this by managing queues, workshifts, priorities, incompatibility rules, and . . . end-user training or awareness. This end-user awareness and training is perhaps one of the most neglected areas, yet it is so important. Determining which jobs can truly be classified as OLTP (real-time critical) versus batch is going to require interviews with your end-users and/or business systems analysts. One of the most common problems I have observed is that sites pretty much leave the standard and default queues created by the installation process. Then the jobs go into the queue and operate on a first-come, first-served basis. This will not give you the results you need.
Tips and Techniques for Concurrent Manager Management
The right answers will depend upon the results of your interviews and trial and error, but here are some basic ideas that some sites use. Create queues based upon the duration of a job, such as FAST versus SLOW. The FAST queue usually handles jobs that complete within a minute, and its concurrency (the number of jobs that can run concurrently in the same queue) and priority are high, while the opposite criteria hold for the SLOW queue. Another technique is to set up OLTP versus BATCH queues, where the workshift for OLTP covers prime-time business hours and BATCH covers non-business hours. Setting up queues by workshift, functionality, and department are more examples, but certainly not all of your options. I tend to favor a combination of OLTP versus BATCH functionality. By combining queues and their workshifts, concurrency, and incompatibility rules, you should strive to get the maximum throughput possible for OLTP and convince users that batch jobs which are needed for next-day activities should be moved to off-hours processing and set with lower priorities.
Starting and Stopping the Concurrent Managers
While you can start the concurrent managers from within the applications, I dislike a couple of the defaults. 1) The default pmon time is 60 seconds. My clients usually need this to be shorter, like 30, 20, or 10 seconds. 2) I do not like the default name of std.mgr for the internal manager's log; I prefer that it carry the name of the instance. You can overcome these defaults by scripting the start and shut commands with different parameters. Besides, it is very useful to start or shut down the concurrent managers from the command line -- especially in the .rc Unix scripts.
Example script for starting the managers:
#strmgr.sh
date
echo "Executing strmgr.sh script ..."
echo "Starting Concurrent Managers ..."
startmgr sysmgr="apps/fnd" mgrname=prd sleep=20
#exit
Actually, I would advise you to use symbolic parameters for the APPS password instead of hard coding it. The "sleep" parameter tells the internal manager to search fnd_concurrent_requests every 20 seconds for new requests, rather than the 60-second default. The internal log file will be called prd.mgr (typically found in $FND_TOP/log). There are other parameters available, too, such as the debug option. Consult your manual for more details.
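One way to avoid the hard-coded password is to read the APPS connect string from a protected file at run time. The sketch below shows the idea; the function name, file path, and mgrname value are assumptions for illustration, not part of the standard tooling.

```shell
# Sketch only: build the startmgr command with the APPS connect string read
# from a protected file instead of hard-coded in the script. The function
# name, file path, and mgrname value are illustrative assumptions.
build_startmgr_cmd() {
  pwd_file=$1
  # Fail if the password file cannot be read
  apps_connect=$(cat "$pwd_file") || return 1
  echo "startmgr sysmgr=$apps_connect mgrname=prd sleep=20"
}
```

In strmgr.sh you might then run something like `cmd=$(build_startmgr_cmd /etc/apps.pwd) && $cmd`, keeping the credentials out of the script itself and out of `ps` output until startmgr is actually invoked.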
Example script for stopping the managers:
#stopmgr.sh
date
echo 'Stopping Concurrent Managers ...'
#The following is one command line
$FND_TOP/bin/CONCSUB apps/fnd SYSADMIN 'System Administrator' SYSADMIN WAIT=Y CONCURRENT
FND DEACTIVATE
#End of command line
ps -ef | grep LIBR
date
echo 'Concurrent Managers Stopped'
exit 0
Notice that stopmgr.sh does not run a command-line executable to directly stop the managers. Instead, it submits a concurrent job via the CONCSUB utility. The WAIT=Y parameter tells CONCSUB not to return until all the managers have shut down, so the script does not proceed and exit prematurely.
Debugging Concurrent Manager Errors
Look for errors in the logs. The internal manager's log file will usually be in $FND_TOP/log (see previous
discussion on defining log and out directories) defaulting to std.mgr or named as you specified in the command line
parameter, mgrname=<name>. The internal manager monitors the other queue managers. You will see the startup,
shutdown, print requests, and other information in this log. You may also find errors as to why the internal or
subsequent slave managers could not start.
All of the other managers have dedicated logs, too. They are prefixed with a "w" or "t" followed by an identity number, such as w139763.mgr. Each queue will have one of these log files. You can see individual jobs and associated request ids in each of these files. You can review error messages, too. Occasionally, a job will fail and take the manager down with it. The internal manager will sense that the queue is down and restart it on the next pmon cycle.
Suggestion: We will discuss purging of the fnd_concurrent_requests table and associated log and output files later, but purge these manager log files frequently (daily) so that you can easily perform a search on "error" when trying to debug concurrent manager errors.
Kick Starting Dead Managers
Sometimes you may encounter difficulty in starting either the internal concurrent manager or the other slave queues.
Consult the log files for error messages and take appropriate action to resolve the problem. If you are unsuccessful, enter the "verify" command in the concurrent manager screen to force the internal manager to read and initiate the target number of queues specified. If that doesn't work, try to deactivate or terminate the managers, then restart them. If you have trouble bringing them down, you may have to perform a "kill" on the background processes. You can identify the managers with the "ps -ef|grep LIBR" command. If you still encounter problems, make sure that there aren't any processes still tied to the managers. If you find any, kill them.
If you still encounter problems, then the statuses are probably improperly set in the tables. For example, you may see an error in the internal std.mgr log file stating that it was unable to start because it has already started! Yet you have verified that there are no FNDLIBR background processes. The problem is that the tables hold improper statuses, and you will have to clean them up. Here are some queries. I put them into scripts and keep them handy for when the need arises, because the statuses are not that easy to remember.
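Before resetting anything, it can help to see what the tables currently claim. A quick sanity-check sketch (column names follow the standard FND definitions, and the 'A' active status code is an assumption to verify in your release):

```sql
-- Queues that still claim to have running processes
SELECT concurrent_queue_name, running_processes, max_processes
  FROM fnd_concurrent_queues
 WHERE running_processes > 0;

-- Process rows still marked active ('A' assumed; verify in your release)
SELECT concurrent_process_id, concurrent_queue_id, process_status_code
  FROM fnd_concurrent_processes
 WHERE process_status_code = 'A';
```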
Reset the concurrent queues:
UPDATE fnd_concurrent_queues
SET running_processes=0, max_processes=0;
Remove any completed jobs: (optional)
DELETE FROM fnd_concurrent_requests
WHERE conc_process_status_code='C';
Set jobs with a status of Terminated to Completed with Error: (optional)
UPDATE fnd_concurrent_requests
SET status_code='E',phase_code='C'
WHERE status_code='T';
Delete any current processes:
DELETE FROM fnd_concurrent_processes;
I have listed these in descending order of how frequently I have had to use them. There is a paper available from Oracle Support which describes these and more.
Purging Concurrent Manager Logs and Output
The concurrent managers create several table entries and file output in the /log and /out directories. You should purge these frequently to reduce excessive table growth and fragmentation, and to avoid performance degradation of the concurrent manager processes. Purging also decreases the space used on your disks by old log and report files and relieves stress on the inodes from a large number of files. Under SYSADMIN, set up a recurring report called "Purge Concurrent Request and/or Manager Data". There are several parameters, but I typically try to set up two jobs.
1) One job for "Manager" data -- that's the concurrent manager log files typically found in $FND_TOP/log. I set the frequency to daily, and have it purge down to one day.
2) Another job for the "Request" data -- this is for all other modules outside of the SysAdmin responsibility, such as AR, PO, GL, etc. I typically try to keep only one week's worth of data out there on the system. Your needs and capacity may vary, so set accordingly.
This purge process does two things: 1) deletes rows from the fnd_concurrent_requests table, and 2) deletes both the log and output files from the associated $XX_TOP/log or /out directories. If for any reason the file delete did not complete but the table data was purged, then you will need to manually purge the output files from the /log and /out directories. This can happen if the privileges were incorrectly set, or you replicated a copy of the production database to your development environment, or the file system was not mounted, etc.
Purge Signon Audit Data
This is another purge report, like the one above, except it purges the signon audit data, which records every login to the Oracle Applications. Set the frequency and retention equal to those of your request data purge.
Performance Tuning of Concurrent Manager Jobs
What has been described thus far is balancing job throughput. Yet the jobs themselves may be in need of SQL tuning, or there may be problems to resolve in the database. We won't go into detail on SQL tuning -- that is a typical skill set that should be handled by the IT staff. What I want to discuss here are ways of identifying and classifying problems within the Oracle Applications.
FND Tables Can Speak Volumes
The concurrent manager is just a scheduling system that keeps track of jobs, parameters, scheduling information, and
completion statuses of every job submitted. By querying these tables, you can learn much about the patterns of your
site, including performance trends.
I strongly suggest that you become familiar with these tables and develop reports against them. Some of the most useful are the fnd_concurrent_% tables. Things to look for are which jobs are run, how many times they are executed, their completion statuses (especially "errors"), and their run times.
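As a sketch of the kind of report I mean, the following query counts runs, errors, and average run time per program over the last week. The column names follow the standard FND definitions, but verify them against your release before relying on the output:

```sql
-- Sketch: runs, errors, and average run time (minutes) per program,
-- last 7 days. Verify column names against your release.
SELECT p.user_concurrent_program_name,
       COUNT(*) runs,
       SUM(DECODE(r.status_code, 'E', 1, 0)) errors,
       ROUND(AVG((r.actual_completion_date - r.actual_start_date) * 1440), 1) avg_mins
  FROM fnd_concurrent_requests r,
       fnd_concurrent_programs_tl p
 WHERE r.concurrent_program_id = p.concurrent_program_id
   AND r.program_application_id = p.application_id
   AND r.actual_start_date > SYSDATE - 7
 GROUP BY p.user_concurrent_program_name
 ORDER BY runs DESC;
```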
Where Can I Get Help?
When it comes to looking for established help on tuning your concurrent manager jobs, there is an excellent reference that cannot be recommended enough: the white paper Managing the Concurrent Managers (or "How to Herd Cats") by Barbara Matthews. See the proceedings of the OAUG Fall 1997 convention. This presentation has been very useful to me, and I have modified several of its scripts to my clients' needs.
My favorites are daily errors, daily and weekly hogs, the min/max reports, and the job schedule report (note that
these are not the exact names that you'll find). Here are some ideas on how to use these reports.
The daily errors report shows me every job that completed with an error status. I review these from time to time to look for trends. The error could be caused by a bug (so then you open a TAR and look for an existing patch), but the problem is usually attributed to user error, such as bad parameter input. Don't let the error go on, though -- it could be an indication that the user needs some training or other help. (You'll know the user name because the report provides the request id number, which allows you to view all the details and the log of the job -- if you haven't purged it yet.)
The hog reports flag every job that exceeds some set time threshold (such as 20 minutes). It also sets a submission
time range, such as weekdays 6:00 AM to 6:00 PM. The idea here is that we are looking for jobs with very lengthy
completion times running during standard operating business hours (the prime OLTP window). If a job exceeds this
limit, then it is taking resources away from your OLTP users and should either be 1) tuned to reduce execution time,
or 2) moved to the "batch" processing window or queue during the off-hours.
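A rough sketch of such a hog query, using a 20-minute threshold and an approximate 6:00 AM to 6:00 PM submission window (both thresholds, and the column names, are assumptions to adjust and verify for your release):

```sql
-- Sketch: jobs submitted during prime hours that ran longer than 20 minutes.
-- Thresholds and column names should be verified for your release.
SELECT r.request_id,
       p.user_concurrent_program_name,
       ROUND((r.actual_completion_date - r.actual_start_date) * 1440) run_mins
  FROM fnd_concurrent_requests r,
       fnd_concurrent_programs_tl p
 WHERE r.concurrent_program_id = p.concurrent_program_id
   AND r.program_application_id = p.application_id
   AND (r.actual_completion_date - r.actual_start_date) * 1440 > 20
   AND TO_CHAR(r.request_date, 'HH24') BETWEEN '06' AND '17'
 ORDER BY run_mins DESC;
```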
Before you tune a "hog", I would suggest that you see if a performance patch has been issued for the program. Many times there is one, and this can save you the trouble of tuning it -- and spare you the dilemma of introducing a customized piece of code into your environment.
The min/max reports can be modified to sort the jobs in ascending or descending order based upon the execution time or number of times executed. This report takes some interpretative skill. For example, let's say that you identify the job that has the longest execution time... say 4 hours! At first glance, this looks like a SQL tuning candidate. A closer look, though, reveals that the minimum time it took to run the job was only 2 minutes -- and that the average time for 300 submissions in one day was only 5 minutes! Now what you have is some sort of exception.
You should cross-reference this job to the "hogs" report -- it should be there. Or, see if it was in the errors. By
finding the request id of this aberrant job you can review the details. You may find that the parameters specified a
much larger data set, or was incorrect, or many other things.
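A minimal min/max sketch along these lines (again, column names per the standard FND definitions; verify in your release):

```sql
-- Sketch: spread of run times per program, to spot aberrant submissions.
-- Verify column names against your release.
SELECT p.user_concurrent_program_name,
       COUNT(*) runs,
       ROUND(MIN((r.actual_completion_date - r.actual_start_date) * 1440), 1) min_mins,
       ROUND(AVG((r.actual_completion_date - r.actual_start_date) * 1440), 1) avg_mins,
       ROUND(MAX((r.actual_completion_date - r.actual_start_date) * 1440), 1) max_mins
  FROM fnd_concurrent_requests r,
       fnd_concurrent_programs_tl p
 WHERE r.concurrent_program_id = p.concurrent_program_id
   AND r.program_application_id = p.application_id
   AND r.actual_start_date IS NOT NULL
 GROUP BY p.user_concurrent_program_name
 ORDER BY max_mins DESC;
```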
If you finally determine that the job was correctly submitted and that the rest of the evidence points to an optimized SQL code set, then you have probably encountered a "non-compatible" job! In other words, the job is fine by itself, but may suffer drastically due to contention with other jobs run at the same time. With more detective work, you should strive to find which jobs it is incompatible with and rearrange queues, priorities, or compatibility rules to ensure that they will not run simultaneously.
The job schedule report shows all the scheduled jobs that keep resubmitting themselves automatically. There are a few things I look for here. One is the sheer volume of jobs that may be scheduled -- are they really needed? Often these jobs get scheduled, then forgotten, and are no longer useful. Or is it a batch-oriented job that runs during peak time and should be rescheduled to a more practical time slot? Or is the owner of the job still an employee? I have seen many "ghost" jobs that were once submitted by users who have left the company -- but their reports still run, regardless!
One last item about scheduled jobs: see if the jobs are overlapping themselves. When specifying the resubmission parameters, you can have a job start at a fixed time, reschedule at a time interval calculated from when the job starts, or reschedule at a time interval after the job completes. I often find jobs scheduled to resubmit some time after the first job starts, like every 15 minutes. Maybe the job used to complete in 5 minutes. Yet, as the database grows, the job may now be taking more than 15 minutes to complete. Hence, it submits the same job again when the first one hasn't even completed yet! This can cause contention, degrading the performance of both jobs, and the cycle repeats itself and degrades further and further. I would suggest that you schedule jobs to resubmit themselves on a time delay after the previous job completes!
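To spot such jobs, here is a hedged sketch listing pending resubmitting requests and their owners. The resubmit_* columns and the 'P' (Pending) phase code follow the standard FND definitions, and an interval type of 'S' is commonly documented as measuring from the start of the prior run versus 'C' from its completion -- verify all of this against your release:

```sql
-- Sketch: pending scheduled requests with resubmission intervals, to catch
-- "ghost" jobs and overlapping resubmissions. Verify columns/codes in your
-- release.
SELECT r.request_id,
       p.user_concurrent_program_name,
       u.user_name,
       r.resubmit_interval,
       r.resubmit_interval_unit_code,
       r.resubmit_interval_type_code
  FROM fnd_concurrent_requests r,
       fnd_concurrent_programs_tl p,
       fnd_user u
 WHERE r.concurrent_program_id = p.concurrent_program_id
   AND r.program_application_id = p.application_id
   AND r.requested_by = u.user_id
   AND r.phase_code = 'P'
   AND r.resubmit_interval IS NOT NULL;
```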
I Didn't Know Those Scripts Were There!
There are some other existing scripts which may be of benefit to you, but I must first put in a very strong disclaimer:
CAUTION: Do not blindly run these scripts without analyzing their purpose, impact, and possibly consulting with
Oracle Support! Test them in your development environment, first.
I must confess that I do not fully understand why all these files are here. I suspect that many are used in the installation/upgrade and operation of the applications. I have not found deliberate documentation of these scripts, other than what I can see in some of the script text. Yet I have used some of these scripts to great satisfaction -- or at least to learn about the usage of certain tables. These scripts are in $FND_TOP/sql. The ones of interest for the concurrent managers are afcm*.sql and afrq*.sql. They range from reports on the concurrent managers to scripts dealing with locks, gridlock, etc. You can find useful scripts in $AD_TOP/sql, too. Again, BE CAREFUL!
Things to Avoid Regarding the Concurrent Managers
These following tips seem to be common sense, but I am still amazed at how often I see these abuses and
misunderstandings, so I will mention them...
Use of the Reprint Option: Do not allow your users to run jobs multiple times in order to recreate the same output. They can view it offline or reprint a previously run job. There are also third-party tools that give more flexibility in viewing and formatting the outputs.
Use Query Enter to Find Your Jobs: If users cannot see their job on the immediate screen, they should scroll down or enter a query to further define the job that they are looking for. I have seen sites where users couldn't find the job they submitted on the first screen, so they would submit it again!
Whoa! on the Refresh Screen: It is very, very common to have your whole company just hitting that refresh key on the concurrent request screen in an effort to see their job go into the queue or check its completion status -- especially when performance is already suffering! But this only contributes to the problem! This is one of the most common queries possible, and remember that the internal manager already scans the same fnd_concurrent_requests table at every pmon interval (the concurrent manager pmon, not to be confused with the Oracle background PMON process) for the next set of pending jobs to be processed.
Discourage Multiple User Logins: Multiple logins by the same user to get more work done often add load to an already overtaxed system. Sometimes this is unavoidable because the user wears different "functional" hats and must view different screens/data within multiple responsibilities. Some users also find it annoying to log in and navigate to particular work screens, so they keep idle sessions active until they need them. Try to educate your users that they consume resources (memory, CPU, etc.) every time they do this. In the newer NCA versions, navigating to different screens and responsibility areas is made easier via shortcuts, which should help to eliminate this abuse.
Eliminate Redundancy of Similar Jobs: Users often submit the same job multiple times in the same time frame, distinguished only by minor changes to the parameters. These jobs hit the same tables over and over again and can even create locks and resource conflicts among themselves. Many times the overall throughput would be better if the jobs were single-threaded one after the other. This can be managed by user education, or by the SYSADMIN single-threading the queue or placing incompatibility rules that prevent the same program from running with itself.
Another variation of this problem is having different users running the same or similar jobs at the same time. It may be better for the SYSADMIN to schedule these jobs to resubmit themselves in the concurrent manager at predetermined intervals and take away the end-users' ability to submit the jobs themselves. This should reduce the frequency and burden on the system, yet still allow the jobs and processes to run in a timely manner for the users.
Concurrent Program
A program that implements a piece of business functionality and needs to be executed again and again, at regular intervals or as per business needs, is called a concurrent program. Concurrent programs can be implemented in PL/SQL, shell script, C/C++, etc. For example, in a production company, lots and lots of orders are booked daily. For timely delivery, we need to schedule each activity of product creation in such a way that we can produce just in time. The program that does such calculations is a concurrent program, as it needs to execute on a daily basis to process all the orders booked that day.
Concurrent Manager
Now, when a concurrent program is written, it needs to be executed daily at a particular time. If we do this manually, there might be delays, or two different people might run the same program at the same time, which could lead to problems. So we need a manager that can do all these tasks for us. The responsibility for executing concurrent programs is given to the Concurrent Manager, which ensures that each concurrent program runs successfully without any conflicts. The concurrent managers also ensure that the applications are not overwhelmed with requests, and they manage batch processing and report generation.
The default installation of Oracle Applications comes with a number of predefined concurrent managers; however, you can create your own custom concurrent managers to spread out the load of your job processing.
Apart from spreading the load of your jobs, the concurrent managers can also schedule jobs periodically. We can assign specific priorities and specific times to different programs, so that the concurrent managers run them in specific workshifts.
Concurrent managers also allow you to tweak the number of processes each manager can handle concurrently. If requests exceed this prescribed limit, they are automatically put into a Pending state. The processing of a request takes place based on the time of request submission and the priority of the request submitted.
There are many pre-configured concurrent managers, each governing flow within an Oracle Apps area. In addition, there are "super" concurrent managers whose job is to govern the behavior of the slave concurrent managers. The Oracle E-Business Suite has three important master concurrent managers:
Internal Concurrent Manager — The master manager is called the Internal Concurrent Manager (ICM) because it controls the behavior of all of the other managers, and because the ICM is the boss, it must be running before any other managers can be activated. The main functions of the ICM are to start up and shut down the individual concurrent managers, and to reset the other managers after one of them has a failure.
Standard Manager — Another important master Concurrent Manager is called the Standard Manager (SM). The SM functions to run any reports and batch jobs that have not been defined to run in any specific product manager. Examples of specific concurrent managers include the Inventory Manager, CRP Inquiry Manager, and the Receivables Tax Manager.
Conflict Resolution Manager — The Conflict Resolution Manager (CRM) functions to check concurrent program definitions for incompatibility rules. However, the ICM can be configured to take over the CRM's job to resolve incompatibilities.
Apart from these three concurrent managers, there is another type of concurrent manager known as the Transaction Manager. The transaction manager is responsible for taking load off the concurrent request table by pooling the requests submitted by users. The transaction manager takes care of these requests and sends them to the standard manager directly. In a RAC environment, the transaction manager needs to be activated on each node of the RAC environment.
From the front end you can view the status of your concurrent managers by logging in with the System Administrator responsibility and going to the Concurrent Manager Administer screen.
The status of the concurrent managers and the nodes on which they are configured can also be seen from Oracle Applications Manager.
Concurrent Manager Processes:
The concurrent managers are like other processes which run the Oracle Applications executable FNDLIBR. The FNDLIBR executable is located in $FND_TOP/bin. You can also grep for the FNDLIBR executable to check whether any concurrent manager processes are running:
$ ps -ef|grep FNDLIBR
The $FND_TOP/sql/afcmstat.sql script gives you a list of concurrent managers and their respective status.
Below is a list of most of the concurrent manager processes.
FNDLIBR manages following Managers
Marketing Data Mining Manager
Transportation Manager
Session History Cleanup
UWQ Worklist Items Release for Crashed session
Collections Manager
OAM Metrics Collection Manager
Contracts Core Concurrent Manager
Standard Manager
WMS Task Archiving Manager
Oracle Provisioning Manager
INVLIBR manages following Managers
Inventory Manager
MRCLIB manages following Managers
MRP Manager
PALIBR manages following Managers
PA Streamline Manager
FNDSM The Generic Service Management Framework Process
FNDSM is the executable and core component of GSM (the Generic Service Management framework). FNDSM services are started via the application listener on all nodes in the application tier in E-Business Suite.
Under Which Manager a Request Was Run
=======================================
SELECT
b.user_concurrent_queue_name
FROM
fnd_concurrent_processes a
,fnd_concurrent_queues_vl b
,fnd_concurrent_requests c
WHERE 1=1
AND a.concurrent_queue_id = b.concurrent_queue_id
AND a.concurrent_process_id = c.controlling_manager
AND c.request_id = &request_id
Concurrent Manager Scripts
Oracle supplies several useful scripts (located in the $FND_TOP/sql directory) for monitoring the concurrent managers: