PL/SQL interview questions and answers


Part 1: SQL and PL/SQL Features
Transparent Data Encryption and XQuery support are the two major new SQL-related features in Oracle Database 10g Release 2, but the list doesn't end there.
Covered in This Installment:
· Transparent Data Encryption
· XML Query
· Enhanced COMMIT
· Error-Logging Clause
· WRAP Package
· Conditional Compilation
· Unlimited DBMS Output
Transparent Data Encryption
Encryption is a topic that evokes a mixed reaction in many users: interest coupled with a sense of wariness arising from the perceived complexity of key management, which can render the setup ineffective if not done correctly. There is also performance overhead associated with encrypting and decrypting the values, which makes the process a little less palatable to most application architects. As a result, many systems are designed with no encryption at all but with strong perimeter protection instead, such as strong passwords and proper authorization schemes.
However, imagine a situation where the entire server is stolen—or even just the disks, so they can be mounted on a server of the same OS and then cleaned of data. Or, consider the case of a rogue DBA who penetrates perimeter protection in the daily course of business and then downloads sensitive customer information. In both cases, the businesses involved, if located in the state of California (and perhaps in other U.S. states shortly), would be legally obligated to notify all affected customers of a security breach.
In those rare (but certainly realistic) cases, the authentication scheme is moot. That's why transparent data encryption (TDE) is such a valuable feature for organizations that make security a top priority; it supports encryption while putting the complexity of key management in the hands of the database engine. At the same time, it lets DBAs manage database tables without actually having to see the data.
Using TDE in Oracle Database 10g Release 2, you can encrypt one or more columns of a table right out of the box; all you have to do is define the column as encrypted, without writing a single line of code. Remember, encryption requires a key and an algorithm to encrypt an input value. TDE generates a single key for a specific table, which keeps key management simple. A single table key, however, would be susceptible to theft, so there is a second key, called the master key, that is set at the database level. The table key is encrypted with the master key, so the master key is required to obtain the table key. Consequently, both the master key and the table key are required to decrypt a column. (For a more detailed discussion about encryption generally and the use of supplied packages in Oracle in particular, see my Oracle Magazine column "Encrypt Your Data Assets.")
The master key is stored outside the database in a location known as a "wallet"; by default, this is the directory $ORACLE_BASE/admin/$ORACLE_SID/wallet.
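Setting up the wallet involves pointing the database to its location. A minimal sketch of the relevant sqlnet.ora entry might look like the following (the directory shown is only an example; substitute your own wallet directory):

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/ORCL/wallet)
    )
  )

After an instance restart, the wallet must be opened before encrypted columns can be accessed, for example with: alter system set encryption wallet open authenticated by "topSecret";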

After TDE is configured—or more specifically the wallet and the master key are configured—you can use it to protect data values. To encrypt a column of a table, you would use the following SQL:
create table accounts
(
   acc_no       number       not null,
   first_name   varchar2(30) not null,
   last_name    varchar2(30) not null,
   SSN          varchar2(9)             ENCRYPT USING 'AES128',
   acc_type     varchar2(1)  not null,
   folio_id     number                  ENCRYPT USING 'AES128',
   sub_acc_type varchar2(30),
   acc_open_dt  date         not null,
   acc_mod_dt   date,
   acc_mgr_id   number
);
Here you have used TDE on the columns SSN and FOLIO_ID, which are now stored in encrypted form in the table itself. However, when a user selects from the table, she sees the data in clear text because the decryption is performed during retrieval. If the disks are stolen, the information contained in the table segments remains encrypted. The thief needs the table key to see the encrypted value, but to get that he needs the master key, which is stored externally and hence unavailable.
Note the clauses after the columns SSN and FOLIO_ID, which specify ENCRYPT using the 128-bit Advanced Encryption Standard.
To set the wallet password, use the command:
alter system set encryption key authenticated BY "topSecret";
This command creates the wallet, if not created already, and then sets the password to "topSecret" (case sensitive). Then you can start using encryption in column definitions during table creation and modification.
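Encryption can also be turned on or off for existing tables with ALTER TABLE. The statements below are a sketch (note that in this release an encrypted column must be defined with NO SALT if it is to be indexed):

-- add encryption to an existing column
alter table accounts modify (ssn encrypt using 'AES128');

-- remove encryption again
alter table accounts modify (ssn decrypt);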
Encrypting External Tables
In the above example, I encrypted columns of a regular (heap-organized) table. You can also use TDE on external tables. For instance, if you want to generate a dump file containing the data from ACCOUNTS for shipping to a different location, you can use the simple ENCRYPT clause.
create table account_ext
organization external
(
   type oracle_datapump
   default directory dump_dir
   location
   ('accounts_1_ext.dmp', 'accounts_2_ext.dmp',
    'accounts_3_ext.dmp', 'accounts_4_ext.dmp')
)
parallel 4
as
select acc_no,
       first_name,
       last_name,
       SSN           ENCRYPT IDENTIFIED BY "topSecret",
       acc_type,
       folio_id      ENCRYPT IDENTIFIED BY "topSecret",
       sub_acc_type,
       acc_open_dt,
       acc_mod_dt,
       acc_mgr_id
from accounts;
In the files accounts_*_ext.dmp, the values of SSN and FOLIO_ID will not be clear text, but encrypted. If you want to use these files as external tables, you have to supply the password as topSecret to read the files.
As you can see here, TDE is a highly desirable complement to (not a substitute for) access control.

Query XML in SQL
XML has long been a de facto standard data format for many applications involving large character content. More recently, it has also become a storage format for other kinds of applications, not limited to large content.
Oracle has provided XML integration with the database since Oracle9i Database. In that release, you could query XML content using many different methods. In Oracle Database 10g Release 2, new XQuery and XMLTable functions make it even easier to query XML contents. (Note: A thorough discussion of the XQuery specification is beyond the bounds of this article; for background, read the Oracle Magazine article "XQuery: A New Way to Search.")
First, let's examine the simpler of the two methods: XQuery. Here's an example:
SQL> xquery
  2     for $var1 in (1,2,3,4,5,6,7,8,9)
  3     let $var2 := $var1 + 1
  4     where $var2 < 6
  5     order by $var2 descending
  6    return $var2
  7 /

Result Sequence
-----------------------
5
4
3
2
The new SQL command xquery indicates an XQuery command. Note the command carefully: The new syntax simulates the FOR ... IN ... inline view introduced in Oracle9i Database.
The general structure of an XQuery is described by the acronym FLWOR (pronounced "flower"), which stands for FOR, LET, WHERE, ORDER BY, and RETURN. In the above example, line 2 defines the source of the data, which is a series of numbers from 1 to 9. The source could be anything: a set of scalar values or an element of an XML document, specified by the FOR clause. The line also specifies a variable (var1) to hold these values. In line 3, the LET clause assigns another variable, var2, the value of var1 plus 1.
Of all these values, we are interested in only those below 6, which is specified by the WHERE clause in line 4. Then we sort the result set on the var2 value in descending order with the ORDER BY clause in line 5. Finally, the values are returned to the user with the RETURN clause in line 6.
If you were to compare the syntax to regular SQL syntax, RETURN, FOR, WHERE, and ORDER BY would be analogous to SELECT, FROM, WHERE, and ORDER BY. The LET clause has no direct SQL analog; its effect is achieved through expressions in the other clauses.
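To see the analogy concretely, here is a rough plain-SQL equivalent of the XQuery example above (purely illustrative; this is not how XQuery is implemented):

select n + 1 as var2
from (select level as n from dual connect by level <= 9)
where n + 1 < 6
order by var2 desc;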
Let's look at a practical example of this powerful new tool in action. First, create a table to hold the communication details with an account holder.
create table acc_comm_log
(
   acc_no       number,
   comm_details xmltype
);
Now, insert some records into it.

insert into acc_comm_log
values
(
   1,
   xmltype(
      '<CommRecord>
         <CommType>EMAIL</CommType>
         <CommDate>3/11/2005</CommDate>
         <CommText>Dear Mr Smith</CommText>
      </CommRecord>')
);

insert into acc_comm_log
values
(
   2,
   xmltype(
      '<CommRecord>
         <CommType>PHONE</CommType>
         <CommDate>3/12/2005</CommDate>
         <CommText>Dear Mr Jackson</CommText>
      </CommRecord>')
);

insert into acc_comm_log
values
(
   3,
   xmltype(
      '<CommRecord>
         <CommType>PHONE</CommType>
         <CommDate>3/10/2005</CommDate>
         <CommText>Dear Ms Potter</CommText>
      </CommRecord>')
);
Now you can see what records are in the table:
SQL> l
  1 select acc_no,
  2       XMLQuery(
  3        'for $i in /CommRecord
  4         where $i/CommType != "EMAIL"
  5         order by $i/CommType
  6         return $i/CommDate'
  7       passing by value COMM_DETAILS
  8       returning content) XDetails
  9 from acc_comm_log
 10 /

    ACC_NO XDETAILS
---------- ------------------------------
         2 <CommDate>3/12/2005</CommDate>
         3 <CommDate>3/10/2005</CommDate>

The other function, XMLTable, has a similar purpose but returns the columns as if it were a regular SQL query. Here it is in action.
  1  select t.column_value
  2  from acc_comm_log a,
  3         xmltable (
  4            'for $root in $date
  5            where $root/CommRecord/CommType != "EMAIL"
  6            return $root/CommRecord/CommDate/text()'
  7               passing a.comm_details as "date"
  8*        ) t
SQL> /

COLUMN_VALUE
--------------------------------------------------
3/12/2005
3/10/2005

This example illustrates how you can use regular SQL statements against the rows returned by the XML query. The queries follow the same structured FLWOR pattern for specifying commands.
XQuery versus XMLTable
Now that you have seen the two ways you can use XML in a regular SQL query, let's see where you should use each one and under what circumstances.
The first method, XQuery, allows you to get the data as an XMLType, which can be manipulated as XML in any program or application that supports it. In the example you saw, the resultant output of account data is in XML format and you can use any tool, not necessarily relational, to manipulate and display that data. The second method, XMLTable, combines the functionality of regular SQL and XML. The resultant output of the account data is not XML, but relational.
Note that the source in both cases is XML, but XQuery presents the data in XML format using XMLType whereas XMLTable presents it as a relational table, which can be manipulated as a regular table. This functionality may work best for existing programs which expect a table, while bringing the power of XML into the mix.
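If you want named relational columns rather than a single COLUMN_VALUE, XMLTable also accepts a COLUMNS clause; a sketch against the ACC_COMM_LOG table might look like this:

select a.acc_no, x.comm_type, x.comm_date
from acc_comm_log a,
     xmltable ('/CommRecord'
               passing a.comm_details
               columns
                  comm_type varchar2(10) path 'CommType',
                  comm_date varchar2(10) path 'CommDate') x;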
XML is quite useful where the exact structure of the data is not well known in advance. In the example above, the communication records differ based on the mode of communication. If it's email, the attributes could be the recipient's email address, the return address, any copies (cc, bcc, and so on), the text of the message, and so on. If it's a phone call, the attributes are the phone number called, the type of number (home, work, cell, and so on), the person who answered, the voicemail left, and so on. If you were to design a table that holds all possible attributes, it would span many columns and become tedious to read. However, if you have just one XMLType column, you can hold everything there while still retaining the attributes unique to each communication type. The query can still use a simple SQL interface, making application development a breeze.
For more information about Oracle's implementation of XQuery, visit the Oracle XQuery page on OTN.

Enhanced COMMIT
When a session commits, the redo log buffer is flushed to the online redo logs on disk. This process ensures that transactions can be replayed from the redo logs if necessary when recovery is performed on the database.
Sometimes, however, you may want to trade off the guaranteed ability to recover for better performance. With Oracle Database 10g Release 2, you now have control over how the redo stream is written to the online log files. You can control this behavior while issuing the commit statement itself, or simply change the default behavior of the database.
Let's see how the commit statement works. After a transaction, when you issue COMMIT, you can add a clause:

commit write <option>;

where <option> is what influences the redo stream. The option WAIT is the default behavior. For instance, you can issue:

commit write wait;

This command has the same effect as COMMIT by itself. Control does not return to the user until the redo stream is written to the online redo log files. If you don't want it to wait, you could issue:

commit write nowait;

In this case, control returns to the session immediately, even before the redo stream is written to the online redo logs.

When a commit is issued, the Log Writer process writes the redo stream to the online redo logs. If you are making a series of transactions, such as in a batch-processing environment, you may not want it to flush the redo so frequently. Of course, the best course of action is to change the application to reduce the number of commits, but that may be easier said than done. In that case, you could simply issue the following commit statement:

commit write batch;

This command makes the commit write the redo stream to the log file in batches, instead of at each commit. You can use this technique to reduce log-buffer flushing in a frequent-commit environment. If you want the log buffer written immediately, you would issue:

commit write immediate;

If you want a specific commit behavior to be the default for the database, you could issue the following statement:

alter system set commit_write = nowait;

This command makes the behavior the default across the database. You can also set it at the session level:

alter session set commit_write = nowait;

As with any parameter, the system-level setting applies if set; a session-level setting takes precedence over it; and the clause given in the COMMIT statement itself, if any, takes precedence over both.
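For instance, a batch job that commits inside a loop (the staging table ACCOUNTS_STAGE here is hypothetical) could ask for the batched, non-waiting behavior like this:

begin
   for r in (select * from accounts_stage) loop
      insert into accounts values r;
      -- let the Log Writer group the redo writes instead of
      -- flushing the log buffer on every commit
      commit write batch nowait;
   end loop;
end;
/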
This option is not available for distributed transactions.

Catch the Error and Move On: Error Logging Clause
Suppose you are trying to insert the records of the table ACCOUNTS_NY to the table ACCOUNTS. The table ACCOUNTS has a primary key on ACC_NO column. It's possible that some rows in ACCOUNTS_NY may violate that primary key. Try using a conventional insert statement:
SQL> insert into accounts
  2  select * from accounts_ny;
insert into accounts
*
ERROR at line 1:
ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) violated
None of the records from the table ACCOUNTS_NY has been loaded. Now, try the same thing with error logging turned on. First, you need to create a table to hold the records rejected by the DML statement. Call that table ERR_ACCOUNTS; the supplied package DBMS_ERRLOG creates it for you:

exec dbms_errlog.create_error_log ('ACCOUNTS','ERR_ACCOUNTS')
Next, execute the earlier statement with the error-logging clause.
SQL> insert into accounts
  2  select * from accounts_ny
  3  log errors into err_accounts
  4  reject limit 200
  5  /

6 rows created.
Note that the table ACCOUNTS_NY contains 10 rows yet only six rows were inserted; the other four rows were rejected due to some error. To find out what it was, query the ERR_ACCOUNTS table.
SQL> select ora_err_number$, ora_err_mesg$, acc_no
  2  from err_accounts;

ORA_ERR_NUMBER$  ORA_ERR_MESG$                                       ACC_NO
---------------  --------------------------------------------------  ------
              1  ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) vi    9997
              1  ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) vi    9998
              1  ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) vi    9999
              1  ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) vi   10000
Note the column ORA_ERR_NUMBER$, which shows the Oracle error number encountered during the DML statement execution, and ORA_ERR_MESG$, which shows the error message. In this case you can see that four records were rejected because they violated the primary key constraint PK_ACCOUNTS. The error-logging table also captures all the columns of table ACCOUNTS, including ACC_NO. Looking at the rejected records, note that these account numbers already exist in the table; hence the records were rejected with the ORA-00001 error. Without the error-logging clause, the whole statement would have failed and no records would have been loaded. With the clause, only the invalid records were rejected; all the others were loaded successfully.
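The error-logging clause also accepts an optional tag expression, which is recorded in the ORA_ERR_TAG$ column of the error table and is handy for telling apart rejects from different runs. A sketch:

insert into accounts
select * from accounts_ny
log errors into err_accounts ('load of ' || to_char(sysdate, 'YYYY-MM-DD'))
reject limit unlimited;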

Protect the Code at Source: WRAP Package
PL/SQL program units often contain very sensitive and confidential information about company procedures and trade secrets, which makes them a protected entity group, similar to tables. To prevent unauthorized viewing of the source code, the programs are often obfuscated using the wrap command line utility.
You can invoke wrap only after the PL/SQL script is created; the utility creates a wrapped file from the input clear text. However, in some cases you may want to generate the wrapper dynamically inside PL/SQL code. In such a case, the wrap utility can't be invoked because no source file exists yet.
Thankfully, Oracle Database 10g Release 2 provides a supplied package that you can use to create the code in a wrapped format. This package complements, not replaces, the wrap utility. The latter is still appropriate in cases where you want to wrap a large number of source files quickly using a command line option.
For instance, imagine that you want to create the simple procedure p1 in wrapped format:

create or replace procedure p1 as
begin
   null;
end;

Inside a PL/SQL unit, you can create it dynamically, but in wrapped format, with:

begin
   dbms_ddl.create_wrapped
      ('create or replace procedure p1 as begin null; end;');
end;
/
Now you want to confirm the wrapped procedure. You can select the source text from the dictionary.
SQL> select text from user_source where name = 'P1';

TEXT
---------------------------------------------------------------
procedure p1 wrapped
...and so on...
The first line, procedure p1 wrapped, is confirmation that the procedure was created in wrapped manner. If you get the DDL of the procedure with the DBMS_METADATA.GET_DDL() function, you will still see the source as wrapped.
Sometimes you may have a slightly different requirement; you may decide to generate the PL/SQL code but not create the procedure, for example. In that case, you may save it in a file or table to be executed later. But because the above approach creates the procedure, it won't work here. Rather, you need to call another function in the package:
SQL> select dbms_ddl.wrap
  2      ('create or replace procedure p1 as begin null; end;')
  3  from dual
  4 /

create or replace procedure p1 wrapped
... and so on ...
The output of the WRAP function is the wrapped version of the PL/SQL code passed as a parameter. This output can be stored in a flat file or a table and executed later, which comes in handy when you generate code to be deployed elsewhere and the security of the code cannot be compromised in any way.
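For example, the wrapped output could be staged in a table for later deployment (the table and column names below are only illustrative):

create table deployed_code
(
   obj_name    varchar2(30),
   wrapped_src varchar2(4000)
);

insert into deployed_code
values ('P1',
        dbms_ddl.wrap('create or replace procedure p1 as begin null; end;'));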
The above approach works fine as long as you can pass the entire text of the stored code as a varchar2 datatype, which is limited to 32K. If the PL/SQL code exceeds 32K, you have to use a slightly different method: accept a collection variable as the input.
Here you can use a supplied datatype: varchar2s in the package DBMS_SQL. This is a collection datatype (TABLE OF VARCHAR2), with each element of the table holding a chunk of the source text (up to 256 bytes per element for dbms_sql.varchar2s; the similar dbms_sql.varchar2a type allows up to 32KB per element), and you can extend it to as many elements as necessary. Suppose, for example, that you have to wrap a very long procedure called myproc, which is defined as follows:
create or replace procedure myproc as
  l_key VARCHAR2(200);
begin
  l_key := 'ARUPNANDA';
end;
Of course, this is not a very long procedure at all; but for demonstration purposes assume it is. To create it as wrapped, you would execute the following PL/SQL block:
1  declare
2     l_input_code    dbms_sql.varchar2s;
3  begin
4     l_input_code (1) := 'Array to hold the MYPROC';
5     l_input_code (2) := 'create or replace procedure myproc as ';
6     l_input_code (3) := '  l_key VARCHAR2(200);';
7     l_input_code (4) := 'begin ';
8     l_input_code (5) := '  l_key := ''ARUPNANDA'';';
9     l_input_code (6) := 'end;';
10    l_input_code (7) := 'the end';
11    sys.dbms_ddl.create_wrapped (
12             ddl     => l_input_code,
13             lb      => 2,
14             ub      => 6
15     );
16* end;
Here we have defined a variable, l_input_code, to hold the input clear-text code. In lines 4 through 10, we populate the array with the lines of code we are going to wrap. In this example, for the sake of simplicity, I have used very short lines. In reality, you may use longer lines and as many elements as necessary to hold the entire code.
Lines 11 through 15 show how I called the procedure to create the procedure in wrapped form, passing the collection as the parameter DDL in line 12. But take a pause here: I assigned a comment to the first element of the array, perhaps for documentation, which is not valid syntax. Similarly, I assigned another comment to the last element (7) of the array, again not valid syntax for creating a procedure. To restrict the wrapping to only the valid lines, I specified the lowest (2) and highest (6) elements of the collection that store our code, in lines 13 and 14. The parameter LB is the lower bound of the array, which is 2 in our example, and UB is the upper bound (6).
As you can see, using this approach, you can now create any sized procedure in wrapped format from within your PL/SQL code.

Conditional Compilation in PL/SQL: Write Once, Execute Many
Many of you have worked with the C language, which supports compiler directives: depending on flags supplied at compile time, different portions of the code can be included in the compiled program.
In Oracle Database 10g Release 2, PL/SQL has a similar feature: you can now provide preprocessor directives that are evaluated during compilation, not at runtime. For example, let's create a very simple function that returns a string.
1  create or replace function myfunc
2  return varchar2
3  as
4  begin
5    $if $$ppval $then
6      return 'PPVAL was TRUE';
7    $else
8      return 'PPVAL was FALSE';
9    $end
10* end;
Note line 5, where you have used the pre-processor directives to evaluate the variable ppval. Because ppval is a pre-processor variable, not a normal PL/SQL one, you have specified it using the $$ notation. Also, to let the compiler know that it has to process the lines during compilation only, you have specified the evaluations with the special $ notation, e.g. $if instead of if. Now, compile this function with different values of the variable ppval.
SQL> alter session set plsql_ccflags = 'PPVAL:TRUE';

Session altered.
Now compile the function and execute it.
SQL> alter function myfunc compile;

Function altered.

SQL> select myfunc from dual;

MYFUNC
-------------------------------
PPVAL was TRUE

The value of ppval was set to TRUE during the compilation, and that is what the function returns. Now, change the value of the variable and re-execute the function.
SQL> alter session set plsql_ccflags = 'PPVAL:FALSE';

Session altered.

SQL> select myfunc from dual;

MYFUNC
-------------------------------
PPVAL was TRUE

Here, although the value of ppval is FALSE in the session, the function does not pick it up; rather, it uses the value that was set during compilation. Now, recompile the function and execute it again.
SQL> alter function myfunc compile;

Function altered.

SQL> select myfunc from dual;

MYFUNC
-------------------------------
PPVAL was FALSE

During this compilation, the value of ppval was FALSE, and that is what was returned.
So, how can you use this feature? There are several possibilities—for example, you can use it as a debug flag to display additional messages or you can write a program that compiles differently based on platform. Because the evaluation is done during compilation and not during runtime, runtime efficiency is greatly enhanced.
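For instance, to make a program compile differently based on the database version, you could branch on the constants in the supplied DBMS_DB_VERSION package. A sketch:

create or replace procedure version_aware as
begin
   $if dbms_db_version.ver_le_10_2 $then
      -- code path for release 10.2 and earlier
      null;
   $else
      -- code path that uses newer features
      null;
   $end
end;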
The above example works fine when the same preprocessor flag is referenced in all the functions to be compiled. But what if each program needs a different flag? For instance, the function calculate_interest may use the flag ACTIVE_STATUS_ONLY set to TRUE, while the function apply_interest may use the flag FOREIGN_ACCOUNTS set to TRUE. To compile these with the appropriate flags, you can issue:
alter function calculate_interest compile
   plsql_ccflags = 'ACTIVE_STATUS_ONLY:TRUE'
   reuse settings;
alter function apply_interest compile
   plsql_ccflags = 'FOREIGN_ACCOUNTS:TRUE'
   reuse settings;
Note that there is no session level setting. The clause reuse settings ensures the same compiler directives are used when the functions are recompiled later.
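If you later need to check which flags a program unit was compiled with, the data dictionary records them; for example:

select name, plsql_ccflags
from user_plsql_object_settings
where name in ('CALCULATE_INTEREST', 'APPLY_INTEREST');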
Let's examine another variation of this new feature. In addition to the definition of a conditional variable, you can also check a static constant of a package in the conditional compilation. For example, suppose you want to control the debugging output of a PL/SQL procedure based on a Boolean packaged constant. First you create the package as
create or replace package debug_pkg
is
   debug_flag constant boolean := FALSE;
end;
The debug_flag is the constant that determines the conditional logic in the code. You can now embed the code inside the package as follows:
create or replace procedure myproc
as
begin
        $if debug_pkg.debug_flag $then
                dbms_output.put_line ('Debug=T');
        $else
                dbms_output.put_line ('Debug=F');
        $end
end;
Note that the packaged constant is referenced directly without any $ sign. In this case, there is no need to set any session- or system-level conditional compilation parameters. While the function is compiled, you do not need to pass any additional clause either. To see how this works, execute:
SQL> exec myproc
Debug=F

PL/SQL procedure successfully completed.

Because the value of debug_pkg.debug_flag is FALSE now, the execution of the procedure returned "Debug=F," as expected. Now, change the constant value:
create or replace package debug_pkg
is
   debug_flag constant boolean := TRUE;
end;
Then, execute the procedure again:
SQL> exec myproc
Debug=T

PL/SQL procedure successfully completed.

The procedure picked up the new value of the constant to show "Debug=T," as expected. Note a very important difference here: you did not need to recompile the procedure explicitly; the change to the constant was picked up automatically.

Unlimited DBMS Output
Remember the dreaded error that resembles the following lines?
ERROR at line 1:
ORA-20000: ORU-10027: buffer overflow, limit of 1000000 bytes
ORA-06512: at "SYS.DBMS_OUTPUT", line 32
ORA-06512: at "SYS.DBMS_OUTPUT", line 97
ORA-06512: at "SYS.DBMS_OUTPUT", line 112
ORA-06512: at line 2
This error occurs because the maximum output the supplied package dbms_output could handle used to be 1 million bytes. In Oracle Database 10g Release 2, that restriction has been lifted: the maximum output can now be as much as required. You can set it to unlimited by simply issuing
set serveroutput on
In prior releases, the above statement would have produced the following result:

SQL> show serveroutput
serveroutput ON size 2000 format WORD_WRAPPED

Note how the maximum size of the output defaulted to 2,000 bytes. In Oracle Database 10g Release 2, the same command shows the following result:

SQL> show serveroutput
serveroutput ON SIZE UNLIMITED FORMAT WORD_WRAPPED

The default value is now UNLIMITED.
Another inconvenience was the maximum size of a line displayed by dbms_output. The following is a typical error message for lines longer than 255 bytes.
ERROR at line 1:
ORA-20000: ORU-10028: line length overflow, limit of 255 chars per line
ORA-06512: at "SYS.DBMS_OUTPUT", line 35
ORA-06512: at "SYS.DBMS_OUTPUT", line 115
ORA-06512: at line 2
In Oracle Database 10g Release 2, the lines can be of any length.
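A quick way to confirm the new behavior is to print a line longer than the old 255-byte limit (a sketch):

set serveroutput on

begin
   -- a 500-character line; before 10g Release 2 this raised ORU-10028
   dbms_output.put_line(rpad('x', 500, 'x'));
end;
/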
In Part 2, I'll cover manageability features.