To learn is to change. Education is a process that changes the learner. – Unknown
As you may (or may not) know, there are two engines running in the Oracle database: the PL/SQL engine, which executes all your PL/SQL code, and the SQL engine, which executes all your SQL statements. This means that every time your PL/SQL code needs data from the database, or writes data to it, control is passed from the PL/SQL runtime engine to the SQL engine. This handover involves a fair amount of work by both engines, such as writing the SQL statement into an area the SQL engine can access and having the SQL engine write the result into an area the PL/SQL engine can access. This is called a context switch.
Reduce time spent
All these context switches take time. Not much time individually, which is why you don't notice any slowdown when you run a single SQL statement that returns one row. But when you run SQL statements in a loop, you execute a lot of SQL statements, and that means a lot of context switching. Ideally you want to reduce these context switches to a single one and bring back the entire result set in one pass. This is where the bulk processing capabilities of the Oracle database come in.
Since Oracle 8i, instead of fetching or updating a single row at a time, you can use the bulk features of the Oracle database. This means you can fetch an entire collection of rows in one roundtrip to the SQL engine. Similarly, you can insert, update or delete an entire collection in one roundtrip.
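For the DML side, the bulk feature is the FORALL statement. As a hedged sketch of the idea (the emp_archive table is a hypothetical copy of emp's ename column, not part of the SCOTT schema), all the inserts below are sent to the SQL engine in a single context switch:

```sql
DECLARE
  TYPE ename_tt IS TABLE OF emp.ename%TYPE INDEX BY PLS_INTEGER;
  l_enames ename_tt;
BEGIN
  -- Fill the collection in one roundtrip...
  SELECT ename BULK COLLECT INTO l_enames FROM emp;
  -- ...and send all the inserts back in one roundtrip with FORALL.
  -- (emp_archive is a hypothetical table, assumed here for illustration.)
  FORALL indx IN 1 .. l_enames.COUNT
    INSERT INTO emp_archive (ename) VALUES (l_enames(indx));
END;
```

Note that FORALL is not a loop: it is a single statement that hands the whole collection to the SQL engine at once.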
For fetching data there are a few things to consider. Instead of fetching a single value or row you are now fetching an entire set of values or rows. So instead of fetching into simple variables or records you must now fetch into a collection of scalars or a collection of records. See Collections in Oracle for some explanation of the different types of collections.
Consider the following code fragment:
```sql
BEGIN
  FOR r_emp IN (SELECT ename FROM emp) LOOP
    DBMS_OUTPUT.PUT_LINE(r_emp.ename);
  END LOOP;
END;
```
The emp table (in the SCOTT schema) contains 14 rows, so conceptually this code makes 14 single-row fetches, each one a roundtrip to the SQL engine. (Since Oracle 10g the optimizing PL/SQL compiler silently converts cursor FOR loops into bulk fetches of 100 rows at a time, but an explicit single-row FETCH, like the one shown further down, gets no such help.)
Let’s change the code to utilize the BULK COLLECT option.
```sql
DECLARE
  TYPE ename_tt IS TABLE OF emp.ename%TYPE INDEX BY BINARY_INTEGER;
  l_enames ename_tt;
BEGIN
  SELECT ename BULK COLLECT INTO l_enames FROM emp;
  -- 1 .. COUNT is safe even when the query returns no rows
  FOR indx IN 1 .. l_enames.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_enames(indx));
  END LOOP;
END;
```
Now we make just one roundtrip to the database. It may be that we have to write a bit more PL/SQL code, but that code runs a lot faster. If we declare an explicit cursor, then every single-row fetch in our code means a roundtrip to the database.
Consider the following two code fragments. The second is the bulk version of the first; the changes are the collection type declaration, the BULK COLLECT fetch and the processing loop.
```sql
DECLARE
  CURSOR c_emp IS SELECT * FROM emp;
  r_emp c_emp%ROWTYPE;
BEGIN
  OPEN c_emp;
  FETCH c_emp INTO r_emp;
  WHILE c_emp%FOUND LOOP
    DBMS_OUTPUT.PUT_LINE(r_emp.ename);
    FETCH c_emp INTO r_emp;
  END LOOP;
  CLOSE c_emp;
END;
```

```sql
DECLARE
  CURSOR c_emp IS SELECT * FROM emp;
  TYPE emp_tt IS TABLE OF c_emp%ROWTYPE INDEX BY PLS_INTEGER;
  r_emps emp_tt;
BEGIN
  OPEN c_emp;
  FETCH c_emp BULK COLLECT INTO r_emps;
  -- 1 .. COUNT is safe even when the cursor returns no rows
  FOR indx IN 1 .. r_emps.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(r_emps(indx).ename);
  END LOOP;
  CLOSE c_emp;
END;
```
We can base a collection type on the record the cursor returns, and then define a variable of this collection type. We still have to open the cursor, but we can now fetch all the records at once and bulk collect them into our collection. After fetching all the records, we write our own loop to process the data. The collection used for bulk fetching is automatically initialized, extended (for nested tables and varrays) and densely filled, starting at index value 1.
If your table is really big and you bulk collect all the data into your collection, you might run out of memory: remember, your collection takes up space in the PGA. You can still use the bulk collect feature, but in this case you want to limit the number of rows retrieved on every roundtrip by adding the LIMIT clause to your fetch. By putting the fetch in a loop you still retrieve all the rows from the database, just in smaller batches.

Please note that a bulk fetch sets the cursor's %NOTFOUND attribute to TRUE when there are not enough rows left to fill the limit, even though it may still have fetched one or more rows. Bulk collect also does NOT raise the NO_DATA_FOUND exception when nothing is found. This means you always have to check whether anything was retrieved before you start processing. To decide when to stop, either check whether the number of rows retrieved is smaller than the limit you provided and exit the loop if it is, or check the cursor's %NOTFOUND attribute, but do so at the end of your loop.
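Putting these rules together, a fetch loop with LIMIT might look like the sketch below (the batch size of 100 is an arbitrary choice, not a recommendation from the original text):

```sql
DECLARE
  CURSOR c_emp IS SELECT * FROM emp;
  TYPE emp_tt IS TABLE OF c_emp%ROWTYPE INDEX BY PLS_INTEGER;
  r_emps emp_tt;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp BULK COLLECT INTO r_emps LIMIT 100;
    -- The collection may hold rows even when %NOTFOUND is TRUE,
    -- so process the batch BEFORE testing the exit condition.
    FOR indx IN 1 .. r_emps.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE(r_emps(indx).ename);
    END LOOP;
    -- Check %NOTFOUND (or r_emps.COUNT < 100) at the END of the loop.
    EXIT WHEN c_emp%NOTFOUND;
  END LOOP;
  CLOSE c_emp;
END;
```

Because the 1 .. COUNT range is empty when nothing was fetched, the processing loop is safely skipped on the final, empty batch.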