Object Relational Features



Collections

Collections are data structures of a similar nature to arrays in other languages (and indeed they are best used in the same manner).

Associative Arrays (INDEX BY tables)

INDEX BY tables are collections and can only be declared within a PL/SQL environment. They are recognised by the use of INDEX BY [ BINARY_INTEGER | PLS_INTEGER | VARCHAR2(size) ] after the TYPE declaration.
SQL> DECLARE
  2    /* INDEX BY tables can be declared with a single attribute, i.e. */
  3    TYPE l_tab_type IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  4    l_tab   l_tab_type;
  5    /* Or with multiple attributes, via a RECORD type, i.e. */
  6    TYPE l_rec_type IS RECORD ( a  VARCHAR2(10), b  NUMBER );
  7    TYPE l_tab_rec_type IS TABLE OF l_rec_type INDEX BY BINARY_INTEGER;
  8    l_tab_rec  l_tab_rec_type;
  9  BEGIN
 10    NULL;
 11  END;
Elements are referenced with the standard subscript syntax (and dot-notation for record fields), i.e.
BEGIN
  /* Populate the single attribute table */
  l_tab(1) := 'X';

  /* Populate the RECORD-type table */
  l_tab_rec(1).a := 'X';
  l_tab_rec(1).b := 10;
END;
INDEX BY Tables can be sparse, i.e.
DECLARE
  /* INDEX BY tables can be declared with a single attribute, i.e. */
  TYPE l_tab_type IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  l_tab   l_tab_type;
BEGIN
  /* Populate the tables */
  l_tab(1) := 'X';

  /* Prove sparsity by missing the second row */
  l_tab(3) := 'Y';
END;
They are typically used to hold transient data (much as arrays would be used in other languages).

Usage Notes
Each INDEX BY table has various methods which can be used to manipulate the collection.

.COUNT - The number of INSTANTIATED rows in the collection.
.FIRST - The index of the first INSTANTIATED row in the collection.
.LAST  - The index of the last INSTANTIATED row in the collection.
.NEXT(i) - The index of the next INSTANTIATED row in the collection after index i.

  1  DECLARE
  2    /* INDEX BY tables can be declared with a single attribute, i.e. */
  3    TYPE l_tab_type IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  4    l_tab   l_tab_type;
  5  BEGIN
  6    /* Populate the tables */
  7    l_tab(1) := 'X';
  8    /* Prove sparsity by missing the second row */
  9    l_tab(3) := 'Y';
 10    dbms_output.put_line('COUNT : ' || l_tab.COUNT);
 11    dbms_output.put_line('FIRST : ' || l_tab.FIRST);
 12    dbms_output.put_line('LAST  : ' || l_tab.LAST);
 13    dbms_output.put_line('NEXT  : ' || l_tab.NEXT(1));
 14* END;
SQL> /
COUNT : 2
FIRST : 1
LAST  : 3
NEXT  : 3

Trying to access a row which has not been instantiated, in this case row 2, results in :

  1  DECLARE
  2    /* INDEX BY tables can be declared with a single attribute, i.e. */
  3    TYPE l_tab_type IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  4    l_tab   l_tab_type;
  5  BEGIN
  6    /* Populate the tables */
  7    l_tab(1) := 'X';
  8    /* Prove sparsity by missing the second row */
  9    l_tab(3) := 'Y';
 10    dbms_output.put_line(l_tab(2));
 11* END;
 12  /
DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 10
Looping through the collection
Manipulating the collection will probably involve a LOOP of some form, so the following usage notes are recommended :

Looping through a dense collection
If the collection is guaranteed dense, i.e. all rows instantiated, then use either
FOR i IN 1..l_tab.COUNT
LOOP
  ..
END LOOP;

/* Note, this one will only involve one call to the collection methods, and will not execute if the
   collection is empty (i.e. .COUNT = 0) */

or

FOR i IN l_tab.FIRST..l_tab.LAST
LOOP
 ..
END LOOP;
This guarantees the least amount of work (the second form is slightly less efficient, due to the two calls to the collection methods, but copes if the data does not start at index 1).

Looping through a sparse collection

If the collection is sparse, i.e. some rows are NOT instantiated, then use :
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  l_tab   l_tab_type;
  i  BINARY_INTEGER;    /* Note, the declaration as BINARY_INTEGER. */
BEGIN
  i  := l_tab.FIRST;
  WHILE i IS NOT NULL
  LOOP
    ..
    i := l_tab.NEXT(i);
  END LOOP;
END;
This will diminish performance somewhat, due to the inclusion of the .NEXT(i) call within the loop, but this is the "safest" mechanism for looping through a sparse (or potentially sparse) collection.

The main problem with INDEX BY BINARY_INTEGER tables is that a numeric index does not fit well with some situations, such as lookup data. Prior to the 9i release 2 RDBMS, the only way of getting the attributes associated with a value (assuming you didn't already know the index of the row) was to literally loop through the collection until you found the particular row you were interested in (see the INDEX BY table examples above).
This can be a performance overhead.
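For illustration, a minimal sketch (not one of the original examples) of that lookup-by-value loop over a BINARY_INTEGER-indexed table :
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  l_tab   l_tab_type;
  i       BINARY_INTEGER;
  l_found BINARY_INTEGER;
BEGIN
  l_tab(1) := 'X';
  l_tab(3) := 'Y';
  /* Walk the collection until the value of interest is found */
  i := l_tab.FIRST;
  WHILE i IS NOT NULL AND l_found IS NULL
  LOOP
    IF l_tab(i) = 'Y'
    THEN
      l_found := i;
    END IF;
    i := l_tab.NEXT(i);
  END LOOP;
  dbms_output.put_line('Found at index : ' || l_found);
END;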

*DEV NOTE*
For associative arrays, use the syntax:
TYPE type_name IS TABLE OF element_type [NOT NULL]
   INDEX BY [BINARY_INTEGER | PLS_INTEGER | VARCHAR2(size_limit)];

TYPE type_name IS TABLE OF element_type [NOT NULL]
   INDEX BY key_type;
The key_type can be numeric, either BINARY_INTEGER or PLS_INTEGER. It can also be VARCHAR2 or one of its subtypes VARCHAR, STRING, or LONG. You must specify the length of a VARCHAR2-based key, except for LONG which is equivalent to declaring a key type of VARCHAR2(32760). The types RAW, LONG RAW, ROWID, CHAR, and CHARACTER are not allowed as keys for an associative array.

With 9i release 2, you can now declare an INDEX BY VARCHAR2(n) collection, allowing reference to a particular row in the collection via a string, not simply a number, i.e.
SQL> DECLARE
  2    TYPE l_type IS TABLE OF VARCHAR2(10) INDEX BY VARCHAR2(10);
  3    l_tab   l_type;
  4  BEGIN
  5    l_tab('X') := 'Y';
  6  END;
*DEV NOTE*
Just as with INDEX BY BINARY_INTEGER or PLS_INTEGER tables, you can use all the standard methods associated with INDEX BY tables with INDEX BY VARCHAR2 tables. This means that, for example, in order to get a list of indexes used within an associative array, you can use .NEXT, i.e. :
SQL> DECLARE
  2    TYPE l_type IS TABLE OF VARCHAR2(10) INDEX BY VARCHAR2(10);
  3    l_tab l_type;
  4    i  VARCHAR2(10);
  5  BEGIN
  6    l_tab('A') := '1';
  7    l_tab('B') := '2';
  8    l_tab('X') := '3';
  9    i := l_tab.FIRST;
 10    WHILE i IS NOT NULL
 11    LOOP
 12      dbms_output.put_line(i);
 13      i := l_tab.NEXT(i);
 14    END LOOP;
 15  END;
 16  /
A
B
X

PL/SQL procedure successfully completed.

Nested Tables

Nested Tables are declared in a similar way to INDEX BY tables, and all the methods which exist for INDEX BY tables (.FIRST etc.) exist for nested tables too; the difference is that they can be declared persistently as a database TYPE, as well as a PL/SQL construct.

Note, the nested table declarations in the following examples are made in PL/SQL, but they can equally be declared using the CREATE OR REPLACE TYPE syntax in SQL. SQL types would typically be used for persistent storage across tables (since Oracle supports creating tables / columns of these types) or across packages (package variables etc.)

They can be created as TYPEs of a single attribute, i.e. :
SQL> CREATE TYPE l_tab AS TABLE OF VARCHAR2(10);
  2  /

Type created.

OR as TYPEs of multiple attributes, via an OBJECT type.

SQL> CREATE TYPE l_obj AS OBJECT ( a  VARCHAR2(10),  b  NUMBER );
  2  /

Type created.

SQL> CREATE TYPE l_obj_tab AS TABLE OF l_obj;
  2  /

Type created.

They can also be created as a PL/SQL construct, in the same way as INDEX BY tables, i.e.

SQL> DECLARE
  2    TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  3    l_tab  l_tab_type;
  4  BEGIN
  5    NULL;
  6  END;

VARRAYs

For the purposes of non-persistent collections, VARRAYs are identical to nested tables, with the exception that a VARRAY is bounded, i.e. the following code declares a collection with a maximum of 10 rows.
SQL> DECLARE
  2    TYPE l_tab IS VARRAY(10) OF VARCHAR2(20);
  3    l_tab2 l_tab;
  4  BEGIN
  5    NULL;
  6  END;
The difference between the SQL collection types boils down to the differences in
persistent storage.

Populating Collections

Populating a Nested Table

It is important to note that nested tables have to be initialised with the constructor method before use (unless using BULK COLLECT, which is discussed later). Not doing so will lead to :
SQL> DECLARE
  2    TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  3    l_tab   l_tab_type;
  4  BEGIN
  5    l_tab(1) := 'X';
  6  END;
  7  /
DECLARE
*
ERROR at line 1:
ORA-06531: Reference to uninitialized collection
ORA-06512: at line 5
Populating a nested table can be done in a variety of fashions, but the main approaches are :

1. Constructor-based method

Each SQL table or object type can be populated via its constructor method, in a similar way to many Object Oriented (OO) languages, e.g. Java.

Nested tables have to be initialised via the constructor, i.e.
SQL> DECLARE
  2    TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  3    l_tab   l_tab_type;
  4  BEGIN
  5    l_tab := l_tab_type('x');
  6  END;
This will create the first row. All other rows can be assigned directly as with INDEX BY tables, with the caveat that the nested table row has to be allocated memory first, via the .EXTEND method, i.e.
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  l_tab   l_tab_type;
BEGIN
  l_tab := l_tab_type('x');
  l_tab.EXTEND;
  l_tab(2) := 'X';
END;
Note that, for consistency of approach, the collection is usually initialised with an empty constructor first, i.e.
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  l_tab   l_tab_type := l_tab_type();
BEGIN
  l_tab.EXTEND;
  l_tab(1) := 'Y';
  l_tab.EXTEND;
  l_tab(2) := 'X';
END;
Nested tables which are based on OBJECT types have to take into account the OBJECT type constructor, i.e.

Note that OBJECT types CANNOT currently be created in PL/SQL, so the type has to be a SQL TYPE (see earlier example).
SQL> CREATE TYPE l_obj AS OBJECT ( a  VARCHAR2(10) );
  2  /

Type created.

SQL> DECLARE
  2    TYPE l_tab IS TABLE OF l_obj;
  3    l_tab2   l_tab := l_tab();
  4  BEGIN
  5    l_tab2.EXTEND;
  6    l_tab2(1) := l_obj('X');
  7  END;
  8  /

PL/SQL procedure successfully completed.
*DEV NOTE*
When the number of rows to be populated into a SQL collection type is known beforehand, you can use the optional .EXTEND(n) syntax, instead of having to .EXTEND every row, i.e.
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  l_tab   l_tab_type := l_tab_type();
BEGIN
  l_tab.EXTEND(2);
  l_tab(1) := 'Y';
  l_tab(2) := 'X';
END;
This can potentially save a lot of recursive operations, since .EXTEND can be quite expensive. It's certainly good practice to do so, but don't expect the performance to suddenly rocket after making this change. Even on relatively complex TYPEs (both OBJECT and TABLE), you still have to go into the hundreds of thousands of rows in the collection before it starts to make a significant difference.

2. BULK COLLECT
All types of collections, with the exception of INDEX BY VARCHAR2 collections (see note below), can be automatically initialised and populated by the result set of a query. It is far more efficient to do it this way (since it's a bulk operation) than to loop around the results of a cursor and populate row by row. Note that both implicit and explicit cursors support this mechanism.
DECLARE
  TYPE l_tab IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  l_tab2 l_tab;
BEGIN
  SELECT 'x'
  BULK COLLECT INTO l_tab2
  FROM dual;
END;
Bulk Collecting into collections based on OBJECT types involves SELECTing the OBJECT type constructor, i.e.
SQL> DECLARE
  2    type l_tab IS TABLE OF l_obj;
  3    l_tab2 l_tab;
  4  BEGIN
  5    SELECT l_obj('x')
  6    BULK COLLECT INTO l_tab2
  7    FROM dual;
  8  END;
  9  /

PL/SQL procedure successfully completed.
Standard array fetching rules apply to this as to all cursor manipulation, i.e. the array size is set through the LIMIT clause of an explicit cursor FETCH. Note, explicit cursors are the only construct to support this, since there is no syntax for "stepping through" an implicit cursor. This usually comes into play (as with all arraysize settings) for large result sets.
SQL> DECLARE
  2    TYPE l_tab IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  3    l_tab2 l_tab;
  4    CURSOR c1 IS
  5      SELECT 'x'
  6      FROM dual;
  7  BEGIN
  8    OPEN c1;
  9    FETCH c1 BULK COLLECT INTO l_tab2 LIMIT 500;
 10    WHILE c1%FOUND
 11    LOOP
 12      FETCH c1 BULK COLLECT INTO l_tab2 LIMIT 500;
 13    END LOOP;
 14    CLOSE c1;
 15  END;
 16  /

PL/SQL procedure successfully completed.
*IMPORTANT*
You cannot BULK COLLECT into a string-indexed associative array (even at 10g), i.e.
SQL> l
  1  declare
  2    type l_type is table of varchar2(10) index by varchar2(10);
  3    l_tab l_type;
  4  begin
  5    select 'x'
  6    bulk collect into l_tab
  7    from dual;
  8* end;
SQL> /
  bulk collect into l_tab
                    *
ERROR at line 6:
ORA-06550: line 6, column 21:
PLS-00657: Implementation restriction: bulk SQL with associative arrays with VARCHAR2 key is not supported.
ORA-06550: line 6, column 21:
PLS-00597: expression 'L_TAB' in the INTO list is of wrong type
ORA-06550: line 7, column 3:
PL/SQL: ORA-00904: : invalid identifier
ORA-06550: line 5, column 3:
PL/SQL: SQL Statement ignored
3. Copying Collections
Collections of the same type can be copied, i.e.
SQL> declare
  2    l_tab1  l_tab := l_tab('X');
  3    l_tab2  l_tab := l_tab('Y');
  4  begin
  5    l_tab2 := l_tab1;
  6    dbms_output.put_line(l_tab2(1));
  7  end;
  8  /
X
*IMPORTANT*
Collections CANNOT be compared and checked for (in)equality directly. This would have to be done programmatically (row by row).

Note, in Oracle 10g, you CAN directly equate collections, as well as use other collection functionality such as UNION between collections etc. See
Oracle TechNet for further details.
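A minimal sketch of the pre-10g programmatic approach (using a locally declared type; not taken from the original article) :
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  l_tab1  l_tab_type := l_tab_type('X', 'Y');
  l_tab2  l_tab_type := l_tab_type('X', 'Y');
  l_equal BOOLEAN := TRUE;
BEGIN
  /* Row-by-row comparison (both collections are dense here) */
  IF l_tab1.COUNT != l_tab2.COUNT
  THEN
    l_equal := FALSE;
  ELSE
    FOR i IN 1..l_tab1.COUNT
    LOOP
      IF l_tab1(i) != l_tab2(i)
      THEN
        l_equal := FALSE;
      END IF;
    END LOOP;
  END IF;
  IF l_equal
  THEN
    dbms_output.put_line('EQUAL');
  ELSE
    dbms_output.put_line('NOT EQUAL');
  END IF;
END;
At 10g, the same check on nested tables can be written directly as IF l_tab1 = l_tab2 THEN ..., so a loop like the above is only needed on earlier releases (or where NULL handling needs to be customised).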

*DEV NOTE*
You can use the RETURNING clause of a DML statement to populate collections.
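For example, a minimal sketch (assuming test_char_table, as used in the FORALL examples later, has a VARCHAR2 column called a; the column name is an assumption) :
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  l_tab  l_tab_type;
BEGIN
  /* Each updated row returns its new value of a into the collection */
  UPDATE test_char_table
  SET    a = 'Y'
  RETURNING a BULK COLLECT INTO l_tab;
  dbms_output.put_line('Rows updated : ' || l_tab.COUNT);
END;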

Persistently Storing Collections

The main advantage of using collections (and the reason why Oracle supports object-relational features) is that they can be stored persistently in the database. Oracle does this by supporting the use of OBJECT TABLES (discussed later) and by allowing a table to have columns defined of a collection type.

*IMPORTANT*
Only Nested Tables and VARRAYs can be persistently stored.
SQL> CREATE TYPE l_tab AS TABLE OF VARCHAR2(10);
  2  /

Type created.

SQL> CREATE TABLE test_table ( nested_table   l_tab )
  2    NESTED TABLE nested_table STORE AS nested_table;

Table created.
This is the real reason why nested tables are so called. Note, the use of the NESTED TABLE .. STORE AS .. syntax is necessary for nested tables ONLY; creating columns of a VARRAY type does not need this. The reason for this is also the reason for the difference in how the two collection types are persistently stored.

Nested tables stored persistently are stored "out of line", i.e. in a separate relational table, whereas VARRAYs are stored "in line", internally as a LOB. This is the underlying reason for the main difference between nested tables and VARRAYs when persistently stored, i.e.

*IMPORTANT*
Nested Tables DO NOT maintain order when stored persistently, VARRAYs do.

*IMPORTANT*
If you do not index the nested_table_id (a hidden magic column on a nested table), Oracle will have to full scan the child table whenever you access the nested table rows. Let's say the parent object has 1,000,000 rows and each of the parent rows has 10 children.

Would you want to full scan the nested table or do 10 indexed reads?
SQL> CREATE TYPE test_type AS TABLE OF VARCHAR2(10) ;
  2  /

Type created.

SQL> CREATE TABLE test_table ( a  test_type )
  2  NESTED TABLE a STORE AS a;

Table created.

SQL> CREATE INDEX TEST_IND ON a ( nested_table_id );

Index created.
*DEV NOTE*
VARRAYs can be slightly more performant than Nested Tables when stored persistently, because of the reduction in I/O due to the VARRAY being stored "in-line". However, a Nested Table's underlying relational table (the table specified in the STORE AS clause) can be indexed. It basically boils down to benchmark, benchmark, benchmark.

CAST and MULTISET

CAST and MULTISET together provide a mechanism for converting the result set of a SQL statement into a collection type.

It is useful for returning and populating the contents of a collection quickly when dynamic SQL is required.

*IMPORTANT*
For static SQL, you can use the
BULK COLLECT feature. Note, it is an interesting benchmark exercise to determine which is best for the situation.
SQL> DECLARE
  2    l_test   test_type;
  3  BEGIN
  4    SELECT CAST(MULTISET(SELECT 'X' FROM temp_mc) AS test_type)
  5    INTO   l_test
  6    FROM   dual;
  7    dbms_output.put_line(l_test.COUNT);
  8  END;
  9  /
2
*DEV NOTE*
Certain features cannot be used directly within the MULTISET built-in, UNION and UNION ALL being examples, i.e.
SQL> BEGIN
  2    EXECUTE IMMEDIATE 'SELECT CAST(MULTISET(SELECT 1 FROM dual UNION SELECT 2 FROM dual) AS test_type) FROM dual';
  3  END;
  4  /
BEGIN
*
ERROR at line 1:
ORA-03001: unimplemented feature
ORA-06512: at line 2
In order to do this, you can re-engineer the query to use an inline view, i.e.
SQL> DECLARE
  2    l_tab  test_type;
  3  BEGIN
  4    EXECUTE IMMEDIATE 'SELECT CAST(MULTISET(SELECT A FROM (SELECT 1 a FROM dual UNION SELECT 2 a FROM dual)) AS test_type)
  5      FROM dual' INTO l_tab;
  6    dbms_output.put_line(l_tab.COUNT);
  7  end;
  8  /
2

Performance of Collections

So, how do collections measure up in terms of performance? We've already seen that they can be used to greatly simplify coding within PL/SQL (and in other environments as well), but are they quick?

Well, they can be, but they may also be a serious performance problem if certain considerations are not understood. The following sections highlight a typical problem and various solutions.

Using ROWNUM

It's a common trick, but referencing rownum within a sub-query causes Oracle to materialize the results into temp (otherwise known as "preventing query merging taking place"). How does this help with collection performance? Well, consider the following example :
SQL> CREATE TABLE t ( a VARCHAR2(25) )
   2 /

Table created.

SQL> CREATE INDEX t_ind ON t(a);

Index created.

SQL> EXEC dbms_stats.set_table_stats(user, 'T', numrows=>100000, numblks=>1000);

PL/SQL procedure successfully completed.
Okay, now let's run a query which references the function in a subquery :
SQL> col plan_plus_exp format a200  -- ensures the autotrace line size is large enough
SQL> SET AUTOTRACE TRACEONLY EXPLAIN

SQL> SELECT a
  2  FROM   t
  3  WHERE  a IN ( SELECT column_value
  4                  FROM TABLE(f_convert('1,2,3,4')))
  5  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1700005 Card=1000 Bytes=7000)
   1    0   NESTED LOOPS (SEMI) (Cost=1700005 Card=1000 Bytes=7000)
   2    1     INDEX (FAST FULL SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=5 Card=100000 Bytes=700000)
   3    1     COLLECTION ITERATOR (PICKLER FETCH) OF 'F_CONVERT'
Note, this example uses the f_convert function from
here.

What we see is a fast full scan of the index and a NESTED LOOP.

This is because the code is being run as (pseudo-code) :
For every row in the index (t_ind)
  call f_convert
end loop
I.e. it calls f_convert for every row in the index!

Note, on your system (especially at 10g), you may see this as a HASH JOIN or MERGE JOIN rather than a NESTED LOOP; it all depends on things such as your block size and init.ora parameters, such as optimizer_index_cost_adj etc. The actual mechanism for joining (in this case) doesn't matter, the point is to show how to improve it in general.

So, leaving aside the index full scan for now, if we could get Oracle to call f_convert only once, i.e. materialize to temp first, then we would avoid the nested loop. Adding ROWNUM to the subquery accomplishes this. Note, we simply add rownum and then take it away. This will, obviously, only work for NUMBER datatypes (or values which can be cast to a number); a mechanism for non-numeric values would involve functions such as SUBSTR etc. :
SQL> SELECT a
  2  FROM   t
  3  WHERE  a IN ( SELECT column_value+rownum-rownum
  4                  FROM TABLE(f_convert('1,2,3,4')))
  5  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=222 Card=1000 Bytes=20000)
   1    0   HASH JOIN (SEMI) (Cost=222 Card=1000 Bytes=20000)
   2    1     TABLE ACCESS (FULL) OF 'T' (Cost=153 Card=100000 Bytes=700000)
   3    1     VIEW OF 'VW_NSO_1' (Cost=17 Card=8168 Bytes=106184)
   4    3       COUNT
   5    4         COLLECTION ITERATOR (PICKLER FETCH) OF 'F_CONVERT'
The NESTED LOOP is now a HASH JOIN (SEMI) against a temporary view (VW_NSO_1), proving it's been materialised first, BUT we now have a FULL TABLE SCAN of T (i.e. no index usage). We'll cover the reason why in the next section. Again, you may see slightly different outcomes to this, but it doesn't matter; it's the idea which is important.

Correct datatypes

As with all Oracle queries, ensuring that datatypes on both sides of a given element in a predicate are the same is extremely good practice. In my experience, I've found that a lot of the time, non-use of indexes is down to developers not considering the mechanisms which Oracle uses in order to evaluate predicates. At 9i, the use of DBMS_XPLAN can make this obvious; here's a quick example :
SQL> EXPLAIN PLAN
  2  FOR
  3  SELECT a
  4  FROM   t
  5  WHERE  a = 1;

Explained.

SQL> SELECT *
  2  FROM   TABLE(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |  1000 |  7000 |   153 |
|*  1 |  TABLE ACCESS FULL   | T           |  1000 |  7000 |   153 |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
   1 - filter(TO_NUMBER("T"."A")=1)

Note: cpu costing is off

14 rows selected.
Here, it's obvious why an index will not be used, since there's a TO_NUMBER being applied to a (rather than the more logical TO_CHAR on the 1 which you may expect).

The same applies when utilising collections. The example from the previous section has a FULL TABLE SCAN for this very reason, so we'll modify the sub-query to simply have a TO_CHAR on it, i.e.
SQL> SELECT a
  2  FROM   t
  3  WHERE  a IN ( SELECT TO_CHAR(column_value+rownum-rownum)
  4                  FROM TABLE(f_convert('1,2,3,4')))
  5  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=76 Card=1000 Bytes=29000)
   1    0   HASH JOIN (SEMI) (Cost=76 Card=1000 Bytes=29000)
   2    1     INDEX (FAST FULL SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=5 Card=100000 Bytes=700000)
   3    1     VIEW OF 'VW_NSO_1' (Cost=17 Card=8168 Bytes=179696)
   4    3       COUNT
   5    4         COLLECTION ITERATOR (PICKLER FETCH) OF 'F_CONVERT'
The FULL TABLE SCAN now becomes an INDEX (FAST FULL SCAN).

Using FIRST_ROWS

In the previous examples, we removed the multiple calls to the f_convert function by simply adding ROWNUM, which forces Oracle to materialize the results first, and we removed the FULL TABLE SCAN by ensuring that all datatypes are correct. However, we still have a FAST FULL SCAN, when in reality we may want a UNIQUE or RANGE SCAN (depending on the type of index). We can accommodate this by the use of FIRST_ROWS, which is more "index happy" than ALL_ROWS.
SQL> SELECT /*+ FIRST_ROWS */ a
  2  FROM   t
  3  WHERE  a IN ( SELECT TO_CHAR(column_value+rownum-rownum)
  4                  FROM TABLE(f_convert('1,2,3,4')))
  5  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=8225 Card=8168000 Bytes=236872000)
   1    0   NESTED LOOPS (Cost=8225 Card=8168000 Bytes=236872000)
   2    1     VIEW OF 'VW_NSO_1' (Cost=17 Card=8168 Bytes=179696)
   3    2       SORT (UNIQUE)
   4    3         COUNT
   5    4           COLLECTION ITERATOR (PICKLER FETCH) OF 'F_CONVERT'
   6    1     INDEX (RANGE SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=1 Card=1000 Bytes=7000)
We now have our UNIQUE or RANGE SCAN as applicable (whether this is a "good" thing is another matter, of course, especially when you look at the impact on those estimated cardinalities).

Using ORDERED and PUSH_SUBQ

I'm not a big fan of hints (other than FIRST_ROWS, ALL_ROWS and APPEND, of course). I believe that the optimizer should be able, from the available statistics, to generate an optimal plan. However, I'm also a realist. However good the Oracle CBO is, it's still software, it's based on statistics and there'll still be bugs. So, we sometimes need to involve ourselves somewhat with the CBO mechanisms. ORDERED and PUSH_SUBQ are good examples of this.

According to the brilliant Oracle Metalink note 29236.1,

"PUSH_SUBQ causes all subqueries in a query block to be executed at the earliest possible time. Normally, subqueries are executed as the last is applied is outerjoined or remote or joined with a merge join".

"ORDERED : Access tables in the order of the FROM clause"

Now, for an example, let's say that a query has a sub-optimal plan considering the volumes of data involved :
SQL> SELECT a
  2  FROM t
  3  WHERE  a IN ( SELECT 1234 FROM dual );

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=17)
   1    0   HASH JOIN (Cost=7 Card=1 Bytes=17)
   2    1     TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=14)
   3    1     VIEW OF 'VW_NSO_1' (Cost=4 Card=1 Bytes=3)
   4    3       SORT (UNIQUE) (Cost=4 Card=1)
   5    4         TABLE ACCESS (FULL) OF 'DUAL' (Cost=2 Card=1)
Now, we can see that every row of T is HASH JOINed to the result set of the materialized sub-query. In order to change this ordering, we can specify PUSH_SUBQ, and possibly ORDERED, depending on the complexity of the query, to switch the two, i.e.
SQL> SELECT /*+ ORDERED PUSH_SUBQ */ a
  2  FROM t
  3  WHERE  a IN ( SELECT 1234 FROM dual );

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=17)
   1    0   HASH JOIN (Cost=7 Card=1 Bytes=17)
   2    1     VIEW OF 'VW_NSO_1' (Cost=4 Card=1 Bytes=3)
   3    2       SORT (UNIQUE) (Cost=4 Card=1)
   4    3         TABLE ACCESS (FULL) OF 'DUAL' (Cost=2 Card=1)
   5    1     TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=14)

Using CARDINALITY hint

Take the earlier example :
SQL> SELECT a
  2  FROM   t
  3  WHERE  a IN ( SELECT TO_CHAR(column_value+rownum-rownum)
  4                  FROM TABLE(f_convert('1,2,3,4')))
  5  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=76 Card=1000 Bytes=29000)
   1    0   HASH JOIN (SEMI) (Cost=76 Card=1000 Bytes=29000)
   2    1     INDEX (FAST FULL SCAN) OF 'T_IND' (NON-UNIQUE) (Cost=5 Card=100000 Bytes=700000)
   3    1     VIEW OF 'VW_NSO_1' (Cost=17 Card=8168 Bytes=179696)
   4    3       COUNT
   5    4         COLLECTION ITERATOR (PICKLER FETCH) OF 'F_CONVERT'
This shows that our "materialised" sub-query is expected to return 8,168 rows. The problem, though, is that a large estimate like this can lead to undesirable plans, since it's the plan for an eight-thousand-row table, not one with 4 rows in it!

We can "hard code" a cardinality for the collection to try and improve performance, but we'd have to know roughly the number of rows in the collection in advance. Note, that the CARDINALITY hint is new at 9i.
SQL> SELECT a
  2  FROM   t
  3  WHERE  a IN ( SELECT /*+ CARDINALITY(x 4) */ TO_CHAR(column_value+rownum-rownum)
  4                  FROM TABLE(f_convert('1,2,3,4')) X )
  5  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=28 Card=1 Bytes=29)
   1    0   HASH JOIN (Cost=28 Card=1 Bytes=29)
   2    1     TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=7)
   3    1     VIEW OF 'VW_NSO_1' (Cost=25 Card=4 Bytes=88)
   4    3       SORT (UNIQUE) (Cost=25 Card=4 Bytes=8)
   5    4         COUNT
   6    5           COLLECTION ITERATOR (PICKLER FETCH) OF 'F_CONVERT'
This can make a significant improvement to performance in these situations, since we now get a potentially much more relevant access path.

See
Cardinalities for a more detailed discussion of estimated cardinalities.

Collection Sorting

It is seldom necessary to implement array sorting methods within SQL and PL/SQL, since nested tables / VARRAYs allow selection with a SQL ORDER BY clause. However, in some circumstances (say, for example, an existing system) where using nested tables etc. may not be possible / appropriate, it may be needed. Now, there are LOADS of sorting algorithms out there, all of which have their own distinctive pros and cons; search on Google and you'll see what I mean. As an example, though, here's an implementation of a popular sort algorithm I used on a recent project :

Bubble Sort

SQL> DECLARE
  2    type l_tab IS TABLE OF NUMBER(10) INDEX BY BINARY_INTEGER;
  3    l_tab2 l_tab;
  4    l_temp NUMBER;
  5  BEGIN
  6    /* set up the test data */
  7    l_tab2(1) := 10;
  8    l_tab2(2) := 15;
  9    l_tab2(3) := 5;
 10    l_tab2(4) := 20;
 11    /* implement bubble sort */
 12    for i in 1..l_tab2.COUNT-1
 13    loop
 14      for j in 2..l_tab2.COUNT
 15      loop
 16        if l_tab2(j) < l_tab2(j-1)
 17        then
 18          l_temp := l_tab2(j-1);
 19          l_tab2(j-1) := l_tab2(j);
 20          l_tab2(j) := l_temp;
 21        END IF;
 22      END LOOP;
 23    END LOOP;
 24    /* just to prove they are sorted (remove the reverse for ascending order) */
 25    FOR i IN REVERSE 1..l_tab2.COUNT
 26    LOOP
 27      dbms_output.put_line(l_tab2(i));
 28    END LOOP;
 29* END;
 30  /
20
15
10
5

PL/SQL procedure successfully completed.

Sparsity and Density

Various collection types can be sparse or dense. The following table describes the differences in order of "sparsity" :
Associative Array - Can be sparse
Nested Table      - Initially dense, but can be made sparse through .DELETE
VARRAY            - Has to be dense (cannot .DELETE elements, even the last one)

Sparse Associative Array

An associative array can be sparse or dense (initially, or otherwise) :
/* Initially dense, then made sparse */

SQL> DECLARE
  2    TYPE l_tab IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  3    l_tab2 l_tab;
  4  BEGIN
  5    l_tab2(1) := 'X';
  6    l_tab2(2) := 'x';
  7    l_tab2(3) := 'x';
  8    l_tab2.DELETE(2);
  9  END;
 10  /

PL/SQL procedure successfully completed.

/* Initially sparse */

SQL> DECLARE
  2    TYPE l_tab IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  3    l_tab2 l_tab;
  4  BEGIN
  5    l_tab2(1) := 'X';
  6    l_tab2(3) := 'x';
  7  END;
  8  /

PL/SQL procedure successfully completed.
Sparse Nested Table

You can make a Nested Table sparse by initially making it dense and then .DELETEing rows, but you cannot make it sparse from the outset, simply because you have to .EXTEND to allocate each entry. The closest you can get is to .EXTEND(n) and only assign some of the entries, which leaves NULL entries but still a dense collection, i.e.
SQL> DECLARE
  2    TYPE l_tab IS TABLE OF VARCHAR2(10);
  3    l_tab2 l_tab := l_tab();
  4  BEGIN
  5    l_tab2.EXTEND(3);  -- This is the killer, you cannot .EXTEND row 1 and then row 3 separately.
  6    l_tab2(1) := 'X';
  7    l_tab2(3) := 'X';
  8    dbms_output.put_line(l_tab2.count);
  9  END;
 10  /
3

PL/SQL procedure successfully completed.

/* Initially dense, then made sparse */

SQL> DECLARE
  2    TYPE l_tab IS TABLE OF VARCHAR2(10);
  3    l_tab2 l_tab := l_tab();
  4  BEGIN
  5    l_tab2.EXTEND(3);
  6    l_tab2(1) := 'X';
  7    l_tab2(2) := 'X';
  8    l_tab2(3) := 'X';
  9    l_tab2.DELETE(2);
 10  END;
 11  /

PL/SQL procedure successfully completed.
Sparse VARRAY (or not)

VARRAYs cannot be made sparse by any method, either initially (for the same reason as Nested Tables above) or via .DELETE, i.e.
SQL> DECLARE
  2    TYPE l_tab IS VARRAY(10) OF VARCHAR2(10);
  3    l_tab2   l_tab  := l_tab();
  4  BEGIN
  5    l_tab2.EXTEND(3);
  6    l_tab2(1) := 'X';
  7    l_tab2(2) := 'X';
  8    l_tab2(3) := 'X';
  9    l_tab2.DELETE(2);
 10  END;
 11  /
  l_tab2.DELETE(2);
  *
ERROR at line 9:
ORA-06550: line 9, column 3:
PLS-00306: wrong number or types of arguments in call to 'DELETE'
ORA-06550: line 9, column 3:
PL/SQL: Statement ignored

Collection Exceptions

COLLECTION_IS_NULL      - You try to operate on an atomically null collection.
NO_DATA_FOUND           - A subscript designates an element that was deleted, or a nonexistent element of an associative array.
SUBSCRIPT_BEYOND_COUNT  - A subscript exceeds the number of elements in a collection.
SUBSCRIPT_OUTSIDE_LIMIT - A subscript is outside the allowed range.
VALUE_ERROR             - A subscript is null or not convertible to the key type. This exception might occur if the key is defined as a PLS_INTEGER range, and the subscript is outside this range.
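A minimal sketch (not from the original article) showing two of these being raised and trapped :
DECLARE
  TYPE l_tab_type IS TABLE OF VARCHAR2(10);
  l_tab  l_tab_type;   /* atomically null - not yet initialised */
BEGIN
  BEGIN
    l_tab.EXTEND;      /* operating on a null collection */
  EXCEPTION
    WHEN COLLECTION_IS_NULL THEN
      dbms_output.put_line('COLLECTION_IS_NULL trapped');
  END;
  l_tab := l_tab_type('X');
  BEGIN
    dbms_output.put_line(l_tab(2));   /* only one element exists */
  EXCEPTION
    WHEN SUBSCRIPT_BEYOND_COUNT THEN
      dbms_output.put_line('SUBSCRIPT_BEYOND_COUNT trapped');
  END;
END;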

Bulk Binding

FORALL

There are situations where applications require issuing DML for every row in a collection. With Nested Tables or VARRAYs, it would be better from the outset to execute the DML using the TABLE operator, i.e.
INSERT INTO <table>
SELECT ..
FROM   TABLE(<collection>)
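For example, a minimal sketch (assuming a SQL-level nested table type, here a hypothetical v_tab_type created as TABLE OF VARCHAR2(10), and the two-column test_char_table used in the FORALL examples below) :
DECLARE
  l_tab  v_tab_type := v_tab_type('X', 'Y');
BEGIN
  /* A single INSERT .. SELECT over the whole collection; older releases
     may need TABLE(CAST(l_tab AS v_tab_type)) */
  INSERT INTO test_char_table
  SELECT column_value, column_value
  FROM   TABLE(l_tab);
END;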
But for those situations where using nested tables / varrays would be prohibitive, or INDEX BY tables are already in use, the FORALL statement can significantly improve performance when doing DML with the contents of a collection.

Therefore, instead of :
SQL> DECLARE
  2    TYPE l_tab  IS TABLE OF VARCHAR2(10);
  3    l_tab2   l_tab  := l_tab('X');
  4  BEGIN
  5    FOR i IN 1..1
  6    LOOP
  7      INSERT INTO test_char_table VALUES ( l_tab2(i), l_tab2(i) );
  8    END LOOP;
  9  END;
you can use the bulk-binding equivalent :
SQL> DECLARE
  2    TYPE l_tab  IS TABLE OF VARCHAR2(10);
  3    l_tab2   l_tab  := l_tab('X');
  4  BEGIN
  5    FORALL i IN 1..1
  6      INSERT INTO test_char_table VALUES ( l_tab2(i), l_tab2(i) );
  7  END;
*IMPORTANT*
FORALL can only be used when doing DML with collections (this is still true at 10g), i.e. the index (i) can only be used to reference a collection index.
SQL> DECLARE
  2    TYPE l_type IS TABLE OF VARCHAR2(10);
  3    l_tab l_type := l_type();
  4  BEGIN
  5    FORALL i IN 1..1
  6      INSERT INTO test_char_table VALUES (i, i);
  7  END;
  8  /
    INSERT INTO test_char_table VALUES (i, i);
                                           *
ERROR at line 6:
ORA-06550: line 6, column 44:
PLS-00430: FORALL iteration variable I is not allowed in this context
ORA-06550: line 6, column 41:
PLS-00430: FORALL iteration variable I is not allowed in this context
ORA-06550: line 6, column 5:
PLS-00435: DML statement without BULK In-BIND cannot be used inside FORALL
Note that the use of FORALL should be considered as shorthand for multiple DML statements, i.e.
BEGIN
  FORALL i IN 1..2
    dml_statement(i);
END;

is the equivalent of (but faster than)

BEGIN
  dml_statement(1);
  dml_statement(2);
END;
This has important ramifications for transactional control. Any code which COMMITs on an error will commit all SUCCESSFUL dml_statements that occurred BEFORE the dml_statement in error. Something to bear in mind.
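A minimal sketch of the point (assuming a hypothetical test_num_table with a single NOT NULL column) :
DECLARE
  TYPE num_list IS TABLE OF NUMBER;
  l_ids  num_list := num_list(1, 2, NULL);   /* assume the NULL violates a NOT NULL constraint */
BEGIN
  FORALL i IN 1..l_ids.COUNT
    INSERT INTO test_num_table VALUES (l_ids(i));
EXCEPTION
  WHEN OTHERS THEN
    /* The first two INSERTs have already executed; this COMMIT makes them
       permanent, whereas a ROLLBACK here would undo them */
    COMMIT;
END;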

*DEV NOTE*
The index variable can only be used as a direct reference to a collection index; it CANNOT be used within any expression (even within the collection index), i.e.
SQL> DECLARE
  2    l_tab   test := test('X');
  3    l_date  DATE := sysdate;
  4  begin
  5    FORALL i IN 1..l_tab.COUNT
  6      INSERT INTO t VALUES (l_tab(i-1+1), l_date);
  7  END;
  8  /
    INSERT INTO t VALUES (l_tab(i-1+1), l_date);
                                *
ERROR at line 6:
ORA-06550: line 6, column 33:
PLS-00430: FORALL iteration variable I is not allowed in this context
ORA-06550: line 6, column 27:
PL/SQL: ORA-00904: : invalid identifier
ORA-06550: line 6, column 5:
PL/SQL: SQL Statement ignored
SAVE EXCEPTIONS

You can carry on when errors are raised, by using the SAVE EXCEPTIONS clause, i.e.
FORALL index IN lower_bound..upper_bound SAVE EXCEPTIONS
   {insert_stmt | update_stmt | delete_stmt}
All exceptions raised during the execution are saved in the new cursor attribute %BULK_EXCEPTIONS, which stores a collection of records. Each record has two fields. The first field, %BULK_EXCEPTIONS(i).ERROR_INDEX, holds the "iteration" of the FORALL statement during which the exception was raised. The second field, %BULK_EXCEPTIONS(i).ERROR_CODE, holds the corresponding Oracle error code.
DECLARE
   TYPE NumList IS TABLE OF NUMBER;
   num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
   errors  NUMBER;
   dml_errors EXCEPTION;
   PRAGMA exception_init(dml_errors, -24381);
BEGIN
   FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
      DELETE FROM emp WHERE sal > 500000/num_tab(i);
EXCEPTION
  WHEN dml_errors THEN
   errors := SQL%BULK_EXCEPTIONS.COUNT;
   dbms_output.put_line('Number of errors is ' || errors);
   FOR i IN 1..errors LOOP
      dbms_output.put_line('Error ' || i || ' occurred during '|| 'iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
      dbms_output.put_line('Oracle error is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
   END LOOP;
END;

BULK COLLECT

BULK COLLECT can be considered the reverse of FORALL. It allows array-fetch population of collections (of any type) in a single statement.

For further information, see
Populating Collections

Polymorphism and Inheritance

With the advent of the Object features within Oracle, it became possible to map object-modelled data structures directly into the database. There are many uses for the OBJECT type within Oracle (one of which is their use in
collections).

TYPEs can now be extended and overridden to achieve polymorphism and inheritance features, i.e.
SQL> CREATE TYPE parent_type AS OBJECT (
  2    attribute1   VARCHAR2(10),
  3    MEMBER PROCEDURE parent_proc,
  4    MEMBER FUNCTION parent_func RETURN VARCHAR2
  5  )
  6  INSTANTIABLE NOT FINAL;
  7  /

Type created.
Note, the meanings of keywords such as INSTANTIABLE are listed below.
SQL> CREATE TYPE BODY parent_type
  2  AS
  3    MEMBER PROCEDURE parent_proc
  4    IS
  5    BEGIN
  6      NULL;
  7    END parent_proc;
  8    MEMBER FUNCTION parent_func RETURN VARCHAR2
  9    IS
 10    BEGIN
 11      RETURN 'X';
 12    END parent_func;
 13  END;
 14  /

Type body created.

SQL> CREATE TYPE child_type UNDER parent_type
  2  (
  3    child_attribute  VARCHAR2(10),
  4    MEMBER FUNCTION child_func RETURN VARCHAR2,
  5    OVERRIDING MEMBER FUNCTION parent_func RETURN VARCHAR2
  6  );
  7  /

Type created.
Note, the use of OVERRIDING. This means that we are overriding the member function declared in the parent type.
SQL> CREATE TYPE BODY child_type
  2  AS
  3    MEMBER FUNCTION child_func RETURN VARCHAR2
  4    AS
  5    BEGIN
  6      RETURN 'Y';
  7    END child_func;
  8    OVERRIDING MEMBER FUNCTION parent_func RETURN VARCHAR2
  9    AS
 10    BEGIN
 11      RETURN 'Z';
 12    END parent_func;
 13  END;
 14  /

Type body created.

SQL> DECLARE
  2    l_parent_type   parent_type := parent_type(attribute1 => NULL);
  3    l_child_type    child_type := child_type(attribute1 => NULL, child_attribute => NULL);
  4  BEGIN
  5    l_parent_type.parent_proc;  /* Example of calling a member procedure (doesn't do much though) */
  6    dbms_output.put_line(l_parent_type.parent_func);
  7    dbms_output.put_line(l_child_type.child_func);
  8    dbms_output.put_line(l_child_type.parent_func);
  9  END;
 10  /
X
Y
Z
Note, the child_type() constructor (because child_type is a subtype of parent_type and therefore inherits the parent_type() constructor attributes) takes TWO attributes. The parent type attributes can be specified positionally as well as explicitly (as in the above example); if specified positionally, the parent attributes always come first in the "list", i.e.
SQL> DECLARE
  2    l_child_type    child_type := child_type('A', 'B');
  3  BEGIN
  4    /* attribute1 is the attribute inherited from the parent type */
  5    dbms_output.put_line(l_child_type.attribute1);
  6    /* child_attribute is the attribute defined in the child type header */
  7    dbms_output.put_line(l_child_type.child_attribute);
  8  END;
  9  /
A
B
Quote from metalink note : 146398.1 INHERITANCE, SUBSTITUTABILITY AND POLYMORPHISM IN ORACLE9i
NOT INSTANTIABLE used to declare a type indicates that the type is an abstract type only;
that is, instances of the type cannot be created.

The default for types is INSTANTIABLE.

NOT INSTANTIABLE used to declare a method indicates that the type does
not provide an implementation for the method; any INSTANTIABLE subtype
of this type must provide its own implementation for this method.

The default for methods is INSTANTIABLE.

If any method of a type is declared NOT INSTANTIABLE, the type must be
NOT INSTANTIABLE also. It is not allowed to create instances of a type
which does not implement all of its own methods.

FINAL used to declare a type indicates the type cannot be subtyped. An
attempt to create a subtype of this type will result in an error.

NOT FINAL used to declare a type indicates the type can be subtyped.

The default for types is FINAL.

FINAL used to declare a method indicates that no subtype may override
the method. The method implementation as provided by the supertype is
used by all subtypes and cannot be changed.

NOT FINAL used to declare a method permits subtypes to override it.

The default for methods is NOT FINAL.

The combination of NOT INSTANTIABLE and FINAL is disallowed because it
serves no realistic purpose and would never be useful.
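As a minimal sketch of these keywords in action (hypothetical type names, not taken from the metalink note) :
/* An abstract type : NOT INSTANTIABLE means no instances can be created,
   NOT FINAL means it can be subtyped */
CREATE TYPE abstract_parent AS OBJECT (
  attribute1   VARCHAR2(10),
  NOT INSTANTIABLE MEMBER FUNCTION describe RETURN VARCHAR2
) NOT INSTANTIABLE NOT FINAL;
/

/* An INSTANTIABLE subtype must override describe (and supply a type body
   for it) before instances of the subtype can be created */
CREATE TYPE concrete_child UNDER abstract_parent (
  OVERRIDING MEMBER FUNCTION describe RETURN VARCHAR2
);
/
Attempting abstract_parent('X'), or trying to create a subtype UNDER a FINAL type, results in an error.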

Substitutability

Substitutability is the mechanism by which a child type can be used instead of the parent type, even though the parent type is explicitly declared.

Within Oracle, this is implemented as being able to do the following :
SQL> CREATE OR REPLACE TYPE parent_type AS OBJECT ( a VARCHAR2(10) )
  2  NOT FINAL;
  3  /

Type created.

SQL> CREATE OR REPLACE TYPE child_type UNDER parent_type ( b VARCHAR2(10) );
  2  /

Type created.

SQL> DECLARE
  2    l_tab   parent_type;
  3  BEGIN
  4    l_tab := child_type('X', 'Y');
  5  END;
  6  /

PL/SQL procedure successfully completed.
This shows that although l_tab is declared as parent_type, it can be initialised with the constructor of any subtype of parent_type, i.e. child_type; the components of the child type which do not correspond to components of the parent type are simply not accessible through the parent_type declaration, i.e.
/* "A" attribute exists in the parent type */

SQL> DECLARE
  2    l_tab   parent_type;
  3  BEGIN
  4    l_tab := child_type('X', 'Y');
  5    dbms_output.put_line(l_tab.a);
  6  END;
  7  /
X

PL/SQL procedure successfully completed.

/* "B" attribute does not exist in the parent type (but we still have to specify it in the child type
   object constructor). */

SQL> DECLARE
  2    l_tab   parent_type;
  3  BEGIN
  4    l_tab := child_type('X', 'Y');
  5    dbms_output.put_line(l_tab.b);
  6  END;
  7  /
  dbms_output.put_line(l_tab.b);
                             *
ERROR at line 5:
ORA-06550: line 5, column 30:
PLS-00302: component 'B' must be declared
ORA-06550: line 5, column 3:
PL/SQL: Statement ignored
*IMPORTANT*
Substitutability only works from child -> parent; trying to assign a parent type to a child type causes a type error, i.e.
SQL> DECLARE
  2    l_tab   child_type;
  3  BEGIN
  4    l_tab   := parent_type('X');
  5  END;
  6  /
  l_tab   := parent_type('X');
             *
ERROR at line 4:
ORA-06550: line 4, column 14:
PLS-00382: expression is of wrong type
ORA-06550: line 4, column 3:
PL/SQL: Statement ignored
Turning off Substitutability
You can prevent people from using this functionality on defined object types, BUT only where they are used in tables; it is impossible to prevent this at TYPE level, i.e.
SQL> CREATE TABLE object_table OF parent_type NOT SUBSTITUTABLE AT ALL LEVELS;

Table created.

/* Insert the parent_type instance */

SQL> INSERT INTO object_table VALUES ( parent_type ('X') )
  2  /

1 row created.
Try to insert the child type instance. Note, without the NOT SUBSTITUTABLE AT ALL LEVELS clause, this would work OK.
SQL> INSERT INTO object_table VALUES ( child_type('X', 'Y') );
INSERT INTO object_table VALUES ( child_type('X', 'Y') )
                                  *
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got UDT
Note, if a relational table defines the object type as a column, then use this syntax :
SQL> CREATE TABLE test_table ( a  parent_type )
  2    COLUMN a NOT SUBSTITUTABLE AT ALL LEVELS;

Table created.

OBJECT tables

Object tables allow relational tables to be created from the definition of OBJECT types. This means that every row is an instance of the object type, and should be treated as such, i.e.
SQL> CREATE TYPE test_type AS OBJECT ( a  VARCHAR2(10), b VARCHAR2(10) )
  2  /

Type created.

/* Create the object table. */

SQL> CREATE TABLE test_table OF test_type;

Table created.

SQL> INSERT INTO test_table VALUES (test_type('X', 'Y'));

1 row created.
Now, just to demonstrate that SQL*Plus will automatically expand the object in each row for us, and present it simply as a relational table with standard columns, i.e.
SQL> SELECT * FROM test_table;

A          B
---------- ----------
X          Y

1 row selected.

SQL> SELECT a FROM test_table;

A
----------
X

1 row selected.
*IMPORTANT*
SQL provides us with the VALUE, DEREF and REF built-ins in order to manipulate objects.

The basic difference between the built-ins is :
REF   - Converts an object into an object id
DEREF - Converts an object id (i.e. REF object) into an object
VALUE - Converts an object table "row" into an object
i.e.

REF
SQL> SELECT REF(p) FROM test_table p;

REF(P)
--------------------------------------------------------------------------------
0000280209CAF4B62D769645A6805FADC9BA7D5E82E6348F4746624FB3A9C8EB820C455C4303402B
A00000

1 row selected.
VALUE
SQL> SELECT VALUE(p)
  2  FROM   test_table p;

VALUE(P)(A, B)
--------------------------------------------------------------------------------
TEST_TYPE('X', 'Y')

1 row selected.
We cannot directly fetch this object into an OBJECT type variable, since (to the PL/SQL engine) it's a relational table, so we HAVE to use the VALUE function (or REF if we're fetching into a REF object), i.e.
SQL> DECLARE
  2    l_tab  test_type;
  3  BEGIN
  4    SELECT *
  5    INTO   l_tab
  6    FROM   test_table;
  7  END;
  8  /
  FROM   test_table;
  *
ERROR at line 6:
ORA-06550: line 6, column 3:
PL/SQL: ORA-00947: not enough values
ORA-06550: line 4, column 3:
PL/SQL: SQL Statement ignored

SQL> set serverout on

SQL> ed
Wrote file afiedt.buf

  1  DECLARE
  2    l_tab  test_type;
  3  BEGIN
  4    SELECT VALUE(p)      /* Use VALUE to ensure we have the correct datatype / number of values */
  5    INTO   l_tab
  6    FROM   test_table p;
  7    dbms_output.put_line(l_tab.a);
  8* END;
  9  /
X

PL/SQL procedure successfully completed.
*DEV NOTE*
PL/SQL will not allow us to navigate through REF variables, i.e.
/* We can get the REF ok */

SQL> DECLARE
  2    l_tab  REF test_type;
  3   BEGIN
  4     SELECT REF(p)
  5     INTO   l_tab
  6     FROM   test_table p;
  7  END;
  8  /

PL/SQL procedure successfully completed.

/* But, we cannot utilise the values directly */

SQL> ED
Wrote file afiedt.buf

  1  DECLARE
  2    l_tab  REF test_type;
  3  BEGIN
  4    SELECT REF(p)
  5    INTO   l_tab
  6    FROM   test_table p;
  7    dbms_output.put_line(l_tab.a);
  8* END;
  9  /
  dbms_output.put_line(l_tab.a);
                             *
ERROR at line 7:
ORA-06550: line 7, column 30:
PLS-00536: Navigation through REF variables is not supported in PL/SQL.
ORA-06550: line 7, column 3:
PL/SQL: Statement ignored
In order to achieve this, you would have to fetch a DEREF'ed object into a local variable of the object type itself, i.e. not a REF variable.
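For example, a minimal sketch using the test_type / test_table objects from the examples above :
DECLARE
  l_obj  test_type;
BEGIN
  /* DEREF the REF back into a variable of the object type itself,
     which CAN then be navigated */
  SELECT DEREF(REF(p))
  INTO   l_obj
  FROM   test_table p;
  dbms_output.put_line(l_obj.a);
END;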

OBJECT table scoping

Creation of an OBJECT table allows us to optionally specify the allowed source OBJECT table of any nested REF columns within the OBJECT type. This allows us to prevent REFs being INSERTed which do not point to a row in the specified table, i.e.
SQL> CREATE TYPE test_obj_type1 AS OBJECT ( a VARCHAR2(10) );
  2  /

Type created.

SQL> CREATE TYPE test_obj_type2 AS OBJECT ( b REF test_obj_type1 );
  2  /

Type created.

SQL> CREATE TABLE test_table1 OF test_obj_type1;

Table created.

/* Specify that test_table2 can only contain REFs (in column b) which point to rows of test_table1 */

SQL> CREATE TABLE test_table2 OF test_obj_type2
  2    ( SCOPE FOR (b) IS test_table1 );

Table created.

SQL> INSERT INTO test_table1 VALUES ( test_obj_type1('X') );

1 row created.

SQL> INSERT INTO test_table2
  2    SELECT REF(p) FROM test_table1 p;

1 row created.

/* Now, let's try INSERTing a REF from another object table of the same type, i.e. which is not specified in the
   REF scope */

SQL> CREATE TABLE test_table3 OF test_obj_type1;

Table created.

SQL> INSERT INTO test_table3 VALUES ( test_obj_type1('Y') );

1 row created.

SQL> INSERT INTO test_table2
  2    SELECT REF(p) FROM test_table3 p;
INSERT INTO test_table2
            *
ERROR at line 1:
ORA-22889: REF value does not point to scoped table

REF Objects

REF objects allow storage of "pointers" to rows, rather than the rows themselves.

This is an example of the basic use of REF objects.
/* First of all, create the OBJECT type */

SQL> CREATE TYPE test_type AS OBJECT ( a   VARCHAR2(10) );
  2  /

Type created.

/* Next, create the OBJECT table */

SQL> CREATE TABLE test_object_table OF test_type;

Table created.

/* Create a "standard" relational table with a REF column */

SQL> CREATE TABLE test_table ( b  REF test_type );

Table created.

/* Put a row into the OBJECT table (note, we need the OBJECT type constructor) */

SQL> INSERT INTO test_object_table VALUES ( test_type ('X') );

1 row created.

/* Now, we need to populate the relational table REF column. We HAVE to do this by SELECTing the REF (NOT the data itself). */

SQL> INSERT INTO test_table
  2    SELECT ref(q) FROM test_object_table q;

1 row created.

/* Proof that the relational table contains the REF */

SQL> SELECT * FROM test_table;

B
--------------------------------------------------------------------------------
0000220208C718789AB0354E2DBEECC7EE88D2D8DAC1D2090DBFCE43E0B09676AAAB298337

1 row selected.
/* In order to get the data from the REF'ed OBJECT table, we need to DEREF the REF */

SQL> SELECT DEREF(b) FROM test_table;

DEREF(B)(A)
--------------------------------------------------------------------------------
TEST_TYPE('X')

1 row selected.
In order to access the individual data elements of the OBJECT type, we can reference them directly, or via an inline view (which is possibly faster for multiple references) :

SQL> SELECT DEREF(b).a FROM test_table;

DEREF(B).A
----------
X

SQL> SELECT v.w.a FROM ( SELECT DEREF(b) w FROM test_table ) v;

W.A
----------
X
The real power behind this approach is the ability to reference multiple tables. They just have to be of the same type, i.e.
SQL> CREATE TABLE test_object_table2 OF test_type;

Table created.

SQL> INSERT INTO test_object_table2 VALUES ( test_type ('Y') );

1 row created.

SQL> COMMIT;

Commit complete.

SQL> INSERT INTO test_table SELECT REF(x) FROM test_object_table2 x;

1 row created.

SQL> SELECT DEREF(b).a FROM test_table;

DEREF(B).A
----------
X              <------- this row from test_object_table
Y              <------- this row from test_object_table2

2 rows selected.
AND, any updates to these tables are instantly reflected, since we are just storing pointers to the data and not the data itself.
SQL> UPDATE test_object_table SET a = 'Z';

1 row updated.

SQL> SELECT DEREF(b).a FROM test_table;

DEREF(B).A
----------
Z
Y

2 rows selected.
*DEV NOTE*
For REF security, see
OBJECT table scoping.

Dangling REFs

Since REFs are not constrained, it is possible to remove the object to which the REF is pointing, i.e.
SQL> CREATE TYPE test_type AS OBJECT ( a VARCHAR2(10) );
  2  /

Type created.

SQL> CREATE TABLE test_table OF test_type;

Table created.

SQL>
SQL> CREATE TABLE test_ref_table ( a  REF test_type );

Table created.

SQL>
SQL> INSERT INTO test_table VALUES (test_type('X'));

1 row created.

SQL> INSERT INTO test_ref_table SELECT REF(p) FROM test_table p
SQL> /

1 row created.

SQL> COMMIT;

Commit complete.

/* Now, remove the object to which the REF is pointing */

SQL> TRUNCATE TABLE test_table;

Table truncated.

/* Prove the REF is still there */

SQL> SELECT * FROM test_ref_table;

A
------------------------------------------------------------------------------------------------
0000220208659A7407CA1A4603B064D5656333A4C46F965EC4FDA94279A53489F7279B4E57

1 row selected.

/* No error on DEREF */

SQL> SELECT DEREF(a) FROM test_ref_table;

DEREF(A)(A)
------------------------------------------------------------------------------------------------


1 row selected.

/* Use the IS DANGLING predicate to check for REFs pointing to non-existent objects */

SQL> select * from test_ref_table where a IS DANGLING;

A
------------------------------------------------------------------------------------------------
0000220208659A7407CA1A4603B064D5656333A4C46F965EC4FDA94279A53489F7279B4E57

1 row selected.

AnyData TYPE

SYS.ANYDATA is (sort of) the Oracle equivalent of the Variant type of other languages (such as Visual Basic). It is implemented as an OBJECT TYPE with methods and attributes.
SQL> CREATE TABLE test_table ( a SYS.ANYDATA );

Table created.

SQL> INSERT INTO test_table
  2  VALUES ( sys.anydata.convertnumber(10) );

1 row created.

SQL> INSERT INTO test_table
  2  VALUES ( sys.anydata.convertdate(sysdate) );

1 row created.

SQL> INSERT INTO test_table
  2  VALUES ( sys.anydata.convertvarchar2('X') );

1 row created.

SQL> SELECT t.a.getTypeName()
  2  FROM   test_table t;

T.A.GETTYPENAME()
---------------------------------------------------------------------
SYS.NUMBER
SYS.DATE
SYS.VARCHAR2

3 rows selected.
*DEV NOTE*
Unfortunately, Oracle doesn't have a method to display the contents of ANYDATA in a query (it is most useful in programs that will fetch the data, figure out what it is and do something with it, e.g. where the application has some intelligence as to how to handle the data). Fortunately, we can write one ourselves:
SQL> create or replace function getData( p_x in sys.anyData ) return varchar2
  2  as
  3      l_num number;
  4      l_date date;
  5      l_varchar2 varchar2(4000);
  6  begin
  7      case p_x.gettypeName
  8          when 'SYS.NUMBER' then
  9              if ( p_x.getNumber( l_num ) = dbms_types.success )
 10              then
 11                  l_varchar2 := l_num;
 12              end if;
 13          when 'SYS.DATE' then
 14              if ( p_x.getDate( l_date ) = dbms_types.success )
 15              then
 16                  l_varchar2 := l_date;
 17              end if;
 18          when 'SYS.VARCHAR2' then
 19              if ( p_x.getVarchar2( l_varchar2 ) = dbms_types.success )
 20              then
 21                  null;
 22              end if;
 23          else
 24              l_varchar2 := '** unknown **';
 25      end case;
 26
 27      return l_varchar2;
 28  end;
 29  /

Function created.

SQL> SELECT getData(a)
  2  FROM   test_table;

GETDATA(A)
--------------------------------------------------------------------------------
10
24-FEB-03
X

3 rows selected.
*IMPORTANT*
You cannot directly index an ADT, i.e.
SQL> CREATE INDEX test_ind ON test_table ( a );
CREATE INDEX test_ind ON test_table ( a )
                                      *
ERROR at line 1:
ORA-02327: cannot create index on expression with datatype ADT.