An Introduction to 11.2 RAC Server Pools

Oracle RAC is designed to cater for both Oracle-specific applications and non-Oracle applications. When deployed on RAC, these applications, both Oracle and non-Oracle, are termed resources, and Oracle RAC can manage both kinds of resources on the clusterware stack. Examples of Oracle and non-Oracle resources are database instances, database services, xclock, Apache, and so on.

The traditional way to manage these resources was manual: the DBA was supposed to decide on which instance (or instances) these resources would run and to set them up accordingly. A typical example is configuring a service that has to be present on some nodes as Preferred and on other node(s) as Available. Although this kind of management works well, it does not scale for a cluster with a large number of nodes. Think about sitting and managing the instances over a 200-node cluster; it wouldn't be very exciting, would it?

Enter Server Pools

Grid computing is a concept that has been part of the Oracle database since 10g. The basic idea of grid computing is to break the hard coupling between a resource and the machines on which it is available, letting resources be available on a 'wherever and whenever' basis. This means there should be no restriction that a given resource must be present on a specific machine or can only be accessed from a specific machine. This very concept is enhanced in 11.2 RAC with the introduction of Server Pools.

Server Pools allow the same functionality of logically dividing the cluster into small segments that can be used for varying workloads. But unlike the traditional mechanism available up to 11.1, which achieves this only by choosing the nodes on which instances run as Preferred and Available and running services using the same scheme, server pools offer a much larger set of attributes that help manage resources in a simple and transparent way. With server pools, the underlying hosts of the cluster are added to (and removed from) the pools much more on the fly, taking this burden off the DBA's shoulders. With just a few parameters to take care of, the whole stack becomes much easier to govern and administer, while remaining transparent yet powerful enough to manage all the different types of resources in the RAC clusterware environment, especially when the number of nodes goes beyond two digits.

The built-in pools, Free and Generic

In 11.2, two pools get created by default with the installation of the cluster: Free and Generic.

Generic Pool

When you upgrade your existing version of the clusterware to 11.2, all the nodes get mapped to the built-in Generic pool. This is an internally managed server pool, and modification of its attributes is not allowed. Besides holding the nodes of the upgraded cluster, in 11.2, when you create a database that is Admin-managed, that database also becomes part of the Generic pool, as a child pool of it.

Free Pool

This is the other pool that gets created by default with the installation of the 11.2 clusterware. In a fresh installation of the clusterware environment, all the nodes are attached to this pool first. It is from this pool that nodes move to manually created server pools. Like the Generic pool, this is an internally managed pool, but some of its attributes, such as IMPORTANCE, can still be modified by the DBA.

The following output is from a 3-node 11.2.0.1 cluster after a fresh installation, with no database created on it:
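
(The listing below is only a representative sketch of what such a cluster might show; the host names host01, host02 and host03 are assumed for illustration, and the exact formatting can differ between versions.)

  $ crsctl status serverpool
  NAME=Free
  ACTIVE_SERVERS=host01 host02 host03

  NAME=Generic
  ACTIVE_SERVERS=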

Attributes of Server Pools

There are several attributes that can be set for server pools. The following, taken from the Clusterware Administration guide in the Oracle documentation, lists all the attributes, the values and formats they accept, and what each attribute means.

ACL
Values and format: a string in the following format:
    owner:user:rwx,pgrp:group:rwx,other::r--
Description: Defines the owner of the server pool and which privileges are granted to various operating system users and groups. The server pool owner defines the operating system user of the owner and which privileges that user is granted. The value of this optional attribute is populated at the time a server pool is created, based on the identity of the process creating the server pool, unless explicitly overridden. The value can subsequently be changed, if such a change is allowed based on the existing privileges of the server pool. In the string:

  • owner: The operating system user of the server pool owner, followed by the privileges of the owner
  • pgrp: The operating system group that is the primary group of the owner of the server pool, followed by the privileges of members of the primary group
  • other: Followed by the privileges of others
  • r: Read only
  • w: Modify attributes of the pool or delete it
  • x: Assign resources to this pool

By default, the identity of the client that creates the server pool is the owner. Also by default, root and the user specified in owner have full privileges. You can grant required operating system users and operating system groups their privileges by adding the following lines to the ACL attribute:
    user:username:rwx
    group:group_name:rwx

ACTIVE_SERVERS
Values and format: a string of server names in the following format:
    server_name1 server_name2 …
Description: Oracle Clusterware automatically manages this attribute, which contains the space-delimited list of servers that are currently assigned to the server pool.

EXCLUSIVE_POOLS
Values and format: string
Description: This optional attribute indicates whether servers assigned to this server pool are shared with other server pools. A server pool can explicitly state that it is exclusive of any other server pool that has the same value for this attribute. Two or more server pools are mutually exclusive when the sets of servers assigned to them do not have a single server in common. For example, server pools A and B must be exclusive if they both set the value of this attribute to foo_A_B. Top-level server pools are mutually exclusive by default.

IMPORTANCE
Values and format: any integer from 0 to 1000
Description: Relative importance of the server pool, with 0 denoting the lowest level of importance and 1000 the highest. This optional attribute is used to determine how to reconfigure the server pools when a node joins or leaves the cluster. The default value is 0.

MAX_SIZE
Values and format: any nonnegative integer, or -1 (no limit)
Description: The maximum number of servers a server pool can contain. This attribute is optional and is set to -1 (no limit) by default. Note: A value of -1 for this attribute spans the entire cluster.

MIN_SIZE
Values and format: any nonnegative integer
Description: The minimum size of a server pool. If the number of servers contained in a server pool is below the number you specify in this attribute, then Oracle Clusterware automatically moves servers from other pools into this one until that number is met. Note: The value of this optional attribute does not set a hard limit. It governs the priority for server assignment whenever the cluster is reconfigured. The default value is 0.

NAME
Values and format: string
Description: The name of the server pool, which you must specify when you create the server pool. Server pool names must be unique within the domain of names of user-created entities, such as resources, types, and servers. A server pool name can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A server pool name cannot begin with a period nor with ora.

PARENT_POOLS
Values and format: a string of space-delimited server pool names in the following format:
    sp1 sp2 …
Description: Use of this attribute makes it possible to create nested server pools. Server pools listed in this attribute are referred to as parent server pools. A server pool included in a parent server pool is referred to as a child server pool.

SERVER_NAMES
Values and format: a string of space-delimited server names in the following format:
    server1 server2 …
Description: A list of candidate node names that may be associated with a server pool. If this optional attribute is empty, Oracle Clusterware assumes that any server may be assigned to any server pool, to the extent allowed by the values of other attributes, such as PARENT_POOLS. The server names identified as candidate node names are not validated to confirm that they are currently active cluster members. Cluster administrators can use this attribute to define servers as candidates that have not yet been added to the cluster.

You can see these attributes, along with the available nodes and their properties, by using the CRSCTL command as shown below:
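
(A sketch of the kind of output to expect; the host names and values are assumed.)

  $ crsctl status server -f
  NAME=host01
  STATE=ONLINE
  ACTIVE_POOLS=Free
  STATE_DETAILS=

  NAME=host02
  STATE=ONLINE
  ACTIVE_POOLS=Free
  STATE_DETAILS=

  NAME=host03
  STATE=ONLINE
  ACTIVE_POOLS=Free
  STATE_DETAILS=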

And the output below shows the complete set of attributes for these server pools:
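
(A representative excerpt for the Free pool only; the ACL owner and group and the host names depend on the installation.)

  $ crsctl status serverpool -f
  NAME=Free
  IMPORTANCE=0
  MIN_SIZE=0
  MAX_SIZE=-1
  SERVER_NAMES=
  PARENT_POOLS=
  EXCLUSIVE_POOLS=
  ACL=owner:grid:r-x,pgrp:oinstall:r-x,other::r-x
  ACTIVE_SERVERS=host01 host02 host03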

You can check the status of the server pools collectively, but the default server pools, Generic and Free, cannot be fully described; only a limited view of both is available. The Free pool can still be queried, but the Generic server pool remains inaccessible, as the output below confirms:
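
(The queries one would run look like this; the exact responses for the Generic pool are not reproduced here and vary by version.)

  $ crsctl status serverpool Free -f
  $ crsctl status serverpool Generic -f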

You can add your own server pools with the same CRSCTL command. The commands below add two server pools, SP1 and SP2, and then display them:
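
(A minimal sketch: the MIN_SIZE, MAX_SIZE and IMPORTANCE values, and the resulting host placement, are illustrative only.)

  $ crsctl add serverpool SP1 -attr "MIN_SIZE=1,MAX_SIZE=1,IMPORTANCE=2"
  $ crsctl add serverpool SP2 -attr "MIN_SIZE=1,MAX_SIZE=1,IMPORTANCE=1"
  $ crsctl status serverpool
  NAME=Free
  ACTIVE_SERVERS=host03

  NAME=Generic
  ACTIVE_SERVERS=

  NAME=SP1
  ACTIVE_SERVERS=host01

  NAME=SP2
  ACTIVE_SERVERS=host02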

As expected, one host has been added to each of the pools, and one node, still unallocated, remains in the server pool Free as an available host that can be allocated either to the existing pools or to a new one when it is added.

Child Pools

Server pools can also have child pools under them. The child pools have their own properties, and the order in which hosts are allocated to them is based on the IMPORTANCE attribute assigned to them. To understand child pools, imagine a parent pool named OLTP that represents an OLTP database workload. This parent pool can have two child pools, HR (human resources) and AP (accounts payable). We can now set up different attributes for the child pools depending on the importance we want to give to each domain of work. The hosts assigned to parent pools are mutually exclusive between the parent pools. Whether the same is true for the child pools depends on the value of the EXCLUSIVE_POOLS attribute for them: if a string value is assigned to this attribute and it is shared by two child pools, a host assigned to one child pool can't be shared by its sibling.

The following is an excerpt of the output for two parent pools with child pools added to them:
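
(A hedged illustration using the hypothetical OLTP parent pool and its HR child pool mentioned above; a second parent pool would look similar. All names, attribute values and host names are assumed.)

  $ crsctl add serverpool OLTP -attr "MIN_SIZE=2,MAX_SIZE=2,IMPORTANCE=5"
  $ crsctl add serverpool HR -attr "PARENT_POOLS=OLTP,MIN_SIZE=1"
  $ crsctl status serverpool HR -f
  NAME=HR
  IMPORTANCE=0
  MIN_SIZE=1
  MAX_SIZE=-1
  SERVER_NAMES=
  PARENT_POOLS=OLTP
  EXCLUSIVE_POOLS=
  ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
  ACTIVE_SERVERS=host01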

How the hosts are added to the server pools

The allocation of hosts happens in the following order:

  1. Generic Pool
  2. Explicitly created pool
  3. Free Pool

Oracle keeps adding hosts to the server pools, on the basis of the IMPORTANCE assigned to them, until their MIN_SIZE and MAX_SIZE values are met. The hosts are added to the pools exclusively, i.e. a host added to one server pool is not shared with the other pools. Hosts can be shared by child pools, provided that no value is set for the EXCLUSIVE_POOLS attribute, which would restrict host sharing within the child pools as well.
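
(As a sketch, two sibling child pools can be made mutually exclusive by giving them the same EXCLUSIVE_POOLS tag; the tag value ex1 and the pool names here are purely illustrative.)

  $ crsctl modify serverpool SP_CHILD1 -attr "EXCLUSIVE_POOLS=ex1"
  $ crsctl modify serverpool SP_CHILD2 -attr "EXCLUSIVE_POOLS=ex1"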

In the following example, a few parent and child pools are created and hosts are attached to them. We start by creating a parent pool SP1 and then, under it, a child pool SP_CHILD1:
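
(The attribute values and host names below are assumed for this sketch.)

  $ crsctl add serverpool SP1 -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=3"
  $ crsctl add serverpool SP_CHILD1 -attr "PARENT_POOLS=SP1,MIN_SIZE=1"
  $ crsctl status serverpool SP_CHILD1
  NAME=SP_CHILD1
  ACTIVE_SERVERS=host02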

So we can see that one host, host02, is attached to the child pool SP_CHILD1. Now we create another child pool, SP_CHILD2:
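
(Again a sketch, with EXCLUSIVE_POOLS deliberately left unset.)

  $ crsctl add serverpool SP_CHILD2 -attr "PARENT_POOLS=SP1,MIN_SIZE=1"
  $ crsctl status serverpool SP_CHILD2
  NAME=SP_CHILD2
  ACTIVE_SERVERS=host02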

Since we haven't set the EXCLUSIVE_POOLS attribute, the same host, host02, is shared by both children.

Now, let’s add another parent pool SP2:
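
(Only the commands are sketched here; how the hosts get redistributed depends on the attributes of all the pools involved, and is summarized in the next paragraph.)

  $ crsctl add serverpool SP2 -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=2"
  $ crsctl status serverpool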

So there is now one server, host01, in the Free pool, and two hosts, host02 and host03, which are shared by the two pools SP1 and SP2 and the two children SP_CHILD1 and SP_CHILD2.

In the lines below, we again create a few parent pools and the child pools underlying them.
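
(A sketch of re-creating the same layout; the names and attribute values are as before and purely illustrative.)

  $ crsctl add serverpool SP1 -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=3"
  $ crsctl add serverpool SP_CHILD1 -attr "PARENT_POOLS=SP1,MIN_SIZE=1"
  $ crsctl add serverpool SP_CHILD2 -attr "PARENT_POOLS=SP1,MIN_SIZE=1"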

So we have one parent pool with two children, and they are sharing the node host02. Now we add another parent pool, SP2, and also try to add a child pool under it, but with the same name that is already assigned to a child pool of the parent pool SP1:
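
(Sketch only; because server pool names must be unique across the cluster, the second command is rejected. The exact error text is not reproduced here.)

  $ crsctl add serverpool SP2 -attr "MIN_SIZE=1,MAX_SIZE=2,IMPORTANCE=2"
  $ crsctl add serverpool SP_CHILD1 -attr "PARENT_POOLS=SP2"
  # fails: a server pool named SP_CHILD1 already exists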

So clearly that's not possible. But after using a distinct name, it works:
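
(The child pool name SP_CHILD3 is hypothetical, used only to show a distinct name succeeding.)

  $ crsctl add serverpool SP_CHILD3 -attr "PARENT_POOLS=SP2"
  $ crsctl status serverpool SP_CHILD3
  NAME=SP_CHILD3
  ACTIVE_SERVERS=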

Database management changes in 11.2 to use server pools

To incorporate the usage of server pools, a change has been introduced in the database itself. From 11.2 onwards, there are two types of databases available: Admin-managed and Policy-managed. Choosing either one makes you use server pools, though whether those server pools are explicitly created by you or are the default ones already available will vary.

Admin managed database

This is much the same as the traditional database type, i.e. the DBA is responsible for explicitly managing the allocation of resources such as services to the available instances, and of the instances to their respective nodes. This database type assigns its instances to a child pool of the default Generic pool. Because the Generic pool is the parent, the DBA is not allowed to modify the child pools holding the Admin-managed database's instances. Depending on the number of nodes chosen initially for the database, an instance is started on each one of them. In an upgrade, all the nodes of the previous version get added to the Generic pool, and when an Admin-managed database is created on a fresh 11.2 RAC installation, the nodes selected for it move into the Generic pool. Although this type seems much simpler and closer to the older techniques of managing instances, it can't be used comfortably when the number of nodes passes what a DBA can reasonably manage, for example 100. In that case, the other type of 11.2 database becomes helpful: the Policy-managed database.

Policy Managed Database

This type of database administration is meant to make the DBA's life simpler by not requiring them to worry about managing each of their instances on each node individually, but instead to use a server pool to manage the instances. By using server pools, the administrative tasks of managing the database become much simpler.

From 11.2, DBCA (Database Configuration Assistant) offers both options for creating a database. To check whether a database is admin-managed or policy-managed, we can use the SRVCTL command as below:
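
(Output trimmed; the database name orcladm comes from this example, and the remaining lines are only representative of an 11.2 srvctl listing.)

  $ srvctl config database -d orcladm
  Database unique name: orcladm
  ...
  Database instances: orcladm1,orcladm2
  ...
  Database is administrator managed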

So we can see that the database ORCLADM is Admin-managed. To convert this database to Policy-managed, you can proceed as follows:
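
(A hedged sketch of the conversion: first create the target server pool, then move the database into it. The pool sizing values are illustrative.)

  $ srvctl add srvpool -g ORCLSP -l 1 -u 2
  $ srvctl modify database -d orcladm -g ORCLSP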

Here ORCLSP is the server pool on which the database ORCLADM will be running, and we can confirm it using the same SRVCTL command:
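
(Output trimmed to the relevant lines.)

  $ srvctl config database -d orcladm
  Database unique name: orcladm
  ...
  Server pools: ORCLSP
  ...
  Database is policy managed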

The database is now Policy-managed.

Conclusion

Clusters with a very large number of nodes are not (yet) very common, but that is not going to take very long to change either. And when it does, the management of clusterware resources will get a huge benefit from server pools. Hopefully this short article has given you an idea of what server pools are and how to get started with them.

Reference: Oracle Clusterware Administration and Deployment Guide, 11g Release 2 (11.2)