Sep 09

Archiver Hung due to COMPATIBLE ORA-16484

This morning I woke up to a few alerts from EM about my archiver being hung, similar to the following:

Target type=Database Instance 
Target name=orcl4 
Message=The archiver hung at time/line number: Fri Sep 09 06:07:22 2016/376. 

I used the DG Broker to stop and then restart log transport.

edit database orcl set state=transport-off;
edit database orcl set state=transport-on;
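Before toggling transport, the broker can also report why transport stopped in the first place. A quick sketch of the checks I could have started with (database name matches the example above; LogXptStatus is a standard broker monitorable property):

```
DGMGRL> show configuration
DGMGRL> show database orcl 'LogXptStatus'
```

The second command reports the log transport status, including any ORA- error, for each destination.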

But the archiver would still get hung. So it's off to the alert log for more clues. I found this in the primary's alert log:

TT00: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (16484)
TT00: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Fri Sep 09 08:07:40 2016
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl4/trace/orcl4_tt00_16068.trc:
ORA-16484: compatibility setting is too low

The error message seems self-explanatory. I have COMPATIBLE set too low. At this point, I remembered that I changed COMPATIBLE in the primary a month ago. I must have forgotten to also change it in the standby. A quick verification proved my hypothesis. COMPATIBLE was set to the new, higher value in the primary but still 11.2.0 in the standby. So there's my problem. I changed COMPATIBLE in the standby, bounced it, and then resumed log transport. Life was fine and everything was fixed.
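If you want to verify the same thing in your environment, a quick query on both the primary and the standby will expose a mismatch like this one:

```
SQL> select name, value from v$parameter where name = 'compatible';
```

Run it on each side and compare the two values; in a Data Guard configuration they should match.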

If you remember right, I said that I changed COMPATIBLE in the primary a month ago. Why was this a problem today and not back then? To answer that, you'd have to know the change history for this database. Last night, we released new code to production. Part of the code release included a new table that used Oracle 12c's new IDENTITY column feature. This was the first 12c-only feature we deployed in our code base. The standby was attempting to create the table with the new feature, but that operation could not complete because of the improper parameter setting. I'm still a little confused as to how this affected log transport. I would have expected only log apply to be broken, but this is how it manifested itself.


Sep 01

N+1 Redundancy and Server Consolidation

In a previous blog post, I talked about designing your RAC implementations with N+1 redundancy, which ensures the remaining nodes are not overwhelmed should one node fail. Today, I'm going to discuss the impact of server consolidation, specifically CPU cores, and its potential N+1 impacts.

One of the main systems I work on today is a four-node Oracle RAC cluster. Each node has 4 cores. I am licensed for all 16 cores (4 cores/node * 4 nodes = 16 total cores) in my environment. I designed this system with N+1 redundancy. If I lose a node, and with it 4 cores, I'm still good because my workload demands only require 12 cores to maintain normal, acceptable levels of performance.

When this system was initially designed, 4-core servers were commonplace. Today’s environment is different and it is getting harder to find 4-core servers. They still exist, but hardware vendors are pushing systems with greater core counts.

In a conversation with my SysAdmin recently, he wanted to order 6-core systems to replace our 3-year-old servers. Well, we can't simply do that. My Oracle license is for 16 cores. I could deploy three 6-core systems, but then I'd have a total of 18 cores in the cluster and I'd need 2 more cores' worth of Oracle licenses. If I deployed two 6-core systems, I'd have a total of 12 cores and have 4 cores' worth of licenses sitting unused.

I also informed the SysAdmin about our N+1 design. Moving to 6-core systems would have major impacts on that design. Remember that I said earlier that our workload demands require 12 cores to maintain normal operational levels. If we deploy 6-core machines, then two of them meet our requirements and another node, the “+1”, would be needed to ensure we can lose a node without major impacts to performance. We'd need to deploy three 6-core machines to keep our N+1 design goal intact. But this means we would need to increase our license counts, as I stated earlier.

At this point, my SysAdmin thought he had a good idea…we could buy two 8-core servers. That’s still 16 total cores and exactly what we are licensed for today. No increase in license fees. But if we lose a node, we go down to 8 total cores in operation which is less than I need. This brings up a very good point…

…Increasing the core count on your servers can have detrimental impacts on your N+1 redundancy.
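The arithmetic behind all three server options can be sketched in a few lines of shell. This is just illustrative math using the numbers from this post (16 licensed cores, 12 cores of workload), and the loop only considers configurations that stay within the existing license:

```shell
# N+1 sizing sketch: numbers from this post, purely illustrative.
LICENSED_CORES=16      # total cores licensed today
WORKLOAD_CORES=12      # cores needed for normal performance
for CORES_PER_NODE in 4 6 8; do
  # largest node count that stays within the license
  NODES=$(( LICENSED_CORES / CORES_PER_NODE ))
  # cores left running after losing one node
  SURVIVING=$(( (NODES - 1) * CORES_PER_NODE ))
  if [ "$SURVIVING" -ge "$WORKLOAD_CORES" ]; then
    STATUS="N+1 holds"
  else
    STATUS="N+1 violated"
  fi
  echo "${CORES_PER_NODE}-core nodes: ${NODES} nodes, ${SURVIVING} cores after failure -> ${STATUS}"
done
```

Only the 4-core option keeps N+1 intact; with 6- or 8-core nodes, any configuration inside the 16-core license cannot lose a node and still have 12 cores running.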

Right now, there is no easy answer. We can still buy 4-core servers so that’s what we’ll replace the current ones with next year. But there will come a day when 4-core servers are impossible to find. We’ll need to have a plan in place by that time, keeping in mind our N+1 design goals.

If only I could hard-partition our Linux servers and let cores sit idle and still be in compliance with our Oracle license agreements.

Aug 22

opatch prereq

I was recently applying the July PSU to one of my RAC databases and I kept getting an error from OPatch that the prereq checkSystemSpace failed. It should be no surprise to anyone out there that OPatch goes through a number of pre-requisite checks before applying the patch. OPatch is trying to ensure a good chance of success and not bomb out mid-way through the patch process. 

Now checkSystemSpace is just one of the checks OPatch performs. And from its title, I can easily discern that I am short on space, probably disk space. I know this is on the disk mount where ORACLE_HOME resides as that is the only disk space I am modifying when applying the patch. But what OPatch was not telling me on the screen is how much space was needed. The OPatch log file did not tell me either. I had about 8GB of free space on that drive and the PSU was nowhere near that size. I did clean up some old files but OPatch still failed on this check.

To find out how much disk space was needed, I manually ran OPatch for this specific prerequisite check as follows:

opatch prereq checkSystemSpace -ph .

I am running a prerequisite check. The second parameter to OPatch tells it which check to run. The -ph directive tells OPatch which patch home directory to use and in my case, it is the current directory (denoted by the period). The output was similar to the following trimmed for brevity:

    Space Needed : 8780.128MB
    Space Usable : 8347.293MB
    Required amount of space(8780.128MB) is not available.
    Prereq "checkSystemSpace" failed.

By running the prereq check manually, I was able to obtain the information I needed. OPatch is looking for about 8.7GB of free space and I have 8.3GB available and I’m about 430MB short. So now I have a good idea how much space I need to clean up before this check will pass correctly.
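With those two numbers in hand, the shortfall is simple arithmetic. A small sketch (the here-doc merely reproduces the two lines above; in practice you would pipe the real `opatch prereq checkSystemSpace -ph .` output into the awk script):

```shell
# Compute how much space must be freed, from OPatch's own output.
shortfall_mb=$(awk -F': ' '
  /Space Needed/ {needed = $2 + 0}   # "+ 0" strips the trailing "MB"
  /Space Usable/ {usable = $2 + 0}
  END {printf "%.3f", needed - usable}
' <<'EOF'
    Space Needed : 8780.128MB
    Space Usable : 8347.293MB
EOF
)
echo "Need to free at least ${shortfall_mb} MB"
```

This reports a shortfall of 432.835 MB, matching the roughly 430 MB I mentioned above.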

OPatch has a number of other checks with it. To find out all of them, issue the following:

opatch prereq -h

This will not only give the names of the checks, but a short description of each. For example, you might be notified that the check CheckApplicable has failed. What does that mean? The one-paragraph description is:

              Check for the presence of the required components in
              the ORACLE_HOME and check if all the actions of the 
              given patch(es) are applicable.

So this check is ensuring that the Oracle home directory has the components needed to succeed. For example, you can’t apply a patch to fix a partitioning problem if the Partitioning option is not installed. Use OPatch to see the rest of the prereq checks it performs. Most of them go on in the background without notice, so some of them might surprise you.

Next time you have a prereq check fail, try to run it manually to see if you have more information than what was initially provided when you tried to apply the patch.

Jul 21

July 2016 PSU fails to make isqora

When applying the latest PSU, I received the following errors from my “opatch apply” session:


Patching component oracle.odbc.ic,
Make failed to invoke "/usr/bin/make -f isqora 
   ORACLE_HOME=/u01/app/oracle/product/"....'/usr/bin/ld: cannot find -lodbcinst
collect2: error: ld returned 1 exit status
make: *** [/u01/app/oracle/product/] Error 1

The following make actions have failed :

Re-link fails on target "isqora".
Composite patch 23054246 successfully applied.
OPatch Session completed with warnings.
Log file location: /u01/app/oracle/product/
OPatch completed with warnings.



The patch was applied successfully, but the relinking did not work correctly. To fix this, I did the following:

cp $ORACLE_HOME/lib/ $ORACLE_HOME/odbc/lib/.
relink all

That’s all there was to it.

I did the copy and relink steps because I was trying to fix the error from OPatch. A better way to handle this is to do the copy first, then run ‘opatch apply’ and you won’t get any errors at all.


I see that Bug 24332805 was posted for this issue, but I’m not privileged enough to see the contents of that bug.

Jul 13

Fog Computing

Unless you’ve been living under a rock, you know that everything is about the Cloud now. At least that’s the hype vendors and the media want us to believe. Earlier this week, I was clued into a new term. Just as people are working toward Cloud Computing, along comes the newer Fog Computing.

So what is Fog Computing? Basically, it’s cloud computing that has been pushed out to the data sources. In Cloud Computing, your devices and applications push data into the proverbial Cloud, a centralized location. The problem Fog Computing aims to address is that with all of this data being pushed into the cloud, the centralized compute power may be inundated with data it has to process, and latency may be too great. Fog Computing pushes processing back to the data source. Data is processed first, and then what is hopefully a smaller amount of data is sent up to the cloud. Fog Computing is very important when talking about the Internet of Things. Instead of many, many devices sending all their data back to one spot to be processed, only the processed data is forwarded.

When I first read about this concept, my immediate thought was how the more things change, the more they stay the same in our cyclical nature of IT. I’m old enough that in my early days in the IT field, I worked on mainframe systems. I remember this fancy new computing model that was going to take the world by storm called “Client/Server”. Mainframes couldn’t keep up with the ever-increasing demands of the day’s applications. So instead of processing all of the data in a central location, we’ll move the processing to the client side. Scalability is easier because we handle a larger workload by deploying more clients. The Internet and web computing changed things a bit because the browser became a thin client to centralized web/application servers, but we still had scalability by deploying multiple web servers behind a load balancer to make them appear as a singular unit. Cloud computing was going to be the newest version of sliced bread. Just push your processing to the cloud and you’ll sleep much better at night. But now that some in the industry are seeing some of the pitfalls to this approach, we’re pushing processing back to the edges again.

One day, you’ll hear the media and vendors espousing the benefits of fog computing and how those very vendors are there to help you handle your transition to the new promised land. And then along will come another way to do business. It goes in cycles. Mainframe (centralized) to Client/Server (distributed) to Cloud (centralized) to Fog (distributed) to ??????

On another note, I had an editor for some database-related outlet ask me to write some articles about databases in the cloud. The editor said that “cloud is THE hot topic” as we all know. Well, I can only write so much about the cloud because I don’t have a ton of experience with it. My company has our internal private cloud implementation, but to be honest, I have to take a lot of liberties with the term “cloud” to call it a private cloud. When I told the editor I wasn’t sure how much I could contribute, they told me they were not finding enough experts to talk about the cloud because “most people I talk to haven’t moved to the cloud.” This raises the question: is it just the media and vendors espousing the cloud?

Jun 07

Oracle RAC VIP and ARP Primer

I’ve been running across a number of questions about Virtual IP (VIP) addresses for Oracle RAC lately. I hope this blog post can help shed some light on what a VIP is, how they work, and why Oracle RAC leverages them. Before I go further, I should explain that I am not a Network specialist. In fact, computer networking is probably my weakest point of everything that goes on in IT shops. So do not flame me if I do not get the networking stuff entirely, 100% accurate.  I will explain this in terms that have served me well in my DBA career, especially working with Oracle RAC.

Most people are familiar with connecting an application, SQL*Plus or otherwise, to a single-instance database server. In the end, the connection request is sent to a specific IP address. In our diagram below, the end user wants to connect to the database server's IP address. The network request gets routed to the network switch the database server is connected to, and the switch passes the request to the server that has the requested IP address.



Most Oracle DBAs do not have a problem understanding this concept. Life does get a little more complicated when RAC is deployed as there are multiple machines (nodes) in the cluster.  In the next diagram, we have a two-node RAC cluster, each node having a different IP address.


The end user doesn’t care which node his session is connected to; he just wants access to the cluster, and either node will suffice. The end user’s TNSNAMES.ORA configuration may say to try one node's address first and, if that doesn’t work, to try the other instead. In this way, Oracle RAC is providing a High Availability solution.
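That try-one-then-the-other behavior comes from a client-side TNSNAMES.ORA entry along the lines of the sketch below (host names, port, and service name are hypothetical):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```

If the first address does not respond, the client walks down the list and tries the next one.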

Now we come to the entire reason Virtual IP addresses are used. What if the end user is trying to access the first node but it is unavailable? The node is down for some reason. The end user could easily connect to the other node instead. However, due to the way TCP/IP networks work, it can take up to ten minutes for the network connection to the down node to time out before the other node will be accessed. That lengthy TCP/IP timeout is the sole reason Oracle RAC leverages VIPs. We simply want to reduce the time it takes to access another node in the cluster should our requested node be unavailable.

A traditional IP is usually bound to the network card on the server. The IT admin will configure the server to always use that specific IP address and no other devices on the network will use the same IP.  Note: I’m trying to make this simple here and avoiding DHCP and lease registration for those that are familiar with the topics.

A virtual IP address is not bound to the network card. It isn’t even defined in the OS. The VIP is not a real IP address, similar to the way a Virtual Machine is not a real system; it just appears to be real to those using it. So let’s look at our two-node cluster, but this time with VIPs defined for the nodes.


Our servers still have their regular IP addresses for NODE1 and NODE2. These two nodes also have VIPs associated with them, denoted NODE1-VIP and NODE2-VIP. Each node in the RAC cluster has its regular IP address and a VIP. It may also be beneficial to know that the host names and the VIP names are often defined in DNS so that we do not have to remember the IP addresses specifically.

Notice that the end user is now requesting to access one of the VIPs. The only people who should be using these traditional IP addresses are IT administrators who need to perform work on the server. End users and any and all applications should connect with the VIP.

Remember that I said earlier that the VIP isn’t even defined in the OS? Well if that’s the case, then how does everything know that the VIP is assigned to that node? This is all handled by Grid Infrastructure (GI). When GI is installed, one of the Oracle Universal Installer (OUI) screens will ask for the names of the nodes in the cluster (the host names) along with the virtual hostname. The screen shot below shows how the 11g GI installation looked when asking for that information (screen shot from Oracle documentation).


The public hostname is configured in the OS by the administrator. The Virtual IP is not configured in the OS but Grid Infrastructure knows about it. To understand how this works, we need to digress a bit and understand Address Resolution Protocol (ARP).

When a server starts up and its networking components are initiated, Address Resolution Protocol is the mechanism that tells the switch in front of the server to route all traffic for its IP address to the MAC address of its network card. The OS, through ARP, tells the switch to go to NODE1 for requests to NODE1's IP address.

When Grid Infrastructure starts, one of its startup tasks is to do something similar. GI, through ARP, tells the switch to go to NODE1 for all NODE1-VIP requests. Until GI starts the VIP, that VIP address is un-routable.

Now here’s the magic part…when NODE1 goes down, GI on another node in the cluster will detect the outage. GI will then perform a new ARP operation that informs the switch to route the VIP to another node in the cluster. Because the VIP is virtual, it can be re-routed on the fly. In the diagram below, NODE1 has failed. Its traditional IP is no longer available either. GI has re-ARPed the VIP to the remaining node in the cluster.


The re-ARPing of the VIP can be accomplished in seconds. The end user may experience a brief pause in their network communication between the application and the database instance, but this is much, much less than if we waited for TCP/IP timeouts.

Oracle 11gR2 introduced the SCAN Listeners. An Oracle RAC cluster can have at most three SCAN Listeners. The SCAN name is defined in DNS, and DNS round-robins the SCAN name resolution among up to three different IP addresses.

In the diagram below, our two-node cluster now has two SCAN listeners. The end user makes a connection request to the SCAN name and DNS resolves that name to one of the SCAN VIP addresses.


As is shown above, those two SCAN VIPs are assigned to different nodes in the cluster. If NODE1 goes down, Grid Infrastructure will relocate both NODE1-VIP and its SCAN VIP to a surviving node in the cluster, through the same re-ARP operation we talked about earlier. The newer SCAN listeners and their VIPs are handled the same way as the old-style VIPs.
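You can watch the round-robin behavior from any client by resolving the SCAN name a few times, and Grid Infrastructure will show you the SCAN configuration itself (SCAN name hypothetical here):

```
$ nslookup my-scan
$ srvctl config scan
```

Repeated nslookup calls should return the SCAN VIP addresses in rotating order.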

To recap, Virtual IP addresses are used to provide quicker failover of network communications between the application and the nodes in the cluster. The OS uses Address Resolution Protocol to let the network switch know to route connections to the host. Grid Infrastructure uses the same ARP operations to let the network switch know where to route traffic for the VIP and the SCAN VIP. Should a node go down, GI will re-ARP the VIP and SCAN VIP to another node in the cluster.



Apr 28

EM SQL Monitor Impact

In case anyone needs a reminder, it’s always a good idea to determine the impact of your monitoring tools on the very database you are monitoring. Some monitoring tools are lightweight and others are more obtrusive. I am using Enterprise Manager 13c to monitor a specific SQL statement while it’s running. I noticed in another monitoring tool (Lighty by Orachrome) that the following SQL statement was consuming a good amount of resources:

,NVL2 (


I cut off the rest of the text. This SQL statement is literally a few thousand lines long. Yikes! But that isn’t the issue.  In Lighty, I noticed the activity in this screen shot.





The top SQL statement is my CPU pig. I blacked out the SQL text to protect potentially proprietary information. Notice that last SQL statement. It is consuming a fair amount of resources for just monitoring the system.

Here is a screenshot of the EM13c window.





When I turned off the Auto Refresh (it defaults to 15 seconds), the activity on the system ceased. I now manually press the refresh button when I need an update.
There are certainly times to use the auto refresh, even every 15 seconds. Just be mindful of the potential negative impact on the database.


Apr 26

Recreate Bad RAC Node

Just recently, I was trying to apply the latest and greatest Patch Set Update (PSU) to a 2-node Oracle RAC system. Everything went smoothly on the first node. I did have problems when trying to apply the PSU to the second node. The problem wasn’t with OPatch or the PSU, but rather, I could not even bring down Grid Infrastructure (GI) successfully. And to make matters worse, it would not come up either.

I tracked my issue down to the Grid Inter Process Communication Daemon (gipcd). When issuing ‘crsctl stop crs’, I received a message stating that gipcd could not be successfully terminated. When starting GI, the startup got as far as trying to start gipcd and then it quit. I found many helpful articles on My Oracle Support (MOS) and with Google searches. Many of those documents seemed to be right on track with my issue, but I could not successfully get GI back up and running. Rebooting the node did not help either. The remainder of this article can help even if your issue is not with gipcd; that was just the sticking point for me.

So at this juncture, I had a decision to make. I could file a Service Request (SR) on MOS. Or I could “rebuild” that node in the cluster. I knew that if I filed an SR, I’d be lucky to have the node operational any time in the next week. I did not want to wait that long, and if this were a production system, I could not have waited that long. So I decided to rebuild the node. This blog post will detail the steps I took. At a high level, this is what is involved:

  1. Remove the node from the cluster.
  2. Cleanup any GI and RDBMS remnants on that node.
  3. Add the node back to the cluster.
  4. Add the instance and service for the new node.
  5. Start up the instance.

In case it matters, this system is Oracle (both GI and RDBMS) running on Oracle Linux 7.  In my example, host01 is the “good” node and host02 is the “bad” node. The database name is “orcl”.  Where possible, my command will have the prompt indicating the node I am running that command from.

First, I’ll remove the bad node from the cluster.

I start by removing the RDBMS software from the good node’s inventory.

[oracle@host01]$ ./runInstaller -updateNodeList ORACLE_HOME=$RDBMS_HOME "CLUSTER_NODES={host01}" 

Then I remove the GI software from the inventory.

[oracle@host01]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={host01}" 
CRS=TRUE -silent

Now I’ll remove that node from the cluster registry.

[root@host01]# crsctl delete node -n host02
CRS-4661: Node host02 successfully deleted.

Remove the VIP.

[root@host01]# srvctl config vip -node host02
VIP exists: network number 1, hosting node host02
VIP Name: host02-vip
VIP IPv4 Address:
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[root@host01]# srvctl stop vip -vip host02-vip -force
[root@host01]# srvctl remove vip -vip host02-vip
Please confirm that you intend to remove the VIPs host02-vip (y/[n]) y

Then remove the instance.

[root@host01]# srvctl remove instance -db orcl -instance orcl2
Remove instance from the database orcl? (y/[n]) y

At this point, the bad node is no longer part of the cluster, from the good node’s perspective.

Next, I’ll move to the bad node and remove the software and clean up some config files.

[oracle@host02]$ rm -rf /u01/app/oracle/product/
[root@host02 ~]# rm -rf /u01/grid/crs12.1.0.2/*
[root@host02 ~]# rm /var/tmp/.oracle/*
[oracle@host02 /tmp]$ rm -rf /tmp/*
[root@host02]# rm /etc/oracle/ocr*
[root@host02]# rm /etc/oracle/olr*
[root@host02]# rm -rf /pkg/oracle/app/oraInventory
[root@host02]# rm -rf /etc/oracle/scls_scr

I took the easy way out and just used ‘rm’ to remove the RDBMS and Grid home software. Things are all cleaned up now. The good node thinks it’s part of a single-node cluster and the bad node doesn’t even know about the cluster. Next, I’ll add that node back to the cluster. I’ll use the addnode utility on host01.

[oracle@host01]$ cd $GRID_HOME/addnode
[oracle@host01]$ ./ -ignoreSysPrereqs -ignorePrereq -silent "CLUSTER_NEW_NODES={host02}" 

This will clone the GI home from host01 to host02. At the end, I am prompted to run on host02. Running this script will connect GI to the OCR and Voting disks and bring up the clusterware stack. However, I do need to run one more cleanup routine as root on host02 before I can proceed.

[root@host02]# cd $GRID_HOME/crs/install
[root@host02]# ./ -verbose -deconfig -force

It is possible that I could have run the above earlier when cleaning up the node, but this is where I executed it this time. Now I run the script as requested.

[root@host02]# cd $GRID_HOME
[root@host02]# ./

At this point, host02 is now part of the cluster and GI is up and running. I verify with “crs_stat -t” and “olsnodes -n”. I also check the VIP.

[root@host02]# srvctl status vip -vip host02-vip
VIP host02-vip is enabled
VIP host02-vip is running on node: host02

Now back on host01, it’s time to clone the RDBMS software.

[oracle@host01]$ cd $RDBMS_HOME/addnode
[oracle@host01]$ ./ "CLUSTER_NEW_NODES={host02}"

This will start the OUI. Walk through the wizard to complete the clone process.

Now I’ll add the instance back on that node.

[oracle@host01]$ srvctl add instance -db orcl -instance orcl2 -node host02

If everything has gone well, the instance will start right up.

[oracle@host01]$ srvctl start instance -db orcl -instance orcl2
[oracle@host01]$ srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
SQL> select inst_id,status from gv$instance;

Awesome! All that remains is to reconfigure and start any necessary services. I have one.

srvctl modify service -db orcl -service hr_svc -modifyconfig -preferred "orcl1,orcl2"
srvctl start service -db orcl -service hr_svc -node host02
srvctl status service -db orcl


That’s it. I now have everything operational.

Hopefully this blog post has shown how easy it is to take a “bad” node out of the cluster and add it back in. This entire process took me about 2 hours to complete. Much faster than any resolution I’ve ever obtained from MOS.

I never did get to the root cause of my original issue. Taking the node out of the cluster and adding it back in got me back up and running. This process will not work if the root cause of my problem was hardware or OS-related.

And the best part for me in all of this? Because host01 already had the PSU applied to both the GI and RDBMS homes, cloning those to host02 means I did not have to run OPatch on host02. That host received the PSU patch through the clone. All I needed to do to complete the patching was run datapatch against the database.
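For completeness, the datapatch step is run from the OPatch directory of the database home (a sketch; your paths may differ):

```
[oracle@host02]$ cd $ORACLE_HOME/OPatch
[oracle@host02]$ ./datapatch -verbose
```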

Apr 12

Oracle RAC N+1 Redundancy

I find that when people are designing Oracle RAC architecture, they often do not think of N+1 redundancy in their implementation plans. There are two reasons to implement Oracle RAC, availability and scalability. For the purposes of this discussion, I am focusing only on the availability side. If your RAC deployments are for scalability reasons only, then this topic may not apply to you.

So what is N+1 Redundancy? Simply put, if you need N units of something, then for redundancy purposes, you should have N+1 of that item. Let’s look at a database server. It must have a power supply. That is a requirement. Without a working power supply, the server will not function at all. The minimum number of power supplies is 1. If we want this server to have a high degree of availability, we will make sure it has N+1 power supplies, or in this case, dual power supplies. If there is only one power supply and it fails, it takes the server with it. If we have an extra power supply, a spare unit, the loss of one power supply will not take down the server with it. Redundancy is a great thing to have and an essential component to a high availability infrastructure.

When designing an Oracle RAC system, the DBA needs to determine how many nodes are needed to support the end users’ demands. If the DBA determines 4 nodes are needed, and this RAC cluster must exhibit high availability traits, then it is vital for the DBA to create a 5-node cluster (4+1). If the resource demands are sufficient to keep 4 nodes busy and one node is lost, the remaining 3 will not be able to keep up with the workload. If the DBA builds the RAC system with N+1 capability in mind, then the loss of one node will not be noticeable to the end users. If the DBA builds the RAC cluster without N+1 redundancy, then the loss of one node may be so terrible for the end users’ performance that the entire cluster might as well be down. When designing your RAC implementations, strive for N+1 redundancy.

I remember two years ago, I had a RAC cluster that lost a node. No problem, we still had two nodes available. As I watched the performance of the two remaining nodes, they seemed to be pretty overwhelmed. Our call center started receiving complaints. I worked with other administrators on the IT team to get that node back up and running as fast as possible, but that may not always be possible if the reason for the outage is hardware-related and parts need to be replaced. After the node was back in service, I monitored the cluster performance for weeks afterward. Our usage had grown since this system was initially designed. We had initially designed this system with N+1 redundancy in mind, but our usage grew and N went from 2 to 3. Our 3-node cluster was no longer N+1 redundant. So I worked with management to put enough funds into the next year’s budget to procure a new node and make sure Oracle was licensed on it. I sleep much better at night knowing that I am back to N+1 redundancy.

Like many implementations out there, my RAC system is not the only High Availability feature built into our infrastructure. This RAC database is a primary to a physical standby database with Oracle’s Data Guard. I’m surprised, when discussing RAC standby databases with other Oracle DBAs, how many of them are not thinking of N+1 capability for their standby. The physical standby database is my safety net in case the primary data center is unavailable for some reason. I’ve seen so many Oracle DBAs implement a single-instance standby for a multi-node RAC primary. Ouch! I hope they never have to fail over. Their entire multi-node RAC cluster’s workload will struggle mightily on that single-instance standby. So as you’re designing your RAC implementations for both the primary and the standby, consider your N+1 redundancy implications on the architecture design.

Where I probably differ from many people is that my physical standby implementations are not N+1 capable, but rather N. I skip the redundant extra node for my physical standby. Why is that? Purely from a cost perspective. My physical standby is just a safety net. I want it to work for me the day that I need it. But I hopefully never need it. The physical standby is my insurance policy in case risk becomes reality. For me, that extra “+1” at the standby site is over-insurance. I can save on the physical hardware and Oracle licensing.

So let’s say the day comes and I do fail over to the standby. I have lost my N+1 redundancy. But what are the chances that I’m going to lose the primary data center *and* lose one of the nodes in my standby cluster? Pretty slim. The likelihood of failures at two sites at the same time is small. At this point, our IT team is evaluating why our primary data center is lost and when we can most likely return our operations to that facility. If the primary data center lost all its power and the utility company says service will be restored by tomorrow, then we’ll simply run at the standby data center even though we only have N nodes for the RAC database there. However, if the primary data center was wiped out by a fire, it will likely take many months before it is up and running again. It is at this point that I need to plan on getting that physical standby up to N+1 redundancy, as our time using the standby as a primary will be much longer. So we rush-order another server and add it to the cluster as soon as possible. In short, I design my standby RAC database as N, not N+1, with an eye on increasing it to N+1 in short order if we determine we will be using the standby for real for a longer period of time.

There is one other special case I would like to discuss: when the DBA determines that N=1. For the current workload requirements, one node is sufficient, but we want high availability, so we design a two-node RAC cluster for the primary database. We now have N+1 redundancy built into the primary. Following my last paragraph, my standby database only needs one node. The mistake I see some people make is to create the standby as a single-instance database. So far, their logic makes sense: the primary is N+1 and the standby is N. Where I differ is that I make the standby a one-node RAC cluster, not a pure single-instance implementation. The reason is future growth. At some point, the DBA may find that N no longer equals 1 at the primary. Usage has grown and N needs to be 2 now, so the DBA wants to grow the primary to three nodes (2+1). This is easily done with zero downtime: add a new node to the cluster and extend the RAC database to that node. But it's not so easily done at the standby if the one node that exists is not RAC-enabled. If a pure single-instance standby is all that exists, the DBA needs to scrap it and rebuild it on a two-node cluster. If the DBA had the foresight to install Grid Infrastructure as if the physical standby were a single-node cluster, then all the DBA has to do is add a new node, just like they did on the primary side.

As you're designing your RAC implementations, consider ensuring you have N+1 capability on the primary and at least N on the standby. If a company determines that the standby is too critical to run degraded, it may want to implement N+1 at the standby as well. And if the DBA determines that N=1, consider making the standby at least a one-node RAC cluster.

Apr 05


I was working with an individual on a question in the MOSC forums recently where they asked about the TOP_LEVEL_RPI_CURSOR column of the V$SQL_SHARED_CURSOR view. There is little documentation on what this column is trying to tell the DBA.

All the Oracle docs say is that this column contains “(Y|N) Is top level RPI cursor”. So what does that mean?

I’m going to assume the reader of this post is familiar with child cursors. That will save me a large amount of introductory information. The V$SQL_SHARED_CURSOR view will tell the DBA why a child cursor and its parent have different versions in the Shared Pool. If the child cursor’s OPTIMIZER_MISMATCH column contains a ‘Y’ in this view, then the session executing the cursor had different optimizer settings than the session that was responsible for the parent cursor execution.
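As a quick sketch of how this view is typically used (reusing the SQL ID from later in this post; the mismatch columns shown are just a sampling of the many in the view), you can list a few of the reason flags for each child cursor:

```sql
-- Sketch only: each row is a child cursor for the given parent;
-- a 'Y' in a column flags the reason that child could not be shared.
select child_number,
       optimizer_mismatch,
       bind_mismatch,
       top_level_rpi_cursor
  from v$sql_shared_cursor
 where sql_id = 'c749bc43qqfz3';
```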

So what does it mean when TOP_LEVEL_RPI_CURSOR is set to Y for a child? The documentation isn’t clear. MOS has very little on the subject. And all of my Google hits on this column pretty much just regurgitate the documentation. To know why, it helps to know that RPI stands for Recursive Program Interface. This is part of the Oracle kernel that deals with recursive SQL. In our case, it deals with the fact that the SQL statement was issued at a different “depth”.

What is recursive SQL? It is SQL that is issued on your behalf, which means at a different depth as I will illustrate. First off, Oracle is performing recursive SQL all the time. At a basic level, when you issue “select * from table_name”, Oracle queries the Data Dictionary to ensure the object exists and that you have permissions on that table. How does Oracle do that? It uses other SQL statements. The statement you issue is at level 0, the base level. When Oracle issues a SQL statement to check if the table exists, that will be at the next level, level 1. Sometimes, that will cause other SQL statements to be issued at the next level, level 2.

The depth of a SQL statement is not limited to just what Oracle is doing in the background, on your behalf. Consider when you execute a stored procedure. Your call to the stored procedure is at depth 0. Any SQL statement in the stored procedure is at depth 1. If that stored procedure calls another procedure, the SQL in the other procedure will be at depth 2.

I used this information on recursive SQL and SQL depth to construct a simple example in my Oracle database. First, I created a stored procedure.

create or replace procedure my_sysdate as
   v_dt date;
begin
   select sysdate into v_dt from dual;
end;
/

I then fired up a SQL*Plus session and started a trace. I issued the same SQL statement that is in the procedure, and then I called the procedure.


SQL> alter session set sql_trace=true;
Session altered.
SQL> SELECT SYSDATE FROM DUAL
  2  /
SQL> exec my_sysdate;
PL/SQL procedure successfully completed.
SQL> exit


When I examined the raw trace file, I found the two calls to SYSDATE from DUAL as follows:


PARSING IN CURSOR #140670990815296 len=24 dep=0 uid=9449 oct=3 lid=9449 tim=24905125014484 hv=124468195 ad='81477be0' sqlid='c749bc43qqfz3' SELECT SYSDATE FROM DUAL

PARSING IN CURSOR #140670907623848 len=24 dep=1 uid=9449 oct=3 lid=9449 tim=24905129780963 hv=124468195 ad='81477be0' sqlid='c749bc43qqfz3' SELECT SYSDATE FROM DUAL


If you look at the trace file closely, you will see that the second one at depth=1 was a direct result of the stored procedure. Notice that even though my stored procedure was defined in all lower case, the SQL issued at depth=1 was in all upper case. As a result, when I issued the same SQL statement directly in my SQL*Plus session (at depth=0), I had to use the same upper case form of that statement so that it would have the same SQL ID value.
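The case matters because the SQL ID is derived from a hash of the exact statement text. As a small sketch (assuming the cursors have not aged out of the Shared Pool), the same query in different case produces two different parent cursors with different SQL IDs, not two children of one parent:

```sql
-- These are textually different statements, so each gets its own SQL ID.
select sysdate from dual;
SELECT SYSDATE FROM DUAL;

-- Both statements should now appear with distinct SQL_ID values.
select sql_id, sql_text
  from v$sql
 where upper(sql_text) like 'SELECT SYSDATE FROM DUAL%';
```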

The trace file also shows the SQL ID. I can now query V$SQL_SHARED_CURSOR for that SQL ID value and show that TOP_LEVEL_RPI_CURSOR is set for the child.

SQL> select sql_id,top_level_rpi_cursor from v$sql_shared_cursor where sql_id='c749bc43qqfz3';

SQL_ID        T
------------- -
c749bc43qqfz3 N
c749bc43qqfz3 Y

So there we have our proof. The only difference between these two cursors is the depth from which they were executed. I'm not sure why Oracle needs this distinction in the Shared Pool; if anyone knows, drop me a line.

Normally, we do not care about a few extra child cursors for a given SQL ID. If your SQL statement has a high number of versions, it is probably not due to different depth levels; other reasons are far more likely to explain a high child cursor count. But this does answer the question of what that column is telling us.
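If you do want to hunt for statements with an excessive number of children, a starting point might be a query like this against V$SQLAREA (the threshold of 10 is an arbitrary value for illustration; pick one appropriate for your system):

```sql
-- Statements with many child cursors; 10 is an illustrative cutoff.
select sql_id, version_count, executions
  from v$sqlarea
 where version_count > 10
 order by version_count desc;
```

From there, V$SQL_SHARED_CURSOR for each offending SQL ID shows which mismatch flags are actually set.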


