May 02

SQL Developer Icons

I will occasionally see a question in the OTN/MOSC forums asking what a specific icon means in SQL Developer. There is no documentation on the subject, so we often have to figure it out on our own. My personal preference would be for a tooltip to pop up when we mouse over an icon, telling us what it means. Until we get that feature, or something similar, I decided to start documenting the icons here in my blog. Here they are:


Icon – Meaning

  • Green ladybug: object is Compiled for Debug
  • Red circle with white ‘X’: object is invalid
  • Advanced Queuing table
  • White line through the middle of the table: Partitioned Table
  • Index-Organized Table
  • Global Temporary Table
  • External table
  • Greyed out: SYS IOT overflow table
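When an icon is still ambiguous, the data dictionary can confirm what SQL Developer is indicating. A minimal sketch — the views and columns below are standard dictionary objects, but the SCOTT schema filter is only a placeholder for whatever schema you care about:

```sql
-- Objects compiled for debug (the green ladybug)
select owner, name, type
  from dba_plsql_object_settings
 where plsql_debug = 'TRUE';

-- Invalid objects (the red circle with white 'X')
select owner, object_name, object_type
  from dba_objects
 where status = 'INVALID';

-- Table variants: partitioned, index-organized, global temporary
select owner, table_name, partitioned, iot_type, temporary
  from dba_tables
 where owner = 'SCOTT';
```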


If anyone has any icons I missed, private message me on Twitter (@BPeaslandDBA) or get in touch with me some other way, and I will update this blog post.

Apr 17

SQL Dev 4.2 Top SQL

One of the things that I’ve always liked about SQL Developer is how it saves me time. Sure, I get this fancy looking GUI tool that enables me to enter SQL commands against the database. But I’ve had that capability from the very beginning with SQL*Plus and Server Manager (if you’ve been around long enough to remember that one). But SQL Dev is a great time saver and it’s the reason why I rarely use SQL*Plus any more.

SQL Dev 4.2 was released to production last week and one of its new features I like is the Top SQL section from the Instance Viewer. This information is nothing new. I’ve been able to get to this information with SQL scripts and from Oracle’s Enterprise Manager. But SQL Dev saves me time once again.

To use this new feature, one obviously has to be running SQL Dev 4.2. If you do not have a copy of this latest version, you can get it here. Then go to the DBA panel and connect to your instance of choice. Then launch the Instance Viewer. You should be able to see the TOP SQL section quite easily.

You can click on those column headings to sort by the metric of your choice.
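The Top SQL panel is essentially a window over the shared pool’s SQL statistics. If you want similar numbers in a worksheet, a rough sketch against V$SQLSTATS looks like this (the columns and ordering here are my choices, not necessarily what SQL Dev itself uses):

```sql
-- Top 10 SQL by total elapsed time, similar in spirit to the Instance Viewer panel
select *
  from (select sql_id,
               elapsed_time / 1e6      as elapsed_secs,
               cpu_time / 1e6          as cpu_secs,
               executions,
               buffer_gets,
               substr(sql_text, 1, 60) as sql_text_frag
          from v$sqlstats
         order by elapsed_time desc)
 where rownum <= 10;
```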

If you double-click on a specific SQL statement for the first time in this instance, SQL Dev tries to protect you from potential harm, because what follows requires the optional Tuning Pack.

After you press Yes, and only do so if you are licensed for this optional pack, you are given details of that SQL statement. You can see the Explain Plan:

One of my two favorite parts of this new feature is the ability to get easy access to bind variable values. All too often when tuning SQL statements, I need to determine what a user supplied for bind variable values. 
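When SQL Dev is not handy, the same values can usually be pulled from V$SQL_BIND_CAPTURE. Note that binds are captured only periodically, so values may be stale or missing; the &sql_id substitution variable below is just a placeholder for the statement you are tuning:

```sql
-- Captured bind values for one SQL_ID
select name, position, datatype_string, value_string, last_captured
  from v$sql_bind_capture
 where sql_id = '&sql_id'
 order by position;
```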

Just by clicking on the SQL Tuning Advice tab, I can see the results of the Tuning Advisor. This is my other favorite feature and saves me a ton of time having to launch a browser and connect to Enterprise Manager, and then make about 10 other clicks until I get to the same information.

The last bit of information is showing the elapsed time history for this SQL statement.
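That history presumably draws on AWR data. Outside SQL Dev, the same trend is available from DBA_HIST_SQLSTAT, provided you are licensed for the Diagnostics Pack (again, &sql_id is a placeholder):

```sql
-- Elapsed-time history for one statement, per AWR snapshot
select s.snap_id,
       to_char(t.end_interval_time, 'YYYY-MM-DD HH24:MI') as snap_time,
       s.elapsed_time_delta / 1e6 as elapsed_secs,
       s.executions_delta
  from dba_hist_sqlstat s
  join dba_hist_snapshot t
    on s.snap_id = t.snap_id
   and s.dbid = t.dbid
   and s.instance_number = t.instance_number
 where s.sql_id = '&sql_id'
 order by s.snap_id;
```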

When I first read the early adopter information for SQL Dev 4.2, a lot of it was focused on the new Formatter. While that is a great time saver to many, I rarely use it. But this is one of the new features I am really excited about.


Apr 10

Slow Migration To Cloud

Raise your hand if you’ve heard something like this before from a cloud vendor:

By insert_year, we expect that insert_very_high_percentage of IT needs will be run in the cloud.

We certainly have Mark Hurd making the claim at OpenWorld 2015 that by 2025, 80% of all production apps will be in the cloud. Or Oracle saying that 100% of dev and test will be in the cloud by 2025. Other vendors have made similar claims. Now I’m starting to hear anecdotal evidence that cloud adoption rates are not as speedy as we have been led to believe.

About a year ago, I was asked by a database-related media outlet to write some articles about cloud and Oracle databases. My response was that my company has been very slow to move to the cloud and that I doubted I could be a good technical resource for them. The editor I spoke with informed me that they were hearing the same thing from other technical writers, and that they were having a hard time finding good writers on cloud subjects because there was so little real-world movement to the cloud. Yet the editor was trying hard to get cloud content published, because the cloud is all anyone talks about. The editor was caught between what they wanted to publish and what content they could actually obtain.

The take-away I left with that day was that the media and the vendors are hyping the cloud, but those of us in the trenches are not jumping on the bandwagon. Which leads to this obvious conclusion…the ones that really want us in the cloud are the cloud vendors. The big reason Mark Hurd wants us to run 80% of our production apps in the cloud is because it’s good for their business. Cloud vendors will give us plenty of reasons why the cloud is good for our business. And those reasons are often very good ones and why we will eventually all get there.

Last week I was visiting with a CEO of a technology company, a long-time friend of mine. After chatting about the spouses and kids and lots of non-IT topics, we turned the conversation to how his business is doing. Business is great and he’s constantly busy. I then asked if his clients were moving to the cloud and if his business is helping them with those endeavors. He replied that he has almost no clients moving to the cloud as of yet. He gave me a list of reasons why his clients’ cloud adoption rate was low, but primarily it was too little benefit for the initial cost of getting there.

Today, I came across a SQL Server article that seems on the surface to have little to do with the cloud. Buried in that article was this quote which jumped out at me.

Cloud adoption is lower than I expected.

To me, cloud adoption is right on target with my expectations. The only ones expecting a faster transition to the cloud are vendors, media, and those who buy all the hype. The author of this article gives two major reasons why cloud adoption is not what people anticipated. I can buy those reasons as being applicable to some cloud deployments, but I think the author misses many other, more important reasons slowing down cloud adoption.

Here are the big reasons why I see cloud transitions moving at a slower-than-expected pace, in no particular order:

  • Transitioning existing systems to the cloud is expensive. I’ve seen multiple companies move to Office 365 rather than hosting their own Exchange services. In every one of those transitions, the company spent a huge number of man-hours getting there. I’ve seen teams of people working 10-20 hours per week for an entire year, all to make the move to the cloud. This amount of effort isn’t mentioned when the cloud vendor tries to tell us how much money we will save. Anyone who has ever moved a production enterprise system within their own company knows how much effort is needed to pull off the transition seamlessly for the end users. If you are moving to the cloud, you might also need to upgrade to newer software versions, newer platforms, etc. This type of move is not done lightly. The cloud vendors have made it very easy to transition lots of things to the cloud, but they cannot make it easy for our internal organizational processes to change.
  • The best transitions to the cloud are for new implementations. It is hard to migrate existing systems anywhere. What we often see are companies that enter cloud computing with new endeavors. This lets the company gain experience with cloud technologies without disrupting existing workflows. Unfortunately for the cloud vendors, new implementations do not come along every day for every IT enterprise.
  • Regulatory Compliance. Depending on your business, your company may have to comply with various regulations in order to do business in that region. Cloud vendors have ramped up efforts to provide service in as many regions as possible while meeting governmental regulations. However, cloud vendors can move faster than the regulations imposed on businesses, and the businesses are often left trying to sort out whether the cloud vendor’s solution keeps them in regulatory compliance or not. It should be no surprise that this effort, by itself, costs the company money that doesn’t need to be spent if they stay on-premises.
  • We need to protect the data. Data is paramount. Data is very valuable to the business. Data breaches are extremely costly. Even if the cloud solution is compliant with regulations, the company may still not be ready to accept the impacts if a data breach is found to be due to something a third party did or did not do properly. The company’s customers or shareholders may not be ready for this. Being the Data Guardian sometimes trumps any desire to move to the cloud. This may be a FUD factor at work, but it is real for many.
  • Field of Dreams vs. If It Ain’t Broke… The cloud vendors are trying to sell us a Field of Dreams. They built it and we should come. The farmer in the movie was right. He built it and they did come. Cloud vendors are telling us of this wonderful new playing field and we should jump on board. Life is great there. These cloud vendors, like the farmer in the movie, are right. However, there is this feeling many in the IT community have and it says “If it ain’t broke, don’t fix it”. As many companies are looking at the cloud and seeing the proverbial greener grass on the other side, we also look and see that things are fine right where they are. Quite honestly, while there are benefits to the cloud, those benefits by themselves are sometimes not a big enough business driver to make the move.

To me, the above list covers the big reasons why cloud adoption is slower than some had expected. Many IT organizations across the world are simply not going to make a mass transition to the cloud in a short timeframe.

Please do not read this blog post and think that I am anti-cloud. More than anything, I’m just being a realist about my company’s IT infrastructure and how the cloud fits into it. I know that we are not going to jump at the cloud simply because it exists, and I am not alone in this line of thought. We’ll get there one day, in our own time, and that timeline will take many, many years.

Mar 16

Book Review: Oracle Database Upgrade and Migration Methods: Including Oracle 12c Release 2

I had the good fortune to review a new Apress book.

Oracle Database Upgrade and Migration Methods: Including Oracle 12c Release 2

I was grateful to be asked to be the Technical Reviewer for this book. I like to think I made some contribution to the content therein. 🙂 But to be honest, all the hard work was done by the book’s authors, Ravikumar, Krishnakumar and Basha. Compared to writing the book’s contents, technical reviewing is easy work.

I’ve worked with Oracle databases for over 20 years now. I’ve participated in a large number of discussion forums over the years. Currently, I can be found lurking around the MOSC and OTN forums under the handle BPeaslandDBA. No matter where I’ve travelled virtually, all of these spaces have seen more than their fair share of questions related to how to do what I will call “simple” upgrades. Those questions are often answered in the Oracle Upgrade Guide for that version. I constantly see questions on how to upgrade when a standby is involved, even though that is documented as well. Then I see the harder questions, like how to migrate from running Oracle on Solaris to Linux, or how to upgrade when Multitenant is involved. This book has all the answers in one location. Each migration or upgrade is shown, step by step.

If you’re starting your Oracle DBA career, this book can help cover all the upgrade and migration scenarios. If you’re a seasoned DBA but haven’t tackled some of these issues, this book will help you understand your options and then lead you through the process.



Mar 07

GI 12.2 Changes

Oracle 12.2 was recently released for on-premises deployments, and with it comes the new Grid Infrastructure version. There are a few changes worthy of note in this new version.

  • The OUI is now launched with gridSetup.sh, not the runInstaller script we are more familiar with.
  • The GI Management Repository (GIMR) database can now be off-cluster if you do not want to devote disk for it on your current cluster. I’m not sure that I’ll be using this one, but it’s nice to know it’s there if I need it.
  • The Cluster Interconnect can now use IPv6 addresses.
  • Reader Nodes – Oracle 12.1 introduced the Flex Cluster with hub nodes and leaf nodes. While the concept sounded good, you could not run RAC instances on leaf nodes. In 12.2, you can now run RAC instances on leaf nodes, but they are read-only. This is a great way to offload reporting capabilities.

With any list of new features comes some deprecated ones. This one stood out to me:

  • OCR and Voting disks must now be on ASM, no more CFS placement. The docs do say that you can still put database files on shared file system storage instead of using ASM, but I expect some day in the future, this will change as well.

When planning for the Oracle 12.2 Grid Infrastructure installation, I found it curious that the documentation now recommends at least 12GB for the GRID_HOME and that “Oracle recommends that you allocate 100GB to allow additional space for patches”. But the space-hungry demands do not stop there. In Table 8-3, you can see that depending on your configuration, you may need a lot more space than that. I’ll touch on this briefly in my next blog post.

Mar 01

Christmas Comes Early (Oracle 12.2)

Christmas must be coming early! We were previously informed that Oracle 12.2 would be released on March 15, 2017, but you can get it today for Linux and Solaris platforms. Simply head to Oracle’s download site and grab the new version today. No longer is 12.2 “cloud only”.


Sadly, My Oracle Support is not up-to-date with the certification information. When I went to MOS, it had a problem with the 12.2 version. When I tried to type “12.2” into the Release box, it told me “No results for 12.2”.


The one saving grace is that the 12.2 Installation Guide has this information for Linux. Oracle 12.2 is certified for the following Linux platforms:

OL6.4 and OL7

RHEL6.4 and RHEL7


I do not run Oracle on Solaris and no other platforms are currently ready for download.

Feb 23

Oracle RAC on Third-Party Clouds

Yesterday, I came across this white paper from Oracle Corp about Oracle RAC Support on Third-Party Clouds. The paper is definitely a must-read for those who want to run Oracle RAC on AWS, Google or Azure cloud providers. The first paragraph was promising where it stated:

Oracle also maintains an open policy for Third-Party Cloud vendors to support the Oracle Database in their infrastructure.

This sounds great, but the quote is talking about the Oracle Database and not about the RAC option. I found it interesting that the paper spends a great deal of time talking about AWS and Azure but never mentions Google’s cloud.

Here is the bottom line on Oracle’s position for supporting RAC on other cloud offerings:

Oracle RAC is supported on all cloud environments supported by the Oracle Database, as long as the environment is able to provide the hardware, storage, and networking requirements as specified in the Oracle RAC and Grid Infrastructure documentation. With the exception of the Oracle Cloud, Oracle has not tested nor certified Oracle RAC in these environments.

The paper goes to great length to illustrate how AWS muddies the waters with respect to the storage and networking requirements needed to run RAC in AWS.

After reading this paper, it should be obvious that Oracle is trying to steer its customers away from running RAC in AWS. Amazon has put out information letting us know how easy it is to run RAC in AWS. But after reading this Oracle white paper, I would have some concern about how well the AWS environment would work for mission-critical RAC deployments.

AWS provides shared storage for RAC by using iSCSI targets on VMs, which I would never use for anything other than a testbed. One issue is redundancy at the storage level:


Amazon’s proposed solution is “deploying two or three of these NAS instances and mirroring the storage between them using ASM Normal or High redundancy.” However, should both storage instances fail or be taken down (e.g. for maintenance) at the same time, there will be a complete data loss.


For the private network, AWS does not support multicasting, a requirement for the Grid Infrastructure Cluster Interconnect. AWS gets around this by using a point-to-point VPN built on n2n from ntop. But according to ntop’s own website, there has been no further development on n2n in the last two years.

Don’t get me wrong. I think that AWS is a great cloud solution for lots of different things. Admittedly, I have not run RAC on AWS, at least not yet. But if I were looking to move my company’s RAC database infrastructure to the cloud, I would seriously investigate the claims in this Oracle white paper before committing to the AWS solution. That last sentence is the entire point of this blog post.


Jan 27

Free ES for 11.2.0.4 and 12.2 Release Date

The free Extended Support period for Oracle 11.2.0.4 has been extended for the second time. Oracle will now offer Extended Support for free to its customers with a current maintenance agreement up until Dec 31, 2018. This is almost a full two years away. Those running on 11.2.0.4 can breathe a sigh of relief that they have more time to get upgraded to 12c.

I will admit I am a bit surprised that Oracle is extending free ES for another time. But maybe it shouldn’t surprise me because now it makes my statements in this Search Oracle article inaccurate.

Today we also learned that Oracle now has a release date for on-premise deployments. Oracle 12cR2 was released in 2016, but as a cloud-first release. This was an incentive from Oracle to drive you to their cloud infrastructure. Those not on the cloud yet have been eagerly awaiting the ability to download the newest version. You will be able to do so on March 15, 2017 for Intel and Solaris platforms. Other platforms will follow later.

For more details, refer to MOS Note 742060.1.

Jan 03

Database Upgrades – Flu Shots or Flu?

The new year is upon us and people are starting to ramp up their IT projects for 2017. Is a database upgrade on your list of projects? Unless you’re already running the latest Oracle version across the board in your data center, it should be!

I was answering some Oracle upgrade questions today when it occurred to me that database upgrades are either a Flu Shot or a Flu decision. I see people all the time who refuse to upgrade their Oracle databases on a regular basis. I still field questions from people running Oracle 10.2, or (shudder) 9i, or (VERY BIG SHUDDER) 8i! Things are working great, and database upgrades are a big pain to perform, so they leave it alone. Then the next thing they know, they are lagging very far behind. So what does this have to do with the Flu?

Every year, you can get a flu shot. Sure, you get poked with a needle and experience some pain. You may even have a few minor flu-like symptoms causing you some discomfort. But people put up with the flu shot and all its complications because getting the flu itself is so much worse.

If you upgrade regularly, it’s like getting a flu shot. At my company, they were running Oracle 9.2 when I started working here. I upgraded everything through each subsequent release in turn, skipping some patch sets along the way. Yes, each upgrade had its share of pain points, but they were minor compared to that first big upgrade from 9.2. Such a big jump is like getting the flu. Keeping current with your database versions is like getting the flu shot.

You should either be running Oracle 12.1 across the board in your data center or making plans to do so. Notice I said “in your data center” and that’s because 12.2 is only available in the cloud as I write this. Oracle currently supports 11.2.0.4 and 12.1.0.2 for on-premise deployments. But 11.2.0.4 is currently on free Extended Support, and that free period expires in less than 5 months. It can take that long to roll the upgrade through development, into test, and into production, and sort out any issues you have.

So begin upgrade plans today. Work towards getting all Oracle databases in your data center running the latest on-premises version. Keep in mind that a direct upgrade is only supported from certain minimum releases and patch levels; anything older requires an intermediate upgrade first.

Once you get current, look at how your organization can benefit from regular flu shots. When the next release comes out, you can be sure that it will be on my radar to move towards.

Nov 30

verify_queryable_inventory returned ORA-20008: Timed out

I have one system that always receives the same error when I apply the PSU. When the PSU is being applied, the execution of datapatch fails. I get the following on my screen:


Queryable inventory could not determine the current opatch status.

Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
for the complete error.


In the sqlpatch_invocation log file, the following is shown:


verify_queryable_inventory returned ORA-20008: Timed out, Job
Load_opatch_inventory_1 execution time is more than 120Secs

Queryable inventory could not determine the current opatch status.


Datapatch is trying to ensure that DBMS_QOPATCH is installed correctly, but the check times out. There are a number of possible reasons for this. It takes some investigation: check system resource utilization to see if there is a bottleneck, and check database performance while datapatch is running.
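You can also run the check from the error message by hand, which makes it easier to time the call and watch the instance while it runs. The second query exercises the underlying DBMS_QOPATCH package directly as a further sanity check:

```sql
-- Run as SYSDBA; this is the check named in the datapatch error message
select dbms_sqlpatch.verify_queryable_inventory from dual;

-- Exercise DBMS_QOPATCH directly to see if the queryable inventory responds
select dbms_qopatch.get_opatch_version from dual;
```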


In my case, I know I have a very slow VM. I’m working with our server team to ensure that VM is configured correctly. In the meantime, I can get past the timeout by doing the following:


SQL> alter system set events '18219841 trace name context forever';

$ $ORACLE_HOME/OPatch/datapatch -verbose

SQL> alter system set events '18219841 trace name context off';


Setting the event lets datapatch complete successfully.