Oct 09

Filtering Alert Logs in EM13c

I’ve been receiving ORA-1555 (snapshot too old) alerts from my databases via Enterprise Manager 13c. For production environments, these are good alerts to receive and can be an indicator that I have an issue to resolve. For my development databases, ORA-1555 errors are not a concern. It is common for developers to write queries that run a long time and then tune them later on. I do not want to see alerts on ORA-1555 errors from dev databases in my Inbox, yet EM13c does not have an obvious way to stop these alerts from coming. I recently learned that you can filter rows of the Alert Log out of EM13c’s notification functionality, thus suppressing any ORA-1555 alerts from being generated.
To do this, log in to EM13c and navigate to the database in question (or update a monitoring template). Then click on Oracle Database –> Monitoring –> Metric and Collection Settings. In the DB Alert Log section is the Generic Alert Log Error line. Click the pencil icon on the right of that line to edit its settings.

Scroll down to the very bottom of the next page. In the section titled Metric Collection Properties there is a box labeled Alert Log Filter Expression. This box holds a regular expression. Any lines in the Alert Log that match the regex will be filtered out from consideration. I added the “01555” error code to that expression.
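
As a rough illustration (not necessarily the exact expression I used), a pattern along these lines matches ORA-01555 lines in the Alert Log:

.*ORA-01555.*

Multiple error codes can be combined with alternation, for example .*(ORA-01555|ORA-01652).*; any Alert Log line the expression matches is excluded from alerting.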


Not only can you filter out ORA-1555 errors this way, you can add other error codes to the same expression to filter them out as well.


Jul 20

The Performance Tuning Maze

One day, you wake up and find that you are an Oracle database administrator. The gods have finally seen the light to your true potential and allowed you to work in the best job in the world! You begin your DBA career so bright-eyed and bushy-tailed. You’re creating new databases, granting privileges, writing PL/SQL code. Life is great. You cannot wait to get up in the morning, pour that first cup of coffee, and point your browser to your favorite Oracle forums, eager to soak up a lifetime’s worth of knowledge all before lunch! If those gods are still smiling on you, you may even answer a few questions and get rewarded with awesome points. Life is good. Life is sweet.

While you are still basking in the glow of your new-found career, someone comes to you with a problem. A performance problem. You freeze. There’s a small lump in your throat as you come to the self-realization that you have no idea how to solve database performance problems. What you did not realize on that day is that every DBA before you has been in exactly this same situation early in their careers. Yet that doesn’t stop the other person from looking at you, wondering why you are not solving the performance issue immediately. After all, you are the DBA and you should know how to do this stuff. This magical stuff they call (cue the angels singing) Performance Tuning. For it has been written: if you are going to be a DBA and survive, and do this job well, you will have to tune performance. Others will never know what goes on behind the curtain as you make potions and cast spells. All they care about is that you solved their performance issue and they can get on with their daily work.

At this point early in their career, every DBA decides they need to learn more about this thing they call Performance Tuning. What is it? How do I do it? How can I become the most important person in my IT department by discovering the secret sauce that turns that 5-hour report into a 1-minute miracle?

Oh, we were so young back then…so naive. We thought it was easy. Push a few buttons, fire up a few tools, and presto. The performance tuning solution is found and we are wonderful! Life seemed so easy as we ventured down that yellow brick road. But on this road, there are no scarecrows. No lions. No tin men. Not even any cute little dogs named Toto. No…this road…this Oracle Performance Tuning road is filled with things much greater. On this road, we meet utilities and tools. We meet Explain Plan. We meet tkprof and SQLT. We find wonderful views like V$SGA_TARGET_ADVICE and V$SESSION_WAIT and its twin V$SESSION_EVENT (not identical twins mind you, but one look and you know they’re related).

So there you are. Your shiny DBA title still sitting under your name in every email signature you send. And you now have all of these wonderful tools at your disposal. You’ve picked up ASH and AWR because thankfully your company has gifted you the Diagnostics Pack. Your bookshelf is armed with great tomes like this one. (Shameless plug, I know.) Some great guy on the forums, like me, clued you in to Lighty. You have an entire toolbox at your disposal. No! Not a toolbox…a war chest! Small countries in other places in the world do not have the arsenal you have at your disposal. Why…I could touch that super-secret button in the SQL Tuning Advisor and blow away one of those countries, *and* make SQL ID 98byz76pkyql run faster while my coffee still has steam rising from it…I’m so good.

Remember that day you received your first performance issue and you had that lump in your throat? There’s another day like that in your DBA career. It’s the day you reach the Performance Tuning Maze (cue the thunder and lightning). But this isn’t any maze. This is different. Most mazes have one entrance and one exit, with many turns and decisions along the way. This maze, why, this maze is obviously different. This maze has many, many entrances. And this maze has many, many exits. Each entrance is a different performance tuning tool. And each exit is a solution, but not all solutions really solve the performance problem. And this is the conundrum Oracle performance tuning specialists face. I have a performance problem. I know on the other side of this maze is my solution. But which entrance do I choose? One entrance has Explain Plan written above it. Another entrance has V$DB_CACHE_ADVICE written on it. Why, there are all these entrances, one for each tool at my disposal. This is a tale of my youth and hopefully, as Bilbo wrote to Frodo, this story can help you in your adventures as well.

So I pick an entrance.

I enter the maze.

Did I make a good choice?

Well, let’s see where this goes. Up ahead, the road makes a left turn. But it’s my only choice, so I go with it. Next, I come to an intersection. I can go right or left. I make a right turn. Oops…dead end. So I backtrack and take that left instead. Another dead end. Drats. I entered the maze incorrectly. Sometimes, the tools do not lead you to any solution. So I go back to the entrances and make another choice, choosing a different tool.

I’ve now entered the maze a second time. But things are looking much better. I keep going. Just a few more turns. I can see light, so I know I’m getting close to the end. Yes…there it is, the exit. I finally come out the other side of the maze. I have my performance tuning solution in hand, but after I implement it, I quickly realize it did not solve my performance problem at all. Sometimes, tools can lead you to solutions that have no bearing on your specific problem. So it’s time for my third entrance into the maze.

Now, being an astute performance tuning specialist, I realized all the entrances I chose so far relate to overall database performance, but what I’m really looking for is the performance of a specific SQL statement. But I do not know which SQL statement needs tuning. How can I figure out which one? Well, three doors down is an entrance to the maze marked SQL Trace. Right next to it is a door marked EM Search Sessions. I flip a coin and choose SQL Trace. Shortly after I enter the maze, I come to a T-intersection. If I go left, it takes me back to the EM Search Sessions door. If I go right, it’s a straight shot to the exit. Naturally I go right. But it is at this moment that I know that sometimes, two different tools will lead you to the same answer. As I exit the maze I am given a free pass to tkprof because after all, don’t all SQL Trace roads lead straight to tkprof? I now have the offending SQL statement. But my problem isn’t solved yet. What to do?

I head back to the maze entrance. Sometimes, we get an answer from our tuning tools and we have to perform another run through the maze to drill down into the final answer. This time, I enter the SQLT door. A few twists and turns, but this maze path is pretty easy, or so it seems. I get to the end, and I not only have one answer, I have many answers. Oh…glorious day! I have found the mother of all tools.

I heard other DBAs speak of these miracle tools like SQLT and AWR Reports. How wonderful they are. These tools are so great, some DBAs only see the SQLT and AWR Report entrances. I always thought this was the stuff of legends, but here at last, I too have found the one tool to rule them all…ok…one for each hand. I have all of these answers at my disposal. Now which answer is directly related to my performance issue? Here I have my SQLT report with all these answers contained therein. Which answer is mine? Which one?!? Sometimes, tools will give you too much information. For me, new to this performance tuning thing, the output of SQLT might as well be written in Klingon. But lucky me, I know a fellow DBA who sits two cubes down from me and speaks Klingon. I hand him my SQLT output. He thumbs through it and within 30 seconds, he points out one tiny section of the report and says those magical words. “See…right there…that’s your problem.” With a quizzical look on my face, he waves his hand over the report and, as if by magic, Google Translate has changed a few words on the page and I can now clearly see I have a table with very bad stats. Sometimes, tools with all those answers are great time savers for those who know how to use them. This Klingon-speaking DBA pushes up his glasses and reveals another section of the SQLT report. “See here,” he says, “those bad stats are forcing a FTS,” as if I’m supposed to know what an FTS (full table scan) is at this point in my career. But I don’t want to seem like a total n00b, so I smile and nod in agreement.

Ok…I’m getting closer to solving my problem. I know I have bad stats. I’m heading back to my desk eager to get to work to finally solve my problem. As I pass by the water cooler and go around the ever-present throng of my coworkers with nothing better to do all day but chat, the sun shines off one door to the maze and catches the corner of my eye…just one door. Above that door is a sign that says DBA_TABLES. Well, like any good DBA, I say to myself that it’s not a bad idea to double check these things. Being drawn to it, I enter the DBA_TABLES door and am once again in the maze. I make a quick turn and something jumps out at me as if to startle me. But I’m becoming good at this. I don’t care that some little maze dweller insists on telling me this table resides in tablespace USERS. I am quick to know that this makes no difference to my issue at hand. I push on and ignore all of these little imps with their spurious information. I press on. And there I have it…confirmation at the maze exit that there are no stats on this table. A quick lesson was learned here: sometimes the tools will give you information that isn’t relevant to you on this day.
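
For the record, that double check was nothing more exotic than a quick query like this one (the owner and table names are made up for illustration; substitute your own):

-- A NULL LAST_ANALYZED (and NULL NUM_ROWS) is the telltale sign the table has never had stats gathered
select owner, table_name, num_rows, last_analyzed
from   dba_tables
where  owner = 'APP_OWNER'
and    table_name = 'MY_BIG_TABLE';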

I may be new at this DBA game, but I do know this. I need to see how things are performing now, make a change, and measure the performance improvement, if any. So I head back to the maze. This time, I enter the door marked SQL Developer Autotrace and I execute the offending SQL statement. I not only get the runtime of the SQL statement but I can see the number of reads and the execution plan. I quickly update the stats on the table my Klingon-speaking friend pointed out to me. (A quick aside…I used to think he was a jerk, but now he’s not so bad. I can learn from this guy. Maybe one day I too can speak Klingon.) Then I enter the SQL Developer Autotrace door again. Not only did my query go from 2 minutes of execution down to 2 seconds, but reads dropped significantly and the Explain Plan improved. Ok, that last part is a bit of a stretch. I’m still too green to know the Explain Plan was better, but looking back on it later in my career I know it was. I quickly learn that sometimes, the performance tuning tools at my disposal are not only there to help find the root cause of the problem, but are also there to confirm the solution actually fixed the problem. And sometimes, the tools to confirm the results are not the tools I used to find the root cause.
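
For the curious, “update the stats” boiled down to a call along these lines (the schema, table, and options are illustrative; tailor them to your own environment):

-- Gather fresh optimizer statistics on the problem table, including its indexes
exec dbms_stats.gather_table_stats(ownname => 'APP_OWNER', tabname => 'MY_BIG_TABLE', cascade => TRUE);

Run the statement through Autotrace before and after the call and you have your before-and-after comparison of elapsed time, reads, and the plan.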

I quickly inform my end user that the issue is resolved. The user grumbles something I can’t quite make out and checks to see if his life is actually better. And that’s when I receive it. The greatest gift a DBA could ever receive. That’s right…user adoration. Today, I am a miracle worker, or so the user thinks. As I am standing in this user’s cubicle he shouts out “HE FIXED IT” and, on cue, the entire department’s heads pop up over the cubicle walls like gophers out of the ground. Hurray, they cheer! I am loving life, basking in the glow. Why, the boss even offers to take us out to the pub after work…first round is on her.

I walk back to my desk, eager to take on the next challenge. This job could not be any sweeter.

I remember my first encounters with this Performance Tuning Maze as if it were yesterday. When we were joking over pints at the pub that night, I dared not speak of some of the things I saw in that maze. My coworkers wouldn’t understand anyway. I never tell anyone of my fights with the MOS dragons. I’ve been burned too many times. I never tell anyone how boring it is to run a query, wait an hour for a result, try again, wait for an hour, try again, wait for an hour..oops..I dozed off there. The trials and tribulations of my youth are better saved for another time. Maybe I’ll write another book.

But I learned a lot back in those days. Over time, I became better at choosing the best entrance to the maze for the problem at hand. After all, it is only with experience that one can get better with these magical performance tuning elixirs. I’ve also learned that sometimes, one tool seems to be the correct one for the job, only to discover partway through the tuning effort that another tool is better suited.

I’ve also learned that it is only by working with the tools, and learning what they are good at and conversely what they are not good at, that I can best choose the appropriate tool for the job. Back in the day, I often felt like I was trying to pound in a screw with a hammer. Now I see a screw and know the best tool is a screwdriver.

Over time, I’ve grown the number of entrances to my performance tuning maze. I still go through the tried-and-true doors, like the one with just a number above it, 10046. In the past, I’ve been told about magical doors that led to rainbows and unicorns, only to discover yet one more grumpy old troll under a bridge. I was skeptical about Lighty being such a magical tool in the beginning, but I was wrong about that one.

Oh, the tales I could tell you, but this story is really about that Performance Tuning Maze. It always comes down to that maze. Choose the best door possible; only experience can tell you which one that is, and that will let you arrive at your solution the quickest. Make a wrong turn and start over. Don’t be afraid to enter the maze multiple times. When you think you have the solution, go through the maze to verify. This magical Performance Tuning Maze with all of those wonderful Oracle performance tuning utilities and tools has now become one of my favorite places to hang out. I like to add more entrances all the time, hoping that each new tool will lead me to the end of the maze much faster. Sometimes they do and sometimes they don’t.

I still remember the days when I used to hang out in the old “ratio-based tuning” maze, but I’ve moved on to greener pastures. I still chuckle when I see some new DBA standing in front of that old maze, covered in spider webs and they just can’t take the hint. And then I get cranky when I yell at them to forget about that maze and come over here where everyone else is hanging out, only to be spurned by someone who thinks they know better. Well if we ever see them again, we can say “I told you so” and have a good laugh.

I often work with people who see me using some of these shiny tools. They watch me enter the maze and come out the other side with the answer. So their obvious next question is “can I go in that door too?” I chuckle. “Sure…go right ahead,” I tell them. Armed with this cool tuning tool, but zero knowledge of how to tune Oracle, they make an earnest, but feeble, attempt. They call me over to the maze and ask me to help them solve the problem. So we fire up the tool and look at it. I instantly recognize the root cause of the issue, but the tool’s shiny bells and whistles are confusing the neophyte. At this point, I’m now the one speaking Klingon. Within seconds I say, “See…right there…that’s your problem,” and I get back that same quizzical look I gave my DBA mentor so many years ago. These novices always want access to the tools and think they can wield them like a master. They have no clue what is in the maze nor any clue how to navigate it. Too many people think the tools are the secret sauce when it’s really the person wielding the tool. Sadly, some people with access to the tools just want a quick and easy answer. They don’t want to put the time in like so many of us.

Time, time to follow the masters. We all have our version of Mt Rushmore. Carved in stone. People like Millsap, Lewis, and Shallahamer, to name a few. Your Mt Rushmore may have other names or even similar ones. Others who view our Mt Rushmore, all set in stone, do not realize these fine people were our guides in the maze. They showed us how to navigate the maze. They showed us how to use the tools and which tools to use when. Those of us who learned from the masters try our level best to pay it forward and teach others, although we may never achieve such lofty heights, and that’s ok.

The moral of the story is to learn these tools: learn what they do and what they don’t do. Learn which problems they help tackle. Leverage the tools, but realize that you need to learn as much as you can so that you too can walk the maze with confidence. Sadly, I have to end my story here. Someone just came into my office with another performance tuning problem. Time to enter the maze again. Now which door do I take?

 

Jul 14

Yet Another Reason To Upgrade

As I usually do, I pull up lots of questions on the MOSC and OTN forums and pick and choose which ones I want to answer. This morning, I came across a post from a user who was asking questions about Oracle 9i. As expected, this user got the anticipated responses from the forum faithful about how old their Oracle version is and how they should upgrade to a much more recent version.

Like others, I have written articles about reasons why you should upgrade. Here is one such article I wrote for SearchOracle.com:  Top Reasons To Do An Oracle Upgrade

The article above is not groundbreaking, and you can certainly find alternative examples from other authors as well. But in reading this forum post, it struck me that there is yet another reason to upgrade your Oracle database that I have yet to see anyone write about. That reason is:

I cannot remember the details about your very old version any more!!!

This morning’s forum discussion was referencing 9i. I haven’t used Oracle 9i since I upgraded everything to 10.2. Oracle 10.2 was released in July 2005. I do not have accurate documentation going back that far but I’m pretty confident I was running 10.2 across the board by July 2006, more than 10 years ago!

Technology changes and the pace of change is fast. I cannot be expected to remember all the intimate details of Oracle 9i (any of its versions), a version I have not used in 10 years or more. Now if you are asking me a question that is still relevant in today’s supported versions, then I can certainly provide an answer. But if the details of your answer are not applicable to today’s 11.2 or 12c versions, then I’m sorry but I cannot adequately answer your question.

In addition, I am not going to find the 9i documentation to cite in any response to your question. If I go to http://docs.oracle.com to look up something I may need to help answer your question, it gives me versions 11.2 and higher. I can choose Earlier Versions, but my feeling is that if you are too lazy to upgrade your database, then I’m going to be too lazy to go through that extra step of finding your version’s documentation. Another step I am unwilling to take is to download and install the old version and create a database so that I can attempt to reproduce your problem. If I cannot reproduce your issue in one of my current versions, then the answer is obvious…upgrade to resolve your issue.

The bottom line is to upgrade to a more recent version so that you can enjoy obtaining help on whichever forums you post your questions to. I’m not the only one who answers questions on MOSC/OTN, but I’m also not the only one who feels this way.

Stay current with your versions so that you can interact with your peers no matter how you discuss Oracle.

Jun 29

ORA-16789: standby redo logs configured incorrectly

The DG Broker is reporting some warnings for one of my databases in the Broker configuration.

 

DGMGRL> show database orcl

Database - orcl

Role: PRIMARY
 Intended State: TRANSPORT-ON
 Instance(s):
 orcl1
 orcl2
 orcl3
 orcl4
 orcl5
 orcl6
 orcl7

Database Warning(s):
 ORA-16789: standby redo logs configured incorrectly
 ORA-16789: standby redo logs configured incorrectly
 ORA-16789: standby redo logs configured incorrectly
 ORA-16789: standby redo logs configured incorrectly
 ORA-16789: standby redo logs configured incorrectly
 ORA-16789: standby redo logs configured incorrectly
 ORA-16789: standby redo logs configured incorrectly

Database Status:
WARNING

 

If these warnings were seen in the standby database, the most common reason is that the Standby Redo Logs are smaller than the primary’s Online Redo Logs. But note in the above output, this is from the primary database.

 

The reason for these warnings is quite simple…my primary does not have any SRL groups. Once I added the SRL groups, the warnings went away.
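
For anyone hitting the same warning, the fix is a set of ADD STANDBY LOGFILE commands on the primary, repeated for each thread. A sketch of the kind of statements involved (the diskgroup name and sizes below are illustrative, not my exact commands):

-- One statement per SRL group; repeat for every thread. The usual rule of thumb is one more
-- SRL group per thread than you have ORL groups, sized the same as the ORLs. The '+FRA'
-- diskgroup is just an example location.
alter database add standby logfile thread 1 ('+FRA') size 1g;
alter database add standby logfile thread 2 ('+FRA') size 1g;
alter database add standby logfile thread 3 ('+FRA') size 1g;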

Jun 29

ORA-01264: Unable to create logfile file name

I am in the process of replacing production hardware for an Oracle RAC database. To do this with zero downtime, I am adding the new nodes and then extending the RAC database to run on both old and new nodes. Then I will remove the old nodes once they no longer have any database connections.

When adding new nodes, I needed to add additional threads of redo to support the new instances. This means I needed to add new Online Redo Log groups for those threads. In my haste to get this project completed, I forgot to take into account my physical standby database. Once the standby received the redo records to create the ORLs, it promptly threw the following errors:

Thu Jun 29 14:17:44 2017
Media Recovery Log /u01/app/oracle/admin/orcls/arch/7_63989_677462342.dbf
No OMF destination specified, unable to create logs 
Errors with log /u01/app/oracle/admin/orcls/arch/7_63989_677462342.dbf
MRP0: Background Media Recovery terminated with error 1264
Thu Jun 29 14:17:44 2017
Errors in file /u01/app/oracle/diag/rdbms/orcls/orcls1/trace/orcls1_pr00_22818.trc:
ORA-01264: Unable to create logfile file name
Recovery interrupted!
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT USING ARCHIVED LOGFILE
Recovered data files to a consistent state at change 259717189001
Thu Jun 29 14:17:45 2017
Errors in file /u01/app/oracle/diag/rdbms/orcls/orcls1/trace/orcls1_pr00_22818.trc:
ORA-01264: Unable to create logfile file name
Thu Jun 29 14:17:45 2017
MRP0: Background Media Recovery process shutdown (orcls1)
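
For context, the change on the primary that generated this redo was adding ORL groups for the new threads and enabling them, something along these lines (the thread number, group numbers, sizes, and diskgroup here are illustrative, not my exact commands):

-- On the primary: add Online Redo Log groups for a new instance's thread, then enable the thread
alter database add logfile thread 5 group 51 ('+REDO') size 1g;
alter database add logfile thread 5 group 52 ('+REDO') size 1g;
alter database enable public thread 5;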

 

Silly me. Thankfully this is an easy fix. On the standby, issue:

SQL> alter system set standby_file_management=MANUAL scope=memory;

System altered.

 

Then watch the media recovery take place. We will see a warning in the alert log that the ORL will not be added, which is okay because the standby already has its ORLs.

Media Recovery Log /u01/app/oracle/admin/orcls/arch/7_63989_677462342.dbf
WARNING: standby_file_management is set to MANUAL
Online logs not added for newly enabled thread 1 

Once sufficient recovery has been applied, I set the parameter back to its original value.

SQL> alter system set standby_file_management=AUTO scope=memory;

System altered.

I could get away with doing this because no other file management tasks were performed on the primary during the time the ORLs were being added.

May 18

Increase VOTE diskgroup in ASM for GI 12.2 Upgrade

Oracle 12.2 requires more disk space in your Voting Disk/OCR area than previous versions. In the past, I had a pretty small diskgroup for Vote and OCR. Here are the instructions I followed to set up Oracle RAC on my laptop for 12.1. Before I can upgrade to Grid Infrastructure 12.2.0.1, I need to add more space to my VOTE diskgroup. This is pretty easy since I am running on Oracle VirtualBox.


To add more space, I will add another disk file shared between the nodes and then add that disk device to my ASM diskgroup. First, I shut down my hosts. Next, in VirtualBox Manager, I select host01 and then click on the Settings button. I then click on the Storage button.

I can see my VM’s internal disk (host01.vdi) and the three disk files I have for my shared storage as outlined in my original setup document I linked at the beginning of this post. I will click on the second plus sign to add a new hard disk. On the next screen, I click on the Create New Disk button.

I entered the file name and made sure the location was correct. I also made sure this file was big enough for my new requirements. I pressed the Create button to create the hard disk on this node. Before I go further, I need to make this file shareable across the nodes. In File –> Virtual Media Manager I can see the file I just created. I select it and click on the Modify button. I choose the option to make this file shareable and press OK. I then press Close.

Next, I add this hard disk to host02. I select host02 and press the Settings button. In the Storage section, I press the same icon to add a new hard disk. This time, I choose the Choose Existing Disk option. I navigate to my file and press OK to add it to the node. I now start up both nodes.

Once everything is back up and running, I verify the new disk device on both nodes. On each node, /dev/sde is now present; this is my new hard disk.

[root@host01 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,  0 May 10 09:22 /dev/sda
brw-rw---- 1 root disk 8,  1 May 10 14:22 /dev/sda1
brw-rw---- 1 root disk 8,  2 May 10 09:22 /dev/sda2
brw-rw---- 1 root disk 8, 16 May 10 14:22 /dev/sdb
brw-rw---- 1 root disk 8, 17 May 10 14:22 /dev/sdb1
brw-rw---- 1 root disk 8, 32 May 10 14:22 /dev/sdc
brw-rw---- 1 root disk 8, 33 May 10 14:22 /dev/sdc1
brw-rw---- 1 root disk 8, 48 May 10 14:22 /dev/sdd
brw-rw---- 1 root disk 8, 49 May 10 14:22 /dev/sdd1
brw-rw---- 1 root disk 8, 64 May 10 14:22 /dev/sde

First, I need to partition the disk. I use “fdisk /dev/sde” and answer “n”, “p”, “1”, default, default, “w” when prompted. I now add this disk to ASM.

[root@host01 ~]# oracleasm createdisk VOTE2 /dev/sde1
Writing disk header: done
Instantiating disk: done

I use ‘oracleasm listdisks’ on both nodes to verify the VOTE2 disk is visible on both nodes. If the VOTE2 disk does not appear on host02, issue “oracleasm scandisks” to have that node discover it. I rarely work with ASM directly, so for this I used the asmca utility to add the disk to the diskgroup. In the Disk Groups tab, I right-clicked on the +VOTE disk group and chose Add Disks. I could see my new disk device, so I selected it and pressed OK. I can now see the +VOTE disk group has plenty of space.
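
If you prefer a command line over asmca, the same step can be done from SQL*Plus connected to the ASM instance as SYSASM, with something roughly like this (the diskgroup and ASMLib disk names assume my setup above):

alter diskgroup VOTE add disk 'ORCL:VOTE2';

-- Verify the diskgroup now has the extra space
select name, total_mb, free_mb from v$asm_diskgroup where name = 'VOTE';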

That’s all there is to it. My VOTE diskgroup is now much larger and I can proceed with my GI 12.2 upgrade.

 

May 15

Surviving an Oracle Audit

I recently had the experience of being audited by Oracle Corporation. Nothing quite prepares you for what to expect until you have actually gone through the process. The best advice I can give anyone is to start working today as if you are undergoing an audit. If you are not thinking about being in compliance with your license agreement, by the time an audit is sprung on you, it may be too late.

To be prepared, the DBA should be well-informed about Oracle licensing policies. Reading the Oracle License Guide is a must, but it is only a start. There are many things that are not published. For example, you may not be aware that if you are running Oracle on VMware ESX 6 or higher, you need to license all ESX clusters across your enterprise. Oracle is also changing terms such as licensing on non-Oracle cloud services. It is incumbent on today’s Oracle DBA, especially with the cloud changes, to keep on top of Oracle’s licensing stance. Licensing can even change between versions, so the DBA needs to keep track of the differences. If a support contract has lapsed, the company cannot legally upgrade their database to a version released after the contract expired.

Once the DBA has a good feel for the licensing terms and policies, they should then determine the company’s current entitlements. Is the current license agreement “per processor” or Named User Plus (NUP)? How many cores are licensed? Even NUP licensing has to take into account the number of cores. Are optional features being used that have not been paid for? The DBA_FEATURE_USAGE_STATISTICS view can help you understand which features have been used in the database so you can compare that usage to what is licensed for that database. The DBA should perform their own internal audit and make sure the environments they are in charge of are compliant with their Oracle contract. If their Oracle environment is not in conformance with the contract, the DBA needs to take steps to address the issue.
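
As a starting point for that internal audit, a look at the view can be as simple as the query below (a sketch; the output still has to be interpreted against what your contract actually covers, since some rows map to features included in your edition):

-- Which database features have been detected as used, and when
select name, version, detected_usages, currently_used, last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;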

If the company wants to make sure they have everything covered in advance of an audit, Oracle has a division called License Management Services (LMS) that will help, for a fee. Oracle LMS will help the company understand what it is licensed for and how to true up or remediate to be in compliance with the contract. There are third-party vendors that provide similar services as well.

When an audit starts, you will be asked by Oracle LMS for two things. One, they will ask you to fill out a detailed spreadsheet showing exactly where you are running Oracle, what options are in use, and information about the environment. Two, they will ask you to run a script to harvest license details from your systems. The script will need to be run on each and every Oracle machine in the enterprise.

This stage is where the nervous part really begins. Oracle LMS will be combing through what the company has paid for and trying to determine if the usage is in compliance. There is always fear and trepidation wondering what was missed on our end and what they might find.

It is well understood by many in the Oracle community that audits are used to drive sales. In the past, this meant a company could make bad audit findings magically disappear if they agreed to buy some new products. In today’s cloud-enabled world, many companies are finding that Oracle audits are being used to drive sales of cloud services. It is up to each company to decide if they want to fight the findings in court or to work to reach some other agreement with Oracle. For many, a court battle is even more costly.

Oracle audits can be a scary time in the DBA’s life. Be prepared by doing your homework upfront. Work to make sure your environment is as compliant as it can be. Time spent today will go a long way towards making the audit go more smoothly in the future.

May 02

SQL Developer Icons

I will occasionally see a question in the OTN/MOSC forums asking what a specific icon means in SQL Developer. There is no documentation on the subject, so we often have to figure it out on our own. My personal preference would be for a tooltip to pop up when we mouse over the icon, telling us what it means. Until we get that feature, or something similar, I decided to start documenting the icons here in my blog. Here they are:

 

  • Green ladybug: the object is Compiled for Debug
  • Red circle with a white ‘X’: the object is invalid
  • Advanced Queuing table
  • White line through the middle of the table icon: Partitioned Table
  • Index Organized Table
  • Global Temporary Table
  • External table
  • Greyed out: SYS IOT overflow table

 

If anyone has any icons I missed, private message me on Twitter (@BPeaslandDBA) or get in touch with me some other way, and I will update this blog post.

Apr 17

SQL Dev 4.2 Top SQL

One of the things that I’ve always liked about SQL Developer is how it saves me time. Sure, I get this fancy-looking GUI tool that lets me enter SQL commands against the database. But I’ve had that capability from the very beginning with SQL*Plus and Server Manager (if you’ve been around long enough to remember that one). But SQL Dev is a great time saver, and it’s the reason why I rarely use SQL*Plus anymore.

SQL Dev 4.2 was released to production last week, and one of the new features I like is the Top SQL section of the Instance Viewer. This information is nothing new. I’ve been able to get to this information with SQL scripts and from Oracle’s Enterprise Manager. But SQL Dev saves me time once again.
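
The kind of script I used to run by hand looks roughly like the one below (a sketch against V$SQLSTATS; SQL Dev presumably gathers something similar behind the scenes):

-- Top 10 SQL statements by total elapsed time since they entered the shared pool
select *
from  (select sql_id, executions, round(elapsed_time/1000000, 1) as elapsed_secs,
              buffer_gets, disk_reads
       from   v$sqlstats
       order  by elapsed_time desc)
where rownum <= 10;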

To use this new feature, one obviously has to be running SQL Dev 4.2. If you do not have a copy of this latest version, you can get it here. Next, go to the DBA panel and connect to your instance of choice, then launch the Instance Viewer. You should be able to see the Top SQL section quite easily.

You can click on those column headings to sort by the metric of your choice.

The first time you double-click on a specific SQL statement for this instance, SQL Dev tries to protect you from potential harm, because what follows requires the Tuning Pack option.

After you press Yes (and only do so if you are licensed for this optional pack), you are given details of that SQL statement. You can see the Explain Plan.

One of my two favorite parts of this new feature is the ability to get easy access to bind variable values. All too often when tuning SQL statements, I need to determine what a user supplied for bind variable values. 
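
Before this feature, I would dig those values out by hand with a query like this one (illustrative; note that V$SQL_BIND_CAPTURE only holds the most recently captured values and is sampled periodically):

-- Most recently captured bind values for a given SQL_ID (substitute your own)
select name, position, datatype_string, value_string, last_captured
from   v$sql_bind_capture
where  sql_id = '&sql_id'
order  by position;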

Just by clicking on the SQL Tuning Advice tab, I can see the results of the Tuning Advisor. This is my other favorite feature; it saves me a ton of time over launching a browser, connecting to Enterprise Manager, and then making about 10 other clicks to get to the same information.
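
For comparison, the long way around outside of EM or SQL Dev is to drive the advisor yourself through DBMS_SQLTUNE (the same Tuning Pack licensing applies; the SQL_ID and task name below are placeholders):

declare
  l_task varchar2(64);
begin
  -- Create and run a tuning task for one SQL_ID from the cursor cache
  l_task := dbms_sqltune.create_tuning_task(sql_id => '&sql_id', task_name => 'tune_demo');
  dbms_sqltune.execute_tuning_task(task_name => l_task);
end;
/

-- Review the advisor's findings
select dbms_sqltune.report_tuning_task('tune_demo') from dual;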

The last bit of information shows the elapsed time history for this SQL statement.

When I first read the early adopter information for SQL Dev 4.2, a lot of it was focused on the new Formatter. While that is a great time saver to many, I rarely use it. But this is one of the new features I am really excited about.

 

Apr 10

Slow Migration To Cloud

Raise your hand if you’ve heard something like this before from a cloud vendor:

By insert_year, we expect that insert_very_high_percentage of IT needs will be run in the cloud.

We certainly have Mark Hurd making the claim at OpenWorld 2015 that by 2025, 80% of all production apps will be in the cloud. Or Oracle saying that 100% of dev and test will be in the cloud by 2025. Other vendors have made similar claims. Now I’m starting to hear anecdotal evidence that cloud adoption rates are not as speedy as we have been led to believe.

About a year ago, I was asked by a database-related media outlet to write some articles about cloud and Oracle databases. My response was that my company has been very slow to move to the cloud and that I doubted I could be a good technical resource for them. The editor I spoke with then informed me that they were hearing the same thing from other technical writers, and they were having a hard time coming up with good writers on cloud subjects because there was little real-world movement to the cloud. Yet the editor was trying hard to get content on their media outlet because the cloud is all anyone talks about. The editor was in a bind between what they wanted to get published and what content they could obtain.

The take-away I left with that day was that the media and the vendors are hyping the cloud, but those of us in the trenches are not jumping on the bandwagon. Which leads to this obvious conclusion…the ones that really want us in the cloud are the cloud vendors. The big reason Mark Hurd wants us to run 80% of our production apps in the cloud is because it’s good for Oracle’s business. Cloud vendors will give us plenty of reasons why the cloud is good for our business. And those reasons are often very good ones, and they are why we will eventually all get there.

Last week I was visiting with the CEO of a technology company, a long-time friend of mine. After chatting about the spouses and kids and lots of non-IT topics, we turned the conversation to how his business is doing. Business is great and he’s constantly busy. I then asked if his clients were moving to the cloud and if his business is helping them with those endeavors. He replied that he has almost no clients moving to the cloud as of yet. He gave me a list of reasons why his clients’ cloud adoption rate was low, but primarily it was too little benefit for the initial cost of getting there.

Today, I came across a SQL Server article that seems on the surface to have little to do with the cloud. Buried in that article was this quote which jumped out at me.

Cloud adoption is lower than I expected.

To me, cloud adoption is right on target with my expectations. The only ones expecting a faster transition to the cloud are vendors, media, and those who buy all the hype. The author of this article gives two major reasons why cloud adoption is not what people anticipated. I can buy those reasons as being applicable to some cloud deployments, but I think the author misses the boat on other, more important reasons that are slowing down cloud adoption.

Here are the big reasons why I see cloud transitions moving at a slower-than-expected pace, in no particular order:

  • Transitioning existing systems to the cloud is expensive. I’ve seen multiple companies move to Office 365 rather than hosting their own Exchange services. In every one of those transitions, the company spent a huge amount of man-hours getting there. I’ve seen teams of people working 10-20 hours per week for an entire year, all to make the move to the cloud. This amount of effort isn’t sold to us when the cloud vendor tries to tell us how much money we will save. Anyone who has ever moved a production enterprise system within their own company knows how much effort is needed to pull off the transition seamlessly for the end users. If you are moving to the cloud you might also need to upgrade to newer software versions, newer platforms, etc. This type of move is not done lightly. The cloud vendors have made it very easy to transition lots of things to the cloud, but they cannot make it easy for our internal organizational processes to change.
  • The best transitions to the cloud are for new implementations. It is hard to migrate existing systems anywhere. What we often see are companies that enter cloud computing with new endeavors. This lets the company gain experience with cloud technologies without disrupting existing workflows. Unfortunately for the cloud vendors, new implementations do not come along every day for every IT enterprise.
  • Regulatory Compliance. Depending on your business, you may have various regulations your company has to comply with in order to do business in that region. Cloud vendors have ramped up efforts to provide service in as many regions as possible while meeting governmental regulations. However, cloud vendors can move faster than the regulations imposed on businesses, and the businesses are often left trying to sort out whether the cloud vendor’s solution keeps them in regulatory compliance or not. It should be no surprise that this effort, by itself, costs the company money that doesn’t need to be spent if they stay on-premises.
  • We need to protect the data. Data is paramount. Data is very valuable to the business. Data breaches are extremely costly. Even if the cloud solution is compliant with regulations, the company may still not be ready to accept the impacts if a data breach is found to be due to something a third party did or did not do properly. The company’s customers or shareholders may not be ready for this. Being the Data Guardian sometimes trumps any desire to move to the cloud. This may be a FUD factor at work, but it is real for many.
  • Field of Dreams vs. If It Ain’t Broke… The cloud vendors are trying to sell us a Field of Dreams. They built it and we should come. The farmer in the movie was right. He built it and they did come. Cloud vendors are telling us of this wonderful new playing field and we should jump on board. Life is great there. These cloud vendors, like the farmer in the movie, are right. However, there is this feeling many in the IT community have and it says “If it ain’t broke, don’t fix it”. As many companies are looking at the cloud and seeing the proverbial greener grass on the other side, we also look and see that things are fine right where they are. Quite honestly, while there are benefits to the cloud, those benefits by themselves are sometimes not a big enough business driver to make the move.

To me, the above list covers the big reasons why cloud adoption is slower than some had expected. Many IT organizations across the world are simply not going to make a mass transition to the cloud in a short timeframe.

Please do not read this blog post and think that I am anti-cloud. More than anything, I’m just being a realist about my company’s IT infrastructure and how the cloud fits into it. I know that we are not going to jump at the cloud just because it exists, and I am not alone in this line of thought. We’ll get there one day, in our own time, and that timeline will take many, many years.