Jan 29

Complacency leads to: Risk Becomes Reality

I was participating in a recent thread on the OTN community where someone was asking questions about downgrading after a database upgrade. One of the responses asked how many people actually practice database downgrades. I created this poll to find out.

I was surprised to find one contribution to that thread which said:

I have done my fair share of upgrades – and never had to downgrade even once

Now that poster didn’t explicitly say it, but it was almost as if that individual was saying that practicing downgrades is a waste of time because they won’t ever be needed. I’ll give the poster the benefit of the doubt and assume that this Oracle employee was not actually saying this. I’m not trying to pick on this individual; I’ll just let this thread provide the opportunity to discuss the topic from a more generic viewpoint. (Update: in the time it took me to write this, the poster who prompted this blog entry came back to the thread and did say they “did not mean to imply that we should not ‘test’ downgrades.”)

Back in July, I wrote a blog post about The Data Guardian. In that blog post, I said:

the database exists for one reason, to provide efficient and timely access to the data.

The DBA needs to protect the data. That is job #1. Job #2 is for the DBA to provide efficient and timely access to the data. What good is having the data if the people who need access to it cannot get to the data? If those people have terrible performance when interacting with the data, then they might as well have no access.

As DBAs, we need to perform risk management. We need to determine which risks might become reality. The DBA’s job is to measure those risks and determine two plans of action: what steps can be taken to keep the risk from becoming reality, and what steps must be taken to resolve the issue if it does.

Even a junior-level DBA will understand the importance of backups. Backups are a risk management strategy. If data is lost, we can recover the data from the backup. And even a junior-level DBA understands the importance of being able to restore from the backup.

In this OTN thread, I wrote this:

Painful fact of anyone in IT:   The moment we become complacent and willfully refuse to perform risk mitigation is the moment in time the wheels are set in motion and that risk becomes reality. It is that moment our careers hang in the balance. Will you be prepared to respond?

To me, this is a Murphy’s Law sort of thing. I’ve said similar things in the past. The idea (and it’s the whole point of this blog entry) is that if I don’t take appropriate risk management steps, then I’m just asking the gods to turn that risk into reality. If I refuse to adjust my rear-view mirror and use it when I’m backing up my vehicle, well, that’s the day I back into something. If I refuse to tie my shoelaces, that’s the day I step on one and trip. The day I refuse to wear protective goggles when using a power tool is the day I get something in my eye. The day I go to the beach and refuse to put on sunscreen is the day I come home with a sunburn. You get the idea.

Some readers may be thinking that I’m crazy and that the universe doesn’t have some master plan to screw with me just because I’m being complacent. And I would agree. So I’ll say it another way: if I do not plan to mitigate a risk, then I have done nothing to stop it from becoming a reality. The chances of it becoming a reality do not decrease because of my inaction.

There are two major components to risk management: 1) determining the probability of the risk item occurring, and 2) determining the impact when it does occur. The items with the highest probability of occurring are mitigated first. This part is easy and is something many people working on risk management already do. They put the risk items into a spreadsheet, fill in a value for the probability of each risk occurring, sort on the probability column, and start risk mitigation from the top down. Many risk management strategies draw a line somewhere in the middle of the list and decide that any risk item below the line has too low a probability to worry about. We can’t mitigate every possible risk in the universe; there just isn’t enough time. So we have to draw the line somewhere.

One of the failings I see all the time is that risk management does not spend much time focusing on the impact of a risk becoming reality. The spreadsheet needs a similar column rating the impact to the business for each risk item, and the risk manager needs to sort the spreadsheet on that column as well. Any item with a big impact needs risk mitigation activities even if it has a low probability of occurring! Sadly, too many in the risk management business fail to include this step of assessing the risk impact. Again, when the spreadsheet is sorted by impact to the business, a line is drawn somewhere.

One may find that risk items with a HIGH probability have a LOW or even VERY LOW impact to the business. I like risk management spreadsheets that include a third column which is “probability x impact”. This column helps show the relationship between the two risk components.
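To make that relationship concrete, here is a hypothetical sketch of the same scoring done in SQL rather than a spreadsheet. The RISK_ITEMS table, its columns, and the 1-to-5 numeric scales are made up purely for illustration.

-- Hypothetical risk register: probability and impact scored 1 (VERY LOW) to 5 (VERY HIGH).
select item,
       probability,
       impact,
       probability * impact as risk_score   -- the "probability x impact" column
from   risk_items
order  by risk_score desc;                  -- mitigate from the top down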

Side Bar: Notice how when I talk about risk management I talk about probability and impact. If you aren’t thinking about both of these areas, then you are only performing half the risk management you should be.

Let’s go back to the database upgrade question that prompted this blog post. I think everyone reading this article will agree that upgrading an Oracle database is a risky proposition. There are so many different things that could go wrong with an Oracle database upgrade. The probability of an upgrade failure is HIGH. Risk mitigation items often include, but are not limited to, practicing the upgrade on clones of production and backing up the database before the upgrade process begins. Why do we do this? Because the impact to the business is VERY HIGH. If we fail when upgrading our production database, then our business users have no access to the data. We aren’t a very good Data Guardian if we cannot get past this failure. If we practice the upgrade sufficiently in non-production environments, we can reduce the probability of the risk item to MEDIUM. But in all likelihood, we cannot reduce that specific risk probability to LOW. That is why we take the backup before the upgrade begins. Should we still have problems even though we have done our level best to reduce the probability of that risk item, the impact to the business is still VERY HIGH. So the DBA’s risk remediation strategy is to take notes on where and what caused the upgrade to fail, and to restore from the backup. The database is back up and running and we have eliminated the impact to the business. The DBA then goes back to the drawing board to determine how to resolve what went wrong, reducing the probability of that problem occurring again when they come back at a later point in time to attempt the upgrade again.
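To make that mitigation step concrete, a minimal sketch of the pre-upgrade safety net might look like the two commands below. This is only an illustration, not the exact procedure for any given site: the restore point name is made up, and a guaranteed restore point assumes a Fast Recovery Area is configured.

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

The backup is what lets us restore to the old version if the upgrade fails outright; the guaranteed restore point is an optional extra layer for quickly rewinding the database if we decide to abandon the upgrade before opening it for business.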

So let’s go back to the comment in the OTN thread that seemed to be saying that practicing database downgrades isn’t worth the time. I disagree. And my disagreement has everything to do with the impact to the business. I do agree with this comment the poster made in their reply:

thorough testing of all of the critical parts will identify any issues and have them resolved before the production upgrade.

I agree with that 100%. Why do we do this “thorough testing”? It is all because of risk mitigation. We are attempting to reduce the probability that the upgrade will cause poor performance or cause application functionality to break. But even as that poster said, “There will always be issues that pop-up in production after the upgrade because it is impossible to test 100% of your application.”  Again, I agree 100% with what this poster is saying here. But what about the impact to the business? I’ll get to that in a minute, but first I have to digress a bit in this next paragraph…

I recently upgraded a critical production system from 11.2.0.4 to 12.1.0.2. Where I work, we have more application testing than I’ve ever seen in my other jobs. We have a full QA team that does testing for us. We even have a team in charge of our automated testing efforts, with automated robots that exercise our application code nightly. On top of all of that, we have another automated routine that does a quick examination of critical code paths whenever people push code changes to Test or Prod. I upgraded development environments (more than 15 of them) to 12.1.0.2 and then waited one month. I then upgraded Test and waited 3 weeks before I upgraded production. There were issues found and resolved before we upgraded production. But even after all of that, I had big issues once production was upgraded. You can visit my blog posts from mid-October to mid-December to see some of those issues. I was very close to downgrading this database, but I managed to work through the issues instead. Now back to the point I was making…

After the upgrade is complete, the database is opened for business. Application users are now allowed to use the application. What happens inside the database at this point? Transactions! And transactions mean data changes. At the point in time the DBA opens the database for business after an upgrade, data changes start occurring. After all, that’s the whole point of the database, isn’t it? Capture data changes and make the data available to the application’s end users.

So what happens if you’re in the boat I was in last fall with my database upgrade? I was hitting things that we did not see in non-production, even after all of our testing. The impact to the business was HIGH. I needed to be able to reduce that impact. I had three options. 1) Fix the issues, one by one. 2) Restore from the backup I took before the upgrade so that I could get the database back to the old version. 3) Downgrade the database and go back to the drawing board. I chose the first option, as I always have during my career. But what if that had not been sufficient? It can take time to resolve the issues, and some businesses simply cannot afford that kind of time with that negative impact. How many websites have been abandoned because performance was terrible or things didn’t work correctly? And for the strong majority of production databases out there, option 2 has a terrible impact to the business! You’ll lose the transactions that were completed after the upgrade! The DBA won’t be able to roll forward past the upgrade while keeping the database at the old version, so data will be lost, and for many production databases that is unacceptable. The business may be able to afford one hour of data loss, but how many people would pull the trigger on this action within one hour of the upgrade? In all likelihood, this action would be performed days after the upgrade, and the impact to the business for that kind of data loss is well above VERY HIGH. So that leaves option 3 as the option with the lowest impact to the business for resolving whatever issues the business is experiencing after the upgrade.

You can probably tell from that last paragraph that I feel it is important for the Oracle DBA to know how to downgrade their database after an upgrade is complete. I’ll concede that the probability of the DBA needing to perform a downgrade is VERY LOW. But the impact of not being able to downgrade may be catastrophic to the business. (There are those two words again.) Because the probability is low, I don’t practice downgrades often, but because the impact of not being able to downgrade is very high, I do practice them once in a while.

So in closing, I’m going to go back to that Murphy’s Law thing again. The universe is not conspiring against me, but as the Data Guardian, I need to practice good risk management principles. That means assessing both the probability and the impact of the risk items introduced by my change. The universe and the gods may not make Murphy’s Law or its cousins kick into gear, but if I fail to mitigate risk items I am not doing myself any favors. I am not reducing the probability one bit.

Jan 26

Identifying ASH Sequence Contention in RAC

In Chapter 3 of Oracle RAC Performance Tuning, I showed how improper CACHE values for sequences can cause poor performance in Oracle RAC. I also showed how to spot sequence contention when looking at a session’s wait events.

Today, I was working with a developer who was creating a new sequence. The developer had set a CACHE value of 100, which I initially thought was too low a value. I spotted this low setting during code review. The developer thinks the CACHE value is fine, but I’m not convinced. We will test this under load to see if the CACHE value needs to be adjusted.

In the meantime, I was thinking “what if I missed this during code review?” And a follow-on question, “what if we didn’t notice anything during load testing?” I want to be able to go back and determine which sequences, if any, would be candidates for having an improper CACHE setting. I could certainly trace sessions and analyze the trace files, but that would be too painful. So I devised a script that I can run against Active Session History to help determine candidate sequences.

 

select sh.sql_id, to_char(st.sql_text), count(*)
from dba_hist_active_sess_history sh
join dba_hist_sqltext st
  on sh.sql_id = st.sql_id
where st.sql_text like '%NEXTVAL%'
  and (sh.event = 'row cache lock' or sh.event like 'gc current block %-way')
group by sh.sql_id, to_char(st.sql_text)
order by count(*) desc;

 

This is not a perfect science due to the nature of ASH collection. The session experiencing the contention would need to be caught at just the right time to be in the DBA_HIST_ACTIVE_SESSION_HISTORY table. But the SQL statement above does give me some candidates for consideration. Not every sequence accessed in the SQL statements returned needs to have its CACHE value adjusted; further analysis would be needed. However, this does give me a list of sequences to consider, and it can help answer my initial questions. If I missed the sequence creation during code review, I can hopefully find it later if the sequence becomes a problem for application performance.
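Once a suspect sequence is confirmed to be a problem, the remediation itself is a one-liner. The sequence name and the new CACHE value below are placeholders, not a recommendation; the right value depends on the insert rate observed under load.

-- Hypothetical example: raise the cache on a suspect sequence, then verify the setting.
ALTER SEQUENCE app_owner.order_id_seq CACHE 1000;

SELECT sequence_owner, sequence_name, cache_size
FROM   dba_sequences
WHERE  sequence_owner = 'APP_OWNER'
  AND  sequence_name  = 'ORDER_ID_SEQ';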

Dec 28

SQL Developer 4.1.3 Released

The newest version of SQL Developer was released while I was out of the office last week. The download is in the usual place.

http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html

 

While you’re there, grab the SQLcl download as well. It was last updated 2 weeks ago.

 

Dec 22

A Trip Through the GIMR

Oracle Grid Infrastructure includes the Cluster Health Monitor (CHM), which regularly captures OS-related performance information. In early versions, CHM used a Berkeley DB for its data store. As of Grid Infrastructure 12.1.0.2, an Oracle database is required for the data store. This Oracle database is called the Grid Infrastructure Management Repository (GIMR). Many people are already aware that the GIMR runs with the database name “-MGMTDB” and runs on only one node of the GI cluster. Should that node become unavailable, GI will automatically start the GIMR on a remaining node.

The above paragraph is about all of the background information I am going to provide on the GIMR. If the reader wants to know more, a web search will turn up information on how to manage this database (what little management it needs) and how to start and stop the database and its dedicated listener.

This blog post intends to educate the reader on how to access the GIMR database and extract meaningful information from it. More web searching can show how to use command line utilities to export data from the GIMR. And there is a graphical utility, CHMOSG, that can be used to view the CHM data in the repository. But just for fun, I thought I would show how to get to the data directly.

First, you need to know which node the database is running on. On any node, I can issue the following:

[oracle@host01 bin]$ cd /u01/app/crs12.1.0.2/bin
[oracle@host01 bin]$ ./crs_stat -t | grep -i mgmt
ora.MGMTLSNR ora....nr.type ONLINE ONLINE host01 
ora.mgmtdb ora....db.type ONLINE ONLINE host01

The above shows the database and the listener are running on host01. Now that I know the instance’s node, I can sign on to that node, and set my environment variables to connect to the instance. This database runs out of the Grid Infrastructure home, not the RDBMS home. So I need to set my ORACLE_HOME correctly. Also, the instance name starts with a dash so I need to wrap the SID in double quotes.

[oracle@host01 ~]$ export ORACLE_HOME=/u01/app/crs12.1.0.2
[oracle@host01 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@host01 ~]$ export ORACLE_SID="-MGMTDB"

I can now connect to the instance and verify I am connected to the proper one.

[oracle@host01 ~]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Mon Dec 21 15:17:21 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
SQL> connect / as sysdba
Connected.
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
-MGMTDB

This database is an Oracle multitenant database with one PDB. I need to determine the PDB name, which will be the same as the cluster name. I can remind myself of the cluster name by querying V$ACTIVE_SERVICES.

SQL> select name,con_id
 2 from v$active_services;
NAME                                                      CON_ID
----------------------------------------------------- ----------
my_cluster                                                     3
-MGMTDBXDB                                                     1
_mgmtdb                                                        1
SYS$BACKGROUND                                                 1
SYS$USERS                                                      1
SQL> alter session set container=my_cluster;
Session altered.

Only one service has a container ID not equal to 1 (con_id 1 is the CDB), so it must be the PDB I am looking for. I modify my session to use that PDB as its container, as shown above.

My next task is to get a list of tables owned by CHM.

SQL> select table_name from dba_tables where owner='CHM'
 2 order by table_name;
TABLE_NAME
--------------------------------------------------------------------------------
CHMOS_ACTIVE_CONFIG_INT_TBL
CHMOS_ASM_CONFIG_INT_TBL
CHMOS_CPU_INT_TBL
CHMOS_DEVICE_INT_TBL
CHMOS_FILESYSTEM_INT_TBL
CHMOS_NIC_INT_TBL
CHMOS_PROCESS_INT_TBL
CHMOS_STATIC_CONFIG_INT_TBL
CHMOS_SYSTEM_PERIODIC_INT_TBL
CHMOS_SYSTEM_SAMPLE_INT_TBL

Just 10 tables in the schema. The first table in the list shows some configuration information about the CHM monitored hosts.

SQL> select hostname,NUMPHYCPUS,NUMCPUS,NUMDISKS
 2 from CHM.CHMOS_ACTIVE_CONFIG_INT_TBL;
HOSTNAME   NUMPHYCPUS NUMCPUS    NUMDISKS
---------- ---------- ---------- ----------
host01              1          2          3
host02              1          2          3

I can see that CHM is gathering information about two nodes in the cluster. I can see the number of physical CPUs in each node and the total number of cores (2). These nodes also have 3 disks each.

We can also learn information about the OS.

SQL> select hostname,osname,chiptype
 2 from CHM.CHMOS_STATIC_CONFIG_INT_TBL;
HOSTNAME   OSNAME     CHIPTYPE
---------- ---------- --------------------
host01     Linux      Intel(R)
host02     Linux      Intel(R)

There is a good amount of information in these tables and it just takes some trial and error to figure out what is in there. For example, I can use this query to get a count of processes running on host01 ordered over time.

select begintime,count(*)
from CHM.CHMOS_PROCESS_INT_TBL
where hostname='host01'
group by begintime
order by begintime;

I intentionally did not include the output as it would be too long for a blog post. Here are a few more sample queries that you can try on your GIMR database.

Disk I/O activity for a specific host over time.

select begintime,
DISK_BYTESREADPERSEC/1024/1024 as MB_READ_SEC,
DISK_BYTESWRITTENPERSEC/1024/1024 as MB_WRITE_SEC,
DISK_NUMIOSPERSEC as IO_PER_SEC
from CHM.CHMOS_SYSTEM_SAMPLE_INT_TBL
where hostname='host01'
order by begintime;

Swapping on a specific host over time.

select begintime,swpin,swpout
from CHM.CHMOS_SYSTEM_SAMPLE_INT_TBL
where hostname='host01'
order by begintime;

The next SQL statement will compute a histogram of disk I/O latency for a specific device. I’m sure someone else can come up with a more elegant version, as my SQL statements tend to be more brute-force.

select first.num_count as "<=10ms",
 second.num_count as "<=20ms",
 third.num_count as "<=50ms",
 fourth.num_count as "<=100ms",
 fifth.num_count as "<=500ms",
 final.num_count as ">500ms"
from
(select count(*) as num_count from CHM.CHMOS_DEVICE_INT_TBL 
 where devid='sda1' and latency between 0 and 10) first,
(select count(*) as num_count from CHM.CHMOS_DEVICE_INT_TBL 
 where devid='sda1' and latency between 11 and 20) second,
(select count(*) as num_count from CHM.CHMOS_DEVICE_INT_TBL 
 where devid='sda1' and latency between 21 and 50) third,
(select count(*) as num_count from CHM.CHMOS_DEVICE_INT_TBL 
 where devid='sda1' and latency between 51 and 100) fourth,
(select count(*) as num_count from CHM.CHMOS_DEVICE_INT_TBL 
 where devid='sda1' and latency between 101 and 500) fifth,
(select count(*) as num_count from CHM.CHMOS_DEVICE_INT_TBL 
 where devid='sda1' and latency > 500) final;
<=10ms     <=20ms     <=50ms     <=100ms    <=500ms    >500ms
---------- ---------- ---------- ---------- ---------- ----------
    150693          10         1          0          0          0
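For what it’s worth, the same histogram can be written a bit more compactly with conditional aggregation. This is just a sketch against the same CHM.CHMOS_DEVICE_INT_TBL table and the LATENCY/DEVID columns used above; change the device name to match your system.

select count(case when latency <= 10                    then 1 end) as "<=10ms",
       count(case when latency > 10  and latency <= 20  then 1 end) as "<=20ms",
       count(case when latency > 20  and latency <= 50  then 1 end) as "<=50ms",
       count(case when latency > 50  and latency <= 100 then 1 end) as "<=100ms",
       count(case when latency > 100 and latency <= 500 then 1 end) as "<=500ms",
       count(case when latency > 500                    then 1 end) as ">500ms"
from CHM.CHMOS_DEVICE_INT_TBL
where devid = 'sda1';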

 

There is a good amount of information in the CHM schema. I expect that this information is mostly educational and that most people will not be querying the CHM tables directly. But it is good information to know and may help others.

 

Dec 11

LongOpsWatcher in SQL Dev

I saw a video of someone using the new command-line utility, SQLcl, to create a bar graph showing the progress of long operations in Oracle, as seen in V$SESSION_LONGOPS. That video inspired me to do something similar in SQL Developer.

Below is a video of LongOpsWatcher in action. You can see the time remaining. It calculates the completion percentage and includes a bar chart. I selected a 5 second refresh rate.

 

There is no way to have this SQL Developer report launch automatically with a non-zero refresh rate. Maybe that will come in a future version. I filed an enhancement request and I’ve been told others have offered a similar suggestion.

Here is the SQL statement used in this SQL Developer report:

select inst_id,sid,message,time_remaining,to_char((sofar/totalwork)*100,'990.00') as pct_complete,
'SQLDEV:GAUGE:0:100:0:100:'||nvl(trunc((sofar/totalwork)*100,2),0) as pct_bar
from gv$session_longops
where time_remaining>0

 

Feel free to modify it to suit your needs.
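For example, here is a slightly more defensive variant to consider. The NULLIF guard avoids a divide-by-zero if a row ever shows TOTALWORK of zero (the TIME_REMAINING filter should already exclude those), and the ORDER BY is just a preference:

select inst_id,sid,message,time_remaining,
to_char((sofar/nullif(totalwork,0))*100,'990.00') as pct_complete,
'SQLDEV:GAUGE:0:100:0:100:'||nvl(trunc((sofar/nullif(totalwork,0))*100,2),0) as pct_bar
from gv$session_longops
where time_remaining>0
order by time_remaining desc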

 

Update (12/15/2015):  A few days after I posted this entry, I was referred to a similar blog entry by Uwe Kuchler. This post shows a nice Long Ops watcher in SQL Dev with lots of drill-down capability into what the long running session is doing. You can view the blog entry here:  http://oraculix.com/2015/12/14/reporting-long-running-operations-in-sql-developer/comment-page-1/#comment-1044

Dec 08

Fast Split Partitioning

I have a partitioned table for some application logging. A few years ago, I partitioned the table with one partition per month. As we near 2016, it’s time for me to add partitions for the new year. The partitioned table has, as its last two partitions, the partition for December 2015 and a partition using MAXVALUE. I never plan on having any data in the MAXVALUE partition. It is just there to make SPLIT PARTITION operations easier.

In the past, I would add partitions with commands similar to the following:

ALTER TABLE usage_tracking
SPLIT PARTITION usage_tracking_pmax AT (TO_DATE('02/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS'))
INTO (PARTITION usage_tracking_p201601, PARTITION usage_tracking_pmax);

ALTER TABLE usage_tracking
SPLIT PARTITION usage_tracking_pmax AT (TO_DATE('03/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS'))
INTO (PARTITION usage_tracking_p201602, PARTITION usage_tracking_pmax);

 

The SQL statements above will split the MAXVALUE partition into two partitions. There are 12 such commands, one for each month.

This year, when I tried to run the script for 2016 in a non-production environment, I was surprised to find that each of these commands took about 30 minutes to complete. In previous years, they completed in seconds. Remember that USAGE_TRACKING_PMAX is empty, so no data needs to be moved into an appropriate partition.

In analyzing the activity of my session performing the SPLIT, I could clearly see db file wait events traced back to this partitioned table. It was obvious that the SPLIT operation was reading the max partition, even though it was empty.

Previous years worked fine, but this database was recently upgraded to Oracle 12c. I found information on how to perform a fast split partition operation in MOS Note 1268714.1, which says it applies to Oracle 10.2.0.3 and higher, yet I did not have any issues in 11.2.0.4. It was probably just dumb luck, and I don’t have an 11g database to check this out on since all of mine have been upgraded. As such, rather than focusing on what changed, I’ll just address the problem and get on with my day.

Per the MOS note, to perform a fast split partition on this empty partition, I need to make sure that I have stats on the empty partition.

I confirmed that NUM_ROWS was 0 for this empty partition, so I didn’t have to gather stats on it. My first SPLIT PARTITION operation was very fast, just a few seconds; the partition was empty and Oracle knew it. What surprised me was that the two resulting partitions, USAGE_TRACKING_P201601 and USAGE_TRACKING_PMAX, were left with NULL values for statistics. This meant that performing the SPLIT PARTITION operation for the second new partition would take a long time. Here is an example of what I mean. First, we can see 0 rows in the max value partition.

SQL> select num_rows from dba_tab_partitions
  2  where partition_name='USAGE_TRACKING_PMAX';

  NUM_ROWS
----------
         0

Now I’ll split that partition.

 

SQL> ALTER TABLE usage_tracking
  2  SPLIT PARTITION usage_tracking_pmax AT ( TO_DATE('02/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS') )
  3  INTO (PARTITION usage_tracking_p201601, PARTITION usage_tracking_pmax);

Table altered.
Elapsed: 00:00:03.13

 

Notice that the last two partitions now have no stats.

 

SQL> select num_rows from dba_tab_partitions
  2  where partition_name='USAGE_TRACKING_PMAX';

  NUM_ROWS
----------


SQL> select num_rows from dba_tab_partitions
  2  where partition_name='USAGE_TRACKING_P201601';

  NUM_ROWS
----------


With no stats, the next SPLIT PARTITION operation, which creates the February 2016 partition, takes a long time.

 

SQL> ALTER TABLE nau_system.usage_tracking
  2  SPLIT PARTITION usage_tracking_pmax AT (TO_DATE('03/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS'))
  3  INTO (PARTITION usage_tracking_p201602, PARTITION usage_tracking_pmax);

Table altered.
Elapsed: 00:27:41.09

 

As the MOS note says, we need the stats on the partition to perform a fast split operation. The solution is to calculate stats on the partition, and then use one ALTER TABLE command to create all the partitions at once.

BEGIN
  DBMS_STATS.gather_table_stats(
    ownname     => USER,   -- the schema that owns USAGE_TRACKING
    tabname     => 'USAGE_TRACKING',
    partname    => 'USAGE_TRACKING_PMAX',
    granularity => 'PARTITION');
END;
/
ALTER TABLE usage_tracking
SPLIT PARTITION usage_tracking_pmax INTO
 (PARTITION usage_tracking_p201601 VALUES LESS THAN (TO_DATE('02/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201602 VALUES LESS THAN (TO_DATE('03/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201603 VALUES LESS THAN (TO_DATE('04/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201604 VALUES LESS THAN (TO_DATE('05/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201605 VALUES LESS THAN (TO_DATE('06/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201606 VALUES LESS THAN (TO_DATE('07/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201607 VALUES LESS THAN (TO_DATE('08/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201608 VALUES LESS THAN (TO_DATE('09/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS')),
  PARTITION usage_tracking_p201609 VALUES LESS THAN (TO_DATE('10/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS') ),
  PARTITION usage_tracking_p201610 VALUES LESS THAN (TO_DATE('11/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS') ),
  PARTITION usage_tracking_p201611 VALUES LESS THAN (TO_DATE('12/01/2016 00:00:00','MM/DD/YYYY HH24:MI:SS') ),
  PARTITION usage_tracking_p201612 VALUES LESS THAN (TO_DATE('01/01/2017 00:00:00','MM/DD/YYYY HH24:MI:SS') ),
  PARTITION usage_tracking_pmax);

 

If I had left the script as 12 individual SPLIT PARTITION operations, I would have needed to recalculate stats on the max partition between each one. Using one command was more efficient.

Dec 02

SQLT in 12c Can’t Gather Stats

After upgrading to 12c, I had some issues where processing in our database ran into the following error:

ORA-20000: Unable to gather statistics concurrently: insufficient privileges

The fix was pretty easy. I found information on Tim’s Oracle Base website on the workaround here: https://oracle-base.com/articles/12c/concurrent-statistics-collection-12cr1

Today I tried to run SQLT’s sqlxtract script to help tune a problem SQL statement. I was surprised when it failed early on. I checked the log and found that SQLT was running into this same issue. The workaround was the same; I just granted the following privileges:

CREATE JOB
MANAGE SCHEDULER
MANAGE ANY QUEUE

I granted these system privileges to both SQLTEXPLAIN and SQLTXADMIN.
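In other words, the workaround boiled down to something like the following sketch of the grants, run as a DBA user:

GRANT CREATE JOB, MANAGE SCHEDULER, MANAGE ANY QUEUE TO sqltexplain;
GRANT CREATE JOB, MANAGE SCHEDULER, MANAGE ANY QUEUE TO sqltxadmin;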

Nov 30

cd: -M: invalid option

I’m trying to clean up trace files on one of my RAC testbeds. Oracle Corp was gracious enough to name the database “-MGMTDB” to give me a nice challenge (dripping with sarcasm). Here I am in my DIAGNOSTIC_DEST, where we can see two databases.

[oracle@host01 trace]$ cd /u01/app/oracle/diag/rdbms
[oracle@host01 rdbms]$ ls -l
total 8
drwxr-x--- 3 oracle oinstall 4096 Jun 17 14:07 _mgmtdb
drwxr-x--- 3 oracle oinstall 4096 Aug 10 13:13 resp

The directory ‘resp’ is for my Research Primary database, a testbed. The first entry is for the Cluster Health Monitor (CHM) repository database on my Grid Infrastructure 12.1.0.2 system. I can change directory easily enough.

[oracle@host01 rdbms]$ cd _mgmtdb
[oracle@host01 _mgmtdb]$ ls -l
total 4
-rw-r----- 1 oracle oinstall 0 Jun 17 14:07 i_1.mif
drwxr-x--- 16 oracle oinstall 4096 Jun 17 14:06 -MGMTDB

But now I have trouble with the next ‘cd’ command.

[oracle@host01 _mgmtdb]$ cd -MGMTDB
-bash: cd: -M: invalid option
cd: usage: cd [-L|-P] [dir]

To get around that, I need to use “dot-slash” before the directory name.

[oracle@host01 _mgmtdb]$ cd ./-MGMTDB
[oracle@host01 -MGMTDB]$ cd trace

Now like any other Oracle trace directory, I have lots of .trc and .trm files, similar to these:

-rw-r----- 1 oracle oinstall 21301 Nov 30 13:43 -MGMTDB_vktm_5472.trc
-rw-r----- 1 oracle oinstall 1946 Nov 30 13:43 -MGMTDB_vktm_5472.trm

So how do I remove them? I get an error because ‘rm’ thinks that ‘-M’ is an option.

[oracle@host01 trace]$ rm *.trc *.trm
rm: invalid option -- M
Try `rm ./-MGMTDB_ckpt_5494.trc' to remove the file `-MGMTDB_ckpt_5494.trc'.
Try `rm --help' for more information.

The trick is to use “--” to tell the command that what follows is no longer a list of options.

[oracle@host01 trace]$ rm -- *.trc *.trm

Life would have been so much easier if Oracle had remembered that almost everyone runs Oracle on *nix, where these silly names that start with a dash get treated as options.

Nov 16

Dynamic Sampling Killing Me in 12c

Eleven days ago, I blogged about how Adaptive Dynamic Stats was consuming resources in my production RAC databases.

After putting out that fire, I moved on to examine some poorly performing queries reported by our QA people in Test and other non-production databases. I did as any good Oracle DBA would do: I obtained a stored procedure call that duplicated the problem. In my session, I started a SQL trace and ran the stored procedure. It took 50 seconds to complete, when it used to take 5 seconds or less before I upgraded from 11.2.0.4 to 12.1.0.2. This stored procedure contains a number of SQL statements, and a SQL trace seemed like a logical place to start; I needed to know which SQL statement in the procedure was causing the problems.

I ran the SQL trace file through TKPROF and was surprised by the results. The SQL statements in the stored procedure seemed to be executing pretty quickly. But I was greeted by many statements similar to the following:

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
 optimizer_features_enable(default) no_parallel */ SUM(C1)
FROM
 (SELECT /*+ qb_name("innerQuery") INDEX_FFS( "XXX"
 "INDEX_NAME") */ 1 AS C1 FROM
 "OWNER"."TABLE_NAME" SAMPLE BLOCK(71.048, 8) SEED(1)
 "XXX") innerQuery

This is Dynamic Sampling at work. In looking at all of the Dynamic Sampling statements being executed in my trace file, I was able to determine that these accounted for 45 seconds of the overall runtime! Yikes!

Dynamic Sampling is supposed to help me out. The time spent obtaining sample statistics is supposed to be much smaller than the time saved by executing the SQL statement with better stats. If it isn’t, your SQL statement performance can suffer, as in my case.

One thing I thought was interesting was that these Dynamic Sampling queries were executed once for each table and once for each of its indexes. One of the tables involved in my query has 7 indexes, so for that one table alone I had 8 Dynamic Sampling queries!

In my blog post 11 days ago, I had set the optimizer_dynamic_sampling parameter to 0, which stops these queries from being executed. I had not yet put that change into our Test environment, so I needed to do so. As soon as I did, query performance returned to normal. The default value of this parameter for my database is 2. Your default value can differ depending on the value of the optimizer_features_enable setting. According to this blog post, a value of 2 means that dynamic sampling will kick in when at least one of the tables has no statistics. But to be honest, dynamic sampling isn’t giving me any benefits and only causes me harm. So I’ll just leave it off in its entirety for now.
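For reference, checking and changing the setting is straightforward. Treat the SCOPE clause below as a sketch rather than a recommendation; the right scope depends on whether you use an spfile and want the change to persist.

SQL> show parameter optimizer_dynamic_sampling

SQL> ALTER SYSTEM SET optimizer_dynamic_sampling = 0 SCOPE=BOTH;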
Nov 16

Lighty Purchasing Adventure

Those that follow me on Twitter and on this blog know that I really like Orachrome’s Lighty for performance tuning. I prefer it over Enterprise Manager. I’ve gotten to the point where I get depressed if Lighty isn’t available to me and I am forced to use EM in its place. Don’t get me wrong: EM is a fine product, I still recommend it, and it does so much more than performance tuning. But for watching and troubleshooting database performance problems, Lighty is one of the top tools on my list!

It was quite an adventure getting the product licensed at my company, so I figured I’d describe it. While the adventure was ongoing, my trial expired and I had to fall back to EM, which was a bit painful at times. Thankfully this is all over, but here’s a recap of how the trip unfolded.

The first twist to purchasing Lighty was that the quote came to us in Euros. SETRA Conseil, who bought Orachrome, is a French company, so it’s natural they would quote in Euros. Their web site lists prices in both Euros and US Dollars. I needed the quote in the latter, and it took an extra day to obtain it. We’ll dock Lighty one point since they couldn’t read my mind and know exactly what I wanted. Lighty -1.

Next, we received an invoice and Lighty wanted us to perform a wire transfer, after which they would send us the license keys. Apparently, my company cannot perform wire transfers here in the US where I’m based. So they sent it to our parent company, which sent the invoice to an accounting department on the other side of the planet. I’m not sure what happened, but our accounting department was unable to successfully perform a wire transfer. My Company -1.

The fine folks at Lighty did not have a direct method to accept a credit card payment.   Lighty -1.   (now down two points)

However, they were more than willing to work with us, so they set up a way for us to pay via credit card through PayPal. Lighty +1 (now down only 1 point; we’re both tied at -1).

My company had problems paying via credit card on PayPal. My Company -1 (now down two points).

The fine folks at Lighty offered us an extra license for our troubles. Lighty +1 (they are now holding steady at zero…neither up nor down…).

My company was finally able to sort out the issues with PayPal. Within an hour of my purchasing agent letting me know the transaction was complete, Lighty had shipped me the license. Lighty +1 (they are now on the positive side).

So it was a bit of an adventure. I’m not in accounting, so I’m not sure why there was so much hassle. Life would have been easier if Lighty could have accepted a credit card payment directly, but that turned out not to be an option.

So in the final score, I have Lighty +1 and My Company -2. :)

But if you factor in the great product I am now fully licensed for, I think it’s Lighty +999, My Company -2.

 
