When performing database upgrades, adequate testing is important to understand the impacts, both positive and negative, that the upgrade has on the application. I have been preparing to upgrade our databases from the old version to a new target version. One weekend, another DBA and I spent some time upgrading about half of our development databases to the new target version. First thing Monday morning, I got a call from a developer whose query was now running slowly. Why did we upgrade just half of the dev databases? For exactly this reason. I immediately suspected the query performance was version related, so I formulated a reproducible test case and ran it against all of the dev databases. The databases on the old version executed the query in 30 seconds, consistently across the board. The upgraded databases ran the same query in 3.5 minutes, just as consistently. Because we had only upgraded half of the databases, I was able to verify whether the issue was version related…and it was…at least on the surface.
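As a sketch, a reproducible test case can be as simple as a SQL*Plus script that declares the same bind, supplies the same value, and times the execution; the table, column, and bind value below are hypothetical stand-ins:

```sql
-- Hypothetical harness: run the identical statement with identical binds
-- in each database and compare elapsed times.
SET TIMING ON

VARIABLE b1 NUMBER
EXEC :b1 := 1234;

SELECT /* upgrade_test_case */ t.col1, t.col2
  FROM app_owner.some_table t
 WHERE t.some_column = :b1;
```

Running the same script everywhere makes a 30-second versus 3.5-minute difference impossible to miss.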
After a database upgrade, when SQL statements perform worse, a common “fix” is to update the table and index stats so that the new optimizer version has good information to work with. Updating stats did not fix the problem. I could see that in the old version, the CBO chose to use an index, and since everything it needed was in the index, it did not access the table at all; the join was performed with a Nested Loops algorithm. In the new version, the same index was used, but the table was also accessed, and a Hash Join algorithm was used instead. Why was the CBO making two different decisions?
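Refreshing statistics is typically done with DBMS_STATS; a minimal example (the schema and table names here are hypothetical) looks like:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',   -- hypothetical schema
    tabname => 'SOME_TABLE',  -- hypothetical table
    cascade => TRUE);         -- gather stats on the table's indexes too
END;
/
```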
Any time we need to peek into the CBO decision-making process, we need the 10053 trace. I captured trace files from each version. The first part of the trace file shows the optimizer-related initialization parameters. All the parameters were the same except for OPTIMIZER_FEATURES_ENABLE and DB_FILE_MULTIBLOCK_READ_COUNT. Neither of these is explicitly set, so they are at their default values. Obviously, O_F_E has a different default value for each database version. I was surprised that DB_F_M_R_C's default value changed between the two versions as well. I tried explicitly setting the parameter values in the new database to match the old one, but it did not improve the runtime. These parameters, while different, had no bearing on the query performance.
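For reference, a 10053 trace is captured in the session that hard-parses the statement, along these lines (the statement itself is a hypothetical placeholder; a unique comment forces a fresh parse):

```sql
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Hard-parse the problem statement while the event is active.
SELECT /* trace_me */ t.col1
  FROM app_owner.some_table t
 WHERE t.some_column = :b1;

ALTER SESSION SET EVENTS '10053 trace name context off';

-- Locate the trace file written for this session.
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
```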
The next part of the 10053 trace shows statistics on the tables involved in the query. These were identical in both versions so stats weren’t the issue.
The next part of the 10053 trace shows the table access paths and which one is deemed to have the lowest cost. Here is where the mystery got interesting. In the old version, the CBO determined that the cost to access the table via the index was 1258, while the cost of using the index alone was 351. In the new version, the CBO determined that the cost to access the table via the index was only 127; the index-alone cost was still 351. In fact, every other table access path examined by the CBO was costed identically in both versions. It was in that one calculation that the CBO arrived at a high cost in the old version and a low cost in the new version, thus leading to one access path for one version and another access path for the other. In the part of the 10053 trace where the CBO considers which join method to use, the answers differed because the chosen access paths differed.
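The chosen access path and join method can also be confirmed outside the trace file with DBMS_XPLAN, which is often a quicker first check than a full 10053 trace:

```sql
-- Display the actual execution plan of the last statement
-- run in this session (NULL sql_id / child number = last cursor).
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
```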
I still have no answer as to why the new version made that one calculation differently than the old version did, especially when all the other access path calculations were identical in both versions. That one puzzles me, and I might need the help of higher powers to get to the answer.
That being said, I was able to determine the root cause of the problem, and it wasn't really version related after all. The problem was the following predicate in our WHERE clause:
WHERE column = :b1
It seems innocent enough. The problem is that the column is defined as VARCHAR2(4) while the bind variable is declared as a NUMBER, so Oracle performs an implicit datatype conversion. Because the CBO doesn't have an accurate picture of the bind variable's contents, it arrives at a suboptimal execution plan. Changing the datatype of the bind variable fixed the issue. The query now ran in 10 seconds! Wait…it went from 3.5 minutes down to 10 seconds, which is great, but in the old version it was running in 30 seconds. Why? Because the bind variable had the wrong datatype there as well. With the proper datatype, the old version ran the query in…you guessed it…10 seconds. This is why I say the problem turned out not to be a version-related issue. We had the same problem in the old version: a query that could be improved with the proper datatypes. The new version just magnified an existing problem we didn't know we had.
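In SQL*Plus terms, the difference looks something like this (the table, column, and value are hypothetical):

```sql
-- Column is VARCHAR2(4) but the bind is a NUMBER. By Oracle's
-- datatype precedence rules the predicate is effectively evaluated
-- as TO_NUMBER(some_column) = :b1, which can hide the column's
-- statistics (and any normal index on it) from the CBO.
VARIABLE b1 NUMBER
EXEC :b1 := 1234;
SELECT * FROM some_table WHERE some_column = :b1;

-- Declaring the bind with the matching datatype avoids the
-- implicit conversion entirely.
VARIABLE b1 VARCHAR2(4)
EXEC :b1 := '1234';
SELECT * FROM some_table WHERE some_column = :b1;
```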
All of this highlights the importance of proper testing even for simple patchset upgrades.