Oct 28

ORA-24247: network access denied by access control list (ACL)

I have a regularly scheduled job on an Oracle RAC database that sends me an email alert when a certain condition occurs. The job runs every 30 minutes. The job has been failing on one of the nodes, but not the others, spitting out these errors:


ORA-12012: error on auto execute of job "OWNER"."JOB_NAME"
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_TCP", line 17
ORA-06512: at "SYS.UTL_TCP", line 267
ORA-06512: at "SYS.UTL_SMTP", line 161
ORA-06512: at "SYS.UTL_SMTP", line 197
ORA-06512: at "SYS.UTL_MAIL", line 386
ORA-06512: at "SYS.UTL_MAIL", line 599
ORA-06512: at line 41

What is odd about this one is that the following works on all instances:

SQL> exec utl_mail.send(sender=>'me@acme.com', -
> recipients=>'me@acme.com', -
> subject=>'test from orcl1', -
> message=>'test from orcl1', -
> mime_type=>'text; charset=us-ascii');
PL/SQL procedure successfully completed.

So when I send the email manually on that instance, it works fine. But the job owner is getting the error. So the fix is to create an ACL for the job owner and assign privileges to it.


SQL> exec dbms_network_acl_admin.create_acl ( -
> acl=>'utl_mail_acl.xml', -
> description=>'ACL for using UTL_MAIL', -
> principal=>'OWNER', -
> is_grant=>TRUE, -
> privilege=>'connect', -
> start_date=>SYSTIMESTAMP, -
> end_date=>NULL);
PL/SQL procedure successfully completed.
SQL> exec dbms_network_acl_admin.assign_acl( -
> acl=>'utl_mail_acl.xml', -
> host=>'smtprelay.acme.com', -
> lower_port=>25, upper_port=>NULL);
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.


Now the job completes successfully.
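If the error ever comes back, the ACL setup can be verified from the data dictionary. A quick sanity check, assuming the standard 11g ACL views:

```sql
-- Which ACLs cover which hosts and ports
SELECT acl, host, lower_port, upper_port
  FROM dba_network_acls;

-- Which principals hold which privileges in each ACL
SELECT acl, principal, privilege, is_grant
  FROM dba_network_acl_privileges
 WHERE principal = 'OWNER';
```

The job owner should show up with the 'connect' privilege against the ACL assigned to the SMTP relay host.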

Oct 23

Oct 2014 CPU Crashes ArcGIS Desktop

Right after I applied the Oct 2014 SPU to our development database, members of our IT staff started complaining that direct-connect connections with ArcCatalog and ArcMap would crash. The app wouldn’t even connect to the database. I tried various things, even upgrading the Oracle Client to match the database version, but nothing worked. I even went so far as to enable both 10046 tracing and client-side SQL*Net tracing. In the 10046 trace, I could see where SQL statements were issued to the database. The Listener log confirmed the client established a connection, and the 10046 trace shows the standard SQL statements that are issued to the Oracle database any time ArcCatalog makes a direct-connect connection. Except at the end of the 10046 trace file was this last SQL statement:


PARSING IN CURSOR #140250835575144 len=279 dep=0 uid=9459 oct=3 lid=9459 tim=1413920974829489 hv=3533534632 ad='7963a438' sqlid='5hq4svb99uxd8'
SELECT r.owner, r.table_name, x.column_name, x.column_id, x.index_id, x.registration_id, x.minimum_id, x.config_keyword,x.xflags FROM SDE.table_registry r, SDE.sde_xml_columns x WHERE r.registration_id = x.registration_id AND (( r.table_name = 'GDB_ITEMS' AND r.owner = 'SDE'))
PARSE #140250835575144:c=4999,e=5796,p=0,cr=147,cu=0,mis=1,r=0,dep=0,og=1,plh=1755489251,tim=1413920974829487
WAIT #140250835575144: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=297281 tim=1413920974829548

So the SQL was issued and parsed. And then, before execution, the SQL*Net message to client wait event occurred. And that’s the end of it. So I turned to SQL*Net tracing. That trace revealed the following:


DDE: Flood control is not active
Incident 1 created, dump file: c:\oracle\product\11.2.0\client_2\log\oradiag_bpeasland\diag\clients\user_bpeasland\host_525531546_80\incident\incdir_1\ora_26000_24088_i1.trc
oci-24550 [3221225477] [Unhandled exception: Code=c0000005 Flags=0
] [] [] [] [] [] [] [] [] [] []


Well, the OCI-24550 error wasn’t very informative. I was trying to do some more digging when a colleague found an ESRI document that describes this exact behavior; ESRI now tracks it as Bug # 82555. Here is that document:



ESRI says to avoid the patch. But I’d rather not wait for ESRI and Oracle to quit pointing fingers at each other, and it has been my experience that ESRI bugs are not fixed expeditiously. The workaround of granting the SELECT_CATALOG_ROLE role has worked quite well for me. I hope this helps others who hit the same problem.
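For reference, the workaround is just a role grant. Something like the following, where the grantee is whichever schema the ArcGIS direct connections use (the name here is a made-up example):

```sql
-- Grant to the user making the direct connection; 'gisuser' is a
-- hypothetical placeholder for the actual connecting schema
GRANT SELECT_CATALOG_ROLE TO gisuser;
```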


Oct 01

Zero Data Loss Recovery Appliance

Oracle’s standby databases have been around for a long time now. The primary ships redo to the standby to keep them in sync. It seems to be a natural fit that Oracle has now extended this concept to a backup and recovery appliance. The idea is that you take one backup of your database at the start. That’s it…one backup. No more full or incremental backups. The Oracle database sends redo to the appliance which then applies the redo to the backup on the device. The backup on the appliance is always kept up-to-date.

When I attended Open World last year, I heard about this device. But even then, Oracle was quick to say that the appliance was not yet released for general availability. This year, the device is available and was discussed at the conference this week.

More information can be found here: http://www.oracle.com/us/corporate/features/zero-data-loss-recovery-appliance/index.html


Sep 19

Error 1033 received logging on to the standby

Upgraded production a few nights ago. The primary is a 3-node RAC and the standby is a 2-node RAC. I noticed that one of the threads was not transmitting redo to the standby, and saw this repeatedly in the alert log:


Error 1033 received logging on to the standby


Turns out this was a problem of my own making. In $ORACLE_HOME/dbs, I had the following:


-rw-rw---- 1 oracle oinstall 1544 Sep 18 01:44 hc_ncpp5.dat
-rw-r--r-- 1 oracle oinstall 55 Sep 18 01:38 initncpp5.ora
lrwxrwxrwx 1 oracle oinstall 40 Sep 18 01:38 orapwnp5 -> /u01/app/oracle/admin/ncpp/dbs/orapwncpp
lrwxrwxrwx 1 oracle oinstall 45 Sep 18 01:38 spfilencpp5.ora -> /u01/app/oracle/admin/ncpp/dbs/spfilencpp.ora

Since the primary is RAC, I put the password file and spfile on shared storage and then created softlinks in $ORACLE_HOME/dbs. The password file softlink name contained a typo (orapwnp5 instead of orapwncpp5). That’s what I get for staying up until 3am while sick trying to upgrade a production database. The fix was as simple as:

mv orapwnp5 orapwncpp5

That fixed everything for me!
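To confirm that all destinations are shipping redo again, the transport status can be checked on the primary. A quick look, assuming the standby is log_archive_dest_2 (dest_id 2 is an assumption; adjust to your configuration):

```sql
-- gv$ view shows every RAC instance on the primary;
-- status should be VALID and error should be empty
SELECT inst_id, dest_id, status, error
  FROM gv$archive_dest
 WHERE dest_id = 2;
```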



Sep 16

Good Time for DBAs?

Is this a good time to be a DBA? My biased opinion is that any time is a good time to be a DBA. The US Bureau of Labor Statistics released an outlook indicating that DBA positions are expected to increase 15% between 2012 and 2022.

Now comes this article that says about 50% of DBAs are expected to leave the market in the next 10 years.

Demand is rising!

Sep 05

Oracle 12cR2 Coming in 2016

Oracle will be releasing Oracle 12cR2 in the first half of 2016. See Metalink Note 742060.1 for the current release schedule.

The next patchset is not on the list, but there is a chance it will be out before 12cR2. We’ll have to wait and see, I guess.

Now for the burning question…do I upgrade now, or hold off until 12cR2?

Sep 02

Importance of Testing

I am working on upgrading all of our production databases to a new Oracle version. My company’s most important database serves a custom, in-house developed application, so we have the luxury (and sometimes the curse) of having complete control over the application source code. If I discover a version-specific issue with a third-party application, I file a trouble ticket with the vendor to get the issue fixed. But for our own application, I often have to diagnose the problem and determine how to get the issue fixed myself.

Since I have been at this company, I have taken this database through two previous upgrades, and both went just fine. No problems. So I have been very surprised that this latest patchset upgrade has been problematic for our application.


I never expected to find issues with this simple patchset upgrade. I’m not skipping versions, so the upgrade shouldn’t introduce too many problems. My first issue I blogged about here:


The next problem is a query similar to the following in our application code:

SELECT DISTINCT columnA
FROM our_table
ORDER BY columnB;

The above query now returns an ORA-01791 error in the new version, but it ran just fine in previous versions. When DISTINCT is used and the ORDER BY clause contains a column not in the SELECT list, the ORA-1791 error is raised. Oracle says that the fact that this used to work is a bug. The bug is now fixed, so the above query raises an exception.

When I was first made aware of this issue, my initial thought was: why are we ordering by a column not in the SELECT clause? The end user won’t know the data is ordered because they can’t see the ordering column. Then I found out that this routine is only used for internal processing. Well, machines work just fine without ordering the data. So the simple fix on our end was to remove the ORDER BY clause. As soon as the code change gets into production, I can proceed with my database upgrade.
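Using the same generic names as above (columnA stands in for whatever the real select list was), the repaired statement simply drops the offending clause:

```sql
-- ORDER BY removed: the caller never relied on the ordering,
-- and the ordering column was not in the SELECT list anyway
SELECT DISTINCT columnA
FROM our_table;
```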

Testing is important. It is so important that I’ll say it again: test, test, and test again.


At this company, we follow a strict process for changes. A change is made in development first. Then, after a period of time, the change is made in the Test environment. Then, after another period of time, if there are no issues, the change can proceed to production. We also have a custom test application that exercises key components of our application, so even if our testers are not hitting that portion of the app, our automated test suite will.

Without adequate testing, the two issues we encountered would most likely not have been noticed until the change was in production. Then the DBA would have been blamed, even though both of these issues were application code problems. Test, test, and test again.

Aug 22

GIMR now mandatory for GI 12.1.0.2

I found this nice blog entry today:




Aug 19

Sticky Upgrade Problem

When performing database upgrades, adequate testing is important to understand the impacts, both positive and negative, that the upgrade has on the application. I have been preparing to upgrade our databases to a new version. One weekend, another DBA and I spent some time upgrading about half of our development databases to the new target version. First thing Monday morning, I got a call from a developer who had a query that was now running slowly. Why did we upgrade just half of the dev databases? For this specific reason. I immediately suspected the query performance was version related. I was able to formulate a reproducible test case and ran it against all of the dev databases. The old-version databases executed the query in 30 seconds, consistently across the board. The upgraded databases ran the same query in 3.5 minutes, repeatable within that version. Because we only upgraded half of the databases, I was able to verify whether the issue was version-related…and it was…at least on the surface.

After any database upgrade that leaves SQL statements performing worse, a common “fix” is to update the table and index stats so that the new optimizer version has good information to work with. Updating stats did not fix the problem. I could see that in the old version, the CBO was choosing to use an index, and since everything it needed was in the index, it did not access the table. Furthermore, the join was performed with a Nested Loops algorithm. In the new version, the same index was used, but the table was also accessed and a Hash Join algorithm was used. Why was the CBO making two different decisions?

Any time we need to peek into the CBO’s decision-making process, we need a 10053 trace. I captured trace files from each version. The first part of the trace file shows the optimizer-related initialization parameters. All the parameters were the same except for OPTIMIZER_FEATURES_ENABLE and DB_FILE_MULTIBLOCK_READ_COUNT. Neither of these is explicitly set, so they are at their default values. Obviously, O_F_E has a different default value for each database version. I was surprised that DB_F_M_R_C changed its default value between versions. I tried explicitly setting the parameter values in the new database to match the old one, but it did not improve the runtime. These parameters, while different, had no bearing on the query performance.
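For anyone who wants to reproduce this kind of analysis, a 10053 trace can be captured for a single hard parse like so (standard event syntax; the tracefile_identifier just makes the file easy to find in the diag directory):

```sql
ALTER SESSION SET tracefile_identifier = 'cbo_10053';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Hard-parse the problem statement here. Tweak the SQL text slightly
-- (e.g. add a comment) if needed to force a fresh parse, since only
-- a hard parse writes to the 10053 trace.

ALTER SESSION SET EVENTS '10053 trace name context off';
```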

The next part of the 10053 trace shows statistics on the tables involved in the query. These were identical in both versions so stats weren’t the issue.

The next part of the 10053 trace shows the table access paths and which one is deemed to have the lowest cost. Here is where the mystery got interesting. In the old version, the CBO determined that the cost to access the table using the index was 1258 and the cost of using just the index alone was 351. In the new version, the CBO determined that the cost to access the table using the index was 127 and the cost of using just the index alone was 351. In fact, all of the other table access paths examined by the CBO were identical in both versions. It was in that very first cost calculation that the CBO arrived at a low cost in one version and a higher cost in the other, leading to one access path in one version and a different access path in the other. In the part of the 10053 trace where it considers which join method to use, the answers differed because the chosen access paths differed.

I still have no answers as to why the new version made that one calculation differently than the old one did, especially when all the other access path calculations were identical in both versions. That one puzzles me, and I might need the help of higher powers to get to the answer.

That being said, I was able to determine the root cause of the problem, and it wasn’t really version related after all. The problem was the following in our WHERE clause:

WHERE column = :b1

It seems innocent enough. The problem is that the column is defined as VARCHAR2(4) and the bind variable is declared as NUMBER, so Oracle performs an implicit conversion. Because the CBO doesn’t have an accurate picture of the bind variable’s contents, it arrives at a suboptimal execution plan. Changing the datatype of the bind variable fixed the issue. The query now ran in 10 seconds! Wait…it went from 3.5 minutes down to 10 seconds, which is great, but in the old version it was running in 30 seconds. Why? Because the bind variable had the wrong datatype there as well. With the proper datatype, the old version ran the query in…you guessed it…10 seconds. This is why I say the problem turned out not to be a version-related issue. We had the same problem all along: a query that could be improved with proper datatypes. The new version just magnified an existing problem we didn’t know we had.
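A sketch of both the trap and the fix, using made-up table and column names (in the trap case, Oracle effectively rewrites the predicate as TO_NUMBER(col) = :b1, which defeats the column’s statistics and any index on it):

```sql
-- Assume: CREATE TABLE t (col VARCHAR2(4), ...);

-- Trap: NUMBER bind against a VARCHAR2 column forces an
-- implicit TO_NUMBER(col) conversion behind the scenes
VARIABLE b1 NUMBER
EXEC :b1 := 100
SELECT COUNT(*) FROM t WHERE col = :b1;

-- Fix: declare the bind with the matching character datatype
VARIABLE b2 VARCHAR2(4)
EXEC :b2 := '100'
SELECT COUNT(*) FROM t WHERE col = :b2;
```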

All of this highlights the importance of proper testing even for simple patchset upgrades.



Aug 14

Result Cache

I was playing around with the Result Cache the other day. I know…this isn’t a new feature and has been available for a while. Unfortunately, it can take a while to get to these things, I guess.

In my simple test, I had a query that exhibited this behaviour:

from invoices i
join invoice_detail det
  on i.dept_id = det.dept_id

call    count       cpu   elapsed       disk      query   current       rows
------- ------  -------  -------- ---------- ---------- ---------  ---------
Parse        1     0.00      0.00          0          0          0         0
Execute      1     0.00      0.00          0          0          0         0
Fetch        2     2.77      6.66      75521      75583          0         1
------- ------  -------  -------- ---------- ---------- ---------- ---------
total        4     2.77      6.67      75521      75583          0         1

75,000 disk reads to return 1 row. Ouch! Now run this through the Result Cache and get some really nice numbers. :)


/*+ result_cache */
from invoices i
join invoice_detail det
  on i.dept_id = det.dept_id

call     count     cpu   elapsed       disk      query    current       rows
------- ------  ------ --------- ---------- ---------- ----------  ---------
Parse        1    0.00      0.00          0          0          0          0
Execute      1    0.00      0.00          0          0          0          0
Fetch        2    0.00      0.00          0          0          0          1
------- ------  ------ --------- ---------- ---------- ----------  ---------
total        4    0.00      0.00          0          0          0          1


Still 1 row returned but zero disk reads, zero current blocks, and basically zero elapsed time. Nice!


The Result Cache works best for queries returning a small number of rows against tables that do not change often. DML operations on the underlying tables will invalidate the Result Cache entry, and the work will need to be performed anew before the results are stored in the Result Cache again.
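The cached results and their validity can be observed in the standard V$RESULT_CACHE_OBJECTS view; after DML on a dependent table, the entry’s status flips from Published to Invalid:

```sql
-- One row per cached result; 'Dependency' rows track the
-- objects whose DML invalidates each result
SELECT id, type, status, name
  FROM v$result_cache_objects
 WHERE type = 'Result';
```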

Sometime soon, when I get a chance, I’m going to figure out the impact of bind variables on queries that use the Result Cache.
