I received an alert from Enterprise Manager that one of my production databases was getting low on disk space. I tracked it down to $GRID_HOME/.patch_storage which was consuming 30GB of my 90GB drive. Yikes!
The first thing I did was to run the opatch cleanup routine as I documented here back in 2013: http://www.peasland.net/2013/03/21/patch_storage/
Unfortunately, it didn’t clean up anything.
This time, I had to resort to a manual cleanup. Here are the steps I took.
The directories in .patch_storage start with the patch number and a timestamp. For example: 19582630_Nov_14_2014_21_43_23
I need to ask opatch if that patch is still in the inventory.
$ORACLE_HOME/OPatch/opatch lsinventory|grep 19582630
20075154, 20641027, 22271856, 20548410, 19016964, 19582630
lsinventory shows the patch is in the inventory. I move on to the next patch.
When my lsinventory command returns nothing, the patch is no longer in the inventory. MOS Note 550522.1 says you can remove that directory because it's no longer needed. The ever-cautious DBA in me wants to be sure I can recover from a simple “rm -rf dir_name” command, so I tar and gzip the directory first, then remove it.
tar cvf 25869825_Jul_3_2017_23_11_58.tar 25869825_Jul_3_2017_23_11_58
gzip 25869825_Jul_3_2017_23_11_58.tar
rm -rf 25869825_Jul_3_2017_23_11_58
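Before trusting the rm, I like to confirm the archive is actually readable. This extra check is my own habit, not something from the MOS note; the scratch-directory setup below is just to make the example self-contained, and the real check runs against the archive sitting in .patch_storage:

```shell
# Demo fixture only: build a throwaway copy of a patch directory so the
# check below has something to run against.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p 25869825_Jul_3_2017_23_11_58
echo demo > 25869825_Jul_3_2017_23_11_58/readme
tar cf 25869825_Jul_3_2017_23_11_58.tar 25869825_Jul_3_2017_23_11_58

# The actual check: listing the archive's contents forces tar to read it
# end to end, so a truncated or corrupt tarball exits nonzero here.
tar tf 25869825_Jul_3_2017_23_11_58.tar > /dev/null && echo "archive OK"
```

If that listing fails, don't run the rm.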
It's painstaking work doing this for each and every patch. I'm sure someone with stronger sed, awk, and shell-scripting skills than mine could automate the process.
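A rough sketch of how that automation might look; this is my guess at it, not something I've run in anger. It assumes you launch it from inside $GRID_HOME/.patch_storage, and it caches one lsinventory call instead of invoking opatch once per directory. The function name archive_orphans is my own invention:

```shell
# Hypothetical helper -- run from inside $GRID_HOME/.patch_storage.
# Keeps directories whose patch number is still in the inventory;
# tars, gzips, and removes the rest.
archive_orphans() {
  # One opatch call instead of one per directory.
  inventory=$("$ORACLE_HOME/OPatch/opatch" lsinventory) || return 1
  for dir in [0-9]*_*; do
    [ -d "$dir" ] || continue
    patch_id=${dir%%_*}   # e.g. 19582630 from 19582630_Nov_14_2014_21_43_23
    if printf '%s\n' "$inventory" | grep -qw "$patch_id"; then
      echo "keep    $dir (patch $patch_id still installed)"
    else
      echo "archive $dir"
      tar cf "$dir.tar" "$dir" && gzip "$dir.tar" && rm -rf "$dir"
    fi
  done
}
# archive_orphans            # uncomment to run for real
```

Run it once with the tar/gzip/rm line commented out first if you want a dry run.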
By following these steps, my .patch_storage directory dropped from 30GB down to 11GB.
Next quarter when I apply my CPU again, should opatch cry foul and demand these directories be put back, I can quickly gunzip and extract the tarballs and opatch should be happy.
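The restore is just the reverse of the archive step. Sketched here in a scratch directory so the commands are self-contained; in practice it runs inside .patch_storage:

```shell
# Demo fixture: fake an archived patch directory in a scratch area.
tmp=$(mktemp -d)
cd "$tmp"
mkdir 25869825_Jul_3_2017_23_11_58
tar cf 25869825_Jul_3_2017_23_11_58.tar 25869825_Jul_3_2017_23_11_58
gzip 25869825_Jul_3_2017_23_11_58.tar
rm -rf 25869825_Jul_3_2017_23_11_58

# The restore itself: gunzip and untar put the directory back where
# opatch expects to find it.
gunzip 25869825_Jul_3_2017_23_11_58.tar.gz
tar xf 25869825_Jul_3_2017_23_11_58.tar
ls -d 25869825_Jul_3_2017_23_11_58
```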
I did this operation on $GRID_HOME, but it will work on $RDBMS_HOME as well. Also, since this is Oracle RAC, I may want to do the same on every node in the cluster.
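For the RAC case, a loop over the nodes is the obvious shape. The node names below (racnode1, racnode2) are placeholders for your own cluster, and the ssh line is left commented so nothing runs by accident:

```shell
# Placeholder node names -- substitute your own cluster nodes.
for node in racnode1 racnode2; do
  echo "would archive orphaned .patch_storage dirs on $node"
  # ssh "$node" 'cd "$GRID_HOME"/.patch_storage && ...'   # same steps per node
done
```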