We have been setting up DB links to non-Oracle DBs for a migration, so we have been getting to grips with the nuances of DG4MSQL. Once we’d created and amended all the required files, we tested the link, which just appeared to hang for a few minutes. We then started seeing errors on the application front end complaining about database connectivity. In the words of Scooby-Doo: ‘ruh-roh’!
Our SQL*Plus connection was also hung, so we had to abort it. Luckily, we spotted the problem almost immediately:
/tmp was at 100%
There was one very large file in there taking all the space. Running ‘fuser’ against it told us it was the DG4MSQL process that had created it. We killed the PID and moved the file to an overflow directory so we could check the contents. Surprisingly to us, it contained the results we expected from our test query. It turns out that the standard Oracle DG4MSQL behaviour is to hold the inbound results in /tmp until the retrieval is complete, before forwarding them to the database and then to the user. Sadly, this is expected behaviour, so our options were limited, as we will ALWAYS be pulling back a very large dataset.
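The diagnosis steps above can be sketched as a few shell commands. This is a minimal reproduction using a throwaway file in place of the gateway’s spool file; all file and directory names here are illustrative, not the ones from our incident:

```shell
# Stand-in for the huge spool file the gateway left in /tmp
spool=$(mktemp /tmp/dg4msql_spool.XXXXXX)

# Stand-in for the DG4MSQL process holding the file open
tail -f "$spool" & holder=$!
sleep 1                                      # give tail a moment to open the file

# Identify which PID(s) have the file open (this is what pointed us at DG4MSQL)
if command -v fuser >/dev/null 2>&1; then
  fuser "$spool"
fi

kill "$holder"                               # kill the offending PID

overflow=$(mktemp -d)                        # stand-in overflow directory
mv "$spool" "$overflow/"                     # move the file aside for inspection
ls -l "$overflow"
```

In the real incident the filename under /tmp identified the gateway SID, which made the `fuser` output easy to interpret.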
1) Increase /tmp space – not a great option, as we don’t want to massively over-spec it but have no way of calculating the space requirements. We really don’t want to play trial and error here. We have no connectivity from our test/dev sites, so we can’t use those either.
2) Use a little-known ENV setting in the listener.ora file that sends the incoming data to a different directory. As we have a very large data-transfer directory that is rarely used, this works well in our situation.
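For option 2, the environment is passed to the gateway via the ENVS clause in its SID_DESC entry in listener.ora. Here is a sketch, assuming a gateway SID of `dg4msql`, a gateway home of `/u01/app/oracle/product/gateway`, and `/u01/transfer` as the spare directory – all illustrative; check the Oracle documentation for the exact variable name your gateway version honours (TMPDIR is the one commonly cited):

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = dg4msql)
      (ORACLE_HOME = /u01/app/oracle/product/gateway)
      (PROGRAM = dg4msql)
      (ENVS = "TMPDIR=/u01/transfer")
    )
  )
```

Restart the listener afterwards so the gateway process picks up the new environment on its next connection.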
If you don’t have any ‘spare’ directories lying around and need to run large queries against a remote database, you’ll need to raise an SR with Oracle for advice on how to proceed. We were lucky in this case that we managed to resolve the problem without needing to go down that route.
I struggled to find any useful info on this until stumbling across the Oracle doc page by chance.