Improved Logging for Very Slow SQL Queries: add Backtraces

MySQL has a feature to log slow queries, and it's useful, but the problem is that many of the logged queries look alike. What you really want is a backtrace, so you can find the code that generated the query. This is a modification that works with (pretty much) any database abstraction layer: log each query, and generate a backtrace if the execution time is long. If a script doesn't produce any slow queries, the logged queries are deleted.

The bad news is that if a query times out or the page load is aborted, the code that deletes the log file won't execute, and you'll be stuck with an irrelevant log file. To work around that, I run a cron job that deletes small log files. It's not a perfect solution, but it helps keep /tmp clean.
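A minimal sketch of such a cron job, assuming the log filename pattern used in the code below; the 1 KB size cutoff and one-hour age threshold are arbitrary examples, not values from the original setup:

```shell
#!/bin/sh
# Hypothetical cron cleanup: delete debug logs smaller than 1 KB
# (i.e. no backtrace was ever written into them) that haven't been
# modified for at least an hour.
find /tmp -name 'sf-active-db-log-*' -type f -size -1k -mmin +60 -delete
```

Run from crontab every hour or so; `-mmin +60` keeps it from deleting a log that a still-running script is appending to.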

(A better solution would be to use grep's return value to decide whether to delete a file. That would probably need to be in a script, to be safe.)
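That idea could be sketched like this, again assuming the filename pattern from the code below and keying on the "backtrace" marker the logger writes; treat it as an illustration, not the author's actual script:

```shell
#!/bin/sh
# Hypothetical cleanup using grep's exit status: keep a log only if it
# actually contains a backtrace section, otherwise delete it.
for f in /tmp/sf-active-db-log-*; do
    [ -f "$f" ] || continue        # skip if the glob matched nothing
    grep -q 'backtrace' "$f" || rm -f "$f"
done
```

`grep -q` exits 0 when the pattern is found, so `|| rm` only runs on files with no backtrace in them.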

   function execute_statement_return_autokey($sql)
   {
        //runs a SQL statement for an INSERT and returns the ID of the
        //row added
        global $db_debug;
        global $db_debug_save_only_backtraces;
        global $db_debug_backtrace;
        if ($db_debug) {
                $fh = fopen('/tmp/sf-active-db-log-'.getmypid(), 'a+');
                $time_start = time();
                fwrite($fh, "---start execute_statement_return_autokey \n");
                fwrite($fh, $sql);
        }

        // execute query here...

        if ($db_debug) {
                $elapsed = time() - $time_start;
                if ($elapsed > 10) {
                        fwrite($fh, "\n--------------backtrace-------\n");
                        fwrite($fh, serialize(debug_backtrace()));
                        $db_debug_backtrace = true;
                }
                fwrite($fh, "\n---end, $elapsed seconds\n");
                fclose($fh);
                if ($db_debug_save_only_backtraces && !$db_debug_backtrace) {
                        //no slow query logged so far: discard the log file
                        unlink('/tmp/sf-active-db-log-'.getmypid());
                }
        }

        // other code here, like error checking
   }