Tuesday, March 4, 2008

Logging: How Did Things Turn Out?

A post today on the Boulder Linux Users' Group mailing list asks this interesting question:
I have a bash script that is setting a series of processes in the background and I want to see what their exit codes are. I have played around trying to find a way to do it and I haven't found a reliable way, yet. Any suggestions?
Here's my solution, which includes a generally useful trick: using a filename to carry information you can read at a glance.
The biggest problem is saving $? for each command, especially when you're throwing some or all of them into the background, so they run in parallel.
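For the record, bash's wait builtin, given a job's PID, returns that job's exit status, so the parent script can collect the codes itself. A minimal sketch (run-jobs and its two commands are just stand-ins):

$ cat run-jobs
#!/bin/bash
true & pid1=$!                        # $! holds the PID of the last background job
ls bogus-filename &> /dev/null & pid2=$!
wait $pid1; echo "job 1 exited $?"    # wait PID returns that job's exit status
wait $pid2; echo "job 2 exited $?"
$ ./run-jobs
job 1 exited 0
job 2 exited 2

The catch is that the statuses live only in the parent shell; nothing lands on disk that you can look at later.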

An easy solution, if practical, is to wrap each process in a shell script that saves its exit status.
Thus

$ cat foo
#!/bin/bash
ls bogus-filename        # the command whose status we want
echo $? > $0.status      # record its exit status in foo.status
$ ./foo &> /dev/null &
$ wait; cat foo.status
2

As a variant on that, I sometimes use a logging module, which redirects all output to a logfile and renames the logfile on completion. Something like this does the trick:

$ cat foo
#!/bin/bash
source ./logging.sh
ls "$@" # your command goes here
$ cat logging.sh
# save a log, named for the exit status

rm -f $0.[0-9]* $0.out             # clear logs from earlier runs
LOG=$0.out                         # the .out suffix marks a job still running
exec &>$LOG                        # send all stdout and stderr to the log
trap 'mv $LOG ${LOG/out/$?}' EXIT  # on exit, rename the log to $0.<exit status>
$ ./foo; ls
foo
foo.0
logging.sh
$ ./foo bogusfile; ls
foo
foo.2
logging.sh

Hope this helps.
You can see from the logfile name whether the job's still running or how it exited. Moreover, while the job is still running, you can watch its progress with a tail -f on the logfile.
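For instance, while foo is still running (imagine its ls replaced by something slow), the log keeps its .out name:

$ ls
foo
foo.out
logging.sh
$ tail -f foo.out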

Encapsulating tricks like this in shell modules, like logging.sh, means you don't have to reinvent the wheel every time.
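For example, a hypothetical backup script picks up the whole behavior from one source line:

$ cat backup
#!/bin/bash
source ./logging.sh
rsync -a /home/ /backups/home/   # any long-running payload goes here

Run it, and you get backup.0 on success, or backup.<status> on failure, with the full rsync output inside the log.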
