722 Radio Drive, Lewis Center, OH 43035 740-549-3701 info@horizonsystems.com

Slow performance in Unix / Linux while debugging

Got this question the other day and thought I would share some insight into what happens when
you watch the output of a UNIX / Linux application in an xterm window.

I was asked why an application that writes a lot of output to the terminal runs so much longer
than the same application with its output redirected to a file.

What happens is that the terminal (stdout/stderr) is itself a file, but a special one: a character device.
Output to a terminal is typically unbuffered or line-buffered, so the application makes many small writes,
and each one has to be accepted and rendered by the terminal before the program can get much further.
Your application ends up spending most of its time blocked waiting on the terminal.
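You can see the difference in file types for yourself. This sketch uses `/dev/null` as a stand-in character device (a real terminal device such as `/dev/pts/0` shows the same leading `c`), and `/tmp/regular.txt` is just a hypothetical file name:

```shell
# A character device shows a leading 'c' in the ls mode field;
# a regular file on a block-backed filesystem shows '-'.
ls -l /dev/null          # e.g. crw-rw-rw- ... : character special file
seq 3 > /tmp/regular.txt
ls -l /tmp/regular.txt   # e.g. -rw-r--r-- ... : regular file
```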

If you are outputting lots of information, you can see a much longer run time compared to redirecting the output
directly to the file system, where it is written to a block-oriented device. You get benefits such as caching
and bulk writes, and the application runs considerably faster. You might even see your application accumulate
more CPU time, because it is spending its time computing instead of blocked on terminal I/O.
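A rough way to feel the difference is to time the same output going to the terminal and to a file. The numbers vary widely by terminal emulator and disk, and `/tmp/bench.out` is just a hypothetical scratch file, but the redirected run is typically far faster when run interactively:

```shell
# Same data, two destinations. Run these in an interactive terminal
# and compare the 'real' times reported by the shell.
time seq 1000000                    # every line rendered by the terminal
time seq 1000000 > /tmp/bench.out   # buffered, bulk writes via the page cache
```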

This matters most during development, when you have lots of debugging information being output. Don't worry, your app is
most likely running as it should. You're just writing to a much slower device, the terminal.

If you need to output lots of information, and be able to see it when it's running, then try this:

$ program_outputing_lots_of_data > outputfile 2>&1 & # captures both stdout and stderr to the same file.
$ tail -f outputfile

There are more ways to do this, but this approach is fairly simple and works most of the time.

Your application will run at full speed and you can still see the output as it's produced. You just have to manage
the program a little differently, for example with your shell's job control.
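Here is one sketch of such a session. `some_program` and `outputfile` are placeholder names; the key idea is that the shell gives you the background process's PID in `$!`, so tail can be interrupted without killing the program:

```shell
some_program > outputfile 2>&1 &   # run in the background, capturing all output
pid=$!                             # remember the program's PID
tail -f outputfile                 # watch the output; Ctrl-C stops tail, not the program
kill "$pid"                        # terminate the program when you're done
wait "$pid" 2>/dev/null            # reap the background job
```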


