Unix buffering delays output to stdout, ruins your day

Let's say you have the following program:

cat>example.py<<'EOF'
#!/usr/bin/python
import time
while True:
    print 'hello world'
    time.sleep(1)
EOF

chmod +x ./example.py

If you run this program from a terminal, it will print hello world every second.

But redirect the output to a file and something different happens:

./example.py > output &
tail -f output

You won't see any output! (At least not for a long while)

The same is true if you redirect example.py's output through a Unix pipe, which you can do in the shell:

./example.py | cat

Or in Python:

from subprocess import Popen, PIPE
p = Popen('./example.py', stdout=PIPE)
while True:
    print p.stdout.readline(),

Behind the scenes, the culprit is Unix stdio buffering, as implemented on Linux by glibc, the system library most C programs use for basic facilities such as IO.

The idea behind Unix buffering is to improve IO performance by batching together IO calls at the application level (AKA userland) and thus minimizing relatively expensive kernel level read/write operations.

By default, writes to stdout pass through a 4096-byte buffer, unless stdout happens to be a terminal/tty, in which case it is line buffered.

Hence the inconsistency between the immediate output when your program is writing to the terminal and the delayed output when it is writing to a pipe or file.
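
You can check the distinction glibc is drawing for yourself: the shell's [ -t 1 ] test (the equivalent of isatty() in C) reports whether stdout is attached to a terminal. A minimal sketch:

```shell
# Report whether stdout (fd 1) is a terminal - the same test stdio
# uses to pick line buffering over full buffering.
if [ -t 1 ]; then
    echo "stdout is a tty: line buffered"
else
    echo "stdout is a pipe or file: fully buffered"
fi
```

Run it bare and then piped through cat to watch the answer flip.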

Programmers who don't want their application's output buffered have a few options:

  1. Ask for an explicit buffer flush when appropriate.

    In C:

    fflush(stdout)
    

    In Python:

    sys.stdout.flush()
    
  2. Turn off buffering. See the setvbuf() man page for instructions on how to do this in C. In Python you can do this by reopening sys.stdout in unbuffered mode:

    sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
    
  3. Output to stderr instead, which is unbuffered by default, though this is a bit of an ugly hack.
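
For Python programs there's also a shortcut that avoids touching the code at all: the interpreter's -u switch (or, equivalently, the PYTHONUNBUFFERED environment variable) forces stdio into unbuffered mode. A sketch, reusing example.py from above:

```shell
# -u makes Python's stdout/stderr unbuffered, so each line reaches
# the pipe as soon as it is printed.
python -u ./example.py | cat
```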

Changing a program to flush as needed or not buffer is practical when you are the author, but it's a bit more problematic when you just want to run an existing program without Unix buffering getting in your way.

Fortunately, in most recent Linux distributions (including TKL 11 / Ubuntu Lucid / Debian Squeeze) there's a new command called stdbuf which allows you to configure the default Unix buffering for an arbitrary program.
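
For example, rerunning the earlier experiment with the buffering mode overridden; -oL requests line buffering on stdout, while -o0 would disable buffering entirely:

```shell
# Line-buffer example.py's stdout even though it's going to a file,
# so tail sees every line as soon as it's printed.
stdbuf -oL ./example.py > output &
tail -f output
```

Note that stdbuf only helps with programs that use glibc's stdio and don't adjust their own buffering.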

glibc maintainers have persistently rejected proposals to allow the default Unix buffering scheme to be configured at the glibc level (e.g., via a magic environment variable), but fortunately it's possible to override system libraries via LD_PRELOAD, and the new stdbuf command takes advantage of that.
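
You can peek at the mechanism by having stdbuf run env: on a GNU system it injects a helper library through LD_PRELOAD and passes the requested buffering mode to it in an environment variable (the library's exact path varies by distribution):

```shell
# Show the environment stdbuf sets up for its child process.
stdbuf -oL env | grep -E 'LD_PRELOAD|_STDBUF_O'
```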

Another alternative (if you can't or don't want to use stdbuf), is to allocate a pty (pseudo terminal) and connect your program's output to that. As far as I can tell this has negligible impact on performance. It's just a little bit more complex if you're not using a nice high-level interface to command execution such as the Command module in turnkey-pylib.
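
If you don't want to write the pty plumbing yourself, the util-linux script command is a quick way to get the same effect from the shell: it runs a command inside a pseudo terminal, so stdio believes it's talking to a tty and line buffers. A sketch:

```shell
# script allocates a pty for the command; -q silences script's own
# banner and the session log is discarded to /dev/null.
script -q -c "./example.py" /dev/null | cat
```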

Comments

saravana:

This post is good. The author takes us through both the Python and Unix worlds and neatly explains what to do.

Ramesh:

Very nicely explained, thank you!

mgautier:

Thank you so much for introducing me to stdbuf.

Stdout buffering is such a pain when you play with subprocess; stdbuf is exactly the command I needed.

You saved my day. Thank you.
