Ok, looking through some version of tail.c (http://minnie.tuhs.org/UnixTree/V7/usr/src/cmd/tail.c.html) I remembered a technique I used almost 15 years ago to speed up searching through large (100,000+ line) text files: do binary reads of large blocks into a buffer, scan for the EOL markers, count those, and combine that with some efficient memcpy work. Back then it was worth it, because the file search ran almost as fast as sequential disk I/O would allow. I seem to remember that large buffers sized as a multiple of the disk block size really helped. It might be more of a hassle nowadays with these blazing fast computers; I might end up with 3x the lines of code and only a 5% speed increase. And it's not speed I'm interested in to begin with, it's the file I/O I'd like to reduce.

Hmmm... while typing I'm thinking... read blocks backwards... but then I'd need a seek function... /me is confusing himself... need sleep...
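For what it's worth, the seek function already exists: lseek(2). Here's a minimal sketch of the backwards idea in C, reading fixed-size blocks from the end of the file and counting '\n' markers until enough lines have been seen, roughly in the spirit of what that V7 tail.c does with its buffer. The block size BLKSZ, the helper name tail_lines, and the default of 10 lines are my assumptions, not anything taken from the original code.

/* Sketch: print the last N lines of a file by reading fixed-size
 * blocks backwards from EOF and counting '\n' markers as we go.
 * BLKSZ is an assumed size; a multiple of the fs block size is a
 * reasonable pick, as the original post suggests. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define BLKSZ 8192

static void tail_lines(int fd, long nlines)
{
    char buf[BLKSZ];
    off_t pos = lseek(fd, 0, SEEK_END);   /* start at file size */
    long seen = 0;

    /* walk backwards one block at a time */
    while (pos > 0) {
        ssize_t len = (pos >= BLKSZ) ? BLKSZ : (ssize_t)pos;
        pos -= len;
        lseek(fd, pos, SEEK_SET);         /* the seek function */
        if (read(fd, buf, len) != len)
            break;
        /* scan the block backwards for EOL markers */
        for (ssize_t i = len - 1; i >= 0; i--) {
            if (buf[i] == '\n' && ++seen > nlines) {
                pos += i + 1;             /* first byte of wanted output */
                goto dump;
            }
        }
    }
dump:
    /* sequential copy from the found offset to EOF */
    lseek(fd, pos, SEEK_SET);
    ssize_t n;
    while ((n = read(fd, buf, BLKSZ)) > 0)
        write(STDOUT_FILENO, buf, n);
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file [nlines]\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror(argv[1]); return 1; }
    tail_lines(fd, argc > 2 ? atol(argv[2]) : 10);
    close(fd);
    return 0;
}

The nice part of the backwards pass is exactly the I/O reduction I was after: only the blocks that actually contain the last N lines ever get read, so the work is proportional to the output rather than the file size.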

Edited by iffy (2005-09-05 12:09 AM)