Bruno Wolff III wrote:
> On Wed, Sep 12, 2007 at 16:25:09 -0400,
>   Matthew Flaschen <matthew.flaschen@xxxxxxxxxx> wrote:
>> aragonx@xxxxxxxxxx wrote:
>>> I was told by a coworker that all UNIX varieties have to do an ordered
>>> list search when they have to perform any operations on a directory.
>>> They also stated that if there are more than 100k files in a directory,
>>> these tools would fail.
>> I'll take that as a challenge.
>
> I have directories with several million files in them.

Just curious... what for?

> Lookups of a single file seem to be fast,

In a directory with files numbered 1 to 200000 (the OP said ~100k was a
wall), I get:

  $ time ls -l | wc -l
  200004
  ls --color=auto -l  1.90s user 0.83s system 98% cpu 2.775 total
  wc -l               0.02s user 0.07s system  3% cpu 2.772 total

  $ time touch 30079
  touch 30079  0.00s user 0.00s system 13% cpu 0.022 total

Both of these seem acceptable. (A sketch of how to set up this test is
at the end of this message.)

> however I have found mv and cp are unexpectedly slow and use up lots of
> memory. So I expect there are some problems with the way that code was
> written.

Possibly, but most code like this doesn't have problems per se, just
tradeoffs.

Matt Flaschen
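
P.S. For anyone who wants to reproduce the test, here is a minimal
sketch of one way to build an equivalent directory (assuming a GNU
userland; the name "bigdir" is my own, not anything standard):

  # create a scratch directory holding 200000 empty files named 1..200000
  mkdir bigdir && cd bigdir
  seq 1 200000 | xargs touch

  # time a full long listing of the directory
  time ls -l | wc -l

  # time an operation on a single file
  time touch 30079

Whether single-file operations stay fast at this scale depends largely
on the filesystem: ext3's dir_index (htree) feature, for example,
replaces the linear directory scan with a hashed lookup, so touching one
file doesn't pay a cost proportional to the directory's size.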