> -----Original Message-----
> From: Robert Cummings [mailto:robert@xxxxxxxxxxxxx]
> Sent: Thursday, July 10, 2008 11:24 AM
> To: Boyd, Todd M.
> Cc: Daniel Brown; php-general@xxxxxxxxxxxxx
> Subject: Re: OT - RE: [PHP] scalable web gallery
>
> On Thu, 2008-07-10 at 10:18 -0500, Boyd, Todd M. wrote:
> > > -----Original Message-----
> > > From: Daniel Brown [mailto:parasane@xxxxxxxxx]
> > > Sent: Thursday, July 10, 2008 9:42 AM
> > > To: paragasu
> > > Cc: php-general@xxxxxxxxxxxxx
> > > Subject: Re: scalable web gallery
> >
> > ---8<--- snip
> >
> > >     And for the record, in the "olden days," there was a limit of
> > > about 2048 files per directory, back when no one thought there would
> > > ever be a need for nearly that many files. Then, with improved
> > > filesystems, the limit was rumored to be another magic number: 65535.
> > > That depended on the operating system, filesystem, and the kernel. I
> > > think (but don't quote me on this) that BeOS had the 65535 limit.
> > >
> > >     Now, on an ext3 filesystem (we're not counting ReiserFS because
> > > (1) I was never a fan, and (2) he might kill me if I say something
> > > bad! 8^O) you're okay with hundreds of thousands of files per
> > > directory. ls'ing will be a bit of a pain in the ass, and if you're
> > > in Bash, you probably don't want to double-TAB the directory, but
> > > all in all, you'll be okay.
> > >
> > >     Still, I'd create 16 subdirectories under the images directory:
> > > 0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,f. Then name the file as an MD5 hash
> > > of the image uploaded, and place it in the directory matching the
> > > first character of the new file name.
> >
> > Aren't directory structures in Windows (>2.x) and even DOS (>4.x)
> > built with B-Trees? I wouldn't figure there would be any kind of
> > limit--excepting memory, of course--to how many files or
> > subdirectories can be linked to a single node.
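For anyone following along, Dan's sharding scheme above only takes a few lines of PHP. This is just a sketch -- the function name `sharded_path` and the `images` base directory are made up for illustration, and you'd create the 16 subdirectories up front:

```php
<?php
// Sketch of the 16-subdirectory scheme Dan describes: name the file
// after the MD5 hash of its contents, then file it under 0-9/a-f by
// the hash's first hex character. Names here are illustrative only.
function sharded_path($imageData, $baseDir = 'images')
{
    $hash = md5($imageData);   // 32 lowercase hex characters
    $subdir = $hash[0];        // one of '0'-'9', 'a'-'f'
    return "$baseDir/$subdir/$hash";
}

// Typical use on upload (assuming the upload's bytes are in $contents):
// move_uploaded_file($_FILES['img']['tmp_name'], sharded_path($contents));
```

Since MD5 output is close to uniform, each of the 16 buckets ends up with roughly 1/16th of the files, which keeps any single directory from growing huge.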
> >
> > Been a while since I've played with those underlying data structures
> > we all take for granted, though, so maybe I'm way off base.
>
> They may all be B-Trees, but the storage mechanism often differs
> between one filesystem and another. FAT16 and FAT32 both suffered from
> limitations on the number of files that could exist in a directory.
> Actually, I may be wrong about FAT32, but I do know for certain it had
> a massive slowdown once it hit some magic number.

tedd also sent me an e-mail that sparked a memory of mine... b-trees,
regardless of their efficiency, still assign each dir/file inode an
identifying number. This number, obviously, can only get so large in
the context of one b-tree object (i.e., a directory).

In spite of this mental exercise, I do *NOT* miss my Data Structures &
Algorithms class. :)


Todd Boyd
Web Programmer

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php