Depends entirely on how many images you expect this folder to hold at
any one time. Whilst all modern operating systems can cope with lots
of files, there is a point* beyond which system performance
increasingly suffers.
Personally I'd create sub-dirs per user.
* Don't ask me what, but it's not a very high number IIRC.
It depends on the OS and what you're going to be doing. I think a couple
of years ago it was more relevant, but still, it's worth considering
today. My memory is that you can stuff a lot of files into a single
directory provided you access them directly and don't ever want to list
them out.
Still, that aside, there are very valid reasons for splitting them up into
subfolders.
- You avoid any "lots of files in a single directory" problem.
- You create "break points", so to speak, that would allow you to add a hard
drive seamlessly.
- You potentially make it easier to back up.
If I'm just dealing with images whose names are unique and roughly
sequential numbers I tend to create a structure like this:
A/B/NNNAB.jpg
Where A and B are the last two digits (0 - 9) of the file's number.
This at least gives me a nice even spread of files. And if I run out of
disk space I can add a new disk and move say the top level 0, 1, 2, 3, 4
to it without affecting any of the code.
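Something along these lines is what I mean. Just a rough sketch, and
the function name and base directory are made up for illustration:

<?php
// Rough sketch of the A/B/NNNAB.jpg layout above, assuming image
// names are unique numbers (e.g. 12345 -> 4/5/12345.jpg).
function image_path($id, $base = '/var/www/images')
{
    $name   = (string) $id;
    // Pad so there are always at least two digits to bucket on.
    $padded = str_pad($name, 2, '0', STR_PAD_LEFT);
    $a = $padded[strlen($padded) - 2];   // second-to-last digit
    $b = $padded[strlen($padded) - 1];   // last digit
    return "$base/$a/$b/$name.jpg";
}

$path = image_path(12345);               // /var/www/images/4/5/12345.jpg
// Create the two directory levels on demand before writing the file.
if (!is_dir(dirname($path))) {
    mkdir(dirname($path), 0755, true);   // third argument = recursive
}
?>

That gives you 100 buckets (00 - 99), so a million images works out to
roughly 10,000 per directory.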
You can do the same for users, but watch out: you probably won't have
any users whose names start with Q or X, but you may have a lot that
start with R, so the spread can get a bit lopsided.
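If the lopsidedness bothers you, one option (my suggestion, not
something from the original post) is to bucket on the first couple of
characters of a hash of the username rather than on its actual letters:

<?php
// Bucketing users by the first two hex characters of md5(username)
// spreads them evenly no matter what letters the names start with.
// The 'user_album_pics' directory name is just an example.
function user_album_dir($username, $base = 'user_album_pics')
{
    $hash = md5($username);             // 32 hex characters, 0-9a-f
    return "$base/{$hash[0]}/{$hash[1]}/$username";
}

echo user_album_dir('fred'), "\n";      // user_album_pics/<x>/<y>/fred
?>

Two hex characters gives you 16 directories per level, 256 buckets in
total, which is plenty for this kind of thing.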
Interesting, I just created an "album" kind of section for a client's
site, but I am dumping all the images of all users into the folder
"user_album_pics". He's on a dedicated Linux server with 2 GB of RAM and
300 GB of HDD space...
Any rough estimates of what number of images would be too many? And does
anyone think I should make folders for each user? Each user is limited
to a max of 3 MB of pics though...
"df -hi" is your friend.
From one of my boxes, I get this output:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/da0s1a 252M 43M 189M 19% 1577 30933 5% /
/dev/da0s1h 23G 4.8G 16G 23% 59514 2942084 2% /local
/dev/da0s1e 504M 356K 463M 0% 40 64854 0% /tmp
/dev/da0s1g 7.9G 1.8G 5.4G 25% 223147 811603 22% /usr
/dev/da0s1f 1008M 72M 855M 8% 1398 128392 1% /var
/dev/ad0c 147G 41G 105G 28% 1962 19327060 0% /ad0
The 'iused' and 'ifree' columns tell you how many files/directories
(i.e. inodes) are in use on that filesystem and how many you have free.
So 'ifree' is the maximum number of additional files you can store on
that filesystem before you start to get serious errors.
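There's no built-in PHP call for inode counts (disk_free_space() only
reports bytes), so if you wanted to check this from a script you would
have to shell out to df and parse it. Very rough sketch; the column
index assumes the BSD-style 'df -hi' layout shown above, so adjust it
for your system:

<?php
// Pull the 'ifree' column out of df's output for a given path.
function free_inodes($path = '/')
{
    $out   = shell_exec('df -i ' . escapeshellarg($path));
    $lines = explode("\n", trim($out));
    $last  = $lines[count($lines) - 1];  // the data line, not the header
    $cols  = preg_split('/\s+/', $last);
    return (int) $cols[6];               // 'ifree' in the output above
}

echo free_inodes('/local'), " inodes free\n";
?>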
And the inode count is directly tied to how you created that
filesystem (block size, etc.). On FreeBSD this can be adjusted when
you create the filesystem... see the 'newfs' man page for more. Also
the 'tuning' man page...
Hope this helps...
-philip