Lucas Zinato Carraro wrote:
- Is there a recommended size for a backend server (e.g., 1 TB)?
Hardware-wise, your setup is probably overkill. Nothing wrong with that.

Sizing of filesystems, IMO, should be based on your tolerance for a long fsck during a disaster. I run ZFS, which has none of that, and I never want to see one again on a mail spool. Linux journals, in my experience, reduce the probability, but you will still find yourself looking at an fsck prompt and having to decide: Y = hours of downtime while I make sure it's actually OK, or N = get it going and cross my fingers. Most Linux admins don't turn on full data journalling anyway, citing "performance reasons"; they leave the default, which journals only metadata. So you don't really know how your data is doing until it goes kablooey and you run fsck with the filesystem unmounted. I wouldn't go over 500 GB per filesystem until Linux has a production-ready BTRFS or something similar.

In ZFS, backups are trivial. A script takes a snapshot at 23:55, which completes in a few seconds, and the backup is then made from the most recent snapshot. We keep 14 days of snapshots in the pool, and almost all recovery operations are satisfied from those without hitting tape. The overhead of our snapshots increases storage use by about 50%, but we are still FAR below maximum usage, with pools only about 20% filled, using LZJB compression on the meta dirs and gzip on the messages.
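Roughly, the nightly job looks like the sketch below. The dataset name (tank/mailspool), snapshot prefix, and script path are made up for illustration, and it's written in Python here rather than being our actual script; it just shows the snapshot-plus-14-day-prune idea.

#!/usr/bin/env python3
"""Nightly ZFS snapshot rotation sketch.

Illustrative assumptions: the dataset is called tank/mailspool,
snapshots are named by date, and cron runs this at 23:55, e.g.:

    55 23 * * * /usr/local/bin/zfs-nightly-snapshot.py
"""
import datetime
import subprocess

DATASET = "tank/mailspool"   # hypothetical dataset name
KEEP_DAYS = 14               # the 14-day retention described above
PREFIX = "nightly-"

def run(*args):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(args, check=True, capture_output=True,
                          text=True).stdout

def take_snapshot():
    """Create tonight's snapshot, e.g. tank/mailspool@nightly-2008-01-31."""
    name = PREFIX + datetime.date.today().isoformat()
    run("zfs", "snapshot", f"{DATASET}@{name}")

def prune_snapshots():
    """Destroy nightly snapshots older than KEEP_DAYS days."""
    cutoff = datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)
    out = run("zfs", "list", "-H", "-t", "snapshot", "-o", "name",
              "-s", "creation", "-r", DATASET)
    for snap in out.splitlines():
        _, _, tag = snap.partition("@")
        if not tag.startswith(PREFIX):
            continue  # leave snapshots made by other tools alone
        stamp = datetime.date.fromisoformat(tag[len(PREFIX):])
        if stamp < cutoff:
            run("zfs", "destroy", snap)

if __name__ == "__main__":
    take_snapshot()
    prune_snapshots()

The backup then just reads from the newest snapshot (via the dataset's .zfs/snapshot directory, or zfs send, whichever you prefer). The compression mentioned above is simply zfs set compression=lzjb on the metadata datasets and zfs set compression=gzip on the message datasets.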