On 09/08/2017 09:49 AM, hw wrote:
Mark Haney wrote:
I hate top posting, but since you've got two items I want to comment
on, I'll suck it up for now.
I do, too, yet sometimes it's reasonable. I also hate it when the lines
are too long :)
I'm afraid you'll have to live with it a bit longer. Sorry.
Having SSDs alone will give you great performance regardless of
filesystem.
It depends, i.e. I can't tell how these SSDs would behave if large
amounts of data were written to and/or read from them over extended
periods of time, because I haven't tested that. That isn't the
application, anyway.
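If I do get around to testing it, a sustained-write probe is simple
enough. A minimal sketch, assuming Python 3; the file path and sizes
are placeholders to be adjusted to the SSD in question:

    import os, time

    PATH = "/mnt/ssd/write_test.bin"   # placeholder: a file on the SSD under test
    CHUNK = 4 * 1024 * 1024            # write in 4 MiB chunks
    TOTAL = 8 * 1024**3                # 8 GiB overall; raise this for a longer soak

    buf = os.urandom(CHUNK)
    written = 0
    start = time.monotonic()
    with open(PATH, "wb") as f:
        while written < TOTAL:
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())       # force it onto the device, not the page cache
            written += CHUNK
            if written % 1024**3 == 0:     # report once per GiB
                rate = written / 2**20 / (time.monotonic() - start)
                print(f"{written // 2**30} GiB written, {rate:.0f} MiB/s average")
    os.remove(PATH)

The fsync() after every chunk is what keeps the numbers honest; without
it, the first few gigabytes mostly measure the page cache.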
If your I/O is going to be heavy (and you've not mentioned expected
traffic, so we can only go on what little we glean from your posts),
then SSDs will likely start having issues sooner than a mechanical drive
might. (Though, YMMV.) As I've said, we process 600 million messages a
month, on primary SSDs in a VMWare cluster, with mechanical storage for
older, archived user mail. Archived may not be exactly the right word,
but the context should be clear.
BTRFS isn't going to impact I/O any more significantly than, say, XFS.
But mdadm does, and the impact is severe. I know there are people
saying otherwise, but I've seen the impact myself, and I definitely
don't want it on that particular server because it would likely
interfere with other services. I don't know whether the software RAID
of btrfs is any better in that regard, but I'm seeing btrfs on SSDs
being fast, and testing with the particular application has shown a
speedup of factor 20--30.
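Most of that factor presumably comes from synchronous-write latency,
which is easy to compare directly between two mounts. A minimal sketch,
assuming Python 3; the two mount points are placeholders for a
btrfs-on-SSD filesystem and an md-RAID one:

    import os, statistics, time

    def fsync_latency_ms(dirpath, n=200):
        """Median latency of n small write+fsync cycles in dirpath, in ms."""
        probe = os.path.join(dirpath, "latency_probe.tmp")
        samples = []
        with open(probe, "wb", buffering=0) as f:
            for _ in range(n):
                t0 = time.monotonic()
                f.write(b"x" * 4096)       # one 4 KiB synchronous write
                os.fsync(f.fileno())
                samples.append((time.monotonic() - t0) * 1000)
        os.remove(probe)
        return statistics.median(samples)

    # Placeholder mount points: btrfs on the SSDs vs. the md-RAID array.
    for mnt in ("/mnt/btrfs-ssd", "/mnt/md-raid"):
        print(mnt, f"{fsync_latency_ms(mnt):.2f} ms median")

Medians are more useful than averages here, because a single
erase-block stall on an SSD can skew the mean badly.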
I never said anything about MD RAID. I trust that about as far as I
could throw it, and having had 5 surgeries on my throwing shoulder,
that wouldn't be far.
That is the crucial improvement. If the hardware RAID delivers that,
I'll use that and probably remove the SSDs from the machine, as it
wouldn't even make sense to put temporary data onto them because that
would involve software RAID.
Again, if the idea is to have fast primary storage, there are pretty
large SSDs available now, and I've hardware-RAIDed SSDs before without
trouble, though not for any heavy lifting; that was on my test servers
at home.
Without an idea of the expected mail traffic, this is all speculation.
BTRFS does have serious stability/data integrity issues that XFS
doesn't have, though. There's no reason not to use SSDs for storage of
immediate data and mechanical drives for archival data storage.
As for VMs, we run a huge Zimbra cluster in VMs on VPC with large
primary SSD volumes and even larger (and slower) secondary volumes for
archived mail. It's all CentOS 6 and works very well. We process 600
million emails a month on that virtual cluster. All EXT4 inside LVM.
Do you use hardware RAID with SSDs?
We do not here where I work, but that was set up LONG before I arrived.
I can't tell you what to do, but it seems to me you're viewing your
setup from a narrow SSD/BTRFS standpoint. Lots of ways to skin that
cat.
That's because I do not store data on a single disk, without
redundancy, and the SSDs I have are not suitable for hardware RAID. So
what else is there but either md-RAID or btrfs when I do not want to
use ZFS? I also do not want to use md-RAID, hence only btrfs remains. I
also like to use sub-volumes, though that isn't a requirement (because
I can use directories instead and lose the ability to make snapshots).
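To make the subvolume point concrete: a minimal sketch, assuming
Python 3, btrfs-progs, and a btrfs filesystem mounted at a hypothetical
/srv, of what a subvolume gives you that a plain directory does not:

    import subprocess

    MNT = "/srv"   # placeholder: wherever the btrfs filesystem is mounted

    def btrfs(*args):
        """Run a btrfs(8) subcommand, raising if it fails."""
        subprocess.run(("btrfs",) + args, check=True)

    # A subvolume behaves like a directory, but is its own snapshot unit.
    btrfs("subvolume", "create", f"{MNT}/mailspool")

    # A read-only snapshot of it, e.g. right before a migration or backup.
    btrfs("subvolume", "snapshot", "-r",
          f"{MNT}/mailspool", f"{MNT}/mailspool-snap")

The read-only snapshot is atomic and essentially free; with a plain
directory the equivalent would be a full copy.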
If the SSDs you have aren't suitable for hardware RAID, then they aren't
good for production level mail spools, IMHO. I mean, you're talking
like you're expecting a metric buttload of mail traffic, so it stands to
reason you'll need really beefy hardware. I don't think you can do what
you seem to need on budget hardware. Personally, based on this thread
alone, if I were building this in-house, I'd get a decent server
cluster together and build an FC or iSCSI SAN to a Nimble storage
array with Flash/SSD front ends and large HDDs in the back end. This
solves virtually all your problems. The servers will have tiny SSD boot
drives (which I prefer over booting from the SAN) and then everything
else gets handled by the storage back-end.
In effect, this is how our mail servers are set up here. And they are
virtual.
I stay away from LVM because that just sucks. It wouldn't even have any
advantage in this case.
LVM is a joke. It's always been something I've avoided like the plague.
On 09/08/2017 08:07 AM, hw wrote:
PS:
What kind of storage solutions do people use for cyrus mail spools?
Apparently you can not use remote storage, at least not NFS. That even
makes it difficult to use a VM due to limitations of available disk
space.

I'm reluctant to use btrfs, but there doesn't seem to be any reasonable
alternative.
hw wrote:
Mark Haney wrote:
On 09/07/2017 01:57 PM, hw wrote:
Hi,
is there anything that speaks against putting a cyrus mail spool onto a
btrfs subvolume?
I might be the lone voice on this, but I refuse to use btrfs for
anything, much less a mail spool. I used it in production on DB and web
servers and fought corruption issues and scrubs hanging the system more
times than I can count. (This was within the last 24 months.) I was
told by certain mailing lists that btrfs isn't considered production
level. So I scrapped the lot, went to xfs, and haven't had a problem
since.
I'm not sure why you'd want your mail spool on a filesystem that seems
to hate being hammered with reads/writes. Personally, on all my mail
spools, I use XFS or EXT4. Our servers here handle 600 million messages
a month without trouble on those filesystems. Just my $0.02.
Btrfs appears rather useful because the disks are SSDs, because it
allows me to create subvolumes, and because it handles SSDs nicely.
Unfortunately, the SSDs are not suited for hardware RAID.

The only alternative I know is xfs or ext4 on mdadm and no subvolumes,
and md RAID has severe performance penalties which I'm not willing to
afford.

Part of the data I plan to store on these SSDs greatly benefits from
the low latency, making things about 20--30 times faster for an
important application.
So what should I do?
--
Mark Haney
Network Engineer at NeoNova
919-460-3330 option 1
mark.haney@xxxxxxxxxxx
www.neonova.net