> On Nov 5, 2016, at 3:52 AM, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
>
> Cache is hardly used; I think you'll find with a VM workload you're only
> getting around 4% hit rates. You're better off using the SSD for slog, it
> improves sync writes considerably.
>
> I tried the Samsung 850 Pros and found them pretty bad in practice. Their
> sustained sequential writes were atrocious in production and their lifetime
> very limited. Gluster/VM usage results in very high writes; ours all packed
> it in under a year.
>
> We have Kingston Hyper somethings :) They have a TBW of 300TB, which is
> much better, and their uncompressed write speed is very high.

> On Nov 5, 2016, at 6:21 AM, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
>
> On 5/11/2016 9:20 PM, Gandalf Corvotempesta wrote:
>> I don't see any advantage doing a single RAIDZ10, only drawbacks.
>> With multiple RAIDZ1 you get the same space, same features and the same
>> performance as a single RAIDZ10, but much more availability and safety
>> for your data.
>
> Better local IOPS, better use of ZIL and SLOG. Lower memory requirements.

This ^^^ for sure. Plus with sharding, gluster has a much smaller job, so
bigger bricks don't become the heal burden they could be if you had to do
full heals. ZFS can bring a brick back up to "full strength" much faster
than gluster could do a full heal if it lost part of it. And Z10 is a good
way to get more IOPS out of the volume. Same reason I've moved to stripes on
mine, with gluster providing the "safety factor" for any one server. The
occasional rebuild if I lose a whole volume won't kill anything, and
completes fast enough not to bother me.

If I were running Z10 (or any RAIDZ for that matter), I'd probably want a
spare disk in the pool now that zed works. Which is another good reason for
a larger single pool, now that I think about it: better coverage for a
single spare.

I haven't had the same experience with the 850 Pros that Lindsay has, but
I'm going to take a look at my cache hit rates now for sure. I have ~150G of
cache on each box, and they tend to run about 80% full over time. Have to
see how effective those are being. I did find that using a little bit of
them for a slog really slowed the writes down, which is why I'm not at the
moment. And I'll keep the Kingstons in mind; I've been reconsidering finding
a good slog disk, maybe with a cheap RAID card in front of it as a poor
man's Zeus.

Think I'm going to have to set up something simple to gather layouts and
stats with a common test protocol, could be useful… Or is there something
like this already out there?

-Darrell
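
On gathering layouts and stats: a minimal sketch of a per-node report, which
also covers the cache-hit-rate check mentioned above. It assumes ZFS on
Linux, where the ARC counters live in /proc/spl/kstat/zfs/arcstats (the same
numbers arc_summary reports), and bundles them with the pool layout as JSON:

    #!/usr/bin/env python3
    # Per-node report: pool layout plus ARC/L2ARC hit rates.
    # Assumes ZFS on Linux; the kstat path and field names are ZoL-specific.
    import json
    import socket
    import subprocess

    def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
        stats = {}
        with open(path) as f:
            for line in f.readlines()[2:]:        # skip the two header lines
                name, _kind, value = line.split()
                stats[name] = int(value)
        return stats

    def hit_rate(hits, misses):
        total = hits + misses
        return round(100.0 * hits / total, 1) if total else 0.0

    if __name__ == "__main__":
        s = arcstats()
        report = {
            "host": socket.gethostname(),
            "layout": subprocess.check_output(["zpool", "status", "-v"],
                                              universal_newlines=True),
            "arc_hit_pct": hit_rate(s["hits"], s["misses"]),
            "l2arc_hit_pct": hit_rate(s["l2_hits"], s["l2_misses"]),
        }
        print(json.dumps(report, indent=2))

Running the same script on each box before and after a common test run, then
diffing the reports, would give a rough shared protocol.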
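
To put the 300TB TBW figure in perspective, a back-of-envelope lifetime
estimate; the daily write volumes below are made-up illustrations, not
measurements from anyone's cluster:

    # Rough SSD lifetime from the rated TBW (total bytes written).
    # The daily write volumes are hypothetical, for illustration only.
    def years_of_life(tbw_tb, gb_written_per_day):
        return tbw_tb * 1000.0 / gb_written_per_day / 365.0

    for gb_day in (100, 500, 1000):
        print("300TB TBW at %4d GB/day lasts ~%.1f years"
              % (gb_day, years_of_life(300, gb_day)))

At ~1TB of writes a day, even a 300TB-TBW drive is used up in under a year,
which is consistent with heavy Gluster/VM write loads wearing out
lower-endurance consumer drives that fast.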
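
On the "better local IOPS" point: the usual rule of thumb (an approximation,
not a benchmark) is that random IOPS scale with the number of vdevs, since
each mirror or RAIDZ vdev delivers roughly one disk's worth of random IOPS.
A quick sketch with hypothetical numbers:

    # Rule-of-thumb comparison; disk count and per-disk IOPS are hypothetical.
    DISK_IOPS = 150          # rough figure for a 7200rpm drive

    layouts = {              # 12 disks arranged three different ways
        "1 x 12-disk RAIDZ2":        1,   # vdev count
        "2 x 6-disk RAIDZ1":         2,
        "6 x 2-disk mirrors (Z10)":  6,
    }
    for name, vdevs in layouts.items():
        print("%-27s ~%4d random IOPS" % (name, vdevs * DISK_IOPS))

By that reasoning the striped-mirror (Z10) layout comes out well ahead on
IOPS for the same disk count, which is the trade being made against the
multiple-RAIDZ1 layout's space and redundancy.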