Hi Lindsay
Just noticed that you have your ZFS log on a single disk; you like living dangerously ;-) You should have a mirror for the slog to be on the safe side.
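A mirrored slog can be set up in place with `zpool attach`; a minimal sketch, assuming the pool is named `tank`, the existing log device is the Kingston partition shown in the pool layout below, and a second SSD partition of at least the same size is available (the second device name here is a hypothetical placeholder):

```shell
# Attach a second device to the existing single log device.
# Note: "attach" (not "add") turns the lone slog into a mirror;
# "zpool add tank log <dev>" would stripe a second log instead.
# ata-SECOND_SSD-part1 is a hypothetical placeholder device name.
zpool attach tank \
    ata-KINGSTON_SHSS37A240G_50026B7267031966-part1 \
    ata-SECOND_SSD-part1

# The logs section should now show a mirror vdev with both partitions.
zpool status tank
```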
Cheers,
M.
-------- Original Message --------
Subject: Re: [Gluster-users] Improving IOPS
Local Time: November 5, 2016 9:52 AM
UTC Time: November 5, 2016 8:52 AM
From: lindsay.mathieson@xxxxxxxxx
To: Darrell Budic <budic@xxxxxxxxxxxxxxxx>, gluster-users <Gluster-users@xxxxxxxxxxx>

On 5/11/2016 1:30 AM, Darrell Budic wrote:
> What’s your CPU and disk layout for those? You’re close to what I’m running, curious how it compares.

All my nodes are running RAIDZ10. I have an SSD with a 5GB slog partition and a 100GB cache.

The cache is hardly used; I think you'll find that with a VM workload you're only getting around 4% hit rates. You're better off using the SSD for slog, it improves sync writes considerably.

I tried the Samsung 850 Pro and found them pretty bad in practice. Their sustained sequential writes were atrocious in production and their lifetime very limited. Gluster/VM usage results in very high writes; ours all packed it in under a year.

We have Kingston Hyper somethings :) They have a TBW of 300TB, which is much better, and their uncompressed write speed is very high.

> What are you doing to benchmark your IO?

bonnie++ on the ZFS pools and CrystalDiskMark in the VMs, plus testing real-world workloads.

VNA:
- 2 * Xeon E5-2660 2.2GHz
- 64GB RAM
- 2*1G balance-alb (Gluster Network)
- 1G Public network

pool: tank
config:

    NAME                                               STATE   READ WRITE CKSUM
    tank                                               ONLINE     0     0     0
      mirror-0                                         ONLINE     0     0     0
        ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU901     ONLINE     0     0     0
        ata-WDC_WD6000HLHX-01JJPV0_WD-WX81E81AFWJ4     ONLINE     0     0     0
      mirror-1                                         ONLINE     0     0     0
        ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV240     ONLINE     0     0     0
        ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV027     ONLINE     0     0     0
      mirror-2                                         ONLINE     0     0     0
        ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU903     ONLINE     0     0     0
        ata-WDC_WD6000HLHX-01JJPV0_WD-WXB1E81EFFT2     ONLINE     0     0     0
      mirror-4                                         ONLINE     0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1UFDFKA       ONLINE     0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7ZKLK52       ONLINE     0     0     0
    logs
      ata-KINGSTON_SHSS37A240G_50026B7267031966-part1  ONLINE     0     0     0
    cache
      ata-KINGSTON_SHSS37A240G_50026B7267031966-part2  ONLINE     0     0     0

VNB, VNG:
- Xeon E5-2620 2GHz
- 64GB RAM
- 2*1G balance-alb (Gluster Network)
- 1G Public network

pool: tank
config:

    NAME                                               STATE   READ WRITE CKSUM
    tank                                               ONLINE     0     0     0
      mirror-0                                         ONLINE     0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2874892       ONLINE     0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR8C2       ONLINE     0     0     0
      mirror-1                                         ONLINE     0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR3Y0       ONLINE     0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR84T       ONLINE     0     0     0
    logs
      ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part1  ONLINE     0     0     0
    cache
      ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part2  ONLINE     0     0     0

> My prod cluster:
> 3x E5-2609 @ 1.9G, 6 core, 32G RAM, 2x10G network, parts of 2x Samsung 850 Pro used for zfs cache, no zil
> 2x 9 x 1G drives in straight zfs stripe
> 1x 8 x 2G drives in straight zfs stripe
>
> I use lz4 compression on my stores. The underlying storage seems to be capable of ~400MB/s writes and 1-1.5GB/s reads, although the pair of 850s I’m caching on probably max out around 1.2GB/s.

--
Lindsay Mathieson

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
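The ~4% L2ARC hit rate mentioned above can be checked on a ZFS-on-Linux host from the `l2_hits`/`l2_misses` counters in `/proc/spl/kstat/zfs/arcstats`. A minimal sketch; the helper name is mine, and the three-column kstat layout (name, type, value) is the ZFS-on-Linux format:

```python
def l2arc_hit_rate(arcstats_text: str) -> float:
    """Compute the L2ARC hit rate (percent) from the contents of
    /proc/spl/kstat/zfs/arcstats, whose data lines look like:
    'l2_hits    4    40'   (name, type, value)."""
    stats = {}
    for line in arcstats_text.splitlines():
        parts = line.split()
        # Keep only well-formed data rows; this skips the header lines.
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    hits = stats.get("l2_hits", 0)
    misses = stats.get("l2_misses", 0)
    total = hits + misses
    return 100.0 * hits / total if total else 0.0
```

On a live host, feed it `open("/proc/spl/kstat/zfs/arcstats").read()`; a rate in the low single digits supports the point that the SSD is better spent on slog than on cache.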