Re: Improving IOPS

Hi Lindsay

Just noticed that you have your ZFS log on a single disk; you like living dangerously ;-) You should mirror the slog to be on the safe side.
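If you do want to mirror it, the single log device can be swapped for a mirrored pair along these lines; the two ssd-a/ssd-b paths are placeholders for a second SSD, not your actual partitions:

    # e.g. on VNA, removing the current slog (from the pool listing below)
    zpool remove tank ata-KINGSTON_SHSS37A240G_50026B7267031966-part1
    # then re-adding the slog as a mirrored pair
    zpool add tank log mirror /dev/disk/by-id/ssd-a-part1 /dev/disk/by-id/ssd-b-part1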

Cheers,
M.

-------- Original Message --------
Subject: Re: [Gluster-users] Improving IOPS
Local Time: November 5, 2016 9:52 AM
UTC Time: November 5, 2016 8:52 AM
From: lindsay.mathieson@xxxxxxxxx
To: Darrell Budic <budic@xxxxxxxxxxxxxxxx>
gluster-users <Gluster-users@xxxxxxxxxxx>

On 5/11/2016 1:30 AM, Darrell Budic wrote:
> What’s your CPU and disk layout for those? You’re close to what I’m running, curious how it compares.

All my nodes are running ZFS RAID10 (striped mirrors). I have a 5GB SSD slog partition and a 100GB SSD cache partition.

The cache is hardly used; with a VM workload I think you'll find you're only
getting around 4% hit rates. You're better off using the SSD for the slog,
which improves sync writes considerably.
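You can check what the L2ARC is actually doing from the kstat counters; a quick sketch for ZFS on Linux, not something from the original mail:

    # Compute the L2ARC hit rate from the ZFS kstats
    awk '$1 == "l2_hits"   { h = $3 }
         $1 == "l2_misses" { m = $3 }
         END { if (h + m > 0) printf "L2ARC hit rate: %.1f%%\n", 100 * h / (h + m) }' \
        /proc/spl/kstat/zfs/arcstats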

I tried the Samsung 850 Pros and found them pretty bad in practice. Their
sustained sequential writes were atrocious in production and their lifetime
very limited. Gluster/VM usage results in very high write volumes; all of
ours packed it in within a year.

We have Kingston Hyper somethings :) They have a TBW rating of 300TB, which
is much better, and their uncompressed write speed is very high.
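Wear is easy to keep an eye on with smartctl; the exact attribute name varies by vendor, so this grep is a best-effort sketch rather than the precise counter these drives expose:

    # Show total host writes as reported by SMART (attribute name varies by model)
    smartctl -A /dev/sda | grep -iE 'total.*(written|lbas)'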


> What are you doing to benchmark your IO?

bonnie++ on the ZFS pools and CrystalDiskMark in the VMs, plus tests
against real-world workloads.
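For anyone wanting to compare pool-level numbers, a typical run would look something like this; the path and size are illustrative, not the exact invocation used here:

    # Test file size should be ~2x RAM so results aren't served from the ARC
    bonnie++ -d /tank/bench -s 128g -u root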


VNA:

- 2 x Xeon E5-2660 @ 2.2 GHz
- 64GB RAM
- 2 x 1GbE bond, balance-alb (Gluster network; sample bond config below)
- 1GbE public network
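A balance-alb bond needs no special switch support; on Debian-based hosts it is typically set up in /etc/network/interfaces along these lines (interface names and address are placeholders, not the actual config from this cluster):

    # /etc/network/interfaces -- two 1GbE NICs bonded with balance-alb (ifenslave)
    auto bond0
    iface bond0 inet static
        address 192.168.10.11
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode balance-alb
        bond-miimon 100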

pool: tank
config:

        NAME                                                 STATE     READ WRITE CKSUM
        tank                                                 ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU901       ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX81E81AFWJ4       ONLINE       0     0     0
          mirror-1                                           ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV240       ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV027       ONLINE       0     0     0
          mirror-2                                           ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU903       ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WXB1E81EFFT2       ONLINE       0     0     0
          mirror-4                                           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1UFDFKA         ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7ZKLK52         ONLINE       0     0     0
        logs
          ata-KINGSTON_SHSS37A240G_50026B7267031966-part1    ONLINE       0     0     0
        cache
          ata-KINGSTON_SHSS37A240G_50026B7267031966-part2    ONLINE       0     0     0

VNB, VNG:

- Xeon E5-2620 @ 2 GHz
- 64GB RAM
- 2 x 1GbE bond, balance-alb (Gluster network)
- 1GbE public network

pool: tank
config:

        NAME                                                 STATE     READ WRITE CKSUM
        tank                                                 ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2874892         ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR8C2         ONLINE       0     0     0
          mirror-1                                           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR3Y0         ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR84T         ONLINE       0     0     0
        logs
          ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part1    ONLINE       0     0     0
        cache
          ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part2    ONLINE       0     0     0
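For reference, a layout like VNB's could be recreated with a single zpool create; all device paths below are placeholders:

    # Two mirrored pairs, with slog and L2ARC partitions on one SSD
    zpool create tank \
        mirror /dev/disk/by-id/disk-a /dev/disk/by-id/disk-b \
        mirror /dev/disk/by-id/disk-c /dev/disk/by-id/disk-d \
        log   /dev/disk/by-id/ssd-part1 \
        cache /dev/disk/by-id/ssd-part2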

>
> My prod cluster:
> 3x E5-2609 @ 1.9G, 6 core, 32G RAM, 2x10G network, parts of 2x samsung 850 pro used for zfs cache, no zil
> 2x 9 x 1G drives in straight zfs stripe
> 1x 8 x 2G drives in straight zfs stripe
>
> I use lz4 compressions on my stores. The underlying storage seems to be capable of ~400MB/s writes and 1-1.5GB/s reads, although the pair of 850s I’m caching on probably max out around 1.2GB/s.
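For completeness, enabling lz4 on a ZFS dataset is a one-liner (standard ZFS usage, not a command quoted from the thread):

    # Enable lz4 compression on the pool's root dataset; children inherit it
    zfs set compression=lz4 tank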


--
Lindsay Mathieson

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users

