Re: SSD randwrite performance

Hi,

I had the 250 GB Samsung PRO. They are a poor choice for journals because
they are very slow at the dsync writes that Ceph requires.

Have a look at

https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/

for more information.
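
If you want to check a drive yourself, a quick test along the lines of that
article looks roughly like this (a sketch only; /dev/sdX is a placeholder for
your journal device, and the test writes to the raw device, so it is
destructive):

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test

A good journal SSD sustains tens of thousands of IOPS in this sync-write test,
while typical consumer drives often drop to a few hundred.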

I also advise you to drop those 1 TB desktop SSDs. SSDs of this class
usually cannot deliver sustained IOPS: they can peak for a short time, but
under continuous load they fall off (even below a 7200 RPM rotating HDD).
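
You can usually see this drop-off with a longer steady-state run instead of a
short burst; a rough sketch using the same fio options Max mentioned (again,
/dev/sdX is a placeholder and the test is destructive):

  fio --filename=/dev/sdX --direct=1 --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=32 --runtime=600 --time_based \
      --group_reporting --name=steady-state

Watch the IOPS over the whole run, not just the first minute: desktop SSDs
tend to start high and then fall off once their cache and spare area are
exhausted.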

Good luck with your testing !

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, registered at the Amtsgericht Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


On 24.05.2016 at 20:20, Max A. Krasilnikov wrote:
> Hello!
> 
> I have a cluster with 5 SSD drives as OSDs, backed by SSD journals, one journal
> per OSD. One OSD per node.
> 
> Data drives are Samsung 850 EVO 1TB, journals are Samsung 850 EVO 250G; the
> journal partition is 24GB and the data partition is 790GB. OSD nodes are
> connected with 2x10Gbps Linux bonding for the data/cluster network.
> 
> When doing random writes with 4k blocks (direct=1, buffered=0,
> iodepth=32..1024, ioengine=libaio) from a Nova qemu virthost, I can get no more
> than 9 kiops. Randread is about 13-15 kiops.
> 
> The trouble is that randwrite does not depend on iodepth. Sequential read and
> write can be up to 140 kiops, randread up to 15 kiops, but randwrite is always
> 2-9 kiops.
> 
> The Ceph cluster is a mix of Jewel and Hammer, and is being upgraded to Jewel
> now. On Hammer I got the same results.
> 
> All journals can do up to 32kiops with the same config for fio.
> 
> I am confused because EMC ScaleIO can do many more iops, which bothers my boss
> :)
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



