Re: Ceph test cluster, how to estimate performance.

 Hi guys,
Thanks for the replies. I looked through that table, and it really is true: the Samsung Pro is not all that "pro". Well, I'm getting what I'm paying for. My main question was whether I'm getting performance adequate to my disk, and it seems I am. My tests show 7-8 kIOPS at replication factor 2, so the disks see ~15-16 kIOPS, which correlates strongly with that table. That tells me I'm not losing anything on the way to the Ceph cluster: no issue with the client, no issue with the network (funny, since everything goes through virtio), and no problem with CPU. Only the disk is the bottleneck. Now I know exactly what is happening with my cluster.
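The arithmetic here is worth making explicit: with replicated pools, every client write is multiplied by the pool's replication factor before it hits the disks. A minimal sketch (the numbers are the ones from this thread; the shell is only illustrative):

```shell
# Every client write fans out to "size" replicas, so the backend disks
# must absorb client_iops * replication_factor write IOPS in total.
CLIENT_IOPS=8000      # measured client-side write IOPS (~7.5-8.0 kIOPS)
REPLICATION=2         # pool "size" setting used in this test
BACKEND_IOPS=$((CLIENT_IOPS * REPLICATION))
echo "backend disks must sustain ~${BACKEND_IOPS} write IOPS"   # ~16000
```

That matches the ~15-16 kIOPS the disk benchmark table predicts, which is why the conclusion "the disk is the bottleneck" holds.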

Thanks again for the help.

Hello Daniel,

yes, the Samsung "Pro" SSD series isn't all that "pro", especially when it
comes to write IOPS. I would suggest getting some Intel S4510s if you can
afford them. If you can't, you can still try to activate overprovisioning
on the SSD; I would suggest reserving 10-30% of the SSD for wear
leveling (writing). First check the number of sectors with hdparm -N
/dev/sdX, then set a permanent HPA (host protected area) on the disk. The
"p" with no space after it is important:

hdparm -Np${SECTORS} --yes-i-know-what-i-am-doing /dev/sdX

Wait a little (!), power cycle, and re-check the disk with hdparm -N
/dev/sdX. My Samsung 850 Pros are a little reluctant to accept the
setting, but after a few tries or a little waiting the change becomes permanent.
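The steps above can be sketched end to end. This is only a sketch: the sed pattern for hdparm's output format, the 20% reservation, and the fallback sector count are assumptions, and it deliberately prints the final command instead of running it:

```shell
# Sketch: compute the HPA cutoff for ~20% overprovisioning (assumption).
DISK=/dev/sdX
# hdparm -N prints e.g. "max sectors = 234441648/234441648, HPA is disabled";
# grab the native max (the value after the slash).
NATIVE=$(hdparm -N "$DISK" 2>/dev/null | sed -n 's|.*= *[0-9]*/\([0-9]*\).*|\1|p')
NATIVE=${NATIVE:-234441648}       # fallback: sample 512G sector count, for illustration
SECTORS=$((NATIVE * 80 / 100))    # keep 80% accessible, reserve 20% for wear leveling
echo "would run: hdparm -Np${SECTORS} --yes-i-know-what-i-am-doing $DISK"
```

Double-check the computed sector count before actually applying it; a wrong HPA setting truncates the visible disk.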

At least the Samsung 850 Pro stopped dying suddenly with that setting.
Without it, the SSD occasionally disconnected from the bus and reappeared
after a power cycle. I suspect it ran out of wear-leveling reserve or something.

HTH,

derjohn

On 13.10.20 08:41, Martin Verges wrote:
Hello Daniel,

just throw away your crappy Samsung SSD 860 Pro. It won't work in an
acceptable way.

See
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit?usp=sharing
for a performance indication of individual disks.

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Tue, 13 Oct 2020 at 07:31, Daniel Mezentsev <dan@xxxxxxxxxx> wrote:
Hi Ceph users,

I'm working on a Common Lisp client utilizing the rados library. I've got
some results, but don't know how to estimate whether I'm getting correct
performance. I'm running a test cluster from a laptop: 2 OSDs, each a VM
with 4 GB RAM and 4 vCPUs; monitors and mgr run on the same VM(s). For
storage I have a Samsung SSD 860 Pro, 512 GB. The disk is split into 2
logical volumes (LVM), and those volumes are attached to the VMs. I know
I can't expect too much from that layout; I just want to know if I'm
getting adequate numbers. I'm doing read/write operations on very small
objects, up to 1 KB. On async writes I'm getting ~7.5-8.0 kIOPS.
Synchronous reads are pretty much the same, 7.5-8.0 kIOPS. Async reads are
segfaulting, I don't know why. The disk itself is capable of delivering well
above 50 kIOPS, so the difference is nearly an order of magnitude. Any info is most welcome.
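One way to sanity-check numbers from a custom librados client is to compare against Ceph's built-in benchmarker on the same pool; if rados bench shows similar IOPS, the client code isn't the limiting factor. A sketch (the pool name "testpool" is an assumption):

```shell
# 10-second write benchmark with 1 KiB objects and 16 concurrent ops,
# keeping the objects so they can be read back afterwards.
rados bench -p testpool 10 write -b 1024 -t 16 --no-cleanup

# Random-read benchmark against the objects written above.
rados bench -p testpool 10 rand -t 16

# Remove the benchmark objects when done.
rados -p testpool cleanup
```

These commands need a running cluster with the pool already created, so they are shown here as a recipe rather than something to paste blindly.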
  Daniel Mezentsev, founder
(+1) 604 313 8592.
Soleks Data Group.
Shaping the clouds.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


--
Andreas John
net-lab GmbH  |  Frankfurter Str. 99  |  63067 Offenbach
Geschaeftsfuehrer: Andreas John | AG Offenbach, HRB40832
Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net

Facebook: https://www.facebook.com/netlabdotnet
Twitter: https://twitter.com/netlabdotnet

