Re: Ceph test cluster, how to estimate performance.

Hello Daniel,

just throw away your crappy Samsung SSD 860 Pro; it won't perform
acceptably as a Ceph OSD.

See
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit?usp=sharing
for an indication of what individual disks can deliver.
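
For context: the 860 Pro is a consumer drive without power-loss
protection, so every flush has to hit the NAND instead of a protected
cache, and Ceph commits its journal/WAL writes with sync semantics.
That is exactly where such drives collapse, often to a few hundred
IOPS. As far as I know, the numbers in the spreadsheet come from
single-threaded 4k sync writes, the kind of test you would run with
fio (--bs=4k --sync=1 --iodepth=1 --numjobs=1). If you want to
reproduce the idea without fio, here is a minimal C sketch; the file
path is a placeholder, point it at a scratch file on the device under
test:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder path: use a scratch file on the SSD under test. */
    const char *path = "/mnt/test/syncwrite.bin";
    int fd = open(path, O_WRONLY | O_CREAT | O_DSYNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    memset(buf, 0xab, sizeof(buf));

    /* Write 4 KiB blocks for ~10 s; O_DSYNC forces every write to
     * stable storage before it returns, like Ceph's WAL commits. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long ops = 0;
    double elapsed = 0.0;
    while (elapsed < 10.0) {
        /* Wrap within 100 MiB so the file stays small. */
        if (pwrite(fd, buf, sizeof(buf), (ops % 25600) * 4096L)
                != (ssize_t)sizeof(buf)) {
            perror("pwrite"); return 1;
        }
        ops++;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        elapsed = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }
    printf("%.0f sync-write IOPS\n", ops / elapsed);
    close(fd);
    return 0;
}

Compile with "gcc -O2 syncwrite.c -o syncwrite". A drive with
power-loss protection will report thousands of IOPS here; a consumer
drive, far less.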

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Tue, Oct 13, 2020 at 07:31 Daniel Mezentsev <dan@xxxxxxxxxx> wrote:

> Hi Ceph users,
>
> I'm working on a Common Lisp client that uses the rados library. I
> have some results, but I don't know how to judge whether the
> performance is correct. I'm running a test cluster on a laptop: 2 OSD
> VMs with 4 GB RAM and 4 vCPUs each; the monitors and mgr run in the
> same VMs. For storage I have a Samsung SSD 860 Pro, 512 GB. The disk
> is split into 2 LVM logical volumes, and those volumes are attached
> to the VMs. I know I can't expect too much from that layout; I just
> want to know whether the numbers are adequate. I'm doing read/write
> operations on very small objects, up to 1 KB. For async writes I get
> ~7.5-8.0 KIOPS; synchronous reads are pretty much the same, 7.5-8.0
> KIOPS. Async reads segfault, and I don't know why. The disk itself
> can deliver well above 50 KIOPS, an order of magnitude more. Any info
> is most welcome.
>   Daniel Mezentsev, founder
> (+1) 604 313 8592.
> Soleks Data Group.
> Shaping the clouds.
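
Regarding the async read segfault: a common cause when driving
librados through an FFI is that the destination buffer or the
completion is freed (or moved/collected by the GC) before the I/O
finishes; rados_aio_read() fills the buffer asynchronously, after the
call has already returned. A minimal C sketch of the required
lifetimes, with "testpool" and "obj" as placeholder names:

#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rados_completion_t comp;
    char buf[1024];   /* must stay valid until the op completes */

    if (rados_create(&cluster, NULL) < 0) return 1;
    rados_conf_read_file(cluster, NULL);   /* default ceph.conf */
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "testpool", &ioctx) < 0) return 1;

    rados_aio_create_completion(NULL, NULL, NULL, &comp);
    if (rados_aio_read(ioctx, "obj", comp, buf, sizeof(buf), 0) < 0)
        return 1;

    /* Neither buf nor comp may be released before this returns. */
    rados_aio_wait_for_complete(comp);
    printf("read returned %d\n", rados_aio_get_return_value(comp));
    rados_aio_release(comp);

    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}

Build with "gcc aio_read.c -lrados". From Lisp the same rule applies:
the buffer has to live outside the GC heap (e.g. cffi:foreign-alloc,
not a Lisp array) and may only be freed after
rados_aio_wait_for_complete() or the completion callback has fired.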
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


