Re: rados block on SSD - performance - how to tune and get insight?


 



Ceph has massive overhead, so it seems to max out at ~10,000 (at most 15,000) write IOPS per SSD with a queue depth of 128, and ~1,000 IOPS with a queue depth of 1 (1 ms latency). Or maybe 2,000-2,500 write IOPS (0.4-0.5 ms) with the best possible hardware. Micron only squeezed ~8,750 IOPS out of each of their NVMes in their reference setup... the same NVMes reached 290,000 IOPS when connected directly.
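
For reference, a queue-depth comparison like this can be reproduced against an RBD image with fio's rbd engine. This is only a minimal sketch; the pool, image and client names (rbd, test, admin) are placeholders:

  # ~1 ms per op expected at queue depth 1
  fio --name=qd1 --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test \
      --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
      --time_based --runtime=60

  # same workload at queue depth 128, to see where throughput saturates
  fio --name=qd128 --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test \
      --rw=randwrite --bs=4k --iodepth=128 --numjobs=1 --direct=1 \
      --time_based --runtime=60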

Hi Maged

Thanks for your reply.

6k is a low max write IOPS value, even for a single client. For a cluster
of 3 nodes, we see from 10k to 60k write IOPS depending on hardware.

Can you increase your threads to 64 or 128 via the -t parameter?
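
Assuming the benchmark here is rados bench, where -t sets the number of concurrent operations, the comparison would look roughly like this (the pool name and runtime are placeholders):

  # 4 KiB writes for 60 s, first with 16 concurrent ops, then with 128
  rados bench -p testpool 60 write -b 4096 -t 16
  rados bench -p testpool 60 write -b 4096 -t 128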

I can absolutely get it higher by increasing the parallelism. But I
may have failed to explain my purpose - I'm interested in how close I can
get with RBD to putting local SSD/NVMe in the servers. Putting parallel
scenarios into the tests that I would never see in production does not
really help my understanding. I think a concurrency level of 16 is at the
top of what I would expect our PostgreSQL databases to do in real life.
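
A PostgreSQL-like concurrency level can be approximated with fio against a kernel-mapped RBD device. This is only a sketch, assuming a hypothetical testpool/testimg image mapped at /dev/rbd0; 8 KiB matches PostgreSQL's default page size:

  rbd map testpool/testimg    # assume this maps the image to /dev/rbd0
  # random 8 KiB writes at iodepth 16 (destroys any data on the image)
  fio --name=pg-like --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=8k --iodepth=16 --numjobs=1 \
      --time_based --runtime=60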

--
With best regards,
  Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


