quincy: test cluster on nvme: fast write, slow read

Hi,

I'm doing some lab tests to understand why Ceph isn't working for us,
and here's the first puzzle:

Setup: a completely fresh Quincy cluster, 64-core EPYC 7713, 2 NVMe drives.

> ceph osd crush rule create-replicated osd default osd ssd
> ceph osd pool create  rbd replicated osd --size 2
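
Between pool creation and the tests below I also created and mapped an
image, roughly like this (name and size are just placeholders):

> rbd create rbd/test --size 100G
> rbd map rbd/test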

Read from the rbd device:
> dd if=/dev/rbd0 of=/tmp/testfile   status=progress bs=4M count=1000
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 7.0152 s, 598 MB/s

Write to the rbd device:
> dd of=/dev/rbd0 if=/tmp/testfile   status=progress bs=4M count=1000
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 3.82156 s, 1.1 GB/s

Write performance is about 1/3 of the raw NVMe, which I suppose is
expected (not very good, though), but why is read performance so bad?
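
Happy to rerun the read side with direct I/O and a deeper queue depth if
that would tell us more; I was thinking of something along these lines
(fio parameters are just a first guess, not what was run above):

> fio --name=rbd-read --filename=/dev/rbd0 --readonly --rw=read \
>     --bs=4M --ioengine=libaio --direct=1 --iodepth=16 --size=4G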

top shows only one core being utilized, at about 40% CPU.
It can't be the network either, since this is all on localhost.




thanks
Arvid




-- 
+4916093821054
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


