weird performance issue on ceph

Hi people, we have an interesting issue here and I would like to ask if anyone has seen anything like this before.


First, some details about our system:

The Ceph version is 17.2.1, but we have also seen the same behaviour on 16.2.9.

Our kernel version is 5.13.0-51 and our NVMe disks are Samsung PM983.

In our deployment we have 12 nodes and 72 disks in total; with 2 OSDs per disk that makes 144 OSDs.

The deployment was done with Rook (ceph-rook) using default values, with 6 CPU cores and 4 GB of memory allocated to each OSD.
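For reference, the OSD resource section of the CephCluster spec looks roughly like this (only a sketch; the values are the ones mentioned above, everything else is the Rook default):

  spec:
    resources:
      osd:
        requests:
          cpu: "6"
          memory: "4Gi"
        limits:
          cpu: "6"
          memory: "4Gi"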


The issue we are experiencing: we create, for example, 100 volumes via ceph-csi and attach them to Kubernetes pods via RBD, 100 volumes in total, 2 GB each. We run fio performance tests (read, write, mixed) on them, so the volumes are used heavily. Ceph delivers good performance, no problems at all.

Performance we get, for example: read IOPS: 3,371,027; write IOPS: 727,714; read bandwidth: 79.9 GB/s; write bandwidth: 31.2 GB/s.
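To give an idea of the workload, the fio jobs are run inside each pod against the mounted RBD volume and look roughly like this (the parameters here are only illustrative, not our exact job files):

  fio --name=randrw-test --filename=/mnt/data/fio.test --size=1G \
      --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
      --ioengine=libaio --direct=1 --runtime=300 --time_based --group_reporting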


After the tests are complete, the volumes just sit there doing nothing for a longer period of time, for example 48 hours. After that, we clean up the pods, clean up the volumes and delete them.

Then we recreate the volumes and pods once more with the same spec (100 volumes, 2 GB each) and run the same tests again. We don't even get half the performance that we measured before leaving the pods sitting idle for 2 days.


Performance we get after deleting and recreating the volumes and rerunning the tests: read IOPS: 1,716,239; write IOPS: 370,631; read bandwidth: 37.8 GB/s; write bandwidth: 7.47 GB/s.

We can clearly see that it’s a big performance loss.


If we clean up the Ceph deployment, wipe the disks completely and redeploy, the cluster once again delivers great performance.


We haven't seen this behaviour with Ceph version 14.x.


Has anyone seen such a thing? Thanks in advance!

Zoltan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



