Re: high number of kernel clients per osd slow down


 



On 19/03/2021 19:41, Stefan Kooman wrote:
On 3/19/21 7:20 PM, Andrej Filipcic wrote:

Hi,

I am testing 15.2.10 on a large cluster (RH8). A cephfs pool (size=1) with 122 NVMe OSDs works fine as long as the number of clients is relatively low. Writing from 400 kernel clients (ior benchmark), 8 streams each, causes issues: writes are initially fast at 100 GB/s but drop to <1 GB/s after a few minutes, while the OSDs use 300% CPU each. My guess is that the OSDs are overloaded with requests from too many clients, since it does not happen until there are ~3-4 streams per OSD. The OSD logs do not show anything problematic.

I tried to increase osd_op_num_threads_per_shard_ssd, but it did not help. Restarting the OSDs recovers the situation for a few minutes.
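
(For reference, roughly what I did; the value of 4 is only an illustration, and as far as I know the shard thread settings are only read at OSD startup, hence the restarts:

    ceph config set osd osd_op_num_threads_per_shard_ssd 4
    systemctl restart ceph-osd@<id>    # on each OSD host
)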

Writing to an HDD pool with 1500 HDDs shows no issues at all under the same conditions.

Any hints, settings to improve this?

Not yet, just questions. How many PGs per NVMe do you have? How much memory per OSD (osd_memory_target) is configured? Have you enabled "bluefs_buffered_io" on the OSDs?
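
If it helps, those can be checked with the standard commands, e.g. (osd.0 is just an example daemon id):

    ceph osd df                                               # PGS column shows PGs per OSD
    ceph config get osd.0 osd_memory_target
    ceph daemon osd.0 config show | grep bluefs_buffered_io   # run on that OSD's host (admin socket)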


8k PGs across the 122 NVMe OSDs, so about 67 PGs per OSD.
The memory target is 7 GB; each node has 256 GB of memory for 26 OSDs, and about half of the memory is free. The NVMe OSDs use 3-4 GB during that test.
bluefs_buffered_io is set to true.
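(That is, 8192 PGs / 122 OSDs ≈ 67 PGs per OSD with size=1, and 26 OSDs x 7 GB ≈ 182 GB of the 256 GB per node if every OSD reached its memory target.)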

Cheers,
Andrej


Gr. Stefan


--
_____________________________________________________________
   prof. dr. Andrej Filipcic,   E-mail: Andrej.Filipcic@xxxxxx
   Department of Experimental High Energy Physics - F9
   Jozef Stefan Institute, Jamova 39, P.o.Box 3000
   SI-1001 Ljubljana, Slovenia
   Tel.: +386-1-477-3674    Fax: +386-1-425-7074
-------------------------------------------------------------
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


