Re: CephFS performance degradation in root directory

On 8/9/22 4:07 PM, Robert Sander wrote:
Hi,

we have a cluster with 7 nodes, each with 10 SSD OSDs, providing CephFS to a CloudStack system as primary storage.

When copying a large file into the root directory of the CephFS the bandwidth drops from 500MB/s to 50MB/s after around 30 seconds. We see some MDS activity in the output of "ceph fs status" at the same time.

When copying the same file to a subdirectory of the CephFS the performance stays at 500MB/s for the whole time. MDS activity does not seem to influence the performance here.

There are approximately 270 other files in the root directory. CloudStack stores VM images in qcow2 format there.

Is this a known issue?
Is there something special with the root directory of a CephFS wrt write performance?

AFAIK there is nothing special about the root dir. In my local tests I saw no difference compared to a subdir.

BTW, could you run the test more than once for the root dir? The first time you do this, Ceph may need to allocate the disk space, which can take a little time.
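
As a rough sketch (assuming a CephFS client mounted at /mnt/cephfs; the mount point, file size, and subdirectory name are placeholders, adjust them to your setup), something like this would give repeatable, comparable numbers for both locations while bypassing the client page cache:

    # Write a 4 GiB file to the CephFS root several times, using direct I/O
    # so the page cache does not inflate the reported bandwidth.
    for i in 1 2 3; do
        dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=4096 oflag=direct
        rm /mnt/cephfs/testfile
    done

    # Repeat the same test in a subdirectory for comparison.
    mkdir -p /mnt/cephfs/subdir
    for i in 1 2 3; do
        dd if=/dev/zero of=/mnt/cephfs/subdir/testfile bs=1M count=4096 oflag=direct
        rm /mnt/cephfs/subdir/testfile
    done

    # In another terminal, watch MDS activity while the writes run.
    watch ceph fs status

If the root-dir slowdown only shows up on the first run and later runs stay at 500MB/s, that would point at one-off allocation rather than something inherent to the root directory.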

Thanks.


Regards

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


