CephFS performance degradation in root directory

Hi,

we have a cluster of 7 nodes, each with 10 SSD OSDs, that provides CephFS to a CloudStack system as primary storage.

When copying a large file into the root directory of the CephFS, the bandwidth drops from 500 MB/s to 50 MB/s after around 30 seconds. At the same time we see some MDS activity in the output of "ceph fs status".

When copying the same file to a subdirectory of the CephFS, the performance stays at 500 MB/s the whole time. MDS activity does not seem to influence the performance here.
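
For illustration, the comparison can be reproduced with something along these lines (the mount point /mnt/cephfs and the file size are placeholders for our setup):

    # write ~10 GB into the CephFS root, bypassing the client page cache
    dd if=/dev/zero of=/mnt/cephfs/testfile bs=4M count=2500 oflag=direct status=progress

    # the same write into a subdirectory for comparison
    mkdir -p /mnt/cephfs/subdir
    dd if=/dev/zero of=/mnt/cephfs/subdir/testfile bs=4M count=2500 oflag=direct status=progress

    # in a second terminal, watch MDS activity while the writes run
    watch -n 1 ceph fs status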

There are approximately 270 other files in the root directory; CloudStack stores VM images in qcow2 format there.
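
In case it helps, this is roughly how we inspect the root directory (again, /mnt/cephfs is a placeholder for our mount point, and mds.a stands for whichever MDS is active):

    # CephFS virtual xattrs report the entry counts the MDS tracks per directory
    getfattr -n ceph.dir.entries /mnt/cephfs
    getfattr -n ceph.dir.files /mnt/cephfs

    # list the directory fragments the MDS currently holds for the root
    ceph tell mds.a dirfrag ls /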

Is this a known issue?
Is there something special about the root directory of a CephFS with respect to write performance?

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory disclosures per §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin