Re: Write i/o in CephFS metadata pool

> Hi!
> 
> I've confirmed that the write IO to the metadata pool is coming from the active MDSes.
> 
> I'm experiencing very poor write performance on clients and I would like to see if there's anything I can do to optimise the performance.
> 
> Right now, I'm specifically focussing on speeding up this use case:
> 
> In CephFS mounted dir:
> 
> $ time unzip -q wordpress-seo.12.9.1.zip 
> 
> real	0m47.596s
> user	0m0.218s
> sys	0m0.157s
> 
> On RBD mount:
> 
> $ time unzip -q wordpress-seo.12.9.1.zip 
> 
> real	0m0.176s
> user	0m0.131s
> sys	0m0.045s
> 
> The difference is just too big. I'm having real trouble finding a good reference against which to check my setup for misconfiguration, etc.
> 
> I have network bandwidth, RAM and CPU to spare, but I'm unsure how to put them to work to help my case.

Are there a lot of directories to be created from that zip file? I think
it boils down to the directory operations that need to be performed
synchronously (a couple of sketches follow the links below). See
https://fosdem.org/2020/schedule/event/sds_ceph_async_directory_ops/
https://fosdem.org/2020/schedule/event/sds_ceph_async_directory_ops/attachments/slides/3962/export/events/attachments/sds_ceph_async_directory_ops/slides/3962/async_dirops_cephfs.pdf
https://video.fosdem.org/2020/H.1308/sds_ceph_async_directory_ops.webm
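A quick way to check how metadata-heavy that archive is would be to count
the directory entries in the listing (just a sketch; it assumes directory
entries in the zip listing end with a slash):

$ unzip -l wordpress-seo.12.9.1.zip | grep -c '/$'

The talk above covers the work on asynchronous directory operations
(create/unlink without a synchronous MDS round trip). Once your kernel
client supports it, it can be enabled with the nowsync mount option of the
kernel CephFS client. A minimal sketch, assuming a mount point /mnt/cephfs,
a CephX user "myuser" and a monitor at mon1 (the option's name and
availability depend on your kernel version, so treat this as an assumption):

$ sudo umount /mnt/cephfs
$ sudo mount -t ceph mon1:6789:/ /mnt/cephfs \
      -o name=myuser,secretfile=/etc/ceph/myuser.secret,nowsync
$ time unzip -q wordpress-seo.12.9.1.zip

With synchronous dirops every mkdir/create has to wait for the MDS before
the client can continue, which is why unpacking many small files is so much
slower on CephFS than on RBD, where the directory operations are local to
the filesystem on top of the block device.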

Gr. Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx


