Re: how to speed up reads of hundreds of millions of small files on CephFS?

On 9/1/22 10:58, zxcs wrote:
Hi, experts,

We are using CephFS (15.2.*) with a kernel mount in our production environment. These days, when we do massive reads from the cluster (multiple processes), ceph health always reports slow ops for some OSDs (built on 8 TB HDDs, with SSDs as DB devices).

There might be an imbalance between OSDs (number of primary PGs). Are your (primary) PGs evenly balanced across all OSDs? Have you provisioned enough PGs? I.e., do you use the ceph balancer (ceph balancer status)? Check ceph osd df and see whether VAR / STDDEV look good.
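For example, roughly along these lines (the interpretation is a rule of thumb, not a hard threshold):

    # Is the balancer enabled and done moving data?
    ceph balancer status

    # Per-OSD utilization; VAR near 1.00 and a low STDDEV at the
    # bottom of the output suggest an even data distribution
    ceph osd df

    # Rough count of PGs per acting-primary OSD; a wide spread means
    # some OSDs serve far more reads than others
    ceph pg dump pgs_brief 2>/dev/null | awk 'NR>1 {print $NF}' | sort -n | uniq -c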


Our cluster has more read than write requests.

The health log looks like this:
100 slow ops, oldest one blocked for 114 sec, [osd.* ...] has slow ops (SLOW_OPS)

My question is: are there any best practices for handling hundreds of millions of small files (100 KB-300 KB each, 10,000+ files per directory, and more than 5,000 directories)? Is there any config we can tune or any patch we can apply to speed up reads (more important than writes), or any other file system we could try (we are also not sure CephFS is the best choice for storing so many small files)?

Please shed some light here, experts! We really need your help!

Are you limited by HDD IOPS? You can check with iostat for example.
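For instance (watch the HDDs backing the affected OSDs; device names will differ per host):

    # Extended per-device stats every 5 seconds; sustained %util near
    # 100 and large r_await on the HDDs point to an IOPS bottleneck,
    # since a spinning disk tops out at roughly 100-200 random IOPS
    iostat -x 5

    # If needed, map OSDs to their underlying block devices first
    ceph device ls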

You don't talk about MDS slow ops or metadata slow ops, so scaling with more MDS daemons does not seem to be necessary at this point.
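If you want to rule the MDS out anyway, a quick check could look like this (mds.a is a placeholder daemon name; run the daemon command on the host where the MDS runs):

    # Overall MDS state, request rate and cache usage
    ceph fs status

    # Metadata operations currently in flight on the MDS
    ceph daemon mds.a dump_ops_in_flight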

Gr. Stefan