Re: Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)


 



On Mon, Mar 19, 2018 at 11:45 PM, Nicolas Huillard
<nhuillard@xxxxxxxxxxx> wrote:
> On Monday, 19 March 2018 at 15:30 +0300, Sergey Malinin wrote:
>> The default for mds_log_events_per_segment is 1024; in my setup I
>> ended up with 8192.
>> I calculated that value as IOPS / log segments * 5 seconds (AFAIK
>> the MDS performs journal maintenance once every 5 seconds by
>> default).
>
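To make that arithmetic concrete, here is a sketch with hypothetical
numbers (substitute your own observed metadata IOPS, and check
mds_log_max_segments on your release, since the default has changed
over time):

    # Read the current segment limit via the MDS admin socket:
    ceph daemon mds.<id> config get mds_log_max_segments
    # Hypothetical figures: ~50000 metadata IOPS, 30 segments:
    #   50000 / 30 * 5 s ~= 8333 events  -> round down to 8192
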
> I tried 4096 (up from the initial 1024), then 8192 at the time of
> your answer, then 16384, without much improvement...
>
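For reference, this value can be changed at runtime with injectargs;
a minimal sketch (the value shown is just whatever is being tested):

    # Apply to all MDS daemons without restarting them:
    ceph tell mds.* injectargs '--mds_log_events_per_segment 8192'
    # To persist across restarts, set it in the [mds] section of
    # ceph.conf:
    #   mds log events per segment = 8192
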
> Then I tried reducing the number of MDSs from 4 to 1, which
> definitely works (sorry if my initial mail didn't make it very clear
> that I was using multiple MDSs, even though it mentioned mds.2).
> I now have a low rate of metadata writes (40-50 kB/s), and the
> inter-DC link load reflects the size and direction of the actual data.
>
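For anyone wanting to reproduce this, shrinking the active MDS count
on Luminous looks roughly like the following (the filesystem name
"cephfs" is a placeholder; on this release the excess ranks must be
deactivated by hand, highest rank first):

    # Lower the target number of active MDS ranks:
    ceph fs set cephfs max_mds 1
    # Stop the now-excess ranks one at a time:
    ceph mds deactivate 3
    ceph mds deactivate 2
    ceph mds deactivate 1
    # Watch the extra ranks drain back to standby:
    ceph fs status cephfs
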
> I'll now try reducing mds_log_events_per_segment back to its original
> value (1024), because performance is not optimal and stutters a bit
> too much.
>
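To judge the effect of such changes, the MDS journal counters can be
watched live; a sketch (the MDS id is a placeholder):

    # Continuously updating view of MDS perf counters:
    ceph daemonperf mds.<id>
    # One-shot dump of the journal ("mds_log") counters:
    ceph daemon mds.<id> perf dump mds_log
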
> Thanks for your advice!
>

This looks like a load balancer bug. Improving the load balancer is at
the top of our todo list.

Regards
Yan, Zheng

> --
> Nicolas Huillard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



