Re: Write i/o in CephFS metadata pool

> On 2 Feb 2020, at 12:45, Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
> 
> On Wed, Jan 29, 2020 at 1:25 AM Samy Ascha <samy@xxxxxx> wrote:
>> 
>> Hi!
>> 
>> I've been running CephFS for a while now, and ever since setting it up I've seen unexpectedly large write I/O on the CephFS metadata pool.
>> 
>> The filesystem is otherwise stable and I'm seeing no usage issues.
>> 
>> I'm in a read-intensive environment from the clients' perspective, and throughput for the metadata pool is consistently larger than that of the data pool.
>> 
>> For example:
>> 
>> # ceph osd pool stats
>> pool cephfs_data id 1
>>  client io 7.6 MiB/s rd, 19 KiB/s wr, 404 op/s rd, 1 op/s wr
>> 
>> pool cephfs_metadata id 2
>>  client io 338 KiB/s rd, 43 MiB/s wr, 84 op/s rd, 26 op/s wr
>> 
>> I realise, of course, that this is a momentary display of statistics, but I see this unbalanced r/w activity consistently when monitoring it live.
>> 
>> I would like some insight into what may be causing this large imbalance in r/w, especially since I'm in a read-intensive (web hosting) environment.
> 
> The MDS is still writing its journal and updating the "open file
> table". The MDS needs to record certain information about the state of
> its cache and the state issued to clients, even if the clients aren't
> changing anything. (This is workload-dependent, but will be most
> obvious when clients are opening files _not_ already in cache.)
> 
> -- 
> Patrick Donnelly, Ph.D.
> He / Him / His
> Senior Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
> 

Hi Patrick,

Thanks for this extra information.

I should be able to confirm this by checking the network traffic flowing from the MDSes to the OSDs and comparing it to what's coming in from the CephFS clients.
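
In the meantime, a rough way to attribute the metadata-pool writes to the MDS itself, without going down to the network level, is probably to watch the MDS perf counters. As a sketch, assuming a single active MDS whose daemon is named mds.a and whose admin socket is reachable on its host (exact counter names may vary between releases):

# ceph daemonperf mds.a
# ceph daemon mds.a perf dump objecter
# ceph daemon mds.a perf dump mds_log

The first gives a live per-second view of MDS activity; in the dumps, I'd expect the objecter op_w/op_r counters to reflect the RADOS writes/reads the MDS issues, and mds_log's evadd/segadd the journal events and segments it appends. If the objecter write rate lines up with the client io reported for cephfs_metadata, that would confirm the writes originate from the MDS rather than from the clients directly.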

I'll report back when I have more information on that. I'm a little caught up in other stuff right now, but I wanted to just acknowledge your message.
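
When I get around to it, I'll also try to confirm that the writes land on the journal and open file table objects you mention. A rough sketch, with the caveat that I'm assuming rank 0 and the default object naming:

# rados -p cephfs_metadata ls | grep -E '^200\.|openfiles'

which, if I understand the layout correctly, should list the rank-0 journal objects (200.*) and the open file table objects (something like mds0_openfiles.*), where I'd expect most of this write traffic to end up.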

Samy



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


