Re: 1 clients failing to respond to cache pressure (quincy:17.2.6)

I'm not sure if I understand correctly:

I decided to distribute subvolumes across multiple pools instead of using
multi-active MDS.
With this method I will have multiple MDS daemons and one cephfs client per
pool on each host.

Those two statements contradict each other: either you have multi-active MDS or you don't. Great that you were able to tune your clients, that's really interesting, although I haven't looked too deeply into your results. But do the actual clients reflect the same improvement (if you have already tested that), or was the improvement only in your fio tests? Nevertheless, quite good IOPS!

Quoting Özkan Göksu <ozkangksu@xxxxxxxxx>:

Thank you Frank.

My focus is actually performance tuning.
After your mail, I started to investigate the client side.

I think the kernel tunings work great now.
After the tunings, I haven't seen any warnings again.

Now I will continue with performance tuning.
I decided to distribute subvolumes across multiple pools instead of using
multi-active MDS.
With this method I will have multiple MDS daemons and one cephfs client per
pool on each host.
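
For reference, roughly what I mean (the pool and subvolume names below are
just placeholder examples):

# create a dedicated data pool and attach it to the filesystem
ceph osd pool create cephfs_data_a
ceph fs add_data_pool cephfs cephfs_data_a

# create a subvolume whose data goes to that pool
ceph fs subvolume create cephfs subvol_a --pool_layout cephfs_data_a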

To hide the subvolume uuids, I'm using bind mounts ("mount --bind") on top
of the kernel mounts, and I wonder whether this can cause performance
issues on the cephfs clients?
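
For clarity, this is the kind of setup I mean (the paths and names here
are only examples):

# kernel mount of the cephfs root; mon addresses are resolved from ceph.conf
mount -t ceph :/ /mnt/cephfs -o name=myclient,secretfile=/etc/ceph/myclient.secret

# hide the subvolume uuid path behind a stable location
mount --bind /mnt/cephfs/volumes/group_a/subvol_a/<uuid> /srv/project_a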

Best regards.



On Sat, 27 Jan 2024 at 12:34, Frank Schilder <frans@xxxxxx> wrote:

Hi Özkan,

> ... The client is actually at idle mode and there is no reason to fail
at all. ...

if you re-read my message, you will notice that I wrote that it's not the
client failing, it's a false-positive error flag that is not cleared for
idle clients.

You seem to encounter exactly this situation and a simple

echo 3 > /proc/sys/vm/drop_caches

would probably have cleared the warning. There is nothing wrong with your
client; it's an issue with the client-MDS communication protocol that is
probably still under review. You will encounter these warnings every now
and then until it's fixed.
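
If you want to verify this yourself, something along these lines should
work (this is just a sketch; run the echo on the flagged client):

# on a cluster node: the health detail lists the flagged client sessions
ceph health detail

# on the flagged client: drop the dentry/inode and page caches
echo 3 > /proc/sys/vm/drop_caches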

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



