Re: MDS cache tuning

Oh, very interesting!! I have reduced the number of active MDS daemons to one. Just one more question, out of curiosity: above what number of clients would be considered "many"?
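(For reference, a quick way to watch the number of connected client
sessions is the filesystem status output; a minimal example, where
"cephfs" is just a placeholder filesystem name:

    ceph fs status cephfs

The header line of the output reports the current session count,
e.g. "cephfs - 25 clients".)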



On 27/5/21 at 9:24, Dan van der Ster wrote:
On Thu, May 27, 2021 at 9:21 AM Andres Rojas Guerrero <a.rojas@xxxxxxx> wrote:



On 26/5/21 at 16:51, Dan van der Ster wrote:
I see you have two active MDSs. Is your cluster more stable if you use
only one single active MDS?

Good question!! I read from the Ceph docs:

"You should configure multiple active MDS daemons when your metadata
performance is bottlenecked on the single MDS that runs by default."

"Workloads that typically benefit from a larger number of active MDS
daemons are those with many clients, perhaps working on many separate
directories."

I have roughly 25 concurrent clients, but all working in the same
directory. Is that a lot of clients?

And I assumed that two are always better than one.

25 isn't many clients, but if they are operating in the same directory
it will create a lot of contention between the two MDSs, which might
explain some of the issues you observe.
I recommend that you reduce back to 1 active mds and observe the
system stability and performance.

-- dan
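
(A minimal sketch of that reduction, assuming the filesystem is named
"cephfs"; the real name can be listed with "ceph fs ls":

    ceph fs set cephfs max_mds 1

On Nautilus and later releases the monitors stop the surplus active
rank automatically once max_mds is lowered, so no separate
deactivation step is needed.)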


--
*******************************************************
Andrés Rojas Guerrero
Unidad Sistemas Linux
Area Arquitectura Tecnológica
Secretaría General Adjunta de Informática
Consejo Superior de Investigaciones Científicas (CSIC)
Pinar 19
28006 - Madrid
Tel: +34 915680059 -- Ext. 990059
email: a.rojas@xxxxxxx
ID comunicate.csic.es: @50852720l:matrix.csic.es
*******************************************************

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
