Re: Cephfs metadata and MDS on same node

Hi Jesper,

It could only make sense if:

1. the metadata the client is asking for was not already cached in RAM,

2. the metadata pool was hosted on very low-latency devices such as NVMes,

3. you could make sure that each client's metadata requests were served from PGs whose primary OSD is local to the MDS that client is talking to, which in real life is impossible to achieve, as you cannot pin CephFS trees and their related metadata objects to specific PGs (see the example below).
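For illustration (the pool and object names below are only examples), you can ask the cluster which PG, and therefore which primary OSD, serves a given metadata object:

    ceph osd map cephfs_metadata 10000000000.00000000

The output shows the PG and its up/acting OSD sets, with the primary flagged explicitly. CRUSH derives that placement from a hash of the object name, so there is no supported way to steer a client's directory tree towards OSDs that sit next to the MDS it talks to.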

Best regards,

Frédéric.

--
Best regards,

Frédéric Nass

Direction du Numérique
Sous-Direction Infrastructures et Services
Université de Lorraine.

On 09/03/2021 at 16:03, Jesper Lykkegaard Karlsen wrote:
Dear Ceph’ers

I am about to upgrade the MDS nodes for CephFS in the Ceph cluster (erasure code 8+3) I am administering.

Since they will get plenty of memory and CPU cores, I was wondering if it would be a good idea to move the metadata OSDs (NVMes, currently on the OSD nodes together with the cephfs_data OSDs (HDD)) to the MDS nodes?

Configured as:

4 x MDS, each with its own metadata OSD, and the metadata pool configured with 4 x replication,

so each metadata OSD would hold a complete copy of the metadata.
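Roughly, I picture setting it up along these lines (the CRUSH bucket name 'mds-hosts' and the 'nvme' device class are placeholders for whatever the cluster actually uses, and min_size 2 is just one possible choice):

    ceph osd crush rule create-replicated metadata-on-mds mds-hosts host nvme
    ceph osd pool set cephfs_metadata crush_rule metadata-on-mds
    ceph osd pool set cephfs_metadata size 4
    ceph osd pool set cephfs_metadata min_size 2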

I know the MDS stores a lot of metadata in RAM, but if the metadata OSDs were on the MDS nodes, would that not bring down latency?
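(For scale: as I understand it, how much the MDS caches is governed by the mds_cache_memory_limit option, which one could raise on nodes with plenty of RAM, for example:

    ceph config set mds mds_cache_memory_limit 17179869184

i.e. 16 GiB, where the value is just an illustration.)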

Anyway, I am just asking for your opinion on this: pros and cons, or even better, input from somebody who has actually tried this?

Best regards,
Jesper

--------------------------
Jesper Lykkegaard Karlsen
Scientific Computing
Centre for Structural Biology
Department of Molecular Biology and Genetics
Aarhus University
Gustav Wieds Vej 10
8000 Aarhus C

E-mail: jelka@xxxxxxxxx
Phone:  +45 50906203

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




