Hi Marco,
The MDS cache size depends heavily on the load and the number of
clients that access your CephFS (as always, I'd say).
The mentioned 4 GB of RAM is appropriate for a few clients with no
special performance requirements, so it's basically a minimal sizing
(as the deployment guide also states).
One example from "real life" is an MDS with 8 GB RAM serving mostly
home directories and some working directories for development. The
total number of (connected) clients is around 70, but not all of them
are changing files constantly; I'd say the number of active clients
is around 20.
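If you want to pin that budget explicitly, mds_cache_memory_limit (a
byte value) is the option that controls how much the MDS tries to keep
in cache. A minimal sketch just to get the byte value right; the 0.8
head-room factor is my own assumption, since the MDS process needs
memory beyond the cache itself:

    # Turn "GiB of RAM reserved for the MDS" into the byte value that
    # mds_cache_memory_limit expects. The head-room factor is an
    # assumption: the MDS uses memory beyond its cache, so don't hand
    # it the full amount.
    def cache_limit_bytes(mds_ram_gib, headroom=0.8):
        return int(mds_ram_gib * (1024 ** 3) * headroom)

    print(cache_limit_bytes(8))  # ~6.9e9 bytes for an 8 GiB MDS

Keep in mind the limit is a cache target, not a hard cap on the
daemon's memory, so always leave some room on top.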
Regards,
Eugen
Quoting Marco Mühlenbeck <marco.muehlenbeck@xxxxxxxxxxxxxx>:
Hi all,
I am new here. I am a little bit confused about the discussion
regarding the amount of RAM for the metadata servers.
In the SUSE Deployment Guide for SUSE Enterprise Storage 6 (release
2020-01-27), chapter "2.2 Minimum Cluster Configuration", there is
the sentence:
"... Metadata Servers require incremental 4 GB RAM and four cores."
Your discussion is about 128 GB and 256 GB, which is far away from
the SUSE minimum requirements.
Can you explain that, or give a hint as to why the values are so different?
Marco
On 07.02.2020 at 09:05, Stefan Kooman wrote:
Quoting Wido den Hollander (wido@xxxxxxxx):
On 2/6/20 11:01 PM, Matt Larson wrote:
Hi, we are planning out a Ceph storage cluster and were choosing
between 64GB, 128GB, or even 256GB on metadata servers. We are
considering having 2 metadata servers overall.
Does going to high levels of RAM possibly yield any performance
benefits? Is there a size beyond which there are just diminishing
returns vs cost?
The MDS will try to cache as many inodes as you allow it to.
So neither the number of users nor the total number of bytes matters;
it's the number of inodes, i.e. files and directories.
If clients are using unique datasets (files / directories), then the
number of clients does matter. If that is the case, you might also ask
yourself why you need a clustered filesystem, as it will definitely not
speed things up compared to a local fs (metadata operations, that is).
The more of those you have, the more memory it requires.
To clarify: in (active) use. Just having a lot of data around does not
necessarily require a lot of memory.
A lot of small files? A lot of memory!
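As a very rough back-of-the-envelope: if you assume a few KB of MDS
cache per cached inode (the 3 KB below is a placeholder assumption, the
real per-inode cost depends on version and workload), you can see how
the big numbers come about:

    # MDS cache scales with the number of inodes (files + directories)
    # the clients keep active, not with the number of bytes stored.
    KB_PER_INODE = 3  # placeholder assumption; measure on your own cluster

    def mds_cache_estimate_gib(active_inodes, kb_per_inode=KB_PER_INODE):
        return active_inodes * kb_per_inode / (1024 ** 2)

    # ~10 million actively used files/dirs -> on the order of 30 GiB
    print(round(mds_cache_estimate_gib(10_000_000), 1))

So 128 GB or 256 GB only starts to make sense when you expect tens of
millions of inodes to be hot at the same time.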
The expected use case would be for a cluster where there might be
10-20 concurrent users working on individual datasets of 5TB in size.
I expect there would be lots of reads of the 5TB datasets matched with
the creation of hundreds to thousands of smaller files during
processing of the images.
Hundreds to thousands of files is not a lot. Are these datasets to be
stored permanently, or only temporarily? I guess it is convenient to
just configure one fs for all clients to use, but it might not be the
best fit / best performing solution in your case.
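To put a (made-up) number on "not a lot", using the same
few-KB-per-inode ballpark as above:

    # 20 concurrent users, each creating a few thousand small files:
    # even with a generous 4 KB of cache per inode this is tiny.
    users, files_per_user, kb_per_inode = 20, 5_000, 4
    print(users * files_per_user * kb_per_inode / 1024, "MiB")  # ~390 MiB

That fits comfortably in even the minimal 4 GB MDS sizing.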
Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx