Re: How to size nfs-ganesha

Hi.

Unfortunately, there isn't a good guide for sizing Ganesha. It's pretty lightweight, so the machines it needs are generally smaller than what Ceph itself needs, and you probably won't have much of a problem.

Ganesha scales along two axes, depending on the workload: CPU usage scales with the number of clients, and memory usage scales with the size of the working set of files.

The number of clients is capped by the RPC_Max_Connections parameter (default 1024), and the maximum number of parallel operations by the RPC_Ioq_ThrdMax parameter; both live in the NFS_CORE_PARAM block. You probably won't need to change these unless you have a *lot* of clients. With these settings, any decent server-class multi-core CPU can keep up with demand.
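For reference, a minimal NFS_CORE_PARAM block touching both parameters might look like this in ganesha.conf. The values here are purely illustrative, not recommendations:

    NFS_CORE_PARAM
    {
        # Maximum number of simultaneous client connections
        # (default 1024).
        RPC_Max_Connections = 4096;

        # Maximum number of RPC requests processed in parallel.
        RPC_Ioq_ThrdMax = 1024;
    }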

Memory usage is governed by several parameters controlling two separate caches: the handle cache and the dirent cache.

The handle cache is global to the machine, and there is one handle per file/directory/symlink/etc. When this cache is full, entries start to be re-used and performance degrades. The cache size is controlled by the Entries_HWMark parameter in the MDCACHE block and defaults to 100,000. This default is deliberately low, so that small deployments can run in containers or on small systems. If you have a large data set, you will definitely need to raise it. Memory per handle is fairly small, in the mid tens of kilobytes, so this can be raised a lot. The handle cache is the largest consumer of memory on a Ganesha system.
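As a rough budget (my arithmetic, based on the "mid tens of kilobytes" figure above): at ~50 KB per handle, the default 100,000 entries works out to roughly 5 GB, and raising Entries_HWMark to 1,000,000 would be on the order of 50 GB. An illustrative MDCACHE block:

    MDCACHE
    {
        # Handles cached before re-use begins (default 100000).
        # At roughly 50 KB each, 1 million handles is ~50 GB of RAM.
        Entries_HWMark = 1000000;
    }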

The dirent cache is per-directory, and it makes a very large difference in directory-listing performance. Dirents are stored in chunks of 1,000, and the number of chunks kept per directory is controlled by the Dir_Chunk parameter in the MDCACHE block. It defaults to 128, which is again low. If you have large directories that are listed frequently (or are listed by multiple clients at once), you will probably want to raise it. Memory per dirent is generally small, dominated by the length of the filename, but keep in mind that the dirent cache is per-directory: if your working set spans many directories, it can consume a significant amount of memory.
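Continuing the illustrative MDCACHE block above, Dir_Chunk sits alongside Entries_HWMark; again, the value shown is an example, not a recommendation:

    MDCACHE
    {
        Entries_HWMark = 1000000;

        # Dirent chunks (of 1000 dirents each) kept per directory
        # (default 128). Raise this for large, frequently listed
        # directories.
        Dir_Chunk = 1024;
    }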

Ganesha does no data caching, only metadata caching, so its memory does not need to scale with the amount of data being served.

In general, Ganesha is a lightweight daemon, since it is primarily a translator, and it will use far fewer resources than the equivalent CephFS MDS or RGW serving the same workload.

Daniel

On 3/20/21 5:01 AM, Quang Lê wrote:
Hi guys,

I'm using OpenStack Manila to provide a filesystem service with a
CephFS backend. My design uses nfs-ganesha as the gateway through which
the VMs in OpenStack mount CephFS. I am having problems sizing the
ganesha servers.

Can anyone suggest *what the hardware requirements of a ganesha
server are* or *what parameters need to be considered when sizing a
ganesha server*?

My simple topology: https://i.imgur.com/xrYqxAh.png

Thank you guys.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




