Re: cephfs - mds hardware recommendation for 40 million files and 500 users

On Tue, Jul 26, 2016 at 9:53 PM, Mike Miller <millermike287@xxxxxxxxx> wrote:
> Hi,
>
> we have started to migrate user homes to cephfs, with the mds server
> having 32GB RAM. With multiple rsync threads copying, this seems to be
> undersized; the mds process consumes all 32GB of memory, which fits
> about 4 million caps.
>
> Any hardware recommendation for about 40 million files and about 500 users?

As Greg says, your working set is the important thing rather than the
overall number of files in the system.

If, for example, you are using fuse clients with the default client
cache size (client_cache_size = 16384), then your working set for 500
clients will be around 8 million inodes (500 x 16384 ~= 8.2 million),
assuming the clients are accessing unique files (likely for home
directories).  Compare the memory usage of your existing MDS with the
value of the mds.inodes performance counter to work out how much RAM
is being used per inode.
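
For instance (mds.<id> is a placeholder for your MDS daemon name, and
the exact counter layout can vary a bit between releases), something
like this should give you both numbers:

    # inodes currently cached by the MDS (the mds.inodes counter)
    ceph daemon mds.<id> perf dump | python -m json.tool | grep '"inodes"'

    # resident memory of the ceph-mds process, in kB
    ps -C ceph-mds -o rss=

    # RAM per inode is roughly RSS / inodes; with your numbers, 32GB for
    # ~4 million cached inodes works out to roughly 8kB per inode.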

> Currently, we are on hammer 0.94.5 and linux ubuntu, kernel 3.13.

You should definitely update to Jewel for your cephfs rollout, and if
you're using the kernel client anywhere make sure you've got a 4.x
kernel.
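
To confirm what you are on before planning the upgrade, the usual
checks apply (assuming standard packaging):

    ceph --version   # Ceph release on each daemon host
    uname -r         # kernel version on any kernel-client machines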

John
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
