Re: Balanced MDS, all as active, and recommended client settings.


 



Hello Daniel,

On Wed, Feb 21, 2018 at 10:26 AM, Daniel Carrasco <d.carrasco@xxxxxxxxx> wrote:
> Is it possible to get a better distribution of the MDS load across both nodes?

We are aware of bugs with the balancer which are being worked on. You
can also manually create a partition if the workload can benefit:

https://ceph.com/community/new-luminous-cephfs-subtree-pinning/
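As a quick sketch of subtree pinning (the mount point and directory name below are illustrative), a directory and everything under it can be pinned to a specific MDS rank via the ceph.dir.pin virtual extended attribute:

```shell
# Pin the subtree under /mnt/cephfs/projectA to MDS rank 1
# so that rank handles all metadata for it:
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projectA

# Setting the value to -1 removes the pin and returns the
# subtree to the default balancer:
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projectA
```

This requires a kernel or FUSE client mount and multiple active MDS ranks; see the linked post for details.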

> Is it possible to set all nodes as active without problems?

No. I recommend you read the docs carefully:

http://docs.ceph.com/docs/master/cephfs/multimds/
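In short, you raise the number of active ranks explicitly rather than making every daemon active; keeping at least one standby is strongly advised so a failed rank can be taken over. A minimal sketch (the filesystem name "cephfs" is illustrative):

```shell
# Allow two active MDS ranks on the filesystem named "cephfs";
# any remaining MDS daemons stay as standbys:
ceph fs set cephfs max_mds 2

# Check which daemons are active and which are standby:
ceph fs status cephfs
```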

> My last question is whether someone can recommend a good client configuration,
> e.g. cache size, or anything else to lower the metadata server load.

>>
>> ##
>> [mds]
>>  mds_cache_size = 250000
>>  mds_cache_memory_limit = 792723456

You should specify only one of those: mds_cache_size is the old
inode-count limit, superseded in Luminous by the memory-based
mds_cache_memory_limit. See also:

http://docs.ceph.com/docs/master/cephfs/cache-size-limits/
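A minimal sketch of the corrected [mds] section, keeping only the memory-based limit (the byte value is carried over from the quoted config):

```ini
# ceph.conf -- limit the MDS cache by memory (bytes);
# do not also set mds_cache_size (an inode-count limit).
[mds]
mds_cache_memory_limit = 792723456
```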

-- 
Patrick Donnelly
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


