Re: CephFS design

>> Can you suggest a good CephFS design?

One that uses copious complements of my employer’s components, naturally ;)

>> I've never used it, only
>> rgw and rbd, but I want to give it a try. However, on the mailing list I saw
>> a huge number of issues with cephfs

Something to remember about the list is that people are far more likely to post when they have a problem than when things are running fine, so it’s easy to mistake that for instability.  For every issue posted, there are a bunch of clusters humming right along.

>> so would like to go with some let's say
>> bulletproof best practices.
>> 
>> Like separate the mds from mon and mgr?
>> Need a lot of memory?
>> Should be on ssd or nvme?
>> How many cpu/disk ...


Like Peter wrote, that’s very dependent on the scale and nature of your workload.
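That said, as one concrete starting point, the knobs those questions touch on look roughly like this. This is a hedged sketch, not a sizing recommendation: it assumes a recent release with cephadm/orchestrator and the `fs volume` interface, and the filesystem name, cache size, and daemon count are placeholders you would tune for your workload.

```shell
# "myfs" is a hypothetical filesystem name.
# The fs volume interface (Nautilus and later) creates the data and
# metadata pools and deploys MDS daemons for you.
ceph fs volume create myfs

# MDS performance is largely about RAM: raise the per-daemon metadata
# cache if you have memory to spare (value in bytes; 8 GiB here,
# the default in recent releases is 4 GiB).
ceph config set mds mds_cache_memory_limit 8589934592

# Run a standby MDS alongside the active one so a daemon failure
# doesn't stall clients (placement count of 2 is illustrative).
ceph orch apply mds myfs --placement="2"
```

Whether the MDS daemons should share hosts with mons/mgrs, and whether the metadata pool belongs on SSD or NVMe, again comes down to scale: the MDS itself keeps its working set in RAM, while metadata pool latency matters most under heavy create/delete workloads.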

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



