On 24/01/2024 at 09:45:56+0100, Robert Sander wrote:

Hi,

> On 1/24/24 09:40, Albert Shih wrote:
>
> > Knowing I got two classes of OSD (hdd and ssd), I have a need of ~20-30
> > cephfs (currently, and that number will increase with time).
>
> Why do you need 20 - 30 separate CephFS instances?

99.99% because I'm a newbie with Ceph and don't clearly understand how
authorization works with CephFS ;-)

If I say 20-30, it's because I currently have around 25 "datasets" on my
classic ZFS/NFS server, exported to various servers. But from your question
I understand I can put many exports "inside" one CephFS.

> > and put all my cephfs inside two of them. Or should I create for each
> > cephfs a couple of pools metadata/data ?
>
> Each CephFS instance needs its own pools, at least two (data + metadata)
> per instance. And each CephFS needs at least one MDS running, better with
> an additional cold or even hot standby MDS.

OK. My Ceph cluster has two sets of servers: the first set is for services
(mgr, mon, etc.), has SSDs, and currently doesn't run any OSD (but still
has 2 unused SSDs); the second set of servers has HDDs and 2 SSDs each.

The data pool will be on the second set (with HDDs). Where should I run the
MDS, and on which OSDs?

> > I will also need Ceph S3 storage, same question: should I have a
> > designated pool for S3 storage, or can/should I use the same
> > cephfs_data_replicated/erasure pool ?
>
> No, S3 needs its own pools. It cannot re-use CephFS pools.

OK, thanks.

Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time: mer. 24 janv. 2024 09:55:26 CET
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
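
P.S. For anyone landing on this thread with the same question: the
single-filesystem approach discussed above can be sketched roughly as
below. This is an untested sketch, not the list's recommendation verbatim;
it assumes a cephadm-managed cluster, and all the names (`myfs`,
`mds-host1`, `mds-host2`, `client.projectA`) are placeholders.

```shell
# One CephFS with per-directory exports, instead of 25 filesystems.

# Create one filesystem; this also creates its data/metadata pools
# (cephfs.myfs.data / cephfs.myfs.meta) and asks the orchestrator to
# deploy MDS daemons.
ceph fs volume create myfs

# Pin the MDS daemons to the SSD-backed service nodes. The MDS itself
# holds its working set in RAM; only the metadata *pool* lives on OSDs.
ceph orch apply mds myfs --placement="2 mds-host1 mds-host2"

# Put the metadata pool on the SSD device class via a CRUSH rule.
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set cephfs.myfs.meta crush_rule replicated_ssd

# One client credential per former ZFS dataset, each confined to its own
# subdirectory -- this replaces separate filesystems for authorization.
# (Create the /projectA directory first by mounting the root as admin.)
ceph fs authorize myfs client.projectA /projectA rw

# A server for "projectA" then mounts only its own subtree, e.g.:
# mount -t ceph projectA@.myfs=/projectA /srv/projectA
```

The `ceph fs authorize` capability is what confines each client to its
subtree, so one filesystem can safely serve many formerly separate NFS
exports.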