Re: How many pools for CephFS



On 24/01/2024 at 10:33:45+0100, Robert Sander wrote:

> On 1/24/24 10:08, Albert Shih wrote:
> > 99.99% because I'm a newbie with Ceph and don't clearly understand how
> > authorization works with CephFS ;-)
> I strongly recommend that you engage an experienced Ceph consultant to help
> you design and set up your storage cluster.

I know. I'm working on it (meaning I'm waiting for my administration to do
«what needs to be done»)...

> It looks like you are trying to make design decisions that will heavily
> influence the performance of the system.

I'm well aware....

> > If I say 20-30, it's because I currently have around 25 «datasets» on my
> > classic ZFS/NFS server, exported to various servers.
> The next question is how would the "consumers" access the filesystem: Via
> NFS or mounted directly. Even with the second option you can separate client
> access via CephX keys as David already wrote.

Separate client keys would be more than enough for us.
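For what it's worth, per-client separation is usually done by authorizing each client for its own subtree of the filesystem. A minimal sketch, assuming a filesystem named "cephfs" (the client name and path below are examples, not from the thread):

```shell
# Create a CephX key limited to one subtree of the filesystem "cephfs"
# (client name and path are hypothetical examples):
ceph fs authorize cephfs client.projectA /projectA rw

# A consumer can then mount only that subtree with the resulting key:
mount -t ceph mon1:6789:/projectA /mnt/projectA \
    -o name=projectA,secretfile=/etc/ceph/client.projectA.secret
```

A client authorized this way cannot see or touch paths outside its subtree, which maps naturally onto the ~25 exported «datasets» mentioned above.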

> > OK. My Ceph cluster has two sets of servers: the first set is for services
> > (mgr, mon, etc.) with SSDs and doesn't currently run any OSDs (but still
> > has 2 unused SSDs); the second set has HDDs and 2 SSDs. The data pool will
> > be on the second set (the HDDs). Where should I run the MDS, and on which
> > OSDs?
> Do you intend to use the Ceph cluster only for archival storage?

Mostly yes. 

> How large is your second set of Ceph nodes, and how many HDDs in each? Do you

Huge ;-) 

I have 6 Ceph servers with ... 60 HDDs. (I know, I know, it's not ideal.)

> intend to use the SSDs for the OSDs' RocksDB?

RocksDB? No...
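For reference, what Robert is asking about is placing each OSD's RocksDB metadata (block.db) on an SSD while the data stays on the HDD, which is decided at OSD creation time. A sketch with hypothetical device paths:

```shell
# Sketch: create a BlueStore OSD whose data lives on an HDD and whose
# RocksDB metadata (block.db) lives on an SSD partition.
# /dev/sdb and /dev/sda1 are example paths, not from the thread:
ceph-volume lvm create --data /dev/sdb --block.db /dev/sda1
```

This is a common way to use a couple of SSDs per HDD node, since BlueStore metadata lookups then avoid the spinning disks.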

> Where do you plan to store the metadata pools for CephFS? They should be

That's exactly the question...

My cluster is:

  5 servers with «small» SSDs for services (each has 2 SSDs not currently used)
  6 servers with «huge» HDDs for data (each has 2 SSDs not currently used)

So for my CephFS metadata, I can either put it on the SSDs of the 5 service
servers (but that means the MDS would run on those 5 servers), or should I
use the SSDs on the 6 servers that hold the data OSDs?


Albert SHIH 🦫 🐸
Heure locale/Local time:
Wed 24 Jan 2024 10:48:11 CET
ceph-users mailing list -- ceph-users@xxxxxxx
