Re: Feedback on the configuration used

Have a look at CephFS subvolumes:
https://docs.ceph.com/docs/master/cephfs/fs-volumes/#fs-subvolumes

Internally they are just directories with a quota, a pool placement layout,
and a namespace, plus some mgr magic that makes this easier than doing it
all by hand.
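
For example, a rough sketch (the filesystem name "cephfs" and the group,
subvolume and quota values are just placeholders):

    ceph fs subvolumegroup create cephfs backups
    ceph fs subvolume create cephfs server1 --group_name backups \
        --pool_layout ec_data_server1 --size 1099511627776
    ceph fs subvolume getpath cephfs server1 --group_name backups

The last command prints the directory path to export or mount; quota and
pool layout are applied for you.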

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Wed, Jun 24, 2020 at 4:38 PM Simon Sutter <ssutter@xxxxxxxxxxx> wrote:

> Hello,
>
> After two months of the "Ceph trial and error game", I finally managed to
> get an Octopus cluster up and running.
> The unconventional thing about it is that it's just for hot backups; no
> virtual machines run on it.
> All the nodes are without any caching SSDs, just plain HDDs.
> At the moment there are eight of them with a total of 50 TB. We are
> planning to go up to 25 nodes with bigger disks, so we will end up with
> 300-400 TB.
>
> I decided to go with CephFS because I don't have any experience with
> things like S3, and I need to read the same file system from more than one
> client.
>
> I made one CephFS with a replicated pool.
> On top of that I added erasure-coded pools to save some storage.
> I attached those pools with the setfattr command, like this:
> setfattr -n ceph.dir.layout.pool -v ec_data_server1 /cephfs/nfs/server1
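>
> (For context, attaching such a pool first needed roughly the following;
> "cephfs" stands in for the filesystem name, the EC profile is left out:)
>
>     ceph osd pool create ec_data_server1 64 64 erasure
>     ceph osd pool set ec_data_server1 allow_ec_overwrites true
>     ceph fs add_data_pool cephfs ec_data_server1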
>
> Some of our servers cannot use CephFS (old kernels, special OSes), so I
> have to use NFS.
> This is set up with the included NFS Ganesha.
> The /cephfs/nfs folder is exported, and clients can mount folders below
> it.
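>
> The export is roughly a block like this (export ID, pseudo path and the
> cephx user are just examples):
>
>     EXPORT {
>         Export_Id = 100;
>         Path = "/nfs";
>         Pseudo = "/cephfs/nfs";
>         Access_Type = RW;
>         Protocols = 4;
>         FSAL {
>             Name = CEPH;
>             User_Id = "nfs.backup";
>         }
>     }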
>
> There are two final questions:
>
> - Was it right to "mount" pools this way with setfattr, or should I have
> used multiple CephFS filesystems?
>
> At first I was thinking about using multiple CephFS filesystems, but
> there are warnings everywhere. The deeper I got in, the more it seemed I
> would have been fine with multiple filesystems.
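>
> (As far as I understand, that route would have meant something like the
> following; the pool names are placeholders:)
>
>     ceph fs flag set enable_multiple true --yes-i-really-mean-it
>     ceph fs new backupfs backupfs_metadata backupfs_data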
>
> - Is there an easier way that I don't know about?
>
> I still don't know much about REST, S3, RBD, etc., so there may be a
> better way.
>
> Other remarks are welcome.
>
> Thanks in advance,
> Simon
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



