Re: Feedback of the used configuration

Hello Paul,

Thanks for the answer.
I took a look at the subvolumes, but they seem a bit odd to me.
If I create one with a subvolume group, the folder structure looks like this:
/cephfs/volumes/group-name/subvolume-name/random-uuid/
And I have to issue two commands, first create the group and then create the subvolume, but why so complicated?

Wouldn't it be easier to just create subvolumes anywhere inside the cephfs?
I can see the intended use for groups, but if I want to publish a pool under some other directory, that's not possible (except with setfattr).
Without first creating a subvolume group, the orchestrator creates subvolumes under /cephfs/volumes/_nogroup/subvolume-name/random-uuid/.
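For reference, if I read the docs right, the two-step workflow looks roughly like this (the volume name "cephfs" and the group/subvolume names are just examples):

ceph fs subvolumegroup create cephfs backup
ceph fs subvolume create cephfs server1 --group_name backup
ceph fs subvolume getpath cephfs server1 --group_name backup

The last command then prints a path like /volumes/backup/server1/<random-uuid>.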

And the more important question is: why is there a new folder with a random uuid inside the subvolume?
I'm trying to understand the reasoning the devs had when they designed this, but it's something I will have to explain to the devs in our team, and at the moment I can't.

It is indeed easier to deploy, but it comes with much less flexibility.
Maybe something to write up in the tracker?

Thanks in advance,
Simon

From: Paul Emmerich [mailto:paul.emmerich@xxxxxxxx]
Sent: Wednesday, 24 June 2020 17:35
To: Simon Sutter <ssutter@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: Feedback of the used configuration

Have a look at cephfs subvolumes: https://docs.ceph.com/docs/master/cephfs/fs-volumes/#fs-subvolumes

Internally they are just directories with a quota, pool placement layout, and namespace, plus some mgr magic to make that easier than doing it all by hand.
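Roughly the by-hand equivalent on a plain directory, for comparison (the directory, quota value, and namespace name here are only examples):

setfattr -n ceph.quota.max_bytes -v 107374182400 /cephfs/some/dir
setfattr -n ceph.dir.layout.pool -v ec_data_server1 /cephfs/some/dir
setfattr -n ceph.dir.layout.pool_namespace -v some-namespace /cephfs/some/dir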

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Wed, Jun 24, 2020 at 4:38 PM Simon Sutter <ssutter@xxxxxxxxxxx> wrote:
Hello,

After two months of the "Ceph trial and error game", I finally managed to get an Octopus cluster up and running.
The unconventional thing about it is that it's just for hot backups, no virtual machines on there.
All the nodes are without any caching SSDs, just plain HDDs.
At the moment there are eight of them with a total of 50TB. We are planning to go up to 25 nodes with bigger disks, so we will end up at 300TB-400TB.

I decided to go with cephfs, because I don't have any experience with things like S3, and I need to read the same file system from more than one client.

I made one cephfs with a replicated pool.
On top of that, I added erasure-coded pools to save some storage.
To attach those pools, I used the setfattr command like this:
setfattr -n ceph.dir.layout.pool -v ec_data_server1 /cephfs/nfs/server1
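The full sequence per pool looks roughly like this (the pg counts and the filesystem name "cephfs" are placeholders), since each erasure-coded pool first has to allow overwrites and be added to the filesystem as a data pool:

ceph osd pool create ec_data_server1 64 64 erasure
ceph osd pool set ec_data_server1 allow_ec_overwrites true
ceph fs add_data_pool cephfs ec_data_server1
setfattr -n ceph.dir.layout.pool -v ec_data_server1 /cephfs/nfs/server1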

Some of our servers cannot use cephfs (old kernels, special OSes), so I have to use nfs.
This is set up with the included nfs-ganesha.
Exported is the /cephfs/nfs folder, and clients can mount folders below it.
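The ganesha export block for that looks roughly like the following sketch (Export_ID, the cephx user id, and the Access/Squash settings are just examples of how a CephFS export is typically written):

EXPORT {
    Export_ID = 1;
    Path = "/nfs";
    Pseudo = "/nfs";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "ganesha";
    }
}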

There are two final questions:

- Was it right to go with the way of "mounting" pools with setfattr, or should I have used multiple cephfs?

At first I was thinking about using multiple cephfs, but there are warnings everywhere. The deeper I got in, the more it seemed I would have been fine with multiple cephfs (a rough sketch of what that would involve is below, after the questions).

- Is there a way I don't know of that would be easier?

I still don't know much about REST, S3, RBD etc., so there may be a better way.
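For what it's worth, my understanding is that a second filesystem would be created roughly like this (pool and filesystem names are just examples, and older releases also want a --yes-i-really-mean-it on the flag):

ceph fs flag set enable_multiple true
ceph osd pool create cephfs2_metadata 32
ceph osd pool create cephfs2_data 64
ceph fs new cephfs2 cephfs2_metadata cephfs2_data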

Other remarks are welcome.

Thanks in advance,
Simon
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



