Re: Stupid question about ceph fs volume

Albert,
I've never used EC for the (root) data pool.
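
What people usually do, as far as I understand it, is keep the default (root) data pool replicated and attach the erasure-coded pool as an additional data pool, then point chosen directories at it via a file layout. A rough sketch, with the profile, pool and path names being only examples:

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-device-class=hdd
  ceph osd pool create cephfs.thing.data-ec 64 64 erasure ec42
  ceph osd pool set cephfs.thing.data-ec allow_ec_overwrites true
  ceph fs add_data_pool thing cephfs.thing.data-ec
  # on a mounted client: new files below this directory go to the EC pool
  setfattr -n ceph.dir.layout.pool -v cephfs.thing.data-ec /mnt/thing/ec-data

That way the filesystem root stays on a replicated pool and only the directories you choose land on erasure coding.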

On Thu, 25 Jan 2024 at 12:08, Albert Shih <Albert.Shih@xxxxxxxx> wrote:

> On 25/01/2024 at 08:42:19+0000, Eugen Block wrote:
> > Hi,
> >
> > it's really as easy as it sounds (fresh test cluster on 18.2.1 without
> > any pools yet):
> >
> > ceph:~ # ceph fs volume create cephfs
>
> Yes... I already tried that with the label and it works fine.
>
> But I prefer to use "my" pools, because I have SSDs and HDDs and I also
> want to try an "erasure coding" pool for the data.
>

> I also need to set the pg_num and pgp_num (I know I can do that after
> creation).
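
(For what it's worth, adjusting the PG counts afterwards is just a pool setting; with the autoscaler enabled you may want to switch it off first. The pool name is taken from your example below:)

  ceph osd pool set cephfs.thing.data pg_autoscale_mode off
  ceph osd pool set cephfs.thing.data pg_num 128
  ceph osd pool set cephfs.thing.data pgp_num 128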


> So I managed to do... half of what I want...
>
> In fact
>
>   ceph fs volume create thing
>
> will create two pools
>
>   cephfs.thing.meta
>   cephfs.thing.data
>
> and if those pools already exist it will use them.
>
> But that only works if the data pool is replicated, not with erasure
> coding... (maybe I'm missing some configuration on the pool).
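
If I remember correctly, an erasure-coded pool is refused as the default data pool unless overwrites are enabled on it and the filesystem is created with --force, and the ceph fs volume wrapper doesn't expose that. A sketch of the manual path (pool names are only examples):

  ceph osd pool set cephfs.thing.data-ec allow_ec_overwrites true
  ceph fs new thing cephfs.thing.meta cephfs.thing.data-ec --force

Even then, keeping the default data pool replicated and adding EC only as an extra data pool (as sketched above) is the usual recommendation, since the default pool stores a backtrace entry for every file.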
>
> Well, for now I will continue my tests with replicated data.
>
> > The pools and the daemons are created automatically (you can control the
> > placement of the daemons with the --placement option). Note that the
> > metadata pool needs to be on fast storage, so you might need to change
> > the ruleset for the metadata pool after creation in case you have HDDs
> > in place.
> > Changing pools after the creation can be done via ceph fs commands:
> >
> > ceph:~ # ceph osd pool create cephfs_data2
> > pool 'cephfs_data2' created
> >
> > ceph:~ # ceph fs add_data_pool cephfs cephfs_data2
> >   Pool 'cephfs_data2' (id '4') has pg autoscale mode 'on' but is not
> >   marked as bulk.
> >   Consider setting the flag by running
> >     # ceph osd pool set cephfs_data2 bulk true
> > added data pool 4 to fsmap
> >
> > ceph:~ # ceph fs status
> > cephfs - 0 clients
> > ======
> > RANK  STATE             MDS               ACTIVITY     DNS    INOS   DIRS   CAPS
> >  0    active  cephfs.soc9-ceph.uqcybj  Reqs:    0 /s    10     13     12      0
> >        POOL           TYPE     USED  AVAIL
> > cephfs.cephfs.meta  metadata  64.0k  13.8G
> > cephfs.cephfs.data    data       0   13.8G
> >    cephfs_data2       data       0   13.8G
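
Regarding the note above about the metadata pool needing fast storage: one way to do that after the fact, assuming you have OSDs with the ssd device class (the rule name is just an example), would be:

  ceph osd crush rule create-replicated replicated-ssd default host ssd
  ceph osd pool set cephfs.cephfs.meta crush_rule replicated-ssd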
> >
> >
> > You can't remove the default data pool, though (here it's
> > cephfs.cephfs.data). If you want to control the pool creation you can
> > fall back to the method you mentioned, create pools as you require them
> > and then create a new cephfs, and deploy the mds service.
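
For that fully manual route, a sketch of what it could look like (names, PG counts and placement are only examples, reusing the replicated-ssd rule from above):

  ceph osd pool create mycephfs_metadata 32 32 replicated replicated-ssd
  ceph osd pool create mycephfs_data 128 128
  ceph fs new mycephfs mycephfs_metadata mycephfs_data
  ceph orch apply mds mycephfs --placement="label:mds"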
>
> Yes, but I'm guessing that
>
>   ceph fs volume
>
> is the "future", so it would be super nice to add (at least) an option to
> choose the pair of pools...
>
> >
> > I haven't looked too deep into changing the default pool yet, so there
> > might be a way to switch that as well.
>
> OK, I will also try, but... well... I'm a newbie ;-)
>
> Anyway thanks.
>
> regards
>
> --
> Albert SHIH 🦫 🐸
> France
> Heure locale/Local time:
> Thu 25 Jan 2024 12:00:08 CET
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



