Re: Stupid question about ceph fs volume

It would be a pleasure to complete the documentation but we would need to
test or have someone confirm what I have assumed.

Concerning the warning, I think we should not talk about the discovery
procedure.
While the discovery procedure has already saved some entities, it has also
put others at risk when misused.
________________________________________________________

Regards,

*David CASIER*
________________________________________________________



On Thu, Jan 25, 2024 at 14:45, Eugen Block <eblock@xxxxxx> wrote:

> Oh right, I forgot about that, good point! But if that is (still) true
> then this should definitely be in the docs as a warning for EC pools
> in cephfs!
>
> Quoting "David C." <david.casier@xxxxxxxx>:
>
> > If the root data pool is EC, it is likely not possible to apply the
> > disaster recovery procedure (no layout/parent xattr on the data pool).
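> >
> > (A quick way to see what I mean, assuming a replicated default data pool
> > and a made-up object name; as far as I understand, the head objects in the
> > default data pool carry the backtrace xattrs the recovery tooling relies on:)
> >
> > rados -p cephfs_data ls | head -n 1
> > rados -p cephfs_data listxattr 10000000000.00000000
> > rados -p cephfs_data getxattr 10000000000.00000000 parent | hexdump -C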
> >
> > ________________________________________________________
> >
> > Regards,
> >
> > *David CASIER*
> > ________________________________________________________
> >
> >
> > On Thu, Jan 25, 2024 at 13:03, Eugen Block <eblock@xxxxxx> wrote:
> >
> >> I'm not sure if using EC as the default data pool for cephfs is still
> >> discouraged, as stated in the output when attempting to do so; the docs
> >> don't mention it (at least not in the link I sent in the last mail):
> >>
> >> ceph:~ # ceph fs new cephfs cephfs_metadata cephfs_data
> >> Error EINVAL: pool 'cephfs_data' (id '8') is an erasure-coded pool.
> >> Use of an EC pool for the default data pool is discouraged; see the
> >> online CephFS documentation for more information. Use --force to
> >> override.
> >>
> >> ceph:~ # ceph fs new cephfs cephfs_metadata cephfs_data --force
> >> new fs with metadata pool 6 and data pool 8
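> >>
> >> (If I read the docs correctly, the usual recommendation seems to be a
> >> replicated default data pool, with the EC pool added as an additional data
> >> pool and selected via a directory layout; a rough sketch, pool names and
> >> the mount path are just examples:)
> >>
> >> ceph:~ # ceph osd pool create cephfs_data_rep
> >> ceph:~ # ceph fs new cephfs cephfs_metadata cephfs_data_rep
> >> ceph:~ # ceph osd pool set cephfs_data allow_ec_overwrites true
> >> ceph:~ # ceph fs add_data_pool cephfs cephfs_data
> >> ceph:~ # setfattr -n ceph.dir.layout.pool -v cephfs_data /mnt/cephfs/ecdir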
> >>
> >> CC'ing Zac here to hopefully clear that up.
> >>
> >> Quoting "David C." <david.casier@xxxxxxxx>:
> >>
> >> > Albert,
> >> > I have never used EC for the (root) data pool.
> >> >
> >> > On Thu, Jan 25, 2024 at 12:08, Albert Shih <Albert.Shih@xxxxxxxx> wrote:
> >> >
> >> >> On 25/01/2024 at 08:42:19+0000, Eugen Block wrote:
> >> >> > Hi,
> >> >> >
> >> >> > it's really as easy as it sounds (fresh test cluster on 18.2.1 without
> >> >> > any pools yet):
> >> >> >
> >> >> > ceph:~ # ceph fs volume create cephfs
> >> >>
> >> >> Yes... I already tried that with the label and it works fine.
> >> >>
> >> >> But I prefer to use «my» pools, because I have ssd/hdd and also want to
> >> >> try an «erasure coding» pool for the data.
> >> >>
> >> >
> >> >> I also need to set the pg_num and pgp_num (I know I can do that after the
> >> >> creation).
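> >> >>
> >> >> (Roughly what I have in mind; the profile name, pool names and pg numbers
> >> >> are just examples:)
> >> >>
> >> >>   ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host crush-device-class=hdd
> >> >>   ceph osd pool create cephfs.thing.meta 32 32 replicated
> >> >>   ceph osd pool create cephfs.thing.data 128 128 erasure ec42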
> >> >
> >> >
> >> >> So I managed to do... half of what I want...
> >> >>
> >> >> In fact
> >> >>
> >> >>   ceph fs volume create thing
> >> >>
> >> >> will create two pools
> >> >>
> >> >>   cephfs.thing.meta
> >> >>   cephfs.thing.data
> >> >>
> >> >> and if those pools already exist it will use them.
> >> >>
> >> >> But that only works if the data pool is replicated, not with erasure
> >> >> coding... (maybe I forgot to configure something on the pool).
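> >> >>
> >> >> (Maybe this is the part I am missing? I read that an EC pool needs
> >> >> overwrites enabled before CephFS will accept it; the pool name is just an
> >> >> example:)
> >> >>
> >> >>   ceph osd pool set cephfs.thing.data allow_ec_overwrites true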
> >> >>
> >> >> Well, for now I will continue my tests with replicated data.
> >> >>
> >> >> > The pools and the daemons are created automatically (you can control the
> >> >> > placement of the daemons with the --placement option). Note that the
> >> >> > metadata pool needs to be on fast storage, so you might need to change the
> >> >> > ruleset for the metadata pool after creation in case you have HDDs in place.
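> >> >> >
> >> >> > (For example, something along these lines should pin the metadata pool to
> >> >> > SSD OSDs; the rule name is just an example:)
> >> >> >
> >> >> > ceph:~ # ceph osd crush rule create-replicated replicated_ssd default host ssd
> >> >> > ceph:~ # ceph osd pool set cephfs.cephfs.meta crush_rule replicated_ssd
> >> >> >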
> >> >> > Changing pools after the creation can be done via ceph fs commands:
> >> >> >
> >> >> > ceph:~ # ceph osd pool create cephfs_data2
> >> >> > pool 'cephfs_data2' created
> >> >> >
> >> >> > ceph:~ # ceph fs add_data_pool cephfs cephfs_data2
> >> >> >   Pool 'cephfs_data2' (id '4') has pg autoscale mode 'on' but is not
> >> >> >   marked as bulk.
> >> >> >   Consider setting the flag by running
> >> >> >     # ceph osd pool set cephfs_data2 bulk true
> >> >> > added data pool 4 to fsmap
> >> >> >
> >> >> > ceph:~ # ceph fs status
> >> >> > cephfs - 0 clients
> >> >> > ======
> >> >> > RANK  STATE             MDS               ACTIVITY     DNS    INOS   DIRS   CAPS
> >> >> >  0    active  cephfs.soc9-ceph.uqcybj  Reqs:    0 /s    10     13     12      0
> >> >> >        POOL           TYPE     USED  AVAIL
> >> >> > cephfs.cephfs.meta  metadata  64.0k  13.8G
> >> >> > cephfs.cephfs.data    data       0   13.8G
> >> >> >    cephfs_data2       data       0   13.8G
> >> >> >
> >> >> >
> >> >> > You can't remove the default data pool, though (here it's
> >> >> > cephfs.cephfs.data). If you want to control the pool creation, you can fall
> >> >> > back to the method you mentioned: create the pools as you require them,
> >> >> > then create a new cephfs, and deploy the mds service.
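> >> >> >
> >> >> > (As a rough sketch; the fs name, pool names and placement count are just
> >> >> > examples:)
> >> >> >
> >> >> > ceph:~ # ceph osd pool create mycephfs_metadata
> >> >> > ceph:~ # ceph osd pool create mycephfs_data
> >> >> > ceph:~ # ceph fs new mycephfs mycephfs_metadata mycephfs_data
> >> >> > ceph:~ # ceph orch apply mds mycephfs --placement=3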
> >> >>
> >> >> Yes, but I'm guessing that
> >> >>
> >> >>   ceph fs volume
> >> >>
> >> >> is the «future», so it would be super nice to add (at least) an option to
> >> >> choose the pair of pools...
> >> >>
> >> >> >
> >> >> > I haven't looked too deep into changing the default pool yet, so there
> >> >> > might be a way to switch that as well.
> >> >>
> >> >> OK. I will also try, but... well... newbie ;-)
> >> >>
> >> >> Anyway thanks.
> >> >>
> >> >> regards
> >> >>
> >> >> --
> >> >> Albert SHIH 🦫 🐸
> >> >> France
> >> >> Heure locale/Local time:
> >> >> Thu, Jan 25, 2024 12:00:08 CET
> >> >> _______________________________________________
> >> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >> >>
> >> > _______________________________________________
> >> > ceph-users mailing list -- ceph-users@xxxxxxx
> >> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >>
> >>
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >>
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



