Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

Ok, thanks Venky!

On Thu, Apr 20, 2023 at 06:12 Venky Shankar <vshankar@xxxxxxxxxx> wrote:

> Hi Reto,
>
> On Wed, Apr 19, 2023 at 9:34 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> >
> > On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi <rlgysi@xxxxxxxxx> wrote:
> > >
> > >
> > > Hi,
> > >
> > > On Wed, Apr 19, 2023 at 11:02 Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> > >>
> > >> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi <rlgysi@xxxxxxxxx> wrote:
> > >> >
> > >> > Yes, I used the same ecpool_hdd for CephFS file systems as well. The
> > >> > new pool ecpool_test was created for a test; I also created it with the
> > >> > application profile 'cephfs', but no CephFS file system is attached to
> > >> > it.
> > >>
> > >> This is not and has never been supported.
> > >
> > >
> > > Do you mean 1) using the same erasure-coded pool for both RBD and CephFS,
> > > or 2) multiple CephFS file systems using the same erasure-coded pool via
> > > ceph.dir.layout.pool="ecpool_hdd"?
> >
> > (1), using the same EC pool for both RBD and CephFS.
> >
> > >
> > > 1)
> > >
> > >
> > > 2)
> > > rgysi cephfs filesystem
> > > rgysi - 5 clients
> > > =====
> > > RANK  STATE           MDS             ACTIVITY     DNS    INOS   DIRS   CAPS
> > >  0    active  rgysi.debian.uhgqen  Reqs:    0 /s   409k   408k  40.8k  16.5k
> > >       POOL          TYPE     USED  AVAIL
> > > cephfs.rgysi.meta  metadata  1454M  2114G
> > > cephfs.rgysi.data    data    4898G  17.6T
> > >    ecpool_hdd       data    29.3T  29.6T
> > >
> > > root@zephir:~# getfattr -n ceph.dir.layout /home/rgysi/am/ecpool/
> > > getfattr: Removing leading '/' from absolute path names
> > > # file: home/rgysi/am/ecpool/
> > > ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=ecpool_hdd"
> > >
> > > root@zephir:~#
> > >
> > > backups cephfs filesystem
> > > backups - 2 clients
> > > =======
> > > RANK  STATE            MDS              ACTIVITY     DNS    INOS   DIRS   CAPS
> > >  0    active  backups.debian.runngh  Reqs:    0 /s   253k   253k  21.3k   899
> > >        POOL           TYPE     USED  AVAIL
> > > cephfs.backups.meta  metadata  1364M  2114G
> > > cephfs.backups.data    data    16.7T  16.4T
> > >     ecpool_hdd        data    29.3T  29.6T
> > >
> > > root@zephir:~# getfattr -n ceph.dir.layout /mnt/backups/windows/windows-drives/
> > > getfattr: Removing leading '/' from absolute path names
> > > # file: mnt/backups/windows/windows-drives/
> > > ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=ecpool_hdd"
> > >
> > > root@zephir:~#
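> > >
> > > For reference, layouts like the ones above get assigned roughly like
> > > this (a sketch using the names from this setup; the exact commands may
> > > have differed): add the EC pool as a data pool of the file system, then
> > > point the directory at it via the layout xattr:
> > >
> > >     ceph fs add_data_pool rgysi ecpool_hdd
> > >     setfattr -n ceph.dir.layout.pool -v ecpool_hdd /home/rgysi/am/ecpool/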
> > >
> > >
> > >
> > > So I guess I should use a different EC data pool for RBD and for each of
> > > the CephFS file systems in the future, correct?
> >
> > Definitely a different EC pool for RBD (i.e. don't mix with CephFS).
> > Not sure about the _each_ of the filesystems bit -- Venky or Patrick can
> > comment on whether sharing an EC pool between filesystems is OK.
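> >
> > For the RBD side, a dedicated EC data pool would look roughly like this
> > (a sketch; pool and image names are placeholders):
> >
> >     ceph osd pool create rbd_ec_data erasure
> >     ceph osd pool set rbd_ec_data allow_ec_overwrites true
> >     ceph osd pool application enable rbd_ec_data rbd
> >     rbd create --size 10G --data-pool rbd_ec_data rbd/test-image
> >
> > The image metadata still lives in the replicated pool ("rbd" here); only
> > the data objects land in the EC pool.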
>
> That's true for CephFS too -- a different set of pools for each Ceph
> file system is recommended.
>
> You can use the `--allow-dangerous-metadata-overlay` option when
> creating a Ceph file system to reuse metadata and data pools that are
> already in use; however, it should only be used in emergency
> situations.
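>
> For a new file system that would mean something along these lines (a
> sketch; pool and file system names are placeholders):
>
>     ceph osd pool create newfs_metadata
>     ceph osd pool create newfs_data
>     ceph fs new newfs newfs_metadata newfs_data
>
> and, if an EC data pool is wanted, a dedicated one per file system:
>
>     ceph osd pool create newfs_ec_data erasure
>     ceph osd pool set newfs_ec_data allow_ec_overwrites true
>     ceph fs add_data_pool newfs newfs_ec_data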
>
> >
> > Thanks,
> >
> >                 Ilya
> >
>
>
> --
> Cheers,
> Venky
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



