Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi <rlgysi@xxxxxxxxx> wrote:
>
> Yes, I used the same ecpool_hdd for the cephfs file systems as well. The new pool ecpool_test I created for a test; I also created it with the application profile 'cephfs', but there isn't any cephfs filesystem attached to it.

Sharing the same pool between RBD and CephFS is not, and has never been, supported.
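A quick way to confirm which applications a pool is tagged with is 'ceph osd pool application get' (pool names below are taken from your output):

    # show the application tags on the pools in question; an RBD data pool
    # should carry only the 'rbd' tag, e.g. {"rbd": {}} -- seeing both
    # "cephfs" and "rbd" on the same pool means it is shared between the
    # two, which is the unsupported setup described above
    ceph osd pool application get ecpool_hdd
    ceph osd pool application get ecpool_test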

> root@zephir:~# ceph fs status
> backups - 2 clients
> =======
> RANK  STATE            MDS              ACTIVITY     DNS    INOS   DIRS   CAPS
> 0    active  backups.debian.runngh  Reqs:    0 /s   253k   253k  21.3k   899
>        POOL           TYPE     USED  AVAIL
> cephfs.backups.meta  metadata  1366M  2115G
> cephfs.backups.data    data    16.7T  16.4T
>     ecpool_hdd        data    29.3T  29.6T
> rgysi - 5 clients
> =====
> RANK  STATE           MDS             ACTIVITY     DNS    INOS   DIRS   CAPS
> 0    active  rgysi.debian.uhgqen  Reqs:    0 /s   409k   408k  40.8k  24.5k
>       POOL          TYPE     USED  AVAIL
> cephfs.rgysi.meta  metadata  1453M  2115G
> cephfs.rgysi.data    data    4898G  17.6T
>    ecpool_hdd       data    29.3T  29.6T
> jellyfin - 1 clients
> ========
> RANK  STATE            MDS               ACTIVITY     DNS    INOS   DIRS   CAPS
> 0    active  jellyfin.debian.dcsocv  Reqs:    0 /s  11.2k  10.9k  1935   1922
>        POOL            TYPE     USED  AVAIL
> cephfs.jellyfin.meta  metadata  1076M  2115G
> cephfs.jellyfin.data    data       0   17.6T
>     ecpool_hdd         data    29.3T  29.6T
>     STANDBY MDS
> jellyfin.zephir.iqywsn
> backups.zephir.ygigch
> rgysi.zephir.diylss
> MDS version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
> root@zephir:~#
>
> I think I remember once reading something in the documentation about using the same pool for <x> leading to naming conflicts or something, but later on I couldn't find it anymore, couldn't remember what <x> was, and then I forgot about it.
> So my understanding of the pull request is that I should migrate the cephfs data from ecpool_hdd to a separate erasure-coded pool for cephfs and then remove the 'cephfs' application tag from the ecpool_hdd pool, correct?
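For the migration itself, a rough sketch might look like the following (the new pool name cephfs_ec_data, the EC profile, and the mount path are placeholders; 'backups' is just one of the filesystems from your status output, and the same would apply to the others):

    # dedicated EC pool for cephfs data; EC pools need overwrites enabled
    ceph osd pool create cephfs_ec_data erasure <your-ec-profile>
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true
    # attach it to the filesystem as an additional data pool
    ceph fs add_data_pool backups cephfs_ec_data
    # direct new files under a directory to the new pool via a file layout;
    # existing files keep the layout they were created with, so they have
    # to be copied (not just renamed) to end up in the new pool
    setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/backups/<some-dir>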

There might be more to it as far as "unregistering" the pool from CephFS
goes.  Venky and Patrick (CCed) should be able to help with that.
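
If it helps, my guess at the "unregistering" part, once no CephFS data is
left in ecpool_hdd, would be something along these lines (again just a
sketch, there may be additional steps):

    # detach the pool from each filesystem that lists it as a data pool
    # (rm_data_pool refuses to remove a filesystem's default data pool)
    ceph fs rm_data_pool backups ecpool_hdd
    ceph fs rm_data_pool rgysi ecpool_hdd
    ceph fs rm_data_pool jellyfin ecpool_hdd
    # then drop the 'cephfs' application tag, leaving only 'rbd'
    ceph osd pool application disable ecpool_hdd cephfs --yes-i-really-mean-it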

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



