Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

I've just tried this on 17.2.6 and it worked fine....
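
For reference, what I ran was roughly along these lines; pool and image names are from my own test cluster, not Reto's setup:

rbd create --size 10G --data-pool ecpool rbd/ectest
rbd snap create rbd/ectest@snap1
rbd snap ls rbd/ectest
rbd snap rm rbd/ectest@snap1
rbd rm rbd/ectest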

On 17/04/2023 12:57, Reto Gysi wrote:
Dear Ceph Users,

After upgrading from version 17.2.5 to 17.2.6, I no longer seem to be able
to create snapshots of images that have an erasure-coded data pool.

root@zephir:~# rbd snap create ceph-dev@backup_20230417
Creating snap: 10% complete...failed.
rbd: failed to create snapshot: (95) Operation not supported
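
If more detail would help, I can rerun the command with client-side librbd
debug logging turned up, along these lines (the log path is just an example):

rbd snap create ceph-dev@backup_20230417 --debug-rbd=20 --debug-ms=1 \
    --log-file=/tmp/rbd-snap-create.log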


root@zephir:~# rbd info ceph-dev
rbd image 'ceph-dev':
        size 10 GiB in 2560 objects
        order 22 (4 MiB objects)
        snapshot_count: 11
        id: d2f3d287f13c7b
        data_pool: ecpool_hdd
        block_name_prefix: rbd_data.7.d2f3d287f13c7b
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
        op_features:
        flags:
        create_timestamp: Wed Nov 23 17:01:03 2022
        access_timestamp: Sun Apr 16 17:20:58 2023
        modify_timestamp: Wed Nov 23 17:01:03 2022
root@zephir:~#

Before the upgrade I was able to create snapshots of this image:

SNAPID  NAME                                    SIZE    PROTECTED  TIMESTAMP
  1538  ceph-dev_2023-03-05T02:00:09.030+01:00  10 GiB             Sun Mar  5 02:00:14 2023
  1545  ceph-dev_2023-03-06T02:00:03.832+01:00  10 GiB             Mon Mar  6 02:00:05 2023
  1903  ceph-dev_2023-04-05T03:22:01.315+02:00  10 GiB             Wed Apr  5 03:22:02 2023
  1909  ceph-dev_2023-04-05T03:35:56.748+02:00  10 GiB             Wed Apr  5 03:35:57 2023
  1915  ceph-dev_2023-04-05T03:37:23.778+02:00  10 GiB             Wed Apr  5 03:37:24 2023
  1930  ceph-dev_2023-04-06T02:00:06.159+02:00  10 GiB             Thu Apr  6 02:00:07 2023
  1940  ceph-dev_2023-04-07T02:00:05.913+02:00  10 GiB             Fri Apr  7 02:00:06 2023
  1952  ceph-dev_2023-04-08T02:00:06.534+02:00  10 GiB             Sat Apr  8 02:00:07 2023
  1964  ceph-dev_2023-04-09T02:00:06.430+02:00  10 GiB             Sun Apr  9 02:00:07 2023
  2003  ceph-dev_2023-04-11T02:00:09.750+02:00  10 GiB             Tue Apr 11 02:00:10 2023
  2014  ceph-dev_2023-04-12T02:00:09.528+02:00  10 GiB             Wed Apr 12 02:00:10 2023
root@zephir:~#

I have looked through the 17.2.6 release notes but couldn't find
anything obvious regarding RBD and EC pools.

Does anyone else have this problem?

Do I need to change some config setting, was this feature disabled, or is it a bug?
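
In case a setting is involved, here is roughly what I plan to check on my
side; these are just the obvious candidates (EC pool flags and rbd-related
config), not options I know to be the cause:

ceph osd pool ls detail | grep -E 'rbd|ecpool_hdd'
ceph osd pool get ecpool_hdd allow_ec_overwrites
ceph osd pool application get ecpool_hdd
ceph config dump | grep -i rbd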

Ceph version info:
root@zephir:~# ceph orch upgrade check --ceph_version 17.2.6
{
    "needs_update": {},
    "non_ceph_image_daemons": [
        "promtail.debian",
        "node-exporter.debian",
        "promtail.zephir",
        "grafana.zephir",
        "node-exporter.zephir",
        "prometheus.zephir",
        "loki.zephir",
        "alertmanager.zephir"
    ],
    "target_digest": "
quay.io/ceph/ceph@sha256:1161e35e4e02cf377c93b913ce78773f8413f5a8d7c5eaee4b4773a4f9dd6635",

    "target_id":
"9cea3956c04b2d889b91b58f957577fcb4eacd3852df073e3e2567f159fcdbf8",
    "target_name": "quay.io/ceph/ceph:v17.2.6",
    "target_version": "ceph version 17.2.6
(d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)",
    "up_to_date": [
        "iscsi.rbd.debian.ijztzu",
        "mds.jellyfin.debian.dcsocv",
        "mon.debian",
        "osd.13",
        "osd.6",
        "mds.backups.debian.runngh",
        "mds.rgysi.debian.uhgqen",
        "crash.debian",
        "mgr.debian.sookxi",
        "iscsi.rbd.zephir.viqahd",
        "osd.1",
        "mds.jellyfin.zephir.iqywsn",
        "osd.12",
        "osd.7",
        "osd.2",
        "crash.zephir",
        "rgw.default.zephir.jqmick",
        "mds.backups.zephir.ygigch",
        "osd.0",
        "osd.4",
        "mon.zephir",
        "mgr.zephir.enywvy",
        "mds.rgysi.zephir.diylss",
        "osd.3",
        "osd.10",
        "osd.5",
        "osd.8",
        "osd.11"
    ]
}
root@zephir:~#
root@zephir:~# rbd --version
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
root@zephir:~#

Cheers

Reto Gysi
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


