Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

On Tue, Apr 18, 2023 at 11:34 PM Reto Gysi <rlgysi@xxxxxxxxx> wrote:
>
> Ah, yes, indeed I had disabled log-to-stderr in the cluster-wide config.
> root@zephir:~# rbd -p rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 --log-to-stderr=true >/home/rgysi/log.txt 2>&1

Hi Reto,

So "rbd snap create" is failing to allocate a snap ID:

2023-04-18T23:25:42.779+0200 7f4a8963a700  5 librbd::SnapshotCreateRequest: 0x7f4a68013ec0 send_allocate_snap_id
2023-04-18T23:25:42.779+0200 7f4a8963a700  1 -- 192.168.1.1:0/1547580829 --> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] -- pool_op(create unmanaged snap pool 37 tid 22 name  v0) v4 -- 0x7f4a68017430 con 0x55637d589a60
2023-04-18T23:25:42.779+0200 7f4a7bfff700  1 -- 192.168.1.1:0/1547580829 <== mon.1 v2:192.168.1.10:3300/0 6 ==== pool_op_reply(tid 22 (95) Operation not supported v72776) v1 ==== 43+0+0 (secure 0 0 0) 0x7f4a80087080 con 0x55637d589a60
2023-04-18T23:25:42.779+0200 7f4a89e3b700  5 librbd::SnapshotCreateRequest: 0x7f4a68013ec0 handle_allocate_snap_id: r=-95, snap_id=18446744073709551614
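
For reference, the (95) in the pool_op_reply is errno 95 (EOPNOTSUPP),
and snap_id=18446744073709551614 is (uint64_t)(-2), i.e. CEPH_NOSNAP,
the sentinel meaning no snap ID was allocated.  A quick, purely
illustrative way to decode the errno:

    $ python3 -c 'import errno, os; print(errno.errorcode[95], os.strerror(95))'
    EOPNOTSUPP Operation not supported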

It's most likely coming from https://github.com/ceph/ceph/pull/47753,
which was backported to 17.2.6; that would explain why it showed up
after the upgrade.  The fact that both the old and the new EC pools
carry a cephfs application tag instead of just rbd is suspicious:

pool 37 'ecpool_hdd' erasure profile 3-2-jerasure size 5 min_size 4 crush_rule 5 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 72385 lfor 0/0/65311 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 12288 compression_algorithm lz4 compression_mode aggressive compression_required_ratio 0.875 application cephfs,rbd
pool 87 'ecpool_test' erasure profile 3-2-jerasure size 5 min_size 4 crush_rule 9 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 72720 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 12288 compression_algorithm lz4 compression_mode aggressive compression_required_ratio 0.825 application cephfs,rbd
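
If the cephfs tag was enabled on these pools by mistake, it can be
inspected and cleared with the standard pool application commands.
Just a sketch using your pool name, so double-check before running:

    $ ceph osd pool application get ecpool_hdd
    $ ceph osd pool application disable ecpool_hdd cephfs --yes-i-really-mean-it

If the new check is indeed rejecting unmanaged snap creation because
of the cephfs tag, clearing a stray tag should let "rbd snap create"
allocate a snap ID again.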

Do you recall attaching either of these pools to a filesystem?

Thanks,

                Ilya