Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

Hi Eugen

Yes, I used the default setting of rbd_default_pool='rbd', and I don't
have anything set for rbd_default_data_pool.
root@zephir:~# ceph config show-with-defaults mon.zephir | grep -E "default(_data)*_pool"
osd_default_data_pool_replay_window    45      default
rbd_default_data_pool                          default
rbd_default_pool                       rbd     default
root@zephir:~#
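For completeness, the same defaults could also be queried from the centralized
config at the generic client scope; this is just a sketch of the commands, not
actual output from my cluster:

ceph config get client rbd_default_pool
ceph config get client rbd_default_data_pool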

If I don't specify a data pool during 'rbd create <image>', the image is
created in pool 'rbd' without a separate data pool. Pool 'rbd' is a
replica-3 pool.
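For reference, an image with a separate erasure-coded data pool would normally
be created roughly like this (the size and image name below are placeholders,
not my exact commands; ecpool_hdd is the EC pool mentioned earlier in the
thread):

rbd create --size 10G --data-pool ecpool_hdd rbd/some-image
rbd info rbd/some-image    # a "data_pool: ecpool_hdd" line confirms the separate data pool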

Adding '-p rbd' to the snap create command doesn't change or fix the error:
root@zephir:~# rbd -p rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1  Processor -- start
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1 --  start start
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1 --2-  >> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] conn(0x56151b58f2b0 0x56151b58f680 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1 --2-  >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x56151b58fc50 0x56151b598320 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1 --2-  >> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] conn(0x56151b598860 0x56151b59ac40 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1 --  --> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] -- mon_getmap magic: 0 v1 -- 0x56151b47cb70 con 0x56151b58fc50
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1 --  --> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] -- mon_getmap magic: 0 v1 -- 0x56151b4761b0 con 0x56151b58f2b0
2023-04-18T19:25:23.002+0200 7f1a036ff4c0  1 --  --> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] -- mon_getmap magic: 0 v1 -- 0x56151b412680 con 0x56151b598860
2023-04-18T19:25:23.002+0200 7f19f8d43700  1 --2-  >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x56151b58fc50 0x56151b598320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2023-04-18T19:25:23.002+0200 7f1a01544700  1 --2-  >> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] conn(0x56151b58f2b0 0x56151b58f680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2023-04-18T19:25:23.002+0200 7f19f8d43700  1 --2-  >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x56151b58fc50 0x56151b598320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.1.1:3300/0 says I am v2:192.168.1.1:35346/0 (socket says 192.168.1.1:35346)
2023-04-18T19:25:23.002+0200 7f19f8d43700  1 -- 192.168.1.1:0/2631157109 learned_addr learned my addr 192.168.1.1:0/2631157109 (peer_addr_for_me v2:192.168.1.1:0/0)
2023-04-18T19:25:23.002+0200 7f1a01544700  1 -- 192.168.1.1:0/2631157109 >> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] conn(0x56151b598860 msgr2=0x56151b59ac40 unknown :-1 s=STATE_CONNECTING_RE l=0).mark_down
2023-04-18T19:25:23.002+0200 7f1a01544700  1 --2- 192.168.1.1:0/2631157109 >> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] conn(0x56151b598860 0x56151b59ac40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2023-04-18T19:25:23.002+0200 7f1a01544700  1 -- 192.168.1.1:0/2631157109 >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x56151b58fc50 msgr2=0x56151b598320 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down
2023-04-18T19:25:23.002+0200 7f1a01544700  1 --2- 192.168.1.1:0/2631157109 >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x56151b58fc50 0x56151b598320 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2023-04-18T19:25:23.002+0200 7f1a01544700  1 -- 192.168.1.1:0/2631157109 --> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x56151b41d7f0 con 0x56151b58f2b0
2023-04-18T19:25:23.002+0200 7f19f8d43700  1 --2- 192.168.1.1:0/2631157109 >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x56151b58fc50 0x56151b598320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2023-04-18T19:25:23.002+0200 7f1a01544700  1 --2- 192.168.1.1:0/2631157109 >> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] conn(0x56151b58f2b0 0x56151b58f680 secure :-1 s=READY pgs=214 cs=0 l=1 rev1=1 crypto rx=0x7f19f400a5d0 tx=0x7f19f4005d40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=a9059b943a3e6f58 server_cookie=0 in_seq=0 out_seq=0
2023-04-18T19:25:23.002+0200 7f1a00d43700  1 -- 192.168.1.1:0/2631157109 <== mon.1 v2:192.168.1.10:3300/0 1 ==== mon_map magic: 0 v1 ==== 467+0+0 (secure 0 0 0) 0x7f19f400f590 con 0x56151b58f2b0
2023-04-18T19:25:23.002+0200 7f1a00d43700  1 -- 192.168.1.1:0/2631157109 <== mon.1 v2:192.168.1.10:3300/0 2 ==== config(40 keys) v1 ==== 1486+0+0 (secure 0 0 0) 0x7f19f400fd30 con 0x56151b58f2b0
2023-04-18T19:25:23.002+0200 7f1a00d43700  1 -- 192.168.1.1:0/2631157109 <== mon.1 v2:192.168.1.10:3300/0 3 ==== mon_map magic: 0 v1 ==== 467+0+0 (secure 0 0 0) 0x7f19f400e880 con 0x56151b58f2b0
Creating snap: 10% complete...failed.
rbd: failed to create snapshot: (95) Operation not supported
root@zephir:~#
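In case it's relevant, I can also double-check that the EC pool allows
overwrites, since RBD data pools require that. Just a sketch, assuming the
data pool in question is ecpool_hdd:

ceph osd pool get ecpool_hdd allow_ec_overwrites
ceph osd pool set ecpool_hdd allow_ec_overwrites true    # only needed if it reports false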

BTW: I'm running Debian 11 with kernel 6.1.12, in case that matters.
root@zephir:~# uname -a
Linux zephir 6.1.12 #5 SMP PREEMPT_DYNAMIC Mon Mar 27 16:36:27 CEST 2023 x86_64 GNU/Linux
root@zephir:~# cat /etc/debian_version
11.6
root@zephir:~#


On Tue, Apr 18, 2023 at 19:01, Eugen Block <eblock@xxxxxx> wrote:

> You don't seem to specify a pool name in the snap create command. Does
> your rbd_default_pool match the desired pool? And does
> rbd_default_data_pool match what you expect (if those values are even
> set)? I've never used custom values for those configs, but if you don't
> specify a pool name, Ceph assumes the default pool name "rbd". At
> least that's my understanding.
>
> Quoting Reto Gysi <rlgysi@xxxxxxxxx>:
>
> > Hi Ilya
> >
> > Sure.
> >
> > root@zephir:~# rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 >/home/rgysi/log.txt 2>&1
> > root@zephir:~#
> >
> > On Tue, Apr 18, 2023 at 16:19, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> >
> >> On Tue, Apr 18, 2023 at 3:21 PM Reto Gysi <rlgysi@xxxxxxxxx> wrote:
> >> >
> >> > Hi,
> >> >
> >> > Yes both snap create commands were executed as user admin:
> >> > client.admin
> >> >        caps: [mds] allow *
> >> >        caps: [mgr] allow *
> >> >        caps: [mon] allow *
> >> >        caps: [osd] allow *
> >> >
> >> > deep scrubbing + repair of ecpool_hdd is still ongoing, but so far
> >> > the problem persists
> >>
> >> Hi Reto,
> >>
> >> Deep scrubbing is unlikely to help with an "Operation not supported"
> >> error.
> >>
> >> I really doubt that the output that you attached in one of the previous
> >> emails is all that is logged.  Even in the successful case, there is not
> >> a single RBD-related debug log.  I would suggest repeating the test
> >> with an explicit redirection and attaching the file itself.
> >>
> >> Thanks,
> >>
> >>                 Ilya
> >>
>
>
>
>