Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

Hi Ilya,

Thanks for the reply. Here is the output:

root@zephir:~# rbd status ceph-dev
Watchers:
       watcher=192.168.1.1:0/338620854 client.19264246 cookie=18446462598732840969

root@zephir:~# rbd snap create ceph-dev@backup --debug-ms 1 --debug-rbd 20
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1  Processor -- start
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1 --  start start
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1 --2-  >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x5558b4e3c260 0x5558b4e3c630 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1 --2-  >> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] conn(0x5558b4e3cc00 0x5558b4e452d0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1 --2-  >> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] conn(0x5558b4e45810 0x5558b4e47bf0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1 --  --> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] -- mon_getmap magic: 0 v1 -- 0x5558b4d29b70 con 0x5558b4e3c260
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1 --  --> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] -- mon_getmap magic: 0 v1 -- 0x5558b4d231b0 con 0x5558b4e3cc00
2023-04-17T18:23:16.211+0200 7f5e05dca4c0  1 --  --> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] -- mon_getmap magic: 0 v1 -- 0x5558b4cbf680 con 0x5558b4e45810
2023-04-17T18:23:16.211+0200 7f5e04103700  1 --2-  >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x5558b4e3c260 0x5558b4e3c630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2023-04-17T18:23:16.211+0200 7f5e04103700  1 --2-  >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x5558b4e3c260 0x5558b4e3c630 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.1.1:3300/0 says I am v2:192.168.1.1:42714/0 (socket says 192.168.1.1:42714)
2023-04-17T18:23:16.211+0200 7f5e04103700  1 -- 192.168.1.1:0/1614127865 learned_addr learned my addr 192.168.1.1:0/1614127865 (peer_addr_for_me v2:192.168.1.1:0/0)
2023-04-17T18:23:16.211+0200 7f5e03902700  1 --2- 192.168.1.1:0/1614127865 >> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] conn(0x5558b4e3cc00 0x5558b4e452d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2023-04-17T18:23:16.211+0200 7f5e04103700  1 -- 192.168.1.1:0/1614127865 >> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] conn(0x5558b4e45810 msgr2=0x5558b4e47bf0 unknown :-1 s=STATE_CONNECTING_RE l=0).mark_down
2023-04-17T18:23:16.211+0200 7f5e04103700  1 --2- 192.168.1.1:0/1614127865 >> [v2:192.168.43.208:3300/0,v1:192.168.43.208:6789/0] conn(0x5558b4e45810 0x5558b4e47bf0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2023-04-17T18:23:16.211+0200 7f5e04103700  1 -- 192.168.1.1:0/1614127865 >> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] conn(0x5558b4e3cc00 msgr2=0x5558b4e452d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down
2023-04-17T18:23:16.211+0200 7f5e04103700  1 --2- 192.168.1.1:0/1614127865 >> [v2:192.168.1.10:3300/0,v1:192.168.1.10:6789/0] conn(0x5558b4e3cc00 0x5558b4e452d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2023-04-17T18:23:16.211+0200 7f5e04103700  1 -- 192.168.1.1:0/1614127865 --> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x5558b4cd6d60 con 0x5558b4e3c260
2023-04-17T18:23:16.211+0200 7f5e04103700  1 --2- 192.168.1.1:0/1614127865 >> [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] conn(0x5558b4e3c260 0x5558b4e3c630 secure :-1 s=READY pgs=355 cs=0 l=1 rev1=1 crypto rx=0x7f5df400a700 tx=0x7f5df4005b10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2d9464291c26a0e7 server_cookie=0 in_seq=0 out_seq=0
2023-04-17T18:23:16.211+0200 7f5e03101700  1 -- 192.168.1.1:0/1614127865 <== mon.0 v2:192.168.1.1:3300/0 1 ==== mon_map magic: 0 v1 ==== 467+0+0 (secure 0 0 0) 0x7f5df40089b0 con 0x5558b4e3c260
2023-04-17T18:23:16.211+0200 7f5e03101700  1 -- 192.168.1.1:0/1614127865 <== mon.0 v2:192.168.1.1:3300/0 2 ==== config(39 keys) v1 ==== 1461+0+0 (secure 0 0 0) 0x7f5df4008b10 con 0x5558b4e3c260
2023-04-17T18:23:16.211+0200 7f5e03101700  1 -- 192.168.1.1:0/1614127865 <== mon.0 v2:192.168.1.1:3300/0 3 ==== mon_map magic: 0 v1 ==== 467+0+0 (secure 0 0 0) 0x7f5df4011e60 con 0x5558b4e3c260
Creating snap: 10% complete...failed.
rbd: failed to create snapshot: (95) Operation not supported
root@zephir:~#
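
In case it is useful for narrowing this down, here is a minimal python-rbd
sketch that drives the same snapshot creation outside of the CLI; the 'rbd'
pool name, the default ceph.conf path, and the throwaway snapshot name
'backup-test' are assumptions on my side:

#!/usr/bin/env python3
# Minimal reproduction sketch (assumes the default ceph.conf/keyring, that
# the image lives in the 'rbd' pool, and that 'backup-test' does not exist).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')          # pool holding the image metadata
    try:
        image = rbd.Image(ioctx, 'ceph-dev')
        try:
            # This should exercise the same librbd snapshot path as the CLI.
            image.create_snap('backup-test')
            print('snapshot created')
        except rbd.Error as exc:
            # Expect the same EOPNOTSUPP here if librbd itself rejects the snap.
            print('create_snap failed:', exc)
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()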

root@zephir:~# rbd config image list ceph-dev
There are 78 values:
Name                                         Value        Source
rbd_atime_update_interval                    60           config
rbd_balance_parent_reads                     false        config
rbd_balance_snap_reads                       false        config
rbd_blkin_trace_all                          false        config
rbd_blocklist_expire_seconds                 0            config
rbd_blocklist_on_break_lock                  true         config
rbd_cache                                    true         config
rbd_cache_block_writes_upfront               false        config
rbd_cache_max_dirty                          25165824     config
rbd_cache_max_dirty_age                      1.000000     config
rbd_cache_max_dirty_object                   0            config
rbd_cache_policy                             writearound  config
rbd_cache_size                               33554432     config
rbd_cache_target_dirty                       16777216     config
rbd_cache_writethrough_until_flush           false        config
rbd_clone_copy_on_read                       false        config
rbd_compression_hint                         none         config
rbd_concurrent_management_ops                10           config
rbd_default_map_options                                   config
rbd_default_snapshot_quiesce_mode            required     config
rbd_disable_zero_copy_writes                 true         config
rbd_discard_granularity_bytes                65536        config
rbd_enable_alloc_hint                        true         config
rbd_invalidate_object_map_on_timeout         true         config
rbd_io_scheduler                             simple       config
rbd_io_scheduler_simple_max_delay            0            config
rbd_journal_commit_age                       5.000000     config
rbd_journal_max_concurrent_object_sets       0            config
rbd_journal_max_payload_bytes                16384        config
rbd_journal_object_flush_age                 0.000000     config
rbd_journal_object_flush_bytes               1048576      config
rbd_journal_object_flush_interval            0            config
rbd_journal_object_max_in_flight_appends     0            config
rbd_journal_object_writethrough_until_flush  true         config
rbd_localize_parent_reads                    false        config
rbd_localize_snap_reads                      false        config
rbd_mirroring_delete_delay                   0            config
rbd_mirroring_max_mirroring_snapshots        5            config
rbd_mirroring_replay_delay                   0            config
rbd_mirroring_resync_after_disconnect        false        config
rbd_move_parent_to_trash_on_remove           false        config
rbd_move_to_trash_on_remove                  true         config
rbd_move_to_trash_on_remove_expire_seconds   0            config
rbd_mtime_update_interval                    60           config
rbd_non_blocking_aio                         true         config
rbd_parent_cache_enabled                     false        config
rbd_persistent_cache_mode                    disabled     config
rbd_persistent_cache_path                    /tmp         config
rbd_persistent_cache_size                    1073741824   config
rbd_plugins                                               config
rbd_qos_bps_burst                            0            config
rbd_qos_bps_burst_seconds                    1            config
rbd_qos_bps_limit                            0            config
rbd_qos_exclude_ops                          0            config
rbd_qos_iops_burst                           0            config
rbd_qos_iops_burst_seconds                   1            config
rbd_qos_iops_limit                           0            config
rbd_qos_read_bps_burst                       0            config
rbd_qos_read_bps_burst_seconds               1            config
rbd_qos_read_bps_limit                       0            config
rbd_qos_read_iops_burst                      0            config
rbd_qos_read_iops_burst_seconds              1            config
rbd_qos_read_iops_limit                      0            config
rbd_qos_schedule_tick_min                    50           config
rbd_qos_write_bps_burst                      0            config
rbd_qos_write_bps_burst_seconds              1            config
rbd_qos_write_bps_limit                      0            config
rbd_qos_write_iops_burst                     0            config
rbd_qos_write_iops_burst_seconds             1            config
rbd_qos_write_iops_limit                     0            config
rbd_quiesce_notification_attempts            10           config
rbd_read_from_replica_policy                 default      config
rbd_readahead_disable_after_bytes            52428800     config
rbd_readahead_max_bytes                      524288       config
rbd_readahead_trigger_requests               10           config
rbd_request_timed_out_seconds                30           config
rbd_skip_partial_discard                     true         config
rbd_sparse_read_threshold_bytes              65536        config
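
For completeness, every option above is reported with Source=config, so there
do not appear to be any pool- or image-level overrides in play. Below is a
small python-rbd sketch (again assuming the default ceph.conf and the 'rbd'
pool) that lists the image's metadata, where any per-image config overrides
would show up as conf_* keys:

#!/usr/bin/env python3
# Sketch to confirm there are no image-level config overrides on 'ceph-dev'.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')          # pool name assumed for illustration
    try:
        image = rbd.Image(ioctx, 'ceph-dev', read_only=True)
        try:
            # Per-image config overrides are stored as image metadata with a
            # 'conf_' prefix; no output means no image-level overrides.
            for key, value in image.metadata_list():
                if isinstance(key, bytes):
                    key = key.decode('utf-8')
                if key.startswith('conf_'):
                    print(key, '=', value)
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()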

Cheers

Reto Gysi

On Mon, Apr 17, 2023 at 5:31 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:

> On Mon, Apr 17, 2023 at 2:01 PM Reto Gysi <rlgysi@xxxxxxxxx> wrote:
> >
> > Dear Ceph Users,
> >
> > After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able
> > to create snapshots of images that have an erasure-coded data pool.
> >
> > root@zephir:~# rbd snap create ceph-dev@backup_20230417
> > Creating snap: 10% complete...failed.
> > rbd: failed to create snapshot: (95) Operation not supported
> >
> >
> > root@zephir:~# rbd info ceph-dev
> > rbd image 'ceph-dev':
> >        size 10 GiB in 2560 objects
> >        order 22 (4 MiB objects)
> >        snapshot_count: 11
> >        id: d2f3d287f13c7b
> >        data_pool: ecpool_hdd
> >        block_name_prefix: rbd_data.7.d2f3d287f13c7b
> >        format: 2
> >        features: layering, exclusive-lock, object-map, fast-diff,
> > deep-flatten, data-pool
> >        op_features:
> >        flags:
> >        create_timestamp: Wed Nov 23 17:01:03 2022
> >        access_timestamp: Sun Apr 16 17:20:58 2023
> >        modify_timestamp: Wed Nov 23 17:01:03 2022
> > root@zephir:~#
> >
> > Before the upgrade I was able to create snapshots of this image:
> >
> > SNAPID  NAME                                    SIZE    PROTECTED  TIMESTAMP
> >   1538  ceph-dev_2023-03-05T02:00:09.030+01:00  10 GiB             Sun Mar  5 02:00:14 2023
> >   1545  ceph-dev_2023-03-06T02:00:03.832+01:00  10 GiB             Mon Mar  6 02:00:05 2023
> >   1903  ceph-dev_2023-04-05T03:22:01.315+02:00  10 GiB             Wed Apr  5 03:22:02 2023
> >   1909  ceph-dev_2023-04-05T03:35:56.748+02:00  10 GiB             Wed Apr  5 03:35:57 2023
> >   1915  ceph-dev_2023-04-05T03:37:23.778+02:00  10 GiB             Wed Apr  5 03:37:24 2023
> >   1930  ceph-dev_2023-04-06T02:00:06.159+02:00  10 GiB             Thu Apr  6 02:00:07 2023
> >   1940  ceph-dev_2023-04-07T02:00:05.913+02:00  10 GiB             Fri Apr  7 02:00:06 2023
> >   1952  ceph-dev_2023-04-08T02:00:06.534+02:00  10 GiB             Sat Apr  8 02:00:07 2023
> >   1964  ceph-dev_2023-04-09T02:00:06.430+02:00  10 GiB             Sun Apr  9 02:00:07 2023
> >   2003  ceph-dev_2023-04-11T02:00:09.750+02:00  10 GiB             Tue Apr 11 02:00:10 2023
> >   2014  ceph-dev_2023-04-12T02:00:09.528+02:00  10 GiB             Wed Apr 12 02:00:10 2023
> > root@zephir:~#
> >
> > I have looked through the release notes of 17.2.6 but couldn't find
> > anything obvious regarding rbd and ec pools.
> >
> > Does anyone else have this problem?
> >
> > Do I need to change some config setting, or was this feature disabled,
> > or is it a bug?
>
> Hi Reto,
>
> Nothing was disabled and no config changes are expected.  This should just
> work.
>
> What is the output of "rbd status" for that image?
>
> Can you reproduce with "--debug-ms 1 --debug-rbd 20" appended to the
> "rbd snap create" command and attach a file with the output?
>
> Thanks,
>
>                 Ilya
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



