RGW Multisite Sync Policy - Bucket Specific - Core Dump

We use Ceph RBD/CephFS extensively and are starting down our RGW journey. We have 3 sites and want to replicate buckets from a single “primary” site to multiple “backup” sites. Each site has its own Ceph cluster, and all three are configured as part of a multisite setup.

I am using the instructions at https://docs.ceph.com/en/quincy/radosgw/multisite-sync-policy/#example-3-mirror-a-specific-bucket to try to configure a single bucket to replicate from one zone to two other zones in a directional (not symmetric) manner.

When I follow the example, I get a core dump on the final radosgw-admin sync group pipe create command.

It would be great if someone with experience with multisite sync policies could take a look at my commands and see if there is anything glaringly wrong with what I am trying to do.

BTW: the Multisite Sync Policy docs are, IMHO, the most opaque/confusing section of the doc site.

Setup:
  Version: 16.2.10
  Clusters: 3
  Zonegroup: us
  Zones: us-dev-1, us-dev-2, us-rose-3-dev
  Tenant: elvis
  Bucket: artifact

radosgw-admin sync group create \
    --group-id=us \
    --status=allowed

radosgw-admin sync group flow create \
    --group-id=us \
    --flow-id=dev1-to-dev2 \
    --flow-type=directional \
    --source-zone=us-dev-1 \
    --dest-zone=us-dev-2

radosgw-admin sync group flow create \
    --group-id=us \
    --flow-id=dev1-to-dev3 \
    --flow-type=directional \
    --source-zone=us-dev-1 \
    --dest-zone=us-rose-3-dev
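
Both flow create commands run without complaint. As a sanity check along the way (this is just my own habit, not part of the example), I have been dumping the zonegroup-level policy with:

radosgw-admin sync policy get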

radosgw-admin sync group pipe create \
    --group-id=us \
    --pipe-id=us-all \
    --source-zones='*' \
    --source-bucket='*' \
    --dest-zones='*' \
    --dest-bucket='*'
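
One thing the example does not spell out very clearly: since the zonegroup-level policy is stored with the zonegroup/period, my understanding is that it needs a period commit on the metadata master before moving on to the bucket-level policy (bucket-level policy changes should not need this). So at this point I also run:

radosgw-admin period update --commit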

radosgw-admin sync group create \
    --bucket=elvis/artifact \
    --group-id=elvis-artifact \
    --status=enabled
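
The bucket-level group create itself does not crash, and (again just as my own sanity check, and assuming I am reading the radosgw-admin options right) I can see the group with:

radosgw-admin sync policy get --bucket=elvis/artifact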
    
radosgw-admin sync group pipe create \
    --bucket=elvis/artifact \
    --group-id=elvis-artifact \
    --pipe-id=pipe1 \
    --source-zones='us-dev-1' \
    --dest-zones='us-dev-2,us-rose-3-dev'
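
That pipe create is the command that aborts. Given that the backtrace below dies in rgw_sync_bucket_entities::set_bucket(), my (completely unverified) guess is that it is unhappy about the missing --source-bucket/--dest-bucket arguments on a bucket-scoped pipe, so the next thing I intend to try is spelling those out explicitly:

radosgw-admin sync group pipe create \
    --bucket=elvis/artifact \
    --group-id=elvis-artifact \
    --pipe-id=pipe1 \
    --source-zones='us-dev-1' \
    --source-bucket='*' \
    --dest-zones='us-dev-2,us-rose-3-dev' \
    --dest-bucket='*'

I have no idea yet whether that avoids the abort, so please treat it as a guess rather than a fix.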

/usr/include/c++/8/optional:714: constexpr _Tp& std::_Optional_base<_Tp, <anonymous>, <anonymous> >::_M_get() [with _Tp = rgw_bucket; bool <anonymous> = false; bool <anonymous> = false]: Assertion 'this->_M_is_engaged()' failed.
*** Caught signal (Aborted) **
 in thread 7f0092e41380 thread_name:radosgw-admin
 ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12ce0) [0x7f0086ffece0]
 2: gsignal()
 3: abort()
 4: radosgw-admin(+0x35fff8) [0x563d0c5f5ff8]
 5: (rgw_sync_bucket_entities::set_bucket(std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >)+0x67) [0x563d0c879c07]
 6: main()
 7: __libc_start_main()
 8: _start()
2022-07-28T09:24:39.445-0700 7f0092e41380 -1 *** Caught signal (Aborted) **
 in thread 7f0092e41380 thread_name:radosgw-admin

 ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12ce0) [0x7f0086ffece0]
 2: gsignal()
 3: abort()
 4: radosgw-admin(+0x35fff8) [0x563d0c5f5ff8]
 5: (rgw_sync_bucket_entities::set_bucket(std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >)+0x67) [0x563d0c879c07]
 6: main()
 7: __libc_start_main()
 8: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
  -494> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command assert hook 0x563d0e75fbd0
  -493> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command abort hook 0x563d0e75fbd0
  -492> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command leak_some_memory hook 0x563d0e75fbd0
  -491> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command perfcounters_dump hook 0x563d0e75fbd0
  -490> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command 1 hook 0x563d0e75fbd0
  -489> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command perf dump hook 0x563d0e75fbd0
  -488> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command perfcounters_schema hook 0x563d0e75fbd0
  -487> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command perf histogram dump hook 0x563d0e75fbd0
  -486> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command 2 hook 0x563d0e75fbd0
  -485> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command perf schema hook 0x563d0e75fbd0
  -484> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command perf histogram schema hook 0x563d0e75fbd0
  -483> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command perf reset hook 0x563d0e75fbd0
  -482> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command config show hook 0x563d0e75fbd0
  -481> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command config help hook 0x563d0e75fbd0
  -480> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command config set hook 0x563d0e75fbd0
  -479> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command config unset hook 0x563d0e75fbd0
  -478> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command config get hook 0x563d0e75fbd0
  -477> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command config diff hook 0x563d0e75fbd0
  -476> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command config diff get hook 0x563d0e75fbd0
  -475> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command injectargs hook 0x563d0e75fbd0
  -474> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command log flush hook 0x563d0e75fbd0
  -473> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command log dump hook 0x563d0e75fbd0
  -472> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command log reopen hook 0x563d0e75fbd0
  -471> 2022-07-28T09:24:39.316-0700 7f0092e41380  5 asok(0x563d0e6ece90) register_command dump_mempools hook 0x563d0e762058
  -470> 2022-07-28T09:24:39.326-0700 7f0092e41380 10 monclient: get_monmap_and_config
  -469> 2022-07-28T09:24:39.326-0700 7f0092e41380 10 monclient: build_initial_monmap
  -468> 2022-07-28T09:24:39.326-0700 7f0092e41380 10 monclient: monmap:
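
If I can get past the abort, the plan is to confirm on each zone which sources and destinations actually got resolved for the bucket; as far as I can tell from the docs, that would be:

radosgw-admin sync info --bucket=elvis/artifact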



-- 


Mark Selby
Sr Linux Administrator, The Voleon Group
mselby@xxxxxxxxxx 
 
 This email is subject to important conditions and disclosures that are listed on this web page: https://voleon.com/disclaimer/.
 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
