RGW - Ceph Storage Cluster user keyring for a secondary multisite RGW

Hello, I need help configuring a Ceph Storage Cluster user for a secondary RADOS Gateway.

My multisite RGW configuration and sync work, but only with broad capabilities (osd 'allow rwx', mon 'allow profile simple-rados-client', mgr 'allow profile rbd'); I would like to avoid using osd 'allow rwx'.
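
For reference, here is a minimal sketch of the broad-cap setup that does work; the client name client.rgw.secondary is only an example, not my real keyring name:

    ceph auth caps client.rgw.secondary \
        mon 'allow profile simple-rados-client' \
        mgr 'allow profile rbd' \
        osd 'allow rwx'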

For the master zone, it works with the following osd cap:
'allow rwx pool=myrootpool, allow rwx pool=myzone.rgw.buckets.index, allow rwx pool=myzone.rgw.buckets.data, ...'
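
In other words, something like the sketch below (the client name client.rgw.master is an example, and only the first few pools are shown; the rest of the pool list is elided here as above):

    ceph auth caps client.rgw.master \
        mon 'allow profile simple-rados-client' \
        mgr 'allow profile rbd' \
        osd 'allow rwx pool=myrootpool, allow rwx pool=myzone.rgw.buckets.index, allow rwx pool=myzone.rgw.buckets.data'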

For the secondary zone, however, the same kind of exhaustive pool list does not work; it only works with osd 'allow rwx'.
Does a secondary RGW need additional osd caps?

Configuration:
ceph version 16.2.9
one Ceph cluster
I don't use .rgw.root for realm and zone info
two RadosGWs (master and secondary)


Secondary RGW error logs:
   -20> 2023-01-03T16:10:58.519+0100 7f14fe28d840  5 asok(0x55c71bd1c100) register_command sync trace show hook 0x7f14f0002700
   -19> 2023-01-03T16:10:58.519+0100 7f14fe28d840  5 asok(0x55c71bd1c100) register_command sync trace history hook 0x7f14f0002700
   -18> 2023-01-03T16:10:58.519+0100 7f14fe28d840  5 asok(0x55c71bd1c100) register_command sync trace active hook 0x7f14f0002700
   -17> 2023-01-03T16:10:58.519+0100 7f14fe28d840  5 asok(0x55c71bd1c100) register_command sync trace active_short hook 0x7f14f0002700
   -16> 2023-01-03T16:10:58.523+0100 7f1464ff9700  5 rgw object expirer Worker thread: process_single_shard(): failed to acquire lock on obj_delete_at_hint.0000000002
   -15> 2023-01-03T16:10:58.523+0100 7f14fe28d840  5 rgw main: starting data sync thread for zone pvid-qualif-0.s3
   -14> 2023-01-03T16:10:58.523+0100 7f1464ff9700  5 rgw object expirer Worker thread: process_single_shard(): failed to acquire lock on obj_delete_at_hint.0000000003
   -13> 2023-01-03T16:10:58.523+0100 7f1457fef700  5 lifecycle: schedule life cycle next start time: Tue Jan  3 23:00:00 2023
   -12> 2023-01-03T16:10:58.523+0100 7f1455feb700  5 lifecycle: schedule life cycle next start time: Tue Jan  3 23:00:00 2023
   -11> 2023-01-03T16:10:58.523+0100 7f1453fe7700  5 lifecycle: schedule life cycle next start time: Tue Jan  3 23:00:00 2023
   -10> 2023-01-03T16:10:58.523+0100 7f1464ff9700  5 rgw object expirer Worker thread: process_single_shard(): failed to acquire lock on obj_delete_at_hint.0000000004
    -9> 2023-01-03T16:10:58.523+0100 7f1464ff9700  5 rgw object expirer Worker thread: process_single_shard(): failed to acquire lock on obj_delete_at_hint.0000000005
    -8> 2023-01-03T16:10:58.523+0100 7f1464ff9700  5 rgw object expirer Worker thread: process_single_shard(): failed to acquire lock on obj_delete_at_hint.0000000006
    -7> 2023-01-03T16:10:58.527+0100 7f14fe28d840  0 framework: beast
    -6> 2023-01-03T16:10:58.527+0100 7f14fe28d840  0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
    -5> 2023-01-03T16:10:58.527+0100 7f14fe28d840  0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
    -4> 2023-01-03T16:10:58.527+0100 7f14fe28d840  0 starting handler: beast
    -3> 2023-01-03T16:10:58.527+0100 7f1464ff9700  5 rgw object expirer Worker thread: process_single_shard(): failed to acquire lock on obj_delete_at_hint.0000000007
    -2> 2023-01-03T16:10:58.527+0100 7f14fe28d840  4 frontend listening on 0.0.0.0:443
    -1> 2023-01-03T16:10:58.527+0100 7f14fe28d840  4 frontend listening on [::]:443
     0> 2023-01-03T16:10:58.527+0100 7f1452fe5700 -1 *** Caught signal (Aborted) **
 in thread 7f1452fe5700 thread_name:rgw_user_st_syn

 ceph version 16.2.9 (a569859f5e07da0c4c39da81d5fb5675cd95da49) pacific (stable)
 1: /lib/x86_64-linux-gnu/libc.so.6(+0x3bd60) [0x7f150a491d60]
 2: gsignal()
 3: abort()
 4: /lib/x86_64-linux-gnu/libstdc++.so.6(+0x9a7ec) [0x7f1500ac67ec]
 5: /lib/x86_64-linux-gnu/libstdc++.so.6(+0xa5966) [0x7f1500ad1966]
 6: /lib/x86_64-linux-gnu/libstdc++.so.6(+0xa59d1) [0x7f1500ad19d1]
 7: /lib/x86_64-linux-gnu/libstdc++.so.6(+0xa5c65) [0x7f1500ad1c65]
 8: /lib/librados.so.2(+0x36b7a) [0x7f150a03cb7a]
 9: /lib/librados.so.2(+0x7cd20) [0x7f150a082d20]
 10: (librados::v14_2_0::IoCtx::nobjects_begin(librados::v14_2_0::ObjectCursor const&, ceph::buffer::v15_2_0::list const&)+0x59) [0x7f150a08d749]
 11: (RGWSI_RADOS::Pool::List::init(DoutPrefixProvider const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RGWAccessListFilter*)+0x2e5) [0x7f150b12d615]
 12: (RGWSI_SysObj_Core::pool_list_objects_init(DoutPrefixProvider const*, rgw_pool const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RGWSI_SysObj::Pool::ListCtx*)+0x24f) [0x7f150abb6c6f]
 13: (RGWSI_MetaBackend_SObj::list_init(DoutPrefixProvider const*, RGWSI_MetaBackend::Context*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x235) [0x7f150b11f8e5]
 14: (RGWMetadataHandler_GenericMetaBE::list_keys_init(DoutPrefixProvider const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**)+0x41) [0x7f150ace3f71]
 15: (RGWMetadataManager::list_keys_init(DoutPrefixProvider const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**)+0x81) [0x7f150ace88a1]
 16: (RGWMetadataManager::list_keys_init(DoutPrefixProvider const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**)+0x3d) [0x7f150ace895d]
 17: (RGWUserStatsCache::sync_all_users(DoutPrefixProvider const*, optional_yield)+0x72) [0x7f150aeb88f2]
 18: (RGWUserStatsCache::UserSyncThread::entry()+0x91) [0x7f150aec08a1]
 19: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7) [0x7f1500c01ea7]
 20: clone()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.



