multisite sync error

Hi,

I have two zonegroups inside a realm, each containing one zone, as shown below:

REALM
+--------------------------------------------------------------------+
| +--------------------------+ +-------------------------+ |
| | | | | |
| | zonegroup 1 (Master) | | zonegroup 2 | |
| | | | | |
| | | | | |
| | +-----------+ | | +-----------+ | |
| | | | | | | | | |
| | | | | | | | | |
| | | | | | | | | |
| | | zone 1 | | | | zone 2 | | |
| | | | | | | | | |
| | | | | | | | | |
| | +-----------+ | | +-----------+ | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| +--------------------------+ +-------------------------+ |
+--------------------------------------------------------------------+

Ceph version: 16.2.7

I have some problems:

1- I have an extra period in zone1. How do I remove this period?

period list in zone1:

{
    "periods": [
        "16247df3-b22c-49b8-ab4e-d4bdca5395df:staging",
        "290bfb2c-d573-44c1-a675-b7b9da51454f:staging",
        "31198da0-6b47-4139-b6a6-bb38aef6b5f6",
        "a3c6431f-baea-41db-b0fe-2f2c0204645b"
    ]
}


period list in zone2:

{
    "periods": [
        "290bfb2c-d573-44c1-a675-b7b9da51454f:staging",
        "31198da0-6b47-4139-b6a6-bb38aef6b5f6",
        "a3c6431f-baea-41db-b0fe-2f2c0204645b"
    ]
}
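For reference, diffing the two lists points at the one entry that exists only in zone1 (a quick sketch in Python, with the IDs pasted from the output above):

```python
# Period IDs copied from the "period list" output of each zone above.
zone1_periods = {
    "16247df3-b22c-49b8-ab4e-d4bdca5395df:staging",
    "290bfb2c-d573-44c1-a675-b7b9da51454f:staging",
    "31198da0-6b47-4139-b6a6-bb38aef6b5f6",
    "a3c6431f-baea-41db-b0fe-2f2c0204645b",
}
zone2_periods = {
    "290bfb2c-d573-44c1-a675-b7b9da51454f:staging",
    "31198da0-6b47-4139-b6a6-bb38aef6b5f6",
    "a3c6431f-baea-41db-b0fe-2f2c0204645b",
}

# The entry present in zone1 but not in zone2 is the extra one.
print(zone1_periods - zone2_periods)
# -> {'16247df3-b22c-49b8-ab4e-d4bdca5395df:staging'}
```

So the extra entry is the second ":staging" period, 16247df3-b22c-49b8-ab4e-d4bdca5395df.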

2- When I try to commit the period on zone2, all the RGW daemons in zone1 crash:

    "backtrace": [
        "/lib64/libpthread.so.0(+0x12c20) [0x7f48d37cbc20]",
        "(UserAsyncRefreshHandler::init_fetch()+0x4e) [0x7f48de78453e]",
        "(RGWQuotaCache<rgw_user>::async_refresh(rgw_user const&, rgw_bucket const&, RGWQuotaCacheStats&)+0x19f) [0x7f48de7888bf]",
        "(RGWQuotaCache<rgw_user>::get_stats(rgw_user const&, rgw_bucket const&, RGWStorageStats&, optional_yield, DoutPrefixProvider const*)+0x113) [0x7f48de78bb93]",
        "(RGWQuotaHandlerImpl::check_quota(rgw_user const&, rgw_bucket&, RGWQuotaInfo&, RGWQuotaInfo&, unsigned long, unsigned long, optional_yield)+0x12d) [0x7f48de78c34d]",
        "(rgw::sal::RGWRadosBucket::check_quota(RGWQuotaInfo&, RGWQuotaInfo&, unsigned long, optional_yield, bool)+0x45) [0x7f48de875185]",
        "(RGWPutObj::execute(optional_yield)+0x10a6) [0x7f48de7572e6]",
        "(rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, optional_yield, bool)+0xb36) [0x7f48de3d8f86]",
        "(process_request(rgw::sal::RGWRadosStore*, RGWREST*, RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSink*, optional_yield, rgw::dmclock::Scheduler*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*, int*)+0x2891) [0x7f48de3dce21]",
        "/lib64/libradosgw.so.2(+0x4b1b63) [0x7f48de32cb63]",
        "/lib64/libradosgw.so.2(+0x4b3604) [0x7f48de32e604]",
        "/lib64/libradosgw.so.2(+0x4b386e) [0x7f48de32e86e]",
        "make_fcontext()"
    ],



3- "Directory not empty"

I'm getting a lot of errors like this in zone2 while syncing metadata:

                "info": {
                    "source_zone": "70271812-eb5c-472a-9050-16e289e78941",
                    "error_code": 39,
                    "message": "Failed to read remote metadata entry: (39) Directory not empty"
                }
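(For what it's worth, error_code 39 seems to be the Linux errno ENOTEMPTY, which is where the "Directory not empty" text comes from — a quick check in Python on Linux:)

```python
import errno
import os

# errno 39 on Linux is ENOTEMPTY ("Directory not empty"),
# matching the (39) in the sync error message above.
print(errno.errorcode[39])  # -> ENOTEMPTY
print(os.strerror(39))      # -> Directory not empty
```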
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


