Re: ceph rgw zone create fails EINVAL

I would guess that it probably does, but I don't know for sure.

Daniel

On 6/26/24 10:04 AM, Adam King wrote:
Interesting. Given that this is coming from a radosgw-admin call made from within the rgw mgr module, I wonder if a radosgw-admin log file is ending up in the active mgr container when this happens.
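
A quick way to check (assuming a cephadm deployment; the daemon name below is the one from the log excerpt further down) might be something like:

    # list the daemons on the host and find the active mgr
    cephadm ls | grep mgr

    # open a shell inside that container
    cephadm enter --name mgr.moss-be2001.qvwcaq

    # look for stray per-invocation radosgw-admin log files
    ls -ltr /var/log/ceph/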

On Wed, Jun 26, 2024 at 9:04 AM Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:

    On 6/25/24 3:21 PM, Matthew Vernon wrote:
     > On 24/06/2024 21:18, Matthew Vernon wrote:
     >
     >> 2024-06-24T17:33:26.880065+00:00 moss-be2001 ceph-mgr[129346]: [rgw
     >> ERROR root] Non-zero return from ['radosgw-admin', '-k',
     >> '/var/lib/ceph/mgr/ceph-moss-be2001.qvwcaq/keyring', '-n',
     >> 'mgr.moss-be2001.qvwcaq', 'realm', 'pull', '--url',
     >> 'https://apus.svc.eqiad.wmnet:443', '--access-key', 'REDACTED',
     >> '--secret', 'REDACTED', '--rgw-realm', 'apus']: request failed: (5)
     >> Input/output error
     >>
     >> EIO is an odd sort of error [doesn't sound very network-y], and I
     >> don't think I see any corresponding request in the radosgw logs in the
     >> primary zone. From the CLI outside the container I can do e.g. curl
     >> https://apus.svc.eqiad.wmnet/ just fine, are there other things worth
     >> checking here? Could it matter that the mgr node isn't an rgw?
     >
     > ...the answer turned out to be "container image lacked the relevant CA
     > details to validate the TLS of the other end".
     >
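
    For anyone else who hits this: one way to confirm the trust store from
    inside the mgr container (assuming curl is present in the image; CA
    bundle paths differ by base image) is something like:

        # from inside the mgr container, against the realm endpoint above
        curl -v https://apus.svc.eqiad.wmnet:443/

        # check whether a distro CA bundle is present at all
        ls -l /etc/pki/tls/certs/ca-bundle.crt \
              /etc/ssl/certs/ca-certificates.crt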

    Also, for the record, radosgw-admin logs do not end up in the same log
    file as RGW's logs.  Each invocation of radosgw-admin makes its own log
    file for the run of that command.  (This is because radosgw-admin is
    really a stripped-down version of RGW itself, and it does not
    communicate with the running RGWs, but connects to the Ceph cluster
    directly.)  They're generally small, and frequently empty, but should
    have error messages in them on failure.
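
    If you want to see the error directly, re-running the failing command
    by hand inside the mgr container with client-side debugging should
    surface it; a sketch, reusing the redacted values from the log excerpt
    above (add the -k/-n options from that excerpt if auth is needed):

        # log to stderr with verbose rgw/messenger debugging
        radosgw-admin --log-to-stderr --debug-rgw 20 --debug-ms 1 \
            realm pull --url https://apus.svc.eqiad.wmnet:443 \
            --access-key REDACTED --secret REDACTED --rgw-realm apus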

    Daniel

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



