Re: Dashboard and Object Gateway

Hello Tim,

I was also struggling with this when I was configuring the object gateway for the first time.

There are a few things you should check to make sure the dashboard works:

1. You need the admin API enabled on all RGWs via the rgw_enable_apis option. (As far as I know, there is no way to force the dashboard to use only one RGW instance.)
2. You seem to have rgw_admin_entry set to a non-default value - the default is "admin", but judging by the bucket name in the error, yours is "default". Make sure this is also set on all RGWs (see the example commands right after this list).
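
For reference, on a cephadm-managed cluster something along these lines should apply both settings cluster-wide. The option names are the real ones; the API list and the "rgw.myrgw" service name are only placeholders for your own setup, and the config section may differ depending on how your RGW daemons are named:

  # enable the admin API alongside the usual frontends on every RGW
  ceph config set client.rgw rgw_enable_apis "s3, s3website, swift, swift_auth, admin"

  # match the non-default admin entry point your dashboard is apparently using
  ceph config set client.rgw rgw_admin_entry default

  # restart the RGW daemons so the new values take effect
  ceph orch restart rgw.myrgw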

You can confirm that both of these settings are applied by sending a GET request to ${rgw-ip}:${port}/${rgw_admin_entry} ("default" in your case); it should return 405 Method Not Allowed.
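
A minimal check, with host and port as placeholders for one of your RGW endpoints:

  # a 405 here means the admin entry point is wired up; a 404 means it is not
  curl -i http://<rgw-host>:<port>/default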

By the way, there is no actual bucket that you would be able to see in the administration; it is just an abstraction on the RGW side.

Regards,

Ondrej

> On 16. 10. 2023, at 22:00, Tim Holloway <timh@xxxxxxxxxxxxx> wrote:
> 
> First, an abject apology for the horrors I'm about to unveil. I made a
> cold migration from GlusterFS to Ceph a few months back, so it was a
> learn-/screwup/-as-you-go affair.
> 
> For reasons of presumed compatibility with some of my older servers, I
> started with Ceph Octopus. Unfortunately, Octopus seems to have been a
> nexus of transitions from older Ceph organization and management to a
> newer (cephadm) system combined with a relocation of many ceph
> resources and compounded by stale bits of documentation (notably some
> references to SysV procedures and an obsolete installer that doesn't
> even come with Octopus).
> 
> A far bigger problem was a known issue where actions would be scheduled
> but never executed if the system was even slightly dirty. And of
> course, since my system was hopelessly dirty, that was a major issue.
> Finally I took a risk and bumped up to Pacific, where that issue no
> longer exists. I won't say that I'm 100% clean even now, but at least
> the remaining crud is in areas where it cannot do any harm. Presumably.
> 
> Given that, the only bar now remaining to total joy has been my
> inability to connect via the Ceph Dashboard to the Object Gateway.
> 
> This seems to be an oft-reported problem, but generally referenced
> relative to higher-level administrative interfaces like Kubernetes and
> rook. I'm interfacing more directly, however. Regardless, the error
> reported is notably familiar:
> 
> [quote]
> The Object Gateway Service is not configured
> Error connecting to Object Gateway: RGW REST API failed request with
> status code 404
> (b'{"Code":"NoSuchBucket","Message":"","BucketName":"default","RequestI
> d":"tx00' b'000dd0c65b8bda685b4-00652d8e0f-5e3a9b-
> default","HostId":"5e3a9b-default-defa' b'ult"}')
> Please consult the documentation on how to configure and enable the
> Object Gateway management functionality. 
> [/quote]
> 
> In point of fact, what this REALLY means in my case is that the bucket
> that is supposed to contain the necessary information for the dashboard
> and rgw to communicate has not been created. Presumably that SHOULD have
> been done by the "ceph dashboard set-rgw-credentials" command, but
> apparently isn't, because the default zone has no buckets at all, much
> less one named "default".
> 
> By way of reference, the dashboard is definitely trying to interact
> with the rgw container, because trying object gateway options on the
> dashboard results in the container logging the following.
> 
> beast: 0x7efd29621620: 10.0.1.16 - dashboard [16/Oct/2023:19:25:03.678
> +0000] "GET /default/metadata/user?myself HTTP/1.1" 404
> 
> To make everything happy, I'd be glad to accept instructions on how to
> manually brute-force construct this bucket.
> 
> Of course, as a cleaner long-term solution, it would be nice if the
> failure to create could be detected and logged.
> 
> And of course, the ultimate solution: something that would assist in
> making whatever processes are unhappy be happy.
> 
>    Thanks,
>      Tim
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx