Re: Problem while upgrade 17.2.6 to 17.2.7


 



Hi Istvan,

Yes, the problem is solved, but I don't remember how I did it.

I upgraded to 19.2.1 last week.

Thank you

 



From:    "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>
To:      "David C. " <david.casier@xxxxxxxx>, "Jean-Marc FONTANA" <jean-marc.fontana@xxxxxxx>
Cc:      "ceph-users@xxxxxxx" <ceph-users@xxxxxxx>, "Alejandro Herrero" <alexandre.schmitt@xxxxxxx>
Date:    21/02/2025 03:57
Subject: Re:  Re: Problem while upgrade 17.2.6 to 17.2.7



Hi Jean-Marc,

Have you managed to solve this issue?

Thank you


From: David C. <david.casier@xxxxxxxx>
Sent: Friday, November 17, 2023 5:55 PM
To: Jean-Marc FONTANA <jean-marc.fontana@xxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>; Alejandro Herrero 
<alexandre.schmitt@xxxxxxx>
Subject:  Re: Problem while upgrade 17.2.6 to 17.2.7 
 

Hi,

Don't you have a traceback below that?

You probably have a communication problem (SSL?) between the dashboard
and the RGW.

Maybe check the settings with ceph dashboard get-rgw-api-*; see
https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-the-object-gateway-management-frontend
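
For reference, a minimal sketch of checking those settings (the ssl-verify
value below is just an example for ruling out certificate problems, not a
recommendation):

      # what the dashboard currently uses to reach the RGW admin API
      ceph dashboard get-rgw-api-admin-resource
      ceph dashboard get-rgw-api-ssl-verify

      # if the RGW endpoint uses a self-signed certificate, this rules SSL out
      ceph dashboard set-rgw-api-ssl-verify False

      # regenerate the dashboard's RGW credentials and reload the module
      ceph dashboard set-rgw-credentials
      ceph mgr module disable dashboard
      ceph mgr module enable dashboard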





On Fri, 17 Nov 2023 at 11:22, Jean-Marc FONTANA <jean-marc.fontana@xxxxxxx>
wrote:

> Hello, everyone,
>
> There's no cephadm.log in /var/log/ceph.
>
> To get something more, we tried what David C. proposed (thanks to him!)
> and found:
>
> nov. 17 10:53:54 svtcephmonv3 ceph-mgr[727]: [balancer ERROR root] execute error: r = -1, detail = min_compat_client jewel < luminous, which is required for pg-upmap. Try 'ceph osd set-require-min-compat-client luminous' before using the new interface
> nov. 17 10:54:54 svtcephmonv3 ceph-mgr[727]: [balancer ERROR root] execute error: r = -1, detail = min_compat_client jewel < luminous, which is required for pg-upmap. Try 'ceph osd set-require-min-compat-client luminous' before using the new interface
> nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR exception] Internal Server Error
> nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] [::ffff:192.168.114.32:53414] [GET] [500] [0.026s] [testadmin] [513.0B] /api/rgw/daemon
> nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] [b'{"status": "500 Internal Server Error", "detail": "The server encountered an unexpected condition which prevented it from fulfilling the request.", "request_id": "961b2a25-5c14-4c67-a82a-431f08684f80"}']
> nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR exception] Internal Server Error
> nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] [::ffff:192.168.114.32:53409] [GET] [500] [0.012s] [testadmin] [513.0B] /api/rgw/daemon
> nov. 17 10:55:56 svtcephmonv3 ceph-mgr[727]: [dashboard ERROR request] [b'{"status": "500 Internal Server Error", "detail": "The server encountered an unexpected condition which prevented it from fulfilling the request.", "request_id": "baf41a81-1e6b-4422-97a7-bd96b832dc5a"}
>
> The error about min_compat_client has been fixed with the suggested
> command (that is a nice result :) ),
> but the web interface still keeps returning the same error.
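>
> (For reference, a quick sketch of applying and verifying that fix with
> standard commands:)
>
>        ceph osd set-require-min-compat-client luminous
>
>        # check that the setting took effect and that no pre-luminous
>        # clients are still connected
>        ceph osd dump | grep require_min_compat_client
>        ceph features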
>
> Thanks for your help,
>
> JM
> On 17/11/2023 at 07:33, Nizamudeen A wrote:
>
> Hi,
>
> I think it should be in /var/log/ceph/ceph-mgr.<hostname>.log. You can
> probably reproduce this error again, and hopefully you'll see a Python
> traceback or something related to RGW in the mgr logs.
>
> Regards
>
> On Thu, Nov 16, 2023 at 7:43 PM Jean-Marc FONTANA
> <jean-marc.fontana@xxxxxxx> wrote:
>
>
> Hello,
>
> These are the last lines of /var/log/ceph/cephadm.log on the active mgr
> machine after an error occurred.
> As I don't think this will be very helpful, would you please tell us where
> to look?
>
> Best regards,
>
> JM Fontana
>
> 2023-11-16 14:45:08,200 7f341eae8740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:46:10,406 7fca81386740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:47:12,594 7fd48f814740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:48:14,857 7fd0b24b1740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'check-host']
> 2023-11-16 14:48:14,990 7fd0b24b1740 INFO podman (/usr/bin/podman) version 3.0.1 is present
> 2023-11-16 14:48:14,992 7fd0b24b1740 INFO systemctl is present
> 2023-11-16 14:48:14,993 7fd0b24b1740 INFO lvcreate is present
> 2023-11-16 14:48:15,041 7fd0b24b1740 INFO Unit chrony.service is enabled and running
> 2023-11-16 14:48:15,043 7fd0b24b1740 INFO Host looks OK
> 2023-11-16 14:48:15,655 7f36b81fd740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--image', 'quay.io/ceph/ceph@sha256:56984a149e89ce282e9400ca53371ff7df74b1c7f5e979b6ec651b751931483a', '--timeout', '895', 'ls']
> 2023-11-16 14:48:17,662 7f17bfc28740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:49:20,131 7fc8a9cc1740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:50:22,284 7f1a6a7eb740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:51:24,505 7f1798dd5740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:52:26,574 7f0185a55740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:53:28,630 7f9bc3fff740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:54:30,673 7fc3752d0740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:55:32,662 7fd3865f8740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:56:34,686 7f73eedd2740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:57:36,799 7fbce19d2740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:58:38,874 7f8b5be4d740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'check-host']
> 2023-11-16 14:58:38,983 7f8b5be4d740 INFO podman (/usr/bin/podman) version 3.0.1 is present
> 2023-11-16 14:58:38,985 7f8b5be4d740 INFO systemctl is present
> 2023-11-16 14:58:38,987 7f8b5be4d740 INFO lvcreate is present
> 2023-11-16 14:58:39,050 7f8b5be4d740 INFO Unit chrony.service is enabled and running
> 2023-11-16 14:58:39,053 7f8b5be4d740 INFO Host looks OK
> 2023-11-16 14:58:39,923 7f5878f1c740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--image', 'quay.io/ceph/ceph@sha256:56984a149e89ce282e9400ca53371ff7df74b1c7f5e979b6ec651b751931483a', '--timeout', '895', 'ls']
> 2023-11-16 14:58:41,730 7fd774f12740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 14:59:44,116 7f5822228740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:00:46,276 7fbc86e16740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:01:48,291 7fec587af740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:02:50,500 7f6338963740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:02:51,882 7fbc52e2f740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--image', 'quay.io/ceph/ceph@sha256:56984a149e89ce282e9400ca53371ff7df74b1c7f5e979b6ec651b751931483a', '--timeout', '895', 'list-networks']
> 2023-11-16 15:03:53,692 7f652d1e6740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:04:56,193 7f2c66ce3740 DEBUG
> --------------------------------------------------------------------------------
> cephadm ['--timeout', '895', 'gather-facts']
> On 16/11/2023 at 12:41, Nizamudeen A wrote:
>
> Hello,
>
> can you also add the mgr logs at the time of this error?
>
> Regards,
>
> On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA
> <jean-marc.fontana@xxxxxxx> wrote:
>
>
> Hello David,
>
> We tried what you pointed out in your message. First, it was set to
>
> "s3, s3website, swift, swift_auth, admin, sts, iam, subpub"
>
> We tried to set it to "s3, s3website, swift, swift_auth, admin, sts,
> iam, subpub, notifications"
>
> and then to "s3, s3website, swift, swift_auth, admin, sts, iam,
> notifications",
>
> with no success each time.
>
> We then tried
>
>        ceph dashboard reset-rgw-api-admin-resource
>
> or
>
>        ceph dashboard set-rgw-api-admin-resource XXX
>
> getting a 500 internal error message in a red box in the upper corner
> with the first one,
>
> or the 404 error message with the second one.
>
> Thanks for your help,
>
> Best regards,
>
> JM Fontana
>
>
> On 14/11/2023 at 20:53, David C. wrote:
>
> Hi Jean Marc,
>
> Maybe check whether the value of the parameter "rgw_enable_apis"
> corresponds to the default (an RGW restart is needed after changing it):
>
> https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_enable_apis
>
> ceph config get client.rgw rgw_enable_apis
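>
> For reference, a rough sketch of checking and resetting that option (the
> RGW service name in the restart command below is a placeholder; take it
> from "ceph orch ls"):
>
>        # current value used by the RGW daemons
>        ceph config get client.rgw rgw_enable_apis
>
>        # drop any override so the built-in default applies again
>        ceph config rm client.rgw rgw_enable_apis
>
>        # restart the RGW service so the change takes effect
>        ceph orch restart rgw.<service_name>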
>
> ________________________________________________________
>
> Best regards,
>
> *David CASIER*
>
> ________________________________________________________
>
>
>
> On Tue, 14 Nov 2023 at 11:45, Jean-Marc FONTANA
> <jean-marc.fontana@xxxxxxx> wrote:
>
>     Hello everyone,
>
>     We operate two clusters that we installed with ceph-deploy, in
>     Nautilus version on Debian 10. We use them for external S3 storage
>     (ownCloud) and RBD disk images. We upgraded them to the Octopus and
>     Pacific versions on Debian 11 and recently converted them to cephadm
>     and upgraded to Quincy (17.2.6).
>
>     As we now have the orchestrator, we tried updating to 17.2.7 using
>     the command:
>
>     # ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7
>
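>     (For reference, the progress of such an upgrade can be followed with
>     the standard orchestrator commands, e.g.:)
>
>     ceph orch upgrade status   # target image and current progress
>     ceph versions              # which daemons run which version
>     ceph -s                    # overall cluster health during the upgrade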
>
>     Everything went well, and both clusters work perfectly for our use,
>     except that the Rados Gateway configuration is no longer accessible
>     from the dashboard, with the following error message: "Error
>     connecting to Object Gateway: RGW REST API failed request with
>     status code 404."
>
>     We tried a few solutions found on the internet (reset RGW credentials,
>     restart rgw and mgr, re-enable the dashboard, ...), unsuccessfully.
>
>     Does somebody have an idea?
>
>     Best regards,
>
>     Jean-Marc Fontana
>     _______________________________________________
>     ceph-users mailing list -- ceph-users@xxxxxxx
>     To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



