Re: http_proxy settings for cephadm

That's a good notion, and was next on my list.

I have actually tracked down the root cause here - it's sudo.

Sudo does:

Defaults env_reset

And ceph orch calls podman via sudo - so while the containers were getting the environment just fine, the deploy process itself wasn't.

Adding:

Defaults env_keep += "http_proxy https_proxy no_proxy"

to a file under /etc/sudoers.d/ seems to have resolved the issue.
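For reference, a minimal sketch of that change - the file name proxy_env is arbitrary (any file under /etc/sudoers.d/ works), and editing via visudo -f catches syntax errors:

sudo visudo -f /etc/sudoers.d/proxy_env

# file contents:
Defaults env_keep += "http_proxy https_proxy no_proxy"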

podman doesn't _seem_ to have a similar config file for setting such things.
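The closest thing I'm aware of is the http_proxy option in containers.conf, but as far as I can tell that only controls whether the host's proxy variables are forwarded into containers - it doesn't set a proxy for podman's own image pulls:

# /etc/containers/containers.conf
# (my understanding: this forwards host proxy env into containers only)
[containers]
http_proxy = true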


This is perhaps not an optimal approach, but does seem to resolve the core problem.


From: GARCIA, SAMUEL <samuel.garcia@xxxxxxxx>
Sent: 15 July 2022 15:09
To: Ed Rolison <ed.rolison@xxxxxxxx>; ceph-users@xxxxxxx
Subject: Re: http_proxy settings for cephadm



Hello Ed,

Personally, I set the proxy for docker instead of using the environment variables:

mkdir -p /etc/systemd/system/docker.service.d
cat << EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://my-proxy:port"
Environment="HTTPS_PROXY=http://my-proxy:port"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
systemctl daemon-reload
systemctl restart docker
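To confirm the drop-in has been picked up:

systemctl show --property=Environment docker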

This allows cephadm to transparently use the proxy when required.

Note that I'm working on RHEL 8, where I removed podman and use Docker instead.

Kind regards,

Sam


From: Ed Rolison <ed.rolison@xxxxxxxx>
Date: Friday, 15 July 2022 at 14:36
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: http_proxy settings for cephadm

Hello everyone. I'm having a bit of a headache at the moment, trying to track down how I "should" be configuring proxy settings.

When I was running Pacific, I think I managed to get things working by setting a proxy in /etc/environment.
Note that if you do this, you'll also have to set no_proxy, because otherwise Prometheus errors.

I've upgraded to Quincy successfully, although I did need to manually pull some images with podman.
My current proxy settings are:

http_proxy=my_proxy:3128
https_proxy=my_proxy:3128
no_proxy=10.0.0.0/8,my_subdomain

As far as I can tell from untangling all this, /etc/environment seems to be the only place it gets picked up within the cephadm environment - setting it in profile.d or similar doesn't seem to work, and neither does /etc/default/ceph. (I was clutching at straws a bit.)

But suffice to say, with sufficient fettling the proxy now appears in my 'mgr' environment: if I attach with podman exec -it <container> /bin/bash and run 'env', it's there - I can 'curl' quay.io, and off we go.
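That check, roughly:

# attach to the running mgr container...
sudo podman exec -it <container> /bin/bash
# ...then, inside the container:
env | grep -i _proxy
curl -sI https://quay.io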

However, when I run ceph orch upgrade --ceph-version 17.2.1

this bombs out with 'failed to pull target image'. ceph orch upgrade check does pretty much the same - unable to resolve quay.io, in a way very consistent with not trying to use a proxy.

I've managed to work around this with 'ceph orch host add' because manually pulling the container from the command line (just a 'podman pull' using the default env) seems to work just fine.
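For what it's worth, the manual pull was just the following - assuming the default Quincy image path, which I haven't overridden:

sudo podman pull quay.io/ceph/ceph:v17.2.1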

But ceph orch upgrade seems not to be content with this - at least not any more; I'm _fairly_ sure this is how I upgraded from 16 to 17, by manually pulling containers.

But what I'm really trying to figure out is why I cannot set a proxy for podman - it seems 'upgrade.py' might just ignore the env, yet insist on pulling the container 'properly'.
I also have a suspicion that I could do some filthy hack that involves setting container_image and, again, manually pulling the container.
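Something like this, perhaps - untested, and again assuming the default image path:

ceph config set global container_image quay.io/ceph/ceph:v17.2.1
sudo podman pull quay.io/ceph/ceph:v17.2.1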

(This is a 'test' env, so I'm not too worried about messing it up).

But does anyone have any suggestions for effectively using cephadm from behind a proxy?



Ed Rolison
OxFORD Asset Management
OxAM House | 6 George Street
Oxford | OX1 2BW | England
ed.rolison@xxxxxxxx
Phone: +44 (0) 1865 248 248




This confidential email is subject to OxFORD Asset Management's Legal Notice<www.oxam.com/legal-notices> and Privacy Notice<www.oxam.com/privacy-and-cookies>.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



