Re: deploy ceph cluster in isolated environment -- NO INTERNET


 



Answers to your questions:

namespace is: my_ceph

I did try using a version number as the image tag (nothing changed).

I did not set up any fallback myself; I just downloaded the image from the
RPM, changed its tag, and pushed it to my local registry.

adminnode is the cluster admin node and also hosts the local image registry.

yes, adminnode is the registry


Here is the registry configuration for the container engine, replicated on
all nodes (/etc/containers/registries.conf):

[[registry]]
prefix = "adminnode:5000"
insecure = true
location = "adminnode:5000"


Internet access is blocked, so adminnode cannot pull any images from quay.
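
To see what the registry can actually serve, it may help to query the Docker Registry v2 API directly. This is only a diagnostic sketch; the host, port, and repository name are taken from your messages, so adjust them if they differ:

```shell
# List the tags the registry knows for this repository
curl -s http://adminnode:5000/v2/my_ceph/tags/list

# Fetch the manifest by tag; the Docker-Content-Digest response header
# is the digest the registry would accept for digest-based pulls
curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  http://adminnode:5000/v2/my_ceph/manifests/v17.2.3 | grep -i docker-content-digest

# Try the digest cephadm recorded; an HTTP 404 here reproduces the
# "manifest unknown" error outside of podman/cephadm
curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  http://adminnode:5000/v2/my_ceph/manifests/sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca
```

If the by-tag request works but the by-digest request returns 404, the registry simply has no manifest stored under that digest, which would match the "manifest unknown" error cephadm reports.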

-----------------------------------------------------------------------

As I told you before, I can download the image using the tag:

podman pull adminnode:5000/my_ceph:v17.2.3
here is the output:

    root@adminnode:~# podman pull adminnode:5000/my_ceph:v17.2.3
    Trying to pull adminnode:5000/my_ceph:v17.2.3...
    Getting image source signatures
    Copying blob 58149c38763c skipped: already exists
    Copying blob 61d755b02433 skipped: already exists
    Copying blob 6521843dd476 skipped: already exists
    Copying blob 4bb16177726c skipped: already exists
    Copying blob f94384149dc9 [--------------------------------------] 0.0b / 0.0b
    Copying config 44957ee5ff done
    Writing manifest to image destination
    Storing signatures
    44957ee5ff339b873ccc29e61529563390d02277ea006f04ee8f42810408ed8b


I am just wondering how I can tell cephadm to use the tag instead of the
digest :/
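
If the goal is just to stop cephadm from resolving images to digests, cephadm has an mgr option for exactly that. A sketch (this takes effect once the cluster mgr is running, i.e. after bootstrap):

```shell
# Make cephadm record and deploy the image by the name/tag you gave it,
# instead of resolving it to a repo digest first
ceph config set mgr mgr/cephadm/use_repo_digest false
```

With `use_repo_digest` disabled, the other hosts should pull `adminnode:5000/my_ceph:v17.2.3` by tag, which you have already shown works.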

Here is also the output of cephadm pull on adminnode (it cannot even pull
the image from its own registry when using the digest):


    root@adminnode:~# cephadm pull
    Using recent ceph image adminnode:5000/my_ceph@sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca
    Pulling container image adminnode:5000/my_ceph@sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca...
    Non-zero exit code 125 from /usr/bin/podman pull adminnode:5000/my_ceph@sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca
    /usr/bin/podman: stderr Trying to pull adminnode:5000/my_ceph@sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca...
    /usr/bin/podman: stderr Error: initializing source docker://adminnode:5000/my_ceph@sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca: reading manifest sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca in adminnode:5000/my_ceph: manifest unknown: manifest unknown
    ERROR: Failed command: /usr/bin/podman pull adminnode:5000/my_ceph@sha256:ada31f505d2c3249a531c972bcceb99ae5564e0843b40027dd3c454317a06eca
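
An image digest is just the sha256 of the exact manifest bytes the registry stores, so a manifest that gets re-serialized anywhere in the pull/re-tag/push round trip ends up with a different digest than the one cephadm recorded. A minimal, standalone illustration of why (nothing Ceph-specific):

```shell
# Identical logical content, serialized differently, hashes differently --
# which is why a re-encoded manifest can no longer be fetched by its
# original digest ("manifest unknown").
a='{"schemaVersion":2}'
b='{ "schemaVersion": 2 }'
da=$(printf '%s' "$a" | sha256sum | cut -d' ' -f1)
db=$(printf '%s' "$b" | sha256sum | cut -d' ' -f1)
echo "digest of a: sha256:${da}"
echo "digest of b: sha256:${db}"
```

If that is what happened here, two hedged suggestions (not verified against this setup): push with `podman push --format v2s2` to avoid an OCI re-encode, or mirror the image with `skopeo copy --all docker://quay.io/ceph/ceph:v17.2.3 docker://adminnode:5000/my_ceph:v17.2.3`, since skopeo preserves the manifest bytes and digests survive the copy.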


On Sun, Jul 31, 2022 at 1:58 PM Alvaro Soto <alsotoes@xxxxxxxxx> wrote:

> What namespace are you using, ceph or my_ceph? Also, stop using the
> latest tag; it forces a trip to the registry even when the image is
> already on the local node, and an untagged reference always defaults to latest.
>
> If podman pull and cephadm behave differently while pulling the exact
> same image, it would be a good idea to debug the default parameters of
> those two.
>
> The inspection also showed quay URLs; is the internal registry configured
> to fall back to quay?
>
> Another thing: you mentioned the command worked fine only on the admin
> node, and you're pulling images from the admin node, right? So adminnode
> is also the registry? What are you using as the internal registry? What
> is the registry configuration?
>
> Are you sure that pulling the image from the admin node (which I think
> is the same node as the registry) uses the local copy rather than going
> out to quay?
>
> Cheers.
>
> ---
> Alvaro Soto.
>
> Note: My work hours may not be your work hours. Please do not feel the
> need to respond during a time that is not convenient for you.
> ----------------------------------------------------------
> Great people talk about ideas,
> ordinary people talk about things,
> small people talk... about other people.
>
> On Sun, Jul 31, 2022, 3:37 AM Hossein Dehghanpoor <
> hossein.dehghanpoor@xxxxxxxxx> wrote:
>
>> Sure it is :D
>> I am sure there is something wrong with the local registry, something
>> related to the manifest.
>> Any help would be appreciated.
>>
>> On Sun, Jul 31, 2022 at 1:00 PM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
>>
>> >
>> > Is it not easier to start by installing from RPMs?
>> >
>> >
>> > >
>> > > Hi guys,
>> > > I am going to deploy a local image repository using this guide:
>> > >
>> > > https://docs.ceph.com/en/latest/cephadm/install/#deployment-in-an-isolated-environment
>> > >
>> > > So I did everything to make my repository available to all cluster
>> > > nodes, so that each node would be able to pull the image (which I had
>> > > already pushed to my local repository) by tag.
>> > >
>> > > But there is a problem when I try to create my own cluster using this
>> > > command:
>> > >
>> > > cephadm --image adminnode:5000/ceph:latest bootstrap --mon-ip 10.0.40.10
>> > >
>> > > This command executes successfully on the admin node, but when I try
>> > > to add other hosts to the cluster, they fail to pull the image. After
>> > > a bit of digging I figured out that the other nodes try to pull from
>> > > the local repository using the image digest. I then checked and found
>> > > that I could not download the images from my local repository using
>> > > the digest.
>> > >
>> > > how can I solve this issue?
>> > >
>> > > thank you in advance. <3
>> > > _______________________________________________
>> > > ceph-users mailing list -- ceph-users@xxxxxxx
>> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>> >
>>
>


