Re: Some questions about cephadm

In regard to
>
> From the reading you gave me I have understood the following:
> 1 - Set osd_memory_target_autotune to true, then set
> autotune_memory_target_ratio to 0.2.
> 2 - Or do the math. For my setup I have 384 GB per node, and each node has
> 4 NVMe disks of 7.6 TB; 0.2 of the memory split across the 4 OSDs is about
> 19.2 GB, so each OSD will get roughly 19 GB of memory.
>
> Question: Should I take into account the size of the disk when calculating
> the required memory for an OSD?
>
The memory in question is RAM, not disk space. To see the exact value
cephadm will use for the total memory when doing this autotuning (reported
in KB; it is multiplied by 1024 when actually used), you can run the
following on your machine:

[root@vm-00 ~]# cephadm gather-facts | grep memory_total
  "memory_total_kb": 40802184,

Cephadm then multiplies that value by the ratio and subtracts an amount for
every non-OSD daemon on the node. Specifically (taken from the code):

    min_size_by_type = {
        'mds': 4096 * 1048576,
        'mgr': 4096 * 1048576,
        'mon': 1024 * 1048576,
        'crash': 128 * 1048576,
        'keepalived': 128 * 1048576,
        'haproxy': 128 * 1048576,
    }
    default_size = 1024 * 1048576

so 1 GB for most daemons, with mgr and mds requiring more (although for
mds the `mds_cache_memory_limit` config option is used instead if it's set)
and some others requiring less. Whatever is left after all of that is then
divided by the number of OSDs deployed on the host. If the result ends up
too small, however, there is a floor it won't set the target below; I can't
remember off the top of my head what that is, possibly 4 GB.
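
To make that concrete for your setup, here is a minimal sketch of the
arithmetic described above (not the actual cephadm code; the non-OSD daemon
layout of one mon, one mgr and one crash daemon per node is an assumption,
so adjust it to whatever actually runs on your hosts):

    # Back-of-the-envelope version of the autotune math, NOT the cephadm code.
    # Assumed layout: 4 OSDs plus one mon, one mgr and one crash daemon.
    KB = 1024
    MiB = 1048576

    memory_total_kb = 384 * 1024 * 1024    # roughly what gather-facts reports on a 384 GB node
    ratio = 0.2                            # autotune_memory_target_ratio

    budget = memory_total_kb * KB * ratio  # bytes set aside for OSDs before subtractions
    budget -= 1024 * MiB                   # mon
    budget -= 4096 * MiB                   # mgr
    budget -= 128 * MiB                    # crash

    num_osds = 4
    osd_memory_target = budget / num_osds
    print(f"{osd_memory_target / 2**30:.1f} GiB per OSD")  # ~17.9 GiB (~19.2e9 bytes) here

So the per-OSD target lands a bit below the naive 0.2 x 384 / 4 figure,
because the mon/mgr/crash reservations come out of the budget first.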

On Mon, Feb 26, 2024 at 5:10 AM wodel youchi <wodel.youchi@xxxxxxxxx> wrote:

> Thank you all for your help.
>
> @Adam
> From the reading you gave me I have understood the following:
> 1 - Set osd_memory_target_autotune to true, then set
> autotune_memory_target_ratio to 0.2.
> 2 - Or do the math. For my setup I have 384 GB per node, and each node has
> 4 NVMe disks of 7.6 TB; 0.2 of the memory split across the 4 OSDs is about
> 19.2 GB, so each OSD will get roughly 19 GB of memory.
>
> Question: Should I take into account the size of the disk when calculating
> the required memory for an OSD?
>
>
> I have another problem: the local registry. I deployed a local registry
> with the required images, then I used cephadm-ansible to prepare my hosts
> and inject the local registry URL into the /etc/container/registry.conf file.
>
> Then I tried to deploy using this command on the admin node:
> cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap --mon-ip
> 10.1.0.23 --cluster-network 10.2.0.0/16
>
> After the bootstrap I found that it still downloads the images from the
> internet, even the ceph image itself; I see two images, one from my registry
> and the second from quay.
>
> There is a section that talks about using a local registry here:
> https://docs.ceph.com/en/reef/cephadm/install/#deployment-in-an-isolated-environment
> but it's not clear, especially about the other images. It talks about
> preparing a temporary file named initial-ceph.conf, but then it never
> uses it?!
>
> Could you help?
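
Regarding the monitoring images: one thing worth trying (a sketch only; the
option names come from the isolated-deployment docs you linked, while the
registry paths and tags below are placeholders for whatever you mirrored) is
to list your local images in initial-ceph.conf and hand that file to
bootstrap with --config:

    # initial-ceph.conf -- registry paths and tags are placeholders, adjust to your mirror
    [mgr]
    mgr/cephadm/container_image_prometheus = 192.168.2.36:4000/prometheus/prometheus:<tag>
    mgr/cephadm/container_image_grafana = 192.168.2.36:4000/ceph/ceph-grafana:<tag>
    mgr/cephadm/container_image_alertmanager = 192.168.2.36:4000/prometheus/alertmanager:<tag>
    mgr/cephadm/container_image_node_exporter = 192.168.2.36:4000/prometheus/node-exporter:<tag>

    cephadm --image 192.168.2.36:4000/ceph/ceph:v17 bootstrap \
        --mon-ip 10.1.0.23 --cluster-network 10.2.0.0/16 \
        --config initial-ceph.conf

The same mgr/cephadm/container_image_* options can also be changed later
with `ceph config set mgr ...` if the cluster is already bootstrapped.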
>
> Regards.
>
> On Thu, Feb 22, 2024 at 11:10 AM, Eugen Block <eblock@xxxxxx> wrote:
>
> > Hi,
> >
> > just responding to the last questions:
> >
> > >    - After the bootstrap, the Web interface was accessible :
> > >       - How can I access the wizard page again? If I don't use it the
> > first
> > >       time I could not find another way to get it.
> >
> > I don't know how to recall the wizard, but you should be able to
> > create a new dashboard user with your desired role (e.g.
> > administrator) from the CLI:
> >
> > ceph dashboard ac-user-create <username> [<rolename>] -i
> > <file_with_password>
> >
> > >       - I had a problem with telemetry, I did not configure telemetry,
> > then
> > >       when I clicked the button, the web gui became
> > inaccessible.....????!!!
> >
> > You can see what happened in the active MGR log.
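
(In case it helps: with cephadm that usually means something like
`cephadm logs --name mgr.<host>.<id>` on the node running the active mgr,
or `journalctl -u ceph-<fsid>@mgr.<host>.<id>`; the daemon name here is a
placeholder, `ceph orch ps` will show the real one.)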
> >
> > Quoting wodel youchi <wodel.youchi@xxxxxxxxx>:
> >
> > > Hi,
> > >
> > > I have some questions about ceph using cephadm.
> > >
> > > I used to deploy ceph using ceph-ansible, now I have to move to
> cephadm,
> > I
> > > am in my learning journey.
> > >
> > >
> > >    - How can I tell my cluster that it's a part of an HCI deployment?
> > With
> > >    ceph-ansible it was easy using is_hci : yes
> > >    - The documentation of ceph does not indicate what versions of
> > grafana,
> > >    prometheus, ...etc should be used with a certain version.
> > >       - I am trying to deploy Quincy, I did a bootstrap to see what
> > >       containers were downloaded and their version.
> > >       - I am asking because I need to use a local registry to deploy
> > those
> > >       images.
> > >    - After the bootstrap, the Web interface was accessible :
> > >       - How can I access the wizard page again? If I don't use it the
> > first
> > >       time I could not find another way to get it.
> > >       - I had a problem with telemetry, I did not configure telemetry,
> > then
> > >       when I clicked the button, the web gui became
> > inaccessible.....????!!!
> > >
> > >
> > >
> > > Regards.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



