Re: Centralized config mask not being applied to host


 



Thank you for the confirmation. In my case, using either the device class (class:hdd vs class:ssd) or a top-level root (default vs default-ssd) might be good enough. But we *do* have hosts with differing amounts of memory too, so it would be great if this could be fixed and patched!
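
(For anyone reading along, roughly what I mean by that is settings along these lines - the values here are just placeholders, not our real numbers:

sudo ceph config set osd/class:hdd osd_memory_target 2147483648
sudo ceph config set osd/class:ssd osd_memory_target 4294967296

or, using a separate top-level root:

sudo ceph config set osd/root:default-ssd osd_memory_target 4294967296
)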

Sorry - while I did look for posts concerning this before sending this one, I clearly didn't look back far enough!

regards

Mark

On 26/11/21 16:36, Richard Bade wrote:
Hi Mark,
I have noticed exactly the same thing on Nautilus, where the host mask didn't
work but chassis did. I posted to this mailing list a few weeks ago.
It's very strange that the host filter is not working. I also could
not find any errors logged for it, so it looks like the setting is
just being ignored.
I don't know if this is still a problem in Octopus or Pacific as I
have not had a chance to upgrade our dev cluster yet.

My workaround is that I've got a chassis level in my crush hierarchy above
host, and the settings that differ are the same across each chassis -
something like the sketch below. This may not suit your situation though.
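
Roughly, in case it helps (the chassis name here is just an example, and the target value is the one from your test):

ceph osd crush add-bucket chassis-a chassis
ceph osd crush move chassis-a root=default
ceph osd crush move ceph2 chassis=chassis-a
ceph config set osd/chassis:chassis-a osd_memory_target 1073741824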

Rich

On Fri, 26 Nov 2021 at 15:23, Mark Kirkwood
<markkirkwood@xxxxxxxxxxxxxxxx> wrote:
Hi all,

I'm looking at doing a Luminous to Nautilus upgrade. I'd like to
assimilate the config into the mon db. However, we do have hosts with
differing [osd] config sections in their current ceph.conf files, so I was
looking at using the crush type mask host:xxx to set these differently
where required.
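
(For the assimilation step itself I'm planning on something like the following on each node, assuming the standard /etc/ceph/ceph.conf location:

sudo ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.minimal

with the minimised output file then replacing the original ceph.conf once I'm happy with it.)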

However, in my test the mask is not being applied to the particular host, e.g.:

markir@ceph2:~$ sudo ceph config set osd/host:ceph2 osd_memory_target 1073741824

markir@ceph2:~$ sudo ceph config dump
WHO     MASK        LEVEL     OPTION                                          VALUE             RO
global              advanced  auth_client_required                            cephx             *
global              advanced  auth_cluster_required                           cephx             *
global              advanced  auth_service_required                           cephx             *
global              advanced  cluster_network                                 192.168.124.0/24  *
global              advanced  osd_pool_default_size                           2
global              advanced  public_network                                  192.168.123.0/24  *
  mon               advanced  mon_warn_on_insecure_global_id_reclaim          false
  mon               advanced  mon_warn_on_insecure_global_id_reclaim_allowed  false
  osd   host:ceph2  basic     osd_memory_target                               1073741824


markir@ceph2:~$ sudo ceph config get osd.1
WHO     MASK        LEVEL     OPTION                 VALUE             RO
global              advanced  auth_client_required   cephx             *
global              advanced  auth_cluster_required  cephx             *
global              advanced  auth_service_required  cephx             *
global              advanced  cluster_network        192.168.124.0/24  *
osd     host:ceph2  basic     osd_memory_target      1073741824
global              advanced  osd_pool_default_size  2
global              advanced  public_network         192.168.123.0/24  *

markir@ceph2:~$ sudo ceph config show osd.1
NAME                   VALUE              SOURCE    OVERRIDES  IGNORES
auth_client_required   cephx              mon
auth_cluster_required  cephx              mon
auth_service_required  cephx              mon
cluster_network        192.168.124.0/24   mon
daemonize              false              override
keyring                $osd_data/keyring  default
leveldb_log                               default
mon_host               192.168.123.20     file
mon_initial_members    ceph0              file
osd_pool_default_size  2                  mon
public_network         192.168.123.0/24   mon
rbd_default_features   61                 default
setgroup               ceph               cmdline
setuser                ceph               cmdline

If I use a different mask, e.g. osd/class:hdd or even osd/root:default,
then the setting *is* applied. I'm scratching my head about this - there are
no errors in either the mon or osd logs to indicate why it is not being applied.
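
For example, this variant (same value, just a class mask instead of the host mask) does show up for osd.1 afterwards:

markir@ceph2:~$ sudo ceph config set osd/class:hdd osd_memory_target 1073741824
markir@ceph2:~$ sudo ceph config show osd.1 | grep osd_memory_target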

This is ceph 14.2.22, and osd.1 really is on host ceph2:

markir@ceph2:~$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.11719 root default
-3       0.02930     host ceph1
   0   hdd 0.02930         osd.0      up  1.00000 1.00000
-5       0.02930     host ceph2
   1   hdd 0.02930         osd.1      up  1.00000 1.00000
-7       0.02930     host ceph3
   2   hdd 0.02930         osd.2      up  1.00000 1.00000
-9       0.02930     host ceph4
   3   hdd 0.02930         osd.3      up  1.00000 1.00000
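
(If anyone wants to dig further, the osd's view of its own location can also be cross-checked with something like:

markir@ceph2:~$ sudo ceph osd find 1
markir@ceph2:~$ sudo ceph osd metadata 1 | grep hostname
)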

regards

Mark



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


