Re: OSD (and probably other settings) not being picked up outside of the [global] section

Hello,


On Mon, 20 Oct 2014 17:09:57 -0700 Craig Lewis wrote:

> I'm still running Emperor, but I'm not seeing that behavior.  My
> ceph.conf is pretty similar:

Yeah, I tested things extensively with Emperor back in the day and at that
time frequently verified that changes in the config file were reflected in
the running configuration after a restart.

Until last week I of course blissfully assumed that this basic
functionality would still work in Firefly. ^o^

> [global]
>   mon initial members = ceph0
>   mon host = 10.129.0.6:6789, 10.129.0.7:6789, 10.129.0.8:6789
>   cluster network = 10.130.0.0/16
>   osd pool default flag hashpspool = true
>   osd pool default min size = 2
>   osd pool default size = 3
>   public network = 10.129.0.0/16
> 
> [osd]
>   osd journal size = 6144
>   osd mkfs options xfs = -s size=4096
>   osd mkfs type = xfs
>   osd mount options xfs = rw,noatime,nodiratime,nosuid,noexec,inode64
> 
> 
> 
> If you manually run ceph-disk-prepare and ceph-disk-activate, are the
> mkfs params being picked up?
> 
No idea, really; I will have to test that.
Of course with ceph-deploy (and I assume ceph-disk-prepare) the "activate"
bit is a bit of a misnomer, as the udev magic will happily activate an OSD
the instant it is created, even though I only ran "ceph-deploy osd
prepare ...".
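
For reference, this is roughly how I intend to test it. A sketch only: the
device, partition and mount point are assumptions, and on most setups udev
will auto-activate the OSD right after prepare anyway:

# prepare a single OSD by hand instead of via ceph-deploy
ceph-disk prepare /dev/sdb
# once it is mounted (by udev or by ceph-disk activate), check what
# actually ended up on the disk
mount | grep /var/lib/ceph/osd
tune2fs -l /dev/sdb1 | grep -i journal    # ext4 journal settings in my case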

> For the daemon configs, you can query a running daemon to see what its
> config params are:
> root@ceph0:~# ceph daemon osd.0 config get 'osd_op_threads'
> { "osd_op_threads": "2"}
> root@ceph0:~# ceph daemon osd.0 config get 'osd_scrub_load_threshold'
> { "osd_scrub_load_threshold": "0.5"}
> 
I'm of course aware of that; it is how I found out that the settings
weren't being picked up in the first place.
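
For anyone who wants to reproduce this, a quick way to compare what
ceph-conf parses out of the config file against what the running daemon
reports on its admin socket. Just a sketch; the OSD id and option names are
simply the ones from my setup:

for opt in osd_op_threads osd_scrub_load_threshold filestore_max_sync_interval; do
    echo "== $opt"
    ceph-conf --name osd.0 --lookup $opt    # value parsed from ceph.conf (incl. [osd])
    ceph daemon osd.0 config get $opt       # value the daemon is actually using
done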

> 
> While we try to figure this out, you can tell the running daemons to use
> your values with:
> ceph tell osd.\* injectargs '--osd_op_threads 10'
> 
That I'm also aware of, but for the time being having everything in
[global] resolves the problem and, more importantly, makes it reboot-proof.
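
In other words, for now the relevant part of my ceph.conf simply looks like
this (a trimmed-down sketch with the three runtime settings hoisted into
[global]):
---
[global]
# ... fsid, mon and network settings as before ...
osd_op_threads = 10
osd_scrub_load_threshold = 2.5
filestore_max_sync_interval = 10

[osd]
osd_mkfs_type = ext4
osd_mkfs_options_ext4 = -J size=1024 -E lazy_itable_init=0,lazy_journal_init=0
---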

Christian
> 
> 
> 
> On Thu, Oct 16, 2014 at 6:54 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> >
> > Hello,
> >
> > Consider this rather basic configuration file:
> > ---
> > [global]
> > fsid = e6687ef7-54e1-44bd-8072-f9ecab00815
> > mon_initial_members = ceph-01, comp-01, comp-02
> > mon_host = 10.0.0.21,10.0.0.5,10.0.0.6
> > auth_cluster_required = cephx
> > auth_service_required = cephx
> > auth_client_required = cephx
> > filestore_xattr_use_omap = true
> > mon_osd_downout_subtree_limit = host
> > public_network = 10.0.0.0/8
> > osd_pool_default_pg_num = 2048
> > osd_pool_default_pgp_num = 2048
> > osd_crush_chooseleaf_type = 1
> >
> > [osd]
> > osd_mkfs_type = ext4
> > osd_mkfs_options_ext4 = -J size=1024 -E lazy_itable_init=0,lazy_journal_init=0
> > osd_op_threads = 10
> > osd_scrub_load_threshold = 2.5
> > filestore_max_sync_interval = 10
> > ---
> >
> > Let's leave aside the annoying fact that ceph ignores the pg and pgp
> > settings when creating the initial pools, and that monitors are
> > preferred based on IP address rather than the order in which they're
> > listed in the config file.
> >
> > Interestingly, ceph-deploy correctly picks up the mkfs_options, but why
> > it fails to use the configured mkfs_type as the default is beyond me.
> >
> > The real issue is that the other three OSD settings are NOT picked up
> > by ceph on startup.
> > But they sure are when moved to the global section.
> >
> > Anybody else seeing this (both with 0.80.1 and 0.80.6)?
> >
> > Regards,
> >
> > Christian
> > --
> > Christian Balzer        Network/Systems Engineer
> > chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
> > http://www.gol.com/
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



