Re: giant release osd down


 



On 04/11/14 03:02, Sage Weil wrote:
On Mon, 3 Nov 2014, Mark Kirkwood wrote:

Ah, I missed that thread.  Sounds like three separate bugs:

- pool defaults not used for initial pools
- osd_mkfs_type not respected by ceph-disk
- osd_* settings not working

The last one is a real shock; I would expect all kinds of things to break
very badly if the [osd] section config behavior were not working.

I wonder if this sort of thing has escaped notice because ceph-deploy seems to plonk stuff into [global] only; I guess this acts as an implicit encouragement to have everything in there (e.g. I note in our production setup that we have the rbd_cache* settings in [global] instead of [client]).
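
For anyone comparing their own setup, a sketch of a ceph.conf laid out the way the thread suggests it should work, with daemon-specific options in their own sections rather than everything in [global], might look like this (the section names and option names are the real ones discussed above; the specific values are illustrative assumptions, not recommendations):

```
# ceph.conf -- illustrative layout only; values are assumptions
[global]
# cluster-wide defaults, read by all daemons and clients
osd_pool_default_size = 3
osd_pool_default_pg_num = 128

[osd]
# options intended only for OSD daemons
osd_mkfs_type = xfs

[client]
# client-side options, e.g. rbd cache behaviour
rbd_cache = true
```

If the bugs above are real, the [osd] and [client] placements here would be silently ignored in the affected versions, which is exactly why keeping everything in [global] has "worked" and masked the problem.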

regards

Mark




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



