Re: RFC: tool for applying 'ceph daemon <osd>' command to all OSDs

On 12/21/2015 11:29 PM, Gregory Farnum wrote:
> On Mon, Dec 21, 2015 at 9:59 PM, Dan Mick <dmick@xxxxxxxxxx> wrote:
>> I needed something to fetch current config values from all OSDs (sorta
>> the opposite of 'injectargs --key value'), so I hacked it, and then
>> spiffed it up a bit.  Does this seem like something that would be useful
>> in this form in the upstream Ceph, or does anyone have any thoughts on
>> its design or structure?
>>
>> It requires a locally-installed ceph CLI and a ceph.conf that points to
>> the cluster and any required keyrings.  You can also provide it with
>> a YAML file mapping host to osds if you want to save time collecting
>> that info for a statically-defined cluster, or if you want just a subset
>> of OSDs.
>>
>> https://github.com/dmick/tools/blob/master/osd_daemon_cmd.py
>>
>> Excerpt from usage:
>>
>> Execute a Ceph osd daemon command on every OSD in a cluster with
>> one connection to each OSD host.
>>
>> Usage:
>>     osd_daemon_cmd [-c CONF] [-u USER] [-f FILE] (COMMAND | -k KEY)
>>
>> Options:
>>    -c CONF   ceph.conf file to use [default: ./ceph.conf]
>>    -u USER   user to connect with ssh
>>    -f FILE   get names and osds from yaml
>>    COMMAND   command other than "config get" to execute
>>    -k KEY    config key to retrieve with config get <key>
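
For the -f option, a mapping from each host to the OSD ids it carries is enough; a hypothetical YAML layout (check the script itself for the exact format it expects) might be:

```yaml
# host -> list of OSD ids on that host (illustrative only)
storage-node-1: [0, 1, 2]
storage-node-2: [3, 4]
```

Supplying only a subset of hosts/OSDs here is what restricts the run to that subset.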
> 
> I naively like the functionality being available, but if I'm skimming
> this correctly it looks like you're relying on the local node being
> able to passwordless-ssh to all of the nodes, and for that account to
> be able to access the ceph admin sockets. Granted we rely on the ssh
> for ceph-deploy as well, so maybe that's okay, but I'm not sure in
> this case since it implies a lot more network openness.

Yep; it's basically the same model and role assumed as "cluster destroyer".
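
For the archives, the fan-out idea is roughly this (a simplified sketch with made-up helper names, not the actual code -- see the github link above for the real thing): group OSDs by host, then open one ssh connection per host that runs the admin-socket command for every OSD on it.

```python
#!/usr/bin/env python
# Simplified sketch of the one-connection-per-host fan-out.
# Helper names are hypothetical; the real script lives at the github URL above.
import subprocess

def build_remote_command(osds, command):
    """Chain 'ceph daemon osd.N <command>' for every OSD on one host
    into a single remote shell command."""
    return " && ".join(
        "ceph daemon osd.%d %s" % (osd, command) for osd in osds
    )

def run_on_all_hosts(host_to_osds, command, user=None):
    """One ssh connection per host; relies on passwordless ssh and an
    account that can read the OSD admin sockets (the model discussed here)."""
    results = {}
    for host, osds in sorted(host_to_osds.items()):
        target = "%s@%s" % (user, host) if user else host
        remote = build_remote_command(osds, command)
        results[host] = subprocess.check_output(["ssh", target, remote])
    return results
```

So e.g. `run_on_all_hosts({"node1": [0, 1]}, "config get osd_max_backfills")` would make one ssh call to node1 running both `ceph daemon osd.0 ...` and `ceph daemon osd.1 ...`.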

> Relatedly (perhaps in an opposing direction), maybe we want anything
> exposed over the network to have some sort of explicit permissions
> model?

Well, I've heard that idea floated about the admin socket for years, but
I don't think anyone's hot to add cephx to it :)

> Maybe not and we should just ship the script for trusted users. I
> would have liked it on the long-running cluster I'm sure you built it
> for. ;)

it's like you're clairvoyant.



