Re: ssh passwords

From my perspective, I want to ensure that we have a script that helps
users get Ceph up and running as quickly as possible so they can play,
explore and evaluate it. With this goal in mind, I would prefer to
lean towards the KISS principle to reduce the potential failure
scenarios which a) deter casual users and b) generate lots of support
overhead.

This might be achieved in a number of ways: by separating the
one-time provisioning hooks from the ongoing configuration hooks, so
that users can insert their own tools at either stage; or by having a
more prescribed environment that still avoids unnecessarily forcing
choices like a security model (password-less SSH) on users.
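
To make the first option concrete (this is only a sketch, and the
commands below are illustrative rather than the exact ceph-deploy
syntax), the split might look something like:

    # one-time provisioning, owned by the script
    ceph-deploy install node1
    ceph-deploy mon create node1
    ceph-deploy osd prepare node1:/dev/sdb

    # ongoing configuration, left to whatever the user already runs
    # (Chef, Puppet, or just pushing an edited ceph.conf by hand)

The script's job would end once the daemons exist, so we never have to
impose an SSH or key-distribution model on the ongoing part.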

I think the production-readiness of the script should come more from
how many use cases it meets than from its robustness, but right now I
don't think the quality of the script is good enough for our main use
case: the first-time evaluator of Ceph.

I should have kicked off this conversation further in advance of our
chat tomorrow but am still interested to hear from other users what
they like/dislike about the script or what other tools they eventually
adopted for their Ceph provisioning or configuration management.

Neil

On Tue, Jan 22, 2013 at 7:43 PM, Travis Rhoden <trhoden@xxxxxxxxx> wrote:
> Since you are chatting about ceph-deploy tomorrow, I'll chime in with
> a bit more.
>
> I'm interested in ceph-deploy since it can be a light-weight,
> production-appropriate installer.  The docs repeatedly warn that
> mkcephfs is not intended for production clusters, and Neil reminds us
> that the expectation is that production clusters will likely use a
> config management tool.  Seems like that is most likely Chef, but I
> know there are others.
>
> I've always wondered "why" mkcephfs isn't suitable.  Looking at the
> Chef recipes and ceph-deploy, my best explanation is that these other
> tools use ceph-disk-prepare to take advantage of GPT and to label the
> disks for Ceph's use.  Then you can use the Upstart scripts to
> auto-recognize prepared disks and automatically add them to the cluster.
> This scales a lot better than having to add each disk (assigned to a
> node) in ceph.conf and using /etc/init.d/ceph to stop/start the
> cluster.   It also makes it quite a lot easier to add new OSDs to the
> cluster.  Is that about right?
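>
> In other words (going from memory, so treat the exact syntax below
> as approximate), the difference I'm picturing is roughly:
>
>     ceph-disk-prepare /dev/sdb   # GPT-label the disk; the Upstart
>                                  # hooks then activate it as an OSD
>
> versus the mkcephfs-era approach of enumerating every disk in
> ceph.conf and bouncing daemons through the init script:
>
>     [osd.0]
>         host = node1
>         devs = /dev/sdb
>
>     /etc/init.d/ceph -a start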
>
> If that's on the right track, I'm interested in ceph-deploy to achieve
> these goals because at the moment we're not interested in deploying
> Chef (or Puppet), great tools that they are.  Down the road, sure, but
> only once we have some in-house experience/expertise, which we
> currently do not.  Having a standalone tool that is just "simple"
> Python seems like a nice alternative.
>
> Those are my thoughts!
>
>  - Travis
>
> On Tue, Jan 22, 2013 at 7:09 PM, Neil Levine <neil.levine@xxxxxxxxxxx> wrote:
>> We're having a chat about ceph-deploy tomorrow. We need to strike a
>> balance between its being a useful tool for standing up a quick
>> cluster and its ignoring the UNIX philosophy by trying to do too much.
>>
>> My assumption is that for most production operations, or at the point
>> where people decide to invest in Ceph, users will already have
>> selected a parallel execution and/or configuration management tool.
>> Serving new or early PoC adopters, who perhaps don't want to wade
>> into wider operational-framework issues, is probably where the tool
>> is best focused.
>>
>> Neil
>>
>> On Tue, Jan 22, 2013 at 3:57 PM, Travis Rhoden <trhoden@xxxxxxxxx> wrote:
>>> On Tue, Jan 22, 2013 at 6:14 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
>>>> On Tue, 22 Jan 2013, Neil Levine wrote:
>>>>> Out of interest, would people prefer that the Ceph deployment script
>>>>> didn't try to handle server-to-server file copy and just did the local
>>>>> setup only, or is it useful that it tries to be a mini-config
>>>>> management tool at the same time?
>>>>
>>>> BTW, you can also run mkcephfs that way; the man page will let you run
>>>> individual steps and do the remote execution parts yourself.
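>>>>
>>>> Roughly, the staged flow described there looks like this (the
>>>> flags are from memory, so check the man page for the exact
>>>> invocation):
>>>>
>>>>     mkcephfs -c /etc/ceph/ceph.conf --prepare-monmap -d /tmp/mkfs
>>>>     # copy /tmp/mkfs to each node, then on each node:
>>>>     mkcephfs --init-local-daemons osd -d /tmp/mkfs
>>>>     mkcephfs --init-local-daemons mds -d /tmp/mkfs
>>>>     # copy the directories back to the first node, then:
>>>>     mkcephfs --prepare-mon -d /tmp/mkfs
>>>>     mkcephfs --init-local-daemons mon -d /tmp/mkfs
>>>>
>>>> with the copying between hosts done by scp or whatever you prefer.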
>>>>
>>>> But I'm also curious what people think of the 'normal' usage... anyone?
>>>>
>>>> sage
>>>>
>>>
>>> While I am interested to see where ceph-deploy goes, I do think
>>> mkcephfs in its current form is quite useful.  It does allow you to
>>> stand up decent-sized clusters with relative ease and is fairly fast.
>>> It has also come quite a ways since the pre-argonaut form -- the
>>> recent --mkfs additions coupled with the auto-mounting in
>>> /etc/init.d/ceph is pretty slick.  It was a nice discovery for me last
>>> week, as I hadn't created a cluster from scratch since 0.50 or so.
>>>
>>>  - Travis