Re: Boat anchor redefined in ceph-deploy


 



> On Jul 9, 2015, at 12:45 PM, Owen Synge <osynge@xxxxxxxx> wrote:
> 
> Typo:
> 
> On 07/09/2015 09:37 PM, Owen Synge wrote:
>> Dear all,
>> 
>> There are other details to be discussed, and hopefully agreed
>> upon, but let's get to issue #1. The style issues still apply to
>> ceph and ceph-deploy.
>> 
>> From what you said, in my opinion the "boat anchor" in ceph-deploy is
>> redefined as the coupling of the facade pattern, where all data is
>> available, to the ssh loop in a connection. This is probably the
>> biggest single architectural issue in ceph-deploy.
>> 
>> Travis Rhoden stated that the modules are imported as objects as they
>> are "instantiated". I should check this; if true, it is very good news
>> and removes many objections to the outcome.

I went back through my previous messages to make sure I knew what I said.  What I found was:

> Furthermore, I don’t think it’s the facade paradigm that would limit you to one host at a time at all.  It’s one host module (facade) instantiated per host.  You could do many of these at once — I don’t see any reason why you couldn’t.  I’ve never tried it out, and I don’t know if python-remoto handles concurrency, but I don’t think the facade paradigm prevents it.

As you say, it would be very good news if things were indeed instantiated, but I am afraid I was wrong and it is not good news.  As I’ve started to re-familiarize myself with this bit of the code base (it has been a while), I’m starting to better understand your points.  Each remote connection is indeed handled by a direct assignment of the needed module (be it CentOS, Debian, etc.), and the module is used directly.  It is not an instance of a class.
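To illustrate the distinction with a minimal sketch (the names here are made up for illustration, not the actual ceph-deploy code): a module picked by direct assignment is a process-wide singleton, so every “connection” shares its state, whereas a class-based facade could carry per-host state:

```python
import types

# Stand-in for a distro host module (e.g. a "centos" module).
mod = types.ModuleType("fake_centos")
mod.current_host = None  # module-level state: one copy for the whole process

def connect_via_module(hostname):
    # Direct assignment of the module, as described above -- no instantiation.
    mod.current_host = hostname  # a second call clobbers the first
    return mod

# What a class-based facade would give instead: state per instance.
class HostFacade:
    def __init__(self, hostname):
        self.current_host = hostname  # per-host, safe to hold many at once

a = connect_via_module("node1")
b = connect_via_module("node2")
print(a is b, a.current_host)   # the same object; "node1" has been lost

x, y = HostFacade("node1"), HostFacade("node2")
print(x is y, x.current_host, y.current_host)  # distinct objects, distinct state
```

That shared module-level state is exactly why doing two hosts at once would not be safe in the current layout.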

This does have the drawbacks that you’ve mentioned — that you can only do one at a time, that variables would not be thread safe, etc.  This has not been an issue thus far since adding concurrency to ceph-deploy hasn’t really been on the road map.  As far as being an “architectural issue”, it depends on where ceph-deploy is going.  When the current host module paradigm was put in place, it was a *vast* improvement over what was there before, but that does not mean it always has to be that way.

I would be against making any major changes in the current 1.5.x series, but we could start to talk about an improved 1.6.x series.  There are other non-backwards compatible changes that I’ve been considering as well, in addition to wanting to remove some deprecated code.

>> 
>> The discussion of point 
> 
> 	2) façade requires code layout inflexibility.
> 
>> is still worth continuing in a separate thread, as it is
>> important enough to require discussion, but it is a matter of
>> style and good practice rather than a Boat Anchor level
>> problem.
>> 
>> Many other topics are unaffected.

I think the biggest challenge that you and I may run into is an impedance mismatch on priority for getting architectural changes into ceph-deploy. It’s pretty low priority for me, job-wise, and would be something that would happen over the course of a few months, with a few changes here and there.  Getting bug fixes in, keeping pace with new features in upstream Ceph, and improving usability in the 1.5.x series does get much more attention from me since we have community users (and downstream products) using it every day.

>> 
>> On 07/09/2015 07:00 PM, Travis Rhoden wrote:
>>>> (1A) You have to close one facade to start another, e.g. in ceph-deploy
>>>> you have to close each connection before connecting to the next server,
>>>> making it slow to use as all state has to be gathered.
>>> concurrency has come up before in ceph-deploy.  It has been our explicit goal to make ceph-deploy as simple and *clear* as possible for users, with one of its main purposes being extreme verbosity that essentially *teaches* a user how to deploy a Ceph cluster.  That’s why it prints everything it does by default, shows every remote command, and prints the output back in order.  Concurrency would muddy those waters, though we do all want things to go faster.
>>> 
>>> It is not necessarily the facade pattern that is the limitation there — it is the implementation within ceph-deploy.  We simply do a “for host in hostnames…” loop everywhere — it doesn’t matter what we are using underneath, we are doing one SSH connection at a time.
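A rough sketch of that sequential loop, alongside what a concurrent variant might look like (`run_remote` is a placeholder here, not a real ceph-deploy or python-remoto call, and I haven’t checked whether python-remoto tolerates concurrent use):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_remote(host):
    # Stand-in for: open SSH connection, run command, close connection.
    time.sleep(0.05)
    return f"{host}: ok"

hosts = ["mon1", "osd1", "osd2"]

# Today's pattern: one host at a time, output naturally in order.
serial = [run_remote(h) for h in hosts]

# A concurrent variant: faster wall-clock, but ordered output must be
# reassembled (pool.map happens to preserve input order).
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel = list(pool.map(run_remote, hosts))

print(serial == parallel)  # same results either way
```

The facade itself is orthogonal to this: swapping the loop body for a thread pool changes the scheduling, not the per-host interface.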
>> 
>> Best regards
>> 
>> Owen
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> 
> 
> -- 
> SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer,
> HRB 21284 (AG Nürnberg)
> 
> Maxfeldstraße 5
> 90409 Nürnberg
> Germany



