RE: disk enclosure LEDs

> -----Original Message-----
> From: John Spray [mailto:jspray@xxxxxxxxxx]
> Sent: Monday, December 05, 2016 4:20 PM
> To: Allen Samuels <Allen.Samuels@xxxxxxxxxxx>
> Cc: Lars Marowsky-Bree <lmb@xxxxxxxx>; Ceph Development <ceph-
> devel@xxxxxxxxxxxxxxx>
> Subject: Re: disk enclosure LEDs
> 
> On Mon, Dec 5, 2016 at 10:50 PM, Allen Samuels
> <Allen.Samuels@xxxxxxxxxxx> wrote:
> >> -----Original Message-----
> >> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-
> >> owner@xxxxxxxxxxxxxxx] On Behalf Of John Spray
> >> Sent: Monday, December 05, 2016 11:28 AM
> >> To: Lars Marowsky-Bree <lmb@xxxxxxxx>
> >> Cc: Ceph Development <ceph-devel@xxxxxxxxxxxxxxx>
> >> Subject: Re: disk enclosure LEDs
> >>
> >> On Mon, Dec 5, 2016 at 6:41 PM, Lars Marowsky-Bree <lmb@xxxxxxxx>
> >> wrote:
> >> > On 2016-12-05T18:02:08, Allen Samuels <Allen.Samuels@xxxxxxxxxxx>
> >> wrote:
> >> >
> >> >> I'm indifferent to agent vs. agent-less.
> >> >>
> >> >> I *believe* that having a ceph-private distribution is
> >> >> easier/simpler/more reliable than trying to layer over some other
> >> >> system (ansible, salt, etc.) [i.e., I agree with John]. But this
> >> >> isn't a strongly held belief.
> >> >>
> >> >> I'm *metaphysically certain* that whatever distribution scheme is
> >> >> adopted must not be optional. A large barrier to adoption of Ceph
> >> >> today is the lack of "middle-ware" that handles infrequent
> >> >> operational events (node addition/removal, media failure/recovery,
> >> >> migration, etc.). IMO, this middle-ware will have to be a standard
> >> >> part of Ceph, i.e., fully functional "out of the box" without
> >> >> site-specific twiddling (though having a mechanism to insert
> >> >> site-specific stuff is fine with me, it just can't be *required*).
> >> >>
> >> >> In my mind, the distribution scheme is the next step in the
> >> >> evolution of Ceph-mgr. It's what's missing :)
> >> >
> >> > I see the benefits of having a ceph-specific agent for hardware
> >> > interaction. However, that then shifts the problem to bootstrapping
> >> > said Ceph agent.
> >>
> >> Bootstrapping would be the same as we already have for installing
> >> OSDs and MDSs.  So ceph-deploy/ceph-ansible/whatever needs to be able
> >> to do the same thing for the per-host agent that it currently does
> >> for OSDs; no overall increase in complexity.
> >>
> >> > And when you open the can of worms that is server addition/removal,
> >> > etc., we start hitting the question of spinning up a distribution
> >> > mechanism as well.
> >> >
> >> > When we want to look at container-izing Ceph in hyper-converged
> >> > environments, this gets even worse.
> >>
> >> I'm imagining a container-per-service model where something external
> >> has configured the OSD containers to have access to the block devices
> >> they will run on; in that world it doesn't seem unreasonable to have
> >> the same configuration process set up the ceph agent container with
> >> access to all the OSD block devices.  What are your thoughts about how
> >> this would (or wouldn't) work?
> >
> > The current OSD design is per-drive and not-reliable. We need a piece
> > of software, running on the target system, that's NOT per-drive and NOT
> > not-reliable (i.e., reliable :)). We need the management system to be
> > able to dig out of the OSD's system why it crashed -- i.e., read logs
> > and other kinds of status, etc. It may be possible to mutate the OSD
> > into that role, but I don't think that will be easy or happen soon.
> 
> I think I've lost you there -- what's the relation between what you've just
> said and the issue of containerisation?

Perhaps none; just that the container world tends to want to ignore box boundaries, while storage management doesn't have that luxury.
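
For what it's worth, here is a rough sketch of the kind of host-level poking such an agent ends up doing (illustrative only: it assumes the kernel's ses module exposes the enclosure under /sys/class/enclosure, so a containerized agent would need the host's /sys and the raw devices handed to it):

#!/usr/bin/env python3
# Sketch: map a block device name (e.g. "sdb") to its enclosure slot in sysfs.
# Assumes the "ses" kernel module is loaded; run as root on the host (or in a
# container that has the host's /sys bind-mounted).
import os
import sys

ENCLOSURE_ROOT = "/sys/class/enclosure"

def find_slot(block_dev):
    """Return the sysfs path of the slot holding block_dev, or None."""
    if not os.path.isdir(ENCLOSURE_ROOT):
        return None
    for encl in os.listdir(ENCLOSURE_ROOT):
        encl_path = os.path.join(ENCLOSURE_ROOT, encl)
        for entry in os.listdir(encl_path):
            slot_path = os.path.join(encl_path, entry)
            # Slot components carry a "locate" attribute; skip everything else.
            if not os.path.isfile(os.path.join(slot_path, "locate")):
                continue
            # An occupied slot has a "device" symlink to the SCSI disk, which
            # exposes its block name under device/block/.
            block_dir = os.path.join(slot_path, "device", "block")
            if os.path.isdir(block_dir) and block_dev in os.listdir(block_dir):
                return slot_path
    return None

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sdb"
    print(find_slot(dev) or "%s: not in any enclosure" % dev)

That lookup is exactly the kind of thing that doesn't respect container boundaries.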

> 
> John
> 
> >>
> >> >
> >> > e.g., the cephalopod turns into a cephaloblob.  (Sorry. I'm
> >> > terrible with puns.)
> >> >
> >> > I need a mechanism for interacting with enclosures (to stick with
> >> > the example), but I don't need it to be part of Ceph, since I need
> >> > it for other parts of my infrastructure too anyway.
> >> >
> >> >
> >> > If it's part of Ceph, I end up writing a special case for Ceph.
> >>
> >> I think this would cease to be a problem for you if we just had a flag
> >> in Ceph to disable its own smartmontools-type stuff?  That way, when
> >> someone is using an external tool, there would be no conflict.
> >>
> >> There is some duplication of effort, but I don't think that's
> >> intrinsically problematic: I predict that we'll always have many users
> >> who do not take up any of the external tools and will benefit from the
> >> built-in Ceph bits.
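
For concreteness, the "built-in smartmontools-type stuff" can start out very small. A sketch only, assuming smartmontools >= 7.0 (for --json output) and that the agent can see the raw device nodes:

#!/usr/bin/env python3
# Sketch: poll basic SMART health by shelling out to smartctl (no error
# handling; smartctl's nonzero exit codes are informational, so don't check).
import json
import subprocess
import sys

def smart_health(dev):
    """Return (passed, raw_json) for a device node such as /dev/sdb."""
    out = subprocess.run(["smartctl", "--json", "--health", dev],
                         capture_output=True, text=True)
    data = json.loads(out.stdout)
    return data.get("smart_status", {}).get("passed", False), data

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    ok, _ = smart_health(dev)
    print("%s: %s" % (dev, "healthy" if ok else "failing or unknown"))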
> >>
> >> > And I need a way to handle it when Ceph itself isn't around yet;
> >> > how do I blink an enclosure that receives a new disk? Ah, I
> >> > pre-register a given enclosure with Ceph, before an OSD is even
> >> > created. I know Ceph has many tentacles, but ... ;-)
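
And to Lars's point about Ceph not being around yet: the blink itself needs nothing from Ceph at all. A sketch, under the same /sys/class/enclosure assumption as above (slot naming varies by enclosure, so the example path is illustrative):

#!/usr/bin/env python3
# Sketch: toggle the "locate" LED of one enclosure slot via sysfs, e.g.
#   python3 locate.py /sys/class/enclosure/5:0:10:0/Slot01 on
# Needs root, but no running Ceph.
import sys

def set_locate(slot_path, on=True):
    """Write the slot's locate attribute: 1 = blink, 0 = off."""
    with open(slot_path + "/locate", "w") as f:
        f.write("1" if on else "0")

if __name__ == "__main__":
    set_locate(sys.argv[1], on=(len(sys.argv) < 3 or sys.argv[2] != "off"))

The hard part is the policy -- deciding when and what to blink -- not the mechanism.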
> >>
> >> While at runtime we shouldn't have two agents competing to manage the
> >> same device, I think it is reasonable to have a separate piece of
> >> software that does installation versus one that does the ongoing
> >> monitoring.  We shouldn't let the constraints of installation
> >> (especially the need to operate on cephless machines) restrict how we
> >> manage systems through their life cycles.  Again, I don't think the
> >> built-in Ceph functionality is mutually exclusive with having a good
> >> external installation tool that touches some of the same functionality.
> >>
> >> John
> >>
> >>
> >> >
> >> >
> >> > Regards,
> >> >     Lars
> >> >
> >> > --
> >> > SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham
> >> > Norton, HRB 21284 (AG Nürnberg) "Experience is the name everyone
> >> > gives to their mistakes." -- Oscar Wilde
> >> >