Introducing libstoragemgmt as a Ceph dependency

Hey ceph-devel,

Back around this time in 2015, I started asking questions about what the Ceph community would expect from hardware integration with an underlying storage system (my primary concern was blinking LEDs): http://www.spinics.net/lists/ceph-devel/msg23404.html

I received some good feedback, which turned into this blueprint: http://tracker.ceph.com/projects/ceph/wiki/Calamariapihardwarestorage

That set me down a winding path: pulling device paths for underlying devices via the OSD, pulling OSD metadata attributes into Calamari, and eventually arriving at libstoragemgmt (GitHub link: https://github.com/libstorage/libstoragemgmt).

libstoragemgmt covers what I perceived to be the initial requirements for an API to talk to underlying storage devices. It's vendor-agnostic, extensible, and provides C and Python bindings. Over the last six months or so, we've spent time in the libstoragemgmt source adding a set of APIs that submit SCSI Enclosure Services (SES) commands via the C and Python bindings. The first functionality introduced was a set of commands to enable and disable the IDENT and FAULT LEDs. These APIs sit alongside the vendor-specific interfaces that libstoragemgmt already provided (there's good support for Smart Array and MegaRAID, for example). We're working on NVMe support too, though that may slip out of the upcoming 1.3 release.
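
Since the API is new, here's a minimal sketch of what driving those LEDs looks like from the Python binding. I'm going from the 1.3-era names (lsm.LocalDisk.ident_led_on and friends), and since the upstream API has been shifting, treat the exact identifiers as approximate rather than gospel:

    # Minimal sketch, assuming the 1.3-era libstoragemgmt Python binding:
    # light the IDENT LED behind a disk, wait, then clear it.
    import sys
    import time
    from lsm import LocalDisk, LsmError

    disk_path = "/dev/sdb"  # block device backing an OSD, for example

    try:
        LocalDisk.ident_led_on(disk_path)   # submits the SES command
        time.sleep(10)                      # long enough to spot the drive
        LocalDisk.ident_led_off(disk_path)
    except LsmError as err:
        # Needs appropriate privileges and an enclosure that speaks SES.
        sys.exit("LED control failed on %s: %s" % (disk_path, err))

The FAULT LED variants (fault_led_on/fault_led_off) follow the same pattern.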

My involvement in libstoragemgmt, as mentioned earlier, has been aimed at integrating better hardware monitoring and management support into Ceph. I did some work on my own in a Ceph fork to integrate libstoragemgmt into the Ceph CLI that exists today (note that the APIs in upstream libstoragemgmt have changed since then, but the code would be conceptually similar): https://github.com/joehandzik/ceph/tree/wip-hw-mgmt-cli

John Spray has already mentioned to me that I should convert my message interface to use "tell" commands instead, but aside from that I think this is roughly what a final pull request would look like (with test framework code). I have tested this on HPE hardware, so the concept does work. All of this functionality will require an opt-in from the user via a ceph.conf value (to tell Ceph what storage device they're using). If the value is never set, the user will not be able to use these APIs.
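
To make the opt-in concrete, it might look something like this in ceph.conf; the option name below is purely hypothetical, and the final name would get settled during review:

    [osd]
    # Hypothetical option: tells Ceph what kind of storage device backs
    # this OSD so libstoragemgmt knows how to talk to it. Leaving it
    # unset keeps the LED functionality disabled (the default).
    osd_storage_device_type = ses

With the "tell" interface John suggested, usage would then be something along the lines of "ceph tell osd.0 <led-subcommand>", with the exact subcommand names still to be decided.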

I'm giving all the above for context, but what I really need to know from the ceph-devel community is: does anyone have any concerns with adding libstoragemgmt as a Ceph dependency? It would only work on the standard Linux flavors and distros (RHEL and CentOS, openSUSE and SLES, Debian and Ubuntu). The features I'm adding would be no-ops or disabled on other distros unless someone in the community finds a way to distribute libstoragemgmt for their operating environment or finds an adequate replacement for the functionality. It's also worth noting that I'm completely dependent on the previously mentioned 1.3 release of libstoragemgmt, which is why I've waited on this for a while (the blueprint was initially intended for Jewel).

I've copied the libstoragemgmt leads here; please feel free to ask any questions that come to mind. I'm looking to add the dependency and at least one or two features during the Kraken development cycle. In the future, my plan is to extend libstoragemgmt to support extracting SMART data from drives and expose that data through Ceph in some useful way.

Joe Handzik