Re: [Gluster-devel] Introducing Tendrl


 



On 09/20/2016 10:23 AM, Gerard Braad wrote:
> Hi Mrugesh,
>
> On Tue, Sep 20, 2016 at 3:10 PM, Mrugesh Karnik <mkarnik@xxxxxxxxxx> wrote:
>> I'd like to introduce the Tendrl project. Tendrl aims to build a
>> management interface for Ceph. We've pushed some documentation to the
>
> On Tue, Sep 20, 2016 at 3:15 PM, Mrugesh Karnik <mkarnik@xxxxxxxxxx> wrote:
>> I'd like to introduce the Tendrl project. Tendrl aims to build a
>> management interface for Gluster. We've pushed some documentation to
>
> It might help to introduce Tendrl as the "Universal Storage Manager"
> with a possibility to manage either Ceph and/or Gluster.
> I understand you want specific feedback, but a clear definition of the
> tool would be helpful.


(Apologies for reposting my response - gmail injected html into what I thought was a text reply and it bounced from ceph-devel.)

Hi Gerard,

I see the goal differently.

It is better to think of Tendrl as one component of a whole management application stack. At the bottom, we will have Ceph-specific components (ceph-mgr) and Gluster-specific components (glusterd), as well as other local storage/file system components like libstoragemgmt and so on.

Tendrl is the next layer up from that, but it is itself meant to be consumed by presentation layers. For the standalone product we hope to ship at Red Hat, there will be a universal storage manager stack containing everything I mentioned above, as well as the GUI code.

Other projects will hopefully find this useful enough to plug some or all of the components into their own management stacks.
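To make the layering concrete, here is a minimal, purely illustrative Python sketch (none of these class or method names come from Tendrl; they are assumptions for the example): storage-specific backends sit at the bottom, a generic middle layer aggregates them, and any GUI or API on top consumes only the middle layer.

```python
# Hypothetical sketch, NOT actual Tendrl code: illustrates the three-layer
# split described above (storage-specific bottom, generic middle,
# presentation on top).
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Bottom layer: wraps a storage-specific component (e.g. ceph-mgr, glusterd)."""

    @abstractmethod
    def list_clusters(self) -> list[str]:
        ...


class CephBackend(StorageBackend):
    def list_clusters(self) -> list[str]:
        # A real backend would query ceph-mgr here.
        return ["ceph-cluster-1"]


class GlusterBackend(StorageBackend):
    def list_clusters(self) -> list[str]:
        # A real backend would query glusterd here.
        return ["gluster-vol-1"]


class ManagementLayer:
    """Middle layer (the role Tendrl plays): aggregates registered backends
    so presentation layers never talk to storage components directly."""

    def __init__(self) -> None:
        self._backends: dict[str, StorageBackend] = {}

    def register(self, name: str, backend: StorageBackend) -> None:
        # Only the backends a deployment cares about get registered, so a
        # Ceph-only user never pulls in any Gluster code.
        self._backends[name] = backend

    def clusters(self) -> dict[str, list[str]]:
        return {name: b.list_clusters() for name, b in self._backends.items()}
```

A Ceph-only deployment would register just `CephBackend`, which is exactly the "don't get tied up in something outside your interest" property discussed below.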

From my point of view, the job is to provide as many re-usable components as possible, components that will be generically interesting to a wide variety of applications. It is definitely not about trying to make all storage stacks look the same, or forcing artificial new names/concepts/etc. on users. Of course, any one application will tend to have a similar "skin" for UX elements to try to keep things consistent for its users.

If we do it right, people passionate about Ceph but who don't care about Gluster will be able to avoid getting tied up in something outside their interest. The same goes the other way around for Gluster developers who don't care or know about Ceph. Over time, this might extend to other storage types like Samba or NFS Ganesha clusters, etc.

Regards,

Ric






