Re: Fwd: tendrl strategy for ceph cluster map hierarchy

On 03/08/2017 03:52 PM, Sage Weil wrote:
> Usually the OSDs know enough to put the devices under the correct host, 
> but usually hosts don't know where they exist within the larger cluster.  
> (There are hooks to do this on the host, but it currently relies on 
> something like ansible or chef or puppet to put this somewhere in /etc so 
> that we can tell which rack etc the host lives in.)  That means it's 
> usually the admin who ensures the hosts are positioned properly in the 
> overall hierarchy.  So probably the first thing would be to make the GUI 
> let you create racks/rows/datacenters/whatever and drag parts of the tree 
> around.

That's a good point. Right now Tendrl is in an early stage of
development and doesn't have this yet. That said, we need to consider
this feature from the beginning.

I expect that ceph-ansible and ceph-installer provide a way to
achieve this (for the node to know where in the hierarchy it is, so
that Ceph can place it in the correct position in the CRUSH hierarchy).
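
Just to make the idea concrete, here is a minimal sketch (my own, not
something Tendrl or ceph-ansible does today; the host and rack names
are made up) of how a management layer could place a host under a rack
by shelling out to the ceph CLI:

    # Sketch only: place a host bucket under a rack in the CRUSH map by
    # calling the ceph CLI. Host/rack names are hypothetical.
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its output."""
        return subprocess.check_output(("ceph",) + args)

    def place_host(host, rack, root="default"):
        # add-bucket may fail if the rack bucket already exists,
        # so ignore that case.
        try:
            ceph("osd", "crush", "add-bucket", rack, "rack")
        except subprocess.CalledProcessError:
            pass
        # Hang the rack under the root, then the host under the rack.
        ceph("osd", "crush", "move", rack, "root=%s" % root)
        ceph("osd", "crush", "move", host, "rack=%s" % rack)

    place_host("node-42", "rack1")

The same effect could of course come from ceph-ansible/ceph-installer
or from a crush location hook on the host; the sketch is only about
where such a step would sit in the workflow.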

To Tendrl team: do we have a plan for such feature?

> As far as visualizing the cluster, the key thing I think we should address 
> from the get-go is how to do it in a way that will scale gracefully to 
> clusters with 100s, 1000s, and 10000s of OSDs.  The most promising 
> thing I've seen (and admittedly I haven't seen much) was from a 
> paper someone (Loic?) sent around a few weeks ago:
> 
> 	http://www.aviz.fr/wiki/uploads/Teaching2014/bundles_infovis.pdf
> 
> See, for example Fig 1 and Fig 13a.  The circle grouping can scale down as 
> the cluster scales up (or you zoom in/out).  (And the actual subject 
> of the paper--data flows--can be applied to show things like data movement 
> during rebalancing/recovery or proposed CRUSH changes.)

Thanks for the link to the paper.

What is the opinion of the Tendrl design and GUI people about this
idea? We don't need a proper visualization of the cluster in the 1st
version, but when we start working on it, keeping scaling to 10000s
of OSDs in mind is a very good approach.
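
To make that a bit more concrete, here is a rough sketch (my own idea,
nothing that exists in Tendrl; it only assumes the JSON output of
"ceph osd tree -f json") of aggregating the CRUSH tree so that a GUI
could collapse whole racks or hosts into a single circle and expand
them only when the user zooms in:

    # Sketch only: turn "ceph osd tree -f json" output into a nested
    # structure where every bucket carries an OSD count, so the UI can
    # decide which subtrees to collapse at the current zoom level.
    import json
    import subprocess

    def load_tree():
        raw = subprocess.check_output(["ceph", "osd", "tree", "-f", "json"])
        nodes = {n["id"]: n for n in json.loads(raw.decode("utf-8"))["nodes"]}

        def build(node_id):
            node = nodes[node_id]
            children = [build(c) for c in node.get("children", [])]
            count = (1 if node["type"] == "osd"
                     else sum(c["osd_count"] for c in children))
            return {"name": node["name"], "type": node["type"],
                    "osd_count": count, "children": children}

        return [build(n["id"]) for n in nodes.values() if n["type"] == "root"]

With the osd_count available at every level, the frontend can render a
rack with hundreds of OSDs as a single bundle instead of hundreds of
dots, which is roughly the scaling behaviour the paper describes.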

-- 
Martin Bukatovic
USM QE team


