Re: Brainstorming ideas for Python-CRUSH

I like the idea.

It would be useful to play around with different configuration options and use this tool as a sanity checker: show what will change, and whether the changes could push the cluster into HEALTH_WARN or HEALTH_ERR.

For example, if I were to change the replication level of a pool, it could report how much space would be left, along with an estimate of how long the rebalance would take.
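The space side of that request is back-of-the-envelope arithmetic that such a tool could automate. A minimal stand-alone sketch, with entirely made-up cluster numbers and a deliberately naive throughput model (this is not python-crush's API):

```python
# Hypothetical sketch: estimate usable capacity and rebalance time when
# changing a pool's replication size. All numbers below are invented for
# illustration, not taken from any real cluster or from python-crush.

def usable_after_resize(raw_tb, used_tb, old_size, new_size):
    """Return (new_used_tb, free_tb) after changing the replication size."""
    logical_tb = used_tb / old_size      # data stored once, without replicas
    new_used = logical_tb * new_size     # raw footprint after the change
    return new_used, raw_tb - new_used

def rebalance_hours(extra_tb, recovery_mb_s):
    """Naive estimate: extra raw data divided by aggregate recovery rate."""
    return extra_tb * 1024 * 1024 / recovery_mb_s / 3600

# Example: 400 TB raw, 120 TB used raw, going from size=2 to size=3.
new_used, free = usable_after_resize(raw_tb=400, used_tb=120, old_size=2, new_size=3)
print(f"used: {new_used:.0f} TB, free: {free:.0f} TB")   # used: 180 TB, free: 220 TB
print(f"~{rebalance_hours(new_used - 120, recovery_mb_s=500):.0f} h to rebalance")
```

A real tool would of course account for pool overlap, failure-domain headroom, and recovery throttling rather than a flat MB/s figure.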

Benchmark capabilities would also be welcome: replication changes, CRUSH changes, OSD add/drop, node add/drop, IOPS, read/write performance.

Regards,
Logan

----- On Mar 21, 2017, at 6:58 AM, Xavier Villaneau <xvillaneau+ceph@xxxxxxxxx> wrote:
Hello all,

A few weeks ago Loïc Dachary presented his work on python-crush to the ceph-devel list, but I don't think it has been announced here yet. In a few words, python-crush is a new Python 2 and 3 library / API for the CRUSH algorithm. It also provides a CLI executable with a few built-in tools for working with CRUSH maps. If you want to try it, follow the instructions on its documentation page:
http://crush.readthedocs.io/en/latest/

Currently the crush CLI has two features:
 - analyze: get an estimate of how (un)evenly objects will be placed across your cluster
 - compare: get a summary of how much data would be moved around if the map were changed
Both tools are still basic and have a few known caveats, but nothing that cannot be fixed; the project is young and open to suggestions and contributions.
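To make the "analyze" idea concrete, here is a stand-alone illustration of what an evenness check measures. It uses plain MD5-based ranking in place of the real CRUSH algorithm, so everything here (the `place` helper, the OSD names, the object count) is invented for the sketch and is not python-crush's implementation:

```python
# Illustration of an "analyze"-style evenness estimate: place many objects
# pseudo-randomly on a set of OSDs, then measure how far each OSD drifts
# from the ideal even share. Plain hashing stands in for CRUSH here.
import hashlib
from collections import Counter

def place(obj_id, osds, replicas=3):
    """Pick `replicas` distinct OSDs by ranking a per-(object, OSD) hash."""
    scored = sorted(osds, key=lambda o: hashlib.md5(f"{obj_id}:{o}".encode()).digest())
    return scored[:replicas]

osds = [f"osd.{i}" for i in range(10)]
counts = Counter(o for x in range(10000) for o in place(x, osds))

expected = 10000 * 3 / len(osds)   # ideal even share per OSD
worst = max(abs(c - expected) / expected for c in counts.values())
print(f"expected {expected:.0f} objects/OSD, worst deviation {worst:.1%}")
```

Even with uniform weights, the worst OSD lands a few percent over its ideal share, which is exactly the kind of imbalance an analyze tool should surface before it bites in production.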

This is where we'd like to hear the user community's feedback, given everyone's experience in operating (or just messing around with) Ceph clusters. What kind of CRUSH / data placement tools would be interesting to have? Are there common architectural or technical questions related to CRUSH that such tools would help answer? Any specific cases where such a thing could have spared you some pain?

Here are a few ideas off the top of my head, to help start the discussion:
 - Static analysis of the failure domains, with detection of potential SPOFs
 - Help with capacity planning: estimates of how much data could practically be stored in a cluster
 - Built-in basic scenarios for "compare" such as adding a node or removing an OSD.
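A built-in "compare" scenario like adding an OSD could be sketched in a few lines. The snippet below uses rendezvous (highest-random-weight) hashing as a stand-in for CRUSH, since both aim for minimal data movement on map changes; the helper name and cluster sizes are invented for illustration:

```python
# Sketch of what a "compare" scenario estimates: the fraction of objects
# whose placement changes after adding an OSD. Rendezvous hashing stands in
# for CRUSH; this is not python-crush's implementation.
import hashlib

def primary(obj_id, osds):
    """Rendezvous choice: highest per-(object, OSD) hash wins."""
    return max(osds, key=lambda o: hashlib.md5(f"{obj_id}:{o}".encode()).digest())

before = [f"osd.{i}" for i in range(9)]
after = before + ["osd.9"]          # scenario: one OSD added

n = 20000
moved = sum(primary(x, before) != primary(x, after) for x in range(n))
print(f"{moved / n:.1%} of objects move")   # close to the ideal 1/10 = 10%
```

With ten equal-weight OSDs the theoretical minimum movement is 10% of the data, and the measured fraction lands very close to it; a CRUSH-aware compare tool would report the same kind of figure for the real map.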
 
Please share your ideas; they will eventually help make a better tool!
Regards,
--
Xavier Villaneau
Software Engineer, working with Ceph during day and sometimes at night too.
Storage R&D at Concurrent Computer Corporation, Atlanta USA

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
