Hi,
First I would like to state that, with all its limitations, I have
managed to build multiple clusters with ceph-deploy, and without it I
would have been totally lost. Things that I feel would improve it
include:
- A debug mode where it lists everything it is doing. This will be
  helpful in the future when I move to a more integrated tool than
  ceph-deploy, as I could see exactly how ceph-deploy built my test
  cluster.
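A debug mode along these lines could be as simple as logging every shell command before it runs. The sketch below is my own illustration in plain Python (logging plus subprocess), not ceph-deploy's actual code:

```python
import logging
import shlex
import subprocess

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("deploy-debug")

def run(cmd):
    """Log the exact command before executing it, then log its result."""
    log.debug("running: %s", " ".join(shlex.quote(c) for c in cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    log.debug("exit %d: %s", result.returncode, result.stdout.strip())
    return result

run(["echo", "creating mon"])
```

With every command funneled through a wrapper like this, the debug log becomes a replayable record of how the cluster was built.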
- Support for more types of Linux storage devices. I have spent hours
  trying to make it understand multipath devices, as I happen to have a
  large number of these in my lab, but so far I have not made it work.
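For what it's worth, multipath maps can be told apart from other device-mapper devices by their dm UUID, which multipathd prefixes with "mpath-" (LVM volumes use "LVM-"). The helper below is my own sketch of sysfs-based detection, not anything ceph-deploy does today:

```python
import os

def list_multipath_devices(sys_block="/sys/block"):
    """Return device nodes whose device-mapper UUID marks them as multipath maps."""
    found = []
    if not os.path.isdir(sys_block):
        return found
    for dev in sorted(os.listdir(sys_block)):
        uuid_path = os.path.join(sys_block, dev, "dm", "uuid")
        try:
            with open(uuid_path) as f:
                # multipathd-created maps carry a "mpath-" UUID prefix
                if f.read().startswith("mpath-"):
                    found.append("/dev/" + dev)
        except OSError:
            continue  # not a device-mapper device
    return found

print(list_multipath_devices())
```

A check like this could let a deployment tool at least recognize a multipath map and route it differently from a plain disk.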
- Really good documentation on all the ceph-deploy options.
- Lastly (and this is not just a ceph-deploy thing), documentation
  explaining how things boot up and interact. ceph-deploy depends on
  tools like ceph-disk to mount OSD disks on the servers during boot,
  and I learned the hard way that if an OSD is on a LUN that is visible
  to more than one OSD node, you can corrupt data, as each OSD node
  tries to mount all the OSDs it can find.
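One possible safeguard against that shared-LUN scenario is to record which host first activated an OSD directory and refuse activation from any other host. This is purely an illustrative sketch (the `claim_osd_mount` helper and its marker file are invented for the example; ceph-disk does not actually work this way):

```python
import os
import socket

def claim_osd_mount(mount_point, marker_name="deploy_host"):
    """Refuse to activate an OSD directory that another host has claimed.

    Writes this host's name into a marker file on first activation; on
    later activations, raises if the marker names a different host,
    which would suggest the underlying LUN is visible to more than one
    OSD node.
    """
    marker = os.path.join(mount_point, marker_name)
    me = socket.gethostname()
    if os.path.exists(marker):
        with open(marker) as f:
            owner = f.read().strip()
        if owner != me:
            raise RuntimeError(
                "refusing to activate %s: claimed by %s" % (mount_point, owner))
    else:
        with open(marker, "w") as f:
            f.write(me)
    return me
```

A guard of this shape turns a silent data-corruption scenario into a loud, explainable failure on the second node.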
There is a session at CDS scheduled to discuss ceph-deploy (4:40pm PDT
on Monday). We'll be going over what we currently have in the backlog
for improvements, but if you have any opinions about what else
ceph-deploy should or should not do, or areas where it is problematic,
please reply to this thread to let us know what you think, and/or join
the CDS discussion hangout.
Thanks,
Eric
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html