On Wed, Feb 19, 2014 at 1:31 PM, mike smith <michaelsmithconsult@xxxxxxxxx> wrote:
> I am trying to learn about Ceph and have been looking at the documentation
> and speaking to colleagues who work with it, and I had a question that I
> could not get the answer to. As I understand it, the CRUSH map is updated
> every time a disk is added. This causes the OSDs to migrate their data in
> placement groups to new OSDs. During that time, is the data on the old
> OSDs accessible for reads?
> E.g. if I have 2 copies on OSD-x and OSD-y, and OSD-x fails while OSD-y is
> migrating to OSD-z, will the system fall back to OSD-y, or will I get a
> read error and have to retry the read after the migration?

There can be a very brief period of inaccessibility (a few seconds if things
go very badly, more likely several tens of milliseconds) while the OSDs do
some coordination, but in general you can expect your data to remain
available. The Ceph clients handle routing the data requests to the correct
place for you, including re-routing outstanding requests across cluster
configuration changes, so you certainly don't need to re-issue the requests
yourself.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
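The fallback behavior Greg describes can be illustrated with a toy sketch in plain Python. This is NOT Ceph's actual CRUSH algorithm or the librados API; all names (`ToyCluster`, `fail_osd`, `finish_backfill`) are hypothetical. The point it models is just this: the client consults the current map and routes each read to any up OSD that holds a replica, so losing OSD-x mid-migration means the read simply lands on OSD-y, with no manual retry by the caller.

```python
# Toy sketch of replica-fallback reads -- not Ceph's real CRUSH
# algorithm or client API; names here are illustrative only.

class ToyCluster:
    """Maps each object to the set of OSDs currently holding a replica."""

    def __init__(self, replicas, up_osds):
        self.replicas = replicas   # object -> set of OSDs with a copy
        self.up = set(up_osds)     # OSDs currently up

    def fail_osd(self, osd):
        self.up.discard(osd)

    def finish_backfill(self, obj, dst):
        # Once migration to dst completes, dst holds a valid copy too;
        # until then the surviving source replica keeps serving reads.
        self.replicas[obj].add(dst)

    def read(self, obj):
        # The "client" routes the read to any up OSD that has a copy.
        for osd in self.replicas[obj]:
            if osd in self.up:
                return f"data from {osd}"
        raise IOError("no replica available")

cluster = ToyCluster({"obj": {"osd-x", "osd-y"}},
                     {"osd-x", "osd-y", "osd-z"})
cluster.fail_osd("osd-x")        # OSD-x dies mid-migration
print(cluster.read("obj"))       # falls back to osd-y: "data from osd-y"
cluster.finish_backfill("obj", "osd-z")
print(cluster.read("obj"))       # now either osd-y or osd-z can serve it
```

In the real system the "brief period of inaccessibility" corresponds to the OSDs peering after the map change, which this sketch does not model.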