On Mon, Mar 5, 2012 at 10:43, Matt Weil <mweil@xxxxxxxxxxxxxxxx> wrote:
> I recreated the file system and got this on the client until it was
> rebooted.
>
> Is there a step I missed?
>
>> libceph: bad fsid, had 02f7ef57-25e0-475f-8948-5c562a4d370c got
>> 6089a28f-8ff4-42f5-8825-1a929f5770bd

Sounds like you kept the client running while you deleted the previous
cephfs and created the new one. The client doesn't know that happened, so
it warns that it is now talking to an unexpected cluster: the fsid it
remembers from the old cluster no longer matches the one the monitors
report. If you destroy and recreate the whole cluster, you need to restart
the clients too. The neatest way is to have the client apps down (and the
filesystem unmounted) during the recreate; otherwise you will keep getting
these log messages.
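
Roughly, something like this on the client side (just a sketch -- the
mount point, monitor address, and auth options below are placeholders for
whatever your setup actually uses):

  # stop the apps, then unmount cephfs on every client
  umount /mnt/ceph

  # ... destroy and recreate the cluster / filesystem ...

  # remount; the client picks up the new fsid from the monitors
  mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secret=<key>

Rebooting works too, as you saw, but unmounting before the recreate and
remounting afterwards avoids the "bad fsid" noise entirely.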