Referencing this old thread below, I am wondering what the proper way is to install, say, new versions of ceph and start up the daemons while keeping all the data on the osd drives. I had been using ceph-deploy new, which I guess creates a new cluster fsid. Normally for my testing I had been starting with clean osd drives, but I would also like to be able to restart and leave the osd drives as is.

-- Tom

> Hi,
> I have faced a similar issue. This happens if the ceph disks aren't
> purged/cleaned completely. Clear out the contents of the /dev/sdb1 device.
> There is a file named ceph_fsid on the disk which holds the old
> cluster's fsid. This needs to be deleted for it to work.
>
> Hope it helps.
>
> Sharmila

On Mon, May 26, 2014 at 2:52 PM, JinHwan Hwang <calanchue at gmail.com> wrote:
> I'm trying to install ceph 0.80.1 on Ubuntu 14.04. Everything goes well
> except the 'activate osd' phase. It tells me it can't find the proper fsid
> when I do 'activate osd'. This is not my first time installing ceph, and
> the same process worked fine elsewhere (though those were Ubuntu 12.04
> virtual machines running ceph-emperor).
>
> ceph at ceph-mon:~$ ceph-deploy osd activate ceph-osd0:/dev/sdb1
> ceph-osd0:/dev/sdc1 ceph-osd1:/dev/sdb1 ceph-osd1:/dev/sdc1
> ...
> [ceph-osd0][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph
> with fsid 05b994a0-20f9-48d7-8d34-107ffcb39e5b
> ..
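
For what it's worth, a rough sketch of how I would check where the fsids disagree and then either keep or wipe the drives. The host and device names are the ones from the thread above; the /mnt/osd mount point and the idea of editing fsid in ceph.conf after 'ceph-deploy new' are just my assumptions, not something I have verified on firefly:

    # On the OSD host: read the cluster fsid stamped on the data partition
    sudo mkdir -p /mnt/osd
    sudo mount /dev/sdb1 /mnt/osd
    cat /mnt/osd/ceph_fsid
    sudo umount /mnt/osd

    # Compare it with what the new cluster expects
    grep fsid /etc/ceph/ceph.conf

    # To keep the data: after 'ceph-deploy new', set fsid in ceph.conf to the
    # value read above, then push the conf to the osd hosts before activating
    ceph-deploy --overwrite-conf config push ceph-osd0 ceph-osd1

    # To start clean instead: zap the disks so prepare/activate stamp the new fsid
    ceph-deploy disk zap ceph-osd0:/dev/sdb ceph-osd0:/dev/sdc

As far as I can tell from the warning above, ceph-disk only refuses to activate when the ceph_fsid on the partition does not match any conf in /etc/ceph, so making the two agree (or zapping) should cover both cases.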