Struggling with mds. It seems very fragile.

I'm using Ceph version 0.3 and the Ceph kernel client that ships with Ubuntu 11.04.

I've set up five OSDs plus one MON and one MDS on a single machine. When I
first started, before writing any data to the Ceph filesystem, my MDS kept
crashing. I fixed that by deleting the MDS data directory and the Ceph data
directories and restarting Ceph. I then started copying test data from a
2 TB external drive to my Ceph filesystem. When I came back to the machine
later I could not log in. The external drive light was blinking, so
something was still going on. I did a hard power off, figuring I would just
delete the last file that was copied over and start the copy again.
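
For context, the single-machine layout is roughly the following; the
hostname, address, and paths here are illustrative placeholders, not my
exact ceph.conf:

        [mon.0]
                host = cephbox
                mon addr = 192.168.0.10:6789
        [mds.0]
                host = cephbox
        [osd]
                osd data = /data/osd$id
        [osd.0]
                host = cephbox
        ; osd.1 through osd.4 look the same, all on the one host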

As expected, I could not start Ceph again. I had to delete all the data
directories once more to get it running. Is there any way to flush or reset
whatever is wedged so that I can get back into the filesystem, without
having to purge everything and start over?
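
To be concrete, the only thing that has worked for me so far is a full
wipe-and-recreate, roughly like this (paths again illustrative):

        /etc/init.d/ceph stop
        rm -rf /data/mon0 /data/mds0 /data/osd*    # wipe mon, mds and osd data dirs
        mkcephfs -a -c /etc/ceph/ceph.conf         # rebuild the filesystem from scratch
        /etc/init.d/ceph start
        ceph -s                                    # check that the cluster comes back up

I'd obviously prefer a way to recover the existing data instead of
rerunning mkcephfs every time something goes wrong.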