Re: Struggling with mds. It seems very fragile.

On Fri, Jul 8, 2011 at 20:39, Vineet Jain <vinjvinj@xxxxxxxxx> wrote:
> Using ceph version 0.3 and the ceph kernel client that comes with Ubuntu 11.04.
>
> I've set up 5 osds and one mon and mds on one machine. When I first
> started, without writing any data to the ceph fs, my mds would keep
> crashing. I fixed that problem by deleting the mds data directory and
> the ceph data directories and restarting ceph. I then started copying
> test data from a 2TB external drive to my ceph fs. I left my computer
> and came back and could not log in to my machine. I saw that the
> external drive light was blinking, so something was going on. I did a
> hard power off thinking I would just delete the last file that was
> copied over and start over.
>
> As expected, I could not start up ceph again. I had to delete all the
> data directories once more to get ceph running. Is there any way to
> flush whatever is needed to get ceph back to a state where you can get
> back into the fs without having to purge everything and start over?

Can you please provide core dumps and log messages from those MDS
crashes? Filing tickets at
http://tracker.newdream.net/projects/ceph with the relevant
information is what will help us fix your problems.
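
If it helps, here is a rough sketch of how to capture that information;
the option names and paths below are typical defaults rather than
something verified against your exact version:

    # In ceph.conf, turn up MDS logging before restarting the daemon:
    [mds]
        debug mds = 20
        debug ms = 1
        log file = /var/log/ceph/$name.log

    # Allow core dumps in the shell that starts the MDS, then restart it:
    ulimit -c unlimited
    /etc/init.d/ceph restart mds

The MDS log should then contain a backtrace from the crash, and any core
file will usually land in the daemon's working directory; attaching both
to the ticket is ideal.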

Recovery, where not automatic, depends very much on the crash you saw.
We'd be glad to help, but need more information to do so.