Re: Random data corruption in VM, possibly caused by rbd

On Thu, Jun 7, 2012 at 2:36 PM, Guido Winkelmann
<guido-ceph@xxxxxxxxxxxxxxxxx> wrote:
> Again, I'll try that tomorrow. BTW, I could use some advice on how to go about
> that. Right now, I would stop one osd process (not the whole machine), reformat
> its btrfs devices as XFS and remount them, delete the journal, restart the osd,
> wait until the cluster is healthy again, then repeat for all the osds in the
> cluster. Is that sufficient?

Before restarting the osd, you need to run ceph-osd --mkfs to recreate
the osd's data directory (and journal) on the fresh filesystem.
Otherwise, yes, that looks good.
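
For concreteness, the per-osd cycle might look something like the sketch
below. The osd id, device, and mount point are made-up placeholders;
substitute your own, and note that mkfs.xfs destroys everything on the
device:

  # stop only this osd daemon, not the whole node
  service ceph stop osd.0

  # reformat the osd's data device as XFS (hypothetical device/path)
  umount /var/lib/ceph/osd/ceph-0
  mkfs.xfs -f /dev/sdb1
  mount /dev/sdb1 /var/lib/ceph/osd/ceph-0

  # recreate the osd data directory and journal on the fresh filesystem
  ceph-osd -i 0 --mkfs --mkjournal

  service ceph start osd.0

  # wait for HEALTH_OK before moving on to the next osd
  ceph health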

> The rbd volume in question was created as a copy, using the rbd cp command,
> from a template volume. I cannot recall seeing any corruption while using the
> original volume (which was created using rbd import). Maybe the bug only bites
> volumes that have been created as copies of other volumes? I'll have to do
> more tests along those lines as well...

Hmm. There should be no difference between the end results of rbd cp
and rbd import.
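
One way to check that directly would be to export both images and
compare checksums (the image names below are placeholders for your
template and its copy):

  # export both rbd images to local files and compare their contents
  rbd export template-image /tmp/template.img
  rbd export copied-image /tmp/copy.img
  md5sum /tmp/template.img /tmp/copy.img

If the sums differ, the corruption is already present in the rbd image
itself rather than being introduced inside the VM.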