Re: Ceph version 0.56.1, data loss on power failure

On Wed, 16 Jan 2013, Wido den Hollander wrote:
> 
> Op 16 jan. 2013 om 18:00 heeft Sage Weil <sage@xxxxxxxxxxx> het volgende geschreven:
> 
> > On Wed, 16 Jan 2013, Wido den Hollander wrote:
> >> 
> >> On 01/16/2013 11:50 AM, Marcin Szukala wrote:
> >>> Hi all,
> >>> 
> >>> Any ideas on how I can resolve this issue, or where the problem might be?
> >>> 
> >>> Let me describe the issue.
> >>> Host boots up and maps an RBD image containing XFS filesystems
> >>> Host mounts the filesystems from the RBD image
> >>> Host starts to write data to the mounted filesystems
> >>> Host experiences a power failure
> >>> Host comes up and maps the RBD image again
> >>> Host mounts the filesystems from the RBD image
> >>> All data from all filesystems is lost
> >>> Host is able to use the filesystems with no problems afterwards.
> >>> 
> >>> The filesystem is XFS, and there are no errors on the filesystem.
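For reference, the failure sequence described above boils down to roughly the
following, assuming the kernel RBD client is in use (pool, image, device and
mount point names below are placeholders, not Marcin's actual setup):

   # rbd map rbd/myimage                # image typically appears as /dev/rbd0
   # mount /dev/rbd0 /mnt/data          # XFS filesystem already on the image
   # dd if=/dev/zero of=/mnt/data/testfile bs=1M count=100
   ...power failure, host reboots...
   # rbd map rbd/myimage
   # mount /dev/rbd0 /mnt/data          # mounts cleanly, but testfile is gone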
> >> 
> >> That simply does not make sense to me. How can all the data be gone while 
> >> the FS still mounts cleanly?
> >> 
> >> Can you try to format the RBD image with ext4 and see if that makes any difference?
> >> 
> >> Could you also try running "sync" prior to pulling the power from the host, to 
> >> see if that makes any difference?
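Concretely, those two tests could look something like the following (placeholder
names again; they can also be run separately, and mkfs will of course destroy
whatever is on the image, so use a scratch image):

   # rbd map rbd/scratch
   # mkfs.ext4 /dev/rbd0                # test with ext4 instead of XFS
   # mount /dev/rbd0 /mnt/data
   # dd if=/dev/zero of=/mnt/data/testfile bs=1M count=100
   # sync                               # flush dirty pages before pulling power
   ...pull power, reboot, map and mount again, check whether testfile survived...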
> > 
> > A few other quick questions:
> > 
> > What version of qemu and librbd are you using? What is the command line 
> > used to start the VM? This could be a problem with the qemu and librbd 
> > caching configuration.
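If qemu/librbd is involved, the interesting bits are the cache= option on the
drive and the rbd cache setting on the client side; for example (illustrative
only, not necessarily your configuration):

   # qemu drive using librbd, with writeback caching enabled
   -drive format=raw,file=rbd:rbd/myimage:id=admin,cache=writeback

   # and/or in ceph.conf on the client:
   [client]
       rbd cache = true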
> > 
> 
> I don't think he uses Qemu. From what I understand, he uses kernel RBD, 
> since he uses the words 'map' and 'unmap'.

That's what I originally thought too, and then I saw

> >>> root@openstack-1:/etc/init# ceph -s

and wasn't sure...
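
One quick way to tell would be to check on the host whether any images are
mapped through the kernel client, e.g.:

   # rbd showmapped        # lists kernel-mapped images and their /dev/rbdX devices
   # lsmod | grep rbd      # shows whether the rbd kernel module is loaded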

Marcin?

sage





> >>>    health HEALTH_OK
> >>>    monmap e1: 3 mons at {a=10.3.82.102:6789/0,b=10.3.82.103:6789/0,d=10.3.82.105:6789/0}, election epoch 10, quorum 0,1,2 a,b,d
> >>>    osdmap e132: 56 osds: 56 up, 56 in
> >>>    pgmap v87165: 13744 pgs: 13744 active+clean; 52727 MB data, 102 GB used, 52028 GB / 52131 GB avail
> >>>    mdsmap e1: 0/0/1 up
> >>> 
> >>> Regards,
> >>> Marcin