Re: Data loss after force umount !


 



Thanks everyone,

    I think `umount -l` was the mistake; we shouldn't run that operation on its own, without the accompanying steps.

    I will continue with more extreme tests. I shouldn't run `umount -l` myself, and I need to stop anyone else from running `umount -l` either.

    Lots of thanks!
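For the record, the safer sequence probably looks something like this (a sketch, assuming the sysvinit layout on our CentOS 6 nodes, with OSD id 2 as an example):

    # stop the daemon that owns the mount before unmounting
    service ceph stop osd.2
    # verify nothing still holds files open on the mount point
    lsof +f -- /var/lib/ceph/osd/ceph-2   # should print nothing now
    # a plain umount then succeeds; no -l needed
    umount /var/lib/ceph/osd/ceph-2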

---------- Forwarded message ----------
From: Michael Lowe <j.michael.lowe@xxxxxxxxx>
Date: 2013/10/8
Subject: Re: Data loss after force umount !
To: higkoohk <higkoohk@xxxxxxxxx>


It won't unmount until the processes with open files exit; `umount -l` is usually used in conjunction with lsof and kill. You probably didn't actually get the file system unmounted.
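In other words, something like this (a sketch; the mount point is one of the OSD paths quoted later in the thread, and <pid> is a placeholder):

    # list the processes that still have files open on the filesystem
    lsof +f -- /var/lib/ceph/osd/ceph-2

    # stop them (SIGTERM first; -9 only as a last resort)
    kill <pid>

    # only then unmount; -l defers cleanup, it does not force anything
    umount /var/lib/ceph/osd/ceph-2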

Sent from my iPad

On Oct 7, 2013, at 9:40 PM, higkoohk <higkoohk@xxxxxxxxx> wrote:

Hi Michael ,

umount -l : Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to the filesystem as soon as it is not busy anymore.

Do you mean '-f' ?

umount -f : Force unmount (in case of an unreachable NFS system).


But what should I do after `umount -l` has already been executed?



2013/10/8 Michael Lowe <j.michael.lowe@xxxxxxxxx>
That doesn't force the unmount.

Sent from my iPad

2013/10/8 Yan, Zheng <ukernel@xxxxxxxxx>


On 2013-10-8 at 9:00 AM, "higkoohk" <higkoohk@xxxxxxxxx> wrote:


>
> Thanks everyone, the environment is as follows:
>
> Linux 3.0.97-1.el6.elrepo.x86_64 CentOS 6.4
>
> ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
>
> /dev/sdd1 on /var/lib/ceph/osd/ceph-2 type xfs (rw)
> /dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs (rw)
> /dev/sdc1 on /var/lib/ceph/osd/ceph-4 type xfs (rw)
>
> meta-data=                       isize=2048   agcount=4, agsize=8895321 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=35581281, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=17373, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> /usr/libexec/qemu-kvm -name centos6-clone2 -S -machine rhel6.4.0,accel=kvm -m 1000 -smp 2,sockets=2,cores=1,threads=1 -uuid dd1a7093-bdea-4816-8a62-df61cb0c9bfa -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/centos6-clone2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:rbd/centos6-clone2:auth_supported=none:mon_host=agent21.kisops.org\:6789,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:5c:71:f1,bus=pci.0,addr=0x3 -vnc 0.0.0.0:0 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>
> I used `umount -l` to force the umount.

`umount -l` is not a force umount; it just detaches the fs from the hierarchy. The fs stays mounted internally in the kernel as long as references to it remain. I'm not surprised the fs errors if you use `umount -l` to umount the rbd in one guest and then mount the rbd in another guest.
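To make that concrete, a quick illustration (a sketch using a hypothetical scratch mount /mnt/test; any busy filesystem behaves the same way):

    # keep the filesystem busy from a shell
    ( cd /mnt/test && sleep 600 ) &

    umount /mnt/test        # fails: "device is busy"
    umount -l /mnt/test     # returns immediately

    grep /mnt/test /proc/mounts    # no output: detached from the hierarchy
    # but the sleep's working directory still pins the superblock, so the
    # kernel keeps the filesystem mounted internally until the process
    # exits; mapping and mounting the same device elsewhere in the meantime
    # risks exactly the kind of corruption reported in this thread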

Yan, Zheng


>
> Anything else ?
>
>
> 2013/10/8 Mark Nelson <mark.nelson@xxxxxxxxxxx>
>>
>> Also, mkfs, mount, and kvm disk options?
>>
>> Mark
>>
>>
>> On 10/07/2013 03:15 PM, Samuel Just wrote:
>>>
>>> Sounds like it's probably an issue with the fs on the rbd disk?  What
>>> fs was the vm using on the rbd?
>>> -Sam
>>>
>>> On Mon, Oct 7, 2013 at 8:11 AM, higkoohk <higkoohk@xxxxxxxxx> wrote:
>>>>
>>>> We use Ceph as storage for KVM.
>>>>
>>>> I found errors in the VMs after force-umounting the Ceph disk.
>>>>
>>>> Is this expected? How can I repair it?
>>>>
>>>> Many thanks.
>>>>
>>>> --higkoo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
