Re: Metadata filesystem XFS gluster 3.6

On 03/31/2015 09:05 AM, Félix de Lelelis wrote:
Hi,

I had a problem with an XFS filesystem on Gluster. The metadata space filled up:

Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: space map metadata: unable to allocate new metadata block
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: 252:2: metadata operation 'dm_thin_insert_block' failed: error = -28
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: 252:2: aborting current metadata transaction
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: 252:2: switching pool to read-only mode
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): metadata I/O error: block 0x701830 ("xfs_buf_iodone_callbacks") error 5 numblks 8
Mar 27 13:58:18 srv-vln-des2 kernel: attempt to access beyond end of device
Mar 27 13:58:18 srv-vln-des2 kernel: dm-0: rw=0, want=562056, limit=24576
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: process_bio_read_only: dm_thin_find_block() failed: error = -5
Mar 27 13:58:18 srv-vln-des2 kernel: attempt to access beyond end of device
Mar 27 13:58:18 srv-vln-des2 kernel: dm-0: rw=0, want=562056, limit=24576
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: process_bio_read_only: dm_thin_find_block() failed: error = -5
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): metadata I/O error: block 0x68047c ("xlog_iodone") error 5 numblks 64
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): xfs_do_force_shutdown(0x2) called from line 1170 of file fs/xfs/xfs_log.c.  Return address = 0xffffffffa012a4c1
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): Log I/O Error Detected.  Shutting down filesystem
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): Please umount the filesystem and rectify the problem(s)
Mar 27 13:58:18 srv-vln-des2 kernel: attempt to access beyond end of device
Mar 27 13:58:18 srv-vln-des2 kernel: dm-0: rw=0, want=562056, limit=24576
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: process_bio_read_only: dm_thin_find_block() failed: error = -5
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): metadata I/O error: block 0x6804bc ("xlog_iodone") error 5 numblks 64
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): xfs_do_force_shutdown(0x2) called from line 1170 of file fs/xfs/xfs_log.c.  Return address = 0xffffffffa012a4c1
Mar 27 13:58:18 srv-vln-des2 kernel: attempt to access beyond end of device
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): xfs_log_force: error 5 returned.
Mar 27 13:58:18 srv-vln-des2 kernel: dm-0: rw=0, want=562056, limit=24576
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: process_bio_read_only: dm_thin_find_block() failed: error = -5
Mar 27 13:58:18 srv-vln-des2 kernel: attempt to access beyond end of device
Mar 27 13:58:18 srv-vln-des2 kernel: dm-0: rw=0, want=562056, limit=24576
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: process_bio_read_only: dm_thin_find_block() failed: error = -5
Mar 27 13:58:18 srv-vln-des2 kernel: attempt to access beyond end of device
Mar 27 13:58:18 srv-vln-des2 kernel: dm-0: rw=0, want=562056, limit=24576
Mar 27 13:58:18 srv-vln-des2 kernel: device-mapper: thin: process_bio_read_only: dm_thin_find_block() failed: error = -5
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): metadata I/O error: block 0x6804fc ("xlog_iodone") error 5 numblks 64
Mar 27 13:58:18 srv-vln-des2 kernel: XFS (dm-4): xfs_do_force_shutdown(0x2) called from line 1170 of file fs/xfs/xfs_log.c.  Return address = 0xffffffffa012a4c



After that, Gluster shut down, and with it both servers shut down too. The LVM partition went missing, and so far I haven't been able to restore the filesystem. Is all the data lost??
I don't understand the situation, and I don't know whether it's due to an XFS filesystem failure or a GlusterFS failure. Has anyone else been in this situation?

Thanks.


Hi,

Do you run thin provisioning on LVM?

There are some discussions about this:

https://www.redhat.com/archives/linux-lvm/2014-December/msg00015.html

You definitely ran out of metadata space.  Which version of the kernel
and lvm2 userspace are you using?
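To confirm, you can check how full the pool's data and metadata are with
lvs, for example (assuming a volume group named vg0; adjust the names to
your setup):

    # report data and metadata usage of the thin pool (example names)
    lvs -a -o lv_name,lv_size,data_percent,metadata_percent vg0

If metadata_percent is at or near 100, that matches the errors above.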

See the "Metadata space exhaustion" section of the lvmthin manpage in a
recent lvm2 release for guidance on how to recover.
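For reference, once the pool can be made writable again, the usual fix is
to grow the pool's metadata LV, something along these lines (example names
and sizes; the exact steps for a pool that has already gone read-only are
what the manpage covers):

    # add 256M to the thin pool's metadata LV (example names/sizes)
    lvextend --poolmetadatasize +256M vg0/pool0

If the metadata itself was damaged, the manpage also describes repairing
it (lvconvert --repair), but don't run that blindly on a production pool.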

Also, once you've gotten past this you really should configure lvm2 to
autoextend the thin pool (both data and metadata) as needed in response
to a low watermark.  See "Automatically extend thin pool LV" in the
lvmthin manpage.
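As a sketch, the relevant knobs live in the activation section of
/etc/lvm/lvm.conf (example values; dmeventd monitoring must be enabled
for autoextension to trigger):

    # /etc/lvm/lvm.conf (activation section), example values
    activation {
        monitoring = 1                         # let dmeventd watch the pool
        thin_pool_autoextend_threshold = 70    # autoextend once 70% full
        thin_pool_autoextend_percent = 20      # grow by 20% each time
    }
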
Maybe related to:
https://bugzilla.redhat.com/show_bug.cgi?id=1097948

Hope this helps.





Met vriendelijke groet, With kind regards,

Jorick Astrego


Netbulae Virtualization Experts


Tel: 053 20 30 270    info@xxxxxxxxxxx
Fax: 053 20 30 271    www.netbulae.eu
Staalsteden 4-3A, 7547 TA Enschede
KvK 08198180    BTW NL821234584B01



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
