Hi XFS gurus,

I have an XFS-related problem that boggles my mind, and I couldn't find a solution yet.

I've got a virtual machine (huddle) that gets a ~66TB logical volume from the host, handed to it as a (virtio) block device (/dev/vdb). For ease of maintenance I didn't partition the device but formatted it directly with xfs. The system at the time of formatting was Ubuntu Lucid 64bit.

A few days ago I upgraded the virtual machine to Ubuntu LTS 'precise', kernel 3.2, and got the following error while trying to mount the device:

root@huddle:~# mount /dev/vdb /mnt/storage
mount: /dev/vdb: can't read superblock

dmesg shows some more info:

root@huddle:~# dmesg | tail
[  672.774206] end_request: I/O error, dev vdb, sector 0
[  672.774393] XFS (vdb): SB buffer read failed

At first I thought the block device had some error, so I checked the virtual machine configuration and the host system. From the host system (Ubuntu Lucid 64bit, kernel 2.6) I can still mount the xfs-formatted device without problems. I also ran xfs_repair -n there, which didn't report any problems.

I tried handing the virtual machine a different ext4-formatted block device (also unpartitioned and preformatted). This didn't yield any mount problems.

The upgraded 'precise' machine still has the older 2.6.32-42 kernel installed as well. Booting that kernel, the xfs-formatted block device gets mounted without error.

The curious part is that it is still possible to mount the volume under kernel 3.2 without error using the loop option:

root@huddle:~# mount -v -t xfs -o loop /dev/vdb /mnt/storage/

Trying xfs_repair under kernel 3.2 also brings up the I/O error, unless I use it with the -f option.

Obviously the problem is kernel 3.2 related. I'm not sure if the XFS mailing list is the right place for this, but it seemed like a good starting point, since I couldn't find anything related in bugzilla or on the web in general, and the problem didn't show up with ext4 (so it may not be a generic kernel problem).

Any suggestion for a solution (without partitioning the device) would be greatly appreciated, since the loopback mount doesn't support quotas (see the short sketch below my signature) and I guess it brings a performance penalty as well (but I haven't tested that yet).

Here is some more information about the virtual machine. It's running Ubuntu LTS 'precise' 64bit:

root@huddle:~# uname -a
Linux huddle 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

The XFS tools are the following version, though with kernel 2.6.32 the same tools work without problems:

root@huddle:~# xfs_repair -V
xfs_repair version 3.1.7

That's how the logical volume from the host looks inside the virtual machine:

root@huddle:~# ls -l /dev/vdb
brw-rw---- 1 root disk 253, 16 Sep 17 19:36 /dev/vdb

Running any kernel, blkid still identifies the device correctly as an xfs volume:

root@huddle:~# blkid /dev/vdb
/dev/vdb: UUID="5adcd575-d3f2-48c3-81de-104f125b275e" TYPE="xfs"

Thanks in advance.
Richard

-- 
---------------------------------------------------------------------
Systemadministration
[a] Department for Theoretical Chemistry
    University of Vienna
    Waehringer Strasse 17/3/304, 1090 Wien, Austria
[p] +43 664 920 32 95
[m] hawk@xxxxxxxxxxxxxxxx
[w] http://www.tbi.univie.ac.at/~hawk
---------------------------------------------------------------------
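P.S. For completeness, a minimal sketch of the mount I'm ultimately aiming for, next to the current loop workaround. The uquota/gquota options are the standard XFS quota mount options; this is meant as an illustration, not a verified invocation:

# desired: direct mount with quotas enabled -- under kernel 3.2 this
# currently fails with the superblock read error shown above
root@huddle:~# mount -t xfs -o uquota,gquota /dev/vdb /mnt/storage

# current workaround: loopback mount -- mounts fine, but quotas are
# not available this way
root@huddle:~# mount -t xfs -o loop /dev/vdb /mnt/storage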