Re: cause of xfsdump msg: root ino 192 differs from mount dir ino 256

On 2021/11/01 14:12, Dave Chinner wrote:
> Can you attach the full output for the xfsdump and xfsrestore
> commands?
---
I can, as soon as I run ones whose output I can capture.

I can restore the backup taken this morning (a level 0) to an
alternate partition -- that restore is running now, and it is
generating the same messages about files being stored in the
orphanage as we "speak".  It will take a while to xfsrestore 1.4T.
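
The restore of a dump file like that to an alternate partition looks
roughly as follows (the dump path and target mountpoint here are
illustrative placeholders, not the real ones):

xfsrestore -f /backup/home-level0.dump /mnt/altpart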

At the same time, I'm generating a new level 0 backup (the same
dump that was done this morning, which resulted in a 1574649321568
byte (~1.4T) output file).

So far, the process doing the xfsdump shows:
xfsdump -b 268435456 -l 0 -L home -e - /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.8 (dump format 3.0)
xfsdump: level 0 dump of Ishtar:/home
xfsdump: dump date: Mon Nov  1 18:15:07 2021
xfsdump: session id: 8f996280-21df-42c5-b0a0-3f1584ae1f54
xfsdump: session label: "home"
xfsdump: NOTE: root ino 192 differs from mount dir ino 256, bind mount?
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 1587242183552 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories

I'm using a 256M block size, buffered via mbuffer with 5 buffers
of the same size (256M), writing to the output file.
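
Spelled out, that dump pipeline is roughly the following (the output
path is an illustrative placeholder; with mbuffer, -s sets the block
size and -b the number of blocks, matching the 5 x 256M buffering
described above):

xfsdump -b 268435456 -l 0 -L home -e - /home |
    mbuffer -s 256M -b 5 -o /backup/home-level0.dump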

xfsrestore uses a normal file read... hmm... I wonder if a direct
read might be faster -- like using 'dd' to perform an unbuffered
read and pipe the result into xfsrestore... maybe something for
future exploration...
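
Something along these lines, perhaps (paths are illustrative;
iflag=direct asks GNU dd for O_DIRECT reads, and xfsrestore reads
the dump from stdin when given '-' as the source):

dd if=/backup/home-level0.dump bs=256M iflag=direct |
    xfsrestore - /mnt/altpart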

Grepping for '/home\s' in the output of mount:

/bin/mount|grep -P '/home\s'

shows only 1 entry -- nothing mounted on top of it:

/dev/mapper/Space-Home2 on /home type xfs (...)

I have bind mounts of things like /home/opt on /opt, but that
shouldn't affect the root inode, as far as I know.
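
For what it's worth, findmnt makes such bind mounts explicit by
showing the source subdirectory in brackets (output sketched from
the mounts described above, not captured from the machine):

findmnt -o TARGET,SOURCE | grep Home2
/home  /dev/mapper/Space-Home2
/opt   /dev/mapper/Space-Home2[/home/opt]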

So what would cause the root inode to differ from the mount dir
ino?

I tried mounting the same filesystem someplace new:

# df .
Filesystem        Size  Used Avail Use% Mounted on
/dev/Space/Home2  2.0T  1.5T  569G  73% /home
mkdir /home2
Ishtar:home# mount /dev/Space/Home2 /home2
Ishtar:home# ll -di /home /home2
256 drwxr-xr-x 40 4096 Nov  1 10:23 /home/
256 drwxr-xr-x 40 4096 Nov  1 10:23 /home2/

Both show 256 as the root inode.  So why is xfsdump claiming
192 is the root inode?

IIRC, it's because xfsdump assumes that the first inode in the
filesystem is the root inode.  That is not always true - there are
corner cases to do with stripe alignment, btree roots relocating
and now sparse inodes that can result in new inodes being allocated
at a lower number than the root inode.
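
The root inode the filesystem actually records can be read straight
out of the superblock with xfs_db (-r opens the device read-only);
given the 'll -di' output above, it should report 256 here:

xfs_db -r -c 'sb 0' -c 'p rootino' /dev/mapper/Space-Home2
rootino = 256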

Indeed, the "bind mount?" message is an indication that xfsdump
found that the first inode was not the same as the root inode, and
so that's likely what has happened here.

Now that I think about this, ISTR the above "inodes before root
inode" situation being reported at some point in the past. Yeah:

https://lore.kernel.org/linux-xfs/f66f26f7-5e29-80fc-206c-9a53cf4640fa@xxxxxxxxxx/

Eric, can you remember what came of those patches?

Cheers,

Dave.


