Hi Dave!

> I didn't think anything other than log recovery tries to vmap
> buffers. This is clearly not in log recovery. Can you post an
> unedited error log, how much data you are rsyncing, the
> configuration of your filesystem (xfs_info, mount options, loop dev
> config, etc) to give us an idea of what you are doing to trigger
> this?

* I'm attaching the latest kern.log, unedited.
* I am syncing the gentoo portage tree, not much data but many small
  files (currently 228MiB in 117880 files). This sync is done twice
  per hour.
* I already posted my xfs_info and mount options in another post.
  Maybe I should note that the loop file system was deliberately
  created with blocksize=512 to accommodate the fs to the nature of
  the portage tree (many small files...).
* TBH I don't know about any special configuration of the loop
  device. I just created an empty file with "dd if=/dev/zero ..." and
  then ran mkfs.xfs on it (rough commands below).

> Can't you run on a 64-bit machine?

80% of my machines are 64-bit and I never saw anything like that on
them. But OTOH I don't use loop devices very much. Unfortunately this
machine is old hardware (P4 class) which can't run a 64-bit kernel.

> Can you downgrade your kernel and run the loop device there to tell
> us whether this is actually a regression or not? If it is a
> regression, then if you could run a bisect to find the exact patch
> that causes it would be very helpful....

I already did that yesterday and I can confirm it has the same
problem - so no regression. The kern.log attached to this mail is
from kernel 2.6.35.8.

cheers,
Michael
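
P.S. For reference, the loop filesystem was set up roughly like this.
The size and paths below are only examples from memory, not the exact
values I used:

    # create the backing file (size here is just an example)
    dd if=/dev/zero of=/var/portage.img bs=1M count=1024
    # make an XFS filesystem with 512-byte blocks, for the many small files
    mkfs.xfs -b size=512 /var/portage.img
    # mount it via a loop device
    mount -o loop /var/portage.img /usr/portage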
Attachment:
kern.log.gz
Description: GNU Zip compressed data