Hi all

Iterating over my drives, I ran mdadm --examine-badblocks to look for clues to an issue where XFS reports

  Aug 12 23:29:30 hostname kernel: [548754.725102] XFS (dm-6): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x3a85f3430 len 16 error 5

apparently repeating the same sector number over and over again (a quick offset calculation is in the P.S. below). The XFS filesystem sits on top of LVM, which in turn sits on a RAID-6 currently consisting of nine 2TB drives plus two spares. The drives are of various makes and models.

Running with --examine-badblocks, mdadm spits out a bad-block list for some of the drives. I find that a bit puzzling, since none of them report anything bad in their SMART data. Granted, SMART doesn't catch everything (silent errors and so on), but there are also no I/O errors in the kernel log when this happens.

However, back to --examine-badblocks: it reports the same sector numbers in the lists of several (up to eight) drives. If I understand this correctly, something strange has hit and damaged all those drives at identical, fixed sector numbers, such as this:

  Bad-blocks on /dev/sdm:
  436362944 for 128 sectors

To be honest, it doesn't seem very likely that a lot of drives would suddenly damage the same sector at once. I can see the same thing on a friend's server: identical 'bad' sector numbers listed on the individual drives there too. So I just wonder:

1. How can this happen? Does md replicate the list to the other members once one drive has recorded bad blocks?
2. Is it possible to somehow reset the list and rather do a full scan again? (A possible route is sketched in the P.S. below.)

Something smells fishy here. There's some talk about it at https://raid.wiki.kernel.org/index.php/The_Badblocks_controversy, and I also wonder if this is still enabled by default. It doesn't make much sense…

Kind regards

roy

--
Roy Sigurd Karlsbakk
(+47) 98013356
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt

--
The good you shall carve in stone, the bad you shall write in snow.
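
P.S. For completeness, here's roughly how I iterate over the members; a minimal sketch, and the /dev/sd[b-m] glob is a placeholder for whatever your member devices actually are:

  # Dump the md bad-block list recorded in each member's superblock
  # (device glob is a placeholder; substitute your actual members).
  for dev in /dev/sd[b-m]; do
      echo "== $dev =="
      mdadm --examine-badblocks "$dev"
  done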
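
As for the repeating daddr: XFS daddr values are 512-byte sectors relative to the filesystem's device (dm-6 here), so the byte offset into the LV is easy to compute; mapping it further down through LVM and the RAID-6 stripe to a member sector takes more work and isn't shown here:

  # daddr 0x3a85f3430 is a 512-byte-sector offset within dm-6 (bash arithmetic)
  printf '%d\n' $((0x3a85f3430 * 512))    # 8043373289472 bytes, ~7.3 TiB into the LV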
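
And regarding question 2: from what I can tell from the mdadm man page, recent versions can drop the list at assembly time; a sketch only, assuming the array is /dev/md0 (adjust names to your setup), and note that --update=no-bbl refuses to act on a non-empty list while force-no-bbl discards it outright:

  # Stop the array, then reassemble while removing the bad-block list
  # from each member's superblock (force-no-bbl discards non-empty lists).
  mdadm --stop /dev/md0
  mdadm --assemble /dev/md0 --update=force-no-bbl /dev/sd[b-m]

After that, an md "check" scrub (echo check > /sys/block/md0/md/sync_action) should re-read every sector and turn up any real errors.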