Hi Dan,

I have a question about ADMA + RAID-5 and disk failure. In the event of a disk failure, what happens to the descriptors that are queued up in the DMA engine for the XOR operation?

This is what I am trying:

1. Create a RAID-5 array using 4 disks.
2. Create an XFS file system on it and mount it.
3. Force the disk failure using mdadm -f /dev/md0 /dev/sdc
4. Then hot-remove the drive: mdadm -r /dev/md0 /dev/sdc
5. Unmount /dev/md0, then run a file system check using xfs_repair -L /dev/md0

(The full command sequence is also collected as a script sketch at the end of this mail.)

I see messages as shown below. These messages don't show up if I don't use ADMA. There seems to be no data loss, but there still appears to be some kind of data inconsistency.

Regards,
Marri

----- messages -----
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
bad magic # 0x83e1001c in inobt block 22/47
expected level 0 got 14369 in inobt block 22/47
dubious inode btree block header 22/47
badly aligned inode rec (starting inode = 6375867046)
bad starting inode # (6375867046 (0x16 0x7c0802a6)) in ino rec, skipping rec
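
For reference, here is the sequence as a rough shell script. This is only a sketch of my test: /dev/sdb, /dev/sdd, /dev/sde, the mount point /mnt/test, and the use of plain mkfs.xfs are placeholders; only /dev/md0 and /dev/sdc match the commands quoted in the steps above.

----- test script (sketch) -----
#!/bin/sh
# Rough reproduction of the test described above. Member device names other
# than /dev/sdc, the mount point, and the mkfs options are placeholders;
# adjust them for the actual test box.

# 1. Create the RAID-5 array from 4 disks.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# 2. Make an XFS file system on the array and mount it.
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/test

# 3. Force a failure of one member.
mdadm -f /dev/md0 /dev/sdc

# 4. Hot-remove the failed drive.
mdadm -r /dev/md0 /dev/sdc

# 5. Unmount and run the forced file system check.
umount /mnt/test
xfs_repair -L /dev/md0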