Write-Intent Bitmaps and disk caches

I'm a little confused about write-intent bitmaps and how they interact with disk caches - I would appreciate any clarification here.

The way I understand it, if a device fails in a RAID set, the bitmap tracks the regions that have changed since the failed device left the array. If the device is added back in, the re-sync time is much shorter, since only those dirty regions have to be re-synced.
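
For reference, here is a minimal sketch of the workflow I have in mind (the array and device names are hypothetical):

    # Add an internal write-intent bitmap to an existing array
    mdadm --grow /dev/md0 --bitmap=internal

    # After a transient failure, re-add the member; md should use the
    # bitmap to re-sync only the regions marked dirty
    mdadm /dev/md0 --re-add /dev/sdc1

    # Inspect the bitmap state recorded on a component device
    mdadm --examine-bitmap /dev/sdc1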

But in that case it's possible that blocks A, B, and C were written to the device, and the failure was detected on block C. Blocks A and B would probably still be held in the device's cache (the hard drive's own cache, or worse, the much larger cache of a disk array if the members are arrays rather than plain drives). When the device was re-added, block C would presumably be re-synced as directed by the bitmap, but would A and B be lost forever because they fell out of the cache on the drive?
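
For what it's worth, I can at least query or disable the volatile write cache on the drives themselves (assuming drives that respond to hdparm; /dev/sdc is hypothetical), though that doesn't answer what md itself does about this:

    # Show the drive's current write-caching setting
    hdparm -W /dev/sdc

    # Turn the drive's volatile write cache off
    hdparm -W0 /dev/sdc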

Does the bitmap, or something else, take this into account? Or will this eventually lead to an inconsistent read or data corruption down the road? Or am I just out waving my paranoid flag today?


--
-===========================-
 Ty! Boyack
 NREL Unix Network Manager
 ty@xxxxxxxxxxxxxxxxxx
 (970) 491-1186
-===========================-
