You should take this to the device-mapper list, but I'll try here. For lurkers, this diagram may be helpful:
http://www.thomas-krenn.com/en/oss/linux-io-stack-diagram/linux-io-stack-diagram_v1.0.pdf

On Wed, May 8, 2013 at 11:15 AM, neha naik <nehanaik27@xxxxxxxxx> wrote:
> Hi Greg,
>    Thanks for the information. I have another question :).
> Is there less flexibility if we use a device mapper target?
> For example, in a block device driver you can use the API such that it
> won't use the OS I/O scheduler, so the I/O comes directly to the block
> device driver through the 'make_request' call. With the device mapper I
> don't think that happens (looking at the API calls).

I believe that is correct, and it is one of the reasons DRBD is not part of DM. DRBD has various modes, including ones where it guarantees that the I/Os on both the main target and the replicated target land in exactly the same order. I don't believe DM can be used to totally control disk I/O. It is meant to be stackable, so I think you lose some exact control. (A rough sketch of the bio-based 'make_request' approach is appended at the end of this mail.)

> Does this mean that stuff like I/O scheduling, barrier control, etc. is
> done by the device mapper itself and we can focus only on 'mapping' the
> I/O?

As shown in the diagram linked above, DM sits above the I/O schedulers, so you should not have to worry about them. If you want to play with schedulers, I think that should be done outside of DM. So I believe you can ignore barriers/scheduling UNLESS you create a target that needs special barrier/scheduling control. (The second sketch appended below shows how little a simple DM target actually has to implement.)

The obvious example of something that needs it is raid5. If you get a barrier that forces data out to a single disk in a RAID, you MUST ensure that the parity is calculated and written out before you report the barrier complete. That is going to take special handling no matter what you do. It's been a couple of years since I dug into raid 5/6 as it relates to barriers, but it used to be that the code simply didn't do the right thing in mdraid, and DM did not support raid 5/6, so yes, the coders could ignore it, but they created broken logic when they did.

Greg
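
Sketch 1: this is roughly what Neha is describing on the plain block-driver side: a bio-based driver that installs its own make_request function, so bios reach the driver directly in submission order and never pass through the elevator/I/O schedulers. It is written against the 3.x-era API (blk_queue_make_request, void make_request prototype, two-argument bio_endio); the details differ on other kernel versions, and every "my_" name here is made up for illustration.

#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/bio.h>

/* Illustrative only: a bio-based driver that bypasses the I/O schedulers. */
static void my_make_request(struct request_queue *q, struct bio *bio)
{
	/* Bios arrive here in submission order; handle or remap them as
	 * needed.  Nothing above us merges or reorders them. */
	bio_endio(bio, 0);		/* complete the bio, 0 = success */
}

static struct request_queue *my_queue;

static int __init my_init(void)
{
	my_queue = blk_alloc_queue(GFP_KERNEL);
	if (!my_queue)
		return -ENOMEM;

	/* Install our entry point instead of the default request_fn path,
	 * so the elevator is never involved for this queue. */
	blk_queue_make_request(my_queue, my_make_request);

	/* ...allocate a gendisk, attach my_queue, add_disk(), etc... */
	return 0;
}

static void __exit my_exit(void)
{
	blk_cleanup_queue(my_queue);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");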
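
Sketch 2: the DM side, to show what "focus only on mapping" means in practice: a pass-through target that implements just ctr/dtr/map and contains no scheduler or barrier logic at all. Again this assumes the 3.x-era API (the map() prototype has changed between kernel versions), and the target name and context struct are purely illustrative.

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/device-mapper.h>

/* Illustrative pass-through target: every bio is simply redirected to the
 * backing device.  Scheduling, merging and barrier plumbing are handled
 * by the layers around us. */

struct passthrough_ctx {
	struct dm_dev *dev;
};

/* Constructor: <dev_path> is the only table argument. */
static int pt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	struct passthrough_ctx *pc;

	if (argc != 1) {
		ti->error = "Invalid argument count";
		return -EINVAL;
	}

	pc = kmalloc(sizeof(*pc), GFP_KERNEL);
	if (!pc)
		return -ENOMEM;

	if (dm_get_device(ti, argv[0], dm_table_get_mode(ti->table),
			  &pc->dev)) {
		kfree(pc);
		ti->error = "Device lookup failed";
		return -EINVAL;
	}

	ti->private = pc;
	return 0;
}

static void pt_dtr(struct dm_target *ti)
{
	struct passthrough_ctx *pc = ti->private;

	dm_put_device(ti, pc->dev);
	kfree(pc);
}

/* The whole job: point the bio at the underlying device and hand it back. */
static int pt_map(struct dm_target *ti, struct bio *bio)
{
	struct passthrough_ctx *pc = ti->private;

	bio->bi_bdev = pc->dev->bdev;
	bio->bi_sector = dm_target_offset(ti, bio->bi_sector);
	return DM_MAPIO_REMAPPED;
}

static struct target_type passthrough_target = {
	.name    = "passthrough",
	.version = {1, 0, 0},
	.module  = THIS_MODULE,
	.ctr     = pt_ctr,
	.dtr     = pt_dtr,
	.map     = pt_map,
};

static int __init pt_init(void)
{
	return dm_register_target(&passthrough_target);
}

static void __exit pt_exit(void)
{
	dm_unregister_target(&passthrough_target);
}

module_init(pt_init);
module_exit(pt_exit);
MODULE_LICENSE("GPL");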