Hi,

I've set up an environment where I can collect the I/O service time of IDE-DMA requests. Since IDE disks only allow one outstanding request at a time, I can collect timing information simply by using printk() to record the I/O start time in __ide_dma_read/write() and the I/O end time in __ide_dma_intr(). Because I didn't want the printk() output going to syslog, I used netconsole to collect all the printk() messages on another computer connected over Ethernet.

Anyway, I noticed something very strange in my experiments. I used two different drives - a Western Digital 800BB and a Maxtor 6L040J2 - and both exhibit the same behavior. The average I/O service time I measured is around 2.5-3.0 msec, which seems reasonable since both are 7200rpm IDE drives. However, the thing that doesn't make sense to me is the maximum I/O service time: it can be as large as 50-80 msec. How can that be? Bad block remapping? I doubt it, since I used two different drives and both are very new. Did I set up my measurement correctly? Did I miss anything?

- Kyle
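
P.S. In case it helps to see what I mean, my instrumentation is essentially equivalent to the sketch below (the helper names are illustrative and the timestamp call is simplified; the real change is just a timestamp at request issue, a timestamp at the completion interrupt, and a printk() of the difference):

    /* Sketch only: one timestamp when the DMA request is issued, one
     * when the completion interrupt fires.  A single global start time
     * is enough because IDE has at most one outstanding request. */
    #include <linux/time.h>
    #include <linux/kernel.h>

    static struct timeval ide_io_start;

    /* called where the request is issued, i.e. from __ide_dma_read/write() */
    static inline void ide_io_mark_start(void)
    {
            do_gettimeofday(&ide_io_start);
    }

    /* called where the completion is handled, i.e. from __ide_dma_intr() */
    static inline void ide_io_mark_end(void)
    {
            struct timeval now;
            long usec;

            do_gettimeofday(&now);
            usec = (now.tv_sec - ide_io_start.tv_sec) * 1000000L
                 + (now.tv_usec - ide_io_start.tv_usec);
            printk(KERN_INFO "ide io service time: %ld usec\n", usec);
    }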
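
The netconsole side is just the standard module parameter, along these lines (the ports, IPs and MAC address below are placeholders for my actual setup):

    # on the machine under test; 10.0.0.2 / 12:34:56:78:9a:bc stand in for
    # the IP and MAC address of the machine collecting the logs
    modprobe netconsole netconsole=4444@10.0.0.1/eth0,9353@10.0.0.2/12:34:56:78:9a:bc

with netcat listening for the UDP log stream on the collector:

    nc -u -l -p 9353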