On Sun, 2015-02-08 at 10:45 +0800, Greg KH wrote:
> On Sat, Feb 07, 2015 at 09:27:05PM -0500, Laurence Oberman wrote:
> > Hello
> > It's not going to be tens of thousands of devices. That count was an
> > aggregate based on thousands of servers.
> > In reality it's unlikely to ever be more than 100 tape drives per
> > individual Linux kernel instance.
> > Therefore sysfs will be the valid way to do this and make the data
> > available to user space.
>
> Even if it is only 2 tape drives, again, what's wrong with using the
> existing I/O statistics interfaces that all block devices have?

Tape is a character device.  It only uses block via SCSI (SCSI uses block
to provide an issue queue for every device).  One of the problems with
this model is that the block kobj, where all the statistics hang, is
actually never exposed for these devices because they don't have a block
name.  Even granted that we could alter block to give names to the
nameless queues and expose them in /sys/block, we'd still have the problem
that the queue statistics are the property of the pluggable I/O scheduler,
so there's a disconnect between the SCSI upper layer drivers and the block
scheduler (since the latter is embedded by design).  Pulling that apart
would get us into a fairly nasty layering violation (drivers aren't
supposed to care about the schedulers).

> Don't go making special one-off interfaces for one type of device if at
> all possible.

I don't really see any way around this.  The statistics the block
schedulers collect are relevant to I/O load balancing; that's not at all
the same class of statistics as the users of tape are interested in.  This
problem is equivalent to the fibre channel one, where we collect
fc_host_statistics in the scsi_transport_fc.c class as an attribute group
(block doesn't want to see or know any of that information because it's
all relevant to the transport, not the block abstraction).

James
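
[Editor's note: a minimal sketch of the attribute-group pattern described
above, modelled loosely on how fc_host_statistics is exposed from
scsi_transport_fc.c.  The names st_stats, read_cnt, write_cnt and
st_register_stats are invented for illustration and are not the actual
st driver or fc transport interface; this is an assumption-laden sketch,
not the proposed patch.]

/*
 * Sketch: per-drive counters exported as a read-only sysfs attribute
 * group, hung off the upper layer driver's class device rather than
 * off the (hidden) block queue.
 */
#include <linux/device.h>
#include <linux/sysfs.h>
#include <linux/atomic.h>

struct st_stats {                       /* hypothetical per-drive counters */
        atomic64_t read_cnt;
        atomic64_t write_cnt;
};

/* One show routine per counter; each becomes a read-only sysfs file. */
static ssize_t read_cnt_show(struct device *dev,
                             struct device_attribute *attr, char *buf)
{
        struct st_stats *s = dev_get_drvdata(dev);

        return scnprintf(buf, PAGE_SIZE, "%lld\n",
                         (long long)atomic64_read(&s->read_cnt));
}
static DEVICE_ATTR_RO(read_cnt);

static ssize_t write_cnt_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
{
        struct st_stats *s = dev_get_drvdata(dev);

        return scnprintf(buf, PAGE_SIZE, "%lld\n",
                         (long long)atomic64_read(&s->write_cnt));
}
static DEVICE_ATTR_RO(write_cnt);

static struct attribute *st_stats_attrs[] = {
        &dev_attr_read_cnt.attr,
        &dev_attr_write_cnt.attr,
        NULL,
};

/* .name puts the files in a "statistics" subdirectory, as the fc class does. */
static const struct attribute_group st_stats_group = {
        .name  = "statistics",
        .attrs = st_stats_attrs,
};

/* Called by the upper layer driver once its class device exists. */
static int st_register_stats(struct device *dev)
{
        return sysfs_create_group(&dev->kobj, &st_stats_group);
}

With something along these lines, user space would read values from paths
such as /sys/class/scsi_tape/st0/statistics/read_cnt (path assumed here
purely for illustration), with no involvement from the block layer or the
I/O scheduler.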