On Fri, May 8, 2020, at 17:09, Mike Snitzer wrote:
> On Fri, May 08 2020 at 3:22pm -0400,
> kj@xxxxxxxxxx <kj@xxxxxxxxxx> wrote:
>
> > On Thu, May 7, 2020, at 21:12, Chaitanya Kulkarni wrote:
> > > On 05/07/2020 04:06 PM, Kjetil Orbekk wrote:
> > > > +	if (tio->error)
> > > > +		atomic_inc(&md->ioerr_cnt);
> > >
> > > Given that there are so many kinds of errors, how would the user
> > > know what kind of error was generated, and how many times?
> >
> > The intended use case is to provide an easy way to check whether any
> > errors have occurred at all; the user then needs to investigate using
> > other means. I replied with more detail to Alasdair's email.
>
> But most operations initiated by the user that fail will be felt by the
> upper layer that the user is interfacing with.
>
> The only exception that springs to mind is dm-writecache's writeback,
> which occurs after writes have already been acknowledged back to the
> upper layers -- in that case dm-writecache provides an error flag that
> is exposed via writecache_status.
>
> Anyway, I'm just not seeing why you need an upper-layer, use-case-agnostic
> flag to know an error occurred in DM. The layers that interface with
> the DM device will have been notified of all errors.

It's used as a signal by a probing process which is not in the IO path
itself.

> And why just for DM devices? Why not all block devices (in which case
> DM would get this feature "for free")?

This sounds like a good idea to me. It looks like I could add this to
disk_stats and expose it through the block/<device>/stats file. I'll
try to see if this is feasible.

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
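For what it's worth, here is a rough userspace sketch of how a probing process outside the IO path could consume such a counter. It assumes a hypothetical extra error-count field appended as the last field of a per-device stats line; no current kernel exports this, so the field position and the sample line are illustration only.

```python
def parse_ioerr_count(stat_line: str) -> int:
    """Return the hypothetical I/O error counter assumed to be
    appended as the last field of a block device stats line
    (e.g. a line read from block/<device>/stats).

    NOTE: the trailing error-count field is an assumption for
    illustration; it is not part of any current kernel interface.
    """
    fields = stat_line.split()
    if not fields:
        raise ValueError("empty stats line")
    return int(fields[-1])


def has_new_errors(previous: int, current: int) -> bool:
    # The probe only needs to know whether the counter advanced
    # since the last poll, not what kind of error occurred.
    return current > previous


# Example: a made-up stats line with the hypothetical error count
# (7) appended after the usual counters.
sample = "1000 50 8000 300 2000 60 16000 400 0 700 700 7"
count = parse_ioerr_count(sample)
```

A probe would poll the file periodically, compare against the last observed value with `has_new_errors()`, and trigger deeper investigation elsewhere when the counter moves.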