We have not had any error handling in the block layer's *add_disk() paths since the code's inception. As support for that is being added, the question becomes whether using the kernel's error injection infrastructure is actually appropriate here, considering that similar strategies may need to scale to other places in fs/block. Alternatives being considered are things like live patching, eBPF, and KUnit.

Error injection using the kernel's error injection infrastructure can scale by using debugfs for variability in a specific target error path of interest, as demonstrated in my latest patch series. This allows us to ensure that generic routines are only forced to fail in the context of add_disk(), for example, and not in other areas of the kernel. However, this still means adding boilerplate code for each call we want to force to fail on a code path. It also implies we would have to ask developers to add new error injection knobs and calls for each added piece of code which can fail in the area of interest. This begs the question -- can we do better?

An alternative is to use live patching for each error call we want to modify. However, this poses a difficulty: some calls are very generic, and we may not want to modify all instances of that routine, but only *in context* of, say, the add_disk() callers. For example, since add_disk() calls device_add(), we want at some point to be able to test having that routine fail, but only if called within the add_disk() path, and not in other cases. This makes using live patching difficult. Likewise, there are concerns about possible alternative bugs when using rmmod, or races not yet exposed.

If I can, I'll try to see what this might look like with eBPF before LSFMM, but until then, the question still stands as to what the best path forward might be for new code as we consider adding error injection.

Luis