On Mon, Jul 16, 2018 at 8:21 PM Wol's lists <antlists@xxxxxxxxxxxxxxx> wrote:
> Sorry but we live in the real world too. I get you want things fixed and
> you want them to work, but we've been bitten too often by USB letting us
> down at critical moments. What's the point of setting your array to
> rebuild if it's unlikely to complete?
>
> I don't know how long a streaming copy would take on one of your disks,
> but the MTBF of USB is probably a similar order of magnitude. As I found
> out personally with PATA, sharing a controller leads to PAIN PAIN PAIN.
> As Roger says you probably have two - or three tops - USB ports, and you
> probably don't have a clue what devices are on them. It only takes a
> device you forgot (or didn't realise) was there to stall the
> port/controller/hub, and your recovery will get trashed.
>
> Will your computer(s) take an add-in PCIe board? At about £30 apiece
> that's two extra (e)SATA ports. Then you're out another £30 for a eSATA
> dock, and when your next disk croaks you have a RELIABLE way of
> rebuilding it, not something that's going to crap out on you without
> warning.

It's not hugely an issue for me now. I know from experience that mixing
USB/SATA loses data, and I'll likely be moving away from RAID generally.
That has nothing to do with this issue; it's just that a redundant array
of computers, with storage built on top of that, is more of an option for
me now.

But I do think your project would benefit from two things:

1) Explaining to users that log messages like "bio too big device mdX
(248 > 240)" mean data corruption. It means data has almost certainly
been lost. So beyond all the "USB is bad" *opinions* you have all
expressed, mixing USB into an array (or dm-crypt in some setups, and
possibly other bus combinations) risks data corruption. For instance,
this article doesn't warn about mixing buses
(https://raid.wiki.kernel.org/index.php/Devices), and this one just
expresses hardware opinions without mentioning the underlying data-loss
issue for mixed buses
(https://raid.wiki.kernel.org/index.php/What%27s_all_this_with_USB%3F).

2) Pushing for the patches below to be accepted so this error can't
happen in the first place. That would be the ideal solution.

I'm operating under the possibly incorrect assumption that, as RAID
developers, you want the system you build to be robust and to protect
user data as far as practicable. If that's true, at least step 1 would
help in your mission. Learning about data-loss scenarios from experience
isn't ideal; turning that experience into documentation would help. I'm
happy to update the wiki with my research if that would help achieve #1.

Lastly, my question was about USB, but I've noted that this affects
other configurations. In spite of that, your responses have focused
solely on USB. As I've pointed out, you can get the same error with
dm-crypt (https://lists.debian.org/debian-kernel/2015/09/msg00033.html),
which includes a link to a not-yet-accepted patch to fix this issue
generally (https://www.redhat.com/archives/dm-devel/2012-May/msg00159.html).
And the issue could obviously affect other mixed-bus deployments.

Again, if the goal of this project is not reliable, robust storage but
something else (which is totally valid - as I said, you do you), then
feel free to keep trashing USB and ignoring the issue. That's cool; I'm
fine with that. But I operate under the assumption that your goal is
reliable, robust storage, and that you'd want feedback on how to get
there.
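In case it helps with the documentation side (point 1 above), here's a
rough sketch of my own - the script and its name are purely illustrative,
nothing from mdadm or the kernel tree - that compares the per-request
size limits (max_sectors_kb / max_hw_sectors_kb) that an array's members
advertise in sysfs. As far as I understand it, a mismatch in those limits
is exactly what surfaces as "bio too big" when the device with the
smaller limit is added to a running array:

#!/usr/bin/env python3
# check_bio_limits.py - illustrative sketch only, not part of mdadm.
# Compares the request-size limits of the members of an md array; a
# mismatch is the condition behind "bio too big device mdX (a > b)"
# when the smaller device is added to a running array.
import sys
from pathlib import Path

SYS_BLOCK = Path("/sys/block")

def queue_dir(slave_link):
    """Find the request-queue directory for an md member.

    Entries under /sys/block/mdX/slaves/ may be whole disks or
    partitions; partitions have no queue/ of their own, so fall back
    to the parent (whole-disk) directory.
    """
    dev = slave_link.resolve()
    return dev / "queue" if (dev / "queue").is_dir() else dev.parent / "queue"

def limits(qdir):
    """Read the soft and hard per-request size limits, in KiB."""
    return {name: int((qdir / name).read_text())
            for name in ("max_sectors_kb", "max_hw_sectors_kb")}

def main(md):
    members = sorted((SYS_BLOCK / md / "slaves").iterdir())
    if not members:
        sys.exit(f"{md}: no member devices found")
    per_member = {m.name: limits(queue_dir(m)) for m in members}
    for name, lim in per_member.items():
        print(f"{md}/{name}: {lim}")
    if len({tuple(sorted(l.items())) for l in per_member.values()}) > 1:
        print(f"WARNING: {md} members advertise different request-size "
              "limits; adding the smaller device to a running array can "
              "trigger 'bio too big' and lost writes.")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "md0")

Something along those lines, run as e.g. "python3 check_bio_limits.py md0"
before hot-adding a replacement disk, would at least tell people up front
that they are about to mix devices with different limits.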
The more limited RAID1 patch I mentioned earlier in this thread, or
Overstreet's more general patch, might be ways to get there - ways to
stop data being corrupted silently, with no direct warning. So my
apologies if I've misunderstood your purpose. But if I have ascertained
it correctly, your responses have been getting in the way of your goals,
which seems unfortunate and something you might want to review.

Let me know what you'd need from me if you'd like help clarifying the
documentation. Otherwise, best of luck.

Kevin
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html