On Thu, Feb 7, 2019 at 6:30 AM David T-G <davidtg@xxxxxxxxxxxxxxx> wrote:
>
> diskfarm:root:4:~> parted /dev/md0 print
> Error: end of file while reading /dev/md0
> Retry/Ignore/Cancel? ignore
> Error: The backup GPT table is corrupt, but the primary appears OK, so
> that will be used.

[snip]

> when poking, I at first thought that this was a RAID issue, but all of
> the md reports look good and apparently the GPT table issue is common, so
> I'll leave all of that out unless someone asks for it.

A corrupt backup GPT is a huge red flag that there's user confusion,
which has in turn confused the storage stack itself. GPT partitioning
an array, in particular with just one partition, is unnecessarily
complicated and thus pointless, so I'm suspicious that /dev/md0 is not
in fact partitioned, and that this GPT may well belong to the first
member device of the array, not to the array. And the reason the backup
looks "corrupt" is that parted and fdisk search for it at the end of
/dev/md0 rather than at the end of the device this GPT actually belongs
to.

So I suspect GPT and XFS have stepped on each other, possibly more than
once each, which is why both show corruption while the mdadm metadata
doesn't. It's even possible that one or more signatures in this storage
stack are stale, never having been properly wiped, and are now haunting
the stack.

I wouldn't make any writes until you've double-checked what the layout
is supposed to be. First check whether the individual member drives are
GPT partitioned, and whether their primary and backup tables are valid
(not corrupt); if there is corruption, don't fix it yet. Right now you
just need to focus on what all of the on-disk metadata says is true,
and then you'll be able to discover which metadata is wrong and is
contributing to all this confusion.
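A minimal, read-only way to take that inventory (assuming the members
are /dev/sda through /dev/sdd; substitute the actual member devices
listed in /proc/mdstat):

    # report every signature wipefs can find, without writing anything
    wipefs --no-act /dev/md0
    wipefs --no-act /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # mdadm's view of the array, and of each member's superblock
    mdadm --detail /dev/md0
    mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # print a member drive's GPT; gdisk reports whether the primary
    # and backup tables are valid
    gdisk -l /dev/sda

If wipefs finds a gpt signature on a member device as well as on
/dev/md0, that's the misplaced or stale GPT I'm describing.

--
Chris Murphy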