Re: Re[20]: Linux Raid + BTRFS: rookie mistake ... dd bs=1M

On Tue, Apr 9, 2019 at 9:44 PM <no_spam@xxxxxxxxxxxx> wrote:
>
>
> Chris,
> Synology was able to restore access to my data volume and I was able
> to grab the original dd save of my data partitions. I seem to have
> access to all the data on the drive, so I can probably do a backup /
> restore cycle.

Well, at the very least do the backup so that the crisis is over.
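
If it helps, a plain rsync to an external disk or another machine is
enough to get everything off while the volume is still readable. The
paths below are only placeholders for wherever DSM mounts your volume
and wherever you have enough space:

  # archive mode, preserving ACLs, xattrs and hardlinks; -P shows progress
  rsync -aAXHP /volume1/ /mnt/external-backup/volume1/

  # or push it over the network to another box
  rsync -aAXHP /volume1/ user@otherhost:/srv/nas-backup/volume1/

Run it a second time when it finishes; the second pass should complete
quickly and transfer nothing if the first pass got everything.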


>
> What they had me do was remove all the drives, put a dummy drive in (a
> 1TB), install the NAS software, then hot-plug the defective disks into
> the enclosure. They then remoted into the box and did some "magic" to
> restore the RAID superblocks and the btrfs data.
>
> What's weird is they somehow managed to break the system partition
> (which I think is /dev/md0) on the original disks. It now shows as a
> corrupted system partition. I had thought that the reason for the
> message was that the NASrecovery "drive" was still in the system.
> Rebooting without that drive gives the setup wizard. Rebooting with
> the original drives works with my original settings, but the data
> volume is still showing crashed.

I can't really parse this. If the critical data is out, and the most
important of it has been duplicated or even triplicated, then I
personally would just blow away everything and start from scratch,
presumably with the new big drives you were planning on migrating to.
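
Out of curiosity, and before wiping anything, you can look at what
state their "magic" left the md superblocks and the btrfs filesystem
in; all of this is read-only. The device names below are only
examples, so substitute whatever /proc/mdstat shows for your members:

  cat /proc/mdstat            # which arrays are assembled, from which partitions
  mdadm --detail /dev/md2     # array-level state, members, UUID
  mdadm --examine /dev/sda5   # the superblock as written on one member
  btrfs filesystem show       # which md device btrfs thinks it lives on

None of those commands write anything, so they're safe to run on the
recovered disks.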


>
> I specifically asked the tech support contact the following on Monday night:
>
> "
> The files looked intact, but I saw a message about an invalid system
> partition on those drives. I figured it was simply because there was a
> system partition on the original disks and it couldn't be mounted
> while the "recovery" drive was still in the system.
>
> I did a controlled power down and pulled the "recovery" drive. Upon
> restarting, the system goes back into setup mode.
>
> I stopped there because I expected that my configuration information
> should still be on the degraded volume and I did not want to risk the
> setup "overwriting" the configuration data or even worse the volume
> data... you painstakingly saved.
>
> I'm going to power down the NAS and put the "recovery" drive back in
> so that you can remotely log in to the system again. Please let me know
> what I need to do to re-enable the system partition(s) so that the NAS
> will function as configured before the dd wipe.
> "
>
> He responded as follows:
> "
> You can repair the system partition simply by clicking the "Repair"
> link in the Overview tab of Storage Manager. Please note that this
> will overwrite your original DSM installation and configuration files
> with the one from the recovery drive, but so would reinstalling DSM
> without the recovery drive.

I can't really parse all of that either. It sounds like the repair is
actually some kind of reset, which is reasonable.

Again, if you can get your data out, that's the main goal. A repair
that sends you back in a time machine sounds nice, but for long-term
stability I think it's best to blow away everything and create a new
NAS with the new drives and all new settings using their latest
software revision, and then repopulate from your backups.

And then make sure you have a backup strategy that backs up the really
important stuff more frequently. The fact of life with any RAID or NAS
is that it can break; your sanity is best served by not caring when it
does, and the only way not to care is to have the data you care about
replicated, including offsite for the really important stuff.
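
Since the data volume is btrfs, one low-effort way to keep the
important stuff replicated is read-only snapshots plus send/receive to
another machine. Everything below (paths, hostname, dates) is only
illustrative:

  # take a read-only snapshot of the data volume
  mkdir -p /volume1/.snapshots
  btrfs subvolume snapshot -r /volume1 /volume1/.snapshots/2019-04-10

  # ship it to another box that also runs btrfs
  btrfs send /volume1/.snapshots/2019-04-10 | \
      ssh backuphost btrfs receive /srv/nas-backup

  # later snapshots can be sent incrementally against a common parent
  btrfs send -p /volume1/.snapshots/2019-04-10 \
      /volume1/.snapshots/2019-04-17 | \
      ssh backuphost btrfs receive /srv/nas-backup

Plain rsync to anything at all (it doesn't have to be btrfs on the
other end) works fine too for the really important subset.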




> "
>
>
> I've replied with the following, so far unanswered:
> "
> Neither of those options is really ideal.
>
> I looked at the NAS, and the original NAS settings appear to be under
> /dev/md100 while the NASrecovery files are in /dev/md0.
>
> Are you sure there is no way to point the rootfs back to the original
> UUID with the files intact?
>
> I did try to boot with my original disk3, and while the system booted,
> it still showed the array as crashed, so I'm back to the NASrecovery
> disk.
> "
>
>
> It seems I'm so close to a fully restored NAS box that I'm just hoping
> for some miracle. I guess I should count my blessings they restored
> access to my files... but it seems weird they couldn't or won't correct
> for the corrupted system partition. I feel like there is probably a
> simple solution which would allow me to point the working / original
> NAS system files to the "recovered" btrfs data, but it's really kinda
> above my head right now. Too much black magic used without a clear
> explanation.

I can't answer any of that. If the reset repair of the system
partitions does not cause the array with your data on it to also get
wiped, then I'd say it's a pretty low penalty to have to restore some
settings or whatever.
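
If you want to see for yourself which array the running DSM is using
versus the one holding your old settings, the following are all
read-only; md0 and md100 are just the names from your mail:

  cat /proc/mdstat          # lists md0, md100, etc. and their member partitions
  mdadm --detail /dev/md0
  mdadm --detail /dev/md100
  lsblk -f                  # filesystems, UUIDs, and what's mounted where
  findmnt /                 # which device the running root actually is

Whether DSM can be pointed back at the old root filesystem is a
Synology question, though, not an md or btrfs one.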


> If you have any ideas, I'm open to suggestions. I think the right
> thing for me to do right now is go ahead and do a backup of the
> restored files - just in case.

At this point I can't tell for sure what you have backed up, what you
can back up, and what you can't. But no matter what, before fixing,
resetting, or starting over, make multiple backups so that you're
satisfied you could rebuild everything totally from scratch.
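
One cheap sanity check before wiping anything: re-run the copy as a
checksumming dry run and confirm it finds nothing left to transfer.
Paths are placeholders again:

  # -c compares checksums, -n is a dry run, -i itemizes any differences
  rsync -aAXHcni /volume1/ /mnt/external-backup/volume1/

Any itemized line means that file differs between the NAS and the
backup; no output means the copy matches.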



-- 
Chris Murphy


