Sorry, I probably can't bring your fs back; I just have a possibly relevant
data point to share: on a recent Ubuntu 20.04 with a rather convoluted
storage stack (i.e. LVM volumes consumed not by a filesystem directly but
instead by e.g. LUKS and/or bcache), the systemd service
blk-availability.service would cause 1m30s of frantic disk activity on
shutdown without ever finishing. I did not have much time to investigate
(and I don't right now), but I remember concluding that blk-availability,
or the scripts it invokes, does not handle the case where an LVM volume is
consumed by anything other than a plain filesystem, and so it loops
endlessly until systemd finally kills it. I haven't lost data AFAICT, but
it's a nuisance, and I disable blk-availability whenever I see it; I don't
even fully understand what it is meant to achieve.

It is possible that in your case the same kind of frantic disk activity
kept some vital filesystem metadata from ever reaching the disk.

Regards
Matthias

On Fri, Aug 21, 2020 at 11:58:08AM +0200, Swâmi Petaramesh wrote:
> Hello,
>
> I have a Manjaro system on which the disk setup is as follows:
>
> sda : mechanical HD
>
> - sda1 -> LUKS encryption -> bcache backing dev bcache0 -> BTRFS FS -> /home
>
> sdb : SSD
>
> - sdb1 -> System EFI partition
>
> - sdb2 -> LUKS encryption -> BTRFS FS -> / (system root FS)
>
> - sdb3 -> LUKS encryption -> bcache cache dev bcache0 (for /home)
>
> - sdb4 -> LUKS encryption -> SWAP
>
> bcache is working in writeback mode.
>
> This setup had worked flawlessly for more than a year across different
> kernel versions.
>
> Then I upgraded to Manjaro kernel 5.8.
>
> I immediately had the impression that overall disk access performance had
> worsened considerably.
>
> Then, after I had worked on a couple of VMs hosted on the bcache'd FS, I
> tried to power the system down normally from the GUI menu.
>
> At that time there was heavy disk activity going on, and systemd waited
> more than 1'30" trying to unmount the filesystems, to no avail. It looks
> like not everything made it to disk before it eventually timed out.
>
> Afterwards systemd killed the processes and powered down the system.
>
> At the next powerup, bcache activated as usual, but the BTRFS filesystem
> on it was completely *GONE*. The “file” utility identified the device as
> “data” (not an FS), mount complained that this was no longer any
> recognizable FS, and “btrfs-find-root” found nothing.
>
> AFAIK the FS is completely gone.
>
> I've been using BTRFS over bcache over LUKS (on 2 machines) for years,
> and it was very stable until today.
>
> Both the HD and the SSD look healthy, and their SMART data records no
> errors, remapped sectors, or other issues.
>
> So this was just to let you know... There might be some new kernel issue
> in bcache or BTRFS, or in their interaction.
>
> Best regards.
>
> ॐ
>
> --
> Swâmi Petaramesh <swami@xxxxxxxxxxxxxx> PGP 9076E32E
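
P.S. In case it helps anyone hitting the same shutdown hang: a minimal
sketch of how I check how the block devices are layered and how I keep
blk-availability out of the way (just what I do on my machines, not a
general recommendation; adjust to your own setup):

    # Show how devices are stacked (LVM -> LUKS -> bcache -> filesystem)
    lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

    # Prevent blk-availability.service from ever being started,
    # so it cannot spin during shutdown
    systemctl mask blk-availability.service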