Run cat /proc/mounts and verify that the array is actually mounted, then ls -l /raid. Unmount it, run ls -l /raid again, and make sure nothing is "under" the mount point.

A problem with the RAID and/or the filesystem is very unlikely to cleanly remove every file like this. For that to happen, typically either an rm -rf was run against the files, a mkfs rebuilt the filesystem and deleted everything, or nothing was ever written to that filesystem in the first place.

On Sun, May 24, 2020 at 6:38 AM Patrick O'Callaghan <pocallaghan@xxxxxxxxx> wrote:
>
> Still getting the hang of md. I had it working for several days (2
> disks in RAID1 config) but after a system update and reboot, it
> suddenly shows no data:
>
> # lsblk
> NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> [...]
> sdd           8:48   0 931.5G  0 disk
> └─md127       9:127  0 931.4G  0 raid1
>   └─md127p1 259:0    0 931.4G  0 part
> sde           8:64   0 931.5G  0 disk
> └─md127       9:127  0 931.4G  0 raid1
>   └─md127p1 259:0    0 931.4G  0 part
>
> # mdadm --detail /dev/md127p1
> /dev/md127p1:
>            Version : 1.2
>      Creation Time : Wed May 20 16:34:58 2020
>         Raid Level : raid1
>         Array Size : 976628736 (931.39 GiB 1000.07 GB)
>      Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
>       Raid Devices : 2
>      Total Devices : 2
>        Persistence : Superblock is persistent
>
>      Intent Bitmap : Internal
>
>        Update Time : Sun May 24 12:29:54 2020
>              State : clean
>     Active Devices : 2
>    Working Devices : 2
>     Failed Devices : 0
>      Spare Devices : 0
>
> Consistency Policy : bitmap
>
>               Name : Bree:0  (local to host Bree)
>               UUID : ba979f01:7f1dbe79:24f19f68:7ba6000c
>             Events : 22436
>
>     Number   Major   Minor   RaidDevice State
>        0       8       48        0      active sync   /dev/sdd
>        1       8       64        1      active sync   /dev/sde
>
> # mount /dev/md127p1 /raid
> # ls /raid
>
> How is this possible? The only thing that touches the array is a borg
> backup run from crontab, which I have verified is working correctly,
> including just before the update and reboot this morning. It looks as
> if the mount is mounting the wrong thing.
>
> Or am I missing something very obvious?
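The first check above can be scripted. This is a minimal sketch that consults /proc/mounts (the kernel's authoritative mount table) to decide whether a directory is currently a mount point; the is_mounted helper name is illustrative, and /raid is assumed from the thread:

```shell
#!/bin/sh
# Sketch: report whether /raid is an active mount point, by looking for
# an exact match in field 2 (the mount point) of /proc/mounts.
is_mounted() {
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /raid; then
    echo "/raid is mounted: ls /raid shows the array's filesystem"
else
    echo "/raid is NOT mounted: ls /raid shows the underlying directory"
fi
```

If the second branch fires, an empty ls /raid just means the mount point directory itself is empty, not that the array lost data.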
>
> poc
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx