On 4/21/19 5:06 PM, Andre Robatino wrote:
> Device     Boot      Start        End    Sectors   Size Id Type
> /dev/sda1  *          2048    1026047    1024000   500M  7 HPFS/NTFS/exFAT
> /dev/sda2          1026048  536872959  535846912 255.5G  7 HPFS/NTFS/exFAT
> /dev/sda3        536872960  538970111    2097152     1G 83 Linux
> /dev/sda4        538970112 3907028991 3368058880   1.6T  5 Extended
> /dev/sda5        538972160 3907028991 3368056832   1.6T 8e Linux LVM
>
> so the bad LBA is in both sda4 and sda5. Trying tune2fs to find the
> block size gives
>
> [root@lenovo-pc ~]# tune2fs -l /dev/sda4 | grep Block
> tune2fs: Attempt to read block from filesystem resulted in short read
> while trying to open /dev/sda4
sda4 is an extended partition. That's just a container, no filesystem.
> Couldn't find valid filesystem superblock.
>
> [root@lenovo-pc ~]# tune2fs -l /dev/sda5 | grep Block
> tune2fs: Bad magic number in super-block while trying to open /dev/sda5
sda5 is an LVM physical volume, also not directly a filesystem.
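If you want to see what's stacked on top of it, lsblk should show the layout:

  lsblk /dev/sda5

That ought to list fedora-root (and fedora-home and fedora-swap, if they exist) as logical volumes sitting inside the sda5 PV.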
> [root@lenovo-pc ~]# tune2fs -l /dev/mapper/fedora-root | grep Block
> Block count:              419037184
> Block size:               4096
> Blocks per group:         32768
Do you have a home partition as well? If so, the bad block is more likely to be in that one. Try running "badblocks -s -b 4096 /dev/mapper/fedora-root", and if you have a home partition, "badblocks -s -b 4096 /dev/mapper/fedora-home". I'm assuming 4K blocks; that's the default for ext4, and it matches the Block size shown in your tune2fs output above.

If badblocks reports a hit, run debugfs on that filesystem and enter "icheck <block number from badblocks>". That should give you at least one inode; if it doesn't, the block probably isn't in use. Then exit debugfs and run "find / -xdev -inum <inode number>" to find the file corresponding to the inode. Use /home instead of / if the bad block was found in the home filesystem.
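Putting it together, the whole sequence would look something like this (the block number 12345678 below is made up just to show the flow):

  # scan the root filesystem for unreadable 4K blocks (read-only test)
  badblocks -s -b 4096 /dev/mapper/fedora-root

  # suppose badblocks reported block 12345678 (hypothetical number)
  debugfs /dev/mapper/fedora-root
  debugfs:  icheck 12345678
  debugfs:  quit

  # map the inode that icheck printed back to a file path
  find / -xdev -inum <inode number from icheck>

debugfs opens the filesystem read-only by default, so the icheck lookup is safe to run even on the mounted root.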