Re: Data Integrity Check on EXT Family of Filesystems

On 23 September 2013 22:08, Andrew Martin <amartin@xxxxxxxxxxx> wrote:
>
> Hello,
>
> I am considering writing a tool to perform data integrity checking on filesystems
> which do not support it internally (e.g. ext4). When storing long-term backups,
> I would like to be able to detect bit rot or other corruption to ensure that I
> have intact backups. The method I am considering is to recreate the directory
> structure of the backup directory in a "shadow" directory tree, and then hash
> each of the files in the backup directory and store the hash in the same filename
> in the shadow directory. Then, months later, I can traverse the backup directory,
> taking a hash of each file again and comparing it with the hash stored in the
> shadow directory tree. If the hashes match, then the file's integrity has been
> verified (or at least has not degraded since the shadow directory was created).
>
> Does this seem like a reasonable approach for checking data integrity, or is there
> an existing tool or different method which would be better?
>
> Thanks,
>
> Andrew Martin

Here are a couple of integrity-checking tools to consider:
tripwire - http://sourceforge.net/projects/tripwire/
aide - http://aide.sourceforge.net/

I don't use them myself, just providing options.
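
If you do decide to roll your own along the lines you describe, the
shadow-tree scheme is straightforward. Here's a minimal, untested
Python sketch of that idea (the script name, the choice of SHA-256,
and the one-digest-per-file format are my own assumptions, not an
existing tool):

#!/usr/bin/env python3
# Sketch of the "shadow hash tree" idea described above.
#   shadow_hash.py create <backup_dir> <shadow_dir>   # record hashes
#   shadow_hash.py verify <backup_dir> <shadow_dir>   # compare later
import hashlib
import os
import sys

def file_sha256(path):
    # Hash in 1 MiB chunks so large backup files don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def create(backup_dir, shadow_dir):
    # Mirror the directory structure and store one digest per file,
    # under the same relative path in the shadow tree.
    for root, _dirs, files in os.walk(backup_dir):
        rel = os.path.relpath(root, backup_dir)
        os.makedirs(os.path.join(shadow_dir, rel), exist_ok=True)
        for name in files:
            digest = file_sha256(os.path.join(root, name))
            with open(os.path.join(shadow_dir, rel, name), "w") as out:
                out.write(digest + "\n")

def verify(backup_dir, shadow_dir):
    # Re-hash every file and compare against the stored digest.
    ok = True
    for root, _dirs, files in os.walk(backup_dir):
        rel = os.path.relpath(root, backup_dir)
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(os.path.join(shadow_dir, rel, name)) as f:
                    expected = f.read().strip()
            except FileNotFoundError:
                print("NO STORED HASH:", path)
                ok = False
                continue
            if file_sha256(path) != expected:
                print("MISMATCH:", path)
                ok = False
    return ok

if __name__ == "__main__":
    mode, backup, shadow = sys.argv[1:4]
    if mode == "create":
        create(backup, shadow)
    else:
        sys.exit(0 if verify(backup, shadow) else 1)

One caveat: a mismatch only tells you the file changed since the hash
was recorded; it can't distinguish bit rot from a legitimate
modification, so this is most useful on backups that are meant to be
immutable.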

Thanks,
Mike






