Re: CephFS and 32-bit Inode Numbers

This is not quite true. The number space of MD5 is much greater than 2³² (2¹²⁸, to be exact), and as long as you don't exhaust that number space, a collision remains roughly as unlikely as for any other input. There can be collisions, and the more data you have, i.e. the more addresses you use, the higher the probability becomes.
Security researchers have shown that it is possible to construct MD5 collisions deliberately, but an accidental collision is very rare.
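
To put rough numbers on that, here is a quick birthday-bound estimate in Python, using the standard approximation p ≈ 1 − exp(−n²/2N) for n inputs hashed into a space of size N:

    import math

    def p_collision(n, space_bits):
        # Birthday approximation; expm1 keeps precision when p is tiny.
        N = 2.0 ** space_bits
        return -math.expm1(-(n * n) / (2.0 * N))

    # MD5's output space is 128 bits wide. Even with 2^40 (about a
    # trillion) addresses in use, an accidental collision stays
    # astronomically unlikely.
    for n_bits in (32, 40, 64):
        print(f"2^{n_bits} inputs into 2^128: p ~ {p_collision(2 ** n_bits, 128):.3e}")

Only when the number of inputs approaches √N, i.e. around 2⁶⁴ for MD5, does the probability become noticeable.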

I cannot give you an estimate of the consequences of a collision, though. It depends on what data is stored at that address and how programs, the OS, and even Ceph deal with it. I would suspect Ceph would find a checksum mismatch upon scrubbing. But I don't know how, or whether, Ceph could correct this: the two addresses with the same MD5 sum hold equally valid copies, and I think in such a case it is undecidable which data is correct.

Greetings
Ingo

----- Original Message -----
From: "Nathan Fish" <lordcirth@xxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxx>
Sent: Tuesday, October 15, 2019 19:40:05
Subject: Re: CephFS and 32-bit Inode Numbers

I'm not sure exactly what would happen on an inode collision, but I'm
guessing Bad Things. If my math is correct, a 2^32 inode space should
see its first collision after roughly 2^16 entries (the birthday
bound). As that's only 65536 files, that's not safe at all.
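
(That 2^16 figure is just the birthday bound, √(2^32). A quick check of the numbers in Python:

    import math

    SPACE = 2 ** 32  # 32-bit inode space

    def p_collision(n, N=SPACE):
        # Birthday approximation; expm1 keeps precision for small p.
        return -math.expm1(-n * (n - 1) / (2.0 * N))

    for n in (2 ** 12, 2 ** 14, 2 ** 16, 2 ** 17):
        print(f"{n:>6} files: p(collision) ~ {p_collision(n):.4f}")
    # 4096: ~0.0020, 16384: ~0.0308, 65536: ~0.3935, 131072: ~0.8647

So a collision is already more likely than not somewhere around 100k files, and practically guaranteed well before a million.)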

On Mon, Oct 14, 2019 at 8:14 AM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> OK, I found that the kernel has an "ino32" mount option which hashes
> 64-bit inos into the 32-bit space.
> Has anyone tried this?
> What happens if two files collide?
>
> -- Dan
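
My understanding is that the ino32 fold is very simple, roughly an XOR of the upper and lower 32 bits of the inode number; I haven't checked this against the current kernel source, so take the Python sketch below as illustrative only:

    def ino_to_ino32(vino):
        # XOR-fold a 64-bit inode number into 32 bits. Mapping a
        # result of 0 to something else is a guess, to avoid handing
        # out a reserved inode number.
        ino = (vino & 0xffffffff) ^ (vino >> 32)
        return ino if ino else 2

    # Dan's example inode, managed by mds.3:
    print(hex(ino_to_ino32(0x40008445FB3)))  # -> 0x8445bb3

Two files then collide exactly when their folded values coincide, which is the 2^32 birthday situation from above.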
>
> On Mon, Oct 14, 2019 at 1:18 PM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> >
> > Hi all,
> >
> > One of our users has some 32-bit commercial software that they want to
> > use with CephFS, but it's not working because our inode numbers are
> > too large. E.g. his application gets a "file too big" error trying to
> > stat inode 0x40008445FB3.
> >
> > I'm aware that CephFS offsets the inode numbers by (mds_rank + 1) *
> > 2^40; in the case above the file is managed by mds.3.
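
(That arithmetic checks out against the example inode; a quick sketch, assuming the (mds_rank + 1) * 2^40 offset just described:

    ino = 0x40008445FB3
    rank = (ino >> 40) - 1        # top bits encode mds_rank + 1
    base = ino & ((1 << 40) - 1)  # inode number with the offset removed
    print(rank, hex(base))        # -> 3 0x8445fb3

so the file is indeed on mds.3.)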
> >
> > Did anyone see this same issue and find a workaround? (I read that
> > GlusterFS has an enable-ino32 client option -- does CephFS have
> > something like that planned?)
> >
> > Thanks!
> >
> > Dan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx