Re: Using RBD to pack billions of small files

Hi,

On 2/2/21 9:32 PM, Loïc Dachary wrote:
Hi Greg,

On 02/02/2021 20:34, Gregory Farnum wrote:


*snipsnap*

Right. Dan's comment gave me pause: it does not seem to be
a good idea to assume an RBD image of infinite size. A friend who read this
thread suggested a sensible approach (also in line with the
Haystack paper): instead of making a single gigantic image, make
multiple 1TB images. The index entries become larger

SHA256 sum of the artifact => name/uuid of the 1TB image,offset,size

instead of

SHA256 sum of the artifact  => offset,size
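As a minimal sketch of the two index layouts above (the image name, offset, and artifact hash here are made-up values for illustration):

```python
from typing import NamedTuple

class Location(NamedTuple):
    """Where an artifact lives: which 1TB image, and where inside it."""
    image: str   # name/uuid of the RBD image
    offset: int  # byte offset within that image
    size: int    # artifact length in bytes

# Hypothetical index: SHA-256 hex digest -> location.
# With a single gigantic image, the image field could be dropped
# and the entry would shrink to just (offset, size).
index: dict[str, Location] = {}

sha = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
index[sha] = Location(image="img-0007", offset=4096, size=1234)
```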


Just my 2 cents:

You could use the first byte of the SHA-256 sum to select the image, e.g. with a fixed set of 256 images. Or use some more flexible approach, similar to the way FileStore used to store RADOS objects.
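The fixed-fanout variant could look like this (the "artifacts-XX" naming is a hypothetical convention, not anything Ceph-specific):

```python
import hashlib

NUM_IMAGES = 256  # fixed shard count: one RBD image per first byte

def image_for_artifact(data: bytes) -> str:
    """Pick the RBD image holding this artifact from the first byte
    of its SHA-256 digest, so placement needs no extra lookup."""
    digest = hashlib.sha256(data).hexdigest()
    shard = int(digest[:2], 16)  # first byte -> 0..255
    return f"artifacts-{shard:02x}"

# Every artifact maps deterministically to one of 256 images:
print(image_for_artifact(b"hello"))  # -> "artifacts-2c"
```

Since SHA-256 output is uniformly distributed, each image receives roughly 1/256 of the artifacts, so the images fill at about the same rate.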


Regards,

Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



