Re: Using RBD to pack billions of small files

Hi Loïc,

We've never managed 100TB+ in a single RBD volume. I can't think of any
hard limitation, but perhaps there are unknown issues when images get that
big. It should be easy enough to use rbd bench to create and fill a
massive test image to validate that everything works well at that size.
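
Something along these lines ought to do it (the pool/image names are made
up, and the sizes are just for illustration):

    # thin-provisioned test image well past 100TB
    rbd create --size 150T rbd/bigtest

    # scatter random 4M writes across the whole image to exercise the
    # far offsets without having to fill all 150TB
    rbd bench --io-type write --io-size 4M --io-pattern rand \
        --io-threads 16 --io-total 1T rbd/bigtest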

Also, I assume you'll be doing the I/O from just one client? Multiple
readers/writers on a single volume could get complicated.

Otherwise, yes, RBD sounds very convenient for what you need.
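
For what it's worth, the append-and-index scheme you describe could be a
very thin layer. A rough sketch, assuming the image is mapped with rbd
map, and where the pool/image names, the tail-offset and index files, and
$artifact_id are all placeholders:

    # map the volume once (pool/image names are placeholders)
    rbd map rbd/artifacts                 # exposes e.g. /dev/rbd0

    # append one artifact at the current tail, then record offset/size
    offset=$(cat tail.offset)             # persisted tail position (assumed)
    size=$(stat -c %s artifact.bin)
    dd if=artifact.bin of=/dev/rbd0 bs=4M seek="$offset" \
       oflag=seek_bytes conv=notrunc,fdatasync
    echo "$artifact_id $offset $size" >> index.txt
    echo $((offset + size)) > tail.offset

Batching many artifacts per write would obviously be much kinder to the
cluster than one dd per ~3KB object.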

Cheers, Dan


On Sat, Jan 30, 2021, 4:01 PM Loïc Dachary <loic@xxxxxxxxxxx> wrote:

> Bonjour,
>
> In the context of Software Heritage (a noble mission to preserve all
> source code)[0], artifacts have an average size of ~3KB and there are
> billions of them. They never change and are never deleted. To save space,
> it would make sense to write them, one after the other, to an ever-growing
> RBD volume (more than 100TB). An index, located somewhere else, would
> record the offset and size of each artifact in the volume.
>
> I wonder if someone has already implemented this idea with success? And
> if not... does anyone see a reason why it would be a bad idea?
>
> Cheers
>
> [0] https://docs.softwareheritage.org/
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



