On 2/2/25 10:39 PM, Ric Wheeler wrote:
I have always been super interested in how far we can push the
scalability limits of file systems. For the workloads we need to
support, we have to scale up to absolutely ridiculously large numbers
of files (a few billion files doesn't meet the needs of the largest
customers we support).
Zach Brown is leading a new project called ngnfs (his FOSDEM talk this
year gave a good background on it -
https://www.fosdem.org/2025/schedule/speaker/zach_brown/). We are
looking at taking advantage of modern low-latency NVMe devices and
today's networks to implement a distributed file system that provides
the concurrency that high object counts need, while still delivering
the bandwidth needed to feed the backend archival systems.
As a topic, ngnfs would go into the coherence design (and code) that
underpins the increased concurrency it aims to deliver.
The project is clearly in its early days compared to most of the
proposed content, but it can be useful to spend some of the time on
new ideas.
Just adding that all of this work is GPL'ed and we aspire to get it
upstream.
This is planned to be a core part of future shipping products, so we
intend to fully maintain it going forward.