[LSF/MM/BPF TOPIC] Design challenges for a new file system that needs to support multiple billions of files
- To: lsf-pc@xxxxxxxxxxxxxxxxxxxxxxxxxx, linux-fsdevel@xxxxxxxxxxxxxxx
- Subject: [LSF/MM/BPF TOPIC] Design challenges for a new file system that needs to support multiple billions of files
- From: Ric Wheeler <ricwheeler@xxxxxxxxx>
- Date: Sun, 2 Feb 2025 22:39:57 +0100
- Cc: Zach Brown <zab@xxxxxxxxx>
- User-agent: Mozilla Thunderbird
I have always been deeply interested in how far we can push the
scalability limits of file systems. For the workloads we need to
support, we have to scale up to absolutely enormous numbers of files:
a few billion files does not meet the needs of the largest customers
we support.
Zach Brown is leading a new project, ngnfs (his FOSDEM talk this year
gives good background -
https://www.fosdem.org/2025/schedule/speaker/zach_brown/). We are
looking at taking advantage of modern low-latency NVMe devices and
today's networks to implement a distributed file system that provides
the concurrency high object counts require, while still delivering the
bandwidth needed to feed the backend archival systems we support.
As a topic, ngnfs would go into the coherence design (and code) that
underpins the increased concurrency it aims to deliver.
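To make the coherence problem concrete, here is a minimal sketch of the
kind of block-granular, MSI-style protocol a distributed file system
has to get right before it can scale concurrency: readers can share a
block, but a writer must first invalidate every other cached copy. This
is purely my own illustration (all names are hypothetical), not the
actual ngnfs design.

```python
from enum import Enum

class State(Enum):
    INVALID = 0   # client holds no valid copy
    SHARED = 1    # read-only copy; many clients may hold one
    MODIFIED = 2  # writable copy; at most one client may hold it

class Block:
    """Server-side view of one block's coherence state across clients."""
    def __init__(self):
        self.holders = {}  # client id -> State

    def read(self, client):
        # Downgrade any current writer so the reader sees current data
        # (in a real protocol this implies a writeback first).
        for c, s in self.holders.items():
            if s is State.MODIFIED:
                self.holders[c] = State.SHARED
        self.holders[client] = State.SHARED

    def write(self, client):
        # Invalidate every other copy before granting exclusive access.
        for c in list(self.holders):
            if c != client:
                self.holders[c] = State.INVALID
        self.holders[client] = State.MODIFIED

b = Block()
b.read("client-a")
b.read("client-b")
b.write("client-a")   # client-b's cached copy must be invalidated
```

The hard part at billions of files is doing these invalidations and
downgrades over the network at low latency without serializing all
metadata traffic through one lock manager.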
Clearly the project is in its early days compared to most of the
proposed content, but it can be useful to spend some of the time on new
ideas.