On Mon, Feb 03, 2025 at 05:18:48PM +0100, Ric Wheeler wrote:
>
> On 2/3/25 4:22 PM, Amir Goldstein wrote:
> > On Sun, Feb 2, 2025 at 10:40 PM Ric Wheeler <ricwheeler@xxxxxxxxx> wrote:
> > >
> > > I have always been super interested in how much we can push the
> > > scalability limits of file systems and for the workloads we need to
> > > support, we need to scale up to supporting absolutely ridiculously large
> > > numbers of files (a few billion files doesn't meet the need of the
> > > largest customers we support).
> > >
> > Hi Ric,
> >
> > Since LSFMM is not about presentations, it would be better if the topic to
> > discuss was trying to address specific technical questions that developers
> > could discuss.
>
> Totally agree - from the ancient history of LSF (before MM or BPF!) we also
> pushed for discussions over talks.
>
> > If a topic cannot generate a discussion on the list, it is not very
> > likely that it will generate a discussion on-prem.
> >
> > Where does the scaling with the number of files in a filesystem affect
> > existing filesystems? What are the limitations that you need to overcome?
>
> Local file systems like xfs running on "scale up" giant systems (think of
> the old super sized HP Superdomes and the like) would be likely to handle
> this well.

We don't need "Big Iron" hardware to scale up to tens of billions of
files in a single filesystem these days. A cheap server with 32p and a
couple of hundred GB of RAM and a few NVMe SSDs is all that is really
needed.

We recently had an XFS user report over 16 billion files in a
relatively small filesystem (a few tens of TB), most of which were
reflink copied files (backup/archival storage farm).

So, yeah, large file counts (i.e. tens of billions) in production
systems aren't a big deal these days.

There shouldn't be any specific issues at the OS/VFS layers supporting
filesystems with inode counts in the billions - most of the problems
with this are internal filesystem implementation issues.

If there are any specific VFS level scalability issues you've come
across, I'm all ears...

-Dave.

--
Dave Chinner
david@xxxxxxxxxxxxx
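[Editor's aside, not part of the thread: the two mechanisms the reply leans on - reflink copies that share data extents instead of duplicating them, and VFS-level per-filesystem inode accounting in 64-bit counters - can be illustrated with a short sketch. The program below is an assumption-laden example, not anything from the thread: the file names are hypothetical, and FICLONE only succeeds on a filesystem with reflink support (e.g. XFS made with reflink=1). It clones a file via the FICLONE ioctl and then prints the filesystem's inode counters via fstatfs(2), which are wide enough to report counts in the billions.]

/*
 * Sketch: reflink-copy <src> to <dest> and report inode counters.
 * Hypothetical example for illustration only.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/vfs.h>
#include <linux/fs.h>		/* FICLONE */

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <src> <dest>\n", argv[0]);
		return 1;
	}

	int src = open(argv[1], O_RDONLY);
	int dst = open(argv[2], O_WRONLY | O_CREAT | O_EXCL, 0644);
	if (src < 0 || dst < 0) {
		perror("open");
		return 1;
	}

	/* Share the source's data extents rather than copying them. */
	if (ioctl(dst, FICLONE, src) < 0) {
		perror("FICLONE");
		return 1;
	}

	/* f_files/f_ffree are 64-bit, so billions of inodes fit fine. */
	struct statfs sfs;
	if (fstatfs(dst, &sfs) == 0)
		printf("inodes: %llu total, %llu free\n",
		       (unsigned long long)sfs.f_files,
		       (unsigned long long)sfs.f_ffree);

	close(src);
	close(dst);
	return 0;
}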