John Madden wrote:
> Out of curiosity, how good is zfs with full fs scans when running in
> the 100-million file count range? What do you see in terms of
> aggregate MB/s throughput?

I'm not sure exactly what you mean by "full fs scan", and I haven't
tested anything very large. Since the design allows up to 2^48 files
PER DIRECTORY and 2^78 bytes per pool, I hope they have thought through
performance at very large scale, but I don't recall seeing benchmarks
aimed specifically at very large numbers of small files.

The design uses hundreds of metaslabs per device rather than bitmaps or
b-trees, so it's quite different from what old admins like me were used
to. The first thing you notice is that "how many inodes do I need" is
no longer a question you have to answer at filesystem creation time: as
long as the pool has space, you can create more files.

Our performance is very good with backends of up to 10K users, and
zpool scrub is about the only thing I can run that pushes the iostat
numbers up to 99. I don't notice any performance degradation while a
scrub is running.

FWIW our pools and systems are fairly idle, but maybe this helps:

# zpool iostat
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ms11         323G   757G     51     52   626K   400K
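
If you want to see the scrub effect for yourself, watching something
like this while a scrub runs should show it (pool name taken from the
output above; on Solaris the per-disk %b "busy" column is the one to
watch):

# zpool scrub ms11
# zpool status ms11        (scrub progress and any errors found)
# iostat -xn 5             (%b on the pool's disks is what climbs)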
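
If by "full fs scan" you mean reading every file, a crude way to get
an aggregate MB/s figure would be something like the sketch below. The
/ms11/spool path is only an example of where a spool might live:

# zfs list -o name,used ms11/spool   (how much data the walk will read)
# time find /ms11/spool -type f | xargs cat > /dev/null
# zpool iostat ms11 5                (in another window, to watch bandwidth)

Divide the dataset's USED bytes by the elapsed time for a rough
throughput number. Cyrus spool file names contain no whitespace, so
plain xargs is safe here.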
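
On the inode point, the contrast with the old UFS world is roughly this
(device names below are made up):

# newfs -i 2048 /dev/rdsk/c0t2d0s6     (UFS: bytes-per-inode fixed at newfs time)
# zpool create ms11 mirror c0t2d0 c0t3d0
# zfs create ms11/spool                (no inode count anywhere to get wrong)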