On Wed, Dec 06, 2023 at 09:34:49PM +1100, Dave Chinner wrote:
> Largely they were performance problems - unpredictable IO latency
> and CPU overhead for IO meant applications would randomly miss SLAs.
> The application would see IO suddenly lose all concurrency, go real
> slow and/or burn lots more CPU when the inode switched to buffered
> mode.
>
> I'm not sure that's a particularly viable model given the raw IO
> throughput of even cheap modern SSDs largely exceeds the capability
> of buffered IO through the page cache. The differences in
> concurrency, latency and throughput between buffered and DIO modes
> will be even more stark today than they were 20 years ago....

The question is what's worse: random performance drops or random
corruption.  I suspect the former is less bad, especially if we have
good tracepoints to pin it down.
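
To be concrete about what "good tracepoints" means here: something
along the lines of the sketch below, fired at the point where a DIO
write falls back to buffered mode, so the latency cliff can be tied
back to a specific inode and a reason code. (The tracepoint name,
reason argument and placement are made up for illustration, not an
existing tracepoint.)

	/* Hypothetical tracepoint for a DIO -> buffered fallback;
	 * would live alongside the existing fs trace event headers. */
	TRACE_EVENT(dio_buffered_fallback,
		TP_PROTO(struct inode *inode, int reason),
		TP_ARGS(inode, reason),

		TP_STRUCT__entry(
			__field(dev_t,		dev)
			__field(unsigned long,	ino)
			__field(int,		reason)
		),

		TP_fast_assign(
			__entry->dev	= inode->i_sb->s_dev;
			__entry->ino	= inode->i_ino;
			__entry->reason	= reason;
		),

		TP_printk("dev %d:%d ino 0x%lx fallback reason %d",
			  MAJOR(__entry->dev), MINOR(__entry->dev),
			  __entry->ino, __entry->reason)
	);

With something like that in place, an admin chasing a missed SLA could
run e.g. "trace-cmd record -e dio_buffered_fallback" and see exactly
which inodes flipped modes and why, rather than guessing from the
latency numbers alone.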