On Thu, Jul 20, 2023 at 09:55:22AM +0200, Martin Steigerwald wrote:
> I thought that nowadays a cache flush would be (almost) a no-op in the
> case the storage receiving it is backed by such reliability measures.
> I.e. that the hardware just says "I am ready" when having the I/O
> request in stable storage whatever that would be, even in case that
> would be battery backed NVRAM and/or temporary flash.

That *can* be true if the storage subsystem has the reliability
measures.  For example, if you have a $$$ EMC storage array, then sure,
it has an internal UPS backup and it will know that it can ignore that
CACHE FLUSH request.

However, if you are *building* a storage system, the storage device
might be an HDD which has no idea that it doesn't need to worry about
power drops.  Consider, if you will, a rack of servers, each with a
dozen or more HDDs.  There is a rack-level battery backup, and the rack
is located in a data center with diesel generators with enough fuel
supply to keep the entire data center, plus cooling, going for days.

The rack of servers is part of a cluster file system.  So when a file
write to the cluster file system is performed, the cluster file system
will pick three servers, each in a different rack, and each rack is in
a different power distribution domain.  That way, even if the
entry-level switch on the rack dies, or the Power Distribution Unit
(PDU) servicing a group of racks blows up, the data will still be
available on the other two servers.

> At least that is what I thought was the background for not doing the
> "nobarrier" thing anymore: Let the storage below decide whether it is
> safe to basically ignore cache flushes by answering them (almost)
> immediately.

The problem is that the storage below (e.g., the HDD) has no idea that
all of this redundancy exists.  Only the system administrator who is
configuring the file system will know.  And if you are running a
hyper-scale cloud system, this kind of custom-made system will be much,
MUCH cheaper than buying a huge number of $$$ EMC storage arrays.

Cheers,

					- Ted

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel
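
[Editor's illustrative note: the replica-placement scheme Ted sketches above -- three copies, each on a server in a different rack, with each rack in a different power distribution domain -- can be captured in a few lines of code. The following is a toy Python sketch, not the placement logic of any real cluster file system; the server names, rack labels, and PDU labels are all made up for illustration.]

```python
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Server:
    name: str
    rack: str          # e.g. "rack-a"
    power_domain: str  # e.g. "pdu-1" (PDU / power distribution domain)


def pick_replica_targets(servers, copies=3):
    """Pick `copies` servers so that no two share a rack or a power
    distribution domain.  Then the loss of a single rack switch or a
    single PDU can take out at most one copy of the data."""
    chosen = []
    used_racks = set()
    used_domains = set()
    # Shuffle so repeated writes spread across the cluster.
    for server in random.sample(servers, len(servers)):
        if server.rack in used_racks or server.power_domain in used_domains:
            continue
        chosen.append(server)
        used_racks.add(server.rack)
        used_domains.add(server.power_domain)
        if len(chosen) == copies:
            return chosen
    raise RuntimeError("not enough independent racks / power domains")


if __name__ == "__main__":
    cluster = [
        Server("node-a1", rack="rack-a", power_domain="pdu-1"),
        Server("node-a2", rack="rack-a", power_domain="pdu-1"),
        Server("node-b1", rack="rack-b", power_domain="pdu-2"),
        Server("node-c1", rack="rack-c", power_domain="pdu-3"),
    ]
    for s in pick_replica_targets(cluster):
        print(s.name, s.rack, s.power_domain)
```

The point of the sketch is the one Ted makes in the mail: the redundancy lives entirely in the placement policy of the cluster file system, which an individual HDD underneath it cannot know about -- only the administrator configuring the system can decide that flushes are safe to relax.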