Hi all,

The recent LWN article regarding issues with fsync and error reporting in the Linux kernel, and the potential for lost data, has prompted me to ask two questions:

1. Is this issue low-level enough that it affects all of the sync methods potentially supported on Linux? For example, if you were concerned about this issue and you had a filesystem which supports open_sync or open_datasync etc., is switching to one of those options worth considering, or is the issue low-level enough that all sync methods are equally affected?

2. If running with xfs as the filesystem, is there a preferred sync method, or is this something which really needs to be benchmarked to make a decision (see the P.S. below for what I have in mind)?

For background, one of our databases is large: approximately 7TB, with some tables holding large numbers of records and seeing heavy insert traffic, i.e. approximately 1,600,000,000 new records added per day and a similar number deleted (no updates), maintaining a table size of about 3TB. We expect to increase the number of retained records, which will see the table grow to about 6TB. This represents a fair amount of I/O, and we want to ensure we have the fastest I/O we can achieve with the highest data reliability we can get. The columns in the table are small: 7 double precision, 2 integer, 1 date and 2 timestamp.

Platform is RHEL, Postgres 9.6.8, filesystem xfs backed by an HP SAN. Current wal_sync_method is fsync.

Tim

--
Tim Cross
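
P.S. On the benchmarking in question 2: my working assumption is that the way to compare candidates is pg_test_fsync (shipped with PostgreSQL), run against a file on the same xfs volume that holds pg_xlog. A minimal sketch of what I am planning to run; the path below is a placeholder, not our actual WAL mount:

    # run each sync-method test for 5 seconds, against a file on the
    # same xfs volume that holds pg_xlog (placeholder path below)
    pg_test_fsync -s 5 -f /data/pg_xlog/fsync_test.out

Whichever method comes out fastest, and is reported as reliable on this platform, would then go into postgresql.conf, e.g.:

    # placeholder value - whichever method the benchmark favours
    wal_sync_method = open_datasync

If that is not a sound approach for deciding this, corrections welcome.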