On Mon, May 2, 2016 at 8:18 AM, Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
> Dave Chinner <david@xxxxxxxxxxxxx> writes:
[..]
>> We need some form of redundancy and correction in the PMEM stack to
>> prevent single sector errors from taking down services until an
>> administrator can correct the problem. I'm trying to understand
>> where this is supposed to fit into the picture - at this point I
>> really don't think userspace applications are going to be able to do
>> this reliably....
>
> Not all storage is configured into a RAID volume, and in some instances,
> the application is better positioned to recover the data (gluster/ceph,
> for example). It really comes down to whether applications or libraries
> will want to implement redundancy themselves in order to get a bump in
> performance by not going through the kernel. And I think I know what
> your opinion is on that front. :-)
>
> Speaking of which, did you see the numbers Dan shared at LSF on how much
> overhead there is in calling into the kernel for syncing? Dan, can/did
> you publish that spreadsheet somewhere?

Here it is:

https://docs.google.com/spreadsheets/d/1pwr9psy6vtB9DOsc2bUdXevJRz5Guf6laZ4DaZlkhoo/edit?usp=sharing

On the "Filtered" tab I have some of the comparisons where:

noop => don't call msync and don't flush caches in userspace

persist => cache flushing only in userspace and only on individual
cache lines

persist_4k => cache flushing only in userspace, but flushing is
performed in 4K aligned units

msync => same granularity flushing as the 'persist' case, but the
kernel internally promotes this to a 4K sized / aligned flush

msync_0 => synthetic case where msync() returns immediately and does
no other work

The takeaway is that msync() is 9-10x slower than userspace cache
management.

Let me know if there are any questions and I can add an NVML developer
to this thread...

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs