Hi Ted,

You said:

> ...any advantage of decoupling the front/back end
> is nullified, since fsync(2) requires a temporal coupling

After pondering it for a while, I realized that is not completely
accurate. The reduced delete latency will let the dbench process reach
the fsync point faster; then, if our fsync is reasonably efficient (not
the case today, but planned), we may still see an overall speedup.

> if there are any delays introduced between when the
> front-end sends the fsync request, and when the back-
> end finishes writing the data and then communicates
> this back to the front-end --- i.e., caused by scheduler
> latencies, this may end up being a disadvantage
> compared to more traditional file system designs.

Nothing stops our frontend from calling its backend synchronously,
which is exactly what we intend to do for fsync.

The real design issue for Tux3 fsync is writing out the minimal set of
blocks needed to update a single file. As it stands, Tux3 commits all
dirty file data at each delta, which is fine for many common loads, but
not all. Two examples of loads where this may be less than optimal:

  1) fsync (as you say)
  2) multiple tasks accessing different files

To excel under those loads, Tux3 needs to be able to break its "always
commit everything" rule in an organized way. We have considered several
design options for this, but have not yet prototyped any because we
feel that work can reasonably be attacked later. As always, we will
seek the most rugged, efficient and simple solution.

Regards,

Daniel