> BTW, I also compared the IO-less patchset and the vanilla kernel's
> JBOD performance. Basically, the performance is slightly improved
> under large memory, and reduced a lot on small-memory servers.
>
>          vanilla     IO-less
> --------------------------------------------------------------------------------
[...]
>         26508063    17706200  -33.2%  JBOD-10HDD-thresh=100M/xfs-100dd-1M-16p-5895M-100M
>         23767810    23374918   -1.7%  JBOD-10HDD-thresh=100M/xfs-10dd-1M-16p-5895M-100M
>         28032891    20659278  -26.3%  JBOD-10HDD-thresh=100M/xfs-1dd-1M-16p-5895M-100M
>         26049973    22517497  -13.6%  JBOD-10HDD-thresh=100M/xfs-2dd-1M-16p-5895M-100M
>
> There are still some itches in JBOD..

OK, in the dirty_bytes=100M case, I find that the bdi threshold _and_
the writeout bandwidth may drop close to 0 for long periods. This
change may avoid one bdi getting stuck:

	/*
	 * bdi reserve area, safeguard against dirty pool underrun and disk idle
	 *
	 * It may push the desired control point of global dirty pages higher
	 * than setpoint. It's not necessary in single-bdi case because a
	 * minimal pool of @freerun dirty pages will already be guaranteed.
	 */
-	x_intercept = min(write_bw, freerun);
+	x_intercept = min(write_bw + MIN_WRITEBACK_PAGES, freerun);
	if (bdi_dirty < x_intercept) {
		if (bdi_dirty > x_intercept / 8) {
			pos_ratio *= x_intercept;
			do_div(pos_ratio, bdi_dirty);
		} else
			pos_ratio *= 8;
	}
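For illustration, here is a minimal userspace sketch of the reserve
area scaling above, using the patched x_intercept. PRATIO_SCALE,
RESERVE_PAGES (standing in for MIN_WRITEBACK_PAGES), min_u64() and
reserve_area_pos_ratio() are my own illustrative names and values,
not the kernel's:

	#include <stdio.h>
	#include <stdint.h>

	#define PRATIO_SCALE	1024	/* assumed fixed-point unit for pos_ratio */
	#define RESERVE_PAGES	1024	/* stand-in for MIN_WRITEBACK_PAGES */

	static uint64_t min_u64(uint64_t a, uint64_t b)
	{
		return a < b ? a : b;
	}

	/*
	 * Boost pos_ratio when the bdi dirty pool runs low: below
	 * x_intercept the ratio grows as x_intercept / bdi_dirty,
	 * saturating at 8x once bdi_dirty falls to x_intercept / 8.
	 */
	static uint64_t reserve_area_pos_ratio(uint64_t pos_ratio,
					       uint64_t bdi_dirty,
					       uint64_t write_bw,
					       uint64_t freerun)
	{
		/* patched intercept: keep a floor even when write_bw ~= 0 */
		uint64_t x_intercept = min_u64(write_bw + RESERVE_PAGES, freerun);

		if (bdi_dirty < x_intercept) {
			if (bdi_dirty > x_intercept / 8) {
				pos_ratio *= x_intercept;
				pos_ratio /= bdi_dirty;	/* do_div() in the kernel */
			} else
				pos_ratio *= 8;
		}
		return pos_ratio;
	}

	int main(void)
	{
		/* write_bw collapsed to 0: with the old min(write_bw, freerun),
		 * x_intercept would be 0 and the boost would never fire */
		uint64_t freerun = 4096, write_bw = 0, bdi_dirty;

		for (bdi_dirty = 32; bdi_dirty <= 2048; bdi_dirty *= 4)
			printf("bdi_dirty=%4llu  pos_ratio=%llu/%d\n",
			       (unsigned long long)bdi_dirty,
			       (unsigned long long)reserve_area_pos_ratio(
					PRATIO_SCALE, bdi_dirty,
					write_bw, freerun),
			       PRATIO_SCALE);
		return 0;
	}

Running it shows pos_ratio climbing from 1x (bdi_dirty above
x_intercept) up to the 8x cap as bdi_dirty shrinks, instead of staying
pinned at 1x as it would when x_intercept collapses to 0.

Thanks,
Fengguang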