Hi Martin,

On Fri, 2012-12-28 at 15:53 +0100, Martin Svec wrote:
> Sequential scan of rd_dev->sg_table_array in rd_get_sg_table is
> a serious I/O performance bottleneck for large rd LUNs. Fix this
> by computing the sg_table index directly from the page offset, because
> all sg_tables (except the last one) have the same number of pages.
>
> Tested with a 90 GiB rd_mcp LUN, where the patch improved maximal
> random R/W IOPS by more than 100-150%, depending on the actual
> hardware and SAN setup.
>
> Signed-off-by: Martin Svec <martin.svec@xxxxxxxx>
> ---

Apologies for taking so long to get back to this..

Applied to target-pending/for-next, but note for future reference that I
ended up having to apply this manually, as your original patch appears to
be whitespace-mangled.

Aside from that minor bit, nice work on this optimization. ;)

--nab

>  drivers/target/target_core_rd.c |    6 +++++-
>  1 files changed, 5 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
> index d00bbe3..549633b 100644
> --- a/drivers/target/target_core_rd.c
> +++ b/drivers/target/target_core_rd.c
> @@ -271,7 +271,11 @@ static struct rd_dev_sg_table *rd_get_sg_table(struct rd_dev *rd_dev, u32 page)
>  	u32 i;
>  	struct rd_dev_sg_table *sg_table;
>
> -	for (i = 0; i < rd_dev->sg_table_count; i++) {
> +	u32 sg_per_table = (RD_MAX_ALLOCATION_SIZE /
> +				sizeof(struct scatterlist));
> +
> +	i = page / sg_per_table;
> +	if (i < rd_dev->sg_table_count) {
>  		sg_table = &rd_dev->sg_table_array[i];
>  		if ((sg_table->page_start_offset <= page) &&
>  		    (sg_table->page_end_offset >= page))
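
For reference, below is a minimal standalone userspace sketch of the
before/after lookup logic, in case it helps readers following along outside
the kernel tree. The mock_sg_table struct, the MOCK_SG_PER_TABLE constant,
and the values in main() are simplified stand-ins for illustration only,
not the actual target_core_rd definitions; the real divisor is
RD_MAX_ALLOCATION_SIZE / sizeof(struct scatterlist) as in the patch above.

/* Sketch only: compares the old O(n) scan with the new O(1) index lookup. */
#include <stdio.h>
#include <stdint.h>

/* Stand-in for RD_MAX_ALLOCATION_SIZE / sizeof(struct scatterlist). */
#define MOCK_SG_PER_TABLE 4096

struct mock_sg_table {
	uint32_t page_start_offset;
	uint32_t page_end_offset;
};

/* Old behavior: walk every table until the page falls inside its range. */
static struct mock_sg_table *lookup_linear(struct mock_sg_table *tables,
					   uint32_t count, uint32_t page)
{
	for (uint32_t i = 0; i < count; i++) {
		if (tables[i].page_start_offset <= page &&
		    tables[i].page_end_offset >= page)
			return &tables[i];
	}
	return NULL;
}

/*
 * New behavior: compute the index directly. This is valid because every
 * table except possibly the last covers exactly MOCK_SG_PER_TABLE pages;
 * the range check still guards the (possibly shorter) last table and any
 * out-of-range page.
 */
static struct mock_sg_table *lookup_direct(struct mock_sg_table *tables,
					   uint32_t count, uint32_t page)
{
	uint32_t i = page / MOCK_SG_PER_TABLE;

	if (i < count &&
	    tables[i].page_start_offset <= page &&
	    tables[i].page_end_offset >= page)
		return &tables[i];
	return NULL;
}

int main(void)
{
	/* Build a small mock array: 4 tables of MOCK_SG_PER_TABLE pages each. */
	struct mock_sg_table tables[4];

	for (uint32_t i = 0; i < 4; i++) {
		tables[i].page_start_offset = i * MOCK_SG_PER_TABLE;
		tables[i].page_end_offset = (i + 1) * MOCK_SG_PER_TABLE - 1;
	}

	uint32_t page = 3 * MOCK_SG_PER_TABLE + 17;

	printf("linear lookup -> table %ld\n",
	       (long)(lookup_linear(tables, 4, page) - tables));
	printf("direct lookup -> table %ld\n",
	       (long)(lookup_direct(tables, 4, page) - tables));
	return 0;
}

Both lookups return the same table; the direct version just gets there
without scanning the preceding entries, which is where the IOPS win on
large LUNs comes from.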