At Sun, 10 Nov 2013 17:49:07 +0900, Hitoshi Mitake wrote:
>
> From: Hitoshi Mitake <mitake.hitoshi@xxxxxxxxxxxxx>
>
> Currently, tgtd sends and receives iSCSI PDUs in its main event
> loop. This design can become a bottleneck when many iSCSI clients
> connect to a single tgtd process. For example, multiple tgtd
> processes are needed to make full use of a fast network like 10 GbE,
> because a typical single processor core isn't fast enough to process
> that many requests.
>
> This patch lets tgtd send/receive iSCSI PDUs and check digests in its
> worker threads. With this patch applied, the bottleneck in the main
> event loop is removed and performance improves.
>
> The improvement can be seen even when tgtd and the iSCSI initiator
> are running on a single host. Below is a snippet of fio results on my
> laptop. The workload is 128MB random RW. The backing store is
> sheepdog.
>
> Original tgtd:
>   read : io=65392KB, bw=4445.2KB/s, iops=1111, runt= 14711msec
>   write: io=65680KB, bw=4464.8KB/s, iops=1116, runt= 14711msec
>
> tgtd with this patch:
>   read : io=65392KB, bw=5098.9KB/s, iops=1274, runt= 12825msec
>   write: io=65680KB, bw=5121.3KB/s, iops=1280, runt= 12825msec
>
> This change will be even more effective as the number of iSCSI
> clients increases. I'd like to hear your comments on this change.
>
> Signed-off-by: Hitoshi Mitake <mitake.hitoshi@xxxxxxxxxxxxx>
> ---
>
> v2:
>  - correct handling of connection closing based on a reference count
>    of an iSCSI connection
>  - a silly bug in iscsi_tcp_init() introduced in the previous patch
>    is removed

Ping? Could someone review this patch?

Thanks,
Hitoshi
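For reviewers who want the gist of the design without reading the whole
patch, below is a minimal sketch of the offload pattern described above:
the main event loop hands a readable connection to a worker thread, which
does the PDU receive and digest check, with a reference count keeping the
connection alive until the worker is done. This is illustrative only; the
struct fields and function names are made up and are not tgt's actual code.

    /*
     * Illustrative sketch only, not tgt code: main event loop enqueues
     * ready connections, worker threads do the PDU rx + digest check.
     * Build with: cc -pthread sketch.c
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct conn {
            int fd;        /* connection socket (hypothetical layout) */
            int refcount;  /* keeps conn alive while a worker owns it */
    };

    struct work {
            struct conn *conn;
            struct work *next;
    };

    static struct work *queue_head;
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;

    /* called from the main event loop when epoll reports fd readable */
    void queue_rx_work(struct conn *c)
    {
            struct work *w = malloc(sizeof(*w));
            if (!w)
                    return;
            w->conn = c;

            pthread_mutex_lock(&queue_lock);
            c->refcount++;            /* must not be freed under the worker */
            w->next = queue_head;
            queue_head = w;
            pthread_cond_signal(&queue_cond);
            pthread_mutex_unlock(&queue_lock);
    }

    /* placeholder for the real PDU receive and digest verification */
    static void rx_pdu_and_check_digest(struct conn *c)
    {
            printf("processing PDU on fd %d\n", c->fd);
    }

    static void *worker_fn(void *arg)
    {
            (void)arg;
            for (;;) {
                    pthread_mutex_lock(&queue_lock);
                    while (!queue_head)
                            pthread_cond_wait(&queue_cond, &queue_lock);
                    struct work *w = queue_head;
                    queue_head = w->next;
                    pthread_mutex_unlock(&queue_lock);

                    rx_pdu_and_check_digest(w->conn);

                    pthread_mutex_lock(&queue_lock);
                    w->conn->refcount--;  /* main loop may now close conn */
                    pthread_mutex_unlock(&queue_lock);
                    free(w);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t workers[4];
            for (int i = 0; i < 4; i++)
                    pthread_create(&workers[i], NULL, worker_fn, NULL);
            /* ... main event loop elided: poll fds, call queue_rx_work()
             *     on readable connections ... */
            for (int i = 0; i < 4; i++)
                    pthread_join(workers[i], NULL);
            return 0;
    }

The point of the refcount is the v2 change noted above: the main loop must
not tear down a connection while a worker thread is still processing a PDU
that belongs to it.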