[PATCH] Fix memory leak in async readahead (ceph-client/master)

While testing read performance improvements on 1GbE and 10GbE clients, I
discovered a memory leak in the async readahead code.  A trivial fix
follows.

In case it is of interest: using three separately hosted OSDs (10GbE),
each configured to use six disks, I get the following read performance
(single-threaded read using find $DIR -type f -exec dd bs=8M if={}
of=/dev/null \; ; the file system holds 1.5TiB of data in variously
sized objects)

  v3.0.0  (1GbE client): 500Mbit/s
  v3.0.0 (10GbE client): 1Gbit/s

  v3.0.0 + ceph-client/master  (1GbE client): >900Mbit/s
  v3.0.0 + ceph-client/master (10GbE client): ~2.5-3Gbit/s
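
For reference, the single-threaded read loop used above can be written
out as a small script (a sketch only; the read_all helper name is mine,
not from the original test):

```shell
#!/bin/sh
# Sketch of the read test described above: read every regular file
# under the given directory with dd at an 8 MiB block size, discarding
# the output, so throughput is bounded by the file system and network
# rather than a local destination disk.
read_all() {
    find "$1" -type f -exec dd bs=8M if={} of=/dev/null \; 2>/dev/null
}
```

Run as, e.g., read_all /mnt/ceph and watch throughput on the client.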

Of note:
 - doubling the readahead window for the 10GbE client didn't buy
   any more performance (default = 8192*1024 bytes).
 - adding more threads to do reads on the 10GbE client only brought
   aggregate throughput up to <4Gbit/s

 - using 8 1GbE clients, an aggregate throughput of ~6Gbit/s is achieved
 - increasing to 12 1GbE clients, aggregate throughput increases only to
   6.5Gbit/s (I shall retry this with a different disk configuration
   for the OSDs)
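
For anyone wanting to repeat the readahead experiment: the cephfs
kernel client exposes the readahead window as the rasize mount option
(in bytes).  A sketch, assuming the 8192*1024-byte default cited above;
the monitor address is a placeholder, and the mount line is left
commented since it needs a reachable cluster and privileges:

```shell
#!/bin/sh
# Compute a doubled readahead window from the default noted above.
DEFAULT_RA=$((8192 * 1024))       # 8 MiB default
DOUBLED_RA=$((DEFAULT_RA * 2))    # 16 MiB
echo "$DOUBLED_RA"
# Illustrative mount with the larger window (hypothetical monitor):
# mount -t ceph mon1.example.com:6789:/ /mnt/ceph -o rasize=$DOUBLED_RA
```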

The numbers above are fairly woolly, but hopefully illustrate current
throughput improvements from the ceph-client/master branch.

Kind regards,

..david

David Flynn (1):
  ceph: fix memory leak in async readpages

 fs/ceph/addr.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

-- 
1.7.4.1