Re: dm stripe: add DAX support

On Fri, 2016-06-24 at 14:29 -0400, Mike Snitzer wrote:
> 
> BTW, if in your testing you could evaluate/quantify any extra overhead
> from DM that'd be useful to share.  It could be there are bottlenecks
> that need to be fixed, etc.

Here are some results from a fio benchmark.  The test is single-threaded and
bound to one CPU.  (A job file matching these settings is sketched below the
table.)

 DAX  LVM   IOPS   NOTE
 ---------------------------------------
  Y    N    790K
  Y    Y    754K   5% overhead with LVM
  N    N    567K
  N    Y    457K   20% overhead with LVM

 DAX: Y: mount -o dax,noatime, N: mount -o noatime
 LVM: Y: dm-linear on pmem0 device, N: pmem0 device
 fio: bs=4k, size=2G, direct=1, rw=randread, numjobs=1
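
For completeness, those parameters correspond roughly to the following job
file; the ioengine, CPU number, and mount point are my assumptions, not
recorded settings:

 [global]
 ; bs/size/direct/rw/numjobs are the recorded parameters;
 ; ioengine and cpus_allowed are assumptions (test was bound to one CPU)
 ioengine=sync
 cpus_allowed=0
 bs=4k
 size=2G
 direct=1
 rw=randread
 numjobs=1

 [randread]
 ; assumed mount point of the fs on pmem0 (or on the LV over pmem0)
 directory=/mnt/pmem

Each of the four rows above would come from running this job against the
corresponding mount/LVM combination.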

Of the 5% overhead in the DAX/LVM case, the new DM direct_access interfaces
account for less than 0.5% in the profile (both are sketched after it):

 dm_blk_direct_access 0.28%
 linear_direct_access 0.17%
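
For reference, the two interfaces look roughly as follows (reconstructed
from the proposed patches and simplified; exact signatures and error
handling may differ):

 /*
  * dm core: look up the target that maps 'sector' and forward the DAX
  * direct_access call to it (sketch; error paths trimmed).
  */
 static long dm_blk_direct_access(struct block_device *bdev, sector_t sector,
                                  void **kaddr, pfn_t *pfn, long size)
 {
         struct mapped_device *md = bdev->bd_disk->private_data;
         struct dm_table *map;
         struct dm_target *ti;
         int srcu_idx;
         long len, ret = -EIO;

         map = dm_get_live_table(md, &srcu_idx);
         if (!map)
                 goto out;
         ti = dm_table_find_target(map, sector);
         if (!dm_target_is_valid(ti))
                 goto out;

         /* clamp to the length this target maps at 'sector' */
         len = max_io_len(sector, ti) << SECTOR_SHIFT;
         size = min(len, size);

         if (ti->type->direct_access)
                 ret = ti->type->direct_access(ti, sector, kaddr, pfn, size);
 out:
         dm_put_live_table(md, srcu_idx);
         return min(ret, size);
 }

 /*
  * dm-linear: remap the sector and pass through to the underlying
  * pmem device's direct_access (sketch).
  */
 static long linear_direct_access(struct dm_target *ti, sector_t sector,
                                  void **kaddr, pfn_t *pfn, long size)
 {
         struct linear_c *lc = ti->private;
         struct blk_dax_ctl dax = {
                 .sector = linear_map_sector(ti, sector),
                 .size = size,
         };
         long ret;

         ret = bdev_direct_access(lc->dev->bdev, &dax);
         *kaddr = dax.addr;
         *pfn = dax.pfn;
         return ret;
 }

Neither function does any I/O itself; they only translate the sector and
return a mapping of the persistent memory, which fits their small share of
the profile above.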

The average latency increases slightly, from 0.93us to 0.95us.  I think most
of the overhead comes from the submit_bio() path, which with DAX is used only
for accessing metadata.  I believe this is due to DM cloning a bio for each
request; the clone step is sketched below.  There are 12% more L2 misses in
total.
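
The clone step looks roughly like this (loosely after clone_bio() in
drivers/md/dm.c of that era; integrity handling and error paths trimmed, so
treat it as a sketch rather than the exact code):

 /*
  * Sketch of DM's per-request clone on the submit_bio() path: each bio
  * entering the mapped device gets a new struct bio that shares the
  * original's bvec pages, then is trimmed to the target's range before
  * being remapped by the target's ->map() and resubmitted.
  */
 static void clone_bio(struct dm_target_io *tio, struct bio *bio,
                       sector_t sector, unsigned len)
 {
         struct bio *clone = &tio->clone;

         __bio_clone_fast(clone, bio);   /* new bio, shared bvecs */
         bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
         clone->bi_iter.bi_size = to_bytes(len);
 }

The per-bio allocation and the extra cache footprint of the clones would
also line up with the increase in L2 misses.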

Without DAX, 20% overhead is observed with LVM, and average latency increases
from 1.39us to 1.82us.  In this case a bio is cloned for both data and
metadata.

Thanks,
-Toshi
