Re: Unacceptably Poor RAID1 Performance with Many CPU Cores

Xiao Ni <xni@xxxxxxxxxx> wrote:
> Since it can be reproduced easily in your environment, can you try
> with the latest upstream kernel? If the problem doesn't exist with
> the latest upstream kernel, you can use git bisect to find which
> patch fixed it.

I just tried the latest upstream kernel.  I get almost the same results with 1G ramdisks.

Without RAID (writing to /dev/ram0)
READ:  IOPS=15.8M BW=60.3GiB/s
WRITE: IOPS= 6.8M BW=27.7GiB/s

RAID1 (writing to /dev/md/test)
READ:  IOPS=518K BW=2028MiB/s
WRITE: IOPS=222K BW= 912MiB/s
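
In case it helps to reproduce, the setup was roughly along these
lines (the ram disk size and the fio options shown here are
illustrative, not necessarily the exact values I used):

  # two 1G ram disks (rd_size is in KiB)
  modprobe brd rd_nr=2 rd_size=1048576

  # mirror them
  mdadm --create /dev/md/test --level=1 --raid-devices=2 /dev/ram0 /dev/ram1

  # 4k random read/write, many jobs, high queue depth
  fio --name=md-test --filename=/dev/md/test --direct=1 --rw=randrw \
      --bs=4k --ioengine=libaio --iodepth=64 --numjobs=64 \
      --time_based --runtime=30 --group_reporting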

> > We are actually running hundreds of VMs on our hosts.  The problem
> > is that when we use RAID1 for our enterprise NVMe disks, the
> > performance degrades severely compared to using them directly; it
> > seems we hit the same bottleneck as in the test described above.
> 
> So those hundreds of VMs run on the raid1, and the raid1 is created
> with nvme disks. What's in /proc/mdstat?

At the moment we do not use raid1 because of this performance issue.
Since the machines are in production, I cannot change their disk
layout.  If I find the opportunity, I will set up raid1 on real
disks and report the contents of /proc/mdstat.
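
When I do, the setup would be something like the following (the array
name and device names below are placeholders, not the actual layout):

  mdadm --create /dev/md/nvme-test --level=1 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1
  cat /proc/mdstat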

Thanks,
Ali



