Re: Issue with RHEL6 mkfs.xfs (3.1.1+), HP P420 RAID, and MySQL replication

On Thu, Jul 09, 2015 at 05:32:50PM +0000, Hogan Whittall wrote:
> Hello,
> Recently we encountered a previously-reported issue regarding write amplification with MySQL replication and XFS when used with certain RAID controllers (in our case, an HP P420).  Our symptoms exactly match an issue documented by someone else here - http://oss.sgi.com/archives/xfs/2013-03/msg00133.html - but I don't see any resolution.  I will say that the problem *does not* exist when mkfs.xfs 2.9.6 is used to format the filesystem on RHEL6, since that version sets sunit=0 and swidth=0 instead of deriving them from minimum_io_size and optimal_io_size.

I'm not very familiar with MySQL and thus not sure what your workload
is, but either version of mkfs.xfs should support setting options such
that the fs is formatted as with the defaults of another version...

> We have systems that are identical in how they are built and configured.  On a RHEL6 box whose MySQL partition was formatted with mkfs.xfs v3.1.1, we can reproduce the write amplification problem with MySQL replication every single time.  If we take the same box, format the MySQL partition with mkfs.xfs 2.9.6, and bring up MySQL with the exact same configuration, there is no problem.  I've included the working and broken settings below.  If it's not the sunit/swidth settings, then what would cause 7-10MB/s worth of writes to the XFS partition to become over 200MB/s downstream?  The actual data change on the disks is not 200MB/s, but because the write ops are truly being amplified and not just misreported, our MySQL slaves with the bad XFS settings cannot keep up and the lag steadily increases with no hope of ever becoming current.

It would be nice to somehow see what requests are being made at the
application level, perhaps via strace or something of that nature, if
you can demonstrate a relatively isolated operation at the application
level that results in the same I/O requests to the kernel but different
I/O coming out of the filesystem.
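
Something along these lines might help correlate the two (untested
sketch; the mysqld pid is a placeholder and the device path is taken
from your xfs_info output below):

  # application-level write requests issued by mysqld and its threads
  strace -f -tt -e trace=pwrite64,pwritev,fsync,fdatasync,io_submit \
         -p <mysqld_pid> -o mysqld.strace

  # device-level I/O observed over the same window
  iostat -xm 1

  # or, for per-request detail on the affected device
  blktrace -d /dev/mapper/sys-home -o - | blkparse -i -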

> I am happy to try other settings/options with the RHEL6 mkfs.xfs to see whether replication performance can match that of systems formatted with mkfs.xfs 2.9.6, but the values set by 3.1.1 with the P420 RAID do not work for MySQL replication.  We have ruled out everything else as a possible cause; the only difference on these systems is what values are set by mkfs.xfs.
> ============================================================ 
> Working RHEL6 XFS partition:
> meta-data=/dev/mapper/sys-home   isize=256    agcount=4, agsize=71271680 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=285086720, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=32768, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=0
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> ============================================================ 
> Broken RHEL6 XFS partition:
> meta-data=/dev/mapper/sys-home   isize=256    agcount=32, agsize=8908992 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=285086720, imaxpct=5
>          =                       sunit=64     swidth=128 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=139264, version=2
>          =                       sectsz=512   sunit=64 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> ============================================================ 
> 

The differences I see for the second mkfs:

- agcount of 32 instead of 4
- sunit/swidth of 64/128 rather than 0/0
- log size of 139264 blocks rather than 32768
- lazy-count=1 rather than lazy-count=0

As mentioned above, I would take the "broken" mkfs.xfs and add, one at a
time, the options that format the fs the way the previous version did,
and try to identify which one leads to the behavior. E.g., maybe first
use '-d su=0,sw=0' to reset the stripe unit, then try adding '-l
size=<32768*blksize>' to set the log size, '-d agcount=N' to set the
allocation group count, etc.
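
For example, something like the following sequence (untested sketch;
32768 blocks * 4096 bytes works out to a 128m log, and the device path
is taken from your xfs_info output):

  # 1) only reset the stripe geometry
  mkfs.xfs -f -d su=0,sw=0 /dev/mapper/sys-home

  # 2) also match the old log size and lazy-count
  mkfs.xfs -f -d su=0,sw=0 -l size=128m,lazy-count=0 /dev/mapper/sys-home

  # 3) also match the old allocation group count
  mkfs.xfs -f -d su=0,sw=0,agcount=4 -l size=128m,lazy-count=0 \
           /dev/mapper/sys-home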

Brian

> Thanks!
> -Hogan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



