Re: RAID 1 Performance (Backups)

On Tue, Oct 16, 2018 at 1:31 PM Adam Goryachev
<mailinglists@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>
>
> On 16/10/18 21:46, Shaun Glass wrote:
> > Good Day,
> >
> > We have some servers that we have set up with RAID 1 using mdadm. The
> > layout has one disk in one VMware datastore and the other in another
> > VMware datastore. These are RHEL 7 servers with LVM and EXT4
> > filesystems.
> >
> > Now they seem to function perfectly well without too much performance
> > impact until we run backups in the evening. Here we see backups
> > basically taking 5 times as long to complete as they normally would.
> > We use TSM for backups and it is typically flat file backups.
> >
> > Please note these datastores are in different DCs with a big network
> > link between them. Since performance during the day is perfectly fine,
> > we are a bit lost as to why backups are taking so long.
> >
> > Any suggestions ?
> >
> Most likely this is a latency issue between the two sites (not
> bandwidth)... You would need to examine what technology you are using to
> make that remote disk look like a local disk for mdadm. Also, did you
> give mdadm a write-mostly flag for this "remote" disk? What other config
> have you done (or not done)?
>
> In fact, is it the live load that has this RAID1 or the backup server?
> What is the performance of the backup server/what is its config like?
>
> Regards,
> Adam
>

Firstly, this is not on the backup server; secondly, the "write-mostly"
flag was never used.
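
For reference, a minimal sketch of how the DC2 leg could be flagged
write-mostly on an existing array via the md sysfs interface (assuming
/dev/sdb1 is the DC2 disk; adjust device names to suit). The member
should then show a (W) next to it in /proc/mdstat:

# echo writemostly > /sys/block/md1/md/dev-sdb1/state

and to clear it again:

# echo -writemostly > /sys/block/md1/md/dev-sdb1/state

At create time the same could be done by placing --write-mostly before
the remote member, e.g.:

# mdadm --create /dev/md1 -l 1 -n 2 -b internal /dev/sda1 --write-mostly /dev/sdb1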

The history behind this is that servers are being migrated from AIX to
Linux. The AIX servers had physical disks presented from each DC (as
opposed to virtual disks from datastores) and used LVM mirrors. The same
storage that is presented to the AIX servers is being used for the
datastores. After testing various solutions we went with stretched
clusters within VMware. One thing that was never really performance
tested was backups.

We have now set up a test environment to sort out the backup issue,
hence the query about possible things to look out for. The following
is how we configure the storage:

VM

OS Disks
0:0 (DC1)
1:0 (DC2)

Data Disks
0:1 (DC1)
1:1 (DC2)

... the basics of the above are that we have two SCSI controllers
attached to the VM. Disks from one DC are attached to the first
controller and disks from the other DC to the second, always keeping
the sequence the same.
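
To double-check that mapping from inside the guest (i.e. which kernel
device sits behind which controller, and therefore which DC), something
along these lines should do; only a sketch, the exact PCI/target
addresses will differ:

# ls -l /dev/disk/by-path/ | grep -E 'sda|sdb'

or, if the lsscsi package is installed:

# lsscsi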

Example MD device created as follows (the array is created degraded
against the DC1 disk, with the DC2 disk added afterwards):

# mdadm --create /dev/md1 -l 1 -n 2 -b internal missing /dev/sda1
# mdadm --manage /dev/md1 --add /dev/sdb1

# cat /etc/mdadm.conf
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 UUID=6d553d9b:4aa2eea6:38946ecc:40420623 devices=/dev/sda1,/dev/sdb1
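
As an aside, when arrays get rebuilt in the test environment the ARRAY
line can be regenerated rather than hand-edited (the UUID will of course
differ per array):

# mdadm --detail --scan >> /etc/mdadm.conf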

# cat /proc/mdstat

md1 : active raid1 sda1[1] sdb1[2]
      36142080 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk
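
Since the suspicion is latency on the remote leg during the backup
window, one thing to watch in the test environment is per-member latency
while a TSM backup runs; a rough sketch, assuming iostat from the
sysstat package is available:

# iostat -x 5 sda sdb

Comparing w_await between the DC1 and DC2 members during a backup run
should show whether the remote disk is the one holding writes up.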

Regards

Shaun


