Re: raid1 round-robin scheduler

11.03.2015 13:55, Heinz Mauelshagen wrote:

On 03/11/2015 08:22 AM, konstantin wrote:

10.03.2015 17:22, Heinz Mauelshagen wrote:

On 03/10/2015 12:55 PM, konstantin wrote:


19.02.2015 18:02, Heinz Mauelshagen wrote:


dm-mirror (i.e. "lvcreate --type mirror" or a respective "dmsetup create
--table ...", which is no longer the recommended raid1 layout) has
provided round-robin reads for a long time. You'd need an ancient kernel
for it not to be supported.

"raid1"/"raid10" (the recommended targets) , i.e. the md-raid based
mappings accessible via the dm-raid target
do read optimizations as well. Use "lvcreate --type raid1/raid10
..." or
a respective dm table to set
those up. The former ("raid1") is the default in modern distributions
and configurable via setting
'mirror_segtype_default = "raid1"' in /etc/lvm/lvm.conf.
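
For illustration, a minimal sketch of both variants (the volume group
name "vg0" and the sizes are placeholders, not from this thread; the
lvm.conf setting lives in the global section):

  # /etc/lvm/lvm.conf -- make raid1 the default segment type for mirrored LVs
  global {
      mirror_segtype_default = "raid1"
  }

  # explicit md-raid based mirrors via the dm-raid target
  lvcreate --type raid1  -m1     -L10G -n lv_r1  vg0
  lvcreate --type raid10 -m1 -i2 -L10G -n lv_r10 vg0   # raid10 needs at least 4 PVs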

Heinz




On 02/19/2015 08:23 AM, konstantin wrote:
What version of the kernel should I use to get a round-robin read
implementation on LV raid1?



I created a raid1 LV with "lvcreate --type raid1 -m1 -L5G -n r1lv r1vg"
on a VG with two physical devices:

lvs -a -o +devices
  LV              VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert Devices
  r1lv            r1vg rwi-a-m- 5.00g                             100.00         r1lv_rimage_0(0),r1lv_rimage_1(0)
  [r1lv_rimage_0] r1vg iwi-aor- 5.00g                                            /dev/sda(1)
  [r1lv_rimage_1] r1vg iwi-aor- 5.00g                                            /dev/sdb(1)
  [r1lv_rmeta_0]  r1vg ewi-aor- 4.00m                                            /dev/sda(0)
  [r1lv_rmeta_1]  r1vg ewi-aor- 4.00m                                            /dev/sdb(0)

but reads only go to one of the devices (I can see this in nmon's live
disk utilization). Is there a way to ensure reading from both devices?


How do you test?
Do you read from multiple threads?




Really... I used dd to test with a single thread,

That's what I assumed.

but why, when reading in a single thread, can I not read from both PV
devices at the same time?


Your dd example causes streaming I/O, which is what spindles handle best.
Thus it would not make sense to split the I/Os up.
For that kind of single-threaded streaming I/O with a sensible block size,
a striped mapping would do better.
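
A minimal sketch of such a striped mapping on the same two PVs (the LV
name is hypothetical; note that with only two PVs striping gives up the
redundancy, and a striped+mirrored raid10 LV would need at least four PVs):

  lvcreate --type striped -i2 -L5G -n r1str r1vg
  dd if=/dev/r1vg/r1str of=/dev/null bs=1M count=4096 iflag=direct   # one reader, both PVs busy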

Try running dd/fio/... multiple times in parallel and you should see the
expected effect.
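
A hedged sketch of such a parallel-read test against the raid1 LV from
above (block size, offsets, job count and runtime are arbitrary):

  # several dd readers at different offsets
  for i in 0 1 2 3; do
      dd if=/dev/r1vg/r1lv of=/dev/null bs=1M count=1024 skip=$((i*1024)) iflag=direct &
  done
  wait

  # or the same with fio
  fio --name=par-read --filename=/dev/r1vg/r1lv --rw=read --bs=1M \
      --numjobs=4 --offset_increment=1g --ioengine=libaio --direct=1 \
      --runtime=30 --time_based

With several readers, both legs (/dev/sda and /dev/sdb) should show up
busy in nmon/iostat.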



My VG is based on two PVs that are remote disk storages connected over InfiniBand. I am reaching the performance limit of the InfiniBand ports on my host and would like to parallelize the load between the two raid1 legs (disk storages), which contain the same data.

--
WBR
Konstantin V. Krotov

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel




