Linux RAID Storage Date Index
- Re: Re: [RFC PATCH V1] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [PATCH] Re: Find mismatch in data blocks during raid6 repair
- From: Robert Buchholz <robert.buchholz@xxxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Oliver Schinagl <oliver+list@xxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: Robert Buchholz <robert.buchholz@xxxxxxxxxxxx>
- [PATCH] Re: Find mismatch in data blocks during raid6 repair
- From: Robert Buchholz <robert.buchholz@xxxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- raid10 array tend to two degraded raid10 array
- From: "vincent" <hanguozhong@xxxxxxxxxxxx>
- Re: commit backport request
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: commit backport request
- From: NeilBrown <neilb@xxxxxxx>
- Re: commit backport request
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: NeilBrown <neilb@xxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Alex <mysqlstudent@xxxxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: NeilBrown <neilb@xxxxxxx>
- Re: RAID5 speed goes down
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Alex <mysqlstudent@xxxxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Catching a RAID error in a process
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Roman Mamedov <rm@xxxxxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Bill Davidsen <davidsen@xxxxxxx>
- Question about how to migrate raid0 to raid5
- From: Zhang Jiejing <b33651@xxxxxxxxxxxxx>
- [PULL REQUEST] 3 more md bugfixes .. they just keep coming...
- From: NeilBrown <neilb@xxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Roman Mamedov <rm@xxxxxxxxxx>
- Re: Need to remove failed disk from RAID5 array
- From: Alex <mysqlstudent@xxxxxxxxx>
- Re: Catching a RAID error in a process
- From: NeilBrown <neilb@xxxxxxx>
- Catching a RAID error in a process
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- Re: Need to remove failed disk from RAID5 array
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: NeilBrown <neilb@xxxxxxx>
- Re: [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Mathias Burén <mathias.buren@xxxxxxxxx>
- [RFE] Please, add optional RAID1 feature (= chunk checksums) to make it more robust
- From: Jaromir Capik <jcapik@xxxxxxxxxx>
- Re: RAID5 speed goes down
- From: Alexander Schleifer <alexander.schleifer@xxxxxxxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- Re: v3.5 regression in IMSM support
- From: Brian Downing <bdowning@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- Re: v3.5 regression in IMSM support
- From: NeilBrown <neilb@xxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: NeilBrown <neilb@xxxxxxx>
- Re: xfs/md filesystem hang on drive pull under IO with 2.6.35.13
- From: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
- Re: xfs/md filesystem hang on drive pull under IO with 2.6.35.13
- From: NeilBrown <neilb@xxxxxxx>
- xfs/md filesystem hang on drive pull under IO with 2.6.35.13
- From: Benedict Singer <bzsing@xxxxxxxxx>
- [PATCH v3] DM RAID: Add support for MD RAID10
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- Re: [RFC]raid5: multiple thread handle stripe
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- Re: [RFC]raid5: multiple thread handle stripe
- From: David Brown <david.brown@xxxxxxxxxxxx>
- Re: raid1 repair: sync_request() aborts if one of the drives has bad block recorded
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- recreating a non-damaged RAID5
- From: Somewhere ToHide <andre_39145@xxxxxxxxxx>
- Re: doubt about raid 'quality'
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- raid1: metadata 1.2 size calculation formula needed
- From: Sebastian Riemer <sebastian.riemer@xxxxxxxxxxxxxxxx>
- v3.5 regression in IMSM support
- From: Brian Downing <bdowning@xxxxxxxxx>
- Re: doubt about raid 'quality'
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: Asdo <asdo@xxxxxxxxxxxxx>
- What to do with this RAID6 ...
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: doubt about raid 'quality'
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- [PATCH 1/2 V1] fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Re: [PATCH 1/2 V1] [PATCH] fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: commit backport request
- From: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
- Re: commit backport request
- From: NeilBrown <neilb@xxxxxxx>
- Re: commit backport request
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: [PATCH 1/2 V1] [PATCH] fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices
- From: NeilBrown <neilb@xxxxxxx>
- Re: doubt about raid 'quality'
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH 2/2 V1] raid5: For odirect-write performance, not set STRIPE_PREREAD_ACTIVE.
- From: NeilBrown <neilb@xxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: NeilBrown <neilb@xxxxxxx>
- Re: PATCH: md/raid1: sync_request_write() may complete r1_bio without rescheduling
- From: NeilBrown <neilb@xxxxxxx>
- Re: raid1 repair: sync_request() aborts if one of the drives has bad block recorded
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [PATCH 0/2] Improve odirect-write performance for block-device.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH] raid5: Flush data when do_md_stop in order to avoid stripe_cache leaking.
- From: NeilBrown <neilb@xxxxxxx>
- Re: doubt about raid 'quality'
- From: Mathias Burén <mathias.buren@xxxxxxxxx>
- Need to remove failed disk from RAID5 array
- From: Alex <mysqlstudent@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- RAID5 speed goes down
- From: Alexander Schleifer <alexander.schleifer@xxxxxxxxxxxxxx>
- doubt about raid 'quality'
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- PATCH: md/raid1: sync_request_write() may complete r1_bio without rescheduling
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: Re: [PATCH 0/2] Improve odirect-write performance for block-device.
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: commit backport request
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Re: Re: [PATCH V1] raidd5:Only move IO_THRESHOLD stripes from delay_list to hold_list once.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: raid1 repair: sync_request() aborts if one of the drives has bad block recorded
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- [PATCH 2/2 V1] raid5: For odirect-write performance, not set STRIPE_PREREAD_ACTIVE.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- Re: [PATCH] raid5: Flush data when do_md_stop in order to avoid stripe_cache leaking.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: [PATCH V1] raidd5:Only move IO_THRESHOLD stripes from delay_list to hold_list once.
- From: NeilBrown <neilb@xxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Re: [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [RFC PATCH V1] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Re: [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: mdadm-3.2.5: segfault in "--grow --continue"
- From: NeilBrown <neilb@xxxxxxx>
- [PATCH 1/2 V1] [PATCH] fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- Re: [RFC PATCH V1] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Re: [PATCH 0/2] Improve odirect-write performance for block-device.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [PATCH 1/2] fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH 1/2] fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices.
- From: NeilBrown <neilb@xxxxxxx>
- Re: [RFC]raid5: multiple thread handle stripe
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 1/3 v4]raid1: make sequential read detection per disk based
- From: NeilBrown <neilb@xxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: NeilBrown <neilb@xxxxxxx>
- Re: raid1 repair: sync_request() aborts if one of the drives has bad block recorded
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH 0/2] Improve odirect-write performance for block-device.
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: commit backport request
- From: NeilBrown <neilb@xxxxxxx>
- [PATCH 2/2] raid5: For write performance, remove REQ_SYNC when write was odirect.
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH 1/2] fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices.
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH 0/2] Improve odirect-write performance for block-device.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Re: [PATCH V1] raidd5:Only move IO_THRESHOLD stripes from delay_list to hold_list once.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Resizing/merging partitions on soft RAID
- From: Istvan Pusztai <istvanp@xxxxxxxxx>
- Re: [PATCH V1] raidd5:Only move IO_THRESHOLD stripes from delay_list to hold_list once.
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- [PULL REQUEST] one new bugfix for md/RAID1
- From: NeilBrown <neilb@xxxxxxx>
- Re: Assembly failure
- From: Richard Scobie <r.scobie@xxxxxxxxxxxx>
- Re: Assembly failure
- From: Brian Candler <B.Candler@xxxxxxxxx>
- [RFC PATCH V1] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH V1] raidd5:Only move IO_THRESHOLD stripes from delay_list to hold_list once.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- [PATCH] raid5: Flush data when do_md_stop in order to avoid stripe_cache leaking.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- Re: commit backport request
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: Linux RAID subsystem future
- From: NeilBrown <neilb@xxxxxxx>
- Re: Linux RAID subsystem future
- From: Drew <drew.kay@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- Linux RAID subsystem future
- From: Zdenek Kaspar <zkaspar82@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- raid1 repair: sync_request() aborts if one of the drives has bad block recorded
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: [mdadm PATCH] bcache: add bcache superblock
- From: Jacek Danecki <Jacek.Danecki@xxxxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: Alasdair G Kergon <agk@xxxxxxxxxx>
- Re: [PATCH] 07reshape5intr: Set speed_limit_min to be able to reduce resync speed below 1000
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH v2] DM RAID: Add support for MD RAID10
- From: NeilBrown <neilb@xxxxxxx>
- [PATCH v2] DM RAID: Add support for MD RAID10
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- Re: commit backport request
- From: NeilBrown <neilb@xxxxxxx>
- Re: commit backport request
- From: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
- Re: commit backport request
- From: Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx>
- Re: RAID5 faild while in degraded mode, need help
- From: Dietrich Heise <dh@xxxxxxx>
- Re: Assembly failure
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: Bernd Schubert <bernd.schubert@xxxxxxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: Bar Ziony <bartzy@xxxxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: Bernd Schubert <bernd.schubert@xxxxxxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: David Brown <david.brown@xxxxxxxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: David Brown <david.brown@xxxxxxxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: Bar Ziony <bartzy@xxxxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: David Brown <david.brown@xxxxxxxxxxxx>
- Re: mdadm-3.2.5: segfault in "--grow --continue"
- From: Sebastian Hegler <sebastian.hegler@xxxxxxxxxxxxx>
- Re: Assembly failure
- From: Brian Candler <B.Candler@xxxxxxxxx>
- Re: Assembly failure
- From: Roman Mamedov <rm@xxxxxxxxxx>
- Re: Assembly failure
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: On mdadm 3.2 and bad-block-log
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: Assembly failure
- From: Brian Candler <B.Candler@xxxxxxxxx>
- Re: Assembly failure
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: Bar Ziony <bartzy@xxxxxxxxx>
- Re: Assembly failure
- From: Brian Candler <B.Candler@xxxxxxxxx>
- [PATCH] 07reshape5intr: Set speed_limit_min to be able to reduce resync speed below 1000
- From: Jes.Sorensen@xxxxxxxxxx
- Re: tests/03r5assemV1 issues
- From: Jes Sorensen <Jes.Sorensen@xxxxxxxxxx>
- Re: tests/03r5assemV1 issues
- From: Roman Mamedov <rm@xxxxxxxxxx>
- Re: tests/03r5assemV1 issues
- From: NeilBrown <neilb@xxxxxxx>
- Re: Assembly failure
- From: NeilBrown <neilb@xxxxxxx>
- Re: mdadm-3.2.5: segfault in "--grow --continue"
- From: NeilBrown <neilb@xxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Does mdadm supports TRIM command to SSDs?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Possible data corruption after rebuild
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] DM RAID: Add support for MD RAID10 personality
- From: NeilBrown <neilb@xxxxxxx>
- Does mdadm supports TRIM command to SSDs?
- From: Bar Ziony <bartzy@xxxxxxxxx>
- [PATCH] Fix --update=homehost
- From: Justin Maggard <jmaggard10@xxxxxxxxx>
- Re: [PATCH] DM RAID: Add support for MD RAID10 personality
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- Re: Assembly failure
- From: Brian Candler <B.Candler@xxxxxxxxx>
- Re: Possible data corruption after rebuild
- From: Alex <mysqlstudent@xxxxxxxxx>
- Re: Assembly failure
- From: Sebastian Riemer <sebastian.riemer@xxxxxxxxxxxxxxxx>
- Re: Assembly failure
- From: pants <pants@xxxxxxxxxx>
- Re: Assembly failure
- From: Brian Candler <B.Candler@xxxxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: "Darrick J. Wong" <djwong@xxxxxxxxxx>
- Re: mdadm-3.2.5: segfault in "--grow --continue"
- From: Sebastian Hegler <sebastian.hegler@xxxxxxxxxxxxx>
- Re: Assembly failure
- From: Sebastian Riemer <sebastian.riemer@xxxxxxxxxxxxxxxx>
- Assembly failure
- From: Brian Candler <B.Candler@xxxxxxxxx>
- Re: mdadm-3.2.5: segfault in "--grow --continue"
- From: NeilBrown <neilb@xxxxxxx>
- mdadm-3.2.5: segfault in "--grow --continue"
- From: Sebastian Hegler <sebastian.hegler@xxxxxxxxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: Matthias Prager <linux@xxxxxxxxxxxxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: Matthias Prager <linux@xxxxxxxxxxxxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: Matthias Prager <linux@xxxxxxxxxxxxxxxxx>
- Re: mdadm crash with lots of devs
- From: Jan Engelhardt <jengelh@xxxxxxx>
- Re: RAID5 faild while in degraded mode, need help
- From: NeilBrown <neilb@xxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: Robert Trace <maillist@xxxxxxxxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: NeilBrown <neilb@xxxxxxx>
- Re: mdadm crash with lots of devs
- From: NeilBrown <neilb@xxxxxxx>
- Re: md device is read only mode
- From: NeilBrown <neilb@xxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: "Darrick J. Wong" <djwong@xxxxxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: Robert Trace <maillist@xxxxxxxxxxxxx>
- Re: md device is read only mode
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: 'Device not ready' issue on mpt2sas since 3.1.10
- From: Matthias Prager <linux@xxxxxxxxxxxxxxxxx>
- mdadm crash with lots of devs
- From: Jan Engelhardt <jengelh@xxxxxxx>
- Re: Re: mkfs.xfs states log stripe unit is too large
- From: kedacomkernel <kedacomkernel@xxxxxxxxx>
- Re: md raid6 deadlock on write
- From: Jose Manuel dos Santos Calhariz <jose.calhariz@xxxxxxxxxxx>
- Re: RAID5 faild while in degraded mode, need help
- From: Dietrich Heise <dh@xxxxxxx>
- [RFC]raid5: multiple thread handle stripe
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: RAID5 superblock and filesystem recovery after re-creation
- From: NeilBrown <neilb@xxxxxxx>
- Re: RAID5 superblock and filesystem recovery after re-creation
- From: Alexander Schleifer <alexander.schleifer@xxxxxxxxxxxxxx>
- Re: commit backport request
- From: NeilBrown <neilb@xxxxxxx>
- Re: commit backport request
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- commit backport request
- From: NeilBrown <neilb@xxxxxxx>
- Re: md raid6 deadlock on write
- From: NeilBrown <neilb@xxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: NeilBrown <neilb@xxxxxxx>
- Re: md device is read only mode
- From: NeilBrown <neilb@xxxxxxx>
- Re: Raid1 resync problem with leap seconds ?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Raid1 resync problem with leap seconds ?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Possible data corruption after rebuild
- From: NeilBrown <neilb@xxxxxxx>
- Re: RAID5 faild while in degraded mode, need help
- From: NeilBrown <neilb@xxxxxxx>
- Re: RAID5 superblock and filesystem recovery after re-creation
- From: NeilBrown <neilb@xxxxxxx>
- Re: RAID5 superblock and filesystem recovery after re-creation
- From: Alexander Schleifer <alexander.schleifer@xxxxxxxxxxxxxx>
- Re: RAID5 superblock and filesystem recovery after re-creation
- From: NeilBrown <neilb@xxxxxxx>
- RAID5 superblock and filesystem recovery after re-creation
- From: Alexander Schleifer <alexander.schleifer@xxxxxxxxxxxxxx>
- RAID5 faild while in degraded mode, need help
- From: Dietrich Heise <dh@xxxxxxx>
- Re: raid10 make_request failure during iozone benchmark upon btrfs
- From: Kerin Millar <kerframil@xxxxxxxxx>
- Re: Re: [PATCH] raid5:Only move IO_THRESHOLD stripes from delay_list to hold_list in raid5_activate_delayed().
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH 0 of 5] MD: prepare RAID10 for inclusion in dm-raid.c
- From: Brassow Jonathan <jbrassow@xxxxxxxxxx>
- Re: [PATCH] raid5:Only move IO_THRESHOLD stripes from delay_list to hold_list in raid5_activate_delayed().
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Possible data corruption after rebuild
- From: Alex <mysqlstudent@xxxxxxxxx>
- Re: Possible data corruption after rebuild
- From: Alex <mysqlstudent@xxxxxxxxx>
- Possible data corruption after rebuild
- From: Alex <mysqlstudent@xxxxxxxxx>
- Re: md raid6 deadlock on write
- From: Jose Manuel dos Santos Calhariz <jose.calhariz@xxxxxxxxxxx>
- Re: [PATCH] raid5:Only move IO_THRESHOLD stripes from delay_list to hold_list in raid5_activate_delayed().
- From: Paul Menzel <paulepanter@xxxxxxxxxxxxxxxxxxxxx>
- [PATCH] raid5:Only move IO_THRESHOLD stripes from delay_list to hold_list in raid5_activate_delayed().
- From: majianpeng <majianpeng@xxxxxxxxx>
- Raid1 resync problem with leap seconds ?
- From: Arnold Schulz <arnysch@xxxxxxx>
- Re: tests/03r5assemV1 issues
- From: Jes Sorensen <Jes.Sorensen@xxxxxxxxxx>
- Re: [patch 2/3 v4]raid1: read balance chooses idlest disk for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [patch 2/3 v4]raid1: read balance chooses idlest disk for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- md device is read only mode
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: [patch 2/3 v4]raid1: read balance chooses idlest disk for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- [patch 3/3 v4]raid1: prevent merging too large request
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/3 v4]raid1: read balance chooses idlest disk for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/3 v4]raid1: make sequential read detection per disk based
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: disk failed during reshape, md3_reshape blocked
- From: Brendan Hide <brendan@xxxxxxxxxxxxxxxxx>
- Re: [PATCH] DM RAID: Add support for MD RAID10 personality
- From: Jan Ceuleers <jan.ceuleers@xxxxxxxxxxxx>
- Re: [patch 3/3 v3] raid1: prevent merging too large request
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 3/3 v2]raid5: add a per-stripe lock
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [patch 2/3 v2]raid5: remove unnecessary bitmap write optimization
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 3/3 v3] raid1: prevent merging too large request
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 1/3 v3] raid1: make sequential read detection per disk based
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [RFC PATCH] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: tests/03r5assemV1 issues
- From: NeilBrown <neilb@xxxxxxx>
- [patch 3/3 v2]raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/3 v2]raid5: remove unnecessary bitmap write optimization
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/3 v2]raid5: lockless access raid5 overrided bi_phys_segments
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [PATCH] DM RAID: Add support for MD RAID10 personality
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 3/3]raid5: remove unnecessary bitmap write optimization
- From: NeilBrown <neilb@xxxxxxx>
- Re: md raid6 deadlock on write
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: [RFC PATCH] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: NeilBrown <neilb@xxxxxxx>
- Re: md raid6 deadlock on write
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] DM RAID: Add support for MD RAID10 personality
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH 0 of 5] MD: prepare RAID10 for inclusion in dm-raid.c
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] MD RAID10: Fix compiler warning.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [patch 03/10 v3] raid5: add a per-stripe lock
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH] DM RAID: Add support for MD RAID10 personality
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- Re: [patch 03/10 v3] raid5: add a per-stripe lock
- From: NeilBrown <neilb@xxxxxxx>
- [PATCH 4 of 4] MD RAID10: Export md_raid10_congested
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- [PATCH 3 of 4] MD: Move macros from raid1*.h to raid1*.c
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- [PATCH 2 of 4] MD RAID1: rename mirror_info structure
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- [PATCH 1 of 4] MD RAID10: rename mirror_info structure
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- [PATCH 0 of 5] MD: prepare RAID10 for inclusion in dm-raid.c
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- [PATCH] MD RAID10: Fix compiler warning.
- From: Jonathan Brassow <jbrassow@xxxxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: Robert Buchholz <robert.buchholz@xxxxxxxxxxxx>
- Re: tests/03r5assemV1 issues
- From: Jes Sorensen <Jes.Sorensen@xxxxxxxxxx>
- Re: raid10 make_request failure during iozone benchmark upon btrfs
- From: Chris Mason <chris.mason@xxxxxxxxxxxx>
- Re: Re: [patch 03/10 v3] raid5: add a per-stripe lock
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: Re: [RFC PATCH] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [patch 10/10 v3] raid5: create multiple threads to handle stripes
- From: Shaohua Li <shli@xxxxxxxxxx>
- [PULL REQUEST] md fixes for 3.5-rc
- From: NeilBrown <neilb@xxxxxxx>
- [patch 3/3]raid5: remove unnecessary bitmap write optimization
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/3] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/3]raid5: lockless access raid5 overrided bi_phys_segments
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: Fatal crash/hang in scsi_lib after RAID disk failure
- From: NeilBrown <neilb@xxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Fatal crash/hang in scsi_lib after RAID disk failure
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Fatal crash/hang in scsi_lib after RAID disk failure
- From: NeilBrown <neilb@xxxxxxx>
- Re: [RFC PATCH] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: NeilBrown <neilb@xxxxxxx>
- [RFC PATCH] raid5: Add R5_ReadNoMerge flag which prevent bio from merging at block layer
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Fatal crash/hang in scsi_lib after RAID disk failure
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Fatal crash/hang in scsi_lib after RAID disk failure
- From: NeilBrown <neilb@xxxxxxx>
- Re: Resync Every Sunday
- From: Keith Keller <kkeller@xxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: raid10 make_request failure during iozone benchmark upon btrfs
- From: NeilBrown <neilb@xxxxxxx>
- Re: raid10 make_request failure during iozone benchmark upon btrfs
- From: Kerin Millar <kerframil@xxxxxxxxx>
- Re: MD Raid10 recovery results in "attempt to access beyond end of device"
- From: NeilBrown <neilb@xxxxxxx>
- Re: tests/03r5assemV1 issues
- From: NeilBrown <neilb@xxxxxxx>
- Re: raid10 make_request failure during iozone benchmark upon btrfs
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 03/10 v3] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: Resync Every Sunday
- From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
- md raid6 deadlock on write
- From: Jose Manuel dos Santos Calhariz <jose.calhariz@xxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: Resync Every Sunday
- From: Keith Keller <kkeller@xxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: [patch 8/8] raid5: create multiple threads to handle stripes
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: [patch 10/10 v3] raid5: create multiple threads to handle stripes
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: John Gehring <john.gehring@xxxxxxxxx>
- Re: Resync Every Sunday
- From: Larkin Lowrey <llowrey@xxxxxxxxxxxxxxxxx>
- tests/03r5assemV1 issues
- From: Jes Sorensen <Jes.Sorensen@xxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Stacked array data recovery
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Question about raid5 disk recovery logic
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: mkfs.xfs states log stripe unit is too large
- From: Dave Chinner <david@xxxxxxxxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 03/10 v3] raid5: add a per-stripe lock
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: mkfs.xfs states log stripe unit is too large
- From: NeilBrown <neilb@xxxxxxx>
- Re: mkfs.xfs states log stripe unit is too large
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: [patch 08/10 v3] raid5: make_request use batch stripe release
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [patch 03/10 v3] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/2]raid5: lockless access raid5 overrided bi_phys_segments
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/2]raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 08/10 v3] raid5: make_request use batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: raid10 make_request failure during iozone benchmark upon btrfs
- From: Kerin Millar <kerframil@xxxxxxxxx>
- Re: raid10 make_request failure during iozone benchmark upon btrfs
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 10/10 v3] raid5: create multiple threads to handle stripes
- From: NeilBrown <neilb@xxxxxxx>
- raid10 make_request failure during iozone benchmark upon btrfs
- From: Kerin Millar <kerframil@xxxxxxxxx>
- Re: [patch 09/10 v3] raid5: raid5d handle stripe in batch way
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 08/10 v3] raid5: make_request use batch stripe release
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- [patch 3/3 v3] raid1: prevent merging too large request
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/3 v3] raid1: read balance chooses idlest disk for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/3 v3] raid1: make sequential read detection per disk based
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 0/3 v3] Optimize raid1 read balance for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 07/10 v3] md: personality can provide unplug private data
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 06/10 v3] raid5: reduce chance release_stripe() taking device_lock
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 02/10 v3] raid5: delayed stripe fix
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 03/10 v3] raid5: add a per-stripe lock
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 02/10 v3] raid5: delayed stripe fix
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 02/10 v3] raid5: delayed stripe fix
- From: NeilBrown <neilb@xxxxxxx>
- Re: Resync Every Sunday
- From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
- Re: Resync Every Sunday
- From: Larkin Lowrey <llowrey@xxxxxxxxxxxxxxxxx>
- Re: Resync Every Sunday
- From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
- Re: Question about raid5 disk recovery logic
- From: NeilBrown <neilb@xxxxxxx>
- Re: Resync Every Sunday
- From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
- Re: Resync Every Sunday
- From: Keith Keller <kkeller@xxxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: Question about raid5 disk recovery logic
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: Resync Every Sunday
- From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
- Re: Resync Every Sunday
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: Resync Every Sunday
- From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
- Resync Every Sunday
- From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
- Re: Question about raid5 disk recovery logic
- From: NeilBrown <neilb@xxxxxxx>
- Question about raid5 disk recovery logic
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: Robert Buchholz <robert.buchholz@xxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Stacked array data recovery
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Can't expand linear RAID on top of 2 x RAID1
- From: Adam Goryachev <mailinglists@xxxxxxxxxxxxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Fatal crash/hang in scsi_lib after RAID disk failure
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Krzysztof Adamski <k@xxxxxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: [patch 01/10 v3] raid5: use wake_up_all for overlap waking
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 01/10 v3] raid5: use wake_up_all for overlap waking
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 0/4 v2] optimize raid1 read balance for SSD
- From: David Brown <david.brown@xxxxxxxxxxxx>
- Re: how to get 'peer disk' in raid configuration?
- From: George Shuklin <george.shuklin@xxxxxxxxx>
- Re: [patch 0/4 v2] optimize raid1 read balance for SSD
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 0/4 v2] optimize raid1 read balance for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 0/4 v2] optimize raid1 read balance for SSD
- From: Roberto Spadim <roberto@xxxxxxxxxxxxx>
- Re: [patch 0/4 v2] optimize raid1 read balance for SSD
- From: NeilBrown <neilb@xxxxxxx>
- Re: how to get 'peer disk' in raid configuration?
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH 0/2] Modify read error handle for RAID-4,5,6.
- From: NeilBrown <neilb@xxxxxxx>
- how to get 'peer disk' in raid configuration?
- From: George Shuklin <george.shuklin@xxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: Jose Manuel dos Santos Calhariz <jose.spam@xxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [PATCH 0/2] Modify read error handle for RAID-4,5,6.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH 0/2] Modify read error handle for RAID-4,5,6.
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH V1] md:Fix name of raid thread when raid takeovered.
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] md/raid5:Add "BlockedBadBlocks" flag when waitting rdev to be unlocked.
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] md/raid5:Choose to replacing or recoverying when raid degraded and had a want_replacement disk at the same time.
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] md/raid5:Add "BlockedBadBlocks" flag when waitting rdev to be unlocked.
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] md/raid5:Not add data_offset when call is_badblock in chunk_aligned_read().
- From: NeilBrown <neilb@xxxxxxx>
- Re: md:Fix a bug in function badblocks_show().
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] md:Add blk_plug in sync_thread.
- From: NeilBrown <neilb@xxxxxxx>
- Re: [PATCH] md/raid5:When exec md_wait_for_blocked_rdev in ops_run_io,we must atomic_inc(&rdev->nr_pending).
- From: NeilBrown <neilb@xxxxxxx>
- Re: A disk failure during the initial resync after create, does not always suspend the resync to start the recovery
- From: NeilBrown <neilb@xxxxxxx>
- Freeze with cryptsetup->mdadm->lvm2->xfs(->nfs) get_active_stripe?
- From: Stevie Trujillo <stevie.trujillo@xxxxxxxxx>
- A disk failure during the initial resync after create, does not always suspend the resync to start the recovery
- From: Ralph Berrett <ralph.berrett@xxxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: recover linear mode raid howto
- Re: MD Raid10 recovery results in "attempt to access beyond end of device"
- From: Christian Balzer <chibi@xxxxxxx>
- Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: John Crisp <john@xxxxxxxxxxxxxx>
- Re: Hi! Strange issue with LSR -- bitmaps hadn't been used during 2 of 3 RAIDs resync
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Hi! Strange issue with LSR -- bitmaps hadn't been used during 2 of 3 RAIDs resync
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: mkfs.xfs states log stripe unit is too large
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- recover linear mode raid howto
- From: Костырев Александр Алексеевич <a.kostyrev@xxxxxxxxxx>
- Re: Hi! Strange issue with LSR -- bitmaps hadn't been used during 2 of 3 RAIDs resync
- From: NeilBrown <neilb@xxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: resizing array devices not working?
- From: NeilBrown <neilb@xxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Help with the loss of a software raid (5)
- From: NeilBrown <neilb@xxxxxxx>
- Re: All spares in Raid 5, default chunk? version of mdadm significant?
- From: NeilBrown <neilb@xxxxxxx>
- kernel 3.4.3 + e2fsprogs 1.42 + hdparm-9.39 : Raid-1 : complete data loss
- From: Manfred_Knick <Manfred.Knick@xxxxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: John Crisp <john@xxxxxxxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Rudy Zijlstra <rudy@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: David Brown <david.brown@xxxxxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Roman Mamedov <rm@xxxxxxxxxx>
- Help with the loss of a software raid (5)
- From: "Matthias Herrmanny" <Matthias.Herrmanny@xxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: Jose Manuel dos Santos Calhariz <jose.spam@xxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- [patch 10/10 v3] raid5: create multiple threads to handle stripes
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 09/10 v3] raid5: raid5d handle stripe in batch way
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 08/10 v3] raid5: make_request use batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 07/10 v3] md: personality can provide unplug private data
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 06/10 v3] raid5: reduce chance release_stripe() taking device_lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 05/10 v3] raid5: remove some device_lock locking places
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 04/10 v3] raid5: lockless access raid5 overrided bi_phys_segments
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 03/10 v3] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 02/10 v3] raid5: delayed stripe fix
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 01/10 v3] raid5: use wake_up_all for overlap waking
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 00/10 v3] raid5: improve write performance for fast storage
- From: Shaohua Li <shli@xxxxxxxxxx>
- What was the default chunk size in previous versions of mdadm? Is there a way to set data/super offset? Trying to recreate a raid 5 md with all spares
- From: Anshuman Aggarwal <anshuman.aggarwal@xxxxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: Christian Balzer <chibi@xxxxxxx>
- Re: "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: NeilBrown <neilb@xxxxxxx>
- Re: All spares in Raid 5, default chunk? version of mdadm significant?
- From: Anshuman Aggarwal <anshuman.aggarwal@xxxxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: NeilBrown <neilb@xxxxxxx>
- Re: Data Offset
- From: NeilBrown <neilb@xxxxxxx>
- Re: Recalculating the --size parameter when recovering a failed array
- From: NeilBrown <neilb@xxxxxxx>
- Re: MD Raid10 recovery results in "attempt to access beyond end of device"
- From: Christian Balzer <chibi@xxxxxxx>
- Re: MD Raid10 recovery results in "attempt to access beyond end of device"
- From: NeilBrown <neilb@xxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Can't expand linear RAID on top of 2 x RAID1
- From: NeilBrown <neilb@xxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: NeilBrown <neilb@xxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: Jose Manuel dos Santos Calhariz <jose.spam@xxxxxxxxxxx>
- Re: Can't expand linear RAID on top of 2 x RAID1
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: NeilBrown <neilb@xxxxxxx>
- Re: Can't expand linear RAID on top of 2 x RAID1
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: MD Raid10 recovery results in "attempt to access beyond end of device"
- From: Christian Balzer <chibi@xxxxxxx>
- Re: Can't expand linear RAID on top of 2 x RAID1
- From: Adam Goryachev <mailinglists@xxxxxxxxxxxxxxxxxxxxxx>
- Re: Can't expand linear RAID on top of 2 x RAID1
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- All spares in Raid 5, default chunk? version of mdadm significant?
- From: Anshuman Aggarwal <anshuman.aggarwal@xxxxxxxxx>
- linux-image-2.6.32-5-686: kernel BUG at ... build/source_i386_none/drivers/md/raid5.c:2764!
- From: Jose Manuel dos Santos Calhariz <jose.spam@xxxxxxxxxxx>
- Re: Script to save array info
- From: Jose Manuel dos Santos Calhariz <jose.spam@xxxxxxxxxxx>
- Can't expand linear RAID on top of 2 x RAID1
- From: Adam Goryachev <adam@xxxxxxxxxxxxxxxxxxxxxx>
- Re: MD Raid10 recovery results in "attempt to access beyond end of device"
- From: Christian Balzer <chibi@xxxxxxx>
- Re: MD Raid10 recovery results in "attempt to access beyond end of device"
- From: NeilBrown <neilb@xxxxxxx>
- MD Raid10 recovery results in "attempt to access beyond end of device"
- From: Christian Balzer <chibi@xxxxxxx>
- "mdadm: Raid level 5 not permitted with --build" -- why is that?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Stacked array data recovery
- From: Ramon Hofer <ramonhofer@xxxxxxxxxx>
- Re: Script to save array info
- From: Wakko Warner <wakko@xxxxxxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- resizing array devices not working?
- From: Phillip Susi <psusi@xxxxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: Robert Buchholz <robert.buchholz@xxxxxxxxxxxx>
- Re: Script to save array info
- From: Jose Manuel dos Santos Calhariz <jose.spam@xxxxxxxxxxx>
- Re: Find mismatch in data blocks during raid6 repair
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: [patch 8/8] raid5: create multiple threads to handle stripes
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: Mounting MD raid1 array blocked by "Stale NFS file handle"
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Script to save array info
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: [patch 3/9 v2] raid5: lockless access raid5 overrided bi_phys_segments
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [patch 5/9 v2] raid5: reduce chance release_stripe() taking device_lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 5/9 v2] raid5: reduce chance release_stripe() taking device_lock
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Script to save array info
- From: Wakko Warner <wakko@xxxxxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: NeilBrown <neilb@xxxxxxx>
- Find mismatch in data blocks during raid6 repair
- From: Robert Buchholz <robert.buchholz@xxxxxxxxxxxx>
- Re: Data Offset
- From: Pierre Beck <mail@xxxxxxxxxxxxxx>
- Mounting MD raid1 array blocked by "Stale NFS file handle"
- From: Skip Coombe <skipcoombe@xxxxxxxxx>
- Re: Data Offset
- From: freeone3000 <freeone3000@xxxxxxxxx>
- Re: Hi! Strange issue with LSR -- bitmaps hadn't been used during 2 of 3 RAIDs resync
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Recalculating the --size parameter when recovering a failed array
- From: Tim Nufire <linux-raid_tim@xxxxxxxxx>
- Re: Thank you Neil
- From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
- Re: very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
- From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
- Creating recovery IMSM array
- From: Jérôme Poulin <jeromepoulin@xxxxxxxxx>
- [patch 9/9 v2] raid5: create multiple threads to handle stripes
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 8/9 v2] raid5: raid5d handle stripe in batch way
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 7/9 v2] raid5: make_request use batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 6/9 v2] md: personality can provide unplug private data
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 5/9 v2] raid5: reduce chance release_stripe() taking device_lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 4/9 v2] raid5: remove some device_lock locking places
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 3/9 v2] raid5: lockless access raid5 overrided bi_phys_segments
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/9 v2] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/9 v2] raid5: use wake_up_all for overlap waking
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 0/9 v2] raid5: improve write performance for fast storage
- From: Shaohua Li <shli@xxxxxxxxxx>
- RE: Thank you Neil
- From: Bobby Kent <bpkent@xxxxxxxxxxxxxxxxxxxx>
- Re: very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
- From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
- Re: Thank you Neil
- From: Marcus Sorensen <shadowsor@xxxxxxxxx>
- Thank you Neil
- From: Jan Ceuleers <jan.ceuleers@xxxxxxxxxxxx>
- Re: very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- [PATCH] md/raid5:Add "BlockedBadBlocks" flag when waitting rdev to be unlocked.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: How to activate a spare?
- From: Roberto Leibman <roberto@xxxxxxxxxxx>
- Re: Proposal for metadata version 2.x :)
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: Martin Ziler <martin.ziler@xxxxxxxxxxxxxx>
- Re: Proposal for metadata version 2.x :)
- From: Phil Turmel <philip@xxxxxxxxxx>
- Proposal for metadata version 2.x :)
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: How to activate a spare?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Recalculating the --size parameter when recovering a failed array
- From: NeilBrown <neilb@xxxxxxx>
- Nasty md/raid bug in 3.2.{14,15,16} and 3.3.{1,2,3}
- From: NeilBrown <neilb@xxxxxxx>
- Recalculating the --size parameter when recovering a failed array
- From: Tim Nufire <linux-raid_tim@xxxxxxxxx>
- How to activate a spare?
- From: Roberto Leibman <roberto@xxxxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: NeilBrown <neilb@xxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: Oliver Schinagl <oliver+list@xxxxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: NeilBrown <neilb@xxxxxxx>
- Re: BBU + Writeback Controller suggestions please?
- From: "John Stoffel" <john@xxxxxxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Junio C Hamano <gitster@xxxxxxxxx>
- BBU + Writeback Controller suggestions please?
- From: Ed W <lists@xxxxxxxxxxxxxx>
- Re: Check after raid6 failure
- From: "Kurt Schmitt" <kurt_schmitt@xxxxxx>
- Re: Check after raid6 failure
- From: NeilBrown <neilb@xxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Tomas Carnecky <tomas.carnecky@xxxxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Fengguang Wu <wfg@xxxxxxxxxxxxxxx>
- Check after raid6 failure
- From: "Kurt Schmitt" <kurt_schmitt@xxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Jens Axboe <axboe@xxxxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Fengguang Wu <wfg@xxxxxxxxxxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Fengguang Wu <wfg@xxxxxxxxxxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Jens Axboe <axboe@xxxxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Fengguang Wu <wfg@xxxxxxxxxxxxxxx>
- Re: drivers/block/cpqarray.c:938:2: error: too many arguments to function ‘blk_rq_map_sg’
- From: Jens Axboe <axboe@xxxxxxxxx>
- Re: Data Offset
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: Data Offset
- From: Pierre Beck <mail@xxxxxxxxxxxxxx>
- Re: Data Offset
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: Data Offset
- From: Pierre Beck <mail@xxxxxxxxxxxxxx>
- Re: metadata versions: 0.90 vs 1.2
- From: Emmanuel Noobadmin <centos.admin@xxxxxxxxx>
- Re: metadata versions: 0.90 vs 1.2
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: metadata versions: 0.90 vs 1.2
- From: Emmanuel Noobadmin <centos.admin@xxxxxxxxx>
- Re: Data Offset
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: Data Offset
- From: Pierre Beck <mail@xxxxxxxxxxxxxx>
- Re: metadata versions: 0.90 vs 1.2
- From: David Brown <david.brown@xxxxxxxxxxxx>
- [patch 3/3 v3] raid10: percpu dispatch for write request if bitmap supported
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/3 v3] raid1: percpu dispatch for write request if bitmap supported
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/3 v3] MD: add a specific workqueue to do dispatch
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 0/3 v3] MD: improve raid1/10 write performance for fast storage
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 4/4 v2] raid1: split large request for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 3/4 v2] raid1: read balance chooses idlest disk
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 2/4 v2] raid1: make sequential read detection per disk based
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 1/4 v2] raid1: move distance based read balance to a separate function
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 0/4 v2] optimize raid1 read balance for SSD
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: metadata versions: 0.90 vs 1.2
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- metadata versions: 0.90 vs 1.2
- From: plug bert <plugbert@xxxxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [patch 8/8] raid5: create multiple threads to handle stripes
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
- From: Iordan Iordanov <iordan@xxxxxxxxxxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [patch 2/8] raid5: lockless access raid5 overrided bi_phys_segments
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- On mdadm 3.2 and bad-block-log
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Re: Re: md:Fix a bug in function badblocks_show().
- From: kedacomkernel <kedacomkernel@xxxxxxxxx>
- [PATCH] md/raid5:Not add data_offset when call is_badblock in chunk_aligned_read().
- From: majianpeng <majianpeng@xxxxxxxxx>
- blk/dm: Kernel crash on 3.5-rc2
- From: Sasha Levin <levinsasha928@xxxxxxxxx>
- Re: question about RAID10 near and far layouts
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: [PATCH] md:Add blk_plug in sync_thread.
- From: Nagilum <nagilum@xxxxxxxxxxx>
- Re: Re: [PATCH] md:Add blk_plug in sync_thread.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH] md:Add blk_plug in sync_thread.
- From: Nagilum <nagilum@xxxxxxxxxxx>
- [PATCH] md:Add blk_plug in sync_thread.
- From: majianpeng <majianpeng@xxxxxxxxx>
- [PATCH] md/raid5:When exec md_wait_for_blocked_rdev in ops_run_io,we must atomic_inc(&rdev->nr_pending).
- From: majianpeng <majianpeng@xxxxxxxxx>
- question about RAID10 near and far layouts
- From: plug bert <plugbert@xxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: NeilBrown <neilb@xxxxxxx>
- Re: Sync does not flush to disk!?
- From: "Ted Ts'o" <tytso@xxxxxxx>
- Re: Sync does not flush to disk!?
- From: Jan Kara <jack@xxxxxxx>
- Re: Sync does not flush to disk!?
- From: Jan Kara <jack@xxxxxxx>
- Re: Sync does not flush to disk!?
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Re: Sync does not flush to disk!?
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: Sync does not flush to disk!?
- From: NeilBrown <neilb@xxxxxxx>
- Re: Sync does not flush to disk!?
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Sync does not flush to disk!?
- From: Asdo <asdo@xxxxxxxxxxxxx>
- Thanks (was Re: Problem with patch...)
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: [patch 6/8] raid5: make_request use batch stripe release
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 6/8] raid5: make_request use batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: NeilBrown <neilb@xxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: Oliver Schinagl <oliver+list@xxxxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: NeilBrown <neilb@xxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Joe Landman <joe.landman@xxxxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: NeilBrown <neilb@xxxxxxx>
- degraded raid 6 (1 bad drive) showing up inactive, only spares
- From: Martin Ziler <martin.ziler@xxxxxxxxxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: NeilBrown <neilb@xxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: NeilBrown <neilb@xxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: [patch 6/8] raid5: make_request use batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 7/8] raid5: raid5d handle stripe in batch way
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 6/8] raid5: make_request use batch stripe release
- From: NeilBrown <neilb@xxxxxxx>
- Re: Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 8/8] raid5: create multiple threads to handle stripes
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 7/8] raid5: raid5d handle stripe in batch way
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 6/8] raid5: make_request use batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: Shaohua Li <shli@xxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: [patch 8/8] raid5: create multiple threads to handle stripes
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 7/8] raid5: raid5d handle stripe in batch way
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 6/8] raid5: make_request use batch stripe release
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 2/8] raid5: lockless access raid5 overrided bi_phys_segments
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 1/8] raid5: add a per-stripe lock
- From: NeilBrown <neilb@xxxxxxx>
- Re: [patch 4/8] raid5: reduce chance release_stripe() taking device_lock
- From: NeilBrown <neilb@xxxxxxx>
- Re: RAID5 with two drive sizes question
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: pg@xxxxxxxxxxxxxxxxxxx (Peter Grandi)
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: pg@xxxxxxxxxxxxxxxxxxxx (Peter Grandi)
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Problem with patch: "reject a re-add request that cannot be honoured" (commit bedd86b7773fd97f0d708cc0c371c8963ba7ba9a)
- From: Alexander Lyakas <alex.bolshoy@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Brad Campbell <lists2009@xxxxxxxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Re: [PATCH] md/raid5:Choose to replacing or recoverying when raid degraded and had a want_replacement disk at the same time.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: RAID5 with two drive sizes question
- From: Roman Mamedov <rm@xxxxxxxxxx>
- Re: [PATCH] md/raid5:Choose to replacing or recoverying when raid degraded and had a want_replacement disk at the same time.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Marcus Sorensen <shadowsor@xxxxxxxxx>
- Re: Re: [PATCH] md/raid5:Choose to replacing or recoverying when raid degraded and had a want_replacement disk at the same time.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Brad Campbell <lists2009@xxxxxxxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: [PATCH] md/raid5:Choose to replacing or recoverying when raid degraded and had a want_replacement disk at the same time.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Failed drive in raid6 while doing data-check
- From: NeilBrown <neilb@xxxxxxx>
- [PULL REQUEST] md fixes for 3.5-rc
- From: NeilBrown <neilb@xxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: RAID5 with two drive sizes question
- From: "Joachim Otahal (privat)" <Jou@xxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: RAID5 with two drive sizes question
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: RAID5 with two drive sizes question
- From: Roman Mamedov <rm@xxxxxxxxxx>
- Re: RAID5 with two drive sizes question
- From: "Joachim Otahal (privat)" <Jou@xxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: RAID5 with two drive sizes question
- From: Roman Mamedov <rm@xxxxxxxxxx>
- RAID5 with two drive sizes question
- From: "Joachim Otahal (privat)" <Jou@xxxxxxx>
- Re: Failed drive in raid6 while doing data-check
- From: Krzysztof Adamski <k@xxxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: pg@xxxxxxxxxxxxxxxxxxxx (Peter Grandi)
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: pg@xxxxxxxxxxxxxxxxxxxx (Peter Grandi)
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- [PATCH] md/raid5:Choose to replacing or recoverying when raid degraded and had a want_replacement disk at the same time.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH V1] md:Fix name of raid thread when raid takeovered.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Re: [PATCH V1] md:Fix name of raid thread when raid takeovered.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: [PATCH V1] md:Fix name of raid thread when raid takeovered.
- From: NeilBrown <neilb@xxxxxxx>
- Re: Data Offset
- From: NeilBrown <neilb@xxxxxxx>
- [PATCH V1] md:Fix name of raid thread when raid takeovered.
- From: majianpeng <majianpeng@xxxxxxxxx>
- Re: Data Offset
- From: freeone3000 <freeone3000@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- Re: Failed drive in raid6 while doing data-check
- From: NeilBrown <neilb@xxxxxxx>
- Re: Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Joe Landman <joe.landman@xxxxxxxxx>
- Software RAID checksum performance on 24 disks not even close to kernel reported
- From: Ole Tange <ole@xxxxxxxx>
- Re: Data Offset
- From: NeilBrown <neilb@xxxxxxx>
- Re: Data Offset
- From: Pierre Beck <mail@xxxxxxxxxxxxxx>
- Re: Failed drive in raid6 while doing data-check
- From: Krzysztof Adamski <k@xxxxxxxxxxx>
- Re: Can extremely high load cause disks to be kicked?
- From: Igor M Podlesny <for.poige+lsr@xxxxxxxxx>
- [patch 8/8] raid5: create multiple threads to handle stripes
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 7/8] raid5: raid5d handle stripe in batch way
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 6/8] raid5: make_request use batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 5/8] raid5: add batch stripe release
- From: Shaohua Li <shli@xxxxxxxxxx>
- [patch 4/8] raid5: reduce chance release_stripe() taking device_lock
- From: Shaohua Li <shli@xxxxxxxxxx>