Linux RAID Storage Date Index
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: James Bottomley <James.Bottomley@xxxxxxx>
- Re: [dm-devel] [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Sergey Vlasov <vsu@xxxxxxxxxxx>
- Re: [dm-devel] [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: [RFC PATCHSET block#for-2.6.36-post] block: convert to REQ_FLUSH/FUA
- From: Christoph Hellwig <hch@xxxxxx>
- OT grammar nit Re: [PATCH] block: simplify queue_next_fseq
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Mike Snitzer <snitzer@xxxxxxxxxx>
- [PATCH] block: simplify queue_next_fseq
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Jens Axboe <jaxboe@xxxxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH v2] BLOCK: fix bio.bi_rw handling
- From: Jens Axboe <axboe@xxxxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Jens Axboe <jaxboe@xxxxxxxxxxxx>
- Re: mdadm create problem with existing bitmap file
- From: Neil Brown <neilb@xxxxxxx>
- raid hung due to a drive failure during reshape
- From: Anssi Hannula <anssi.hannula@xxxxxx>
- Re: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "K. Posern" <quickhelp@xxxxxxxxx>
- RE: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: gabe peters <gabe.peters@xxxxxxxxx>
- mdadm create problem with existing bitmap file
- From: Juan Aristizabal <jaristizabal@xxxxxxxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Chris Mason <chris.mason@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Ric Wheeler <rwheeler@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Kiyoshi Ueda <k-ueda@xxxxxxxxxxxxx>
- Re: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "K. Posern" <quickhelp@xxxxxxxxx>
- RE: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "Jiang, Dave" <dave.jiang@xxxxxxxxx>
- RE: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "Jiang, Dave" <dave.jiang@xxxxxxxxx>
- Re: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "K. Posern" <quickhelp@xxxxxxxxx>
- RE: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "Jiang, Dave" <dave.jiang@xxxxxxxxx>
- Re: question on how to update the super-minor
- From: Neil Brown <neilb@xxxxxxx>
- question on how to update the super-minor
- From: Joe Landman <landman@xxxxxxxxxxxxxxxxxxxxxxx>
- Re: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "K. Posern" <quickhelp@xxxxxxxxx>
- RE: getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "Jiang, Dave" <dave.jiang@xxxxxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 3/5] lguest: replace VIRTIO_F_BARRIER support with VIRTIO_F_FLUSH support
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 2/5 UPDATED] virtio_blk: drop REQ_HARDBARRIER support
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: James Bottomley <James.Bottomley@xxxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Kiyoshi Ueda <k-ueda@xxxxxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [dm-devel] [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- getting a linux boot loader (preferably grub) installed on an intel imsm raid
- From: "K. Posern" <quickhelp@xxxxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Jens Axboe <jaxboe@xxxxxxxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: James Bottomley <James.Bottomley@xxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Jens Axboe <jaxboe@xxxxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Vladislav Bolkhovitin <vst@xxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Vladislav Bolkhovitin <vst@xxxxxxxx>
- Re: intel fakeraid (imsm) linux kernel support
- From: "K. Posern" <quickhelp@xxxxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: James Bottomley <James.Bottomley@xxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: James Bottomley <James.Bottomley@xxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- Re: [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <teheo@xxxxxxx>
- Re: the true behavior of mdadm's raid-1 with regard to vertical parity and silent error detection/scrubbing - confirmation or feature request
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: the true behavior of mdadm's raid-1 with regard to vertical parity and silent error detection/scrubbing - confirmation or feature request
- From: Michael Tokarev <mjt@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 08/11] block: rename barrier/ordered to flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PULL REQUEST] a few fixes for md.
- From: Neil Brown <neilb@xxxxxxx>
- the true behavior of mdadm's raid-1 with regard to vertical parity and silent error detection/scrubbing - confirmation or feature request
- From: "Brett L. Trotter" <brett@xxxxxxxxxx>
- Re: mdadm: too-old timestamp on backup-metadata
- From: William Heaton <acroporas@xxxxxxxxx>
- Re: mdadm: too-old timestamp on backup-metadata
- From: Neil Brown <neilb@xxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
- Re: [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- [BUG raid1] kernel BUG at drivers/scsi/scsi_lib.c:1113
- From: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
- Re: reboot during reshape: superblock incorrect, cannot assemble
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: reboot during reshape: superblock incorrect, cannot assemble
- From: Joris <joris@xxxxx>
- Re: reboot during reshape: superblock incorrect, cannot assemble
- From: Joris <joris@xxxxx>
- Re: reboot during reshape: superblock incorrect, cannot assemble
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- reboot during reshape: superblock incorrect, cannot assemble
- From: Joris <joris@xxxxx>
- mdadm: too-old timestamp on backup-metadata
- From: William Heaton <acroporas@xxxxxxxxx>
- RE: [mdadm git pull] "--assemble --scan" support for imsm
- From: "Jiang, Dave" <dave.jiang@xxxxxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Mike Snitzer <snitzer@xxxxxxxxxx>
- Re: intel fakeraid (imsm) linux kernel support
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [PATCH 08/11] block: rename barrier/ordered to flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [mdadm git pull] "--assemble --scan" support for imsm
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 08/11] block: rename barrier/ordered to flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- intel fakeraid (imsm) linux kernel support
- From: "K. Posern" <quickhelp@xxxxxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Mike Snitzer <snitzer@xxxxxxxxxx>
- mdadm -C failure with 3.1.3, but /proc/mdstat reports success
- From: "fibreraid@xxxxxxxxx" <fibreraid@xxxxxxxxx>
- Re: [PATCH 08/11] block: rename barrier/ordered to flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- [patch 1/6] md: remove dependency on __GFP_NOFAIL
- From: David Rientjes <rientjes@xxxxxxxxxx>
- Re: [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
- Re: [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Mike Snitzer <snitzer@xxxxxxxxxx>
- Re: [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH 5/5] dm: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 1/5] block/loop: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 3/5] lguest: replace VIRTIO_F_BARRIER support with VIRTIO_F_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- [RFC PATCHSET block#for-2.6.36-post] block: convert to REQ_FLUSH/FUA
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 2/5] virtio_blk: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 4/5] md: implement REQ_FLUSH/FUA support
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH UPDATED 10/11] fs, block: propagate REQ_FLUSH/FUA interface to upper layers
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Nicolas Jungers <nicolas@xxxxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tor Arne Vestbø <torarnv@xxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tor Arne Vestbø <torarnv@xxxxxxxxx>
- [PATCH 34/35] Incremental for bare disks, checking routine + integration
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 35/35] Fix problem in mdmon monitor of using removed disk from imsm container.
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 33/35] update udev rules to support --path parameter with remove action
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 32/35] extension of IncrementalRemove to store location (port) of removed device
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 31/35] added --path <path_id> to give the information on the 'path-id' of removed device
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 30/35] Man pages update with DOMAIN line description.
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 28/35] Monitor: added spare sharing and dev suitable functions
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 29/35] Monitor: autorebuild functionality added
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 27/35] Monitor: added function move_spare
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 26/35] Monitor: get array domain and subset function added
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 25/35] Monitor: fill devstate of containers based on supertype
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 24/35] imsm: create mdinfo list of disks in a container from supertype
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 23/35] Monitor: link container-volumes in statelist
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 22/35] mdadm: added --no-sharing parameter for Monitor mode
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 21/35] Monitor: removed spare-group based spare sharing code
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 20/35] Monitor: set err on arrays not in mdstat
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 19/35] Util: get device size from id
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 18/35] Assemble: assembly with domains - two runs for imsm spares
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 17/35] Removed uuid setting for imsm spares
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 16/35] test code for loop device support added
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 15/35] additional environment dependent code for platform subset tests
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 14/35] incremental: add domain/subset support
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 13/35] manage: domains support in Manage_subdev
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 12/35] create: respect domains/subsets during create process
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 11/35] assembly: use domain/subset from configuration file in assembly process
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 08/35] imsm: platform dependent domain boundaries
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 07/35] add general domain/subset lists manipulation routines
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 06/35] Updates to udev rules and ReadMe.c for incremental --grab support
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 10/35] update domain search to new structures, added subset search
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 09/35] processing of domain entries made after config is loaded
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 05/35] Partition action support in DOMAIN line
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 04/35] Support for new disk hot plug actions with DOMAINs.
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 03/35] Config option parsing for new DOMAIN line support
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 02/35] Few fixes and sample udev rules file to capture block devices
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 01/35] [hotunplug] we are testing mdstat, not ent which is undefined at this
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- [PATCH 0/35] Autorebuild updated
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tim Small <tim@xxxxxxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: Neil Brown <neilb@xxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Nicolas Jungers <nicolas@xxxxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tor Arne Vestbø <torarnv@xxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tor Arne Vestbø <torarnv@xxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Nicolas Jungers <nicolas@xxxxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tor Arne Vestbø <torarnv@xxxxxxxxx>
- Re: RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tor Arne Vestbø <torarnv@xxxxxxxxx>
- RAID5 disk failure during rebuild of spare, any chance of recovery when one of the failed devices is suspected to be intact?
- From: Tor Arne Vestbø <torarnv@xxxxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: "fibreraid@xxxxxxxxx" <fibreraid@xxxxxxxxx>
- Pro-active replacement
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Problem with raid in QNAP TS-639Pro
- From: Pavel Pilný <pavel@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 02/11] block: kill QUEUE_ORDERED_BY_TAG
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 02/11] block: kill QUEUE_ORDERED_BY_TAG
- From: Vladislav Bolkhovitin <vst@xxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Vladislav Bolkhovitin <vst@xxxxxxxx>
- Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 10/11] fs, block: propagate REQ_FLUSH/FUA interface to upper layers
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH 10/11] fs, block: propagate REQ_FLUSH/FUA interface to upper layers
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 10/11] fs, block: propagate REQ_FLUSH/FUA interface to upper layers
- From: Jan Kara <jack@xxxxxxx>
- Boot md/udev event storm
- From: Phil Turmel <philip@xxxxxxxxxx>
- Re: [PATCH v2] BLOCK: fix bio.bi_rw handling
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- Re: [PATCH v2] BLOCK: fix bio.bi_rw handling
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH v2] MD: raid, fix BUG caused by flags handling
- From: Christoph Hellwig <hch@xxxxxx>
- Re: [PATCH v2] SCSI: fix bio.bi_rw handling
- From: Christoph Hellwig <hch@xxxxxx>
- RE: A policy framework for mdadm (incorporating domains and hotplug and such)
- From: "Labun, Marcin" <Marcin.Labun@xxxxxxxxx>
- Re: [PATCH v2] BLOCK: fix bio.bi_rw handling
- From: Jeff Moyer <jmoyer@xxxxxxxxxx>
- Re: [PATCH v2] SCSI: fix bio.bi_rw handling
- From: Jeff Moyer <jmoyer@xxxxxxxxxx>
- Re: [PATCH v2] MD: raid, fix BUG caused by flags handling
- From: Jeff Moyer <jmoyer@xxxxxxxxxx>
- [PATCH 05/11] block: misc cleanups in barrier code
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 01/11] block/loop: queue ordered mode should be DRAIN_FLUSH
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 09/11] block: implement REQ_FLUSH/FUA based interface for FLUSH/FUA requests
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 07/11] block: rename blk-barrier.c to blk-flush.c
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 02/11] block: kill QUEUE_ORDERED_BY_TAG
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 08/11] block: rename barrier/ordered to flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 10/11] fs, block: propagate REQ_FLUSH/FUA interface to upper layers
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCHSET block#for-2.6.36-post] block: replace barrier with sequenced flush
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 11/11] block: use REQ_FLUSH in blkdev_issue_flush()
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 03/11] block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush()
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 04/11] block: remove spurious uses of REQ_HARDBARRIER
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 06/11] block: drop barrier ordering by queue draining
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH v2] BLOCK: fix bio.bi_rw handling
- From: Jiri Slaby <jslaby@xxxxxxx>
- [PATCH v2] MD: raid, fix BUG caused by flags handling
- From: Jiri Slaby <jslaby@xxxxxxx>
- [PATCH v2] SCSI: fix bio.bi_rw handling
- From: Jiri Slaby <jslaby@xxxxxxx>
- Re: [PATCH] MD: raid1, fix BUG caused by flags handling
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- [PATCH] MD: raid1, fix BUG caused by flags handling
- From: Jiri Slaby <jslaby@xxxxxxx>
- Re: weird SparesMissing event
- From: Tobias Gunkel <tobias.gunkel@xxxxxxxxx>
- Re: [mdadm PATCH 0/2] two 3.1.3 regression fixes (incremental assembly)
- From: Neil Brown <neilb@xxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: Neil Brown <neilb@xxxxxxx>
- Re: [mdadm PATCH 0/2] two 3.1.3 regression fixes (incremental assembly)
- From: Neil Brown <neilb@xxxxxxx>
- Re: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Neil Brown <neilb@xxxxxxx>
- Re: Fw: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Stefan /*St0fF*/ Hübner <stefan.huebner@xxxxxxxxxxxxxxxxxx>
- Re: [REVISED PULL REQUEST] md updates for 2.6.36
- From: Neil Brown <neilb@xxxxxxx>
- Re: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Jon Hardcastle <jd_hardcastle@xxxxxxxxx>
- Re: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Jon Hardcastle <jd_hardcastle@xxxxxxxxx>
- Re: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Neil Brown <neilb@xxxxxxx>
- Re: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Jon Hardcastle <jd_hardcastle@xxxxxxxxx>
- Re: weird SparesMissing event
- From: Neil Brown <neilb@xxxxxxx>
- Re: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Neil Brown <neilb@xxxxxxx>
- Sorry if Spamming! - sdc1 does not have a valid v0.90 superblock, not importing!
- From: Jon Hardcastle <jd_hardcastle@xxxxxxxxx>
- Re: weird SparesMissing event
- From: Tobias Gunkel <tobias.gunkel@xxxxxxxxx>
- Re: debian dist-upgrade etch -> squeeze broke my mdadm RAID1
- From: Tim Small <tim@xxxxxxxxxxx>
- Re: weird SparesMissing event
- From: Neil Brown <neilb@xxxxxxx>
- weird SparesMissing event
- From: Tobias Gunkel <tobias.gunkel@xxxxxxxxx>
- debian dist-upgrade etch -> squeeze broke my mdadm RAID1
- From: Doug <doug.duboulay@xxxxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [mdadm PATCH 0/2] two 3.1.3 regression fixes (incremental assembly)
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [REVISED PULL REQUEST] md updates for 2.6.36
- From: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
- Re: [REVISED PULL REQUEST] md updates for 2.6.36
- From: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
- Re: Fw: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Jon Hardcastle <jd_hardcastle@xxxxxxxxx>
- Fw: sdc1 does not have a valid v0.90 superblock, not importing!
- From: Jon Hardcastle <jd_hardcastle@xxxxxxxxx>
- [mdadm PATCH 2/2] Incremental: accept '--no-degraded' as a deprecated option
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- [mdadm PATCH 1/2] Incremental: return success in 'container not enough' case
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- [mdadm PATCH 0/2] two 3.1.3 regression fixes (incremental assembly)
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- [REVISED PULL REQUEST] md updates for 2.6.36
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH REPOST RFC] relaxed barriers
- From: Tejun Heo <teheo@xxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: "fibreraid@xxxxxxxxx" <fibreraid@xxxxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: Neil Brown <neilb@xxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: Neil Brown <neilb@xxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: "fibreraid@xxxxxxxxx" <fibreraid@xxxxxxxxx>
- Re: Problem regarding RAID10 on kernel 2.6.31
- From: Neil Brown <neilb@xxxxxxx>
- Re: Problem regarding RAID10 on kernel 2.6.31
- From: ravichandra <vmynidi@xxxxxxxxxxxxxxxxxx>
- Re: [PATCH] md: move revalidate_disk() back outside open_mutex
- From: Neil Brown <neilb@xxxxxxx>
- cciss_vol_status can't decide if it has a broken array or not
- From: Tomasz Chmielewski <tch@xxxxxxxx>
- Re: [PATCH REPOST RFC] relaxed barriers
- From: Christoph Hellwig <hch@xxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: "fibreraid@xxxxxxxxx" <fibreraid@xxxxxxxxx>
- RE: --assume-clean on raid5/6
- From: <brian.foster@xxxxxxx>
- [PULL REQUEST] md updates for 2.6.36
- From: Neil Brown <neilb@xxxxxxx>
- Re: md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: Neil Brown <neilb@xxxxxxx>
- Re: --assume-clean on raid5/6
- From: Neil Brown <neilb@xxxxxxx>
- md's fail to assemble correctly consistently at system startup - mdadm 3.1.2 and Ubuntu 10.04
- From: "fibreraid@xxxxxxxxx" <fibreraid@xxxxxxxxx>
- Re: --assume-clean on raid5/6
- From: Stefan /*St0fF*/ Hübner <stefan.huebner@xxxxxxxxxxxxxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH REPOST RFC] relaxed barriers
- From: Tejun Heo <teheo@xxxxxxx>
- [PATCH] md: move revalidate_disk() back outside open_mutex
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [PATCH, RFC] relaxed barriers
- From: Christoph Hellwig <hch@xxxxxx>
- RE: raid1: prevent adding a "too recent" device to a mirror?
- From: "Dailey, Nate" <Nate.Dailey@xxxxxxxxxxx>
- Re: [PATCH, RFC] relaxed barriers
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: ANNOUNCE: mdadm 3.1.3 - A tool for managing Soft RAID under Linux
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: Problem regarding RAID10 on kernel 2.6.31
- From: Neil Brown <neilb@xxxxxxx>
- Re: ANNOUNCE: mdadm 3.1.3 - A tool for managing Soft RAID under Linux
- From: Neil Brown <neilb@xxxxxxx>
- Problem regarding RAID10 on kernel 2.6.31
- From: ravichandra <vmynidi@xxxxxxxxxxxxxxxxxx>
- Re: ANNOUNCE: mdadm 3.1.3 - A tool for managing Soft RAID under Linux
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- ANNOUNCE: mdadm 3.1.3 - A tool for managing Soft RAID under Linux
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Tao Ma <tao.ma@xxxxxxxxxx>
- --assume-clean on raid5/6
- From: <brian.foster@xxxxxxx>
- Performance impact of CONFIG_SCHED_MC? direct-io test case
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Performance impact of CONFIG_DEBUG? direct-io test case
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Jeff Moyer <jmoyer@xxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Chris Mason <chris.mason@xxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Jeff Moyer <jmoyer@xxxxxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Vladislav Bolkhovitin <vst@xxxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Jeff Moyer <jmoyer@xxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Chris Mason <chris.mason@xxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- RE: Replacing a drive in RAID 0
- From: Ben Nemec <lists@xxxxxxxxxxxx>
- Strange initramfs+udev+raid interaction creating /dev/md/*
- From: Michael Guntsche <mike@xxxxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Josef Bacik <josef@xxxxxxxxxx>
- Re: direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Chris Mason <chris.mason@xxxxxxxxxx>
- direct-io regression [Was: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs]
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- RE: Replacing a drive in RAID 0
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- Re: [PATCH, RFC 2/2] dm: support REQ_FLUSH directly
- From: "Jun'ichi Nomura" <j-nomura@xxxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Neil Brown <neilb@xxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Valdis.Kletnieks@xxxxxx
- Re: mdadm 3.1.x and bitmap chunk
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Strange initramfs+udev+raid interaction creating /dev/md/*
- From: Michael Guntsche <mike@xxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Andi Kleen <andi@xxxxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Mike Snitzer <snitzer@xxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH, RFC 2/2] dm: support REQ_FLUSH directly
- From: Christoph Hellwig <hch@xxxxxx>
- Re: How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
- How to track down abysmal performance ata - raid1 - crypto - vg/lv - xfs
- From: Dominik Brodowski <linux@xxxxxxxxxxxxxxxxxxxx>
- Re: [PATCH, RFC 2/2] dm: support REQ_FLUSH directly
- From: Kiyoshi Ueda <k-ueda@xxxxxxxxxxxxx>
- Re: mdadm 3.1.x and bitmap chunk
- From: Doug Ledford <dledford@xxxxxxxxxx>
- Re: when does a raid rebuild fail
- From: Drew <drew.kay@xxxxxxxxx>
- Re: [PATCH, RFC 2/2] dm: support REQ_FLUSH directly
- From: Christoph Hellwig <hch@xxxxxx>
- [PATCH, RFC 1/2] relaxed cache flushes
- From: Christoph Hellwig <hch@xxxxxx>
- Re: mdadm 3.1.x and bitmap chunk
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: [PATCH] coda: rename REQ_* to CODA_REQ_*
- From: Jens Axboe <jaxboe@xxxxxxxxxxxx>
- Re: [PATCH] coda: rename REQ_* to CODA_REQ_*
- From: Jan Harkes <jaharkes@xxxxxxxxxx>
- [PATCH] coda: rename REQ_* to CODA_REQ_*
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH 1/2 block#for-2.6.36] bio, fs: update RWA_MASK, READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Tejun Heo <tj@xxxxxxxxxx>
- when does a raid rebuild fail
- From: 邹勇波 <zou.yongbo@xxxxxxxxx>
- Re: Replacing a drive in RAID 0
- From: Ben Nemec <lists@xxxxxxxxxxxx>
- Re: [PATCH 1/2 block#for-2.6.36] bio, fs: update RWA_MASK, READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Jens Axboe <jaxboe@xxxxxxxxxxxx>
- Re: [PATCH 1/2 block#for-2.6.36] bio, fs: update RWA_MASK, READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Jens Axboe <axboe@xxxxxxxxx>
- [PATCH 2/2 block#for-2.6.36] bio, fs: separate out bio_types.h and define READ/WRITE constants in terms of BIO_RW_* flags
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 1/2 block#for-2.6.36] bio, fs: update RWA_MASK, READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: Replacing a drive in RAID 0
- From: Neil Brown <neilb@xxxxxxx>
- Re: Replacing a drive in RAID 0
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: Replacing a drive in RAID 0
- From: Neil Brown <neilb@xxxxxxx>
- Re: Replacing a drive in RAID 0
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Replacing a drive in RAID 0
- From: Ben Nemec <lists@xxxxxxxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Neil Brown <neilb@xxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Justin Bronder <jsbronder@xxxxxxxxxx>
- Re: [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Jens Axboe <axboe@xxxxxxxxx>
- Re: mdadm 3.1.x and bitmap chunk
- From: Doug Ledford <dledford@xxxxxxxxxx>
- [PATCH RESEND 2/2 block#for-linus] bio, fs: separate out bio_types.h and define READ/WRITE constants in terms of BIO_RW_* flags
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH RESEND 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 2/2 block#for-linus] bio, fs: separate out bio_types.h and define READ/WRITE constants in terms of BIO_RW_* flags
- From: Tejun Heo <tj@xxxxxxxxxx>
- [PATCH 1/2 block#for-linus] bio, fs: update READA and SWRITE to match the corresponding BIO_RW_* bits
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: sw raid array completely hungs during verify in 2.6.32
- From: Neil Brown <neilb@xxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Neil Brown <neilb@xxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Neil Brown <neilb@xxxxxxx>
- Re: RAID/block regression starting from 2.6.32, bisected
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH]md:dm.c Fix warning: statement with no effect
- From: "Justin P. Mattock" <justinmattock@xxxxxxxxx>
- failed to re-assemble after cable-problem...
- From: wiebittewas <wiebittewas@xxxxxxxxxxxxxx>
- sw raid array completely hangs during verify in 2.6.32
- From: Michael Tokarev <mjt@xxxxxxxxxx>
- Re: [PATCH]md:dm.c Fix warning: statement with no effect
- From: "Justin P. Mattock" <justinmattock@xxxxxxxxx>
- Re: [PATCH]md:dm.c Fix warning: statement with no effect
- From: Alasdair G Kergon <agk@xxxxxxxxxx>
- Re: [PATCH]md:dm.c Fix warning: statement with no effect
- From: "Justin P. Mattock" <justinmattock@xxxxxxxxx>
- Re: raid1 performance
- From: Keld Simonsen <keld@xxxxxxxxxx>
- Re: raid1 performance
- From: Marco <jjletho67-diar@xxxxxxxx>
- Re: RAID/block regression starting from 2.6.32, bisected
- From: Tejun Heo <tj@xxxxxxxxxx>
- Re: Pending sectors in valid array - how to proceed?
- From: Simon Matthews <simon.d.matthews@xxxxxxxxx>
- Re: MD raid and different elevators (disk i/o schedulers)
- From: Fabio Muzzi <liste@xxxxxxxxxx>
- Re: MD raid and different elevators (disk i/o schedulers)
- From: Eric Shubert <ejs@xxxxxxxxxx>
- Re: MD raid and different elevators (disk i/o schedulers)
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- MD raid and different elevators (disk i/o schedulers)
- From: Fabio Muzzi <liste@xxxxxxxxxx>
- Re: Pending sectors in valid array - how to proceed?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Pending sectors in valid array - how to proceed?
- From: Simon Matthews <simon.d.matthews@xxxxxxxxx>
- Re: Pending sectors in valid array - how to proceed?
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: Pending sectors in valid array - how to proceed?
- From: Stefan *St0fF* Huebner <st0ff@xxxxxxx>
- Endian issue assembling arrays
- From: Doug Nazar <nazard.michi@xxxxxxxxx>
- Re: md versus partition scanning (bd_invalidated)
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Pending sectors in valid array - how to proceed?
- From: Tim Small <tim@xxxxxxxxxxx>
- RAID/block regression starting from 2.6.32, bisected
- From: Vladislav Bolkhovitin <vst@xxxxxxxx>
- Pending sectors in valid array - how to proceed?
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: raid1 performance
- From: Neil Brown <nfbrown@xxxxxxxxxx>
- Re: raid1 performance
- From: Marco <jjletho67-diar@xxxxxxxx>
- Re: Software raid-5 on root partition (2.6.32.1)
- From: Kurt Newman <knewman@xxxxxxxxxxxxxxxxxxx>
- Software raid-5 on root partition (2.6.32.1)
- From: Kurt Newman <knewman@xxxxxxxxxxxxxxxxxxx>
- Re: [PATCH] md: bitwise operations might not fit in a "bool"
- From: "H. Peter Anvin" <hpa@xxxxxxxxx>
- Re: raid1 performance
- From: Neil Brown <nfbrown@xxxxxxxxxx>
- Re: A policy framework for mdadm (incorporating domains and hotplug and such)
- From: Neil Brown <neilb@xxxxxxx>
- RE: A policy framework for mdadm (incorporating domains and hotplug and such)
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- Re: raid1 performance
- From: Marco <jjletho67-diar@xxxxxxxx>
- Re: raid1 performance
- From: Marco <jjletho67-diar@xxxxxxxx>
- Re: Increase request size for levels other than raid0?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- RE: raid1 performance
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- [PATCH] Notify sysfs when RAID1 disk is In_sync.
- From: Adrian Drzewiecki <adriand@xxxxxxxxxx>
- rw-mount necessary while assembling?
- From: wiebittewas <wiebittewas@xxxxxxxxxxxxxx>
- Increase request size for levels other than raid0?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- Re: raid1 performance
- From: Keld Simonsen <keld@xxxxxxxxxx>
- Re: raid1 performance
- From: Neil Brown <nfbrown@xxxxxxxxxx>
- Re: raid1 performance
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: raid1 performance
- From: Keld Simonsen <keld@xxxxxxxxxx>
- Re: raid1 performance
- From: Marco <jjletho67-diar@xxxxxxxx>
- [PATCH 8/8] dm-raid456: switch to use dm_dirty_log for tracking dirty regions.
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 7/8] dm-dirty-log: allow log size to be different from target size.
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 6/8] dm-raid456: add message handler.
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 5/8] dm-raid456: add suspend/resume method
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 4/8] dm-raid456: add support for setting IO hints.
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 3/8] dm-raid456: support unplug
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 2/8] dm-raid456: add congestion checking.
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 1/8] md/dm: create dm-raid456 module using md/raid5
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- [PATCH 0/8] The DM part of dm-raid45
- From: NeilBrown <nfbrown@xxxxxxxxxx>
- Re: raid1 performance
- From: Roman Mamedov <roman@xxxxxxxx>
- raid1 performance
- From: Marco <jjletho67-diar@xxxxxxxx>
- RE: [PATCH] Adding ADMA support for PPC460EX DMA engine.
- From: Tirumala Marri <tmarri@xxxxxxx>
- Re: [PATCH] Adding ADMA support for PPC460EX DMA engine.
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Justin Bronder <jsbronder@xxxxxxxxxx>
- Stripe Cache
- From: Chris Farey <chris@xxxxxxxxx>
- Re: [PATCH] Adding ADMA support for PPC460EX DMA engine.
- From: Stefan Roese <sr@xxxxxxx>
- Re: raid1: prevent adding a "too recent" device to a mirror?
- From: Neil Brown <neilb@xxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Neil Brown <neilb@xxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Justin Bronder <jsbronder@xxxxxxxxxx>
- raid1: prevent adding a "too recent" device to a mirror?
- From: "Dailey, Nate" <Nate.Dailey@xxxxxxxxxxx>
- Re: Why is sb->size set to 0 with raid0?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- Re: Why is sb->size set to 0 with raid0?
- From: Neil Brown <neilb@xxxxxxx>
- Re: Why is sb->size set to 0 with raid0?
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: [PATCH] md: bitwise operations might not fit in a "bool"
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH] md: bitwise operations might not fit in a "bool"
- From: Boaz Harrosh <bharrosh@xxxxxxxxxxx>
- Re: [PATCH] md: bitwise operations might not fit in a "bool"
- From: Neil Brown <neilb@xxxxxxx>
- [PATCH] md: bitwise operations might not fit in a "bool"
- From: Boaz Harrosh <bharrosh@xxxxxxxxxxx>
- Re: Why is sb->size set to 0 with raid0?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- Re: BUG REPORT: md RAID5 write throughput will drop for 1~2s every 16s (under 1Hz sample rate)
- From: Eddy Zhao <eddy.y.zhao@xxxxxxxxx>
- Re: BUG at drivers/scsi/scsi_lib.c:1113
- From: Boaz Harrosh <openosd@xxxxxxxxx>
- RE: [PATCH 1/2] md: raid5 return new layout in mdstat while reshaping
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- Re: BUG at drivers/scsi/scsi_lib.c:1113
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- Re: BUG at drivers/scsi/scsi_lib.c:1113
- From: Christoph Hellwig <hch@xxxxxx>
- RE: [PATCH 0/2] md: migrations for external metadata
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- Re: BUG at drivers/scsi/scsi_lib.c:1113
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- Re: fixes for 3.1.3 (was: Re: [mdadm GIT PULL] rebuild checkpoints...)
- From: Neil Brown <neilb@xxxxxxx>
- Re: BUG at drivers/scsi/scsi_lib.c:1113
- From: Neil Brown <neilb@xxxxxxx>
- BUG at drivers/scsi/scsi_lib.c:1113
- From: Jiri Slaby <jirislaby@xxxxxxxxx>
- Re: Why is sb->size set to 0 with raid0?
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 1/2] md: raid5 return new layout in mdstat while reshaping
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: help needed - 4 disk raid4 with two missing disks
- From: Rainer Fuegenstein <rfu@xxxxxxxxxxxxxxxxxxxxxxxx>
- md versus partition scanning (bd_invalidated)
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: fixes for 3.1.3 (was: Re: [mdadm GIT PULL] rebuild checkpoints...)
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: [PATCH 0/2] md: migrations for external metadata
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Why is sb->size set to 0 with raid0?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- Re: help needed - 4 disk raid4 with two missing disks
- From: Keld Simonsen <keld@xxxxxxxxxx>
- help needed - 4 disk raid4 with two missing disks
- From: Rainer Fuegenstein <rfu@xxxxxxxxxxxxxxxxxxxxxxxx>
- [PATCH 10/10] mdadm: migration restart for external meta
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 08/10] mdadm: support backup operations for imsm
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 09/10] mdadm: support grow operation for external meta
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 07/10] Add mdadm->mdmon sync_max command message
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 06/10] mdadm: support restore_stripes() from the given buffer
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 05/10] mdadm: add backup methods to superswitch
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 04/10] mdadm: Add IMSM migration record to intel_super
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 03/10] mdadm: support non-grow reshape for external meta
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 02/10] mdadm: read chunksize and layout from mdstat
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 00/10] mdadm: reshape for external metadata
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 01/10] mdadm: second_map enhancement for imsm_get_map()
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 2/2] md: raid5: update suspend_hi during the reshape
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 1/2] md: raid5 return new layout in mdstat while reshaping
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- [PATCH 0/2] md: migrations for external metadata
- From: "Trela, Maciej" <Maciej.Trela@xxxxxxxxx>
- RE: Changing Chunk Size on Array
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- RE: RAID10 status when you remove the first disk and last disk
- From: "Michael Li" <michael.li@xxxxxxxxxxx>
- Re: RAID10 status when you remove the first disk and last disk
- From: Neil Brown <neilb@xxxxxxx>
- Re: Missing Drives
- From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
- Re: Missing Drives
- From: Mark Knecht <markknecht@xxxxxxxxx>
- Missing Drives
- From: James Howells <james@xxxxxxxxxxxxxx>
- Re: [SOLVED] Re: messed up changing chunk size
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- minor typo in md.txt
- From: William Stearns <wstearns@xxxxxxxxx>
- [SOLVED] Re: messed up changing chunk size
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- Re: mdadm "hang", 100% CPU usage when trying to create RAID-1 array with external bitmap
- From: "John Stoffel" <john@xxxxxxxxxxx>
- Re: BUG REPORT: md RAID5 write throughput will drop for 1~2s every 16s (under 1Hz sample rate)
- From: Neil Brown <neilb@xxxxxxx>
- Re: messed up changing chunk size
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- Re: messed up changing chunk size
- From: Konstantin Svist <kostya@xxxxxxxxxxx>
- Re: Changing Chunk Size on Array
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- Re: Changing Chunk Size on Array
- From: Neil Brown <neilb@xxxxxxx>
- Re: mdadm "hang", 100% CPU usage when trying to create RAID-1 array with external bitmap
- From: Neil Brown <neilb@xxxxxxx>
- Re: messed up changing chunk size
- From: Keld Simonsen <keld@xxxxxxxxxx>
- Re: messed up changing chunk size
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- Re: messed up changing chunk size
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: messed up changing chunk size
- From: Jools Wills <jools@xxxxxxxxxxxxxxxxxxx>
- raid1 performance
- From: Marco <jjletho67-diar@xxxxxxxx>
- Re: messed up changing chunk size
- From: Roman Mamedov <roman@xxxxxxxx>
- RE: messed up changing chunk size
- From: "Guy Watkins" <linux-raid@xxxxxxxxxxxxxxxx>
- Re: messed up changing chunk size
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- RE: messed up changing chunk size
- From: "Guy Watkins" <linux-raid@xxxxxxxxxxxxxxxx>
- RE: messed up changing chunk size
- From: "Steven Haigh" <netwiz@xxxxxxxxx>
- Re: messed up changing chunk size
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- messed up changing chunk size
- From: Konstantin Svist <fry.kun@xxxxxxxxx>
- Re: mdadm "hang", 100% CPU usage when trying to create RAID-1 array with external bitmap
- From: "John Stoffel" <john@xxxxxxxxxxx>
- RE: Grow error - WTF!?
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- Re: Grow error - WTF!?
- From: Jérôme Poulin <jeromepoulin@xxxxxxxxx>
- RE: Changing Chunk Size on Array
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- Grow error - WTF!?
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- Re: Changing Chunk Size on Array
- From: Roman Mamedov <roman@xxxxxxxx>
- Changing Chunk Size on Array
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- Re: mdadm --add Device or resource busy error and multiple drives "failing" on RAID 6 at once
- From: Richard <richard@xxxxxxxxxxx>
- Re: Raid10 device hangs during resync and heavy I/O.
- From: Justin Bronder <jsbronder@xxxxxxxxxx>
- Raid10 device hangs during resync and heavy I/O.
- From: Justin Bronder <jsbronder@xxxxxxxxxx>
- Re: mdadm --add Device or resource busy error and multiple drives "failing" on RAID 6 at once
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: regression: hung boot upgrading to 2.6.34.1 from 2.6.32.11
- From: Karl Hiramoto <karl@xxxxxxxxxxxx>
- mdadm --add Device or resource busy error and multiple drives "failing" on RAID 6 at once
- From: "fibreraid@xxxxxxxxx" <fibreraid@xxxxxxxxx>
- Re: mvsas still has problems with 2.6.34
- From: Konstantinos Skarlatos <k.skarlatos@xxxxxxxxx>
- regression: hung boot upgrading to 2.6.34.1 from 2.6.32.11
- From: Karl Hiramoto <karl@xxxxxxxxxxxx>
- Re: mvsas still has problems with 2.6.34
- From: Thomas Fjellstrom <tfjellstrom@xxxxxxxxxxxxxxx>
- Re: mvsas still has problems with 2.6.34
- From: "Caspar Smit" <c.smit@xxxxxxxxxx>
- Re: mvsas still has problems with 2.6.34
- From: Thomas Fjellstrom <tfjellstrom@xxxxxxxxxxxxxxx>
- Re: mvsas still has problems with 2.6.34
- From: "Caspar Smit" <caspar@xxxxxxxxxxxxxxx>
- Re: mvsas still has problems with 2.6.34
- From: Thomas Fjellstrom <tfjellstrom@xxxxxxxxxxxxxxx>
- Re: mvsas still has problems with 2.6.34
- From: Thomas Fjellstrom <tfjellstrom@xxxxxxxxxxxxxxx>
- mvsas still has problems with 2.6.34
- From: Thomas Fjellstrom <tfjellstrom@xxxxxxxxxxxxxxx>
- Re: Need help with recovery
- From: Jan Ceuleers <jan.ceuleers@xxxxxxxxxxxx>
- Re: Need help with recovery
- From: Neil Brown <neilb@xxxxxxx>
- Re: RAID5 crashed for unknown reason on old 2.6.16 kernel
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: RAID5 crashed for unknown reason on old 2.6.16 kernel
- From: Markus Hennig <mhennig@xxxxxxxxx>
- Re: mdadm "hang", 100% CPU usage when trying to create RAID-1 array with external bitmap
- From: Tomasz Chmielewski <mangoo@xxxxxxxx>
- RE: Need help with recovery
- From: "Ceuleers, Jan (Jan)" <jan.ceuleers@xxxxxxxxxxxxxxxxxx>
- Re: mdadm "hang", 100% CPU usage when trying to create RAID-1 array with external bitmap
- From: Neil Brown <neilb@xxxxxxx>
- Re: creating RAID-1 - in which direction will the sync be made?
- From: Neil Brown <neilb@xxxxxxx>
- Re: Need help with recovery
- From: Neil Brown <neilb@xxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Jim Paris <jim@xxxxxxxx>
- Need help with recovery
- From: Jan Ceuleers <jan.ceuleers@xxxxxxxxxxxx>
- Re: creating RAID-1 - in which direction will the sync be made?
- From: Tomasz Chmielewski <mangoo@xxxxxxxx>
- mdadm "hang", 100% CPU usage when trying to create RAID-1 array with external bitmap
- From: Tomasz Chmielewski <mangoo@xxxxxxxx>
- creating RAID-1 - in which direction will the sync be made?
- From: Tomasz Chmielewski <mangoo@xxxxxxxx>
- Re: [PATCH v2 2/2] Crypto: Talitos: Support for Async_tx XOR offload
- From: hank peng <pengxihan@xxxxxxxxx>
- Re: power outage while raid5->raid6 was in progress
- From: Sebastian Reichel <elektranox@xxxxxxxxx>
- [PATCH]md:dm.c Fix warning: statement with no effect
- From: "Justin P. Mattock" <justinmattock@xxxxxxxxx>
- Re: convert raid10 to raid0
- From: Tóth Csaba <csaba.toth@xxxxxxxxxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Jérôme Poulin <jeromepoulin@xxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Rudy Zijlstra <rudy@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Jeff Garzik <jeff@xxxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Anssi Hannula <anssi.hannula@xxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Rudy Zijlstra <rudy@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Brad Campbell <brad@xxxxxxxxxxx>
- Re: Linux/MacOSX RAID5 dual boot
- From: Marek <mlf.conv@xxxxxxxxx>
- Re: Linux/MacOSX RAID5 dual boot
- From: Neil Brown <neilb@xxxxxxx>
- Re: Linux/MacOSX RAID5 dual boot
- From: Marek <mlf.conv@xxxxxxxxx>
- Re: Linux/MacOSX RAID5 dual boot
- From: Neil Brown <neilb@xxxxxxx>
- Re: Linux/MacOSX RAID5 dual boot
- From: Marek <mlf.conv@xxxxxxxxx>
- Re: power outage while raid5->raid6 was in progress
- From: Sebastian Reichel <elektranox@xxxxxxxxx>
- Re: convert raid10 to raid0
- From: Neil Brown <neilb@xxxxxxx>
- Re: power outage while raid5->raid6 was in progress
- From: Neil Brown <neilb@xxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Janek Kozicki <janek_listy@xxxxx>
- RE: [PATCH 27/33] extension of IncrementalRemove to store location (port) of removed device
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- RE: [PATCH 26/33] added --path <path_id> to give the information on the 'path-id' of removed device
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Robin Hill <robin@xxxxxxxxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Tim Small <tim@xxxxxxxxxxxxxxxx>
- Re: power outage while raid5->raid6 was in progress
- From: Sebastian Reichel <elektranox@xxxxxxxxx>
- Re: convert raid10 to raid0
- From: Tóth Csaba <csaba.toth@xxxxxxxxxxxxxxxx>
- Re: A policy framework for mdadm (incorporating domains and hotplug and such)
- From: Neil Brown <neilb@xxxxxxx>
- Re: convert raid10 to raid0
- From: Tóth Csaba <csaba.toth@xxxxxxxxxxxxxxxx>
- RE: A policy frame work for mdadm (incorporating domains and hotplug and such)
- From: "Labun, Marcin" <Marcin.Labun@xxxxxxxxx>
- Re: [mdadm GIT PULL] rebuild checkpoints, incremental assembly, volume delete/rename, and fixes
- From: Neil Brown <neilb@xxxxxxx>
- Re: power outage while raid5->raid6 was in progress
- From: Neil Brown <neilb@xxxxxxx>
- Linux/MacOSX RAID5 dual boot
- From: Marek <mlf.conv@xxxxxxxxx>
- Re: convert raid10 to raid0
- From: Tóth Csaba <csaba.toth@xxxxxxxxxxxxxxxx>
- Re: power outage while raid5->raid6 was in progress
- From: Sebastian Reichel <elektranox@xxxxxxxxx>
- Re: convert raid10 to raid0
- From: Tóth Csaba <csaba.toth@xxxxxxxxxxxxxxxx>
- Re: convert raid10 to raid0
- From: Neil Brown <neilb@xxxxxxx>
- Re: power outage while raid5->raid6 was in progress
- From: Neil Brown <neilb@xxxxxxx>
- power outage while raid5->raid6 was in progress
- From: Sebastian Reichel <elektranox@xxxxxxxxx>
- convert raid10 to raid0
- From: Tóth Csaba <csaba.toth@xxxxxxxxxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: [mdadm GIT PULL] rebuild checkpoints, incremental assembly, volume delete/rename, and fixes
- From: Doug Ledford <dledford@xxxxxxxxxx>
- Re: mapping ataXX.YY to a /dev/sdX
- From: Daniel Pittman <daniel@xxxxxxxxxxxx>
- mapping ataXX.YY to a /dev/sdX
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: A policy frame work for mdadm (incorporating domains and hotplug and such)
- From: Neil Brown <neilb@xxxxxxx>
- Re: [mdadm GIT PULL] rebuild checkpoints, incremental assembly, volume delete/rename, and fixes
- From: Neil Brown <neilb@xxxxxxx>
- Re: [mdadm GIT PULL] rebuild checkpoints, incremental assembly, volume delete/rename, and fixes
- From: Doug Ledford <dledford@xxxxxxxxxx>
- fixes for 3.1.3 (was: Re: [mdadm GIT PULL] rebuild checkpoints...)
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- RE: A policy frame work for mdadm (incorporating domains and hotplug and such)
- From: "Labun, Marcin" <Marcin.Labun@xxxxxxxxx>
- Worrisome rebuild
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- Re: [PATCH 33/33] Try exclusive open on a spare device before it is added to another container.
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 32/33] Fix the count of member devices in mdstat_read function.
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 31/33] Fix problem in mdmon monitor of using removed disk from in imsm container.
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 27/33] extension of IncrementalRemove to store location (port) of removed device
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 26/33] added --path <path_id> to give the information on the 'path-id' of removed device
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 21/33] Monitor: link containers and volumes in statelist
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 20/33] Added disk util functions
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH 11/33] fix: IncrementalRemove leaves open handle
- From: Neil Brown <neilb@xxxxxxx>
- Re: A policy frame work for mdadm (incorporating domains and hotplug and such)
- From: Neil Brown <neilb@xxxxxxx>
- Re: [mdadm GIT PULL] rebuild checkpoints, incremental assembly, volume delete/rename, and fixes
- From: Neil Brown <neilb@xxxxxxx>
- [PATCH 33/33] Try exclusive open on a spare device before it is added to another container.
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 24/33] Monitor: Removed spare-group based spare sharing code
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 31/33] Fix problem in mdmon monitor of using removed disk from in imsm container.
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 32/33] Fix the count of member devices in mdstat_read function.
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 30/33] Incremental for bare disks, checking routine + integration
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 29/33] update for early rules to support --grab
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 28/33] update udev rules to support --path parameter with remove action
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 27/33] extension of IncrementalRemove to store location (port) of removed device
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 26/33] added --path <path_id> to give the information on the 'path-id' of removed device
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 25/33] Man pages update with DOMAIN line description.
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 22/33] Monitor: added function to get domain and subset of a disk
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 23/33] Monitor: Spare sharing with domain/subset support
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 21/33] Monitor: link containers and volumes in statelist
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 20/33] Added disk util functions
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 19/33] Assemble: assembly with domains - two runs for imsm spares
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 17/33] test code for loop device support added
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 18/33] Removed uuid setting for imsm spares
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 16/33] additional environment dependent code for platform subset tests
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 14/33] manage: domains support in Manage_subdev
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 13/33] create: respect domains/subsets during create process
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 15/33] incremental: add domain/subset support
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 12/33] assembly: user domain/subset from configuration file in assembly process
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 11/33] fix: IncrementalRemove leaves open handle
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 10/33] update domain search to new structures, added subset search
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 09/33] processing of domain entries made after config is loaded
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 08/33] imsm: platform dependent domain boundaries
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 07/33] add general domain/subset lists manipulation routines
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 06/33] Updates to udev rules and ReadMe.c for incremental --grab support
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 04/33] Support for new disk hot plug actions with DOMAINs.
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 05/33] Partition action support in DOMAIN line
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 03/33] Config option parsing for new DOMAIN line support.
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 02/33] Few fixes and sample udev rules file to capture block devices very early in the udev hot plug sequence, allowing us to make them our own if they match a proper DOMAIN entry in the mdadm conf file
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- [PATCH 01/33] [hotunplug] we are testing mdstat, not ent which is undefined at this point
- From: "Hawrylewicz Czarnowski, Przemyslaw" <przemyslaw.hawrylewicz.czarnowski@xxxxxxxxx>
- Re: List of mismatched blocks?
- From: Niobos <niobos@xxxxxxxxxxxxxxx>
- Re: Stripe dirty bitmap
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Stripe dirty bitmap
- From: Neil Brown <neilb@xxxxxxx>
- Stripe dirty bitmap
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: md and sd out of sync
- From: Neil Brown <neilb@xxxxxxx>
- Re: very odd iowait problem
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: md and sd out of sync
- From: richard <richard@xxxxxxxxxxx>
- md and sd out of sync
- From: Richard Scobie <richard@xxxxxxxxxxx>
- Re: List of mismatched blocks?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- Re: raid5 failed while rebuiling - classical problem
- From: Daniel Pittman <daniel@xxxxxxxxxxxx>
- Re: List of mismatched blocks?
- From: Niobos <niobos@xxxxxxxxxxxxxxx>
- Re: raid5 failed while rebuiling - classical problem
- From: Janek Kozicki <janek_listy@xxxxx>
- Re: List of mismatched blocks?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- List of mismatched blocks?
- From: Niobos <niobos@xxxxxxxxxxxxxxx>
- Re: raid5 failed while rebuiling - classical problem
- From: Janek Kozicki <janek_listy@xxxxx>
- Re: raid5 failed while rebuiling - classical problem
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: raid5 failed while rebuiling - classical problem
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: raid5 failed while rebuiling - classical problem
- From: Janek Kozicki <janek_listy@xxxxx>
- Re: [PATCH 0/33] Autorebuild and hot-plug
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- [PATCH 0/33] Autorebuild and hot-plug
- From: "Czarnowska, Anna" <anna.czarnowska@xxxxxxxxx>
- raid5 failed while rebuiling - classical problem
- From: Janek Kozicki <janek_listy@xxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: John Robinson <john.robinson@xxxxxxxxxxxxxxxx>
- Re: [mdadm GIT PULL] rebuild checkpoints, incremental assembly, volume delete/rename, and fixes
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: "Majed B." <majedb@xxxxxxxxx>
- Re: A policy frame work for mdadm (incorporating domains and hotplug and such)
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- A policy frame work for mdadm (incorporating domains and hotplug and such)
- From: Neil Brown <neilb@xxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Shaochun Wang <scwang@xxxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: CoolCold <coolthecold@xxxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: Write-intent bitmap decreases or increase performance of RAID5?
- From: Roman Mamedov <roman@xxxxxxxx>
- Re: How do I determine which drive should be in which slot?
- From: Neil Brown <neilb@xxxxxxx>
- Re: "mdadm -Dsv" output
- From: Neil Brown <neilb@xxxxxxx>
- Re: "mdadm -Dsv" output
- From: Adrian Sandor <aditsu@xxxxxxxxx>
- Write-intent bitmap decreases or increase performance of RAID5?
- From: Shaochun Wang <scwang@xxxxxxxxx>
- Re: [md PATCH 00/16] bad block list management for md and RAID1
- From: Bill Davidsen <davidsen@xxxxxxx>
- "mdadm -Dsv" output
- From: Adrian Sandor <aditsu@xxxxxxxxx>
- Re: How do I determine which drive should be in which slot?
- From: Dave W <dave+gmane@xxxxxxxxxxxx>
- Re: RAID5 write hole?
- From: Shaochun Wang <scwang@xxxxxxxxx>
- How to reclaim device slots on v1 superblock?
- From: "Mario 'BitKoenig' Holbe" <Mario.Holbe@xxxxxxxxxxxxx>
- Re: How do I determine which drive should be in which slot?
- From: Neil Brown <neilb@xxxxxxx>
- Re: Request on RAID10
- From: Neil Brown <neilb@xxxxxxx>
- Re: RAID5 crashed for unknown reason on old 2.6.16 kernel
- From: Neil Brown <neilb@xxxxxxx>
- harware-failure (was: raid5 crashes on mke2fs ...)
- From: wiebittewas <wiebittewas@xxxxxxxxxxxxxx>
- Re: mdadm monitor spins with start-failed raid0
- From: Neil Brown <neilb@xxxxxxx>
- Re: raid5 crashes on mke2fs ...
- From: Neil Brown <neilb@xxxxxxx>
- Re: How do I determine which drive should be in which slot?
- From: Dave W <dave+gmane@xxxxxxxxxxxx>
- Re: RAID5 write hole?
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: RAID5 write hole?
- From: Shaochun Wang <scwang@xxxxxxxxx>
- Re: [md PATCH 00/16] bad block list management for md and RAID1
- From: Neil Brown <neilb@xxxxxxx>
- Re: [md PATCH 4/5] md: Fix: BIO I/O Error during reshape for external metadata
- From: Neil Brown <neilb@xxxxxxx>
- Re: Problem re-shaping RAID6
- From: Neil Brown <neilb@xxxxxxx>
- Re: RAID grow and disk failure
- From: Neil Brown <neilb@xxxxxxx>
- Re: RAID5 crashed for unknown reason on old 2.6.16 kernel
- From: Markus Hennig <mhennig@xxxxxxxxx>
- Re: impact of one slow drive?
- From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
- raid5 crashes on mke2fs ...
- From: wiebittewas <wiebittewas@xxxxxxxxxxxxxx>
- mdadm monitor spins with start-failed raid0
- From: Jeff DeFouw <jeffd@xxxxxxx>
- Re: impact of one slow drive?
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: impact of one slow drive?
- From: David Lethe <david@xxxxxxxxxxxx>
- [PULL REQUEST] md: various bug fixes
- From: Neil Brown <neilb@xxxxxxx>
- impact of one slow drive?
- From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
- Re: RAID5 write hole?
- From: Neil Brown <neilb@xxxxxxx>
- Re: RAID5 write hole?
- From: John Hendrikx <hjohn@xxxxxxxxx>
- Re: RAID5 write hole?
- From: Shaochun Wang <scwang@xxxxxxxxx>
- RAID5 crashed for unknown reason on old 2.6.16 kernel
- From: Markus Hennig <mhennig@xxxxxxxxx>
- Request on RAID10
- From: koti <satha_koti@xxxxxxxxxxx>
- Re: RAID5 write hole?
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- RAID5 write hole?
- From: Shaochun Wang <scwang@xxxxxxxxx>
- Re: RAID grow and disk failure
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- How do I determine which drive should be in which slot?
- From: Dave W <dave+gmane@xxxxxxxxxxxx>
- Re: md lock issue, I suppose
- From: CoolCold <coolthecold@xxxxxxxxx>
- Re: Building new RAID5 results in removed and failed devices
- From: Mikael Abrahamsson <swmike@xxxxxxxxx>
- Re: Building new RAID5 results in removed and failed devices
- From: Markus Krainz <ldm@xxxxxx>
- Re: Building new RAID5 results in removed and failed devices
- From: Markus Krainz <ldm@xxxxxx>
- Re: RAID grow and disk failure
- From: Neil Brown <neilb@xxxxxxx>
- Re: Building new RAID5 results in removed and failed devices
- From: Robin Hill <robin@xxxxxxxxxxxxxxx>
- Re: Building new RAID5 results in removed and failed devices
- From: Dan Williams <dan.j.williams@xxxxxxxxx>
- Building new RAID5 results in removed and failed devices
- From: Markus Krainz <ldm@xxxxxx>
- RAID grow and disk failure
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: md lock issue, I suppose
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Drives disappearing from /dev/ during surface scan
- From: John Hendrikx <hjohn@xxxxxxxxx>
- Re: [PATCH] drivers/md: raid10: Fix null pointer dereference in fix_read_error()
- From: Neil Brown <neilb@xxxxxxx>
- Re: [PATCH] drivers/md: raid10: Fix null pointer dereference in fix_read_error()
- From: "Prasanna S. Panchamukhi" <prasanna.panchamukhi@xxxxxxxxxxxx>
- Re: Data-check brings system to a standstill
- From: Bill Davidsen <davidsen@xxxxxxx>
- Re: Data-check brings system to a standstill
- From: Jordan Russell <jr-list-2010@xxxxxx>
- Re: [PATCH] drivers/md: raid10: Fix null pointer dereference in fix_read_error()
- From: Prasanna Panchamukhi <ppanchamukhi@xxxxxxxxxxxx>
- Re: [PATCH] drivers/md: raid10: Fix null pointer dereference in fix_read_error()
- From: Neil Brown <neilb@xxxxxxx>
- ADMA: naming new drivers under drivers/dma/ppc4xx
- From: Tirumala Marri <tmarri@xxxxxxx>
- [PATCH] drivers/md: raid10: Fix null pointer dereference in fix_read_error()
- From: prasanna.panchamukhi@xxxxxxxxxxxx
- Re: md lock issue, I suppose
- From: Neil Brown <neilb@xxxxxxx>
- Re: migrating from RAID5 to RAID10
- From: Neil Brown <neilb@xxxxxxx>
- Re: md lock issue, I suppose
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: md lock issue, I suppose
- From: Stefan /*St0fF*/ Hübner <stefan.huebner@xxxxxxxxxxxxxxxxxx>
- md lock issue, I suppose
- From: Piergiorgio Sartor <piergiorgio.sartor@xxxxxxxx>
- Re: How to boost performance [SOLVED]
- From: Bernd Schubert <bernd.schubert@xxxxxxxxxxx>
- Re: How to boost performance [SOLVED]
- Re: RAID5: two disks dropping out
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: [Linux-HA] very odd iowait problem
- From: Ciro Iriarte <cyruspy@xxxxxxxxx>
- very odd iowait problem
- From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
- RE: Upgrading to GRUB2 from GRUB Legacy on RAID1
- From: "Leslie Rhorer" <lrhorer@xxxxxxxxxxx>
- Re: How to boost performance
- From: Roger Heflin <rogerheflin@xxxxxxxxx>
- Re: How to boost performance
- Re: Data-check brings system to a standstill
- From: Jordan Russell <jr-list-2010@xxxxxx>
- Re: Data-check brings system to a standstill
- From: Tim Small <tim@xxxxxxxxxxx>
- Re: Data-check brings system to a standstill
- From: Jordan Russell <jr-list-2010@xxxxxx>
- Re: RAID5: two disks dropping out
- From: "Stefan G. Weichinger" <lists@xxxxxxxx>
- Re: Raid 1 array degrades on reboot [resolved]
- From: David Watson <David.Watson@xxxxxxxxxx>
- Re: Problem re-shaping RAID6
- From: Nagilum <nagilum@xxxxxxxxxxx>
- RE: [md PATCH 4/5] md: Fix: BIO I/O Error during reshape for external metadata
- From: "Kwolek, Adam" <adam.kwolek@xxxxxxxxx>
- Re: migrating from RAID5 to RAID10
- From: Keld Simonsen <keld@xxxxxxxxxx>
- RE: [md PATCH 3/5] md: Use added disks for external metadata case in start_reshape()
- From: "Kwolek, Adam" <adam.kwolek@xxxxxxxxx>
- RE: [md PATCH 00/16] bad block list management for md and RAID1
- From: "Graham Mitchell" <gmitch@xxxxxxxxxxx>
- Re: migrating from RAID5 to RAID10
- From: Gilad Arnold <arnold@xxxxxxxxxxxxxxx>
- Re: [md PATCH 00/16] bad block list management for md and RAID1
- From: Neil Brown <neilb@xxxxxxx>
- Re: [md PATCH 00/16] bad block list management for md and RAID1
- From: Neil Brown <neilb@xxxxxxx>
- Re: migrating from RAID5 to RAID10
- From: Neil Brown <neilb@xxxxxxx>
- Re: migrating from RAID5 to RAID10
- From: Gilad Arnold <arnold@xxxxxxxxxxxxxxx>
- Re: migrating from RAID5 to RAID10
- From: Neil Brown <neilb@xxxxxxx>
- Re: How to boost performance
- From: Roger Heflin <rogerheflin@xxxxxxxxx>
- Re: Raid 1 array degrades on reboot
- From: Neil Brown <neilb@xxxxxxx>