[PATCH 1/1] mdadm/Detail: Can't show container name correctly when unplugging disks

The test case is:
1. create one imsm container
2. create a raid5 device from the container
3. unplug two disks
4. mdadm --detail /dev/md126
[root@rhel85 ~]# mdadm -D /dev/md126
/dev/md126:
         Container : ��, member 0

The Detail function first gets the container name via
map_dev_preferred. Then it tries to find which disks are
available. Since patch db5377883fef ("It should be FAILED.."),
it uses map_dev_preferred to check which disks exist under
/dev.

But after the disks are unplugged, their major/minor
information still comes from kernel space while the matching
device nodes are gone from /dev. map_dev_preferred allocates
memory and initializes a device list the first time Detail
calls it; when it can't find a device in that list by
major/minor, it frees the memory and re-initializes the list.

The container name now points to memory that has been freed,
so its content is garbage.
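For illustration, here is a minimal sketch of that dangling-pointer
pattern. The structure and function names are hypothetical, not
mdadm's actual code; the point is only that a function which returns
pointers into a cached list invalidates every previously returned
pointer when it rebuilds the cache:

	/* Minimal sketch of the use-after-free pattern described above.
	 * Hypothetical names, not mdadm's actual implementation. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct devlist {
		char name[32];
		int major, minor;
		struct devlist *next;
	};

	static struct devlist *cache;

	/* Returns a pointer INTO the cached list. On a lookup miss the
	 * cache is freed and rebuilt, invalidating every pointer handed
	 * out earlier -- the hazard Detail ran into. */
	static char *lookup(int major, int minor)
	{
		struct devlist *d;

		for (d = cache; d; d = d->next)
			if (d->major == major && d->minor == minor)
				return d->name;

		/* Miss: free the old list before rebuilding it. */
		while (cache) {
			struct devlist *next = cache->next;
			free(cache);
			cache = next;
		}
		/* ... re-scan /dev and repopulate the cache here ... */
		return NULL;
	}

	int main(void)
	{
		cache = calloc(1, sizeof(*cache));
		strcpy(cache->name, "/dev/md127");
		cache->major = 9;
		cache->minor = 127;

		char *container = lookup(9, 127); /* pointer into cache */
		lookup(8, 0);          /* miss: cache freed and rebuilt */
		printf("%s\n", container); /* use-after-free: garbage */
		return 0;
	}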

This patch replaces the map_dev_preferred calls with access
checks on /sys/dev/block.

Fixes: db5377883fef ("It should be FAILED when raid has")
Signed-off-by: Xiao Ni <xni@xxxxxxxxxx>
Reported-by: Fine Fan <ffan@xxxxxxxxxx>
---
v2: use access rather than devid2kname
---
 Detail.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/Detail.c b/Detail.c
index d3af0ab..df59378 100644
--- a/Detail.c
+++ b/Detail.c
@@ -351,14 +351,17 @@ int Detail(char *dev, struct context *c)
 	avail = xcalloc(array.raid_disks, 1);
 
 	for (d = 0; d < array.raid_disks; d++) {
-		char *dv, *dv_rep;
-		dv = map_dev_preferred(disks[d*2].major,
-				disks[d*2].minor, 0, c->prefer);
-		dv_rep = map_dev_preferred(disks[d*2+1].major,
-				disks[d*2+1].minor, 0, c->prefer);
-
-		if ((dv && (disks[d*2].state & (1<<MD_DISK_SYNC))) ||
-		    (dv_rep && (disks[d*2+1].state & (1<<MD_DISK_SYNC)))) {
+		char dv[32], dv_rep[32];
+
+		sprintf(dv, "/sys/dev/block/%d:%d",
+				disks[d*2].major, disks[d*2].minor);
+		sprintf(dv_rep, "/sys/dev/block/%d:%d",
+				disks[d*2+1].major, disks[d*2+1].minor);
+
+		if ((!access(dv, R_OK) &&
+		    (disks[d*2].state & (1<<MD_DISK_SYNC))) ||
+		    (!access(dv_rep, R_OK) &&
+		    (disks[d*2+1].state & (1<<MD_DISK_SYNC)))) {
 			avail_disks ++;
 			avail[d] = 1;
 		} else
-- 
2.7.5
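
The sysfs check the patch relies on can be exercised standalone.
Below is a hedged sketch (my own example, not code from the patch)
that tests whether a block device with a given major:minor is still
known to the kernel. It uses snprintf rather than sprintf because,
in the worst case, two ints can overflow a 32-byte buffer like the
one in the diff:

	/* Standalone illustration of the existence check the patch uses:
	 * /sys/dev/block/<major>:<minor> exists iff the kernel still
	 * knows the device. Example code, not part of the patch. */
	#include <stdio.h>
	#include <unistd.h>

	/* Returns 1 if the kernel still has the block device, else 0. */
	static int block_dev_present(int major, int minor)
	{
		char path[64];

		/* snprintf bounds the write; size the buffer generously
		 * so even maximal major/minor values fit. */
		snprintf(path, sizeof(path),
			 "/sys/dev/block/%d:%d", major, minor);
		return access(path, R_OK) == 0;
	}

	int main(void)
	{
		/* 8:0 is conventionally /dev/sda; adjust for your system. */
		printf("8:0 present: %d\n", block_dev_present(8, 0));
		return 0;
	}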