RAID10 reshape crashed due to memory allocation problem

Dear readers,

It looks like memory leaks while a RAID10 array is being reshaped.

Here are the details of what I did:

A RAID10 array consisting of 13 disks (2TB each) plus one spare was
grown into a 16-disk RAID10 array by adding two more spares and
then running

  mdadm --grow /dev/md5 --raid-devices=16
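
For reference, the full sequence was roughly the following (the
partition names sdx1/sdy1 are placeholders, not the real devices):

  # add the two new disks as spares
  mdadm --add /dev/md5 /dev/sdx1
  mdadm --add /dev/md5 /dev/sdy1

  # grow from 13 active devices (+3 spares) to 16 active devices;
  # md pulls the spares into the array and starts the reshape
  mdadm --grow /dev/md5 --raid-devices=16

  # watch reshape progress
  cat /proc/mdstat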

When the reshape operation reached 80% (after 20 hours) the system
became unresponsive and crashed soon afterwards. The console output
showed something like "... could not allocate memory block ..."

The machine has 32GB of RAM.

After the machine was rebooted, the reshape operation ran for 6 more
hours and was followed by an 8-hour resync.

Everything seems to be OK now, but according to /proc/meminfo only
15GB of RAM are available. That is much too low for a system that is
almost idle.
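
In case it helps with the diagnosis: my assumption (which may be
wrong) is that memory leaked by md would show up in the kernel slab
caches, so this is what I am looking at:

  # overall memory picture
  grep -i mem /proc/meminfo

  # largest slab caches, sorted by cache size
  slabtop -o -s c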

I will reboot the machine at our next maintenance window and
compare its available memory with the situation right now.

Is restarting the reshape operation after a crash really safe?
Should I somehow verify that the array is still consistent?
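
What I was planning to try is the md "check" action; my understanding
(please correct me if this is wrong) is that it reads all copies and
only counts mismatches, without writing anything:

  # request a read-only consistency check
  echo check > /sys/block/md5/md/sync_action

  # after it finishes, this should ideally be 0
  cat /sys/block/md5/md/mismatch_cnt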

Kind regards

Peter Koch