> On Wednesday, December 4, 2024 at 04:26:22 PM PST, Roman Mamedov <rm@xxxxxxxxxxx> wrote:
>
> On Thu, 5 Dec 2024 00:09:02 +0000 (UTC)
> Jbum List <jbumslist@xxxxxxxxx> wrote:
>
>> My current situation:
>>
>> 1. Raspberry Pi 4 w/ 4-disk RAID 5 array
>> 2. PC for general development and testing.
>>
>> I had a 5th disk I wanted to add to the array and, in the interest of making things go faster, I decided to temporarily hook the RAID array up to my PC. I brought over the existing mdadm.conf settings from the Pi, and the array came up on my PC without any issue.
>>
>> I started the grow/reshape operation after adding the new disk, and everything is going well. However, I noticed that the rebuild speed isn't much better than what I'm used to seeing on the Pi. So rather than wait for the operation to complete (in a few days), I wanted to move the array over to the Pi and continue there.
>>
>> Can I pause/halt the reshape that's currently running on my PC and resume it on the Pi? I know you can pause/halt and resume on the same system the reshape was started on, but I wasn't sure whether that's possible across systems when both have the same configuration settings for the RAID array.
>
> Should be no problem. The reshape state is not saved in the OS; it's on the actual array drives.

Awesome. That's music to my ears. Thanks for the confirmation.

> Before moving it back, you could first try:
>
> echo 1000000 > /sys/devices/virtual/block/mdX/md/sync_speed_min
>
> and see if this makes it faster.

I tried this, but I didn't see any noticeable difference. It could be due to other settings I had already tweaked (read-ahead, stripe cache size, etc.).

> I would also expect the PC to be faster, at least if you connect the drives to
> its onboard fully independent SATA ports with enough PCIe bandwidth to the
> controller, and not e.g. the same USB enclosure.

My "PC" is really a laptop, so I'm using the same multi-bay drive enclosure. I suspect the USB interface is what's mostly limiting the speed. Appreciate the suggestion though.

Thanks, Roman.
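
P.S. For the archives, here is roughly the sequence I plan to follow for the move, plus the tuning knobs mentioned above, in case it helps anyone else. The device names are only placeholders (/dev/md0 for the array, /dev/sdf for the new 5th disk, /dev/sd[b-f] for the members); substitute whatever your own setup uses.

  # For reference, the grow was started roughly like this (new disk added first):
  mdadm --add /dev/md0 /dev/sdf
  mdadm --grow /dev/md0 --raid-devices=5

  # 1. On the PC: check where the reshape is, then stop the array cleanly.
  cat /proc/mdstat
  mdadm --stop /dev/md0

  # 2. On the Pi: reassemble; the reshape checkpoint is read back from the
  #    member superblocks and the reshape continues on its own.
  mdadm --assemble --scan
  # If the grow was started with --backup-file, carry that file over and
  # pass it to --assemble as well, e.g.:
  #   mdadm --assemble /dev/md0 --backup-file=/path/to/backup /dev/sd[b-f]

  # 3. Optional tuning once it is running again (the knobs discussed above):
  echo 1000000 > /sys/devices/virtual/block/md0/md/sync_speed_min
  echo 8192 > /sys/devices/virtual/block/md0/md/stripe_cache_size   # RAID5/6 only; costs RAM
  blockdev --setra 65536 /dev/md0                                   # larger read-ahead

Stopping the array with mdadm --stop should be all that's needed on the PC side; as Roman points out, the reshape checkpoint lives in the member superblocks, so the assemble on the Pi simply picks up where the PC left off.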