On Sun, Apr 4, 2021 at 10:47 PM David T-G <davidtg-robot@xxxxxxxxxxxxxxx> wrote:
>
> Roger, et al --
>
> ...and then Roger Heflin said...
> %
> % The re-add will only work if the array has bitmaps.  For quick disk
>
> Ahhhhh...  Good point.
>
> It didn't really take 9 hours; a few minutes later it was up to 60+
> hours, and then it dropped to a couple of hours and was done the next
> time I looked.  I also forced the other array using just the last two
> drives and saw everything happy, so I then added the "first" drive and
> now it's all happy as well.  Woo hoo.
>
> % hiccups the re-add is nice because instead of 9 hours, often it
> % finishes in only a few minutes assuming the disk has not been out of
> % the array for long.
>
> I love the idea.  I've been reading up, and in addition to questions of
> what size bitmap I need for my sizes
>
>   diskfarm:~ # df -kh /mnt/4Traid5md/ /mnt/750Graid5md/
>   Filesystem      Size  Used Avail Use% Mounted on
>   /dev/md0p1       11T   11T  309G  98% /mnt/4Traid5md
>   /dev/md127p1    1.4T  1.4T   14G 100% /mnt/750Graid5md
>
> and how to tell it (or *if* I tell it; that still isn't clear) there's
> also the question of whether or not xfs

It is easy enough to tell whether it is working; /proc/mdstat shows a
bitmap line when one is present:

md14 : active raid6 sdh4[11] sdg4[6] sdf4[10] sdd4[5] sdc4[9] sdb4[7] sde4[1]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk

I also hack my disks into several partitions such that I have 4 raid6
arrays.  This helps because rebuilding an entire disk takes days, and it
makes me feel better when expanding the arrays since it keeps the chunks
smaller.  The biggest help is when I start getting bad blocks on one of
the disks: typically only 1 of the 4 arrays/disk sections is having bad
blocks.  I also made sure that md*4 always uses partition sd*4, to reduce
the thinking about what was where.  (There is a sketch of such a layout
at the end of this message.)

> diskfarm:~ # grep /mnt/ssd /etc/fstab
> LABEL=diskfarm-ssd /mnt/ssd xfs defaults 0 0
>
> will work for my bitmap files target, since all I see is that it must be
> an ext2 or ext3 (not ext4? old news?) device.

I don't know; I have always done mine internal.  I could see some
advantage to having it on an SSD vs. internal.  I may have to try that,
as I am about to do some array reworks to go from all 3tb disks to start
using some 6tb disks.

If the file was pre-allocated, I would not think it would matter which
filesystem it lives on.  The page is dated 2011, so it would have been
old enough that no one had tested ext4/xfs.

I was going to tell you that you could just create an LV, format it
ext3, and use it, but I see it appears you are using direct partitions
only.
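
For reference, here is a sketch of the mdadm invocations for adding a
bitmap; /dev/md0 and the file path are just placeholders for your own
array and SSD mount, so adjust to taste:

  # add an internal bitmap to a live array (no rebuild needed)
  mdadm --grow /dev/md0 --bitmap=internal

  # or keep the bitmap in an external file instead; the file must not
  # already exist, and the docs only promise ext2/ext3 for its filesystem
  mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0.bitmap

  # remove it again if the write overhead bothers you
  mdadm --grow /dev/md0 --bitmap=none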
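
And the re-add itself, once a kicked disk comes back (again, the device
names are only examples):

  # a hiccuped member usually shows as failed; remove it, then re-add it
  mdadm /dev/md0 --remove /dev/sdb1
  mdadm /dev/md0 --re-add /dev/sdb1

  # with a bitmap, only blocks written while it was out get resynced
  cat /proc/mdstat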
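
Finally, a sketch of the partitioned layout I described, assuming seven
disks sdb through sdh each carved into 4 equal partitions (the names are
made up to match the md14 example above):

  # one raid6 array per partition number; md14 always uses the sd*4 slices
  mdadm --create /dev/md14 --level=6 --raid-devices=7 /dev/sd[b-h]4

Each array then rebuilds in roughly a quarter of the time a whole-disk
array would, and bad blocks usually only degrade one of the four.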