Hi, thanks for the answer.

Actually, the locking only prevents re-reading the partition table, not
writing it. After "detaching" the disk, the partition table can be
re-read and it shows all the partitions. So the locking is only half a
protection in that respect, I guess (until the next restart...).

About the solution, thanks, I was implementing something similar.
Unfortunately, there is at least one use case where the partitions must,
or should, be created in the spare space of a drive already belonging to
an array, hence the need to create a partition on a disk that is part of
a running array.

For example, take 5 disks: 40 40 160 160 160 (all in GiB, or whatever
unit). The three 160s each have a 40 partition, so there is a single
RAID-6 composed of the two 40 disks and the three 40 partitions from the
160s. The rest of each 160, namely 120, is free space.

Now a new HDD of 120 is added... At this point each 160 should get an
added partition of 80, leaving another 40 free. Unfortunately, those
disks belong to the first RAID, so the partitioning and subsequent RAID
creation does not seem to be possible. On a live system.

Any other suggestions or ideas?

Thanks, bye,

pg

On Mon, Jun 21, 2010 at 11:08:31PM +0200, Stefan /*St0fF*/ Hübner wrote:
> I don't think it's too aggressive locking. If the disk wasn't locked,
> you could also shrink or expand the raid partition. That wouldn't be a
> good idea (f.e. with 0.90 metadata shrinking or expanding would be
> really bad!). This could happen with one write to the partition table,
> so writing to it should be locked.
> 
> The workaround is first grade programming: divide and conquer! Make two
> for-loops. One collecting the information, the second to apply all
> changes (first change the pt, then loop thru adding the partitions to
> the different arrays).
> 
> Yes, as easy as that.
> 
> Am 21.06.2010 22:42, schrieb Piergiorgio Sartor:
> > Hi all,
> > 
> > still playing with my wild bunch of RAID-6.
> > 
> > I'm more or less finished with a script adding
> > an HDD to the different arrays.
> > 
> > The script, originally, was going thru the different
> > arrays, collecting the partition size, creating the
> > partition (on the new disk), adding the partition
> > to the corresponding RAID volume.
> > 
> > Something like:
> > 
> > for v in raid_devices
> >     find start end part
> >     parted /dev/sdX mkpart part start end
> >     mdadm --add $v /dev/sdXpart
> > end
> > 
> > This works only for the first partition.
> > 
> > The issue seems to be that, after the "--add", the
> > device is locked and the partition table *cannot*
> > be updated (the kernel cannot).
> > The consequence is that the successive "parted",
> > while succeeding, report a failure (not a problem),
> > and the /dev/sdXpart does not appear. This means
> > it will not be added to the next device.
> > 
> > In other words, the script starts with /dev/sdX.
> > It creates /dev/sdX1.
> > It adds /dev/sdX1 to the RAID.
> > It creates /dev/sdX2...
> > /dev/sdX2 does not appear, the add fails...
> > ...
> > 
> > Is this intended behaviour? Or a bit aggressive locking?
> > 
> > Any possible solution not involving stopping the RAIDs?
> > 
> > Thanks a lot, if you need more info, please let me know.
> > 
> > bye,
> > 

-- 
piergiorgio
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
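[For the archive: Stefan's two-pass "divide and conquer" workaround could be
sketched roughly as below. This is only an illustration, not the poster's
actual script: the partition geometry, disk name /dev/sdX, and array names
/dev/md0 and /dev/md1 are made-up placeholders, and the commands are printed
(dry run) rather than executed. The point is the ordering: write the whole
partition table first, and only then start the mdadm --add calls that lock
the disk.]

```shell
#!/bin/sh
set -eu

# Pass 1: collect the geometry of every partition to create, before
# touching anything. Here it is hard-coded; a real script would derive
# it from the existing arrays' component sizes.
partition_plan() {
    printf '%s\n' "1MiB 40GiB" "40GiB 80GiB"
}

# Pass 2: emit the command sequence -- first ALL the parted calls
# (while the kernel can still re-read the partition table), then ALL
# the mdadm --add calls. Dry run: commands are echoed, not run.
build_commands() {
    disk=$1; shift          # new disk, e.g. /dev/sdX
    i=1
    partition_plan | while read -r start end; do
        echo "parted -s $disk mkpart p$i $start $end"
        i=$((i + 1))
    done
    i=1
    for md in "$@"; do      # one target array per new partition, in order
        echo "mdadm --add $md $disk$i"
        i=$((i + 1))
    done
}

build_commands /dev/sdX /dev/md0 /dev/md1
```

Run as-is, this prints the two parted commands followed by the two mdadm
commands, i.e. the order that avoids the locked-table failure described above.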