Re: embedding area is unusually small... (GRUB2 on software RAID1)

Thank you Antonio!

--- On Wed, 1/6/10, Antonio Perez <ap23563m@xxxxxxx> wrote:

> However, you did repartition md0, while still expecting to
> boot from md0.
> 
> That is "odd" to me. Just make a FS in md0 and use it as
> /boot (and root).

I have been, and still am, puzzled by the relative efficiency of the different ways of setting up RAID; what I mean by efficiency comes down to the number of clock cycles needed to perform the writes. While wandering down that path, I figured that the way I set it up would be the most efficient. Of course, I do not know enough about the actual kernel/RAID code to tell for sure; my intuition simply dictated it this way.

> > should I create all the partitions first and build
> > them into so many /dev/md[01234] devices separately maybe?
>
> I could tell you what seems to be a reasonable setup IMO:
>     200Mb --> {sda1,sdb1} --> md1 --> RAID1 --> use as /boot
>     5Gb   --> {sda2,sdb2} --> md2 --> RAID1 --> use as /
>     ...

...and that is exactly what is also on my list; your suggestion reinforces my thoughts, thanks. Nevertheless, it still does not tell me whether a few md devices built on large underlying disk partitions, or many md devices built on smaller underlying disk partitions, are the way to go. To be absolutely clear, is the

/dev/sd[ab]1 => /dev/md0 => /dev/md0p[123...] [RAID1]
/dev/sd[ab]2 => /dev/md1 => /dev/md1p[123...] [RAID0]

or the

/dev/sd[ab][123...] => /dev/md[012...] [RAID1,RAID0]

setup the most stable, efficient and robust solution? The way I see it, there should be a difference somewhere, somehow, right? About the first one, thanks to Michael E. for pointing it out, we already know that one has to be careful about how the partitions are chosen.
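
For reference, here is a minimal sketch of how I picture the two layouts being created with mdadm; the device names simply follow the diagrams above, and the exact commands are my assumption rather than something I have already run:

    # Layout 1: one array per disk pair, partitioned afterwards. This needs a
    # kernel recent enough to expose /dev/md0p1-style partitions on md devices;
    # older setups would use --auto=part and /dev/md_dN names instead.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
    fdisk /dev/md0   # carve md0p1, md0p2, ... out of the RAID1 array
    fdisk /dev/md1   # carve md1p1, md1p2, ... out of the RAID0 array

    # Layout 2: partition the disks first, then one small array per partition pair.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    # ...and so on, with a filesystem made directly on each /dev/mdN.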

Furthermore, I think the speed gain of RAID0 would start to vanish if the RAID1 side needs disk access at the same time, since both arrays would be sitting on the same two spindles; in fact, the whole system's efficiency would degrade. The only way out would be to use four disks as

/dev/sd[ab]1 => /dev/md0 => /dev/md0p[123...] [RAID1]
/dev/sd[cd]1 => /dev/md1 => /dev/md1p[123...] [RAID0]

Right? Keep in mind that I am not after what RAID10 can offer. I can afford to give up the data safety on the RAID0 side in exchange for speed.
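
To make the four-disk variant concrete, a hedged sketch of what I have in mind (device names and the RAID0 chunk size are examples only):

    # RAID1 over the first disk pair for the system, RAID0 over the second
    # pair for the scratch area; --chunk is in kilobytes and just a guess here.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=0 --chunk=256 --raid-devices=2 /dev/sdc1 /dev/sdd1

That way the RAID0 never competes with the RAID1 for the same spindles.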

> Having only two disks means to use only RAID1. RAID0 is too
> risky IMO as any disk failure means a complete loss of data.

I will be developing a numerical electromagnetics code that will do quite a bit of number crunching and data dumping with 4x Tesla C1060 GPUs. I will need the RAID0 for that purpose, as a scratch space, if you will, so it is a must-have for me.
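
In practice I would simply put a filesystem straight on the RAID0 array and mount it as scratch space, something along these lines (the filesystem choice and mount point are just placeholders):

    mkfs.ext4 /dev/md1    # or xfs, whichever turns out faster for large sequential dumps
    mkdir -p /scratch
    mount /dev/md1 /scratch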

> I am a common user who is trying to help you.

Thanks, it's highly appreciated.

> But, in this list alone, there are three posts from you with exactly the
> same content and question.

That's really odd. I would never do that intentionally. As I wrote earlier, I did submit it to three lists, help-grub, grub-devel and linux-raid, but not to the same one three times. The subject is related to both RAID and GRUB, which is why I sent it three ways. My apologies again to all those who got it in three copies, whether by being subscribed to all three lists or by some mysterious triple submission of mine.

> If you are not getting any feedback, perhaps you should re-think 
> of a different way to make your questions. And, please,
> allow more time for an answer.

I am a patient man, so not getting answers would not drive me to re-post the same message again.

> Thanks for your cooperation.

Absolutely. And I thank you all for your help.

All the best,
Tibor
