Re: raid10 layout for 2xSSDs

Keld Jørn Simonsen <keld@xxxxxxxxxx> writes:

> On Mon, Nov 16, 2009 at 04:26:32PM +0100, Goswin von Brederlow wrote:
>> Kasper Sandberg <postmaster@xxxxxxxxxxx> writes:
>> 
>> > Hello.
>> >
>> > I've been wanting to create a raid10 array of two SSDs, and I am
>> > currently considering the layout.
>> >
>> > As I understand it, the near layout is similar to raid1 and will only
>> > provide a speedup if there are 2 reads at the same time, not for a
>> > single sequential read.
>> >
>> > So the choice is really between far and offset. As I see it, the
>> > difference is that offset tries to reduce the seeking for writing
>> > compared to far, but that if you don't consider the seeking penalty,
>> > average sequential write speed across the entire array should be roughly
>> > the same with offset and far, with offset perhaps being a tad more
>> > "stable". Is this a correct assumption? If it is, that would mean offset
>> > provides a higher "guaranteed" speed than far, but with a lower maximum
>> > speed.
>> >
>> > Best regards,
>> > Kasper Sandberg
>> 
>> Doesn't offset have the copies of each stripe right next to each other
>> (just rotated)? So writing one stripe would actually write a 2-block
>> contiguous chunk per device.
>> 
>> With far copies, the stripes are far from each other and you get 2
>> separate contiguous chunks per device.
>> 
>> What I'm aiming at is that offset might fit better into erase blocks,
>> cause less internal fragmentation on the disk and give better wear
>> leveling. That might improve speed and lifetime. But that is just a
>> thought. Maybe test it and ask Intel (or other vendors) about it.
>
> I think the caching of the file system evens all of this out, if we are
> talking about SSDs. The presumption here is that there is no rotational
> latency with an SSD, and no head movement.

The filesystem has nothing to do with this. It caches the same way in
both situations. The only change happens at the block layer.

> The caching means that for writing, more buffers are chained together
> and can be written at once. For near, logical blocks 1-8
> can be written to sector 0 of disk 1 in one go, and logical blocks
> 1-8 can be written to sector 0 of disk 2 in one go.

Which is what I was saying.

> For far it will be, for disk 1: blocks 1, 3, 5 and 7 to sector 0, and
> blocks 2, 4, 6 and 8 to sector n/2 (n being the number of sectors on the
> disk partition). For far and disk 2, it will be blocks 2, 4, 6 and 8 to
> sector 0, and blocks 1, 3, 5 and 7 to sector n/2. Caching thus reduces
> seeking significantly, from once per block to once per flushing of the
> cache (syncing). Similarly, the cache would also almost eliminate
> seeking for the offset layout.

There is no seeking (head movement) and no rotational latency
involved. That part is completely irrelevant.

The important part is that you now have 4 IO operations of half the
size compared to the 2 IO operations of the offset case. The speed and
wear will depend on the quality of the SSD and how well it copes with
small IO.
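
To make this concrete, here is a quick Python sketch of where the two
copies of each chunk end up on a 2-disk array in the three layouts. It
is only a simplified model of the layouts being discussed, not the
kernel's actual mapping code, so treat the exact numbers as
illustrative.

# Simplified model of md raid10 chunk placement on a 2-device array.
# This is only an illustration of near/far/offset, not the real md code.

NDEV = 2          # number of devices
NCOPIES = 2       # number of copies of each chunk

def place_chunk(layout, chunk, chunks_per_dev):
    """Return [(device, row), ...] for both copies of logical chunk `chunk`."""
    stripe, pos = divmod(chunk, NDEV)
    if layout == "near":
        # Copies sit side by side on the same row (raid1-like for 2 disks).
        slot = chunk * NCOPIES
        return [((slot + k) % NDEV, (slot + k) // NDEV) for k in range(NCOPIES)]
    if layout == "offset":
        # Each stripe is repeated on the next row with the devices rotated,
        # so the two copies on a device are on adjacent rows.
        return [((pos + k) % NDEV, stripe * NCOPIES + k) for k in range(NCOPIES)]
    if layout == "far":
        # The second copy lives in the far half of each device, devices
        # rotated, so the two copies are half a device apart.
        return [((pos + k) % NDEV, k * (chunks_per_dev // NCOPIES) + stripe)
                for k in range(NCOPIES)]
    raise ValueError(layout)

for layout in ("near", "offset", "far"):
    print(layout)
    for c in range(4):
        print("  chunk", c, "->", place_chunk(layout, c, chunks_per_dev=1000))

With offset, the rows touched on each device form one contiguous run per
device; with far they fall into two runs per device, half a device
apart, which is where the two extra (smaller) IO operations come from.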

> but I would like to see some numbers on this, for SSD.
> Why don't you try it out and tell us what you find?

I would be interested in this myself. I don't have an SSD yet, but I'm
tempted to buy one. When you test, please also test random access. I
would guess that in any sequential test the amount of caching going on
will make all IO operations so big that no difference shows.
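
If someone does measure this, even something as simple as the following
rough Python sketch gives a first impression of random-read behaviour.
The device path is just a placeholder, and since it does not use
O_DIRECT the page cache can still serve reads (exactly the masking
effect mentioned above); a dedicated benchmarking tool is of course the
better choice for real numbers.

import os, random, time

PATH = "/dev/md0"   # placeholder for the array under test
BLOCK = 4096        # read size in bytes
COUNT = 10000       # number of random reads

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # device/file size in bytes
random.seed(0)
start = time.time()
for _ in range(COUNT):
    offset = random.randrange(0, size - BLOCK)
    os.pread(fd, BLOCK, offset & ~(BLOCK - 1))   # aligned 4 KiB read
elapsed = time.time() - start
print("%.0f random %d-byte reads per second" % (COUNT / elapsed, BLOCK))
os.close(fd)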

> Best regards
> keld

Best regards
        Goswin
