On 06/09/2010 06:21 PM, Aryeh Gregor wrote:
> On Wed, Jun 9, 2010 at 7:06 AM, MRK <mrk@xxxxxxxxxxxxx> wrote:
>> Same problem with write-mostly/write-behind, I think. I don't know how
>> long the queue is that holds data already committed to the SSD but not
>> yet committed to the HDD, but it can't be very long. I'm reading "man
>> md" right now and it's not entirely clear on this. My impression is
>> that the queue between the two is either
>> /sys/block/<hdd-device>/queue/nr_requests or the write-intent bitmap
>> (if set). In the nr_requests case it will be very short, so the SSD
>> can give you quick bursts but sustained performance will be that of
>> the HDD.
>
> I tried this once and posted some bonnie++ results:
> https://kerneltrap.org/mailarchive/linux-raid/2010/1/31/6742263
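For what it's worth, the queue depth in question is easy to inspect and tune per device. A minimal sketch (it stands in a temp directory so it runs without a real block device; on a live system the path would be /sys/block/<hdd-device>/queue/nr_requests, and 128 is the usual kernel default):

```shell
# Stand-in for /sys/block/<hdd-device>/queue/nr_requests so the commands
# below can run anywhere; on a real system you would cat/echo the sysfs
# path directly (writing it requires root).
SYSQ="$(mktemp -d)/queue"
mkdir -p "$SYSQ"
echo 128 > "$SYSQ/nr_requests"   # 128 is the common kernel default

# Read the current queue depth for the device:
cat "$SYSQ/nr_requests"

# Raise it (deeper queue, more requests in flight to the HDD):
echo 512 > "$SYSQ/nr_requests"
cat "$SYSQ/nr_requests"
```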
Thanks for your tests. The write-mostly array seems to go roughly as fast
as the SSD itself, if I interpret your results correctly (did you really
saturate the write-behind queue?). An HDD-only test would have been
interesting too (with the SSDs failed and removed).
Secondly:
I now realize that the write-behind distance is settable (see
--write-behind= in man mdadm). However, it says it needs the write-intent
bitmap to work. This makes me think it is not really safe against SSD
failure. Is the data in the write-behind queue also kept in RAM, or does
it exist only on the SSD device (pointed to by the bitmap)? In the second
case, if the SSD dies, the HDD copy will likely be corrupt, so it's not
really like having a RAID. In the first case, I don't understand why it
should need the write-intent bitmap at all.
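For reference, here is how those knobs fit together at creation time. A sketch only (device names /dev/ssd1 and /dev/hdd1 are placeholders, and the write-behind value of 1024 is arbitrary):

```shell
# Sketch: RAID1 with an SSD as the primary leg and an HDD marked
# write-mostly, allowing writes to the HDD to lag behind.
# --write-behind requires a write-intent bitmap, hence --bitmap=internal.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=1024 \
      /dev/ssd1 --write-mostly /dev/hdd1

# Per man mdadm, --write-behind defaults to 256 outstanding writes when
# given without a value. The setting can be changed on an existing array
# by removing and re-adding the bitmap with --grow:
#   mdadm --grow /dev/md0 --bitmap=none
#   mdadm --grow /dev/md0 --bitmap=internal --write-behind=2048
```

These commands need root and real block devices, so treat them as a configuration recipe rather than something to paste blindly.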