Re: [RFC] raid5: add a log device to fix raid5/6 write hole issue


 



On Wed, 2015-04-01 at 18:46 +0000, Williams, Dan J wrote:
> On Wed, Apr 1, 2015 at 11:36 AM, Piergiorgio Sartor
> <piergiorgio.sartor@xxxxxxxx> wrote:
> > On Tue, Mar 31, 2015 at 08:47:04PM -0700, Dan Williams wrote:
> >> On Mon, Mar 30, 2015 at 3:25 PM, Shaohua Li <shli@xxxxxx> wrote:
> >> > This is my attempt to fix raid5/6 write hole issue, it's not for merge
> >> > yet, I post it out for comments. Any comments and suggestions are
> >> > welcome!
> >> >
> >> > Thanks,
> >> > Shaohua
> >> >
> >> > We expect a complete raid5/6 stack with reliability and high
> >> > performance. Currently raid5/6 has 2 issues:
> >> >
> >> > 1. read-modify-write for small-size IO. To fix this issue, a cache layer
> >> > above raid5/6 can be used to aggregate writes into full-stripe writes.
> >> > 2. write hole issue. A write log below raid5/6 can fix the issue.
> >> >
> >> > We plan to use a SSD to fix the two issues. Here we just fix the write
> >> > hole issue.
> >> >
> >> > 1. We don't try to fix the issues together. A cache layer will do write
> >> > acceleration. A log layer will fix the write hole. The separation will
> >> > simplify things a lot.
> >> >
> >> > 2. The current assumption is that flashcache/bcache will be used as the
> >> > cache layer. If they don't work well, we can fix them or add a simple
> >> > cache layer for raid write aggregation later. We also assume the cache
> >> > layer will absorb writes, so the log doesn't need to worry about write latency.
> >>
> >> It seems neither bcache nor dm-cache is tackling the write-buffering
> >> problem head on... they still seem to be concerned with some amount of
> >> read caching which I can see as useful for file servers and
> >> workstations, but not necessarily scale out storage.
> >>
> >> I'll try to set aside time to take a look at the patch this week.
> >
> > There is one thing I do not really get.
> >
> > The target is to avoid the "write hole", which happens,
> > for example, when there is a sudden power failure.
> >
> > Now, how can it be assured, in that case, that the "cache"
> > device is safe after the power is restored?
> 
> If you lose the cache the data-loss damage is greater, but this has
> always been the case with hardware-raid adapters.
> 
> > Doesn't this solution just shift the problem from
> > the array to a different device (an SSD, for example)?
> >
> > Speaking of SSD, these are quite "power failure"
> > sensitive, it seems...
> 
> Simple, if a cache-device is not itself power-failure safe then it
> should not be used for power-failure protection.

I think this would be a good application for some of the newer
technologies coming out, such as NVDIMM and persistent memory.
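
To make the write-hole discussion above concrete, here is a minimal sketch of the write-ahead-log idea: stripe data and parity are made durable on the log device first, and only then written to the member disks; after a crash, anything still in the log is replayed whole, so data and parity can never disagree. This is illustrative Python, not the actual patch or md code; names like `StripeLog` and `Raid5Array` are hypothetical.

```python
# Hypothetical sketch of a write-ahead log below raid5 (not the real
# md implementation); all class and method names are made up.

class StripeLog:
    """Log device: a stripe is durable here before it hits the array."""
    def __init__(self):
        self.entries = {}          # stripe_number -> (data, parity)

    def append(self, stripe_no, data, parity):
        # In the real design this would be an ordered write plus a
        # flush/FUA to the SSD log device.
        self.entries[stripe_no] = (data, parity)

    def discard(self, stripe_no):
        # Log space is reclaimed once the stripe is safely on disk.
        self.entries.pop(stripe_no, None)


class Raid5Array:
    def __init__(self, log):
        self.log = log
        self.disks = {}            # stripe_number -> (data, parity)

    def write_stripe(self, stripe_no, data):
        parity = self._parity(data)
        # 1. Commit to the log first -- this is what closes the hole.
        self.log.append(stripe_no, data, parity)
        # 2. Only then update data and parity on the member disks.
        self.disks[stripe_no] = (data, parity)
        # 3. Retire the log entry once the array write is stable.
        self.log.discard(stripe_no)

    def recover(self):
        # After a crash, replay whatever is still in the log: the
        # stripe is rewritten whole, keeping data and parity consistent.
        for stripe_no, (data, parity) in self.log.entries.items():
            self.disks[stripe_no] = (data, parity)
        self.log.entries.clear()

    @staticmethod
    def _parity(data):
        # XOR parity across the data chunks, as in raid5.
        p = 0
        for chunk in data:
            p ^= chunk
        return p
```

A crash between the log append and the array write leaves the entry in the log, and `recover()` replays it; a crash mid-stripe-write on the array is also covered, since the full stripe is replayed from the log rather than reconstructed from possibly-inconsistent disks.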




