RE: [PATCH] disable queue flag test in barrier check

FYI - a related problem is seen with Solaris and ZFS when users attach
them to hardware-based RAID subsystems.  The vendors had to make
firmware tweaks to cope with Solaris's flush-to-disk-after-every-write
behavior.

Not sure what you mean about non-volatile vs. volatile write cache,
however.  If you want to see whether the write cache is enabled on a
disk drive, or even on a logical disk behind a hardware-based RAID,
under Linux, then google "mode page editor" for plenty of choices.
Also look up ZFS, write cache and RAID and you'll find information
that applies just as easily to Linux md setups.
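
(A couple of standard tools will also report the cache setting
directly; rough commands only, with /dev/sda standing in for whatever
device you actually have:)

    # ATA/SATA drive: prints the current write-caching setting (on/off)
    hdparm -W /dev/sda

    # SCSI/SAS device, or a RAID logical disk that speaks SCSI:
    # read the WCE (Write Cache Enable) bit from the caching mode page
    sdparm --get=WCE /dev/sda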


-----Original Message-----
From: linux-raid-owner@xxxxxxxxxxxxxxx
[mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of Eric Sandeen
Sent: Thursday, June 26, 2008 8:25 AM
To: Timothy Shimmin
Cc: xfs-oss; LinuxRaid; NeilBrown; jeremy@xxxxxxxxx
Subject: Re: [PATCH] disable queue flag test in barrier check

Timothy Shimmin wrote:

> Also from memory, I believe Neil checked this removal into the
> SLES10sp1 tree and some SGI boxes started having slowdowns
> (looking at Dave's email below - we were not wanting to tell them
> to use nobarrier but needed it to work by default - I forget now).

But that's an admin issue.

The way it is now, for example a home user of md raid1 (me!) can't run
barriers even if they want to.

Until there is a way to know whether a write cache is non-volatile, the
only safe option is to enable barriers when possible.
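
(Anyone who already knows their cache is battery-backed can still opt
out explicitly rather than relying on the check; roughly, for XFS on an
md device - device and mount point are just placeholders:)

    # cache is known non-volatile, so tell XFS not to issue barriers
    mount -o nobarrier /dev/md0 /mnt/data

    # or make it persistent via /etc/fstab:
    # /dev/md0  /mnt/data  xfs  nobarrier  0 0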

> 6.
>> Date: Wed, 25 Jun 2008 08:57:24 +1000
>> From: Dave Chinner <david@xxxxxxxxxxxxx>
>> To: Eric Sandeen <sandeen@xxxxxxxxxxx>
>> Cc: LinuxRaid <linux-raid@xxxxxxxxxxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>
>> Subject: Re: md raid1 passes barriers, but xfs doesn't use them?
>>
>> Yeah, the problem the last time this check was removed was that a
>> bunch of existing hardware had barriers enabled when they weren't
>> necessary (e.g. it had NVRAM), and it went 5x slower on MD raid1
>> devices. Having to change the root drive config on a wide install
>> base was considered much more of a support pain than leaving the
>> check there. I guess that was more of a distro upgrade issue than
>> a mainline problem, but that's the history. Hence I think we
>> should probably do whatever everyone else is doing here....
>>
>> Cheers,
>>
>> Dave.
> 
> So I guess my question is whether there are cases where we are
> going to be in trouble again.
> Jeremy, do you see some problems?

FWIW, the problem *I* foresee is that some people are going to slow down
when using the defaults, yes, because barriers will start working again.
But I don't see any other safe way around it.
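
(One quick sanity check after mounting is to look at the kernel log;
the exact wording varies by kernel version, but XFS logs a message when
it has to turn barriers off:)

    # older kernels print something like
    #   "Disabling barriers, not supported by the underlying device"
    dmesg | grep -i barrier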

Education would be in order, I suppose.  :)

-Eric

> --Tim
> 
> 
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


