Re: RAID5 created by 8 disks works with xfs

Streaming workloads don't benefit much from writeback cache.
Writeback can absorb spikes, but if you have a constant load that
exceeds what your disks can handle, you'll only have good performance
up to the point where the writeback cache fills. Once you hit
dirty_bytes, dirty_ratio, or the expiry timeout, your system will be
crushed with I/O beyond recovery. With such a constant I/O load it's
best to limit your writeback cache to a relatively small size.
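
For example (values purely illustrative, not tuned for your hardware),
the dirty page cache can be capped with the vm.dirty_* sysctls:

    sysctl -w vm.dirty_bytes=67108864             # cap dirty data at ~64 MiB
    sysctl -w vm.dirty_background_bytes=16777216  # start background writeback at ~16 MiB

Setting the *_bytes variants overrides the corresponding *_ratio settings.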

You are right that merging could help to some degree, but you likely
won't be merging I/Os from separate streams, so your workload is still
terribly random and you just end up with larger random I/Os. I don't
think it will make up for the difference between your workload and
your configuration.
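
If you want to see how much merging is actually happening, iostat from
sysstat will show it (a rough sketch; run it while the recorder is under
load):

    iostat -x 5     # watch rrqm/s, wrqm/s and avgrq-sz per device

If the merge counters stay low and the average request size stays small,
the elevator isn't buying you much.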

On Sun, Apr 1, 2012 at 12:20 AM, daobang wang <wangdb1981@xxxxxxxxx> wrote:
> I have a different opinion: the application does not write to the disk
> directly; disk I/Os will be merged in the kernel before being written,
> we just cannot calculate how many I/Os will be merged.
>
> On 4/1/12, daobang wang <wangdb1981@xxxxxxxxx> wrote:
>> So sorry, the kernel version should be 2.6.36.4, and we do not use a
>> distro; we compiled the kernel and user-space code ourselves.
>>
>> I'm trying to reproduce the input/output error issue; the system was
>> restarted, and I will dump the dmesg log if I can reproduce it.
>>
>> Thanks again,
>> Daobang Wang.
>>
>> On 4/1/12, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
>>> On 4/1/2012 12:12 AM, daobang wang wrote:
>>>> Thank you very much!
>>>> I got it, so we can remove the Volume Group and Logical Volume to save
>>>> resources.
>>>> And I will try RAID5 with 16 disks to write 96 streams in total again.
>>>
>>> Why do you keep insisting on RAID5?!?!  It is not suitable for your
>>> workload.  It sucks Monday through Saturday and twice on Sunday for this
>>> workload.
>>>
>>> Test your 16 drive RAID5 array head to head with the linear array + XFS
>>> architecture I gave you instructions to create, and report back your
>>> results.
>>>
>>>> I used the Linux kernel 2.6.26.4.
>>>
>>> Which distro?
>>>
>>> 2.6.26 is *ancient* and has storage layer bugs.  It does NOT have
>>> delaylog, which was introduced in 2.6.35, and wasn't fully performant
>>> until 2.6.38+.
>>>
>>> You're building and testing a new platform with a terribly obsolete
>>> distribution.  You need a much newer kernel and distro.  3.0.x would be
>>> best.  Debian 6.0.4 with a backport 3.0.x kernel would be a good start.
>>>
>>>> And we do not have BBWC
>>>
>>> Then you must re-enable barriers or perennially suffer more filesystem
>>> problems.  I simply cannot emphasize enough how critical write barriers
>>> are to filesystem consistency.
>>>
>>>> The application has 16kb cache per stream, Is it possible to optimize
>>>> it if we use 32kb or 64kb cache?
>>>
>>> No.  Read the app's documentation.  This caching is to prevent dropped
>>> frames in the recorded file.  Increasing this value won't affect disk
>>> performance.
>>>
>>> --
>>> Stan
>>>
>>