Re: any recommendation of using EnhanceIO?

Hi Jan,

On Tue, Aug 18, 2015 at 5:00 AM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
> I already evaluated EnhanceIO in combination with CentOS 6 (and backported 3.10 and 4.0 kernel-lt kernels, if I remember correctly).
> It worked fine during benchmarks and stress tests, but once we ran DB2 on it, it panicked within minutes and took all the data with it (almost literally - files that weren't touched, like OS binaries, were b0rked and the filesystem was unsalvageable).

Out of curiosity, were you using EnhanceIO in writeback mode?  I
assume so, as a read cache should not hurt anything.
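
If it helps with a repro: a quick way to confirm which mode (and block size) a
cache is actually running with is to look at EnhanceIO's proc entries. A minimal
Python sketch, assuming this build exposes /proc/enhanceio/<cache_name>/config -
the exact layout may vary between versions, so treat it as illustrative only:

import glob, os

# Assumption: this EnhanceIO build exposes per-cache entries under
# /proc/enhanceio/<cache_name>/ (config, stats, ...); the exact layout may
# differ between versions, so treat this purely as an illustration.
PROC_BASE = "/proc/enhanceio"

for cache_dir in sorted(glob.glob(os.path.join(PROC_BASE, "*"))):
    config = os.path.join(cache_dir, "config")
    if not os.path.isfile(config):
        continue
    print("cache:", os.path.basename(cache_dir))
    with open(config) as f:
        for line in f:
            # the config listing should include the caching mode
            # (read-only / write-through / write-back) and the block size
            if "mode" in line or "block" in line:
                print("   ", line.strip())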

Thanks,
Alex

> If you disregard this warning - the performance gains weren't that great either, at least in a VM. It had problems when flushing to disk after reaching the dirty watermark, and the block size had some not-well-documented implications (not sure now, but I think it only cached IO _larger_ than the block size, so if your database keeps incrementing an XX-byte counter, it will go straight to disk).
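
One way to sanity-check that behaviour is to time small synchronous writes
against the cached volume and compare them with writes at or above the cache
block size; if the small ones consistently show spinning-disk latency, they are
bypassing the cache. A rough Python sketch - the path and sizes are purely
illustrative, adjust them to your setup:

import mmap, os, time

# Scratch file on the filesystem that sits on top of the cached device.
# Purely illustrative - point it somewhere harmless, it will be overwritten.
PATH = "/mnt/cached/latency_probe.bin"
SIZES = [512, 4096, 65536]   # below, at, and above a typical cache block size
COUNT = 200                  # writes per size

def aligned_buf(size):
    # O_DIRECT needs page-aligned buffers; an anonymous mmap gives us that
    buf = mmap.mmap(-1, size)
    buf.write(b"\xab" * size)
    return buf

def avg_write_ms(size):
    # on 4K-sector devices the 512-byte case may fail with EINVAL; drop it if so
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
    buf = aligned_buf(size)
    start = time.time()
    for i in range(COUNT):
        os.lseek(fd, i * size, os.SEEK_SET)
        os.write(fd, buf)
        os.fsync(fd)
    os.close(fd)
    return (time.time() - start) * 1000.0 / COUNT

for size in SIZES:
    print("%6d-byte writes: %.3f ms avg" % (size, avg_write_ms(size)))

With a write-back SSD cache in front, the 512-byte case should land in the same
ballpark as the larger sizes; if it is an order of magnitude slower, those small
writes are going straight to the spinning disks.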
>
> Flashcache doesn't respect barriers (or does it now?) - if that's OK for you, then go for it; it should be stable, and I used it in production in the past without problems.
>
> bcache seemed to work fine, but I needed to
> a) use it for root
> b) disable and enable it on the fly (doh)
> c) make it non-persistent (flush it) before reboot - not sure if that was possible either (see the sketch below this list)
> d) do all that in a customer's VM, and that customer didn't have a strong enough technical background to fiddle with it...
> So I haven't tested it heavily.
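
For (c), the approach I'd try is: switch the cache to writethrough, wait for the
dirty data to drain, then detach the backing device from the cache set via
sysfs. A minimal Python sketch, assuming the cached device shows up as bcache0 -
the knob names (cache_mode, state, dirty_data, detach) come from the bcache
sysfs interface, but double-check them on your kernel before scripting a reboot
around this:

import time

BCACHE = "/sys/block/bcache0/bcache"   # adjust to the bcache device in question

def write_knob(path, value):
    with open(path, "w") as f:
        f.write(value)

def read_knob(path):
    with open(path) as f:
        return f.read().strip()

# 1. stop generating new dirty data: switch the cache to writethrough
write_knob(BCACHE + "/cache_mode", "writethrough")

# 2. wait until the existing dirty data has been written back to the backing device
while read_knob(BCACHE + "/state") == "dirty":
    print("dirty data remaining:", read_knob(BCACHE + "/dirty_data"))
    time.sleep(5)

# 3. detach the backing device from the cache set, so the next boot
#    doesn't depend on the cache being present
write_knob(BCACHE + "/detach", "1")
print("detached, state is now:", read_knob(BCACHE + "/state"))

(Re-attaching afterwards should just be a matter of echoing the cache set's UUID
into the attach knob next to detach, so it doesn't have to be a one-way step.)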
>
> Bcache should be the obvious choice if you are in control of the environment. At least you can cry on LKML's shoulder when you lose data :-)
>
> Jan
>
>
>> On 18 Aug 2015, at 01:49, Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx> wrote:
>>
>> What about https://github.com/Frontier314/EnhanceIO?  The last commit was 2
>> months ago, but there are no external contributors :(
>>
>> The nice thing about EnhanceIO is that there is no need to change the device
>> name, unlike with bcache, flashcache, etc.
>>
>> Best regards,
>> Alex
>>
>> On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:
>>> I did some (non-Ceph) work on these and concluded that bcache was the best
>>> supported, most stable, and fastest.  This was ~1 year ago, so take it with
>>> a grain of salt, but that's what I would recommend.
>>>
>>> Daniel
>>>
>>>
>>> ________________________________
>>> From: "Dominik Zalewski" <dzalewski@xxxxxxxxxxx>
>>> To: "German Anders" <ganders@xxxxxxxxxxxx>
>>> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>> Sent: Wednesday, July 1, 2015 5:28:10 PM
>>> Subject: Re:  any recommendation of using EnhanceIO?
>>>
>>>
>>> Hi,
>>>
>>> I asked the same question a week or so ago (just search the mailing list
>>> archives for EnhanceIO :) and got some interesting answers.
>>>
>>> It looks like the project is pretty much dead since it was bought out by HGST.
>>> Even their website has some broken links regarding EnhanceIO.
>>>
>>> I’m keen to try flashcache or bcache (it’s been in the mainline kernel for
>>> some time).
>>>
>>> Dominik
>>>
>>> On 1 Jul 2015, at 21:13, German Anders <ganders@xxxxxxxxxxxx> wrote:
>>>
>>> Hi cephers,
>>>
>>>   Has anyone out there implemented EnhanceIO in a production environment?
>>> Any recommendations? Any perf output to share showing the difference between
>>> using it and not?
>>>
>>> Thanks in advance,
>>>
>>> German
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



