Re: any recommendation of using EnhanceIO?

> On 18 Aug 2015, at 15:50, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> 
> 
> 
> On 08/18/2015 06:47 AM, Nick Fisk wrote:
>> Just to chime in: I gave dm-cache a limited test, but its lack of a proper write-back cache ruled it out for me. It only performs write-back caching on blocks already on the SSD, whereas I need something that works like a battery-backed RAID controller, caching all writes.
>> 
>> It's amazing: you can get something like a 100x performance increase with RBDs doing sync writes when you give them just 1GB of write-back cache with flashcache.
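>> 
>> For reference, the sort of setup I mean is roughly this - the device names and cache size are just placeholders:
>> 
>>   # 1GB write-back flashcache in front of an RBD-backed block device
>>   flashcache_create -p back -s 1g rbd_wb_cache /dev/sdb1 /dev/rbd0
>>   # then put the filesystem on /dev/mapper/rbd_wb_cache instead of /dev/rbd0
>> 
>> Sync writes land on the SSD first and get destaged to the RBD in the background.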
> 
> For your use case, is it OK that data may live on the flashcache for some amount of time before making it to Ceph to be replicated?  We've wondered internally whether this kind of trade-off is acceptable to customers should the flashcache SSD fail.
> 

Was it me pestering you about it? :-)
All my customers need this desperately - people don't care about having an RPO of 0 seconds when all hell breaks loose.
What they care about is their apps being slow all the time, which is effectively an "outage".
What I (the sysadmin) care about is having consistent data, so that all I have to do is start the VMs back up.

Any ideas on how to approach this? I think even checkpoints (i.e. reverting to a known point in the past) would be great and sufficient for most people...
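
For the checkpoint idea I'm thinking of nothing fancier than periodic RBD snapshots, roughly like this (image and snapshot names are just examples, and you'd want to flush the host-side cache and fsfreeze the guest first so the snapshot is consistent):

  # take a checkpoint of a VM image
  rbd snap create rbd/vm-disk-001@checkpoint-20150818
  # and if all hell breaks loose, roll back to it
  rbd snap rollback rbd/vm-disk-001@checkpoint-20150818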


>> 
>> 
>>> -----Original Message-----
>>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
>>> Jan Schermer
>>> Sent: 18 August 2015 12:44
>>> To: Mark Nelson <mnelson@xxxxxxxxxx>
>>> Cc: ceph-users@xxxxxxxxxxxxxx
>>> Subject: Re:  any recommendation of using EnhanceIO?
>>> 
>>> I did not. Not sure why now - probably for the same reason I didn't
>>> extensively test bcache.
>>> I'm not a real fan of device mapper though, so if I had to choose I'd still go for
>>> bcache :-)
>>> 
>>> Jan
>>> 
>>>> On 18 Aug 2015, at 13:33, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
>>>> 
>>>> Hi Jan,
>>>> 
>>>> Out of curiosity, did you ever try dm-cache?  I've been meaning to give it a
>>>> spin but haven't had the spare cycles.
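>>>>
>>>> If I do get around to it I'll probably just go through lvmcache; from the
>>>> docs it's something like the following, with the VG/LV and device names
>>>> being placeholders:
>>>>
>>>>   lvcreate -L 10G -n cachepool vg_test /dev/sdb
>>>>   lvcreate -L 1G -n cachemeta vg_test /dev/sdb
>>>>   lvconvert --type cache-pool --poolmetadata vg_test/cachemeta vg_test/cachepool
>>>>   lvconvert --type cache --cachepool vg_test/cachepool vg_test/lv_osd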
>>>> 
>>>> Mark
>>>> 
>>>> On 08/18/2015 04:00 AM, Jan Schermer wrote:
>>>>> I already evaluated EnhanceIO in combination with CentOS 6 (and
>>>>> backported 3.10 and 4.0 kernel-lt, if I remember correctly).
>>>>> It worked fine during benchmarks and stress tests, but once we ran DB2
>>>>> on it, it panicked within minutes and took all the data with it (almost
>>>>> literally: files that weren't touched, like OS binaries, were b0rked and
>>>>> the filesystem was unsalvageable).
>>>>> If you disregard this warning, the performance gains weren't that great
>>>>> either, at least in a VM. It had problems flushing to disk after reaching
>>>>> the dirty watermark, and the block size has some not-well-documented
>>>>> implications (not sure now, but I think it only cached IO _larger_ than the
>>>>> block size, so if your database keeps incrementing an XX-byte counter it
>>>>> will go straight to disk).
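>>>>>
>>>>> (If you want to check that behaviour yourself, a quick fio run of small sync
>>>>> writes against the cached device should tell you whether they ever hit the
>>>>> SSD - the device name, block size and runtime below are just an example:
>>>>>
>>>>>   fio --name=small-sync-writes --filename=/dev/vdb --rw=randwrite \
>>>>>       --bs=4k --direct=1 --sync=1 --iodepth=1 --runtime=60 --time_based
>>>>>
>>>>> and then compare against --bs=64k or whatever you configured as the cache
>>>>> block size.)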
>>>>> 
>>>>> Flashcache doesn't respect barriers (or does it now?) - if that's OK for
>>>>> you, then go for it; it should be stable, and I've used it in the past in
>>>>> production without problems.
>>>>> 
>>>>> bcache seemed to work fine, but I needed to
>>>>> a) use it for root,
>>>>> b) disable and enable it on the fly (doh),
>>>>> c) make it non-persistent (flush it) before reboot - not sure that was even
>>>>> possible (see the sysfs sketch below), and
>>>>> d) do all of that in a customer's VM, where the customer didn't have a strong
>>>>> enough technical background to fiddle with it...
>>>>> So I haven't tested it heavily.
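>>>>>
>>>>> For b) and c) I believe something along these lines should work via sysfs,
>>>>> though I never verified it on that setup:
>>>>>
>>>>>   # stop caching new writes and let the dirty data drain
>>>>>   echo writethrough > /sys/block/bcache0/bcache/cache_mode
>>>>>   cat /sys/block/bcache0/bcache/dirty_data   # wait until this reaches 0
>>>>>   # or detach the cache set entirely before the reboot
>>>>>   echo 1 > /sys/block/bcache0/bcache/detach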
>>>>> 
>>>>> Bcache should be the obvious choice if you are in control of the
>>>>> environment. At least you can cry on LKML's shoulder when you lose
>>>>> data :-)
>>>>> 
>>>>> Jan
>>>>> 
>>>>> 
>>>>>> On 18 Aug 2015, at 01:49, Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
>>>>>> wrote:
>>>>>> 
>>>>>> What about https://github.com/Frontier314/EnhanceIO?  Last commit 2
>>>>>> months ago, but no external contributors :(
>>>>>> 
>>>>>> The nice thing about EnhanceIO is that there is no need to change the
>>>>>> device name, unlike with bcache, flashcache, etc.
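>>>>>>
>>>>>> From what I remember the setup is a single eio_cli call against the existing
>>>>>> device, something like this (device and cache names are just placeholders):
>>>>>>
>>>>>>   eio_cli create -d /dev/vdb -s /dev/sdc1 -m wb -c vm_cache
>>>>>>
>>>>>> and /dev/vdb keeps its name afterwards, so fstab, libvirt configs etc. don't
>>>>>> need to change.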
>>>>>> 
>>>>>> Best regards,
>>>>>> Alex
>>>>>> 
>>>>>> On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz <dang@xxxxxxxxxx>
>>>>>> wrote:
>>>>>>> I did some (non-Ceph) work on these, and concluded that bcache was
>>>>>>> the best supported, most stable, and fastest.  This was ~1 year ago,
>>>>>>> so take it with a grain of salt, but that's what I would recommend.
>>>>>>> 
>>>>>>> Daniel
>>>>>>> 
>>>>>>> 
>>>>>>> ________________________________
>>>>>>> From: "Dominik Zalewski" <dzalewski@xxxxxxxxxxx>
>>>>>>> To: "German Anders" <ganders@xxxxxxxxxxxx>
>>>>>>> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>>>>>> Sent: Wednesday, July 1, 2015 5:28:10 PM
>>>>>>> Subject: Re:  any recommendation of using EnhanceIO?
>>>>>>> 
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I asked the same question in the last week or so (just search the mailing
>>>>>>> list archives for EnhanceIO :) and got some interesting answers.
>>>>>>> 
>>>>>>> Looks like the project has been pretty much dead since it was bought out
>>>>>>> by HGST. Even their website has some broken links regarding EnhanceIO.
>>>>>>> 
>>>>>>> I'm keen to try flashcache or bcache (the latter has been in the mainline
>>>>>>> kernel for some time).
>>>>>>> 
>>>>>>> Dominik
>>>>>>> 
>>>>>>> On 1 Jul 2015, at 21:13, German Anders <ganders@xxxxxxxxxxxx>
>>>>>>> wrote:
>>>>>>> 
>>>>>>> Hi cephers,
>>>>>>> 
>>>>>>>   Is there anyone out there who has implemented EnhanceIO in a production
>>>>>>> environment? Any recommendations? Any perf output to share showing the
>>>>>>> difference between using it and not?
>>>>>>> 
>>>>>>> Thanks in advance,
>>>>>>> 
>>>>>>> German

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



