Re: filestore_fiemap and other ceph tweaks

I mean it seemed fine for the current master branch running under kernel
2.6.32, but I can't be sure there are no other problems, because it
hasn't been verified in production.
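
One quick way to sanity-check FIEMAP behaviour on a given kernel and
filesystem before enabling the option is filefrag from e2fsprogs, which
exercises the same ioctl (the path below is just an example):

    # create a sparse file, then dump its extent map via the FIEMAP ioctl
    dd if=/dev/zero of=/tmp/fiemap-test.img bs=4k count=1 seek=1024
    filefrag -v /tmp/fiemap-test.img

If filefrag reports obviously wrong extents for a sparse file like this,
I wouldn't trust filestore_fiemap on that kernel/filesystem combination
either.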

On Tue, Feb 3, 2015 at 12:21 AM, Haomai Wang <haomaiwang@xxxxxxxxx> wrote:
> Hmm, I think there are still bugs in 2.6.32. I only tried to make the
> write block size aligned (that change is already merged into master),
> but I haven't verified it in production. Our production cluster runs on
> a customized kernel based on 3.12.
>
> On Tue, Feb 3, 2015 at 12:18 AM, J-P Methot <jpmethot@xxxxxxxxxx> wrote:
>> Thank you very much. Also, thank you for the presentation you gave in
>> Paris; it was very instructive.
>>
>> So, from what I understand, the fiemap patch is proven to work on kernel
>> 2.6.32. The good news is that we use the same kernel in our setup. How
>> long has your production cluster been running with fiemap set to true?
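>>
>> (Side note, in case it's useful to anyone following along: the live
>> value can be checked on a running OSD through its admin socket, e.g.
>>
>>     ceph daemon osd.0 config get filestore_fiemap
>>
>> run on the node hosting that OSD, with osd.0 replaced by whichever OSD
>> you want to inspect.)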
>>
>>
>> On 2/2/2015 10:47 AM, Haomai Wang wrote:
>>>
>>> There is a more recent discussion of this in the PR
>>> (https://github.com/ceph/ceph/pull/1665).
>>>
>>>
>>> On Mon, Feb 2, 2015 at 11:05 PM, J-P Methot <jpmethot@xxxxxxxxxx> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I've been looking into increasing the performance of my Ceph cluster
>>>> for OpenStack, which will be moved into production soon. It's a full
>>>> 1 TB SSD cluster with 16 OSDs per node over 6 nodes.
>>>>
>>>> As I searched for possible tweaks to implement, I stumbled upon
>>>> UnitedStack's presentation at the OpenStack Paris summit (video:
>>>>
>>>> https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/build-a-high-performance-and-high-durability-block-storage-service-based-on-ceph).
>>>>
>>>> Now, before implementing any of the suggested tweaks, I've been
>>>> reading up on each one. It's not that I distrust what's being said
>>>> there, but I thought it better to inform myself before implementing
>>>> tweaks that may strongly impact the performance and stability of my
>>>> cluster.
>>>>
>>>> One of the suggested tweaks is to set filestore_fiemap to true. The
>>>> issue is, after some research, I found that there is a RADOS block
>>>> device corruption bug linked to setting that option to true (link:
>>>> http://www.spinics.net/lists/ceph-devel/msg06851.html ). I have not
>>>> found any trace of that bug being fixed since, despite the mailing
>>>> list message being fairly old.
>>>>
>>>> Is it safe to set filestore_fiemap to true?
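>>>>
>>>> For reference, this is how I would enable it in ceph.conf, under the
>>>> [osd] section (assuming I have the syntax right):
>>>>
>>>>     [osd]
>>>>     # use the FIEMAP ioctl to find the allocated extents of objects,
>>>>     # so sparse objects can be read efficiently; the tweak in question
>>>>     filestore_fiemap = true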
>>>>
>>>> Additionally, if anybody feels like watching the video or reading the
>>>> slides (available at
>>>> http://www.spinics.net/lists/ceph-users/attachments/pdfUlINnd6l8e.pdf ),
>>>> what do you think of the other suggested tweaks and of the data
>>>> durability section?
>>>>
>>>> --
>>>> ======================
>>>> Jean-Philippe Méthot
>>>> Administrateur système / System administrator
>>>> GloboTech Communications
>>>> Phone: 1-514-907-0050
>>>> Toll Free: 1-(888)-GTCOMM1
>>>> Fax: 1-(514)-907-0750
>>>> jpmethot@xxxxxxxxxx
>>>> http://www.gtcomm.net
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>
>>
>> --
>> ======================
>> Jean-Philippe Méthot
>> Administrateur système / System administrator
>> GloboTech Communications
>> Phone: 1-514-907-0050
>> Toll Free: 1-(888)-GTCOMM1
>> Fax: 1-(514)-907-0750
>> jpmethot@xxxxxxxxxx
>> http://www.gtcomm.net
>>
>
>
>
> --
> Best Regards,
>
> Wheat



-- 
Best Regards,

Wheat
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




