Re: ceph-fuse performance on hammer and jewel

On Mon, Jun 6, 2016 at 12:23 PM, qisy <qisy@xxxxxxxxxxxx> wrote:
> Yan, Zheng:
>
>     Thanks for your reply.
>     But after changing to jewel, the application reads and writes to disk
> slowly, which confirms the IOPS numbers measured by fio.

Does your application use buffered IO or direct IO? Direct IO in
hammer is actually buffered IO, so it is expected to be faster than
direct IO in jewel.
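
For comparison, the same fio job can be run in both modes; this is only a
sketch based on the command posted further down in the thread, with nothing
changed except the -direct flag and the job name:

# buffered IO: writes land in the client page cache first
fio -ioengine=libaio -bs=4k -direct=0 -thread -rw=randwrite -size=1G \
    -filename=test.iso -name="4KB randwrite buffered" -iodepth=32 -runtime=60

# direct IO: every write bypasses the cache (honoured correctly in jewel)
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=1G \
    -filename=test.iso -name="4KB randwrite direct" -iodepth=32 -runtime=60

If the application mostly does buffered writes, the first job is the more
relevant comparison.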


Yan, Zheng


>     Are there any other possibilities?
>
>
> On 16/6/1 21:39, Yan, Zheng wrote:
>
>> On Wed, Jun 1, 2016 at 6:52 PM, qisy <qisy@xxxxxxxxxxxx> wrote:
>>>
>>> my test fio
>>>
>>> fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=1G
>>> -filename=test.iso  -name="CEPH 4KB randwrite test" -iodepth=32
>>> -runtime=60
>>>
>> You were testing direct-IO performance. Hammer does not handle
>> direct IO correctly; the data is cached in ceph-fuse.
>>
>> Regards
>> Yan, Zheng
>>
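
One way to sanity-check whether the hammer numbers come from that client-side
caching is to force each write to stable storage; a sketch, adding only
-fsync=1 to the job quoted above:

fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=1G \
    -filename=test.iso -name="CEPH 4KB randwrite fsync" -iodepth=32 \
    -runtime=60 -fsync=1

If the hammer client's IOPS drop sharply with this job while jewel's stay
roughly the same, that points to the gap being caching in ceph-fuse rather
than a regression in the OSD path.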
>>> On 16/6/1 15:22, Yan, Zheng wrote:
>>>
>>>> On Mon, May 30, 2016 at 10:22 PM, qisy <qisy@xxxxxxxxxxxx> wrote:
>>>>>
>>>>> Hi,
>>>>>       After jewel was released with a production-ready CephFS, I upgraded
>>>>> the old hammer cluster, but IOPS dropped a lot.
>>>>>
>>>>>       I ran a test with 3 nodes, each with 8 cores, 16 GB of RAM, and 1
>>>>> OSD; the OSD device itself got 15000 IOPS.
>>>>>
>>>>>       I found the ceph-fuse client performs better on hammer than on
>>>>> jewel.
>>>>>
>>>>>       fio randwrite 4K
>>>>>       |               | jewel server | hammer server |
>>>>>       | jewel client  | 480+ IOPS    | not tested    |
>>>>>       | hammer client | 6000+ IOPS   | 6000+ IOPS    |
>>>>
>>>> please post the fio config file.
>>>>
>>>> Regards
>>>> Yan, Zheng
>>>>
>>>>>       A ceph-fuse (jewel) mount against the jewel server gets very poor
>>>>> IOPS; are there any special options that need to be set?
>>>>>       If I keep using ceph-fuse (hammer) with the jewel server, will that
>>>>> cause any problems?
>>>>>
>>>>>       thanks
>>>>>
>>>>>       my ceph.conf below:
>>>>>
>>>>> [global]
>>>>> fsid = xxxxxxx
>>>>> mon_initial_members = xxx, xxx, xxx
>>>>> mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
>>>>> auth_cluster_required = cephx
>>>>> auth_service_required = cephx
>>>>> auth_client_required = cephx
>>>>>
>>>>> filestore_xattr_use_omap = true
>>>>> osd_pool_default_size = 2
>>>>> osd_pool_default_min_size = 1
>>>>> mon_data_avail_warn = 15
>>>>> mon_data_avail_crit = 5
>>>>> mon_clock_drift_allowed = 0.6
>>>>>
>>>>> [osd]
>>>>> osd_disk_threads = 8
>>>>> osd_op_threads = 8
>>>>> journal_block_align = true
>>>>> journal_dio = true
>>>>> journal_aio = true
>>>>> journal_force_aio = true
>>>>> filestore_journal_writeahead = true
>>>>> filestore_max_sync_interval = 15
>>>>> filestore_min_sync_interval = 10
>>>>> filestore_queue_max_ops = 25000
>>>>> filestore_queue_committing_max_ops = 5000
>>>>> filestore_op_threads = 32
>>>>> osd_journal_size = 20000
>>>>> osd_map_cache_size = 1024
>>>>> osd_max_write_size = 512
>>>>> osd_scrub_load_threshold = 1
>>>>> osd_heartbeat_grace = 30
>>>>>
>>>>> [mds]
>>>>> mds_session_timeout = 120
>>>>> mds_session_autoclose = 600
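
To confirm which of these values a running daemon actually picked up after the
upgrade, the admin socket can be queried; a sketch, assuming an OSD named
osd.0 and the default socket location:

# show every filestore-related setting the daemon is running with
ceph daemon osd.0 config show | grep filestore

# or query a single option
ceph daemon osd.0 config get osd_op_threads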
>>>
>>>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



