Fwd: [ceph-users] poor performance when recovering

---------- Forwarded message ----------
From: Libin Wu <hzwulibin@xxxxxxxxx>
Date: 2015-12-08 9:12 GMT+08:00
Subject: Re: [ceph-users] poor performance when recovering
To: Oliver Dzombic <info@xxxxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>


Yeah, we will upgrade in the near future. But I'm afraid the recovery
problem also exists in the Hammer version.
So, why does recovery affect performance so much, and is there any plan
to improve it?

2015-12-07 22:29 GMT+08:00 Oliver Dzombic <info@xxxxxxxxxxxxxxxxx>:
> Hi,
>
> maybe you should first upgrade.
>
> "
>
>     Posted by sage
>     November 19th, 2015
>
> This is a bugfix release for Firefly.  As the Firefly 0.80.x series is
> nearing its planned end of life in January 2016 it may also be the last.
> "
>
> I think you are wasting time trying to analyse/fix issues on a version
> that will be EOL in 3 weeks...
>
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
>
> On 07.12.2015 at 15:26, Libin Wu wrote:
>> Btw, my ceph version is 0.80.11
>>
>> 2015-12-07 21:45 GMT+08:00 Libin Wu <hzwulibin@xxxxxxxxx>:
>>> Hi, cephers
>>>
>>> I'm testing the performance of Ceph during recovery. The scenario is simple:
>>> 1. run fio on 6 krbd devices
>>> 2. stop one OSD for 10 seconds
>>> 3. start that OSD
>>>
>>> However, when the OSD comes back up and starts recovering, fio performance
>>> drops from 9k to 1k IOPS for about 20 seconds. At the same time, we
>>> found that the latency of the SSD backing that OSD is more than 100ms, so
>>> it seems the SSD becomes the bottleneck.
>>>
>>> So we want to slow down recovery to lighten the load on the SSD while it
>>> is recovering. But configuration options like:
>>>     osd_recovery_max_active
>>>     osd_recovery_max_chunk
>>>     osd_max_backfills
>>>     osd_recovery_op_priority
>>> were all ineffective (a rough illustration of the mechanism follows below).
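>>>
>>> For reference, here is a minimal, self-contained sketch of the mechanism
>>> behind osd_recovery_op_priority. It is not Ceph code and the priority
>>> values are only examples: client and recovery ops share a priority queue
>>> inside the OSD, and higher-priority ops are dequeued first.
>>>
>>>     // Toy model of priority-based op scheduling (illustrative only).
>>>     #include <cstdio>
>>>     #include <queue>
>>>     #include <string>
>>>     #include <utility>
>>>
>>>     int main() {
>>>         // pair<priority, description>; larger priority is dispatched first
>>>         std::priority_queue<std::pair<int, std::string>> q;
>>>         q.push({63, "client write"});   // example client op priority
>>>         q.push({10, "recovery push"});  // example recovery op priority
>>>         q.push({63, "client read"});
>>>
>>>         while (!q.empty()) {
>>>             std::printf("dispatch (prio %2d): %s\n",
>>>                         q.top().first, q.top().second.c_str());
>>>             q.pop();
>>>         }
>>>         return 0;
>>>     }
>>>
>>> Lowering the recovery priority only changes the order in which ops leave
>>> that queue; once a large recovery write has reached the SSD, client I/O
>>> behind it still waits, which would be consistent with the 100ms device
>>> latency above.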
>>>
>>> After reading and changing some code, we want to add flow control to:
>>>     OSD::do_recovery
>>>
>>> Would it be feasible to do so, and does this approach have any potential
>>> problems?
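>>>
>>> To make the idea concrete, here is a rough, self-contained sketch of the
>>> kind of flow control we have in mind. It is not actual Ceph code, and the
>>> names (recover_one_object, g_recovery_sleep_ms) are made up; the point is
>>> simply to pause between recovery operations inside the do_recovery loop
>>> so the SSD gets idle time to serve client I/O.
>>>
>>>     // Toy model of a throttled recovery loop (illustrative only).
>>>     #include <chrono>
>>>     #include <cstdio>
>>>     #include <thread>
>>>
>>>     // Hypothetical knob; in a real patch this would be a runtime-tunable
>>>     // OSD config option read inside OSD::do_recovery().
>>>     static int g_recovery_sleep_ms = 10;
>>>
>>>     // Stand-in for pushing/pulling a single object to/from a peer OSD.
>>>     static void recover_one_object(int i) {
>>>         std::printf("recovering object %d\n", i);
>>>     }
>>>
>>>     static void do_recovery(int max_to_recover) {
>>>         for (int i = 0; i < max_to_recover; ++i) {
>>>             recover_one_object(i);
>>>             // Flow control: leave a gap between recovery ops instead of
>>>             // issuing them back to back against the SSD.
>>>             if (g_recovery_sleep_ms > 0)
>>>                 std::this_thread::sleep_for(
>>>                     std::chrono::milliseconds(g_recovery_sleep_ms));
>>>         }
>>>     }
>>>
>>>     int main() {
>>>         do_recovery(5);
>>>         return 0;
>>>     }
>>>
>>> A fixed sleep is the simplest form; a token bucket on recovered bytes per
>>> second would give smoother control. Either way, the throttle has to sit in
>>> the recovery path itself rather than in the queue priorities.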
>>>
>>> Thanks!


