Re: Re: [PATCH V1] raid5: Only move IO_THRESHOLD stripes from delay_list to hold_list at a time.

On 2012-07-16 15:46 NeilBrown <neilb@xxxxxxx> wrote:
>On Fri, 13 Jul 2012 18:31:11 +0800 majianpeng <majianpeng@xxxxxxxxx> wrote:
>
>> To improve write performance by reducing the number of preread stripes,
>> only move IO_THRESHOLD stripes from delay_list to hold_list at a time.
>> 
>> Using the following command:
>> dd if=/dev/zero of=/dev/md0 bs=2M count=52100.
>> 
>> With the default settings, speed is 95MB/s.
>> With preread_bypass_threshold set to zero, speed is 105MB/s.
>> With this patch, speed is 123MB/s.
>> 
>> Setting preread_bypass_threshold to zero improves performance, but not
>> as much as this patch does.
>> I think there may be two reasons:
>> 1: if the bio is REQ_SYNC;
>> 2: in the function __get_priority_stripe():
>> >> } else if (!list_empty(&conf->hold_list) &&
>> >>		   ((conf->bypass_threshold &&
>> >>		     conf->bypass_count > conf->bypass_threshold) ||
>> >>		    atomic_read(&conf->pending_full_writes) == 0)) {
>> preread_bypass_threshold is only one of the conditions for taking a
>> stripe from hold_list, so directly limiting the length of hold_list
>> gives better performance.
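
For context: the branch quoted above is from __get_priority_stripe() in
drivers/md/raid5.c. An abridged sketch of that branch follows; the comments
are explanatory annotations, not part of the kernel source:

	} else if (!list_empty(&conf->hold_list) &&
		   ((conf->bypass_threshold &&
		     conf->bypass_count > conf->bypass_threshold) ||
		    atomic_read(&conf->pending_full_writes) == 0)) {
		/* hold_list is only drained once enough requests have
		 * bypassed it, or when no full-stripe writes are
		 * pending.  bypass_threshold is therefore just one of
		 * the gates, which is why capping the length of
		 * hold_list itself helps more than setting the
		 * threshold to zero. */
		sh = list_entry(conf->hold_list.next, typeof(*sh), lru);
		conf->bypass_count -= conf->bypass_threshold;
		if (conf->bypass_count < 0)
			conf->bypass_count = 0;
	}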
>> 
>> Signed-off-by: Jianpeng Ma <majianpeng@xxxxxxxxx>
>> ---
>>  drivers/md/raid5.c |    3 +++
>>  1 files changed, 3 insertions(+), 0 deletions(-)
>> 
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 04348d7..a6749bb 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -3662,6 +3662,7 @@ finish:
>>  
>>  static void raid5_activate_delayed(struct r5conf *conf)
>>  {
>> +	int count = 0;
>>  	if (atomic_read(&conf->preread_active_stripes) < IO_THRESHOLD) {
>>  		while (!list_empty(&conf->delayed_list)) {
>>  			struct list_head *l = conf->delayed_list.next;
>> @@ -3672,6 +3673,8 @@ static void raid5_activate_delayed(struct r5conf *conf)
>>  			if (!test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
>>  				atomic_inc(&conf->preread_active_stripes);
>>  			list_add_tail(&sh->lru, &conf->hold_list);
>> +			if (++count >= IO_THRESHOLD)
>> +				break;
>>  		}
>>  	}
>>  }
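
For readability, this is roughly how raid5_activate_delayed() reads with the
patch applied: a sketch reconstructed from the hunks above, with the lines
elided by the diff context assumed from the raid5.c of that era:

	static void raid5_activate_delayed(struct r5conf *conf)
	{
		int count = 0;
		if (atomic_read(&conf->preread_active_stripes) < IO_THRESHOLD) {
			while (!list_empty(&conf->delayed_list)) {
				struct list_head *l = conf->delayed_list.next;
				struct stripe_head *sh;
				sh = list_entry(l, struct stripe_head, lru);
				list_del_init(l);
				clear_bit(STRIPE_DELAYED, &sh->state);
				if (!test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
					atomic_inc(&conf->preread_active_stripes);
				list_add_tail(&sh->lru, &conf->hold_list);
				/* new: promote at most IO_THRESHOLD stripes
				 * per call so hold_list stays short */
				if (++count >= IO_THRESHOLD)
					break;
			}
		}
	}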
>
>
>I tried this patch - against my current for-next tree - on my own modest
>hardware and could not measure any difference in write throughput.
>
>Maybe some other patch has fixed something.
>
>However it is still reading a lot during a write-only test and that is not
>ideal.  It would be nice if we could arrange that it didn't read at all.
>
Compared with kernels 2.6.18/2.6.32, there is no reading at all during this test,
so I think more work needs to be done here.
>NeilBrown


