Re: [PATCH] SMPDesign: Remove duplicate item

> One extreme approach for the greatest efficiency might be not to
> parallelize at all, to use a single CPU, and to get a negative
> speedup (i.e., speed down).

But we can also achieve the greatest efficiency by using data ownership at the maximum degree of parallelism.
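
For what it's worth, here is a minimal sketch (a hypothetical per-thread counter, not taken from the book or the patch) of what I mean by data ownership at the maximum degree of parallelism: each thread updates only the counter it owns, so the fast path uses no locking primitive at all, and the totals are combined only after all threads have exited.

/* Hypothetical example, not from the book or the patch under discussion. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NUPDATES 1000000

/* Each element is owned by exactly one thread; padding to avoid
 * false sharing is omitted for brevity. */
static unsigned long counter[NTHREADS];

static void *worker(void *arg)
{
        long id = (long)arg;

        for (long i = 0; i < NUPDATES; i++)
                counter[id]++;  /* no locking: this thread owns counter[id] */
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        unsigned long sum = 0;

        for (long i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, (void *)i);
        for (long i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);

        /* Aggregation needs no primitives either: all workers have exited. */
        for (long i = 0; i < NTHREADS; i++)
                sum += counter[i];
        printf("sum = %lu\n", sum);
        return 0;
}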

> Anyway, this list is more of a guideline of approaches to
> parallel-programming design considerations.

Got it.


Thanks,
Alan



> On Apr 12, 2023, at 10:51 AM, Akira Yokosawa <akiyks@xxxxxxxxx> wrote:
> 
> On Wed, 12 Apr 2023 09:15:12 +0800, Alan Huang wrote:
>>>> -	Similarly, the greater the desired efficiency, the smaller
>>>> +	Similarly, the greater the desired efficiency, the bigger
>>>> 	the achievable speedup.
>>> 
>>> I might not be fully woken up, but this change doesn't make
>>> sense to me.
>> 
>> The original sentence means (to me): if we make more efficient use of the CPUs,
>> we get a smaller speedup.
>> 
>> If I misunderstood, please correct me.
> 
> My interpretation of the sentence:
> 
>   Similarly, if our goal is greater efficiency, we might end up
>   with a smaller speedup.
> 
> One extreme approach for the greatest efficiency might be not to
> parallelize at all, to use a single CPU, and to get a negative
> speedup (i.e., speed down).
> 
>> 
>>>   If the critical sections have high overhead compared to
>>>   the primitives guarding them, the best way to improve
>>>                                                 ^^^^^^^
>>>   speedup is to increase parallelism by moving to reader/writer
>>>   ^^^^^^^
>>>   locking, data locking, asymmetric, or data ownership.
>>>            ^^^^^^^^^^^^^                ^^^^^^^^^^^^^^
>>> 
>>> Item 5:
>>> 
>>>   If the critical sections have high overhead compared to
>>>   the primitives guarding them and the data structure being
>>>                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>   guarded is read much more often than modified, the best way
>>>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>   to increase parallelism is to move to reader/writer locking
>>>   or asymmetric primitives.
>>> 
>>> Are you sure they are duplicates ???
>> 
>> 
>> If the critical sections have high overhead compared to the primitives guarding them,
>> it means that the overhead of the primitives is relatively small, but the aim of data locking
>> and data ownership is to reduce the overhead of the primitives.
> 
> Well, I think Paul's motivation for special-casing read-mostly
> data comes from his expertise in RCU.
> 
> I'd like to respect his motivation.
> 
> Anyway, this list is more of a guideline of approaches to
> parallel-programming design considerations.
> Why do you want it to be so precise ???
> 
>        Thanks, Akira
> 
>> 
>> Thanks,
>> Alan
>> 
> [...]




