Re: [RFC][PATCH v1 5/9] ima: allocating iint improvements

On Wed, Feb 1, 2012 at 6:58 PM, Eric Paris <eparis@xxxxxxxxxxxxxx> wrote:
> On Mon, Jan 30, 2012 at 5:14 PM, Mimi Zohar <zohar@xxxxxxxxxxxxxxxxxx> wrote:
>> From: Dmitry Kasatkin <dmitry.kasatkin@xxxxxxxxx>
>>
>
>>  static struct rb_root integrity_iint_tree = RB_ROOT;
>> -static DEFINE_SPINLOCK(integrity_iint_lock);
>> +static DEFINE_RWLOCK(integrity_iint_lock);
>>  static struct kmem_cache *iint_cache __read_mostly;
>
> Has any profiling been done here?  rwlocks have been shown to
> actually be slower on multiprocessor systems in a number of cases, due
> to the cache line bouncing they require.  I believe the current kernel
> guidance is: if you have a short critical section and cannot show
> profiling data proving rwlocks are better, just stick with a spinlock.

No, I have not done any profiling.
My assumption was that rwlocks are better when there are many readers.
If what you say is true, then rwlocks are nearly useless here: for big
critical sections one needs rw semaphores anyway. Sketches of both
variants below.
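
For reference, the read side being converted is just a short rbtree
walk. A minimal sketch along the lines of integrity_iint_find() in
security/integrity/iint.c (the exact body may differ from the tree):

static struct rb_root integrity_iint_tree = RB_ROOT;
static DEFINE_RWLOCK(integrity_iint_lock);	/* as in the hunk above */

struct integrity_iint_cache *integrity_iint_find(struct inode *inode)
{
	struct integrity_iint_cache *iint;

	if (!IS_IMA(inode))
		return NULL;

	read_lock(&integrity_iint_lock);	/* spin_lock() before this patch */
	iint = __integrity_iint_find(inode);	/* short rbtree walk */
	read_unlock(&integrity_iint_lock);

	return iint;
}

The hold time here is tiny, so the cost is dominated by bouncing the
lock cacheline between CPUs, which a rwlock does not avoid.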
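
And if the critical section were long enough that sleeping is
acceptable, the tool would be an rw_semaphore rather than a rwlock.
A hypothetical sketch (none of these names are in the patch):

static DECLARE_RWSEM(integrity_iint_sem);	/* hypothetical */

static struct integrity_iint_cache *iint_find_slow(struct inode *inode)
{
	struct integrity_iint_cache *iint;

	down_read(&integrity_iint_sem);		/* may sleep; readers share */
	iint = __integrity_iint_find(inode);	/* imagine a long section */
	up_read(&integrity_iint_sem);

	return iint;
}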

- Dmitry