Re: [PATCH] CIFS: Do not kmalloc under the flocks spinlock

2012/2/22 Jeff Layton <jlayton@xxxxxxxxx>:
>
> Hmm...looking at my machine:
>
> (gdb) p sizeof(struct cifsLockInfo)
> $1 = 144
>
> So that's 144 bytes per struct...what's the upper limit on "count"?
>
> It seems like that could potentially be a large allocation, and we have
> to do this in the oplock break codepath.
>
> Instead of doing this, would it be better to allocate these when
> the VFS first calls ->lock, and attach them to a new member of
> file_lock.fl_u?

The number of locks can be different every time, so we can't
pre-allocate them. I can create a new struct just for this - without
the extra llist, blist and block_q fields - which should save enough
memory.
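
Something like this, for example (just a sketch - the struct name is
made up, the fields are the ones the push loop below actually fills in):

#include <linux/types.h>

/*
 * Slimmed-down per-lock record: only what we need to send the lock
 * to the server, without the llist/blist/block_q machinery that
 * struct cifsLockInfo carries for blocking-lock handling.
 */
struct lock_to_push {
	__u64 offset;	/* flock->fl_start */
	__u64 length;
	__u32 pid;	/* flock->fl_pid */
	__u16 netfid;	/* cfile->netfid */
	__u8 type;	/* CIFS_RDLCK or CIFS_WRLCK */
};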

>
>>       lock_flocks();
>>       cifs_for_each_lock(cfile->dentry->d_inode, before) {
>> @@ -950,35 +962,38 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
>>                       type = CIFS_RDLCK;
>>               else
>>                       type = CIFS_WRLCK;
>> -
>> -             lck = cifs_lock_init(flock->fl_start, length, type,
>> -                                  cfile->netfid);
>> -             if (!lck) {
>> -                     rc = -ENOMEM;
>> -                     goto send_locks;
>> -             }
>> +             lck = &lcks[i];
>>               lck->pid = flock->fl_pid;
>> -
>> -             list_add_tail(&lck->llist, &locks_to_send);
>> +             lck->netfid = cfile->netfid;
>> +             lck->length = length;
>> +             lck->type = type;
>> +             lck->offset = flock->fl_start;
>> +             i++;
>> +             if (i == count)
>> +                     break;
>>       }
>> -
>> -send_locks:
>>       unlock_flocks();
>>
>> -     list_for_each_entry_safe(lck, tmp, &locks_to_send, llist) {
>> +     if (i == count)
>> +             cERROR(1, "Can't push all brlocks!");
>> +
>
> This sort of thing always makes me a little worried. Sure, you're
> allocating a bunch of extra locks, but what's the potential effect
> of not being able to push all the locks here? The user is going to
> see this message and have no clue of what it means.
>
> Even I'm not sure what this means in practice. The VFS will have a
> record of this lock, but the server won't...
>

I really think there will be no such situation, because we hold
inode->lock_mutex - the locking state should stay the same between the
counting pass and the filling pass.
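
To make the ordering clear (a rough sketch with simplified names, not
the actual patch):

/*
 * Both passes run under cinode->lock_mutex, so - as far as our brlock
 * state is concerned - nothing can be added or removed between the
 * counting pass and the filling pass, and the fill can never see more
 * than "count" entries.
 */
mutex_lock(&cinode->lock_mutex);

count = 0;
lock_flocks();
cifs_for_each_lock(cfile->dentry->d_inode, before)
	count++;
unlock_flocks();

/* safe to sleep here - the flocks spinlock is not held */
lcks = kcalloc(count, sizeof(struct lock_to_push), GFP_KERNEL);
/* (error handling elided for brevity) */

i = 0;
lock_flocks();
cifs_for_each_lock(cfile->dentry->d_inode, before) {
	/* copy (*before)->fl_start, fl_pid, ... into lcks[i] */
	if (++i == count)
		break;
}
unlock_flocks();

mutex_unlock(&cinode->lock_mutex);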

-- 
Best regards,
Pavel Shilovsky.
--
To unsubscribe from this list: send the line "unsubscribe linux-cifs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

