Re: Reg Upcall Cache Invalidation: asynchronous notification + Cleanup of expired states

On 04/23/2015 05:04 PM, Anand Subramanian wrote:
On 04/23/2015 02:46 PM, Soumya Koduri wrote:
Hi Shyam/Niels,

To reiterate the issues:

a) At present, when two clients access the same file, we send the
'cache_invalidation' upcall notification to the first client in the
fop callback (cbk) path of the second client. This can hurt brick latency,
especially for directories, where there is a much higher chance of multiple
clients accessing the same object at the same time (a simplified sketch of
this in-line behaviour follows after (b) below).

b) Clean up the expired state of clients which no longer access any of the
files/directories.
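
To make (a) concrete, here is a rough, self-contained C sketch of the
in-line behaviour being described. All names here (upcall_client_t,
send_cache_invalidation, fop_cbk_notify_others) are illustrative only and
are not the actual upcall xlator symbols:

#include <string.h>

typedef struct upcall_client {
    struct upcall_client *next;
    char                  client_uid[64];
} upcall_client_t;

/* stand-in for the RPC that notifies one client of the invalidation */
static void
send_cache_invalidation(const char *client_uid)
{
    (void)client_uid;   /* the real code would submit an upcall RPC here */
}

/* called from the fop cbk path of the client that just touched the entry */
static void
fop_cbk_notify_others(upcall_client_t *clients, const char *fop_client_uid)
{
    upcall_client_t *c = NULL;

    for (c = clients; c; c = c->next) {
        if (strcmp(c->client_uid, fop_client_uid) == 0)
            continue;                            /* skip the client doing the fop */
        send_cache_invalidation(c->client_uid);  /* sent in-line, in the cbk path */
    }
    /* the fop reply is unwound only after this loop completes, which is
     * where the extra brick-side latency comes from */
}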


Proposed solution -
a) I initially intended to maintain a list of all upcall notifications
and spawn a single thread which reads the list and sends the
notifications out sequentially.

But the issue I foresee with this approach is that, since there would
be a single thread sending notifications (which could be huge in number
depending on the client I/O workload), clients may see a delay in
receiving them, which may defeat the whole

a) If the number of clients is really huge (your problem case), wouldn't
__get_upcall_client() doing an O(n) scan here, along with the strcmp(),
compete strongly with the single notification-sending thread as a
bottleneck?
__get_upcall_client() is called for a single file/dir, and the number of clients operating on the same file or directory will be minimal. What I am referring to is the total number of client entries stored across all files and directories in the volume - O(client_entries * no_of_files/dirs_accessed) - which could grow huge. For example, even with just 4 clients per inode, a million accessed inodes means roughly 4 million stored client entries, while each individual __get_upcall_client() scan still touches only about 4 of them.

b) If you use a thread pool, do you also plan to have a kind of read-write
lock, so that lookups versus add/delete operations on the client list are
handled sensibly and the get_upcall_client path - which may be invoked more
often than the other operations on the list - is not held up? I am assuming
that is the case; do correct me if that is not true.

Locks are already taken while scanning/deleting the client list.
In any case, these new threads do not need to do any lookup at all. They just receive an encapsulated notification structure plus the client_uid, which needs to be passed to protocol/server to make an RPC call (see the sketch below).
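
A minimal producer/consumer sketch of that hand-off, using plain POSIX
threads; the types and function names below (upcall_job_t, rpc_send_upcall,
upcall_enqueue, upcall_worker) are only illustrative and not what the patch
actually uses:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct upcall_job {
    struct upcall_job *next;
    char               client_uid[64];
    void              *notify_payload;   /* encapsulated notification data */
} upcall_job_t;

static upcall_job_t    *job_head, *job_tail;
static pthread_mutex_t  job_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   job_cond = PTHREAD_COND_INITIALIZER;

/* stand-in for handing the notification to protocol/server for the RPC */
static void
rpc_send_upcall(const char *client_uid, void *payload)
{
    (void)client_uid;
    (void)payload;
}

/* called from the fop cbk path: O(1), no client-list lookup needed */
void
upcall_enqueue(const char *client_uid, void *payload)
{
    upcall_job_t *job = calloc(1, sizeof(*job));

    if (!job)
        return;
    strncpy(job->client_uid, client_uid, sizeof(job->client_uid) - 1);
    job->notify_payload = payload;

    pthread_mutex_lock(&job_lock);
    if (job_tail)
        job_tail->next = job;
    else
        job_head = job;
    job_tail = job;
    pthread_cond_signal(&job_cond);
    pthread_mutex_unlock(&job_lock);
}

/* body of each notifier thread in the pool */
void *
upcall_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&job_lock);
        while (!job_head)
            pthread_cond_wait(&job_cond, &job_lock);
        upcall_job_t *job = job_head;
        job_head = job->next;
        if (!job_head)
            job_tail = NULL;
        pthread_mutex_unlock(&job_lock);

        rpc_send_upcall(job->client_uid, job->notify_payload);
        free(job);
    }
    return NULL;
}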

Thanks,
Soumya

Anand


purpose of sending these notifications. Instead, after a discussion with
KP, I plan to create synctasks to send the upcall notifications. If there
are any issues with that, we may need to fall back to a thread pool.
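
For what it's worth, the synctask variant would look roughly like the
sketch below: the notification work becomes a task function plus a
completion callback, and the fop cbk only schedules it. In GlusterFS this
would presumably sit on top of synctask_new() from libglusterfs' syncop;
the plain-pthread scheduler below is just a stand-in so the sketch stays
self-contained, and all names are illustrative:

#include <pthread.h>
#include <stdlib.h>

typedef int  (*task_fn_t)(void *opaque);
typedef void (*task_cbk_t)(int ret, void *opaque);

struct task_wrap {
    task_fn_t  fn;
    task_cbk_t cbk;
    void      *opaque;
};

static void *
task_trampoline(void *arg)
{
    struct task_wrap *t = arg;
    int ret = t->fn(t->opaque);      /* runs outside the fop cbk path */

    t->cbk(ret, t->opaque);          /* completion, e.g. free the payload */
    free(t);
    return NULL;
}

/* schedule the notification instead of sending it inline in the cbk */
static int
task_spawn(task_fn_t fn, task_cbk_t cbk, void *opaque)
{
    struct task_wrap *t = malloc(sizeof(*t));
    pthread_t tid;

    if (!t)
        return -1;
    t->fn = fn;
    t->cbk = cbk;
    t->opaque = opaque;
    if (pthread_create(&tid, NULL, task_trampoline, t) != 0) {
        free(t);
        return -1;
    }
    return pthread_detach(tid);
}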

Any thoughts?

b) The approach taken here is as follows (a rough sketch of the reaper loop
is given after this list):
  * maintain a global list containing every upcall_inode_ctx allocated
  * every time an upcall_inode_ctx is allocated, add it to the global list
  * during inode forget, mark that upcall_inode_ctx for destruction
  * a reaper thread scans through the list and
         * cleans up expired client entries
         * frees any inode_ctx that has destroy_mode set
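
A rough sketch of that reaper loop is below. The real change is in the
patch linked underneath; the field and constant names here (destroy_mode,
CLIENT_EXPIRY_SECS, REAPER_INTERVAL) are assumptions for illustration only:

#include <pthread.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define CLIENT_EXPIRY_SECS 120   /* assumed client expiry window */
#define REAPER_INTERVAL     60   /* assumed scan interval, in seconds */

typedef struct upcall_client {
    struct upcall_client *next;
    time_t                access_time;   /* refreshed on every fop */
} upcall_client_t;

typedef struct upcall_inode_ctx {
    struct upcall_inode_ctx *next;          /* link on the global list */
    upcall_client_t         *clients;       /* clients interested in this inode */
    int                      destroy_mode;  /* set during inode forget */
} upcall_inode_ctx_t;

static upcall_inode_ctx_t *ctx_list;
static pthread_mutex_t     ctx_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* drop client entries that have not accessed this inode recently */
static void
reap_expired_clients(upcall_inode_ctx_t *ctx, time_t now)
{
    upcall_client_t **pp = &ctx->clients;

    while (*pp) {
        upcall_client_t *c = *pp;
        if (now - c->access_time > CLIENT_EXPIRY_SECS) {
            *pp = c->next;
            free(c);
        } else {
            pp = &c->next;
        }
    }
}

void *
upcall_reaper_thread(void *arg)
{
    (void)arg;
    for (;;) {
        time_t now = time(NULL);

        pthread_mutex_lock(&ctx_list_lock);
        upcall_inode_ctx_t **pp = &ctx_list;
        while (*pp) {
            upcall_inode_ctx_t *ctx = *pp;
            reap_expired_clients(ctx, now);
            if (ctx->destroy_mode) {        /* inode was forgotten */
                while (ctx->clients) {
                    upcall_client_t *c = ctx->clients;
                    ctx->clients = c->next;
                    free(c);
                }
                *pp = ctx->next;
                free(ctx);
            } else {
                pp = &ctx->next;
            }
        }
        pthread_mutex_unlock(&ctx_list_lock);

        sleep(REAPER_INTERVAL);
    }
    return NULL;
}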

patch URL: http://review.gluster.org/#/c/10342/

Kindly review the changes and provide your inputs.

Thanks,
Soumya
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
