crimson-osd queues discussion

Hi all,

  Here we want to discuss the multiple queues in ceph-osd, and how we can implement crimson-osd more efficiently with or without these queues.

  We noticed there are multiple places in the current ceph-osd where a request is enqueued when some precondition is not satisfied, such as session->waiting_on_map (waiting for a map), slot->waiting (waiting for a pg), and the waiting_for_map/peered/active/flush/scrub/** queues in pg.h. We need to hold the request in these waiting queues; when a precondition is satisfied, the enqueued requests are dequeued and pushed back onto the front of the work queue, where they go through all the precondition checks again from the beginning.
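
  For illustration, here is a minimal sketch of that requeue-from-the-top pattern (OSDSketch, do_request, on_new_map etc. only mirror the shape of the real code; this is not actual ceph-osd source):

  #include <deque>
  #include <memory>

  struct Op { unsigned epoch = 0; };
  using OpRef = std::shared_ptr<Op>;

  struct OSDSketch {
    std::deque<OpRef> work_queue;        // main work queue
    std::deque<OpRef> waiting_for_map;   // parked until a newer osdmap arrives
    unsigned cur_epoch = 0;

    void execute(OpRef) { /* ... */ }

    // Every dequeued request re-runs all checks from the top.
    void do_request(OpRef op) {
      if (op->epoch > cur_epoch) {       // precondition: do we have the map?
        waiting_for_map.push_back(op);   // park it; a new map wakes it up
        return;
      }
      execute(op);
    }

    // A new map pushes parked requests back to the FRONT of the work
    // queue, and do_request() starts over from the first check.
    void on_new_map(unsigned epoch) {
      cur_epoch = epoch;
      while (!waiting_for_map.empty()) {
        work_queue.push_front(std::move(waiting_for_map.back()));
        waiting_for_map.pop_back();
      }
    }
  };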

  1. Is it necessary to go through all the precondition checks again from the beginning, or can we continue from the blocked check?

   Crimson-osd is based on the seastar framework and uses future/promise/continuation chains. When a task's precondition is not satisfied, the task returns a future immediately; when a promise fulfills that future, the continuation is pushed onto the seastar reactor's task queue to be scheduled. In this case we still need to hold a queue for each precondition to keep track of the pending futures, so that when the precondition is satisfied we can call the waiting futures' promises to fulfill them.
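
   For example, a minimal sketch of one such tracking queue built on seastar's shared_promise (MapGate, wait_for_map and on_new_map are hypothetical names, not actual crimson code):

   #include <seastar/core/future.hh>
   #include <seastar/core/shared_future.hh>

   // One gate per precondition: callers get a future that resolves when
   // the precondition becomes true; shared_promise tracks the waiters.
   class MapGate {
     seastar::shared_promise<> pr;
     bool satisfied = false;
   public:
     seastar::future<> wait_for_map() {
       if (satisfied) {
         return seastar::make_ready_future<>();
       }
       // Queues this waiter; the reactor schedules the continuation
       // once the promise is fulfilled.
       return pr.get_shared_future();
     }
     // Called when the new osdmap arrives: wakes every waiter at once.
     void on_new_map() {
       satisfied = true;
       pr.set_value();
     }
   };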

   2. We have two choices here:
     a). Use the application's own queues to schedule requests, just like the current ceph-osd (enqueue/dequeue a request from one queue to another when a precondition is not satisfied); in this case the seastar reactor task scheduler is not involved.
     b). Use the seastar reactor task queue: when a precondition is not satisfied, use the future/promise/continuation model and let the seastar reactor do the scheduling (application queues are still needed to track the pending futures).
     From our crimson-messenger experience, for a simple repeated action such as send-message, an application queue seems more effective than the seastar reactor task queue. We are not sure whether that still holds for a case as complex as osd/pg; a rough sketch of the two styles follows below.
    Which one is better for crimson-osd?
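
    To make the difference concrete, here is the shape of each style (drain_app_queue, wait_then_run and resume_via_reactor are made-up names, and this is an illustration, not a benchmark):

    #include <deque>
    #include <functional>
    #include <seastar/core/future.hh>
    #include <seastar/core/shared_future.hh>

    // (a) application queue: the application decides when parked work
    // runs, draining it inline with no reactor round-trip per item.
    std::deque<std::function<void()>> app_queue;
    void drain_app_queue() {
      while (!app_queue.empty()) {
        auto fn = std::move(app_queue.front());
        app_queue.pop_front();
        fn();
      }
    }

    // (b) reactor task queue: waiters chain continuations on a shared
    // promise; set_value() hands each continuation to the reactor as a
    // separate task to schedule.
    seastar::shared_promise<> gate;
    seastar::future<> wait_then_run(std::function<void()> fn) {
      return gate.get_shared_future().then(std::move(fn));
    }
    void resume_via_reactor() {
      gate.set_value();
    }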

   3. For QoS, do we have to use some application queue to implement it? That is, can we avoid application queues for QoS at all?
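
    One reason it seems hard to avoid: a QoS scheduler has to see all pending ops in order to pick the next one by policy, a global view that a plain FIFO reactor task queue does not expose. A toy sketch of such an application-level queue (QosItem/QosCmp are illustrative only; the real ceph-osd op queues, e.g. the weighted-priority and mclock schedulers, are far more involved):

    #include <queue>
    #include <vector>

    struct QosItem {
      unsigned priority;   // higher value is dequeued first
      unsigned seq;        // FIFO tie-breaker within one priority
    };

    struct QosCmp {
      bool operator()(const QosItem& a, const QosItem& b) const {
        if (a.priority != b.priority)
          return a.priority < b.priority;
        return a.seq > b.seq;   // earlier seq wins within a priority
      }
    };

    // The scheduler holds every pending op and picks the next by
    // priority: exactly the global view an application queue provides.
    using QosQueue = std::priority_queue<QosItem, std::vector<QosItem>, QosCmp>;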
    
Hope to get some opinions from you!

Thanks!
-Chunmei     
