Re: question on OSDService::infos_oid

On Wed, 12 Nov 2014, xinxin shu wrote:
> recently we have been focusing on 4k random writes. Dumping the
> transaction for every 4k random write op, we found that each one
> updates the pg epoch and pg info on OSDService::infos_oid. The
> transaction below is serialized on infos_oid, which may not be
> friendly to 4k random write performance. Is there any special
> consideration behind serializing pg epoch and info updates on
> infos_oid, and can we shard this to the PG level?
> 
>                 { "op_num": 1,
>                   "op_name": "omap_setkeys",
>                   "collection": "meta",
>                   "oid": "16ef7597\/infos\/head\/\/-1",
>                   "attr_lens": { "0.3c_epoch": 4,
>                       "0.3c_info": 721}},

I think we did that for simplicity without thinking about locking 
parallelism.. there is no particular need to have the keys on the same 
object.  It's a bit awkward to make the change (since we need to read the 
old scheme and write the new one), but it's nothing we haven't done 
before.

Probably a better strategy is an object per PG.  But before we pick a new 
pg object in meta/ we should make sure this is the right strategy... we 
may want something that is grouped with the PG collection and is 
compatible with whatever the 'collection' concept morphs into.

> btw, I have some questions about the DBObjectMap locks. Currently
> DBObjectMap has several locks: a cache lock and a header lock.
>
> The cache lock appears to protect the header cache, but LRUCache has
> an internal lock of its own; is it a duplicated lock?
>
> For the header lock, the annotation says it serializes access to the
> next seq and the in_use set, but after checking the code, header_lock
> seems to protect more than just in_use and next seq. My question is:
> what exactly does header_lock protect?

Sam, Somnath, or Haomai would have to answer that one...

sage 



