Missing hashpspool on some pools

Recently I noticed that we're missing the 'hashpspool' flag on some of our
production pools, which is causing the acting set of OSDs to be the same
across PGs in different pools:

  3.17 [1089,17,447]
  4.16 [1089,17,447]
  6.14 [1089,17,447]
  ^-- Notice that the pool number plus the PG number adds up to 20 for
  all of the PGs sharing this acting set


  3.18 [34,146,387]
  4.17 [34,146,387]
  6.15 [34,146,387]
  ^-- Notice that the pool number plus the PG number adds up to 21 for
  all of the PGs sharing this acting set


Here's what those pools look like:

  pool 3 'images' replicated size 3 min_size 2 crush_ruleset 2 object_hash
rjenkins pg_num 4096 pgp_num 4096 last_change 842505
min_read_recency_for_promote 1 stripe_width 0
  pool 4 'volumes' replicated size 3 min_size 2 crush_ruleset 2
object_hash rjenkins pg_num 32768 pgp_num 32768 last_change 842506
min_read_recency_for_promote 1 stripe_width 0
  pool 6 'instances' replicated size 3 min_size 2 crush_ruleset 2
object_hash rjenkins pg_num 4096 pgp_num 4096 last_change 842337
min_read_recency_for_promote 1 stripe_width 0


From what I've read, setting the 'hashpspool' flag should not only improve
data distribution but also improve recovery times, since there will be more
diversity in the PGs' acting sets.
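
For reference, I believe the flag can be toggled per pool with something along
these lines (newer releases may also want --yes-i-really-mean-it appended,
since it triggers the remap):

  ceph osd pool set images hashpspool true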

The problem with setting this flag on an existing pool is that every PG in
that pool gets remapped to a new set of OSDs.  The data distribution on those
pools is ~8% for images, ~9% for instances, and ~83% for volumes.  So I'm
thinking that I could set 'hashpspool' on only the images and instances pools
and get essentially the same benefit as setting it on all three, without
touching the bulk of the data (rough numbers below).  My question for everyone
is whether it's worth it.

Since this will be a lot of data movement, I'm also concerned about the
monitor store growing too large.  With over 1400 OSDs I'm seeing messages
like "store is getting too big! 18310 MB >= 15360 MB" rather often these
days...

Thanks,
Bryan




