Re: Strange issue with CRUSH

FWIW,

It would be very interesting to see the output of:

https://github.com/ceph/cbt/blob/master/tools/readpgdump.py

If you see something that looks anomalous, let me know; I'd like to make sure the tool is detecting issues like this.
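
As a complementary check (not that tool, just a rough sketch I'd hedge on), something like the following tallies PGs per OSD straight from the output of "ceph pg dump --format json". Where pg_stats lives in the JSON varies between releases, so that lookup is an assumption you may need to adjust:

#!/usr/bin/env python
# Hedged sketch: tally PGs per OSD from the JSON produced by
# "ceph pg dump --format json".  The key layout differs between Ceph
# releases (pg_stats may sit at the top level or under pg_map), so the
# lookup below probes both; adjust it for your cluster if needed.
import json
import sys
from collections import Counter

def summarize(path):
    with open(path) as f:
        dump = json.load(f)
    pg_stats = dump.get('pg_stats') or dump.get('pg_map', {}).get('pg_stats', [])
    per_osd = Counter()
    for pg in pg_stats:
        for osd in pg.get('up', []):      # count every OSD in the up set
            per_osd[osd] += 1
    counts = sorted(per_osd.values())
    n = len(counts)
    mean = sum(counts) / float(n)
    stddev = (sum((c - mean) ** 2 for c in counts) / float(n)) ** 0.5
    print("osds=%d min=%d max=%d mean=%.1f stddev=%.1f"
          % (n, counts[0], counts[-1], mean, stddev))

if __name__ == '__main__':
    summarize(sys.argv[1])

A large min/max spread or a stddev that is big relative to the mean would match the kind of imbalance being described.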

Mark

On 07/09/2015 06:03 PM, Samuel Just wrote:
I've seen some odd teuthology results in the last week or two which seem to be anomalous rjenkins hash behavior as well.

http://tracker.ceph.com/issues/12231
-Sam

----- Original Message -----
From: "Sage Weil" <sweil@xxxxxxxxxx>
To: "Gleb Borisov" <borisov.gleb@xxxxxxxxx>
Cc: ceph-devel@xxxxxxxxxxxxxxx
Sent: Thursday, July 9, 2015 3:06:00 PM
Subject: Re: Strange issue with CRUSH

On Fri, 10 Jul 2015, Gleb Borisov wrote:
Hi Sage,

Sorry for mailing you directly; I realize that you're quite busy at Red Hat,
but I wanted you to have a look at an issue with the CRUSH map.

No problem. I hope you don't mind I've added ceph-devel to the cc list.

I've described my first insights here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/002897.html

We continued our research and found that the distribution of PG counts per
OSD is very strange; after digging into the CRUSH source code we found the
rjenkins1 hash function.

After some testing we realized that rjenkins1's value distribution is
exponential, and this could be causing our imbalance.

Any issue with rjenkins1's hash function is very interesting and
concerning.  Can you describe your analysis and what you mean by the
distribution being exponential?
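
To make that concrete, here is a rough Python sketch of the two-input rjenkins1 hash as I remember it from src/crush/hash.c (the mix rounds, the seed, and the 231232/1232 constants are written down from memory, not a verified copy, so treat them as assumptions), bucketing the hash of sequential inputs into 16 equal bins:

# Hedged sketch of the two-input rjenkins1 hash used by CRUSH.  The mix
# rounds, seed and constants are recalled from memory and should be
# checked against src/crush/hash.c before trusting any conclusion.
M = 0xffffffff  # emulate 32-bit unsigned wraparound

def mix(a, b, c):
    a = (a - b) & M; a = (a - c) & M; a ^= c >> 13
    b = (b - c) & M; b = (b - a) & M; b = (b ^ (a << 8)) & M
    c = (c - a) & M; c = (c - b) & M; c ^= b >> 13
    a = (a - b) & M; a = (a - c) & M; a ^= c >> 12
    b = (b - c) & M; b = (b - a) & M; b = (b ^ (a << 16)) & M
    c = (c - a) & M; c = (c - b) & M; c ^= b >> 5
    a = (a - b) & M; a = (a - c) & M; a ^= c >> 3
    b = (b - c) & M; b = (b - a) & M; b = (b ^ (a << 10)) & M
    c = (c - a) & M; c = (c - b) & M; c ^= b >> 15
    return a, b, c

def rjenkins1_2(a, b):
    # seed and the 231232/1232 constants are assumptions; verify in hash.c
    h = 1315423911 ^ a ^ b
    x, y = 231232, 1232
    a, b, h = mix(a, b, h)
    x, a, h = mix(x, a, h)
    b, y, h = mix(b, y, h)
    return h

# Bucket the hash of sequential inputs (e.g. pg numbers) into 16 equal bins.
bins = [0] * 16
for pg in range(100000):
    bins[rjenkins1_2(pg, 3) >> 28] += 1   # top 4 bits pick the bin; 3 stands in for a pool id
print(bins)

Roughly equal bin counts would mean the raw 32-bit output looks uniform; counts that decay from the low bins toward the high ones would back up the "exponential" description.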

What do you think about adding an additional hashing algorithm to CRUSH? It
seems that it could improve the distribution.

I am definitely open to adding new hash functions, especially if the
current ones are flawed.  The current hash was created by making ad hoc
combinations of rjenkins' mix function with various numbers of
arguments--hardly scientific or methodical.  We did an analysis a couple
years back and found that it effectively modeled a uniform distribution,
but if we missed something or were wrong we should definitely correct it!

In any case, the important step is to quantify what is wrong with the
current hash so that we can ensure any new one is not flawed in the same
way.
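
One hedged suggestion for such a metric (a sketch, not an existing Ceph tool): a chi-squared goodness-of-fit statistic over equal-width bins of the hash output, so the current hash and any candidate replacement can be scored in exactly the same way:

# Hedged sketch of one way to quantify deviation from uniformity: a
# chi-squared goodness-of-fit statistic over equal-width bins of 32-bit
# hash output.  Score the current hash and any candidate replacement with
# the same function and compare the numbers.
import random

def chi_squared(values, nbins=256, span=2 ** 32):
    bins = [0] * nbins
    width = span // nbins
    for v in values:
        bins[v // width] += 1
    expected = float(len(values)) / nbins
    stat = sum((o - expected) ** 2 / expected for o in bins)
    return stat, nbins - 1      # statistic and degrees of freedom

# Baseline from a known-uniform source; feed the outputs of the hash under
# test (for example the rjenkins1_2 sketch above) through the same function.
baseline = [random.getrandbits(32) for _ in range(100000)]
print(chi_squared(baseline))

Running the rjenkins1 outputs through the same function and comparing against this uniform baseline gives a single number to argue about.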

Thanks-
sage


We have also tried generating some synthetic crushmaps (other bucket
types, more OSDs per host, more or fewer hosts per rack, different counts of
racks, linear osd ids, random osd ids, etc.), but didn't find any
combination with a better distribution of PGs across OSDs.

Thanks, and sorry again for bothering you directly.
--
Best regards,
Gleb M Borisov


--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


