Re: Forcing Ceph into mapping all objects to a single PG

The main issue, however, is not the hash's strength, but the fact that
once pre-computed, I'm able to use the preimages on **every Ceph
cluster out there** (since the hash function's output is a
deterministic function of the object's name alone).
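
To make this concrete, here's a rough sketch in Python (SHA-256 stands
in for the actual rjenkins-based placement hash, and the names and
parameters are made up; this illustrates the principle, not Ceph's
real code path). Because the PG depends only on the object name and
pg_num, a colliding name list computed once, offline, works against
any cluster with the same pg_num:

    import hashlib

    def pg_for(name: str, pg_num: int) -> int:
        # Unkeyed hash of the name alone -- identical on every cluster.
        h = int.from_bytes(hashlib.sha256(name.encode()).digest()[:4],
                           "little")
        return h % pg_num

    # Precompute names that all land in PG 0 for pg_num=128; the same
    # list degrades *any* cluster configured with pg_num=128.
    colliding = [n for n in ("obj-%d" % i for i in range(100000))
                 if pg_for(n, 128) == 0]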

I agree that the general issue is inherent to hash-based placement
systems.

But what I don't agree with is the following:

Why do I have to be able to calculate my object's placement for **every
Ceph cluster** out there?

Why does it not suffice for me to be able to calculate the placement
only for the cluster I'm currently accessing?

From a logical standpoint this seems reasonable. Why, then, are we not
able to constrain the placement calculation in that regard?


If the placement is bound to a specific cluster, it should suffice to
derive, e.g., a SipHash key from cluster specifics.
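
A rough sketch of the idea, assuming the cluster's fsid (or some other
per-cluster value) serves as key material; HMAC-SHA256 stands in for
SipHash here only because the Python stdlib exposes no public keyed
SipHash:

    import hashlib, hmac, uuid

    def keyed_pg_for(name: str, pg_num: int, fsid: uuid.UUID) -> int:
        # Per-cluster key; a real design might derive it via a KDF.
        key = fsid.bytes
        digest = hmac.new(key, name.encode(), hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "little") % pg_num

The same object name now maps to different PGs on different clusters.
A client still learns the fsid when it connects (it's in the monmap),
so it can compute placements for the cluster it's actually accessing,
but a preimage set precomputed offline no longer transfers to every
cluster out there.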

Is this doable from an implementation point of view?


Note: I only did this as a proof of concept for the object store.
Think about the implications if you were able to do this for, e.g.,
every RadosGW out there and the services built on top of RadosGW.