On Tue, 19 Feb 2013, Noah Watkins wrote:
> On Feb 19, 2013, at 2:22 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> > On Tue, Feb 19, 2013 at 2:10 PM, Noah Watkins <jayhawk@xxxxxxxxxxx> wrote:
> >
> > That is just truly annoying. Is this described anywhere in their docs?
>
> Not really. It's just there in the code--I can figure out the metric if
> you're interested. I suspect it is local node, local rack, off rack
> ordering, with no special tie breakers.
>
> > I don't think it would be hard to sort, if we had some mechanism for
> > doing so (crush map nearness, presumably?),
>
> Topology information from the bucket hierarchy? I think it's always some
> sort of heuristic.
>
> >> 1. Expand CephFS interface to return IP and hostname
> >
> > Ceph doesn't store hostnames anywhere--it really can't do this. All
> > it has is IPs associated with OSD ID numbers. :) Adding hostnames
> > would be a monitor and map change, which we could do, but given the
> > issues we've had with hostnames in other contexts I'd really rather
> > not.
>
> What is the fate of hostnames used in ceph.conf? Could that information
> be leveraged, when specified by the cluster admin?

Those went the way of the dodo. However, we do have host and rack
information in the crush map, at least for non-customized installations.

How about something like

	string ceph_get_osd_crush_location(int osd, string type);

or similar. We could call that with "host" and "rack" and get exactly
what we need, without making any changes to the data structures.

sage