On 08/03/2017, Gregory Farnum wrote:
> I haven't followed this discussion closely, but when I saw your macro
> usage the thing that struck me was the lack of versioning, which is
> what I assume Sage is referring to. If there's a good way to define
> versioned encoding that would probably be fine?
> Unless the nullptr issue strikes more complicated structures — we need
> to hand-write the OSDMap encoder for instance, and clever macros will
> never be clever enough for that.
> -Greg

As far as I'm aware, the only way we could output different versions
would be in response to feature bits, since bounded objects are only
allowed to derive their size from the type and the set of feature bits.
I think doing something based on feature bits is doable, though it
might be slightly baroque and would tempt a combinatorial explosion for
anything that is bounded but varies substantially across a lot of
feature bits. To be honest, while this might be fun to write, it's
probably more of a compile-time DSL than the value of the check
warrants, so I'd advocate just taking 'bounded' as an annotation by the
author of a class and leaving it up to them to make sure the class
actually is bounded.

> >
> >> I'm inclined to just drop the trick and pass T() everywhere
> >> instead... at least until we have something better. Objections?
> >
> > Two potential ones. On the one hand, that would require everything
> > dencable to be default constructible. That isn't a problem now, but
> > it's something I'd like to not have be the case forever. (But that
> > bridge can be crossed in the Idealized Future.)
> >
> > The more pressing objection might be that if the current idea is to
> > make sure bounded objects really /are/ bounded, passing T() might be
> > the worst of all worlds. If caught at compile time, the problem goes
> > away. If caught at runtime with the nullptr, it at least gives a
> > definite error in a useful place.
> > If we pass T(), then an object
> > that claims to be bounded but /isn't/ could calculate the wrong
> > bound, and then we'd have to debug overruns or underruns or
> > something.
> >
> > Given that, I think if we're going to give up on this feature for
> > now, the 'least bad' alternative would be to just document the
> > requirements for an object that claims to be bounded, but pass the
> > supplied object reference in all cases. We might get poorer
> > performance from someone violating the guarantee, but not outright
> > incorrectness. Also, errors of this sort seem as if they should be
> > easily eyeballed: one need only look at objects that claim to be
> > bounded and see how they implement their bound_encode method.

--
Senior Software Engineer, Red Hat Storage, Ann Arbor, MI, US
IRC: Aemerson@{RedHat, OFTC}
0x80F7544B90EDBFB9 E707 86BA 0C1B 62CC 152C 7C12 80F7 544B 90ED BFB9
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
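[Editor's note] For readers following the thread, here is a minimal
self-contained sketch of the distinction being argued about. The type
and method names below are illustrative only — this is not Ceph's
actual denc API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>

// A genuinely bounded type: the encoded-size bound depends only on the
// type (and, per the discussion above, possibly the feature bits),
// never on the contents of a particular instance. That is why the
// bound can be computed without a real object at all.
struct fixed_header {
  uint32_t epoch = 0;
  uint64_t seq = 0;

  static void bound_encode(std::size_t& p) {
    // Independent of instance state: always 4 + 8 bytes.
    p += sizeof(uint32_t) + sizeof(uint64_t);
  }
};

// An unbounded type: the bound genuinely needs the instance, because
// the encoded length varies with its contents. If a type like this
// falsely claimed to be bounded and were handed a default-constructed
// T(), it would compute a bound covering only the length prefix, and
// the real encode would overrun it.
struct var_payload {
  std::string data;

  void bound_encode(std::size_t& p) const {
    p += sizeof(uint32_t) + data.size();  // length prefix + payload bytes
  }
};
```

Passing the supplied object reference in all cases, as suggested above,
makes `var_payload`-style types compute a correct (if not statically
known) bound, at the cost of forgoing the compile-time check.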