Hi list!

What is the state of the dumb HTTP transport today for fetching updates? Is the client code smart enough to fetch the pack indexes and use range requests? If so, how does that fare for latency?

Background: I am looking at whether yum repositories' data (currently in sqlite & xml) could benefit from being a git (or very gittish) database -- with a bit of reorganizing to make it git-efficient, of course. It is not the data itself that would benefit, but rather users pulling updates from fast-moving repos (updates, updates-testing, rawhide...).

One of the constraints is that this has to be plain HTTP, and it has to work well across a universe of mirrors (which won't install or configure extra software) and the good, bad and ugly world of HTTP proxies. Yum can be taught to use the git protocol, but that won't gain widespread use quickly -- HTTP is and will be the mainstay for a long time. (There is a rough sketch of what I mean by "fetch indexes and use range requests" at the end of this mail.)

m
--
 martin.langhoff@xxxxxxxxx
 martin@xxxxxxxxxx -- Software Architect - OLPC
 - ask interesting questions
 - don't get distracted with shiny stuff
 - working code first
 - http://wiki.laptop.org/go/User:Martinlanghoff
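PS: to make the question concrete, here is a minimal sketch of what I'd hope a dumb-HTTP client could do: fetch a pack .idx, look up one object's offset, and pull just its compressed bytes out of the .pack with an HTTP Range request instead of downloading the whole pack. This is purely illustrative -- the BASE URL, pack name and sha are placeholders, it only handles pack index v2 without large offsets, and it ignores delta resolution.

#!/usr/bin/env python
# Sketch only: fetch one object's byte range from a packfile over
# dumb HTTP. Placeholder URL/pack/sha; pack index v2, small offsets only.
import struct
import urllib.request

BASE = "http://mirror.example.org/repo.git"   # hypothetical mirror URL

def fetch(url, rng=None):
    req = urllib.request.Request(url)
    if rng:
        req.add_header("Range", "bytes=%d-%d" % rng)   # inclusive range
    return urllib.request.urlopen(req).read()

def object_slice(pack_name, sha_hex):
    """Return (start, end) byte offsets of one object inside the .pack."""
    idx = fetch("%s/objects/pack/%s.idx" % (BASE, pack_name))
    assert idx[:8] == b"\xfftOc\x00\x00\x00\x02", "pack index v2 only"
    # 256-entry fanout of cumulative counts; the last entry is the
    # total object count.
    nobj = struct.unpack(">L", idx[8 + 255 * 4 : 8 + 256 * 4])[0]
    shas_at = 8 + 256 * 4            # sorted 20-byte sha1 table
    crcs_at = shas_at + 20 * nobj    # per-object CRC32s
    offs_at = crcs_at + 4 * nobj     # 4-byte pack offsets
    want = bytes.fromhex(sha_hex)
    offsets, mine = [], None
    for i in range(nobj):
        off = struct.unpack(">L", idx[offs_at + 4 * i : offs_at + 4 * i + 4])[0]
        assert not off & 0x80000000, "no large-offset handling in this sketch"
        offsets.append(off)
        if idx[shas_at + 20 * i : shas_at + 20 * i + 20] == want:
            mine = off
    if mine is None:
        raise KeyError(sha_hex)
    # The object ends where the next object (by offset) begins, or at
    # the pack's trailing 20-byte sha1 checksum.
    later = [o for o in offsets if o > mine]
    if later:
        return mine, min(later)
    req = urllib.request.Request("%s/objects/pack/%s.pack" % (BASE, pack_name),
                                 method="HEAD")
    size = int(urllib.request.urlopen(req).headers["Content-Length"])
    return mine, size - 20

if __name__ == "__main__":
    start, end = object_slice("pack-deadbeef", "0" * 40)   # placeholders
    blob = fetch("%s/objects/pack/pack-deadbeef.pack" % BASE,
                 (start, end - 1))
    print("fetched %d bytes instead of the whole pack" % len(blob))

Of course, objects in a pack are usually deltified, so a real client would have to chase delta bases with further range requests; the sketch above ignores that entirely.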