[please CC me, as I'm not subscribed]

Hi there,

I am trying to use git in a quite unusual way. I have a bunch of
servers (hundreds) which receive regular pulls of web developer code.
The code consists of images, flash files, scripting language files,
you name it. An exported repository (just the files, no SCM metadata)
contains up to 4GB of files.

Now I want to distribute the changes the developers make in a
tree-like structure:

  main server --> slave_1 --> webserver_0815
              |-> slave_2 --> webserver_2342
                          |-> webserver_4711

but with the following constraints:

- Store as little as possible on the webservers. One selected
  revision/tag is enough (see the first sketch below).
- Transfer as little data as possible. Cancel out additions and
  deletions on the fly.
- Nearly atomic update of the file tree. This is easy to implement
  outside git (see the second sketch below).

Nice to have:

- Instead of copying the files to their proper names, hardlink them
  to their git objects.

At the moment I always receive more data than I need and have to
store the repository AND the checked-out data. So far I have not
found a way around this.

Is this possible? Any ideas are welcome.

Many thanks in advance!

Best Regards

Ingo Oeser
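
To make the storage and transfer constraints concrete, here is a
minimal sketch of what I imagine running on a webserver. The host,
repository path and tag names (slave_1, webapp.git, v42/v43) are made
up for illustration; a shallow repository holding a single selected
tag is the closest thing I have found so far:

  # one-time setup: shallow clone of one selected tag only
  git clone --depth 1 --branch v42 git://slave_1/webapp.git /srv/webapp

  # later updates: fetch only the newly selected tag, still shallow,
  # so trees of intermediate commits (files added and deleted again
  # in between) should never be transferred
  cd /srv/webapp
  git fetch --depth 1 origin tag v43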
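
The nearly atomic update is what I meant by "easy to implement
outside git": export the tagged tree next to the live one, then swap
a docroot symlink with a single rename(2). This is roughly what I do
now, and it is also where the duplication comes from, since the
repository and the exported tree both sit on disk. All paths are made
up, and mv -T is GNU coreutils:

  # export the new tree next to the live one
  cd /srv/webapp
  mkdir -p /srv/releases/v43
  git archive v43 | tar -xf - -C /srv/releases/v43

  # swap the docroot symlink; the final rename(2) is atomic
  ln -sfn /srv/releases/v43 /srv/docroot.new
  mv -T /srv/docroot.new /srv/docroot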