On Sun, Apr 15, 2012 at 11:16:58PM +0400, Pavel Alexeev wrote:
> As I see it at first glance:
>
> Install-one-package scenario (yum install foo):
> 1) Client asks for the latest version of the foo package.
> 2) Server calculates all the dependencies with its own algorithms and
>    returns the full list of dependencies for that package in the
>    requested form (several could be offered, from JSON to XML).  No
>    other overhead is sent, such as the dependencies of all packages,
>    filelists, etc.
> 3) Client gets the list, intersects it with the currently installed
>    packages, excludes whatever already satisfies the needs, and then
>    requests each missing package, starting again from 1.
>
> Update scenario (yum update):
> 1) Client asks the repo server for a list of the current versions of
>    the available packages.
> 2) Server answers.
> 3) Client finds which packages were updated and requests them as in
>    the first scenario.

I don't think this would be a speedup.  Instead of the CPUs of tens of
thousands of computers doing the depsolving, you'd be requiring the
CPUs of a single site to do it.  The clients would have to upload the
provides of their installed packages, so bandwidth needs might
increase.

If I was installing a few packages by trial and error/memory, I'd
likely do yum install tmux followed closely by yum install zsh, which
would require separate requests to the server to download separate
dependency information, as opposed to having the information
downloaded once.

The server that constructs the subsets of repodata would become a
single point of failure, whereas currently the repodata can be hosted
on any mirror.

This setup would also be much more sensitive to mirrors and repodata
going out of sync.  There would likely be times, right after a new
push has gone out, when the primary mirror was the only server that
could push packages out, because every other mirror would be out of
sync with respect to the repodata server.

-Toshio
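
For concreteness, here is a minimal sketch of what the client side of
Pavel's proposed flow might look like.  This is only an illustration
of the idea under discussion, not an existing yum or repo-server API:
the server URL, the /deps and /versions endpoints, and the JSON
response shapes are all assumptions invented for the example (Python 2,
matching yum of that era).

    # Hypothetical client side of the proposed protocol.  The server
    # URL, endpoint paths, and JSON shapes are made up for illustration.
    import json
    import urllib2

    REPO_SERVER = "http://repo.example.org"  # hypothetical depsolving server

    def fetch_json(path):
        """GET and decode a JSON document from the hypothetical server."""
        return json.load(urllib2.urlopen(REPO_SERVER + path))

    def resolve(pkg_name, installed):
        """Install scenario: the server computes the full dependency
        closure of one package (steps 1-2); the client keeps only the
        entries that are missing or out of date locally (step 3)."""
        deps = fetch_json("/deps/" + pkg_name)
        # installed maps package name -> installed version, e.g. from the rpmdb.
        return [d for d in deps if installed.get(d["name"]) != d["version"]]

    def update_plan(installed):
        """Update scenario: fetch the list of current versions (steps
        1-2), find the packages that changed, and resolve each one as
        in the install scenario (step 3)."""
        current = fetch_json("/versions")
        plan = []
        for name, version in installed.items():
            if name in current and current[name] != version:
                plan.extend(resolve(name, installed))
        return plan

Even in this toy form the costs above are visible: each resolve() call
is a round trip that the server must answer with a freshly computed
closure, so consecutive installs like tmux and then zsh mean two
separate server-side depsolves instead of one local one.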