Re: While we're talking about RPM dependencies ...

On 16.04.2012 09:33, Toshio Kuratomi wrote:
On Mon, Apr 16, 2012 at 02:02:31AM +0400, Pavel Alexeev wrote:
16.04.2012 00:51, Toshio Kuratomi wrote:
On Sun, Apr 15, 2012 at 11:16:58PM +0400, Pavel Alexeev wrote:
Here is how I see it at first glance.
Install-or-update-one-package scenario (yum install foo):
1) The client asks for the latest version of package foo.
2) The server computes the full dependency list for that package with its
own algorithms and returns it in the requested form (several formats could
be supported, from JSON to XML). No other overhead is transferred, such as
the dependencies of all packages, file lists, etc.
3) The client receives the list, intersects it with the list of currently
installed packages, excludes whatever already satisfies the requirements,
and then requests each missing package, starting again from step 1.
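The client-side step 3 above can be sketched in a few lines. This is a minimal illustration, not an existing yum API; the flat list of dependency names returned by the server is an assumed format:

```python
# Sketch of client step 3: intersect the server-provided dependency
# list with the locally installed packages and keep only what is
# missing. The server answer format (a flat list of names) is an
# assumption for illustration.

def missing_dependencies(server_deps, installed):
    """Return the dependencies not yet satisfied locally."""
    have = set(installed)
    # Preserve the server's ordering; drop what is already present.
    return [pkg for pkg in server_deps if pkg not in have]

deps = ["glibc", "libfoo", "libbar"]     # hypothetical server answer for "foo"
local = ["glibc", "bash"]                # locally installed packages
print(missing_dependencies(deps, local)) # ['libfoo', 'libbar']
```

Each name in the result would then be requested from the server in turn, repeating from step 1 until nothing is missing.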

Update scenario (yum update):
1) The client asks the repo server for a list of the current versions of
the available packages.
2) The server answers.
3) The client determines which packages have newer versions and requests
them as in the first scenario.
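The update scenario's step 3 amounts to a version comparison against the server's answer. A rough sketch, assuming the server returns a name-to-version map; note that versions here are compared as dotted integer tuples purely for illustration, whereas real RPM comparison (epoch/version/release via rpmvercmp) is more involved:

```python
# Sketch of the update scenario: compare the server's latest-version
# map against the locally installed versions. The dotted-integer
# comparison below is a simplification of real RPM version semantics.

def vertuple(v):
    """'1.2.0' -> (1, 2, 0); illustration only, not rpmvercmp."""
    return tuple(int(part) for part in v.split("."))

def updatable(latest, installed):
    """Names of installed packages for which the server has a newer version."""
    return [name for name, ver in installed.items()
            if name in latest and vertuple(latest[name]) > vertuple(ver)]

latest = {"foo": "1.2.0", "bar": "2.0.1"}  # hypothetical server answer
local = {"foo": "1.1.9", "bar": "2.0.1"}   # locally installed versions
print(updatable(latest, local))            # ['foo']
```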

I don't think this would be a speedup.  Instead of the CPUs of tens of
thousands of computers doing the depsolving, you'd be requiring the CPUs of
a single site to do it.
Yes. And since many clients do the same work, caching will give good
results there, so subsequent requests will cost nothing.
No.  Most requests will be different because they have a different initial
state.
If you read my suggestion again: I do not propose that the client upload its current installed state, which would be a large overhead. The client asks the server for the full dependency set of a package and then intersects the answer with the currently installed software. So the answer for a given package will be the same for every client. Additionally, this still allows resolving dependencies across several enabled repositories when some dependencies cannot be resolved from a single server's repo.
   The server that
constructs the subsets of repodata would become single point of failures
whereas currently the repodata can be hosted on any mirror.  This setup
would be much more sensitive to mirrors and repodata going out of sync.
There'd likely be times when a new push has gone out where the primary
mirror was the only server which could push packages out as every other
mirror would be out of sync wrt the repodata server.
Yes, as I wrote initially, it introduces more requirements on the server,
especially that some sort of scripting be allowed (PHP, Perl, Python,
Ruby, or other).
But this does not exclude mirroring at all: it would be free software,
anyone may install it and sync the metadata in the traditional way.
If you're requiring that mirrors run the script on their systems, then
that makes this idea pretty much a non-starter.  We've been told by mirrors
that they do not want to do this.
For such mirrors the traditional fallback scheme will still be available.
But if this is implemented, I think new mirrors would appear as well, and clients could prefer one type or the other depending on their needs. Additionally, it could be implemented as a solver-only mirror that serves resolution requests and then points downloads at other mirrors. In that case the requirements would be small and it could be hosted even on shared hosting. So I could provide such a mirror myself.

-Toshio


--
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel
