Re: increasingly large packages and longer build times

On Thu, 31 Aug 2017, John Spray wrote:
> On Wed, Aug 30, 2017 at 11:07 PM, Ken Dreyer <kdreyer@xxxxxxxxxx> wrote:
> > On Wed, Aug 30, 2017 at 11:53 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> >> The thing is, our boost could easily end up being the "old" one, if
> >> the distro is shipping security updates to theirs.  Our
> >> higher-numbered boost packages would potentially block the distro's
> >> updates to their lower-numbered boost packages.  If we ship our own
> >> separate boost, then maybe Ceph is stuck with an un-patched boost, but
> >> other applications on the system are not.
> >
> > That scenario is theoretically possible, and it's good that you bring
> > it up for consideration. I'm trying to understand the likelihood of
> > the effort/disruption there. Do you have specific applications in mind
> > that would benefit in the way you describe? Ones that require boost
> > and are often co-installed on Ceph nodes?
> 
> Lots of things depend on boost.  Naturally I don't know what
> specifically people run on their Ceph servers apart from Ceph.  It's
> risky to blow away distro packages in favour of our own, precisely
> because of that lack of knowledge about what else is going on on the
> servers.
> 
> I'm really just pointing out that there's a degree of risk that our
> users would be taking on, in exchange for the (not inconsiderable)
> benefit of knocking 500MB out of a fully checked out tree.

We should also keep in mind that boost isn't a very compelling 
demonstration of the advantages of shared libraries because it's 99% 
headers, with only a tiny bit of code that gets dynamically linked.  The 
main impacts of moving to a packaged boost will be (1) faster git clone 
times, (2) faster shaman builds, and (3) more annoying build dependencies 
(install-deps.sh would probably have to pull boost from a new repo source 
or something, instead of relying on distro packages like it does now?).

Are we sure that ccache is working properly?  Maybe we can improve 
turnaround times elsewhere...
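A quick way to answer that would be to zero the ccache statistics, do a trial build, and look at the hit/miss counts. A minimal sketch (assumes ccache is on PATH; the WITH_CCACHE cmake flag is from memory, so verify it against our CMakeLists before relying on it):

```shell
# Sanity-check that ccache is present and actually getting hits.
if command -v ccache >/dev/null 2>&1; then
    ccache_state="installed"
    ccache -z          # reset statistics before a trial build
    # ... run a build here, e.g.: cmake -DWITH_CCACHE=ON .. && make ...
    ccache -s          # "cache hit" counts should dominate "cache miss"
else
    ccache_state="missing"
fi
echo "ccache: $ccache_state"
```

If the hit rate is near zero across shaman builds, that would point at a misconfigured cache (wrong CCACHE_DIR, or builders not sharing a cache) rather than at boost.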

sage