On Mon, 19 Feb 2024 at 08:59, Kevin Kofler via devel <devel@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
Miroslav Suchý wrote:
> What would **you** find to be an acceptable policy for pruning rawhide chroots?
As I have mentioned several times, I already find the existing policy for pruning
EOL release chroots unacceptable, because deleting data must never be the
default (notifications can be, and are, lost in spam filters; I still never
get any notification from Copr!) and because the UI to extend the lifetime
follows dark patterns, requiring us to click separately for every single
chroot instead of having an "Extend all" button.
Instead of coming up with new aggressive pruning schemes, Copr really needs
to come up with a reasonable amount of storage to satisfy user demand. HDDs
in the multi-TB range are available for fairly low budgets (extremely low by
the standards of a company like IBM), and it takes just two of them to build a
RAID 1 array that is safe against data loss. Of course, that means Copr needs
to stop locking itself into third-party cloud providers that charge
ridiculously high prices for storage.
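For a rough sense of the budgets involved, a back-of-the-envelope sketch (the drive capacity and price below are hypothetical round figures, not actual quotes):

    # Back-of-the-envelope: usable cost of a 2-drive RAID 1 pair.
    # Assumed figures (hypothetical, not actual quotes):
    drive_capacity_tb = 20   # one large SATA HDD
    drive_price_usd = 400    # rough street price per drive

    pair_price = 2 * drive_price_usd   # RAID 1 mirrors, so 2 drives
    usable_tb = drive_capacity_tb      # mirroring halves raw capacity
    print(f"${pair_price} for {usable_tb} TB usable"
          f" (~${pair_price / usable_tb:.0f}/TB)")
    # -> $800 for 20 TB usable (~$40/TB)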
Kevin,
I agree with your general concerns but have problems with your analysis of costs.
1. Drive size is not the only requirement; throughput matters too. The large drives needed to store the data COPR uses for its hundreds of chroots are much 'slower' on reads and writes, even when adding in layers of RAID 1+0. Faster drives are possible, but the price goes up considerably.
2. The throughput of the individual drives also requires backplane speeds that match the peak throughput of all the drives. Otherwise you end up with lots of weird stalling (as seen on certain builders which have such drives).
3. Outside of that, you need fast network switching: 10G at minimum, and 40G to 100G to deal with the multiple builders and the storage server. The larger the storage, the more bandwidth is needed all the way through, since building software eats lots of space (see the rough throughput numbers after this list).
4. The builders need to be housed in a datacenter, which charges for:
a. power
b. cooling
c. square footage used
d. staff to deal with problems
e. racks, wiring, and parts
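To put point 3 in concrete terms, a rough estimate (the drive count and per-drive speed are assumptions for illustration, not measurements from the COPR hardware):

    # Rough aggregate throughput of one storage server.
    # Assumed figures (illustrative only):
    drives = 24              # drives in the storage chassis
    mb_per_s_each = 250      # sequential MB/s per large HDD

    aggregate_mb_s = drives * mb_per_s_each       # 6000 MB/s
    aggregate_gbit_s = aggregate_mb_s * 8 / 1000  # 48 Gbit/s
    print(f"~{aggregate_gbit_s:.0f} Gbit/s peak")
    # ~48 Gbit/s: already past 10G and 40G, so the backplane and
    # the switch uplinks both have to keep up or the builders stall.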
Going by the costs from 5 years ago, when we used storage systems from 45Drives and systems elsewhere: the capital costs for COPR were around $350k, and the operating expenses were around $80k per year. At the time, chroots and other things needed to be cleaned much more regularly than they are now, so we would probably need to double or triple those costs to meet today's demand. [Budgets like the one we were given then only come around once every 10 years or so.]
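As a sanity check on those figures, a simple amortization (the 5-year hardware lifetime is my assumption; the dollar amounts are the ones above):

    # Annualized cost of the on-premises setup described above.
    capex_usd = 350_000          # one-time capital cost (from above)
    opex_usd_per_year = 80_000   # yearly operating cost (from above)
    hw_lifetime_years = 5        # assumed amortization period

    yearly = capex_usd / hw_lifetime_years + opex_usd_per_year
    print(f"~${yearly:,.0f}/year")
    # ~$150,000/year for the old demand; double or triple that today.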
The Amazon systems cost Fedora $0.00 as long as usage stays within 'modest' disk space and other resource limits. That said, I realize there will be a day when you can rightly say "I told you this would happen", when being locked in comes back to bite.
Stephen Smoogen, Red Hat Automotive
Let us be kind to one another, for most of us are fighting a hard battle. -- Ian MacClaren