Re: [gluster-devel] Reviewing Rackspace Use


 



On Wednesday, May 18, 2016 at 12:13 +0200, Niels de Vos wrote:
> On Wed, May 18, 2016 at 11:37:43AM +0200, Michael Scherer wrote:
> > On Wednesday, May 18, 2016 at 09:53 +0200, Niels de Vos wrote:
> > > On Wed, May 18, 2016 at 12:28:25PM +0530, Kaushal M wrote:
> > > > On Tue, May 17, 2016 at 7:50 PM, Amye Scavarda <amye@xxxxxxxxxx> wrote:
> > > > > Over the last year or so, the Rackspace use seems to have gone up pretty
> > > > > steadily. While Rackspace is being awesome and supporting us as an open
> > > > > source project, this usage has been growing in ways that we can't always
> > > > > plan for.
> > > > >
> > > > > I think(?) that what's on Rackspace is mostly CI, but I would love some
> > > > > guidance on what we might be able to make on-demand instead of
> > > > > constantly lying in wait.
> > > > 
> > > > All the CI machines could be made on-demand. They are all based on
> > > > ready-made images, so we should be able to spin up a VM on demand.
> > > 
> > > Or, even by only starting the VMs when needed, and shutting down when
> > > done. Either way, we need a plugin for creating or starting the VMs on
> > > demand. This is something we looked into before, but never had enough
> > > time to test and finish it:
> > > 
> > >   http://thread.gmane.org/gmane.comp.file-systems.gluster.infra/156
> > >   https://github.com/gluster/jenkins-ssh-slaves-plugin/tree/Before-connect-script-for-gluster-jenkins
> > 
> > I asked a friend about the jclouds plugin, and he said that the
> > stable version is buggy as hell, so we would need a devel snapshot.
> > That did not inspire much confidence.
> > 
> > But now that the automated install is working well (after the last few
> > bug fixes, like the xfsprogs issue found by ndevos, the python dir
> > issue finally being fixed, etc.), that's the next step.
> 
> That could work too, but new slave installations need to be added to
> the Jenkins master.

My biggest concern is that Jenkins is getting too many CVEs these days
for me to be comfortable giving it Rackspace credentials. I would rather
make sure we have hardened the server as much as possible before the
credentials get leaked and we end up losing a ton of money because
someone started 500 bitcoin-mining VMs with them.

> Booting and powering off avoids that step. But,
> whatever works :-)

Mhh, in fact I do not really understand the plugin you pointed to; can
you explain a bit more?
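
For reference, the general idea behind a "before connect" hook of the
kind that plugin branch seems to implement would be: boot the slave VM,
then wait until sshd answers so Jenkins can attach. A minimal sketch,
where the node name and the `nova` CLI call are assumptions (echoed as a
dry run here, not real plugin code):

```shell
#!/bin/sh
# Hypothetical "before connect" script: boot the slave, wait for SSH.
# Node name and the nova call are assumptions, not the actual plugin code.
NODE="${1:-slave20.cloud.gluster.org}"

boot_node() {
    # Would normally call the Rackspace/OpenStack CLI; echoed as a dry run
    echo "nova start $1"
}

wait_for_ssh() {
    # Poll 22/tcp for up to ~2 minutes before giving up
    host="$1"
    for _ in $(seq 1 24); do
        if nc -z -w 2 "$host" 22 2>/dev/null; then
            return 0
        fi
        sleep 5
    done
    return 1
}

boot_node "$NODE"
```

A matching "after disconnect" script would then issue the corresponding
`nova stop`, so the VM only costs money while a job is running.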

> > > > The only reason I can think of for having them constantly running is
> > > > to serve the archived logs and builds.
> > > > There have been some discussions of having an artifact server
> > > > which could host these archives.
> > > > Jenkins (AIUI) already has support for capturing build artifacts and
> > > > uploading them to a server.
> > > 
> > > Indeed, Jenkins jobs can collect files after a run. We do that for the
> > > rpmbuild jobs (QA requested that so they can test the RPMs, no idea if
> > > it is used by anyone):
> > > 
> > >   https://build.gluster.org/job/glusterfs-devrpms-el7/lastSuccessfulBuild/
> > > 
> > > Capturing the logs and all by Jenkins would be nice. Hopefully we can
> > > place those on a Gluster volume and use our own software for storage :)
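
As a sketch of that kind of artifact capture, a post-build shell step
could tar the regression logs into a per-build directory that Jenkins
archives or that gets rsynced to an artifact server. All paths are
hypothetical, and `BUILD_TAG` would normally be set by Jenkins (a
default is provided so the sketch runs standalone):

```shell
#!/bin/sh
# Sketch only: bundle a build's logs for archiving. Paths are hypothetical;
# BUILD_TAG is normally set by Jenkins.
BUILD_TAG="${BUILD_TAG:-demo-build-1}"
LOG_DIR="$(mktemp -d)"                     # stand-in for /var/log/glusterfs
ARCHIVE_DIR="$(mktemp -d)/artifacts/$BUILD_TAG"

echo "sample regression output" > "$LOG_DIR/regression.log"

mkdir -p "$ARCHIVE_DIR"
# One tarball per build keeps the archive step (and any rsync) simple
tar -czf "$ARCHIVE_DIR/logs.tar.gz" -C "$LOG_DIR" .
echo "archived $ARCHIVE_DIR/logs.tar.gz"
```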
> > 
> > Then we would need a rather huge disk, no?
> 
> Not sure how big that data gets. But I guess those logs can be removed
> after a week or such anyway.

For sure, but data (or rather, external disks) can become quite
expensive after a while on Rackspace, so I would rather make sure we
crunch the numbers to see if that helps.
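
The one-week retention mentioned above could be a simple daily cron job.
A sketch, with a temporary directory standing in for the (hypothetical)
real archive location, and GNU touch used to simulate an old file:

```shell
#!/bin/sh
# Sketch of one-week retention: delete archived files older than 7 days.
# The directory is a temporary stand-in for the real archive location.
ARCHIVE="$(mktemp -d)"

touch "$ARCHIVE/fresh.log"
touch -d "10 days ago" "$ARCHIVE/stale.log"   # simulate an old file (GNU touch)

# The actual cleanup, suitable for a daily cron entry:
find "$ARCHIVE" -type f -mtime +7 -delete
```

Keeping only a week of logs would put a rough upper bound on the disk
needed, which is exactly the number worth crunching before paying for an
external volume.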

>  It would be nice to have all files on a
> Gluster volume (provided by a few VMs?) and maybe also place the
> contents for download.gluster.org on there... Not really urgent, but it
> is one of the use-cases we can then promote. Maybe we can have a service
> like gdash to show the community?

Using Gluster for the download server was indeed something I wanted to
do, but first we have to decide on the use case we want for that server
(since the RPMs are being moved elsewhere).

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS


Attachment: signature.asc
Description: This is a digitally signed message part

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
