Summary/Minutes from today's Fedora Infrastructure meeting (2012-10-04)

============================================
#fedora-meeting: Infrastructure (2012-10-04)
============================================


Meeting started by nirik at 18:01:33 UTC. The full logs are available at
http://meetbot.fedoraproject.org/fedora-meeting/2012-10-04/infrastructure.2012-10-04-18.01.log.html
.



Meeting summary
---------------
* Aloha!  (nirik, 18:01:34)

* New folks introductions and Apprentice tasks  (nirik, 18:04:22)

* Applications Maint/Development status / discussion  (nirik, 18:09:11)
  * LINK: https://209.132.184.101/election -
    http://209.132.184.101/fedocal  (pingou, 18:11:08)
  * LINK: https://fedorahosted.org/fedora-infrastructure/ticket/3495
    (nirik, 18:26:53)

* Sysadmin status / discussion  (nirik, 18:29:03)

* Private Cloud status update  (nirik, 18:34:27)

* Security FAD update  (nirik, 18:42:49)

* Upcoming Tasks/Items  (nirik, 18:47:57)
  * 2012-10-08 purge inactive fi-apprentices  (nirik, 18:48:12)
  * 2012-10-08 - announce smolt retirement  (nirik, 18:48:12)
  * 2012-10-09 to 2012-10-23 F18 Beta Freeze  (nirik, 18:48:12)
  * 2012-10-23 F18 Beta release  (nirik, 18:48:12)
  * 2012-11-01 nag fi-apprentices  (nirik, 18:48:12)
  * 2012-11-07 - switch smolt server to placeholder code.  (nirik,
    18:48:13)
  * 2012-11-13 to 2012-11-27 F18 Final Freeze  (nirik, 18:48:14)
  * 2012-11-20 FY2014 budget due  (nirik, 18:48:16)
  * 2012-11-22 to 2012-11-23 Thanksgiving holiday  (nirik, 18:48:18)
  * 2012-11-26 to 2012-11-29 Security FAD  (nirik, 18:48:20)
  * 2012-11-27 F18 release.  (nirik, 18:48:22)
  * 2012-11-30 end of 3rd quarter  (nirik, 18:48:24)
  * 2012-12-24 to 2013-01-01 Red Hat Shutdown for holidays.  (nirik,
    18:48:28)
  * 2013-01-18 to 2013-01-20 FUDCON Lawrence  (nirik, 18:48:30)

* Open Floor  (nirik, 18:49:23)

Meeting ended at 18:51:53 UTC.




Action Items
------------





Action Items, by person
-----------------------
* **UNASSIGNED**
  * (none)




People Present (lines said)
---------------------------
* nirik (103)
* skvidal (42)
* pingou (35)
* abadger1999 (31)
* smooge (18)
* lmacken (15)
* akshaysth (6)
* zodbot (6)
* threebean (6)
* miguelcnf (4)
* jds2001 (2)
* Southern_Gentlem (1)
* _love_hurts_ (1)
* dgilmore (1)
* ricky (0)
* mdomsch (0)
* CodeBlock (0)
--
18:01:33 <nirik> #startmeeting Infrastructure (2012-10-04)
18:01:33 <zodbot> Meeting started Thu Oct  4 18:01:33 2012 UTC.  The chair is nirik. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:33 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
18:01:34 <nirik> #meetingname infrastructure
18:01:34 <nirik> #topic Aloha!
18:01:34 <nirik> #chair smooge skvidal CodeBlock ricky nirik abadger1999 lmacken dgilmore mdomsch threebean
18:01:34 <zodbot> The meeting name has been set to 'infrastructure'
18:01:34 <zodbot> Current chairs: CodeBlock abadger1999 dgilmore lmacken mdomsch nirik ricky skvidal smooge threebean
18:01:43 * skvidal is here
18:01:44 <smooge> here
18:01:58 <miguelcnf> hello
18:02:05 * nirik will wait a few for folks to wander in.
18:02:07 * lmacken 
18:02:10 * jds2001 
18:02:18 <dgilmore> hey
18:02:53 <Southern_Gentlem> it was good to work with CodeBlock at OLF
18:02:56 * threebean 
18:03:09 <akshaysth> /me is here
18:03:20 * akshaysth is here
18:04:15 * pingou 
18:04:19 <nirik> ok, lets go ahead and dive on in...
18:04:22 <nirik> #topic New folks introductions and Apprentice tasks
18:04:33 <miguelcnf> well may I?
18:04:35 <nirik> any new folks want to introduce themselves, or apprentices have questions or comments?
18:04:40 <nirik> miguelcnf: go ahead. ;)
18:05:54 * _love_hurts_ late here
18:06:03 <nirik> no worries.
18:06:39 <miguelcnf> allright... so the name is Miguel and I'm from Portugal. As I've wrote to the list I've recently been working with l10n pt team and now I'm looking to help out the infrastructure team. I'm a system engineer and work with red hat/centos on a daily basis. It would be great if someone could invite me to the fi-apprentice group so I could start poking around and hopefully get some easyfix work. And I think thats pretty much it... Hi!
18:06:47 * abadger1999 here
18:07:06 <akshaysth> miguelcnf: welcome! :)
18:07:09 <nirik> miguelcnf: welcome! I can add you to that group after the meeting... just see me in #fedora-admin
18:07:22 <miguelcnf> cool thanks guys
18:07:44 <nirik> excellent.
18:07:48 <akshaysth> not much of an update here but am still working on ticket 3293
18:07:51 <akshaysth> .ticket 3293
18:07:55 <zodbot> akshaysth: #3293 ([easyfix] add staging monitoring script) – Fedora Infrastructure - https://fedorahosted.org/fedora-infrastructure/ticket/3293
18:08:02 <nirik> cool.
18:08:19 <akshaysth> still trying to figure out how to get the diffs to show correctly when being emailed
18:08:22 <nirik> Hopefully most of our staging changes will be gone before we go into the next freeze, but there will be more piling up I'm sure.
18:08:30 * nirik nods.
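(A minimal sketch of the diff-for-email generation akshaysth describes for ticket 3293; the function name, labels, and inputs here are illustrative assumptions — the real script lives in the ticket.)

```python
import difflib

def diff_as_text(old_lines, new_lines, label="config"):
    """Return a unified diff as plain text, suitable for pasting into
    an email body (hypothetical helper, not the ticket's actual code)."""
    diff = difflib.unified_diff(
        old_lines, new_lines,
        fromfile="%s (production)" % label,
        tofile="%s (staging)" % label,
        lineterm="",  # suppress trailing newlines so the join is clean
    )
    return "\n".join(diff)

# Example: one changed line shows up as a -/+ pair in the output.
print(diff_as_text(["a", "b", "c"], ["a", "B", "c"], label="httpd.conf"))
```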
18:08:46 <nirik> ok, any other apprentice questions or new folks?
18:09:08 <nirik> ok, moving on...
18:09:11 <nirik> #topic Applications Maint/Development status / discussion
18:09:37 <nirik> any new application devel / maint news this week?
18:10:00 <threebean> working on datanommer in stg now.  should be good to go by the end of the day.
18:10:19 <nirik> threebean: and hopefully we land that in prod before next tuesday?
18:10:35 <pingou> I got a running version of a) fedocal b) election
18:10:40 <threebean> I'd really like to.  If its not possible, in the short time, that's cool.
18:10:52 * lmacken has been doing a lot of fedmsg hacking this week, mostly client side stuff though
18:11:01 <nirik> well, happy to help try and do so... we can see. :)
18:11:01 <abadger1999> pingou: also got some jenkins nodes running in the euca cloud.
18:11:02 <threebean> lmacken++
18:11:08 <pingou> https://209.132.184.101/election - http://209.132.184.101/fedocal
18:11:21 <lmacken> oh yeah, the new jenkins slaves are awesome. http://jenkins.turbogears.org
18:11:24 <abadger1999> err -- "pingou also got"
18:11:33 <abadger1999> :-)
18:11:41 <smooge> what is fedocal?
18:11:42 <pingou> we have an EL6 and a F17 nodes running jenkins
18:11:47 <lmacken> still have to hook a bunch of projects into them, but it'll let us test everything across py2.4 - 3.x
18:11:49 <pingou> smooge: fedora calendar :)
18:11:50 <threebean> pingou: looks really clean
18:12:07 <nirik> there was talk about a python build instance a while back... would that be similar to this jenkins thing?
18:12:13 <smooge> pingou, thanks
18:12:14 <skvidal> pingou: I have a new el6 img I'd like to test - but if what you have is working we can just leave it alone :)
18:12:15 <abadger1999> nirik: yes it would
18:12:16 <pingou> wrt jenkins the question is, do we want to run our own master in the longer term
18:12:35 <lmacken> right now the master is run by me on a RH box at RIT
18:12:44 <jds2001> jenkins is some CI thing, right?
18:12:51 * nirik would like a sop on jenkins setup... and playbooks to do it, so we could easily redo it.
18:12:53 <abadger1999> nirik: Talk to dmalcolm about it -- I'm pretty sure it's different software so it might require more than just ssh access on the build-nodes.
18:12:54 <pingou> skvidal: it's not really working, it complains about the small space available on /
18:13:06 <pingou> jds2001: yes
18:13:15 <abadger1999> I think a master would be good if we want this to be something we do long term.
18:13:18 <lmacken> if we want to give it a proper host, I'm totally down for that. It's all dead simple for me to keep this one up (as long as the power on computer science house stays on)
18:13:21 <nirik> .ticket 1717
18:13:23 <zodbot> nirik: #1717 (buildbot for upstream python code) – Fedora Infrastructure - https://fedorahosted.org/fedora-infrastructure/ticket/1717
18:13:34 <skvidal> pingou: okay - then we'll talk after the meeting
18:13:54 <pingou> nirik: I have a "sop" for the nodes, it's dead simple
18:13:59 <abadger1999> Having a third-party master with a password to login to our boxes doesn't seem quite right for something we want to keep tight control of.
18:14:06 <pingou> nirik: for the master I'd need to do more testing
18:14:34 * nirik thinks it would be good to run the master too if we can... because we might want to script this for other projects to, or reuse it for them or the like.
18:14:42 <pingou> I'm thinking, master in a vm "bare-metal" and nodes on the clouds as we want/need
18:14:52 <pingou> skvidal: was offering to use ansible to deploy them
18:14:56 <lmacken> if we could get fas login to jenkins that would be nice
18:15:07 <pingou> lmacken: euh.. :D
18:15:30 <skvidal> pingou: we can write a playbook very easily if you have the steps you took to install it recorded somewhere
18:15:32 <pingou> abadger1999: note that we can use ssh keys, it just that I don't have a login on the master box
18:15:35 <skvidal> pingou: can you send me your notes?
18:15:51 <pingou> skvidal: http://www.fpaste.org/siAm/
18:15:54 <abadger1999> It would be both exciting and... somewhat scarey to support this for more people, I think.
18:16:21 <nirik> yeah, we don't want to bite off too much, but I think it would be perhaps a nice feature/thing.
18:16:23 <pingou> skvidal: most of the things are due to FedoraReview which requires quite some packages to be installed (mock, rpmlint...)
18:16:23 <skvidal> abadger1999: do we want to stick all of it on instances in the cloudlets and just reroll it all the time?
18:16:57 <abadger1999> skvidal: definitely for the build nodes.  For the master we might want to be more formal about it.
18:16:57 <pingou> threebean: thanks bwt :)
18:17:06 <abadger1999> skvidal: I think pingou had some ideas.
18:17:11 <threebean> :)
18:17:15 <skvidal> abadger1999: cool.
18:17:33 <skvidal> abadger1999, pingou: let me know what I can do to facilitate the systems creation/[re]deployment
18:17:46 <pingou> skvidal: ansible for the nodes sounds really nice
18:17:48 <skvidal> despite what you've heard I can be helpful at times ;)
18:17:48 <nirik> I don't know that I would want to run master nodes in our regular internal net unless we are sure they are safe to do so... also is everything packaged? I think no?
18:17:53 <pingou> for the master, I need to set one up
18:18:10 <pingou> nirik: indeed, it is not
18:18:26 <pingou> nirik: not a problem for the nodes (jenkins adds its stuff on the nodes) but it will be for the master
18:18:32 <abadger1999> nirik: jenkins isn't packaged -- on the jenkins.tg.o master, I think we started with the upstream rpm and then have been using the application's update feature since then
18:18:33 <nirik> right
18:18:53 <lmacken> abadger1999: jenkins.tg.o == java -jar
18:18:54 <abadger1999> we would definitely want backup of the master
18:19:03 <pingou> nirik: upstream provides and rpm and a repo
18:19:06 <abadger1999> lmacken: k
18:19:29 <abadger1999> must be the jenkins server I expermineted with at home that started with the upstream rpm.
18:19:35 <pingou> lmacken: which is what the init script in the rpm do ;)
18:20:02 <lmacken> pingou: oh, nice :)
18:20:28 <nirik> so, I'd say lets keep discussing things and see what all we need and where would be good to have it... but not advertize or promise anything now. ;)
18:20:32 <pingou> lmacken: the rpm is just a big .jar and an init script, files are then extracted the first time you load it
18:20:37 <lmacken> I setup that instance at pycon like 3 or 4 years ago, and really haven't had to mess with it much since.
18:20:40 <nirik> perhaps a thread on the mailing list on it would be good for some general discussion
18:20:46 <lmacken> I do the auto upgrades every few weeks, and it's been very smooth
18:20:48 <abadger1999> lmacken, pingou: Can we prevent the master from running any build jobs on itself?
18:20:54 <pingou> nirik: imho, this is/should be infra restricted
18:21:00 <abadger1999> that would make it safer to keep the master around.
18:21:01 <lmacken> abadger1999: yeah, I think so. and go purely with slaves
18:21:05 <abadger1999> Cool.
18:21:31 <nirik> pingou: to start with for sure.
18:22:08 <nirik> so, lets explore some more and discuss...
18:22:17 <pingou> abadger1999: seems yes, not true for the nodes though (for obvious reason)
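(For reference, the ansible-driven node setup skvidal offers might look like the sketch below. The package list, user/group names, and structure are assumptions written in later ansible idiom — pingou's actual install notes are in the fpaste link above, and FedoraReview's real dependencies may differ.)

```yaml
# Hypothetical playbook for a Jenkins build node; package and user
# names are illustrative, not taken from pingou's notes.
- name: configure a jenkins build node
  hosts: jenkins_nodes
  become: true
  tasks:
    - name: install build dependencies (FedoraReview needs mock, rpmlint, ...)
      yum:
        name: [java-1.7.0-openjdk, mock, rpmlint, git]
        state: present
    - name: create the user the master connects to over ssh
      user:
        name: jenkins
        groups: mock
        append: true
```

The master would then only need the node's ssh details; no Jenkins package is installed on the node itself, since (as pingou notes below) the master pushes its own agent code onto the nodes.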
18:23:20 <nirik> any other applications news this week? hows the fas and pkgdb releases looking? ;)
18:23:41 * pingou wouldn't mind feedback on fedocal
18:23:55 <pingou> note: I put sysadmin-web as admin of fedocal
18:24:02 <nirik> pingou: I've been meaning to play with it, but haven't gotten the time.
18:24:05 <abadger1999> I'm pretty sure I can get pkgdb out before freeze.
18:24:13 <pingou> nirik: ok thanks :)
18:24:17 <smooge> what is the url?
18:24:30 <pingou> smooge: https://209.132.184.101//fedocal
18:24:32 <nirik> abadger1999: that would be good.
18:24:46 <abadger1999> there are some changes to production that I'm trying to track down and make sure they are applied in the repo first.
18:25:15 <smooge> that calender is too sethie for this world :)
18:25:22 <abadger1999> fas CodeBlock's been handling but everything I've seen looks like it's on track for going to prod too.
18:25:30 <nirik> excellent.
18:25:42 <nirik> oh, one other app news: I disabled raffle in production.
18:25:55 <skvidal> how about smolt? disabled, too?
18:25:56 <nirik> we were not using it and I wanted to rule it out of our httpd issues.
18:26:20 <nirik> skvidal: I think we have a timetable for smolt... going to announce that in a few days.
18:26:23 <abadger1999> Has disabling that changed things yet?
18:26:28 <skvidal> nirik: cool
18:26:36 <nirik> abadger1999: it's not died since then, but it is very sporadic.
18:26:47 <abadger1999> okay.
18:26:53 <nirik> https://fedorahosted.org/fedora-infrastructure/ticket/3495
18:27:02 <nirik> if anyone has debugging ideas or thoughts on that ^
18:27:12 <abadger1999> If thatturns out to be it... we'll have to think about how we deploy things in the future.
18:27:16 <nirik> it's anoying and I want to track it down and fix it.
18:27:32 <nirik> yeah, less mixing things on general app servers, more specific application servers.
18:27:41 <abadger1999> might be we'll need to keep tg2 stuff separated from tg1 stuff.
18:27:47 <abadger1999> <nod>
18:28:09 <abadger1999> that's even better (from a developer pov) :-)
18:28:25 <nirik> yeah, we kinda started moving to that with packages/tagger...
18:28:41 <abadger1999> <nod>  although packages and tagger are on the same host.
18:28:58 <nirik> well, hosts, but yeah... since they were somewhat intertwined.
18:29:03 <nirik> #topic Sysadmin status / discussion
18:29:15 <lmacken> abadger1999: packags & tagger are also in the same puppet module :\
18:29:15 <nirik> so, we completed a mass reboot this week... went pretty smoothly.
18:29:27 <nirik> skvidal reinstalled all our builders.
18:29:34 <abadger1999> lmacken: :-(  That is something to look at cleaning up.
18:29:46 <lmacken> abadger1999: yup, packags also has legacy community stuff in it too :P
18:30:09 <nirik> smooge: whats the status on new boxes? hopefully soonish?
18:30:16 <abadger1999> lmacken: File an easyfix ticket for separating those three?  (It's all puppet so a lot could be cargo culted by a relatively new person)
18:30:20 <smooge> sean got the new keys.
18:30:35 <lmacken> abadger1999: will do
18:30:35 <smooge> so someone can configure virthost12 at the moment
18:30:43 <skvidal> nirik: and releng01, too
18:30:56 <smooge> I am working my way through the new IMM UI and key activation for the bkernel boxes
18:30:58 <nirik> skvidal: oh yeah. which is working fine. ;)
18:31:02 <skvidal> nirik: excellent
18:31:12 <skvidal> nirik: when/how do we want to move the rest of the releng infra?
18:31:37 <nirik> skvidal: I'd like to wait until we have the private/public setup and everything more organized.
18:31:48 <smooge> I hope to have bkernel01/bkernel02 up later today
18:32:07 <smooge> ousosl02 is waiting on network fixes on their side
18:32:17 <smooge> but I got a sort of install sort of working
18:32:20 <skvidal> nirik: understood
18:32:43 <nirik> skvidal: so, perhaps we get that all figured and we can start doing more after beta goes out.
18:32:53 <skvidal> nirik: not a problem
18:32:53 <smooge> does that cover it or am I talking over another conversation?
18:32:54 <skvidal> I concur
18:32:59 <nirik> smooge: sounds good.
18:33:09 <nirik> I think thats all the new machines accounted for.
18:33:18 <smooge> yep
18:33:42 <nirik> ok, any other new or upcoming sysadmin side stuff?
18:34:24 <nirik> moving on...
18:34:27 <nirik> #topic Private Cloud status update
18:34:36 <skvidal> we have clouds
18:34:39 <skvidal> and they  are private
18:34:41 <skvidal> w00t
18:34:46 * skvidal wants his on private sun
18:34:54 <skvidal> maybe a private snowstorm or tornado
18:35:06 <nirik> heh
18:35:07 <pingou> but not too often
18:35:16 <pingou> and we have jenkins nodes on the cloud
18:35:16 <skvidal> we've been using/abusing the cloudlets in the last week
18:35:24 <nirik> so, the openstack cloud has a glusterfs backend... it seems like it's not very fast. ;(
18:35:26 <skvidal> and found places where both cloudlets are suboptimal
18:36:08 <skvidal> euca 3.1.2 is out - I'm going to see how well the cloud fares with an upgrade while running :)
18:36:18 <skvidal> nirik: what's the folsom timeline?
18:36:35 <nirik> I'm not sure... will be a bit longer for rhel packages I think.
18:36:49 <nirik> I'll try and find out.
18:36:56 <skvidal> nirik: cool
18:37:09 <nirik> we could also drop the gluster backend and see how that changes performance. I think reasonably easily.
18:37:35 <skvidal> nirik: but then we lose live migrate, right?
18:37:47 <nirik> yeah
18:37:48 <skvidal> nirik: can we tug the netapp in there via iscsi?
18:37:51 <nirik> and have less overall space
18:38:00 <nirik> ha ha ha ha.
18:38:07 <skvidal> nirik: :)
18:38:30 <nirik> we will have some more storage options there if our vfilers show up and work as we would hope...
18:39:09 <nirik> anyhow, lets keep testing and poking at them...
18:39:16 <skvidal> I concur
18:39:28 <skvidal> nirik: things we need to think about with the clouds
18:39:36 <skvidal> - long term HA
18:39:40 <skvidal> - medium term - backups
18:39:55 <nirik> monitoring
18:39:56 <skvidal> - medium/short - AuthN/AuthZ
18:40:41 <skvidal> nirik: agreed
18:40:55 <smooge> I hear ldap
18:41:54 <nirik> there's lots of cute things we could do to make them more nifty for people too on setup... ie, could we have a fas group(s) setup for ssh, a automatic git repo setup to save off data, etc.
18:42:38 <nirik> anyhow, lets keep poking at them...
18:42:49 <nirik> #topic Security FAD update
18:43:12 <nirik> So, some progress on this... we have a conference room reserved now in the tower.
18:43:26 <nirik> we need to gather final numbers and run it by numbers people.
18:44:07 <nirik> we should also ping people one last time... make sure they can or cannot make it or ask people who are important to the goal.
18:44:47 <nirik> so, I will send to the list, and ping folks, and we will send out budget request monday?
18:44:56 <nirik> anything else we should plan on this?
18:45:08 <abadger1999> wfm
18:45:28 <smooge> wfm
18:46:06 <smooge> I am trying to figure out if I can fly out early to SC see my parents, drive up for the FAD and then leave from SC. The flights were actually cheaper that way... but have to find out if I can borrow one of the parents cars
18:46:10 <nirik> the fad faq thing suggests we should also plan some activities/etc... perhaps we could tour skvidal's bike collection. ;)
18:46:36 <skvidal> nirik: I took a survey... no one likes you
18:46:37 <skvidal> :)
18:46:49 <nirik> ha.
18:46:51 <smooge> hey his bike shed is better off than some of our houses
18:47:01 <pingou> skvidal: all the one personn you asked agreed ?
18:47:05 <smooge> skvidal, got to that line in Portal2 again last night
18:47:06 <skvidal> pingou: yep
18:47:09 <nirik> anyhow... anything else on this? or shall we move on?
18:47:21 <pingou> nirik: skvidal
18:47:30 <pingou> nirik: skvidal's bike collection or the bike shed itself ? :)
18:47:41 <skvidal> pingou: I have lots of various colors of paint
18:47:42 <skvidal> for the shed
18:47:45 <skvidal> cmon down
18:47:57 <nirik> #topic Upcoming Tasks/Items
18:48:10 <nirik> get ready for flood. ;)
18:48:12 <nirik> #info 2012-10-08 purge inactive fi-apprentices
18:48:12 <nirik> #info 2012-10-08 - announce smolt retirement
18:48:12 <nirik> #info 2012-10-09 to 2012-10-23 F18 Beta Freeze
18:48:12 <nirik> #info 2012-10-23 F18 Beta release
18:48:12 <nirik> #info 2012-11-01 nag fi-apprentices
18:48:13 <nirik> #info 2012-11-07 - switch smolt server to placeholder code.
18:48:14 <nirik> #info 2012-11-13 to 2012-11-27 F18 Final Freeze
18:48:16 <nirik> #info 2012-11-20 FY2014 budget due
18:48:18 <nirik> #info 2012-11-22 to 2012-11-23 Thanksgiving holiday
18:48:20 <nirik> #info 2012-11-26 to 2012-11-29 Security FAD
18:48:22 <nirik> #info 2012-11-27 F18 release.
18:48:24 <nirik> #info 2012-11-30 end of 3rd quarter
18:48:28 <nirik> #info 2012-12-24 to 2013-01-01 Red Hat Shutdown for holidays.
18:48:30 <nirik> #info 2013-01-18 to 2013-01-20 FUDCON Lawrence
18:48:32 <nirik> anything else we should note or schedule?
18:48:39 * nirik will try putting this into fedcal. ;)
18:49:23 <nirik> #topic Open Floor
18:49:29 <nirik> Any items for open floor?
18:50:27 * nirik will close out the meeting in a minute if nothing more.
18:51:51 <nirik> ok, thanks for coming everyone.
18:51:53 <nirik> #endmeeting


_______________________________________________
infrastructure mailing list
infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/infrastructure
