20:00 < mmcgrath> #startmeeting Infrastructure
20:00 < zodbot> Meeting started Thu Feb 11 20:00:47 2010 UTC. The chair is mmcgrath. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00 < skvidal> oh so much
20:00 < zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:00 -!- zodbot changed the topic of #fedora-meeting to: (Meeting topic: Infrastructure)
20:00 -!- sijis [~sijis@fedora/sijis] has joined #fedora-meeting
20:01 < mmcgrath> #topic who's here?
20:01 -!- zodbot changed the topic of #fedora-meeting to: who's here? (Meeting topic: Infrastructure)
20:01 * mmcgrath is
20:01 * lmacken
20:01 * a-k is
20:01 * heffer is too, but just by chance
20:01 * sijis
20:02 * hiemanshu
20:02 * ricky
20:02 * skvidal is
20:02 < mmcgrath> I've got 3 main things I want to talk about. The first two should be short; the third one is about updates and will likely be longer
20:02 < mmcgrath> So I'll just get started
20:02 < mmcgrath> actually 4 things, 3 are short
20:03 < mmcgrath> #topic VPN issues
20:03 -!- zodbot changed the topic of #fedora-meeting to: VPN issues (Meeting topic: Infrastructure)
20:03 < mmcgrath> We've been seeing strange vpn issues. We saw a cluster of like 5 outages over the span of an hour this morning.
20:03 < mmcgrath> I poked around a bit, did a couple of restarts and have generally been keeping an eye on things.
20:03 < mmcgrath> I thought they were fixed except that we had another one about 5 minutes ago.
20:04 < mmcgrath> There's lots of things this could be, but the biggest vpn change we've made was yesterday: we were running on bastion2, which was xen. Now we're running on bastion1, which is kvm.
20:04 < mmcgrath> I can't say for sure that's what is going on, but we've seen performance issues before with misconfigured vms
20:04 < mmcgrath> anyone have any questions or concerns on that?
20:04 < sijis> could it be network itself?
20:04 -!- yawns1 [~yawn@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has joined #fedora-meeting
20:04 < mmcgrath> sijis: it could be
20:05 < mmcgrath> the outages are short lived and unpredictable
20:05 < mmcgrath> so it's been difficult to troubleshoot
20:05 < mmcgrath> Ok, next topic
20:05 < mmcgrath> #topic Equallogic
20:05 -!- zodbot changed the topic of #fedora-meeting to: Equallogic (Meeting topic: Infrastructure)
20:05 < mmcgrath> It's in, it's powered up and Dgilmore has even logged into it so he can be imprinted as its father.
20:05 < abadger1999> :-)
20:05 < mmcgrath> but we don't think the network ports are actually configured.
20:06 < mmcgrath> so, like I said, short topic on that.
20:06 < mmcgrath> we'll keep working on it and see how it goes.
20:06 * dgilmore is here
20:06 < mmcgrath> any questions or comments on that?
20:06 < dgilmore> please give me multiple gig ports
20:06 < dgilmore> pretty please
20:06 -!- jaxjaxmob [~jaxjaxmob@xxxxxxxxxxxxx] has joined #fedora-meeting
20:06 < mmcgrath> dgilmore: well, you should have 8 of them there.
20:06 < mmcgrath> and we can do whatever bonding we desire.
20:07 < Oxf13> WANT
20:07 < mmcgrath> Ok, nothing else on that?
20:07 < dgilmore> nothing
20:07 -!- gholms|mbp [~gholms@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has joined #fedora-meeting
20:08 < mmcgrath> buhhh
20:08 < mmcgrath> I forgot what the third thing was so we'll go right on to the 4th
20:08 < mmcgrath> #topic Updates
20:08 -!- zodbot changed the topic of #fedora-meeting to: Updates (Meeting topic: Infrastructure)
20:08 < mmcgrath> So we did a group of updates yesterday and, needless to say, things didn't go well.
20:08 < mmcgrath> There's a number of complicated issues here.
20:08 -!- jcollie [~jcollie@fedora/jcollie] has quit Ping timeout: 252 seconds
20:08 < mmcgrath> 1) We have latest versions of things in our repos that aren't to be updated
20:08 < mmcgrath> 2) actually getting a list of things that are to be updated
20:09 < mmcgrath> 3) actually doing the updates.
20:09 < skvidal> okay
20:09 < skvidal> can I jump in here?
20:09 < mmcgrath> Unfortunately system updates scale horribly. Restarting httpd on one server isn't that different from restarting it on 100 servers. But doing updates and restarts... completely different story.
20:09 < mmcgrath> skvidal: absolutely, have at it
20:09 < skvidal> okay
20:09 < skvidal> so something we originally wrote func for was this case
20:09 < skvidal> being able to get a lot of info and act on it
20:10 < skvidal> but we never implemented this
20:10 < skvidal> b/c we got off on other things
20:10 < skvidal> so I decided to work on it this week and I have a really simple script
20:10 < mmcgrath> skvidal: you're talking specifically about 3) or also 2?
20:10 < skvidal> 2 and 2
20:10 < skvidal> err
20:10 < skvidal> 2 and 3
20:10 < mmcgrath> <nod>
20:10 < skvidal> so here's the gist
20:10 < skvidal> get all updates via yumcmd.check_update via func
20:10 < skvidal> • store timestamp of check and list of updates in a dir/db with name of host
20:10 < skvidal> • store complete list of installed pkgs for each host
20:10 < skvidal> • cmd should
20:10 < skvidal> ∘ list hosts needing updates
20:10 < skvidal> ∘ list hosts needing a certain pkg updated
20:10 < skvidal> • apply updates - glob or all
20:10 < skvidal> ∘ report results of this
20:11 < skvidal> right now I'm storing things really simply so we can search it trivially
20:11 < Oxf13> what's with the unicode bullets?
20:11 < skvidal> /some/path/$hostname/[installed|updates|updated-$TIMESTAMP|orphans]
20:11 < skvidal> Oxf13: from my gnote notes - sorry
20:11 < Oxf13> s'ok
20:11 < skvidal> Oxf13: I use it to brainstorm then paste it in places
20:12 < Oxf13> skvidal: ditoo
20:12 < Oxf13> -o+t
20:12 < skvidal> the idea would be to have the script run using func, async, at regular intervals (maybe only once a day is enough)
20:12 < mmcgrath> skvidal: so lets flash forward to where all this work is done and is in place. What would we do come update day?
20:12 < skvidal> to know what's on the boxes and their status
20:12 -!- pravins [~psatpute@xxxxxxxxxxxxxx] has quit Quit: Leaving
20:12 < skvidal> func-yum -h hostname --pkg pkgname --update
20:13 < skvidal> or
20:13 < skvidal> func-yum --update
20:13 < skvidal> which hits all the hosts
20:13 < skvidal> or func-yum -h hostglob --pkg pkgglob --update
20:13 < mmcgrath> will we get any output or feedback from that?
20:13 < skvidal> then the results of those runs will be stored in /some/path/$hostname/updated-YYYY-MM-DD-HH:MM:SS
20:14 < skvidal> mmcgrath: so you can see what the results are explicitly
20:14 < skvidal> w/o having to chase all over the place
20:14 < skvidal> does that make sense?
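A minimal sketch of the check-and-record half of the func-yum idea skvidal outlines above, assuming func's overlord Client API. The yumcmd.check_update call is the one he names; the base path, the yumcmd.update() counterpart, and the file names are placeholders rather than his actual script:

    #!/usr/bin/python
    # Sketch of a func-yum helper: poll hosts over func and keep the answers
    # on disk, one directory per host, so they can be grepped later.
    import os
    import time
    import func.overlord.client as fc        # assumes func's overlord Client API

    BASEPATH = "/some/path"                  # per-host state dir, per the layout above

    def record(host, fname, data):
        """Write one result file under BASEPATH/$hostname/."""
        hostdir = os.path.join(BASEPATH, host)
        if not os.path.isdir(hostdir):
            os.makedirs(hostdir)
        f = open(os.path.join(hostdir, fname), "w")
        f.write("\n".join([str(item) for item in data]) + "\n")
        f.close()

    def check(hostglob="*"):
        """Ask the matching hosts what updates they have and record the answers."""
        results = fc.Client(hostglob).yumcmd.check_update()
        stamp = time.strftime("%Y-%m-%d-%H:%M:%S")
        for host, pkgs in results.items():
            record(host, "updates", pkgs)
            record(host, "last-checked", [stamp])

    def update(hostglob="*"):
        """Apply updates on the matching hosts; keep the output as updated-$TIMESTAMP."""
        results = fc.Client(hostglob).yumcmd.update()   # assumed counterpart to check_update
        stamp = time.strftime("%Y-%m-%d-%H:%M:%S")
        for host, output in results.items():
            record(host, "updated-%s" % stamp, [output])

    if __name__ == "__main__":
        check()

A --status mode would just read those per-host files back and print the last-checked and last-updated timestamps plus the package counts.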
20:14 < mmcgrath> <nod> yeah. I like that, pssh does something similar for ssh commands.
20:14 < skvidal> so I've got the storing info
20:14 < skvidal> and updates part working
20:15 < skvidal> I need to update func and certmaster for our hosts
20:15 < skvidal> b/c we're running an old one
20:15 < skvidal> which doesn't support the --timeout option :)
20:15 < skvidal> which is important here
20:15 < skvidal> and then one more thing I'm working on is
20:15 < skvidal> func-yum --status
20:15 < skvidal> which spits out the status of the hosts as it last knew it
20:15 < skvidal> so things like:
20:15 < skvidal> Last Checked: timestamp
20:15 < skvidal> Last Updated: timestamp
20:15 < skvidal> updates available: #of pkgs
20:16 < skvidal> installed pkgs: #of pkgs
20:16 < skvidal> orphans: #of pkgs
20:16 < skvidal> which seems like a reasonable set of things to list out
20:16 < mmcgrath> skvidal: do you need any help with that?
20:16 < skvidal> sure - it's just a single script
20:16 < mmcgrath> smooge: you around? we haven't heard from you yet? :)
20:16 < skvidal> I'm hoping to post a draft of it this afternoon
20:16 < mmcgrath> skvidal: excellent.
20:16 < smooge> yes
20:16 < smooge> sorry
20:16 < skvidal> one place where I do need help
20:16 < smooge> I have this meeting an hour from now
20:16 < skvidal> smooge: :)
20:17 < smooge> changing
20:17 -!- JSchmitt [~s4504kr@fedora/JSchmitt] has quit Remote host closed the connection
20:17 < skvidal> is the error reporting/catching
20:17 < skvidal> there are lots of things that get in the way here
20:17 < mmcgrath> skvidal: yeah, and we've had some bad luck with conflicts in the past.
20:17 < skvidal> and I want to make sure I catch and report all the errors sanely
20:17 < skvidal> mmcgrath: mmm conflicts
20:17 < skvidal> mmcgrath: so, something we should consider doing
20:17 < skvidal> even though it is a pain in the arse
20:17 < skvidal> is running yum transactions for updates with tsflags=test
20:18 < skvidal> which does EVERYTHING but nothing actually gets written out
20:18 < smooge> ok catching up.. the big issue that I had was that about 1/3 of systems required manual flag changes to yum to work
20:18 < skvidal> and no scriptlets are actually run
20:18 < skvidal> smooge: manual flag changes like what?
20:18 < smooge> --exclude --disablerepo
20:18 < skvidal> hmm, disablerepo?
20:18 < skvidal> I sortof get 'exclude'
20:18 < mmcgrath> skvidal: would that do a full download of the package? because I was thinking about doing that as part of a pre-update thing so we don't pound puppet1 with updates and so when the actual time comes it takes less time.
20:19 < mmcgrath> if what you want does download the package, we could kill two birds with one stone.
20:19 < skvidal> mmcgrath: yes - it does everything including run the transaction but it runs it in rpm's test mode which does nothing
20:19 < smooge> skvidal, there are a couple of boxes that have outside repositories and updates will come up squirrely unless I turn off the repos. Thankfully disable repo only occurs on .stg and publictest boxes normally
20:19 < skvidal> mmcgrath: for a good time set tsflags=test in yum.conf under [main] and forget about it
20:19 < skvidal> mmcgrath: it's great fun trying to figure out why you ALWAYS have new updates
20:20 < mmcgrath> heheheh
20:20 -!- adrianr [~adrian@xxxxxxxxxxxxxxxxxxxxxx] has joined #fedora-meeting
20:20 < skvidal> smooge: if we know the set of updates we mandate we could only explicitly enable those
20:20 < mmcgrath> smooge: so what were some of the biggest issues you ran into with this last round of updates?
20:20 < smooge> ok slowness of updates.
20:20 < skvidal> smooge: taking too long to download or too long to install?
20:21 < Oxf13> (or too long between update sessions)
20:21 < mmcgrath> smooge: the actual 'yum -y update' part?
20:21 < smooge> 1) slowness of updates. some boxes sit for 2-3 minutes on installation of rpm glibc and such..
20:21 * nirik notes doing them more regularly would help with that.
20:21 < skvidal> smooge: yah - that's rpm fingerprinting - and there's nothing we can do until rhel6
20:21 < smooge> 2) slowness of updates. slow network to outside. ibiblio was slower than telia1
20:21 < mmcgrath> nirik: so would downloading the packages earlier. We already do them monthly.
20:21 < ricky> Do we ever not want an update available from the RHEL updates?
20:22 < smooge> 3) errors in updates. various packages would spew scriptlet %post errors; I wanted to make sure they were ok
20:22 < nirik> well, that would help with the download part, but not the applying part.
20:22 < ricky> If not, could that just be automated so we just need to think about rebooting?
20:22 < smooge> 4) conflicting packages.
20:22 < smooge> 5) systems not coming back due to rawhide+xen
20:22 < mmcgrath> yeah rawhide + xen is an absolute bitch
20:23 < mmcgrath> I wonder if we moved our rawhide boxes to KVM if we'd have a better go at them.
20:23 < smooge> 6) updating 8 boxes at once on a xen box causes slowness.
20:23 < mmcgrath> nirik: how often do you think is good to do updates?
20:23 * mmcgrath thinks this is a good discussion to have
20:24 < sijis> we currently do them monthly?
20:24 < smooge> nirik the locality of a 'proxy' for the remote boxes would make some of the delays easier to know. I can deal with a 10 minute wait on install.. but watching a package stop downloading for that long gets me wondering
20:24 < mmcgrath> sijis: yeah, unless there's security updates.
20:24 < Oxf13> mmcgrath: we'd have a much better go with rawhide on kvm
20:24 < nirik> well, for our customers we do them daily if they are not requiring a reboot. ;) If they are, we schedule a day and/or time to do them and do reboots.
20:24 < Oxf13> mmcgrath: but any rawhide host has an inherent risk of not coming back after a change
20:24 < nirik> most rhel updates are security updates.
20:24 < mmcgrath> nirik: how are you doing them?
20:25 -!- Sonar_Guy [~Who@fedora/sonarguy] has quit Quit: Leaving
20:25 < Oxf13> nirik: sadly, there have been more and more non-security updates in the EL channels as of late
20:25 < mmcgrath> Oxf13: and we're still averaging 1 kernel update / month.
20:25 < mmcgrath> which has also been a PITA.
20:25 < mmcgrath> We may want to be more careful about the kernel updates and determine if we really need to reboot.
20:26 < nirik> I typically use 'mussh'... run a check-update over a group (different host lists/groups) and make sure they are all things we know what they are, then use mussh with 'yum -y' and apply them. Then go back and restart anything that needs restarting.
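Back to skvidal's dry-run suggestion a bit further up: the yum.conf form he mentions is just the tsflags option, which turns every transaction into an rpm test transaction (packages are downloaded and the transaction is run, but nothing is written out and no scriptlets execute). For a pre-update test pass you would drop it in temporarily and remove it afterwards; left in place, as he jokes, every box will report pending updates forever:

    # /etc/yum.conf -- temporary, for a pre-update test run only
    [main]
    tsflags=test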
20:26 < nirik> yeah, kernel updates have gone way up in frequency it seems like. ;(
20:26 < mmcgrath> I don't know wtf that's about but it is very annoying
20:26 < mmcgrath> smooge: ok, so back to the issues you saw
20:26 -!- mdomsch [~mdomsch@2001:1938:16a::2] has quit Quit: Leaving
20:26 < mmcgrath> those are all generally things I see when I do updates
20:27 < mmcgrath> and I think with some work much of it can be automated.
20:27 < nirik> some of the kernel updates however we have applied and not rebooted for.
20:27 < smooge> and while it can be paralleled I didn't get to the part where I wasn't dealing with potential races til way after the window for updates should have finished
20:27 < mmcgrath> nirik: yeah
20:28 < smooge> so we are about 1/2 updated
20:28 < smooge> we still have most remote locations to do
20:28 < skvidal> okay so test transacting would help find systems which are more likely to die
20:28 < mmcgrath> skvidal: just curious, how long do you think it'll be before you're ready to actually test?
20:28 < mmcgrath> because it sounds like smooge still has some to do, but we freeze next week for the alpha.
20:29 < skvidal> I need func updated on some boxes - so I could test on the ones I update
20:29 < smooge> mmcgrath, I am wanting to postmortem yesterday since I felt I was just shit-canning our infrastructure
20:29 < skvidal> I was going to start by testing people1
20:29 < smooge> I haven't updated that box at all
20:29 < smooge> skvidal, so it should be good for a test
20:29 < mmcgrath> smooge: naw, you did fine, the only bad ones were that xen4-mgmt's RSA-II decided to stop working (which made the shutdown -h a problem)
20:29 < smooge> the next issue I ran into was that things like transifex should not have been updated ..
20:30 < mmcgrath> and the other one was just waiting for db3 to come back online, lvm + large shares is annoying.
20:30 < mmcgrath> smooge: yea, and that's the last thing I want to talk about
20:30 < smooge> I think xen4 is having real issues
20:30 < skvidal> brb
20:30 < mmcgrath> Basically we need to have a test repo
20:30 < mmcgrath> and not enable it anywhere.
20:30 < mmcgrath> ricky: you're working on transifex now right?
20:30 < nirik> is epel-testing enabled everywhere?
20:30 < smooge> nirik yes
20:30 < ricky> Yeah, I wasn't aware there was a new package in EPEL
20:30 < mmcgrath> nirik: at the moment it is and we have very few problems with it
20:31 < mmcgrath> smooge: what's the puppet epel-test thing you ran into?
20:31 < smooge> puppet is the usual one
20:31 < mmcgrath> ricky: oh the new transifex is in epel?
20:31 < ricky> Did you guys get issues with puppet? I've been testing the latest version without any pain
20:31 < nirik> yeah, just another source of package updates... if you could reduce the need for that it would help make updates easier.
20:31 < mmcgrath> ricky: I didn't think so but I've heard people complaining about it so I must have missed it.
20:31 < smooge> a couple of php packages on some box a while back.
20:31 < ricky> Er, I'm not sure, maybe it came from the infra repo
20:31 < mmcgrath> smooge: did we have a puppet update go bad recently?
20:31 < ricky> Always make sure to update the puppetmaster first on puppet updates
20:31 < smooge> and one time a bad scriptlet that left me two packages on the box
20:31 < mmcgrath> ricky: can you check real quick?
20:31 < smooge> mmcgrath, 3x last month
20:32 < ricky> It's from infra, my mistake
20:32 < mmcgrath> smooge: we had 3 puppet updates? or we had 3 of them go bad?
20:32 < mmcgrath> what happened?
20:32 < ricky> Maybe we need an infrastructure-test for this special staging stuff :-)
20:32 < smooge> mmcgrath, I did the updates in sections last month
20:32 < mmcgrath> ricky: yeah that's what I'm proposing
20:32 < ricky> Otherwise, if we decide to rebuild app1, we need to special case a bunch of stuff
20:32 < mmcgrath> smooge: but what happened?
20:32 < ricky> **appX
20:32 < smooge> mmcgrath, so there were 2-3 pushes of puppet packages and each time I seemed to get some boxes updated to the new stuff
20:32 < smooge> which broke puppet1 so I had to then update it and the boxes I had done before
20:33 < mmcgrath> what broke though?
20:33 < mmcgrath> like what were the errors?
20:33 < smooge> puppet couldn't talk to them.
20:33 < smooge> I didn't find the error.. ricky let me know 2-3 days after I had done the updates when he caught it
20:33 < mmcgrath> the new versions of puppet couldn't talk to the old puppetmaster or the other way around?
20:33 < mmcgrath> ricky: do you remember what happened there?
20:33 < ricky> The server is generally backwards compatible
20:33 < smooge> I think it was the clients weren't getting updates
20:33 < ricky> So if you accidentally update a client, update the server and check if stuff works - no need to rush on updating clients
20:34 < smooge> so various boxes were in lala land for a couple of days.
20:34 < ricky> I don't remember what happened :-/
20:34 < mmcgrath> yeah
20:34 < ricky> The only thing that should cause pain is a client update without the corresponding server one though
20:34 < ricky> So it must have been that if anything, I guess.
20:34 < mmcgrath> ricky: are you still getting errors sent to you?
20:34 < smooge> but I am trying to piece from xchatlogs
20:35 < ricky> I'm still getting a ton of errors, but most are an unrelated SELinux thing (and lack of mount ACLs in staging)
20:35 < ricky> I think we can reenable puppet email to everybody once that SELinux thing gets fixed
20:35 < mmcgrath> ricky: k
20:36 < mmcgrath> Ok, so I'll create a new testing repo, put it on all the servers but make it so you have to explicitly enable it to use it.
20:36 < smooge> mmcgrath, I am working on a short blurb for what I have done in the past and what we could see if it works for us
20:36 -!- ayoung [~ayoung@xxxxxxxxxxxxxxxxxxxxxxxxxxx] has joined #fedora-meeting
20:36 < smooge> its longer than IRC level so will send to infrastructure list later today
20:37 < mmcgrath> smooge: k, is it vastly different from what we've generally agreed upon here?
20:37 < smooge> ricky can I get them right now even with the selinux stuff
20:37 < mmcgrath> OH! that reminds me, another thing we didn't do this time around...
20:37 < ricky> So any thoughts about automating updates that come from RHEL as opposed to EPEL/Infra repo?
20:37 < smooge> I am not sure.. it could be :)
20:37 < mmcgrath> we didn't update in staging first.
20:37 < ricky> smooge: Really? As in emails in the form of "Puppet Report for XXX" ?
20:37 < mmcgrath> or if we did staging didn't function well for us.
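A sketch of what the disabled-by-default testing repo mmcgrath proposes above could look like on the clients; the repo id, name, and baseurl here are hypothetical placeholders, not the real infrastructure repo definition:

    # /etc/yum.repos.d/infrastructure-testing.repo (hypothetical example)
    [infrastructure-testing]
    name=Fedora Infrastructure testing packages - not enabled by default
    baseurl=http://SOMEHOST/infrastructure-testing/$basearch/
    enabled=0
    gpgcheck=1

With enabled=0 nothing gets pulled from it during normal update runs; packages like transifex would only come in when asked for explicitly, e.g. yum --enablerepo=infrastructure-testing update transifex.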
20:37 < smooge> ricky please
20:38 < smooge> ricky best way for me to learn
20:38 < smooge> mmcgrath, the issues I had with updating staging were a couple
20:38 < ricky> Oh, sorry - I thought you said you were getting them, not that you wanted to get them
20:38 < ricky> Sure thing
20:38 < smooge> 1) stuff wasn't exactly the same as in production
20:38 < mmcgrath> ricky: share the pain :)
20:38 < smooge> 2) boxes are spread out over many xen servers which needed to be rebooted due to xen changes
20:38 < smooge> 3) which affected boxes that weren't staging
20:39 < mmcgrath> I'm more specifically wondering how we missed the transifex and fedoracommunity updates, because neither of those rpms are capable of working in our environment at the moment.
20:39 < mmcgrath> I mean, once we have the testing repo in place, that might be fixed, but it'd still be good to have a way to catch it
20:40 < smooge> I can go check the logs, but I do not think they had been updated on those boxes til I got to them
20:40 < mmcgrath> smooge: that's what I mean, once you updated them did you check to see they were still working?
20:40 < smooge> so yes they had not been properly tested
20:40 < smooge> I get it slowly
20:41 < mmcgrath> smooge: one thing I had started working on but need to get back to is this:
20:41 < mmcgrath> http://git.fedorahosted.org/git/fedora-infrastructure.git/?p=fedora-infrastructure.git;a=tree;f=scripts/site-tests;h=148a785193f868a280d27b61adea7af2bcb61c85;hb=HEAD
20:41 < mmcgrath> .tiny http://git.fedorahosted.org/git/fedora-infrastructure.git/?p=fedora-infrastructure.git;a=tree;f=scripts/site-tests;h=148a785193f868a280d27b61adea7af2bcb61c85;hb=HEAD
20:41 < smooge> mmcgrath, no I had not.. to be honest I didn't grok that it was breaking things.
20:41 < zodbot> mmcgrath: http://tinyurl.com/yeocsvz
20:41 < mmcgrath> sorry
20:41 < mmcgrath> ah yeah.
20:41 < mmcgrath> one thing I usually try to do is update staging first and make sure they're all still working before moving on
20:41 < mmcgrath> that's a good step to add to our SOP
20:41 < sijis> is stg done a day or so prior to prod?
20:42 < mmcgrath> smooge: but that link has some scripts I was working on to basically go out and hit our environment, doing tests for 200's, things like that.
20:42 < smooge> I thought changes to transifex would have been tested before I got to them... I am quite guilty of Somebody Else's Problem field
20:42 < smooge> sijis, it will be
20:42 < sijis> ah ok. good
20:42 < mmcgrath> smooge: well, there's multiple types of tests involved, but it's always up to us to verify things are working when we're the ones making the change.
20:42 < smooge> sijis, I will add that to my self-flagellation email I am writing
20:43 -!- spoleeba [~one@fedora/Jef] has joined #fedora-meeting
20:43 < smooge> mmcgrath, yes. I agree I got caught up in trying to get everything done by window and didn't do my job properly.
20:43 < mmcgrath> We don't exactly make it easy :)
20:44 < smooge> admitting you screwed up is the first step in screw-a-holics anonymous
20:44 < mmcgrath> hopefully after skvidal's work is done updates won't be such a big deal.
20:44 < skvidal> we'll see
20:45 < mmcgrath> smooge: but yeah, take a look at those fedora-infrastructure.git/scripts/site-tests/ scripts
20:45 < mmcgrath> they're nifty :)
20:45 < mmcgrath> ok, anyone have anything else on this topic before we move on?
20:46 < smooge> another repo I need to check out. is that ok for my office box or should it stay inside the colo?
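In the spirit of those site-tests scripts mmcgrath links above, the "hit the environment and check for 200's" idea amounts to something like the following hypothetical stand-in with made-up target URLs; it is not the actual code from that git tree:

    #!/usr/bin/python
    # Hypothetical post-update smoke test: fetch a few URLs and report anything
    # that does not come back cleanly (urllib2 raises on non-2xx responses).
    import sys
    import urllib2

    URLS = [
        "http://fedoraproject.org/wiki/",              # example targets only
        "https://admin.fedoraproject.org/accounts/",
    ]

    failures = 0
    for url in URLS:
        try:
            urllib2.urlopen(url)
            print "OK   %s" % url
        except Exception, e:                           # HTTPError, URLError, socket errors
            print "FAIL %s (%s)" % (url, e)
            failures += 1

    sys.exit(failures and 1 or 0)

Run against staging right after an update there, a check like this gives a quick pass/fail signal before the same packages go to production.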
20:46 < smooge> I am done
20:46 < ricky> It's public
20:46 < mmcgrath> smooge: that one's ok to do whatever with, it's on fedorahosted.org
20:46 < ricky> (As in, git://git.fedorahosted.org/git/fedora-infrastructure.git)
20:47 < smooge> ok
20:47 < abadger1999> smooge, mmcgrath: Staging is a hybrid environment though.... I think fedoracommunity and transifex are both updated beyond production in staging.
20:47 < mmcgrath> they're both in some weird state for sure.
20:48 < smooge> abadger1999, my writeup covers a possible fix. BY ADDING MORE BUREAUCRACY. No not really.. wanted to see if skvidal was awake yet
20:48 < mmcgrath> Ok, well lets all think on this some more and re-group next week.
20:48 < skvidal> smooge: thanks, you're a prince
20:48 < mmcgrath> #topic search engine
20:48 -!- zodbot changed the topic of #fedora-meeting to: search engine (Meeting topic: Infrastructure)
20:48 < mmcgrath> a-k: any update on the search engine?
20:48 < abadger1999> The new repo will go a long ways.
20:48 * mmcgrath is trying to speed things up since we've only got 10 minutes or so left
20:48 < a-k> Really fast update... No progress to report this week
20:48 < smooge> skvidal, you are welcome. I see you get enough ribbing as it is so I owe you a lunch at a cafe next time I am in NC
20:49 < mmcgrath> a-k: no worries
20:49 < mmcgrath> #topic Freeze
20:49 -!- zodbot changed the topic of #fedora-meeting to: Freeze (Meeting topic: Infrastructure)
20:49 * ricky shivers
20:49 < mmcgrath> Just a reminder, we freeze for two weeks starting next tuesday
20:49 < skvidal> smooge: remember, I'm one of your followers :)
20:49 < sijis> ricky: funny (not) :)
20:49 < smooge> YOU ARE AN INDIVIDUAL
20:49 < abadger1999> skvidal: One thing I'm anticipating -- new pkgdb won't go into production in time for this freeze. There's just too many outstanding issues.
20:50 < smooge> ok freeze tag
20:50 < skvidal> abadger1999: :(
20:50 < ricky> Just a heads up, we may try to get a change request in for transifex 0.7
20:50 < abadger1999> That means, tags from the pkgdb and critpath won't be there until after we unfreeze.
20:50 < smooge> ok when are we freezing exactly
20:50 < skvidal> abadger1999: fooey
20:50 < ricky> Docs needs this badly for their translations
20:50 -!- djf_jeff [~jeff@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has quit Quit: I quit
20:50 < mmcgrath> smooge: the 16th
20:50 < G> brrr, it's cold in here :P
20:50 -!- mether [~Rahul@xxxxxxxxxxxx] has quit Ping timeout: 252 seconds
20:50 < smooge> abadger1999, can we go for a change request for the change?
20:51 < abadger1999> Welll...
20:51 < mmcgrath> ricky: no way to get it in before the freeze?
20:51 < abadger1999> Oxf13: Under the new no frozen rawhide, when are we doing mass branching?
20:51 < Oxf13> abadger1999: alpha freeze
20:51 < Oxf13> so... tuesday
20:52 < ricky> That might happen as well - I'll try to get some test repos setup and tested by this weekend
20:52 < abadger1999> Okay... smooge, If mass branching is done, I might do it via change request.
20:52 < abadger1999> But I'm very hesitant.
20:52 -!- jaxjaxmob [~jaxjaxmob@xxxxxxxxxxxxx] has quit Ping timeout: 256 seconds
20:52 < mmcgrath> abadger1999: what's the worry?
20:52 -!- gholms|mbp is now known as gholms
20:52 < mmcgrath> technically if the mass branch is part of the release, it's not actually frozen.
20:52 < ricky> I'm not sure if we need specific testing for docs' use case though, since they're apparently the big consumers for this update
20:53 < abadger1999> mmcgrath: Lots of changes, lots of bugs I noticed and squashed, sync script is slow, db is huge.
20:53 < mmcgrath> abadger1999: oh, this is all related to the work you're doing with pkgdb?
20:53 < smooge> abadger1999, ok thanks
20:54 < abadger1999> mmcgrath: Yep. And a little part of it is just that I didn't do the majority of the code this time so my gut doesn't trust all of the changes that went in yet.
20:54 < mmcgrath> abadger1999: <nod> well as that comes let me know how I can help
20:54 < abadger1999> Some time in staging will let me know what to expect.
20:55 -!- cwickert [~chris@fedora/cwickert] has joined #fedora-meeting
20:55 < abadger1999> ricky, mmcgrath, skvidal: So here's a question -- is tx update more important than new pkgdb?
20:55 < abadger1999> new pkgdb gets us tags and critpath which we need.
20:55 < abadger1999> But it sounds like the tx update needs some love and is important as well.
20:55 < ricky> The tx update is a blocker for docs, so it's pretty important
20:56 < ricky> Right now, we have it running in staging - we need test repos (ideally test repos that test docs workflow) and also some config file cleanup.
20:56 < smooge> ricky, its just documentation.. i mean next we will be worrying about quality assurance :)
20:56 < abadger1999> Do you guys want me to switch over to working on tx instead of pkgdb since I already am sure pkgdb is going to slip?
20:56 < ricky> (This is why puppet is currently disabled on app01.stg, sorry for hogging it :-))
20:57 -!- J5 [~quinticen@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has quit Ping timeout: 272 seconds
20:57 < mmcgrath> abadger1999: I don't really know I have the knowledge to answer that.
20:57 < mmcgrath> I don't know what tx not making it in would mean
20:57 < G> mmcgrath: no French/German/etc translations?
20:57 < ricky> .ticket 1455
20:57 < zodbot> ricky: #1455 (transifex upgrade) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/1455
20:57 < mmcgrath> G: for what? we had german and french translations for F12
20:58 < mmcgrath> that's my confusion
20:58 < ricky> My info is what sparks said on the second-to-last comment
20:58 < ricky> Apparently docs translations need certain features from tx 0.7
20:58 < abadger1999> "This will adversely affect the Release Notes and all other Docs Guides if not completed by Mar 11."
20:58 < mmcgrath> huh? why
20:58 < ricky> Looking at that comment again though, the date is past the freeze, so not as much of a rush as I thought
20:59 < mmcgrath> ricky: k
20:59 < mmcgrath> well since we're about done I'm going to open the floor real quick
21:00 < mmcgrath> #topic open floor
21:00 -!- zodbot changed the topic of #fedora-meeting to: open floor (Meeting topic: Infrastructure)
21:00 < mmcgrath> anyone have anything they'd like to quickly discuss?
21:00 < smooge> i had something.. sneezed and forgot it
21:00 < smooge> don't turn 40.. its the new 80
21:00 < mmcgrath> hahaha
21:00 < mmcgrath> Ok, and with that
21:00 < abadger1999> :-)
21:00 < mmcgrath> #endmeeting
21:00 -!- zodbot changed the topic of #fedora-meeting to: Channel is used by various Fedora groups and committees for their regular meetings | Note that meetings often get logged | For questions about using Fedora please ask in #fedora | See http://fedoraproject.org/wiki/Meeting_channel for meeting schedule
21:00 < zodbot> Meeting ended Thu Feb 11 21:00:37 2010 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot .
21:00 < zodbot> Minutes: http://meetbot.fedoraproject.org/fedora-meeting/2010-02-11/fedora-meeting.2010-02-11-20.00.html
21:00 < mmcgrath> thanks for coming everyone!
21:00 < zodbot> Minutes (text): http://meetbot.fedoraproject.org/fedora-meeting/2010-02-11/fedora-meeting.2010-02-11-20.00.txt
21:00 < zodbot> Log: http://meetbot.fedoraproject.org/fedora-meeting/2010-02-11/fedora-meeting.2010-02-11-20.00.log.html