On Fri, 22 Jan 2016 16:45:49 -0500 (EST)
Patrick Uiterwijk <puiterwijk@xxxxxxxxxx> wrote:

> Just domain would be a good way for us internally, but maybe we can
> also get the design people to provide us with banners or different
> versions of the logo to put on stg/dev/cloud/... instances, so that
> we also make it clear inside the applications in a consistent manner.

I suppose, but when something is down, how would people see the banner? ;)
That's why I think domain names are a good way to show things.

> Maybe clearly indicate cloud.fp.o (and some others probably) as
> exceptions to this rule.

Sure, but I was hoping to phase out cloud.fp.o in favor of
fedorainfracloud.

> getfedora.org - Same level as fedoraproject.org.

Yeah. In fact, on further thought I think we have one level higher
too: mirrorlists, hotspot.txt, i.e. things end users immediately
notice. However, it might just be muddying the water to try and
mention these as a higher support level.

> cloud.fedoraproject.org - Same level as fedorainfracloud.org.

Yeah.

> Where do hosted, people and planet fall in?
> I would say these are production as well, and same as fp.o.

Yeah, agreed. Although I could see a case for people and planet being
in a lower category, it's probably best to put them with fp.o.

> > Any general thoughts on the idea?
>
> Outside of the indications to users, how about defining "SLA levels",
> or whatever we want to call them, and displaying the above rules in a
> table, for easy grokking by other people?
>
> Something like:
> Status     | Monitored | Paged | Off-hours
> Production |     X         X       X     -
> Staging    |     -         X       -     -
>
> Or however we want to fill this in exactly, just a quick example; we
> might for example give different names to the levels, or something of
> the sort.

Sounds like a good idea. Also possibly the response time there...

kevin