Re: Fedora Documentation Platform

On Tue, 2007-10-09 at 12:20 -0600, Jonathan Steffan wrote:
> 
> Team,
> 
> 	So, after brief thought about the Fedora Documentation Platform (FDP)
> changes I'd like to do... here they are:

I mentioned the acronym overloading over IRC, so hopefully we can find a
better name for the platform. :-)

> * Replace Makefiles with config files and then use the FDP to do all
> building, allowing a user to specify whether they want to use the local
> cpu to do the building or the buildd.
> 
> 	+ We get to use python :-D
> 	+ IMHO we would get much more flexibility and a tighter integration
> with our translators and translation systems (read: translators would be
> able to easily render for their language to check their results before
> pushing the build to zope)
> 	+ AFAIK, we have more combined skills with python, over Makefiles

I'm afraid of being a naysayer, but this sounds a bit like the cart
driving the horse.  When you say "we have more combined skills with
Python over Makefiles," what I hear is "I know more about Python than
Makefiles."  But using Makefiles and mainly (since Tommy vanished) the
meager skills of a liberal arts major, we've been able to maintain a
toolchain that does plenty of stuff, like rendering HTML for
translators' languages, checking the results, and so forth.  (Q.v. "make
html" or "make validate-xml-<LANG>".)

When we have consultants come in to our office who work on solution
design for a couple weeks and then start their presentation off with
"Well, the first thing we need to do is replace all this UNIX stuff with
Windows 2003 servers!", we generally take their badges and wave goodbye.
I'm not saying that this is an equivalent scenario -- I know you've done
much more than a couple weeks' work -- but GNU make is nothing if not
renowned for flexibility.

Using Python configuration files moves the learning curve very steeply
upward for anyone else who wants to figure out the toolchain, because
config files don't *do* anything.  They imply the need to understand how
everything's happening in Zope.  (Don't they also remove the ability for
Z/P-less docs work?)  With the Makefile, everything is right there on
the surface.
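
Just to make my worry concrete, here's roughly what I picture when I
hear "config file" -- the file layout and key names below are entirely
my invention, not anything from your design:

  # hypothetical per-document config, my guess -- say it looks like:
  #
  #   [document]
  #   name = example-tutorial
  #   langs = en_US de es it pt_BR
  #   formats = html html-nochunks pdf
  #
  import ConfigParser                      # Python 2 module name

  cfg = ConfigParser.ConfigParser()
  cfg.read("document.cfg")
  langs = cfg.get("document", "langs").split()
  formats = cfg.get("document", "formats").split()

  # Reading the values is the easy part.  Nothing above *builds* anything;
  # the knowledge of how DocBook becomes output still lives off in the
  # buildd/Zope code, which is exactly the part a newcomer can't see.
  # With the Makefile, "make html" both describes and does the work.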

I'm not going to pretend that we don't have to make some changes, but we
also need to understand that our toolchain needs to be packageable at
some point.  We'd like to have a simple (ahem, repeat, SIMPLE) toolset
that someone can install from a fedora-docs-tools RPM that allows them
to simply write DocBook XML and then generate roughly the same kinds of
outputs as we're making now -- and hopefully more and better.

> 	+ Centralized code updates
> 	- Centralized code updates: this is because very little code will
> actually be in the buildd-cli. If the command is to be run locally, the
> buildd will just return an array of commands it would have run...
> allowing the buildd-cli to run them on the local cpu. This does require
> the buildd to be available and the contributor in question to have
> Internet access. Do we want to allow offline building?

See above, but yes, offline building is very important.  We do not want
to limit options to just people with strong Internet presence, keeping
in mind our global constituency.
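
Just so I'm sure I follow the local-versus-buildd split, here is how I
picture the buildd-cli dispatch; everything in this sketch (the URL, the
method names, using XML-RPC at all) is my guess, not your design:

  import subprocess
  import xmlrpclib                                 # Python 2 module

  BUILDD_URL = "http://buildd.example.org/xmlrpc"  # made-up address

  def build(module, lang, run_local=True):
      """Ask the buildd what to do, then run it here or let the buildd do it."""
      server = xmlrpclib.ServerProxy(BUILDD_URL)
      if run_local:
          # the buildd returns only the commands it *would* have run...
          for cmd in server.get_build_commands(module, lang):
              subprocess.check_call(cmd)           # ...and we run them locally
      else:
          server.build(module, lang)               # buildd does the work itself

  # Note that even the "local" branch still has to reach the buildd to get
  # its command list -- which is exactly the offline case I'm asking about.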

> * Have better targets. It will be much easier for me to write more
> "stable" code if I am able to checkout a CVS module and then read
> (uniformly) into the buildd what this CVS module allows the buildd to
> do. For example, What languages are complete?

"make postat-<LANG>"

>  What languages are there?

"make showvars | grep OTHERS"

> When was the last build, the results? What is the target for this doc?
> Where do we have it published to? ... Stuff like this. 

All right, we don't have that stuff. :-)  But what does "the target for
this doc" mean?  What does the ending location mean?  I thought we also
want people to be able to use and build these documents locally and not
constrain them to just our online platform.  If I'm going through here
completely misunderstanding your point, I really do apologize for being
thickheaded.

> I'd like to be
> able to programmatically read this information.. while also having it
> very easy to work with for a human (read: use something like
> ConfigParser) and storing most, if not all, information in the CVS tree
> itself. For example, I really wanted to add a "lock" to the CVS module
> when someone is doing TTW (through the web) editing. This will prevent
> data from being lost. Right now, it is possible for edits made via plone
> to be overwritten by a user editing via CVS, and vice versa. I'd like to be
> able to checkout a CVS module and know "right away" if there has been an
> edit somewhere else... that has not been saved back to the module. If we
> had a nice system that I could easily make changes via the buildd to
> inform users of this.. it would be perfect. Example:
> 
> 	User 1 is editing the README via plone.
> 	User 2 is leet and edits the docbook directly by checking out the module
> 	User 2 is informed with a DONTREALLYDOTHIS file that has the user info
> from plone stating the edit is going on, and when.
> 	[ OK, so yes.. we can do this anyways... and will]
> 	For any action User 2 takes via the buildd-cli, they will be directly
> warned and asked whether to continue before they can render
> or use the cli to commit (they could of course just use direct CVS
> commands, but yeah).

Does this mean that two people can't work on a guide at the same time,
or only that two people can't publish the same guide at the same time?
Because currently, Karsten and I can do something like hit the Release
Notes "beats" in tandem to get the content put together faster.  Can you
explain this in a way that shows me how we gain by using this kind of
scheme?
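
If I'm reading the DONTREALLYDOTHIS idea correctly, the buildd-cli side
of it could be as small as this (the file name and its contents come
from your example; everything else here is my guess):

  import os
  import sys

  LOCKFILE = "DONTREALLYDOTHIS"     # dropped into the module by the plone side

  def warn_if_locked(module_dir="."):
      """Warn a CVS-side editor that a through-the-web edit is in progress."""
      lock = os.path.join(module_dir, LOCKFILE)
      if not os.path.exists(lock):
          return True
      print "Warning: a TTW edit is in progress:"
      print open(lock).read()       # who is editing, and since when
      answer = raw_input("Continue anyway? [y/N] ")
      return answer.lower().startswith("y")

  if not warn_if_locked():
      sys.exit(1)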

> I also need the ability to have a document in different namespaces.
> Namespace = url request that retrieves rendered content.
> 
> Example:
> 
> CVS module harHar could have the namespaces /the/har/Har and also
> /documentation/this/is/answering/all/you/asked
> 
> Such:
> 
> 	Admin 1 authorizes Document 1 to go into official namespace as
> /howto/cure/luser/error
> 	This document is going through the standard process of translation, and
> updates.
> 	User 1 wants to contribute a fix to /howto/cure/luser/error but doesn't
> have access to that namespace.
> 	* Here, we want to enable anyone to help... on the team or not.
> 	User 1 either copies (if they can read they can copy :-D) or inits
> another document using the same CVS source. At this point I want User 1
> to be able to edit the document. They will be able to, since they are
> owners of the object. They will be restricted from calling a commit,
> but will be able to render from CVS (though they most likely don't want
> to, as it would re-render over their changes.. good thing the document
> would be versioned in plone so they can revert that oops).
> 	* Here I want to illustrate why I really want a good way to work in
> multiple locations
> 	User 1 does some great work and informs Admin 1 (or 2, or 15) they
> should look at the changes. (Now, hopefully, I can get CMFDiff to work
> correctly, but let's assume it does.) The Admin user will be able to look
> at the history tab and view all of the changes. If they are acceptable,
> the Admin user will be able to (from this user namespace) issue a commit
> to save the changes to CVS.
> 	Admin 1 has saved the changes.. and likes them enough they want to push
> them into the official namespace. Well, all that will need to happen is
> to issue a render in the official namespace.
> 
> == At this point, having config files based in CVS is even more
> important. I briefly brought this up a while ago and have yet to solve
> it. ==

I think I may be able to hazily glimpse a little of why these config
files are important, but it's still eluding me.  Could these config
files, then, be generated from Makefile rules and a bit of other content
unseen by mortals?  That would retain some sort of compatibility for
people who just want to do work via the $SCM command line.  
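
For instance -- and this is purely a guess at what such a bridge could
look like, assuming "make showvars" prints name/value pairs, which I'd
want to verify -- something like this could let the Makefile stay
authoritative while still feeding the buildd a config file:

  import os
  import subprocess
  import ConfigParser

  def makefile_to_config(module_dir, outfile="document.cfg"):
      """Scrape the Makefile's variables into a ConfigParser-style file."""
      showvars = subprocess.Popen(["make", "showvars"], cwd=module_dir,
                                  stdout=subprocess.PIPE).communicate()[0]
      cfg = ConfigParser.ConfigParser()
      cfg.add_section("document")
      for line in showvars.splitlines():
          if "=" in line:
              name, value = line.split("=", 1)
              cfg.set("document", name.strip().lower(), value.strip())
      cfg.write(open(os.path.join(module_dir, outfile), "w"))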

> 	* What happens if we get an edit in English, for example, while
> translations are going on? Even if they are not? Do we render all
> languages... even when some languages have not been updated yet? Does
> this mean we will have multiple running versions? Do we block renders
> until all languages are updated? In our current system, it is very hard
> for me to programmatically detect all of these situations, and
> anything I've tried so far I was able to break quickly.

I don't know whether I understand your questions, but we always have
translations occurring on documents.  If we have a document set to
render, it would seem -- to me at least -- more worthwhile to have all
the active languages that are translated past some low-bar point
rendered with it.  It's better to have things up to date than fully
translated, given that we at least have English as a lingua franca for
the Fedora Project as a whole.  The way documents render using our
current PO methods, if there is an edit made that changes the original
text content, the string representing that content goes "fuzzy."  When
the document is built in a locale where a string is fuzzy, the original
(untranslated) string appears instead of the old, possibly outdated
translation.  So if I make an edit to the original English of the second
paragraph of the three below, until the German translation is done,
you'd see:

  Gabi, geh doch mal ran!
  Gabi, go and get it quickly!
  Gabi, geh doch mal ran!
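
(Incidentally, detecting that state programmatically is easy; msgmerge
marks each stale entry with a "#, fuzzy" flag comment in the PO file, so
even a throwaway script can count them, and "msgfmt --statistics"
reports the same numbers from the shell:)

  def count_fuzzy(po_path):
      """Count the entries gettext has flagged as fuzzy in a PO file."""
      fuzzy = 0
      for line in open(po_path):
          # msgmerge marks out-of-date translations with a '#, fuzzy' comment
          if line.startswith("#,") and "fuzzy" in line:
              fuzzy += 1
      return fuzzy

  print count_fuzzy("po/de.po"), "fuzzy strings in the German translation"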

> Depending on the answers to the above, does it not make sense to be able
> to say "current render for language XY is already updated, don't
> render... *next*" to save CPU and rendering time? Does it not make sense
> to "ping" our translation system when we have stale detection? Do we
> ping and then block the render from going to the zope instance?

I see your point here, and definitely appreciate the tenderness of this
particular issue (CPU time eaten).  Building the entire release notes
content haul, for example, on my dual-core CPU takes a LONG time, on the
order of 15+ minutes.  On the other hand, we wouldn't expect to build
that much at one time on this platform.  However, I don't see what this
has to do with the beginning of this email, which could just mean I'm
lost.
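
If we did want to skip needless renders, even a crude timestamp
comparison along these lines would recover most of that CPU time (the
file layout below is just illustrative):

  import os

  def needs_render(po_file, rendered_dir):
      """True if the translation changed since we last rendered it (rough check)."""
      if not os.path.isdir(rendered_dir):
          return True                          # never rendered this language
      rendered = [os.path.getmtime(os.path.join(rendered_dir, f))
                  for f in os.listdir(rendered_dir)]
      if not rendered:
          return True                          # directory exists but is empty
      # re-render only if the PO file is newer than everything we published
      return os.path.getmtime(po_file) > min(rendered)

  if needs_render("po/de.po", "build/de/html"):
      print "German output is stale; rendering"

Which is, of course, exactly the kind of timestamp-based dependency
tracking make was invented for in the first place.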

> I'm going to cut myself off to try to answer the rest of my questions
> from any responses I get from this. Basically, IMHO moving away from the
> current Makefile system will make what I think we are trying to achieve
> with the FDP less of a big "what can we get done building on top of our
> current system" and more of a "oh yeah, now that is cool" situation.

I'm not yet convinced that throwing out Makefiles is the answer here.
I'd never argue that Python's not cool and flexible -- it is -- but
config files by themselves (and in the absence of any glue or other
helpers) offer a lot less value in cases where people aren't using our
Plone for reasons of bandwidth, interface, or what have you.

I don't mean to come off as stodgy, though.  If this is really a "no,
no, you can still do all that, PLUS" situation, then I'm all for it.
I'm certainly not married to Makefiles.  We do need to be careful,
though, that we are not letting the tools drive the need.  And of
course, the standard "this is just my US $0.02" applies.  I'm very much
out of my depth when it comes to all this CMF/Zope/Plone stuff, so at
some point I have to throw up my hands and say, "OK, whatever, just
please don't make everything suck." :-D

-- 
Paul W. Frields, RHCE                          http://paul.frields.org/
  gpg fingerprint: 3DA6 A0AC 6D58 FEC4 0233  5906 ACDB C937 BD11 3717
           Fedora Project: http://pfrields.fedorapeople.org/
  irc.freenode.net: stickster @ #fedora-docs, #fedora-devel, #fredlug


