On Mon, Sep 13, 2004 at 02:42:35PM -0400, Havoc Pennington wrote:
> Hi,
>
> Red Hat engineering is starting a new project we're calling
> "stateless Linux" for lack of a better name - some components of this
> are already in Rawhide, and others will be appearing shortly.
>
> We've been keeping the project a little bit quiet at first, but now
> we've written it up in some detail:
>
> - an overview document, available from
>   http://people.redhat.com/~hp/stateless/
>
> - a HOWTO document and a couple associated RPMs, available from
>   http://people.redhat.com/dmalcolm/stateless/
>
> There aren't many new RPMs for this, because stateless Linux isn't a
> single codebase or package, it's a set of changes across the
> distribution (you might think of it as a "philosophy"). Most of the
> changes are already in Rawhide (the highlights are mentioned in the
> StatelessLinux.pdf document).
>
> Appreciate feedback, especially from anyone who has time to try out
> the HOWTO. We expect the code to change quite a bit as issues and
> suggestions come in.
>
> Havoc

An experience that might be interesting: my LUG got involved in a Linux
outreach project that ended up using stateless clients, based on FC1.

The project required two different classes of clients: one for a 3D game
and another for a webpaper contest. The class of a client was deduced
from the PCI ID of its graphics adapter (i810 and NVidia for the gamers,
mga for the others).

The initrd didn't mount the NFS share as root and use a tmpfs in
conjunction with --bind mounts, as the "stateless Linux" project
currently does. Instead, the tmpfs was used as the root filesystem:
changeable files and directories were copied into the tmpfs, while the
non-changeable directories and files were symlinked back to the
read-only NFS mount. A slight complication arose from supporting both
i810+DRI acceleration and NVidia's glcore libraries. Still, the script
that created the root filesystem was under 20 lines, and boot
performance was good.

Updates to the image were easy: either a chroot and rpm/yum install on
the server, or a "mount -o remount,rw /nfs", make the appropriate
changes, and migrate the volatile files to their location.

A script was also running on each client that connected to the server
and reported its class. The server then had the ability to restart,
power down, or cycle the client. (Cycling meaning killing all user
processes, restoring the home dir from a known location, and logging
the user in again.)

Rough sketches of these pieces are appended after my signature.

Regards,
Luciano Rocha

--
Consciousness: that annoying time between naps.
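P.S. Here are minimal sketches of the pieces described above. They are
illustrative reconstructions, not the original scripts: host names,
paths, class names, and port numbers are assumptions on my part.

First, picking the client class from the graphics adapter. The PCI
vendor IDs are the real ones (Intel 8086, NVidia 10de, Matrox 102b);
the class names and the output file are illustrative:

    #!/bin/sh
    # Detect the client class from the first VGA adapter's PCI vendor ID.
    # lspci -n prints lines like "00:02.0 Class 0300: 8086:2562".
    case "$(lspci -n | grep '0300:' | head -n 1)" in
        *8086:*|*10de:*) CLASS=game ;;      # i810 or NVidia -> 3D game
        *102b:*)         CLASS=webpaper ;;  # Matrox mga -> webpaper contest
        *)               CLASS=webpaper ;;  # unknown adapter -> default
    esac
    echo "$CLASS" > /etc/client-class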
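The script that built the root filesystem in tmpfs was roughly this
shape. The export path, the mount points, and the exact list of
writable directories are illustrative; the trick is that the NFS image
is mounted *inside* the tmpfs, so the symlinks resolve once pivot_root
makes the tmpfs the real /:

    #!/bin/sh
    # Build the root filesystem in tmpfs (runs from the initrd).
    mount -t tmpfs none /sysroot
    mkdir /sysroot/nfs
    mount -t nfs -o ro,nolock server:/export/fc1 /sysroot/nfs
    cd /sysroot/nfs
    for d in *; do
        case "$d" in
            etc|var|root|home|tmp)   # changeable: copy into the tmpfs
                cp -a "$d" /sysroot/ ;;
            proc|mnt|nfs)            # mount points: just empty dirs
                mkdir -p "/sysroot/$d" ;;
            *)                       # non-changeable: symlink to the
                ln -s "/nfs/$d" "/sysroot/$d" ;;  # read-only NFS image
        esac
    done
    # pivot_root then makes /sysroot the new root filesystem.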
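Updating the image on the server was just the usual remount/chroot
dance (the export path is again illustrative):

    # On the server: make the export writable, update it in a chroot,
    # then flip it back to read-only for the clients.
    mount -o remount,rw /export/fc1
    chroot /export/fc1 yum -y update
    # ...migrate any regenerated volatile files (e.g. under etc/) back
    # to wherever the boot script picks up its writable copies...
    mount -o remount,ro /export/fc1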
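Finally, the client-side control script was along these lines. The
port, the command words, and the guest account are all assumptions; the
real protocol was site-specific:

    #!/bin/sh
    # Report our class to the server and poll for control commands.
    CLASS=$(cat /etc/client-class)
    while :; do
        CMD=$(echo "CLASS $CLASS" | nc server 9000)
        case "$CMD" in
            restart) reboot ;;
            halt)    poweroff ;;
            cycle)
                pkill -KILL -u guest    # kill all of the user's processes
                rm -rf /home/guest      # restore a pristine home dir
                cp -a /nfs/home/guest /home/guest
                ;;                      # autologin brings the user back in
        esac
        sleep 10
    done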