Ian Chapman wrote:
> Michael Nielsen wrote:
> Simply not true. Right this moment I'm copying podcasts to my mp3
> player which was mounted directly by Gnome. It's accessible under
> /media/disk. My external USB drive, the partitions for the other OSs I
> have installed, my SD cards, camera etc. are all available in a similar
> way.
Try mounting an NFS or CIFS share, which is what I was talking about;
sorry for being imprecise.
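For the record, something along these lines is what I had in mind (the
server name, share and mount point are just placeholders):

  # NFS export from a file server
  mount -t nfs fileserver:/export/media /mnt/media

  # CIFS/SMB share, prompting for the password
  mount -t cifs //fileserver/media /mnt/media -o username=myuser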
>> 4. Everything is thrown in huge collective directories, such as
>> /usr/bin, /usr/lib etc, and it is a huge mess, just like windows with
> I'm surprised at this one. This is a big plus for me. As someone who
> regularly has to deal with Solaris systems amongst others, I get tired
> of playing guess-the-location-of-the-binary, hunting the man page and
> setting an ever-increasing $PATH.
Actually, all the install script (for the application) had to do was
update the global login scripts to set the PATH variable; the user would
then see the system as if everything lived in one flat directory.
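Something along these lines (the application name is made up) is all the
install script had to add to the global login scripts:

  # /etc/profile.d/exampleapp.sh -- hypothetical snippet added by the installer
  PATH=$PATH:/opt/exampleapp/bin
  MANPATH=$MANPATH:/opt/exampleapp/man
  export PATH MANPATH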
It is a big disadvantage when testing, because the current scheme
prevents having Firefox-2 and Firefox-3 (or apache-1.3 and apache-2.2,
etc.) installed side by side under package management, because the
packages contain files that conflict. Similarly, on 64-bit systems,
where you need to install 32-bit compatibility software, the packages
usually conflict because of irrelevant documentation files.
>> 5. More and more services are bound up in the user interface, such as
>> the pulse audio, which is started by the GUI, this means if you use 2
>> user
> Unfortunately Linux has suffered countless audio 'standards' and many
> applications have been slow to catch up with the standard du jour, if
> at all. Probably the single thing I most hate having to deal with when
> it doesn't just work.
Yup, I find the audio absolutely frustrating; I have been trying to get
Skype running with some reasonable audio quality.
>> wrong. The original concept for unix was to install an application
>> such as firefox in either /opt or /usr/local/.
> IIRC /opt was intended for self-contained applications, which provide
> their own tree. They are often statically compiled or depend only upon
> libraries and files found in their own tree. They can be a complete
> PITA to deal with. /usr/local I believe was intended for installing
> software locally to a specific machine or a group of machines without
> interfering with system files or vice versa. Often the filesystems
> weren't even local but NFS mounted from a server or similar. Good
> package management, and the fact that in general most of the
> filesystems these days are local, have negated those reasons and
> /usr/local is frequently (mis)used in other ways.
Yes, /usr/local was intended for local variations on software, so that
you could install programs on that machine only, especially if you were
using TBOOT (I think it was called). However, the only reason that
spreading things out is hard to deal with is that the various paths
don't get updated properly.
I find it a huge problem when there is an issue with a system package
that I need to replace with a newer version: sometimes there are files
left behind that cause problems when you compile your own version.
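A quick way to spot such strays (the library and package names here are
only examples) is to ask rpm about the file:

  rpm -qf /usr/lib/libexample.so.1
  # "not owned by any package" means it was left behind by a manual install
  rpm -V examplepackage    # check whether the packaged files have been altered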
Also, with the "everything in one dir" philosophy, you cannot handle the
situation where a user (or administrator) compiles a newer version from
source while a version installed via the package manager is still
present. If you use the /opt approach, you can control it via symlinks,
and you can have multiple versions of the same program available. I've
always seen this as a strength of Unix and a weakness of Windows, where
it is virtually impossible to have (for instance) IE 6, IE 7 and IE 8
installed at the same time.
I just find it a shame that this limitation is being adopted by Linux.
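Roughly what I mean, with made-up version numbers and paths:

  /opt/firefox-2.0.0.20/   # the packaged build
  /opt/firefox-3.0.5/      # a locally compiled build
  ln -sfn /opt/firefox-3.0.5 /opt/firefox   # flip one symlink to switch
  /opt/firefox/firefox                      # scripts and menus use the stable path

Rolling back is the same one-line symlink change in the other direction.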
>> Such that the entire application was contained within a single
>> installation directory, and then to use the PATH and LD_LIBRARY_PATH
>> to allow the execution of the application.
> Exactly, I refer to the PATH hell earlier. Additionally
> LD_LIBRARY_PATH is considered a security risk by many, especially when
> many OSs have had alternatives for years.
You can also use /etc/ld.so.conf for a more secure option.
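For a self-contained tree under /opt, a per-application wrapper keeps
PATH and LD_LIBRARY_PATH out of the global environment, and a drop-in
under /etc/ld.so.conf.d avoids LD_LIBRARY_PATH altogether. A sketch,
with made-up paths:

  #!/bin/sh
  # /opt/exampleapp/bin/exampleapp.sh -- wrapper installed alongside the app
  PATH=/opt/exampleapp/bin:$PATH
  LD_LIBRARY_PATH=/opt/exampleapp/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  export PATH LD_LIBRARY_PATH
  exec /opt/exampleapp/bin/exampleapp "$@"

  # or register the libraries system wide instead:
  echo /opt/exampleapp/lib > /etc/ld.so.conf.d/exampleapp.conf
  ldconfig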
>> I'm really curious as to the reasoning for moving everything from the
>> standard configuration mechanisms to the GUI layer, breaking
>> compatibility with scripting and other standard UNIX features.
> Curious. I maintain many Linux servers without a GUI installed. I
> don't think I've had a requirement to configure anything that's
> required a GUI. If by moving, you mean providing GUI tools for
> configuration tasks that have traditionally required a command line,
> I'm all for it.
You can avoid using the GUI (which I prefer). However, what I mean is
that if you use the GUI to configure the network and you're not careful,
you can find that the configuration you performed is tied to your GUI
account, and when you reboot the settings are lost until you log in
again.
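The traditional, account-independent way is still a plain text file that
works with no one logged in at all (the device and addresses below are
only examples):

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  BOOTPROTO=none
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  GATEWAY=192.168.1.1
  ONBOOT=yes
  NM_CONTROLLED=no   # keep it out of the per-user NetworkManager settings

Bring it up with ifup eth0 (or a network service restart) and it
survives reboots regardless of who is sitting at the desktop.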
I'm all for GUI tools as an interface to the command-line tools, and I'd
applaud such an approach. What I don't like is the approach of creating
parallel configurations that are tied to the GUI.
--
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list