Chuck Anderson wrote:
On Fri, Nov 21, 2008 at 02:00:34PM -0600, Les Mikesell wrote:
blkid ?
What does that do for unformatted disks? And if it reports an md device, how
do I know that I shouldn't also use the underlying component devices it
reports separately? The same goes for lvm if it shows those, or for lvm on
top of md.
I don't follow what you are trying to accomplish here. All the data needed
to link the devices in the stack together is available via the existing
tools. It may not be convenient, but it is there.
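For example, walking the stack by hand (a rough sketch; assumes mdadm and
the LVM2 tools are installed, and the device names are illustrative):

  blkid /dev/sda1                  # TYPE="linux_raid_member" -> part of an md array
  cat /proc/mdstat                 # shows each md array and its component devices
  mdadm --detail /dev/md0          # maps an array back to its members
  pvs -o pv_name,vg_name           # which PVs (possibly md devices) feed which VG
  lvs -o lv_name,vg_name,devices   # which devices back each LV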
I want to be able to do anything that I might need to do to maintain a
server or copy it to a new one. For example, add a new scsi controller
of a different type than was there when the OS was installed and add
some new disk drives. How do I (a) know the driver module that needs to
be added and (b) identify the new blank disks attached? Or I might
encounter the same problem if a motherboard dies and I have to move the
disks to a different chassis that is not quite the same.
A tool like mii-tool should enumerate your NICs and show which have
link established - and any other useful information they can detect.
Then,
ethtool ?
How do I enumerate the devices with ethtool?
Ok, this isn't so great:
  for i in `ifconfig -a | cut -d' ' -f1 | sort -u`; do ethtool $i | grep -E '^Settings|Link detected'; done
but this works, and I verified that it shows the hardware link status
(in addition to the ifconfig UP/DOWN status).
ip link show
I only have a CentOS 5 box handy, but if mii-tool says this:
# mii-tool
eth0: negotiated 100baseTx-FD, link ok
SIOCGMIIPHY on 'eth1' failed: Resource temporarily unavailable
eth2: negotiated 100baseTx-FD, link ok
eth3: no link
shouldn't eth2 be UP here (it isn't, but link is):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:14:5e:17:05:10 brd ff:ff:ff:ff:ff:ff
3: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether 00:04:23:d8:df:00 brd ff:ff:ff:ff:ff:ff
4: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether 00:04:23:d8:df:01 brd ff:ff:ff:ff:ff:ff
5: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether 00:14:5e:17:05:12 brd ff:ff:ff:ff:ff:ff
6: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
And I wouldn't have guessed the ip command would be the thing to check
interfaces that aren't configured for IP.
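As far as I can tell, UP in those flags is the administrative state and
LOWER_UP is the carrier; eth2 shows neither because it has never been
brought up, so the kernel hasn't sampled its carrier. A sketch of checking
that:

  ip link set eth2 up               # bring it up administratively
  ip link show eth2                 # should now show UP,LOWER_UP if the cable is good
  cat /sys/class/net/eth2/carrier   # 1 = link detected, once the interface is up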
It probably wouldn't take too much to write a script around blkid and
ethtool to do this.
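Something along these lines, maybe - an untested sketch:

  #!/bin/sh
  # rough inventory: disks with no recognized signature, NICs and their link state
  for d in /dev/sd[a-z] /dev/hd[a-z]; do
      [ -b "$d" ] || continue
      blkid "$d" >/dev/null || echo "$d: no signature (blank?)"
  done
  for n in /sys/class/net/*; do
      i=`basename $n`
      [ "$i" = lo ] && continue
      echo "== $i =="
      ethtool "$i" | grep -E '^Settings for|Link detected'
  done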
Don't forget these steps are already a level up from where we need to
start. We may be moving an installed system to a machine where the drivers
don't match the hardware, or we may have added new NICs or disk
controllers and have to get the appropriate drivers loaded.
I'm not sure I follow what you are saying here. What was once kudzu's
job is now handled by hal/udev, which handle hardware changes
automatically, and the drivers are always there since they are all
compiled as modules for the kernel. Ease of automatic hardware
detection and driver loading is one reason why Fedora has resisted
trying to split up the kernel modules into separate packages, and why
all the X11 video card drivers are installed by default.
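The mechanism, roughly: every device exports a modalias string in sysfs,
udev hands it to modprobe, and modprobe matches it against modules.alias,
so no separate hardware database is needed. A sketch, with an illustrative
PCI address:

  cat /sys/bus/pci/devices/0000:02:00.0/modalias      # e.g. pci:v00001000d00000058sv...
  modprobe `cat /sys/bus/pci/devices/0000:02:00.0/modalias`   # loads whatever module claims it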
OK, but I may have moved the disk or replaced the motherboard and
controller, and now need to boot with a driver module that wasn't included
in the initrd.
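I know I can rebuild it when I know in advance what's needed - something
like this, assuming the stock mkinitrd and an illustrative module name -
but that doesn't help after the hardware has already changed:

  mkinitrd --with=mptsas /boot/initrd-`uname -r`.img `uname -r`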
Are you
against running hal/udev because they are userspace daemons that take
up memory?
No, memory is cheap enough. And disk is cheap enough that I wouldn't
mind having an initrd with all drivers available, if that's one of the
choices.
And the scripts need to accommodate things that can't be enumerated too,
like nfs/iscsi mounts and multiple vlans on an interface.
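A vlan subinterface, for instance, exists only because something configured
it; nothing on the wire announces it. A sketch, assuming the 8021q module
and an illustrative vlan id and address:

  vconfig add eth0 100                      # creates eth0.100
  ip addr add 192.168.100.1/24 dev eth0.100
  ip link set eth0.100 up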
Again, I'm not sure what you want from this. NFS/iSCSI mounts can't
be automatically discovered from within the installed system
image--the information about what to mount from where and to where
needs to come from somewhere.
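Even iSCSI discovery has to be seeded with a portal address before it can
report any targets - e.g. (illustrative portal address):

  iscsiadm -m discovery -t sendtargets -p 192.168.0.5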
I want a mechanism that handles things consistently. If I have to
identify and specify the nfs and iscsi targets, then I want to do it the
same way for local devices.
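In other words, one place that declares everything the same way - a sketch
of an /etc/fstab with illustrative names and labels:

  LABEL=/          /          ext3   defaults   1 1
  LABEL=/data      /data      ext3   defaults   1 2
  server:/export   /mnt/nfs   nfs    defaults   0 0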
Perhaps what you want is a centralized
configuration management system like Puppet or bcfg2? How does this
relate to the kernel namespace <--> physical device issue we've been
discussing above? How should it work in an ideal world?
Mostly what I want is the ability to ship a disk to a remote location,
have someone there plug it into a chassis I haven't seen but that is
similar to the one where the disk was made, and have it come up
automatically with at least one IP address on a predictable interface. Or,
if that is completely impossible, have commands simple enough to describe
over the phone to someone who has never seen linux before, so they can
find the correct interface both physically and logically and assign the
address. I hadn't seen bcfg2, but that's not quite what I want. A closer
'central' model would be drbl/clonezilla, but the machines aren't always
cloned directly on their target host and the eventual hosts don't
generally use dhcp. There are variations involving adding and moving
things, but coming up running after something changes is the starting
point.
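On CentOS 5 the main thing that fights this is the HWADDR= line that
anaconda writes into ifcfg-eth0, which pins the name to one specific NIC;
newer udev-based systems add a persistent-net rules file on top of that. A
sketch of what has to be scrubbed before shipping the disk (paths vary by
release):

  grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0   # remove to unpin eth0
  rm -f /etc/udev/rules.d/70-persistent-net.rules          # where present; regenerated at boot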
--
Les Mikesell
lesmikesell@xxxxxxxxx
--
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list