Re: time synch between two systems: HOWTO?

From: "Brian T. Brunner" <brian.t.brunner@xxxxxxxxxxxxxxx>

> Thanks to jdow and rda, things have become operational.  I'm not happy,
> but that's life.
> 
> I'm not normally paid to watch the clock, but that is what I had to do!
> 
> Not presented in the CAPITAL LETTERS it merits is that:
> 
>   1: under ideal conditions, ntpd takes 15 minutes (plus/minus)
> to start showing an effect.

Run "ntpdate" first then start the daemon. As the set of peers gets
more and more discipline data the clocks will start out with proper
drift and ageing parameters and need less time to quit hunting for
optimum sync.
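
By hand that amounts to something like the following sketch. The
address here is only a placeholder for your master, and ntpd must not
be running when ntpdate is invoked since both want UDP port 123:

    service ntpd stop          # make sure the daemon isn't holding port 123
    ntpdate 192.168.1.1        # one-time coarse step to the master's clock
    service ntpd start         # now let the daemon discipline the clock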

Presuming you are using the "/etc/init.d/ntpd" script to start the
daemon AFTER you have the network up and the firewall configured on
each of the client machines, the "ntpdate" program is run automatically
if you have filled in /etc/ntp/step-tickers. Simply put in the address
of your master system, either as a dotted quad or a normal DNS name.
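
The file needs nothing fancier than your master's address on a line by
itself, for instance (again, a placeholder address):

    192.168.1.1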

>   2: a gaggle of peers don't synch up, somebody must be a master.  This
> is a flaw to me, implying the gaggle must be asymmetric.

This is not a defect. It is an absolute necessity. If all the systems
are peers then they'll go wandering off after each other and never
reach any form of equilibrium. That's a mathematical necessity for
such a configuration, given that ntp is designed to lock to real time,
not some consensus time such as you'd get without a master-plus-clients
system. Furthermore, with a consensus time drifting around chaotically
the whole group would never lock up together very well.

The proper setup, if you are worried about the disappearance of the
master, is to maintain a three-stratum network. Stratum 3, let's say,
is your master, considering it may sometimes be locked externally and
sometimes not. Stratum 4 is your backup master(s), which lock to the
master when it is operational. And the clients are all stratum 5.
Each of the clients has both the master and the backup master in its
server list. That way if either the master or the backup master dies
you still have some form of group discipline for the whole system.
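
In ntp.conf terms that comes out roughly like this, a sketch with
made-up addresses rather than a drop-in config:

    # On each client -- list both time sources
    server 192.168.1.1        # master
    server 192.168.1.2        # backup master
    driftfile /etc/ntp/drift

    # On the backup master -- follow the master while it is alive
    server 192.168.1.1
    driftfile /etc/ntp/drift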

> Clue as to why I view/do things thus: I manufacture comms/alarm 
> equipment for use in gas and oil platforms, mines, refineries, and 
> factories.  These are RHLinux-based systems sans operator/manager.  
> There may be one system, or 8 in a site that must keep in mutual synch.  
> I want a shrink-wrap exploder for our system install folks that is 
> correct for all combinations, and that leaves the spare CPU (if any) 
> ready as a plug-n-play replacement for any system in the site.

Can't have it correct for all boxes on a system. Many aspects of a
system must be unique to the box or be configured from a server or
master machine. For example, the IP address cannot be common to all
boxes. So they either need customization on installation or there
needs to be a DHCP server of some sort. If each machine has a DHCP
server active on it, I imagine the results would be amusing for those
experienced admins watching the poor fellows trying to make it work.

If you use a standard IP host address configuration you may be able
to configure ntp automatically in your installation scripts or via a
clever customization of the /etc/init.d/ntpd file. Use x.x.x.1 for
the master, x.x.x.2 for the secondary master, and .3 through .254 for
the clients. That way when you set the system up it can self-configure.
This requires some thought, because replacing a formerly functioning
master with a new one that has not first been brought into line with
the secondary master for a while may lead to hunting in the network as
it adapts to the new master's clock offset. That is probably not a
good thing.
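
A sketch of the self-configuration idea, assuming eth0 and the
x.x.x.1/x.x.x.2 convention above. A real script would also carry over
the rest of a stock ntp.conf (restrict lines and so forth):

    ADDR=`ifconfig eth0 | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p'`
    NET=`echo $ADDR | cut -d. -f1-3`
    LAST=`echo $ADDR | cut -d. -f4`
    if [ "$LAST" != "1" ]; then          # the master keeps its hand-built config
        echo "driftfile /etc/ntp/drift"   > /etc/ntp.conf
        echo "server $NET.1"             >> /etc/ntp.conf   # follow the master
        if [ "$LAST" != "2" ]; then
            echo "server $NET.2"         >> /etc/ntp.conf   # clients also list the backup
        fi
    fi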

> I finally got there.  ntp is deployed. Thanks.

And that is a good thing, too.

{^_^}


-- 
Shrike-list mailing list
Shrike-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/shrike-list
