Brian T. Brunner wrote:
> jdow, your advice is precisely what I have attempted to do.
> Success eludes, clues elude.
> BTW, if I flag any system with 'stratum' in /etc/ntp.conf,
> I find in /var/log/messages that 'stratum' is an unrecognized keyword
> and the line is ignored.
> So I set my /etc/ntp.conf as follows:
>   restrict 192.168.1.0
>   server sys1
>   peer sys2
>   peer sys3
> Then I restarted ntpd on all systems (service ntpd restart).
> Then I changed one system's time, and synching never happened.

ntpd will gradually pull the time in, tracking it with great precision.
Also, the initial time should be close to the server's time
(say, within half an hour). If the times are too far apart, the ntp
daemon will refuse to adjust the clock. The gradual adjustment is what
you really want, to avoid hitting programs with a large time change.
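(If you want to see how far off a machine is before deciding what to do,
you can query the server without touching the clock, with something like
"ntpdate -q -u sys1"; -q only reports the offset it would apply, and -u
uses an unprivileged port so it doesn't collide with a running ntpd.)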
It usually takes at least 8 minutes before ntpd will start adjusting the
time. It needs to use a long time baseline - it's trying for sub-millisecond
precision. That's why you didn't see an immediate change in time.
Use "ntpdate sys1" to initially sync time to the server (after ntpd has
started on the server). This will force-set the system time on the
local system to the server. Note that this cannot be done while
ntpd is running. This a convenience thing you usually do once by hand.
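Putting that together, the one-time sequence on a client looks something
like this (same init scripts as above; "sys1" is just your server's name):

  service ntpd stop    # ntpdate can't bind port 123 while ntpd is running
  ntpdate sys1         # force-set the local clock from the server
  service ntpd start   # let ntpd take over and keep it tracking

After that, you shouldn't need ntpdate on that machine again.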
You can immediately verify that ntpd is tracking using "ntpq". For example:
15% ntpq localhost
ntpq> peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ahab.rinconres. cisco1-xxx.x     3 u  489 1024  377    0.611    1.571   0.882
 LOCAL(0)        LOCAL(0)        10 l   45   64  377    0.000    0.000   0.015
ntpq> assoc
ind assID  status  conf reach auth condition  last_event cnt
===========================================================
  1 11068   9634   yes   yes  none  sys.peer   reachable  3
  2 11069   9034   yes   yes  none   reject    reachable  3
ntpq> rv 11068
status=9634 reach, conf, sel_sys.peer, 3 events, event_reach,
srcadr=ahab.rinconres.com, srcport=123, dstadr=194.9.200.146,
dstport=123, leap=00, stratum=3, precision=-11, rootdelay=263.733,
rootdispersion=128.311, refid=cisco1-mhk.kansas.net, reach=377,
unreach=0, hmode=3, pmode=4, hpoll=10, ppoll=10, flash=00 ok, keyid=0,
offset=1.571, delay=0.611, dispersion=15.223, jitter=0.882,
reftime=c34a8314.7849a000 Wed, Oct 29 2003 11:19:00.469,
org=c34a83f7.98e15000 Wed, Oct 29 2003 11:22:47.597,
rec=c34a83f7.988e581c Wed, Oct 29 2003 11:22:47.595,
xmt=c34a83f7.98664d3b Wed, Oct 29 2003 11:22:47.595,
filtdelay= 0.61 0.59 1.09 0.60 0.02 0.62 0.75 0.57,
filtoffset= 1.57 2.45 1.21 1.64 0.94 1.26 -0.65 1.78,
filtdisp= 0.50 15.89 31.27 46.64 62.02 77.38 85.04 92.74
ntpq> quit
At the bottom (the filtdelay/filtoffset/filtdisp lines), you'll see that
ntpd needs 8 samples before it makes a decision. At the start, these are
all set to ridiculous numbers. The association table's "condition" will be
"insane" until 8 samples have been taken. Looking at the top table, my
current polling interval on ahab is 1024 seconds. It had counted up to
489; when "when" hits 1024, it samples again.
When you start off, it defaults to sampling every 64 seconds,
so after about 8 minutes it starts to operate. The delay, offset, and
jitter numbers are in milliseconds. The initial offset can be high,
unless you used ntpdate to initially set the time. After an hour, it
should have pulled in the local clock and be tracking.
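If you don't want the interactive prompt, the same tables should also be
available non-interactively, e.g.:

  ntpq -p localhost                  # same as the "peers" query above
  ntpq -c associations localhost     # same as "assoc"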
Also note that it's not necessary for the client machines to be peers.
Just the server line will suffice. Referencing the other machines as
peers will require each ntpd to check all the servers AND peers.
My client systems look like:
server my_ntp_server
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
driftfile /etc/ntp.drift
The second and third lines have ntpd flywheel off the local clock, but
apply a drift factor that it has calculated (and cached) in the driftfile.
ntpd will end up polling the one server every 1024 seconds, and will
gracefully survive server outages.
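For completeness, a matching server-side ntp.conf might look roughly like
this (the upstream hostname and the restrict line are only examples for a
192.168.1.x LAN; without an outside reference the server just serves its
own clock at stratum 10):

  server some.upstream.ntp.server    # optional outside reference, if reachable
  server 127.127.1.0                 # local clock as fallback
  fudge  127.127.1.0 stratum 10
  driftfile /etc/ntp.drift
  restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

The restrict line lets the LAN clients query the server without being able
to reconfigure it.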
We require ntp for all our NFS and build servers. Otherwise we get
make warnings and/or inconsistent builds.
Hope this helps,
-Bob Arendt