ntp in cluster mode

Postby Przem » Tue, 06 Jun 2006 22:23:24 GMT

Hi

I need advice on configuring NTP on a cluster.

For now it uses /etc/inet/ntp.conf.cluster with server 127.127.1.0 as the
time server.
But because of that, this cluster is at the moment 12 minutes faster than
my whole environment.

How can I configure it to use an external NTP server?
As far as I can see, /etc/init.d/xntpd.cluster stops when it finds
/etc/inet/ntp.conf, so I assume I should still use
/etc/inet/ntp.conf.cluster.

Can I simply change the line "server 127.127.1.0" to e.g. "server
ntp.server.my"? Will the nodes still be synchronized then?

And should I do this on both nodes, or on one only?
Or maybe just use /etc/init.d/xntpd with /etc/inet/ntp.conf as the conf file,
but add "peer clusternode1-priv prefer" and "peer clusternode2-priv" there?

My actual conf files are (both the same):
server 127.127.1.0
peer clusternode1-priv prefer
peer clusternode2-priv
driftfile /var/ntp/ntp.drift
filegen peerstats file peerstats type day enable
filegen loopstats file loopstats type day enable
filegen clockstats file clockstats type day enable

How should they look after I switch to an external NTP source?


Thanks

-- 
	Przemyslaw Krol

Re: ntp in cluster mode

Postby Logan Shaw » Wed, 07 Jun 2006 13:59:02 GMT



I haven't set up a cluster, although I have used ntp a fair bit.

My advice would be that you need to get your clock calibrated
on every server before you set them up to try to run independently.
The most straightforward way I can think of to do that is to run
ntpd against an external server for a week or so, then let it build
an ntp.drift file that it can use to compensate for the natural
drift of the system's clock.


If you have Internet connectivity available, the easiest thing would
be to just synchronize your servers to external servers.  If you
really want to ensure that your local machines stay in sync with
each other, you can solve that in one of at least two ways:

(1) synchronize each internal server with enough external servers
     that the chances of one of them being far off is really small,
     just because it would require too many failures (of external
     servers) for that to be likely to happen, or
(2) synchronize one server with external servers, then synchronize
     your other servers to it; then your internal servers will not
     all be on equal footing because one will be the master, but
     they should still stay in sync for a while if you lose internet
     connectivity for a day or two (a rough sketch of this is below).
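
For what it's worth, here is a minimal sketch of option (2) for a two-node
setup, assuming a reasonably recent ntpd; the pool server names are only
examples, and the private hostname is the one from your existing config:

    # ntp.conf on the node acting as master: follow external servers
    server 0.pool.ntp.org
    server 1.pool.ntp.org
    server 2.pool.ntp.org
    driftfile /var/ntp/ntp.drift

    # ntp.conf on the other node: follow the master over the
    # private interconnect
    server clusternode1-priv prefer
    driftfile /var/ntp/ntp.drift

With option (1), every node would instead get the same set of three or
more external "server" lines.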

In general, keep in mind that just setting up a cluster of servers
trying to sync to each other will not give you reliable and
accurate timekeeping.  Computers make OK clocks but not good ones,
and in order to keep in sync, you need to ultimately be drawing
your information from a good clock (either from a public ntp server
or from a hardware clock, like a GPS receiver).


Just look at what they suggest doing at  http://www.**--****.com/ 
If you are in Europe, you should be able to put

    server 0.europe.pool.ntp.org
    server 1.europe.pool.ntp.org
    server 2.europe.pool.ntp.org

and remove all the "peer" lines, and that should probably do it
for you.
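
Applied to your ntp.conf.cluster, that would look something like the sketch
below (keeping your driftfile and filegen lines, dropping the local clock
and the peer lines). Whether Sun Cluster insists on keeping the peer lines
in ntp.conf.cluster is something I would double-check in its documentation:

    server 0.europe.pool.ntp.org
    server 1.europe.pool.ntp.org
    server 2.europe.pool.ntp.org
    driftfile /var/ntp/ntp.drift
    filegen peerstats file peerstats type day enable
    filegen loopstats file loopstats type day enable
    filegen clockstats file clockstats type day enable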

   - Logan

Similar Threads:

1. NTP n00b: NTP & Cluster

Hi all. I have 2 Netra 440s in a cluster. I think I have them clocking
off of the same server, but that's not what I see when I look at the
peers. I'm fairly new to NTP so I'm not sure if my ntp.conf files are
set up properly. Actually, I'd be surprised if they were. Both ntp.conf
files have
-----------------------------
#Servers:
server 127.127.1.0
server 216.152.162.29
server 216.152.162.30

#Peers:
peer clusternode1-priv prefer
peer clusternode2-priv
peer clusternode3-priv
peer clusternode4-priv
peer clusternode5-priv
peer clusternode6-priv
peer clusternode7-priv
peer clusternode8-priv
-----------------------------

So, right off, I guess I don't need nodes 3-8 in there since there are
only 2 nodes. Correct?

Next, when I look at ntpq -p I get:
-----------------------------
Node1:
     remote           refid      st t when poll reach   delay   offset    disp
==============================================================================
 LOCAL(0)        LOCAL(0)         3 l   56   64  377     0.00    0.000   10.01
*216.152.162.29  192.38.7.240     2 u  286 1024  377   212.98   96.627   89.90
+216.152.162.30  192.5.41.209     2 u  567 1024  377     2.53    6.402    9.90
 clusternode1-pr 0.0.0.0         16 -    - 1024    0     0.00    0.000 16000.0
 clusternode2-pr clusternode1-pr  4 u    -   64  377    -2.76  -14.217    8.18

Node2:
     remote           refid      st t when poll reach   delay   offset    disp
==============================================================================
 LOCAL(0)        LOCAL(0)         3 l   33   64  377     0.00    0.000   10.01
 216.152.162.29  0.0.0.0         16 -    - 1024    0     0.00    0.000 16000.0
 216.152.162.30  0.0.0.0         16 -    - 1024    0     0.00    0.000 16000.0
*clusternode1-pr 216.152.162.29   3 u   55   64  376     1.11   16.159    9.00
 clusternode2-pr 0.0.0.0         16 -    - 1024    0     0.00    0.000 16000.0
-----------------------------

It looks to me like node 1 is clocking off of the network and node 2 is
clocking off of node 1. The ntp.conf file mentions that "All nodes
within the cluster must synchronize within the cluster as peers.  Time
synchronization amongst the nodes is more important than the accuracy
of the agreed upon time." which would explain why node 2 is clocking
off of node 1, but would that mean node 2 would not even attempt to
clock from the network?
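
In other words, I'm guessing the trimmed-down 2-node version should look
something like the following (keeping the local clock line only as a
last-resort fallback; this is just my guess, not a verified template):
-----------------------------
#Servers:
server 127.127.1.0
server 216.152.162.29
server 216.152.162.30

#Peers:
peer clusternode1-priv prefer
peer clusternode2-priv
-----------------------------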

Thanks in advance.

Ben..

2. question regarding NTP configuration for clusters, and "cluster time" stability

3. question regarding NTP configuration for clusters, and "cluster time" stability

I have a question that seems somewhat similar to one that was just
asked, but there are a couple of differences, so I figured I'd ask mine
as well. Apologies for the long post, but I'm trying to skip the "more
info please" phase. :-)

I have a product that is comprised of a cluster of Linux nodes, with
the cluster ranging in size from 4 to over 100 nodes. To date, we've
used the version of NTP included in the OS (SLES 10) to maintain
internal time synchronization in the cluster, but without associations
to any external NTP servers or any hardware-based time sources. While
this has worked satisfactorily, it does allow for a gradual drift from
UTC over time, so we'd like to extend the product to eliminate this.

What this means in terms of requirements is that we still must maintain
a stable internal "cluster time" with sub-second tolerance. This should
be trivial for NTP to maintain, as that is a rather loose tolerance
compared to many others I've seen discussed. The requirement to match
true UTC is even looser, as all we're trying to do is enable the use of
an external reference to stop what can be a perpetual drift. Just to
give it a number, though, let's say we'd like it to be within 60
seconds of UTC.

The topology of our cluster has two tiers. All of the nodes are
interconnected over a private network, and some subset of the nodes
also have external connections to the LAN where the cluster is
deployed. The subset is always at least 2 nodes, and can be as high as
25% of the total number of nodes.

Prior to extending the product to allow use of an external (to the
cluster) NTP server or servers, those nodes with external connections
were configured as peer servers to the internal cluster, with all other
nodes pure clients.

After adding support for external NTP servers, we kept something like
the same config: the nodes with external connections were still servers
to the internal network, and were peers of each other. But now they
were also clients of one or more external servers. I understand that
three or more would be better, and we can do that, but we still have to
ensure stability of the internal cluster time even if a reduced set of
servers (including the null set) were reachable.

Our configuration did not work, because we were able to cause
instability in the internal cluster time with perturbations in the
external server. And we have to guarantee stability even with bad
inputs.

What happened was that some (but not all) of those externally connected
nodes deemed the external server a false ticker and stopped believing
it. But some of the other externally connected nodes did not, and as a
result there was time divergence between members of this group. It is
this divergence that I'm referring to when I speak of a lack of
stability.

So before I go into configuration details, is there a known "best way"
to handle the sort of requirements I described? It sounds like orphan
mode might provide the functionality I'm looking for, but I figured
that in parallel with empirical experimentation I'd pursue the
analytical approach and ask people who know more than me. :)
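
In case it helps frame the question, the bare-bones shape of what I
understand an orphan-mode configuration on the externally connected
nodes might look like is below (hostnames are placeholders, and I have
not yet verified any of this against the ntpd shipped with SLES 10):

    # external references (placeholders)
    server ntp1.example.com
    server ntp2.example.com
    server ntp3.example.com

    # the other externally connected nodes
    peer gateway-b.cluster.internal
    peer gateway-c.cluster.internal

    # if every external server becomes unreachable, the peers elect an
    # orphan parent among themselves instead of free-running separately
    tos orphan 9

    driftfile /var/lib/ntp/drift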

thanks,
Tim

4. [ntp:questions] Advice on sync clock between cluster of linux v2.6 to +-1us

5. NTP in a Linux cluster

Folks,

Could any NTP experts suggest how I should best configure NTP in a
loosely-coupled Linux cluster, where intra-cluster synchronization is
the top priority?

I have done some reading about NTP, but can't seem to find an
authoritative guide to using NTP in a cluster environment.  My company
sells systems that run on small clusters of Linux servers - typically
from 2 to 16 servers ("nodes"), each running RedHat Linux.  All nodes
in a cluster are equal.  We don't use any third-party clustering
software, just the standard OS and our own applications.

The main priorities are:

1)  Time must be kept closely synchronized between all nodes in the
    cluster.

2)  If one or more nodes become unavailable, synchronization must
    still be maintained.

3)  Time must never go backwards, or jump - all changes must be by
    "slewing".

4)  Time should track one or more external NTP servers as closely as
    possible, while observing (1) to (3) above.

The key requirement here is the as-close-as-possible synchronization
between nodes in the cluster; that is far more important than closely
tracking the external NTP server(s).

How would an NTP guru go about configuring a cluster to meet those
requirements?

It would be preferable if the configuration of all nodes could be
identical.  A solution that requires two or more node types ("master"
and "slave" perhaps) with different settings would be acceptable, of
course, if the preferred one-size-fits-all approach is impossible.

Should all nodes have configuration entries for all other nodes as
"peers"?

Should all nodes (or only one) have configuration entries for the
external NTP server(s)?

If all nodes have both peer and external server entries, how can I
arrange that keeping in sync with peers is seen as more important than
keeping in sync with the external servers?
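
To make the question concrete, the kind of identical-everywhere
configuration I have in mind is roughly the sketch below (hostnames are
placeholders, and I don't know whether a full mesh of peers behaves
well at 16 nodes):

    # one or more external references
    server ntp1.example.com
    server ntp2.example.com

    # every other node in the cluster, one "peer" line each
    peer node01.cluster.internal
    peer node02.cluster.internal

    driftfile /var/lib/ntp/drift

    # for requirement (3), run ntpd with -x so that large corrections
    # are slewed rather than stepped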

Questions, questions...

Any and all answers would be gratefully received!

Thanks in advance,

Lorcan Hamill

6. ntp cluster problem

7. configuring ntp in sun v880 in a cluster

8. [tip:timers/ntp] ntp: adjust SHIFT_PLL to improve NTP convergence


