On 06/06/17 07:24, Antony Stone wrote:
On Monday 05 June 2017 11:50:42 erdosain9 wrote:
Hi. From what I understand, the TTL of DNS names is important.
Yes, TTL is important. It tells caching DNS servers how long they may
remember the last answer they got from the authoritative server, before they
need to ask the authoritative server again.
So, I wanted to know when the squid server would ask for resolution again.
Well, that's a different question.
Q: When will Squid ask [its configured name server] for resolution again?
A: When it needs to know the answer again.
Q: When will the [recursive] DNS server which Squid asks, ask for resolution
again?
A: When the TTL has expired.
That is, how long was the record kept.
That is the TTL.
;; ANSWER SECTION:
yahoo.com. 590 IN A 98.138.253.109
;; ANSWER SECTION:
pijamasurf.com. 299 IN A 104.24.25.112
I wish I could set a bigger TTL, so that one address does not have to be
re-resolved every "little while".
Why? What does it matter to you that your DNS server refreshes its results
from Yahoo no more than 30 minutes after the last time? (Your example of 590
also shows that you had clearly asked your local name server for yahoo.com
1210 seconds previously.) If you want to know the real TTL, ask an
authoritative name server:
$ dig @ns1.yahoo.com. yahoo.com
;; ANSWER SECTION:
yahoo.com. 1800 IN A 98.139.183.24
If you only ask your local caching server, all you are finding out is how much
longer its cached answer is valid for, before it will ask (the authoritative
servers) again.
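The arithmetic behind those numbers, using the TTLs from the dig output above: the zone's real TTL is 1800, the cached answer showed 590, so the cache must have last fetched the record 1800 - 590 = 1210 seconds before the query.

```shell
# Zone TTL as served by the authoritative server (dig @ns1.yahoo.com.)
zone_ttl=1800
# Remaining TTL in the cached answer from the local resolver
cached_ttl=590
# Seconds since the local resolver last fetched the record
echo $(( zone_ttl - cached_ttl ))   # prints 1210
```
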
For example pijamasurf.com = 299 and yahoo = 590, so
who manages that time?
Whoever maintains the zone files (DNS records) for those domains.
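For illustration, in a BIND-style zone file the TTL is a per-record field (or a zone-wide default set with $TTL) chosen by the zone's maintainer. A hypothetical sketch, using the value from the authoritative dig output above:

```
$TTL 1800                               ; default TTL for this zone
yahoo.com.   1800  IN  A  98.139.183.24
```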
How can I set a bigger time to live?
You cannot (and should not).
Or does this make no sense?
Why do you want to change the TTL on somebody else's domain?
What (do you think) is the benefit for you?
Maybe I did not understand Amos's comment.
Please repeat the comment which led you to trying to change the TTL of other
people's domains - maybe that will help us better understand what you are
trying to achieve.
I suspect it was this comment:
The core issue is the speed at which that service rotates its response
IP lists, which is directly related to each request going to an entirely
different server in their farm. Simply having a single (and maybe more
sane regarding TTLs) resolver as a network's focal point for the
traffic before it reaches out to the Google service seems to bring
sanity back to the performance.
What I meant there was using a resolver that obeys the domain TTL it is
given and stores the result until that TTL expires.
The way the Google service load-balances does not allow that to happen -
your users' queries will reach a different server to your Squid query and
to your test query later - all of which probably have different TTL
values coming back (as you saw in those dig results: 590 != 1800 and 299 !=
1800 ... the Google service is just a farm of recursive resolvers, all
with different cache contents). By having your own resolver between you
and the Google service/farm, that resolver takes only _one_ TTL period at
a time from Google - and delivers that result to all your clients and
Squid etc. until its single TTL period expires. After that it asks Google
again, gets just one TTL to follow, and so on.
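As a concrete sketch of that setup - assuming unbound here, but any caching resolver that honours TTLs will do - a minimal local forwarder sitting between your network and the Google service might look like:

```
# /etc/unbound/unbound.conf (sketch - adjust addresses for your network)
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
forward-zone:
    name: "."                 # forward all queries
    forward-addr: 8.8.8.8     # the Google public DNS farm
    forward-addr: 8.8.4.4
```

Then point Squid at it with "dns_nameservers 127.0.0.1" in squid.conf, so Squid and all your clients follow the single TTL period that one resolver took from Google.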
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users