
Editing the Linux back end for DNS in UTM9?

OK, so we are located in a 3rd world country and, for various reasons that I won't go into, the local ISP's DNS server is untrustworthy. So we set the DNS forwarders on our UTM9 and on our internal Windows 2008 servers to the Google Public DNS servers.

As these Google Public DNS servers are not located in our country, the entire lookup process is slower than in your average environment, and we used to get a lot of "host not found" errors. Once I extended the recursion timeout on our Windows servers, those went away; now I primarily get those errors from the UTM9 instead.
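
(For reference, the Windows-side change can be scripted. A rough sketch, assuming the standard DNS Server role and the dnscmd /Config /RecursionTimeout property, would be:

dnscmd /Config /RecursionTimeout 15
net stop dns
net start dns

The 15-second value is only an illustration, not necessarily what we used, and the service restart may not be strictly required.)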

Sophos Support has told me there is no way to increase the DNS timeout on the UTM9, but I assume that they are talking about via the GUI.

I'm semi-confident in Linux, and was thinking I would extend the DNS timeout in resolv.conf or somewhere similar.
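
(Assuming the box honors the standard glibc resolv.conf options, the kind of change I have in mind is something like this, with example values only:

options timeout:10 attempts:3
nameserver 8.8.8.8
nameserver 8.8.4.4

Here timeout is the per-query wait in seconds, which glibc caps at 30, and attempts is the number of tries per nameserver. What I don't know is whether UTM9's configuration daemon would simply overwrite a hand edit, which is part of what I'm asking.)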

Has anyone had experience with the UTM9 and making changes to the conf files? Are there any "DO NOT DO THIS" warnings that I need to know about?

Any advice, similar situations?

 

(Yes, I have configured our UTM9 DNS as per the DNS recommendations, but our situation is different from the standard one.)

 

Thanks



  • Hi, Heidi, and welcome to the UTM Community!

    Running tcpdump -vvv -s 0 -l -n port 53, you may find [bad udp cksum 6279!] or similar errors when you do a DNS lookup.  These could be caused by hardware offloading to your External NIC.  If that's the case, what do you see when you run:

    ethtool -k eth0 | grep on

    You may want to disable TX/RX checksum offloading: ethtool -K eth0 tx off rx off

    Any luck with that?

    Cheers - Bob
    PS You might try namebench - Open-source DNS Benchmark Utility - Google Project Hosting.

  • Any luck with that, Heidi?

    Cheers -  Bob

  • Thank you. I think it helped a little; it set me down a path of investigating and reading. It's difficult to catch the issue while it's occurring, and I usually only know in hindsight. I also know that end users have stopped reporting the issue because they are just resigned to it.

    I've come to the conclusion that we are just going to have long DNS request timeouts.

    So I've set up a morning batch file that does an nslookup of the key sites, so our DNS servers pick up fresh records with a recent TTL and our clients are not left waiting for the DNS server or the SG550 to do a new lookup. (Something along the lines of the sketch at the end of this reply.)

    It's a clumsy workaround, but I think it will do for now.
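
    For anyone curious, a minimal sketch of that kind of pre-warming batch file is below. The host names are placeholders rather than our actual list, and it assumes the machine running it points at the internal DNS servers you want to warm:

    @echo off
    rem Hypothetical cache-warming script: resolve key names each morning
    rem so the internal DNS servers hold records with a fresh TTL.
    for %%H in (example.com mail.example.com crm.example.com) do (
        nslookup %%H >nul 2>&1
    )

    Scheduled with Task Scheduler before the workday starts, it keeps the cache warm without any change on the UTM itself.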