How do you block/allow domains with revolving/rotating or distributed/geo-dependent DNS?

Our system needs to allow outgoing HTTP/S connections to our Amazon S3 services.  In the past, I was able to create a "DNS Group" object that kept track of the 700+ IP addresses associated with "s3.amazonaws.com", but after recent firmware updates the DNS Group object is reduced to just one address.

I asked Sophos Support about this behavior, and they responded that the DNS Group object was never designed to track all IPs for a domain name with revolving/rotating or distributed/geo-dependent DNS.

Besides entering all of S3's 300 IP blocks manually as network objects (and updating them as they change), I was wondering whether anyone has a solution that remedies this behavior?

How do you block/allow a domain name that has revolving or distributed DNS?

Cheers!



  • Sam, if you'd like to add this suggestion at Ideas, post a link to it here and I'll vote for and comment on it.

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • Am I understanding correctly that there is no way to permit packets that the firewall is blocking, based on a regex URL? For example,

    Default DROP TCP 192.168.0.143 : 5601→ 54.91.149.109 : 2000

    Where 54.91.149.109 is *.amazonaws.com

  • REGEX only works for URLs in Web Protection.  In Network Protection, that's not possible.  In part, it's because of the way DNS works around the world.

    What does that packet represent?

    Cheers - Bob
  • To verify, it is not possible to create a firewall rule that accomplishes:

    Permit from local network host X, source port 1:65535, destination port 9445, destination host www.rainforestcloud.com

    where www.rainforestcloud.com is an AWS host with a variable IP address?

    Thx

  • I used http://centralops.net/co/DomainDossier.aspx to see that www.rainforestcloud.com has two IPs, and the name server switches the order every few seconds.  Instead of using a DNS Host definition, try your same firewall rule with a DNS Group definition using www.rainforestcloud.com.
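
    If you'd rather check that from a script than from the Domain Dossier page, here is a rough equivalent (my own sketch, not a UTM tool; it uses the OS resolver, which may cache or reorder answers, so an nslookup against the authoritative server is more precise):

        import socket
        import time

        NAME = "www.rainforestcloud.com"   # substitute any FQDN of interest

        seen = set()
        for i in range(5):
            addrs = [ai[4][0] for ai in socket.getaddrinfo(
                NAME, 443, socket.AF_INET, socket.SOCK_STREAM)]
            print(f"query {i + 1}: {addrs}")   # watch whether the order rotates
            seen.update(addrs)
            time.sleep(5)

        # A stable set with a rotating order means round-robin over published
        # records, which a DNS Group can track; an ever-growing set is the
        # s3.amazonaws.com situation from the original post.
        print("distinct addresses seen:", sorted(seen))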

    Cheers - Bob
  • My documentation (9.4) says that DNS Group is intended for one name that has multiple resource records.   That seems to be different from your situation, one name that returns different results on subsequent queries without ever publishing all of the possibilities.

    You are right that you cannot create a Cisco-style network object with a long list of IPs, but you can create a firewall rule with that long list.   If you need that list again for another firewall rule, you can use the Clone option to avoid typing the list a second time.   The real headache for you is that the list has to be typed into the UI rather than being something that can be prepared offline and loaded from a text file (though the list itself can at least be generated offline; see the sketch below).
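
    If the target really is Amazon S3, AWS publishes its current address ranges as JSON at a well-known URL, so a short script can dump the S3 CIDR blocks for pasting or diffing. A minimal sketch of that approach (not a UTM feature):

        import json
        import urllib.request

        URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

        with urllib.request.urlopen(URL) as resp:
            data = json.load(resp)

        # Keep only the S3 service; each entry is a CIDR block that could be
        # pasted into a firewall rule, or diffed against yesterday's output
        # to spot churn.
        s3_blocks = sorted(p["ip_prefix"] for p in data["prefixes"]
                           if p["service"] == "S3")
        print("\n".join(s3_blocks))
        print(len(s3_blocks), "CIDR blocks")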

    Web Protection is probably the best part of UTM; you should use it.

    I have found Webserver Protection / WAF difficult to get exactly right (effective but without false positives).   Do you have any insight to share from your experience tuning your WAF configuration?

  • Doug, I think you scanned my answer too quickly - that FQDN has two A-records.  The Amazon name server delivers both every time, but changes the order every few seconds, so a DNS Group is exactly what he needs.

    Cheers - Bob
  • Thanks for clarifying, Bob.

    Back to Bob's original recommendation:   If you use Standard-Mode web filtering, your server sends the URL to UTM, and UTM does the DNS resolution.   It is entirely possible to create a web filtering policy that says "These specific servers can only send web traffic to these specific remote hosts", which is the result that I think you want.   The allow/block decisions will be made before the DNS lookup occurs, so the 700 or so IP addresses do not matter at all.

    Yes, other protocols will require other strategies, but when you have a Swiss Army Knife, you need to use the tool that is optimized for the problem.

    Yes, there are firewalls that can create a network object list more easily.   You can put a relatively inexpensive firewall in front of UTM and have a good solution.   I don't perceive the firewall subsystem as UTM's best feature.  

    UTM was a winner for me because (a) it was available at a price that I could get approved by management, and (b) it did a lot of useful things for me.   It was a given that it would not be best-in-class on all of its features.   I have frustrations with the product, but its effect on our perimeter defenses has been extraordinarily beneficial.

  • I may be late with my reply, but I just now found this thread and did some lookups (see my screenshot). It's definitely Amazon that's doing the "changing" IP addresses.

    Whenever I nslookup s3-1-w.amazonaws.com, I get different IP addresses (and only one each time).

    Doing the same for, e.g., mail.office365.com gives a whole bunch of IPs. So this looks like it has nothing to do with UTM DNS Groups....
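
    For anyone who wants to repeat that comparison without nslookup, here is a quick Python equivalent (my sketch; it uses the system resolver, so the counts will vary by location and caching):

        import socket

        for name in ("s3-1-w.amazonaws.com", "mail.office365.com"):
            addrs = {ai[4][0] for ai in socket.getaddrinfo(
                name, 443, socket.AF_INET, socket.SOCK_STREAM)}
            print(f"{name}: {len(addrs)} A record(s) -> {sorted(addrs)}")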


    Managing several Sophos UTMs and Sophos XGs both at work and at some home locations; dedicated to continuously improving IT security and happy to help others with their IT-security challenges.

    Sometimes I post some useful tips on my blog, see blog.pijnappels.eu/category/sophos/ for Sophos related posts.

  • To add to your post, from more of a technical perspective, I think of these features this way:

    Web Filtering (a paid-for feature in the UTM that we don't have) allows the admin to restrict web protocols, HTTP/S (and possibly others?), based on the request packet header values written by the client's user agent.  It therefore has the ability to do a string match on those URLs / domains, so the DNS resolution of those domains / subdomains (and the resulting L3 IP addresses of the remote host[s]) is moot.  This provides flexibility when restricting the protocols that the Web Filtering daemon supports (again, HTTP/S, possibly FTP, maybe others?).

    Network Protection (a paid feature that we do have) allows the admin to restrict ANY port based solely on the L3 IP addresses of the source & destination.  This is the generic network firewall that we're all used to; it does not ascertain or care what protocol you're using, only the network port you're trying to access.  This allows a bit of flexibility when running daemons on non-standard ports, but GREATLY LIMITS flexibility against today's DNS resolution strategies; think distributed or geo-dependent DNS, rotating A records, or multiple A records.

    When entering Host / Network / DNS Definitions as network objects in the UTM, the UTM does the DNS resolution on its own and populates those network objects with the real IP addresses it resolves.  This is where I found another quirk that's worth mentioning.

    1) If your client (internal laptop / server) has DNS server X defined in its network settings that differs from the UTM's DNS server Y (Network Services -> DNS -> Forwarders), you may run into an issue where your client resolves one IP and sends its packet to the UTM, where that IP may not [yet] be stored in the network object and therefore doesn't match your firewall rule.  This is especially true for distributed or geo-dependent DNS resolution (think global CDNs) and is magnified if, for example, X and Y are on separate continents (see the sketch after item 2).

    2) In a similar vein, if you have rotating DNS resolution with hundreds of IP addresses (think Amazon S3), it is certainly possible, in fact probable, that the client will create its packet with an IP address that the UTM hasn't [yet] resolved and stored in its network object definition.  I have proof of this scenario in my UTM: a) the network object for my DNS Group "s3.amazonaws.com" is 700 IP addresses large; b) when the network object is first created, and before the UTM resolves most of the 700 addresses, my client packets have a slim chance of matching the firewall rule.  Even when the object is "fully populated", the rotating nature of the DNS resolution is not tracked accurately or promptly by the UTM.  If AWS drops a whole /24 from the DNS resolution for s3.amazonaws.com and adds a different /24, it will take some time for the UTM to store all of those changes.  During this time my firewall rule essentially contains stale IPs; it becomes likely that my client will form a packet with a destination that the UTM doesn't yet have.. and then I wait.. and I wait.. and I wait until the UTM has finally re-resolved s3.amazonaws.com and updated its massive 700-address network object.
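
    Quirk 1 is easy to reproduce outside the UTM.  A minimal sketch (assuming the third-party dnspython package, and using two public resolvers purely as stand-ins for the client's server X and the UTM's forwarder Y):

        import dns.resolver

        def a_records(name, server):
            """Return the set of A records for name as seen via server."""
            res = dns.resolver.Resolver(configure=False)
            res.nameservers = [server]
            return {rr.to_text() for rr in res.resolve(name, "A")}

        NAME = "s3.amazonaws.com"
        client_view = a_records(NAME, "8.8.8.8")  # stand-in for client DNS "X"
        utm_view = a_records(NAME, "1.1.1.1")     # stand-in for UTM DNS "Y"

        print("client resolves:", sorted(client_view))
        print("UTM resolves:   ", sorted(utm_view))
        print("overlap:        ", sorted(client_view & utm_view))
        # An empty overlap is exactly the failure mode above: the client builds
        # its packet toward an address the UTM never stored, so the rule that
        # uses the UTM's network object cannot match.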

    My post here was intended to point out our use case: we are trying to restrict egress connections, not just for Web protocols, using the standard L3 firewall.  Fortunately, the UTM database does support DNS Group objects, which do "store" DNS resolutions for some time.  But there are difficulties with rotating DNS or geo-dependent DNS.
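
    The staleness window from quirk 2 is measurable with the same kind of lookup loop (again my own sketch with the system resolver; "snapshot" here plays the role of the UTM's stored network object):

        import socket
        import time

        def resolve(name):
            return {ai[4][0] for ai in socket.getaddrinfo(
                name, 443, socket.AF_INET, socket.SOCK_STREAM)}

        NAME = "s3.amazonaws.com"
        snapshot = resolve(NAME)       # what the firewall object "knows" now

        for minute in range(10):
            time.sleep(60)
            fresh = resolve(NAME)
            missed = fresh - snapshot  # answers a client could be handed right
                                       # now that the stale snapshot would drop
            print(f"t+{minute + 1}m fresh={sorted(fresh)} "
                  f"missed={sorted(missed) or 'none'}")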

    I wonder if we should be talking with the IETF about standardizing these DNS strategies.  Obviously, organizations are using creative DNS resolution strategies to benefit their business cases, and I vote that we give security & firewall admins a front-row seat in any standardization discussion for these creative strategies.

    Cheers!

    SAM