This discussion has been locked.
You can no longer post new replies to this discussion. If you have a question you can start a new discussion

Sophos XG HTTPS Hostname Blocking

Hello,

 

I seem to be having issues connecting to the HTTPS web pages for ESXi and vCenter. My PCs are in the zone LAN and my HTTPS websites are in the zone LAB. I can connect to the sites fine using IP. When I try to connect to the sites using the hostnames, I get a 502 Bad Gateway error. Interestingly, if I place a PC in the LAB zone and try to connect from there, it works using both IP and hostname.
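
As a first sanity check, it can be worth confirming what the LAN client actually resolves those hostnames to, since a by-IP-works/by-hostname-fails difference could also come from DNS. Below is a minimal sketch in Python; `localhost` is just a stand-in for the real ESXi/vCenter hostnames:

```python
import socket

def resolve_ips(hostname):
    """Return the set of IPv4 addresses a hostname resolves to locally."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {info[4][0] for info in infos}

# Substitute your actual ESXi/vCenter hostname here. If the returned IPs
# differ from the servers' real addresses, the problem is DNS resolution
# rather than the firewall policy itself.
print(resolve_ips("localhost"))  # e.g. {'127.0.0.1'}
```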

 

I have a firewall policy set up to allow all traffic to flow from LAN to LAB and also from LAB to LAN. The policy includes no scanning of any kind and no NAT.

 

Interestingly, I have noticed that when I get the 502 Bad Gateway error, the SSL certificate shown has been issued by the Sophos firewall.

 

 

I believe that the Sophos is intercepting and blocking the traffic, but I can't work out why, because the firewall policy doesn't include any scanning.
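
One way to confirm the interception is to check who issued the certificate the client actually receives, for example with `openssl s_client -connect <host>:443 -servername <host>`, or programmatically. Here is a hedged Python sketch of the comparison logic; the certificate dict is a hypothetical example shaped like `ssl.SSLSocket.getpeercert()` output, and both CN values are made up:

```python
def issuer_common_name(cert):
    """Extract the issuer commonName from a dict shaped like the output of
    ssl.SSLSocket.getpeercert() on a completed TLS handshake."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def looks_intercepted(cert, expected_issuer_cn):
    """True if the issuer CN is not the CA you expect to have signed the
    certificate, which suggests a middlebox re-signed it in transit."""
    return issuer_common_name(cert) != expected_issuer_cn

# Hypothetical cert dict mimicking a firewall-re-signed certificate:
sophos_cert = {"issuer": ((("commonName", "Sophos_CA_XXXX"),),)}
print(looks_intercepted(sophos_cert, "Lab-Root-CA"))  # True
```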

 

 

Any help would be greatly appreciated.

 

Regards

Oliver 



  • Strange behaviour, Oliver.

    Are you upgrading from Cyberoam or another XG version?

    Also, did you try rebooting the XG?

    I would also recommend regenerating the certificate and the CA (both built-in) from the Certificates menu.

    Let us know.

    Thanks

  • Hello,

    It's a brand new install of v16. I have tried rebooting, but it made no difference. I have also tried regenerating the appliance certificate, but I'm still getting the issue.

    Regards

    Oliver Knights

  • Oliver,

    try disabling pharming protection under Web Protection > Protection > Malware Scanning > Advanced settings.

    I am not sure if there is a way to bypass pharming protection for a single IP/URL/subnet.

    Thanks

  • Hello 

     

    Disabling pharming protection seems to make it work.

     

    Thanks

    Oliver

  • Thanks Oliver.

    We need an option inside the XG where pharming protection can be disabled under Web > Exceptions. Disabling pharming protection entirely is not a safe approach.

    Thanks

    I know some of this was explained in another thread https://community.sophos.com/products/xg-firewall/f/sophos-xg-firewall-general-discussion/84901/web-proxy-always-active-and-interferes-with-dns without going into details as to why. I still don't understand why ALL DNS traffic is being hijacked when pharming protection is enabled, especially when there are no web protection rules in place.

  • 1)

    It is not "all DNS" but "double-check the DNS for all web traffic".  And yes, it occurs even when there are no web protection rules in place.

    2)

    The fact that transparent mode web traffic goes through the proxy is a design issue that we are aware of and are working on a resolution for, with no ETA.

    3)

    If you are using web protection and you do need more granular control of Pharming Protection, please raise it as a feature request.  As with any feature request, the more voices, the higher the priority.

    4)

    If turning off Pharming Protection still does not resolve the problem, and the problem is specifically with HTTPS, and you are not doing anything with Application Control, then there is one additional thing you can do.

    SSH into the box as admin (or use Console in the admin menu).

    Choose option 4 (Device Console) and run:

    system application_classification microapp-discovery off
    system application_classification off

    This will disable a few things around microapp discovery in HTTPS traffic, affecting both Application detection and the web proxy.
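
    The "double-check the DNS" behavior from point 1 can be sketched conceptually: before allowing a web connection through, re-resolve the hostname independently and verify that the address the client is connecting to is among the answers. This is only an illustration of the idea, not Sophos's actual implementation:

```python
import socket

def pharming_check(hostname, client_dest_ip):
    """Conceptual sketch: re-resolve the hostname and check that the
    client's destination IP is among the answers; a mismatch would be
    treated as possible DNS poisoning and the connection blocked."""
    answers = {info[4][0] for info in
               socket.getaddrinfo(hostname, None, family=socket.AF_INET)}
    return client_dest_ip in answers

print(pharming_check("localhost", "127.0.0.1"))  # True on a typical host
```

    If the proxy's resolver sees different records than the client's resolver (split DNS, internal-only zones), legitimate traffic fails a check like this, which matches the symptom described in this thread.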

    Thanks, Michael, for the detailed info.

    For security reasons, pharming protection should always be ON unless there is a false positive, as in this case. We need an option on the XG where we can disable pharming protection for specific URLs/IPs.

    Do we need to open a feature request for it, or is there already something planned?

    Thanks

  • There may or may not be a feature request, I don't know.

    But I do know that there are no current plans to put in additional configuration for this.

    Feature requests that come in via any avenue (partner feedback, beta, the feature request site, etc.) are weighted by how important they are: how many customers (and of what size) are requesting it, available workarounds, and so on.  The more data you can add, the higher the priority.  In other words, if this is important to you (over other features), then don't think "oh, they already have a feature request, so I don't need to do anything".

    Hi Michael, and as always, thanks for taking the time to explain the thought process. I am not going to argue the pros and cons of FORCED pharming protection, but here is my problem with the whole thing.

    1. It always ends up with me complaining about logging. Why is the intercepted traffic not logged? Maybe it is being logged and I have never seen it... if that is the case, where should I be looking for such logs? Web filtering? Firewall? System? Moreover, when/if the logging is improved in v17, will this traffic definitely be logged somewhere?

    2. Unless you had taken the time to explain this, probably nobody else on these boards knew about this "hidden feature", nor is there any documentation on the actual behavior of pharming protection. From the Sophos XG web interface reference guide v16: "Protect users against pharming and other domain name poisoning attacks by repeating DNS lookups before connecting". What does that mean? Repeat a query to a forwarder even if the TTL has not expired? Query the root servers? Query a DNSSEC-aware forwarder? XG/UTM will make a DNS request for any traffic through httpproxy anyway, so what does pharming protection do in addition to those initial DNS queries, if you don't count the unintended DNS hijacking behavior?

    3. I realize that I can turn off pharming protection if it is an issue within my network, but that is not what I am arguing. When pharming protection was developed for XG, someone decided to add a DNAT rule for port 53 for all HTTP traffic, even when the traffic is not passing through the web filtering mechanism. This is clearly a bug and shouldn't need a feature request to fix.

    Thanks again for the time that you spend on these boards. I for one really appreciate ALL your feedback.

    Regards

    Bill