
Allow specific URLs but block non-URL 443 traffic

I want to allow a PC to access specific web sites and nothing else. I have set up a firewall rule on XG 18 that allows HTTP/S services and set a web policy.

What happens if a program tries to make a direct connection from that PC to another computer on those ports? Won't that be allowed by the allowed services? How do you stop this from happening so only the allowed URLs will pass?

I've read loads of posts about web filtering but can't find the answer.



This thread was automatically locked due to age.
  • Hello JasP,

    So you only want to allow for example https://thiswebsite.com and nothing else?

    If you have Web Policy with the specific URLs applied then the XG shouldn't allow access to any other website or connection. Additionally, you could create a Firewall rule below this one and block port 443. 

    Regards,

  • If you have Web Policy with the specific URLs applied then the XG shouldn't allow access to any other website or connection.

    This doesn't appear to be correct.

    I spent some time testing this. I created a rule at the top of the firewall for one server that allowed HTTP and HTTPS and had a web policy that allowed access to one URL. I then created a rule just below it that blocked all other traffic from that server. I then used a utility called Packet Sender running on my test server to send and receive traffic to another external server running Packet Sender and they could send traffic backwards and forwards on ports 80 and 443 without being blocked. What was even more disconcerting was that, despite having logging turned on for both rules, the traffic didn't appear in the firewall log at all!

    My rules are below:

  • I just picked an existing policy to use in the test. It doesn't really matter for the purposes of this test as I am not checking URL blocking, I'm checking what happens to non-URL, direct connections over port 80 and 443.

    FWIW Cloudflare offer both DNS over HTTPS and DNS over TLS and if you try https://cloudflare-dns.com/ in a browser you will see it does resolve to a web page.

  • Hi Jasp,

I might be getting a bit old and grey; I understand DNS using TLS, but using a URL to look up DNS requires DNS in the first place, which fails the pub test very badly.

Anyway, have you looked at this page on the XG?

     

    Ian

  • We're all getting old and grey! Software that does DNS over HTTPS tends to have two approaches. Either it allows an IP rather than a URL or it has 'fallback' port 53 DNS settings that allow it to resolve the URL before switching to DNS over HTTPS.

    I have had a look at the page you indicated but I already have the most secure settings set.

I would suggest you try a different URL, one that doesn't change the access mode during your testing.

Blocking specific sites and allowing specific sites does work, but you need to examine web policies and default settings. Also, you might need IPS to get classification etc., and add an application setting of 'block all' in your test allow rule.

    Ian

For the purpose of this test, the web policy is immaterial; it's just a simple, single-URL policy that I put in the firewall rule so I had a policy there. The test server isn't doing DNS at all, so there is no change of access mode. I could put any web policy in there and it wouldn't make a difference to what I'm testing. The original server that I found this problem on actually uses a different web policy.

    rfcat_vk said:
    Blocking specific sites and allowing specific sites does work

    It does if you're accessing URLs. I've tested it multiple times and tested it again on the test server. I can open a browser and access https://cloudflare-dns.com/ fine and can't access any other web sites. But the rules I posted do not block direct access to any IP on HTTP/S ports. If you just want to allow access to specific URLs and nothing else, this does not work and potentially is a very big problem.

I did look at App Control in the firewall rule but it was no help. I'd already tried an app setting of 'Block All'. This had the effect of stopping direct IP access, but it also prevented the permitted URLs from working. I briefly looked at having an app control that allowed just the application I wanted, but it hasn't been detected (I've been running this XG with IPS for a couple of months now) and there is no way to add your own application definitions.

Having a firewall rule that allows access to specific URLs and nothing else is a common requirement, and my current configuration doesn't achieve that; I've yet to see a solution that does. I'm particularly interested in what Emmanuel comes back with as he seems to think that what I have done should work. He may be mistaken, I may be missing something, or there is a bug in the implementation of URL filtering with big security implications.

  • Hi Jasp,
you are doing something wrong. I have a number of rules that provide access to specific URLs for specific devices.

    ian

  • rfcat_vk said:

    Hi Jasp,
you are doing something wrong. I have a number of rules that provide access to specific URLs for specific devices.

    ian

That part works for me too (URL permit/block isn't the issue). But have you tested whether those rules also allow a direct IP connection (not via a URL) to unwanted IPs? I suggest you try it because you might get a nasty shock! You can do a quick and dirty test by trying to open a telnet session on port 443 to the IP of a known web server. You won't get anything in the telnet window, but if it connects then you are seeing the same issue as I am. I went further and used a utility running both on my local server and a remote server to pass test messages between the two.

    Since I started using XG (from the first v18 beta) I have wondered about this question of direct connections. I had guessed that if you set a web policy that the rule would only allow those URLs and nothing else because that is what made sense from a firewall point of view. I've been really busy so only just got around to asking the question here to confirm my understanding is correct and actually testing it myself. It is not working as I expected!
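For anyone wanting to repeat the quick connect test without telnet, here is a minimal Python sketch (my own illustration; the commented-out address is a placeholder for the IP of a known web server behind your rule). As in the telnet test, a successful connect only proves the TCP handshake passed the firewall, not that payload data would survive inspection.

```python
import socket

def port_open(host, port, timeout=5):
    """Return True if a raw TCP handshake to host:port completes.

    A successful connect only proves the SYN/ACK got through the
    firewall; it says nothing about whether payload data would
    survive proxy/DPI inspection afterwards."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (203.0.113.10 is a placeholder for a known web server's IP):
# print(port_open("203.0.113.10", 443))
```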

  • Hi Jasp,

no telnet on macOS, so I set up a firewall rule that allowed https to 34.215.119.153

    Connection to 34.215.119.153 port 443 [tcp/https] succeeded!

    But, I don't have any blocks for IP addresses in my active policies.

    I will add block IP to the active policy and update this post.

    Ian

Update: no effect on 80 or 443. The worst thing is it seems to bring back memories of an issue I raised many years ago, but I cannot remember the answer. So something is wrong with the proxy reading the format of a supplied IP address. Somebody dropped the ball in programming and in QA testing, it would seem to me.

Hello JasP,

    Actually I realized that I said to create the block rule below the current one, but it should be above.

    Also, make sure SSL/TLS is enabled.  Let me know if this changes the behavior.

    Regards,

  • Putting the block rule above the allow rule does exactly what I would expect it to do - it blocks all HTTP/S traffic, nothing passes to the rule below so it can't allow the desired traffic to pass.

    Have you actually tried this yourself Emmanuel? I don't think you realise the seriousness of this bug. As it stands, if you have a rule that allows specific URLs, it also allows any malicious piece of software to open a direct IP connection on port 80/443 to any IP address and send whatever it wants.

    I initially assumed I was doing something wrong but that doesn't appear to be the case and you now have another experienced user confirming the same problem. I suggest you get someone in your QA department to look at this as a matter of urgency! This is a major security flaw.


  • I decided to open a 'critical' support case for this (#10017096).

    Spent nearly two hours on a remote session with a support engineer who appeared to know his stuff. He can't explain why it is possible to make direct 443/80 connections and, whilst he won't confirm it is a bug at this stage, it has been escalated to a level 2 engineer to have a look at.

  • Hello Jasp,

    Thank you for the followup.

Actually, I wasn't able to replicate it until now.

I will be following the Case you provided. I will also be checking whether we have any other open cases with the same issue, or previous cases/resolutions.

    Regards,

    Emmanuel Osorio

  • Hi JasP,

    I'm glad to see that people are testing the firewall/proxy and I support you doing so.  I am a QA who works directly in this area.

     

    Let us first make sure your settings are correct.

    Please go into the XG "device console" (this is not ssh / advanced shell) and do:
    show http_proxy

    You should have:
    HTTP relay_invalid_http_traffic: off

    If it is on then turn it off with
    set http_proxy relay_invalid_http_traffic off

    In Web > General Settings
    Block unrecognized SSL Protocols.
    Please make sure that it is on.

    The interpretation of these two are slightly different whether you use the web proxy or DPI mode, but both are involved with blocking invalid traffic.
The relay_invalid_http_traffic setting will block traffic on these ports that does not conform to the HTTP standard.
    The block unrecognized SSL Protocols will block SSL/TLS traffic that is not supported.

    As you were involved in the v18 EAP, you may remember that changing these settings away from their defaults was part of workarounds for EAP issues. However the default for all new installations is that relay_invalid_http_traffic is off. I suspect it is on in your system, because you likely set it so during EAP.

    As for the testing, opening a connection with no data is not proof of anything, as this is just a TCP connection with no traffic. It might be valid HTTP or it might be an SSL handshake or it might be arbitrary binary data. The question is what is sent on the TCP connection afterwards, whether the data is sent on to the destination server, whether the far server sends data back that reaches the client, and what is virus scanned.

I don't know Packet Sender; we usually use netcat to do this, and it can also act as the client and server, though I'll use telnet since you mentioned it. If you want a full end-to-end test you also need a netcat server running on the WAN side (eg on the other side of your rule) so you can monitor what the server receives/sends. Or use tcpdump to watch both the WAN and LAN sides of the firewall.

Note: If you are using telnet/netcat there are differences in how the data is sent in packets depending on whether you are typing it in (I think it sends a packet when you do a CR) or piping in the data (eg `cat myrequest.txt | nc www.example.com 80` will send a packet after max packet size or EOF). Also make sure that if you are manually typing HTTP there are two CRs to indicate the end of the headers.


    So if you do:

    telnet www.example.com 80
    GET / HTTP/1.1
    Host: www.example.com


    The client will get back the webpage.


    If you do
    telnet www.example.com 80
    Hello buddy

    The client will get dropped.


    In the second test:
If you are using the web proxy, the far server does not get a connection or any data. The web proxy will only make a connection to the web server if it has a full and complete valid request.
    If you are using DPI mode, the far server gets a connection. IIRC whether the far server receives data depends on whether the data is sent in one packet or many. The DPI mode allows the connection to the web server and inspects the packets. It will drop the connection only after it receives enough to know it is invalid.

If you retry with relay_invalid_http_traffic on, the second test should not be dropped by the XG (though the far server might drop it).

     


    > if you have a rule that allows specific URLs, it also allows any malicious piece of software to open a direct IP connection on port 80/443 to any IP address and send whatever it wants.

    That statement is true if relay_invalid_http_traffic is set to on. Port 80/443 traffic that does not conform to the HTTP specification is allowed.

    If relay_invalid_http_traffic is off (default) then traffic is blocked, with slight variations based on proxy/DPI and packet breakdown.
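The end-to-end check Michael describes (client, firewall, far server, watching what each side actually receives) can be sketched with Python's socket module instead of netcat. This is my own illustration: both ends run on loopback purely to show the mechanics, so no firewall is in the path; in a real test the server end would sit on the WAN side of the rule.

```python
import socket
import threading

received = []

def tiny_server(srv):
    """Accept one connection, record whatever bytes arrive, reply."""
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(4096)
        received.append(data)
        conn.sendall(b"got %d bytes\n" % len(data))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # loopback stands in for the far server
srv.listen(1)
threading.Thread(target=tiny_server, args=(srv,), daemon=True).start()

# Client side: send arbitrary, non-HTTP bytes, then read the reply.
with socket.create_connection(srv.getsockname()) as c:
    c.sendall(b"Hello buddy\r\n")
    reply = c.recv(4096)
srv.close()

print(received[0])  # b'Hello buddy\r\n' -- what the far server saw
print(reply)        # b'got 13 bytes\n' -- the reply made it back
```

With a firewall in between, the interesting question is which of the two prints changes: a valid request such as `b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n"` (note the blank line ending the headers, matching the two-CR point above) should pass, while the arbitrary bytes should be dropped by the proxy or DPI engine.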

  • Hi Michael

    Thanks for taking the time to respond and your detailed explanation.

For those who don’t want to take the time to read through the whole post: my issue appears to be due to not having ‘Block unrecognized SSL protocols’ ticked, and looking at a couple of new installs I have just done, this is not the default setting, so it will need to be changed.

My initial intention was just to check I was setting up URL filtering correctly. 18-EAP was the first version of XG I have used and I’m still learning the ins and outs (previously a UTM user). I’d read countless articles on the forum but they didn’t answer the question, and there were no authoritative answers to my own post. The behaviour I was expecting was for the web policy URLs to be allowed and nothing else on ports 80 and 443.

    As I had no answers, I decided to test if my understanding was correct and discovered it wasn’t. The telnet test was just a quick first stab. If I got no connection then I would have known it was blocked and wouldn’t need to go further. I knew I had to do something more extensive to test it properly but didn’t have a tool to do the job so found Packet Sender. I wanted something with a graphical interface that was easy to use. It basically does a similar job to Netcat (although not as sophisticated). I was running it on both ends of the test so I could generate responses to test traffic.

    I’ve retested everything in the light of your explanation and everything works as expected. Despite using 18-EAP, I never changed the setting for ‘relay_invalid_http_traffic’ (I just decided to wait for a more mature release). However, I didn’t have ‘Block unrecognized SSL protocols’ ticked so I enabled that.

    I was testing with a small amount of data (16 bytes), so with that in mind and your explanation, I observed the following. Using DPI, the sending server was still able to send the data to the receiving server (because it was a small amount of data) but the sending server did not receive the reply. If I increased the amount of data, it was blocked. With Web Proxy, the sending server was unable to send any data.

    Reflections on this experience.

    Restricting access to specific URLs and nothing else is a common requirement of a firewall. It shouldn’t be this hard to setup correctly! When you set a web policy it should be implicit in that rule that nothing else should be allowed to pass for that rule, it shouldn’t be necessary to set a separate global configuration setting (‘Block unrecognized SSL protocols’) for this to work as intended – especially when that setting is disabled by default. A lot of people are going to get this wrong and assume they are receiving protection that they aren’t. Why can’t you change the program logic so that when you set a web policy, it automatically sets Block unrecognized SSL protocols for that rule (whatever the global setting)?

    At very least, this seems to desperately need a KB. Even Sophos’s own support engineers don’t seem to know how to set this up properly. The person I worked with on this case seemed very capable and thorough and he spent two hours looking at this and couldn’t get it to work correctly. What hope have the rest of us got?!

  • Hi Michael,

    my XG is now blocking IP address access using https without turning on

    "In Web > General Settings
    Block unrecognized SSL Protocols.
    Please make sure that it is on."

But if I use nc I can still show a connection, though no actual data is transferred.

    I am trying to remember why the WEB setting was not enabled and I suspect it was because some sites are not classified correctly.

    I will leave the setting enabled to see what applications fail during today.

I would have thought that an IP address going through a firewall rule with HTTP/S scanning enabled and a block-IP-address policy in place should not connect, regardless of the application using HTTPS?

    Ian

  • JasP said:

My initial intention was just to check I was setting up URL filtering correctly. 18-EAP was the first version of XG I have used and I’m still learning the ins and outs (previously a UTM user). I’d read countless articles on the forum but they didn’t answer the question, and there were no authoritative answers to my own post. The behaviour I was expecting was for the web policy URLs to be allowed and nothing else on ports 80 and 443.

     

Just so you know, the behavior of proxy mode on XG and the UTM is the same.

    JasP said:

    I was testing with a small amount of data (16 bytes), so with that in mind and your explanation, I observed the following. Using DPI, the sending server was still able to send the data to the receiving server (because it was a small amount of data) but the sending server did not receive the reply. If I increased the amount of data, it was blocked. With Web Proxy, the sending server was unable to send any data.

    I don't know the Packet Sender app, but there are several factors.  The first is whether packets are actually sent, another is whether enough data has been sent to identify the traffic. 

Let's take your 16 bytes of data.  The maximum size of a TCP packet on a typical Ethernet link is 1500 bytes.  Did it send one packet containing one byte, then another packet containing one byte, etc.?  Or does it "save up" bytes until it has enough to send?  If you are using netcat, it does not actually send a packet of data until you hit enter.  So you can type in 100 bytes of data and think it is working even though 0 packets and 0 bytes have actually been sent.  Then once it is received by the proxy, off the top of my head I cannot recall if it waits for the entire first line of the header or will reject if the first several bytes don't conform to a HTTP header.

     

    JasP said:

    Restricting access to specific URLs and nothing else is a common requirement of a firewall. It shouldn’t be this hard to setup correctly! When you set a web policy it should be implicit in that rule that nothing else should be allowed to pass for that rule, it shouldn’t be necessary to set a separate global configuration setting (‘Block unrecognized SSL protocols’) for this to work as intended – especially when that setting is disabled by default. A lot of people are going to get this wrong and assume they are receiving protection that they aren’t. Why can’t you change the program logic so that when you set a web policy, it automatically sets Block unrecognized SSL protocols for that rule (whatever the global setting)?

    Most settings are a balance between compatibility and security.  Turning on the "Block unrecognized protocols" breaks some applications and IoT devices.  No default is going to please everyone or every situation.

     

     
  • Michael Dunn said:

    Most settings are a balance between compatibility and security.  Turning on the "Block unrecognized protocols" breaks some applications and IoT devices.  No default is going to please everyone or every situation.

    So if the default is not going to please everyone or every situation, allow your rules to make exceptions to the default setting! If "Block unrecognized protocols" breaks some applications, why does it have to be a global setting only? Why can't it be set (or overridden) for an individual rule? Then you can have rules for your IoT devices that bypass this setting while your other endpoints remain fully protected.

Neither of the two scenarios you present is, in my opinion, acceptable in a modern firewall. It isn't acceptable that some devices just can't communicate (Block unrecognized protocols on). It isn't acceptable that malicious traffic can be transmitted over port 443 because you need that endpoint to be able to access specific URLs to function correctly (Block unrecognized protocols off). So why not make the setting more granular, as I have suggested, rather than a 'one size fits all' global setting?

    Michael Dunn said:

    I don't know the Packet Sender app, but there are several factors.  The first is whether packets are actually sent, another is whether enough data has been sent to identify the traffic.

    I don't want to get too hung up on the testing because I agree with you that XG performs the way you describe it does.

I did another test with a larger data set, from which I conclude that Packet Sender doesn't send any data until you hit 'enter'. As before, with Web Proxy and Block unrecognized protocols on, no data gets sent at all. With DPI and Block unrecognized protocols on, 1496 bytes get sent before the traffic is blocked, i.e. one packet.
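The write-boundary question discussed above is easy to observe directly. This loopback sketch (my own illustration, not Packet Sender's or netcat's behaviour) makes four small writes and shows that TCP delivers the byte stream intact regardless of how the writes were split into packets, which is why "how many bytes per packet" depends on the sending tool and the kernel, not on the application's write sizes:

```python
import socket

# Loopback client/server pair; no firewall is in the path here, this
# only demonstrates how TCP treats write boundaries.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

# Four separate 4-byte writes, 16 bytes in total.
for chunk in (b"aaaa", b"bbbb", b"cccc", b"dddd"):
    cli.send(chunk)
cli.close()  # EOF tells the reader no more data is coming

# TCP preserves the bytes but not the write boundaries: the kernel
# may deliver them as one segment or several.
data = b""
while True:
    part = conn.recv(4096)
    if not part:
        break
    data += part
conn.close()
srv.close()

print(len(data))  # 16 -- every byte arrives, however it was packetized
```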