Battle.net Client Can't Download Updates

Hi. My battle.net client (Blizzard games like WoW, StarCraft, etc.) cannot update if HTTP scanning is turned on. It works if I disable HTTP scanning in the web filter. I do not have HTTPS scanning turned on. I have tried bypassing these sites from being scanned, and it still does not work. Here's a great list of regex exceptions from UTM 9 that doesn't seem to work with XG Firewall.

https://community.sophos.com/products/unified-threat-management/f/55/p/45070/161552



  • I have exactly the same problem with Origin client. It does not want to download any updates with HTTP/HTTPS scanning enabled. Bypass rules do not seem to do anything in this case.

  • Test the AV bypass rule using www.eicar.org which has IP 188.40.238.250.

    From here you can download the safe virus test file: http://www.eicar.org/85-0-Download.html

    Create Rule:

    Source = *

    Destination = *

    URL Regex = www\.eicar\.org

    Now try downloading the test virus.  It should skip AV scanning and download.

    Change the Rule:

    Source = *

    Destination = 188.40.238.250

    URL Regex = *

    Now try downloading the test virus.  It should skip AV scanning and download.

    Let me know if this test works for you.

    NOTE: The URL RegEx is for the URL.  If it is being accessed by domain name then an IP will not work.
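The two test rules above can be sketched as a small matcher. This is a hypothetical simulation for illustration only, not the firewall's actual logic; the function name and wildcard handling are assumptions:

```python
import re

def rule_matches(url, dest_ip, url_regex=None, dest=None):
    """Simulate an AV bypass rule: every non-wildcard condition must match.
    None or '*' acts as a wildcard that matches anything."""
    if dest not in (None, "*") and dest_ip != dest:
        return False
    if url_regex not in (None, "*") and not re.search(url_regex, url):
        return False
    return True

url = "http://www.eicar.org/85-0-Download.html"
ip = "188.40.238.250"

# Rule 1: match by URL regex, any destination
assert rule_matches(url, ip, url_regex=r"www\.eicar\.org")

# Rule 2: match by destination IP, any URL
assert rule_matches(url, ip, dest=ip)

# The NOTE above: putting the IP in the URL regex does NOT match a
# request made by domain name, because the regex only sees the URL text.
assert not rule_matches(url, ip, url_regex=r"188\.40\.238\.250")
```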

  • The exceptions work for me. I have yet to figure out all URLs and probably some IPs.

    Unfortunately those game stores are careless when it comes to network configuration. I have noticed that many do not work with HTTP scanning enabled, and these are not exotic protocols like HLS - just plain file downloading. There are HTTP requests made to bare IPs and, what's worse, HTTPS requests to servers with self-signed certificates. A nightmare.

    And these are multi-million $$$ companies... pathetic.

  • One more question: why do you use the "^https?" pattern in HTTP scanning rules? I know that it doesn't hurt, but is it needed to duplicate those in the section below?

  • Say you had a pattern battle\.net and then imagine the URL

    http://evilsite.com/malware.exe?fooltheproxy=battle.net

    If your regex were only battle\.net then it would match, because it is implicitly .*battle\.net.* . By writing a full regex including ^https? you effectively remove the .* at the beginning and ensure your search string only matches in the FQDN.
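    The point above can be verified directly with Python's re module. The anchored pattern here is a hypothetical example for illustration, not an official Sophos-recommended expression:

```python
import re

evil = "http://evilsite.com/malware.exe?fooltheproxy=battle.net"
good = "http://us.battle.net/download/patch"

# Unanchored: effectively .*battle\.net.* — matches the evil URL too.
assert re.search(r"battle\.net", evil)
assert re.search(r"battle\.net", good)

# Anchored to the scheme and host: battle.net must be part of the FQDN,
# so the query-string trick no longer fools the match.
anchored = r"^https?://([A-Za-z0-9-]+\.)*battle\.net/"
assert not re.search(anchored, evil)
assert re.search(anchored, good)
```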

    But in any sort of testing or debugging I recommend using the simplest RegEx.  Here is a tool intended to help build RegEx for the Sophos UTM but it applies equally here.

    http://utmtools.com/QuickRegex

  • That was not my intention.


    I was just asking why you use "^https?://" in HTTP rules instead of "^http://". To be clear: I was asking why the "s?" is included.

  • The regex s? means "the letter s, zero or one time". So https?:// will match both http:// and https://.

  • I know that. But we are discussing HTTP only rules not HTTP and HTTPS.

  • Though the section heading is "HTTP Scanning Rules" in reality it is "anything that goes through the web proxy" which includes HTTP, HTTPS, and FTP-over-HTTP.

    For most people, if they want to turn off the virus scanner for domain.com they want to turn it off for both http and https access to domain.com therefore the convention is to use that notation.

    If you wanted to turn off the virus scanner for http access and leave it on for https access then you would not include s?.
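    The difference described above is easy to check. A minimal sketch, using Python's re module with example URLs:

```python
import re

# "s?" makes the s optional: one pattern covers both schemes.
both = r"^https?://"
assert re.match(both, "http://domain.com/file")
assert re.match(both, "https://domain.com/file")

# Drop the "s?" and the rule applies to plain HTTP only,
# leaving the scanner active for HTTPS access.
http_only = r"^http://"
assert re.match(http_only, "http://domain.com/file")
assert not re.match(http_only, "https://domain.com/file")
```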

  • Well, then maybe the section names should be changed to "Proxy Scanning Rules" and "HTTPS Bypass Rules", especially since there is a section named "HTTP/HTTPS Configurations" just above.

    The name "HTTP Scanning Rules" is misleading in this context. I was thinking it applied only to HTTP traffic and nothing else.

    Let me put this straight: to bypass scanning of some SSL-secured sites, I MUST put the domain into "HTTP Scanning Rules" as a regex and then into a category in "HTTPS Scanning Rules".


    Is that correct?