This discussion has been locked.
You can no longer post new replies to this discussion. If you have a question, you can start a new discussion.

Slow Web Filter Performance

We are seeing very slow performance from the Endpoint Web Filter at supported client sites and our own site - typically a 3-5 second delay before opening a new page!

Initially I thought this would be DNS related - but a quick investigation points to slow responses from the Sophos servers to the Web Filter clients.

If we have to do these lookups via the cloud, please provide the resources/bandwidth - currently the performance is so poor that most of our clients are turning it off!
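
A quick timing check can help separate DNS latency from the rest of the round trip. A minimal sketch in Python - the hostname used here is just the sophosxl.net domain discussed in this thread, and this is an illustration of how one might measure resolution time, not the filter's actual lookup mechanism:

```python
import socket
import time

def time_dns_lookup(hostname: str) -> float:
    """Return wall-clock seconds for a single DNS resolution of hostname."""
    start = time.monotonic()
    socket.getaddrinfo(hostname, 80)
    return time.monotonic() - start

if __name__ == "__main__":
    # "sophosxl.net" is the lookup domain mentioned later in the thread;
    # substitute whatever hostname a packet capture shows being queried.
    host = "sophosxl.net"
    try:
        print(f"{host}: {time_dns_lookup(host) * 1000:.1f} ms")
    except socket.gaierror as err:
        print(f"{host}: resolution failed ({err})")
```

If resolution comes back in tens of milliseconds but pages still stall for seconds, the delay is more likely in the HTTP exchange with the lookup service than in DNS.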

Any comments welcome.



  • Hello SnakeyJ,

    are you referring to the HTTP lookups to sophosxl.net? I don't know how long the LSP waits for the connection to be established and the reply to be received, but I do think it doesn't assume that "the cloud" is always available and performing. That's not to say that there wouldn't be a noticeable delay in case "the cloud" is not working as it should.

    But how could a general "please provide the resources/bandwidth" be addressed? Sophos has likely secured adequate SLAs with their cloud providers, but even extensive monitoring can't detect all bottlenecks. Where you connect to depends not only on your geographic location but also on your ISP - thus without you telling Sophos where you are (in terms of "the Internet") it'd be hard for them to look into the problem. As you likely don't want to reveal your location and other details to the public, I'd suggest you call Support.

    Christian

  • Hmm, perhaps doing it all via the cloud is not the most efficient solution - maybe the web filter could check against a local cache (managed via the Enterprise Console) and only refer queries for unknown / less commonly accessed sites?

    I'll have to look at some captures to see exactly what is being requested, but the problem appears to be one of latency rather than bandwidth limitation. Whilst it may not be practical to hold all this content locally, it seems quite inefficient for 2,500 clients to query the service every time they access commonly used sites.

    Given the nature of the threat I want my clients to use this part of endpoint protection, but 3-5 second delays on page lookups are quite a major deterrent.
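
    The local-cache idea above can be sketched roughly. A minimal time-limited cache in Python - the verdict strings, TTL value, and `cloud_lookup` callback are hypothetical placeholders for illustration, not how the Enterprise Console or the Web Filter actually works:

    ```python
    import time

    class TTLCache:
        """Tiny time-limited cache: serve recent verdicts locally and
        only fall through to the cloud for unknown or expired entries."""

        def __init__(self, ttl_seconds: float = 3600.0):
            self.ttl = ttl_seconds
            self._store = {}  # url -> (verdict, expiry timestamp)

        def get(self, url: str):
            entry = self._store.get(url)
            if entry is not None and entry[1] > time.monotonic():
                return entry[0]  # fresh local hit, no cloud round trip
            return None

        def put(self, url: str, verdict: str) -> None:
            self._store[url] = (verdict, time.monotonic() + self.ttl)

    def check_url(url: str, cache: TTLCache, cloud_lookup) -> str:
        """Consult the cache first; only unknown or expired URLs hit the cloud."""
        verdict = cache.get(url)
        if verdict is None:
            verdict = cloud_lookup(url)  # the slow remote query
            cache.put(url, verdict)
        return verdict
    ```

    With 2,500 clients hitting the same commonly used sites, even a short TTL would collapse most of the repeated queries into local hits; the trade-off is that a verdict can be stale for up to the TTL.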

  • Hello SnakeyJ,

    there shouldn't be dozens of HTTP lookups unless you access a site with scripts from many other sites. Well, a trace should reveal what exactly is requested. If it is latency (and specifically only when accessing the sophosxl.net cloud), this should be investigated. Accesses to "the cloud" are quite common nowadays (think of all the pages with Like and Share buttons, as well as the ads), thus I think there's no justification for caching just these requests.

    Christian
