

Did the RAM limit for Home edition change to 4GB with 16.05.2?

Did the RAM limit for Home edition get changed with SFOS 16.05.2 MR-2? I'm only seeing 4GB after the update, and still only 4GB after rolling back to SFOS 16.05.1 MR-1. I was seeing 6GB usable before the upgrade.



This thread was automatically locked due to age.
  • DavidWilliams1 said:

     

    I have 2x4GB installed.  Supermicro X10SBA-L-O

    SFVH_SO01_SFOS 16.05.2 MR-2# grep RAM /proc/iomem
    00001000-00089bff : System RAM
    00100000-1effffff : System RAM
    1f100000-1fffffff : System RAM
    20100000-78825fff : System RAM
    78b81000-78b81fff : System RAM
    78bc4000-78d2ffff : System RAM
    78ffa000-78ffffff : System RAM
    79000000-7affffff : RAM buffer
    100000000-17fffffff : System RAM

     

     
    Theoretically these numbers should match, or come close to, your usable RAM. You look to have basically the same issue I do: most of your memory is mapped starting at the 4GB address.
    I just ran the script I posted and looked at the "System RAM" data. I did not have a RAM buffer on my system. The regions got a little reordered by the sort.
     
    1f100000-1fffffff 14 MB 15359 KB
    78b81000-78b81fff 0 MB 3 KB
    78bc4000-78d2ffff 1 MB 1455 KB
    78ffa000-78ffffff 0 MB 23 KB
    00001000-00089bff 0 MB 546 KB
    00100000-1effffff 494 MB 506879 KB
    20100000-78825fff 1415 MB 1449111 KB
    100000000-17fffffff 2047 MB 2097151 KB

    Total Bytes: 4168227832
    Total KBytes: 4070534
    Total MBytes: 3975
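
The script referenced above isn't reproduced in the thread, but a minimal Python sketch of the same kind of report is below (`SAMPLE` and `ram_report` are illustrative names, not the original script). Note it mimics the quoted totals, which come out as `end - start` per region rather than the exact span `end - start + 1`:

```python
# Minimal sketch of an iomem "System RAM" summarizer; the original script
# is not shown in the thread. Region sizes use end - start so the totals
# match the report quoted above (the exact span would be end - start + 1).
SAMPLE = """\
00001000-00089bff : System RAM
00100000-1effffff : System RAM
1f100000-1fffffff : System RAM
20100000-78825fff : System RAM
78b81000-78b81fff : System RAM
78bc4000-78d2ffff : System RAM
78ffa000-78ffffff : System RAM
79000000-7affffff : RAM buffer
100000000-17fffffff : System RAM
"""

def ram_report(iomem_text):
    total = 0
    lines = []
    for line in iomem_text.splitlines():
        rng, _, label = line.partition(" : ")
        if label.strip() != "System RAM":   # skips e.g. the "RAM buffer" line
            continue
        start, end = (int(x, 16) for x in rng.strip().split("-"))
        size = end - start                  # matches the quoted report's arithmetic
        total += size
        lines.append(f"{rng.strip()} {size // 2**20} MB {size // 1024} KB")
    return lines, total

lines, total = ram_report(SAMPLE)
print("\n".join(lines))
print(f"Total Bytes: {total}")
print(f"Total KBytes: {total // 1024}")
print(f"Total MBytes: {total // 2**20}")
```

On a live system you would feed it `open("/proc/iomem").read()` instead of `SAMPLE`; run as root, since unprivileged reads of /proc/iomem return zeroed addresses on many kernels.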
     
  • Hi All,

    We have received all the output and our Dev Team is working on this. To be transparent: we are trying to get remote access to some of the customers affected by this issue. Enabling an Access ID and allowing SSH access from our IP address would help us investigate. If anyone can share this access with us, please DM me the details. You will be informed prior to any changes or access.

    Thanks

  • Thanks to the DEVS for taking the time to look at this. 

    To me it comes down to properly adjusting the mem= value to account for where a system maps its memory. Many newer systems map a large chunk of memory starting at the 4GB address. mem=6G will NOT allow the allocation of any memory past the 6GB address; it does NOT take BIOS memory mapping into account. Some users may have an option to change that mapping; my BIOS does not.
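
A quick sketch of that effect, using the e820 "usable" map quoted later in this post (region sizes here are exact, end + 1 - start, so byte counts sit a few KB above the reports in the thread, which compute end - start):

```python
# Sketch: what mem=6G does on a board that remaps most RAM above the
# 4 GiB address. Regions are the poster's e820 "usable" map,
# (start, end) with end inclusive.
E820 = [
    (0x0000000000000000, 0x000000000009c7ff),
    (0x0000000000100000, 0x000000001effffff),
    (0x0000000020200000, 0x000000007b217fff),
    (0x000000007b293000, 0x000000007b3bafff),
    (0x000000007bb30000, 0x000000007bffffff),
    (0x0000000100000000, 0x000000027fffffff),
]

def usable_below(regions, cutoff):
    """Bytes of usable RAM left once everything at/above `cutoff` is removed."""
    return sum(max(0, min(end + 1, cutoff) - start)
               for start, end in regions)

# mem=6G caps the *address* at 6 GiB, not the *amount* of RAM, so only
# the first 2 GiB of the high region survives.
left = usable_below(E820, 6 * 2**30)
print(f"{left // 2**20} MB usable")
```

This reproduces the ~4005 MB usable figure reported further down, far short of the 6144 MB the license allows.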

    So far, the information provided in these posts points to anyone whose BIOS remaps memory to addresses above 4GB as being impacted. I've looked at some of my VMs, and the memory maps are much simpler there, which probably explains why some people running VMs were not impacted.

    One solution I can think of is adjusting the value of mem= upwards (example below) based on the e820 memory map and/or /proc/iomem until the desired result is achieved. The problem with /proc/iomem is that anyone already booted with mem=6G will not have the full map readily available. The other solution I can think of is a daemon or driver that simply allocates and pins real memory until the system is down to 6GB free, with mem= removed from the kernel options.

     

    Here's the current memory map from my system, reported with a variation of the script I posted a while back that looks at "System RAM" in /proc/iomem:

      00001000-0009c7ff | Region: START    0 MB END    0 MB | SIZE:    0 MB     621 KB
      00100000-1effffff | Region: START    1 MB END  495 MB | SIZE:  494 MB  506879 KB
      20200000-7b217fff | Region: START  514 MB END 1970 MB | SIZE: 1456 MB 1491039 KB
      7b293000-7b3bafff | Region: START 1970 MB END 1971 MB | SIZE:    1 MB    1183 KB
      7bb30000-7bffffff | Region: START 1979 MB END 1983 MB | SIZE:    4 MB    4927 KB
    100000000-17fffffff | Region: START 4096 MB END 6143 MB | SIZE: 2047 MB 2097151 KB

    Total Bytes : 4200249338
    Total KBytes: 4101805
    Total MBytes: 4005

    Here's a memory map report from the data DavidWilliams1 provided:

      00001000-00089bff | Region: START    0 MB END    0 MB | SIZE:    0 MB     546 KB
      00100000-1effffff | Region: START    1 MB END  495 MB | SIZE:  494 MB  506879 KB
      1f100000-1fffffff | Region: START  497 MB END  511 MB | SIZE:   14 MB   15359 KB
      20100000-78825fff | Region: START  513 MB END 1928 MB | SIZE: 1415 MB 1449111 KB
      78b81000-78b81fff | Region: START 1931 MB END 1931 MB | SIZE:    0 MB       3 KB
      78bc4000-78d2ffff | Region: START 1931 MB END 1933 MB | SIZE:    1 MB    1455 KB
      78ffa000-78ffffff | Region: START 1935 MB END 1935 MB | SIZE:    0 MB      23 KB
    100000000-17fffffff | Region: START 4096 MB END 6143 MB | SIZE: 2047 MB 2097151 KB

    Total Bytes : 4168227832
    Total KBytes: 4070534
    Total MBytes: 3975

    Here's a theoretical, handmade example map showing what a modified mem=8283M value would do on my host. That value should provide me with exactly 6GB of usable RAM:

      00001000-0009c7ff | Region: START    0 MB END    0 MB | SIZE:    0 MB     621 KB
      00100000-1effffff | Region: START    1 MB END  495 MB | SIZE:  494 MB  506879 KB
      20200000-7b217fff | Region: START  514 MB END 1970 MB | SIZE: 1456 MB 1491039 KB
      7b293000-7b3bafff | Region: START 1970 MB END 1971 MB | SIZE:    1 MB    1183 KB
      7bb30000-7bffffff | Region: START 1979 MB END 1983 MB | SIZE:    4 MB    4927 KB
    100000000-205b00000 | Region: START 4096 MB END 8283 MB | SIZE: 4187 MB 4287488 KB

    Total Bytes : 6443153403
    Total KBytes: 6292141
    Total MBytes: 6144
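
That hand-built example can be checked with a few lines of Python against the e820 map quoted below (exact region sizes, end + 1 - start, so the byte total lands a few KB above the handmade report):

```python
# Sketch: verify that mem=8283M leaves ~6 GiB usable on this map.
# Regions are the poster's e820 "usable" map, (start, end) inclusive.
E820 = [
    (0x0000000000000000, 0x000000000009c7ff),
    (0x0000000000100000, 0x000000001effffff),
    (0x0000000020200000, 0x000000007b217fff),
    (0x000000007b293000, 0x000000007b3bafff),
    (0x000000007bb30000, 0x000000007bffffff),
    (0x0000000100000000, 0x000000027fffffff),
]

def usable_below(regions, cutoff):
    return sum(max(0, min(end + 1, cutoff) - start)
               for start, end in regions)

cutoff = 8283 * 2**20                 # mem=8283M caps the address at 8283 MiB
usable = usable_below(E820, cutoff)
print(f"{usable // 2**20} MB usable") # the 6144 MB Home limit
```

The result comes out marginally above 6 GiB (the cutoff is rounded to a whole MiB), which floors to exactly 6144 MB.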

     

    My original BIOS (e820) usable memory map, before mem=6G, from syslog.log is below.

    [mem 0x0000000000000000-0x000000000009c7ff] usable
    [mem 0x0000000000100000-0x000000001effffff] usable
    [mem 0x0000000020200000-0x000000007b217fff] usable
    [mem 0x000000007b293000-0x000000007b3bafff] usable
    [mem 0x000000007bb30000-0x000000007bffffff] usable
    [mem 0x0000000100000000-0x000000027fffffff] usable

    The impact of mem=6G, from the log, is also below: it removes any memory above the 6GB address mark. Depending on memory usage below 4GB and how things are mapped on different boards, different users will see different amounts of free memory; I think the two examples above show that.

    remove [mem 0x180000000-0xfffffffffffffffe] usable

    I know I have made a lot of posts on this, but I have been troubleshooting UNIX (SunOS/Solaris/AIX/HP-UX/Linux/NetBSD) issues since 1996, have used Linux since the 0.98 days, and am just interested in seeing the problem resolved as soon as possible. If there are any questions, DM or email me. I think most of the info needed to fix this is in these posts. It boils down to mem= being an overly simplistic implementation, with no good alternative for limiting usable memory to a specific value.

    I would personally fix this by creating a program or script to calculate a proper mem= value based on the system's memory map, or by using some other means to make the extra memory unusable.
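
That calculation can be sketched as below, using the e820 map quoted earlier in this post; `mem_value_mib` is an illustrative name, and a real fix would read the live map at boot rather than a hard-coded list:

```python
import math

# Sketch of the proposed fix: derive a mem= value from the e820 usable
# map so that `target` bytes of RAM survive the address cap. Regions are
# the poster's e820 "usable" map, (start, end) with end inclusive.
E820 = [
    (0x0000000000000000, 0x000000000009c7ff),
    (0x0000000000100000, 0x000000001effffff),
    (0x0000000020200000, 0x000000007b217fff),
    (0x000000007b293000, 0x000000007b3bafff),
    (0x000000007bb30000, 0x000000007bffffff),
    (0x0000000100000000, 0x000000027fffffff),
]

def mem_value_mib(regions, target):
    """Smallest mem= value (in MiB) keeping at least `target` usable bytes."""
    got = 0
    for start, end in sorted(regions):
        size = end + 1 - start
        if got + size >= target:
            cutoff = start + (target - got)   # address where `target` is reached
            return math.ceil(cutoff / 2**20)  # round up to a whole MiB
        got += size
    raise ValueError("map holds less than the requested amount")

print(f"mem={mem_value_mib(E820, 6 * 2**30)}M")
```

On this map the function returns 8283, matching the hand-derived mem=8283M value above; rounding up to a whole MiB means usable memory lands marginally above the target rather than below it.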

    Thanks. Hope you guys figure this out before the next MR :)

  • FYI: I just installed the latest firmware update, SW-SFOS_16.05.3_MR-3.SFW-183. No fix; the RAM appears unchanged, 3096MB used of 8GB.

  • Just updated to MR-3 and memory use hasn't changed: 5369MB out of 8GB.

  • Agree about this.

     

    I think it is a better solution to handle the "overcapacity" by simply allocating it, as this would also allow the system to manage the overcapacity as a "hot spare".

    So when it detects a memory failure, it could map that failure away using the capacity that is over the license.

    Example: you have 8 GB and your license is 6 GB. If a 1 GB chip fails on one of the RAM sticks, the system could remap so that it uses, say, the first 5 GB, makes the failed 1 GB unavailable, uses 1 GB of the overcapacity, and makes the last 1 GB unavailable. That way 6 GB stays usable at all times, even after a hardware failure.

  • It's been 16 days since your post; has there been any movement on this issue? I think I laid out the source of the problem pretty clearly.

    Either remove mem= from the kernel options and use some other method to limit memory, or calculate the proper value of mem= taking the system's memory map into account.

    I understand development takes time, but this issue has been known for well over a month.

  • Does anyone have a test firewall with this issue that they could give them access to for diagnosis? I imagine that may be what's holding it up. I think many of us just don't feel right about giving an outside entity access to our live configs, but I don't think this will go anywhere without it.

     

    I've looked around a bit and don't see any way to replicate the remapping in a VM under KVM or VirtualBox.

  • I have given them full access to my boxes in previous UTM and XG betas. They need access when a problem is limited to a single user and they can't replicate it in the lab. This issue is widespread and has been identified correctly by you and others.

    The last UTM rollout broke antivirus scanning for ALL users; it took Sophos almost a week to fix, even though paying customers were affected. During XG 16.05 RC1, a bad rollout caused WINGc to crash, breaking web categorization for EVERYONE. That took almost 3 days to fix, although that bad IPS update affected paid and home users alike.

    This problem is only affecting home users. I am sure Sophos is hard at work fixing it as we speak [:^)]

  • Hi All,

    We haven't received any volunteers to provide us with both the Access ID and SSH access to their appliance for investigation. Our Dev Team cannot really proceed further until we find a machine to troubleshoot that still has this issue.

    Thanks