Update from 9.315 to 9.351 failed because of 9.317

Today I wanted to update from 9.315 to the latest 9.351, but recently 9.316 and 9.317 were also made available.

Now the installation failed because of a dependency conflict:

2015:11:11-05:50:01 wall-2 auisys[22912]: You are currently running Version 9.317005, but Version 9.315002 is required for this up2date package.
2015:11:11-05:50:01 wall-2 auisys[22912]:
2015:11:11-05:50:01 wall-2 auisys[22912]: 1. Modules::Logging::msg:46() /</sbin/auisys.plx>Modules/Logging.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 2. Modules::Auisys::Installer::Systemstep::install:149() /</sbin/auisys.plx>Modules/Auisys/Installer/Systemstep.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 3. Modules::Auisys::Up2DatePackages::install:143() /</sbin/auisys.plx>Modules/Auisys/Up2DatePackages.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 4. Modules::Auisys::QueueIterator::process_qfiles:81() /</sbin/auisys.plx>Modules/Auisys/QueueIterator.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 5. main::main:300() auisys.pl
2015:11:11-05:50:01 wall-2 auisys[22912]: 6. main::top-level:35() auisys.pl
2015:11:11-05:50:01 wall-2 auisys[22912]: |=========================================================================
2015:11:11-05:50:01 wall-2 auisys[22912]: id="371J" severity="error" sys="system" sub="up2date" name="Fatal: Version conflict: required version: 9.315002 <=> current version: 9.317005" status="failed" action="install" package="sys"
2015:11:11-05:50:01 wall-2 auisys[22912]:
2015:11:11-05:50:01 wall-2 auisys[22912]: 1. Modules::Logging::alf:100() /</sbin/auisys.plx>Modules/Logging.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 2. Modules::Auisys::Installer::Systemstep::install:152() /</sbin/auisys.plx>Modules/Auisys/Installer/Systemstep.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 3. Modules::Auisys::Up2DatePackages::install:143() /</sbin/auisys.plx>Modules/Auisys/Up2DatePackages.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 4. Modules::Auisys::QueueIterator::process_qfiles:81() /</sbin/auisys.plx>Modules/Auisys/QueueIterator.pm
2015:11:11-05:50:01 wall-2 auisys[22912]: 5. main::main:300() auisys.pl
2015:11:11-05:50:01 wall-2 auisys[22912]: 6. main::top-level:35() auisys.pl
2015:11:11-05:50:01 wall-2 auisys[22912]: [CRIT-311] Firmware Up2Date installation failed
2015:11:11-05:50:22 wall-2 auisys[22912]: |=========================================================================
2015:11:11-05:50:22 wall-2 auisys[22912]: A serious error occured during installation! (20)

Any ideas?
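For what it's worth, the log line "required version: 9.315002 <=> current version: 9.317005" suggests auisys picked up the u2d-sys-9.315002-350012 package, which only applies to a box still running 9.315002. As a rough illustration of that filename logic (simulated in a temporary directory, not on a real UTM; the filenames are the ones quoted later in this thread, and the pruning rule is my own interpretation):

```shell
#!/bin/sh
# Simulation only: recreate the package names from this thread in a temp
# directory and prune the one that no longer applies. On a real UTM the
# directory is /var/up2date/sys -- check filenames before any rm there.
set -eu

current="9.317005"          # version the failed node actually runs
dir="$(mktemp -d)"          # stand-in for /var/up2date/sys

touch "$dir/u2d-sys-9.315002-350012.tgz.gpg" \
      "$dir/u2d-sys-9.317005-350012.tgz.gpg" \
      "$dir/u2d-sys-9.350012-351003.tgz.gpg"

# Each name encodes "from" and "to" versions: u2d-sys-<from>-<to>.tgz.gpg.
# Keep only the steps still reachable from the current version
# (9.317005 -> 9.350012 -> 9.351003); drop the stale 9.315002 package.
for f in "$dir"/u2d-sys-*.tgz.gpg; do
  name="$(basename "$f")"
  from="${name#u2d-sys-}"
  from="${from%%-*}"
  case "$from" in
    "$current"|9.350012) ;;   # applicable step in the chain: keep
    *) rm "$f" ;;             # conflicts with the running version: remove
  esac
done

ls "$dir"
```

After the loop only the 9.317005 and 9.350012 packages remain, which matches the fix described in the replies below.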



  • Sorry guys, been out all day; just got home. As scorpionking says, from the Master run ha_utils ssh to connect to the Slave so you can run commands on it.

    __________________
    ACE v8/SCA v9.3

    ...still have a v5 install disk in a box somewhere.

    http://xkcd.com
    http://www.tedgoff.com/mb
    http://www.projectcartoon.com/cartoon/1
  • Thanks to Scott, Billybob and scorpionking.

    After piecing together the above posts, I was able to fix my ha cluster with a little more help from *gasp* the Sophos Knowledge-base.

    https://www.sophos.com/en-us/support/knowledgebase/120870.aspx

    So, assuming your slave is the failed node, here is how I fixed it:

    ssh to the master as loginuser

    ha_utils ssh

    password again

    cd /var/up2date/sys

    ls

    I got:

    u2d-sys-9.315002-350012.tgz.gpg  u2d-sys-9.317005-350012.tgz.gpg  u2d-sys-9.350012-351003.tgz.gpg

    So rather than deleting them all as Billybob wrote, I used:

    su root

    rm u2d-sys-9.315002-350012.tgz.gpg

    This left only the files Scott suggested.

    I then took a look at things by running ha_utils without the ssh part:

    ha_utils

    and got something like this:

    - Status -----------------------------------------------------------------------

    Current mode: HA SLAVE with id 2 in state UP2DATE-FAILED

    -- Nodes --------------------------------------------------- eth1 alive --------

    SLAVE: 2 Node2 198.19.250.2 9.317005 UP2DATE-FAILED since Wed Nov 11 02:50:01 2015

    MASTER: 1 Node1 198.19.250.1 9.351003 ACTIVE since Wed Nov 11 02:49:37 2015

    Finally I ran:

    /etc/init.d/ha restart

    BAM! It did the rest: it rebooted and then synced the slave. Watch the HA Live Log in WebAdmin; all should be well after about 7-10 minutes. You can also check "watch up2date in progress" on the Up2Date page and the Up2Date live log.

    I am back to Node 1 ACTIVE, Node 2 READY with both on 9.351-3.

    MichaelMuenz, you may need to tinker with the above a little since your units are in different states, but do it soon: from what I gather, the whole HA is effectively down while a unit is in state UP2DATE-FAILED.

    Best Regards - HTG
    Frustrated Sophos Partner seeing all the things
    that brought me to Sophos slowly slip away.
    RIP astaro.org
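The ha_utils status block quoted above can also be checked mechanically, which is handy when watching the resync. A minimal sketch, assuming the field layout shown in that output (role, id, name, IP, version, state; node names and IPs are the ones from this thread, and here the status text is pasted in rather than read from a live UTM):

```shell
#!/bin/sh
# Sketch: scan ha_utils-style status lines for nodes that are not healthy.
# On a real UTM you would pipe the output of `ha_utils` in instead.
status='SLAVE: 2 Node2 198.19.250.2 9.317005 UP2DATE-FAILED since Wed Nov 11 02:50:01 2015
MASTER: 1 Node1 198.19.250.1 9.351003 ACTIVE since Wed Nov 11 02:49:37 2015'

# Fields per node line: $1=role $2=id $3=name $4=IP $5=version $6=state.
out="$(printf '%s\n' "$status" | awk '
  /^(SLAVE|MASTER):/ {
    if ($6 != "ACTIVE" && $6 != "READY")
      printf "%s %s (%s) needs attention: %s\n", $1, $3, $5, $6
  }')"
printf '%s\n' "$out"
```

With the sample input this flags only the slave, matching the UP2DATE-FAILED state reported above.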

  • Thanks, great post! But from which unit do you execute /etc/init.d/ha restart?
  • I am only speculating here, but I would say it does not actually matter which node you execute the command from. The important thing is to make sure that only the correct up2date files are in those directories before you run it, so you may want to check both of your UTMs, but especially the one that is in failed status.

    I executed the command from the failed slave UTM, and only that unit had files in its directory, since my other unit was already up to date.

    In your case, you may want to consider first getting the failed node back up so that both UTMs are ACTIVE/READY on 9.317, and then, once everything is running again, updating to 9.351 from WebAdmin.

    If not, just go ahead: put the correct files Scott specified into the directory on the UTM with the failed updates and run the command.

    Best Regards - HTG
    Frustrated Sophos Partner seeing all the things
    that brought me to Sophos slowly slip away.
    RIP astaro.org
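Since the advice above hinges on "only the correct files are in those directories" before restarting HA, a quick check can confirm that. A minimal sketch, assuming the two package names Scott's fix leaves behind (the directory is simulated locally here; on a real UTM it would be /var/up2date/sys):

```shell
#!/bin/sh
# Sketch: sanity-check that an up2date directory contains exactly the two
# expected packages before restarting HA. Filenames are from this thread.
set -eu
dir="$(mktemp -d)"                      # stand-in for /var/up2date/sys
touch "$dir/u2d-sys-9.317005-350012.tgz.gpg" \
      "$dir/u2d-sys-9.350012-351003.tgz.gpg"

expected="u2d-sys-9.317005-350012.tgz.gpg
u2d-sys-9.350012-351003.tgz.gpg"

actual="$(ls "$dir" | sort)"
if [ "$actual" = "$expected" ]; then
  echo "OK: only the expected packages are present"
else
  echo "Mismatch -- do not restart HA yet:"
  printf '%s\n' "$actual"
fi
```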

  • Last night I deleted the file on the slave and restarted HA on the slave. The only change was that the HA state on the slave is now "RESERVED". Tonight I'll try to update the master to 9.317 and see what happens :/
  • Hi,

    I'm in the same situation: master node active with version 9.315 and slave node UP2DATE-FAILED with version 9.317, not yet rebooted.
    I can't connect to the slave node via SSH because I hadn't enabled SSH before the update.
    What is the best way to get HA back in sync?
    My idea: power off the slave node, update the master node to version 9.317 (with downtime, but that is not a problem), and then power the slave node on again.
    On restart, the slave node should sync back with the master, and at that point I can connect via SSH and delete the wrong package.

    What do you think of this plan? Will the slave node, restarting with the same version as the master, come back in sync with HA?

    Many thanks.

    Ciao