Solaris 8 upgrade using Live upgrade method

Postby Joe Philip » Wed, 25 Feb 2004 02:43:19 GMT

I would like some feedback on upgrading to Solaris 8 (2/04) using Live
Upgrade. How did the upgrade go? Was it reasonably problem-free? Did
Veritas cause any problems during the upgrade?
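
For context, the usual Live Upgrade sequence on a UFS root looks roughly
like this. This is only a sketch; the BE names and the spare slice
c0t1d0s0 are placeholders:

Create the alternate boot environment on a spare slice:

  # lucreate -c sol8_old -n sol8_0204 -m /:/dev/dsk/c0t1d0s0:ufs

Upgrade the inactive BE from the 2/04 media or a network image:

  # luupgrade -u -n sol8_0204 -s /cdrom/cdrom0

Activate it and reboot with init (not reboot, so the Live Upgrade boot
scripts get a chance to run):

  # luactivate sol8_0204
  # init 6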



Similar Threads:

1. Live Upgrade broken for Solaris 9 to Solaris 10 upgrade

Has anyone else run into this problem?  I'm using Live Upgrade to upgrade
a Solaris 9 server to Solaris 10.  I created a boot environment on a
separate disk, and then upgraded it to Solaris 10 with `luupgrade -u'.
Now when I go to use `luupgrade -t' to apply the latest Solaris 10
patches to it, I get this...

  Validating the contents of the media </var/tmp/patches>.
  The media contains 220 software patches that can be added.
  All 220 patches will be added because you did not specify any specific patches to add.
  Mounting the BE <s10lu>.
  ERROR: The boot environment <s10lu> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.

Is there a way to make this work?  The new BE can't possibly contain a
non-global zone.
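
The error text itself points at the resolution: a Solaris 9 BE cannot be
used to maintain a Solaris 10 BE. A hedged sketch of the alternative is
to activate and boot the new BE first and then patch it from the running
Solaris 10 system instead of via `luupgrade -t'; the patch_order file
name below is only an example of a patch-ID list file:

  # luactivate s10lu
  # init 6
  (after the reboot, now running Solaris 10)
  # patchadd -M /var/tmp/patches patch_order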

-- 
-Gary Mills-    -Unix Support-    -U of M Academic Computing and Networking-

2. Live Upgrade fails during upgrade from Solaris 10 U7 to U8

Hello,

I have a problem updating Solaris 10 x86 from U7 to U8 with Live Upgrade.

ZFS root mirrored on 2 disks, no zones, no separate /var.
Should be an easy job for live upgrade.

Yes, liveupgrade20 has been applied from the lofi-mounted U8 image.
Yes, 121431-44, the Live Upgrade patch, is installed.
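
For reference, the U8 image was lofi-mounted along these lines (the ISO
path is a placeholder):

  # lofiadm -a /export/iso/sol-10-u8-ga-x86-dvd.iso
  /dev/lofi/1
  # mount -F hsfs -o ro /dev/lofi/1 /mnt
  # /mnt/Solaris_10/Tools/Installers/liveupgrade20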

luupgrade fails with:

  ERROR: Installation of the packages from this media of the media failed;
  pfinstall returned these diagnostics:
  Processing profile
  Loading local environment and services

Why does lucreate propagate /boot/grub/menu.lst?
It's a dummy; the real menu.lst is in /rpool/boot/grub.
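
As a quick sanity check (not a fix), bootadm reports which menu.lst the
system actually uses:

  # bootadm list-menu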

Here are the details:


# lucreate -n s10u8
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <s10u7> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10u8>.
Source boot environment is <s10u7>.
Creating boot environment <s10u8>.
Cloning file systems from boot environment <s10u7> to create boot environment <s10u8>.
Creating snapshot for <rpool/ROOT/s10u7> on <rpool/ROOT/s10u7@s10u8>.
Creating clone for <rpool/ROOT/s10u7@s10u8> on <rpool/ROOT/s10u8>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/s10u8>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u8> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <s10u8> in GRUB menu
Population of boot environment <s10u8> successful.
Creation of boot environment <s10u8> successful.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10u7                      yes      yes    yes       no     -         
s10u8                      yes      no     no        yes    -         


# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   17.6G   115G  38.5K  /rpool
rpool/ROOT              8.55G   115G    18K  legacy
rpool/ROOT/s10u7        8.41G   115G  8.23G  /
rpool/ROOT/s10u7@s10u8   187M      -  8.15G  -
rpool/ROOT/s10u8         140M   115G  8.21G  /
rpool/dump              2.00G   115G  2.00G  -
rpool/export            3.07G   115G    19K  /export
rpool/export/local      3.07G   115G  3.07G  /export/local
rpool/swap                 4G   119G    16K  -


# luupgrade -u -n s10u8 -s /mnt
System has findroot enabled GRUB
No entry for BE <s10u8> in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
63093 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u8>.
Checking for GRUB menu on ABE <s10u8>.
Saving GRUB menu on ABE <s10u8>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <s10u8>.
Performing the operating system upgrade of the BE <s10u8>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
ERROR: Installation of the packages from this media of the media failed;
 pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Restoring GRUB menu on ABE <s10u8>.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
The Solaris upgrade of the boot environment <s10u8> failed.
Installing failsafe
Failsafe install is complete.
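
FWIW, the detailed upgrade log usually ends up inside the ABE and can be
read after mounting it; the /a mount point is just an example:

  # lumount s10u8 /a
  # more /a/var/sadm/system/logs/upgrade_log
  # luumount s10u8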


Cheers,
Michael.

3. Solaris 10 vs Solaris 9 Live Upgrade with RAID-1 mirrors

I have been learning about Live Upgrade in a lab environment for the
last month. I managed to upgrade 3 Solaris 9 systems to Solaris 10. The
setup involves a pair of RAID-1 (SVM) drives that I split using LU,
detaching one submirror from the current mirror and using it to build a
new mirror for the alternate BE. The command I used is as follows (the
disk mirror is d10, and the submirrors are d0 and d1):

# lucreate -n Solaris_10 -m /:/dev/md/dsk/d11:ufs,mirror \
> -m /:d0:detach,attach,preserve
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <d0> expands to device path </dev/md/dsk/d0>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices....

Now I have a system on Solaris 10 with the same setup, yet when I try
the same thing it states that mirror d11 does not exist. That is true,
but as you can see above, Solaris 9 allowed it. I found a post on this
board ("S10 LU with SVM (Raid-1) volumes") that described the same
problem, but without much of a solution. Below is the attempt on Solaris
10, along with the mirroring information.

I have looked through the installation guides for Solaris 10 and it
sure seems this should work.  Am I missing something obvious here? Any
help would be greatly appreciated!

Thanks,

Mike Jacobs

bash-3.00# lucreate -n Solaris_10_b -m /:/dev/md/dsk/d11:ufs,mirror -m
/:d1:detach,attach,preserve
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <d1> expands to device path </dev/md/dsk/d1>
ERROR: device </dev/md/dsk/d11> does not exist
ERROR: device </dev/md/dsk/d11> is not available for use with mount
point </>
ERROR: cannot create new boot environment using file systems as
configured
ERROR: please review all file system configuration options
ERROR: cannot create new boot environment using options provided

Here is a metastat, metadb, and lustatus:
bash-3.00# metastat
d10: Mirror
    Submirror 0: d0
      State: Okay
    Submirror 1: d1
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 30721626 blocks (14 GB)

d0: Submirror of d10
    State: Okay
    Size: 30721626 blocks (14 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t0d0s0          0     No            Okay   Yes


d1: Submirror of d10
    State: Okay
    Size: 30724515 blocks (14 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t1d0s0          0     No            Okay   Yes


Device Relocation Information:
Device   Reloc  Device ID
c1t1d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA6E7A900007418MSAA
c1t0d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA6E4G300007418M6NS

bash-3.00# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c1t0d0s6
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t1d0s6
     a    p  luo        16              8192            /dev/dsk/c1t1d0s7

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris_10_a               yes      yes    yes       no     -
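
One fallback that is sometimes used when lucreate refuses the mirror
syntax, given here only as a hedged sketch and not verified on this
setup: detach the submirror by hand and point lucreate at the detached
metadevice directly. The new BE is then unmirrored until a mirror is
rebuilt around d1 later.

  # metadetach d10 d1
  # lucreate -n Solaris_10_b -m /:/dev/md/dsk/d1:ufs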

4. Live upgrade from Solaris 8 to Solaris 9

Hoping that someone has already done this and will be able to answer
my questions.

I am looking at the Live Upgrade guide, and I understand that I have to
create an empty boot environment with the -s option before I can install
a flash archive on the disk.

Q) What does the empty boot environment consist of?

Once an empty BE is set up, we use luupgrade with one of the three
mutually exclusive options -a/-j/-J. Ruling out the -a option, I am
confused about the usage of -j and -J:

-J to supply a profile entry on the command line

-j to supply the path to a profile

Q) If I use the -J switch, will it still lay out the file systems as
defined in the profile? Where exactly are the file systems laid out in
this process, assuming I specified them as explicit partitioning in my
profile? If they are laid out at this stage, where is the empty BE that
was created in the previous step stored? Was it stored on a file system
specified by the -m option? If yes, am I supposed to lay out the file
systems in my profile so that they don't touch the slice where the empty
BE was created?

Q) What does luactivate do (without -s or any other switches)? What does
it do differently from using eeprom to change the boot-device path?
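
The overall flow, as far as I can piece it together from the guide, looks
roughly like this; it is only a sketch, and the slice, OS image path and
archive location are placeholders:

Create an empty BE on a spare slice:

  # lucreate -s - -n flash_BE -m /:/dev/dsk/c0t1d0s0:ufs

Install the flash archive into it (-J takes a one-line profile on the
command line; -j /path/to/profile or -a /path/to/archive.flar would be
the alternatives):

  # luupgrade -f -n flash_BE -s /net/server/export/s9/osimage \
    -J 'archive_location http://server/flash/archive.flar'

Activate it and reboot:

  # luactivate flash_BE
  # init 6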

Thanks,
Shivakanth.

5. Solaris 10 live upgrade from Solaris 8

Hi All,

I am having problems upgrading Solaris 8 to Solaris 10 on both of my 
servers, a Sun Blade 2000 and a Sun Fire V440.
Both upgrades fail in the same way:

After a successful upgrade, on the reboot into the Solaris 10 BE the 
network/inetd-upgrade service fails to start, and all services that 
depend on it fail with it: telnet, ftp, X...

If I disable network/inetd-upgrade and reboot the system(s), it is the 
same...

Any suggestion would be greatly appreciated..
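
The usual SMF checks would be something like the following; the log path
simply follows the standard svc.startd naming and is given as an example:

  # svcs -xv svc:/network/inetd-upgrade:default
  # more /var/svc/log/network-inetd-upgrade:default.log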

Thanks,

Ned


6. Using Live Upgrade for patch installation

7. Live upgrade using OS X Parallels

8. Make slice NOT used by Live Upgrade anymore


