Is it possible to do a live upgrade from Solaris 10 (?/07) to Solaris Express Community Edition b90? (Blade 1000). Thank you --
In article < XXXX@XXXXX.COM >, < XXXX@XXXXX.COM > writes: Yes, it should work. Remember, before you start, to remove the Live Upgrade packages and reinstall them from the b90 media; you also need to install SUNWp7zip from the b90 media. -- Andrew Gabriel [email address is not usable -- followup in the newsgroup]
XXXX@XXXXX.COM (Andrew Gabriel) writes: Thank you. Do you mean that I should install the b90 Live Upgrade (and p7zip) packages on my running Solaris 10? Thanks again --
Yes - when doing a Live Upgrade you should use the latest version of lu, which is kindly provided on each SXCE build image.
Mount the ISO image and run /mnt/Solaris_11/Tools/Installers/liveupgrade20; it will upgrade the lu packages.
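A minimal sketch of that procedure, wrapped as a function; the ISO filename is a hypothetical example, and the Installers path is the one given above:

```shell
# Sketch of updating the LU packages from an SXCE build image.
# The ISO path passed in is a hypothetical example.
update_lu_from_iso() {
    iso="$1"                                 # e.g. /export/iso/sol-nv-b90-sparc-dvd.iso
    dev=$(lofiadm -a "$iso") || return 1     # attach the ISO; prints /dev/lofi/N
    mount -F hsfs -o ro "$dev" /mnt || return 1
    /mnt/Solaris_11/Tools/Installers/liveupgrade20   # upgrades the lu packages
}
```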
Oscar del Rio < XXXX@XXXXX.COM > writes: Thank you for this instruction. --
1. Live Upgrade broken for Solaris 9 to Solaris 10 upgrade
Has anyone else run into this problem? I'm using Live Upgrade to upgrade a Solaris 9 server to Solaris 10. I created a boot environment on a separate disk, and then upgraded it to Solaris 10 with `luupgrade -u'. Now when I go to use `luupgrade -t' to apply the latest Solaris 10 patches to it, I get this:

Validating the contents of the media </var/tmp/patches>.
The media contains 220 software patches that can be added.
All 220 patches will be added because you did not specify any specific patches to add.
Mounting the BE <s10lu>.
ERROR: The boot environment <s10lu> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.

Is there a way to make this work? The new BE can't possibly contain a non-global zone.

-- -Gary Mills- -Unix Support- -U of M Academic Computing and Networking-
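For context, the sequence being attempted above can be sketched as follows; the target disk slice and media paths are hypothetical placeholders, only the BE name and patch directory come from the post:

```shell
# Sketch of the Live Upgrade sequence described above; the disk
# slice and install-media path are hypothetical placeholders.
patch_new_be() {
    lucreate -n s10lu -m /:/dev/dsk/c0t1d0s0:ufs   # new BE on a separate disk
    luupgrade -u -n s10lu -s /cdrom/cdrom0         # upgrade the BE to Solaris 10
    luupgrade -t -n s10lu -s /var/tmp/patches      # apply patches -- the failing step
}
```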
2. Live Upgrade fails during upgrade from Solaris 10 U7 to U8
Hello, I have a problem updating Solaris 10 x86 from U7 to U8 with Live Upgrade. ZFS root mirrored on 2 disks, no zones, no separate /var. Should be an easy job for Live Upgrade. Yes, liveupgrade20 has been applied from the lofi-mounted U8 image. Yes, 121431-44, the Live Upgrade patch, is installed. luupgrade fails with:

ERROR: Installation of the packages from this media of the media failed; pfinstall returned these diagnostics:
Processing profile
Loading local environment and services

Why does lucreate propagate /boot/grub/menu.lst? It's a dummy; the real menu.lst is on /rpool/boot/grub. Here are the details:

# lucreate -n s10u8
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <s10u7> file systems with the file system(s) you specified for the new boot environment.
Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10u8>.
Source boot environment is <s10u7>.
Creating boot environment <s10u8>.
Cloning file systems from boot environment <s10u7> to create boot environment <s10u8>.
Creating snapshot for <rpool/ROOT/s10u7> on <rpool/ROOT/s10u7@s10u8>.
Creating clone for <rpool/ROOT/s10u7@s10u8> on <rpool/ROOT/s10u8>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/s10u8>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u8> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <s10u8> in GRUB menu
Population of boot environment <s10u8> successful.
Creation of boot environment <s10u8> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u7                      yes      yes    yes       no     -
s10u8                      yes      no     no        yes    -

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   17.6G   115G  38.5K  /rpool
rpool/ROOT              8.55G   115G    18K  legacy
rpool/ROOT/s10u7        8.41G   115G  8.23G  /
rpool/ROOT/s10u7@s10u8   187M      -  8.15G  -
rpool/ROOT/s10u8         140M   115G  8.21G  /
rpool/dump              2.00G   115G  2.00G  -
rpool/export            3.07G   115G    19K  /export
rpool/export/local      3.07G   115G  3.07G  /export/local
rpool/swap                 4G   119G    16K  -

# luupgrade -u -n s10u8 -s /mnt
System has findroot enabled GRUB
No entry for BE <s10u8> in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
63093 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u8>.
Checking for GRUB menu on ABE <s10u8>.
Saving GRUB menu on ABE <s10u8>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <s10u8>.
Performing the operating system upgrade of the BE <s10u8>.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
ERROR: Installation of the packages from this media of the media failed; pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Restoring GRUB menu on ABE <s10u8>.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
The Solaris upgrade of the boot environment <s10u8> failed.
Installing failsafe
Failsafe install is complete.

Cheers, Michael.
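On a ZFS root, the menu.lst copied into the BE is indeed only a copy; a way to confirm which menu.lst the system actually boots from, using the standard bootadm subcommand on Solaris 10 x86, is:

```shell
# Show the location of the active GRUB menu on Solaris 10 x86.
# On a ZFS root this should point at the menu under the root pool
# (e.g. /rpool/boot/grub/menu.lst), not the copy inside the BE.
show_active_grub_menu() {
    bootadm list-menu
}
```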
3. Solaris 10 vs Solaris 9 Live Upgrade with RAID-1 mirrors
I have been learning about Live Upgrade in a lab environment for the last month. I managed to upgrade 3 Solaris 9 systems to Solaris 10. The setup involves a pair of RAID-1 drives that I split using LU to detach one slice of the current mirror and create a new mirror. The command I have used is as follows (the disk mirror is d10, and the submirrors are d0 and d1):

# lucreate -n Solaris_10 -m /:/dev/md/dsk/d11:ufs,mirror \
> -m /:d0:detach,attach,preserve
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <d0> expands to device path </dev/md/dsk/d0>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
...

Now I have a system on Solaris 10 with the same setup, yet when I try the same thing it states that mirror d11 does not exist, which is true, but as you can see above, Solaris 9 allowed it. I found a post on this board ("S10 LU with SVM (Raid-1) volumes") that described this same problem but not much as to a solution. Below is the attempt on Solaris 10, along with the mirroring information. I have looked through the Installation guides for Solaris 10 and it sure seems this should work. Am I missing something obvious here? Any help would be greatly appreciated!
Thanks, Mike Jacobs

bash-3.00# lucreate -n Solaris_10_b -m /:/dev/md/dsk/d11:ufs,mirror -m /:d1:detach,attach,preserve
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <d1> expands to device path </dev/md/dsk/d1>
ERROR: device </dev/md/dsk/d11> does not exist
ERROR: device </dev/md/dsk/d11> is not available for use with mount point </>
ERROR: cannot create new boot environment using file systems as configured
ERROR: please review all file system configuration options
ERROR: cannot create new boot environment using options provided

Here is a metastat, metadb, and lustatus:

bash-3.00# metastat
d10: Mirror
    Submirror 0: d0
      State: Okay
    Submirror 1: d1
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 30721626 blocks (14 GB)

d0: Submirror of d10
    State: Okay
    Size: 30721626 blocks (14 GB)
    Stripe 0:
        Device       Start Block  Dbase  State  Reloc  Hot Spare
        c1t0d0s0     0            No     Okay   Yes

d1: Submirror of d10
    State: Okay
    Size: 30724515 blocks (14 GB)
    Stripe 0:
        Device       Start Block  Dbase  State  Reloc  Hot Spare
        c1t1d0s0     0            No     Okay   Yes

Device Relocation Information:
Device  Reloc  Device ID
c1t1d0  Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA6E7A900007418MSAA
c1t0d0  Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA6E4G300007418M6NS

bash-3.00# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c1t0d0s6
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t1d0s6
     a    p  luo        16              8192            /dev/dsk/c1t1d0s7

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris_10_a               yes      yes    yes       no     -
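For reference, the mirror-splitting invocation from the post, wrapped as a sketch with the metadevice names used above:

```shell
# Sketch of the mirror-splitting lucreate call from the post:
# detach submirror d1 from mirror d10, preserve its contents, and
# use it to build the new one-way mirror d11 holding the new BE's /.
split_mirror_lucreate() {
    lucreate -n Solaris_10_b \
        -m /:/dev/md/dsk/d11:ufs,mirror \
        -m /:d1:detach,attach,preserve
}
```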
4. Solaris 10 live upgrade from Solaris 8
Hi All, I am having problems upgrading from Solaris 8 to Solaris 10 on both of my servers: a Sun Blade 2000 and a Sun Fire V440. Both upgrades fail the same way: after a successful upgrade, at the reboot into the Solaris 10 BE the network/inetd-upgrade service fails to start, and after it all the dependent services fail as well: telnet, ftp, X... If I disable network/inetd-upgrade and reboot the system(s), it is the same. Any suggestion would be greatly appreciated. Thanks, Ned
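When an SMF service such as network/inetd-upgrade fails at boot, the usual first steps are `svcs -x` for the explanation and the service's log file; a sketch, where the FMRI comes from the post and the log path follows SMF's standard /var/svc/log naming convention:

```shell
# Standard SMF triage for a failed service. The FMRI is from the
# post above; the log filename follows SMF's naming convention.
diagnose_inetd_upgrade() {
    svcs -xv svc:/network/inetd-upgrade:default
    cat /var/svc/log/network-inetd-upgrade:default.log
}
```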
5. SSH'ing between Sol 8 -> Sol 10 hosts
Hi, I have two hosts: a Solaris 8 host running SSH v1.2.30 trying to connect to a Solaris 10 (build 72) host running the stock standard version of SSH. When I attempt to connect to the Solaris 10 host I get the following error, even after I've uncommented the "Protocol 2,1" line in /etc/ssh/sshd_config and restarted sshd.

<Solaris 8 host># ssh -v <solaris 10 host>
SSH Version 1.2.30, protocol version 1.5.
Standard version.  Does not use RSAREF.
tlons200: Reading configuration data /etc/ssh_config
tlons200: ssh_connect: getuid 0 geteuid 0 anon 0
tlons200: Connecting to test [10.192.248.100] port 22.
tlons200: Allocated local port 1022.
tlons200: Connection established.
tlons200: Remote protocol version 2.0, remote software version Sun_SSH_1.1
tlons200: Waiting for server public key.
Local: Bad packet length 1349676916.

Any ideas why? Thanks in advance for any responses.
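The trace shows the server still announcing protocol 2.0 only, which the v1-only client then misparses. A sketch of the server-side checks worth making on the Solaris 10 host; the host-key path is an assumption based on the usual /etc/ssh/ssh_host_key convention for SSHv1 keys:

```shell
# Server-side checks on the Solaris 10 host. The SSHv1 host-key
# path is assumed to follow the usual /etc/ssh/ssh_host_key
# convention; protocol 1 needs such a key, not just the Protocol line.
check_sshv1_support() {
    grep '^Protocol' /etc/ssh/sshd_config    # want: Protocol 2,1
    ls -l /etc/ssh/ssh_host_key              # SSHv1 RSA host key should exist
    svcadm restart svc:/network/ssh:default  # pick up config changes
}
```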
6. Unable to upgrade to Sol Express B54 on SB1000