Live upgrade Sol 10 > Solaris Express

  • 1. Patch 118844-28 is declared as incompatible by 117462-02
    I'm trying to install 118844-28 on an S10 FCS system with 118844-27 installed.

    $ cat /etc/release
    Solaris 10 3/05 s10_74L2a X86
    Copyright 2005 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 22 January 2005
    $ uname -srv
    SunOS 5.10 Generic_118844-27

    patchadd(1M) reports:

    | 0 Patch 118844-28 is declared as incompatible by 117462-02,
    | which has already been installed on the system.

    pca confirms that the installed 117462 was obsoleted:

    $ ./pca -a | grep 117462
    117462 03 = 03 108 Obsoleted by: 118844-27 SunOS 5.10_x86: boot.bin patch

    I can't duplicate this on my other S10 FCS installs with similar patch
    histories. Has anyone here seen this bug before? John  XXXX@XXXXX.COM 
  • 2. Burn solaris 10 cds
    After downloading the Solaris 10 CD ISO files, I'm wondering which software is normally used to burn them onto CDs so that the CDs are bootable. Do any options need to be changed in the burning software to make the CDs bootable? Thanks,
  • 3. Problem mapping PCI memory space to user space
    Hi, I want to map a local address space of my PCI device to user space. The local address space is mapped into PCI memory space and its size is 1024 bytes. I have created a devmap entry point in my driver according to the "Writing Device Drivers" document. The driver works on a SPARC platform, but when I use it on an x86 platform mmap fails: the global error returned is ENXIO, and the devmap_devmem_setup() function in my driver returns -1. When I change the size of the local address space to 4096 bytes, equal to the page size, it works. That is no solution for me, though; I want to be able to map local address spaces smaller than the page size. By the way, I already round up the len parameter to a multiple of the page size, so that's not the problem. What could be the problem? What is different on the SPARC platform? Is it not possible to map a local address space smaller than the page size? Can anyone help?
  • 4. 64 bit or 32 bit OS
    I have a preinstalled Solaris machine:

    $ uname -a
    SunOS sun_machine 5.9 Generic_117171-07 sun4u sparc SUNW,Ultra-Enterprise
    $ uname -X
    System = SunOS
    Node = sun_machine
    Release = 5.9
    KernelID = Generic_117171-07
    Machine = sun4u
    BusType = <unknown>
    Serial = <unknown>
    Users = <unknown>
    OEM# = 0
    Origin# = 1
    NumCPU = 2

    1. How do I know whether the OS is 32-bit or 64-bit, and whether the release is Maintenance Update 5, later, or earlier?
    2. How do I limit the sizes of various log files? Thanks.
  • 5. Solaris Consultant's bread and butter
    As a new Solaris admin/consultant, I would like to hear some of your favorite things, in general, to improve when you walk into the average corporate shop (with 10 to 50 Solaris servers) for a 3-to-6-month contract. Maybe these are closely guarded secrets, maybe not, but it would be interesting to hear what kinds of things make the biggest difference, and which of those are the easiest to implement. Here's my own noob list of things to do: centralized monitoring scripts with notification, software upgrades/standards, system dumps, standard RSC/KVM console access, security (getting rid of rlogin, telnet, and direct root logins), cleaning up /var/adm/messages errors, setting up a JumpStart server, etc. I hope to learn many more; my current environment is a bit too small and stable.
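On the burning question in item 2 above: an ISO image already contains the El Torito boot record, so it only needs to be written to disc as a raw image, not copied as a file onto a data CD. A minimal sketch using cdrecord from cdrtools (the dev= address and ISO filename are examples; get yours from -scanbus):

```shell
# Find the burner's SCSI address (bus,target,lun)
cdrecord -scanbus

# Burn the image as-is; writing the raw ISO preserves its boot record,
# so no special "bootable" option is needed in the software.
cdrecord -v speed=8 dev=1,0,0 sol-10-GA-x86-v1.iso
```

The common mistake is dragging the .iso onto a data CD as a file; the disc then contains the image but is not bootable.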
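For question 1 in item 4 above, Solaris answers both parts directly; a sketch with the standard tools (the logadm line is an illustrative policy, not a recommendation):

```shell
# 32- vs 64-bit kernel: prints "64" or "32"
isainfo -b
# More detail: native instruction sets and kernel bitness
isainfo -kv

# Maintenance Update level is recorded in the release file
cat /etc/release

# For question 2, logadm(1M) caps and rotates log files, e.g. keep
# 8 old copies of messages, rotating whenever it exceeds 1 MB:
logadm -w /var/adm/messages -C 8 -s 1m
```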

Live upgrade Sol 10 > Solaris Express

Postby rb » Tue, 17 Jun 2008 06:01:05 GMT

Is it possible to do a live upgrade from Solaris 10 (?/07) to Solaris
Express Community Edition b90? (Blade 1000).

Thank you
-- 


Re: Live upgrade Sol 10 > Solaris Express

Postby andrew » Tue, 17 Jun 2008 06:19:24 GMT

In article < XXXX@XXXXX.COM >,
	< XXXX@XXXXX.COM > writes:

Yes, it should work.
Before you start, remember to remove the Live Upgrade packages and
reinstall them from the b90 media; you also need to install SUNWp7zip
from the b90 media.
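A sketch of that package dance on the running Solaris 10, assuming the b90 media is mounted at /cdrom (SUNWlur/SUNWluu are the standard Live Upgrade packages; SUNWlucfg only exists in newer LU versions, so pkgrm may report it missing):

```shell
# Remove the Solaris 10 versions of the Live Upgrade packages
pkgrm SUNWluu SUNWlur SUNWlucfg

# Install the b90 versions, plus SUNWp7zip, from the media
pkgadd -d /cdrom/Solaris_11/Product SUNWlucfg SUNWlur SUNWluu SUNWp7zip
```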

-- 
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]

Re: Live upgrade Sol 10 > Solaris Express

Postby rb » Tue, 17 Jun 2008 06:34:10 GMT

 XXXX@XXXXX.COM  (Andrew Gabriel) writes:


Thank you


Do you mean that I should install the b90 live upgrade (and p7zip)
packages on my running Solaris 10?

Thanks again
-- 

Re: Live upgrade Sol 10 > Solaris Express

Postby kangcool » Tue, 17 Jun 2008 08:12:05 GMT



Yes - when doing a live upgrade you should use the latest version of LU,
which is kindly provided on each SXCE build image.

Re: Live upgrade Sol 10 > Solaris Express

Postby Oscar del Rio » Tue, 17 Jun 2008 10:15:30 GMT



Mount the ISO image and run
/mnt/Solaris_11/Tools/Installers/liveupgrade20

It will upgrade the LU packages.
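For completeness, loopback-mounting the ISO on Solaris 10 looks roughly like this (the lofi device number and ISO path are examples):

```shell
# Attach the image to a lofi device; lofiadm prints e.g. /dev/lofi/1
lofiadm -a /export/iso/sol-nv-b90-x86-dvd.iso
mount -F hsfs -o ro /dev/lofi/1 /mnt

# Then run the installer mentioned above to update the LU packages
/mnt/Solaris_11/Tools/Installers/liveupgrade20
```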

Re: Live upgrade Sol 10 > Solaris Express

Postby rb » Tue, 17 Jun 2008 11:22:57 GMT

Oscar del Rio < XXXX@XXXXX.COM > writes:




Thank you for this instruction.
-- 

Similar Threads:

1. Live Upgrade broken for Solaris 9 to Solaris 10 upgrade

Has anyone else run into this problem?  I'm using Live Upgrade to upgrade
a Solaris 9 server to Solaris 10.  I created a boot environment on a
separate disk, and then upgraded it to Solaris 10 with `luupgrade -u'.
Now when I go to use `luupgrade -t' to apply the latest Solaris 10
patches to it, I get this...

  Validating the contents of the media </var/tmp/patches>.
  The media contains 220 software patches that can be added.
  All 220 patches will be added because you did not specify any specific patches to add.
  Mounting the BE <s10lu>.
  ERROR: The boot environment <s10lu> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.

Is there a way to make this work?  The new BE can't possibly contain a
non-global zone.

-- 
-Gary Mills-    -Unix Support-    -U of M Academic Computing and Networking-
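One hedged, untested workaround for the error above: since the Solaris 9 luupgrade refuses to touch a zones-capable Solaris 10 BE, apply the patches against the mounted BE directly with patchadd's alternate-root option (or boot the new BE first and patch from there). The patch_order list file is the one shipped with Sun patch clusters; adjust to your patch set:

```shell
# Mount the upgraded BE; the default mount point is /.alt.<BE-name>
lumount s10lu

# Apply the patch set against the alternate root; see patchadd(1M)
# for caveats about patching an alternate boot environment.
patchadd -R /.alt.s10lu -M /var/tmp/patches patch_order

luumount s10lu
```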

2. Live Upgrade fails during upgrade from Solaris 10 U7 to U8

Hello,

I have a problem updating Solaris 10 x86 from U7 to U8 with Live Upgrade.

ZFS root mirrored on 2 disks, no zones, no separate /var.
Should be an easy job for live upgrade.

Yes, liveupgrade20 has been applied from the lofi mounted U8.
Yes, 121431-44, the Live Upgrade patch, is installed.

luupgrade fails with:

  ERROR: Installation of the packages from this media of the media failed;
  pfinstall returned these diagnostics:
  Processing profile
  Loading local environment and services

Why does lucreate propagate /boot/grub/menu.lst?
It's a dummy; the real menu.lst is in /rpool/boot/grub.

Here are the details:


# lucreate -n s10u8
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <s10u7> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10u8>.
Source boot environment is <s10u7>.
Creating boot environment <s10u8>.
Cloning file systems from boot environment <s10u7> to create boot environment <s10u8>.
Creating snapshot for <rpool/ROOT/s10u7> on <rpool/ROOT/s10u7@s10u8>.
Creating clone for <rpool/ROOT/s10u7@s10u8> on <rpool/ROOT/s10u8>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/s10u8>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u8> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <s10u8> in GRUB menu
Population of boot environment <s10u8> successful.
Creation of boot environment <s10u8> successful.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10u7                      yes      yes    yes       no     -         
s10u8                      yes      no     no        yes    -         


# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   17.6G   115G  38.5K  /rpool
rpool/ROOT              8.55G   115G    18K  legacy
rpool/ROOT/s10u7        8.41G   115G  8.23G  /
rpool/ROOT/s10u7@s10u8   187M      -  8.15G  -
rpool/ROOT/s10u8         140M   115G  8.21G  /
rpool/dump              2.00G   115G  2.00G  -
rpool/export            3.07G   115G    19K  /export
rpool/export/local      3.07G   115G  3.07G  /export/local
rpool/swap                 4G   119G    16K  -


# luupgrade -u -n s10u8 -s /mnt
System has findroot enabled GRUB
No entry for BE <s10u8> in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
63093 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u8>.
Checking for GRUB menu on ABE <s10u8>.
Saving GRUB menu on ABE <s10u8>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <s10u8>.
Performing the operating system upgrade of the BE <s10u8>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
ERROR: Installation of the packages from this media of the media failed;
 pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Restoring GRUB menu on ABE <s10u8>.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
The Solaris upgrade of the boot environment <s10u8> failed.
Installing failsafe
Failsafe install is complete.


Cheers,
Michael.

3. Solaris 10 vs Solaris 9 Live Upgrade with RAID-1 mirrors

I have been learning about Live Upgrade in a lab environment for the
last month. I managed to upgrade 3 Solaris 9 systems to Solaris 10. The
setup involves a pair of RAID-1 drives that I split using LU to detach
one slice of the current mirror, and create a new mirror. The command
that I have used is as follows:

(the disk mirror is d10, and the submirrors are d0, and d1)...

# lucreate -n Solaris_10 -m /:/dev/md/dsk/d11:ufs,mirror \
> -m /:d0:detach,attach,preserve
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <d0> expands to device path </dev/md/dsk/d0>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices....

Now I have a system on Solaris 10 with the same setup, yet when I try
the same thing it states that mirror d11 does not exist, which is true,
but as you can see above, Solaris 9 allowed it. I found a post on this
board (S10 LU with SVM (RAID-1) volumes) that described this same
problem, but not much as to a solution. Below is the attempt on Solaris
10, along with the mirroring information.

I have looked through the Installations guides for Solaris 10 and it
sure seems this should work.  Am I missing something obvious here? Any
help would be greatly appreciated!

Thanks,

Mike Jacobs

bash-3.00# lucreate -n Solaris_10_b -m /:/dev/md/dsk/d11:ufs,mirror -m
/:d1:detach,attach,preserve
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <d1> expands to device path </dev/md/dsk/d1>
ERROR: device </dev/md/dsk/d11> does not exist
ERROR: device </dev/md/dsk/d11> is not available for use with mount
point </>
ERROR: cannot create new boot environment using file systems as
configured
ERROR: please review all file system configuration options
ERROR: cannot create new boot environment using options provided

Here is a metastat, metadb, and lustatus:
bash-3.00# metastat
d10: Mirror
    Submirror 0: d0
      State: Okay
    Submirror 1: d1
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 30721626 blocks (14 GB)

d0: Submirror of d10
    State: Okay
    Size: 30721626 blocks (14 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t0d0s0          0     No            Okay   Yes


d1: Submirror of d10
    State: Okay
    Size: 30724515 blocks (14 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t1d0s0          0     No            Okay   Yes


Device Relocation Information:
Device   Reloc  Device ID
c1t1d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA6E7A900007418MSAA
c1t0d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA6E4G300007418M6NS

bash-3.00# metadb
        flags           first blk       block count
     a m  p  luo        16              8192
/dev/dsk/c1t0d0s6
     a    p  luo        16              8192
/dev/dsk/c1t0d0s7
     a    p  luo        16              8192
/dev/dsk/c1t1d0s6
     a    p  luo        16              8192
/dev/dsk/c1t1d0s7

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Solaris_10_a               yes      yes    yes       no     -
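One hedged, untested workaround sketch: do by hand what the Solaris 9 `detach,attach,preserve` syntax did automatically, so that the metadevice lucreate complains about actually exists before lucreate runs. Verify against the Solaris 10 Installation Guide before trying this on a production box:

```shell
# Break the mirror manually: d1 leaves d10, then becomes the sole
# submirror of the new one-way mirror d11 that lucreate asks for
metadetach d10 d1
metainit d11 -m d1

# Create the BE on the now-existing metadevice (no preserve here,
# so lucreate copies the file systems instead of reusing d1's data)
lucreate -n Solaris_10_b -m /:/dev/md/dsk/d11:ufs
```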

4. Solaris 10 live upgrade from Solaris 8

Hi All,

I am having problems upgrading Solaris 8 to Solaris 10 on both of my
servers: a Sun Blade 2000 and a Sun Fire V440.
Both upgrades are failing the same way:

After a successful upgrade, at the reboot into the Solaris 10 BE the
network/inetd-upgrade service fails to start, and after it all other
dependent services fail: telnet, ftp, X...

If I disable network/inetd-upgrade and reboot the system(s), the result
is the same...

Any suggestion would be greatly appreciated..

Thanks,

Ned


5. SSH'ing between Sol 8 -> Sol 10 hosts

Hi,
I have two hosts: a Solaris 8 host running SSH v1.2.30, trying to connect
to a Solaris 10 (build 72) host running the stock standard version of SSH.
When I attempt to connect to the Solaris 10 host I get the following
error, even after I've uncommented the "Protocol 2,1" line in
/etc/ssh/sshd_config and restarted sshd.

<Solaris 8 host># ssh -v <solaris 10 host>
SSH Version 1.2.30, protocol version 1.5.
Standard version.  Does not use RSAREF.
tlons200: Reading configuration data /etc/ssh_config
tlons200: ssh_connect: getuid 0 geteuid 0 anon 0
tlons200: Connecting to test [10.192.248.100] port 22.
tlons200: Allocated local port 1022.
tlons200: Connection established.
tlons200: Remote protocol version 2.0, remote software version
Sun_SSH_1.1
tlons200: Waiting for server public key.
Local: Bad packet length 1349676916.
Any ideas why?

Thanks in advance for any responses.
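A hedged server-side checklist for this one: Solaris 10 runs sshd under SMF, and Sun SSH will not offer protocol 1 without an SSHv1 (RSA1) host key, which a default install does not generate. The "Remote protocol version 2.0" line in the trace means the server is still offering v2 only; with both protocols enabled it should announce 1.99:

```shell
# On the Solaris 10 host: confirm protocol 1 is really enabled
grep '^Protocol' /etc/ssh/sshd_config      # expect: Protocol 2,1

# Sun SSH needs an RSA1 host key to serve protocol 1; create it if absent
ls /etc/ssh/ssh_host_key 2>/dev/null || \
    ssh-keygen -t rsa1 -f /etc/ssh/ssh_host_key -N ''

# Restart sshd through SMF, not by hand
svcadm restart svc:/network/ssh:default

# Check the banner; it should now read protocol 1.99 (both versions)
telnet <solaris 10 host> 22
```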

6. Unable to upgrade to Sol Express B54 on SB1000

7. Question on upgrade from Sol 10 to 10 1/06

8. zfs after live upgrade to solaris 10(11.06)
