Repost: Solaris Live Upgrade: questions about /var/sadm

Repost: Solaris Live Upgrade: questions about /var/sadm

Postby G Dahler » Sat, 05 Feb 2005 00:25:00 GMT

Sorry for the repost; it seems my news reader misbehaved (ah, MSFT!)

According to the Solaris FAQ:

--------8<-----------------

|3.40) Why does installing patches take so much space in /var/sadm?

    All the files that are replaced by a patch are stored under
    /var/sadm/patch/<patch-id>/save so the patch can be safely
    backed out.  Newer patches will save the old files
    under /var/sadm/pkg/<pkg>/save/<patch-id>/undo.Z, for each package
    the patch patches.

    You can remove the <patchdir>/save directory provided you also
    remove the <patchdir>/.oldfilessaved file.  Newer patches will not
    install a .oldfilessaved file.

    Alternatively, you can install a patch w/o saving the old
    files by using the "-d" flag to installpatch.

|3.41) Do I need to back out previous versions of a patch?

    No, unless otherwise stated in the patch README.
    If the previous patch installation saved the old
    files, you may want to reclaim that space.

    Patches can be backed out with (Solaris 2.6+):
     patchrm <patch-id>

    or in earlier releases:

     /var/sadm/patch/<patch-id>/backoutpatch <patch-id>

    Backoutpatch can take an awful long time, especially when the
    patch contained a lot of files.  This is fixed in later versions
    of backoutpatch.

---------------->8---------------------------
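To see which style a particular patch used, you can simply look for its save
data in both of the places the FAQ mentions (a sketch; patch ID 123456-01 and
package SUNWcsu are placeholders, not real values):

  ls /var/sadm/patch/123456-01/save               # old style: copies of the replaced files
  ls /var/sadm/pkg/SUNWcsu/save/123456-01/undo.Z  # new style: compressed undo archive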

Prior to doing the live upgrade, I would like to minimize the size of my root
partition (which contains /var in my case). Can I safely remove all patches
(not just the save directories) from /var/sadm/patch, or will live upgrade do
that automatically?

I presume that after a Solaris upgrade, the list of installed patches on the
system will be empty?

One last point: does "backing out a patch" mean going back to the state BEFORE
the patch was applied? I don't understand paragraph 3.41. What does it mean? It
seems to say that whenever you apply a new patch, you should first go back to
the previous version.

Ex: Original package = blabla-01

Apply patch blabla-02 (blabla-01 saved)

You now want to apply patch blabla-03.

Does question 3.41 ask: "if you need to install patch blabla-03, should you
first back out to blabla-01?"

If so, what do you do if the README tells you that you have to, but you
REMOVED the save directory?

Thanks !



Re: Repost: Solaris Live Upgrade: questions about /var/sadm

Postby Michael Tosch » Sat, 05 Feb 2005 02:47:51 GMT

Dahler wrote:

> Can I safely remove all patches (not just the save directories) from
> /var/sadm/patch, or will live upgrade do that automatically?

You can empty the save directories:
rm -r /var/sadm/pkg/*/save/*

This saves a lot of space (check with du -sk /var/sadm/pkg/*/save); afterwards
you cannot back out any patch (each patch still occupies a few kilobytes in
/var/sadm). Everything else is not worth the effort. Only if you are paranoid
(check with du -sk /var/sadm/patch) should you also do
rm -r /var/sadm/patch/*

You don't want to back out (patchrm) patches on your fallback OS, because
1. it would take far too long, and 2. you would turn it into an unstable OS.
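Put together, a minimal cleanup pass might look like the following sketch. It
only combines the du and rm commands above on the standard /var/sadm locations;
check the du output first, and keep in mind that the removal is final, since
those patches can no longer be backed out afterwards.

  # how much space does the patch backout data use?
  du -sk /var/sadm/pkg/*/save | sort -n | tail
  du -sk /var/sadm/patch
  # reclaim the space (patchrm will no longer work for these patches)
  rm -r /var/sadm/pkg/*/save/*
  rm -r /var/sadm/patch/*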


> I presume that after a Solaris upgrade, the list of installed patches on the
> system will be empty?

It will be small, not empty: showrev -p will also list integrated patches
(where each patch occupies a few hundred bytes in /var/sadm).
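If you want to see that list for yourself, a quick check along these lines
should do (a sketch; showrev -p prints one "Patch: <id> ..." line per installed
patch):

  showrev -p | wc -l                            # number of patch entries
  showrev -p | awk '{print $2}' | sort | head   # the first few patch IDs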


> Does "backing out a patch" mean going back to the state BEFORE the patch was
> applied?

Yes.


> Does question 3.41 ask: "if you need to install patch blabla-03, should you
> first back out to blabla-01?"

This is the question in 3.41, and the answer in 3.41 is: no.


> If so, what do you do if the README tells you that you have to, but you
> REMOVED the save directory?

No!


I have seen a few special patches called "point patches" and "T-patches". Point
patches I have not seen for some years now. T-patches are sent to you for
testing if you have opened a bug report with Sun. Neither ever appears in the
Sun patch clusters.

One piece of advice from experience: when you install a brand-new patch
cluster, install it with the backout option (the default). In case there is a
bad patch, you then have the possibility to back it out. When your system has
run stably for some time, or just before installing the next patch cluster, you
may empty the /var/sadm/pkg/*/save directories.
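For a single patch, that choice looks roughly like this (a sketch; 123456-01
and the spool path are placeholders only):

  patchadd /var/spool/patch/123456-01      # default: backout data is saved under /var/sadm
  patchadd -d /var/spool/patch/123456-01   # -d: save no backout data; the patch cannot be removed later
  patchrm 123456-01                        # back the patch out (needs the saved backout data)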

--
Michael Tosch @ hp : com


Similar Threads:

1. Solaris Live upgrade: questions about /var/sadm/patch



2. Live Upgrade broken for Solaris 9 to Solaris 10 upgrade

Has anyone else run into this problem?  I'm using Live Upgrade to upgrade
a Solaris 9 server to Solaris 10.  I created a boot environment on a
separate disk, and then upgraded it to Solaris 10 with `luupgrade -u'.
Now when I go to use `luupgrade -t' to apply the latest Solaris 10
patches to it, I get this...

  Validating the contents of the media </var/tmp/patches>.
  The media contains 220 software patches that can be added.
  All 220 patches will be added because you did not specify any specific patches to add.
  Mounting the BE <s10lu>.
  ERROR: The boot environment <s10lu> supports non-global zones.The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.

Is there a way to make this work?  The new BE can't possibly contain a
non-global zone.

-- 
-Gary Mills-    -Unix Support-    -U of M Academic Computing and Networking-

3. Solaris 8 upgrade using Live Upgrade method

I want to get some feedback on the Live Upgrade of Solaris 8 (2/04). How was
the upgrade? Was it reasonably problem-free? Did Veritas cause any problems
with the upgrade?


4. Live Upgrade fails during upgrade from Solaris 10 U7 to U8

Hello,

I have a problem updating Solaris 10 x86 from U7 to U8 with Live Upgrade.

ZFS root mirrored on 2 disks, no zones, no separate /var.
Should be an easy job for live upgrade.

Yes, liveupgrade20 has been applied from the lofi mounted U8.
Yes, 121431-44, the Live Upgrade Patch is installed.

luupgrade fails with:

  ERROR: Installation of the packages from this media of the media failed;
  pfinstall returned these diagnostics:
  Processing profile
  Loading local environment and services

Why does lucreate propagate /boot/grub/menu.lst?
It's a dummy; the real menu.lst is in /rpool/boot/grub.

Here are the details:


# lucreate -n s10u8
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <s10u7> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10u8>.
Source boot environment is <s10u7>.
Creating boot environment <s10u8>.
Cloning file systems from boot environment <s10u7> to create boot environment <s10u8>.
Creating snapshot for <rpool/ROOT/s10u7> on <rpool/ROOT/s10u7@s10u8>.
Creating clone for <rpool/ROOT/s10u7@s10u8> on <rpool/ROOT/s10u8>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/s10u8>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u8> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <s10u8> in GRUB menu
Population of boot environment <s10u8> successful.
Creation of boot environment <s10u8> successful.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10u7                      yes      yes    yes       no     -         
s10u8                      yes      no     no        yes    -         


# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   17.6G   115G  38.5K  /rpool
rpool/ROOT              8.55G   115G    18K  legacy
rpool/ROOT/s10u7        8.41G   115G  8.23G  /
rpool/ROOT/s10u7@s10u8   187M      -  8.15G  -
rpool/ROOT/s10u8         140M   115G  8.21G  /
rpool/dump              2.00G   115G  2.00G  -
rpool/export            3.07G   115G    19K  /export
rpool/export/local      3.07G   115G  3.07G  /export/local
rpool/swap                 4G   119G    16K  -


# luupgrade -u -n s10u8 -s /mnt
System has findroot enabled GRUB
No entry for BE <s10u8> in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
63093 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u8>.
Checking for GRUB menu on ABE <s10u8>.
Saving GRUB menu on ABE <s10u8>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <s10u8>.
Performing the operating system upgrade of the BE <s10u8>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
ERROR: Installation of the packages from this media of the media failed;
 pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Restoring GRUB menu on ABE <s10u8>.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
The Solaris upgrade of the boot environment <s10u8> failed.
Installing failsafe
Failsafe install is complete.


Cheers,
Michael.

5. /var/sadm/pkg is very huge

Hello.

I noticed that my /var/sadm/pkg is rather big; it's currently about 1.7 GB.
Can I delete the no longer required "save" directories from there (like
/var/sadm/pkg/SUNWzfsu/save)?

Thanks,

Alexander Skwar

6. Cleaning Up /var/sadm

7. /var/sadm/pkg directory Q

8. Solaris 10 vs Solaris 9 Live Upgrade with RAID-1 mirrors


