Similar Threads:
1.Solaris Live upgrade: questions about /var/sadm/patch
According to the Solaris FAQ:

--------8<-----------------
3.40) Why does installing patches take so much space in /var/sadm?

All the files that are replaced by a patch are stored under
/var/sadm/patch/<patch-id>/save so the patch can be safely backed out.
Newer patches save the old files under
/var/sadm/pkg/<pkg>/save/<patch-id>/undo.Z, one per patched package.
You can remove the <patchdir>/save directory provided you also remove
the <patchdir>/.oldfilessaved file. Newer patches do not install a
.oldfilessaved file. Alternatively, you can install a patch without
saving the old files by using the "-d" flag to installpatch.

3.41) Do I need to back out previous versions of a patch?

No, unless otherwise stated in the patch README. If the previous patch
installation saved the old files, you may want to reclaim that space.
Patches can be backed out with (Solaris 2.6+):

    patchrm <patch-id>

or in earlier releases:

    /var/sadm/patch/<patch-id>/backoutpatch <patch-id>

backoutpatch can take an awfully long time, especially when the patch
contains a lot of files. This is fixed in later versions of backoutpatch.
---------------->8---------------------------

Prior to doing the live upgrade, I would like to minimize the size of my
root partition (which contains /var in my case). Can I safely remove all
patches (not just the save directories) from /var/sadm/patch, or will
Live Upgrade do that automatically? I presume that after a Solaris
upgrade, the list of patches installed on the system will be empty?

One last point: does "backing out a patch" mean going back to the state
BEFORE the patch was applied? I don't understand paragraph 3.41. It seems
to say that whenever you apply a new patch, you should first go back to
the previous version.

Example: the original package is blabla-01. You apply patch blabla-02
(blabla-01 is saved). Now you want to apply patch blabla-03. Is question
3.41 asking: "to install patch blabla-03, should you first back out to
blabla-01?" If so, what do you do if the README tells you that you must,
but you have REMOVED the save directory?

Thanks!
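For what it's worth, the FAQ's space-reclaiming step for old-style patches can be sketched roughly as below. PATCHROOT is a variable introduced here for illustration; on a real system it would be /var/sadm/patch. Note that removing the save directories makes those patches unbackoutable.

```shell
#!/bin/sh
# Sketch of the FAQ's cleanup for old-style patches: remove each
# <patch-id>/save directory together with its .oldfilessaved marker.
# PATCHROOT defaults to /var/sadm/patch; override it to try this
# against a test tree first.
PATCHROOT=${PATCHROOT:-/var/sadm/patch}
for dir in "$PATCHROOT"/*; do
    [ -d "$dir/save" ] || continue
    rm -rf "$dir/save"
    rm -f "$dir/.oldfilessaved"
done
```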
2.Live Upgrade broken for Solaris 9 to Solaris 10 upgrade
Has anyone else run into this problem? I'm using Live Upgrade to upgrade
a Solaris 9 server to Solaris 10. I created a boot environment on a
separate disk, and then upgraded it to Solaris 10 with `luupgrade -u'.
Now when I go to use `luupgrade -t' to apply the latest Solaris 10
patches to it, I get this...
Validating the contents of the media </var/tmp/patches>.
The media contains 220 software patches that can be added.
All 220 patches will be added because you did not specify any specific patches to add.
Mounting the BE <s10lu>.
ERROR: The boot environment <s10lu> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.
Is there a way to make this work? The new BE can't possibly contain a
non-global zone.
--
-Gary Mills- -Unix Support- -U of M Academic Computing and Networking-
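One sanity check worth trying (a sketch, not a verified fix): mount the ABE and look at its zone index to see whether any non-global zone is configured there. The mount point /a is an assumption, and this relies on the colon-separated zonename:state:zonepath format of /etc/zones/index.

```shell
# Sketch: after `lumount s10lu /a`, list non-global zones the ABE
# knows about. INDEX defaults to an assumed mount point; the index
# format is zonename:state:zonepath, with '#' comment lines.
INDEX=${INDEX:-/a/etc/zones/index}
awk -F: '!/^#/ && NF && $1 != "global" { print $1 " (" $2 ")" }' "$INDEX"
```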
3.Solaris 8 upgrade using Live upgrade method
I'd like some feedback on the Live Upgrade to Solaris 8 (2/04). How did
the upgrade go? Was it reasonably problem-free? Did Veritas cause any
problems with the upgrade?
4.Live Upgrade fails during upgrade from Solaris 10 U7 to U8
Hello,
I have a problem updating Solaris 10 x86 from U7 to U8 with Live Upgrade.
ZFS root mirrored on 2 disks, no zones, no separate /var.
Should be an easy job for live upgrade.
Yes, liveupgrade20 has been applied from the lofi mounted U8.
Yes, 121431-44, the Live Upgrade Patch is installed.
luupgrade fails with:
ERROR: Installation of the packages from this media of the media failed;
pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Why does lucreate propagate /boot/grub/menu.lst?
It's a dummy, the real menu.lst is on /rpool/boot/grub.
Here are the details:
# lucreate -n s10u8
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <s10u7> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10u8>.
Source boot environment is <s10u7>.
Creating boot environment <s10u8>.
Cloning file systems from boot environment <s10u7> to create boot environment <s10u8>.
Creating snapshot for <rpool/ROOT/s10u7> on <rpool/ROOT/s10u7@s10u8>.
Creating clone for <rpool/ROOT/s10u7@s10u8> on <rpool/ROOT/s10u8>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/s10u8>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u8> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <s10u8> in GRUB menu
Population of boot environment <s10u8> successful.
Creation of boot environment <s10u8> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u7                      yes      yes    yes       no     -
s10u8                      yes      no     no        yes    -
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   17.6G   115G  38.5K  /rpool
rpool/ROOT              8.55G   115G    18K  legacy
rpool/ROOT/s10u7        8.41G   115G  8.23G  /
rpool/ROOT/s10u7@s10u8   187M      -  8.15G  -
rpool/ROOT/s10u8         140M   115G  8.21G  /
rpool/dump              2.00G   115G  2.00G  -
rpool/export            3.07G   115G    19K  /export
rpool/export/local      3.07G   115G  3.07G  /export/local
rpool/swap                 4G   119G    16K  -
# luupgrade -u -n s10u8 -s /mnt
System has findroot enabled GRUB
No entry for BE <s10u8> in GRUB menu
Uncompressing miniroot
Copying failsafe kernel from media.
63093 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u8>.
Checking for GRUB menu on ABE <s10u8>.
Saving GRUB menu on ABE <s10u8>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <s10u8>.
Performing the operating system upgrade of the BE <s10u8>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
ERROR: Installation of the packages from this media of the media failed;
pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Restoring GRUB menu on ABE <s10u8>.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Configuring failsafe for system.
Failsafe configuration is complete.
The Solaris upgrade of the boot environment <s10u8> failed.
Installing failsafe
Failsafe install is complete.
Cheers,
Michael.
5./var/sadm/pkg is very huge
Hello.
I noticed that my /var/sadm/pkg is rather big: currently about 1.7 GB.
Can I delete the no-longer-needed "save" directories from there (such as
/var/sadm/pkg/SUNWzfsu/save)?
Thanks,
Alexander Skwar
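Before deleting anything, it may help to see which packages' save directories actually hold the space. A quick sketch (PKGROOT is introduced here for illustration; on a real system it would be /var/sadm/pkg):

```shell
#!/bin/sh
# Sketch: show the largest per-package save directories under
# /var/sadm/pkg. du -sk prints sizes in KB; sort -rn puts the
# biggest first. PKGROOT is overridable for testing.
PKGROOT=${PKGROOT:-/var/sadm/pkg}
du -sk "$PKGROOT"/*/save 2>/dev/null | sort -rn | head -10
```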
6. Cleaning Up /var/sadm
7. /var/sadm/pkg directory Q
8. Solaris 10 vs Solaris 9 Live Upgrade with RAID-1 mirrors