Good practices with NFS sharing of binaries


Postby Rick Denoire » Tue, 27 Jul 2004 04:39:29 GMT

Hello

In order to avoid having to install the same software on different
servers, I just install any "third party" program (not available as a
Solaris package) from source into /usr/local and share this directory
via NFS to all other servers, which then use the automounter. Do you
think that this was a wise decision? Is this the usual place to do it?

But I am not sure that I won't run into problems later. Do all packages
which install from source put their files beneath the /usr/local tree?
What if they use some server-dependent configuration files? Would they
be overwritten?

In short, what are "best practices" when setting up a shared directory
for executable binaries?

Well, other related questions would be:
If I want to install (make; make install) a different version of a
program from source which is already installed as a Solaris package,
what issues arise with the PATH variable? (Because depending on it,
the packaged version or the /usr/local version will be used).

How can I keep control of this "wild" software installed from source?
I mean: mutual dependencies or dependencies on libraries,
removing/updating etc. Is there a package that eases the
administrative tasks of managing software installed from source?

What happens when a server which mounts /usr/local is updated
completely (to a new Solaris version, for example)? Will the binaries
mounted in /usr/local still work?

If I install a package TRALALA from source, and if it goes to
/usr/local, then I get /usr/local/TRALALA as its main directory. So I
would have to include the path to the corresponding binary
/usr/local/TRALALA/tralala.bin in the PATH variable. But that yields a
very long and tedious PATH variable with time, as more and more
packages are installed this way. What is the best way to cope with
this? (Ideally, the package should put the main binary in
/usr/local/bin, but many don't do that).

OK, you might not comment on so many questions, just give me a pointer
to some document on the Internet.

(Using several Enterprise servers and Ultra workstations, with Solaris
7 on the servers and Solaris 8 on the workstations.)

Thanks a lot

Rick Denoire


Re: Good practices with NFS sharing of binaries

Postby gerg » Tue, 27 Jul 2004 08:22:33 GMT

 XXXX@XXXXX.COM  writes:

Now "all other servers" are dependent on this one server.  If this
one server crashes, the others will not have access to the programs
you installed on /usr/local.

This is usually a bad idea.  The reason you have separate servers
in the first place is to avoid the situation where a single server
failure affects all services on the network.

  -Greg
-- 
Do NOT reply via e-mail.
Reply in the newsgroup.

Re: Good practices with NFS sharing of binaries

Postby aryzhov » Tue, 27 Jul 2004 15:58:53 GMT

Rick Denoire < XXXX@XXXXX.COM > wrote in message news:< XXXX@XXXXX.COM >...

Sharing the binary packages via the automounter is quite a common practice.
In addition to space and maintenance savings, the automounter gives you the
flexibility to mount release- and architecture-specific binary trees,
using automounter variables.

However, *building* the binaries on the production NFS server is IMHO
not a very good idea. In a clean environment, separate development and test
machines should be used to build Solaris packages from the source
distribution, and those packages then installed on the NFS/autofs server
into the appropriate subtree.


If *YOU* build the packages from source, you can usually control
where the stuff goes. If you download the package, say, from blastwave or
sunfreeware, then you can look into the package and figure out whether
there are any files that go outside the main destination subtree.
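
For instance, with a datastream package downloaded from one of those sites
(SMCvim and vim.pkg are made-up names here), you can list what it would
install before adding it:

  # unpack the datastream into filesystem form, then list its pathnames
  mkdir /var/tmp/unpacked
  pkgtrans /var/tmp/vim.pkg /var/tmp/unpacked all
  pkgchk -l -d /var/tmp/unpacked SMCvim | grep Pathname

  # or simply read the package's pkgmap
  more /var/tmp/unpacked/SMCvim/pkgmap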


If your production environment deploys only one release on one hardware
architecture, then /usr/local may be fine. Some sites maintain a structure
like
/export/app/$HW_ARCH/$OSNAME/$OSREL/local
on the NFS server and automount the corresponding subtree to /usr/local on
the relevant machines.
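
As a rough sketch of that setup (the server name nfshost and the export
path are made up; CPU, OSNAME and OSREL are variables the Solaris
automounter expands by itself), the maps could look like:

  # /etc/auto_master
  /-   /etc/auto_direct

  # /etc/auto_direct -- one line serves every architecture and OS release,
  # because automountd substitutes $CPU, $OSNAME and $OSREL per client
  /usr/local   -ro   nfshost:/export/app/$CPU/$OSNAME/$OSREL/local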


PATH is usually set by the user's .profile, and/or wrappers that
start the applications. So it's your decision which version to pick.
If you are installing from source the same SW that is available
as a SUNW package, why not remove the SUNW package altogether?
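
If you do keep both versions around, putting /usr/local/bin first in the
users' .profile is enough to make the local build win (just a sketch):

  # ~/.profile -- /usr/local/bin ahead of the system directories, so a
  # /usr/local build shadows a SUNW-packaged binary of the same name
  PATH=/usr/local/bin:/usr/bin:/usr/sbin
  export PATH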


As I said, the best way is to keep everything in the form of packages.
If you don't have a development machine for each architecture and
OSREL, build a clean chrooted environment on an existing machine,
and do the "make install" and package build there, instead of polluting
the production root.
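
A rough sketch of that flow, using a DESTDIR staging area instead of a
full chroot (the package name TRALALA, the pkginfo file and all paths are
illustrative, and the source's Makefile is assumed to honour DESTDIR):

  ./configure --prefix=/usr/local
  make
  make DESTDIR=/var/tmp/stage install           # nothing touches the live root

  pkgproto /var/tmp/stage/usr/local=/usr/local > prototype
  echo 'i pkginfo' >> prototype                 # pkginfo file written by hand
  mkdir /var/tmp/spool
  pkgmk -o -d /var/tmp/spool
  pkgtrans -s /var/tmp/spool /var/tmp/TRALALA.pkg TRALALA

The resulting datastream can then be installed with pkgadd into the shared
tree on the NFS server.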


Upward compatibility is usually there. If your NFS server has, say,
Solaris 2.6 binaries, they will usually work on all later Solaris releases.
Some shared library issues may arise, but if those binaries have been
built cleanly to use shared libs under the same subtree, then it should
be fine.

Downward compatibility doesn't exist in Solaris: binaries built on
later releases most likely won't work on earlier releases.


Well, keeping each package in its own directory only still sounds
like a very nice idea. It is implemented in some Linux distributions;
check the www.linux.org distributions list.

The simplest way is to keep only symlinks in /usr/local/bin, each symlink
pointing to an application wrapper (already inside the application's own
directory) that sets the necessary environment (including PATH) for that
application only.
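
A minimal wrapper along those lines (all TRALALA names and paths are
made-up examples):

  #!/bin/sh
  # e.g. /usr/local/TRALALA/bin/tralala.sh, with /usr/local/bin/tralala
  # being a symlink that points here
  PATH=/usr/local/TRALALA/bin:/usr/bin
  LD_LIBRARY_PATH=/usr/local/TRALALA/lib
  export PATH LD_LIBRARY_PATH
  exec /usr/local/TRALALA/bin/tralala.bin "$@"

That way the users' PATH stays short no matter how many packages accumulate.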

Regards,
Andrei

Re: Good practices with NFS sharing of binaries

Postby Rick Denoire » Wed, 28 Jul 2004 05:45:13 GMT





I think your idea goes against any modern understanding of server
consolidation. Nowadays almost *everything* is highly centralized and
dependent on one node. The idea is to concentrate all efforts on keeping
this one server running, which is by far easier than keeping a larger
number of them running. Not to mention the time burden a larger number
of servers brings.

Rick Denoire

Re: Good practices with NFS sharing of binaries

Postby gerg » Wed, 28 Jul 2004 08:59:39 GMT

 XXXX@XXXXX.COM  writes:



The original poster declared that he had multiple servers in
his network, and did not say he was planning any sort of
consolidation.  There's no reason to apply principles of
server consolidation to this question.

  -Greg
-- 
Do NOT reply via e-mail.
Reply in the newsgroup.

Re: Good practices with NFS sharing of binaries

Postby Juhan Leemet » Thu, 29 Jul 2004 03:31:48 GMT




Not how I read it. He said that he was sharing /usr/local by NFS, i.e.
that he had already done some consolidation. He specifically asked about
"setting up a shared directory for executable binaries" (consolidation).
If he's asking about consolidation why do you contradict him?

BTW, you can specify alternate servers for shared (r/o) NFS mounts. Then
if your primary crashes, the dependent machines will switch to an alternate.

-- 
Juhan Leemet
Logicognosis, Inc.


Re: Good practices with NFS sharing of binaries

Postby spamisevi1 » Fri, 30 Jul 2004 04:45:05 GMT



This is comp.unix.solaris, so given that Solaris 2.6 and later has NFS
client-side failover, there's no need to depend on one server. List multiple
servers in an automount entry and mount it read-only.
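
For example, a replicated read-only entry (server names are illustrative)
could look like:

  # direct map entry: both servers export the same tree, and the Solaris
  # client fails over between them on a read-only mount
  /usr/local   -ro   nfs1,nfs2:/export/local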

In terms of best practices, from time to time binaries need to
be updated. What you don't want to do is modify a binary that is currently
in active use by a process on a client. At best this will
trigger a core dump; at worst it will produce data corruption.

A way around this is to use symlinks. E.g. /usr/local/bin/vim
might be a symlink to v1/vim.

Now you want to update vim. Create v2/vim on the servers, remove
the vim link and re-link it to v2/vim.

Since v1/vim still exists, unchanged, the processes created via an
exec of vim before version 2 was installed will continue unperturbed.

Use the time of last access as a rough guide for when to remove v1/vim.
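
A sketch of that swap on the server side (all paths and names are
illustrative):

  # stage the new version next to the old one
  mkdir -p /export/local/bin/v2
  cp build/vim /export/local/bin/v2/vim

  # build the new link off to the side, then rename it into place;
  # the rename replaces the old link in one step as seen by clients
  ln -s v2/vim /export/local/bin/vim.new
  mv /export/local/bin/vim.new /export/local/bin/vim

  # v1/vim is untouched; check its access time later to judge when
  # it is safe to remove
  ls -lu /export/local/bin/v1/vim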

