tunneling io of a running process

Postby nir m » Wed, 06 Dec 2006 22:25:17 GMT

Hello,
I have a process running on a system, and it has a thread that takes
input from and prints output to the terminal.
However, it is not run from a terminal, so it is not connected to any
tty device.
What I want to achieve is to be able to somehow connect to this
process' stdin/stdout at some point and interact with it (after it has
been running for some time).
When I started thinking about it, my first thought was to use two
named pipes and select() calls with timeouts, and to interact with the
process through another process that opens those pipes.
It sort of works, but it has all kinds of problems, like deadlocks
between the two processes in some specific cases, and so on. I guess I
can solve those specific problems, but I'm sure there's a better way to
implement what I'm trying to do.
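
To make the pipe idea concrete, here is roughly the shape of my current
attempt (only a sketch; the FIFO paths /tmp/proc_in and /tmp/proc_out
are made up and would be created beforehand with mkfifo). Opening each
FIFO with O_RDWR is what stops open() from blocking until a peer shows
up, and stops read() from hitting end-of-file whenever the attaching
side disconnects, which is where most of my deadlocks were coming from:

/* IO thread of the long-running process: read commands from
 * /tmp/proc_in, write replies to /tmp/proc_out.  Paths are invented
 * for the example. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/select.h>

int main(void)
{
    /* O_RDWR on a FIFO is not strictly portable, but on Linux it means
     * this process is always a reader and a writer of its own pipes,
     * so open() never blocks waiting for a peer and read() never
     * returns EOF just because the attaching side went away. */
    int in  = open("/tmp/proc_in",  O_RDWR);
    int out = open("/tmp/proc_out", O_RDWR);
    if (in < 0 || out < 0) {
        perror("open fifo");
        return 1;
    }

    for (;;) {
        fd_set rfds;
        struct timeval tv = { 1, 0 };        /* 1-second timeout */

        FD_ZERO(&rfds);
        FD_SET(in, &rfds);

        int ready = select(in + 1, &rfds, NULL, NULL, &tv);
        if (ready < 0) {
            perror("select");
            break;
        }
        if (ready == 0)
            continue;                        /* timeout: do other work */

        char buf[512];
        ssize_t n = read(in, buf, sizeof(buf));
        if (n <= 0)
            continue;

        write(out, buf, (size_t)n);          /* echo back as placeholder output */
    }
    return 0;
}

The attaching side is then just a second small program (or even cat and
a shell redirect) that opens the same two FIFOs the other way around; a
real version would probably also want non-blocking writes so a slow or
absent reader can't stall the thread.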

I've heard of ptys and have even read the relevant chapter in Advanced
UNIX Programming, but I couldn't quite understand how to use them for
my purpose.

Could someone please outline how I can get this sort of thing to work
with ptys, or, alternatively, provide me with another idea besides the
two I've already mentioned?

Additionally, for some reason, I can't find any real references or
examples for ptys on the web, so if someone could point me to such a
reference, that would be great.
By the way, it can be assumed that the code of the running process can
be changed to fit any solution.
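
Just to illustrate what I mean by being able to change the code: the IO
thread could, for example, drop stdin/stdout entirely and instead
accept a connection on a Unix-domain socket, so that "attaching" later
is simply connecting to that socket (with a tiny client, socat, or a
netcat that supports Unix sockets). This is only a rough sketch, and
/tmp/myproc.sock is an invented path:

/* IO thread variant: listen on a Unix-domain socket and serve whoever
 * attaches; when the client disconnects, go back to waiting. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    if (srv < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/myproc.sock", sizeof(addr.sun_path) - 1);

    unlink(addr.sun_path);                   /* remove a stale socket file */
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(srv, 1) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int cli = accept(srv, NULL, NULL);   /* blocks until someone attaches */
        if (cli < 0)
            continue;

        char buf[512];
        ssize_t n;
        while ((n = read(cli, buf, sizeof(buf))) > 0) {
            /* handle the command; here it is just echoed back */
            write(cli, buf, (size_t)n);
        }
        close(cli);                          /* client detached; keep running */
    }
}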

Thank you,
Nir


Re: tunneling io of a running process

Postby tmstaedt » Sat, 09 Dec 2006 18:09:01 GMT

Well, I just googled and found
http://www.**--****.com/~ali/K0D/UNIX/PTY/ . There is some sample
code there.
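
For what it's worth, the usual pattern with ptys is a small wrapper: it
allocates a pseudo-terminal, starts the real program on the slave side
(so the program believes it is talking to a terminal), and relays bytes
between the pty master and whatever you want to attach. The sketch
below is only that general pattern, not the linked code: it relays to
the wrapper's own stdin/stdout, the way script(1) works, and
"./myserver" is just a placeholder for the real program. Detach/attach
tools such as screen or dtach are built on the same mechanism.

/* Wrapper: run a program on the slave side of a pty and relay
 * everything between our stdin/stdout and the pty master. */
#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/wait.h>

int main(void)
{
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty setup");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                          /* child: becomes the real program */
        setsid();                            /* new session so the slave can
                                                become its controlling tty */
        int slave = open(ptsname(master), O_RDWR);
        if (slave < 0)
            _exit(1);
        dup2(slave, STDIN_FILENO);
        dup2(slave, STDOUT_FILENO);
        dup2(slave, STDERR_FILENO);
        close(slave);
        close(master);
        execl("./myserver", "myserver", (char *)NULL);   /* placeholder name */
        _exit(127);
    }

    /* parent: shuttle bytes between our own terminal and the pty master */
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);
        FD_SET(master, &rfds);

        if (select(master + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        char buf[512];
        ssize_t n;
        if (FD_ISSET(STDIN_FILENO, &rfds)) {
            if ((n = read(STDIN_FILENO, buf, sizeof(buf))) <= 0)
                break;
            write(master, buf, (size_t)n);
        }
        if (FD_ISSET(master, &rfds)) {
            if ((n = read(master, buf, sizeof(buf))) <= 0)
                break;                       /* program exited, pty closed */
            write(STDOUT_FILENO, buf, (size_t)n);
        }
    }
    waitpid(pid, NULL, 0);
    return 0;
}

Making it detachable is then a matter of pointing the relay at
something you can reconnect to (a Unix socket, a pair of FIFOs) instead
of the wrapper's own terminal, which is essentially what dtach does.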

tm






Similar Threads:

1. Run X client with ssh tunnel after su

Howdy, all.  I have been doing a bit of googling, but I'm having no
luck so far.

I have a server.  I ssh into the server.  This works fine.  I forward
the X connection.  So, I can start an X term, and it appears on the
desktop machine.  But, when I su, I can no longer run X programs.  I
get this instead :

dawn:/home/will# xterm
X11 connection rejected because of wrong authentication.
X connection to localhost:10.0 broken (explicit kill or server
shutdown).

Really, I want to be able to run synaptic, the Debian package manager,
remotely, rather than having to just use the command line or log into
the machine directly.

If anybody can help me straighten this out, I'd appreciate it!

Incidentally, I am running Debian on both the server and my main
workstation.  The server is an Alpha:

uname -a
Linux dawn 2.6.15-1-alpha-generic #1 Wed Feb 22 17:09:30 UTC 2006 alpha
GNU/Linux

The desktop is a PC, but I would like to be able to get this working
from any machine on my LAN, if possible.

2. Collect process IO stats (Linux newbie)

3. IO / disk usage - which process?

Hi Guys,

I'm trying to find a process that's hogging disk i/o.

Using iostat I can tell that the disk is heavily loaded. Does anyone
know how to narrow it down to a specific process?

Thanks
Warrick

4. Identifying processes that are heavy IO users

Hey,

I'm working on a script to identify IO usage on a high-IO server I
have set up (Debian Etch). My question is how I can identify the
specific processes that are using most of these resources. I can
identify the processes using iotop, but doing it remotely via a script
can be a pain, since I have to grep and awk through the entire output
in real time.

I also can look at the open file handles via:

lsof | awk '{print $2}' | sort | uniq -c | sort -k1 -g

But that doesn't give solid enough proof, as I'm looking for an
abusive user, not just for something that has X number of file handles
open.

Does anyone have a way to identify specific processes that are either
writing or reading excessively?

5. Limit the use of RAM and IO for a specific process

Do you know a way to limit the use of RAM and IO for a specific
process on Linux?
I have been googling but have found nothing.

6. IO per process usage

7. limit cpu/mem/io usage of a user/process

8. Waiting sleeping process & IO events


