How to compare System Performance and Disk IO Performance



  • 1. LPAR NICs after NIM install
    Hi all, after installing via NIM and rebooting I found that I have a console connection but no ssh. ifconfig showed interface en2 UP with the IP .188 of the source system, while smit showed en0 with the right IP .157 of the target system; checking the NICs showed en0, en1 and en2. The HMC showed two virtual Ethernets (2.20 and 2.19) for the LPAR, which are related to the two VIOs, each with one virtual Ethernet as well (VIO1 has 2.20 and VIO2 has 2.19). On my new client I assume that en0 and en1 are the virtual Ethernets that are joined in the EtherChannel interface en2. Questions: how do I 'close' en0 and en1 so that they cannot be manipulated? Why doesn't the NIM setup recognize and use en0 instead of en2? What is good practice here? Thanks, Alex
  • 2. backup on AIX 5.2 media for N3700 cifs
    Hi, we have AIX 5.2 running on a p520, and an N3700 with volumes created for Windows servers (CIFS). I am trying to use backup media on AIX 5.2 to take a backup of the CIFS partition. I am able to take the backup using NFS commands, but on restore I cannot retain the permissions granted on Windows 2003 for the CIFS partition. Kindly let me know how to retain the permissions on restore with the above setup. Thanks
  • 3. Rsync Rookie
    I am new to rsync and trying to set it up to sync two or three filesystems between two separate servers; can anyone assist me in how to get started?

Re: How to compare System Performance and Disk IO Performance

Postby Hajo Ehlers » Fri, 11 Apr 2008 17:24:33 GMT

Looks like you answered your first question yourself. The second
one is, at first approach, simply not related to the named hardware.

Or to put it in other words:
Disk I/O is determined by the disk type, number of disks, disk
configuration (RAID type), connection type (FC, SSA, SCSI, iSCSI, SAS),
connection speed, HBA, filesystem type and configuration
(JFS, JFS2, GPFS) and ..... the CPU.
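A crude first comparison along these lines is a timed sequential read with dd on both boxes while watching iostat/vmstat. A minimal sketch (the file size and path here are deliberately tiny demo values; for real numbers use a file larger than RAM, or caching dominates, and the last-line throughput report assumes GNU dd):

```shell
# Create a 16 MB test file (use something much larger than RAM for a
# real comparison, or repeat runs will mostly measure cache).
dd if=/dev/zero of=/tmp/dd_demo.bin bs=1M count=16 2>/dev/null

# Read it back; GNU dd reports elapsed time and throughput on stderr.
# On AIX, wrap the same read in timex instead:  timex dd if=... of=/dev/null
dd if=/tmp/dd_demo.bin of=/dev/null bs=1M 2>&1 | tail -n 1
```

Run the identical command on old and new system and compare the elapsed times, keeping the disk/RAID/HBA caveats above in mind.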


Re: How to compare System Performance and Disk IO Performance

Postby moonhkt » Sat, 12 Apr 2008 01:00:32 GMT

Thanks a lot

Similar Threads:

1. How to compare System Performance and Disk IO Performance

Hi AIX users,

How to compare System  Performance and Disk IO Performance ?

Old System : p5 9117-570, 4 x 1.65GHz
New System : p5 9117-MMA , 4 x 4.2GHz

We want to compare System Performance and Disk IO Performance.

I just have the multi-user performance figures (rPerf, SPEC CPU2006):
19.66 for Old System
38.76 for New System
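Taking those published figures at face value, the raw ratio is easy to compute, though rPerf is a CPU/commercial-workload estimate and by itself says nothing about disk IO:

```shell
# Ratio of the two rPerf figures quoted above.
awk 'BEGIN { printf "new/old rPerf ratio: %.2f\n", 38.76 / 19.66 }'
# prints: new/old rPerf ratio: 1.97
```

So on CPU capacity alone the new box is roughly 2x the old one; disk IO has to be measured separately.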


2. Analysing disk IO performance

I have a somewhat complex environment (single multi-core machine with
20+ processes handling 1000s of web users entering data for various
projects and 100s running reports on this data) the performance of
which I need to improve.

It seems that the current bottleneck is IO performance (I'm basing
that on vmstat, sar etc. output). I'd like to be able to generate some
kind of analysis of which processes read how much, and even which files.
There doesn't seem to be any kind of tool out there that can analyse
things so precisely (supposedly FreeBSD has something like this, but
I'm on RHEL 4). The results of this could help me decide how to move
forward (maybe some optimisation is still possible in the current
environment, maybe I need to partition the data or functionality to
allow for multiple machines sharing the load).

I'm thinking of building my own, which would basically be a LD_PRELOAD
taking note of open/read/write (so memory-mapped IO won't be tracked).
For each read/write I'd track the amount of data transferred and wall
clock time taken to do so. I suppose you might do it using strace too
but doing that severely degrades performance of the traced process.

At process exit or perhaps on receiving a signal, the data is dumped
and then an analysis tool can tell you that your process e.g. did 2345
reads and 3523 writes of file foo.db which took 45s.

Is there something like this already out there? I'll take any other
recommendations for system performance tuning too (I'm already using
e.g. oprofile and strace). I'll note that I have a hard time
reproducing exactly the same load mix in a test environment, so I'm
looking for a fairly non-intrusive profiler I can safely run in the
production environment.

3. Disk IO performance


I see some strange behaviour during a dd read test: subsequent runs of
the same dd command take noticeably different times.

This is a new Sun Solaris box; no users have connected yet. We are
testing the server.

The test file is 2 GB in size. Storage is an EMC array, RAID 10, 960 KB
stripe size, striped across 16 disks. The server has 1 HBA (Sun's own HBA).

The file system is UFS mounted with forcedirectio.

The first run of dd takes 1:01.31, reading 15.9 MB per second with an
asvc_t of 53.5.
The next run takes 39.97 sec, reading 84.3 MB per second with an asvc_t
of 10.1.

Why do the two tests differ? Which part could be the problem? The EMC?

Kind Regards,

 timex  dd if=test of=/dev/null bs=983040

2132+1 records in
2132+1 records out

real     1:01.31
user        0.00
sys         0.19

                   extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   20.0    0.0   15.9    0.0  0.0  1.1    0.0   53.5   0 100

timex  dd if=test of=/dev/null bs=983040

2132+1 records in
2132+1 records out

real       39.97
user        0.00
sys         0.19

                   extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  106.0    0.0   84.3    0.0  0.0  1.1    0.0   10.1   0  99
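For reference, the elapsed-time averages from the two timex runs above (2132 records x 983040 bytes, roughly 2 GB) can be computed directly; the momentary iostat figures need not match these averages exactly, and one common explanation for the second run being faster is read cache on the array, since forcedirectio bypasses the host page cache:

```shell
# MB transferred / elapsed seconds for each run shown above.
awk 'BEGIN {
    mb = 2132 * 983040 / 1048576      # ~1999 MB
    printf "run 1: %.1f MB/s\n", mb / 61.31
    printf "run 2: %.1f MB/s\n", mb / 39.97
}'
# prints: run 1: 32.6 MB/s
#         run 2: 50.0 MB/s
```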

4. Performance - High disk IO


Our monitoring system (using Big Brother) is showing one of our systems
(an E4500 running Solaris 8) experiencing high disk IO (T3 storage). The
comparison is done against old data/graphs.

As there are scheduled cron jobs running in the middle of the night, is there
a way to track which process is contributing to the high disk IO by using

Thanks in advance for any pointers. 

5. Disk IO and NIC receive and send performance data


I have installed collect on Tru64 UNIX 4.0D. It shows all the
system performance stats perfectly.
Is there any system call (like table()) that I can use to collect
performance data such as
disk read/write and network interface card send/receive data?


6. Direct io on block device has performance regression on 2.6.x kernel - pseudo disk driver

7. Low file-system performance for 2.6.11 compared to 2.4.26

8. Linux Kernel Markers - performance characterization with large IO load on large-ish system
