Similar Threads:
1. How to compare System Performance and Disk IO Performance
Hi AIX users,
How do I compare system performance and disk IO performance between these two machines?
Old system: p5 9117-570, 4 x 1.65 GHz
New system: p5 9117-MMA, 4 x 4.2 GHz
We want to compare system performance and disk IO performance, but all I have are the multi-user performance ratings (rPerf, SPEC CPU2006):
19.66 for the old system
38.76 for the new system
moonhk
GMT+8
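One way to get a like-for-like disk IO number, rather than inferring it from rPerf, is to run the same sequential-read timer against a test file of the same size on both boxes. A minimal sketch in C (the file path and the 1 MB block size are placeholders, not anything from the thread):

/* seqread.c - rough sequential-read throughput check; compile and run the
 * same binary on the old and new system. Use a test file larger than RAM,
 * or the filesystem cache will flatter the result. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "testfile";
    char *buf = malloc(1024 * 1024);
    struct timeval t0, t1;
    long long total = 0;
    ssize_t n;
    double secs;
    int fd;

    if (buf == NULL) { perror("malloc"); return 1; }
    fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return 1; }

    gettimeofday(&t0, NULL);
    while ((n = read(fd, buf, 1024 * 1024)) > 0)
        total += n;
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%lld bytes in %.2f s = %.1f MB/s\n",
           total, secs, total / secs / 1048576.0);
    close(fd);
    free(buf);
    return 0;
}

Running the same binary on both systems gives an MB/s figure that can be compared directly; the rPerf figures (19.66 vs 38.76) only describe relative CPU/commercial throughput, not the disk subsystem.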
2. Analysing disk IO performance
I have a somewhat complex environment (a single multi-core machine with 20+ processes handling thousands of web users entering data for various projects, and hundreds more running reports on that data) whose performance I need to improve.
The current bottleneck appears to be IO (I'm basing that on vmstat, sar etc. output). I'd like to generate some kind of analysis of which processes read how much, and ideally from which files. There doesn't seem to be any kind of tool out there that can analyse things so precisely (supposedly FreeBSD has something like this, but I'm on RHEL 4). The results would help me decide how to move forward (maybe some optimisation is still possible in the current environment, maybe I need to partition the data or functionality so that multiple machines can share the load).
I'm thinking of building my own, which would basically be an LD_PRELOAD shim taking note of open/read/write (so memory-mapped IO won't be tracked). For each read/write I'd track the amount of data transferred and the wall-clock time taken. I suppose you could do this with strace too, but that severely degrades the performance of the traced process.
At process exit, or perhaps on receiving a signal, the data is dumped, and an analysis tool can then tell you that your process did e.g. 2345 reads and 3523 writes of file foo.db, taking 45 s.
Is there something like this already out there? I'll take any other recommendations for system performance tuning too (I'm already using e.g. oprofile and strace). Note that I have a hard time reproducing exactly the same load mix in a test environment, so I'm looking for a fairly non-intrusive profiler I can safely run in the production environment.
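A minimal sketch of the LD_PRELOAD idea described above (my own illustration, not an existing tool): interpose read() and write() via dlsym(RTLD_NEXT, ...), accumulate byte counts and wall-clock time, and dump the totals at process exit. Per-file accounting would additionally need an open() wrapper to map file descriptors to names; pread/pwrite and mmap IO are not covered, and it is not thread-safe.

/* iotrace.c - count bytes and wall-clock time spent in read()/write().
 * Build: gcc -shared -fPIC -o iotrace.so iotrace.c -ldl
 * Use:   LD_PRELOAD=./iotrace.so ./yourprog   (totals printed at exit) */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static ssize_t (*real_read)(int, void *, size_t);
static ssize_t (*real_write)(int, const void *, size_t);
static unsigned long long rd_bytes, wr_bytes;
static double rd_secs, wr_secs;

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

ssize_t read(int fd, void *buf, size_t count)
{
    if (!real_read)
        real_read = (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");
    double t = now();
    ssize_t n = real_read(fd, buf, count);
    rd_secs += now() - t;
    if (n > 0) rd_bytes += n;
    return n;
}

ssize_t write(int fd, const void *buf, size_t count)
{
    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))dlsym(RTLD_NEXT, "write");
    double t = now();
    ssize_t n = real_write(fd, buf, count);
    wr_secs += now() - t;
    if (n > 0) wr_bytes += n;
    return n;
}

__attribute__((destructor))
static void dump(void)
{
    fprintf(stderr, "[iotrace] read  %llu bytes in %.2f s\n", rd_bytes, rd_secs);
    fprintf(stderr, "[iotrace] write %llu bytes in %.2f s\n", wr_bytes, wr_secs);
}

Extending this to per-file counts would mean wrapping open() as well and keeping a small table keyed by file descriptor.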
3. Disk IO performance
Hi,
I'm seeing strange behaviour during a dd read test: subsequent runs of the same dd command take noticeably different times.
This is a new Sun Solaris box; no users have connected yet and we are still testing the server.
The test file is 2 GB in size. Storage is an EMC array, RAID 10 with a 960 KB stripe size, striped across 16 disks. The server has one HBA (Sun's own HBA). The file system is UFS mounted with forcedirectio.
The first dd run takes 1:01.31, reading 15.9 MB/s with an asvc_t of 53.5 ms.
The next run takes 39.97 s, reading 84.3 MB/s with an asvc_t of 10.1 ms.
Why do the two runs differ? Which part could be the problem: the EMC array, the HBA?
Kind Regards,
hope
timex dd if=test of=/dev/null bs=983040
2132+1 records in
2132+1 records out
real 1:01.31
user 0.00
sys 0.19
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   20.0    0.0   15.9    0.0  0.0  1.1    0.0   53.5   0 100 c2t5006048C48585813d80
timex dd if=test of=/dev/null bs=983040
2132+1 records in
2132+1 records out
real 39.97
user 0.00
sys 0.19
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  106.0    0.0   84.3    0.0  0.0  1.1    0.0   10.1   0  99 c2t5006048C48585813d80
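One way to narrow this down (a sketch, not from the thread): re-read the same file several times from one program and time each pass. If only the first pass is slow and later passes are consistently fast, the difference is likely the EMC array's read cache warming up rather than the HBA or the host. The 960 KB block size below just mirrors the dd bs= used above; the pass count is arbitrary.

/* reread.c - time several passes over the same file to see whether only the
 * first pass is slow (array cache warm-up) or throughput varies on every run. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define BLK 983040            /* 960 KB, same as the dd bs= above */
#define PASSES 5

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "test";
    char *buf = malloc(BLK);
    int fd, pass;

    if (buf == NULL) { perror("malloc"); return 1; }
    fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return 1; }

    for (pass = 1; pass <= PASSES; pass++) {
        struct timeval t0, t1;
        long long total = 0;
        ssize_t n;
        double secs;

        lseek(fd, 0, SEEK_SET);
        gettimeofday(&t0, NULL);
        while ((n = read(fd, buf, BLK)) > 0)
            total += n;
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("pass %d: %.2f s, %.1f MB/s\n", pass, secs,
               total / secs / 1048576.0);
    }
    close(fd);
    free(buf);
    return 0;
}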
4. Performance - High disk IO
Hi,
Our monitoring system (Big Brother) is showing one of our systems (an E4500 running Solaris 8) experiencing high disk IO on its T3 storage. The comparison is made against historical data/graphs.
As there are scheduled cron jobs running in the middle of the night, is there a way to track, using a script, which process is contributing to the high disk IO?
Thanks in advance for any pointers.
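Solaris 8 has no DTrace, but its procfs exposes cumulative per-process block IO counters in the prusage_t structure read from /proc/<pid>/usage (fields pr_inblk/pr_oublk, via <procfs.h>). A rough sketch of a dumper that could be run before and after the nightly cron window and diffed (the program itself is my own illustration, not an existing tool):

/* ioscan.c - dump per-process cumulative block IO counters on Solaris by
 * reading prusage_t from /proc/<pid>/usage. Run it twice, before and after
 * the cron window, and diff the output to see which PIDs did the IO. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <dirent.h>
#include <procfs.h>

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *de;

    if (proc == NULL) { perror("/proc"); return 1; }
    printf("%8s %12s %12s\n", "PID", "blk_in", "blk_out");

    while ((de = readdir(proc)) != NULL) {
        char path[64];
        prusage_t pru;
        int fd;

        if (de->d_name[0] < '0' || de->d_name[0] > '9')
            continue;                       /* skip "." and ".." entries */
        snprintf(path, sizeof(path), "/proc/%s/usage", de->d_name);
        fd = open(path, O_RDONLY);
        if (fd < 0)
            continue;                       /* process may have exited */
        if (read(fd, &pru, sizeof(pru)) == sizeof(pru))
            printf("%8s %12lu %12lu\n", de->d_name,
                   (unsigned long)pru.pr_inblk, (unsigned long)pru.pr_oublk);
        close(fd);
    }
    closedir(proc);
    return 0;
}

Wrapping the two runs and the diff in a small cron-driven script should be enough to single out the IO-heavy PIDs for the nightly window.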
5. Disk IO and NIC receive and send performance data
Hi,
I have installed collect on Tru64 UNIX 4.0D. It shows all the system performance stats perfectly.
Is there a system call (like table()) with which I can collect performance data such as disk reads/writes and network interface card send/receive counts?
Thanks
6. Direct io on block device has performance regression on 2.6.x kernel - pseudo disk driver
7. Low file-system performance for 2.6.11 compared to 2.4.26
8. Linux Kernel Markers - performance characterization with large IO load on large-ish system