Discussion: Heavy IO
Christopher Williams
2011-11-08 20:56:28 UTC
Christopher Williams [http://community.zenoss.org/people/cwilli1971] created the discussion

"Heavy IO"

To view the discussion, visit: http://community.zenoss.org/message/62493#62493

--------------------------------------------------------------
I have Zenoss Core 3.2.1 installed on an 8-core 2 GHz Intel Xeon with 48 GB of memory. I am seeing very high I/O on this box that periodically slows everything down. When the I/O is not occurring, everything is very fast; when it is occurring, the box is very, very slow, even at the CLI.

This is not the case when the Zenoss service is shut down. Here is iostat output from when we are seeing the problem:


Filesystem:              rBlk_nor/s   wBlk_nor/s   rBlk_dir/s   wBlk_dir/s   rBlk_svr/s   wBlk_svr/s     ops/s    rops/s    wops/s
apoh0010nfs01:/Apps/LSWREPO/export         0.01         0.00         0.00         0.00         0.00         0.00      0.04      0.00      0.00
apil0210nfs01:/Apps/SpanDataM/lpil0210zen02         2.50         2.57         0.00         0.00         2.49         2.57      0.11      0.04      0.04

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
cciss/c0d0        0.00   207.20    0.00  418.40     0.00  4996.80    11.94    34.43   81.96   2.21  92.40
cciss/c0d1        0.00   194.60    0.00  399.60     0.00  4774.40    11.95   143.23  358.72   2.50 100.00
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    0.80     0.00     6.40     8.00     0.25  310.00 204.00  16.32
dm-3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-5              0.00     0.00    0.00   23.80     0.00   184.00     7.73     3.78  158.79  15.06  35.84
dm-6              0.00     0.00    0.00    0.00     0.00     0.00     0.00   264.20    0.00   0.00 100.00

Filesystem:              rBlk_nor/s   wBlk_nor/s   rBlk_dir/s   wBlk_dir/s   rBlk_svr/s   wBlk_svr/s     ops/s    rops/s    wops/s
apoh0010nfs01:/Apps/LSWREPO/export         0.00         0.00         0.00         0.00         0.00         0.00      0.00      0.00      0.00
apil0210nfs01:/Apps/SpanDataM/lpil0210zen02         0.00         0.00         0.00         0.00         0.00         0.00      0.00      0.00      0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
cciss/c0d0        0.00   179.40    0.00  404.00     0.00  4672.00    11.56    25.87   64.14   2.28  92.24
cciss/c0d1        0.00   195.00    0.00  394.00     0.00  4748.80    12.05   143.46  364.33   2.54 100.00
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-5              0.00     0.00    0.00    5.40     0.00    43.20     8.00     0.24   44.59  10.52   5.68
dm-6              0.00     0.00    0.00    0.00     0.00     0.00     0.00   253.93    0.00   0.00 100.00

Filesystem:              rBlk_nor/s   wBlk_nor/s   rBlk_dir/s   wBlk_dir/s   rBlk_svr/s   wBlk_svr/s     ops/s    rops/s    wops/s
apoh0010nfs01:/Apps/LSWREPO/export         0.00         0.00         0.00         0.00         0.00         0.00      0.00      0.00      0.00
apil0210nfs01:/Apps/SpanDataM/lpil0210zen02         0.00         0.00         0.00         0.00         0.00         0.00      0.00      0.00      0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
cciss/c0d0        0.00   248.20    0.20  411.20     1.60  5160.00    12.55    31.03   74.64   2.26  93.12
cciss/c0d1        0.00   268.40    0.00  403.40     0.00  5131.20    12.72   143.45  356.94   2.48 100.00
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-5              0.00     0.00    0.00    4.20     0.00    33.60     8.00     0.38   89.52  11.43   4.80
dm-6              0.00     0.00    0.20    0.20     1.60     0.00     4.00   280.61 691804.00 2500.00 100.00

Filesystem:              rBlk_nor/s   wBlk_nor/s   rBlk_dir/s   wBlk_dir/s   rBlk_svr/s   wBlk_svr/s     ops/s    rops/s    wops/s
apoh0010nfs01:/Apps/LSWREPO/export         0.00         0.00         0.00         0.00         0.00         0.00      0.00      0.00      0.00
apil0210nfs01:/Apps/SpanDataM/lpil0210zen02         0.00         0.00         0.00         0.00         0.00         0.00      0.00      0.00      0.00


Does anyone have any idea what could be causing Zenoss to generate this much I/O? We only have about 190 devices.
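
A quick way to narrow down which processes are actually issuing the writes (this assumes the iotop and sysstat packages are installed, which a stock install may not have):

iotop -oPa      # only show processes currently doing I/O, one row per process, accumulated totals
pidstat -d 5    # per-process disk read/write rates, sampled every 5 seconds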

Thanks in advance.
--------------------------------------------------------------

dpetzel
2011-11-08 21:00:14 UTC
dpetzel [http://community.zenoss.org/people/dpetzel] created the discussion

"Re: Heavy IO"

To view the discussion, visit: http://community.zenoss.org/message/62495#62495

--------------------------------------------------------------
The I/O is almost certainly coming from Zenoss accessing the RRDs.

Also, it appears you have your RRDs on NFS mounts? We tried running on NFS and the performance was HORRIBLE. We ditched the NFS mounts and went direct fiber-attached. I'll be the first to admit I didn't do a lick of NFS tuning or debugging, but our I/O performance was drastically better once we moved away from NFS.
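
One way to sanity-check the RRD theory is to count how many RRD files Zenoss touches per collection cycle. The paths below assume the default Core layout, where performance data lives under $ZENHOME/perf; adjust if yours differs.

find $ZENHOME/perf -name '*.rrd' | wc -l             # total RRD files on disk
find $ZENHOME/perf -name '*.rrd' -mmin -5 | wc -l    # files updated in the last 5-minute cycle

Every RRD update is a small random write, so a few thousand files per cycle adds up to exactly the kind of sustained small-write load your iostat output shows.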
--------------------------------------------------------------

Christopher Williams
2011-11-08 21:05:27 UTC
Christopher Williams [http://community.zenoss.org/people/cwilli1971] created the discussion

"Re: Heavy IO"

To view the discussion, visit: http://community.zenoss.org/message/62497#62497

--------------------------------------------------------------
No, these are local drives. We have two 2-drive RAID 1 pairs formatted with XFS:

cciss/c0d0 is a RAID 1 pair
cciss/c0d1 is a RAID 1 pair
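
For reference, the current mount options and XFS geometry can be checked with something like the following (the mount point is an example; use wherever the perf data actually lives):

mount | grep cciss           # current mount options for each array
xfs_info /opt/zenoss/perf    # XFS geometry of the perf filesystem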
--------------------------------------------------------------

dpetzel
2011-11-08 21:10:23 UTC
dpetzel [http://community.zenoss.org/people/dpetzel] created the discussion

"Re: Heavy IO"

To view the discussion, visit: http://community.zenoss.org/message/62498#62498

--------------------------------------------------------------
Sure enough, copying your post into a wider text file makes that very clear :)

I still suspect it's the RRD reads/writes.
Have you had a chance to review http://community.zenoss.org/people/ckrough/blog/2010/02/09/performance-tuning-for-zenoss-storage ?
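
A cheap, commonly recommended change for RRD storage in the same spirit as that post is disabling atime updates on the filesystem holding the RRDs, so graph reads don't also generate metadata writes. A sketch, assuming the RRDs sit on their own filesystem mounted at /opt/zenoss/perf (adjust the path and device to your layout):

mount -o remount,noatime /opt/zenoss/perf

# to persist it, the matching /etc/fstab entry would look something like:
# /dev/cciss/c0d1p1  /opt/zenoss/perf  xfs  defaults,noatime  0 0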
--------------------------------------------------------------

Christopher Williams
2011-11-09 16:59:31 UTC
Christopher Williams [http://community.zenoss.org/people/cwilli1971] created the discussion

"Re: Heavy IO"

To view the discussion, visit: http://community.zenoss.org/message/62534#62534

--------------------------------------------------------------
I removed all of the RRD graphing that I didn't need and things are working much better now. Thanks for your help!
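
For anyone who lands on this thread later: a rough way to confirm this kind of cleanup took effect is to compare, before and after, how many RRD files are still being written per cycle and which device directories they belong to (paths assume the default $ZENHOME/perf layout):

find $ZENHOME/perf -name '*.rrd' -mmin -5 | wc -l    # RRDs still updated each 5-minute cycle
find $ZENHOME/perf -name '*.rrd' -mmin -5 | awk -F/ '{print $(NF-1)}' | sort | uniq -c | sort -rn | head    # group by parent (device) directory, busiest first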
--------------------------------------------------------------
