Discussion:
Hardware requirements for Zenoss v4.x
nilie
2012-10-18 16:24:31 UTC
--------------------------------------------------------------
Hello everybody,

We've been running a Zenoss server for more than two years, and the time has come to migrate it to a new server and, at the same time, upgrade to the latest version of Zenoss. Unfortunately we are being pressed into running Zenoss on a virtual server, and I'm expecting to be plagued by performance problems.
Up to now we have been running on an IBM X series server with 16GB of RAM, one dual-core Intel CPU and a RAID 5 array of 7,200 RPM disks.
I noticed that the hardware requirements are based on the number of devices being monitored, but that does not map well to our environment. We are using Zenoss to monitor networking equipment only: 200 devices (switches, routers and firewalls) with close to 44,000 interfaces (in excess of 350,000 data sources) being polled and graphed, and we are planning to add more devices.

At this point disk usage is impacting performance, with the CPU waiting for I/O 20% of the time on average and bursting up to 80%, yet the server still shows acceptable response times (a quick way to reproduce these figures is sketched after the questions below). The big mystery is how this is going to work on a virtual server, which brings me to a couple of questions.
* Is anyone running Zenoss on a virtual server for the same kind of environment (a low number of devices but a high number of data sources)? If so, are there any issues I will have to address in terms of the virtual environment (VMware) configuration?
* Even if applying the recommended storage fine-tuning might improve the situation a little, I doubt the server could take an additional 50 networking devices, and the fact that the server will be using shared storage will surely not bring a performance boost, so what other options do I have?
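
(For the record, the iowait figures above come from the stock sysstat tools; a minimal way to reproduce them, assuming sysstat is installed and the RRDs sit on a hypothetical /dev/sdb:)

    # Overall CPU utilisation; the %iowait column is where the 20%/80% comes from
    sar -u 5
    # Per-disk utilisation and wait times for the RRD volume
    iostat -x sdb 5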
--------------------------------------------------------------

Charles Wilkinson
2012-10-20 14:18:59 UTC
--------------------------------------------------------------
I'm not sure what virtualisation you're using, but you could try multiple virtual disks in a volume group on your machine rather than just one virtual disk.  I could be wrong, and it may depend on your virtualisation platform, but I believe the hypervisor grants more I/O time to multiple disks than to one big disk.
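
A minimal sketch of what I mean, assuming the VM is given four extra virtual disks that show up as /dev/sdb through /dev/sde (hypothetical names) and that LVM is available:

    # Pool the four virtual disks into one volume group
    pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
    vgcreate zenoss_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Stripe the logical volume across all four disks (-i 4, 64 KiB stripes)
    lvcreate -n perf_lv -i 4 -I 64 -l 100%FREE zenoss_vg
    mkfs -t ext4 /dev/zenoss_vg/perf_lv

With a striped LV, each RRD write is spread over all the backing disks instead of queuing on a single one.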

I'd also look at the disks your storage repositories are running on.  If they're running on some mirrored drive local to the virtualisation host, you likely don't have a hope unless you run multiple virtual machines and put a collector on each one.  But depending on your disks, tweak the elevators, turn off the updating of access times on files, and apply many of the other generic performance improvements that are recommended for any Linux installation.
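
For example (the device name and the /opt/zenoss/perf path are assumptions, so adjust them for your install):

    # Switch the I/O scheduler (elevator) to deadline for the RRD disk
    echo deadline > /sys/block/sdb/queue/scheduler
    # /etc/fstab entry mounting the RRD filesystem without access-time updates
    /dev/zenoss_vg/perf_lv  /opt/zenoss/perf  ext4  defaults,noatime,nodiratime  0 0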

Personally, I run my primary servers on SSD disks.  Expensive, yes, but OH so worth it for something this I/O intensive B-)

Outside that, you might like to investigate exactly what you're collecting and recording in RRDs, then look at trimming whatever isn't absolutely necessary.
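
A rough way to see where the RRD load is concentrated (assuming the default /opt/zenoss/perf layout):

    # Count RRD files per monitored device, biggest consumers first
    find /opt/zenoss/perf/Devices -name '*.rrd' \
        | awk -F/ '{print $6}' | sort | uniq -c | sort -rn | head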
--------------------------------------------------------------

nilie
2012-10-24 16:23:48 UTC
--------------------------------------------------------------
Thank you for your input.
Unfortunately, by virtualizing this server we are going to lose control of the hardware (a separate group will be handling the VMware hosts). Although I might find out details about the solution they are using, we have no chance of influencing their decisions or the actual setup.
I guess deploying additional collectors would be the way to go.
--------------------------------------------------------------

Charles Wilkinson
2012-10-24 16:33:32 UTC
--------------------------------------------------------------
Yeah, find out the details.  If you still have enough control to determine how many virtual drives you get (as opposed to a single big drive), and what priority the I/O on those drives gets over other systems, then it can help.  But certainly talk to them and just ask what kinds of disks they use or what throughput they get.  Heck, test it on one of their VMs.
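
Even something crude gives you a first number to argue with (a sketch; the sizes are arbitrary, and oflag=direct bypasses the page cache so you measure the disk rather than RAM):

    # Sequential write throughput to a scratch file
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct
    # Watch per-disk utilisation and wait times while it runs
    iostat -x 5
    rm -f /tmp/ddtest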

If they have a really good SAN storage system behind the virtualisation then it's probably not an issue.  If they have any kind of automated tiering to SSD disks on their SAN, you'll likely find the intensive I/O gets most of the Zenoss disks auto-migrated there after only 48 hours B-)
--------------------------------------------------------------
