Discussion:
Unhandled exception in zenhub service PingPerformanceConfig
Blaine B
2013-02-20 00:11:47 UTC
Permalink
Blaine B [http://community.zenoss.org/people/blaineb] created the discussion

"Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/71597#71597

--------------------------------------------------------------
I have been getting the following error message for a while now, and unfortunately I'm not exactly sure what changed that made it start popping up.  I think it may have been the upgrade to 4.2.3 (from 4.2).


| Resource: | localhost |
| Component: | Products.ZenHub.services.PingPerformanceConfig.PingPerformanceConfig |
| Event Class: | /Unknown (http://192.168.5.22:8080/zport/dmd/Events/Unknown) |
| Status: | New |
| Message: | Unhandled exception in zenhub service Products.ZenHub.services.PingPerformanceConfig.PingPerformanceConfig: 0x08c5a4 |
Event Details:
| manager | zenoss |
| methodCall | _createDeviceProxies((,), {}) |
| traceback |
Traceback (most recent call last):
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 108, in _wrapFunction
    return functor(*args, **kwargs)
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 227, in _createDeviceProxies
    proxy = self._createDeviceProxy(device)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 163, in _createDeviceProxy
    self._getComponentConfig(iface, perfServer, proxy.monitoredIps)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 101, in _getComponentConfig
    for ipAddress in iface.ipaddresses():
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 71, in __call__
    return self.objectValuesAll()
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 174, in objectValuesAll
    return list(self.objectValuesGen())
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 181, in objectValuesGen
    for obj in self._objects:
  File "/opt/zenoss/lib/python2.7/_abcoll.py", line 532, in __iter__
    v = self[i]
  File "/opt/zenoss/lib/python2.7/UserList.py", line 31, in __getitem__
    def __getitem__(self, i): return self.data[i]
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 860, in setstate
    self._setstate(obj)
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 901, in _setstate
    p, serial = self._storage.load(obj._p_oid, '')
  File "/opt/zenoss/lib/python/relstorage/storage.py", line 476, in load
    raise POSKeyError(oid)
POSKeyError: 0x08c5a4
--------------------------------------------------------------

Reply to this message by replying to this email -or- go to the discussion on Zenoss Community
[http://community.zenoss.org/message/71597#71597]

Start a new discussion in zenoss-users by email
[discussions-community-forums-zenoss--***@community.zenoss.org] -or- at Zenoss Community
[http://community.zenoss.org/choose-container!input.jspa?contentType=1&containerType=14&container=2003]
Justin Simmons
2013-03-08 21:24:06 UTC
Permalink
Justin Simmons [http://community.zenoss.org/people/jmsimmons] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/72321#72321

--------------------------------------------------------------
I have been seeing this message too since upgrading to 4.2.3. I delete the event, but then it periodically pops back up...

Traceback (most recent call last):
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 108, in _wrapFunction
    return functor(*args, **kwargs)
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 227, in _createDeviceProxies
    proxy = self._createDeviceProxy(device)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 165, in _createDeviceProxy
    self._getComponentConfig(iface, perfServer, proxy.monitoredIps)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 102, in _getComponentConfig
    for ipAddress in iface.ipaddresses():
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 71, in __call__
    return self.objectValuesAll()
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 174, in objectValuesAll
    return list(self.objectValuesGen())
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 181, in objectValuesGen
    for obj in self._objects:
  File "/opt/zenoss/lib/python2.7/_abcoll.py", line 532, in __iter__
    v = self[i]
  File "/opt/zenoss/lib/python2.7/UserList.py", line 31, in __getitem__
    def __getitem__(self, i): return self.data[i]
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 860, in setstate
    self._setstate(obj)
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 901, in _setstate
    p, serial = self._storage.load(obj._p_oid, '')
  File "/opt/zenoss/lib/python/relstorage/storage.py", line 476, in load
    raise POSKeyError(oid)
POSKeyError: 0x261a09

I hope someone has an idea about what is happening here, since there's more than one of us having this issue.
--------------------------------------------------------------

bknotts
2013-04-10 14:20:46 UTC
Permalink
bknotts [http://community.zenoss.org/people/bknotts] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/72776#72776

--------------------------------------------------------------
Also have this exact same error on 4.2.3. 

Thanks,
Brent
--------------------------------------------------------------

jcurry
2013-04-10 15:03:08 UTC
Permalink
jcurry [http://community.zenoss.org/people/jcurry] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/72778#72778

--------------------------------------------------------------
I also have 4.2.3 and haven't seen this one - though that doesn't really mean anything!  Is this a persistent error or an occasional one?

It looks like a method is trying to get data out of the Zope database (ZODB) and failing. RelStorage is the subsystem that provides access to the ZODB, which is now held in the MySQL zodb database, and it looks like a RelStorage call is what actually barfs.
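A side note on chasing this further (a hedged sketch using standard Python only; the `object_state`/`zoid` names are RelStorage's MySQL schema as I understand it, so verify against your own install): the event reports the oid in hex, but RelStorage keys its MySQL table on a decimal `zoid`, so converting lets you check whether the row really is missing:

```python
# Hypothetical helper: convert the hex oid from a POSKeyError event into
# the decimal zoid RelStorage uses in MySQL, e.g. to run:
#   SELECT zoid, tid FROM object_state WHERE zoid = 574884;
# An empty result would confirm the object is genuinely gone.
def oid_hex_to_zoid(oid_hex):
    """Convert a POSKeyError oid such as '0x08c5a4' to a decimal zoid."""
    return int(oid_hex, 16)

print(oid_hex_to_zoid("0x08c5a4"))  # 574884
```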

Has anyone seen this on any other 4.x system (i.e. can we pin it down to 4.2.3, or is it generic to 4.x)?

Cheers,
Jane
--------------------------------------------------------------

Jeremy Wynia
2013-05-31 14:45:34 UTC
Permalink
Jeremy Wynia [http://community.zenoss.org/people/Jeremy] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/73420#73420

--------------------------------------------------------------
We are running 4.2.0 and I started getting this error not too long ago.  Same behavior as others have described.  Can't get rid of it, and can't map it.

Thanks
Jeremy
--------------------------------------------------------------

Jeff Creek
2013-08-06 18:44:42 UTC
Permalink
Jeff Creek [http://community.zenoss.org/people/jcreek] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/74307#74307

--------------------------------------------------------------
Not much to add other than a 'me too'.

Started seeing these on my Core 4.2.3 server a few weeks ago.


Unhandled exception in zenhub service Products.ZenHub.services.PingPerformanceConfig.PingPerformanceConfig: 0x186a33

Traceback (most recent call last):
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 108, in _wrapFunction
    return functor(*args, **kwargs)
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 227, in _createDeviceProxies
    proxy = self._createDeviceProxy(device)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 165, in _createDeviceProxy
    self._getComponentConfig(iface, perfServer, proxy.monitoredIps)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 102, in _getComponentConfig
    for ipAddress in iface.ipaddresses():
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 71, in __call__
    return self.objectValuesAll()
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 174, in objectValuesAll
    return list(self.objectValuesGen())
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 181, in objectValuesGen
    for obj in self._objects:
  File "/opt/zenoss/lib/python2.7/_abcoll.py", line 532, in __iter__
    v = self[i]
  File "/opt/zenoss/lib/python2.7/UserList.py", line 31, in __getitem__
    def __getitem__(self, i): return self.data[i]
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 860, in setstate
    self._setstate(obj)
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 901, in _setstate
    p, serial = self._storage.load(obj._p_oid, '')
  File "/opt/zenoss/lib/python/relstorage/storage.py", line 476, in load
    raise POSKeyError(oid)
POSKeyError: 0x186a33
--------------------------------------------------------------

Daniel Rich
2013-08-07 18:08:29 UTC
Permalink
Daniel Rich [http://community.zenoss.org/people/sjthespian] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/74284#74284

--------------------------------------------------------------
I'll add another "me too": I just saw this on three devices yesterday for the first time. I'm on 4.2.4, and have been upgrading since the first 4.2 release.
2013-08-06 14:43:14,657 ERROR zen.hub: Unhandled exception in zenhub service Products.ZenHub.services.PingPerformanceConfig.PingPerformanceConfig: 'lily.anim.dreamworks.com'
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 116, in _wrapFunction
    return functor(*args, **kwargs)
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 233, in _createDeviceProxies
    proxy = self._createDeviceProxy(device)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 165, in _createDeviceProxy
    self._getComponentConfig(iface, perfServer, proxy.monitoredIps)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 99, in _getComponentConfig
    basepath = iface.rrdPath()
  File "/opt/zenoss/Products/ZenModel/RRDView.py", line 302, in rrdPath
    return GetRRDPath(self)
  File "/opt/zenoss/Products/ZenModel/RRDView.py", line 30, in GetRRDPath
    d = deviceOrComponent.device()
  File "/opt/zenoss/Products/ZenModel/OSComponent.py", line 46, in device
    if os: return os.device()
  File "/opt/zenoss/Products/ZenModel/OperatingSystem.py", line 158, in device
    return self.getPrimaryParent()
  File "/opt/zenoss/Products/ZenRelations/PrimaryPathObjectManager.py", line 83, in getPrimaryParent
    return self.__primary_parent__.primaryAq()
  File "/opt/zenoss/Products/ZenRelations/PrimaryPathObjectManager.py", line 78, in primaryAq
    raise KeyError(self.id)
KeyError: 'lily.anim.dreamworks.com'
--------------------------------------------------------------

Mark Rogers
2013-08-23 02:29:29 UTC
Permalink
Mark Rogers [http://community.zenoss.org/people/CatalyticDragon] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/74487#74487

--------------------------------------------------------------
Me too:

2013-08-23 11:22:37,268 ERROR zen.hub: Unhandled exception in zenhub service Products.ZenHub.services.PingPerformanceConfig.PingPerformanceConfig: 0x99e353
Traceback (most recent call last):
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 116, in _wrapFunction
    return functor(*args, **kwargs)
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 233, in _createDeviceProxies
    proxy = self._createDeviceProxy(device)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 165, in _createDeviceProxy
    self._getComponentConfig(iface, perfServer, proxy.monitoredIps)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 102, in _getComponentConfig
    for ipAddress in iface.ipaddresses():
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 71, in __call__
    return self.objectValuesAll()
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 174, in objectValuesAll
    return list(self.objectValuesGen())
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 179, in <genexpr>
    return (obj.__of__(self) for obj in self._objects)
  File "/opt/zenoss/lib/python2.7/_abcoll.py", line 532, in __iter__
    v = self[i]
  File "/opt/zenoss/lib/python2.7/UserList.py", line 31, in __getitem__
    def __getitem__(self, i): return self.data[i]
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 860, in setstate
    self._setstate(obj)
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 901, in _setstate
    p, serial = self._storage.load(obj._p_oid, '')
  File "/opt/zenoss/lib/python/relstorage/storage.py", line 476, in load
    raise POSKeyError(oid)
POSKeyError: 0x99e353
--------------------------------------------------------------

N Eone
2013-09-06 00:29:56 UTC
Permalink
N Eone [http://community.zenoss.org/people/josephsmith] created the discussion

"Re: Unhandled exception in zenhub service PingPerformanceConfig"

To view the discussion, visit: http://community.zenoss.org/message/74580#74580

--------------------------------------------------------------
Disclaimer: I am a Windows guy and not too familiar with Linux.

I also have (or had) a virtual 4.2.3 Core install that had 3 of these errors popping up under the localhost events. I didn't think much of it until I started using maintenance windows on several groups of servers that have monthly reboots (patching).

I happened to be awake when one of the maintenance windows was supposed to be active on a group of our servers (80+ devices set from Production to Maintenance). An hour into the maintenance window I noticed none of the devices in that group were set to maintenance. I verified that the time/date was correct and that the active-maintenance-window informational event was showing in the localhost events. I decided to change the status to maintenance in one swoop by manually selecting all the devices in the particular group. Lo and behold, it failed to do so, with no error. I chose a single device in the group and successfully had the state move to maintenance. I upped it a bit and attempted half of the devices; it failed. I started doing 10 at a time: success, success, then a failure on the third set of 10.

After testing each of those 10 individually I found the culprit. I wasn't able to update ANYTHING on this device: groups, status, details, etc. I even attempted to delete it. Every attempt to save/modify the device yielded a fleeting yellow error banner at the top of the browser: "POSKeyError: 0x1cd6a1". Sure enough, when I went back and viewed the localhost events, the code matched.

With 3 total events like this on my localhost, I correctly guessed I had 2 other corrupted devices. This was easy to test by creating a temporary group and dragging/dropping smaller batches of devices into it. Going 50 at a time, I eventually narrowed it down and found the 2 other corrupt devices.


Message:
Unhandled exception in zenhub service Products.ZenHub.services.PingPerformanceConfig.PingPerformanceConfig: 0x1cd6a1
Traceback (most recent call last):
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 108, in _wrapFunction
    return functor(*args, **kwargs)
  File "/opt/zenoss/Products/ZenCollector/services/config.py", line 227, in _createDeviceProxies
    proxy = self._createDeviceProxy(device)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 165, in _createDeviceProxy
    self._getComponentConfig(iface, perfServer, proxy.monitoredIps)
  File "/opt/zenoss/Products/ZenHub/services/PingPerformanceConfig.py", line 102, in _getComponentConfig
    for ipAddress in iface.ipaddresses():
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 71, in __call__
    return self.objectValuesAll()
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 174, in objectValuesAll
    return list(self.objectValuesGen())
  File "/opt/zenoss/Products/ZenRelations/ToManyRelationship.py", line 181, in objectValuesGen
    for obj in self._objects:
  File "/opt/zenoss/lib/python2.7/_abcoll.py", line 532, in __iter__
    v = self[i]
  File "/opt/zenoss/lib/python2.7/UserList.py", line 31, in __getitem__
    def __getitem__(self, i): return self.data[i]
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 860, in setstate
    self._setstate(obj)
  File "/opt/zenoss/lib/python/ZODB/Connection.py", line 901, in _setstate
    p, serial = self._storage.load(obj._p_oid, '')
  File "/opt/zenoss/lib/python/relstorage/storage.py", line 476, in load
    raise POSKeyError(oid)
POSKeyError: 0x1cd6a1

Google yielded no useful details. I came across several threads on here from other users with the same issue, and then eventually this post by jmp242:

"this is a corruption caused by MySQL exiting mid transaction, thereby corrupting objects in the relstorage ZopeDB. This is an issue if, say, MySQL is killed by the OOM killer - so sizing the Zenoss server appropriately is critical. The corruption can linger unnoticed for some time if you don't access the corrupted object, but you will see errors in some logs. Repair is an involved manual process, requiring a Zenoss Guru."
http://community.zenoss.org/thread/19880
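The OOM-killer theory is easy to check with standard Linux tools (nothing Zenoss-specific here; log paths vary by distro):

```shell
# Look for evidence that the kernel's OOM killer has taken out mysqld,
# the failure mode described above. Run on the Zenoss/MySQL host.
dmesg | grep -iE 'out of memory|killed process' | grep -i mysqld \
    || echo "no mysqld OOM kills in the kernel ring buffer"

# Older kills may have rotated out of dmesg; check syslog too
# (/var/log/syslog* on Debian-family systems):
grep -i 'killed process' /var/log/messages* 2>/dev/null | grep -i mysqld
```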

And since this can "linger unnoticed", you can only hope your backups go far enough back, to before the event initially occurred. Transferring/restoring to a new Zenoss install just carries the issue along with the DB.

So, if you see these, you have a corrupt device for each event. I just rebuilt ours.
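A minimal sketch of how you might automate the hunt instead of binary-searching with drag-and-drop (hypothetical helpers, not part of the Zenoss API; `probe_ipaddresses` pokes the same relationship the tracebacks above die in, and in a zendmd session you would pass it `dmd.Devices.getSubDevices()`):

```python
# Hypothetical zendmd helpers: walk every device once and touch the
# relationship that PingPerformanceConfig iterates over; any device whose
# probe blows up (POSKeyError subclasses KeyError) is a corruption suspect.

def probe_ipaddresses(device):
    """Touch the interface/ipaddresses relationship from the traceback."""
    for iface in device.os.interfaces():
        list(iface.ipaddresses())

def find_corrupt(devices, probe):
    """Return (device, exception) pairs for devices whose probe raises."""
    bad = []
    for device in devices:
        try:
            probe(device)
        except Exception as exc:
            bad.append((device, exc))
    return bad
```

In zendmd you might then run `find_corrupt(dmd.Devices.getSubDevices(), probe_ipaddresses)` and print each suspect's id alongside the exception, matching it against the oid in the event.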

Ultimately it looks like you need to properly size the server (RAM, disk) and possibly schedule a regular restart of Zenoss to keep memory use down, i.e. to work around the memory creep/leak.
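If you do go the scheduled-restart route, one way to sketch it is a root crontab entry (the schedule, service name, and log path here are assumptions to adapt for your install, not Zenoss-recommended settings):

```shell
# Hypothetical root crontab fragment: restart Zenoss early Sunday morning
# to work around the memory creep described above.
# m h dom mon dow  command
0 4 * * 0  service zenoss restart >> /var/log/zenoss-restart.log 2>&1
```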



Off topic/unrelated to this issue: when I say memory leak (I wish I had bookmarked the thread here where another person was ranting about why it doesn't release memory back), I mean that memory utilization on the localhost just slowly creeps up and up. The cache slowly gets smaller after used memory hits 100%, finally hitting 10-15%, and swap begins to creep up (eventually spiking in our system).

In the memory utilization graph for one of the Zenoss Core 4.2.3 servers in our environment, the red stars indicate a manual reboot of the system to "reset" it back to lower memory use. The blue star indicates a bump from 24GB of RAM to 36GB; even after that, memory use creeps up again. I have no explanation for the two spikes next to the orange star. The gaps are periods where Zenoss was unable to connect to SNMP on the localhost.
--------------------------------------------------------------
