exhuma / puresnmp
Pure Python SNMPv2 Library
License: MIT License
I tried to use puresnmp against an SNMPv1 device and was unsuccessful. After digging into the docs some more, it seems the library currently only supports SNMPv2c, with v3 support planned.
However, there are still a significant number of SNMPv1 devices "out in the wild". Would you consider supporting SNMPv1, or do you consider that to be out of scope for this project?
I'd be willing to write up an implementation when I get some time.
When a response is larger than 4096 bytes, the following error is raised:
[...]
File "/home/users/malbert/work/private/puresnmp/puresnmp/api/pythonic.py", line 183, in bulkwalk
for oid, value in result:
File "/home/users/malbert/work/private/puresnmp/puresnmp/api/pythonic.py", line 117, in multiwalk
for oid, value in raw_output:
File "/home/users/malbert/work/private/puresnmp/puresnmp/api/raw.py", line 203, in multiwalk
timeout)
File "/home/users/malbert/work/private/puresnmp/puresnmp/api/raw.py", line 408, in fetcher
port=port, timeout=timeout)
File "/home/users/malbert/work/private/puresnmp/puresnmp/api/raw.py", line 367, in bulkget
raw_response = Sequence.from_bytes(response)
File "/home/users/malbert/work/private/puresnmp/puresnmp/x690/types.py", line 153, in from_bytes
cls, expected_length, len(data)))
ValueError: Corrupt packet: Unexpected length for <class 'puresnmp.x690.types.Sequence'> Expected 6501 (0x1965) but got 4092 (0xffc)
This is due to the way sock.recv is used in puresnmp.transport: it only fetches one chunk and does not continue reading after that.
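The truncation can be reproduced with two plain UDP sockets; this sketch is independent of puresnmp and only demonstrates the SOCK_DGRAM semantics (as observed on Linux), using the same 6501-byte payload size as in the traceback:

```python
import socket

# A single recv() with a fixed 4096-byte buffer truncates a larger UDP
# datagram; the remaining bytes are silently discarded.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

sender.sendto(b"x" * 6501, addr)
truncated = receiver.recv(4096)   # only the first 4096 bytes survive
print(len(truncated))

sender.sendto(b"x" * 6501, addr)
complete = receiver.recv(65536)   # buffer large enough for the datagram
print(len(complete))              # 6501

sender.close()
receiver.close()
```

Because a datagram is delivered exactly once, the second half of the packet is unrecoverable after the short read, which is why the parser then sees a "Corrupt packet" with fewer bytes than the encoded length announces.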
When querying a non-existent OID on a target machine, an exception is raised which is not handled properly by puresnmp.
Traceback (most recent call last):
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/pdu.py", line 184, in decode
return super().decode(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/pdu.py", line 89, in decode
values, data = pop_tlv(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 89, in pop_tlv
value = cls.from_bytes(chunk)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 134, in from_bytes
return cls.decode(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 319, in decode
value, data = pop_tlv(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 89, in pop_tlv
value = cls.from_bytes(chunk)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 134, in from_bytes
return cls.decode(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 319, in decode
value, data = pop_tlv(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 89, in pop_tlv
value = cls.from_bytes(chunk)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 134, in from_bytes
return cls.decode(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/pdu.py", line 80, in decode
raise EmptyMessage('No data to decode!')
puresnmp.exc.EmptyMessage: No data to decode!
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main_test.py", line 306, in <module>
analyzer_list, importer_list, import_analyzer_mapping, snmp_statistics = first_run()
File "main_test.py", line 265, in first_run
effected_importers = snmp_statistics.gen_effected_importers(import_analyzer_mapping.get_mapping())
File "main_test.py", line 230, in gen_effected_importers
backedup_analyzers = self._analyzer_que_lookup()
File "main_test.py", line 216, in _analyzer_que_lookup
result = get(ip_address, self.community, snmp_oid)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/__init__.py", line 60, in get
return multiget(ip, community, [oid], port)[0]
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/__init__.py", line 84, in multiget
raw_response = Sequence.from_bytes(response)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 134, in from_bytes
return cls.decode(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 319, in decode
value, data = pop_tlv(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 89, in pop_tlv
value = cls.from_bytes(chunk)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/x690/types.py", line 134, in from_bytes
return cls.decode(data)
File "/usr/lib/python3.5/site-packages/puresnmp-1.1.5-py3.5.egg/puresnmp/pdu.py", line 186, in decode
raise NoSuchOID('Nothing found at the given OID (%s)' % exc)
puresnmp.exc.NoSuchOID: Nothing found at the given OID (No data to decode!)
I'm interested in learning what it would take to bring asyncio support to puresnmp.
The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.
Creating a separate issue from something identified in PR #32:
By @remdragon:
Actually, I did the alias wrong. I need to create it as a subclass of
UnknownType so that repr() will output the correct value. Also, I think the
subclass will be necessary in order to flag it as deprecated anyway. Please
give me a chance to fix that.
The type with tag 0x40 should be detected as IpAddress. It should be exposed as an instance from the ipaddress module at the high-level boundary of the puresnmp module.
I've noticed that when using an incorrect community string, a sent message never times out while waiting for a response (or it takes longer than I was willing to wait). Perhaps a sock.settimeout(2) or something similar should be added in the puresnmp.transport.send() function. It could also be made adjustable by adding a default argument to the function calls.
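A sketch of the suggested settimeout approach (the function name, signature, and default mirror the suggestion above and are assumptions, not the library's actual API):

```python
import socket

def send(ip, port, packet, timeout=2.0):
    # With a timeout set, a wrong community string fails fast with
    # socket.timeout instead of blocking forever on recv().
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(packet, (ip, port))
        return sock.recv(4096)
    finally:
        sock.close()
```

SNMP agents silently drop requests with an unknown community string, so from the client's perspective "wrong community" and "no agent at all" look identical; a timeout is the only way to unblock.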
When using pop_tlv to convert a bytes object into a PDU object, the result is different from what you might expect. With pop_tlv, these values are represented as puresnmp Integer instances. This has been observed with the error_index and error_status fields.
Data can only be read exactly once from a UDP socket. All bytes which don't fit into the buffer are discarded.
Currently the module puresnmp.transport contains an implementation which repeatedly reads data from the socket if the buffer is too small. This only makes sense for TCP and causes a timeout exception for UDP if the packet is too large.
puresnmp should simply read once. If any data is discarded (i.e. if the buffer size is too small), the result is a truncated packet which will trigger a "corrupt packet" error. This should make it easier to pinpoint the error.
Additionally, the corrupt-packet error message could include a hint about the buffer size.
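A sketch of what the buffer-size hint could look like, assuming a module-level BUFFER_SIZE constant (the names and message wording are illustrative, not puresnmp's actual code):

```python
BUFFER_SIZE = 4096  # assumed read size, matching the error seen above

def validate_length(expected_length, data):
    # Compare the length announced in the x.690 header against the
    # bytes actually read, and hint at truncation where applicable.
    if len(data) < expected_length:
        hint = ""
        if expected_length > BUFFER_SIZE:
            hint = (" (expected size exceeds the read buffer of %d bytes;"
                    " the response was likely truncated)" % BUFFER_SIZE)
        raise ValueError("Corrupt packet: expected %d bytes but got %d%s"
                         % (expected_length, len(data), hint))

validate_length(4092, b"x" * 4092)  # lengths match: no error
```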
The PDUs support multiple OIDs in one packet; this should be supported by the library as well.
Currently the representation of the IpAddress type contains a byte-string. Converting the byte-string to an ip-address would make log output much more readable.
The following line would break the API with the superclass Type:
IpAddress('192.168.1.1')
because the superclass requires a bytes object as argument. The following would keep a consistent API but would make output quite long:
IpAddress.from_string('192.168.1.1')
Alternatively a non-executable form may be used:
<IpAddress 192.168.1.1>
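The non-executable form could be produced like this sketch, assuming the value is stored as 4 raw bytes (the class here is a stand-in, not puresnmp's actual IpAddress type):

```python
from ipaddress import ip_address

class IpAddress:
    def __init__(self, value):
        self.value = value  # e.g. b"\xc0\xa8\x01\x01"

    def __repr__(self):
        # ipaddress.ip_address accepts packed 4-byte values directly,
        # so the raw bytes render as a dotted quad.
        return "<IpAddress %s>" % ip_address(self.value)

print(repr(IpAddress(b"\xc0\xa8\x01\x01")))  # <IpAddress 192.168.1.1>
```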
The code in the simple functions like puresnmp.get predates the functions like puresnmp.multiget. Other, similar cases exist for walk and set.
There's quite some code-duplication. It is possible to keep the "multi" functions, and make the "simple" functions thin wrappers around those "multi" methods. This should remove the duplication and still keep both versions available.
Why keep both, and why not only keep the "multi" variants?
One main aim of the library is to keep the API as simple as possible. Having both versions keeps the simple use-cases simple while still supporting batched requests.
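The proposed de-duplication could look like this sketch, where the simple function delegates to the "multi" variant; the stubbed multiget body is illustrative, only the wrapper pattern matters:

```python
def multiget(ip, community, oids, port=161):
    # Real code would send one request containing all OIDs; placeholder
    # values stand in here so the wrapper can be demonstrated.
    return ["value-for-%s" % oid for oid in oids]

def get(ip, community, oid, port=161):
    # Thin wrapper: delegate to multiget and unwrap the single result.
    return multiget(ip, community, [oid], port)[0]

print(get("192.0.2.1", "private", "1.3.6.1.2.1.1.2.0"))
# value-for-1.3.6.1.2.1.1.2.0
```

All request logic then lives in one place, and the simple variant cannot drift out of sync with the batched one.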
Currently the chunk-size for socket reading is hardcoded to 4096 bytes. This value could be made configurable for the end-user. This is kind of related to #45 as it may reduce the chance of running into #45. However, fixing #45 has priority!
This is also related to #1 as it is another configuration value for the network behaviour.
Currently the timeout value is exposed by most top-level functions. To guarantee backwards-compatibility, a new argument chunk_size should be introduced.
For the next major release, this would make more sense in the creation of an SNMP-Client instance (see also #3)
The set command requires typing information for variables. However, there is no reference of the available types in the docs.
Add the necessary information to the docs!
Currently (by design) the SNMP types are converted to Python types on the library boundaries.
This makes day-to-day work easy and straightforward. But information is lost which may be helpful in some cases, and it should be made accessible.
I'm planning to refactor the existing functions into "low-level" functions which will also be intended for end-users of the library. Then, in order to retain the "easy" interface, new functions will be introduced which depend on those lower-level functions and add the existing type conversions on top.
As the title says.
I'm retrieving a small table near the end of the MIB (followed by 5 scalars). The response contains the table, the 5 scalars and EndOfMibView. This leads to bulkwalk raising an exception.
Actually it should return the retrieved values and the EndOfMibView indication so that the application can read values near the end of the MIB and also figure out that it has hit the end of the MIB.
I tried to figure out where exactly the problem happens but I got lost...
This is the end of the response message:
_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00_00
.1.3.6.1.6.3.1.1.6.1.0=128024585
.1.3.6.1.6.3.10.2.1.1.0=80_00_08_be_80_33_37_34_30_37_37_33_33_35_35
.1.3.6.1.6.3.10.2.1.2.0=1
.1.3.6.1.6.3.10.2.1.3.0=1656
.1.3.6.1.6.3.10.2.1.4.0=1500
.1.3.6.1.6.3.10.2.1.4.0=[endOfMibView] } }
These are functions available in another library I wrote years back and some projects depend on this functionality.
Currently __contains__ exists, which may cover this. This ticket exists so that I don't forget to verify this before releasing 1.3.
Old implementation (incompatible when used verbatim):
def parentof(self, other):
    """
    Returns ``True`` if this OID is a parent of *other*:
    >>> ObjectIdentifier('1.2').parentof(ObjectIdentifier('1.2.3'))
    True
    >>> ObjectIdentifier('1.3').parentof(ObjectIdentifier('1.2.3'))
    False
    """
    # pylint: disable=protected-access
    if not isinstance(other, ObjectIdentifier):
        other = ObjectIdentifier(other)
    if self == other:
        return True
    if len(self._nodes) > len(other._nodes):
        return False
    common_head = other._nodes[:len(self._nodes)]
    return common_head == self._nodes

def childof(self, other):
    """
    Returns ``True`` if this OID is a child of *other*:
    >>> ObjectIdentifier('1.2.3').childof(ObjectIdentifier('1.2'))
    True
    >>> ObjectIdentifier('1.1.3').childof(ObjectIdentifier('1.2'))
    False
    """
    # pylint: disable=protected-access
    if not isinstance(other, ObjectIdentifier):
        other = ObjectIdentifier(other)
    if self == other:
        return False
    if len(self._nodes) < len(other._nodes):
        return False
    common_head = self._nodes[:len(other._nodes)]
    return common_head == other._nodes
As reported by harti:
The type inherits from OctetString, and the OctetString encoder has the type hardcoded. Using the following in OctetString fixes the issue:
def __bytes__(self):
    tinfo = TypeInfo(self.TYPECLASS, TypeInfo.PRIMITIVE, self.TAG)
    return bytes(tinfo) + self.length + self.value
Currently, the package provides a "leaky abstraction" in the sense that calls to the main entry-point functions return data-types which are internal to puresnmp (most notably ObjectIdentifier).
The returned data-types should always be pure Python types.
Sometimes the "raw" data-types may provide useful functionalities, and as such a second entry-point should be provided to return the values unmodified. This would make it explicit to the end-user what kind of types are returned.
In my test environment IPs might change, so I prefer to use host names. Is there a way to get this working without an extra layer on my side?
At the moment I am getting:
ValueError: 'server_name.mydomain.local' does not appear to be an IPv4 or IPv6 address
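Until hostname support lands in the library, a thin resolution shim on the caller's side works; socket.gethostbyname is standard library, the wrapper function is an assumption:

```python
import socket

def resolve(host):
    # Turn a host name into an IPv4 address string before handing it
    # to puresnmp, which (per the error above) expects an IP address.
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        raise ValueError("Could not resolve %r" % host)

print(resolve("localhost"))  # typically 127.0.0.1
```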
"walk" returns pythonized values whereas "bulkwalk" returns raw values.
"bulkwalk" should also return pythonized values.
For SNMP beginners, these terms are unknown. A simple 1-2 sentence entry in the glossary would help. No need to go into detail. Just one sentence & link to an external (authoritative) document.
As reported by harti:
I observed this by chance when encoding 32768. The current code yields 0x8000. Since integers are signed (also the unsigned types!) this is actually a negative number. The correct encoding is 0x008000. The following code in Integer seems to fix this, but it probably deserves more testing for the edge cases:
def __bytes__(self):
    if self.value == 0:
        octets = [0]
    else:
        # Split long integers into multiple octets.
        remainder = self.value
        octets = []
        while True:
            octets.append(remainder & 0b11111111)
            if remainder == 0 or remainder == -1:
                break
            remainder = remainder >> 8
        if remainder == 0 and octets[-1] == 0b10000000:
            octets.append(0)
        octets.reverse()
    # remove leading octet if there is a string of 9 zeros or ones
    while len(octets) > 1 and \
            ((octets[0] == 0 and octets[1] & 0b10000000 == 0) or
             (octets[0] == 0b11111111 and octets[1] & 0b10000000 != 0)):
        del octets[0]
    tinfo = TypeInfo(self.TYPECLASS, TypeInfo.PRIMITIVE, self.TAG)
    return bytes(tinfo) + bytes([len(octets)]) + bytes(octets)
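As a cross-check for the octet-splitting logic, Python's built-in int.to_bytes can produce the minimal big-endian two's-complement content octets (sketch only; the X.690 type and length header is omitted here):

```python
def encode_signed(value):
    # Smallest number of octets that holds the value in two's
    # complement, big endian -- matching X.690 integer contents.
    length = max(1, (value.bit_length() + 8) // 8)
    return value.to_bytes(length, "big", signed=True)

print(encode_signed(32768).hex())  # 008000 (the corrected encoding)
print(encode_signed(127).hex())    # 7f
print(encode_signed(-1).hex())     # ff
```

Note how 32768 needs the leading zero octet: without it, 0x8000 decodes as a negative number, which is exactly the bug described above.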
I've run into a bug in a network device which does not behave correctly on GetNext requests. Instead of returning the next OID it will return the same OID as the one requested on some rare occasions.
This causes endless loops on walk and bulkwalk (and thus table requests as well). A check should be introduced which verifies that the OID received by getnext is different from the one passed in. A more precise check would be whether the returned OID is "larger" than the one passed in.
These checks may have a considerable performance impact on larger walk operations, primarily the check whether the OID is increasing. They should therefore be made optional.
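The check could look like this sketch, with plain integer tuples standing in for ObjectIdentifier and a hypothetical exception name:

```python
class FaultySNMPImplementation(Exception):
    """Raised when a device misbehaves during a walk (name is illustrative)."""

def check_increasing(requested, received):
    # Abort the walk if the returned OID is not strictly larger than
    # the requested one. Buggy agents that echo the request back would
    # otherwise cause an endless loop in walk/bulkwalk.
    if received <= requested:
        raise FaultySNMPImplementation(
            "Device returned non-increasing OID %r after %r"
            % (received, requested))

# Tuples compare lexicographically, which matches OID ordering.
check_increasing((1, 3, 6, 1), (1, 3, 6, 1, 2))  # OK: strictly larger
```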
Publish official release on pypi as soon as the remaining issues for Release 1.0 are implemented.
The endless loop which was fixed in 1.3.1 was reintroduced in 1.3.2
Is there a reason why doing a full walk on a device, it doesn't start yielding until after it has accumulated all the data from the device? Would it break anything to modify the logic to start yielding in multiwalk's "while unfinished_oids" block?
Line 461 in d8ec41e
This conditional is repeated three lines lower.
The license in setup.py should be MIT, not BSD.
Some parts of the documentation are auto-generated via the fabric script.
RTD does not run the fabric script and thus does not include the generated documentation. This needs to be fixed!
Currently the top-level module contains a couple of duplicate functions, for example walk and multiwalk, or get and multiget.
This can lead to confusion. It was meant to keep the arguments simple for simple requests but introduces too much inconsistency. The "multi" functions should be kept. This way all functions will accept a list of OIDs, making the overall API calls more consistent.
Hello, thanks for this excellent approach to handle SNMP.
After updating from 1.4.1 to 1.5.1, the API stopped working because of a missing keyword argument.
In puresnmp/transport.py:
def send(self, ip, port, packet):  # pragma: no cover
    # type: ( str, int, bytes ) -> bytes
But the call in puresnmp/api/raw.py still uses timeout:
response = transport.send(ip, port, to_bytes(packet), timeout=timeout)
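A minimal sketch of a backwards-compatible fix, re-accepting the timeout keyword in the transport layer (the class shape, constant name, and the echoing body are illustrative only):

```python
DEFAULT_TIMEOUT = 2  # seconds; assumed default, not the library's constant

class Transport:
    def send(self, ip, port, packet, timeout=DEFAULT_TIMEOUT):
        # type: (str, int, bytes, int) -> bytes
        # A real implementation would open a UDP socket and call
        # sock.settimeout(timeout); echoing the packet back suffices
        # to show that the keyword is accepted again.
        return packet

t = Transport()
t.send("192.0.2.1", 161, b"\x30\x00", timeout=5)  # no TypeError anymore
```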
Error responses contain variable bindings. In case of an error, the error-index is set to the index of the failed variable binding. This detail is currently not included in error messages but should be.
I seem to remember having implemented the TimeTick, Counter32 and Gauge32 types. But they are not properly detected!
I've introduced some PEP8 errors (identified in #32) which should be fixed. Additionally, to avoid confusion in future PRs, the contribution guide should contain a note about which linter is used.
As the title says.
It is possible to get something very similar to tables, even without MIBs. It is doable to return a list of dicts, where each list item represents a row in the table, and the dict contains the columns, with the key equal to the column ID from the OID.
Why not a list of lists?
Some manufacturers have "sparse" tables. This introduces an ambiguity when using lists as columns: if an index is set to None, does that mean the element was missing, or does it mean it was explicitly reported as Null?
Using dicts removes this ambiguity.
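The row-building could look like this sketch, assuming each OID ends in <column>.<row-index>; the function name and result shape are illustrative, not puresnmp's actual API:

```python
from collections import defaultdict

def tablify(varbinds):
    rows = defaultdict(dict)  # row-index -> {column-id: value}
    for oid, value in varbinds:
        column, row = oid.split(".")[-2:]
        rows[row][column] = value
    # Sparse tables: a missing column simply has no key in the dict,
    # which is distinct from an explicit Null/None value.
    return [rows[index] for index in sorted(rows)]

print(tablify([
    ("1.3.6.1.2.1.2.2.1.1.1", 1),
    ("1.3.6.1.2.1.2.2.1.2.1", "eth0"),
    ("1.3.6.1.2.1.2.2.1.1.2", 2),
]))
# [{'1': 1, '2': 'eth0'}, {'1': 2}]
```

The second row has no key "2" at all, rather than a None placeholder, which is exactly the disambiguation argued for above.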
The decode_length function returns a namedtuple since one of the recent commits. This opens up the possibility to access its elements via name instead of indices, which makes code more readable. I don't remember if the return value is always used via variable unpacking, or if the indices are used somewhere.
In case the indices are used, they should be replaced with the new namedtuple attributes.
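A sketch of the namedtuple return value, assuming fields named "length" and "offset" (the actual field names in puresnmp may differ), showing that index access and attribute access stay interchangeable:

```python
from collections import namedtuple

LengthInfo = namedtuple("LengthInfo", "length offset")

def decode_length(data):
    # Short-form X.690 length only, for illustration: a first byte
    # below 0x80 is the length itself, consumed in one octet.
    return LengthInfo(length=data[0], offset=1)

info = decode_length(b"\x05hello")
print(info.length, info.offset)  # attribute access: 5 1
print(info[0], info[1])          # index access still works: 5 1
```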
As reported to me via e-mail:
Hi Albert,
looks like the encode_length() function in x690/util.py is broken. It produces little endian byte ordering for the long definite form instead of big endian.
Inserting output.reverse() before prefixing the length info seems to do the job. The decoding is correct, though.
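For reference, a sketch of X.690 definite-form length encoding with the big-endian ordering of the long form (illustrative, not puresnmp's actual encode_length):

```python
def encode_length(length):
    if length < 0x80:
        return bytes([length])  # short form: one octet
    # Long form: split into octets, most significant first.
    octets = []
    while length:
        octets.append(length & 0xFF)
        length >>= 8
    octets.reverse()  # big endian -- the step the e-mail says is missing
    # Leading octet: 0x80 | number of length octets that follow.
    return bytes([0x80 | len(octets)]) + bytes(octets)

print(encode_length(6501).hex())  # 821965 -- matches the 0x1965 seen
                                  # in the corrupt-packet traceback
```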
Regards,
harti
When receiving the byte value 06 00, the library raises an IndexError.
It should however return the following OID:
ObjectIdentifier(0)
Running len(ObjectIdentifier.from_string('1.2.3.4')) should return 4.
In the current release (1.1.1), bulkwalk raises the following error on OID 1.3.6.1.2.1.2.1:
Traceback (most recent call last):
File "my__.tester.py", line 90, in <module>
test_if_oper_stat()
File "my__.tester.py", line 85, in test_if_oper_stat
for row in iftable:
File "/home/users/michel/work/pynet/env/lib/python3.4/site-packages/puresnmp/__init__.py", line 475, in bulkwalk
for oid, value in result:
File "/home/users/michel/work/pynet/env/lib/python3.4/site-packages/puresnmp/__init__.py", line 225, in multiwalk
unfinished_oids = get_unfinished_walk_oids(varbinds, requested_oids)
File "/home/users/michel/work/pynet/env/lib/python3.4/site-packages/puresnmp/__init__.py", line 201, in get_unfinished_walk_oids
for k, v in results.items()}
File "/home/users/michel/work/pynet/env/lib/python3.4/site-packages/puresnmp/__init__.py", line 201, in <dictcomp>
for k, v in results.items()}
IndexError: list index out of range
RFC 3416 contains more details about variable types and their inheritance than I originally found. Additionally, it is a more recent RFC which was not linked in the one I used originally.
Compare the existing codebase with RFC 3416 and verify that there are no errors.
The documentation process ends in success, but the docs are incomplete. The build log contains the following two interesting tracebacks:
/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/checkouts/stable/docs/developer_guide/api/puresnmp.rst:25: WARNING: autodoc: failed to import module 'puresnmp'; the following exception was raised:
Traceback (most recent call last):
File "/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/sphinx/ext/autodoc.py", line 385, in import_object
__import__(self.modname)
File "/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/puresnmp-1.1.3.post2-py3.4.egg/puresnmp/__init__.py", line 39, in <module>
'version.txt').decode('ascii').strip()
File "/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/pkg_resources/__init__.py", line 1162, in resource_string
self, resource_name
File "/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/pkg_resources/__init__.py", line 1603, in get_resource_string
return self._get(self._fn(self.module_path, resource_name))
File "/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/pkg_resources/__init__.py", line 1726, in _get
with open(path, 'rb') as stream:
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/puresnmp-1.1.3.post2-py3.4.egg/puresnmp/version.txt'
/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/checkouts/stable/docs/developer_guide/api/puresnmp.const.rst:4: WARNING: autodoc: failed to import module 'puresnmp.const'; the following exception was raised:
Traceback (most recent call last):
File "/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/sphinx/ext/autodoc.py", line 385, in import_object
__import__(self.modname)
File "/home/docs/checkouts/readthedocs.org/user_builds/puresnmp/envs/stable/lib/python3.4/site-packages/puresnmp-1.1.3.post2-py3.4.egg/puresnmp/__init__.py", line 18, in <module>
from . import types # NOQA (must be here for type detection)
ImportError: cannot import name 'types'
Ran into an issue when packaging up puresnmp in a project with pyinstaller.
The program wouldn't run because version.txt didn't get included.
Can I remove dependency on that file or prevent the script from crashing if it can't be found?
puresnmp.x690.types.OctetString.__init__() converts the value to ASCII, but then calculates the length on the original, possibly unicode, input value.
Now, since it's converting to ASCII, any multi-byte character would just throw a unicode error anyway, but it feels wrong to calculate the length on the wrong value.
Please feel free to close this as me being pedantic; I posted this because I was already typing this issue report before I realized it wasn't a bug, and it could cause confusion for others in the future.
"really no my domain of expertise"
should be
"really not my domain of expertise"