nornir-automation / nornir
Pluggable multi-threaded framework with inventory management to help operate collections of devices
Home Page: https://nornir.readthedocs.io/
License: Apache License 2.0
For reference: #68 (comment)
people are not going to read the API, so some interesting functionality designed to help build workflows is probably going to be unknown to most. A tips-and-patterns section might help newcomers and people that don't have the time to read the API documentation :)
Mostly to avoid confusing users.
Would be great to have an AnsibleInventory plugin so people can use it if they already have one.
If task_id is specified but something went wrong, print everything.
(env) rxnanne@LPDSTNETDEV01:~/nornir> ipython
Python 3.4.6 (default, Mar 01 2017, 16:52:22) [GCC]
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import nornir as n
In [2]: n.__version__
Out[2]: '1.0.1'
In [3]: from nornir.core import InitNornir
In [4]: brg = InitNornir("nornir_config.yml")
In [5]: brg.inventory
Out[5]: <nornir.plugins.inventory.ansible.AnsibleInventory at 0x7fa72015df98>
In [6]: brg.inventory.hosts
Out[6]: {}
I would expect the hosts dictionary to be populated.
rxnanne@LPDSTNETDEV01:~/nornir> cat nornir.log
2018-06-27 17:02:28,198 - nornir - DEBUG - read_vars_file() - AnsibleInventory: var file doesn't exist: group_vars/all
2018-06-27 17:02:28,198 - nornir - DEBUG - read_vars_file() - AnsibleInventory: var file doesn't exist: group_vars/all
I created a group_vars directory and file for "all"
rxnanne@LPDSTNETDEV01:~/nornir> mkdir group_vars
rxnanne@LPDSTNETDEV01:~/nornir> cat > group_vars/all
[all]
Tried to instantiate InitNornir again
In [10]: brg = InitNornir("nornir_config.yml")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-48637fa64fff> in <module>()
----> 1 brg = InitNornir("nornir_config.yml")
~/nornir/nornir/core/__init__.py in InitNornir(config_file, dry_run, **kwargs)
275 inv_args = getattr(conf, inv_class.__name__, {})
276 transform_function = conf.transform_function
--> 277 inv = inv_class(transform_function=transform_function, **inv_args)
278
279 return Nornir(inventory=inv, dry_run=dry_run, config=conf)
~/nornir/nornir/plugins/inventory/ansible.py in __init__(self, hostsfile, **kwargs)
190
191 def __init__(self, hostsfile="hosts", **kwargs):
--> 192 host_vars, group_vars = parse(hostsfile)
193 defaults = group_vars.pop("defaults")
194 super().__init__(host_vars, group_vars, defaults, **kwargs)
~/nornir/nornir/plugins/inventory/ansible.py in parse(hostsfile)
177 raise
178
--> 179 parser.parse()
180 return parser.hosts, parser.groups
181
~/nornir/nornir/plugins/inventory/ansible.py in parse(self)
47
48 def parse(self):
---> 49 self.parse_group("defaults", self.original_data["all"])
50 self.sort_groups()
51
~/nornir/nornir/plugins/inventory/ansible.py in parse_group(self, group, data, parent)
38
39 self.groups[group].update(data.get("vars", {}))
---> 40 self.groups[group].update(self.read_vars_file(group_file, self.path, False))
41 self.groups[group] = self.map_nornir_vars(self.groups[group])
42
ValueError: dictionary update sequence element #0 has length 3; 2 is required
Currently it is possible to access arbitrary key-value pairs using __getitem__: switch1["my_key"] or switch1.get("my_key").
The majority of variables available to the user are stored in self.data or accessed from the group hierarchy.
This change would make variable access even nicer using delegation, where all attempts to read new variables (or write, if taken to the extreme) would be delegated to methods of the data attribute.
You would be able to say:
switch1.my_key
or switch1.my_key = "value"
Here is an (extreme) example that uses both read and write:
class Host:
    __initialized = False

    def __init__(self, name):
        self.name = name
        self.data = {}
        self.__initialized = True

    def __getattr__(self, name):
        try:
            return self.data[name]
        except (AttributeError, KeyError):
            raise AttributeError(
                f"'{self.__class__.__qualname__}' object has no attribute '{name}'"
            )

    def __setattr__(self, name, value):
        if not self.__initialized or hasattr(self, name):
            super().__setattr__(name, value)
        else:
            self.data[name] = value
Usage:
In [21]: sw1 = Host(name="sw1")
In [22]: sw1.name
Out[22]: 'sw1'
In [23]: sw1.site
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-20-991bde2bf5b8> in __getattr__(self, name)
10 try:
---> 11 return self.data[name]
12 except (AttributeError, KeyError):
KeyError: 'site'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-23-2a572e3bc687> in <module>()
----> 1 sw1.site
<ipython-input-20-991bde2bf5b8> in __getattr__(self, name)
12 except (AttributeError, KeyError):
13 raise AttributeError(
---> 14 f"'{self.__class__.__qualname__}' object has no attribute '{name}'"
15 )
16
AttributeError: 'Host' object has no attribute 'site'
In [24]: sw1.site = "sj"
In [25]: sw1.__dict__
Out[25]: {'name': 'sw1', 'data': {'site': 'sj'}, '_Host__initialized': True}
In [26]: sw1.name
Out[26]: 'sw1'
In [27]: sw1.name = "new-sw"
In [28]: sw1.name
Out[28]: 'new-sw'
In [29]: sw1.__dict__
Out[29]: {'name': 'new-sw', 'data': {'site': 'sj'}, '_Host__initialized': True}
A read-only version would be simpler, with less magic, but could still be useful.
Thoughts?
When using NAPALM plugins, connections stay open and can cause scripts to hang while waiting for the connections to time out.
Forcing the connections to close using a task helps with this.
task.host.connections["napalm"].close()
Maybe this could be done automatically? It would require finding a way to know which connections are no longer needed.
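A minimal sketch of such a cleanup task, assuming task.host.connections is a plain dict of name to connection object and that the connection exposes close() (as in the snippet above); the function name is made up:

```python
def close_connection(task, conn_name="napalm"):
    # Pop the connection so nothing tries to reuse it after closing.
    # Assumes task.host.connections is a dict of name -> connection object.
    conn = task.host.connections.pop(conn_name, None)
    if conn is not None:
        conn.close()
    return conn is not None  # True if a connection was actually closed
```

Running this as the last task in a workflow would release the NAPALM session explicitly instead of waiting for the timeout.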
hosts.yaml
isp1-pe1:
  nornir_host: 192.168.122.41
  nornir_username: admin
  nornir_password: admin
  nornir_nos: ios
  groups:
    - isp1
  type: network_device
isp2-pe1:
  nornir_host: 192.168.122.51
  nornir_username: admin
  nornir_password: admin
  groups:
    - isp2
  nornir_nos: ios
  type: network_device
groups.yaml
isp:
isp1:
  asn: 1000
  groups:
    - isp
isp2:
  asn: 2000
  groups:
    - isp
In [8]: nornir_runner = InitNornir(config_file="config.yaml")
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-8333d1930a09> in <module>()
----> 1 nornir_runner = InitNornir(config_file="config.yaml")
~/.pyenv/versions/3.7.0/lib/python3.7/site-packages/nornir/core/__init__.py in InitNornir(config_file, dry_run, **kwargs)
264 inv_args = getattr(conf, inv_class.__name__, {})
265 transform_function = conf.transform_function
--> 266 inv = inv_class(transform_function=transform_function, **inv_args)
267
268 return Nornir(inventory=inv, dry_run=dry_run, config=conf)
~/.pyenv/versions/3.7.0/lib/python3.7/site-packages/nornir/plugins/inventory/simple.py in __init__(self, host_file, group_file, **kwargs)
132
133 defaults = groups.pop("defaults", {})
--> 134 super().__init__(hosts, groups, defaults, **kwargs)
~/.pyenv/versions/3.7.0/lib/python3.7/site-packages/nornir/core/inventory.py in __init__(self, hosts, groups, defaults, transform_function, nornir)
291
292 for group in self.groups.values():
--> 293 group.groups = self._resolve_groups(group.groups)
294
295 self.hosts = {}
AttributeError: 'NoneType' object has no attribute 'groups'
Will submit PR.
Brigade supports grouping tasks. For instance, you can do something like:
def print_facts(task, facts):
    r1 = networking.napalm_get_facts(task, "facts")
    print(r1)
    r2 = networking.napalm_get_facts(task, "interfaces")
    print(r2)
    return ???

brigade.run(task=print_facts,
            facts=facts)
The idea is to group tasks that logically go together to build reusable workflows. The challenge is what to return in case you may want to return something or pass it to a "callback" (see #3). What I propose is to have a MultiResult class that does something like:
class MultiResult(object):
    def __init__(self, *results):
        self.results = results
        self.changed = any([r.changed for r in results])
You'd basically rewrite the grouping above like this:
def print_facts(task, facts):
    r1 = networking.napalm_get_facts(task, "facts")
    r2 = networking.napalm_get_facts(task, "interfaces")
    return MultiResult(r1, r2)
See details in the parent issue napalm-automation/napalm#327
Ansible has an implicit localhost host which you can use as a target for your tasks, or you can use:
delegate_to: localhost
run_once: true
Currently nornir does not have this concept (though it also does not ship code to execute remotely); the majority of tasks work on localhost, and some establish a network connection.
It is a little bit confusing.
E.g., nornir-tools has an example of doing network configuration backup. It runs napalm_get and write_file on a group of networking devices.
But what does it really mean to run the task write_file on a number of networking devices, without digging a lot into the docs? Does it mean I will save the file on the networking device's filesystem, or locally? I need to check the docs or sometimes the source code to be sure.
Example use-cases associated with localhost:
(could be a Function, could be a Task)
I don't have a good solution in mind for this. Maybe something like nornir_runner.local_run or nornir_runner.run(local=True) instead of nornir_runner.run? This would allow localhost to directly access the target's vars.
Maybe the Task itself must also be a LocalTask, which could be a subclass of Task.
I also feel that having some kind of implicit Host: localhost could be helpful for keeping specific key-value pairs on the fly that don't belong to any other Host or Group.
What are your thoughts about this?
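To make the proposal above concrete, here is a purely hypothetical sketch of a local run: the function executes once per host on localhost, with direct access to each host's vars. Neither local_run nor the inventory shape shown here is existing nornir API.

```python
def local_run(inventory_hosts, func, **kwargs):
    # Hypothetical: execute `func` once per host, on localhost,
    # passing the host object so it can read the host's vars directly.
    results = {}
    for name, host in inventory_hosts.items():
        results[name] = func(host, **kwargs)
    return results
```

The key difference from a normal run is that no connection to the device is ever opened; the task only consumes inventory data.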
The idea is to store (as it happens) the execution of brigade: calls made, parameters, responses, etc. The object should be serializable.
A little followup for the discussion in #17.
Regarding having username, password, etc. as arguments in tasks versus only based on information from the inventory: my first use case will be to use Brigade as part of a network discovery process. I will have a number of IP addresses and a number of different sets of credentials. I won't know which credentials to use for each device and will have to try each one until I find something that works. In that scenario I wouldn't want the credentials tied to the inventory, i.e. it would be good to have both, where the task arguments would override the inventory.
I do however like the idea of reusing the logic between different tasks so that we don't reinvent the wheel for each set of tasks.
Also for consistency I think we should enforce the naming of common arguments so they remain the same for all tasks.
As an example, I used host as an argument for the tcp_ping task. The similar argument with napalm_cli is hostname; for the netmiko task it's ip or host. I think all of these should use one argument, and that it should be host instead of hostname or ip, the reason being that a host could be either an IP address or a hostname.
Also I think we should standardise other arguments too, for example that all tasks use username instead of having some tasks with username and others with user, api_user or something else.
A starting list for this could be (examples; actual names can be changed):
Ping @dbarrosop, @ktbyers.
We should add a timeout value for threads and probably integrate it with tasks that timeout like napalm or netmiko related tasks.
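One way to sketch a per-host timeout at the thread-pool level, using only the standard library (this is not nornir's actual runner; note that Python cannot forcibly kill a thread, so a timed-out worker keeps running in the background until it finishes on its own):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def run_with_timeout(task_func, hosts, timeout=10.0):
    # Submit one worker per host and give each result() call a deadline.
    results = {}
    pool = ThreadPoolExecutor(max_workers=len(hosts) or 1)
    futures = {host: pool.submit(task_func, host) for host in hosts}
    for host, future in futures.items():
        try:
            results[host] = future.result(timeout=timeout)
        except TimeoutError:
            results[host] = None  # mark the host as timed out
    pool.shutdown(wait=False)  # don't block on stragglers
    return results
```

Integrating this with napalm/netmiko would still want the libraries' own socket timeouts set, since the thread-level deadline only stops us from waiting, not the underlying connection attempt.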
Brigade's architecture is designed to run code locally, but sometimes it is useful to run tasks on the target machine.
This issue was opened to discuss if nornir should introduce type checks and mypy.
Mypy is a static type checker for Python 2 and 3.
It is quite smart and can infer a lot of types based on the code and on type hinting, which was introduced with PEP 484.
Prior to Python 3.6, type hints were specified using comments: # type: <type>. Python 3.6 introduced inline variable annotations:
var: <type> = <value>
While some perceive it as ugly, it really helps (at least in my opinion):
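For illustration, here are both annotation styles on a small made-up function; mypy can check any code shaped like this:

```python
# Pre-3.6 style: the type lives in a comment
retries = 3  # type: int

# 3.6+ style: inline variable annotation (PEP 526)
device: str = "sw1"

def reachable(host: str, attempts: int = 3) -> bool:
    # mypy would flag calls such as reachable(5) or reachable("r1", "many")
    return bool(host) and attempts > 0
```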
I am definitely not a mypy expert, but I was helping certbot (a medium codebase, around 35k LoC) migrate to mypy during the PyCon sprints. We caught a dozen small bugs and found places with weird code that definitely requires refactoring.
The strategy was:
As a point of reference we used this document written by the Zulip team, who successfully migrated an even larger codebase to mypy.
A good presentation to understand type hints and how to apply them in the real world: Carl Meyer - Type-checked Python in the real world - PyCon 2018
Would be nice to have a task wrapping requests
Two options:
http://brigade.readthedocs.io/en/develop/ref/internals/execution_model.html
On two instances the images refer to my_thid_task instead of my_third_task.
Basically, explain how a user may be able to access a type of connection directly, for the sake of working with it, via Host.connections and Host.get_connection.
So users can specify extra connection types
on_error and on_success for the run method, that is.
Like update, and maybe others.
Test everything, especially tasks, and especially now that the API is not stable yet, so we can easily identify what we break every time we make a slight change :)
Provides both static analysis and coverage. Might be interesting.
git+https://github.com/ktbyers/netmiko.git@develop is added to tox.ini to solve a problem with Alpine Linux, which is used by the tests for Brigade.
Once the next version of Netmiko (whichever comes after 1.4.3) is released, that line should be removed and netmiko can be included in requirements.txt instead.
group -> groups, replace all with defaults.
https://github.com/nornir-automation/nornir/blob/develop/nornir/core/inventory.py#L41-L46
Looks like pyyaml is pretty much abandoned. We should migrate to ruamel.yaml (AnsibleInventory is already using it).
Due to #136 we are going to be renaming brigade to something else. If you have a proposal for an original name let us know by dropping a comment or contacting me privately if you prefer. If you like any of the proposals feel free to indicate so by adding a reaction.
By the way, the name doesn't have to be a real word or even in English. Feel free to do something fancy like combining different words, coming up with a new one, or even playing with languages other than English.
Rename:
host -> node
hosts -> nodes
To keep things more generic.
As discussed here: #68 (comment)
This is slightly related to #155 and #149
The pattern of running tasks on the group or on a specific host is very common. There were a couple of PRs trying to implement this using filters.
While I think F filters are a good idea, I think it would be great if you could also invoke the run method directly on a Host or Group class instance, something like this:
nornir_runner.inventory.hosts["sj-br1"].run(
    task=networking.napalm_get,
    name="Collecting facts using NAPALM",
    getters=["facts"],
)

nornir_runner.inventory.groups["sj-edge"].run(
    task=networking.napalm_get,
    name="Collecting facts using NAPALM",
    getters=["facts"],
)
Thoughts?
Easy way of adding Jinja filters.
As discussed here: #68 (comment)
Might be better to handle this via the Configuration object instead.
I was wondering if we should have a decorator for tasks where we can signal things. For instance:
@brigade.plugins.task(dry_run=True, format_arguments=["filename"])
def napalm_configure(task, filename=None, configuration=None, replace=False):
    ....

@brigade.plugins.task(dry_run=None, format_arguments=[])
def print_result(task, data, vars=None, failed=None, task_id=None):
    ....
Basically arguments of the decorators indicate that:
a. Whether the task supports dry_run or not, or if it doesn't apply
b. Which arguments should be auto-formatted (an alternative to #53)
c. Others that might come, maybe?
Is there any notable difference between this and Fabric?
Basically the idea is to remove stale files. Something like:
def ensure_structure(task, path, files):
    deleted = []
    for directory, dirnames, filenames in os.walk(path):
        for dirname in dirnames:
            if dirname not in files:
                name = f"{directory}/{dirname}"
                deleted.append(name)
                if not task.is_dry_run():
                    shutil.rmtree(name)
        for filename in filenames:
            if filename not in files:
                name = f"{directory}/{filename}"
                deleted.append(name)
                if not task.is_dry_run():
                    os.remove(name)
    return Result(host=task.host, result=deleted, changed=bool(deleted))
Although we may want to accept a dictionary and do this recursively.
Hi, I'm one of the maintainers of another project named Brigade (https://brigade.sh, https://github.com/azure/brigade/) that might be similar enough to yours that it may cause some confusion. I read an article on your framework, which I think looks very cool. But I am a little worried that the similarity of our project names might confuse our users, especially since ours is also an automation tool (though specific to Kubernetes).
I thought I would raise the issue with you so that we can together do what's best for the broader community.
As discussed in #55, the idea is to generalize the Config class so users can also use it to configure their application.
This code causes an error for self-inflicted failures.
https://github.com/brigade-automation/brigade/blob/develop/brigade/core/task.py#L96-L98
For example:
def echo_task(task, msg):
    failed = False
    if msg == "5":
        failed = True
    return Result(host=task.host, result=f"Hello from Brigade: {msg}", failed=failed)

def data_with_greeting(task):
    task.run(task=echo_task, name="First", msg='1')
    task.run(task=echo_task, name="Second", msg='5')
    task.run(task=echo_task, name="Third", msg='3')
    task.run(task=echo_task, name="Fourth", msg='4', severity=logging.DEBUG)
    task.run(task=echo_task, name="Fifth", msg='5', severity=logging.DEBUG)
Traceback (most recent call last):
File "/Users/patrick/src/brigade/brigade/core/task.py", line 61, in start
r = self.task(self, **self.params)
File "test-run.py", line 32, in data_with_greeting
task.run(task=echo_task, name="Second", msg='5')
File "/Users/patrick/src/brigade/brigade/core/task.py", line 98, in run
raise r.exception
TypeError: exceptions must derive from BaseException
The Result.exception is set to None by default and doesn't get changed when a failed task doesn't throw an error.
https://github.com/brigade-automation/brigade/blob/develop/brigade/core/task.py#L58-L67
if r.failed:
    # Without this we will keep running the grouped task
    raise r.exception
Either we need to check if r.exception is None and raise some other generic Exception, or we can let users return a failure reason.
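A sketch of the first option, with a hypothetical generic exception class (TaskFailedError and raise_on_failure are made-up names, not brigade's actual API):

```python
class TaskFailedError(Exception):
    # Hypothetical: raised when a task reports failed=True without
    # attaching a real exception of its own.
    pass

def raise_on_failure(result):
    # Only re-raise a real exception; otherwise wrap the failure
    # in a generic one so we never `raise None`.
    if result.failed:
        if result.exception is not None:
            raise result.exception
        raise TaskFailedError(f"Task failed: {result.result!r}")
```

This keeps the current "stop the grouped task on failure" behavior while avoiding the TypeError from raising None.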
Thoughts?
Make clear which ones require a "connection" and which ones work locally. Other improvements welcome
The reference guides contain the API reference for Brigade and describe the core functions and plugins.
Something like:
./brigade-runner napalm_get_facts -f group=switch -f site=my_dc "facts=get_interfaces"
./brigade-runner scp -f group=switch -f site=my_dc "src=/local/folder" "dst={brigade_ip}:/remote/folder"