
Fabric's Introduction


Welcome to Fabric!

Fabric is a high level Python (2.7, 3.4+) library designed to execute shell commands remotely over SSH, yielding useful Python objects in return. It builds on top of Invoke (subprocess command execution and command-line features) and Paramiko (SSH protocol implementation), extending their APIs to complement one another and provide additional functionality.

To find out what's new in this version of Fabric, please see the changelog.

The project maintainer keeps a roadmap on his website.


Fabric's Issues

Make use of ssh_config where possible

Description

Paramiko offers an API for reading users' SSH config settings. We can use this, at the very least, for host aliasing and per-host settings (e.g. a default SSH username different from the local username, or host "foo" actually having a hostname of "bar"), although the more esoteric settings, such as agent forwarding, may not be supported by Paramiko itself.

The Paramiko SSHConfig class API is pretty basic: just instantiate the class, parse a config file, then look up a hostname. For some silly (?) reason it lowercases the key names (so one gets forwardagent instead of ForwardAgent), but I guess that doesn't present a problem.
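
For illustration, a minimal sketch of that parse-then-lookup flow (the config path and host name are placeholders, not proposed defaults):

from paramiko import SSHConfig

config = SSHConfig()
config.parse(open('/path/to/ssh_config'))  # typically ~/.ssh/config
settings = config.lookup('foo')
# settings is a plain dict with lowercased keys, e.g.
# {'hostname': 'bar', 'user': 'johndoe', ...}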

Main question is how to integrate the host info with env.hosts, env.user and so forth. ssh_config files, from Fabric's perspective, can only really serve two purposes. Here's a sample ssh_config file:

Host foo
    HostName bar
    User johndoe

Host *
    User jdoe

Someone using Fabric with this file could expect the following:

Additional "DNS"-like mappings

@hosts('foo') results in Fabric actually trying to connect to the hostname bar.

Since this is just a supplement to Fabric's functionality, it should probably be on by default, with an option for deactivating it.

In terms of what is displayed to the user, offhand I'd display the "key" hostname (foo) but possibly have debug output show bar, or show something like foo (bar) perhaps.

Username overrides

In the absence of env.user being set at module level or via a host string in @hosts or similar, the sample config file above would result in connections to foo using the username johndoe, and connections elsewhere using jdoe (this "falling through" is due to how SSH and Paramiko interpret ssh_config files -- Fabric would not be doing this itself.)

When the username is set by the Fabric fabfile or CLI flags, however, it gets a bit trickier. I think it obvious that a full host string would take precedence, i.e. @hosts('user@host') would never be overridden by an ssh_config file. But what about a globally set env.user versus an ssh_config with a host-specific user (such as Host foo above)?

This will entail a little more thought (though in this specific example I'd argue that the more specific setting, across both Fabric and ssh_config, should win -- just like how ssh_config-only things resolve.)


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 04:59pm EDT


Relations

  • Related to #4: Allow for storing/using metadata about hosts
  • Duplicated by #259: Shortcuts as in .ssh/config
  • Related to #72: SSH key forwarding

/bin/bash is not always available – per host shell configuration?

Description

By default, Fabric assumes that /bin/bash can be used to run commands – that is not always the case, since some Unix systems (like the BSDs) do not ship with bash, and if it's installed by the user, it is located in /usr/local/bin/bash.

Now, it is possible to set the shell via a CLI parameter, but different hosts might have different shells – so is it somehow possible to set a different shell per host?
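
In the meantime, one possible workaround sketch (not a built-in feature; env.shell and env.host_string are existing Fabric settings, but the SHELLS mapping and function name are purely illustrative):

from fabric.api import env

# Purely illustrative mapping of host -> shell; not a built-in Fabric feature.
SHELLS = {
    'bsdbox.example.com': '/usr/local/bin/bash -l -c',
}

def pick_shell():
    # Call this at the start of a task to choose a shell for the current host.
    env.shell = SHELLS.get(env.host_string, '/bin/bash -l -c')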


Originally submitted by Mikkel Høgh (mikl) on 2009-07-28 at 04:17pm EDT

Relations

  • Duplicates #4: Allow for storing/using metadata about hosts

Closed as Duplicate on 2009-11-08 at 11:44am EST

Rework output mechanisms

Description

The current output mechanisms (threads + while loops) make I/O error-prone, and there are currently at least one or two, possibly mutually exclusive, edge cases involving dropped output. It would be nice to remove this and use a callback-oriented approach (similar to how Ruby's Net::SSH library works).

Stuff to look into:

  • Twisted (Conch)
  • gevent
  • Eventlet
  • greenlet
  • multitask
  • Kamaelia/Axon
  • coroutines (used by greenlet, multitask, others?)
  • PuSSH
  • Paramiko .select if applicable
  • Tornado had some async stuff in it
  • more? (go over this recent comparison page)

Not only would such changes give us a possible opportunity to remove the existing bugs, but they should result in a smaller/neater codebase as well.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 03:00pm EDT

Relations

  • Related to #21: Make execution model more robust/flexible
  • Related to #381: Jython support
  • Related to #19: Implement parallelism/thread-safety
  • Related to #7: Improved prompt detection and passthrough

Closed as Wontfix on 2010-06-22 at 09:21am EDT

Clarify execution docs re: local. (was: local is called more than once)

Description

The local function is executed once per remote host instead of just once.
I don't think that is the intended behavior; at least, Fabric 0.1.1 wasn't like that.

I'm using fabric-0.9b1.

Here is my fabfile and the execution trace for your reference:

$ cat fabfile.py
from fabric.api import run, local, env
env.hosts = ["0.0.0.0", "127.0.0.1"]
def uname():
    """
    Prints host type (via ``uname -s``) for a remote host.
    """
    local("echo hello")
    run('uname -s')
$ fab uname
[localhost] run: echo hello
[0.0.0.0] run: uname -s
[0.0.0.0] out: Linux
[localhost] run: echo hello
[127.0.0.1] run: uname -s
[127.0.0.1] out: Linux
Done.
Disconnecting from 0.0.0.0... done.
Disconnecting from 127.0.0.1... done.

Originally submitted by Anand Chitipothu (anandology) on 2009-07-28 at 06:39am EDT


Closed as Done on 2009-11-08 at 11:32am EST

Increase test coverage

Description

Use nose's builtin coverage flag to get coverage reports and use that as a rough guide for where to focus on, test-wise. (Not that 100% coverage.py coverage is actually full coverage, of course, but it's a good metric.)

I anticipate that some internals will have to change, possibly drastically, in order to be more easily tested. Try to limit this as much as possible -- will probably have to make judgement calls as to whether a given component is worth seriously redoing/breaking at this point, in order to get tests working; or what needs to just be held off on until 1.0.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 12:29pm EDT

Relations

  • Related to #87: Set up a buildbot

Enhanced @task (aliases etc)

Description

Add ability to specify the task class to use, along with args and kwargs that are passed to the task class.

(Jeff) Also: aliasing and module-default tasks.


N.B. #383 used to be number 22.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 03:23pm EDT

Relations

  • Related to #387: Add alias/default info to fab --list output

Closed as Done on 2011-08-18 at 07:53pm EDT

Bash/Zsh/etc completion

Description

At the very least, task names should be tab-complete compatible.

What would be even cooler is to have task arguments tab-completing too, e.g. to do fab mytask:myarg=1 one would type fab my<tab>:my<tab>1.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 05:20pm EDT


Relations

  • Duplicated by #129: Bash-completion script
  • Duplicated by #162: Zsh completion
  • Related to #208: fab --shortlist
  • Related to #56: Add namespacing or dot notation

Implement parallelism/thread-safety

Description

Fabric currently uses the simplest approach to the most common use case, but as a result is quite naive and not threadsafe, and cannot easily be run in parallel even by an outside agent.

Rework the execution model and state sharing to be thread-safe (whether by using threadlocals or something else), and if possible go further and actually implement a parallel execution mode users can choose to activate, with threading or multiprocessing or similar.


Morgan Goose has been the lead on this feature and has an in-working-shape branch in his Github fork (link goes to the multiprocessing branch, which is the one you want to use). We hope to merge this into core Fabric for 1.1.


Current TODO:

  • Anal retentive renaming, e.g. s/runs_parallel/parallel/
  • Code formatting cleanup/rearranging
  • Mechanics/behavior/implementation double-check
  • Linewise output added back in (may make sub-ticket)
  • Paramiko situation examined re: dependency on 1.7.7.1+ and thus PyCrypto 2.1+
    • Including documenting the change in the install docs if necessary
  • Pull in anything useful that Morgan hadn't pushed at time of my merge
  • (if not included in previous) Examine logging support and decide if it's worth bumping to next release
  • Test, test, test

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 02:52pm EDT


Relations

  • Related to #20: Rework output mechanisms
  • Related to #21: Make execution model more robust/flexible
  • Related to #197: Handle running without any controlling tty

Try polishing operations' API

Description

  • Re-examine API of existing operations, e.g. prompt's validate option
    • wrt validate, figure out which is better: 'dual-mode' single arguments
      like that, or splitting it into 2 different arguments
      • dual-mode makes sense given an either-or situation like that one
      • but at the same time it feels kind of messy
      • see how stdlib does it in similar situations. guessing 2 arguments
        with "only X or Y may be given, but not both at the same time" note?
    • look at the rest too

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 12:11pm EDT

Reorganize internals a bit

Description

In no particular order:

  • Stuff in utils should be user-useful only, other subroutines ought to go elsewhere
  • Should probably move subroutines in main to another module so that main is just that: main(). New module would be something like core - subroutines of the same sort found in utils but NOT usually for user consumption (and ALSO not generally used multiple times by internals? or should this be the same spot as other non-user-facing subroutines?)
    • Also note that a number of main subroutines right now, such as the ones for finding settings, and setting host lists, should be updated so that library users can use them without trouble.

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 12:07pm EDT

Allow put-style globbing with get

Description

put, since its sources come from local files, is able to use the local Python's glob module so one can e.g. put('/tmp/*', '/tmp/'). However, get cannot do this, and Paramiko's SFTP module doesn't seem to provide such functionality for us.

In the interests of having a symmetrical API, try leveraging run('ls') type calls which can do at least a reasonable facsimile of the glob module's behavior, i.e. get('/tmp/*', '/tmp/') would involve e.g. file_list = run('ls -1 /tmp/*') ; for file in file_list: sftp.get(file).

Furthermore, having a mode kwarg like with put (which, I assume, would perform an os.chmod or similar locally) would go even further towards making the two APIs identical. (Hat tip to Max Arnold from #215 for this request.)
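
A rough sketch of the ls-based approach described above (function and variable names are illustrative, not a proposed API; assumes filenames without embedded whitespace):

from fabric.api import run, get

def get_glob(remote_pattern, local_dir):
    # SFTP has no server-side globbing, so expand the pattern via a remote ls.
    file_list = run('ls -1d %s' % remote_pattern).split()
    for filename in file_list:
        get(filename, local_dir)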


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 04:31pm EDT

Relations

  • Duplicated by #215: get() should allow remote wildcards and optional file mode
  • Duplicates #140: Recursive put() and get()

Closed as Duplicate on 2010-09-17 at 02:09pm EDT

get() not correctly detecting number of hosts

Description

Scenario is the use of one task to set env.hosts, then a 2nd task called after it without any @hosts usage (so that the previous set of env.hosts takes effect.)

get(), in this case, is appending hostnames as if there were more than one host. Either there's a bug in how it looks at env.all_hosts, or env.all_hosts is not being set correctly.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-28 at 02:30pm EDT


Closed as Worksforme on 2011-03-10 at 11:43am EST

Overhaul contrib.project.upload_project

Description

  • Use a tempfile lib to make a unique temporary directory, if possible
  • Use try/finally to ensure things are cleaned up even if something aborts/fails
  • Add parameters for controlling local directory to be archived and remote directory to be unarchived into

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 10:54am EDT



Closed as Done on 2011-04-24 at 11:53pm EDT

Occasional dropped lines of output

Description

One or two users (at least) are experiencing dropped lines of output (usually the last line in an expected return value from a run() or sudo()) after the patch to fix output-dropping issues from about a month ago.

A good example of this is Xinan Wu with a series of emails to the list, most recently on July 13th, with a suggested fix. Also, the "Beta 1" thread has mails from Niklas Lindstrom who has the verifiable "works prior to recent fix, doesn't work after" issue.

Take a look at that email, find the others if possible, and see what's up. Would be nice to have this fixed before 0.9 final, so marking it as such, but if it's an edge case and isn't affecting the majority of users, might be able to wait for an 0.9.x release.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-24 at 09:48pm EDT



Closed as Done on 2009-11-08 at 12:04pm EST

Update "use_sudo to be "via" or similar

Description

I.e. allow the user to pass in a callable instead of specifying this boolean option.

And/or, make use of sudo controllable by an env var, thus eliminating call chains entirely.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 04:12pm EDT

Relations

  • Duplicated by #181: Sudo context and/or decorator

Possibly reimplement old undocumented "confirm_proceed" behavior

Description

Fab 0.1.x and earlier had an undocumented feature where one could set an option to be prompted prior to running every operation, e.g.

[localhost] run: rm -rf /*
Proceed? [Yn] 

If folks +1 this, might be worth putting in.

This could be considered a special case of dry-running to some degree, but is sufficiently different that it can probably live on its own.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 11:13am EDT


Closed as Wontfix on 2011-03-10 at 12:09pm EST

env.hosts should be a list initially.

Description

I would like to have commands that append hosts to the env.hosts variable, sort of like:

def production():
    env.hosts.append("production.com")

def staging():
    env.hosts.append("staging.com")

Right now, however, env.hosts is None, so I need to check and assign a list to it every time. It should be a list initially so we can just append to it.


Originally submitted by Stavros Kalapothas (stavros) on 2009-08-09 at 07:26am EDT


Closed as Done on 2009-11-08 at 11:44am EST

Make execution model more robust/flexible

Description

  • Right now, relatively simple, calls each function one by one, and for each
    function, runs on each host one by one, and the host list may be different
    per-function.
  • What may make more sense is to specify host list "first", then for each
    host, you call each function in turn (in which case the order of functions
    may matter more than it does now). This would mean that the logic for host
    discovery changes a decent amount.
  • Do we want to allow this to be switched up dynamically? I.e. allow user
    to specify a "mode" (like in the old Fabric) to determine which of those
    two algorithms is used?
  • How do these decisions affect what decorators/etc can be applied? I.e.
    @hosts doesn't make any sense in the latter scenario because there is only
    one global host list per session. (But isn't that the more sensible
    solution? Is it ever useful to execute a fab command line and have the host
    list to change during the session?)
  • See mikeias' fork for one potential
    approach: preserves dynamic checking of host/role lists as attached to
    tasks via decorators, but thus requires one to call a function with the
    task name as a string, instead of simply calling the function itself. May
    be worth the tradeoff, and is definitely food for thought.
  • Also work in the concept of dependencies. Right now users have to explicitly call the other functions, which results in minor DRY violations in some cases at least.
    • Argument against dependencies would be that it could introduce magic, i.e. calling a function that is hooked into by other functions is no longer a straight "just calls this one function only".
    • Could also be a lot more complex to set up, depending on how the stuff above is approached (i.e. if the above is done without sacrificing a lot of simplicity, this almost definitely would)

As a possible stopgap (or maaaybe long term) simply refactor existing connection/execution stuff so it can be exposed to users, i.e. something encapsulating "call function X on Y host list". Which is essentially the same as mikeias' fork (2nd to last bullet point above) -- but with the caveat that this method would not be the only or preferred way to call functions, instead just an additional tool for when the default exec model doesn't fit your needs.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 03:02pm EDT

Relations

  • Related to #76: Use decorator to define tasks
  • Related to #218: Update get(), env.all_hosts to work for library users
  • Related to #243: Consider not using set() when merging hosts/roles
  • Related to #266: Externally loaded roledefs
  • Duplicated by #286: Use call instead of init for classes that define one
  • Related to #297: Object-oriented hosts/roles/collections
  • Related to #391: Allow tasks to generate new tasks (for other hosts) in the same session
  • Related to #19: Implement parallelism/thread-safety
  • Related to #20: Rework output mechanisms

upload_template produces "Permission denied".

Description

def add_django_site():
    upload_template("template.txt", context["short_name"])

This produces:

Underlying exception message:
    Permission denied

put() works fine.


Originally submitted by Anonymous () on 2009-07-26 at 08:15am EDT



Closed as Done on 2009-11-08 at 11:31am EST

Add global env var controlling pty behavior

Description

run and sudo currently take a kwarg controlling whether a pty is used during SSH sessions. Add an env var and command line switch for this.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 04:26pm EDT


Closed as Done on 2009-11-08 at 11:45am EST

upload_template: wrong handling of source filename and generation of remote path

Description

When upload_template tries to upload the rendered temporary file to the remote host, it builds the wrong remote path if the source filename includes directory components.

>>> upload_template('/foo/bar/file.name', '/remote/path')

[X.X.X.X] put: /tmp/tmpBDQ5yh -> /tmp//foo/bar/file.name

Fatal error: put() encountered an exception while uploading '/tmp/tmpBDQ5yh'

Underlying exception message:
    No such file

Patch is attached.


Originally submitted by Alex Koshelev (daevaorn) on 2009-08-10 at 12:04pm EDT



Closed as Done on 2009-11-08 at 11:31am EST

--roles doesn't work

Description

I recall this coming up before and thought it was fixed, but perhaps not. asksol is finding that "fab task:roles=foo" works but "fab --roles=foo task" prompts for a host.

Been a while since I mucked with the hosts/roles stuff, but not seeing anything obviously wrong right now (though I also don't see anywhere that env.roles is being referenced; but nor is env.hosts, implying that -H shouldn't work, and I know it does...).

This is as good a spot as any to start beefing up the test suite. Write tests for this and hosts, then fix when it doesn't pass.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-24 at 09:48am EDT

Relations

  • Related to #33: Look into adding "*" support for hosts/roles

Closed as Done on 2009-11-08 at 11:31am EST

Find decent way to perform platform-specific path logic

Description

E.g. for SFTP, Paramiko provides a method of expanding tildes, but for path separators, or tildes and/or path separators for anything NOT involving SFTP, we currently just assume Unix. It would be nice to detect the remote OS in some semi-reliable fashion so we can correctly perform both of these tasks -- i.e. Unix vs Windows for path separators, and expansion of tildes on all platforms (which would involve running some tests, e.g. run('pwd'), since the default login directory is the remote user's $HOME.)

Current locations:

  • context_managers.cd
  • contrib.files.upload_template
  • contrib.files.exists (because we doublequote the path being tested for existence)
  • Possibly others that didn't have TODO comments, take a look

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 10:38am EDT

Make `put` sudo-able

Description

Right now, put can only write remote files as the logging-in user, which is problematic when one needs to write to locations owned by root or another user.

Paramiko's SFTP API doesn't support any sort of sudo functionality (though I rather doubt the SFTP protocol in general supports this anyways) so this will need to be implemented as a call to the current put behavior, plus a sudo('mv').
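
A minimal sketch of that two-step approach (upload to a temp path, then sudo a move; names and the temp location are illustrative only):

import os

from fabric.api import put, sudo

def put_as_root(local_path, remote_path):
    # Upload somewhere the SSH user can write, then move into place with sudo.
    tmp_path = '/tmp/%s' % os.path.basename(local_path)
    put(local_path, tmp_path)
    sudo('mv %s %s' % (tmp_path, remote_path))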


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 04:56pm EDT

Relations

  • Related to #140: Recursive put() and get()
  • Related to #257: Add a convenience chown-ing kwarg to put()?

Closed as Done on 2010-11-14 at 09:02pm EST

Make reconnection more robust (was re: reboot() specifically)

Description

Use a more robust reconnection/sleep mechanism than "guess how long a reboot takes and sleep that long". Possibilities:

  • Try reconnecting after, say, 30 seconds, with a short timeout value, then loop every, say, 10 seconds until we reconnect (see the sketch after this list)
  • Just give user a prompt, within a loop, so they can manually whack Enter to try reconnecting
  • Stick with the manual sleep timer entry, and just ensure it is explicitly documented, i.e. "we highly recommend figuring out how long your system takes to reboot before using this function"
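
A rough sketch of the first option, assuming Paramiko's SSHClient is used directly for the reconnect probe (function name, timings, and host-key policy are all illustrative):

import time
import paramiko

def wait_for_reboot(host, port=22, initial_wait=30, poll_every=10, timeout=5):
    # Give the host a head start, then poll until an SSH connection succeeds.
    time.sleep(initial_wait)
    while True:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(host, port=port, timeout=timeout)
            client.close()
            return
        except Exception:
            time.sleep(poll_every)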

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 11:38am EDT

Relations

  • Related to #201: Ambiguous sudo call in reboot function

Incorrect import of state.connections

Description

Currently fabric.operations is importing state.connections at module level like this:

from state import connections

This is most probably not wanted and is a silent, subtle bug waiting to explode. The reason is that Python rebinds names when imports happen that way. You can try it yourself by comparing the two connections variables with 'is' after startup, and you'll see that they compare as false. Given the shared nature of the variable and the fact that it's immutable, you are basically asking for trouble by rebinding it.

The fix is to never use that import and instead access connections through state. An even better fix is to not create new connections this way at all, but favor a connect() function that can access a cache instead. Also, instead of importing the connect function, it should be passed as an argument so that users can override it to implement custom behavior on connect (like tunnelling).
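
A tiny sketch of the suggested direction (the make_client() helper is purely illustrative; the point is to always resolve connections through the state module rather than a from-import):

import state

def connect(host_string):
    # Reading state.connections through the module means any later rebinding of
    # that attribute is seen here too, unlike "from state import connections".
    if host_string not in state.connections:
        state.connections[host_string] = make_client(host_string)  # make_client() is illustrative
    return state.connections[host_string]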


Originally submitted by Anonymous () on 2009-07-27 at 04:08pm EDT


Closed as Worksforme on 2011-04-24 at 11:12pm EDT

installation doc: does not list setuptools dependency

Description

I install packages from tarballs (tgz) and use setup.py normally.

There is a dependency on setuptools in the setup.py script.
I don't know if this is a documentation bug or a setup script bug.


Originally submitted by Anonymous () on 2009-08-06 at 06:40pm EDT

Relations

  • Duplicates #67: Prepare for packaging on PyPI, etc

Closed as Duplicate on 2009-11-08 at 11:32am EST

Specifying two hosts throws fabric into an infinite loop.

Description

This code throws fabric into an infinite loop:

from fabric.api import env

env.hosts = []

def production():
    env.hosts.append("test.com")
    print env.hosts

def staging():
    env.hosts.append("staging.test.com")
    print env.hosts

The servers keep getting appended to the list forever. I would imagine that this code runs once per host, so it keeps running. How can I make each one run once?


Originally submitted by Stavros Kalapothas (stavros) on 2009-08-09 at 10:46am EDT


Closed as Done on 2009-11-08 at 11:44am EST

Allow skipping of "bad" host connections

Description

Make it possible for Fabric to "skip" hosts that timeout (either via our explicit timing out, or any other pre-existing timeout condition) instead of aborting. This should not be the default behavior, but it should be configurable.

We may also want to extend this to skip hosts that encounter any connection problem, such as authentication failures, and even down to actual run/sudo-failure issues (in that case, almost a sort of post-warn_only, continue-Python-keyword like behavior. See #448 for a PR for this angle.)

See the comments for some more detailed thoughts/approaches.

(Note: Ticket was originally created to deal with allowing control over network timeouts, which has been moved over to #249)

(Also note: There are some likely-applicable patches over in #189 which really belong here. Clean that up sometime; #189 will be closed with a more limited implementation but those patches will still be there.)


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 05:26pm EDT

Relations

  • Related to #249: Add timeout support
  • Related to #348: Retry support for put, run, sudo
  • Duplicated by #131: Allow graceful user-controlled handling of connection failures
  • Related to #96: Remote 3-strikes auth failure causes traceback
  • Related to #189: Add option: skip password prompting upon failure

Fabric does not prompt for input when the host does.

Description

I am running pg_dump on the host and it prompts me for a password. Shouldn't Fabric pass this prompt on to the local machine? I don't want to have to enter the password in the command line...


Originally submitted by Stavros Kalapothas (stavros) on 2009-08-09 at 12:21pm EDT

Relations

  • Duplicates #7: Improved prompt detection and passthrough

Closed as Duplicate on 2009-11-08 at 11:45am EST

Make generic "persistence" context manager

Description

Take what context_managers.cd does now and make a generic version of it (i.e. anything-the-user-enters + '&&'), then rewrite cd to make use of the generic, similar to what was done with settings.
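
A minimal sketch of the generic version (using "prefix", one of the candidate names below, and assuming the active prefixes are tracked as a list on env that run/sudo then join with ' && '):

from contextlib import contextmanager

from fabric.api import env

@contextmanager
def prefix(command):
    # Every run/sudo call inside the block would be issued as
    # "<command> && <actual command>", just as cd() does for directory changes.
    env.command_prefixes.append(command)  # assumes env.command_prefixes exists as a list
    try:
        yield
    finally:
        env.command_prefixes.pop()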

Possible names:

  • state (possibly confused with fabric.state however)
  • prefix (technically accurate if not too elegant)
  • anything else?

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 04:08pm EDT


Closed as Done on 2010-06-18 at 11:35pm EDT

Look into adding "*" support for hosts/roles

Description

Some folks have expressed interest in allowing the use of e.g. @roles('*') to imply "use all defined roles".

I think that as long as roles are sufficiently fixed (see #31) this may not be necessary because one can just define env.roles = ['role1', 'role2'], then as long as one doesn't use @roles at all, the value of env.roles would be used (just as env.hosts is now.)

However, noting it in case there's some extra angle I'm not seeing.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-25 at 10:05am EDT

Relations

  • Related to #31: --roles doesn't work

Closed as Worksforme on 2011-03-10 at 11:44am EST

Use of _transport instead of the more appropriate get_transport()

Description

Fabric depends on Paramiko > 1.7. In all the 1.7 releases of Paramiko the suggested way of accessing the transport is the use of get_transport() instead of _transport. The main benefit is being able to interchangeably use Channels or SSHClients as generic 'client' objects.

From what I see there are 2 occurrences of _transport in the code and both are in operations.py. Incidentally again this would increase freedom to implement extensions to Fabric since it wouldn't rely on a specific private interface but on a public interface that Paramiko uses across multiple objects.


Originally submitted by Anonymous () on 2009-07-27 at 04:13pm EDT


Closed as Done on 2011-03-16 at 02:53pm EDT

rsync_project to a different ssh port

Description

When not using the standard port 22, rsync_project fails to detect the port setting used in the env.hosts variable (example.com:443).


Originally submitted by Anonymous () on 2009-08-05 at 11:35am EDT

Relations

  • Duplicated by #186: rsync to non-standard port not working

Closed as Done on 2009-12-13 at 03:51pm EST

Documentation enhancements (was: Enhance documentation for final release)

Description

  • Go over all existing prose docs, tweaking and enhancing where appropriate
  • Update all docstrings so they "read well" in the generated API:
    • General language / tense / etc
    • Where applicable, turn function() (in double-backticks) into function or ~fabric.module.function
      • Maybe put more effort into figuring out why the "default role" doesn't seem to work right?
    • Change all references to env vars into links to docs/usage/env.rst
  • See Steve Steiner's notes regarding what examples and other docs from the old site still apply to the new one, and implement.
  • Don't forget the top level text files, e.g. INSTALL
  • Migrate the text FAQ doc into prose Sphinx docs
  • Brainstorm other common sections we're currently missing (history, etc)

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 12:31pm EDT


Closed as Done on 2010-05-23 at 12:55pm EDT

Implement "dry run" feature

Description

I.e. allow execution to go as far as it can without actually effecting any changes.

A high level version of this would be simply stating "I would run(foo) and then sudo(bar) and so on". This would be largely useful for testing non-return-value-related logic as well as any host/role/dependency logic, to make sure the right tasks are run on the right hosts. Such an approach would avoid entering the operations at all, and the operations would avoid becoming more complex.

A lower level version would enter the operations, allowing for more useful debugging, i.e. finding out what characters would be escaped, tildes expanded and so forth. However, this would make the operations more complex as they would have to test for whether a dry run is occurring, and behave differently in that situation.

Finally, if we can come up with a good solution for the test suite such that no network connections (or local shell invocations) are actually created, that may work well for this purpose as well.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 04:20pm EDT

Relations

  • Related to #98: Optionally avoid using ssh if going to localhost

Implement tunnelling

Description

It should be possible to tunnel all the commands through a single entry point in the network.


Originally submitted by Anonymous () on 2009-07-27 at 05:22pm EDT


Relations

  • Related to #275: Consider forking Paramiko
  • Duplicated by #344: Tunnelling SSH over HTTP Proxies
  • Related to #78: Add Tunneling Context to Fab
  • Related to #72: SSH key forwarding

Add manpage

Description

At some point it would be a good idea to have at least a rudimentary Unix man page for the fab tool. Would be nice if any tools exist to build one out of our optparse configuration to avoid a (pretty irritating) violation of DRY.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-26 at 04:47pm EDT

Add command line flag for setting env vars

Description

This used to be called let and one could actually reimplement it with a task, since it used similar syntax to the current task argument stuff (let:foo=bar).

However, since I don't want to provide "builtin" tasks, best to do this as a flag, e.g. --set or --env or something. Probably make it repeatable and set a single var each time.
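
A hedged sketch of how a repeatable flag might be wired up with an optparse-based CLI (flag spelling undecided; this shows generic optparse usage, not Fabric's actual option-parsing internals):

import optparse

from fabric.state import env  # Fabric's shared settings dict

parser = optparse.OptionParser()
parser.add_option('--set', dest='env_settings', action='append', default=[],
                  help="Set an env var, e.g. --set foo=bar (repeatable).")
options, args = parser.parse_args(['--set', 'foo=bar', '--set', 'answer=42'])

for pair in options.env_settings:
    key, _, value = pair.partition('=')
    env[key] = value  # values arrive as plain strings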

Has same outstanding issue as task arguments re: setting non-string values (#69).


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 11:44am EDT

Improved prompt detection and passthrough

Description

Pre-intro

Apologies for ticket length; the issue at hand is not simple and has many overlapping factors/considerations. Consider skipping down to the bottom of the description, where there is a concise summary that should function as a tl;dr.

Intro

This ticket used to be partly about prompt detection. We're now of the opinion that detecting prompts beforehand (in order to know when to present users with a Python-level prompt) will always be painful and will never cover 100% of possible use cases. Instead, we feel that actual live interaction with the remote end (i.e. sending local stdin to the other side) will not only sidestep this problem, but be more useful and more in line with user expectations. See #177 for more on the "expect" approach.

The "live" approach itself has shortcomings, but none significantly worse than manually invoking ssh by hand, and anything in this space is certainly better than the "nothing" we have now.

Investigation into SSH and terminal behavior

Mostly because we can't really hope to offer "better" behavior than vanilla
ssh does. Plus this presents a learning opportunity -- all of the below
behaviors are reflected in Paramiko itself, as one might expect.

There are basically two issues at stake when performing fully interactive command line calls remotely: the mixing of stdout and stderr, and how stdin is echoed.

Stdout/stderr

Stdout and stderr mixing was tested with the following program (which prints 0
through 9, alternating between stdout and stderr, unbuffered).

#!/usr/bin/env python

import sys
from itertools import izip, cycle

for pipe, num in izip(cycle([sys.stdout, sys.stderr]), range(10)):
    pipe.write("%s\n" % num)
    pipe.flush()

No pty

When invoked normally (without -t), ssh appears to separate stdout
and stderr on at least a line-by-line basis, if not more so, insofar as we see all of stdout first, and then stderr. Printed normally:

$ ssh localhost "~/test.py"
0
2
4
6
8
1
3
5
7
9

With streams separated for examination:

$ ssh localhost "~/test.py" >out 2>err
$ cat out
0
2
4
6
8
$ cat err
1
3
5
7
9

Thus, pty-less SSH is going to look a bit different than the same program interacted with locally.

With pty

When invoked with a pty, we get the expected result of the numbers being in
order, but the streams are now combined together before we get to them (since
all we get is the output from the pseudo-terminal device on the remote end,
just as if we were reading a real terminal window). Printed normally:

$ ssh localhost -t "~/test.py"
0
1
2
3
4
5
6
7
8
9
Connection to localhost closed.

Examining the streams:

$ ssh localhost -t "~/test.py" >out 2>err
$ cat out
0
1
2
3
4
5
6
7
8
9
$ cat err
Connection to localhost closed.

Thus, the tradeoff here is "correct"-looking output versus the ability to get a
distinct stdout and stderr.

Echoing of stdin

No pty

Without a pty, ssh must echo the user's stdin wholesale (or hide it entirely,
though there do not appear to be options for this) and this means that password
prompts become unsafe. Sudo without a pty:

$ ssh localhost "sudo ls /"
Password:mypassword

.DS_Store
.Spotlight-V100
.Trashes
.com.apple.timemachine.supported
Applications
Developer
[...]

Note that the user's password, typed to stdin, shows up in the output. For
thoroughness, let's examine what went to which stream:

$ ssh localhost "sudo ls /" >out 2>err
mypassword
$ cat out
.DS_Store
.Spotlight-V100
.Trashes
.com.apple.timemachine.supported
Applications
Developer
[...]
$ cat err
Password:

As expected, the user's stdin didn't end up in the streams from the remote end
(ergo it is the local terminal echoing stdin, and not the remote end) and the
password prompt showed up in stderr.

With pty

Here's the same sequence but with -t enabled, forcing a pty:

$ ssh -t localhost "sudo ls /"
Password:
.DS_Store               Applications
.Spotlight-V100         .Trashes
Developer               [...]
Connection to localhost closed.

Note that in addition to not echoing the user's password, ls picked up on the
terminal being present and altered its behavior. This is orthogonal to our
research but is still a useful thing to keep in mind.

As before, use of pty means that all output now goes into stdout, leaving
stderr empty save for local output from the ssh program itself:

$ ssh -t localhost "sudo ls /" >out 2>err
$ cat out
Password:
.DS_Store               Applications
.Spotlight-V100         .Trashes
Developer               [...]
$ cat err
Connection to localhost closed.

And as with the previous invocation, our password never shows up, even on our
local terminal.

Non-hidden output

Finally, as a sanity test to ensure that non-password stdin is echoed by the
remote pty when appropriate, we remove a (previously created) test file with
rm's "are you sure" option enabled:

$ ssh -t localhost "rm -i /tmp/testfile"
remove /tmp/testfile? y
Connection to localhost closed.

And proof that it is the remote end doing the echoing -- our stdin shows up in
the stdout from the remote end:

$ ssh -t localhost "rm -i /tmp/testfile" >out 2>err
$ cat out
remove /tmp/testfile? y
$ cat err
Connection to localhost closed.

Conclusion

As seen above, there are a number of different behaviors one may encounter when
using, or not using, a pty. The tradeoff being, essentially, access to distinct
stdout and stderr streams (but garbled output and blanket echo of stdin) versus
a more shell-like behavior (but without the ability to tell the remote stderr
from stdout).

In our experience, the ssh program defaults to not using a pty, but the
average Fabric user is probably best served by enforcing one. New
users are more likely to expect "shell-like" behavior (such as proper
multiplexing of stdout and stderr, and hiding of password prompt stdin) and
Fabric already defaults to a "shell-like" behavior insofar as it wraps commands
in a login shell.

Summation of early comments

A summary of findings so far (contains up through comment 16):

  1. Python's default I/O buffering is typically line-by-line (linewise). I/O is
    not typically printed to the destination until a line ending is encountered.
    This applies both to input and output. (It's also why
    fabric.utils.fastprint was created -- one must manually flush output to
    e.g. stdout to get things like progress bars to show up reliably.)
  2. Fab's current mode of I/O is also linewise, partly because of point 1, and
    partly to allow printing of stdout and stderr streams independently. As a
    side effect, partial line output such as prompts will not be displayed to
    the Fabric user's console.
  3. As seen above, SSH's default buffering mode is mostly linewise, insofar as the
    default non-pty behavior mixes the two streams up but on a line by line
    basis, but it is still capable of presenting partial lines (prompts) when
    necessary.
  4. Because we cannot discern a reliable way of printing less-than-a-line output without moving to bytewise buffering, we'll need to switch to printing every byte as we receive it, in order for the user to see things such as prompts (or more complicated output, e.g. curses apps or things like top).
    • If/when the secret of ssh's print buffering is found, use that algorithm instead.
  5. Forcing Python's stdin to be bytewise requires the use of the Unix-only
    termios and tty libraries, but I believe there may be Windows
    alternatives. For now, we plan to focus on the best Unix-oriented approach
    and will implement Windows compatibility later if possible. (Sorry, Windows
    folks.)
  6. Obtaining remote data bytewise is a bit easier insofar as data from the
    client isn't linewise. However, shortening the size of the buffer throws a
    wrench in Fabric's current method of detecting whether there is no more
    output to be had, so we are currently experimenting with other approaches,
    specifically select.select (which, yes, is another Windows compatibility
    pain point.)
    • Any new solution should also hopefully obviate all the annoying, painful,
      error-prone issues with the current output_thread I/O loop, insofar as line
      remainders and such are concerned.
    • Ideally, as with select, this should also remove the need for threads
      entirely, which will make it easier to fully parallelize Fabric in the
      future, and kill another entire class of occasional problems.
  7. With bytewise output, we run into problems where the remote stdout and
    stderr get mixed up character-by-character (e.g. the last line of regular
    output can become garbled up with a "following" line containing a prompt, since
    many prompts print to stderr). Until/unless we can figure out how the
    regular SSH client accomplishes its "linewise but not really" buffering, the
    only way to avoid this problem is to set set_combine_stderr to True.
    • We could, and probably should, offer this as a setting in case users have
      need for it.
  8. And without using a pty, we are forced to manually echo all stdin, just as
    how vanilla SSH does (see previous major section). This then presents issues
    with password prompts becoming insecure.

Putting it all together

So, here's the planned TODO for this issue, given all of the above and the
current state of the feature branch (namely, hardcoded bytewise stdin, skipping
out on the output threads in favor of select, and printing prefixes after
each newline):

  1. Abstract out the currently-implemented stdin manipulation; it essentially
    requires a try/finally and I think it'd be handy to have as a
    context manager or similar (a rough sketch follows this list).
    • Possibly also make it configurable, since bytewise stdin is not
      absolutely required much of the time. Still feel it should be enabled by
      default, though.
    • Offer an option to allow suppression of stdin echoing, just because.
  2. Expose set_combine_stderr as a user-facing option. Default should be on -- not too many people need the distinct
    stderr access, and with it off, output is very likely to be garbled
    unexpectedly. It's an advanced user sort of thing.
  3. Change the pty option to default to True (currently False). This will
    provide the smoothest user experience, and since we're combining the streams
    by default anyway, it's a no-brainer.
  4. Decide what to do with output_thread's password detection and response.
    This may become more difficult with bytewise buffering, and was originally
    implemented to get around the lack of stdin.
    • Drop the feature entirely, since users can now enter prompts
      interactively. Dropping features isn't great, though.
    • Repackage it as a "password memory" feature (it needs an overhaul
      anyways). Maybe as part of #177.
    • Keep it entirely as-is, and just use the output capturing as the read
      buffer in place of the current approach (checking the as-big-as-possible
      chunk from the remote end). Possibly quickest. We won't be able to hide the
      prompt itself from user eyes anymore (that's the biggest reason #80 can't
      work) but that's not required, just nice.
  5. Figure out if it's possible to omit printing the output prefix in lines where the user's input is being echoed by the remote end. Currently this results in said prefix showing up mid-line in some prompt situations (usually where the echoed stdin is the first data to show up in the stdout buffer, though it could also be a problem once the user hits Enter to submit the prompt too).
    • Might be able to conditionally hide prefix in cases where the byte coming in to stdout is the same as the last byte seen on stdin, but that is messy (e.g. output coming in long after the user is done typing -- do we add time memory? how much of one? etc)
    • Depending on exactly how it shakes out, this may not even be an issue for anything but the case where the typed input's echo is the first stdout. will have to see.
  6. Add an interact_with that makes use of invoke_shell, assuming it can work seamlessly with the final exec_command based solution without code duplication.
  7. Come up with Windows-compatible solutions, if possible, for all Unix-isms
    used in this effort.
  8. Note in the parallel-related ticket(s) that this solution will make it more difficult for a parallel execution setup to function, insofar as bytewise-vs-linewise output is concerned. A truly parallel execution would be incredibly confusing even on a line-by-line basis, however, so a better solution is likely to be needed anyways.
  9. Reorganize operations.py and network.py -- nuke old outdated code, shuffle around new code, it should ideally live in another module that is neither network nor operations (?)
  10. Document all of the above changes thoroughly, and attend to related tickets re: tutorial etc.
    • Update changelog (the pty default is now backwards incompatible!)
    • Make sure users know they need to deactivate both pty and
      combine-streams options in order to get distinct streams.
    • Update skeleton usage docs re: interactivity
    • Search for mentions of use of the stderr attribute and update them since it's not populated by default anymore
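
Regarding item 1 above, a rough sketch of the stdin manipulation as a context manager (Unix-only, per the termios/tty caveat; the function name and details are illustrative):

import sys
import termios
import tty
from contextlib import contextmanager

@contextmanager
def char_buffered(pipe):
    # Switch the local tty into character-at-a-time ("cbreak") mode so each
    # keystroke can be forwarded to the remote end immediately, restoring the
    # original settings afterwards even if the wrapped command fails.
    old_settings = termios.tcgetattr(pipe)
    tty.setcbreak(pipe)
    try:
        yield
    finally:
        termios.tcsetattr(pipe, termios.TCSADRAIN, old_settings)

# Usage: with char_buffered(sys.stdin): ... run the bytewise I/O loop ...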

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 05:24pm EDT

Relations

  • Related to #73: Once Git can be used, update tutorial to use it.
  • Duplicated by #49: Fabric does not prompt for input when the host does.
  • Related to #80: See whether paramiko.SSHClient.invoke_shell + paramiko.Channel.send is feasible
  • Duplicated by #153: Hangs When Encountering an Invalid Security Certificate
  • Related to #177: Investigate pexpect/expect integration
  • Related to #20: Rework output mechanisms
  • Related to #182: New I/O mechanisms print "extra" blank lines on \r
  • Related to #183: Prompts appear to kill capturing (now with bonus test server!)
  • Related to #190: Sudo prompt mixed up a bit
  • Related to #192: Per-user/host password memory (was: Possible issue in password memory)
  • Related to #193: Terminal resizing support/detection
  • Related to #196: open_shell() doesn't do readline too well
  • Related to #163: Formattable output prefix.
  • Related to #197: Handle running without any controlling tty
  • Related to #204: Better in-thread exception handling
  • Related to #209: Some password prompts no longer specify the user
  • Related to #212: Hitting Ctrl-C during I/O still requires shell reset
  • Related to #219: Blank lines after silent commands
  • Related to #223: Full stack tests choking on passphrase-vs-password issue

Closed as Done on 2010-08-06 at 11:22pm EDT

Allow for storing/using metadata about hosts

2018 update: this is an old, old ticket and it is primed to actually be implemented now that 2.0 is available to build features on top of.

The tl;dr is that users need ways to store rich data about their target hosts (and, usually, groups or "roles"), first, and then need a way of addressing that information when doing API or CLI level things, second.

Fabric 2 is built on Invoke which has a powerful configuration system, which is then exposed to tasks via a 'context' object. It seems likely that we will build this feature on those, something like (but not necessarily limited to):

  • Standardize on some relatively generic config-style format for representing hosts; basically Connection's parameters (user, hostname, port, connect_kwargs, timeout, etc etc) and probably with a bit more on top
    • or at least the opportunity for users to put arbitrary data on top and have that show up in the objects exposed to the user at runtime
  • Update Context so it has a lightweight link to some other class or classes which turn the 'raw' config into usable API objects, based on some query or lookup
    • Open question is whether these show up as actual Connections or if there's an intermediate representational class like Host

For example (again: just an off the cuff example!) perhaps we'll set it up so host data gets its own config-style file that lives alongside the regular config files - say, $PROJECT/hosts.(yml|json|py|etc):

web1:
  host: web1
  # Implicit local user, as with Connection
web2:
  host: web2
  user: admin2
  port: 2223
db:
  # Implicit dict-key-is-the-host-value, i.e. implicit "host: db"
  user: dbadmin

Then perhaps there's something like this (using pure Invoke style tasks for now, though certainly this would want the ability to use -H or decorators to select target hosts to 'wrap' the task, as in v1):

@task
def deploy_webs(c):
  # Assuming auto-creation of Connection objects...
  for cxn in [c.find_host('web1'), c.find_host('web2')]:
    cxn.run("hostname")

There are a whole lot of different ways we could slice and dice this, and a lot of directions that it could be extended in; the emphasis should be on giving users as much power and control as reasonably possible and then getting out of their way. Ideally all we'll do is standardize on some very basic way of shoveling data into Connection objects, and add support to the core CLI exec framework to arrive at something resembling Fabric 1 re: selecting execution targets.

All while exposing these mechanisms publicly so advanced users can take matters into their own hands - again I expect anybody beyond the most basic use cases to be highly likely to fall back on a "regular Invoke style tasks + making the needful API calls within those task bodies" approach.


Original description

Right now a "host" is solely limited to user/hostname/port. Would be nice, even just for user fabfiles, to store additional information such as operating system, per-host settings like env.shell, and so forth.

Note that this may (or may not) be a good time to reconsider changing the default value of shell to /bin/sh.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 05:02pm EDT

Relations

  • Duplicated by #43: /bin/bash is not always available – per host shell configuration?
  • Related to #97: In some situations, pressing Enter does not reuse the previous password
  • Related to #138: env.port not honored if host string lacks port specification
  • Related to #3: Make use of ssh_config where possible
  • Related to #76: Use decorator to define tasks

Refactor run/sudo to use single subroutine

Description

They share a significant amount of code and it's a pretty big (and time wasting) violation of DRY. Merge them into a subroutine branching where necessary, then call that subroutine from the "real" functions.
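
A hedged sketch of the shape this might take (the _shell_wrap and _sudo_prefix helpers are illustrative stand-ins, not existing internals, and the actual SSH dispatch is elided):

def _shell_wrap(command, shell=True, shell_path='/bin/bash -l -c'):
    # Illustrative: wrap the command in a login shell when requested.
    if not shell:
        return command
    return '%s "%s"' % (shell_path, command.replace('"', '\\"'))

def _sudo_prefix(user=None):
    # Illustrative: build the sudo invocation, optionally as another user.
    if user is not None:
        return 'sudo -S -p "sudo password:" -u "%s"' % user
    return 'sudo -S -p "sudo password:"'

def _run_command(command, shell=True, sudo=False, user=None):
    # Shared core: wrap/escape once, branch only where run and sudo differ.
    real_command = _shell_wrap(command, shell)
    if sudo:
        real_command = '%s %s' % (_sudo_prefix(user), real_command)
    # ... hand real_command off to the SSH channel here ...
    return real_command

def run(command, shell=True):
    return _run_command(command, shell=shell)

def sudo(command, shell=True, user=None):
    return _run_command(command, shell=shell, sudo=True, user=user)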


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 04:12pm EDT


Closed as Done on 2009-11-08 at 11:30am EST

Allow arbitrary shell commands to be specified on the command line

Description

As far as I can tell, the primary reason this is useful vs ssh user@host -t <whatever> is because one could specify Fabric roles (though that may also be possible with ssh_config?).

A possible syntax would be to use double dashes, like many commands do, to say "ok, after this point, no flags, it's all a big string to do X with", so e.g. fab --role foo -- ifconfig -a.


Originally submitted by Jeff Forcier (bitprophet) on 2009-07-21 at 04:53pm EDT


Closed as Done on 2010-06-18 at 09:24pm EDT

Consider adding a "fab shell" back in

Description

  • Probably accessible as an option causing different behavior from normal, e.g. fab --run-shell or fab --shell
  • Depending on how it shakes out, may be best to simply load up IPython with a line or two already evaluated, e.g. the API import.
    • If that's not possible, still probably best to leverage IPython as a library, which I've seen done before

Originally submitted by Jeff Forcier (bitprophet) on 2009-07-20 at 05:18pm EDT

0.9 doc first example is wrong

Description

http://docs.fabfile.org/0.9/

The about demo's command line reads
$ fab -H localhost,linuxbox uname
and should read
$ fab -H localhost,linuxbox host_type


Originally submitted by Anonymous () on 2009-08-05 at 02:12pm EDT


Closed as Done on 2009-11-08 at 11:32am EST
