
webdriver-wharf's Introduction

webdriver-wharf

A docker-based warehouse of Selenium servers running chrome and firefox, ready to be checked out for use by Selenium WebDriver clients.
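From a client's point of view, checkout is a small HTTP exchange followed by an ordinary Remote WebDriver session. A rough Python sketch, assuming a /checkout and /checkin/<name> pair of endpoints and a JSON response describing the allocated container (the endpoint names and response fields are assumptions, not documented here):

import requests
from selenium import webdriver

WHARF = "http://wharf-host:4899"

# Assumed endpoint: ask wharf for a container from the pool.
info = requests.get(WHARF + "/checkout").json()
name = info["name"]                      # assumed field
webdriver_port = info["webdriver_port"]  # assumed field: host port mapped to 4444

driver = webdriver.Remote(
    command_executor="http://wharf-host:%s/wd/hub" % webdriver_port,
    desired_capabilities={"browserName": "firefox"},
)
try:
    driver.get("https://example.com")
finally:
    driver.quit()
    # Check the container back in so it can be recycled before
    # WEBDRIVER_WHARF_MAX_CHECKOUT_TIME reaps it.
    requests.get(WHARF + "/checkin/" + name)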

Configuration is done entirely with environment variables (detailed below), which should make it trivial to use something other than systemd to manage the wharf service, should systemd be unavailable.

systemd example config

/etc/systemd/system/webdriver-wharf.service

[Unit]
Description=WebDriver Wharf
After=docker.service

[Service]
Type=simple
ExecStart=/usr/bin/webdriver-wharf
EnvironmentFile=/etc/default/webdriver-wharf

[Install]
WantedBy=multi-user.target

Note that on RPM-based systems, EnvironmentFile should probably be /etc/sysconfig/webdriver-wharf
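A minimal environment file might look like the following; the variable names are the ones documented below, and the values are only illustrative:

/etc/default/webdriver-wharf

WEBDRIVER_WHARF_IMAGE=quay.io/redhatqe/selenium-standalone
WEBDRIVER_WHARF_POOL_SIZE=8
WEBDRIVER_WHARF_MAX_CHECKOUT_TIME=3600
WEBDRIVER_WHARF_LISTEN_PORT=4899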

Docker example config

docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v wharf-data:/var/run/wharf/ \
  -e WEBDRIVER_WHARF_IMAGE=quay.io/redhatqe/selenium-standalone \
  -e WEBDRIVER_WHARF_POOL_SIZE=16 \
  --publish 4899:4899 \
  --net=host \
  --detach \
  --privileged \
  --name wharf-master \
  wharf:latest

Environment Variables

WEBDRIVER_WHARF_IMAGE

The name of the docker image to spawn in the wharf pool.

Defaults to cfmeqe/sel_ff_chrome, but can be any docker image that exposes a selenium server on port 4444 and a VNC server on port 5999 (display :99). The sel_ff_chrome image also exposes nginx's json-based file browser on port 80.

WEBDRIVER_WHARF_POOL_SIZE

Number of containers to keep in the active pool, ready for checkout.

Defaults to 4

WEBDRIVER_WHARF_MAX_CHECKOUT_TIME

Maximum time, in seconds, a container can be checked out before it is reaped.

Defaults to 3600; set to 0 for no max checkout time (probably a bad idea)

WEBDRIVER_WHARF_IMAGE_PULL_INTERVAL

Interval, in seconds, at which wharf checks for updates to the docker image.

Defaults to one hour (3600 seconds)

WEBDRIVER_WHARF_REBALANCE_INTERVAL

Interval, in seconds, at which wharf rebalances the active container pool.

Frequent rebalancing should not be necessary, and indicates a wharf bug.

Defaults to six hours (21600 seconds)

WEBDRIVER_WHARF_LOG_LEVEL

Loglevel for wharf's console spam. Must be one of python's builtin loglevels: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'

Defaults to 'INFO', which offers a running commentary on the starting, stopping, and destruction of containers

WEBDRIVER_WHARF_LISTEN_HOST

Host address to bind to.

Defaults to 0.0.0.0 (all interfaces)

WEBDRIVER_WHARF_LISTEN_PORT

Host port (TCP) to bind to.

Defaults to 4899

WEBDRIVER_WHARF_START_TIMEOUT

How long, in seconds, wharf will wait when starting a container before deciding the container has failed to start.

Defaults to 60

WEBDRIVER_WHARF_DB_URL

Database URL that wharf should connect to for container tracking.

By default, wharf creates and maintains its own SQLite database in a sane location, though not necessarily the "correct" one according to the Filesystem Hierarchy Standard

If set, this value is passed directly to sqlalchemy with no further processing. See the sqlalchemy docs for information on constructing URLs. When using another database engine, note that wharf does not install the database driver for you, needs permission to create tables, and most importantly has not been tested with anything other than SQLite.
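For example, pointing wharf at a PostgreSQL database would look something like this (illustrative host and credentials, following SQLAlchemy's engine URL format):

WEBDRIVER_WHARF_DB_URL=postgresql://wharf:secret@db.example.com:5432/wharf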

webdriver-wharf's People

Contributors

mshriver, psav, ronnypfannschmidt


webdriver-wharf's Issues

Allow wharf to work with a remote Docker host

Currently, webdriver-wharf expects that the docker daemon is being run on the same host as it is. There are several advantages to allowing wharf to connect to a remote docker host, such as:

  • Decoupling for increased resiliency. Upgrades and maintenance become easier because each host can be worked on individually.
  • Re-use existing docker infrastructure. There's nothing special about wharf that makes it require its own docker daemon. You could easily piggyback off someone else's docker host(s) and further reduce maintenance burden.
  • Allows for use with cloud container services, like Amazon EC2 Container Service and maybe OpenShift.

This really shouldn't be too difficult to change (I think). Just make some modifications to the docker calls and some config options for docker host location. This one isn't as immediately important to me as the port issue, so that should take priority.
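A rough sketch of the connection side of such a change, using docker-py and a hypothetical WEBDRIVER_WHARF_DOCKER_HOST option (neither the option nor this code exists in wharf today):

import os

import docker

# Hypothetical option; today wharf always talks to the local docker socket.
docker_host = os.environ.get("WEBDRIVER_WHARF_DOCKER_HOST",
                             "unix://var/run/docker.sock")
client = docker.DockerClient(base_url=docker_host)  # e.g. tcp://docker.example.com:2376
client.ping()  # fail fast if the daemon is unreachable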

document docker usage

currently we run with docker run -v /var/run/docker.sock:/var/run/docker.sock -e WEBDRIVER_WHARF_POOL_SIZE=16 --publish 4899:4899 --detach --privileged --net=host --name wharf-master -v wharf-data:/var/run/wharf/ ronnypfannschmidt/webdriver-wharf

Allow use of multiple docker images

Currently, you can only specify a single docker image for use with wharf. This requires a monolithic docker image containing all the browsers that could be needed. It also limits the potential use cases. Setting up multiple wharf instances for multiple images is silly when this feature could easily be implemented. The complicated part would be determining how many spare containers to keep running for each image. May need to be a dynamic thing via REST calls rather than a config option.

This one is actually pretty low priority for me, I'm just getting all my ideas out while I have them on my mind.
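One hypothetical shape for a static version of this, purely for illustration (nothing like it exists in wharf today), would be a per-image pool size:

WEBDRIVER_WHARF_IMAGES=cfmeqe/sel_ff_chrome:4,selenium/standalone-firefox:2

A dynamic variant would expose the same image-to-pool-size mapping through the REST API instead of a config option.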

Fix wharf for latest version of docker

I'm getting an error with docker version: 1.12.6, build 96d83a5/1.12.6
APIError: 400 Client Error: Bad Request ("{"message":"starting container with HostConfig was deprecated since v1.10 and removed in v1.12"}")

Description of problem and solution is here: docker/docker-py#1267
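The fix described there is to stop passing HostConfig options to start() and instead supply them at create time. A rough sketch against docker-py's 1.x API (client, container, and image_name are illustrative names, not wharf's actual code):

# Before (rejected by docker >= 1.12): HostConfig options passed to start()
#   client.start(container.id, privileged=True, port_bindings=container.port_bindings)

# After: build a HostConfig up front and attach it at creation time
host_config = client.create_host_config(
    privileged=True,
    port_bindings=container.port_bindings,
)
new_container = client.create_container(
    image=image_name,
    ports=list(container.port_bindings),  # container-side ports to expose
    host_config=host_config,
)
client.start(new_container)  # no HostConfig kwargs here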

Try not to allocate ports that are in use

https://github.com/seandst/webdriver-wharf/blob/master/webdriver_wharf/interactions.py#L53

At the moment, wharf tries not to allocate ports that it's already using, but has no regard for other services that may be listening on the box. _next_available_port should make an attempt to check those ports before binding (which will create a nice race condition between checking the port and actually binding it, but is probably still an improvement...?).

Additionally, it would probably be good to decouple the ports (e.g. container 17 doesn't always get ports x917), and to check (un)availability for each individual service.
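A best-effort availability check is easy to sketch with the standard library (illustrative only, not wharf's current _next_available_port logic):

import socket

def port_appears_free(port, host="0.0.0.0"):
    # Best effort: another process can still grab the port between this
    # check and the eventual docker bind, so bind failures must still be handled.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
        return True
    except socket.error:
        return False
    finally:
        sock.close()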

Cannot start container: port has already been allocated

[ERROR] apscheduler.executors.default Job "balance_containers (trigger: interval[6:00:00], next run at: 2014-09-10 22:30:41 UTC)" raised an exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/apscheduler/executors/base.py", line 108, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "/usr/lib/python2.7/site-packages/webdriver_wharf/app.py", line 289, in balance_containers
    interactions.start(container_to_start)
  File "/usr/lib/python2.7/site-packages/webdriver_wharf/interactions.py", line 98, in start
    client.start(container.id, privileged=True, port_bindings=container.port_bindings)
  File "/usr/lib/python2.7/site-packages/docker/client.py", line 818, in start
    self._raise_for_status(res)
  File "/usr/lib/python2.7/site-packages/docker/client.py", line 87, in _raise_for_status
    raise errors.APIError(e, response, explanation=explanation)
APIError: 500 Server Error: Internal Server Error ("Cannot start container 299c48ee3416de1bc27c9e153f52ab49a609631edd20a049a721a678dd3e1879: port has already been allocated")

This happens when wharf believes a container has been destroyed, but docker is still in the process of tearing it down. Unfortunately it breaks the entire balance_containers run, so we probably need to guard against APIError and just have balance_containers sleep a second and continue.
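A sketch of that guard, reusing the names from the traceback above (the surrounding loop and containers_to_start are assumed, not wharf's actual code):

import time

from docker.errors import APIError
from webdriver_wharf import interactions

for container_to_start in containers_to_start:  # assumed loop in balance_containers
    try:
        interactions.start(container_to_start)
    except APIError:
        # docker may still be tearing down a container that holds this port;
        # skip it for now and let the next rebalance retry
        time.sleep(1)
        continue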

Fix REST API to properly use HTTP methods

Known issue. The current REST API uses all GET requests. This could be cleaned up to work like any other standard REST service by properly using the GET, POST and DELETE methods of the HTTP spec.
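One possible mapping, shown only as an illustration (these are not wharf's current endpoints):

GET    /containers         list the pool and checkout status
POST   /containers         check out a container
DELETE /containers/<name>  check a container back in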

OpenShift support

Allow launching containers in a remote OpenShift cluster. Some image tuning would probably be needed, since OpenShift runs containers unprivileged by default.

Container expiration not updated on checkout

I've had several occasions where containers were destroyed prematurely. I expected that after checkout, they would not be forcefully destroyed until TIMEOUT was reached (starting from the time I checked out the container). However, the behavior appears to be that the container is destroyed once TIMEOUT has elapsed since it was created, regardless of checkout status.
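The expected behavior amounts to stamping the container at checkout time and measuring TIMEOUT from that stamp rather than from creation; a sketch with hypothetical field and function names:

import time

def checkout(container):
    container.checked_out_at = time.time()  # hypothetical field, set at checkout

def is_expired(container, max_checkout_time):
    # Expiration is measured from checkout, not from container creation.
    if container.checked_out_at is None:
        return False
    return time.time() - container.checked_out_at > max_checkout_time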
