
Introduction

This repository contains various scripts which SUSE uses to automate development, testing, and CI (continuous integration) of the various components of SUSE Cloud, i.e. OpenStack and Crowbar.

Scripts

This project has several scripts for different automated tasks. Some of them are:

  • create-vm.sh: creates a fresh KVM VM via libvirt.
  • crowbar-prep.sh: prepares a host for Crowbar admin node installation.
  • /scripts/mkcloud: builds a SUSE Cloud environment for development or testing purposes.
  • repochecker: tries to solve runtime dependencies for a given repository.
  • mkcloudruns: runs multiple copies of mkcloud for various scenarios. More details are in the README.

Documentation

Crowbar mkcloud deployments

Find out more in /docs/mkcloud.md

Unified Cloud deployments

Find out more in /docs/cloud/testing.md

devstack deployments

Find out more in /docs/devstack/

License

Files in this repository are licensed under the Apache 2.0 license unless stated otherwise. See the LICENSE file for details.

Contributing

This project uses pull requests to process contributions, and continuous integration jobs test that your changes are OK to be merged.

It's recommended to read Contributing to Open Source on GitHub and Forking Projects if you want to get a better understanding of how GitHub pull requests work.

Testing your changes

The syntax of the shell scripts is checked using bashate, which you can install by running:

$ sudo pip install bashate

Once you have installed bashate and made your changes, check the syntax of the shell scripts by running make test. Here is example output from a successful run:

$ make test
cd scripts ; for f in *.sh mkcloud mkchroot jenkins/{update_automation,*.sh} ; do echo "checking $f" ; bash -n $f || exit 3 ; bash8 --ignore E010,E020 $f || exit 4 ; done
checking compare-crowbar-upstream.sh
checking create-vm.sh
checking crowbar-prep.sh
checking mkcloud-crowbar-logs.sh
checking qa_crowbarsetup.sh
checking setenv.2.sh
checking setenv.sh
checking mkcloud
checking mkchroot
checking jenkins/update_automation
checking jenkins/qa_openstack.sh
checking jenkins/qa_tripleo.sh
checking jenkins/update_tempest.sh
cd scripts ; for f in *.pl jenkins/{apicheck,jenkins-job-trigger,*.pl} ; do perl -c $f || exit 2 ; done
analyse-py-module-deps.pl syntax OK
jenkins/apicheck syntax OK
jenkins/jenkins-job-trigger syntax OK
jenkins/cloud-trackupstream-matrix.pl syntax OK
jenkins/jenkinsnotify.pl syntax OK
jenkins/openstack-unittest-testconfig.pl syntax OK
jenkins/track-upstream-and-package.pl syntax OK

Shellcheck

You can run ShellCheck locally to check for warnings and suggestions for shell scripts.

First install it:

$ zypper in ShellCheck

Then run it with:

$ make shellcheck

Note that ShellCheck currently reports a large number of warnings and errors, so this check is expected to fail until those issues are fixed.

Jenkins jobs

There are manually maintained jobs, and some jobs now use jenkins-job-builder, which defines jobs in YAML format. New jobs should always be defined in YAML.

The jenkins-job-builder jobs are deployed automatically (once per day) via the jenkins job cloud-update-ci.

They can also be deployed manually via Makefile targets:

make cisd_deploy # deploys ci.suse.de jobs
make cioo_deploy # deploys ci.opensuse.org jobs

This requires setting up jenkins job builder locally:

  • Install it: zypper in python-jenkins-job-builder
  • Get the APIKEY from the CI web UI (Profile / Configure / Show API Key)
  • Create jenkins_jobs.ini as described below

Both Makefile targets need a valid jenkins_jobs ini file.

  • make cisd_deploy looks for /etc/jenkins_jobs/jenkins_jobs-cisd.ini
  • make cioo_deploy looks for /etc/jenkins_jobs/jenkins_jobs-cioo.ini

See /scripts/jenkins/jenkins_jobs.ini.sample
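
The sample file is the authoritative reference; a minimal jenkins_jobs ini typically looks roughly like this (the values shown are placeholders):

```ini
[jenkins]
user=<your-username>
password=<APIKEY>
url=https://ci.suse.de
```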

To update a single job on ci.suse.de, run:

jenkins-jobs --ignore-cache update \
    jenkins/ci.suse.de/:jenkins/ci.suse.de/templates/ <name-of-one-job>

For this you need a local ini file, passed to the above command via the --conf parameter.
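
Putting both together, a full invocation might look like this (the ini path is illustrative):

```shell
jenkins-jobs --conf ~/jenkins_jobs-cisd.ini --ignore-cache update \
    jenkins/ci.suse.de/:jenkins/ci.suse.de/templates/ <name-of-one-job>
```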

To tune the Parsed Console Output of mkcloud jobs, edit the rules file according to the documented rules file syntax.

Contributors

abelnavarro, aplanas, aspiers, bmwiedemann, cmurphy, dirkmueller, djz88, evalle, flaviodsr, gosipyan, janzerebecki, jdsn, jennywei8, jgrassler, jsuchome, kbaikov, maximilianmeister, mbelur, mjura, nicolasbock, ramak-suse, rhafer, rtamalin, s-t-e-v-e-n-k, saschpe, sayalilunkad, skazi0, stefannica, toabctl, vuntz


Issues

Explore alternative options for mkcloud deployment

Unfortunately, setting up a test environment with mkcloud takes quite some time, and mkcloud only allows one snapshot at a time, which makes it difficult to test and iterate quickly locally.

We should try to identify alternative routes to faster mkcloud deployments for development purposes.

Some bullet points from my (still ignorant) point of view of mkcloud:

  • Minimising re-downloading of images (#223)
  • Faster download of images (#1139)
  • Using docker containers?
    • Comment: This could help us test crowbar at a scale much easier locally.
  • Using pre-packaged images for the nodes (like the admin?)
    • Question: Is it necessary to pxe reinstall the nodes on each deployment?
  • Using pre-packaged images for the nodes and admin with the software already installed, and only running the updater on deploy (removing the need to download extra packages)
  • Allowing a proxy to be set on the nodes so any downloads may already be cached in the proxy

Looking for some comments here to see if it's feasible or not, and what else could be looked at.

Speeding up the product itself (not mkcloud)

  • Only run subset of roles in chef-client run instead of all roles (part of the crowbar orchestration epic) - this will be the biggest speed up by far, because currently applying each proposal repeats the same recipes over and over again.

setupcompute and instcompute are misnomers

The setupcompute and instcompute steps don't just set up and install compute nodes; they also handle controller and storage nodes. So they should be renamed to something more general. I can't yet think of the right word though, sorry ;-)

Apache keystone.conf has extra noise at the bottom

The extra noise is causing problems with logging in to the Horizon Dashboard.

Comment out the noise as shown below to get back to a healthy state.

Listen 5000
Listen 35357

<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=2 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LimitRequestBody 114688
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/keystone/keystone.log
CustomLog /var/log/keystone/keystone_access.log combined

<Directory /usr/bin>
    Require all granted
</Directory>
</VirtualHost>

<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=2 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LimitRequestBody 114688
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/keystone/keystone.log
CustomLog /var/log/keystone/keystone_access.log combined

<Directory /usr/bin>
    Require all granted
</Directory>
</VirtualHost>

#Alias /identity /usr/bin/keystone-wsgi-public
#<Location /identity>
#    SetHandler wsgi-script
#    Options +ExecCGI
#    WSGIProcessGroup keystone-public
#    WSGIApplicationGroup %{GLOBAL}
#    WSGIPassAuthorization On
#</Location>

#Alias /identity_admin /usr/bin/keystone-wsgi-admin
#<Location /identity_admin>
#    SetHandler wsgi-script
#    Options +ExecCGI
#    WSGIProcessGroup keystone-admin
#    WSGIApplicationGroup %{GLOBAL}
#    WSGIPassAuthorization On
#</Location>

mkcloud proposal trying to deploy manila proposal with GM5+up

With: cloudsource=GM5+up want_sles12=1 want_ceph=1 mkcloud .. proposal

Finished proposal heat(default) at: Fri Sep 11 07:22:43 UTC 2015
Usage: crowbar <area> <subcommand>
  Areas: batch ceilometer ceph cinder crowbar database deployer dns glance heat ipmi keystone logging machines network neutron nfs_client node_state nova nova_dashboard ntp pacemaker provisioner rabbitmq reset reset_nodes reset_proposal suse_manager_client swift tempest trove updater
Starting proposal manila(default) at: Fri Sep 11 07:22:53 UTC 2015
No hooks defined for service: manila
Usage: crowbar <area> <subcommand>
  Areas: batch ceilometer ceph cinder crowbar database deployer dns glance heat ipmi keystone logging machines network neutron nfs_client node_state nova nova_dashboard ntp pacemaker provisioner rabbitmq reset reset_nodes reset_proposal suse_manager_client swift tempest trove updater
Error: 'crowbar manila proposal --file=/root/manila.default.proposal edit default' failed with exit code: 255

$h1!!
Error detected. Stopping mkcloud.

rebootcloud step breaks when admin node is unreachable

The code really should be more robust than this.

+ echo '============> MKCLOUD STEP START: rebootcloud <============'
============> MKCLOUD STEP START: rebootcloud <============
+ echo

+ sleep 2
+ echo rebootcloud
+ cmd_parameters=rebootcloud
+ cmd=rebootcloud
+ rebootcloud rebootcloud
+ onadmin rebootcloud
+ local cmd=rebootcloud
+ shift
+ sshrun onadmin_rebootcloud
+ cat
+ env
+ grep -e '^debug_' -e '^pre_' -e '^want_' -e '^net_' -e '^nodenumber' -e '^clusterconfig'
+ sort
+ scp -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null ./qa_crowbarsetup.sh mkcloud.config [email protected]:
ssh: connect to host 192.168.217.10 port 22: Connection timed out
lost connection
+ [[ '' = 1 ]]
++ hostname
+ ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null [email protected] 'echo mkcloud1 > cloud ; . qa_crowbarsetup.sh ; onadmin_rebootcloud'
ssh: connect to host 192.168.217.10 port 22: Connection timed out
+ return 255
+ return 255
+ ret=255
+ '[' 255 '!=' 0 ']'
+ set +x

$h1!!
Error detected. Stopping mkcloud.
The step 'rebootcloud' returned with exit code 255
Please refer to the rebootcloud function in this script when debugging the issue.
ssh: connect to host 192.168.217.10 port 22: Connection timed out
ssh: connect to host 192.168.217.10 port 22: Connection timed out

Environment Details
-------------------------------
    hostname: mkcloud1.cloud.suse.de
     started: Mon 23 Nov 11:07:20 UTC 2015
       ended: Mon 23 Nov 11:15:53 UTC 2015
-------------------------------
 cloudsource: develcloud6
    TESTHEAD: 1
 want_test_updates: 1
    scenario: 
  nodenumber: 4
     cloudpv: 
 UPDATEREPOS: 
    cephvolumenumber: 0
 upgrade_cloudsource: 
-------------------------------
want_ipmi=false
want_sles12sp1=1
want_test_updates=1
want_sles12=1
-------------------------------

"soc-ci worker-pool-reserve" doesn't work with hosts running mkcloud non-root

It seems that recently some mkcloud hosts (e.g. mkchm) were changed to run mkcloud as the non-root user jenkins. With such a setup it's no longer possible to reserve a build slot using soc-ci worker-pool-reserve, as that tool currently only handles the case where the pool directory is located in /root.

According to @jdsn soc-ci was supposed to be fixed like this:
"the basic idea was, that soc-ci tries to connect as jenkins first, and falls back to root (long term goal is to have all workers run as non-root), to save ssh connections, soc-ci caches its results in a local dot-file and in case of an error or after an expiry period the host is probed again (first jenkins then root) "

allocpool is undocumented and written in Perl

allocpool is a small but critical part of our CI. It would be helpful if it was documented and also written in a language which doesn't violate our policy. This is increasingly important as new people join the team who have been hired as Python/Ruby/shell hackers, not Perl hackers.

It should be possible to pick the proposals that are deployed by mkcloud

When I want to create a test environment for let's say ceilometer, I currently have to deploy all barclamps up to ceilometer manually. Then I can fetch the updated ceilometer barclamp and deploy it.

It should be possible to tell mkcloud which proposals to deploy. This could be done with an environment variable containing a space-separated list of the requested proposals, or by using crowbar batch in some way.
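
As a sketch of the environment-variable approach (want_proposals and deploy_selected_proposals are hypothetical names, not existing mkcloud code):

```shell
# Hypothetical: deploy only the proposals listed in $want_proposals.
deploy_selected_proposals() {
    local proposal
    for proposal in ${want_proposals:-database rabbitmq keystone glance ceilometer}; do
        echo "deploying proposal: $proposal"
        # a real implementation would invoke crowbar here, e.g.
        # crowbar "$proposal" proposal create default
    done
}
```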

native support for screen (and tmux?)

It seems like a pretty common use case to want to run mkcloud inside screen. I guess there are a few reasons, e.g.

  • mkcloud runs generally take a long time, so this protects against network outages
  • mkcloud runs generate a lot of output, and screen/tmux can capture output in the scroll-back buffer or even in a log file (although #1191 could take care of that separately)
  • It's useful to be able to live-collaborate with others on a cloud.
  • mkcloud currently needs to run in a dedicated directory (see #224), so this helps keep separation between multiple clouds on the same machine.

We could either write a wrapper around mkcloud which supports creation of a new screen session for the cloud and reuse of any existing session, or we could add native support for this directly into mkcloud itself. For example:

  1. Check if already running inside screen; if so, carry on as normal.
  2. Not running inside screen, so check if there is an existing screen session for this cloud. If so, join the session and re-run via exec $0 "$@" or similar.
  3. If there is no screen session, create one and re-run the script as in step 2.

A similar approach could be used for tmux.
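
Steps 1-3 might be sketched like this (the function and session naming are illustrative; a tmux variant would test $TMUX instead of $STY):

```shell
# Hypothetical sketch of native screen support in mkcloud.
ensure_screen_session() {
    local session="mkcloud-${cloud:-1}"
    if [ -n "$STY" ]; then
        return 0                               # step 1: already inside screen
    elif screen -ls 2>/dev/null | grep -q "[.]$session[[:space:]]"; then
        exec screen -x "$session"              # step 2: join the existing session
    else
        exec screen -S "$session" "$0" "$@"    # step 3: create one and re-run
    fi
}
```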

Is this worth doing? @vuntz This arose from looking at /root/manual.vuntz/start-screen.sh etc. on mkcloud1.

/cc @jdsn @bmwiedemann

One remaining question is how to handle non-interactive invocations, e.g. if you were batch-starting 10 new clouds in parallel.

crowbar-prep.sh: can't find SUSE-Cloud-4-Updates in /etc/fstab

I get the following error trying to run crowbar-prep.sh from the current master:

# cat crowbar-prep.sh | ssh root@crowbar bash -s - -p 4 nue-nfs
Password: 
WARNING: Removing 192.168.124.10 entry already in /etc/hosts:
192.168.124.10   pebbles.crowbar.dev pebbles
Not using 9p

/srv/tftpboot/suse-11.3/install already mounted; umounting ...
mounted /srv/tftpboot/suse-11.3/install

/srv/tftpboot/repos/SLES11-SP3-Pool already mounted; umounting ...
mounted /srv/tftpboot/repos/SLES11-SP3-Pool

/srv/tftpboot/repos/SLES11-SP3-Updates already mounted; umounting ...
mounted /srv/tftpboot/repos/SLES11-SP3-Updates



/srv/tftpboot/repos/Cloud already mounted; umounting ...
mounted /srv/tftpboot/repos/Cloud

mount: can't find /srv/tftpboot/repos/SUSE-Cloud-4-Updates in /etc/fstab or /etc/mtab
Couldn't mount /srv/tftpboot/repos/SUSE-Cloud-4-Updates

mkcloud writes temporary files in cwd

mkcloud writes its mkcloud.pid and mkcloud.config temporary files into whichever directory you run it from, but these should go under /tmp or /var to avoid littering the git working tree (best case) or other random directories (worst case) with temporary files. This happens even before the sanity_check function is hit, which is currently causing #222.
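
A minimal sketch of the suggested fix (the directory layout is illustrative, not what mkcloud does today):

```shell
# Keep per-run state in a directory under /tmp instead of $PWD.
rundir="${TMPDIR:-/tmp}/mkcloud.$$"
mkdir -p "$rundir"
echo $$ > "$rundir/mkcloud.pid"
: > "$rundir/mkcloud.config"
trap 'rm -rf "$rundir"' EXIT   # clean up the run directory on exit
```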

add step to update the product codebase without reinstalling the cloud

It would be nice being able to update the crowbar codebase to the newest version from D:C:S:X without having to reinstall everything from scratch.

Something like ./mkcloud rebase.

This is already (manually) possible with the devsetup, by updating local git clones manually, but for production environments this doesn't exist yet.

I remember having done a rsync from the newest ISO to the repo on the admin node, but this required a download of the newest ISO.

maybe there is a way (also for remote workers) to update the cloud like that?

Changes to crowbar chef templates may get overwritten by updates

During onadmin_prepareinstallcrowbar two lines are written to /opt/dell/chef/cookbooks/nfs-server/templates/default/exports.erb (See lines qa_crowbarsetup.sh#L1546 and qa_crowbarsetup.sh#L1550).

The file /opt/dell/chef/cookbooks/nfs-server/templates/default/exports.erb belongs to the crowbar-core package. If this package gets updated after the onadmin_prepareinstallcrowbar, all changes to this file are overwritten.

We run the following steps for testing: cleanup prepare setupadmin addupdaterepo prepareinstcrowbar runupdate bootstrapcrowbar instcrowbar setupnodes instnodes setup_aliases proposal testsetup cct addupdaterepo+0 onadmin+allow_vendor_change_at_nodes onadmin+zypper_update onadmin+cloudupgrade_clients testsetup cct

In our case the runupdate step that runs after the prepareinstcrowbar performs such an update of crowbar-core. Because of this the mkcloud run fails at step proposal when trying to mount the /var/lib/glance nfs share. More specifically, the command crowbarctl proposal commit nfs_client data fails with the message:

---- Begin output of mount -t nfs -o nofail,comment="managed-by-crowbar-barclamp-nfs-client" <hostname>:/var/lib/glance/images /var/lib/glance/images ----

 STDOUT: 
 STDERR: mount.nfs: access denied by server while mounting <hostname>:/var/lib/glance/images

 ---- End output of mount -t nfs -o nofail,comment="managed-by-crowbar-barclamp-nfs-client" <hostname>:/var/lib/glance/images /var/lib/glance/images ----

This is caused by the fact that /var/lib/glance/images is not part of /etc/exports on the nfs server.

A possible hot fix for this could be to move the code that alters the crowbar template to a later mkcloud step (will provide a PR to showcase this). A real solution IMHO should avoid changing the template at all, since it is owned by an rpm package that might change the file.

Sample script in docs/mkcloud.md fails

The documentation for mkcloud, docs/mkcloud.md, contains a sample bash script in the section "Using with local repositories". The biggest problem with this script is that it suggests it can be used to create a full cloud, and on casual reading it seems to execute the additional setup steps mentioned at the top, such as creating the disk and loopback. But it skips steps (like setuphost) that have to be run for it to work properly. It also has a hardcoded path (/home/tom...) that makes it unusable without tweaking, and it changes several network values for no apparent reason.
It appears that the whole point of the script was to demonstrate caching, so instead of having a script, it would be better to document how the caching options work, i.e. cache_clouddata and cache_dir.
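
For reference, the two options named above are plain environment variables, so documenting them could be as simple as (the values shown are examples):

```shell
export cache_dir=/var/cache/mkcloud   # where downloaded media are cached
export cache_clouddata=1              # reuse the cached cloud data on later runs
# ./scripts/mkcloud plain             # then run mkcloud as usual
```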

mkcloud cleanup in system without cloud-admin network

Traceback (most recent call last):
  File "./scripts/lib/libvirt/cleanup.py", line 79, in <module>
    main()
  File "./scripts/lib/libvirt/cleanup.py", line 68, in main
    network.undefine()
UnboundLocalError: local variable 'network' referenced before assignment

motd should show origin of cloud

When a cloud is built via Jenkins, ideally a URL to the job should be included in /etc/motd on at least the Crowbar node (maybe the others too) so that when sshing to a worker cloud, it's easy to see where it came from. This mechanism could be implemented in a way which supports arbitrary cloud descriptions, not just Jenkins URLs etc. E.g. if I manually invoke mkcloud to test a particular feature such as compute node HA, I should be able to set a clouddescription variable or similar with a value like testing compute node HA feature. Then anyone else who sshes to my cloud can see what its purpose is.

Implementation would be in two parts:

  • add clouddescription variable to mkcloud, so that the description appears in /etc/motd
  • amend Jenkins jobs so that they set this variable appropriately (i.e. it includes a URL to the Jenkins job, and maybe other useful info too)
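
The first bullet could be as small as this (clouddescription is the proposed, not yet existing, variable; the motd path is parameterised only to make the sketch easy to test):

```shell
# Append the cloud's description to the message of the day, if one was given.
add_cloud_motd() {
    local motd="${MOTD_FILE:-/etc/motd}"
    [ -n "$clouddescription" ] || return 0
    printf 'This cloud was built for: %s\n' "$clouddescription" >> "$motd"
}
```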

mkcloud needs clearly defined interface

Currently mkcloud relies on a bunch of shell environment variables to control its behaviour:

  • Some of these are documented via comments near the top of the script.
  • A smaller number are documented in docs/mkcloud.md.
  • Some are documented in the usage text output when you run ./mkcloud with no arguments. (It's also supposed to be output when you run ./mkcloud help, but that's broken.)
  • Many are not documented at all.
  • Some of the variables have documentation duplicated in more than one of the three locations, and sometimes the duplicated info is inconsistent.
  • These environment variables are mostly (but not all!) lowercase, and are indistinguishable from other "private" shell variables used within the script. One of the causes of this is violation of the common shell coding standard which uses uppercase for environment variables and constants, and lowercase for local variables.
  • Some variables are optional and others are mandatory, but it's not always clear which are which.
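
One common shell idiom that would address several of these points is declaring every accepted variable in a single place, each with its default and a comment (the variable names below appear in mkcloud; the defaults are illustrative):

```shell
# Single, self-documenting list of accepted environment variables.
: "${cloudsource:=develcloud6}"   # which cloud media to install from
: "${nodenumber:=2}"              # number of non-admin nodes to create
: "${cloudpv:=}"                  # optional: block device backing the cloud VG
echo "cloudsource=$cloudsource nodenumber=$nodenumber"
```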

How did we get ourselves in this mess? I suggest that the answers include (but are not limited to) the following reasons:

  • shell code does not encourage clean encapsulation
  • the "boiling frog" effect: scripts start as quick small hacks, and gradually grow past the point where they should be treated like proper production code without anyone noticing
  • no structured peer review across the whole team for a large part of the code's history

This is not the fault of any one person, and anyway my goal is not to blame people. The priorities are:

  1. to learn how we can avoid this in the future, and
  2. to figure out how we can clean up the mess.

My suggestions are:

Thoughts welcome.

Running the cloud-crowbar-testbuild-pr Jenkins job fails with encoding problems

The scripts/jenkins/jenkins-job-trigger script fails in our Jenkins job with a LookupError because of "unknown encoding: idna".

The following stack trace shows the error:

Triggering jenkins job with url http://<address>/mko/sap-oc:crowbar-openstack:37:db8e9b8ac89b7c5b0ab98bd402f115871f2783c9:stable/sap/3.0/ and directory /srv/mkcloud/mko/sap-oc:crowbar-openstack:37:db8e9b8ac89b7c5b0ab98bd402f115871f2783c9:stable/sap/3.0
Traceback (most recent call last):
  File "/root/github.com/SUSE-Cloud/automation/scripts/jenkins/jenkins-job-trigger", line 71, in <module>
    jenkins_build_job(sys.argv[1], args)
  File "/root/github.com/SUSE-Cloud/automation/scripts/jenkins/jenkins-job-trigger", line 63, in jenkins_build_job
    server.build_job(job_name, job_parameters)
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 915, in build_job
    self.build_job_url(name, parameters, token), b''))
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 344, in jenkins_open
    self.maybe_add_crumb(req)
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 258, in maybe_add_crumb
    self._build_url(CRUMB_URL)), add_crumb=False)
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 345, in jenkins_open
    response = urlopen(req, timeout=self.timeout).read()
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.7/urllib2.py", line 1194, in do_open
    h.request(req.get_method(), req.get_selector(), req.data, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1041, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1075, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python2.7/httplib.py", line 1037, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 881, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 843, in send
    self.connect()
  File "/usr/lib64/python2.7/httplib.py", line 824, in connect
    self.timeout, self.source_address)
  File "/usr/lib64/python2.7/socket.py", line 554, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
LookupError: unknown encoding: idna

/root/github.com/SUSE-Cloud/automation/scripts/crowbar-testbuild.rb:124:in `trigger_jenkins_job'
/root/github.com/SUSE-Cloud/automation/scripts/crowbar-testbuild.rb:137:in `block in trigger_jenkins_jobs'
/root/github.com/SUSE-Cloud/automation/scripts/crowbar-testbuild.rb:136:in `each'
/root/github.com/SUSE-Cloud/automation/scripts/crowbar-testbuild.rb:136:in `trigger_jenkins_jobs'
/root/github.com/SUSE-Cloud/automation/scripts/crowbar-testbuild.rb:289:in `<main>'

We try to build & test the following PR sap-oc/crowbar-openstack#37

mkcloud runs from Jenkins should include job details in Jenkins

We should make mkcloud write a /var/log/mkcloud.txt containing details of the mkcloud run (including Jenkins job number), and make the supportconfig plugin capture this. So then if you have a mystery supportconfig, you can tell where it came from and how mkcloud was configured.

Created virtual network may conflict with existing networks

When mkcloud creates the virtual networks in libvirt, it should first check that it is not creating a network that overlaps with any existing virtual network that may be present. Failure to make this pre-check may cause network requests to fail in ways that are difficult to diagnose, especially when there is only a partial overlap, i.e. requests to/from some interfaces work properly while others do not.

mkcloud trying to deploy swift barclamp with GM5+up and only SLE12CC5 nodes

cloudpv=/dev/loop0 cloudsource=GM5+up nodenumber=2 compute_node_memory=4194304 want_sles12=1 tempestoptions="-N -t" ./scripts/mkcloud plain

will fail with:

Starting proposal swift(default) at: Pá zář 11 20:46:18 UTC 2015
Failed to edit: default : Errors in data
Failed to validate proposal: Role swift-storage can't be used for suse 12.0, windows /.*/ platform(s).
Error: 'crowbar swift proposal --file=/root/swift.default.proposal edit default' failed with exit code: 1

$h1!!
Error detected. Stopping mkcloud.
The step 'proposal' returned with exit code 88
Please refer to the proposal function in this script when debugging the issue.

Horizon project list doesn

Uncommenting OPENSTACK_API_VERSIONS in /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py seems to fix this.

jenkins/jenkins-job-trigger HTTP failure should not be silent

After looking at #2679 it seems to me from the logs that when

server.build_job(job_name, job_parameters)
fails to create a job because the HTTPS certificate is expired, it doesn't print any output. But it should. I'm also unsure whether it correctly sets a failing exit code.

The request is made around here: https://github.com/openstack/python-jenkins/blob/1.2.1/jenkins/__init__.py#L543
I would expect an SSLError exception or something to be printed.

`mkcloud plain` fails to download manila-service-image

Using commit a2de743 and the following settings:

export cloudsource=develcloud6
export debug_qa_crowbarsetup=1
export cephvolumenumber=1
export want_neutronsles12=1
export want_mtu_size=8900
export clusterconfig=data+services+network=2

with mkcloud plain, downloading manila-service-image times out.

+ wget -N --progress=dot:mega http://149.44.176.43/images/other/manila-service-image.qcow2
--2016-04-22 10:41:40--  http://149.44.176.43/images/other/manila-service-image.qcow2
Connecting to 149.44.176.43:80... failed: Connection timed out.

When I manually log into the cloud node I can reproduce this behavior. From the crowbar node wget completes just fine. This looks like some network forwarding is not correctly setup from within the cloud node.

I should mention that this is on tumbleweed.

mkcloud should cache downloads in /var/cache

Currently files like the SUSE-SLE12-CLOUD-5-COMPUTE .iso get downloaded onto the admin node on every run. They should instead be downloaded into /var/cache/mkcloud on the mkcloud host to avoid this.
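
A sketch of what that could look like (cache_fetch is a hypothetical helper, not existing mkcloud code):

```shell
CACHE_DIR="${CACHE_DIR:-/var/cache/mkcloud}"

# Download a file into the host-side cache only if it is not already there,
# then print the cached path for the caller to copy onto the admin node.
cache_fetch() {
    local url=$1 file=${1##*/}
    mkdir -p "$CACHE_DIR"
    [ -f "$CACHE_DIR/$file" ] || wget -P "$CACHE_DIR" "$url"
    echo "$CACHE_DIR/$file"
}
```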

Incorporate readme instructions into setuphost target

docs/mkcloud.md has a number of one-time setup tasks that really make sense to incorporate into the setuphost target. This includes:

  • installing and enabling libvirt if necessary
  • installing and enabling virtlogd if necessary
  • creating a suitable local disk file and performing losetup, if necessary

setuphost should be idempotent, of course, so that it can be re-run without harm.

As a further exercise, it would be nice if setuphost were included in the default list of those targets that expand to lots of steps, e.g. all, plain, etc.
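
An idempotent sketch of those one-time tasks (setuphost_extras and the RUN dry-run hook are hypothetical; the disk path and size are examples):

```shell
# Each task is guarded so re-running is harmless; set RUN=echo for a dry run.
run() { ${RUN:-} "$@"; }

setuphost_extras() {
    run systemctl enable --now libvirtd
    run systemctl enable --now virtlogd
    local disk=${CLOUD_DISK:-/var/lib/mkcloud/cloud.disk}
    [ -f "$disk" ] || run truncate -s 100G "$disk"
    # attach a loop device only if the file is not already attached
    losetup -j "$disk" 2>/dev/null | grep -q . || run losetup -f "$disk"
}
```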

in the cloud-update-ci job JJB fails to update ci.opensuse.org

From https://ci.suse.de/job/cloud-update-ci/741/console

INFO:jenkins_jobs.builder:Reconfiguring jenkins job openstack-trackupstream
INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: ci.opensuse.org

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/jenkins_jobs/parallel.py", line 62, in run
    **task['kwargs'])
  File "/usr/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 355, in parallel_update_job
    self.update_job(job.name, job.output().decode('utf-8'))
  File "/usr/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 139, in update_job
    self.jenkins.reconfig_job(job_name, xml)
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 1223, in reconfig_job
    headers=DEFAULT_HEADERS
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 541, in jenkins_open
    return self.jenkins_request(req, add_crumb, resolve_auth).text
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 560, in jenkins_request
    self._request(req))
  File "/usr/lib/python2.7/site-packages/jenkins/__init__.py", line 520, in _response_handler
    "empty response" % self.server)
EmptyResponseException: Error communicating with server[https://ci.opensuse.org/]: empty response
Traceback (most recent call last):
  File "/usr/bin/jenkins-jobs", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/jenkins_jobs/cli/entry.py", line 158, in main
    jjb.execute()
  File "/usr/lib/python2.7/site-packages/jenkins_jobs/cli/entry.py", line 139, in execute
    ext.obj.execute(self.options, self.jjb_config)
  File "/usr/lib/python2.7/site-packages/jenkins_jobs/cli/subcommand/update.py", line 150, in execute
    existing_only=options.existing_only)
  File "/usr/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 340, in update_jobs
    raise result
jenkins.EmptyResponseException: Error communicating with server[https://ci.opensuse.org/]: empty response
