
deploy-ibm-cloud-private's Introduction

Deploy IBM Cloud Private

Instructions:

Accessing IBM Cloud Private

Access the console URL using the username and password printed in the last few lines of the ICP deployment output.

Note: your browser will likely show a certificate error, as ICP is installed with a self-signed certificate.

ICP Login Page

Click on admin in the top right-hand corner of the screen to bring up a menu, and select "Configure Client".

ICP Configure Client

Copy and paste the provided commands into a shell:

kubectl config set-cluster mycluster.icp --server=https://192.168.27.100:8001 --insecure-skip-tls-verify=true
kubectl config set-context mycluster.icp-context --cluster=mycluster.icp
kubectl config set-credentials mycluster.icp-user --token=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF0X2hhc2giOiJFaGllVkp1T3VtNEVyWVI0d2NjUThBIiwiaXNzIjoiaHR0cHM6Ly9teWNsdXN0ZXIuaWNwOjk0NDMvb2lkYy9lbmRwb2ludC9PUCIsImF1ZCI6ImM2ZDk3NTdmYWY0NmIyNDBkNTJjNDkyMjg0YzQxYmY5IiwiZXhwIjoxNTA5NjgxNjc0LCJpYXQiOjE1MDk2Mzg0NzR9.oLvpbbmJLnxf-ALAMc7vku-EU7ucp1JEixYf6OALkk76oNsVYhVVWKMyfZWU2IMH98ivo1INAU5SRl2w2bQjvwkzMsa3UScu1XR7GFm3XOl4SUWOGFCxfjxaR7n0zEIH0kaLvsrNUIiHl3kE70HuYcNU1MsOwq9u3NfzaDZnHQFu8NFOeGpsI26GlKrqlT_ROz7bsuQ1-M5KOMV4vjKKL6o95d_Ab0Nb7HXn58jXONRQNEQYPCUWVBJQDbyzq-3zWOFUz_ev8YamQgCDOdaU-Gk2MmiInDAPPvExG6vasBQ4fXyWpoeprPtwkCOAb-bEHFdLL4v4fwQK9RfLS4ZyTQ
kubectl config set-context mycluster.icp-context --user=mycluster.icp-user --namespace=default
kubectl config use-context mycluster.icp-context
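Before running anything against the cluster, it can help to confirm the context actually took effect (a quick sanity check, not part of the original instructions):

```shell
# Verify kubectl is now pointed at the ICP cluster
kubectl config current-context
# should print: mycluster.icp-context
```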

Check that you can run some basic commands against the cluster:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.3-7+154699da4767fd", GitCommit:"154699da4767fd4225cbaa91cc26abd71bc853c7", GitTreeState:"clean", BuildDate:"2017-08-28T06:41:56Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME             STATUS    AGE       VERSION
192.168.27.100   Ready     23h       v1.7.3-7+154699da4767fd
192.168.27.101   Ready     23h       v1.7.3-7+154699da4767fd
192.168.27.102   Ready     23h       v1.7.3-7+154699da4767fd
192.168.27.111   Ready     23h       v1.7.3-7+154699da4767fd

From here you should be able to interact with ICP via either the Web UI or the kubectl command.
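As a quick end-to-end check of kubectl access, you can create and then remove a throwaway deployment (the names here are examples, not part of the repo's instructions):

```shell
# Create a test nginx deployment, confirm it schedules, then clean up
kubectl run hello-nginx --image=nginx --replicas=1
kubectl get pods -o wide
kubectl delete deployment hello-nginx
```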

deploy-ibm-cloud-private's People

Contributors

apearson-ibm, barecode, davidkarlsen, hassenius, jjasghar, jwcroppe, kalafala, loafyloaf, mchasal, paulczar, rabidanubis, timroster, tpouyer


deploy-ibm-cloud-private's Issues

PODs deployed on worker nodes not registering with DNS/proxy?

I have a three-machine cluster (1x master/proxy/management, 2x workers) running in Vagrant, and everything is reporting as healthy. However, I'm currently seeing a problem: I have an nginx app deployed to a pod along with a service and an ingress.
If the pod runs on the master node it works correctly; however, if it is deployed to a worker node, a 504 Gateway Time-out is returned externally.
If I run a curl request to the nginx pod from another pod, it returns the correct HTML, so it works internally.
I've found that if I go to the CLI within the actual pod and initiate a ping to www.google.com (it only works pinging external sites), everything starts working. It's as if the network to the pod was dead and the ping wakes it up.
I'm pretty sure the ingress is set up correctly, and the service endpoints are pointing to the pod on the worker node.
There are no errors in the kubedns logs... I'm not sure what it could be.
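For issues like this, a common first step is to test DNS and endpoint wiring from inside the cluster. A hedged debugging sketch (the pod and service names are examples, not from this report):

```shell
# Test cluster DNS from a throwaway busybox pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local

# Confirm the service actually has an endpoint on the worker-node pod
kubectl get endpoints nginx-service
```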

Installation failed under macOS

==> icp: TASK [addon : Adding label to proxy nodes] *************************************
==> icp: changed: [localhost] => (item=192.168.27.100)
==> icp:
==> icp: TASK [addon : Adding label to management nodes] ********************************
==> icp: failed: [localhost] (item=192.168.27.111) => {"changed": true, "cmd": "kubectl label nodes 192.168.27.111 management=true --overwrite=true", "delta": "0:00:00.234490", "end": "2017-10-11 16:53:28.597171", "failed": true, "item": "192.168.27.111", "rc": 1, "start": "2017-10-11 16:53:28.362681", "stderr": "Error from server (NotFound): nodes "192.168.27.111" not found", "stderr_lines": ["Error from server (NotFound): nodes "192.168.27.111" not found"], "stdout": "", "stdout_lines": []}
==> icp:
==> icp: PLAY RECAP *********************************************************************
==> icp: 192.168.27.100 : ok=195 changed=61 unreachable=0 failed=0
==> icp: 192.168.27.101 : ok=0 changed=0 unreachable=1 failed=0
==> icp: 192.168.27.102 : ok=0 changed=0 unreachable=1 failed=0
==> icp: 192.168.27.111 : ok=0 changed=0 unreachable=1 failed=0
==> icp: localhost : ok=109 changed=49 unreachable=0 failed=1
==> icp: Playbook run took 0 days, 0 hours, 7 minutes, 10 seconds
==> icp: FATAL ERROR OCCURRED DURING INSTALLATION :-(
==> icp:
==> icp: TASK [Checking Python interpreter] *********************************************
==> icp: changed: [192.168.27.100]
==> icp: fatal: [192.168.27.111] => Failed to connect to the host via ssh: ssh: connect to host 192.168.27.111 port 22: Host is unreachable
==> icp: fatal: [192.168.27.101] => Failed to connect to the host via ssh: ssh: connect to host 192.168.27.101 port 22: Host is unreachable
==> icp: fatal: [192.168.27.102] => Failed to connect to the host via ssh: ssh: connect to host 192.168.27.102 port 22: Host is unreachable
==> icp:
==> icp: PLAY [Checking prerequisites] **************************************************

disabled_management_services quote characters don't work with latest ICP

The latest daily build of ICP with this Vagrantfile deploys the VA. The quote characters are incorrect and cause the newest installer to think VA needs to be deployed:

disabled_management_services = '[“va”]'

should be:

disabled_management_services = '["va"]'

See Issue #2927 in IBM private cloud repo.
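The offending characters are easy to miss by eye. A small self-contained check (the file name is an example) that flags curly quotes:

```shell
# Demo: grep flags any line still containing non-ASCII curly quotes.
echo 'good = ["va"]'  > Vagrantfile.sample
echo 'bad  = [“va”]' >> Vagrantfile.sample
grep -n '[“”]' Vagrantfile.sample
# prints the offending line; exit status 0 means a fix is still needed
```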

terraform/openstack/README is missing

Going through the tutorial, it hands you off with "Please refer to the embedded README document in terraform/openstack for detailed deployment steps."

Only problem: there isn't one.

Vagrant image doesn't start up the Manager

I just downloaded the latest git. The Vagrant image was working fine late last year, but now the "manager" doesn't come online. Also, worker1 and worker2 have been renamed, indicating something has changed in the Vagrant install that hasn't been reflected in the documentation, since the lxc list command indicates the names should show as cfc-worker1 and cfc-worker2.
Any thoughts on why the manager won't start up (given I'm still using the same configuration: Vagrant, VirtualBox, macOS, 16 GB RAM)?

Deploy ICP on Softlayer, Catalog cannot show any helm charts

Just followed the instructions to deploy ICP on SoftLayer, and the Catalog GUI shows no Helm charts.

Checking the helmapi container, it has the error below:
2017-11-06T02:40:34.413Z 'ERROR' 'getChartsFromRepo(recursive) error: getaddrinfo EAI_AGAIN raw.githubusercontent.com:443'

Running nslookup in the helmapi container, it seems it cannot resolve any DNS name.
Here is the helmapi /etc/resolv.conf:
nameserver 192.168.0.10
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
However, I cannot ping the nameserver IP, which is the kube-dns cluster IP.

On the other hand, inside the kube-dns container, /etc/resolv.conf is:
nameserver 10.174.61.42
and DNS names resolve fine from kube-dns.

Any idea how to solve this?
Louie

Vagrant issue: icp_auth not loading from ibmcom docker hub

Used the default Vagrantfile to install ICP 2.1.0/2.1.0.1.
icp-auth on Docker Hub fails to pull with this error:

docker pull ibmcom/icp-auth
Using default tag: latest
Error response from daemon: manifest for ibmcom/icp-auth:latest not found

Normally a version number, set at the start of the Vagrantfile, is appended; substituting 2.1.0 or 2.1.0.1 gives the same error.
This is because the hub contains only two images more than 3 months old, both betas.

Installation failing on Mac with Invalid UUID or filename

MACBOOKPRO-20:deploy-ibm-cloud-private rpatel$ vagrant up
Bringing machine 'icp' up with 'virtualbox' provider...
==> icp: Importing base box 'bento/ubuntu-16.04'...
==> icp: Matching MAC address for NAT networking...
==> icp: Setting the name of the VM: IBM-Cloud-Private-dev-edition
==> icp: Fixed port collision for 22 => 2222. Now on port 2200.
==> icp: Clearing any previously set network interfaces...
==> icp: Preparing network interfaces based on configuration...
icp: Adapter 1: nat
icp: Adapter 2: hostonly
==> icp: Forwarding ports...
icp: 22 (guest) => 2200 (host) (adapter 1)
==> icp: Running 'pre-boot' VM customizations...
A customization command failed:

["storageattach", :id, "--storagectl", "SATA Controller", "--port", 0, "--device", 0, "--type", "hdd", "--nonrotational", "on", "--medium", "/Users/rpatel/VirtualBox VMs/IBM-Cloud-Private-dev-edition/ubuntu-16.04-amd64-disk001.vmdk"]

The following error was experienced:

#<Vagrant::Errors::VBoxManageError: There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["storageattach", "34e4c878-2e42-4804-99ee-3648fd9be1d0", "--storagectl", "SATA Controller", "--port", "0", "--device", "0", "--type", "hdd", "--nonrotational", "on", "--medium", "/Users/rpatel/VirtualBox VMs/IBM-Cloud-Private-dev-edition/ubuntu-16.04-amd64-disk001.vmdk"]

Stderr: VBoxManage: error: Could not find file for the medium '/Users/rpatel/VirtualBox VMs/IBM-Cloud-Private-dev-edition/ubuntu-16.04-amd64-disk001.vmdk' (VERR_FILE_NOT_FOUND)
VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component MediumWrap, interface IMedium, callee nsISupports
VBoxManage: error: Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, enmAccessMode, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 179 of file VBoxManageDisk.cpp
VBoxManage: error: Invalid UUID or filename "/Users/rpatel/VirtualBox VMs/IBM-Cloud-Private-dev-edition/ubuntu-16.04-amd64-disk001.vmdk"

Please fix this customization and try again.

Failed to deploy on SoftLayer

MACBOOKPRO-20:deploy-ibm-cloud-private rpatel$ ssh root@public-ip docker run -e SL_USERNAME=user -e SL_API_KEY=apikey -e LICENSE=accept --net=host --rm -t -v /root/cluster:/installer/cluster icp-on-sl install
ERROR! Attempted to execute "cluster/hosts" as inventory script: Inventory script (cluster/hosts) had an execution error: Traceback (most recent call last):
  File "/installer/cluster/hosts", line 206, in
    SoftLayerInventory()
  File "/installer/cluster/hosts", line 85, in init
    self.get_all_servers()
  File "/installer/cluster/hosts", line 203, in get_all_servers
    self.get_virtual_servers(tags=tags)
  File "/installer/cluster/hosts", line 186, in get_virtual_servers
    instances = vs.list_instances(mask=mask,tags=tags)
  File "/usr/lib/python2.7/site-packages/SoftLayer/managers/vs.py", line 160, in list_instances
    return func(**kwargs)
  File "/usr/lib/python2.7/site-packages/SoftLayer/API.py", line 392, in call_handler
    return self(name, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/SoftLayer/API.py", line 360, in call
    return self.client.call(self.name, name, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/SoftLayer/API.py", line 263, in call
    return self.transport(request)
  File "/usr/lib/python2.7/site-packages/SoftLayer/transports.py", line 195, in call
    raise _ex(ex.faultCode, ex.faultString)
SoftLayer.exceptions.SoftLayerAPIError: SoftLayerAPIError(SoftLayer_Exception): Invalid IP Address for API key.

ICP 2.1.0.1: in a cluster with 3 worker nodes, no nodes are listed in the Dashboard

Hi there,
Architecture: a cluster with 3 worker nodes + 1 boot/master node + 1 proxy node + 1 management node.
The ICP 2.1.0.1 installation on a RHEL 7.4 VM (VMware) completed successfully, but in the GUI we have no nodes listed and the resource overview is empty.
See the attached doc (no_nodes.docx) as a reference.
kubectl correctly returns the nodes:
[icpboot@nc118123 cluster]$ kubectl -s 127.0.0.1:8888 get nodes
NAME            STATUS    AGE       VERSION
x.yyy.zzz.123   Ready     17d       v1.8.3+icp+ee
x.yyy.zzz.128   Ready     17d       v1.8.3+icp+ee
x.yyy.zzz.130   Ready     17d       v1.8.3+icp+ee
x.yyy.zzz.131   Ready     17d       v1.8.3+icp+ee
x.yyy.zzz.139   Ready     17d       v1.8.3+icp+ee
x.yyy.zzz.223   Ready     17d       v1.8.3+icp+ee

From the unified-router log:
{"log":"2018/01/05 19:33:49 [error] 10#0: 96945 connect() failed (111: Connection refused) while connecting to upstream, client: x.yyy.118.123, server: dcos., request: "GET /unified-router/api/v1/nodedetail HTTP/1.1", upstream: "http://127.0.0.1:30090/api/v1/nodedetail\", host: "x.yyy.123:8443"\n","stream":"stderr","time":"2018-01-05T19:33:49.129519262Z"}
Any help will be really appreciated.

Thanks a lot in advance. Ciao, Mario.

Change Virtualbox parameters

The setting of the VirtualBox parameter --natdnshostresolver to on in the Vagrantfile breaks AAAA record lookups and results in at least a 5-second delay for all external lookups in Alpine Linux based pods. I suggest it be changed to off.
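To try this on an already-built VM without re-provisioning, the VirtualBox flag can be flipped directly (a sketch, not a tested fix; the VM name assumes the Vagrantfile default):

```shell
# Halt the VM, disable the NAT DNS host resolver on adapter 1, restart
vagrant halt
VBoxManage modifyvm "IBM-Cloud-Private-dev-edition" --natdnshostresolver1 off
vagrant up
```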

A VirtualBox machine with the name 'IBM-Private-dev-edition' already exists....

I reran the vagrant up command after my terminal window had sat idle for over two hours. By doing so, I received the following error:
"A VirtualBox machine with the name 'IBM-Cloud-Private-dev-edition' already exists.
Please use another name or delete the machine with the existing name, and try again."
How do I clean up the previous partial installation of ICP and start again?
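A hedged cleanup sequence for this situation, assuming the default VM name from the Vagrantfile (note: this destroys the VM and all its state):

```shell
# Remove Vagrant's record of the machine
vagrant destroy -f
# Remove any leftover VirtualBox VM under the old name
# (power it off first with 'VBoxManage controlvm <name> poweroff' if running)
VBoxManage unregistervm "IBM-Cloud-Private-dev-edition" --delete
# Clear Vagrant's local state, then start fresh
rm -rf .vagrant
vagrant up
```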

Ansible SL Installation Fails with ICP CE 2.1

I'm using Ansible to install ICP 2.1 CE on SoftLayer. However, I encountered the following error when running the docker container with icp-on-sl install. The master, worker, and proxy nodes run Ubuntu 16.04.

TASK [addon : Creating rbac roles] *********************************************
fatal: [localhost] => {'_ansible_parsed': True, 'stderr_lines': [u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for extensions/, Kind=PodSecurityPolicy', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for extensions/, Kind=PodSecurityPolicy', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding', u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding'], u'cmd': u'kubectl apply --force --overwrite=true -f /installer/playbook/..//cluster/cfc-components/roles/roles.yaml', u'end': u'2017-12-12 00:05:41.125016', '_ansible_no_log': False, u'stdout': u'', u'changed': True, u'rc': 1, u'start': u'2017-12-12 00:05:40.017669', u'stderr': u'unable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, 
Kind=ClusterRole\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for extensions/, Kind=PodSecurityPolicy\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for extensions/, Kind=PodSecurityPolicy\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding\nunable to recognize "/installer/playbook/..//cluster/cfc-components/roles/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding', u'delta': u'0:00:01.107347', u'invocation': {u'module_args': {u'warn': True, u'executable': u'/bin/bash', u'_uses_shell': True, u'_raw_params': u'kubectl apply --force --overwrite=true -f /installer/playbook/..//cluster/cfc-components/roles/roles.yaml', u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed': True}

Unable to pull docker

vagrant@master:~/cluster$ lxc list
+--------------+---------+-----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------------+---------+-----------------------+------+------------+-----------+
| cfc-manager1 | RUNNING | 192.168.27.111 (eth0) | | PERSISTENT | 0 |
+--------------+---------+-----------------------+------+------------+-----------+
| cfc-worker1 | RUNNING | 192.168.27.101 (eth0) | | PERSISTENT | 0 |
+--------------+---------+-----------------------+------+------------+-----------+
| cfc-worker2 | RUNNING | 192.168.27.102 (eth0) | | PERSISTENT | 0 |
+--------------+---------+-----------------------+------+------------+-----------+

From the cfc-manager1 log:
E: Failed to fetch https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_17.09.0~ce-0~ubuntu_amd64.deb GnuTLS recv error (-54): Error in the pull function.

Pods not able to resolve each others hostname

Hi

I have deployed IBM Cloud Private 2.1.0.1 in a local VM on a Mac (Vagrant), with install and setup working fine.

Post installation I was able to create deployments using kubectl but I have run into an issue where Pods are not able to resolve any hostname. They are able to ping each other via IP.

Has anyone seen this issue before?

Thanks

ICP 2.1.0 does not start up after reboot of virtual machine

I installed ICP 2.1.0 using the Vagrantfile. When I stop the virtual machine and start it up again (vagrant halt;vagrant up) ICP does not start.

vagrant@master:~$ kubectl -n kube-system get po
NAME                                    READY     STATUS             RESTARTS   AGE
calico-node-amd64-88jbb                 2/2       Running            0          2d
calico-node-amd64-bpknc                 2/2       Running            4          2d
calico-node-amd64-pmp5z                 2/2       Running            4          2d
calico-node-amd64-qxglp                 2/2       Running            4          2d
elasticsearch-client-3479638665-ptt3s   2/2       Running            4          2d
elasticsearch-data-0                    1/1       Running            2          2d
elasticsearch-master-1570256108-2m5mr   1/1       Running            2          2d
filebeat-ds-amd64-7ghrl                 1/1       Running            0          2d
filebeat-ds-amd64-9mqn9                 1/1       Running            3          2d
filebeat-ds-amd64-tr6n6                 1/1       Running            2          2d
filebeat-ds-amd64-v12fk                 1/1       Running            2          2d
k8s-etcd-192.168.27.100                 1/1       Running            0          2d
k8s-mariadb-192.168.27.100              1/1       Running            0          2d
k8s-master-192.168.27.100               2/3       CrashLoopBackOff   5          2d
k8s-proxy-192.168.27.100                1/1       Running            0          2d
k8s-proxy-192.168.27.101                1/1       Running            2          2d
k8s-proxy-192.168.27.102                1/1       Running            2          2d
k8s-proxy-192.168.27.111                1/1       Running            2          2d
logstash-4245234969-808pb               1/1       Running            2          2d

So there looks to be an issue with the k8s-master pod.

vagrant@master:~$ kubectl -n kube-system logs k8s-master-192.168.27.100 controller-manager
2017-10-26 20:32:27.841423 I | proto: duplicate proto type registered: google.protobuf.Any
2017-10-26 20:32:27.841488 I | proto: duplicate proto type registered: google.protobuf.Duration
2017-10-26 20:32:27.841501 I | proto: duplicate proto type registered: google.protobuf.Timestamp
I1026 20:32:27.884250       1 feature_gate.go:144] feature gates: map[TaintBasedEvictions:true PersistentLocalVolumes:true]
I1026 20:32:27.884389       1 controllermanager.go:107] Version: v1.7.3-7+154699da4767fd
I1026 20:32:27.889638       1 leaderelection.go:179] attempting to acquire leader lease...
I1026 20:32:27.903647       1 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager
I1026 20:32:27.903719       1 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"85f49771-b8f5-11e7-8c66-080027a8df8b", APIVersion:"v1", ResourceVersion:"39663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' master became leader
F1026 20:32:28.807127       1 controllermanager.go:176] error building controller context: failed to get supported resources from server: unable to retrieve the complete list of server APIs: servicecatalog.k8s.io/v1alpha1: an error on the server ("Error: 'dial tcp 10.0.0.230:443: getsockopt: connection refused'\nTrying to reach: 'https://10.0.0.230:443/apis/servicecatalog.k8s.io/v1alpha1'") has prevented the request from succeeding

The issue is with the controller-manager container and is similar to kubernetes issue kubernetes/kubernetes#53424. That issue is solved in kubernetes 1.7.5. The kubernetes version shipped with ICP 2.1.0 is:

vagrant@master:~$ kubectl version --short
Client Version: v1.7.3-7+154699da4767fd
Server Version: v1.7.3-7+154699da4767fd

In order to exclude a glitch in the installation I executed vagrant destroy -f and vagrant up a couple of times, the result is always the same; the initial installation looks good but after a reboot the issue appears.

I would love to update the Kubernetes that comes with ICP to 1.7.5 and check whether this solves the issue, but I found no documentation describing how to do that. Is it possible, or do I have to wait until IBM releases an ICP version with an updated Kubernetes? Or is there another solution for this issue?

Idle CPU usage post-install

I was able to install IBM Cloud Private (Vagrant, VirtualBox) simply and quickly on a 2016 15" MacBook Pro with Touch Bar using these scripts, so thanks so much for developing them.

However, I noticed that after install 'VBoxHeadless' (i.e. the VirtualBox VM) was constantly taking pretty much one core's worth of host CPU (80-100%). This is after allowing the environment to settle, without any user pods set up or deployed, and with no user interfaces open or API requests being made.

This could be a 'feature' of the base images rather than of this installer, so it may need to be reported elsewhere; perhaps it is known and expected. However, my expectation would be that with no user apps running, the overhead of monitoring and cluster management would be very low.

I also used 'vagrant ssh', and the VM itself reports only around 10% CPU.

Cannot install on my laptop

I can't install successfully because of the Cloudant service, even after I changed metering_service to false.

Below is error message :

==> icp: FAILED - RETRYING: TASK: master : Ensuring that the Cloudant Database is ready (3 retries left).
==> icp: FAILED - RETRYING: TASK: master : Ensuring that the Cloudant Database is ready (2 retries left).
==> icp: FAILED - RETRYING: TASK: master : Ensuring that the Cloudant Database is ready (1 retries left).
==> icp: fatal: [192.168.27.100] => Status code was not [200]: HTTP Error 503: Service Unavailable
==> icp:
==> icp: PLAY RECAP *********************************************************************
==> icp: 192.168.27.100 : ok=150 changed=57 unreachable=0 failed=1
==> icp: 192.168.27.101 : ok=107 changed=40 unreachable=0 failed=0
==> icp: 192.168.27.102 : ok=107 changed=40 unreachable=0 failed=0
==> icp: localhost : ok=28 changed=5 unreachable=0 failed=0
==> icp:
==> icp: Playbook run took 0 days, 0 hours, 23 minutes, 17 seconds
==> icp: FATAL ERROR OCCURRED DURING INSTALLATION :-(
==> icp: FAILED - RETRYING: TASK: master : Ensuring that the Cloudant Database is ready (3 retries left).
==> icp: FAILED - RETRYING: TASK: master : Ensuring that the Cloudant Database is ready (2 retries left).
==> icp: FAILED - RETRYING: TASK: master : Ensuring that the Cloudant Database is ready (1 retries left).
==> icp: fatal: [192.168.27.100] => Status code was not [200]: HTTP Error 503: Service Unavailable
==> icp:
==> icp: PLAY RECAP *********************************************************************
==> icp: 192.168.27.100 : ok=150 changed=57 unreachable=0 failed=1
==> icp: The install log can be view with:
==> icp: vagrant ssh
==> icp: cat icp_install_log
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command

Vagrant installation fails on Ubuntu 17.10 VM

When I try to use Ubuntu 17.10 on my laptop, Vagrant reports "Can not find enp0s8". Maybe this is not an issue.
My purpose is to verify whether ICP can run on 17.10 or not.

My Vagrant version is 2.0.1, my VirtualBox version is 5.1.30, and 17.10 is bento/ubuntu-17.10.

IBM Cloud Private boot node becomes inactive after installation if vSphere cloud configuration is included in config.yaml

Hi Team,
I have tested this scenario multiple times. I am trying to install the IBM Cloud Private 2.1.0 EE GA version with the vSphere cloud provider configuration defined in config.yaml. The installation is successful, but the boot node is always in an inactive state.
When I do the same installation without the vSphere cloud provider configuration, there are no issues.
I certainly think this is a bug in the product and request that you check it at the earliest.
Tried the setup with 3 nodes and with 6 nodes; all the same.

One more observation: icp-ds-1 is in pending state; icp-ds-0 is running on master2 and icp-ds-2 on master3.
But containers and other pods are running on the master1 (boot node) server.
See the attached hosts file.

root@master1:/opt/ibm-cp-app-mod-2.1.0/cluster# kubectl get nodes
NAME      STATUS     ROLES   AGE   VERSION
master1   NotReady           3h    v1.7.3-11+f747daa02c9ffb
master2   Ready              3h    v1.7.3-11+f747daa02c9ffb
master3   Ready              3h    v1.7.3-11+f747daa02c9ffb
worker1   Ready              3h    v1.7.3-11+f747daa02c9ffb
worker2   Ready              3h    v1.7.3-11+f747daa02c9ffb
worker3   Ready              3h    v1.7.3-11+f747daa02c9ffb

lxc list is empty

I installed ICP using Vagrant on macOS High Sierra, but I don't know what the error is or how to fix it: lxc list is empty.

param:
cpus = '2'
memory = '4096'
disabled_management_services = '["va","metering","monitoring"]'
use_cache = 'true'
export VAGRANT_VAGRANTFILE=Cachefile

sh-3.2# vagrant up
Bringing machine 'icp_cache' up with 'virtualbox' provider...
==> icp_cache: Importing base box 'bento/ubuntu-16.04'...
==> icp_cache: Matching MAC address for NAT networking...
==> icp_cache: Setting the name of the VM: ICP-Cache-Server
==> icp_cache: Clearing any previously set network interfaces...
==> icp_cache: Preparing network interfaces based on configuration...
icp_cache: Adapter 1: nat
icp_cache: Adapter 2: hostonly
==> icp_cache: Forwarding ports...
icp_cache: 3142 (guest) => 3142 (host) (adapter 1)
icp_cache: 5000 (guest) => 5000 (host) (adapter 1)
icp_cache: 22 (guest) => 2222 (host) (adapter 1)
==> icp_cache: Running 'pre-boot' VM customizations...
==> icp_cache: Booting VM...
==> icp_cache: Waiting for machine to boot. This may take a few minutes...
icp_cache: SSH address: 127.0.0.1:2222
icp_cache: SSH username: vagrant
icp_cache: SSH auth method: private key
==> icp_cache: Machine booted and ready!
==> icp_cache: Checking for guest additions in VM...
icp_cache: The guest additions on this VM do not match the installed version of
icp_cache: VirtualBox! In most cases this is fine, but in rare cases it can
icp_cache: prevent things such as shared folders from working properly. If you see
icp_cache: shared folder errors, please make sure the guest additions within the
icp_cache: virtual machine match the version of VirtualBox you have installed on
icp_cache: your host and reload your VM.
icp_cache:
icp_cache: Guest Additions Version: 5.1.26
icp_cache: VirtualBox Version: 5.2
==> icp_cache: Setting hostname...
==> icp_cache: Configuring and enabling network interfaces...
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: configure_master_ssh_keys
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: configure_swap_space
icp_cache: Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
icp_cache: no label, UUID=ae37eb2e-bde5-46a5-994b-988b385939fd
icp_cache: vm.swappiness = 40
icp_cache: vm.vfs_cache_pressure = 50
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: configure_performance_settings
icp_cache: vm.swappiness = 40
icp_cache: vm.vfs_cache_pressure = 50
icp_cache: net.ipv4.ip_forward = 1
icp_cache: net.ipv4.conf.all.rp_filter = 0
icp_cache: net.ipv4.conf.default.rp_filter = 0
icp_cache: net.ipv6.conf.all.disable_ipv6 = 1
icp_cache: net.ipv6.conf.default.disable_ipv6 = 1
icp_cache: net.ipv6.conf.lo.disable_ipv6 = 1
icp_cache: net.ipv4.tcp_mem = 182757 243679 365514
icp_cache: net.core.netdev_max_backlog = 182757
icp_cache: net.ipv4.conf.enp0s3.proxy_arp = 1
icp_cache: fs.inotify.max_queued_events = 1048576
icp_cache: fs.inotify.max_user_instances = 1048576
icp_cache: fs.inotify.max_user_watches = 1048576
icp_cache: vm.max_map_count = 262144
icp_cache: Y
icp_cache: Y
icp_cache: Generating grub configuration file ...
icp_cache: Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
icp_cache: Found linux image: /boot/vmlinuz-4.4.0-92-generic
icp_cache: Found initrd image: /boot/initrd.img-4.4.0-92-generic
icp_cache: done
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: install_prereqs
icp_cache: OK
icp_cache: Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
icp_cache: Get:2 https://download.docker.com/linux/ubuntu xenial InRelease [49.8 kB]
icp_cache: Hit:3 http://archive.ubuntu.com/ubuntu xenial InRelease
icp_cache: Get:4 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
icp_cache: Get:5 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages [2,756 B]
icp_cache: Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [408 kB]
icp_cache: Get:7 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
icp_cache: Get:8 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [372 kB]
icp_cache: Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [681 kB]
icp_cache: Get:10 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [179 kB]
icp_cache: Get:11 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [7,472 B]
icp_cache: Get:12 http://security.ubuntu.com/ubuntu xenial-security/restricted i386 Packages [7,472 B]
icp_cache: Get:13 http://security.ubuntu.com/ubuntu xenial-security/restricted Translation-en [2,412 B]
icp_cache: Get:14 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [190 kB]
icp_cache: Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe i386 Packages [159 kB]
icp_cache: Get:16 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [99.0 kB]
icp_cache: Get:17 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [3,208 B]
icp_cache: Get:18 http://security.ubuntu.com/ubuntu xenial-security/multiverse i386 Packages [3,384 B]
icp_cache: Get:19 http://security.ubuntu.com/ubuntu xenial-security/multiverse Translation-en [1,408 B]
icp_cache: Get:20 http://archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages [639 kB]
icp_cache: Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [285 kB]
icp_cache: Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [8,072 B]
icp_cache: Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/restricted i386 Packages [8,100 B]
icp_cache: Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Translation-en [2,672 B]
icp_cache: Get:25 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [565 kB]
icp_cache: Get:26 http://archive.ubuntu.com/ubuntu xenial-updates/universe i386 Packages [530 kB]
icp_cache: Get:27 http://archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [229 kB]
icp_cache: Get:28 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 Packages [16.2 kB]
icp_cache: Get:29 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse i386 Packages [15.3 kB]
icp_cache: Get:30 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse Translation-en [8,052 B]
icp_cache: Get:31 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [4,860 B]
icp_cache: Get:32 http://archive.ubuntu.com/ubuntu xenial-backports/main i386 Packages [4,856 B]
icp_cache: Get:33 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [6,612 B]
icp_cache: Get:34 http://archive.ubuntu.com/ubuntu xenial-backports/universe i386 Packages [6,600 B]
icp_cache: Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,768 B]
icp_cache: Fetched 4,805 kB in 10s (445 kB/s)
icp_cache: Reading package lists...
icp_cache: Reading package lists...
icp_cache: Building dependency tree...
icp_cache: Reading state information...
icp_cache: apt-transport-https is already the newest version (1.2.24).
icp_cache: linux-image-extra-4.4.0-92-generic is already the newest version (4.4.0-92.115).
icp_cache: linux-image-extra-4.4.0-92-generic set to manually installed.
icp_cache: nfs-common is already the newest version (1:1.2.8-9ubuntu12.1).
icp_cache: software-properties-common is already the newest version (0.96.20.7).
icp_cache: The following packages were automatically installed and are no longer required:
icp_cache: fakeroot libfakeroot libfile-fcntllock-perl ncurses-term os-prober
icp_cache: python3-requests python3-urllib3 ssh-import-id tcpd
icp_cache: Use 'sudo apt autoremove' to remove them.
icp_cache: The following additional packages will be installed:
icp_cache: cpp-5 dpkg-dev g++ g++-5 gcc-5 gcc-5-base libaio1 libasan2 libatomic1
icp_cache: libcc1-0 libcilkrts5 libcurl3-gnutls libdpkg-perl libexpat1-dev libgcc-5-dev
icp_cache: libgomp1 libitm1 liblsan0 libltdl7 libmpx0 libopts25 libpython-dev
icp_cache: libpython2.7 libpython2.7-dev libpython2.7-minimal libpython2.7-stdlib
icp_cache: libquadmath0 libstdc++-5-dev libstdc++6 libtsan0 libubsan0 linux-firmware
icp_cache: linux-image-4.4.0-104-generic linux-image-extra-4.4.0-104-generic
icp_cache: linux-image-generic python-pip-whl python-pkg-resources python2.7
icp_cache: python2.7-dev python2.7-minimal
icp_cache: Suggested packages:
icp_cache: doc-base avahi-daemon gcc-5-locales debian-keyring g++-multilib
icp_cache: g++-5-multilib gcc-5-doc libstdc++6-5-dbg gcc-5-multilib libgcc1-dbg
icp_cache: libgomp1-dbg libitm1-dbg libatomic1-dbg libasan2-dbg liblsan0-dbg
icp_cache: libtsan0-dbg libubsan0-dbg libcilkrts5-dbg libmpx0-dbg libquadmath0-dbg
icp_cache: libstdc++-5-doc fdutils linux-doc-4.4.0 | linux-source-4.4.0 linux-tools
icp_cache: linux-headers-4.4.0-104-generic ntp-doc python-setuptools-doc python2.7-doc
icp_cache: binfmt-support
icp_cache: Recommended packages:
icp_cache: cgroupfs-mount | cgroup-lite libalgorithm-merge-perl thermald python-all-dev
icp_cache: python-wheel
icp_cache: The following NEW packages will be installed:
icp_cache: apt-cacher-ng aufs-tools build-essential docker-ce dpkg-dev g++ g++-5
icp_cache: libaio1 libexpat1-dev libltdl7 libopts25 libpython-dev libpython2.7
icp_cache: libpython2.7-dev libstdc++-5-dev linux-firmware
icp_cache: linux-image-4.4.0-104-generic linux-image-extra-4.4.0-104-generic
icp_cache: linux-image-extra-virtual linux-image-generic nfs-kernel-server ntp
icp_cache: python-dev python-pip python-pip-whl python-pkg-resources python-setuptools
icp_cache: python2.7-dev thin-provisioning-tools
icp_cache: The following packages will be upgraded:
icp_cache: ca-certificates cpp-5 curl gcc-5 gcc-5-base libasan2 libatomic1 libcc1-0
icp_cache: libcilkrts5 libcurl3-gnutls libdpkg-perl libgcc-5-dev libgomp1 libitm1
icp_cache: liblsan0 libmpx0 libpython2.7-minimal libpython2.7-stdlib libquadmath0
icp_cache: libstdc++6 libtsan0 libubsan0 python2.7 python2.7-minimal
icp_cache: 24 upgraded, 29 newly installed, 0 to remove and 87 not upgraded.
icp_cache: Need to get 191 MB of archives.
icp_cache: After this operation, 617 MB of additional disk space will be used.
icp_cache: Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcc1-0 amd64 5.4.0-6ubuntu1~16.04.5 [38.8 kB]
icp_cache: Get:2 https://download.docker.com/linux/ubuntu xenial/stable amd64 docker-ce amd64 17.09.1~ce-0~ubuntu [21.0 MB]
icp_cache: Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgomp1 amd64 5.4.0-6ubuntu1~16.04.5 [55.1 kB]
icp_cache: Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libitm1 amd64 5.4.0-6ubuntu1~16.04.5 [27.4 kB]
icp_cache: Get:5 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libatomic1 amd64 5.4.0-6ubuntu1~16.04.5 [8,920 B]
icp_cache: Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libasan2 amd64 5.4.0-6ubuntu1~16.04.5 [264 kB]
icp_cache: Get:7 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 liblsan0 amd64 5.4.0-6ubuntu1~16.04.5 [105 kB]
icp_cache: Get:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtsan0 amd64 5.4.0-6ubuntu1~16.04.5 [244 kB]
icp_cache: Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libubsan0 amd64 5.4.0-6ubuntu1~16.04.5 [95.3 kB]
icp_cache: Get:10 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcilkrts5 amd64 5.4.0-6ubuntu1~16.04.5 [40.1 kB]
icp_cache: Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmpx0 amd64 5.4.0-6ubuntu1~16.04.5 [9,786 B]
icp_cache: Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libquadmath0 amd64 5.4.0-6ubuntu1~16.04.5 [131 kB]
icp_cache: Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5 amd64 5.4.0-6ubuntu1~16.04.5 [8,638 kB]
icp_cache: Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgcc-5-dev amd64 5.4.0-6ubuntu1~16.04.5 [2,226 kB]
icp_cache: Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cpp-5 amd64 5.4.0-6ubuntu1~16.04.5 [7,786 kB]
icp_cache: Get:16 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5-base amd64 5.4.0-6ubuntu1~16.04.5 [17.1 kB]
icp_cache: Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libstdc++6 amd64 5.4.0-6ubuntu1~16.04.5 [393 kB]
icp_cache: Get:18 http://archive.ubuntu.com/ubuntu xenial/universe amd64 apt-cacher-ng amd64 0.9.1-1ubuntu1 [504 kB]
icp_cache: Get:19 http://archive.ubuntu.com/ubuntu xenial/main amd64 libopts25 amd64 1:5.18.7-3 [57.8 kB]
icp_cache: Get:20 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ntp amd64 1:4.2.8p4+dfsg-3ubuntu5.7 [518 kB]
icp_cache: Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7 amd64 2.7.12-1ubuntu0~16.04.2 [224 kB]
icp_cache: Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-stdlib amd64 2.7.12-1ubuntu0~16.04.2 [1,880 kB]
icp_cache: Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7-minimal amd64 2.7.12-1ubuntu0~16.04.2 [1,294 kB]
icp_cache: Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-minimal amd64 2.7.12-1ubuntu0~16.04.2 [338 kB]
icp_cache: Get:25 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ca-certificates all 20170717~16.04.1 [168 kB]
icp_cache: Get:26 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 curl amd64 7.47.0-1ubuntu2.5 [138 kB]
icp_cache: Get:27 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.5 [184 kB]
icp_cache: Get:28 http://archive.ubuntu.com/ubuntu xenial/universe amd64 aufs-tools amd64 1:3.2+20130722-1.1ubuntu1 [92.9 kB]
icp_cache: Get:29 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libstdc++-5-dev amd64 5.4.0-6ubuntu1~16.04.5 [1,430 kB]
icp_cache: Get:30 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 g++-5 amd64 5.4.0-6ubuntu1~16.04.5 [8,435 kB]
icp_cache: Get:31 http://archive.ubuntu.com/ubuntu xenial/main amd64 g++ amd64 4:5.3.1-1ubuntu1 [1,504 B]
icp_cache: Get:32 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdpkg-perl all 1.18.4ubuntu1.3 [195 kB]
icp_cache: Get:33 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dpkg-dev all 1.18.4ubuntu1.3 [584 kB]
icp_cache: Get:34 http://archive.ubuntu.com/ubuntu xenial/main amd64 build-essential amd64 12.1ubuntu2 [4,758 B]
icp_cache: Get:35 http://archive.ubuntu.com/ubuntu xenial/main amd64 libltdl7 amd64 2.4.6-0.1 [38.3 kB]
icp_cache: Get:36 http://archive.ubuntu.com/ubuntu xenial/main amd64 libaio1 amd64 0.3.110-2 [6,356 B]
icp_cache: Get:37 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libexpat1-dev amd64 2.1.0-7ubuntu0.16.04.3 [115 kB]
icp_cache: Get:38 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7 amd64 2.7.12-1ubuntu0~16.04.2 [1,070 kB]
icp_cache: Get:39 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython2.7-dev amd64 2.7.12-1ubuntu0~16.04.2 [27.8 MB]
icp_cache: Get:40 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpython-dev amd64 2.7.11-1 [7,728 B]
icp_cache: Get:41 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-firmware all 1.157.14 [44.8 MB]
icp_cache: Get:42 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-image-4.4.0-104-generic amd64 4.4.0-104.127 [21.9 MB]
icp_cache: Get:43 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-image-extra-4.4.0-104-generic amd64 4.4.0-104.127 [36.0 MB]
icp_cache: Get:44 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-image-generic amd64 4.4.0.104.109 [2,314 B]
icp_cache: Get:45 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-image-extra-virtual amd64 4.4.0.104.109 [1,772 B]
icp_cache: Get:46 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 nfs-kernel-server amd64 1:1.2.8-9ubuntu12.1 [88.0 kB]
icp_cache: Get:47 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python2.7-dev amd64 2.7.12-1ubuntu0~16.04.2 [276 kB]
icp_cache: Get:48 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-dev amd64 2.7.11-1 [1,160 B]
icp_cache: Get:49 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-pip-whl all 8.1.1-2ubuntu0.4 [1,110 kB]
icp_cache: Get:50 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-pip all 8.1.1-2ubuntu0.4 [144 kB]
icp_cache: Get:51 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-pkg-resources all 20.7.0-1 [108 kB]
icp_cache: Get:52 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-setuptools all 20.7.0-1 [169 kB]
icp_cache: Get:53 http://archive.ubuntu.com/ubuntu xenial/universe amd64 thin-provisioning-tools amd64 0.5.6-1ubuntu1 [319 kB]
icp_cache: dpkg-preconfigure: unable to re-open stdin: No such file or directory
icp_cache: Fetched 191 MB in 1min 37s (1,963 kB/s)
icp_cache: (Reading database ... 35871 files and directories currently installed.)
icp_cache: Preparing to unpack .../libcc1-0_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libgomp1_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libgomp1:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libitm1_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libitm1:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libatomic1_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libatomic1:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libasan2_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libasan2:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../liblsan0_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking liblsan0:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libtsan0_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libtsan0:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libubsan0_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libubsan0:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libcilkrts5_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libmpx0_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libmpx0:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libquadmath0_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../gcc-5_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking gcc-5 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../libgcc-5-dev_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../cpp-5_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking cpp-5 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Preparing to unpack .../gcc-5-base_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking gcc-5-base:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Processing triggers for libc-bin (2.23-0ubuntu9) ...
icp_cache: Processing triggers for man-db (2.7.5-1) ...
icp_cache: Setting up gcc-5-base:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: (Reading database ... 35871 files and directories currently installed.)
icp_cache: Preparing to unpack .../libstdc++6_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libstdc++6:amd64 (5.4.0-6ubuntu1~16.04.5) over (5.4.0-6ubuntu1~16.04.4) ...
icp_cache: Processing triggers for libc-bin (2.23-0ubuntu9) ...
icp_cache: Setting up libstdc++6:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Processing triggers for libc-bin (2.23-0ubuntu9) ...
icp_cache: Selecting previously unselected package apt-cacher-ng.
icp_cache: (Reading database ... 35871 files and directories currently installed.)
icp_cache: Preparing to unpack .../apt-cacher-ng_0.9.1-1ubuntu1_amd64.deb ...
icp_cache: Unpacking apt-cacher-ng (0.9.1-1ubuntu1) ...
icp_cache: Selecting previously unselected package libopts25:amd64.
icp_cache: Preparing to unpack .../libopts25_1%3a5.18.7-3_amd64.deb ...
icp_cache: Unpacking libopts25:amd64 (1:5.18.7-3) ...
icp_cache: Selecting previously unselected package ntp.
icp_cache: Preparing to unpack .../ntp_1%3a4.2.8p4+dfsg-3ubuntu5.7_amd64.deb ...
icp_cache: Unpacking ntp (1:4.2.8p4+dfsg-3ubuntu5.7) ...
icp_cache: Preparing to unpack .../python2.7_2.7.12-1ubuntu0~16.04.2_amd64.deb ...
icp_cache: Unpacking python2.7 (2.7.12-1ubuntu0~16.04.2) over (2.7.12-1ubuntu0~16.04.1) ...
icp_cache: Preparing to unpack .../libpython2.7-stdlib_2.7.12-1ubuntu0~16.04.2_amd64.deb ...
icp_cache: Unpacking libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.2) over (2.7.12-1ubuntu0~16.04.1) ...
icp_cache: Preparing to unpack .../python2.7-minimal_2.7.12-1ubuntu0~16.04.2_amd64.deb ...
icp_cache: Unpacking python2.7-minimal (2.7.12-1ubuntu0~16.04.2) over (2.7.12-1ubuntu0~16.04.1) ...
icp_cache: Preparing to unpack .../libpython2.7-minimal_2.7.12-1ubuntu0~16.04.2_amd64.deb ...
icp_cache: Unpacking libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.2) over (2.7.12-1ubuntu0~16.04.1) ...
icp_cache: Preparing to unpack .../ca-certificates_20170717~16.04.1_all.deb ...
icp_cache: Unpacking ca-certificates (20170717~16.04.1) over (20160104ubuntu1) ...
icp_cache: Preparing to unpack .../curl_7.47.0-1ubuntu2.5_amd64.deb ...
icp_cache: Unpacking curl (7.47.0-1ubuntu2.5) over (7.47.0-1ubuntu2.2) ...
icp_cache: Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.5_amd64.deb ...
icp_cache: Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.5) over (7.47.0-1ubuntu2.2) ...
icp_cache: Selecting previously unselected package aufs-tools.
icp_cache: Preparing to unpack .../aufs-tools_1%3a3.2+20130722-1.1ubuntu1_amd64.deb ...
icp_cache: Unpacking aufs-tools (1:3.2+20130722-1.1ubuntu1) ...
icp_cache: Selecting previously unselected package libstdc++-5-dev:amd64.
icp_cache: Preparing to unpack .../libstdc++-5-dev_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Selecting previously unselected package g++-5.
icp_cache: Preparing to unpack .../g++-5_5.4.0-6ubuntu1~16.04.5_amd64.deb ...
icp_cache: Unpacking g++-5 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Selecting previously unselected package g++.
icp_cache: Preparing to unpack .../g++_4%3a5.3.1-1ubuntu1_amd64.deb ...
icp_cache: Unpacking g++ (4:5.3.1-1ubuntu1) ...
icp_cache: Preparing to unpack .../libdpkg-perl_1.18.4ubuntu1.3_all.deb ...
icp_cache: Unpacking libdpkg-perl (1.18.4ubuntu1.3) over (1.18.4ubuntu1.2) ...
icp_cache: Selecting previously unselected package dpkg-dev.
icp_cache: Preparing to unpack .../dpkg-dev_1.18.4ubuntu1.3_all.deb ...
icp_cache: Unpacking dpkg-dev (1.18.4ubuntu1.3) ...
icp_cache: Selecting previously unselected package build-essential.
icp_cache: Preparing to unpack .../build-essential_12.1ubuntu2_amd64.deb ...
icp_cache: Unpacking build-essential (12.1ubuntu2) ...
icp_cache: Selecting previously unselected package libltdl7:amd64.
icp_cache: Preparing to unpack .../libltdl7_2.4.6-0.1_amd64.deb ...
icp_cache: Unpacking libltdl7:amd64 (2.4.6-0.1) ...
icp_cache: Selecting previously unselected package docker-ce.
icp_cache: Preparing to unpack .../docker-ce_17.09.1~ce-0~ubuntu_amd64.deb ...
icp_cache: Unpacking docker-ce (17.09.1~ce-0~ubuntu) ...
icp_cache: Selecting previously unselected package libaio1:amd64.
icp_cache: Preparing to unpack .../libaio1_0.3.110-2_amd64.deb ...
icp_cache: Unpacking libaio1:amd64 (0.3.110-2) ...
icp_cache: Selecting previously unselected package libexpat1-dev:amd64.
icp_cache: Preparing to unpack .../libexpat1-dev_2.1.0-7ubuntu0.16.04.3_amd64.deb ...
icp_cache: Unpacking libexpat1-dev:amd64 (2.1.0-7ubuntu0.16.04.3) ...
icp_cache: Selecting previously unselected package libpython2.7:amd64.
icp_cache: Preparing to unpack .../libpython2.7_2.7.12-1ubuntu0~16.04.2_amd64.deb ...
icp_cache: Unpacking libpython2.7:amd64 (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Selecting previously unselected package libpython2.7-dev:amd64.
icp_cache: Preparing to unpack .../libpython2.7-dev_2.7.12-1ubuntu0~16.04.2_amd64.deb ...
icp_cache: Unpacking libpython2.7-dev:amd64 (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Selecting previously unselected package libpython-dev:amd64.
icp_cache: Preparing to unpack .../libpython-dev_2.7.11-1_amd64.deb ...
icp_cache: Unpacking libpython-dev:amd64 (2.7.11-1) ...
icp_cache: Selecting previously unselected package linux-firmware.
icp_cache: Preparing to unpack .../linux-firmware_1.157.14_all.deb ...
icp_cache: Unpacking linux-firmware (1.157.14) ...
icp_cache: Selecting previously unselected package linux-image-4.4.0-104-generic.
icp_cache: Preparing to unpack .../linux-image-4.4.0-104-generic_4.4.0-104.127_amd64.deb ...
icp_cache: Done.
icp_cache: Unpacking linux-image-4.4.0-104-generic (4.4.0-104.127) ...
icp_cache: Selecting previously unselected package linux-image-extra-4.4.0-104-generic.
icp_cache: Preparing to unpack .../linux-image-extra-4.4.0-104-generic_4.4.0-104.127_amd64.deb ...
icp_cache: Unpacking linux-image-extra-4.4.0-104-generic (4.4.0-104.127) ...
icp_cache: Selecting previously unselected package linux-image-generic.
icp_cache: Preparing to unpack .../linux-image-generic_4.4.0.104.109_amd64.deb ...
icp_cache: Unpacking linux-image-generic (4.4.0.104.109) ...
icp_cache: Selecting previously unselected package linux-image-extra-virtual.
icp_cache: Preparing to unpack .../linux-image-extra-virtual_4.4.0.104.109_amd64.deb ...
icp_cache: Unpacking linux-image-extra-virtual (4.4.0.104.109) ...
icp_cache: Selecting previously unselected package nfs-kernel-server.
icp_cache: Preparing to unpack .../nfs-kernel-server_1%3a1.2.8-9ubuntu12.1_amd64.deb ...
icp_cache: Unpacking nfs-kernel-server (1:1.2.8-9ubuntu12.1) ...
icp_cache: Selecting previously unselected package python2.7-dev.
icp_cache: Preparing to unpack .../python2.7-dev_2.7.12-1ubuntu0~16.04.2_amd64.deb ...
icp_cache: Unpacking python2.7-dev (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Selecting previously unselected package python-dev.
icp_cache: Preparing to unpack .../python-dev_2.7.11-1_amd64.deb ...
icp_cache: Unpacking python-dev (2.7.11-1) ...
icp_cache: Selecting previously unselected package python-pip-whl.
icp_cache: Preparing to unpack .../python-pip-whl_8.1.1-2ubuntu0.4_all.deb ...
icp_cache: Unpacking python-pip-whl (8.1.1-2ubuntu0.4) ...
icp_cache: Selecting previously unselected package python-pip.
icp_cache: Preparing to unpack .../python-pip_8.1.1-2ubuntu0.4_all.deb ...
icp_cache: Unpacking python-pip (8.1.1-2ubuntu0.4) ...
icp_cache: Selecting previously unselected package python-pkg-resources.
icp_cache: Preparing to unpack .../python-pkg-resources_20.7.0-1_all.deb ...
icp_cache: Unpacking python-pkg-resources (20.7.0-1) ...
icp_cache: Selecting previously unselected package python-setuptools.
icp_cache: Preparing to unpack .../python-setuptools_20.7.0-1_all.deb ...
icp_cache: Unpacking python-setuptools (20.7.0-1) ...
icp_cache: Selecting previously unselected package thin-provisioning-tools.
icp_cache: Preparing to unpack .../thin-provisioning-tools_0.5.6-1ubuntu1_amd64.deb ...
icp_cache: Unpacking thin-provisioning-tools (0.5.6-1ubuntu1) ...
icp_cache: Processing triggers for man-db (2.7.5-1) ...
icp_cache: Processing triggers for systemd (229-4ubuntu19) ...
icp_cache: Processing triggers for ureadahead (0.100.0-19) ...
icp_cache: Processing triggers for libc-bin (2.23-0ubuntu9) ...
icp_cache: Processing triggers for mime-support (3.59ubuntu1) ...
icp_cache: Setting up libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libgomp1:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libitm1:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libatomic1:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libasan2:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up liblsan0:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libtsan0:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libubsan0:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libmpx0:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up cpp-5 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up gcc-5 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up apt-cacher-ng (0.9.1-1ubuntu1) ...
icp_cache: Setting up libopts25:amd64 (1:5.18.7-3) ...
icp_cache: Setting up ntp (1:4.2.8p4+dfsg-3ubuntu5.7) ...
icp_cache: Setting up libpython2.7-minimal:amd64 (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Setting up python2.7-minimal (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Setting up libpython2.7-stdlib:amd64 (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Setting up python2.7 (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Setting up ca-certificates (20170717~16.04.1) ...
icp_cache: Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.5) ...
icp_cache: Setting up curl (7.47.0-1ubuntu2.5) ...
icp_cache: Setting up aufs-tools (1:3.2+20130722-1.1ubuntu1) ...
icp_cache: Setting up libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up g++-5 (5.4.0-6ubuntu1~16.04.5) ...
icp_cache: Setting up g++ (4:5.3.1-1ubuntu1) ...
icp_cache: update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode
icp_cache: Setting up libdpkg-perl (1.18.4ubuntu1.3) ...
icp_cache: Setting up dpkg-dev (1.18.4ubuntu1.3) ...
icp_cache: Setting up build-essential (12.1ubuntu2) ...
icp_cache: Setting up libltdl7:amd64 (2.4.6-0.1) ...
icp_cache: Setting up docker-ce (17.09.1~ce-0~ubuntu) ...
icp_cache: Setting up libaio1:amd64 (0.3.110-2) ...
icp_cache: Setting up libexpat1-dev:amd64 (2.1.0-7ubuntu0.16.04.3) ...
icp_cache: Setting up libpython2.7:amd64 (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Setting up libpython2.7-dev:amd64 (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Setting up libpython-dev:amd64 (2.7.11-1) ...
icp_cache: Setting up linux-firmware (1.157.14) ...
icp_cache: update-initramfs: Generating /boot/initrd.img-4.4.0-92-generic
icp_cache: W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
icp_cache: Setting up linux-image-4.4.0-104-generic (4.4.0-104.127) ...
icp_cache: Running depmod.
icp_cache: update-initramfs: deferring update (hook will be called later)
icp_cache: Examining /etc/kernel/postinst.d.
icp_cache: run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/dkms 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: update-initramfs: Generating /boot/initrd.img-4.4.0-104-generic
icp_cache: W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
icp_cache: run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/update-notifier 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/vboxadd 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: Generating grub configuration file ...
icp_cache: Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
icp_cache: Found linux image: /boot/vmlinuz-4.4.0-104-generic
icp_cache: Found initrd image: /boot/initrd.img-4.4.0-104-generic
icp_cache: Found linux image: /boot/vmlinuz-4.4.0-92-generic
icp_cache: Found initrd image: /boot/initrd.img-4.4.0-92-generic
icp_cache: done
icp_cache: Setting up linux-image-extra-4.4.0-104-generic (4.4.0-104.127) ...
icp_cache: run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/dkms 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: update-initramfs: Generating /boot/initrd.img-4.4.0-104-generic
icp_cache: W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
icp_cache: run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/update-notifier 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/vboxadd 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.4.0-104-generic /boot/vmlinuz-4.4.0-104-generic
icp_cache: Generating grub configuration file ...
icp_cache: Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
icp_cache: Found linux image: /boot/vmlinuz-4.4.0-104-generic
icp_cache: Found initrd image: /boot/initrd.img-4.4.0-104-generic
icp_cache: Found linux image: /boot/vmlinuz-4.4.0-92-generic
icp_cache: Found initrd image: /boot/initrd.img-4.4.0-92-generic
icp_cache: done
icp_cache: Setting up linux-image-generic (4.4.0.104.109) ...
icp_cache: Setting up linux-image-extra-virtual (4.4.0.104.109) ...
icp_cache: Setting up nfs-kernel-server (1:1.2.8-9ubuntu12.1) ...
icp_cache: Creating config file /etc/exports with new version
icp_cache: Creating config file /etc/default/nfs-kernel-server with new version
icp_cache: Setting up python2.7-dev (2.7.12-1ubuntu0~16.04.2) ...
icp_cache: Setting up python-dev (2.7.11-1) ...
icp_cache: Setting up python-pip-whl (8.1.1-2ubuntu0.4) ...
icp_cache: Setting up python-pip (8.1.1-2ubuntu0.4) ...
icp_cache: Setting up python-pkg-resources (20.7.0-1) ...
icp_cache: Setting up python-setuptools (20.7.0-1) ...
icp_cache: Setting up thin-provisioning-tools (0.5.6-1ubuntu1) ...
icp_cache: Processing triggers for libc-bin (2.23-0ubuntu9) ...
icp_cache: Processing triggers for systemd (229-4ubuntu19) ...
icp_cache: Processing triggers for ureadahead (0.100.0-19) ...
icp_cache: Processing triggers for ca-certificates (20170717~16.04.1) ...
icp_cache: Updating certificates in /etc/ssl/certs...
icp_cache: 17 added, 42 removed; done.
icp_cache: Running hooks in /etc/ca-certificates/update.d...
icp_cache: done.
icp_cache: Collecting pip
icp_cache: Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)
icp_cache: Installing collected packages: pip
icp_cache: Found existing installation: pip 8.1.1
icp_cache: Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
icp_cache: Successfully installed pip-9.0.1
icp_cache: Collecting docker
icp_cache: Downloading docker-2.6.1-py2.py3-none-any.whl (117kB)
icp_cache: Collecting backports.ssl-match-hostname>=3.5; python_version < "3.5" (from docker)
icp_cache: Downloading backports.ssl_match_hostname-3.5.0.1.tar.gz
icp_cache: Collecting six>=1.4.0 (from docker)
icp_cache: Downloading six-1.11.0-py2.py3-none-any.whl
icp_cache: Collecting websocket-client>=0.32.0 (from docker)
icp_cache: Downloading websocket_client-0.44.0-py2.py3-none-any.whl (199kB)
icp_cache: Collecting requests!=2.11.0,!=2.12.2,!=2.18.0,>=2.5.2 (from docker)
icp_cache: Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
icp_cache: Collecting ipaddress>=1.0.16; python_version < "3.3" (from docker)
icp_cache: Downloading ipaddress-1.0.19.tar.gz
icp_cache: Collecting docker-pycreds>=0.2.1 (from docker)
icp_cache: Downloading docker_pycreds-0.2.1-py2.py3-none-any.whl
icp_cache: Collecting urllib3<1.23,>=1.21.1 (from requests!=2.11.0,!=2.12.2,!=2.18.0,>=2.5.2->docker)
icp_cache: Downloading urllib3-1.22-py2.py3-none-any.whl (132kB)
icp_cache: Collecting idna<2.7,>=2.5 (from requests!=2.11.0,!=2.12.2,!=2.18.0,>=2.5.2->docker)
icp_cache: Downloading idna-2.6-py2.py3-none-any.whl (56kB)
icp_cache: Collecting chardet<3.1.0,>=3.0.2 (from requests!=2.11.0,!=2.12.2,!=2.18.0,>=2.5.2->docker)
icp_cache: Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
icp_cache: Collecting certifi>=2017.4.17 (from requests!=2.11.0,!=2.12.2,!=2.18.0,>=2.5.2->docker)
icp_cache: Downloading certifi-2017.11.5-py2.py3-none-any.whl (330kB)
icp_cache: Installing collected packages: backports.ssl-match-hostname, six, websocket-client, urllib3, idna, chardet, certifi, requests, ipaddress, docker-pycreds, docker
icp_cache: Running setup.py install for backports.ssl-match-hostname: started
icp_cache: Running setup.py install for backports.ssl-match-hostname: finished with status 'done'
icp_cache: Running setup.py install for ipaddress: started
icp_cache: Running setup.py install for ipaddress: finished with status 'done'
icp_cache: Successfully installed backports.ssl-match-hostname-3.5.0.1 certifi-2017.11.5 chardet-3.0.4 docker-2.6.1 docker-pycreds-0.2.1 idna-2.6 ipaddress-1.0.19 requests-2.18.4 six-1.11.0 urllib3-1.22 websocket-client-0.44.0
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: add_apt_cache_storage_vol
icp_cache: Physical volume "/dev/sdb" successfully created
icp_cache: Volume group "vagrant-vg" successfully extended
icp_cache: Logical volume "storage" created.
icp_cache: mke2fs 1.42.13 (17-May-2015)
icp_cache: Creating filesystem with 131070976 4k blocks and 32768000 inodes
icp_cache: Filesystem UUID: 4a10dd95-bd3c-4243-bd4c-219d5d64c0b1
icp_cache: Superblock backups stored on blocks:
icp_cache: 	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
icp_cache: 	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
icp_cache: 	102400000
icp_cache: Allocating group tables: done
icp_cache: Writing inode tables: done
icp_cache: Creating journal (32768 blocks): done
icp_cache: Writing superblocks and filesystem accounting information: done
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: configure_apt_client
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: add_docker_cache_storage_vol
icp_cache: Physical volume "/dev/sdc" successfully created
icp_cache: Volume group "vagrant-vg" successfully extended
icp_cache: Logical volume "docker" created.
icp_cache: mke2fs 1.42.13 (17-May-2015)
icp_cache: Creating filesystem with 131070976 4k blocks and 32768000 inodes
icp_cache: Filesystem UUID: c57a7414-9160-4246-9a98-4b732df40939
icp_cache: Superblock backups stored on blocks:
icp_cache: 	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
icp_cache: 	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
icp_cache: 	102400000
icp_cache: Allocating group tables: done
icp_cache: Writing inode tables: done
icp_cache: Creating journal (32768 blocks): done
icp_cache: Writing superblocks and filesystem accounting information: done
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: configure_docker_cache
icp_cache: Unable to find image 'registry:2' locally
icp_cache: 2: Pulling from library/registry
icp_cache: ab7e51e37a18: Pull complete
icp_cache: c8ad8919ce25: Pull complete
icp_cache: 5808405bc62f: Pull complete
icp_cache: f6000d7b276c: Pull complete
icp_cache: f792fdcd8ff6: Pull complete
icp_cache: Digest: sha256:9d295999d330eba2552f9c78c9f59828af5c9a9c15a3fbd1351df03eaad04c6a
icp_cache: Status: Downloaded newer image for registry:2
icp_cache: 9ed81518260ed1a5026e9c0cb47cc607cbf68203ced2552bc0961d3d2dfdc56a
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: configure_nat_iptable_rules
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: install_startup_script
icp_cache: Created symlink from /etc/systemd/system/default.target.wants/icp-cache-startup.service to /etc/systemd/system/icp-cache-startup.service.
==> icp_cache: Running provisioner: shell...
icp_cache: Running: script: install_shellinabox
sh-3.2# vagrant ssh
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-92-generic x86_64)

86 packages can be updated.
37 updates are security updates.

*** System restart required ***
vagrant@cache:~$ lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
vagrant@cache:~$

[Enhancement] Install and setup OpenLDAP

In ICP 2.1, users are authenticated against an external LDAP/AD. There is no built-in user DB. In order to try the user role management/RBAC capabilities, it would be nice to set up OpenLDAP and populate it with a few test users.
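
As a rough sketch of that enhancement (the suffix dc=mycluster,dc=icp, the admin DN, and every user attribute below are invented placeholders, not values ICP or this repo define), a couple of test users could be loaded into OpenLDAP like this:

```sh
# Hypothetical example: create an OU and one test user in a local OpenLDAP.
# Substitute whatever suffix and admin DN your slapd instance was
# initialized with.
cat > test-users.ldif <<'EOF'
dn: ou=users,dc=mycluster,dc=icp
objectClass: organizationalUnit
ou: users

dn: uid=testuser1,ou=users,dc=mycluster,dc=icp
objectClass: inetOrgPerson
cn: Test User One
sn: One
uid: testuser1
userPassword: changeme
EOF
# Load the entries, binding as the directory admin:
ldapadd -x -D "cn=admin,dc=mycluster,dc=icp" -W -f test-users.ldif
```

ICP's LDAP connection settings (base DN, bind DN, user filter) would then be pointed at this directory from the admin console to exercise the RBAC features.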

Vagrant issue: etcd not starting

After adapting my Vagrantfile to point to the new ICP version (2.1.0.1) and the corresponding Kubernetes and Helm versions,
the etcd service did not start.
I am running on Ubuntu:
Distributor ID: Ubuntu
Description: Ubuntu 17.04
Release: 17.04
Codename: zesty

ICP 2.1.0 installed ok

A colleague noticed that the etcd image was 5 times as big as before.

Vagrantfile.txt

icp_install_log.txt

Enable multiple deployments with Ansible on Softlayer (TAG)

Currently the set of installation files for SoftLayer relies on a TAG to select the VMs, but if different people use the same scripts to deploy, the VMs of the other deployments are also selected (and fail on SSH login).

Currently the tag used is "icp".
The files that implicitly depend on it (and need to be changed) are:

  • playbooks/create_sl_vms.yml (tag added to each of the VM nodes)
  • hosts (TAGS array, l. 42), used as the inventory source
  • cluster/hosts (same as above, but used from the master node deployment)
  • playbooks/destroy_sl_vms.yml (tag used to select the VM to decommission)

To reproduce:

  • run two separate deployments with the create_sl_vms.yml
  • when you attempt to run prepare_sl_vms.yml, the hosts script provides the list of the VMs of both deployments (8 VMs); it can't SSH into the foreign half of them and fails.

Two possible solutions are:

  1. add a step in the instructions to change the tag in all of the above files manually
  2. parameterize the tag in the script files
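
Option 2 could look roughly like this (the variable name deployment_tag is an invented example; the playbooks and the hosts script would each need to reference it in place of the literal "icp" tag):

```sh
# Sketch: pass the tag in at run time instead of hardcoding it, so two
# people deploying from the same checkout never select each other's VMs.
ansible-playbook playbooks/create_sl_vms.yml -e deployment_tag=icp-teamA
ansible-playbook playbooks/destroy_sl_vms.yml -e deployment_tag=icp-teamA
```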

Installation stuck on Win7 64

Tried the default steps in a Windows shell (git clone, vagrant up), and after the initial license agreement, nothing happened for quite some time.

After restarting in "vagrant up --debug" mode, I noticed some symptoms that I could search help for, and found this corresponding information hashicorp/vagrant#8783

Thus I installed PowerShell 6.0.0 (or later) and started it in administrator mode.
Given previous attempts to install ICP, I cleared the Vagrant and VirtualBox state:

  • clear the vagrant folder: %USERHOME%\.vagrant.d\
  • clear the VirtualBox image: ubuntu-16.04-amd64
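
On a Unix-like shell the equivalent cleanup would be something like the following (the box name is taken from the error output above; the paths are Vagrant/VirtualBox defaults and differ on Windows):

```sh
# Tear down any half-built machines and cached box data before retrying.
vagrant destroy -f                        # remove VMs created by this Vagrantfile
vagrant box remove ubuntu-16.04-amd64     # drop the cached base box
rm -rf ~/.vagrant.d/boxes/*ubuntu-16.04*  # (optional) clear leftover box files
```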

Then ran "vagrant up --debug", and it ran almost until the end, but failed with this error:

VBoxManage.exe: error: Could not rename the directory 'C:\Users\IBM_ADMIN\VirtualBox VMs\ubuntu-16.04-amd64_1508412122173_40549' to 'C:\Users\IBM_ADMIN\VirtualBox VMs\IBM-Cloud-Private-dev-edition' to save the settings file (VERR_ALREADY_EXISTS)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component SessionMachine, interface IMachine, callee IUnknown
VBoxManage.exe: error: Context: "SaveSettings()" at line 3052 of file VBoxManageModifyVM.cpp

Vagrant Installation Fails on Ubuntu 16.04 VM

I have installed the Vagrant distribution of ICp on my local Macbook, and got it up and running no problems.

I tried to move the Vagrant installation to its own isolated x86 VM, running Ubuntu 16.04 and am now having an issue running the installation script (vagrant up).

[screenshot attached: 2017-11-09 at 1:44:09 PM]

Looks like there's some kind of issue with installing Docker on the LXC nodes under the Ubuntu VM in VirtualBox. As for the line right under it mentioning the base_segment: the Vagrant script correctly creates a vboxnet0 adapter on this host, and it looks like there shouldn't be any collisions with that subnet on this VM.

[screenshot attached: 2017-11-09 at 1:45:00 PM]

I have modified the base_segment and still get the same error with a new virtual adapter on a different subnet (192.168.56).

Can't access the web console

The install seemed to go OK (Windows 7). While it did show the dancer, it also showed that it timed out attempting to verify the services. Now I cannot get to the web console, or indeed get any access, via the 192.168.56 or 192.168.27 networks.

I do seem to be able to SSH into the Linux VM at 127.0.0.1.

Can you advise me?

Unable to load Helm catalog and repositories

Pulled the latest change (ba600e9) and rebuilt my local ICp on RHEL Linux.

The Helm repositories list hung while trying to load (the helm-repo logs showed 500 errors loading the public IBM charts).

The Helm Catalog showed an error message and loaded nothing.

Configure Calico failed during "vagrant up"

==> icp: Playbook run took 0 days, 0 hours, 44 minutes, 27 seconds
==> icp: FATAL ERROR OCCURRED DURING INSTALLATION :-(
==> icp: FAILED - RETRYING: TASK: addon : Waiting for configuring calico node to node mesh (3 retries left).
==> icp: FAILED - RETRYING: TASK: addon : Waiting for configuring calico node to node mesh (2 retries left).
==> icp: FAILED - RETRYING: TASK: addon : Waiting for configuring calico node to node mesh (1 retries left).
==> icp: fatal: [localhost] => {'_ansible_parsed': True, 'stderr_lines': [], u'cmd': u'kubectl get pods --show-all --namespace=kube-system |grep configure-calico-mesh', u'end': u'2017-11-01 05:58:36.565823', '_ansible_no_log': False, u'stdout': u'configure-calico-mesh-0zk9g 0/1 ContainerCreating 0 13m', u'changed': True, u'delta': u'0:00:00.367213', u'start': u'2017-11-01 05:58:36.198610', 'attempts': 10, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': u'/bin/bash', u'_uses_shell': True, u'_raw_params': u'kubectl get pods --show-all --namespace=kube-system |grep configure-calico-mesh', u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [u'configure-calico-mesh-0zk9g 0/1 ContainerCreating 0 13m'], 'failed': True}
==> icp:
==> icp: PLAY RECAP ************************************************************


Failed to install docker on lxc nodes...

I'm using the Vagrant deployment with the icp-cache server. Everything seems to go right when deploying the icp server; however, after some minutes the Docker installation fails with this message:

[...]
==> icp: Preparing nodes for IBM Cloud Private community edition cluster installation.
==> icp: This process will take approximately 10-20 minutes depending on network speeds.
==> icp: Take a break and go grab a cup of coffee, we'll keep working on this while you're away ;-)
==> icp: .
==> icp: .
==> icp: .
==> icp: .
==> icp: .
==> icp: Failed to install docker on lxc nodes...
==> icp: Setting up crda (3.13-1) ...
==> icp: Setting up libltdl7:amd64 (2.4.6-0.1) ...
==> icp: Setting up docker-ce (17.09.0~ce-0~ubuntu) ...
==> icp: Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
==> icp: invoke-rc.d: initscript docker, action "start" failed.
==> icp: ● docker.service - Docker Application Container Engine
==> icp: Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
==> icp: --
==> icp: Nov 06 15:48:43 cfc-worker1 systemd[1]: docker.service: Unit entered failed ....
==> icp: Nov 06 15:48:43 cfc-worker1 systemd[1]: docker.service: Failed with result '....
==> icp: Hint: Some lines were ellipsized, use -l to show in full.
==> icp: dpkg: error processing package docker-ce (--configure):
==> icp: subprocess installed post-installation script returned error exit status 1
==> icp: Setting up keyutils (1.5.9-8ubuntu1) ...
==> icp: Setting up libexpat1-dev:amd64 (2.1.0-7ubuntu0.16.04.3) ...
==> icp: Setting up libpython2.7:amd64 (2.7.12-1ubuntu0.16.04.1) ...
==> icp: --
==> icp: Processing triggers for ureadahead (0.100.0-19) ...
==> icp: Errors were encountered while processing:
==> icp: docker-ce
==> icp: E: Sub-process /usr/bin/dpkg returned an error code (1)
==> icp: Cloud-init v. 0.7.9 running 'modules:final' at Mon, 06 Nov 2017 15:47:15 +0000. Up 12.0 seconds.
==> icp: 2017-11-06 15:49:30,700 - util.py[WARNING]: Failed to install packages: ['linux-image-extra-4.4.0-92-generic', 'linux-image-extra-virtual', 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'docker-ce', 'python-setuptools', 'python-pip', 'build-essential', 'python-dev', 'aufs-tools', 'nfs-common']
==> icp: 2017-11-06 15:49:30,705 - cc_package_update_upgrade_install.py[WARNING]: Rebooting after upgrade or install per /var/run/reboot-required
==> icp: --
==> icp: ci-info: | 0 | 192.168.56.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
==> icp: ci-info: +-------+--------------+---------+---------------+-----------+-------+
==> icp: Cloud-init v. 0.7.9 running 'modules:config' at Mon, 06 Nov 2017 15:49:35 +0000. Up 2.0 seconds.
==> icp: Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
==> icp: Exception:
==> icp: Traceback (most recent call last):
==> icp: File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 209, in main
==> icp: Setting up crda (3.13-1) ...
==> icp: Setting up libltdl7:amd64 (2.4.6-0.1) ...
==> icp: Setting up docker-ce (17.09.0~ce-0~ubuntu) ...
==> icp: Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
==> icp: invoke-rc.d: initscript docker, action "start" failed.
==> icp: ● docker.service - Docker Application Container Engine
==> icp: Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
==> icp: --
==> icp: Nov 06 15:48:43 cfc-worker2 systemd[1]: docker.service: Unit entered failed ....
==> icp: Nov 06 15:48:43 cfc-worker2 systemd[1]: docker.service: Failed with result '....
==> icp: Hint: Some lines were ellipsized, use -l to show in full.

When entering into the VM:
vagrant@master:$ lxc list
+--------------+---------+-----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------------+---------+-----------------------+------+------------+-----------+
| cfc-manager1 | RUNNING | 192.168.27.111 (eth0) | | PERSISTENT | 0 |
+--------------+---------+-----------------------+------+------------+-----------+
| cfc-worker1 | RUNNING | 192.168.27.101 (eth0) | | PERSISTENT | 0 |
+--------------+---------+-----------------------+------+------------+-----------+
| cfc-worker2 | RUNNING | 192.168.27.102 (eth0) | | PERSISTENT | 0 |
+--------------+---------+-----------------------+------+------------+-----------+
vagrant@master:$ systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Mon 2017-11-06 16:02:24 UTC; 3min 48s ago
Docs: https://docs.docker.com
Process: 1832 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 1832 (code=exited, status=1/FAILURE)

Nov 06 16:02:24 master systemd[1]: docker.service: Unit entered failed state.
Nov 06 16:02:24 master systemd[1]: docker.service: Failed with result 'exit-code'.
Nov 06 16:02:24 master systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Nov 06 16:02:24 master systemd[1]: Stopped Docker Application Container Engine.
Nov 06 16:02:24 master systemd[1]: docker.service: Start request repeated too quickly.
Nov 06 16:02:24 master systemd[1]: Failed to start Docker Application Container Engine.
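
When docker.service ends up in this "start request repeated too quickly" state, the usual next step is to clear the failed state and read the actual startup error (standard systemd/journalctl commands, nothing specific to this repo):

```sh
# Inside the affected node (e.g. `lxc exec cfc-worker1 bash` from the VM):
systemctl reset-failed docker.service                # clear the rate-limit state
journalctl -u docker.service --no-pager | tail -50   # see why dockerd actually exited
systemctl start docker.service
```

Worth noting: the cloud-init warning earlier in this log shows that linux-image-extra-* and aufs-tools failed to install, so the Docker storage-driver prerequisites may simply be missing on these nodes, which would explain dockerd exiting immediately.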

Can you help me?

Pre-pulling the wrong cloudant image

When we pre-pull images, the cloudant pull in the Vagrantfile is:
echo "Pulling #{image_repo}/icp-cloudantdb:#{version}..."
docker pull #{image_repo}/cloudantdb:#{version} &> /dev/null

However this is the image the installer actually pulls:
ibmcom/icp-datastore 2.1.0-beta-3 ebe697aa6df8 5 days ago 1.69GB
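
A minimal fix (sketch only; the image name is taken from the installer output above, and the #{...} interpolations are the Vagrantfile's own) would be to pull the image the installer actually uses and keep the echoed name in sync with it:

```sh
# Corrected pre-pull step: pull icp-datastore, the image the installer
# really needs, instead of the nonexistent cloudantdb name.
echo "Pulling #{image_repo}/icp-datastore:#{version}..."
docker pull #{image_repo}/icp-datastore:#{version} &> /dev/null
```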

The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed.

Can't open https://192.168.27.100:8443. How to resolve?

vagrant up
......
icp: skipping: [192.168.27.111]
icp: skipping: [192.168.27.101]
icp: skipping: [192.168.27.100]
icp: skipping: [192.168.27.102]
icp:
icp: TASK [common : Setting WLP Client ID] ******************************************
icp: skipping: [192.168.27.101]
icp: skipping: [192.168.27.111]
icp: skipping: [192.168.27.102]
icp: skipping: [192.168.27.100]
icp:
icp: TASK [common : Setting WLP Client Secret] **************************************
icp: skipping: [192.168.27.101]
icp: skipping: [192.168.27.111]
icp: skipping: [192.168.27.102]
icp: skipping: [192.168.27.100]
icp:
icp: TASK [common : Setting WLP OAuth2 Client Registration Password] ****************
icp: skipping: [192.168.27.111]
icp: skipping: [192.168.27.101]
icp: skipping: [192.168.27.102]
icp: skipping: [192.168.27.100]
icp:
icp: TASK [ipsec : include] *********************************************************
icp: skipping: [192.168.27.111]
icp: skipping: [192.168.27.102]
icp: skipping: [192.168.27.101]
icp: skipping: [192.168.27.100]
icp:
icp: PLAY RECAP *********************************************************************
icp: 192.168.27.100 : ok=194 changed=59 unreachable=0 failed=0
icp: 192.168.27.101 : ok=112 changed=39 unreachable=0 failed=0
icp: 192.168.27.102 : ok=112 changed=39 unreachable=0 failed=0
icp: 192.168.27.111 : ok=112 changed=39 unreachable=0 failed=0
icp: localhost : ok=201 changed=100 unreachable=0 failed=0
icp:
icp:
icp: POST DEPLOY MESSAGE ************************************************************
icp:
icp: UI URL is https://192.168.27.100:8443 , default username/password is admin/admin
icp: Playbook run took 0 days, 0 hours, 40 minutes, 6 seconds
==> icp: Running provisioner: shell...
icp: Running: script: install_kubectl
==> icp: Running provisioner: shell...
icp: Running: script: create_persistant_volumes
==> icp: Running provisioner: shell...
icp: Running: script: install_helm
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
sh-3.2# vagrant ssh
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-92-generic x86_64)

84 packages can be updated.
36 updates are security updates.

*** System restart required ***
Last login: Wed Dec 20 15:20:18 2017 from 192.168.27.100
vagrant@master:$ lxc list
+--------------+---------+--------------------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------------+---------+--------------------------------+------+------------+-----------+
| cfc-manager1 | RUNNING | 192.168.27.111 (eth0) | | PERSISTENT | 0 |
| | | 172.17.0.1 (docker0) | | | |
| | | 10.1.70.64 (tunl0) | | | |
+--------------+---------+--------------------------------+------+------------+-----------+
| cfc-worker1 | RUNNING | 192.168.27.101 (eth0) | | PERSISTENT | 0 |
| | | 172.17.0.1 (docker0) | | | |
| | | 10.1.213.128 (tunl0) | | | |
+--------------+---------+--------------------------------+------+------------+-----------+
| cfc-worker2 | RUNNING | 192.168.27.102 (eth0) | | PERSISTENT | 0 |
| | | 172.17.0.1 (docker0) | | | |
| | | 10.1.54.64 (tunl0) | | | |
+--------------+---------+--------------------------------+------+------------+-----------+
vagrant@master:~$
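
Since the playbook recap above shows failed=0 on every host and the UI URL was printed, the cluster itself likely came up; only the later install_helm provisioner step failed. One low-risk next step is to re-run just the Vagrant provisioners instead of rebuilding the VM (a standard Vagrant command, nothing repo-specific):

```sh
# Re-runs the shell provisioners (install_kubectl, install_helm, etc.)
# against the already-running VM.
vagrant provision --provision-with shell
```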

Setup in Mac gets stuck

The vagrant up gets stuck at "==> icp: Pulling ibmcom/icp-datastore:2.1.0-beta-3..." forever, with no movement even after 1 hour.

docker version
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Tue Mar 28 00:40:02 2017
OS/Arch: darwin/amd64
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Sandips-MacBook-Air:~ sandipgupta$ vagrant version
Installed Version: 2.0.0
Latest Version: 2.0.0
You're running an up-to-date version of Vagrant!

==> icp: This process will take approximately 10-20 minutes depending on network speeds.
==> icp: Take a break and go grab a cup of coffee, we'll keep working on this while you're away ;-)
==> icp: .
==> icp: .
==> icp: .
==> icp: master.icp ready
==> icp: cfc-worker1.icp ready
==> icp: cfc-worker2.icp ready
==> icp: cfc-manager1.icp ready
==> icp: Running provisioner: shell...
icp: Running: script: precache_images
==> icp: Seeding IBM Cloud Private installation by pre-caching required docker images.
==> icp: This may take a few minutes depending on your connection speed and reliability.
==> icp: Pre-caching docker images....
==> icp: Pulling ibmcom/icp-inception:2.1.0-beta-3...
==> icp: Pulling ibmcom/icp-datastore:2.1.0-beta-3...

Failure to install and work on Mac

I'm at my wits' end at this point. I've had it up and running twice, and in both cases the Helm deployment never comes up. This is my 12th install, and I keep getting errors like this (at various points in the install). Hardware: Mac Pro 2012, 2x 6-core Xeon, 16GB RAM, High Sierra latest release.

Any suggestions would be welcomed:

icp: failed: [localhost] (item=192.168.27.111) => {"changed": true, "cmd": "kubectl label nodes 192.168.27.111 management=true --overwrite=true", "delta": "0:00:00.406529", "end": "2017-12-20 15:09:44.039662", "failed": true, "item": "192.168.27.111", "rc": 1, "start": "2017-12-20 15:09:43.633133", "stderr": "Error from server (NotFound): nodes \"192.168.27.111\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"192.168.27.111\" not found"], "stdout": "", "stdout_lines": []}
    icp: 
    icp: PLAY RECAP *********************************************************************
    icp: 192.168.27.100             : ok=194  changed=59   unreachable=0    failed=0   
    icp: 192.168.27.101             : ok=103  changed=32   unreachable=1    failed=0   
    icp: 192.168.27.102             : ok=103  changed=32   unreachable=1    failed=0   
    icp: 192.168.27.111             : ok=103  changed=32   unreachable=1    failed=0   
    icp: localhost                  : ok=111  changed=47   unreachable=0    failed=1   
    icp: 
    icp: Playbook run took 0 days, 0 hours, 12 minutes, 34 seconds
    icp: FATAL ERROR OCCURRED DURING INSTALLATION :-(
    icp: 
    icp: TASK [kubelet : Check that nsenter exists] *************************************
    icp: ok: [192.168.27.100]
    icp: fatal: [192.168.27.111] => Failed to connect to the host via ssh: Connection timed out during banner exchange
    icp: fatal: [192.168.27.101] => Failed to connect to the host via ssh: Connection timed out during banner exchange
    icp: fatal: [192.168.27.102] => Failed to connect to the host via ssh: Connection timed out during banner exchange
    icp: 
    icp: TASK [kubelet : Copying nsenter onto operating system] *************************
    icp: skipping: [192.168.27.100]
    icp: The install log can be view with: 
    icp: vagrant ssh
    icp: cat icp_install_log
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

Failed in localhost

Error message:

==> icp:
==> icp: TASK [addon : Creating rbac roles] *********************************************
==> icp: changed: [localhost]
==> icp:
==> icp: TASK [addon : Adding label to master nodes] ************************************
==> icp: failed: [localhost] (item=192.168.27.100) => {"changed": true, "cmd": "kubectl label nodes 192.168.27.100 role=master --overwrite=true", "delta": "0:00:00.350920", "end": "2017-10-11 12:29:47.887221", "failed": true, "item": "192.168.27.100", "rc": 1, "start": "2017-10-11 12:29:47.536301", "stderr": "Error from server (NotFound): nodes \"192.168.27.100\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"192.168.27.100\" not found"], "stdout": "", "stdout_lines": []}
==> icp:
==> icp: PLAY RECAP *********************************************************************
==> icp: 192.168.27.100 : ok=186 changed=18 unreachable=0 failed=0
==> icp: 192.168.27.101 : ok=104 changed=16 unreachable=0 failed=0
==> icp: 192.168.27.102 : ok=104 changed=16 unreachable=0 failed=0
==> icp: 192.168.27.111 : ok=104 changed=16 unreachable=0 failed=0
==> icp: localhost : ok=107 changed=6 unreachable=0 failed=1
==> icp:
==> icp: Playbook run took 0 days, 0 hours, 10 minutes, 5 seconds

What happened? And what do I need to do?

ICP 2.1 Deploy in SL env.

Running this command:
ansible-playbook playbooks/create_sl_vms.yml
It comes back with this error:

[WARNING]: provided hosts list is empty, only localhost is available

ERROR! no action detected in task

The error appears to have been in '/home/louie/deploy-ibm-cloud-private/playbooks/create_sl_vms.yml': line 10, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

    file: ../cluster/config.yaml
- name: create master
  ^ here

Any clue how to solve it? Or does my configuration have some issue?

ICP on SL via Ansible install fails - "Unable to parse /etc/ansible/hosts as an inventory source"

I'm attempting to install ICP on Softlayer. I got the CLI configured and SSH key generated and installed. I updated cluster/config.yaml with the ssh key, data center, and VLAN.
When I run the Ansible script, I get these errors:

[~/projects/icp/deploy-ibm-cloud-private] | ansible-playbook playbooks/create_sl_vms.yml
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source

[WARNING]: No inventory was parsed, only implicit localhost is available

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does
not match 'all'

PLAY [create servers] ******************************************************************************************

TASK [Include cluster vars] ************************************************************************************
ok: [localhost]

TASK [create master] *******************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SoftLayer.exceptions.SoftLayerAPIError: SoftLayerAPIError(SoftLayer_Exception_Public): Could not place order. There is insufficient capacity to complete the request. For more information about capacity considerations, please see https://console.bluemix.net/docs/vsi/ts_capacity_bp.html
failed: [localhost] (item=icp-master01) => {"changed": false, "item": "icp-master01", "module_stderr": "Traceback (most recent call last):\n  File \"/var/folders/vc/s1bbd4mj1sq9sm3c6wbl96bh0000gn/T/ansible_3Ad_Ak/ansible_module_sl_vm.py\", line 392, in <module>\n    main()\n  File \"/var/folders/vc/s1bbd4mj1sq9sm3c6wbl96bh0000gn/T/ansible_3Ad_Ak/ansible_module_sl_vm.py\", line 384, in main\n    (changed, instance) = create_virtual_instance(module)\n  File \"/var/folders/vc/s1bbd4mj1sq9sm3c6wbl96bh0000gn/T/ansible_3Ad_Ak/ansible_module_sl_vm.py\", line 302, in create_virtual_instance\n    tags = tags)\n  File \"/Library/Python/2.7/site-packages/SoftLayer/managers/vs.py\", line 558, in create_instance\n    inst = self.guest.createObject(self._generate_create_dict(**kwargs))\n  File \"/Library/Python/2.7/site-packages/SoftLayer/API.py\", line 390, in call_handler\n    return self(name, *args, **kwargs)\n  File \"/Library/Python/2.7/site-packages/SoftLayer/API.py\", line 358, in call\n    return self.client.call(self.name, name, *args, **kwargs)\n  File \"/Library/Python/2.7/site-packages/SoftLayer/API.py\", line 261, in call\n    return self.transport(request)\n  File \"/Library/Python/2.7/site-packages/SoftLayer/transports.py\", line 215, in call\n    raise _ex(ex.faultCode, ex.faultString)\nSoftLayer.exceptions.SoftLayerAPIError: SoftLayerAPIError(SoftLayer_Exception_Public): Could not place order. There is insufficient capacity to complete the request. For more information about capacity considerations, please see https://console.bluemix.net/docs/vsi/ts_capacity_bp.html\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}

PLAY RECAP *****************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
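The SoftLayerAPIError above is a capacity problem on SoftLayer's side, not a configuration error: the chosen datacenter has no VSI capacity for the requested flavor. One workaround is to point `cluster/config.yaml` at a different datacenter. A sketch on a scratch copy of the file, assuming the key is named `datacenter` (the codes `wdc04`/`dal13` are illustrative; `slcli vs create-options` lists the real choices):

```shell
# Scratch copy standing in for cluster/config.yaml
cat > /tmp/config.yaml <<'EOF'
datacenter: wdc04
EOF

# Swap in a different datacenter (dal13 is illustrative, not a recommendation).
sed -i.bak 's/^datacenter:.*/datacenter: dal13/' /tmp/config.yaml
cat /tmp/config.yaml
```

After editing the real file, rerunning `ansible-playbook playbooks/create_sl_vms.yml` retries the order in the new datacenter.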

Add RHEL support

Given that RHEL is a supported platform for IBM Cloud Private, we should make sure this deployment method works, so I gave it a try: RHEL 7.3 on my P50 laptop (IBM provided).

  • Set up VirtualBox, I'm using 5.1
  • Set up Vagrant, I use 1.9.6
  • git clone this repo
  • vagrant up

Wait about 7 minutes; it fails. Then run vagrant ssh and see what happened:

TASK [addon : Adding label to master nodes] ************************************
changed: [localhost] => (item=192.168.27.100)

TASK [addon : Adding label to proxy nodes] *************************************
changed: [localhost] => (item=192.168.27.100)

TASK [addon : Adding label to management nodes] ********************************
failed: [localhost] (item=192.168.27.111) => {"changed": true, "cmd": "kubectl label nodes 192.168.27.111 management=true --overwrite=true", "delta": "0:00:00.254766", "end": "2017-10-11 12:49:17.319007", "failed": true, "item": "192.168.27.111", "rc": 1, "start": "2017-10-11 12:49:17.064241", "stderr": "Error from server (NotFound): nodes \"192.168.27.111\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"192.168.27.111\" not found"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
192.168.27.100             : ok=195  changed=61   unreachable=0    failed=0   
192.168.27.101             : ok=0    changed=0    unreachable=1    failed=0   
192.168.27.102             : ok=0    changed=0    unreachable=1    failed=0   
192.168.27.111             : ok=0    changed=0    unreachable=1    failed=0   
localhost                  : ok=109  changed=49   unreachable=0    failed=1   

Playbook run took 0 days, 0 hours, 7 minutes, 12 seconds
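The recap shows the three non-master VMs never came up at all (`unreachable=1`), so the later label step run from localhost had no registered node to label. A quick sketch for pulling the problem hosts out of a recap like this one, run over the recap text itself:

```shell
# PLAY RECAP excerpt from the run above
recap='192.168.27.100             : ok=195  changed=61   unreachable=0    failed=0
192.168.27.101             : ok=0    changed=0    unreachable=1    failed=0
192.168.27.102             : ok=0    changed=0    unreachable=1    failed=0
192.168.27.111             : ok=0    changed=0    unreachable=1    failed=0
localhost                  : ok=109  changed=49   unreachable=0    failed=1'

# Print every host with a nonzero unreachable or failed count.
bad=$(echo "$recap" | awk '/unreachable=[1-9]|failed=[1-9]/ {print $1}')
echo "$bad"
```

Here that prints the three unreachable VMs plus localhost, which points the investigation at VM networking (why the worker/management VMs were never reachable over SSH) rather than at the label task that finally reported the failure.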
