
openshift-vagrant's Introduction

OpenShift Vagrant | 中文 (Chinese)

Licensed under Apache License version 2.0

☞ Notice

This project is now in maintenance mode and OKD version support remains at RELEASE-3.11. OpenShift hasn't been part of my work life for a while, and I have no time to maintain compatibility with newer OpenShift releases.

Any contributions are warmly welcome at any time! I hope the project makes your life as easy as enjoying a cup of coffee.

Overview

The OpenShift Vagrant project aims to make it easy to bring up a real OKD cluster on your local machine by providing pre-configured Vagrantfiles for several major releases of OKD.

Prerequisites

  • Host machine must have at least 8GB memory (16GB for OKD 3.11)
  • Oracle VirtualBox installed on your host machine
  • Vagrant (2.0 or above) installed on your host machine
  • Vagrant plugin vagrant-hostmanager must be installed (see the command below)
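If the hostmanager plugin is not installed yet, it can be added with the standard Vagrant plugin command:

$ vagrant plugin install vagrant-hostmanager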

OKD Version Support

Currently this project is pre-configured to support several major versions of OKD (see the version table below).

However, it is easy to customize the respective Ansible hosts files in order to support other upcoming major versions.

The Vagrantfile uses Origin 3.11 and the openshift-ansible release-3.11 branch by default. Feel free to adjust the versions by updating the following 2 variables in the Vagrantfile (see the sketch after this list):

  1. OPENSHIFT_RELEASE
  2. OPENSHIFT_ANSIBLE_BRANCH
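For example, assuming both variables are plain string assignments near the top of the Vagrantfile, they could be switched to the 3.9 release with something like this (the values shown are only an illustration):

$ sed -i.bak \
    -e "s/^OPENSHIFT_RELEASE *=.*/OPENSHIFT_RELEASE = '3.9'/" \
    -e "s/^OPENSHIFT_ANSIBLE_BRANCH *=.*/OPENSHIFT_ANSIBLE_BRANCH = 'release-3.9'/" \
    Vagrantfile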

The following table lists the corresponding version relationships between Origin and openshift-ansible:

OKD version    openshift-ansible branch
3.11.x         release-3.11
3.10.x         release-3.10
3.9.x          release-3.9
3.7.x          release-3.7
3.6.x          release-3.6

Getting Started

After adjusting the version information to your needs, it's time to bring your cluster up and running.

This Vagrantfile will create 3 VMs in VirtualBox; their private network addresses are derived from the variable NETWORK_BASE.

Check out the table below for more details:

VM Node   Private IP            Roles
master    #{NETWORK_BASE}.101   node, master, etcd
node01    #{NETWORK_BASE}.102   node
node02    #{NETWORK_BASE}.103   node

Bring Vagrant Up

$ vagrant up

Provisioning Private Keys

$ vagrant provision --provision-with master-key,node01-key,node02-key
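If the Ansible run later fails with "UNPROTECTED PRIVATE KEY FILE" warnings (see the "Error after oc-up.sh" issue further down this page), tightening the permissions of the copied keys inside the master VM may help; a minimal sketch:

$ vagrant ssh master -c 'chmod 600 /home/vagrant/.ssh/*.key'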

Install Origin Cluster Using Ansible

Run the following command if you would like to install an Origin release prior to release-3.8:

$ vagrant ssh master -c 'ansible-playbook /home/vagrant/openshift-ansible/playbooks/byo/config.yml'

Run the following command for origin 3.8 or above:

vagrant ssh master \
        -c 'ansible-playbook /home/vagrant/openshift-ansible/playbooks/prerequisites.yml &&
            ansible-playbook /home/vagrant/openshift-ansible/playbooks/deploy_cluster.yml'

oc-up.sh

The above 3 steps have been grouped together as one script for you. To bring your cluster up, just use the following command:

$ ./oc-up.sh

Open Web Console

In a browser on your host, open the following page: https://master.example.com:8443/ and you should see the OpenShift Web Console login page. The default login account is admin/handhand.
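To verify the cluster from the command line as well, you can log in with the same default account from inside the master VM; a quick check, assuming the defaults above are unchanged:

$ vagrant ssh master -c 'oc login -u admin -p handhand https://master.example.com:8443 && oc get nodes'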

Have fun with OKD and Vagrant :p

openshift-vagrant's People

Contributors

chris-str-cst, eliu


openshift-vagrant's Issues

When trying to deploy an image, getting connection refused to Kubernetes API endpoint

I tried creating a project and deploying an image. I get the following error:

error: couldn't get deployment websphere-liberty-1: Get https://172.30.0.1:443/api/v1/namespaces/was-liberty/replicationcontrollers/websphere-liberty-1:  dial tcp 172.30.0.1:443: connect: connection refused

I tried to curl https://172.30.0.1:443/ and it seems to work on the master node:

[vagrant@master ~]$ curl https://172.30.0.1
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps.openshift.io",
    "/apis/apps.openshift.io/v1",
    "/apis/apps/v1",

But when I issue the same request from node01 or node02, it cannot connect:

[vagrant@node01 ~]$ curl https://172.30.0.1
curl: (7) Failed connect to 172.30.0.1:443; Connection refused
[vagrant@node01 ~]$


Any pointers highly appreciated.  

Add support for custom version tag

OKD v3.11.0 has been officially available since October 11. It would be nice to support this, but preferably via variables that can be provided at startup.

Related to #1

Error after oc-up.sh

I can't reach the web console. I see the following during oc-up.sh.
Any pointers on what is missing or what to look for? This is my first attempt at getting an OpenShift environment on my laptop.
Thanks in advance.
'''
$ ./oc-up.sh
Bringing machine 'node01' up with 'virtualbox' provider...
Bringing machine 'node02' up with 'virtualbox' provider...
Bringing machine 'master' up with 'virtualbox' provider...
==> node01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> node01: [vagrant-hostmanager:host] Updating hosts file on your workstation (password may be required)...
==> node01: Machine already provisioned. Run vagrant provision or use the --provision
==> node01: flag to force provisioning. Provisioners marked to run always will still run.
==> node02: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> node02: [vagrant-hostmanager:host] Updating hosts file on your workstation (password may be required)...
==> node02: Machine already provisioned. Run vagrant provision or use the --provision
==> node02: flag to force provisioning. Provisioners marked to run always will still run.
==> master: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> master: [vagrant-hostmanager:host] Updating hosts file on your workstation (password may be required)...
==> master: Machine already provisioned. Run vagrant provision or use the --provision
==> master: flag to force provisioning. Provisioners marked to run always will still run.
==> master: Running provisioner: master-key (file)...
master: .vagrant/machines/master/virtualbox/private_key => /home/vagrant/.ssh/master.key
==> master: Running provisioner: node01-key (file)...
master: .vagrant/machines/node01/virtualbox/private_key => /home/vagrant/.ssh/node01.key
==> master: Running provisioner: node02-key (file)...
master: .vagrant/machines/node02/virtualbox/private_key => /home/vagrant/.ssh/node02.key

PLAY [Fail openshift_kubelet_name_override for new hosts] ***********************************************

TASK [Gathering Facts] **********************************************************************************
fatal: [node01.example.com]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\n@ WARNING: UNPROTECTED PRIVATE KEY FILE! @\r\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\nPermissions 0664 for '/home/vagrant/.ssh/node01.key' are too open.\r\nIt is required that your private key files are NOT accessible by others.\r\nThis private key will be ignored.\r\nLoad key "/home/vagrant/.ssh/node01.key": bad permissions\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
fatal: [node02.example.com]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\n@ WARNING: UNPROTECTED PRIVATE KEY FILE! @\r\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\nPermissions 0664 for '/home/vagrant/.ssh/node02.key' are too open.\r\nIt is required that your private key files are NOT accessible by others.\r\nThis private key will be ignored.\r\nLoad key "/home/vagrant/.ssh/node02.key": bad permissions\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
fatal: [master.example.com]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\n@ WARNING: UNPROTECTED PRIVATE KEY FILE! @\r\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\r\nPermissions 0664 for '/home/vagrant/.ssh/master.key' are too open.\r\nIt is required that your private key files are NOT accessible by others.\r\nThis private key will be ignored.\r\nLoad key "/home/vagrant/.ssh/master.key": bad permissions\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", "unreachable": true}
to retry, use: --limit @/home/vagrant/openshift-ansible/playbooks/prerequisites.retry

PLAY RECAP **********************************************************************************************
master.example.com : ok=0 changed=0 unreachable=1 failed=0
node01.example.com : ok=0 changed=0 unreachable=1 failed=0
node02.example.com : ok=0 changed=0 unreachable=1 failed=0

Connection to 127.0.0.1 closed.
'''

Workaround for Installing OKD 3.11

This is originally an issue in openshift-ansible caused by the CentOS repo being frozen for 7.6; see the mailing-list link referenced in the fix below for the discussion.

I have already applied a temporary fix for that:

# Fix missing packages for openshift origin 3.11.0
# https://lists.openshift.redhat.com/openshift-archives/dev/2018-November/msg00005.html
[ "$(version #{OPENSHIFT_RELEASE})" -eq "$(version 3.11)" ] && yum install -y centos-release-openshift-origin311

Master node not ready to run master-api

I checked whether all nodes are already connected on the master node using oc get nodes. The result is shown below:

server master.example.com:8443 was refused - did you specify the right host and port

Checking the logs of master-api on the master node:

[vagrant@master ~]$ sudo /usr/local/bin/master-logs api api
E0823 23:26:50.786996       1 helpers.go:134] Encountered config error json: unknown field "masterCount" in object *config.MasterConfig, raw JSON:
{"admissionConfig":{"pluginConfig":{"BuildDefaults":{"configuration":{"apiVersion":"v1","env":[],"kind":"BuildDefaultsConfig","resources":{"limits":{},"requests":{}}}},"BuildOverrides":{"configuration":{"apiVersion":"v1","kind":"BuildOverridesConfig"}},"openshift.io/ImagePolicy":{"configuration":{"apiVersion":"v1","executionRules":[{"matchImageAnnotations":[{"key":"images.openshift.io/deny-execution","value":"true"}],"name":"execution-denied","onResources":[{"resource":"pods"},{"resource":"builds"}],"reject":true,"skipOnResolutionFailure":true}],"kind":"ImagePolicyConfig"}}}},"aggregatorConfig":{"proxyClientInfo":{"certFile":"aggregator-front-proxy.crt","keyFile":"aggregator-front-proxy.key"}},"apiLevels":["v1"],"apiVersion":"v1","authConfig":{"requestHeader":{"clientCA":"front-proxy-ca.crt","clientCommonNames":["aggregator-front-proxy"],"extraHeaderPrefixes":["X-Remote-Extra-"],"groupHeaders":["X-Remote-Group"],"usernameHeaders":["X-Remote-User"]}},"controllerConfig":{"election":{"lockName":"openshift-master-controllers"},"serviceServingCert":{"signer":{"certFile":"service-signer.crt","keyFile":"service-signer.key"}}},"controllers":"*","corsAllowedOrigins":["(?i)//127\\.0\\.0\\.1(:|\\z)","(?i)//localhost(:|\\z)","(?i)//10\\.0\\.2\\.15(:|\\z)","(?i)//kubernetes\\.default(:|\\z)","(?i)//kubernetes\\.default\\.svc\\.cluster\\.local(:|\\z)","(?i)//kubernetes(:|\\z)","(?i)//openshift\\.default(:|\\z)","(?i)//openshift\\.default\\.svc(:|\\z)","(?i)//172\\.30\\.0\\.1(:|\\z)","(?i)//master\\.example\\.com(:|\\z)","(?i)//openshift\\.default\\.svc\\.cluster\\.local(:|\\z)","(?i)//kubernetes\\.default\\.svc(:|\\z)","(?i)//openshift(:|\\z)"],"dnsConfig":{"bindAddress":"0.0.0.0:8053","bindNetwork":"tcp4"},"etcdClientInfo":{"ca":"master.etcd-ca.crt","certFile":"master.etcd-client.crt","keyFile":"master.etcd-client.key","urls":["https://master.example.com:2379"]},"etcdStorageConfig":{"kubernetesStoragePrefix":"kubernetes.io","kubernetesStorageVersion":"v1","openShiftStoragePrefix":"openshift.io","openShiftStorageVersion":"v1"},"imageConfig":{"format":"docker.io/openshift/origin-${component}:${version}","latest":false},"imagePolicyConfig":{"internalRegistryHostname":"docker-registry.default.svc:5000"},"kind":"MasterConfig","kubeletClientInfo":{"ca":"ca-bundle.crt","certFile":"master.kubelet-client.crt","keyFile":"master.kubelet-client.key","port":10250},"kubernetesMasterConfig":{"apiServerArguments":{"storage-backend":["etcd3"],"storage-media-type":["application/vnd.kubernetes.protobuf"]},"controllerArguments":{"cluster-signing-cert-file":["/etc/origin/master/ca.crt"],"cluster-signing-key-file":["/etc/origin/master/ca.key"],"pv-recycler-pod-template-filepath-hostpath":["/etc/origin/master/recycler_pod.yaml"],"pv-recycler-pod-template-filepath-nfs":["/etc/origin/master/recycler_pod.yaml"]},"masterCount":1,"masterIP":"10.0.2.15","podEvictionTimeout":null,"proxyClientInfo":{"certFile":"master.proxy-client.crt","keyFile":"master.proxy-client.key"},"schedulerArguments":null,"schedulerConfigFile":"/etc/origin/master/scheduler.json","servicesNodePortRange":"","servicesSubnet":"172.30.0.0/16","staticNodeNames":[]},"masterClients":{"externalKubernetesClientConnectionOverrides":{"acceptContentTypes":"application/vnd.kubernetes.protobuf,application/json","burst":400,"contentType":"application/vnd.kubernetes.protobuf","qps":200},"externalKubernetesKubeConfig":"","openshiftLoopbackClientConnectionOverrides":{"acceptContentTypes":"application/vnd.kubernetes.protobuf,application/json","burst":600,"contentType":"applicat
ion/vnd.kubernetes.protobuf","qps":300},"openshiftLoopbackKubeConfig":"openshift-master.kubeconfig"},"masterPublicURL":"https://master.example.com:8443","networkConfig":{"clusterNetworks":[{"cidr":"10.128.0.0/14","hostSubnetLength":9}],"externalIPNetworkCIDRs":["0.0.0.0/0"],"networkPluginName":"redhat/openshift-ovs-subnet","serviceNetworkCIDR":"172.30.0.0/16"},"oauthConfig":{"assetPublicURL":"https://master.example.com:8443/console/","grantConfig":{"method":"auto"},"identityProviders":[{"challenge":true,"login":true,"mappingMethod":"claim","name":"htpasswd_auth","provider":{"apiVersion":"v1","file":"/etc/origin/master/htpasswd","kind":"HTPasswdPasswordIdentityProvider"}}],"masterCA":"ca-bundle.crt","masterPublicURL":"https://master.example.com:8443","masterURL":"https://master.example.com:8443","sessionConfig":{"sessionMaxAgeSeconds":3600,"sessionName":"ssn","sessionSecretsFile":"/etc/origin/master/session-secrets.yaml"},"tokenConfig":{"accessTokenMaxAgeSeconds":86400,"authorizeTokenMaxAgeSeconds":500}},"pauseControllers":false,"policyConfig":{"bootstrapPolicyFile":"/etc/origin/master/policy.json","openshiftInfrastructureNamespace":"openshift-infra","openshiftSharedResourcesNamespace":"openshift"},"projectConfig":{"defaultNodeSelector":"node-role.kubernetes.io/compute=true","projectRequestMessage":"","projectRequestTemplate":"","securityAllocator":{"mcsAllocatorRange":"s0:/2","mcsLabelsPerProject":5,"uidAllocatorRange":"1000000000-1999999999/10000"}},"routingConfig":{"subdomain":"openshift.example.com"},"serviceAccountConfig":{"limitSecretReferences":false,"managedNames":["default","builder","deployer"],"masterCA":"ca-bundle.crt","privateKeyFile":"serviceaccounts.private.key","publicKeyFiles":["serviceaccounts.public.key"]},"servingInfo":{"bindAddress":"0.0.0.0:8443","bindNetwork":"tcp4","certFile":"master.server.crt","clientCA":"ca.crt","keyFile":"master.server.key","maxRequestsInFlight":500,"requestTimeoutSeconds":3600},"volumeConfig":{"dynamicProvisioningEnabled":true}}
I0823 23:26:50.797468       1 plugins.go:84] Registered admission plugin "NamespaceLifecycle"
I0823 23:26:50.797650       1 plugins.go:84] Registered admission plugin "Initializers"
I0823 23:26:50.797812       1 plugins.go:84] Registered admission plugin "ValidatingAdmissionWebhook"
I0823 23:26:50.798173       1 plugins.go:84] Registered admission plugin "MutatingAdmissionWebhook"
I0823 23:26:50.798591       1 plugins.go:84] Registered admission plugin "AlwaysAdmit"
I0823 23:26:50.798812       1 plugins.go:84] Registered admission plugin "AlwaysPullImages"
I0823 23:26:50.799183       1 plugins.go:84] Registered admission plugin "LimitPodHardAntiAffinityTopology"
I0823 23:26:50.805464       1 plugins.go:84] Registered admission plugin "DefaultTolerationSeconds"
I0823 23:26:50.805492       1 plugins.go:84] Registered admission plugin "AlwaysDeny"
I0823 23:26:50.805507       1 plugins.go:84] Registered admission plugin "EventRateLimit"
I0823 23:26:50.805516       1 plugins.go:84] Registered admission plugin "DenyEscalatingExec"
I0823 23:26:50.805522       1 plugins.go:84] Registered admission plugin "DenyExecOnPrivileged"
I0823 23:26:50.805530       1 plugins.go:84] Registered admission plugin "ExtendedResourceToleration"
I0823 23:26:50.805537       1 plugins.go:84] Registered admission plugin "OwnerReferencesPermissionEnforcement"
I0823 23:26:50.805548       1 plugins.go:84] Registered admission plugin "ImagePolicyWebhook"
I0823 23:26:50.805558       1 plugins.go:84] Registered admission plugin "LimitRanger"
I0823 23:26:50.805566       1 plugins.go:84] Registered admission plugin "NamespaceAutoProvision"
I0823 23:26:50.805574       1 plugins.go:84] Registered admission plugin "NamespaceExists"
I0823 23:26:50.805583       1 plugins.go:84] Registered admission plugin "NodeRestriction"
I0823 23:26:50.805592       1 plugins.go:84] Registered admission plugin "PersistentVolumeLabel"
I0823 23:26:50.805604       1 plugins.go:84] Registered admission plugin "PodNodeSelector"
I0823 23:26:50.805622       1 plugins.go:84] Registered admission plugin "PodPreset"
I0823 23:26:50.805630       1 plugins.go:84] Registered admission plugin "PodTolerationRestriction"
I0823 23:26:50.805639       1 plugins.go:84] Registered admission plugin "ResourceQuota"
I0823 23:26:50.805647       1 plugins.go:84] Registered admission plugin "PodSecurityPolicy"
I0823 23:26:50.805655       1 plugins.go:84] Registered admission plugin "Priority"
I0823 23:26:50.805663       1 plugins.go:84] Registered admission plugin "SecurityContextDeny"
I0823 23:26:50.805679       1 plugins.go:84] Registered admission plugin "ServiceAccount"
I0823 23:26:50.805690       1 plugins.go:84] Registered admission plugin "DefaultStorageClass"
I0823 23:26:50.805698       1 plugins.go:84] Registered admission plugin "PersistentVolumeClaimResize"
I0823 23:26:50.805706       1 plugins.go:84] Registered admission plugin "StorageObjectInUseProtection"
F0823 23:27:20.816119       1 start_api.go:68] dial tcp 192.168.160.101:2379: connect: connection refused

Logo

Your use of the OpenShift logo is not in line with Red Hat's logo usage guidelines.
Please either use it undistorted or refrain from using it.

/home/vagrant/openshift-ansible/playbooks/byo/config.yml could not be found

Hello

Following your doc, I get this:
vagrant ssh master -c 'ansible-playbook /home/vagrant/openshift-ansible/playbooks/byo/config.yml'
ERROR! the playbook: /home/vagrant/openshift-ansible/playbooks/byo/config.yml could not be found

Instead I had to run this:
vagrant ssh master -c 'ansible-playbook /home/vagrant/openshift-ansible/playbooks/deploy_cluster.yml'

Add instructions for setting up oc/kubectl

Since we get an out-of-the-box working OpenShift cluster, it would be nice to also get the kubeconfig / oc config needed to connect to it (without having to figure out the URL and certificate locations).
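Until such instructions exist, one possible approach (an assumption based on the standard OpenShift 3.x file layout, not something this project documents) is to copy the admin kubeconfig out of the master VM and point oc at it:

$ vagrant ssh master -c 'sudo cat /etc/origin/master/admin.kubeconfig' > admin.kubeconfig
$ KUBECONFIG=./admin.kubeconfig oc get nodes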

missing docker unix group

Hi,

I had an issue with a missing "docker" unix group.
I am on Windows 7 with VirtualBox 6.0.0.
The error message appears just after "Configuring Docker proxy".
The faulty command is: chown ... root:docker

I've made a quick fix by adding that group:
config.vm.provision "shell", inline: <<-SHELL
  echo 'Add missing docker unix group'
  groupadd docker
  /vagrant/all.sh #{OPENSHIFT_RELEASE}
SHELL

After that, the initial installation goes through without any issues.

Could not find csr for nodes

INSTALLER STATUS ***********************************************************************************
Initialization              : Complete (0:00:33)
Health Check                : Complete (0:00:06)
Node Bootstrap Preparation  : Complete (0:02:36)
etcd Install                : Complete (0:00:38)
Master Install              : Complete (0:03:42)
Master Additional Install   : Complete (0:00:41)
Node Join                   : In Progress (0:03:09)
	This phase can be restarted by running: playbooks/openshift-node/join.yml


Failure summary:


  1. Hosts:    master.example.com
     Play:     Approve any pending CSR requests from inventory nodes
     Task:     Approve node certificates when bootstrapping
     Message:  Could not find csr for nodes: node02.example.com, node01.example.com
Connection to 127.0.0.1 closed.

I got this error when installing; I've tried several times and got the same error each time.
Any ideas?

Thanks.
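One general OpenShift 3.11 troubleshooting step (not from the original thread) is to check for pending CSRs on the master and approve them manually before re-running the join phase, assuming oc on the master is already authenticated as a cluster admin:

$ vagrant ssh master -c 'oc get csr'
$ vagrant ssh master -c 'oc get csr -o name | xargs -r oc adm certificate approve'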

Wait for control plane pods to appear

As of June 16, the master control plane fails to start with the message:

Wait for control plane pods to appear ....

TASK [openshift_control_plane : Report control plane errors] *********************************************************************************************************************************
fatal: [master.example.com]: FAILED! => {"changed": false, "msg": "Control plane pods didn't come up"}                                                                                        
                                                                                                                                                                                              
NO MORE HOSTS LEFT ***************************************************************************************************************************************************************************
        to retry, use: --limit @/home/vagrant/openshift-ansible/playbooks/deploy_cluster.retry                                                                                                
                                                                                                                                                                                              
PLAY RECAP ***********************************************************************************************************************************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0                                                                                                                   
master.example.com         : ok=324  changed=149  unreachable=0    failed=1                                                                                                                   
node01.example.com         : ok=113  changed=60   unreachable=0    failed=0                                                                                                                   
node02.example.com         : ok=113  changed=60   unreachable=0    failed=0  

It appears that the root cause is the line

127.0.0.1      master.example.com      master

present in /etc/hosts, while etcd listens only on 192.168.150.101. With that line commented out, /etc/hosts looks like this:

#127.0.0.1      master.example.com      master
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

## vagrant-hostmanager-start
192.168.150.101 master.example.com
192.168.150.101 etcd.example.com
192.168.150.101 nfs.example.com
192.168.150.103 node02.example.com

192.168.150.102 node01.example.com
192.168.150.102 lb.example.com
## vagrant-hostmanager-end

Manually correcting the hosts file on the master node solves the issue.
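A minimal sketch of that manual fix, assuming the offending entry looks exactly like the line quoted above:

$ vagrant ssh master -c "sudo sed -i 's/^127\.0\.0\.1[[:space:]]\+master\.example\.com/#&/' /etc/hosts"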

Outdated configs

Hey!

First of all - thanks for your work!

Currently the config is outdated: Ansible 2.7 has come out, and it is not able to install OpenShift Origin at the moment (the Ansible version needs to be forced to a release before 2.7; see the sketch below).
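A minimal sketch of pinning Ansible below 2.7 inside the master VM (this assumes Ansible is installed there via pip; adjust accordingly if it comes from an RPM):

$ vagrant ssh master -c "sudo pip install 'ansible>=2.6,<2.7'"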

The second thing is that, after I used Ansible 2.6.5, the node labels in the Ansible hosts file turned out to be incorrect.
Here are the new labels, and the exact same problem: openshift/openshift-ansible#8327

The subdomain name in the Ansible hosts file was also incorrect.

Will you fix those issues? Do you still maintain this repository? I'm almost done with the fixes, and if you want them, I can create a pull request.

Thanks!
