
contrib's Introduction

Kubernetes Contrib


Do not add new projects to this repository. We eventually want to move all code in this repository to more appropriate repositories (see #762). Create a new repository in kubernetes-incubator instead (process).

Getting the Code

The code must be checked out as a subdirectory of k8s.io, and not github.com.

mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
# Replace "$YOUR_GITHUB_USERNAME" below with your github username
git clone https://github.com/$YOUR_GITHUB_USERNAME/contrib.git
cd contrib

Updating Godeps

Godeps in contrib/ has a different layout than in kubernetes/ proper. This is because contrib contains multiple tiny projects, each with their own dependencies. Each project in contrib/ has its own Godeps.json; for example, the Godeps.json for Ingress is Ingress/Godeps/Godeps.json. This means that godep commands like godep restore or godep test do not work in the root directory; run them from inside the subproject directory you want to test.

Prerequisites for updating Godeps

Since we vendor dependencies through /vendor instead of the old-style Godeps/_workspace, you either need a recent enough install of go and godep, or you need to set GO15VENDOREXPERIMENT=1. E.g.:

$ godep version
godep v74 (linux/amd64/go1.6.1)
$ go version
go version go1.6.1 linux/amd64
$ godep save ./...

This will automatically save dependencies to vendor/ instead of _workspace/. If you have an older version of go, you must run:

$ GO15VENDOREXPERIMENT=1 godep save ./...

If you have an older version of godep, you must update it:

$ go get github.com/tools/godep
$ cd $GOPATH/src/github.com/tools/godep
$ go build -o godep *.go

Updating Godeps

The most common dependency to update is obviously going to be Kubernetes proper. Updating Kubernetes and its dependencies in the Ingress subproject, for example, can be done as follows (the example assumes your Kubernetes repo is rooted at $GOPATH/src/github.com/kubernetes; s/github.com\/kubernetes/k8s.io/ as required):

cd $GOPATH/src/github.com/kubernetes/contrib/ingress
godep restore
go get -u github.com/kubernetes/kubernetes
cd $GOPATH/src/github.com/kubernetes/kubernetes
godep restore
cd $GOPATH/src/github.com/kubernetes/contrib/ingress
rm -rf Godeps
godep save ./...
git [add/remove] as needed
git commit

Other deps are similar, although if the dep you wish to update is included from kubernetes, we probably want to stay in sync using the above method. If the dep is not in kubernetes proper, something like the following should get you a nice clean result:

cd $GOPATH/src/github.com/kubernetes/contrib/ingress
godep restore
go get -u $SOME_DEP
rm -rf Godeps
godep save ./...
git [add/remove] as needed
git commit

Running all tests

To run all Go tests in all projects, run:

./hack/for-go-proj.sh test

Getting PRs Merged Into Contrib

In order for your PR to get merged, it must have both the lgtm AND approved labels. When you open a PR, the k8s-merge-bot will automatically assign a reviewer from the OWNERS files. Once assigned, the reviewer can comment /lgtm, which adds the lgtm label, or, if they have permission, they can add the label directly.

Each file modified in the PR will also need to be approved by an approver from its OWNERS file or an approver in a parent directory's OWNERS file. A file is approved when the approver comments /approve, and it is unapproved if an approver comments /approve cancel. When all files have been approved, the approved label will automatically be added by the k8s-merge-bot and the PR will be added to the submit-queue to be merged.
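
A hedged Go sketch of the parent-directory rule (the hard-coded map and function below are purely illustrative, not the merge-bot's actual implementation): a file can be approved by anyone listed in the OWNERS file of its own directory or of any directory above it.

package main

import (
    "fmt"
    "path/filepath"
)

// owners maps a directory to the approvers listed in its OWNERS file.
// In the real bot this would be parsed from the OWNERS files themselves.
var owners = map[string][]string{
    ".":       {"thockin"},
    "ingress": {"bprashanth"},
}

// approversFor walks from the file's directory up to the repository root,
// collecting everyone allowed to approve the file.
func approversFor(file string) []string {
    var result []string
    dir := filepath.Dir(file)
    for {
        result = append(result, owners[dir]...)
        if dir == "." {
            return result
        }
        dir = filepath.Dir(dir)
    }
}

func main() {
    // Either bprashanth (ingress/OWNERS) or thockin (root OWNERS) can /approve.
    fmt.Println(approversFor("ingress/controllers/gce/main.go"))
}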

contrib's People

Contributors

aledbf, apelisse, bprashanth, brendandburns, eparis, fgrzadkowski, foxish, freehan, gmarek, grodrigues3, ingvagabund, ixdy, jasonbrooks, k8s-ci-robot, lavalamp, maciekpytel, madhusudancs, mikedanese, mikespreitzer, mwielgus, piosz, q-lee, random-liu, roberthbailey, rutsky, spxtr, stephenrlouie, thockin, wojtek-t, xialonglee


contrib's Issues

provisioning hangs at TASK: [node | Get the node token values] w/ 2 or more nodes

When provisioning using Vagrantfile and default centos/7 box w/ NUM_NODES greater than 1, the process hangs at TASK: [node | Get the node token values].

The problem appears to be the same as reported in https://bugzilla.redhat.com/show_bug.cgi?id=1242682 and https://bugzilla.redhat.com/show_bug.cgi?id=1240613.

The following change addresses the issue for me:

diff --git a/ansible/vagrant/Vagrantfile b/ansible/vagrant/Vagrantfile
index f6505a4..7d40fa5 100644
--- a/ansible/vagrant/Vagrantfile
+++ b/ansible/vagrant/Vagrantfile
@@ -128,6 +128,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
         ansible.groups = groups
         ansible.playbook = "./vagrant-ansible.yml"
         ansible.limit = "all" #otherwise the metadata wont be there for ipv4?
+        ansible.raw_ssh_args = ['-o ControlMaster=no']
       end
     end

@@ -137,6 +138,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
       ansible.playbook = "../cluster.yml"
       ansible.limit = "all" #otherwise the metadata wont be there for ipv4?
       ansible.tags = ansible_tags
+      ansible.raw_ssh_args = ['-o ControlMaster=no']
     end
   end
 end

Run separate merge queues for separate branches

In particular, each release branch has its own e2e test validating that branch. Merges to a release branch shouldn't be blocked if e2e tests at HEAD are failing, and merges to HEAD shouldn't be blocked if tests against a release branch are failing.

We currently have 2 release branches (release-1.0 and release-1.1) that should have separate merge-bots configured.

/cc @lavalamp @eparis

Ansible: Add kubedash addon support

This is a feature request; feel free to close it if you don't think it is needed or desirable.

https://github.com/kubernetes/kubedash/blob/master/deploy/kube-config.yaml

[vagrant openstack] 'run_provisioner': Catched Error: Catched Error: uninitialized constant

Vagrant Installed Version: 1.7.4

Running the ansible Vagrantfile with the openstack provider, I hit the error below after doing vagrant up. A quick Google search suggested a workaround, but I'm not sure if that's acceptable. I'll submit a PR just in case it is.

/Users/jamesrawlings/.vagrant.d/gems/gems/vagrant-openstack-provider-0.7.0/lib/vagrant-openstack-provider/action/provision.rb:34:in `run_provisioner': Catched Error: Catched Error: uninitialized constant VagrantPlugins::Shell::Provisioner (NameError)
    from /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:95:in `call'
    from /opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:95:in `block in finalize_action'    

Workaround came from..
ggiamarchi/vagrant-openstack-provider#240 (comment)

F5 LoadBalancer

I'm looking to see what the effort would be to write an F5 integration. Currently my enterprise uses RHEL Atomic hosts with an F5 load balancer. We can access the F5 API to programmatically create virtual servers.

On AWS, I can specify the "LoadBalancer" service type, and it auto-creates and configures the ELB. I'm looking for a way to do exactly the same thing. I am 100% cool with doing the effort, just looking for some strategy around how, where, and integration.

Is this pluggable in a way where I could mirror how it works on AWS, or do I need to change approaches? It's probably important to be pluggable as I'm not building k8s myself, rather, using the RHEL packages from Atomic upgrades.

Auto-label long-latency PRs

Proposals:

  • New files in docs/proposals

Major API changes:

Minor API changes:

  • Changes to pkg/api/types.go or pkg/expapi/types.go

Will be useful if/when we start tracking PR latency

[submit queue] only auto-merge when test jobs pass consistently

#112 refers

We should only merge PRs if Jenkins e2e tests are stable across multiple runs. If I'm reading the code correctly:

https://github.com/kubernetes/contrib/blob/023e2c05df5899bf9734b0d6d21fad0b0617fb45/submit-queue/e2e.go#L93

it deems a single successful test run per job to constitute a "stable" job. In theory, a single flaky success could cause all pending PRs to auto-merge, which is bad.

In practice, we have multiple test jobs being checked, so if all of them become flaky, their intermittent successes would all have to line up for PRs to merge (which is statistically unlikely). But even so, I'd suggest adding a "number of consecutive successes required" config value per job. Faster, more reliable test jobs (e.g. e2e-gce-parallel, at 20 min and 90%+ reliability) might require, say, 4 consecutive successes, while slower jobs (e.g. scalability, at multiple hours per run) might require fewer.
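
To illustrate, a minimal Go sketch (the types, fields, and thresholds below are made up for illustration, not the submit-queue's actual code) that only treats a job as stable after a configurable number of consecutive successes:

package main

import "fmt"

// jobHistory records the most recent run results for a Jenkins job, newest
// first. requiredSuccesses is the per-job threshold suggested above.
type jobHistory struct {
    results           []bool // true == run passed
    requiredSuccesses int
}

// stable returns true only if the last requiredSuccesses runs all passed,
// so a single lucky success can no longer unblock the merge queue.
func (j jobHistory) stable() bool {
    if len(j.results) < j.requiredSuccesses {
        return false
    }
    for _, passed := range j.results[:j.requiredSuccesses] {
        if !passed {
            return false
        }
    }
    return true
}

func main() {
    parallel := jobHistory{results: []bool{true, true, true, true}, requiredSuccesses: 4}
    scalability := jobHistory{results: []bool{true, false, true}, requiredSuccesses: 2}
    fmt.Println(parallel.stable(), scalability.stable()) // true false
}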

Re-enable gce-autoscaling tests for merge-bit

Because of its flakiness, the gce-autoscaling project was removed from the list of Jenkins projects that the merge-bot checks (the queue of waiting PRs is growing). The failures need to be investigated and the project added back to that list soon.

Ansible should use configuration and script files from local system

Right now we use get_url to obtain most of the addon and configuration files from GitHub.

Example:

- name: Make sure the system services namespace exists
  get_url:
    url=https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/cluster/saltbase/salt/kube-addons/namespace.yaml
    dest="{{ kube_config_dir }}/addons/"
    force=yes

We should have a mechanism to use files from the local system. Since the contrib folder is now outside the main repository, this is a little more difficult because we cannot assume file locations. It would be nice to keep the logic in the setup.sh script to a minimum, but that might be the best solution.

I propose we keep the downloading versions of the files and add an option to use local files. We already do this with packageManager and localBuild for the actual kubernetes binaries; using the same technique we could handle the config files.

Logging the bug to track ideas, suggestions and progress.

Update ansible directions (1) for mac people (2) for openstack urls

I had to make some changes to run this from my mac with a local repo build.

First, we may want to show how to push local binaries - it's tricky to get the right path, because you need to use the dockerized build and specify amd64, IIRC.

You need to use run.sh to build dockerized linux binaries. Then, you have to do something like this...

--- a/ansible/roles/master/defaults/main.yml
+++ b/ansible/roles/master/defaults/main.yml
@@ -1,3 +1,3 @@
 kube_master_insecure_port: 8080

-localBuildOutput: ../../_output/local
\ No newline at end of file
+localBuildOutput: ~/Development/kubernetes/_output/dockerized/bin/linux

and

--- a/ansible/roles/master/tasks/localBuildInstall.yml
+++ b/ansible/roles/master/tasks/localBuildInstall.yml
 ---
 - name: Copy master binaries
   copy:
-    src: "{{ localBuildOutput }}/go/bin/{{ item }}"
+    src: "{{ localBuildOutput }}/amd64/{{ item }}"
     dest: /usr/bin/
     mode: 0755
   with_items:

Also, I think we might want to update the Vagrant example file for openstack.

And finally, this is what my main.yml looks like. Mainly, the os_auth_url was different, so we should update the example Vagrant file:

diff --git a/ansible/roles/node/defaults/main.yml b/ansible/roles/node/defaults/main.yml
-localBuildOutput: ../../_output/local
+localBuildOutput: ~/Development/kubernetes/_output/dockerized/bin/linux
 os_username: jvyas
 os_password: 123123123123
 os_tenant: "ENG CTO Office"
 os_auth_url: "http://os1-public.osop.rhcloud.com:5000/v2.0/tokens"
 os_region_name: "OS1Public"
 os_ssh_key_name: "JPeerindex"
 os_flavor: "m1.small"
 os_image: "_OS1_Fedora-Cloud-Base-22-20150521.x86_64.qcow2"
 os_security_groups:
   - "default"
 os_floating_ip_pool: "os1_public"

Tidy up l7 controller

Nodes have no External IP

Hi,

I am trying to run the example, but I get stuck in "Expose services"; none of my nodes have any external IP. The following command gives me nothing:

kubectl get nodes -o json | grep -i externalip -A 1

How can I activate the external IP for the role=loadbalancer nodes?

Thank you

submit queue should ensure CI results are fresh

The submit queue automatically re-runs Jenkins before merging, but it's possible for the Shippable and Travis results to be very stale. Because Jenkins doesn't run all of the same verifications and checks currently, it's possible for things to break.

Motivating example: kubernetes/kubernetes#15608 was auto-merged, but it immediately broke the build, since some new docs had been added since its last CI run.

The GitHub API indicates when commit statuses were updated (see https://github.com/google/go-github/blob/master/github/repos_statuses.go#L34); it should hopefully be fairly straightforward to require these to be no more than N days old or retrigger CI if they are stale.

@brendandburns @eparis
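
A rough Go sketch of the staleness check (the struct below is a stand-in rather than the go-github type, and the age limit is arbitrary): a successful status only counts if it was updated recently; otherwise CI should be re-triggered.

package main

import (
    "fmt"
    "time"
)

// commitStatus is a stand-in for the data the GitHub status API returns;
// only the state and update time matter for the freshness check.
type commitStatus struct {
    Context   string
    State     string
    UpdatedAt time.Time
}

// fresh reports whether a successful status is recent enough to trust.
// Stale results would instead trigger a CI re-run before merging.
func fresh(s commitStatus, maxAge time.Duration) bool {
    return s.State == "success" && time.Since(s.UpdatedAt) <= maxAge
}

func main() {
    travis := commitStatus{
        Context:   "continuous-integration/travis-ci",
        State:     "success",
        UpdatedAt: time.Now().Add(-10 * 24 * time.Hour),
    }
    // With a 2-day limit this 10-day-old result is stale and should be re-run.
    fmt.Println(fresh(travis, 2*24*time.Hour)) // false
}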

setup.sh fails for various reasons

Hello all,

I'm behind a proxy, so the first thing I had to solve was that contrib/ansible/roles/kubernetes/files/make-ca-cert.sh was not honoring the proxy environment variables or a curlrc file. I ended up adding --proxy my.proxy.here:3128 to the curl call directly. After that, the script had a problem with ./easyrsa --batch "--req-cn=${cert_ip}@$(date +%s)" build-ca nopass, as ${cert_ip}@$(date +%s) would resolve to a string longer than 64 characters. After I worked around that, the ansible run went on up until

TASK: [master | Copy master binaries] *****************************************

which in turn failed due to:

fatal: [kube-master.WHOOPS] => input file not found at \ 
/home/WHOOPS/kubernetes/contrib/ansible/roles/_output/local/go/bin/kube-apiserver \
or /home/WHOOPS/kubernetes/_output/local/go/bin/kube-apiserver

And because I'm new to ansible, I was wondering: I can see this in roles/master/tasks/localBuildInstall.yml

- name: Copy master binaries
  copy:
    src: ../../_output/local/go/bin/{{ item }}
    dest: /usr/bin/
    mode: 755
  with_items:
    - kube-apiserver
    - kube-scheduler
    - kube-controller-manager
    - kubectl
  notify: restart daemons

which makes sense, but nothing prior to that says it will build anything. So I'm curious: when is building the mentioned binaries in that _output directory supposed to happen?

So I was following

http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_ansible_config.html

which already had the problem of not being up to date (it doesn't state that the ansible stuff has moved), and that has so far led nowhere.

Is there a cluster.yml switch to enable that build?

Service loadbalancer does not expose services in kube-system namespace

The documentation for the service loadbalancer indicates that the kube-ui can be reached via the default configuration; however, the loadbalancer does not query the kube-system namespace, so nothing is actually exposed. Please either change the documentation to reflect this or allow a way to query multiple namespaces, e.g. kube-system, default, etc.

ansible: failed load flannel config file if use IP in inventory

TASK: [flannel | Load the flannel config file into etcd] **********************
failed: [10.66.15.133 -> 10.66.15.133] => {"changed": true, "cmd": "/usr/bin/etcdctl --no-sync --peers=http://10.66.15.133:2379 set /cluster.local/network/config < /tmp/flannel-conf.json", "delta": "0:00:00.214135", "end": "2015-11-15 01:33:17.547932", "rc": 4, "start": "2015-11-15 01:33:17.333797", "warnings": []}
stderr: Error: 501: All the given peers are not reachable (failed to propose on members [http://10.66.15.133:2379] twice [last error: Put http://10.66.15.133:2379/v2/keys/cluster.local/network/config: dial tcp 10.66.15.133:2379: connection refused]) [0]

FATAL: all hosts have already failed -- aborting

Ansible install

I'm trying to run through the ansible setup and keep getting to the following spot, where it just hangs: Load the flannel config file into etcd

I'm running Fedora Server 22 from a Fedora client. Are there any additional logs I can see or things I can dig into to see why it's hanging?

[Possible bug] tcpServices binds to new map in newLoadBalancerController

@Beeps:

In newLoadBalancerController, lines https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L391-L395

Seems a little off to me, i.e. line 391...

 lbc := loadBalancerController{ 
...
    tcpServices: map[string]int{},
}

In the above, shouldn't the loadBalancerController be binding tcpServices to the value of cfg.tcpServices ? The code just below seems to expect that tcpServices actually has some data in it...

strings.Split(*tcpServices, ",") {

I'm in the process of extracting more info from reading the code.

If I've missed something obvious, apologies in advance :)..
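
For illustration, here is a rough Go sketch (the function name and flag format are approximations, not the actual service_loadbalancer code) of parsing the comma-separated flag into the map before the controller is constructed, which is what the code below line 391 appears to expect:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parseTCPServices turns a flag value like "mysql:3306,redis:6379" into the
// service->port map that the controller seems to expect to be populated.
func parseTCPServices(flagValue string) map[string]int {
    services := map[string]int{}
    for _, entry := range strings.Split(flagValue, ",") {
        parts := strings.Split(entry, ":")
        if len(parts) != 2 {
            continue
        }
        port, err := strconv.Atoi(parts[1])
        if err != nil {
            continue
        }
        services[parts[0]] = port
    }
    return services
}

func main() {
    // The controller would then be constructed with this map instead of map[string]int{}.
    fmt.Println(parseTCPServices("mysql:3306,redis:6379"))
}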

[vagrant openstack] error inventory_hostname in groups['nodes'] or inventory_hostname in groups['gateways']

I'm trying out the openstack vagrant steps, which seem to be working with only two small changes on my side (which I'll raise PRs for), but they then fail with the error below.

TASK: [opencontrail | Interface configuration file (physical)] ****************
fatal: [kube-node-2] => One or more undefined variables: 'opencontrail_interface' is undefined
fatal: [kube-node-1] => One or more undefined variables: 'opencontrail_interface' is undefined

FATAL: all hosts have already failed -- aborting

But as a total guess, I uncommented https://github.com/kubernetes/contrib/blob/master/ansible/group_vars/all.yml#L50 and set it to eth0, which was an available interface - no idea if this was right at all, but that error seemed to go away the next time I ran vagrant up.

So now I'm left with the errors below, which I guess are related, and I'm wondering if anyone has any suggestions.

TASK: [opencontrail | build tag] **********************************************
fatal: [kube-master] => error while evaluating conditional: inventory_hostname in groups['nodes'] or inventory_hostname in groups['gateways']
ok: [kube-node-1]
ok: [kube-node-2]

TASK: [opencontrail | Unpack vrouter tarball] *********************************
changed: [kube-node-1]
changed: [kube-node-2]

TASK: [opencontrail | Depmod] *************************************************
changed: [kube-node-1]
changed: [kube-node-2]

TASK: [opencontrail | Reduce memory utilization of vrouter] *******************
skipping: [kube-node-1]
skipping: [kube-node-2]

TASK: [opencontrail | Interface up/down scripts] ******************************
changed: [kube-node-2] => (item=ifup-vhost)
changed: [kube-node-1] => (item=ifup-vhost)
changed: [kube-node-2] => (item=ifdown-vhost)
changed: [kube-node-1] => (item=ifdown-vhost)

TASK: [opencontrail | Interface configuration file (physical)] ****************
changed: [kube-node-2]
changed: [kube-node-1]

TASK: [opencontrail | Interface configuration file (vhost0)] ******************
fatal: [kube-node-2] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ipaddr'", 'failed': True}
fatal: [kube-node-1] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ipaddr'", 'failed': True}
fatal: [kube-node-2] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ipaddr'", 'failed': True}
fatal: [kube-node-1] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ipaddr'", 'failed': True}

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/Users/jamesrawlings/cluster.retry

kube-master                : ok=92   changed=40   unreachable=1    failed=0
kube-node-1                : ok=63   changed=25   unreachable=1    failed=0
kube-node-2                : ok=63   changed=25   unreachable=1    failed=0

An unknown error happened in Vagrant OpenStack provider

To easily debug what happened, we recommend to set the environment
variable VAGRANT_OPENSTACK_LOG to debug

    $ export VAGRANT_OPENSTACK_LOG=debug

If doing this does not help fixing your issue, there may be a bug
in the provider. Please submit an issue on Github at
https://github.com/ggiamarchi/vagrant-openstack-provider
with the stracktrace and the logs.

We are looking for feedback, so feel free to ask questions or
describe features you would like to see in this provider.
Catched Error: Catched Error: Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Service lb doesn't play well with host header rules

Building current servicelb and pushing it out to a cluster with the following services: https://gist.github.com/bprashanth/a1eaa27ef28f2c7fee0f using the rc.yaml in the servicelb directory, gives me the following errors in kubectl logs:

[ALERT] 273/173127 (28) : parsing [/etc/haproxy/haproxy.cfg:87] : error detected while parsing switching rule : no such ACL : 'host_acl_kubernetes:443'.
[ALERT] 273/173127 (28) : parsing [/etc/haproxy/haproxy.cfg:92] : error detected while parsing switching rule : no such ACL : 'host_acl_echoheadersdefault'.
[ALERT] 273/173127 (28) : parsing [/etc/haproxy/haproxy.cfg:97] : error detected while parsing switching rule : no such ACL : 'host_acl_echoheadersx'.

And I can't reach http://lbip/echoheadersx, but I can reach the service through http://lbip:nodePort-of-echoheadersx. Guessing this is because of the latest host headers change.

@aledbf

Cut a new service_loadbalancer image

Several useful fixes have gone in/are in flight/will go in soon:

  • reaping zombie haproxy
  • logging to syslog
  • dry run mode talking to localhost kubectl proxy
  • using alpine linux
  • url based routing (maybe?)
  • liveness probe chk directly in haproxy
  • increasing relist period to 10m
  • proper flag parsing

Need to test/push a new image to google_containers

Create special test labels that add test-suites as a gatekeeper for merges

We have the gce-e2e suite that acts as a gatekeeper for PR merges.

We should have the ability to add additional suite(s) to act as a gatekeeper for a PR merge.
This can be done by leveraging github labels.

Currently, we are in a mode where many tests are kicked out of the gce-e2e suite because of slowness, etc.

With a custom label-driven suite gatekeeper, we could require certain PRs to run a specific test suite and have it act as a gatekeeper for the merge.

@quinton-hoole
@brendandburns
@ixdy
@thockin

checkBuilds() returns true even when builds are failing?

https://github.com/kubernetes/contrib/blob/023e2c05df5899bf9734b0d6d21fad0b0617fb45/submit-queue/e2e.go#L93

As far as I can tell by looking at this code, this function returns true even if stable==false for some or all builds.

In theory this will cause PRs to be merged even if Jenkins e2e tests are failing.

@gmarek has reported this to be the case and @lavalamp has therefore disabled the merge bot.

Need to verify that this theoretical bug is in fact manifesting. The fix is pretty straightforward.

@brendandburns FYI.
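
A minimal Go sketch of that fix (the function and parameter names are assumptions, not the submit-queue code verbatim): report success only when every checked job is stable.

package main

import "fmt"

// allBuildsStable returns true only if every required Jenkins job is stable,
// instead of returning true regardless of individual job results.
func allBuildsStable(stableByJob map[string]bool) bool {
    for job, stable := range stableByJob {
        if !stable {
            fmt.Printf("job %s is not stable, blocking merges\n", job)
            return false
        }
    }
    return true
}

func main() {
    fmt.Println(allBuildsStable(map[string]bool{
        "kubernetes-e2e-gce":          true,
        "kubernetes-e2e-gce-parallel": false, // one failing job must block merges
    }))
}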

Dynamic weights of backends

Create a sidecar container that runs in the same pod as your backend and reports its QPS to a service annotation. The servicelb will look up these service annotations and update the weights of the backends by writing to the haproxy socket (or #120). The goal here is to send less traffic to backends that have historically been slow to respond (so not exactly leastconns). I expect this to help with noisy neighbors.

@aledbf @jayunit100
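
As a rough sketch of the socket write in Go (the socket path and backend/server names are illustrative; "set weight <backend>/<server>" is haproxy's standard runtime command):

package main

import (
    "fmt"
    "net"
)

// setWeight sends haproxy's runtime "set weight <backend>/<server> <weight>"
// command over the admin socket, which is how the servicelb could adjust a
// backend's weight based on the QPS reported in the service annotation.
func setWeight(socketPath, backend, server string, weight int) error {
    conn, err := net.Dial("unix", socketPath)
    if err != nil {
        return err
    }
    defer conn.Close()
    _, err = fmt.Fprintf(conn, "set weight %s/%s %d\n", backend, server, weight)
    return err
}

func main() {
    // Example: halve the weight of a historically slow backend.
    if err := setWeight("/var/run/haproxy.sock", "echoheaders", "pod-1", 50); err != nil {
        fmt.Println("haproxy socket write failed:", err)
    }
}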

Crash if a reviewer isn't found

E0901 16:56:30.766474      98 blunderbuss.go:99] No owners found for PR 13025
panic: runtime error: invalid memory address or nil pointer dereference

k8s.io/contrib/mungegithub/pulls.(*BlunderbussMunger).MungePullRequest(0x4c208042130, 0x4c2080362c0, 0x8c15f0, 0xa, 0x8c15f0, 0xa, 0x4c208d05458, 0x4c2098454a0, 0x4c2082c0d00, 0x1, ...)
    /usr/local/google/home/bburns/brendandburns/src/k8s.io/contrib/mungegithub/pulls/blunderbuss.go:85 +0x129
k8s.io/contrib/mungegithub/pulls.mungePullRequestList(0x4c208d02000, 0x64, 0x8d, 0x4c2080362c0, 0x8c15f0, 0xa, 0x8c15f0, 0xa, 0x4c2080ced50, 0x3, ...)
    /usr/local/google/home/bburns/brendandburns/src/k8s.io/contrib/mungegithub/pulls/pulls.go:120 +0x6b4
k8s.io/contrib/mungegithub/pulls.MungePullRequests(0x4c2080362c0, 0x7ffe6e7e8def, 0x24, 0x8c15f0, 0xa, 0x8c15f0, 0xa, 0x0, 0x0, 0x0, ...)
    /usr/local/google/home/bburns/brendandburns/src/k8s.io/contrib/mungegithub/pulls/pulls.go:84 +0x3c6
main.main()
    /usr/local/google/home/bburns/brendandburns/src/k8s.io/contrib/mungegithub/mungegithub.go:68 +0x521

goroutine 5 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0xadb780)
    /usr/local/google/home/bburns/brendandburns/src/k8s.io/contrib/Godeps/_workspace/src/github.com/golang/glog/glog.go:879 +0x78
created by github.com/golang/glog.init·1
    /usr/local/google/home/bburns/brendandburns/src/k8s.io/contrib/Godeps/_workspace/src/github.com/golang/glog/glog.go:410 +0x2a7

goroutine 17 [runnable]:
net/http.(*persistConn).readLoop(0x4c208036370)
    /usr/lib/google-golang/src/net/http/transport.go:928 +0x9ce
created by net/http.(*Transport).dialConn
    /usr/lib/google-golang/src/net/http/transport.go:660 +0xc9f
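
A minimal guard against the empty-owners case, sketched in Go (illustrative only, not the actual blunderbuss.go code): skip the PR instead of dereferencing a missing assignee.

package main

import "fmt"

// pickReviewer returns a reviewer for the PR, or false when the OWNERS lookup
// came back empty, so the munger can skip the PR instead of panicking on a
// nil assignee.
func pickReviewer(prNumber int, owners []string) (string, bool) {
    if len(owners) == 0 {
        fmt.Printf("No owners found for PR %d, skipping\n", prNumber)
        return "", false
    }
    return owners[0], true
}

func main() {
    if reviewer, ok := pickReviewer(13025, nil); ok {
        fmt.Println("assigning", reviewer)
    }
}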

Mergebot is drunk on issue pages

From the logs right now.

I1006 20:33:38.846890       1 github.go:668] Fetching page 16510 of issues
I1006 20:33:40.737390       1 github.go:668] Fetching page 16511 of issues
I1006 20:33:42.847761       1 github.go:668] Fetching page 16512 of issues
I1006 20:33:44.840999       1 github.go:668] Fetching page 16513 of issues
I1006 20:33:46.855748       1 github.go:668] Fetching page 16514 of issues
I1006 20:33:48.763123       1 github.go:668] Fetching page 16515 of issues
I1006 20:33:50.816153       1 github.go:668] Fetching page 16516 of issues
I1006 20:33:52.949485       1 github.go:668] Fetching page 16517 of issues
I1006 20:33:54.800915       1 github.go:668] Fetching page 16518 of issues
I1006 20:33:56.796266       1 github.go:668] Fetching page 16519 of issues
I1006 20:33:58.795852       1 github.go:668] Fetching page 16520 of issues
I1006 20:34:00.960203       1 github.go:668] Fetching page 16521 of issues
I1006 20:34:02.973533       1 github.go:668] Fetching page 16522 of issues
I1006 20:34:05.166156       1 github.go:668] Fetching page 16523 of issues
I1006 20:34:06.867732       1 github.go:668] Fetching page 16524 of issues
I1006 20:34:08.911436       1 github.go:668] Fetching page 16525 of issues
I1006 20:34:10.828744       1 github.go:668] Fetching page 16526 of issues
I1006 20:34:12.900781       1 github.go:668] Fetching page 16527 of issues
I1006 20:34:14.876124       1 github.go:668] Fetching page 16528 of issues
I1006 20:34:16.764546       1 github.go:668] Fetching page 16529 of issues
I1006 20:34:18.790262       1 github.go:668] Fetching page 16530 of issues
I1006 20:34:20.993383       1 github.go:668] Fetching page 16531 of issues
I1006 20:34:22.817142       1 github.go:668] Fetching page 16532 of issues
I1006 20:34:24.834514       1 github.go:668] Fetching page 16533 of issues
I1006 20:34:26.800389       1 github.go:668] Fetching page 16534 of issues
I1006 20:34:28.815217       1 github.go:668] Fetching page 16535 of issues

service-loadbalancer goes into an infinite loop of Restart container

I followed the steps described in the README.

I ran into this issue:

core@kube-node-01 ~ $ docker logs c3e318d07f32
I0827 21:43:17.144454       1 service_loadbalancer.go:383] Creating new loadbalancer: {Name:haproxy ReloadCmd:./haproxy_reload Config:/etc/haproxy/haproxy.cfg Template:template.cfg Algorithm:roundrobin}
I0827 21:43:17.144510       1 service_loadbalancer.go:426] All tcp/https services will be ignored.
F0827 21:43:17.145153       1 service_loadbalancer.go:436] Failed to create client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

I was using the same command, kubectl create -f ./rc.yaml. The state of the pods shows Running, but it goes into an infinite loop of restarting the container.

[vagrant openstack] docker not running before task opencontrail | Build docker container

During vagrant up I get an error because docker isn't running in the image during the opencontrail | Build docker container task. I've worked around it by adding service: name=docker state=started to https://github.com/kubernetes/contrib/blob/master/ansible/roles/docker/tasks/main.yml#L49, but I'm pretty sure my lack of understanding here means that's not the right answer.

Using fedora 22 cloud base image from https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2

TASK: [opencontrail | Build docker container] *********************************
skipping: [kube-node-1]
skipping: [kube-node-2]
failed: [kube-master] => {"changed": true, "cmd": ["docker", "build", "-t", "opencontrail/kmod_fedora22-4.0.4-301.fc22.x86_64", "fedora22-4.0.4-301.fc22.x86_64"], "delta": "0:00:00.023080", "end": "2015-10-20 18:01:26.362457", "rc": 1, "start": "2015-10-20 18:01:26.339377", "warnings": []}
stderr: Post http:///var/run/docker.sock/v1.20/build?cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&memory=0&memswap=0&rm=1&t=opencontrail%2Fkmod_fedora22-4.0.4-301.fc22.x86_64&ulimits=null: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?

Debugging problems with service-loadbalancer

Hi -

I've built the servicelb container and launched it using the rc.yml example. I can connect to the stats port on 1936, and haproxy appears to be listening OK on 80. I've launched a test container to get it picked up by the loadbalancer, but when I curl -v http://10.100.41.6/test I get 'Not Found'. If I query the logs for the loadbalancer I get the messages below. From here, I'm not sure where to go to further diagnose why it's not working, but I'm guessing that the launched test container has not been detected, so haproxy.cfg has not been automatically updated (if that's correct?) - any help on how to get to the bottom of this would be greatly appreciated.

Thanks,
Piers.

core@master ~ $ kubectl logs service-loadbalancer-1t2qm

I1110 05:58:36.959150 1 service_loadbalancer.go:550] Creating new loadbalancer: {Name:haproxy ReloadCmd:./haproxy_reload Config:/etc/haproxy/haproxy.cfg Template:template.cfg Algorithm: startSyslog:false lbDefAlgorithm:roundrobin}
E1110 05:58:36.959519 1 service_loadbalancer.go:241] Get : unsupported protocol scheme ""
I1110 05:58:36.988153 1 service_loadbalancer.go:294] haproxy -- cat: can't open '/var/run/haproxy.pid': No such file or directory
