
community-catalog's Introduction

Rancher Community Catalog

This catalog provides templates created by the community, and they are not maintained or supported by Rancher Labs.

License

Copyright (c) 2014-2017 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

community-catalog's People

Contributors

accoleon, ageekymonk, anantpatil, chrisurwin, cloudnautique, danielfree, devkyles, disaster37, dwern-dlb, ellerbrock, flaccid, galal-hussein, gothrek22, ibuildthecloud, janeczku, joshuacox, jrouaix, jsilberm, kolaente, monotek, mschneider82, ralfyang, rawmind0, rockaut, rucknar, superseb, vincent99, wborn, wtayyeb, zicklag


community-catalog's Issues

Janitor Catalog: Volume Parameter Request

Could you add a parameter for the Docker volume that gets mounted? I was going to make my own template, but figured I might not be the only Community catalog user who has Docker installed in a different location.

Thanks for the awesome entry!
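
For illustration, a rough sketch of how such a parameter could be wired up in the catalog template, assuming a hypothetical question variable named DOCKER_LIB_DIR (not an existing option of the Janitor entry):

# docker-compose.yml (excerpt) -- ${DOCKER_LIB_DIR} is a hypothetical variable
cleanup:
  image: meltwater/docker-cleanup:1.6.0
  volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - ${DOCKER_LIB_DIR}:/var/lib/docker

# rancher-compose.yml (excerpt) -- the question that feeds the variable above
.catalog:
  questions:
  - variable: DOCKER_LIB_DIR
    label: Docker data directory on the host
    description: Host path where Docker stores its data
    type: string
    default: /var/lib/docker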

Request: Non safe consul cluster for dev/test

When developing and experimenting I find myself setting up and tearing down things a lot.
Would it be possible to have a non-secure Consul cluster template that one can just spin up without any certificates, purely for quick development and testing?

Currently there is quite a bit of overhead to get things up and running with the Consul cluster template.

Just my 2 cents.

[drone-workers] distributed cache

I'm trying to set up a distributed cache among the Drone worker nodes.
I created a GlusterFS storage pool sharing /var/lib/drone/cache across all drone-agent containers, but that doesn't work...
/cc @cloudnautique It would be great if the rancher/drone-config Dockerfile could be made public, by the way! And cheers for the awesome setup (up and running in under 5 minutes, insane)

Sentry 8.5.0

When you have multiple hosts in an environment and you start Sentry with defaults, it creates a sentry, redis and postgres instance for every host in the environment.

Every container in docker-compose.yml has:

labels:
    io.rancher.scheduler.global: 'true'

This means that if you put a load balancer in front of it, you hit N different services, none of which are in sync.
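
If the goal is a single shared instance per service instead, one possible adjustment (just a sketch, not the template's supported configuration) is to remove the global scheduler label and control the count via scale in rancher-compose.yml:

# rancher-compose.yml (excerpt) -- assumes the io.rancher.scheduler.global: 'true'
# label has been removed from each service in docker-compose.yml, so the
# scheduler no longer starts one container per host
sentry:
  scale: 1
redis:
  scale: 1
postgres:
  scale: 1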

Error in Owncloud catalog template

When creating the ownCloud template, it gets stuck loading, giving the following message:

 Activating (Service 'owncloud' configuration key 'environment' contains an invalid type, it should be an array or object)  

Rancher Version
build-master:v0.20.0

Docker Version
1.11.2
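
For reference, docker-compose accepts environment either as a mapping or as a list of KEY=value strings; any other shape triggers exactly this error. A minimal sketch with a placeholder variable (not an actual option of the ownCloud template):

# Valid: environment as a mapping
owncloud:
  environment:
    OWNCLOUD_EXAMPLE_VAR: value   # placeholder variable, for illustration only

# Also valid: environment as a list of KEY=value strings
owncloud:
  environment:
  - OWNCLOUD_EXAMPLE_VAR=value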

Unable to get secrets-bridge working

@cloudnautique I am trying to get the secrets-bridge to work as you showed in the April 2016 online meetup, but am having trouble. I first started by deploying the server, the agent and a test Alpine container on a single local VM. Then I tried deploying to a two-environment setup (one host per environment) with the same results.

Logs for bridge server:
5/17/2016 8:51:56 AMtime="2016-05-17T12:51:56Z" level=info msg="Listening on port: 8181"

Logs for bridge agent:

5/17/2016 8:55:19 AMtime="2016-05-17T12:55:19Z" level=info msg="Entering event listening Loop"
5/17/2016 8:56:18 AMtime="2016-05-17T12:56:18Z" level=info msg="Received action: start, from container: c5201eb621f1d707202a9603ab9fc35d37fef7fcb78b8e1355c00570efe80813"

Logs for vault server:

5/17/2016 7:45:46 AM==> WARNING: Dev mode is enabled!
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AMIn this mode, Vault is completely in-memory and unsealed.
5/17/2016 7:45:46 AMVault is configured to only have a single unseal key. The root
5/17/2016 7:45:46 AMtoken has already been authenticated with the CLI, so you can
5/17/2016 7:45:46 AMimmediately begin using the Vault CLI.
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AMThe only step you need to take is to set the following
5/17/2016 7:45:46 AMenvironment variables:
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: security barrier initialized (shares: 1, threshold 1)
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: post-unseal setup starting
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: mounted backend of type generic at secret/
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: mounted backend of type cubbyhole at cubbyhole/
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: mounted backend of type system at sys/
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] rollback: starting rollback manager
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: post-unseal setup complete
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: root token generated
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: pre-seal teardown starting
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] rollback: stopping rollback manager
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: pre-seal teardown complete
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: vault is unsealed
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: post-unseal setup starting
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: mounted backend of type generic at secret/
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: mounted backend of type cubbyhole at cubbyhole/
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: mounted backend of type system at sys/
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] rollback: starting rollback manager
5/17/2016 7:45:46 AM2016/05/17 11:45:46 [INFO] core: post-unseal setup complete
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AM    export VAULT_ADDR='http://0.0.0.0:8200'
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AMThe unseal key and root token are reproduced below in case you
5/17/2016 7:45:46 AMwant to seal/unseal the Vault or play with authentication.
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AMUnseal Key: 725a8914f9b2420cf3976e4a6843b8bee7168e98f15f4c760a3d1cb627cfde80
5/17/2016 7:45:46 AMRoot Token: abracadabra
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AM==> Vault server configuration:
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AM         Log Level: info
5/17/2016 7:45:46 AM             Mlock: supported: true, enabled: false
5/17/2016 7:45:46 AM           Backend: inmem
5/17/2016 7:45:46 AM        Listener 1: tcp (addr: "0.0.0.0:8200", tls: "disabled")
5/17/2016 7:45:46 AM           Version: Vault v0.5.2
5/17/2016 7:45:46 AM
5/17/2016 7:45:46 AM==> Vault server started! Log data will stream in below:
5/17/2016 7:45:46 AM

Is there any way to turn on a "debug" or "verbose" mode to get more information on what is happening? Also, can you please take a look at my settings below to see if I'm off the mark?

Thanks...Dan

Settings Used

Rancher is configured with two environments named "Rancher" and "Test". secrets-bridge-server is deployed to Rancher and secrets-bridge-agent is deployed to Test. A simple Alpine image that runs an interactive shell is started to test retrieving the temporary secret in the /tmp/secrets folder.

Alpine Compose File:

alpine:
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.container.dns: 'true'
  tty: true
  entrypoint:
  - /bin/sh
  image: alpine:3.3

Vault server installed and started on the same host used in the Rancher environment with:
vault server -dev -dev-root-token-id=abracadabra -dev-listen-address=0.0.0.0:8200

Vault configured using script:

export VAULT_ADDR=http://xxx.xxx.xxx.xxx:8200
export VAULT_ADDR=$VAULT_ADDR
export ROOT_TOKEN=abracadabra

# Create policies for roles and applications
vault policy-write grantor-Rancher ./policies/grantor-Rancher.hcl
vault policy-write test-alpine ./policies/test-alpine.hcl

# Create role and assign policies
curl -s -X POST -H "X-Vault-Token: ${ROOT_TOKEN}" -d '{"allowed_policies": "default,grantor-Rancher,test-alpine"}' ${VAULT_ADDR}/v1/auth/token/roles/grantor-Rancher

# Assign policies to applications
vault write secret/secrets-bridge/Rancher/Test/alpine/alpine policies=test-alpine,default

# Create temporary grantor token (ttl is only 15 min)
TEMP_TOKEN=$(curl -s -H "X-Vault-Token: $ROOT_TOKEN" ${VAULT_ADDR}/v1/auth/token/create -d '{"policies": ["default"], "ttl": "24h", "num_uses": 30}'|jq -r '.auth.client_token')
PERM_TOKEN=$(curl -s -X POST -H "X-Vault-Token: $ROOT_TOKEN" ${VAULT_ADDR}/v1/auth/token/create/grantor-Rancher -d '{"policies": ["default", "grantor-Rancher", "test-alpine"], "ttl": "72h", "meta": {"configPath": "secret/secrets-bridge/Rancher"}}' | jq -r '.auth.client_token')
curl -X POST -H "X-Vault-Token: ${TEMP_TOKEN}" ${VAULT_ADDR}/v1/cubbyhole/Rancher -d "{\"permKey\": \"${PERM_TOKEN}\"}"
echo "${TEMP_TOKEN}"

Taiga template is not working

Rancher Version:
build-master:v0.20.0

Docker Version:
1.11.2

Catalog Template:
Taiga template

The Taiga template gives an error whenever I try to sign in or sign up:

(screenshot of the Taiga error)

The logs of the container:

postgresql:

6/24/2016 1:33:59 AMThe files belonging to this database system will be owned by user "postgres".
6/24/2016 1:33:59 AMThis user must also own the server process.
6/24/2016 1:33:59 AM
6/24/2016 1:33:59 AMThe database cluster will be initialized with locale "en_US.utf8".
6/24/2016 1:33:59 AMThe default database encoding has accordingly been set to "UTF8".
6/24/2016 1:33:59 AMThe default text search configuration will be set to "english".
6/24/2016 1:33:59 AM
6/24/2016 1:33:59 AMData page checksums are disabled.
6/24/2016 1:33:59 AM
6/24/2016 1:33:59 AMfixing permissions on existing directory /var/lib/postgresql/data ... ok
6/24/2016 1:33:59 AMcreating subdirectories ... ok
6/24/2016 1:33:59 AMselecting default max_connections ... 100
6/24/2016 1:33:59 AMselecting default shared_buffers ... 128MB
6/24/2016 1:33:59 AMselecting dynamic shared memory implementation ... posix
6/24/2016 1:34:00 AMcreating configuration files ... ok
6/24/2016 1:34:02 AMcreating template1 database in /var/lib/postgresql/data/base/1 ... ok
6/24/2016 1:34:02 AMinitializing pg_authid ... ok
6/24/2016 1:34:02 AMinitializing dependencies ... ok
6/24/2016 1:34:02 AMcreating system views ... ok
6/24/2016 1:34:03 AMloading system objects' descriptions ... ok
6/24/2016 1:34:03 AMcreating collations ... ok
6/24/2016 1:34:03 AMcreating conversions ... ok
6/24/2016 1:34:03 AMcreating dictionaries ... ok
6/24/2016 1:34:03 AMsetting privileges on built-in objects ... ok
6/24/2016 1:34:03 AMcreating information schema ... ok
6/24/2016 1:34:03 AMloading PL/pgSQL server-side language ... ok
6/24/2016 1:34:04 AMvacuuming database template1 ... ok
6/24/2016 1:34:04 AMcopying template1 to template0 ... ok
6/24/2016 1:34:04 AMcopying template1 to postgres ... ok
6/24/2016 1:34:04 AMsyncing data to disk ... ok
6/24/2016 1:34:04 AM
6/24/2016 1:34:04 AMWARNING: enabling "trust" authentication for local connections
6/24/2016 1:34:04 AMYou can change this by editing pg_hba.conf or using the option -A, or
6/24/2016 1:34:04 AM--auth-local and --auth-host, the next time you run initdb.
6/24/2016 1:34:04 AM
6/24/2016 1:34:04 AMSuccess. You can now start the database server using:
6/24/2016 1:34:04 AM
6/24/2016 1:34:04 AM    pg_ctl -D /var/lib/postgresql/data -l logfile start
6/24/2016 1:34:04 AM
6/24/2016 1:34:04 AM****************************************************
6/24/2016 1:34:04 AMWARNING: No password has been set for the database.
6/24/2016 1:34:04 AM         This will allow anyone with access to the
6/24/2016 1:34:04 AM         Postgres port to access your database. In
6/24/2016 1:34:04 AM         Docker's default configuration, this is
6/24/2016 1:34:04 AM         effectively any other container on the same
6/24/2016 1:34:04 AM         system.
6/24/2016 1:34:04 AM
6/24/2016 1:34:04 AM         Use "-e POSTGRES_PASSWORD=password" to set
6/24/2016 1:34:04 AM         it in "docker run".
6/24/2016 1:34:04 AM****************************************************
6/24/2016 1:34:04 AMwaiting for server to start....LOG:  database system was shut down at 2016-06-23 23:34:04 UTC
6/24/2016 1:34:04 AMLOG:  MultiXact member wraparound protections are now enabled
6/24/2016 1:34:04 AMLOG:  database system is ready to accept connections
6/24/2016 1:34:04 AMLOG:  autovacuum launcher started
6/24/2016 1:34:05 AM done
6/24/2016 1:34:05 AMserver started
6/24/2016 1:34:05 AMALTER ROLE
6/24/2016 1:34:05 AM
6/24/2016 1:34:05 AM
6/24/2016 1:34:05 AM/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
6/24/2016 1:34:05 AM
6/24/2016 1:34:05 AMLOG:  received fast shutdown request
6/24/2016 1:34:05 AMLOG:  aborting any active transactions
6/24/2016 1:34:05 AMLOG:  autovacuum launcher shutting down
6/24/2016 1:34:05 AMLOG:  shutting down
6/24/2016 1:34:05 AMwaiting for server to shut down....LOG:  database system is shut down
6/24/2016 1:34:06 AM done
6/24/2016 1:34:06 AMserver stopped
6/24/2016 1:34:06 AM
6/24/2016 1:34:06 AMPostgreSQL init process complete; ready for start up.
6/24/2016 1:34:06 AM
6/24/2016 1:34:06 AMLOG:  database system was shut down at 2016-06-23 23:34:05 UTC
6/24/2016 1:34:06 AMLOG:  MultiXact member wraparound protections are now enabled
6/24/2016 1:34:06 AMLOG:  database system is ready to accept connections
6/24/2016 1:34:06 AMLOG:  autovacuum launcher started

taigaback:

6/24/2016 1:34:04 AMTrying import local.py settings...
6/24/2016 1:34:04 AMWARNING: Using backend without django-transaction-hooks support, auto delete files will not work.
6/24/2016 1:34:05 AMTrying import local.py settings...
6/24/2016 1:34:05 AMWARNING: Using backend without django-transaction-hooks support, auto delete files will not work.

The k8s catalog item MongoDB has some errors in run.sh and the YAML file

My k8s version is:
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"b57e8bdc7c3871e3f6077b13c42d205ae1813fbd", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"b57e8bdc7c3871e3f6077b13c42d205ae1813fbd", GitTreeState:"clean"}
Docker version is: 1.9.1
When I deploy MongoDB in k8s with the YAML file, I found two errors.

The first is in run.sh:
IP=$(ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1 | sed -n 2p)
If you use this script to get the IP, it returns empty, but when you remove | sed -n 2p it works.

The second is in the YAML file mongo-controller.yaml:
In this file the environment variable PRIMARY is not set, and the default value in the container is true. As a result, when deploying MongoDB via mongo-controller.yaml, every mongo RC instance becomes primary due to the default value in the env. After setting the environment variable PRIMARY to false, the result is correct: the members include one primary and two secondaries (with replicas: 2).
So the PRIMARY environment variable needs to be added in mongo-controller.yaml, as sketched below.
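
A rough sketch of the relevant part of mongo-controller.yaml; the container name and image below are placeholders, the point is only the explicit PRIMARY entry (Kubernetes env values must be strings, hence the quotes):

spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: mongo-node        # placeholder name
        image: mongo:3.2        # placeholder image
        env:
        - name: PRIMARY
          value: "false"        # override the image default of true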

drone-workers either doesn't work or the setup isn't clear enough

Hey there,

So I was really pumped to see that someone had made a dynamic worker creator. I set it up and found the setup from the catalogue to be a bit confusing. That's fine, I get that it's experimental and new, but I'm not sure what is being asked for in the "Stack name" field. I've tried both the actual stack name, in my case "drone", and "drone-server" as it says it "should be the same as above". Neither of these seems to do anything other than remove ALL workers from the Drone server (I'm using the default catalogue one). Am I misunderstanding what these fields are, or is it simply not in a working state?

Thanks!

Drone is outdated

The Drone image we're using seems to be pretty outdated. Can we either use the official Drone image instead or update ours to the latest version of drone 0.4?

Unable to use the registry with Portus

Hi,
I was just trying to get the registry to work on my local host, but when I launch it, it shows the following error:
Error (500 Server Error: Internal Server Error ("create registry/db: volume name invalid: "registry/db" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed"))

It always fails to launch the first database container. Has anyone used this before?

[cloudflare] unmarshal number error of type string

Hi,

I'm encountering an error while testing the CloudFlare DNS Service.

23/4/2016 23:24:30time="2016-04-23T21:24:30Z" level=fatal msg="Failed to set zone for root domain domain.tld: CloudFlare API call has failed: json: cannot unmarshal number into Go value of type string"

Corresponding config files:

docker-compose.yml

cloudflare:
  environment:
    CLOUDFLARE_EMAIL: [email protected]
    CLOUDFLARE_KEY: 9534129bbb1f22bc71973d4cd90a429xxxxxx
    ROOT_DOMAIN: domain.tld
    TTL: '300'
  labels:
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: external-dns
  command:
  - -provider=cloudflare
  image: rancher/external-dns:v0.2.1

rancher-compose.yml

cloudflare:
  scale: 1
  health_check:
    port: 1000
    interval: 2000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 2000
    request_line: GET / HTTP/1.0
    healthy_threshold: 2

Note: I have deliberately obfuscated the end of the API key.

Elasticsearch 2.x

I'm currently running Rancher 1.0.2 and trying out the Elasticsearch 2.x stack; however, it's not working and I'm lost as to why.

All the services load up and report healthy. But navigating to kopf I get an error connecting to http://192.168.11.40:8081/es/. If I click the red warning, or go to the URL directly, I get the following:

{
  "name" : "Elasticsearch_elasticsearch-clients_1",
  "cluster_name" : "ShiroiES",
  "version" : {
    "number" : "2.3.3",
    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
    "build_timestamp" : "2016-05-17T15:40:04Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

I'm not sure what to do; I'm not even moving on to Kibana 4 until I get Elasticsearch 2.x working.

[2016-07-06 00:19:16] Elasticseach connection:
[2016-07-06 00:19:16] {"host":"http://192.168.11.40:8081/es/","withCredentials":false}
[2016-07-06 00:19:17] Error executing request:
[2016-07-06 00:19:17] {"method":"GET","transformRequest":[null],"transformResponse":[null],"url":"http://192.168.11.40:8081/es//","data":{},"params":{},"headers":{"Accept":"application/json, text/plain, /"}}
[2016-07-06 00:19:17] REST API output:
[2016-07-06 00:19:17] {"name":"Elasticsearch_elasticsearch-clients_1","cluster_name":"ShiroiES","version":{"number":"2.3.3","build_hash":"218bdf10790eef486ff2c41a3df5cfa32dadcfde","build_timestamp":"2016-05-17T15:40:04Z","build_snapshot":false,"lucene_version":"5.5.0"},"tagline":"You Know, for Search"}

I'm not using the default port 80 as that is already in use; for the 'question' I picked 8081 as that's free. I've tried 2.2.1 and the 2.3.3 template from a PR; the same error happens regardless of the Elasticsearch version. Any idea what's going on?

Registry: Error creating Certificate

Hey,

I have tried to deploy the registry, but the sslproxy won't start. I looked around a bit and found that the problem is with the certificates: they could not be created by Portus.

The error is:

INFO[0066] [2/5] [portus]: Started                      
INFO[0066] [2/5] [registry]: Starting                   
INFO[0066] [2/5] [sslproxy]: Starting                   
portus_1 | 2016-04-26T18:03:10.080710472Z Creating Certificate
2016-04-26T18:03:10.111949368Z error on line 353 of /dev/fd/63
portus_1 | 2016-04-26T18:03:10.111974046Z 140235343283856:error:0E079065:configuration file routines:DEF_LOAD_BIO:missing equal sign:conf_def.c:362:line 353
portus_1 | 2016-04-26T18:03:10.112663656Z config FQDN into rails

When I try to execute the openssl command from startup.sh I get the following error:

#  openssl req -x509 -newkey rsa:2048 -keyout "$PORTUS_KEY_PATH" -out "$PORTUS_CRT_PATH" -days 3650 -nodes -subj "/CN=$PORTUS_MACHINE_FQDN" -extensions SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:registry,DNS:$PORTUS_MACHINE_FQDN,DNS:$ALTNAME,IP:$IPADDR,DNS:portus"))
Error Loading extension section SAN
140057797936784:error:2206D06D:X509 V3 routines:X509V3_parse_list:invalid null value:v3_utl.c:299:
140057797936784:error:22097069:X509 V3 routines:DO_EXT_NCONF:invalid extension string:v3_conf.c:139:name=subjectAltName,section=DNS:registry,DNS:registry.<domain>.net,DNS:,IP:,DNS:portus
140057797936784:error:22098080:X509 V3 routines:X509V3_EXT_nconf:error in extension:v3_conf.c:93:name=subjectAltName, value=DNS:registry,DNS:registry.<domain>.net,DNS:,IP:,DNS:portus

I have tried to deploy the containers on two hosts, but I get the same error on both.
I have no idea what exactly the problem is. I hope someone can help me.

Jenkins plugins are not installed

Because dependencies are not resolved automatically, the plugins that are selected by default when spinning up Jenkins from the Rancher catalog are not installed:

LOGS:

java.io.IOException: Dependency git (2.0.3) doesn't exist
29/03/2016 20:10:09 at hudson.PluginWrapper.resolvePluginDependencies(PluginWrapper.java:533)
29/03/2016 20:10:09 at hudson.PluginManager$2$1$1.run(PluginManager.java:383)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:169)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:282)
29/03/2016 20:10:09 at jenkins.model.Jenkins$8.runTask(Jenkins.java:924)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:210)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
29/03/2016 20:10:09 at java.lang.Thread.run(Thread.java:745)
29/03/2016 20:10:09
29/03/2016 20:10:09Mar 29, 2016 6:10:09 PM jenkins.InitReactorRunner$1 onTaskFailed
29/03/2016 20:10:09SEVERE: Failed Loading plugin plain-credentials
29/03/2016 20:10:09java.io.IOException: Dependency credentials (1.21) doesn't exist
29/03/2016 20:10:09 at hudson.PluginWrapper.resolvePluginDependencies(PluginWrapper.java:533)
29/03/2016 20:10:09 at hudson.PluginManager$2$1$1.run(PluginManager.java:383)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:169)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:282)
29/03/2016 20:10:09 at jenkins.model.Jenkins$8.runTask(Jenkins.java:924)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:210)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
29/03/2016 20:10:09 at java.lang.Thread.run(Thread.java:745)
29/03/2016 20:10:09
29/03/2016 20:10:09Mar 29, 2016 6:10:09 PM jenkins.InitReactorRunner$1 onTaskFailed
29/03/2016 20:10:09SEVERE: Failed Loading plugin github
29/03/2016 20:10:09java.io.IOException: Dependency credentials (1.22), plain-credentials (1.1), git (2.4.0), token-macro (1.11) doesn't exist
29/03/2016 20:10:09 at hudson.PluginWrapper.resolvePluginDependencies(PluginWrapper.java:533)
29/03/2016 20:10:09 at hudson.PluginManager$2$1$1.run(PluginManager.java:383)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:169)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:282)
29/03/2016 20:10:09 at jenkins.model.Jenkins$8.runTask(Jenkins.java:924)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:210)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
29/03/2016 20:10:09 at java.lang.Thread.run(Thread.java:745)
29/03/2016 20:10:09
29/03/2016 20:10:09Mar 29, 2016 6:10:09 PM jenkins.InitReactorRunner$1 onTaskFailed
29/03/2016 20:10:09SEVERE: Failed Loading plugin ssh-slaves
29/03/2016 20:10:09java.io.IOException: Dependency ssh-credentials (1.6.1), credentials (1.9.4) doesn't exist
29/03/2016 20:10:09 at hudson.PluginWrapper.resolvePluginDependencies(PluginWrapper.java:533)
29/03/2016 20:10:09 at hudson.PluginManager$2$1$1.run(PluginManager.java:383)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:169)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:282)
29/03/2016 20:10:09 at jenkins.model.Jenkins$8.runTask(Jenkins.java:924)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:210)
29/03/2016 20:10:09 at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
29/03/2016 20:10:09 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
29/03/2016 20:10:09 at java.lang.Thread.run(Thread.java:745)

MongoDB on Cattle not initializing correctly, causing problems

When deploying the MongoDB catalog template, everything starts up nicely and works fine, but if you shut down one of the nodes, it never seems able to re-join the cluster properly.
One thing I noticed is that one of the members in the replica set is listed by name instead of IP, which is possibly one of the reasons. Either way something is off, and using the MongoDB catalog template is very unsafe as it stands right now: if one of the replicas fails you will not be able to re-create the cluster, and once Rancher starts to spin up new instances and delete failed ones, it might essentially lose your content (if you don't use an external volume and you trigger +/- instances; I accidentally did that and lost all data).

rs0:PRIMARY> rs.config()
{
    "_id" : "rs0",
    "version" : 3,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "**MongoDB_mongo-cluster_1:27017**",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "**10.42.230.33:27017**",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "**10.42.94.55:27017**",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("576af7ee04b83fbf3e3135a3")
    }
}

Another issue I noticed: the replica set becomes unavailable after the cluster is scaled up and then reduced in size. For example, increase to 4 instances, and then if 2 nodes fail the cluster will never recover. And if you bring more instances up, it just causes a split brain: the new instances come up as individual single-master-only nodes, while the original replica-set nodes remain as secondaries.

taigafront fails to start

Trying to start Taiga gives the following error from the taigafront container:

2/24/2016 3:23:32 PM2016/02/24 20:23:32 [emerg] 10#10: host not found in upstream "taiga-back" in /etc/nginx/conf.d/default.conf:25
2/24/2016 3:23:32 PMnginx: [emerg] host not found in upstream "taiga-back" in /etc/nginx/conf.d/default.conf:25

I did have to change the ports used as the server I am deploying on already has those ports allocated.

Openstack on rancher

Guys, I have been thinking about this weird project for a while. I want to explore the possibility of having an OpenStack stack run on Rancher.

logspout:v0.2.0 does not work (logstash:1.5.6-1, elasticsearch-2)

@dominikhahn I run:

  1. elasticsearch-2
    elasticsearch-2.zip
  2. logstash
    logstash.zip
  3. logspout
    logspout.zip

In the logs, Logspout connects to Logstash, but it does not send logs from the servers.

18.04.2016 11:40:35# logspout v3-master-custom by gliderlabs
18.04.2016 11:40:35# adapters: udp tcp syslog logstash raw
18.04.2016 11:40:35# options : persist:/mnt/routes
18.04.2016 11:40:35# jobs    : pump http[routes,logs]:80
18.04.2016 11:40:35# routes  :
18.04.2016 11:40:35#   ADAPTER  ADDRESS     CONTAINERS  SOURCES OPTIONS
18.04.2016 11:40:35#   logstash logstash:5000               map[]

With the old versions of Elasticsearch (and Logstash) everything works and I can use Kibana, but now ES 2 does not receive any data.

docker-compose.yml

logspout:
  environment:
    LOGSPOUT: ignore
    ROUTE_URIS: logstash://logstash:5000
  external_links:
  - logstash/logstash-collector:logstash
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.container.hostname_override: container_name
  tty: true
  image: rancher/logspout-logstash:v0.2.0
  volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  stdin_open: true

kopf:
(screenshot)

Taiga Admin User Doesn't Exist

After creating the Taiga project, I can't log in to the web UI as "admin" with password "123123". I think the connection to the database is not working somehow...

The logs of the database container say

FATAL:  role "taiga" does not exist

Janitor template gives error on deploy

When deploying a Janitor container (v1.6.0 with the latest Privileged mode option) it gives an error:

Activating (Service 'cleanup' configuration key 'privileged' contains an invalid type, it should be a boolean.)

However, looking at the template definition, it seems to be valid. Is this because of a trailing newline or something similar?

Registry-Convoy: No such file or directory

When deploying a Registry-Convoy stack from the catalog, the sslproxy service errors with Error (404 Client Error: Not Found ("no such file or directory")).

I'm running Rancher 1.1.0-dev5 on CoreOS 1010.5.0. Unfortunately, I can't get the container logs since they don't start up.

DataDog agent Host labels don't work

Just deployed the DataDog agent via the catalog entry. I set service:rancher,environment:rancher for the "Host labels to DataDog tags" variable, but it doesn't look like the tags I set are being propagated to DataDog correctly.

[janitor] cannot deploy v1.6 on Rancher 1.0.1

When I try to launch a janitor Stack v1.6 on Rancher 1.0.1, this results in an endless loop of deployment attempts with the following error output from the Rancher Server:

time="2016-05-19T15:35:20Z" level=error msg="Stack Create Event Failed: yaml: unmarshal errors:\n  line 13: cannot unmarshal !!str `true` into bool" eventId=074ccb45-6586-4548-bd82-8bb0c4c2cdc8 resourceId=1e3
time="2016-05-19T15:35:20Z" level=error msg="Failed to unmarshall: yaml: unmarshal errors:\n  line 13: cannot unmarshal !!str `false` into bool\nenvironment:\n  CLEAN_PERIOD: 3600\n  DEBUG: 0\n  DELAY_TIME: 900\n  KEEP_CONTAINERS: '*:*'\n  KEEP_IMAGES: rancher/\n  LOOP: \"true\"\nimage: meltwater/docker-cleanup:1.6.0\nlabels:\n  io.rancher.scheduler.affinity:host_label_ne: janitor.exclude=true\n  io.rancher.scheduler.global: \"true\"\nnet: none\nprivileged: \"false\"\nstdin_open: false\ntty: false\nvolumes:\n- /var/run/docker.sock:/var/run/docker.sock\n- /var/lib/docker:/var/lib/docker\n"
time="2016-05-19T15:35:20Z" level=error msg="Failed to parse service cleanup: yaml: unmarshal errors:\n  line 13: cannot unmarshal !!str `false` into bool"
time="2016-05-19T15:35:20Z" level=error msg="Could not parse config for project janitor : yaml: unmarshal errors:\n  line 13: cannot unmarshal !!str `false` into bool"

The problem seems to be that privileged has the string value "true" or "false" instead of the booleans true or false, but that is just a guess on my part (illustrated below).
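
To illustrate that guess, a minimal before/after sketch (not the template's full service definition):

# String value -- what the error message suggests this Rancher version received:
cleanup:
  privileged: "false"

# Plain YAML boolean -- what this rancher-compose/Cattle version appears to expect:
cleanup:
  privileged: false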

This is probably related to #142 but my Rancher instance did not prevent me from deploying.

On a related note: how do I get rid of these stuck services? They don't show up in the Rancher UI :(


Useful Info
Versions: Rancher v1.0.1, Cattle v0.159.7, UI v1.0.5

galera-lb never starts, stuck at Initializing...

Hi,

When I start Galera on the latest (and previous) version of Rancher, the load balancer never finishes starting up.

5/4/2016 15:43:15INFO: ROOT -> ./
5/4/2016 15:43:15INFO: ROOT -> ./etc/
5/4/2016 15:43:15INFO: ROOT -> ./etc/monit/
5/4/2016 15:43:15INFO: ROOT -> ./etc/monit/conf.d/
5/4/2016 15:43:15INFO: ROOT -> ./etc/monit/conf.d/haproxy
5/4/2016 15:43:15INFO: ROOT -> ./etc/default/
5/4/2016 15:43:15INFO: ROOT -> ./etc/default/haproxy
5/4/2016 15:43:15INFO: ROOT -> ./etc/haproxy/
5/4/2016 15:43:15INFO: ROOT -> ./etc/haproxy/certs/
5/4/2016 15:43:15INFO: ROOT -> ./etc/haproxy/certs/certs.pem
5/4/2016 15:43:15INFO: ROOT -> ./etc/haproxy/certs/default.pem
5/4/2016 15:43:15INFO: ROOT -> ./etc/haproxy/haproxy.cfg
5/4/2016 15:43:15INFO: Sending haproxy applied 3-a5eac3965952846cbd39c610ae44d58f5b54450bd46bdfba35c57dda8edfaab0
5/4/2016 15:43:15INFO: HOME -> ./
5/4/2016 15:43:15INFO: HOME -> ./etc/
5/4/2016 15:43:15INFO: HOME -> ./etc/cattle/
5/4/2016 15:43:15INFO: HOME -> ./etc/cattle/startup-env
5/4/2016 15:43:15INFO: ROOT -> ./
5/4/2016 15:43:15INFO: ROOT -> ./etc/
5/4/2016 15:43:15INFO: ROOT -> ./etc/init.d/
5/4/2016 15:43:15INFO: ROOT -> ./etc/init.d/agent-instance-startup
5/4/2016 15:43:15INFO: Sending agent-instance-startup applied 1-b4c20a4067550042bc08dbf6a1eea1b4687dff4ec952bbe7dde04e79dd29d15e
5/4/2016 15:43:15monit: generated unique Monit id 6fe3b6a025aeca46f989b25ddab66bb1 and stored to '/var/lib/monit/id'
5/4/2016 15:43:15Starting monit daemon with http interface at [localhost:2812]

(screenshot)

Any idea?

Unable to setup registry stack: 502 Bad Gateway

Hello,

Trying to get the "registry" stack working but not able to. I'm hitting a 502 Bad Gateway NGINX error when connecting to https://registry.mydomain.com.

I configured the stack to use registry.mydomain.com as the FQDN and used the default ports (5000/443).

Here is my docker-compose.yml:

db:
  environment:
    MYSQL_DATABASE: portus
    MYSQL_PASSWORD: mypasswordhere
    MYSQL_ROOT_PASSWORD: mypasswordhere
    MYSQL_USER: portus
  labels:
    io.rancher.service.hash: 1e0b6e32ffafa0a0b0b9250e7540d97fe5fdce9d
  tty: true
  image: mysql:5.7.10
  volumes:
  - /var/docker-registry/db:/var/lib/mysql
  stdin_open: true
sslproxy:
  labels:
    io.rancher.service.hash: d12f9cfbe6ef4cf67459c96af4096fd7d4feb0c4
  tty: true
  image: nginx:1.9.9
  links:
  - portus:portus
  volumes:
  - /var/docker-registry/certs:/etc/nginx/certs:ro
  - /var/docker-registry/proxy:/etc/nginx/conf.d:ro
  stdin_open: true
lb:
  ports:
  - 5000:5000/tcp
  - 443:443/tcp
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.scheduler.affinity:not_host_label: lb=0
    io.rancher.loadbalancer.target.sslproxy: 443=443
    io.rancher.service.hash: d7d36fab64ca84d9c4967ea01f47bff1ee5343f4
    io.rancher.loadbalancer.target.registry: 5000=5000
  tty: true
  image: rancher/load-balancer-service
  links:
  - registry:registry
  - sslproxy:sslproxy
  stdin_open: true
registry:
  environment:
    REGISTRY_AUTH_TOKEN_ISSUER: registry.mydomain.com
    REGISTRY_AUTH_TOKEN_REALM: https://registry.mydomain.com:443/v2/token
    REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certs/registry.crt
    REGISTRY_AUTH_TOKEN_SERVICE: registry.mydomain.com:5000
    REGISTRY_HTTP_SECRET: httpsecret
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
    REGISTRY_HTTP_TLS_KEY: /certs/registry.key
    REGISTRY_LOG_LEVEL: warn
    REGISTRY_NOTIFICATIONS_ENDPOINTS: |
      - name: portus
        url: http://portus:3000/v2/webhooks/events
        timeout: 500
        threshold: 5
        backoff: 1
    REGISTRY_STORAGE_DELETE_ENABLED: 'true'
  labels:
    io.rancher.service.hash: 6eccbc547093b7269e5f2abc71829a027ff43a65
  tty: true
  image: registry:2.1
  links:
  - portus:portus
  volumes:
  - /var/docker-registry/certs:/certs:ro
  - /var/docker-registry/data:/var/lib/registry
  stdin_open: true
portus:
  environment:
    PORTUS_CHECK_SSL_USAGE_ENABLED: 'true'
    PORTUS_GRAVATAR_ENABLED: 'true'
    PORTUS_KEY_PATH: /certs/registry.key
    PORTUS_LDAP_AUTHENTICATION_BIND_DN: ou=portus,dc=company,dc=com
    PORTUS_LDAP_AUTHENTICATION_ENABLED: 'false'
    PORTUS_LDAP_AUTHENTICATION_PASSWORD: password
    PORTUS_LDAP_BASE: ou=People,dc=company,dc=com
    PORTUS_LDAP_ENABLED: 'false'
    PORTUS_LDAP_GUESS_EMAIL_ATTR: mail
    PORTUS_LDAP_GUESS_EMAIL_ENABLED: 'true'
    PORTUS_LDAP_HOSTNAME: ldap.company.com
    PORTUS_LDAP_METHOD: starttls
    PORTUS_LDAP_PORT: '389'
    PORTUS_LDAP_UID: cn
    PORTUS_MACHINE_FQDN: registry.mydomain.com
    PORTUS_PASSWORD: mypasswordhere
    PORTUS_PORT: '443'
    PORTUS_PRODUCTION_DATABASE: portus
    PORTUS_PRODUCTION_HOST: db
    PORTUS_PRODUCTION_PASSWORD: mypasswordhere
    PORTUS_PRODUCTION_USERNAME: portus
    PORTUS_SECRET_KEY_BASE: mypasswordhere
    PORTUS_SMTP_ENABLED: 'false'
    REGISTRY_HOSTNAME: registry.mydomain.com
    REGISTRY_NAME: Registry
    REGISTRY_PORT: '5000'
    REGISTRY_SSL_ENABLED: 'true'
  labels:
    io.rancher.container.pull_image: always
    io.rancher.service.hash: efe4408dc0c6702a602249112ddc49d184f133ef
  tty: true
  image: sshipway/portus:2.0.3
  links:
  - db:db
  volumes:
  - /var/docker-registry/certs:/certs
  - /var/docker-registry/proxy:/etc/nginx/conf.d
  stdin_open: true

Here is my rancher-compose.yml:

db:
  scale: 1
  metadata:
    io.rancher.service.hash: d207d194d16104535c1df27c1c9ac5918b7cfa26
sslproxy:
  scale: 1
  metadata:
    io.rancher.service.hash: 76d82b9309a25e5460531ac297525a19e5a6ee54
lb:
  load_balancer_config:
    haproxy_config: {}
  health_check:
    port: 42
    interval: 2000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 2000
    healthy_threshold: 2
  metadata:
    io.rancher.service.hash: 71d8a2cfdb029771cf96cc7b5db9316380fd730b
registry:
  scale: 1
  metadata:
    io.rancher.service.hash: c2f7cdf5184286537044b64fbb51f0c0e574e30e
portus:
  scale: 1
  metadata:
    io.rancher.service.hash: 70cf70f82cd8b2af5a2c2b7ec7e4f8ab49cde759

Am I doing something wrong?

Thanks for any tips

Feature request : Mesos/Spark/Chronos

Hi there,

I think it would be very interesting to have the ability to deploy a new kind of environment (like cattle or kubernetes) which would be Mesos.

It has a lot of features, and for me the first is Spark support. Other projects built on top of Mesos, like Chronos, are also interesting.

Is there anyone else interested in Mesos on Rancher?

Thanks.

Jenkins-CI Port Linking

Please allow linking the Jenkins-CI container port to another port on the host system.
If you run Rancher as a single-node installation, the default port for rancher-server is 8080, which conflicts with Jenkins-CI.
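
As an illustration, a hedged compose sketch (service name and image are placeholders, not necessarily the catalog entry's exact definition) that maps the container's 8080 to host port 8081:

jenkins:
  image: jenkins            # placeholder image reference
  ports:
  - "8081:8080"             # host 8081 -> container 8080, avoiding the clash with rancher/server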

Gitlab Template doesn't start

Rancher Version: v1.2.0-pre1

The Gitlab catalog entry doesn't start and the gitlab_gitlab-server_1 container stopped without throwing any error in the logs.

Traefik Template Error

Rancher Version: v1.2.0-pre1

Trying to run the Traefik catalog template results in an error: no containers are created and the state of the stack is Unhealthy.

gogs template: missing MYSQL_DATABASE env

In the mysql service, the MYSQL_DATABASE env variable is missing, so no database gets created, and when you try to configure Gogs you get this error:

(screenshot from 2016-03-10 16:37:57)

If you execute a shell in the mysql container and list the databases, you will see that no database was created.
(screenshot from 2016-03-10 16:42:11)

Update

After creating the database manually I was able to finish the installation process.

Gogs has different default database

I think we should add the environment variable MYSQL_DATABASE: gogs to the Gogs templates/gogs/*/docker-compose.yml since it's the default database of the Gogs setup.
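
A minimal sketch of the suggested change; the image tag and password variable are placeholders, only the MYSQL_DATABASE line is the point:

# templates/gogs/*/docker-compose.yml (excerpt)
mysql:
  image: mysql:5.7                              # placeholder tag
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} # placeholder catalog variable
    MYSQL_DATABASE: gogs                        # create the database Gogs expects by default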

registry: 500 Internal Server Error If you are the administrator of this website, then please read this web application's log file and/or the web server's log file to find out what went wrong.

May 27, 2016 10:34:52 PM GMT+8 Processing by ErrorsController#show as JSON
May 27, 2016 10:34:52 PM GMT+8   Parameters: {"status"=>"500"}
May 27, 2016 10:34:52 PM GMT+8 Completed 500 Internal Server Error in 0ms (ActiveRecord: 0.0ms)
May 27, 2016 10:34:52 PM GMT+8 Error during failsafe response: Missing template errors/500, application/500 with {:locale=>[:en], :formats=>[:json], :variants=>[], :handlers=>[:erb, :builder, :raw, :ruby, :slim, :coffee]}. Searched in:
May 27, 2016 10:34:52 PM GMT+8   * "/portus/app/views"
May 27, 2016 10:34:52 PM GMT+8   * "/usr/local/bundle/gems/kaminari-0.16.3/app/views"
May 27, 2016 10:34:52 PM GMT+8   * "/usr/local/bundle/gems/devise-3.5.1/app/views"

Recommendation: Don't bind ports in docker-compose.yml

A small recommendation from something that I've had to deal with while using some of these templates on a new environment. It may be best to not bind to public ports in docker-compose.yml. While it can be useful if you are just setting up a machine specific to that service, it can become an annoyance to those who have machines multi-tasking. The two big issues I have run into with default port binding are:

  1. Port conflicts. I may already have software serving on port 80, 443, 3000, 5000, 8080, etc. Getting templates to work then requires having to stop the service as it keeps failing, upgrade it, and then check for any issues downstream due to other services that might depend on that port.
  2. Potential Security Risk. Some of the containers being launched with these acquire escalated privileges which can pose a threat to your servers if controlled by a malicious user. Some expose unprotected APIs to the public, such as influxdb in prometheus. In the InfluxDB example, the default installation can let anybody both read and alter data, which in my opinion is a bad default to implement.

So my proposal is this: Stop binding public ports by default and instead list them in the description for each template. It may even be better to have rancher conditionally enable or disable sections of docker-compose.yml based on values from rancher-compose.yml.

Also, it may be helpful to list volumes to mount as well for those of us who want persistent storage on our projects, especially with technologies like gluster, ceph, convoy, flocker, etc.
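
To make the proposal concrete, a hedged before/after sketch for an arbitrary service (names, ports and paths are illustrative only):

# Before: binds host port 80 on every host the service lands on
web:
  image: example/web        # placeholder
  ports:
  - "80:80"

# After: only expose the port inside the Rancher network and document it in the
# template description; users add their own load balancer or host binding,
# and the data path is listed so it can be pointed at persistent storage.
web:
  image: example/web
  expose:
  - "80"
  volumes:
  - web-data:/var/lib/web   # illustrative named volume for persistence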

Alfresco template is failing

Rancher Version: v1.2.0-pre1

The alfresco_postgres_postgres-data_1 container in Alfresco template is failing and throwing the following error:

Error (500 Server Error: Internal Server Error ("No command specified")) 

Hadoop Illegal character in host-name

Hi, we use the Rancher template for Hadoop + YARN, but it seems that Hadoop is unable to deal with container names being used as hostnames.

Caused by: java.net.URISyntaxException: Illegal character in hostname at index 13: http://hadoop_datanode_1:50075/webhdfs/v1/skystore/tmp/devtest_onedir/2016_08_19_02_35_35_32f7/header.json?op=CREATE&user.name=hdfs&namenoderpcaddress=10.42.14.252:8020&overwrite=true

Am I doing it wrong or is there some workaround?

As I see it, the problem is caused by using container names as hostnames while Rancher creates container names containing underscores. I have no idea how to fix it, though...

Cannot scale clients to Elasticsearch 2.x stack

I'm trying to scale out the Elasticsearch 2.x stack provided in Rancher 1.0. Starting from a clean deployment of the stack, I used the "Scale" tool on the elasticsearch-clients service. This created a new, separate client that did not join the existing one.
I can see the second client is alive because kopf started switching from one client to the other every two seconds.
Is there a way to set up a cluster using this stack, or do I have to create a custom one?
Thanks

RabbitMQ

Hi guys,

I'm wondering if anyone has a catalog item for RabbitMQ?

Cheers,

Logstash: create additional inputs

I found this post, which describes a way to create inputs via environment variables:
https://github.com/rancher/catalog-dockerfiles/tree/master/logstash/containers/0.2.0/logstash-config

For example:

 #example(with environment variable backend): 
  LOGSTASH_CONFIG_INPUTS_UDP_0={"port": "5000"}

  # will translate to:
  input {
   udp {
       port => 5000
   }
  }
  ...

I am looking for assistance on how I would format an input for IIS. I am also confused as to whether the input should point at Redis or Elasticsearch, and how I would do that in this scenario. Any assistance would be appreciated.

How to link ClusterControl to Percona XtraDB Cluster

Hello,

I'm trying to connect ClusterControl to PXC. I've installed PXC from the catalog and added ClusterControl as a new service. But the problem is that when I log in to ClusterControl and try to add the existing node, it fails because of the SSH connection:

Can't connect (SSH): libssh connect error: Connection refused

Has anyone already tried to get the two services to communicate?

glusterfs: mount storage from location on host

Not hugely familiar with Gluster and want to start using it via this entry, but it looks like snapshotting and other advanced features require LVM. Any chance the data container might be able to mount /var/lib/gluster from an LVM volume on the host?

Or is there more to it than simply placing the storage on an LVM mount? I only want LVM so I can take snapshotted, online backups. Is there some other way to back up a storage pool created by this container that is safe to perform live?

Thanks.

convoy-gluster: rancher/convoy-agent is outdated

This container uses rancher/convoy-agent:v0.3.0, but the NFS container uses v0.7.0. Can this agent be upgraded safely? I don't have the resources to test whether this upgrade would work.

Unfortunately, this agent version doesn't seem to work well with Docker 1.10.3 as it fails to remove volumes. It also returns volumes in docker volume ls that don't appear at all in convoy list. So it really needs an upgrade.
