ebayclassifiedsgroup / PanteraS
PanteraS - PaaS - Platform as a Service in a box
License: GNU General Public License v2.0
It looks like there is a race condition between consul-template and restarting haproxy. I noticed on several cluster nodes that haproxy is running several times, e.g.:
administrator@ECS-7b314f01:~$ docker exec -it panteras_panteras_1 /bin/bash
root@ECS-7b314f01:/opt# ps -aux | grep haproxy
root 15 0.1 0.0 19616 1684 ? S 08:51 0:00 /bin/bash /opt/consul-template/haproxy_watcher.sh
root 16 0.3 0.3 12724 7748 ? Sl 08:51 0:01 consul-template -consul=10.50.80.41:8500 -template template.conf:/etc/haproxy/haproxy.cfg:/opt/consul-template/haproxy_reload.sh -template=./keepalived.conf:/etc/keepalived/keepalived.conf:./keepalived_reload.sh
root 1142 0.0 0.0 21076 1608 ? Ss 08:53 0:00 /usr/sbin/haproxy -p /tmp/haproxy.pid -f /etc/haproxy/haproxy.cfg -sf 1133 1114 1097
root 1479 0.0 0.0 21076 1600 ? Ss 08:54 0:00 /usr/sbin/haproxy -p /tmp/haproxy.pid -f /etc/haproxy/haproxy.cfg -sf 1474 1142 1114
root 1505 0.0 0.0 21076 1400 ? Ss 08:54 0:00 /usr/sbin/haproxy -p /tmp/haproxy.pid -f /etc/haproxy/haproxy.cfg -sf 1498 1479 1142
root 2406 0.0 0.0 10468 940 ? S+ 08:58 0:00 grep --color=auto haproxy
Note that there is more than one pid listed for the -sf option of haproxy!
My guess is that this happens when several service changes propagate through consul-template within very short time intervals, e.g., when scaling a Marathon app. This triggers the haproxy reload command in parallel, each invocation getting one or more pids via the pidof call, resulting in the situation shown above.
The result is that haproxy on that node will (or at least might) run with an outdated haproxy config.
Right now, I don't have a good idea how to fix that...
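One possible mitigation (a minimal sketch, not the project's actual haproxy_reload.sh): serialize the reload through flock(1) so that concurrent consul-template triggers queue up instead of racing. The lock file path and the RELOAD_CMD placeholder are assumptions:

```shell
#!/bin/bash
# Sketch: serialize haproxy reloads so parallel consul-template invocations
# cannot each grab a different set of pids via pidof. LOCKFILE and RELOAD_CMD
# are placeholders; in a real haproxy_reload.sh RELOAD_CMD would be roughly:
#   /usr/sbin/haproxy -p /tmp/haproxy.pid -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)
LOCKFILE="${LOCKFILE:-/tmp/haproxy_reload.lock}"
RELOAD_CMD="${RELOAD_CMD:-echo would reload haproxy}"

reload_haproxy() {
  (
    # Wait up to 10s for an in-flight reload; only one holder at a time.
    flock -w 10 9 || return 1
    $RELOAD_CMD
  ) 9>"$LOCKFILE"
}

reload_haproxy
```

With this, the second trigger blocks until the first reload finishes, so pidof only ever sees the pids that the previous reload left behind.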
I don't really understand how to set up a simple HTTP load balancer at www.myawesomedomain.com for the domain myawesomedomain.com on a host at server1337.myawesomedomain.com.
Could you provide a simple example or a short explanation on:
Thanks
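Not an authoritative answer, but a sketch of how this is usually wired in PanteraS (variable names from generate_yml.sh; the domain and IP are placeholders):

```shell
# Sketch (assumptions marked): generate the compose file with the public
# domain appended to service names, so haproxy can route on the Host header.
HAPROXY_ADD_DOMAIN=".myawesomedomain.com" \
IP=<ip-of-server1337> \
./generate_yml.sh
docker-compose up -d

# A Marathon app whose container is registered with, e.g.,
#   SERVICE_NAME=www  and  SERVICE_TAGS=haproxy
# would then (assuming the default haproxy template) be reachable as
#   http://www.myawesomedomain.com/  via haproxy on server1337,
# provided DNS for *.myawesomedomain.com points at that host.
```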
I am trying to close off the subnetwork my_service.service.consul, to prevent the HTTP interfaces of Marathon, Mesos, supervisord, and consul-ui from listening on 0.0.0.0.
I have seen in generate_yml.sh that these 3 parameters are available to configure consul. I want consul to listen only on the docker host.
If CONSUL_IP is the interface to bind to, CONSUL_IP=172.17.42.1 should do the trick?
What exactly are these settings for:
HOST_IP=${IP}
DNS_IP=${DNS_IP}
CONSUL_IP=${IP}
It would be nice to have a documentation section listing all the available parameters with descriptions.
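For what it's worth, here is a rough mapping inferred from inspecting a generated docker-compose.yml (a guess, not documentation):

```shell
# Inferred mapping (not authoritative):
#   HOST_IP   -> the address advertised to the cluster (consul -advertise,
#                registrator -ip), i.e. how other nodes reach this one
#   CONSUL_IP -> the address local components (registrator, dnsmasq,
#                consul-template) use to reach consul's HTTP (8500) and
#                DNS (8600) endpoints
#   DNS_IP    -> presumably the resolver address handed out for the consul domain
#   LISTEN_IP -> the interface services bind to (consul -client/-bind,
#                dnsmasq --listen-address, marathon --http_address, ...)
# So to make consul listen only on the docker bridge, LISTEN_IP looks like
# the right knob, with CONSUL_IP pointing at the same reachable address:
LISTEN_IP=172.17.42.1 CONSUL_IP=172.17.42.1 IP=<host-ip> ./generate_yml.sh
```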
When starting up vagrant (with BUILD=true and also without it), the provisioning process stops with a no space left on device error.
Is there an easy way to update mesos-slave attributes within a running panteras container?
I am using the mesos-slave attributes to define the roles the cluster node fulfills, e.g., compute node, log node, build node, ...
I do this by defining MESOS_SLAVE_PARAMS like this when generating the docker-compose configuration:
MESOS_SLAVE_PARAMS="--attributes=\\\\\"compute:true;log:true\\\\\" --resources=\\\\\"ports(*):[8000-8999, 31000-32999]\\\\\""
Note the escaping of the backslashes, which is required since MESOS_SLAVE_PARAMS is pasted several times during the configuration process, and each time one level of escaping is consumed...
How would I update the attributes of a running panteras container in a graceful manner?
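A sketch of one way this might work (an assumption, not a tested recipe; paths follow the --work_dir=/tmp/mesos and supervisord setup used by panteras):

```shell
# Hypothetical procedure for changing mesos-slave attributes in place.
# Mesos refuses to recover a slave whose attributes changed, so the
# checkpointed slave identity has to be dropped (tasks on this node are lost).
docker exec -it panteras_panteras_1 /bin/bash -c '
  supervisorctl stop mesos-slave
  # forget the old slave identity checkpointed under the work_dir:
  rm -f /tmp/mesos/meta/slaves/latest
  # adjust --attributes=... in the mesos-slave supervisord program, then:
  supervisorctl update && supervisorctl start mesos-slave
'
```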
If you use more than one haproxy_route tag in the SERVICE_TAGS parameter, consul-template produces a single line with all the acls for the routes in it.
Example: SERVICE_TAGS="haproxy, haproxy_route=/media, haproxy_route=/chat"
The produced output in haproxy.cfg is:
acl acl_myservice path_beg /chat/acl acl_seeitmediaaws path_beg /media/
There is just a newline missing in template.conf:
{{range $service.Tags}}{{if . | regexMatch "haproxy_route=(.+)" }}acl acl_{{$service.Name}} path_beg {{. | regexReplaceAll "haproxy_route=" ""}}/{{end}}{{end}}
should be
{{range $service.Tags}}{{if . | regexMatch "haproxy_route=(.+)" }}acl acl_{{$service.Name}} path_beg {{. | regexReplaceAll "haproxy_route=" ""}}/
{{end}}{{end}}
I have a host with 2 physical network interfaces: public & private.
I want to start PanteraS from the master branch on a single node (master+slave) using the following:
./build-docker-images.sh
IP=159.203.300.283 LISTEN_IP=172.16.0.1 ./generate_yml.sh
docker-compose up -d
If I remove LISTEN_IP, all services start normally.
Here is the log:
root@master-01:~/panteras# cat docker-compose.yml
panteras:
image: panteras/paas-in-a-box:latest
net: host
privileged: true
restart: "no"
ports:
- "8500:8500"
- "8080:8080"
- "5050:5050"
- "4400:4400"
environment:
CONSUL_IP: "159.203.300.283"
HOST_IP: "159.203.300.283"
LISTEN_IP: "172.16.0.1"
FQDN: "master-01.mydomain.com"
GOMAXPROCS: "4"
SERVICE_8500_NAME: consul-ui
SERVICE_8500_TAGS: haproxy
SERVICE_8500_CHECK_HTTP: /v1/status/leader
SERVICE_8080_NAME: marathon
SERVICE_8080_TAGS: haproxy
SERVICE_8080_CHECK_HTTP: /v2/leader
SERVICE_5050_NAME: mesos
SERVICE_5050_TAGS: haproxy
SERVICE_5050_CHECK_HTTP: /master/health
SERVICE_4400_NAME: chronos
SERVICE_4400_TAGS: haproxy
SERVICE_4400_CHECK_HTTP: /ping
START_CONSUL: "true"
START_CONSUL_TEMPLATE: "true"
START_DNSMASQ: "true"
START_MESOS_MASTER: "true"
START_MARATHON: "true"
START_MESOS_SLAVE: "true"
START_REGISTRATOR: "true"
START_ZOOKEEPER: "true"
START_CHRONOS: "true"
CONSUL_APP_PARAMS: "agent -client=172.16.0.1 -advertise=159.203.300.283 -bind=172.16.0.1 -data-dir=/opt/consul/ -ui-dir=/opt/consul/ -node=master-01.mydomain.com -dc=UNKNOWN -domain consul -server -bootstrap-expect 1 "
CONSUL_DOMAIN: "consul"
CONSUL_TEMPLATE_APP_PARAMS: "-consul=159.203.300.283:8500 -template haproxy.cfg.ctmpl:/etc/haproxy/haproxy.cfg:/opt/consul-template/haproxy_reload.sh "
DNSMASQ_APP_PARAMS: "-d -u dnsmasq -r /etc/resolv.conf.orig -7 /etc/dnsmasq.d --server=/consul/159.203.300.283#8600 --host-record=master-01.mydomain.com,159.203.300.283 --bind-interfaces --listen-address=172.16.0.1 --address=/consul/159.203.300.283 "
HAPROXY_ADD_DOMAIN: ""
MARATHON_APP_PARAMS: "--master zk://master-01.mydomain.com:2181/mesos --zk zk://master-01.mydomain.com:2181/marathon --hostname master-01.mydomain.com --no-logger --http_address 172.16.0.1 --https_address 172.16.0.1 "
MESOS_MASTER_APP_PARAMS: "--zk=zk://master-01.mydomain.com:2181/mesos --work_dir=/var/lib/mesos --quorum=1 --ip=172.16.0.1 --hostname=master-01.mydomain.com --cluster=mesoscluster "
MESOS_SLAVE_APP_PARAMS: "--master=zk://master-01.mydomain.com:2181/mesos --containerizers=docker,mesos --executor_registration_timeout=5mins --hostname=master-01.mydomain.com --ip=172.16.0.1 --docker_stop_timeout=5secs --gc_delay=1days --docker_socket=/tmp/docker.sock "
REGISTRATOR_APP_PARAMS: "-ip=159.203.300.283 consul://159.203.300.283:8500 "
ZOOKEEPER_APP_PARAMS: "start-foreground"
ZOOKEEPER_HOSTS: "master-01.mydomain.com:2181"
ZOOKEEPER_ID: "0"
KEEPALIVED_VIP: ""
CHRONOS_APP_PARAMS: "--master zk://master-01.mydomain.com:2181/mesos --zk_hosts master-01.mydomain.com:2181 --http_address 172.16.0.1 --http_port 4400 "
HOSTNAME: "master-01.mydomain.com"
env_file:
- ./restricted/env
volumes:
- "/etc/resolv.conf:/etc/resolv.conf.orig"
- "/var/spool/marathon/artifacts/store:/var/spool/store"
- "/var/run/docker.sock:/tmp/docker.sock"
- "/var/lib/docker:/var/lib/docker"
- "/sys:/sys"
- "/tmp/mesos:/tmp/mesos"
root@master-01:~/panteras# dc logs
Attaching to
root@master-01:~/panteras# dc up -d
Creating panteras_panteras_1
root@master-01:~/panteras# dc logs
Attaching to panteras_panteras_1
panteras_1 | 2016-01-13 17:41:15,558 CRIT Supervisor running as root (no user in config file)
panteras_1 | 2016-01-13 17:41:15,578 INFO RPC interface 'supervisor' initialized
panteras_1 | 2016-01-13 17:41:15,578 CRIT Server 'inet_http_server' running without any HTTP authentication checking
panteras_1 | 2016-01-13 17:41:15,578 INFO RPC interface 'supervisor' initialized
panteras_1 | 2016-01-13 17:41:15,579 CRIT Server 'unix_http_server' running without any HTTP authentication checking
panteras_1 | 2016-01-13 17:41:15,579 INFO supervisord started with pid 1
panteras_1 | 2016-01-13 17:41:16,581 INFO spawned: 'stdout' with pid 7
panteras_1 | 2016-01-13 17:41:16,584 INFO spawned: 'dnsmasq' with pid 8
panteras_1 | 2016-01-13 17:41:16,587 INFO spawned: 'consul' with pid 9
panteras_1 | 2016-01-13 17:41:16,590 INFO spawned: 'zookeeper' with pid 10
panteras_1 | 2016-01-13 17:41:16,593 INFO spawned: 'consul-template_haproxy' with pid 11
panteras_1 | 2016-01-13 17:41:16,596 INFO spawned: 'mesos-master' with pid 12
panteras_1 | 2016-01-13 17:41:16,599 INFO spawned: 'marathon' with pid 14
panteras_1 | 2016-01-13 17:41:16,603 INFO spawned: 'mesos-slave' with pid 19
panteras_1 | 2016-01-13 17:41:16,607 INFO spawned: 'registrator' with pid 21
panteras_1 | 2016-01-13 17:41:16,610 INFO spawned: 'chronos' with pid 26
panteras_1 | 2016-01-13 17:41:17,370 INFO exited: registrator (exit status 1; not expected)
panteras_1 | 2016-01-13 17:41:17,733 INFO success: stdout entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: dnsmasq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: consul entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: consul-template_haproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: mesos-master entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: marathon entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: mesos-slave entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | 2016-01-13 17:41:17,734 INFO success: chronos entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
panteras_1 | zookeeper stderr | JMX enabled by default
panteras_1 | consul-template_haproxy stdout | Currently: none
panteras_1 | Initially routing to haproxy_a
panteras_1 | zookeeper stderr | Using config: /etc/zookeeper/conf/zoo.cfg
panteras_1 | consul stdout | ==> WARNING: BootstrapExpect Mode is specified as 1; this is the same as Bootstrap mode.
panteras_1 | ==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
panteras_1 | consul stdout | ==> Starting Consul agent...
panteras_1 | marathon stdout | run_jar
panteras_1 | dnsmasq stderr | dnsmasq: started, version 2.68 cachesize 150
panteras_1 | dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth
panteras_1 | dnsmasq: using nameserver 159.203.300.283#8600 for domain consul
panteras_1 | dnsmasq: reading /etc/resolv.conf.orig
panteras_1 | dnsmasq stderr | dnsmasq: using nameserver 213.186.33.99#53
panteras_1 | dnsmasq: using nameserver 127.0.0.1#53
panteras_1 | dnsmasq: using nameserver 159.203.300.283#8600 for domain consul
panteras_1 | dnsmasq: read /etc/hosts - 9 addresses
panteras_1 | registrator stderr | 2016/01/13 17:41:17 Starting registrator ...
panteras_1 | 2016/01/13 17:41:17 Forcing host IP to 159.203.300.283
panteras_1 | consul stdout | ==> Starting Consul agent RPC...
panteras_1 | registrator stderr | 2016/01/13 17:41:17 consul: Get http://159.203.300.283:8500/v1/status/leader: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | consul stdout | ==> Consul agent running!
panteras_1 | Node name: 'master-01.mydomain.com'
panteras_1 | Datacenter: 'unknown'
panteras_1 | Server: true (bootstrap: true)
panteras_1 | Client Addr: 172.16.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
panteras_1 | Cluster Addr: 159.203.300.283 (LAN: 8301, WAN: 8302)
panteras_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
panteras_1 | Atlas: <disabled>
panteras_1 |
panteras_1 | ==> Log data will now stream in as it occurs:
panteras_1 |
panteras_1 | 2016/01/13 17:41:17 [INFO] raft: Node at 159.203.300.283:8300 [Follower] entering Follower state
panteras_1 | 2016/01/13 17:41:17 [WARN] memberlist: Binding to public address without encryption!
panteras_1 | 2016/01/13 17:41:17 [INFO] serf: EventMemberJoin: master-01.mydomain.com 159.203.300.283
panteras_1 | 2016/01/13 17:41:17 [WARN] memberlist: Binding to public address without encryption!
panteras_1 | 2016/01/13 17:41:17 [INFO] consul: adding LAN server master-01.mydomain.com (Addr: 159.203.300.283:8300) (DC: unknown)
panteras_1 | 2016/01/13 17:41:17 [INFO] serf: EventMemberJoin: master-01.mydomain.com.unknown 159.203.300.283
panteras_1 | 2016/01/13 17:41:17 [INFO] consul: adding WAN server master-01.mydomain.com.unknown (Addr: 159.203.300.283:8300) (DC: unknown)
panteras_1 | 2016/01/13 17:41:17 [ERR] agent: failed to sync remote state: No cluster leader
panteras_1 | consul-template_haproxy stderr | 2016/01/13 17:41:17 [DEBUG] (logging) setting up logging
panteras_1 | consul-template_haproxy stderr | 2016/01/13 17:41:17 [DEBUG] (logging) config:
panteras_1 |
panteras_1 | {
panteras_1 | "name": "consul-template",
panteras_1 | "level": "WARN",
panteras_1 | "syslog": false,
panteras_1 | "syslog_facility": "LOCAL0"
panteras_1 | }
panteras_1 |
panteras_1 | consul-template_haproxy stderr | 2016/01/13 17:41:17 [ERR] (view) "services" catalog services: error fetching: Get http://159.203.300.283:8500/v1/catalog/services?wait=60000ms: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | consul-template_haproxy stderr | 2016/01/13 17:41:17 [ERR] (runner) watcher reported error: catalog services: error fetching: Get http://159.203.300.283:8500/v1/catalog/services?wait=60000ms: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | mesos-master stderr | I0113 17:41:17.405643 12 main.cpp:229] Build: 2015-10-12 20:57:28 by root
panteras_1 | I0113 17:41:17.405753 12 main.cpp:231] Version: 0.25.0
panteras_1 | I0113 17:41:17.405758 12 main.cpp:234] Git tag: 0.25.0
panteras_1 | I0113 17:41:17.405761 12 main.cpp:238] Git SHA: 2dd7f7ee115fe00b8e098b0a10762a4fa8f4600f
panteras_1 | mesos-master stderr | I0113 17:41:17.405809 12 main.cpp:252] Using 'HierarchicalDRF' allocator
panteras_1 | mesos-slave stderr | I0113 17:41:17.405933 19 main.cpp:185] Build: 2015-10-12 20:57:28 by root
panteras_1 | I0113 17:41:17.406049 19 main.cpp:187] Version: 0.25.0
panteras_1 | I0113 17:41:17.406055 19 main.cpp:190] Git tag: 0.25.0
panteras_1 | I0113 17:41:17.406059 19 main.cpp:194] Git SHA: 2dd7f7ee115fe00b8e098b0a10762a4fa8f4600f
panteras_1 | mesos-master stderr | I0113 17:41:17.733381 12 leveldb.cpp:176] Opened db in 327.098874ms
panteras_1 | mesos-master stderr | I0113 17:41:17.783476 12 leveldb.cpp:183] Compacted db in 50.03221ms
panteras_1 | I0113 17:41:17.783524 12 leveldb.cpp:198] Created db iterator in 16395ns
panteras_1 | I0113 17:41:17.783536 12 leveldb.cpp:204] Seeked to beginning of db in 832ns
panteras_1 | I0113 17:41:17.783542 12 leveldb.cpp:273] Iterated through 0 keys in the db in 2128ns
panteras_1 | mesos-master stderr | I0113 17:41:17.783717 12 replica.cpp:744] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
panteras_1 | mesos-master stderr | 2016-01-13 17:41:17,784:12(0x75851d42d700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
panteras_1 | 2016-01-13 17:41:17,784:12(0x75851d42d700):ZOO_INFO@log_env@716: Client environment:host.name=master-01.mydomain.com
panteras_1 | 2016-01-13 17:41:17,784:12(0x75851d42d700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
panteras_1 | 2016-01-13 17:41:17,784:12(0x75851d42d700):ZOO_INFO@log_env@724: Client environment:os.arch=3.14.32-xxxx-grs-ipv6-64
panteras_1 | 2016-01-13 17:41:17,784:12(0x75851d42d700):ZOO_INFO@log_env@725: Client environment:os.version=#5 SMP Wed Sep 9 17:24:34 CEST 2015
panteras_1 | 2016-01-13 17:41:17,784:12(0x75851d42d700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
panteras_1 | mesos-master stderr | I0113 17:41:17.784937 169 log.cpp:238] Attempting to join replica to ZooKeeper group
panteras_1 | 2016-01-13 17:41:17,785:12(0x75851b1f7700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
panteras_1 | 2016-01-13 17:41:17,785:12(0x75851b1f7700):ZOO_INFO@log_env@716: Client environment:host.name=master-01.mydomain.com
panteras_1 | 2016-01-13 17:41:17,785:12(0x75851b1f7700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
panteras_1 | 2016-01-13 17:41:17,785:12(0x75851b1f7700):ZOO_INFO@log_env@724: Client environment:os.arch=3.14.32-xxxx-grs-ipv6-64
panteras_1 | 2016-01-13 17:41:17,785:12(0x75851b1f7700):ZOO_INFO@log_env@725: Client environment:os.version=#5 SMP Wed Sep 9 17:24:34 CEST 2015
panteras_1 | 2016-01-13 17:41:17,785:12(0x75851b1f7700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
panteras_1 | mesos-master stderr | I0113 17:41:17.785684 156 recover.cpp:449] Starting replica recovery
panteras_1 | I0113 17:41:17.785979 156 recover.cpp:475] Replica is in EMPTY status
panteras_1 | mesos-master stderr | 2016-01-13 17:41:17,786:12(0x75851b1f7700):ZOO_INFO@log_env@741: Client environment:user.home=/root
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851b1f7700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851d42d700):ZOO_INFO@log_env@741: Client environment:user.home=/root
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851b1f7700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master-01.mydomain.com:2181 sessionTimeout=10000 watcher=0x758526f84600 sessionId=0 sessionPasswd=<null> context=0x758500001160 flags=0
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851a9ad700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851d42d700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851d42d700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master-01.mydomain.com:2181 sessionTimeout=10000 watcher=0x758526f84600 sessionId=0 sessionPasswd=<null> context=0x758508000a80 flags=0
panteras_1 | I0113 17:41:17.786701 12 main.cpp:465] Starting Mesos master
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851a9ad700):ZOO_INFO@log_env@716: Client environment:host.name=master-01.mydomain.com
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@716: Client environment:host.name=master-01.mydomain.com
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@724: Client environment:os.arch=3.14.32-xxxx-grs-ipv6-64
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@725: Client environment:os.version=#5 SMP Wed Sep 9 17:24:34 CEST 2015
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851a9ad700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851a9ad700):ZOO_INFO@log_env@724: Client environment:os.arch=3.14.32-xxxx-grs-ipv6-64
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851a9ad700):ZOO_INFO@log_env@725: Client environment:os.version=#5 SMP Wed Sep 9 17:24:34 CEST 2015
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
panteras_1 | 2016-01-13 17:41:17,786:12(0x75851a9ad700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
panteras_1 | mesos-master stderr | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@741: Client environment:user.home=/root
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt
panteras_1 | 2016-01-13 17:41:17,786:12(0x7585176be700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master-01.mydomain.com:2181 sessionTimeout=10000 watcher=0x758526f84600 sessionId=0 sessionPasswd=<null> context=0x7584e8000b40 flags=0
panteras_1 | 2016-01-13 17:41:17,787:12(0x75851a9ad700):ZOO_INFO@log_env@741: Client environment:user.home=/root
panteras_1 | 2016-01-13 17:41:17,787:12(0x75851a9ad700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt
panteras_1 | 2016-01-13 17:41:17,787:12(0x75851a9ad700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master-01.mydomain.com:2181 sessionTimeout=10000 watcher=0x758526f84600 sessionId=0 sessionPasswd=<null> context=0x7585080011d0 flags=0
panteras_1 | mesos-master stderr | I0113 17:41:17.787729 140 replica.cpp:641] Replica in EMPTY status received a broadcasted recover request
panteras_1 | I0113 17:41:17.787914 151 recover.cpp:195] Received a recover response from a replica in EMPTY status
panteras_1 | I0113 17:41:17.788108 156 recover.cpp:566] Updating replica status to STARTING
panteras_1 | mesos-master stderr | 2016-01-13 17:41:17,788:12(0x7585154eb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:17,788:12(0x758514c5b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:17,788:12(0x7584d25bc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:17,788:12(0x7585154eb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:17,788:12(0x7584d2e90700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:17,788:12(0x7584d2e90700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:17,788:12(0x758514c5b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:17,788:12(0x7584d25bc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-master stderr | I0113 17:41:17.790123 167 master.cpp:376] Master f7c57647-83e4-41fb-ab35-87ea0ec85310 (master-01.mydomain.com) started on 172.16.0.1:5050
panteras_1 | I0113 17:41:17.790146 167 master.cpp:378] Flags at startup: --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="false" --authenticate_slaves="false" --authenticators="crammd5" --authorizers="local" --cluster="mesoscluster" --framework_sorter="drf" --help="false" --hostname="master-01.mydomain.com" --hostname_lookup="true" --initialize_driver_logging="true" --ip="172.16.0.1" --log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" --max_slave_ping_timeouts="5" --port="5050" --quiet="false" --quorum="1" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="5secs" --registry_strict="false" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/var/lib/mesos" --zk="zk://master-01.mydomain.com:2181/mesos" --zk_session_timeout="10secs"
panteras_1 | mesos-master stderr | I0113 17:41:17.790477 167 master.cpp:425] Master allowing unauthenticated frameworks to register
panteras_1 | I0113 17:41:17.790485 167 master.cpp:430] Master allowing unauthenticated slaves to register
panteras_1 | I0113 17:41:17.790500 167 master.cpp:467] Using default 'crammd5' authenticator
panteras_1 | W0113 17:41:17.790616 167 authenticator.cpp:505] No credentials provided, authentication requests will be refused
panteras_1 | I0113 17:41:17.790628 167 authenticator.cpp:512] Initializing server SASL
panteras_1 | mesos-master stderr | I0113 17:41:17.927023 151 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 138.717617ms
panteras_1 | I0113 17:41:17.927079 151 replica.cpp:323] Persisted replica status to STARTING
panteras_1 | mesos-master stderr | I0113 17:41:17.927211 163 recover.cpp:475] Replica is in STARTING status
panteras_1 | mesos-master stderr | I0113 17:41:17.927666 153 replica.cpp:641] Replica in STARTING status received a broadcasted recover request
panteras_1 | I0113 17:41:17.927819 168 recover.cpp:195] Received a recover response from a replica in STARTING status
panteras_1 | mesos-master stderr | I0113 17:41:17.928025 146 recover.cpp:566] Updating replica status to VOTING
panteras_1 | mesos-master stderr | I0113 17:41:17.991933 163 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 63.703358ms
panteras_1 | I0113 17:41:17.991989 163 replica.cpp:323] Persisted replica status to VOTING
panteras_1 | mesos-master stderr | I0113 17:41:17.992045 169 recover.cpp:580] Successfully joined the Paxos group
panteras_1 | I0113 17:41:17.992144 169 recover.cpp:464] Recover process terminated
panteras_1 | mesos-master stderr | I0113 17:41:18.001402 168 contender.cpp:149] Joining the ZK group
panteras_1 | mesos-slave stderr | I0113 17:41:18.140561 19 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix
panteras_1 | mesos-slave stderr | I0113 17:41:18.155714 19 linux_launcher.cpp:103] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
panteras_1 | mesos-slave stderr | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
panteras_1 | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@716: Client environment:host.name=master-01.mydomain.com
panteras_1 | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
panteras_1 | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@724: Client environment:os.arch=3.14.32-xxxx-grs-ipv6-64
panteras_1 | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@725: Client environment:os.version=#5 SMP Wed Sep 9 17:24:34 CEST 2015
panteras_1 | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
panteras_1 | mesos-slave stderr | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@741: Client environment:user.home=/root
panteras_1 | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt
panteras_1 | 2016-01-13 17:41:18,159:19(0x729f16934700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master-01.mydomain.com:2181 sessionTimeout=10000 watcher=0x729f22d48600 sessionId=0 sessionPasswd=<null> context=0x729efc000fb0 flags=0
panteras_1 | I0113 17:41:18.159775 19 main.cpp:272] Starting Mesos slave
panteras_1 | I0113 17:41:18.160207 154 slave.cpp:190] Slave started on 1)@172.16.0.1:5051
panteras_1 | mesos-slave stderr | I0113 17:41:18.160223 154 slave.cpp:191] Flags at startup: --appc_store_dir="/tmp/mesos/store/appc" --authenticatee="crammd5" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="docker,mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_remove_delay="6hrs" --docker_socket="/tmp/docker.sock" --docker_stop_timeout="5secs" --enforce_container_disk_quota="false" --executor_registration_timeout="5mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1days" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname="master-01.mydomain.com" --hostname_lookup="true" --image_provisioner_backend="copy" --initialize_driver_logging="true" --ip="172.16.0.1" --isolation="posix/cpu,posix/mem" --launcher_dir="/usr/libexec/mesos" --logbufsecs="0" --logging_level="INFO" --master="zk://master-01.mydomain.com:2181/mesos" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --resource_monitoring_interval="1secs" --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/tmp/mesos"
panteras_1 | mesos-slave stderr | 2016-01-13 17:41:18,160:19(0x729f11ceb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | I0113 17:41:18.160823 154 slave.cpp:354] Slave resources: cpus(*):16; mem(*):31104; disk(*):14436; ports(*):[31000-32000]
panteras_1 | 2016-01-13 17:41:18,160:19(0x729f11ceb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | I0113 17:41:18.160853 154 slave.cpp:390] Slave hostname: master-01.mydomain.com
panteras_1 | I0113 17:41:18.160857 154 slave.cpp:395] Slave checkpoint: true
panteras_1 | mesos-slave stderr | I0113 17:41:18.163228 148 state.cpp:54] Recovering state from '/tmp/mesos/meta'
panteras_1 | mesos-slave stderr | I0113 17:41:18.163278 148 state.cpp:690] No checkpointed resources found at '/tmp/mesos/meta/resources/resources.info'
panteras_1 | I0113 17:41:18.163391 148 state.cpp:97] Failed to find the latest slave from '/tmp/mesos/meta'
panteras_1 | I0113 17:41:18.163543 141 status_update_manager.cpp:202] Recovering status update manager
panteras_1 | mesos-slave stderr | I0113 17:41:18.163727 157 docker.cpp:535] Recovering Docker containers
panteras_1 | I0113 17:41:18.163794 160 containerizer.cpp:386] Recovering containerizer
panteras_1 | mesos-slave stderr | I0113 17:41:18.165714 159 slave.cpp:4110] Finished recovery
panteras_1 | chronos stdout | [2016-01-13 17:41:18,306] INFO --------------------- (org.apache.mesos.chronos.scheduler.Main$:26)
panteras_1 | chronos stdout | [2016-01-13 17:41:18,308] INFO Initializing chronos. (org.apache.mesos.chronos.scheduler.Main$:27)
panteras_1 | chronos stdout | [2016-01-13 17:41:18,310] INFO --------------------- (org.apache.mesos.chronos.scheduler.Main$:28)
panteras_1 | 2016-01-13 17:41:18,693 INFO spawned: 'registrator' with pid 253
panteras_1 | marathon stdout | [2016-01-13 17:41:18,688] INFO Starting Marathon 0.13.0 with --master zk://master-01.mydomain.com:2181/mesos --zk zk://master-01.mydomain.com:2181/marathon --hostname master-01.mydomain.com --http_address 172.16.0.1 --https_address 172.16.0.1 (mesosphere.marathon.Main$:main)
panteras_1 | registrator stderr | 2016/01/13 17:41:18 Starting registrator ...
panteras_1 | 2016/01/13 17:41:18 Forcing host IP to 159.203.300.283
panteras_1 | registrator stderr | 2016/01/13 17:41:18 consul: Get http://159.203.300.283:8500/v1/status/leader: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | 2016-01-13 17:41:18,710 INFO exited: registrator (exit status 1; not expected)
panteras_1 | consul stdout | 2016/01/13 17:41:18 [WARN] raft: Heartbeat timeout reached, starting election
panteras_1 | 2016/01/13 17:41:18 [INFO] raft: Node at 159.203.300.283:8300 [Candidate] entering Candidate state
panteras_1 | consul stdout | 2016/01/13 17:41:19 [INFO] raft: Election won. Tally: 1
panteras_1 | 2016/01/13 17:41:19 [INFO] raft: Node at 159.203.300.283:8300 [Leader] entering Leader state
panteras_1 | consul stdout | 2016/01/13 17:41:19 [INFO] consul: cluster leadership acquired
panteras_1 | 2016/01/13 17:41:19 [INFO] consul: New leader elected: master-01.mydomain.com
panteras_1 | consul stdout | 2016/01/13 17:41:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
panteras_1 | consul stdout | 2016/01/13 17:41:19 [INFO] consul: member 'master-01.mydomain.com' joined, marking health alive
panteras_1 | consul stdout | 2016/01/13 17:41:19 [INFO] agent: Synced service 'consul'
panteras_1 | 2016-01-13 17:41:20,818 INFO spawned: 'registrator' with pid 260
panteras_1 | registrator stderr | 2016/01/13 17:41:20 Starting registrator ...
panteras_1 | 2016/01/13 17:41:20 Forcing host IP to 159.203.300.283
panteras_1 | 2016-01-13 17:41:20,840 INFO exited: registrator (exit status 1; not expected)
panteras_1 | registrator stderr | 2016/01/13 17:41:20 consul: Get http://159.203.300.283:8500/v1/status/leader: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | marathon stdout | [2016-01-13 17:41:20,899] INFO Connecting to Zookeeper... (mesosphere.marathon.Main$:main)
panteras_1 | marathon stdout | [2016-01-13 17:41:20,912] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:host.name=master-01.mydomain.com (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:java.version=1.8.0_66 (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | marathon stdout | [2016-01-13 17:41:20,912] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:java.class.path=/usr/bin/marathon (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:java.library.path=/usr/local/lib:/usr/lib:/usr/lib64 (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:os.version=3.14.32-xxxx-grs-ipv6-64 (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | [2016-01-13 17:41:20,912] INFO Client environment:user.dir=/opt (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | marathon stdout | [2016-01-13 17:41:20,913] INFO Initiating client connection, connectString=master-01.mydomain.com:2181 sessionTimeout=10000 watcher=com.twitter.common.zookeeper.ZooKeeperClient$3@476aac9 (org.apache.zookeeper.ZooKeeper:main)
panteras_1 | marathon stdout | [2016-01-13 17:41:20,936] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:21,014] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | chronos stdout | [2016-01-13 17:41:21,081] INFO Wiring up the application (org.apache.mesos.chronos.scheduler.config.MainModule:39)
panteras_1 | marathon stdout | [2016-01-13 17:41:21,117] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:21,119] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | mesos-master stderr | 2016-01-13 17:41:21,125:12(0x7584d25bc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [200.202.200.183:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-master stderr | 2016-01-13 17:41:21,125:12(0x7584d25bc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:21,125:12(0x7585154eb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:21,125:12(0x758514c5b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:21,125:12(0x7585154eb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:21,125:12(0x758514c5b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-master stderr | 2016-01-13 17:41:21,125:12(0x7584d2e90700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [200.202.200.183:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-master stderr | 2016-01-13 17:41:21,126:12(0x7584d2e90700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | chronos stderr | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@716: Client environment:host.name=master-01.mydomain.com
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@724: Client environment:os.arch=3.14.32-xxxx-grs-ipv6-64
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@725: Client environment:os.version=#5 SMP Wed Sep 9 17:24:34 CEST 2015
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
panteras_1 | chronos stderr | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@741: Client environment:user.home=/root
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b769a700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master-01.mydomain.com:2181 sessionTimeout=10000 watcher=0x6ec3edbf4600 sessionId=0 sessionPasswd=<null> context=0x6ec384000960 flags=0
panteras_1 | chronos stderr | 2016-01-13 17:41:21,387:26(0x6ec3b65f7700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:21,387:26(0x6ec3b65f7700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | chronos stdout | [2016-01-13 17:41:21,471] INFO Starting (org.apache.curator.framework.imps.CuratorFrameworkImpl:230)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,480] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,480] INFO Client environment:host.name=master-01.mydomain.com (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,481] INFO Client environment:java.version=1.8.0_66 (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | [2016-01-13 17:41:21,481] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,481] INFO Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,481] INFO Client environment:java.class.path=/usr/bin/chronos (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,482] INFO Client environment:java.library.path=/usr/local/lib:/usr/lib64:/usr/lib (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,482] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | [2016-01-13 17:41:21,482] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,482] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,483] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,484] INFO Client environment:os.version=3.14.32-xxxx-grs-ipv6-64 (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,484] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,485] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | [2016-01-13 17:41:21,485] INFO Client environment:user.dir=/opt (org.apache.zookeeper.ZooKeeper:100)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,486] INFO Initiating client connection, connectString=master-01.mydomain.com:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@5965844d (org.apache.zookeeper.ZooKeeper:438)
panteras_1 | mesos-slave stderr | 2016-01-13 17:41:21,497:19(0x729f11ceb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-slave stderr | 2016-01-13 17:41:21,497:19(0x729f11ceb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | chronos stdout | [2016-01-13 17:41:21,503] INFO Connecting to ZK... (org.apache.mesos.chronos.scheduler.config.ZookeeperModule:40)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,505] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,555] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,658] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:21,659] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
panteras_1 | marathon stdout | [2016-01-13 17:41:22,220] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:22,221] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | marathon stdout | [2016-01-13 17:41:22,322] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:22,322] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | consul-template_haproxy stderr | 2016/01/13 17:41:22 [ERR] (view) "services" catalog services: error fetching: Get http://159.203.300.283:8500/v1/catalog/services?wait=60000ms: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | consul-template_haproxy stderr | 2016/01/13 17:41:22 [ERR] (runner) watcher reported error: catalog services: error fetching: Get http://159.203.300.283:8500/v1/catalog/services?wait=60000ms: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | chronos stdout | [2016-01-13 17:41:22,760] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:22,761] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
panteras_1 | chronos stdout | [2016-01-13 17:41:22,862] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:22,862] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
panteras_1 | marathon stdout | [2016-01-13 17:41:23,423] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:23,424] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | marathon stdout | [2016-01-13 17:41:23,525] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:23,526] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | 2016-01-13 17:41:23,966 INFO spawned: 'registrator' with pid 293
panteras_1 | chronos stdout | [2016-01-13 17:41:23,963] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:23,964] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
panteras_1 | registrator stderr | 2016/01/13 17:41:23 Starting registrator ...
panteras_1 | 2016/01/13 17:41:23 Forcing host IP to 159.203.300.283
panteras_1 | 2016-01-13 17:41:23,985 INFO exited: registrator (exit status 1; not expected)
panteras_1 | registrator stderr | 2016/01/13 17:41:23 consul: Get http://159.203.300.283:8500/v1/status/leader: dial tcp 159.203.300.283:8500: connection refused
panteras_1 | chronos stdout | [2016-01-13 17:41:24,065] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:24,066] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
panteras_1 | mesos-master stderr | 2016-01-13 17:41:24,461:12(0x7584d25bc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:24,461:12(0x7585154eb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:24,461:12(0x758514c5b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:24,461:12(0x7584d25bc700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-master stderr | 2016-01-13 17:41:24,462:12(0x758514c5b700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:24,462:12(0x7585154eb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-master stderr | 2016-01-13 17:41:24,462:12(0x7584d2e90700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | 2016-01-13 17:41:24,463:12(0x7584d2e90700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | marathon stdout | [2016-01-13 17:41:24,627] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:24,628] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | chronos stderr | 2016-01-13 17:41:24,724:26(0x6ec3b65f7700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | chronos stderr | 2016-01-13 17:41:24,724:26(0x6ec3b65f7700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | marathon stdout | [2016-01-13 17:41:24,729] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | marathon stdout | [2016-01-13 17:41:24,730] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(master-01.mydomain.com:2181))
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_66]
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_66]
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[marathon:0.13.0]
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[marathon:0.13.0]
panteras_1 | mesos-slave stderr | 2016-01-13 17:41:24,833:19(0x729f11ceb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [2001:41d0:1000:8b7:::2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | mesos-slave stderr | 2016-01-13 17:41:24,833:19(0x729f11ceb700):ZOO_ERROR@handle_socket_error_msg@1697: Socket [159.203.300.283:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
panteras_1 | chronos stdout | [2016-01-13 17:41:25,167] INFO Opening socket connection to server master-01.mydomain.com/159.203.300.283:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:25,168] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
panteras_1 | java.net.ConnectException: Connection refused
panteras_1 | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
panteras_1 | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
panteras_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
panteras_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
panteras_1 | chronos stdout | [2016-01-13 17:41:25,269] INFO Opening socket connection to server master-01.mydomain.com/2001:41d0:1000:8b7:0:0:0:0:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:975)
panteras_1 | chronos stdout | [2016-01-13 17:41:25,270] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1102)
Any idea why zookeeper refuses the connection?
I also tried to disable chronos with START_CHRONOS="false",
but it didn't work.
I have created a mesos PR
https://issues.apache.org/jira/browse/MESOS-3377
https://reviews.apache.org/r/38165/
to provide MESOS_CONTAINER_NAME.
With the new mesos it is possible to talk via the docker socket (not via a binary or wrapper),
so the cleaner, more elegant, and desired way is to get it as an ENV variable.
Once it is committed, to do:
adapt start.sh for MESOS_CONTAINER_NAME
Hi,
Just a question regarding haproxy....
How am I able to access haproxy on port 81
when I don't see it listening on my host?
[root@c01nhvd613 /]# netstat -ano | grep LISTEN
tcp 0 0 0.0.0.0:8550 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:8551 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:8660 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:8661 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:53 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 0.0.0.0:5051 0.0.0.0:* LISTEN off (0.00/0/0)
tcp6 0 0 :::8301 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::8400 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::8500 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::53 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::22 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::8600 :::* LISTEN off (0.00/0/0)
[root@c01nhvd613 /]# netstat -ano | grep 81
tcp 0 0 10.7.112.106:36332 10.7.112.105:2181 ESTABLISHED off (0.00/0/0)
I don't see port 81 listening, yet I am able to connect to the web UI. How?
Hello,
I love this project - thank you for your work.
I am running into an issue. If I run a standalone container, I notice the Marathon and Mesos services get registered in Consul without any problems.
However, if I run a master/slave configuration (3 masters, 3 slaves), the services don't appear in Consul. The only ones I see are "consul" and "consul-ui".
My docker-compose.yml contains the following (on all 3 of my masters), so I'm not sure why it doesn't work:
SERVICE_8500_NAME: consul-ui
SERVICE_8500_TAGS: haproxy
SERVICE_8500_CHECK_HTTP: /v1/status/leader
SERVICE_8080_NAME: marathon
SERVICE_8080_TAGS: haproxy
SERVICE_8080_CHECK_HTTP: /v2/leader
SERVICE_5050_NAME: mesos
SERVICE_5050_TAGS: haproxy
SERVICE_5050_CHECK_HTTP: /master/health
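For comparison, this is roughly where those variables would sit in a generated docker-compose.yml so that registrator can pick them up (a sketch only — the service name, image, and `net: host` / `privileged` settings here are assumptions, not copied from your generated file):

```yaml
# Hypothetical excerpt of a generated docker-compose.yml.
# registrator reads the SERVICE_<port>_* variables from the container
# environment and registers one consul service per matching port.
panteras:
  image: panteras/paas-in-a-box
  net: host
  privileged: true
  environment:
    SERVICE_8500_NAME: consul-ui
    SERVICE_8500_TAGS: haproxy
    SERVICE_8500_CHECK_HTTP: /v1/status/leader
    SERVICE_8080_NAME: marathon
    SERVICE_8080_TAGS: haproxy
    SERVICE_8080_CHECK_HTTP: /v2/leader
    SERVICE_5050_NAME: mesos
    SERVICE_5050_TAGS: haproxy
    SERVICE_5050_CHECK_HTTP: /master/health
```

If the variables are defined but the services still don't appear, it is worth checking that registrator on each node can actually reach its local consul agent.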
Any ideas?
Hello,
First, thanks for open-sourcing your work for an easier setup of a mesos-marathon cluster!
Just to let you know: when launching the demo examples, we get the following in the docker logs:
" Service tag "weight=1" will not be discoverable via DNS due to invalid characters. " from deploy1_marathon.json
"env": {
"SERVICE_TAGS" : "webapp,weight=1",
...
},
It is related to hashicorp/consul#683, so maybe you want to make some changes.
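One possible workaround (a sketch only, not a tested fix) would be to use a DNS-safe tag value, e.g. replacing the `=` with a `-`:

```json
"env": {
    "SERVICE_TAGS" : "webapp,weight-1",
    ...
},
```

Note, however, that the haproxy template may parse the `weight=` form of the tag, so the template logic would have to be adjusted to match before changing the tag format.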
Then there are two more warnings:
consul-template_haproxy stderr | [WARNING] 139/123310 (4528) : consul-template_haproxy stderr | config : log format ignored for proxy 'stats' since it has no log address.
consul-template_haproxy stderr | [WARNING] 139/123310 (4528) : consul-template_haproxy stderr | config : log format ignored for frontend 'http-in' since it has no log address.
but I could not figure out how to get rid of those.
Cheers
Hi,
I had a question about how DNS should be configured in a 3 masters + 3 slaves environment.
Should all of the slaves be configured to use the master nodes as their primary DNS servers in /etc/resolv.conf? I.e.:
nameserver 10.0.0.1
nameserver 10.0.0.2
nameserver 10.0.0.3
nameserver <internal corp DNS>
I am not sure this is a good idea because all traffic (even non consul-related DNS) will get routed through the masters. Won't it?
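An alternative sketch (my assumption, based on PanteraS starting dnsmasq on every node and forwarding only the consul zone to Consul's DNS port): point each node at its own local dnsmasq, so only *.consul queries ever reach the cluster:

```
# /etc/resolv.conf on each node (sketch)
nameserver 127.0.0.1           # local dnsmasq forwards /consul/ to Consul on :8600
nameserver <internal corp DNS> # fallback; everything else goes straight to corp DNS
```

That way non-Consul traffic never has to transit the masters.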
I just created the issue hashicorp/consul#1032 for 'consul', but it might also be related to an issue in 'PanteraS', therefore I cross-reference it from here as well.
The issue is that a DNS query for any service (registered/existing or not) via "service-to-discover.service.consul" always returns the IP address of the current Consul master. For more details, see the referenced Consul issue hashicorp/consul#1032.
I would like to get some feedback about best practices for doing a rolling update of a PanteraS cluster.
Right now I am doing something like this: iterating over all cluster nodes (with Ansible) and doing these steps on each node before moving on to the next node (using serial: 1 in the Ansible playbook):
1. Regenerate the panteras configuration for docker-compose via: ./generate_yml.sh
2. Stop the panteras container via: docker-compose stop
3. Remove the panteras container via: docker-compose rm --force
4. Stop any remaining containers: docker stop $(docker ps -q) || true
5. Remove the mesos stuff from the /tmp folder: sudo rm -rf /tmp/mesos
6. Start the panteras container again via: docker-compose up -d
7. On a master node, wait a couple of seconds in order to let the (re)election process for the rejoined master node happen.

Does that look reasonable?
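A minimal Ansible sketch of that loop (the inventory group, chdir path, and pause length are assumptions on my part, not PanteraS conventions):

```yaml
# rolling-update.yml (illustrative only)
- hosts: panteras_nodes
  serial: 1            # one node at a time
  become: true
  tasks:
    - name: regenerate docker-compose config
      command: ./generate_yml.sh
      args:
        chdir: /opt/panteras
    - name: stop and remove the panteras container
      command: "{{ item }}"
      args:
        chdir: /opt/panteras
      with_items:
        - docker-compose stop
        - docker-compose rm --force
    - name: stop any remaining containers
      shell: docker stop $(docker ps -q) || true
    - name: remove mesos work dir
      file:
        path: /tmp/mesos
        state: absent
    - name: start panteras again
      command: docker-compose up -d
      args:
        chdir: /opt/panteras
    - name: give masters time to re-elect
      pause:
        seconds: 15
```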
Hello everyone, and thank you for sharing this project. It is very similar to what I have had in place for several months, though I made some different choices that I would like to share with you:
Thanks again; for my part, I admit I adopted your tag-based template generation logic (my template was previously maintained by hand ...)
Hi sielaq, it's me again. I hope you don't mind that I like to play with your bundle; I looked at your openvpn plugin to see how it can be used.
- "/etc/ssl/certs/:/etc/ssl/certs/"' && \
$B2D [ -f /etc/nsswitch.conf ] 2>/dev/null && OPENVPN_VOL=${OPENVPN_VOL}'
- "/etc/nsswitch.conf:/etc/nsswitch.conf"' && \
$B2D [ -f /etc/nslcd.conf ] 2>/dev/null && OPENVPN_VOL=${OPENVPN_VOL}'
- "/etc/nslcd.conf:/etc/nslcd.conf"'
What are /etc/nsswitch.conf and /etc/nslcd.conf used for? Would you share some examples or improve the documentation? I am used to having only one .ovpn file per server.
First of all, thank you for sharing your work.
I was trying the PaaS, then I saw you also provide everything to build my own custom PanteraS image using the script ./build-docker-images.sh.
Unfortunately, this script isn't working for me:
Step 47 : RUN gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv E56151BF && gpg --export --armor E56151BF | apt-key add -
---> Using cache
---> a53961c278ef
Step 48 : RUN apt-get update && apt-get -y install mesos=${MESOS_APP_VERSION} marathon=${MARATHON_APP_VERSION}
---> Running in 13ca22ad772a
Get:1 http://ppa.launchpad.net trusty InRelease [15.5 kB]
Ign http://archive.ubuntu.com trusty InRelease
Get:2 http://archive.ubuntu.com trusty-updates InRelease [64.4 kB]
Get:3 http://archive.ubuntu.com trusty-backports InRelease [64.5 kB]
Get:4 http://archive.ubuntu.com trusty-security InRelease [64.4 kB]
Get:5 http://repos.mesosphere.io trusty InRelease [3,137 B]
Get:6 http://ppa.launchpad.net trusty/main amd64 Packages [3,086 B]
Get:7 http://archive.ubuntu.com trusty Release.gpg [933 B]
Get:8 http://archive.ubuntu.com trusty-updates/main Sources [306 kB]
Get:9 http://repos.mesosphere.io trusty/main amd64 Packages [5,982 B]
Get:10 http://archive.ubuntu.com trusty-updates/restricted Sources [4,513 B]
Get:11 http://archive.ubuntu.com trusty-updates/universe Sources [180 kB]
Get:12 http://archive.ubuntu.com trusty-updates/main amd64 Packages [819 kB]
Get:13 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [22.7 kB]
Get:14 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [425 kB]
Get:15 http://archive.ubuntu.com trusty Release [58.5 kB]
Get:16 http://archive.ubuntu.com trusty-backports/main Sources [7,937 B]
Get:17 http://archive.ubuntu.com trusty-backports/restricted Sources [40 B]
Get:18 http://archive.ubuntu.com trusty-backports/main amd64 Packages [9,690 B]
Get:19 http://archive.ubuntu.com trusty-backports/restricted amd64 Packages [40 B]
Get:20 http://archive.ubuntu.com trusty-security/main Sources [125 kB]
Get:21 http://archive.ubuntu.com trusty-security/restricted Sources [3,230 B]
Get:22 http://archive.ubuntu.com trusty-security/universe Sources [36.0 kB]
Get:23 http://archive.ubuntu.com trusty-security/main amd64 Packages [463 kB]
Get:24 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [19.4 kB]
Get:25 http://archive.ubuntu.com trusty-security/universe amd64 Packages [155 kB]
Get:26 http://archive.ubuntu.com trusty/main Sources [1,335 kB]
Get:27 http://archive.ubuntu.com trusty/restricted Sources [5,335 B]
Get:28 http://archive.ubuntu.com trusty/universe Sources [7,926 kB]
Get:29 http://archive.ubuntu.com trusty/main amd64 Packages [1,743 kB]
Get:30 http://archive.ubuntu.com trusty/restricted amd64 Packages [16.0 kB]
Get:31 http://archive.ubuntu.com trusty/universe amd64 Packages [7,589 kB]
Fetched 21.5 MB in 2min 6s (170 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
libapr1 libaprutil1 libjline-java liblog4cxx10 liblog4j1.2-java
libnetty-java libserf-1-1 libservlet2.5-java libslf4j-java libsvn1
libxerces2-java libxml-commons-external-java libxml-commons-resolver1.1-java
libzookeeper-java libzookeeper-java-doc libzookeeper-mt2 zookeeper
zookeeper-bin zookeeperd
Suggested packages:
libjline-java-doc liblog4j1.2-java-doc libgnumail-java
libcommons-logging-java libxerces2-java-doc libxerces2-java-gcj
libxml-commons-resolver1.1-java-doc
The following NEW packages will be installed:
libapr1 libaprutil1 libjline-java liblog4cxx10 liblog4j1.2-java
libnetty-java libserf-1-1 libservlet2.5-java libslf4j-java libsvn1
libxerces2-java libxml-commons-external-java libxml-commons-resolver1.1-java
libzookeeper-java libzookeeper-java-doc libzookeeper-mt2 marathon mesos
zookeeper zookeeper-bin zookeeperd
0 upgraded, 21 newly installed, 0 to remove and 0 not upgraded.
Need to get 94.1 MB of archives.
After this operation, 169 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/main libapr1 amd64 1.5.0-1 [85.1 kB]
Get:2 http://repos.mesosphere.io/ubuntu/ trusty/main marathon amd64 0.11.1-1.0.432.ubuntu1404 [59.1 MB]
Get:3 http://archive.ubuntu.com/ubuntu/ trusty/main libaprutil1 amd64 1.5.3-1 [76.4 kB]
Get:4 http://archive.ubuntu.com/ubuntu/ trusty-updates/main libserf-1-1 amd64 1.3.3-1ubuntu0.1 [42.2 kB]
Get:5 http://archive.ubuntu.com/ubuntu/ trusty-updates/main libsvn1 amd64 1.8.8-1ubuntu3.2 [916 kB]
Get:6 http://archive.ubuntu.com/ubuntu/ trusty/universe libzookeeper-mt2 amd64 3.4.5+dfsg-1 [50.2 kB]
Get:7 http://archive.ubuntu.com/ubuntu/ trusty/main libjline-java all 1.0-2 [69.4 kB]
Get:8 http://archive.ubuntu.com/ubuntu/ trusty/universe liblog4cxx10 amd64 0.10.0-1.2ubuntu3 [554 kB]
Get:9 http://archive.ubuntu.com/ubuntu/ trusty/main liblog4j1.2-java all 1.2.17-4ubuntu3 [383 kB]
Get:10 http://archive.ubuntu.com/ubuntu/ trusty/universe libnetty-java all 1:3.2.6.Final-2 [671 kB]
Get:11 http://archive.ubuntu.com/ubuntu/ trusty/universe libservlet2.5-java all 6.0.39-1 [209 kB]
Get:12 http://archive.ubuntu.com/ubuntu/ trusty/universe libslf4j-java all 1.7.5-2 [110 kB]
Get:13 http://archive.ubuntu.com/ubuntu/ trusty/main libxml-commons-resolver1.1-java all 1.2-7build1 [91.6 kB]
Get:14 http://archive.ubuntu.com/ubuntu/ trusty/main libxml-commons-external-java all 1.4.01-2build1 [245 kB]
Get:15 http://archive.ubuntu.com/ubuntu/ trusty/main libxerces2-java all 2.11.0-7 [1,362 kB]
Get:16 http://repos.mesosphere.io/ubuntu/ trusty/main mesos amd64 0.25.0-0.2.70.ubuntu1404 [28.0 MB]
Get:17 http://archive.ubuntu.com/ubuntu/ trusty/universe libzookeeper-java all 3.4.5+dfsg-1 [1,237 kB]
Get:18 http://archive.ubuntu.com/ubuntu/ trusty/universe libzookeeper-java-doc all 3.4.5+dfsg-1 [682 kB]
Get:19 http://archive.ubuntu.com/ubuntu/ trusty/universe zookeeper all 3.4.5+dfsg-1 [109 kB]
Get:20 http://archive.ubuntu.com/ubuntu/ trusty/universe zookeeper-bin amd64 3.4.5+dfsg-1 [64.6 kB]
Get:21 http://archive.ubuntu.com/ubuntu/ trusty/universe zookeeperd all 3.4.5+dfsg-1 [8,812 B]
Fetched 94.1 MB in 44s (2,092 kB/s)
Selecting previously unselected package libapr1:amd64.
(Reading database ... 23413 files and directories currently installed.)
Preparing to unpack .../libapr1_1.5.0-1_amd64.deb ...
Unpacking libapr1:amd64 (1.5.0-1) ...
Selecting previously unselected package libaprutil1:amd64.
Preparing to unpack .../libaprutil1_1.5.3-1_amd64.deb ...
Unpacking libaprutil1:amd64 (1.5.3-1) ...
Selecting previously unselected package libserf-1-1:amd64.
Preparing to unpack .../libserf-1-1_1.3.3-1ubuntu0.1_amd64.deb ...
Unpacking libserf-1-1:amd64 (1.3.3-1ubuntu0.1) ...
Selecting previously unselected package libsvn1:amd64.
Preparing to unpack .../libsvn1_1.8.8-1ubuntu3.2_amd64.deb ...
Unpacking libsvn1:amd64 (1.8.8-1ubuntu3.2) ...
Selecting previously unselected package libzookeeper-mt2:amd64.
Preparing to unpack .../libzookeeper-mt2_3.4.5+dfsg-1_amd64.deb ...
Unpacking libzookeeper-mt2:amd64 (3.4.5+dfsg-1) ...
Selecting previously unselected package libjline-java.
Preparing to unpack .../libjline-java_1.0-2_all.deb ...
Unpacking libjline-java (1.0-2) ...
Selecting previously unselected package liblog4cxx10.
Preparing to unpack .../liblog4cxx10_0.10.0-1.2ubuntu3_amd64.deb ...
Unpacking liblog4cxx10 (0.10.0-1.2ubuntu3) ...
Selecting previously unselected package liblog4j1.2-java.
Preparing to unpack .../liblog4j1.2-java_1.2.17-4ubuntu3_all.deb ...
Unpacking liblog4j1.2-java (1.2.17-4ubuntu3) ...
Selecting previously unselected package libnetty-java.
Preparing to unpack .../libnetty-java_1%3a3.2.6.Final-2_all.deb ...
Unpacking libnetty-java (1:3.2.6.Final-2) ...
Selecting previously unselected package libservlet2.5-java.
Preparing to unpack .../libservlet2.5-java_6.0.39-1_all.deb ...
Unpacking libservlet2.5-java (6.0.39-1) ...
Selecting previously unselected package libslf4j-java.
Preparing to unpack .../libslf4j-java_1.7.5-2_all.deb ...
Unpacking libslf4j-java (1.7.5-2) ...
Selecting previously unselected package libxml-commons-resolver1.1-java.
Preparing to unpack .../libxml-commons-resolver1.1-java_1.2-7build1_all.deb ...
Unpacking libxml-commons-resolver1.1-java (1.2-7build1) ...
Selecting previously unselected package libxml-commons-external-java.
Preparing to unpack .../libxml-commons-external-java_1.4.01-2build1_all.deb ...
Unpacking libxml-commons-external-java (1.4.01-2build1) ...
Selecting previously unselected package libxerces2-java.
Preparing to unpack .../libxerces2-java_2.11.0-7_all.deb ...
Unpacking libxerces2-java (2.11.0-7) ...
Selecting previously unselected package libzookeeper-java.
Preparing to unpack .../libzookeeper-java_3.4.5+dfsg-1_all.deb ...
Unpacking libzookeeper-java (3.4.5+dfsg-1) ...
Selecting previously unselected package libzookeeper-java-doc.
Preparing to unpack .../libzookeeper-java-doc_3.4.5+dfsg-1_all.deb ...
Unpacking libzookeeper-java-doc (3.4.5+dfsg-1) ...
Selecting previously unselected package zookeeper.
Preparing to unpack .../zookeeper_3.4.5+dfsg-1_all.deb ...
Unpacking zookeeper (3.4.5+dfsg-1) ...
Selecting previously unselected package zookeeper-bin.
Preparing to unpack .../zookeeper-bin_3.4.5+dfsg-1_amd64.deb ...
Unpacking zookeeper-bin (3.4.5+dfsg-1) ...
Selecting previously unselected package zookeeperd.
Preparing to unpack .../zookeeperd_3.4.5+dfsg-1_all.deb ...
Unpacking zookeeperd (3.4.5+dfsg-1) ...
Selecting previously unselected package marathon.
Preparing to unpack .../marathon_0.11.1-1.0.432.ubuntu1404_amd64.deb ...
Unpacking marathon (0.11.1-1.0.432.ubuntu1404) ...
Selecting previously unselected package mesos.
Preparing to unpack .../mesos_0.25.0-0.2.70.ubuntu1404_amd64.deb ...
Unpacking mesos (0.25.0-0.2.70.ubuntu1404) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up libapr1:amd64 (1.5.0-1) ...
Setting up libaprutil1:amd64 (1.5.3-1) ...
Setting up libserf-1-1:amd64 (1.3.3-1ubuntu0.1) ...
Setting up libsvn1:amd64 (1.8.8-1ubuntu3.2) ...
Setting up libzookeeper-mt2:amd64 (3.4.5+dfsg-1) ...
Setting up libjline-java (1.0-2) ...
Setting up liblog4cxx10 (0.10.0-1.2ubuntu3) ...
Setting up liblog4j1.2-java (1.2.17-4ubuntu3) ...
Setting up libnetty-java (1:3.2.6.Final-2) ...
Setting up libservlet2.5-java (6.0.39-1) ...
Setting up libslf4j-java (1.7.5-2) ...
Setting up libxml-commons-resolver1.1-java (1.2-7build1) ...
Setting up libxml-commons-external-java (1.4.01-2build1) ...
Setting up libxerces2-java (2.11.0-7) ...
Setting up libzookeeper-java (3.4.5+dfsg-1) ...
Setting up libzookeeper-java-doc (3.4.5+dfsg-1) ...
Setting up zookeeper (3.4.5+dfsg-1) ...
/usr/bin/chfn: /lib/x86_64-linux-gnu/libpam_misc.so.0: version `LIBPAM_MISC_1.0' not found (required by /usr/bin/chfn)
adduser: `/usr/bin/chfn -f ZooKeeper zookeeper' returned error code 1. Exiting.
dpkg: error processing package zookeeper (--configure):
subprocess installed post-installation script returned error exit status 1
Setting up zookeeper-bin (3.4.5+dfsg-1) ...
dpkg: dependency problems prevent configuration of zookeeperd:
zookeeperd depends on zookeeper (= 3.4.5+dfsg-1); however:
Package zookeeper is not configured yet.
dpkg: error processing package zookeeperd (--configure):
dependency problems - leaving unconfigured
Setting up marathon (0.11.1-1.0.432.ubuntu1404) ...
Setting up mesos (0.25.0-0.2.70.ubuntu1404) ...
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for ureadahead (0.100.0-16) ...
Errors were encountered while processing:
zookeeper
zookeeperd
E: Sub-process /usr/bin/dpkg returned an error code (1)
The command '/bin/sh -c apt-get update && apt-get -y install mesos=${MESOS_APP_VERSION} marathon=${MARATHON_APP_VERSION}' returned a non-zero code: 100
ERROR DURING BUILDING IMAGE
We have a Marathon app (my-service) which exposes four ports as follows:
"portMappings":[
{
"containerPort":80,
"hostPort":0,
"servicePort":0,
"protocol":"tcp"
},
{
"containerPort":2003,
"hostPort":0,
"servicePort":0,
"protocol":"tcp"
},
{
"containerPort":8125,
"hostPort":0,
"servicePort":0,
"protocol":"udp"
},
{
"containerPort":8126,
"hostPort":0,
"servicePort":0,
"protocol":"tcp"
}
]
When deployed we see four separate service entries in Consul, with a 'udp' tag on the service corresponding to containerPort 8125.
Currently if we use:
listen my-service :49071
mode tcp
option tcplog
balance leastconn {{range service "my-service"}}
server {{.Node}} {{.Address}}:{{.Port}}{{end}}
we get four server entries, one for each of the four ports (or services).
Is it possible to use Consul Template to extract just the address of the service corresponding to containerPort 80? Presumably this would require the ability to add a tag to a specific port, whereas SERVICE_TAGS apply to all services corresponding to the Marathon app?
Any guidance much appreciated. Many thanks.
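One possible approach (hedged: this relies on registrator's per-port SERVICE_&lt;port&gt;_* variables and consul-template's tag-filtered "tag.name" service lookup, neither of which I have verified against this exact setup): tag only the port-80 service, then range over that tag:

```
# Marathon env: per-port tag for containerPort 80 only (sketch)
"env": {
  "SERVICE_80_TAGS": "http"
}

# consul-template: only instances of my-service tagged "http"
listen my-service :49071
  mode tcp
  option tcplog
  balance leastconn {{range service "http.my-service"}}
  server {{.Node}} {{.Address}}:{{.Port}}{{end}}
```

The plain SERVICE_TAGS variable applies to every port of the container, which is why all four services currently look alike.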
Hi,
Is there a way to run PanteraS without bind mounting /var/run/docker.sock
? As I'm sure you're aware, this basically gives the PanteraS container full privileges on the host and could open risks for multi-tenant environments. Do you agree? Would it be possible to break out of the PanteraS container by using a spawned container running right next to it?
I am trying to deploy some data containers using the PanteraS image.
Since the data container ID needs to be set for a later mount, I was thinking of two different possibilities with Marathon: a task that issues the docker run command and removes itself when finished, or one that simply wraps the docker run command. Is it possible with Marathon to run docker run properly?
Would you veto adding this option redispatch
to the HAProxy
config in order to be able to redispatch requests to other upstream servers in case of connection failures?
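For reference, a sketch of the relevant directives (the retries count is illustrative); option redispatch lets HAProxy retry a failed connection on another server instead of giving up:

```
defaults
  mode http
  retries 3
  option redispatch   # re-dispatch to another server on connection failure
```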
What is the best use of haproxy_tcp?
If I have a database and haproxy_tcp set to 3306, does it mean I can access it via port 80 from any host?
When does version 0.2.0 release? I'm very excited about it.
The IP should be the external IP of the host, so the proper docker-compose.yml.tpl file should have this line:
--ip=${IP} \
instead of
--ip=0.0.0.0 \
Otherwise, you'll have 0 slave nodes in Mesos.
Tell me if I am wrong, but updating the IP for MESOS_MASTER_PARAMS and MESOS_SLAVE_PARAMS worked for me.
Is there a reason why registrator doesn't start when MASTER=true SLAVE=false?
If I change it to start, along with the exposed ports, I am able to use Consul for Marathon and Chronos tasks.
If there is no specific reason for it, I suggest we change it to start so we can take advantage of consul lookups for marathon and chronos. I've got the change made in my local instance and can open a pull request.
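For context, the change amounts to enabling the registrator block on master-only nodes as well (a sketch; keys follow the generated docker-compose.yml shown elsewhere in this thread, and ${IP} is the host's external IP):

```yaml
# sketch: enable registrator on master-only nodes too
START_REGISTRATOR: "true"
REGISTRATOR_APP_PARAMS: "-ip=${IP} consul://${IP}:8500 "
```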
Why does docker.sock get mounted at /tmp and not at /var/run (docker-compose.yml.tpl)?
- "/var/run/docker.sock:/tmp/docker.sock"
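As far as I can tell, the target path simply pairs with the mesos-slave flag in the generated config, which points the slave at the mounted socket:

```yaml
# docker-compose.yml.tpl mount, and the matching generated slave flag
volumes:
  - "/var/run/docker.sock:/tmp/docker.sock"
# MESOS_SLAVE_APP_PARAMS: "... --docker_socket=/tmp/docker.sock ..."
```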
I am trying to set the listening interface of Chronos to make it private and not listen to 0.0.0.0
(default)
In the Chronos documentation, there is one parameter that could fit my needs, --http_address, but it can only be set to 0.0.0.0; otherwise it doesn't work.
This is also an unanswered issue on mesos/chronos: #393
As far as I know, it is not possible to set the listening interface of Chronos framework.
To use chronos securely :
START_CHRONOS=false
(1) is not possible; this is too costly right now 👎
(2) is the solution I have chosen, together with (4)
I have tried to follow the documentation but couldn't make it to work.
There is a --https_port option in Chronos. This is the only HTTPS-related option, so I guess it will also enable the HTTPS server.
Have you tried the built-in CRAM-MD5 Mesos authentication mechanism? Does it fit well with PanteraS and HTTPS/HTTP?
How do you provide the HTTPS certs to Chronos?
I haven't looked too deeply into PanteraS, but does this currently support multi-tenancy? I'd like to use Marathon in a multi-tenant environment where each "tenant" is capable of managing their own apps, jobs, etc.
--docker_socket is not working yet (it is fixed in 0.25, but we have to use 0.23).
Create a symlink to make it work.
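A sketch of that workaround (an assumption on my side; the default paths mirror the compose mount, and the function is parameterized so you can adapt them):

```shell
# link_docker_sock: link the socket path mesos-slave 0.23 expects to the
# actually mounted one (the compose file mounts the host socket at /tmp/docker.sock).
link_docker_sock() {
  mounted="${1:-/tmp/docker.sock}"
  expected="${2:-/var/run/docker.sock}"
  ln -sf "$mounted" "$expected"
}
```

Run it inside the container before mesos-slave starts.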
I am running into this issue mesosphere/marathon#1763 and the recommendation is to switch to Java 8
(among others).
Since the upcoming releases of marathon
will also force you to switch to Java 8
(https://github.com/mesosphere/marathon/releases/tag/v0.10.0-RC2; under Notes
right at the top of that page) it might be a good time to do that now...
Validation failed, reason(s):
Service 'panteras' configuration key 'restart' contains an invalid type, it should be a string
Service 'panteras' configuration key 'panteras' contains an invalid type, it should be a string or an array
The openvpn container fails:
Error opening configuration file: /etc/openvpn/openvpn.conf
No config found inside the container or in build-source.
Hi,
I was wondering if you have ever successfully gotten PanteraS running on CoreOS. I have been looking at what it will take to move from Ubuntu to CoreOS. One difference I noticed already is that the Stable version of CoreOS uses Docker 1.8.3
, whereas PanteraS uses 1.9.1
. I think the container would certainly need to be downgraded to the same version of Docker. Would this cause any problems with Mesos, etc.?
Thanks
Hi,
For some reason, recently, I've started experiencing the following error:
dnsmasq: Maximum number of concurrent DNS queries reached (max: 150)
As a result, my lookups are failing for my marathon services. Do you have any idea what could be causing this, or if there's a way to look at the dnsmasq logs to determine where the queries are coming from?
Thanks!
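Two dnsmasq options might help diagnose and mitigate this (assuming you can append them to DNSMASQ_APP_PARAMS in the generated yml): --log-queries prints every forwarded query to the log, and --dns-forward-max raises the concurrent-query limit, whose default of 150 matches the error:

```
# appended to DNSMASQ_APP_PARAMS (sketch)
--log-queries          # log each query so you can see where they come from
--dns-forward-max=300  # default is 150, matching the "max: 150" in the error
```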
Hello,
Thanks again for the great project!
I am having a problem getting my slave images started correctly in CentOS 7.1. I have been able to get it all working in Ubuntu 14.x, however my configuration was slightly different. In my CentOS install, I've got a single Master and 2 Slaves.
I've got the master up and running OK. I can connect to Mesos, Marathon, and Consul without any problems. However, I can't seem to get the slave connected to the master. On the slave, I created the restricted/host file:
echo 'ZOOKEEPER_HOSTS="c01nhvd612:2181"' >> restricted/host
echo 'CONSUL_HOSTS="-join=c01nhvd612"' >> restricted/host
echo 'IP=10.7.112.106' >> restricted/host
And then generated the yml and started the docker image
MASTER=false SLAVE=true ./generate_yml.sh
docker-compose up
However, I notice that the slave never actually starts. When I look at the logs I see the following error about "not being able to determine the hierarchy where the subsystem freezer is attached":
PanteraS_1 | mesos-slave stderr | I1123 23:06:55.810374 173 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix
PanteraS_1 | mesos-slave stderr | Failed to create a containerizer: Could not create MesosContainerizer: Failed to create launcher: Failed to create Linux launcher: Failed to determine the hierarchy where the subsystem freezer is attached
Doing some googling suggests that it could be a problem when running Mesos as a non-root user. However, I am starting docker-compose as the root user.
Any ideas? Here's a sample of my docker-compose.yml file:
START_CONSUL: "true"
START_CONSUL_TEMPLATE: "true"
START_DNSMASQ: "true"
START_MESOS_MASTER: "false"
START_MARATHON: "false"
START_MESOS_SLAVE: "true"
START_REGISTRATOR: "true"
START_ZOOKEEPER: "false"
CONSUL_APP_PARAMS: "agent -client=0.0.0.0 -data-dir=/opt/consul/ -ui-dir=/opt/consul/dist/ -advertise=10.7.112.106 -node=c01nhvd613.nh.corp -dc=UNKNOWN -join=c01nhvd612 "
CONSUL_TEMPLATE_APP_PARAMS: "-consul=10.7.112.106:8500 -template haproxy.cfg.ctmpl:/etc/haproxy/haproxy.cfg:/opt/consul-template/haproxy_reload.sh "
DNSMASQ_APP_PARAMS: "-d -u dnsmasq -r /etc/resolv.conf.orig -7 /etc/dnsmasq.d --server=/consul/10.7.112.106#8600 --host-record=c01nhvd613.nh.corp,10.7.112.106 --address=/consul/10.7.112.106 "
HAPROXY_ADD_DOMAIN: ""
MARATHON_APP_PARAMS: "--master zk://c01nhvd612:2181/mesos --zk zk://c01nhvd612:2181/marathon --hostname c01nhvd613.nh.corp --no-logger "
MESOS_MASTER_APP_PARAMS: "--zk=zk://c01nhvd612:2181/mesos --work_dir=/var/lib/mesos --quorum=1 --ip=0.0.0.0 --hostname=c01nhvd613 --cluster=mesoscluster "
MESOS_SLAVE_APP_PARAMS: "--master=zk://c01nhvd612:2181/mesos --containerizers=docker,mesos --executor_registration_timeout=5mins --hostname=c01nhvd613 --ip=0.0.0.0 --docker_stop_timeout=5secs --gc_delay=1days --docker_socket=/tmp/docker.sock "
REGISTRATOR_APP_PARAMS: "-ip=10.7.112.106 consul://10.7.112.106:8500 "
ZOOKEEPER_APP_PARAMS: "start-foreground"
ZOOKEEPER_HOSTS: "c01nhvd612:2181"
ZOOKEEPER_ID: "0"
KEEPALIVED_VIP: ""
Thank you!
Marathon, consul-ui and mesos were registered in Consul when the role of the node is master+slave, and all of them get balanced through haproxy.
I want to hide consul-ui and mesos; how can I do that?
Hello,
I tried to make this PaaS work in standalone mode. I launched all the containers from the Simple/SmoothWebappPython examples (deploy0 & 1).
I get the following haproxy.cfg:
frontend http-in
bind *:80
#python
acl acl_python hdr(host) -i python.service.consul
use_backend backend_python if acl_python
#python-smooth
acl acl_python-smooth hdr(host) -i python-smooth.service.consul
use_backend backend_python-smooth if acl_python-smooth
backend backend_python
balance roundrobin
option http-server-close
server standalone_31005 192.168.10.10:31005 maxconn 32 weight 1
server standalone_31003 192.168.10.10:31003 maxconn 32 weight 100
server standalone_31002 192.168.10.10:31002 maxconn 32 weight 100
server standalone_31006 192.168.10.10:31006 maxconn 32 weight 1
backend backend_python-smooth
balance roundrobin
option http-server-close
server standalone_31000 192.168.10.10:31000 maxconn 32 weight 100
server standalone_31004 192.168.10.10:31004 maxconn 32 weight 1
server standalone_31001 192.168.10.10:31001 maxconn 32 weight 100
When I browse 192.168.10.10:31001 or 192.168.10.10:31002 I get the app output.
However, when I try to browse 192.168.10.10:80 to be load-balanced between the app nodes in a round-robin manner, I get a "503 Service Unavailable. No server is available to handle this request.".
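Judging from the haproxy.cfg above, this may be expected: the frontend routes purely on the Host header, a request for the bare IP matches neither ACL, and there is no default_backend, so HAProxy answers 503. Either send a matching Host header (e.g. curl -H "Host: python.service.consul" http://192.168.10.10/) or, as a sketch, add a fallback:

```
frontend http-in
    bind *:80
    acl acl_python hdr(host) -i python.service.consul
    use_backend backend_python if acl_python
    default_backend backend_python   # illustrative fallback for header-less requests
```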
I tried to debug, but I was not getting the haproxy logs. I added the following to the haproxy.cfg:
global
debug
defaults
log global
option tcplog
Here are the logs I got, in case they help:
panteras_1 | consul-template_haproxy stdout | 00000256:http-in.accept(0006)=0007 from [192.168.10.1:51926]
panteras_1 | consul-template_haproxy stdout | 00000257:http-in.accept(0006)=0008 from [192.168.10.1:51927]
panteras_1 | consul-template_haproxy stdout | 00000256:http-in.clireq[0007:ffffffff]: GET / HTTP/1.1
panteras_1 | 00000256:http-in.clihdr[0007:ffffffff]: Host: 192.168.10.10
panteras_1 | 00000256:http-in.clihdr[0007:ffffffff]: Connection: keep-alive
panteras_1 | 00000256:http-in.clihdr[0007:ffffffff]: Cache-Control: max-age=0
panteras_1 | 00000256:http-in.clihdr[0007:ffffffff]: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
panteras_1 | 00000256:http-in.clihdr[0007:ffffffff]: User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36
panteras_1 | 00000256:http-in.clihdr[0007:ffffffff]: Accept-Encoding: gzip, deflate, sdch
panteras_1 | 00000256:http-in.clihdr[0007:ffffffff]: Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,es;q=0.2
panteras_1 | 00000256:http-in.clicls[0007:ffffffff]
panteras_1 | 00000256:http-in.closed[0007:ffffffff]
panteras_1 | consul-template_haproxy stdout | 00000257:http-in.clireq[0008:ffffffff]: GET /favicon.ico HTTP/1.1
panteras_1 | 00000257:http-in.clihdr[0008:ffffffff]: Host: 192.168.10.10
panteras_1 | 00000257:http-in.clihdr[0008:ffffffff]: Connection: keep-alive
panteras_1 | 00000257:http-in.clihdr[0008:ffffffff]: User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36
panteras_1 | 00000257:http-in.clihdr[0008:ffffffff]: Accept: */*
panteras_1 | 00000257:http-in.clihdr[0008:ffffffff]: Referer: http://192.168.10.10/
panteras_1 | 00000257:http-in.clihdr[0008:ffffffff]: Accept-Encoding: gzip, deflate, sdch
panteras_1 | 00000257:http-in.clihdr[0008:ffffffff]: Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,es;q=0.2
panteras_1 | 00000257:http-in.clicls[0008:ffffffff]
panteras_1 | 00000257:http-in.closed[0008:ffffffff]
panteras_1 | consul-template_haproxy stdout | 00000258:http-in.accept(0006)=0007 from [192.168.10.1:51928]
panteras_1 | haproxy_watcher stdout | 0000023d:http-in.accept(0006)=0007 from [192.168.10.1:51929]
panteras_1 | consul-template_haproxy stdout | 00000258:http-in.clireq[0007:ffffffff]: GET / HTTP/1.1
panteras_1 | 00000258:http-in.clihdr[0007:ffffffff]: Host: 192.168.10.10
panteras_1 | 00000258:http-in.clihdr[0007:ffffffff]: Connection: keep-alive
panteras_1 | 00000258:http-in.clihdr[0007:ffffffff]: Cache-Control: max-age=0
panteras_1 | 00000258:http-in.clihdr[0007:ffffffff]: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
panteras_1 | 00000258:http-in.clihdr[0007:ffffffff]: User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36
panteras_1 | 00000258:http-in.clihdr[0007:ffffffff]: Accept-Encoding: gzip, deflate, sdch
panteras_1 | 00000258:http-in.clihdr[0007:ffffffff]: Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,es;q=0.2
panteras_1 | 00000258:http-in.clicls[0007:ffffffff]
panteras_1 | 00000258:http-in.closed[0007:ffffffff]
panteras_1 | haproxy_watcher stdout | 0000023d:http-in.clireq[0007:ffffffff]: GET /favicon.ico HTTP/1.1
panteras_1 | 0000023d:http-in.clihdr[0007:ffffffff]: Host: 192.168.10.10
panteras_1 | 0000023d:http-in.clihdr[0007:ffffffff]: Connection: keep-alive
panteras_1 | 0000023d:http-in.clihdr[0007:ffffffff]: User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36
panteras_1 | 0000023d:http-in.clihdr[0007:ffffffff]: Accept: */*
panteras_1 | 0000023d:http-in.clihdr[0007:ffffffff]: Referer: http://192.168.10.10/
panteras_1 | 0000023d:http-in.clihdr[0007:ffffffff]: Accept-Encoding: gzip, deflate, sdch
panteras_1 | 0000023d:http-in.clihdr[0007:ffffffff]: Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,es;q=0.2
panteras_1 | haproxy_watcher stdout | 0000023d:http-in.clicls[0007:ffffffff]
panteras_1 | 0000023d:http-in.closed[0007:ffffffff]
panteras_1 | haproxy_watcher stdout | 0000023e:stats.accept(0004)=0007 from [192.168.10.1:51930]
panteras_1 | haproxy_watcher stdout | 0000023e:stats.clireq[0007:ffffffff]: GET / HTTP/1.1
panteras_1 | 0000023e:stats.clihdr[0007:ffffffff]: Host: 192.168.10.10:81
panteras_1 | 0000023e:stats.clihdr[0007:ffffffff]: Connection: keep-alive
panteras_1 | 0000023e:stats.clihdr[0007:ffffffff]: Cache-Control: max-age=0
panteras_1 | 0000023e:stats.clihdr[0007:ffffffff]: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
panteras_1 | 0000023e:stats.clihdr[0007:ffffffff]: User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36
panteras_1 | 0000023e:stats.clihdr[0007:ffffffff]: Accept-Encoding: gzip, deflate, sdch
panteras_1 | 0000023e:stats.clihdr[0007:ffffffff]: Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,es;q=0.2
panteras_1 | 0000023e:stats.srvrep[0007:ffffffff]: HTTP/1.1 200 OK
panteras_1 | 0000023e:stats.srvhdr[0007:ffffffff]: Cache-Control: no-cache
panteras_1 | 0000023e:stats.srvhdr[0007:ffffffff]: Connection: close
panteras_1 | 0000023e:stats.srvhdr[0007:ffffffff]: Content-Type: text/html
panteras_1 | 0000023e:stats.srvhdr[0007:ffffffff]: Transfer-Encoding: chunked
panteras_1 | consul-template_haproxy stdout | 00000259:stats.accept(0004)=0007 from [192.168.10.1:51931]
panteras_1 | haproxy_watcher stdout | 0000023f:stats.clireq[0007:ffffffff]: GET /favicon.ico HTTP/1.1
panteras_1 | 0000023f:stats.clihdr[0007:ffffffff]: Host: 192.168.10.10:81
panteras_1 | 0000023f:stats.clihdr[0007:ffffffff]: Connection: keep-alive
panteras_1 | 0000023f:stats.clihdr[0007:ffffffff]: User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36
panteras_1 | 0000023f:stats.clihdr[0007:ffffffff]: Accept: */*
panteras_1 | 0000023f:stats.clihdr[0007:ffffffff]: Referer: http://192.168.10.10:81/
panteras_1 | 0000023f:stats.clihdr[0007:ffffffff]: Accept-Encoding: gzip, deflate, sdch
panteras_1 | 0000023f:stats.clihdr[0007:ffffffff]: Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,es;q=0.2
panteras_1 | 0000023f:stats.srvrep[0007:ffffffff]: HTTP/1.1 200 OK
panteras_1 | 0000023f:stats.srvhdr[0007:ffffffff]: Cache-Control: no-cache
panteras_1 | 0000023f:stats.srvhdr[0007:ffffffff]: Connection: close
panteras_1 | 0000023f:stats.srvhdr[0007:ffffffff]: Content-Type: text/html
panteras_1 | 0000023f:stats.srvhdr[0007:ffffffff]: Transfer-Encoding: chunked
Are you experiencing the same problem? Or maybe I have misconfigured something?
Every node is running normally in my environment.
But errors appear after adding "haproxy" to the SERVICE_TAGS:
panteras_1 | consul-template_haproxy stdout | trying again
panteras_1 | consul-template_haproxy stdout | trying again
panteras_1 | consul-template_haproxy stdout | trying again
(the same "trying again" line repeats many more times)
deploy1.json:
{
  "id": "python-example-canaries",
  "cmd": "echo python canaries `hostname` > index.html; python3 -m http.server 8080",
  "mem": 16,
  "cpus": 0.1,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "ubuntu:14.04.1",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "env": {
    "SERVICE_TAGS": "python,webapp,haproxy,http,weight=1",
    "SERVICE_NAME": "python"
  },
  "healthChecks": [
    {
      "portIndex": 0,
      "protocol": "TCP",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 10,
      "timeoutSeconds": 30,
      "maxConsecutiveFailures": 3
    },
    {
      "path": "/",
      "portIndex": 0,
      "protocol": "HTTP",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 10,
      "timeoutSeconds": 30,
      "maxConsecutiveFailures": 3
    }
  ]
}
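For reference, a spec like deploy1.json is submitted to Marathon's REST API with a plain HTTP POST. A minimal sketch, assuming Marathon listens on localhost:8080 (its default port; adjust MARATHON for your master node):

```shell
# Hypothetical helper: POST an app spec file to Marathon's /v2/apps endpoint.
# The MARATHON URL is an assumption, not part of the original report.
MARATHON="${MARATHON:-http://localhost:8080}"

deploy() {
  curl -s -X POST "$MARATHON/v2/apps" \
       -H 'Content-Type: application/json' \
       -d @"$1"
}

# Usage against a running cluster:
#   deploy deploy1.json
```

Scaling the app afterwards (e.g. a PUT with a new `"instances"` count) is exactly the kind of rapid service change that fans out through consul-template.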
$ ./install.sh -m vagrant
Gives me:
==> standalone: Creating vagrant_haproxy_1...
==> standalone: Pulling image haproxy...
==> standalone: Traceback (most recent call last):
==> standalone: File "", line 3, in
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.cli.main", line 31, in main
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.cli.docopt_command", line 21, in sys_dispatch
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.cli.command", line 28, in dispatch
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.cli.docopt_command", line 24, in dispatch
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.cli.command", line 56, in perform_command
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.cli.main", line 427, in up
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.project", line 174, in up
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.service", line 199, in recreate_containers
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.service", line 187, in create_container
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.progress_stream", line 37, in stream_output
==> standalone: File "/code/build/fig/out00-PYZ.pyz/fig.progress_stream", line 50, in print_output_event
==> standalone: fig.progress_stream.StreamOutputError: Error: image library/haproxy not found
The supervisor page (running on port 9000) shows that the mesos-slave process is failing on all nodes with this error:
Exited too quickly (process log may have details)
But all the logs for this process are empty, and neither Starting that single process nor trying to Restart all processes helps.
Any ideas what could be wrong with the mesos-slave process?
Somehow zookeeper is not working in cluster mode on docker 1.7.0.
Will dig into it.
There is a race condition with haproxy_reload.sh: consul-template triggers it every time the config changes. It can happen that a new run is triggered before the previous one has finished, and that next haproxy_reload.sh then ends in an unknown state.
To do:
- wait for the previous reload to finish
- add a remove() function that removes every appearance of the previous config, even though it should not exist
- if in an unknown state, then exit and do nothing

Is it possible to run PanteraS without vagrant, using only an installed docker?
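One common way to serialize a reload script like this is an exclusive flock(1) around the reload body, so overlapping consul-template triggers queue instead of racing. This is a sketch of the idea, not the project's actual script; the echo stands in for the real `haproxy ... -sf $(pidof haproxy)` call, and the lock/output paths are placeholders:

```shell
#!/bin/bash
# Sketch: serialize concurrent reload invocations with an exclusive lock.
# Placeholder paths; in a real haproxy_reload.sh the echo would be the
# haproxy -sf reload command.
LOCKFILE="$(mktemp)"
OUTFILE="$(mktemp)"

reload() {
  (
    flock -x 9                                   # block until we hold the lock
    echo "reload triggered for change $1" >> "$OUTFILE"
  ) 9>"$LOCKFILE"
}

# Two "simultaneous" triggers, as when scaling a marathon app:
reload 1 &
reload 2 &
wait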
see here: moby/moby@97cd073
Docker 1.6.1 disabled mounting /sys due to security issues. Do you rely on the mounted /sys from the docker host system?
docker version 1.7.1-0~trusty is not available (anymore?) on apt.dockerproject.org
I get the following error when rebuilding the docker image:
Step 56 : RUN apt-get update && apt-get install -y docker-engine=${DOCKER_APP_VERSION}
---> Running in 0582a53ae5b6
... (repository Hit/Ign/Get index lines omitted) ...
Fetched 6,330 B in 4s (1,473 B/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
E: Version '1.7.1-0~trusty' for 'docker-engine' was not found
The command '/bin/sh -c apt-get update && apt-get install -y docker-engine=${DOCKER_APP_VERSION}' returned a non-zero code: 100
ERROR DURING BUILDING IMAGE
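When a pinned docker-engine version disappears from the repository, one way forward is to list the versions apt can still see and pin one of those instead. A sketch (the `awk` field handling assumes apt-cache madison's usual pipe-separated output):

```shell
# List the docker-engine versions the configured apt repositories still
# provide, so DOCKER_APP_VERSION can be set to one that actually exists.
list_docker_versions() {
  apt-cache madison docker-engine | awk -F'|' '{gsub(/ /, "", $2); print $2}'
}

# Then pin an available version, e.g.:
#   apt-get install -y docker-engine=<version from the list>
```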
Are there any known issues with deploying a panteras cluster on AWS EC2 instances within a VPC?
I am running into an issue if I try to use more than one master node: it looks like zookeeper cannot communicate with the peer nodes. The nodes can talk to each other (I opened all communication ports for all protocols, incoming and outgoing, for the nodes in the same VPC/subnet/security group), and pinging the peer nodes works, as do tests on some other ports. Only zookeeper seems to be blocked somehow.
If I switch to a 1 master + 4 slave node configuration everything works as expected, but that is surely not the config I want to run on...
Any advice would be appreciated.
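For what it's worth, a multi-node zookeeper ensemble needs every peer listed in each node's zoo.cfg, and it uses the peer port (2888) and leader-election port (3888) between instances in addition to the client port (2181), so all three must be reachable. A fragment with hypothetical private IPs:

```
# zoo.cfg fragment (IPs are placeholders, not from this cluster).
# Each node also needs a myid file whose number matches its server.N entry.
server.1=10.0.1.11:2888:3888
server.2=10.0.1.12:2888:3888
server.3=10.0.1.13:2888:3888
```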
Hi,
I was wondering if anyone had any experience running PanteraS over multiple data centers? I know that Consul has support for multiple data centers, but what about Mesos?
How would you configure HA for this solution? I see two options:
I think in order to use the second option, Mesos would need to be configured as a cluster across the 2 data centers.
Does anyone have any experience with this type of configuration?
Would you mind updating gliderlabs/registrator to the latest version? They changed to using native consul HTTP health checks instead of the proprietary health checks that are only available when using the progrium/consul docker image: gliderlabs/registrator#173