scality / zenko


Zenko is the open source multi-cloud data controller: own and keep control of your data on any cloud.

Home Page: https://www.zenko.io

License: Apache License 2.0

Makefile 0.05% Shell 2.01% Python 4.26% JavaScript 25.16% Dockerfile 0.35% Mustache 4.75% TypeScript 11.38% Gherkin 52.02% Roff 0.01%
multi-cloud aws-s3 azure-storage google-cloud scality object-storage docker-swarm zenko hybrid-cloud kubernetes

zenko's Introduction

Zenko

Zenko logo


Zenko is Scality's open source multi-cloud data controller.

Zenko provides a unified namespace, access API, and search capabilities for data stored locally (using Docker volumes or Scality RING) or in public cloud storage services like Amazon S3, Microsoft Azure Blob storage, or Google Cloud Storage.

Learn more at Zenko.io.

Member of SODA foundation

Soda foundation logo

Contributing

If you'd like to contribute, please review the Contributing Guidelines.

If you have suggestions or questions, you can leave a comment in the Discussions section of this repository.

Overview

This repository includes installation resources to deploy the full Zenko stack on the Kubernetes orchestration system.

Zenko Stack

The stack consists of several components, all configured to talk to each other.

Zenko in Production

  • Includes high availability (HA)
  • Requires pre-existing volumes

Zenko Kubernetes Helm Chart deployment

Deploying a HA Kubernetes cluster

zenko's People

Contributors

alexanderchan-scality, anurag4dsb, bert-e, cathydossantospinto, chengyanjin, dora-korpar, francoisferrand, gdemonet, giacomoguiulfo, giorgioregni, jbwatenbergscality, jianqinwang, jonathan-gramain, kaztoozs, kerkesni, killiang, miniscruff, monpote, nicolas2bert, nicolast, philipyoo, rachedbenmustapha, rahulreddy, ssalaues, tcarmet, thomasj-tech, tmacro, vrancurel, wabernat, williamlardier


zenko's Issues

Zenko -- multiple backend

Our current deployment stack assumes a file backend. Our default should actually be the DATA=multiple option. However, with DATA=multiple, a locationConfig is necessary.

  1. We should add the DATA=multiple environment variable for s3-frontend.
  2. We should propagate locationConfig.json to all of the nodes so the s3-frontend can obtain it.
  3. We should use Docker secrets to store the real AWS and Azure credentials, and we might need to modify S3 to read them from Docker secrets (see the sketch after this list).
  4. This must be documented.
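A rough sketch of how items 1 to 3 might look with the Docker Swarm CLI; the service name (zenko-prod_s3-front, borrowed from a swarm listing later on this page), the file names, and the in-container target path are illustrative assumptions, not the actual stack definition:

docker config create location_config ./locationConfig.json        # distributed to every node running the service
docker secret create cloud_credentials ./cloud_credentials.json   # real AWS/Azure keys kept out of the compose file
docker service update \
  --env-add DATA=multiple \
  --config-add source=location_config,target=/usr/src/app/locationConfig.json \
  --secret-add cloud_credentials \
  zenko-prod_s3-front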

Single Node Kubernetes installation reference link broken

General Support Information

GitHub issues are reserved for actionable bug reports (including documentation
issues), and feature requests.

All questions regarding configuration, use cases, performance, community,
events, setup and usage recommendations, among other things should be asked on
the Zenko Forum.

Questions opened as GitHub issues will be summarily closed and moved to the
Zenko Forum.

Avoid Duplication

Before reporting a new issue or requesting a feature, please search the issue
list for this repository to ensure the issue has not already been opened (Use
the search bar and select "Issues" on the left pane after searching). If the
issue has already been raised, do not open a new issue. Add a comment to the
existing issue instead.


Bug Report Information

If you are reporting a bug, provide the information described here. If you are
requesting a new feature, delete this section (to the next line).

Description

Link for Single node Kubernetes installation gives 404 error

Steps to Reproduce the Issue

From the landing page, check the README and click on the single-node Kubernetes installation link: https://github.com/scality/Zenko/blob/development/2.0/docs/minikube.md

Actual Results

404

Expected Results

Open the page for installation instructions

Additional Information

Let us know how your deployed example is configured. Tell us your:

  • Node.js version
  • Docker version
  • npm version
  • Distribution/OS

Include anything else you think will help us resolve the issue.


Feature Requests

If you are requesting a new feature, delete the previous section (Bug Report
Information) and provide the information described here.

Proposal

Describe the feature you want.

Current Behavior

Tell us what the program is doing now that you want to change.

Desired Behavior

Tell us what you want the program to do.

Use Cases

Please provide use cases for changing the program's current behavior. Tell us
the starting state for the desired new feature (i.e., what screen or process
step you want to start from), the steps you'd like to follow (or avoid!), and
the results you want.

Additional Information

To serve you better, we'd like to know a little more about you (if you don't
mind our asking...)

  • Is this request for your company?
    • If it is, for which company?
    • Is your company using any Scality Enterprise Edition products (RING, Zenko
      EE)?
  • Are you willing to contribute this feature yourself?
  • What is your position or title?
  • How did you hear about us?

Download helm curl command does not fetch the correct tar ball

In this helm installation link, https://zenko.readthedocs.io/en/latest/installation/install/install_helm.html, the download helm curl command (curl -LO "https://get.helm.sh/helm-v{{version-number}}-linux-amd64.tar.gz") does not fetch the correct tarball.

That command fetches a file whose type is "XML 1.0 document, UTF-8 Unicode (with BOM) text", which cannot be untarred.

The correct way to fetch the tarball is to go to Helm's GitHub releases page and download the tarball directly from there.
The correct tarball file type is "gzip compressed data, last modified: Wed Jun 10 19:04:20 2020, from Unix".

You can check by running file <fileName> in a terminal.
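A minimal shell sketch of that check-and-retry; the version number below (v3.2.3) is illustrative and must replace the {{version-number}} placeholder, which is the likely reason the original curl returned an XML error document, or you can download the tarball linked from Helm's GitHub releases page as the reporter suggests:

file helm-v3.2.3-linux-amd64.tar.gz      # should report "gzip compressed data", not "XML 1.0 document"
curl -LO "https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz"
tar -xzf helm-v3.2.3-linux-amd64.tar.gz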

Large zipfile upload never completes on Zenko from Cyberduck

I started a new Zenko instance from Orbit - this works great.

First, I verified the Zenko instance is accessible and accepts I/O by uploading/downloading several smaller (< 1 MB) files from Cyberduck; this succeeds.

I then tried to upload a 99MB zipfile.

The upload slows down from about 50 MB/sec to under 5 MB/sec, does not stop at 99 MB (100%), and continues uploading the file again to 200% (198.4 MB). The upload never completes and the file never appears.

See the attachment. I tried it a few times and same behavior.

screen shot 2017-11-27 at 9 53 16 am

Signature mismatch using swarm-testing

I ran a vanilla swarm-testing environment and going through the frontend nginx doesn't work; I get a signature mismatch. Going directly through s3server, as opposed to nginx, works, so I bet the nginx config strips some headers and breaks the signature calculation.

GMBP:swarm-testing giorgio$ aws s3 --endpoint http://localhost:80 mb s3://bucket1 --region=us-east-1
make_bucket failed: s3://bucket1 An error occurred (SignatureDoesNotMatch) when calling the CreateBucket operation: The request signature we calculated does not match the signature you provided.
GMBP:swarm-testing giorgio$ vim docker-stack.yml
GMBP:swarm-testing giorgio$ less docker-stack.yml
GMBP:swarm-testing giorgio$ docker stack deploy -c docker-stack.yml zenko-testing
Updating service zenko-testing_lb (id: amkd4cnwzlyhgi2jz3yote6lr)
Updating service zenko-testing_s3 (id: tyalfqm8taia7u3l0zdrsbunc)
GMBP:swarm-testing giorgio$ aws s3 --endpoint http://localhost:8000 mb s3://bucket1 --region=us-east-1
make_bucket: bucket1

Docs should be in ReST format

As @ballot-scality mentioned in #137 (review), markdown should not be used for authoring documentation.

There is currently a mix of formats, and the docs are scattered in multiple directories, too:

./charts/README.md
./charts/minikube.md
./charts/gke.md
./docs/orbit_registration.md
./swarm-testing/README.md
./swarm-production/README.md
./README.md

./docs/swarm_production_link.rst
./docs/index.rst
./tests/README.rst

Stateful S3-data

StatefulSet for the S3-data chart

It seems like this chart is set up as a Deployment. Is this what we want going forward? Convention would suggest a StatefulSet. Would the S3-data service function properly as a StatefulSet, or is the current configuration the only way of achieving a stateful S3-data chart?

permanent port forwarding

Hi, can we have permanent port forwarding in order to avoid typing the following command each time?
kubectl port-forward zenko-cloudserver-front-6b869f97cd-r75nx 8000
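There is no built-in permanent port-forward, but here is a hedged sketch of two common workarounds; the deployment name is inferred from the pod name above and may differ per install:

kubectl port-forward deployment/zenko-cloudserver-front 8000 &     # avoids looking up the changing pod name
kubectl expose deployment zenko-cloudserver-front --type=NodePort --port=8000 --name=cloudserver-front-np   # stable NodePort, no port-forward needed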

Zenko v1 API KO?

Hello,

We use Zenko v1 and get errors in the zenko-cloudserver-manager pod:

{"name":"S3","time":1687768956746,"url":"https://push.api.zenko.io/api/v1/instance/3d22af7d-42a1-4e6a-b846-4329477b4ee8/ws","level":"info","message":"connecting to push server","hostname":"zenko-cloudserver-6fd97d548-7fx6t","pid":19}
{"name":"S3","time":1687768957130,"error":{},"errorMessage":"Unexpected server response: 200","level":"error","message":"error from push server connection","hostname":"zenko-cloudserver-6fd97d548-7fx6t","pid":19}
{"name":"S3","time":1687768957130,"level":"info","message":"disconnected from push server, reconnecting in 10s","hostname":"zenko-cloudserver-6fd97d548-7fx6t","pid":19}
{"name":"S3","time":1687769632811,"level":"info","message":"push server ws not using proxy","hostname":"zenko-cloudserver-manager-677868dcd8-tfj4h","pid":20}
{"name":"S3","time":1687769633206,"error":{},"errorMessage":"Unexpected server response: 200","level":"error","message":"error from push server connection","hostname":"zenko-cloudserver-manager-677868dcd8-tfj4h","pid":20}
{"name":"S3","time":1687769633206,"level":"info","message":"disconnected from push server, reconnecting in 10s","hostname":"zenko-cloudserver-manager-677868dcd8-tfj4h","pid":20}
{"name":"S3","time":1687769643207,"url":"https://push.api.zenko.io/api/v1/instance/3d22af7d-42a1-4e6a-b846-4329477b4ee8/ws","level":"info","message":"connecting to push server","hostname":"zenko-cloudserver-manager-677868dcd8-tfj4h","pid":20}
{"name":"S3","time":1687769643207,"level":"info","message":"push server ws not using proxy","hostname":"zenko-cloudserver-manager-677868dcd8-tfj4h","pid":20}
{"name":"S3","time":1687769643589,"error":{},"errorMessage":"Unexpected server response: 200","level":"error","message":"error from push server connection","hostname":"zenko-cloudserver-manager-677868dcd8-tfj4h","pid":20}
{"name":"S3","time":1687769643590,"level":"info","message":"disconnected from push server, reconnecting in 10s","hostname":"zenko-cloudserver-manager-677868dcd8-tfj4h","pid":20}

Orbit UI is also not accessible; is there an incident on your side?

Many of our static assets randomly return 403 errors.

Thanks

Cloudserver memory leak

Bug Report Information

Memory leak in Cloudserver since 8.1

Description

We upgraded two Zenko instances to 1.2 one month ago and we noticed a lot of cloudserver pod restarts.
It happens on both instances. One instance has 3 locations and 3 cloudserver pods; the other has more than 30 cloudserver pods and more than 100 locations.

Steps to Reproduce the Issue

Deploy the latest Zenko chart.
Look at cloudserver restarts and the Grafana cloudserver dashboard.
We tested 8.1.20 and 8.2.6.

Actual Results

8.2.6 metrics:
8-2-6.png

8.1.20 metrics
8-1-20.png

You can see on both 8.1.20 and 8.2.6 that the heap keeps growing, and it ends with a pod restart and a Node.js stack trace:

      Last State:  Terminated
      Reason:    Error
      Message:   ======================================

    0: ExitFrame [pc: 0xa18e7edbe1d]
Security context: 0x04e105e1e6e9 <JSObject>
    1: connectToNext(aka connectToNext) [0x23d0dc3908f1] [/usr/src/app/node_modules/utapi/node_modules/ioredis/built/connectors/SentinelConnector/index.js:~41] [pc=0xa18e9273f62](this=0x014cadf026f1 <undefined>)
    2: /* anonymous */(aka /* anonymous */) [0x3a96bdf132a1] [/usr/src/app/node_modules/utapi/node_modules/ioredis/built/connectors/SentinelConnector...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x8fa050 node::Abort() [node]
 2: 0x8fa09c  [node]
 3: 0xb0020e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xb00444 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xef4952  [node]
 6: 0xef4a58 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
 7: 0xf00b32 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
 8: 0xf01464 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 9: 0xf040d1 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
10: 0xecd554 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
11: 0x116d6de v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
12: 0xa18e7edbe1d
Aborted (core dumped)
npm ERR! code ELIFECYCLE
npm ERR! errno 134
npm ERR! @zenko/[email protected] start_s3server: `node index.js`
npm ERR! Exit status 134
npm ERR!
npm ERR! Failed at the @zenko/[email protected] start_s3server script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2020-11-05T14_17_01_647Z-debug.log

      Exit Code:    134
      Started:      Wed, 04 Nov 2020 04:36:10 +0100
      Finished:     Thu, 05 Nov 2020 15:17:01 +0100
    Ready:          True
    Restart Count:  13

Expected Results

Like in 8.0, pods should not restart and the heap size should not grow to reach OOM.

8.0.22 metrics:
8-0-22.png

Additional Information

Let us know how your deployed example is configured. Tell us your:

  • Azure AKS 1.16.13

We also tested the latest-8.2 Docker image and observed the same symptoms as with 8.2.6 and 8.1.20.

Regards

Instance does not create

When I run Zenko in the sandbox, the instance never gets created, even though it says: "Your instance is currently being created and will be ready in a few seconds.":

screen shot 2018-07-02 at 16 12 22

Why is everything broken

Discussed in #1390

Originally posted by outbackdingo November 12, 2021
The document states:

Go to https://github.com/Scality/Zenko/releases and download the latest stable version of Zenko.
Unzip or gunzip the file you just downloaded and change to the top-level (Zenko) directory.
Configure with options.yaml
Create an options.yaml file in Zenko/kubernetes/ to store deployment parameters. Enter the following parameters:

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 0
  hosts:
    - zenko.local

cloudserver:
  endpoint: "zenko.local"

However, there is no kubernetes directory:

โฏ cd Zenko-2.1.4
โฏ ls -al
total 36
drwxrwxr-x. 1 dingo dingo 248 Nov 12 09:33 .
drwx--x---+ 1 dingo dingo 4628 Nov 12 10:17 ..
-rw-rw-r--. 1 dingo dingo 16 Nov 12 02:29 .dockerignore
drwxrwxr-x. 1 dingo dingo 164 Nov 12 02:29 docs
-rw-rw-r--. 1 dingo dingo 24 Nov 12 02:29 .eslintrc
drwxrwxr-x. 1 dingo dingo 88 Nov 12 02:29 eve
drwxrwxr-x. 1 dingo dingo 100 Nov 12 02:29 .github
-rw-rw-r--. 1 dingo dingo 155 Nov 12 02:29 .gitignore
-rw-rw-r--. 1 dingo dingo 10753 Nov 12 02:29 LICENSE
-rw-rw-r--. 1 dingo dingo 2142 Nov 12 02:29 README.md
-rw-rw-r--. 1 dingo dingo 506 Nov 12 02:29 .readthedocs.yml
drwxrwxr-x. 1 dingo dingo 222 Nov 12 02:29 res
drwxrwxr-x. 1 dingo dingo 84 Nov 12 02:29 solution
drwxrwxr-x. 1 dingo dingo 58 Nov 12 02:29 solution-base
drwxrwxr-x. 1 dingo dingo 218 Nov 12 02:29 tests
-rw-rw-r--. 1 dingo dingo 76 Nov 12 02:29 VERSION

make_bucket failed

Getting an error when running:
$ aws s3 --endpoint http://localhost mb s3://bucket1 --region=us-east-1

make_bucket failed: s3://bucket1 An error occurred (InvalidAccessKeyId) when calling the CreateBucket operation: The AWS access key Id you provided does not exist in our records.

Getting the same "The AWS access key Id you provided does not exist in our records." when running
$ aws s3 --endpoint http://localhost ls

Direct AWS CLI call works: aws s3 ls

Any help will be greatly appreciated.

helm deploy incomplete - backbeat containers restarting

I was trying to evaluate Zenko using the Helm chart. I tried v1.1.0 and the latest 1.1.3, but both show the same issue on Kubernetes 1.15 deployed via Rancher 2.3.2:

The Helm chart deploys and several containers are up and running; for instance, Redis and MongoDB are both "green". The Zenko-specific ones never complete: they are all from the zenko/backbeat:8.1.14 image and show the same error.

Any hints?
thank you.

kubectl logs zenko-backbeat-api-5566bdfd68-mfsgt
Log level has been modified to info

[email protected] start /usr/src/app
node index.js

(node:26) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
{"name":"mdManagement","time":1572445217762,"level":"info","message":"connected to mongodb","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
(node:26) DeprecationWarning: collection.update is deprecated. Use updateOne, updateMany, or bulkWrite instead.
{"name":"BackbeatServer:index","time":1572445217903,"error":{"message":"NoSuchKey","code":404,"stack":"Error: NoSuchKey\n at Object.keys.filter.forEach.index (/usr/src/app/node_modules/arsenal/lib/errors.js:29:31)\n at Array.forEach ()\n at errorsGen (/usr/src/app/node_modules/arsenal/lib/errors.js:28:12)\n at Object. (/usr/src/app/node_modules/arsenal/lib/errors.js:35:18)\n at Module._compile (internal/modules/cjs/loader.js:776:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)\n at Module.load (internal/modules/cjs/loader.js:653:32)\n at tryModuleLoad (internal/modules/cjs/loader.js:593:12)\n at Function.Module._load (internal/modules/cjs/loader.js:585:3)\n at Module.require (internal/modules/cjs/loader.js:690:17)","name":"Error"},"level":"error","message":"could not load management db","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"mdManagement","time":1572445222966,"level":"info","message":"connected to mongodb","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"BackbeatServer:index","time":1572445223057,"error":{"message":"NoSuchKey","code":404,"stack":"Error: NoSuchKey\n at Object.keys.filter.forEach.index (/usr/src/app/node_modules/arsenal/lib/errors.js:29:31)\n at Array.forEach ()\n at errorsGen (/usr/src/app/node_modules/arsenal/lib/errors.js:28:12)\n at Object. (/usr/src/app/node_modules/arsenal/lib/errors.js:35:18)\n at Module._compile (internal/modules/cjs/loader.js:776:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)\n at Module.load (internal/modules/cjs/loader.js:653:32)\n at tryModuleLoad (internal/modules/cjs/loader.js:593:12)\n at Function.Module._load (internal/modules/cjs/loader.js:585:3)\n at Module.require (internal/modules/cjs/loader.js:690:17)","name":"Error"},"level":"error","message":"could not load management db","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"mdManagement","time":1572445228118,"level":"info","message":"connected to mongodb","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"BackbeatServer:index","time":1572445228203,"error":{"message":"NoSuchKey","code":404,"stack":"Error: NoSuchKey\n at Object.keys.filter.forEach.index (/usr/src/app/node_modules/arsenal/lib/errors.js:29:31)\n at Array.forEach ()\n at errorsGen (/usr/src/app/node_modules/arsenal/lib/errors.js:28:12)\n at Object. (/usr/src/app/node_modules/arsenal/lib/errors.js:35:18)\n at Module._compile (internal/modules/cjs/loader.js:776:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)\n at Module.load (internal/modules/cjs/loader.js:653:32)\n at tryModuleLoad (internal/modules/cjs/loader.js:593:12)\n at Function.Module._load (internal/modules/cjs/loader.js:585:3)\n at Module.require (internal/modules/cjs/loader.js:690:17)","name":"Error"},"level":"error","message":"could not load management db","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"mdManagement","time":1572445233246,"level":"info","message":"connected to mongodb","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"BackbeatServer:index","time":1572445234168,"error":{"message":"NoSuchKey","code":404,"stack":"Error: NoSuchKey\n at Object.keys.filter.forEach.index (/usr/src/app/node_modules/arsenal/lib/errors.js:29:31)\n at Array.forEach ()\n at errorsGen (/usr/src/app/node_modules/arsenal/lib/errors.js:28:12)\n at Object. (/usr/src/app/node_modules/arsenal/lib/errors.js:35:18)\n at Module._compile (internal/modules/cjs/loader.js:776:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)\n at Module.load (internal/modules/cjs/loader.js:653:32)\n at tryModuleLoad (internal/modules/cjs/loader.js:593:12)\n at Function.Module._load (internal/modules/cjs/loader.js:585:3)\n at Module.require (internal/modules/cjs/loader.js:690:17)","name":"Error"},"level":"error","message":"could not load management db","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"mdManagement","time":1572445239220,"level":"info","message":"connected to mongodb","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}
{"name":"BackbeatServer:index","time":1572445239764,"error":{"message":"NoSuchKey","code":404,"stack":"Error: NoSuchKey\n at Object.keys.filter.forEach.index (/usr/src/app/node_modules/arsenal/lib/errors.js:29:31)\n at Array.forEach ()\n at errorsGen (/usr/src/app/node_modules/arsenal/lib/errors.js:28:12)\n at Object. (/usr/src/app/node_modules/arsenal/lib/errors.js:35:18)\n at Module._compile (internal/modules/cjs/loader.js:776:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)\n at Module.load (internal/modules/cjs/loader.js:653:32)\n at tryModuleLoad (internal/modules/cjs/loader.js:593:12)\n at Function.Module._load (internal/modules/cjs/loader.js:585:3)\n at Module.require (internal/modules/cjs/loader.js:690:17)","name":"Error"},"level":"error","message":"could not load management db","hostname":"zenko-backbeat-api-5566bdfd68-mfsgt","pid":26}

helm install output
NAME: zenko
LAST DEPLOYED: Wed Oct 30 12:32:10 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME AGE
zenko-cosmos-operator 88s
zenko-grafana-clusterrole 88s

==> v1/ClusterRoleBinding
NAME AGE
zenko-cosmos-operator 88s
zenko-grafana-clusterrolebinding 88s

==> v1/ConfigMap
NAME DATA AGE
zenko-backbeat-grafana-dashboard 1 89s
zenko-cloudserver-grafana-dashboard 1 89s
zenko-grafana 1 89s
zenko-grafana-config-dashboards 1 89s
zenko-grafana-dashboards-json 0 89s
zenko-jmx-exporter 1 89s
zenko-mongodb-replicaset-dashboard 1 89s
zenko-mongodb-replicaset-init 1 89s
zenko-mongodb-replicaset-mongodb 1 89s
zenko-mongodb-replicaset-tests 1 89s
zenko-prometheus-server 3 89s
zenko-redis-ha-configmap 3 89s
zenko-redis-ha-probes 2 89s
zenko-zenko-grafana-datasource 1 89s
zenko-zenko-queue-config 1 89s
zenko-zenko-queue-manager-bootstrap 1 89s
zenko-zenko-queue-metrics 1 89s

==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
zenko-cosmos-operator 0/1 1 0 87s
zenko-cosmos-scheduler 0/1 1 0 87s
zenko-zenko-queue-manager 0/1 1 0 87s

==> v1/HorizontalPodAutoscaler
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
zenko-cloudserver Deployment/zenko-cloudserver /80% 2 4 4 86s

==> v1/Job
NAME COMPLETIONS DURATION AGE
zenko-zenko-queue-config-ab364206 0/1 85s 86s
zenko-zenko-queue-manager-bootstrap-0c3fbfe6 0/1 85s 86s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zenko-s3-data Bound pvc-8c92c229-c36a-4dce-b1f3-719f18979802 90Gi RWO nfs-client 88s Filesystem

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
zenko-backbeat-api-5566bdfd68-mfsgt 0/1 ContainerCreating 0 84s
zenko-backbeat-gc-consumer-574d5d69cf-9mk94 0/1 ContainerCreating 0 86s
zenko-backbeat-gc-consumer-574d5d69cf-bhxlp 0/1 ContainerCreating 0 86s
zenko-backbeat-gc-consumer-574d5d69cf-gmn9h 0/1 ContainerCreating 0 86s
zenko-backbeat-ingestion-consumer-db4b76459-222d5 0/1 ContainerCreating 0 84s
zenko-backbeat-ingestion-consumer-db4b76459-hkfvm 0/1 ContainerCreating 0 85s
zenko-backbeat-ingestion-consumer-db4b76459-hwdbr 0/1 ContainerCreating 0 84s
zenko-backbeat-ingestion-producer-69589dfdc6-n6r49 0/1 Pending 0 82s
zenko-backbeat-lifecycle-bucket-processor-78bdf96fc7-8qqmg 0/1 ContainerCreating 0 82s
zenko-backbeat-lifecycle-bucket-processor-78bdf96fc7-jk54t 0/1 ContainerCreating 0 83s
zenko-backbeat-lifecycle-bucket-processor-78bdf96fc7-zmp8j 0/1 ContainerCreating 0 82s
zenko-backbeat-lifecycle-conductor-65d5ccb9f6-99k6k 0/1 ContainerCreating 0 84s
zenko-backbeat-lifecycle-object-processor-7cc4545cfc-2g5m2 0/1 ContainerCreating 0 84s
zenko-backbeat-lifecycle-object-processor-7cc4545cfc-8twnk 0/1 ContainerCreating 0 83s
zenko-backbeat-lifecycle-object-processor-7cc4545cfc-phxxt 0/1 ContainerCreating 0 83s
zenko-backbeat-replication-data-processor-86f5dbcd9-5hlzn 0/1 ContainerCreating 0 86s
zenko-backbeat-replication-data-processor-86f5dbcd9-tsbhl 0/1 ContainerCreating 0 86s
zenko-backbeat-replication-data-processor-86f5dbcd9-vk7lv 0/1 ContainerCreating 0 86s
zenko-backbeat-replication-populator-749cfd8d59-5rs9r 0/1 ContainerCreating 0 82s
zenko-backbeat-replication-status-processor-b8cfd5785-b468l 0/1 ContainerCreating 0 82s
zenko-backbeat-replication-status-processor-b8cfd5785-gkrvt 0/1 ContainerCreating 0 82s
zenko-backbeat-replication-status-processor-b8cfd5785-k68p9 0/1 ContainerCreating 0 82s
zenko-cloudserver-66b4d45b4c-4j9tw 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-cpghv 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-d8xj9 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-dj2lv 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-djhvx 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-kkz6q 0/1 ContainerCreating 0 86s
zenko-cloudserver-66b4d45b4c-mq4z6 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-n9zdh 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-qxw4l 0/1 ContainerCreating 0 86s
zenko-cloudserver-66b4d45b4c-qzj7d 0/1 Terminating 0 85s
zenko-cloudserver-66b4d45b4c-t7259 0/1 ContainerCreating 0 86s
zenko-cloudserver-66b4d45b4c-vxkc9 0/1 ContainerCreating 0 85s
zenko-cloudserver-66b4d45b4c-zn6vk 0/1 Terminating 0 85s
zenko-cloudserver-manager-7dc886bf55-wj4mw 0/1 ContainerCreating 0 83s
zenko-cosmos-operator-5b86756ff7-tvrhb 0/1 ContainerCreating 0 86s
zenko-cosmos-scheduler-5c96f7ff9d-29zhp 0/1 ContainerCreating 0 86s
zenko-grafana-56f8f4f559-jtbnd 0/3 ContainerCreating 0 82s
zenko-mongodb-replicaset-0 0/2 Init:0/3 0 85s
zenko-prometheus-server-0 0/2 Init:0/1 0 79s
zenko-redis-ha-server-0 0/2 Init:0/1 0 83s
zenko-s3-data-78f46ccd8d-sg8zt 0/1 Init:0/1 0 85s
zenko-zenko-queue-0 0/2 ContainerCreating 0 80s
zenko-zenko-queue-config-ab364206-s2r5s 0/1 ContainerCreating 0 85s
zenko-zenko-queue-exporter-6b995f54fc-xl4jp 0/1 ContainerCreating 0 85s
zenko-zenko-queue-manager-5475c68d5c-6th4h 0/1 ContainerCreating 0 83s
zenko-zenko-queue-manager-bootstrap-0c3fbfe6-s2j25 0/1 ContainerCreating 0 85s
zenko-zenko-quorum-0 0/2 ContainerCreating 0 82s

==> v1/Role
NAME AGE
zenko-cosmos-operator 87s
zenko-cosmos-scheduler 87s
zenko-redis-ha 87s

==> v1/RoleBinding
NAME AGE
zenko-cosmos-operator 87s
zenko-cosmos-scheduler 87s
zenko-redis-ha 87s

==> v1/Secret
NAME TYPE DATA AGE
zenko-grafana Opaque 3 89s
zenko-zenko-queue-manager Opaque 3 89s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
zenko-backbeat-api ClusterIP 10.43.139.10 8900/TCP 87s
zenko-cloudserver ClusterIP 10.43.123.152 80/TCP 87s
zenko-grafana ClusterIP 10.43.145.174 80/TCP 87s
zenko-mongodb-replicaset ClusterIP None 27017/TCP 87s
zenko-mongodb-replicaset-client ClusterIP None 27017/TCP,9216/TCP 87s
zenko-prometheus-server ClusterIP 10.43.65.181 80/TCP 87s
zenko-prometheus-server-headless ClusterIP None 80/TCP 87s
zenko-redis-ha ClusterIP None 6379/TCP,26379/TCP 87s
zenko-redis-ha-announce-0 ClusterIP 10.43.3.137 6379/TCP,26379/TCP 87s
zenko-redis-ha-announce-1 ClusterIP 10.43.188.83 6379/TCP,26379/TCP 87s
zenko-redis-ha-announce-2 ClusterIP 10.43.232.95 6379/TCP,26379/TCP 87s
zenko-s3-data ClusterIP 10.43.189.105 9991/TCP 87s
zenko-zenko-queue ClusterIP 10.43.0.137 9092/TCP 87s
zenko-zenko-queue-headless ClusterIP None 9092/TCP 87s
zenko-zenko-queue-manager ClusterIP 10.43.99.245 9000/TCP 87s
zenko-zenko-quorum ClusterIP 10.43.17.141 2181/TCP 87s
zenko-zenko-quorum-headless ClusterIP None 2181/TCP,3888/TCP,2888/TCP 87s

==> v1/ServiceAccount
NAME SECRETS AGE
zenko 1 88s
zenko-cosmos-operator 1 88s
zenko-cosmos-scheduler 1 88s
zenko-grafana 1 88s
zenko-prometheus-alertmanager 1 88s
zenko-prometheus-kube-state-metrics 1 88s
zenko-prometheus-node-exporter 1 88s
zenko-prometheus-pushgateway 1 88s
zenko-prometheus-server 1 88s
zenko-redis-ha 1 88s
zenko-zenko-queue-manager 1 88s

==> v1/StatefulSet
NAME READY AGE
zenko-mongodb-replicaset 0/3 86s
zenko-redis-ha-server 0/3 86s

==> v1/StorageClass
NAME PROVISIONER AGE
zenko-cosmos-operator-remote-storage kubernetes.io/no-provisioner 88s

==> v1beta1/ClusterRole
NAME AGE
zenko-prometheus-kube-state-metrics 88s
zenko-prometheus-server 88s

==> v1beta1/ClusterRoleBinding
NAME AGE
zenko-prometheus-kube-state-metrics 88s
zenko-prometheus-server 88s

==> v1beta1/CronJob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
zenko-zenko-reporting-count-items @hourly False 0 86s

==> v1beta1/CustomResourceDefinition
NAME AGE
cosmoses.zenko.io 88s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
zenko-backbeat-api 0/1 1 0 87s
zenko-backbeat-gc-consumer 0/3 3 0 87s
zenko-backbeat-ingestion-consumer 0/3 3 0 87s
zenko-backbeat-ingestion-producer 0/1 1 0 87s
zenko-backbeat-lifecycle-bucket-processor 0/3 3 0 87s
zenko-backbeat-lifecycle-conductor 0/1 1 0 87s
zenko-backbeat-lifecycle-object-processor 0/3 3 0 87s
zenko-backbeat-replication-data-processor 0/3 3 0 87s
zenko-backbeat-replication-populator 0/1 1 0 87s
zenko-backbeat-replication-status-processor 0/3 3 0 87s
zenko-cloudserver 0/4 30 0 87s
zenko-cloudserver-manager 0/1 1 0 87s
zenko-s3-data 0/1 1 0 87s
zenko-zenko-queue-exporter 0/1 1 0 87s

==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
zenko-grafana s3.xxx.xxx.xxxx.it 10.xx.xx.29,... 80 86s
zenko-prometheus-server s3.xxx.xxx.xxxx.it 10.xx.xx.29,... 80 86s
zenko-zenko s3.xxx..xxxx.it 10.xx.xx.29,... 80 86s

==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
zenko-mongodb-replicaset N/A 1 0 89s
zenko-zenko-quorum N/A 1 0 89s

==> v1beta1/PodSecurityPolicy
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
zenko-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1beta1/Role
NAME AGE
zenko 87s
zenko-grafana 87s

==> v1beta1/RoleBinding
NAME AGE
zenko 87s
zenko-grafana 87s

==> v1beta1/StatefulSet
NAME READY AGE
zenko-zenko-queue 0/3 86s
zenko-zenko-quorum 0/3 86s

==> v1beta2/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
zenko-grafana 0/1 1 0 87s

==> v1beta2/StatefulSet
NAME READY AGE
zenko-prometheus-server 0/2 86s

NOTES:

Access the Zenko endpoint from this URL:
http://s3.lxxxx.xxxxx.xxxxx.it

Install with Helm results in an error

On the installation docs page (https://zenko.readthedocs.io/en/2.3.2/installation/install/install_zenko.html),
running the command helm install --name my-zenko -f options.yaml zenko
results in an error:

root@ubuntu1804:~/workspace/Zenko/kubernetes# helm install --name my-zenko -f ./zenko/options.yaml zenko
Error: no Chart.yaml exists in directory "/root/workspace/Zenko/kubernetes/zenko"

I've searched within this repository for the Chart.yaml file, but I couldn't find anything other than the "Zenko Kubernetes Helm Chart deployment" link, which leads to a 404 page.

I'm using minikube as the underlying Kubernetes because I plan to experiment locally. How can I install Zenko with Helm correctly?
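As a hedged alternative (based on the invocation quoted in the "jq error (redis.port)" issue further down this page), the chart can be installed straight from a released chart tarball URL instead of a local directory; the version shown is illustrative:

helm install --name my-zenko -f options.yaml \
  https://github.com/scality/Zenko/releases/download/1.2.0-rc.3/zenko-helm-chart-1.2.0-rc.3.tgz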

Documentation - awscli: An error occurred (SignatureDoesNotMatch)

When following the Testing Steps at https://github.com/scality/Zenko/blob/master/swarm-testing/README.md I continued to run into (SignatureDoesNotMatch) with :80 in the URL.

$ aws --endpoint-url=http://localhost:80 s3 mb s3://test
make_bucket failed: s3://test An error occurred (SignatureDoesNotMatch) when calling the CreateBucket operation: The request signature we calculated does not match the signature you provided.

$ aws --endpoint-url=http://localhost s3 mb s3://test
make_bucket: test

Removing :80 resolved my issue.

InvalidAccessKeyId error occurred

I deployed Zenko using the quick swarm-testing stack guide, but a simple test fails.
How can I solve this problem?

# docker service ls
ID            NAME              MODE        REPLICAS  IMAGE
ki15zc8zyu3c  zenko-testing_s3  replicated  1/1       scality/s3server:latest
z3855gemz44e  zenko-testing_lb  replicated  1/1       zenko/loadbalancer:latest

# aws s3 ls 
--> It's ok.

# aws s3 ls --endpoint-url http://127.0.0.1
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS access key Id you provided does not exist in our records.

# docker logs zenko-testing_s3.1.j99ngt1c6vda26l8cu0pjypfi | tail -1
{"name":"S3","bytesReceived":0,"bodyLength":0,"bytesSent":232,"clientIP":"::ffff:10.0.0.3","clientPort":48122,"httpMethod":"GET","httpURL":"/","httpCode":403,"time":1519371188548,"req_id":"05872256bf4d8d8453e8","elapsed_ms":1.949567,"level":"info","message":"responded with error XML","hostname":"27db7213ccea","pid":126}
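One thing worth checking, as a hedged sketch: the standalone s3server image documents default test credentials (accessKey1 / verySecretKey1), and the AWS CLI has to be pointed at them; a real deployment may use different keys:

aws configure set aws_access_key_id accessKey1 --profile zenko
aws configure set aws_secret_access_key verySecretKey1 --profile zenko
aws s3 ls --endpoint-url http://127.0.0.1 --profile zenko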

Update Docker Stack

It was requested to update the docker swarm stack to the latest images. The only compatibility issue is the use of Dockerize in the swarm stack.

spark write to zenko - bucket does not exist

I can write and read to S3 with the following command:

bin/spark-shell
--packages io.delta:delta-core_2.11:0.5.0,org.apache.hadoop:hadoop-aws:2.7.7
--conf spark.delta.logStore.class=org.apache.spark.sql.delta.storage.S3SingleDriverLogStore
--conf spark.hadoop.fs.s3a.access.key=my-key
--conf spark.hadoop.fs.s3a.secret.key=my-secret

spark.range(5).write.format("parquet").save("s3a://sq-delta1/parquettable1")

When I do the same against Zenko, including the path.style.access and other options, I always get an error that the bucket does not exist:

  • java.io.IOException: Bucket mnt does not exist

Why? When I try MinIO it works as well.

Is there something I'm missing, or what could be the problem?

Cmd with Zenko:
bin/spark-shell
--packages io.delta:delta-core_2.11:0.5.0,org.apache.hadoop:hadoop-aws:2.7.7
--conf spark.delta.logStore.class=org.apache.spark.sql.delta.storage.S3SingleDriverLogStore
--conf spark.hadoop.fs.s3a.path.style.access=true
--conf com.amazonaws.services.s3.enableV4=true
--conf spark.hadoop.s3.endpoint.signingRegion=eu-west-hot
--conf spark.hadoop.fs.s3a.endpoint=http://zenko-url
--conf spark.hadoop.fs.s3a.access.key=my-key
--conf spark.hadoop.fs.s3a.secret.key=my-secret
--conf fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem

spark.range(5).write.format("parquet").save("s3a://mnt/delta/testparquet")

Full stack trace

(base) root@anaconda-0:/opt/conda# bin/spark-shell \

--packages io.delta:delta-core_2.11:0.5.0,org.apache.hadoop:hadoop-aws:2.7.7
--conf spark.delta.logStore.class=org.apache.spark.sql.delta.storage.S3SingleDriverLogStore
--conf spark.hadoop.fs.s3a.path.style.access=true
--conf com.amazonaws.services.s3.enableV4=true
--conf spark.hadoop.s3.endpoint.signingRegion=eu-west-hot
--conf spark.hadoop.fs.s3a.endpoint=http://zenko-url
--conf spark.hadoop.fs.s3a.access.key=my-key
--conf spark.hadoop.fs.s3a.secret.key=my-secret
--conf fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
Warning: Ignoring non-spark config property: fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
Warning: Ignoring non-spark config property: com.amazonaws.services.s3.enableV4=true

Ivy Default Cache set to: /root/.ivy2/cache
The jars for the packages stored in: /root/.ivy2/jars
:: loading settings :: url = jar:file:/opt/conda/lib/python2.7/site-packages/pyspark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
io.delta#delta-core_2.11 added as a dependency
org.apache.hadoop#hadoop-aws added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-1f15352a-51ee-4e1b-b10b-bc5ddeef4cdf;1.0
confs: [default]
found io.delta#delta-core_2.11;0.5.0 in central
found org.antlr#antlr4;4.7 in central
found org.antlr#antlr4-runtime;4.7 in central
found org.antlr#antlr-runtime;3.5.2 in central
found org.antlr#ST4;4.0.8 in central
found org.abego.treelayout#org.abego.treelayout.core;1.0.3 in central
found org.glassfish#javax.json;1.0.4 in central
found com.ibm.icu#icu4j;58.2 in central
found org.apache.hadoop#hadoop-aws;2.7.7 in central
found org.apache.hadoop#hadoop-common;2.7.7 in central
found org.apache.hadoop#hadoop-annotations;2.7.7 in central
found com.google.guava#guava;11.0.2 in central
found com.google.code.findbugs#jsr305;3.0.0 in central
found commons-cli#commons-cli;1.2 in central
found org.apache.commons#commons-math3;3.1.1 in central
found xmlenc#xmlenc;0.52 in central
found commons-httpclient#commons-httpclient;3.1 in central
found commons-logging#commons-logging;1.1.3 in central
found commons-codec#commons-codec;1.4 in central
found commons-io#commons-io;2.4 in central
found commons-net#commons-net;3.1 in central
found commons-collections#commons-collections;3.2.2 in central
found javax.servlet#servlet-api;2.5 in central
found org.mortbay.jetty#jetty;6.1.26 in central
found org.mortbay.jetty#jetty-util;6.1.26 in central
found org.mortbay.jetty#jetty-sslengine;6.1.26 in central
found com.sun.jersey#jersey-core;1.9 in central
found com.sun.jersey#jersey-json;1.9 in central
found org.codehaus.jettison#jettison;1.1 in central
found com.sun.xml.bind#jaxb-impl;2.2.3-1 in central
found javax.xml.bind#jaxb-api;2.2.2 in central
found javax.xml.stream#stax-api;1.0-2 in central
found javax.activation#activation;1.1 in central
found org.codehaus.jackson#jackson-core-asl;1.9.13 in central
found org.codehaus.jackson#jackson-mapper-asl;1.9.13 in central
found org.codehaus.jackson#jackson-jaxrs;1.9.13 in central
found org.codehaus.jackson#jackson-xc;1.9.13 in central
found com.sun.jersey#jersey-server;1.9 in central
found asm#asm;3.2 in central
found log4j#log4j;1.2.17 in central
found net.java.dev.jets3t#jets3t;0.9.0 in central
found org.apache.httpcomponents#httpclient;4.2.5 in central
found org.apache.httpcomponents#httpcore;4.2.5 in central
found com.jamesmurty.utils#java-xmlbuilder;0.4 in central
found commons-lang#commons-lang;2.6 in central
found commons-configuration#commons-configuration;1.6 in central
found commons-digester#commons-digester;1.8 in central
found commons-beanutils#commons-beanutils;1.7.0 in central
found commons-beanutils#commons-beanutils-core;1.8.0 in central
found org.slf4j#slf4j-api;1.7.10 in central
found org.apache.avro#avro;1.7.4 in central
found com.thoughtworks.paranamer#paranamer;2.3 in central
found org.xerial.snappy#snappy-java;1.0.4.1 in central
found org.apache.commons#commons-compress;1.4.1 in central
found org.tukaani#xz;1.0 in central
found com.google.protobuf#protobuf-java;2.5.0 in central
found com.google.code.gson#gson;2.2.4 in central
found org.apache.hadoop#hadoop-auth;2.7.7 in central
found org.apache.directory.server#apacheds-kerberos-codec;2.0.0-M15 in central
found org.apache.directory.server#apacheds-i18n;2.0.0-M15 in central
found org.apache.directory.api#api-asn1-api;1.0.0-M20 in central
found org.apache.directory.api#api-util;1.0.0-M20 in central
found org.apache.zookeeper#zookeeper;3.4.6 in central
found org.slf4j#slf4j-log4j12;1.7.10 in central
found io.netty#netty;3.6.2.Final in central
found org.apache.curator#curator-framework;2.7.1 in central
found org.apache.curator#curator-client;2.7.1 in central
found com.jcraft#jsch;0.1.54 in central
found org.apache.curator#curator-recipes;2.7.1 in central
found org.apache.htrace#htrace-core;3.1.0-incubating in central
found org.mortbay.jetty#servlet-api;2.5-20081211 in central
found javax.servlet.jsp#jsp-api;2.1 in central
found jline#jline;0.9.94 in central
found junit#junit;4.11 in central
found org.hamcrest#hamcrest-core;1.3 in central
found com.fasterxml.jackson.core#jackson-databind;2.2.3 in central
found com.fasterxml.jackson.core#jackson-annotations;2.2.3 in central
found com.fasterxml.jackson.core#jackson-core;2.2.3 in central
found com.amazonaws#aws-java-sdk;1.7.4 in central
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.ivy.util.url.IvyAuthenticator (file:/opt/conda/lib/python2.7/site-packages/pyspark/jars/ivy-2.4.0.jar) to field java.net.Authenticator.theAuthenticator
WARNING: Please consider reporting this to the maintainers of org.apache.ivy.util.url.IvyAuthenticator
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
found joda-time#joda-time;2.10.5 in central
[2.10.5] joda-time#joda-time;[2.2,)
:: resolution report :: resolve 2264ms :: artifacts dl 40ms
:: modules in use:
asm#asm;3.2 from central in [default]
com.amazonaws#aws-java-sdk;1.7.4 from central in [default]
com.fasterxml.jackson.core#jackson-annotations;2.2.3 from central in [default]
com.fasterxml.jackson.core#jackson-core;2.2.3 from central in [default]
com.fasterxml.jackson.core#jackson-databind;2.2.3 from central in [default]
com.google.code.findbugs#jsr305;3.0.0 from central in [default]
com.google.code.gson#gson;2.2.4 from central in [default]
com.google.guava#guava;11.0.2 from central in [default]
com.google.protobuf#protobuf-java;2.5.0 from central in [default]
com.ibm.icu#icu4j;58.2 from central in [default]
com.jamesmurty.utils#java-xmlbuilder;0.4 from central in [default]
com.jcraft#jsch;0.1.54 from central in [default]
com.sun.jersey#jersey-core;1.9 from central in [default]
com.sun.jersey#jersey-json;1.9 from central in [default]
com.sun.jersey#jersey-server;1.9 from central in [default]
com.sun.xml.bind#jaxb-impl;2.2.3-1 from central in [default]
com.thoughtworks.paranamer#paranamer;2.3 from central in [default]
commons-beanutils#commons-beanutils;1.7.0 from central in [default]
commons-beanutils#commons-beanutils-core;1.8.0 from central in [default]
commons-cli#commons-cli;1.2 from central in [default]
commons-codec#commons-codec;1.4 from central in [default]
commons-collections#commons-collections;3.2.2 from central in [default]
commons-configuration#commons-configuration;1.6 from central in [default]
commons-digester#commons-digester;1.8 from central in [default]
commons-httpclient#commons-httpclient;3.1 from central in [default]
commons-io#commons-io;2.4 from central in [default]
commons-lang#commons-lang;2.6 from central in [default]
commons-logging#commons-logging;1.1.3 from central in [default]
commons-net#commons-net;3.1 from central in [default]
io.delta#delta-core_2.11;0.5.0 from central in [default]
io.netty#netty;3.6.2.Final from central in [default]
javax.activation#activation;1.1 from central in [default]
javax.servlet#servlet-api;2.5 from central in [default]
javax.servlet.jsp#jsp-api;2.1 from central in [default]
javax.xml.bind#jaxb-api;2.2.2 from central in [default]
javax.xml.stream#stax-api;1.0-2 from central in [default]
jline#jline;0.9.94 from central in [default]
joda-time#joda-time;2.10.5 from central in [default]
junit#junit;4.11 from central in [default]
log4j#log4j;1.2.17 from central in [default]
net.java.dev.jets3t#jets3t;0.9.0 from central in [default]
org.abego.treelayout#org.abego.treelayout.core;1.0.3 from central in [default]
org.antlr#ST4;4.0.8 from central in [default]
org.antlr#antlr-runtime;3.5.2 from central in [default]
org.antlr#antlr4;4.7 from central in [default]
org.antlr#antlr4-runtime;4.7 from central in [default]
org.apache.avro#avro;1.7.4 from central in [default]
org.apache.commons#commons-compress;1.4.1 from central in [default]
org.apache.commons#commons-math3;3.1.1 from central in [default]
org.apache.curator#curator-client;2.7.1 from central in [default]
org.apache.curator#curator-framework;2.7.1 from central in [default]
org.apache.curator#curator-recipes;2.7.1 from central in [default]
org.apache.directory.api#api-asn1-api;1.0.0-M20 from central in [default]
org.apache.directory.api#api-util;1.0.0-M20 from central in [default]
org.apache.directory.server#apacheds-i18n;2.0.0-M15 from central in [default]
org.apache.directory.server#apacheds-kerberos-codec;2.0.0-M15 from central in [default]
org.apache.hadoop#hadoop-annotations;2.7.7 from central in [default]
org.apache.hadoop#hadoop-auth;2.7.7 from central in [default]
org.apache.hadoop#hadoop-aws;2.7.7 from central in [default]
org.apache.hadoop#hadoop-common;2.7.7 from central in [default]
org.apache.htrace#htrace-core;3.1.0-incubating from central in [default]
org.apache.httpcomponents#httpclient;4.2.5 from central in [default]
org.apache.httpcomponents#httpcore;4.2.5 from central in [default]
org.apache.zookeeper#zookeeper;3.4.6 from central in [default]
org.codehaus.jackson#jackson-core-asl;1.9.13 from central in [default]
org.codehaus.jackson#jackson-jaxrs;1.9.13 from central in [default]
org.codehaus.jackson#jackson-mapper-asl;1.9.13 from central in [default]
org.codehaus.jackson#jackson-xc;1.9.13 from central in [default]
org.codehaus.jettison#jettison;1.1 from central in [default]
org.glassfish#javax.json;1.0.4 from central in [default]
org.hamcrest#hamcrest-core;1.3 from central in [default]
org.mortbay.jetty#jetty;6.1.26 from central in [default]
org.mortbay.jetty#jetty-sslengine;6.1.26 from central in [default]
org.mortbay.jetty#jetty-util;6.1.26 from central in [default]
org.mortbay.jetty#servlet-api;2.5-20081211 from central in [default]
org.slf4j#slf4j-api;1.7.10 from central in [default]
org.slf4j#slf4j-log4j12;1.7.10 from central in [default]
org.tukaani#xz;1.0 from central in [default]
org.xerial.snappy#snappy-java;1.0.4.1 from central in [default]
xmlenc#xmlenc;0.52 from central in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 80 | 1 | 0 | 0 || 80 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-1f15352a-51ee-4e1b-b10b-bc5ddeef4cdf
confs: [default]
0 artifacts copied, 80 already retrieved (0kB/23ms)
20/03/12 13:36:35 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/03/12 13:36:42 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
20/03/12 13:36:42 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
20/03/12 13:36:42 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
20/03/12 13:36:42 WARN Utils: Service 'SparkUI' could not bind on port 4043. Attempting port 4044.
Spark context Web UI available at http://anaconda-0.anaconda.spark-test.svc.cluster.local:4044
Spark context available as 'sc' (master = local[*], app id = local-1584020202147).
Spark session available as 'spark'.
Welcome to Spark version 2.4.4 (ASCII banner omitted)

Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 11.0.6)
Type in expressions to have them evaluated.
Type :help for more information.

scala> spark.range(5).write.format("parquet").save("s3a://mnt/delta/testparquet")

java.io.IOException: Bucket mnt does not exist

at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:298)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:424)
at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:524)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
... 49 elided

jq error (redis.port)

Hello,
I installed zenko with helm (The symptoms are the same for both 1.1.6 and 1.2.0.)

$ helm install https://github.com/scality/Zenko/releases/download/1.2.0-rc.3/zenko-helm-chart-1.2.0-rc.3.tgz

But in some pods I get the following error:

jq: error: syntax error, unexpected ':', expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
. | .kafka.hosts="...-zenko-zenko-queue:9092" | .redis.port=tcp://10.244.153.77:6379 | .queuePopulator.logSource="mongo" | .queuePopulator.mongo.replicaSetHosts="...-zenko-mongodb-replicaset-0....-zenko-mongodb-replicaset:27017,zeronsoftn-local-zenko-mongodb-replicaset-1....-zenko-mongodb-replicaset:27017" | .s3.host="..." | .s3.port="80"                                                                            
jq: 1 compile error

The error is in how the Redis port is applied in docker-entrypoint.sh (see the sketch below).
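A minimal reproduction of the quoting problem, with config.json as a placeholder file: the Kubernetes-injected tcp://... value is interpolated into the jq filter unquoted, so jq fails to compile it; quoting the right-hand side makes the filter parse (the entrypoint would still need to pass only the port number):

jq '.redis.port=tcp://10.244.153.77:6379' config.json      # jq: error: syntax error, unexpected ':'
jq '.redis.port="tcp://10.244.153.77:6379"' config.json    # compiles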

Problem deployments are listed below:

  • backbeat-gc-consumer
  • backbeat-lifecycle-object-processor
  • backbeat-lifecycle-conductor
  • s3-data

This seems similar to the issue below:
scality/cloudserver#1723

CloudServer (s3-data) doesn't seem to have a problem with one replica, but Backbeat has multiple replicas.
Is it OK to have Redis on every pod, as in the above solution?

Some containers do not reach the running state in the Zenko Swarm stack

Hi,
I deployed the Zenko swarm stack, but some containers never reach the running state. Those are zenko-prod_spark-master, zenko-prod_spark-worker, and zenko-prod_livy.

# docker service ls
ID                  NAME                           MODE                REPLICAS            IMAGE                                    PORTS
h08dy8pzgw2w        zenko-prod_backbeat-consumer   replicated          1/1                 zenko/backbeat:pensieve-3
yvnrai069v97        zenko-prod_backbeat-producer   replicated          1/1                 zenko/backbeat:pensieve-3
w0mlb7v3jslp        zenko-prod_cache               replicated          1/1                 redis:alpine                             *:30014->6379/tcp
27h85v71f30p        zenko-prod_graphite            replicated          1/1                 scality/clueso-grafana-graphite:latest   *:3000->3000/tcp,*:8005->80/tcp,*:8081->81/tcp
9zg4k9k1oeed        zenko-prod_lb                  global              6/6                 zenko/loadbalancer:latest                *:80->80/tcp
xbtm8jgtvwgu        zenko-prod_livy                replicated          0/1                 scality/clueso-livy:pensieve             *:4040->4040/tcp,*:4041->4041/tcp,*:4042->4042/tcp,*:4043->4043/tcp,*:4044->4044/tcp,*:4045->4045/tcp,*:4046->4046/tcp,*:4047->4047/tcp,*:4048->4048/tcp,*:4049->4049/tcp,*:8998->8998/tcp
5op1u5jlvqqc        zenko-prod_queue               replicated          1/1                 wurstmeister/kafka:1.0.0                 *:30016->9092/tcp
e81x55nmmzdf        zenko-prod_quorum              replicated          1/1                 zookeeper:3.4.11                         *:30017->2181/tcp
5fph6v5y5pnx        zenko-prod_s3-data             replicated          1/1                 zenko/cloudserver:pensieve-1             *:30012->9992/tcp
051pizoatpdy        zenko-prod_s3-front            replicated          1/1                 zenko/cloudserver:pensieve-1             *:30015->8001/tcp
4lu9m8ikgp3p        zenko-prod_s3-metadata         replicated          1/1                 zenko/cloudserver:pensieve-1             *:30013->9993/tcp
0u382awvwprg        zenko-prod_spark-master        replicated          0/1                 scality/clueso-spark-master:pensieve     *:4050->4050/tcp,*:4051->4051/tcp,*:4052->4052/tcp,*:4053->4053/tcp,*:8080->8080/tcp
pxpahj5xz8y5        zenko-prod_spark-worker        replicated          0/1                 scality/clueso-spark-worker:pensieve

zenko-prod_livy keeps going up and down repeatedly, so I checked the following logs in zenko-prod_livy.

# docker service logs zenko-prod_livy
zenko-prod_livy.1.rlc2jexkus5r@ip-172-31-4-21    | Waiting for spark master
zenko-prod_livy.1.whbqhss6asan@ip-172-31-4-21    | Waiting for spark master
zenko-prod_livy.1.whbqhss6asan@ip-172-31-4-21    | Cannot contact spark master on 7077
zenko-prod_livy.1.xfco4fq4pcr2@ip-172-31-31-120    | Waiting for spark master
zenko-prod_livy.1.zf0m90z8q016@ip-172-31-31-120    | Waiting for spark master
zenko-prod_livy.1.zf0m90z8q016@ip-172-31-31-120    | Cannot contact spark master on 7077
zenko-prod_livy.1.xfco4fq4pcr2@ip-172-31-31-120    | Cannot contact spark master on 7077
zenko-prod_livy.1.ykj1nwuuft7q@ip-172-31-29-49    | Waiting for spark master
zenko-prod_livy.1.ykj1nwuuft7q@ip-172-31-29-49    | Cannot contact spark master on 7077

How can I fix these problems?

Can't put object to AWS via Zenko

Hi,

I am getting the following error when trying to upload an object to AWS via Zenko:

aws s3 --endpoint http://s3.lelab.com:8000 --profile zenko cp /etc/hosts s3://bucket123
upload failed: ../../../../etc/hosts to s3://bucket123/hosts An error occurred (ServiceUnavailable) when calling the PutObject operation (reached max retries: 4): The request has failed due to a temporary failure of the server.

[feature]: apache airflow provider

Feature Requests

Proposal

It would be useful to have an Apache Airflow provider, since Airflow is heavily used in data engineering. It is not difficult to implement: http://airflow.apache.org/docs/apache-airflow-providers/#how-to-create-your-own-provider

Current Behavior

End user must write provider

Desired Behavior

Ecosystem package for pip

Use Cases

pip install apache-airflow-providers-zenko

Additional Information

To serve you better, we'd like to know a little more about you (if you don't
mind our asking...)

  • Is this request for your company? No
    • If it is, for which company?
    • Is your company using any Scality Enterprise Edition products (RING, Zenko
      EE)? No
  • Are you willing to contribute this feature yourself? N
  • What is your position or title? Open Source Engineer
  • How did you hear about us?

Upgrade procedure from 1 to 2 version

Hello,

We contacted you last year via the forum (https://forum.zenko.io/t/deploy-zenko-2-0-1/834) to get upgrade information.

We are using Zenko 1.2.1 and would like to upgrade to version 2.
I quickly checked the documentation, but I don't know if it is up to date; the main link still references 1.2 as the latest release.

I read the 2.0 upgrade page, which explains navigating to Zenko/kubernetes/ in the tgz release tarball, but this path does not exist in the archive.

In your response in the forum you said:
We are currently working on the steps to achieve a clean upgrade from 1.2; meanwhile, we will make a post very soon announcing the current state of things.

I don't see any post related to this point; is there an upgrade procedure somewhere?

Thanks for your help!

make_bucket failed aws

Hi,

when I run this command:

!aws s3 mb s3://randhunt-twitch-demos/ --region us-east-2

I get this error:

make_bucket failed: s3://randhunt-twitch-demos/ An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Any help please

Tutorial instances don't match doc

The nginx instance is not named as the tutorial documentation indicates, or maybe that's a Mac version thing? In any case, the docker-stack.yml file uses the services below, not one named nginx:

ID NAME MODE REPLICAS IMAGE PORTS
awuju96gnhho zenko-testing_lb replicated 1/1 zenko/loadbalancer:latest *:80->80/tcp
p3us1marp8hq zenko-testing_s3 replicated 1/1 scality/s3server:latest *:0->8000/tcp
