
kubernetes's People

Contributors

arne-cl, automatedops, binternet, missinglink, mkozjak, mokto, orangejulius


kubernetes's Issues

catch snapshot restore errors

It's possible to get the snapshot name wrong when loading via step 3 of https://github.com/pelias/kubernetes/blob/master/elasticsearch/terraform/templates/load_snapshot.sh.tpl#L66

In the case of an error, elasticsearch returns 404 with an error message such as Blob object [snap-snapshot_1.dat] not found: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: xxxxxxxxx)

We should catch this error, and any other elasticsearch errors and deal with them, or simply fail the script with a descriptive error message.

The issue is compounded by a subsequent step which locks the cluster using cluster.blocks.read_only; this makes debugging difficult and requires the lock to be released manually before continuing.

So two tasks:

  • Fail on curl errors
  • Do not lock the cluster if the restore operation was unsuccessful (see the sketch below)
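A minimal sketch of both fixes, assuming a restore call roughly like the one in the template (the endpoint, repository, and snapshot names below are placeholders):

#!/bin/sh
ES="http://localhost:9200"     # placeholder Elasticsearch endpoint
REPO="pelias-snapshots"        # placeholder repository name
SNAPSHOT="snapshot_1"          # placeholder snapshot name

# --fail makes curl exit non-zero on HTTP 4xx/5xx (e.g. the 404 NoSuchKey case above)
if ! curl --fail --silent --show-error -XPOST \
  "${ES}/_snapshot/${REPO}/${SNAPSHOT}/_restore?wait_for_completion=true"; then
  echo "snapshot restore failed; leaving the cluster unlocked" >&2
  exit 1
fi

# Only lock the cluster once the restore has succeeded
curl --fail -XPUT "${ES}/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.blocks.read_only": true}}'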

Is it normal that Pelias always downloads the interpolation .gz DBs within the init-container?

Hello guys,

We are using Pelias at my company, but I'm wondering something about the pelias-interpolation deployment. Is it normal that the pelias init-container interpolation-download keeps downloading street.db.gz and address.db.gz from S3 again and again?

It costs a lot of resources and bandwidth just to keep downloading that. As I understand it, it's the only way you've found for the moment to keep streets and addresses up to date within the container, right?

I'm still looking for a smaller and more cost-efficient setup for the interpolation pod; do you have any tips and tricks?

Kind regards
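For what it's worth, the chart's values.yaml already carries a commented-out pvc block for interpolation (quoted in a later issue on this page). Enabling it would at least keep the downloaded DBs on a persistent volume instead of an emptyDir, roughly as sketched below; the storage class is an assumption, and whether the download init-container actually skips files that already exist would still need checking:

interpolation:
  enabled: true
  pvc:
    create: true
    name: interpolation-pvc
    storageClass: aws-efs        # assumption: any class available in your cluster that supports the access mode
    accessModes: ReadWriteMany
    storage: 10Gi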

How do I access the pelias api running on kubernetes?

I have managed to deploy the pelias api on a Kubernetes cluster, but I am not able to access it via the Kubernetes master.

I can access it directly using the node (via the specific port exposed externally).

Thanks,
Pravin!
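In case it helps, one way to reach the API without going through the master or a node port is kubectl port-forward. This is only a sketch: the service name is an assumption (check kubectl get svc -n pelias for the real one), and port 3100 matches the API port used in other issues on this page.

# Forward a local port to the API service inside the cluster
kubectl -n pelias port-forward service/pelias-api 3100:3100

# Then, from another shell:
curl "http://localhost:3100/v1/search?text=portland"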

Add chart as dependency

How can I add this chart to my requirements.yaml file as a dependency? I am unable to find a repository which contains this chart.

Allow configuration of many more values

It would be useful if many more of the configuration options, especially in deployment specs, were controllable by template variables.

Useful things to control include:

  • spec.minReadySeconds
  • spec.strategy.rollingUpdate.maxSurge
  • spec.template.spec.containers[].resources.limits (memory and CPU)
  • Placeholder CPUS env var
  • Pod annotations
  • Node assignment options (node selectors, node affinity/anti-affinity, pod affinity/anti-affinity)
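A sketch of one possible shape for these values (the limits, maxSurge, and nodeSelector keys are hypothetical and do not exist in the chart today; minReadySeconds, requests, and annotations already do):

api:
  minReadySeconds: 10          # already exists; listed for completeness
  maxSurge: 1                  # hypothetical: would drive spec.strategy.rollingUpdate.maxSurge
  requests:
    memory: 0.25Gi
    cpu: 0.1
  limits:                      # hypothetical: container resource limits
    memory: 2Gi
    cpu: 1
  annotations:                 # already exists as an empty map
    prometheus.io/scrape: "true"
  nodeSelector:                # hypothetical: see also the nodeSelector issue below
    kubernetes.io/os: linux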

Error applying terraform

Error: Error applying plan:

1 error(s) occurred:

* aws_autoscaling_group.elasticsearch: 1 error(s) occurred:

* aws_autoscaling_group.elasticsearch: "pelias-dev-elasticsearch": Waiting up to 10m0s: Need at least 5 healthy instances in ASG, have 0. Most recent activity: {
  ActivityId: "04058d79-c252-871b-9670-97626ce6d90a",
  AutoScalingGroupName: "pelias-dev-elasticsearch",
  Cause: "At 2018-05-14T11:36:59Z a user request created an AutoScalingGroup changing the desired capacity from 0 to 5.  At 2018-05-14T11:37:05Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 5.",
  Description: "Launching a new EC2 instance.  Status Reason: Access denied when attempting to assume role arn:aws:iam::XXXXXXX:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling. Validating load balancer configuration failed.",
  Details: "{\"Subnet ID\":\"subnet-f1bcb5ab\",\"Availability Zone\":\"us-east-1a\"}",
  EndTime: 2018-05-14 11:37:06 +0000 UTC,
  Progress: 100,
  StartTime: 2018-05-14 11:37:06.465 +0000 UTC,
  StatusCode: "Failed",
  StatusMessage: "Access denied when attempting to assume role arn:aws:iam::XXXXXXX:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling. Validating load balancer configuration failed."
}

pelias-pip remains in CrashLoopBackOff

I am trying to get pelias running on kubernetes. As I'm not allowed to run helm install,
I installed pelias manually (using kubectl apply with the files that helm template would generate).

pelias-pip is often in CrashLoopBackOff status.

$ kubectl-1.8.15 -n geocoding-staging describe pod pelias-pip-3387644077-8bj7n
[...]
  Normal   Created     25m                 kubelet, dep-kprodv2-100  Created container with id 759fcad828774a090ad9fd6867d89f7da67e0e731705420fc3fa5cdeeb380931
  Normal   Started     25m                 kubelet, dep-kprodv2-100  Started container with id 759fcad828774a090ad9fd6867d89f7da67e0e731705420fc3fa5cdeeb380931
  Warning  BackOff     24m (x14 over 27m)  kubelet, dep-kprodv2-100  Back-off restarting failed container
  Warning  FailedSync  24m (x7 over 25m)   kubelet, dep-kprodv2-100  Error syncing pod, skipping: failed to "StartContainer" for "pelias-pip" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=pelias-pip pod=pelias-pip-33
87644077-8bj7n_geocoding-staging(efb45eba-7655-11e9-b0f2-901b0efcedbb)"

The logs of the initContainer showed that the download was successful:

$ kubectl-1.8.15 -n geocoding-staging logs -f pelias-pip-3387644077-8bj7n -c download
[...]
Downloading whosonfirst-data-admin-zw-latest.db.bz2                                                                                                                                                                                          
done downloading whosonfirst-data-admin-zw-latest.db.bz2                                                                                                                                                                                     
done downloading whosonfirst-data-admin-zm-latest.db.bz2                                                                                                                                                                                     
done downloading whosonfirst-data-admin-us-latest.db.bz2                                                                                                                                                                                     
done downloading whosonfirst-data-latest.db.bz2                                                                                                                                                                                              
All done!

but the pelias-pip logs show a SqliteError: disk I/O error. I'm using emptyDir for the data volume,
so I don't know how this could happen. Any ideas?

$ kubectl-1.8.15 -n geocoding-staging logs pelias-pip-3146733653-rr5ds
{"level":"info","message":"starting with layers neighbourhood,borough,locality,localadmin,county,macrocounty,macroregion,region,dependency,country,empire,continent,marinearea,ocean","label":"wof-pip-service:master","timestamp":"2019-05-14T10:01:52.033Z"}
pip-service is now running on port 3102
{"level":"info","message":"marinearea worker loaded 305 features in 87.217 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:21.489Z"}
{"level":"info","message":"borough worker loaded 273 features in 87.512 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:21.580Z"}
{"level":"info","message":"ocean worker loaded 7 features in 87.427 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:21.705Z"}
{"level":"info","message":"dependency worker loaded 40 features in 88.31 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:22.506Z"}
{"level":"info","message":"empire worker loaded 11 features in 89.299 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:23.491Z"}
{"level":"info","message":"macroregion worker loaded 94 features in 90.313 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:24.595Z"}
{"level":"info","message":"continent worker loaded 8 features in 90.979 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:25.283Z"}
{"level":"info","message":"macrocounty worker loaded 477 features in 92.131 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:26.398Z"}
{"level":"info","message":"country worker loaded 210 features in 93.094 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:27.108Z"}
{"level":"info","message":"region worker loaded 4175 features in 97.457 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:31.625Z"}
{"level":"info","message":"neighbourhood worker loaded 29316 features in 107.278 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:03:41.731Z"}
{"level":"info","message":"136.243.46.254 - - [14/May/2019:10:03:59 +0000] \"GET /12/12 HTTP/1.1\" - - \"-\" \"Go-http-client/1.1\"","label":"pip","timestamp":"2019-05-14T10:03:59.062Z"}
{"level":"info","message":"county worker loaded 40797 features in 128.074 seconds","label":"wof-pip-service:master","timestamp":"2019-05-14T10:04:02.458Z"}
{"level":"info","message":"136.243.46.254 - - [14/May/2019:10:04:09 +0000] \"GET /12/12 HTTP/1.1\" - - \"-\" \"Go-http-client/1.1\"","label":"pip","timestamp":"2019-05-14T10:04:09.056Z"}

/code/pelias/pip-service/node_modules/pelias-whosonfirst/src/components/sqliteStream.js:25
      const elt = this._iterator.next();
                                 ^
SqliteError: disk I/O error
    at SQLiteStream._read (/code/pelias/pip-service/node_modules/pelias-whosonfirst/src/components/sqliteStream.js:25:34)
    at SQLiteStream.Readable.read (_stream_readable.js:452:10)
    at flow (_stream_readable.js:922:34)
    at resume_ (_stream_readable.js:904:3)
    at process._tickCallback (internal/process/next_tick.js:63:19)
    at Function.Module.runMain (internal/modules/cjs/loader.js:745:11)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:743:3)
{"level":"error","message":"locality worker exited unexpectedly with code 1, signal null","label":"wof-pip-service:master","timestamp":"2019-05-14T10:04:10.968Z"}
{"level":"info","message":"neighbourhood worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.970Z"}
{"level":"info","message":"continent worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.970Z"}
{"level":"info","message":"dependency worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.970Z"}
{"level":"info","message":"macrocounty worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.971Z"}
{"level":"info","message":"county worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.971Z"}
{"level":"info","message":"region worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.971Z"}
{"level":"info","message":"country worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.971Z"}
{"level":"info","message":"borough worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.972Z"}
{"level":"info","message":"macroregion worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.972Z"}
{"level":"info","message":"empire worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.972Z"}
{"level":"info","message":"ocean worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.976Z"}
{"level":"info","message":"marinearea worker process exiting, stats: {\"calls\":0,\"hits\":0,\"misses\":0}","label":"admin-lookup:worker","timestamp":"2019-05-14T10:04:10.977Z"}

/code/pelias/pip-service/node_modules/pelias-wof-admin-lookup/src/pip/index.js:166
        throw `${layer} worker shutdown unexpectedly`;
        ^
locality worker shutdown unexpectedly

Kubernetes doc out of date

Hey team!

Your Kubernetes documentation (https://github.com/pelias/kubernetes/blob/master/README.md) seems to be out of date:

  • The Helm installation instructions lead to the current Helm version 3, which is incompatible with the sample code in the subchapter Pelias Helm Chart installation, i.e. --name is an unknown flag
  • The location of the Pelias chart is not explained:

./path/to/pelias/chart

if it is the current directory (https://github.com/pelias/kubernetes), then the current directory is not compatible with Helm 3.

helm repo add kubernetes https://github.com/pelias/kubernetes
leads to
Error: looks like "https://github.com/pelias/kubernetes" is not a valid chart repository or cannot be reached: failed to fetch https://github.com/pelias/kubernetes/index.yaml : 404 Not Found

Or did I understand the instructions wrong somehow?
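For reference, the Helm 3 equivalent of the README's Helm 2 command drops --name and takes the chart directory directly, along the lines of the sketch below (assuming the templates themselves render under Helm 3, which other issues on this page suggest may still need fixes):

git clone https://github.com/pelias/kubernetes.git
cd kubernetes
kubectl create namespace pelias
helm install pelias . --namespace pelias -f values.yaml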

Helm chart not available

helm repo add pelias-charts https://github.com/pelias/kubernetes
Error: looks like "https://github.com/pelias/kubernetes" is not a valid chart repository or cannot be reached: failed to fetch https://github.com/pelias/kubernetes/index.yaml : 404 Not Found
helm repo add pelias-chart https://github.com/pelias/pelias
Error: looks like "https://github.com/pelias/pelias" is not a valid chart repository or cannot be reached: failed to fetch https://github.com/pelias/pelias/index.yaml : 404 Not Found
git clone https://github.com/pelias/kubernetes
helm template --namespace pelias kubernetes/Chart.yaml -f kubernetes/values.yaml 
Error: file '/home/ubuntu/Projects/kubernetes-gcp/pelias/kubernetes/Chart.yaml' seems to be a YAML file, but expected a gzipped archive

Where do I get the chart from? The docs just say to reference the [non-existent] file. Helm has lost the battle; Kustomize is vastly superior in every way and is now built into kubectl. Please consider upgrading to Kustomize.
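As a side note, helm template (and helm install) expect the chart directory rather than Chart.yaml itself, so the last command above would be closer to the following (Helm 3 syntax):

git clone https://github.com/pelias/kubernetes
helm template pelias ./kubernetes --namespace pelias -f ./kubernetes/values.yaml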

unable to persist data

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  annotations:
    pv.beta.kubernetes.io/gid: "1004"
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/kubevolume"

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

In values.yaml I have the following:

enabled: true
replicas: 1
host: "http://pelias-pip-service:3102/"
dockerTag: "latest"
maxUnavailable: 0 # adjusts rolling update settings
retries: 1 # number of time the API will retry requests to the pip service
timeout: 5000 # time in ms the API will wait for pip service responses
annotations: {}
pvc:
  create: true
  name: task-pv-claim
  storageClass: manual
  accessModes: ReadWriteMany
  storage: 10Gi
  annotations: {}

helm install --name pelias --namespace pelias /data/git/pelias-kubernetes/kubernetes/ -f /data/git/pelias-kubernetes/kubernetes/values.yaml

And then I get the following error in my kubernetes dashboard:
"pod has unbound immediate PersistentVolumeClaims"

What am I missing?

Even if I disable the dashboard, libpostal and interpolation services, they are still running

Hey team!

I was using your awesome geocoding engine when I noticed something interesting.
Let me tell you more about it.


Here's what I did 😇

Updated values.yaml with the settings below:

placeholder:
  enabled: false
..

libpostal:
  enabled: false


Here's what I got 🙀

I can see these services are still getting deployed to the Kubernetes cluster.

Here's what I was expecting ✨


Here's what I think could be improved 🏆

Interpolation service not coming up for me

Interpolation service not coming up for me...

I just get CrashLoopBackOff not long after it tries to attach the persistent volume claim:

  Type     Reason                  Age   From                                               Message
  ----     ------                  ----  ----                                               -------
  Normal   Scheduled               24s   default-scheduler                                  Successfully assigned pelias/pelias-interpolation-74f4d87f79-58ztw to ip-10-1-12-71.us-east-2.compute.internal
  Warning  FailedAttachVolume      24s   attachdetach-controller                            Multi-Attach error for volume "pvc-4b24c95a-30ae-11ea-bfa5-02a93554a16a" Volume is already exclusively attached to one node and can't be attached to another
  Normal   SuccessfulAttachVolume  9s    attachdetach-controller                            AttachVolume.Attach succeeded for volume "pvc-4b24c95a-30ae-11ea-bfa5-02a93554a16a"
  Normal   Pulling                 4s    kubelet, ip-10-1-12-71.us-east-2.compute.internal  pulling image "pelias/interpolation:latest"
jknepper@Johns-MacBook-Pro [~/ops-tools/build-files/pelias]kubectl logs -f pelias-interpolation-74f4d87f79-58ztw  -n pelias


Error from server (BadRequest): container "pelias-interpolation" in pod "pelias-interpolation-74f4d87f79-58ztw" is waiting to start: PodInitializing
jknepper@Johns-MacBook-Pro [~/ops-tools/build-files/pelias]

Helm 3 error

Using helm 3, I'm getting errors. This might help with your testing.

I downloaded the repo:

~$ git clone https://github.com/pelias/kubernetes.git

Then I changed the directory

~$ cd kubernetes/

And then ran the Helm 3 command suggested in a closed issue (see link below):

~/kubernetes$ helm install pelias --namespace pelias . -f values.yaml
Error: template: pelias/templates/configmap.tpl:34:14: executing "pelias/templates/configmap.tpl" at <(.Values.api.targets.auto_discover) and (or (eq .Values.api.targets.auto_discover true) (eq .Values.api.targets.auto_discover false))>: can't give argument to non-function .Values.api.targets.auto_discover  

Originally posted by @skulos in #123 (comment)

Investigate using EBS snapshots to load Elasticsearch data

We've generally advocated for using S3 snapshots to load Pelias data into Elasticsearch. S3 is a nice, reliable data storage service, has great support in Elasticsearch, and is pretty darn cheap.

However, it takes a long time to load the ~400GB of data a full planet Elasticsearch cluster needs.

One approach that has sometimes been considered is to avoid using Elasticsearch's built-in clustering functionality and instead have each Elasticsearch instance keep a complete, identical, separate copy of all data.

This has some potential disadvantages, but has some very notable advantages:

  • Prevents catastrophic performance loss from shard imbalance
  • Easy to scale up/down without hurting overall cluster performance during shard rebalancing

Another advantage is that it might be possible to considerably speed up launching of new Elasticsearch instances by using EBS snapshots to store data. EBS volumes can be created quickly from an EBS snapshot.

There are some questions to answer here, but it's worth investigating:

  • What is the performance of a freshly created EBS volume? My understanding is that while the volume is available quickly, it keeps copying data from S3 (where EBS snapshots are stored) silently in the background, and until that's complete, performance is quite a bit worse.
  • How quickly can Elasticsearch start up from a fresh snapshot? My testing thus far suggests it's only a few seconds, which is pretty great!
  • Does Kubernetes support creating a persistent volume from an EBS snapshot? It looks like draft support is at least partly underway (a rough sketch of a workaround follows below)
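Sketching the last point: one route that should work without waiting for in-cluster snapshot support is to create the EBS volume from the snapshot out-of-band (e.g. aws ec2 create-volume --snapshot-id ...) and hand it to Kubernetes as a pre-provisioned PersistentVolume. The volume ID, name, and size below are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-0
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder: the volume created from the EBS snapshot
    fsType: ext4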

Add a Kubernetes Elasticsearch setup, as Elasticsearch now suits Kubernetes very well

Hello guys,

In your documentation you say that Elasticsearch is not really well suited to Kubernetes. I think that was true in past years, but there are now a lot of Elasticsearch setups running on Kubernetes, mine for example.

Since you don't yet support Elasticsearch versions newer than 2.4.1, I made a Kubernetes deployment on AKS with my own Elasticsearch 2.4.1 image and a client/master/data StatefulSet setup with persistent disks, replication and everything, and ... it works like a charm.

Would you be interested in a PR adding an Elasticsearch Kubernetes deploy method to your project?

Templates are using outdated Kubernetes version

Hey team!

I was using your awesome geocoding engine when I noticed something interesting.
Let me tell you more about it.


Here's what I did 😇

helm install --name pelias --namespace pelias ./ -f ./values.yaml


Here's what I got 🙀

Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"


Here's what I was expecting ✨

Magical geocoding success


Here's what I think could be improved 🏆

We should migrate to Kubernetes 1.16, which includes updating the templates from extensions/v1beta1 to apps/v1, plus other structural changes to the templates.
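For reference, the template change is mostly moving the apiVersion and adding the now-required spec.selector; a sketch follows (the names, labels, and image are illustrative, not the chart's actual values):

# before: apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pelias-api
spec:
  replicas: 1
  selector:              # required in apps/v1
    matchLabels:
      app: pelias-api
  template:
    metadata:
      labels:
        app: pelias-api
    spec:
      containers:
        - name: pelias-api
          image: pelias/api:latest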

Helm template failed. Error: render error in "pelias/templates/api-deployment.tpl": template: pelias/templates/api-deployment.tpl:6:22: executing "pelias/templates/api-deployment.tpl" at <.Values.api.replicas>: can't evaluate field replicas in type interface {} : exit status 1

I'm having problems installing pelias with helm. I only get this error message from helm:
Helm template failed. Error: render error in "pelias/templates/api-deployment.tpl": template: pelias/templates/api-deployment.tpl:6:22: executing "pelias/templates/api-deployment.tpl" at <.Values.api.replicas>: can't evaluate field replicas in type interface {} : exit status 1

Does anybody face the same issue?

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:38Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
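In Go templates, "can't evaluate field replicas in type interface {}" usually means .Values.api is empty, i.e. the chart's values.yaml was not picked up by helm template / helm install. A minimal values sketch that satisfies that particular template line (the real values.yaml in this repo defines much more):

api:
  replicas: 1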

Typo in the docs

You're missing a colon in the packer variables.json after

"elasticsearch_version": "2.4.6"

helm install error: attributionURL is not a method but has arguments

Running the install command in the root directory of this repo, I get the following error:

$ helm install --name pelias --namespace pelias . -f values.yaml
Error: render error in "pelias/templates/configmap.tpl": template: pelias/templates/configmap.tpl:32:65: executing "pelias/templates/configmap.tpl" at <.Values.api.attribut...>: attributionURL is not a method but has arguments

If I remove the attributionURL line from configmap.tpl, I get the same error for the indexName
variable.

I use these versions:

$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

$ minikube version
minikube version: v1.0.0

Missing Documentation

When running terraform apply the following errors occurred:

Error: Error refreshing state: 2 error(s) occurred:

* data.aws_subnet_ids.all_subnets: 1 error(s) occurred:

* data.aws_subnet_ids.all_subnets: data.aws_subnet_ids.all_subnets: no matching subnet found for vpc with id vpc-fcef1584
* data.aws_ami.elasticsearch_ami: 1 error(s) occurred:

* data.aws_ami.elasticsearch_ami: data.aws_ami.elasticsearch_ami: Your query returned no results. Please change your search criteria and try again.

This happens because there are filters in ami.tf and vpc.tf which are not defined; I commented them out and it seems to be working now.

It appears that the documentation should point out that the VPC subnets need a name as defined in variables.tf.

Why using NFS Persistent Volumes for the importers?

Hi Pelias Team,

First of all, thanks for creating and maintaining this awesome project.

We are evaluating the possibility of running Pelias in production in a GKE cluster. As I'm pretty new to the project, I don't understand the need for NFS persistent volumes for running the importers. My understanding was that the importers are independent, so there is no need to share data between the different jobs running the import.
Our idea was to use persistent volumes backed by the cloud provider's plugin (GCP in this case) rather than NFS. Would that be possible?

Regards,
Pau.

Unable to revive connection while creating pelias schema index on ES

When I run the schema-create pod it throws the error below:
http://pelias-staging-es-elb-XXXX.ap-southeast-2.elb.amazonaws.com:9200/_nodes => getaddrinfo EAI_AGAINpelias-staging-es-elb-XXXXX.ap-southeast-2.elb.amazonaws.com pelias-staging-es-elb-XXXXX.ap-southeast-2.elb.amazonaws.com:9200
at Log.error (/code/pelias/schema/node_modules/elasticsearch/src/lib/log.js:239:56)
at checkRespForFailure (/code/pelias/schema/node_modules/elasticsearch/src/lib/transport.js:298:18)
at HttpConnector. (/code/pelias/schema/node_modules/elasticsearch/src/lib/connectors/http.js:171:7)
at ClientRequest.wrapper (/code/pelias/schema/node_modules/lodash/lodash.js:4929:19)
at ClientRequest.emit (events.js:182:13)
at Socket.socketErrorListener (_http_client.js:392:9)
at Socket.emit (events.js:182:13)
at emitErrorNT (internal/streams/destroy.js:82:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
at process._tickCallback (internal/process/next_tick.js:63:19)

Elasticsearch WARNING: 2019-09-06T01:30:05Z
Unable to revive connection: http://pelias-staging-es-elb-XXXXX.ap-southeast-2.elb.amazonaws.com:9200/

Elasticsearch WARNING: 2019-09-06T01:30:05Z
No living connections

{ Error: No Living connections
at sendReqWithConnection (/code/pelias/schema/node_modules/elasticsearch/src/lib/transport.js:266:15)
at next (/code/pelias/schema/node_modules/elasticsearch/src/lib/connection_pool.js:243:7)
at process._tickCallback (internal/process/next_tick.js:61:11) message: 'No Living connections' }
please install mandatory plugins before continuing.

Here are the ES settings specified in values.yaml and passed through configmap.tpl:
elasticsearch:
  host: "pelias-staging-es-elb-XXXXX.ap-southeast-2.elb.amazonaws.com"
  port: 9200
  protocol: "http"
  keepAlive: false
  DeadTimeout: 1200
  maxRetries: 15

Do you have any idea how I can work around this issue?

Which data is shared between pelias services?

I have trouble getting the helm installation of pelias running (the docker-compose version works fine). I don't understand if all the services accessing the /data mountPath expect to access the
same PV / share data between each other.

In the values.yaml, the services interpolation and pip are the only ones with a PVC.
Both PVCs are suggested to be used with accessModes: ReadWriteMany.
Since the suggested names of the PVCs are different (interpolation-pvc vs. pip-pvc), I would
assume that they access different PVs (i.e. interpolation and pip don't share data).

If this is true, then accessModes: ReadWriteOnce would be sufficient here.

When running helm template I noticed that the placeholder service is accessing
a VolumeMount named data-volume with mountPath /data (just like interpolation and pip do),
but values.yaml does not suggest a PVC for the placeholder service. In this case,
helm generates this, which is probably wrong:

      volumes:
        - name: data-volume
          emptyDir: {}

Pods running even if disabled

Some of our pods are running even though they are disabled in config:

pelias-api-5566d6f997-xt8hl            1/1       Running            0          10m
pelias-libpostal-6857cdbdf8-rjckp      1/1       Running            1          10m
pelias-placeholder-79977fd7f9-qw2p9    1/1       Running            0          10m

Full config:

elasticsearch:
  host: "some.host.io"
  port: 9211
  protocol: "https"
  auth: "foo:bar"

## API settings
api:
  replicas: 1
  canaryReplicas: 0
  dockerTag: "latest"
  canaryDockerTag: null # set this value to enable the canary deployment
  indexName: "pelias"
  attributionURL: "http://api.yourpelias.com/attribution"
  # Whether the API service should be externally accessible
  # Set this to true if you want to, for example on AWS, set up an ELB for the API
  externalService: false
  # whether the external service  should be internet facing or private (default private)
  privateLoadBalancer: false
  accessLog: "common" # allows configuring access log format. Empty string disables access log
  autocomplete:
    exclude_address_length: 0
  requests:
    memory: 0.25Gi
    cpu: 0.1
  minReadySeconds: 10
  annotations: {}

placeholder:
  enabled: false
  replicas: 1
  host: "http://pelias-placeholder-service:3000/"
  dockerTag: "latest"
  storeURL: "https://s3.amazonaws.com/pelias-data.nextzen.org/placeholder/store.sqlite3.gz"
  cpus: 1 # how many CPUs to allow using via the npm `cluster2` module
  retries: 1 # number of time the API will retry requests to placeholder
  timeout: 5000 # time in ms the API will wait for placeholder responses
  annotations: {}

libpostal:
  enabled: false
  replicas: 1
  host: "http://pelias-libpostal-service:4400/"
  dockerTag: "latest"
  retries: 1 # number of time the API will retry requests to libpostal
  timeout: 5000 # time in ms the API will wait for libpostal responses
  annotations: {}

interpolation:
  enabled: false
  replicas: 1
  host: "http://pelias-interpolation-service:3000/"
  dockerTag: "latest"
  # URL prefix of location where streets.db and address.db will be downloaded
  downloadPath: " https://s3.amazonaws.com/pelias-data.nextzen.org/interpolation/current"
  retries: 1 # number of time the API will retry requests to interpolation service
  timeout: 5000 # time in ms the API will wait for interpolation service responses
  annotations: {}
  pvc: {}
#    create: true
#    name: interpolation-pvc
#    storageClass: aws-efs
#    accessModes: ReadWriteMany
#    storage: 10Gi
#    annotations: {}

pip:
  enabled: false
  replicas: 1
  host: "http://pelias-pip-service:3102/"
  dockerTag: "latest"
  maxUnavailable: 0 # adjusts rolling update settings
  retries: 1 # number of time the API will retry requests to the pip service
  timeout: 5000 # time in ms the API will wait for pip service responses
  annotations: {}
  pvc: {}
#    create: true
#    name: pip-pvc
#    storageClass: aws-efs
#    accessModes: ReadWriteMany
#    storage: 10Gi
#    annotations: {}

dashboard:
  enabled: false
  replicas: 1
  dockerTag: "latest"
  domain: null # set this to enable an ingress

# Deprecated fields
# pipEnabled: true
# pipHost: "http://pelias-pip-service:3102/"

# interpolationEnabled: false
# interpolationHost: "http://pelias-interpolation-service:3000/"

# Importer settings
whosonfirst:
  sqlite: false
  dataHost: null

So in this case, according to #44, libpostal and placeholder shouldn't be running, right?

Installed latest version by running:
helm install /tmp/pelias --name pelias --namespace default --values pelias.yaml
where /tmp/pelias is tracking e15e09e7e62aa4b482ed52245ef9fc9507e2898e.

Support a nodeSelector value

I'm running a cluster that uses both Windows and Linux nodes. I'd like to be able to specify a nodeSelector property in values.yaml so that I can tell the chart to deploy these pods only to the Linux nodes.
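A sketch of what this could look like; the values key is hypothetical (the chart does not define it today):

# values.yaml (hypothetical key, per-service or chart-wide)
api:
  nodeSelector:
    kubernetes.io/os: linux   # use beta.kubernetes.io/os on older clusters

On the template side this would be rendered into the pod spec with standard Helm templating, e.g. a nodeSelector: block populated by {{ toYaml .Values.api.nodeSelector | indent 8 }}.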

polylines-import: road_network.gz not available in S3 bucket

The initContainer of the polylines-import job crashes:

$ kubectl-1.8.15 -n geocoding-staging logs polylines-import-3fmjl -c download
Connecting to s3.us-east-2.amazonaws.com (52.219.88.186:443)
wget: note: TLS certificate validation not implemented
-                   gunzip: invalid magic
 100% |********************************|   449  0:00:00 ETA

The underlying error is a redirect (301) without a Location header. If you GET that URL in your browser, you'll get this tip:

<Error>
  <Code>PermanentRedirect</Code>
  <Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
  <Endpoint>s3.amazonaws.com</Endpoint>
  <Bucket>pelias-data.nextzen.org</Bucket>
  <RequestId>7698B1C7B9F041B6</RequestId>
  ...
</Error>

Unfortunately, https://s3.amazonaws.com/pelias-data.nextzen.org/poylines/road_network.gz also does not exist.

How does the pelias api communicate with the PIP API on Kubernetes?

Hey team!
I am getting a 500 error. Looking closely into the pelias api pod logs:

{"level":"info","message":"::ffff:100.96.2.1 - - [20/Sep/2019:05:59:00 +0000] "GET /v1/reverse?point.lat=51.5081124&point.lon=-0.0759493 HTTP/1.1" 500 486","label":"api","timestamp":"2019-09-20T05:59:00.091Z"}


Here's what I did 😇

The Pelias API is running fine on the Kubernetes cluster...
I tried to hit http://pelias-api-load-balancer-url:3100/v1/reverse?point.lat=51.5081124&point.lon=-0.0759493

Here is a snippet of values.yaml:

pip:
  enabled: true
  replicas: 1
  host: "http://pelias-pip-service:3102/"
  dockerTag: "latest"
..

I did a curl on the pelias api pod, but it gives nothing:
curl http://100.71.136.79:3102
while curl on the pelias api itself works fine.

Here's what I got 🙀

but received the error below:

{"controller":"search","queryType":"reverse","result_count":0,"params":{"size":10,"private":false,"point.lat":51.5081124,"point.lon":-0.0759493,"boundary.circle.lat":51.5081124,"boundary.circle.lon":-0.0759493,"lang":{"name":"English","iso6391":"en","iso6393":"eng","defaulted":false},"querySize":20},"retries":0,"text_length":0,"level":"info","message":"elasticsearch","label":"api","timestamp":"2019-09-20T05:58:51.671Z"}
{"level":"error","message":"elasticsearch error Error: No Living connections","label":"api","timestamp":"2019-09-20T05:58:51.671Z"}
{"level":"info","message":"::ffff:100.96.2.1 - - [20/Sep/2019:05:58:51 +0000] "GET /v1/reverse?point.lat=51.5081124&point.lon=-0.0759493 HTTP/1.1" 500 486","label":"api","timestamp":"2019-09-20T05:58:51.672Z"}
Elasticsearch WARNING: 2019-09-20T05:59:00Z


Here's what I was expecting ✨


Here's what I think could be improved 🏆

Dashboard display empty values

Accessing the dashboard service displays the page layout with colorful rectangles, but no values are presented inside. It looks like data can't be read from the pelias index in Elasticsearch.

pip.enabled=false does not work as expected

Describe the bug
Deploying the chart while setting pip.enabled=false tries to deploy the pip pods anyway

Steps to Reproduce

helm install ... --set pip.enabled=false ...
kubectl get pods

==> Pod pip is created

Expected behavior
Nothing related to pip should be deployed when pip.enabled=false is set

PIP service and volume claims are disabled when pip.enabled=false


All calls to `npm` need to be removed in build templates

As we learned in pelias/pelias#745, starting Pelias services by calling npm start has enough issues that we want to completely move away from it.

All our services are handled automatically (the Dockerfiles set a default cmd). However, all the importers need to be updated. They now have start scripts (usually ./bin/start), and we should move to using them in these templates.
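A sketch of what that change looks like in an importer job template (the container name and image below are illustrative, not the chart's actual values):

containers:
  - name: openstreetmap-import
    image: pelias/openstreetmap:latest
    # before: command: ["npm", "start"]
    # after: call the importer's start script directly
    command: ["./bin/start"]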

address.db.gz and streets.db.gz are not accessible anymore from https://s3.amazonaws.com

Hey team!

I was using your awesome geocoding engine when I noticed something interesting.
Let me tell you more about it.

When I try to run the interpolation engine, it fails to start because https://s3.amazonaws.com/pelias-data.nextzen.org/interpolation/current/street.db.gz and https://s3.amazonaws.com/pelias-data.nextzen.org/interpolation/current/address.db.gz are not accessible anymore.

wget -O - https://s3.amazonaws.com/pelias-data.nextzen.org/interpolation/current/street.db.gz
wget -O - https://s3.amazonaws.com/pelias-data.nextzen.org/interpolation/current/address.db.gz

Both return: Error 403: Forbidden.

Is there any alternative? Because a pretty important service has stopped working. :(

Clarify where to specify data source addresses for non-Portland Metro builds

Hey team!

I was trying to understand how to build a USA-only instance of Pelias and I noticed there is a lack of "direction" on how to specify data sources.


Here's what I was expecting ✨

A documented set of steps for downloading/hosting data and specifying the resource addresses to Helm.


Here's what I think could be improved 🏆

I can think of a few solutions to this problem. An experienced contributor's opinion on best practices for the Pelias ecosystem would be appreciated.

  • Clear documentation on what to change for your specific build, using the existing templates.
  • A separate object in the values.yaml files containing clearly defined dataSources.
  • A separate data-sources.yaml that's imported to the templates.

It would really help to get input on this, as I'm still not clear on what needs to change to specify custom data repositories. I'm happy to implement it and submit the PR! (A rough sketch of the second option follows below.)
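For the second option, a purely hypothetical shape for such a block (none of these keys exist in the chart today, and the URLs are placeholders):

dataSources:
  openstreetmap:
    download: https://example.com/extracts/us-latest.osm.pbf   # placeholder URL
  openaddresses:
    files:
      - us/or/portland_metro.csv
  whosonfirst:
    dataHost: https://example.com/whosonfirst/                 # placeholder URL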

Correctly set limits in /etc/security/limits.conf

default.sh sets elasticsearch limits in a way that might not be quite right:

echo "elasticsearch soft nofile 128000\n
elasticsearch hard nofile 128000\n
root soft nofile 128000\n
root hard nofile 128000" | sudo tee --append /etc/security/limits.conf

The \ns appear in the final file, which is probably not correct.

Additionally, the Elasticsearch logs on startup show the following warnings, suggesting additional properties to set:

[2018-09-28 03:23:58,695][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2018-09-28 03:23:58,695][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2018-09-28 03:23:58,695][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-09-28 03:23:58,696][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
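A possible fix (a sketch): append with a heredoc so no literal \n sequences end up in the file, and include the memlock lines the startup warnings suggest:

sudo tee --append /etc/security/limits.conf > /dev/null <<'EOF'
elasticsearch soft nofile 128000
elasticsearch hard nofile 128000
root soft nofile 128000
root hard nofile 128000
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
EOF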

Full planet build wof data

As a beginner, I have some questions that are difficult to figure out.

I want to do a full planet build.

In the Helm chart, only the data for Portland Metro is downloaded. If I want other data, I have to do a separate build and adjust the index in the config. Up to here everything is understandable.

Questions

  1. Does the WOF data have to be present for pip, or does pip also access Elasticsearch?
  2. Why is the value for ephemeral_storage so high for the pip service?
  3. If the WOF data must be present for pip, wouldn't it be more sensible to run pip as a StatefulSet so that each instance has its own WOF SQLite database?
