amq-broker-helm's Issues

Deploying AMQ chart to multiple namespaces fails if route does not include namespace

The introduction of the new parameter "openshift_appdomain" makes it harder to deploy to multiple namespaces.

The namespace must be part of the route template; otherwise we get overlapping routes to the AMQ console when deploying to more than one namespace.

https://github.com/mcaimi/amq-custom-templates-openshift/blob/master/artemis-broker/templates/route.yaml#L43

Either the documentation needs to be updated, or the use of "openshift_appdomain" needs to be rethought, or the namespace needs to be added everywhere "openshift_appdomain" is used.
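
For example, the route host could include the release namespace so hosts stay unique per namespace (a minimal sketch; the value key .Values.application.name is an assumption, not taken from the chart):

spec:
  host: {{ .Values.application.name }}-{{ .Release.Namespace }}.{{ .Values.openshift_appdomain }}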

Nil reference in if statement in broker.xml

The following line actually fails if "type" is not set. (The Go template language evaluates every operand of an "and" expression; it does not short-circuit after the first false operand the way Java does.)

https://github.com/mcaimi/amq-custom-templates-openshift/blob/master/helm/persistence/conf/broker.xml#L144

For example, the and operation could be implemented with nested if statements like this:

         {{- $isMulticast := "" }}
         {{- if ( .type ) }}{{- if eq .type "multicast" }}
             {{- $isMulticast = print "true" }}
         {{- end }}{{- end }}
         {{- if $isMulticast }}
           <multicast/>
         {{- else }}
           <anycast>
             <queue name="{{ .name }}" />
           </anycast>
         {{- end }}
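
Alternatively, the nil check can be folded into a single expression by defaulting the missing field before comparing (a sketch using the sprig "default" function; the fallback value "anycast" is an assumption):

         {{- if eq (default "anycast" .type) "multicast" }}
           <multicast/>
         {{- else }}
           <anycast>
             <queue name="{{ .name }}" />
           </anycast>
         {{- end }}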

Helm Chart persistence, pod fails with broker.xml: No such file or directory

When installing the helm chart "persistence" we get the following error during startup of the pod.
We are using OCP 4.4 and ArgoCD 1.6.3 to deploy the helm chart; however, that is probably not related to the error.

It looks like the folder "broker" is not created as expected; note that the log below renders the instance path as /home/jboss/}}, which suggests a template value is not being substituted.
The ConfigMaps and DeploymentConfig are created with the expected content.

Removing provided -XX:+UseParallelOldGC in favour of artemis.profile provided option
Configuring Broker
Setting journal type to nio
Creating Broker with args --silent --role admin --name broker --http-host amq-broker-persistence-1-wzqv2 --java-options=-Djava.net.preferIPv4Stack=true  --user XXXXX --password XXXXX  --allow-anonymous --data /opt/amq/data --no-amqp-acceptor --host 0.0.0.0 --nio
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Creating ActiveMQ Artemis instance at: /home/jboss/}}

Auto tuning journal ...
done! Your system can make 0.39 writes per millisecond, your journal-buffer-timeout will be 2580000

You can now start the broker by executing:  

   "/home/jboss/}}/bin/artemis" run

Or you can run the broker in the background using:

   "/home/jboss/}}/bin/artemis-service" start

Checking yacfg file under dir: 
sed: can't read /home/jboss/broker/etc/jolokia-access.xml: No such file or directory
Setting multicastPrefix to jmx.topic.
sed: can't read /home/jboss/broker/etc/broker.xml: No such file or directory
sed: can't read /home/jboss/broker/etc/broker.xml: No such file or directory
Setting anycastPrefix to jmx.queue.
sed: can't read /home/jboss/broker/etc/broker.xml: No such file or directory
sed: can't read /home/jboss/broker/etc/broker.xml: No such file or directory
Removing hardcoded -Xms -Xmx from artemis.profile in favour of JAVA_OPTS in log above
sed: can't read /home/jboss/broker/etc/artemis.profile: No such file or directory
Enable artemis metrics plugin
Adding artemis metrics plugin
sed: can't read /home/jboss/broker/etc/broker.xml: No such file or directory
Copying Config files from S2I build
cp: target '/home/jboss/broker/etc/' is not a directory
Custom Configuration file 'BROKER_XML' is enabled
/opt/amq/bin/configure_custom_config.sh: line 48: /home/jboss/broker/etc/broker.xml: No such file or directory
Running Broker
/opt/amq/bin/launch.sh: line 625: /home/jboss/broker/bin/artemis: No such file or directory

Use Statefulset for pods when using persistent storage

Using Recreate as the deployment strategy only partly solves the re-allocation of persistent storage. Using a StatefulSet ensures that only one container uses the storage at any time.

The pod specification should be moved to a _pod.tpl file, and a new statefulset.yaml plus the existing deployment.yaml should wrap the template from _pod.tpl in either a StatefulSet or a Deployment, as sketched below.
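
A minimal sketch of the proposed layout (the template name "amq.pod" and the value keys are assumptions, not existing chart parameters):

{{/* templates/_pod.tpl: shared pod spec */}}
{{- define "amq.pod" -}}
containers:
  - name: broker
    image: {{ .Values.image }}
    ports:
      - containerPort: 61616
{{- end -}}

statefulset.yaml would then wrap it, and deployment.yaml would wrap the same template in a Deployment:

# templates/statefulset.yaml
{{- if eq .Values.kind "StatefulSet" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.application.name }}
spec:
  serviceName: {{ .Values.application.name }}
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.application.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.application.name }}
    spec:
      {{- include "amq.pod" . | nindent 6 }}
{{- end }}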

Artemis roles generation

Hi

Currently the additional role mapping in artemis-roles.properties generates output based on:

{{- range $currentUser := .Values.users }}
{{- range .roles }}
{{ . }} = {{ $currentUser.name }}
{{- end}}
{{- end}}

Having a setup of users defined in values.yaml as

users:
  - name: demouser
    password:
    roles:
      - user
  - name: anotherdemouser
    password:
    roles:
      - user

the following is generated:

####ADDITIONAL ROLE MAPPING
user = demouser

user = anotherdemouser

The issue is that the last user defined in values.yaml (anotherdemouser) overwrites the first (demouser), which leaves demouser without the role "user" attached.

If the generated output in artemis-roles.properties (in this case) were

####ADDITIONAL ROLE MAPPING
user = demouser,anotherdemouser

each user would get the role "user".
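
One way to generate that layout is to invert the loop above and collect users per role before printing (a sketch using sprig's dict/append/join helpers; not tested against the chart):

{{- $roleMap := dict }}
{{- range $user := .Values.users }}
{{- range $role := $user.roles }}
{{- /* look up the members collected so far, defaulting to an empty list */}}
{{- $members := append (default list (get $roleMap $role)) $user.name }}
{{- $_ := set $roleMap $role $members }}
{{- end }}
{{- end }}
{{- range $role, $members := $roleMap }}
{{ $role }} = {{ join "," $members }}
{{- end }}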

Add max-redelivery-delay to helm chart

The default max-redelivery-delay is 50000ms; however, it is not possible to override that value in the helm chart.

Add max-redelivery-delay like the other address settings in the queue.defaults and queue.addresses objects.

Implementation note: the if statements around the address settings could be refactored by using the Go template function "default".
For example:

<max-redelivery-delay>{{ default $.Values.queues.defaults.maxRedeliveryDelay .maxRedeliveryDelay }}</max-redelivery-delay>
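
The corresponding values.yaml entries might then look like this (the queue name QUEUE_1 is only an illustration):

queues:
  defaults:
    maxRedeliveryDelay: 50000
  addresses:
    - name: QUEUE_1
      maxRedeliveryDelay: 10000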

How is the drain functionality supposed to work?

I have explored this AMQ Broker helm chart which looks very promising!

I can see traces of a drain functionality in several of the files but do not fully understand it.

  • How is the drain functionality supposed to work?
  • What triggers the drain functionality?
  • How to enable the drain functionality?
  • How to verify the drain functionality?

I did a basic test by

  1. Scaling the cluster up to three broker instances.
  2. Producing x messages per broker instance in the cluster (the artemis CLI was used to produce messages and browse the status).
  3. Scaling the cluster down to two broker instances.

I assumed the messages stuck on the PV of the former node 3 would be migrated to another node, but that did not happen, and I did not grasp the logic well enough to understand how it is supposed to work. The test was performed with OpenShift 4.7 and ArgoCD. Minor changes were made to the configuration in values.yaml (chart defaults in parentheses):

  • kind: StatefulSet (Deployment)
  • nodeport - enabled: false (true)
  • clustered: true (false)
  • replicas: 2 or 3 (1)
  • storageclass: was changed to own preference (default)
  • Defined a queue QUEUE_1 as per example.

Configurable resource requests and limits

To be production ready, we need to be able to tune the CPU and memory requests and limits.

For example, add resources to _pod.tpl:

{{- with .Values.resources }}
resources:
{{ toYaml . | indent 2 }}
{{- end }}

and define an empty block in values.yaml:

resources: {}

The documentation should be updated to include an example, and perhaps even recommendations for calculating the numbers:

resources:
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 200m
    memory: 1000Mi

Join all of the helm charts into 1 chart

There is really no need to have separate helm charts for basic, ssl, persistent, persistent-ssl and so on.
The differences can be implemented with logic in a single helm chart.
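
For example, the variants could collapse into feature flags (the value keys ssl.enabled and persistence.enabled are assumptions, not existing chart parameters):

{{- if .Values.ssl.enabled }}
# render the SSL acceptors and mount the certificate secret here
{{- end }}
{{- if .Values.persistence.enabled }}
# render the PVC and StatefulSet here instead of an ephemeral Deployment
{{- end }}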

Override queue defaults

Queue defaults are currently hardcoded in broker.xml.
Some of these must be configurable:

max-delivery-attempts
redelivery-delay-multiplier
redelivery-delay
max-size-bytes
address-full-policy
message-counter-history-day-limit

Using the format ".Values.parameters...." does not feel right.
I suggest defaults for queues are written like this in values.yaml:

queueDefaults:
  maxDeliveryAttempts: 3
  redeliveryDelayMultiplier: 1
  redeliveryDelay: 5000
  maxSizeBytes: "100 mb"
  addressFullPolicy: "PAGE"
  messageCounterHistoryDayLimit: 10

We will break backwards compatibility if we add an extra layer to the queue definition; however, it will be a cleaner design.

queue:
  defaults:
    maxDeliveryAttempts: 3
    ...  
  addresses:
  - name: ...
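
A sketch of how broker.xml could then consume these values, following the same "default" pattern as in the max-redelivery-delay issue above (the exact template context is an assumption):

{{- range .Values.queue.addresses }}
<address-setting match="{{ .name }}">
  <max-delivery-attempts>{{ default $.Values.queue.defaults.maxDeliveryAttempts .maxDeliveryAttempts }}</max-delivery-attempts>
  <redelivery-delay>{{ default $.Values.queue.defaults.redeliveryDelay .redeliveryDelay }}</redelivery-delay>
  <max-size-bytes>{{ default $.Values.queue.defaults.maxSizeBytes .maxSizeBytes }}</max-size-bytes>
  <address-full-policy>{{ default $.Values.queue.defaults.addressFullPolicy .addressFullPolicy }}</address-full-policy>
</address-setting>
{{- end }}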

Add monitoring and metrics

It should be easy to monitor AMQ.
When metrics/monitoring is enabled, a ServiceMonitor object should be created. Furthermore, broker.xml must include the metrics configuration.

JVM metrics should be managed by configuration.

I'm almost done with a branch implementing this.
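
For illustration, the ServiceMonitor could be rendered like this (the flag metrics.enabled, the label selector and the port name "metrics" are assumptions):

{{- if .Values.metrics.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ .Values.application.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.application.name }}
  endpoints:
    - port: metrics
{{- end }}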
