qlik-oss / core-assisted-prescription
Qlik Core use case Assisted Prescription (also known as Custom Analytics).
License: MIT License
I expect /kibana to start with some pre-defined dashboards, but instead I just get the screen where I have to set up the index, etc.
Run deploy-stack.sh and wait a few minutes.
Expected: Kibana shows up with a pre-defined dashboard.
Actual: see above.
428a1c45c75949d0a1b9c2a305b041497a8787cf
Update the readme to reflect what is needed to get started. Add a quick start section to enable a user to get going quickly.
When deploying the custom-analytics stack using the vsphere driver to e.g. Landskrona DC, the qix-engine logs are not stored in Elasticsearch. Probably a firewall configuration issue, but this needs to be further investigated.
QIX engine traffic logs should be available
No QIX engine traffic, but other services, e.g. mira and qix-session-service, are working.
Currently we have documentation on how to create a swarm with a fixed set of nodes. We also need to document:
Get the SDK exerciser running with the possibility to make random selections.
In two script files, a wrong script path is used (both should be without `source`), since the `cd` back out is missing:
c701934e33d7ff2b5466ce409b0c09fd37c3f988
Currently there are a couple of performance requirements described in the use case in the info repo:
https://github.com/qlik-ea/info/blob/master/docs/use-cases/use-case-custom-analytics/README.md
However, there are also a few more performance requirements described in #43.
This task is needed to ensure that the proper requirements are validated on the use case, and that the scope can be set.
Getting this:
I'm actually not sure why this happens, as I can only see valid options:
https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#privileged
One of the tools that we should further investigate in our scaling / performance epic is the exerciser written by the Scalability team.
Some of the thoughts right now regarding the tool:
Get the exerciser running on our use-case.
Evaluate if the tool makes sense and adds value to us.
We would like to investigate the AB (Apache Bench) performance tool. The questions that we would like to get answered with this investigation include:
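As a starting point for the investigation, a minimal smoke-test invocation might look like the sketch below. The URL, port, and request counts are placeholders, not the actual use-case endpoints.

```shell
# Hypothetical ab run against a local endpoint (URL/port are placeholders):
# 1000 requests total, 10 concurrent, printing latency percentiles at the end.
ab -n 1000 -c 10 http://localhost:8080/
```

The `-n`/`-c` pair is the main knob: fixing `-n` while sweeping `-c` gives a first view of how latency degrades under concurrency.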
Running docker in Circle CI 2.0 makes use of remote docker environments, so running docker-compose up in a pipeline will start the containers in the remote docker. Therefore it is not possible to execute tests in the pipeline, since the containers are not running on the same machine. Instead we have to build an image containing the tests and execute it in the remote docker environment.
Using the machine executor is not an option since its docker version support is lagging; it currently does not support version 3.1 of compose files.
For more info regarding remote dockers in CCI:
https://circleci.com/docs/2.0/building-docker-images/#separation-of-environments
Task is considered done when a docker image containing the tests is built and executed in the circle ci pipeline i.e. triggered on each commit independent of branch.
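The proposed approach could be sketched roughly as follows. The image tag, Dockerfile path, and network name below are assumptions for illustration, not the actual names used in the repo.

```shell
# Sketch: build an image that contains the E2E tests, bring up the stack
# in the remote docker environment, then run the tests as a container
# on the same docker host so they can reach the services.
docker build -t use-case-e2e-tests -f test/Dockerfile .
docker-compose up -d
docker run --rm --network qliktive_default use-case-e2e-tests
```

Because the test container runs inside the same remote docker environment as the stack, it avoids the "not on the same machine" problem described above.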
Beat goal:
A tool for performance/scalability testing has been chosen and evaluated. One simple performance scenario has been implemented with the selected test tool, and it should be possible to execute it on a local setup.
Circle CI 2.0 uses a remote docker environment in the job pipeline, and it is not possible to mount folders into that environment. Our current compose file mounts both secrets and apps, which will not work in CCI.
The CCI documentation suggests that you instead use data volumes for mounting in remote docker.
https://circleci.com/docs/2.0/building-docker-images/#separation-of-environments
Task is considered done when a successful `docker-compose up -d` can be performed in the job pipeline.
Update to the latest version of Mira, where async engine discovery is in place.
Verify that the version of Mira works correctly in the Docker Swarm deployment and "local" (non-Swarm) deployment.
More info will be added.
On every merge to master the stack should be deployed to production. This needs to be set up and triggered.
Preferably by Circle CI upon green tests.
Deployment to "production" environment of use case demo.
Shall be possible to run on local dev machine and as part of CI pipeline.
What we need to investigate with Grafana is:
Materializecss + React/Angular or similar.
Something like this shall be used as the base for the UI implementation, to get started quickly.
Sync with Johan Wastring.
(https://github.com/qlik-ea/qliktive-custom-analytics/blob/master/deploy/deploy.md)
I think it is pretty common nowadays that people use VirtualBox as the primary local virtualization environment not only on OSX but also on Windows. The doc is a bit misleading, as VirtualBox is only mentioned in the context of OSX.
`$ ./deploy/create-swarm-cluster.sh -d local` does not work; I need to run `sh ./deploy/create-swarm-cluster.sh -d local` (note the `sh` at the beginning).
Updating the (outdated) Boot2Docker image without asking me is a bit harsh...
`$ ./deploy/deploy-stack.sh` also only works if I run it with `sh`: `sh ./deploy/deploy-stack.sh`.
The `-e` is not working as expected on all systems, see attachment #1.
Run `npm install` before running the tests.

Own space for Qliktive deployment.
Contact person: Roberto Faria
This is a minor suggested change.
The `deploy/` folder contains all bash scripts used for various tasks on this use case. I suggest renaming the folder to `scripts/`, since the scripts may not necessarily only have to do with deployment.
The documentation is currently missing how to set the required secrets in the `./secrets` folder.
c701934e33d7ff2b5466ce409b0c09fd37c3f988
A number of E2E tests exist for this use case in the repo. These tests should be executed as part of the CCI pipeline on each commit. The tests can be executed on a local setup, i.e. `docker-compose up`.
Once we have selected tools and platform to use, this task is about implementing a first performance test case based on this platform.
In the `qliktive-custom-analytics` repository, run a script to set up a local deployment of the system. Use the `bash` command line to run the test case.
Share with the Scalability team what we would like to achieve this beat, as well as sync with them on our coming requirements.
Hopefully they'll also be able to share some advice regarding tools and testing strategies.
Investigate what options are needed to deploy with AWS-driver using docker-machine:
--amazonec2-access-key: Your access key ID for the Amazon Web Services API.
--amazonec2-secret-key: Your secret access key for the Amazon Web Services API.
--amazonec2-session-token: Your session token for the Amazon Web Services API.
--amazonec2-ami: The AMI ID of the instance to use.
--amazonec2-region: The region to use when launching the instance.
--amazonec2-vpc-id: Your VPC ID to launch the instance in.
--amazonec2-zone: The AWS zone to launch the instance in (i.e. one of a,b,c,d,e).
--amazonec2-subnet-id: AWS VPC subnet ID.
--amazonec2-security-group: AWS VPC security group name.
--amazonec2-tags: AWS extra tag key-value pairs (comma-separated, e.g. key1,value1,key2,value2).
--amazonec2-instance-type: The instance type to run.
--amazonec2-device-name: The root device name of the instance.
--amazonec2-root-size: The root disk size of the instance (in GB).
--amazonec2-volume-type: The Amazon EBS volume type to be attached to the instance.
--amazonec2-iam-instance-profile: The AWS IAM role name to be used as the instance profile.
--amazonec2-ssh-user: The SSH Login username, which must match the default SSH user set in the ami used.
--amazonec2-request-spot-instance: Use spot instances.
--amazonec2-spot-price: Spot instance bid price (in dollars). Require the --amazonec2-request-spot-instance flag.
--amazonec2-use-private-address: Use the private IP address for docker-machine, but still create a public IP address.
--amazonec2-private-address-only: Use the private IP address only.
--amazonec2-monitoring: Enable CloudWatch Monitoring.
--amazonec2-use-ebs-optimized-instance: Create an EBS Optimized Instance, instance type must support it.
--amazonec2-ssh-keypath: Path to the private key file to use for the instance. A matching public key with the .pub extension should exist.
--amazonec2-retries: Set the retry count for recoverable failures (use -1 to disable).
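Put together, a node could be created with a subset of these options. Every value in the sketch below is a placeholder for illustration, not a validated configuration for this use case.

```shell
# Sketch: create one swarm node with the amazonec2 driver.
# Region, VPC ID, instance type, disk size, and node name are all
# placeholder values that need to be decided by the investigation.
docker-machine create --driver amazonec2 \
  --amazonec2-region eu-west-1 \
  --amazonec2-vpc-id vpc-0abc123 \
  --amazonec2-instance-type t2.medium \
  --amazonec2-root-size 32 \
  ca-manager-1
```

Credentials (`--amazonec2-access-key`/`--amazonec2-secret-key`) can alternatively be picked up from the standard AWS environment variables or credentials file, which keeps them out of shell history.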
Build/test in Sense.
Try to get info from prod owner.
Sync with Johan Wastring.
This is one of the tools that came out of the "possible tools to evaluate" list for this beat.
To get a better understanding of our environment we would like to evaluate the usage of Prometheus.
Hopefully we will be able to get better insights into Docker metrics. Prometheus doesn't benchmark the applications themselves, but it will give you metrics from Docker.
Answer: what value does Prometheus bring?
Is it easy to use, and does it avoid weighing down our current implementation?
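For a first hands-on look, Prometheus can itself be started as a container. The sketch below assumes a local `prometheus.yml` that defines scrape targets (e.g. cAdvisor or the Docker daemon's metrics endpoint) for container metrics; that config is not part of this repo yet.

```shell
# Sketch: run Prometheus in Docker against a local config file.
# prom/prometheus is the official image; prometheus.yml is assumed to
# exist in the current directory and define the scrape targets.
docker run -d -p 9090:9090 \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
  prom/prometheus
```

The Prometheus UI on port 9090 then makes it easy to judge quickly whether the collected Docker metrics answer the questions above.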
E2E tests need to be developed for end-user scenarios of use case Custom Analytics UI.
This would be acceptance tests verifying correctness and expected behavior of typical user actions.
This task consists of
Add files that will be passed in to docker, as docker secrets. These files/secrets will then be used in the Authentication module.
This issue links to the issue in Authentication module: qlik-oss/core-assisted-prescription-auth#5
Just realizing while tracking issues: I think we need a versioning strategy, or at least to expose the version in a better way (e.g. in a package.json or similar), so that I can mention which version/tag I was using while testing and reporting issues. Otherwise this will be hard to track.
This task is to investigate whether enigma.js is suitable for use in performance testing of the qliktive-custom-analytics use case.
I am experiencing non-deterministic issues when running the demo for > 1 h.
The `/hellochart` worked properly at the beginning. I'd expect it to also work after returning an hour later.
Expected: `/hellochart` works.
Looking into Kibana, I cannot find any useful information; it seems that nothing is currently logged to Kibana/Logstash. So I'd also expect that even this behavior is logged properly, to allow some analysis.
Personal comment:
I personally think the first and most important step is to make sure that even such behavior is logged properly.
428a1c45c75949d0a1b9c2a305b041497a8787cf
Docker version 17.03.1-ce, build c6d412e
docker-machine version 0.10.0, build 76ed2a6
Update to Mira version 0.0.1-56 so that we can remove usage of the Mira endpoints `list/` and `query/`.
Use `qliktive-qix-session-service` version 0.0.1-26.
In order to collect all the metrics exposed by Docker, and potentially by different containers, we need to set up Prometheus to collect and store all the data.
Definition of done:
In AWS there will be a need for DNS resolving. This needs to be investigated.
If I run `npm run test:e2e:swarm` on an existing swarm I get the following:
Changing the return value of `test-utils.js:getTestHost()` to return the IP address (192.168.99.105) does the trick; the test then works as expected.
Note: Using `SWARMMANAGER=<IP address or hostname> npm run test:e2e:swarm`, as suggested in the documentation, also does the trick.
Related issue: #35
When I want to re-deploy a stack, the script hangs...
Will try to reproduce and add more details here.
Add information of how secrets work.
One is supposed to set the secrets in 4 files (the blue ones):
So far so good, but if you update the repo often, you will certainly run into conflicts with newly pulled files, since those four files are tracked in git.
I suggest a different way of including these files in the repo, which would allow me to pull the latest updates without overwriting my local ones.
One possible approach:
Files:
.
├── COOKIE_SIGNING.sample
├── GITHUB_CLIENT_ID.sample
├── GITHUB_CLIENT_SECRET.sample
└── JWT_SECRET.sample
Inside e.g. the `JWT_SECRET.sample` you would just see the following text:
Create a file `JWT_SECRET` with a long GUID for the JWT secret, e.g. c6d5811c-ac67-495f-bc55-3cd91bb275c7
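The proposed layout also makes it easy to script the local setup: keep only the `*.sample` files in git and generate the real secret files locally, so a pull never overwrites them. The sketch below creates demo `.sample` files itself so it is self-contained; in the repo, the samples would already exist.

```shell
# Sketch of the proposed approach (file names follow the suggestion above).
mkdir -p secrets
touch secrets/JWT_SECRET.sample secrets/COOKIE_SIGNING.sample  # demo setup only
for sample in secrets/*.sample; do
  secret="${sample%.sample}"
  # create the real secret file only if it is missing,
  # so local values survive repeated git pulls
  if [ ! -f "$secret" ]; then
    # 16 random bytes rendered as 32 hex characters
    od -An -N16 -tx1 /dev/urandom | tr -d ' \n' > "$secret"
  fi
done
ls secrets
```

Adding the generated (extension-less) files to `.gitignore` while tracking only the `.sample` files would remove the merge-conflict problem entirely.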
Investigate and configure an environment with networks and firewalls within AWS to deploy the swarm to.
I think the most recent repo for the docker-swarm visualizer is dockersamples/visualizer instead of manomarks/visualizer.