Open Distro Documentation

This repository contains the documentation for Open Distro, a full-featured, open source distribution of Elasticsearch for analytics workloads. You can find the rendered documentation at opendistro.github.io/for-elasticsearch-docs/.

Developer and community contributions remain essential in keeping this documentation comprehensive, useful, organized, and up-to-date.

How you can help

  • Do you work on one of the various Open Distro plugins? Take a look at the documentation for the plugin. Is everything accurate? Will anything change in the near future?

    Often, engineering teams can keep existing documentation up-to-date with minimal effort, thus freeing up the documentation team to focus on larger projects.

  • Do you have expertise in a particular area of Elasticsearch OSS? Cluster sizing? The query DSL? Painless scripting? Aggregations? JVM settings? Take a look at the current content and see where you can add value. The documentation team is happy to help you polish and organize your drafts.

  • Are you a Kibana expert? How did you set up your visualizations? Why is a particular dashboard so valuable to your organization? We have literally nothing on how to use Kibana, only how to install it.

  • Are you a web developer? Do you want to add an optional dark mode to the documentation? A "copy to clipboard" button for our code samples? Other improvements to the design or usability? See major changes for information on building the website locally.

  • Our issue tracker contains documentation bugs and other content gaps, some of which have colorful labels like "good first issue" and "help wanted."

Points of contact

If you encounter problems or have questions when contributing to the documentation, these people can help:

How we build the website

After each commit to this repository, GitHub Pages automatically uses Jekyll to rebuild the website. The whole process takes around 20 seconds.

This repository contains many Markdown files in the /docs directory. Each Markdown file correlates with one page on the website. For example, the Markdown file for this page is here.

Using plain text on GitHub has many advantages:

  • Everything is free, open source, and works on every operating system. Use your favorite text editor, Ruby, Jekyll, and Git.
  • Markdown is easy to learn and looks good in side-by-side diffs.
  • The workflow is no different than contributing code. Make your changes, build locally to check your work, and submit a pull request. Reviewers check the PR before merging.
  • Alternatives like wikis and WordPress are full web applications that require databases and ongoing maintenance. They also have inferior versioning and content review processes compared to Git. Static websites, such as the ones Jekyll produces, are faster, more secure, and more stable.

In addition to the content for a given page, each Markdown file contains some Jekyll front matter. Front matter looks like this:

---
layout: default
title: Alerting Security
nav_order: 10
parent: Alerting
has_children: false
---

If you're making trivial changes, you don't have to worry about front matter.

If you want to reorganize content or add new pages, keep an eye on has_children, parent, and nav_order, which define the hierarchy and order of pages in the left-hand navigation. For more information, see the documentation for our upstream Jekyll theme.
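For example, a section page and one of its child pages might carry front matter like the following. The paths and nav_order values here are illustrative, not taken from the repository; the important detail is that the child's parent value must match the parent page's title exactly:

```yaml
# Parent page (hypothetical path: docs/alerting/index.md)
---
layout: default
title: Alerting
nav_order: 30
has_children: true
---

# Child page (hypothetical path: docs/alerting/security.md)
---
layout: default
title: Alerting Security
nav_order: 10
parent: Alerting
has_children: false
---
```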

Contribute content

There are three ways to contribute content, depending on the magnitude of the change.

Trivial changes

If you just need to fix a typo or add a sentence, this web-based method works well:

  1. On any page in the documentation, click the Edit this page link in the lower-left.

  2. Make your changes.

  3. Choose Create a new branch for this commit and start a pull request and Commit changes.

Minor changes

If you want to add a few paragraphs across multiple files and are comfortable with Git, try this approach:

  1. Fork this repository.

  2. Download GitHub Desktop, install it, and clone your fork.

  3. Navigate to the repository root.

  4. Create a new branch.

  5. Edit the Markdown files in /docs.

  6. Commit, push your changes to your fork, and submit a pull request.
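If you prefer the Git command line to GitHub Desktop, steps 3 through 6 look roughly like this. The sketch below simulates the flow in a throwaway local repository so it runs anywhere; with a real fork you would clone it from GitHub instead of running git init, and the branch and file names are just examples:

```shell
# Stand-in for cloning your fork: create a throwaway repo with one doc file.
tmp=$(mktemp -d) && cd "$tmp"
git init -q for-elasticsearch-docs && cd for-elasticsearch-docs
git config user.email "you@example.com"
git config user.name "Your Name"
mkdir docs && echo "# Alerting" > docs/alerting.md
git add docs && git commit -qm "initial content"

git checkout -qb clarify-alerting           # step 4: create a new branch
echo "More detail." >> docs/alerting.md     # step 5: edit Markdown in /docs
git add docs
git commit -qm "Clarify alerting docs"      # step 6: commit
git rev-parse --abbrev-ref HEAD             # prints: clarify-alerting
```

From there, push the branch to your fork (git push origin clarify-alerting) and open the pull request on GitHub.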

Major changes

If you're making major changes to the documentation and need to see the rendered HTML before submitting a pull request, here's how to build locally:

  1. Fork this repository.

  2. Download GitHub Desktop, install it, and clone your fork.

  3. Navigate to the repository root.

  4. Install Ruby if you don't already have it. We recommend RVM, but use whatever method you prefer:

    curl -sSL https://get.rvm.io | bash -s stable
    rvm install 2.6
    ruby -v
    
  5. Install Jekyll if you don't already have it:

    gem install bundler jekyll
    
  6. Install dependencies:

    bundle install
    
  7. Build:

    sh build.sh
    
  8. If the build script doesn't automatically open your web browser (it should), open http://localhost:4000/for-elasticsearch-docs/.

  9. Create a new branch.

  10. Edit the Markdown files in /docs.

    If you're a web developer, you can customize _layouts/default.html and _sass/custom/custom.scss.

  11. When you save a file, marvel as Jekyll automatically rebuilds the site and refreshes your web browser. This process takes roughly 20 seconds.

  12. When you're happy with how everything looks, commit, push your changes to your fork, and submit a pull request.

Writing tips

  1. Try to stay consistent with existing content and consistent within your new content. Don't call the same plugin KNN, k-nn, and k-NN in three different places.

  2. Shorter paragraphs are better than longer paragraphs. Use headers, tables, lists, and images to make your content easier for readers to scan.

  3. Use bold for user interface elements, italics for key terms or emphasis, and monospace for Bash commands, file names, REST paths, and code.

  4. Markdown file names should be all lowercase, use hyphens to separate words, and end in .md.

  5. Don't use future tense. Use present tense.

    Bad: After you click the button, the process will start.

    Better: After you click the button, the process starts.

  6. "You" refers to the person reading the page. "We" refers to the Open Distro contributors.

    Bad: Now that we've finished the configuration, we have a working cluster.

    Better: At this point, you have a working cluster, but we recommend adding dedicated master nodes.

  7. Don't use "this" and "that" to refer to something without adding a noun.

    Bad: This can cause high latencies.

    Better: This additional loading time can cause high latencies.

  8. Use active voice.

    Bad: After the request is sent, the data is added to the index.

    Better: After you send the request, the Elasticsearch cluster indexes the data.

  9. Introduce acronyms before using them.

    Bad: Reducing customer TTV should accelerate our ROIC.

    Better: Reducing customer time to value (TTV) should accelerate our return on invested capital (ROIC).

  10. Spell out one through nine. Start using numerals at 10. If a number needs a unit (GB, pounds, millimeters, kg, Celsius, etc.), use numerals, even if the number is smaller than 10.

    Bad: 3 kids looked for thirteen files on a six GB hard drive.

    Better: Three kids looked for 13 files on a 6 GB hard drive.
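The file-naming rule in tip 4 is mechanical enough to check with a short pattern; here's a sketch (the file names are made up for illustration):

```shell
# Accept only all-lowercase, hyphen-separated names ending in .md.
check_name() {
  if printf '%s\n' "$1" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*\.md$'; then
    echo "valid:   $1"
  else
    echo "invalid: $1"
  fi
}

check_name "alerting-security.md"   # valid
check_name "Alerting_Security.MD"   # invalid
```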

New releases

  1. Branch.

  2. Change the odfe_version, odfe_major_version, and es_version variables in _config.yml.

  3. Start up a new cluster using the updated Docker Compose file in docs/install/docker.md.

  4. Update the version table in version-history.md.

    Use curl -XGET https://localhost:9200 -u admin:admin -k to verify the Elasticsearch version.

  5. Update the plugin compatibility table in docs/install/plugin.md.

    Use curl -XGET https://localhost:9200/_cat/plugins -u admin:admin -k to get the correct version strings.

  6. Update the plugin compatibility table in docs/kibana/plugins.md.

    Use docker ps to find the ID for the Kibana node. Then use docker exec -it <kibana-node-id> /bin/bash to get shell access. Finally, run ./bin/kibana-plugin list to get the plugins and version strings.

  7. Run a build (build.sh), and look for any warnings or errors you introduced.

  8. Verify that the individual plugin download links in docs/install/plugins.md and docs/kibana/plugins.md work.

  9. Check for any other bad links (check-links.sh). Expect a few false positives for the localhost links.

  10. Submit a PR.
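The curl calls in steps 4 and 5 return JSON. If you don't have jq installed, a sed one-liner can pull the version number out. The response below is a trimmed, hypothetical stand-in for what the cluster root endpoint actually returns:

```shell
# Hypothetical, trimmed response from: curl -XGET https://localhost:9200 -u admin:admin -k
response='{"name":"odfe-node1","version":{"number":"6.7.1","build_flavor":"oss"}}'

# Capture the value of the "number" field.
version=$(printf '%s' "$response" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p')
echo "$version"   # prints: 6.7.1
```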

Classes within Markdown

This documentation uses a modified version of the just-the-docs Jekyll theme, which has some useful classes for labels and buttons:

[Get started](#get-started){: .btn .btn-blue }

## Get started
New
{: .label .label-green }
  • Labels come in default (blue), green, purple, yellow, and red.
  • Buttons come in default, purple, blue, green, and outline.
  • Warning, tip, and note blocks are available ({: .warning }, etc.).
  • If an image has a white background, you can use {: .img-border } to add a one-pixel border to the image.

These classes can help with readability, but should be used sparingly. Each addition of a class damages the portability of the Markdown files and makes moving to a different Jekyll theme (or a different static site generator) more difficult.

Besides, standard Markdown elements suffice for most documentation.

Math

If you want to use the sorts of pretty formulas that MathJax allows, add has_math: true to the Jekyll page metadata. Then insert LaTeX math into HTML tags with the rest of your Markdown content:

## Math

Some Markdown paragraph. Here's a formula:

<p>
  When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are
  \[x = {-b \pm \sqrt{b^2-4ac} \over 2a}.\]
</p>

And back to Markdown.

Code of conduct

This project has adopted an Open Source Code of Conduct.

Security issue notifications

If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our vulnerability reporting page. Please do not create a public GitHub issue.

Licensing

See the LICENSE file for our project's licensing. We will ask you to confirm the licensing of your contribution.

Copyright

Copyright Amazon.com, Inc. or its affiliates. All rights reserved.

Contributors

abbashus, aetter, allenyin96, alolita, amoo-miki, ashwinkumar12345, aws-tina, bbarani, chenqi0805, chynkm, dai-chen, fabide, fbarbeira, goodmirek, hyandell, inntran, jmazanec15, keithhc2, ktkrg, lizsnyder, mmadoo, oddlittlebird, peterzhuamazon, sreekarjami, stockholmux, thenom, turettn, weicongs-amazon, wrijeff, zacbayhan


for-elasticsearch-docs's Issues

Unable to disable authentication for Kibana

I would like to turn off authentication for Kibana/Elasticsearch in order to evaluate the benefits of the SQL and Alerting plugins in Open Distro. I was able to do it for Elasticsearch using opendistro_security.disabled: true in the elasticsearch.yml file. How can I do the same in kibana.yml?

I do not need a login/password for now, as the resources are in an isolated environment and not in production.

opendistroforelasticsearch breaks RPM dependencies on CentOS 7

On a fresh CentOS 7 install in GCE, I noticed the following happening after installing opendistroforelasticsearch and attempting to yum upgrade. It appears that opendistroforelasticsearch depends on elasticsearch-oss 6.6.2, but the elasticsearch-oss repository provides 6.7.1, and that causes yum to believe it can be updated.

[root@monitoring-es-1 ~]# yum upgrade
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.team-cymru.com
 * epel: mirror.steadfastnet.com
 * extras: mirror.fileplanet.com
 * updates: mirror.steadfastnet.com
Resolving Dependencies
--> Running transaction check
---> Package elasticsearch-oss.noarch 0:6.6.2-1 will be updated
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistroforelasticsearch-0.8.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-alerting-0.8.0.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-sql-0.8.0.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-performance-analyzer-0.8.0.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-security-0.8.0.0-1.noarch
---> Package elasticsearch-oss.noarch 0:6.7.1-1 will be an update
---> Package glibc.x86_64 0:2.17-260.el7_6.3 will be updated
---> Package glibc.x86_64 0:2.17-260.el7_6.4 will be an update
---> Package glibc-common.x86_64 0:2.17-260.el7_6.3 will be updated
---> Package glibc-common.x86_64 0:2.17-260.el7_6.4 will be an update
---> Package google-cloud-sdk.noarch 0:240.0.0-1.el7 will be updated
---> Package google-cloud-sdk.noarch 0:241.0.0-1.el7 will be an update
---> Package libssh2.x86_64 0:1.4.3-12.el7 will be updated
---> Package libssh2.x86_64 0:1.4.3-12.el7_6.2 will be an update
---> Package python.x86_64 0:2.7.5-76.el7 will be updated
---> Package python.x86_64 0:2.7.5-77.el7_6 will be an update
---> Package python-libs.x86_64 0:2.7.5-76.el7 will be updated
---> Package python-libs.x86_64 0:2.7.5-77.el7_6 will be an update
---> Package tzdata.noarch 0:2018i-1.el7 will be updated
---> Package tzdata.noarch 0:2019a-1.el7 will be an update
--> Finished Dependency Resolution
Error: Package: opendistroforelasticsearch-0.8.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
           Requires: elasticsearch-oss = 6.6.2
           Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
               elasticsearch-oss = 6.6.2-1
           Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.1-1
           Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.0-1
           Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.1-1
           Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.2-1
           Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.0-1
           Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.1-1
           Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.2-1
           Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.3-1
           Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.0-1
           Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.1-1
           Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.2-1
           Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.3-1
           Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.4-1
           Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.0-1
           Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.1-1
           Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.0-1
Error: Package: opendistro-performance-analyzer-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
           Requires: elasticsearch-oss = 6.6.2
           Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
               elasticsearch-oss = 6.6.2-1
           Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.1-1
           Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.0-1
           Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.1-1
           Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.2-1
           Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.0-1
           Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.1-1
           Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.2-1
           Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.3-1
           Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.0-1
           Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.1-1
           Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.2-1
           Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.3-1
           Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.4-1
           Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.0-1
           Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.1-1
           Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.0-1
Error: Package: opendistro-alerting-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
           Requires: elasticsearch-oss = 6.6.2
           Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
               elasticsearch-oss = 6.6.2-1
           Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.1-1
           Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.0-1
           Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.1-1
           Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.2-1
           Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.0-1
           Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.1-1
           Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.2-1
           Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.3-1
           Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.0-1
           Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.1-1
           Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.2-1
           Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.3-1
           Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.4-1
           Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.0-1
           Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.1-1
           Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.0-1
Error: Package: opendistro-security-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
           Requires: elasticsearch-oss = 6.6.2
           Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
               elasticsearch-oss = 6.6.2-1
           Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.1-1
           Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.0-1
           Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.1-1
           Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.2-1
           Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.0-1
           Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.1-1
           Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.2-1
           Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.3-1
           Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.0-1
           Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.1-1
           Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.2-1
           Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.3-1
           Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.4-1
           Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.0-1
           Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.1-1
           Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.0-1
Error: Package: opendistro-sql-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
           Requires: elasticsearch-oss = 6.6.2
           Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
               elasticsearch-oss = 6.6.2-1
           Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.1-1
           Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.0-1
           Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.1-1
           Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.3.2-1
           Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.0-1
           Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.1-1
           Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.2-1
           Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.4.3-1
           Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.0-1
           Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.1-1
           Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.2-1
           Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.3-1
           Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.5.4-1
           Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.0-1
           Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.6.1-1
           Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
               elasticsearch-oss = 6.7.0-1
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Add documentation for cert generation

Current documentation has an instruction to "replace the demo certificates" here but links to a sample docker compose file that assumes the following files exist locally without describing how they are generated:

  • root-ca.pem
  • esnode.pem
  • esnode-key.pem
  • kirk.pem
  • kirk-key.pem

Incorrectly encoded PEM files produce an only-slightly-helpful error message: "Your keystore or PEM does not contain a certificate. Maybe you confused keys and certificates." This may be related to discrepancies between the PRIVATE KEY, ENCRYPTED PRIVATE KEY, and RSA PRIVATE KEY headers, which get pretty deep into OpenSSL implementation details. For instance, do the demo certs require PKCS#8?

It would help to have documentation and a simple script for securely generating the appropriate files, especially changing the admin client key from "kirk". It might also be useful to describe under what circumstances it is useful to generate non-admin client keys (such as spock or kibana).

Presumably the requirements originate from Search Guard implementations, but configuring them is non-trivial and it's not clear which method (or some other) is preferable:

Not able to use LDAP Authentication

Hi, LDAP authentication doesn't work. I use FreeIPA and made this configuration in config.yml:

 ldap:
    http_enabled: true
    transport_enabled: true
    order: 1
    http_authenticator:
      type: basic
      challenge: true
    authentication_backend:
      type: ldap
      config:
        enable_ssl: false
        enable_start_tls: false
        enable_ssl_client_auth: false
        verify_hostnames: true
        hosts:
          - ipahostname:389
        bind_dn: 'username'
        password: 'password'
        userbase: 'dc=example,dc=org'
        # Filter to search for users (currently in the whole subtree beneath userbase)
        # {0} is substituted with the username
        usersearch: '(uid={0})'
        # Use this attribute from the user as username (if not set then DN is used)
        username_attribute: uid

I see this config in the Kibana web UI, but there is only one authentication backend, the Internal Users Database. When I try to log in with LDAP credentials, it fails: [2019-04-11T10:44:46,316][WARN ][c.a.o.s.a.BackendRegistry] [VeBKci6] Authentication finally failed for username from 127.0.0.1:36614
How can I bring LDAP authentication up and use it by default?

Add documentation for OpenID connect

Moved from a request by @kfox1111:

The documentation mentions JWT support, but it is unclear whether that is enough to work with OIDC providers (and only additional documentation is needed) or whether OIDC would require additional development. Either way, OIDC support would be very useful.

Documentation on multihost deployment (Docker swarm mode)

It would be great to have a little documentation on how to deploy a two- or three-node cluster, and maybe even how to grow it while it's running, without downtime.

I've been researching this and doing some testing, but I can't get it to work with Docker Swarm. If I do, I'll add a comment with the docker-compose.yml as a possible example.

Thanks!

Unable to expose Kibana to local network

There is no available parameter in /etc/kibana/kibana.yml to configure Kibana to listen on any address other than localhost. I am currently running Open Distro on a CentOS VM set up in bridged mode on VirtualBox, and I can't reach the server from any of the machines on my LAN. I can see other exposed services (Elasticsearch) but not Kibana.

Any help is appreciated, thank you!

filebeat 6.5.4 output to elasticsearch

Our Filebeat is installed from an RPM; Open Distro is installed via Docker. If I want to edit filebeat.yml to send data to Open Distro, how should I configure output.elasticsearch with credentials?

elasticsearch: java.net.MalformedURLException: unknown protocol: jrt

systemctl status elasticsearch.service
โ— elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2019-03-13 08:55:02 EDT; 19s ago
Docs: http://www.elastic.co
Main PID: 5715 (code=exited, status=1/FAILURE)

Mar 13 08:54:46 host1 systemd[1]: Started Elasticsearch.
Mar 13 08:54:47 host1 elasticsearch[5715]: java.security.policy: error adding Entry:
Mar 13 08:54:47 host1 elasticsearch[5715]: java.net.MalformedURLException: unknown protocol: jrt
Mar 13 08:54:47 host1 elasticsearch[5715]: java.security.policy: error adding Entry:
Mar 13 08:54:47 host1 elasticsearch[5715]: java.net.MalformedURLException: unknown protocol: jrt
Mar 13 08:55:02 host1 systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 13 08:55:02 host1 systemd[1]: Unit elasticsearch.service entered failed state.
Mar 13 08:55:02 host1 systemd[1]: elasticsearch.service failed.

Open Distro Security not initialized (SG11)

Hello!
Tell me: earlier, at the time of release, I set up Open Distro and everything was fine. Now I decided to roll it out outside the test environment and ran into a problem. After installing per this manual https://opendistro.github.io/for-elasticsearch-docs/docs/install/rpm/ and starting the service, I get the error Open Distro Security not initialized (SG11), even though the service starts. And if I start the first node with only the master role, I get these errors:

[2019-04-13T14:26:35,382][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for roles while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.opendistro_security)
[2019-04-13T14:26:35,383][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for rolesmapping while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.opendistro_security)
[2019-04-13T14:26:35,383][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for internalusers while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.opendistro_security)
[2019-04-13T14:26:35,383][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for actiongroups while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.opendistro_security)

What am I doing wrong? It used to work.
Also, am I going about it correctly if I need to make a cluster of one master and two data nodes (hot and warm)? First I set up the master, then hot and warm.

openid docs

I'm trying to connect opendistro kibana to azure ad, but I've found that plugin config doesn't work:

/usr/share/kibana/plugins/opendistro_security/securityconfig/config.yml

      opendistro_security:
        dynamic:
          http:
            anonymous_auth_enabled: false
            xff:
              enabled: true
              internalProxies: '.*' # trust all internal proxies, regex pattern
              remoteIpHeader:  'x-forwarded-for'
              proxiesHeader:   'x-forwarded-by'
              trustedProxies: '.*' # trust all external proxies, regex pattern
          authc:
            openid_auth_domain:
              http_enabled: true
              transport_enabled: true
              order: 0
              http_authenticator:
                type: openid
                challenge: false
                config:
                  openid_connect_url: https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
              authentication_backend:
                type: noop

kibana.yml:

    opendistro_security.auth.type: "openid"
    opendistro_security.openid.connect_url: https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
    opendistro_security.openid.client_id: "{application_id}"
    opendistro_security.openid.client_secret: "{secret}"
    opendistro_security.openid.base_redirect_url: "https://kibana_url"
    opendistro_security.cookie.secure: true
    elasticsearch.requestHeadersWhitelist: ["Authorization", "security_tenant"]

I'm getting an authError every time. What have I missed? I don't see any way to debug this auth error; there is no helpful message in the logs or anywhere else.
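One place to get at least some signal is Kibana's own verbose logging. This kibana.yml fragment uses stock Kibana 6.x settings (not specific to the security plugin); it logs every event in detail, which often surfaces the underlying auth failure (the log path is an assumption, adjust as needed; the default destination is stdout):

```yaml
# kibana.yml fragment (sketch): stock Kibana logging settings.
logging.verbose: true                      # log all events, including plugin internals
logging.dest: /var/log/kibana/kibana.log   # assumption: a writable path on your host
```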

Autocomplete for search [kibana]

Hello,
Is anyone else having issues with query autocomplete? It looks like it's not working at all.

(Package opendistroforelasticsearch-kibana-0.7.0-1.x86_64 already installed and latest version)

Thanks.

Syntax options
Our experimental autocomplete and simple syntax features can help you create your queries. Just start typing and you'll see matches related to your data. See docs here.

Error when using Docker (docker run -d -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" --name elasticsearch amazon/opendistro-for-elasticsearch:0.8.0)

OpenDistro for Elasticsearch Security Demo Installer
** Warning: Do not use on production or public reachable systems **
Basedir: /usr/share/elasticsearch
Elasticsearch install type: rpm/deb on CentOS Linux release 7.6.1810 (Core)
Elasticsearch config dir: /usr/share/elasticsearch/config
Elasticsearch config file: /usr/share/elasticsearch/config/elasticsearch.yml
Elasticsearch bin dir: /usr/share/elasticsearch/bin
Elasticsearch plugins dir: /usr/share/elasticsearch/plugins
Elasticsearch lib dir: /usr/share/elasticsearch/lib
Detected Elasticsearch Version: x-content-6.6.2
Detected Open Distro Security Version: 0.8.0.0

Success

Execute this script now on all your nodes and then start all nodes

Open Distro Security will be automatically initialized.

If you like to change the runtime configuration

change the files in ../securityconfig and execute:

"/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh" -cd "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig" -icl -key "/usr/share/elasticsearch/config/kirk-key.pem" -cert "/usr/share/elasticsearch/config/kirk.pem" -cacert "/usr/share/elasticsearch/config/root-ca.pem" -nhnv

or run ./securityadmin_demo.sh

To use the Security Plugin ConfigurationGUI

To access your secured cluster open https://: and log in with admin/admin.

(Ignore the SSL certificate warning because we installed self-signed demo certificates)

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-04-10T07:31:40,842][INFO ][o.e.e.NodeEnvironment ] [I4AIfZa] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [22.3gb], net total_space [25.9gb], types [rootfs]
[2019-04-10T07:31:40,843][INFO ][o.e.e.NodeEnvironment ] [I4AIfZa] heap size [990.7mb], compressed ordinary object pointers [true]
[2019-04-10T07:31:40,845][INFO ][o.e.n.Node ] [I4AIfZa] node name derived from node ID [I4AIfZaYReGeN5SVri92Aw]; set [node.name] to override
[2019-04-10T07:31:40,845][INFO ][o.e.n.Node ] [I4AIfZa] version[6.6.2], pid[1], build[oss/tar/3bd3e59/2019-03-06T15:16:26.864148Z], OS[Linux/3.10.0-957.10.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-04-10T07:31:40,845][INFO ][o.e.n.Node ] [I4AIfZa] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-3511745295708599060, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro_performance_analyzer/pa_config/es_security.policy, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar]
[2019-04-10T07:31:42,237][INFO ][c.a.o.e.p.c.PluginSettings] [I4AIfZa] loading config ...
[2019-04-10T07:31:42,238][INFO ][c.a.o.e.p.c.PluginSettings] [I4AIfZa] Config: metricsLocation: /dev/shm/performanceanalyzer/, metricsDeletionInterval: 1
[2019-04-10T07:31:42,727][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] ES Config path is /usr/share/elasticsearch/config
[2019-04-10T07:31:42,806][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] OpenSSL not available (this is not an error, we simply fallback to built-in JDK SSL) because of java.lang.ClassNotFoundException: io.netty.internal.tcnative.SSL
[2019-04-10T07:31:43,022][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] JVM supports TLSv1.3
[2019-04-10T07:31:43,023][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] Config directory is /usr/share/elasticsearch/config/, from there the key- and truststore files are resolved relatively
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] TLS Transport Client Provider : JDK
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] TLS Transport Server Provider : JDK
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] TLS HTTP Provider : JDK
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] Enabled TLS protocols for transport layer : [TLSv1.3, TLSv1.2, TLSv1.1]
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] Enabled TLS protocols for HTTP layer : [TLSv1.3, TLSv1.2, TLSv1.1]
[2019-04-10T07:31:43,633][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] Clustername: docker-cluster
[2019-04-10T07:31:43,693][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] Directory /usr/share/elasticsearch/config has insecure file permissions (should be 0700)
[2019-04-10T07:31:43,693][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/elasticsearch.yml has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/log4j2.properties has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/kirk.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/esnode.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/root-ca.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/esnode-key.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/kirk-key.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [aggs-matrix-stats]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [analysis-common]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [ingest-common]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [lang-expression]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [lang-mustache]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [lang-painless]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [mapper-extras]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [parent-join]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [percolator]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [rank-eval]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [reindex]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [repository-url]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [transport-netty4]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [tribe]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_alerting]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_performance_analyzer]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_security]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_sql]
[2019-04-10T07:31:43,861][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] Disabled https compression by default to mitigate BREACH attacks. You can enable it by setting 'http.compression: true' in elasticsearch.yml
[2019-04-10T07:31:45,640][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured categories on rest layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2019-04-10T07:31:45,640][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured categories on transport layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2019-04-10T07:31:45,640][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured Users to ignore: [kibanaserver]
[2019-04-10T07:31:45,641][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured Users to ignore for read compliance events: [kibanaserver]
[2019-04-10T07:31:45,641][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured Users to ignore for write compliance events: [kibanaserver]
[2019-04-10T07:31:45,649][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Message routing enabled: true
[2019-04-10T07:31:45,659][WARN ][c.a.o.s.c.ComplianceConfig] [I4AIfZa] If you plan to use field masking pls configure opendistro_security.compliance.salt to be a random string of 16 chars length identical on all nodes
[2019-04-10T07:31:45,659][INFO ][c.a.o.s.c.ComplianceConfig] [I4AIfZa] PII configuration [auditLogPattern=org.joda.time.format.DateTimeFormatter@55881f40, auditLogIndex=null]: {}
[2019-04-10T07:31:45,956][INFO ][o.e.d.DiscoveryModule ] [I4AIfZa] using discovery type [single-node] and host providers [settings]
[2019-04-10T07:31:46,248][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [I4AIfZa] PerformanceAnalyzer Enabled: true
Registering Handler
[2019-04-10T07:31:46,295][INFO ][o.e.n.Node ] [I4AIfZa] initialized
[2019-04-10T07:31:46,295][INFO ][o.e.n.Node ] [I4AIfZa] starting ...
[2019-04-10T07:31:46,379][INFO ][o.e.t.TransportService ] [I4AIfZa] publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-04-10T07:31:46,386][WARN ][o.e.b.BootstrapChecks ] [I4AIfZa] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-04-10T07:31:46,392][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Check if .opendistro_security index exists ...
[2019-04-10T07:31:46,451][INFO ][o.e.h.n.Netty4HttpServerTransport] [I4AIfZa] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-04-10T07:31:46,451][INFO ][o.e.n.Node ] [I4AIfZa] started
[2019-04-10T07:31:46,451][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] 4 Open Distro Security modules loaded so far: [Module [type=REST_MANAGEMENT_API, implementing class=com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions], Module [type=AUDITLOG, implementing class=com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl], Module [type=MULTITENANCY, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl], Module [type=DLSFLS, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper]]
[2019-04-10T07:31:46,475][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] .opendistro_security index does not exist yet, so we create a default config
[2019-04-10T07:31:46,477][INFO ][o.e.g.GatewayService ] [I4AIfZa] recovered [0] indices into cluster_state
[2019-04-10T07:31:46,479][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Will create .opendistro_security index so we can apply default config
[2019-04-10T07:31:46,521][INFO ][o.e.c.m.MetaDataCreateIndexService] [I4AIfZa] [.opendistro_security] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2019-04-10T07:31:46,526][INFO ][o.e.c.r.a.AllocationService] [I4AIfZa] updating number_of_replicas to [0] for indices [.opendistro_security]
[2019-04-10T07:31:46,694][INFO ][o.e.c.r.a.AllocationService] [I4AIfZa] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.opendistro_security][0]] ...]).
[2019-04-10T07:31:46,701][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'config' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
[2019-04-10T07:31:46,735][ERROR][c.a.o.e.p.o.OSGlobals ] [I4AIfZa] Error in static initialization of OSGlobals with exception: java.security.AccessControlException: access denied ("java.io.FilePermission" "/proc/self/task" "read")
java.security.AccessControlException: access denied ("java.io.FilePermission" "/proc/self/task" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:?]
at java.security.AccessController.checkPermission(AccessController.java:895) ~[?:?]
at java.lang.SecurityManager.checkPermission(SecurityManager.java:322) ~[?:?]
at java.lang.SecurityManager.checkRead(SecurityManager.java:661) ~[?:?]
at java.io.File.list(File.java:1129) ~[?:?]
at java.io.File.listFiles(File.java:1219) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.os.OSGlobals.enumTids(OSGlobals.java:81) ~[opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.os.OSGlobals.(OSGlobals.java:44) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics_generator.linux.LinuxOSMetricsGenerator.getPid(LinuxOSMetricsGenerator.java:50) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.jvm.ThreadList.(ThreadList.java:51) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.ThreadIDUtil.getNativeThreadId(ThreadIDUtil.java:31) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.ThreadIDUtil.getNativeCurrentThreadId(ThreadIDUtil.java:27) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportChannel.set(PerformanceAnalyzerTransportChannel.java:50) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.getShardBulkChannel(PerformanceAnalyzerTransportRequestHandler.java:78) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.getChannel(PerformanceAnalyzerTransportRequestHandler.java:52) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.messageReceived(PerformanceAnalyzerTransportRequestHandler.java:43) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.ssl.transport.OpenDistroSecuritySSLRequestHandler.messageReceivedDecorate(OpenDistroSecuritySSLRequestHandler.java:194) [opendistro_security_ssl-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.transport.OpenDistroSecurityRequestHandler.messageReceivedDecorate(OpenDistroSecurityRequestHandler.java:163) [opendistro_security-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.ssl.transport.OpenDistroSecuritySSLRequestHandler.messageReceived(OpenDistroSecuritySSLRequestHandler.java:116) [opendistro_security_ssl-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin$7$1.messageReceived(OpenDistroSecurityPlugin.java:652) [opendistro_security-0.8.0.0.jar:0.8.0.0]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:687) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-04-10T07:31:46,967][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] create_mapping [security]
[2019-04-10T07:31:47,071][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'roles' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml
[2019-04-10T07:31:47,092][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,144][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'rolesmapping' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml
[2019-04-10T07:31:47,160][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,192][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'internalusers' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
[2019-04-10T07:31:47,205][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,223][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'actiongroups' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/action_groups.yml
[2019-04-10T07:31:47,233][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,269][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Default config applied
[2019-04-10T07:31:47,293][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Node 'I4AIfZa' initialized

Cannot setup triggers for Monitor

I've created a monitor based on an extraction query. I am trying to create a trigger for it, but when I press the Create button, it doesn't do anything: it neither creates the trigger nor shows any error message. How can I resolve this issue?
Please help!

Kibana status is Yellow - "plugin:[email protected] Tenant indices migration failed"

Using the sample docker-compose.yml (https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker/).

then run:
docker-compose up
I can log in to Kibana and everything works fine.

then:
docker-compose stop
docker-compose start

I can log in to Kibana but see this error:
Kibana status is Yellow
plugin:[email protected] Tenant indices migration failed

Unable to do anything in Kibana. Any help would be appreciated.
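When this happens, Kibana's stock /api/status endpoint reports each plugin's state, which narrows down which plugin failed migration. A quick check (the URL and demo credentials are assumptions; it prints a message when Kibana is not reachable):

```shell
#!/bin/sh
# Summarize plugin states from Kibana's status API.
KIBANA_URL=${KIBANA_URL:-http://localhost:5601}

if curl -s -o /dev/null --max-time 2 "$KIBANA_URL/api/status" 2>/dev/null; then
    # Count plugins per state (green/yellow/red).
    curl -s -u admin:admin "$KIBANA_URL/api/status" \
        | grep -o '"state":"[a-z]*"' | sort | uniq -c
else
    echo "Kibana not reachable at $KIBANA_URL"
fi
```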

Add documentation for Kubernetes deployment

The current documentation has a demo of getting things running with Docker, but getting things ready in Kubernetes is a bit painful with just that, and since Kubernetes is a well-known, broadly used platform, I think documenting it is a worthy effort.

I have a multi-host (cluster) installation successfully running in a private cluster. Let me know if there is interest in me sharing it or creating a PR with the manifests (YAMLs) I used.

How to Enable HTTP.CORS (Cross-Origin Resource Sharing)

I tried to put this in docker-compose.yml, but it didn't work.

environment:
      - cluster.name=odfe-cluster
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM

      - http.cors.enabled=true
      - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
      - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      - http.cors.allow-credentials=true
      - opendistro_security.ssl.http.enabled=false

Does Open Distro support CORS with different config keys (something like opendistro_security.xxx), or is it currently impossible?
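One way to test this without a browser is to send the preflight request yourself and look for Access-Control-* headers in the reply. A sketch, assuming the demo admin credentials, a node on localhost, and the origin from the compose file above (it degrades to a message when no cluster is reachable):

```shell
#!/bin/sh
# Send a CORS preflight to the cluster and show any Access-Control headers.
# URL, credentials, and origin are assumptions matching the compose file above.
ES_URL=${ES_URL:-https://localhost:9200}
ORIGIN=http://localhost:1358

if curl -ks -o /dev/null --max-time 2 "$ES_URL" 2>/dev/null; then
    curl -ksi -X OPTIONS "$ES_URL/" -u admin:admin \
        -H "Origin: $ORIGIN" \
        -H 'Access-Control-Request-Method: GET' \
        | grep -i '^access-control' \
        || echo "no Access-Control headers returned; CORS settings not applied"
else
    echo "cluster not reachable at $ES_URL"
fi
```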

Update Debian Installation docs

Update docs for the Deb install guide: OpenJDK 11 and apt-transport-https

  1. The OpenJDK 11 installation is not the same on Ubuntu and Debian. I suggest the following minor change (in bold):

"Install Java 11:

  • Ubuntu: sudo add-apt-repository ppa:openjdk-r/ppa
  • Debian: echo 'deb http://deb.debian.org/debian stretch-backports main' | sudo tee /etc/apt/sources.list.d/backports.list (note: sudo echo ... > file does not work, because the redirection runs as the unprivileged user)
  2. Before step 4, "Install Open Distro for Elasticsearch:", the following needs to be added, or indicated earlier as a prerequisite:
  • apt install apt-transport-https

The preceding sudo apt update will fail without apt-transport-https.

These are the only two issues I found while following the instructions on a clean, headless Debian 9.8 installation.
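Putting the two fixes together, here is a sketch of the full sequence on a clean Debian 9 host. The package name is an assumption (stretch-backports carried OpenJDK 11 at the time), and since the commands need root, the script only prints them unless you set APPLY=1:

```shell
#!/bin/sh
# Dry-run by default: print each command; set APPLY=1 to actually execute.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

BACKPORTS='deb http://deb.debian.org/debian stretch-backports main'

run apt update
run apt install -y apt-transport-https        # required before the https repo step
if [ "${APPLY:-0}" = "1" ]; then
    echo "$BACKPORTS" > /etc/apt/sources.list.d/backports.list
else
    echo "+ echo '$BACKPORTS' > /etc/apt/sources.list.d/backports.list"
fi
run apt update
run apt install -y -t stretch-backports openjdk-11-jre-headless
```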

Performance Analyzer "units" API documentation request

A request to "${endpoint}/_opendistro/_performanceanalyzer/metrics/units" outputs the unit used for each metric in JSON format. Please help add this to the public documentation.

$ curl localhost:9600/_opendistro/_performanceanalyzer/metrics/units
{"Disk_Utilization":"%","Cache_Request_Hit":"count","TermVectors_Memory":"B","Segments_Memory":"B","HTTP_RequestDocs":"count","Net_TCP_Lost":"segments/flow","Refresh_Time":"ms","GC_Collection_Event":"count","Merge_Time":"ms","Sched_CtxRate":"count/s","Cache_Request_Size":"B","ThreadPool_QueueSize":"count","Sched_Runtime":"s/ctxswitch","Disk_ServiceRate":"MB/s","Heap_AllocRate":"B/s","Heap_Max":"B","Sched_Waittime":"s/ctxswitch","ShardBulkDocs":"count","Thread_Blocked_Time":"s/event","VersionMap_Memory":"B","Master_Task_Queue_Time":"ms","Merge_CurrentEvent":"count","Indexing_Buffer":"B","Bitset_Memory":"B","Norms_Memory":"B","Net_PacketDropRate4":"packets/s","Heap_Committed":"B","Net_PacketDropRate6":"packets/s","Thread_Blocked_Event":"count","GC_Collection_Time":"ms","Cache_Query_Miss":"count","IO_TotThroughput":"B/s","Latency":"ms","Net_PacketRate6":"packets/s","Cache_Query_Hit":"count","IO_ReadSyscallRate":"count/s","Net_PacketRate4":"packets/s","Cache_Request_Miss":"count","CB_ConfiguredSize":"B","CB_TrippedEvents":"count","ThreadPool_RejectedReqs":"count","Disk_WaitTime":"ms","Net_TCP_TxQ":"segments/flow","Master_Task_Run_Time":"ms","IO_WriteSyscallRate":"count/s","IO_WriteThroughput":"B/s","Flush_Event":"count","Net_TCP_RxQ":"segments/flow","Refresh_Event":"count","Points_Memory":"B","Flush_Time":"ms","Heap_Init":"B","CPU_Utilization":"cores","HTTP_TotalRequests":"count","ThreadPool_ActiveThreads":"count","Cache_Query_Size":"B","Paging_MinfltRate":"count/s","Merge_Event":"count","Net_TCP_SendCWND":"B/flow","Cache_Request_Eviction":"count","Segments_Total":"count","Terms_Memory":"B","DocValues_Memory":"B","Heap_Used":"B","Cache_FieldData_Eviction":"count","IO_TotalSyscallRate":"count/s","CB_EstimatedSize":"B","Net_Throughput":"B/s","Paging_RSS":"pages","Indexing_ThrottleTime":"ms","StoredFields_Memory":"B","IndexWriter_Memory":"B","Master_PendingQueueSize":"count","Net_TCP_SSThresh":"B/flow","Cache_FieldData_Size":"B","Paging_MajfltRate":"count/s","ThreadPool_TotalThreads":"count","IO_ReadThroughput":"B/s","ShardEvents":"count","Net_TCP_NumFlows":"count"}
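Until this lands in the docs, the output is easy to tabulate with POSIX tools. The sketch below runs on a shortened sample of the response above (none of the keys or values contain ':' or ',', which is what lets plain tr/sort work); on a real node, pipe a live curl of the units endpoint through the same filter:

```shell
#!/bin/sh
# Turn the flat JSON units map into a sorted two-column table.
# SAMPLE is a shortened copy of the response shown above.
SAMPLE='{"Disk_Utilization":"%","Latency":"ms","Heap_Max":"B","CPU_Utilization":"cores"}'
echo "$SAMPLE" | tr -d '{}"' | tr ',' '\n' | tr ':' '\t' | sort
```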

JVM memory configuration options do not apply in the docker image

    environment:
      - cluster.name=odfe-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.zen.ping.unicast.hosts=odfe-node1

The ES_JAVA_OPTS variable is overwritten in /usr/local/bin/docker-entrypoint.sh

The memory configurations are currently taken from /usr/share/elasticsearch/config/jvm.options where it is hardcoded. Changing the memory configuration requires either altering the image or mounting the jvm.options file.

I believe this is more of an issue with the image itself. Which repository would be most suitable?
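As a workaround until the entrypoint honors ES_JAVA_OPTS, the heap can be changed by mounting a local copy of jvm.options over the hardcoded one, as described above. A docker-compose fragment (the service name and image tag are assumptions matching the sample compose file):

```yaml
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:0.8.0
    volumes:
      # A local jvm.options with -Xms/-Xmx edited to the desired heap size,
      # shadowing /usr/share/elasticsearch/config/jvm.options in the image.
      - ./jvm.options:/usr/share/elasticsearch/config/jvm.options
```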

Remove "reserved" from the Admin and kibanaserver passwords

Since these passwords ship with default values, we need to make it easy to change them. Especially when running Docker, it's hard to change them, and that weakens the security posture.

Removing the reserved designation would allow us to change them in the Security UI.

facing issues while configuring kibana with existing stock elastic search

We have an existing Elasticsearch cluster (stock docker.elastic.co/elasticsearch/elasticsearch:6.6.2).

I tried using the Open Distro Kibana with it, as my use case is to have alerting on the Elasticsearch data.

A few issues I faced while doing this:

1. As my existing Elasticsearch runs on HTTP and doesn't have the security plugin, I disabled the security plugin in the Open Distro Kibana to get Kibana started.
2. Now I can see data in Kibana, but alerting didn't work. Below is the error in the Kibana logs:

Alerting - ElasticsearchService - search { [index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id=".opendistro-alerting-config" & index_uuid="na" & index=".opendistro-alerting-config" } :: {"path":"/.opendistro-alerting-config/_search","query":{},"body":"{"query":{"term":{"monitor.name.keyword":"test"}}}","statusCode":404,"response":"{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"},"status":404}"}
at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)
at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)
at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)
at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
at IncomingMessage.emit (events.js:194:15)
at endReadableNT (_stream_readable.js:1103:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
status: 404,
displayName: 'NotFound',
message:
'[index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id=".opendistro-alerting-config" & index_uuid="na" & index=".opendistro-alerting-config" }',
path: '/.opendistro-alerting-config/_search',
query: {},
body:
{ error:
{ root_cause: [Array],
type: 'index_not_found_exception',
reason: 'no such index',
'resource.type': 'index_or_alias',
'resource.id': '.opendistro-alerting-config',
index_uuid: 'na',
index: '.opendistro-alerting-config' },
status: 404 },
statusCode: 404,
response:
'{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"},"status":404}',
toString: [Function],
toJSON: [Function] }
Alerting - ElasticsearchService - search { [index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id=".opendistro-alerting-config" & index_uuid="na" & index=".opendistro-alerting-config" } :: {"path":"/.opendistro-alerting-config/_search","query":{},"body":"{"query":{"term":{"monitor.name.keyword":"test"}}}","statusCode":404,"response":"{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"},"status":404}"}
at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)
at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)
at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)
at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
at IncomingMessage.emit (events.js:194:15)
at endReadableNT (_stream_readable.js:1103:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
status: 404,
displayName: 'NotFound',
message:
'[index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id=".opendistro-alerting-config" & index_uuid="na" & index=".opendistro-alerting-config" }',
path: '/.opendistro-alerting-config/_search',
query: {},
body:
{ error:
{ root_cause: [Array],
type: 'index_not_found_exception',
reason: 'no such index',
'resource.type': 'index_or_alias',
'resource.id': '.opendistro-alerting-config',
index_uuid: 'na',
index: '.opendistro-alerting-config' },
status: 404 },
statusCode: 404,
response:
'{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"},"status":404}',
toString: [Function],
toJSON: [Function] }
{"type":"response","@timestamp":"2019-04-21T18:16:33Z","tags":[],"pid":1,"method":"post","statusCode":200,"req":{"url":"/api/alerting/_search","method":"post","headers":{"host":"internal-newkibana01-corp-grabpay-com-1568899982.ap-southeast-1.elb.amazonaws.com","accept":"application/json, text/plain, /","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.5","content-type":"application/json;charset=utf-8","kbn-version":"6.6.2","referer":"http://internal-newkibana01-corp-grabpay-com-1568899982.ap-southeast-1.elb.amazonaws.com/app/opendistro-alerting","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:66.0) Gecko/20100101 Firefox/66.0","x-forwarded-for":"10.77.0.195","x-forwarded-port":"80","x-forwarded-proto":"http","content-length":"98","connection":"keep-alive"},"remoteAddress":"10.77.5.236","userAgent":"10.77.5.236","referer":"http://internal-newkibana01-corp-grabpay-com-1568899982.ap-southeast-1.elb.amazonaws.com/app/opendistro-alerting"},"res":{"statusCode":200,"responseTime":4,"contentLength":9},"message":"POST /api/alerting/_search 200 4ms - 9.0B"}

Let me know if any more info is needed.
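For what it's worth, the `.opendistro-alerting-config` index is created lazily, so a search against it returns 404 until the first monitor or destination has been saved. A quick way to check whether the index exists yet (a sketch assuming a local cluster with the demo admin credentials and self-signed certs):

```shell
# List the alerting config index if it exists; an empty/404 result means
# no monitor has been created yet (credentials and host are assumptions).
curl -sk -u admin:admin \
  "https://localhost:9200/_cat/indices/.opendistro-alerting-config?v"
```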

elasticsearch.performanceanalyzer ERROR

I recently ran into a problem: Elasticsearch keeps writing this log. Please help me look into it. The error report is as follows:
[2019-04-12T15:00:42,433][WARN ][o.e.g.DanglingIndicesState] [node1] [[.opendistro_security/dPNNhxJUT8euAX-TUzN8Lg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2019-04-12T15:03:37,536][ERROR][c.a.o.e.p.m.PerformanceAnalyzerMetrics] [node1] Error in Writing to Tmp File: java.io.IOException: Bad file descriptor for keyPath:/dev/shm/performanceanalyzer/1555052610000//indices/.kibana_1/0
java.io.IOException: Bad file descriptor
at java.io.FileOutputStream.close0(Native Method) ~[?:1.8.0_144]
at java.io.FileOutputStream.access$000(FileOutputStream.java:53) ~[?:1.8.0_144]
at java.io.FileOutputStream$1.close(FileOutputStream.java:356) ~[?:1.8.0_144]
at java.io.FileDescriptor.closeAll(FileDescriptor.java:212) ~[?:1.8.0_144]
at java.io.FileOutputStream.close(FileOutputStream.java:354) ~[?:1.8.0_144]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.PerformanceAnalyzerMetrics.writeToTmp(PerformanceAnalyzerMetrics.java:158) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.PerformanceAnalyzerMetrics.emitMetric(PerformanceAnalyzerMetrics.java:121) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsProcessor.lambda$saveMetricValues$0(MetricsProcessor.java:27) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.lambda$invokePrivileged$1(PerformanceAnalyzerPlugin.java:104) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_144]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.invokePrivileged(PerformanceAnalyzerPlugin.java:102) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsProcessor.saveMetricValues(MetricsProcessor.java:27) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsMetricsCollector.collectMetrics(NodeStatsMetricsCollector.java:181) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.PerformanceAnalyzerMetricsCollector.lambda$run$0(PerformanceAnalyzerMetricsCollector.java:57) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.lambda$invokePrivileged$1(PerformanceAnalyzerPlugin.java:104) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_144]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.invokePrivileged(PerformanceAnalyzerPlugin.java:102) [opendistro_performance_analyzer-0.7.0.0.jar:0.7.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.PerformanceAnalyzerMetricsCollector.run(PerformanceAnalyzerMetricsCollector.java:57) [opendistro_performance_analyzer-0.7.0.0.jar:0.7.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]

Not able to map AD groups

Hi,

Could you please provide some guidance on how to connect Open Distro with AD groups? LDAP authentication works, but how do I define roles and access for those users?

Thanks a lot
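For reference, mapping AD/LDAP groups to backend roles is done in the `authz` section of the security plugin's `config.yml`. A sketch with placeholder values (the hosts, DNs, and search filters here are assumptions; adapt them to your directory):

```yaml
authz:
  roles_from_myldap:
    http_enabled: true
    transport_enabled: true
    authorization_backend:
      type: ldap
      config:
        hosts:
          - ldap.example.com:389       # placeholder AD/LDAP host
        bind_dn: cn=admin,dc=example,dc=com
        password: changeit
        rolebase: ou=groups,dc=example,dc=com
        rolesearch: (member={0})       # find groups containing the user DN
        userrolename: disabled
        rolename: cn                   # group attribute used as backend role
        resolve_nested_roles: true
```

The backend roles resolved this way (the group `cn` values) are then mapped to Open Distro roles in `roles_mapping.yml`.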

Login to Kibana through JWT

Hi

  1. With the configuration below, Kibana can't start because it tries to authenticate using basic auth, but my security config allows only JWT.
    Logs:
odfe-node1    | [2019-03-21T18:07:15,713][WARN ][c.a.d.a.h.j.HTTPJwtAuthenticator] [jDe_UcC] No Bearer scheme found in header
odfe-node1    | [2019-03-21T18:07:15,713][WARN ][c.a.o.s.a.BackendRegistry] [jDe_UcC] Authentication finally failed for null from 192.168.0.2:59234
  • How can I start Kibana using only JWT?
  • How can I log in to Kibana using JWT?
    For example:
http://127.0.0.1:5601?jwtToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6ImJlMjc3MmNlMTAxODRjZmNhZmRhZTk5Y2RlNzk0NGU3IiwiYWNjb3VudElkIjoiYmUyNzcyY2UxMDE4NGNmY2FmZGFlOTljZGU3OTQ0ZTciLCJ0b2tlbiI6IjU2ZTE3OTE4LTA2Y2UtYTJlMS1kY2RmLTgyN2M3YjAzNjU4OCIsInJvbGVzS2V5IjoiYWxsX2FjY2VzcyIsInN1YmplY3RLZXkiOiJhZG1pbiIsImlhdCI6MTU1MzE4OTQ2MiwiZXhwIjoxNTUzMzYyMjYyLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0In0.mU9XEYq0B0cQTIvNNND1M_tsTS35NeZAL5suCoQbunw

Security config

opendistro_security:
  dynamic:
    http:
      anonymous_auth_enabled: false
    authc:
      basic_internal_auth_domain:
        http_enabled: false
        transport_enabled: true
        order: 4
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: intern
      jwt_auth_domain:
        enabled: true
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: jwt
          challenge: false
          config:
            signing_key: qwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewq
            # jwt_header: "Authorization: Bearer <token>"
            jwt_url_parameter: "jwtToken"
            roles_key: rolesKey
            subject_key: subjectKey
        authentication_backend:
          type: noop
    authz:
      roles_from_myldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: noop
      roles_from_another_ldap:
        enabled: false
        authorization_backend:
          type: noop
  1. Also, I can't even connect to Elasticsearch.
    A sample request:
curl -X GET \
  'https://127.0.0.1:9200' \
  -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6ImJlMjc3MmNlMTAxODRjZmNhZmRhZTk5Y2RlNzk0NGU3IiwiYWNjb3VudElkIjoiYmUyNzcyY2UxMDE4NGNmY2FmZGFlOTljZGU3OTQ0ZTciLCJ0b2tlbiI6IjU2ZTE3OTE4LTA2Y2UtYTJlMS1kY2RmLTgyN2M3YjAzNjU4OCIsInJvbGVzS2V5IjoiYWxsX2FjY2VzcyIsInN1YmplY3RLZXkiOiJhZG1pbiIsImlhdCI6MTU1MzE4OTQ2MiwiZXhwIjoxNTUzMzYyMjYyLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0In0.mU9XEYq0B0cQTIvNNND1M_tsTS35NeZAL5suCoQbunw' \
  -H 'Postman-Token: 79519d07-ea3c-4f3e-819a-0b71964f7653' \
  -H 'cache-control: no-cache'

Response:

odfe-node1    | [2019-03-21T17:57:44,940][WARN ][c.a.o.s.a.BackendRegistry] [jDe_UcC] Authentication finally failed for null from 192.168.0.1:53128
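For anyone debugging this, a token the `jwt_auth_domain` above would accept can be minted with nothing but the Python standard library. This is a sketch: the claim names mirror `roles_key`/`subject_key` from the config, while the signing key and values are hypothetical placeholders.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(signing_key: str, subject: str, roles: str) -> str:
    """Mint an HS256 JWT whose claim names match subject_key/roles_key."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = b64url(json.dumps({
        "subjectKey": subject,   # must match subject_key in config.yml
        "rolesKey": roles,       # must match roles_key in config.yml
        "iat": now,
        "exp": now + 3600,
    }).encode())
    signing_input = f"{header}.{payload}"
    sig = hmac.new(signing_key.encode(), signing_input.encode(),
                   hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

# Hypothetical key; use the signing_key from your security config.
token = make_jwt("my-shared-secret", "admin", "all_access")
```

The resulting token can then be passed as `?jwtToken=<token>` to Kibana (per `jwt_url_parameter`) or as `Authorization: Bearer <token>` to Elasticsearch.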

Unable to access Kibana installed on an Azure VM

Hi, I have created a VM (CentOS 7.5) on Azure and followed the steps on the website.
When I try to access Kibana via the public IP address (on port 5601), the page is not available. I have also opened inbound ports 9200 and 5601 in the VM's network firewall.
Any help?

Not able to install the Open Distro for Elasticsearch RPM package on a CentOS 7 VM

Hi all,

I am trying to install the Open Distro RPM package on a CentOS 7 VM, but the installation fails
(I have the JDK installed); I don't know if I missed some steps.
I get the error messages below:
Error: Package: opendistro-sql-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistroforelasticsearch-0.8.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistro-security-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistro-alerting-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistro-performance-analyzer-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
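The unmet dependency above means yum can't find the exact `elasticsearch-oss` build the 0.8.0 plugins are pinned to. Assuming the Elastic 6.x OSS yum repository is configured, installing the pinned version first usually resolves it (a sketch, not the official procedure):

```shell
# Pin elasticsearch-oss to the version the plugins require, then install
# the Open Distro meta-package (repo setup is assumed to be in place).
sudo yum install -y elasticsearch-oss-6.6.2
sudo yum install -y opendistroforelasticsearch-0.8.0
```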

Thanks a lot

Fred

Alert ID is not available

The documentation mentions that the ctx variable has a field 'alert' with the following content:
"The current, active alert (if it exists). Includes ctx.alert.id, ctx.alert.version, and ctx.alert.isAcknowledged. Null if no alert is active."

However, when verifying the ctx.alert field in my triggered action message, I get the following result:
{state=ACTIVE, error_message=null, acknowledged_time=null, last_notification_time=1555580484091}

The properties are different, and there is no alert ID or version. The alert ID is indispensable for acknowledging the alert via the API.
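For context, acknowledgment goes through the monitor-scoped Alerting API endpoint, which is why the missing `ctx.alert.id` is a blocker. A minimal sketch that builds (but does not send) the acknowledge request; the monitor and alert IDs are hypothetical, and auth/TLS handling is left to the caller:

```python
import json
from urllib import request

def ack_request(base_url: str, monitor_id: str, alert_ids: list) -> request.Request:
    """Build the acknowledge call for the Open Distro Alerting API."""
    url = f"{base_url}/_opendistro/_alerting/monitors/{monitor_id}/_acknowledge/alerts"
    body = json.dumps({"alerts": alert_ids}).encode()
    return request.Request(url, data=body, method="POST",
                           headers={"Content-Type": "application/json"})

# Hypothetical IDs for illustration only.
req = ack_request("https://localhost:9200", "monitor-id", ["alert-id"])
```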

Unknown setting opendistro_security.disabled: does the documentation require an update?

Hi, I was setting up Open Distro with security disabled for development purposes and followed the guide to disable security in the docs here: https://opendistro.github.io/for-elasticsearch-docs/docs/security/disable/

An exception is thrown indicating that the setting opendistro_security.disabled has been removed.

I removed all settings related to opendistro_security, including opendistro_security.disabled, and got it working without security.

I think the setting has been removed, but the documentation has not been updated.

Open Distro Elasticsearch: change the default password without a docker-compose.yml

Hello, I'm trying to change the default passwords of Elasticsearch and Kibana,
for instance admin:admin or kibanaserver:kibanaserver.

I took the container image from https://hub.docker.com/r/amazon/opendistro-for-elasticsearch
and didn't write any docker-compose.yml file; I just run the container without one.

In
/usr/share/elasticsearch/plugins/opendistro_security/securityconfig
internal_users.yml, I changed the default password to a new hash, but when I restart the container it doesn't change. I have tried many times and still get the same default password.

I followed this page: https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker-security/

Can someone please help me figure out this issue?
Thanks a lot
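One likely cause: once the cluster has bootstrapped, the security configuration lives in the `.opendistro_security` index, so editing `internal_users.yml` and restarting the container is not enough; the file has to be pushed back with `securityadmin.sh`. A sketch of the usual flow (paths follow the official image; the container name and demo certificate filenames are assumptions):

```shell
# 1. Hash the new password inside the container.
docker exec -it odfe-node \
  plugins/opendistro_security/tools/hash.sh -p <new-password>

# 2. Put the hash into securityconfig/internal_users.yml, then push the
#    config into the .opendistro_security index so it takes effect.
docker exec -it odfe-node \
  plugins/opendistro_security/tools/securityadmin.sh \
  -cd plugins/opendistro_security/securityconfig \
  -cacert config/root-ca.pem \
  -cert config/kirk.pem \
  -key config/kirk-key.pem \
  -icl -nhnv
```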

OpenID Connect not working - Unknown kid

I tried to set up OpenID following the instructions, and I am running into an issue where the security plugin is not able to extract the attributes from the JWT because of an unknown key ID (kid).

Here is the stack trace and the config files for Kibana and Elasticsearch.

odfe-node1    | [2019-04-26T02:47:59,672][INFO ][c.a.d.a.h.j.AbstractHTTPJwtAuthenticator] [mqs9XQT] Extracting JWT token from eyg.......RESTOFTOKEN....ryry failed
odfe-node1    | com.amazon.dlic.auth.http.jwt.keybyoidc.BadCredentialsException: Unknown kid ACTUALKEYIDVALUE
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.keybyoidc.SelfRefreshingKeySet.getKeyWithKeyId(SelfRefreshingKeySet.java:118) ~[opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.keybyoidc.SelfRefreshingKeySet.getKey(SelfRefreshingKeySet.java:58) ~[opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.keybyoidc.JwtVerifier.getVerifiedJwtToken(JwtVerifier.java:41) ~[opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator.extractCredentials0(AbstractHTTPJwtAuthenticator.java:103) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator.access$000(AbstractHTTPJwtAuthenticator.java:45) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator$1.run(AbstractHTTPJwtAuthenticator.java:85) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator$1.run(AbstractHTTPJwtAuthenticator.java:82) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at java.security.AccessController.doPrivileged(Native Method) [?:?]
odfe-node1    | 	at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator.extractCredentials(AbstractHTTPJwtAuthenticator.java:82) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1    | 	at com.amazon.opendistroforelasticsearch.security.auth.BackendRegistry.authenticate(BackendRegistry.java:448) [opendistro_security-0.8.0.0.jar:0.8.0.0]
--kibana.yml
opendistro_security.multitenancy.enabled: true
opendistro_security.auth.type: openid
opendistro_security.openid.connect_url: https://.../.well-known/openid-configuration
opendistro_security.openid.client_id: {myID}
opendistro_security.openid.client_secret: {mySecret}

--config.yml (Elastic)

basic_internal_auth_domain:
  http_enabled: true
  transport_enabled: true
  order: 0
  http_authenticator:
    type: basic
    challenge: false
  authentication_backend:
    type: internal
openid_auth_domain:
  enabled: true
  http_enabled: true
  transport_enabled: true
  order: 1
  http_authenticator:
    type: openid
    challenge: false
    config:
      subject_key: sub
      roles_key: roles
      openid_connect_url: https://.../.well-known/openid-configuration
  authentication_backend:
    type: noop

Performance Analyzer uses too much /dev/shm

My Open Distro cluster currently runs on Kubernetes. I found that /dev/shm was used up, and that Performance Analyzer is what's using it. But in a Kubernetes pod, /dev/shm is only 64 MB.
If I mount memory to /dev/shm, it may cause OOM. Can you give me a solution?
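One common workaround on Kubernetes is to back /dev/shm with a bounded, memory-backed emptyDir, so it can grow past the 64 MB default without unbounded memory use (a pod-spec fragment; the volume name and the 1Gi limit are examples, and sizeLimit counts against the container's memory limit):

```yaml
# Pod spec fragment: memory-backed /dev/shm with an upper bound.
volumes:
  - name: dshm
    emptyDir:
      medium: Memory
      sizeLimit: 1Gi
containers:
  - name: elasticsearch
    volumeMounts:
      - name: dshm
        mountPath: /dev/shm
```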
