elastic / stack-docs
Elastic Stack Documentation
License: Other
This is one area that is not covered in our docs today.
Even though tribe is deprecated, it is still in the product on 6.x. There will be questions around upgrading tribe node implementations for those who are not ready to switch to CCS yet. AFAIK, they need to upgrade the tribe node to 6.0 before the downstream clusters, because if the tribe remains on 5.x it will not be able to join any downstream clusters that have indices created on 6.0. For example, can they upgrade the tribe to 6.0 first and then do rolling restarts of the downstream 5.6 clusters to 6.0? This is something we will have to sync up with dev on for our recommendations. This will probably depend on the outcome of #17, but I do want to make sure that the results are documented :)
Note that some customers may resist switching to CCS right away because, e.g.:
So if we decide that we will not be testing/supporting tribe for rolling upgrades because the tribe node is deprecated, we will just have to document it to set the right expectations upfront.
Even for CCS, there will be questions on upgrade ordering (upgrade CCS first? Downstream clusters?).
Add APM Server to the list of selectable stack components and show it in the list of steps if it is selected.
Related to elastic/kibana#21328
The machine learning getting started tutorial should be updated to reflect the new refresh rate options in the Job Management, Anomaly Explorer, and Single Metric Viewer pages.
Some of the screenshots in the Kibana Reference might need to be updated too.
This is still very much a WIP, but time is getting very short so opening this issue to get more eyeballs on it.
Looking for any and all feedback, but the most important thing is making sure that the various paths are fleshed out appropriately. That goes beyond making sure there aren't any paths that throw errors, the actual content & recommendations have to be accurate. And, of course, it needs to look decent -- @cjcenizal gets the cred for the styling, any bad line breaks, weirdo spacing, etc. are on me, along with the content issues.
You can try out a live preview at: https://upgrade-adventure.firebaseapp.com/find_path.html
This is built using Twine. It's not a great long-term solution, but there wasn't enough time to roll our own. It uses the Sugarcube 2 story format. You can load the attached find_path.txt
file into the online version of Twine to see a visualization of the paths. Right now, that looks like this:
I've also attached the "proofing" version that contains just the content source. (If you import find_path.txt into Twine, you can also view the proofing version in the app.)
The topic about getting started with the Infrastructure UI needs to have a pointer to the Kibana documentation about UI settings:
https://www.elastic.co/guide/en/kibana/master/infrastructure-ui-settings-kb.html
In general, make sure there are sufficient pointers between the topics.
Per elastic/elasticsearch#37942 (comment)
We should probably expand the section in the upgrade docs to match current reality more closely....
In 7.1, we need to be able to differentiate between "anomaly detector jobs" and "data frame analytics jobs" in the documentation.
NOTE: Page names should remain unchanged to ease switching between versions. (For example, after 7.1 is released we don't want someone who selects "show current version" from the 7.0 "Put Job" page to be told the page doesn't exist.)
I wasn't sure which repo to use for this issue, but I think this is probably the best fit, even if the fix ends up elsewhere
I've run into 2 situations recently where someone tried to use Elasticsearch-style xpack.* settings to configure SSL in Kibana.
One is on discuss, the other was on private communications.
I'm not sure exactly where the issue lies, whether the docs for setting up TLS in Elasticsearch need to have a link to the Kibana docs, or whether the Kibana docs are hard to find, but 2 cases in as many weeks (that I've seen) implies that we should do something to clarify the setup process.
As @bleskes coordinates testers for the 7.0 stack upgrade process, we'll gather feedback on the docs and record it here.
From @jakommo:
I've seen many issues where ECE customers tried to apply on-prem configs, because this was the first search result that came up on Google. I was wondering if it would make sense to have both (ECE and on-prem) instructions on the same page, or at least add a note to the on-prem docs with a link to the ECE config for this feature.
We probably don't want to merge these docs as they are today, but we should track the issue of how we can improve the documentation of configuration parameters to indicate what they apply to, regardless of your entry point into the docs.
Among the issues that end up confusing customers today:
One proposal might be to standardize how we document configuration parameters and add a field that indicates whether or not the parameter is supported on-prem, in ECE, or on the Elasticsearch Service.
This request comes from @tbragin:
Elastic Stack documentation: https://www.elastic.co/guide/en/elastic-stack/current/index.html
Currently these pages are at the top of all the other content. None of the books there lead to solution-oriented content.
Recommendation: Add pointers from Installing and Getting Started guides for Elastic Stack to next steps of setting up solutions.
We have a note here on version dependency, i.e. they can't do an export from a 6.x cluster to a 5.x monitoring cluster during upgrades: https://www.elastic.co/guide/en/x-pack/current/monitoring-production.html#monitoring-production
The problem is that this note is in the "setup" section of the monitoring guide and users are unlikely going to revisit this page as part of the upgrade. It will be helpful if we can call this out (or cross link) as part of the stack upgrade documentation.
This issue encapsulates the work we know we need to do for the 6.0 upgrade project. It's a superset of #14, so I'll close that issue. We will undoubtedly discover things we need to do along the way (the Rumsfeldian "unknown unknowns".) Teams and tasks are broken down by product. Each product also includes its X-Pack features. There is also a "stack" team that covers cross-product considerations. Elasticsearch and Kibana are a bit heavy on things we already know we need to do, so any extra help there would be welcomed. We could also use some more eyes on Logstash, as @jsvd is currently a one-man army!
.watcher, .security)
.kibana for users without X-Pack (@tylersmalley) #19

Is the description "Elasticsearch 8.0.0-alpha1" correct?
It is the master branch, so is there no problem?
https://www.elastic.co/guide/en/elastic-stack/master/upgrading-elastic-stack.html
Looking at the current published version of the terminology list in the glossary (https://www.elastic.co/guide/en/elastic-stack-glossary/current/terms.html), there are multiple instances of incomplete links being rendered:
For example:
This value can be overridden by specifying a routing value at index time, or a /mapping-routing-field.html[routing field] in the mapping.
After examining the source, it seems that the {ref} attribute is not defined, which is breaking the declaration of the links.
This request came to us from Shawn Clink ([email protected]):
It would be helpful if the firewall exception requirements for all the stack were listed in the setup docs (like the kibana configuration) located here:
https://www.elastic.co/guide/en/kibana/current/settings.html
A user contacted us through the "Contact Us" form and wrote:
In the documentation it says to use “_doc” for a query in the following page:
https://www.elastic.co/guide/en/elasticsearch/reference/6.3/docs-update.html
I've been fiddling around with a 6.3 installation, and in my experience only doc works. The documentation changed from 6.1 to 6.2 to specify _doc.
Related to elastic/beats#7035
Update overview (benefits of new method) in https://www.elastic.co/guide/en/elastic-stack-overview/master/xpack-monitoring.html
Update architectural diagrams (include Metricbeat) in https://www.elastic.co/guide/en/elastic-stack-overview/master/how-monitoring-works.html See https://github.com/elastic/Design/issues/1090 and #125
Add new Metricbeat installation and configuration steps for Kibana data collection in https://www.elastic.co/guide/en/elastic-stack-overview/master/monitoring-production.html
Add new Metricbeat installation and configuration steps for all other products' data collection in https://www.elastic.co/guide/en/elastic-stack-overview/master/monitoring-production.html, as that functionality is enabled.
Update the monitoring pages in the Kibana Reference to align with these changes (e.g. https://www.elastic.co/guide/en/kibana/master/monitoring-xpack-kibana.html) See elastic/kibana#23736
Update the monitoring pages in the Elasticsearch Reference to align with these changes (e.g. https://www.elastic.co/guide/en/elasticsearch/reference/master/configuring-monitoring.html). See elastic/elasticsearch#34339
Update the monitoring pages in the other product references to align with these changes (e.g. https://www.elastic.co/guide/en/logstash/current/configuring-logstash.html, https://www.elastic.co/guide/en/beats/filebeat/master/monitoring.html), as that functionality is enabled.
https://www.elastic.co/guide/en/elastic-stack-overview/current/monitoring-production.html
It would be better if you mentioned adding the following to elasticsearch.yml:
xpack.monitoring.collection.enabled: true
With the current instructions found on that page, monitoring won't work because xpack.monitoring.collection.enabled: true is not set.
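The same setting can also be applied dynamically through the cluster settings API (it became a dynamic cluster setting in 6.3); a minimal sketch, assuming the default host/port and placeholder credentials:

```shell
# Enable monitoring data collection cluster-wide without a restart.
# Host, port, and the elastic:changeme credentials are placeholders.
curl -u elastic:changeme -H 'Content-Type: application/json' \
  -X PUT 'http://localhost:9200/_cluster/settings' \
  -d '{ "persistent": { "xpack.monitoring.collection.enabled": true } }'
```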
Would be nice to provide more information about deploying the stack in a production environment. Right now, we have most of that information buried in the guides for each product, which makes a high level understanding of deployment options difficult.
Following the interactive upgrade guide https://www.elastic.co/products/upgrade_guide
Repro:
-What version of Elasticsearch are you currently running?
5.0-5.5
-Are you running in Elastic Cloud?
No
-Which products are you using with Elasticsearch?
Kibana
-Do you use X-Pack?
Yes
... next steps irrelevant
-Jump straight to 6.0
The resulting steps now mention using the Upgrade Assistant, which is a Kibana 5.6 feature. Nowhere in the process is deploying a Kibana 5.6 node, or upgrading Kibana to 5.6, mentioned.
In this case I believe the X-Pack migration API should be referenced instead of the Upgrade Assistant, with steps to upgrade the .kibana and .security indices after installing Elasticsearch 6.0+. (Which means starting Elasticsearch 6 with the old .kibana and .security indices? That's what we seem to describe in the docs; I haven't tested this path myself yet.)
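For reference, the X-Pack migration API calls in question look roughly like this in 5.6/6.x; a sketch, with placeholder host and credentials:

```shell
# Ask which internal indices still need to be upgraded.
curl -u elastic:changeme 'http://localhost:9200/_xpack/migration/assistance?pretty'

# Upgrade the internal indices reported by the assistance call.
curl -u elastic:changeme -X POST 'http://localhost:9200/_xpack/migration/upgrade/.kibana?pretty'
curl -u elastic:changeme -X POST 'http://localhost:9200/_xpack/migration/upgrade/.security?pretty'
```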
to reproduce:
Which returns:
Here you go!
These are the steps you need to take to upgrade to 6.3.
Upgrade to 5.6
Back up your data.
Use the Elasticsearch Migration plugin to check for upgrade issues.
Address any 5.x breaking changes that affect your applications:
Elasticsearch breaking changes
Upgrade one node at a time to Elasticsearch 5.6.
Install Elasticsearch 5.6.
Install X-Pack.
Restart the node.
Prepare to upgrade to 6.3
Back up your data.
Address any 6.0 breaking changes that affect your applications:
Elasticsearch breaking changes
If you're using the default changeme password, change your password.
Check the Elasticsearch deprecation log.
Upgrade to 6.3
Upgrade Elasticsearch to 6.3.
You can perform a rolling upgrade to Elasticsearch 6.3.
Remove the X-Pack plugin before restarting Elasticsearch.
Run bin/elasticsearch-plugin remove x-pack.
However, 6.x requires TLS
The path is therefore missing to enable TLS and the full cluster restart that comes with it.
Instead of the above, the upgrade to 6.x should follow the following path (taken from selecting "jump straight to 6.3" in the last question). In fact, going to 5.6 does not bring any benefits in this case, and we should consider removing that question when TLS is not enabled. 5.6 is only required if you plan to do a rolling upgrade to 6.x, which is not possible in this case:
Perform a Full Cluster Restart Upgrade to 6.3
Stop sending data to your cluster.
Shut down your cluster and install Elasticsearch 6.3 on all nodes.
Remove the X-Pack plugin before restarting Elasticsearch.
Run bin/elasticsearch-plugin remove x-pack.
Enable Transport Layer Security (TLS) to secure cluster communications.
TLS is required as of 6.0 to prevent unidentified nodes from joining a cluster and encrypt inter-node communications.
Restart your Elasticsearch cluster.
Create a temporary super user and upgrade the internal .security index.
Delete the temporary super user.
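The last three steps above might be sketched as follows, assuming the file-realm users CLI that ships with X-Pack (bin/x-pack/users in 6.0-6.2); the user name, password, host, and credentials are placeholders:

```shell
# Create a temporary file-realm superuser (name and password are placeholders).
bin/x-pack/users useradd temp_admin -p s3cr3tpass -r superuser

# Upgrade the internal .security index as that temporary superuser.
curl -u temp_admin:s3cr3tpass -X POST 'http://localhost:9200/_xpack/migration/upgrade/.security?pretty'

# Delete the temporary superuser once the upgrade completes.
bin/x-pack/users userdel temp_admin
```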
A couple of areas where we can improve the getting started with the stack content:
In the "Preparing for an upgrade" section of this page, we mention: "Back up your data. You cannot roll back to an earlier version unless you have a backup of your data. For information about creating snapshots, see Snapshot and Restore."
Due to the incremental nature of snapshots, we may want to mention that a new repo must be created after upgrading indexes in 5.6 (and before the upgrade is performed). Otherwise, the user will need to perform a snapshot restore on a version of ES < 6.0.
One possibility for changing the language:
"Back up your data. You cannot roll back to an earlier version unless you have a backup of your data. Note: because snapshots are incremental, snapshot repos created in earlier versions of Elasticsearch may not be restorable in newer versions of Elasticsearch. For this reason, it is important to upgrade all indexes in 5.6, create a new snapshot repo, perform a snapshot, and then upgrade Elasticsearch. For information about creating snapshots, see Snapshot and Restore."
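The "create a new snapshot repo, perform a snapshot" steps proposed above could be illustrated like this; the repository name, filesystem path, and credentials are placeholders:

```shell
# Register a fresh filesystem repository for the post-reindex snapshot.
curl -u elastic:changeme -H 'Content-Type: application/json' \
  -X PUT 'http://localhost:9200/_snapshot/pre_upgrade_repo' \
  -d '{ "type": "fs", "settings": { "location": "/mount/backups/pre_upgrade" } }'

# Snapshot all indices into the new repository before upgrading.
curl -u elastic:changeme \
  -X PUT 'http://localhost:9200/_snapshot/pre_upgrade_repo/snapshot_1?wait_for_completion=true'
```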
CC @eskibars
Let's use this issue to organize the effort to update the upgrade docs for 6.0.0.
The first step is for everyone to take a look at the existing upgrade docs and identify what's missing or needs to be updated:
https://www.elastic.co/guide/en/elastic-stack/current/index.html
Last time around, we decided to point to the docs for each component for detailed instructions. Now is the time to decide if we want to continue with that approach.
When doing upgrade testing from 5.6.14 to 6.6.0 using this doc:
https://www.elastic.co/guide/en/elastic-stack/6.6/upgrading-elastic-stack.html
With ES/Kibana upgraded to 6.6.0 and Logstash still on 5.6.14, I was confused by the statement below when I went to check monitoring for Logstash and it did not appear:
Beats and Logstash 5.6 are compatible with Elasticsearch 6.6.0 to give you flexibility in scheduling the upgrade
@ycombinator pointed me to this matrix to verify that monitoring 5.6 is not compatible with 6.6.0:
https://www.elastic.co/support/matrix#matrix_compatibility
Maybe we can clarify a bit in the upgrading elastic stack document too about that.
Parent issue #15
It'd be fantastic if we had an intro to the Elastic Stack and its various use cases under the Overview page. I'm thinking of 3 high-level areas we can start with:
Using generic use cases for each, we can then break the stack down and say "for X, install A", "don't install Logstash for search", etc.
Example at the bottom of this page, shows text as:
jdbc:es://http://server:3456/timezone=UTC&page.size=250
Correct text is:
jdbc:es://http://server:3456?timezone=UTC&page.size=250
Information related to https://www.elastic.co/blog/elastic-stack-6-3-0-and-6-3-1-may-disable-security-for-trial-licenses
... should be added to the upgrade information (e.g. as an "Important" section near the top of this page: https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elastic-stack.html)
This issue is meant to give a high level overview of all the work needed to update the documentation for users upgrading from 6.x to 7.x.
Installation and Upgrade Guide (https://www.elastic.co/guide/en/elastic-stack/7.0/index.html):
Elasticsearch Reference:
Kibana Guide:
Logstash Reference:
Beats Platform Reference:
Cloud:
APM?
One of the 6.0 upgrade team's major deliverables is a set of test environments that internal testers (who, crucially, are not on the upgrade team) can use to vet our documentation. We'd like that test infrastructure to reflect reality as much as possible. Given, however, the sheer configurability of the Elastic Stack, and the number of operating systems and JVMs we support, it's impossible to be exhaustive without incurring a combinatorial explosion of different configurations. So, we should identify the variables that are most likely to affect the upgrade process, and make sure that our test environments account for them. For example, whether a user is using shard filtering for cold indices probably won't affect the steps they take to upgrade the stack, but whether they're using X-Pack Security certainly will.
Here are some variables that come to mind:
upgrade API directly.

Accounting for these alone results in a large number of scenarios. We'd have, for at least two or three operating systems (RHEL-derived, Debian-derived, Windows), and potentially two different ingest architectures:
Like I was saying: combinatorial explosion.
So, we'll need to pare this down in a sane manner. I think it's reasonable to build something like the following for our (to-be formed) group of testers:
Running 5.6:
Running 5.5 (or some other 5.x version below 5.6):
This is clearly not exhaustive, but it's a starting point that covers the more complex scenarios well. I could see arguments for including more, however.
If you look at the forums, as well as support cases, shipper compatibility with the new Infrastructure and Logs UI is one of the most frequent questions around getting started.
https://discuss.elastic.co/c/infrastructure
https://discuss.elastic.co/c/logs
Looking at our documentation, we are missing sufficient detail:
https://www.elastic.co/guide/en/infrastructure/guide/current/install-infrastructure-monitoring.html
At a minimum, I think we should add information around:
1. The fact that shippers of version 6.5 and above are required.
2. That users can customize in kibana.yml which index(es) the Logs and Infrastructure UIs look at.
See #198
The Getting started guide contains Kibana installation instructions here:
https://www.elastic.co/guide/en/elastic-stack-get-started/master/get-started-elastic-stack.html#install-kibana
The "deb or rpm" instructions use tar.gz files. They ought to be updated to match the specific instructions in https://www.elastic.co/guide/en/kibana/master/deb.html#install-deb and https://www.elastic.co/guide/en/kibana/master/rpm.html#install-rpm
Ideally the instructions can be re-used so they stay in sync.
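Until the content is shared, the deb/rpm instructions would look roughly like this (a sketch; the version number is a placeholder for whichever release the guide targets, and the URLs follow the standard artifacts.elastic.co naming pattern):

```shell
# Placeholder version; substitute the release the guide documents.
KIBANA_VERSION=6.6.0

# Debian/Ubuntu:
wget "https://artifacts.elastic.co/downloads/kibana/kibana-${KIBANA_VERSION}-amd64.deb"
sudo dpkg -i "kibana-${KIBANA_VERSION}-amd64.deb"

# RHEL/CentOS:
wget "https://artifacts.elastic.co/downloads/kibana/kibana-${KIBANA_VERSION}-x86_64.rpm"
sudo rpm --install "kibana-${KIBANA_VERSION}-x86_64.rpm"
```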
The following is true of all versions: The APIs (user, role, role mapping) only work with objects that were defined via the API.
Objects (users, roles, mappings) defined in files need to be managed via the files (or the users CLI tool, which uses the files).
API exploration calls (and consecutively, Kibana) will not report roles and mappings defined in files.
Example, where role production_user is defined in roles.yml and user "test" is a file realm user created with the users CLI tool.
curl -X GET -u test:password http://localhost:9200/_xpack/security/_authenticate
{"username":"test","roles":["production_user"],"full_name":null,"email":null,"metadata":{},"enabled":true}
curl -X GET -u test:password http://localhost:9200/_xpack/security/role/production_user
{}
curl -X GET -u elastic:password http://localhost:9200/_xpack/security/user/test
{}
Some places to update docs to reflect this behavior:
https://www.elastic.co/guide/en/elastic-stack-overview/6.6/defining-roles.html#roles-management-file
https://www.elastic.co/guide/en/elastic-stack-overview/6.6/mapping-roles.html#mapping-roles-file
We just spent a significant amount of time figuring out why Active Directory / LDAP stops working after upgrading from ES 5.6 to ES 6.4.
The reason is:
We did not find any explicit documentation of this change, even though it did actually break functionality.
In particular, the Interactive Upgrade Guide should also highlight these and similar required changes.
The docs are broken, showing this:
The content looks right to me:
[[install-order-elastic-stack]]
=== Installation Order
Install the Elastic Stack products you want to use in the following order:
. Elasticsearch ({esref}/install-elasticsearch.html[install instructions])
. {xpack} for Elasticsearch ({esref}/install-elasticsearch.html[install instructions])
. Kibana ({esref}/install-elasticsearch.html[install])
. {xpack} for Kibana ({esref}/install-elasticsearch.html[install instructions])
. Logstash ({esref}/install-elasticsearch.html[install])
. {xpack} for Logstash ({esref}/install-elasticsearch.html[install instructions])
. Beats ({esref}/install-elasticsearch.html[install instructions])
. Elasticsearch Hadoop ({esref}/install-elasticsearch.html[install instructions])
Installing in this order ensures that the components each product depends
on are in place.
Therefore I am passing this on to an expert.
Original comment by @lcawl:
Per LINK REDACTED
"... if we want to fully document the functionality in the Data Visualizer, items that are not covered are:
Some of this might be appropriate for help inside the UI, but it should be considered for inclusion in the Kibana User Guide > Machine Learning nonetheless.
We need more cross-stack documentation that addresses specific use cases. We have a lot of info on the website in blogs that describe specific use cases, but that content gets stale over time. We need to pull some of that content into the core docs (or create new tutorials) so the information stays more current.
Right now, the stack getting started docs show how to install and run the OSS components of the stack. For users who are interested in cloud, though, the setup is much simpler. They can eliminate the first two parts of the setup (installing ES and Kibana) and simply set up a cloud account. Then when they configure Beats, they set the cloud.id and cloud.auth config options, and are quickly able to see results (with minimal effort).
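The cloud setup described above can even skip editing the config file by using Beats' -E overrides; a sketch, where the cloud.id value and credentials are placeholders copied from your own Elastic Cloud deployment:

```shell
# Load index templates and dashboards, then start shipping to Elastic Cloud.
# The cloud.id and cloud.auth values are placeholders from your deployment.
./metricbeat setup \
  -E cloud.id="my-deployment:ZXhhbXBsZS5jbG91ZC5lbGFzdGljLmNv" \
  -E cloud.auth="elastic:changeme"
./metricbeat -e \
  -E cloud.id="my-deployment:ZXhhbXBsZS5jbG91ZC5lbGFzdGljLmNv" \
  -E cloud.auth="elastic:changeme"
```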
The stack getting started guide does mention that there is a hosted Elastic Service available on cloud, but I'm wondering if a full tutorial that highlights the simplicity of cloud might be worthwhile. The new tutorial would be very similar to the full stack (OSS) tutorial, except:
Having a tutorial at the same level as the stack getting started (with a similar layout) raises the visibility of cloud and shows how easy it is to get started quickly. This is also a quick win because most of the content can be shared/reused with the OSS version of the tutorial.
@nrichers @lcawl Opening this as an item for discussion. If you think this is a good idea, we can talk about ownership. It probably makes the most sense for Lisa (or me, if she's too busy) to own the overall tutorial and for Nik to advise on the contents of the cloud part. WDYT?
Right now, our release notes contain a laundry list of changes. They don't do a very good job of highlighting the major changes that were added for each release.
As a quick fix, the release notes for each product should point to the blog posts that contain the release highlights.
We should do this for all 6.x versions, and possibly 5.6.
The links need to be added to the release notes in the appropriate repo. Use this meta issue to track completion of the work:
We would like to make it easier for customers to deploy the Elastic Stack on Docker and Kubernetes. To that end, we'd like to create tutorials similar to https://www.elastic.co/guide/en/elastic-stack-overview/master/get-started-elastic-stack.html for those environments.
In order to improve the getting-started experience for our users who deploy with Docker or Kubernetes, we need some updates/changes to the docs. Target date for this is 30 November 2018.
Docker config files and commands in GitHub:
Updates to these docs for Docker will be tracked here:
Updates to these docs for Kubernetes will be tracked here:
Other docs that will be updated through the Helm project (later, not part of this issue):
- [ ] Elasticsearch
- [ ] Kibana
- [ ] APM Server
PRs in:
- [x] Beats
- [ ] Elasticsearch (no change needed)
- [ ] Kibana (no change needed)
@tylerjl @exekias Can you volunteer someone to help review? Similar to the blogs and demos I have been running past you, our goal here is to publish good advice in the docs, and we don't want to embarrass ourselves by publishing something that works but is deprecated.
Original comment by @lcawl:
On 5.4 and 5.5 builds from s3, there is disparity with respect to the categorization_filters content.
For example, when I run the following API in the Dev Tools tab in Kibana:
PUT _xpack/ml/anomaly_detectors/it_ops_logs
{
"description": "IT Ops application logs",
"analysis_config": {
"categorization_field_name": "message",
"bucket_span":"30m",
"detectors": [
{
"function": "count",
"by_field_name": "mlcategory",
"detector_description": "Unusual message counts"
}],
"categorization_filters":"\\[statement:.*\\]"
},
"analysis_limits": {
"categorization_examples_limit": 5
},
"data_description": {
"time_field": "time",
"time_format": "epoch_ms"
}
}
It returns the following:
...
"analysis_config": {
"bucket_span": "30m",
"categorization_field_name": "message",
"categorization_filters": [
"""\[statement:.*\]"""
]
...
Ditto when I run GET _xpack/ml/anomaly_detectors/it_ops_logs from the Dev Tools tab.
If I run it from the command line, however, I get the following:
curl -u elastic:changeme -XGET 'localhost:9200/_xpack/ml/anomaly_detectors/it_ops_logs'
{"count":1,"jobs":[{"job_id":"it_ops_logs","job_type":"anomaly_detector","job_version":"5.5.0","description":"IT Ops application logs","create_time":1497295377972,"analysis_config":{"bucket_span":"30m","categorization_field_name":"message","categorization_filters":["\\[statement:.*\\]"],"detectors":[{"detector_description":"Unusual message counts","function":"count","by_field_name":"mlcategory","detector_rules":[],"detector_index":0}],"influencers":[]},"analysis_limits":{"categorization_examples_limit":5},"data_description":{"time_field":"time","time_format":"epoch_ms"},"model_snapshot_retention_days":1,"results_index_name":"shared"}]}
Note that the categorization_filters content matches what I specified in the PUT command in this case.
The "categorization_filters":"""[statement:.*]""" syntax does not actually seem to be invalid (I can successfully create a job in the Dev Tools with that syntax); it's just a bit confusing that it returns a different syntax in Dev Tools vs the command line.
For interest's sake, to get this to work in the advanced job wizard I must use the following syntax:
.... the categorization_filters property then has the desired format in the Edit JSON tab:
The conf.yaml at https://github.com/elastic/docs/blob/master/conf.yaml indicates that we're only generating the Glossary from a single branch: master.
Therefore the glossary folder should be removed from all other branches. Otherwise folks might make changes in branches that aren't used or backport unnecessarily.
Original comment by @LeeDr:
Kibana version: 5.5.0
Elasticsearch version: 5.5.0
Server OS version: Ubuntu
Browser version: Chrome
Browser OS version: Ubuntu
Original install method (e.g. download page, yum, from source, etc.): tar.gz
Description of the problem including expected versus actual behavior:
I think I'm giving a user roles with permissions that should allow them to create a cross cluster index pattern, but it fails.
Steps to reproduce:
1. Create a makelogs_reader role that has the index patterns makelogs-*, local:makelogs-*, data:makelogs-*, *:makelogs-* and the privileges read, view_index_metadata, and read_cross_cluster.
2. Create a kibana_ccs role that is just like the kibana_user role except that it adds the read_cross_cluster privilege on the .kibana* index. (I didn't think this step should be necessary since .kibana isn't cross cluster, but I tried it when things didn't work with the kibana_user role.)
3. Create a makelogs_reader user with the makelogs_reader and kibana_ccs roles.
Then try to create an index pattern that contains a colon, like data:.
Up to the point where I type data, I can see Kibana checking if that index exists and getting a 404 as expected. But as soon as I type the :, I get the red toast error banner and the console shows this:
Errors in browser console (if relevant):
getFieldsForWildcard(data:)
VM10731:1 GET https://localhost:5601/api/index_patterns/_fields_for_wildcard?pattern=data…5B%22_source%22%2C%22_id%22%2C%22_type%22%2C%22_index%22%2C%22_score%22%5D 500 (Internal Server Error)
(anonymous) @ VM10731:1
(anonymous) @ commons.bundle.js?v=15347:37
sendReq @ commons.bundle.js?v=15347:37
serverRequest @ commons.bundle.js?v=15347:37
processQueue @ commons.bundle.js?v=15347:38
(anonymous) @ commons.bundle.js?v=15347:38
$eval @ commons.bundle.js?v=15347:39
$digest @ commons.bundle.js?v=15347:39
$apply @ commons.bundle.js?v=15347:39
(anonymous) @ commons.bundle.js?v=15347:39
completeOutstandingRequest @ commons.bundle.js?v=15347:36
(anonymous) @ commons.bundle.js?v=15347:36
commons.bundle.js?v=15347:38 Error: An internal server error occurred
at kibana.bundle.js?v=15347:228
at processQueue (commons.bundle.js?v=15347:38)
at commons.bundle.js?v=15347:38
at Scope.$eval (commons.bundle.js?v=15347:39)
at Scope.$digest (commons.bundle.js?v=15347:39)
at Scope.$apply (commons.bundle.js?v=15347:39)
at done (commons.bundle.js?v=15347:37)
at completeRequest (commons.bundle.js?v=15347:37)
at XMLHttpRequest.xhr.onload (commons.bundle.js?v=15347:37)
(anonymous) @ commons.bundle.js?v=15347:38
(anonymous) @ commons.bundle.js?v=15347:37
processQueue @ commons.bundle.js?v=15347:38
(anonymous) @ commons.bundle.js?v=15347:38
$eval @ commons.bundle.js?v=15347:39
$digest @ commons.bundle.js?v=15347:39
$apply @ commons.bundle.js?v=15347:39
done @ commons.bundle.js?v=15347:37
completeRequest @ commons.bundle.js?v=15347:37
xhr.onload @ commons.bundle.js?v=15347:37
I know the cross cluster config is OK and data:makelogs-* works fine for the elastic superuser.
Provide logs and/or server output (if relevant):
The 2 roles I created:
"makelogs_reader": {
"cluster": [],
"indices": [
{
"names": [
"makelogs-*",
"data:makelogs-*",
"*:makelogs-*",
"local:makelogs-*"
],
"privileges": [
"read",
"view_index_metadata",
"read_cross_cluster"
],
"field_security": {
"grant": [
"*"
]
}
}
],
"run_as": [],
"metadata": {},
"transient_metadata": {
"enabled": true
}
},
"kibana_ccs": {
"cluster": [],
"indices": [
{
"names": [
".kibana*"
],
"privileges": [
"manage",
"create",
"index",
"delete",
"read_cross_cluster"
],
"field_security": {
"grant": [
"*"
]
}
}
],
"run_as": [],
"metadata": {},
"transient_metadata": {
"enabled": true
}
}
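For reference, roles like the ones dumped above can be re-created on a test cluster via the role API when reproducing this; a sketch with placeholder host and credentials, using the 6.x-style _xpack endpoint:

```shell
# Create (or re-create) the makelogs_reader role from the dump above.
# Host and elastic:changeme credentials are placeholders.
curl -u elastic:changeme -H 'Content-Type: application/json' \
  -X PUT 'http://localhost:9200/_xpack/security/role/makelogs_reader' \
  -d '{
    "indices": [ {
      "names": [ "makelogs-*", "data:makelogs-*", "*:makelogs-*", "local:makelogs-*" ],
      "privileges": [ "read", "view_index_metadata", "read_cross_cluster" ]
    } ]
  }'
```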
We already have something like this in the Beats Platform Reference. With a bit of massaging, we could easily create something for the stack.
Parent issue: #15