opendistro / for-elasticsearch-docs
The Open Distro for Elasticsearch documentation.
Home Page: https://opendistro.github.io/for-elasticsearch-docs/
License: Apache License 2.0
Recently I ran into a problem: Elasticsearch keeps printing this log over and over. Please help me investigate. The error is as follows:
[2019-04-12T15:00:42,433][WARN ][o.e.g.DanglingIndicesState] [node1] [[.opendistro_security/dPNNhxJUT8euAX-TUzN8Lg]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2019-04-12T15:03:37,536][ERROR][c.a.o.e.p.m.PerformanceAnalyzerMetrics] [node1] Error in Writing to Tmp File: java.io.IOException: Bad file descriptor for keyPath:/dev/shm/performanceanalyzer/1555052610000//indices/.kibana_1/0
java.io.IOException: Bad file descriptor
at java.io.FileOutputStream.close0(Native Method) ~[?:1.8.0_144]
at java.io.FileOutputStream.access$000(FileOutputStream.java:53) ~[?:1.8.0_144]
at java.io.FileOutputStream$1.close(FileOutputStream.java:356) ~[?:1.8.0_144]
at java.io.FileDescriptor.closeAll(FileDescriptor.java:212) ~[?:1.8.0_144]
at java.io.FileOutputStream.close(FileOutputStream.java:354) ~[?:1.8.0_144]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.PerformanceAnalyzerMetrics.writeToTmp(PerformanceAnalyzerMetrics.java:158) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.PerformanceAnalyzerMetrics.emitMetric(PerformanceAnalyzerMetrics.java:121) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsProcessor.lambda$saveMetricValues$0(MetricsProcessor.java:27) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.lambda$invokePrivileged$1(PerformanceAnalyzerPlugin.java:104) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_144]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.invokePrivileged(PerformanceAnalyzerPlugin.java:102) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsProcessor.saveMetricValues(MetricsProcessor.java:27) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsMetricsCollector.collectMetrics(NodeStatsMetricsCollector.java:181) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.PerformanceAnalyzerMetricsCollector.lambda$run$0(PerformanceAnalyzerMetricsCollector.java:57) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.lambda$invokePrivileged$1(PerformanceAnalyzerPlugin.java:104) ~[?:?]
at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_144]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.PerformanceAnalyzerPlugin.invokePrivileged(PerformanceAnalyzerPlugin.java:102) [opendistro_performance_analyzer-0.7.0.0.jar:0.7.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.PerformanceAnalyzerMetricsCollector.run(PerformanceAnalyzerMetricsCollector.java:57) [opendistro_performance_analyzer-0.7.0.0.jar:0.7.0.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
OpenDistro for Elasticsearch Security Demo Installer
** Warning: Do not use on production or public reachable systems **
Basedir: /usr/share/elasticsearch
Elasticsearch install type: rpm/deb on CentOS Linux release 7.6.1810 (Core)
Elasticsearch config dir: /usr/share/elasticsearch/config
Elasticsearch config file: /usr/share/elasticsearch/config/elasticsearch.yml
Elasticsearch bin dir: /usr/share/elasticsearch/bin
Elasticsearch plugins dir: /usr/share/elasticsearch/plugins
Elasticsearch lib dir: /usr/share/elasticsearch/lib
Detected Elasticsearch Version: x-content-6.6.2
Detected Open Distro Security Version: 0.8.0.0
"/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh" -cd "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig" -icl -key "/usr/share/elasticsearch/config/kirk-key.pem" -cert "/usr/share/elasticsearch/config/kirk.pem" -cacert "/usr/share/elasticsearch/config/root-ca.pem" -nhnv
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-04-10T07:31:40,842][INFO ][o.e.e.NodeEnvironment ] [I4AIfZa] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [22.3gb], net total_space [25.9gb], types [rootfs]
[2019-04-10T07:31:40,843][INFO ][o.e.e.NodeEnvironment ] [I4AIfZa] heap size [990.7mb], compressed ordinary object pointers [true]
[2019-04-10T07:31:40,845][INFO ][o.e.n.Node ] [I4AIfZa] node name derived from node ID [I4AIfZaYReGeN5SVri92Aw]; set [node.name] to override
[2019-04-10T07:31:40,845][INFO ][o.e.n.Node ] [I4AIfZa] version[6.6.2], pid[1], build[oss/tar/3bd3e59/2019-03-06T15:16:26.864148Z], OS[Linux/3.10.0-957.10.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-04-10T07:31:40,845][INFO ][o.e.n.Node ] [I4AIfZa] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-3511745295708599060, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro_performance_analyzer/pa_config/es_security.policy, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar]
[2019-04-10T07:31:42,237][INFO ][c.a.o.e.p.c.PluginSettings] [I4AIfZa] loading config ...
[2019-04-10T07:31:42,238][INFO ][c.a.o.e.p.c.PluginSettings] [I4AIfZa] Config: metricsLocation: /dev/shm/performanceanalyzer/, metricsDeletionInterval: 1
[2019-04-10T07:31:42,727][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] ES Config path is /usr/share/elasticsearch/config
[2019-04-10T07:31:42,806][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] OpenSSL not available (this is not an error, we simply fallback to built-in JDK SSL) because of java.lang.ClassNotFoundException: io.netty.internal.tcnative.SSL
[2019-04-10T07:31:43,022][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] JVM supports TLSv1.3
[2019-04-10T07:31:43,023][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] Config directory is /usr/share/elasticsearch/config/, from there the key- and truststore files are resolved relatively
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] TLS Transport Client Provider : JDK
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] TLS Transport Server Provider : JDK
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] TLS HTTP Provider : JDK
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] Enabled TLS protocols for transport layer : [TLSv1.3, TLSv1.2, TLSv1.1]
[2019-04-10T07:31:43,397][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [I4AIfZa] Enabled TLS protocols for HTTP layer : [TLSv1.3, TLSv1.2, TLSv1.1]
[2019-04-10T07:31:43,633][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] Clustername: docker-cluster
[2019-04-10T07:31:43,693][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] Directory /usr/share/elasticsearch/config has insecure file permissions (should be 0700)
[2019-04-10T07:31:43,693][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/elasticsearch.yml has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/log4j2.properties has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/kirk.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/esnode.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/root-ca.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/esnode-key.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,694][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] File /usr/share/elasticsearch/config/kirk-key.pem has insecure file permissions (should be 0600)
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [aggs-matrix-stats]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [analysis-common]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [ingest-common]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [lang-expression]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [lang-mustache]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [lang-painless]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [mapper-extras]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [parent-join]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [percolator]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [rank-eval]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [reindex]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [repository-url]
[2019-04-10T07:31:43,848][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [transport-netty4]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded module [tribe]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_alerting]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_performance_analyzer]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_security]
[2019-04-10T07:31:43,849][INFO ][o.e.p.PluginsService ] [I4AIfZa] loaded plugin [opendistro_sql]
[2019-04-10T07:31:43,861][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] Disabled https compression by default to mitigate BREACH attacks. You can enable it by setting 'http.compression: true' in elasticsearch.yml
[2019-04-10T07:31:45,640][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured categories on rest layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2019-04-10T07:31:45,640][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured categories on transport layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2019-04-10T07:31:45,640][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured Users to ignore: [kibanaserver]
[2019-04-10T07:31:45,641][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured Users to ignore for read compliance events: [kibanaserver]
[2019-04-10T07:31:45,641][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Configured Users to ignore for write compliance events: [kibanaserver]
[2019-04-10T07:31:45,649][INFO ][c.a.o.s.a.i.AuditLogImpl ] [I4AIfZa] Message routing enabled: true
[2019-04-10T07:31:45,659][WARN ][c.a.o.s.c.ComplianceConfig] [I4AIfZa] If you plan to use field masking pls configure opendistro_security.compliance.salt to be a random string of 16 chars length identical on all nodes
[2019-04-10T07:31:45,659][INFO ][c.a.o.s.c.ComplianceConfig] [I4AIfZa] PII configuration [auditLogPattern=org.joda.time.format.DateTimeFormatter@55881f40, auditLogIndex=null]: {}
[2019-04-10T07:31:45,956][INFO ][o.e.d.DiscoveryModule ] [I4AIfZa] using discovery type [single-node] and host providers [settings]
[2019-04-10T07:31:46,248][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [I4AIfZa] PerformanceAnalyzer Enabled: true
Registering Handler
[2019-04-10T07:31:46,295][INFO ][o.e.n.Node ] [I4AIfZa] initialized
[2019-04-10T07:31:46,295][INFO ][o.e.n.Node ] [I4AIfZa] starting ...
[2019-04-10T07:31:46,379][INFO ][o.e.t.TransportService ] [I4AIfZa] publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-04-10T07:31:46,386][WARN ][o.e.b.BootstrapChecks ] [I4AIfZa] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-04-10T07:31:46,392][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Check if .opendistro_security index exists ...
[2019-04-10T07:31:46,451][INFO ][o.e.h.n.Netty4HttpServerTransport] [I4AIfZa] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-04-10T07:31:46,451][INFO ][o.e.n.Node ] [I4AIfZa] started
[2019-04-10T07:31:46,451][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [I4AIfZa] 4 Open Distro Security modules loaded so far: [Module [type=REST_MANAGEMENT_API, implementing class=com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions], Module [type=AUDITLOG, implementing class=com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl], Module [type=MULTITENANCY, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl], Module [type=DLSFLS, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper]]
[2019-04-10T07:31:46,475][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] .opendistro_security index does not exist yet, so we create a default config
[2019-04-10T07:31:46,477][INFO ][o.e.g.GatewayService ] [I4AIfZa] recovered [0] indices into cluster_state
[2019-04-10T07:31:46,479][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Will create .opendistro_security index so we can apply default config
[2019-04-10T07:31:46,521][INFO ][o.e.c.m.MetaDataCreateIndexService] [I4AIfZa] [.opendistro_security] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2019-04-10T07:31:46,526][INFO ][o.e.c.r.a.AllocationService] [I4AIfZa] updating number_of_replicas to [0] for indices [.opendistro_security]
[2019-04-10T07:31:46,694][INFO ][o.e.c.r.a.AllocationService] [I4AIfZa] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.opendistro_security][0]] ...]).
[2019-04-10T07:31:46,701][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'config' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
[2019-04-10T07:31:46,735][ERROR][c.a.o.e.p.o.OSGlobals ] [I4AIfZa] Error in static initialization of OSGlobals with exception: java.security.AccessControlException: access denied ("java.io.FilePermission" "/proc/self/task" "read")
java.security.AccessControlException: access denied ("java.io.FilePermission" "/proc/self/task" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:?]
at java.security.AccessController.checkPermission(AccessController.java:895) ~[?:?]
at java.lang.SecurityManager.checkPermission(SecurityManager.java:322) ~[?:?]
at java.lang.SecurityManager.checkRead(SecurityManager.java:661) ~[?:?]
at java.io.File.list(File.java:1129) ~[?:?]
at java.io.File.listFiles(File.java:1219) ~[?:?]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.os.OSGlobals.enumTids(OSGlobals.java:81) ~[opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.os.OSGlobals.(OSGlobals.java:44) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics_generator.linux.LinuxOSMetricsGenerator.getPid(LinuxOSMetricsGenerator.java:50) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.jvm.ThreadList.(ThreadList.java:51) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.ThreadIDUtil.getNativeThreadId(ThreadIDUtil.java:31) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.ThreadIDUtil.getNativeCurrentThreadId(ThreadIDUtil.java:27) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportChannel.set(PerformanceAnalyzerTransportChannel.java:50) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.getShardBulkChannel(PerformanceAnalyzerTransportRequestHandler.java:78) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.getChannel(PerformanceAnalyzerTransportRequestHandler.java:52) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportRequestHandler.messageReceived(PerformanceAnalyzerTransportRequestHandler.java:43) [opendistro_performance_analyzer-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.ssl.transport.OpenDistroSecuritySSLRequestHandler.messageReceivedDecorate(OpenDistroSecuritySSLRequestHandler.java:194) [opendistro_security_ssl-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.transport.OpenDistroSecurityRequestHandler.messageReceivedDecorate(OpenDistroSecurityRequestHandler.java:163) [opendistro_security-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.ssl.transport.OpenDistroSecuritySSLRequestHandler.messageReceived(OpenDistroSecuritySSLRequestHandler.java:116) [opendistro_security_ssl-0.8.0.0.jar:0.8.0.0]
at com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin$7$1.messageReceived(OpenDistroSecurityPlugin.java:652) [opendistro_security-0.8.0.0.jar:0.8.0.0]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:687) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.2.jar:6.6.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.2.jar:6.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-04-10T07:31:46,967][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] create_mapping [security]
[2019-04-10T07:31:47,071][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'roles' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml
[2019-04-10T07:31:47,092][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,144][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'rolesmapping' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml
[2019-04-10T07:31:47,160][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,192][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'internalusers' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
[2019-04-10T07:31:47,205][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,223][INFO ][c.a.o.s.s.ConfigHelper ] [I4AIfZa] Will update 'actiongroups' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/action_groups.yml
[2019-04-10T07:31:47,233][INFO ][o.e.c.m.MetaDataMappingService] [I4AIfZa] [.opendistro_security/kZ94xDrnSKi3ayI5mwf-Qw] update_mapping [security]
[2019-04-10T07:31:47,269][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Default config applied
[2019-04-10T07:31:47,293][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [I4AIfZa] Node 'I4AIfZa' initialized
Will there be an official deb repository with Debian 8/9 packages?
Hey,
I was missing some information about the default user that Kibana uses to connect to the Elasticsearch cluster. After hours of trial and error I found this article: https://aws.amazon.com/de/blogs/opensource/change-passwords-open-distro-for-elasticsearch/
It describes that Open Distro for Elasticsearch requires the user 'kibanaserver'.
It would be nice if you could add this information to your documents.
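For reference, a minimal kibana.yml sketch of how this user is typically wired up. This is an assumption for illustration, not official guidance: the password shown is the demo default, and the SSL settings assume the self-signed demo certificates.

```yaml
# kibana.yml — connecting Kibana to a security-enabled Open Distro cluster.
# 'kibanaserver' is the built-in service user; change its password in production.
elasticsearch.url: "https://localhost:9200"
elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"
# Relaxed only because the demo certificates are self-signed.
elasticsearch.ssl.verificationMode: none
```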
Our Filebeat is installed from RPM, and Open Distro is installed via Docker. If I want to edit filebeat.yml to send data to Open Distro, how should output.elasticsearch be configured with credentials?
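As a sketch of what such an output section could look like (the host, user, and password below are placeholders, not values from this setup):

```yaml
# filebeat.yml — output section for a security-enabled Open Distro cluster.
output.elasticsearch:
  hosts: ["https://opendistro-host:9200"]
  protocol: "https"
  username: "admin"
  password: "admin"
  # Only needed while the cluster still uses the self-signed demo certificates.
  ssl.verification_mode: none
```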
Hi, is it possible to change the default admin password?
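Yes. As a sketch (paths assume the default RPM layout shown in the installer output above), the usual approach is to hash a new password, put the hash into internal_users.yml, and reload the configuration with securityadmin.sh:

```shell
# 1. Generate a bcrypt hash for the new password.
/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh -p 'newAdminPassword'

# 2. Paste the resulting hash into the 'admin' entry of
#    plugins/opendistro_security/securityconfig/internal_users.yml.

# 3. Reload the security configuration into the .opendistro_security index.
/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh \
  -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig \
  -icl -nhnv \
  -cacert /usr/share/elasticsearch/config/root-ca.pem \
  -cert /usr/share/elasticsearch/config/kirk.pem \
  -key /usr/share/elasticsearch/config/kirk-key.pem
```

The same password-change blog post linked in the issue above walks through this flow in detail.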
A request to "${endpoint}/_opendistro/_performanceanalyzer/metrics/units" outputs the unit used for each metric, in JSON format. Please help add this to the public documentation.
$ curl localhost:9600/_opendistro/_performanceanalyzer/metrics/units
{"Disk_Utilization":"%","Cache_Request_Hit":"count","TermVectors_Memory":"B","Segments_Memory":"B","HTTP_RequestDocs":"count","Net_TCP_Lost":"segments/flow","Refresh_Time":"ms","GC_Collection_Event":"count","Merge_Time":"ms","Sched_CtxRate":"count/s","Cache_Request_Size":"B","ThreadPool_QueueSize":"count","Sched_Runtime":"s/ctxswitch","Disk_ServiceRate":"MB/s","Heap_AllocRate":"B/s","Heap_Max":"B","Sched_Waittime":"s/ctxswitch","ShardBulkDocs":"count","Thread_Blocked_Time":"s/event","VersionMap_Memory":"B","Master_Task_Queue_Time":"ms","Merge_CurrentEvent":"count","Indexing_Buffer":"B","Bitset_Memory":"B","Norms_Memory":"B","Net_PacketDropRate4":"packets/s","Heap_Committed":"B","Net_PacketDropRate6":"packets/s","Thread_Blocked_Event":"count","GC_Collection_Time":"ms","Cache_Query_Miss":"count","IO_TotThroughput":"B/s","Latency":"ms","Net_PacketRate6":"packets/s","Cache_Query_Hit":"count","IO_ReadSyscallRate":"count/s","Net_PacketRate4":"packets/s","Cache_Request_Miss":"count","CB_ConfiguredSize":"B","CB_TrippedEvents":"count","ThreadPool_RejectedReqs":"count","Disk_WaitTime":"ms","Net_TCP_TxQ":"segments/flow","Master_Task_Run_Time":"ms","IO_WriteSyscallRate":"count/s","IO_WriteThroughput":"B/s","Flush_Event":"count","Net_TCP_RxQ":"segments/flow","Refresh_Event":"count","Points_Memory":"B","Flush_Time":"ms","Heap_Init":"B","CPU_Utilization":"cores","HTTP_TotalRequests":"count","ThreadPool_ActiveThreads":"count","Cache_Query_Size":"B","Paging_MinfltRate":"count/s","Merge_Event":"count","Net_TCP_SendCWND":"B/flow","Cache_Request_Eviction":"count","Segments_Total":"count","Terms_Memory":"B","DocValues_Memory":"B","Heap_Used":"B","Cache_FieldData_Eviction":"count","IO_TotalSyscallRate":"count/s","CB_EstimatedSize":"B","Net_Throughput":"B/s","Paging_RSS":"pages","Indexing_ThrottleTime":"ms","StoredFields_Memory":"B","IndexWriter_Memory":"B","Master_PendingQueueSize":"count","Net_TCP_SSThresh":"B/flow","Cache_FieldData_Size":"B","Paging_MajfltRate":"count/s","ThreadPool_TotalThreads":"count","IO_ReadThroughput":"B/s","ShardEvents":"count","Net_TCP_NumFlows":"count"}
I have an existing ELK setup running, which has been collecting and storing data for some time. How do I switch from my existing setup to Open Distro for Elasticsearch without losing all the data? Any documentation regarding that would be really helpful.
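Since Open Distro 0.8.x is built on Elasticsearch 6.6.2, one common path is snapshot and restore. The sketch below assumes a shared filesystem repository at /mnt/backups (registered in path.repo on both clusters) and the demo admin credentials; repository names and paths are placeholders:

```shell
# On the existing cluster: register a filesystem repository and take a snapshot.
curl -X PUT "localhost:9200/_snapshot/migration" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/backups"}}'
curl -X PUT "localhost:9200/_snapshot/migration/snap1?wait_for_completion=true"

# On the Open Distro cluster: register the same repository and restore.
curl -k -u admin:admin -X PUT "https://localhost:9200/_snapshot/migration" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/backups"}}'
curl -k -u admin:admin -X POST "https://localhost:9200/_snapshot/migration/snap1/_restore"
```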
Hi,
could you please provide a guide on how to connect Open Distro with AD groups? LDAP authentication works, but how do we define rules and access for those users?
Thanks a lot
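As an illustration (the role name is a built-in, but the group DN below is hypothetical), AD/LDAP groups arrive as backend roles and are mapped to security roles in roles_mapping.yml:

```yaml
# roles_mapping.yml — map an LDAP/AD group DN (a backend role) to a security role.
all_access:
  backend_roles:
    - "cn=es-admins,ou=groups,dc=example,dc=com"
```

After editing the file, the mapping takes effect once it is loaded into the .opendistro_security index with securityadmin.sh, as in the installer output above.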
Hi. I tried to install the opendistro_alerting plugin following the "Standalone plugin install" page and can't find any working links to the standalone Kibana plugins.
I guess someone forgot to publish these links.
Can someone point me to the archives with the plugins?
On a fresh CentOS 7 install in GCE, I noticed the following after installing opendistroforelasticsearch and attempting to yum upgrade. It appears that opendistroforelasticsearch depends on 6.6.2, but the elasticsearch-oss repository provides 6.7.1, which causes yum to believe it can be updated.
[root@monitoring-es-1 ~]# yum upgrade
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.team-cymru.com
* epel: mirror.steadfastnet.com
* extras: mirror.fileplanet.com
* updates: mirror.steadfastnet.com
Resolving Dependencies
--> Running transaction check
---> Package elasticsearch-oss.noarch 0:6.6.2-1 will be updated
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistroforelasticsearch-0.8.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-alerting-0.8.0.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-sql-0.8.0.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-performance-analyzer-0.8.0.0-1.noarch
--> Processing Dependency: elasticsearch-oss = 6.6.2 for package: opendistro-security-0.8.0.0-1.noarch
---> Package elasticsearch-oss.noarch 0:6.7.1-1 will be an update
---> Package glibc.x86_64 0:2.17-260.el7_6.3 will be updated
---> Package glibc.x86_64 0:2.17-260.el7_6.4 will be an update
---> Package glibc-common.x86_64 0:2.17-260.el7_6.3 will be updated
---> Package glibc-common.x86_64 0:2.17-260.el7_6.4 will be an update
---> Package google-cloud-sdk.noarch 0:240.0.0-1.el7 will be updated
---> Package google-cloud-sdk.noarch 0:241.0.0-1.el7 will be an update
---> Package libssh2.x86_64 0:1.4.3-12.el7 will be updated
---> Package libssh2.x86_64 0:1.4.3-12.el7_6.2 will be an update
---> Package python.x86_64 0:2.7.5-76.el7 will be updated
---> Package python.x86_64 0:2.7.5-77.el7_6 will be an update
---> Package python-libs.x86_64 0:2.7.5-76.el7 will be updated
---> Package python-libs.x86_64 0:2.7.5-77.el7_6 will be an update
---> Package tzdata.noarch 0:2018i-1.el7 will be updated
---> Package tzdata.noarch 0:2019a-1.el7 will be an update
--> Finished Dependency Resolution
Error: Package: opendistroforelasticsearch-0.8.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
elasticsearch-oss = 6.6.2-1
Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.1-1
Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.0-1
Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.1-1
Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.2-1
Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.0-1
Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.1-1
Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.2-1
Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.3-1
Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.0-1
Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.1-1
Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.2-1
Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.3-1
Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.4-1
Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.6.0-1
Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.6.1-1
Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.0-1
Error: Package: opendistro-performance-analyzer-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
elasticsearch-oss = 6.6.2-1
Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.1-1
Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.0-1
Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.1-1
Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.2-1
Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.0-1
Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.1-1
Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.2-1
Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.3-1
Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.0-1
Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.1-1
Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.2-1
Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.3-1
Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.4-1
Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.6.0-1
Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.6.1-1
Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.0-1
Error: Package: opendistro-alerting-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
elasticsearch-oss = 6.6.2-1
Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.1-1
Available: elasticsearch-oss-6.3.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.0-1
Available: elasticsearch-oss-6.3.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.1-1
Available: elasticsearch-oss-6.3.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.3.2-1
Available: elasticsearch-oss-6.4.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.0-1
Available: elasticsearch-oss-6.4.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.1-1
Available: elasticsearch-oss-6.4.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.2-1
Available: elasticsearch-oss-6.4.3-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.4.3-1
Available: elasticsearch-oss-6.5.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.0-1
Available: elasticsearch-oss-6.5.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.1-1
Available: elasticsearch-oss-6.5.2-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.2-1
Available: elasticsearch-oss-6.5.3-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.3-1
Available: elasticsearch-oss-6.5.4-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.5.4-1
Available: elasticsearch-oss-6.6.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.6.0-1
Available: elasticsearch-oss-6.6.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.6.1-1
Available: elasticsearch-oss-6.7.0-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.0-1
Error: Package: opendistro-security-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
elasticsearch-oss = 6.6.2-1
Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.1-1
(the same list of available elasticsearch-oss versions, 6.3.0-1 through 6.7.0-1, repeats here)
Error: Package: opendistro-sql-0.8.0.0-1.noarch (@opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Removing: elasticsearch-oss-6.6.2-1.noarch (@elasticsearch-6.x)
elasticsearch-oss = 6.6.2-1
Updated By: elasticsearch-oss-6.7.1-1.noarch (elasticsearch-6.x)
elasticsearch-oss = 6.7.1-1
(the same list of available elasticsearch-oss versions, 6.3.0-1 through 6.7.0-1, repeats here)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
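One way out of this dependency conflict, sketched below under the assumption that the 0.8.0 plugins really do require exactly elasticsearch-oss 6.6.2 (as the yum output above states), is to install that exact version and pin it so a later yum update cannot pull in 6.7.1 and break the match again. The live commands are shown commented; only the version extraction runs as-is.

```shell
# Sketch, not an official fix: pull the required version out of the yum
# message, then install exactly that elasticsearch-oss build and pin it.
required=$(echo "Requires: elasticsearch-oss = 6.6.2" | awk '{print $NF}')
echo "$required"
# sudo yum install -y "elasticsearch-oss-$required"
# sudo yum install -y yum-plugin-versionlock      # provides "yum versionlock"
# sudo yum versionlock add elasticsearch-oss      # freeze at the pinned version
# sudo yum install -y opendistroforelasticsearch
```

The --skip-broken suggestion in the yum output only masks the conflict; pinning keeps the package set consistent until the plugins are rebuilt for a newer Elasticsearch.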
Hello!
As I mentioned earlier, at release time I set up Open Distro and everything was fine; it worked. Now I decided to roll it out beyond the test environment and ran into a problem: after installing per this guide https://opendistro.github.io/for-elasticsearch-docs/docs/install/rpm/ and starting the service, I get the error
Open Distro Security not initialized (SG11), even though the service itself is started.
And if I start the first node with only the master role, I get these warnings:
[2019-04-13T14:26:35,382][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for roles while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups] (index=.opendistro_security)
[2019-04-13T14:26:35,383][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for rolesmapping while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups] (index=.opendistro_security)
[2019-04-13T14:26:35,383][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for internalusers while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups] (index=.opendistro_security)
[2019-04-13T14:26:35,383][WARN ][c.a.o.s.c.ConfigurationLoader] [lb-master1] No data for actiongroups while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups] (index=.opendistro_security)
What am I doing wrong? It used to work.
Also, am I going about this correctly if I need a cluster of one master and two data nodes (hot and warm)?
First I bring up the master, then the hot and warm nodes.
systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2019-03-13 08:55:02 EDT; 19s ago
Docs: http://www.elastic.co
Main PID: 5715 (code=exited, status=1/FAILURE)
Mar 13 08:54:46 host1 systemd[1]: Started Elasticsearch.
Mar 13 08:54:47 host1 elasticsearch[5715]: java.security.policy: error adding Entry:
Mar 13 08:54:47 host1 elasticsearch[5715]: java.net.MalformedURLException: unknown protocol: jrt
Mar 13 08:54:47 host1 elasticsearch[5715]: java.security.policy: error adding Entry:
Mar 13 08:54:47 host1 elasticsearch[5715]: java.net.MalformedURLException: unknown protocol: jrt
Mar 13 08:55:02 host1 systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Mar 13 08:55:02 host1 systemd[1]: Unit elasticsearch.service entered failed state.
Mar 13 08:55:02 host1 systemd[1]: elasticsearch.service failed.
Using the sample docker-compose.yml (https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker/).
then run:
docker-compose up
I am able to log in to Kibana, and everything works fine.
then:
docker-compose stop
docker-compose start
I can log in to Kibana, but I see this error:
Kibana status is Yellow
plugin:[email protected] Tenant indices migration failed
I am unable to do anything in Kibana. Any help would be appreciated.
Update docs for the Deb install guide: OpenJDK 11 and apt-transport-https
"Install Java 11:
sudo echo 'deb http://deb.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
The subsequent sudo apt update
will fail without apt-transport-https installed.
These are the only two issues I found while following the instructions on a clean, headless Debian 9.8 installation.
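Both reported problems can be addressed together. Note also that the redirection in "sudo echo ... > file" runs as the invoking user, not as root, so it fails on a root-owned path; piping through sudo tee fixes that. The sketch below demonstrates the tee pattern on a temp path (the real system commands, which need root and network, are shown commented; the openjdk-11-jdk package name is taken from the quoted instructions):

```shell
# Write a repo line the way it works under sudo: pipe through tee.
line='deb http://deb.debian.org/debian stretch-backports main'
echo "$line" | tee /tmp/backports.list > /dev/null
cat /tmp/backports.list
# On the real host:
# sudo apt-get install -y apt-transport-https   # needed before "apt update" on https:// repos
# echo "$line" | sudo tee /etc/apt/sources.list.d/backports.list
# sudo apt-get update
# sudo apt-get -t stretch-backports install -y openjdk-11-jdk
```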
Since these passwords ship with default values, we need to make them easy to change. When running Docker in particular, changing them is hard, and that weakens the security posture.
Removing the reserved designation would allow us to change them in the Security UI.
I tried to set up OpenID following the instructions, and I am running into an issue where the security plugin is not able to extract the attributes from the JWT because of an unknown key ID (kid).
Here is the stack-trace and the config files for Kibana and Elastic.
odfe-node1 | [2019-04-26T02:47:59,672][INFO ][c.a.d.a.h.j.AbstractHTTPJwtAuthenticator] [mqs9XQT] Extracting JWT token from eyg.......RESTOFTOKEN....ryry failed
odfe-node1 | com.amazon.dlic.auth.http.jwt.keybyoidc.BadCredentialsException: Unknown kid ACTUALKEYIDVALUE
odfe-node1 | at com.amazon.dlic.auth.http.jwt.keybyoidc.SelfRefreshingKeySet.getKeyWithKeyId(SelfRefreshingKeySet.java:118) ~[opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at com.amazon.dlic.auth.http.jwt.keybyoidc.SelfRefreshingKeySet.getKey(SelfRefreshingKeySet.java:58) ~[opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at com.amazon.dlic.auth.http.jwt.keybyoidc.JwtVerifier.getVerifiedJwtToken(JwtVerifier.java:41) ~[opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator.extractCredentials0(AbstractHTTPJwtAuthenticator.java:103) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator.access$000(AbstractHTTPJwtAuthenticator.java:45) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator$1.run(AbstractHTTPJwtAuthenticator.java:85) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator$1.run(AbstractHTTPJwtAuthenticator.java:82) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at java.security.AccessController.doPrivileged(Native Method) [?:?]
odfe-node1 | at com.amazon.dlic.auth.http.jwt.AbstractHTTPJwtAuthenticator.extractCredentials(AbstractHTTPJwtAuthenticator.java:82) [opendistro_security_advanced_modules-0.8.0.0.jar:0.8.0.0]
odfe-node1 | at com.amazon.opendistroforelasticsearch.security.auth.BackendRegistry.authenticate(BackendRegistry.java:448) [opendistro_security-0.8.0.0.jar:0.8.0.0]
--kibana.yml
opendistro_security.multitenancy.enabled: true
opendistro_security.auth.type: openid
opendistro_security.openid.connect_url: https://.../.well-known/openid-configuration
opendistro_security.openid.client_id: {myID}
opendistro_security.openid.client_secret: {mySecret}
--config.yml (Elastic)
basic_internal_auth_domain:
http_enabled: true
transport_enabled: true
order: 0
http_authenticator:
type: basic
challenge: false
authentication_backend:
type: internal
openid_auth_domain:
enabled: true
http_enabled: true
transport_enabled: true
order: 1
http_authenticator:
type: openid
challenge: false
config:
subject_key: sub
roles_key: roles
openid_connect_url: https://.../.well-known/openid-configuration
authentication_backend:
type: noop
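When the plugin logs "Unknown kid", the key ID in the token's header does not match any key the plugin fetched from the IdP's jwks_uri (advertised in the openid-configuration document). A quick, dependency-free way to see which kid a token actually carries is to decode its header, which is plain base64url JSON; the token below is a fabricated example, not a real credential:

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode the (unverified) JWT header, e.g. to inspect alg and kid."""
    part = token.split(".")[0]
    part += "=" * (-len(part) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(part))

# Fabricated example token (header only matters here):
demo_header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "kid": "example-key-id"}).encode()
).rstrip(b"=").decode()
demo_token = demo_header + ".eyJzdWIiOiJ4In0.sig"
print(jwt_header(demo_token))  # -> {'alg': 'RS256', 'kid': 'example-key-id'}
```

Comparing that kid against the "keys" array returned by the jwks_uri shows whether the IdP rotated its keys or is issuing tokens signed by a key it does not publish.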
Where can we find the dockerfiles? I'm not seeing them in https://github.com/opendistro-for-elasticsearch
Since Elasticsearch deployments typically span many indices, users will likely need to query more than one index at a time. It appears this is currently impossible when the indices have different mappings. Would these commits resolve this issue?
I’ve created a monitor based on an extraction query. When I try to create a trigger for it and press the Create button, nothing happens: it neither creates the trigger nor shows any error message. How can I resolve this?
Please help!
It would be great to have a short doc on how to deploy a two- or three-node cluster, and maybe even on how to grow it while it is running ("hot"), without downtime.
I've been researching this a little and running some tests, but I can't get it to work with Docker Swarm. If I do, I'll add a comment with the docker-compose.yml as a possible example.
Thanks!
Hello,
Is anyone else having issues with query autocomplete? It doesn't appear to be working at all.
(Package opendistroforelasticsearch-kibana-0.7.0-1.x86_64 is already installed and at the latest version.)
Thanks!
Syntax options
Our experimental autocomplete and simple syntax features can help you create your queries. Just start typing and you’ll see matches related to your data. See docs here.
The documentation mentions that the ctx variable has a field 'alert' with the following content:
"The current, active alert (if it exists). Includes ctx.alert.id, ctx.alert.version, and ctx.alert.isAcknowledged. Null if no alert is active."
However, when verifying the ctx.alert field in my triggered action message, I get the following result:
{state=ACTIVE, error_message=null, acknowledged_time=null, last_notification_time=1555580484091}
The properties are different, and there is no alert id or version. The alert id is indispensable for acknowledging the alert using the API.
I am using the AWS Elasticsearch service, and I want to use the Open Distro for Elasticsearch Kibana and point it at the AWS Elasticsearch service.
Please let me know what steps I need to take.
Hi
odfe-node1 | [2019-03-21T18:07:15,713][WARN ][c.a.d.a.h.j.HTTPJwtAuthenticator] [jDe_UcC] No Bearer scheme found in header
odfe-node1 | [2019-03-21T18:07:15,713][WARN ][c.a.o.s.a.BackendRegistry] [jDe_UcC] Authentication finally failed for null from 192.168.0.2:59234
http://127.0.0.1:5601?jwtToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6ImJlMjc3MmNlMTAxODRjZmNhZmRhZTk5Y2RlNzk0NGU3IiwiYWNjb3VudElkIjoiYmUyNzcyY2UxMDE4NGNmY2FmZGFlOTljZGU3OTQ0ZTciLCJ0b2tlbiI6IjU2ZTE3OTE4LTA2Y2UtYTJlMS1kY2RmLTgyN2M3YjAzNjU4OCIsInJvbGVzS2V5IjoiYWxsX2FjY2VzcyIsInN1YmplY3RLZXkiOiJhZG1pbiIsImlhdCI6MTU1MzE4OTQ2MiwiZXhwIjoxNTUzMzYyMjYyLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0In0.mU9XEYq0B0cQTIvNNND1M_tsTS35NeZAL5suCoQbunw
Security config
opendistro_security:
dynamic:
http:
anonymous_auth_enabled: false
authc:
basic_internal_auth_domain:
http_enabled: false
transport_enabled: true
order: 4
http_authenticator:
type: basic
challenge: true
authentication_backend:
type: intern
jwt_auth_domain:
enabled: true
http_enabled: true
transport_enabled: true
order: 0
http_authenticator:
type: jwt
challenge: false
config:
signing_key: qwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewqqwertyuiopasdfghjklzxcvbnmnbvcxzasdfghjklpoiuytrewq
# jwt_header: "Authorization: Bearer <token>"
jwt_url_parameter: "jwtToken"
roles_key: rolesKey
subject_key: subjectKey
authentication_backend:
type: noop
authz:
roles_from_myldap:
http_enabled: false
transport_enabled: false
authorization_backend:
type: noop
roles_from_another_ldap:
enabled: false
authorization_backend:
type: noop
curl -X GET \
'https://127.0.0.1:9200' \
-H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6ImJlMjc3MmNlMTAxODRjZmNhZmRhZTk5Y2RlNzk0NGU3IiwiYWNjb3VudElkIjoiYmUyNzcyY2UxMDE4NGNmY2FmZGFlOTljZGU3OTQ0ZTciLCJ0b2tlbiI6IjU2ZTE3OTE4LTA2Y2UtYTJlMS1kY2RmLTgyN2M3YjAzNjU4OCIsInJvbGVzS2V5IjoiYWxsX2FjY2VzcyIsInN1YmplY3RLZXkiOiJhZG1pbiIsImlhdCI6MTU1MzE4OTQ2MiwiZXhwIjoxNTUzMzYyMjYyLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0In0.mU9XEYq0B0cQTIvNNND1M_tsTS35NeZAL5suCoQbunw' \
-H 'Postman-Token: 79519d07-ea3c-4f3e-819a-0b71964f7653' \
-H 'cache-control: no-cache'
Response:
odfe-node1 | [2019-03-21T17:57:44,940][WARN ][c.a.o.s.a.BackendRegistry] [jDe_UcC] Authentication finally failed for null from 192.168.0.1:53128
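For a sanity check on the token itself, here is a minimal, dependency-free sketch of the HS256 JWT construction this config expects, with claim names matching the subjectKey/rolesKey values above. One assumption worth verifying against your plugin version: some releases expect the signing_key in config.yml to be base64-encoded, in which case the raw key bytes used for signing must be the decoded value.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_hs256_jwt(claims: dict, signing_key: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(signing_key.encode(),
                   f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

# Claim names must match subject_key / roles_key in the security config.
token = make_hs256_jwt({"subjectKey": "admin", "rolesKey": "all_access"},
                       "replace-with-your-signing-key")
```

If a token generated this way against the configured signing_key is still rejected, the mismatch is on the plugin side (key encoding, clock skew on exp/iat, or the jwt_url_parameter not being read) rather than in the token format.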
I would like to turn off authentication for Kibana/Elasticsearch in order to evaluate the benefits of SQL and Alerting in Open Distro. I was able to do it for Elasticsearch using opendistro_security.disabled: true in the elasticsearch.yml file. How can I do the same in kibana.yml?
I do not need a login/password for now, as the resources are in an isolated environment and not in production.
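There is no single kibana.yml switch equivalent to opendistro_security.disabled. A common approach (a sketch, assuming the default install paths) is to remove the security plugin from Kibana and strip its settings from kibana.yml. The sed step is demonstrated on a scratch copy; the privileged commands are shown commented:

```shell
# On a real host, first remove the plugin itself:
#   sudo /usr/share/kibana/bin/kibana-plugin remove opendistro_security
# Then delete every opendistro_security.* line from /etc/kibana/kibana.yml.
# Demonstrated here on a scratch file:
printf 'server.host: "0"\nopendistro_security.auth.type: basic\n' > /tmp/kibana.yml
sed -i.bak '/^opendistro_security\./d' /tmp/kibana.yml
cat /tmp/kibana.yml
# sudo systemctl restart kibana
```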
We have an existing Elasticsearch cluster (stock docker.elastic.co/elasticsearch/elasticsearch:6.6.2).
I tried using the Open Distro Kibana with it, since my use case is alerting on Elasticsearch data.
A few issues I faced while doing this:
1. As my existing Elasticsearch runs over HTTP and doesn't have the security plugin, I disabled the security plugin in Open Distro to get Kibana started.
2. Now I can see data in Kibana, but alerting didn't work. Below is the error from the Kibana logs:
Alerting - ElasticsearchService - search { [index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id=".opendistro-alerting-config" & index_uuid="na" & index=".opendistro-alerting-config" } :: {"path":"/.opendistro-alerting-config/_search","query":{},"body":"{"query":{"term":{"monitor.name.keyword":"test"}}}","statusCode":404,"response":"{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"},"status":404}"}
at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)
at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)
at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)
at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
at IncomingMessage.emit (events.js:194:15)
at endReadableNT (_stream_readable.js:1103:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
status: 404,
displayName: 'NotFound',
message:
'[index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id=".opendistro-alerting-config" & index_uuid="na" & index=".opendistro-alerting-config" }',
path: '/.opendistro-alerting-config/_search',
query: {},
body:
{ error:
{ root_cause: [Array],
type: 'index_not_found_exception',
reason: 'no such index',
'resource.type': 'index_or_alias',
'resource.id': '.opendistro-alerting-config',
index_uuid: 'na',
index: '.opendistro-alerting-config' },
status: 404 },
statusCode: 404,
response:
'{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":".opendistro-alerting-config","index_uuid":"na","index":".opendistro-alerting-config"},"status":404}',
toString: [Function],
toJSON: [Function] }
(the same Alerting - ElasticsearchService index_not_found_exception stack trace repeats here)
{"type":"response","@timestamp":"2019-04-21T18:16:33Z","tags":[],"pid":1,"method":"post","statusCode":200,"req":{"url":"/api/alerting/_search","method":"post","headers":{"host":"internal-newkibana01-corp-grabpay-com-1568899982.ap-southeast-1.elb.amazonaws.com","accept":"application/json, text/plain, /","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.5","content-type":"application/json;charset=utf-8","kbn-version":"6.6.2","referer":"http://internal-newkibana01-corp-grabpay-com-1568899982.ap-southeast-1.elb.amazonaws.com/app/opendistro-alerting","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:66.0) Gecko/20100101 Firefox/66.0","x-forwarded-for":"10.77.0.195","x-forwarded-port":"80","x-forwarded-proto":"http","content-length":"98","connection":"keep-alive"},"remoteAddress":"10.77.5.236","userAgent":"10.77.5.236","referer":"http://internal-newkibana01-corp-grabpay-com-1568899982.ap-southeast-1.elb.amazonaws.com/app/opendistro-alerting"},"res":{"statusCode":200,"responseTime":4,"contentLength":9},"message":"POST /api/alerting/_search 200 4ms - 9.0B"}
Let me know if any more info is needed.
I'm trying to connect the Open Distro Kibana to Azure AD, but I've found that the plugin configuration doesn't work:
/usr/share/kibana/plugins/opendistro_security/securityconfig/config.yml
opendistro_security:
dynamic:
http:
anonymous_auth_enabled: false
xff:
enabled: true
internalProxies: '.*' # trust all internal proxies, regex pattern
remoteIpHeader: 'x-forwarded-for'
proxiesHeader: 'x-forwarded-by'
trustedProxies: '.*' # trust all external proxies, regex pattern
authc:
openid_auth_domain:
http_enabled: true
transport_enabled: true
order: 0
http_authenticator:
type: openid
challenge: false
config:
openid_connect_url: https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
authentication_backend:
type: noop
kibana.yml:
opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
opendistro_security.openid.client_id: "{application_id}"
opendistro_security.openid.client_secret: "{secret}"
opendistro_security.openid.base_redirect_url: "https://kibana_url"
opendistro_security.cookie.secure: true
elasticsearch.requestHeadersWhitelist: ["Authorization", "security_tenant"]
I'm getting authError every time. What have I missed? I don't see any way to debug this auth error; there is no helpful message in the logs or anywhere else.
I found that changes made on the Kibana page (adding a role, role mapping, or internal user) are not synchronized back to the corresponding opendistro_security configuration files. Conversely, when
I upload changes via sgadmin, the configuration made on the Kibana page is gone. I wonder why the settings aren't kept in sync.
Please describe how to migrate from stock Elasticsearch (the default download) to Open Distro for Elasticsearch. I'm really interested in this.
Current documentation has an instruction to "replace the demo certificates" here but links to a sample docker compose file that assumes the following files exist locally without describing how they are generated:
Incorrectly encoded PEM files produce an only-slightly-helpful error message: Your keystore or PEM does not contain a certificate. Maybe you confused keys and certificates.
This may be related to discrepancies between the PRIVATE KEY, ENCRYPTED PRIVATE KEY, and RSA PRIVATE KEY headers, which get pretty deep into OpenSSL implementation details. For instance, do the demo certs require PKCS#8?
It would help to have documentation and a simple script for securely generating the appropriate files, especially changing the admin client key from "kirk". It might also be useful to describe under what circumstances it is useful to generate non-admin client keys (such as spock or kibana)
Presumably the requirements originate from the Search Guard implementation, but configuring them is non-trivial, and it's not clear which method (or some other) is preferable:
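A small helper for triaging that error: it only inspects the PEM label, which is enough to tell a legacy PKCS#1 key (BEGIN RSA PRIVATE KEY) from PKCS#8 (BEGIN PRIVATE KEY). Whether the plugin strictly requires PKCS#8 is exactly the open question above, so treat the conversion command in the comment as the thing to try, not an established fact:

```python
# If a key turns out to be PKCS#1, this openssl invocation converts it to an
# unencrypted PKCS#8 key:
#   openssl pkcs8 -topk8 -nocrypt -in old.key -out new.key
def pem_key_label(pem_text: str) -> str:
    """Classify a PEM private key by its BEGIN label."""
    if "BEGIN ENCRYPTED PRIVATE KEY" in pem_text:
        return "pkcs8-encrypted"
    if "BEGIN PRIVATE KEY" in pem_text:
        return "pkcs8-unencrypted"
    if "BEGIN RSA PRIVATE KEY" in pem_text:
        return "pkcs1 (legacy RSA)"
    return "unknown"

print(pem_key_label("-----BEGIN RSA PRIVATE KEY-----"))  # -> pkcs1 (legacy RSA)
```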
There is no parameter available in /etc/kibana/kibana.yml to configure Kibana to listen on an address other than localhost. I am currently running Open Distro on a CentOS VM set up in bridged mode on VirtualBox, and I can't reach the server from any of the machines on my LAN; I can see the other exposed services (Elasticsearch) but not Kibana.
Any help is appreciated, thank you!
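For what it's worth, the relevant setting is a stock Kibana option rather than an Open Distro one, which may be why it's easy to miss: server.host controls the bind address, and it defaults to localhost. A minimal /etc/kibana/kibana.yml fragment:

```yaml
# Bind Kibana to all interfaces instead of the localhost default
server.host: "0.0.0.0"
server.port: 5601
```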
Hi, I have created a VM (CentOS 7.5) on Azure and followed the steps on the website.
When I try to access Kibana using the public IP address (on port 5601), the page is not available, even though I have opened inbound ports 9200 and 5601 in the network firewall on the VM.
Any help?
Hi all,
I am trying to install the Open Distro RPM package on a CentOS 7 VM, but the installation fails
(I have the JDK installed); I don't know if I missed some steps.
I get the error messages below:
Error: Package: opendistro-sql-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistroforelasticsearch-0.8.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistro-security-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistro-alerting-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Error: Package: opendistro-performance-analyzer-0.8.0.0-1.noarch (opendistroforelasticsearch-artifacts-repo)
Requires: elasticsearch-oss = 6.6.2
Thanks a lot
Fred
As the title says: my Open Distro cluster currently runs on Kubernetes (k8s).
I found that /dev/shm was used up, and that Performance Analyzer is what uses /dev/shm. In a k8s pod, however, /dev/shm only has 64 MB.
If I mount memory at /dev/shm, the pod may hit OOM. Can you suggest a solution?
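One option, sketched below using standard Kubernetes volumes rather than an Open Distro-specific recipe, is to replace the default 64 MB shm with an in-memory emptyDir whose sizeLimit caps how much it can grow, so a runaway writer gets the pod evicted instead of exhausting node memory; the alternative is disabling Performance Analyzer entirely. Container and volume names here are illustrative:

```yaml
# Pod spec fragment: mount a capped tmpfs at /dev/shm
spec:
  containers:
    - name: elasticsearch
      volumeMounts:
        - name: dshm
          mountPath: /dev/shm
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory
        sizeLimit: 256Mi
```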
Hello, I'm trying to change the default passwords for Elasticsearch and Kibana,
for instance admin:admin or kibanaserver:kibanaserver.
I took the container image from https://hub.docker.com/r/amazon/opendistro-for-elasticsearch
and didn't write any docker-compose.yml file; I just run the container without one.
In /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
I changed the default password to a new hash, but when I restart the container it doesn't change. I have tried many times and still get the same default password.
I followed this page: https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker-security/
Can someone please help me figure out this issue?
Thanks a lot
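A likely cause, sketched from the docker-security docs linked in the issue: editing internal_users.yml only changes the file on disk, while at runtime the plugin reads users from the .opendistro_security index, which is initialized once; a restart does not re-import the file. The config has to be pushed into the index with securityadmin.sh. Paths below assume the stock image layout and the demo certificates; the live commands are shown commented since they need a running cluster:

```shell
# Tools shipped inside the ODFE image (paths relative to /usr/share/elasticsearch):
SEC_TOOLS=plugins/opendistro_security/tools
echo "$SEC_TOOLS/securityadmin.sh"
# 1. Hash the new password:
#    $SEC_TOOLS/hash.sh -p 'new-password-here'
# 2. Paste the hash into plugins/opendistro_security/securityconfig/internal_users.yml
# 3. Push the files into the .opendistro_security index (a restart alone will NOT):
#    $SEC_TOOLS/securityadmin.sh \
#        -cd plugins/opendistro_security/securityconfig/ \
#        -cacert config/root-ca.pem -cert config/kirk.pem -key config/kirk-key.pem \
#        -icl -nhnv
```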
I installed Open Distro for Elasticsearch with SSL enabled (i.e. "https://127.0.0.1:9200") and use the default admin account.
But when I use the Elasticsearch PHP SDK to connect, it reports "SSL Problem Received fatal alert: unknown_ca". Am I doing something wrong?
I can access the cluster from a browser without issue.
https://www.elastic.co/guide/en/elasticsearch/client/php-api/current/_security.html
thanks!
Do you also have a Deb package in the pipeline/roadmap, next to the RPM package?
Hi, LDAP authentication doesn't work.
I use FreeIPA and configured it in config.yml:
ldap:
http_enabled: true
transport_enabled: true
order: 1
http_authenticator:
type: basic
challenge: true
authentication_backend:
type: ldap
config:
enable_ssl: false
enable_start_tls: false
enable_ssl_client_auth: false
verify_hostnames: true
hosts:
- ipahostname:389
bind_dn: 'username'
password: 'password'
userbase: 'dc=example,dc=org'
# Filter to search for users (currently in the whole subtree beneath userbase)
# {0} is substituted with the username
usersearch: '(uid={0})'
# Use this attribute from the user as username (if not set then DN is used)
username_attribute: uid
I can see this config in the Kibana web UI, but there is only one authentication backend, the Internal Users Database. When I try to log in with LDAP credentials, it fails: [2019-04-11T10:44:46,316][WARN ][c.a.o.s.a.BackendRegistry] [VeBKci6] Authentication finally failed for username from 127.0.0.1:36614
How can I enable LDAP authentication and use it by default?
https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker/
The end of the compose file (inlined below) looks cut off, which makes it invalid YAML:
networks:
odfe-net:
Please create a "commercial support" page if someone needs priority help, needs to get up to speed quickly, requires some training or mentoring, or needs full 24 x 7 production support.
Allow companies to post their information on that page. I have a small Elasticsearch and Kafka related consulting company that I would like to add to the page.
Here is an example: http://camel.apache.org/commercial-camel-offerings.html
I tried to put this in docker-compose.yml, but it didn't work:
environment:
- cluster.name=odfe-cluster
- bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
- "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
- http.cors.enabled=true
- http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
- http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
- http.cors.allow-credentials=true
- opendistro_security.ssl.http.enabled=false
Does Open Distro support CORS with different config keys (such as opendistro_security.xxx),
or is it currently impossible?
Add documentation to cover the functionality of the security CLI tools.
environment:
- cluster.name=odfe-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- discovery.zen.ping.unicast.hosts=odfe-node1
The ES_JAVA_OPTS variable is overwritten in /usr/local/bin/docker-entrypoint.sh
The memory settings are currently taken from /usr/share/elasticsearch/config/jvm.options, where they are hardcoded. Changing the memory configuration requires either altering the image or mounting the jvm.options file.
I believe this is more an issue with the image itself. Which repository would be the most suitable place to report it?
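The mount-based workaround mentioned above can be sketched as a docker-compose fragment; the service name and local file name are illustrative:

```yaml
services:
  odfe-node1:
    volumes:
      # Override the hardcoded heap settings with a locally edited copy
      - ./custom-jvm.options:/usr/share/elasticsearch/config/jvm.options
```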
The current documentation has a demo on getting things running with Docker, but getting things ready in Kubernetes is a bit painful with just that; since Kubernetes is a well-known, broadly used platform, I think this would be a worthy effort.
I have a multi-host (cluster) installation successfully running in a private cluster; let me know if you have any interest in me sharing it or creating a PR with the manifests (YAMLs) I used.
Hi, I was setting up Open Distro with security disabled for development purposes and followed the guide to disable security in the docs here: https://opendistro.github.io/for-elasticsearch-docs/docs/security/disable/
An exception was thrown indicating that the setting opendistro_security.disabled has been removed.
I removed all settings related to opendistro_security, including opendistro_security.disabled, and got it working without security.
I think the setting has been removed, but the documentation was not updated.