
sql4es's Introduction

Elastic announced the great news that they are working on SQL support at Elastic{ON} 2017! You can find the talk on this topic on the Elastic website.

sql4es: JDBC driver for Elasticsearch

Sql-for-Elasticsearch (sql4es) is a JDBC 4.1 driver for Elasticsearch 6.3.2 implementing the majority of the JDBC interfaces: Connection, Statement, PreparedStatement, ResultSet, Batch and DatabaseMetaData / ResultSetMetaData. The screenshot below shows SQLWorkbenchJ with a selection of SQL statements that can be executed using the driver.

SQLWorkbenchJ screenshot with examples

Usage

The sql4es driver can be used by adding the jar file, found within the releases directory of the project, to the tool/application used and loading the driver with the name 'nl.anchormen.sql4es.jdbc.ESDriver'. The driver expects a URL with the following format: jdbc:sql4es://host:port/index?params.

  • host: the hostname or ip of one of the es hosts (required)
  • port: the optional port number to use for the transport client (default is 9300)
  • index: the optional index to set active within the driver. Most statements like SELECT, DELETE and INSERT require an active index (also see USE [index/alias] statement below). It is however possible to create new indices, types and aliases without an active index.
  • params: an optional set of parameters used to influence the internals of the driver (specify additional hosts, maximum number of documents to fetch in a single request etc). If your clustername is not 'elasticsearch' you should specify the clustername within the url (see example below). Please see the Configuration section of this readme for a description of all driver specific parameters.
// register the driver and get a connection for index 'myidx'
Class.forName("nl.anchormen.sql4es.jdbc.ESDriver");
Connection con = DriverManager.getConnection("jdbc:sql4es://localhost:9300/myidx?cluster.name=your-cluster-name");
Statement st = con.createStatement();
// execute a query on mytype within myidx
ResultSet rs = st.executeQuery("SELECT * FROM mytype WHERE something >= 42");
ResultSetMetaData rsmd = rs.getMetaData();
int nrCols = rsmd.getColumnCount();
// get other column information like type
while(rs.next()){
	for(int i=1; i<=nrCols; i++){
		System.out.println(rs.getObject(i));
	}
}
rs.close();
con.close();
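For illustration, the URL format above (host, port, index and optional parameters) can be assembled with a small helper. The helper is hypothetical and not part of sql4es; the parameter names come from this readme's Configuration section:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper (not part of sql4es) that assembles a driver URL from
// its parts. Parameter names such as cluster.name and fetch.size are taken
// from this readme.
public class Sql4esUrl {
    public static String build(String host, int port, String index, Map<String, String> params) {
        StringBuilder url = new StringBuilder("jdbc:sql4es://")
                .append(host).append(':').append(port).append('/').append(index);
        char sep = '?'; // first parameter is separated by '?', the rest by '&'
        for (Map.Entry<String, String> e : params.entrySet()) {
            url.append(sep).append(e.getKey()).append('=').append(e.getValue());
            sep = '&';
        }
        return url.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("cluster.name", "my-cluster");
        params.put("fetch.size", "5000");
        System.out.println(build("localhost", 9300, "myidx", params));
        // prints: jdbc:sql4es://localhost:9300/myidx?cluster.name=my-cluster&fetch.size=5000
    }
}
```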

The driver can also be used from applications able to load a JDBC driver. It has been tested with SQLWorkbench/J and SQuirreL on an Elasticsearch 6.3.2 cluster. A description of how to use sql4es with SQLWorkbenchJ, along with a number of example queries, can be found at the bottom of this readme.

Connection con = DriverManager.getConnection("jdbc:sql4es://f03c93be1efeb9be9b2f46b660d10d90.eu-west-1.aws.found.io:9343/indexname?shield.user=username:password&cluster.name=f03c93be1efeb9be9b2f46b660d10d90&ssl");

Supported SQL

Simply put, the sql4es driver translates SQL statements into their Elasticsearch counterparts and parses results into ResultSet implementations. The following SQL statements are supported:

  • SELECT: fetches documents (with or without scoring) or aggregations from elasticsearch
    • COUNT (DISTINCT ...), MIN, MAX, SUM, AVG
    • DISTINCT
    • WHERE (=, >, >=, <, <=, <>, IN, LIKE, AND, OR, IS NULL, IS NOT NULL, NOT [condition])
    • GROUP BY
    • HAVING
    • ORDER BY
    • LIMIT (without offset, offsets are not supported by sql4es)
  • CREATE TABLE (AS) creates an index/type and optionally indexes the result of a query into it
  • CREATE VIEW (AS): creates an alias, optionally with a filter
  • DROP TABLE/VIEW removes an index or alias
  • INSERT INTO (VALUES | SELECT): inserts documents into an index/type; either provided values or results of a query. Possible to UPDATE documents using INSERT by specifying existing document _id's
  • UPDATE: executed as an elasticsearch Upsert
  • DELETE FROM (WHERE): removes documents
  • USE: selects an index as the driver's active one (used to interpret queries)
  • EXPLAIN SELECT: returns the Elasticsearch query performed for a SELECT statement
  • Table aliases like SELECT … FROM table1 as T1, table2 t2...
    • Table aliases are parsed but not used during query execution

Remarks

Elasticsearch does not support transactions. Hence executing batches cannot be rolled back upon failure (nor can statements be committed). It also takes some time for documents to be indexed fully so executing an INSERT directly followed by a SELECT might not include the inserted documents.
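Given this indexing delay, a client that needs to read its own writes can poll until the inserted data becomes searchable. A minimal sketch with a hypothetical helper; in practice the check would execute a SELECT through the sql4es driver:

```java
import java.util.function.BooleanSupplier;

// Hypothetical polling helper: retries a check until it succeeds or the
// attempts run out. In practice the check would execute a SELECT through
// the sql4es driver and test whether the freshly inserted row is visible.
public class WaitForVisible {
    public static boolean await(BooleanSupplier check, int maxAttempts, long sleepMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            if (check.getAsBoolean()) return true; // data became searchable
            try {
                Thread.sleep(sleepMillis); // give Elasticsearch time to refresh
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulated check that succeeds on the third attempt.
        int[] calls = {0};
        boolean found = await(() -> ++calls[0] >= 3, 5, 10);
        System.out.println(found + " after " + calls[0] + " attempts"); // true after 3 attempts
    }
}
```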

Some SQL statements or Elasticsearch features that are not (yet) supported:

  • UPDATE is not supported, although it is possible to update documents by inserting values for an existing _id
    • added in 0.7.2.1: it is now possible to execute updates like UPDATE index.type SET myInt=100 WHERE myString = 'hundred'
  • Not possible to INSERT nested objects
    • added in 0.7.2.1 using double quotes: INSERT INTO mytype ("myObject.nestedDoc.myInt") VALUES (1)
  • Not possible to specify offsets (OFFSET offset or LIMIT offset, number)
  • Fields with type 'nested' are not supported because this type requires different methods to query and retrieve data.
    • added in 0.6.2.1: Nested types are detected by the driver and queries on those fields are executed accordingly
  • Parent child relationships are not supported. It is currently not possible to index or retrieve fields of this type.
  • Elasticsearch features like full text search, highlighting, suggestions and templates are not supported.
    • added in 0.6.2.1: full text search can be done using _search = '…' and highlighting through SELECT highlight(some-field) FROM …
  • Count (Distinct …)
    • Added in 0.9.2.4. It is possible to set the Elasticsearch precision threshold (see cardinality aggregations) by using the precision.threshold parameter in the connection url.

Concepts

Since Elasticsearch is a NoSQL database it does not contain the exact relational objects most people are familiar with (like databases, tables and records). Elasticsearch does however have a similar hierarchy of objects (index, type and document). The conceptual mapping used by sql4es is the following:

  • Database = Index
  • Table = Type
  • Record = Document
  • Column = Field
  • View = Alias (this does not fit from a hierarchical perspective but does match given the purpose of views / aliases)

Elasticsearch responses, both search results and aggregations, are put into a ResultSet implementation. Any nested objects are 'exploded' into a lateral view by default; this means that nested objects are treated as joined tables which are put inside the same row (see this page for an explanation). It is possible to represent nested objects as a nested ResultSet instead, see the Configuration section. Note that although objects are exploded, arrays with primitives are not! They are put in a java.sql.Array implementation supported by JDBC.
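As an illustration of this 'explosion', the sketch below flattens a nested document into the dotted column names of a lateral view. It mimics the behaviour described above and is not the driver's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: flattens a nested document into the dotted column
// names of a lateral view, mimicking the behaviour described above. This
// is not the driver's actual implementation.
public class Explode {
    public static Map<String, Object> flatten(String prefix, Map<String, Object> doc) {
        Map<String, Object> row = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) e.getValue();
                row.putAll(flatten(key, nested)); // explode nested object
            } else {
                row.put(key, e.getValue()); // primitives (and arrays) stay as-is
            }
        }
        return row;
    }

    public static void main(String[] args) {
        Map<String, Object> nested = new LinkedHashMap<>();
        nested.put("field1", 1);
        nested.put("field2", "a");
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("myInt", 42);
        doc.put("nestedDoc", nested);
        System.out.println(flatten("", doc));
        // prints: {myInt=42, nestedDoc.field1=1, nestedDoc.field2=a}
    }
}
```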

Sql4es works from an active index/alias, which means that it resolves references to types from this index. If for example myIndex is currently active, the query SELECT * FROM sometype will only return results if sometype is part of myIndex. Executing a SELECT on a type that does not exist within an index returns an empty result. It is possible to change the active index by executing USE [otherIndex] as described below.

QUERIES

This section describes how SQL is interpreted and converted into Elasticsearch statements. The Presto parser is used to parse SQL statements; please see the syntax definition on the Presto website.

SELECT

/* basic syntax */
SELECT [field (AS alias)] FROM [types] WHERE [condition] GROUP BY [fields] HAVING [condition] ORDER BY [field (ASC|DESC)] LIMIT [number]
  • fields (AS alias): defines the fields to be retrieved from elasticsearch and put in the ResultSet. It is possible to use * to indicate all fields should be retrieved (including _id and _type). Fields can be addressed by their name, nested fields can be addressed using their hierarchical names in dotted notation like: nesteddoc.otherdoc.field. Using a star will simply fetch all fields, also nested ones, present in a document. It is possible to specify the root of an object in order to fetch all its fields. A query like SELECT nesteddoc FROM type will fetch all fields present in nesteddoc. As a result it might return hundreds of columns if nesteddoc has hundreds of fields.
  • types: the types to execute the query against. This can only be types present in the index or alias that is currently active (also see 'use' statement).
  • condition: standard SQL condition using =, >, >=, <, <=, <>, IN and LIKE operators. Sql4es does not support the NOT operator but '<>' can be used instead. Use AND and OR to combine conditions.
  • limit: only works on non aggregating queries. The use of offset is not supported!
/* the following will explode any nested objects into a lateral view */
SELECT * from mytype

SELECT _id as id, myInt, myString FROM mytype WHERE myInt >= 3 OR (myString IN ('hello','hi','bye') AND myInt <= 3)

/* If nestedDoc contains 2 fields the result will be exploded to [myInt,nestedDoc.field1, nestedDoc.field2] */
SELECT myInt, nestedDoc FROM mytype WHERE myInt > 3 AND myString <> 'bye'

/* If the array contains 3 objects the resultset will contain 3 rows, despite the LIMIT used! */
SELECT array_of_nested_objects FROM mytype LIMIT 1

Tables/Types

Only types that are part of the active index or alias can be addressed in the FROM clause. An alias must be created if types from different indices must be accessed in a single query (see CREATE VIEW for alias creation). The query cache is the only exception to this rule: when the query cache identifier (default 'query_cache') is used within FROM, the query result is fetched from the cache rather than executed, which minimizes query time for expensive queries.

/* fetch some data from type */
SELECT DISTINCT field, count(1) FROM type
/* exactly the same as above but now also hitting the query cache */
SELECT DISTINCT field, count(1) FROM type, query_cache

Text matching, search and scoring

By default queries are executed as a filter, which means Elasticsearch does not score the results and they are returned in an arbitrary order. Add '_score' as one of the selected columns in order to change this behaviour and request scoring. By default results are returned sorted on _score DESC (this can be changed to ORDER BY _score ASC). Ordering on another field will disable scoring! In addition it is possible to get the id and type of a document by specifying _id and _type respectively.

Sql4es does not distinguish between searching and matching on textual fields. Behaviour depends entirely on the analysis (if any) performed on the textual field being queried/searched. Under the hood a couple of simple rules are used to determine what type of query should be used:

  • a single word will be put in a TermQuery (example: myString = 'hello')
  • multiple words will be put in a MatchPhraseQuery (example: myString = 'hello there')
  • presence of wildcards (% and _) will trigger the use of a WildcardQuery (% is replaced with * and _ with ?). Example: mystring = '%something' is the same as mystring LIKE '%something'
  • the use of IN (…) will be put in a TermsQuery
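The wildcard rewrite rules above can be sketched in a few lines; the helper names here are made up for illustration and are not the driver's actual code:

```java
// Sketch of the rewrite rules listed above: SQL wildcards % and _ become
// the Elasticsearch wildcard characters * and ?. Helper names are made up
// for illustration; this is not the driver's actual code.
public class Wildcards {
    // True when a value contains SQL wildcards, i.e. would become a WildcardQuery.
    public static boolean needsWildcardQuery(String value) {
        return value.indexOf('%') >= 0 || value.indexOf('_') >= 0;
    }

    public static String toEsWildcard(String sqlPattern) {
        return sqlPattern.replace('%', '*').replace('_', '?');
    }

    public static void main(String[] args) {
        System.out.println(toEsWildcard("%something"));   // *something
        System.out.println(needsWildcardQuery("hello"));  // false -> TermQuery
        System.out.println(needsWildcardQuery("hel%"));   // true  -> WildcardQuery
    }
}
```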

In addition it is possible to execute a regular search with all features supported by ES. Searching is done by executing a match on the fictional field '_search' (see examples below). It is possible to request highlights for any text field using the highlight function like: SELECT highlight(field), … Fragment size and number can be set through the global configuration.

/* term query */
SELECT _score, myString FROM mytype WHERE myString = 'hello' OR myString = 'there'
/* Same as above */
SELECT _score, myString FROM mytype WHERE myString IN ('hello', 'there')
/* use of NOT; find all documents which do not contain 'hello' or 'there' */
SELECT _score, myString FROM mytype WHERE NOT myString IN ('hello', 'there')

/* check for NULL values (missing fields) */
SELECT myInt FROM mytype WHERE myString IS NOT NULL
SELECT myInt FROM mytype WHERE myString IS NULL

/* phrase query */
SELECT _score, highlight(myString), myString FROM mytype WHERE myString = 'hello there'
/* wildcard query */
SELECT _score, myString FROM mytype WHERE myString = 'hel%'
/* a search for exactly the same as the first two */
SELECT _score, highlight(myString) FROM mytype WHERE _search = 'myString:(hello OR there)'

Get document by _id

It is possible to search for document ids by specifying '=' or IN predicates on the _id field. A match on an _id can be combined with other fields, but matching multiple _ids should always be done using IN.

SELECT * FROM mytype WHERE _id = 'whatever_id'
SELECT * FROM mytype WHERE _id = 'whatever_id' AND myInt > 3
SELECT * FROM mytype WHERE _id = 'whatever_id' OR _id = 'another_ID' /* WRONG */
SELECT * FROM mytype WHERE _id IN ('whatever_id', 'another_ID') /* CORRECT */

Aggregation

Sql4es will request an aggregation whenever it finds a DISTINCT, a GROUP BY, or aggregation functions (MIN, MAX, SUM, AVG or COUNT) without any normal fields. No search results are returned whenever an aggregation is requested.

Sql4es supports some basic arithmetic functions: *, /, +, - and % (modulo). It is also possible to combine different fields from the result set within a calculation, like AVG(field)/100 and SUM(field)/count(1). Note that within the current implementation these calculations are performed within the driver once data has been fetched from Elasticsearch. It is possible to refer to values in other rows within functions using brackets [offset]. For example SUM(volume)/SUM(volume)[-1] will divide the sum of the volume column for row X by the value in row X-1. If a value cannot be calculated, for example row number 0 in the example above, it gets the value Float.NaN.
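The bracketed row-offset semantics can be sketched as follows; this is an illustration only, and the driver's actual implementation may differ. Each output row divides its value by the value offset rows away, yielding Float.NaN when the referenced row does not exist:

```java
import java.util.Arrays;

// Illustration of the bracketed row-offset semantics described above:
// out[i] = values[i] / values[i + offset], with Float.NaN whenever the
// referenced row does not exist (e.g. row 0 with offset -1).
public class RowOffset {
    public static double[] divideByOffset(double[] values, int offset) {
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            int j = i + offset;
            out[i] = (j >= 0 && j < values.length) ? values[i] / values[j] : Float.NaN;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] sumVolume = {10, 20, 30}; // SUM(volume) per row
        double[] ratio = divideByOffset(sumVolume, -1); // SUM(volume)/SUM(volume)[-1]
        System.out.println(Arrays.toString(ratio)); // [NaN, 2.0, 1.5]
    }
}
```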

/* Aggregates on a boolean and returns the sum of an int field in desc order */
SELECT myBool, sum(myInt) as summy FROM mytype GROUP BY myBool ORDER BY summy DESC

/* This is the same as above */
SELECT DISTINCT myBool, sum(myInt) as summy FROM mytype ORDER BY summy DESC

/* Aggregates on a boolean and returns the sum of an int field only if it is larger than 1000 */
SELECT myBool, sum(myInt) as summy FROM mytype GROUP BY myBool HAVING sum(myInt) > 1000

/* Gets the average of myInt in two different ways... */
SELECT myBool, sum(myInt)/count(1) as average, avg(myInt) FROM mytype GROUP BY myBool

/* Calculates the percentage of growth of the myInt value across increasing dates */
SELECT myDate, sum(myInt)/sum(myInt)[-1]*100 FROM mytype GROUP BY myDate ORDER BY myDate ASC

/* aggregation on all documents without a DISTINCT or GROUP BY */
SELECT count(*), SUM(myInt) from mytype

/* the following will NOT WORK, a DISTINCT or GROUP BY on mytext is required */
SELECT mytext, count(*), SUM(myInt) from mytype

Some notes on SELECT:

  • limit only works on non-aggregating queries. Any 'limit' on an aggregation will be omitted
  • calculations on fields are currently performed within the driver
  • having (filtering on aggregated results) is currently performed within the driver
  • sorting of aggregated results is currently performed within the driver

EXPLAIN

Explain can be used to view the elasticsearch query executed for a SELECT statement by executing:

EXPLAIN [SELECT statement]

USE

Sql4es uses an active index/alias. By default this is the index/alias specified within the URL used to get the connection (if any). It is possible to change the active index/alias by executing:

USE [index / alias]

All subsequent statements will be executed against the specified index/alias. This action only influences the driver and has no effect on Elasticsearch.

CREATE & DROP

Sql4es supports creation of indices, types (create table) and aliases (create view). These statements require knowledge of ES mechanics like mappings, type definitions and aliases.

CREATE TABLE (index.)type ([field] "[field definition]" (, [field2])...) WITH (property="value" (, property2=...) )

This creates a mapping for [type] within the currently active index or in the index specified using dot notation. Whenever dot notation is used it is assumed that the part before the first dot refers to the index. If the specified index already exists, the type is simply added to it.

The field definition is the JSON definition put in the mapping, without quoting the JSON elements! A string type can be defined as follows: CREATE TABLE mytype (stringField "type:string, index:analyzed, analyzer:dutch"). Any mapping elements, like templates, can be set using the WITH clause (see the example below). All these JSON parts are quoted properly and mashed together into a mapping request.
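The quoting step can be illustrated with a small sketch that turns an unquoted field definition into JSON. This is a hypothetical helper mirroring the description above, not the driver's code:

```java
// Illustration of the quoting step described above: an unquoted
// "key:value, key:value" field definition is turned into proper JSON.
// Hypothetical helper mirroring the readme's description, not driver code.
public class FieldDef {
    public static String toJson(String definition) {
        StringBuilder json = new StringBuilder("{");
        String[] pairs = definition.split(",");
        for (int i = 0; i < pairs.length; i++) {
            String[] kv = pairs[i].split(":", 2); // limit 2 so values containing ':' stay intact
            if (i > 0) json.append(',');
            json.append('"').append(kv[0].trim()).append("\":\"").append(kv[1].trim()).append('"');
        }
        return json.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson("type:string, index:analyzed, analyzer:dutch"));
        // prints: {"type":"string","index":"analyzed","analyzer":"dutch"}
    }
}
```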

/* creates a mapping for mytype within newindex with a template to store any strings without analysis */
CREATE TABLE index.mytype (
	myInt "type:integer",
	myDate "type:date, format:yyyy-MM-dd",
	myString "type:string, index:analyzed, analyzer:dutch"
) WITH (
  dynamic_templates="[{
    default_mapping: {
    	match: *,
    	match_mapping_type: string,
    	mapping: {type: string, index: not_analyzed }
    }
  }]"
)

An empty index can be created using CREATE TABLE index.type (_id "type:string"). The _id field is omitted because it is a standard ES field.

CREATE TABLE (index.)type AS SELECT ...

Creates a new index/type based on the results of a SELECT statement. The new field names are taken from the SELECT; column aliases can be used to influence the field names. For example, CREATE TABLE myaverage AS SELECT avg(somefield) AS average will result in a new type myaverage within the currently active index with a single Double field called 'average'. Note that this is a two-step process taking place at the driver: first the query is executed, and secondly the index is created and the results are written (in bulk) to the new type.

/*Create another index with a type mapping based on the mapping created before*/
CREATE TABLE index.mytype AS SELECT myDate as date, myString as text FROM anyType

/* create a type with a (possibly expensive to calculate) aggregation result */
CREATE TABLE index.myagg AS SELECT myField, count(1) AS count, sum(myInt) AS sum from anyType GROUP BY myField ORDER BY count DESC

CREATE VIEW [alias] AS SELECT * FROM index1 (, [index2])... (WHERE [condition])

Creates a new ES alias containing the specified indices or adds the indices to an existing alias. The optional WHERE clause adds a filter on the index-alias pairs specified. See the Elasticsearch documentation for information on aliases.

DROP TABLE [index] / DROP VIEW [alias]

Removes the specified index or alias

/*Create an elasticsearch alias which includes two indexes with their types */
CREATE VIEW newalias AS SELECT * FROM newindex, newindex2

/* Same as above but with a filter*/
CREATE VIEW newalias AS SELECT * FROM newindex, newindex2 WHERE myInt > 99

/*Use the alias so it can be queried*/
USE newalias

/* removes myindex and newalias */
DROP TABLE myindex
DROP VIEW newalias

INSERT & DELETE

Describes inserting and deleting data through sql

INSERT INTO (index.)type ([field1], [field2]...) VALUES ([value1], [value2], ...), ([value1], ...), …

Adds one or more documents to the specified type within the index. Fields must be defined and the number of values must match the number of fields defined. It is possible to add multiple documents within a single INSERT statement. It is also possible to specify the _id field within the insert statement; in that case Elasticsearch is forced to use the specified document id. The insert acts as an UPDATE if the _id already exists! Nested objects cannot be expressed in plain SQL, but nested fields can be addressed by putting the dotted field name in double quotes (see the examples below).

INSERT INTO (index.)type SELECT …

Adds all of the results of the SELECT statement to the specified type within the index. Field names to insert are taken from the result (i.e. column aliases can be used). Note that, similar to 'CREATE TABLE .. AS SELECT', the results are pulled into the driver and then indexed (using Bulk).

/* Insert two documents into the mytype mapping */
INSERT INTO mytype (myInt, myDouble, myString) VALUES (1, 1.0, 'hi there'), (2, 2.0, 'hello!')

/* insert a single document, using quotes around nested object fields */
INSERT INTO mytype (myInt, myDouble, "nestedObject.myString") VALUES (3, 3.0, 'bye, bye')

/* update or insert a document with specified _id */
INSERT INTO mytype (_id, myInt, myDouble) VALUES ('some_document_id', 4, 4.0)

/* copy records from anotherindex.mytype to myindex.mytype that meet a certain condition */
USE anotherindex
INSERT INTO myindex.mytype SELECT * from newtype WHERE myInt < 3 

DELETE FROM type (WHERE [condition])

Deletes all documents from the specified type that meet the condition. If no WHERE clause is specified all documents are removed. Elasticsearch can only delete documents based on their _id, which means that this statement is executed in two steps: first collect all _id's of documents that meet the condition, then delete those documents using the bulk API.

/* delete documents that meet a certain condition*/
DELETE FROM mytype WHERE myInt = 3

/*delete all documents from mytype*/
DELETE FROM mytype 

UPDATE

It is possible to update documents within an index/type using standard SQL syntax. Note that nested object names must be surrounded by double quotes:

UPDATE index.type SET field1=value, field2='value', "doc.field"=value WHERE condition

The update is executed in two steps: first the _id's of all documents matching the condition are fetched, after which the specified fields of these documents are updated in batch using the Upsert API.

Configuration

It is possible to set parameters through the provided URL. All parameters are exposed to Elasticsearch as well, which means that it is possible to set Client parameters; see the Elasticsearch docs. The following driver-specific parameters can be set:

  • es.hosts: a comma separated list of additional hosts with optional ports in the format host1(:port1), host2(:port2) … The default port 9300 is used when no port is specified.
  • fetch.size (int default 10000): maximum number of results to fetch in a single request (10000 is elasticsearch's maximum). Can be lowered to avoid memory issues when documents fetched are very large.
  • results.split (default: false): setting this will split the entire result into multiple ResultSet objects, each with maximal fetch.size number of records. The next ResultSet can be fetched using Statement.getMoreResults(). The default is false and the driver will put all results within a single ResultSet. This setting should be used when the client has insufficient memory to hold all the results within a single ResultSet.
  • scroll.timeout.sec (int, default 10): the time a scroll id remains valid and 'getMoreResults()' can be called. Should be increased or decreased depending on the scenario at hand.
  • query.timeout.ms (int, default 10000): the timeout set on a query. Can be altered depending on the use case.
  • default.row.length (int, default 250): the initial number of columns created for results. Increase this property only when results do not fit (typically indicated by an array index out of bounds exception) triggered when search results are parsed.
  • query.cache.table (string, default 'query_cache'): the fictional table name used to indicate using elasticsearch query cache. Can be changed to make it shorter or more convenient.
  • result.nested.lateral (boolean, default true): specifies whether nested results must be exploded (the default) or not. Can be set to false when working with the driver from your own code. In this case a column containing a nested object (wrapped in a ResultSet) will have java.sql.Types = Types.JAVA_OBJECT and can be read as (ResultSet)rs.getObject(colNr).
  • fragment.size (int, default 100): specifies the preferred fragment length in characters.
  • fragment.count (int, default 1): specifies the maximum number of fragments to return when requesting highlighting.
  • precision.threshold (int, default 3000): specifies the precision used for cardinality aggregations

Example using SQLWorkbenchJ

SQLWorkbenchJ is a SQL GUI that can be used to access an Elasticsearch cluster using the sql4es driver. Follow the steps below to set it up and execute the example statements.

  1. download SQLWorkbenchJ for your platform from the website
  2. Install SQLWorkbenchJ and open it
  3. Add the SQL4ES driver:
    1. click 'Manage Drivers' in the bottom left corner
    2. click the blank document icon on the left to add a new driver
    3. Give the driver a descriptive name (sql4es for example)
    4. point the Library to the sql4es jar file on your system (found within the release directory of the project)
    5. set 'nl.anchormen.sql4es.jdbc.ESDriver' as the Classname
    6. set 'jdbc:sql4es://localhost:9300/index' as the sample URL
  4. Click 'Ok' to save the configuration
  5. Add a new database connection using the sql4es driver
    1. click the 'create new connection profile' button in the top
    2. give the profile a descriptive name
    3. specify the sql4es driver added before
    4. specify the URL of your Elasticsearch cluster using the index 'myindex' ('jdbc:sql4es://your.es.host:port/myindex')
    5. click the save button on the top
  6. Select the created profile and click 'Ok' to create the connection
    1. An empty workbench will open when everything is ok.
  7. Copy the statements below into the workbench
  8. Execute the statements one by one and view the results. A brief description of each statement can be found below the SQL statements
    1. Running all the statements as a sequence works as well but will not provide any results for the SELECT statements because the cluster is still indexing the inserts when they are being executed (remember, Elasticsearch is not a relational database!)
INSERT INTO mytype (myLong, myDouble, myDate, myText) VALUES (1, 1.25, '2016-02-01', 'Hi there!'),(10, 103.234, '2016-03-01', 'How are you?');

SELECT * FROM mytype;
SELECT _score, myLong, myText FROM mytype WHERE myText = 'hi' AND myDate > '2016-01-01';

EXPLAIN SELECT _score, myLong, myText FROM mytype WHERE myText = 'hi' AND myDate > '2016-01-01';

CREATE TABLE myindex2.mytype2 AS SELECT myText, myLong*10 as myLong10, myDouble FROM mytype;

DELETE FROM mytype WHERE myDouble = 1.25;
USE myindex2;
SELECT * FROM mytype2;
CREATE VIEW myview AS SELECT * FROM myindex, myindex2 WHERE myDouble > 2;
USE myview;
SELECT * FROM mytype, mytype2;
drop view myview;
drop table myindex;
drop table myindex2;
  1. insert two documents into the new type called mytype (the type will be created and mapping will be done dynamically by elasticsearch). Check the DatabaseExplorer option to view the mapping created.

  2. show all documents present in mytype

  3. execute a search and show the score

  4. show the Elasticsearch query performed for the search

  5. create a new index and a new type based on the documents in mytype. Note that mytype2 has some field names that differ from the type it was created from.

  6. delete some documents

  7. make the newly created index 'myindex2' the active one

  8. show all documents within mytype2

  9. create a view (alias) for myindex and myindex2 with a filter

  10. make the newly created view the active one

  11. select all documents present within the view, note that:
    1. not all documents are shown due to the filter on the alias
    2. some of the fields are empty because the two types have a couple of different fields (myDate, myLong and myLong10)

sql4es's People

Contributors

corneversloot, jeroenvlek, mallim, seralf, snowch, toxeh


sql4es's Issues

DISTINCT - COUNT Combination Reg.

Hi,
I have been using the driver for the past 6 months and it has been working fine. Now there is a requirement which is not supported by the driver. When we try a query like SELECT COUNT(DISTINCT(INVOICE_NUMBER)) FROM FACT_TABLE the condition fails and does not return any value. This requirement is for capturing the count of unique invoices from the FACT table, as the FACT table is denormalised with both header and line level transaction records. COUNT(DISTINCT(<COL_NAME>)) can occur more than once in a query as well. Can you please help us in getting this accomplished? Thanks.

Elasticsearch 6.5.0 - compilation/test failures.

I was trying to use this driver for ES 6.5.0, as the new SQL JDBC driver provided by Elasticsearch itself requires a Platinum license:
https://www.elastic.co/downloads/jdbc-client
The straightforward compilation doesn't work. I was able to compile without test cases:
mvn package -Dmaven.test.skip=true
after making some changes in the source code. I'm attaching the diff file with my changes (it's generated using diff, not git):
build-ES6.5.0-diff.txt
After that I didn't notice any problems using the driver against my ES cluster, so I didn't make any more changes.

I'll be happy if you apply those changes so we can use master directly and work with the latest ES version. You could also check why the tests are failing and what needs to be changed there, to ensure that all options work as expected.

Date Time format issue

Joda date time cannot parse dates in the format yyyy-MM-dd HH:mm:ss.SSS

2017/06/14 10:35:55 - ES UEMM_AUDIT_TRAIL - EVENT 67.0 - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Unexpected error
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Couldn't get row from result set
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to get value 'Date' from database resultset, index 0
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to parse Date from '2017-03-09 23:04:14.222' : Invalid format: "2017-03-09 23:04:14.222" is malformed at " 23:04:14.222"
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2546)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2516)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2494)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.doQuery(TableInput.java:259)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:140)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at java.lang.Thread.run(Thread.java:745)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Caused by: org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to get value 'Date' from database resultset, index 0
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Unable to parse Date from '2017-03-09 23:04:14.222' : Invalid format: "2017-03-09 23:04:14.222" is malformed at " 23:04:14.222"
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4964)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.BaseDatabaseMeta.getValueFromResultSet(BaseDatabaseMeta.java:2107)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.DatabaseMeta.getValueFromResultSet(DatabaseMeta.java:2950)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2538)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - ... 6 more
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - Caused by: java.sql.SQLException: Unable to parse Date from '2017-03-09 23:04:14.222' : Invalid format: "2017-03-09 23:04:14.222" is malformed at " 23:04:14.222"
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at nl.anchormen.sql4es.ESResultSet.getTimeFromString(ESResultSet.java:340)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at nl.anchormen.sql4es.ESResultSet.getTimestamp(ESResultSet.java:388)
2017/06/14 10:35:55 - XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4946)
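Until the driver's date parsing is fixed, one client-side workaround is to read the column as a String and parse it yourself with java.time; a minimal sketch (the class and method names below are mine, not part of sql4es):

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical workaround: parse the 'yyyy-MM-dd HH:mm:ss.SSS' strings
// that trip up the driver's Joda-based parser, using java.time instead.
public class EsTimestampParse {
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");

    public static Timestamp parse(String value) {
        return Timestamp.valueOf(LocalDateTime.parse(value, FORMAT));
    }
}
```

For example, parse("2017-03-09 23:04:14.222") yields a java.sql.Timestamp with the millisecond part intact.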

Any plan for "Parent-Child" search on sql4es?

Hi,

I think I share the same preference with you guys, I think use SQL to search for result is much more convenient than use JSON or java client to query from Elasticsearch.

However, my question is: does sql4es plan to support "Parent-Child" search on Elasticsearch? "Parent-Child" search is quite a major feature of Elasticsearch, so hopefully sql4es can add it to a future milestone.

Sincerely,

Feiran

Any plan to support Elasticsearch 5.0.0?

Hi
When connecting from SQLWorkbenchJ with "sql4es-0.9.2.3.jar" I get this error:

"Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0-alpha4]"

Many thanks

ES 5.4 Error SQL4ES 5.0.0

I am able to connect to ES 5.4 using a JDBC connection, but I get the following error when using a table input.

The field in question is of type long in ES. Do I have to use any type conversion in the SQL query?

2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Unexpected error
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Couldn't get row from result set
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Unable to get value 'BigNumber(16)' from database resultset, index 0
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Value in column '1' is not of type BigDecimal but is class java.lang.Integer
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2546)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2516)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2494)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.doQuery(TableInput.java:259)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:140)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at java.lang.Thread.run(Thread.java:745)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Caused by: org.pentaho.di.core.exception.KettleDatabaseException:
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Unable to get value 'BigNumber(16)' from database resultset, index 0
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Value in column '1' is not of type BigDecimal but is class java.lang.Integer
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX -
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4964)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.BaseDatabaseMeta.getValueFromResultSet(BaseDatabaseMeta.java:2107)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.DatabaseMeta.getValueFromResultSet(DatabaseMeta.java:2950)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.database.Database.getRow(Database.java:2538)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ... 6 more
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Caused by: java.sql.SQLException: Value in column '1' is not of type BigDecimal but is class java.lang.Integer
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at nl.anchormen.sql4es.ESResultSet.getBigDecimal(ESResultSet.java:548)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - at org.pentaho.di.core.row.value.ValueMetaBase.getValueFromResultSet(ValueMetaBase.java:4919)
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - ... 9 more
2017/06/07 10:31:04 - XXXXXXXXXXXXXXXXXXX - Finished reading query, closing connection.

NOT Support in SQL - Pentaho Integration

I am trying to use it with Pentaho as described at https://www.anchormen.nl/elasticsearch-as-a-pentaho-source-through-sql4es/

The driver works well in SQLWorkbenchJ but Pentaho gives an error when trying to create an Analysis Report:
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: NOT is currently not supported, use '<>' instead

I'm using SQL4ES 0.8.2.3 , Elasticsearch 2.3.4 and Pentaho 6.1.0.1 Trial.
How could you make it work with Pentaho in the post? Is there any plan to add NOT support?


Unable to Connect to Elasticsearch 5.5.3 using JDBC Driver sql4es-0.8.2.4.jar.

Below is the error message; this driver supports Elasticsearch 2.2.0 but apparently not 5.5.3.

We are in urgent need of help connecting to Elasticsearch using Oracle DVD 4. Please suggest/advise.

I have also tried the latest driver "sql4es-0.9.2.4.jar", but no luck.

Apr 13, 2018 5:18:08 PM oracle.bi.datasource.logging.ODLLoggerWrapper error
SEVERE: java.util.concurrent.ExecutionException oracle.bi.datasource.exceptions.DatasourceException: [JDSError : 102] Cannot create a connection since there are some errors. Please fix them and try again. Failed creating a data source connection for jdbc:sql4es://localhost:9300/twitter_local?cluster.name=elasticsearch
Cause - [JDSError : 102] Cannot create a connection since there are some errors. Please fix them and try again. Failed creating a data source connection for jdbc:sql4es://localhost:9300/twitter_local?cluster.name=elasticsearch
java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.util.concurrent.FutureTask.get(FutureTask.java:206)
oracle.bi.datasource.service.adf.server.DatasourceServlet.doGet(DatasourceServlet.java:450)
oracle.bi.datasource.service.adf.server.DatasourceServlet.doPost(DatasourceServlet.java:597)
javax.servlet.http.HttpServlet.service(HttpServlet.java:707)

Not able to connect to elastic versions 5.X.X

Hello!
I am trying to connect to an Elastic server with version 5.3.0 and run SQL queries.
I am getting the following error on the server side:

[2018-10-15T12:37:06,482][WARN ][o.e.t.n.Netty4Transport ] [gIaF784] exception caught on transport layer [[id: 0x294d658d, L:/10.54.135.59:9300 - R:/10.54.135.59:52665]], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1323) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.3.0.jar:5.3.0]

I understand that support for later versions of Elastic is not yet available.
When will the new release be available? Or is there another way to work around this?
I tried compiling the master branch code and using the jar, but still cannot connect.

I am getting the following error on the client side when connecting using the master branch:

Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/client/transport/TransportClient
at nl.anchormen.sql4es.jdbc.ESDriver.connect(ESDriver.java:47)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.client.transport.TransportClient
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 4 more

the last bug

select noon, noon_some from person where id=1
The second field will disappear from the result set, because of the inexact startsWith match at line 89 of buildHeaders.java.
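The conflation can be reproduced with plain string operations; a minimal sketch of the suspected prefix-match problem (the method names are illustrative, not the driver's actual API):

```java
// Shows why a startsWith-based column match conflates 'noon' and 'noon_some':
// the first requested column name is a prefix of the second column name.
public class ColumnMatch {
    public static boolean prefixMatch(String column, String requested) {
        return column.startsWith(requested); // reportedly what buildHeaders does
    }

    public static boolean exactMatch(String column, String requested) {
        return column.equals(requested);     // keeps the two columns distinct
    }
}
```

prefixMatch("noon_some", "noon") returns true, so both select items resolve to the same header, while exactMatch keeps the two columns apart.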

Date issues

I noticed that a java.sql.Date is returned when the date type is declared in the mapping. However, java.sql.Date's ability to return the time of day is deprecated (its time accessors throw at runtime).
It seems like Elasticsearch's date type should be exposed as java.sql.Timestamp instead.
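The difference is easy to demonstrate: java.sql.Timestamp keeps the full time component, while java.sql.Date's time-of-day accessors are deprecated and throw at runtime. A small sketch:

```java
import java.sql.Timestamp;

public class DateVsTimestamp {
    // Returns true if the (deprecated) time-of-day accessors work on the value.
    @SuppressWarnings("deprecation")
    public static boolean timeOfDayAccessible(java.util.Date d) {
        try {
            d.getHours(); // java.sql.Date overrides this to throw
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
```

timeOfDayAccessible(Timestamp.valueOf("2017-03-09 23:04:14.222")) is true, while wrapping the same millis in a java.sql.Date makes it false; a Timestamp also preserves the fractional seconds via getNanos().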

Cannot get nested of nested field correctly

PUT /my_index
{
  "mappings": {
    "blogpost": {
      "properties": {
        "comments": {
          "type": "nested"
        }
      }
    }
  }
}



PUT /my_index/blogpost/1
{
  "title": "Nest eggs",
  "body":  "Making your money work...",
  "comments": [ 
    {
      "user":    { "id" : "John Smith"},
      "comment": "Great article"
    },
    {
      "user":    { "id" : "Alice White"},
      "comment": "More like this please"
    }
  ]
}

Here is sample data I made and queried like
select * from blogpost where _id='1';

It is converted correctly to this JSON query:

GET /my_index/_search
{
   "size": 200,
   "timeout": 10000,
   "post_filter": {
      "ids": {
         "type": "blogpost",
         "values": [
            "1"
         ]
      }
   }
}

and the returned data is correct in ES, but not in the sql4es result table.

[screenshot: sql4es result table]

comments.user.id of the second row should be "Alice White", but sql4es just fills the first nested-of-nested value into every row.

Issue with exposing metadata of aliases

Hi,

I'm having trouble with a 3rd-party BI tool reading the metadata of the Elasticsearch indices exposed as VIEWs.
Is there any difference, from the point of view of the metadata, in the way VIEWs are exposed (via JDBC) versus what MySQL does, for example?
Maybe something should be set as an additional param in the URI?

Any support for 2.4.3?

Let me say this project is awesome!

It works for me when I connect to 2.4.0 but I get a transport error when I connect to 2.4.3. Is it supported?

order by parse bug

	// parse ORDER BY
	if(node.getOrderBy().isEmpty()){
		for(SortItem si : node.getOrderBy()/*.get().getSortItems()*/){
			OrderBy ob = si.accept(orderOarser, state);
			if(state.hasException()) return new ParseResult(state.getException());
			orderings.add(ob);
		}
	}

The condition is inverted: it should be `if(!node.getOrderBy().isEmpty())`; as written, the sort items are only parsed when the ORDER BY clause is empty.

the second bug

explain select sum(money) as allMoney,flag from person having max(flag)=-1

java.sql.SQLException: Having reference col: flag, fullN: max(flag) as alias:max(flag) vis: true index: -1 type: 8 not found in SELECT clause

Please modify line 40 of HavingParser.java to support arithmetic expressions.

Value in column X is not of type Boolean but is class java.lang.Boolean

Hi there,

I'm running into an odd issue trying to run a simple select * statement, where the type exists in more than one index (but the query and the URL specify a single index).

This is the query:

SELECT * from "data-generator-poc-async".arm

And this is the URL:
jdbc:sql4es://myelastichost:9302/data-generator-poc-async?cluster.name=test23

Using the exact same connection URL and query, I get the following results:

  • Squirrel - returns all documents in the specified index, but with hundreds of columns that do not exist in the index at all (they do exist however for the arm type in other indices on the cluster). The values for all of these columns are being shown as NULL / false for boolean values.
  • WorkbenchJ - Getting back an error: "Value in column '552' is not a Date but is String"
    There are two problems here:
    1. My index doesn't have 552 columns.. the count is probably skewed because it pulls in columns for the arm type from other indices
    2. Using Squirrel I can see that the getColumnTypeName and getColumnClassName for column 552 are both java.sql.Date
  • Apache Drill (ultimately this is what I want to use the sql4es driver with) - Getting back an error: "Value in column '783' is not of type boolean but is class java.lang.Boolean"

The data I have in the index is very simple. Am I doing something wrong? :)
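A plausible root cause, offered as an assumption rather than a finding from the driver source: the column type may be reported as the primitive boolean while the values are boxed java.lang.Boolean, so a strict class-equality check on the client side fails even though the value is, in every practical sense, a Boolean:

```java
// Demonstrates the primitive-vs-boxed class mismatch that could trigger
// "is not of type boolean but is class java.lang.Boolean".
public class BooleanTypeCheck {
    public static boolean strictTypeMatch(Class<?> declared, Object value) {
        return declared == value.getClass(); // fails for boolean.class vs Boolean.class
    }
}
```

strictTypeMatch(boolean.class, Boolean.TRUE) is false, while strictTypeMatch(Boolean.class, Boolean.TRUE) is true.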

Issue with Elasticsearch 5.2.0

Hi,

I am using Elasticsearch 5.2.0 and querying it from Pentaho 7.1 CDE using sql4es-0.8_5.0.0.jar.
The JDBC connection is established successfully, but I get the following error when querying a table:
Caused by: java.lang.NullPointerException
at nl.anchormen.sql4es.ESResultSet.getStatement(ESResultSet.java:905)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.ResultSetTableModelFactory.generateDefaultTableModel(ResultSetTableModelFactory.java:296)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.parametrizeAndQuery(SimpleSQLReportDataFactory.java:328)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.queryData(SimpleSQLReportDataFactory.java:178)
at org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SQLReportDataFactory.queryData(SQLReportDataFactory.java:142)
at org.pentaho.reporting.engine.classic.core.CompoundDataFactory.queryStaticInternal(CompoundDataFactory.java:172)
at org.pentaho.reporting.engine.classic.core.CompoundDataFactory.queryStatic(CompoundDataFactory.java:154)
at org.pentaho.reporting.engine.classic.core.CompoundDataFactory.queryData(CompoundDataFactory.java:67)
at org.pentaho.reporting.engine.classic.core.cache.CachingDataFactory.queryInternal(CachingDataFactory.java:411)
at org.pentaho.reporting.engine.classic.core.cache.CachingDataFactory.queryData(CachingDataFactory.java:299)
at pt.webdetails.cda.dataaccess.PREDataAccess.performRawQuery(PREDataAccess.java:127)

Could anyone please share inputs on this?

Thanks

Select literal/constant support

Hello!
Thanks for this helpful library.

Could you please implement support for selecting literal values?
I mean queries like "select id, name, 42, 'foo' from Bar". Currently such queries always return 0 for integral literals and null for string ones.

Unsupported major.minor version 52.0

Hi
I'm trying to use the driver with SQLWorkbenchJ and SquirrelSQL but I get several errors:

SQLWorkbenchJ gives me a generic error: Unable to connect to DB.
SquirrelSQL, after defining the driver conf, gives the error: Unsupported major.minor version 52.0 (class file version 52.0 is Java 8, so the driver needs a Java 8 VM).

I'm using Mac OSX 10.11.5 and Java VM 1.7.

Thank you
Lorenzo

Is best practice for sql4es only to use one ESConnection ?

We used a connection pool to create and maintain Connections when the database was MySQL, PostgreSQL, or Oracle, so we intended to use a pool to manage ESConnections as well. But when we reviewed the code, we found that each ESConnection wraps one TransportClient instance. If we use a pool, there will be multiple TransportClient instances in the project (one JVM), whereas the official documentation suggests using a single client per JVM.

So what should we do? Use only one ESConnection instance, or multiple ESConnection instances in a pool? Thanks for your reply.

Disable automatic discovery?

Is it possible to disable automatic discovery of the cluster nodes? We have an ES cluster behind an NGINX proxy, and the driver is unable to connect to the cluster (2.3.5).
In the ElasticSearch Hadoop driver for example there is a setting called es.nodes.wan.only which disables auto-discovery - is there a way to do it with sql4es?

Thanks,
Dan

getDate(int columnIndex, Calendar cal) not implemented

The getDate function that accepts a column index and a calendar is not implemented:

    @Override
    public Date getDate(int columnIndex, Calendar cal) throws SQLException {
        // TODO Auto-generated method stub
        return null;
    }

Some clients (Apache Drill being one) require this functionality since they always pass a Calendar object.
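One way to honor the Calendar argument is to reinterpret the stored epoch millis in the caller's timezone; the helper below is a hedged sketch of how the method could be filled in, not the driver's actual code:

```java
import java.sql.Date;
import java.util.Calendar;
import java.util.TimeZone;

// Hypothetical helper for getDate(int, Calendar): shift the epoch millis
// from the JVM default timezone into the caller's timezone.
public class DateWithCalendar {
    public static Date adjust(Date d, Calendar cal) {
        if (d == null || cal == null) return d;
        long ms = d.getTime();
        long shift = TimeZone.getDefault().getOffset(ms)
                   - cal.getTimeZone().getOffset(ms);
        return new Date(ms + shift);
    }
}
```

ESResultSet.getDate(columnIndex, cal) could then delegate to the existing getDate(columnIndex) and pass the result through adjust; when cal uses the default timezone the value is returned unchanged, which would already satisfy clients like Apache Drill that always pass a Calendar.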

ElasticSearch 6.X - Failed test cases

Source version: master level
Running CreateITest against Elasticsearch 6.3.2 / 6.4.0 ends with the following errors:

At

boolean res = st.execute("CREATE TABLE simpletype (myString \"type:keyword\", myInt \"type:integer\", myDate \"type:date\")");

throws

Caused by: java.io.IOException:
Unexpected character ('t' (code 116)): was expecting double-quote to start field name

At

assertEquals(Types.DATE, rsm.getColumnType(5));

throws

SQL Value in column '5' is not a Date but is Str...

resultsets fetch implementation

Is the ResultSet FETCH implemented based on this scrolling concept https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-search-scrolling.html or is it restricted by the greater of index.max_result_window and fetch.size?
Currently I can't find a way to get the entire result set from a SELECT query if the result contains more than fetch.size (or index.max_result_window) rows, because offsets are currently not implemented in sql4es.
Any hints/workarounds or plans for the future? :)

Error occurred while creating ad-hoc view in Jaspersoft Server Domain

I am using this driver to create a domain in JasperSoft Server. I could create the domain, but while creating the ad-hoc view I received the following error:

Caused by: java.sql.SQLException: No active index set for this driver. Pleas specify an active index or alias by executing 'USE <index/alias>' first
at nl.anchormen.sql4es.jdbc.ESStatement.executeQuery(ESStatement.java:62)
at nl.anchormen.sql4es.jdbc.ESStatement.execute(ESStatement.java:188)
at nl.anchormen.sql4es.jdbc.ESPreparedStatement.executeQuery(ESPreparedStatement.java:66)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
at net.sf.jasperreports.engine.query.JRJdbcQueryExecuter.createDatasource(JRJdbcQueryExecuter.java:310)

Could you please help?

ESConnection. isClosed() check error

If I use the sql4es driver with the apache-commons-dbcp library, a Connection initialization error occurs:
"initializeConnection: connection closed"
The isClosed() method of the ESConnection class may need to be modified as follows:

	public boolean isClosed() throws SQLException {
		return !active; // the old version returned 'active'
	}

Query with identical fields does not correctly populate the result set

Using 8.2.2 built from source (4/20/2016) and testing with SQL Workbench/J Build 119 and Elasticsearch 2.0.2/2.1.1/2.1.2, a query like "SELECT field as field_1, field as field_2, field as field_3 FROM table;" returns a result set with all three columns but only the last populated with values.

first bug

explain select sum(money) as allMoney from person where id=3
result is
{
  "size" : 0,
  "timeout" : 10000,
  "query" : {
    "term" : {
      "id" : 3
    }
  },
  "aggregations" : {
    "filter" : {
      "filter" : {
        "term" : {
          "id" : 3
        }
      },
      "aggregations" : {
        "sum(Money)" : {
          "sum" : {
            "field" : "Money"
          }
        }
      }
    }
  }
}
the field 'money' becomes 'Money'

I suggest you modify line 251 of Heading.java or SelectParser.java, around 'createColumn'.

Using with Jasperosft Reports or JDBC Manager

Hi
I have been looking for a ES JDBC driver for months, so very happy i came across this.
I am trying to use it with Jaspersoft Report Studio. When I test the connection it shows as successful, but when I go to where I would normally see the tables/DB (when connecting to MS SQL), it just sits on Pending... under Table, View & Global Temporary.
I can connect to it with JDBC Manager on Windows but get a similar result where it just sits there.

Can you point me to where I can look to see where the issue could be?

Thanks

Incorrectly convert nested query

PUT index
{
  "mappings": {
    "my_case": {
      "properties": {
        "events": {
          "type": "nested" 
        }
      }
    }
  }
}

I made a mapping structure like this. Every row has two or more events, and I would like to find the rows where one of the events matches both poid and oid.

select name from my_case where (events.poid='A' and events.oid='B');

This is how I tried to query through sql4es, but it does not return the results I expected.

from sql4es
{
  "size": 200,
  "timeout": 10000,
  "post_filter": {
    "bool": {
      "must": [
        {
          "nested": {
            "query": {
              "term": {
                "events.poid": "A"
              }
            },
            "path": "events"
          }
        },
        {
          "nested": {
            "query": {
              "term": {
                "events.oid": "B"
              }
            },
            "path": "events"
          }
        }
      ]
    }
  }
}

It found every row that has an event with poid 'A' OR an event with oid 'B'.
How should I make the query find rows where both conditions match the same event?

This is what I want to make through sql4es

   .....
    "query": {
        "nested": {
           "path": "events",
           "filter": {
               "bool" : {
                    "must" : [
                        {"term" : { "events.poid" : "A"}},
                        {"term" : { "events.oid" : "B"}}
                    ]
               }
           }
        }

"having field = value" in select statement can't work correctly

If I have SQL like: select name, count(1) as num from table group by name having num = 5
the "num = 5" comparison does not work correctly, because the code uses the == operator on two boxed Double values; the code is in the following file:
sql4es/src/main/java/nl/anchormen/sql4es/model/expression/SimpleComparison.java

if(this.comparisonType == Type.EQUAL) return leftValue == rightValue;
if(this.comparisonType == Type.GREATER_THAN) return leftValue > rightValue;
if(this.comparisonType == Type.GREATER_THAN_OR_EQUAL) return leftValue >= rightValue;
if(this.comparisonType == Type.LESS_THAN) return leftValue < rightValue;
if(this.comparisonType == Type.LESS_THAN_OR_EQUAL) return leftValue <= rightValue;

Could this use `return leftValue.equals(rightValue);` instead?
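The report is correct: == on two boxed Double values compares object identity, not numeric value, so two equal aggregation results can still compare unequal. A small demonstration, with Double.compare as one possible fix (the ordering operators >, >=, <, <= in the quoted code are unaffected because they auto-unbox):

```java
// Demonstrates the boxed-Double pitfall behind 'having field = value'.
public class BoxedDoubleCompare {
    // the style of comparison quoted above, applied to boxed values
    public static boolean identityEqual(Double l, Double r) {
        return l == r; // reference comparison, usually false for distinct boxes
    }

    // a value-based comparison that behaves as SQL users expect
    public static boolean valueEqual(Double l, Double r) {
        return Double.compare(l, r) == 0;
    }
}
```

leftValue.equals(rightValue) would also work for EQUAL; Double.compare has the advantage that the same pattern covers the other comparison types as well.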

Unexpected Error occurred attempting to open an SQL connection.

Trying to configure in Virtual Desktop environment and getting the following error (using SQuirreL). Any help please?

java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: Unable to connect to database
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.sql.SQLException: Unable to connect to database
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:171)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$000(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$1.run(OpenConnectionCommand.java:104)
... 5 more
Caused by: java.sql.SQLException: Unable to connect to database
at nl.anchormen.sql4es.jdbc.ESConnection.buildClient(ESConnection.java:129)
at nl.anchormen.sql4es.jdbc.ESConnection.&lt;init&gt;(ESConnection.java:78)
at nl.anchormen.sql4es.jdbc.ESDriver.connect(ESDriver.java:46)
at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:167)
... 7 more
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.directExecutor()Ljava/util/concurrent/Executor;
at org.elasticsearch.threadpool.ThreadPool.&lt;init&gt;(ThreadPool.java:190)
at org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:131)
at nl.anchormen.sql4es.jdbc.ESConnection.buildClient(ESConnection.java:108)
... 11 more

mysql Load Balancing jdbc url style

Hi, many thanks.

I wanted a MySQL-style load-balancing JDBC URL setting, so I modified the source code:

-- ESDriver.java
package nl.anchormen.sql4es.jdbc;

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.nio.charset.Charset;
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import nl.anchormen.sql4es.model.HostPortPair;
import nl.anchormen.sql4es.model.Utils;

/**
 * Basic {@link Driver} implementation used to get {@link ESConnection}.
 *
 * @author cversloot
 */
public class ESDriver implements Driver{

    private static final Logger logger = LoggerFactory.getLogger(ESDriver.class.getName());

    private static final String ES_HOSTS = "es.hosts";
    private static final String ES_INDEX = "es.index";

    /**
     * Register this driver with the driver manager
     */
    static{
        try {
            DriverManager.registerDriver(new ESDriver());
        } catch (SQLException sqle) {
            logger.error("Unable to register Driver", sqle);
        }
    }

// @OverRide
// public Connection connect(String url, Properties info) throws SQLException {
// Object[] conInfo = parseURL(url, info);
// String host = (String)conInfo[0];
// int port = (int)conInfo[1];
// String index = (String)conInfo[2];
// Properties props = (Properties)conInfo[3];
// return new ESConnection(host, port, index, props);
// }

@Override
public Connection connect(String url, Properties info) throws SQLException {
    Properties properties = parseURL(url, info);

    String hosts = (String)properties.getProperty(ES_HOSTS);
    String index = (String)properties.getProperty(ES_INDEX);
    ConnectionInfo connInfo = new ConnectionInfo(hosts, index, properties);
    return new ESConnection(connInfo);
}

// /**
// * Parses the url and returns information required to create a connection. Properties
// * in the url are added to the provided properties and returned in the object array
// * @param url
// * @param info
// * @return {String:host, int:port, String:index, Properties:info}
// * @throws SQLException
// /
// private Object[] parseURL(String url, Properties info) throws SQLException{
// if(!acceptsURL2(url))
// throw new SQLException("Invalid url");
// try {
// URI uri = new URI(url.substring(12));
// String host = uri.getHost();
// int port = (uri.getPort() < 0 ? Utils.PORT : uri.getPort());
// String index = uri.getPath().length() <= 1 ? null : uri.getPath().split("/")[1];
// Properties props = Utils.defaultProps();
// if(info != null) {
// props.putAll(info);
// }
// info = props;
// if(uri.getQuery() != null)
// for(String keyValue : uri.getQuery().split("&")){
// String[] parts = keyValue.split("=");
// if(parts.length > 1) info.setProperty(parts[0].trim(), parts[1].trim());
// else info.setProperty(parts[0], "");
// }
// return new Object[]{host, port, index, info};
// } catch (URISyntaxException e) {
// throw new SQLException("Unable to parse URL. Pleas use '"+Utils.PREFIX+"//host:port/schema?{0,1}(param=value&)*'", e);
// }catch(ArrayIndexOutOfBoundsException e){
// throw new SQLException("No shema (index) specified. Pleas use '"+Utils.PREFIX+"//host:port/schema?{0,1}(param=value&)*'");
// }catch(Exception e){
// throw new SQLException("Unable to connect to database due to: "+e.getClass().getName(), e);
// }
// }

private Properties parseURL(String url, Properties defaults) throws SQLException 
{
    if(url == null || !url.toLowerCase().startsWith(Utils.PREFIX)) 
        return null;

    Properties urlProps = (defaults != null) ? new Properties(defaults) : new Properties();

    int beginningOfSlashes = url.indexOf("//");
    int questionIndex = url.indexOf("?");
    String queryString = (questionIndex > 1)? url.substring(questionIndex + 1).trim(): "";
    //jdbc:sql4es remove
    url = (questionIndex > 0)? 
            url.substring(beginningOfSlashes + 2, questionIndex) 
            : url.substring(beginningOfSlashes + 2);

    //  host:port,host2,host3:port/index?cluster.name=clustername
    String[] split = url.split("/", 3);
    if(split.length > 2)
    {
        throw new SQLException("Unable to parse URL. Pleas use '"+Utils.PREFIX+"//host:port[,host:port]*/schema?(param=value&)*'");
    }else 
    {
        if(split.length == 1 // host:port,host2,host3:port?cluster.name=clustername
                || split[1].trim().isEmpty()) //host:port,host2,host3:port/?cluster.name=clustername
        {
            // host:port,host2,host3:port?cluster.name=clustername
            throw new SQLException("No shema (index) specified. Pleas use '"+Utils.PREFIX+"//host:port[,host:port]*/schema?(param=value&)*'");
        }
    }
    String hostOrPortPair = split[0];
    // add property : es.hosts
    if(!urlProps.containsKey(ES_HOSTS))
    {
        urlProps.setProperty(ES_HOSTS, hostOrPortPair);
    }else
    {
        String esHosts = urlProps.getProperty(ES_HOSTS).trim();
        if(esHosts.isEmpty())
        {
            urlProps.setProperty(ES_HOSTS, hostOrPortPair);
        }else
        {
            urlProps.setProperty(ES_HOSTS, esHosts+","+hostOrPortPair);
        }
    }

    //add property : es.index
    String index = split[1];
    urlProps.setProperty(ES_INDEX, index);
    if(!queryString.isEmpty())
    {
        String[] querys = queryString.split("&");
        for (String pair : querys)
        {
            int idx = pair.indexOf("=");
            try
            {
                urlProps.put(URLDecoder.decode(pair.substring(0, idx), "UTF-8"), URLDecoder.decode(pair.substring(idx + 1), "UTF-8"));
            } catch (UnsupportedEncodingException e)
            {
                try
                {
                    urlProps.put(URLDecoder.decode(pair.substring(0, idx), Charset.defaultCharset().name()), 
                            URLDecoder.decode(pair.substring(idx + 1), Charset.defaultCharset().name()));
                } catch (UnsupportedEncodingException e1)
                {
                    throw new SQLException("Unable to connect to database due to: "+e1.getClass().getName(), e1);
                }
            }   
        }
    }

    return urlProps;
}

@Override
public boolean acceptsURL(String url) throws SQLException 
{
    return (parseURL(url, null) != null);
}

@Override
public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException {
    Properties props = (Properties)parseURL(url, info);
    Properties defaultProps = Utils.defaultProps();

    DriverPropertyInfo[] result = new DriverPropertyInfo[defaultProps.size() + 3];
    result[0] = new DriverPropertyInfo(ES_HOSTS, (String) props.get(ES_HOSTS));
    result[0].required = true;
    result[0].description = "elasticsearch transport address list";
    result[1] = new DriverPropertyInfo(ES_INDEX, (String) props.get(ES_INDEX));
    result[1].required = true;
    result[1].description = "elasticsearch index name";

    result[2] = new DriverPropertyInfo("cluster.name", (String) props.get("cluster.name"));
    result[2].required = false;
    result[2].description = "If your clustername is not 'elasticsearch' you should specify the clustername";

    int index = 3;
    for(Object key : defaultProps.keySet()){
        // fall back to the default value when the parsed URL did not set this property
        Object value = props.containsKey(key) ? props.get(key) : defaultProps.get(key);
        result[index] = new DriverPropertyInfo((String)key, value.toString());
        index++;
    }
    return result;
}

@Override
public int getMajorVersion() {
    return Utils.ES_MAJOR_VERSION;
}

@Override
public int getMinorVersion() {
    return Utils.ES_MINOR_VERSION;
}

@Override
public boolean jdbcCompliant() {
    return false;
}

@Override
public java.util.logging.Logger getParentLogger() throws SQLFeatureNotSupportedException {
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

/**
 * The Class Connection Info.
 */
public static class ConnectionInfo
{
    private List<HostPortPair> hosts = new ArrayList<HostPortPair>(3);

    private String index;

    private Properties properties;

    public ConnectionInfo(String host, int port, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        this.addHost(new HostPortPair(host, port));
    }

    public ConnectionInfo(String hostAndPortPairs, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        this.addHost(hostAndPortPairs);
    }

    public ConnectionInfo(HostPortPair hostPortPair, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        this.addHost(hostPortPair);
    }

    public ConnectionInfo(List<HostPortPair> hostPortPairs, String index, Properties properties) throws SQLException
    {
        this(index, properties);
        for (HostPortPair hostPortPair : hostPortPairs)
        {
            this.addHost(hostPortPair);
        }
    }

    private ConnectionInfo(String index, Properties properties) throws SQLException
    {
        this.setIndex(index);
        String esHosts = properties.getProperty(ES_HOSTS);
        if(esHosts != null)
        {
            this.addHost(esHosts);
        }
        this.setProperties(properties);
    }

    public List<HostPortPair> getHosts()
    {
        return hosts;
    }


    public boolean addHost(HostPortPair hostPortPair)
    {
        for (HostPortPair host : hosts)
        {
            if(host.equals(hostPortPair))
            {
                return false;
            }
        }
        this.hosts.add(hostPortPair);
        return true;
    }

    private void addHost(String hostAndPortPairs) throws SQLException
    {
        for (String hostPort : hostAndPortPairs.split(","))
        {
            String[] _hostPort = hostPort.split(":");
            String host = _hostPort[0].trim();
            try
            {
                HostPortPair hostPair = (_hostPort.length > 1) ? new HostPortPair(host, Integer.parseInt(_hostPort[1].trim())) : new HostPortPair(host);
                this.addHost(hostPair);
            } catch (NumberFormatException e)
            {
                throw new SQLException("Invalid port number in '" + hostPort + "'", e);
            }
        }
    }

    public String getIndex()
    {
        return index;
    }

    /**
     * @param index the index to set
     * @throws SQLException 
     */
    public void setIndex(String index) throws SQLException
    {
        if(index == null || index.isEmpty())
        {
            throw new SQLException("Invalid URL, no index specified");
        }
        this.index = index;
    }

    public Properties getProperties()
    {
        return properties;
    }

    public void setProperties(Properties properties)
    {
        if(this.properties == null)
        {
            this.properties = new Properties();
        }
        this.properties.putAll(properties);
    }

    public String toString()
    {
        StringBuilder tmp = new StringBuilder();
        tmp.append(Utils.PREFIX);
        tmp.append("//");
        for (HostPortPair hostPortPair : hosts)
        {
            tmp.append(hostPortPair.toString());
            tmp.append(',');
        }
        tmp.delete(tmp.length()-1, tmp.length());
        tmp.append('/');
        tmp.append(index);
        if(properties.size() > 0)
        {
            tmp.append('?');
            for (Object key : properties.keySet())
            {
                tmp.append(key);
                tmp.append('=');
                tmp.append(properties.getProperty((String) key));
                tmp.append('&');
            }
            tmp.delete(tmp.length()-1, tmp.length());
        }
        return tmp.toString();
    }
}

}

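The host-list portion of the URL format above (`host:port[,host:port]*`) can be sketched in isolation. A minimal, self-contained approximation — the class and method names are hypothetical, not part of the driver, and 9300 is assumed as the default transport port (matching the README's default):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UrlSketch {
    /**
     * Splits "host:port[,host:port]*" into a host -> port map,
     * falling back to 9300 when no port is given (hypothetical helper).
     */
    static Map<String, Integer> parseHosts(String hostList) {
        Map<String, Integer> hosts = new LinkedHashMap<>();
        for (String pair : hostList.split(",")) {
            String[] parts = pair.split(":");
            int port = (parts.length > 1) ? Integer.parseInt(parts[1].trim()) : 9300;
            hosts.put(parts[0].trim(), port);
        }
        return hosts;
    }

    public static void main(String[] args) {
        // mirrors the expected URL shape: jdbc:sql4es://host:port[,host:port]*/index?params
        System.out.println(parseHosts("localhost:9300,otherhost")); // {localhost=9300, otherhost=9300}
    }
}
```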
---- ESConnection.java
package nl.anchormen.sql4es.jdbc;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.sql.Array;
import java.sql.Blob;
import java.sql.CallableStatement;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.NClob;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLClientInfoException;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.SQLWarning;
import java.sql.SQLXML;
import java.sql.Savepoint;
import java.sql.Statement;
import java.sql.Struct;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.Executor;

import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.test.ESIntegTestCase;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import nl.anchormen.sql4es.ESDatabaseMetaData;
import nl.anchormen.sql4es.jdbc.ESDriver.ConnectionInfo;
import nl.anchormen.sql4es.model.Heading;
import nl.anchormen.sql4es.model.HostPortPair;
import nl.anchormen.sql4es.model.Utils;

/**
 * A {@link Connection} implementation effectively wrapping an Elasticsearch
 * {@link Client}.
 * 
 * @author cversloot
 */
public class ESConnection implements Connection
{

private ConnectionInfo connInfo;

private Client client;
private boolean active = true;
private int timeout = Integer.MAX_VALUE;

private final Logger logger = LoggerFactory.getLogger(this.getClass());
private boolean autoCommit = false;
private boolean readOnly = true;
private List<ESStatement> statements = new ArrayList<ESStatement>();


/**
 * Builds the es {@link Client} using the provided parameters.
 * 
 * @param host the Elasticsearch host to connect to
 * @param port the transport port to connect to
 * @param index the index to activate for this connection
 * @param props
 *            the properties will all be copied to the Settings.Builder used
 *            to create the Client
 * @throws SQLException if the client cannot be built
 */
public ESConnection(String host, int port, String index, Properties props) throws SQLException
{
    this(new ConnectionInfo(host, port, index, props));
}

public ESConnection(ConnectionInfo connectionInfo) throws SQLException
{
    this.connInfo = connectionInfo;
    this.client = buildClient();
    try
    {
        this.getTypeMap(); // loads types into properties
    } catch (Exception e)
    {
        throw new SQLException("Unable to connect to specified elasticsearch host(s)", e);
    }
}

private Client buildClient() throws SQLException
{
    Properties props = connInfo.getProperties();
    if (props.containsKey("test"))
    { // used for integration tests
        return ESIntegTestCase.client();
    } else
    {
        try
        {
            Settings.Builder settingsBuilder = Settings.settingsBuilder();
            for (Object key : props.keySet())
            {
                settingsBuilder.put(key, props.get(key));
            }
            Settings settings = settingsBuilder.build();
            TransportClient client = TransportClient.builder().settings(settings).build();

            List<HostPortPair> hosts = connInfo.getHosts();
            for (HostPortPair host : hosts)
            {
                client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host.getHost()), host.getPort()));
                logger.info("Adding ES host: " + host);
            }

            // check if index exists
            if (connInfo.getIndex() != null)
            {
                boolean indexExists = client.admin().indices().exists(new IndicesExistsRequest(connInfo.getIndex())).actionGet()
                        .isExists();
                if (!indexExists)
                    throw new SQLException("Index or Alias '" + connInfo.getIndex() + "' does not exist");
            }
            return client;
        } catch (UnknownHostException e)
        {
            throw new SQLException("Unable to connect to " + connInfo.getHosts(), e);
        } catch (Throwable t)
        {
            throw new SQLException("Unable to connect to database", t);
        }
    }
}

public Client getClient()
{
    return this.client;
}

@Override
public <T> T unwrap(Class<T> iface) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public boolean isWrapperFor(Class<?> iface) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Statement createStatement() throws SQLException
{
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    return new ESStatement(this);
}

@Override
public PreparedStatement prepareStatement(String sql) throws SQLException
{
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    return new ESPreparedStatement(this, sql);
}

@Override
public CallableStatement prepareCall(String sql) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Statement createStatement(int resultSetType, int resultSetConcurrency) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    return new ESStatement(this);
}

@Override
public String nativeSQL(String sql) throws SQLException
{
    return sql;
}

@Override
public void setAutoCommit(boolean autoCommit) throws SQLException
{
    this.autoCommit = autoCommit;
}

@Override
public boolean getAutoCommit() throws SQLException
{
    return autoCommit;
}

@Override
public void commit() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void rollback() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void close() throws SQLException
{
    if (isClosed())
        return;
    for (ESStatement st : this.statements)
        st.close();
    statements.clear();
    client.close();
    this.active = false;
}

@Override
public boolean isClosed() throws SQLException
{
    return !active;
}

@Override
public DatabaseMetaData getMetaData() throws SQLException
{
    HostPortPair hostPortPair = this.connInfo.getHosts().get(0);
    return new ESDatabaseMetaData(hostPortPair.getHost(), hostPortPair.getPort(), client, this.getClientInfo(), this);

}

@Override
public void setReadOnly(boolean readOnly) throws SQLException
{
    this.readOnly = readOnly;
}

@Override
public boolean isReadOnly() throws SQLException
{
    return readOnly;
}

@Override
public void setCatalog(String catalog) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public String getCatalog() throws SQLException
{
    return null;
}

@Override
public void setTransactionIsolation(int level) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public int getTransactionIsolation() throws SQLException
{
    return ESConnection.TRANSACTION_NONE;
}

@Override
public SQLWarning getWarnings() throws SQLException
{
    return null;
}

@Override
public void clearWarnings() throws SQLException
{
    // TODO
}

@Override
public PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency)
        throws SQLException
{
    // TODO use params?
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    return new ESPreparedStatement(this, sql);
}

@Override
public CallableStatement prepareCall(String sql, int resultSetType, int resultSetConcurrency) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Map<String, Class<?>> getTypeMap() throws SQLException
{
    ResultSet rs = getMetaData().getColumns(null, null, null, null);
    Map<String, Map<String, Integer>> tableColumnInfo = new HashMap<String, Map<String, Integer>>();
    while (rs.next())
    {
        String table = rs.getString(3);
        String col = rs.getString(4);
        int type = rs.getInt(5);
        if (!tableColumnInfo.containsKey(table))
            tableColumnInfo.put(table, new HashMap<String, Integer>());
        tableColumnInfo.get(table).put(col, type);
    }
    this.connInfo.getProperties()
        .put(Utils.PROP_TABLE_COLUMN_MAP, tableColumnInfo);

    Map<String, Class<?>> result = new HashMap<String, Class<?>>();
    for (String type : tableColumnInfo.keySet())
    {
        for (String field : tableColumnInfo.get(type).keySet())
        {
            result.put(type + "." + field, Heading.getClassForTypeId(tableColumnInfo.get(type).get(field)));
        }
    }
    return result;
}

@Override
public void setTypeMap(Map<String, Class<?>> map) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void setHoldability(int holdability) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public int getHoldability() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Savepoint setSavepoint() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Savepoint setSavepoint(String name) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void rollback(Savepoint savepoint) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void releaseSavepoint(Savepoint savepoint) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Statement createStatement(int resultSetType, int resultSetConcurrency, int resultSetHoldability)
        throws SQLException
{
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");
    ESStatement st = new ESStatement(this);
    statements.add(st);
    return st;
}

@Override
public PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency,
        int resultSetHoldability) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public CallableStatement prepareCall(String sql, int resultSetType, int resultSetConcurrency,
        int resultSetHoldability) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public PreparedStatement prepareStatement(String sql, int autoGeneratedKeys) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public PreparedStatement prepareStatement(String sql, int[] columnIndexes) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public PreparedStatement prepareStatement(String sql, String[] columnNames) throws SQLException
{
    // TODO use params
    if (this.client == null)
    {
        throw new SQLException("Unable to connect on specified schema '" + this.connInfo.getIndex() + "'");
    }
    if (isClosed())
        throw new SQLException("Connection closed");

    ESPreparedStatement st = new ESPreparedStatement(this, sql);
    statements.add(st);
    return st;
}

@Override
public Clob createClob() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public Blob createBlob() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public NClob createNClob() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public SQLXML createSQLXML() throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public boolean isValid(int timeout) throws SQLException
{
    return active;
}

@Override
public void setClientInfo(String name, String value) throws SQLClientInfoException
{
    this.connInfo.getProperties().setProperty(name, value);
}

@Override
public void setClientInfo(Properties properties) throws SQLClientInfoException
{
    this.connInfo.setProperties(properties);
}

@Override
public String getClientInfo(String name) throws SQLException
{
    return this.connInfo.getProperties().getProperty(name);
}

@Override
public Properties getClientInfo() throws SQLException
{
    return this.connInfo.getProperties();
}

@Override
public Array createArrayOf(String typeName, Object[] elements) throws SQLException
{
    return null;
}

@Override
public Struct createStruct(String typeName, Object[] attributes) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void setSchema(String schema) throws SQLException
{
    boolean indexExists = client.admin().indices().exists(new IndicesExistsRequest(schema)).actionGet().isExists();
    if (!indexExists)
        throw new SQLException("Index '" + schema + "' does not exist");
    this.connInfo.setIndex(schema);
}

@Override
public String getSchema() throws SQLException
{
    return this.connInfo.getIndex();
}

@Override
public void abort(Executor executor) throws SQLException
{
    throw new SQLFeatureNotSupportedException(Utils.getLoggingInfo());
}

@Override
public void setNetworkTimeout(Executor executor, int milliseconds) throws SQLException
{
    this.timeout = milliseconds;
}

@Override
public int getNetworkTimeout() throws SQLException
{
    return timeout;
}

}

---- HostPortPair.java
package nl.anchormen.sql4es.model;

import java.sql.SQLException;

public class HostPortPair
{
private String host;
private int port;

public HostPortPair(String host) throws SQLException
{
    this(host, Utils.PORT);
}

public HostPortPair(String host, int port) throws SQLException
{
    if(host == null || host.isEmpty() || port <= 0)
        throw new SQLException("Invalid URL, no host specified");
    this.host = host;
    this.port = port;
}

/**
 * @return the host
 */
public String getHost()
{
    return host;
}
/**
 * @param host the host to set
 */
public void setHost(String host)
{
    this.host = host;
}

/**
 * @return the port
 */
public int getPort()
{
    return port;
}
/**
 * @param port the port to set
 */
public void setPort(int port)
{
    this.port = port;
}

@Override
public String toString()
{
    return host+":"+port;
}

@Override
public boolean equals(Object o)
{
    if (o == this) return true;
    if (!(o instanceof HostPortPair)) return false;
    HostPortPair other = (HostPortPair) o;
    if (this.port != other.port) return false;
    // hosts must match for the pairs to be equal
    return this.host.equals(other.host);
}

@Override
public int hashCode()
{
    // keep hashCode consistent with equals
    return 31 * host.hashCode() + port;
}

}
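A host/port pair should compare equal exactly when both host and port match, and equal pairs must share a hash code. A minimal standalone sketch of that contract (the class name is hypothetical; the SQLException plumbing of the real class is omitted):

```java
import java.util.Objects;

public class PairSketch {
    final String host;
    final int port;

    PairSketch(String host, int port) {
        this.host = host;
        this.port = port;
    }

    @Override
    public boolean equals(Object o) {
        if (o == this) return true;
        if (!(o instanceof PairSketch)) return false;
        PairSketch other = (PairSketch) o;
        // equal only when BOTH host and port match
        return this.port == other.port && this.host.equals(other.host);
    }

    @Override
    public int hashCode() {
        return Objects.hash(host, port);
    }

    public static void main(String[] args) {
        System.out.println(new PairSketch("a", 9300).equals(new PairSketch("a", 9300))); // true
        System.out.println(new PairSketch("a", 9300).equals(new PairSketch("b", 9300))); // false
    }
}
```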

Need cardinality aggregation to be supported

Please help to implement cardinality-aggregation. Thanks.

Finding Distinct Counts
The first approximate aggregation provided by Elasticsearch is the cardinality metric. This provides the cardinality of a field, also called a distinct or unique count. You may be familiar with the SQL version:

SELECT COUNT(DISTINCT color)
FROM cars

https://www.elastic.co/guide/en/elasticsearch/guide/current/cardinality.html
https://www.elastic.co/guide/en/elasticsearch/reference/2.2/search-aggregations-metrics-cardinality-aggregation.html
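For reference, the linked docs express that SQL as a cardinality metric in the query DSL, roughly as follows (the field and aggregation names are illustrative):

```json
{
  "size": 0,
  "aggs": {
    "distinct_colors": {
      "cardinality": { "field": "color" }
    }
  }
}
```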

Incorrect type for nested fields

In our index we have a field called "system" that has two nested properties - host (String) and sampletime (Date).
Using WorkbenchJ, the following query works OK:

SELECT system.host, system.sampletime
FROM "index"."data-generator-poc-async"."arm" limit 1

But the following queries fail with an error "Value in column '82' is not a Date but is String" (column 82 is system.sampletime):

SELECT system
FROM "index"."data-generator-poc-async"."arm" limit 1

SELECT *
FROM "index"."data-generator-poc-async"."arm" limit 1
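A mapping along these lines would reproduce the reported setup; the exact mapping is an assumption based on the field names and types in the report (ES 2.x syntax):

```json
{
  "mappings": {
    "arm": {
      "properties": {
        "system": {
          "properties": {
            "host":       { "type": "string" },
            "sampletime": { "type": "date" }
          }
        }
      }
    }
  }
}
```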

Memory consumption

Hi,

It seems there is a memory issue when doing a select from an index (type) without a where clause filter or a limit.
Let's say we have index/type foo/bar and we execute the statement "select * from bar".
If the index is large enough, the driver's memory use explodes, no matter what fetch.size is set to.
Is there any way to limit this? Isn't the fetch.size parameter supposed to stop it from growing that much?

BTW, this should be very easy to reproduce, even on the twitter example.
(Of course, the more fields are returned, the more pressing this issue becomes.)
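Until the driver itself caps unbounded scans, one client-side workaround is to force an explicit LIMIT onto queries that lack one. A minimal sketch — the class, method, and the 10000 cap are hypothetical, not part of sql4es:

```java
public class LimitGuard {
    /** Appends an explicit LIMIT to a query that has none (hypothetical workaround). */
    static String withLimit(String sql, int max) {
        String trimmed = sql.trim();
        if (trimmed.toLowerCase().contains(" limit ")) {
            return trimmed; // caller already bounded the result set
        }
        return trimmed + " LIMIT " + max;
    }

    public static void main(String[] args) {
        System.out.println(withLimit("select * from bar", 10000)); // select * from bar LIMIT 10000
    }
}
```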

some problems when integrating into pentaho and saiku

**Hi Corné,
First of all, thank you for giving us such a good tool. I have been looking for a JDBC driver for Elasticsearch, and sql4es is exactly what I was searching for. But I have run into some problems; could you help me with them? Maybe this is not a defect in sql4es. When I use sql4es in SQLWorkbench it works well.

  1. I followed the article [https://www.anchormen.nl/elasticsearch-as-a-pentaho-source-through-sql4es/], but it threw exceptions after I opened the analysis report and selected some fields. It seems that Pentaho knows the field names but cannot get the field data from Elasticsearch. I use sql4es 0.8.2.2, ES 2.2.1 and the Pentaho business trial 6.1.0. Did you change any configuration when you used sql4es in Pentaho? Some screenshots:
    https://github.com/borderlayout/test/blob/master/test1.png
    https://github.com/borderlayout/test/blob/master/test2.png
    https://github.com/borderlayout/test/blob/master/test3.png
    https://github.com/borderlayout/test/blob/master/test4.png
    https://github.com/borderlayout/test/blob/master/test5.png
  2. I also tried to use sql4es in the Saiku community edition (Saiku CE) [http://community.meteorite.bi/], but I encountered another problem. Saiku CE 3.7.4 uses Apache Jackrabbit 2.8.0, which depends on lucene-core 3.6.0, while sql4es 0.8.2.3 uses Elasticsearch 2.3.2, which depends on Lucene 5.5.0.
    The Lucene versions conflict and Saiku fails to run. Could you give me some advice on this?**

Gary Wu
email: [email protected]

the log of pentaho when I select column:
11:34:37,575 ERROR [c] Ticket Number: 1464089677575
11:34:37,578 ERROR [c] Exception in AJAX handler.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: No result found for this query
at com.pentaho.analyzer.report.ReportRequestService.get(SourceFile:219)
at com.pentaho.analyzer.content.controller.n.a(SourceFile:168)
at com.pentaho.analyzer.content.controller.b.b(SourceFile:202)
at com.pentaho.analyzer.content.AnalyzerContentGenerator.a(SourceFile:311)
at com.pentaho.analyzer.content.AnalyzerContentGenerator.createContent(SourceFile:157)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutput.generateContent(GeneratorStreamingOutput.java:236)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutput.write(GeneratorStreamingOutput.java:163)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutputProvider.writeTo(GeneratorStreamingOutputProvider.java:54)
at org.pentaho.platform.web.http.api.resources.GeneratorStreamingOutputProvider.writeTo(GeneratorStreamingOutputProvider.java:33)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1479)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1391)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1381)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
at org.pentaho.platform.web.servlet.JAXRSServlet.service(JAXRSServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.pentaho.platform.web.servlet.JAXRSServlet.service(JAXRSServlet.java:114)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.PentahoWebContextFilter.doFilter(PentahoWebContextFilter.java:185)
at com.pentaho.platform.web.http.filters.PentahoEnterpriseWebContextFilter.doFilter(SourceFile:72)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.PentahoRequestContextFilter.doFilter(PentahoRequestContextFilter.java:87)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:399)
at org.springframework.security.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:109)
at org.springframework.security.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.ui.ExceptionTranslationFilter.doFilterHttp(ExceptionTranslationFilter.java:101)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.providers.anonymous.AnonymousProcessingFilter.doFilterHttp(AnonymousProcessingFilter.java:105)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.pentaho.platform.web.http.security.RequestParameterAuthenticationFilter.doFilter(RequestParameterAuthenticationFilter.java:191)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.ui.basicauth.BasicProcessingFilter.doFilterHttp(BasicProcessingFilter.java:174)
at org.pentaho.platform.web.http.security.PentahoBasicProcessingFilter.doFilterHttp(PentahoBasicProcessingFilter.java:115)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.context.HttpSessionContextIntegrationFilter.doFilterHttp(HttpSessionContextIntegrationFilter.java:235)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.pentaho.platform.web.http.filters.HttpSessionPentahoSessionIntegrationFilter.doFilter(HttpSessionPentahoSessionIntegrationFilter.java:263)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.wrapper.SecurityContextHolderAwareRequestFilter.doFilterHttp(SecurityContextHolderAwareRequestFilter.java:91)
at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53)
at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:411)
at org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:188)
at org.springframework.security.util.FilterToBeanProxy.doFilter(FilterToBeanProxy.java:99)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at com.pentaho.ui.servlet.SystemStatusFilter.doFilter(SourceFile:87)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:114)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.WebappRootForwardingFilter.doFilter(WebappRootForwardingFilter.java:70)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.pentaho.platform.web.http.filters.PentahoPathDecodingFilter.doFilter(PentahoPathDecodingFilter.java:34)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:522)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1095)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:672)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1502)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1458)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.sql.SQLException: No result found for this query
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at com.pentaho.analyzer.report.ReportRequestService.get(SourceFile:175)
... 86 more
Caused by: java.lang.RuntimeException: java.sql.SQLException: No result found for this query
at com.pentaho.analyzer.report.ReportManagerImpl.runReport(SourceFile:320)
at com.pentaho.analyzer.report.ReportManagerImpl.runReport(SourceFile:139)
at com.pentaho.analyzer.report.ReportRequestService$2.a(SourceFile:275)
at com.pentaho.analyzer.report.ReportRequestService$2.call(SourceFile:270)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.sql.SQLException: No result found for this query
at nl.anchormen.sql4es.ESQueryState.execute(ESQueryState.java:188)
at nl.anchormen.sql4es.ESQueryState.execute(ESQueryState.java:172)
at nl.anchormen.sql4es.jdbc.ESStatement.executeQuery(ESStatement.java:68)
at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
at mondrian.rolap.SqlStatement.execute(SqlStatement.java:199)
at mondrian.rolap.RolapUtil.executeQuery(RolapUtil.java:350)
at mondrian.rolap.SqlTupleReader.prepareTuples(SqlTupleReader.java:412)
at mondrian.rolap.SqlTupleReader.readMembers(SqlTupleReader.java:529)
at mondrian.rolap.SqlMemberSource.getMembersInLevel(SqlMemberSource.java:535)
at mondrian.rolap.SmartMemberReader.getMembersInLevel(SmartMemberReader.java:147)
at mondrian.rolap.RolapCubeHierarchy$CacheRolapCubeHierarchyMemberReader.getMembersInLevel(RolapCubeHierarchy.java:724)
at mondrian.rolap.RolapSchemaReader.getLevelMembers(RolapSchemaReader.java:579)
at mondrian.rolap.RolapSchemaReader.getLevelMembers(RolapSchemaReader.java:565)
at mondrian.rolap.RolapCube$RolapCubeSchemaReader.getLevelMembers(RolapCube.java:2829)
at mondrian.olap.DelegatingSchemaReader.getLevelMembers(DelegatingSchemaReader.java:195)
at mondrian.olap.Query$QuerySchemaReader.getLevelMembers(Query.java:1535)
at mondrian.olap.fun.FunUtil.getNonEmptyLevelMembers(FunUtil.java:2235)
at mondrian.olap.fun.FunUtil.levelMembers(FunUtil.java:2243)
at mondrian.olap.fun.LevelMembersFunDef$1.evaluateList(LevelMembersFunDef.java:37)
at mondrian.calc.impl.AbstractListCalc.evaluate(AbstractListCalc.java:65)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.calc.impl.AbstractIterCalc.evaluate(AbstractIterCalc.java:50)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.calc.impl.AbstractIterCalc.evaluate(AbstractIterCalc.java:50)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.olap.fun.GenerateFunDef$GenerateListCalcImpl.evaluateList(GenerateFunDef.java:113)
at mondrian.calc.impl.AbstractListCalc.evaluate(AbstractListCalc.java:65)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.olap.fun.OrderFunDef$CalcImpl.evaluateList(OrderFunDef.java:202)
at mondrian.calc.impl.AbstractListCalc.evaluate(AbstractListCalc.java:65)
at mondrian.rolap.RolapResult.evaluateExp(RolapResult.java:1089)
at mondrian.rolap.RolapNamedSetEvaluator.ensureList(RolapNamedSetEvaluator.java:98)
at mondrian.rolap.RolapNamedSetEvaluator.evaluateTupleIterable(RolapNamedSetEvaluator.java:66)
at mondrian.mdx.NamedSetExpr$1.evaluateIterable(NamedSetExpr.java:98)
at mondrian.rolap.RolapResult.executeAxis(RolapResult.java:976)
at mondrian.rolap.RolapResult.evalLoad(RolapResult.java:801)
at mondrian.rolap.RolapResult.loadMembers(RolapResult.java:757)
at mondrian.rolap.RolapResult.<init>(RolapResult.java:401)
at mondrian.rolap.RolapConnection.executeInternal(RolapConnection.java:672)
at mondrian.rolap.RolapConnection.access$000(RolapConnection.java:52)
at mondrian.rolap.RolapConnection$1.call(RolapConnection.java:623)
at mondrian.rolap.RolapConnection$1.call(RolapConnection.java:621)
... 4 more
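The root cause at the bottom of the trace is a java.sql.SQLException ("No result found for this query") thrown by the driver's ESQueryState.execute and propagated up through Mondrian and the Pentaho report service. A minimal defensive sketch of executing a query through the driver and catching that failure is shown below; the index name, query, and cluster name are placeholders, and the ClassNotFoundException branch covers the case where the sql4es jar is not on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Sql4esQueryExample {
    public static void main(String[] args) {
        try {
            // Register the driver as described in the Usage section
            Class.forName("nl.anchormen.sql4es.jdbc.ESDriver");
            try (Connection con = DriverManager.getConnection(
                         "jdbc:sql4es://localhost:9300/myidx?cluster.name=your-cluster-name");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM mytype LIMIT 10")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            } catch (SQLException e) {
                // Covers connection failures as well as the
                // "No result found for this query" error seen in the trace above
                System.out.println("query failed: " + e.getMessage());
            }
        } catch (ClassNotFoundException e) {
            System.out.println("sql4es driver not on classpath");
        }
    }
}
```

Catching the SQLException at the call site (instead of letting it bubble up through the reporting layer, as in the trace) lets the application substitute an empty result or a user-facing message.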
