ClickHouse driver for Metabase


About

ClickHouse (GitHub) database driver for the Metabase (GitHub) business intelligence front-end.

Compatibility with ClickHouse

The driver aims to support the current stable and LTS releases (see the related docs).

After 1.50.1:

| ClickHouse version | Supported?  |
|--------------------|-------------|
| 23.8+              | ✔           |
| 23.6 - 23.7        | Best effort |

1.50.0 and earlier:

| ClickHouse version | Supported?  |
|--------------------|-------------|
| 23.8+              | ✔           |
| 23.3 - 23.7        | Best effort |

For the connection impersonation feature, the minimum required ClickHouse version is 24.4; otherwise, the driver disables the feature.

The CSV Uploads feature currently works only with ClickHouse Cloud (see this issue for more details).

Installation

Run using Metabase Jar

  1. Download a fairly recent Metabase binary release (jar file) from the Metabase distribution page.
  2. Download the ClickHouse driver jar from this repository's Releases page.
  3. Create a directory and copy the metabase.jar to it.
  4. In that directory, create a sub-directory called plugins.
  5. Copy the ClickHouse driver jar to the plugins directory.
  6. Make sure you are in the directory where your metabase.jar lives.
  7. Run MB_PLUGINS_DIR=./plugins java -jar metabase.jar (note: no semicolon after the variable assignment, so that it is passed to the java process).

For example (using Metabase v0.49.3 and ClickHouse driver 1.4.0):

export METABASE_VERSION=v0.49.3
export METABASE_CLICKHOUSE_DRIVER_VERSION=1.4.0

mkdir -p mb/plugins && cd mb
curl -o metabase.jar https://downloads.metabase.com/$METABASE_VERSION/metabase.jar
curl -L -o plugins/ch.jar https://github.com/ClickHouse/metabase-clickhouse-driver/releases/download/$METABASE_CLICKHOUSE_DRIVER_VERSION/clickhouse.metabase-driver.jar
MB_PLUGINS_DIR=./plugins java -jar metabase.jar

Run as a Docker container

Alternatively, if you don't want to run the Metabase jar directly, you can use a Docker image:

export METABASE_DOCKER_VERSION=v0.49.3
export METABASE_CLICKHOUSE_DRIVER_VERSION=1.4.0

mkdir -p mb/plugins && cd mb
curl -L -o plugins/ch.jar https://github.com/ClickHouse/metabase-clickhouse-driver/releases/download/$METABASE_CLICKHOUSE_DRIVER_VERSION/clickhouse.metabase-driver.jar
docker run -d -p 3000:3000 \
  --mount type=bind,source=$PWD/plugins/ch.jar,destination=/plugins/clickhouse.jar \
  metabase/metabase:$METABASE_DOCKER_VERSION

Choosing the Right Version

| Metabase Release | Driver Version |
|------------------|----------------|
| 0.33.x           | 0.6            |
| 0.34.x           | 0.7.0          |
| 0.35.x           | 0.7.1          |
| 0.37.3           | 0.7.3          |
| 0.38.1+          | 0.7.5          |
| 0.41.2           | 0.8.0          |
| 0.41.3.1         | 0.8.1          |
| 0.42.x           | 0.8.1          |
| 0.44.x           | 0.9.1          |
| 0.45.x           | 1.1.0          |
| 0.46.x           | 1.1.7          |
| 0.47.x           | 1.2.3          |
| 0.47.7+          | 1.2.5          |
| 0.48.x           | 1.3.4          |
| 0.49.x           | 1.4.0          |
| 0.49.14+         | 1.5.1          |
| 0.50.x           | 1.50.2         |

Starting with Metabase 0.50.0, the driver's releases follow a new naming convention, intended to reflect the Metabase version the driver is supposed to run on. For example, driver version 1.50.0 should be used with Metabase v0.50.x or Metabase EE 1.50.x only; it is not guaranteed to work with earlier or later versions of Metabase.

Creating a Metabase Docker image with ClickHouse driver

You can use the convenience script build_docker_image.sh, which takes three arguments: the Metabase version, the ClickHouse driver version, and the desired final Docker image tag.

./build_docker_image.sh v0.49.3 1.4.0 my-metabase-with-clickhouse:v0.0.1

where v0.49.3 is the Metabase version, 1.4.0 is the ClickHouse driver version, and my-metabase-with-clickhouse:v0.0.1 is the resulting image tag.

Then you should be able to run it:

docker run -d -p 3000:3000 --name my-metabase my-metabase-with-clickhouse:v0.0.1

or use it with Docker compose, for example:

version: '3.8'
services:
  clickhouse:
    image: 'clickhouse/clickhouse-server:24.3-alpine'
    container_name: 'metabase-clickhouse-server'
    ports:
      - '8123:8123'
      - '9000:9000'
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
  metabase:
    image: 'my-metabase-with-clickhouse:v0.0.1'
    container_name: 'metabase-with-clickhouse'
    environment:
      'MB_HTTP_TIMEOUT': '5000'
      # Replace with a timezone matching your ClickHouse or DateTime columns timezone
      'JAVA_TIMEZONE': 'UTC'
    ports:
      - '3000:3000'

Using certificates

In the "Advanced options", add the following to the "Additional JDBC connection string options" input:

sslrootcert=/path/to/ca.crt

where /path/to/ca.crt is the absolute path to the server CA certificate on the Metabase host or in the Docker container (depending on your deployment).

Make sure that you tick "Use a secure connection (SSL)" as well.

Operations

The driver should work fine for many use cases. Please consider the following items when running a Metabase instance with this driver:

  • Create a dedicated user for Metabase, whose profile has readonly set to 2.
  • Consider running the Metabase instance in the same time zone as your ClickHouse database; the more time zones involved, the more issues you are likely to encounter.
  • Compare the results of your queries with the results returned by clickhouse-client.
  • Metabase is a good tool for organizing questions, dashboards, etc., and for giving non-technical users a convenient way to explore data and share their results. The driver cannot support every ClickHouse-specific feature, e.g. array functions, but you are, of course, free to use native queries.
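The dedicated read-only user from the first item above can be created roughly like this (a minimal sketch; `metabase_user`, the password, and `mydb` are placeholders, and your settings-profile setup may differ):

```sql
-- Sketch: a dedicated Metabase user restricted to read-only queries.
-- "metabase_user", the password, and "mydb" are placeholders; adjust to your setup.
CREATE SETTINGS PROFILE IF NOT EXISTS metabase_readonly SETTINGS readonly = 2;
CREATE USER IF NOT EXISTS metabase_user
    IDENTIFIED WITH sha256_password BY 'change_me'
    SETTINGS PROFILE 'metabase_readonly';
GRANT SELECT ON mydb.* TO metabase_user;
```

readonly = 2 (rather than 1) still allows the session-level SET statements the JDBC driver issues, while blocking writes.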

Known limitations

  • As the underlying JDBC driver does not support columns with the AggregateFunction type, these columns are excluded from the table metadata and data-browser result sets to prevent sync or data-browsing errors.
  • If a past month/week/quarter/year filter over a DateTime64 column does not work as intended, this is likely due to a type conversion issue. See this report for more details. The issue was resolved as of ClickHouse 23.5.
  • If the introspected ClickHouse version is lower than 23.8, the driver will not use startsWithUTF8 and falls back to its non-UTF8 counterpart instead. The drawback of this compatibility mode is potentially incorrect filtering results when working with non-Latin strings. If your use case includes filtering by columns containing such strings and you experience these issues, consider upgrading your ClickHouse server to 23.8+.
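Although AggregateFunction columns are hidden from the metadata, their data remains reachable through a native query using the matching -Merge combinator (a sketch with hypothetical table and column names):

```sql
-- Sketch: reading an AggregateFunction(uniq, UInt64) column via a native query.
-- "daily_stats" and "visitors_state" are hypothetical names.
SELECT
    day,
    uniqMerge(visitors_state) AS unique_visitors
FROM daily_stats
GROUP BY day
ORDER BY day;
```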

Contributing

Check out our contributing guide.


metabase-clickhouse-driver's Issues

Run undockerized

Hi,
Is there any way to use the ClickHouse driver with an undockerized Metabase?

Filter Options for FixedString Fields

We currently offer the regular text/String filter options for FixedString fields. However, using some of these functions can break the query, for example "Contains", which results in a query such as:

SELECT count(*) AS `count` FROM `myanalytics`.`orders` WHERE (lowerUTF8(`myanalytics`.`orders`.`customer_organization_code`) like '%foo%')

The like function is not defined for FixedString columns: Illegal type FixedString(3) of argument of function like
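A common workaround for this class of error in a native query (a sketch reusing the column name from the example above, without the database qualifier) is to cast the FixedString column to String before applying string functions:

```sql
-- Sketch: filtering a FixedString column by casting it to String first,
-- since like/lowerUTF8 reject FixedString arguments on older servers.
SELECT count(*) AS count
FROM orders
WHERE lowerUTF8(CAST(customer_organization_code AS String)) LIKE '%foo%';
```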

Request error when filtering by field

Hi!
Thank you very much for your work. We have two kinds of tables: some are named in snake_case, others in dot.case. When you filter by a dot.case table, you get this error (part of the traceback):

Code: 47, e.displayText() = DB::Exception: Missing columns: 'replicated.backend.booking.issuing.success.date' while processing query: 'SELECT date, datetime, client_id, visit_id, agent, tag, ab_tag, token, order_number, email, account_id, phone, order_total, tickets_total, cross_total, payment_type, amount_insurance, amount_fly_sms, amount_refund, amount_coverage, amount_online_registration, amount_vip, departure_iata_code, arrival_iata_code, departure_date, return_date, passengers, adults, childs, infants, cabin_class, x_request_id, profit, bonus_use_amount, bonus_from_order_amount, marker, `segments.segment_index`, `segments.ticket_index`, `segments.gds_pnr`, `segments.airline_pnr`, `segments.part_index`, `segments.departure_code`, `segments.arrival_code`, `segments.departure_time`, `segments.arrival_time`, `segments.flight_time_mins`, `segments.operating_carrier`, `segments.marketing_carrier`, `segments.plating_carrier`, `segments.marketing_flight_number`, `segments.booking_class` FROM replicated.`backend.booking.issuing.success` WHERE (toDate(replicated.backend.booking.issuing.success.date) >= '2020-01-01') AND (toDate(replicated.backend.booking.issuing.success.date) <= '2020-01-31') LIMIT 100

You can see that the necessary backquotes around the table name are missing inside the toDate function. Is it possible to fix this problem, please?

distinct result is not correct

hi team,

I deduplicate data in two different ways: the first uses the distinct(id) method, the second uses a uniqExact(id) native query, but the results differ.

Is this a bug or not?

Filtering Dates with no Results

Hello!
I'm using Metabase v0.35.3 and driver 0.7.1.

I have a DATE column, with data in the table as shown in the picture below:

Screenshot from 2020-04-29 18-27-51

When I filter like this, it doesn't return values:

Screenshot from 2020-04-29 18-28-09
The generated SQL query is like this:

SELECT *
FROM `client`.`aggregated_stock`
WHERE (`date` >= parseDateTimeBestEffort('2020-04-26 00:00:00.000Z')
   AND `date` < parseDateTimeBestEffort('2020-04-27 00:00:00.000Z'))
ORDER BY `date` DESC
LIMIT 1048576

0 Results!

This really looks like a bug; I think it's because parseDateTimeBestEffort() produces a TIMESTAMP while the column is of DATE type. But with the interval as written in the query, that should not matter.

Other filters like BETWEEN don't work either. PREVIOUS, BEFORE, and CURRENT work!

@enqueue , do you know whats happening?

Broken Enum in Metabase 0.40.3.1

In Metabase 0.38.2 everything worked fine. Upgrading broke the display of Enum columns; other types work.

Driver version: 0.7.5

Full trace:

[7436e1b1-ee71-43ef-a33d-3514e7e96085] 2021-08-29T21:33:38+03:00 ERROR metabase.sync.analyze.query-results Error generating insights for column: {:name ua_type, :base_type :type/Enum}
clojure.lang.ExceptionInfo: Output of col->ResultColumnMetadata does not match schema: 
	   {:base_type (not ("Valid field data type (keyword or string)" :type/Enum))}  
 {:type :schema.core/error, :schema {:name java.lang.String, #schema.core.OptionalKey{:k :unit} (maybe (constrained (named (cond-pre Keyword Str) "Keyword or string") "Valid field datetime unit keyword or string")), #schema.core.OptionalKey{:k :id} (maybe (constrained Int "Integer greater than zero")), #schema.core.OptionalKey{:k :semantic_type} (maybe (pred "Valid field semantic or relation type (keyword or string)")), #schema.core.OptionalKey{:k :description} (maybe (constrained Str "Non-blank string")), Keyword Any, #schema.core.OptionalKey{:k :field_ref} (cond-pre (conditional clojure.core$partial/fn--5839 [(one (eq :aggregation) :aggregation) (one Int "aggregation-clause-index")] clojure.core$constantly/fn--5672 (recursive (var metabase.mbql.schema/Field*))) (pred "Field or aggregation reference as it comes in to the API")), #schema.core.OptionalKey{:k :fingerprint} (maybe {(optional-key :global) {(optional-key :distinct-count) Int, (optional-key :nil%) (maybe (constrained Num "Valid percentage between (inclusive) 0 and 1.")), Any Any}, (optional-key :type) (constrained {(optional-key :type/Number) {(optional-key :min) (maybe Num), (optional-key :max) (maybe Num), (optional-key :avg) (maybe Num), (optional-key :q1) (maybe Num), (optional-key :q3) (maybe Num), (optional-key :sd) (maybe Num), Any Any}, (optional-key :type/Text) {(optional-key :percent-json) (maybe (constrained Num "Valid percentage between (inclusive) 0 and 1.")), (optional-key :percent-url) (maybe (constrained Num "Valid percentage between (inclusive) 0 and 1.")), (optional-key :percent-email) (maybe (constrained Num "Valid percentage between (inclusive) 0 and 1.")), (optional-key :percent-state) (maybe (constrained Num "Valid percentage between (inclusive) 0 and 1.")), (optional-key :average-length) (maybe Num), Any Any}, (optional-key :type/DateTime) {(optional-key :earliest) (maybe Str), (optional-key :latest) (maybe Str), Any Any}, Any Any} "Type-specific fingerprint with exactly one key"), 
(optional-key :experimental) {Any Any}, Any Any}), :display_name java.lang.String, :base_type (pred "Valid field data type (keyword or string)")}, :value {:base_type :type/Enum, :display_name "ua_type", :name "ua_type"}, :error {:base_type (not ("Valid field data type (keyword or string)" :type/Enum))}}
	at metabase.sync.analyze.query_results$fn__49868$col__GT_ResultColumnMetadata__49873.invoke(query_results.clj:67)
	at metabase.sync.analyze.query_results$insights_rf$iter__49888__49892$fn__49893$fn__49894$fn__49895.invoke(query_results.clj:86)
	at metabase.sync.analyze.query_results$insights_rf$iter__49888__49892$fn__49893$fn__49894.invoke(query_results.clj:85)
	at metabase.sync.analyze.query_results$insights_rf$iter__49888__49892$fn__49893.invoke(query_results.clj:84)
	at clojure.lang.LazySeq.sval(LazySeq.java:42)
	at clojure.lang.LazySeq.seq(LazySeq.java:51)
	at clojure.lang.RT.seq(RT.java:535)
	at clojure.core$seq__5402.invokeStatic(core.clj:137)
	at clojure.core$seq__5402.invoke(core.clj:137)
	at metabase.sync.analyze.query_results$insights_rf$iter__49909__49913$fn__49914.invoke(query_results.clj:92)
	at clojure.lang.LazySeq.sval(LazySeq.java:42)
	at clojure.lang.LazySeq.seq(LazySeq.java:51)
	at clojure.lang.RT.seq(RT.java:535)
	at clojure.core$seq__5402.invokeStatic(core.clj:137)
	at clojure.core$apply.invokeStatic(core.clj:660)
	at clojure.core$apply.invoke(core.clj:660)
	at metabase.sync.analyze.query_results$insights_rf.invokeStatic(query_results.clj:92)
	at metabase.sync.analyze.query_results$insights_rf.invoke(query_results.clj:79)
	at metabase.query_processor.middleware.results_metadata$insights_xform.invokeStatic(results_metadata.clj:127)
	at metabase.query_processor.middleware.results_metadata$insights_xform.invoke(results_metadata.clj:124)
	at metabase.query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__49967$rff_SINGLEQUOTE___49970.invoke(results_metadata.clj:146)
	at metabase.query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__45181$fn__45182.invoke(add_rows_truncated.clj:37)
	at metabase.query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__49982$fn__49983.invoke(splice_params_in_response.clj:34)
	at metabase.query_processor.middleware.add_timezone_info$add_timezone_info$fn__45335$fn__45336.invoke(add_timezone_info.clj:17)
	at metabase.query_processor.middleware.format_rows$format_rows$fn__47833$fn__47836.invoke(format_rows.clj:77)
	at metabase.query_processor.middleware.annotate$add_column_info$fn__41591$fn__41593.invoke(annotate.clj:615)
	at metabase.query_processor.middleware.limit$limit$fn__47888$fn__47889.invoke(limit.clj:40)
	at metabase.query_processor.middleware.mbql_to_native$mbql__GT_native$fn__47902$fn__47905.invoke(mbql_to_native.clj:30)
	at metabase.query_processor.context.default$default_reducef.invokeStatic(default.clj:55)
	at metabase.query_processor.context.default$default_reducef.invoke(default.clj:48)
	at metabase.query_processor.context$reducef.invokeStatic(context.clj:69)
	at metabase.query_processor.context$reducef.invoke(context.clj:62)
	at metabase.query_processor.context.default$default_runf$respond_STAR___37445.invoke(default.clj:69)
	at metabase.driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:485)
	at metabase.driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:463)
	at metabase.driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:472)
	at metabase.driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:463)
	at metabase.driver.sql_jdbc$fn__82288.invokeStatic(sql_jdbc.clj:54)
	at metabase.driver.sql_jdbc$fn__82288.invoke(sql_jdbc.clj:52)
	at clojure.lang.MultiFn.invoke(MultiFn.java:244)
	at metabase.query_processor.context$executef.invokeStatic(context.clj:59)
	at metabase.query_processor.context$executef.invoke(context.clj:48)
	at metabase.query_processor.context.default$default_runf.invokeStatic(default.clj:68)
	at metabase.query_processor.context.default$default_runf.invoke(default.clj:66)
	at metabase.query_processor.context$runf.invokeStatic(context.clj:45)
	at metabase.query_processor.context$runf.invoke(context.clj:39)
	at metabase.query_processor.reducible$pivot.invokeStatic(reducible.clj:34)
	at metabase.query_processor.reducible$pivot.invoke(reducible.clj:31)
	at metabase.query_processor.middleware.mbql_to_native$mbql__GT_native$fn__47902.invoke(mbql_to_native.clj:25)
	at metabase.query_processor.middleware.check_features$check_features$fn__47016.invoke(check_features.clj:39)
	at metabase.query_processor.middleware.limit$limit$fn__47888.invoke(limit.clj:37)
	at metabase.query_processor.middleware.cache$maybe_return_cached_results$fn__46468.invoke(cache.clj:204)
	at metabase.query_processor.middleware.optimize_temporal_filters$optimize_temporal_filters$fn__48148.invoke(optimize_temporal_filters.clj:204)
	at metabase.query_processor.middleware.validate_temporal_bucketing$validate_temporal_bucketing$fn__50080.invoke(validate_temporal_bucketing.clj:50)
	at metabase.query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__45587.invoke(auto_parse_filter_values.clj:43)
	at metabase.query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__41716.invoke(wrap_value_literals.clj:161)
	at metabase.query_processor.middleware.annotate$add_column_info$fn__41591.invoke(annotate.clj:608)
	at metabase.query_processor.middleware.permissions$check_query_permissions$fn__46888.invoke(permissions.clj:81)
	at metabase.query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__49009.invoke(pre_alias_aggregations.clj:40)
	at metabase.query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__47089.invoke(cumulative_aggregations.clj:60)
	at metabase.query_processor.middleware.resolve_joined_fields$resolve_joined_fields$fn__49306.invoke(resolve_joined_fields.clj:102)
	at metabase.query_processor.middleware.resolve_joins$resolve_joins$fn__49619.invoke(resolve_joins.clj:171)
	at metabase.query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__45163.invoke(add_implicit_joins.clj:190)
	at metabase.query_processor.middleware.large_int_id$convert_id_to_string$fn__47852.invoke(large_int_id.clj:59)
	at metabase.query_processor.middleware.format_rows$format_rows$fn__47833.invoke(format_rows.clj:74)
	at metabase.query_processor.middleware.add_default_temporal_unit$add_default_temporal_unit$fn__44457.invoke(add_default_temporal_unit.clj:23)
	at metabase.query_processor.middleware.desugar$desugar$fn__47155.invoke(desugar.clj:21)
	at metabase.query_processor.middleware.binning$update_binning_strategy$fn__45974.invoke(binning.clj:229)
	at metabase.query_processor.middleware.resolve_fields$resolve_fields$fn__46691.invoke(resolve_fields.clj:34)
	at metabase.query_processor.middleware.add_dimension_projections$add_remapping$fn__44812.invoke(add_dimension_projections.clj:312)
	at metabase.query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__45041.invoke(add_implicit_clauses.clj:147)
	at metabase.query_processor.middleware.upgrade_field_literals$upgrade_field_literals$fn__50029.invoke(upgrade_field_literals.clj:40)
	at metabase.query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__45326.invoke(add_source_metadata.clj:123)
	at metabase.query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__49181.invoke(reconcile_breakout_and_order_by_bucketing.clj:100)
	at metabase.query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45534.invoke(auto_bucket_datetimes.clj:147)
	at metabase.query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46738.invoke(resolve_source_table.clj:45)
	at metabase.query_processor.middleware.parameters$substitute_parameters$fn__48991.invoke(parameters.clj:111)
	at metabase.query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46790.invoke(resolve_referenced.clj:79)
	at metabase.query_processor.middleware.expand_macros$expand_macros$fn__47539.invoke(expand_macros.clj:184)
	at metabase.query_processor.middleware.add_timezone_info$add_timezone_info$fn__45335.invoke(add_timezone_info.clj:15)
	at metabase.query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__49982.invoke(splice_params_in_response.clj:32)
	at metabase.query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__49192$fn__49196.invoke(resolve_database_and_driver.clj:31)
	at metabase.driver$do_with_driver.invokeStatic(driver.clj:60)
	at metabase.driver$do_with_driver.invoke(driver.clj:56)
	at metabase.query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__49192.invoke(resolve_database_and_driver.clj:25)
	at metabase.query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__47779.invoke(fetch_source_query.clj:274)
	at metabase.query_processor.middleware.store$initialize_store$fn__49991$fn__49992.invoke(store.clj:11)
	at metabase.query_processor.store$do_with_store.invokeStatic(store.clj:44)
	at metabase.query_processor.store$do_with_store.invoke(store.clj:38)
	at metabase.query_processor.middleware.store$initialize_store$fn__49991.invoke(store.clj:10)
	at metabase.query_processor.middleware.validate$validate_query$fn__50036.invoke(validate.clj:10)
	at metabase.query_processor.middleware.normalize_query$normalize$fn__47915.invoke(normalize_query.clj:22)
	at metabase.query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__45181.invoke(add_rows_truncated.clj:35)
	at metabase.query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__49967.invoke(results_metadata.clj:147)
	at metabase.query_processor.middleware.constraints$add_default_userland_constraints$fn__47032.invoke(constraints.clj:42)
	at metabase.query_processor.middleware.process_userland_query$process_userland_query$fn__49078.invoke(process_userland_query.clj:134)
	at metabase.query_processor.middleware.catch_exceptions$catch_exceptions$fn__46972.invoke(catch_exceptions.clj:173)
	at metabase.query_processor.reducible$async_qp$qp_STAR___38255$thunk__38256.invoke(reducible.clj:103)
	at metabase.query_processor.reducible$async_qp$qp_STAR___38255.invoke(reducible.clj:109)
	at metabase.query_processor.reducible$sync_qp$qp_STAR___38264$fn__38267.invoke(reducible.clj:135)
	at metabase.query_processor.reducible$sync_qp$qp_STAR___38264.invoke(reducible.clj:134)
	at clojure.lang.AFn.applyToHelper(AFn.java:156)
	at clojure.lang.AFn.applyTo(AFn.java:144)
	at clojure.core$apply.invokeStatic(core.clj:667)
	at clojure.core$apply.invoke(core.clj:660)
	at metabase.query_processor$process_userland_query.invokeStatic(query_processor.clj:241)
	at metabase.query_processor$process_userland_query.doInvoke(query_processor.clj:237)
	at clojure.lang.RestFn.invoke(RestFn.java:423)
	at metabase.query_processor$fn__50126$process_query_and_save_execution_BANG___50135$fn__50138.invoke(query_processor.clj:253)
	at metabase.query_processor$fn__50126$process_query_and_save_execution_BANG___50135.invoke(query_processor.clj:245)
	at metabase.query_processor$fn__50170$process_query_and_save_with_max_results_constraints_BANG___50179$fn__50182.invoke(query_processor.clj:265)
	at metabase.query_processor$fn__50170$process_query_and_save_with_max_results_constraints_BANG___50179.invoke(query_processor.clj:258)
	at metabase.api.dataset$run_query_async$fn__56417.invoke(dataset.clj:56)
	at metabase.query_processor.streaming$streaming_response_STAR_$fn__56396$fn__56397.invoke(streaming.clj:72)
	at metabase.query_processor.streaming$streaming_response_STAR_$fn__56396.invoke(streaming.clj:71)
	at metabase.async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)
	at metabase.async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)
	at metabase.async.streaming_response$do_f_async$fn__16088.invoke(streaming_response.clj:84)
	at clojure.lang.AFn.applyToHelper(AFn.java:152)
	at clojure.lang.AFn.applyTo(AFn.java:144)
	at clojure.core$apply.invokeStatic(core.clj:665)
	at clojure.core$with_bindings_STAR_.invokeStatic(core.clj:1973)
	at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1973)
	at clojure.lang.RestFn.invoke(RestFn.java:425)
	at clojure.lang.AFn.applyToHelper(AFn.java:156)
	at clojure.lang.RestFn.applyTo(RestFn.java:132)
	at clojure.core$apply.invokeStatic(core.clj:669)
	at clojure.core$bound_fn_STAR_$fn__5749.doInvoke(core.clj:2003)
	at clojure.lang.RestFn.invoke(RestFn.java:397)
	at clojure.lang.AFn.run(AFn.java:22)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
[7436e1b1-ee71-43ef-a33d-3514e7e96085] 2021-08-29T21:33:38+03:00 ERROR metabase.query-processor.middleware.catch-exceptions Error processing query: null
{:database_id 3,
 :started_at #t "2021-08-29T18:33:38.124666Z[GMT]",
 :via
 [{:status :failed,
   :class clojure.lang.ExceptionInfo,
   :error "Error reducing result rows",
   :stacktrace
   ["--> query_processor.context.default$default_reducef$fn__37441.invoke(default.clj:60)"
    "query_processor.context.default$default_reducef.invokeStatic(default.clj:57)"
    "query_processor.context.default$default_reducef.invoke(default.clj:48)"
    "query_processor.context$reducef.invokeStatic(context.clj:69)"
    "query_processor.context$reducef.invoke(context.clj:62)"
    "query_processor.context.default$default_runf$respond_STAR___37445.invoke(default.clj:69)"
    "driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:485)"
    "driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:463)"
    "driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:472)"
    "driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:463)"
    "driver.sql_jdbc$fn__82288.invokeStatic(sql_jdbc.clj:54)"
    "driver.sql_jdbc$fn__82288.invoke(sql_jdbc.clj:52)"
    "query_processor.context$executef.invokeStatic(context.clj:59)"
    "query_processor.context$executef.invoke(context.clj:48)"
    "query_processor.context.default$default_runf.invokeStatic(default.clj:68)"
    "query_processor.context.default$default_runf.invoke(default.clj:66)"
    "query_processor.context$runf.invokeStatic(context.clj:45)"
    "query_processor.context$runf.invoke(context.clj:39)"
    "query_processor.reducible$pivot.invokeStatic(reducible.clj:34)"
    "query_processor.reducible$pivot.invoke(reducible.clj:31)"
    "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__47902.invoke(mbql_to_native.clj:25)"
    "query_processor.middleware.check_features$check_features$fn__47016.invoke(check_features.clj:39)"
    "query_processor.middleware.limit$limit$fn__47888.invoke(limit.clj:37)"
    "query_processor.middleware.cache$maybe_return_cached_results$fn__46468.invoke(cache.clj:204)"
    "query_processor.middleware.optimize_temporal_filters$optimize_temporal_filters$fn__48148.invoke(optimize_temporal_filters.clj:204)"
    "query_processor.middleware.validate_temporal_bucketing$validate_temporal_bucketing$fn__50080.invoke(validate_temporal_bucketing.clj:50)"
    "query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__45587.invoke(auto_parse_filter_values.clj:43)"
    "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__41716.invoke(wrap_value_literals.clj:161)"
    "query_processor.middleware.annotate$add_column_info$fn__41591.invoke(annotate.clj:608)"
    "query_processor.middleware.permissions$check_query_permissions$fn__46888.invoke(permissions.clj:81)"
    "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__49009.invoke(pre_alias_aggregations.clj:40)"
    "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__47089.invoke(cumulative_aggregations.clj:60)"
    "query_processor.middleware.resolve_joined_fields$resolve_joined_fields$fn__49306.invoke(resolve_joined_fields.clj:102)"
    "query_processor.middleware.resolve_joins$resolve_joins$fn__49619.invoke(resolve_joins.clj:171)"
    "query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__45163.invoke(add_implicit_joins.clj:190)"
    "query_processor.middleware.large_int_id$convert_id_to_string$fn__47852.invoke(large_int_id.clj:59)"
    "query_processor.middleware.format_rows$format_rows$fn__47833.invoke(format_rows.clj:74)"
    "query_processor.middleware.add_default_temporal_unit$add_default_temporal_unit$fn__44457.invoke(add_default_temporal_unit.clj:23)"
    "query_processor.middleware.desugar$desugar$fn__47155.invoke(desugar.clj:21)"
    "query_processor.middleware.binning$update_binning_strategy$fn__45974.invoke(binning.clj:229)"
    "query_processor.middleware.resolve_fields$resolve_fields$fn__46691.invoke(resolve_fields.clj:34)"
    "query_processor.middleware.add_dimension_projections$add_remapping$fn__44812.invoke(add_dimension_projections.clj:312)"
    "query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__45041.invoke(add_implicit_clauses.clj:147)"
    "query_processor.middleware.upgrade_field_literals$upgrade_field_literals$fn__50029.invoke(upgrade_field_literals.clj:40)"
    "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__45326.invoke(add_source_metadata.clj:123)"
    "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__49181.invoke(reconcile_breakout_and_order_by_bucketing.clj:100)"
    "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45534.invoke(auto_bucket_datetimes.clj:147)"
    "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46738.invoke(resolve_source_table.clj:45)"
    "query_processor.middleware.parameters$substitute_parameters$fn__48991.invoke(parameters.clj:111)"
    "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46790.invoke(resolve_referenced.clj:79)"
    "query_processor.middleware.expand_macros$expand_macros$fn__47539.invoke(expand_macros.clj:184)"
    "query_processor.middleware.add_timezone_info$add_timezone_info$fn__45335.invoke(add_timezone_info.clj:15)"
    "query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__49982.invoke(splice_params_in_response.clj:32)"
    "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__49192$fn__49196.invoke(resolve_database_and_driver.clj:31)"
    "driver$do_with_driver.invokeStatic(driver.clj:60)"
    "driver$do_with_driver.invoke(driver.clj:56)"
    "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__49192.invoke(resolve_database_and_driver.clj:25)"
    "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__47779.invoke(fetch_source_query.clj:274)"
    "query_processor.middleware.store$initialize_store$fn__49991$fn__49992.invoke(store.clj:11)"
    "query_processor.store$do_with_store.invokeStatic(store.clj:44)"
    "query_processor.store$do_with_store.invoke(store.clj:38)"
    "query_processor.middleware.store$initialize_store$fn__49991.invoke(store.clj:10)"
    "query_processor.middleware.validate$validate_query$fn__50036.invoke(validate.clj:10)"
    "query_processor.middleware.normalize_query$normalize$fn__47915.invoke(normalize_query.clj:22)"
    "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__45181.invoke(add_rows_truncated.clj:35)"
    "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__49967.invoke(results_metadata.clj:147)"
    "query_processor.middleware.constraints$add_default_userland_constraints$fn__47032.invoke(constraints.clj:42)"
    "query_processor.middleware.process_userland_query$process_userland_query$fn__49078.invoke(process_userland_query.clj:134)"
    "query_processor.middleware.catch_exceptions$catch_exceptions$fn__46972.invoke(catch_exceptions.clj:173)"
    "query_processor.reducible$async_qp$qp_STAR___38255$thunk__38256.invoke(reducible.clj:103)"
    "query_processor.reducible$async_qp$qp_STAR___38255.invoke(reducible.clj:109)"
    "query_processor.reducible$sync_qp$qp_STAR___38264$fn__38267.invoke(reducible.clj:135)"
    "query_processor.reducible$sync_qp$qp_STAR___38264.invoke(reducible.clj:134)"
    "query_processor$process_userland_query.invokeStatic(query_processor.clj:241)"
    "query_processor$process_userland_query.doInvoke(query_processor.clj:237)"
    "query_processor$fn__50126$process_query_and_save_execution_BANG___50135$fn__50138.invoke(query_processor.clj:253)"
    "query_processor$fn__50126$process_query_and_save_execution_BANG___50135.invoke(query_processor.clj:245)"
    "query_processor$fn__50170$process_query_and_save_with_max_results_constraints_BANG___50179$fn__50182.invoke(query_processor.clj:265)"
    "query_processor$fn__50170$process_query_and_save_with_max_results_constraints_BANG___50179.invoke(query_processor.clj:258)"
    "api.dataset$run_query_async$fn__56417.invoke(dataset.clj:56)"
    "query_processor.streaming$streaming_response_STAR_$fn__56396$fn__56397.invoke(streaming.clj:72)"
    "query_processor.streaming$streaming_response_STAR_$fn__56396.invoke(streaming.clj:71)"
    "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)"
    "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)"
    "async.streaming_response$do_f_async$fn__16088.invoke(streaming_response.clj:84)"],
   :error_type :qp,
   :ex-data {:type :qp}}],
 :error_type :invalid-query,
 :json_query
 {:type "native",
  :native {:query "select ua_type from zoon.stat\nwhere event_date = today()\nlimit 1\n", :template-tags {}},
  :database 3,
  :parameters [],
  :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
 :status :failed,
 :class clojure.lang.ExceptionInfo,
 :stacktrace
 ["--> query_processor.middleware.annotate$fn__41515$deduplicate_cols_names__41520.invoke(annotate.clj:529)"
  "query_processor.middleware.annotate$fn__41543$merged_column_info__41548$fn__41552.invoke(annotate.clj:576)"
  "query_processor.middleware.annotate$fn__41543$merged_column_info__41548.invoke(annotate.clj:569)"
  "query_processor.middleware.annotate$add_column_info_xform$combine__41584.invoke(annotate.clj:600)"
  "query_processor.reducible$combine_additional_reducing_fns$fn__38278.invoke(reducible.clj:206)"
  "query_processor.middleware.limit$limit_xform$fn__47885.invoke(limit.clj:21)"
  "query_processor.context.default$default_reducef$fn__37441.invoke(default.clj:58)"
  "query_processor.context.default$default_reducef.invokeStatic(default.clj:57)"
  "query_processor.context.default$default_reducef.invoke(default.clj:48)"
  "query_processor.context$reducef.invokeStatic(context.clj:69)"
  "query_processor.context$reducef.invoke(context.clj:62)"
  "query_processor.context.default$default_runf$respond_STAR___37445.invoke(default.clj:69)"
  "driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:485)"
  "driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:463)"
  "driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:472)"
  "driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:463)"
  "driver.sql_jdbc$fn__82288.invokeStatic(sql_jdbc.clj:54)"
  "driver.sql_jdbc$fn__82288.invoke(sql_jdbc.clj:52)"
  "query_processor.context$executef.invokeStatic(context.clj:59)"
  "query_processor.context$executef.invoke(context.clj:48)"
  "query_processor.context.default$default_runf.invokeStatic(default.clj:68)"
  "query_processor.context.default$default_runf.invoke(default.clj:66)"
  "query_processor.context$runf.invokeStatic(context.clj:45)"
  "query_processor.context$runf.invoke(context.clj:39)"
  "query_processor.reducible$pivot.invokeStatic(reducible.clj:34)"
  "query_processor.reducible$pivot.invoke(reducible.clj:31)"
  "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__47902.invoke(mbql_to_native.clj:25)"
  "query_processor.middleware.check_features$check_features$fn__47016.invoke(check_features.clj:39)"
  "query_processor.middleware.limit$limit$fn__47888.invoke(limit.clj:37)"
  "query_processor.middleware.cache$maybe_return_cached_results$fn__46468.invoke(cache.clj:204)"
  "query_processor.middleware.optimize_temporal_filters$optimize_temporal_filters$fn__48148.invoke(optimize_temporal_filters.clj:204)"
  "query_processor.middleware.validate_temporal_bucketing$validate_temporal_bucketing$fn__50080.invoke(validate_temporal_bucketing.clj:50)"
  "query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__45587.invoke(auto_parse_filter_values.clj:43)"
  "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__41716.invoke(wrap_value_literals.clj:161)"
  "query_processor.middleware.annotate$add_column_info$fn__41591.invoke(annotate.clj:608)"
  "query_processor.middleware.permissions$check_query_permissions$fn__46888.invoke(permissions.clj:81)"
  "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__49009.invoke(pre_alias_aggregations.clj:40)"
  "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__47089.invoke(cumulative_aggregations.clj:60)"
  "query_processor.middleware.resolve_joined_fields$resolve_joined_fields$fn__49306.invoke(resolve_joined_fields.clj:102)"
  "query_processor.middleware.resolve_joins$resolve_joins$fn__49619.invoke(resolve_joins.clj:171)"
  "query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__45163.invoke(add_implicit_joins.clj:190)"
  "query_processor.middleware.large_int_id$convert_id_to_string$fn__47852.invoke(large_int_id.clj:59)"
  "query_processor.middleware.format_rows$format_rows$fn__47833.invoke(format_rows.clj:74)"
  "query_processor.middleware.add_default_temporal_unit$add_default_temporal_unit$fn__44457.invoke(add_default_temporal_unit.clj:23)"
  "query_processor.middleware.desugar$desugar$fn__47155.invoke(desugar.clj:21)"
  "query_processor.middleware.binning$update_binning_strategy$fn__45974.invoke(binning.clj:229)"
  "query_processor.middleware.resolve_fields$resolve_fields$fn__46691.invoke(resolve_fields.clj:34)"
  "query_processor.middleware.add_dimension_projections$add_remapping$fn__44812.invoke(add_dimension_projections.clj:312)"
  "query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__45041.invoke(add_implicit_clauses.clj:147)"
  "query_processor.middleware.upgrade_field_literals$upgrade_field_literals$fn__50029.invoke(upgrade_field_literals.clj:40)"
  "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__45326.invoke(add_source_metadata.clj:123)"
  "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__49181.invoke(reconcile_breakout_and_order_by_bucketing.clj:100)"
  "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45534.invoke(auto_bucket_datetimes.clj:147)"
  "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46738.invoke(resolve_source_table.clj:45)"
  "query_processor.middleware.parameters$substitute_parameters$fn__48991.invoke(parameters.clj:111)"
  "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46790.invoke(resolve_referenced.clj:79)"
  "query_processor.middleware.expand_macros$expand_macros$fn__47539.invoke(expand_macros.clj:184)"
  "query_processor.middleware.add_timezone_info$add_timezone_info$fn__45335.invoke(add_timezone_info.clj:15)"
  "query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__49982.invoke(splice_params_in_response.clj:32)"
  "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__49192$fn__49196.invoke(resolve_database_and_driver.clj:31)"
  "driver$do_with_driver.invokeStatic(driver.clj:60)"
  "driver$do_with_driver.invoke(driver.clj:56)"
  "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__49192.invoke(resolve_database_and_driver.clj:25)"
  "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__47779.invoke(fetch_source_query.clj:274)"
  "query_processor.middleware.store$initialize_store$fn__49991$fn__49992.invoke(store.clj:11)"
  "query_processor.store$do_with_store.invokeStatic(store.clj:44)"
  "query_processor.store$do_with_store.invoke(store.clj:38)"
  "query_processor.middleware.store$initialize_store$fn__49991.invoke(store.clj:10)"
  "query_processor.middleware.validate$validate_query$fn__50036.invoke(validate.clj:10)"
  "query_processor.middleware.normalize_query$normalize$fn__47915.invoke(normalize_query.clj:22)"
  "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__45181.invoke(add_rows_truncated.clj:35)"
  "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__49967.invoke(results_metadata.clj:147)"
  "query_processor.middleware.constraints$add_default_userland_constraints$fn__47032.invoke(constraints.clj:42)"
  "query_processor.middleware.process_userland_query$process_userland_query$fn__49078.invoke(process_userland_query.clj:134)"
  "query_processor.middleware.catch_exceptions$catch_exceptions$fn__46972.invoke(catch_exceptions.clj:173)"
  "query_processor.reducible$async_qp$qp_STAR___38255$thunk__38256.invoke(reducible.clj:103)"
  "query_processor.reducible$async_qp$qp_STAR___38255.invoke(reducible.clj:109)"
  "query_processor.reducible$sync_qp$qp_STAR___38264$fn__38267.invoke(reducible.clj:135)"
  "query_processor.reducible$sync_qp$qp_STAR___38264.invoke(reducible.clj:134)"
  "query_processor$process_userland_query.invokeStatic(query_processor.clj:241)"
  "query_processor$process_userland_query.doInvoke(query_processor.clj:237)"
  "query_processor$fn__50126$process_query_and_save_execution_BANG___50135$fn__50138.invoke(query_processor.clj:253)"
  "query_processor$fn__50126$process_query_and_save_execution_BANG___50135.invoke(query_processor.clj:245)"
  "query_processor$fn__50170$process_query_and_save_with_max_results_constraints_BANG___50179$fn__50182.invoke(query_processor.clj:265)"
  "query_processor$fn__50170$process_query_and_save_with_max_results_constraints_BANG___50179.invoke(query_processor.clj:258)"
  "api.dataset$run_query_async$fn__56417.invoke(dataset.clj:56)"
  "query_processor.streaming$streaming_response_STAR_$fn__56396$fn__56397.invoke(streaming.clj:72)"
  "query_processor.streaming$streaming_response_STAR_$fn__56396.invoke(streaming.clj:71)"
  "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)"
  "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)"
  "async.streaming_response$do_f_async$fn__16088.invoke(streaming_response.clj:84)"],
 :context :ad-hoc,
 :error
 "Input to deduplicate-cols-names does not match schema: \n\n\t   [(named [{:base_type (not (\"Valid field type\" :type/Enum)), :field_ref (named [nil nil (named {:base-type (not (\"Valid field type\" :type/Enum))} \"options\")] \"Must be a valid instance of one of these clauses: :expression, :field\")}] cols)]  \n\n",
 :row_count 0,
 :running_time 0,
 :ex-data
 {:type :schema.core/error,
  :value
  [[{:display_name "ua_type",
     :source :native,
     :field_ref [:field "ua_type" {:base-type :type/Enum}],
     :name "ua_type",
     :base_type :type/Enum}]],
  :error
  [(named [{:base_type (not ("Valid field type" :type/Enum)), :field_ref (named [nil nil (named {:base-type (not ("Valid field type" :type/Enum))} "options")] "Must be a valid instance of one of these clauses: :expression, :field")}] cols)]},
 :data {:rows [], :cols []}}

Cannot connect ClickHouse with Metabase 0.37

Cannot connect ClickHouse with Metabase 0.37

I used Docker and the jar to run the driver with the latest Metabase, but it cannot connect to ClickHouse.

However, when I download the Metabase 0.35.7 jar, it works well.

Substring Broken (1-based versus 0-based)

The substring function does not work as expected for custom fields.


In Metabase, the indices seem to be 0-based, whereas the ClickHouse substring function expects 1-based indices:

Character indexing starts from one (as in standard SQL).

From clickhouse-server.err.log:

Indices in strings are 1-based: while executing 'FUNCTION substring(username :: 3, 0 :: 5, 10 :: 6) -> substring(username, 0, 10) String : 7'

Let's see what the Metabase maintainers agree on in metabase/metabase#12445
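For reference, a quick illustration of the 1-based indexing in plain SQL (a sketch you can run in clickhouse-client; the string literal is just an example):

```sql
-- ClickHouse string indices are 1-based, as in standard SQL:
SELECT substring('metabase', 1, 4); -- 'meta'

-- Passing 0 as the start position, as the generated query currently does,
-- triggers the "Indices in strings are 1-based" error shown above.
```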

HTTP read timeout at 30 seconds

Hello,

I'm getting an HTTP read timeout after 30 seconds. I've checked the ClickHouse settings related to timeouts (receive_timeout and send_timeout), which are at their default of 600 seconds.

:params nil}, :cause {:class ru.yandex.clickhouse.except.ClickHouseException, :error "ClickHouse exception, code: 159, host: 192.16.0.25, port: 8123; Read timed out", :cause {:class java.net.SocketTimeoutException, :error "Read timed out"}}}

How can I increase the HTTP read timeout through the metabase-clickhouse JDBC driver?

Thanks
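If it helps anyone: a consistent 30-second cutoff matches the default socket timeout of the underlying ru.yandex clickhouse-jdbc driver rather than a server-side setting. That driver exposes a socket_timeout parameter in milliseconds (default 30000), which can presumably be raised via Metabase's "Additional JDBC options" field, e.g.:

```
socket_timeout=600000
```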

Allow blank username

Right now Metabase will not allow adding a connection with no username, but this is a valid configuration.

Support of *Merge functions

Hi, as I understand it, it's currently not possible to use the -Merge combinator.

I have a table with the AggregatingMergeTree engine and columns with types like AggregateFunction(count) and AggregateFunction(uniq, String).
Metabase tries to do count(distinct `test6_click_aggregated`.`userKey_hll`) instead of uniqMerge(userKey_hll), and it's not possible to write this function in the Custom Expression field.

It works in SQL mode, but it would be nice to have these functions.
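For context, a sketch of the SQL-mode query that the query builder cannot currently generate (the table and the userKey_hll column are the ones from this report; total_cnt is a hypothetical AggregateFunction(count) column):

```sql
-- The -Merge combinator finalizes the partial aggregation states
-- stored in AggregateFunction columns:
SELECT
    countMerge(total_cnt)  AS total_events,
    uniqMerge(userKey_hll) AS unique_users
FROM test6_click_aggregated
```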

With which version of Metabase should we test the driver?

Hello
First, thanks for moving forward on Metabase support. This is something I have been looking for for at least six months.
Now I want to test the ClickHouse driver, but following the procedure you describe (the simple one: get the driver pre-release and put it in the plugins directory) is not working; the ClickHouse driver is not visible when adding a database. After digging a bit, I'm under the impression that dynamic driver loading is not supported before Metabase 0.32 (not yet released).

Can you confirm that, and that the only way to test the new driver is a dev environment where you add the driver as described in your docs?

Thanks in advance

PS: To increase the number of people testing the driver and gather more feedback toward making it official at Metabase, it could be useful to provide a beta version of the Metabase jar that includes the driver (if, of course, the Metabase folks/license allow it).

Error description not shown

The ClickHouse driver for Python clearly explains errors with a full description. It's inconvenient to have to copy the query and paste it into a Python script just to see where the error is.

Current error description looks like this:

ClickHouse exception, code: 403, host: HOST, port: PORT

What I see in clickhouse-driver (the Python driver):

ServerException: Code: 403.
DB::Exception: Invalid expression for JOIN ON. In expression `--o.ship_address_id` = `--o.id` columns `--o.ship_address_id` and `--o.id` are from the same table but from different arguments of equal function. Supported syntax: JOIN ON Expr([table.]column, ...) = Expr([table.]column, ...) [AND Expr([table.]column, ...) = Expr([table.]column, ...) ...]. 

Is there any way to fix this?

Performance Issues

I found that when there are too many charts, there is obvious lag. Yes, it's obvious. This is not the case when switching to a MySQL data source. I'm sure it has nothing to do with the amount of data or the network.

Docker installation does not work

  • metabase/metabase-head:latest
    no app folder in it, only bin and target
  • metabase/metabase
    no app/plugins folder; I put the ClickHouse plugin in this folder, but there is no ClickHouse entry in the database type drop-down

Which version of Metabase works with this tutorial?

SSL certificates for connection

Hi,
I'm trying to connect Metabase to a Yandex.Cloud Managed ClickHouse instance, but I get an error:

ru.yandex.clickhouse.except.ClickHouseUnknownException: ClickHouse exception, code: 1002, host: {host}, port: 8443; sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

In additional parameters I add:

ssl=true&sslrootcert=/usr/local/share/ca-certificates/Yandex/YandexInternalRootCA.crt

Filtering on Custom Field Broken

When filtering on a custom field (via an expression), the driver uses the human-readable field name instead of the correct alias.

SELECT 
    source.warennummerKN8 AS warennummerKN8, 
    source.`sum-where` AS `sum-where`, 
    source.`sum-where_2` AS `sum-where_2`
FROM 
(
    SELECT 
        rsianalytics.customs_atlasaes.warennummerKN8 AS warennummerKN8, 
        sum(multiIf((rsianalytics.customs_atlasaes.statistik_datum >= parseDateTimeBestEffort('2018-12-31 23:00:00')) AND (rsianalytics.customs_atlasaes.statistik_datum <= parseDateTimeBestEffort('2019-06-29 22:00:00')), rsianalytics.customs_atlasaes.anzahlPositionen, 0.)) AS `sum-where`, 
        sum(multiIf((rsianalytics.customs_atlasaes.statistik_datum >= parseDateTimeBestEffort('2019-12-31 23:00:00')) AND (rsianalytics.customs_atlasaes.statistik_datum <= parseDateTimeBestEffort('2020-06-29 22:00:00')), rsianalytics.customs_atlasaes.anzahlPositionen, 0.)) AS `sum-where_2`
    FROM rsianalytics.customs_atlasaes
    GROUP BY rsianalytics.customs_atlasaes.warennummerKN8
    ORDER BY rsianalytics.customs_atlasaes.warennummerKN8 ASC
) AS source
WHERE source.`Positionen 2019` > 0
LIMIT 2000
FORMAT TabSeparatedWithNamesAndTypes

Received exception from server (version 20.4.5):
Code: 47. DB::Exception: Received from cdb1.dev.riege.de:9000. DB::Exception: There's no column 'source.Positionen 2019' in table 'source'.

See how it uses source.`Positionen 2019` instead of `sum-where` (or `sum-where_2`).

This is how I reproduced it on our development database, following the steps in the Metabase blog post on Time Series Comparison.


The MBQL query from the log:

:query
  {:source-query
   {:source-table 521,
    :aggregation
    [[:aggregation-options
      [:sum-where
       [:field-id 18361]
       [:between
        [:field-id 38231]
        [:absolute-datetime (t/zoned-date-time "2020-01-01T00:00+01:00[Europe/Berlin]") :default]
        [:absolute-datetime (t/zoned-date-time "2020-02-01T00:00+01:00[Europe/Berlin]") :default]]]
      {:display-name "Positionen 2019", :name "sum-where"}]
     [:aggregation-options
      [:sum-where
       [:field-id 18361]
       [:between
        [:field-id 38231]
        [:absolute-datetime (t/zoned-date-time "2020-02-01T00:00+01:00[Europe/Berlin]") :default]
        [:absolute-datetime (t/zoned-date-time "2020-03-01T00:00+01:00[Europe/Berlin]") :default]]]
      {:display-name "Positionen März", :name "sum-where_2"}]],
    :breakout [[:field-id 18791]],
    :order-by [[:asc [:field-id 18791]]]},
   :filter [:> [:field-literal "Positionen 2019" :type/Float] [:value 0 nil]],
   :source-metadata
   [{:name "angemeldetesVerfahren",
     :id 18791,
     :table_id 521,
     :display_name "Angemeldetes Verfahren",
     :base_type :type/Text,
     :special_type :type/Category,
     :fingerprint {:global {:distinct-count 5, :nil% 0.0}, :type {:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :average-length 2.0}}},
     :settings nil}
    {:name "sum-where", :display_name "Positionen 2019", :base_type :type/Integer, :special_type :type/Number, :settings nil}
    {:name "sum-where_2", :display_name "Positionen März", :base_type :type/Integer, :special_type :type/Number, :settings nil}],
   :fields [[:field-literal "angemeldetesVerfahren" :type/Text] [:field-literal "sum-where" :type/Integer] [:field-literal "sum-where_2" :type/Integer]],
   :limit 2000},
  :type :query,
  :middleware {:add-default-userland-constraints? true},
  :info
  {:executed-by 11,
   :context :ad-hoc,
   :nested? false,
   :query-hash [102, 39, 72, -66, -50, -78, -70, -9, 63, -37, 106, 123, 79, 6, -61, 75, -118, 90, 2, 117, 33, 78, -79, -20, -51, -3, -116, 83, -106, 72, -116, 111]},
  :constraints {:max-results 10000, :max-results-bare-rows 2000}},
 :data {:rows [], :cols []}}

Additional JDBC options not apply

By default, our users are limited to 10 million rows per read, but for certain groups of users we raise this limit with the max_rows_to_read parameter.

We currently use Redash, and it works fine there. But here in Metabase, with the same user, the same query, and the same period, we hit the row-limit error. The traceback shows the same limit:

"ClickHouse exception, code: 158, host: ch-analytics.amazon, port: 8123; Code: 158, e.displayText() = DB::Exception: Limit for rows to read (or to read), maximum: 10000000 (version 19.7.3.1)\n",
:cause {:class java.lang.Throwable, :error "Code: 158, e.displayText() = DB::Exception: Limit for rows to read exceeded: 13254656 rows read (or to read), maximum: 10000000 (version 19.7.3.1)\n".

That is, even though max_rows_to_read=200000000 is written in Additional JDBC Options, it has no effect.

clickhouse exception with readonly user

Versions
Metabase 0.32.9
metabase-clickhouse-driver 0.3

What did I do
I queried a clickhouse database from metabase with a read only user.

What did I expect
The query runs and I get the data, since the readonly user has enough permissions to query the database. The default query complexity restrictions configured in the database are applied.

What happens instead
When I try to query ClickHouse from Metabase with a readonly user, I get an exception because the driver cannot modify the 'max_result_rows' setting.

The stack trace of the error is:

ClickHouseException:
 Message: ClickHouse exception, code: 164, host: host.docker.internal, port: 8123; Code: 164, e.displayText() = DB::Exception: Cannot modify 'max_result_rows' setting in readonly mode (version 19.9.3.31 (official build))

 SQLState: null
 Error Code: 164
07-09 12:26:44 ERROR sql-jdbc.execute :: nil
07-09 12:26:44 WARN middleware.process-userland-query :: Query failure {:status :failed,
 :class java.lang.Exception,
 :error "ClickHouse exception, code: 164, host: host.docker.internal, port: 8123",
 :stacktrace
 ("--> driver.sql_jdbc.execute$do_with_try_catch.invokeStatic(execute.clj:219)"
  "driver.sql_jdbc.execute$do_with_try_catch.invoke(execute.clj:209)"
  "driver.sql_jdbc.execute$execute_query.invokeStatic(execute.clj:283)"
  "driver.sql_jdbc.execute$execute_query.invoke(execute.clj:277)"
  "driver.clickhouse$eval543$fn__544.invoke(clickhouse.clj:217)"
  "query_processor$fn__42016$execute_query__42021$fn__42022.invoke(query_processor.clj:65)"
  "query_processor$fn__42016$execute_query__42021.invoke(query_processor.clj:59)"
  "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__31852.invoke(mbql_to_native.clj:38)"
  "query_processor.middleware.annotate$result_rows_maps__GT_vectors$fn__33650.invoke(annotate.clj:428)"
  "query_processor.middleware.annotate$add_column_info$fn__33555.invoke(annotate.clj:363)"
  "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__35249.invoke(cumulative_aggregations.clj:57)"
  "query_processor.middleware.resolve_joins$resolve_joins$fn__39124.invoke(resolve_joins.clj:184)"
  "query_processor.middleware.limit$limit$fn__36107.invoke(limit.clj:19)"
  "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__41885.invoke(results_metadata.clj:86)"
  "query_processor.middleware.format_rows$format_rows$fn__36095.invoke(format_rows.clj:26)"
  "query_processor.middleware.add_dimension_projections$add_remapping$fn__32190.invoke(add_dimension_projections.clj:234)"
  "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__33789.invoke(add_source_metadata.clj:127)"
  "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__39168.invoke(resolve_source_table.clj:48)"
  "query_processor.middleware.add_row_count_and_status$add_row_count_and_status$fn__32952.invoke(add_row_count_and_status.clj:16)"
  "query_processor.middleware.driver_specific$process_query_in_context$fn__35460.invoke(driver_specific.clj:12)"
  "query_processor.middleware.resolve_driver$resolve_driver$fn__38788.invoke(resolve_driver.clj:23)"
  "query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__34649$fn__34650.invoke(bind_effective_timezone.clj:9)"
  "util.date$call_with_effective_timezone.invokeStatic(date.clj:88)"
  "util.date$call_with_effective_timezone.invoke(date.clj:77)"
  "query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__34649.invoke(bind_effective_timezone.clj:8)"
  "query_processor.middleware.store$initialize_store$fn__41910$fn__41911.invoke(store.clj:11)"
  "query_processor.store$do_with_new_store.invokeStatic(store.clj:41)"
  "query_processor.store$do_with_new_store.invoke(store.clj:37)"
  "query_processor.middleware.store$initialize_store$fn__41910.invoke(store.clj:10)"
  "query_processor.middleware.async$async__GT_sync$fn__34250.invoke(async.clj:23)"
  "query_processor.middleware.async_wait$runnable$fn__34375.invoke(async_wait.clj:89)"),
 :query
 {:type "native",
  :native {:query "SELECT uniq(user)\nFROM requests", :template-tags {}},
  :parameters [],
  :async? true,
  :middleware {:add-default-userland-constraints? true, :userland-query? true},
  :info
  {:executed-by 1,
   :context :ad-hoc,
   :card-id nil,
   :nested? false,
   :query-hash [-106, 35, -19, 30, 118, 76, 45, -108, -10, 69, 95, -43, -119, -74, -47, -42, -14, 48, -112, 53, 88, 127, -112, -45, 59, -39, 30, -66, 69, -22, 38, -92]},
  :constraints {:max-results 10000, :max-results-bare-rows 2000}},
 :cause
 {:class ru.yandex.clickhouse.except.ClickHouseException,
  :error
  "ClickHouse exception, code: 164, host: host.docker.internal, port: 8123; Code: 164, e.displayText() = DB::Exception: Cannot modify 'max_result_rows' setting in readonly mode (version 19.9.3.31 (official build))\n",
  :cause {:class java.lang.Throwable, :error "Code: 164, e.displayText() = DB::Exception: Cannot modify 'max_result_rows' setting in readonly mode (version 19.9.3.31 (official build))\n"}}}

I don't know enough Clojure, and even less about how the Metabase internals work, but from the look of the stack trace, could the problem come from :add-default-userland-constraints? true? Is that value configurable somehow?
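One possible workaround on the ClickHouse side (untested here, and assuming users are managed via users.xml, as was typical for 19.x): setting readonly to 2 instead of 1 keeps the user read-only for queries while still allowing it to change settings such as max_result_rows:

```xml
<!-- users.xml profile sketch: readonly=2 forbids writes and DDL
     but permits changing settings, unlike readonly=1 -->
<profiles>
    <readonly_profile>
        <readonly>2</readonly>
    </readonly_profile>
</profiles>
```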

Native query Variables field filter not working

Good day. I tried to make an interactive, filterable dashboard from a native query. I did succeed in creating it, but with some issues.
Whenever I use a Field Filter variable (which lets the user fuzzy-search values of the column), the result comes back empty. I worked around this by using a Number variable instead of the field filter. In the dashboard, I mapped the filter to the corresponding column in the table, yet in the query itself I have to resort to a manually typed value, which is inconvenient for the end user and most likely not usable. I also mapped the column in the data model to an enum to associate each value with a human-readable name, but I lose that readability by using a Number variable.

Could you guys help out? It's not a deal-breaker, but I think it's good to polish the product, as Metabase + ClickHouse is amazing and blazing fast.


DB::Exception: Key expression contains comparison between inconvertible types: DateTime64(3) and DateTime inside entry_time >= parseDateTimeBestEffort('2021-01-15 00:00:00') (version 20.8.4.11 (official build))

When a DateTime64 field is used in a "between time" filter, the translation produces an error.
For example:
'entry_time' is DateTime64, and the filter is translated to 'entry_time >= parseDateTimeBestEffort('2021-01-15 00:00:00')'.
If it were translated to 'entry_time >= '2021-01-15 00:00:00'' instead, without the function, it would work.
thanks
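In the meantime, a sketch of a SQL-mode workaround (my_table is a hypothetical table with a DateTime64(3) column like the one in this report): compare against an explicit DateTime64 value so the key expression stays convertible:

```sql
-- toDateTime64(value, scale) yields a DateTime64 of matching precision,
-- avoiding the DateTime64-vs-DateTime comparison in the key expression:
SELECT count()
FROM my_table
WHERE entry_time >= toDateTime64('2021-01-15 00:00:00', 3)
```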

0.38 update failed to load the driver

Good day, hope you are healthy and doing well.
We have tested the 0.38 update, which brings a lot of cool, long-anticipated new features. Unfortunately, Metabase failed to see the driver.

Group by datetime field fail

When grouping by a DateTime field, Metabase fails:

ClickHouse exception, code: 215, host: xx, port: 8123; Code: 215, e.displayText() = DB::Exception: Received from xx:9000. DB::Exception: Column `first_time` is not under aggregate function and not in GROUP BY. (version 20.3.9.70 (official build))

I found the SQL generated by Metabase and executed it in ClickHouse directly:

SELECT 
    toStartOfMonth(toDateTime(dw.ads_label_company.first_time)) AS first_time, 
    count() AS count
FROM dw.ads_label_company
GROUP BY toStartOfMonth(toDateTime(dw.ads_label_company.first_time))
ORDER BY toStartOfMonth(toDateTime(dw.ads_label_company.first_time)) ASC

Received exception from server (version 18.16.1):
Code: 215. DB::Exception: Received from xx:9000. DB::Exception: Column first_time is not under aggregate function and not in GROUP BY..

The problem is the AS alias shadowing the original column name. How can I solve it?
Thanks.
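Until the driver handles this, a sketch of a SQL-mode workaround: give the expression an alias that does not shadow the underlying column (first_time_month is an illustrative name), so the server resolves the GROUP BY unambiguously:

```sql
SELECT
    toStartOfMonth(toDateTime(first_time)) AS first_time_month,
    count() AS count
FROM dw.ads_label_company
GROUP BY first_time_month
ORDER BY first_time_month ASC
```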

ClickHouse exception, code: 1002

I am getting the following error when trying to run a custom query:

ClickHouse exception, code: 1002, host: unknown, port: -1

I have tried setting the following parameters:

socket_timeout=42000000&connect_timeout=42000000&receive_timeout=42000000&send_timeout=42000000

It works fine for smaller queries

No method in multimethod 'connection-details->spec' for dispatch value: :clickhouse

ClickHouse and Metabase versions

Clickhouse: 19.17.2.4
Metabase: v0.34.0 (Latest)
metabase-clickhouse-driver: 0.6

Deployment

I copied the driver jar clickhouse.metabase-driver.jar into the plugins directory, as follows:

# ll plugins/
total 117740
-rw-r--r-- 1 root root   392466 Dec 23 17:39 bigquery.metabase-driver.jar
-rw-r--r-- 1 root root  3875030 Dec 23 18:26 clickhouse.metabase-driver.jar
-rw-r--r-- 1 root root   465476 Dec 23 17:39 druid.metabase-driver.jar
-rw-r--r-- 1 root root   617794 Dec 23 17:39 googleanalytics.metabase-driver.jar
-rw-r--r-- 1 root root  3055790 Dec 23 17:39 google.metabase-driver.jar
-rw-r--r-- 1 root root  6550563 Dec 23 17:39 mongo.metabase-driver.jar
-rw-r--r-- 1 root root    68018 Dec 23 17:39 oracle.metabase-driver.jar
-rw-r--r-- 1 root root   140859 Dec 23 17:39 presto.metabase-driver.jar
-rw-r--r-- 1 root root  2381192 Dec 23 17:39 redshift.metabase-driver.jar
-rw-r--r-- 1 root root 30587032 Dec 23 17:39 snowflake.metabase-driver.jar
-rw-r--r-- 1 root root 63997026 Dec 23 17:39 sparksql.metabase-driver.jar
-rw-r--r-- 1 root root  7114948 Dec 23 17:39 sqlite.metabase-driver.jar
-rw-r--r-- 1 root root  1229843 Dec 23 17:39 sqlserver.metabase-driver.jar
-rw-r--r-- 1 root root    50615 Dec 23 17:39 vertica.metabase-driver.jar

Start Metabase:

MB_PLUGINS_DIR=./plugins; nohup  java -jar metabase.jar &

Error Info

12-23 18:27:55 ERROR driver.util :: Database connection error
java.lang.IllegalArgumentException: No method in multimethod 'connection-details->spec' for dispatch value: :clickhouse
        at clojure.lang.MultiFn.getFn(MultiFn.java:156)
        at clojure.lang.MultiFn.invoke(MultiFn.java:233)
        at metabase.driver.sql_jdbc.connection$details__GT_connection_spec_for_testing_connection.invokeStatic(connection.clj:152)
        at metabase.driver.sql_jdbc.connection$details__GT_connection_spec_for_testing_connection.invoke(connection.clj:147)
        at metabase.driver.sql_jdbc.connection$can_connect_QMARK_.invokeStatic(connection.clj:158)
        at metabase.driver.sql_jdbc.connection$can_connect_QMARK_.invoke(connection.clj:154)
        at metabase.driver.sql_jdbc$fn__69874.invokeStatic(sql_jdbc.clj:35)
        at metabase.driver.sql_jdbc$fn__69874.invoke(sql_jdbc.clj:34)
        at clojure.lang.MultiFn.invoke(MultiFn.java:234)
        at metabase.driver.util$can_connect_with_details_QMARK_$fn__21110.invoke(util.clj:32)
        at metabase.util$do_with_timeout$fn__13131.invoke(util.clj:302)
        at clojure.core$binding_conveyor_fn$fn__5754.invoke(core.clj:2030)
        at clojure.lang.AFn.call(AFn.java:18)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
12-23 18:27:55 DEBUG middleware.log :: POST /api/database 400 28.3 ms (0 DB calls) 
{:valid false,
 :dbname "No method in multimethod 'connection-details->spec' for dispatch value: :clickhouse",
 :message "No method in multimethod 'connection-details->spec' for dispatch value: :clickhouse"}

Filter not working with UUID field type

I'm creating a variable in a question with the field type UUID. When I filter, Metabase adds this code to the query: CAST('XXXX-XXX-XXX', 'uuid').

In ClickHouse it works when I change uuid from lower case to upper case in the cast, e.g. CAST('XXXX-XXX-XXX', 'UUID').

How do I make the filter work with fields of type UUID?

Filter on IPv4 column

My stack:

  • ClickHouse server version 21.4.6 revision 54447.
  • Metabase 0.39.1
  • ClickHouse plugin 0.7.5

Problem description

In Metabase: Ask question -> Custom question -> Filter -> ClientHost is some_ip

It's not possible to add a filter on a column of type IPv4; it returns an error:

ClickHouse exception, code: 53, host: <HIDDEN>, port: 8123; Code: 53, e.displayText() = DB::Exception: Cannot convert string 10.0.1.23 to type IPv4: while executing 'FUNCTION notEquals(ClientHost : 0, '10.0.1.23' : 1) -> notEquals(ClientHost, '10.0.1.23') UInt8 : 3' (version 21.4.6.55 (official build)) 

Part of relevant table definition:

CREATE TABLE IF NOT EXISTS mydb.mytable (
    ClientHost IPv4,
)

Normally I query this via client with where ClientHost = toIPv4('10.0.1.23')
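A sketch of how the driver could make this comparison valid, casting either the string literal or the column (names taken from the error message above):

```sql
-- Cast the string literal to IPv4:
SELECT count() FROM mydb.mytable WHERE ClientHost != toIPv4('10.0.1.23');

-- Or cast the column to String instead:
SELECT count() FROM mydb.mytable WHERE IPv4NumToString(ClientHost) != '10.0.1.23';
```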

Be Smart About Nullable

Metabase provides great filters for quickly finding records where certain columns are (not) empty.

If you select the "Not Empty" option, the driver adds a condition like WHERE foo IS NOT NULL to the query. This works fine if you are using Nullable data types, but it fails for empty cells of non-Nullable columns. The driver should make an effort to make the "empty" filter actually work, e.g. by using the empty function for applicable data types.

Perhaps this will change when [metabase/metabase#5299] is addressed, but for now we can reformulate the query in the driver to check String fields for both NULL and empty values.
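A sketch of the reformulated condition for a String column (foo is a placeholder name):

```sql
-- "Not Empty" should check both NULL and the empty string:
WHERE foo IS NOT NULL AND notEmpty(foo)

-- equivalently, for non-Nullable String columns:
WHERE foo != ''
```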

Inconvertible types at date range filter

I get this error when trying to apply a date range filter to a Date-type column. Shouldn't the parameter values for a date range be in the format "yyyy-MM-dd" (without time)?

ClickHouse exception, code: 169, host: 127.0.0.1, port: 8123; Code: 169, e.displayText() = DB::Exception: Key expression contains comparison between inconvertible types: Date and String inside (date AS `adserver.stat.date`) >= '2019-10-27 00:00:00'

Information about Metabase
Version: 0.33.4
Server: Ubuntu 18.04
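For reference, casting the literal to Date (rather than sending a DateTime-formatted string) avoids the inconvertible-types error; the table and column names are taken from the error message above:

```sql
SELECT count()
FROM adserver.stat
WHERE date >= toDate('2019-10-27')
```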

Search with non-ASCII letters doesn't work

I have a search field in Metabase. Values contain non-ASCII (Cyrillic) letters; they look like Я_1, Я_2 and so on. For example, I type Я in the search field and Metabase sends a query to ClickHouse like this:

SELECT
	`db_name`.`table_name`.`route_name` AS `route_name_mb_alias`
FROM
	`db_name`.`table_name`
WHERE
	(lower(`db_name`.`table_name`.`route_name`) like 'я%')

And ClickHouse returns nothing, because the lower function doesn't affect non-ASCII letters; lowerUTF8 should be used instead.

Is it possible to force the use of the lowerUTF8 function while searching?
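For illustration, the same query using lowerUTF8, which handles Cyrillic and other non-ASCII letters:

```sql
SELECT
	`db_name`.`table_name`.`route_name` AS `route_name_mb_alias`
FROM
	`db_name`.`table_name`
WHERE
	(lowerUTF8(`db_name`.`table_name`.`route_name`) LIKE 'я%')
```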

Check Behavior for Array, Enum Data Types

We should check whether the use of more complicated data types breaks the driver somehow.

Candidates:

  • Array(String)
  • Array(Array(UInt64))
  • Array(LowCardinality(String))
  • Enum8('foo' = 42, 'bar' = 23)

Support URLs with underscores

On the Metabase DB configuration page, if the host contains underscores (e.g. clickhouse_app), the connection attempt always fails with "port is missed or wrong". With hosts without underscores, it works correctly.

Metabase log:

08-09 07:59:08 DEBUG middleware.log :: PUT /api/database/4 400 10 ms (1 DB calls) 
{:valid false, :dbname "port is missed or wrong", :message "port is missed or wrong"}

Driver version: 0.3 https://github.com/enqueue/metabase-clickhouse-driver/releases/download/0.3/clickhouse.metabase-driver.jar
Metabase version: v0.32.10

Arithmetic functions with other types other than Int or Float

Hello! We cannot make arithmetic calculations work in ClickHouse: they only work for Int or Float types, as stated here: https://clickhouse.tech/docs/en/sql-reference/functions/arithmetic-functions/

"Arithmetic functions work for any pair of types from UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64, Float32, or Float64."

My error on Metabase says exactly that:

ClickHouse exception, code: 43, host: ec2-54-158-250-11.compute-1.amazonaws.com, port: 8123; Code: 43, e.displayText() = DB::Exception: Illegal types Float64 and Decimal(38, 2) of arguments of function divide (version 20.3.5.21 (official build))

So, in the driver, we would cast the column to Float, right?

What do you think @enqueue ?
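A sketch of such a cast, assuming a Decimal column named amount and a Float64 column named rate (hypothetical names):

```sql
-- Casting the Decimal operand to Float64 makes the division legal:
SELECT toFloat64(amount) / rate FROM my_table;
```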

Connection inside docker network

Hello,

I am trying to use Metabase and ClickHouse with the metabase-clickhouse-driver; however, I can't manage to connect to the ClickHouse database when using a bridge network. The Metabase container manages to communicate with the ClickHouse container via ping -c 1 name_clickhouse_container:5432, and when I use --network host, Metabase also manages to connect with host localhost. But when I try to set name_clickhouse_container as the host inside Metabase, the error message says the port is missing.

Would you know where the issue might reside?

Best regards,
David

I am trying to use this project: https://github.com/plausible/hosting with the docker instruction

docker run -d -p 3000:3000 \
-v ~/metabase-data:/metabase-data \
--network hosting_default \
-e "MB_DB_FILE=/metabase-data/metabase.db" \
--name metabase metabase/metabase

[please adapt the docker call in order to add the clickhouse driver].

Query with "Custom Field" and more than 30 columns

ClickHouse exception, code: 47, host: 194.182.175.90, port: 8123; Code: 47, e.displayText() = DB::Exception: Missing columns:

After 30 columns, the alias "source" is not used anymore.

Example:

SELECT source.id AS id,
       analytics.tablename.column1 AS column1
FROM (SELECT analytics.tablename.id,
             analytics.tablename.column1
      ...) source

Zoom into Map: Comparison between Decimal and Float64

Zoom into a pinpoint map

ru.yandex.clickhouse.except.ClickHouseException: ClickHouse exception, code: 43, host: localhost, port: 8123; Code: 43, e.displayText() = DB::Exception: No operation lessOrEquals between Decimal(8, 6) and Float64, e.what() = DB::Exception

Support for WITH TOTALS

The WITH TOTALS query runs fine but I do not see the totals output anywhere. Is it possible to support this in Metabase?
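For context, a minimal WITH TOTALS query (hypothetical column names, table taken from examples elsewhere in these reports); ClickHouse returns the totals row in a separate block of the result, which Metabase does not currently display:

```sql
SELECT category, sum(clicks) AS total_clicks
FROM default.stats
GROUP BY category
WITH TOTALS
```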

Two columns with the same alias name

When using the UI, Metabase can produce queries having two aggregates with the same alias:

SELECT 
  sum(`default`.`stats`.`clicks`) AS `sum`,
  sum(`default`.`stats`.`impressions`) AS `sum`
FROM `default`.`stats`
FORMAT TabSeparatedWithNamesAndTypes;

This is OK for PostgreSQL and some other databases, but not for ClickHouse:

2019.04.01 13:32:45.843693 [ 42 ] {38d731ce-1536-4670-ba77-645c7c415f08} <Error> HTTPHandler: Code: 179, e.displayText() = DB::Exception: Different expressions with the same alias sum:
sum(impressions) AS sum
and
sum(clicks) AS sum

I guess the driver should take that into account and add suffixes to aliases having the same name.
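A sketch of what suffixed aliases could look like for the query above (the `_2` suffix scheme is an assumption, not the driver's actual behavior):

```sql
SELECT 
  sum(`default`.`stats`.`clicks`) AS `sum`,
  sum(`default`.`stats`.`impressions`) AS `sum_2`
FROM `default`.`stats`
FORMAT TabSeparatedWithNamesAndTypes;
```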

Problem with Enum type

Hi,

I'm encountering problems with enums when the ClickHouse type definition is longer than 255 characters, as it cannot be written into the DATABASE_TYPE column of the Metabase application database.

The redacted error message for metabase.sync.util is:

Sep 30 13:18:01 ERROR metabase.sync.util :: Error checking if Fields (<REDACTED_LIST_OF_FIELDS>) need to be created or reactivated: Value too long for column "DATABASE_TYPE VARCHAR(255) NOT NULL COMMENT 'The actual type of this column in the database. e.g. VARCHAR or TEXT.'": "'Enum8(''unknown'' = 0, ''A'' = 1, ''B'' = 2, ''C'' = 3, ''D'' = 4, ''E'' = ... (275)"; SQL statement:
INSERT INTO "METABASE_FIELD" ("DESCRIPTION", "DATABASE_TYPE", "TABLE_ID", "SPECIAL_TYPE", "NAME", "UPDATED_AT", "PARENT_ID", "DISPLAY_NAME", "CREATED_AT", "BASE_TYPE") VALUES (NULL, ?, 26, NULL, ?, ?, NULL, ?, ?, ?) [22001-197]
("org.h2.message.DbException.getJdbcSQLException(DbException.java:357)"
"org.h2.message.DbException.get(DbException.java:179)"
"org.h2.table.Column.validateConvertUpdateSequence(Column.java:398)"
"org.h2.table.Table.validateConvertUpdateSequence(Table.java:798)"
"org.h2.command.dml.Insert.insertRows(Insert.java:177)"
"org.h2.command.dml.Insert.update(Insert.java:134)"
"org.h2.command.CommandContainer.update(CommandContainer.java:102)"
"org.h2.command.Command.executeUpdate(Command.java:261)"
"org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:199)"
"org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:153)"
"com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:410)"
"clojure.java.jdbc$db_do_execute_prepared_return_keys$exec_and_return_keys__14493.invoke(jdbc.clj:961)"
"clojure.java.jdbc$db_do_execute_prepared_return_keys.invokeStatic(jdbc.clj:989)"
"clojure.java.jdbc$db_do_execute_prepared_return_keys.invoke(jdbc.clj:945)"
"clojure.java.jdbc$db_do_prepared_return_keys.invokeStatic(jdbc.clj:1020)"
"clojure.java.jdbc$db_do_prepared_return_keys.invoke(jdbc.clj:997)"
"clojure.java.jdbc$db_do_prepared_return_keys.invokeStatic(jdbc.clj:1022)"
"clojure.java.jdbc$db_do_prepared_return_keys.invoke(jdbc.clj:997)"
"toucan.db$simple_insert_many_BANG_$iter__17393__17397$fn__17398.invoke(db.clj:547)"
"clojure.lang.LazySeq.sval(LazySeq.java:42)"
"clojure.lang.LazySeq.seq(LazySeq.java:51)"
"clojure.lang.Cons.next(Cons.java:39)"
"clojure.lang.RT.next(RT.java:709)"
"clojure.core$next__5371.invokeStatic(core.clj:64)"
"clojure.core$dorun.invokeStatic(core.clj:3142)"
"clojure.core$doall.invokeStatic(core.clj:3148)"
"clojure.core$doall.invoke(core.clj:3148)"
"toucan.db$simple_insert_many_BANG_.invokeStatic(db.clj:544)"
"toucan.db$simple_insert_many_BANG_.invoke(db.clj:530)"
"toucan.db$insert_many_BANG_.invokeStatic(db.clj:563)"
"toucan.db$insert_many_BANG_.invoke(db.clj:550)"
"--> sync.sync_metadata.fields.sync_instances$fn__48056$insert_new_fields_BANG___48061$fn__48062.invoke(sync_instances.clj:45)"
"sync.sync_metadata.fields.sync_instances$fn__48056$insert_new_fields_BANG___48061.invoke(sync_instances.clj:41)"
"sync.sync_metadata.fields.sync_instances$fn__48099$create_or_reactivate_fields_BANG___48104$fn__48105.invoke(sync_instances.clj:68)"
"sync.sync_metadata.fields.sync_instances$fn__48099$create_or_reactivate_fields_BANG___48104.invoke(sync_instances.clj:56)"
"sync.sync_metadata.fields.sync_instances$fn__48126$sync_active_instances_BANG___48131$fn__48132$iter__48133__48137$fn__48138$fn__48143.invoke(sync_instances.clj:104)"
"sync.util$do_with_error_handling.invokeStatic(util.clj:149)"
"sync.util$do_with_error_handling.invoke(util.clj:144)"
"sync.sync_metadata.fields.sync_instances$fn__48126$sync_active_instances_BANG___48131$fn__48132$iter__48133__48137$fn__48138.invoke(sync_instances.clj:99)"
"sync.util$sum_for_STAR_.invokeStatic(util.clj:462)"
"sync.util$sum_for_STAR_.invoke(util.clj:459)"
"sync.sync_metadata.fields.sync_instances$fn__48126$sync_active_instances_BANG___48131$fn__48132.invoke(sync_instances.clj:98)"
"sync.sync_metadata.fields.sync_instances$fn__48126$sync_active_instances_BANG___48131.invoke(sync_instances.clj:85)"
"sync.sync_metadata.fields.sync_instances$fn__48310$sync_instances_BANG___48319$fn__48322.invoke(sync_instances.clj:193)"
"sync.sync_metadata.fields.sync_instances$fn__48310$sync_instances_BANG___48319.invoke(sync_instances.clj:178)"
"sync.sync_metadata.fields.sync_instances$fn__48310$sync_instances_BANG___48319$fn__48320.invoke(sync_instances.clj:184)"
"sync.sync_metadata.fields.sync_instances$fn__48310$sync_instances_BANG___48319.invoke(sync_instances.clj:178)"
"sync.sync_metadata.fields$fn__48478$sync_and_update_BANG___48483$fn__48484.invoke(fields.clj:76)"
"sync.sync_metadata.fields$fn__48478$sync_and_update_BANG___48483.invoke(fields.clj:72)"
"sync.sync_metadata.fields$fn__48506$sync_fields_for_table_BANG___48515$fn__48521$fn__48523.invoke(fields.clj:102)"
"sync.util$do_with_error_handling.invokeStatic(util.clj:149)"
"sync.util$do_with_error_handling.invoke(util.clj:144)"
"sync.sync_metadata.fields$fn__48506$sync_fields_for_table_BANG___48515$fn__48521.invoke(fields.clj:90)"
"sync.sync_metadata.fields$fn__48506$sync_fields_for_table_BANG___48515.invoke(fields.clj:83)"
"sync.sync_metadata.fields$fn__48551$sync_fields_BANG___48556$fn__48557$iter__48558__48562$fn__48563$fn__48564.invoke(fields.clj:117)"
"sync.sync_metadata.fields$fn__48551$sync_fields_BANG___48556$fn__48557$iter__48558__48562$fn__48563.invoke(fields.clj:116)"
"sync.sync_metadata.fields$fn__48551$sync_fields_BANG___48556$fn__48557.invoke(fields.clj:116)"
"sync.sync_metadata.fields$fn__48551$sync_fields_BANG___48556.invoke(fields.clj:111)"
"sync.util$fn__39397$run_step_with_metadata__39402$fn__39406$fn__39408.invoke(util.clj:356)"
"sync.util$with_start_and_finish_logging_SINGLEQUOTE_.invokeStatic(util.clj:104)"
"sync.util$with_start_and_finish_logging_SINGLEQUOTE_.invoke(util.clj:98)"
"sync.util$with_start_and_finish_debug_logging.invokeStatic(util.clj:121)"
"sync.util$with_start_and_finish_debug_logging.invoke(util.clj:118)"
"sync.util$fn__39397$run_step_with_metadata__39402$fn__39406.invoke(util.clj:353)"
"sync.util$fn__39397$run_step_with_metadata__39402.invoke(util.clj:348)"
"sync.util$fn__39589$run_sync_operation__39594$fn__39595$fn__39596.invoke(util.clj:441)"
"sync.util$fn__39589$run_sync_operation__39594$fn__39595.invoke(util.clj:441)"
"sync.util$fn__39589$run_sync_operation__39594.invoke(util.clj:435)"
"sync.sync_metadata$fn__49238$sync_db_metadata_BANG___49243$fn__49244$fn__49245.invoke(sync_metadata.clj:51)"
"sync.util$do_with_error_handling.invokeStatic(util.clj:149)"
"sync.util$do_with_error_handling.invoke(util.clj:144)"
"sync.util$do_with_error_handling.invokeStatic(util.clj:147)"
"sync.util$do_with_error_handling.invoke(util.clj:144)"
"driver$fn__18264.invokeStatic(driver.clj:646)"
"driver$fn__18264.invoke(driver.clj:646)"
"sync.util$sync_in_context$fn__39303.invoke(util.clj:140)"
"sync.util$with_db_logging_disabled$fn__39300.invoke(util.clj:131)"
"sync.util$with_start_and_finish_logging_SINGLEQUOTE_.invokeStatic(util.clj:104)"
"sync.util$with_start_and_finish_logging_SINGLEQUOTE_.invoke(util.clj:98)"
"sync.util$with_start_and_finish_logging$fn__39289.invoke(util.clj:116)"
"sync.util$with_sync_events$fn__39284.invoke(util.clj:90)"
"sync.util$with_duplicate_ops_prevented$fn__39275.invoke(util.clj:69)"
"sync.util$do_sync_operation.invokeStatic(util.clj:168)"
"sync.util$do_sync_operation.invoke(util.clj:165)"
"sync.sync_metadata$fn__49238$sync_db_metadata_BANG___49243$fn__49244.invoke(sync_metadata.clj:50)"
"sync.sync_metadata$fn__49238$sync_db_metadata_BANG___49243.invoke(sync_metadata.clj:47)"
"sync$fn__49288$sync_database_BANG___49293$fn__49294$fn__49295.invoke(sync.clj:28)"
"sync.util$do_with_error_handling.invokeStatic(util.clj:149)"
"sync.util$do_with_error_handling.invoke(util.clj:144)"
"sync.util$do_with_error_handling.invokeStatic(util.clj:147)"
"sync.util$do_with_error_handling.invoke(util.clj:144)"
"driver$fn__18264.invokeStatic(driver.clj:646)"
"driver$fn__18264.invoke(driver.clj:646)"
"sync.util$sync_in_context$fn__39303.invoke(util.clj:140)"
"sync.util$with_db_logging_disabled$fn__39300.invoke(util.clj:131)"
"sync.util$with_start_and_finish_logging_SINGLEQUOTE_.invokeStatic(util.clj:104)"
"sync.util$with_start_and_finish_logging_SINGLEQUOTE_.invoke(util.clj:98)"
"sync.util$with_start_and_finish_logging$fn__39289.invoke(util.clj:116)"
"sync.util$with_sync_events$fn__39284.invoke(util.clj:90)"
"sync.util$with_duplicate_ops_prevented$fn__39275.invoke(util.clj:69)"
"sync.util$do_sync_operation.invokeStatic(util.clj:168)"
"sync.util$do_sync_operation.invoke(util.clj:165)"
"sync$fn__49288$sync_database_BANG___49293$fn__49294.invoke(sync.clj:26)"
"sync$fn__49288$sync_database_BANG___49293.invoke(sync.clj:19)"
"events.sync_database$process_sync_database_event$fn__64936.invoke(sync_database.clj:36)")

I've seen a bug reported on the Metabase project (metabase/metabase#11002), but I'm wondering if the full Enum definition is really needed. If not, could we mitigate the problem directly in the driver by just setting the type as Enum8 or Enum16?
