This project is now deprecated.
The great folks from ING maintain a fork which uses the Datastax Java Driver 4+: https://github.com/ing-bank/cassandra-jdbc-wrapper
A JDBC wrapper for the Datastax Java Driver for Cassandra
License: Apache License 2.0
Hello.
We receive the error message "unconfigured table schema_keyspaces" when trying to connect to Cassandra. (We can connect with an older JDBC driver over RPC without any problems, so there is no problem on the Cassandra side.)
Do you have any plans to support Cassandra 3?
If that's the case, what's the time frame?
Please see the real issue after the comment below.
Hi. Thanks for writing this wrapper. I encountered a problem when trying to use the driver in Solr.
Caused by: java.lang.IllegalArgumentException: duplicate key: url
at com.github.adejanovski.cassandra.jdbc.CassandraDriver.connect(CassandraDriver.java:97)
For reference, my Solr data-config.xml consists of the following:
Looks like the "url" attribute of the dataSource is causing it.
Cheers!
Selecting a bigint column from Cassandra in RJDBC gives this error:
Error in .jcall(rp,"I","fetch",stride,block):
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [bigint <--> java.lang.Double]
How would I go about creating a custom codec?
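I haven't tried this against your build, but with the DataStax 3.x driver the usual approach is a custom codec that maps bigint to java.lang.Double. The driver-specific plumbing is sketched in the comments (class and registration names are the driver's; the codec class name here is hypothetical); the conversion logic itself, the part a MappingCodec subclass would implement, is simply this:

```java
// Sketch of the value conversion a bigint <-> Double codec would perform.
// In the real driver you would extend com.datastax.driver.extras.codecs.MappingCodec
// (driver-extras module), wrap TypeCodec.bigint(), and register an instance via
// cluster.getConfiguration().getCodecRegistry().register(new BigintToDoubleCodec());
public class BigintDoubleConversion {

    // bigint arrives from the wire as Long; RJDBC asks for Double
    static Double deserialize(Long value) {
        return value == null ? null : value.doubleValue();
    }

    // ...and back again when binding a Double to a bigint column
    static Long serialize(Double value) {
        return value == null ? null : value.longValue();
    }

    public static void main(String[] args) {
        System.out.println(deserialize(42L)); // 42.0
        System.out.println(serialize(42.0));  // 42
    }
}
```

Note that longs above 2^53 lose precision when widened to double, so this mapping is only safe if your bigint values stay within that range.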
I need PreparedStatement#getParameterMetaData().
I could not fetch columns of type date and time from the Cassandra database using this connector; while generating the report in Jaspersoft Studio, the following error is displayed:
net.sf.jasperreports.engine.JRException: net.sf.jasperreports.engine.JRException: Unable to get value for result set field "pow_date" of class java.util.Date.
at com.jaspersoft.studio.editor.preview.view.control.ReportController.fillReport(ReportController.java:551)
at com.jaspersoft.studio.editor.preview.view.control.ReportController.access$18(ReportController.java:526)
at com.jaspersoft.studio.editor.preview.view.control.ReportController$1.run(ReportController.java:444)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: net.sf.jasperreports.engine.JRException: Unable to get value for result set field "pow_date" of class java.util.Date.
at net.sf.jasperreports.engine.JRResultSetDataSource.getFieldValue(JRResultSetDataSource.java:389)
at net.sf.jasperreports.engine.fill.JRFillDataset.setOldValues(JRFillDataset.java:1501)
at net.sf.jasperreports.engine.fill.JRFillDataset.next(JRFillDataset.java:1402)
at net.sf.jasperreports.engine.fill.JRFillDataset.next(JRFillDataset.java:1378)
at net.sf.jasperreports.engine.fill.JRBaseFiller.next(JRBaseFiller.java:1194)
at net.sf.jasperreports.engine.fill.JRVerticalFiller.fillReport(JRVerticalFiller.java:108)
at net.sf.jasperreports.engine.fill.JRBaseFiller.fill(JRBaseFiller.java:615)
at net.sf.jasperreports.engine.fill.BaseFillHandle$ReportFill.run(BaseFillHandle.java:135)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [date <-> java.util.Date]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:679)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:526)
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:506)
at com.datastax.driver.core.CodecRegistry.access$200(CodecRegistry.java:140)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:211)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:208)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
at com.google.common.cache.LocalCache.get(LocalCache.java:3934)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
at com.datastax.driver.core.CodecRegistry.lookupCodec(CodecRegistry.java:480)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:448)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:430)
at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:69)
at com.datastax.driver.core.AbstractGettableByIndexData.getTimestamp(AbstractGettableByIndexData.java:165)
at com.datastax.driver.core.AbstractGettableData.getTimestamp(AbstractGettableData.java:26)
at com.github.adejanovski.cassandra.jdbc.CassandraResultSet.getDate(CassandraResultSet.java:469)
at net.sf.jasperreports.engine.JRResultSetDataSource.readDate(JRResultSetDataSource.java:401)
at net.sf.jasperreports.engine.JRResultSetDataSource.getFieldValue(JRResultSetDataSource.java:21
Please suggest any appropriate method to solve the issue.
Hi Alexander,
I am on the 3.0.3-snapshot branch, which fixes the previous bug. However, I have found that in MetadataResultSets:170 there is the following definition for charol:
//CHAR_OCTET_LENGTH
Integer charol = null;
if (jtype instanceof JdbcAscii || jtype instanceof JdbcUTF8) charol = Integer.MAX_VALUE;
For those datatypes that are not covered (e.g. JdbcDate), trying to create a column based on the retrieved metadata results in the following exception:
Caused by: java.lang.NumberFormatException: For input string: "null"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) ~[na:1.8.0_91]
at java.lang.Integer.parseInt(Integer.java:580) ~[na:1.8.0_91]
at java.lang.Integer.valueOf(Integer.java:766) ~[na:1.8.0_91]
at liquibase.snapshot.CachedRow.getInt(CachedRow.java:38) ~[liquibase-core-3.4.1.jar:na]
at liquibase.snapshot.jvm.ColumnSnapshotGenerator.readDataType(ColumnSnapshotGenerator.java:315) ~[liquibase-core-3.4.1.jar:na]
at liquibase.snapshot.jvm.ColumnSnapshotGenerator.readColumn(ColumnSnapshotGenerator.java:216) ~[liquibase-core-3.4.1.jar:na]
at liquibase.snapshot.jvm.ColumnSnapshotGenerator.addTo(ColumnSnapshotGenerator.java:111) ~[liquibase-core-3.4.1.jar:na]
My workaround was just to set it to Integer.MAX_VALUE by default (is that wrong?), and it worked:
Integer charol = Integer.MAX_VALUE;
Cheers,
Latest DBEAVER Community Edition 6.3.2
Latest cassandra-jdbc-wrapper 3.1.0 (full fat pre-bake jar)
It connects fine and I can see keyspaces and "tables", but when trying to access the table properties or data:
org.jkiss.dbeaver.DBException: null
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:209)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:109)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$17.run(ResultSetViewer.java:3434)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:103)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: java.lang.NumberFormatException: null
at java.base/java.lang.Integer.parseInt(Unknown Source)
at java.base/java.lang.Integer.parseInt(Unknown Source)
at com.github.adejanovski.cassandra.jdbc.MetadataRow.getInt(MetadataRow.java:119)
at com.github.adejanovski.cassandra.jdbc.MetadataRow.getInt(MetadataRow.java:124)
at com.github.adejanovski.cassandra.jdbc.CassandraMetadataResultSet.getInt(CassandraMetadataResultSet.java:503)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCResultSetImpl.getInt(JDBCResultSetImpl.java:485)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.safeGetInt(JDBCUtils.java:109)
at org.jkiss.dbeaver.ext.generic.model.GenericUtils.safeGetInt(GenericUtils.java:84)
at org.jkiss.dbeaver.ext.generic.model.TableCache.fetchChild(TableCache.java:149)
at org.jkiss.dbeaver.ext.generic.model.TableCache.fetchChild(TableCache.java:1)
at org.jkiss.dbeaver.model.impl.jdbc.cache.JDBCStructCache.loadChildren(JDBCStructCache.java:129)
at org.jkiss.dbeaver.model.impl.jdbc.cache.JDBCStructCache.getChildren(JDBCStructCache.java:221)
at org.jkiss.dbeaver.ext.generic.model.GenericTableBase.getAttributes(GenericTableBase.java:149)
at org.jkiss.dbeaver.ext.generic.model.GenericTableBase.getAttributes(GenericTableBase.java:1)
at org.jkiss.dbeaver.model.impl.jdbc.struct.JDBCTable.readRequiredMeta(JDBCTable.java:922)
at org.jkiss.dbeaver.model.impl.jdbc.struct.JDBCTable.readData(JDBCTable.java:130)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:111)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:154)
... 4 more
Hi,
I'm trying to query C* 3.7 with your driver, but my column family (table) model contains a list of frozen tuples of strings and the driver doesn't support it, as in this CQL example:
CREATE TABLE IF NOT EXISTS test.users(
user_id uuid,
time timeuuid,
created_at date STATIC,
info map<text, text>,
ancestors list<frozen<tuple<text,text>>>,
childs list<frozen<tuple<text,text>>>
I don't know how to build a JDBC driver, but do you know if this type is easy to handle? How could I fix (update) your driver to do the job?
Thanks !
Error in .jcall(rp, "I", "fetch", stride, block) :
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [frozen<tuple<varchar, varchar>> <-> java.lang.String]
Is it possible to connect to Azure Cassandra with this driver?
And how do I connect via SSL? Is that possible?
Thanks.
When calling the method com.github.adejanovski.cassandra.jdbc.CassandraResultSet#getTimestamp(int, java.util.Calendar), I get a CodecNotFoundException.
This error occurs because the method delegates to com.github.adejanovski.cassandra.jdbc.CassandraResultSet#getTimestamp(int), passing a wrong value in the index parameter.
public Timestamp getTimestamp(int index) throws SQLException {
    this.checkIndex(index);
    java.util.Date date = this.currentRow.getTimestamp(index - 1);
    return date == null ? null : new Timestamp(this.currentRow.getTimestamp(index - 1).getTime());
}

public Timestamp getTimestamp(int index, Calendar calendar) throws SQLException {
    this.checkIndex(index);
    java.util.Date date = this.currentRow.getTimestamp(index - 1);
    return date == null ? null : this.getTimestamp(index - 1);
}
If the method getTimestamp(int, java.util.Calendar) receives 2 in the index parameter, it then calls getTimestamp(int) with index - 1. This causes the call to this.currentRow.getTimestamp(index - 1) to be made against the previous column, not the current one.
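The double decrement can be shown with a driver-free sketch (class and array here are illustrative stand-ins, not the wrapper's real internals): JDBC indexes are 1-based and the underlying driver row is 0-based, so each delegation must pass along the original 1-based index, subtracting 1 only at the final access.

```java
import java.sql.Timestamp;
import java.util.Calendar;

public class TimestampIndexDemo {
    // 0-based columns, standing in for com.datastax.driver.core.Row
    static final Timestamp[] ROW = {
        new Timestamp(1000L), // JDBC column 1
        new Timestamp(2000L)  // JDBC column 2
    };

    // JDBC-style 1-based accessor: subtracts 1 exactly once
    static Timestamp getTimestamp(int index) {
        return ROW[index - 1];
    }

    // Buggy overload: decrements again before delegating
    static Timestamp getTimestampBuggy(int index, Calendar cal) {
        return getTimestamp(index - 1); // double decrement -> previous column
    }

    // Fixed overload: delegates with the original 1-based index
    static Timestamp getTimestampFixed(int index, Calendar cal) {
        return getTimestamp(index);
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        System.out.println(getTimestampBuggy(2, cal).getTime()); // 1000 -> wrong column
        System.out.println(getTimestampFixed(2, cal).getTime()); // 2000 -> correct
    }
}
```

With index 1, the buggy overload would even index the row at -1 and throw, which is consistent with the delegation being off by one.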
com.datastax.driver.core.exceptions.SyntaxError: line 0:-1 no viable alternative at input ''
at com.github.adejanovski.cassandra.jdbc.CassandraStatement.doExecute(CassandraStatement.java:222)
at com.github.adejanovski.cassandra.jdbc.CassandraStatement.execute(CassandraStatement.java:230)
This happens for the most basic table creation. However, the tables are successfully created in Cassandra.
Using something like jdbc:cassandra://192.168.56.201--192.168.56.202:9042/key1 doesn't work:
JDBC-Driver: cassandra-jdbc-wrapper-3.0.3.jar
Product: DbVisualizer Free 9.5.2 [Build #2598]
OS: Windows 7
OS Version: 6.1
OS Arch: x86
Java Version: 1.8.0_77
Java VM: Java HotSpot(TM) Client VM
Java Vendor: Oracle Corporation
An error occurred while establishing the connection:
Long Message:
Connection url must specify a host, e.g., jdbc:cassandra://localhost:9042/Keyspace1
When we try to insert a java.sql.Date into a date-typed column in Cassandra, it throws a codec-not-found exception.
What would be the right way to do this? @adejanovski
This happens with Solr 6.3 at the very end of a huge data import.
getNext() failed for query 'select * from mykeyspace.mytable':org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLRecoverableException: method was called on a closed Statement
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:464)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:377)
at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:133)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:475)
at org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:458)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLRecoverableException: method was called on a closed Statement
at com.github.adejanovski.cassandra.jdbc.CassandraStatement.checkNotClosed(CassandraStatement.java:137)
at com.github.adejanovski.cassandra.jdbc.CassandraStatement.getMoreResults(CassandraStatement.java:329)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:458)
Solr can't deal with that exception.
Hello,
I am using a Cassandra 3 cluster (5 machines) and version 3.1.0 of your driver.
It seems that we cannot specify other values for the socket options, because in SessionHolder I see only the following:
builder.withSocketOptions(new SocketOptions().setKeepAlive(true));
Can we somehow get a new version with the possibility to specify:
Thanks in advance.
Hi Alexander,
I am using your JDBC wrapper for a Cassandra JDBC data operator, and I receive the following NPE while trying to get a ResultSet from a table that contains a timestamp column.
It seems you do not handle a null value when the type is timestamp:
else if (typeName.equals("timestamp")) {
    return (Object) new Timestamp((currentRow.getTimestamp(index - 1)).getTime());
}
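A null-guarded variant of that branch might look like the following standalone sketch (the method name and parameter are illustrative; the point is to test the driver value before dereferencing it):

```java
import java.sql.Timestamp;
import java.util.Date;

public class NullSafeTimestamp {
    // Convert a possibly-null driver value without dereferencing null,
    // mirroring what the "timestamp" branch in getObject() should do.
    static Timestamp toSqlTimestamp(Date driverValue) {
        return driverValue == null ? null : new Timestamp(driverValue.getTime());
    }

    public static void main(String[] args) {
        System.out.println(toSqlTimestamp(null));                      // null
        System.out.println(toSqlTimestamp(new Date(1000L)).getTime()); // 1000
    }
}
```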
Thanks in advance for your answer.
Michael
Hey,
I was trying to use the JDBC wrapper to batch-insert (using the batch update of Spring's JdbcTemplate).
It is basically an array of SQL insert statements (strings).
Now, what happens is that the check for whether these update statements are valid is whether CassandraStatement#execute returns true. It returns true because the only check performed is "this.currentResultSet != null", and the result set is indeed not null, even though "this.currentResultSet.hasMoreRows()" immediately returns false (so no rows are returned in this result set).
When execute returns true, Spring's JdbcTemplate throws an exception saying it is an invalid batch SQL statement.
The only thing needed is to turn this check into "this.currentResultSet != null && this.currentResultSet.hasMoreRows()" or an equivalent.
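For what it's worth, java.sql.Statement#execute is specified to return true only when the first result is a ResultSet, so an internal holder that is non-null but row-less after an INSERT should yield false. A driver-free sketch of the two behaviors (ResultHolder is an illustrative stand-in for the wrapper's internal result set, not a real class):

```java
public class ExecuteReturnDemo {
    // Illustrative stand-in for the wrapper's internal result set
    static class ResultHolder {
        final int rows;
        ResultHolder(int rows) { this.rows = rows; }
        boolean hasMoreRows() { return rows > 0; }
    }

    // Current behavior: any non-null holder counts as "returned a ResultSet"
    static boolean executeCurrent(ResultHolder r) {
        return r != null;
    }

    // Suggested behavior: also require that the holder actually has rows
    static boolean executeSuggested(ResultHolder r) {
        return r != null && r.hasMoreRows();
    }

    public static void main(String[] args) {
        ResultHolder insertResult = new ResultHolder(0); // e.g. after an INSERT
        System.out.println(executeCurrent(insertResult));   // true  -> JdbcTemplate complains
        System.out.println(executeSuggested(insertResult)); // false -> treated as an update
    }
}
```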
Thanks :)
EDIT:
Also, when opening a connection, it prints "Datacenter: %s; Host: %s; Rack: %s" (a forgotten String.format()).
Hi, I've been studying Cassandra a little and have found a possible issue.
I've followed this tutorial.
Columns emp_phone and emp_sal are of the "varint" data type. Data was inserted as described in the tutorial.
After a simple select * from emp;, I see the following error in the GUI (while in cqlsh the select works fine):
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varint <-> java.lang.Integer]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:679)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:526)
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:506)
at com.datastax.driver.core.CodecRegistry.access$200(CodecRegistry.java:140)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:211)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:208)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
at com.google.common.cache.LocalCache.get(LocalCache.java:3934)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
at com.datastax.driver.core.CodecRegistry.lookupCodec(CodecRegistry.java:480)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:448)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:430)
at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:69)
at com.datastax.driver.core.AbstractGettableByIndexData.getInt(AbstractGettableByIndexData.java:139)
at com.datastax.driver.core.AbstractGettableData.getInt(AbstractGettableData.java:26)
at com.github.adejanovski.cassandra.jdbc.CassandraResultSet.getObject(CassandraResultSet.java:757)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCResultSetImpl.getObject(JDBCResultSetImpl.java:612)
at org.jkiss.dbeaver.model.impl.jdbc.data.handlers.JDBCNumberValueHandler.fetchColumnValue(JDBCNumberValueHandler.java:154)
at org.jkiss.dbeaver.model.impl.jdbc.data.handlers.JDBCAbstractValueHandler.fetchValueObject(JDBCAbstractValueHandler.java:49)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetDataReceiver.fetchRow(ResultSetDataReceiver.java:124)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.fetchQueryData(SQLQueryJob.java:679)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:506)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:424)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:164)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:416)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:774)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:2914)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:110)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:164)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:108)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$17.run(ResultSetViewer.java:3421)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:103)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Cassandra Version - [cqlsh 5.0.1 | Cassandra 3.0.12.1586 | DSE 5.0.7 | CQL spec 3.4.0 | Native protocol v4]
Soap UI Version - 5.2.1/5.3
When trying to connect to Cassandra through SoapUI using com.github.adejanovski.cassandra.jdbc.CassandraDriver, I am unable to send the credentials for the Cassandra server. Could you please share the format for a connection string with credentials?
I am using the below groovy script.
1) With No Credentials
import java.sql.DriverManager;
import com.github.adejanovski.cassandra.jdbc.CassandraDriver;
com.eviware.soapui.support.GroovyUtils.registerJdbcDriver("com.github.adejanovski.cassandra.jdbc.CassandraDriver");
def con = DriverManager.getConnection("jdbc:cassandra://172.17.80.171:9042/transactionhistorylookup");
def stmt = con.createStatement();
Error message
java.sql.SQLNonTransientConnectionException: com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /172.17.80.171:9042: Host /172.17.80.171:9042 requires authentication, but no authenticator found in Cluster configuration error at line: 7
2) With Credentials
When I try to pass the credentials in the Connection string as below, I get the following error.
def con = DriverManager.getConnection("jdbc:cassandra:cassandra/cassandra//172.17.80.171:9042/transactionhistorylookup","cassandra","cassandra");
Error Message - java.sql.SQLNonTransientConnectionException: Connection url must specify a host, e.g., jdbc:cassandra://localhost:9042/Keyspace1 error at line: 8
WARNING: An internal object pool swallowed an Exception.
java.sql.SQLFeatureNotSupportedException: the Cassandra implementation is always in auto-commit mode
at com.github.adejanovski.cassandra.jdbc.CassandraConnection.rollback(CassandraConnection.java:379)
at org.apache.commons.dbcp2.DelegatingConnection.rollback(DelegatingConnection.java:492)
I'm using this driver with RJDBC (https://cran.r-project.org/web/packages/RJDBC/index.html), and am able to successfully connect to my local Cassandra instance and retrieve a list of tables. When I try to run a query, I get
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [int <-> java.lang.Double]
Hi, we have a requirement where we need to use joins and GROUP BY functions. Will this driver provide support for joins and GROUP BY?
Assuming I execute a select query like this one:
SELECT id, ts_update FROM mytable ;
where ts_update is a timestamp.
I used the Cassandra JDBC wrapper with MyBatis 3.4.6. My column ts_update is mapped to a field of type OffsetDateTime in a Java bean.
In this version of MyBatis, the mapping of timestamp columns was implemented as follows:
public OffsetDateTime getNullableResult(ResultSet rs, String columnName) throws SQLException {
    Timestamp timestamp = rs.getTimestamp(columnName);
    return getOffsetDateTime(timestamp);
}
So, it worked with the current version of CassandraResultSet (3.1.0). But when I updated MyBatis to version 3.5.1, I got the following error:
Caused by: java.sql.SQLFeatureNotSupportedException: the Cassandra implementation does not support this method
at com.github.adejanovski.cassandra.jdbc.AbstractResultSet.getObject(AbstractResultSet.java:136)
at com.zaxxer.hikari.pool.HikariProxyResultSet.getObject(HikariProxyResultSet.java)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.ibatis.logging.jdbc.ResultSetLogger.invoke(ResultSetLogger.java:69)
at com.sun.proxy.$Proxy161.getObject(Unknown Source)
at org.apache.ibatis.type.OffsetDateTimeTypeHandler.getNullableResult(OffsetDateTimeTypeHandler.java:38)
at org.apache.ibatis.type.OffsetDateTimeTypeHandler.getNullableResult(OffsetDateTimeTypeHandler.java:28)
at org.apache.ibatis.type.BaseTypeHandler.getResult(BaseTypeHandler.java:81)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.getPropertyMappingValue(DefaultResultSetHandler.java:472)
The issue comes from the fact that the method getObject(String, Class<T>) is not implemented in CassandraResultSet but is called by the new implementation of the OffsetDateTime handling:
public OffsetDateTime getNullableResult(ResultSet rs, String columnName) throws SQLException {
    return (OffsetDateTime) rs.getObject(columnName, OffsetDateTime.class);
}
So, the mentioned method should be implemented.
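A possible shape for the missing piece, sketched without the wrapper's internals: inside getObject(String, Class<T>), a request for OffsetDateTime.class could delegate to getTimestamp(columnName) and convert the result. The conversion itself is shown below; the choice of UTC as the offset is my assumption, not something the wrapper specifies.

```java
import java.sql.Timestamp;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class GetObjectSketch {
    // Minimal conversion getObject(column, OffsetDateTime.class) would need:
    // a retrieved java.sql.Timestamp turned into an OffsetDateTime.
    // NOTE: pinning the offset to UTC is an assumption of this sketch.
    static OffsetDateTime toOffsetDateTime(Timestamp ts) {
        return ts == null ? null : ts.toInstant().atOffset(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        Timestamp epoch = new Timestamp(0L);
        System.out.println(toOffsetDateTime(epoch)); // 1970-01-01T00:00Z
        System.out.println(toOffsetDateTime(null));  // null
    }
}
```

In CassandraResultSet itself this would sit behind a type dispatch on the requested Class<T>, falling back to an SQLFeatureNotSupportedException for types it cannot convert.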
Hi Alexander,
I am trying to put liquibase into action on Cassandra using your JDBC wrapper, and I receive the following NPE while trying to create a snapshot of a table:
Caused by: java.lang.NullPointerException
at com.github.adejanovski.cassandra.jdbc.MetadataResultSets.makeColumns(MetadataResultSets.java:163)
at com.github.adejanovski.cassandra.jdbc.CassandraDatabaseMetaData.getColumns(CassandraDatabaseMetaData.java:135)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData$3.fastFetchQuery(JdbcDatabaseSnapshot.java:277)
at liquibase.snapshot.ResultSetCache$SingleResultSetExtractor.fastFetch(ResultSetCache.java:290)
at liquibase.snapshot.ResultSetCache.get(ResultSetCache.java:58)
at liquibase.snapshot.JdbcDatabaseSnapshot$CachingDatabaseMetaData.getColumns(JdbcDatabaseSnapshot.java:239)
at liquibase.snapshot.jvm.ColumnSnapshotGenerator.addTo(ColumnSnapshotGenerator.java:108)
It seems that the statement loses its connection somewhere.
Thanks in advance for your answer,
Eneko.