datanucleus / datanucleus-cassandra
DataNucleus support for persistence to Cassandra datastores
It looks like the cluster builder is created without the ability to change any parameters. Is there any workaround for this?
The Java driver has been totally reworked, throwing away backwards compatibility, so this means a major upgrade.
A migration guide is at https://docs.datastax.com/en/developer/java-driver/4.13/upgrade_guide/#4-0-0
Hi,
I'm using JClouds with DataNucleus JPA for Cassandra (#33) and all the tests fail because of this error:
testBlobExists
javax.persistence.NonUniqueResultException: Expected a single result for query: SELECT FROM org.jclouds.jdbc.entity.ContainerEntity c WHERE c.name = :name : The query returned more than one instance BUT either unique is set to true or only aggregates are to be returned, so should have returned one result maximum
org.datanucleus.store.query.QueryNotUniqueException: The query returned more than one instance BUT either unique is set to true or only aggregates are to be returned, so should have returned one result maximum
at org.datanucleus.store.query.Query.executeQuery(Query.java:2001)
at org.datanucleus.store.query.Query.executeWithMap(Query.java:1873)
at org.datanucleus.api.jpa.JPAQuery.getSingleResult(JPAQuery.java:257)
at org.jclouds.jdbc.repository.ContainerRepository.findContainerByName(ContainerRepository.java:40)
at org.jclouds.jdbc.service.JdbcService.findContainerByName(JdbcService.java:86)
at org.jclouds.jdbc.service.JdbcService$$EnhancerByGuice$$99921f29.CGLIB$findContainerByName$10(<generated>)
at org.jclouds.jdbc.service.JdbcService$$EnhancerByGuice$$99921f29$$FastClassByGuice$$5a562b39.invoke(<generated>)
at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:72)
at com.google.inject.persist.jpa.JpaLocalTxnInterceptor.invoke(JpaLocalTxnInterceptor.java:70)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:72)
at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:52)
at org.jclouds.jdbc.service.JdbcService$$EnhancerByGuice$$99921f29.findContainerByName(<generated>)
at org.jclouds.jdbc.strategy.JdbcStorageStrategy.containerExists(JdbcStorageStrategy.java:93)
at org.jclouds.blobstore.config.LocalBlobStore.blobExists(LocalBlobStore.java:626)
at com.google.inject.internal.DelegatingInvocationHandler.invoke(DelegatingInvocationHandler.java:37)
at com.sun.proxy.$Proxy46.blobExists(Unknown Source)
at org.jclouds.jdbc.BaseJdbcBlobStoreTest.testBlobExists(BaseJdbcBlobStoreTest.java:460)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
... Removed 28 stack frames
The cause is that the unique constraint defined on the name attribute is not respected (https://github.com/jclouds/jclouds-labs/blob/master/jdbc/src/main/java/org/jclouds/jdbc/entity/ContainerEntity.java#L37):
@Entity
@Table
public class ContainerEntity {
    @Id
    @GeneratedValue
    private Long id;

    @Column(unique = true)
    private String name;

    private Date creationDate;
    private ContainerAccess containerAccess;

    // ... constructor, getters and setters ...
}
Thus, the following method generates the exception described above (https://github.com/jclouds/jclouds-labs/blob/master/jdbc/src/main/java/org/jclouds/jdbc/repository/ContainerRepository.java#L36):
public ContainerEntity findContainerByName(String name) {
    try {
        return entityManager.get()
                .createQuery("SELECT c FROM " + entityClass.getName() + " c WHERE c.name = :name", entityClass)
                .setParameter("name", name)
                .getSingleResult();
    } catch (NoResultException e) {
        return null;
    }
}
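Since Cassandra does not enforce `@Column(unique = true)`, one defensive workaround on the caller side is to use getResultList() instead of getSingleResult() and take the first row. A minimal sketch; `firstOrNull` is a hypothetical helper, not jclouds code:

```java
import java.util.List;

class QueryWorkaroundSketch {
    // Hypothetical helper: tolerate duplicate rows that Cassandra allowed
    // despite @Column(unique = true), by taking the first match (or null).
    static <T> T firstOrNull(List<T> results) {
        return results.isEmpty() ? null : results.get(0);
    }

    // Usage would replace .getSingleResult() with:
    //   firstOrNull(query.setParameter("name", name).getResultList());
}
```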
Thanks
Youcef HILEM
This code fragment, copied into
src/main/java/org/datanucleus/samples/jdo/tutorial/Main.java,
gives an error:
Query q=pm.newQuery("javax.jdo.query.JPQL", "Select p.name from Inventory as inventory join inventory.products as p");
q.declareImports("import org.datanucleus.samples.jdo.tutorial.*");
List products = (List)q.execute();
Result: org.datanucleus.query.inmemory.InMemoryFailure
Pass through to the Cassandra Builder object
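One way such a pass-through could look, sketched with plain collections: persistence properties carrying an assumed prefix are collected and then handed to the driver's Cluster.Builder when the session is created. The prefix and method names here are illustrative, not the plugin's actual API:

```java
import java.util.HashMap;
import java.util.Map;

class BuilderPassThroughSketch {
    // Hypothetical: collect properties under an assumed prefix so they can be
    // forwarded to the Datastax Cluster.Builder at connection time.
    static Map<String, String> extractDriverOptions(Map<String, String> props) {
        Map<String, String> out = new HashMap<>();
        String prefix = "datanucleus.cassandra.";
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }
}
```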
When a join is performed, if the field keys are of type string the operation works; however, if the field keys are bigint or int, the join fails because the wrong row.getXXX() operation is called.
public class User implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE)
    @Basic(optional = false)
    @Column(name = "user_id", nullable = false)
    private Long userId;

    @Basic(optional = false)
    @Column(name = "email", nullable = false, length = 255)
    private String email;

    @Basic(optional = false)
    @Column(name = "password", nullable = false, length = 64)
    private String password;

    // ... constructor, getters and setters ...
}
public class Notepad implements Serializable {
    @Id
    @Basic(optional = false)
    @Column(name = "notepad_id", nullable = false)
    private Long notepadId;

    @Basic(optional = false)
    @Column(name = "user_id_notepad", nullable = false)
    private Long userIdNotepad;

    @OneToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "user_id_notepad", referencedColumnName = "user_id", insertable = false, updatable = false)
    private User user;

    // ... constructor, getters and setters ...
}
When JPA is used to fetch the Notepad entity, e.g. via a query method such as:
final TypedQuery<Notepad> q = em.createNamedQuery("Notepad.findByNotepadId", Notepad.class);
q.setParameter("notepadId", 123L);
List<Notepad> list = q.getResultList();
(though this could be any type of fetch, e.g. em.find(Notepad.class, 123L)), the following error results:
SEVERE: EjbTransactionUtil.handleSystemException: Value user_id is of type bigint
com.datastax.driver.core.exceptions.InvalidTypeException: Value user_id is of type bigint
at com.datastax.driver.core.AbstractGettableByIndexData.checkType(AbstractGettableByIndexData.java:89)
at com.datastax.driver.core.AbstractGettableByIndexData.getString(AbstractGettableByIndexData.java:212)
at com.datastax.driver.core.AbstractGettableData.getString(AbstractGettableData.java:26)
at com.datastax.driver.core.AbstractGettableData.getString(AbstractGettableData.java:139)
at org.datanucleus.store.cassandra.fieldmanager.FetchFieldManager.fetchNonEmbeddedObjectField(FetchFieldManager.java:241)
at org.datanucleus.store.cassandra.fieldmanager.FetchFieldManager.fetchObjectField(FetchFieldManager.java:226)
at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:1590)
Method: protected Object fetchNonEmbeddedObjectField(AbstractMemberMetaData mmd, RelationType relationType, ClassLoaderResolver clr)
230 {
231 int fieldNumber = mmd.getAbsoluteFieldNumber();
232 MemberColumnMapping mapping = getColumnMapping(fieldNumber);
233
234 if (RelationType.isRelationSingleValued(relationType))
235 {
236 if (row.isNull(mapping.getColumn(0).getName()))
237 {
238 return null;
239 }
240
241 Object value = row.getString(mapping.getColumn(0).getName());
242 return getValueForSingleRelationField(mmd, value, clr);
243 }
244 else if (RelationType.isRelationMultiValued(relationType))
245 {
.....
In the code above, at line 241, the column type should be tested using mapping.getColumn(0).getTypeName(): if it is 'bigint', call row.getLong(mapping.getColumn(0).getName()); if it is 'int', call row.getInt(mapping.getColumn(0).getName()); if it is neither of these, default to row.getString() as before.
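The proposed fix can be sketched as a dispatch on the column's type name. A plain Map stands in here for the driver's Row, so the sketch stays self-contained; the real code would call the corresponding row.getLong()/getInt()/getString() accessors:

```java
import java.util.Map;

class ColumnTypeDispatchSketch {
    // Choose the accessor from the column type name instead of always
    // calling getString(); the Map stands in for the Datastax Row.
    static Object fetchValue(Map<String, ?> row, String column, String typeName) {
        if ("bigint".equals(typeName)) {
            return (Long) row.get(column);    // row.getLong(column)
        } else if ("int".equals(typeName)) {
            return (Integer) row.get(column); // row.getInt(column)
        }
        return (String) row.get(column);      // row.getString(column), as before
    }
}
```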
Cassandra requires a compatible type when assigning to the surrogate columns, and we don't currently do this for the SOFTDELETE column. It should be "boolean".
The jdo/general I18NTest tries to create identifiers with accented characters, which Cassandra rejects. If they were quoted then this should work.
With Cassandra a persist is an UPSERT, hence it replaces existing data if there is any. This is not always what we want, so we should provide an option to always check for existence before an INSERT.
See PersistenceNucleusContext.
Refer to datanucleus/datanucleus-core#134
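Cassandra's lightweight transactions offer one way to implement such a check: appending IF NOT EXISTS makes the INSERT conditional, and the result set then carries an [applied] column indicating whether the row was written. A minimal sketch over the generated CQL string:

```java
class ConditionalInsertSketch {
    // Sketch: turn a generated INSERT into a conditional one so an existing
    // row is not silently overwritten (Cassandra lightweight transaction).
    static String insertIfNotExists(String insertCql) {
        return insertCql + " IF NOT EXISTS";
    }
}
```

Note that lightweight transactions involve a Paxos round and are considerably more expensive than a plain write, so this would need to stay optional.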
Hi,
Since Cassandra 3.4, LIKE queries can be achieved using an SSTable Attached Secondary Index (SASI).
JPQL supports the SQL LIKE operator to provide a limited form of string pattern matching.
Is this feature supported with SASI?
Thanks
Y. HILEM
Persisting an entity graph results in javax.persistence.PersistenceException: Unable to add column with name=depositor_id to table=depositor for class=eu.drus.jpa.unit.test.model.Depositor since one with same name already exists (superclass?).
The issue can be reproduced by running the github.com/dadrus/jpa-unit CleanupTest.
Refer to core issue 19
I need Cassandra 3.0 support, and DataNucleus doesn't yet support it because it only just came out.
Cassandra CQL3 has a BATCH statement where you can batch together many insert/update/delete statements. This could be used in conjunction with a custom "FlushProcess".
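A sketch of what such a FlushProcess could emit, using plain string assembly (names are illustrative, not the plugin's API): the queued statements are wrapped in a single BEGIN BATCH ... APPLY BATCH block.

```java
import java.util.List;

class BatchSketch {
    // Wrap buffered INSERT/UPDATE/DELETE statements in one CQL BATCH,
    // as a custom FlushProcess could do when flushing queued operations.
    static String toBatch(List<String> statements) {
        StringBuilder cql = new StringBuilder("BEGIN BATCH\n");
        for (String s : statements) {
            cql.append("  ").append(s).append(";\n");
        }
        return cql.append("APPLY BATCH;").toString();
    }
}
```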
When we retrieve a List from Cassandra we retrieve it in the exact same order as it was persisted; that is, it is stored as a Collection in the owner object. If a user puts an @OrderBy annotation on the field they may want the list reordered.
This applies to ALL non-RDBMS datastores, so creating some generic code that reorders the elements in-memory, maybe using a Comparator, would make sense.
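The in-memory reordering could be as simple as sorting the retrieved list with a Comparator built from the @OrderBy field. A sketch with an illustrative key extractor standing in for reflective field access:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

class OrderBySketch {
    // Reorder a retrieved List in memory, as a generic non-RDBMS post-fetch
    // step could do for an @OrderBy("name ASC") style annotation.
    static <T, K extends Comparable<K>> List<T> reorder(List<T> fetched, Function<T, K> key, boolean ascending) {
        List<T> copy = new ArrayList<>(fetched);
        Comparator<T> cmp = Comparator.comparing(key);
        copy.sort(ascending ? cmp : cmp.reversed());
        return copy;
    }
}
```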
We currently only support the SoftDelete schema requirement. We also need to support setting the delete flag (to false) on a persist, checking it (as false) on a fetch, and setting it (to true) on a delete.
When we do a JDOQL/JPQL query of a candidate that has a multitenancy or softdelete column we need to restrict the candidate by the associated surrogate column(s). So, for multitenancy, restrict just to the current tenant, and for softDelete, restrict to the delete flag being FALSE.
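The restriction could be sketched as appending the surrogate conditions to the generated CQL WHERE clause. The column names TENANT_ID and DELETED are assumptions for illustration, not the plugin's actual identifiers:

```java
class SurrogateRestrictionSketch {
    // Append multitenancy and soft-delete restrictions to a generated
    // WHERE clause; TENANT_ID / DELETED are illustrative column names.
    static String restrict(String where, String tenantId, boolean softDeleteEnabled) {
        StringBuilder out = new StringBuilder(where);
        if (tenantId != null) {
            out.append(" AND TENANT_ID='").append(tenantId).append("'");
        }
        if (softDeleteEnabled) {
            out.append(" AND DELETED=false");
        }
        return out.toString();
    }
}
```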
Since CQL cannot join to related objects, we need to skip evaluation of filters of that nature in the datastore and evaluate them in-memory.
The 2.2 release of this driver makes non-backwards compatible changes!
Statement stmt = new SimpleStatement(cql); // Datastax 2.1
Statement stmt = session.newSimpleStatement(cql); // Datastax 2.2+
It introduces a datatype "date" (whereas before all date/timestamp were persisted as "timestamp").
It also introduces getters for Date/Timestamp separately on Row, whereas before we just had getDate() that handled Timestamp!
FetchSize should just be a hint to fetch that many records in one go, not a limit restricting access to only those records.
Currently, if the fetch size is set, it pulls in that many records in the initialise() method and then doesn't go any further!
Refer to core issue 128 for database creation/deletion.
Support for BULK_DELETE, BULK_UPDATE type Native CQL queries.
When we execute a query and use FetchFieldManager, we have to use the ExecutionContext constructor (i.e. the ObjectProvider is not yet known). This means that we cannot easily wrap any SCO fields in FetchFieldManager. Consequently we need to call
ObjectProvider.replaceAllLoadedSCOFieldsWithWrappers()
just after the FetchFieldManager process.
When we make use of a persistable class in the Cassandra plugin we currently go directly to "manageClasses", whereas we should simply check for StoreData, and only then go to the Cassandra "Session" object to check it.
Cassandra v4 maps "timestamp" onto java.time.Instant, "date" onto java.time.LocalDate and "time" onto java.time.LocalTime.
We should allow use of fields of those types in persistable classes.
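A persistable class using those mappings might then look like this (class and field names are illustrative):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalTime;

// Sketch of a persistable class using the java.time types that the v4
// driver maps natively.
class Event {
    Instant createdAt;   // would map to a Cassandra "timestamp" column
    LocalDate day;       // would map to a "date" column
    LocalTime startsAt;  // would map to a "time" column
}
```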