datanucleus / datanucleus-rdbms
DataNucleus support for persistence to RDBMS Datastores
Dup of issue 44 for 5.0 branch
Support for core-180.
Allow soft deletion of objects by configuring in metadata that deleted instances of class/interface X are soft-deleted.
e.g.
@PersistenceCapable
@SoftDelete
public class MyClass
{
...
}
We would need to add a surrogate column for the soft-delete flag (boolean), update retrieval of objects to check the flag, and change DeleteRequest to issue an UPDATE setting the flag.
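The shape of the SQL involved can be sketched as follows; the surrogate column name DELETED and the id-column parameter are illustrative assumptions, not actual DataNucleus naming.

```java
// Sketch of the SQL a soft-delete-aware store could issue.
// The surrogate column name DELETED and the id column are hypothetical.
public class SoftDeleteSql
{
    // What DeleteRequest issues today: a real DELETE
    static String hardDelete(String table, String idCol)
    {
        return "DELETE FROM " + table + " WHERE " + idCol + " = ?";
    }

    // What a soft-delete DeleteRequest would issue instead: an UPDATE of the flag
    static String softDelete(String table, String idCol)
    {
        return "UPDATE " + table + " SET DELETED = TRUE WHERE " + idCol + " = ?";
    }

    // Retrieval additionally filters out soft-deleted rows
    static String selectLive(String table, String idCol)
    {
        return "SELECT * FROM " + table + " WHERE " + idCol + " = ? AND DELETED = FALSE";
    }
}
```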
The log of the test :
10:08:19,171 (main) DEBUG [DataNucleus.Datastore.Schema] - Column "pymodule.name" added to internal representation of table.
10:08:19,171 (main) DEBUG [DataNucleus.Datastore.Schema] - Field [metamicro.jet.core.persistency.modules.PyModule.name] -> Column(s) [pymodule.name] using mapping of type "org.datanucleus.store.mapped.mapping.StringMapping" (org.datanucleus.store.rdbms.mapping.VarCharRDBMSMapping)
10:08:19,171 (main) DEBUG [DataNucleus.Datastore.Schema] - An error occurred while auto-creating schema elements - rolling back
10:08:19,171 (main) ERROR [DataNucleus.SchemaTool] - An exception was thrown during the operation of SchemaTool. Please refer to the log for full details. The following may help : Unsupported relationship with field metamicro.jet.core.persistency.modules.PyModule.parent
Unsupported relationship with field metamicro.jet.core.persistency.modules.PyModule.parent
org.datanucleus.exceptions.NucleusException: Unsupported relationship with field metamicro.jet.core.persistency.modules.PyModule.parent
at org.datanucleus.store.rdbms.table.ClassTable.initializePK(ClassTable.java:1024)
at org.datanucleus.store.rdbms.table.ClassTable.preInitialize(ClassTable.java:252)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.addClassTable(RDBMSManager.java:2563)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.addClassTables(RDBMSManager.java:2354)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.addClassTablesAndValidate(RDBMSManager.java:2625)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.run(RDBMSManager.java:2279)
at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
at org.datanucleus.store.rdbms.RDBMSManager.addClasses(RDBMSManager.java:912)
at org.datanucleus.store.rdbms.SchemaTool.createSchema(SchemaTool.java:673)
at org.datanucleus.store.rdbms.SchemaTool.main(SchemaTool.java:289)
jet-core-persistency-modules.jdo.gz
This would lead to one fewer DDL statement in schema creation, and certainly works with Oracle v11.
I am trying to move items from one persistent object's List to the List of another persistent object.
With
javax.jdo.option.Optimistic=true,
datanucleus.cache.level2.type=none, and
an index mapping with javax.jdo.annotations.Order(mappedBy="pos"),
there seems to be a problem with this move.
Some of the items lose their connection to the parent List. They are contained in neither the source List nor the destination List.
test-jdo.zip
The RDBMSManager schema management process is embodied in the "ClassAdder" process. This operates in its own transaction using a separate connection. The structure of RDBMSManager is overly complex as a result. We should simply provide a separate process giving it access to the key schema information, and have methods to do specific things
We cater for CaseStringExpression / CaseNumericExpression = null but not the general variant.
relation-discriminator-column should apply to n-1 join tables. Currently, only ElementContainerTable deals with it, which means that the parent must have a Collection of children for it to work. Since N-1 is a subset of N-M, I think it should be fairly trivial to implement this issue
When asked to implement it due to it "being trivial", the response was along the lines of "oh, I can't do things like that". Raised on the old DN forum.
Some RDBMS don't return enough info so we can't do accurate compare
Oracle (for storing large amounts of data, offline, in a BLOB column) requires some whacky process of inserting EMPTY_BLOB() on an INSERT and then retrieving and setting the actual value of the BLOB field. This is only implemented for tables of classes currently, and not for join tables.
v6.0 updates mean that we have OracleCollectionMapping, and the element mapping would be something like OracleBlobColumnMapping (in the join table). The add-element of the backing store fires off an INSERT (or UPDATE if it was to do one). This would need to call mapping.performSetPostProcessing(...).
The difficult part of this is tied to OracleBlobColumnMapping.setPostProcessing (and equivalent Clob method). This needs to do
SELECT {blob} FROM join_tbl WHERE ID = ?
but with a join table we don't have an "id", we have the owner, but that only restricts to all elements of the collection. We also (may, with an indexed List) have an index column. We need to restrict to a particular element of the collection (or particular key/value of the map).
Note that we could allow a BLOB to store less than 4k bytes (?) by just putting the value into the INSERT statement, but why use a BLOB in that case?
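The restriction that setPostProcessing would need for a join-table row can be sketched like this; all table/column names here are hypothetical placeholders, not the actual generated names.

```java
// Builds the SELECT that a join-table setPostProcessing would need:
// restrict by owner, plus an index column (indexed List) or key column (Map)
// to pin down the single element. All names are hypothetical.
public class JoinTableBlobSelect
{
    static String selectBlob(String joinTable, String blobCol, String ownerCol, String discrimCol)
    {
        String sql = "SELECT " + blobCol + " FROM " + joinTable + " WHERE " + ownerCol + " = ?";
        if (discrimCol != null)
        {
            // e.g. the index column of a List, or the key column of a Map
            sql += " AND " + discrimCol + " = ?";
        }
        // Oracle's EMPTY_BLOB() pattern then writes into the locator, so lock the row
        return sql + " FOR UPDATE";
    }
}
```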
If we have
class A
{
@Join
B[] bs;
}
class B
{
}
and do pm.makePersistent(a); this will persist the A, and the B(s), but not the join table entries (hence the array is not stored).
When we return the class name in a SELECT, where there is no discriminator provided, we select it as "NUCLEUS_TYPE". Elsewhere, in the SQL API, we use names like DN_DATASTOREID, DN_VERSION, etc so using DN_TYPE would be more consistent.
A transient Persistable object is not supported as a query parameter. We just put NULL into any query where this is done. We should log a WARNING that we are doing this.
Currently when a query uses JDOHelper.getObjectId it just considers the identity value and not the class. It should also have a mapping for the discriminator, or equivalent (when using union). In particular, we need to add the use-cases that this is aimed at, since in some cases it would not apply.
We replaced these with TypeConverters some time back.
We provide support for "bulk-fetch" of multi-valued fields when the user executes a query and the multi-valued field is in the fetch plan. Our current support involves issuing a
SELECT ... FROM element WHERE EXISTS (SELECT id FROM owner WHERE element.owner_id = owner.id AND (where clause of query))
We should also allow the user to be able to request a (INNER) JOIN rather than EXISTS. This would involve applying the query WHERE clause direct to the element SELECT somehow, so likely may involve some modification to QueryToSQLMapper
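The two strategies can be contrasted schematically as below; the table/column names (ELEMENT, OWNER, OWNER_ID, ID) are placeholders, not DataNucleus-generated names.

```java
// Schematic comparison of the two bulk-fetch strategies for a multi-valued
// field in the fetch plan. Names are placeholders.
public class BulkFetchSql
{
    // Current strategy: EXISTS subquery embedding the query's WHERE clause
    static String existsForm(String queryWhere)
    {
        return "SELECT E.* FROM ELEMENT E WHERE EXISTS "
            + "(SELECT O.ID FROM OWNER O WHERE E.OWNER_ID = O.ID AND (" + queryWhere + "))";
    }

    // Requested alternative: INNER JOIN, applying the WHERE clause directly
    static String joinForm(String queryWhere)
    {
        return "SELECT E.* FROM ELEMENT E INNER JOIN OWNER O ON E.OWNER_ID = O.ID "
            + "WHERE " + queryWhere;
    }
}
```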
I have defined my entities like this:
@Entity
public class Person {
// id and other stuff
@ManyToOne(cascade = CascadeType.ALL)
private Address address;
}
@Entity
public class Address {
// id and other stuff
}
When I am doing a delete of a person, I expect that the address is deleted, too. But instead, when I am doing
entityManager.remove(person);
I get a warning
Jun 15, 2016 6:50:16 PM org.datanucleus.store.rdbms.mapping.java.PersistableMapping preDelete
WARNING: Delete of my.sample.Person@50669be4 needs delete of related object at my.sample.Person.address but cannot delete it direct since FK is here
My persistence.xml looks like this:
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
<persistence-unit name="my-sample-pu" transaction-type="RESOURCE_LOCAL">
<provider>org.datanucleus.api.jpa.PersistenceProviderImpl</provider>
<class>my.sample.Address</class>
<class>my.sample.Person</class>
<properties>
<property name="javax.persistence.jdbc.url" value="jdbc:postgresql://localhost:5432/sample"/>
<property name="javax.persistence.jdbc.user" value="sample"/>
<property name="javax.persistence.jdbc.password" value="sample"/>
<property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/>
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
</properties>
</persistence-unit>
</persistence>
I've encountered a bug. This only happens if the query is a little bit complex.
The abstract pattern of the query is like:
predicateA && (predicateB || predicateC)
Where predicateB is a 'contains' statement, and predicateC is combination of three additional conditions. If I query for predicateA && predicateB && predicateC, it works.
test-datanucleus.zip
Please check the attached test case.
I have extracted the problem, following the same data structure as my real project.
The problem I faced is that the SQL generated from JDO is incorrect.
I tried to replicate the same scenario in the test case, but there it raises another problem.
When running 'mvn clean compile test' as specified in the template you provided, the tables generated in the database differ from those generated the second way below:
Generating with my ant script (running datanucleus_schema_tool in my attached 'mybuild.xml').
I expect the generated tables to look the same as with the second way. We use the ant script to generate the tables in our production.
Because I defined some fields as type ArrayList and also specified them in the jdo file, method 1) does not generate the join table.
Anyway, this also seems to be a problem.
When running the test, it gives the following error:
Exception thrown when executing query :
SELECT DISTINCT 'mydomain.model.Watch' AS NUCLEUS_TYPE,A0.KEY,A0.NAME
FROM WATCH A0
CROSS JOIN STRATEGY VAR_VARTOPSTRATEGY
WHERE A0.COMPANY_MANUFACTURE_KEY_EID = ?
AND ((EXISTS (SELECT 1 FROM STRATEGY A0_SUB
  WHERE A0_SUB.PRIMARYSTRATEGIES_KEY_OWN = A0.KEY AND A0_SUB.KEY = ?))
OR (EXISTS (SELECT 1 FROM STRATEGY A0_SUB
  INNER JOIN STRATEGY B0_SUB ON A0_SUB.KEY = B0_SUB.KEY
  WHERE A0_SUB.SECONDARYSTRATEGIES_KEY_OWN = A0.KEY)
AND EXISTS (SELECT 1 FROM STRATEGY A0_SUB
  WHERE A0_SUB.DOWNLINEMARKETSTRATEGIES_KEY_OWN = VAR_VARTOPSTRATEGY.KEY AND A0_SUB.KEY = B0_SUB.KEY)
AND VAR_VARTOPSTRATEGY.KEY = ?))
I believe my data structure is clearly defined.
Ray added a comment - 20/Jul/15 04:32 PM
Please check out the test case.
Andy Jefferson added a comment - 05/Aug/15 09:59 AM
Attached is your testcase using the DataNucleus template.
Moved the package.xxx file to src/main/resources (where all non-source files have to be with Maven).
Changed the package.xxx file to be an ORM file, since that will override what is in all annotations.
Added a persistence property.
Includes the log file obtained when running "mvn clean test".
All passes.
Andy Jefferson added a comment - 05/Aug/15 10:02 AM
Can't see any issue.
If you don't get join tables, then look in the log and work out why ... some metadata is not overriding the higher level, so put it in the ORM file (which is where it should be anyway IMHO).
If you start up a PMF and want all tables to be known about then you either use an auto-start, or persistence.xml and specify the persistence property datanucleus.persistenceUnitLoadClasses as per the docs.
I see no exception from the query.
Ray added a comment - 05/Aug/15 02:54 PM
I see your changes, and also that you changed the version to 4.0.0-release.
I just downloaded your modified test case and tried to run it; it throws an exception (no javax.jdo.xxx).
I added a dependency:
org.datanucleus : javax.jdo : 3.2.0-m1
But when I run "mvn clean test" it still throws exceptions:
Nested Throwables StackTrace:
java.lang.NullPointerException
at org.datanucleus.api.jdo.metadata.JDOAnnotationReader.processMemberAnnotations(JDOAnnotationReader.java:1083)
at org.datanucleus.metadata.annotations.AbstractAnnotationReader.getMetaDataForClass(AbstractAnnotationReader.java:225)
at org.datanucleus.metadata.annotations.AnnotationManagerImpl.getMetaDataForClass(AnnotationManagerImpl.java:167)
at org.datanucleus.metadata.MetaDataManagerImpl.loadAnnotationsForClass(MetaDataManagerImpl.java:2793)
at org.datanucleus.metadata.MetaDataManagerImpl.loadPersistenceUnit(MetaDataManagerImpl.java:1075)
at org.datanucleus.enhancer.DataNucleusEnhancer.getFileMetadataForInput(DataNucleusEnhancer.java:782)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:500)
at org.datanucleus.enhancer.DataNucleusEnhancer.main(DataNucleusEnhancer.java:1152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.datanucleus.maven.AbstractDataNucleusMojo.executeInJvm(AbstractDataNucleusMojo.java:331)
at org.datanucleus.maven.AbstractEnhancerMojo.enhance(AbstractEnhancerMojo.java:281)
at org.datanucleus.maven.AbstractEnhancerMojo.executeDataNucleusTool(AbstractEnhancerMojo.java:81)
You said you ran the test and saw no errors. Did you see the log
">> Watch result set = 2" ?
Some RDBMS allow specification of FOREIGN KEYs at the end of a CREATE TABLE statement. We should allow it.
CREATE TABLE TBL1
(
ID INT NOT NULL,
COL1 INT,
FOREIGN KEY (COL1) REFERENCES TBL2 (COLX)
)
The complication is that the related table needs to exist, so we would need to introduce ordering into table creation, whereas without this we don't (we just send the CREATE FK statement once both tables exist).
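The ordering requirement amounts to a dependency sort over the FK graph; a minimal sketch (not the RDBMSManager code) could look like this:

```java
import java.util.*;

public class TableOrder
{
    /**
     * Orders tables so that any table referenced by a FOREIGN KEY is created
     * first. deps maps each table to the tables it references. Simple DFS
     * topological sort; a cycle (mutual FKs) is skipped, and those FKs would
     * still need the separate CREATE FK statement issued afterwards.
     */
    static List<String> creationOrder(Map<String, List<String>> deps)
    {
        List<String> order = new ArrayList<>();
        Set<String> done = new HashSet<>();
        Set<String> visiting = new HashSet<>();
        for (String table : deps.keySet())
        {
            visit(table, deps, done, visiting, order);
        }
        return order;
    }

    private static void visit(String table, Map<String, List<String>> deps,
            Set<String> done, Set<String> visiting, List<String> order)
    {
        if (done.contains(table) || !visiting.add(table))
        {
            return; // already emitted, or cycle detected
        }
        for (String referenced : deps.getOrDefault(table, List.of()))
        {
            visit(referenced, deps, done, visiting, order);
        }
        visiting.remove(table);
        done.add(table);
        order.add(table); // emitted only after everything it references
    }
}
```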
The problem arises from the fact that the ownerMapping is being created first, and at that point it thinks a PK is required. Later on the elementMapping is created and it then knows not to have a PK, but ColumnImpl has no method to disable the PK on the owner.
A workaround for this issue, for those using 5.0.0-m2, is to add metadata like this
which will turn off the PK for ALL join table columns
We support a parameter that is a Collection as per the JPA spec, but allowing an array would make sense also
If we have a Map<Simple, PC> with the map stored in a join table, such as in jdo/identity HashMapTest.testNormalPutNullValues we get a DB structure like this
CREATE TABLE HASHMAP1
(
IDENTIFIERA INTEGER NOT NULL,
IDENTIFIERB VARCHAR(255) NOT NULL,
CONSTRAINT HASHMAP1_PK PRIMARY KEY (IDENTIFIERA,IDENTIFIERB)
)
CREATE TABLE HASHMAP1_ITEMS
(
IDENTIFIERA_OID INTEGER NOT NULL,
IDENTIFIERB_OID VARCHAR(255) NOT NULL,
"KEY" VARCHAR(255) NOT NULL,
IDENTIFIERA_VID INTEGER NULL,
IDENTIFIERB_VID VARCHAR(255) NULL,
CONSTRAINT HASHMAP1_ITEMS_PK PRIMARY KEY (IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
)
CREATE TABLE CONTAINERITEM
(
IDENTIFIERA INTEGER NOT NULL,
IDENTIFIERB VARCHAR(255) NOT NULL,
...
CONSTRAINT CONTAINERITEM_PK PRIMARY KEY (IDENTIFIERA,IDENTIFIERB)
)
The persist step issues
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>,<'Key2'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>,<'Key1'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>,<'Key5'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>, <'Key4'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>, <'Key3'>)
and then if we try MapField.values().iterator() it generates
SELECT 'org.jpox.samples.types.container.ContainerItem ' AS NUCLEUS_TYPE,A0.IDENTIFIERA,A0.IDENTIFIERB,A0."NAME",A0.STATUS,A0."VALUE"
FROM CONTAINERITEM A0
INNER JOIN HASHMAP1_ITEMS B0 ON A0.IDENTIFIERA = B0.IDENTIFIERA_VID AND A0.IDENTIFIERB = B0.IDENTIFIERB_VID
WHERE B0.IDENTIFIERA_OID = <-604059026> AND B0.IDENTIFIERB_OID = <'-290476856'>
Since it is selecting the VALUE table, it cannot find the NULL values. We would need to select the JOIN table and join to the value.
The entrySet() operation works fine
Look at "test.jdo.application" DependentFieldTest "testDependentFieldsInverseMapsDeletion".
This passes when using pessimistic transactions, but when switching to optimistic it causes
testDependentFieldsInverseMapsDeletion(org.datanucleus.tests.DependentFieldTest) Time elapsed: 0.463 sec <<< ERROR!
javax.jdo.JDOUserException: Cannot write fields to a deleted object
FailedObject:2
at org.datanucleus.api.jdo.state.PersistentDeleted.transitionWriteField(PersistentDeleted.java:126)
at org.datanucleus.state.AbstractStateManager.transitionWriteField(AbstractStateManager.java:584)
at org.datanucleus.state.JDOStateManagerImpl.preWriteField(JDOStateManagerImpl.java:4662)
at org.datanucleus.state.JDOStateManagerImpl.setObjectField(JDOStateManagerImpl.java:2625)
at org.datanucleus.state.JDOStateManagerImpl.setObjectField(JDOStateManagerImpl.java:2521)
at org.datanucleus.store.mapped.scostore.FKMapStore.removeValue(FKMapStore.java:701)
at org.datanucleus.store.mapped.scostore.FKMapStore.remove(FKMapStore.java:658)
at org.datanucleus.store.mapped.scostore.FKMapStore.clear(FKMapStore.java:734)
at org.datanucleus.store.types.sco.queued.ClearMapOperation.perform(ClearMapOperation.java:35)
at org.datanucleus.store.types.sco.queued.ClearMapOperation.perform(ClearMapOperation.java:26)
at org.datanucleus.store.types.sco.queued.OperationQueue.performAll(OperationQueue.java:137)
at org.datanucleus.store.types.sco.backed.HashMap.flush(HashMap.java:248)
at org.datanucleus.store.mapped.mapping.MapMapping.preDelete(MapMapping.java:250)
at org.datanucleus.store.rdbms.request.DeleteRequest.execute(DeleteRequest.java:178)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteTable(RDBMSPersistenceHandler.java:492)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObject(RDBMSPersistenceHandler.java:461)
at org.datanucleus.state.JDOStateManagerImpl.internalDeletePersistent(JDOStateManagerImpl.java:4518)
at org.datanucleus.state.JDOStateManagerImpl.flush(JDOStateManagerImpl.java:4868)
at org.datanucleus.ObjectManagerImpl.flushInternal(ObjectManagerImpl.java:3227)
at org.datanucleus.ObjectManagerImpl.flush(ObjectManagerImpl.java:3167)
at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:3308)
So it gets to the remove method where it will check on dependent key/value but then tries to delete the value. Probably needs to flush things to the datastore first
We currently just set the jdbc-type to the default for the java type. But if the datastore type is inconsistent with that, this leads to errors. A better option is to derive a default from the datastore column (when it exists). With JDO we don't want to allow this by default, since a default is defined there, so control it via a boolean persistence property.
Seems to be like DB2
http://msdn.microsoft.com/en-us/library/ms186734.aspx
This comment shows how the generated SQL could look.
Using ROW_NUMBER() with OVER() requires either an order-clause or a partition-clause.
For this use-case the order-clause should be defined. I'd suggest ordering by primary key if there is no ordering present in the JDOQL/JPQL, and otherwise using the user-defined ordering.
Note that ROW_NUMBER() is 1-based, so to express the JDOQL range (fromIncl, toExcl) one could use
WHERE RowNumber > fromIncl AND RowNumber <= toExcl
or
-- SQLServer BETWEEN is inclusive
WHERE RowNumber BETWEEN fromIncl+1 AND toExcl
The actual query would use an inline view, either expressed using the WITH keyword or as a sub-query in the from-expression.
-- Example using WITH:
WITH OrderedQuery AS
(
SELECT t.Field1, t.Field2, t.Field3,
ROW_NUMBER() OVER (ORDER BY t.IdField) AS 'RowNumber'
FROM "dbo"."tblFirmenAnsprechpartner" t
)
SELECT *
FROM OrderedQuery
WHERE RowNumber > fromIncl AND RowNumber <= toExcl
-- ###################################
-- Example using from-sub-query
SELECT * FROM
(
SELECT t.Field1, t.Field2, t.Field3,
ROW_NUMBER() OVER (ORDER BY t.IdField) AS 'RowNumber'
FROM "dbo"."tblFirmenAnsprechpartner" t
)
WHERE RowNumber > fromIncl AND RowNumber <= toExcl
SELECT * FROM
(
SELECT t.Field1, t.Field2, t.Field3,
ROW_NUMBER() OVER (ORDER BY t.UserOrderField1, t.UserOrderField2 DESC) AS 'RowNumber'
FROM "dbo"."tblFirmenAnsprechpartner" t
)
WHERE RowNumber BETWEEN fromIncl+1 AND toExcl
Unfortunately I was not able to find out which SQLServer versions support this.
I've successfully tested with SQLServer 2008, and found some posts saying it should work at least down to SQLServer 2000.
Hope this was useful.
Note that this is for SQLServer prior to 2012. SQLServer 2012 adds support for the SQL:2008 standard OFFSET/FETCH, which is added by NUCRDBMS-733.
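The range arithmetic above can be sketched as a small helper; this is an illustration of the mapping between a JDOQL/JPQL range and 1-based ROW_NUMBER() bounds, not the adapter code itself, and the table/column fragments passed in are placeholders.

```java
// Converts a JDOQL/JPQL range (fromIncl 0-based inclusive, toExcl exclusive)
// into 1-based inclusive ROW_NUMBER() bounds, and wraps a query accordingly.
public class RowNumberPaging
{
    /** Returns {lo, hi} for WHERE RowNumber BETWEEN lo AND hi. */
    static long[] bounds(long fromIncl, long toExcl)
    {
        return new long[] { fromIncl + 1, toExcl };
    }

    /** Wraps the select list and from clause in a ROW_NUMBER() sub-query. */
    static String paginate(String selectList, String fromClause, String orderBy,
            long fromIncl, long toExcl)
    {
        long[] b = bounds(fromIncl, toExcl);
        return "SELECT * FROM (SELECT " + selectList
            + ", ROW_NUMBER() OVER (ORDER BY " + orderBy + ") AS RowNumber FROM "
            + fromClause + ") sub WHERE RowNumber BETWEEN " + b[0] + " AND " + b[1];
    }
}
```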
See NUCCORE-1394
We already support queries of Optional.get() and using the optional field directly. We should also support Optional.get().field when the Optional is for a Persistable object
While analysing wrong NucleusOptimisticExceptions in my application I detected this bug:
Preconditions: optimistic locking with VersionStrategy.VERSION_NUMBER; version field mapped to a class member; reference to another object obj.other; object not in the L2 cache.
The access to obj.other for the hollow object obj sets the transactionalVersion of obj to 0, which leads to a NucleusOptimisticException at commit.
See the test case.
test-jdo(1).zip
If we want to do something like
SELECT FROM MyClass WHERE :myObj.someField.contains(this.field)
and then try to compile it, we get a message like
org.datanucleus.store.rdbms.sql.expression.IllegalExpressionOperationException: Cannot perform operation ".contains" on org.datanucleus.store.rdbms.sql.expression.NullLiteral@2ea41516
at org.datanucleus.store.rdbms.sql.expression.SQLExpression.invoke(SQLExpression.java:601)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.processInvokeExpression(QueryToSQLMapper.java:3585)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compilePrimaryExpression(AbstractExpressionEvaluator.java:213)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileUnaryExpression(AbstractExpressionEvaluator.java:182)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileAdditiveMultiplicativeExpression(AbstractExpressionEvaluator.java:161)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileRelationalExpression(AbstractExpressionEvaluator.java:136)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileOrAndExpression(AbstractExpressionEvaluator.java:78)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.evaluate(AbstractExpressionEvaluator.java:46)
at org.datanucleus.query.expression.Expression.evaluate(Expression.java:338)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.compileFilter(QueryToSQLMapper.java:495)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.compile(QueryToSQLMapper.java:416)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:918)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:344)
at org.datanucleus.store.query.Query.compile(Query.java:1669)
This is because the parameter doesn't have a value at the point of compilation, hence you have the equivalent of an NPE, but in a query. We need a better way of handling this.
We discovered a situation in which DataNucleus does not create SQL statements for changes made to persistables.
If a one-to-many relationship realised via HashSet is updated multiple times in different transactions, the DN cache provides different data than the database.
You can find a test case reproducing this issue at Github: https://github.com/nbartels/test-jdo/tree/bug8
Please provide a fix for the 4.X branch, too.
ElementContainerStore has "emd", AbstractMapStore has "kmd", "vmd".
The load of a Map can currently involve more than 1 SQL statement. This is embodied in SCOUtils.populateMapDelegateWithStoreData().
In this method we read in the keys (if persistable), then the values (if persistable), and then the "entries" (ids of keys and values) so we can associate the keys with the values. For a Map<Persistable, Persistable> this means 3 SQL statements.
Issue 282 also had the following, which is effectively the same area.
When we have a Map and want to get the entries (Map.entrySet()), we currently select the "map table". When using a join table to form the relation this will be the join table. When the key / value has its own table we simply have a FK to the key table or value table respectively. We don't join across right now (although there is some code in there that doesn't work for all situations).
We currently support detached, or persistent Persistable objects as query input parameter, but not transient. With APPLICATION_ID we could potentially extract PK field values using reflection.
Note though that there is a TCK test for not supporting this, so that would need resolving.
We do not support an embedded object with a Collection of non-embedded objects, so best to advise the user at schema generation. The problem here is that the embedded object has no "id" and so it is debatable what to put in the join table as the owner id; one option would be the id of the owner of the embedded object, but that embedded type could be involved in other relations.
See PersistenceNucleusContext.
Apache uses the DataNucleus RDBMS libs (version 3.2.9) with BoneCP connection pooling (0.8.0-RELEASE). I am trying to find out what the default connection pool size is.
I dug around a bit in both the DataNucleus code and the BoneCP code, and what I see in reality does not match the logic in the code.
Using "datanucleus.connectionpool.maxPoolSize" seems to have no effect on the max number of connections to the backend DB. Using "datanucleus.connectionpool.minPoolSize", the total number of connections to the backend DB is 2x the value set for this property.
I have played around with these values a little bit in number of different clusters in-house and the observations were consistent.
For example:
minPoolSize=10, maxPoolSize=15: total number of connections to the backend DB was 20.
minPoolSize=20, maxPoolSize=30: total number of connections to the backend DB was 40.
Without setting these values, I see the total number of connections to the DB is 10.
In a different environment,
the default CNXS were 14.
when minPoolSize=12 and maxPoolSize=30, we observed a total of 42 connections to the backend DB.
Looking at the code neither behavior makes any sense.
The code seems to be using the minPoolSize and maxPoolSize values to call setMinConnectionsPerPartition()/setMaxConnectionsPerPartition().
https://github.com/wwadge/bonecp/blob/74bc3287025fc137ca28909f0f7693edae37a15d/bonecp/src/main/java/com/jolbox/bonecp/BoneCPConfig.java#L64
https://github.com/wwadge/bonecp/blob/74bc3287025fc137ca28909f0f7693edae37a15d/bonecp/src/main/java/com/jolbox/bonecp/BoneCPConfig.java#L66
https://github.com/wwadge/bonecp/blob/74bc3287025fc137ca28909f0f7693edae37a15d/bonecp/src/main/java/com/jolbox/bonecp/BoneCPConfig.java#L70
So the default values are:
partitionCount=1 (the recommended value is 2-4)
minConnectionsPerPartition=1
maxConnectionsPerPartition=2
So when using defaults across the board, we should only see a max of 2 connections to the backend DB. I am seeing 10 in my env and 14 in a customer env.
When minPoolSize=12, maxPoolSize=30, we should see a max of 30 connections, but we are seeing 42.
When minPoolSize=10, maxPoolSize=15, we should see a max of 15, but we are seeing 20, even under no-load scenarios.
Could you please explain what's being enforced and what's not. Thanks.
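For reference, the arithmetic implied by BoneCP's per-partition configuration can be sketched as below; this states the theoretical totals only and does not explain the numbers observed above.

```java
// BoneCP sizes its pool per partition, so the theoretical totals are
// partitionCount * connectionsPerPartition for both min and max.
public class PoolMath
{
    static int totalMax(int partitionCount, int maxPerPartition)
    {
        return partitionCount * maxPerPartition;
    }

    static int totalMin(int partitionCount, int minPerPartition)
    {
        return partitionCount * minPerPartition;
    }
}
```

With the cited defaults (partitionCount=1, maxConnectionsPerPartition=2) this gives a theoretical max of 2 connections, which is what makes the observed 10 and 14 surprising.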
Hi everyone, we have a scenario where sometimes the order metadata of a list member is not populated. After a server application restart the error disappears, and later it can appear again.
Error message:
Class "LoanAccount" has collection field "funds" and this has no mapping in the table for the index of the element class "InvestorFund". Maybe you declared the field as a java.util.Collection and instantiated it as a java.util.List yet omitted the element in the MetaData ?
javax.jdo.JDOUserException: Class "LoanAccount" has collection field "funds" and this has no mapping in the table for the index of the element class "InvestorFund". Maybe you declared the field as a java.util.Collection and instantiated it as a java.util.List yet omitted the element in the MetaData ?
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:636)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:720)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:740)
JDO models:
@PersistenceCapable(detachable = "true", table = "GUARANTY")
@Discriminator(strategy = DiscriminatorStrategy.VALUE_MAP, column = BaseSecurity.DISCRIMINATOR_COLUMN, value = "BASE_SECURITY")
public abstract class BaseSecurity implements Serializable {
static final String DISCRIMINATOR_COLUMN = "DISCRIMINATOR";
...
}
@PersistenceCapable(detachable = "true")
@Discriminator(strategy = DiscriminatorStrategy.VALUE_MAP, column = BaseSecurity.DISCRIMINATOR_COLUMN, value = "GUARANTY")
public class Guaranty extends BaseSecurity {
...
}
@PersistenceCapable(detachable = "true")
@Discriminator(strategy = DiscriminatorStrategy.VALUE_MAP, column = BaseSecurity.DISCRIMINATOR_COLUMN, value = "INVESTOR_FUND")
public class InvestorFund extends BaseSecurity{
....
}
@PersistenceCapable(detachable = "true")
public class LoanAccount {
...
@Element(dependent = "true")
@Persistent
@Column(name = "GUARANTEES_ENCODEDKEY_OWN", target = "Guaranty")
@Order(column = "GUARANTEES_INTEGER_IDX")
private List<Guaranty> guarantees = null;
@Element(dependent = "true")
@Persistent
@Column(name = "FUNDS_ENCODEDKEY_OWN", target = "InvestorFound")
@Order(column = "FUNDS_INTEGER_IDX")
private List<InvestorFund> funds = null;
...
}
Note
Error is raised only for the funds field from the LoanAccount, never for the guarantees field
Any help is appreciated, thanks!
AbstractMemberMetaData has "getMapsIdAttribute" from annotations/XML. Need to make use of it.
Could be useful if fully supporting '@EmbeddedId', for example
@Entity
public class Employee
{
@Id
long id;
}
@Entity
@IdClass(DependentId.class)
public class Dependent
{
@EmbeddedId
DependentId id;
@MapsId("employeePK")
@ManyToOne
Employee employee;
}
@Embeddable
public class DependentId
{
String name; // matches name of @Id attribute
long employeePk; // matches type of Employee PK
...
}
Description
For one-to-many relationships, after migrating to DataNucleus 4 the JDO queries have very low performance. DataNucleus brings all sub-objects into memory, and for tables with millions of entries this takes the CPU to 100%.
Model example
@PersistenceCapable(detachable = "true")
public class Activity {
@Persistent(valueStrategy = IdGeneratorStrategy.UUIDHEX)
@Column(jdbcType = "VARCHAR", length = 32)
private String encodedKey;
@Persistent(defaultFetchGroup = "true")
@Element(dependent = "true")
private List<FieldChangeItem> fieldChanges = null;
.....
}
When fetching the activities without any filter, all field changes are loaded into memory.
Generated SQL that is causing problems:
SELECT
'....model.FieldChangeItem' AS `NUCLEUS_TYPE`,
....
FROM
`FIELDCHANGEITEM` `A0`
WHERE
`A0`.`FIELDCHANGES_INTEGER_IDX` >= 0
AND EXISTS( SELECT
'.....model.Activity' AS `NUCLEUS_TYPE`,
`A0_SUB`.`ENCODEDKEY` AS `DN_APPID`
FROM
`ACTIVITY` `A0_SUB`
WHERE
`A0`.`FIELDCHANGES_ENCODEDKEY_OWN` = `A0_SUB`.`ENCODEDKEY`)
ORDER BY `NUCORDER0`
Note
The problem is present for defined fetch groups as well, not only for defaultFetchGroup = "true".
Used Workaround
Removed defaultFetchGroup = "true" and made lazy programmatic fetching for this list. In this case, queries filtered by parent key were generated when the field change items were fetched.
If we have something like
class Base (SINGLE_TABLE, with DISCRIMINATOR)
class Sub extends Base (SUPERCLASS_TABLE)
and we have a query like
SELECT b FROM Base b WHERE TREAT(b AS Sub).someField = value
then this currently ignores the TREAT (cast), whereas it should add a DISCRIMINATOR clause.
The problem is that we can only add the discriminator clause to a BooleanExpression ... i.e let it propagate back up to the
{...}.someField = value
and add the discriminator constraint there.
Support persisting of Java enums as database enums.
PostgreSQL: http://www.postgresql.org/docs/9.3/static/datatype-enum.html
MySQL: https://dev.mysql.com/doc/refman/5.0/en/enum.html
Not supported by all DBs though; Firebird, H2, SQLServer and Oracle, for example, do not have enum types.
The preferred handling is to use a CHECK constraint on a VARCHAR column, as per https://stackoverflow.com/a/9366855/8558216, and DataNucleus already supports this via the extension "enum-check-constraint" specified on the ColumnMetaData.
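As a sketch of the CHECK-constraint approach: the DDL below restricts a VARCHAR column to the names of an enum's constants. The column name and DDL shape are illustrative; DataNucleus generates its own.

```java
// Generates a CHECK constraint restricting a VARCHAR column to the names of a
// Java enum's constants, mirroring the "enum-check-constraint" idea.
public class EnumCheckConstraint
{
    enum Status { ACTIVE, CLOSED } // sample enum for illustration

    static String checkConstraint(String column, Class<? extends Enum<?>> enumType)
    {
        StringBuilder sb = new StringBuilder("CHECK (" + column + " IN (");
        Enum<?>[] values = enumType.getEnumConstants();
        for (int i = 0; i < values.length; i++)
        {
            if (i > 0)
            {
                sb.append(',');
            }
            sb.append('\'').append(values[i].name()).append('\'');
        }
        return sb.append("))").toString();
    }
}
```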