
jdbm2's Issues

Need forceClose

It starts with this error:
java.lang.InternalError: wrong BPage header
        at jdbm.btree.BPage.deserialize(BPage.java:1037)
        at jdbm.btree.BPage.deserialize(BPage.java:59)
        at jdbm.recman.BaseRecordManager.fetch2(BaseRecordManager.java:545)
        at jdbm.recman.BaseRecordManager.fetch(BaseRecordManager.java:509)
        at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:220)
        at jdbm.btree.BPage.loadBPage(BPage.java:892)
        at jdbm.btree.BPage.access$000(BPage.java:59)
        at jdbm.btree.BPage$Browser.getNext(BPage.java:1469)
        at jdbm.btree.BTreeSortedMap$1$2.ensureNext(BTreeSortedMap.java:116)
        at jdbm.btree.BTreeSortedMap$1$2.next(BTreeSortedMap.java:140)
        at jdbm.btree.BTreeSortedMap$1$2.next(BTreeSortedMap.java:109)

So now I need to close it (to release the file handle so I can unmount the
volume holding the jdbm data), but I get this error:

ERROR: inUse blocks at close time
elem 0: BlockIO(23869,false,null)

java.lang.Error: inUse blocks at close time
        at jdbm.recman.RecordFile.close(RecordFile.java:334)
        at jdbm.recman.BaseRecordManager.close(BaseRecordManager.java:306)
        at jdbm.recman.CacheRecordManager.close(CacheRecordManager.java:241)


In my particular case, I couldn't care less about the data.  I just need to
clean up and start over (without killing the Java process).
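
A hypothetical shape for the requested method (this does not exist in jdbm2; the name and contract are illustrative only):

    import java.io.IOException;

    // Hypothetical extension of the record-manager API (not part of jdbm2):
    interface ForceCloseable {
        /**
         * Closes the underlying file handles unconditionally, discarding
         * in-use blocks and uncommitted state, so the store's files can be
         * deleted or the volume unmounted. The data may be left unusable.
         */
        void forceClose() throws IOException;
    }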

Original issue reported on code.google.com by [email protected] on 7 Mar 2013 at 12:53

HashBucket.writeExternal is destructive when HTree has a custom key serializer

What steps will reproduce the problem?
1. Run the attached JUnit test case.

What is the expected output? What do you see instead?

Expected: Junit test succeeds.

Actual: ClassCastException stating that a byte array cannot be cast to Long.

java.lang.ClassCastException: [B cannot be cast to java.lang.Long
    at jdbm.htree.TestInsertUpdate$1.serialize(TestInsertUpdate.java:1)
    at jdbm.htree.HashBucket.writeExternal(HashBucket.java:225)
    at jdbm.htree.HTree$1.serialize(HTree.java:70)
    at jdbm.htree.HTree$1.serialize(HTree.java:1)
    at jdbm.recman.BaseRecordManager.update2(BaseRecordManager.java:466)
    at jdbm.recman.BaseRecordManager.update(BaseRecordManager.java:452)
    at jdbm.recman.CacheRecordManager.updateCacheEntries(CacheRecordManager.java:323)
    at jdbm.recman.CacheRecordManager.commit(CacheRecordManager.java:255)
    at jdbm.htree.TestInsertUpdate.testInsertUpdateWithCustomSerializer(TestInsertUpdate.java:39)
    at 
...
(more traces from the JUnit4 framework, run within Eclipse Indigo)


What version of the product are you using? On what operating system?

JDBM2.1, running in Eclipse Indigo, Windows

Please provide any additional information below.

The exception is caused by HashBucket.writeExternal modifying the keys in
place when a custom key serializer is used: after serialization with a custom
key serializer, this._keys no longer contains the keys but their serialized
form as byte arrays.

The attached file HashBucket.java provides a fix for this bug.
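
For reference, the non-destructive pattern looks roughly like this (a sketch only; the serializer and field shapes are assumptions, not jdbm2's exact code):

    import java.io.IOException;
    import java.io.ObjectOutput;
    import java.util.List;

    // Sketch: serialize each key into a local variable instead of storing
    // the serialized byte[] back into the bucket's key list.
    final class WriteExternalSketch {
        interface KeySerializer { byte[] serialize(Object key) throws IOException; } // assumed shape

        static void writeKeys(ObjectOutput out, List<Object> keys,
                              KeySerializer keySerializer) throws IOException {
            out.writeInt(keys.size());
            for (Object key : keys) {
                if (keySerializer != null) {
                    byte[] bytes = keySerializer.serialize(key); // stays local
                    out.writeInt(bytes.length);
                    out.write(bytes);
                    // the buggy version stored the bytes back into the key list here
                } else {
                    out.writeObject(key);
                }
            }
        }
    }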

Original issue reported on code.google.com by [email protected] on 7 Jun 2011 at 1:12

Attachments:

[Enhancement] RecordHeader.setAvailableSize() speed improvement

The current implementation of RecordHeader.setAvailableSize() calls deconvert() 
twice: once to read the old value, and once for the new value. It even writes 
the value back if it was (and remains) zero.

Attached is a rewritten RecordHeader.setAvailableSize() which avoids that by
directly manipulating the "internal fragmentation" value that is actually stored.
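
The idea, as a rough sketch (the offsets, names and encoding here are illustrative, not jdbm2's exact code):

    // Sketch: encode once, compare against the raw stored fragmentation
    // byte, and skip the write entirely when nothing changes.
    final class RecordHeaderSketch {
        // availableSize is stored compressed, as a small delta ("internal
        // fragmentation") relative to the current size.
        static void setAvailableSize(byte[] block, int pos, int currentSize, int availableSize) {
            byte newFragment = (byte) (availableSize - currentSize); // single conversion
            if (newFragment != block[pos]) {                         // no redundant write-back
                block[pos] = newFragment;
            }
        }
    }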

Original issue reported on code.google.com by [email protected] on 10 Jun 2011 at 8:50

Attachments:

BPage header exception after kill and resume

What steps will reproduce the problem?
1. Run a program that uses jdbm.
2. Kill it (possibly while a put is executing).
3. Resume (reopen the database).

What is the expected output?
Resume without error.

What do you see instead?
A BPage header exception.

What version of the product are you using? On what operating system?
jdbm2 2.4
CentOS 64-bit

Original issue reported on code.google.com by [email protected] on 25 Aug 2012 at 12:34

Improving the SoftReference cache

The usual pattern for implementing a soft-reference cache is to use a
WeakHashMap. That way, you don't need all of these separate threads and queues
and whatnot. The trick is that the map value needs to be a SoftReference, and
that SoftReference's referent needs to hold a hard reference to the map key.
That way, a map entry isn't flushed until the value it references is cleared.
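
A minimal sketch of that pattern (generic, not jdbm2's CacheRecordManager; the key is pinned by the soft referent, so the entry is expunged exactly when the GC clears the cached value):

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.WeakHashMap;

    // Generic sketch of a soft-value cache built on WeakHashMap.
    final class SoftValueCache<K, V> {
        // The softly reachable payload holds the hard reference to the key.
        private static final class Payload<K, V> {
            final K key;
            final V value;
            Payload(K key, V value) { this.key = key; this.value = value; }
        }

        private final Map<K, SoftReference<Payload<K, V>>> map =
                new WeakHashMap<K, SoftReference<Payload<K, V>>>();

        synchronized void put(K key, V value) {
            map.put(key, new SoftReference<Payload<K, V>>(new Payload<K, V>(key, value)));
        }

        synchronized V get(K key) {
            SoftReference<Payload<K, V>> ref = map.get(key);
            Payload<K, V> p = (ref == null) ? null : ref.get();
            return (p == null) ? null : p.value; // null once the GC cleared it
        }
    }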

How do I submit code to this project?


Original issue reported on code.google.com by [email protected] on 28 Feb 2011 at 12:34

Performance and threading patch

Here are some possible changes that address threading and performance issues.

The coarse locking on both BTree and CacheRecordManager has been reduced.

The CacheRecordManager change also re-enables a bounded soft cache, or falls
back to a weak cache.
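
For illustration, the general direction of such a change (a generic sketch, not the attached patch): let fetches share a read lock and reserve the exclusive write lock for mutations.

    import java.io.IOException;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Generic sketch: concurrent readers, exclusive writers, instead of
    // one coarse monitor around every operation.
    final class RwLockedManager {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private final jdbm.RecordManager delegate;

        RwLockedManager(jdbm.RecordManager delegate) { this.delegate = delegate; }

        Object fetch(long recid) throws IOException {
            lock.readLock().lock();
            try { return delegate.fetch(recid); }
            finally { lock.readLock().unlock(); }
        }

        void update(long recid, Object obj) throws IOException {
            lock.writeLock().lock();
            try { delegate.update(recid, obj); }
            finally { lock.writeLock().unlock(); }
        }
    }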

Original issue reported on code.google.com by [email protected] on 22 Jul 2010 at 7:23

Attachments:

File left locked after exception during init

Hello,

You need to make sure that you close all locked files in RecordFile: if the
file is corrupted, a Throwable is raised during init and the lock on the file
is kept by RecordFile, so there is no way to delete the file programmatically
after this type of exception (a Throwable).

Exception in thread "main" java.lang.InternalError: Unknown serialization 
header: 227
    at jdbm.helper.Serialization.readObject(Serialization.java:887)
    at jdbm.helper.Serialization.deserializeArrayList256Smaller(Serialization.java:1116)
    at jdbm.helper.Serialization.readObject(Serialization.java:840)
    at jdbm.recman.TransactionManager.recover(TransactionManager.java:206)
    at jdbm.recman.TransactionManager.<init>(TransactionManager.java:87)
    at jdbm.recman.RecordFile.<init>(RecordFile.java:111)
    at jdbm.recman.BaseRecordManager.reopen(BaseRecordManager.java:237)
    at jdbm.recman.BaseRecordManager.<init>(BaseRecordManager.java:232)
    at jdbm.RecordManagerFactory.createRecordManager(RecordManagerFactory.java:74)
    at jdbm.RecordManagerFactory.createRecordManager(RecordManagerFactory.java:52)
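
The defensive pattern would look roughly like this (a sketch; the class shape is assumed, not jdbm2's actual RecordFile):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Sketch: close the file if initialization throws, so the caller can
    // delete or re-create the corrupted store afterwards.
    final class SafeRecordFile {
        private final RandomAccessFile file;

        SafeRecordFile(String fileName) throws IOException {
            file = new RandomAccessFile(fileName, "rw");
            boolean ok = false;
            try {
                recover();   // stands in for transaction-log recovery; may throw
                ok = true;
            } finally {
                if (!ok) {
                    file.close(); // release the OS lock before propagating the error
                }
            }
        }

        private void recover() throws IOException {
            // transaction-log recovery would go here
        }
    }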

Regards
Vasko  

Original issue reported on code.google.com by [email protected] on 7 Jul 2013 at 5:28

java.lang.InternalError when retrieving from a large map

What steps will reproduce the problem?
1. Create a RecordManager.
2. Create a hash map (recman.hashMap("lemmas")).
3. Put 600,000 pairs of Strings.
4. Retrieve map.size().


What is the expected output? What do you see instead?

An exception is raised

java.lang.InternalError: bytes left: 31

What version of the product are you using? On what operating system?
jdbm-2.0, Linux

Please provide any additional information below.

Stacktrace :

java.lang.InternalError: bytes left: 31
    at jdbm.htree.HashNode$1.deserialize(HashNode.java:54)
    at jdbm.htree.HashNode$1.deserialize(HashNode.java:39)
    at jdbm.recman.BaseRecordManager.fetch2(BaseRecordManager.java:514)
    at jdbm.recman.BaseRecordManager.fetch(BaseRecordManager.java:478)
    at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:226)
    at jdbm.htree.HashDirectory$HDIterator.prepareNext(HashDirectory.java:527)
    at jdbm.htree.HashDirectory$HDIterator.next2(HashDirectory.java:483)
    at jdbm.htree.HashDirectory$HDIterator.next(HashDirectory.java:565)
    at jdbm.htree.HTreeMap$1.size(HTreeMap.java:173)
    at java.util.AbstractMap.size(AbstractMap.java:67)
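
For reference, a minimal repro along the lines of the steps above (class name and database path are illustrative; the hashMap call follows step 2):

    import java.io.IOException;
    import java.util.Map;
    import jdbm.RecordManager;
    import jdbm.RecordManagerFactory;

    public class LargeMapRepro {
        public static void main(String[] args) throws IOException {
            RecordManager recman = RecordManagerFactory.createRecordManager("lemmas-db");
            Map<String, String> map = recman.hashMap("lemmas");
            for (int i = 0; i < 600000; i++) {
                map.put("key" + i, "value" + i);
            }
            recman.commit();
            System.out.println(map.size()); // throws java.lang.InternalError: bytes left: 31
        }
    }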

Original issue reported on code.google.com by [email protected] on 2 May 2011 at 11:31

[Bug fix] Disk memory leak in BTree.java

Please apply the following change from the original JDBM:
http://jdbm.cvs.sourceforge.net/viewvc/jdbm/jdbm/src/main/jdbm/btree/BTree.java?r1=1.17&r2=1.18

See this old ApacheDS bug report for a discussion:
http://old.nabble.com/-ApacheDS---JDBM--Index-.db-files-grow-from-attribute-replace-td22606145.html

ApacheDS has included this fix in their fork of JDBM 1.0 since March 31, 2009:
http://svn.apache.org/viewvc/directory/apacheds/trunk/jdbm/src/main/java/jdbm/btree/BTree.java?view=log

Original issue reported on code.google.com by [email protected] on 16 Jun 2011 at 6:27

Access JDBM database concurrently

In my application I am using two threads, and I want to use JDBM in this
scenario:

1. Inserting data into the database using thread 1.
2. Retrieving data from the database using thread 2.

Is this possible with JDBM? If so, please provide an example.
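
In the meantime, a conservative sketch that serializes all access through one shared lock (names are illustrative; this is safe regardless of how much locking a given jdbm2 version does internally):

    import java.io.IOException;
    import java.util.Map;
    import jdbm.RecordManager;
    import jdbm.RecordManagerFactory;

    public class TwoThreads {
        public static void main(String[] args) throws IOException, InterruptedException {
            final RecordManager recman = RecordManagerFactory.createRecordManager("testdb");
            final Map<Integer, String> map = recman.hashMap("data");
            final Object lock = new Object();

            Thread writer = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000; i++) {
                        synchronized (lock) { map.put(i, "value-" + i); } // thread 1 inserts
                    }
                }
            });
            Thread reader = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000; i++) {
                        synchronized (lock) { map.get(i); }               // thread 2 reads
                    }
                }
            });
            writer.start(); reader.start();
            writer.join(); reader.join();
            recman.commit();
            recman.close();
        }
    }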

Original issue reported on code.google.com by [email protected] on 21 Apr 2012 at 12:03

Bug in getFirstLargerThan in FreePhysicalRowIdPage.java

The algorithm in getFirstLargerThan(int size) in FreePhysicalRowIdPage.java has
a bug:

If a slot is found which points to a record whose size is far greater than the
requested size (i.e. waste > wasteMargin), then the wrong value is stored in
bestSlotSize.

Line 185 should say:
bestSlotSize = theSize

and not:
bestSlotSize = size

Also, the condition in line 179 should be:
(bestSlotSize >= theSize)

and not:
(bestSlotSize >= size)

The current algorithm basically ignores wasteMargin2 and returns the first
record which fits, regardless of the wasted space.

To make the algorithm a bit better, I suggest storing the bestSlotWaste rather
than the bestSlotSize.
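
Putting the corrections together, the search would look roughly like this (a hedged reconstruction; slotSizes and the margins stand in for the page's real state and accessors):

    // Hedged reconstruction of the corrected best-fit search.
    final class FirstLargerThanSketch {
        long[] slotSizes = {};    // free-record size per slot
        long wasteMargin = 128;   // illustrative thresholds
        long wasteMargin2 = 1024;

        int getFirstLargerThan(int size) {
            int bestSlot = -1;
            long bestSlotWaste = Long.MAX_VALUE;
            for (int slot = 0; slot < slotSizes.length; slot++) {
                long theSize = slotSizes[slot];
                if (theSize < size) continue;          // too small
                long waste = theSize - size;
                if (waste <= wasteMargin) return slot; // tight fit: take it at once
                if (waste <= wasteMargin2 && waste < bestSlotWaste) {
                    bestSlotWaste = waste;             // track the waste, not the size
                    bestSlot = slot;
                }
            }
            return bestSlot; // -1 if no acceptable slot was found
        }
    }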

Original issue reported on code.google.com by [email protected] on 19 Dec 2011 at 12:47

Attachments:

[Enhancement] RecordFile.read() implementation doesn't make sense

The outer loop (while (remaining > 0)) doesn't make sense, since the
decrements at the end are wrong anyway; it only works if the whole block is
read by the first file.read(b). This outer loop appears to be a leftover from
the original JDBM implementation.

Besides, there's no need to pass in the nBytes parameter: RecordFile always
creates BlockIos with an array of BLOCK_SIZE bytes, so it can rely on that
fact and just use buffer.length.

Attached is a rewritten RecordFile.read(). Use it or not as you like; if you
do, also omit the last parameter at the call sites.
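
Since the attachment isn't mirrored here, a sketch in the same spirit (the signature is assumed): loop until the block buffer is full, zero-filling whatever lies past end-of-file.

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Arrays;

    final class ReadSketch {
        // Fills buffer completely from the given offset; short reads are
        // retried, and the region past EOF is left zeroed.
        static void read(RandomAccessFile file, long offset, byte[] buffer) throws IOException {
            file.seek(offset);
            int total = 0;
            while (total < buffer.length) {
                int n = file.read(buffer, total, buffer.length - total);
                if (n < 0) {
                    Arrays.fill(buffer, total, buffer.length, (byte) 0); // past EOF
                    break;
                }
                total += n;
            }
        }
    }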

(Although I didn't do any serious performance testing, this change appears to
speed up running all the JUnit tests by a few percent.)

Original issue reported on code.google.com by [email protected] on 8 Jun 2011 at 11:42

Attachments:

HTreeMap.clear() fails on maps with ConcurrentModificationException

Run the following test class:

    import java.io.IOException;
    import jdbm.RecordManager;
    import jdbm.RecordManagerFactory;
    import jdbm.htree.HTree;
    import jdbm.htree.HTreeMap;

    public class LocalTester {
        public static void main(final String[] args) throws IOException {
            final RecordManager recman = RecordManagerFactory.createRecordManager("d:/temp/jdbm-db/db1");
            final HTree tree = HTree.createInstance(recman);
            recman.setNamedObject("test", tree.getRecid());
            final HTreeMap<String,String> treeMap = tree.asMap();

            for (int i = 0; i < 100; i++) {
                treeMap.put(String.valueOf(i), String.valueOf(i));
            }
            recman.commit();
            System.out.println("finished adding");

            // clear() iterates the map while removing entries, which triggers the bug
            treeMap.clear();
            recman.commit();
            System.out.println("finished clearing");
        }
    }

It fails with:

Exception in thread "main" java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
    at java.util.AbstractList$Itr.next(AbstractList.java:343)
    at jdbm.htree.HashDirectory$HDIterator.next2(HashDirectory.java:480)
    at jdbm.htree.HashDirectory$HDIterator.next(HashDirectory.java:565)
    at jdbm.htree.HTreeMap$1$2.ensureNext(HTreeMap.java:98)
    at jdbm.htree.HTreeMap$1$2.next(HTreeMap.java:122)
    at jdbm.htree.HTreeMap$1$2.next(HTreeMap.java:91)
    at java.util.AbstractCollection.clear(AbstractCollection.java:395)
    at java.util.AbstractMap.clear(AbstractMap.java:271)
    at LocalTester.main(LocalTester.java:43)

Oddly enough, it works if the iterator is 100.

Original issue reported on code.google.com by jrivard on 7 Jan 2011 at 3:36

java.lang.Error: Bad magic on log file, what does it mean?

I am getting the following exception when I try to create an instance of the
RecordManager class:

Caused by: java.lang.Error: Bad magic on log file
    at jdbm.recman.TransactionManager.recover(TransactionManager.java:196) ~[jdbm-2.4.jar:2.4]
    at jdbm.recman.TransactionManager.<init>(TransactionManager.java:87) ~[jdbm-2.4.jar:2.4]
    at jdbm.recman.RecordFile.<init>(RecordFile.java:111) ~[jdbm-2.4.jar:2.4]
    at jdbm.recman.BaseRecordManager.reopen(BaseRecordManager.java:237) ~[jdbm-2.4.jar:2.4]
    at jdbm.recman.BaseRecordManager.<init>(BaseRecordManager.java:232) ~[jdbm-2.4.jar:2.4]
    at jdbm.RecordManagerFactory.createRecordManager(RecordManagerFactory.java:74) ~[jdbm-2.4.jar:2.4]
    at jdbm.RecordManagerFactory.createRecordManager(RecordManagerFactory.java:52) ~[jdbm-2.4.jar:2.4]

What exactly does this error mean? Is it similar to Python's bad magic number
error, which appears when you run bytecode compiled by a different Python
version?

Original issue reported on code.google.com by [email protected] on 21 Jun 2012 at 5:11

.t file not deleted.

What steps will reproduce the problem?
1. Create a JDBM RecordManager.
2. Create an HTree instance.
3. Put values, commit, close.
4. Close the transaction.
Running this simply from a main method does not reproduce the problem, but
running it in a web application with concurrent access does: the .t file fails
to be deleted.


What is the expected output? What do you see instead?
.t file which is used for logging transactions should be deleted. It does not, 
since the FileInputStream & ObjectInputStream has not been closed.

What version of the product are you using? On what operating system?
JDBM, which is a inactive project. Same problem in JDBM2 code. Windows 7, JDK 
6.26.

Please provide any additional information below.
Problem exists in the recover() method present in the TransactionManager class. 
This method is called from the TrnsactionManager Constructor. The recover() 
method creates a .t file for logging transactions, but fails to close the 
inputStream before trying to delete it. Hence it fails.
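
The fix boils down to closing the stream before the delete, roughly like this (a sketch; the recovery details are elided):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;

    final class RecoverSketch {
        static void recover(File logFile) throws IOException {
            ObjectInputStream ois =
                    new ObjectInputStream(new FileInputStream(logFile));
            try {
                // ... replay the logged transactions here ...
            } finally {
                ois.close(); // without this, Windows keeps the file locked
            }
            if (!logFile.delete()) {
                throw new IOException("could not delete transaction log " + logFile);
            }
        }
    }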

If possible, please fix this in JDBM (which creates a .lg file instead of a .t
file) as well.

Original issue reported on code.google.com by [email protected] on 17 Nov 2011 at 6:22

jdbm.helper.Serialization.writeLongArray(DataOutputStream, long[]) is broken

The following test fails:

  public void testNegativeLongsArray() throws ClassNotFoundException, IOException {
    long[] l = new long[] { -12 };
    Object deserialize = Serialization.deserialize(Serialization.serialize(l));
    // compare contents, not identity (plain assertEquals on arrays compares
    // references); requires: import java.util.Arrays;
    assertTrue(Arrays.equals(l, (long[]) deserialize));
  }

I think the problem lies in these ifs:
        if(0>=min && max<=255){
There are plenty of such ifs in this method (and maybe in others). What was
meant, IMO, is
        if(0<=min && max<=255){
(i.e. every value of the array lies between 0 and 255).
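
For context, min and max are the extremes of the array, so the corrected guard amounts to this (sketch):

    // Sketch: the compact byte-per-value encoding is only valid when every
    // element fits in the unsigned-byte range; { -12 } does not, so a wider
    // encoding must be chosen.
    final class RangeCheckSketch {
        static boolean fitsInUnsignedByte(long[] array) {
            long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
            for (long v : array) {
                if (v < min) min = v;
                if (v > max) max = v;
            }
            return 0 <= min && max <= 255;
        }
    }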

Original issue reported on code.google.com by [email protected] on 3 Jan 2012 at 7:19

Retrieval of objects from PrimaryStoreMap failing

What steps will reproduce the problem?
1. Create a PrimaryStoreMap and a secondary index, using Longs as keys and
Strings as values.
2. Add 1000 records, checking that each was added correctly.
3. In a separate program, attempt to iterate through the records.

What is the expected output? What do you see instead?
I expect it to iterate through all the records; instead, some of them are not
found. I can see that they've been added using the secondary key extractor,
but when the key is looked up there's nothing there.

What version of the product are you using? On what operating system?
The 2.0 zip from the downloads page.

Please provide any additional information below.
I tried reading with and without the cache enabled, same problem.
I add records to the primary store key, then on a secondary index I use the 
keys 1-1000. When I add a record I check that the key is in secondary index.
I've attached two files, one to write, one to read, which are modifications of 
the samples provided that I was using for some performance testing.

This is a pity because the library is very easy to use. 

Original issue reported on code.google.com by [email protected] on 6 May 2011 at 12:27

NPE in BTree.getRoot()

I can't extract a test case (too much code around: an implementation of a
Lucene directory akin to the BDB version); sorry.
It seems the offending workflow was: create one record, delete the same
record, then find on the same key.
Anyway, I worked around it by being defensive in getRoot():
<code>
    /**
     * Return the root BPage, or null if it doesn't exist.
     */
    BPage<K, V> getRoot()
            throws IOException {
        if (_root == 0) {
            return null;
        }
        BPage<K, V> root = (BPage<K, V>) _recman.fetch(_root, _bpageSerializer);
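        // fetch can return null here (e.g. a stale recid); guard instead of NPE-ing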
        if (root != null) {
            root._recid = _root;
            root._btree = this;
        }
        return root;
    }
</code>

On Mac OS, Java 6.


Original issue reported on code.google.com by [email protected] on 14 Jan 2011 at 11:42
