
dfki / leechcrawler


Incremental crawling capabilities for Apache Tika. Crawl content out of e.g. file systems, http(s) sources (web crawling), imap(s) servers, or your own arbitrary data sources. LeechCrawler offers additional Tika parsers that provide these crawling capabilities.

Home Page: https://github.com/DFKI/leechcrawler

License: BSD 3-Clause "New" or "Revised" License

Java 99.65% Groovy 0.35%
Topics: incremental crawling, tika, metadata extraction

leechcrawler's Introduction

LeechCrawler

Incremental crawling capabilities for Apache Tika. Crawl content out of e.g. file systems, http(s) sources (web crawling), or imap(s) servers. LeechCrawler offers additional Tika parsers that provide these crawling capabilities.
It is the RDF-free successor of Aperture from the DFKI GmbH Knowledge Management group. If you want to start a project with us, feel free to contact us!

LeechCrawler is published under the 3-Clause BSD License, Owner/Organization: DFKI GmbH, 2013.

The key intentions of LeechCrawler:

  • Ease of use - crawl a data source with a few lines of code.
  • Low learning curve - Leech integrates seamlessly into the Tika world.
  • Extensibility - write your own crawlers, support new data source protocols, and plug them in by simply adding your jar to the classpath.
  • All parsing capabilities from Apache Tika are supported, including your own parsers.
  • Incremental crawling - a second run crawls only the differences in a data source relative to the last crawl. Available for existing and new crawlers.
  • Easily create Lucene and Solr indices.

How to start | Code snippets / Examples | Extending LeechCrawler | Mailing list | People/Legal Information | Supporters | Data Protection


Crawl something incrementally in 1 minute:

String strSourceUrl = "URL4FileOrDirOrWebsiteOrImapfolderOrImapmessageOrSomething";

Leech leech = new Leech();
CrawlerContext crawlerContext = new CrawlerContext();
// Persist the crawl history so a second run only reports the differences.
crawlerContext.setIncrementalCrawlingHistoryPath("./history/4SourceUrl");
leech.parse(strSourceUrl, new DataSinkContentHandlerAdapter()
{
    @Override
    public void processNewData(Metadata metadata, String strFulltext)
    {
        System.out.println("Extracted metadata:\n" + metadata + "\nExtracted fulltext:\n" + strFulltext);
    }

    @Override
    public void processModifiedData(Metadata metadata, String strFulltext)
    {
        // called for entities that changed since the last crawl
    }

    @Override
    public void processRemovedData(Metadata metadata)
    {
        // called for entities that disappeared since the last crawl
    }

    @Override
    public void processErrorData(Metadata metadata)
    {
        // called when an entity could not be processed
    }
}, crawlerContext.createParseContext());
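The incremental mechanism behind the history path can be illustrated independently of LeechCrawler. The following is a simplified, self-contained sketch — not LeechCrawler's actual history implementation, and all class and method names here are hypothetical — of the core idea: map each data entity id to a content fingerprint, and on a later pass classify each entity as new, modified, or unmodified.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an incremental crawl history. A real
// implementation would persist the map to disk (the history path).
public class CrawlHistorySketch
{
    private final Map<String, String> id2fingerprint = new HashMap<>();

    /** Classifies an entity against the last crawl and updates the history. */
    public String check(String dataEntityId, String contentFingerprint)
    {
        String previous = id2fingerprint.put(dataEntityId, contentFingerprint);
        if (previous == null) return "new";
        return previous.equals(contentFingerprint) ? "unmodified" : "modified";
    }

    public static void main(String[] args)
    {
        CrawlHistorySketch history = new CrawlHistorySketch();
        System.out.println(history.check("file:./doc.txt", "v1")); // new
        System.out.println(history.check("file:./doc.txt", "v1")); // unmodified
        System.out.println(history.check("file:./doc.txt", "v2")); // modified
    }
}
```

In the real library, entities classified this way would be routed to the corresponding processNewData/processModifiedData callbacks of the handler shown above.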

leechcrawler's People

Contributors

fnogatz, mkaesberger, reuschling


leechcrawler's Issues

Possible memory leak

I noticed that several instances of 'de.dfki.km.leech.parser.incremental.IncrementalCrawlingHistory' remain in memory and never get garbage collected. After a bit of investigation, it seems an instance is created per session and stays in memory forever.

Please refer to the attached screenshot.

Thanks.
(screenshot attached: screenshot_20180402_140842)

Exception while crawling doc file

Hi,

I got this exception while crawling a directory with a single .doc file:
java.lang.ClassNotFoundException: org.apache.http.util.Args

Adding the following dependency to the Maven build resolves the issue:

        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpcore</artifactId>
            <version>4.4</version>
        </dependency>

Thanks,
Wazhar

Tika 1.8 support

Hi,

Tika 1.7 support worked well for me, and 1.8 is out. Any plans for the upgrade?

Appreciate your efforts.

Wazhar

Fails on the following construct imaps://[email protected]:[email protected]:993/INBOX

There appears to be a problem with the following construct

imaps://[email protected]:[email protected]:993/INBOX

which results in

May 09, 2021 6:50:36 AM de.dfki.km.leech.sax.PrintlnContentHandler processErrorData
INFO: ## PrintlnContentHandler ERROR data ##########################
no data entity id known - in the case of a sub-entity, set it inside the metadata at your implementation of getSubDataEntitiesInformation(..)
under the key CrawlerParser.SOURCEID. Otherwise you maybe try to process an unsupported/broken URL, or it is totally strange.

metadata:

errorStacktrace: 'javax.mail.AuthenticationFailedException: failed to connect, no password specified?
at javax.mail.Service.connect(Service.java:400)
at de.dfki.km.leech.parser.ImapCrawlerParser.connect2Server(ImapCrawlerParser.java:121)
at de.dfki.km.leech.io.ImapURLStreamProvider.addFirstMetadata(ImapURLStreamProvider.java:85)
at de.dfki.km.leech.Leech.parse(Leech.java:671)
at de.dfki.km.leech.Leech.parse(Leech.java:487)
at de.dfki.km.leech.Leech.main(Leech.java:110)

It is a requirement of the email server to use [email protected] as the username; aliases cannot be used, as they also require the same construct.
The email server works fine with the mail client supplied as part of the operating system.

Debugging the code shows that the construct is altered to

imaps://foo%40myemailerserver.lan:[email protected]:993/INBOX

Running with this construct causes

metadata:

X-Parsed-By: 'org.apache.tika.parser.CompositeParser'
X-Parsed-By: 'de.dfki.km.leech.parser.ImapCrawlerParser'
errorStacktrace: 'org.apache.tika.exception.TikaException: Error while crawling 'imaps://[email protected]:[email protected]:993/INBOX'
at de.dfki.km.leech.parser.CrawlerParser.parse(CrawlerParser.java:230)
at de.dfki.km.leech.parser.ImapCrawlerParser.parse(ImapCrawlerParser.java:383)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143)
at de.dfki.km.leech.parser.incremental.IncrementalCrawlingParser.parse(IncrementalCrawlingParser.java:170)
at de.dfki.km.leech.parser.filter.URLFilteringParser.parse(URLFilteringParser.java:119)
at de.dfki.km.leech.Leech.parse(Leech.java:675)
at de.dfki.km.leech.Leech.parse(Leech.java:487)
at de.dfki.km.leech.Leech.main(Leech.java:110)
Caused by: java.lang.NullPointerException
at de.dfki.km.leech.parser.ImapCrawlerParser.getSubDataEntitiesInformation(ImapCrawlerParser.java:354)
at de.dfki.km.leech.parser.CrawlerParser.parse(CrawlerParser.java:170)
... 9 more

errorMessage: 'Error while crawling “imaps://foo%40myemailerserver.lan:[email protected]:993/INBOX''
dataEntityId: 'imaps://foo%40myemailerserver.lan:@myemailerserver.lan:993/INBOX'
resourceName: 'imaps://foo%40myemailerserver.lan:@myemailerserver.lan:993/INBOX'
source: 'imaps://foo%40myemailerserver.lan:@myemailerserver.lan:993/INBOX'
dataEntityContentFingerprint: 'INBOX'
Content-Type: 'remotedatasource/imapfolder'

May 09, 2021 6:58:40 AM de.dfki.km.leech.sax.CrawlReportContentHandler crawlFinished
INFO: Crawl finished:
Report: First handled data entity at 09/05/21 06:58, 1 processed entities, duration 0ms
New data entities: 0
Modified data entities: 0
Removed data entities: 0
Unmodified data entities: 0
Double data entities: 0
Error data entities: 1
remotedatasource/imapfolder:1

On a more positive note, tests with files and web pages work.
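The rewriting seen while debugging ('@' becoming %40 in the user name) is standard percent-encoding: an '@' inside the userinfo part of a URL must be escaped so it is not mistaken for the userinfo/host separator. A minimal stdlib demonstration, using a made-up address rather than the reporter's real server:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Demonstrates the '@' -> %40 transformation observed in the issue.
// "foo@example.lan" is a hypothetical address for illustration only.
public class UserInfoEncoding
{
    public static void main(String[] args) throws UnsupportedEncodingException
    {
        String user = "foo@example.lan";
        String encoded = URLEncoder.encode(user, "UTF-8");
        System.out.println(encoded); // foo%40example.lan
    }
}
```

So the altered construct itself is expected; the NullPointerException afterwards suggests the crawler fails to decode or handle the encoded userinfo, which is where the bug likely lies.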
