
processors's Introduction


What is it?

This is the main public code repository of the Computational Language Understanding (CLU) Lab at the University of Arizona. Please see http://clulab.github.io/processors/ for more information about this software, including installation and usage instructions.

Changes

License

Our code is licensed as follows:

  • main, odin, openie - Apache License Version 2.0. Please note that these subprojects do not interact with the corenlp subproject below.
  • corenlp - GPL Version 3 or higher, due to the dependency on Stanford's CoreNLP. If you use only CluProcessor, this dependency does not have to be included in your project.
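For example, a project that must stay GPL-free can depend on the Apache-licensed code alone. A sketch of the corresponding build.sbt line; the artifact name and the "x.y.z" version are illustrative placeholders and should be checked against the project's documentation:

```scala
// build.sbt (sketch): depend only on the Apache-licensed main subproject so
// the GPL corenlp subproject is never pulled in. The artifact name is an
// assumption and "x.y.z" is a placeholder version.
libraryDependencies += "org.clulab" %% "processors-main" % "x.y.z"
```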

processors's People

Contributors

adarshp, ajaynagesh, alicekwak, beckysharp, dependabot[bot], dispalt, dpfried, egolaparra, enfageorge, enoriega, gcgbarbosa, hickst, hubert10, jerryzeyu, kwalcock, marcovzla, maxaalexeeva, mihaisurdeanu, moldovean, myedibleenso, razvandu, robertvacareanu, vanh17, zoharsacks


processors's Issues

String intern

Hi,
In the README, it's mentioned that

Furthermore, we used our own implementation to intern strings (i.e., avoiding the storage of duplicated strings multiple times).

However, in the recent release 5.8.0, I see

We no longer intern Strings by default in BioNLPProcessor.

What's the reasoning behind it?
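For context, a library-independent sketch (not the BioNLPProcessor implementation): interning replaces equal strings with one shared instance, trading a lookup at creation time for lower memory use.

```scala
// Equal strings constructed at runtime are distinct objects until interned;
// after interning, both names point at one canonical copy in memory.
val a = new String("protein")
val b = new String("protein")
println(a eq b)               // false: two separate objects holding equal text
println(a.intern eq b.intern) // true: a single shared, interned instance
```

The trade-off is the cost of the intern lookup on every string creation, which may be why it is no longer done by default.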

Global case (in)sensitivity

In an Odin application where case sensitivity doesn't matter, it would be convenient to have some global variable (or at least rule-level variable) to turn off case sensitivity rather than having to enter (?i) at the start of every regex. I think this ought to apply to quoted strings, too.
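For reference, the current per-rule workaround is the inline (?i) flag from java.util.regex, which is exactly what a global switch would save typing:

```scala
// The inline (?i) flag makes this single pattern case-insensitive; today it
// has to be repeated at the start of every regex in a grammar.
val pattern = "(?i)beijing".r
println(pattern.findFirstIn("He visited BEIJING."))  // Some(BEIJING)
```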

Dependency problem?

Is there some dependency issue going on? I get
Failure to find edu.stanford.nlp:stanford-corenlp-models

I solved it by replacing this dependency with a couple of others (see my forked version).

java.lang.NoSuchMethodError, scala 2.11.4, processors 5.3

Basic app:

object Main extends App {

  val proc:Processor = new CoreNLPProcessor(withDiscourse = true)

  // the actual work is done here
  val doc = proc.annotate("John Smith went to China. He visited Beijing, on January 10th, 2013.")

}

fails with:

Adding annotator tokenize
Adding annotator ssplit

Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.0 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [2.4 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [2.8 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [1.7 sec].
Initializing JollyDayHoliday for sutime with classpath:edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt
Aug 06, 2015 5:53:50 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Ignoring inactive rule: null
Aug 06, 2015 5:53:50 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Ignoring inactive rule: temporal-composite-8:ranges
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [1.8 sec].
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.trees.TreebankLanguagePack.punctuationWordRejectFilter()Ljava/util/function/Predicate;
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.mkGSF(CoreNLPProcessor.scala:49)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.gsf$lzycompute(CoreNLPProcessor.scala:39)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.gsf(CoreNLPProcessor.scala:39)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor$$anonfun$parse$1.apply(CoreNLPProcessor.scala:70)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor$$anonfun$parse$1.apply(CoreNLPProcessor.scala:65)
    at scala.collection.Iterator$class.foreach(Iterator.scala:750)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.parse(CoreNLPProcessor.scala:65)
    at edu.arizona.sista.processors.Processor$class.annotate(Processor.scala:73)
    at edu.arizona.sista.processors.shallownlp.ShallowNLPProcessor.annotate(ShallowNLPProcessor.scala:23)
    at edu.arizona.sista.processors.Processor$class.annotate(Processor.scala:56)
    at edu.arizona.sista.processors.shallownlp.ShallowNLPProcessor.annotate(ShallowNLPProcessor.scala:23)
    at Main$.delayedEndpoint$Main$1(Main.scala:13)
    at Main$delayedInit$body.apply(Main.scala:8)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main$1.apply(App.scala:76)
    at scala.App$$anonfun$main$1.apply(App.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
    at scala.App$class.main(App.scala:76)
    at Main$.main(Main.scala:8)
    at Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

build.sbt:

name := "Test"

version := "1.0"

scalaVersion := "2.11.4"

scalacOptions += "-feature"

libraryDependencies ++= Seq(
  "edu.arizona.sista" %% "processors" % "5.3",
  "edu.arizona.sista" %% "processors" % "5.3" classifier "models"
)

libraryDependencies += "com.github.javaparser" % "javaparser-core" % "2.1.0"

Tried to redownload models in case they were corrupted - no luck.
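A NoSuchMethodError at runtime usually means an incompatible version of a dependency (here, CoreNLP) won the classpath: the missing punctuationWordRejectFilter signature suggests the CoreNLP jar found at runtime predates the one processors 5.3 was compiled against. One way to test that theory is to pin the version explicitly; the version string below is a placeholder, not a known-good value:

```scala
// build.sbt (sketch): force a single CoreNLP version onto the classpath.
// Replace the placeholder with the version your processors release expects.
dependencyOverrides += "edu.stanford.nlp" % "stanford-corenlp" % "<corenlp-version>"
```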

new version of CoreNLP available

It is not mentioned whether we can use CoreNLP 3.2.0 (released on June 20th). It might be useful to say something about that in the documentation.

Changes of 3.2.0 compared to 1.3.5:
Improved tagger speed, new and more accurate parser model

Please upload the package to Maven

I want to use the package from non-GPL code, but the code is licensed under the GPL and cannot be downloaded from the Maven server. Will you upload the package to Maven?

Build failing with mvn

Hi
Every time I try and build the .jar files with mvn package it fails. I am including my full -e output below.

Thanks

[INFO] Error stacktraces are turned on.
[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for edu.arizona.sista:processors:jar:2.0
[WARNING] 'build.plugins.plugin.version' for org.scala-tools:maven-scala-plugin is missing. @ line 89, column 15
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building processors 2.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ processors ---
[INFO]
[INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ processors ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 42 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ processors ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-scala-plugin:2.15.2:compile (default) @ processors ---
[INFO] Checking for multiple versions of scala
[WARNING] Expected all dependencies to require Scala version: 2.10.1
[WARNING] edu.arizona.sista:processors:2.0 requires scala version: 2.10.1
[WARNING] org.scalatest:scalatest_2.10:2.0.M6-SNAP17 requires scala version: 2.10.0
[WARNING] Multiple versions of scala libraries detected!
[INFO] includes = [**/*.scala,**/*.java,]
[INFO] excludes = []
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ processors ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /var/opt/ENV/lib/processors/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ processors ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-scala-plugin:2.15.2:testCompile (default) @ processors ---
[INFO] Checking for multiple versions of scala
[WARNING] Expected all dependencies to require Scala version: 2.10.1
[WARNING] edu.arizona.sista:processors:2.0 requires scala version: 2.10.1
[WARNING] org.scalatest:scalatest_2.10:2.0.M6-SNAP17 requires scala version: 2.10.0
[WARNING] Multiple versions of scala libraries detected!
[INFO] includes = [**/*.scala,**/*.java,]
[INFO] excludes = []
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.6:test (default-test) @ processors ---
[INFO] Surefire report directory: /var/opt/ENV/lib/processors/target/surefire-reports


T E S T S

Running edu.arizona.sista.processors.TestFastNLPProcessor
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.302 sec <<< FAILURE!
Running edu.arizona.sista.processors.TestProcessorThreading
Tests run: 4, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 0.07 sec <<< FAILURE!
Running edu.arizona.sista.processors.TestCoreNLPProcessor
Adding annotator tokenize
Adding annotator ssplit
Adding annotator tokenize
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [3.2 sec].
Adding annotator tokenize
Adding annotator pos
Adding annotator lemma
Adding annotator tokenize
Adding annotator pos
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [7.8 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [7.6 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [5.4 sec].
Initializing JollyDayHoliday for sutime
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt
Mar 02, 2014 12:15:49 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Ignoring inactive rule: temporal-composite-8:ranges
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
Adding annotator tokenize
Adding annotator parse
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [1.9 sec].
Constituent parse tree:
(ROOT
(S
(NP
(NNP John)
(NNP Doe)
)
(VP
(VBD went)
(PP
(TO to)
(NP
(NNP China)
)
)
)
)
)
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Adding annotator lemma
Adding annotator ner
Adding annotator parse
Adding annotator dcoref
Adding annotator tokenize
Adding annotator parse
List(
(NP
(NNP John)
(NNP Doe)
),
(NP
(DT the)
(NN president)
),
(NP
(NNP IBM)
),
(NP
(NNP China)
))
Adding annotator tokenize
Adding annotator parse
S constituent is:
(S
(NP
(NNP John)
(NNP Doe)
)
(VP
(VBD went)
(PP
(TO to)
(NP
(NNP China)
)
)
)
)
NP constituent is:
(NP
(NNP John)
(NNP Doe)
)
VP constituent is:
(VP
(VBD went)
(PP
(TO to)
(NP
(NNP China)
)
)
)
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.053 sec
Running edu.arizona.sista.processors.TestDocumentSerializer
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Adding annotator lemma
Adding annotator ner
Adding annotator parse
Adding annotator dcoref
Constructed a document with 2 sentences.
Generated annotations:
S 2
T 6
John 0 4 NNP John PERSON O _
Doe 5 8 NNP Doe PERSON O _
went 9 13 VBD go O O _
to 14 16 TO to O O _
China 17 22 NNP China LOCATION O _
. 22 23 . . O O _
D 1
2
1 0 nn
2 1 nsubj
2 4 prep_to
EOX
Y 1
ROOT 0 0 6 1 S 1 0 6 3 NP 1 0 2 2 NNP 0 0 1 1 John 0 0 1 0 NNP 0 1 2 1 Doe 0 1 2 0 VP 0 2 5 2 VBD 0 2 3 1 went 0 2 3 0 PP 0 3 5 2 TO 0 3 4 1 to 0 3 4 0 NP 0 4 5 1 NNP 0 4 5 1 China 0 4 5 0 . 0 5 6 1 . 0 5 6 0
EOS
T 6
There 24 29 RB there O O _
, 29 30 , , O O _
he 31 33 PRP he O O _
visited 34 41 VBD visit O O _
Beijing 42 49 NNP Beijing LOCATION O _
. 49 50 . . O O _
D 1
3
3 0 advmod
3 2 nsubj
3 4 dobj
EOX
Y 1
ROOT 0 0 6 1 S 3 0 6 5 ADVP 0 0 1 1 RB 0 0 1 1 There 0 0 1 0 , 0 1 2 1 , 0 1 2 0 NP 0 2 3 1 PRP 0 2 3 1 he 0 2 3 0 VP 0 3 5 2 VBD 0 3 4 1 visited 0 3 4 0 NP 0 4 5 1 NNP 0 4 5 1 Beijing 0 4 5 0 . 0 5 6 1 . 0 5 6 0
EOS
C 4
0 1 0 2 1
1 4 4 5 3
1 2 2 3 1
0 4 4 5 2
EOD

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.368 sec

Results :

Tests in error:
testParser2(edu.arizona.sista.processors.TestFastNLPProcessor)
testParser1(edu.arizona.sista.processors.TestFastNLPProcessor)
testTwoThreadsSameProcCoreNLP(edu.arizona.sista.processors.TestProcessorThreading)
testTwoThreadsDifferentProcCoreNLP(edu.arizona.sista.processors.TestProcessorThreading)
testTwoThreadsSameProcFastNLP(edu.arizona.sista.processors.TestProcessorThreading)
testTwoThreadsDifferentProcFastNLP(edu.arizona.sista.processors.TestProcessorThreading)

Tests run: 17, Failures: 0, Errors: 6, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:30.794s
[INFO] Finished at: Sun Mar 02 12:16:45 PST 2014
[INFO] Final Memory: 11M/210M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.6:test (default-test) on project processors: There are test failures.
[ERROR]
[ERROR] Please refer to /var/opt/ENV/lib/processors/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.6:test (default-test) on project processors: There are test failures.

Please refer to /var/opt/ENV/lib/processors/target/surefire-reports for the individual test results.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoFailureException: There are test failures.

Please refer to /var/opt/ENV/lib/processors/target/surefire-reports for the individual test results.
at org.apache.maven.plugin.surefire.SurefireHelper.reportExecution(SurefireHelper.java:55)
at org.apache.maven.plugin.surefire.SurefirePlugin.execute(SurefirePlugin.java:592)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

NER has no way to manually specify a resolution for label ambiguities

The NER assigns labels based on an ordered set of categories. If the same text string occurs in two (or more) input files of different categories, there is apparently no way to specify the desired order of labeling, since the category hierarchy is fixed.

For example: the MITRE model identifies 'p110' and 'p85' as protein families but these strings are really nicknames and do not occur in the PFAM or InterPro protein family databases. Even if they were valid family names, however, they would still be ambiguous as they are also protein names, listed in the Uniprot protein database. Since the NER always labels proteins first, and there is no way to specify an override, these will always be labeled as proteins by the NER and treated as such by subsequent processing.

Looks similar to a library I wrote

I love it. I'll poke in deeper when I have the time. I wrote a similar library while I worked at the UW.

https://github.com/knowitall/nlptools

It was the product of spare time so it has some issues, but it wraps a number of libraries (breeze, clearnlp, etc.). You might want to take a look and see if it gives you any ideas for your library.

The NLP community needs a "go to" location for NLP tools with a clean API.

Multi-label classifier

Hi,

I wanted to ask if your learning package has any support for multi-label classification. If yes, how can I use it? I couldn't find any example.

thanks,
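In case it helps: when a toolkit offers only single-label classifiers, multi-label problems are commonly reduced to one-vs-rest, i.e., one binary classifier per label. A library-independent sketch, with keyword tests standing in for trained models:

```scala
// One-vs-rest reduction: each label has its own yes/no classifier, and a
// document receives every label whose classifier fires. The keyword checks
// below are toy stand-ins for real trained binary classifiers.
val binaryClassifiers: Map[String, String => Boolean] = Map(
  "SPORTS"   -> (text => text.contains("game")),
  "POLITICS" -> (text => text.contains("election"))
)

def predictLabels(text: String): Set[String] =
  binaryClassifiers.collect { case (label, clf) if clf(text) => label }.toSet

println(predictLabels("the election changed the game"))  // both labels fire
```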

Integrating SRL

I'm in the middle of adding SRL annotations from the LTH SRL parser when I noticed some existing code, particularly Reader.scala in swirl2, that might be suited to this.

Is Reader.scala ready for this? I'm particularly interested in combining this information with the discourse tree.

Question about runtime setup please

Hi Mihai,

I seem to really suck at guessing the sbt configuration for safely running the sample code at https://github.com/sistanlp/processors#annotating-entire-documents within my own application. I made my way through adding the necessary imports, after having added the Maven dependency in my build.sbt, and hoped for the best. I get the exception below for the line val doc = proc.annotate("John Smith went to China. He visited Beijing, on January 10th, 2013.")

00:11:17.670 [run-main-0] DEBUG e.a.s.discourse.rstparser.RSTParser - Loading RST parsing model from: edu/arizona/sista/discourse/rstparser/model.const.rst.gz
[error] (run-main-0) java.lang.NullPointerException
java.lang.NullPointerException
    at java.util.zip.InflaterInputStream.<init>(InflaterInputStream.java:83)
    at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:76)
    at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:90)
    at edu.arizona.sista.discourse.rstparser.RSTParser$.loadFrom(RSTParser.scala:202)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor$.fetchParser(CoreNLPProcessor.scala:465)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.rstConstituentParser$lzycompute(CoreNLPProcessor.scala:36)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.rstConstituentParser(CoreNLPProcessor.scala:36)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.discourse(CoreNLPProcessor.scala:453)
    at edu.arizona.sista.processors.Processor$class.annotate(Processor.scala:75)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.annotate(CoreNLPProcessor.scala:25)
    at edu.arizona.sista.processors.Processor$class.annotate(Processor.scala:54)
    at edu.arizona.sista.processors.corenlp.CoreNLPProcessor.annotate(CoreNLPProcessor.scala:25)
    at my scala line of code copied above .....
    at Boot$delayedInit$body.apply(Boot.scala:14)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
    at scala.App$class.main(App.scala:71)
    at Boot$.main(Boot.scala:4)
    at Boot.main(Boot.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)

Can you please advise what I should include in my build configuration, environment variables, or classpath in order to include everything that's needed at run time?

Should I really avoid Java 7 with version 3.3?
I added the following to my build.sbt to that effect:

scalaVersion  := "2.10.4"
scalacOptions += "-target:jvm-1.6"
javacOptions ++= Seq("-source", "1.6", "-target", "1.6")

I have specifically installed Oracle Java 6 now for this (I had only Java 7 before on the current machine), but I think all that should not be required, as I assume my Java 7 code should be able to use Java 6 class files. Some advice would be very much appreciated here...

Slow runtime performance

Hi!

Thanks for creating this Scala library, which helped me out a lot recently! Great stuff!

I'm puzzled about a couple of aspects of the library though: each time I run my program to annotate text (I'm just creating an instance of CoreNLPProcessor, then invoking its annotate method on that text), the library downloads a number of files and writes the following to the standard output:

Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.8 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [5.2 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [2.4 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [4.4 sec].
Initializing JollyDayHoliday for sutime
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt

Oct 03, 2013 9:57:46 AM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Ignoring inactive rule: temporal-composite-8:ranges
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
Adding annotator parse
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [0.7 sec].
Adding annotator dcoref

I have some issues with this:

  1. Downloading these files each time the program runs really slows down performance. Is there any way to download these files once and put them somewhere the library can find them, so that it doesn't need to do this? Alternatively, can the library cache this data? (This is my primary concern.)
  2. I'd rather not have information written to the standard output. Is there any way to turn this off?

(I'm assuming that your library has some control over these issues, but if they should be directed to Stanford's CoreNLP library developers instead, let me know.)

Thanks again!

Mike
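Until the logging question is answered, the usual mitigation for the load cost is to construct the processor once and reuse it across documents, since the models are loaded at construction time rather than per call. A library-independent sketch of the pattern (the toy word-count function stands in for an expensive annotator):

```scala
// Expensive initialization (e.g. model loading) should run once, not once
// per document; a lazy val defers the work to first use and then caches it.
object Pipeline {
  var loads = 0
  lazy val annotator: String => Int = {
    loads += 1                         // stands in for slow model loading
    text => text.split("\\s+").length  // stands in for real annotation
  }
}

val docs = Seq("John Smith went to China.", "He visited Beijing.")
println(docs.map(Pipeline.annotator))  // List(5, 3)
println(Pipeline.loads)                // 1: initialized once, reused twice
```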

Issue in Dependency Resolution.

I am trying to download the SBT dependencies by adding the following lines to build.sbt:

libraryDependencies ++= Seq(
  "edu.arizona.sista" %% "processors" % "5.0",
  "edu.arizona.sista" %% "processors" % "5.0" classifier "models"
)

However, this depends internally on the following dependencies:

"edu.arizona.sista" %% "banner" % "1.0-SNAPSHOT",
"edu.arizona.sista" %% "banner" % "1.0-SNAPSHOT" classifier "dragontool",
"edu.arizona.sista" %% "banner" % "1.0-SNAPSHOT" classifier "heptag"

The banner dependencies are not getting resolved by SBT.

My Scala version is "2.11.5".

Is there a way to keep Banner as a managed dependency?

The error is as follows:

sbt.ResolveException: unresolved dependency: edu.arizona.sista#banner_2.11;1.0-SNAPSHOT: not found
(*:update) sbt.ResolveException: unresolved dependency: edu.arizona.sista#banner_2.11;1.0-SNAPSHOT: not found
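For what it's worth, SNAPSHOT versions are not published to Maven Central, so they only resolve when a snapshots repository is configured. A typical (but unverified for this artifact) build.sbt addition:

```scala
// build.sbt (sketch): SNAPSHOT artifacts live in a separate repository
resolvers += "Sonatype Snapshots" at
  "https://oss.sonatype.org/content/repositories/snapshots/"
```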

example code can't be compiled

I get the following error when I try to compile your example code:

Incompatible types: String cannot be converted to Document
        Document doc = proc.annotate("John Smith went to China. He visited Beijing, on January 10th, 2013.");

Fix?

NullPointerException

Hi, I ran the testParser1 method in TestFastNLPProcessor. It throws a NullPointerException at the lines below:

// sista: it is possible that model files are included in a jar with a different name
// (this happens in the jar distribution of processors)
// inspect all jars in the classpath, to see if any contains our model files
if (!found) {
    String infoFile = getName() + File.separator + getName() + "_singlemalt.info";
    InputStreamReader inputReader = new InputStreamReader(
        this.getClass().getClassLoader().getResourceAsStream(infoFile));

I guess it did not find the model file, but I already imported processors-2.1.jar, and I imported processors-2.1-model.jar as well. I don't know where the issue is. (I have tested the corenlpTest; it is OK.)
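The NPE is consistent with getResourceAsStream returning null: that method signals a missing classpath resource with null rather than an exception, and the failure only surfaces when the null is handed to InputStreamReader. A self-contained sketch ("missing/model.info" is a made-up path for illustration):

```scala
// ClassLoader.getResourceAsStream returns null, not an exception, when the
// resource is absent from the classpath; the NullPointerException appears
// only when that null reaches InputStreamReader.
val loader = Thread.currentThread.getContextClassLoader
val stream = loader.getResourceAsStream("missing/model.info") // made-up path
println(stream == null)  // true here: nothing provides this resource
```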

Recursion error in DependencyUtils

I found an error when using DependencyUtils. It seems to get stuck in a loop in the dependency graph, and the recursion blows the stack. This is a snippet of the stack trace:

Caused by: java.lang.StackOverflowError
    at scala.runtime.BoxesRunTime.boxToInteger(BoxesRunTime.java:69)
    at edu.arizona.sista.processors.Sentence.dependencies(Document.scala:73)
    at edu.arizona.sista.utils.DependencyUtils$.getIncoming$2(DependencyUtils.scala:129)
    at edu.arizona.sista.utils.DependencyUtils$.followTrailOut$2(DependencyUtils.scala:145)
    at edu.arizona.sista.utils.DependencyUtils$.edu$arizona$sista$utils$DependencyUtils$$followTrail$3(DependencyUtils.scala:138)
    at edu.arizona.sista.utils.DependencyUtils$.followTrailOut$2(DependencyUtils.scala:150)

Need to override NER identifications

We often know how a given entity should be labeled. Assignments from knowledge sources should be able to override the NER's default classifications.

For example: the CRF seems to be responsible for identifying 'H-RAS' and 'K-RAS' (but not 'HRAS' or 'KRAS') as protein families even though our knowledge sources list these exclusively as proteins.

Example Scala code for v5.8.1 keeps throwing OutOfMemoryError

The example code from the README file for Scala keeps throwing OutOfMemoryError with default SBT parameters.

I remember previously using this library without making any changes to the JVM parameters, and it worked just fine. This is the first time, and with the current version, that I have noticed something like this.

Any idea why this might be happening?
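One guess: the models have grown between releases, so SBT's default heap is no longer enough. The usual remedy is to run the application in a forked JVM with a larger heap; the 6g figure below is illustrative, not a tested minimum:

```scala
// build.sbt (sketch): fork the run and give it more heap than SBT's default
fork := true
javaOptions += "-Xmx6g"  // illustrative; size to your models
```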

Add SBT Assembly

In certain applications it might be nice to have a single jar with all code and dependencies. This can be easily accomplished using sbt-assembly.

In the project directory, create a new file assembly.sbt with the contents addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.2") (changing the version as appropriate).

Then run sbt assembly to create the jar.

To avoid running all the tests as part of this process, add this line (surrounded by blank lines) to the top-level build.sbt:

test in assembly := {}
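Collected into the two files the steps above touch (plugin version as given in this issue; adjust as appropriate):

```scala
// project/assembly.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.2")

// build.sbt (top level, surrounded by blank lines): skip tests during assembly
test in assembly := {}
```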

Serialization not working

Thanks for the wrapper. I get an exception on the last line here:

proc.parse(doc)
val ser = new DocumentSerializer
val out1 = ser.save(doc)

I copied this from the test files; the exception is:
java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
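That particular ClassNotFoundException is the classic symptom of a Scala binary-version mismatch: the synthetic ...$class trait-implementation classes exist in Scala 2.11 and earlier but were removed by the new trait encoding in 2.12, so a _2.11 jar fails this way on a 2.12+ runtime. A sketch of the check (version numbers are placeholders):

```scala
// build.sbt (sketch): the library's Scala binary version must match
// scalaVersion; %% appends the matching _2.11/_2.12 suffix automatically.
scalaVersion := "2.11.12"  // placeholder: whatever 2.11.x you target
libraryDependencies += "edu.arizona.sista" %% "processors" % "x.y"  // placeholder
```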

mvn package fails over englishPCFG.ser.gz

mvn package seems to fail with the following, even though running the Stanford CoreNLP shell script (corenlp.sh) from the same directory works (and successfully finds the file reported below as missing, per its logging). The error message from mvn package:

Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... java.io.IOException: Unable to resolve "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz" as either class path, filename or URL

If it's relevant, this is when using:
Java(TM) SE Runtime Environment (build 1.7.0_45-b18). Ubuntu 13.04.
I hope it's not really the case that only Java 1.6 works...

The full mvn package output:

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building processors
[INFO] task-segment: [package]
[INFO] ------------------------------------------------------------------------
[debug] execute contextualize
[INFO] [resources:resources {execution: default-resources}]
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory ......../sistanlp-processors/src/main/resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] No sources to compile
[INFO] [scala:compile {execution: default}]
[INFO] Checking for multiple versions of scala
[WARNING] Expected all dependencies to require Scala version: 2.10.1
[WARNING] edu.arizona.sista:processors:1.4 requires scala version: 2.10.1
[WARNING] org.scalatest:scalatest_2.10:2.0.M6-SNAP17 requires scala version: 2.10.0
[WARNING] Multiple versions of scala libraries detected!
[INFO] includes = [**/*.scala,**/*.java,]
[INFO] excludes = []
[INFO] Nothing to compile - all classes are up to date
[debug] execute contextualize
[INFO] [resources:testResources {execution: default-testResources}]
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory ......./sistanlp-processors/src/test/resources
[INFO] [compiler:testCompile {execution: default-testCompile}]
[INFO] No sources to compile
[INFO] [scala:testCompile {execution: default}]
[INFO] Checking for multiple versions of scala
[WARNING] Expected all dependencies to require Scala version: 2.10.1
[WARNING] edu.arizona.sista:processors:1.4 requires scala version: 2.10.1
[WARNING] org.scalatest:scalatest_2.10:2.0.M6-SNAP17 requires scala version: 2.10.0
[WARNING] Multiple versions of scala libraries detected!
[INFO] includes = [**/*.scala,**/*.java,]
[INFO] excludes = []
[INFO] Nothing to compile - all classes are up to date
[INFO] [surefire:test {execution: default-test}]
[INFO] Surefire report directory: ..................../sistanlp-processors/target/surefire-reports


T E S T S

Running edu.arizona.sista.processor.corenlp.TestCoreNLPProcessor
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Adding annotator tokenize
Adding annotator pos
Adding annotator tokenize
Adding annotator pos
Adding annotator tokenize
Adding annotator parse
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ...
java.io.IOException: Unable to resolve "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz" as either class path, filename or URL
at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:419)
at edu.stanford.nlp.io.IOUtils.readStreamFromString(IOUtils.java:367)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromSerializedFile(LexicalizedParser.java:606)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromFile(LexicalizedParser.java:401)
at edu.arizona.sista.processor.corenlp.CoreNLPProcessor.parse(CoreNLPProcessor.scala:293)
at edu.arizona.sista.processor.corenlp.TestCoreNLPProcessor.testParse(TestCoreNLPProcessor.scala:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:103)
at org.apache.maven.surefire.Surefire.run(Surefire.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:350)
at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1021)
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Adding annotator tokenize
Adding annotator parse
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ...
java.io.IOException: Unable to resolve "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz" as either class path, filename or URL
...
and so on.
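This error typically means the CoreNLP models jar is not on the build's test classpath: corenlp.sh assembles its own classpath, while mvn package sees only the POM dependencies. The usual fix is to add the models artifact explicitly; a sketch in sbt syntax (the version is illustrative):

```scala
// build dependency sketch — the "models" classifier pulls in englishPCFG.ser.gz etc.
libraryDependencies ++= Seq(
  "edu.stanford.nlp" % "stanford-corenlp" % "3.3.1",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.3.1" classifier "models"
)
```

In Maven, the equivalent is a second dependency on stanford-corenlp with `<classifier>models</classifier>`.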

gazetteer matcher for Odin

We'd like to add an efficient matcher for terms in a gazetteer.

Notes

  • entries in the gazetteer may be single or multi-token
  • need a matcher for untokenized text
  • need a matcher for tokenized text
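For the tokenized case, a minimal sketch of the kind of matcher we have in mind (all names are illustrative, not the eventual Odin API): multi-token entries stored in a token-level trie, with greedy longest-match at each offset. The untokenized case could reuse the same structure after running a tokenizer first.

```scala
// Sketch of an efficient gazetteer matcher over token sequences.
object GazetteerMatcher {
  final class Node {
    val children = scala.collection.mutable.Map.empty[String, Node]
    var isEntry = false // true if a gazetteer entry ends at this node
  }

  /** Builds a token-level trie from (possibly multi-token) gazetteer entries. */
  def build(entries: Seq[Seq[String]]): Node = {
    val root = new Node
    for (entry <- entries) {
      var node = root
      for (tok <- entry)
        node = node.children.getOrElseUpdate(tok, new Node)
      node.isEntry = true
    }
    root
  }

  /** Returns (start, end) token spans, taking the longest match at each position. */
  def findMatches(root: Node, tokens: IndexedSeq[String]): Seq[(Int, Int)] = {
    val found = scala.collection.mutable.ArrayBuffer.empty[(Int, Int)]
    var i = 0
    while (i < tokens.length) {
      var node = root
      var j = i
      var lastEnd = -1
      while (j < tokens.length && node.children.contains(tokens(j))) {
        node = node.children(tokens(j))
        j += 1
        if (node.isEntry) lastEnd = j // remember the longest entry seen so far
      }
      if (lastEnd > i) { found += ((i, lastEnd)); i = lastEnd }
      else i += 1
    }
    found.toSeq
  }
}
```

Matching is linear in the number of tokens (times the longest entry length), regardless of gazetteer size.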

Unscalable and extremely slow performance compared to stanford-corenlp

I am using an actor-based pipeline to annotate 3.2 GB of text spread over 3-4 million text entities. When I use CoreNLP directly, everything works fine; when I replace it with sistanlp, it barely does anything. I am using 12 actors on my MacBook to do the work, so 12 annotator instances.

  1. Only one core is saturated.
  2. Work barely happens even on that single core.

I suspect some bad concurrency choices were made; the culprit is probably the interning symbol table.
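If the bottleneck really is a shared, lock-guarded structure such as an interning symbol table, one caller-side workaround is to give each worker thread its own instance instead of sharing one. A minimal sketch of the pattern, using a stand-in class rather than a real processor (this only helps if the contended state is per-instance, not a library-wide singleton):

```scala
object PerThreadAnnotator {
  // Stand-in for an expensive, internally synchronized processor instance.
  final class Annotator

  // Each thread lazily gets its own Annotator, so no lock is shared across workers.
  val local: ThreadLocal[Annotator] = ThreadLocal.withInitial(() => new Annotator)
}
```

Each actor/thread then calls `PerThreadAnnotator.local.get()` instead of touching a shared instance.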

GC overhead limit exceeded

Hi,

Thanks for the library, it is really great and useful. However, I sometimes get the exception in the title.
It happens when I use new CoreNLPProcessor(withDiscourse = true). Without this flag, memory consumption is not as high, but still suspicious.

So I cannot run withDiscourse even on my workstation, and in production (we use several cheap AWS instances) I occasionally get an OOM even without withDiscourse; sometimes even at the sbt compile stage.

I create the CoreNLPProcessor in a companion object and call it from the appropriate class. I use it to annotate input text and to do some analysis with syntacticTree and dependencies, so I think there are no leaks in my code.

I'd appreciate your help in resolving this issue: how can I reduce memory usage? Thanks.
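Until the underlying memory use is reduced, the usual mitigation is to run the processor in a forked JVM with a larger heap; discourse parsing in particular is memory-hungry. A hedged sketch for sbt (the flag values are illustrative, tune them to your machine):

```scala
// build.sbt — fork a separate JVM for run/test with a larger heap
fork := true
javaOptions ++= Seq("-Xmx6g", "-XX:+UseG1GC")
```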

Upgrade to CoreNLP 3.5

Stanford CoreNLP 3.5 brings some new features and an upgrade to Java 8 (see "Upgrade to Java 1.8; add annotators for dependency parsing, relation extraction"). https://mailman.stanford.edu/pipermail/java-nlp-user/2014-October/006457.html provides upgrade advice and more discussion of the additions/changes. It would be very nice to upgrade sistanlp to use it and publish to Maven.

I'm not sure how much Scala support for Java 8 is needed, though; official Scala releases include only partial support, but maybe that is enough.

Not able to get a result for the string "2013 Sales Report" in the Stanford CoreNLP SUTime program

Hi,
I have tried running the full Stanford CoreNLP SUTime program. I get a result for the strings "2013 IBM Sales Report" and "2013 SalesReport", but not for "2013 Sales Report": as soon as there is a space between "Sales" and "Report", no result comes back. The same problem occurs with "2013 HR Report". How can I resolve this? Please suggest a solution.

My program is:

package sutime;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.text.ParseException;
import java.util.Properties;
import java.util.List;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.AnnotationPipeline;
import edu.stanford.nlp.pipeline.POSTaggerAnnotator;
import edu.stanford.nlp.pipeline.PTBTokenizerAnnotator;
import edu.stanford.nlp.pipeline.WordsToSentencesAnnotator;
import edu.stanford.nlp.time.SUTime;
import edu.stanford.nlp.time.TimeAnnotations;
import edu.stanford.nlp.time.TimeAnnotator;
import edu.stanford.nlp.time.TimeExpression;

public class AnnotatorsModified {

    /**
     * @param args
     * @throws ClassNotFoundException
     * @throws IOException
     * @throws ParseException
     */
    public static void main(String[] args)
            throws IOException, ClassNotFoundException, FileNotFoundException, ParseException {

        String data1 = "2011 Sales Report";

        final String DEFAULT_CLASSIFIER_PATH =
                "F:/u/nlp/data/ner/goodClassifiers/english.all.3class.distsim.crf.ser.gz";
        final String DEFAULT_AUX_CLASSIFIER_PATH =
                "F:/u/nlp/data/ner/goodClassifiers/english.muc.7class.distsim.crf.ser.gz";

        Properties props = new Properties();

        AnnotationPipeline pipeline = new AnnotationPipeline();
        pipeline.addAnnotator(new PTBTokenizerAnnotator(false));
        pipeline.addAnnotator(new WordsToSentencesAnnotator(false));
        pipeline.addAnnotator(new POSTaggerAnnotator(false));
        pipeline.addAnnotator(new TimeAnnotator("sutime", props));

        Annotation annotation = new Annotation(data1);
        annotation.set(CoreAnnotations.DocDateAnnotation.class, "2013-07-14");
        pipeline.annotate(annotation);

        System.out.println(annotation.get(CoreAnnotations.TextAnnotation.class));

        List<CoreMap> timexAnnsAll = annotation.get(TimeAnnotations.TimexAnnotations.class);
        for (CoreMap cm : timexAnnsAll) {
            cm.get(CoreAnnotations.TokensAnnotation.class);
            TimeExpression timeExpr = cm.get(TimeExpression.Annotation.class);
            SUTime.Temporal temporal = timeExpr.getTemporal();
            System.out.println("\n **********SuTime************ ");
            System.out.println("TimeLabel:" + temporal.getTimeLabel());
            System.out.println("TimexValue:" + temporal.getTimexValue());
            System.out.println("Duration:" + temporal.getDuration());
            System.out.println("Granularity:" + temporal.getGranularity());
            System.out.println("Period:" + temporal.getPeriod());
            System.out.println("Range:" + temporal.getRange());
            System.out.println("Time:" + temporal.getTime());
            System.out.println("Type:" + temporal.getTimexType());
            System.out.println("UncertaintyGranularity:" + temporal.getUncertaintyGranularity());
        }
    }
}

unicode_to_ascii.tsv

I am getting an error from BioNLPPreProcessor:

Exception in thread "main" java.lang.AssertionError: assertion failed: Failed to find resource file edu/arizona/sista/processors/bionlp/unicode_to_ascii.tsv in the classpath!
    at scala.Predef$.assert(Predef.scala:165)
    at edu.arizona.sista.processors.bionlp.BioNLPPreProcessor$.loadUnicodes(BioNLPPreProcessor.scala:77)
...

Indeed, the file unicode_to_ascii.tsv is not on that path. Should this file be checked into the repo, or am I missing something?

Cross-build for Scala 2.10.0

First of all, great initiative and project!

Is there any chance of a 2.10.0/1 cross-build? We are using Akka (and Scala 2.10 futures), so we can't use Scala 2.9.x in our projects.

Version 5.0 is not in maven central

  1. Adding the dependencies to the pom file.
  2. Running Maven -> Reimport in IntelliJ IDEA.

Expected - dependencies are recognized.
Actual - maven reports error on version 5.0.

NOTE: I have checked the versions at Maven Central; the latest version is 3.3.

Dependency

How can I get different dependencies? (e.g. Basic vs Collapsed vs CC Collapsed)
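For reference, a sketch of what selecting a dependency representation might look like. The accessor names below (stanfordBasicDependencies, stanfordCollapsedDependencies, allEdges) are assumptions based on older sista APIs, not confirmed against this version; check the Sentence scaladoc for the exact names.

```scala
// Hypothetical sketch — field and method names are assumed, verify against the API.
import edu.arizona.sista.processors.corenlp.CoreNLPProcessor

val proc = new CoreNLPProcessor()
val doc  = proc.annotate("John eats cake.")
for (s <- doc.sentences) {
  val basic     = s.stanfordBasicDependencies     // assumed: basic dependencies
  val collapsed = s.stanfordCollapsedDependencies // assumed: collapsed dependencies
  basic.foreach(graph => println(graph.allEdges)) // assumed edge accessor
}
```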

edu.arizona.sista.learning.TestSVMRankingClassifier errors upon assembly

I am not sure this is a real issue (it may only be a local setup problem), but I tried to build from trunk today (sbt compile and assembly, after integrating sbt-assembly 0.11.2) and got some errors:

Finished all threads.
Estimated thread times:
Thread #0: 11309.0
Thread #1: 9942.0
[info] ScalaTest
[info] Run completed in 5 minutes, 21 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[error] Failed: Total 36, Failed 2, Errors 0, Passed 34
[error] Failed tests:
[error]     edu.arizona.sista.learning.TestSVMRankingClassifier
[error] (core/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 427 s, completed Jan 3, 2015 3:34:15 PM

Does the SVM require a certain minimum of free memory, or any other special setup? I did not notice any specific error other than the above.

Load different languages

How can I run it with the Spanish model that Stanford CoreNLP provides?
Is it possible? It automatically loads the English one.

Thanks
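The processors wrappers appear to load the English models by default, so the likely route is to drive CoreNLP directly with its bundled Spanish configuration. A hedged sketch, assuming the Spanish models jar (stanford-spanish-corenlp) is on the classpath and ships its defaults as StanfordCoreNLP-spanish.properties at the jar root:

```scala
import java.util.Properties
import edu.stanford.nlp.pipeline.StanfordCoreNLP

// Load the Spanish defaults shipped with the Spanish models jar (resource
// name is an assumption — check the contents of the models jar you use).
val props = new Properties()
props.load(getClass.getResourceAsStream("/StanfordCoreNLP-spanish.properties"))
val pipeline = new StanfordCoreNLP(props)
```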

"damage" matched as entity

There are a lot of damage-binding proteins and other entities with "damage" in their name, but I don't think "damage" on its own should be matched, as it currently is, e.g. in "Two of these sites ( Ser966 and Ser957 in Smc1 ) have been shown to be phosphorylated by the ATM kinase in response to DNA damage."
