
Comments (36)

mjy commented on May 29, 2024

Use BibTeX as the persistence model. It's a simple format; many supporting libraries and frameworks use it, everything exports it, AnyStyle is based on it, and anybody can export to it (Zotero, EndNote, etc.).

from general.

mdoering commented on May 29, 2024

Wow, quite a thread already. I would like to focus on the API requests/responses and not so much on the backend storage technology. That said, I want to remark that we use Solr in GBIF for 800 million records just fine. It is definitely scalable, and modern versions allow range and even spatial queries. Without taking a premature decision, I would first try to just use Postgres for all storage and search in the beginning, to keep things simple. Postgres can store and search schemaless JSON effectively these days, and it also has a native key-value store that I found useful in ChecklistBank already. So there are many options available.

Both BibJSON and CSL-JSON look good to me, even though CSL-JSON is not fully standardized yet. I would slightly prefer CSL-JSON though, for the large number of tools that already exist. The citeproc-java library provides various importers, including BibTeX & EndNote, and also connects to Zotero and Mendeley. The backing by CrossRef and Mendeley is great too (you can even get all CrossRef DOIs as a CSL-JSON dump).

The thing I am mostly unclear about is the degree of normalization we need. Both BibJSON and CSL-JSON present the entire metadata for a reference, including a container, e.g. the journal or book, but it's not really a separate entity in the JSON. @deepreef suggested to

support linking Journals and other "parent" references (e.g., books from chapters) using persistent identifiers (rather than relying on text only to cross-link)

Dealing with a flat reference instance is great for aggregating (on article level at least, leave out the page/treatment citation for now). But is this true for manually curating references? Thinking about the services the API needs to expose to allow users to manage literature it would help if at least journals and books are entities on their own. We could then also attach known spelling variations so subsequent lookups become better, e.g. for abbreviated journals which are common in botany.

On the other hand I would very strongly try to avoid excessive normalization as it is the case in the current CoL model. It makes integrating data much harder and at some point is also a nuisance for an editor and the UI.

I am therefore leaning towards keeping DOI-level references mostly flat and starting off with managing just journals as separate entities. Journals are very special: there are only a few, we have various spellings for them, and they are crucial in discovering an existing identifier for an article. It would also allow us to manage and scan ToC feeds in the future, spotting newly described names automatically as uBio did. Does that seem reasonable, or are there good reasons to further normalize the data?
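A minimal sketch of what this could look like: journals as first-class entities carrying known spelling variants, while article references stay flat and just point to a journal id. All names and data here are illustrative, not an agreed CoL+ model.

```python
# Hypothetical journal registry: each journal entity carries its known
# spelling variants, so abbreviated citations can be resolved to one id.
journals = {
    "J1": {
        "title": "Annals and Magazine of Natural History",
        "issn": "0374-5481",
        "variants": {"Ann. Mag. Nat. Hist.", "Ann.Mag.Nat.Hist."},
    },
}

def find_journal(name):
    """Resolve a (possibly abbreviated) journal string to a journal id."""
    needle = name.strip().lower()
    for jid, j in journals.items():
        candidates = {j["title"], *j["variants"]}
        if needle in (c.lower() for c in candidates):
            return jid
    return None
```

A flat article record would then only need `{"journal": "J1", ...}` plus its own metadata, and every new spelling variant added improves all subsequent lookups.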


mjy commented on May 29, 2024

@rdmpage

  1. I think we're on the same page. By 'domain specific content' I mean to point out that we, and only we, should have special knowledge about certain parts of this problem (names). Other parts of the domain (literature) are not special to us, and we would be arrogant to think that we can come up with a better solution for those domains than, for example, the Library of Congress, or a Russian lady who has done what we cannot.

  2. I think we're talking past each other in part here, or maybe it's not an issue. I'm thinking baseline, and in some ways your numbers support my point: if only 1/3 to 1/2 of names can actually be linked to a digital presence, then a flat citation is a super reasonable first step. In the worst-case scenario the flat citation solution handles 2/3 of all available data; that's a pretty nice start. If you're saying that there are dark digital presences out there that would mean we can link 3/4 (arbitrary number off the top of my head) of names to something digital, that's another issue, i.e. making dark data visible.


rdmpage commented on May 29, 2024

deepreef commented on May 29, 2024

My comments:

  1. I recommend that we use one of the templates suggested by @rdmpage here. What we DON'T want to do is get trapped in a process of finding the perfect, all-encompassing data model that perfectly parses everything and accommodates every permutation -- that's what killed the TDWG Literature group. Let the library community sort out the perfect model (actually, they kind of already have). For our purposes, we want to keep it as simple as possible, and compatible with one of the aforementioned templates. I'll defer to @rdmpage for his preference on which is best, but I would strongly advise that we select one with minimally the following properties:
  • Support for embedding any number of persistent identifiers for each record
  • Support for any number of ordered Authors parsed as FamilyName and GivenName (it would be very nice to also capture a "role" for each author, but that's not critical)
  • Support for linking Authors, Journals and other "parent" references (e.g., books from chapters) using persistent identifiers (rather than relying on text only to cross-link)
  • Parsed Volume number and minimally StartPage
    It looks like BibJSON has most of this, and would have all of it if we could include the same "identifier" structure for authors as already exists for the Ref and the Parent ref, and also maybe add a "role" property for authors.
  2. The best I've seen is AnyStyle. But @gsautter might have other suggestions.

  3. See my comments for #1.

  4. Should be covered by #1/#3.

  5. The most comprehensive is RefBank (again, perhaps involve @gsautter). It's a dirty bucket, but @gsautter has done a lot of parsing already, and we could also play with running it against AnyStyle services.
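The identifier structure described in the first point above could look roughly like this. Field names follow BibJSON conventions (`identifier`, `author`, `journal`); the author-level `identifier` list and `role` property are the proposed extensions, not standard BibJSON, and all values are invented for illustration.

```python
import json

# A BibJSON-style record, hypothetically extended so that authors carry
# the same "identifier" list that BibJSON already puts on the reference
# and its journal. The "role" key on authors is also a proposed extension.
record = {
    "type": "article",
    "title": "A hypothetical revision of an example genus",
    "identifier": [{"type": "doi", "id": "10.0000/example"}],
    "journal": {
        "name": "Example Journal of Zoology",
        "identifier": [{"type": "issn", "id": "0000-0000"}],
        "volume": "12",
        "pages": "1--20",
    },
    "author": [
        {
            "family": "Smith",
            "given": "Jane",
            "role": "author",  # proposed extension, not standard BibJSON
            "identifier": [{"type": "orcid", "id": "0000-0002-0000-0000"}],
        }
    ],
}

# Round-trips cleanly as plain JSON, which is the point of BibJSON.
assert json.loads(json.dumps(record)) == record
```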

The main barrier to viewing text will be paywalls. Between BHL and Gallica (and perhaps Google Books), much or most of the old stuff should be online in some form already. New stuff is published open access as well.

Minimal metadata for the citation is critical to tracking down the text content (e.g., BHL). See my other comment on this. Chances are, if you already have access to the text, then the metadata is probably there for the taking as well.

Reply from @rdmpage came in just as I was writing the above. Amazingly :-), I agree with 100% of what he says. Particularly this bit:

So, I would frame the problem of linking names to publications as (1) linking to the smallest citable container of the name (typically an article with a DOI) and (2) the location within the document (typically a fragment identifier).
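This two-part framing can be sketched in a few lines: an identifier for the smallest citable container (here a DOI, taken from the curl examples later in this thread) plus an optional fragment identifier for the location within it. The `page=` fragment syntax is purely illustrative.

```python
def name_link(container_doi, fragment=None):
    """Build a resolvable link to a name's publication: (1) the smallest
    citable container (a DOI), and optionally (2) a fragment identifier
    for the location (page, treatment) within the document."""
    url = "https://doi.org/" + container_doi
    return url + "#" + fragment if fragment else url
```

So `name_link("10.1093/database/baw125", "page=5")` would point at a specific location inside the article, while the bare call resolves the container itself.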


rdmpage commented on May 29, 2024

@mjy Arghhh, not BibTeX! Please dear God no.

The world has moved on. JSON has won the data format wars; indeed, arguably it's going to win the database wars too, as document stores such as CouchDB and Elasticsearch mean you can store, index, and exchange pure JSON. This is liberating: no more database schema (have you seen the CoL schema 😱), no more ORM, etc. Ironically, the current CoL API returns some pretty nice JSON (e.g. Hoffmannilena lobata), so one could pretty much recreate CoL as a series of documents in CouchDB and be done. But I digress.

Looking at the formats, personally I'd go for CSL-JSON, mainly because it's the default for CrossRef, and we can use them as an example of how to add the extra things @deepreef is after. For example, CrossRef adds ORCIDs into author names where it has them, and this could be extended to other identifiers, such as VIAF, ISNI, ZooBank, Wikispecies, etc. CSL-JSON also supports the whole suite of CSL formatting tools, for those anally retentive types who are fussy about how references should look.
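For concreteness, here is roughly what a CSL-JSON item with a CrossRef-style author identifier looks like: CrossRef puts an `ORCID` field on author objects, and the same pattern could carry other identifiers. The DOI, title, and ORCID below are invented placeholders, not real records.

```python
# A CSL-JSON item sketch. "type", "DOI", "container-title", "author",
# and "issued"/"date-parts" are standard CSL-JSON fields; the ORCID on
# the author follows CrossRef's practice. All values are illustrative.
item = {
    "type": "article-journal",
    "DOI": "10.0000/example",
    "title": "An example article title",
    "container-title": "Example Journal of Zoology",
    "author": [
        {
            "family": "Smith",
            "given": "Jane",
            "ORCID": "http://orcid.org/0000-0002-0000-0000",
        }
    ],
    "issued": {"date-parts": [[2016]]},
}
```

Extending this to VIAF, ISNI, or ZooBank ids would just mean adding further keys (or a list of identifiers) to each author object.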

In some ways deciding on formats may be premature, but I would make a plea for the liberating effects of having JSON as both the exchange and the database format, and the desirability of learning from people who handle bibliographic data at massive scale for a living, e.g. CrossRef.


deepreef commented on May 29, 2024

OK, so I may have misunderstood the comment from @mjy, but my interpretation of the "persistence model" is different from the "exchange format". Or maybe not? I think BibTeX is fine as a model, and BibJSON is fine as an exchange format. But if @rdmpage prefers CSL-JSON, I can roll that way. However, it seems more than we need in this context (i.e., we don't care about styles, we just care about a simple structure of core metadata to use as a bridge between names and persistent identifiers). I honestly don't care, as it appears both approaches (BibTeX/BibJSON and CSL-JSON) will allow inclusion of the elements I think are minimally needed.


rdmpage commented on May 29, 2024

@deepreef I guess I'm arguing that "persistence model" = "exchange format" is a lot simpler, and quite achievable for bibliographic data.


deepreef commented on May 29, 2024

Ah! OK, I understand. I'm a bit underwhelmed by the gap between promise and delivery (or, more precisely, functional and practical implementation) of the whole "indexed text file as database" thing. I've been hearing about it for years, but so far I'm not seeing a lot of migration that way. You may well be right that this approach will eventually win the database wars; but my hunch is that for at least the initial implementation of CoL+, we're still talking about persistence of content in tables and fields (e.g., MySQL or whatever).

@mdoering maybe you can clarify, are you more asking about a database-type model, or a standard data exchange format in your point 1? Or, were you mostly asking about "flat" vs. "hierarchical" in terms of how we capture and store the information (e.g., is there one Reference table with a recursive link between an "Article" instance and its parent "Journal" instance, vs. one table for articles and one table for journals)? Or something else? To me, the implementation of the data store doesn't matter so much, but the degree of parsing and the exchange format do matter. To that end, I would strongly advocate for:

  • Author names parsed to at least GivenName, FamilyName and Suffix (it's helpful to include the "Jr." etc. as a parsed component of author names)
  • Authors enumerated in some way (either as an easily parsable blob of text, or iterative ordered listing)
  • Multiple identifiers for each instance of Reference, Author, and "parent" (e.g., book for chapter or journal for article).
  • Standard set of parsed citation metadata (doesn't need to be fancy or extensive)

Additional niceties would include support for multiple & qualified titles (helpful for variants, alternate languages, etc.), multiple & qualified dates (stated cover date vs. publication date, etc.), and a few other things.
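The parsing granularity advocated in the bullets above can be sketched as follows: ordered, parsed authors with a suffix slot, plus multiple identifiers per entity. Field names here are illustrative only, not a proposed schema.

```python
# Hypothetical parsed reference: authors are ordered, split into
# given/family/suffix, and each entity can carry multiple identifiers.
reference = {
    "identifiers": [{"type": "doi", "value": "10.0000/example"}],
    "authors": [  # list order is significant
        {"given": "Charles", "family": "Davis", "suffix": "Jr.",
         "identifiers": [{"type": "orcid", "value": "0000-0001-0000-0000"}]},
        {"given": "Ana", "family": "Silva", "suffix": None,
         "identifiers": []},
    ],
    "volume": "7",
    "start_page": "113",
}

def short_citation(ref):
    """Render a 'Davis Jr. & Silva'-style author string from parsed parts,
    which is only possible because the suffix is a separate component."""
    parts = []
    for a in ref["authors"]:
        parts.append(a["family"] + (" " + a["suffix"] if a["suffix"] else ""))
    return " & ".join(parts)
```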

More important than either the data model or the exchange format is that we come to some common agreement on what the different "Types" (or "classes") of literature items there are. The simplest thing would be to get away from things like "Magazine" vs. "Journal" vs "Newspaper", or "Chapter in a book" vs. "Article in a Conference Proceedings", and stick to four core classes of Reference objects:

  1. Series (Journals, Magazines, Newspapers, multivolume Book Series, etc.)
  2. Volume (Book, single-volumes within book series or conference proceedings, etc.)
  3. Article (Articles within series, authored chapters within books, etc.)
  4. Section (subsections of volumes or articles, typically more granular than what appears within a bibliography, but useful for capturing specific authorship, dates, or titles for smaller sections -- such as Treatments in our context).

... and maybe a few defined business rules about how these things can be nested and/or cross-referenced. Once we have those defined, and agree on an exchange format, then it pretty much doesn't matter how we store the content (CouchDB vs. a SQL database vs. a triplestore vs. whatever) -- as long as we make sure we have sufficient granularity, both for parsed citation metadata and Reference-object instances, to accommodate the needs of CoL+.
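One way to encode the four reference classes and a simple set of nesting rules is a table of allowed parents per class. This is just an illustrative reading of the list above, not an agreed CoL+ model.

```python
# Which parent classes each reference class may attach to. A class with
# an empty set is top-level only (or stand-alone, like a plain book).
ALLOWED_PARENTS = {
    "series": set(),                  # journals, magazines, book series
    "volume": {"series"},             # books, proceedings volumes
    "article": {"series", "volume"},  # journal articles, book chapters
    "section": {"volume", "article"}, # treatments, authored subsections
}

def can_nest(child_class, parent_class):
    """True if a reference of child_class may be nested under parent_class."""
    return parent_class in ALLOWED_PARENTS[child_class]
```

A validation layer like this keeps the "Chapter in a book" vs. "Article in conference proceedings" distinction out of the class system while still constraining how records link up.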


gsautter commented on May 29, 2024

I'm with @rdmpage regarding the exchange model, as BibTeX puts the authors in a single string separated by "and" ... that's just unsafe for parsing.
BibJSON might be a good choice, or MODS XML. RefBank uses the latter, mainly because XML comes with XSLT, which makes it a very versatile data format that easily transforms into everything else.
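The failure mode of BibTeX's "and"-separated author field is easy to demonstrate: the delimiter also occurs inside corporate or otherwise unbraced names. (BibTeX users work around this with braces, but a naive parser will not.)

```python
def naive_split(author_field):
    """The obvious-but-unsafe way to parse a BibTeX author field."""
    return author_field.split(" and ")

# Works for the common case:
safe = naive_split("Smith, J. and Jones, K.")
assert safe == ["Smith, J.", "Jones, K."]

# But a single institutional author is wrongly split into two:
broken = naive_split("Museum of Arts and Sciences")
assert broken == ["Museum of Arts", "Sciences"]
```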

However, separating the storage model from the exchange model is extremely advisable, as there might be multiple implementations at some point, and that separation gives implementing parties some liberty in their back-end design. A huge argument for wider buy-in, I guess.

On top of that, NoSQL surely is very easy to implement and will work great on test datasets. However, exclusively using NoSQL does away with a few things:

  • having to define a schema (admitted)
  • the ability to treat numbers as numbers on range queries, and supporting such queries with appropriate index structures to make them highly responsive even on large datasets
  • elaborate support for aggregate functions
  • the support of technology optimized for 30+ years to handle large datasets in general

Bottom line: once a dataset grows beyond test (toy) size, using relational technology (not ORM, just plain old SQL) offers vast advantages over its alternatives, mostly in terms of performance, but also regarding what queries you can ask.
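The range-query point in particular is easy to make concrete: a relational store treats years as numbers and can back a range scan with an index. SQLite stands in here for "plain old SQL"; the table and data are invented for the demonstration.

```python
import sqlite3

# Tiny in-memory relational store of reference years, with an index
# that can support range scans on large datasets.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE refs (id INTEGER PRIMARY KEY, year INTEGER)")
db.execute("CREATE INDEX idx_year ON refs(year)")
db.executemany("INSERT INTO refs (year) VALUES (?)",
               [(1887,), (1923,), (1999,), (2016,)])

# A numeric range query, exactly the kind of thing that is awkward in a
# pure full-text index where years are just strings.
rows = db.execute(
    "SELECT year FROM refs WHERE year BETWEEN 1900 AND 1950").fetchall()
```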

I recently asked a Solr expert what kind of search performance he'd expect for a Solr instance with 200,000+ documents in it ... raised eyebrows ...
Plazi has been using a hybrid approach for years (XML in files, relational tables for management data, indexing, search, and statistics), and that approach has shown it scales quite well.


rdmpage commented on May 29, 2024

@gsautter Obviously arguments can be made either way - as always I'm playing devil's advocate and trying to argue for simplicity and applicability.

As an aside, and without wanting to get into a technology war: suggesting that NoSQL is only OK for toy datasets may surprise the folks at Elasticsearch, who've built a company on analysing massive datasets. BioStor has close to 200,000 articles and is happily running on CouchDB with Lucene as the search engine, all hosted in the cloud. BioNames has literally millions of "documents" stored in CouchDB, of which around 300,000 are bibliographic records. Every technology has strengths and weaknesses; I'd make a plea for keeping options and minds open. There's a lot of cool stuff out there, and the field changes fast.


gsautter commented on May 29, 2024

I know what Elasticsearch can do, especially for full-text search. But such search is always exact match, not range queries, as might be desirable on pages and years for data cleanup and reconciliation purposes.

Not questioning that BioStor runs fine on NoSQL, but which kinds of queries does it ask the DB?

The one question regarding BioNames would be how they join up the names with the bibliographic records ... do they get them in separate queries? Could they, for instance, easily get a list of their bibliographic records with the count of names attached to them, sorted descending by that count?

I'm well aware that the field changes fast, and I guess it always has. All the more impressive and important are technologies that prevail, and relational databases are one of them. And for a reason.


mjy commented on May 29, 2024

So apparently we need a table that lets us quickly resolve the pissing match about whose technology is the right technology (NoSQL - argh, for the love of god, no). We can all fill it out, understand where each other is coming from, and get down to the business of actually building endpoints.

@rdmpage for the record, so we end any doubt: exchange everything in JSON. I don't care what flavor, but the simpler the better. If you can't see the route from BibTeX as key-value pairs, trivially exported to JSON, we've got bigger problems.

My point about BibTeX is that I don't want to think about reference attributes; they are pointless when you have a DOI. @gsautter, author strings are just fine in BibTeX when you consider it a means to an end, i.e. a step to getting something like an ORCID for a person. People will want to record attributes as an intermediate to getting the DOI. So which ones to choose? Those in BibTeX. I defy anyone to find a simpler method to actually build a bibliography than by using something that serializes BibTeX (and if you can do that, you can serialize "anything", btw). At the SOTOL workshop we compiled and exchanged thousands of references, and built a pipeline on them, with ZERO training beyond explaining some Zotero bits. Basically zero ramp-up time, using existing scripts and tools.
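The "BibTeX is key-value pairs, trivially exported to JSON" route, in miniature. The entry below is a hand-built stand-in; a real pipeline would use a BibTeX parser rather than a literal dict, and all values are invented.

```python
import json

# A BibTeX entry is an entry type, a citekey, and a bag of key-value
# fields -- which maps directly onto a JSON object.
bibtex_entry = {
    "entry_type": "article",
    "citekey": "smith1923",
    "fields": {
        "author": "Smith, J. and Jones, K.",
        "journal": "Example Journal of Zoology",
        "year": "1923",
        "doi": "10.0000/example",
    },
}

# The export is just serialization; nothing about the model is lost.
as_json = json.dumps(bibtex_entry, sort_keys=True)
```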


gsautter commented on May 29, 2024

Not after a pissing match at all ... just trying to inject some technology considerations into the discussion.


mjy commented on May 29, 2024

But you can see the writing on the wall, right? ;)


gsautter commented on May 29, 2024

@mjy, you are right regarding lookups, but what about identifying authors as entities, as @deepreef intends to do? Having author names stored individually might turn out helpful for that general purpose, and co-authorship might also play some role there.


gsautter commented on May 29, 2024

I do see the writing on the wall, sure ... ;-D


mjy commented on May 29, 2024

@gsautter Exactly. We actually use 4 different approaches for recording taxon name "authors" in our current model - it's a little hellish, but it reflects very well the different levels of refinement of incoming data. In our "graph" you can think of it as People nodes that are used in Role nodes that are linked to References or directly to Protonyms. We can also record "verbatim" values, i.e. author name strings that are not normalized to People nodes, at both the name and reference nodes/objects. Given this, one can imagine there is a priority for determining which value to display as the author name, based on a simple set of rules.
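A minimal sketch of that display-priority rule: prefer authors normalized to People records, and fall back to the verbatim author string. The record shape and key names here are hypothetical, not the TaxonWorks model.

```python
def display_author(name_record):
    """Pick the author string to display for a taxon name: normalized
    People nodes win; a stored verbatim string is the fallback."""
    people = name_record.get("people")  # normalized People records, if any
    if people:
        return ", ".join(p["family"] for p in people)
    return name_record.get("verbatim_author", "")  # raw-string fallback

# Normalized data is preferred...
assert display_author({"people": [{"family": "Linnaeus"}]}) == "Linnaeus"
# ...but unrefined incoming data still displays something.
assert display_author({"verbatim_author": "L."}) == "L."
```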


rdmpage commented on May 29, 2024

@mjy Defining a set of services would be one way to side-step the implementation decisions, sure. Some endpoints could be relevant outside CoL, so maybe some could follow the https://github.com/OpenRefine/OpenRefine/wiki/Reconciliation-Service-API used by OpenRefine.
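For reference, a reconciliation response in the OpenRefine Reconciliation Service API linked above has this general shape: per-query candidate lists whose entries carry an id, name, type, score, and match flag. The journal data below is invented for illustration.

```python
# Shape of an OpenRefine-style reconciliation response for one query
# ("q0"). A CoL+ journal-reconciliation endpoint could answer in this
# form so existing OpenRefine tooling works against it unchanged.
response = {
    "q0": {
        "result": [
            {
                "id": "J1",
                "name": "Annals and Magazine of Natural History",
                "type": [{"id": "journal", "name": "Journal"}],
                "score": 97.3,
                "match": True,  # confident enough to auto-match
            },
        ]
    }
}

best = response["q0"]["result"][0]
```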


mjy commented on May 29, 2024

@rdmpage that route sounds very worthwhile exploring to me.


rdmpage commented on May 29, 2024

@mdoering Sounds good to me. This is pretty much what I do in BioNames, articles and journals are stored as separate JSON documents, linked via ISSNs (if no ISSN then an OCLC number). WorldCat has APIs for retrieving data about journals, including RSS feeds amongst other details.

As part of the IPNI mapping I'm working on, I have a local store of articles based on harvesting CrossRef, JSTOR, OAI-PMH endpoints, screen scraping, etc. Dumping this to CSL-JSON would be straightforward and could be part of the input into whatever bibliographic store you decide to go with. I'm guessing you're also going to want to support other formats for importing, such as BibTeX, RIS, etc.


mdoering commented on May 29, 2024

Exactly @rdmpage. I was just wondering if all journals indeed have an ISSN - even old ones predating modern times? And yes, allow BibTeX and others for importing at least. If it's easy, also as responses in the API, but that would not be a priority.


rdmpage commented on May 29, 2024

mjy commented on May 29, 2024

@mdoering - nothing you propose raised flags for me; it all sounds quite good. Just minor thoughts rather than complaints here:

  • Re "DOI level references mostly flat": I'd be curious to try "all flat", with identifiers (e.g. DOIs) pointing to them. Serials are over-rated, and another concept that would need to be hashed out. Think of them as another edge on a reference that can be added later. IMO requiring them to start will only delay getting to the domain specific content (names).

  • For the record, conceptually in TW we settled on two subclasses - completely flat/verbatim, or BibTeX attributed, with no intermediates. That is, you can use the BibTeX format if you break out attributes; it will only render right if you break out enough of them to meet the render model. That decision has been awesome, as it allows us not to have to worry about formatting and displaying a billion intermediate forms. Having just two formats is pain enough.

  • I like to think of what a librarian told me as she looked over the citations section of my master's thesis, finding problems. Her sole criterion for what to communicate to me was: "What is wrong with this citation such that it prevents the reader from finding the source material, i.e. finding out more?" If the premise of the reference is to "guide the reader to the source material", then a flat reference will meet the need in nearly every case. This is proven, as it is the accepted pattern in all publications. Note that this doesn't mean it will make the job (finding the original source content) easy; it will just make it possible (assuming it is possible). The DOI makes finding the source almost trivial. If one thinks of splitting out other attributes, then one could determine which to split out first by prioritizing them according to the question "how much simpler does this make it to find my original source material?". Of course there are many other reasons for breaking down references, but for addressing technical questions I like to start with this one, as it feels parsimonious to me.

  • One last comment: why not just contact Zotero, the Library of Congress, the Russian gal who is storing every reference ever, or somebody else who does this already? Ask how they are indexing references, and focus on questions of scalability, not content.


rdmpage commented on May 29, 2024

@mjy Playing devil's advocate:

  1. "the domain specific content (names)": I'd argue that literature is at least equal in importance to the names, because the literature is where the evidence for those names lies. IMHO our field ended up with lots of fairly useless lists of names, as if these meant much by themselves. If we want to make (a) something useful to taxonomists and (b) something useful to anyone wanting to find out something about organisms, names are not enough. Let's stop treating the literature as a second class citizen; it's one reason we're in this mess in the first place.

  2. "guide the reader to the source material": It's the 21st century, I think we can do better than that. The default expectation should be a digital identifier that resolves to a potentially readable publication. Obviously a lot of stuff isn't currently available digitally, but a lot is. I'd estimate 1/3 to 1/2 of names can be linked to a currently digitised publication. Once you look at what publishers, archives, societies, and BHL are digitising, there's a huge digital library growing that we should be making use of.


rdmpage commented on May 29, 2024

@mjy If your definition of "flat" includes, say, CSL-JSON, then yes, we're on the same page 😉


mjy commented on May 29, 2024

deepreef commented on May 29, 2024

Wow... lots of great stuff. OK, a couple things:
@mdoering

I am therefore leaning towards keeping DOI level references mostly flat and start off with managing just journals as separate entities.

This is a bit contradictory ("flat" vs. "journals as separate entities"), but I am in favor of attaching identifiers to journals/series (however you structure the data).

@mjy

Serials are over-rated, and another concept that would need to be hashed out. Think of them as another edge on a reference that can be added later. IMO requiring them to start will only delay getting to the domain specific content (names).

I see the wisdom in this, and agree. However, from my experience, in the long run I think you will find that serials are seriously under-rated. I have found that the bang-for-the-buck value in dealing with serials as separate entities (as opposed to text-blob properties of article-level references) is actually extremely high if you are at all concerned with de-duplication/reconciliation of article-level instances, and especially if you want to discover DOIs or link to BHL pages.

If you're only interested in references as rough metadata properties of name-usages, then full-flat is probably all that's needed. But if you want to leverage the reference metadata (e.g., generate cross-links to BHL and discover DOIs and such), then cleaning up the Serials is the key. For example, the ZooBank service to find BHL pages for names automatically using text-string journal names alone yielded about 10,000 hits and about 40% accuracy. Once we cross-linked the ~3K journals between BHL and ZooBank (with no other changes or cleanup), that instantly went up to >50,000 hits with >90% accuracy. That's better than a 10-fold improvement in our ability to "guide the reader to the source material".

Most of the rest of the article-level metadata is superfluous in this context. The difference here is whether a human reads a bibliographic citation and figures out how to find it, vs. cross-linking electronic datasets to each other automatically, minimizing the need for human intervention. I think the latter is much more scalable, but we need to accommodate both, as humans need to verify and correct the automatic links, and sort out the cases where there is no automatic link.

In fact, if we're going to ditch anything (to keep things as simple as possible), I would say forget about article-level metadata entirely. DOIs are the ONLY thing we need to capture for new content going forward (no need to capture any metadata once we have the DOIs). But the work is in discovering the DOIs when you don't already have them. Furthermore, for dealing with a 250-year legacy of content (as CoL needs to do), DOIs solve only a small fraction of the problem. If our goal is to link to BHL pages for the pre-DOI stuff, the most important things you need are the BHL TitleID for the journal/book, the volume number (for Series articles only), and the page number. The rest of the metadata only helps in the <10% of cases where those three pieces of information don't produce the correct result.

So, in summary: if our goal is to connect taxon names to source content, then we only need two approaches to cover almost all of what we're after:

  1. DOIs (only) for modern content
  2. BHL TitleID + Page number [+ Volume number for article-based content] for historical stuff.
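A resolver following these two rules could be sketched as below. The URL patterns are illustrative; in particular the `bhl:` path is a placeholder key, not a real BHL URL scheme, and the ids are invented.

```python
def source_link(ref):
    """Two-pronged linking: modern content resolves by DOI; historical
    content by BHL TitleID + page (+ volume for article-based content)."""
    if ref.get("doi"):
        return "https://doi.org/" + ref["doi"]
    if ref.get("bhl_title_id") and ref.get("page"):
        path = [str(ref["bhl_title_id"])]
        if ref.get("volume"):          # only needed for serial articles
            path.append(str(ref["volume"]))
        path.append(str(ref["page"]))
        return "bhl:" + "/".join(path)  # placeholder scheme, not a BHL URL
    return None  # the human-effort remainder: no digital link yet
```

The `None` branch is the 1/2 to 2/3 of legacy content discussed next, where fuller citation metadata earns its keep.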

But as @rdmpage pointed out, probably only about 1/3 to 1/2 of the literature we are concerned with is already digitized. That leaves 1/2 to 2/3 that will likely require human effort to track down. For that reason, the rest of the metadata can be extremely helpful. You could argue that for those cases we don't need clean Journals; but I would argue that the cost of getting the best of both worlds is much lower than the cost of treating digitized literature differently from non-digitized in terms of metadata parsing, cleaning and capture -- especially as the digitized body is growing.

I'm glad to hear that we seem to have converged on CSL-JSON.

OK, way too much text already....


mdoering commented on May 29, 2024

This is looking good and very useful! I am getting closer with a RAML doc and its HTML rendering for a very early API draft, so we can move the discussion to that level soon.


mjy commented on May 29, 2024

@mdoering what are you using to render RAML html? Just the standard recommended package?


mdoering commented on May 29, 2024

I have tried all of the Java, JavaScript, Node.js and Go RAML tools, but support for the 1.0 spec is nowhere near complete. Especially the global types section, which I'd like to use a lot, is not rendered correctly, and inheritance didn't show properly.

I ended up creating a new project, free-raml, based on the Slate static API doc generator and the Java raml-parser-2.

A rendering of the very early draft is here: https://sp2000.github.io/colplus/api/nomenclator.html
Please don't comment on the API itself yet; there's lots of unfinished stuff.


mdoering commented on May 29, 2024

The RAML file for it is https://github.com/mdoering/free-raml/blob/master/src/test/resources/nomenclator.raml


mdoering commented on May 29, 2024

I thought I could close this issue, as we have embraced CSL-JSON for persistence and API exposure.
But I am somewhat disappointed by the existing Java tools. We had to work with our own set of classes to handle the data, which is OK. But I cannot find a simple way to construct a formatted citation string! The citeproc-java library is just a wrapper around the original JavaScript library and it's dead slow: it takes several seconds to format a single citation. That's unbearable.

I am tempted to switch to BibJSON. It is also far simpler. The amount of JSON you get from CrossRef is astonishing:

curl -LH "Accept: application/vnd.citationstyles.csl+json" http://dx.doi.org/10.1093/database/baw125

And worse, it doesn't comply with the JSON schema the citeproc guys put up, so parsing becomes a nightmare too (e.g. compare original-title):
https://github.com/citation-style-language/schema/blob/master/csl-data.json

Check out the BibTeX version instead:
curl -LH "Accept: text/bibliography; style=bibtex" http://dx.doi.org/10.1093/database/baw125


mdoering commented on May 29, 2024

BibJSON also improves over BibTeX in that it has proper author, journal and license objects, not just a single author-team string.


mdoering commented on May 29, 2024

Wow, CrossRef returns something different than doi.org does, which adheres more closely to the CSL JSON schema:
curl -LH "Accept: application/citeproc+json" http://dx.doi.org/10.1093/database/baw125

But it still has deviations, like an array of strings for original-title.

The whole CSL-JSON thing does not feel properly standardized.


mdoering commented on May 29, 2024

citeproc-java in its early 3.0 alpha release now runs natively in Java and does not rely on GraalVM and JavaScript anymore.
So much better now. We'll stick with CSL-JSON for CoL.

