@stats-tgeorge - it still seems not to be working... I mentioned paper #209
in my previous reply - I cross-referenced this issue there, where the same problem is being discussed.
Regardless - I can work on the review checklist offline and then fill it in when we get the issue sorted out.
from jose-reviews.
I see you added a comment there. Sounds like a good plan. I thought it was worth one more try before adding to their workload. This repo situation is a little different, since the review repo was created after editorialbot was implemented. Nonetheless, I suspect the problems are related.
from jose-reviews.
@nniiicc – could you try generating your checklist now?
from jose-reviews.
@nniiicc Are you still able to complete this review? Thank you!
from jose-reviews.
We thought that open-source software should be submitted to JOSS, whereas open-source educational material should be submitted to JOSE.
This current submission is about educational material related to the proper use of that open-source software within the context of good statistical and research practice. What needs to be reviewed (the "module") is the content of the JOSE paper, not necessarily the software per se (the performance package has already been published in JOSS before). The current paper/module specifically focuses on the treatment of statistical outliers, which was not covered in the JOSS paper. Therefore, the items of the reviewer checklist that target software components do not seem well adapted here. Perhaps the software components can be safely ignored and the focus can remain on the JOSE paper.
Edit: so yes, in short, the module is contained within the paper itself
from jose-reviews.
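Since the module's statistical content centers on robust outlier detection (e.g., the median-absolute-deviation approach of Leys et al., 2013, cited in the paper), here is a minimal, language-agnostic sketch of such a check in Python. The module itself uses the R performance package; the 3-MAD threshold below is a common convention, not taken from the paper.

```python
import statistics

def mad_outliers(values, threshold=3.0):
    """Flag values more than `threshold` robust z-scores from the median.

    Uses the median absolute deviation (MAD), scaled by 1.4826 so it is
    a consistent estimator of the standard deviation under normality.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) * 1.4826
    if mad == 0:  # all values identical up to the median: nothing to flag
        return [False] * len(values)
    return [abs(v - med) / mad > threshold for v in values]

data = [2.1, 2.3, 1.9, 2.0, 2.2, 9.5]
print(mad_outliers(data))  # only the last value is flagged
```

Unlike a mean/SD-based z-score, the median and MAD are themselves resistant to the outliers being hunted, which is the main point the cited reference makes.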
@rempsyc I have filed three small issues #619 #620 #621 in your repository. Overall, I really enjoyed the modules on outliers and look forward to getting this submission across the finish line soon.
from jose-reviews.
I believe I have finished addressing @nniiicc's comments.
from jose-reviews.
In the eventuality that all reviewers are satisfied with the changes and that this paper gets accepted for publication before October 11, I will be able to include it in my postdoctoral fellowship application :) No worry if we cannot make this deadline.
from jose-reviews.
@rempsyc thank you for the module on outlier detection. I enjoyed reading it, and it offers a great example of outlier/extreme value detection. I filed two small issues in the main repository related to a few small points I thought were overstated in the JOSE paper and about adding context to the examples.
from jose-reviews.
@rempsyc, thanks for the edits/clarifications!
@stats-tgeorge, this is good to go from my view!
from jose-reviews.
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
from jose-reviews.
Software report:
github.com/AlDanial/cloc v 1.88 T=0.11 s (1829.3 files/s, 259912.2 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 146 3067 5098 11108
XML 2 0 175 3268
Markdown 12 618 0 1429
TeX 4 95 20 743
Rmd 7 436 818 360
YAML 23 51 49 229
-------------------------------------------------------------------------------
SUM: 194 4267 6160 17137
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
from jose-reviews.
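For context, the software report above comes from cloc, which classifies each line of a repository as blank, comment, or code. A greatly simplified sketch of that classification follows; real cloc also handles block comments, strings, and per-language syntax definitions.

```python
def classify_lines(source, comment_prefix="#"):
    """Count blank, comment, and code lines, cloc-style (simplified).

    `comment_prefix` is the line-comment marker; "#" suits R, the
    dominant language in the report above.
    """
    blank = comment = code = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith(comment_prefix):
            comment += 1
        else:
            code += 1
    return {"blank": blank, "comment": comment, "code": code}

print(classify_lines("x <- 1\n\n# a comment\ny <- x + 1\n"))
# {'blank': 1, 'comment': 1, 'code': 2}
```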
Wordcount for paper.md is 3438
from jose-reviews.
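The wordcount reported above is computed from the paper source. A rough sketch of such a count, assuming fenced code blocks are excluded and words are whitespace-separated (editorialbot's exact rules may differ):

```python
import re

FENCE = "`" * 3  # build the fence marker to avoid literal backtick runs here

def wordcount(markdown_text):
    """Count whitespace-separated words, stripping fenced code blocks first."""
    pattern = FENCE + r".*?" + FENCE
    text = re.sub(pattern, "", markdown_text, flags=re.DOTALL)
    return len(text.split())

sample = "# Title\n\nSome words here.\n\n" + FENCE + "r\nignored <- TRUE\n" + FENCE + "\n"
print(wordcount(sample))  # 5 — the fenced R chunk is not counted
```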
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.5334/irsp.289 is OK
- 10.1016/j.jesp.2013.03.013 is OK
- 10.1016/j.jesp.2017.09.011 is OK
- 10.1177/0956797611417632 is OK
- 10.21105/joss.03139 is OK
- 10.21105/joss.04684 is OK
- 10.1080/00401706.1977.10489493 is OK
- 10.1002/wics.1421 is OK
- 10.3758/BF03214411 is OK
- 10.1037/0033-2909.114.3.510 is OK
MISSING DOIs
- None
INVALID DOIs
- None
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
Hello @nniiicc & @lebebr01, this is a friendly reminder to complete this review. Thank you!
from jose-reviews.
@editorialbot generate my checklist
from jose-reviews.
@stats-tgeorge - Not sure where to file this issue - but it seems that @editorialbot
is not responding in this repo ... See my command above, and also this example from another paper under review #209 (comment)
from jose-reviews.
@editorialbot commands
from jose-reviews.
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
from jose-reviews.
@editorialbot commands
from jose-reviews.
Hello @nniiicc, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set jose-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
@editorialbot check repository
from jose-reviews.
Software report:
github.com/AlDanial/cloc v 1.88 T=0.11 s (1844.2 files/s, 262027.5 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 146 3067 5098 11108
XML 2 0 175 3268
Markdown 12 618 0 1429
TeX 4 95 20 743
Rmd 7 436 818 360
YAML 23 51 49 229
-------------------------------------------------------------------------------
SUM: 194 4267 6160 17137
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
from jose-reviews.
Wordcount for paper.md is 3438
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
@editorialbot check repository
from jose-reviews.
Software report:
github.com/AlDanial/cloc v 1.88 T=0.11 s (1836.3 files/s, 260904.0 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 146 3067 5098 11108
XML 2 0 175 3268
Markdown 12 618 0 1429
TeX 4 95 20 743
Rmd 7 436 818 360
YAML 23 51 49 229
-------------------------------------------------------------------------------
SUM: 194 4267 6160 17137
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
from jose-reviews.
Wordcount for paper.md is 3438
from jose-reviews.
@editorialbot check repository from branch JOSE_paper
from jose-reviews.
Software report:
github.com/AlDanial/cloc v 1.88 T=0.10 s (1880.7 files/s, 267217.2 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 146 3067 5098 11108
XML 2 0 175 3268
Markdown 12 618 0 1429
TeX 4 95 20 743
Rmd 7 436 818 360
YAML 23 51 49 229
-------------------------------------------------------------------------------
SUM: 194 4267 6160 17137
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
from jose-reviews.
Wordcount for paper.md is 3438
from jose-reviews.
@editorialbot set JOSE_paper as branch
from jose-reviews.
Done! branch is now JOSE_paper
from jose-reviews.
@editorialbot check repository
from jose-reviews.
Software report:
github.com/AlDanial/cloc v 1.88 T=0.10 s (2000.2 files/s, 284197.4 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 146 3067 5098 11108
XML 2 0 175 3268
Markdown 12 618 0 1429
TeX 4 95 20 743
Rmd 7 436 818 360
YAML 23 51 49 229
-------------------------------------------------------------------------------
SUM: 194 4267 6160 17137
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
from jose-reviews.
Wordcount for paper.md is 3438
from jose-reviews.
@nniiicc can you try one more time to generate your checklist? Otherwise I will bring in help. Thank you!
from jose-reviews.
@editorialbot generate my checklist
from jose-reviews.
Review checklist for @nniiicc
Conflict of interest
- As the reviewer I confirm that I have read the JOSE conflict of interest policy and that there are no conflicts of interest for me to review this work.
Code of Conduct
- I confirm that I read and will adhere to the JOSE code of conduct.
General checks
- Repository: Is the source for this learning module available at https://github.com/easystats/performance?
- License: Does the repository contain a plain-text LICENSE file with the contents of a standard license? (OSI-approved for code, Creative Commons for content)
- Version: Does the release version given match the repository release?
- Authorship: Has the submitting author (@rempsyc) made visible contributions to the module? Does the full list of authors seem appropriate and complete?
Documentation
- A statement of need: Do the authors clearly state the need for this module and who the target audience is?
- Installation instructions: Is there a clearly stated list of dependencies?
- Usage: Does the documentation explain how someone would adopt the module, and include examples of how to use it?
- Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the module 2) Report issues or problems with the module 3) Seek support
Pedagogy / Instructional design (Work-in-progress: reviewers, please comment!)
- Learning objectives: Does the module make the learning objectives plainly clear? (We don't require explicitly written learning objectives; only that they be evident from content and design.)
- Content scope and length: Is the content substantial for learning a given topic? Is the length of the module appropriate?
- Pedagogy: Does the module seem easy to follow? Does it observe guidance on cognitive load? (working memory limits of 7 +/- 2 chunks of information)
- Content quality: Is the writing of good quality, concise, engaging? Are the code components well crafted? Does the module seem complete?
- Instructional design: Is the instructional design deliberate and apparent? For example, exploit worked-example effects; effective multi-media use; low extraneous cognitive load.
JOSE paper
- Authors: Does the paper.md file include a list of authors with their affiliations?
- A statement of need: Does the paper clearly state the need for this module and who the target audience is?
- Description: Does the paper describe the learning materials and sequence?
- Does it describe how it has been used in the classroom or other settings, and how someone might adopt it?
- Could someone else teach with this module, given the right expertise?
- Does the paper tell the "story" of how the authors came to develop it, or what their expertise is?
- References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
from jose-reviews.
Hello @lebebr01, are you still able to complete this review?
from jose-reviews.
Hi @stats-tgeorge,
Yes I can, but I won't be able to get to it until early to mid September. Let me know if that timeline is too long.
from jose-reviews.
Hi @stats-tgeorge,
Yes I can, but I won't be able to get to it until early to mid September. Let me know if that timeline is too long.
@lebebr01 That works - we understand it is a busy time of year! Thank you!
from jose-reviews.
@lebebr01 this is a friendly reminder of this review. Thank you again!
from jose-reviews.
@stats-tgeorge - can you clarify that this is a learning module submission (as opposed to a software submission)?
I don't see any learning modules here to review. The repository connected to this submission, https://github.com/easystats/performance, is software. The only aspects of the repository that are related to learning or learning modules are contained in the JOSE paper ... Sorry if I am missing something obvious here.
from jose-reviews.
@rempsyc can you respond to this? @nniiicc would you consider it (2) open-source software, created as educational technology or infrastructure? We also accept that as part of JOSE. That was the type of submission I considered it to be. I see the checklist is more specific to learning modules, though. I'm going to follow up to see if we have two versions. Thank you!
from jose-reviews.
I think it could be either...
The paper is very strong - it gives a great overview of common dilemmas in identifying and overcoming issues related to outliers with the R package the authors have developed. The examples in the paper are clear, and have working code that I can run locally.
I guess stepping through the checklist I'm just confused what I should be reviewing... the paper, the software, or the learning modules (which are right now in the paper)...
from jose-reviews.
@nniicc I verified with the EiC that we need to check all parts of the checklist, since this was submitted (and appropriately so) as a learning module. Please continue as you suggested, reviewing the learning modules that are in the JOSE paper. It sounds like @rempsyc cleared it up, but I wanted to circle back. Let me know if there is more confusion, and thank you!
from jose-reviews.
@stats-tgeorge i believe you tagged the wrong person
from jose-reviews.
@stats-tgeorge i believe you tagged the wrong person
Sorry about that!
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
@stats-tgeorge I suggest maybe moving on to a second reviewer that could do the review quickly - maybe @evamaxfield or @isaaconline - two researchers in my lab that both have very good statistical background and knowledge of R
from jose-reviews.
In the eventuality that all reviewers are satisfied with the changes and that this paper gets accepted for publication before October 11, I will be able to include it in my postdoctoral fellowship application :) No worry if we cannot make this deadline.
Hello @rempsyc, I am working on this. If I cannot reconnect with the second reviewer soon I will seek another.
from jose-reviews.
Review checklist for @lebebr01
Conflict of interest
- As the reviewer I confirm that I have read the JOSE conflict of interest policy and that there are no conflicts of interest for me to review this work.
Code of Conduct
- I confirm that I read and will adhere to the JOSE code of conduct.
General checks
- Repository: Is the source for this learning module available at https://github.com/easystats/performance?
- License: Does the repository contain a plain-text LICENSE file with the contents of a standard license? (OSI-approved for code, Creative Commons for content)
- Version: Does the release version given match the repository release?
- Authorship: Has the submitting author (@rempsyc) made visible contributions to the module? Does the full list of authors seem appropriate and complete?
Documentation
- A statement of need: Do the authors clearly state the need for this module and who the target audience is?
- Installation instructions: Is there a clearly stated list of dependencies?
- Usage: Does the documentation explain how someone would adopt the module, and include examples of how to use it?
- Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the module 2) Report issues or problems with the module 3) Seek support
Pedagogy / Instructional design (Work-in-progress: reviewers, please comment!)
- Learning objectives: Does the module make the learning objectives plainly clear? (We don't require explicitly written learning objectives; only that they be evident from content and design.)
- Content scope and length: Is the content substantial for learning a given topic? Is the length of the module appropriate?
- Pedagogy: Does the module seem easy to follow? Does it observe guidance on cognitive load? (working memory limits of 7 +/- 2 chunks of information)
- Content quality: Is the writing of good quality, concise, engaging? Are the code components well crafted? Does the module seem complete?
- Instructional design: Is the instructional design deliberate and apparent? For example, exploit worked-example effects; effective multi-media use; low extraneous cognitive load.
JOSE paper
- Authors: Does the paper.md file include a list of authors with their affiliations?
- A statement of need: Does the paper clearly state the need for this module and who the target audience is?
- Description: Does the paper describe the learning materials and sequence?
- Does it describe how it has been used in the classroom or other settings, and how someone might adopt it?
- Could someone else teach with this module, given the right expertise?
- Does the paper tell the "story" of how the authors came to develop it, or what their expertise is?
- References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
from jose-reviews.
@rempsyc We are ready to keep moving forward now. Have you made an archive on Zenodo? If so, please report the DOI here.
from jose-reviews.
@editorialbot create post-review checklist
from jose-reviews.
I'm sorry @rempsyc, I'm afraid I can't do that. That's something only editors are allowed to do.
from jose-reviews.
Post-Review Checklist for Editor and Authors
Additional Author Tasks After Review is Complete
- Double check authors and affiliations (including ORCIDs)
- Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSE paper.
- Archive the release on Zenodo/figshare/etc and post the DOI here.
- Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSE paper.
- Make sure that the license listed for the archive is the same as the software license.
Editor Tasks Prior to Acceptance
- Read the text of the paper and offer comments/corrections (as either a list or a PR)
- Check the references in the paper for corrections (e.g. capitalization)
- Check that the archive title, author list, version tag, and the license are correct
- Set archive DOI with @editorialbot set <DOI here> as archive
- Set version with @editorialbot set <version here> as version
- Double check rendering of paper with @editorialbot generate pdf
- Specifically check the references with @editorialbot check references and ask author(s) to update as needed
- Recommend acceptance with @editorialbot recommend-accept
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
@editorialbot check references
from jose-reviews.
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.5334/irsp.289 is OK
- 10.1016/j.jesp.2013.03.013 is OK
- 10.1016/j.jesp.2017.09.011 is OK
- 10.1177/0956797611417632 is OK
- 10.21105/joss.03139 is OK
- 10.21105/joss.04684 is OK
- 10.1080/00401706.1977.10489493 is OK
- 10.1002/wics.1421 is OK
- 10.3758/BF03214411 is OK
- 10.1037/0033-2909.114.3.510 is OK
MISSING DOIs
- None
INVALID DOIs
- None
from jose-reviews.
@rempsyc I know you are trying to move quickly. I believe I am stopped until you work on your items above.
from jose-reviews.
Correct, I am waiting for permission to merge the JOSE branch in easystats/performance#586 and create a new release (since I am not the maintainer of the performance package), as this seems required by the instructions in order to put the release on Zenodo and get the DOI. I'm going to wait a bit more, and if we cannot merge the branch soon, I will simply build the package tar file and upload that to Zenodo. I was hesitant to do that because the performance package already has a Zenodo archive that automatically updates with every release (which also changes the DOI), so that will be a bit redundant. But creating a new release also means submitting the release to CRAN, which means other easystats dependencies have to be compatible, etc. Perhaps I can put the current branch on Zenodo as a separate file, get the DOI, and take care of the CRAN release later on? Perhaps @strengejacke can share his opinion on the best way to handle this as well.
from jose-reviews.
@stats-tgeorge can you clarify whether the release, and what should be on Zenodo, should be of the performance package or of the educational module, which is simply the paper?
from jose-reviews.
@editorialbot set main as branch
from jose-reviews.
Done! branch is now main
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
Thanks @stats-tgeorge
- The release for the latest version, v0.10.6 (which contains the package, the paper, and the new vignette) is available here: https://github.com/easystats/performance/releases/tag/v0.10.6
- The release has been archived on Zenodo: https://doi.org/10.5281/zenodo.8411009
I think I've double-checked everything and checked all the boxes on my end. However, I am not able to check the boxes on your post myself, as the post is locked for me.
from jose-reviews.
@editorialbot set 10.5281/zenodo.8411009 as archive
from jose-reviews.
Done! archive is now 10.5281/zenodo.8411009
from jose-reviews.
@editorialbot set 0.10.6 as version
from jose-reviews.
Done! version is now 0.10.6
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
- The archive title has to match the paper title.
- Author list needs to match paper author list
- License in archive just links back to archive
from jose-reviews.
@rempsyc two references in your paper are missing DOIs. Can you find those?
from jose-reviews.
@editorialbot set v0.10.6 as version
from jose-reviews.
Done! version is now v0.10.6
from jose-reviews.
two references in your paper are missing DOIs. Can you find those?
- Gnanadesikan 1972: added
- Tukey 1963: no DOI available
from jose-reviews.
Thanks for the clarification!
- The archive title has to match the paper title.
- Author list needs to match paper author list
- License in archive just links back to archive
All fixed now. New DOI: 10.5281/zenodo.8411224
from jose-reviews.
@editorialbot check references
from jose-reviews.
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.5334/irsp.289 is OK
- 10.1016/j.jesp.2013.03.013 is OK
- 10.1016/j.jesp.2017.09.011 is OK
- 10.1177/0956797611417632 is OK
- 10.21105/joss.03139 is OK
- 10.21105/joss.04684 is OK
- 10.1080/00401706.1977.10489493 is OK
- 10.2307/2528963 is OK
- 10.1002/wics.1421 is OK
- 10.3758/BF03214411 is OK
- 10.1037/0033-2909.114.3.510 is OK
MISSING DOIs
- None
INVALID DOIs
- None
from jose-reviews.
@editorialbot generate pdf
from jose-reviews.
👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
from jose-reviews.
@editorialbot recommend-accept
from jose-reviews.
Attempting dry run of processing paper acceptance...
from jose-reviews.
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.5334/irsp.289 is OK
- 10.1016/j.jesp.2013.03.013 is OK
- 10.1016/j.jesp.2017.09.011 is OK
- 10.1177/0956797611417632 is OK
- 10.21105/joss.03139 is OK
- 10.21105/joss.04684 is OK
- 10.1080/00401706.1977.10489493 is OK
- 10.2307/2528963 is OK
- 10.1002/wics.1421 is OK
- 10.3758/BF03214411 is OK
- 10.1037/0033-2909.114.3.510 is OK
MISSING DOIs
- None
INVALID DOIs
- None
from jose-reviews.
👋 @openjournals/jose-eics, this paper is ready to be accepted and published.
Check final proof 👉📄 Download article
If the paper PDF and the deposit XML files look good in openjournals/jose-papers#139, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
from jose-reviews.
Hello @openjournals/jose-eics I believe this one is ready. TY!
from jose-reviews.
Hi, just wanted to check on the current status of this submission - is there anything we can or need to do/check before publication of the paper?
from jose-reviews.
On a first browse of this submission, I am confused. The repository link points to the package performance, which has already been published in JOSS. So where is the educational material we are publishing in JOSE?
from jose-reviews.
I am looking through the history of this submission, and found the first review checklist, showing that this was submitted as a "learning module." (JOSE accepts two kinds of papers: those reporting on learning modules, and those reporting on educational software.)
However, the associated repository that has been linked to this submission points to the software package performance—a piece of software already reviewed and published in JOSS.
Do I take it that the authors have written their "learning module" in the paper itself? And there is no associated "learning module" to go along with this submission?
from jose-reviews.