freeCodeCamp / open-api
freeCodeCamp's open-api Initiative
License: BSD 3-Clause "New" or "Revised" License
Failing build: https://travis-ci.org/freeCodeCamp/open-api/builds/370905160
Commit: 06d028e
@Bouncey / @raisedadead I'm happy to look at this tomorrow if you don't get around to fixing it.
As an authenticated freeCodeCamp user, I would like to programmatically retrieve the current challenge.
We can create GraphQL schemas from Mongoose schemas. I love the idea of making sure the GraphQL schema and the DB model are in sync.
However, after the initial migration we won't have to rely on that one MongoDB instance anymore, and this design makes less sense.
Figuring out a way forward should inform #37
I like the rules for commit messages, but can we please relax them?
My proposal:
$1($2): $3
$1: one of feat, fix, chore, docs, tests
$2: any single word, any case
$3: any string, any case
Example:
feat(types): Expand User type and apply 'on-the-fly' migrations
Reasoning:
I like to keep my commit messages consistent with the code they touch. If I fix a bug in someBuggyFunction, I want to reference it in my commit message. Or if I am working with types, I want to use StartCase in the commit message to reference the type I have worked on, which uses StartCase in the code base.
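The relaxed rule above could be checked with a small regular expression; this is a sketch, and the exact pattern is my interpretation of the proposal:

```javascript
// Sketch: validate a commit message header against the proposed
// "$1($2): $3" format, where $1 is one of a fixed set of types,
// $2 is a single word in any case, and $3 is any non-empty string.
const COMMIT_HEADER = /^(feat|fix|chore|docs|tests)\(\w+\): .+$/;

function isValidHeader(header) {
  return COMMIT_HEADER.test(header);
}

console.log(isValidHeader("feat(types): Expand User type and apply 'on-the-fly' migrations")); // true
console.log(isValidHeader('update stuff')); // false
```

This would be easy to drop into a commitlint custom rule or a commit-msg hook if we go that way.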
Definition of done: critical alerts create a phone call to team members.
This could be possible by having critical alarms fire off separate SNS topics that have a Twilio webhook as subscriber.
I've seen people create Lambdas to connect to Twilio when they fire, but that kind of defeats the purpose: we want to know when Lambdas are on 🔥
Warning: this will be less sophisticated than services like PagerDuty, VictorOps, etc.; having a schedule and escalations is well out of scope for this issue.
/cc @freeCodeCamp/open-api did I miss anything, any concerns? Is this a blocker for our first release?
Hi there,
We need to make some decisions on how to move forward on this project, and what the scope is going to be. There are a few options to discuss, they should probably be broken apart and judged by their merits, but let's see what options there are first:
We expose GraphQL via this project and operations are resolved..
One party pooper I've seen is that the REST API returns responses that I did not expect. An example: hitting http://127.0.0.1:3001/api/users, I happily received a 200 OK, but useless HTML. I've not explored why it is set up like this; could someone give me some context?
@raisedadead just copy the https://github.com/freeCodeCamp/freeCodeCamp/blob/staging/.travis.yml?
From #55 (comment)
@raisedadead what's your normal strategy here? Function-as-a-service could be cost-effective, but it looks like DigitalOcean does not support it.
Add AWS Cloudwatch notifications that send out SMS messages to the dev team whenever an AWS service hits a limit. For example, whenever Lambda hits 5,000 calls per second
Nothing should remain in the database for the user being deleted
Related issue suggesting this: freeCodeCamp/freeCodeCamp#16104 (limited to Collaborators, so I am unable to chip in)
I think we should consider using Auth0: they provide a secure and scalable solution, and they have an unlimited free tier for open source projects. I doubt we could do a better job than Auth0, and it would unblock the team to work on other projects.
The library support looks great, see https://auth0.com/opensource. They support social providers, passwordless, and MFA; see https://auth0.com/docs/identityprovider for an overview.
As done in FCC monolith.
Where do our logs go, and what do other freeCodeCamp projects do? Can we re-use that, and if so, are we happy to?
/cc @freeCodeCamp/open-api
We need to verify, rather early than later, that performance between our Lambdas and our current MongoDB server is acceptable. If it is not, we could consider moving our MongoDB server to AWS, or see what performance a hosted solution would give us.
@freeCodeCamp/open-api any thoughts on how we can verify this? Stand up another Mongo server in DO, let open-api staging communicate with it, and measure the latency? Do we want to compare Learn -> open-api -> Mongo with www.freecodecamp.org?
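One cheap way to get first numbers would be a small timing helper run from a staging Lambda; a sketch, where the MongoDB connection details in the comment are placeholders:

```javascript
// Sketch: time an async operation (e.g. a MongoDB ping) over several
// samples and report min/mean/max latency in milliseconds.
async function measureLatency(op, samples = 5) {
  const timings = [];
  for (let i = 0; i < samples; i++) {
    const start = process.hrtime.bigint();
    await op();
    timings.push(Number(process.hrtime.bigint() - start) / 1e6);
  }
  return {
    min: Math.min(...timings),
    max: Math.max(...timings),
    mean: timings.reduce((a, b) => a + b, 0) / timings.length,
  };
}

// Against a real server this might look like (hypothetical URI):
//   const client = await MongoClient.connect('mongodb://staging-host:27017');
//   const stats = await measureLatency(() => client.db('admin').command({ ping: 1 }));
```

Running this from inside a deployed Lambda, rather than locally, is what would tell us about the network path we actually care about.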
If we are going to use GraphQL this will be a little more complicated than using REST.
Things to consider:
This article provides a good background:
https://github.com/howtographql/howtographql/blob/master/content/graphql/advanced/4-security.md
GitHub's approach: https://developer.github.com/v4/guides/resource-limitations/
Line 25 in 2652e5c
This is creating conflicts in yarn.lock and should ideally be removed or migrated
Too early to do now, but let's not forget to create a guide when this project is in a shape where it's ready for other contributors.
Thanks for opening up access to Snyk @raisedadead!
There is one low severity vulnerability found; Prototype pollution via the "hoek" module. ( https://snyk.io/vuln/npm:hoek:20180212 )
There are a bunch of libraries using it; I've raised a PR for one, but there are a few others, and chasing them all up does not seem like a wise use of my time.
Does Snyk support whitelisting of "blessed" vulnerabilities, or do you adjust the severity level and ignore, say, "low"? @freeCodeCamp/open-api how do you normally handle these reports in the freeCodeCamp org?
General report: https://snyk.io/test/github/freecodecamp/open-api?severity=high&severity=medium&severity=low
We deploy automatically, to make this more transparent it would be great to announce deployment in 2 channels:
Close and move issues from https://github.com/freeCodeCamp/freeCodeCamp/projects/5 to this tracker.
We should still be using the project board above, while reporting and discussing the threads here.
See the announcement blog post at https://dev-blog.apollodata.com/exposing-trace-data-for-your-graphql-server-with-apollo-tracing-97c5dd391385 and the docs at https://www.apollographql.com/docs/engine/performance.html . It would be good to start recording this data.
If we run from Lambda, we'll need a standalone container to keep track of state, see https://www.apollographql.com/docs/engine/setup-node.html#lambda and https://www.apollographql.com/docs/engine/setup-lambda.html
Hi @roelver how are you?
The current readme of the project has a paragraph that shows as follows:
Getting an API key
All requests must have an API-key in the request. At this stage there is no online resource to generate an API-key. If you want an API key for your app, please ask for it on the FreeCodeCamp/DataScience Gitter. The API-key is specified in the query string as key=.
This is driving traffic into the room when we haven't been able to give a clear answer about the existence of the API.
Shouldn't that message be changed into something like "on-hold" if the API key is not currently available?
Or should we provide those keys?
This account must be linked to the corresponding auth0 account by uuid
Dataloader is a project from Facebook and allows removing pressure from the database: https://github.com/facebook/dataloader .
As per the README, it does not replace shared application level caches. It's meant to be used per request, and prevent multiple DB calls when one will do. See https://github.com/facebook/dataloader/blob/master/README.md.
Premature optimisation for now, but it will come in handy when we are in need of performance improvements.
See https://github.com/withspectrum/spectrum/tree/alpha/api/loaders for a JS implementation.
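To make the idea concrete, here is a minimal self-contained sketch of the batching pattern DataLoader implements: all keys requested in the same tick are collected into one batch call, and results are cached for the lifetime of the loader (i.e. one request). The real library does considerably more.

```javascript
// Sketch: a tiny DataLoader-style batcher. Every load() requested in
// the same tick is collected and passed to batchFn once; resolved
// values are cached so repeated keys never hit the DB again.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // async (keys) => values, same order as keys
    this.cache = new Map();
    this.queue = [];
  }

  load(key) {
    if (this.cache.has(key)) return this.cache.get(key);
    const promise = new Promise((resolve, reject) => {
      this.queue.push({ key, resolve, reject });
      // Schedule a single flush for all loads queued this tick.
      if (this.queue.length === 1) process.nextTick(() => this.flush());
    });
    this.cache.set(key, promise);
    return promise;
  }

  async flush() {
    const batch = this.queue.splice(0);
    try {
      const values = await this.batchFn(batch.map(item => item.key));
      batch.forEach((item, i) => item.resolve(values[i]));
    } catch (err) {
      batch.forEach(item => item.reject(err));
    }
  }
}
```

In a GraphQL server the loader would be created per request, so two resolvers asking for the same user id trigger one DB query instead of two.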
Currently the package name is open-data
and repository name is open-api
.
I propose we change both. It's too early to decide, but this is an issue to remind us to decide this before we launch anything.
Is this still a working project?
I am working on a project with some other campers and we are trying to find an easy way to figure out how many certificates (if any) a FCC user has achieved. Is it possible for us to use this API for that purpose?
A "cold" Lambda takes a while to start. A common technique is to have a scheduled event invoke the Lambda. This makes AWS start a container and process, which it will then keep around in case another invocation follows. Those subsequent invocations will be much faster, as all setup has already been done.
Keeping Lambdas warm in a low-traffic environment is important to keep things responsive. However, it is not needed if there is enough traffic to have warmed containers around at all times.
Before we spend more time investigating the best way to keep Lambdas warm, could someone provide insights on the frequency and volume of traffic we have, to see if we need to worry about this at all?
@raisedadead, @Bouncey could you shed some light on this?
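If we do end up needing this, the handler-side short-circuit is small; a sketch, where the `source === 'aws.events'` check matches the default shape of CloudWatch scheduled events and everything else is an assumption:

```javascript
// Sketch: return immediately on warming invocations from a CloudWatch
// scheduled rule, before doing any real work. Scheduled events carry
// source: 'aws.events' by default.
const handler = async event => {
  if (event && event.source === 'aws.events') {
    // Warming ping: keep the container alive, skip real work.
    return { statusCode: 200, body: 'warmed' };
  }
  // ... normal GraphQL request handling would go here ...
  return { statusCode: 200, body: 'handled' };
};

module.exports = { handler };
```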
There are two approaches. The first would be more cost-efficient, as the warming invocation could have a shorter running time, and hence lower cost, than a health-endpoint invocation. Another plus is that no API Gateway hit is involved, saving more cost:
1. Invoke the Lambda directly from a scheduled event.
2. Hit a health endpoint (e.g. /health) to monitor statistics. This invocation would keep the endpoint warm, although there would be a bigger chance of a normal client request triggering a cold start while the monitoring is hitting the health endpoint.

@raisedadead proposed to switch to Yarn. Some discussion took place in PRs, which I'll link here for posterity.
Original discussion took place in #59, I mentioned:
I'd be happy to start using Yarn!
I guess the use case is a little less compelling now that NPM has a lock file. However, the speed difference still looks compelling when node_modules and a lockfile are present, judging by some of the results at https://github.com/pnpm/node-package-manager-benchmark .
@Bouncey anything to add?
And @Bouncey added:
yarn does things that are nice like --prune automatically, which is manual only in npm. Plus you get shorter script commands: npm run format becomes yarn format
Yay for yarn!
We have decided on authentication in #23. We'll now need to think about authorisation.
Authentication can be done using Auth0. Both /graphql and the graphcool IDE endpoints should be approachable without authentication, and only queries and mutations needing authentication should require it.
There are plenty of options here, as the specification does not dictate a solution.
A related thread full of interesting viewpoints ardatan/graphql-tools#313
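One of the simpler patterns from that space is wrapping resolvers with an authorisation check; a sketch, where the shape of `context.user` and the scope names are assumptions:

```javascript
// Sketch: only run a resolver when the request context carries a user
// with the required scope. The context shape is assumed, not decided.
const requireScope = (scope, resolver) => (parent, args, context, info) => {
  if (!context.user || !Array.isArray(context.user.scopes) ||
      !context.user.scopes.includes(scope)) {
    throw new Error(`Not authorised: missing scope "${scope}"`);
  }
  return resolver(parent, args, context, info);
};

// Hypothetical usage in a resolver map:
const resolvers = {
  Query: {
    users: requireScope('read:users', () => [{ name: 'camper' }]),
  },
};
```

Schema directives (as discussed in the linked thread) are a more declarative alternative, but they compile down to essentially this kind of wrapper.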
./graphql/resolvers/directive.test.js
----> ./graphql/resolvers/directives.test.js
As recommended in https://github.com/withspectrum/spectrum/blob/alpha/docs/backend/api/testing.md
See https://github.com/facebook/jest
@QuincyLarson @raisedadead @Bouncey any opinions on testing here, or lessons learned that I could build upon?
Hey, I have posted in the freeCodeCamp Gitter group but haven't got a response. I am wondering if I could receive an API key, which I will use for my exercise in creating a camp leaderboard with extended functionalities. My private email is [email protected]. Would appreciate your reply.
Thanks!
https://github.com/freeCodeCamp/open-api/blob/staging/serverless.yml#L37
This is going to deliver payloads from all forks.

npm run prepare-production to be executed only in deploy environments.

It is possible for clients to send a token for an existing user that does not have the accountLinkId, even though the user has one assigned to them and an entry in the database. In that case we could set a uuid as the accountLinkId and update the user record held at Auth0. There is scope for setting the accountLinkId over in Auth0, so every token has an accountLinkId from the very first login.
Do we still need to handle the case where we end up setting it here because it is missing? I think it would lead to some crappy UX, having to invalidate the token and make new users log out and in again.
I am going to raise a PR that will add the current rules we have set in Auth0 to this repo. Just so we all have some visibility of them and what they are doing.
As part of #25 I'm looking at ways to protect ourselves against harmful queries. One of the easier ways to prevent queries from generating excessive load is to limit the number of items a query can return. Setting such a limit requires us to support pagination, hence this issue.
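Clamping the requested page size is the small first step; a sketch, where the argument name `first` and the limits are assumptions:

```javascript
// Sketch: clamp a client-supplied page-size argument so one query
// cannot request an unbounded number of results.
const MAX_PAGE_SIZE = 100; // assumed cap
const DEFAULT_PAGE_SIZE = 20; // assumed default

function clampPageSize(requested) {
  if (!Number.isInteger(requested) || requested < 1) return DEFAULT_PAGE_SIZE;
  return Math.min(requested, MAX_PAGE_SIZE);
}

// A resolver would then use the clamped value, e.g.
//   const limit = clampPageSize(args.first);
//   return db.collection('users').find().limit(limit).toArray();
```

Cursor-based pagination would sit on top of this, but the clamp alone already bounds the per-query load.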
/cc @Bouncey, @raisedadead
It's good to have commit linting in PRs, but it has little value in branch builds, especially a deployment build.
We could test the value of ${TRAVIS_PULL_REQUEST} and only run commitlint-travis when it is not set to false (Travis sets it to the PR number for pull request builds, and to "false" otherwise).
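In .travis.yml this could look something like the following; a sketch, and the surrounding script entries are assumptions:

```yaml
# Sketch: run commitlint only for pull request builds.
# TRAVIS_PULL_REQUEST is "false" for branch builds and the PR number otherwise.
script:
  - npm test
  - 'if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then npx commitlint-travis; fi'
```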
{ TypeError: Cannot read property 'authorization' of undefined
at checkAuthAndResolve (/home/stuart/open-api/.webpack/service/handler.js:485:33)
at users (/home/stuart/open-api/.webpack/service/handler.js:660:109)
# ...
}
Not much of a problem whilst I am developing the feature that is throwing errors, but not very helpful for future developers potentially having to cross-ref the bundle to the source manually.
Expected output:
{ TypeError: Cannot read property 'authorization' of undefined
at checkAuthAndResolve (/home/stuart/open-api/graphql/resolvers/checkResolvers.js:10:47)
at users (/home/stuart/open-api/graphql/typeDefs/User/index.js:5:17)
# ...
}
Deploy a simple Lambda serving GraphQL, using Travis. Initially against a small staging database.
The current license is MIT ( https://github.com/freeCodeCamp/open-api/blob/staging/LICENSE ) and mentions "feathers", so I'm guessing it's a copy.
freeCodeCamp uses BSD 3-Clause (https://github.com/freeCodeCamp/freeCodeCamp/blob/staging/LICENSE.md) for the code.
There is another license file, https://github.com/freeCodeCamp/open-api/blob/staging/LICENSE.md, that seems to pertain to the dataset, and refers to http://opendatacommons.org/licenses/odbl/1.0/
@raisedadead do you know what to do here? Are there any guidelines or strong opinions here?
How is monitoring set up at the moment, and what do other freeCodeCamp projects do? Can we re-use that, and if so, are we happy to?
/cc @freeCodeCamp/open-api
See http://graphql.org/learn/caching for background. Current implementations are inMemory and memcache. As per the docs it's probably wise to start with in memory. There is a request open to use Redis, but progress on it is unclear: https://github.com/apollographql/apollo-engine-js/issues/55 .
This should be in the form of a JSON file.
Possible separate lambda function?
Definition of done: queries and mutations needed by the events project are supported by open-api.
We should have a docker setup for this early on.