Spruce

Spruce is the React UI for MongoDB's continuous integration software.

Table of Contents

  • Getting Started
  • Environment Variables
  • GraphQL Type Generation
  • How to get data for your feature
  • Deployment

Getting Started

Running Locally

  1. Clone the Spruce GitHub repository.
  2. Ask a colleague for their .cmdrc.json file and follow the instructions in the Environment Variables section below.
  3. Run npm install.
  4. Start a local Evergreen server by doing the following:
  • Clone the evergreen repo into your GOPATH
  • Run make local-evergreen
  5. Run npm run dev. This will launch the app and point it at the local Evergreen server you just started. The full sequence is sketched below.
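
A rough end-to-end sketch of those steps, assuming Spruce and Evergreen are checked out side by side and that the paths below match your machine (adjust as needed):

    # clone and install Spruce
    git clone https://github.com/evergreen-ci/spruce.git
    cd spruce
    npm install
    # copy a colleague's .cmdrc.json into config/ before continuing

    # in a separate terminal: start a local Evergreen server
    cd /path/to/gopath/src/github.com/evergreen-ci/evergreen
    make local-evergreen

    # back in the first terminal (the spruce directory): launch the UI
    npm run dev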

Storybook

Run npm run storybook to launch Storybook and view our shared components.

Code Formatting

Install the Prettier code formatting plugin in your code editor if you don't have it already. The plugin will use the .prettierrc settings file found at the root of Spruce to format your code.

GQL Query Linting

Follow these directions to enable query linting during local development so your Evergreen GraphQL schema changes are reflected in your Spruce query linting results.

  1. Symlink the standard definition language (SDL) GraphQL schema used by your backend to a file named sdlschema.graphql in the root of the Spruce directory to enable query linting with ESLint, like so: ln -s /path/to/schema sdlschema.graphql
  2. Run npm run eslint to see the results of query linting in your terminal, or install a plugin to integrate ESLint into your editor. If you are using VS Code, we recommend the ESLint extension by Dirk Baeumer. See the example below.
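
For example, assuming the Evergreen repo sits next to Spruce and its schema file is at graphql/schema.graphql (the exact path may differ in your checkout):

    # symlink the backend schema into the Spruce root (path is an assumption)
    ln -s ../evergreen/graphql/schema.graphql sdlschema.graphql
    # lint queries against the symlinked schema
    npm run eslint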

Environment Variables

env-cmd is used to configure build environments for production, staging, and development. The environments are defined in a file named .cmdrc.json, placed in the config folder at the root of the project. This file is git-ignored because it contains API keys that we do not want to publish, and it is required to deploy Spruce to production and to staging. Ask a team member to send you their copy of the file, which should look like the following:

{
  "devServer": {
    "REACT_APP_GQL_URL": "http://localhost:9090/graphql/query",
    "REACT_APP_API_URL": "http://localhost:3000/api",
    "REACT_APP_UI_URL": "http://localhost:9090",
    "REACT_APP_SPRUCE_URL": "http://localhost:3000"
  },
  "staging": {
    "REACT_APP_API_URL": "https://evergreen-staging.corp.mongodb.com/api",
    "REACT_APP_UI_URL": "https://evergreen-staging.corp.mongodb.com"
  },
  "prod": {
    "REACT_APP_DEPLOYS_EMAIL": "[email protected]", 
    "REACT_APP_SITE_URL": "https://spruce.mongodb.com",
    "REACT_APP_BUGSNAG_API_KEY": "this-is-the-api-key",
    "REACT_APP_API_URL": "https://evergreen.mongodb.com/api",
    "REACT_APP_UI_URL": "https://evergreen.mongodb.com",
    "REACT_APP_NEW_RELIC_ACCOUNT_ID": "dummy-new-relic-account-id",
    "REACT_APP_NEW_RELIC_AGENT_ID": "dummy-new-relic-agent-id",
    "REACT_APP_NEW_RELIC_APPLICATION_ID": "dummy-new-relic-application-id",
    "REACT_APP_NEW_RELIC_LICENSE_KEY": "dummy-new-relic-license-key",
    "REACT_APP_NEW_RELIC_TRUST_KEY": "dummy-new-relic-trust-key"
  }
}
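
For reference, env-cmd picks one of the top-level keys above and exposes its key/value pairs as environment variables to whatever command it wraps. A rough illustration only; the real invocations live in the repo's npm scripts and may differ:

    # select the devServer environment from the rc file and run a command with
    # the REACT_APP_* variables injected (replace <command> with the actual script)
    npx env-cmd -r config/.cmdrc.json -e devServer <command>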

GraphQL Type Generation

We use code generation to create TypeScript types for our GraphQL queries and mutations. When you create a query or mutation, you can run the code generation script with the steps below. The types for your query/mutation response and variables will be generated and saved to gql/generated/types.ts. Many of the underlying types for subfields in your queries will likely be generated there as well, so check for existing types before creating your own.

Setting up code generation

  • Create a symlink from Evergreen's schema.graphql file to the root of the spruce folder using ln -s path-to-evergreen-schema.graphql sdlschema.graphql (the same sdlschema.graphql symlink used for query linting).

Using code generation

  • From within the spruce folder, run npm run codegen
  • As long as your queries are declared correctly, the types will be generated (see the sketch below)
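
A minimal sketch of the full flow, assuming the schema path shown is where your Evergreen checkout keeps it:

    # one-time setup: symlink the Evergreen schema into the Spruce root
    ln -s /path/to/evergreen/graphql/schema.graphql sdlschema.graphql
    # regenerate types after adding or editing a query or mutation
    npm run codegen
    # the generated types land in gql/generated/types.ts
    git diff gql/generated/types.ts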

Code generation troubleshooting and tips

  • Queries should be declared with a query name so the code generation knows what to name the corresponding type.
  • Each query and mutation should have a unique name.
  • Since query analysis for type generation occurs statically, we can't place dynamic variables within query strings. Instead, hard-code the value in the query or pass it in as a query variable.

How to get data for your feature

If you need more data to test your feature locally, the easiest way to get it is to populate the local db with real data from the staging or production environments.

  1. Identify whether the data you need is located in the staging or prod db and ssh into the appropriate server (you should be connected to the office network or VPN before proceeding). The URLs for these db servers can be found in the fabfile.py located in the evergreen directory.

  2. You should ensure you are connected to a secondary node before proceeding.

  3. Run mongo to open the mongo shell.

  4. Identify the query you need to fetch the data you are looking for.

    mci:SECONDARY> rs.slaveOk() // Allows read operations on a secondary node
    mci:SECONDARY> use mci // use the correct db
    switched to db mci
    mci:SECONDARY>  db.distro.find({_id: "archlinux-small"}) // the full query
    
  5. Exit from the mongo shell and prepare to run mongoexport

    mongoexport --db=mci --collection=distro --out=distro.json --query='{_id: "archlinux-small"}' 
    2020-07-29T17:41:50.266+0000	connected to: localhost
    2020-07-29T17:41:50.269+0000	exported 1 record
    

    After running this command, a file with the results of the mongoexport will be saved to your home directory.

    Note that you may need to provide the full path to mongoexport on the staging db:

    /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.0.5/bin/mongoexport --db=mci --collection=distro --out=distro.json --query='{_id: "archlinux-small"}' 
    2020-07-29T17:41:50.266+0000	connected to: localhost
    2020-07-29T17:41:50.269+0000	exported 1 record
    
  6. Exit the ssh session using exit or Ctrl + D

  7. You can now transfer this JSON file to your local system by running scp <db you sshed into>:~/distro.json . (the trailing . is the destination directory). This will save a file named distro.json to the current directory.

  8. Run this file through the scramble-eggs script to sanitize it and remove any sensitive information: make scramble file=<path to file>.json from within the evergreen folder.

  9. Once you have the sanitized file, copy its contents into the relevant testdata/local/<collection>.json file within the evergreen folder.

  10. You can then delete /bin/.load-local-data within the evergreen folder and run make local-evergreen to repopulate the local database with your new data.

Notes

When creating your queries, be sure to limit the number of documents so you don't accidentally export an entire collection. You can do this by passing a --limit=<number> flag to mongoexport, as in the example below.
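
For example, reusing the distro export from above with a cap on the result count (the limit value is arbitrary):

    # export at most 10 matching documents instead of the whole collection
    mongoexport --db=mci --collection=distro --out=distro.json \
      --query='{_id: "archlinux-small"}' --limit=10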

Deployment

Requirements

You must be on the master branch if deploying to prod.

A .cmdrc.json file is required to deploy because it sets the environment variables that the application needs in the production and staging environments. See the Environment Variables section for more info about this file.

How to Deploy:

Run the deploy:prod or deploy:staging npm command

  1. npm run deploy:prod deploys to https://spruce.mongodb.com
  2. npm run deploy:staging deploys to http://evergreen-staging.spruce.s3-website-us-east-1.amazonaws.com/

After deploying, you will be prompted to run git push --tags or git push upstream --tags, depending on your setup. This is important so we can track releases.
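
A minimal sketch of a production deploy, assuming your remote is named upstream (use plain git push --tags if your setup differs):

    # deploy to production from the master branch
    git checkout master
    npm run deploy:prod
    # push the release tags so the release can be tracked
    git push upstream --tags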
