Firefly is the host for the Filament application. It serves Filament itself for use inside a browser, and also provides access to backend services that Filament consumes through an Environment Bridge.
```
                              Request
                                 v
                   +-----------------------+
    .com/app/*     |Load balancer (HAProxy)|     .com/api/*
    .com/assets/*  +-----------------------+     *.net/* websocket
         |              |               |
         v              v               v
+-------------------------+  +------------+  +-----------------------+
|Web Server               |  |Login server|  |Project server         |
|static files (Nginx)     |  +------------+  |+----------------+     |
|filament + firefly/inject|                  ||Container server| ... |
+-------------------------+                  |+----------------+     |
                                             +-----------------------+
```
`.com` is short for the main app domain:

- local-aurora.montagestudio.com
- staging-aurora.montagestudio.com
- work.montagestudio.com

`.net` is short for the project/preview domain:

- *.local-project.montagestudio.net
- *.staging-project.montagestudio.net
- *.project.montagestudio.net
Created with http://www.asciiflow.com/
Firefly consists of four kinds of machine managed by Vagrant: Load Balancer (frontend entrypoint), Web Server (serves the Filament application itself), Login, and Project. The number of machines is scalable, as defined by `.env` files: a local environment has one of each machine, staging has two login and two project servers, and production has four.
The Project machine is just a host for Docker containers. Actual user projects are stored in these containers; this is where we perform git checkouts and npm installs, serve the app for the live preview, and run other backend tools such as glTF conversion. Each Docker container holds exactly one user project.
- You must check out Filament next to Firefly, so that the directory layout looks like this:

  ```
  filament/
  firefly/
  ```
- Install VirtualBox from https://www.virtualbox.org/wiki/Downloads if you don't have it installed already. Mac OS X users must add the directory containing `VBoxManage` to their `PATH`:

  ```
  export PATH=$PATH:/Applications/VirtualBox.app/Contents/MacOS
  ```
- Install Vagrant from http://www.vagrantup.com/downloads.html
- Run `vagrant plugin install vagrant-cachier`. This will cache apt packages to speed up the initialization of the VMs.
- Run `vagrant plugin install vagrant-vbguest`. This will keep the VirtualBox Guest Additions up to date.
Run `npm start`

This can take up to 20 minutes the first time, as the local VMs are provisioned from scratch; however, the result is a local setup that's very similar to the production setup. This means we should be able to avoid causing problems that would usually only be seen in production.

You can then access the server at http://local-aurora.montagestudio.com:2440/
There is a lot of output when provisioning, and a number of warnings. The ones below are expected:

```
dpkg-preconfigure: unable to re-open stdin: No such file or directory
The guest additions on this VM do not match the installed version of VirtualBox
adduser: The group `admin' already exists.
adduser: The user `admin' does not exist.
chown: invalid user: `admin:admin'
stdin: is not a tty
```
Run `npm stop`

This will shut down the VMs. You can bring them back up with `npm start`, which should take less than a minute now that they are all set up.

After running `npm stop` the machines are not using CPU, but they still take up disk space. Instead of `npm stop` you can run `vagrant destroy` to remove the VMs from disk. You can use `npm start` to bring them back, but this will take almost as long as the initial setup.
Run `npm run deploy`

You will need to run this whenever you make changes to Firefly. Changes to Filament do not need a server refresh; simply reload the page to see UI changes.

This will restart the `login` and `project` servers, and stop all running containers so that on the next request they are restarted with the updated code. If either `login` or `project` fails to deploy, the previous version will remain running and the last 20 lines of the error log will be printed.
Run `npm run container-rm-all` to remove all containers from the project server.

Run `npm run container-rebuild` if you make changes to the `Dockerfile`. This will rebuild the base container image.
Run `npm run login-debug` or `npm run project-debug`

This sends a signal to the server process to enable debug mode, and then starts `node-inspector`. Sometimes the command exits with a strange error, but running it again works.

The port that `node-inspector` is exposed on is defined in the package.json and forwarded in the Vagrantfile.
Run `npm run login-remote-debug` and use 10.0.0.4:5858 as the connection point, or `npm run project-remote-debug` and use 10.0.0.5:5858 as the connection point.

You can connect using the node-inspector running on the host machine, or any other client that supports Node.js remote debugging, such as WebStorm.
```javascript
var log = require("./logging").from(__filename);
log("string", {object: 1}, 123, "http://example.com");
```
Only use `console.log` while developing.

Some special characters will change the output: wrapping a string in `*`s will make it appear red in the logs. This is useful when you need to log an error:

```javascript
log("*some error*", error.stack)
```
```javascript
var track = require("./track");
```
To aid debugging in production we track errors and some major events. Errors on the Joey chains are tracked automatically; however, whenever you write a `.catch` (or, in old parlance, a `.fail`) you should add some code to track the error.
If you have a `request` variable in scope then use the following code, which will pull the user's session data and other information from the request:

```javascript
track.error(error, request, /*optional object*/ data);
```

If you don't have `request` then you hopefully have the `username`:

```javascript
track.errorForUsername(error, /*string*/ username, /*optional object*/ data);
```
Events can be tracked with the following code, using the same "rules" as above for using `request`:

```javascript
track.message(/*string*/ message, request, /*optional object*/ data, /*optional string*/ level);
track.messageForUsername(/*string*/ message, /*string*/ username, /*optional object*/ data, /*optional string*/ level);
```

Messages should be written in the present tense without "-ing", e.g. "create container", not "creating container" or "created container" (unless the action really was in the past).
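As an illustration of wiring tracking into a manual `.catch`, here is a hedged sketch. `trackError` below is a stand-in with the documented `track.error` signature, not the real `./track` module, and `doWork` is a hypothetical failing operation:

```javascript
// Stand-in for track.error with the documented signature; the real
// ./track module also pulls session data and other request details.
function trackError(error, request, data) {
    return {
        message: error.message,
        username: request && request.session && request.session.username,
        data: data || {}
    };
}

// Hypothetical failing operation standing in for real project work.
function doWork() {
    return Promise.reject(new Error("npm install failed"));
}

// Any manual .catch should report the error before rethrowing, so the
// chain still fails but the failure is tracked:
function handleRequest(request) {
    return doWork().catch(function (error) {
        trackError(error, request, {action: "install dependencies"});
        throw error;
    });
}
```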
```
XMLHttpRequest cannot load http://local-aurora.montagestudio.com:2440/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access.
```
This happens when the project subdomain doesn't have the session cookie.

Why? It is caused by a cross-domain request to the project domain. When the project server doesn't find a valid session it redirects back to the app domain. This is blocked because there are no cross-domain headers on the app domain (despite the request now really being non-cross-domain). Hence the error shows the app domain in the message, and the `Origin` is `null` because the request comes from a redirect.
You can `ssh` into the different machines with `vagrant ssh $NAME`. Files are generally located at `/srv`. You can run the commands below to directly follow the logs for the different servers:
Run `npm run login-logs`

When the server fails to launch:

```
vagrant ssh login -c "tail -f /home/montage/stderr.log"
vagrant ssh login -c "sudo tail -n 30 /var/log/upstart/firefly-login.log"
```
Run `npm run project-logs`

When the server fails to launch:

```
vagrant ssh project -c "tail -f /home/montage/stderr.log"
vagrant ssh project -c "sudo tail -n 30 /var/log/upstart/firefly-project.log"
```
Run `npm run container-logs`

This will find the most recently launched container and start following its logs.

```
vagrant ssh web-server -c "tail -f /var/log/nginx/filament.access.log"
vagrant ssh load-balancer -c "tail -f /var/log/haproxy.log"
```
You can also see the state of the load balancer (HAProxy) and the servers at http://local-aurora.montagestudio.com:2440/admin?stats by logging in with user `montage`, password `Mont@ge1789`.
Run `npm run container-files`

This will find the most recently launched container and list all the files that have changed inside it. This is a quick way to see the state of the container.

Run `npm run container-copy-files`

This will copy the files out of the container into a temporary directory. You can look at the files, but of course any changes won't be reflected in the container.
Run `npm run container-rm-all` to remove existing containers, then run `npm run project-mount-workspaces`

You can then `vagrant ssh project` and find the workspaces from the containers at `/srv/workspaces/$ID`, where `$ID` can be copied from the logs output (look for something like `Existing container for stuk stuk asdf is fdfe3244c4201429d4e28266cb3bbb488a132f21ae818ddd1ee693dcddc0bcf8`).

To reload the server just Ctrl+C the server and run the second command above again, instead of running `npm run deploy`.

When finished, run `npm run project-unmount-workspaces`

This will remove all the containers and workspaces on the project server, and then restart the regular server.
Only `root` and the `docker` group can access the containers, so log into the project server and change to `root`:

```
vagrant ssh project
sudo su
```

If you don't know the ID of the container then you can get a list of all the running containers with `docker ps`, or include the stopped containers with `docker ps -a`.
If you don't know which of the IDs you want, there is a map from `{user, owner, repo}` to IDs in `/srv/container-index.json`. Look through that file to find the relevant ID. (I hope this will change pretty soon, probably to Redis.)
Using the ID of the container you can perform various actions:

```
docker logs $ID
docker diff $ID
docker cp $ID:/workspace /tmp
```
The session is available as `request.session`. After a Github auth it has a `githubAccessToken` property, containing the token.

The session contains the Github access token and Github username. It's encrypted and stored entirely on the client. When a server receives a session that it hasn't seen before, it uses the Github token to populate other information from the Github API in memory.

This scheme isn't perfect, as sessions can't be revoked (although the user can revoke their Github token on Github, which kills the session on our end as well), but it allows all the servers to be relatively stateless.
Here are some more useful commands if you change any config files or other aspects of the provisioning.
If you change the Upstart config files you need to restart the service:

```
vagrant ssh login -c "sudo cp /vagrant/deploy/services/firefly-login.conf /etc/init/firefly-login.conf"
vagrant ssh login -c "sudo service firefly-login restart"
vagrant ssh project -c "sudo cp /vagrant/deploy/services/firefly-project.conf /etc/init/firefly-project.conf"
vagrant ssh project -c "sudo service firefly-project restart"
```
The new config needs to be copied across and certain values replaced. (This command is adapted from the Vagrantfile).
```
vagrant ssh load-balancer -c "sudo cp /vagrant/deploy/files/haproxy.cfg /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/redirect scheme https .*//' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/server login1 [0-9\.]*/server login1 10.0.0.4/' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/server login2 .*//' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/server static1 [0-9\.]*/server static1 10.0.0.3/' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/use-server .*//' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/server project1 [0-9\.]*/server project1 10.0.0.5/' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/server project2 .*//' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/server project3 .*//' /etc/haproxy/haproxy.cfg;\
sudo sed -i.bak 's/server project4 .*//' /etc/haproxy/haproxy.cfg;\
sudo service haproxy reload"
```
- Run the specs (`npm test`) at the project's root and make sure there are no `jshint` errors and all specs pass successfully. Note: the binary dependencies are compiled for Linux instead of OS X, so when running `npm test` on a non-Linux platform it will attempt to SSH into a VM to run the tests. If you get the error `VM must be running to open SSH connection` then run `npm start` and try again. Note: there is a dummy spec called `_config-spec.js` (the `_` prefix causes it to be run first) that hides the logging while running the tests. If you need to see the logs then comment out the lines in it. It also adds the `SlowSpecReporter`...
- If a spec takes more than 100ms then it is a "slow" spec and a message telling you this will be logged. Make it faster, or wrap the `it` in an `if (process.env.runSlowSpecs)` block. Run `npm run slow-tests` to run the slow tests.
Make sure all commit messages follow the 50 character subject/72 character body formatting used throughout git
-
Make sure commit messages start with uppercase present tense commands e.g. Prefer "Clear selection when clicking templateExplorer" over "Cleared selection when clicking templateExplorer"
-
Turn on "strip trailing whitespace on save" or equivalent in your editor
-
Indent by 4 spaces, not tabs
The deployment process mainly relies on two third-party tools: `packer` and `tugboat`.

`packer` is used to create images through DigitalOcean. It uses the `build/*-image.sh` scripts to set up default values and perform pre-build steps, then reads the `./*-image.json` configuration files to walk through its image creation process. These .json files describe the properties of DigitalOcean droplets (using the standard DO API), copy configuration files from `files` and `services`, and execute scripts from `provision` on the droplets to initialize them for use. Finally, packer takes a snapshot of the droplet to produce an image (`.iso`) which will be used when deploying.
`tugboat` is used to actually interact with the existing droplets, while `packer` only creates intermediate temporary droplets for the purpose of snapshotting. It allows us to flash the new images onto existing droplets, ssh into them by name, check their statuses, etc. Note: when this deployment process was first created there was no equivalent to `tugboat`, but there is now an official tool maintained by DigitalOcean, `doctl`, which we should migrate to instead.
- Run `deploy/build/setup.sh`. This will install packer and tugboat. You will likely get a message about needing to run `tugboat authorize`; ignore this for now.
- Run `source deploy/build/env.sh` to set up your shell's environment. This adds packer and tugboat to your path and defines several useful environment variables for debugging the deployment process.
- Run `echo $DIGITALOCEAN_API_TOKEN` after setting up the environment above to print the API token to use in the next step.
- Run `tugboat authorize`. Use the API token from the previous step, and when it asks, set the default ssh username to admin and give the absolute path to your ssh public key (e.g. /home/username/.ssh/id_rsa.pub on Linux or /Users/username/.ssh/id_rsa.pub on MacOS).
- Add your public key to `deploy/files/authorized_keys`. You will not be able to ssh into the droplets without it.
`deploy/build/images.sh`:

```
SYNOPSIS
    deploy/build/images.sh [-ftx] [-b branch] [-c branch] [-n build_number] [-r build_revision]

DESCRIPTION
    Uses packer to generate the .iso images. There are several images that can be built
    using this script: the base image, which is the base for all other images and only needs
    to be rebuilt when upgrading the OS; individual base images, which install the required
    software for each VM; and specific images, which install application-specific configuration.
    For most deployments only the specific (non-base) images need to be rebuilt, unless
    something at the OS level or underlying software level has changed.

    Building images will automatically tag the firefly and filament repositories with a new
    release version unless specified otherwise. The two repositories will be cloned temporarily
    while doing the deployment, so any local application changes that have not been pushed
    upstream will not be included in the deployment (except the deploy/ directory, which
    is only used locally).

OPTIONS
    -f  Force base image rebuild. Rebuilds the base-image and *-base-image.
        Only needed when upgrading the OS or making a change in the base software needed
        on a machine.

    -t  Do not tag repositories. Prevents the step from automatically creating a new
        release on firefly and filament. Useful when debugging and trying out many
        deployments, but all published deployments should be given a release.

    -x  Bash debug mode.

    -b filament_branch
        Specify which branch of filament should be pulled.

    -c firefly_branch
        Specify which branch of firefly should be pulled.

    -n build_number
        Use the build_number when creating a new release, e.g. the '21' part of 'miranda/21'.
        Defaults to one more than the last build number if not specified.

    -r build_revision
        The name to use when creating a new release, e.g. the 'miranda' part of 'miranda/21'.
        Defaults to the revision defined in deploy/build/env.sh if not specified.
```
`deploy/build/rebuild.sh`:

```
SYNOPSIS
    deploy/build/rebuild.sh [-p] [-n build_number] [-r build_revision]

DESCRIPTION
    Once the images are built, this script actually resets the DigitalOcean droplets with
    the new images. Each of the droplets in the selected working set will be shut down,
    the corresponding image will be written over its file system, and it will be rebooted.
    Deploys to staging unless production is specified.

OPTIONS
    -p  Deploy to production droplets instead of staging.

    -n build_number
        Use the image with the given build_number. Defaults to the latest build of the
        current revision if not specified.

    -r build_revision
        Use the image with the given build_revision. Defaults to the revision defined
        in deploy/build/env.sh if not specified.
```
Run one of these scripts to start the build of an image.
Configuration files to be copied into the images.
Scripts that are run inside a new VM to set up all the packages and code that are needed.
These are configuration files for Upstart, to launch services when a machine boots. See the manual.
This directory is created in the root of firefly to store the `packer` and `tugboat` binaries.
Email [email protected] with the update in the subject and a body of `#minor` or `#major` to indicate the severity, or `#good` if everything is okay again. Minor or major statuses will appear in the tool (good ones won't), and all posts appear on http://status.montagestudio.com/

Remember: always resolve #major or #minor problems with a #good post, so that the warning will disappear in the tool.

If you want to include more information in the body, put it before the #tag; the extra information won't be shown in the tool. Example:

```
To: [email protected]
Subject: Issues opening projects

There are problems opening projects at the moment.

#major
```

More information at https://www.tumblr.com/docs/en/email_publishing