
build's Introduction

ARM Cluster

Some of the Node.js Build ARM resources: 3 generations of Raspberry Pi and other ARMv7 & ARM64 hardware

Node.js Build Working Group

Chat with us on Slack.

Purpose

The Node.js Build Working Group maintains and controls infrastructure used for continuous integration (CI), releases, benchmarks, web hosting (of nodejs.org and other Node.js web properties) and more.

Our mission is to provide the Node.js project and libuv with solid computing infrastructure in order to improve the quality of the software itself by targeting correctness, speed and compatibility, and to ensure streamlined delivery of binaries and source code to end-users.

This repository contains information used to set up and maintain the various pieces of Node.js Project infrastructure managed by the Build Working Group. It is intended to be open and transparent; if you see any relevant information missing, please open an issue. If you are interested in joining, please read GOVERNANCE.md to understand the process and reasoning we use for granting access to the resources we manage.

Build WG Members

The above list is manually synced with the gpg member list.

Infra Admins

Jenkins Admins

Admin access to https://ci.nodejs.org/.

Release Admins

Access to release secrets.

The above list is manually synced with the gpg member list.

Release Jenkins Admins

Admin access to https://ci-release.nodejs.org/.

GitHub Bot Admins

If you are interested in joining the Build WG, or for more information about access levels and team roles, see GOVERNANCE.md.

Emeriti

Infrastructure Providers

The Node.js Project is proud to receive contributions from many companies, both in the form of monetary contributions in exchange for membership and in-kind contributions of required resources. The Build Working Group collaborates with the following companies, who contribute various kinds of cloud and physical hardware to the Node.js project.

Tier-1 Providers

The Node.js Project's tier-1 infrastructure providers contribute the largest share of infrastructure to the Node.js project. Without these companies, the project would not be able to provide the quality, speed and availability of test coverage that it does today.

Tier 1 Infrastructure Providers

  • DigitalOcean: a popular cloud hosting service, provides a significant amount of the resources required to run the Node.js project including key CI infrastructure/servers required to host nodejs.org.

  • Rackspace: a popular managed cloud company, provides significant resources used to power much of the Node.js project's CI system, including key Windows compilation servers, along with additional services such as Mailgun for some nodejs.org and iojs.org email services.

Tier-2 Providers

The Node.js Project's tier-2 infrastructure providers fill essential gaps in architecture and operating system variations and shoulder some of the burden from the tier-1 providers, contributing to availability and speed in our CI system.

Tier 2 Infrastructure Providers

  • Microsoft: Provides Windows-related test infrastructure on Azure for the Node.js CI system.

  • Joyent: A private cloud infrastructure company, provides SmartOS and other test/build resources for the Node.js CI system, resources for backup of our critical infrastructure, redundancy for nodejs.org and our unencrypted.nodejs.org mirror.

  • IBM:

  • Scaleway: Scalable cloud platform designed for developers & growing companies, contributes key ARMv7 hardware for test and release builds for the Node.js CI system.

  • Cloudflare: CDN and internet traffic management provider, responsible for providing fast and always-available access to nodejs.org.

  • ARM: Semiconductor intellectual property supplier, has donated ARMv8 / ARM64 hardware used by the Node.js CI system for building and testing Node.js.

  • Intel: "The world leader in silicon innovation," contributes hardware used for benchmarking in the Node.js project's CI system to advance and accelerate Node.js performance.

  • MacStadium: Managed hosting provider for Mac. Provides Mac hardware used for testing in the Node.js project's CI system.

  • Packet: Bare metal cloud for developers. Through their Works on Arm program, Packet provides ARM64 build infrastructure and additional resources for powering our CI system.

Community Donations

From time to time, the Node.js Build Working Group calls for, and receives, donations of hardware in order to expand the breadth of the build and test infrastructure it maintains.

The Node.js Project would like to thank the following individuals and companies that have donated miscellaneous hardware:

  • NodeSource for a Raspberry Pi B, a Raspberry Pi B+, a Raspberry Pi 2 B and an ODROID-XU3
  • Andrew Chilton @chilts for a Raspberry Pi B
  • Julian Duque @julianduque for a Beaglebone Black
  • Andi Neck @andineck for 2 x Raspberry Pi B+
  • Bryan English @bengl for 2 x Raspberry Pi B+
  • Continuation Labs @continuationlabs for a Raspberry Pi B+
  • C J Silverio @ceejbot for a Raspberry Pi B+ and a Raspberry Pi 2 B
  • miniNodes for a Raspberry Pi B+ and a Raspberry Pi 2 B
  • Simeon Vincent @svincent for 3 x Raspberry Pi 2 B
  • Joey van Dijk @joeyvandijk and Techtribe for 2 x Raspberry Pi 2 B and an ODROID-U3+
  • Matteo Collina @mcollina for a Raspberry Pi 2 B
  • Sam Thompson @sambthompson for a Raspberry Pi 2 B
  • Louis Center @louiscntr for a Raspberry Pi 2 B
  • Dav Glass @davglass for 2 x ODROID-XU3, Raspberry Pi 1 B+, Raspberry Pi 3, power, networking and other miscellaneous equipment
  • Tessel for a Tessel 2
  • KahWee Teng @kahwee for a Raspberry Pi 3
  • Chinmay Pendharkar @notthetup and Sayanee Basu @sayanee for a Raspberry Pi 3
  • Michele Capra @piccoloaiutante for a Raspberry Pi 3
  • Pivotal Agency for two Raspberry Pi 3's
  • SecuroGroup for two Raspberry Pi 1 B+'s and two Raspberry Pi 3's
  • William Kapke @williamkapke for three Raspberry Pi 3's and networking equipment
  • Jonathan Barnett @indieisaconcept for a Raspberry Pi B+
  • James Snell @jasnell for a Raspberry Pi 2
  • Michael Dawson @mhdawson for a Raspberry Pi 1 B+
  • Chris Lea @chrislea for a Raspberry Pi 1 B+

If you would like to donate hardware to the Node.js Project, please reach out to the Build Working Group via the #nodejs-build channel on the OpenJS Foundation Slack instance, or contact Rod Vagg directly. The Build Working Group reserves the right to choose what hardware is actively used and how it is used; donating hardware does not guarantee its use within the testing infrastructure, as there are many other factors that must be considered. Some donated hardware, while not used actively in the CI infrastructure, is used from time to time for R&D purposes by the project.

CI Software

Build and test orchestration is performed by Jenkins.

The Build WG will keep build configuration required for a release line for 6 months after the release goes End-of-Life, in case further build or test runs are required. After that the configuration will be removed.

build's People

Contributors

cclauss, dependabot[bot], fhemberger, fishrock123, gibfahn, jbergstroem, joaocgreis, joyeecheung, lucalanziani, maclover7, mhdawson, mmarchini, molow, nschonni, ovflowd, phillipj, piccoloaiutante, rafaelgss, refack, richardlau, rvagg, ryanaslett, sam-github, santigimeno, stefanstojanovic, sxa, targos, trott, ulisesgascon, xhmikosr


build's Issues

More capacity from Rackspace

I know @pquerna has facilitated some of this directly, but my team (Developer Experience) has an official program for capacity for OSS projects, for ci/cd, web, builds, etc.

Let me know if I can help with this in any way...

CI for testing release procedures

Recently, we've had 2? failures due to release bugs.

I propose we set up CI tasks to run the release in a purely testing way. (So that we can go back and check through commits with it, unlike the nightly, which auto-publishes.)

cc @rvagg

ARM builds

I believe ARM is an important target but I'm unsure the best way to tackle CI with it. We either need to find some beefy ARM CPUs somewhere that don't take 12h to compile Node, do a cross-compile and then push to actual boxes to test, use distcc as @mmalecki has suggested, or perhaps some crazy virtualization?

Need some input from cleverer people than me on this.
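For what it's worth, a rough sketch of the cross-compile route could look like the following, assuming an armhf cross toolchain on a fast x64 box; the configure flags, host names and test invocation are assumptions to sanity-check, not a tested recipe.

# Hedged sketch: cross-compile on a fast x64 box, then push the tree to a real
# ARM board just to run the tests. Toolchain, paths and hosts are placeholders.
sudo apt-get install -y gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf

export CC=arm-linux-gnueabihf-gcc
export CXX=arm-linux-gnueabihf-g++
./configure --dest-cpu=arm --dest-os=linux --cross-compiling
make -j8

# Copy the built tree to an ARM box (placeholder host) and run the suite there.
rsync -a . pi@arm-test-box:~/node/
ssh pi@arm-test-box 'cd ~/node && python tools/test.py --mode=release'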

/cc @wolfeidau

Windows

I'm making an issue here mainly to collect names of people who can legitimately help with making Windows a first-class citizen. It's such a different beast that it really requires people who spend their time in it and understand the ecosystem rather than drop-ins (like me) who make do with their historic knowledge and occasional googling.

Examples of things that we need people for:

  1. We're heading into a place where we can only build with C++11-capable compilers, which is only MSVC 2013 on the official end (and 2015 when it's properly released). What are the implications of this for Windows build platform support (currently I only have Windows 2012 in the node-forward build cluster) and what are the implications for our ability to produce binaries that support older versions of Windows?
  2. There has been some discussion of moving to clang at some point but apparently it's not quite "ready", is anybody following this? Do we have any clue about our ability to even start testing with it?
  3. Fixing minor bugs in core and keeping the test suite solid and relevant for Windows. Consider the test-cluster-eaccess.js example: originally implemented for non-Windows only, it was then expanded to work on Windows, but tested something completely different there because of the nature of named pipes. It has since been changed in io.js, but only because it was noticed while tracking down unrelated problems.

io.js Build WG Meeting

http://doodle.com/d7q3x4ezrze7x73b

Sorry folks, it's totally my fault that we don't have momentum here, I struggle to find good slots in my schedule where I can even propose a meeting let alone make it work for others. Can @iojs/build have a look at the above Doodle and record whether it works for you or not? If it doesn't we can easily push to next week where it might be easier for me.

I'd also like to ask someone else to take the reins for keeping the momentum of meetings up, perhaps @jbergstroem is interested in doing this? If it's left to me then it'll slip too far.

32-bit _test_ boxes for Windows

Currently we are only doing x64 builds and tests on Windows 2008 and 2012, but we are shipping binaries for x86 as well.

We have 2 x test boxes for each of 2008 and 2012, currently set up to be redundant so that if Jenkins has more work to do we can overflow and handle more capacity, since Windows is one of the slowest platforms to build and test. We could either re-purpose one of each of those for 32-bit and lose redundancy, or spin up a duplicate set to do 32-bit builds and tests.

Thoughts? Specifically @kenperkins

CI security

From what I understand, the current Jenkins setup for Node.js and libuv run by Joyent will do builds for commits and pull requests. I'm having a hard time figuring out how this can be made secure outside of the unixes with containerisation (Solaris, Linux, ... ?). The hole I see is that running builds for pull requests basically opens these boxes up to executing arbitrary code from anybody with a GitHub account, which could potentially compromise the machines themselves; that is a particular concern if some of these builds will end up being actual releases.

Looking for insight from people with more experience on this than me. The most common use-case for Jenkins is in-house builds rather than open source projects so I'm not sure if this comes up a whole lot.

/cc @tjfontaine @evilpacket

Collect a list of resources

In order to get some things done, it'd be good to get a full list of what resources are available (and where). Based on that, I'd like to:

  • update the main readme.md (platforms and builders)
  • start digging into access redundancy (ref last meeting about 1+ access to all resources)
  • look at which resources would be better suited for certain things (performance tracking, tests for all PRs, etc.)

I can create a list from what @rvagg mentioned on the meeting, but think it'd be even better if @rvagg perhaps kicked this list off? How about:

  • hostname, location (sponsored by), hardware/vm assigned resources

Install ccache on buildbots

Use ccache to speed up CI builds. The ARM buildbot in particular is hellishly slow.

I don't know how much storage the buildbots have but if space is at a premium, it may make sense to clean out ~/.ccache once a month or so.
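For reference, a minimal sketch of what enabling it per buildbot might look like, assuming Debian/Ubuntu bots; the package name, cache size and cron schedule are assumptions.

# Hedged sketch: enable ccache on a Debian/Ubuntu buildbot.
sudo apt-get install -y ccache

# Route the Node build through the ccache compiler wrappers.
export CC="ccache gcc"
export CXX="ccache g++"

# Cap the cache so it can't fill a small disk.
ccache -M 5G

# If space is really tight, clear the cache monthly from cron:
# 0 3 1 * * ccache -C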

/cc @rvagg

integrating npm tests into CI

I've been doing some work to make it so npm tests can be run as part of the overall build process for Node / io.js, preferably without too many side effects. It's kinda tough to mirror the behavior of a CI environment, so it would be great if somebody could either point me at a Vagrant image / Docker container, or get me access to a subset of the build environments so I have something to hammer on while I'm getting things sorted out.

also a windows ci environment would be great thanks

Website build on commit

cc @iojs/website

Currently we are using github-webhook with the following configuration on the server:

{
  "port": 9999,
  "path": "/webhook",
  "secret": "orly?",
  "log": "/home/iojs/github-webhook.log",
  "rules": [{
    "event": "push",
    "match": "ref == \"refs/heads/master\" && repository.full_name == \"iojs/website\"",
    "exec": "cd /home/iojs/website.github/ && git reset --hard && git clean -fdx && git fetch origin && git checkout origin/master && rsync -avz --delete --exclude .git /home/iojs/website.github/public/ /home/iojs/www/"
  }]
}

i.e. the "build" process is:

  1. in the existing clone of iojs/website, do a reset and clean
  2. fetch from origin
  3. checkout origin/master
  4. rsync the ./public/ directory of the repo into the live site directory

What I want to suggest we add is a build step in between 3 and 4 here, but it needs to be done inside a container so we don't give free rein for code in the website repo to run on the server.

Something like this:

docker pull iojs:latest && \
docker run \
  --rm \
  -v /home/iojs/website.github/:/website/ \
  -v /home/iojs/.npm:/npm/ \
  iojs:latest \
  bash -c " \
    adduser iojs --gecos iojs --disabled-password && \
    su iojs -c ' \
      npm config set loglevel http && \
      npm config set cache /npm/ && \
      cd /website/ && \
      npm install && \
      node_modules/.bin/gulp build \
    ' \
  "

I've just run this and it seems to work fine and I could enable it right now if that's suitable to the website team.

Note for build team (@kenperkins in particular): our Ansible script for the website needs an initial git clone of iojs/website to /home/iojs/website.github/; I don't think we are doing that currently. The above command will also need /home/iojs/.npm/ to be made and owned by iojs.

Missing Nightlies

Two issues, possibly connected:

  • We haven't had nightly builds for a few days.
  • The .json file for nightly builds is missing the latest two builds I can see in the tree.

First io.js release build plan / discussion

Things we need to make a first official io.js release in mid January (this is top of my head, please contribute if you see something that I don't have).

Target

At a minimum we need to release a solid source tarball that's tested, tagged and good to compile and use as a fully compatible version of joyent/node, v0.12-worthy. Version will be 1.0.0, perhaps with an -alpha.x suffix, that'll be up to the TC.

Binaries would be good but may be practical only for Linux at this stage, in the absence of signing keys.

Need

  • OSX 10.10 and 10.9 hooked up to CI (Voxer has machines ready for us to hook up, we just need to do some VM work) - @ryanstevens is responsible for this, @rvagg to work with him to make this happen
  • CentOS5 hooked up to CI to ensure RHEL5-level compatibility (yak shaving abounds here) - @rvagg responsible for this, I believe I have a good strategy after shaving the RHEL6 yak with C++11

Nice but not essential

  • OSX 10.8 hooked up to CI
  • At least one version of FreeBSD hooked up to CI (Voxer has the hardware on offer for this)
  • A Solaris-ish machine hooked up to CI, what are the chances of Joyent offering a box?
  • Signing keys for io.js org so we can release proper Mac and Windows binaries
  • Linux releases via deb.nodesource.com and rpm.nodesource.com (or similar hosts if not the same hosts) - @rvagg to work with @chrislea to make this happen

Decisions

  • Do we mark Linux packages as "conflicting" with "nodejs" or set up an "alternatives" style system? (I know it's doable on Debian-based systems but am vague on RHEL/Fedora-based systems here; a sketch follows this list)
  • Exact version, up to TC, not an urgent decision
  • How are tarballs named and hosted, straight from GitHub?
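For the "alternatives" option above, a minimal sketch of the Debian-based side, assuming io.js installs its binary to /usr/local/bin/iojs (paths and priority are illustrative; RHEL/Fedora has a similar alternatives tool):

# Hedged sketch: register iojs as an alternative provider of the "node" command.
sudo update-alternatives --install /usr/bin/node node /usr/local/bin/iojs 50

# Let the admin pick interactively between providers.
sudo update-alternatives --config node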

Hook it up!

@rvagg you ready to hook this up yet? We've got a node repo in node-forward that people are going to be working in. What do we have to do to get this running on it?

Sub-WG clause

We currently have one project operating as a sub-WG of the io.js Build Team WG. Do we need something in our GOVERNANCE.md file that addresses these types of groups?

CI build platforms

With both Node and libuv being very widely adopted across disparate platforms, it's time for a CI system to match that spread. We should be able to define a list of primary targets that are essential as part of the mix and secondary targets that add additional value but not a main focus of the core team.

Current Node.js core and libuv Jenkins build bot list: http://jenkins.nodejs.org/computer/

Let's try and limit this discussion to CI as much as possible and leave release build platforms for another discussion.

Likely using Jenkins with a very distributed collection of build bots. I've been in contact with DigitalOcean, IBM and @mmalecki so far on hardware provisioning, looking forward to Rackspace and any others that want to step up. NodeSource is happy to cop the maintenance burden and likely some of the cost and do the bidding of the core team(s).

Here's my straw-man, to start discussion off:

Primary

  • Linux (64-bit with at least one 32-bit, maybe CentOS)
    • Ubuntu LTS versions still being supported
    • Ubuntu latest stable
    • EL last three versions (CentOS 5, 6 & 7 in lieu of RHEL 5, 6 & 7)
    • Debian stable
    • Something for ARMv6 (rpi) and ARMv7
  • Windows (64-bit only)
    • Windows Server 2008 R2 (NT 6.1, same as Windows 7)
    • Windows Server 2012 (same as Windows 8)
    • Need variations for VS 2012 and VS 2013
  • OSX (64-bit only)
    • 10.8 "Mountain Lion"
    • 10.9 "Mavericks"
  • Solaris (64-bit only)
    • SmartOS 13.4.2
    • SmartOS 14.2.0

Secondary

  • Linux
    • Debian unstable & testing
    • EL next (CentOS 7 beta)
  • Windows
    • Windows 7 32-bit
    • MinGW
    • VS 2010 on something
  • FreeBSD
  • POWER

Looking for input from anyone but particularly the core team who need to be the ones deciding which are the primary platforms they actually care about, and we're considering both Node and libuv here. I'm happy to do a bunch of the legwork for you but I'll need your guidance because build targets is not my decision to make.

@tjfontaine @bnoordhuis @piscisaureus @trevnorris @TooTallNate @saghul @indutny

Others who have shown an interest in this discussion (or I just feel like pulling in!):

@ingsings @pquerna @voodootikigod @mmalecki @andrewlow @guille @othiym23 @dshaw @wblankenship @wolfeidau

Please subscribe to https://github.com/node-forward/build for further notifications from other issues if you're interested so we don't have to go and pull everyone in each time.

Performance tracking

Hi hi,

Not sure exactly how to word this, but it would be great to pick some choice benchmarks and track performance from build to build.

Especially if they represented benchmarks that developers sometimes use to pick languages/stacks/frameworks because we're all idiots^H^H^H^H^H^H really interested in incredibly specific use cases.

For example, I'd love to see Node climb a bit higher in some of these tests: http://www.techempower.com/benchmarks/ – and it would be great to be able to track any efforts involved in getting there.

Also, for regression obvs.

First io.js Build WG and Docker sub-WG meeting

The blame is on me for not organising this sooner, I've been shouldering too much of this effort on my own and would love to make space for others to help out.

This is an open meeting to anyone who feels they have something to contribute. My preference is to include those who have already stepped up with code or help with Build, Docker or other parts of io.js but I also recognise the lack of obvious ways to contribute so far may have held back additional contributors. So if you have some skills and interest in this space then you're welcome too.

Meeting via Google Hangouts, fill in your details here if you want to attend: http://doodle.com/r5cz2dq6rcpd9b5e

The Docker sub-WG is the most active group so I'd love for at least these people to be involved in this meeting:

Other people who have had some involvement with Build, mainly through contributing to discussions and showing an interest in the build repo, you may or may not have an interest in joining us:

(just calling out names here to get the ball rolling, this is not an exhaustive list of people that can be involved by any means)

I'm also interested in having some libuv input since we're taking responsibility for libuv CI.

The proposed meeting dates are a couple of weeks away, mainly selfishly due to constraints on my part but also to give us time to discuss possible agenda items here.

  • Adopt a charter and whatever else this document says we need to do: https://github.com/iojs/io.js/blob/v1.x/WORKING_GROUPS.md
  • Discuss the relationship between "Build" and "Docker"
  • I'll propose a roadmap for the activity we need to conduct for Build and we can discuss that and possibly even assign some responsibilities and map out a path. Items including:
    • Make more progress towards automatic testing of pull requests to iojs/io.js and libuv/libuv via the build-containers work (these are already running on jenkins).
    • Automatic testing of merged commits via the full CI build set
    • Reporting status to iojs/io.js and libuv/libuv. I have a nice custom badge for the READMEs in my head that would show status across the various platforms.
    • Easier test triggers for io.js and libuv collaborators so they can request containerised or full CI runs on any fork & branch/commit without having to log in to Jenkins to do it.
    • Longer-term plans to replace Jenkins with our own solution.
  • Discuss the current list of platforms and how we might want to extend it and a timeframe for extending it. I know that the libuv folk are interested in a broader set of test platforms than io.js but we're not doing much to oblige that yet.
    • We also need to discuss how we might expand the list of cloud/hosting companies providing hardware. We're leaning pretty heavily on DigitalOcean and Rackspace at the moment and I wouldn't mind diversifying a little and even bringing in providers of other platforms.
  • Discuss how ARM fits into the picture. I'm quite interested in ramping up our ARM testing and nightly/release builds to make io.js perfect for IOT/embedded/single-board applications. I've been considering putting out a call for hardware donations so we can get better coverage of hardware that people are actually using.

Enable CI/CD for PRs to iojs/website

Would be nice to have pull requests to iojs/website automatically trigger a build and deployment to a staging server, and then comment in the PR with the link to an ephemeral domain.

State of the build (io.js) April 2015

State of the build (io.js) April 2015

This is a summary of activity and resources within the io.js Build WG. I'm doing this to present to the WG meeting that's coming up but also to shine a bit of light into things that are mostly in my head. Some of this information could go on the README or other documentation for the project. I'd like to update this information each month so we can see how it evolves over time. Summarising in this way shows up a few TODO items that we need to tackle as a group and should also show us where our priorities should be going forward.

Build cluster servers

DigitalOcean

We have a fairly open account with DigitalOcean and this is where we do all of our non-ARM Linux computing. We also run https://iojs.org/ from here.

  • 2 x 16G instances for iojs-build-containers for running untrusted builds, 3 x Ubuntu container types and 2 x Debian container types
  • 6 x 4G instances for Ubuntu: 10.04 32-bit, 10.04 64-bit, 12.04 64-bit, 14.04 32-bit, 14.04 64-bit, 14.10 64-bit
  • 4 x 4G instances for CentOS: v5 32-bit, v5 64-bit, v6 64-bit, v7 64-bit
  • 2 x 4G instances for CentOS for release builds: v5 32-bit, v5 64-bit

Currently myself, @wblankenship and now @jbergstroem have access to all of these machines.

Rackspace

We have a somewhat open account with Rackspace and have @kenperkins on the team who is able to give us more resources if we need them.

  • 2 x 30 GB Compute v1 instances for Windows Server 2008 R2 SP1
  • 2 x 30 GB Compute v1 instances for Windows Server 2012 R2
  • 1 x 30 GB Compute v1 instance for Windows Server 2012 R2 with Visual Studio 2015, not currently running in the general CI group
  • 2 x 30 GB Compute v1 instances for Windows Server 2008 R2 SP1 release builds: 32-bit and 64-bit

Currently I'm the only one with the Administrator passwords for these boxes; I need to identify someone else on the build team who is competent on Windows so we can reduce our bus factor here. The release build machines contain signing keys, so I'd like to keep that somewhat restricted and will likely share access with @wblankenship, who is also at NodeSource.

Voxer

Voxer have a primary interest in FreeBSD support in io.js for their own use which is where the FreeBSD machines come in. They are very fast because they are not virtualised at all. The FreeBSD machines are behind the Voxer VPN and the Mac Mini servers will be soon.

  • 1 x FreeBSD 10.1-RC3 32-bit jail
  • 1 x FreeBSD 10.1-RC3 64-bit jail
  • 2 x 2015 Mac Mini servers running virtual machines, each with:
    • 1 x OS X 10.10 for test builds
    • 1 x OS X 10.10 for release builds - one server creates .pkg files, the other creates the source tarball and the darwin tarball

Currently myself and @jbergstroem have VPN access into the Voxer network to connect to the FreeBSD machines. Only I have access to the Mac Mini servers but I need to get @wblankenship on to them as well at some point. They contain our signing keys in the release VMs so I'll need to keep access somewhat restricted.

Joyent

Joyent have provided two zones for test builds; they are multiarch and we are using them to do both 64-bit and 32-bit builds.

  • 1 x 8G High CPU zone with 8 vCPUs for SmartOS 64-bit tests
  • 1 x 8G High CPU zone with 8 vCPUs for SmartOS 32-bit tests

Currently myself, @geek and @jbergstroem have access to these machines.

Scaleway

Scaleway, formerly Online Labs, have provided us with a 5-server account on their ARMv7 cluster. We are using them to run plain Debian Wheezy (armhf) on ARMv7 but could potentially be running other OS combinations as well. The ARMv7 release binaries will eventually come from here as Wheezy represents the oldest libc I think we're likely to want to support on ARM.

  • 2 x ARMv7 Marvell Armada 370/XP running Debian Wheezy (armhf)
  • 1 x ARMv7 Marvell Armada 370/XP running Debian Wheezy (armhf) for release builds (yet to take over from the existing ARMv7 machine creating release builds)

Currently only I have access to these machines but I should share access with someone else from the build team.

Linaro

Linaro exists to help open source projects prepare for ARM support. We are being supported by ARM Holdings in this as they have an interest in seeing ARMv8/AArch64 support improved (we now have working ARMv8 builds!). Our access is on a monthly renewal basis, so I just need to keep requesting continued access.

  • 1 x ARMv8 / AArch64 APM X-Gene Mustang running Ubuntu 14.04

Currently only I have access; access is via an SSH jump-host, so it's a little awkward to just give others access. I haven't asked about getting other keys into that arrangement but it likely would be OK. An interim measure is to create an SSH tunnel for access to this server, which I have done previously for io.js team members needing to test & debug their work.
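The tunnel itself is just plain SSH port forwarding through the jump-host, roughly as below; host names and ports are placeholders, not the real setup.

# Hedged sketch: on a host both parties can reach, forward a port through the
# jump-host to the ARMv8 box. Host names and ports are placeholders.
ssh -g -N -L 2222:armv8-mustang.internal:22 user@linaro-jumphost

# A collaborator then connects via the forwarding host:
ssh -p 2222 iojs@tunnel-host.example.org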

I'm still investigating further ARMv8 hardware so we can expand our testing but low cost hardware is hard to get hold of at the moment and I'd really like to find a corporate partner that we can work with on this (WIP).

NodeSource

The rest of the io.js ARM cluster is running in my office and consists of hardware donated by community members and NodeSource. I'm still looking for further donations here because the more the better, particularly for the slow hardware. Not included in this list is a Beagle Bone Black donated by @julianduque that I haven't managed to hook up yet, but will do because of an interesting OS combination it comes with (and also its popularity amongst NodeBots users).

  • 2 x Raspberry Pi v1 running Raspbian Wheezy
  • 1 x Raspberry Pi v1 Plus running Raspbian Wheezy
  • 1 x Raspberry Pi v2 running Raspbian Wheezy
  • 1 x ARMv7 ODROID-XU3 / Samsung Exynos4412 Prime Cortex-A9 (big-LITTLE) running ODROID Ubuntu 14.04 for both test and release builds under different user accounts (currently creating ARMv7 binaries but this needs to be switched to the Debian Wheezy machine from Scaleway).

Currently only I have access to these machines but have given SSH tunnel access to io.js team members in the past for one-off test/debug situations.

iojs.org

We are only running a single Ubuntu 14.04 4G instance on DigitalOcean for the website, it holds all of the release builds too. The web assets are served via nginx with http redirected to https serving a certificate provided by @indutny.

Only myself, @wblankenship, @indutny and @kenperkins have full access to this machine and I'd like to keep that fairly restricted because of the security implications for the builds.

All of the release build servers in the CI cluster have access to the staging user on the server in order to upload their build artifacts. A job in crontab promotes nightly builds to the appropriate dist directory to be publicly accessible.
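The promotion itself is little more than a copy from the staging area into the public dist tree; a minimal sketch follows (the real paths, users and schedule on the server differ).

#!/bin/sh
# promote-nightlies.sh - hedged sketch; paths and users are assumptions.
# Copy finished nightly builds from staging into the publicly served dist tree.
rsync -a --ignore-existing /home/staging/nightlies/ /home/dist/nightlies/

# crontab entry for the dist user, shortly after midnight UTC:
# 15 0 * * * /home/dist/bin/promote-nightlies.sh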

The 3 individuals authorised to create io.js releases (listed on the io.js README) have access to the dist user on the server in order to promote release builds from staging to the dist directory where they become publicly accessible. Release builds also have their SHASUMS256.txt files signed by the releasers.

The iojs/website team only have access via a GitHub webhook to the iojs user. The webhook responds to commits on master of their repo and performs an install and build of their code in an unprivileged account within a Docker container. A successful build results in a promotion of the website code to the public directory. A new release will also trigger a website rebuild via a job in crontab that checks the index.tab file's last update date.

This week I upgraded this machine to a 60G from a 30G because we filled up the disk with nightly, next-nightly and release builds. We'll need to come up with a scalable solution to this in the medium-term.

Jenkins

Jenkins is run on an 80G instance on DigitalOcean with Ubuntu 14.04. It's using the NodeSource wildcard SSL cert so I need to restrict access to this machine. It no longer does any slave work itself but is simply coordinating the cluster of build slaves listed above.

Automation

We now have automation of nightly and next-nightly builds via a crontab job running a node program that checks if one should be created at the end of each day UTC and triggers a build via Jenkins if it needs to.
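The checking script isn't published yet, but the shape of it is roughly this; the job name, branch, credentials and state file below are all placeholders, not the real setup.

#!/bin/sh
# Hedged sketch: decide whether a nightly is needed and, if so, kick off the
# Jenkins job via its remote build API. Everything here is a placeholder.
STATE=/var/lib/nightly/last-commit
LAST=$(cat "$STATE" 2>/dev/null)
HEAD=$(git ls-remote https://github.com/iojs/io.js refs/heads/v1.x | cut -f1)

if [ -n "$HEAD" ] && [ "$HEAD" != "$LAST" ]; then
  curl -s -X POST \
       --user "nightly-bot:API_TOKEN" \
       --data-urlencode "commit=$HEAD" \
       "https://jenkins.example.org/job/iojs-nightly/buildWithParameters"
  echo "$HEAD" > "$STATE"
fi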

We also have the beginnings of automation for PR testing for io.js. I'm still to publish the source that I have for this but it's currently triggering either a full test run or a containerised test run depending on whether you are in the iojs/Collaborators team or not. New PRs and any updates to commits on PRs will trigger new test runs. Currently there is no reporting of activity back to the PRs so you have to know this is happening and know where to look to see your test run. This is a work in progress, but at least there's progress.

Scripted setup

  • All of the non-ARM Linux server setups for build/release machines are written in Ansible scripts in the iojs/build repo.
  • The FreeBSD and SmartOS server setups are Ansibilised in the iojs/build repo (I'm assuming what's there works; I believe these were both contributed by @jbergstroem and perhaps @geek too).
  • The Windows setup procedure is documented in the iojs/build repo (not scripted).
  • The ARMv7 and Raspberry Pi server setups have been Ansibilised but not merged into iojs/build yet.
  • The iojs.org server setup is in the process of having its Ansible scripts updated to match the reality of the server, work in progress by @kenperkins #54.

Activity summary

  • Our main io.js test job in Jenkins has performed ~511 build & test cycles and thanks to the hard work of io.js collaborators the tests are almost all passing across platforms with the exception of some Jenkins-specific timeouts on Windows builds.
  • Our main libuv test job in Jenkins has performed ~84 build & test cycles. The libuv team has a bit of work to do on their test suite across platforms before this will be as useful to them.
  • We have built and are serving:
    • 19 releases
    • 71 nightlies
  • We are now building and serving binaries for:
    • Linux ARMv6, ARMv7, x64, x86 all as both .tar.gz and .tar.xz
    • OS X as 64-bit .tar.gz and as .pkg installer
    • Windows x64 and x86, both as plain .exe files and .msi installers
  • According to my (hacky, and potentially dodgy) log scraping shell scripts (a sketch of the idea follows at the end of this section):
    • We've had ~1.5M downloads of io.js binaries from the website since 1.0.0
    • Our peak was 146,000 downloads on the 20th of March

[graph: iojs_downloads]

  • We don't have Google Analytics (or other) running on iojs.org (I think) but traffic trends can be deduced from this graph thanks to DigitalOcean

[graph: iojs_traffic]
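For the curious, the log scraping boils down to something like the following; the nginx log location and URL patterns are assumptions, not the actual scripts.

#!/bin/sh
# Hedged sketch: count binary downloads per day from nginx access logs.
# Log paths and URL patterns are assumptions.
zcat -f /var/log/nginx/access.log* \
  | grep -E 'GET /dist/v[0-9.]+/.*\.(tar\.gz|tar\.xz|pkg|msi|exe) ' \
  | awk '{print $4}' | cut -d: -f1 | tr -d '[' \
  | sort | uniq -c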

Ability for native module developers to hook in to the build infrastructure

It would be grand for the build infrastructure that is used to compile node to also be provided as a service for developers of native modules that implement/utilize node-pre-gyp. Happy to recommend/advise on this from experience with node-serialport, but the ideal case would be that on npm publish a webhook is triggered that would build and store the compiled module for each (or a subset) of the target platforms.

A thought.

CI target architectures for libuv

Continued from libuv/libuv#12 also see #1 for additional context.

Here's a strawman proposal for architectures libuv should be tested against. They are split into 3 classes, mainly based on how difficult they will be to set up and include in the build set, and how important it is to have solidly tested builds against them.


Class A

  • CentOS 6 64-bit (EL6)
  • CentOS 6 32-bit (EL6)
  • CentOS 7 64-bit (EL7)
  • Ubuntu 10.04 LTS (Lucid Lynx) 64-bit
  • Ubuntu 10.04 LTS (Lucid Lynx) 32-bit
  • Ubuntu 12.04 LTS (Precise Pangolin) 64-bit
  • Ubuntu 12.04 LTS (Precise Pangolin) 32-bit
  • Ubuntu 14.04 LTS (Trusty Tahr) 64-bit
  • Ubuntu 14.04 LTS (Trusty Tahr) 32-bit
  • Ubuntu 14.10 (Utopic Unicorn) 64-bit
  • Debian stable (wheezy) 64-bit
  • Debian stable (wheezy) 32-bit
  • Windows Server 2008 R2 + Visual C++ 2012 64-bit
  • Windows Server 2008 R2 + Visual C++ 2012 32-bit
  • Windows Server 2012 R2 + Visual C++ 2013 64-bit
  • Windows Server 2012 R2 + Visual C++ 2013 32-bit
  • Mac OS X 10.8 (Mountain Lion) + XCode 5
  • Mac OS X 10.9 (Mavericks) + XCode 5
  • Mac OS X 10.10 (Yosemite) + XCode 6

Class B

  • CentOS 5 64-bit (EL5)
  • CentOS 5 32-bit (EL5)
  • SmartOS
  • ARMv6 32-bit (Linux)
  • ARMv7 32-bit (Linux)
  • ARMv8 32-bit (Linux, one day, when suitable hardware & OS is available)
  • ARMv8 64-bit (Linux, one day, when suitable hardware & OS is available)
  • FreeBSD stable/9 (maybe)
  • FreeBSD stable/10

Class C

  • MinGW 32-bit
  • MinGW 64-bit
  • POWER8

The open questions, for me at least, are:

  • What build configurations are acceptable here? Is it good enough to build debug and test that in all cases or do we need some testing of release builds somewhere?
  • Windows support: what is expected of libuv by its various consumers? We are being pushed into MSVC 2013 territory for Node.js because of the upstream adoption of C++11 by V8, but this is obviously not the case for libuv, so how much depth is needed there and are the MinGW builds still important (initially proposed in #1)?

Are there any concerns here held by the libuv team? / @indutny @saghul @bnoordhuis @piscisaureus (I'm guessing here at who constitutes libuv-core btw).

Benchmarking infrastructure

Since we already have an issue to kick-start the performance tracking (#11), I thought it might be good to kick-start one about finding ways to reliably measure this over time. I think we can all agree that the lowest possible requirement is full access and control of hardware; so the discussion is rather about what would warrant benchmarking.

My end goal would be to measure the improvement (or decrease) of how io.js performs in "real" environments. This would include being run from different OSes, hardware or emulation/virtualisation/jails. In terms of prioritisation, my hunch is that the most common scenario would be a virtualised Linux environment (KVM and Xen), followed by Linux hardware, then Windows and other derivatives (FreeBSD, Docker, ..). Since each environment requires different "warm up" phases, it might take a while to get this right. Additionally, we should probably try to reuse the build artefacts.

Using parts of are we fast yet could be a quick way to get a frontend rolling.

I think this could be a relevant topic for the upcoming build meeting.

Releases: Provide parsable catalog files of releases

Continuing nodejs/node#40 here. The summary is that to help version managers do their job with io.js we need to provide parsable catalog files detailing what releases are available (and perhaps other metadata).

Suggestion for now is to provide both a simple .txt file list in the same directory as release tarballs as well as a .json file that is extensible so we can put in additional metadata like shasums. See the comments in nodejs/node#40 for some great ideas.
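To make the shape of the proposal concrete, here's a throwaway sketch that generates both files from the dist directory; the paths and fields are illustrative only, and the actual format is exactly what this issue needs to decide.

#!/bin/sh
# Hedged sketch only: build a flat index.txt plus a minimal, extensible
# index.json from the release directories under dist/. Paths and fields are
# illustrative, not a proposed final format.
cd /home/dist/iojs || exit 1

ls -d v*/ | tr -d '/' | sort -V > index.txt

{
  echo '['
  first=true
  while read -r v; do
    $first || printf ',\n'
    first=false
    printf '  {"version": "%s", "date": "%s"}' "$v" "$(date -r "$v" +%Y-%m-%d)"
  done < index.txt
  printf '\n]\n'
} > index.json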

The build team need to come up with a proposal for how it'll work before passing it back to the TC for acceptance. Be sure to include other interested parties in the discussion.

/ @smikes @ljharb @kenperkins @alexgorbatchev @keithamus @naholyr @mostman79 @gkatsev @Fishrock123 @arb

Smoke testing with select npm packages

This discussion came up on the TC meeting today, prompted by a question on IRC, noting it here as a TODO if someone has spare energy and time to devote to starting this effort.

It would be ideal if io.js was regularly tested against a list of npm packages to test for breakage. Perhaps the list could comprise some of the most popular packages and/or some of the most interesting use-cases of Node/io.js to test for edge-cases. The tests could be simply running the test suites of specific versions against the given version of io.js.
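A first cut could be as simple as looping over a handful of popular packages and running their own test suites against a locally built io.js, along these lines; the package list and install prefix are illustrative, and many published tarballs won't ship their tests, so this would need refinement.

#!/bin/sh
# Hedged sketch: run a few packages' own test suites against a local io.js build.
# Package list and install prefix are illustrative only.
export PATH="/usr/local/iojs/bin:$PATH"

for pkg in express lodash through2; do
  dir=$(mktemp -d)
  (
    cd "$dir" &&
    npm pack "$pkg" >/dev/null &&
    tar xzf "$pkg"-*.tgz &&
    cd package &&
    npm install &&
    npm test
  ) || echo "FAIL: $pkg"
  rm -rf "$dir"
done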

Integrate w/ GitHub Releases API

So, GitHub has this "releases" feature. Every tag automatically becomes a "release" but there is also an API to add other resources to that release like our binary builds and even the Changelog.

This came up recently when people started asking for more features from the website's release section like an RSS feed, nodejs/iojs.org#79, which we would actually get for free if we were using the GitHub Releases API.

Also, because these tags/releases already exist it would be great if people found all the relevant resources there if that's the place they decide to look for releases.

Moved build efforts to iojs

In case you are following this repo and didn't notice, we've moved to the iojs org where we'll be tracking the io.js project.

Move authoritative release storage to the cloud

Currently, the single server for iojs.org is the authoritative source for builds. When a build server finishes, it directly scp's the build to iojs.org.

I'm proposing that the build outputs be stored in the cloud, and then synced back down to the iojs.org website. This would allow a quick recovery should we lose our webserver, or if we need to spin up additional capacity.

This could theoretically work in conjunction with #55.

Remove company affiliations from README.

I ran across this tweet:

I really don’t like the fact that in @official_iojs people have company names associated to them =[ https://t.co/UkuwQ0HO0T
tomgco sent Jan 20, 2015 

I think that the company affiliations send the wrong message. All these people are here of their own merit and would retain membership were they to change companies.

OS X Builds

Voxer was kind enough to donate a couple of Mac Minis for the purpose of building. We need to build out OS X oriented targets within VMs for security purposes, in a similar fashion to the Linux counterparts.

Short-term CI infra plans (discussion)

The current state of the build infrastructure can be seen here: http://jenkins.node-forward.nodesource.com/

Summary:

  • We have automatic builds for both io.js and libuv
  • "multi" builds target the initial cluster we have been building:
    • Ubuntu 10.04, 12.04, 14.04 (and 14.04 32-bit)
    • CentOS 6, CentOS 7 (RHEL 6 & 7 by proxy)
    • Windows Server 2012 with Visual Studio 2013
    • Ubuntu 14.04-based ARMv7
  • "containers" is a cluster of the 3 LTS Ubuntu version run in Docker containers, see build-containers for how they work, these are intended to be run against all new incoming pull requests from untrusted sources (mostly everyone except TC)

Currently we also have the ability to trigger builds on either the full "multi" or "containers" from any repo on GitHub but they must be triggered manually by someone who has access to Jenkins. So far that's only myself, @ryanstevens and @indutny but I can expand that list to the full TC and other trusted helpers.

Short-term goals

  • Hook up to the GitHub status API so we can run "containers" builds for every PR and report the status back to the repo inline (a sketch of the status call follows this list)
  • Hook up "multi" builds for a "trusted" list, initially comprised of the TC
  • Make it easier for TC (et al.) to request arbitrary pull requests be run against "multi" prior to merge (I'm actually thinking that a bot putting a message in a PR with a link that works for TC members would be good enough here for now).
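For the status API item above, the reporting side is a single authenticated POST per build, roughly as follows; the token, target URL and context are placeholders.

# Hedged sketch: post a build result to GitHub's commit status API.
# Token, target URL and context are placeholders.
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d '{
        "state": "success",
        "target_url": "https://jenkins.example.org/job/iojs-containers/123/",
        "description": "containers build passed",
        "context": "continuous-integration/iojs-containers"
      }' \
  "https://api.github.com/repos/iojs/io.js/statuses/$COMMIT_SHA"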

Pre-first-release goals

  • Fill out the missing parts of the CI cluster, see #18
  • Start collecting build artifacts and storing in S3 to make releases easier (I think this is the best approach for releases right now)

Mid-term goals

  • Expand the CI cluster to include some more meaningful platforms, particularly those that the libuv team want because they have broader cross-platform goals than io.js and are heavily relied upon by many other projects. See the README of this repo, #14 and #17 for further info and discussion on platforms.
  • Find Windows people who care and can help make Windows better! #15

Miscellaneous goals

  • Bootstrap a performance-focused team by setting up a basic platform for a nightly (or other regular) run of a performance suite (to be developed) to report back gut-feel numbers at a minimum, arewefastyet-style. I don't see this as a long-term responsibility of the build team; it should be a separate team focused on performance and obsessed with how best to measure and report performance over time. We just need to bootstrap it because we're the infra team and the hardware and tooling connects logically with what we're doing. I'm proposing @brycebaril as initial lead for this team. Needs further discussion from TC but I know they need something better than the ad-hoc micro-benchmarking efforts that go on now.

Beyond that, we want to fully make our own CI tooling and deprecate Jenkins, but that's a lower priority than just moving io.js forward.

use Ansible Vault to store website crt and key files

Instead of having these in .gitignore and required to be on a local machine to deploy, how about we store these in Ansible Vault?

Example:

ssl_certificate: |
  -----BEGIN CERTIFICATE-----
  blahblahblah
  -----END CERTIFICATE-----

ssl_certificate_key: |
  -----BEGIN RSA PRIVATE KEY-----
  blahblahblah
  -----END RSA PRIVATE KEY-----
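The day-to-day workflow would then look roughly like this; the file names are placeholders.

# Hedged sketch of the Ansible Vault workflow; file names are placeholders.
ansible-vault create group_vars/www/secrets.yml   # paste the cert/key YAML in
ansible-vault edit group_vars/www/secrets.yml     # rotate or update later
ansible-playbook site.yml --ask-vault-pass        # supply the vault password at deploy time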

Docker Webhook

CI servers should respond to a notification that the iojs/build Docker image has been pushed to the Docker registry. This will keep our containers on the CI servers in sync with the Docker registry and in turn with our GitHub repos.

What is needed:

Implement a webhook endpoint for the repository and have it initiate a docker pull iojs when hit. Then register the endpoint at: https://registry.hub.docker.com/u/iojs/build/settings/webhooks/

Should work on >1 server.
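Whatever fronts the HTTP endpoint (the existing github-webhook tooling could probably be reused), the handler each CI server runs when the webhook fires is small, something like the following; the image tag and the cleanup step are assumptions.

#!/bin/sh
# Hedged sketch: run on each CI server when the Docker Hub webhook fires.
# The image name comes from the registry URL above; the rest is an assumption.
docker pull iojs/build:latest

# Prune now-untagged layers so repeated pulls don't eat the disk.
docker images -q --filter "dangling=true" | xargs -r docker rmi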

Picking the right CI tooling

Today Node is built using Jenkins.

Given the scale of what we want to do with Node, should we take this opportunity to consider a new CI tool?

If yes, we should generate a list of candidates and discuss.

Alpine Linux / Docker Build

It would really be appreciated if some help could be rendered in getting an Alpine Linux image for iojs passing the test suite.

Ideally there would be two build images: one with just node, and another with everything needed for node_gyp and nan to run.

Also, would be nice to see tests for building common binary modules (sqlite3, expat and a few others).

TC Seat

This just came up in the TC meeting. It would be a good idea for the build team to assign someone to sit on the TC in a "non-voting" capacity (not that we've actually brought anything to a vote yet, we've been finding a pretty easy consensus most of the time).

We'll leave it to you to figure out who that is and then we can work on finding a time that we can schedule that works for Europe, USA and wherever the build person is :)
