
Atlas



Atlas is a tool for automating the deployment, configuration, and maintenance of DevOps engineering systems. It can be run interactively from the command line, or can be run entirely unattended as part of an Azure DevOps (formerly known as VSTS) build or release definition. An Atlas workflow revolves around making the appropriate REST API calls to Azure DevOps, Active Directory, and Azure Resource Manager.

There is a REST API for everything. With Atlas you can make the configuration of everything from CI/CD to production servers consistent, reproducible, and reviewable by capturing them as source controlled templates.


Install

Atlas is currently under active development.

Daily builds of the Atlas CLI are available as self-contained downloads:

Platform Master Branch (0.1) Latest Build
Windows x64 Download latest zip Zip Status
Linux x64 Download latest tar.gz Tarball Status

If you want to use a package manager:

Install global tool (Windows or Linux)

  1. If dotnet --version isn't >= 2.1.300 then install or upgrade .NET Core
  2. dotnet tool install -g atlas-cli --add-source https://aka.ms/atlas-ci/index.json

Getting Started

An existing workflow can be executed directly from a public web server. You can run any of the examples in this repository with the atlas deploy command:

atlas deploy https://github.com/Microsoft/Atlas/tree/master/examples/101-messages

Creating a new workflow

To create a new workflow, from a console window execute mkdir demo to create a new subfolder.

Add a demo/workflow.yaml file to declare operations:

operations:
- message: Running my workflow...
- message: "{{ info.greeting }}, {{ info.name }}!"
- message: "All values: {{ json . }}"

Add a demo/values.yaml file to declare defaults:

info:
  greeting: Hello
  name: World

Run it!

> atlas deploy demo --set info.name=Atlas

Atlas

  - Running my workflow...

  - Hello, Atlas!

  - All values: {"info": {"greeting": "Hello", "name": "Atlas"}}

Exploring the examples

You can also clone the Atlas GitHub repo to explore the examples and see the kinds of operations Atlas can perform.

git clone https://github.com/Microsoft/Atlas.git
cd Atlas/examples
atlas deploy 101-messages

Features

  • YAML or JSON syntax to define workflows and input parameters

  • Handlebars template engine enables workflows to be highly flexible

  • JMESPath provides query language for inputs, outputs, and data transformations

  • Works cross-platform as a .NET Core executable

  • Invokes any Azure RM, Azure AD, or Azure DevOps REST API

  • From the command line, REST API calls are secured via interactive Active Directory login, similar to az login

  • From an Azure DevOps build or release definition, REST API calls are secured via Azure DevOps service connection to Azure

  • Renders output values and additional templated files to a target folder

  • Operations support conditional execution, retries, and looping, and can throw detailed exceptions

  • Extensively detailed log output and safe --dry-run support simplify troubleshooting

  • Values which are declared secret are redacted (replaced with xxxx) when written to console output and log files
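Several of these features appear together in the examples later in this document. As a sketch only, a single operation combining a message, a REST request, and a JMESPath output query might look like the following; the request file path is illustrative, and the property names are copied from examples shown elsewhere on this page rather than from official documentation:

```yaml
operations:
- message: Listing subscriptions                # message text can use Handlebars templates
  request: apis/azure/subscriptions-list.yaml   # illustrative path to a request yaml file
  output:
    subscriptions: (result.body.value)          # JMESPath query over the response
```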

Limitations

  • Does not allow arbitrary code or command-line execution in order to limit what can be done to the machine executing a workflow

  • Currently designed for Active Directory authentication for Azure and Azure DevOps resources

  • Not yet available as a class library package

Goals

  • Packing workflows into zip or tarball archive files, publishing and executing workflows from feed locations

  • Establishing a repository for collaboration on common in-progress and stable workflows, and default location for common workflows

  • Shared workflows for larger scenarios, e.g. ASP.NET Core services on Kubernetes with Azure DevOps CI/CD, Azure VM clusters, Azure DNS, ATM, and ALB for geo-redundant load balancing and service routing


System Requirements

Running Atlas

Atlas runs on Windows and Linux. Windows 10 and Ubuntu 16.04 are the tested environments.

Building Atlas from source

Prerequisites:

  • Required: Download and install the .NET Core SDK
  • Optional: Install or update Visual Studio 2017
  • Optional: Download and install Visual Studio Code

To clone and build from source, run the following commands from a console window:

git clone https://github.com/Microsoft/Atlas.git
cd Atlas
build.cmd (Windows) *or* ./build.sh (Linux)

Running Atlas from source

To run locally from source, run the following commands:

dotnet restore
./atlas.sh

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Reporting Security Issues

Security issues and bugs should be reported privately, via email, to the Microsoft Security Response Center (MSRC) at [email protected]. You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Further information, including the MSRC PGP key, can be found in the Security TechCenter.


Atlas Issues

Add the ability to run a workflow from an https url

Simplest case is a path to a folder on a static file server, like: https://raw.githubusercontent.com/Microsoft/Atlas/master/examples/202-throwing-errors

Should recognize the related folder URL https://github.com/Microsoft/Atlas/tree/master/examples/202-throwing-errors and change it to the raw form automatically.

Maybe it should also recognize when the path ends in /workflow.yaml and remove it for convenience.

Example 402-cdn-with-storage-account should move to build folder

The example, which creates a storage account and a CDN, uploads files, etc., has become very specific to the Atlas release pipeline.

It should probably be moved to a build subfolder instead. Example 402 can then be simplified to show blob uploading and CDN creation without being an Atlas release mechanism.

Need an operation-level foreach property

Should work like values but anything which evaluates to an array would cause the operation to run once for each array item.

Example usage:

operations:
- message: Listing subscriptions
  request: apis/azure/subscriptions-list.yaml
  output:
    subscriptions: (result.body.value)

- foreach:
    azure:
      subscription: (subscriptions[].subscriptionId)
  message: (['Listing resource groups in ', azure.subscription])
  request: apis/azure/resourcegroups-list.yaml
  output:
    results:
      (azure.subscription):
        resourceGroups: (result.body.value)

Support sub-workflows as an operation type

It becomes difficult very quickly to organize a large workflow into stages with child templates. Specific problems:

  • The syntax for including an indented workflow fragment is complicated
    {{# indent depth=4 }}
    {{> child-template-name.yaml }}
    {{/ indent }}
  • The same working memory is used for the whole workflow, making it possible to consume values or produce output unexpectedly
  • Troubleshooting syntax errors becomes more difficult because the overall result becomes a single, large workflow file.
  • Individual stages can't be run directly
  • Workflow fragments often have re-usable logic. You end up with a situation where the same fragment file is copied into several locations, leaving you with the problem of keeping the copies in sync.

See Sub-workflows design docs for problem statement and solution proposal

Add a {{ yaml }} helper

Should work like the {{ json [expr] }} helper, where the first non-null expr evaluation is serialized.

Should also accept an option like {{ yaml [expr] indent=10 }} because yaml is sensitive to indentation.

The very first line will probably be an issue... Might not want to indent the very first line. Or might want to include a leading newline character to ensure every non-empty line has the same indentation.

Leading newline might be the way to go, actually, because that will also enable the helper to be used on a single line. That is to say, both of the following examples would have the same effect.

something:
  another: 
    {{ yaml foo indent=4 }}  
something:
  another: {{ yaml foo indent=4 }}  

Allow for custom server certificate validation

For example, when invoking a REST API on an AKS cluster, the apiserver endpoint certificate might need to be validated using the cluster-specific self-signed certificate authority.

Load workflow metadata from readme.md yaml

Based on the autorest technique of using the readme.md file as documentation, description, and packaging metadata.

author: string
license: path
dependencies:
  swagger: { <mount-path> : [<swagger-reference>] }
  packages: { <mount-path>: <package-reference> }

The log files on build agents have ANSI color codes in them

On a Linux VSTS build agent, the ANSI color codes in stdout are not being interpreted and are showing up as raw escape sequences like `ESC[36m`.

Should figure out whether the latest .NET Core offers a better cross-platform way to generate color output than writing ANSI codes to stdout.

Documentation needed!

One area that has been highlighted for getting started is describing the various parts and files of a workflow and how they work together.

Need to be able to jmespath query values and output keys as well

When looping over data and adding values to a hash table it's very tricky to build that property-object relationship using to_object.

Better would be to be able to jmespath query property names in the same way as values.

  output:
    results:
      "(azure.subscription)":
        resourceGroups: (result.body.value[])

The query will need to result in a single string - if it is anything complex there's no way to know how to map what parts of the complex key should associate with what parts of the complex value.

Oh... I take that back. If the key results in an array of strings and the value results in an equally sized array, it would be possible to use that information to turn it into a hash object.

  output:
    result:
      (result.body.value[].id): (result.body.value[])

Use "Visual Studio" client id

The devops REST API calls should probably use the Visual Studio client id 872cd9fa-d31f-45e0-9eab-6e460a02d1f1 instead of the Atlas client id.

Reading 30 MB files as json binary properties is slow

Maybe this is an optimization problem, or maybe it's a bad idea to read several dozen MB of binary files as a base64-encoded json property in order to upload it to blob storage.

We should consider adding a way to declare the body of an api as coming from a file instead of being declared as a binary json property. Something like a top level property like bodyFile or input.
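As a sketch only, the proposed property might look like this; the method, URL, file paths, and the bodyFile key itself are all hypothetical, since the proposal above only suggests the idea:

```yaml
# hypothetical request yaml — bodyFile is the proposed, not-yet-implemented property
method: PUT
url: https://example.blob.core.windows.net/container/large-file.bin
bodyFile: assets/large-file.bin   # streamed from disk instead of a base64 json property
```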

Should also add a command-line switch named -i|--input in order to provide multiple base paths. Reading files which are not part of the template folder would be relative to one of those paths. It should also be impossible for a workflow to read any file that isn't a sub-path of an --input directory.

Using REST API response headers and status

Probably means the response body shouldn't be the root data in a request operation's output:

Instead of (@) the body would be queried at (result.body), and (result.headers) and (result.status) would also be available.

Which should also have a nice side-effect of leaving the rest of working memory available as well.
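Under that proposal, an operation's output could reference all three parts of the response. The following is a hypothetical sketch of the resulting shape; the request path and header name are illustrative:

```yaml
# hypothetical output shape under the proposed change
- message: Listing resource groups
  request: apis/azure/resourcegroups-list.yaml
  output:
    resourceGroups: (result.body.value)                      # body moves under result.body
    rateLimit: (result.headers."x-ms-ratelimit-remaining")   # headers become available
    ok: (result.status == `200`)                             # status becomes available
```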

Update references to VSTS in README.md

The README.md text was written before the Azure DevOps branding was announced. There are a lot of places where it should be updated.

Some things might be a little tricky... Distinguishing between Azure DevOps the product, and DevOps the role... Also, if there are Azure DevOps API and Azure Active Directory API, what do you call the Azure API that's distinct? Azure Resources API? or Azure Resource Manager API?

TestResults should be published when build fails

In the build-ci.yaml file the script block has set -e which causes the step to fail when the first non-zero exit code is returned from a command.

This means the following line will correctly cause the build to fail if any unit tests fail:

docker run --name ${test_container_name} atlas-cli-test:0.1.$(Build.BuildId)

Unfortunately the following line must be run in order for the PublishTestResults task to be able to publish the trx files, which lets the build summary show test result details.

docker cp ${test_container_name}:/app/test/Microsoft.Atlas.CommandLine.Tests/TestResults/. $(Build.ArtifactStagingDirectory)/TestResults

It would be nice if the script ran the docker cp and docker rm commands even when the docker run failed due to unit tests.

Isolate token caches by input parameters

Token cache files should be isolated (different file names on disk) by various auth parameters.

  • Different tenant (or authority) values should mean there is no overlap between credentials used.

  • A different client app id should also mean the tokens have no overlap.

  • The resource (or scope) values probably shouldn't have isolated token stores...

Need some better logging switches

Should get ILogger welded into the console app for debug output. Consider: --verbose should control whether ILogger output goes to the console, and --debug should control whether a logs folder is written to the _output folder.

Atlas build task should take connection info for Azure DevOps orgs

The Azure task that can run in DevOps pipelines should be able to take several Azure resource manager and Azure DevOps service endpoints.

The task should call atlas account add several times so that the credentials are available during execution, and call atlas account clear afterwards to remove them.

It should also set the ATLAS_CONFIG_DIR environment variable in a way that ensures the credentials won't be used by other processes running on the machine.

(Edit: strikeout text is either done or moved out into different issues.)

Tool package ID is a little odd

Why is the tool package named dotnet-atlas when the tool is called atlas and doesn't actually execute as part of the dotnet CLI? As far as I can tell, atlas isn't limited to .NET projects and just happens to be written in .NET (awesome!). It's probably a good idea to use a name like atlas-cli if possible (and better to change this now than after a 1.0 ships ;))

Enable workflows to catch exceptions

I am facing a case where I am looping over a list of values (VSTS packages) and performing an API call on each (promoting a certain package version, in my case). I would like to ignore the errors thrown by that API (when the package/version combination doesn't exist) and proceed with the repeat loop instead.

Workflow should generate request yaml files from swagger

Declaring request yaml can be inconvenient. It would be better to declare swagger file references which will auto-generate a request yaml file per operationId.

generate:
  swagger:
    {apis/path/prefix}: 
    - paths/to/swaggers.json
    - paths/to/folders[/readme.md] 

Swagger has a convention where title is the client name and operationId is the method name, so the generated files will have the effective path:

api/path/prefix/title/operationId.yaml

What is the current status of Atlas?

Hello everybody,

I'd like to ascertain the current status of Atlas. The term Atlas is used in several contexts in Azure (for instance, MongoDB Atlas and the atlas module for the azure-maps-control package), which makes it quite difficult to find consistent and updated documentation. Besides, considering that the newest commits are 4 years old, it makes me wonder about Atlas' usage.

Is it worth using Atlas for deploying something in Azure? Is it "abandoned software"? Is it supposed to be still available, or is it deprecated?

By no means are these questions meant to criticize the project itself. My intention is to verify the feasibility of using Atlas in the Microsoft Azure project (it was proposed as a requirement).

Best regards,
phanxen

Support jmespath in template operation's write property

it would be nice in an operation like this

- template: example-template.yaml
  write: example-output.yaml

if the write filename could be late-bound to variables (like the message property can be):

- template: example-template.yaml
  write: ( ['example-', thing.name, '.yaml'] )

Values inconsistent when using model.yaml

The render-time values can be incomplete when a model.yaml file is used to normalize defaults...

  • The model.yaml should be merged onto incoming values as an overlay.

  • The resulting merged values should be used for the initial template rendering as well as the initial runtime values

Ability to run workflow from zip file

The path to the workflow location is special if it ends in .zip.

Commands should operate on those paths exactly as if the file were an unzipped folder on disk.

Paths to a zip on disk, and https urls with paths ending in .zip, should both operate this way.
