circleci-archived / circleci-orbs
The source code for some of the orbs published by CircleCI
Home Page: https://circleci.com/orbs/registry
If the awscli is already present (i.e. aws is in PATH), the Install AWS CLI step will still check for the presence of pip and sudo.
The step should see that the command is present and exit successfully.
Many Maven plugins use extraneous artifacts for testing, processing, etc. It would be nice to specify a qualifier that gets tagged onto the end of the key, to have separate caches per project.
We need a way for open-source contributors' PRs to build on CircleCI without exposing our secrets.
circleci/heroku 0.0.6
I’d like to replace custom steps with orb usage.
The deploy-via-git job supports a maintenance-mode param, which runs heroku maintenance:on before pushing the code.
I'm not using this job because I need to run a command provided by the slack orb, and I think params in the workflow section make things a little messy, so I kept my original jobs and replaced steps with orb commands.
The deploy-via-git command doesn't have a maintenance-mode param, so I need to add heroku/install and keep the manual call to heroku maintenance:on.
Jobs and commands in the heroku orb should have the same configurability.
The test result path says it defaults to the Surefire path, but that's only true when calling the job. We should add it to the command as well.
https://github.com/CircleCI-Public/circleci-orbs/blob/master/src/maven/orb.yml#L102
At least [email protected], using python:3.7.3-alpine3.9. Probably others as well.
#!/bin/sh -eo pipefail
if [[ $(command -v pip) == "" ]]; then
    echo "PIP is required to install AWS CLI and is not available."
    exit 1
else
    if [[ $(command -v sudo) == "" ]]; then
        echo "SUDO is required to install AWS CLI and is not available."
        exit 1
    else
        if [[ $(command -v aws) == "" ]]; then
            sudo pip install awscli
        else
            echo "AWS CLI is already installed."
        fi
    fi
fi
sh: : unknown operand
sh: : unknown operand
AWS CLI is already installed.
#!/bin/sh -eo pipefail
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID --profile default
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY --profile default
/bin/sh: aws: not found
Exited with code 127
The scripts should work properly.
This is being caused by using bashisms (both -o pipefail and [[ .. ]]) while specifying /bin/sh in the shebang. Any /bin/sh-compatible shell (such as BusyBox ash in Alpine) fails to parse the script correctly.
Edit: For anyone stumbling upon this issue later, the script being referred to above is from this repo: https://github.com/circleci-public/aws-cli-orb
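A POSIX-compatible sketch of the same check (illustrative only, not the orb's actual fix) avoids both bashisms, so it parses identically under Alpine's ash and under bash:

```shell
#!/bin/sh
# Portable sketch: POSIX sh has no 'pipefail' option and no '[[ ]]' syntax.
# have() works the same in dash, BusyBox ash, and bash.
have() { command -v "$1" >/dev/null 2>&1; }

install_awscli() {
  if have aws; then
    echo "AWS CLI is already installed."
    return 0
  fi
  if ! have pip; then
    echo "PIP is required to install AWS CLI and is not available." >&2
    return 1
  fi
  if have sudo; then
    sudo pip install awscli
  else
    pip install awscli  # already root, or a rootless environment
  fi
}
```

The orb step would then just call install_awscli; note this sketch also only requires sudo when an install is actually needed.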
Hey @ndintenfass et al!
I've been talking more internally, and I think having world readable orbs is going to be a dealbreaker for us. But while playing around with helm I came up with a proposal I'd like to run by you all to improve inline orbs, which would solve our issues.
Allow inline orbs to be defined in an orbs/ folder inside .circleci/. The orbs would be defined exactly the same way as they are now, but referenced as inline orbs. This would allow us to include a shared orbs repo as a submodule and share orbs effectively across the organization. It would also (hopefully) be an easy change on your end, since you don't have to worry about making per-org orbs repositories.
The inspiration for this comes from Helm packages. A Helm package is created by merging a bunch of templates (stored in a templates/ subdirectory) together and supplying the templates with an associated values.yml file. From our perspective, it's a very effective solution for config sharing, because we as a central SRE team can publish templates that teams can include just by copying a file. Keeping the convention of "don't edit templates" or "don't edit the orbs/ folder" is much easier than providing a set of inline orbs for teams to copy and paste into their circle.yml.
we need 'em
A PR merged to staging should conclude with a CI job that opens a new PR from staging => master & copies the title & body of the original merged PR to the new PR
1.0.6
https://circleci.com/gh/AIWIP/printt-cloud-print/596
It should do whatever it was trying to do without crashing
Customers need to be able to override docker version or other aspects of executor.
https://discuss.circleci.com/t/overriding-executor-for-job-from-orb/27136
It's common, especially for Gradle Kotlin DSL users, to store dependency versions outside the build.gradle.kts file. There should be a way to specify files containing those versions for the with_cache command.
This repository is a monorepo of orbs and we no longer want that. The circleci/gradle
orb should be broken out into its own repository.
A great way to do this is to use the Orb Starter Kit (OSK). The OSK will help you create the scaffolding needed for an orb. Then, the orb source from within this repo can be copied over into your scaffold, eventually creating a brand new orb source repository.
You can then link here to your new repository containing the orb.
Usage of sudo was assumed; it wasn't installed, so the install command failed.
Whether the orb actually needs sudo to run the CLI needs to be investigated.
.circleci/[email protected]
Orb step Configure AWS Access Key ID fails, even though step Install AWS CLI succeeds.
End of output of Install AWS CLI:
Running cmd: /usr/bin/python virtualenv.py --no-download --python /usr/bin/python /root/.local/lib/aws
Running cmd: /root/.local/lib/aws/bin/pip install --no-cache-dir --no-index --find-links file:///root/awscli-bundle/packages/setup setuptools_scm-1.15.7.tar.gz
Running cmd: /root/.local/lib/aws/bin/pip install --no-cache-dir --no-index --find-links file:///root/awscli-bundle/packages awscli-1.16.204.tar.gz
You can now run: /root/bin/aws --version
/root/project
Output of Configure AWS Access Key ID:
#!/bin/bash -eo pipefail
aws configure set aws_access_key_id \
$AWS_ACCESS_KEY_ID \
--profile default
/bin/bash: aws: command not found
Exited with code 127
Config:
version: 2.1
orbs:
  aws-s3: circleci/[email protected]
jobs:
  build:
    docker:
      - image: node:latest
    [...]
    steps:
      [...]
      - run:
          name: Build
          command: npm run build
      - aws-s3/sync:
          from: dist
          to: 's3://[REDACTED]'
          arguments: '--delete'
It should find the aws command if the step just before says it's installed correctly.
0.0.8
The mvn command is hard-coded in the with_cache command.
I should be able to override the default Maven command (I am using the Maven wrapper, ./mvnw, instead of mvn).
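One way the orb could support this (a sketch: the maven_command parameter name and the step list here are illustrative, not the orb's actual source) is a string parameter defaulting to mvn:

```yaml
commands:
  with_cache:
    parameters:
      maven_command:
        description: "Maven invocation to use (e.g. ./mvnw for the wrapper)"
        type: string
        default: mvn
    steps:
      - restore_cache:
          key: maven-{{ checksum "pom.xml" }}
      - run: << parameters.maven_command >> dependency:go-offline
      - save_cache:
          key: maven-{{ checksum "pom.xml" }}
          paths:
            - ~/.m2
```

Users on the Maven wrapper would then pass maven_command: ./mvnw when invoking the command.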
Inside the install-package-manager command we have sudo npm i npm, which I think should have the --global flag?
There is no support. There should be support.
make one
This repository is a monorepo of orbs and we no longer want that. The circleci/codecov-clojure
orb should be broken out into its own repository.
A great way to do this is to use the Orb Starter Kit (OSK). The OSK will help you create the scaffolding needed for an orb. Then, the orb source from within this repo can be copied over into your scaffold, eventually creating a brand new orb source repository.
You can then link here to your new repository containing the orb.
I noticed that this repository contains an Orb for CodeCov, but I don't believe it's related to the one published here - https://circleci.com/orbs/registry/orb/codecov/codecov
I couldn't actually find this repository's CodeCov Orb on the registry.
Is this repository's Orb maintained, or supported?
we should encourage this new usage pattern as much as possible to boost adoption by partners, etc.
https://circleci.com/docs/2.0/reusing-config/#environment-variable-name
since orbs are used in other orbs, and since sometimes things like parameter names/types, etc., change between releases, it would be very helpful to, for example, in evaluating the previously linked PR, be able to quickly, on GitHub, compare aws-cli-orb 0.0.1 to v0.1.4, to rule out breaking changes
while this is possible from within the circleci CLI, i think a nice way to do it in the monorepo (or in individual orb repositories, & this is something i've already been thinking about for orbs repos' best practices) would be to use git tags/releases in concordance w/ orb prod releases
we can get the new orb release version at the very end of the master publishing flow, after we've promoted the latest commit-dev release to prod; and from there, we can retroactively tag/release the same version on GitHub, either in a couple steps after the main prod-promote step, or else in a new CCI job
this will make a git diff between versions super simple & will make the previously mentioned scenario much more easily navigable
0.0.9
Any arguments passed to aws-code-deploy/create-deployment-group that are intended for group creation are also passed to the AWS CLI when calling aws deploy get-deployment-group, which makes the check for group existence fail every time, because those options are not recognized by get-deployment-group. Then the step tries to create the group again and fails every time, because a deployment group with the specified name already exists.
CIRCLE_WORKFLOW_ID=cf22d16d-2421-4a55-9ae0-2327f7b11fd6
Example below is adapted from one of the steps from workflow above
ensure-deployment-created
set +e
aws deploy get-deployment-group \
  --application-name my-app \
  --deployment-group-name test \
  --ec2-tag-filters Key=environment,Value=test,Type=KEY_AND_VALUE \
  --auto-rollback-configuration enabled=true,events=DEPLOYMENT_FAILURE
if [ $? -ne 0 ]; then
  set -e
  echo "No deployment group named test found. Trying to create a new one"
  aws deploy create-deployment-group \
    --application-name my-app \
    --deployment-group-name test \
    --deployment-config-name CodeDeployDefault.OneAtATime \
    --service-role-arn $CODEDEPLOY_ROLE_ARN \
    --ec2-tag-filters Key=environment,Value=test,Type=KEY_AND_VALUE \
    --auto-rollback-configuration enabled=true,events=DEPLOYMENT_FAILURE
else
  set -e
  echo "Deployment group named test already exists. Skipping creation."
fi
Output:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: --ec2-tag-filters, Key=environment,Value=test,Type=KEY_AND_VALUE, --auto-rollback-configuration, enabled=true,events=DEPLOYMENT_FAILURE,
No deployment group named test found. Trying to create a new one
An error occurred (DeploymentGroupAlreadyExistsException) when calling the CreateDeploymentGroup operation: An Deployment Group with the name test already exists for this application.
Exited with code 255
Either the arguments should be split in two (arguments to get-deployment-group and create-deployment-group separately), or they should be used only for create-deployment-group and omitted from the get-deployment-group call.
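The second option could be sketched like this (ensure_deployment_group is a hypothetical helper, not the orb's actual code; user-supplied options are forwarded only to the creation call):

```shell
# Sketch: user-supplied options apply only to create-deployment-group;
# get-deployment-group receives just the identifying flags.
# Usage: ensure_deployment_group APP GROUP [create-only options...]
ensure_deployment_group() {
  app=$1; group=$2; shift 2
  if aws deploy get-deployment-group \
       --application-name "$app" \
       --deployment-group-name "$group" >/dev/null 2>&1; then
    echo "Deployment group $group already exists. Skipping creation."
  else
    echo "Creating deployment group $group"
    aws deploy create-deployment-group \
      --application-name "$app" \
      --deployment-group-name "$group" \
      --service-role-arn "$CODEDEPLOY_ROLE_ARN" \
      "$@"  # e.g. --ec2-tag-filters, --auto-rollback-configuration
  fi
}
```

Because the extra options never reach get-deployment-group, the existence check no longer fails on unknown arguments.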
The Maven orb forces people to use circleci/openjdk:8-jdk-node, which means anyone using a different version of Java must call commands directly.
This repository is a monorepo of orbs and we no longer want that. The circleci/heroku
orb should be broken out into its own repository.
A great way to do this is to use the Orb Starter Kit (OSK). The OSK will help you create the scaffolding needed for an orb. Then, the orb source from within this repo can be copied over into your scaffold, eventually creating a brand new orb source repository.
You can then link here to your new repository containing the orb.
This doesn't allow using passwords with spaces or other "strange" characters.
1.0.10
Experiencing here a similar issue to #156, but the syntax of our config matches the documentation. Running aws-s3/sync with overwrite: true causes the following error:
/bin/bash: line 5: --delete: command not found
Exited with code 127
The failed job is here: https://circleci.com/gh/thebloggerprogramme/frontend/23
Our top-level config looks like this:
version: 2.1
orbs:
  aws-s3: circleci/[email protected]
  heroku: circleci/[email protected]
Here's the config for that job:
deploy-staging:
  docker:
    - image: circleci/python:2.7
  working_directory: ~/build
  steps:
    - checkout
    - attach_workspace:
        at: ~/build
    - aws-s3/sync:
        from: dist/assets
        to: 's3://mybucket/test'
        arguments: |
          --acl public-read \
          --cache-control "max-age=86400"
        overwrite: true
Looking at the output in the job, it does look like a problem with the way the command is being constructed:
#!/bin/bash -eo pipefail
aws s3 sync \
dist/assets s3://thebloggerprogramme-staging/test \
--acl public-read \
--cache-control "max-age=86400"
\
--delete
CircleCI support initially reported that our config had two trailing backslashes after the --acl public-read option, like this:
deploy-staging:
  docker:
    - image: circleci/python:2.7
  working_directory: ~/build
  steps:
    - checkout
    - attach_workspace:
        at: ~/build
    - aws-s3/sync:
        from: dist/assets
        to: 's3://mybucket/test'
        arguments: |
          --acl public-read \\
          --cache-control \"max-age=86400\"
        overwrite: true
but I can confirm that the config.yml that was checked in had a single trailing slash, as you can see in the job config above.
The sync works fine without the overwrite parameter. After spending a little time looking into this, it turns out we don't need the --delete option, but I thought it would be useful to raise the issue anyway.
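For what it's worth, the breakage comes from appending --delete after the user's trailing backslash-newline. A sketch of a more defensive construction (build_sync_cmd is a hypothetical helper, not the orb's actual code) normalizes the arguments first:

```shell
# Sketch: collapse the multi-line arguments parameter onto one line and
# strip any trailing backslashes before appending --delete, so a user's
# trailing line continuation can't orphan the flag onto its own line.
build_sync_cmd() {
  from=$1; to=$2; args=$3
  args=$(printf '%s' "$args" | tr '\n' ' ' | sed 's/[\\ ]*$//')
  printf 'aws s3 sync %s %s %s --delete' "$from" "$to" "$args"
}
```

With this normalization, an arguments value ending in a backslash still yields a single valid command line.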
The codecov orb in this repo doesn't appear to be published at all. And the codecov/codecov orb will not work on Alpine, even if bash is installed, because the command still runs with shell sh; so even though it runs bash, the use of <( ) causes it to fail.
I can't find a repo for codecov/codecov, so I can't send a patch; maybe simply adding shell: bash to the command would do it.
This circleci/codecov orb works fine, so it would be great if it were published.
We should be able to use git tags + workflow filters to sometimes automatically publish minor or major releases if folks tag a commit in a particular way. This would be awesome, so we don't just continually push patch releases forever.
Would you guys accept a PR for circleci-cli/install that can optionally take a parameter named token? That parameter would be set in ~/.circleci/cli.yml. It would basically do what you have written here.
The Install AWS CLI script checks for the presence of sudo in PATH, even if already running as root.
The script should skip checking for sudo if already running as root.
Pushing a build to an app whose Heroku repository is ahead of (or missing any commits from) the branch being pushed fails, as git requires a force push to overwrite the commits in Heroku's repository.
Any pushed branch should be deployable to Heroku regardless of whether it is up to date with the previously pushed commits.
This brings up a question though: should this be configurable via a parameter in config? Something like force: true/false
? I can put up a PR pretty quick once that is answered. I currently lean towards always force pushing, since the real history of the repo is likely stored elsewhere anyways.
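If it does end up as a parameter, it might look like this sketch (the force parameter name and the step shown are illustrative, not the orb's actual source):

```yaml
jobs:
  deploy-via-git:
    parameters:
      force:
        description: "Force-push to the Heroku remote"
        type: boolean
        default: true
    steps:
      - when:
          condition: << parameters.force >>
          steps:
            - run: git push --force https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP_NAME.git $CIRCLE_BRANCH:master
      - unless:
          condition: << parameters.force >>
          steps:
            - run: git push https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP_NAME.git $CIRCLE_BRANCH:master
```

Defaulting force to true matches the reasoning above that the repo's real history lives elsewhere.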
This repository is a monorepo of orbs and we no longer want that. The circleci/aws-s3
orb should be broken out into its own repository.
A great way to do this is to use the Orb Starter Kit (OSK). The OSK will help you create the scaffolding needed for an orb. Then, the orb source from within this repo can be copied over into your scaffold, eventually creating a brand new orb source repository.
You can then link here to your new repository containing the orb.
There doesn't seem to be any documentation on what an orb actually is. Looking at this repo, I still don't really have a clue what it is meant to do. Can someone please add a quick description of what an orb is and what it is meant for?
0.0.9
settings_file is required, even though it should be optional.
It should not be required.
The parameters table does not make sense: it says that settings_file is "(optional)" in the Description column but then marks it with a tick in the Required column.
From https://circleci.com/orbs/registry/orb/circleci/maven
That said, thanks for the Orb - love your work!
This repository is a monorepo of orbs and we no longer want that. The circleci/codecov
orb should be broken out into its own repository.
A great way to do this is to use the Orb Starter Kit (OSK). The OSK will help you create the scaffolding needed for an orb. Then, the orb source from within this repo can be copied over into your scaffold, eventually creating a brand new orb source repository.
You can then link here to your new repository containing the orb.
0.0.9
Manually stopped a deployment using CodeDeploy in AWS and it led to a successful deployment result in CircleCI for the deploy-bundle command.
Manually stopping a deployment using CodeDeploy should lead to a failed deployment result in CircleCI for the deploy-bundle command.
If the Gradle Kotlin DSL is used, with_cache will not be able to generate the cache key, since the Gradle file will be named build.gradle.kts. This file name should be supported as an alternative to build.gradle without additional configuration, because the Kotlin DSL is an official part of Gradle.
2.1
aws-s3 1.0.8
When overwrite is set to true, it throws an error:
aws s3 sync
bucket s3://aviv-ci-test
--acl public-read
--cache-control "max-age=86400"
--delete
upload: bucket/build_asset1.txt to s3://aviv-ci-test/build_asset1.txt
/bin/bash: line 3: --cache-control: command not found
Exited with code 127
Expected behavior: delete the previous files on the bucket and sync.
circleci/[email protected]
I can't figure out how to add the slack/approval-notification job into my workflow if it's not the very first one. I see how I can get this workflow:
slack/approval-notification -> hold_for_approval -> deploy_prod
But, what I want is this workflow:
build -> test -> deploy_dev -> slack/approval-notification -> hold_for_approval -> deploy_prod
I think the slack/approval-notification job needs the ability to specify a list of things it requires before it triggers. Perhaps it already exists, but it certainly isn't clear from the documentation how you would use it!
Hey circle!
For the docker-publish orb, I'm wondering what the best / easiest way is to tag a docker container with some string produced at runtime. For example I'd want to build the image (with latest)
docker build -t user/repo .
Then derive a tag by pinging the software inside:
DOCKER_TAG=$(docker run user/repo --version)
Then tag the container:
docker tag user/repo:latest user/repo:$DOCKER_TAG
and then push both of them
docker push user/repo
I have a recipe to do this the old way (without an orb) but I'm hoping there is a way to do it with orbs because I like them a lot. :)
This line
git push https://heroku:<< parameters.api-key >>@git.heroku.com/<< parameters.app-name >>.git << parameters.branch >>
(https://github.com/CircleCI-Public/circleci-orbs/blob/master/src/heroku/orb.yml#L65) should rather be:
git push https://heroku:<< parameters.api-key >>@git.heroku.com/<< parameters.app-name >>.git << parameters.branch >>:master
I believe.
I wasn't certain whether this should be added to this project or the CLI project; this seemed appropriate.
When running the circleci cli tool to look at the official orbs that have been published, I noticed that the source returned from the circleci/gradle orb is incorrect.
I ran circleci orb source circleci/[email protected] and the source returned appears to be the config.yml for the CLI project and has nothing to do with gradle. (https://github.com/CircleCI-Public/circleci-cli/blob/master/.circleci/config.yml)
aws-s3@volatile
Nothing, but Python 2 is about to be retired.
Upgrade the aws-s3 base image to Python 3?
https://github.com/CircleCI-Public/circleci-orbs/blob/5f4aba3265497162aca0461bcce7053d0a9f7e1a/src/aws-s3/orb.yml#L17
Node's orb should rely on package manager lockfiles instead; if you choose yarn: true, it should use yarn.lock as the cache-key for the with-cache command.
(via https://twitter.com/tunnckoCore/status/1072932605979447296)
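As a sketch (the key name and paths here are illustrative, not the orb's actual interface), a lockfile-derived cache could look like:

```yaml
steps:
  - restore_cache:
      key: node-deps-{{ checksum "yarn.lock" }}
  - run: yarn install --frozen-lockfile
  - save_cache:
      key: node-deps-{{ checksum "yarn.lock" }}
      paths:
        - ~/.cache/yarn
```

Keying on the lockfile means the cache is invalidated exactly when the resolved dependency set changes.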
circleci/[email protected]
While trying to understand why gradle/with_cache was never using any cache, I found that the problem was in the checksum generation. These two builds (the second one is a rerun of the first one) illustrate the problem (look at the output of the find commands and compare the two runs):
The find command returns the files in unspecified order, and shasum is sensitive to that. Because of that, the gradle/with_cache command is computing different checksum values on runs that have no changes but multiple build.gradle files.
I expect a rerun to lead to the same checksum and therefore re-use the cache.
I suggest changing the checksum command this way:
find . -name 'build.gradle' | sort | xargs cat | shasum | awk '{print $1}' > /tmp/gradle_cache_seed