
git-resource's Introduction

Concourse: the continuous thing-doer.


Concourse is an automation system written in Go. It is most commonly used for CI/CD, and is built to scale to any kind of automation pipeline, from simple to complex.

(screenshot: the booklit pipeline)

Concourse is very opinionated about a few things: idempotency, immutability, declarative config, stateless workers, and reproducible builds.

The road to Concourse v10

Concourse v10 is the code name for a set of features which, when used in combination, will have a massive impact on Concourse's capabilities as a generic continuous thing-doer. These features, and how they interact, are described in detail in the Core roadmap: towards v10 and Re-inventing resource types blog posts. (These posts are slightly out of date, but they get the idea across.)

Notably, v10 will make Concourse not suck for multi-branch and/or pull-request driven workflows - examples of spatial change, where the set of things to automate grows and shrinks over time.

Because v10 is really an alias for a ton of separate features, there's a lot to keep track of - here's an overview:

| Feature | RFC | Status |
|---|---|---|
| set_pipeline step | #31 | ✔ v5.8.0 (experimental) |
| Var sources for creds | #39 | ✔ v5.8.0 (experimental), TODO: #5813 |
| Archiving pipelines | #33 | ✔ v6.5.0 |
| Instanced pipelines | #34 | ✔ v7.0.0 (experimental) |
| Static across step 🚧 | #29 | ✔ v6.5.0 (experimental) |
| Dynamic across step 🚧 | #29 | ✔ v7.4.0 (experimental, not released yet) |
| Projects 🚧 | #32 | 🙏 RFC needs feedback! |
| load_var step | #27 | ✔ v6.0.0 (experimental) |
| get_var step | #27 | 🚧 #5815 in progress! |
| Prototypes | #37 | ⚠ Pending first use of protocol (any of the below) |
| run step 🚧 | #37 | ⚠ Pending its own RFC, but feel free to experiment |
| Resource prototypes | #38 | 🙏 #5870 looking for volunteers! |
| Var source prototypes 🚧 | | #6275 planned, may lead to RFC |
| Notifier prototypes 🚧 | #28 | ⚠ RFC not ready |

The Concourse team at VMware will be working on these features; however, in the interest of growing a healthy community of contributors, we would really appreciate any volunteers. This roadmap is very easy to parallelize, as it is comprised of many orthogonal features, so the faster we can power through it, the faster we can all benefit. We want these for our own pipelines too! 😆

If you'd like to get involved, hop in Discord or leave a comment on any of the issues linked above so we can coordinate. We're more than happy to help figure things out or pick up any work that you don't feel comfortable doing (e.g. UI, unfamiliar parts, etc.).

Thanks to everyone who has contributed so far, whether in code or in the community, and thanks to everyone for their patience while we figure out how to support such common functionality the "Concoursey way!" 🙏

Installation

Concourse is distributed as a single concourse binary, available on the Releases page.

If you want to just kick the tires, jump ahead to the Quick Start.

In addition to the concourse binary, there are a few other supported formats. Consult their GitHub repos for more information:

Quick Start

$ wget https://concourse-ci.org/docker-compose.yml
$ docker-compose up
Creating docs_concourse-db_1 ... done
Creating docs_concourse_1    ... done

Concourse will be running at 127.0.0.1:8080. You can log in with username test and password test.

⚠️ If you are using an M1 Mac: M1 Macs are incompatible with the containerd runtime. After downloading the docker-compose file, change CONCOURSE_WORKER_RUNTIME: "containerd" to CONCOURSE_WORKER_RUNTIME: "houdini". This feature is experimental.

Next, install fly by downloading it from the web UI and target your local Concourse as the test user:

$ fly -t ci login -c http://127.0.0.1:8080 -u test -p test
logging in to team 'main'

target saved

Configuring a Pipeline

There is no GUI for configuring Concourse. Instead, pipelines are configured as declarative YAML files:

resources:
- name: booklit
  type: git
  source: {uri: "https://github.com/vito/booklit"}

jobs:
- name: unit
  plan:
  - get: booklit
    trigger: true
  - task: test
    file: booklit/ci/test.yml

Most operations are done via the accompanying fly CLI. If you've got Concourse installed, try saving the above example as booklit.yml, target your Concourse instance, and then run:

fly -t ci set-pipeline -p booklit -c booklit.yml

These pipeline files are self-contained, maximizing portability from one Concourse instance to the next.

Learn More

Contributing

Our user base is basically everyone that develops software (and wants it to work).

It's a lot of work, and we need your help! If you're interested, check out our contributing docs.

git-resource's People

Contributors

agrrh, alext, alucillo, aoldershaw, davidb, fenech, fmy, heldersepu, hy0044, jamie-pate, jdziat, keymon, knifhen, krishicks, luan, mariash, ngehrsitz, norbertbuchmueller, oppegard, pivotal-bin-ju, ppaulweber, shyx0rmz, simonjohansson, talset, taylorsilva, vito, xoebus, xtremerui, youssb, zachgersh


git-resource's Issues

ignore_path option does not work as expected

We have the following ignore paths in our pipeline.

ignore_paths:
  - releases/bosh-openstack-cpi/**
  - .final_builds/**
  - docs/**
  - README.md

We have tested the glob patterns locally with the following command which resembles what the check does in the git-resource:

$ git log --format='%H' -- . ':!.final_builds/**' ':!releases/**' ':!docs/**'

Unfortunately it does not filter expected commits:
https://main.bosh-ci.cf-app.com/pipelines/bosh-openstack-cpi/resources/bosh-cpi-release
(Concourse apparently runs version 0.71.1.)

Commits that should not be part of the list are the following:
1ff876e53c2addd4cc2a4b54eaf20e30b7e2fb19
774ec6f200df0a93b3f669cf9bdb4b6d36f4429f

Strange thing is, in our internal concourse (currently 0.71.0) we have the same pipeline set and it works.
(We also have a pipeline where it partially worked, but here we are not fully aware what pipeline-configs have been applied)

Any ideas?
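For what it's worth, git's pathspec exclusion can be probed outside Concourse. A minimal sketch in a scratch repo (paths are illustrative, not the reporter's; note that a bare directory name as the exclusion reliably covers everything beneath it):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q .
mkdir -p docs src
echo guide > docs/guide.md
echo main > src/main.go
git add .
git -c user.email=ci@example.com -c user.name=ci commit -q -m 'touch docs and src'
echo update > docs/guide.md
git add .
git -c user.email=ci@example.com -c user.name=ci commit -q -m 'docs only'
# The 'docs only' commit should drop out of the list, since it only
# touches the excluded directory:
git log --format='%s' -- . ':!docs'    # prints: touch docs and src
```

If the same command with `':!docs/**'` behaves differently from `':!docs'` on a given git version, that difference alone would explain commits leaking through the filter.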

Git check hangs indefinitely

I'm running Concourse 2.1.0 and occasionally see git resources lock up. It happens across all pipelines without any error message. Attempting a check-resource via fly also hangs with no output. This seems similar to #68, although I'm not using btrfs.

I've tried aborting and restarting jobs without any luck. I've also left a job running overnight but came back to it still waiting on the git resource. Deleting and recreating the pipeline does allow it to immediately succeed.

`[ci skip]` not respected

Commits with [ci skip] are now showing up inside our git resources and triggering builds. The repo in question is public, which should allow reproduction of the issue.

Local git version: git version 2.5.0
Version of git inside container which made commit: git version 1.9.1

HTTPS auth doesn't work

Git resources using https auth don't seem to function at all.

Using this setup:

resources:
- name: reponame
  type: git
  source:
    uri: https://github.com/companyname/reponame.git
    branch: master
    username: githubusername
    password: githubpassword

The check script fails with the following error message:

resource script '/opt/resource/check []' failed: exit status 128

stderr:
Cloning into '/tmp/git-resource-repo-cache'...
fatal: could not read Username for 'https://github.com': No such device or address

HTTPS auth is important because you can't authenticate submodules using private keys. Thanks!

Clarify supported git url schemes

Hi,
first steps with Concourse CI for me, first of all thanks for that great tool with interesting different concepts. :)

I've tried to connect a git-resource to my private git repository (not hosted on github). Is it the case that only the scp-like url scheme git@...:/path/to/repo.git/ is supported? I've tried several attempts similar to the following ones, but only the last one seems to work (all the others fail with fatal: Could not read from remote repository.). I've specified the correct private_key, btw.

ssh://example.com:1234/path/to/repo.git/
git://example.com:1234/path/to/repo.git/
git@example.com:/path/to/repo.git/

The first schemes are needed if you want to specify a port other than 22.

Also local directory repositories don't seem to work, neither with /path/to/repo.git/ nor file:///path/to/repo.git/.

Could you please confirm or clarify in the documentation what is supported?
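For reference, stock git does accept ssh:// URLs with an explicit port (ssh://git@example.com:1234/path/to/repo.git) as well as file:// URLs; whether the resource's scripts pass these through intact is the open question here. A quick local check of the file:// case (temp paths only, not the reporter's setup):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q src
git -C src -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
# Plain git clones an absolute file:// URL without issue:
git clone -q "file://$work/src" local-clone
git -C local-clone rev-list --count HEAD    # prints: 1
```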

recursive submodules and triggering

Hi,

I just wonder about the following scenario:

GitRepoA*
+--- SubmoduleAA*
:    +--- SubmoduleAAA
     |    +--- fileAAAA
     |    +--- fileAAAB
     +--- SubmoduleAAB*
     :    :

SubmoduleAAB has changed, therefore SubmoduleAA and GitRepoA have changed too (new submodule commits). SubmoduleAAA has no changes.

If GitRepoA is my source-uri, how do I express that I get triggered only if I had a change on fileAAAB? Just setting "paths: [SubmoduleAA/SubmoduleAAA/fileAAAB]" ???

Best regards, Stephan

Should the in script reuse the clone maintained by the check script?

Firstly, I should say that I am still trying to get my head round how this resource works, so I might have the wrong end of the stick here.

This is an idea for an enhancement.

The 'check' script does this:

destination=$TMPDIR/git-resource-repo-cache

if [ -d $destination ]; then
  cd $destination
  git fetch
  git reset --hard FETCH_HEAD

Which means that if the resource container is reused, it will do incremental fetches from the origin into $TMPDIR/git-resource-repo-cache to get new commits.

The 'in' script does this:

cat > $payload <&0
# ...
uri=$(jq -r '.source.uri // ""' < $payload)
# ...
git clone --single-branch $depthflag $uri $branchflag $destination

Which means that it's always pulling a fresh (albeit possibly shallow) clone from the origin.

Would it be possible for the 'in' script to clone from $TMPDIR/git-resource-repo-cache instead? That means the clone would be purely local, and would not need to go out to the network again.

You'd need to make sure it was a standalone clone, not using hardlinks or sharing or whatever, but there are flags for that.

I appreciate that you probably want the clone's remote to be the real origin, not the local cache, but that can be arranged. Perhaps do a clone from the origin with --depth 0 (is that allowed?) then do a pull from the local cache. Or else clone, then use git remote to rewrite the remote. You'd need to do something about tracking branches too.

You'd need to allow for the case where 'in' is asked for a version of the repository which isn't in the cache (say because the resource container was deleted in between the 'check' and the 'in'). You could deal with that by checking to see if the version is there, and if not, calling into the 'check' machinery to create and update the cache.
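The local-clone-plus-remote-rewrite idea from the paragraphs above can be sketched with plain git (directory names are hypothetical stand-ins, not the resource's actual layout):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q upstream
git -C upstream -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
git clone -q upstream cache                  # stand-in for $TMPDIR/git-resource-repo-cache
git clone -q --no-hardlinks cache checkout   # purely local, standalone clone
# Rewrite origin so the checkout's remote is the real upstream, not the cache:
git -C checkout remote set-url origin "$work/upstream"
git -C checkout remote get-url origin
```

--no-hardlinks addresses the "make sure it was a standalone clone" concern; the set-url step addresses the "you probably want the clone's remote to be the real origin" one.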

Add support for fetching only single branch

Git fetches all branches by default, and this resource follows that schema. I had a small master branch (~1 MB) with a few other big branches (~0.5 GB); simply getting master took 2-3 minutes. A git clone done manually in a shell did the same, downloading 0.5 GB of data. Deleting the big remote branches helped: the resource started getting master in a few seconds instead.

git clone -b mybranch --single-branch git://sub.domain.com/repo.git
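The effect of the suggested flag is easy to demonstrate in a scratch repo (branch names are illustrative):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q src
git -C src checkout -q -b master
git -C src -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m 'on master'
git -C src checkout -q -b big-assets
git -C src -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m 'on big branch'
git -C src checkout -q master
# With --single-branch, only master's refs and history come down;
# big-assets is never fetched:
git clone -q -b master --single-branch src narrow
git -C narrow branch -r
```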

Add git-crypt support.

I would like to try out concourse, but almost all of my repos are encrypted with git-crypt.

So I was thinking about adding git-crypt support and sending a PR, is this something you are interested in?

`paths` option broken for merge commits

If you have a paths option, then the check script will fail to trigger for merge commits even if the commits which they merged in contained changes to files which are included in paths.

Presumably this is because of the shallow clone, so there are no files included in the merge commit.

Detecting multiple tags via filter

/cc @Jonty

Opening a separate issue to augment tag_filter. It would be nice if it could support the full resource interface, part of which defines that it should be possible to discover old versions, and ranges of versions, not just the latest.

Something like this might work pretty well (replacing the git describe that we currently do):

git tag --list <pattern> --sort=creatordate [--contains <cursor>]

Example, run from buildroot which has date-formatted tags and RCs:

~/w/buildroot (master) $ git tag --list "*.*" --sort=creatordate --contains 2015.08
2015.08
2015.08.1
2015.11-rc1
2015.11-rc2
2015.11-rc3
2015.11
2015.11.1
2016.02-rc1
2016.02-rc2
2016.02-rc3
2016.02
2016.05-rc1
2016.05-rc2
2016.05-rc3
2016.05

(...which brings up another point: how to omit RCs?)
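On the RC question: since the pattern given to git tag --list is a glob, one pragmatic approach is to filter the listing afterwards. A sketch in a scratch repo, using tags copied from the buildroot example (not a proposal for the resource's actual interface):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
for t in 2015.11-rc1 2015.11 2016.02-rc1 2016.02; do git tag "$t"; done
# Drop release candidates after listing:
git tag --list '*.*' --sort=creatordate | grep -v -- '-rc'
```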

And another example, in concourse/concourse:

~/w/concourse (develop) $ git tag --list "v*" --sort=creatordate
v0.14.0
v0.15.0
v0.16.0
v0.17.0
v0.18.0
v0.19.0
v0.20.0
v0.21.0
v0.22.0
v0.23.0
v0.24.0
v0.25.0
v0.26.0
v0.27.0
v0.28.0
v0.29.0
v0.30.0
v0.31.0
v0.32.0
v0.33.0
v0.34.0
v0.35.0
v0.36.0
v0.37.0
v0.38.0
v0.39.0
v0.40.0
v0.41.0
v0.42.0
v0.43.0
v0.44.0
v0.45.0
v0.46.0
v0.47.0
v0.48.0
v0.49.0
v0.50.0
v0.51.0
v0.52.0
v0.53.0
v0.54.0
v0.55.0
v0.56.0
v0.57.0
v0.58.0
v0.59.0
v0.59.1
v0.60.0
v0.60.1
v0.61.0
v0.62.0
v0.62.1
v0.63.0
v0.64.0
v0.64.1
v0.65.0
v0.65.1
v0.66.0
v0.66.1
v0.67.0
v0.67.1
v0.68.0
v0.69.0
v0.69.1
v0.70.0
v0.71.0
v0.71.1
v0.72.0
v0.72.1
v0.73.0
v0.74.0
v0.75.0
v0.76.0
v1.0.0
v1.0.1
v1.1.0
v1.2.0
~/w/concourse (develop) $ git tag --list "v*" --sort=creatordate --contains v1.0.1
v1.0.1
v1.1.0
v1.2.0
~/w/concourse (develop) $

Better diagnostics when required settings not found

I just wasted a non-trivial amount of time figuring out why git-resource wouldn't work for checking out a repo via ssh.

Turns out I was using private-key: instead of private_key: in my pipeline.yml

In both cases, fly set-pipeline happily accepted my inputs, letting me down only when trying to check the resource, with this error

$ fly -t local cr -r test/resource-tutorial
error: check failed with exit status '128':
Cloning into '/tmp/git-resource-repo-cache'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights

(see also #66 (comment))

At least git-resource should both provide proper diagnostics and check that it actually has a private key specified when trying to check out an ssh repo. A warning statement about missing settings in the output returned by fly would be good enough already.

Better yet, fly set-pipeline should validate the yml file, though that may be a lot more difficult.

Clone submodules in parallel

There is no reason this needs to be done serially. You could add something like this:

git submodule status | awk '{print $2}' | xargs -P5 -n1 git submodule update --init $depthflag --recursive

here:

git-resource/assets/in

Lines 59 to 66 in d3fc957

if [ "$submodules" == "all" ]; then
  git submodule update --init $depthflag --recursive
elif [ "$submodules" != "none" ]; then
  submodules=$(echo $submodules | jq -r '(.[])')
  for submodule in $submodules; do
    git submodule update --init $depthflag --recursive $submodule
  done
fi


I'll do a PR if you want.

`annotate` param should be relative to the container, not the repo

It's worth noting that the filename for the 'tag:' parameter needs to be expressed relative to the base of the container (e.g.: version/number), whereas the filename for the 'annotate:' parameter needs to be expressed relative to the base of the git repository (e.g.: ../version/number)

Not too difficult to figure that out by reading the source, but it's a surprising piece of information that should probably be called out explicitly in the documentation.

Support pushing via HTTPS and basic auth

Some environments only allow HTTP(S) traffic into the Git repository, and so need to use the https:// protocol with basic-auth credentials rather than a private key.

Allow checking to trigger on tagging, and filter the tags

We'd really like to trigger some pipelines only when we push a new tag marking a release; however, we also push other tags, so we would like to be able to filter the tags used by the check script.

Obviously the checked-out version would match the latest matching tag that has been pushed.

Opening this here as we've expressed interest in working on it in Slack and were told it would be accepted upstream.
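A hedged sketch of what such a configuration could look like, using the tag_filter parameter discussed elsewhere in this document (the URI is hypothetical):

```yaml
resources:
- name: release
  type: git
  source:
    uri: https://github.com/example/project.git
    branch: master
    # Only tags matching this glob produce versions in check;
    # other pushed tags are ignored.
    tag_filter: "v*"
```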

git history can get borked

Our git-resource for cf-deployment is having a few problems.

First, it seems to fetch the wrong branch. The desired branch is develop, which is at SHA 210655a. Instead, it seems to be fetching fa43ce from a 5-month-old WIP branch that we've since deleted. There hasn't been any activity on this branch, but for some reason, the three commits on this branch are the three topmost versions of git resource in the concourse UI.

Second, even without the inclusion of this WIP branch, the remainder of the history is missing the most recent commits on develop.

(screenshots of the commit log and of the resource's version history omitted)

As you can see, ref 3e48b is the most recent commit in the history of develop that the git resource has picked up (it doesn't seem to have found refs 2782123, 9f545d2, or 210655a, even though they are all in the history of develop).

We're not sure what's causing this, or how to find out the root cause. (We were trying to delete a bunch of data from the build_events table. We're not sure how this would affect resource versions, but it could be that something went wrong with the db.) We're going to work around it by re-building the pipeline.

Fails to clone private submodules

It seems that it does not use the ssh key to do the submodule init. I can clone my repo and its submodules separately (using the same ssh key) and each succeeds, but unless I say { submodules: none } when doing the get on the parent, the parent fails to clone with:

Identity added: /tmp/git-resource-private-key (/tmp/git-resource-private-key)
Cloning into '/tmp/build/get'...
Fetching HEAD
843d29e Rename some concourse stuff
Submodule 'src/main/resources/[redacted]-secrets' (git@[redacted].git) registered for path 'src/main/resources/[redacted]-secrets'
Cloning into '/tmp/build/get/src/main/resources/[redacted]-secrets'...
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@[redacted].git' into submodule path '/tmp/build/get/src/main/resources/[redacted]-secrets' failed

File containing sha of HEAD

It'd be handy if there was a file containing the sha of the git repo checked out. Currently, the way to get this is via git rev-parse HEAD.
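A sketch of the proposal, assuming a hypothetical ref output file written next to the checkout (the filename is illustrative, not what the resource necessarily ships):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
# Write the checked-out sha to a plain file so tasks can read it
# without invoking git:
git rev-parse HEAD > ref
cat ref
```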

source.uri is case-sensitive

We have a repo on github under the Company-ProductGroup org name, let's call it Company-ProductGroup/instrumentation

When we wrote and tried to use a pipeline with the source.uri value of git@github.com:company-productgroup/instrumentation.git, the git clone failed. Correcting the capitalization of the org name let things carry on correctly.

We would expect that github org names are not case-sensitive.

add support for shallow submodules

Currently, no --depth parameter is applied when doing git submodule update --init --recursive $submodule

We have a large submodule that we'd like to clone shallowly.

variables don't expand properly in uri

Curly bracket expansion doesn't play nice when used as part of a URI:

resources:
- name: tools
  type: git
  source:
    uri: git@github.com:{{github-username}}/tools.git
    branch: master
    private_key: {{github-private-key}}

jobs:
- name: test
  plan:
  - get: tools

After set-pipeline passing --var github-username=bacoboy expands with extra quotes:

      uri: git@github.com:"bacoboy"/tools.git

This results in a resource error:

resource script '/opt/resource/check []' failed: exit status 128

stderr:
Identity added: /tmp/git-resource-private-key (/tmp/git-resource-private-key)
Cloning into '/tmp/git-resource-repo-cache'...
fatal: remote error: 
  %s is not a valid repository name
  Email [email protected] for help

Ability to disable git LFS

We use git lfs for some development dependencies (~2GB of them) but we don't need them on CI (and it slows down the builds enormously).

It looks like it should be as easy as adding a param like disable_git_lfs and adding an if to the inputs script. Does that sound reasonable if I make a PR?

Git resource doesn't actually push to git lfs

Hi Folks,

I know we (the data toolsmiths) were the folks who originally submitted the LFS PRs to the git resource, but we discovered they don't work! I am creating this issue to track the fact that it still definitely doesn't work, and we (the data toolsmiths) should probably submit some more PRs to actually make LFS work.

Best,
Zak + @cjcjameson

P.S. Do u ever think about space?

Git resource occasionally refuses to connect to repo(s)

We are running 1.4.1 and find that randomly, a git resource which has worked fine will refuse to pull from a repo which has new commits available.

There are no errors, the resource simply spins and never times out.

This is a random issue that affects most of our pipelines which normally work without an issue.

Anyone else experience a similar issue?

tags-only push

the git resource does not seem to be able to tag a repo at the sha provided to a build, if there have been commits after that sha on the branch.

when we try to do this, we get errors on push if we don't specify rebase, and it tags the wrong sha if we do specify rebase.

a tags-only push would solve this problem for us
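A tags-only push is expressible in plain git with a refspec naming just the tag; a sketch in a scratch repo (names are illustrative):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q --bare remote.git
git clone -q remote.git repo
cd repo
git checkout -q -b master
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m 'build input'
git tag v1.0                       # tag the sha the build saw
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m 'later commit'
# Push only the tag ref: no rebase needed, and the branch tip is
# left untouched on the remote.
git push -q origin refs/tags/v1.0
```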

submodules of submodules seem to be borked in new git resource version

After cloning cf-release I see the following when running git status:

fatal: Not a git repository: /tmp/build/get/.git/modules/src/capi-release/modules/src/cloud_controller_ng
fatal: 'git status --porcelain' failed in submodule src/capi-release

Here is a pipeline that can recreate the issue on concourse v1.3.0:

resources:
- name: cf-release
  type: git
  source:
    uri: https://github.com/cloudfoundry/cf-release.git
    branch: master

jobs:
- name: check-status
  plan:
  - aggregate:
    - get: cf-release
      resource: cf-release
  - task: status
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: pivotalcfreleng/minimal
      inputs:
      - name: cf-release
      run:
        path: /usr/bin/git
        args:
        - status
        dir: cf-release

Tag behaviors should work for detached HEAD tags

Currently, when specifying a value for tag_filter the only candidates that are considered are the ones that are reachable from the specified (or default) branch. Tags that are detached HEADs (or in other branches) are not considered since they are not retrieved during git fetch <BRANCH>.

In Maven (and Java generally) the version of an application exists in source control. So, when we tag a release, it requires a commit that contains those changes. In the Spring projects, it’s common that master never has a release on it. Version numbers go from one snapshot to another from the perspective of master, and the release hangs off and has to be directly targeted.

(screenshot omitted)

Based on a Slack conversation, I'd like to advocate that any behavior having to do with tags (tag_filter is the only one that's obvious to me) ignore branches completely. All tags, regardless of whether they're reachable from HEAD, should be considered. This policy should hold true for future tag-related features as well.

Attempts to add a private key with a passphrase for SSH fail silently

If you give Concourse an SSH private key that has a passphrase, the only error you will see when you click the resource is something like "unable to check". Only experienced Concoursers recognize that the missing hashes for a git resource suggest connectivity problems with the private key. We figured the issue out, but I figured I would create an issue in the hope that some relatively simple check can be added to help future devs.

path: dir/**/* not triggering

We have a pipeline that is not getting triggered on a change to dir/file.md when defining the glob dir/**/*. When using dir/* it works.

Locally the commands git log --format='%H' --reverse $ref..HEAD -- dir/**/* and git log --format='%H' --reverse $ref..HEAD -- dir/* produce the same output.

git-resource not ignoring submodules when using put

Hi,
We have a pipeline where the get params include a list of submodules that should be cloned. This works fine when checking out the source-code before building. However after building we push a tag back to the repository and this is where the problem occurs.

After the tag is pushed the source-code is pulled again, this time all submodules are cloned. How can we avoid this full submodule clone?

Best regards Andreas Knifh

Can't push without tags

A put command to a github resource is always a git push --tag. We use tags to mark release versions and commits deployed to specific environments. Right now, we have to delete all tags locally in our scripts before pushing from Concourse. We would prefer to be able to just specify in our plan configuration that we don't want to push tags, or for this to be the default behavior.
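For contrast, plain git only pushes tags when asked; a sketch showing that a bare git push updates the branch but leaves local tags behind (names are illustrative, not the resource's behavior):

```shell
set -eu
work=$(mktemp -d) && cd "$work"
git init -q --bare remote.git
git clone -q remote.git repo
cd repo
git checkout -q -b master
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m 'release work'
git tag deployed-to-staging        # local bookkeeping tag
# A plain push (no --tags): the branch goes up, the tag stays local.
git push -q origin master
```

The requested behavior would amount to the resource issuing this form of push unless tag pushing is explicitly requested.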

Create a tunnel for using ssh behind proxy git connections

Hi,

In our company it is mandatory to use a proxy to connect to internet sites. We have a git repository on the internet that is only configured for ssh access, so we need to create a tunnel across the proxy using corkscrew (http://www.techrepublic.com/blog/linux-and-open-source/using-corkscrew-to-tunnel-ssh-over-http/).

We tested this on our Concourse worker instances and it works, but to use it in git-resource we need to set up the ssh config.

Before creating a fork to implement this functionality, we want to know whether we could implement and donate this code to this repository, and which steps to follow to collaborate.

We think this functionality is useful to big companies with complex environments, so if you want we can collaborate.

What do you think? Could you answer the questions above?

Thanks,
Carlos León

When configured with depth N, /in should still be able to fetch a ref N+1 commits ago

The git repository check determines the SHA limited to the paths that are not filtered out. When the resource is fetched during the get phase with depth, the depth is applied to the overall repository. This difference results in git failing because the target sha is unreachable within the shallow history.

Philosophically speaking, since the resource is limited to several paths, the depth parameter in the get operation should be interpreted as a depth over the filtered paths, not the overall repo. Easier said than done, though. There are strategies to obtain the needed depth; check this question for ideas: http://stackoverflow.com/questions/39935546/get-git-sha-depth-on-a-remote-branch/39941969#39941969

Note that since the resource check clones the whole repo, if there were a mechanism to store the current depth, it could be used as a starting point, significantly reducing the number of iterations until the right depth is found.

As a hot patch, the depth might be ignored when path filters are used, or the combination might be reported as an issue during pipeline validation. Note that because combining these two settings results in "random" failures, the connection is hard to spot.

Support shallow cloning

For large repos with years of history, it'd be useful to support the --depth option when cloning a repo.

Merge commits not being detected

At some point recently (I think it's recent), merge commits have stopped being detected. For example, given a repository's commit log:

(screenshot: repository commit log omitted)

and the resource history:

(screenshot: resource history omitted)

Note that the following commits aren't listed:

  • 76730de
  • be5d390
  • 52d3397
  • c135c4d

The common thread is that these are all merge commits. We use merge commits to delineate units of work, but the thing that has flagged this for us is that we use them to trigger Pivotal Tracker transitions.

(screenshot omitted)

Not triggering on a merge commit, or not triggering after branch config change

Using BDD style description, because it's what I know.

Given

I have a branch called foo
Foo is a feature branch off develop
My git resource is configured as branch: foo
The last check of my git resource pointed to sha aaa

When

I merged foo to develop as a merge commit - creating sha bbb
And pushed develop to github
And updated my git resource config to branch: develop
And fly configured concourse

Then

My git resource never triggered with bbb
And I waited a long time

When

I committed another change to develop - creating sha ccc

Then

My git resource triggered with ccc
And never showed bbb
