aweris / gale
GitHub Action Local Executor
License: Apache License 2.0
Hi! I was wondering what the pros/cons are of integrating with https://github.com/nektos/act.
The way I see it is:
Pros:
Cons:
About the minimal work:
I believe the main interface to be implemented is https://github.com/nektos/act/blob/f84a566ded8d03bc4daee36bb7df2502a8351409/pkg/runner/runner.go#L16 which is quite general. Like I said, there may be some parts of the upstream code where we'd need to add extension points but overall this seems doable.
What do you think?
The current step interface in gale encompasses setup, pre, main, and post execution methods and their conditional checks. Not all step types within the application make use of the pre- and post-execution methods, leading to unnecessary boilerplate code and potential confusion.
A modular approach, potentially using hooks or functional interfaces, may help streamline the implementation, making it more intuitive and reducing redundancies.
Mandatory Interface Implementation: all steps implement the full Step interface, even if they don't utilize pre- and post-execution methods.
Functional Interface Decomposition: Break down the Step interface into smaller functional interfaces, such as PreExecutable, PostExecutable, etc. This allows steps to implement only the interfaces that are relevant to their operation.
Hook-based System: Instead of having separate methods in the interface, consider having a central execution method that uses hooks or listeners. These hooks can be registered dynamically based on the type of step. For instance, if a step has pre-execution logic, it can register a pre-execution hook.
Base and Extended Step Classes: Another object-oriented approach could involve having a base Step
class with the most common methods and then extended classes or interfaces that add pre and post-execution capabilities. Steps can then extend or implement as needed.
Decorator Pattern: Use the decorator pattern to add pre and post-execution capabilities around a core step. This allows for flexible composition and avoids altering existing code.
Review the usage of the Step interface's methods across different step types.
The CLI is not intuitive currently: both --workflow and --job are mandatory, so they should not be CLI options but arguments instead. Here is a quick proposal:
gale <workflow> <job>
And add info and examples in the help.
Example:
$ gale
Workflow and job must be provided, detected the following workflows:
- .github/workflows/deploy.yaml
--- build (invoke with `gale .github/workflows/deploy.yaml build`)
--- test (invoke with `gale .github/workflows/deploy.yaml test`)
--- deploy (invoke with `gale .github/workflows/deploy.yaml deploy`)
This is just a suggestion; I think 1. is a must-have, 2. is a nice-to-have and could be implemented differently. I just think it would save time to introspect a repo.
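A minimal sketch of the proposed positional-argument handling, using only the standard library (listWorkflows is a hypothetical stand-in for real workflow discovery; gale's actual CLI is cobra-based):

```go
package main

import (
	"fmt"
	"os"
)

// listWorkflows is a hypothetical stand-in for scanning
// .github/workflows for workflow files and their jobs.
func listWorkflows() map[string][]string {
	return map[string][]string{
		".github/workflows/deploy.yaml": {"build", "test", "deploy"},
	}
}

// run implements `gale <workflow> <job>`: with missing arguments it
// prints the detected workflows instead of failing opaquely.
func run(args []string) error {
	if len(args) != 2 {
		fmt.Println("Workflow and job must be provided, detected the following workflows:")
		for wf, jobs := range listWorkflows() {
			fmt.Println("-", wf)
			for _, job := range jobs {
				fmt.Printf("--- %s (invoke with `gale %s %s`)\n", job, wf, job)
			}
		}
		return fmt.Errorf("usage: gale <workflow> <job>")
	}
	workflow, job := args[0], args[1]
	fmt.Printf("running job %q from %s\n", job, workflow)
	return nil
}

func main() {
	_ = run(os.Args[1:])
}
```

With cobra, the equivalent would be `cobra.ExactArgs(2)` plus a custom error handler that prints the discovered workflows.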
The existing cache mechanism in services/artifactcache, which was reverse-engineered to locally support actions/cache, does not currently have a mechanism for cache entry expiration or cleanup. Cache entries are only deleted when the entire dagger cache volume is deleted. This behavior may lead to space constraints and inefficient cache usage over time.
To align more closely with typical caching patterns and potentially with how caching works in GitHub Actions, it's essential to introduce a cache entry expiration/cleanup mechanism.
Optional:
Currently gale is executed via its own wrapper, like so.
Current usage of gale:
gale.New(&cfg, client).
	WithModifier(func(container *dagger.Container) (*dagger.Container, error) {
		// create a file in the container to use for artifact upload
		container = container.WithNewFile("/foo/bar.txt", dagger.ContainerWithNewFileOpts{
			Contents:    "Hello World",
			Permissions: 0600,
		})

		// bind artifact service to the container
		container = container.WithServiceBinding("artifact", services.ArtifactService(client))

		// mock run id and runtime variables
		container = container.WithEnvVariable("GITHUB_RUN_ID", "1")
		container = container.WithEnvVariable("ACTIONS_RUNTIME_TOKEN", "test")
		container = container.WithEnvVariable("ACTIONS_RUNTIME_URL", "http://artifact:8080/")

		// enable debug mode
		container = container.WithEnvVariable("RUNNER_DEBUG", "1")

		return container, nil
	}).
	WithStep(&model.Step{Uses: "actions/upload-artifact@v2", With: map[string]string{"name": "test", "path": "/foo/bar.txt"}}, false).
	WithStep(&model.Step{Uses: "actions/download-artifact@v2", With: map[string]string{"name": "test"}}, false).
	WithStep(&model.Step{Run: "cat bar.txt"}, false).
	Exec(context.Background())
This approach creates relatively simple usage when working with GitHub Actions, but it makes it really hard to combine with dagger.Container workflows or to use custom action support in your own container.
Instead of using the wrapper option, we could use a collection of atomic operations like:
dagger.Container().
	From(image).
	With(gale.AddStep(&gale.Step{...})).
	With(gale.AddStep(&gale.Step{...})).
	With(gale.Exec{...}).
	Stdout(ctx)
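The composition style above relies on a `With`-style combinator: each gale helper returns a function that transforms the container. A self-contained sketch of the mechanism (the `container` type below is a minimal stand-in for dagger.Container, and `addStep` is a hypothetical combinator, not gale's real API):

```go
package main

import "fmt"

// container is a minimal stand-in for dagger.Container, just enough
// to show how With-style composition chains.
type container struct{ ops []string }

// With applies a transform to the container, mirroring the shape of
// dagger's Container.With helper.
func (c *container) With(f func(*container) *container) *container { return f(c) }

// addStep sketches what a gale.AddStep combinator could return: a
// function that records a workflow step on the container.
func addStep(uses string) func(*container) *container {
	return func(c *container) *container {
		c.ops = append(c.ops, "step: "+uses)
		return c
	}
}

func main() {
	c := (&container{}).
		With(addStep("actions/upload-artifact@v2")).
		With(addStep("actions/download-artifact@v2"))
	fmt.Println(c.ops)
}
```

Because each combinator is just a `func(*Container) *Container`, users could freely interleave gale steps with their own dagger pipeline stages.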
While running gale on local-only branches, certain workflows are failing. It appears that specific actions, like the checkout action, attempt to pull the complete Git history. This, combined with some API calls meant for the GitHub environment, results in errors when executed on branches that are strictly local.
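One way gale could detect this situation is to check whether the current branch tracks a remote branch; if not, actions like checkout could fall back to mounting the local worktree instead of fetching. A sketch (the fallback policy is an assumption, not gale's current behavior):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasUpstream reports whether the current branch tracks a remote
// branch. Checkout can only fetch history that actually exists on the
// remote, so a false result signals a local-only branch.
func hasUpstream(dir string) bool {
	cmd := exec.Command("git", "rev-parse", "--abbrev-ref", "@{upstream}")
	cmd.Dir = dir
	out, err := cmd.Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	if hasUpstream(".") {
		fmt.Println("branch exists on the remote; checkout can fetch it")
	} else {
		fmt.Println("local-only branch; mount the local worktree instead")
	}
}
```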
When gale is run on a local-only branch, workflows that rely on extensive Git history or make certain API calls fail. gale should be able to handle workflows on local-only branches by either mocking or bypassing operations that are not feasible locally. To reproduce, run a workflow in gale that includes actions like checkout. gale should detect when it is being run on a local-only branch and adjust the behavior of actions like checkout accordingly, while still replicating GitHub Actions' behavior even if some operations are mocked or bypassed.
Workflows listed ok using: gale list
Executed gale run .github/workflows/workflow.yml job-name
76: exec /entrypoint
76: > in service v975nob2u91fo.8951mv3r8eggq.dagger.local
Error: input:1: container.from.withUnixSocket.withFile.withEnvVariable.withMountedCache.withServiceBinding.withEnvVariable.withEnvVariable.withServiceBinding.withEnvVariable.withEnvVariable.withExec.sync process "sh -c echo workflow not found && exit 1" did not complete successfully: exit code: 1
Running on macOS Ventura 13.5 - MacBook Pro M1
GitHub Actions allows the use of jobs.<job_id>.outputs to create a map of outputs for a job. These outputs are essential, as they can be utilized by all downstream jobs that depend on this job. Implementing this feature in gale would enhance its compatibility with GitHub Actions, allowing users to mimic these functionalities locally.
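A minimal sketch of how gale could resolve a job's declared outputs and expose them to downstream jobs via the needs context. Only the direct `${{ steps.<id>.outputs.<name> }}` form is handled; a real implementation would need a full expression evaluator:

```go
package main

import (
	"fmt"
	"strings"
)

// evalOutputs resolves a job's declared outputs against the step
// outputs produced during the run. The resolved map is what would be
// published as needs.<job_id>.outputs for downstream jobs.
func evalOutputs(declared map[string]string, steps map[string]map[string]string) map[string]string {
	resolved := make(map[string]string)
	for name, expr := range declared {
		v := strings.TrimSpace(expr)
		v = strings.TrimPrefix(v, "${{")
		v = strings.TrimSuffix(v, "}}")
		// Expect the form steps.<id>.outputs.<name>.
		parts := strings.Split(strings.TrimSpace(v), ".")
		if len(parts) == 4 && parts[0] == "steps" && parts[2] == "outputs" {
			resolved[name] = steps[parts[1]][parts[3]]
		}
	}
	return resolved
}

func main() {
	declared := map[string]string{"image": "${{ steps.build.outputs.tag }}"}
	steps := map[string]map[string]string{"build": {"tag": "v1.2.3"}}
	fmt.Println(evalOutputs(declared, steps)) // map[image:v1.2.3]
}
```

Secret redaction and the size-limit warnings discussed below would hook into the same resolution point, before the map is logged or handed to the next job.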
Support jobs.<job_id>.outputs in gale. Downstream jobs declared via jobs.<job_id>.needs should be able to read these outputs through the needs context, so that workflows using jobs.<job_id>.outputs can be run locally with gale.
gale should evaluate expressions in outputs at the end of each job. gale should redact secret data from outputs before they are logged or sent to other jobs. gale should issue warnings when output size limits are approached or exceeded.
I have an OS matrix build. Sometimes there are errors only in the mac build, because reasons. I end up pushing debug commits and minor adjustments... (example: https://github.com/hofstadter-io/hof/actions/runs/5992905676)
I would love to be able to fix these issues with gale. My current issue revolves around using docker via the CLI on a default mac worker, to pull & run images only, by running the docker CLI "by hand" and via a Go Exec call from our own CLI. We are testing whether our CLI can run the docker CLI.
If this requires actual mac hardware, that is probably fine, though it would be great if it supported a cloud-based option. We are considering hooking one of these services up to GHA custom workers, which might make the env more predictable and debuggable. Basically everyone's builds have been failing due to an unentitled (not signed) qemu binary (v8.0.4).
Here is the issue on the GitHub action we use to set up docker on mac: crazy-max/ghaction-setup-docker#18 (comment). This would be a good action for macOS gale to test with.
When gale is run via the dagger CLI, how does it ensure compatibility with the Engine that dagger manages?
As you know, when a command is run via dagger run <YOUR_COMMAND HERE>, the dagger CLI provisions the Engine. So if I have dagger v0.8.4 running locally, which provisions Engine v0.8.4, but gale v0.0.4 expects v0.8.1, could this cause issues?
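As a sketch of the kind of guard gale could add, a simple major.minor comparison between the provisioned Engine version and the version the embedded SDK expects. This heuristic is an assumption for illustration; dagger's actual compatibility rules may differ:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// majorMinor parses the leading major.minor pair out of a version
// string like "v0.8.4".
func majorMinor(v string) [2]int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	var out [2]int
	for i := 0; i < 2 && i < len(parts); i++ {
		out[i], _ = strconv.Atoi(parts[i])
	}
	return out
}

// compatible reports whether the CLI-provisioned engine matches what
// the embedded SDK expects, comparing only major.minor (a heuristic).
func compatible(engine, expected string) bool {
	return majorMinor(engine) == majorMinor(expected)
}

func main() {
	fmt.Println(compatible("v0.8.4", "v0.8.1")) // same minor series
	fmt.Println(compatible("v0.9.0", "v0.8.1")) // minor bump: warn the user
}
```

gale could run such a check at startup and print a warning instead of failing later with an opaque connection error.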
For the dagger run gif size issue, have you tried dagger run --progress plain? cc @vito for more terminal recording & sharing tips.
Hi.
First: very interesting project!
I'm having an error when trying gale run (gale list works like a charm), even with this repo. I tried both ways, with dagger run gale run ... and gale run ..., and got the same results.
$ gale run .github/workflows/lint.yaml golangci-lint
Creating new Engine session... OK!
Establishing connection to Engine... 1: connect
1: > in init
1: starting engine
1: starting engine [0.98s]
1: starting session
1: starting session [0.28s]
1: connect DONE
OK!
6: resolve image config for ghcr.io/aweris/gale/tools/ghx:v0.0.7
6: > in from ghcr.io/aweris/gale/tools/ghx:v0.0.7
6: resolve image config for ghcr.io/aweris/gale/tools/ghx:v0.0.7 DONE
8: pull ghcr.io/aweris/gale/tools/ghx:v0.0.7
8: > in from ghcr.io/aweris/gale/tools/ghx:v0.0.7
8: resolve ghcr.io/aweris/gale/tools/ghx:v0.0.7@sha256:f4454a5b736fc05e6077fe88cee9b9d118089455234edb281822309cf7136c5a
8: resolve ghcr.io/aweris/gale/tools/ghx:v0.0.7@sha256:f4454a5b736fc05e6077fe88cee9b9d118089455234edb281822309cf7136c5a [0.01s]
8: pull ghcr.io/aweris/gale/tools/ghx:v0.0.7 DONE
8: pull ghcr.io/aweris/gale/tools/ghx:v0.0.7 CACHED
8: > in from ghcr.io/aweris/gale/tools/ghx:v0.0.7
8: pull ghcr.io/aweris/gale/tools/ghx:v0.0.7 CACHED
13: upload . DONE
13: > in host.directory .
13: upload . DONE
13: upload .
13: > in host.directory .
13: transferring eyJvd25lcl9jbGllbnRfaWQiOiJ1eTdla3dheWo4M3QwcmdkYTA2bWc1YjBjIiwicGF0aCI6Ii4iLCJpbmNsdWRlX3BhdHRlcm5zIjpudWxsLCJleGNsdWRlX3BhdHRlcm5zIjpudWxsLCJmb2xsb3dfcGF0aHMiOm51bGwsInJlYWRfc2luZ2xlX2ZpbGVfb25seSI6ZmFsc2UsIm1heF9maWxlX3NpemUiOjB9:
13: transferring eyJvd25lcl9jbGllbnRfaWQiOiJ1eTdla3dheWo4M3QwcmdkYTA2bWc1YjBjIiwicGF0aCI6Ii4iLCJpbmNsdWRlX3BhdHRlcm5zIjpudWxsLCJleGNsdWRlX3BhdHRlcm5zIjpudWxsLCJmb2xsb3dfcGF0aHMiOm51bGwsInJlYWRfc2luZ2xlX2ZpbGVfb25seSI6ZmFsc2UsIm1heF9maWxlX3NpemUiOjB9: 32.94MiB [0.23s]
13: upload . DONE
12: copy . CACHED
12: > in host.directory .
12: copy . CACHED
16: resolve image config for docker.io/alpine/git:latest
16: > in from alpine/git
16: resolve image config for docker.io/alpine/git:latest DONE
23: pull docker.io/alpine/git:latest
23: > in from alpine/git
23: resolve docker.io/alpine/git@sha256:b60414517a102de62fae6159fce9f1280892027469d21ddcf505b8d4139080b5
23: resolve docker.io/alpine/git@sha256:b60414517a102de62fae6159fce9f1280892027469d21ddcf505b8d4139080b5 [0.01s]
23: pull docker.io/alpine/git:latest DONE
21: exec git rev-parse --symbolic-full-name HEAD CACHED
21: exec git rev-parse --symbolic-full-name HEAD CACHED
16: resolve image config for docker.io/alpine/git:latest
16: > in from alpine/git
16: resolve image config for docker.io/alpine/git:latest DONE
26: exec git rev-parse HEAD CACHED
26: exec git rev-parse HEAD CACHED
16: resolve image config for docker.io/alpine/git:latest
16: > in from alpine/git
16: resolve image config for docker.io/alpine/git:latest DONE
23: pull docker.io/alpine/git:latest CACHED
23: > in from alpine/git
23: pull docker.io/alpine/git:latest CACHED
43: exec git log -1 --follow --format=%H -- .github/workflows/lint.yaml
43: [0.17s] 89e207f49397b681b56666da1ffab3b9d81c0e0c
43: exec git log -1 --follow --format=%H -- .github/workflows/lint.yaml DONE
6: resolve image config for ghcr.io/aweris/gale/tools/ghx:v0.0.7
6: > in from ghcr.io/aweris/gale/tools/ghx:v0.0.7
6: resolve image config for ghcr.io/aweris/gale/tools/ghx:v0.0.7 DONE
64: resolve image config for ghcr.io/aweris/gale/services/artifact:v0.0.7
64: > in from ghcr.io/aweris/gale/services/artifact:v0.0.7
64: ...
63: resolve image config for ghcr.io/aweris/gale/services/artifactcache:v0.0.7 DONE
63: > in from ghcr.io/aweris/gale/services/artifactcache:v0.0.7
63: resolve image config for ghcr.io/aweris/gale/services/artifactcache:v0.0.7 DONE
64: resolve image config for ghcr.io/aweris/gale/services/artifact:v0.0.7 DONE
64: > in from ghcr.io/aweris/gale/services/artifact:v0.0.7
64: resolve image config for ghcr.io/aweris/gale/services/artifact:v0.0.7 DONE
78: resolve image config for ghcr.io/aweris/gale/runner/ubuntu:22.04
78: > in from ghcr.io/aweris/gale/runner/ubuntu:22.04
78: resolve image config for ghcr.io/aweris/gale/runner/ubuntu:22.04 DONE
135: pull ghcr.io/aweris/gale/services/artifactcache:v0.0.7
135: > in from ghcr.io/aweris/gale/services/artifactcache:v0.0.7
135: > in service cc9hc0oab4qtk.30g09cbj69b8k.dagger.local
135: resolve ghcr.io/aweris/gale/services/artifactcache:v0.0.7@sha256:20cc9a7822a0bf6401797c4b67f05f8e0bc4be196aec88a555aea7ff2d710ce9
135: resolve ghcr.io/aweris/gale/services/artifactcache:v0.0.7@sha256:20cc9a7822a0bf6401797c4b67f05f8e0bc4be196aec88a555aea7ff2d710ce9 [0.01s]
135: pull ghcr.io/aweris/gale/services/artifactcache:v0.0.7 DONE
133: pull ghcr.io/aweris/gale/services/artifact:v0.0.7
133: > in from ghcr.io/aweris/gale/services/artifact:v0.0.7
133: > in service h94g2296tkk2u.30g09cbj69b8k.dagger.local
133: resolve ghcr.io/aweris/gale/services/artifact:v0.0.7@sha256:fef3d8d45d0b2a2c30ab0dca5cec149a0cbdb4a2f74923d63abbbf9099abdb1d
133: resolve ghcr.io/aweris/gale/services/artifact:v0.0.7@sha256:fef3d8d45d0b2a2c30ab0dca5cec149a0cbdb4a2f74923d63abbbf9099abdb1d [0.01s]
133: pull ghcr.io/aweris/gale/services/artifact:v0.0.7 DONE
135: pull ghcr.io/aweris/gale/services/artifactcache:v0.0.7 CACHED
135: > in from ghcr.io/aweris/gale/services/artifactcache:v0.0.7
135: > in service cc9hc0oab4qtk.30g09cbj69b8k.dagger.local
135: pull ghcr.io/aweris/gale/services/artifactcache:v0.0.7 CACHED
133: pull ghcr.io/aweris/gale/services/artifact:v0.0.7 CACHED
133: > in from ghcr.io/aweris/gale/services/artifact:v0.0.7
133: > in service h94g2296tkk2u.30g09cbj69b8k.dagger.local
133: pull ghcr.io/aweris/gale/services/artifact:v0.0.7 CACHED
134: exec /entrypoint
134: > in service cc9hc0oab4qtk.30g09cbj69b8k.dagger.local
134: [0.14s] Starting server on port 8080
134: ...
139: mkdir / DONE
139: mkdir / DONE
139: mkdir / CACHED
139: mkdir / CACHED
144: pull ghcr.io/aweris/gale/runner/ubuntu:22.04
144: > in from ghcr.io/aweris/gale/runner/ubuntu:22.04
144: resolve ghcr.io/aweris/gale/runner/ubuntu:22.04@sha256:612d0fee8124ba5e09142023930c8b358af116de4ddcd236f57a88d302173584
144: resolve ghcr.io/aweris/gale/runner/ubuntu:22.04@sha256:612d0fee8124ba5e09142023930c8b358af116de4ddcd236f57a88d302173584 [0.01s]
144: pull ghcr.io/aweris/gale/runner/ubuntu:22.04 DONE
138: mkfile /job_run.json
138: ...
143: copy /ghx /usr/local/bin/ghx CACHED
143: copy /ghx /usr/local/bin/ghx CACHED
141: mkdir /home/runner/_temp/ghx/runs/9
141: ...
138: mkfile /job_run.json DONE
138: mkfile /job_run.json DONE
137: copy / /home/runner/_temp/ghx/runs/9
137: ...
141: mkdir /home/runner/_temp/ghx/runs/9 DONE
141: mkdir /home/runner/_temp/ghx/runs/9 DONE
140: mkfile /home/runner/_temp/ghx/runs/9/event.json
140: ...
137: copy / /home/runner/_temp/ghx/runs/9 DONE
137: copy / /home/runner/_temp/ghx/runs/9 DONE
140: mkfile /home/runner/_temp/ghx/runs/9/event.json DONE
140: mkfile /home/runner/_temp/ghx/runs/9/event.json DONE
132: exec /entrypoint
132: > in service h94g2296tkk2u.30g09cbj69b8k.dagger.local
132: ...
145: exec /usr/local/bin/ghx run 9 ERROR: process "/usr/local/bin/ghx run 9" did not complete successfully: exit code: 1
145: [2.86s] Error: EOF: 1: connect
145: [2.86s] 1: > in init
145: [2.86s] 1: starting engine
145: [2.86s] failed to list containers: fork/exec /usr/bin/docker: exec format error
145: [2.86s] 1: starting engine [1.05s]
145: [2.86s] 1: connect ERROR: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
145: [2.86s] Error: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
145: [2.86s]
145: [2.86s] Please visit https://dagger.io/help#go for troubleshooting guidance.
145: [2.86s] Usage:
145: [2.86s] ghx run <run-id> [flags]
145: [2.86s]
145: [2.86s] Flags:
145: [2.86s] -h, --help help for run
145: [2.86s]
145: [2.86s] Global Flags:
145: [2.86s] --home string home directory for ghx (default "/home/runner/_temp/ghx")
145: [2.86s]
145: [2.86s] Error executing command: EOF: 1: connect
145: [2.86s] 1: > in init
145: [2.86s] 1: starting engine
145: [2.86s] failed to list containers: fork/exec /usr/bin/docker: exec format error
145: [2.86s] 1: starting engine [1.05s]
145: [2.86s] 1: connect ERROR: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
145: [2.86s] Error: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
145: [2.86s]
145: [2.86s] Please visit https://dagger.io/help#go for troubleshooting guidance.
145: exec /usr/local/bin/ghx run 9 ERROR: process "/usr/local/bin/ghx run 9" did not complete successfully: exit code: 1
132: exec /entrypoint
132: > in service h94g2296tkk2u.30g09cbj69b8k.dagger.local
Error: input:1: container.from.withUnixSocket.withFile.withEnvVariable.withMountedCache.withServiceBinding.withEnvVariable.withEnvVariable.withServiceBinding.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withMountedCache.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withMountedDirectory.withWorkdir.withSecretVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withNewFile.withEnvVariable.withMountedSecret.withDirectory.withExec.sync process "/usr/local/bin/ghx run 9" did not complete successfully: exit code: 1
Stdout:
Error executing command: EOF: 1: connect
1: > in init
1: starting engine
failed to list containers: fork/exec /usr/bin/docker: exec format error
1: starting engine [1.05s]
1: connect ERROR: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Error: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Please visit https://dagger.io/help#go for troubleshooting guidance.
Stderr:
Error: EOF: 1: connect
1: > in init
1: starting engine
failed to list containers: fork/exec /usr/bin/docker: exec format error
1: starting engine [1.05s]
1: connect ERROR: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Error: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Please visit https://dagger.io/help#go for troubleshooting guidance.
Usage:
ghx run <run-id> [flags]
Flags:
-h, --help help for run
Global Flags:
--home string home directory for ghx (default "/home/runner/_temp/ghx")
Please visit https://dagger.io/help#go for troubleshooting guidance.
Error executing command: input:1: container.from.withUnixSocket.withFile.withEnvVariable.withMountedCache.withServiceBinding.withEnvVariable.withEnvVariable.withServiceBinding.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withMountedCache.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withMountedDirectory.withWorkdir.withSecretVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withEnvVariable.withNewFile.withEnvVariable.withMountedSecret.withDirectory.withExec.sync process "/usr/local/bin/ghx run 9" did not complete successfully: exit code: 1
Stdout:
Error executing command: EOF: 1: connect
1: > in init
1: starting engine
failed to list containers: fork/exec /usr/bin/docker: exec format error
1: starting engine [1.05s]
1: connect ERROR: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Error: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Please visit https://dagger.io/help#go for troubleshooting guidance.
Stderr:
Error: EOF: 1: connect
1: > in init
1: starting engine
failed to list containers: fork/exec /usr/bin/docker: exec format error
1: starting engine [1.05s]
1: connect ERROR: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Error: new client: failed to run container: : fork/exec /usr/bin/docker: exec format error
Please visit https://dagger.io/help#go for troubleshooting guidance.
Usage:
ghx run <run-id> [flags]
Flags:
-h, --help help for run
Global Flags:
--home string home directory for ghx (default "/home/runner/_temp/ghx")
Please visit https://dagger.io/help#go for troubleshooting guidance.
Ubuntu 22.04.2 LTS
{
"major": "0",
"minor": "0",
"gitVersion": "v0.0.7",
"gitCommit": "c5ea88e6a3fdf9c75d1923bda919365bfe4126de",
"commitDate": "2023-09-04T12:48:02Z",
"goVersion": "go1.20.7",
"compiler": "gc",
"platform": "linux/amd64"
}
Docker version 24.0.6, build ed223bc
gh version 2.34.0 (2023-09-06)
curl -sfLo install.sh https://raw.githubusercontent.com/aweris/gale/main/hack/install.sh
sh -x ./install.sh
++ curl -s https://api.github.com/repos/aweris/gale/releases/latest
++ grep '"tag_name":'
++ sed -E 's/.*"([^"]+)".*/\1/'
+ RELEASE=v0.0.4
+ GALE_VERSION=v0.0.4
+ BIN_DIR=.
+ main
+ local os
++ uname -s
++ tr '[:upper:]' '[:lower:]'
+ os=linux
+ local arch
++ uname -m
++ tr '[:upper:]' '[:lower:]'
+ arch=x86_64
+ [[ -z v0.0.4 ]]
+ install_gale v0.0.4 linux x86_64
+ local version=v0.0.4
+ local os=linux
+ local arch=x86_64
+ local file_name=gale_v0.0.4_linux_x86_64.tar.gz
+ local download_url=https://github.com/aweris/gale/releases/download/v0.0.4/gale_v0.0.4_linux_x86_64.tar.gz
+ echo 'Downloading https://github.com/aweris/gale/releases/download/v0.0.4/gale_v0.0.4_linux_x86_64.tar.gz'
Downloading https://github.com/aweris/gale/releases/download/v0.0.4/gale_v0.0.4_linux_x86_64.tar.gz
+ curl -sL https://github.com/aweris/gale/releases/download/v0.0.4/gale_v0.0.4_linux_x86_64.tar.gz
+ tar xz -C .
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
+ echo 'Installed gale v0.0.4 to .'
Installed gale v0.0.4 to .
+ ./gale version
./install.sh: line 20: ./gale: Is a directory
+ echo Done.
Done.
The URL in the script is set to https://github.com/aweris/gale/releases/download/v0.0.4/gale_v0.0.4_linux_x86_64.tar.gz. It should be https://github.com/aweris/gale/releases/download/v0.0.4/gale_v0.0.4_linux_amd64.tar.gz, as per https://github.com/aweris/gale/releases/tag/v0.0.4.
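The root cause is that the script uses raw `uname -m` output in the asset filename, while the release assets use Go-style architecture names. The mapping the installer needs, sketched in Go (the function name is illustrative):

```go
package main

import "fmt"

// releaseArch maps `uname -m` output to the architecture names used
// in the gale release asset filenames (amd64/arm64, per the v0.0.4
// release page).
func releaseArch(unameM string) string {
	switch unameM {
	case "x86_64":
		return "amd64"
	case "aarch64", "arm64":
		return "arm64"
	default:
		return unameM
	}
}

func main() {
	fmt.Println(releaseArch("x86_64")) // amd64
}
```

In the shell installer the same fix is a small case statement on `$arch` before building the download URL.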
To ensure compatibility with GitHub Actions, gale should support GitHub's reusable workflows, which streamline CI/CD processes and promote code reuse across repositories.
gale does not currently recognize or execute GitHub Actions' reusable workflows. gale should accurately replicate the behavior of GitHub Actions by recognizing and executing reusable workflows as if they were being run on GitHub.
When gale and ghx import different dagger versions, it fails with the following error.
failed to get exit code: input:1: container.from Unavailable: connection error: desc = "error reading server preface: command [docker exec -i dagger-engine-e7ae55a590c74d2e buildctl dial-stdio] has exited with exit status 1, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=Error response from daemon: No such container: dagger-engine-e7ae55a590c74d2e\n"
We should ensure they're always using the same version. Maybe we should consider a mono repo again.
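Short of a mono repo, a fail-fast check could catch the mismatch early. A sketch, assuming both binaries embed their dagger SDK version at build time via `-ldflags "-X ..."` (the variable names and values below are hypothetical):

```go
package main

import "fmt"

// These would be set at build time, e.g.
// go build -ldflags "-X main.galeDaggerVersion=v0.8.4".
var (
	galeDaggerVersion = "v0.8.4" // hypothetical build-time value
	ghxDaggerVersion  = "v0.8.4" // hypothetical build-time value
)

// checkVersions fails fast with a clear message instead of letting
// mismatched engines surface as the opaque
// "No such container: dagger-engine-..." error.
func checkVersions() error {
	if galeDaggerVersion != ghxDaggerVersion {
		return fmt.Errorf("gale uses dagger %s but ghx uses %s; rebuild both against the same SDK",
			galeDaggerVersion, ghxDaggerVersion)
	}
	return nil
}

func main() {
	if err := checkVersions(); err != nil {
		fmt.Println(err)
	}
}
```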
Our error handling system is inadequate, and resolving issues such as #28 is proving to be a daunting task. To improve the UX of the CLI, we must implement clear and easy-to-understand error messages.
This issue has been created to monitor and address this problem.
I've tried with version v0.0.1 and main (commit a635d5802ba3f0eb11945fd0d7f9b9e752c5ed64), both with the same result:
~/forks/gale/gale --workflow foo --job bar
Connected to engine b13dbd03d7d0
failed to get exit code: input:1: Syntax Error GraphQL request (1:122) Unexpected )
1: query{container{from(address:"ghcr.io/catthehacker/ubuntu:act-22.04"){withUnixSocket(path:"/var/run/docker.sock", source:){withFile(path:"/usr/local/bin/ghx", source:){withEnvVariable(name:"RUNNER_NAME", value:"Gale Agent"){withEnvVariable(value:"/home/runner/_temp", name:"RUNNER_TEMP"){withEnvVariable(name:"RUNNER_OS", value:"linux"){withEnvVariable(name:"RUNNER_ARCH", value:"x64"){withEnvVariable(name:"RUNNER_TOOL_CACHE", value:"/opt/hostedtoolcache"){withEnvVariable(name:"RUNNER_DEBUG", value:""){withDirectory(directory:, path:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_WORKSPACE", value:"/home/runner/work/hf-model-ops/hf-model-ops"){withWorkdir(path:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_RUN_NUMBER", value:"1"){withEnvVariable(name:"CI", value:"false"){withEnvVariable(value:"", name:"GITHUB_BASE_REF"){withEnvVariable(name:"GITHUB_EVENT_PATH", value:"/home/runner/_temp/workflow/event.json"){withEnvVariable(name:"GITHUB_REF_TYPE", value:""){withEnvVariable(value:"R_kgDOJhldTA", name:"GITHUB_REPOSITORY_ID"){withEnvVariable(name:"GITHUB_ACTION_PATH", value:""){withEnvVariable(name:"GITHUB_HEAD_REF", value:""){withEnvVariable(name:"GITHUB_REF_PROTECTED", value:"false"){withEnvVariable(name:"GITHUB_ACTIONS", value:"false"){withEnvVariable(name:"GITHUB_RUN_ID", value:"1"){withEnvVariable(name:"GITHUB_SERVER_URL", value:"https://github.com"){withEnvVariable(name:"GITHUB_TOKEN", value:"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\n"){withEnvVariable(name:"GITHUB_REPOSITORY_OWNER_ID", value:"MDQ6VXNlcjIxNjQ4Nw=="){withEnvVariable(name:"GITHUB_WORKSPACE", value:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_WORKFLOW_SHA", value:""){withEnvVariable(name:"GITHUB_ACTION", value:""){withEnvVariable(name:"GITHUB_ACTOR", value:"samalba"){withEnvVariable(name:"GITHUB_API_URL", value:"https://api.github.com"){withEnvVariable(name:"GITHUB_REPOSITORY", 
value:"samalba/hf-model-ops"){withEnvVariable(name:"GITHUB_RUN_ATTEMPT", value:"1"){withEnvVariable(name:"GITHUB_REF_NAME", value:""){withEnvVariable(value:"samalba", name:"GITHUB_REPOSITORY_OWNER"){withEnvVariable(name:"GITHUB_SHA", value:""){withEnvVariable(name:"GITHUB_ACTION_REPOSITORY", value:""){withEnvVariable(name:"GITHUB_ACTOR_ID", value:"216487"){withEnvVariable(value:"", name:"GITHUB_PATH"){withEnvVariable(value:"", name:"GITHUB_WORKFLOW"){withEnvVariable(name:"GITHUB_RETENTION_DAYS", value:"0"){withEnvVariable(name:"GITHUB_WORKFLOW_REF", value:""){withEnvVariable(name:"GITHUB_ENV", value:""){withEnvVariable(name:"GITHUB_EVENT_NAME", value:"push"){withEnvVariable(name:"GITHUB_GRAPHQL_URL", value:"https://api.github.com/graphql"){withEnvVariable(name:"GITHUB_JOB", value:""){withEnvVariable(name:"GITHUB_REF", value:""){withNewFile(path:"/home/runner/_temp/workflow/event.json", contents:"{}", permissions:420){withExec(args:["ghx","with","job","--workflow=foo","--job=bar"]){withExec(args:["ghx","run"]){directory(path:"/home/runner/_temp/ghx"){file(path:"exit-code"){contents}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}
^
Please visit https://dagger.io/help#go for troubleshooting guidance.
Error executing job: <nil>Error: input:1: Syntax Error GraphQL request (1:122) Unexpected )
1: query{container{from(address:"ghcr.io/catthehacker/ubuntu:act-22.04"){withUnixSocket(path:"/var/run/docker.sock", source:){withFile(path:"/usr/local/bin/ghx", source:){withEnvVariable(name:"RUNNER_NAME", value:"Gale Agent"){withEnvVariable(value:"/home/runner/_temp", name:"RUNNER_TEMP"){withEnvVariable(name:"RUNNER_OS", value:"linux"){withEnvVariable(name:"RUNNER_ARCH", value:"x64"){withEnvVariable(name:"RUNNER_TOOL_CACHE", value:"/opt/hostedtoolcache"){withEnvVariable(name:"RUNNER_DEBUG", value:""){withDirectory(directory:, path:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_WORKSPACE", value:"/home/runner/work/hf-model-ops/hf-model-ops"){withWorkdir(path:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_RUN_NUMBER", value:"1"){withEnvVariable(name:"CI", value:"false"){withEnvVariable(value:"", name:"GITHUB_BASE_REF"){withEnvVariable(name:"GITHUB_EVENT_PATH", value:"/home/runner/_temp/workflow/event.json"){withEnvVariable(name:"GITHUB_REF_TYPE", value:""){withEnvVariable(value:"R_kgDOJhldTA", name:"GITHUB_REPOSITORY_ID"){withEnvVariable(name:"GITHUB_ACTION_PATH", value:""){withEnvVariable(name:"GITHUB_HEAD_REF", value:""){withEnvVariable(name:"GITHUB_REF_PROTECTED", value:"false"){withEnvVariable(name:"GITHUB_ACTIONS", value:"false"){withEnvVariable(name:"GITHUB_RUN_ID", value:"1"){withEnvVariable(name:"GITHUB_SERVER_URL", value:"https://github.com"){withEnvVariable(name:"GITHUB_TOKEN", value:"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\n"){withEnvVariable(name:"GITHUB_REPOSITORY_OWNER_ID", value:"MDQ6VXNlcjIxNjQ4Nw=="){withEnvVariable(name:"GITHUB_WORKSPACE", value:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_WORKFLOW_SHA", value:""){withEnvVariable(name:"GITHUB_ACTION", value:""){withEnvVariable(name:"GITHUB_ACTOR", value:"samalba"){withEnvVariable(name:"GITHUB_API_URL", value:"https://api.github.com"){withEnvVariable(name:"GITHUB_REPOSITORY", 
value:"samalba/hf-model-ops"){withEnvVariable(name:"GITHUB_RUN_ATTEMPT", value:"1"){withEnvVariable(name:"GITHUB_REF_NAME", value:""){withEnvVariable(value:"samalba", name:"GITHUB_REPOSITORY_OWNER"){withEnvVariable(name:"GITHUB_SHA", value:""){withEnvVariable(name:"GITHUB_ACTION_REPOSITORY", value:""){withEnvVariable(name:"GITHUB_ACTOR_ID", value:"216487"){withEnvVariable(value:"", name:"GITHUB_PATH"){withEnvVariable(value:"", name:"GITHUB_WORKFLOW"){withEnvVariable(name:"GITHUB_RETENTION_DAYS", value:"0"){withEnvVariable(name:"GITHUB_WORKFLOW_REF", value:""){withEnvVariable(name:"GITHUB_ENV", value:""){withEnvVariable(name:"GITHUB_EVENT_NAME", value:"push"){withEnvVariable(name:"GITHUB_GRAPHQL_URL", value:"https://api.github.com/graphql"){withEnvVariable(name:"GITHUB_JOB", value:""){withEnvVariable(name:"GITHUB_REF", value:""){withNewFile(path:"/home/runner/_temp/workflow/event.json", contents:"{}", permissions:420){withExec(args:["ghx","with","job","--workflow=foo","--job=bar"]){withExec(args:["ghx","run"]){directory(path:"/home/runner/_temp/ghx"){file(path:"exit-code"){contents}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}
^
Please visit https://dagger.io/help#go for troubleshooting guidance.
Usage:
gale [flags]
gale [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
version Print version information
Flags:
--export Export the runner directory after the execution. Exported directory will be placed under .gale directory in the current directory.
-h, --help help for gale
--job string Name of the job
--workflow string Name of the workflow. If workflow doesn't have name, than it must be relative path to the workflow file
Use "gale [command] --help" for more information about a command.
Error executing command: input:1: Syntax Error GraphQL request (1:122) Unexpected )
1: query{container{from(address:"ghcr.io/catthehacker/ubuntu:act-22.04"){withUnixSocket(path:"/var/run/docker.sock", source:){withFile(path:"/usr/local/bin/ghx", source:){withEnvVariable(name:"RUNNER_NAME", value:"Gale Agent"){withEnvVariable(value:"/home/runner/_temp", name:"RUNNER_TEMP"){withEnvVariable(name:"RUNNER_OS", value:"linux"){withEnvVariable(name:"RUNNER_ARCH", value:"x64"){withEnvVariable(name:"RUNNER_TOOL_CACHE", value:"/opt/hostedtoolcache"){withEnvVariable(name:"RUNNER_DEBUG", value:""){withDirectory(directory:, path:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_WORKSPACE", value:"/home/runner/work/hf-model-ops/hf-model-ops"){withWorkdir(path:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_RUN_NUMBER", value:"1"){withEnvVariable(name:"CI", value:"false"){withEnvVariable(value:"", name:"GITHUB_BASE_REF"){withEnvVariable(name:"GITHUB_EVENT_PATH", value:"/home/runner/_temp/workflow/event.json"){withEnvVariable(name:"GITHUB_REF_TYPE", value:""){withEnvVariable(value:"R_kgDOJhldTA", name:"GITHUB_REPOSITORY_ID"){withEnvVariable(name:"GITHUB_ACTION_PATH", value:""){withEnvVariable(name:"GITHUB_HEAD_REF", value:""){withEnvVariable(name:"GITHUB_REF_PROTECTED", value:"false"){withEnvVariable(name:"GITHUB_ACTIONS", value:"false"){withEnvVariable(name:"GITHUB_RUN_ID", value:"1"){withEnvVariable(name:"GITHUB_SERVER_URL", value:"https://github.com"){withEnvVariable(name:"GITHUB_TOKEN", value:"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\n"){withEnvVariable(name:"GITHUB_REPOSITORY_OWNER_ID", value:"MDQ6VXNlcjIxNjQ4Nw=="){withEnvVariable(name:"GITHUB_WORKSPACE", value:"/home/runner/work/hf-model-ops/hf-model-ops"){withEnvVariable(name:"GITHUB_WORKFLOW_SHA", value:""){withEnvVariable(name:"GITHUB_ACTION", value:""){withEnvVariable(name:"GITHUB_ACTOR", value:"samalba"){withEnvVariable(name:"GITHUB_API_URL", value:"https://api.github.com"){withEnvVariable(name:"GITHUB_REPOSITORY", 
value:"samalba/hf-model-ops"){withEnvVariable(name:"GITHUB_RUN_ATTEMPT", value:"1"){withEnvVariable(name:"GITHUB_REF_NAME", value:""){withEnvVariable(value:"samalba", name:"GITHUB_REPOSITORY_OWNER"){withEnvVariable(name:"GITHUB_SHA", value:""){withEnvVariable(name:"GITHUB_ACTION_REPOSITORY", value:""){withEnvVariable(name:"GITHUB_ACTOR_ID", value:"216487"){withEnvVariable(value:"", name:"GITHUB_PATH"){withEnvVariable(value:"", name:"GITHUB_WORKFLOW"){withEnvVariable(name:"GITHUB_RETENTION_DAYS", value:"0"){withEnvVariable(name:"GITHUB_WORKFLOW_REF", value:""){withEnvVariable(name:"GITHUB_ENV", value:""){withEnvVariable(name:"GITHUB_EVENT_NAME", value:"push"){withEnvVariable(name:"GITHUB_GRAPHQL_URL", value:"https://api.github.com/graphql"){withEnvVariable(name:"GITHUB_JOB", value:""){withEnvVariable(name:"GITHUB_REF", value:""){withNewFile(path:"/home/runner/_temp/workflow/event.json", contents:"{}", permissions:420){withExec(args:["ghx","with","job","--workflow=foo","--job=bar"]){withExec(args:["ghx","run"]){directory(path:"/home/runner/_temp/ghx"){file(path:"exit-code"){contents}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}
^
Please visit https://dagger.io/help#go for troubleshooting guidance
After some digging, it turns out it's because gale can't find the Docker socket under /var/run/docker.sock (I am using colima). Reading DOCKER_HOST from the env should fix it.
The action is stuck at the "Booting builder" step. Probably some Docker config or socket is missing in the Dagger container.
Related call: https://github.com/docker/setup-buildx-action/blob/ecf95283f03858871ff00b787d79c419715afc34/src/main.ts#L104C1-L107
Logs:
│ │ Run docker/setup-buildx-action@4b4e9c3e2d4531116a6f8ba8e71fc6e2cb6e6c8c
│ │ │
│ │ │ Docker info
│ │ │ │
│ │ │ │ [command]/usr/bin/docker version
│ │ │ │ Client:
│ │ │ │ Version: 20.10.25+azure-1
│ │ │ │ API version: 1.41
│ │ │ │ Go version: go1.19.9
│ │ │ │ Git commit: b82b9f3a0e763304a250531cb9350aa6d93723c9
│ │ │ │ Built: Thu Apr 6 10:55:17 UTC 2023
│ │ │ │ OS/Arch: linux/arm64
│ │ │ │ Context: default
│ │ │ │ Experimental: true
│ │ │ │
│ │ │ │ Server: Docker Desktop 4.20.1 (110738)
│ │ │ │ Engine:
│ │ │ │ Version: 24.0.2
│ │ │ │ API version: 1.43 (minimum version 1.12)
│ │ │ │ Go version: go1.20.4
│ │ │ │ Git commit: 659604f
│ │ │ │ Built: Thu May 25 21:50:59 2023
│ │ │ │ OS/Arch: linux/arm64
│ │ │ │ Experimental: false
│ │ │ │ containerd:
│ │ │ │ Version: 1.6.21
│ │ │ │ GitCommit: 3dce8eb055cbb6872793272b4f20ed16117344f8
│ │ │ │ runc:
│ │ │ │ Version: 1.1.7
│ │ │ │ GitCommit: v1.1.7-0-g860f061
│ │ │ │ docker-init:
│ │ │ │ Version: 0.19.0
│ │ │ │ GitCommit: de40ad0
│ │ │ │ [command]/usr/bin/docker info
│ │ │ │ Client:
│ │ │ │ Context: default
│ │ │ │ Debug Mode: false
│ │ │ │ Plugins:
│ │ │ │ buildx: Docker Buildx (Docker Inc., 0.10.4+azure-1)
│ │ │ │ compose: Docker Compose (Docker Inc., 2.18.0+azure-1)
│ │ │ │
│ │ │ │ Server:
│ │ │ │ Containers: 37
│ │ │ │ Running: 37
│ │ │ │ Paused: 0
│ │ │ │ Stopped: 0
│ │ │ │ Images: 16
│ │ │ │ Server Version: 24.0.2
│ │ │ │ Storage Driver: overlay2
│ │ │ │ Backing Filesystem: extfs
│ │ │ │ Supports d_type: true
│ │ │ │ Using metacopy: false
│ │ │ │ Native Overlay Diff: true
│ │ │ │ userxattr: false
│ │ │ │ Logging Driver: json-file
│ │ │ │ Cgroup Driver: cgroupfs
│ │ │ │ Cgroup Version: 2
│ │ │ │ Plugins:
│ │ │ │ Volume: local
│ │ │ │ Network: bridge host ipvlan macvlan null overlay
│ │ │ │ Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
│ │ │ │ Swarm: inactive
│ │ │ │ Runtimes: io.containerd.runc.v2 runc
│ │ │ │ Default Runtime: runc
│ │ │ │ Init Binary: docker-init
│ │ │ │ containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
│ │ │ │ runc version: v1.1.7-0-g860f061
│ │ │ │ init version: de40ad0
│ │ │ │ Security Options:
│ │ │ │ seccomp
│ │ │ │ Profile: builtin
│ │ │ │ cgroupns
│ │ │ │ Kernel Version: 5.15.49-linuxkit-pr
│ │ │ │ Operating System: Docker Desktop
│ │ │ │ OSType: linux
│ │ │ │ Architecture: aarch64
│ │ │ │ CPUs: 8
│ │ │ │ Total Memory: 31.31GiB
│ │ │ │ Name: docker-desktop
│ │ │ │ ID: 9ed3f7ec-31c8-425d-9048-71adad2c1592
│ │ │ │ Docker Root Dir: /var/lib/docker
│ │ │ │ Debug Mode: false
│ │ │ │ HTTP Proxy: http.docker.internal:3128
│ │ │ │ HTTPS Proxy: http.docker.internal:3128
│ │ │ │ No Proxy: hubproxy.docker.internal
│ │ │ │ Registry: https://index.docker.io/v1/
│ │ │ │ Labels:
│ │ │ │ Experimental: false
│ │ │ │ Insecure Registries:
│ │ │ │ hubproxy.docker.internal:5555
│ │ │ │ 127.0.0.0/8
│ │ │ │ Live Restore Enabled: false
│ │ │ │
│ │ │ │
│ │ │ Buildx version
│ │ │ │
│ │ │ │ [command]/usr/bin/docker buildx version
│ │ │ │ github.com/docker/buildx 0.10.4+azure-1 c513d34049e499c53468deac6c4267ee72948f02
│ │ │ │
│ │ │
│ │ │ Creating a new builder instance
│ │ │ │
│ │ │ │ [command]/usr/bin/docker buildx create --name builder-a2fbf9e8-eee7-4386-81e1-0801660d85f1 --driver docker-container --buildkitd-flags --allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host --use
│ │ │ │ builder-a2fbf9e8-eee7-4386-81e1-0801660d85f1
│ │ │ │
│ │ │ Booting builder
│ │ │ │
│ │ │ │ [command]/usr/bin/docker buildx inspect --bootstrap --builder builder-a2fbf9e8-eee7-4386-81e1-0801660d85f1
│ │ │ │ #1 [internal] booting buildkit
│ │ │ │ #1 pulling image moby/buildkit:buildx-stable-1
│ │ │ │ #1 pulling image moby/buildkit:buildx-stable-1 3.7s done
│ │ │ │ #1 creating container buildx_buildkit_builder-a2fbf9e8-eee7-4386-81e1-0801660d85f10
│ ✻
In case you're wondering, this is Act.
Currently, gale does not support PowerShell (pwsh) as a shell parameter in run steps. The primary reasons for this omission are the lack of PowerShell in our default run environment and the absence of preliminary testing to ensure smooth integration.
The implementation of PowerShell support was initially postponed due to concerns about testing and environment setup. A significant point of deliberation has been whether to include pwsh in the default runner image or to require a custom runner image. To keep this update focused and manageable, we should concentrate solely on the integration and testing of PowerShell; any remaining issues or considerations can be tackled in subsequent phases or updates.
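For reference, this is the kind of run step that would need to work once pwsh is available in the runner image (a hypothetical workflow snippet, not taken from gale's test suite):

```yaml
# Hypothetical workflow step exercising the pwsh shell parameter
steps:
  - name: Run a PowerShell script
    shell: pwsh
    run: |
      Write-Host "PowerShell version: $($PSVersionTable.PSVersion)"
```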
When discussing GitHub Actions and Act, a problem I keep hearing about is "the big fat image". In order to reproduce the original GitHub Actions environment in a container, you need to download a massive Docker image, tens of GB in size. That makes things slow and expensive, and makes many other improvements harder.
So it seems that this "BFI" is a bottleneck; and removing that bottleneck would make a lot of people in the Github Actions ecosystem happier and more productive. But how?
I am starting this issue to invite conversation and debate on this topic. I know @aweris and @tiborvass have opinions. Perhaps @kjuulh and @cpdeethree too? :)
To get us started, I will list possible solutions that were mentioned to me at some point.
Just keep the BFI. This is the default option, where we decide this problem is not really that big of a problem in practice. Therefore there is no need to change what "ain't broke".
Dockerfiles. Just write a custom Dockerfile (or equivalent) describing the dependencies for your particular workflow. As your workflow changes and evolves, it's your responsibility to keep that Dockerfile up to date. This involves trial and error, since the upstream developer of a GitHub Action doesn't share a dependency list with you: you have to run the thing, wait for an error, then try again. Sounds cumbersome, but maybe in practice it's fine?
Error parsing. Dynamically run the thing, catch errors, dynamically infer from the error message what is missing, modify the container image accordingly, then try again. Do this with just the right balance of magic and manual configuration, so that it's easy to get started, but you never get stuck when the default configuration doesn't work. For example, "command not found: go" would be caught and resolved with "apk add golang" - note that my example involves a distro-specific solution.
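The error-parsing idea can be sketched in a few lines. This is a rough illustration only: the function name, the regex, and the package table are all hypothetical, and as noted above any real mapping would be distro-specific (Alpine here).

```go
package main

import (
	"fmt"
	"regexp"
)

// toolPackages maps a missing command to the package that provides it.
// Hypothetical, Alpine-specific table mirroring the
// "command not found: go" -> "apk add golang" example above.
var toolPackages = map[string]string{
	"go":   "golang",
	"node": "nodejs",
}

var notFoundRe = regexp.MustCompile(`command not found: (\S+)`)

// suggestFix inspects captured stderr and, when it matches the
// "command not found" pattern and the tool is known, returns the
// install command the runner could apply before retrying the step.
func suggestFix(stderr string) (fix string, ok bool) {
	m := notFoundRe.FindStringSubmatch(stderr)
	if m == nil {
		return "", false
	}
	pkg, known := toolPackages[m[1]]
	if !known {
		return "", false
	}
	return "apk add " + pkg, true
}

func main() {
	fix, ok := suggestFix("sh: command not found: go")
	fmt.Println(fix, ok) // apk add golang true
}
```

The hard part, as the paragraph says, is balancing this kind of magic with manual configuration so users never get stuck when the table has no entry.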
Syscall tracing. Same idea as "error parsing", but uses strace and other system-level tracing to catch errors at a deeper layer. Same general problem of balancing magic and manual control, as discussed with @aweris.
stargz: take advantage of Dagger/buildkit's support for stargz to keep the "BFI", but make everything fast and lightweight because only the files that are needed get downloaded, just in time. This could be the best of both worlds: no magical tooling to develop or massive packaging/annotation effort to scale to the whole Github Actions ecosystem; but we get the benefits anyway. I am lazy so I am instinctively drawn to the solution with the most benefits and the least work needed :) But there may be a catch that makes this option simply infeasible. cc @sipsma @tiborvass
Other?
Given the intricacy of the project and its multitude of dependencies and moving components, troubleshooting and discerning meaningful messages from errors is increasingly challenging. To enhance the user experience, we need a system of preflight checks that ensures all prerequisites are met before the primary execution of tasks or features.
Some examples of current complications include:
Runner Image Dependencies: Our project assumes that the runner image contains all required binaries. For instance, if pwsh is missing, any GitHub Actions step using pwsh as its shell will fail.
Features Still in Development: There are instances where we inadvertently run workflows that rely on incomplete or in-development features, leading to unforeseen errors or behaviors.
Local Configurations: Our tool requires specific configurations from the localhost. Any misconfiguration or oversight can result in unexpected issues.
Introduce a validate Command (or Flag): A new command, or a flag on the run command such as --validate, could be introduced. When invoked, this would execute a series of preflight checks and report any issues or missing dependencies.
Graceful Failure: In scenarios where the requirements are not met, the tool should provide a clear and helpful error message pointing the user to potential resolutions. This approach ensures that any failures happen gracefully and informatively.
Extensible Checks: As the project grows, new checks might be needed. The preflight system should be built in a way that allows for easy additions or modifications to the checks performed.
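The extensible-checks idea above could look something like this. All names here (Check, Preflight, binaryExists) are hypothetical, a sketch of the design rather than gale's actual API; new checks are added by appending to the slice.

```go
package main

import (
	"fmt"
	"os/exec"
)

// Check is a single preflight check with a human-readable name.
type Check struct {
	Name string
	Run  func() error
}

// binaryExists builds a check that verifies a binary is on PATH,
// covering the "runner image dependencies" class of problems.
func binaryExists(name string) func() error {
	return func() error {
		if _, err := exec.LookPath(name); err != nil {
			return fmt.Errorf("required binary %q not found in PATH", name)
		}
		return nil
	}
}

// Preflight runs all checks and collects every failure instead of
// stopping at the first one, so users see all problems at once.
func Preflight(checks []Check) []error {
	var errs []error
	for _, c := range checks {
		if err := c.Run(); err != nil {
			errs = append(errs, fmt.Errorf("%s: %w", c.Name, err))
		}
	}
	return errs
}

func main() {
	checks := []Check{
		{Name: "docker cli", Run: binaryExists("docker")},
		{Name: "pwsh shell", Run: binaryExists("pwsh")},
	}
	for _, err := range Preflight(checks) {
		fmt.Println("preflight:", err)
	}
}
```

Reporting all failures together is what makes the failure "graceful": one run of the validate command gives the user the full list of missing prerequisites.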
Improved User Experience: By eliminating potential pitfalls before they occur, users have a smoother and more intuitive experience.
Reduced Troubleshooting Overhead: By preemptively identifying and addressing common issues, there will likely be fewer support inquiries and bug reports related to setup and configuration issues.
Enhanced Reliability: A project that can self-diagnose and provide feedback on its readiness is generally perceived as more robust and dependable.
Sorry for the vague title. I'm hoping to continue here a conversation that started live. The topic is: what should Project Gale prioritize? I see two options:
A better alternative to nektos/act. This means: drop-in compatibility for an entire GitHub Actions workflow, with the goal of testing it locally. Just like Act, but better in some way. Under the hood it leverages Dagger to be 10x faster, or simpler to use, or easier to customize, etc. But in this scenario you don't need to know about Dagger to use Gale; it's just an ingredient. The biggest challenge is compatibility: even a small incompatibility can ruin the experience ("why bother to test on Gale if I'll have to test again on GitHub?"), and the compatibility surface area is quite large. The first 80% is easy; the last 20% is very hard. The biggest benefit of this option is that the potential audience is huge: lots of people use GitHub Actions, many of them use Act, and many of those don't love it and would welcome an alternative.
A GitHub Actions loader for Dagger developers. This means: engineers already using Dagger to develop their pipelines, who are looking for a way to incorporate some of their legacy GitHub Actions into their new Dagger project with minimal effort (for example, to overcome objections from their team), get an easy way to do this. In just a few lines of code, boom, they can run the same GitHub Action with the same inputs, from code instead of YAML. This can greatly accelerate adoption of Dagger. This is what @kjuulh is looking for, for example. Pro: compatibility is easier, since the surface area is smaller (only one step at a time, possibly one job; it only needs to run from code; and the stakes are lower: if only 50% of my actions can be run, I can still use that 50% in my Dagger project). Con: smaller audience; you need to be a Dagger developer to enjoy the benefits of Gale. At the moment there are fewer Dagger developers than GitHub Actions users (although we hope to change that one day :)
NOTE: this may not change the core implementation of Gale. But it can change how it is packaged, documented, and marketed - quite a bit!
I would love to hear everyone's thoughts on this!
Option | What is it for? | Who is it for? | Pros | Cons |
---|---|---|---|---|
1 | Like Act, but better. Run your Github Actions workflow locally, for easier local testing | Github Actions users | Bigger audience | Unclear differentiation; Compatibility is a big problem |
2 | Run Github Actions from your Dagger pipeline. Makes it easier to adopt Dagger incrementally when already using Github Actions. | Teams who use Github Actions, and want to adopt Dagger | Solves a clear problem for Dagger developers; compatibility is less of a problem | Smaller audience |
To enhance the accuracy with which gale replicates GitHub Actions locally, we need to support various triggers and events. However, given the local context, certain events (e.g., external webhooks, repository dispatches) may not be feasible or logical to simulate. Our goal should be to support those that make sense in a local development environment.
- Currently, gale does not adequately simulate or recognize many GitHub Actions triggers and events.
- gale should be able to recognize and process those GitHub Actions events and triggers that are logical to simulate locally.
- There should be a mechanism in gale for users to manually specify or mock these triggers and events.
- Supported events should behave in gale in a manner resembling their GitHub counterparts.

The current implementation of loading custom actions from jobs.<job_id>.steps[*].uses only supports tags
as action references. We need to upgrade the action loader to handle tags, branches, and commits for better flexibility and compatibility with GitHub Actions.
Core logic happens in the getActionDirectory method: the value of jobs.<job_id>.steps[*].uses is passed as the source to getActionDirectory, where the reference is parsed using parseRepoRef
and assumed to be a tag.

To enhance gale's user experience, we aim to introduce a new parameter to the gale run command that allows users to export all generated artifacts (including cache, outputs, and logs) either to a user-specified location or to a default location. This will streamline the process of accessing and exporting crucial artifacts produced during the run.
- Currently, gale generates and handles artifacts, cache, and logs locally.
- Users should be able to pass a parameter to gale run to specify an export location for their artifacts.
- When no location is specified, gale should export the artifacts to a sensible default location.
- Decide on a suitable parameter name for gale run, considering usability and potential conflicts with existing parameters.
- Extend gale to process the export request after a successful or failed run, ensuring all artifacts are correctly saved to the desired location.

At present, gale employs github-cli to locate the commit SHA for workflows. However, this approach produces an error when executing gale on a repository where a branch only exists locally. To resolve this issue, we need to either ignore this error or modify the verification process.
│ │ │ │ [5.16s] ERROR ./bin/gale run --workflows-dir ci/workflows ci/workflows/secret-printer.yaml test --secret FOO=foo --secret BAR=bar
│ │ │ │ Error: input:1: container.pipeline.from.withUnixSocket.withFile.withEnvVariable.withMountedCache.withServiceBinding.withEnvVariable.withEnvVariable.withExec.sync process "sh -c echo failed to get current repository: gh execution
│ │ │ │ failed: exit status 1 stderr: gh: No commit found for the ref refs/heads/aweris/demo-gale (HTTP 404)\n && exit 1" did not complete successfully: exit code: 2
│ │ │ │
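One possible resolution, offered here as a sketch rather than gale's actual approach: prefer the local git checkout when resolving the commit SHA, so branches that exist only locally still work, and keep gh only as a fallback. The function name headSHA is hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// headSHA returns the commit SHA of HEAD from the local checkout,
// avoiding the gh API call that returns HTTP 404 for branches that
// have not been pushed upstream.
func headSHA(dir string) (string, error) {
	out, err := exec.Command("git", "-C", dir, "rev-parse", "HEAD").Output()
	if err != nil {
		return "", fmt.Errorf("resolving HEAD in %s: %w", dir, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	sha, err := headSHA(".")
	if err != nil {
		// Only here would we fall back to gh (or skip verification).
		fmt.Println("local resolution failed:", err)
		return
	}
	fmt.Println(sha)
}
```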
dagger -m github.com/aweris/gale/ghx -m github.com/aweris/gale call --repo aweris/gale --branch main --workflows-dir examples/workflows list
Setup tracing at https://dagger.cloud/traces/setup. To hide: export NOTHANKS=1
✔ connect 1.3s
✔ initialize 1.2s
! input: moduleSource.asModule resolve: failed to create module: select: failed to update module dependencies: failed to initialize dependency modules: failed to initialize dependency module: select: failed to initialize module: failed to call module "ghx" to get functions: call constructor: process "go build -o /runtime ." did not complete successfully: exit code: 1
✔ resolving module ref 0.8s
✔ installing module 0.4s
! input: moduleSource.asModule resolve: failed to create module: select: failed to update module dependencies: failed to initialize dependency modules: failed to initialize dependency module: select: failed to initialize module: failed to call module "ghx" to get functions: call constructor: process "go build -o /runtime ." did not complete successfully: exit code: 1
✔ ModuleSource.asModule: Module! 0.4s
! failed to create module: select: failed to update module dependencies: failed to initialize dependency modules: failed to initialize dependency module: select: failed to initialize module: failed to call module "ghx" to get functions: call constructor: process "go build -o /runtime ." did not complete successfully: exit code: 1
Error: input: moduleSource.asModule resolve: failed to create module: select: failed to update module dependencies: failed to initialize dependency modules: failed to initialize dependency module: select: failed to initialize module: failed to call module "ghx" to get functions: call constructor: process "go build -o /runtime ." did not complete successfully: exit code: 1
Stderr:
# github.com/aweris/gale/ghx
./source.go:13:13: dag.Host undefined (type *dagger.Client has no field or method Host)
./source.go:13:38: undefined: HostDirectoryOpts
gale currently does not support composite actions. Composite actions allow users to bundle multiple steps into a single action. To improve gale's compatibility with a broader spectrum of GitHub Actions, it's crucial to implement support for composite actions.
- Currently, gale cannot recognize or execute composite actions.
- gale should be adept at identifying composite actions, parsing their steps, and executing them sequentially.
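For context, a minimal composite action that gale would need to load and execute step by step. This is a hypothetical example, not taken from the gale repository; a composite action is identified by runs.using: composite in its action.yml, and each run step must declare its own shell.

```yaml
# action.yml of a hypothetical composite action
name: build-and-test
runs:
  using: composite
  steps:
    - name: Install dependencies
      run: npm ci
      shell: bash
    - name: Run tests
      run: npm test
      shell: bash
```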