containers / build
another build tool for container images (archived, see https://github.com/rkt/rkt/issues/4024)
License: Apache License 2.0
Example taken from https://github.com/appc/spec/blob/master/spec/aci.md#image-manifest-schema
...
"app": {
...
"isolators": [
{
"name": "resource/cpu",
"value": {
"request": "250m",
"limit": "500m"
}
},
{
"name": "resource/memory",
"value": {
"request": "1G",
"limit": "2G"
}
},
{
"name": "os/linux/capabilities-retain-set",
"value": {
"set": ["CAP_NET_BIND_SERVICE"]
}
}
],
...
If the local file doesn't exist, acbuild will attempt to perform meta discovery on the provided start point. This results in misleading error messages when the ACI doesn't exist:
begin: Get https://beginlocaltest.aci?ac-discovery=1: dial tcp: lookup beginlocaltest.aci on 10.7.2.1:53: no such host
Downstream package maintainers should be able to reuse packages that are usually built from released/tagged versions instead of random snapshots. Whenever possible, please bundle (vendor) released/tagged versions of components in Godeps. Thanks.
Support should be added for alternate execution engines, like runc.
Example:
root@core-01:/home/core/testcore-01 test # echo "test" > foo.txt
root@core-01:/home/core/testcore-01 test # rkt image export busybox busybox.aci
root@core-01:/home/core/testcore-01 test # ls
busybox.aci foo.txt
root@core-01:/home/core/testcore-01 test # acbuild begin busybox.aci
root@core-01:/home/core/testcore-01 test # acbuild copy foo.txt /tmp/foo.txt
root@core-01:/home/core/testcore-01 test # acbuild run cat /tmp/foo.txt
Timezone UTC does not exist in container, not updating container timezone.
cat: can't open '/tmp/foo.txt': No such file or directory
run: exit status 1
root@core-01:/home/core/testcore-01 test # acbuild run ls /tmp
Timezone UTC does not exist in container, not updating container timezone.
root@core-01:/home/core/testcore-01 test # ls .acbuild/
currentaci/ depstore-expanded/ depstore-tar/
core-01 test # ls .acbuild/currentaci/rootfs/tmp
foo.txt
After updating to the latest master this morning, I started getting the following error:
[jcollie@pc28043 rkt-logstash]$ acbuild version
acbuild version v0.1.0-4-g40f07a0
appc version 0.7.1+git
[jcollie@pc28043 rkt-logstash]$ sudo acbuild begin
[jcollie@pc28043 rkt-logstash]$ sudo acbuild dependency add dmacc.net/centos7 --image-id="sha512-baae275b452242b1ccdba64bff15a7a9" --label="version=0.1"
[jcollie@pc28043 rkt-logstash]$ sudo acbuild label add os linux
[jcollie@pc28043 rkt-logstash]$ sudo acbuild label add arch amd64
[jcollie@pc28043 rkt-logstash]$ sudo acbuild run --debug -- yum -y update
Running: [yum -y update]
Downloading dmacc.net/centos7: [===============================] 62.9 MB/62.9 MB
panic: runtime error: makeslice: len out of range== ] 41.6 MB/62.9 MB
goroutine 1 [running]:
strings.Repeat(0x8ca320, 0x1, 0xfffffffffffffff5, 0x0, 0x0)
/usr/local/go/src/strings/strings.go:464 +0x5f
github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress.DrawTextFormatBarForW.func2(0x4f38000, 0x3bfe5e9, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress/draw.go:114 +0xcf
github.com/appc/acbuild/registry.newIoprogress.func1(0x4f38000, 0x3bfe5e9, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/registry/fetch.go:441 +0x244
github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress.DrawTerminalf.func1(0x4f38000, 0x3bfe5e9, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress/draw.go:54 +0xf8
github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress.(*Reader).drawProgress(0xc820063d10)
/opt/acbuild/gopath/src/github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress/reader.go:76 +0x1e1
github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress.(*Reader).Read(0xc820063d10, 0xc8204c6000, 0x8000, 0x8000, 0x8000, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/Godeps/_workspace/src/github.com/coreos/ioprogress/reader.go:50 +0x115
io.copyBuffer(0x7fd0406453f8, 0xc820028228, 0x7fd040647418, 0xc820063d10, 0xc8204c6000, 0x8000, 0x8000, 0x4f30000, 0x0, 0x0)
/usr/local/go/src/io/io.go:381 +0x247
io.Copy(0x7fd0406453f8, 0xc820028228, 0x7fd040647418, 0xc820063d10, 0xc820028228, 0x0, 0x0)
/usr/local/go/src/io/io.go:351 +0x64
github.com/appc/acbuild/registry.Registry.uncompress(0xc8200f6520, 0x15, 0xc8200f65a0, 0x1a, 0x860100, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/registry/fetch.go:256 +0x5e2
github.com/appc/acbuild/registry.Registry.fetchACIWithSize(0xc8200f6520, 0x15, 0xc8200f65a0, 0x1a, 0x430100, 0xc8200f6be0, 0x11, 0xc820085b80, 0x1, 0x4, ...)
/opt/acbuild/gopath/src/github.com/appc/acbuild/registry/fetch.go:134 +0x556
github.com/appc/acbuild/registry.Registry.Fetch(0xc8200f6520, 0x15, 0xc8200f65a0, 0x1a, 0x540100, 0xc8200f6be0, 0x11, 0xc820085b80, 0x1, 0x4, ...)
/opt/acbuild/gopath/src/github.com/appc/acbuild/registry/fetch.go:58 +0x1c9
github.com/appc/acbuild/registry.Registry.FetchAndRender(0xc8200f6520, 0x15, 0xc8200f65a0, 0x1a, 0x100, 0xc8200f6be0, 0x11, 0xc820085b80, 0x1, 0x4, ...)
/opt/acbuild/gopath/src/github.com/appc/acbuild/registry/fetch.go:71 +0xf3
github.com/appc/acbuild/lib.(*ACBuild).renderACI(0xc820085980, 0x100, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/lib/run.go:200 +0x35c
github.com/appc/acbuild/lib.(*ACBuild).Run(0xc820085980, 0xc820062fa0, 0x3, 0x5, 0x0, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/lib/run.go:87 +0x4f1
main.runRun(0xd19fc0, 0xc820062fa0, 0x3, 0x5, 0x2)
/opt/acbuild/gopath/src/github.com/appc/acbuild/acbuild/run.go:48 +0x19f
main.runWrapper.func1(0xd19fc0, 0xc820062fa0, 0x3, 0x5)
/opt/acbuild/gopath/src/github.com/appc/acbuild/acbuild/acbuild.go:123 +0x73
github.com/appc/acbuild/Godeps/_workspace/src/github.com/spf13/cobra.(*Command).execute(0xd19fc0, 0xc820062f00, 0x5, 0x5, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/Godeps/_workspace/src/github.com/spf13/cobra/command.go:496 +0x6e3
github.com/appc/acbuild/Godeps/_workspace/src/github.com/spf13/cobra.(*Command).Execute(0xd17c00, 0x0, 0x0)
/opt/acbuild/gopath/src/github.com/appc/acbuild/Godeps/_workspace/src/github.com/spf13/cobra/command.go:561 +0x180
main.main()
/opt/acbuild/gopath/src/github.com/appc/acbuild/acbuild/acbuild.go:198 +0x94
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1696 +0x1
goroutine 19 [select]:
net/http.(*persistConn).writeLoop(0xc8200d33f0)
/usr/local/go/src/net/http/transport.go:1009 +0x40c
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:686 +0xc9d
goroutine 21 [IO wait]:
net.runtime_pollWait(0x7fd040647120, 0x72, 0xc8200121b0)
/usr/local/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0xc8200aced0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200aced0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200ace70, 0xc820458000, 0x8000, 0x8000, 0x0, 0x7fd040641050, 0xc8200121b0)
/usr/local/go/src/net/fd_unix.go:232 +0x23a
net.(*conn).Read(0xc82015e018, 0xc820458000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
crypto/tls.(*block).readFromUntil(0xc820424ff0, 0x7fd0406472a0, 0xc82015e018, 0x5, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:455 +0xcc
crypto/tls.(*Conn).readRecord(0xc8201b5b80, 0x99c017, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:540 +0x2d1
crypto/tls.(*Conn).Read(0xc8201b5b80, 0xc820456000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:901 +0x167
net/http.noteEOFReader.Read(0x7fd03edcf960, 0xc8201b5b80, 0xc8200d34f8, 0xc820456000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:1370 +0x67
net/http.(*noteEOFReader).Read(0xc820442b60, 0xc820456000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
<autogenerated>:126 +0xd0
bufio.(*Reader).fill(0xc8203f30e0)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Peek(0xc8203f30e0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:132 +0xcc
net/http.(*persistConn).readLoop(0xc8200d34a0)
/usr/local/go/src/net/http/transport.go:876 +0xf7
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:685 +0xc78
goroutine 18 [IO wait]:
net.runtime_pollWait(0x7fd0406471e0, 0x72, 0xc8200121b0)
/usr/local/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0xc8200acd10, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200acd10, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200accb0, 0xc820134000, 0x1000, 0x1000, 0x0, 0x7fd040641050, 0xc8200121b0)
/usr/local/go/src/net/fd_unix.go:232 +0x23a
net.(*conn).Read(0xc8200281e0, 0xc820134000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
crypto/tls.(*block).readFromUntil(0xc820128000, 0x7fd0406472a0, 0xc8200281e0, 0x5, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:455 +0xcc
crypto/tls.(*Conn).readRecord(0xc82007e840, 0x99c017, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:540 +0x2d1
crypto/tls.(*Conn).Read(0xc82007e840, 0xc820135000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:901 +0x167
net/http.noteEOFReader.Read(0x7fd03edcf960, 0xc82007e840, 0xc8200d3448, 0xc820135000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:1370 +0x67
net/http.(*noteEOFReader).Read(0xc820423f40, 0xc820135000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
<autogenerated>:126 +0xd0
bufio.(*Reader).fill(0xc8203f29c0)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Peek(0xc8203f29c0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:132 +0xcc
net/http.(*persistConn).readLoop(0xc8200d33f0)
/usr/local/go/src/net/http/transport.go:876 +0xf7
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:685 +0xc78
goroutine 22 [select]:
net/http.(*persistConn).writeLoop(0xc8200d34a0)
/usr/local/go/src/net/http/transport.go:1009 +0x40c
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:686 +0xc9d
Running git bisect points to the following commit:
a5ad61df3b0d636115d3543c82befc0c84ffdb48 is the first bad commit
commit a5ad61df3b0d636115d3543c82befc0c84ffdb48
Author: Derek Gonyeo <[email protected]>
Date: Mon Oct 19 18:04:15 2015 -0700
*: refactored library functions, added struct with common fields
The library was refactored so that all the exposed functions were
attached to a struct containing the information about the current
context. This makes using the library much cleaner.
The Registry struct had some of its fields renamed to be consistent with
the new struct in the library.
The functions for acquiring and releasing the lock were also moved from
acbuild/acbuild.go to the new struct.
:040000 040000 0b4c18665811cff0026264baf86df63fafcc05ee 55039ab1fead9942c9614bbbe9f739b6f657cf0d M acbuild
:040000 040000 a0e24013e636d4e8189af2c8d6d87acb0f914996 ce69a0ba1fd044cde549f7320f8aeef1f6ca40ee M lib
:040000 040000 3a03acbd4d502a348d419705fb0f1257906a7060 d0ac54b87df26184a5bc3c884b8bc8c36fa50363 M registry
It would be great to have some sort of pod that we can run for acbuild. There would be a couple of moving parts:
The basic idea would be to do an acbuild against a remote server by sending it the context of a currently checked out project like a Go project, rails app, etc.
It would be very convenient if cobra could be configured in such a way that it would stop trying to parse flags once it sees the run command, so that it's not necessary to put -- in between run and the command.
As a result of the recent begin changes:
derek@energia ~/aci> acbuild --modify alpine-latest-linux-amd64.aci set-name localhost/alpine
unknown host when fetching image, check your connection and local file paths must start with '/' or '.'
Spitballing. It'd be cool to have a mode where acbuild uses rkt's cas as its backend store for images. That could mean you could rapidly iterate on images using acbuild and then be able to test them using rkt without incurring the overhead of having to load them into the store every time.
Obviously we don't want to couple acbuild to rkt so we would probably implement it using some kind of plug-in. Or if they completely share(d) the backend format it could in theory be as simple as pointing acbuild at the rkt cas directory? (obvious concerns there with compatibility)
/cc @chancez
This issue should perhaps be put on appc/spec, but I'm temporarily putting it here to reduce noise.
As discussed OOB, we were planning to use pathWhitelist to represent file removal. However, a whitelist is much more clumsy to use than a blacklist for removing files. For instance, if I simply want to remove a single file hello.txt, I would essentially have to add to the whitelist all files and directories that are not hello.txt. This process is not only error-prone, but could also result in a large manifest.
I propose two solutions:
The simplest solution is simply to have a pathBlacklist. Intuitively, the blacklist would specify all files that we want to exclude from the final rendered image.
We could also borrow ideas from overlayfs:
whiteouts and opaque directories
--------------------------------
In order to support rm and rmdir without changing the lower
filesystem, an overlay filesystem needs to record in the upper filesystem
that files have been removed. This is done using whiteouts and opaque
directories (non-directories are always opaque).
A whiteout is created as a character device with 0/0 device number.
When a whiteout is found in the upper level of a merged directory, any
matching name in the lower level is ignored, and the whiteout itself
is also hidden.
A directory is made opaque by setting the xattr "trusted.overlay.opaque"
to "y". Where the upper filesystem contains an opaque directory, any
directory in the lower filesystem with the same name is ignored.
So in this approach, an ACI that removes files would have the "whiteouts" and "opaque directories" described above in its rootfs. When rendered, it would remove the corresponding files in the lower layer.
Personally, I like the second approach, as it keeps all the "state" in rootfs, reducing the manifest to the simple metadata file that it was supposed to be. It's also worth noting that the two approaches can coexist.
What are your thoughts? @klizhentas @jonboulle @jzelinskie
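A minimal sketch of the second approach's rendering rule, using a "wh." filename prefix as a stand-in for the 0/0 character-device whiteout and modeling the filesystem as a simple path-to-content map (opaque directories are omitted; this is not acbuild code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// mergeLayer merges an upper layer onto a lower layer following the
// overlayfs-style proposal: a whiteout entry in the upper layer
// (marked here with a "wh." prefix) deletes the matching lower path,
// and the whiteout itself stays hidden in the merged result.
func mergeLayer(lower, upper map[string]string) map[string]string {
	merged := make(map[string]string)
	for path, content := range lower {
		merged[path] = content
	}
	for path, content := range upper {
		base := path
		if i := strings.LastIndex(path, "/"); i >= 0 {
			base = path[i+1:]
		}
		if strings.HasPrefix(base, "wh.") {
			// Whiteout: remove the lower entry it shadows.
			dir := path[:len(path)-len(base)]
			delete(merged, dir+strings.TrimPrefix(base, "wh."))
			continue
		}
		merged[path] = content
	}
	return merged
}

func main() {
	lower := map[string]string{"etc/hello.txt": "hi", "bin/sh": "shell"}
	upper := map[string]string{"etc/wh.hello.txt": "", "etc/motd": "welcome"}
	merged := mergeLayer(lower, upper)
	var paths []string
	for p := range merged {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	fmt.Println(paths) // etc/hello.txt is gone; bin/sh and etc/motd remain
}
```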
We should have acbuild packaged in an ACI and then be able to invoke it using any appc runtime, e.g. rkt. I expect this would just mount the user's project/assets directly into a known location (/data or whatever) and then run the script they pass to it.
For example with rkt this might look like:
$ ls
app.js
build-nodejs.sh
$ rkt run --volume data,kind=host,source=$(pwd) appc.io/acbuild ./build-nodejs.sh
Where "data" would be defined as a mountpoint in the appc.io/acbuild image.
Then this would output the ACI in the same directory.
$ ls
app.js
build-nodejs.sh
nodejs-latest-linux-amd64.aci
I think most scripts users will write to use acbuild will follow the same general formula:
This formula doesn't require any of bash's complex control flow like for loops, and the resulting script must be run as root if it uses the run command. This will require careful inspection of any build scripts the user downloads, since blindly running shell scripts as root is terrifying.
I propose being able to write a file composed solely of comments and acbuild commands, in which acbuild will exec itself with every line that isn't a comment or empty. In the event of an error, acbuild can call abort to clean up after itself, and this offers a little more security than an untrusted shell script.
Such a file could look like this:
# start the build
begin
# we're using alpine
dep add quay.io/coreos/alpine-sh
# this is an nginx container
run -- apk install nginx
# end the build
end ./mycoolapp.aci
And could be invoked like: acbuild script ./build-my-cool-app.acb
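The parsing side of this proposal is small. Here is a sketch in Go, assuming whitespace-separated arguments and "#" comments as in the example file (real argument quoting is ignored, and the commands are returned rather than exec'd):

```go
package main

import (
	"fmt"
	"strings"
)

// parseScript turns a build script into argument vectors: every line
// that isn't blank or a '#' comment becomes one acbuild command line.
func parseScript(script string) [][]string {
	var cmds [][]string
	for _, line := range strings.Split(script, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		cmds = append(cmds, strings.Fields(line))
	}
	return cmds
}

func main() {
	script := `# start the build
begin
# we're using alpine
dep add quay.io/coreos/alpine-sh
run -- apk install nginx
end ./mycoolapp.aci`
	for _, cmd := range parseScript(script) {
		// A real implementation would exec "acbuild" with these args,
		// calling "acbuild abort" if any command fails.
		fmt.Println(cmd)
	}
}
```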
In rkt/rkt#434 I proposed an ACI builder library (you can find it at https://github.com/sgotti/acibuilder and https://github.com/sgotti/fsdiffer). Maybe it can be improved to also support overlayfs and be used here (no need to vendor, just take it!)
Certain ACIs cause the following error when acbuild attempts to untar them:
extracttar error: exit status 1, output: error extracting tar: error extracting tarball: archive/tar: invalid tar header
An example of an ACI that causes this is at https://aci.gonyeo.com/nginx-latest-linux-amd64.aci. The application was made with buildroot, and the ACI was built with actool. rkt has no issues running the ACI.
A context free mode should be added to support quick modifications to local ACI files.
For many of its commands, acbuild references an ACI (or several ACIs) that might be located locally on disk or potentially somewhere else (HTTP endpoint, AC discovery endpoint, ...). We should define a consistent format for this reference. We faced a similar question in rkt but haven't quite resolved it yet: rkt/rkt#715
The ExpandTar function untars the ACI in a chroot, which requires root permissions to set up.
We need to start thinking about how acbuild works w.r.t versions of the spec.
Initially at least acbuild needs to be versioned, I suggest it could just be 1:1 with the spec itself (i.e. acbuild --version would be equal to the version of appc that is vendored in)
Next would be to start thinking about forwards/backwards compatibility.
A new command, squash, should be added to support squashing together the given ACI with its dependencies.
acbuild begin
acbuild dep add quay.io/coreos/alpine-sh
acbuild squash
would be equivalent to
acbuild begin ./alpine-sh
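The proposed squash semantics (render the dependency chain into one flat rootfs, with later layers winning, so no dependencies list is needed) can be sketched as:

```go
package main

import "fmt"

// squash renders an ordered dependency chain (lowest first) plus the
// build's own layer into one flat rootfs, which is what the proposed
// `acbuild squash` would leave behind instead of a dependencies list.
// Layers are modeled as path->content maps; later layers win.
func squash(layers ...map[string]string) map[string]string {
	flat := make(map[string]string)
	for _, layer := range layers {
		for path, content := range layer {
			flat[path] = content
		}
	}
	return flat
}

func main() {
	alpine := map[string]string{"bin/sh": "busybox", "etc/os-release": "alpine"}
	build := map[string]string{"etc/motd": "hello"}
	fmt.Println(len(squash(alpine, build)), "files in the squashed rootfs") // 3 files
}
```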
The command actool patch-manifest has some features that overlap with acbuild:
https://github.com/appc/spec/blob/master/actool/manifest.go#L56
--manifest=MANIFEST_FILE
--name=example.com/app (acbuild set-name ACI_NAME)
--exec="/app --debug" (acbuild set-exec CMD)
--user=uid (acbuild set-user USER)
--group=gid (acbuild set-group GROUP)
--capability=CAP_SYS_ADMIN,CAP_NET_ADMIN
--mounts=work,path=/opt,readOnly=true[:work2,...] (acbuild mount add NAME PATH)
--ports=query,protocol=tcp,port=8080[:query2,...] (acbuild port add NAME PROTOCOL PORT)
--supplementary-groups=gid1,gid2,...
--isolators=resource/cpu,request=50m,limit=100m[:resource/memory,...]
Should actool get the missing features and then actool patch-manifest be deprecated?
Occasionally when running acbuild I'll get an error like this:
umount: /home/jcollie/dev/rkt-logstash/.acbuild/target: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
Once that happens, all subsequent commands fail with:
run: remove .acbuild/target/tmp: directory not empty
It doesn't happen every time (maybe 5-10% of the time).
Issue for blocking references on the next acbuild release.
It would be convenient (once acpush is finished) to be able to specify an image name, and have the image pushed to that location (optionally signing it with #15).
Something like the following:
acbuild end --keyring ./keys.pub --secret-keyring ./keys.sec quay.io/dgonyeo/mycoolapp
Would generate an .aci and .asc file, and then push them to quay. I imagine that in the event of a push failure the build would be left in progress, so that the push could be reattempted or the ACI could be written to the local filesystem (also with the end command).
Each command supported by acbuild should have at least one functional test.
begin should support being given an image name instead of a local file as a starting point. acbuild would then download the ACI and use it as the starting point for the new build.
acbuild begin quay.io/coreos/alpine-sh
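One plausible disambiguation rule, inferred from acbuild's own error message elsewhere in this tracker ("local file paths must start with '/' or '.'"); the actual internals may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// isLocalPath sketches how `acbuild begin` could tell a local ACI file
// from an image name: anything not starting with '/' or '.' is treated
// as an image name and goes through meta discovery.
func isLocalPath(start string) bool {
	return strings.HasPrefix(start, "/") || strings.HasPrefix(start, ".")
}

func main() {
	for _, s := range []string{"./busybox.aci", "/tmp/app.aci", "quay.io/coreos/alpine-sh"} {
		if isLocalPath(s) {
			fmt.Println(s, "-> local file")
		} else {
			fmt.Println(s, "-> meta discovery")
		}
	}
}
```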
It would be neat if acbuild kept track of the acbuild commands that get run on an ACI in the image's annotations.
It would provide insight into how a given image gets built.
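A sketch of what that recording could look like; the appc.io/acbuild/command-N annotation name is invented here purely for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Annotation matches the name/value entries of an image manifest's
// "annotations" array.
type Annotation struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

// recordCommand appends the invoked acbuild command line under a
// hypothetical appc.io/acbuild/command-N annotation name, keeping the
// build history in the image itself.
func recordCommand(annotations []Annotation, cmdline string) []Annotation {
	name := fmt.Sprintf("appc.io/acbuild/command-%d", len(annotations))
	return append(annotations, Annotation{Name: name, Value: cmdline})
}

func main() {
	var a []Annotation
	a = recordCommand(a, "acbuild begin")
	a = recordCommand(a, "acbuild set-name example.com/app")
	out, _ := json.MarshalIndent(a, "", "  ")
	fmt.Println(string(out))
}
```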
as of 0e884ec
vbatts@valse ~/src/appc/acbuild/examples/acserver (master *) $ ./build-acserver.sh
Building acserver...
Beginning build with an empty ACI
Setting name of ACI to example.com/acserver
Copying host:acserver to aci:/bin/acserver
Copying host:/home/vbatts/opt/gopath/src/github.com/appc/acserver/templates to aci:/templates
Adding port "http"="tcp"
Adding mount point "acis"="/acis"
Setting exec command [/bin/acserver]
Writing ACI to acserver-latest-linux-amd64.aci
vbatts@valse ~/src/appc/acbuild/examples/acserver (master *) $ tar tvf acserver-latest-linux-amd64.aci
drwxr-xr-x 1000/1001 0 2015-11-06 05:12 rootfs
drwxr-xr-x 1000/1001 0 2015-11-06 05:12 rootfs/bin
-rwxr-xr-x 1000/1001 7158435 2015-11-06 05:12 rootfs/bin/acserver
drwxrwxr-x 1000/1001 0 2015-11-06 04:58 rootfs/templates
-rw-rw-r-- 1000/1001 2464 2015-11-06 04:58 rootfs/templates/index.html
-rw-r--r-- root/root 345 2015-11-06 05:12 manifest
Example taken from https://github.com/appc/spec/blob/master/spec/aci.md#image-manifest-schema
...
"app": {
...
"workingDirectory": "/opt/work",
...
There are issues with the UnTar in its current state, and it's not a particularly easy thing to write. It should be replaced with a library.
Example taken from https://github.com/appc/spec/blob/master/spec/aci.md#image-manifest-schema
...
"app": {
...
"eventHandlers": [
{
"exec": [
"/usr/bin/data-downloader"
],
"name": "pre-start"
},
{
"exec": [
"/usr/bin/deregister-worker",
"--verbose"
],
"name": "post-stop"
}
],
...
Concurrent invocations in acbuild's current state can cause issues. A file lock should be acquired/released in the build context's directory to prevent this.
I am trying to build an image using acbuild (1:1 example taken from https://coreos.com/blog/rkt-0.10.0-with-new-api-service/). This fails on Fedora 23.
This fails on Fedora 23.
[root@vmd9666 x]# cat x
acbuild begin quay.io/listhub/alpine
acbuild set-name example.com/nginx
acbuild run apk update
acbuild run apk add nginx
acbuild set-exec -- /usr/sbin/nginx -g "daemon off;"
acbuild port add http tcp 80
acbuild mount add html /usr/share/nginx/html
acbuild label add arch amd64
acbuild label add os linux
acbuild write --overwrite nginx-latest-linux-amd64.aci
acbuild end
[root@vmd9666 x]# cd /opt/PDFreactor/^C
[root@vmd9666 x]# sh x
Downloading quay.io/listhub/alpine: [==========================] 2.63 MB/2.63 MB
Failed to create directory /home/ajung/src/x/.acbuild/currentaci/rootfs/sys/fs/selinux: Read-only file system
Failed to create directory /home/ajung/src/x/.acbuild/currentaci/rootfs/sys/fs/selinux: Read-only file system
Timezone Europe/Berlin does not exist in container, not updating container timezone.
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
v3.2.3-104-g838b3e3 [http://dl-4.alpinelinux.org/alpine/v3.2/main]
OK: 5299 distinct packages available
Failed to create directory /home/ajung/src/x/.acbuild/currentaci/rootfs/sys/fs/selinux: Read-only file system
Failed to create directory /home/ajung/src/x/.acbuild/currentaci/rootfs/sys/fs/selinux: Read-only file system
Timezone Europe/Berlin does not exist in container, not updating container timezone.
(1/2) Installing pcre (8.37-r1)
(2/2) Installing nginx (1.8.0-r1)
Executing nginx-1.8.0-r1.pre-install
Executing busybox-1.23.2-r0.trigger
OK: 7 MiB in 17 packages
vagrant@vagrant-ubuntu-vivid-64:/vagrant$ ./build-jenkins
Beginning build with an empty ACI
dependency add: ACIdentifier must contain only lower case alphanumeric characters plus "-._~/"
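The character-set rule in that error can be checked with a simple regular expression. Note that the real appc spec imposes additional constraints (e.g. on leading and trailing characters) that this sketch ignores:

```go
package main

import (
	"fmt"
	"regexp"
)

// acIdentifier encodes only the constraint quoted in the error above:
// lower case alphanumeric characters plus "-._~/".
var acIdentifier = regexp.MustCompile(`^[a-z0-9\-._~/]+$`)

func validACIdentifier(s string) bool {
	return acIdentifier.MatchString(s)
}

func main() {
	fmt.Println(validACIdentifier("quay.io/coreos/alpine-sh")) // true
	fmt.Println(validACIdentifier("example.com/Jenkins"))      // false: upper case
}
```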
Hi
I used the nginx example as below to create aci image in my CoreOS node.
acbuild begin
acbuild dependency add quay.io/fermayo/ubuntu
acbuild run -- apt-get update
acbuild run -- apt-get -y install nginx
acbuild set-exec /usr/sbin/nginx
acbuild set-name example.com/ubuntu-nginx
acbuild write ubuntu-nginx.aci
acbuild end
The ACI image was created successfully, but the rkt container starts and then exits. The rkt version used is 0.9.0 and the CoreOS version is 845.0.0. Following are the logs:
core@core-01 ~ $ sudo rkt run --debug --insecure-skip-verify ubuntu-nginx.aci
rkt: using image from file /usr/share/rkt/stage1-coreos.aci
rkt: using image from file /home/core/ubuntu-nginx.aci
rkt: searching for app image quay.io/fermayo/ubuntu
rkt: remote fetching from url https://quay.io/c1/aci/quay.io/fermayo/ubuntu/latest/aci/linux/amd64/
rkt: fetching image from https://quay.io/c1/aci/quay.io/fermayo/ubuntu/latest/aci/linux/amd64/
Downloading ACI: [=============================================] 72.4 MB/72.4 MB
2015/11/08 05:25:26 Preparing stage1
2015/11/08 05:25:27 Writing image manifest
2015/11/08 05:25:27 Loading image sha512-973e0f0a1c8252b1e92c882c902e4a00a0c75284678c39334f7c8a14d1a2740c
2015/11/08 05:25:33 Writing image manifest
2015/11/08 05:25:33 Writing pod manifest
2015/11/08 05:25:33 Setting up stage1
2015/11/08 05:25:33 Wrote filesystem to /var/lib/rkt/pods/run/5005b059-212e-4f68-82f3-ba4a7d5529c6
2015/11/08 05:25:33 Pivoting to filesystem /var/lib/rkt/pods/run/5005b059-212e-4f68-82f3-ba4a7d5529c6
2015/11/08 05:25:33 Execing /init
2015/11/08 05:25:33 Loading networks from /etc/rkt/net.d
2015/11/08 05:25:33 Loading network default with type ptp
Spawning container rkt-5005b059-212e-4f68-82f3-ba4a7d5529c6 on /var/lib/rkt/pods/run/5005b059-212e-4f68-82f3-ba4a7d5529c6/stage1/rootfs.
Press ^] three times within 1s to kill container.
systemd 222 running in system mode. (-PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT -GNUTLS -ACL +XZ -LZ4
+SECCOMP +BLKID -ELFUTILS +KMOD -IDN)
Detected virtualization systemd-nspawn.
Detected architecture x86-64.
Welcome to Linux!
Initializing machine ID from container UUID.
[ OK ] Created slice -.slice.
[ OK ] Listening on Journal Socket.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Created slice system.slice.
[ OK ] Started Pod shutdown.
Starting Pod shutdown...
[ OK ] Created slice system-prepare\x2dapp.slice.
Starting Journal Service...
[ OK ] Started ubuntu-nginx Reaper.
Starting ubuntu-nginx Reaper...
[ OK ] Started Journal Service.
Starting Prepare minimum environment for chrooted applications...
[ OK ] Started Prepare minimum environment for chrooted applications.
[ OK ] Started Application=ubuntu-nginx Image=example.com/ubuntu-nginx.
Starting Application=ubuntu-nginx Image=example.com/ubuntu-nginx...
[ OK ] Reached target rkt apps target.
Sending SIGTERM to remaining processes...
Sending SIGKILL to remaining processes...
Halting system.
Container rkt-5005b059-212e-4f68-82f3-ba4a7d5529c6 has been shut down.
Any ideas?
Thanks
Sreenivas
Running the example in the README created an ACI with nothing in the rootfs. Not sure if this is a bug or if I'm missing something.
Here's my build file
$ cat build.sh
#!/bin/bash
acbuild begin
acbuild dependency add quay.io/fermayo/ubuntu
acbuild run -- apt-get update
acbuild run -- apt-get -y install nginx
acbuild set-exec /usr/sbin/nginx
acbuild set-name example.com/ubuntu-nginx
acbuild write ubuntu-nginx.aci
acbuild end
After building the ACI file and attempting to run it I got the following error message.
$ sudo rkt run --insecure-skip-verify ubuntu-nginx.aci
rkt: using image from file /usr/bin/stage1-coreos.aci
rkt: using image from file /home/eric/p/acbuild/ubuntu-nginx.aci
rkt: using image from local store for image name quay.io/fermayo/ubuntu
[ 8331.672240] nginx[4]: Error: Unable to open "/usr/sbin/nginx": No such file or directory
Further inspection of the ACI file shows an empty rootfs directory
$ tar -tvf ubuntu-nginx.aci
drwxr-xr-x 1000/1000 0 2015-11-08 12:54 rootfs
-rw-r--r-- root/root 271 2015-11-08 12:54 manifest
The manifest file looks pretty reasonable
$ tar -xf ubuntu-nginx.aci manifest -O | jq .
{
"acKind": "ImageManifest",
"acVersion": "0.7.1+git",
"name": "example.com/ubuntu-nginx",
"labels": [
{
"name": "arch",
"value": "amd64"
},
{
"name": "os",
"value": "linux"
}
],
"app": {
"exec": [
"/usr/sbin/nginx"
],
"user": "0",
"group": "0"
},
"dependencies": [
{
"imageName": "quay.io/fermayo/ubuntu"
}
]
}
Running Fedora 23. Here's some additional version data
$ sudo acbuild version
acbuild version v0.1.1-38-g0e884ec
appc version 0.7.1+git
$ sudo rkt version
rkt version 0.10.0
appc version 0.7.1
$ sudo systemd-nspawn --version
systemd 222
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN
$ uname -r
4.2.5-300.fc23.x86_64
A mode should be added where every acbuild command implicitly creates a new layer (akin to how a Dockerfile works). This would probably make the most sense as a flag on the begin command.
derek@energia ~/aci> acbuild --modify ./alpine-latest-linux-amd64.aci set-name localhost/alpine
"/bin/bbsuid": permission denied - call write as root
Coming from rkt/rkt#1028: starting with a CLI library seems like it would be useful (see rkt/rkt#632 (comment) for a list of features/requirements) and would avoid additional work now and in the future.
The README describes what acbuild should look like; now we need an initial implementation of the thing.
acbuild should have functionality for adding/removing/modifying isolators in an ACI.
It would be really convenient to be able to end a build and have acbuild produce the .asc file in addition to the .aci file, if it's handed the keyrings.
Something like the following:
acbuild end --keyring ./keys.pub --secret-keyring ./keys.sec mycoolapp-latest-linux-amd64.aci
Would produce both a mycoolapp-latest-linux-amd64.aci file and a mycoolapp-latest-linux-amd64.aci.asc file in the current directory.
I think a command like acbuild layer should be supported that would add a new layer into the current ACI.
acbuild exec -- wget http://example.com/very-big-file.deb
acbuild exec -- dpkg -i ./very-big-file.deb
acbuild exec -- rm ./very-big-file.deb
acbuild layer
The above would produce a layered image, but the file very-big-file.deb would not exist in any of the layers.
If I'm invoking two acbuilds, where the first is using begin and the second is using --modify, the second one fails to start because the lock for the first is created in the current directory, and the second cannot acquire it.
It would be great if the .acbuild context was named after the build, and the lock was associated with a particular .acbuild context (or ACI if using --modify).
Ex:
acbuild begin path/to/example.aci --name example-app
where --name must be unique for this particular build, to allow for multiple concurrent builds.
This would result in something like .acbuild-example-app and a lock named .acbuild-example-app.lock in the current directory.
acbuild --modify path/to/example.aci set-name coreos.com/example
could produce a lock named something like:
.path_to_example.aci.lock
You could still only have one instance of acbuild begin in a single directory, but you could have as many acbuild --modify instances running in the same directory, assuming each has its own ACI to modify (still limited to one acbuild per ACI).
For non-Linux users, it would be nice to have a very simple Vagrantfile to get them up and running with acbuild.
Example: https://github.com/coreos/rkt/blob/master/Vagrantfile
Every time I've seen acbuild run used, the following warning is printed before the actual command is exec'd:
Timezone Europe/Berlin does not exist in container, not updating container timezone.
We should prevent this warning from being printed.