Continuous Self-Service Integration Deployment and Validation
symantec / dominator
The Dominator Config Management and Image Deployment System
License: Apache License 2.0
I'm not sure whether this falls under documentation or a bug, but it has me a bit confused, so I wanted to check whether my understanding is correct before looking into it further.
I am building a bootstrap image that defines the following filter lines:
"FilterLines": [
"/etc/fstab",
"/etc/hostname",
"/etc/machine-id",
"/data(|/.*)$",
"/var/log/.*",
"/var/mail",
"/var/spool/mail"
]
I have an image that inherits from this image and defines only a filter.add file. No filter file exists in the git repo when building.
When I deploy this to one of my subs for testing, I get the following update loop, which keeps repeating:
2019/11/22 21:13:01 Fetch(10.131.135.76:6971) 5 objects at unlimited speed
2019/11/22 21:13:01 Fetch() complete. Read: 38364 B in 443µs (84482 KiB/s)
2019/11/22 21:13:01 Boosting CPU limit: 100%
2019/11/22 21:13:02 Update()
2019/11/22 21:13:02 Made inode: /home/admin/test/.subd/root/var/log/alternatives.log from: 02df44f4aa2956aaf0a087d66090e093c6e10dfb9514dd0648cdc55fa6b4c266006d57b24acdde2657b25410c2ff5959554dd348974c0e5ec669a4e97a30caa2
2019/11/22 21:13:02 Made inode: /home/admin/test/.subd/root/var/log/apt/eipp.log.xz from: a58fa549fd8814fb7c4728667f915cfd04b44cbefbc8909ff98bf66a81a9cf7b928809c715a5bb59f9149e346e3a0bd1b35bb6ec1300ffe048b9892ae6bc56d3
2019/11/22 21:13:02 Made inode: /home/admin/test/.subd/root/var/log/apt/history.log from: c71c4e4b259ef78749f0737120f545c05c2d56740c4ec5770a4f859007b0c6f924e9da53565270d86b1faefc5786d24e14c8c924762c64f2669f14779003ae3c
2019/11/22 21:13:02 Made inode: /home/admin/test/.subd/root/var/log/apt/term.log from: 6fedf96409a5bf21fbefd6f05d6c15689bb55da7916ab3d522924aa2d9f2dc23f4e3f60c00a5d8e38464691d46eeae57734fdfbc71df84dda7df89dc766dc777
2019/11/22 21:13:02 Made inode: /home/admin/test/.subd/root/var/log/dpkg.log from: cb4a2cd397dd4447f1605295f48c0c4e548ab950a62741a8542c9f66574189885847564bf8d7255964d5372f5b4040c30d59766d3265eaab408a0fc9d6f0554d
2019/11/22 21:13:02 Update() completed in 527.727µs (change window: 506.367µs)
2019/11/22 21:13:03 Restoring CPU limit: 8%
This is using the default exclude-files constants. I would expect that these files would not show up or be placed on the file system at all, since they are filtered in the bootstrap image. Additionally, I can tell /etc/fstab is properly excluded in the inherited image.
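Assuming FilterLines are treated as regular expressions matched against the full path and anchored at both ends (an assumption, not verified against subd's source), a quick sketch shows the logged /var/log paths should indeed be filtered:

```go
package main

import (
	"fmt"
	"regexp"
)

// filterLines re-creates the bootstrap filter above. The assumption
// (not verified against subd's code) is that each line is a regular
// expression matched against the full path, anchored at both ends.
var filterLines = []string{
	"/etc/fstab",
	"/etc/hostname",
	"/etc/machine-id",
	"/data(|/.*)$",
	"/var/log/.*",
	"/var/mail",
	"/var/spool/mail",
}

// filtered reports whether path matches any filter line.
func filtered(path string) bool {
	for _, line := range filterLines {
		if regexp.MustCompile("^" + line + "$").MatchString(path) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(filtered("/var/log/alternatives.log")) // true
	fmt.Println(filtered("/var/log/apt/history.log"))  // true
	fmt.Println(filtered("/var/lib/dpkg/status"))      // false
}
```

Under this reading, the files in the update loop match "/var/log/.*", which is why their reappearance in every Update() is surprising.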
An image stream may have a filter.add and a triggers.add, but not a computed-files.add. This causes a tremendous amount of duplication.
With vm-control, a VM owner should be able to save the VM metadata and volume data to a remote storage system (e.g. AWS S3) and then later restore/rebuild the VM in case of disaster.
The Hypervisor will have to be given credentials to write to the S3 bucket. It is undecided whether vm-control will pass the credentials from the user or whether the Fleet Manager will provide them. For automated backups (issue #469), the Fleet Manager will need credentials anyway.
https://goreportcard.com/report/github.com/Symantec/Dominator lists a number of issues; some of them are very serious. Consider adding gometalinter or something else to CI.
Dominator’s main role is holistic management of the file tree. Would it be in scope for Dominator to also manage parts of the process tree? Some cases to consider:
Dominator already influences the process tree indirectly, by virtue of the trigger mechanism.
Dominator (subd) takes great care to maximize the probability of a successful convergence. Is it prepared to deal with a malicious user playing games with cycles in the file hierarchy? I don't have the time to investigate right now, but am filing this lest I forget.
See ELOOP in https://pubs.opengroup.org/onlinepubs/009695399/functions/rename.html
Using vm-control, a VM owner should be able to set up an automated backup schedule. As described in issue #468, the VM metadata and volume data will be saved to a remote storage system (e.g. AWS S3).
The Fleet Manager will orchestrate the automated backups, allowing for global rate limits to be applied.
The Fleet Manager will provide the Hypervisors with temporary credentials to give write access to the S3 bucket.
@cviecco my current thinking is to have a similar workflow as when importing libvirt VMs: stop the SmallStack VM, create libvirt VM, ask to commit, defer or abandon.
Commit: destroy the SmallStack VM.
Defer: leave both VMs in existence, with the SmallStack VM stopped. Manual cleanup is essential.
Abandon: destroy libvirt VM, start SmallStack VM.
Sound good?
It would be nice if the various services exposed a Prometheus /metrics endpoint. This would allow for easy monitoring and alerting, which is especially useful for Kubernetes users, who very often use Prometheus.
As the project is in Go, we have an efficient library for instrumenting code.
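As a rough illustration of what such an endpoint involves, here is a stdlib-only sketch that emits the Prometheus text exposition format; the metric name is hypothetical, and a real integration would almost certainly use the prometheus/client_golang library instead of hand-formatting:

```go
package main

import (
	"fmt"
	"net/http"
)

// metricsBody builds a response in the Prometheus text exposition
// format. "subd_update_cycles_total" is an illustrative name, not an
// existing subd metric.
func metricsBody(updateCycles uint64) string {
	return fmt.Sprintf(
		"# HELP subd_update_cycles_total Hypothetical count of Update() cycles.\n"+
			"# TYPE subd_update_cycles_total counter\n"+
			"subd_update_cycles_total %d\n", updateCycles)
}

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain; version=0.0.4")
		fmt.Fprint(w, metricsBody(42))
	})
	fmt.Print(metricsBody(42))
	// http.ListenAndServe(":9100", nil) // left commented so the sketch exits
}
```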
subd is consuming 70% of a single core on my test VM. I started it as subd -rootDir=/test, where /test is an empty directory. Adding -showStats reveals that many scan cycles are being started in short succession.
On inspection, I think sub/scanner.scannerDaemon needs something like a time.Ticker to place an upper bound on the frequency of scans.
I'm running 42542a1.
pi$ go install ./cmd/dominator
# github.com/Symantec/Dominator/lib/wsyscall
lib/wsyscall/wrappers_linux.go:11: cannot use source.Nlink (type uint32) as type uint64 in assignment
lib/wsyscall/wrappers_linux.go:17: cannot use source.Blksize (type int32) as type int64 in assignment
# github.com/Symantec/Dominator/lib/filesystem
lib/filesystem/compare.go:192: cannot use left.MtimeSeconds (type int64) as type int32 in assignment
lib/filesystem/compare.go:193: cannot use int64(left.MtimeNanoSeconds) (type int64) as type int32 in assignment
lib/filesystem/compare.go:194: cannot use right.MtimeSeconds (type int64) as type int32 in assignment
lib/filesystem/compare.go:195: cannot use int64(right.MtimeNanoSeconds) (type int64) as type int32 in assignment
lib/filesystem/compare.go:361: cannot use left.MtimeSeconds (type int64) as type int32 in assignment
lib/filesystem/compare.go:362: cannot use int64(left.MtimeNanoSeconds) (type int64) as type int32 in assignment
lib/filesystem/compare.go:363: cannot use right.MtimeSeconds (type int64) as type int32 in assignment
lib/filesystem/compare.go:364: cannot use int64(right.MtimeNanoSeconds) (type int64) as type int32 in assignment
pi$
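These errors look like a build on a 32-bit platform (the "pi$" prompt suggests a Raspberry Pi), where syscall.Stat_t fields such as Nlink and Blksize have narrower types than on amd64. Here is a sketch of the kind of explicit widening conversion that fixes this portably; the struct types are stand-ins, not the actual wsyscall API:

```go
package main

import "fmt"

// rawStat mimics the platform-dependent syscall.Stat_t fields: on
// 32-bit Linux/ARM, Nlink is uint32 and Blksize is int32, whereas the
// portable wrapper struct uses 64-bit fields. (Hypothetical types.)
type rawStat struct {
	Nlink   uint32
	Blksize int32
}

type portableStat struct {
	Nlink   uint64
	Blksize int64
}

// convertStat widens the fields explicitly; Go never converts
// implicitly, so this compiles on both 32-bit and 64-bit platforms.
func convertStat(src rawStat) portableStat {
	return portableStat{
		Nlink:   uint64(src.Nlink),
		Blksize: int64(src.Blksize),
	}
}

func main() {
	fmt.Println(convertStat(rawStat{Nlink: 2, Blksize: 4096}))
}
```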
# filegen-server
Do not run the filegen server as root
# imageserver
Do not run the Image Server as root
# dominator -alsoLogToStderr
-username argument missing
#
This should be homogenized: filegen-server and imageserver should follow dominator and accept a -username flag. Right now, the easiest way to treat these binaries equally is by setuid'ing them all to a non-privileged user, but setuid is a security anti-pattern...
(Btw, mdbd seems like a good candidate for the set of daemons that don't require root.)
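The homogenized behaviour could be factored into a small, testable decision function like the following; the names are illustrative, not the actual Dominator API:

```go
package main

import (
	"errors"
	"fmt"
)

// checkPrivileges captures the proposed policy as a pure function:
// a daemon started as root must be given a -username to drop to,
// while a non-root start needs nothing. (Illustrative, not the
// actual Dominator code.)
func checkPrivileges(euid int, username string) error {
	if euid != 0 {
		return nil // already unprivileged
	}
	if username == "" {
		return errors.New("-username argument missing")
	}
	return nil // caller would now look up username and drop privileges
}

func main() {
	fmt.Println(checkPrivileges(0, ""))       // -username argument missing
	fmt.Println(checkPrivileges(0, "nobody")) // <nil>
	fmt.Println(checkPrivileges(1000, ""))    // <nil>
}
```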
I cannot get a single Subd.Update to complete. Forcing a stack trace using SIGQUIT points at goroutine 37, which seems stuck reading from the ack channel:
$ sudo ~/bin/subd -alsoLogToStderr -logDir="" -rootDir /test -CAfile /etc/ssl/certs/DominatorCA.pem -initialLogDebugLevel 100
2018/02/21 08:00:12 Restoring CPU limit: 50%
2018/02/21 08:00:13 reverse listener: remember(127.0.0.1:39884): 0xc42000e040
2018/02/21 08:00:13 Update()
^\SIGQUIT: quit
PC=0x459b51 m=0 sigcode=128
goroutine 0 [idle]:
runtime.futex(0xb080a8, 0x0, 0x0, 0x0, 0x7ffe00000000, 0x43142a, 0x0, 0x0, 0x7ffe74f9b608, 0x41096b, ...)
/home/vagrant/go/src/runtime/sys_linux_amd64.s:526 +0x21
runtime.futexsleep(0xb080a8, 0x7ffe00000000, 0xffffffffffffffff)
/home/vagrant/go/src/runtime/os_linux.go:45 +0x4b
runtime.notesleep(0xb080a8)
/home/vagrant/go/src/runtime/lock_futex.go:151 +0x9b
runtime.stoplockedm()
/home/vagrant/go/src/runtime/proc.go:2096 +0x8c
runtime.schedule()
/home/vagrant/go/src/runtime/proc.go:2488 +0x2da
runtime.park_m(0xc420000180)
/home/vagrant/go/src/runtime/proc.go:2599 +0xb6
runtime.mcall(0x0)
/home/vagrant/go/src/runtime/asm_amd64.s:351 +0x5b
goroutine 1 [sleep, locked to thread]:
time.Sleep(0x3b7dbe04)
/home/vagrant/go/src/runtime/time.go:102 +0x166
github.com/Symantec/Dominator/sub/scanner.scannerDaemon(0xc420028640, 0x10, 0xc420022fa0, 0x1e, 0xc4201b1590, 0xc42017e720, 0x8eaea0, 0xc42002d470)
/home/vagrant/src/github.com/Symantec/Dominator/sub/scanner/scand.go:43 +0x9d
github.com/Symantec/Dominator/sub/scanner.startScanning(0xc420028640, 0x10, 0xc420022fa0, 0x1e, 0xc4201b1590, 0x8eaea0, 0xc42002d470, 0xc42017ac00)
/home/vagrant/src/github.com/Symantec/Dominator/sub/scanner/scand.go:32 +0x133
github.com/Symantec/Dominator/sub/scanner.StartScanning(0xc420028640, 0x10, 0xc420022fa0, 0x1e, 0xc4201b1590, 0x8eaea0, 0xc42002d470, 0xc42017ac00)
/home/vagrant/src/github.com/Symantec/Dominator/sub/scanner/api.go:151 +0x77
main.main()
/home/vagrant/src/github.com/Symantec/Dominator/cmd/subd/main.go:425 +0x964
goroutine 19 [syscall, 2 minutes]:
os/signal.signal_recv(0x0)
/home/vagrant/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/home/vagrant/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/home/vagrant/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 4 [chan send, 2 minutes]:
github.com/Symantec/tricorder/go/tricorder.newIdSequence.func1(0xc42017e000)
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:43 +0x41
created by github.com/Symantec/tricorder/go/tricorder.newIdSequence
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:40 +0x58
goroutine 5 [select, 2 minutes]:
github.com/Symantec/tricorder/go/tricorder.(*region).handleRequests(0xc420168280)
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:185 +0xf0
github.com/Symantec/tricorder/go/tricorder.newRegion.func1(0xc420168280)
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:143 +0x2b
created by github.com/Symantec/tricorder/go/tricorder.newRegion
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:142 +0x15a
goroutine 7 [select, 2 minutes]:
github.com/Symantec/tricorder/go/tricorder.(*region).handleRequests(0xc420169220)
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:185 +0xf0
github.com/Symantec/tricorder/go/tricorder.newRegion.func1(0xc420169220)
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:143 +0x2b
created by github.com/Symantec/tricorder/go/tricorder.newRegion
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:142 +0x15a
goroutine 8 [select, 2 minutes]:
github.com/Symantec/tricorder/go/tricorder.(*region).handleRequests(0xc420169270)
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:185 +0xf0
github.com/Symantec/tricorder/go/tricorder.newRegion.func1(0xc420169270)
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:143 +0x2b
created by github.com/Symantec/tricorder/go/tricorder.newRegion
/home/vagrant/src/github.com/Symantec/tricorder/go/tricorder/metric.go:142 +0x15a
goroutine 35 [IO wait]:
internal/poll.runtime_pollWait(0x7f06b0789e30, 0x72, 0xc4201d9268)
/home/vagrant/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc42017a198, 0x72, 0xffffffffffffff00, 0x8e5ec0, 0xac9638)
/home/vagrant/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc42017a198, 0xc4201a2800, 0x800, 0x800)
/home/vagrant/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc42017a180, 0xc4201a2800, 0x800, 0x800, 0x0, 0x0, 0x0)
/home/vagrant/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc42017a180, 0xc4201a2800, 0x800, 0x800, 0x0, 0x0, 0x0)
/home/vagrant/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc42000e040, 0xc4201a2800, 0x800, 0x800, 0x0, 0x0, 0x0)
/home/vagrant/go/src/net/net.go:176 +0x6a
crypto/tls.(*block).readFromUntil(0xc420090750, 0x7f06b074ad60, 0xc42000c100, 0x5, 0xc42000c100, 0x42aa64)
/home/vagrant/go/src/crypto/tls/conn.go:493 +0x96
crypto/tls.(*Conn).readRecord(0xc4201b2000, 0x8b0217, 0xc4201b2120, 0xc4201d9668)
/home/vagrant/go/src/crypto/tls/conn.go:595 +0xe0
crypto/tls.(*Conn).Read(0xc4201b2000, 0xc420123000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/home/vagrant/go/src/crypto/tls/conn.go:1156 +0x100
bufio.(*Reader).fill(0xc4200c6600)
/home/vagrant/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadSlice(0xc4200c6600, 0x1309c80a, 0xc4201d96c8, 0xc4201d96f8, 0x758ba0, 0xc4201b5628, 0x3fe433461309c800)
/home/vagrant/go/src/bufio/bufio.go:341 +0x2c
bufio.(*Reader).ReadBytes(0xc4200c6600, 0x9a10a, 0x9a1db, 0xbe9b6b98c0ec5bd0, 0x220fb9d8f2, 0x0, 0x0)
/home/vagrant/go/src/bufio/bufio.go:419 +0x6b
bufio.(*Reader).ReadString(0xc4200c6600, 0xa, 0x0, 0x0, 0x0, 0x0)
/home/vagrant/go/src/bufio/bufio.go:459 +0x38
github.com/Symantec/Dominator/lib/srpc.handleConnection(0xc42006e440)
/home/vagrant/src/github.com/Symantec/Dominator/lib/srpc/server.go:370 +0x9f
github.com/Symantec/Dominator/lib/srpc.httpHandler(0x8e8780, 0xc4200201c0, 0xc420178c00, 0x1)
/home/vagrant/src/github.com/Symantec/Dominator/lib/srpc/server.go:324 +0x873
github.com/Symantec/Dominator/lib/srpc.tlsHttpHandler(0x8e8780, 0xc4200201c0, 0xc420178c00)
/home/vagrant/src/github.com/Symantec/Dominator/lib/srpc/server.go:220 +0x44
net/http.HandlerFunc.ServeHTTP(0x8af718, 0x8e8780, 0xc4200201c0, 0xc420178c00)
/home/vagrant/go/src/net/http/server.go:1947 +0x44
net/http.(*ServeMux).ServeHTTP(0xb06e40, 0x8e8780, 0xc4200201c0, 0xc420178c00)
/home/vagrant/go/src/net/http/server.go:2337 +0x130
net/http.serverHandler.ServeHTTP(0xc4201bbba0, 0x8e8780, 0xc4200201c0, 0xc420178c00)
/home/vagrant/go/src/net/http/server.go:2694 +0xbc
net/http.(*conn).serve(0xc42001e0a0, 0x8e8bc0, 0xc42006e100)
/home/vagrant/go/src/net/http/server.go:1830 +0x651
created by net/http.(*Server).Serve
/home/vagrant/go/src/net/http/server.go:2795 +0x27b
goroutine 10 [select]:
main.main.func1(0xc42017e720, 0x8af768)
/home/vagrant/src/github.com/Symantec/Dominator/cmd/subd/main.go:385 +0x8d3
created by github.com/Symantec/Dominator/sub/scanner.startScanning
/home/vagrant/src/github.com/Symantec/Dominator/sub/scanner/scand.go:31 +0xd9
goroutine 11 [syscall, 2 minutes]:
syscall.Syscall6(0x120, 0x3, 0xc420050c90, 0xc420050c84, 0x80800, 0x0, 0x0, 0x1, 0x9, 0xc420050c40)
/home/vagrant/go/src/syscall/asm_linux_amd64.s:44 +0x5
syscall.accept4(0x3, 0xc420050c90, 0xc420050c84, 0x80800, 0xc420028520, 0xf, 0x10000000000000f)
/home/vagrant/go/src/syscall/zsyscall_linux_amd64.go:1546 +0x7e
syscall.Accept4(0x3, 0x80800, 0x0, 0x0, 0x0, 0xc420050d90, 0x48589d)
/home/vagrant/go/src/syscall/syscall_linux.go:454 +0x88
internal/poll.accept(0x3, 0x8af800, 0x0, 0x0, 0xc420070000, 0x7f06b07e9da8, 0x784f01, 0x10100c420028540)
/home/vagrant/go/src/internal/poll/sock_cloexec.go:17 +0x3f
internal/poll.(*FD).Accept(0xc42017af80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/home/vagrant/go/src/internal/poll/fd_unix.go:365 +0xd1
net.(*netFD).accept(0xc42017af80, 0x0, 0xc4200c7df8, 0x452940)
/home/vagrant/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc42000e4c0, 0xc4200c7da0, 0xc420050fa8, 0x860c01)
/home/vagrant/go/src/net/tcpsock_posix.go:136 +0x2e
net.(*TCPListener).Accept(0xc42000e4c0, 0xc420050fa8, 0xc42000e040, 0x7f06b074ab80, 0xc42000e040)
/home/vagrant/go/src/net/tcpsock.go:259 +0x49
github.com/Symantec/Dominator/lib/net/reverseconnection.(*Listener).listen(0xc42009e3c0, 0xc4200c7da0)
/home/vagrant/src/github.com/Symantec/Dominator/lib/net/reverseconnection/listener.go:117 +0x49
created by github.com/Symantec/Dominator/lib/net/reverseconnection.listen
/home/vagrant/src/github.com/Symantec/Dominator/lib/net/reverseconnection/listener.go:70 +0x1dc
goroutine 12 [chan receive, 2 minutes]:
github.com/Symantec/Dominator/lib/net/reverseconnection.(*Listener).accept(...)
/home/vagrant/src/github.com/Symantec/Dominator/lib/net/reverseconnection/listener.go:92
github.com/Symantec/Dominator/lib/net/reverseconnection.(*Listener).Accept(0xc42009e3c0, 0x8afb08, 0xc42001e0a0, 0x8e8c80, 0xc4201deff0)
/home/vagrant/src/github.com/Symantec/Dominator/lib/net/reverseconnection/api.go:65 +0xc3
net/http.(*Server).Serve(0xc4201bbba0, 0x8e8400, 0xc42009e3c0, 0x0, 0x0)
/home/vagrant/go/src/net/http/server.go:2770 +0x1a5
net/http.Serve(0x8e8400, 0xc42009e3c0, 0x0, 0x0, 0x0, 0x0)
/home/vagrant/go/src/net/http/server.go:2389 +0x73
created by github.com/Symantec/Dominator/sub/httpd.StartServer
/home/vagrant/src/github.com/Symantec/Dominator/sub/httpd/api.go:31 +0x147
goroutine 13 [select, 2 minutes, locked to thread]:
runtime.gopark(0x8b0180, 0x0, 0x88fb11, 0x6, 0x18, 0x1)
/home/vagrant/go/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc4201eaf50, 0xc42017e9c0)
/home/vagrant/go/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
/home/vagrant/go/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
/home/vagrant/go/src/runtime/asm_amd64.s:2361 +0x1
goroutine 37 [chan receive, 2 minutes]:
github.com/Symantec/Dominator/sub/scanner.doDisableScanner(0xc400000001)
/home/vagrant/src/github.com/Symantec/Dominator/sub/scanner/scand.go:71 +0x5b
github.com/Symantec/Dominator/sub/rpcd.(*rpcType).updateAndUnlock(0xc4200200e0, 0xc4200294e0, 0x7, 0x0, 0x0, 0x0, 0x0, 0xc4201ee5a0, 0x3, 0x3, ...)
/home/vagrant/src/github.com/Symantec/Dominator/sub/rpcd/update.go:70 +0xc1
created by github.com/Symantec/Dominator/sub/rpcd.(*rpcType).Update
/home/vagrant/src/github.com/Symantec/Dominator/sub/rpcd/update.go:41 +0x295
rax 0xca
rbx 0xb07f60
rcx 0xffffffffffffffff
rdx 0x0
rdi 0xb080a8
rsi 0x0
rbp 0x7ffe74f9b5d0
rsp 0x7ffe74f9b588
r8 0x0
r9 0x0
r10 0x0
r11 0x286
r12 0xffffffffffffffff
r13 0x4
r14 0x3
r15 0x38
rip 0x459b51
rflags 0x286
cs 0x33
fs 0x0
gs 0x0
$
State of the root tree (at /test):
$ sudo find /test
/test
/test/.subd
/test/.subd/root
/test/.subd/tmp
/test/.subd/tmp/fsbench
/test/.subd/tmp/fsbench/801
/test/.subd/objects
/test/.subd/objects/2f
/test/.subd/objects/2f/7d
/test/.subd/objects/2f/7d/83709aef0716b3e191d9db08a4e6c3a3170b8acdd771ed3d49c1c101a9295923d548396a0765eedf19e2fe8e00ee28d197c4eaadd407eb52be6b18dfc4ea
/test/.subd/objects/81
/test/.subd/objects/81/38
/test/.subd/objects/81/38/1f1dacd4824a6c503fd07057763099c12b8309d0abcec4000c9060cbbfa67988b2ada669ab4837fcd3d4ea6e2b8db2b9da9197d5112fb369fd006da545de
$
Would you consider using a different RPC protocol, such as protobuf or thrift, so that components written in other programming languages can be subbed into Dominator? If I'm reading the code correctly, the existing code uses srpc, which seems to be a Go-specific RPC mechanism. I'd like to implement a filegen-server, but learning Go presents a roadblock for me and greatly extends the time it'll take to implement something useful.
Hi Everyone,
I am going through various documents for Dominator.
I saw the Machine Birthing system document, which describes reading PXE boot requests.
https://docs.google.com/document/u/1/d/1y7rPTuG145fdPhqaCdLu_03D1dC1UTRzyEq61ygIQeg/pub
Would it be possible to share a link to the Machine Birthing system source?
Thanks,
Mark
I was not able to find a description of the process for registering a new VM (running subd) with the Dominator and/or the MDB.
Who initiates this relationship: the VM with subd, or the Dominator calling subd?
What are the recommended and possible approaches?
The only mention I have found is discovering inventory via cis here.
Are any other sources supported now?
Also, could you please point me to some sample config files, if there are any?
I would appreciate any clarification.
@cviecco I'm thinking about some implementation questions. I'll put each one in a separate comment so that each can be replied to independently.
First, should VM owners specify which Gluster/volume to use for the backing store, or should they just specify that they want any GlusterFs volume?
As far as I can tell, I've followed the install instructions in the readme exactly. When I try to add an image to the imageserver, I get a log line in the logs saying:
Apr 16 16:49:05 dominator imageserver[11856]: 2019/04/16 16:49:05 bad line: "$methods"
I tried to add the image as follows:
//sparse.0 is a sparse file created with dd
ubuntu@dominator:~/certs$ dd if=/dev/zero of=sparse bs=1 count=0 seek=512M
ubuntu@dominator:~/certs$ ~/go/bin/imagetool -debug -logDebugLevel 10 add sparse.0 sparse "" ""
Error adding image: "sparse.0": error checking for image existence: EOF
I'm guessing I created the SSL certificates incorrectly? I'm not sure where to start debugging. The -debug and -logDebugLevel 10 flags give no additional output.
I think some/most of the pieces are here in the repo, but an install package or script would really be helpful. I see there's an install.lib file under scripts that might be what I want, but I'm not sure how to use it.
I'm working my way through the docs here. I'm wondering if there are any example manifest files I can reference. Thanks!
Hi Everyone,
I wanted to check whether the combination of the various tools (server side: dominator, image server, etc.) is available in a ready-to-deploy format.
Either Docker Compose or Vagrant or something else would do.
https://github.com/Symantec/Dominator/tree/master/init.d
If not, I can make and contribute one.
Thanks
Mark
It looks like the make-cert utility is not there anymore; however, it is still mentioned in the getting-started.md guide.
What is the new recommended way to generate certificates for subd?
Thank you
It would be good to have a metric that holds, for example, a float32 obtained by taking a 4-byte prefix of the SHA-512 of the merkle hash of the entire file system. Such values could be logged to time-series databases and used by monitoring systems to make sure that the managed hosts converge to the same bits. Subd already scans the file system and calculates hashes, so deriving a merkle hash should not introduce much extra overhead.
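The derivation could look something like the following sketch, where the merkle root is a placeholder byte slice (subd's actual hash representation may differ):

```go
package main

import (
	"crypto/sha512"
	"encoding/binary"
	"fmt"
)

// treeMetric derives a float from the first 4 bytes of the SHA-512 of a
// (hypothetical) merkle root hash, scaled into the unit interval. Equal
// trees give equal values, so a time series of this metric converging to
// one value across hosts indicates the hosts converged to the same bits.
func treeMetric(merkleRoot []byte) float32 {
	sum := sha512.Sum512(merkleRoot)
	prefix := binary.BigEndian.Uint32(sum[:4])
	return float32(float64(prefix) / (1 << 32))
}

func main() {
	a := treeMetric([]byte("example-root-hash"))
	b := treeMetric([]byte("example-root-hash"))
	fmt.Println(a == b, a >= 0 && a <= 1) // true true
}
```

Note that the float value is only a convergence fingerprint: collisions in 4 bytes are possible, so it signals divergence reliably but equality only probabilistically.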
Dominator is using the following import path:
"gopkg.in/fsnotify.v0"
However, upon visiting that url, one learns that the canonical import path is now:
"gopkg.in/fsnotify/fsnotify.v0"
This is currently tripping up dep (and potentially other tools).
# imagetool show bootstrap/Debian-9/2018-05-21:00:31:19 | head -1
drwx------ 1 0 0 /
#
The /etc/imaginator/conf.json used:
{
"BootstrapStreams": {
"bootstrap/Debian-9": {
"BootstrapCommand": [
"debootstrap",
"--arch=amd64",
"stretch",
"$dir",
"http://deb.debian.org/debian/"
],
"FilterLines": [
"/etc/hostname",
"/etc/machine-id",
"/var/log/.*"
],
"PackagerType": "deb"
}
},
"ImageStreamsUrl": "file:///etc/imaginator/image-streams.json",
"ImageStreamsToAutoRebuild": [],
"PackagerTypes": {
"deb": {
"CleanCommand": [
"apt-get",
"clean"
],
"InstallCommand": [
"apt-get",
"-q",
"-y",
"--no-install-recommends",
"install"
],
"ListCommand": {
"ArgList": [
"dpkg-query",
"-f",
"${binary:Package} ${Version} ${Installed-Size}\n",
"--show"
],
"SizeMultiplier": 1024
},
"RemoveCommand": [
"apt-get",
"-q",
"--purge",
"-y",
"--allow-remove-essential",
"remove"
],
"UpdateCommand": [
"apt-get",
"-q",
"-y",
"update"
],
"UpgradeCommand": [
"apt-get",
"-q",
"-y",
"-o",
"Dpkg::Options::=--force-confold",
"dist-upgrade"
],
"Verbatim": [
"export DEBIAN_FRONTEND=noninteractive"
]
}
}
}
# cat /var/log/imaginator/2018-05-21:15:16:13.994
2018/05/21 15:16:14 Building new image for stream: bootstrap/Debian-9
2018/05/21 15:21:30 Error building image: bootstrap/Debian-9: expected integer
#
The following patch works around it:
diff -purN imagebuilder/builder/bootstrapImage.go imagebuilder/builder/bootstrapImage.go
--- imagebuilder/builder/bootstrapImage.go 2018-05-21 15:50:22.388786285 +0000
+++ imagebuilder/builder/bootstrapImage.go 2018-05-21 15:50:59.807545192 +0000
@@ -117,8 +117,8 @@ func (packager *packagerType) writePacka
func (packager *packagerType) writePackageInstallerContents(writer io.Writer) {
fmt.Fprintln(writer, "#! /bin/sh")
fmt.Fprintln(writer, "# Created by imaginator.")
- fmt.Fprintln(writer, "mount -n none -t proc /proc")
- fmt.Fprintln(writer, "mount -n none -t sysfs /sys")
+ fmt.Fprintln(writer, "mount -n none -t proc /proc 2>/dev/null")
+ fmt.Fprintln(writer, "mount -n none -t sysfs /sys 2>/dev/null")
for _, line := range packager.Verbatim {
fmt.Fprintln(writer, line)
}
The error is due to fmt.Fscanf in (*bootstrapStream).build:
output := new(bytes.Buffer)
err := runInTarget(nil, output, rootDir, packagerPathname,
"show-size-multiplier")
if err != nil {
return nil, err
}
sizeMultiplier := uint64(1)
nScanned, err := fmt.Fscanf(output, "%d", &sizeMultiplier)
... trying to parse the following error from mount as an integer:
mount: none is already mounted or /sys busy
none is already mounted on /proc
none is already mounted on /sys
none is already mounted on /proc
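Beyond silencing mount, the parse itself could be made defensive. This is a hedged sketch (parseSizeMultiplier is a hypothetical helper, not the existing code) of a stricter parse that rejects any output that is not exactly one decimal integer, falling back to the default multiplier of 1:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSizeMultiplier is a stricter alternative to fmt.Fscanf: it only
// accepts output that is exactly one decimal integer, and falls back to
// the default of 1 (with an error) when noise such as mount warnings
// leaks into the captured output.
func parseSizeMultiplier(output string) (uint64, error) {
	fields := strings.Fields(output)
	if len(fields) != 1 {
		return 1, fmt.Errorf("expected a single integer, got %q", output)
	}
	n, err := strconv.ParseUint(fields[0], 10, 64)
	if err != nil {
		return 1, err
	}
	return n, nil
}

func main() {
	fmt.Println(parseSizeMultiplier("1024\n"))
	fmt.Println(parseSizeMultiplier("mount: none is already mounted or /sys busy\n1024\n"))
}
```

With something like this, the noisy mount output would produce a clear error message instead of the opaque "expected integer" failure.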
Is there a gitter/slack/irc/etc. place where I can chat with fellow Dominator users? I'm running into some silly n00b issues that I think could be resolved really easily, but would take forever to sort out over GitHub issues and by poking around the source code.