lavabit / robox

The tools needed to robotically create/configure/provision a large number of operating systems, for a variety of hypervisors, using Packer.
Hi, I made a prototype base box for MirBSD, also known as MirOS. It's similar to OpenBSD but with a few quirks. Here are the packer tweaks and tests I implemented to support this VM:
https://github.com/mcandre/packer-templates/tree/master/miros
Basic functionality seems to work, including rsync-based shared folders and provisioning files and scripts. Some notes:
packer build is able to (just barely) succeed with the provided boot_command, but only because no complex applications are running at the same time that halt -p executes. The MirBSD devs are not interested in adding ACPI support, and the VirtualBox developers are not interested in improving APM support. Applications will generally work, and the file system will generally be safe, as long as sensitive user applications are safely terminated prior to vagrant halt. Not an ideal situation, but one we can document for the corner case of people wishing to deploy MirBSD VMs while avoiding accidental file system corruption! If you find that VMware or qemu provides better APM support, please let me know; then it would actually make sense to offer VMware/libvirt providers but not VirtualBox, to safeguard against this issue.

More provisioning is done in the boot_command than would ordinarily be done at a later phase over SSH (which is much faster). This is because Packer doesn't understand that the last SSH provisioning script might trigger a final powerdown that is not recognized by ACPI, so in practice Packer stalls the build waiting for an SSH connection that never happens. Fortunately, MirBSD is a fairly lean install (~100MiB), so the boot_command provisioning is minimal in time and complexity.

I use a make wrapper that mirrors these files locally with wget. It's up to you whether to do this; for debugging it really speeds up the build cycle. When everything is running smoothly, there's no particular reason to locally mirror these files.

pkg_add still works, and I plan to look into pkgsrc later, in order to get more edgy packages like cmake and pip installed in some of my personal downstream boxes.

Now that Debian v10 Buster has graduated from testing to stable, the repository URLs have changed. This means the generic/debian10 VM is in a corrupt apt state, unable to install packages. Let's point the buster VMs at the sharp teeth, unleashed, ultra-beast stable release!
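A hedged sketch of the kind of in-guest fix this implies (the sources.list contents and mirror URLs here are assumptions for illustration, not the actual robox provisioning script):

```shell
# Sketch only: repoint apt at the now-stable buster suites and accept the
# release-info change that happens when a suite graduates from testing to stable.
cat > /etc/apt/sources.list <<'EOF'
deb http://deb.debian.org/debian buster main
deb http://security.debian.org/debian-security buster/updates main
EOF
apt-get update --allow-releaseinfo-change
```

The --allow-releaseinfo-change flag is what lets apt proceed past the "repository changed its Suite value" error that appears after the promotion to stable.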
Would love to see an AIX VM!
Now that Haiku has released a more stable beta, could we get a base box for Haiku?
https://www.haiku-os.org/get-haiku/
Here's an example packer template for Haiku alpha:
https://github.com/mcandre/packer-templates/tree/master/haiku
https://developers.redhat.com/blog/2019/05/07/red-hat-enterprise-linux-8-now-generally-available/
generic-hyperv.json
generic-libvirt.json
generic-parallels.json
generic-virtualbox.json
generic-vmware.json
I'm trying to use roboxes/fedora29 but I'm running into issues when trying to have a synced folder.
Apparently the fedora package virtualbox-guest-additions.x86_64 does not supply the vboxsf file system. See this redhat bug and this one for discussion.
Currently the vbguest plugin also fails to properly install the guest additions with this bug.
So I'm wondering if there is a way to get Fedora in a virtual machine with synced folders, and to be able to do vagrant destroy and vagrant up and have it work, without having to roll one's own Fedora box.
Maybe using nfs is a solution, but my networking setup is a bit complex, so I haven't gotten round to that.
I don't know if roboxes could create a box compatible with synced folders, but I thought I should at least report the current state of affairs.
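For anyone else hitting this, a minimal sketch of the NFS route mentioned above (the IP and paths are placeholders; NFS synced folders require a private network and a working NFS daemon on the host):

```ruby
# Sketch: sidestep the missing vboxsf module by using NFS for the synced folder.
Vagrant.configure("2") do |config|
  config.vm.box = "roboxes/fedora29"
  # NFS needs a host-only/private network to export over.
  config.vm.network "private_network", ip: "192.168.56.10"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```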
We use a non public subdomain inside our internal DNS infrastructure. Hard coding public DNS servers breaks this (common) setup.
This renders all your Ubuntu 18.04 and 18.10 images unusable for many companies.
root@default-u1804:~# dig proxy.internal.ourdomain.com
; <<>> DiG 9.11.3-1ubuntu1.2-Ubuntu <<>> proxy.internal.ourdomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 34613
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;proxy.internal.ourdomain.com. IN A
;; Query time: 31 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Mon Nov 19 09:16:11 PST 2018
;; MSG SIZE rcvd: 57
Starting a generic/ubuntu1804 vagrant box with VirtualBox with 256M memory gives a kernel panic and an out of memory error on the console, vagrant ssh does not work, and provisioning fails (happens early in the boot process).
This issue does not occur with bento/ubuntu-18.04 or ubuntu/bionic64. Is there a different kernel or kernel options in use, or something else that might cause this?
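Until the root cause is known, one obvious workaround is simply to give the box more memory in the Vagrantfile (the 1024 figure is illustrative, not a documented minimum):

```ruby
# Sketch: raise VirtualBox memory above the 256M that triggers the panic here.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
end
```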
Which namespace should users prefer when referencing these boxes from Vagrant Cloud, the "generic" or "roboxes" namespace? Will one become deprecated in favor of the other over time?
Hey, I stumbled on this project when on the Vagrant store looking for boxes and noticed the generic boxes were all done quite well, but was unable to find some boxes offered by you that I'd like to see. So first off, thank you for this project! Appreciate the work you've put in here.
I made a comment on the other issue where you were asking for the next distros that people would like to see added. But, I would also like to see documentation on how to contribute to the repository if that's not too hard. That way, it wouldn't only be upon you to make boxes, but others could follow your steps so that they can add in the Operating Systems that they'd like to see. Looking at the repo now, it's a little unclear what all needs to happen in order to get another OS added for automated box building.
@mcandre when I'm able to get RAM/SSDs for the recently donated blades, and then get those robots up and running, I plan to add more distros. Devuan, and possibly Minix, are at the top of my personal wish list, but if you have 1 or 2 that you think are important, now would be the time to suggest them. MacOS/Windows are also near the top of my list, but they'll require the most work.
Technically speaking, once I get the new blade server working I should have the capacity to add several distros. The bottleneck will be the time it takes to test/troubleshoot a new distro on the 5 different hypervisors I currently target. If you're willing to tweak your templates so they fit into the Robox generic pipeline, and then submit a pull request, that would make it easier for me to accept more of the distros you keep requesting.
If you're interested, I can set up an experimental branch you can work with while you get the new boxes integrated.
In the list of operating systems on https://roboxes.org, the link to RedHat has been incorrectly entered as https://https//www.redhat.com/en/technologies/linux-platforms/enterprise-linux
Should be https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
I tried to run this box but seem to be getting the following error:
There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.
Path: <provider config: parallels>
Line number: 42
Message: ArgumentError: wrong number of arguments (given 4, expected 1..2)
Bringing up a box with a Vagrantfile that shares my home folder with the guest generates the following error:
MacBook-Pro:ubuntu-18.04(master)$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'generic/ubuntu1804' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Loading metadata for box 'generic/ubuntu1804'
default: URL: https://vagrantcloud.com/generic/ubuntu1804
==> default: Adding box 'generic/ubuntu1804' (v1.8.38) for provider: virtualbox
default: Downloading: https://vagrantcloud.com/generic/boxes/ubuntu1804/versions/1.8.38/providers/virtualbox.box
default: Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com
==> default: Successfully added box 'generic/ubuntu1804' (v1.8.38) for 'virtualbox'!
==> default: Importing base box 'generic/ubuntu1804'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'generic/ubuntu1804' is up to date...
==> default: Setting the name of the VM: ubuntu-1804_default_1541001124673_44044
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 80 (guest) => 8081 (host) (adapter 1)
default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2200
default: SSH username: vagrant
default: SSH auth method: private key
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
default: /home/vohi => /Users/vohi
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:
mount -t vboxsf -o uid=1000,gid=1000 home_vohi /home/vohi
The error output from the command was:
mount: /home/vohi: wrong fs type, bad option, bad superblock on home_vohi, missing codepage or helper program, or other error.
The Vagrantfile is:
# -*- mode: ruby -*-
# vi: set ft=ruby :
$user = ENV['USER']
Vagrant.configure("2") do |config|
config.vm.box = "generic/ubuntu1804"
config.vm.network "forwarded_port", guest: 80, host: 8081, host_ip: "127.0.0.1"
config.vm.synced_folder "~", "/home/#{$user}"
config.vm.provision "shell", path: "provision.sh", args: "/home/#{$user}"
end
Could robox offer bhyve, xhyve providers in addition to the other providers, in order to run guests more efficiently on FreeBSD and macOS respectively?
Could we get a box for Debian Buster, which provides some newer packages for CloudABI development?
Otherwise, multiple instances of the same image will be leased the same IP via dhcp and hell breaks loose.
Hi,
As mentioned in issue #11, having a hard-coded external DNS breaks our internal DNS resolver. In the meantime, I've taken a bit of time to modify your robox.sh script for our use, without the hard-coded DNS.
Would this be something you'd be open to merge via PR? Now that the robox namespace seems to be preferred, maybe we could utilise the 'generic' namespace for the boxes without dns changes, and the 'robox' namespace for boxes with hard-coded dns?
Thoughts? Perhaps we could even add this as a robox.sh build-time option?
I cannot ssh into generic/ubuntu1604, provider virtualbox, version 1.8.24.
If I log into the console, sshd is running and listening on port 22; however, there is an error in the logs:
error: Could not load host key: /etc/ssh/ssh_host_rsa_key
error: Could not load host key: /etc/ssh/ssh_host_dsa_key
error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key
error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
> vagrant up --provider virtualbox
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'generic/ubuntu1604'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'generic/ubuntu1604' is up to date...
==> default: Setting the name of the VM: generic_ubuntu1604_default_1534863324235_72684
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'unknown' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
The primary issue for this error is that the provider you're using
is not properly configured. This is very rarely a Vagrant issue.
> ssh [email protected] -p 2460
Connection closed by 127.0.0.1 port 2460
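If the host keys are simply missing from the image, one possible recovery from the VirtualBox console is to regenerate them (a sketch, assuming a standard OpenSSH layout on Ubuntu 16.04):

```shell
# Regenerate any missing /etc/ssh/ssh_host_* keys, then restart sshd.
sudo ssh-keygen -A
sudo systemctl restart ssh
```

ssh-keygen -A creates each default host key type (rsa, dsa, ecdsa, ed25519) only if its file does not already exist, so it is safe to run on a partially keyed system.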
Example packer template:
https://github.com/mcandre/packer-templates/tree/master/smartos
I'm unable to get generic/ubuntu1804 and generic/ubuntu1604 to work properly with libvirt.
Steps to reproduce:
vagrant init generic/ubuntu1804
vagrant up --provider=libvirt
Vagrant gets stuck at "Waiting for SSH to become available...", but I can see in virt-manager that the VM has booted successfully.
I have previously used the ubuntu1604 box successfully on this computer and I'm unsure what has happened.
The same problem is not present when using virtualbox as provider.
Environment
Host OS: Ubuntu 18.04
Vagrant: 2.0.1
libvirt: 4.0.0
QEMU emulator version 2.11.1
2.2.4
Ubuntu 18.04 LTS (Bionic Beaver)
Alpine Linux 3.9
Vagrant.configure("2") do |config|
config.vm.box = "generic/alpine39"
config.vm.hostname = "alpine.example.org"
end
Expected behavior: the Alpine VM is up and the requested hostname is set.
Actual behavior: vagrant up fails on "Setting hostname...":
==> default: Setting hostname...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
# Save current hostname saved in /etc/hosts
CURRENT_HOSTNAME_FULL="$(hostname -f)"
CURRENT_HOSTNAME_SHORT="$(hostname -s)"
# New hostname to be saved in /etc/hosts
NEW_HOSTNAME_FULL='alpine.exampler.org'
NEW_HOSTNAME_SHORT="${NEW_HOSTNAME_FULL%%.*}"
# Update sysconfig
sed -i 's/\(HOSTNAME=\).*/\1alpine.exampler.org/' /etc/sysconfig/network
# Set the hostname - use hostnamectl if available
if command -v hostnamectl; then
hostnamectl set-hostname --static 'alpine.exampler.org'
hostnamectl set-hostname --transient 'alpine.exampler.org'
else
hostname 'alpine.exampler.org'
fi
# Update ourselves in /etc/hosts
if grep -w "$CURRENT_HOSTNAME_FULL" /etc/hosts; then
sed -i -e "s/( )$CURRENT_HOSTNAME_FULL( )/$NEW_HOSTNAME_FULL/g" -e "s/( )$CURRENT_HOSTNAME_FULL$/$NEW_HOSTNAME_FULL/g" /etc/hosts
fi
if grep -w "$CURRENT_HOSTNAME_SHORT" /etc/hosts; then
sed -i -e "s/( )$CURRENT_HOSTNAME_SHORT( )/$NEW_HOSTNAME_SHORT/g" -e "s/( )$CURRENT_HOSTNAME_SHORT$/$NEW_HOSTNAME_SHORT/g" /etc/hosts
fi
# Restart network
service network restart
Stdout from the command:
127.0.0.1 localhost.lavabit.com localhost localhost.localdomain localhost
127.0.0.1 localhost.lavabit.com localhost localhost.localdomain localhost
::1 localhost localhost.localdomain
Stderr from the command:
sed: /etc/sysconfig/network: No such file or directory
* service: service `network' does not exist
To reproduce: run vagrant up with the provided Vagrantfile.
The same error is mentioned on
hashicorp/vagrant: #10584
I created a Vagrantfile similar to this:
Vagrant.configure("2") do |config|
# I have some proxy settings here
config.vm.hostname = "foo"
config.vm.box = "generic/fedora30"
config.vm.box_check_update = false
end
When I run vagrant up (on Linux) it fails:
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'generic/fedora30'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: client-vm_default_1563189751261_38790
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Configuring proxy environment variables...
==> default: Configuring proxy for Git...
==> default: Configuring proxy for Yum...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims: 6.0.0
VBoxService inside the vm claims: 6.0.8
Going on, assuming VBoxService is correct...
[default] GuestAdditions seems to be installed (6.0.8) correctly, but not running.
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims: 6.0.0
VBoxService inside the vm claims: 6.0.8
Going on, assuming VBoxService is correct...
bash: line 4: start: command not found
bash: line 4: start: command not found
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims: 6.0.0
VBoxService inside the vm claims: 6.0.8
Going on, assuming VBoxService is correct...
bash: line 4: setup: command not found
==> default: Checking for guest additions in VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
setup
Stdout from the command:
Stderr from the command:
bash: line 4: setup: command not found
I have the vagrant-vbguest plugin installed if that matters.
$ vagrant --version
Vagrant 2.2.4
$ vagrant plugin list
vagrant-disksize (0.1.3, global)
- Version Constraint: > 0
vagrant-docker-compose (1.3.0, global)
- Version Constraint: > 0
vagrant-ignition (0.0.3, global)
- Version Constraint: > 0
vagrant-libvirt (0.0.45, global)
- Version Constraint: > 0
vagrant-openstack-provider (0.13.0, global)
- Version Constraint: > 0
vagrant-proxyconf (2.0.1, global)
- Version Constraint: > 0
vagrant-vbguest (0.18.0, global)
- Version Constraint: > 0
Box version: generic/fedora30 (virtualbox, 1.9.18)
Update: indeed this problem is related to vagrant-vbguest. Could this vagrant box support vagrant-vbguest? Otherwise the following workaround in the Vagrantfile works (I found it in some other Vagrantfile):
if Vagrant.has_plugin?("vagrant-vbguest") then
config.vbguest.auto_update = false
end
Hello!
I have a strange issue with some boxes under the libvirt provider. For some reason the box can't find the rootfs after booting up. After spending countless hours rummaging through config files, and ending up reinstalling vagrant, libvirt, and qemu, I still haven't managed to find the root cause.
Current behavior:
$ vagrant init generic/arch
$ vagrant up --provider=libvirt
Bringing machine 'default' up with 'libvirt' provider...
==> default: Box 'generic/arch' could not be found. Attempting to find and install...
default: Box Provider: libvirt
default: Box Version: >= 0
==> default: Loading metadata for box 'generic/arch'
default: URL: https://vagrantcloud.com/generic/arch
==> default: Adding box 'generic/arch' (v1.9.6) for provider: libvirt
default: Downloading: https://vagrantcloud.com/generic/boxes/arch/versions/1.9.6/providers/libvirt.box
==> default: Successfully added box 'generic/arch' (v1.9.6) for 'libvirt'!
==> default: Uploading base box image as volume into libvirt storage...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default: -- Name: vagrant_default
==> default: -- Domain type: kvm
==> default: -- Cpus: 2
==> default: -- Feature: acpi
==> default: -- Feature: apic
==> default: -- Feature: pae
==> default: -- Memory: 2048M
==> default: -- Management MAC:
==> default: -- Loader:
==> default: -- Base box: generic/arch
==> default: -- Storage pool: default
==> default: -- Image: /var/lib/libvirt/images/vagrant_default.img (32G)
==> default: -- Volume Cache: default
==> default: -- Kernel:
==> default: -- Initrd:
==> default: -- Graphics Type: vnc
==> default: -- Graphics Port: 5900
==> default: -- Graphics IP: 127.0.0.1
==> default: -- Graphics Password: Not defined
==> default: -- Video Type: cirrus
==> default: -- Video VRAM: 256
==> default: -- Sound Type:
==> default: -- Keymap: en-us
==> default: -- TPM Path:
==> default: -- INPUT: type=mouse, bus=ps2
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
This phase never finishes. However, on the serial console initramfs complains about missing rootfs:
I have this issue only on Fedora; the generic/arch box works fine on CentOS 7.
$ cat /etc/fedora-release
Fedora release 29 (Twenty Nine)
$ rpm -q libvirt vagrant vagrant-libvirt
libvirt-4.7.0-1.fc29.x86_64
vagrant-2.1.2-3.fc29.noarch
vagrant-libvirt-0.0.40-5.fc29.noarch
I'm under the impression I keep missing something pretty obvious here, as some boxes work (like generic/debian9) and some don't (like generic/arch or generic/centos7).
Would it be possible to add 9p and NFS support into Vagrant libvirt boxes by default, since VirtualBox shares aren't available there and those are (afaik) the only other options for mounts that don't have to be synced manually?
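To illustrate what this request would enable, here is a sketch of requesting a 9p mount via vagrant-libvirt, which only works if the guest kernel ships the 9p/virtio modules (which is exactly what the boxes would need to include; the box name is just an example):

```ruby
# Sketch: a 9p synced folder under the libvirt provider. Requires 9p support
# baked into the guest image.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/debian9"
  config.vm.provider :libvirt
  config.vm.synced_folder ".", "/vagrant", type: "9p", accessmode: "mapped"
end
```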
Hi,
I'm managing jobs that I want to execute on a vagrant-managed VM in subdirectories which contain the necessary script files. The idea is that given a directory jobs/test with files main.sh and other stuff, I can basically just do
$ vagrant upload jobs/test test ubuntu1804
and then run
$ vagrant ssh -c test/main.sh ubuntu1804
This worked nicely for all sorts of images until recently. It works with version 1.8.54, but with the latest version 1.9.2 I get an error from vagrant:
/opt/vagrant/embedded/gems/2.2.3/gems/net-scp-1.2.1/lib/net/scp.rb:398:in `await_response_state': scp: error: unexpected filename: . (RuntimeError)
from /opt/vagrant/embedded/gems/2.2.3/gems/net-scp-1.2.1/lib/net/scp.rb:365:in `block (3 levels) in start_command'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/channel.rb:610:in `do_close'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:573:in `channel_closed'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:682:in `channel_close'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:549:in `dispatch_incoming_packets'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:249:in `ev_preprocess'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/event_loop.rb:101:in `each'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/event_loop.rb:101:in `ev_preprocess'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/event_loop.rb:29:in `process'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:228:in `process'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:181:in `block in loop'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:181:in `loop'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:181:in `loop'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/channel.rb:272:in `wait'
from /opt/vagrant/embedded/gems/2.2.3/gems/net-scp-1.2.1/lib/net/scp.rb:284:in `upload!'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:296:in `block in upload'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:709:in `block in scp_connect'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:489:in `connect'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:707:in `scp_connect'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:293:in `upload'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/commands/upload/command.rb:104:in `block in execute'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/plugin/v2/command.rb:238:in `block in with_target_vms'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/plugin/v2/command.rb:232:in `each'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/plugin/v2/command.rb:232:in `with_target_vms'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/commands/upload/command.rb:69:in `execute'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/cli.rb:58:in `execute'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:291:in `cli'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/bin/vagrant:182:in `<main>'
Given that the only change is the version of the box, I suspect something in the box configuration has broken things.
I can still upload individual files, but that's not what I need :)
Perhaps you are aware of some changes here that could have broken this?
Cheers,
Volker
Could we get a Plan 9 box? Would be awesome to quickly build and test apps in a Plan 9 VM!
For better performance please use the VirtualBox SATA controller.
From https://docs.oracle.com/cd/E97728_01/E97727/html/harddiskcontrollers.html
Like a real SATA controller, Oracle VM VirtualBox's virtual SATA controller operates faster and also consumes fewer CPU resources than the virtual IDE controller.
And I found a number of anecdotes in online discussions confirming this.
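For reference, in a Packer virtualbox-iso builder this is a small change (a fragment only, with all other builder fields omitted; "sata" assumes the box templates use that builder type):

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "hard_drive_interface": "sata",
      "iso_interface": "sata"
    }
  ]
}
```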
https://choosealicense.com/no-permission/
referring to a nice discussion
github/choosealicense.com#196
Hello,
First, thanks a lot for the work building these Vagrant boxes and providing them to us!
I noticed that the latest Debian 10 boxes sometimes don't provide Hyper-V. Is it a bug in the build system?
Please see v1.9.19 and v1.9.14: no Hyper-V, nor Parallels:
https://app.vagrantup.com/generic/boxes/debian10
https://app.vagrantup.com/roboxes/boxes/debian10
The latest Debian 10 (buster), v1.9.19, is the stable Debian (all previous versions were based on Debian 10 testing). I would very much have liked to find Hyper-V as a provider for this first stable Debian 10 box.
Is it a bug, or is a new build planned very soon for Hyper-V?
Thanks !
hi,
It appears to have been changed for VMware already, but I was unable to boot the libvirt variant of your rhel8 image with the default setting disk_bus=scsi: it times out searching for the root volume, and fdisk in rescue mode does not show any disks.
I was able to boot the VM with vagrant-libvirt by setting disk_bus = "sata" in the Vagrantfile.
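The workaround above, expressed as a Vagrantfile snippet using vagrant-libvirt's disk_bus option (box name as in this report):

```ruby
# Force a SATA disk bus for the libvirt provider, since the default scsi bus
# leaves this image unable to find its root volume.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/rhel8"
  config.vm.provider :libvirt do |libvirt|
    libvirt.disk_bus = "sata"
  end
end
```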
Hello,
please name the OpenBSD boxes with the release number, like openbsd64 for OpenBSD 6.4.
OpenBSD releases are numbered in 0.1 increments from release to release.
Regards
The following servers are hardcoded in the generic/ubuntu1804 image:
# systemd-resolve --status
Global
DNS Servers: 4.2.2.1
4.2.2.2
208.67.220.220
Wouldn't it be expected that the local DNS from DHCP should have priority over these servers? If running in an environment where access is blocked to external DNS servers, then this image requires further work to use, rather than working out of the box, like say, bento/ubuntu-18.04 or ubuntu/bionic64.
I'm using generic/centos7 with the following configuration:
Vagrant.configure("2") do |config|
config.vm.define "cent7" do |cent7|
cent7.vm.box = "generic/centos7"
cent7.vm.network "private_network", ip: "172.20.120.10"
cent7.vm.synced_folder "bin/", "/vagrant/bin"
end
end
But there seems to be some issue related to synced_folder. I'm seeing the following error when I run vagrant up cent7:
An error occurred while executing `vmrun`, a utility for controlling
VMware machines. The command and output are below:
Command: ["enableSharedFolders", "/Users/nalluri/Projects/consul-client/.vagrant/machines/cent7/vmware_desktop/f2febc17-913d-48b4-b94a-f5dc84e46f12/generic-centos7-vmware.vmx", {:notify=>[:stdout, :stderr]}]
Stdout: Error: There was an error mounting the Shared Folders file system inside the guest operating system
Stderr:
On arch I run:
$ vagrant init generic/freebsd12
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'freebsd/FreeBSD-11.0-STABLE' version '2017.05.11.2' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection reset. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Connection reset. Retrying...
default: Warning: Remote connection disconnect. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Hello
I'm packaging boxes via Packer and chef/bento-based configuration files. VirtualBox is all fine, but I hit a wall trying to run my VMware CentOS / RHEL boxes over ESXi using vagrant + the vagrant-vmware-esxi plugin: the disk seems to be ignored, and the VM boot loops on PXE/TFTP lookup.
So I'm trying to understand, hence this question. I tried some upstream boxes: centos/6 and centos/7 do work; generic/centos6, centos7, and rhel7 don't, with the same symptoms. CentOS builds its boxes on VMware Workstation. Mine are built on ESXi 5.1/5.5. Where/how do you build yours?
Any hints for investigation would be welcome; I've been at it for days and I'm out of ideas!
Maybe the hyperv templates should install the Linux Integration Services for Hyper-V:
https://www.microsoft.com/en-us/download/details.aspx?id=55106
Could we get boxes for more architectures, to help test applications in different environments? I think most of these boxes are amd64, which is awesome, and i386, powerpc, arm, mips would be great as well!
Unfortunately, most hypervisors do not currently support non-x86-based guests. Though I think it is possible to run some non-x86 guest boxes with Vagrant, via the Vagrant libvirt plugin. This enables users to run non-x86 guests on x86 hosts, though the hosts must be running (GNU?) Linux natively.
Could we get some libvirt-based ppc, arm, and mips boxes published? These take longer to build and run, but they are worth it for projects that need to test on lots of different architectures.
Hello.
generic/fedora29 (v1.8.40, libvirt)
Image boot is stuck for 1.5 minutes with the messages "A start job is running for dev-disk...8c3382.device" and "Dependency failed for Resume from ...8c3382" (this is the swap disk UUID, /dev/sda2), and then stuck forever with the message "A start job is running for dev-disk...8243e.device" (this is /, /dev/sda3).
However, booting this image manually into rescue mode (from the Grub menu) works. After regenerating the initrd from rescue mode with dracut --verbose --force, booting into normal mode also works.
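To make the diagnosis above concrete: the hung start jobs happen when a UUID referenced in /etc/fstab (or baked into the initramfs) does not match any device systemd can find. A minimal sketch, using a sample fstab with made-up UUIDs, of how to list the UUIDs systemd will wait on; inside an affected guest you would compare this list against the output of blkid, and if they diverge, rebuild the initramfs with dracut --verbose --force as described above.

```shell
# List the device UUIDs an fstab makes systemd wait for.
# The fstab below is a sample with hypothetical UUIDs; inside the guest,
# point the awk command at the real /etc/fstab instead.
tmpfile="$(mktemp)"
cat > "$tmpfile" <<'EOF'
UUID=1111-aaaa /    ext4 defaults 0 1
UUID=2222-bbbb none swap sw       0 0
EOF
# Print each UUID with the "UUID=" prefix stripped; every one of these must
# also appear in `blkid` output, or boot will stall on a start job.
awk '$1 ~ /^UUID=/ { sub(/^UUID=/, "", $1); print $1 }' "$tmpfile"
```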
Thank you for all your boxes. Your project is probably the only one that builds boxes for parallels, hyperv, virtualbox, libvirt, and vmware_desktop - 5 platforms at the same time - allowing a consistent environment to be used across different virtualization engines.
Hey, thank you for maintaining so many useful Vagrant boxes online, and supporting many backend providers as well! I'd love to see even more guest OS's available through robox! Feel free to crib a few from my packer templates:
https://github.com/mcandre/packer-templates
Thank you for maintaining so many operating system base boxes, for so many providers already. Honestly, this is a massive feature matrix to have completed, kudos!
What if we took the boxes and offered them on even more providers, like Amazon/Google Cloud/Azure/Oracle images, as well as Docker and Triton images (where possible)? This could grow the userbase and possibly funding sources a bit further, plugging gaps where no compatible images currently exist. For Docker and Triton builders, we would get a bonus effect of many GNU/Linux guests available as lightweight containers, as opposed to more heavyweight virtual machines.
robox is a wonderfully comprehensive collection of base boxes covering a wide variety of different operating systems. I gather that time to pack all these boxes is a constraint on further development, so I wonder if we can improve packing time somehow.
For example, is there a significant restriction on build resources, RAM, CPU cores, and so on? Perhaps we could provide a donation link specifically for funding cloud resources used to build our images.
That could help with the hardware constraints on build time by scaling vertically. Could we also scale horizontally, perhaps by building boxes across a pool of hosts?
Finally, what steps can we take to improve the build time of each particular box? I've done some (premature) optimizations on my own box templates, such as minifying boot_command contents and compressing provisioning media, since keyboard input delivery and FTP can run slowly on certain virtual guests.
Honestly, the bottlenecks of OS installation tend to be the long, uncontrolled process of running the install wizards. But what other things can we shave off, even just a few minutes from each build? Reducing boot splash timeouts, selecting faster virtual hardware (when guest compatible), and ensuring that installation media (ISO's, IMG's) are sourced from fast online caches. Any other ideas for accelerating builds?
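One of the cheapest wins mentioned above is sourcing installation media from a fast local cache. A minimal sketch of the local-mirror idea: fetch the ISO once and reuse it on later builds. The fetch is simulated here with a local copy so the sketch is safe to dry-run; a real build would use wget --continue against the upstream URL, and the file names are placeholders.

```shell
# Cache install media locally so repeated `packer build` runs skip the
# slow download. fetch_iso simulates the fetch with `cp`; in a real build,
# replace it with: wget --continue -O "$CACHE/$2" "$upstream_url"
CACHE=iso-cache
mkdir -p "$CACHE"
fetch_iso() {
  # $1 = source path (stand-in for the upstream URL), $2 = cached name
  if [ ! -f "$CACHE/$2" ]; then
    cp "$1" "$CACHE/$2"
    echo "downloaded $2"
  else
    echo "cache hit for $2"
  fi
}
printf 'fake-iso-bytes' > upstream.iso   # stand-in for the remote ISO
fetch_iso upstream.iso install.iso       # first run fetches
fetch_iso upstream.iso install.iso       # second run hits the cache
```

Pointing the template's ISO URL list at the local copy first, with the upstream URL as a fallback, gives the same effect declaratively.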
We can strip out some components to reduce total image size, see the cleanup scripts in https://github.com/mcandre/packer-templates for examples. In general, caches should be cleaned, including /tmp mounts and any OS package manager caches.
We can shrink images even further by removing some nonessential software packages, like perl in Ubuntu, that aren't strictly needed in order to boot up and serve SSH commands. This could break some user expectations, of course, so it's up to you to decide how minimal or maximal we want the base boxes to be.
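The cache-cleaning pass described above can be sketched as a small post-install script. This version assumes a Debian/Ubuntu-style guest layout and runs against a scratch directory by default so it can be dry-run safely; inside a real guest you would set GUEST_ROOT=/ (and extend the list for other package managers, e.g. dnf or pacman caches).

```shell
# Post-install cleanup sketch: purge caches that only inflate the image.
# GUEST_ROOT defaults to a scratch directory seeded with fake leftovers so
# the script is safe to demonstrate; set GUEST_ROOT=/ inside the guest.
GUEST_ROOT="${GUEST_ROOT:-$(mktemp -d)}"
mkdir -p "$GUEST_ROOT/var/cache/apt/archives" "$GUEST_ROOT/tmp"
touch "$GUEST_ROOT/var/cache/apt/archives/stale.deb" "$GUEST_ROOT/tmp/leftover"

# Drop the package-manager cache and temporary files.
rm -rf "$GUEST_ROOT/var/cache/apt/archives"/*.deb
rm -rf "$GUEST_ROOT/tmp"/*

echo "leftover files: $(find "$GUEST_ROOT" -type f | wc -l)"
```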
I recently updated Vagrant, and now it complains that the virtual network device that many boxes use by default, E1000, is insecure. Does this happen with the generic/ boxes as well? Can we pack more secure base boxes while Vagrant works on a fix?
https://github.com/lavabit/robox/blob/master/scripts/arch/virtualbox.sh#L10
pacman --sync --noconfirm virtualbox-guest-utils-nox
Should be
pacman --sync --noconfirm virtualbox-guest-modules-arch virtualbox-guest-utils-nox
Without specifying the guest modules to install, it defaults to virtualbox-guest-dkms, which does not work with the stock Arch Linux kernel.
I keep getting a 404 when I attempt to "vagrant up" after initializing a box. I also get a 404 if I try to add a box directly. I have tried this for Fedora 28, Fedora 29, and Fedora 29 Silverblue:
C:\Users\Jonathan Calloway\vagrant>vagrant box add generic/fedora28
==> box: Loading metadata for box 'generic/fedora28'
box: URL: https://vagrantcloud.com/generic/fedora28
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.
Enter your choice: 4
==> box: Adding box 'generic/fedora28' (v1.8.52) for provider: virtualbox
box: Downloading: https://vagrantcloud.com/generic/boxes/fedora28/versions/1.8.52/providers/virtualbox.box
box: Progress: 0% (Rate: 0/s, Estimated time remaining: --:--:--)
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.
The requested URL returned error: 404 Not Found
When specifying timezones, please select UTC, which will result in more predictable behavior on our servers.
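Pinning a guest to UTC as requested above is a one-liner during provisioning. On a systemd-based guest the real command is timedatectl set-timezone UTC (or ln -sf /usr/share/zoneinfo/UTC /etc/localtime on non-systemd guests); the sketch below shows the effect via the TZ environment variable so it is safe to demonstrate anywhere.

```shell
# Real provisioning command (systemd guests):
#   timedatectl set-timezone UTC
# Non-systemd fallback:
#   ln -sf /usr/share/zoneinfo/UTC /etc/localtime
# Safe demonstration of the resulting timezone name:
TZ=UTC date -u +%Z
```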
Please provide a base box for Debian sid, so that the very latest Debian packages can be directly used by VM's.
For example, sparc64-flavored g++ cross-compiler toolchains for amd64 guests are available in sid but not buster.
https://packages.debian.org/search?suite=sid&arch=sparc64&searchon=names&keywords=g%2B%2B
https://packages.debian.org/search?suite=buster&arch=sparc64&searchon=names&keywords=g%2B%2B