boot2docker-vagrant-box's Issues

private_network collision

The static 192.168.10.1 private_network defined in the box's Vagrantfile at

override.vm.network "private_network", ip: "192.168.10.10", id: "default-network", nic_type: "virtio"

can collide with the host machine's local network.

I don't think this network configuration can be overridden by a project-level Vagrantfile. (According to the Vagrant documentation, network settings from multiple Vagrantfiles are merged rather than overridden.)

Might it be better, then, to leave this private_network configuration out of boot2docker-vagrant-box?
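
If the box shipped without this hard-coded private_network, a project-level Vagrantfile could pick a subnet that does not overlap the host's LAN. A minimal sketch (the 172.28.128.x subnet is purely illustrative, not a maintainer recommendation):

# Hypothetical project Vagrantfile; choose a host-only subnet that does not collide with your LAN.
Vagrant.configure("2") do |config|
  config.vm.box = "dduportal/boot2docker"
  config.vm.network "private_network", ip: "172.28.128.10", id: "default-network", nic_type: "virtio"
end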

Docker compose commands time out

I have everything working with normal docker:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from hello-world
a8219747be10: Pull complete
91c95931e552: Already exists
hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd18681cf5daeb82aab55838d
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (Assuming it was not already locally available.)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
$ docker info
Containers: 1
Images: 2
Storage Driver: aufs
 Root Dir: /mnt/sda2/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 4
 Dirperm1 Supported: true
Execution Driver: native-0.2
Kernel Version: 4.0.2-boot2docker
Operating System: Boot2Docker 1.6.1 (TCL 5.4); master : 43209d4 - Thu May  7 22:06:28 UTC 2015
CPUs: 1
Total Memory: 1.465 GiB
Name: boot2docker
ID: MBPZ:LYOH:LV6D:T3PM:U5GS:6OU5:4RHF:AHO5:WCAC:TTAQ:QYMB:LBHZ
Debug mode (server): true
Debug mode (client): false
Fds: 10
Goroutines: 17
System Time: Mon May 18 16:03:20 UTC 2015
EventsListeners: 0
Init SHA1: 2dabfc43e5f856a0712787a6ff78ceaf791cc9e7
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda2/var/lib/docker
Username: hidden
Registry: [https://index.docker.io/v1/]

But when I run my docker-compose command, I get a timeout:

$ docker-compose ps
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 31, in main
  File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
  File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 27, in dispatch
  File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
  File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 59, in perform_command
  File "/compose/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 191, in ps
  File "/compose/build/docker-compose/out00-PYZ.pyz/compose.project", line 229, in containers
  File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 385, in containers
  File "/compose/build/docker-compose/out00-PYZ.pyz/docker.client", line 82, in _get
  File "/compose/build/docker-compose/out00-PYZ.pyz/requests.sessions", line 395, in get
  File "/compose/build/docker-compose/out00-PYZ.pyz/requests.sessions", line 383, in request
  File "/compose/build/docker-compose/out00-PYZ.pyz/requests.sessions", line 486, in send
  File "/compose/build/docker-compose/out00-PYZ.pyz/requests.adapters", line 387, in send
requests.exceptions.Timeout: (<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x103b9ffd0>, 'Connection to 192.168.59.103 timed out. (connect timeout=60)')

Everything worked fine with boot2docker. I haven't touched the default Vagrantfile, and this is my bootlocal.sh:

#!/bin/sh

# Regenerate certs for the newly created private network IP
sudo /etc/init.d/docker restart
# Copy tls certs to the vagrant share to allow host to use it
sudo cp -r /var/lib/boot2docker/tls /vagrant/

I also have:

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/me/Projects/project-name/src/tls
export DOCKER_TLS_VERIFY=1
export B2D_NFS_SYNC=1
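
Before digging into docker-compose itself, it can help to confirm the daemon is reachable from the host at all. A hedged sanity check, assuming the box's default private-network address 192.168.10.10 and the cert path already exported above:

# Hedged sanity check; adjust the IP, port and cert path to your setup.
docker --tlsverify \
  --tlscacert="$DOCKER_CERT_PATH/ca.pem" \
  --tlscert="$DOCKER_CERT_PATH/cert.pem" \
  --tlskey="$DOCKER_CERT_PATH/key.pem" \
  -H tcp://192.168.10.10:2376 info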

Vagrant boot2docker does not auto rsync

Hi, I'm having problems with rsync_auto when using Vagrant with Docker as a provider, which means I have a Docker host VM (VirtualBox).

Vagrant syncs my folder only at provision time (only once).

I want to sync my files on change (my machine <--> docker host):

config.vm.box = "dduportal/boot2docker"

config.vm.synced_folder ".", "/project", type: "rsync",
  rsync__auto: true

When I run vagrant rsync-auto it syncs my files correctly, but I want Vagrant to sync automatically. There is no error after vagrant up:

dockerhost: Rsyncing folder: /Users/user/workspace/docker/provisioning/ => /project

I already tried changing the /project directory permissions to 777; that does not work either.
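
For what it's worth, Vagrant only syncs on change while a vagrant rsync-auto process is running; rsync__auto: true merely marks the folder as one that command should watch. A hedged sketch of the workflow:

vagrant up            # performs the initial one-shot rsync
vagrant rsync-auto    # keep this running (e.g. in another terminal) to sync on change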

Environment:

  • Mac: OS X Yosemite, 10.10.5
  • Vagrant: 1.8.1
  • Docker: 1.10.2

Current directory mapped to the wrong path

Vagrant 1.7.4
Docker 1.8.1

#27 introduced mapping the host path to the same path in the guest, which is very useful.

However, the current-directory determination in the template is off:
CURRENT_DIR = File.expand_path(File.dirname(__FILE__))
sets the path to the expanded template of the box, not to the directory of my own Vagrantfile. In my case it expands to

/Users/fai/.vagrant.d/boxes/dduportal-VAGRANTSLASH-boot2docker/1.8.1/virtualbox

This of course defeats the purpose of #27.

This also makes copying the certificates to your current host directory impossible; the instructions regarding bootlocal.sh are broken because /vagrant is not mounted:

# Regenerate certs for the newly created private network IP
sudo /etc/init.d/docker restart
# Copy tls certs to the vagrant share to allow host to use it
sudo cp -r /var/lib/boot2docker/tls /vagrant/

My workaround for now is to just add the default synced folder back again:

config.vm.synced_folder ".", "/vagrant"

I'm not sure how to fix this. Maybe Dir.pwd will work for mounting the correct directory, but that still doesn't fix copying the certificates to the host with bootlocal.sh: there is no deterministic way inside the guest to determine what your 'current host' mount directory is, especially if you have multiple synced_folders. Passing the current host directory to the shell provisioner seems like a viable solution.
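
A minimal sketch of that Dir.pwd idea (not the box's actual template, and unverified):

Vagrant.configure("2") do |config|
  # Dir.pwd is the directory `vagrant up` was invoked from, whereas
  # File.dirname(__FILE__) expands to the unpacked box under ~/.vagrant.d/boxes.
  current_dir = Dir.pwd
  config.vm.synced_folder current_dir, current_dir
end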

nfs simple example "failed: Connection refused"

I have posted my question on Stack Overflow about NFS and the message "failed: Bad file descriptor" (vagrant-and-docker-nfs-failed-bad-file-descriptor), using Vagrant with Docker as a provider.

So I decided to test the sample example posted by @dduportal in #10 (comment).

After vagrant up, the execution hangs for a while and finally shows an error.

This is the output:

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'dduportal/boot2docker'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'dduportal/boot2docker' is up to date...
==> default: A newer version of the box 'dduportal/boot2docker' is available! You currently
==> default: have version '1.10.1'. The latest is version '1.11.1'. Run
==> default: `vagrant box update` to update.
==> default: Setting the name of the VM: ex2_default_1463056322815_57229
==> default: Fixed port collision for 2375 => 2375. Now on port 2200.
==> default: Fixed port collision for 2376 => 2376. Now on port 2201.
==> default: Fixed port collision for 22 => 2222. Now on port 2202.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 2375 (guest) => 2200 (host) (adapter 1)
    default: 2376 (guest) => 2201 (host) (adapter 1)
    default: 22 (guest) => 2202 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2202
    default: SSH username: docker
    default: SSH auth method: private key
    default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
==> default: Exporting NFS shared folders...
==> default: Preparing to edit /etc/exports. Administrator privileges will be required...
==> default: Mounting NFS shared folders...

The error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o 'vers=3,udp' 192.168.10.1:'/Users/it/workspace/docker-vagrant/ex2' /vagrant

Stdout from the command:

Stderr from the command:

mount.nfs: an incorrect mount option was specified
mount: mounting 192.168.10.1:/Users/it/workspace/docker-vagrant/ex2 on /vagrant failed: Connection refused

I previously installed (I don't know if it was necessary):

  • OSXFuse 2.8.3 (downloading the dmg file)
  • ntfs-3g (using brew install homebrew/fuse/ntfs-3g)

I'm using:

  • Mac Yosemite 10.10.5
  • Vagrant 1.8.1
  • dduportal/boot2docker (virtualbox, 1.10.1)

Error on reload of a Vagrant docker container

I am using this box as a proxy box for other Vagrant-managed Docker projects. Every time I run vagrant reload on one of those projects I get the following error:

There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["sharedfolder", "add", "1b364c41-c16a-4001-9aee-8bf10b180d97", "--name", "vagrant", "--hostpath", "/Users/adamduro/Workspace/docker/vagrant"]

Stderr: VBoxManage: error: The machine 'zg-docker-host' is already locked for a session (or being unlocked)
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component Machine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "LockMachine(a->session, LockType_Write)" at line 1002 of file VBoxManageMisc.cpp

Here is my Vagrantfile for the proxy setup that uses this box:

# Vagrantfile for the proxy VM

Vagrant.configure("2") do |config|

  config.vm.define "zg-docker-host", primary: true do |dhost|

    dhost.vm.box = "dduportal/boot2docker"
    dhost.vm.box_version = "= 1.5.0"

    dhost.vm.network :private_network, ip: "192.168.100.200"

    dhost.vm.provider "virtualbox" do |v|
      # On VirtualBox, we don't have guest additions or a functional vboxsf
      # in TinyCore Linux, so tell Vagrant that so it can be smarter.
      v.name = "zg-docker-host"
      v.check_guest_additions = false
      v.memory = 2048
    end

    # b2d doesn't support NFS
    dhost.nfs.functional = false

    dhost.vm.network "forwarded_port", guest: 8080, host: 8080
    dhost.vm.network "forwarded_port", guest: 3000, host: 3000
    dhost.vm.network "forwarded_port", guest: 3232, host: 3232
    dhost.vm.network "forwarded_port", guest: 8888, host: 8888

  end

end

And here is the Vagrantfile for a project:

ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'
ENV["DOCKER_HOST_VAGRANT_FILE"] ||= "./docker/Dockerhost"
ENV["DOCKER_HOST_VAGRANT_NAME"] ||= "zg-site-docker-host"

# BUILD ALL WITH: vagrant up --no-parallel

Vagrant.configure("2") do |config|

  config.vm.define "app" do |v|

    v.vm.synced_folder ".", "/opt/app", type: "rsync",
      rsync__exclude: get_ignored_files()

    v.vm.provider "docker" do |d|
      d.vagrant_machine = ENV["DOCKER_HOST_VAGRANT_NAME"]
      d.vagrant_vagrantfile = ENV["DOCKER_HOST_VAGRANT_FILE"]
      d.build_dir = "."
      d.env = { :SITE_HOSTNAME => 'docker.local' }
      d.remains_running = true
      d.ports = ["8003:8003", "8383:8383", "5353:5353"]
    end
  end

end

def get_ignored_files()
  ignore_file   = ".rsyncignore"
  ignore_array  = []

  if File.exists? ignore_file and File.readable? ignore_file
    File.read(ignore_file).each_line do |line|
      ignore_array << line.chomp
    end
  end

  return ignore_array
end

Non-destructive upgrade path

I'm trying to figure out a non-destructive upgrade path.

It is my understanding that when dduportal/boot2docker is updated with e.g. docker 1.6.0, the Vagrant VM has to be reinitialized (vagrant destroy && vagrant up), which will lead to loss of all images and containers stored on the persistent partition.

The following suggests that vagrant reload would be sufficient:
http://stackoverflow.com/questions/25914383/update-a-vagrant-box

Unfortunately when I do vagrant up with a previous box version:

config.vm.box = "dduportal/boot2docker"
config.vm.box_version = "1.4.1"

then switch to the current version:

config.vm.box = "dduportal/boot2docker"
config.vm.box_version = "1.5.0"

and do vagrant reload, I get an SSH password prompt from the VM:

...
==> boot2docker: Booting VM...
==> boot2docker: Waiting for machine to boot. This may take a few minutes...
    boot2docker: SSH address: 127.0.0.1:2222
    boot2docker: SSH username: docker
    boot2docker: SSH auth method: private key
    boot2docker: Warning: Connection timeout. Retrying...
Text will be echoed in the clear. Please install the HighLine or Termios libraries to suppress echoed text.
[email protected]'s password:
...

Update to b2d 1.7.0

Just a place to collaborate :)

I noticed you already started working on the update, which is not trivial due to the b2d 1.7.0 changes (64-bit userspace).

Are guest additions required to use this box?

Hi guys, I use Windows 10, and when I try to use this box I get the following error:

==> default: Mounting shared folders...
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box.

This happens even if I install the Vagrant plugin mentioned here, which automatically installs the latest VirtualBox guest additions on the first vagrant up.

My Vagrantfile is very simple, barebones:

Vagrant.configure("2") do |config|
  config.vm.box = "dduportal/boot2docker"
end

Any idea what I'm missing here?
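
Since the box ships without guest additions, vboxsf cannot work; one hedged option is to switch the default share to rsync in the project Vagrantfile (a sketch, not necessarily the maintainers' recommended setup):

Vagrant.configure("2") do |config|
  config.vm.box = "dduportal/boot2docker"
  # Avoid vboxsf entirely by syncing /vagrant over rsync instead.
  config.vm.synced_folder ".", "/vagrant", type: "rsync"
end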

When and how is the docker daemon started?

I used @YungSang's boot2docker box, and it seems there are differences in when the Docker daemon is started.

I have two goals:

  1. set EXTRA_ARGS in /var/lib/boot2docker/profile
  2. log in to Docker Hub.

In the @YungSang box, 1) is ignored because Docker is already running at that time.
In the @dduportal box, 2) is not possible because the Docker daemon is not running yet.

So the question is: how can I resolve this problem with your box? When and how is the daemon started?

# -*- mode: ruby -*-
# vi: set ft=ruby :

unless Vagrant.has_plugin?("nugrant")
  raise Vagrant::Errors::VagrantError.new, "Please install the nugrant plugin running 'vagrant plugin install nugrant'"
end
# Environment definition
TARGET_ENV = "dev"
DOMAIN_PREFIX = "#{TARGET_ENV}"

Vagrant.require_version ">= 1.7.1"
Vagrant.configure(2) do |config|
  config.vm.define "#{TARGET_ENV}_env"
  config.vm.box = "dduportal/boot2docker"

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 4
    vb.memory = 1024
  end

  config.vm.network "public_network"

  config.vm.synced_folder "~", "/vagrant_data"

  DNS_ADDR = "172.17.42.1"
  #Set DNS for dockerDNS container
  config.vm.provision :shell, inline: "echo EXTRA_ARGS=\\\"--bip=#{DNS_ADDR}/16 --dns=#{DNS_ADDR}\\\" > /var/lib/boot2docker/profile"

  config.vm.provision :shell, inline: "/etc/init.d/docker status"   
  #
  config.vm.provision :shell do |s|
    s.inline = "/usr/local/bin/docker $@"
    s.args   = ["login", "-u", config.user.docker.username, "-p", config.user.docker.password, "-e", config.user.docker.email]
  end
end
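
One hedged way to address 2) with this box would be to block until the daemon answers before the login provisioner runs (a sketch; it assumes the docker client is on the guest's PATH):

Vagrant.configure("2") do |config|
  # Hedged sketch: wait for the Docker daemon before any later provisioner that needs it.
  config.vm.provision :shell,
    inline: "until docker info >/dev/null 2>&1; do sleep 1; done"
end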

Can't connect host docker client to VM on explicit IP

I'm using this Vagrantfile. Note (from line 20 of that file; VM_IP_ADDR defaults to 10.1.2.3):

...
        vagrant_b2d_nfs_test.vm.network(:private_network, ip: VM_IP_ADDR)
...

When I attempt to connect to the explicit IP from the host, I'm rejected. It looks like the certificate only accommodates localhost and the dynamic IP set by Vagrant:

% DOCKER_HOST=tcp://10.1.2.3:2376 DOCKER_TLS_VERIFY=0 DOCKER_CERT_PATH="${PWD}/certs" docker images
FATA[0000] An error occurred trying to connect: Get https://10.1.2.3:2376/v1.18/images/json: x509: certificate is valid for 127.0.0.1, 10.0.2.15, not 10.1.2.3
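
A hedged diagnostic to confirm which addresses the daemon's certificate actually covers (requires openssl on the host; the IP is the one from the Vagrantfile above):

# Print the Subject Alternative Names of the daemon's certificate.
echo | openssl s_client -connect 10.1.2.3:2376 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'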

See also Parallels/boot2docker-vagrant-box#17.

Version 1.7.1 not available

Looks like version 1.7.1 was taken down from Dropbox. We have some developers who are unable to pull this specific version:

Loading metadata for box 'dduportal/boot2docker'
URL: https://atlas.hashicorp.com/dduportal/boot2docker
Adding box 'dduportal/boot2docker' (v1.7.1) for provider: virtualbox
Downloading: https://vagrantcloud.com/dduportal/boxes/boot2docker/versions/1.7.1/providers/virtualbox.box
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

The requested URL returned error: 404 Not Found

It is no longer available on Dropbox: https://dl.dropboxusercontent.com/u/2524496/boxes/boot2docker_virtualbox-1.7.1.box

Upgrading a Vagrant box based on this image

First of all, thank you for your efforts. This is awesome work!

I recently upgraded my local Docker installation to 1.8.0. I had a custom Vagrant box running from the following Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_version ">= 1.7.0"

Vagrant.configure(2) do |config|
  config.vm.box = "dduportal/boot2docker"
  config.vm.network "private_network", ip: "192.168.32.64"
end

After I upgraded Docker, when I tried to list my images I got the following error:

$ docker images
Error response from daemon: client and server don't have same version (client API version: 1.20, server API version: 1.19)

This box already has several custom images I've built, and I'd rather not rebuild them. How can I upgrade my Vagrant box?
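
(For context, one hedged way I could imagine carrying images across a destroy/re-create cycle is docker save / docker load; the image name below is illustrative:)

docker save -o myimage.tar myimage:latest   # before destroying the old VM
# ...upgrade the box and bring up the new VM...
docker load -i myimage.tar                  # restore the image on the new VM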

Thanks.

Cert issue migrating from boot2docker to this box

I'm trying to migrate from a somewhat outdated but working boot2docker setup to the latest and greatest NFS solution you posted recently. Super jacked about the possibility of speeding things up with NFS. Anyway, I am just doing something wrong and hoping you can help.

After upgrading Vagrant, Boot2Docker, and Docker and trying your vagrantfile.tpl edits with vagrant init, I'm getting an error connecting to the Docker daemon.

docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 7c8fca2
OS/Arch (client): darwin/amd64
FATA[0000] An error occurred trying to connect: Get https://192.168.10.10:2376/v1.18/version: x509: certificate is valid for 127.0.0.1, 10.0.2.15, not 192.168.10.10 

I saw you have SSH key regeneration disabled, so I am guessing I need to regenerate keys? I am currently using the following in ~/.bash_profile:

export B2D_NFS_SYNC=1
export DOCKER_HOST=tcp://192.168.10.10:2376
export DOCKER_CERT_PATH=/Users/myusername/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1

When I try changing the IP to localhost:2376 I get:

FATA[0000] An error occurred trying to connect: Get https://localhost:2376/v1.18/version: x509: certificate is valid for boot2docker, not localhost 

I tried fiddling with the cert path, and running $(boot2docker shellinit) but it didn't help.

The box is running fine, the NFS mount works, and I can vagrant ssh and run docker ps -a no problem, so I'm 99% there; it's just a matter of getting the Mac Yosemite terminal to connect right. Thanks for any advice!
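
For reference, the flow the box's bootlocal.sh comments describe would look roughly like this on the host side (a hedged sketch; the paths are illustrative and assume the tls folder was copied into your project directory via /vagrant):

export DOCKER_HOST=tcp://192.168.10.10:2376
export DOCKER_CERT_PATH="$PWD/tls"   # the tls/ dir copied to /vagrant by bootlocal.sh
export DOCKER_TLS_VERIFY=1
docker version                       # should now reach the daemon without a cert error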

Vagrant does not wait long enough for the Docker service to start

When bringing a vm online, Vagrant pauses to wait for the ssh service to start, then continues.
In the case of provider-docker, the next step (provisioning a Docker container inside the VM) requires the Docker service be running.
With your boot2docker image, this is not immediately true...
In /opt/bootscript.sh, the SSH daemon is initialized a while before the Docker daemon, leaving room for a race condition.
I was able to solve this by moving the SSH initialization after the Docker startup.

Here is the sed script I am currently using for this:

# Make sure Docker starts before SSH:
#  - expression 1: before the "Launch Docker" section, capture the
#    "Configure SSHD" block (through the next blank line) into the hold
#    space and delete it from the output
#  - expression 2: inside the "Launch Docker" section, paste the held
#    SSHD block back in at the section's trailing blank line
sed -i \
 -e '/Launch Docker/,$!{/Configure SSHD/,/^$/{H;d;};}' \
 -e '/Launch Docker/,/^$/{/^$/g;}' \
 /opt/bootscript.sh

(This is my current full work-around config: https://gist.github.com/p120ph37/0e2dc718c9ea3400da0590792eac208f )

Custom package installation needs to be updated

Hello,

Sorry, my fork is too divergent now to do a simple pull request, so I'm opening this issue.

In scripts/build-custom-iso.sh you have:

curl -LO "http://tinycorelinux.net/5.x/x86/tcz/${TCZ_PACKAGE}.tcz";

This legacy code leads to two issues today:

  • The Tiny Core version is now 6.4.1, so 5.x packages are outdated
  • The boot2docker architecture is now x86_64, so packages installed with this legacy code won't work, and running their commands will result in "command not found" errors

I propose this fix:

curl -LO "http://tinycorelinux.net/$(version | cut -d '.' -f 1).x/x86_64/tcz/${TCZ_PACKAGE}.tcz";

This should work, and it would probably also make the custom install of rsync later in the script unnecessary (I don't know the goal of that).

Kind regards.

Unable to update to version 1.8.1

I'm unable to update to the latest version of this box. I always get the following error:
bsdtar: Error opening archive: Unrecognized archive format

I've tried the following methods:
vagrant box update dduportal/boot2docker
vagrant box add dduportal/boot2docker

I've also tried downloading and installing the box file manually:

curl -OkL https://atlas.hashicorp.com/dduportal/boxes/boot2docker/versions/1.8.1/providers/virtualbox.box
vagrant box add --name stuff virtualbox.box
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'stuff' (v0) for provider:
    box: Unpacking necessary files from: file:///Users/cantonic/dev/git_repos/espn/devops-sandbox/score/virtualbox.box
The box failed to unpackage properly. Please verify that the box
file you're trying to add is not corrupted and try again. The
output from attempting to unpackage (if any):

bsdtar: Error opening archive: Unrecognized archive format

I'm seeing this error with Vagrant 1.7.1 and 1.7.4 (the latest version).

I am able to install previous versions of this box without issue.

Disable TLS

I added this to my Vagrantfile to disable TLS so that it is easy to get a dev environment started:

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.provision "shell",
    inline: "echo 'DOCKER_TLS=no' >> /var/lib/boot2docker/profile; /etc/init.d/docker restart"
end

vagrant destroy should cause nfs to stop serving the exports

After performing a vagrant destroy, the entries were cleared from /etc/exports; however, the NFS daemon was not restarted and continued to serve the recently purged entries.

==> default: Pruning invalid NFS exports. Administrator privileges will be required...

/etc/exports is empty.

$ showmount -e
Exports list on localhost:
/opt/boot2docker                    192.168.10.10
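
Until this is handled automatically, a hedged manual workaround on the OS X host is to restart the NFS daemon so it rereads the now-empty exports file:

sudo nfsd restart   # requires admin rights
showmount -e        # the purged export should no longer be listed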

Disk Size

Hi,

Is there a way to increase the size of the disk?

Thanks!

Build the box using docker-machine instead of packer

I decided to try building the boot2docker-vagrant base box with docker-machine instead of packer.
I'm basically creating a copy of a docker-machine provisioned VirtualBox VM.

This approach shortens the gap between boot2docker-vagrant and docker-machine:

  • no need for a customized boot2docker.iso
  • the base box is an exact copy of the docker-machine VM
  • non-destructive updates become possible by just swapping the boot2docker.iso (though I'm not actively pursuing that idea any more)

Take a look - https://github.com/lmakarov/boot2docker-vagrant-box/tree/docker-machine
Passes all your tests ;)

Let me know what you think.

Containers not mounting shared sources correctly

I am able to view the contents of NFS-shared files inside vagrant ssh at the path /vagrant; however, my folders from the Mac are not showing up in the /src directory where the run command mounts them. With this docker run command:

      docker run -d \
      -P --name db \
      -e APP=db \
      -e MYSQL_ROOT_PASSWORD=ROOTPASSWORD \
      -e MYSQL_USER=DBUSERNAME \
      -e MYSQL_PASSWORD=DBPASSWORD \
      -v /Users/username/Sites/mysqldata:/src \
      mysql

The config above worked with boot2docker, but now that I've migrated to boot2docker-vagrant-box, the mount into the Vagrant box works, yet my /src directory in the mysql container is empty. The src dir is created, though, and looks like this:

docker@boot2docker:~$ docker-enter db
root@89649480e099:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.4G   97M  1.3G   8% /src
/dev/sda2        39G   20G   18G  53% /var/lib/mysql
/dev/sda2        39G   20G   18G  53% /etc/resolv.conf
/dev/sda2        39G   20G   18G  53% /etc/hostname
/dev/sda2        39G   20G   18G  53% /etc/hosts

What am I missing?
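
Worth noting: with a VM-based Docker host, the left-hand side of -v is resolved inside the VM, not on the Mac, so /Users/username/Sites/mysqldata has to exist in the guest. A hedged sketch assuming the folder is reachable under the /vagrant NFS share inside the VM (adjust to your synced_folder layout):

docker run -d \
  -P --name db \
  -e MYSQL_ROOT_PASSWORD=ROOTPASSWORD \
  -v /vagrant/mysqldata:/src \
  mysql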

User/group with NFS

Hey,

We can mount an NFS synced_folder now.

But the permissions are wrong. With NFS there is no proper way to map permissions between the Docker container and the host.

All files created in the Docker container are owned by root:staff instead of you:staff.

If anyone has an idea how to fix this, please share.

I've tried some tricks:

  • chmod
  • umask
  • adding the user to groups
  • adding a group

Rsync error

Hi,

I have an issue using rsync on Windows, behind a corporate proxy:

==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Configuring proxy for Docker...
==> default: Configuring proxy environment variables...
==> default: Installing rsync to the VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

tce-load -wi rsync

Stdout from the command:

Downloading: rsync.tcz
Error on rsync.tcz

Stderr from the command:

Connecting to repo.tinycorelinux.net (89.22.99.37:80)
wget: download timed out
md5sum: rsync.tcz.md5.txt: No such file or directory

However, if I connect to the VM and run "tce-load -wi rsync" it works fine :(

Any ideas?
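
A hedged check: run the same command Vagrant runs, but with the proxy exported explicitly, to see whether tce-load's wget is simply not picking up the proxy in non-interactive SSH sessions (the proxy URL is illustrative):

http_proxy=http://proxy.example.com:3128 tce-load -wi rsync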

Bootlocal.sh error with shared folder vagrant

Hi,
I'm using bootlocal.sh to launch some jobs at b2d start.
One of these jobs starts a Docker container with the option -v /vagrant:/vagrant, but when I do vagrant up the container doesn't start and I get the following error:

[image: vagrant]

This only happens when I do vagrant halt followed by vagrant up. There is no problem with vagrant reload.

I think the problem is that the bootlocal.sh script runs before Vagrant has time to mount the shared folders.

For now, I'm using a while loop to work around the problem:

while true; do
  if [[ -e /vagrant ]]; then
    break
  fi
  sleep 1
done

Do you have any idea how to solve this in a cleaner way?

Thanks,

Automatic box setup results in "Docker is not running on the guest VM."

I am trying to set this box up as my Docker host VM automatically without having to vagrant init dduportal/boot2docker && vagrant up.

My Vagrantfile:

ENV['VAGRANT_DEFAULT_PROVIDER'] ||= 'docker'

Vagrant.configure("2") do |config|
    config.vm.provider "docker" do |d|
        d.build_dir = "."
        d.vagrant_machine = "boot2docker"
        d.vagrant_vagrantfile = "./boot2docker.Vagrantfile"
    end
end

My boot2docker.Vagrantfile:

Vagrant.configure("2") do |config|
    config.vm.box = "dduportal/boot2docker"
    config.vm.define "boot2docker"

    config.vm.provision "docker"
end

This happens when I vagrant up:

% vagrant up
Bringing machine 'default' up with 'docker' provider...
==> default: Docker host is required. One will be created if necessary...
    default: Vagrant will now create or start a local VM to act as the Docker
    default: host. You'll see the output of the `vagrant up` for this VM below.
    default:
    default: Importing base box 'dduportal/boot2docker'...
    default: Matching MAC address for NAT networking...
    default: Checking if box 'dduportal/boot2docker' is up to date...
    default: Setting the name of the VM: docker_boot2docker_1435084253735_52549
    default: Clearing any previously set network interfaces...
    default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
    default: Forwarding ports...
    default: 2375 => 2375 (adapter 1)
    default: 2376 => 2376 (adapter 1)
    default: 22 => 2222 (adapter 1)
    default: Booting VM...
    default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: docker
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
    default: Machine booted and ready!
    default: Checking for guest additions in VM...
    default: Configuring and enabling network interfaces...
    default: Mounting shared folders...
    default: /vagrant => /Users/alex/Documents/Workspace/Subversion/nct2013/docker
    default: Running provisioner: docker...
Unable to configure automatic restart of Docker containers on
the guest machine
Docker is not running on the guest VM.

This does not occur when I use the default built-in mitchellh/boot2docker box (by commenting out vagrant_vagrantfile and vagrant_machine), but that box is too out of date, which is why I'm using this one. If I run vagrant up again, it appears to work just fine and builds my Dockerfile; however, I'd prefer not to have to run it twice.

I also tried this workaround by placing this little ditty after the docker provisioner, but no dice.

# https://github.com/mitchellh/vagrant/issues/3998
config.vm.provision "shell", inline:
    "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"

I'm using the latest Vagrant 1.7.2.
