bosh-softlayer-cpi-release's Introduction

BOSH SoftLayer CPI Release

This is a BOSH release for the SoftLayer CPI.

Releases

The latest version of the SoftLayer CPI release is available on bosh.io.

To use this CPI you will need the SoftLayer light stemcell, which is also available on bosh.io.

Bootstrap on SoftLayer

Refer to init-softlayer to bootstrap on SoftLayer.

Deployment Manifest Samples

Refer to softlayer-cpi for deployment manifest samples.

SoftLayer CPI NG

Recover VMs or Disks (for legacy SoftLayer CPI only)

Frequently Asked Questions and Answers

  1. Q: How do I specify a dynamic network by subnet instead of by VLAN ID?

    A: We don't currently support that.

  2. Q: Are there any restrictions on the hostnames SoftLayer supports?

    A: Yes. The hostname length must not be exactly 64 characters; otherwise ssh login will fail.

bosh-softlayer-cpi-release's People

Contributors

ardnaxelarak, cunnie, dpb587-pivotal, edwardstudy, gu-bin, jianqiu, jimmyma, jlo, mattcui, maximilien, petergtz, suhlig, sykesm, vvraskin, xingzhou, zhang-hua, zhangtbj

bosh-softlayer-cpi-release's Issues

Attaching a disk fails with an ssh error

Using stemcell light-bosh-stemcell-3232.8-softlayer-esxi-ubuntu-trusty-go_agent-public, after creating a VM for a job with a persistent disk, the CPI successfully creates a volume but fails to mount it.

The bosh cli reports the following error:

  Started updating job consul_z1 > consul_z1/0 (0ba7959f-4321-4bdf-a01d-84bae8af26a6) (canary). Failed: Attaching disk '12147033' to VM '21656541': Failed to get multipath information from virtual guest `21656541`: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain (00:02:06)

Error 100: Attaching disk '12147033' to VM '21656541': Failed to get multipath information from virtual guest `21656541`: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain

The task log for the deployment contains the following:

D, [2016-06-22 00:58:31 #22295] [canary_update(consul_z1/0 (0ba7959f-4321-4bdf-a01d-84bae8af26a6))] DEBUG -- DirectorJobRunner: External CPI sending request: {"method":"create_disk","arguments":[1024,{},"21656541"],"context":{"director_uuid":"9489a253-16f8-4dd4-b2d0-d7ac38a646b7"}} with command: /var/vcap/jobs/softlayer_cpi/bin/cpi
...
D, [2016-06-22 01:00:00 #22295] [canary_update(consul_z1/0 (0ba7959f-4321-4bdf-a01d-84bae8af26a6))] DEBUG -- DirectorJobRunner: External CPI got response: {"result":"12147033","error":null,"log":""}, err: , exit_status: pid 23580 exit 0
...
D, [2016-06-22 01:00:00 #22295] [canary_update(consul_z1/0 (0ba7959f-4321-4bdf-a01d-84bae8af26a6))] DEBUG -- DirectorJobRunner: External CPI sending request: {"method":"attach_disk","arguments":["21656541","12147033"],"context":{"director_uuid":"9489a253-16f8-4dd4-b2d0-d7ac38a646b7"}} with command: /var/vcap/jobs/softlayer_cpi/bin/cpi
...
D, [2016-06-22 01:00:32 #22295] [canary_update(consul_z1/0 (0ba7959f-4321-4bdf-a01d-84bae8af26a6))] DEBUG -- DirectorJobRunner: External CPI got response: {"result":null,"error":{"type":"Bosh::Clouds::CloudError","message":"Attaching disk '12147033' to VM '21656541': Failed to get multipath information from virtual guest `21656541`: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain","ok_to_retry":false},"log":""}, err: , exit_status: pid 23595 exit 0
E, [2016-06-22 01:00:32 #22295] [canary_update(consul_z1/0 (0ba7959f-4321-4bdf-a01d-84bae8af26a6))] ERROR -- DirectorJobRunner: Error updating canary instance: #<Bosh::Clouds::CloudError: Attaching disk '12147033' to VM '21656541': Failed to get multipath information from virtual guest `21656541`: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain>
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_cpi-1.3232.10.0/lib/cloud/external_cpi.rb:119:in `handle_error'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_cpi-1.3232.10.0/lib/cloud/external_cpi.rb:88:in `invoke_cpi_method'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_cpi-1.3232.10.0/lib/cloud/external_cpi.rb:59:in `attach_disk'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/disk_manager.rb:271:in `create_and_attach_disk'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/disk_manager.rb:21:in `update_persistent_disk'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/instance_updater.rb:94:in `block in update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/instance_updater/instance_state.rb:5:in `with_instance_update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/instance_updater.rb:49:in `update'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/job_updater.rb:111:in `block (2 levels) in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.3232.10.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/job_updater.rb:109:in `block in update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/event_log.rb:91:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/event_log.rb:91:in `advance_and_track'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/job_updater.rb:108:in `update_canary_instance'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.3232.10.0/lib/bosh/director/job_updater.rb:103:in `block (2 levels) in update_canaries'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.3232.10.0/lib/common/thread_pool.rb:77:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.3232.10.0/lib/common/thread_pool.rb:77:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.3232.10.0/lib/common/thread_pool.rb:63:in `loop'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.3232.10.0/lib/common/thread_pool.rb:63:in `block in create_thread'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
E, [2016-06-22 01:00:32 #22295] [] ERROR -- DirectorJobRunner: Worker thread raised exception: Attaching disk '12147033' to VM '21656541': Failed to get multipath information from virtual guest `21656541`: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain - /var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_cpi-1.3232.10.0/lib/cloud/external_cpi.rb:119:in `handle_error'

It appears that after the CPI grants the VM access to the disk, it tries to get multipath information from the VM:

    hasMultiPath, err := vm.hasMulitPathToolBasedOnShellScript(virtualGuest)
    if err != nil {
        return bosherr.WrapError(err, fmt.Sprintf("Failed to get multipath information from virtual guest `%d`", virtualGuest.Id))
    }

That, in turn, gets the root password from the VM object:

    command := fmt.Sprintf("echo `command -v multipath`")
    output, err := vm.sshClient.ExecCommand(ROOT_USER_NAME, vm.getRootPassword(virtualGuest), virtualGuest.PrimaryBackendIpAddress, command)
    if err != nil {
        return false, err
    }

Unfortunately, the password that's retrieved is invalid. It appears that something has changed the password for the root user between the time the VM was provisioned and the time this operation is driven.

I have manually verified that the root user's password from the SL device page does not work with that VM, but bosh ssh does.

Perhaps this is related to the keep_root_password attribute in settings.json:

{
   "agent_id" : "f4faba9a-20d7-485d-a5ee-30dd39a07ba4",
   "trusted_certs" : "",
   "env" : {
      "bosh" : {
         "keep_root_password" : false,
         "remove_dev_tools" : false,
         "password" : "$6$4gDD3aV0rdqlrKC$2axHCxGKIObs6tAmMTqYCspcdvQXh3JJcvWOY2WGb4SrdXtnCyNaWlrf3WEqvYR2MYizEGp3kMmbpwBC6jsHt0"
      },
      "persistent_disk_fs" : ""
   },
   "ntp" : [
      "0.pool.ntp.org",
      "1.pool.ntp.org"
   ],
   "networks" : {
      "cf1-dynamic" : {
         "ip" : "",
         "use_dhcp" : false,
         "netmask" : "",
         "dns" : [
            "10.0.80.11",
            "10.0.80.12"
         ],
         "type" : "dynamic",
         "preconfigured" : true,
         "mac" : "",
         "default" : null,
         "resolved" : false,
         "gateway" : ""
      },
      "cf1" : {
         "mac" : "",
         "default" : [
            "dns",
            "gateway"
         ],
         "resolved" : false,
         "gateway" : "10.155.198.1",
         "ip" : "10.155.198.9",
         "use_dhcp" : false,
         "dns" : [
            "10.0.80.11",
            "10.0.80.12"
         ],
         "netmask" : "255.255.255.192",
         "type" : "",
         "preconfigured" : true
      }
   },
   "mbus" : "nats://...",
   "blobstore" : {...}
   },
   "disks" : {
      "persistent" : null,
      "raw_ephemeral" : null,
      "ephemeral" : "/dev/xvdc",
      "system" : ""
   },
   "vm" : {
      "name" : "vm-f4faba9a-20d7-485d-a5ee-30dd39a07ba4"
   }
}

Why the "godep store" dont work . It seems some component is missing .

I cloned the project into my Ubuntu VM and entered the folder.

Then I ran "godep restore" and got the following errors:

root@alexDevBox:~/go/src/bosh-softlayer-cpi# godep restore
package github.com/cloudfoundry/bosh-agent/errors
imports github.com/cloudfoundry/bosh-agent/errors
imports github.com/cloudfoundry/bosh-agent/errors: cannot find package "github.com/cloudfoundry/bosh-agent/errors" in any of:
/usr/lib/go/src/pkg/github.com/cloudfoundry/bosh-agent/errors (from $GOROOT)
/root/go/src/github.com/cloudfoundry/bosh-agent/errors (from $GOPATH)
godep: restore: exit status 1
package github.com/cloudfoundry/bosh-agent/logger
imports github.com/cloudfoundry/bosh-agent/logger
imports github.com/cloudfoundry/bosh-agent/logger: cannot find package "github.com/cloudfoundry/bosh-agent/logger" in any of:
/usr/lib/go/src/pkg/github.com/cloudfoundry/bosh-agent/logger (from $GOROOT)
/root/go/src/github.com/cloudfoundry/bosh-agent/logger (from $GOPATH)
godep: restore: exit status 1
package github.com/cloudfoundry/bosh-agent/platform/commands
imports github.com/cloudfoundry/bosh-agent/platform/commands
imports github.com/cloudfoundry/bosh-agent/platform/commands: cannot find package "github.com/cloudfoundry/bosh-agent/platform/commands" in any of:
/usr/lib/go/src/pkg/github.com/cloudfoundry/bosh-agent/platform/commands (from $GOROOT)
/root/go/src/github.com/cloudfoundry/bosh-agent/platform/commands (from $GOPATH)
godep: restore: exit status 1
package github.com/cloudfoundry/bosh-agent/system
imports github.com/cloudfoundry/bosh-agent/system
imports github.com/cloudfoundry/bosh-agent/system: cannot find package "github.com/cloudfoundry/bosh-agent/system" in any of:
/usr/lib/go/src/pkg/github.com/cloudfoundry/bosh-agent/system (from $GOROOT)
/root/go/src/github.com/cloudfoundry/bosh-agent/system (from $GOPATH)
godep: restore: exit status 1
package github.com/cloudfoundry/bosh-agent/uuid/fakes
imports github.com/cloudfoundry/bosh-agent/uuid/fakes
imports github.com/cloudfoundry/bosh-agent/uuid/fakes: cannot find package "github.com/cloudfoundry/bosh-agent/uuid/fakes" in any of:
/usr/lib/go/src/pkg/github.com/cloudfoundry/bosh-agent/uuid/fakes (from $GOROOT)
/root/go/src/github.com/cloudfoundry/bosh-agent/uuid/fakes (from $GOPATH)
godep: restore: exit status 1

When I try to access these packages in my web browser, I get a 404 error.
May I ask for your suggestions? Thanks!

Persistent disk allocation is flaky

When deploying a job with a persistent volume, the deployment fails with the following:

Creating disk of size '1024': Create SoftLayer iSCSI disk error.: Cannot find an performance storage (iSCSI volume) with order id 9294521 (00:05:19)

The task log on the director only contains the following:

D, [2016-06-22 17:11:21 #24756] [canary_update(consul_z1/0 (99dcc2f0-648f-41cb-8976-953326dd42b0))] DEBUG -- DirectorJobRunner: External CPI sending request: {"method":"create_disk","arguments":[1024,{},"21699145"],"context":{"director_uuid":"9489a253-16f8-4dd4-b2d0-d7ac38a646b7"}} with command: /var/vcap/jobs/softlayer_cpi/bin/cpi
...
D, [2016-06-22 17:16:35 #24756] [canary_update(consul_z1/0 (99dcc2f0-648f-41cb-8976-953326dd42b0))] DEBUG -- DirectorJobRunner: External CPI got response: {"result":null,"error":{"type":"Bosh::Clouds::CloudError","message":"Creating disk of size '1024': Create SoftLayer iSCSI disk error.: Cannot find an performance storage (iSCSI volume) with order id 9294521","ok_to_retry":false},"log":""}, err: , exit_status: pid 25280 exit 0
E, [2016-06-22 17:16:35 #24756] [canary_update(consul_z1/0 (99dcc2f0-648f-41cb-8976-953326dd42b0))] ERROR -- DirectorJobRunner: Error updating canary instance: #<Bosh::Clouds::CloudError: Creating disk of size '1024': Create SoftLayer iSCSI disk error.: Cannot find an performance storage (iSCSI volume) with order id 9294521>

When I go to the SoftLayer portal, I see the disk has been created. When I go to the orders, I see my order was approved and I was charged for the allocation. I don't know whether this is a timing issue or something else, as there's no data in the logs, but it does get expensive after a while.
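
If it is a timing issue, retrying the order lookup with a timeout might mask the flakiness. A minimal sketch, where findVolumeByOrderID is a hypothetical stand-in for whatever lookup the CPI actually performs:

package main

import (
    "fmt"
    "time"
)

// findVolumeByOrderID is a hypothetical stand-in for the CPI's lookup of a
// performance storage (iSCSI) volume by its SoftLayer order id.
func findVolumeByOrderID(orderID int) (int, error) {
    return 0, fmt.Errorf("volume not yet visible") // placeholder
}

// waitForVolume polls the lookup until the volume appears or the timeout
// elapses, instead of failing on the first miss.
func waitForVolume(orderID int, timeout, interval time.Duration) (int, error) {
    deadline := time.Now().Add(timeout)
    for {
        id, err := findVolumeByOrderID(orderID)
        if err == nil {
            return id, nil
        }
        if time.Now().After(deadline) {
            return 0, fmt.Errorf("volume for order %d not found after %s: %w", orderID, timeout, err)
        }
        time.Sleep(interval)
    }
}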

Please support NFS

It would be really great if you could also support creation of NFS disks as well as iSCSI.

Is hourly billing supported ?

Hi bosh-sl-cpi team,

I was trying to find a way to provision my VMs using hourly billing, but I was unable to do so. Am I missing something, or isn't that supported yet?
If it's not supported yet, I would expect to be able to define this as a global setting (all VMs hourly OR all VMs monthly); that would be good enough for a start. Thanks for considering this.

Why does a network IP influence which path actions.CreateVM#Run goes down?

Commit f1ee110 introduced a conditional flow through the create_vm action, but there are no tests to describe why it was done or to keep it from regressing.

Can you please explain why an IP address is used as the condition that selects OS reload vs. creation?
What condition results in the presence or absence of an IP?
Why aren't there any tests to cover this code?

Thanks.

bin/test-unit fails on a pristine checkout of the develop_v2 and master branches

Running bin/test-unit fails with the following:

$ bin/test-unit 
godep: WARNING: Go version (go1.6) & $GO15VENDOREXPERIMENT= wants to enable the vendor experiment, but disabling because a Godep workspace (Godeps/_workspace) exists

 Cleaning build artifacts...

 Formatting packages...

 Integration Testing packages:
Will skip:
  ./integration
  ./integration/attach_disk
  ./integration/concurrency_sqlite
  ./integration/create_disk
  ./integration/create_stemcell
  ./integration/create_vm
  ./integration/delete_disk
  ./integration/delete_stemcell
  ./integration/delete_vm
  ./integration/detach_disk
  ./integration/has_vm
  ./integration/os_reload
  ./integration/set_vm_metadata
[1464642019] Action Suite - 79/79 specs - 7 nodes ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• SUCCESS! 42.782084ms 
[1464642019] Dispatcher Suite - 19/19 specs - 7 nodes ••••••••••••••••••• SUCCESS! 32.790795ms 
[1464642019] Transport Suite - 3/3 specs - 7 nodes ••• SUCCESS! 14.166888ms 
[1464642019] Common Suite - 2/2 specs - 7 nodes •• SUCCESS! 14.645487ms 
[1464642019] Main Suite - 5/5 specs - 7 nodes ••••• SUCCESS! 11.319801ms 
[1464642019] Baremetal Suite - 10/10 specs - 7 nodes •••••••••• SUCCESS! 25.274454ms 
[1464642019] Disk Suite - 5/5 specs - 7 nodes ••••• SUCCESS! 18.945243ms 
[1464642019] Stemcell Suite - 6/6 specs - 7 nodes •••••• SUCCESS! 1.026575137s 
[1464642019] VM Suite - 61/61 specs - 7 nodes ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• SUCCESS! 2.018823643s 
[1464642019] Pool Suite - 3/3 specs - 7 nodes ••• SUCCESS! 18.063447ms 
[1464642019] TestHelpers Suite - 6/6 specs - 7 nodes •••••
------------------------------
• Failure [0.002 seconds]
helper functions for integration tests #RunCpi [It] /out/cpi to exist and run 
/Users/sykesm/workspace/bosh-softlayer-cpi-release/src/github.com/cloudfoundry/bosh-softlayer-cpi/test_helpers/integration_helpers_test.go:55

  Expected error:
      <*os.PathError | 0xc8201e8900>: {
          Op: "fork/exec",
          Path: "/Users/sykesm/workspace/bosh-softlayer-cpi-release/src/github.com/cloudfoundry/bosh-softlayer-cpi/out/cpi",
          Err: 0x2,
      }
      fork/exec /Users/sykesm/workspace/bosh-softlayer-cpi-release/src/github.com/cloudfoundry/bosh-softlayer-cpi/out/cpi: no such file or directory
  not to have occurred

  /Users/sykesm/workspace/bosh-softlayer-cpi-release/src/github.com/cloudfoundry/bosh-softlayer-cpi/test_helpers/integration_helpers_test.go:54
------------------------------


Summarizing 1 Failure:

[Fail] helper functions for integration tests #RunCpi [It] /out/cpi to exist and run 
/Users/sykesm/workspace/bosh-softlayer-cpi-release/src/github.com/cloudfoundry/bosh-softlayer-cpi/test_helpers/integration_helpers_test.go:54

Ran 6 of 6 Specs in 0.021 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 0 Skipped 

Ginkgo ran 11 suites in 1m2.094040482s
Test Suite Failed

SUITE FAILURE

Perhaps the test_helpers package should not be part of the unit tests? Or should the CPI be built before the tests attempt to execute it? Or should the helper test's BeforeSuite build the binary?
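
If the intent is the last option, here is a sketch of what the BeforeSuite could look like using gomega's gexec helper, assuming the suite is allowed to build the binary itself:

package test_helpers_test

import (
    . "github.com/onsi/ginkgo"
    . "github.com/onsi/gomega"

    "github.com/onsi/gomega/gexec"
)

var cpiPath string

// Build the CPI binary once before the suite runs instead of assuming
// out/cpi already exists on disk.
var _ = BeforeSuite(func() {
    var err error
    cpiPath, err = gexec.Build("github.com/cloudfoundry/bosh-softlayer-cpi")
    Expect(err).NotTo(HaveOccurred())
})

// Clean up the built artifact when the suite finishes.
var _ = AfterSuite(func() {
    gexec.CleanupBuildArtifacts()
})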

implement set_vm_metadata

Use the newly implemented softlayer-go setUserMetadata and configureMetadataDisk methods to complete this action's implementation.

When deploying with bosh-micro-cli, the metadata passed to the SL VM is not correctly applied

We set a static IP for the SL VM in the deployment manifest, but when the VM is launched, its IP is not the one we set.

We tried to deploy a micro BOSH using bosh-micro-cli + the SL eCPI, with a deployment manifest using a manual network + static IPs. After the VM launched in SL, we found that meta.js contains the correct network and IP information, but the openstack/latest/meta_data.json and openstack/content/interfaces files do not. As a result, the VM does not get the correct IP address.

This issue blocks bosh-micro-cli from pinging the bosh-agent: we have to set the bosh-agent VM's IP in the deployment manifest, and if the VM does not have the correct IP, bosh-micro-cli will time out when pinging the bosh-agent.

Unable to $bosh ssh to other VMs

Steps to reproduce:

$bosh login <directorURL>
$bosh download manifest <deployment-name> /tmp/manifest.yml
$bosh deployment /tmp/manifest.yml
$bosh ssh <job_name> <instance_id>

I get the following output in my test ...

Acting as user 'admin' on deployment 'flintstone-hkg02-diego' on 'bosh'
Target deployment is `flintstone-hkg02-diego'

Setting up ssh artifacts

Director task 565

Task 565 done
Starting interactive shell on job database_z1/0
ssh: connect to host 161.202.52.50 port 22: Operation timed out

Cleaning up ssh artifacts

Director task 566

Task 566 done

This now just hangs there indefinitely and does not connect to the VM.

I have not tried the options --gateway_host, --gateway_user, and --gateway_identity_file, because the connection to the director seems to work; it is the onward connection to the VM that is failing.

At the moment we are required to use the BOSH director VM as a jump box: logging into the director and then into the VMs from there. But bosh ssh should be able to tunnel through.

Initially reported by @andrew-edgar in the bosh-softlayer-cpi-release project.

A proposal to protect the data on the persistent disk.

I would like to raise this issue to discuss the problem of persistent disks being re-partitioned and formatted accidentally by the bosh-agent. The root cause is hard to pin down since we haven't found a way to recreate the problem. Basically, it seems something went wrong with the disk host or network, leaving the persistent disk in a "bad" state; the bosh-agent mistakenly assumed the disk was not partitioned and tried to partition and format it. This is a very serious problem, and we need a solution or workaround to protect the data on the persistent disk; in particular, the disk must not be partitioned during a stemcell update. We can leverage UsePreformattedPersistentDisk to protect the data, but we need a way to enable or disable this option to handle the different situations (creating a new VM vs. OS reloading an existing VM) properly. I propose making the CPI smart enough to handle this:

  1. When creating a new VM, make sure UsePreformattedPersistentDisk is set to false in agent.json before attaching the disk, so that the bosh-agent is able to partition and format the persistent disk. The CPI needs a new function to detect when the disk is mounted and to enable UsePreformattedPersistentDisk right after it is mounted.
  2. When OS reloading an existing VM, set UsePreformattedPersistentDisk to true in agent.json before attaching the disk (disk discovery and session login).

@jianqiu @maximilien This is an emergency issue; we need a solution designed and implemented in the next 1 or 2 days. Any thoughts on my proposal? Thanks.
@cppforlife Your comments are always welcome. Thanks.
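
To make the flag flip in both steps concrete, here is a minimal sketch; it assumes the agent reads the setting from a Platform.Linux section of agent.json, so adjust the path and layout to whatever the agent actually expects:

package main

import (
    "encoding/json"
    "os"
)

// setPreformattedFlag toggles UsePreformattedPersistentDisk in agent.json.
// The Platform.Linux location is an assumption about the agent's layout.
func setPreformattedFlag(agentJSONPath string, enabled bool) error {
    raw, err := os.ReadFile(agentJSONPath)
    if err != nil {
        return err
    }
    var cfg map[string]interface{}
    if err := json.Unmarshal(raw, &cfg); err != nil {
        return err
    }
    platform, ok := cfg["Platform"].(map[string]interface{})
    if !ok {
        platform = map[string]interface{}{}
        cfg["Platform"] = platform
    }
    linux, ok := platform["Linux"].(map[string]interface{})
    if !ok {
        linux = map[string]interface{}{}
        platform["Linux"] = linux
    }
    linux["UsePreformattedPersistentDisk"] = enabled
    out, err := json.MarshalIndent(cfg, "", "  ")
    if err != nil {
        return err
    }
    return os.WriteFile(agentJSONPath, out, 0644)
}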

Failing Integration Tests Intermittently

There is a problem deleting ssh keys that are in use; a possible fix is changing timeout values.
Example error messages follow:

{"error":"Unable to find object with id of '0'.","code":"SoftLayer_Exception_ObjectNotFound"}


• Failure in Spec Setup (BeforeEach) [2.205 seconds]
BOSH Director Level Integration for has_vm
/tmp/build/71285c25-cff3-4b01-705e-0344b191a7e4/gopath/src/github.com/maximilien/bosh-softlayer-cpi/integration/has_vm/has_vm_test.go:132
  has_vm with actual vm
  /tmp/build/71285c25-cff3-4b01-705e-0344b191a7e4/gopath/src/github.com/maximilien/bosh-softlayer-cpi/integration/has_vm/has_vm_test.go:111
    returns true because vm exists [BeforeEach]
    /tmp/build/71285c25-cff3-4b01-705e-0344b191a7e4/gopath/src/github.com/maximilien/bosh-softlayer-cpi/integration/has_vm/has_vm_test.go:110

    Expected error:
        <*errors.errorString | 0xc20825f500>: {
            s: "Failed to destroy ssh key with id '305097', got '{\"error\":\"SSH key cannot be deleted because it is currently being used in an active transaction.\",\"code\":\"SoftLayer_Exception_Public\"}' as response from the API.",
        }
        Failed to destroy ssh key with id '305097', got '{"error":"SSH key cannot be deleted because it is currently being used in an active transaction.","code":"SoftLayer_Exception_Public"}' as response from the API.
    not to have occurred

    /tmp/build/71285c25-cff3-4b01-705e-0344b191a7e4/gopath/src/github.com/maximilien/bosh-softlayer-cpi/integration/has_vm/has_vm_test.go:78

Full output can be seen in http://159.8.14.90:8080/pipelines/softlayer-ecpi/jobs/sl-go-integration/builds/11 and http://159.8.14.90:8080/pipelines/softlayer-ecpi/jobs/cpi-integration/builds/8

Copy of softlayer-go issue #51 (Tracker story ID #100159074).

Create has_vm integration test

Also add any required has_vm helper methods, with tests to match.
Also look through the helper tests to see whether any error cases can be added.

These can be found in test_helpers/* and in integration/has_vm/* once the has_vm branch is created.

@maximilien

attach_disk fails when run before /etc/rc2.d/S15install.sh completes

While attempting to deploy Cloud Foundry, jobs with persistent disks often fail with a disk attach error like this:

E, [2016-07-17 14:24:00 #3630] [task:979] ERROR -- DirectorJobRunner: Attaching disk '12165173' to VM '22469545': Failed to attach volume `12165173` to virtual guest `22469545`: Failed to attach disk '12165173' to virtual guest '22469545'

This has happened several times. The only consistent symptom I've found is that /etc/rc2.d/S15install.sh is still running on the failed VMs but not on the successful ones. This may be some sort of race between the SoftLayer initialization scripts and the execution of the agent.

Since /etc/rc2.d/S15install.sh is looping in the following block:

# wget will default retry 20 times
wget -O "/root/nettest" "http://${DOWNLOAD_HOST}/install_scripts/nettest"
while true
  do
    if [ -s "/root/nettest" ]
    then
      rm -f "/root/nettest"
      break
    else
      sleep 6
      wget -O "/root/nettest" "http://${DOWNLOAD_HOST}/install_scripts/nettest"
    fi
done

the multipath-tools setup script (/etc/rc2.d/S20multipath-tools) never runs. And when that script doesn't run, the polling of dmsetup ls done by the CPI consistently returns no disks.

The root cause appears to be that $DOWNLOAD_HOST is not set. An strace of the running script shows that:

[pid 31051] execve("/usr/bin/wget", ["wget", "-O", "/root/nettest", "http:///install_scripts/nettest"], [/* 12 vars */]) = 0

I'm guessing that the value is supposed to come from /root/provisioningConfiguration.cfg, but that file has been deleted. That implies the system was rebooted during the 30-second sleep window of the S15install.sh script, since the actions before the sleep completed but the actions after it did not.

    update_status INSTALL_COMPLETE
    # rm -f "/root/swinstall.ini"
    rm -f "/root/provisioningConfiguration.cfg"
    rm -f "/root/base_functions.sh"
    rm -f "${TXN_INSTALL_LOG}"
    sleep 30
    rm /etc/rc2.d/S15install.sh
    rm /root/install.sh
    #/sbin/shutdown -r now

rm /etc/rc2.d/S15install.sh
rm /root/install.sh
rm /etc/systemd/system/installsh.service
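
Until the root cause is fixed, one illustrative mitigation would be to wait for the install script to remove itself (its last action) before attempting the attach. A sketch, assuming it runs on the guest:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForInstallScript blocks until the SoftLayer provisioning script has
// deleted itself, or until the timeout elapses.
func waitForInstallScript(timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if _, err := os.Stat("/etc/rc2.d/S15install.sh"); os.IsNotExist(err) {
            return nil // provisioning finished; multipath setup can proceed
        }
        time.Sleep(5 * time.Second)
    }
    return fmt.Errorf("S15install.sh still present after %s", timeout)
}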

Information in dev/README.md appears stale, incorrect, or incomplete

The information under the Normal Testing heading doesn't seem right.

First, the link to bosh-softlayer-cpi-release doesn't work; that appears to be due to the recent project move. Ignoring that, I can't find a Vagrantfile or any other instructions or information about testing in the release.

Updating the documentation to explain the testing approaches more clearly would help those of us trying to use or contribute to the project.

Argument mismatch between bosh cpi and softlayer CPI for VM deletion

When trying to do a simple deployment to SL, we are getting the following:

D, [2016-04-07 21:54:51 #32412] [] DEBUG -- DirectorJobRunner: External CPI sending request: {"method":"delete_vm","arguments":["17309677"],"context":{"director_uuid":"d4f53e59-f1ea-4290-9789-4cd5a1d4c7d5"}} with command: /var/vcap/jobs/cpi/bin/cpi
D, [2016-04-07 21:54:51 #32412] [] DEBUG -- DirectorJobRunner: External CPI got response: {"result":null,"error":{"type":"Bosh::Clouds::CloudError","message":"Extracting method arguments from payload: Not enough arguments, expected 2, got 1","ok_to_retry":false},"log":""}, err: , exit_status: pid 32429 exit 0

Looking at the code on each side of the junction, it appears that bosh expects to provide only the VM cid (see here).

It looks as if the SL CPI, however, expects two arguments, the VM cid and an agent ID, and this is causing our deployment to fail.

@ScarletTanager & @swetharepakula, RuntimeOG team

CPI causes deployments to fail when bosh drives delete_vm and create_vm

I made a change to my deployment manifest that modified a network configuration. The director then drove the CPI with a delete_vm request (which was successful) and followed that with a create_vm request that failed:

D, [2016-06-22 19:30:59 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))] DEBUG -- DirectorJobRunner: Networks have changed. Recreating VM
D, [2016-06-22 19:30:59 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))] DEBUG -- DirectorJobRunner: Failed to update in place. Recreating VM
I, [2016-06-22 19:30:59 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))]  INFO -- DirectorJobRunner: Deleting VM
D, [2016-06-22 19:30:59 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))] DEBUG -- DirectorJobRunner: External CPI sending request: {"method":"delete_vm","arguments":["21700701"],"context":{"director_uuid":"9489a253-16f8-4dd4-b2d0-d7ac38a646b7"}} with command: /var/vcap/jobs/softlayer_cpi/bin/cpi
...
D, [2016-06-22 19:32:38 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))] DEBUG -- DirectorJobRunner: External CPI got response: {"result":null,"error":null,"log":""}, err: , exit_status: pid 29017 exit 0
...
I, [2016-06-22 19:32:38 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))]  INFO -- DirectorJobRunner: Creating VM
D, [2016-06-22 19:32:38 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))] DEBUG -- DirectorJobRunner: External CPI sending request: {"method":"create_vm","arguments":["fa87f505-5c60-409e-bbc8-39443b178a90","1163771",{"Bosh_ip":"10.155.248.130","Datacenter":{"Name":"dal09"},"Domain":"softlayer.com","EphemeralDiskSize":20,"HourlyBillingFlag":true,"MaxMemory":4096,"NetworkComponents":[{"MaxSpeed":1000}],"PrimaryBackendNetworkComponent":{"NetworkVlan":{"Id":1113235}},"PrimaryNetworkComponent":{"NetworkVlan":{"Id":1113225}},"PrivateNetworkOnlyFlag":false,"StartCpus":4,"VmNamePrefix":"cf-furnace-pub"},{"cf1":{"ip":"10.155.198.10","netmask":"255.255.255.192","cloud_properties":{},"default":["dns"],"dns":["10.0.80.11","10.0.80.12"],"gateway":"10.155.198.1"},"cf1-dynamic":{"type":"dynamic","cloud_properties":{},"dns":["10.0.80.11","10.0.80.12"],"ip":"10.155.171.2","netmask":"255.255.255.192","gateway":"10.155.198.1"},"public":{"ip":"169.45.188.210","netmask":"255.255.255.240","cloud_properties":{},"default":["gateway"],"gateway":"169.45.188.209"}},[],{"bosh":{"password":null}}],"context":{"director_uuid":"9489a253-16f8-4dd4-b2d0-d7ac38a646b7"}} with command: /var/vcap/jobs/softlayer_cpi/bin/cpi
D, [2016-06-22 19:32:38 #28586] [canary_update(ha_proxy_z1/0 (1280b3d6-09ef-4c32-b4cc-7c7828b986be))] DEBUG -- DirectorJobRunner: External CPI got response: {"result":null,"error":{"type":"Bosh::Clouds::CloudError","message":"OS Reloading VM with agent ID 'fa87f505-5c60-409e-bbc8-39443b178a90': Could not find virtual guest by ip address: 10.155.171.2: softlayer-go: could not SoftLayer_Account#getVirtualGuests, HTTP error code: '500'","ok_to_retry":false},"log":""}, err: , exit_status: pid 29031 exit 0

The root cause appears (again) to be related to the use of the IP address to find the VM for a reload.

Documentation for the SL CPI manifest syntax?

Is there documentation anywhere for the manifest syntax required by the SL CPI? We have not been able to find it, and thus far we've mostly had to rely on trial and error to figure out what does and does not work.

Some of the eCPI APIs don't match the APIs used in bosh-micro-cli

In the bosh-micro-cli project, we use the string type for the return values of "create_stemcell" and "create_vm", but the current SL eCPI implementation returns the int values of the stemcell ID and VM ID. This leads to a bosh-micro-cli runtime error when deploying a micro BOSH using the SL eCPI.
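
A minimal sketch of the fix, assuming the CPI keeps integer IDs internally and converts at the boundary:

package main

import "strconv"

// toCID converts an integer SoftLayer ID into the string CID that
// bosh-micro-cli expects from create_stemcell and create_vm.
func toCID(id int) string {
    return strconv.Itoa(id) // e.g. 1234567 -> "1234567"
}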

Long hostname prefix breaks persistent disk acquisition

Long hostname prefixes on nodes with a persistent disk configured, such as XX-XXXXXX-XXXXX-XXXXX-database-, produced the following error:

Failed creating bound missing vms > database/0: Creating VM with agent ID '87881d1c-5cfd-453c-ad35-ba0177971fe6': Updating VM's agent env: Updating Agent Env timeout: Uploading temporary file to destination '/var/vcap/bosh/user_data.json': ssh: handshake failed: EOF (00:12:47)
Failed creating bound missing vms (00:12:47)

The problem was caused by the hostname being too long for ssh access to be set up.

For example, with the prefix XX-XXXXXX-XXXXX-XXXXX-db- everything works, and we get the new VM with its persistent disk without any issue.

Can you implement something to solve this problem in the next CPI version?

At the least, the CPI should inform you that the hostname is too long when parsing the configuration.
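
As a sketch of that early check (the 64-character trigger comes from the FAQ above; treating everything longer as unsafe is an assumption):

package main

import "fmt"

// validateHostname rejects hostnames long enough to hit the ssh login
// problem before any VM is ordered.
func validateHostname(hostname string) error {
    const maxLen = 63 // assumption: the FAQ reports failures at exactly 64
    if len(hostname) > maxLen {
        return fmt.Errorf("hostname %q is %d characters long; SoftLayer ssh login breaks at 64", hostname, len(hostname))
    }
    return nil
}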

softlayer_stemcell.go deleteVirtualGuestDiskTemplateGroup method is incorrect

The last part of the method implementation:

    deleted, err := vgdtgService.DeleteObject(id)
    if err != nil {
        return bosherr.WrapError(err, "Deleting VirtualGuestBlockDeviceTemplateGroup from service")
    }

    if !deleted {
        return bosherr.WrapError(nil, fmt.Sprintf("Could not delete VirtualGuestBlockDeviceTemplateGroup with id `%d`", id))
    }

This does not compile against the new softlayer-go because the DeleteObject call does not return a bool but rather a transaction. The correct implementation should therefore wait for the transaction to complete or fail.
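
A hedged sketch of that shape, in the same fragment style as the snippet above; waitForTransaction is a hypothetical helper, not a real softlayer-go call:

    txn, err := vgdtgService.DeleteObject(id)
    if err != nil {
        return bosherr.WrapError(err, "Deleting VirtualGuestBlockDeviceTemplateGroup from service")
    }

    // Instead of testing a bool, wait for the returned delete transaction
    // to complete (or surface its failure).
    if err := waitForTransaction(txn, 5*time.Minute); err != nil {
        return bosherr.WrapError(err, fmt.Sprintf("Waiting for deletion of VirtualGuestBlockDeviceTemplateGroup with id `%d`", id))
    }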

/cc: @posijon

implement SLAgentEnvService methods: Update and Fetch

These are needed to support disk attachment and detachment. Implementing them entails:

1. Update(AgentEnv): update the config metadata and then reboot the VM
2. Fetch(): get the config metadata

For each method, the metadata is base64 encoded, so it must be properly decoded and re-encoded when it is updated.
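
A minimal sketch of the decode/encode step, assuming the metadata travels as base64-encoded JSON; the AgentEnv type stands in for the real structure:

package main

import (
    "encoding/base64"
    "encoding/json"
)

// AgentEnv is a stand-in for the real agent env structure.
type AgentEnv map[string]interface{}

// fetchAgentEnv base64-decodes and unmarshals the stored metadata (Fetch).
func fetchAgentEnv(encoded string) (AgentEnv, error) {
    raw, err := base64.StdEncoding.DecodeString(encoded)
    if err != nil {
        return nil, err
    }
    var env AgentEnv
    if err := json.Unmarshal(raw, &env); err != nil {
        return nil, err
    }
    return env, nil
}

// encodeAgentEnv marshals and base64-encodes the updated metadata before it
// is written back, after which the VM is rebooted (Update).
func encodeAgentEnv(env AgentEnv) (string, error) {
    raw, err := json.Marshal(env)
    if err != nil {
        return "", err
    }
    return base64.StdEncoding.EncodeToString(raw), nil
}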

util/ssh_client.go is untested

While attempting to update the ssh client code to support uploading from a reader instead of a file, I discovered that util.sshClientImpl is untested code. The unit test file simply creates a fake, drives methods against it, and then uses the fake to inspect the arguments that were passed to it.

To be clear: no production code is exercised by the tests. I replaced the production code with the following, and all unit tests passed without modification:

package util

import bscutilfakes "github.com/cloudfoundry/bosh-softlayer-cpi/util/fakes"

type SshClient interface {
    ExecCommand(username string, password string, ip string, command string) (string, error)
    UploadFile(username string, password string, ip string, srcFile string, destFile string) error
    DownloadFile(username string, password string, ip string, srcFile string, destFile string) error
}

type sshClientImpl struct{}

func GetSshClient() SshClient {
    return &bscutilfakes.FakeSshClient{}
}

Errors from the SoftLayer CPI need to be propagated to the bosh task log

During a deployment, the CPI tried to create a virtual guest. The request to the SL API failed with a 500 and the following response body:

{
   "error" : "There are insufficient resources behind router bcr04a.dal09 to fulfill the request for the following guests: diego-furnace-compilation-20160720-014141-450",
   "code" : "SoftLayer_Exception_Public"
}

The CPI then turned that into the following bosh error:

{
   "result" : null,
   "error" : {
      "message" : "Creating VM with agent ID '8812add0-5f04-4d03-8c90-8061e3215ce5': Creating VirtualGuest from SoftLayer client: softlayer-go: could not SoftLayer_Virtual_Guest#createObject, HTTP error code: '500'",
      "type" : "Bosh::Clouds::CloudError",
      "ok_to_retry" : false
   },
   "log" : ""
}

As you can see, all of the meaningful information from the error has been stripped. Debugging this issue requires direct access to the director VM and the CPI logs.
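
A sketch of the kind of wrapping that would preserve the detail; the HTTP handling is simplified and the names are illustrative, not the CPI's actual client code:

package main

import (
    "fmt"
    "io"
    "net/http"
)

// checkResponse keeps the SoftLayer error body in the returned error so it
// reaches the bosh task log, instead of reporting only the status code.
func checkResponse(method string, resp *http.Response) error {
    if resp.StatusCode < 400 {
        return nil
    }
    body, _ := io.ReadAll(resp.Body) // best effort; the body may be empty
    return fmt.Errorf("could not %s, HTTP error code '%d': %s", method, resp.StatusCode, string(body))
}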

Is the 'OS_RELOAD_ENABLED' environment variable still used?

It appears that all integration tests and the softlayer_creator unit test explicitly set the OS_RELOAD_ENABLED environment variable to TRUE. I'm trying to understand why it's being set, as I can find no production code that references it in this release or its dependencies.

If the code is unused, it should be removed, as it makes it harder to reason about what's going on in the codebase.

(Moved here from cloudfoundry-attic/bosh-softlayer-cpi-release-DEPRECATED-TO_BE_DELETED#26)

Implement set_vm_metadata

set_vm_metadata is used to apply tags and notes to the VM, so it needs to be implemented in a way that calls setObject on the VirtualGuest with the tags and notes.
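
Purely as an illustration of the shape, with a hypothetical MetadataSetter interface standing in for the real softlayer-go service:

package main

import (
    "encoding/json"
    "fmt"
)

// MetadataSetter is a hypothetical stand-in for the virtual guest service's
// setObject-style call described above.
type MetadataSetter interface {
    SetObject(vmID int, tags []string, notes string) error
}

// SetVMMetadata flattens bosh metadata into SoftLayer tags and notes.
func SetVMMetadata(c MetadataSetter, vmID int, metadata map[string]string) error {
    tags := make([]string, 0, len(metadata))
    for k, v := range metadata {
        tags = append(tags, fmt.Sprintf("%s:%s", k, v))
    }
    notes, err := json.Marshal(metadata)
    if err != nil {
        return err
    }
    return c.SetObject(vmID, tags, string(notes))
}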

Unit test failed on stemcell suite

The unit test fails in the stemcell suite; the code in question is https://github.com/mattcui/bosh-softlayer-cpi/blob/os_reload/softlayer/stemcell/softlayer_stemcell_test.go#L54

[1445396531] Disk Suite - 5/5 specs - 7 nodes ••••• SUCCESS! 9.843958ms

[1445396531] Stemcell Suite - 6/6 specs - 7 nodes ••••

• Failure [0.005 seconds]
SoftLayerStemcell #Delete when stemcell does not exist [It] returns error if deleting stemcell does not exist
/Users/mattcui/Developer/go_workspace/src/github.com/maximilien/bosh-softlayer-cpi/softlayer/stemcell/softlayer_stemcell_test.go:55

Expected an error to have occured. Got:
: nil

/Users/mattcui/Developer/go_workspace/src/github.com/maximilien/bosh-softlayer-cpi/softlayer/stemcell/softlayer_stemcell_test.go:54

Summarizing 1 Failure:

[Fail] SoftLayerStemcell #Delete when stemcell does not exist [It] returns error if deleting stemcell does not exist
/Users/mattcui/Developer/go_workspace/src/github.com/maximilien/bosh-softlayer-cpi/softlayer/stemcell/softlayer_stemcell_test.go:54

Ran 6 of 6 Specs in 0.011 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 0 Skipped

Ginkgo ran 8 suites in 2.381220512s
Test Suite Failed

Move SoftLayer networking configuration from resource pool to networks

Information about network component speed and VLAN IDs for virtual guests is currently specified on the resource pool:

resource_pools:
  small:
    cloud_properties:
      bosh_ip: (( meta.director_ip ))
      domain: softlayer.com
      datacenter:
        name: dal09
      VmNamePrefix: (( meta.environment "-small-" ))
      EphemeralDiskSize: 10
      StartCpus: 2
      MaxMemory: 2048
      HourlyBillingFlag: true
      NetworkComponents:
      - MaxSpeed: 1000
      PrivateNetworkOnlyFlag: true
      PrimaryBackendNetworkComponent:
        NetworkVlan:
          Id: (( meta.softlayer.vlans.private ))

This is a strange factoring that gets in the way of parts of the bosh model.

Instead of having the information about networks hung off of the resource pool, I think it would be cleaner (and clearer) to have it associated with the networks themselves. For example, network definitions could look like this:

networks:
- name: cf1-private
  type: dynamic
  dns: 
  - 10.0.80.11
  - 10.0.80.12
  cloud_properties:
    vlan_id: (( meta.softlayer.vlans.private ))
    max_speed: 1000
- name: cf1-public
  type: manual
  subnets:
  - gateway: 169.45.188.209
    range: 169.45.188.208/28
    reserved:
    - 169.45.188.208
    - 169.45.188.209
    - 169.45.188.223
    static:
    - 169.45.188.210-169.45.188.222
    cloud_properties:
      vlan_id: (( meta.softlayer.vlans.public ))
      max_speed: 1000
      public: true
      source_policy_routing: true

A job could then clearly distinguish between public and private and, for things like default routes, easily choose to make the private network the default route:

- name: ha_proxy_z1
  instances: 1
  networks:
  - name: cf1-private
    default:
    - dns
    - gateway
  - name: cf1-public
    static_ips:
    - 169.45.188.210

Another nice side effect of this is that the bosh director will have visibility into both IPs associated with the job and will display them in commands like bosh vms.

Since the networks and their associated cloud properties are already provided to the external CPI when a VM is created, this should be doable. The SoftLayer creator would need just a bit more logic to determine whether the VM is associated only with a private network, and to verify that at most one private and one public vlan is used across all attached networks.

fix Godeps dependencies

The Godeps dependencies are not up to date. We need to do the same as we did for softlayer-go, and test @posijon's README notes.

CPI should be able to convert the name of vlans to their IDs

I'm trying to use bosh-init to deploy a director to SoftLayer. Only the names of vlans are visible in the SoftLayer portal, so I used vlan names instead of vlan IDs in the manifest. Unfortunately, the deployment failed, and I had to fetch the vlan IDs via the SoftLayer API and substitute them into the manifest. The CPI should be able to do that automatically.
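
A sketch of the name-to-ID resolution being asked for; Vlan and VlanLister are hypothetical stand-ins for the SoftLayer account service and its vlan listing call:

package main

import (
    "fmt"
    "strconv"
)

// Vlan pairs a SoftLayer vlan's numeric ID with its portal name.
type Vlan struct {
    Id   int
    Name string
}

// VlanLister is a hypothetical stand-in for the account service call that
// lists the vlans visible to the account.
type VlanLister interface {
    ListVlans() ([]Vlan, error)
}

// resolveVlanID accepts either a numeric vlan ID or a vlan name from the
// manifest and returns the ID.
func resolveVlanID(l VlanLister, nameOrID string) (int, error) {
    if id, err := strconv.Atoi(nameOrID); err == nil {
        return id, nil // already an ID; nothing to resolve
    }
    vlans, err := l.ListVlans()
    if err != nil {
        return 0, err
    }
    for _, v := range vlans {
        if v.Name == nameOrID {
            return v.Id, nil
        }
    }
    return 0, fmt.Errorf("no vlan named %q found in this account", nameOrID)
}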

About enable/disable OS Reload in eCPI

@maximilien, I would like to get your advice/comments on this issue. As you know, we need to support the OS reload feature in the eCPI to make it work well in the Bluemix environment, but I think we should allow users to choose whether to create a new VM or perform an OS reload, so I would like to add a property to enable/disable this feature. I want to place it (osReloadEnabled) in the SoftLayer section of the micro BOSH yaml, so that it appears alongside username/apiKey in config.json:

  "SoftLayer": {
    "username": "[email protected]",
    "apiKey": "fakeKey",
    "osReloadEnabled": true
  }
}

In config.go, add it in SoftLayerConfig struct:

type SoftLayerConfig struct {
    Username string `json:"username"`
    ApiKey   string `json:"apiKey"`
    OSReloadEnabled   bool `json:"osReloadEnabled"`
}

Then the OSReloadEnabled parameter needs to be passed through everywhere NewSoftLayerClient is called. In the Create and Delete methods, I will add a check at the top of whether it's true or false, to determine whether to perform an OS reload or create a new VM, and whether to delete just the VM info from the sqlite DB or delete the real VM.

Do you agree with this change? Any comments/advice? Thanks.
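
A sketch of the Create-side gate described above; the creator type and the osReloadVM/createNewVM helpers are hypothetical:

func (c *softLayerCreator) Create(agentID string, stemcellID int) (VM, error) {
    if c.config.SoftLayer.OSReloadEnabled {
        return c.osReloadVM(agentID, stemcellID) // reuse a guest tracked in the sqlite DB
    }
    return c.createNewVM(agentID, stemcellID) // order a brand-new virtual guest
}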

`cannot execute binary file` when deploying with `bosh-init` on macOS

When I attempt to deploy from my macOS machine, I get the error bin/build-linux-amd64: line 9: /Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2/bin/go: cannot execute binary file:

bosh-init deploy concourse-nginx-ntp-pdns-ibm.yml
Deployment manifest: '/Users/cunnie/workspace/deployments/concourse-nginx-ntp-pdns-ibm.yml'
Deployment state: '/Users/cunnie/workspace/deployments/concourse-nginx-ntp-pdns-ibm-state.json'

Started validating
  Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh'... Finished (00:00:00)
  Downloading release 'bosh-softlayer-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-softlayer-cpi'... Finished (00:00:00)
  Validating cpi release... Finished (00:00:00)
  Validating deployment manifest... Finished (00:00:00)
  Downloading stemcell... Skipped [Found in local cache] (00:00:00)
  Validating stemcell... Finished (00:00:00)
Finished validating (00:00:01)

Started installing CPI
  Compiling package 'golang_1.6.2/9cc12628d20cc7c3c0f35a5d83fe5249bf54a4e3'... Finished (00:00:00)
  Compiling package 'bosh_softlayer_cpi/5a8e567c0853596e823f658918d2f739948f2875'... Failed (00:00:02)
Failed installing CPI (00:00:02)

Command 'deploy' failed:
  Installing CPI:
    Compiling job package dependencies for installation:
      Compiling job package dependencies:
        Compiling package:
          Running command: 'bash -x packaging', stdout: 'total 8
drwxr-xr-x  3 cunnie  staff  102 Jul 20 20:00 github.com
-rwxr-xr-x  1 cunnie  staff  455 Jul 20 19:58 packaging
total 0
drwxr-xr-x  3 cunnie  staff  102 Jul 20 20:00 github.com
', stderr: '+ set -e -x
+ '[' -z /Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages ']'
+ golang_pkg_dir=/Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2
+ export GOROOT=/Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2
+ GOROOT=/Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2
+ export GOPATH=/Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/tmp/bosh-init-release769914941/extracted_packages/bosh_softlayer_cpi
+ GOPATH=/Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/tmp/bosh-init-release769914941/extracted_packages/bosh_softlayer_cpi
+ export PATH=/Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2/bin:/usr/local/bin:/usr/bin:/bin
+ PATH=/Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2/bin:/usr/local/bin:/usr/bin:/bin
+ ls -l .
+ mkdir src
+ cp -a github.com src
+ ls -l src
+ cd src/github.com/cloudfoundry/bosh-softlayer-cpi
+ bin/build-linux-amd64
bin/build-linux-amd64: line 9: /Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2/bin/go: cannot execute binary file
bin/build: line 12: /Users/cunnie/.bosh_init/installations/bdb7d7df-c568-4600-5421-3ace4a48d992/packages/golang_1.6.2/bin/go: cannot execute binary file

It appears that this is caused by the packaging script forcing the installation of the Linux version of golang.

Let me know if you want me to submit a pull request. In essence, I would add another blob, the macOS version of golang, and add an intelligent check to use the appropriate version.
