
terraform-metal-openstack's Introduction

OpenStack on Equinix Metal

This repository is Experimental, meaning that it is based on untested ideas or techniques that are not yet established or finalized, or involves a radically new and innovative style! Support is therefore best effort (at best!), and we strongly encourage you NOT to use this in production.

Overview

Use Terraform to quickly and easily create an OpenStack cloud powered by Armv8 and/or x86 bare metal servers at Equinix Metal. Specifically, this deployment showcases how a multi-node cloud can be deployed on Equinix Metal bare metal.

This repo supports the Ussuri version of OpenStack.

The deployment defaults to a minimum 3 node OpenStack cloud, consisting of 2 x86 infrastructure nodes and a single x86 compute node.

  • It is possible to modify the total number of nodes and the type (various sizes of x86 and ARM hardware provided by Equinix Metal).
  • By default, the template uses third generation Equinix Metal hardware.

If you require support, please email [email protected], visit the Equinix Metal IRC channel (#equinixmetal on freenode), subscribe to the Equinix Metal Community Slack channel, or post an issue in this repository.

Contributions are welcome to help extend this work!

Walkthroughs

To see a walkthrough of this repo, please check out this YouTube video.

Cloud Abilities

The default deployment supports both ARM and x86 based virtual workloads across multiple compute nodes. Inter-node communication is set up so that virtual machines on the same overlay network but on different compute nodes can communicate with each other across the underlying VXLAN networks; this is a transparent capability of OpenStack. Management and inter-node traffic traverses the private Equinix Metal project network (the 10.0.0.0/8 subnet). Public OpenStack services are available via the public IP addresses assigned by Equinix Metal. DNS is not set up as part of this deployment, so use IP addresses to access the services. The backend private IP addresses are mapped automatically into the node hostfiles during deployment.

The virtual machine images are deployed with usernames and passwords enabled, allowing console login. For more details, see "userdata.txt", the cloud-init file used for the CentOS, Fedora, and Ubuntu virtual machines. The Cirros default login information is displayed on the console when logging in. The controller and compute nodes are configured with VNC console access for all the x86 machines; console access is via the Horizon GUI dashboard. Since the ARM virtual machines do not support VNC console access, novaconsole is available on the controller via the CLI.

By default, upstream connectivity from inside the cloud (virtual machines/networks) to the Internet is not enabled. Connectivity within internal virtual networks is enabled. The sample workload has SSH (TCP-22) and ICMP traffic enabled via security groups.
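Rules like these can be expressed with the OpenStack CLI. A minimal sketch, assuming the rules are added to the "default" security group (the actual group and rules are defined in OpenStackSampleWorkload.tf):

```shell
# Permit inbound SSH (TCP-22) and ICMP on the assumed "default" security group
openstack security group rule create --protocol tcp --dst-port 22 --ingress default
openstack security group rule create --protocol icmp --ingress default
```

These commands require a deployed cloud and sourced OpenStack credentials.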

Prerequisites

Equinix Metal Project ID & API Key

This deployment requires an Equinix Metal account for the provisioned bare metal. You'll need your "Equinix Metal Organization ID" and your "Equinix Metal API Key" to proceed. You can use an existing project or create a new one for the deployment. See the full list of inputs for details.

In this walk-through, we will let Terraform create a randomly named project in the organization that you define.

We recommend setting the Equinix Metal API token and Organization ID as environment variables, since this keeps tokens out of source-controlled files. These values can instead be stored in a variables file later if environment variables aren't desired.

export TF_VAR_metal_organization_id=YOUR_ORGANIZATION_ID_HERE
export TF_VAR_metal_auth_token=YOUR_PACKET_TOKEN_HERE

Where is my Equinix Metal Organization ID?

You can find your Organization ID in the organization settings. Click "Settings" in the "Hello, ..." profile menu. Make sure you copy the Organization ID, not the Account ID.

Where is my Equinix Metal Project ID?

You can find your Project ID under the 'Manage' section in the Equinix Metal Portal. Project IDs are listed underneath each project. You can also find the Project ID on the project 'Settings' tab, which also features a handy "copy to clipboard" button for the clicky among us.

How can I create an Equinix Metal API Key?

You will find your API Key on the left side of the portal. If you have existing keys you will find them listed on that page. If you haven't created one yet, you can click here:

https://console.equinix.com/#/api-keys/new

Ensure that your Equinix Metal account has an SSH key attached

When provisioning machines, Equinix Metal installs an SSH key to allow administrative access. If no SSH keys are available, provisioning fails with a "Must have at least one SSH key" error. To fix this, add an SSH key to your Equinix Metal account.
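If you don't have a key yet, one can be generated locally and its public half uploaded through the portal. The filename metal-demo-key below is just an example:

```shell
# Generate a local ed25519 keypair; upload the public half
# (metal-demo-key.pub) under SSH Keys in the Equinix Metal portal.
ssh-keygen -t ed25519 -f ./metal-demo-key -N "" -C "equinix-metal-demo"
```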

Terraform

These instructions use Terraform from HashiCorp to drive the deployment. If you don't have Terraform installed already, you can download and install it using the instructions at the link below: https://www.terraform.io/downloads.html

Deployment Prep

Download the terraform-metal-openstack manifests from GitHub into a local directory.

git clone https://github.com/equinix/terraform-metal-openstack.git
cd terraform-metal-openstack

Download the Terraform providers required:

terraform init

An SSH keypair will be created and managed by this plan to access the hosts in your Metal account's project.

Cloud Sizing Defaults

Several configuration files are available, each building the cloud with a different mix of hardware architectures and capacity.

Filename                     Description              Controller     Dashboard      x86 Compute Nodes  ARM Compute Nodes
default                      Minimal Config           c3.medium.x86  c3.medium.x86  c3.medium.x86      none
sample.terraform.tfvars      ARM & x86 compute        c2.medium.x86  c2.medium.x86  n2.xlarge.x86      c2.large.arm
sample-arm.terraform.tfvars  Equinix Metal Gen 2 ARM  c2.large.arm   c2.large.arm   none               c2.large.arm
sample-gen2.terraform.tfvars Equinix Metal Gen 2 x86  c2.medium.x86  c2.medium.x86  n2.xlarge.x86      none
sample-gen3.terraform.tfvars Equinix Metal Gen 3 x86  c3.medium.x86  c3.medium.x86  s3.xlarge.x86      none

Running without a terraform.tfvars file results in the "default" configuration, using Equinix Metal c3.medium.x86 hardware devices and no ARM capabilities. The other sample configurations deploy a mix of ARM and x86 hardware across different Equinix Metal hardware generations.

There are a number of defaults that can be modified as desired. Any deviations from the defaults can be set in terraform.tfvars. No modifications to defaults are required except for the Equinix Metal Project ID and API Token if not set as environment variables.

Copy over the sample terraform settings:

cp sample.terraform.tfvars terraform.tfvars

If the Equinix Metal API token and Project ID were not saved as environment variables, then they'll need to be stored in terraform.tfvars.
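For example, a minimal terraform.tfvars might carry the same values as the TF_VAR_ exports shown earlier (the variable names below are taken from those exports; check variables.tf for the authoritative names, including any project-related variable):

```hcl
metal_organization_id = "YOUR_ORGANIZATION_ID_HERE"
metal_auth_token      = "YOUR_API_TOKEN_HERE"
```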

Name         Software                Default Count  Minimum Count
Controller   Keystone, Glance, Nova  1              1
Dashboard    Horizon                 1              0 or more
Compute x86  Neutron                 1              0 or more

In terraform.tfvars, the type of all these nodes can be changed. The size of the cloud can also be grown by increasing the count of ARM and x86 compute nodes above the default count of 1. A count of 0 of any compute node type (ARM or x86) will render the cloud unable to provision virtual machines of said type. While this deployment will cluster and support multiple compute nodes, it does not support multiple controller or dashboard nodes.
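Node counts and types are adjusted the same way in terraform.tfvars; the variable names below are illustrative only (confirm the real names against variables.tf or the sample tfvars files):

```hcl
# Hypothetical variable names -- check variables.tf for the real ones.
openstack_compute_x86_count = 2               # grow the x86 compute pool
openstack_compute_arm_count = 0               # 0 disables ARM virtual machines
openstack_compute_x86_type  = "c3.medium.x86"
```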

Deployment

Start the deployment:

terraform apply

At the conclusion of the deployment, the final settings will be displayed. These values can also be output:

terraform output

Sample output as follows:

Cloud_ID_Tag = "5077f6895d12fce0"
Compute_ARM_IPs = [
  "139.178.89.34",
]
Compute_ARM_Type = [
  "c2.large.arm",
]
Compute_x86_IPs = [
  "147.75.70.59",
]
Compute_x86_Type = [
  "n2.xlarge.x86",
]
Controller_Provider_Private_IPv4 = "10.88.70.16/28"
Controller_Provider_Public_IPv6 = "2604:1380:1000:7c01::/64"
Controller_SSH = "ssh [email protected] -i metal-key"
Controller_SSH6 = "ssh root@2604:1380:1000:7c00::7 -i metal-key"
Controller_Type = "c2.medium.x86"
Horizon_dashboard_via_IP = "http://147.75.109.135/horizon/ default/admin/GgT0VzyrX6Jm9Hd9"
Horizon_dashboard_via_IP6 = "http://[2604:1380:1000:7c00::3]/horizon/ default/admin/GgT0VzyrX6Jm9Hd9"
OpenStack_API_Endpoint = "http://147.75.70.123:5000/v3"
OpenStack_API_Endpoint_ipv6 = "http://[2604:1380:1000:7c00::7]:5000/v3"
OpenStack_admin_pass = <sensitive>

The OpenStack Horizon dashboard can be pulled up at the URL listed with the domain/username/password provided. The OpenStack Controller (CLI) can be accessed at the SSH address listed with the key provided.
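Since Controller_SSH is emitted as a complete command string, the controller can be reached straight from the Terraform outputs (this assumes the metal-key file sits in the current working directory):

```shell
# Run the ssh command stored in the Controller_SSH output
$(terraform output -raw Controller_SSH)
```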

Sample Workload

This deployment includes the following items on top of the base OpenStack installation: a set of virtual machine images (Cirros, CentOS, Fedora, Ubuntu), a virtual network, and some running virtual machines. For more information on the deployed workloads, please see:

https://github.com/equinix/terraform-metal-openstack/blob/master/OpenStackSampleWorkload.tf

Validation

The deployment can be verified via the OpenStack CLI and/or the OpenStack GUI (Horizon). The CLI commands are run on the Controller node (via SSH). The GUI checks are performed in a web browser using the URL and credentials output by Terraform. The individual CLI commands and GUI drill-down paths are listed below. This validation checks that all the compute nodes are running and that the sample workload virtual machines are running.

When running the CLI, the OpenStack credentials need to be set up by sourcing the openrc file.

  • Setup the OpenStack credentials
source admin-openrc
  • Validate that all the OpenStack compute services are running. There will be one nova-compute per bare metal compute node provisioned (ARM or x86).
  • Horizon: Admin->System Information->Compute Services
root@controller:~# openstack compute service list
+----+----------------+----------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host           | Zone     | Status  | State | Updated At                 |
+----+----------------+----------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller     | internal | enabled | up    | 2020-04-10T22:34:31.000000 |
| 10 | nova-scheduler | controller     | internal | enabled | up    | 2020-04-10T22:34:32.000000 |
| 16 | nova-compute   | compute-x86-00 | nova     | enabled | up    | 2020-04-10T22:34:39.000000 |
+----+----------------+----------------+----------+---------+-------+----------------------------+
  • Validate that all the images have been installed
  • Horizon: Admin->Compute->Images
root@controller:~# openstack image list
+--------------------------------------+-----------------+--------+
| ID                                   | Name            | Status |
+--------------------------------------+-----------------+--------+
| 2f873bcc-e4ef-471d-a413-6c7bd17c6be0 | Bionic-amd64    | active |
| bc1cac00-996a-4d69-be24-dcdcbc80b812 | Bionic-arm64    | active |
| 4928c2c6-a27d-4e0f-ad71-746ee6d6ab3d | CentOS-8-arm64  | active |
| 6bbb17d2-16df-45a9-bd68-70e89147996c | CentOS-8-x86_64 | active |
| 0c41cdcb-0f8e-488c-9732-4f549aafe640 | Cirros-arm64    | active |
| 68368d34-48d0-4b47-85d4-990457621f97 | Cirros-x86_64   | active |
| 039a1fff-f9d7-45b5-af6f-76c7c0e6f2d3 | Fedora-32-arm64 | active |
| ef2958fc-5ad0-4780-8d1f-0900eaeedf22 | Trusty-arm64    | active |
| 8708ae1b-210d-4bff-8547-93be0c787072 | Xenial-arm64    | active |
+--------------------------------------+-----------------+--------+

  • Validate that each x86 compute node has the appropriate number of vCPUs and memory
root@controller:~# openstack hypervisor show compute-x86-00 -f table -c service_host -c vcpus -c memory_mb -c running_vms
+--------------+----------------+
| Field        | Value          |
+--------------+----------------+
| memory_mb    | 385434         |
| running_vms  | 1              |
| service_host | compute-x86-00 |
| vcpus        | 56             |
+--------------+----------------+
  • Validate that all the virtual machines are running
  • Horizon: Admin->Compute->Instances
root@controller:~# openstack server list
+--------------------------------------+------+--------+---------------------------+---------------+-----------+
| ID                                   | Name | Status | Networks                  | Image         | Flavor    |
+--------------------------------------+------+--------+---------------------------+---------------+-----------+
| 841ab626-9ad9-492c-ad83-ecdf0d8680b8 | foo  | ACTIVE | 192.168.0.0=192.168.0.116 | Cirros-x86_64 | m1.medium |
+--------------------------------------+------+--------+---------------------------+---------------+-----------+

External Networking Support

External (Provider) networking allows VMs to be assigned Internet-addressable floating IPs, so the VMs can offer Internet-accessible services (e.g. SSH and HTTP). This requires a block of IP addresses from Equinix Metal (elastic IP addresses). These can be requested through the Equinix Metal Web GUI. Please see https://www.packet.com/developers/docs/network/basic/elastic-ips/ for more details. A public IPv4 block of at least /29 is recommended: a /30 will provide only a single floating IP, while a /29 allocation will provide 5 floating IPs.
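The floating IP counts quoted above follow from the block size minus the network, gateway, and broadcast addresses, which can be sketched as:

```shell
# Usable floating IPs in a block: 2^(32 - prefix) total addresses,
# minus the network, gateway, and broadcast addresses.
usable() { echo $(( (1 << (32 - $1)) - 3 )); }
usable 29   # 8 - 3 = 5
usable 30   # 4 - 3 = 1
```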

Once Terraform has finished, the following steps are required to enable external networking.

  • Assign the elastic IP subnet to the "Controller" physical host via the Equinix Metal Web GUI.
  • Log into the Controller physical node via SSH and execute:
sudo bash ExternalNetwork.sh <ELASTIC_CIDR>

For example, if your CIDR subnet is 10.20.30.0/24 the command would be:

sudo bash ExternalNetwork.sh 10.20.30.0/24

From there, assign a floating IPs via the dashboard and update security groups to permit the desired ports.
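Assigning a floating IP can also be done from the controller CLI; a hedged sketch, where "external" is an assumed name for the provider network created by ExternalNetwork.sh (verify with `openstack network list`) and "foo" is the sample workload instance shown in the validation output:

```shell
# Allocate a floating IP from the assumed "external" provider network,
# then attach it to the sample instance "foo".
openstack floating ip create external
openstack server add floating ip foo 203.0.113.10   # substitute the IP returned above
```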

External Block Storage

Equinix Metal offers block storage that can be attached to compute nodes and used as ephemeral storage for VMs. This involves creating the storage via the Equinix Metal Web App, associating the storage with a compute node, and setting up the volume within the compute node. In this example, a 1TB volume is created for use as ephemeral storage.

Stop the OpenStack Nova Compute service

service nova-compute stop

Create and assign a storage volume

Create the volume via the Equinix Metal Web App and assign to the compute node. See the steps at: https://metal.equinix.com/developers/docs/servers/elastic-block-storage/

apt-get -y install jq
packet-block-storage-attach
fdisk /dev/mapper/volume-YOUR_ID_HERE # create a new partition (n) and accept defaults
mkfs.ext4 /dev/mapper/volume-YOUR_ID_HERE-part1
blkid | grep volume-YOUR_ID_HERE-part1 # take note of the UUID

Copy over the existing Nova data

mount /dev/mapper/volume-YOUR_ID_HERE-part1 /mnt
rsync -avxHAX --progress /var/lib/nova/ /mnt
umount /mnt
rm -rf /var/lib/nova/*
vi /etc/fstab # add a line like UUID=YOUR-UUID-HERE /var/lib/nova ext4 defaults 0 2
mount -a

Start the OpenStack Nova Compute service

service nova-compute start

Tearing it all down

To decommission a compute node, the above steps must be done in reverse order.

umount /var/lib/nova
packet-block-storage-detach

Via the Equinix Metal Web App, detach the volume from the host, and then delete the volume. The physical host can then be deprovisioned via terraform destroy.

terraform-metal-openstack's People

Contributors

codinja1188, ctreatma, displague, jmarhee, johnstudarus, miouge1, rainleander


terraform-metal-openstack's Issues

make services externally accessible

Services (such as glance) are currently configured with the hostname (i.e. controller) rather than a publicly addressable IP or a DNS-resolvable hostname. This prevents external entities from using the API services. Either the hostnames need to be put into DNS, or public IPs need to be published in the service catalog.

Missing TF_VAR documentation

README.md needs to include:

export TF_VAR_metal_create_project=false

OR the default needs to include this variable.

HA Dashboard

Set up high availability via BGP to the Packet routers to load balance across redundant dashboards.

CI: sample workload common fails to create openstack resources via CLI

Due to an HTTP 500 when running the openstack CLI, sample workload common fails to create:

  • security group rule
  • openstack subnet
null_resource.openstack-sample-workload-common (remote-exec): Error while executing command: HttpException: 500, Request Failed: internal server error while processing your request.
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack security group rule create [-h]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-f {json,shell,table,value,yaml}]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-c COLUMN] [--noindent]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--prefix PREFIX]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--max-width <integer>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--fit-width] [--print-empty]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--remote-ip <ip-address> | --remote-group <group>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--dst-port <port-range>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--protocol <protocol>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--description <description>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-type <icmp-type>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-code <icmp-code>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ingress | --egress]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ethertype <ethertype>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project <project>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project-domain <project-domain>]
null_resource.openstack-sample-workload-common (remote-exec):                                             <group>
null_resource.openstack-sample-workload-common (remote-exec): openstack security group rule create: error: the following arguments are required: <group>
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack security group rule create [-h]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-f {json,shell,table,value,yaml}]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-c COLUMN] [--noindent]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--prefix PREFIX]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--max-width <integer>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--fit-width] [--print-empty]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--remote-ip <ip-address> | --remote-group <group>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--dst-port <port-range>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--protocol <protocol>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--description <description>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-type <icmp-type>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-code <icmp-code>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ingress | --egress]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ethertype <ethertype>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project <project>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project-domain <project-domain>]
null_resource.openstack-sample-workload-common (remote-exec):                                             <group>
null_resource.openstack-sample-workload-common (remote-exec): openstack security group rule create: error: the following arguments are required: <group>
null_resource.openstack-flavors: Still creating... [10s elapsed]
null_resource.openstack-flavors: Creation complete after 11s [id=5855033389445283733]
null_resource.openstack-sample-workload-common (remote-exec): Error while executing command: HttpException: 500, Request Failed: internal server error while processing your request.
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack subnet create [-h] [-f {json,shell,table,value,yaml}]
null_resource.openstack-sample-workload-common (remote-exec):                                [-c COLUMN] [--noindent] [--prefix PREFIX]
null_resource.openstack-sample-workload-common (remote-exec):                                [--max-width <integer>] [--fit-width]
null_resource.openstack-sample-workload-common (remote-exec):                                [--print-empty] [--project <project>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--project-domain <project-domain>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--subnet-pool <subnet-pool> | --use-prefix-delegation USE_PREFIX_DELEGATION | --use-default-subnet-pool]
null_resource.openstack-sample-workload-common (remote-exec):                                [--prefix-length <prefix-length>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--subnet-range <subnet-range>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--dhcp | --no-dhcp]
null_resource.openstack-sample-workload-common (remote-exec):                                [--dns-publish-fixed-ip | --no-dns-publish-fixed-ip]
null_resource.openstack-sample-workload-common (remote-exec):                                [--gateway <gateway>] [--ip-version {4,6}]
null_resource.openstack-sample-workload-common (remote-exec):                                [--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
null_resource.openstack-sample-workload-common (remote-exec):                                [--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
null_resource.openstack-sample-workload-common (remote-exec):                                [--network-segment <network-segment>] --network
null_resource.openstack-sample-workload-common (remote-exec):                                <network> [--description <description>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--allocation-pool start=<ip-address>,end=<ip-address>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--dns-nameserver <dns-nameserver>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--host-route destination=<subnet>,gateway=<ip-address>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--service-type <service-type>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--tag <tag> | --no-tag]
null_resource.openstack-sample-workload-common (remote-exec):                                <name>
null_resource.openstack-sample-workload-common (remote-exec): openstack subnet create: error: argument --network: expected one argument
null_resource.openstack-sample-workload-common: Still creating... [10s elapsed]
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack router add subnet [-h] <router> <subnet>
null_resource.openstack-sample-workload-common (remote-exec): openstack router add subnet: error: the following arguments are required: <subnet>

Acc Tests fail with "Could not find matching reserved block, all IPs were []"

The automation is failing:

Error: Could not find matching reserved block, all IPs were []

  on ProviderNetwork.tf line 6, in data "metal_precreated_ip_block" "private_ipv4":
   6: data "metal_precreated_ip_block" "private_ipv4" {



Error: Could not find matching reserved block, all IPs were []

  on ProviderNetwork.tf line 24, in data "metal_precreated_ip_block" "public_ipv6":
  24: data "metal_precreated_ip_block" "public_ipv6" {

The testing account may need additional permissions or Terraform should be updated to request a reservation.

Originally posted by @displague in #55 (comment)

IPv6 Support

Setup IPv6 Support...

Since there are no floating IPs for IPv6, the IPv6 subnets would need to be pushed out to the compute hosts. This would likely require provider routed networks when more than one compute host is used.

CI: controller_nova index configuration produces warnings

controller_nova index configuration produces warnings:

null_resource.controller-nova (remote-exec): /usr/lib/python3/dist-packages/pymysql/cursors.py:165: Warning: (1831, 'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
null_resource.controller-nova (remote-exec):   result = self._query(query)
null_resource.controller-nova (remote-exec): /usr/lib/python3/dist-packages/pymysql/cursors.py:165: Warning: (1831, 'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
null_resource.controller-nova (remote-exec):   result = self._query(query)

HA services

Set up redundant backend services, including a clustered database and message bus. This could be done with BGP via the upstream Packet routers. The cost to run this setup would go up considerably (from 3 to 8? servers), so maybe use smaller devices?

external networking with private IPv4 addresses

It would be nice to have some provider networking enabled (perhaps using the private IPv4 address space). Public IPv4 addresses are scarce so private IPv4 would work better for demos. This would require allocating the space via TF and then setting up the provider networks within OpenStack.

CI: controller neutron fails to modify network_id in SQL migration

controller neutron fails to modify network_id in SQL migration:

null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 867d39095bf4 -> d72db3e25539, modify uniq port forwarding
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade d72db3e25539 -> cada2437bf41
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade cada2437bf41 -> 195176fb410d, router gateway IP QoS
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 195176fb410d -> fb0167bd9639
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade fb0167bd9639 -> 0ff9e3881597
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 0ff9e3881597 -> 9bfad3f1e780
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 9bfad3f1e780 -> 63fd95af7dcd
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 63fd95af7dcd -> c613d0b82681
null_resource.controller-neutron (remote-exec): Traceback (most recent call last):
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1246, in _execute_context
null_resource.controller-neutron (remote-exec):     cursor, statement, parameters, context
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
null_resource.controller-neutron (remote-exec):     cursor.execute(statement, parameters)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
null_resource.controller-neutron (remote-exec):     result = self._query(query)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
null_resource.controller-neutron (remote-exec):     conn.query(q)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
null_resource.controller-neutron (remote-exec):     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in _read_query_result
null_resource.controller-neutron (remote-exec):     result.read()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
null_resource.controller-neutron (remote-exec):     first_packet = self.connection._read_packet()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in _read_packet
null_resource.controller-neutron (remote-exec):     packet.check_error()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in check_error
null_resource.controller-neutron (remote-exec):     err.raise_mysql_exception(self._data)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
null_resource.controller-neutron (remote-exec):     raise errorclass(errno, errval)
null_resource.controller-neutron (remote-exec): pymysql.err.InternalError: (1832, "Cannot change column 'network_id': used in a foreign key constraint 'subnets_ibfk_1'")

null_resource.controller-neutron (remote-exec): The above exception was the direct cause of the following exception:

null_resource.controller-neutron (remote-exec): Traceback (most recent call last):
null_resource.controller-neutron (remote-exec):   File "/usr/bin/neutron-db-manage", line 10, in <module>
null_resource.controller-neutron (remote-exec):     sys.exit(main())
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/cli.py", line 658, in main
null_resource.controller-neutron (remote-exec):     return_val |= bool(CONF.command.func(config, CONF.command.name))
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/cli.py", line 182, in do_upgrade
null_resource.controller-neutron (remote-exec):     desc=branch, sql=CONF.command.sql)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/cli.py", line 83, in do_alembic_command
null_resource.controller-neutron (remote-exec):     getattr(alembic_command, cmd)(config, *args, **kwargs)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/command.py", line 279, in upgrade
null_resource.controller-neutron (remote-exec):     script.run_env()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/script/base.py", line 475, in run_env
null_resource.controller-neutron (remote-exec):     util.load_python_file(self.dir, "env.py")
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/util/pyfiles.py", line 98, in load_python_file
null_resource.controller-neutron (remote-exec):     module = load_module_py(module_id, path)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/util/compat.py", line 174, in load_module_py
null_resource.controller-neutron (remote-exec):     spec.loader.exec_module(module)
null_resource.controller-neutron (remote-exec):   File "<frozen importlib._bootstrap_external>", line 678, in exec_module
null_resource.controller-neutron (remote-exec):   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 120, in <module>
null_resource.controller-neutron (remote-exec):     run_migrations_online()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 114, in run_migrations_online
null_resource.controller-neutron (remote-exec):     context.run_migrations()
null_resource.controller-neutron (remote-exec):   File "<string>", line 8, in run_migrations
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/runtime/environment.py", line 846, in run_migrations
null_resource.controller-neutron (remote-exec):     self.get_context().run_migrations(**kw)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/runtime/migration.py", line 365, in run_migrations
null_resource.controller-neutron (remote-exec):     step.migration_fn(**kw)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/alembic_migrations/versions/train/expand/c613d0b82681_subnet_force_network_id.py", line 40, in upgrade
null_resource.controller-neutron (remote-exec):     existing_type=sa.String(36))
null_resource.controller-neutron (remote-exec):   File "<string>", line 8, in alter_column
null_resource.controller-neutron (remote-exec):   File "<string>", line 3, in alter_column
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/operations/ops.py", line 1775, in alter_column
null_resource.controller-neutron (remote-exec):     return operations.invoke(alt)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/operations/base.py", line 345, in invoke
null_resource.controller-neutron (remote-exec):     return fn(self, operation)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/operations/toimpl.py", line 56, in alter_column
null_resource.controller-neutron (remote-exec):     **operation.kw
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/ddl/mysql.py", line 98, in alter_column
null_resource.controller-neutron (remote-exec):     else existing_comment,
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/ddl/impl.py", line 134, in _exec
null_resource.controller-neutron (remote-exec):     return conn.execute(construct, *multiparams, **params)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 982, in execute
null_resource.controller-neutron (remote-exec):     return meth(self, multiparams, params)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
null_resource.controller-neutron (remote-exec):     return connection._execute_ddl(self, multiparams, params)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1044, in _execute_ddl
null_resource.controller-neutron (remote-exec):     compiled,
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1250, in _execute_context
null_resource.controller-neutron (remote-exec):     e, statement, parameters, cursor, context
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1474, in _handle_dbapi_exception
null_resource.controller-neutron (remote-exec):     util.raise_from_cause(newraise, exc_info)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
null_resource.controller-neutron (remote-exec):     reraise(type(exception), exception, tb=exc_tb, cause=cause)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise
null_resource.controller-neutron (remote-exec):     raise value.with_traceback(tb)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1246, in _execute_context
null_resource.controller-neutron (remote-exec):     cursor, statement, parameters, context
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
null_resource.controller-neutron (remote-exec):     cursor.execute(statement, parameters)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
null_resource.controller-neutron (remote-exec):     result = self._query(query)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
null_resource.controller-neutron (remote-exec):     conn.query(q)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
null_resource.controller-neutron (remote-exec):     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in _read_query_result
null_resource.controller-neutron (remote-exec):     result.read()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
null_resource.controller-neutron (remote-exec):     first_packet = self.connection._read_packet()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in _read_packet
null_resource.controller-neutron (remote-exec):     packet.check_error()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in check_error
null_resource.controller-neutron (remote-exec):     err.raise_mysql_exception(self._data)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
null_resource.controller-neutron (remote-exec):     raise errorclass(errno, errval)
null_resource.controller-neutron (remote-exec): oslo_db.exception.DBError: (pymysql.err.InternalError) (1832, "Cannot change column 'network_id': used in a foreign key constraint 'subnets_ibfk_1'")
null_resource.controller-neutron (remote-exec): [SQL: ALTER TABLE subnets MODIFY network_id VARCHAR(36) NOT NULL]
null_resource.controller-neutron (remote-exec): (Background on this error at: http://sqlalche.me/e/2j85)
null_resource.controller-neutron: Creation complete after 1m4s [id=9203633230149787266]

Error Not Found

What I Was Trying To Do:

  • Follow the README.md steps to create a default OpenStack on Equinix Metal environment

What I Expected To Happen:

  • Create a Minimal Config environment

Actual Outcome:
% terraform apply
null_resource.blank-hostfile: Refreshing state... [id=7851812259129772224]
tls_private_key.ssh_key_pair: Refreshing state... [id=bf9002a16da97b5ccd749e9c2ffc25b2a44d4534]
random_password.os_admin_password: Refreshing state... [id=none]
random_id.cloud: Refreshing state... [id=cFTzTY7Gu_o]
local_file.cluster_public_key: Refreshing state... [id=d689aaf84ea2a2af4ccc8e99e17968c65f06e2af]
local_file.cluster_private_key_pem: Refreshing state... [id=375a0d224f16c8dcc29e74d4f7d80558a2ccefd0]
metal_ssh_key.ssh_pub_key: Refreshing state... [id=25b5ce12-ea7e-48be-90be-7961027c9f39]

Error: Not found

on BareMetal.tf line 2, in resource "metal_project" "project":
2: resource "metal_project" "project" {

Steps to Reproduce:
% export TF_VAR_metal_project_id=YOUR_PROJECT_ID_HERE (replaced with my project id)
% export TF_VAR_metal_auth_token=YOUR_PACKET_TOKEN_HERE (replaced with my packet token)
% git clone https://github.com/equinix/terraform-metal-openstack
% cd terraform-metal-openstack
% terraform init
% cp sample.terraform.tfvars terraform.tfvars
% terraform apply

Uniform Standards Request: Experimental Repository

Hello!

We believe this repository is Experimental and therefore needs the following files updated:

If you feel the repository should be Maintained or marked end-of-life, or that you'll need assistance to create these files, please let us know by filing an issue with https://github.com/packethost/standards.

Packet maintains a number of public repositories that help customers to run various workloads on Packet. These repositories are in various states of completeness and quality, and being public, developers often find them and start using them. This creates problems:

  • Developers using low-quality repositories may infer that Packet generally provides a low-quality experience.
  • Many of our repositories are put online with no formal communication with, or training for, customer success. This leads to a below-average support experience when things do go wrong.
  • We spend a huge amount of time supporting users through various channels when, with better upfront planning, documentation, and testing, much of this support work could be eliminated.

To that end, we propose three tiers of repositories: Private, Experimental, and Maintained.

As a resource and example of a maintained repository, we've created https://github.com/packethost/standards. This is also where you can file any requests for assistance or modification of scope.

The Goal

Our repositories should be the example from which adjacent, competing, projects look for inspiration.

Each repository should not look entirely different from other repositories in the ecosystem, having a different layout, a different testing model, or a different logging model, for example, without reason or recommendation from the subject matter experts from the community.

We should share our improvements with each ecosystem while seeking and respecting the feedback of these communities.

Whether or not strict guidelines have been provided for the project type, our repositories should ensure that the same components are offered across the board. How these components are provided may vary, based on the conventions of the project type. GitHub provides general guidance on this which they have integrated into their user experience.

CI needs to be configured with an API token

A METAL_AUTH_TOKEN secret should be added to this project.
This token must be attached to an account dedicated to this project that only has access to this project.
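As a sketch, a workflow step consuming that secret might look like the following (the workflow file name and step layout are assumptions; the TF_VAR_metal_auth_token variable matches the one used in the reproduction steps above):

```yaml
# .github/workflows/test.yml (hypothetical layout)
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      METAL_AUTH_TOKEN: ${{ secrets.METAL_AUTH_TOKEN }}
      TF_VAR_metal_auth_token: ${{ secrets.METAL_AUTH_TOKEN }}
    steps:
      - uses: actions/checkout@v2
      - run: terraform init && terraform plan
```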

If a project-level API token is sufficient for this build, we should consider using https://github.com/displague/metal-actions-example with a suggested project name (rather than a random one).

The user account that provisions the project token would need to be similar to the one used for terraform-provider-metal testing (it could perhaps be the same account, but should be a different user token).

Provision ssh keys with Terraform

The instruction to create an SSH key by hand (https://github.com/equinix/terraform-metal-openstack#deployment-prep) can be avoided by generating the key with Terraform:
https://github.com/equinix/terraform-metal-anthos-on-baremetal/blob/v0.3.0/main.tf#L30-L44

In this way, the SSH key will be registered in userdata and configured by cloud-init.
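A minimal sketch of that pattern, using the resource names already present in this repo's Terraform state (tls_private_key.ssh_key_pair and metal_ssh_key.ssh_pub_key appear in the apply output above; the key name is a hypothetical value):

```hcl
# Generate the key pair in Terraform instead of by hand.
resource "tls_private_key" "ssh_key_pair" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public key with Equinix Metal so cloud-init
# installs it into each node's authorized_keys.
resource "metal_ssh_key" "ssh_pub_key" {
  name       = "openstack-cluster" # hypothetical name
  public_key = tls_private_key.ssh_key_pair.public_key_openssh
}
```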

Lines such as these (which add the SSH key contents to userdata directly) will not be necessary:

user_data = "#cloud-config\n\nssh_authorized_keys:\n - \"${file(var.cloud_ssh_public_key_path)}\""

Instances of key copying can create the remote file using the content property (instead of source pointing at a local file), following the same pattern found in the anthos project.

provisioner "file" {
  source      = var.cloud_ssh_public_key_path
  destination = "openstack_rsa.pub"
}
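With a Terraform-generated key, the same provisioner can take its content directly from the key resource rather than a local file path (a sketch, assuming the tls_private_key.ssh_key_pair resource exists in the configuration):

```hcl
# Push the Terraform-generated public key instead of copying a local file.
provisioner "file" {
  content     = tls_private_key.ssh_key_pair.public_key_openssh
  destination = "openstack_rsa.pub"
}
```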

We can also remove these instructions if we adapt this behavior: https://github.com/equinix/terraform-metal-openstack#ensure-that-your-equinix-metal-account-has-an-ssh-key-attached

Provider Routed Networks

Set up a configuration that uses flat networking (IPv4), with VM server addresses using provider networking off the compute nodes. This would require a Provider Routed Network architecture and would be the start of a multi-facility deployment.

CI: keystone fails to add mariadb dependency

keystone fails to add mariadb dependency:

null_resource.controller-keystone (remote-exec): E: The repository 'http://www.ftp.saix.net/DB/mariadb/repo/10.3/ubuntu bionic Release' does not have a Release file.

missing ExternalNetwork.sh?

README.md says that to add external floating IPs, you should run the following on the controller:
sudo bash ExternalNetwork.sh <ELASTIC_CIDR>

Where do we find ExternalNetwork.sh? It would appear to be missing.

Remove testing files from the project

I ran into a few things when trying to terraform plan this:

  • main.tf includes a TF Cloud backend that is specific to the testing environment. This file will prevent users from consuming the module without modification.
    I think we should move the contents of this file to a secret and create it at test time:
     - name: Create testing backend config
       run: |
         echo "$BACKEND_FILE" > backend.tf
       shell: bash
       env:
         BACKEND_FILE: ${{ secrets.BACKEND_FILE }}
  • OpenStack.tf could be main.tf (since the existing main.tf would become backend.tf)
  • There is a root terraform.yml file which is nearly identical to the one in .github/; the one in the root adds terraform fmt and terraform plan
  • terraform fmt applies a few changes to the project that should be included in the PR

Update branding

There are various references to Packet which should be rebranded to Equinix Metal. Also, the default configurations are Gen2, which should be tested and then updated to Gen3.
