
Harbor BOSH Release

Project Harbor is an enterprise-class registry server that stores and distributes Docker images. Harbor extends the open source Docker Distribution by adding the functionalities usually required by an enterprise, such as security, identity and management. As an enterprise private registry, Harbor offers better performance and security. Having a registry closer to the build and run environment improves the image transfer efficiency. Harbor supports the setup of multiple registries and has images replicated between them. In addition, Harbor offers advanced security features, such as user management, access control and activity auditing.

This repository uses the Harbor offline installation package to create a BOSH release for Harbor, which can be used to quickly deploy a standalone Harbor. The main idea of this Harbor BOSH release is to run the Harbor components as containers on top of Docker and docker-compose. Note that this BOSH release does not deploy an HA architecture.

This BOSH release for Harbor is open sourced under Apache License Version 2.0.

Repository Contents

This repository consists of the following directories.

packages

Packaging instructions used by BOSH to build each of the dependencies. This repository contains the following four packages:

  • common: provides utility scripts, such as PID file operations
  • docker: provides the Docker installation
  • docker-compose: provides the docker-compose tool
  • harbor-app: the Harbor application package, including templates and the Docker images of all Harbor components

jobs

Start and stop commands for each of the jobs (processes) running on Harbor nodes. Currently there is only one job, named harbor, because Harbor is started via docker-compose. The harbor job is composed of the following files:

  • templates/config/harbor.cfg: The base configuration file template of Harbor. The real content comes from the properties the user provides when deploying.
  • templates/tls/server.*: The server certificate and key file templates. The real content of the cert and key is also provided by the user in the deployment manifest.
  • templates/bin/pre-start.erb: This script is executed in the pre-start stage of the job lifecycle and prepares the running environment:
    • Set cgroup mount point
    • Start docker daemon process
    • Load docker images of harbor components into docker
    • Execute harbor prepare scripts
  • templates/bin/ctl.erb: Provides the start/stop commands for the Harbor process. 'start' is based on docker-compose; 'stop' directly kills the process recorded in the PID file.
  • templates/bin/status_check.erb: Checks whether Harbor is working well. Besides checking container status, it also issues an HTTP request to the Harbor API.
  • spec: Defines the package dependencies and properties.
  • monit: Provides the monit configuration that BOSH uses to check the status of the Harbor process (a minimal sketch follows below).
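For orientation, here is a minimal sketch of the monit pattern such a job typically uses; the pidfile path is an assumption, and the actual monit file in this repository is authoritative:

check process harbor
  with pidfile /var/vcap/sys/run/harbor/harbor.pid
  start program "/var/vcap/jobs/harbor/bin/ctl start"
  stop program "/var/vcap/jobs/harbor/bin/ctl stop"
  group vcap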

config

URLs and access credentials to the BOSH blobstore for storing final releases. Currently it only contains the configuration for a local blobstore.

src

Provides the utility script source code for the common package.

manifests

Provides deployment manifest templates and related manifest generation scripts. Currently only a manifest file for vSphere vCenter is provided.

  • deployment-vsphere.yml: Deploy Harbor to vSphere vCenter.

.final_builds

References into the public blobstore for final jobs & packages (each referenced by one or more releases)

releases

yml files containing the references to blobs for each package in a given release; these are resolved within .final_builds.

Deploy Harbor via BOSH

Install BOSH CLI V2

Download the binary for your platform and place it on your PATH.

Create BOSH env

Here we only provide the commands for vCenter/vSphere; for other IaaS platforms, please refer to the BOSH documentation.

# Create directory to keep state
mkdir bosh-1 && cd bosh-1

# Clone Director templates
git clone https://github.com/cloudfoundry/bosh-deployment

# Fill below variables (replace example values) and deploy the Director
bosh create-env bosh-deployment/bosh.yml \
    --state=state.json \
    --vars-store=creds.yml \
    -o bosh-deployment/vsphere/cpi.yml \
    -o bosh-deployment/uaa.yml \
    -o bosh-deployment/misc/config-server.yml \
    -v director_name=bosh-1 \
    -v internal_cidr=10.0.0.0/24 \
    -v internal_gw=10.0.0.1 \
    -v internal_ip=10.0.0.6 \
    -v network_name="VM Network" \
    -v vcenter_dc=my-dc \
    -v vcenter_ds=datastore0 \
    -v vcenter_ip=192.168.0.10 \
    -v vcenter_user=root \
    -v vcenter_password=vmware \
    -v vcenter_templates=bosh-1-templates \
    -v vcenter_vms=bosh-1-vms \
    -v vcenter_disks=bosh-1-disks \
    -v vcenter_cluster=cluster1 \
    -o bosh-deployment/vsphere/resource-pool.yml \
    -v vcenter_rp=bosh-rp1

# Create alias for the created env
bosh alias-env <alias name> -e <director IP> --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)

# Set env
bosh int ./creds.yml --path /director_ssl/ca > root_ca_certificate
export BOSH_CA_CERT=root_ca_certificate
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
export BOSH_ENVIRONMENT=<director IP>
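
# With these variables exported, you can verify that the CLI reaches the Director
bosh env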

Download source code

# Clone the repository
git clone [email protected]:vmware/harbor-boshrelease.git
cd harbor-boshrelease

Make a deployment with the pre-built final release

You can deploy the published pre-built final release without creating a local dev release:

bosh -n -d harbor-deployment deploy manifests/harbor.yml -v hostname=harbor.local

Make a deployment with a dev release

Before deploying, you need to create the Harbor BOSH release.

Create the Harbor BOSH release

# Sync remote pre-built blobs
bosh sync-blobs
# Or download new versions of the blob packages locally
cd scripts
bash add_blobs.sh

# Create a dev release
bosh create-release --force

# Or create a final release
bosh create-release --final [--version <version>]

# Upload the created dev release:
bosh upload-release

# Confirm the release is uploaded.
bosh releases

Upload cloud-config and runtime-config

You can find the BOSH cloud config file, runtime config files, and deployment manifest samples in the manifests directory. Notes:

  • Change cloud-config-vsphere.yml per your environment.
  • Change configuration in the deployment manifest sample file deployment-vsphere.yml (e.g. azs name, networks name) per your environment.
  • Change the version of the harbor-container-registry release in runtime-config-harbor.yml (see the sketch after this list).
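As a rough sketch, a BOSH runtime config pins the release version in its releases: block, so the bump looks something like this (the version is hypothetical and the addon stanza is illustrative only; the file in manifests/ is authoritative):

releases:
- name: harbor-container-registry
  version: 1.7.4
addons:
- name: harbor
  jobs:
  - name: harbor
    release: harbor-container-registry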

Upload cloud-config and runtime-config:

bosh -n update-cloud-config   manifests/cloud-config-vsphere.yml
bosh -n update-runtime-config manifests/runtime-config-bosh-dns.yml --name bosh-dns
bosh -n update-runtime-config manifests/runtime-config-harbor.yml   --name harbor

Kick off the deployment

bosh -n -d harbor-deployment deploy manifests/deployment-vsphere.yml -v hostname=harbor.local [--vars-store /path/to/creds.yml]
bosh run-errand smoke-test -d harbor-deployment

After the deployment is completed, you can check the status of the deployment:

# See current deployments
bosh deployments

# Check the status of vms
bosh vms

# Check the status of instances
bosh instances

Delete the deployment

If you want to delete the specified deployment, execute:

## --force ignores errors during deletion
bosh -d harbor-deployment delete-deployment --force

Maintainers

  • Jesse Hu [huh at vmware.com]
  • Steven Zou [szou at vmware.com]
  • Daojun Zhang [daojunz at vmware.com]
  • Daniel Jiang [jiangd at vmware.com]

Contributing

The harbor-boshrelease project team welcomes contributions from the community. If you wish to contribute code and you have not signed our contributor license agreement (CLA), our bot will update the issue when you open a Pull Request. For any questions about the CLA process, please refer to our FAQ. For more detailed information, refer to CONTRIBUTING.md.

License

Refer to LICENSE.


Issues

pre-start script fails with FATAL: database "registry" does not exist

Hi, we've been using Harbor for PCF, but upgrading Harbor from 1.6.0 to 1.6.3 has been failing in the pre-start script. You can see the logs below.
As you can see, it is due to the missing 'registry' database, and the script could not get the return value of get_version[1], which should be returned from the get_version_pgsql function[2].
I wonder whether it should have been recovered earlier, but does this mean the migration failed?

BTW, I'm wondering why this migration should even be needed for 1.6.0. The current version is 1.6.0, but it still tries to do the migration. I wonder why the presence of the vmware/harbor-log image[3] is the condition for this migration.
Let me know if you have any story behind that.

Current Version: 1.6.0 (Harbor for PCF)
Target Version: 1.6.3 (Harbor for PCF)
Ops Manager: 2.3

Tomohiro

[1] https://github.com/goharbor/harbor/blob/v1.6.3/tools/migration/db/run.sh#L156
[2] https://github.com/goharbor/harbor/blob/v1.6.3/tools/migration/db/util/pgsql.sh#L133-L136
[3] https://github.com/vmware/harbor-boshrelease/blob/1.6/jobs/harbor/templates/bin/pre-start.erb#L185-L186

harbor-app/e951c164-18e3-445f-9fae-ec72d916f38f:/var/vcap/sys/log/harbor# cat pre-start.stdout.log
[Thu Jan 10 05:53:11 UTC 2019] Installing Harbor 1.6.3
loaded secret from file: /data/secretkey
Generated configuration file: /var/vcap/packages/harbor-app/common/config/nginx/nginx.conf
Generated configuration file: /var/vcap/packages/harbor-app/common/config/adminserver/env
Generated configuration file: /var/vcap/packages/harbor-app/common/config/ui/env
Generated configuration file: /var/vcap/packages/harbor-app/common/config/registry/config.yml
Generated configuration file: /var/vcap/packages/harbor-app/common/config/db/env
Generated configuration file: /var/vcap/packages/harbor-app/common/config/jobservice/env
Generated configuration file: /var/vcap/packages/harbor-app/common/config/jobservice/config.yml
Generated configuration file: /var/vcap/packages/harbor-app/common/config/log/logrotate.conf
Generated configuration file: /var/vcap/packages/harbor-app/common/config/registryctl/env
Generated configuration file: /var/vcap/packages/harbor-app/common/config/ui/app.conf
Copying UAA CA cert to /var/vcap/packages/harbor-app/common/config/ui/certificates/uaa_ca.pem
Generated certificate, key file: /var/vcap/packages/harbor-app/common/config/ui/private_key.pem, cert file: /var/vcap/packages/harbor-app/common/config/registry/root.crt
Copied custom ca bundle: /var/vcap/packages/harbor-app/common/config/custom-ca-bundle.crt
Copying sql file for notary DB
Generated certificate, key file: /var/vcap/packages/harbor-app/cert_tmp/notary-signer-ca.key, cert file: /var/vcap/packages/harbor-app/cert_tmp/notary-signer-ca.crt
Generated certificate, key file: /var/vcap/packages/harbor-app/cert_tmp/notary-signer.key, cert file: /var/vcap/packages/harbor-app/cert_tmp/notary-signer.crt
Copying certs for notary signer
Copying notary signer configuration file
Generated configuration file: /var/vcap/packages/harbor-app/common/config/notary/signer-config.postgres.json
Generated configuration file: /var/vcap/packages/harbor-app/common/config/notary/server-config.postgres.json
Copying nginx configuration file for notary
Generated configuration file: /var/vcap/packages/harbor-app/common/config/nginx/conf.d/notary.server.conf
loaded secret from file: /data/defaultalias
Generated configuration file: /var/vcap/packages/harbor-app/common/config/notary/signer_env
Generated configuration file: /var/vcap/packages/harbor-app/common/config/clair/postgres_env
Generated configuration file: /var/vcap/packages/harbor-app/common/config/clair/config.yaml
Generated configuration file: /var/vcap/packages/harbor-app/common/config/clair/clair_env
Create config folder: /var/vcap/packages/harbor-app/common/config/chartserver
Generated configuration file: /var/vcap/packages/harbor-app/common/config/chartserver/env
The configuration files are ready, please use docker-compose to start the service.
Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:04:39 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.0-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       0ffa825
  Built:            Wed Jul 18 19:13:39 2018
  OS/Arch:          linux/amd64
  Experimental:     false
[Thu Jan 10 05:54:01 UTC 2019] Docker daemon is running
[Thu Jan 10 05:54:01 UTC 2019] Loading docker images ...
Loaded image: goharbor/harbor-ui:v1.6.3
Loaded image: goharbor/harbor-jobservice:v1.6.3
Loaded image: goharbor/notary-signer-photon:v0.5.1-v1.6.3
Loaded image: goharbor/clair-photon:v2.0.6-v1.6.3
Loaded image: goharbor/nginx-photon:v1.6.3
Loaded image: goharbor/registry-photon:v2.6.2-v1.6.3
Loaded image: goharbor/notary-server-photon:v0.5.1-v1.6.3
Loaded image: goharbor/harbor-migrator:v1.6.3
Loaded image: goharbor/harbor-adminserver:v1.6.3
Loaded image: goharbor/harbor-log:v1.6.3
Loaded image: goharbor/harbor-db:v1.6.3
Loaded image: goharbor/redis-photon:v1.6.3
Loaded image: goharbor/chartmuseum-photon:v0.7.1-v1.6.3
[Thu Jan 10 05:54:14 UTC 2019] Upgrading Harbor 1.6.0 to 1.6.3 ...
[Thu Jan 10 05:54:14 UTC 2019] Use harbor-migrator:v1.6.3 for migration
[Thu Jan 10 05:54:14 UTC 2019] Backing up Harbor database
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: C
  MONETARY: C
  NUMERIC:  C
  TIME:     C
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start....LOG:  database system was shut down at 2019-01-10 05:54:28 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
 done
server started
ALTER ROLE


./db/run.sh: ignoring /docker-entrypoint-initdb.d/*


PostgreSQL init process complete; ready for start up.

Performing backup...
TODO: needs to implement backup registry...
Backup performed.
harbor.cfg not found, WITH_ADMIRAL will not be set to true
[Thu Jan 10 05:54:30 UTC 2019] Migrating Harbor database
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: C
  MONETARY: C
  NUMERIC:  C
  TIME:     C
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start....LOG:  database system was shut down at 2019-01-10 05:54:33 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
 done
server started
ALTER ROLE


./db/run.sh: ignoring /docker-entrypoint-initdb.d/*


PostgreSQL init process complete; ready for start up.

Version is not specified. Default version is head.
FATAL:  database "registry" does not exist
Please make sure to mount the correct the data volume.
harbor.cfg not found, WITH_ADMIRAL will not be set to true

Final releases or pre-compiled releases in the base manifest

I wanted to expand on one idea in #4 - final releases and pre-compiled releases, and making sure your base manifest has them embedded.

The desirable goal is for users to get started with just:

bosh -d harbor deploy manifests/harbor.yml

No manual uploading of bosh releases, no figuring out cloud-config stuff. The command above should "just work" for most BOSH users.

To achieve this, the releases: section needs to include final releases or pre-compiled final releases.

You've got some options:

releases:
- name: harbor-container-registry
  version: 1.0.0
  url: git+https://github.com/vmware/harbor-cff-bosh-release

Or using a pre-built tgz, say from bosh.io:

releases:
- name: harbor-container-registry
  version: 1.0.0
  url: https://bosh.io/d/github.com/vmware/harbor-cff-bosh-release?v=1.0.0
  sha1: 05305e3423aed79f0db0adb3e92c9ab426b58557

These two will ensure that bosh deploy automatically uploads v1.0.0 if the target BOSH director does not have it yet.

But the user will then have to wait for each package to be compiled, and each time they bump their stemcell they will have to watch all the packages be compiled again. This isn't a necessary nor a happy experience. It can be avoided for all users with pre-compiled releases.

See https://bosh.io/docs/compiled-releases/

You run bosh export-release after you've created the final release in your CI pipeline, get the sha1, store the tgz on S3, and then update your manifests/harbor.yml with the details.

For example https://github.com/cloudfoundry-community/redis-boshrelease/blob/master/manifests/redis.yml#L54-L57
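Concretely, a compiled-release entry in the releases: section looks something like the following (the URL, sha1, and stemcell values here are hypothetical placeholders):

releases:
- name: harbor-container-registry
  version: 1.0.0
  url: https://example-bucket.s3.amazonaws.com/harbor-container-registry-1.0.0-ubuntu-xenial-170.9.tgz
  sha1: 0000000000000000000000000000000000000000
  stemcell:
    os: ubuntu-xenial
    version: "170.9"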

There are many CI ideas in https://github.com/cloudfoundry-community/redis-boshrelease/tree/master/ci that you might like to borrow to produce final releases (shipit job) and then create compiled releases. e.g. https://ci2.starkandwayne.com/teams/cfcommunity/pipelines/redis-boshrelease

/cc @cppforlife if you've got any other tips for the Harbor team?

Latest harbor job depends on bosh-dns in pre-start script

The latest version of the pre-start.erb script has a line that waits for bosh-dns to be ready (https://github.com/vmware/harbor-boshrelease/blob/master/jobs/harbor/templates/bin/pre-start.erb#L158). Trying to deploy a release built from this code with either the provided deployment_vsphere.yml or the more recently updated deployment_vsphere-uaa.yml ends up with an error:

/var/vcap/jobs/bosh-dns does not exist

The harbor job is also dependent on the harbor-enable-bosh-dns job in its monit file (https://github.com/vmware/harbor-boshrelease/blob/master/jobs/harbor/monit#L6). Is there a way to get this up and running without bosh-dns, or are some manifest changes required to get this dependent job running?

Support HA deployment structure

The harbor boshrelease should support an HA deployment structure; the main ideas:

  • Remove docker-compose out of harbor boshrelease, directly run services on docker via images
  • Split the harbor-app job into functional service jobs, which may include nginx, ui, adminserver, jobservice, systemlog, registry, etc.
  • Add HAProxy and keepalived jobs for internal LB (backlog)
  • Extract database, cache, clair, and notary into external boshrelease

[HA architecture diagram omitted]

Fix "bosh sync-blobs"

Normal users of the release should not need to run scripts/add_blobs.sh - only CI would run a variation of this when a new upstream blob is discovered (e.g. new version of Harbor).

A normal user should just run bosh sync-blobs.

To fix this, as a committer, please run:

scripts/add_blobs.sh # which runs the bosh add-blob commands
bosh upload-blobs
git add config/blobs.yml
git commit -m "added all the blobs"

Then update the README to remove this section on fetching blobs and/or change it to say bosh sync-blobs.

And you're good to go.

example for S3

Could you provide an example of the S3 configuration in the manifest when using harbor-boshrelease?
config: region:((region)), bucket:((bucket) ,accesskey:((accesskey)) ,secretkey:((secretkey))
Is it correct?

jobs:
 - name: harbor
   release: harbor-container-registry
   properties:
     hostname: ((hostname))
     db_password: ((harbor_db_password))
     admin_password: ((harbor_admin_password))
     clair_db_password: ((clair_db_password))
     with_clair: true
     with_notary: true
     #ui_url_protocol: http #default is https
     registry_storage_provider:
       name: s3
       config: region:((region)), bucket:((bucket) ,accesskey:((accesskey)) ,secretkey:((secretkey))
       redirect: false
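Note the unbalanced parentheses in the quoted config line: ((bucket) should be ((bucket)). Assuming the property is a single comma-separated string of key:value pairs (the release's job spec is authoritative on the exact format), a corrected sketch:

     registry_storage_provider:
       name: s3
       config: region:((region)), bucket:((bucket)), accesskey:((accesskey)), secretkey:((secretkey))
       redirect: false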

One base manifest, many operator files

I appreciate that you've shown many different ways to run Harbor with different manifests. I'm writing to share a pattern that is evolving in other BOSH releases.

In the BOSH community, the consistent pattern we're going with is:

  • A manifests folder (some older projects have a templates folder, but manifests is newer and nicer)
  • A manifests/operators folder (some projects have manifests/ops-files)
  • A manifests/harbor.yml base manifest that works with zero user-provided variables (e.g. no static IPs or elastic IPs), contains releases: that reference final releases or ideally compiled releases, and allows the very simple README instructions:
    bosh -d harbor deploy manifests/harbor.yml
    
  • manifests/operators/create.yml - overrides the harbor release with version: create and uri: . to make it easy to run bosh deploy manifests/harbor.yml -o manifests/operators/create.yml to emulate bosh create-release --force && bosh upload-release --rebase && bosh deploy ... (see the sketch after this list)
  • various other operator files to reproduce your existing manifest examples; some of these might be vsphere.yml etc where you're trying to communicate something specific about the target infrastructure; but this isn't common. In your manifests, the only "vsphere" idea I see is the static IP, rather than an elastic IP for other infrastructures?
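A minimal sketch of such a create.yml ops file, in go-patch format; the path expression assumes the base manifest names the release harbor-container-registry, and the url form may vary by CLI version:

- type: replace
  path: /releases/name=harbor-container-registry
  value:
    name: harbor-container-registry
    version: create
    url: file://.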

An example BOSH release project that maintains this pattern is https://github.com/cloudfoundry-community/redis-boshrelease

Thanks for open sourcing the Harbor bosh release!

/cc @cppforlife

wrong version number in manifest harbor.yml

releases:
- name: harbor-container-registry
  version: 1.6.0
  sha1: 5bb63d9b1cac2ae24e601d28c057af7dcb3abe8b
  url: https://storage.googleapis.com/harbor-bosh-releases/harbor-container-registry-1.6.2.tgz

There is a mismatch between the version 1.6.0 and the 1.6.2 tgz.
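Presumably the version field should match the packaged artifact, something like the following (assuming the 1.6.2 tgz is the intended release; the sha1 must then also be the digest of that file):

releases:
- name: harbor-container-registry
  version: 1.6.2
  sha1: <sha1 of the 1.6.2 tgz>
  url: https://storage.googleapis.com/harbor-bosh-releases/harbor-container-registry-1.6.2.tgz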

Cannot start harbor: pre-start script fails due to lack of a Python interpreter (Xenial & Jammy)

Describe the bug

Tried to deploy as described in the manifest directory examples. First tried on ubuntu-jammy, but when that failed, tried again on ubuntu-xenial, which is used in the example manifest.

bosh deployment output:

Task 345 | 23:52:06 | L executing pre-start: harbor-app/f74876ed-744e-4c39-ad43-d58b810133af (0) (canary) (00:03:02)
                    L Error: Action Failed get_task: Task bf645736-e545-4a77-772f-e5e2ca8c636b result: 1 of 4 pre-start scripts failed. Failed Jobs: harbor. Successful Jobs: enable-bosh-dns, bosh-dns, user_add.

Content of pre-start script for harbor job:

/var/vcap/jobs/harbor/bin/pre-start: /var/vcap/packages/harbor-app/prepare: /usr/bin/python: bad interpreter: No such file or directory

There is no /usr/bin/python on the ubuntu-xenial stemcell, nor is there a python binary on the path at all. There is /usr/bin/python3.

Reproduction steps

  1. Deploy as described in the manifests examples
  2. Note failure

Expected behavior

Harbor job should start, and python should be present, or python3 should be used.
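A heavily hedged workaround sketch (not part of this release): shim python before pre-start runs, assuming the prepare script is actually Python 3 compatible:

# e.g. via an os-conf pre-start script, or manually on the VM for triage
ln -sf /usr/bin/python3 /usr/bin/python

The proper fix remains shipping or referencing an interpreter explicitly, as the expected behavior above says.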

Additional context

No response

Harbor database Corruption state

I tried to upgrade a BOSH Harbor deployment from 1.5.2 to 1.7.4. It failed, and I tried to restore back to the old version; now I see it came back as a fresh install without data. I see a db_backup folder inside the /data folder. I need to know how to recover the data first so that we can be healthy on 1.5.2, and then I need help upgrading to the latest version (1.7.4).

drwxr-xr-x 2 10000 10000 4096 Aug 23 2018 ca_download
-rw------- 1 10000 10000 16 Aug 23 2018 secretkey
-rw------- 1 10000 10000 8 Aug 23 2018 defaultalias
drwxr-xr-x 2 999 root 4096 Aug 23 2018 redis
drwxr-xr-x 2 10000 10000 4096 Aug 23 2018 config
drwxr-xr-x 2 10000 10000 4096 Aug 23 2018 psc
drwxr-xr-x 3 10000 10000 4096 Aug 23 2018 registry
drwxr-xr-x 2 root root 4096 Apr 19 01:57 cert
-rw-r--r-- 1 root root 6 Apr 22 17:23 harbor_version
drwxr-xr-x 2 10000 10000 77824 Apr 22 17:29 job_logs
drwx------ 19 999 admin 4096 Apr 22 17:40 clair-db
drwxr-xr-x 6 10000 10000 4096 Apr 22 17:40 notary-db
drwx------ 22 10000 10000 4096 Apr 22 17:40 database
drwxr-xr-x 2 root root 4096 Apr 22 17:59 db_backup

Clair vulnerability db shows as still being configured

@jessehu can you please advise on the below error from the clair.log

exit status 128","output":"Cloning into '.'...\nfatal: unable to update url base from redirection:\n  asked for: https://git.alpinelinux.org/cgit/alpine-secdb/info/refs?service=git-upload-pack\n   redirect: https://github.com/alpinelinux/alpine-secdb\n"} 

Sep 20 15:50:11 172.18.0.1 clair[9114]: {"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:220","Time":"2018-09-20 15:50:11.865898","error":"could not download requested resource","updater name":"alpine"} 

Sep 20 15:50:12 172.18.0.1 clair[9114]: {"Event":"finished fetching","Level":"info","Location":"updater.go:227","Time":"2018-09-20 15:50:12.237223","updater name":"debian"} 

Sep 20 15:50:12 172.18.0.1 clair[9114]: {"Event":"could not branch Ubuntu repository","Level":"error","Location":"ubuntu.go:177","Time":"2018-09-20 15:50:12.791788","error":"exit status 3","output":"bzr: ERROR: Not a branch: \"https://launchpad.net/ubuntu-cve-tracker/\".\n"} 

Sep 20 15:50:12 172.18.0.1 clair[9114]: {"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:220","Time":"2018-09-20 15:50:12.791847","error":"could not download requested resource","updater name":"ubuntu"} 

Sep 20 15:55:03 172.18.0.1 clair[9114]: {"Event":"finished fetching","Level":"info","Location":"updater.go:227","Time":"2018-09-20 15:55:03.913590","updater name":"oracle"} 

We can't schedule updates to the DB; all of these errors are obviously linked.

The VM has internet access...

Thanks

missing endpoint configuration

Required functionality:
Unable to configure the BOSH release to create an endpoint pointing to Docker Hub when a Docker image is not present in the Harbor repository.

harbor standalone bosh release upgrade from 1.5.2-build.8 to 1.7.4-build.42

A year back we deployed Harbor 1.5.2 as a BOSH-managed deployment (not the tile).
Today I am trying to upgrade to 1.7.4 and the deployment is failing. Can you please support?

Task 932374 | 15:21:12 | Preparing deployment: Preparing deployment (00:00:01)
Task 932374 | 15:21:13 | Preparing package compilation: Finding packages to compile (00:00:00)
Task 932374 | 15:21:14 | Updating instance harbor-app: harbor-app/10f30b95-367e-47a1-ae60-c1fa749ab175 (0) (canary) (00:10:10)
L Error: Action Failed get_task: Task ffba813e-df22-4dde-5117-eb2fa4020adc result: Unmounting persistent disk: Running command: 'umount /dev/sdc1', stdout: '', stderr: 'umount: /var/vcap/store/docker: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
': exit status 32
Task 932374 | 15:31:24 | Error: Action Failed get_task: Task ffba813e-df22-4dde-5117-eb2fa4020adc result: Unmounting persistent disk: Running command: 'umount /dev/sdc1', stdout: '', stderr: 'umount: /var/vcap/store/docker: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
': exit status 32

Task 932374 Started Fri Apr 19 15:21:12 UTC 2019
Task 932374 Finished Fri Apr 19 15:31:24 UTC 2019
Task 932374 Duration 00:10:12
Task 932374 error

Changing state:
Expected task '932374' to succeed but state is 'error'

Exit code 1

When I SSH to the VM, I don't see any Harbor services running. I'm trying to understand what changed between these two versions and how we can succeed with a standalone BOSH-managed Harbor.

Disable redirect for S3 storage backend

We ran into an issue while deploying the harbor container registry using s3 as the storage backend. Our S3 buckets are locked down to allow only authorized sts/iam profiles to connect to them. To make it work, we need to disable redirect for the storage backend as documented here https://docs.docker.com/registry/configuration/#redirect

Issue:

While pushing an image using docker push, it fails with an EOF error.

Temporary workaround:

Manually updating the /var/vcap/jobs/harbor/packages/harbor-app/common/templates/registry/config.yml file as shown below and restarting the harbor job allows pushing images to the registry.

storage:
  cache:
    layerinfo: inmemory
  s3:
    region:  *****
    bucket: **********
    encrypt: true
    secure: true
    chunksize: 5242880
    rootdirectory: /harbor-container-registry
  maintenance:
    uploadpurging:
      enabled: false
  delete:
    enabled: true
  redirect:
    disable: true

Is this something that can be implemented by just updating the BOSH release, or does it require changes to the Harbor package itself?
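As a hedged aside: the S3 example earlier on this page sets redirect: false under registry_storage_provider in the deployment manifest. If your release version exposes that property, setting it there would avoid hand-editing config.yml on the VM:

registry_storage_provider:
  name: s3
  redirect: false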

Cannot create a 1.6.2 final release

I'm part of the bosh team and our pipeline to upload the latest harbor-boshrelease to bosh.io is broken as we cannot access the blobstore mentioned in the release.

        "- Downloading blob 'ff4e7286-6182-4632-7565-5eee6a618150' with digest string '2ae476936091a3885db9ff159704dba7e8c70d71':\n    Getting blob from inner blobstore:\n      Getting blob from inner blobstore:\n        Get https://storage.googleapis.com/harbor-bosh-releases/ff4e7286-6182-4632-7565-5eee6a618150: metadata: GCE metadata \"instance/service-accounts/default/token\" not defined\n- Downloading blob 'c3158924-4946-4b3f-5e4f-3b2ec4e8d08f' with digest string 'c29cdfb7cfeed978a012944d099cfff7e5eff752':\n    Getting blob from inner blobstore:\n      Getting blob from inner blobstore:\n        Get https://storage.googleapis.com/harbor-bosh-releases/c3158924-4946-4b3f-5e4f-3b2ec4e8d08f: metadata: GCE metadata \"instance/service-accounts/default/token\" not defined\n- Downloading blob '85054ad5-7291-4bcc-4097-c805095bbdcc' with digest string '494b81134b9ba7a7ef55a05c3b18130365a45642':\n    Getting blob from inner blobstore:\n      Getting blob from inner blobstore:\n        Get https://storage.googleapis.com/harbor-bosh-releases/85054ad5-7291-4bcc-4097-c805095bbdcc: metadata: GCE metadata \"instance/service-accounts/default/token\" not defined\n- Downloading blob 'f00a329c-e841-42c5-62b5-210238e3d40e' with digest string '66f359ee97346395b8a28b2bad11c30acb4f0372':\n    Getting blob from inner blobstore:\n      Getting blob from inner blobstore:\n        Get https://storage.googleapis.com/harbor-bosh-releases/f00a329c-e841-42c5-62b5-210238e3d40e: metadata: GCE metadata \"instance/service-accounts/default/token\" not defined\n- Downloading blob '0aa1cc4e-bf7c-48a8-6517-c27b1551a078' with digest string '686a6bdefc699451c2c050ad432569e9d770d087':\n    Getting blob from inner blobstore:\n      Getting blob from inner blobstore:\n        Get https://storage.googleapis.com/harbor-bosh-releases/0aa1cc4e-bf7c-48a8-6517-c27b1551a078: metadata: GCE metadata \"instance/service-accounts/default/token\" not defined",

If this is a release you wish to publish publicly, please change the blob permissions; otherwise, we can stop publishing updates to this project on bosh.io if updates here should no longer go there.

Create job for haproxy

HAProxy will be treated as an internal LB, working together with keepalived.
This job can refer to the haproxy job in the cf release.

Question about addon

Why do you add your own bosh-dns and syslog instead of the official releases from bosh.io?
On our CF platform we already have them, and they conflict with yours.
I can't remove your add-ons from the Harbor manifest, as the names used are not the same and you have a dependency in your monit file :-(

missing entry in spec file for property "admin_password_for_smoketest"

Hi team,
Trying to deploy Harbor as per the README.md on AWS, with the necessary changes in the cloud config.
The deployment broke down with the following error:
Link property admin_password_for_smoketest in template harbor is not defined in release spec

After investigating, I found that the smoke-test job in the manifest consumes the harbor link provider, which provides a bunch of properties.
https://github.com/vmware/harbor-boshrelease/blob/master/jobs/harbor/spec#L29-L40

An entry for 'admin_password_for_smoketest' was missing in the spec file.

I added it and created a dev release, which resulted in a successful deployment and a passing smoke test. (A sketch of the added entry follows below.)
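A hedged sketch of what that added spec entry might look like (the property name comes from the error message above; the description text is illustrative):

properties:
  admin_password_for_smoketest:
    description: Harbor admin password exposed through the harbor link for the smoke-test errand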

Did anyone else face this issue? If yes, is this the correct solution, or is there another one?

Passing an incorrect certificate in the Harbor installation corrupts the install

Hello,

We tried installing Harbor Bosh release in Enterprise PKS and provided an incorrect certificate.

The timeout for the pre-start script in the Bosh release was set to the default of 60 minutes.

The pre-start script fails at this line: https://github.com/vmware/harbor-boshrelease/blob/master/jobs/harbor/templates/bin/pre-start.erb.sh#L309

You can see that, if the certificate is incorrect, it will keep returning something other than HTTP 200 OK, which means this script stalls the process for 60 minutes without much feedback; even in the logs there is nothing except the echo statement. Also, since the problem is in the pre-start script, monit will not boot up properly.

Possible solutions:

  • avoid doing lots of logic in a pre-start script; these are run directly by BOSH
  • give better error logging on why Harbor is not coming up (even the HTTP code would help a lot; see the sketch after this list)
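A minimal sketch of the second suggestion, assuming the loop probes an HTTPS endpoint with curl (the variable and endpoint names are hypothetical):

# log the HTTP code on every attempt and bail out after a bounded number of retries
for i in $(seq 1 30); do
  code=$(curl -k -s -o /dev/null -w '%{http_code}' "https://${HARBOR_HOSTNAME}/api/systeminfo")
  if [ "$code" = "200" ]; then
    break
  fi
  echo "harbor not ready yet (attempt $i, HTTP $code)" >&2
  sleep 10
done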

Getting an error while deploying the latest BOSH release; see below

21:24:43 | Updating instance harbor-app: harbor-app/54ef00a2-4f54-49e2-9e61-77bbd0940e44 (0) (canary) (00:05:07)
L Error: Action Failed get_task: Task 2b5c7659-8861-4f55-4fff-6886ae81fda6 result: 2 of 2 pre-start scripts failed. Failed Jobs: enable-bosh-dns, harbor.

21:29:50 | Error: Action Failed get_task: Task 2b5c7659-8861-4f55-4fff-6886ae81fda6 result: 2 of 2 pre-start scripts failed. Failed Jobs: enable-bosh-dns, harbor.

Started Tue Jul 24 21:15:24 UTC 2018
Finished Tue Jul 24 21:29:50 UTC 2018
Duration 00:14:26

Task 35 error

Updating deployment:
Expected task '35' to succeed but state is 'error'

Exit code 1

Reduce expectations on cloud-config

As per #4 and #5, the goal of one base manifest is for bosh deploy manifests/harbor.yml -d harbor to "just work".

This is made more likely if the base manifest does not put unexpected requirements on the cloud-config that any normal BOSH/CF/CFCR user wouldn't already have.

From one of your manifests:

instance_groups:
- name: harbor
  vm_type: standard
  persistent_disk_type: 20G
  networks:
  - name: default
    static_ips:
    - 10.112.123.31

Each of these three items can be changed to reduce the chance that a user will have an error when initially deploying:

  • vm_type: can be changed to vm_resources: (see http://bosh.io/docs/manifest-v2/#instance-groups), which delegates picking an instance type to the CPI, rather than requiring you to guess what vm_types exist in a cloud-config
  • persistent_disk_type: can change to persistent_disk: 20480. The CPI has default cloud_properties for each disk. The persistent_disk_type: attribute is only required if the deployer wants their disk to have non-default cloud_properties; leave that to them if they need it.
  • networks: makes a good assumption that all/most cloud-configs have name: default. The one change to make is to remove static_ips from the base manifest; you don't know anything about a user's cloud-config or what static_ips they have allocated. (A combined sketch follows below.)
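Putting the three suggestions together, a hedged sketch of the adjusted instance group (the cpu/ram/disk numbers are illustrative only):

instance_groups:
- name: harbor
  vm_resources:
    cpu: 2
    ram: 4096
    ephemeral_disk_size: 10240
  persistent_disk: 20480
  networks:
  - name: default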

/cc @cppforlife any other pro tips?

Avoid using sed to render config file

There are lots of sed commands used to render config files. These commands cause problems on reconfiguration because the operations cannot be reverted.

For example, in pre-start.erb:

sed -i "s|/data/registry:/storage|$mount_point:/storage|" ${HARBOR_PACKAGE_DIR}/docker-compose.yml
When switching from NFS to local storage, the old mount point is still in docker-compose.yml.

The workaround for this issue is to edit the config file and change back to the original version manually.

In the future, these config files should be rendered from template files (see the sketch below).
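A minimal sketch of the templated alternative, using the same ERB rendering BOSH already applies to job templates (the property name registry_storage_path is hypothetical):

# docker-compose.yml.erb: render the storage mount from a property instead of patching it with sed
volumes:
- <%= p('registry_storage_path', '/data/registry') %>:/storage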

ldap_group_admin_dn doesn't work

Even with ldap_group_admin_dn set to a group, a user from this group is not declared as an admin in Harbor.
Extracts from the OpenLDAP UI and from the Harbor UI (screenshots omitted): note that the user obeyler belongs to the admin group.

Error when trying to deploy Harbor

Task 1264 | 10:09:23 | Preparing deployment: Preparing deployment (00:00:02)

Task 1264 | 10:09:26 | Preparing package compilation: Finding packages to compile (00:00:00)

Task 1264 | 10:09:27 | Updating instance harbor: harbor/005c587f-bf8a-4a8e-ab05-52fef0feae2c (0) (canary) (00:00:06)

                L Error: Action Failed get_task: Task ee5e33ba-e400-4c1c-60ed-dbf5ae88a18b result: Stopping Monitored Services: Stop all services: Running command: 'monit stop -g vcap', stdout: '', stderr: 'monit: Error: Depend service 'harbor-enable-bosh-dns' is not defined in the control file
