
cn's Introduction

cn (ceph-nano)

Ceph, the future of storage

The project

cn is a small program written in Go that helps you interact with the S3 API by providing an S3-compatible REST gateway. The target audience is developers building their applications on Amazon S3. It is also a handy tool to showcase the S3 compatibility of the Ceph Rados Gateway, all brought to you by the power of Ceph and containers. Under the hood, cn runs a Ceph container and exposes a Rados Gateway. For convenience, cn also comes with a set of commands to work with the S3 gateway. Before you ask "why not use s3cmd instead?", you will be happy to read that internally cn uses s3cmd and acts as a wrapper around the most commonly used commands. Also, keep in mind that the CLI is just for convenience; the primary use case is developing your application directly against the S3 API.


Build

You can build cn using make. Make sure dep is installed first:

$ go get github.com/golang/dep/cmd/dep

Then, add ~/go/bin to your $PATH:

$ export PATH=$PATH:~/go/bin

Build cn:

$ make
rm -f cn cn &>/dev/null || true
dep ensure
GOOS=linux GOARCH=amd64 go build -i -ldflags="-X main.version=cea247c-dirty -X main.tag=devel -X main.branch=guits-doc_build" -o cn-devel-cea247c-dirty-linux-amd64 main.go
ln -sf "cn-devel-cea247c-dirty-linux-amd64" cn

Once the build is done, you should have a symlink cn pointing to the binary that just got built:

$ ls -l
total 10692
-rw-rw-r--. 1 guits guits    15292 20 nov.  22:03 ceph-nano-logo-vertical.jpg
drwxrwxr-x. 2 guits guits     4096 20 nov.  22:03 cmd
lrwxrwxrwx. 1 guits guits       34 20 nov.  22:27 cn -> cn-devel-cea247c-dirty-linux-amd64
-rwxrwxr-x. 1 guits guits 10881196 20 nov.  22:27 cn-devel-cea247c-dirty-linux-amd64

Installation

cn relies on Docker, so Docker must be installed on your machine. If you're not running a Linux workstation, you can install Docker for Mac.

Once Docker is installed, you're ready to start. Open your terminal and download the cn binary:

macOS:

curl -L https://github.com/ceph/cn/releases/download/v2.3.1/cn-v2.3.1-darwin-amd64 -o cn && chmod +x cn

Linux amd64:

curl -L https://github.com/ceph/cn/releases/download/v2.3.1/cn-v2.3.1-linux-amd64 -o cn && chmod +x cn

Linux arm64:

curl -L https://github.com/ceph/cn/releases/download/v2.3.1/cn-v2.3.1-linux-arm64 -o cn && chmod +x cn

Test it out

$ ./cn
Ceph Nano - One step S3 in container with Ceph.

                  *(((((((((((((
                (((((((((((((((((((
              ((((((((*     ,(((((((*
             ((((((             ((((((
            *((((,               ,((((/
            ((((,     ((((((/     *((((
            ((((     (((((((((     ((((
            /(((     (((((((((     ((((
             (((.     (((((((     /(((/
              (((                *((((
              .(((              (((((
         ,(((((((*             /(((
          .(((((  ((( (/  //   (((
                 /(((.  /(((((  /(((((
                        .((((/ (/

Usage:
  cn [command]

Available Commands:
  cluster      Interact with a particular Ceph cluster
  s3           Interact with a particular S3 object server
  image        Interact with cn's container image(s)
  version      Print the version of cn
  kube         Outputs cn kubernetes template (cn kube > kube-cn.yml)
  update-check Print cn current and latest version number
  flavors      Interact with flavors
  completion   Generates bash completion scripts

Flags:
  -h, --help   help for cn

Use "cn [command] --help" for more information about a command.

Get started

Start the program with /tmp as the working directory. The initial start might take a few minutes, since the container image needs to be downloaded:

$ ./cn cluster start -d /tmp my-first-cluster
Running ceph-nano...
The container image is not present, pulling it.
This operation can take a few minutes...

Endpoint: http://10.36.116.164:8000
Dashboard: http://10.36.116.164:5001
Access key is: 9ZU1QBYX13KPLXXDDCY2
Secret key is: nthNG1xb7ta5IDKiJKM8626pQitqsalEo0ta7B9E
Working directory: /usr/share/ceph-nano
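At this point any S3 client or SDK can be pointed at the endpoint above. As a minimal sketch (not part of cn), assuming Python 3 and the boto3 package, a quick connectivity check with the endpoint and keys printed above looks like this:

# Minimal sketch: point boto3 at the endpoint printed by 'cn cluster start'.
# Replace the endpoint URL and keys with the values from your own output.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://10.36.116.164:8000",
    aws_access_key_id="9ZU1QBYX13KPLXXDDCY2",
    aws_secret_access_key="nthNG1xb7ta5IDKiJKM8626pQitqsalEo0ta7B9E",
)

print(s3.list_buckets()["Buckets"])  # an empty list on a fresh cluster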

Selecting the cluster flavor

It is possible to select the cluster flavor by using the -f option on the command line:

$ ./cn cluster start mycluster -f huge

The full documentation of flavors can be found here

Your first S3 bucket

Create a bucket with cn:

$ ./cn s3 mb my-first-cluster my-buc
Bucket 's3://my-buc/' created

$ ./cn s3 put my-first-cluster /etc/passwd my-buc
upload: '/tmp/passwd' -> 's3://my-buc/passwd'  [1 of 1]
 5925 of 5925   100% in    1s     4.57 kB/s  done
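Since the primary use case is talking to the S3 API directly, the same two operations can be done from application code. A small sketch with boto3, assuming the endpoint and keys from your cn cluster start output:

# Sketch: the boto3 equivalent of 'cn s3 mb' and 'cn s3 put'.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://10.36.116.164:8000",  # your cluster endpoint
    aws_access_key_id="9ZU1QBYX13KPLXXDDCY2",  # your access key
    aws_secret_access_key="nthNG1xb7ta5IDKiJKM8626pQitqsalEo0ta7B9E",  # your secret key
)

s3.create_bucket(Bucket="my-buc")                  # same as 'cn s3 mb'
s3.upload_file("/etc/passwd", "my-buc", "passwd")  # same as 'cn s3 put'
print([o["Key"] for o in s3.list_objects_v2(Bucket="my-buc").get("Contents", [])])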

Multi-cluster support

cn can manage any number of clusters on your local machine:

$ ./cn cluster ls
+------+---------+-------------------------------------------------------------------------------------+----------------+--------------------------------+---------+
| NAME | STATUS  | IMAGE                                                                               | IMAGE RELEASE  | IMAGE CREATION TIME            | FLAVOR  |
+------+---------+-------------------------------------------------------------------------------------+----------------+--------------------------------+---------+
| d    | running | ceph/daemon:latest                                                                  | master-77e3d8d | 2018-04-05T15:01:40.323603472Z | default |
| b    | running | ceph/daemon@sha256:369867e450ccdea9bcea7f54e97ed8b2cb1a0437fbef658d2d01fce2b8a2c648 | master-5f44af9 | 2018-03-30T21:08:31.117367166Z | medium  |
+------+---------+-------------------------------------------------------------------------------------+----------------+--------------------------------+---------+

List Ceph container images available

cn can list the available Ceph container images; by default the output shows the first 100 images:

$ ./cn image ls
ceph/daemon:latest-mimic
ceph/daemon:latest-luminous
ceph/daemon:latest-master
ceph/daemon:master-0b3eb04-mimic-centos-7
ceph/daemon:master-0b3eb04-luminous-centos-7
ceph/daemon:master-0b3eb04-luminous-opensuse-42.3-x86_64
ceph/daemon:master-0b3eb04-master-centos-7-x86_64
ceph/daemon:master-0b3eb04-luminous-centos-7-x86_64
ceph/daemon:master-0b3eb04-mimic-centos-7-x86_64
[...]

Using images aliases

The image option (-i) supports aliases to simplify the command line. You can list the aliases by running the image show-aliases command as shown below:

$ ./cn image show-aliases
+----------+--------------------------------------------------+
| ALIAS    | IMAGE_NAME                                       |
+----------+--------------------------------------------------+
| mimic    | ceph/daemon:latest-mimic                         |
| luminous | ceph/daemon:latest-luminous                      |
| redhat   | registry.access.redhat.com/rhceph/rhceph-3-rhel7 |
+----------+--------------------------------------------------+

Aliases can be used in place of the full image name, as in the following example:

$ ./cn cluster start mycluster -i mimic

It is also possible to create new aliases, as detailed here.

Enable mgr dashboard

TODO: This is a temporary hack to enable the manager dashboard

Currently cn does not expose a port for the mgr dashboard. It only exposes port 8000 for the S3 API and port 5000 for Sree, the S3 web client. To also expose the mgr dashboard port, we currently have to resort to a few hacks.

This section shows how to manually commit a new image and then run a new container with the desired exposed ports.

Commit a copy of the docker image:

./cn cluster start temp -d /tmp
docker commit ceph-nano-temp ceph-nano
./cn cluster purge temp --yes-i-am-sure

Run the container:

docker run -dt --name cn -p 8080:8080 -p 5000:5000 -p 8000:8000 ceph-nano

Enable dashboard:

(Note: the 'enable dashboard' command will cause the container to exit, so it needs to be started again afterwards; the until loop below takes care of that.)

docker exec cn ceph config set mgr mgr/dashboard/ssl false
docker exec cn ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
docker exec cn ceph config set mgr mgr/dashboard/server_port 8080
docker exec cn ceph mgr module enable dashboard
until docker exec cn ceph; do docker start cn; sleep 1; done # wait for the services to start
docker exec cn ceph dashboard set-login-credentials nano nano

Note that the Object Gateway tab in the dashboard is not enabled yet, so run the following to enable the RGW dashboard:

RGW_USER=$(docker exec cn radosgw-admin user create --uid=rgw --display-name=rgw --system)
RGW_ACCESS=$(echo $RGW_USER | awk '{ for (i=1;i<=NF;++i) if ($i ~ /access_key/) { split($(i+1),a,"\""); print a[2] } }')
RGW_SECRET=$(echo $RGW_USER | awk '{ for (i=1;i<=NF;++i) if ($i ~ /secret_key/) { split($(i+1),a,"\""); print a[2] } }')
docker exec cn ceph dashboard set-rgw-api-access-key "$RGW_ACCESS"
docker exec cn ceph dashboard set-rgw-api-secret-key "$RGW_SECRET"
docker exec cn ceph dashboard set-rgw-api-host 127.0.0.1
docker exec cn ceph dashboard set-rgw-api-port 8000
docker exec cn ceph dashboard set-rgw-api-scheme http
docker exec cn ceph dashboard set-rgw-api-user-id rgw
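The awk pipeline above scrapes the access and secret keys out of the JSON printed by radosgw-admin. As an alternative sketch (assuming python3 is available on the host), the same values can be read with a proper JSON parser; the rgw user id matches the one created above:

# Sketch: read the RGW keys from radosgw-admin's JSON output instead of awk.
import json
import subprocess

out = subprocess.run(
    ["docker", "exec", "cn", "radosgw-admin", "user", "info", "--uid=rgw"],
    capture_output=True, check=True, text=True,
).stdout

keys = json.loads(out)["keys"][0]
print(keys["access_key"], keys["secret_key"])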

The dashboard should now be accessible:

Troubleshooting: verify that your config dump looks like this:

$ docker exec cn ceph config dump
WHO   MASK LEVEL   OPTION                           VALUE                                                        RO 
  mgr      unknown mgr/dashboard/RGW_API_ACCESS_KEY ********************                                         *  
  mgr      unknown mgr/dashboard/RGW_API_HOST       127.0.0.1                                                    *  
  mgr      unknown mgr/dashboard/RGW_API_PORT       8000                                                         *  
  mgr      unknown mgr/dashboard/RGW_API_SCHEME     http                                                         *  
  mgr      unknown mgr/dashboard/RGW_API_SECRET_KEY ****************************************                     *  
  mgr      unknown mgr/dashboard/RGW_API_USER_ID    rgw                                                          *  
  mgr      unknown mgr/dashboard/password           ************************************************************ *  
  mgr      unknown mgr/dashboard/server_addr        0.0.0.0                                                      *  
  mgr      unknown mgr/dashboard/server_port        8080                                                         *  
  mgr      unknown mgr/dashboard/ssl                false                                                        *  
  mgr      unknown mgr/dashboard/username           nano                                                         *  

cn's People

Contributors

dsavineau, guits, guymguym, keithballdotnet, kshithijiyer, leseb, nixpanic


cn's Issues

S3 gateway for cluster is not responding

Hi, I'm just trying to run a cluster with ./cn cluster start -d /tmp my-first-cluster, but I'm getting an error on the S3 gateway. Maybe I need to provide a .s3cfg for it?

./cn cluster start mycluster
2019/05/27 17:33:37 Running cluster mucluster | image ceph/daemon | flavor default {512MB Memory, 1 CPU} ...
2019/05/27 17:34:19 Timeout while trying to reach: http://192.168.198.209:8001
2019/05/27 17:34:19 S3 gateway for cluster mucluster is not responding. Showing S3 logs (if any):
2019-05-27 14:33:51.368 7f80eb611780 0 framework: beast
2019-05-27 14:33:51.368 7f80eb611780 0 framework conf key: endpoint, val: 0.0.0.0:8080
2019-05-27 14:33:51.368 7f80eb611780 0 deferred set uid:gid to 167:167 (ceph:ceph)
2019-05-27 14:33:51.368 7f80eb611780 0 ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable), process radosgw, pid 605
2019-05-27 14:33:51.368 7f80eb611780 0 pidfile_write: ignore empty --pid-file
2019-05-27 14:33:58.931 7f80eb611780 0 starting handler: beast
2019-05-27 14:33:58.941 7f80eb611780 0 set uid:gid to 167:167 (ceph:ceph)
2019-05-27 14:33:58.971 7f80eb611780 1 mgrc service_daemon_register rgw.ceph-nano-mucluster-faa32aebf00b metadata {arch=x86_64,ceph_release=nautilus,ceph_version=ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable),ceph_version_short=14.2.1,cpu=AMD Ryzen 5 1600 Six-Core Processor,distro=centos,distro_description=CentOS Linux 7 (Core),distro_version=7,frontend_config#0=beast endpoint=0.0.0.0:8080,frontend_type#0=beast,hostname=ceph-nano-mucluster-faa32aebf00b,kernel_description=#1 SMP Fri Sep 7 08:20:28 UTC 2018,kernel_version=4.9.125-linuxkit,mem_cgroup_limit=536870912,mem_swap_kb=1048572,mem_total_kb=2027864,num_handles=1,os=Linux,pid=615,zone_id=0e507e80-bf8c-4643-8223-21f92439edf1,zone_name=default,zonegroup_id=7a892790-0207-4099-a3c8-13d6fe69cec1,zonegroup_name=default}
2019/05/27 17:34:19 Please open an issue at: https://github.com/ceph/cn.

problem with image name on latest (multiple latest)

[leseb@tarox ~]$ ./cn-devel-0eb6a70-linux-amd64 cluster ls
+-------+---------+----------------------------------------------------+----------------+--------------------------------+
| NAME  | STATUS  | IMAGE                                              | IMAGE RELEASE  | IMAGE CREATION TIME            |
+-------+---------+----------------------------------------------------+----------------+--------------------------------+
| pouet | running | ceph/daemon:latest-bisdocker.io/ceph/daemon:latest | master-77e3d8d | 2018-04-05T15:01:40.323603472Z |
+-------+---------+----------------------------------------------------+----------------+--------------------------------+

cn cluster status returning error

I can't use cn cluster status to check configuration and access keys. The output is:

$ cn cluster status ceph
panic: runtime error: index out of range

goroutine 1 [running]:
github.com/ceph/cn/cmd.getAwsKey(0xc4203181f0, 0xe, 0xc42045cdf2, 0x4, 0xc42048819a, 0x4)
	/build/erwan/src/github.com/ceph/cn/cmd/utils.go:357 +0x17a
github.com/ceph/cn/cmd.echoInfo(0xc4203181f0, 0xe)
	/build/erwan/src/github.com/ceph/cn/cmd/utils.go:312 +0x137
github.com/ceph/cn/cmd.statusNano(0xc4200d7440, 0xc4202d3660, 0x1, 0x1)
	/build/erwan/src/github.com/ceph/cn/cmd/status.go:28 +0xb9
github.com/spf13/cobra.(*Command).execute(0xc4200d7440, 0xc4202d3640, 0x1, 0x1, 0xc4200d7440, 0xc4202d3640)
	/build/erwan/src/github.com/spf13/cobra/command.go:704 +0x2c6
github.com/spf13/cobra.(*Command).ExecuteC(0xad5740, 0xc4201aff00, 0x1f, 0x1f)
	/build/erwan/src/github.com/spf13/cobra/command.go:785 +0x2e4
github.com/spf13/cobra.(*Command).Execute(0xad5740, 0x1, 0x5)
	/build/erwan/src/github.com/spf13/cobra/command.go:738 +0x2b
github.com/ceph/cn/cmd.Main(0xc4201aff00, 0x1f)
	/build/erwan/src/github.com/ceph/cn/cmd/main.go:105 +0x53
main.main()
	/build/erwan/src/github.com/ceph/cn/main.go:34 +0xd7

add --size option

Add a --size option to the object store when the OSD data is stored in a Docker volume; if nothing is provided, the default remains 10GB.
If we use a dedicated directory, then we should take the whole available size on that filesystem, so we need to get the available size and build BLUESTORE_BLOCK_SIZE accordingly.
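A rough sketch of the dedicated-directory case (hypothetical helper, not cn code): read the free space of the filesystem backing the data directory and use it for BLUESTORE_BLOCK_SIZE:

# Hypothetical sketch: derive BLUESTORE_BLOCK_SIZE from the free space of the
# filesystem that backs the OSD data directory.
import os

def bluestore_block_size(data_dir: str) -> int:
    st = os.statvfs(data_dir)
    return st.f_bavail * st.f_frsize  # bytes available on that filesystem

print(bluestore_block_size("/tmp"))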

Enhance functional tests

  • refactor functional-tests.sh, make it more readable, cosmetic mostly

  • add update command in the functional tests

  • update the README with the last output of ./cn -h

Port 8000 cannot be accessed via a svc on minikube

I was using the ceph/daemon latest tag, following cn kube, for the integration tests here. The idea is to access port 8000 through a svc created on minikube, which can then be reached from the host machine outside the minikube VM. It used to work, but the recent update has broken access to the container's port for some reason (http://192.168.39.193:32194).

./minikube service list
|-------------|--------------|-----------------------------|
|  NAMESPACE  |     NAME     |             URL             |
|-------------|--------------|-----------------------------|
| default     | kubernetes   | No node port                |
| kube-system | kube-dns     | No node port                |
| spark       | ceph-nano-s3 | http://192.168.39.193:32194 |
|-------------|--------------|-----------------------------|

$ ./kubectl get svc -n spark
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
ceph-nano-s3   NodePort   10.98.131.126   <none>        8000:32194/TCP   11m

Minikube is running with kvm2. This used to work with latest, and now I am falling back to ceph/daemon:v4.0.0-stable-4.0-master-centos-7-x86_64. Maybe I should use the stable version? Thoughts?

ARM support

We just need to add GOARCH=arm64 and loop over arm64 and amd64 in the Makefile.

Configurable resources

I'm trying to build a dev environment to reproduce an issue we are seeing in production. The problem is that ceph-nano tops out at 512MB of RAM, and an OOM error just hoses the container.

Expose 6789 port

You could also expose port 6789/tcp to allow interacting with the underlying monitor. That would be very useful.

Add debug/verbose flags for commands

I'm trying to debug a connection refused issue with my Python code, and I tested some bucket operations with the cn s3 command. However, having a way to add --verbose or --debug to check the API interactions would make it easier to troubleshoot the connection issue.

cn start cluster returns error on Mac

Starting a cluster on Mac hangs with minikube. The docker container starts, but the IP used is the VM IP, not the host IP. The minikube IP is 192.168.64.28 on my laptop; 192.168.2.2 is the host IP.

 $ ./cn cluster start mycluster -f huge
Using /Users/<name>/workspace/go/src/github.com/ceph/cn/cn.toml as configuration file
2019/12/09 21:21:00 Cluster mycluster is already running!
2019/12/09 21:21:20 Timeout while trying to reach: http://192.168.2.2:8000
2019/12/09 21:21:20 S3 gateway for cluster mycluster is not responding. Showing S3 logs (if any):
^cat: /var/log/ceph/client.rgw.ceph-nano-mycluster-faa32aebf00b.log: No such file or directory
2019/12/09 21:21:20 Please open an issue at: https://github.com/ceph/cn.

cleaner output for status

A couple of things to refactor:

  • showing ceph's status is not really important, you can always cn cluster enter if you want to check ceph's status and more
  • rename things to simplify and clean the output

So ideally the output will look like:

$ cn cluster status cn

Endpoint: http://10.36.116.211:8000
Dashboard: http://10.36.116.211:5000
Access key: 1R9D9CVFQ9ER064Z7FHT
Secret key: BOFRIHNmTv8ZZPwRBxwZMo0qi5dyZUurKukeJJh0
Working directory: /usr/share/ceph-nano

Buckets created with boto cannot be listed

example.py

#!/usr/bin/env python

access_key = 'xxx' # Add your S3 Access Key here
secret_key = 'xxx' # Add your S3 Secret Key here
bucket_name = "foobar"

ceph_host = '192.168.7.27' # Add your rgw0 Ceph host here
ceph_port = 8000

import boto
import boto.s3.connection

conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        host = ceph_host,
        port = ceph_port,
        is_secure=False,
        calling_format = boto.s3.connection.OrdinaryCallingFormat()
        )

bucket = conn.create_bucket("broken")

print "Bucket {} created!".format("broken")

from boto.s3.key import Key

object_key = "spark-test"
object_value = "./foo"

bucket = conn.get_bucket("broken")

k = Key(bucket)
k.key = object_key
k.set_contents_from_filename(object_value)

print "Object {} added in bucket {}!".format(object_key, "broken")

This doesn't work -

kyle@kyle-mini ~ $ > ./cn s3 ls my-first-cluster s3://broken
ERROR: S3 error: 403 (SignatureDoesNotMatch)

This does work -

kyle@kyle-mini ~ $ > ./cn s3 la my-first-cluster
2018-06-21 15:52         4   s3://broken/spark-test

Using the Sree web interface, I get a "Failed to list objects: Network Failure" error for this bucket.

Purge fails on a stopped cluster

erwan@mr-meeseeks:~$ sudo /cn-devel-038a38f-dirty-linux-amd64 cluster stop plap
Stopping cluster plap...
erwan@mr-meeseeks:~$ sudo /cn-devel-038a38f-dirty-linux-amd64 cluster stop plp^C
erwan@mr-meeseeks:~$ sudo ~/cn-devel-038a38f-dirty-linux-amd64 cluster purge plap
Purge option is too dangerous please set the right flag.

Purge object storage server. DANGEROUS!

Usage:
cn cluster purge [flags]

Flags:
--yes-i-am-sure YES I know what I'm doing and I want to purge
--all This also deletes the container image
--help help for purge
erwan@mr-meeseeks:~$ sudo /cn-devel-038a38f-dirty-linux-amd64 cluster purge plap --yes-i-am-sure
Cluster plap does not exist yet.
erwan@mr-meeseeks:~$ sudo /cn-devel-038a38f-dirty-linux-amd64 cluster purge plop --yes-i-am-sure
Purging cluster plop...
erwan@mr-meeseeks:~$ sudo /cn-devel-038a38f-dirty-linux-amd64 cluster purge plap --yes-i-am-sure
Cluster plap does not exist yet.
erwan@mr-meeseeks:~$ sudo ~/cn-devel-038a38f-dirty-linux-amd64 cluster ps
Interact with a particular Ceph cluster

Usage:
cn cluster [command]

Available Commands:
ls Print the list of Ceph cluster(s)
start Start object storage server
status Stat object storage server
stop Stop object storage server
restart Restart object storage server
logs Print object storage server logs
purge Purge object storage server. DANGEROUS!

Flags:
-h, --help help for cluster

Use "cn cluster [command] --help" for more information about a command.
erwan@mr-meeseeks:~$ sudo /cn-devel-038a38f-dirty-linux-amd64 cluster ls
NAME STATUS IMAGE IMAGE RELEASE IMAGE CREATED
plap exited ceph/daemon:latest master-2302241 2018-04-11T13:11:44.267489958Z
erwan@mr-meeseeks:~$

luminous unsupported rgw backend beast(by default)

version: v2.3.1

ERR:

++/opt/ceph-container/bin/demo.sh:172: bootstrap_rgw(): RGW_FRONTEND_TYPE=beast
++/opt/ceph-container/bin/demo.sh:173: bootstrap_rgw(): log 'ERROR: unsupported rgw backend type beast for your Ceph release luminous, use at least the Mimic version.'
++/opt/ceph-container/bin/common_functions.sh:7: log(): '[' -z 'ERROR: unsupported rgw backend type beast for your Ceph release luminous, use at least the Mimic version.' ']'

scripts here explain it: /opt/ceph-container/bin/demo.sh

  if [[ "$RGW_FRONTEND_TYPE" == "beast" ]]; then
    if [[ "$CEPH_VERSION" == "luminous" ]]; then
      RGW_FRONTEND_TYPE=beast
      log "ERROR: unsupported rgw backend type $RGW_FRONTEND_TYPE for your Ceph release $CEPH_VERSION, use at least the Mimic version."
      exit 1
    fi
  fi

Tip: support environment variables to set the RGW backend freely, not just fixed values.

Port mapping issue in docker container

I installed the latest version v2.3.0 from the binary. Installation went fine, but after creating a cluster it is not possible to access it from the host.
The command cn cluster status my-cluster times out.

The reason is the wrong port mapping in docker:

$ docker ps
NAMES         PORTS                                                                   STATUS
ceph-nano-my-cluster   0.0.0.0:5000->5000/tcp, 0.0.0.0:8000->8000/tcp   Up 10 minutes

The cluster service is running on 8080 inside the container, so it can never be reached from the host.
I changed the port mapping configuration on the docker container to 0.0.0.0:8000->8080/tcp and it solved the problem.

Cannot overwrite existing file in bucket.

I try to upload data into my bucket, but later I cannot overwrite it. It only gets fixed if I delete the file manually, directly from the working directory, which means I cannot overwrite my files remotely. The working directory has 777 mode, so it should be possible to do anything there.

Use native S3 calls

Currently cn wraps s3cmd. This works, but a nice enhancement would be to make native S3 API calls instead.
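cn itself is written in Go, so the real change would go through a Go S3 SDK; purely to illustrate the difference between wrapping s3cmd and calling the API natively, here is a sketch (endpoint, keys, and bucket name are placeholders):

# Illustration only: shelling out to s3cmd vs. a native S3 API call.
import subprocess
import boto3

# Wrapper approach (what cn does today, via s3cmd):
subprocess.run(["s3cmd", "mb", "s3://my-buc"], check=True)

# Native approach (the proposed enhancement, via an S3 SDK):
s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:8000",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",         # placeholder
    aws_secret_access_key="SECRET_KEY",     # placeholder
)
s3.create_bucket(Bucket="my-buc")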

drop selinux code in favour of `:z`?

Currently, before starting the container, we look at the SELinux status with:

cn/cmd/utils.go

Lines 56 to 96 in 249e381

// getSeLinuxStatus gets SeLinux status
func getSeLinuxStatus() string {
    testBinaryExist("getenforce")
    out, err := exec.Command("getenforce").Output()
    if err != nil {
        log.Fatal(err)
    }
    return string(out)
}

// applySeLinuxLabel checks if SeLinux is installed and set to Enforcing,
// we relabel our workingDirectory to allow the container to access files in this directory
func applySeLinuxLabel(dir string) {
    testBinaryExist("getenforce")
    selinuxStatus := getSeLinuxStatus()
    lines := strings.Split(selinuxStatus, "\n")
    for _, l := range lines {
        if len(l) <= 0 {
            // Ignore empty line.
            continue
        }
        if l == "Enforcing" {
            meUserName, meID := whoAmI()
            if meID != "0" {
                log.Fatal("Hey " + meUserName + "! Run me as 'root' so I can apply the right SeLinux label on " + dir)
            }
            if _, err := os.Stat(dir); os.IsNotExist(err) {
                os.Mkdir(dir, 0755)
            }
            testBinaryExist("chcon")
            cmd := "chcon " + " -Rt" + " svirt_sandbox_file_t " + dir
            _, err := exec.Command("chcon", "-Rt", "svirt_sandbox_file_t", dir).Output()
            if err != nil {
                log.Fatal(err)
            }
            log.Println("Executing: " + cmd)
        }
    }
}
This is used when a dedicated directory is given for the OSD data store:

cn/cmd/start.go

Line 185 in 249e381

applySeLinuxLabel(getUnderlyingStorage(flavor))

The docker CLI has a :z option when bind-mounting a directory into a container (see https://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/). We need to find the :z equivalent in the Docker API and use it instead of calling raw commands.
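For illustration only (the real fix would use the Go Docker API client that cn already depends on), a bind mount can carry the z option through an SDK the same way the CLI's :z does; this sketch uses the Python Docker SDK and a made-up host directory:

# Illustration: let Docker relabel the bind mount with the "z" option
# instead of shelling out to chcon. The host directory is made up.
import docker

client = docker.from_env()
client.containers.run(
    "ceph/daemon",
    detach=True,
    volumes={"/srv/ceph-nano": {"bind": "/var/lib/ceph", "mode": "z"}},
)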

gpt partition or not when giving ceph-nano a device?

changed ownership of '/var/lib/ceph/osd/ceph-0' from root:root to ceph:ceph
++/opt/ceph-container/bin/demo.sh:112: bootstrap_osd(): [[ -n /dev/sda ]]
++/opt/ceph-container/bin/demo.sh:113: bootstrap_osd(): ceph-volume lvm prepare --data /dev/sda
usage: ceph-volume lvm prepare [-h] --data DATA [--data-size DATA_SIZE]
                               [--data-slots DATA_SLOTS] [--filestore]
                               [--journal JOURNAL]
                               [--journal-size JOURNAL_SIZE] [--bluestore]
                               [--block.db BLOCK_DB]
                               [--block.db-size BLOCK_DB_SIZE]
                               [--block.db-slots BLOCK_DB_SLOTS]
                               [--block.wal BLOCK_WAL]
                               [--block.wal-size BLOCK_WAL_SIZE]
                               [--block.wal-slots BLOCK_WAL_SLOTS]
                               [--osd-id OSD_ID] [--osd-fsid OSD_FSID]
                               [--cluster-fsid CLUSTER_FSID]
                               [--crush-device-class CRUSH_DEVICE_CLASS]
                               [--dmcrypt] [--no-systemd]
ceph-volume lvm prepare: error: GPT headers found, they must be removed on: /dev/sda

2020/05/05 11:32:46 Please open an issue at: https://github.com/ceph/cn with the logs above.

ok, "GPT must be removed", doing that.

# sgdisk -Z /dev/sda
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
# dd if=/dev/zero of=/dev/sda bs=1M count=4096

$ cn cluster start ... -b /dev/sda 

If you gave me a whole device, make sure it has a partition table (e.g: gpt). 
If you gave me a partition, I don't support partitions yet, give me a whole device.
As an alternative, you can create a filesystem on this partition and give the mountpoint to me.

Also if the disk was an OSD you need to zap it (e.g: with 'ceph-disk zap').

Issue spinning up latest-jewel cluster

Here is the log output from trying to spin up a new jewel cluster using cn and -i to specify the image. This is a separate (empty) working dir for this execution so there should be no other files present.

This works if I specify luminous for everything.

ceph-nano version v1.4.0 (HEAD/1b3d996)

bash-4.4# cn cluster start -i ceph/daemon:latest-jewel jewel -d /tmp/cn_jewel
2018/05/01 09:14:14 Running cluster jewel...
2018/05/01 09:15:15 The container ceph-nano-jewel never reached a clean state. Showing the container logs now:
2018-05-01 13:13:30 /entrypoint.sh: VERBOSE: activating bash debugging mode.
2018-05-01 13:13:30 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:
2018-05-01 13:13:30 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'
2018-05-01 13:13:30 /entrypoint.sh: This container environement variables are: CEPH_DEMO_UID=nano
HOSTNAME=ceph-nano-jewel-faa32aebf00b
RGW_CIVETWEB_PORT=8000
LC_ALL=C
DEMO_DAEMONS=mon,mgr,osd,rgw
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NETWORK_AUTO_DETECT=4
PWD=/
CEPH_VERSION=jewel
SHLVL=1

HOME=/root
CEPH_POINT_RELEASE=
CEPH_DAEMON=demo
DEBUG=verbose
_=/usr/bin/env
ownership of '/var/run/ceph/' retained as ceph:ceph
ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph
ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph
changed ownership of '/var/lib/ceph/radosgw' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-nano-jewel-faa32aebf00b' from root:root to ceph:ceph
ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph
ownership of '/var/lib/ceph/osd' retained as ceph:ceph
ownership of '/var/lib/ceph/mds' retained as ceph:ceph
changed ownership of '/var/lib/ceph/mds/ceph-ceph-nano-jewel-faa32aebf00b' from root:root to ceph:ceph
ownership of '/var/lib/ceph/tmp' retained as ceph:ceph
changed ownership of '/var/lib/ceph/tmp/tmp.NqzHbGWrrC' from root:root to ceph:ceph
ownership of '/var/lib/ceph/mon' retained as ceph:ceph
changed ownership of '/var/lib/ceph/mon/ceph-ceph-nano-jewel-faa32aebf00b' from root:root to ceph:ceph
2018-05-01 13:13:30 /entrypoint.sh: I can see existing Ceph files, please remove them!
2018-05-01 13:13:30 /entrypoint.sh: To run the demo container, remove the content of /var/lib/ceph/ and /etc/ceph/
2018-05-01 13:13:30 /entrypoint.sh: Before doing this, make sure you are removing any sensitive data.

2018/05/01 09:15:15 Please open an issue at: https://github.com/ceph/cn with the logs above.

issue requested after cn cluster start

root@ceph-1:~# cn cluster start -d /tmp my-first-cluster
The container image (ceph/daemon) is not present, pulling it. 
This operation can take a few minutes.
...............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
2019/02/22 12:21:39 Running cluster my-first-cluster | image ceph/daemon | flavor default {512MB Memory, 1 CPU} ...
2019/02/22 12:22:40 The container ceph-nano-my-first-cluster never reached a clean state. Showing the container logs now:
2019-02-22 11:21:40  /opt/ceph-container/bin/entrypoint.sh: VERBOSE: activating bash debugging mode.
2019-02-22 11:21:40  /opt/ceph-container/bin/entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:
2019-02-22 11:21:40  /opt/ceph-container/bin/entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'
2019-02-22 11:21:40  /opt/ceph-container/bin/entrypoint.sh: This container environement variables are: CEPH_DEMO_UID=nano
HOSTNAME=ceph-nano-my-first-cluster-faa32aebf00b
MON_IP=127.0.0.1
RGW_CIVETWEB_PORT=8000
LC_ALL=C
DEMO_DAEMONS=mon,mgr,osd,rgw
EXPOSED_IP=192.168.101.211
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
CEPH_VERSION=mimic
SHLVL=1
HOME=/root
CEPH_POINT_RELEASE=
SREE_PORT=5000
SREE_VERSION=v0.1
CEPH_DAEMON=demo
CEPH_PUBLIC_NETWORK=0.0.0.0/0
DEBUG=verbose
_=/usr/bin/env
+/opt/ceph-container/bin/entrypoint.sh:19: case "$KV_TYPE" in
+/opt/ceph-container/bin/entrypoint.sh:29: source /opt/ceph-container/bin/config.static.sh
++/opt/ceph-container/bin/config.static.sh:2: set -e
++/opt/ceph-container/bin/entrypoint.sh:39: to_lowercase demo
++/opt/ceph-container/bin/common_functions.sh:189: to_lowercase(): echo demo
+/opt/ceph-container/bin/entrypoint.sh:39: CEPH_DAEMON=demo
+/opt/ceph-container/bin/entrypoint.sh:41: create_mandatory_directories
+/opt/ceph-container/bin/common_functions.sh:64: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'
++/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring
+/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd
+/opt/ceph-container/bin/common_functions.sh:64: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'
++/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring
+/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds
+/opt/ceph-container/bin/common_functions.sh:64: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'
++/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring
+/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw
+/opt/ceph-container/bin/common_functions.sh:64: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'
++/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring
+/opt/ceph-container/bin/common_functions.sh:65: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd
+/opt/ceph-container/bin/common_functions.sh:69: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr
+/opt/ceph-container/bin/common_functions.sh:70: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon
+/opt/ceph-container/bin/common_functions.sh:69: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr
+/opt/ceph-container/bin/common_functions.sh:70: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd
+/opt/ceph-container/bin/common_functions.sh:69: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr
+/opt/ceph-container/bin/common_functions.sh:70: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds
+/opt/ceph-container/bin/common_functions.sh:69: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr
+/opt/ceph-container/bin/common_functions.sh:70: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw
+/opt/ceph-container/bin/common_functions.sh:69: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr
+/opt/ceph-container/bin/common_functions.sh:70: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp
+/opt/ceph-container/bin/common_functions.sh:69: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr
+/opt/ceph-container/bin/common_functions.sh:70: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr
+/opt/ceph-container/bin/common_functions.sh:74: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-nano-my-first-cluster-faa32aebf00b
+/opt/ceph-container/bin/common_functions.sh:77: create_mandatory_directories(): mkdir -p /var/run/ceph
+/opt/ceph-container/bin/common_functions.sh:80: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-nano-my-first-cluster-faa32aebf00b
+/opt/ceph-container/bin/common_functions.sh:83: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-nano-my-first-cluster-faa32aebf00b
+/opt/ceph-container/bin/common_functions.sh:86: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-nano-my-first-cluster-faa32aebf00b
+/opt/ceph-container/bin/common_functions.sh:89: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/
ownership of '/var/run/ceph/' retained as ceph:ceph
+/opt/ceph-container/bin/common_functions.sh:90: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'
ownership of '/var/lib/ceph/tmp' retained as ceph:ceph
changed ownership of '/var/lib/ceph/tmp/tmp.ZF5B752DWR' from root:root to ceph:ceph
ownership of '/var/lib/ceph/bootstrap-mgr' retained as ceph:ceph
ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph
ownership of '/var/lib/ceph/mds' retained as ceph:ceph
changed ownership of '/var/lib/ceph/mds/ceph-ceph-nano-my-first-cluster-faa32aebf00b' from root:root to ceph:ceph
ownership of '/var/lib/ceph/mon' retained as ceph:ceph
changed ownership of '/var/lib/ceph/mon/ceph-ceph-nano-my-first-cluster-faa32aebf00b' from root:root to ceph:ceph
ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph
ownership of '/var/lib/ceph/mgr' retained as ceph:ceph
changed ownership of '/var/lib/ceph/mgr/ceph-ceph-nano-my-first-cluster-faa32aebf00b' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/radosgw' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-nano-my-first-cluster-faa32aebf00b' from root:root to ceph:ceph
ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph
ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph
ownership of '/var/lib/ceph/osd' retained as ceph:ceph
+/opt/ceph-container/bin/entrypoint.sh:43: [[ ! x86_64 aarch64 =~ '' ]]
+/opt/ceph-container/bin/entrypoint.sh:50: case "$CEPH_DAEMON" in
+/opt/ceph-container/bin/entrypoint.sh:165: source /opt/ceph-container/bin/demo.sh
++/opt/ceph-container/bin/demo.sh:2: set -e
++/opt/ceph-container/bin/demo.sh:4: unset 'DAEMON_OPTS[7-1]'
++/opt/ceph-container/bin/demo.sh:5: OSD_ID=0
++/opt/ceph-container/bin/demo.sh:6: : /var/lib/ceph/osd/ceph-0
++/opt/ceph-container/bin/demo.sh:8: [[ mimic == \l\u\m\i\n\o\u\s ]]
++/opt/ceph-container/bin/demo.sh:11: MDS_NAME=demo
++/opt/ceph-container/bin/demo.sh:13: MDS_PATH=/var/lib/ceph/mds/ceph-demo
++/opt/ceph-container/bin/demo.sh:14: RGW_PATH=/var/lib/ceph/radosgw/ceph-rgw.ceph-nano-my-first-cluster-faa32aebf00b
++/opt/ceph-container/bin/demo.sh:16: MGR_PATH=/var/lib/ceph/mgr/ceph-ceph-nano-my-first-cluster-faa32aebf00b
++/opt/ceph-container/bin/demo.sh:17: RESTAPI_IP=127.0.0.1
++/opt/ceph-container/bin/demo.sh:18: MGR_IP=127.0.0.1
++/opt/ceph-container/bin/demo.sh:19: : mon,mgr,osd,rgw
++/opt/ceph-container/bin/demo.sh:20: : true
++/opt/ceph-container/bin/demo.sh:21: : 1
++/opt/ceph-container/bin/demo.sh:22: : 32
++/opt/ceph-container/bin/demo.sh:23: : 1
++/opt/ceph-container/bin/demo.sh:24: : 1
++/opt/ceph-container/bin/demo.sh:25: : 192.168.101.211
++/opt/ceph-container/bin/demo.sh:26: : 5000
++/opt/ceph-container/bin/demo.sh:29: : 0.0.0.0
++/opt/ceph-container/bin/demo.sh:30: : 8000
++/opt/ceph-container/bin/demo.sh:31: : 0.0.0.0
++/opt/ceph-container/bin/demo.sh:32: : 8000
++/opt/ceph-container/bin/demo.sh:33: : civetweb
++/opt/ceph-container/bin/demo.sh:35: : rbd
++/opt/ceph-container/bin/demo.sh:38: RGW_CIVETWEB_OPTIONS=' port=0.0.0.0:8000'
++/opt/ceph-container/bin/demo.sh:39: RGW_BEAST_OPTIONS=' endpoint=0.0.0.0:8000'
++/opt/ceph-container/bin/demo.sh:41: [[ civetweb == \c\i\v\e\t\w\e\b ]]
++/opt/ceph-container/bin/demo.sh:42: RGW_FRONTED_OPTIONS=' port=0.0.0.0:8000'
++/opt/ceph-container/bin/demo.sh:50: : 'civetweb  port=0.0.0.0:8000'
++/opt/ceph-container/bin/demo.sh:52: [[ civetweb == \b\e\a\s\t ]]
++/opt/ceph-container/bin/demo.sh:406: detect_ceph_files
++/opt/ceph-container/bin/common_functions.sh:348: detect_ceph_files(): '[' -f /etc/ceph/I_AM_A_DEMO ']'
++/opt/ceph-container/bin/common_functions.sh:348: detect_ceph_files(): '[' -f /var/lib/ceph/I_AM_A_DEMO ']'
++/opt/ceph-container/bin/common_functions.sh:353: detect_ceph_files(): '[' -d /var/lib/ceph ']'
+++/opt/ceph-container/bin/common_functions.sh:355: detect_ceph_files(): wc -l
+++/opt/ceph-container/bin/common_functions.sh:355: detect_ceph_files(): find /var/lib/ceph/ -mindepth 3 -maxdepth 3 -type f
++/opt/ceph-container/bin/common_functions.sh:355: detect_ceph_files(): [[ 0 != 0 ]]
+++/opt/ceph-container/bin/common_functions.sh:355: detect_ceph_files(): wc -l
+++/opt/ceph-container/bin/common_functions.sh:355: detect_ceph_files(): find /etc/ceph -mindepth 1 -type f
++/opt/ceph-container/bin/common_functions.sh:355: detect_ceph_files(): [[ 1 -gt 1 ]]
++/opt/ceph-container/bin/demo.sh:407: build_bootstrap
++/opt/ceph-container/bin/demo.sh:345: build_bootstrap(): bootstrap_mon
++/opt/ceph-container/bin/demo.sh:65: bootstrap_mon(): [[ mimic != \l\u\m\i\n\o\u\s ]]
++/opt/ceph-container/bin/demo.sh:65: bootstrap_mon(): [[ mimic != \m\i\m\i\c ]]
++/opt/ceph-container/bin/demo.sh:69: bootstrap_mon(): source /opt/ceph-container/bin/start_mon.sh
+++/opt/ceph-container/bin/start_mon.sh:2: source(): set -e
+++/opt/ceph-container/bin/start_mon.sh:4: source(): IPV4_REGEXP='[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'
+++/opt/ceph-container/bin/start_mon.sh:5: source(): IPV4_NETWORK_REGEXP='[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/[0-9]\{1,2\}'
++/opt/ceph-container/bin/demo.sh:70: bootstrap_mon(): start_mon
+++/opt/ceph-container/bin/start_mon.sh:81: start_mon(): get_available_ram
+++/opt/ceph-container/bin/common_functions.sh:519: get_available_ram(): limit_in_bytes=/sys/fs/cgroup/memory/memory.limit_in_bytes
++++/opt/ceph-container/bin/common_functions.sh:520: get_available_ram(): cat /sys/fs/cgroup/memory/memory.limit_in_bytes
+++/opt/ceph-container/bin/common_functions.sh:520: get_available_ram(): memory_limit=536870912
+++/opt/ceph-container/bin/common_functions.sh:522: get_available_ram(): '[' 536870912 = 9223372036854771712 ']'
++++/opt/ceph-container/bin/common_functions.sh:527: get_available_ram(): cat /sys/fs/cgroup/memory/memory.usage_in_bytes
+++/opt/ceph-container/bin/common_functions.sh:527: get_available_ram(): current_usage=8597504
+++/opt/ceph-container/bin/common_functions.sh:528: get_available_ram(): echo 528273408
++/opt/ceph-container/bin/start_mon.sh:81: start_mon(): available_memory=528273408
++/opt/ceph-container/bin/start_mon.sh:83: start_mon(): [[ 0 -eq 0 ]]
++/opt/ceph-container/bin/start_mon.sh:84: start_mon(): [[ -z 0.0.0.0/0 ]]
++/opt/ceph-container/bin/start_mon.sh:89: start_mon(): [[ -z 127.0.0.1 ]]
++/opt/ceph-container/bin/start_mon.sh:121: start_mon(): [[ -z 127.0.0.1 ]]
++/opt/ceph-container/bin/start_mon.sh:121: start_mon(): [[ -z 0.0.0.0/0 ]]
++/opt/ceph-container/bin/start_mon.sh:127: start_mon(): '[' '!' -e /var/lib/ceph/mon/ceph-ceph-nano-my-first-cluster-faa32aebf00b/keyring ']'
++/opt/ceph-container/bin/start_mon.sh:128: start_mon(): get_mon_config
++/opt/ceph-container/bin/config.static.sh:15: get_mon_config(): IP_LEVEL=4
++/opt/ceph-container/bin/config.static.sh:17: get_mon_config(): '[' '!' -e /etc/ceph/ceph.conf ']'
++/opt/ceph-container/bin/config.static.sh:18: get_mon_config(): local fsid
+++/opt/ceph-container/bin/config.static.sh:19: get_mon_config(): uuidgen
++/opt/ceph-container/bin/config.static.sh:19: get_mon_config(): fsid=11fd5274-4a96-4ac3-b242-faa9f2d477f0
++/opt/ceph-container/bin/config.static.sh:20: get_mon_config(): [[ demo == demo ]]
+++/opt/ceph-container/bin/config.static.sh:21: get_mon_config(): uuidgen
++/opt/ceph-container/bin/config.static.sh:21: get_mon_config(): fsid=102f7e7a-ba69-4b80-b019-da7033ec2a00
++/opt/ceph-container/bin/config.static.sh:22: get_mon_config(): cat
+++/opt/ceph-container/bin/config.static.sh:36: get_mon_config(): findmnt -n -o FSTYPE -T /var/lib/ceph
++/opt/ceph-container/bin/config.static.sh:36: get_mon_config(): '[' ext4 = ext4 ']'
++/opt/ceph-container/bin/config.static.sh:37: get_mon_config(): cat
++/opt/ceph-container/bin/config.static.sh:54: get_mon_config(): '[' 4 -eq 6 ']'
++/opt/ceph-container/bin/config.static.sh:62: get_mon_config(): CLI+=("--set-uid=0")
++/opt/ceph-container/bin/config.static.sh:64: get_mon_config(): '[' '!' -e /etc/ceph/ceph.client.admin.keyring ']'
++/opt/ceph-container/bin/config.static.sh:65: get_mon_config(): '[' -z '' ']'
++/opt/ceph-container/bin/config.static.sh:67: get_mon_config(): CLI+=(--gen-key)
++/opt/ceph-container/bin/config.static.sh:72: get_mon_config(): ceph-authtool /etc/ceph/ceph.client.admin.keyring --create-keyring -n client.admin --set-uid=0 --gen-key --cap mon 'allow *' --cap osd 'allow *' --cap mds allow --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring
++/opt/ceph-container/bin/config.static.sh:75: get_mon_config(): '[' '!' -e /etc/ceph/ceph.mon.keyring ']'
++/opt/ceph-container/bin/config.static.sh:77: get_mon_config(): ceph-authtool /etc/ceph/ceph.mon.keyring --create-keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring
++/opt/ceph-container/bin/config.static.sh:80: get_mon_config(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'
++/opt/ceph-container/bin/config.static.sh:82: get_mon_config(): ceph-authtool /var/lib/ceph/bootstrap-osd/ceph.keyring --create-keyring --gen-key -n client.bootstrap-osd --cap mon 'allow profile bootstrap-osd'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
++/opt/ceph-container/bin/config.static.sh:85: get_mon_config(): '[' '!' -e /var/lib/ceph/bootstrap-mds/ceph.keyring ']'
++/opt/ceph-container/bin/config.static.sh:87: get_mon_config(): ceph-authtool /var/lib/ceph/bootstrap-mds/ceph.keyring --create-keyring --gen-key -n client.bootstrap-mds --cap mon 'allow profile bootstrap-mds'
creating /var/lib/ceph/bootstrap-mds/ceph.keyring
++/opt/ceph-container/bin/config.static.sh:90: get_mon_config(): '[' '!' -e /var/lib/ceph/bootstrap-rgw/ceph.keyring ']'
++/opt/ceph-container/bin/config.static.sh:92: get_mon_config(): ceph-authtool /var/lib/ceph/bootstrap-rgw/ceph.keyring --create-keyring --gen-key -n client.bootstrap-rgw --cap mon 'allow profile bootstrap-rgw'
creating /var/lib/ceph/bootstrap-rgw/ceph.keyring
++/opt/ceph-container/bin/config.static.sh:95: get_mon_config(): '[' '!' -e /var/lib/ceph/bootstrap-rbd/ceph.keyring ']'
++/opt/ceph-container/bin/config.static.sh:97: get_mon_config(): ceph-authtool /var/lib/ceph/bootstrap-rbd/ceph.keyring --create-keyring --gen-key -n client.bootstrap-rbd --cap mon 'allow profile bootstrap-rbd'
creating /var/lib/ceph/bootstrap-rbd/ceph.keyring
++/opt/ceph-container/bin/config.static.sh:100: get_mon_config(): chown --verbose ceph. /etc/ceph/ceph.mon.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-mds/ceph.keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring /var/lib/ceph/bootstrap-rbd/ceph.keyring
changed ownership of '/etc/ceph/ceph.mon.keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/bootstrap-mds/ceph.keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/bootstrap-rgw/ceph.keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/bootstrap-rbd/ceph.keyring' from root:root to ceph:ceph
++/opt/ceph-container/bin/config.static.sh:102: get_mon_config(): '[' '!' -e /etc/ceph/monmap-ceph ']'
++/opt/ceph-container/bin/config.static.sh:103: get_mon_config(): '[' -e /etc/ceph/monmap ']'
++/opt/ceph-container/bin/config.static.sh:108: get_mon_config(): monmaptool --create --add ceph-nano-my-first-cluster-faa32aebf00b 127.0.0.1:6789 --fsid 102f7e7a-ba69-4b80-b019-da7033ec2a00 /etc/ceph/monmap-ceph
monmaptool: monmap file /etc/ceph/monmap-ceph
monmaptool: set fsid to 102f7e7a-ba69-4b80-b019-da7033ec2a00
monmaptool: writing epoch 0 to /etc/ceph/monmap-ceph (1 monitors)
++/opt/ceph-container/bin/config.static.sh:110: get_mon_config(): chown --verbose ceph. /etc/ceph/monmap-ceph
changed ownership of '/etc/ceph/monmap-ceph' from root:root to ceph:ceph
++/opt/ceph-container/bin/start_mon.sh:130: start_mon(): '[' '!' -e /etc/ceph/ceph.mon.keyring ']'
++/opt/ceph-container/bin/start_mon.sh:135: start_mon(): '[' '!' -e /etc/ceph/monmap-ceph ']'
++/opt/ceph-container/bin/start_mon.sh:141: start_mon(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
++/opt/ceph-container/bin/start_mon.sh:142: start_mon(): '[' -f /var/lib/ceph/bootstrap-osd/ceph.keyring ']'
++/opt/ceph-container/bin/start_mon.sh:143: start_mon(): ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
++/opt/ceph-container/bin/start_mon.sh:141: start_mon(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
++/opt/ceph-container/bin/start_mon.sh:142: start_mon(): '[' -f /var/lib/ceph/bootstrap-mds/ceph.keyring ']'
++/opt/ceph-container/bin/start_mon.sh:143: start_mon(): ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-mds/ceph.keyring into /etc/ceph/ceph.mon.keyring
++/opt/ceph-container/bin/start_mon.sh:141: start_mon(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
++/opt/ceph-container/bin/start_mon.sh:142: start_mon(): '[' -f /var/lib/ceph/bootstrap-rgw/ceph.keyring ']'
++/opt/ceph-container/bin/start_mon.sh:143: start_mon(): ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-rgw/ceph.keyring into /etc/ceph/ceph.mon.keyring
++/opt/ceph-container/bin/start_mon.sh:141: start_mon(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
++/opt/ceph-container/bin/start_mon.sh:142: start_mon(): '[' -f /var/lib/ceph/bootstrap-rbd/ceph.keyring ']'
++/opt/ceph-container/bin/start_mon.sh:143: start_mon(): ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-rbd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-rbd/ceph.keyring into /etc/ceph/ceph.mon.keyring
++/opt/ceph-container/bin/start_mon.sh:141: start_mon(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
++/opt/ceph-container/bin/start_mon.sh:142: start_mon(): '[' -f /etc/ceph/ceph.client.admin.keyring ']'
++/opt/ceph-container/bin/start_mon.sh:143: start_mon(): ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring

2019/02/22 12:22:40 Please open an issue at: https://github.com/ceph/cn with the logs above.
root@ceph-1:~# 

Improve update-check

If a new version is available we currently get:

Current version: deep-j-development-671d632
Latest version: v2.2.0
There is a newer version of cn available. Download it here: https://github.com/ceph/cn/releases/tag/v2.2.0

Additionally, it would be nice to have something like:

Download it with: 'curl -L https://github.com/ceph/cn/releases/download/v2.2.0/cn-v2.2.0-darwin-amd64 -o cn && chmod +x cn && mv cn /usr/local/bin/'

instead of "Download it here: https://github.com/ceph/cn/releases/tag/v2.2.0".

Depending on the platform we are running on, we would make the right suggestion.
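A sketch of how the suggested command could be derived from the running platform (illustration only; the asset naming follows the releases used in the installation section):

# Sketch: build the platform-specific download suggestion.
import platform

goos = {"Darwin": "darwin", "Linux": "linux"}.get(platform.system(), "linux")
goarch = {"x86_64": "amd64", "aarch64": "arm64", "arm64": "arm64"}.get(platform.machine(), "amd64")
version = "v2.2.0"

url = "https://github.com/ceph/cn/releases/download/%s/cn-%s-%s-%s" % (version, version, goos, goarch)
print("Download it with: 'curl -L %s -o cn && chmod +x cn && mv cn /usr/local/bin/'" % url)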

Add ceph-nano web interface in `cn kube` command

Good job on the cn kube implementation! This is an amazing step towards having a simple Ceph instance running on top of OpenShift. I'm wondering if it is possible to also make the Ceph Nano web interface available inside OpenShift.
