ceph-ansible's Introduction

Ceph - a scalable distributed storage system

See https://ceph.com/ for current information about Ceph.

Contributing Code

Most of Ceph is dual-licensed under the LGPL version 2.1 or 3.0. Some miscellaneous code is either public domain or licensed under a BSD-style license.

The Ceph documentation is licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0).

Some headers included in the ceph/ceph repository are licensed under the GPL. See the file COPYING for a full inventory of licenses by file.

All code contributions must include a valid "Signed-off-by" line. See the file SubmittingPatches.rst for details on this and instructions on how to generate and submit patches.

Assignment of copyright is not required to contribute code. Code is contributed under the terms of the applicable license.

Checking out the source

Clone the ceph/ceph repository from github by running the following command on a system that has git installed:

git clone git@github.com:ceph/ceph

Alternatively, if you are not a github user, you should run the following command on a system that has git installed:

git clone https://github.com/ceph/ceph.git

When the ceph/ceph repository has been cloned to your system, run the following commands to move into the cloned ceph/ceph repository and to check out the git submodules associated with it:

cd ceph
git submodule update --init --recursive --progress

Build Prerequisites

section last updated 27 Jul 2023

Make sure that curl is installed. The Debian and Ubuntu apt command is provided here, but if you use a system with a different package manager, then you must use whatever command is the proper counterpart of this one:

apt install curl
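
On RPM-based systems, the counterpart is typically the following (this assumes the package is also named curl there):

dnf install curl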

Install Debian or RPM package dependencies by running the following command:

./install-deps.sh

Install the python3-routes package:

apt install python3-routes
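
On RPM-based systems, the equivalent package is likely named the same (an assumption about distribution packaging):

dnf install python3-routes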

Building Ceph

These instructions are meant for developers who are compiling the code for development and testing. To build binaries that are suitable for installation we recommend that you build .deb or .rpm packages, or refer to ceph.spec.in or debian/rules to see which configuration options are specified for production builds.

To build Ceph, make sure that you are in the top-level ceph directory that contains do_cmake.sh and CONTRIBUTING.rst and run the following commands:

./do_cmake.sh
cd build
ninja

do_cmake.sh by default creates a "debug build" of Ceph, which can be up to five times slower than a non-debug build. Pass -DCMAKE_BUILD_TYPE=RelWithDebInfo to do_cmake.sh to create a non-debug build.
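
For example, to create a non-debug build:

./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo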

Ninja is the buildsystem used by the Ceph project to build test builds. The number of jobs used by ninja is derived from the number of CPU cores of the building host if unspecified. Use the -j option to limit the job number if the build jobs are running out of memory. If you attempt to run ninja and receive a message that reads g++: fatal error: Killed signal terminated program cc1plus, then you have run out of memory. Using the -j option with an argument appropriate to the hardware on which the ninja command is run is expected to result in a successful build. For example, to limit the job number to 3, run the command ninja -j 3. On average, each ninja job run in parallel needs approximately 2.5 GiB of RAM.
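
If you want to size the job count from available memory, a minimal sketch (assuming a procps free that reports an "available" column, and allowing roughly 3 GiB per job to stay above the ~2.5 GiB average) is:

# derive a job count from available RAM, falling back to a single job
jobs=$(( $(free -g | awk '/^Mem:/ {print $7}') / 3 ))
ninja -j "$(( jobs > 0 ? jobs : 1 ))"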

This documentation assumes that your build directory is a subdirectory of the ceph.git checkout. If the build directory is located elsewhere, point CEPH_GIT_DIR to the correct path of the checkout. Additional CMake args can be specified by setting ARGS before invoking do_cmake.sh. See cmake options for more details. For example:

ARGS="-DCMAKE_C_COMPILER=gcc-7" ./do_cmake.sh

To build only certain targets, run a command of the following form:

ninja [target name]
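
For example, to build just the vstart helper target used later in this README:

ninja vstart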

To install:

ninja install

CMake Options

The -D flag can be used with cmake to speed up the process of building Ceph and to customize the build.

Building without RADOS Gateway

The RADOS Gateway is built by default. To build Ceph without the RADOS Gateway, run a command of the following form:

cmake -DWITH_RADOSGW=OFF [path to top-level ceph directory]
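
For example, when the command is run from the build/ directory created by do_cmake.sh, the top-level ceph directory is the parent directory:

cmake -DWITH_RADOSGW=OFF ..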

Building with debugging and arbitrary dependency locations

Run a command of the following form to build Ceph with debugging and alternate locations for some external dependencies:

cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
    ..

Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By default, cmake builds these bundled dependencies from source instead of using libraries that are already installed on the system. You can opt to use these system libraries, as long as they meet Ceph's version requirements. To use system libraries, use cmake options like WITH_SYSTEM_BOOST, as in the following example:

cmake -DWITH_SYSTEM_BOOST=ON [...]

To view an exhaustive list of -D options, invoke cmake -LH:

cmake -LH

Preserving diagnostic colors

If you pipe ninja to less and would like to preserve the diagnostic colors in the output in order to make errors and warnings more legible, run the following command:

cmake -DDIAGNOSTICS_COLOR=always ...

The above command works only with supported compilers.

The diagnostic colors will be visible when the following command is run:

ninja | less -R

Other available values for DIAGNOSTICS_COLOR are auto (default) and never.

Building a source tarball

To build a complete source tarball with everything needed to build from source and/or build a (deb or rpm) package, run

./make-dist

This will create a tarball like ceph-$version.tar.bz2 from git. (Ensure that any changes you want to include in your working directory are committed to git.)
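
A minimal sketch of that workflow (the commit message is illustrative):

git commit -am "include my local changes in the tarball"
./make-dist
ls ceph-*.tar.bz2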

Running a test cluster

From the ceph/ directory, run the following commands to launch a test Ceph cluster:

cd build
ninja vstart        # builds just enough to run vstart
../src/vstart.sh --debug --new -x --localhost --bluestore
./bin/ceph -s

Most Ceph commands are available in the bin/ directory. For example:

./bin/rbd create foo --size 1000
./bin/rados -p foo bench 30 write

To shut down the test cluster, run the following command from the build/ directory:

../src/stop.sh

Use the sysvinit script to start or stop individual daemons:

./bin/init-ceph restart osd.0
./bin/init-ceph stop

Running unit tests

To build and run all tests (in parallel using all processors), use ctest:

cd build
ninja
ctest -j$(nproc)

(Note: Many targets built from src/test are not run using ctest. Targets starting with "unittest" are run in ninja check and thus can be run with ctest. Targets starting with "ceph_test" can not, and should be run by hand.)

When failures occur, look in build/Testing/Temporary for logs.

To build and run all tests and their dependencies without other unnecessary targets in Ceph:

cd build
ninja check -j$(nproc)

To run an individual test manually, run ctest with -R (regex matching):

ctest -R [regex matching test name(s)]
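
For example (the test name is illustrative; build it first with ninja, since ctest will not build it for you):

ninja unittest_bufferlist
ctest -R unittest_bufferlist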

(Note: ctest does not build the test it's running or the dependencies needed to run it)

To run an individual test manually and see all the tests output, run ctest with the -V (verbose) flag:

ctest -V -R [regex matching test name(s)]

To run tests manually and run the jobs in parallel, run ctest with the -j flag:

ctest -j [number of jobs]

There are many other flags you can give ctest for better control over manual test execution. To view these options run:

man ctest

Building the Documentation

Prerequisites

The list of package dependencies for building the documentation can be found in doc_deps.deb.txt:

sudo apt-get install `cat doc_deps.deb.txt`

Building the Documentation

To build the documentation, ensure that you are in the top-level /ceph directory, and execute the build script. For example:

admin/build-doc
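
The generated HTML typically lands under build-doc/output/html (an assumption about the script's defaults); one way to browse it locally:

python3 -m http.server --directory build-doc/output/html 8080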

Reporting Issues

To report an issue and view existing issues, please visit https://tracker.ceph.com/projects/ceph.

ceph-ansible's People

Contributors

alfredodeza, alimaredia, andrewschoen, andymcc, asm0deuz, b-ranto, batrick, bengland2, benoitknecht, clwluvw, d3n14l, dang, dotnwat, dsavineau, flisky, fmount, font, fultonj, gfidente, guestisp, guits, jcftang, jimcurtis, ktdreyer, leseb, logan2211, msambol, rishabh-d-dave, rootfs, stpierre

ceph-ansible's Issues

Should the ceph.client.admin.keyring file be added to the RGW?

Hi,
I used ceph-ansible to install Ceph; it is very easy to use and installs Ceph quickly.
But when I use the RGW to create a user, it errors as follows:

vagrant@ceph-rgw:~$ sudo radosgw-admin user create --uid=zzm --display-name="zhaozhiming"
2014-09-06 08:41:59.040195 7fefc075f780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2014-09-06 08:41:59.040258 7fefc075f780  0 librados: client.admin initialization error (2) No such file or directory

I then copied the ceph.client.admin.keyring file from the mon VM (from its /etc/ceph folder) to the RGW's /etc/ceph/ folder. That solved the problem.

vagrant@ceph-rgw:~$ sudo radosgw-admin user create --uid=zzm1 --display-name="zhaozhiming1"
2014-09-10 03:38:07.229932 7fc4f0775700  0 -- :/1002678 >> 192.168.42.12:6789/0 pipe(0x17f8000 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x17f8270).fault
{ "user_id": "zzm1",
  "display_name": "zhaozhiming1",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [],
  "keys": [
        { "user": "zzm1",
          "access_key": "ZGV5G0OFFUNESNRBK55Q",
          "secret_key": "8DSkRch3bnbZHtQqyES9lHH78t7UftLiSdFFzSdP"}],
  "swift_keys": [],
  "caps": [],
  "op_mask": "read, write, delete",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": { "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1},
  "user_quota": { "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1},
  "temp_url_keys": []}

Is this a bug? Should a task be added that deploys ceph.client.admin.keyring to the RGW node? Thanks.
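
A minimal sketch of the manual workaround described above (hostnames and paths are illustrative):

scp ceph-mon0:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring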

RadosGW: skip creation of default user

Currently the RadosGW task creates some test users.
This could be a security hole. I think it would be better to skip this step, or to remove the users immediately after creation and treat that as a working test: if users are created and removed successfully, RadosGW is working.

Testing on CentOS 6.5

Hi, I am testing the playbook on CentOS 6.5 (yes, the doc clearly says the playbook has not yet been tested on CentOS 6.5, and I hope my work may be of value).

Here is my Vagrantfile https://gist.github.com/syang/9965468

After bringing up the VMs, I ran the playbook and the Ceph installation failed because python-flask, python-argparse and python-requests were not installed. Am I pointing at the wrong repo?

I tried to install the above Python packages with easy_install, but the errors remain.

Can someone give me a hint about where I should dig?

Thanks,

(env)Clouds-MacBook-Pro:ceph_automation shuoy$ ansible-playbook -f 7 -v site.yml
PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************

ok: [ceph-rgw]
ok: [ceph-mon0]
ok: [ceph-mon2]
ok: [ceph-mon1]
ok: [ceph-osd2]
ok: [ceph-osd1]
ok: [ceph-osd0]

TASK: [common | Fail on unsupported system] ***********************************
skipping: [ceph-mon0]
skipping: [ceph-mon2]
skipping: [ceph-mon1]
skipping: [ceph-osd0]
skipping: [ceph-osd1]
skipping: [ceph-osd2]
skipping: [ceph-rgw]

TASK: [common | Fail on unsupported architecture] *****************************
skipping: [ceph-mon0]
skipping: [ceph-mon2]
skipping: [ceph-osd2]
skipping: [ceph-osd0]
skipping: [ceph-mon1]
skipping: [ceph-rgw]
skipping: [ceph-osd1]

TASK: [common | Fail on unsupported distribution] *****************************
skipping: [ceph-mon0]
skipping: [ceph-mon1]
skipping: [ceph-osd0]
skipping: [ceph-mon2]
skipping: [ceph-osd1]
skipping: [ceph-osd2]
skipping: [ceph-rgw]

TASK: [common | Install dependancies] *****************************************
ok: [ceph-osd1] => (item=python-pycurl,ntp) => {"changed": false, "item": "python-pycurl,ntp", "msg": "", "rc": 0, "results": ["python-pycurl-7.19.0-8.el6.x86_64 providing python-pycurl is already installed", "ntp-4.2.6p5-1.el6.centos.x86_64 providing ntp is already installed"]}
ok: [ceph-osd2] => (item=python-pycurl,ntp) => {"changed": false, "item": "python-pycurl,ntp", "msg": "", "rc": 0, "results": ["python-pycurl-7.19.0-8.el6.x86_64 providing python-pycurl is already installed", "ntp-4.2.6p5-1.el6.centos.x86_64 providing ntp is already installed"]}
ok: [ceph-rgw] => (item=python-pycurl,ntp) => {"changed": false, "item": "python-pycurl,ntp", "msg": "", "rc": 0, "results": ["python-pycurl-7.19.0-8.el6.x86_64 providing python-pycurl is already installed", "ntp-4.2.6p5-1.el6.centos.x86_64 providing ntp is already installed"]}
ok: [ceph-mon2] => (item=python-pycurl,ntp) => {"changed": false, "item": "python-pycurl,ntp", "msg": "", "rc": 0, "results": ["python-pycurl-7.19.0-8.el6.x86_64 providing python-pycurl is already installed", "ntp-4.2.6p5-1.el6.centos.x86_64 providing ntp is already installed"]}
ok: [ceph-mon0] => (item=python-pycurl,ntp) => {"changed": false, "item": "python-pycurl,ntp", "msg": "", "rc": 0, "results": ["python-pycurl-7.19.0-8.el6.x86_64 providing python-pycurl is already installed", "ntp-4.2.6p5-1.el6.centos.x86_64 providing ntp is already installed"]}
ok: [ceph-mon1] => (item=python-pycurl,ntp) => {"changed": false, "item": "python-pycurl,ntp", "msg": "", "rc": 0, "results": ["python-pycurl-7.19.0-8.el6.x86_64 providing python-pycurl is already installed", "ntp-4.2.6p5-1.el6.centos.x86_64 providing ntp is already installed"]}
ok: [ceph-osd0] => (item=python-pycurl,ntp) => {"changed": false, "item": "python-pycurl,ntp", "msg": "", "rc": 0, "results": ["python-pycurl-7.19.0-8.el6.x86_64 providing python-pycurl is already installed", "ntp-4.2.6p5-1.el6.centos.x86_64 providing ntp is already installed"]}

TASK: [common | Install the Ceph key] *****************************************
changed: [ceph-osd2] => {"changed": true, "item": ""}
changed: [ceph-mon2] => {"changed": true, "item": ""}
changed: [ceph-rgw] => {"changed": true, "item": ""}
changed: [ceph-mon1] => {"changed": true, "item": ""}
changed: [ceph-osd0] => {"changed": true, "item": ""}
changed: [ceph-osd1] => {"changed": true, "item": ""}
changed: [ceph-mon0] => {"changed": true, "item": ""}

TASK: [common | Add Ceph repository] ******************************************
changed: [ceph-osd2] => {"changed": true, "cmd": ["rpm", "-U", "http://ceph.com/rpm-emperor/el6/noarch/ceph-release-1-0.el6.noarch.rpm"], "delta": "0:00:05.432115", "end": "2014-04-03 20:05:16.422554", "item": "", "rc": 0, "start": "2014-04-03 20:05:10.990439", "stderr": "", "stdout": ""}
changed: [ceph-osd1] => {"changed": true, "cmd": ["rpm", "-U", "http://ceph.com/rpm-emperor/el6/noarch/ceph-release-1-0.el6.noarch.rpm"], "delta": "0:00:05.480214", "end": "2014-04-03 20:05:16.480576", "item": "", "rc": 0, "start": "2014-04-03 20:05:11.000362", "stderr": "", "stdout": ""}
changed: [ceph-mon2] => {"changed": true, "cmd": ["rpm", "-U", "http://ceph.com/rpm-emperor/el6/noarch/ceph-release-1-0.el6.noarch.rpm"], "delta": "0:00:05.464109", "end": "2014-04-03 20:05:16.520175", "item": "", "rc": 0, "start": "2014-04-03 20:05:11.056066", "stderr": "", "stdout": ""}
changed: [ceph-mon0] => {"changed": true, "cmd": ["rpm", "-U", "http://ceph.com/rpm-emperor/el6/noarch/ceph-release-1-0.el6.noarch.rpm"], "delta": "0:00:05.476657", "end": "2014-04-03 20:05:16.598222", "item": "", "rc": 0, "start": "2014-04-03 20:05:11.121565", "stderr": "", "stdout": ""}
changed: [ceph-mon1] => {"changed": true, "cmd": ["rpm", "-U", "http://ceph.com/rpm-emperor/el6/noarch/ceph-release-1-0.el6.noarch.rpm"], "delta": "0:00:05.482950", "end": "2014-04-03 20:05:16.546813", "item": "", "rc": 0, "start": "2014-04-03 20:05:11.063863", "stderr": "", "stdout": ""}
changed: [ceph-rgw] => {"changed": true, "cmd": ["rpm", "-U", "http://ceph.com/rpm-emperor/el6/noarch/ceph-release-1-0.el6.noarch.rpm"], "delta": "0:00:05.500392", "end": "2014-04-03 20:05:16.648348", "item": "", "rc": 0, "start": "2014-04-03 20:05:11.147956", "stderr": "", "stdout": ""}
changed: [ceph-osd0] => {"changed": true, "cmd": ["rpm", "-U", "http://ceph.com/rpm-emperor/el6/noarch/ceph-release-1-0.el6.noarch.rpm"], "delta": "0:00:05.502414", "end": "2014-04-03 20:05:16.570165", "item": "", "rc": 0, "start": "2014-04-03 20:05:11.067751", "stderr": "", "stdout": ""}

TASK: [common | Install Ceph] *************************************************

failed: [ceph-osd2] => {"changed": false, "failed": true, "item": "", "rc": 1, "results": ["Loaded plugins: fastestmirror, security\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.freedomvoice.com\n * extras: centos.mirror.ndchost.com\n * updates: mirror.nwresd.org\nSetting up Install Process\nResolving Dependencies\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: libcephfs1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados2 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: gdisk for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: xfsprogs for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-ceph for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libleveldb.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_thread-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_system-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Running transaction check\n---> Package boost-system.x86_64 0:1.41.0-18.el6 will be installed\n---> Package boost-thread.x86_64 0:1.41.0-18.el6 will be installed\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package gdisk.x86_64 0:0.8.2-1.el6 will be installed\n--> Processing Dependency: libicuio.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n--> Processing Dependency: libicuuc.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n---> Package gperftools-libs.x86_64 0:2.0-11.el6.3 will be installed\n--> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.0-11.el6.3.x86_64\n---> Package leveldb.x86_64 0:1.7.0-2.el6 will be installed\n---> Package libcephfs1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librados2.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librbd1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n---> Package xfsprogs.x86_64 0:3.1.1-14.el6 will be installed\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package libicu.x86_64 0:4.2.1-9.1.el6_2 will be installed\n---> Package libunwind.x86_64 0:1.1-2.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
msg: Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-flask
Error: Package: ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-argparse
Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-requests

failed: [ceph-mon2] => {"changed": false, "failed": true, "item": "", "rc": 1, "results": ["Loaded plugins: fastestmirror, security\nLoading mirror speeds from cached hostfile\n * base: mirrors.cat.pdx.edu\n * extras: mirrors.sonic.net\n * updates: ftpmirror.your.org\nSetting up Install Process\nResolving Dependencies\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: libcephfs1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados2 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: gdisk for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: xfsprogs for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-ceph for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libleveldb.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_thread-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_system-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Running transaction check\n---> Package boost-system.x86_64 0:1.41.0-18.el6 will be installed\n---> Package boost-thread.x86_64 0:1.41.0-18.el6 will be installed\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package gdisk.x86_64 0:0.8.2-1.el6 will be installed\n--> Processing Dependency: libicuio.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n--> Processing Dependency: libicuuc.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n---> Package gperftools-libs.x86_64 0:2.0-11.el6.3 will be installed\n--> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.0-11.el6.3.x86_64\n---> Package leveldb.x86_64 0:1.7.0-2.el6 will be installed\n---> Package libcephfs1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librados2.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librbd1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n---> Package xfsprogs.x86_64 0:3.1.1-14.el6 will be installed\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package libicu.x86_64 0:4.2.1-9.1.el6_2 will be installed\n---> Package libunwind.x86_64 0:1.1-2.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
msg: Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-flask
Error: Package: ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-argparse
Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-requests

failed: [ceph-osd1] => {"changed": false, "failed": true, "item": "", "rc": 1, "results": ["Loaded plugins: fastestmirror, security\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.freedomvoice.com\n * extras: mirrors.sonic.net\n * updates: mirror.nwresd.org\nSetting up Install Process\nResolving Dependencies\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: libcephfs1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados2 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: gdisk for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: xfsprogs for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-ceph for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libleveldb.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_thread-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_system-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Running transaction check\n---> Package boost-system.x86_64 0:1.41.0-18.el6 will be installed\n---> Package boost-thread.x86_64 0:1.41.0-18.el6 will be installed\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package gdisk.x86_64 0:0.8.2-1.el6 will be installed\n--> Processing Dependency: libicuio.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n--> Processing Dependency: libicuuc.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n---> Package gperftools-libs.x86_64 0:2.0-11.el6.3 will be installed\n--> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.0-11.el6.3.x86_64\n---> Package leveldb.x86_64 0:1.7.0-2.el6 will be installed\n---> Package libcephfs1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librados2.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librbd1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n---> Package xfsprogs.x86_64 0:3.1.1-14.el6 will be installed\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package libicu.x86_64 0:4.2.1-9.1.el6_2 will be installed\n---> Package libunwind.x86_64 0:1.1-2.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
msg: Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-flask
Error: Package: ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-argparse
Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-requests

failed: [ceph-osd0] => {"changed": false, "failed": true, "item": "", "rc": 1, "results": ["Loaded plugins: fastestmirror, security\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.freedomvoice.com\n * extras: mirrors.sonic.net\n * updates: mirror.nwresd.org\nSetting up Install Process\nResolving Dependencies\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: libcephfs1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados2 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: gdisk for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: xfsprogs for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-ceph for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libleveldb.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_thread-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_system-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Running transaction check\n---> Package boost-system.x86_64 0:1.41.0-18.el6 will be installed\n---> Package boost-thread.x86_64 0:1.41.0-18.el6 will be installed\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package gdisk.x86_64 0:0.8.2-1.el6 will be installed\n--> Processing Dependency: libicuio.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n--> Processing Dependency: libicuuc.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n---> Package gperftools-libs.x86_64 0:2.0-11.el6.3 will be installed\n--> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.0-11.el6.3.x86_64\n---> Package leveldb.x86_64 0:1.7.0-2.el6 will be installed\n---> Package libcephfs1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librados2.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librbd1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n---> Package xfsprogs.x86_64 0:3.1.1-14.el6 will be installed\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package libicu.x86_64 0:4.2.1-9.1.el6_2 will be installed\n---> Package libunwind.x86_64 0:1.1-2.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
msg: Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-flask
Error: Package: ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-argparse
Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-requests

failed: [ceph-rgw] => {"changed": false, "failed": true, "item": "", "rc": 1, "results": ["Loaded plugins: fastestmirror, security\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.freedomvoice.com\n * extras: mirrors.sonic.net\n * updates: mirror.nwresd.org\nSetting up Install Process\nResolving Dependencies\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: libcephfs1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados2 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: gdisk for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: xfsprogs for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-ceph for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libleveldb.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_thread-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_system-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Running transaction check\n---> Package boost-system.x86_64 0:1.41.0-18.el6 will be installed\n---> Package boost-thread.x86_64 0:1.41.0-18.el6 will be installed\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package gdisk.x86_64 0:0.8.2-1.el6 will be installed\n--> Processing Dependency: libicuio.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n--> Processing Dependency: libicuuc.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n---> Package gperftools-libs.x86_64 0:2.0-11.el6.3 will be installed\n--> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.0-11.el6.3.x86_64\n---> Package leveldb.x86_64 0:1.7.0-2.el6 will be installed\n---> Package libcephfs1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librados2.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librbd1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n---> Package xfsprogs.x86_64 0:3.1.1-14.el6 will be installed\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package libicu.x86_64 0:4.2.1-9.1.el6_2 will be installed\n---> Package libunwind.x86_64 0:1.1-2.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
msg: Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-flask
Error: Package: ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-argparse
Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-requests

failed: [ceph-mon0] => {"changed": false, "failed": true, "item": "", "rc": 1, "results": ["Loaded plugins: fastestmirror, security\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.freedomvoice.com\n * extras: centos.mirror.ndchost.com\n * updates: mirrors.syringanetworks.net\nSetting up Install Process\nResolving Dependencies\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: libcephfs1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados2 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: gdisk for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: xfsprogs for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-ceph for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libleveldb.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_thread-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_system-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Running transaction check\n---> Package boost-system.x86_64 0:1.41.0-18.el6 will be installed\n---> Package boost-thread.x86_64 0:1.41.0-18.el6 will be installed\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package gdisk.x86_64 0:0.8.2-1.el6 will be installed\n--> Processing Dependency: libicuio.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n--> Processing Dependency: libicuuc.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n---> Package gperftools-libs.x86_64 0:2.0-11.el6.3 will be installed\n--> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.0-11.el6.3.x86_64\n---> Package leveldb.x86_64 0:1.7.0-2.el6 will be installed\n---> Package libcephfs1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librados2.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librbd1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n---> Package xfsprogs.x86_64 0:3.1.1-14.el6 will be installed\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package libicu.x86_64 0:4.2.1-9.1.el6_2 will be installed\n---> Package libunwind.x86_64 0:1.1-2.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
msg: Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-flask
Error: Package: ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-argparse
Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-requests

failed: [ceph-mon1] => {"changed": false, "failed": true, "item": "", "rc": 1, "results": ["Loaded plugins: fastestmirror, security\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.freedomvoice.com\n * extras: mirrors.sonic.net\n * updates: mirror.nwresd.org\nSetting up Install Process\nResolving Dependencies\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: libcephfs1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd1 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados2 = 0.72.2-0.el6 for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: gdisk for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: xfsprogs for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-ceph for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libleveldb.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libtcmalloc.so.4()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_thread-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: libboost_system-mt.so.5()(64bit) for package: ceph-0.72.2-0.el6.x86_64\n--> Running transaction check\n---> Package boost-system.x86_64 0:1.41.0-18.el6 will be installed\n---> Package boost-thread.x86_64 0:1.41.0-18.el6 will be installed\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package gdisk.x86_64 0:0.8.2-1.el6 will be installed\n--> Processing Dependency: libicuio.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n--> Processing Dependency: libicuuc.so.42()(64bit) for package: gdisk-0.8.2-1.el6.x86_64\n---> Package gperftools-libs.x86_64 0:2.0-11.el6.3 will be installed\n--> Processing Dependency: libunwind.so.8()(64bit) for package: gperftools-libs-2.0-11.el6.3.x86_64\n---> Package leveldb.x86_64 0:1.7.0-2.el6 will be installed\n---> Package libcephfs1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librados2.x86_64 0:0.72.2-0.el6 will be installed\n---> Package librbd1.x86_64 0:0.72.2-0.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n---> Package xfsprogs.x86_64 0:3.1.1-14.el6 will be installed\n--> Running transaction check\n---> Package ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-argparse for package: ceph-0.72.2-0.el6.x86_64\n---> Package libicu.x86_64 0:4.2.1-9.1.el6_2 will be installed\n---> Package libunwind.x86_64 0:1.1-2.el6 will be installed\n---> Package python-ceph.x86_64 0:0.72.2-0.el6 will be installed\n--> Processing Dependency: python-requests for package: python-ceph-0.72.2-0.el6.x86_64\n--> Processing Dependency: python-flask for package: python-ceph-0.72.2-0.el6.x86_64\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"]}
msg: Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-flask
Error: Package: ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-argparse
Error: Package: python-ceph-0.72.2-0.el6.x86_64 (ceph)
Requires: python-requests

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/shuoy/site.retry

ceph-mon0 : ok=4 changed=2 unreachable=0 failed=1
ceph-mon1 : ok=4 changed=2 unreachable=0 failed=1
ceph-mon2 : ok=4 changed=2 unreachable=0 failed=1
ceph-osd0 : ok=4 changed=2 unreachable=0 failed=1
ceph-osd1 : ok=4 changed=2 unreachable=0 failed=1
ceph-osd2 : ok=4 changed=2 unreachable=0 failed=1
ceph-rgw : ok=4 changed=2 unreachable=0 failed=1

Prefix variable names with the role name

For consistency and readability's sake, it would be nice to rename all the variables, prefixing them with the name of the role in which they are used.

ansible_lsb is undefined

On a brand new Wheezy install:

TASK: [common | Add Ceph repository] ****************************************** 
fatal: [osd13] => One or more undefined variables: 'ansible_lsb' is undefined
fatal: [osd12] => One or more undefined variables: 'ansible_lsb' is undefined

lsb_release is working:

# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 7.4 (wheezy)
Release:    7.4
Codename:   wheezy

Add radosgw support

I'm currently working on that. I'm just fighting with some broken package dependencies :(

Could ceph-ansible be used to create a Ceph RBD env?

Hi,
I know ceph-ansible can quickly create a Ceph object storage env (3 mons, 3 OSDs, 1 RGW).
Could ceph-ansible be used to create a Ceph RBD (RADOS block device) env (mons, OSDs and RBD)?

Thanks.

Better output for OSD role

In large clusters with many disks used as OSDs, the ansible output is very long and verbose, writing tons of "failed" or "ignored" warnings for every disk that is not present.

I'm declaring all of my disks even if they are not attached, so that I don't have to customize ansible config every time I add a new disk.

devices: [  '/dev/sda'
           ,'/dev/sdb'
           ,'/dev/sdc'
           ,'/dev/sdd'
           ,'/dev/sde'
           ,'/dev/sdf'
           ,'/dev/sdg'
           ,'/dev/sdh'
           ,'/dev/sdi'
           ,'/dev/sdj'
           ,'/dev/sdk'
           ,'/dev/sdl'
]

I do the same for journals.
This works great, but ansible output is really verbose:

failed: [osd13] => (item=[{u'changed': False, u'stdout': u'', u'delta': u'0:00:00.006124', 'stdout_lines': [], u'end': u'2014-03-12 10:09:31.272136', u'start': u'2014-03-12 10:09:31.266012', u'cmd': u"parted --script /dev/sde print | egrep -sq '^ 1.*ceph' ", 'item': '/dev/sde', u'stderr': u'', u'rc': 1, 'invocation': {'module_name': 'shell', 'module_args': u"parted --script /dev/sde print | egrep -sq '^ 1.*ceph'"}}, '/dev/sde', '/dev/sdf']) => {"changed": true, "cmd": ["ceph-disk", "prepare", "/dev/sde", "/dev/sdf"], "delta": "0:00:00.080666", "end": "2014-03-12 10:09:33.456280", "item": [{"changed": false, "cmd": "parted --script /dev/sde print | egrep -sq '^ 1.*ceph' ", "delta": "0:00:00.006124", "end": "2014-03-12 10:09:31.272136", "invocation": {"module_args": "parted --script /dev/sde print | egrep -sq '^ 1.*ceph'", "module_name": "shell"}, "item": "/dev/sde", "rc": 1, "start": "2014-03-12 10:09:31.266012", "stderr": "", "stdout": "", "stdout_lines": []}, "/dev/sde", "/dev/sdf"], "rc": 1, "start": "2014-03-12 10:09:33.375614"}

This appears for each disk and journal on each OSD server.
If you have 24-disk OSD servers and 10 of them, you can imagine the output...

Is it possible to remove this output, or change it to something shorter like "Skipping: /dev/sde not found" or "Skipping: /dev/sde has a Ceph partition"?

ansible_managed changes trigger restart

In ceph-common you load {{ ansible_managed }} at the top of the main
config file - this will trigger handlers on that file whenever an Ansible run is made.

I'd suggest replacing it with a vanilla text comment 'managed by Ansible' to warn
admins but avoid unnecessary cluster bounces.

Playbook not working on MacOSX

I tried to play with Ceph with this playbook on my Mac box, and it failed. Here is what I did

  1. I ran 'vagrant up' from the project directory and then 'ansible all -m ping', and I got the expected results
    (env)Clouds-MacBook-Pro:ceph-ansible shuoy$ ansible all -m ping
    ceph-osd1 | success >> {
    "changed": false,
    "ping": "pong"
    }

ceph-osd2 | success >> {
"changed": false,
"ping": "pong"
}

ceph-mon0 | success >> {
"changed": false,
"ping": "pong"
}

ceph-mon2 | success >> {
"changed": false,
"ping": "pong"
}

ceph-mon1 | success >> {
"changed": false,
"ping": "pong"
}

ceph-osd0 | success >> {
"changed": false,
"ping": "pong"
}

ceph-rgw | success >> {
"changed": false,
"ping": "pong"
}

  2. After that, I ran the playbook and hit the following error lines
    (env)Clouds-MacBook-Pro:ceph-ansible shuoy$ ansible-playbook -f 7 -v site.yml

PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [ceph-mon1]
ok: [ceph-osd1]
ok: [ceph-mon2]
ok: [ceph-rgw]
ok: [ceph-osd0]
ok: [ceph-osd2]
ok: [ceph-mon0]

TASK: [common | Fail on unsupported system] ***********************************
skipping: [ceph-mon1]
skipping: [ceph-osd1]
skipping: [ceph-osd0]
skipping: [ceph-rgw]
skipping: [ceph-mon2]
skipping: [ceph-osd2]
skipping: [ceph-mon0]

TASK: [common | Fail on unsupported architecture] *****************************
skipping: [ceph-mon1]
skipping: [ceph-mon0]
skipping: [ceph-mon2]
skipping: [ceph-osd0]
skipping: [ceph-osd2]
skipping: [ceph-osd1]
skipping: [ceph-rgw]

TASK: [common | Fail on unsupported distribution] *****************************
skipping: [ceph-mon0]
skipping: [ceph-mon1]
skipping: [ceph-osd0]
skipping: [ceph-mon2]
skipping: [ceph-osd1]
skipping: [ceph-osd2]
skipping: [ceph-rgw]

TASK: [common | Install dependancies] *****************************************
skipping: [ceph-mon0]
skipping: [ceph-mon1]
skipping: [ceph-mon2]
skipping: [ceph-osd0]
skipping: [ceph-osd1]
skipping: [ceph-osd2]
skipping: [ceph-rgw]

TASK: [common | Install the Ceph key] *****************************************
skipping: [ceph-mon0]
skipping: [ceph-mon2]
skipping: [ceph-mon1]
skipping: [ceph-osd0]
skipping: [ceph-osd1]
skipping: [ceph-osd2]
skipping: [ceph-rgw]

TASK: [common | Add Ceph repository] ******************************************
skipping: [ceph-mon0]
skipping: [ceph-osd2]
skipping: [ceph-mon2]
skipping: [ceph-osd0]
skipping: [ceph-mon1]
skipping: [ceph-osd1]
skipping: [ceph-rgw]

TASK: [common | Install Ceph] *************************************************
skipping: [ceph-osd0]
skipping: [ceph-osd1]
skipping: [ceph-mon2]
skipping: [ceph-osd2]
skipping: [ceph-mon0]
skipping: [ceph-rgw]
skipping: [ceph-mon1]

TASK: [common | Generate Ceph configuration file] *****************************
skipping: [ceph-mon0]
skipping: [ceph-mon2]
skipping: [ceph-mon1]
skipping: [ceph-osd2]
skipping: [ceph-osd1]
skipping: [ceph-osd0]
skipping: [ceph-rgw]

TASK: [common | Fail on unsupported system] ***********************************
skipping: [ceph-mon0]
skipping: [ceph-mon1]
skipping: [ceph-mon2]
skipping: [ceph-osd0]
skipping: [ceph-rgw]
skipping: [ceph-osd1]
skipping: [ceph-osd2]

TASK: [common | Fail on unsupported architecture] *****************************
skipping: [ceph-mon0]
skipping: [ceph-mon1]
skipping: [ceph-osd0]
skipping: [ceph-mon2]
skipping: [ceph-osd1]
skipping: [ceph-rgw]
skipping: [ceph-osd2]

TASK: [common | Fail on unsupported distribution] *****************************
skipping: [ceph-mon0]
skipping: [ceph-mon1]
skipping: [ceph-osd1]
skipping: [ceph-osd2]
skipping: [ceph-osd0]
skipping: [ceph-mon2]
skipping: [ceph-rgw]

Clarify role names

We are about to push the roles into Ansible galaxy. They should have clearer names.

Config options rework

The way we configure ceph.conf options is really monolithic and not flexible at all.
Every time we want to support a new option, this means:

  • adding a new variable
  • adding a new line in the template

Something was started here #130 but the ceph.conf layout was terrible and the person never continued the work.
I'd like to have something like a section (general/mon/osd/client) to which we can pass options directly as variables. Then the template would simply iterate over the set of vars and add them to the ceph.conf.
This would tremendously help to quickly test new options without hacking the code or submitting a PR each time.

I'm thinking of taking over #130 and see how I can get a better layout.

Support for node configuration like IP and Interface bonding

Adding support for whole-node configuration, like configuring IPs and interface bonding, would be good because it would allow configuring a node in a single ansible run.

For example, a basic Debian installation could be done by just adding python, openssh-server, and an IP address on an interface (the Ansible requirements).
After that, everything else, like the public network IP, private network IP and interface bonding, would be done by ansible.

Ability to manage LVM journals

I believe there is a use case behind the lvm journals.
For instance you can do:

2 SSDs:

  • tiny mdadm raid 1 setup for the system; let’s say /dev/sda1 and /dev/sdb1
  • then you still have:
    • /dev/sda2
    • /dev/sdb2

They can both host journals, and you usually want to manage them with LVM, which is easier than managing partitions.
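
A minimal sketch of that layout with standard LVM tools (the volume group name and journal sizes are illustrative):

pvcreate /dev/sda2 /dev/sdb2
vgcreate journals /dev/sda2 /dev/sdb2
lvcreate -L 10G -n journal-osd0 journals
lvcreate -L 10G -n journal-osd1 journals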

Support for CRUSH colocation scenarios

Currently the ceph.conf only supports something like this:

[osd.X]
osd crush location = "root=location"

There are more options, but this config is tied to a single OSD, so if we do something like SSD and SATA pools and rules on the same host, this won't work.

No such file or directory

This is pretty strange:

TASK: [common | Install Ceph] ************************************************* 
failed: [osd12] => (item=ceph,ceph-common,ceph-fs-common,ceph-fuse,ceph-mds,libcephfs1) => {"cmd": ["DEBIAN_FRONTEND=noninteractive", "DEBIAN_PRIORITY=critical", "/usr/bin/apt-get", "-y", "-o", "Dpkg::Options::=--force-confdef", "-o", "Dpkg::Options::=--force-confold", "install", "ceph", "ceph-common", "ceph-fs-common", "ceph-fuse", "ceph-mds", "libcephfs1"], "failed": true, "item": "ceph,ceph-common,ceph-fs-common,ceph-fuse,ceph-mds,libcephfs1", "rc": 2}
msg: [Errno 2] No such file or directory

FATAL: all hosts have already failed -- aborting

manually running the same command:

DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical /usr/bin/apt-get -y -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold install ceph ceph-common ceph-fs-common ceph-fuse ceph-mds libcephfs1

is working properly.

Remove mdss constraint

If MDSs are not used (and not defined in this playbook) some errors are triggered:

fatal: [x] => {'msg': "One or more undefined variables: 'dict object' has no attribute 'mdss'", 'failed': True}

cluster client configuration

Hi all,

Can anyone provide some documentation on how to set up a client for this cluster?
I am trying to test some rbd use cases, like creating, mounting and cloning rbd images, with a local cluster set up with vagrant.

I tried copying the monitor configuration from mon1:/etc/ceph/* to my hostmachine:/etc/ceph/*

Doing so, ceph health outputs:

root@jck2:~# ceph health
HEALTH_OK

ceph -w

root@jck2:~# ceph -w
    cluster 4a158d27-f750-41d5-9e7f-26ce4c9d2d45
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-mon0=192.168.42.10:6789/0,ceph-mon1=192.168.42.11:6789/0,ceph-mon2=192.168.42.12:6789/0}, election epoch 6, quorum 0,1,2 ceph-mon0,ceph-mon1,ceph-mon2
     osdmap e14: 6 osds: 6 up, 6 in
      pgmap v24: 192 pgs, 3 pools, 0 bytes data, 0 objects
            207 MB used, 65126 MB / 65333 MB avail
                 192 active+clean

2014-11-10 12:59:34.364045 mon.0 [INF] pgmap v24: 192 pgs: 192 active+clean; 0 bytes data, 207 MB used, 65126 MB / 65333 MB avail

ceph osd tree:

root@jck2:~# ceph osd tree
# id    weight  type name   up/down reweight
-1  0.05997 root default
-2  0.01999     host ceph-osd0
0   0.009995            osd.0   up  1   
3   0.009995            osd.3   up  1   
-3  0.01999     host ceph-osd1
2   0.009995            osd.2   up  1   
4   0.009995            osd.4   up  1   
-4  0.01999     host ceph-osd2
1   0.009995            osd.1   up  1   
5   0.009995            osd.5   up  1   

Creating RBD images works!

root@jck2:~# rbd create --image-format 2 --size 512 test --id admin
root@jck2:~# rbd ls
test

However, if I try to map and mount an image, I get the following error:

root@jck2:~# rbd map test --id admin
rbd: add failed: (95) Operation not supported

Can anyone tell me why map fails? If someone gives me some hints on setting up the client, I am willing to contribute to ceph-ansible by writing a playbook that configures clients and documenting the setup for other users.

Thanks in advance

  • Cornelius
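
One likely cause, assuming the client runs an older distribution kernel (as the Vagrant boxes of that era typically do), is that the kernel RBD driver does not support image format 2. A quick way to test that theory:

rbd create --image-format 1 --size 512 test1 --id admin
rbd map test1 --id admin

If a format 1 image maps fine, the client either needs a newer kernel or has to stick to format 1 images.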

Separate OSD scenarios to multiple files

Currently everything lives in main.yml; the file has become difficult to read and can turn into a real mess as we keep adding new scenarios.
I think we should split the scenarios into dedicated files and simply include them from main.yml (a rough sketch follows below).

Any thoughts?

@alfredodeza ?
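
A rough sketch of what the task file could be reduced to (the included file names are hypothetical; journal_collocation and raw_multi_journal are the existing scenario flags):

---
# roles/ceph-osd/tasks/main.yml (sketch)
- include: journal_collocation.yml
  when: journal_collocation

- include: raw_multi_journal.yml
  when: raw_multi_journal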

RGW email address not declared

Error:

 _____________________________________________
< TASK: radosgw | Install Rados Gateway vhost >
 ---------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||


fatal: [cephaio] => {'msg': "One or more undefined variables: 'email_address' is undefined", 'failed': True}
fatal: [cephaio] => {'msg': "One or more undefined variables: 'email_address' is undefined", 'failed': True}
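
Two possible fixes, sketched here: define the variable in group_vars (the address below is a placeholder), or give the template a default with the Jinja default() filter, e.g. {{ email_address | default('root@localhost') }}.

# group_vars/rgws (sketch)
email_address: admin@example.com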

[Documentation] How to fix cluster construction with Ansible only?

Here is my diff from the initial repo:

diff --git a/group_vars/all b/group_vars/all
index 365a450..58ea9c0 100644
--- a/group_vars/all
+++ b/group_vars/all
@@ -51,8 +51,8 @@ dummy:

 ## Ceph options
 #
-#fsid: "{{ cluster_uuid.stdout }}"
-#cephx: true
+fsid: "{{ cluster_uuid.stdout }}"
+cephx: true
 #cephx_require_signatures: true # Kernel RBD does NOT support signatures!
 #cephx_cluster_require_signatures: true
 #cephx_service_require_signatures: false
diff --git a/group_vars/mons b/group_vars/mons
index a2c0034..e8ffcc5 100644
--- a/group_vars/mons
+++ b/group_vars/mons
@@ -5,7 +5,7 @@
 dummy:

 # Monitor options
-#monitor_secret: # /!\ GENERATE ONE WITH 'ceph-authtool --gen-print-key' /!\
+monitor_secret: AQBZSs9UEOmwABAAmYR9DpRU1pb/3BWjqt+KTw==
 #cephx: true

 # Rados Gateway options
diff --git a/roles/ceph-common/defaults/main.yml b/roles/ceph-common/defaults/main.yml
index 05f76e8..bbc0a5b 100644
--- a/roles/ceph-common/defaults/main.yml
+++ b/roles/ceph-common/defaults/main.yml
@@ -87,8 +87,8 @@ pool_default_pg_num: 128
 pool_default_pgp_num: 128
 pool_default_size: 2
 pool_default_min_size: 1
-cluster_network: 192.168.42.0/24
-public_network: 192.168.42.0/24
+cluster_network: 192.168.122.0/24
+public_network: 192.168.122.0/24
 osd_mkfs_type: xfs
 osd_mkfs_options_xfs: -f -i size=2048
 osd_mount_options_xfs: noatime,largeio,inode64,swalloc
diff --git a/roles/ceph-mds/defaults/main.yml b/roles/ceph-mds/defaults/main.yml
index bd46d3b..4d217b8 100644
--- a/roles/ceph-mds/defaults/main.yml
+++ b/roles/ceph-mds/defaults/main.yml
@@ -1,4 +1,4 @@
 ---
 # You can override vars by using host or group vars

-cephx: true
+cephx: false
diff --git a/roles/ceph-osd/defaults/main.yml b/roles/ceph-osd/defaults/main.yml
index f6cc08a..a17cc12 100644
--- a/roles/ceph-osd/defaults/main.yml
+++ b/roles/ceph-osd/defaults/main.yml
@@ -50,9 +50,9 @@ zap_devices: false
 #
 devices:
   - /dev/sdb
-  - /dev/sdc
-  - /dev/sdd
-  - /dev/sde
+    #  - /dev/sdc
+    #  - /dev/sdd
+    #  - /dev/sde

 # Device discovery is based on the Ansible fact 'ansible_devices'
 # which reports all the devices on a system. If chosen all the disks
@@ -90,9 +90,9 @@ journal_collocation: true
 raw_multi_journal: false
 raw_journal_devices:
   - /dev/sdb
-  - /dev/sdb
-  - /dev/sdc
-  - /dev/sdc
+    #- /dev/sdb
+  - #- /dev/sdc
+  - #- /dev/sdc

I got this output:

[root@ceph-client:~] 130 # rbd create foo --size 2
2015-02-02 13:45:50.277015 7f96515ab700  0 -- :/1005796 >> 192.168.122.225:6789/0 pipe(0x82f120 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x82f3b0).fault
2015-02-02 13:45:53.277205 7f965b518700  0 -- :/1005796 >> 192.168.122.225:6789/0 pipe(0x82e910 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x82eba0).fault
[...]
2015-02-02 13:50:41.302498 7f965b518700  0 -- :/1005796 >> 192.168.122.225:6789/0 pipe(0x7f964c000e40 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f964c0010d0).fault
2015-02-02 13:50:44.302691 7f96515ab700  0 -- :/1005796 >> 192.168.122.225:6789/0 pipe(0x835040 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x84af40).fault
2015-02-02 13:50:47.303016 7f965b518700  0 -- :/1005796 >> 192.168.122.225:6789/0 pipe(0x7f964c005670 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f964c005060).fault
2015-02-02 13:50:50.276656 7f965b520760  0 monclient(hunting): authenticate timed out after 300
2015-02-02 13:50:50.276719 7f965b520760  0 librados: client.admin authentication error (110) Connection timed out
rbd: couldn't connect to the cluster!

This happens on RBD creation, following this documentation, after creating my cluster.
The VM setup is the following:

  • 3 OSDs with 2 NICs and 2 HDDs (/dev/sdb is dedicated to OSD)
  • 1 MDS + MON with 2 NICs and 1 HDD
  • 1 ceph-client installed via ceph-deploy

I am obviously missing something here, but what?

ceph.client.admin.keyring doesn't get copied to /etc/ceph?

I've noticed that the ceph-rgw vagrant machine doesn't have /etc/ceph/ceph.client.admin.keyring.

When I do a simple find, it isn't able to find it anywhere but /vagrant/fetch/4a158d27-f750-41d5-9e7f-26ce4c9d2d45 (which looks like the cluster fsid used as a directory name).

Shouldn't it be placed on the rgw machine to allow executing radosgw-admin commands? (See the sketch after the find output below.)

When I need to change the cluster's region I need to do something like this (which is rather odd):

vagrant@ceph-rgw:~$ sudo radosgw-admin user info --uid='janedoe' -k /vagrant/fetch/4a158d27-f750-41d5-9e7f-26ce4c9d2d45/etc/ceph/ceph.client.admin.keyring
{ "user_id": "janedoe",
  "display_name": "janedoe",
  "email": "[email protected]",
  "suspended": 0,
     ...}

Because this doesn't work:

vagrant@ceph-rgw:~$ sudo radosgw-admin user info --uid='janedoe' -k /etc/ceph/keyring.radosgw.gateway
2014-09-25 13:33:57.870979 7f2390c93780  0 librados: client.admin authentication error (1) Operation not permitted
couldn't init storage provider

Here's my full find output.

vagrant@ceph-rgw:~$ find / 2>/dev/null | grep keyring 2>/dev/null
/vagrant/fetch/4a158d27-f750-41d5-9e7f-26ce4c9d2d45/etc/ceph/ceph.client.admin.keyring
/vagrant/fetch/4a158d27-f750-41d5-9e7f-26ce4c9d2d45/etc/ceph/keyring.radosgw.gateway
/vagrant/fetch/4a158d27-f750-41d5-9e7f-26ce4c9d2d45/var/lib/ceph/bootstrap-osd/ceph.keyring
/vagrant/fetch/4a158d27-f750-41d5-9e7f-26ce4c9d2d45/var/lib/ceph/bootstrap-mds/ceph.keyring
/usr/src/linux-headers-3.2.0-30/include/keys/keyring-type.h
/usr/share/keyrings
/usr/share/keyrings/ubuntu-archive-removed-keys.gpg
/usr/share/keyrings/ubuntu-archive-keyring.gpg
/usr/share/keyrings/ubuntu-master-keyring.gpg
/usr/share/locale-langpack/en@shaw/LC_MESSAGES/gnome-keyring.mo
/usr/share/locale-langpack/en_CA/LC_MESSAGES/libgnome-keyring.mo
/usr/share/locale-langpack/en_CA/LC_MESSAGES/gnome-keyring.mo
/usr/share/locale-langpack/en_AU/LC_MESSAGES/libgnome-keyring.mo
/usr/share/locale-langpack/en_AU/LC_MESSAGES/gnome-keyring.mo
/usr/share/locale-langpack/en_GB/LC_MESSAGES/libgnome-keyring.mo
/usr/share/locale-langpack/en_GB/LC_MESSAGES/gnome-keyring.mo
/usr/share/doc/ubuntu-keyring
/usr/share/doc/ubuntu-keyring/README.gz
/usr/share/doc/ubuntu-keyring/copyright
/usr/share/doc/ubuntu-keyring/changelog.gz
/var/lib/dpkg/info/ubuntu-keyring.list
/var/lib/dpkg/info/ubuntu-keyring.md5sums
/var/lib/dpkg/info/ubuntu-keyring.postinst
/var/lib/apt/keyrings
/var/lib/apt/keyrings/ubuntu-archive-keyring.gpg
/etc/ceph/keyring.radosgw.gateway
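
A possible task to push the admin keyring to the radosgw node, sketched here using the fetch directory layout shown above (where the directory name is the cluster fsid):

- name: copy admin keyring to the rados gateway host
  copy:
    src: fetch/{{ fsid }}/etc/ceph/ceph.client.admin.keyring
    dest: /etc/ceph/ceph.client.admin.keyring
    owner: root
    group: root
    mode: 0600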

update group_vars

This file is outdated but really useful for overriding role variables, so we need to update it to reflect the existing variables.

ansible ssh access broken since vagrant 1.7

As far as I can tell, Vagrant changed its default behaviour in release 1.7.0 and now replaces the insecure SSH key with an individual key for each host.

Thus the deployment of the configured Ansible playbook runs with the private key of the last deployed host (osd2) and fails to connect to all the other hosts.

see here as well: hashicorp/vagrant#5048
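
A common workaround, assuming Vagrant >= 1.7, is to keep the shared insecure key in the Vagrantfile:

# Vagrantfile (sketch): reuse the insecure key so Ansible can reach every VM with the same key
Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
end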

Scenarios for when a user wants to deploy to a /somemountpoint/osdNN

There are three different scenarios for deploying OSDs right now. An additional one where users can deploy OSDs to 'directories' would be nice. It could be useful for test cases and proofs of concept where users just don't have access to extra devices to create a test cluster.

It looks like this would be easy to add alongside scenario 1 with the collocated journal; a sketch follows below.
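
A sketch of how such a scenario could be exposed in the role defaults (these variable names are hypothetical, not existing options):

# roles/ceph-osd/defaults/main.yml (sketch)
osd_directory: false
osd_directories:
  - /srv/osd1
  - /srv/osd2

The preparation step could then, as far as I know, pass a directory path to ceph-disk prepare instead of a block device.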

Enable cluster name support

By default we assume that the cluster is named 'ceph'; it would be nice to allow defining another name for the cluster.
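
A sketch of what this could involve (the cluster variable is hypothetical): set the name once and thread it through the config path and every CLI call.

# group_vars/all (sketch)
cluster: mycluster

# commands the playbook would then have to issue, with the config rendered to /etc/ceph/mycluster.conf
ceph --cluster mycluster health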

VLAN interfaces break ceph.conf template

TASK: [ceph-common | Generate Ceph configuration file] ************************
fatal: [ceph04] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ansible_bond0.1062'", 'failed': True}
fatal: [ceph04] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'dict object' has no attribute 'ansible_bond0.1062'", 'failed': True}
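
Ansible sanitises interface names when building facts, so bond0.1062 becomes ansible_bond0_1062 (dots and dashes turn into underscores). A hedged fix in ceph.conf.j2, assuming the template builds the fact name from an interface variable as the error suggests, is to normalise the name before the lookup:

{{ hostvars[host]['ansible_' + (monitor_interface | replace('.', '_') | replace('-', '_'))]['ipv4']['address'] }}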

Typo in purge.yml

There are too many --- in purge.yml:

ERROR: Syntax Error while loading YAML script, purge.yml
Note: The error may actually appear before this position: line 5, column 1


---
^

Autogeneration of FSID and Mon secret

Currently, both the fsid and the mon secret are hardcoded.
This is a big security hole.

Both should be autogenerated on the first run (unless they are set manually in the configuration, or a running cluster already exists); a sketch follows below.
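
A rough sketch of the generation step, run once and only when nothing is set (cluster_uuid matches the variable already referenced in group_vars/all; monitor_keyring is a made-up name):

- name: generate cluster fsid
  command: uuidgen
  register: cluster_uuid
  run_once: true
  when: fsid is not defined

- name: generate monitor secret
  command: ceph-authtool --gen-print-key
  register: monitor_keyring
  run_once: true
  when: monitor_secret is not defined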

Support for multiple journal devices

Support for multiple journal devices should be added so that it becomes possible to map osd1 to ssd1.partition1, osd2 to ssd2.partition1, and so on.

Something like this:

/dev/sda1 => /dev/sdx1
/dev/sdb1 => /dev/sdx2
/dev/sdc1 => /dev/sdy1
/dev/sdd1 => /dev/sdy2

sd[a-d] are spinning disks used as OSDs;
sd[x-y] are SSDs used as journals, with multiple partitions on each.
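
With the existing devices / raw_journal_devices variables, this mapping could be expressed by pairing the lists index by index and repeating each SSD once per journal it should carry (a sketch, assuming the role creates one journal partition per occurrence):

raw_multi_journal: true
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
raw_journal_devices:
  - /dev/sdx
  - /dev/sdx
  - /dev/sdy
  - /dev/sdy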

[Documentation issue] - A waiting problem for mons

Hello again,
Since I wanted to make good use of your wiki, I reset my whole cluster and tested it. The good news is that the OSDs are perfectly OK, but for the mons I am stuck here. Is it caused by a missing variable? IMHO it could be, since when I tried enabling a scenario in the var files I never encountered that issue.

radosgw socket is wrongly hardcoded

The socket name is hardcoded in rgw.conf as "/tmp/radosgw.sock", but with the latest changes the RADOS Gateway socket in ceph.conf.j2 is defined as "rgw socket path = /tmp/radosgw-{{ hostvars[host]['ansible_hostname'] }}.sock", so FastCGI doesn't work because it cannot find the socket.

The socket in "https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-radosgw/templates/rgw.conf" should be defined as below:

FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw-{{ ansible_hostname }}.sock

Thank you for the playbook, it helps a lot with the initial deployment.
