
gluster-charm's Introduction

Gluster Charm

GlusterFS is an open source, distributed file system capable of scaling to several petabytes (actually, 72 brontobytes!) and handling thousands of clients. GlusterFS clusters together storage building blocks over Infiniband RDMA or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design and can deliver exceptional performance for diverse workloads.

Usage

The gluster charm has defaults in the config.yaml that you will want to change for production. Please note that volume_name, cluster_type, and replication_level are immutable options. Changing them post-deployment will have no effect.
This charm makes use of juju storage. Please read the docs to learn about adding block storage to your units.

volume_name:
    Whatever name you would like to call your gluster volume.
cluster_type:
    The default here is Replicate but you can also set it to
     * Distribute
     * Stripe
     * Replicate
     * StripedAndReplicate
     * Disperse
     * DistributedAndStripe
     * DistributedAndReplicate
     * DistributedAndStripedAndReplicate
     * DistributedAndDisperse
replication_level:
    The default here is 2.
    If you don't know what any of these options mean, don't worry; the defaults are sane.
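
Putting the options above together, a minimal config.yaml sketch might look like the following. The values are illustrative, and the top-level key assumes the application is deployed under the name gluster:

```yaml
gluster:
  volume_name: test
  cluster_type: Replicate
  replication_level: 2
```

Pass this file at deploy time with the --config flag, as shown in the Deploy section.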

Actions

This charm provides several actions to help manage your Gluster cluster.

  1. Creating volume quotas. Example: juju action do --unit gluster/0 create-volume-quota volume=test usage-limit=1000MB
  2. Deleting volume quotas. Example: juju action do --unit gluster/0 delete-volume-quota volume=test
  3. Listing the current volume quotas. Example: juju action do --unit gluster/0 list-volume-quotas volume=test
  4. Setting volume options. This can be used to set several volume options at once. Example: juju action do --unit gluster/0 set-volume-options volume=test performance-cache-size=1GB performance-write-behind-window-size=1MB

Building from Source

The charm comes packaged with an already built binary in ./hooks/main which is built for x86-64. A rebuild would be required for other architectures.

  1. Install rust stable for your platform

  2. Install cargo

  3. Install libudev-dev as a dependency.

  4. cd into the charm directory and run:

     cargo build --release
    
  5. Copy the built target:

     cp target/release/main hooks/main
    

If you would like debug flags enabled, rebuild with cargo build and copy target/debug/main to hooks/main.

That should provide you with a binary.

Configure

Create a config.yaml file to set any options you would like to change from the defaults.

Deploy

This charm requires juju storage with at least 1 block device. For more information, please check out the docs.

Example EC2 deployment on Juju 1.25:
juju deploy cs:~xfactor973/xenial/gluster-3 -n 3 --config=~/gluster.yaml --storage brick=ebs,10G,2

To scale out the service use this command:
juju add-unit gluster

(keep adding units to keep adding more bricks and storage)

Scale Out

Note that during a scale-out operation, existing files on the cluster will not be migrated to the new bricks until a gluster volume rebalance <volume> start operation is performed. Rebalancing can slow client traffic, so it is left to the administrator to run it at an appropriate time.

Rolling Upgrades

The config.yaml source option is used to kick off a rolling upgrade of your cluster. The current behavior is to install the new packages and upgrade the servers one by one, in an order sorted by peer UUID. Please note that replica 3 is required to use rolling upgrades; with replica 2 it's possible to have split-brain issues.

Testing

For a simple test, deploy 4 gluster units like so:

juju deploy gluster -n 4 --config=~/gluster.yaml --storage brick=local,10G

Once the units have started, the charm will bring them together into a cluster and create a volume.
You will know the cluster is ready when you see a status of active.

Now you can mount the exported GlusterFS filesystem with either fuse or NFS. Fuse has the advantage of knowing how to talk to all replicas in your Gluster cluster, so it will not need other high availability software. NFSv3 is point-to-point, so it will need something like virtual IPs, DNS round robin, or another mechanism to ensure availability if a unit should die or go away suddenly. Install the glusterfs-client package on your host. You can reference the ./hooks/install file to see how to install the glusterfs packages.

On your juju host you can mount Gluster with fuse like so:

mount -t glusterfs <ip or hostname of unit>:/<volume_name> mount_point/
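
To make the fuse mount persist across reboots, an /etc/fstab entry along these lines can be used (the hostname and volume name are placeholders; _netdev delays mounting until the network is up):

```
server1:/test  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```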

High Availability

There are three ways you can achieve high availability with Gluster.

  1. The first and easiest method is to simply use the glusterfs fuse mount on all clients. This has the advantage of knowing where all servers in the cluster are, reconnecting as needed, and failing over gracefully.
  2. Using virtual IP addresses with a DNS round robin A record. This solution applies to NFSv3. This method is more complicated but has the advantage of being usable on clients that only support NFSv3. NFSv3 is stateless, which can be used to your advantage by floating virtual IP addresses that fail over quickly. To use this, set the virtual_ip_addresses option in config.yaml after reading the usage notes.
  3. Using the Gluster coreutils.
    If you do not need a mount point then this is a viable option.
    glusterfs-coreutils provides a set of basic utilities such as cat, cp, flock, ls, mkdir, rm, stat and tail that are implemented specifically using the GlusterFS API commonly known as libgfapi. These utilities can be used either inside a gluster remote shell or as standalone commands with 'gf' prepended to their respective base names. Example usage is shown here: Docs
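
For option 2, the virtual_ip_addresses option is set via config.yaml. The authoritative format is documented in the charm's own config.yaml; as an illustrative sketch only (the CIDR-list format and addresses below are assumptions), it might look like:

```yaml
gluster:
  virtual_ip_addresses: "10.0.0.10/24 10.0.0.11/24 10.0.0.12/24"
```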

MultiTenancy

Gluster provides a few easy ways to have multiple clients in the same volume without them knowing about one another.

  1. Deep Mounting. Gluster NFS supports deep mounting, which allows the sysadmin to create a top-level directory for each client. Then, instead of mounting the volume, you mount the volume plus the directory name, so the client only sees their own files. This doesn't stop a malicious client from remounting the top-level directory.
  • This can be combined with POSIX ACLs if your tenants are not trustworthy.
  • Another option is combining with Netgroups. This feature allows users to restrict access to specific IPs (exports authentication), a netgroup (netgroups authentication), or a combination of both, for both Gluster volumes and subdirectories within Gluster volumes.
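
As a sketch of deep mounting over NFSv3 (the server, volume, and tenant directory names are hypothetical), each client mounts only its own subdirectory rather than the whole volume:

```
# tenant-a's client sees only the files under /test/tenant-a
mount -t nfs -o vers=3 server1:/test/tenant-a /mnt/data
```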

Filesystem Support:

The charm currently supports several filesystems: Btrfs, Ext4, XFS, and ZFS. The default filesystem can be set in the config.yaml. The charm currently defaults to XFS, but ZFS would likely be a safe choice and enables advanced functionality such as bcache-backed gluster bricks. Note: the ZFS filesystem requires Ubuntu 16.04 or greater.

Notes:

If you're using containers to test Gluster, you might need to edit /etc/default/lxc-net; read its last section if you want lxcbr0's dnsmasq to resolve the .lxc domain.

Now to show that your cluster can handle failure you can:

juju destroy-machine n;

This will remove one of the units from your cluster and simulate a hard failure. List your files on the mount point to show that they are still available.

Reference

For more information about Gluster and operation of a cluster, please see: https://gluster.readthedocs.org/en/latest/
For more immediate and interactive help, please join the #gluster IRC channel on Freenode.
Gluster also has a users mailing list: https://www.gluster.org/mailman/listinfo/gluster-users
For bugs concerning the Juju charm, please file them on Github.

gluster-charm's People

Contributors

cholcombe973, chrismacnaughton, dshcherb


gluster-charm's Issues

"Skipping invalid block device" when using a scsi disk + juju storage with tagged maas disks

Tried to do a test on qemu virtual machines with virtio-scsi disks tagged in MAAS.

ubuntu-q87:/tmp$ juju --version
2.2-beta4-xenial-amd64

ubuntu@maas:~$ maas maas version read
Success.
Machine-readable output follows:
{"subversion": "bzr6054-0ubuntu1~16.04.1", "version": "2.2.0", "capabilities": ["networks-management", "static-ipaddresses", "ipv6-deployment-ubuntu", "devices-management", "storage-deployment-ubuntu", "network-deployment-ubuntu", "bridging-interface-ubuntu", "bridging-automatic-ubuntu", "authenticate-api"]}


ubuntu-q87:/tmp$ juju create-storage-pool glusterpool maas tags=jstorage

ubuntu-q87:/tmp$ juju storage-pools 
Name         Provider  Attrs
glusterpool  maas      tags=jstorage
loop         loop      
maas         maas      
rootfs       rootfs    
tmpfs        tmpfs     

ubuntu-q87:/tmp$ juju deploy ./gluster-charm -n 3 --storage brick='glusterpool'
Deploying charm "local:xenial/gluster-0".

# for better error reporting of mke2fs
ubuntu-q87:/tmp$ juju config gluster filesystem_type
ext4

ubuntu-q87:/tmp$ juju status
Model    Controller  Cloud/Region  Version    SLA
default  vmaas       vmaas         2.2-beta4  unsupported

App      Version  Status   Scale  Charm    Store  Rev  OS      Notes
gluster  3.10.2   blocked      3  gluster  local    0  ubuntu  

Unit        Workload  Agent  Machine  Public address  Ports  Message
gluster/0   blocked   idle   0        10.10.101.65           No bricks found
gluster/1*  blocked   idle   1        10.10.101.63           No bricks found
gluster/2   blocked   idle   2        10.10.101.64           No bricks found

Machine  State    DNS           Inst id  Series  AZ       Message
0        started  10.10.101.65  xcgtpc   xenial  default  Deployed
1        started  10.10.101.63  rynftc   xenial  default  Deployed
2        started  10.10.101.64  rw7fq3   xenial  default  Deployed

Relation  Provides  Consumes  Type

The block device seems to be skipped (see the log below)
"Skipping invalid block device: "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1""

ubuntu@maas-xenial5:~$ readlink /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1
../../sdb
ubuntu@maas-xenial5:~$ lsblk 
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  64G  0 disk 
└─sda1   8:1    0  64G  0 part /
sdb      8:16   0  64G  0 disk 

ubuntu@maas-xenial5:~$ lsscsi 
[2:0:0:0]    disk    QEMU     QEMU HARDDISK    2.5+  /dev/sda 
[2:0:0:1]    disk    QEMU     QEMU HARDDISK    2.5+  /dev/sdb 
unit-gluster-0: 19:31:15 INFO unit.gluster/0.juju-log Checking for new devices
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log Checking for ephemeral unmount
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log Gathering list of manually specified brick devices
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log List of manual storage brick devices: []
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log Gathering list of juju storage brick devices
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log List of juju storage brick devices: ["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"]
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log Checking if "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1" is a block device
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log Skipping invalid block device: "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log storage devices: []
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log Usable brick paths: []
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-0: 19:31:15 DEBUG unit.gluster/0.juju-log No upgrade requested
unit-gluster-0: 19:31:15 INFO unit.gluster/0.config-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-0: 19:31:15 INFO unit.gluster/0.config-changed 
unit-gluster-0: 19:31:16 INFO unit.gluster/0.server-relation-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-0: 19:31:16 INFO unit.gluster/0.server-relation-changed 
unit-gluster-1: 19:31:17 DEBUG unit.gluster/1.juju-log server:0: I am the leader: 0
unit-gluster-1: 19:31:18 INFO unit.gluster/1.juju-log server:0: Loading config
unit-gluster-1: 19:31:18 DEBUG unit.gluster/1.juju-log server:0: peer list: [UUID: bc6b036f-2bae-4f2a-8f61-dd04fe1446ae Hostname: 10.10.101.63 Status: Connected]
unit-gluster-2: 19:31:18 INFO unit.gluster/2.server-relation-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-2: 19:31:18 INFO unit.gluster/2.server-relation-changed 
unit-gluster-0: 19:31:18 INFO unit.gluster/0.server-relation-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-0: 19:31:18 INFO unit.gluster/0.server-relation-changed 
unit-gluster-1: 19:31:18 DEBUG unit.gluster/1.juju-log server:0: relation-list output: gluster/0

unit-gluster-1: 19:31:18 DEBUG unit.gluster/1.juju-log server:0: Adding in related_units: [Relation { name: "gluster", id: 0 }]
unit-gluster-1: 19:31:18 DEBUG unit.gluster/1.juju-log server:0: Adding 10.10.101.65 to cluster
unit-gluster-1: 19:31:18 DEBUG unit.gluster/1.juju-log server:0: Gluster peer probe was successful
unit-gluster-1: 19:31:18 INFO unit.gluster/1.server-relation-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-1: 19:31:18 INFO unit.gluster/1.server-relation-changed 
unit-gluster-1: 19:31:18 INFO unit.gluster/1.juju-log server:0: Creating volume test
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Waiting for all peers to enter the Peer in Cluster status
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Got peer status: [UUID: c5642b67-77bf-4669-af24-30817c3ff55d Hostname: 10.10.101.65 Status: peer in cluster]
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Checking for ephemeral unmount
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Gathering list of manually specified brick devices
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: List of manual storage brick devices: []
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Gathering list of juju storage brick devices
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: List of juju storage brick devices: ["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"]
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Checking if "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1" is a block device
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Skipping invalid block device: "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: storage devices: []
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Usable brick paths: []
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Volume is none
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Not enough peers to satisfy the replication level for the Gluster volume.  Waiting for more peers to join.
unit-gluster-1: 19:31:19 INFO unit.gluster/1.juju-log server:0: Waiting for more peers
unit-gluster-1: 19:31:19 DEBUG unit.gluster/1.juju-log server:0: Waiting for all peers to enter the Peer in Cluster status
unit-gluster-1: 19:31:19 ERROR unit.gluster/1.juju-log server:0: Hook failed with error: Some("Mount failed. Please check the log file for more details.\n")
unit-gluster-1: 19:31:19 INFO unit.gluster/1.server-relation-changed Volume info get command failed with error: Volume test does not exist

Adding this device to a list of brick devices without redeploying does not work:

ubuntu-q87:/tmp$ juju config gluster brick_devices=/dev/sdb

unit-gluster-1: 20:49:03 INFO unit.gluster/1.juju-log Checking for new devices
unit-gluster-0: 20:49:03 INFO unit.gluster/0.juju-log Checking for new devices
unit-gluster-2: 20:49:03 INFO unit.gluster/2.juju-log Checking for new devices
unit-gluster-1: 20:49:03 DEBUG unit.gluster/1.juju-log Checking for ephemeral unmount
unit-gluster-0: 20:49:03 DEBUG unit.gluster/0.juju-log Checking for ephemeral unmount
unit-gluster-2: 20:49:03 DEBUG unit.gluster/2.juju-log Checking for ephemeral unmount
unit-gluster-1: 20:49:03 DEBUG unit.gluster/1.juju-log Gathering list of manually specified brick devices
unit-gluster-2: 20:49:03 DEBUG unit.gluster/2.juju-log Gathering list of manually specified brick devices
unit-gluster-0: 20:49:03 DEBUG unit.gluster/0.juju-log Gathering list of manually specified brick devices
unit-gluster-2: 20:49:03 DEBUG unit.gluster/2.juju-log List of manual storage brick devices: ["/dev/sdb"]
unit-gluster-1: 20:49:03 DEBUG unit.gluster/1.juju-log List of manual storage brick devices: ["/dev/sdb"]
unit-gluster-0: 20:49:03 DEBUG unit.gluster/0.juju-log List of manual storage brick devices: ["/dev/sdb"]
unit-gluster-2: 20:49:03 DEBUG unit.gluster/2.juju-log Checking if "/dev/sdb" is a block device
unit-gluster-1: 20:49:03 DEBUG unit.gluster/1.juju-log Checking if "/dev/sdb" is a block device
unit-gluster-0: 20:49:03 DEBUG unit.gluster/0.juju-log Checking if "/dev/sdb" is a block device
unit-gluster-2: 20:49:03 DEBUG unit.gluster/2.juju-log Checking if "/dev/sdb" is initialized
unit-gluster-2: 20:49:03 DEBUG unit.gluster/2.juju-log Connecting to unitdata storage
unit-gluster-0: 20:49:03 DEBUG unit.gluster/0.juju-log Checking if "/dev/sdb" is initialized
unit-gluster-1: 20:49:03 DEBUG unit.gluster/1.juju-log Checking if "/dev/sdb" is initialized
unit-gluster-0: 20:49:03 DEBUG unit.gluster/0.juju-log Connecting to unitdata storage
unit-gluster-1: 20:49:03 DEBUG unit.gluster/1.juju-log Connecting to unitdata storage
unit-gluster-2: 20:49:03 DEBUG unit.gluster/2.juju-log Getting unit_info
unit-gluster-0: 20:49:03 DEBUG unit.gluster/0.juju-log Getting unit_info
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log Getting unit_info
unit-gluster-2: 20:49:04 DEBUG unit.gluster/2.juju-log unit_info: None
unit-gluster-0: 20:49:04 DEBUG unit.gluster/0.juju-log unit_info: None
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log unit_info: None
unit-gluster-2: 20:49:04 DEBUG unit.gluster/2.juju-log Gathering list of juju storage brick devices
unit-gluster-0: 20:49:04 DEBUG unit.gluster/0.juju-log Gathering list of juju storage brick devices
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log Gathering list of juju storage brick devices
unit-gluster-0: 20:49:04 DEBUG unit.gluster/0.juju-log List of juju storage brick devices: ["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"]
unit-gluster-2: 20:49:04 DEBUG unit.gluster/2.juju-log List of juju storage brick devices: ["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"]
unit-gluster-0: 20:49:04 DEBUG unit.gluster/0.juju-log Checking if "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1" is a block device
unit-gluster-2: 20:49:04 DEBUG unit.gluster/2.juju-log Checking if "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1" is a block device
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log List of juju storage brick devices: ["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"]
unit-gluster-0: 20:49:04 DEBUG unit.gluster/0.juju-log Skipping invalid block device: "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log Checking if "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1" is a block device
unit-gluster-2: 20:49:04 DEBUG unit.gluster/2.juju-log Skipping invalid block device: "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"
unit-gluster-0: 20:49:04 DEBUG unit.gluster/0.juju-log storage devices: [BrickDevice { is_block_device: true, initialized: false, mount_path: "/mnt/sdb", dev_path: "/dev/sdb" }]
unit-gluster-2: 20:49:04 DEBUG unit.gluster/2.juju-log storage devices: [BrickDevice { is_block_device: true, initialized: false, mount_path: "/mnt/sdb", dev_path: "/dev/sdb" }]
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log Skipping invalid block device: "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"
unit-gluster-0: 20:49:04 DEBUG unit.gluster/0.juju-log Calling initialize_storage for "/dev/sdb"
unit-gluster-2: 20:49:04 DEBUG unit.gluster/2.juju-log Calling initialize_storage for "/dev/sdb"
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log storage devices: [BrickDevice { is_block_device: true, initialized: false, mount_path: "/mnt/sdb", dev_path: "/dev/sdb" }]
unit-gluster-1: 20:49:04 DEBUG unit.gluster/1.juju-log Calling initialize_storage for "/dev/sdb"
unit-gluster-2: 20:49:04 INFO unit.gluster/2.juju-log Formatting block device with Ext4: "/dev/sdb"
unit-gluster-0: 20:49:04 INFO unit.gluster/0.juju-log Formatting block device with Ext4: "/dev/sdb"
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed mke2fs 1.42.13 (17-May-2015)
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed Discarding device blocks: done                            
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed Creating filesystem with 16777216 4k blocks and 4194304 inodes
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed Filesystem UUID: c648a84c-e273-4c28-bbcc-66a057cda5ca
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed Superblock backups stored on blocks: 
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed 	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed 	4096000, 7962624, 11239424
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed 
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed Allocating group tables: done                            
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed Writing inode tables: done                            
unit-gluster-1: 20:49:04 INFO unit.gluster/1.juju-log Formatting block device with Ext4: "/dev/sdb"
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed mke2fs 1.42.13 (17-May-2015)
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed Discarding device blocks: done                            
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed Creating filesystem with 16777216 4k blocks and 4194304 inodes
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed Filesystem UUID: 7c2e7277-a647-4e40-91f9-b2de3e86500d
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed Superblock backups stored on blocks: 
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed 	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed 	4096000, 7962624, 11239424
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed 
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed Allocating group tables: done                            
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed Writing inode tables: done                            
unit-gluster-2: 20:49:04 INFO unit.gluster/2.config-changed Creating journal (32768 blocks): done
unit-gluster-0: 20:49:04 INFO unit.gluster/0.config-changed Creating journal (32768 blocks): done
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed mke2fs 1.42.13 (17-May-2015)
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed Discarding device blocks: done                            
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed Creating filesystem with 16777216 4k blocks and 4194304 inodes
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed Filesystem UUID: 5a4762f2-ed6e-4dca-8b89-da1012a1cc9c
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed Superblock backups stored on blocks: 
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed 	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed 	4096000, 7962624, 11239424
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed 
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed Allocating group tables: done                            
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed Writing inode tables: done                            
unit-gluster-1: 20:49:04 INFO unit.gluster/1.config-changed Creating journal (32768 blocks): done
unit-gluster-1: 20:49:08 INFO unit.gluster/1.config-changed Writing superblocks and filesystem accounting information: done   
unit-gluster-1: 20:49:08 INFO unit.gluster/1.config-changed 
unit-gluster-2: 20:49:08 INFO unit.gluster/2.config-changed Writing superblocks and filesystem accounting information: done   
unit-gluster-2: 20:49:08 INFO unit.gluster/2.config-changed 
unit-gluster-2: 20:49:08 DEBUG unit.gluster/2.juju-log Command output: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
unit-gluster-1: 20:49:08 DEBUG unit.gluster/1.juju-log Command output: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
unit-gluster-2: 20:49:08 INFO unit.gluster/2.juju-log device_info: Device { id: Some(Uuid("c648a84c-e273-4c28-bbcc-66a057cda5ca")), name: "sdb", media_type: Unknown, capacity: 68719476736, fs_type: Ext4 }
unit-gluster-2: 20:49:08 INFO unit.gluster/2.juju-log Mounting block device "/dev/sdb" at /mnt/sdb
unit-gluster-1: 20:49:08 INFO unit.gluster/1.juju-log device_info: Device { id: Some(Uuid("5a4762f2-ed6e-4dca-8b89-da1012a1cc9c")), name: "sdb", media_type: Unknown, capacity: 68719476736, fs_type: Ext4 }
unit-gluster-1: 20:49:08 INFO unit.gluster/1.juju-log Mounting block device "/dev/sdb" at /mnt/sdb
unit-gluster-0: 20:49:09 INFO unit.gluster/0.config-changed Writing superblocks and filesystem accounting information: done   
unit-gluster-0: 20:49:09 INFO unit.gluster/0.config-changed 
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log Command output: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
unit-gluster-2: 20:49:09 INFO unit.gluster/2.juju-log Creating mount directory: /mnt/sdb
unit-gluster-1: 20:49:09 INFO unit.gluster/1.juju-log Creating mount directory: /mnt/sdb
unit-gluster-0: 20:49:09 INFO unit.gluster/0.juju-log device_info: Device { id: Some(Uuid("7c2e7277-a647-4e40-91f9-b2de3e86500d")), name: "sdb", media_type: Unknown, capacity: 68719476736, fs_type: Ext4 }
unit-gluster-0: 20:49:09 INFO unit.gluster/0.juju-log Mounting block device "/dev/sdb" at /mnt/sdb
unit-gluster-1: 20:49:09 DEBUG unit.gluster/1.juju-log Command output: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
unit-gluster-1: 20:49:09 DEBUG unit.gluster/1.juju-log Adding FsEntry { fs_spec: "UUID=5a4762f2-ed6e-4dca-8b89-da1012a1cc9c", mountpoint: "/mnt/sdb", vfs_type: "ext4", mount_options: ["noatime", "inode64"], dump: false, fsck_order: 2 } to fstab
unit-gluster-2: 20:49:09 DEBUG unit.gluster/2.juju-log Command output: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
unit-gluster-0: 20:49:09 INFO unit.gluster/0.juju-log Creating mount directory: /mnt/sdb
unit-gluster-2: 20:49:09 DEBUG unit.gluster/2.juju-log Adding FsEntry { fs_spec: "UUID=c648a84c-e273-4c28-bbcc-66a057cda5ca", mountpoint: "/mnt/sdb", vfs_type: "ext4", mount_options: ["noatime", "inode64"], dump: false, fsck_order: 2 } to fstab
unit-gluster-1: 20:49:09 INFO unit.gluster/1.juju-log Removing mount path from updatedb "/mnt/sdb"
unit-gluster-1: 20:49:09 DEBUG unit.gluster/1.juju-log Usable brick paths: ["/mnt/sdb"]
unit-gluster-1: 20:49:09 DEBUG unit.gluster/1.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-1: 20:49:09 DEBUG unit.gluster/1.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log Command output: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log Adding FsEntry { fs_spec: "UUID=7c2e7277-a647-4e40-91f9-b2de3e86500d", mountpoint: "/mnt/sdb", vfs_type: "ext4", mount_options: ["noatime", "inode64"], dump: false, fsck_order: 2 } to fstab
unit-gluster-2: 20:49:09 INFO unit.gluster/2.juju-log Removing mount path from updatedb "/mnt/sdb"
unit-gluster-2: 20:49:09 DEBUG unit.gluster/2.juju-log Usable brick paths: ["/mnt/sdb"]
unit-gluster-0: 20:49:09 INFO unit.gluster/0.juju-log Removing mount path from updatedb "/mnt/sdb"
unit-gluster-1: 20:49:09 DEBUG unit.gluster/1.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-2: 20:49:09 DEBUG unit.gluster/2.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log Usable brick paths: ["/mnt/sdb"]
unit-gluster-1: 20:49:09 DEBUG unit.gluster/1.juju-log No upgrade requested
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-2: 20:49:09 DEBUG unit.gluster/2.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-2: 20:49:09 DEBUG unit.gluster/2.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-2: 20:49:09 DEBUG unit.gluster/2.juju-log No upgrade requested
unit-gluster-1: 20:49:09 INFO unit.gluster/1.config-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-1: 20:49:09 INFO unit.gluster/1.config-changed 
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log .juju-persistent-config saved.  Wrote 445 bytes
unit-gluster-0: 20:49:09 DEBUG unit.gluster/0.juju-log No upgrade requested
unit-gluster-2: 20:49:10 INFO unit.gluster/2.config-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-2: 20:49:10 INFO unit.gluster/2.config-changed 
unit-gluster-0: 20:49:10 INFO unit.gluster/0.config-changed Volume info get command failed with error: Volume test does not exist
unit-gluster-0: 20:49:10 INFO unit.gluster/0.config-changed 

Same with a clean install with

  brick_devices:
    type: string
    default: /dev/sdb

https://paste.ubuntu.com/24693720/

Updating the charm icon

Hi,

The design team at Canonical would like to update charm icons to fit within the new circular format, as displayed in the current GUI.

This will improve how charms are displayed in the store, search results and on charm details pages.

I've attached a new version of the charm icon. If you're happy with this change, please could you update the icon and let us know when it's done.

gluster.svg.zip

Thanks

Implement Arbiter Volumes

Gluster has a newish feature that allows a 2x replica to have a 3x replica's safety. It does this by making the 3rd replica a metadata-only replica. More info

Support heterogeneous machines

The charm today assumes that all hosts it is installed on have an identical number of disks. This is a poor assumption for the real world.

Charm doesn't support raid stripe alignment

The charm doesn't detect any raid stripe alignment to feed into mkfs yet.

mkfs.xfs -i size=512 -d size=8192,su=768k,sw=3 (just an example)

With lvm we could use lvm utilities to check for:
--chunksize
--physicalextentsize

Make charm work on Centos

Currently this charm only works on Ubuntu. A small amount of modification would be needed to get it working on CentOS. I believe the changes center around the install script in the hooks folder.
