A French version of this post is available.

Moby Dock, the Docker whale mascot

Okay, setting up the Gluster cluster with a node that might be shut down at any time may not have been my best move. But it was only meant to get me acquainted with Gluster on bare metal nodes.

My Synology NAS is my third node of choice for cluster software. But it is not a Debian box and, as such, I can't easily do what I'm used to doing.

Fortunately, there is Docker on it, and it can communicate with my own private registry. This opens up some new perspectives!

I will add ofuro.onsen.lan (IP 192.168.0.77) to the cluster and remove buro.onsen.lan (IP 192.168.0.7) from it.

Packaging Gluster as a Docker image

Okay, so we need to package a Docker image that is as close as possible to my other nodes.

  • Let's start from a debian:stable-slim.
  • Install Gluster in it.
  • Start glusterd.
  • ????
  • Profit!!!

We will run on the host network, so make sure that these ports are available on both TCP and UDP:

  • 24007 for Gluster Daemon
  • 24008 for RDMA (I don't use it, but you may want to.)
  • 49152 and up for the bricks (each brick gets its own port at 49151 + its brick number)
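
Before launching anything, a quick way to check that nothing on the Synology already listens on these ports (a sketch using netstat, which DSM ships; no output means they are free):

$ sudo netstat -tulpn | grep -E ':(24007|24008|49152)'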

Given the following Dockerfile:

FROM debian:stable-slim

# Install the Gluster server and drop the apt cache to keep the image slim
RUN apt-get update \
    && apt-get install --no-install-recommends --yes glusterfs-server \
    && rm -rf /var/lib/apt/lists/*

# Build-time defaults; they end up as environment variables, so they can also be overridden at run time
ARG GLUSTERD_LOG_LEVEL
ARG GLUSTERD_LOG_FILE
ARG GLUSTERD_OPTIONS
ENV GLUSTERD_LOG_LEVEL="${GLUSTERD_LOG_LEVEL:-INFO}" GLUSTERD_LOG_FILE="${GLUSTERD_LOG_FILE:--}" GLUSTERD_OPTIONS="${GLUSTERD_OPTIONS:-}"

# Keep glusterd in the foreground and log to the console so `docker logs` has something to show
CMD /usr/sbin/glusterd --log-level $GLUSTERD_LOG_LEVEL --log-file $GLUSTERD_LOG_FILE --no-daemon $GLUSTERD_OPTIONS
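
Since the defaults are wired through ARG and ENV, they are easy to override. For instance, to bake a more verbose default log level into a separate debug image (hypothetical tag, same Dockerfile), something like this should do:

$ docker build --build-arg GLUSTERD_LOG_LEVEL=DEBUG \
    --tag registry.onsen.lan:5000/gluster:0.1-debug .

The same variables can also be overridden at run time with docker run -e, since they end up as plain environment variables.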

Let's build it and push it to our private registry:

$ docker login registry.onsen.lan:5000
$ docker build --tag registry.onsen.lan:5000/gluster:0.1 .
$ docker push registry.onsen.lan:5000/gluster:0.1
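
On the Synology side, the registry must be reachable as well. If it is self-signed or plain HTTP, you may need to declare it as an insecure registry in Docker's settings first; then logging in and pulling the image should look roughly like this:

$ sudo docker login registry.onsen.lan:5000
$ sudo docker pull registry.onsen.lan:5000/gluster:0.1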

Running Gluster on the Synology

Preparing the brick on the Synology

We should have a logical volume formatted as btrfs in one of our volume groups. Any other filesystem supporting extended attributes will do; your mileage may vary.

Mine is volume6. Let's create the folder that will hold the brick data, and another one that will keep Gluster node metadata:

$ sudo mkdir -p /volume6/docker/glusterfs/onsen-gv0/brick1/
$ sudo mkdir -p /volume6/docker/glusterfs/var/lib/glusterd/
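
To double-check that the brick filesystem really honours extended attributes, a quick test like the following can help, assuming setfattr and getfattr are available on the DSM shell (they may not be on every model):

$ cd /volume6/docker/glusterfs/onsen-gv0/brick1/
$ sudo touch xattr-test
$ sudo setfattr -n user.check -v ok xattr-test && sudo getfattr -n user.check xattr-test
# file: xattr-test
user.check="ok"
$ sudo rm xattr-test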

Launching Gluster image

We will:

  • run the docker image from our private registry
  • run it in detached mode
  • name the container gluster-onsen
  • run it as privileged, because Gluster needs to set extended attributes on the brick's data
  • bind the previously created brick directory as a read/write volume, at the same path inside the container as on our other Gluster nodes
  • bind the node metadata directory as a read/write volume as well

$ docker run -d \
    --name gluster-onsen \
    --privileged=true \
    --net host \
    --mount type=bind,source=/volume6/docker/glusterfs/onsen-gv0,target=/srv/glusterfs/onsen-gv0 \
    --mount type=bind,source=/volume6/docker/glusterfs/var/lib/glusterd,target=/var/lib/glusterd \
    registry.onsen.lan:5000/gluster:0.1
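
One detail worth considering: the Synology will reboot for DSM updates, and the Gluster peer should come back with it. A restart policy takes care of that; for example (you could also pass --restart unless-stopped directly to docker run):

$ sudo docker update --restart unless-stopped gluster-onsen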

Okay, let's do some sanity checks:

$ sudo docker logs gluster-onsen
[2020-04-26 15:24:46.676612] I [MSGID: 100030] [glusterfsd.c:2715:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 5.5 (args: /usr/sbin/glusterd --log-level INFO --log-file - --no-daemon)
...

$ sudo netstat -tupan|grep gluster
tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      21153/glusterfsd
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      20377/glusterd

Woot!

Joining the Gluster Cluster

Adding the node to the Trusted Pool

To trust a node in a Gluster cluster, it must be probed from an already trusted peer. From one of these trusted nodes, probe the new peer:

$ sudo gluster peer probe 192.168.0.77
peer probe: success.
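
To confirm the probe, the new node should now show up as "Peer in Cluster (Connected)" in the pool:

$ sudo gluster peer status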

Adding the node's brick to the Cluster volume

Next, we will extend our Gluster volume with this node's brick using the volume add-brick command. We tell Gluster to also replicate the content onto this brick by increasing the replica count.

$ sudo gluster volume add-brick onsen-gv0 replica 4 192.168.0.77:/srv/glusterfs/onsen-gv0/brick1
volume add-brick: success
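
A quick look at the volume definition should now list four bricks with the new replica count (output omitted here):

$ sudo gluster volume info onsen-gv0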

Checking replication status

By running volume heal <volume> info summary, we can check the replication status. Here we have a normal situation; everything is fine:

$ sudo gluster volume heal onsen-gv0 info summary
Brick 192.168.0.2:/srv/glusterfs/onsen-gv0/brick1
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick 192.168.0.1:/srv/glusterfs/onsen-gv0/brick1
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick 192.168.0.7:/srv/glusterfs/onsen-gv0/brick1
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick 192.168.0.77:/srv/glusterfs/onsen-gv0/brick1
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

And here is an example status for illustration purposes. I wrote to the Gluster volume with a client and rebooted the Synology in the meantime. While it was down, the other nodes detected missing replication for some files:

$ sudo gluster volume heal onsen-gv0 info summary
Brick 192.168.0.2:/srv/glusterfs/onsen-gv0/brick1
Status: Connected
Total Number of entries: 7
Number of entries in heal pending: 7
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick 192.168.0.1:/srv/glusterfs/onsen-gv0/brick1
Status: Connected
Total Number of entries: 7
Number of entries in heal pending: 7
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick 192.168.0.7:/srv/glusterfs/onsen-gv0/brick1
Status: Connected
Total Number of entries: 7
Number of entries in heal pending: 7
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick 192.168.0.77:/srv/glusterfs/onsen-gv0/brick1
Status: Transport endpoint is not connected
Total Number of entries: -
Number of entries in heal pending: -
Number of entries in split-brain: -
Number of entries possibly healing: -

  • Transport endpoint is not connected: the node's brick isn't connected to the volume.
  • Heal pending: this node detected that some of its files did not match the replica count.

Once the Synology had finished rebooting and the Docker container was up again, the status went back to normal.
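
The self-heal daemon catches up on its own after a while; if you are impatient, it should also be possible to trigger a heal of the pending entries manually:

$ sudo gluster volume heal onsen-gv0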

All right, we have a working Gluster peer running in Docker on the Synology!
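
As a final sanity check, any client with glusterfs-client installed should be able to mount the volume through the new node (hypothetical mount point):

$ sudo mkdir -p /mnt/onsen-gv0
$ sudo mount -t glusterfs 192.168.0.77:/onsen-gv0 /mnt/onsen-gv0
$ ls /mnt/onsen-gv0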

Removing the other node's brick from the Cluster volume

We remove buro.onsen.lan's brick with the volume remove-brick command. We need to tell Gluster that we want to go back to a configuration of 3 replicas rather than 4.

Once the brick is removed, we remove the peer from the trusted pool. Done.

$ sudo gluster volume remove-brick onsen-gv0 replica 3 192.168.0.7:/srv/glusterfs/onsen-gv0/brick1 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success

$ sudo gluster peer detach 192.168.0.7
peer detach: success
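
A last look at the volume and the pool, to make sure we are back to three bricks and buro.onsen.lan is gone (output omitted):

$ sudo gluster volume info onsen-gv0
$ sudo gluster pool list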