Connecting Ceph Clients: Ceph Object Gateway and S3

In addition to block and file data access, Ceph also supports object access via the S3 or Swift protocols.

This time, we will look at what needs to be configured on the Ceph side so that clients can store data using the S3 protocol.

As a reminder, I previously described the procedure for installing Ceph Reef from scratch in this article. Here I use the same platform, as well as a client based on Rocky Linux 9.

Also, I previously wrote about connecting block devices using RBD here.

Object storage access in Ceph is provided by a component called the Object Gateway, formerly known as the RADOS Gateway (RGW). It accepts HTTP/HTTPS requests from clients and performs the corresponding actions on the Ceph side.

The procedure for providing access to clients via S3 is quite simple:

  1. Create an Object Gateway;
  2. Create an S3 user and provide the Access Key and Secret Key to the client;
  3. Using the familiar S3 API, the client places the data in Ceph.

Let’s start by checking the current state of the cluster:

[root@ceph-mon-01 ~]# ceph orch host ls
HOST         ADDR         LABELS  STATUS
ceph-mon-01  10.10.10.13  _admin
ceph-mon-02  10.10.10.14
ceph-mon-03  10.10.10.15
ceph-osd-01  10.10.10.16
ceph-osd-02  10.10.10.17
ceph-osd-03  10.10.10.18
6 hosts in cluster

My cluster consists of six nodes: three run the system services, while the remaining three store data and host only OSDs.

To deploy the Object Gateway, I will add two more nodes to the current cluster. As you might guess, this is done for fault tolerance: if one node fails, the load is transferred to the other.

By the way, the documentation recommends using at least three nodes, but for a lab setup I think two is enough.

The new hosts are named ceph-rgw-01 and ceph-rgw-02. First, prepare them: set a short hostname (there is no need to use the FQDN) and install Python 3 and Podman (or Docker):

[root@ceph-rgw-01/02 ~]# dnf install python3
[root@ceph-rgw-01/02 ~]# python3 --version
Python 3.9.18

[root@ceph-rgw-01/02 ~]# dnf install podman
[root@ceph-rgw-01/02 ~]# podman -v
podman version 4.6.1

From this point on, all actions are performed on the control node; in my case, that is ceph-mon-01.

Let’s copy the cluster’s public SSH key for passwordless access from ceph-mon-01 to ceph-rgw-01/02:

[root@ceph-mon-01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-rgw-01
[root@ceph-mon-01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-rgw-02

Add the nodes to Ceph management:

[root@ceph-mon-01 ~]# ceph orch host add ceph-rgw-01
Added host 'ceph-rgw-01' with addr '10.10.10.20'

[root@ceph-mon-01 ~]# ceph orch host add ceph-rgw-02
Added host 'ceph-rgw-02' with addr '10.10.10.21'

The list of hosts in the cluster now looks like this:

[root@ceph-mon-01 ~]# ceph orch host ls
HOST         ADDR         LABELS  STATUS
ceph-mon-01  10.10.10.13  _admin
ceph-mon-02  10.10.10.14
ceph-mon-03  10.10.10.15
ceph-osd-01  10.10.10.16
ceph-osd-02  10.10.10.17
ceph-osd-03  10.10.10.18
ceph-rgw-01  10.10.10.20
ceph-rgw-02  10.10.10.21

8 hosts in cluster

Note the new nodes ceph-rgw-01 and ceph-rgw-02.

Let’s set the rgw label for these nodes:

[root@ceph-mon-01 ~]# ceph orch host label add ceph-rgw-01 rgw
Added label rgw to host ceph-rgw-01

[root@ceph-mon-01 ~]# ceph orch host label add ceph-rgw-02 rgw
Added label rgw to host ceph-rgw-02
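
As a quick check, the host listing can be filtered by label (the --label filter is available in recent cephadm releases; output omitted here):

[root@ceph-mon-01 ~]# ceph orch host ls --label rgw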

Now let’s launch the RGW (Ceph Object Gateway) service on all nodes carrying the rgw label, using port 8080. Each node will run a single RGW daemon, although more are possible:

[root@ceph-mon-01 ~]# ceph orch apply rgw s3.vmik.lab '--placement=label:rgw count-per-host:1' --port=8080
Scheduled rgw.s3.vmik.lab update...
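
Instead of the one-liner, roughly the same deployment can also be described as a service specification and applied with ceph orch apply -i. The sketch below assumes the standard cephadm rgw spec fields; the command above is what I actually used in this article:

service_type: rgw
service_id: s3.vmik.lab
placement:
  label: rgw
  count_per_host: 1
spec:
  rgw_frontend_port: 8080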

If you connect to any of the RGW nodes, you can see the new container, as well as the open port 8080:

[root@ceph-rgw-01 ~]# podman ps
CONTAINER ID  IMAGE                                                                                      COMMAND               CREATED         STATUS         PORTS       NAMES
7ba059943580  quay.io/ceph/ceph@sha256:a4e86c750cc11a8c93453ef5682acfa543e3ca08410efefa30f520b54f41831f  -n client.ceph-ex...  38 minutes ago  Up 38 minutes              ceph-24c20e62-c4da-11ee-ba95-005056aad62a-ceph-exporter-ceph-rgw-01
9bd31e7430af  quay.io/ceph/ceph@sha256:a4e86c750cc11a8c93453ef5682acfa543e3ca08410efefa30f520b54f41831f  -n client.crash.c...  38 minutes ago  Up 38 minutes              ceph-24c20e62-c4da-11ee-ba95-005056aad62a-crash-ceph-rgw-01
4d86506b974a  quay.io/prometheus/node-exporter:v1.5.0                                                    --no-collector.ti...  38 minutes ago  Up 38 minutes              ceph-24c20e62-c4da-11ee-ba95-005056aad62a-node-exporter-ceph-rgw-01
d2f9d92711c3  quay.io/ceph/ceph@sha256:a4e86c750cc11a8c93453ef5682acfa543e3ca08410efefa30f520b54f41831f  -n client.rgw.s3....  19 seconds ago  Up 20 seconds              ceph-24c20e62-c4da-11ee-ba95-005056aad62a-rgw-s3-vmik-lab-ceph-rgw-01-rwbzbq 

[root@ceph-rgw-01 ~]# firewall-cmd --list-ports
8080/tcp 9100/tcp

The cluster status will also indicate the presence of RGW daemons:

cluster:
    id:     24c20e62-c4da-11ee-ba95-005056aad62a
    health: HEALTH_OK
services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 2w)
    mgr: ceph-mon-02.iyewzj(active, since 2w), standbys: ceph-mon-01.czyfjm, ceph-mon-03.xcpobs
    osd: 9 osds: 9 up (since 2w), 9 in (since 2w)
    rgw: 2 daemons active (2 hosts, 1 zones)

If you look at the list of pools, you will notice four new ones:

[root@ceph-mon-01 ~]# ceph osd lspools
1 .mgr
2 rbd_images_pool
3 .rgw.root
4 default.rgw.log
5 default.rgw.control
6 default.rgw.meta

With the previous commands, we launched Ceph RGW on two nodes. You can connect to either of them on port 8080 and perform S3 requests:

[root@ceph-mon-01 ~]# curl ceph-rgw-01.vmik.lab:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

[root@ceph-mon-01 ~]# curl ceph-rgw-02.vmik.lab:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

However, if one of the gateways fails, clients will have to switch to a different connection address, which is far from ideal.

This problem can be solved by deploying an ingress service based on HAProxy, with Keepalived managing a virtual IP address.

To deploy the ingress service, create a YAML manifest:

[root@ceph-mon-01 ~]# vi ./rgw_ingress.yaml
service_type: ingress
service_id: rgw.s3.vmik.lab
placement:
  hosts:
    - ceph-rgw-01
    - ceph-rgw-02
spec:
  backend_service: rgw.s3.vmik.lab
  virtual_ip: 10.10.10.22/24
  frontend_port: 80
  monitor_port: 1967
  virtual_interface_networks: 10.10.10.0/24

Where:
service_id – should preferably be associated with the RGW service it fronts;
hosts – the nodes on which the ingress services will be deployed. They don’t have to be the same nodes where the RGW daemons run, but in my case they are;
virtual_ip – the address clients will use as the entry point. If one of the nodes fails, this address moves to another node;
frontend_port – the port clients connect to. Note that if the same hosts are used for RGW and ingress, the RGW and ingress ports must differ. In my case, RGW uses port 8080 and ingress uses port 80.

Let’s apply this manifest:

[root@ceph-mon-01 ~]# ceph orch apply -i rgw_ingress.yaml
Scheduled ingress.rgw.s3.vmik.lab update...

If you connect to any of the RGW nodes, you can see two new containers:

[root@ceph-rgw-01 ~]# podman ps
cf5ffde7a922  quay.io/ceph/haproxy:2.3                                                                   haproxy -f /var/l...  44 seconds ago  Up 45 seconds              ceph-24c20e62-c4da-11ee-ba95-005056aad62a-haproxy-rgw-s3-vmik-lab-ceph-rgw-01-zlbhtr
f22a506cd835  quay.io/ceph/keepalived:2.2.4                                                              ./init.sh             4 seconds ago   Up 5 seconds               ceph-24c20e62-c4da-11ee-ba95-005056aad62a-keepalived-rgw-s3-vmik-lab-ceph-rgw-01-uklasg

HAProxy accepts requests from clients and forwards them to the RGW daemons, while Keepalived manages the virtual IP address.
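
As a quick sanity check, you can see which node Keepalived has placed the virtual IP on, for example (if the address is not listed, it is currently held by the other node):

[root@ceph-rgw-01 ~]# ip -br addr | grep 10.10.10.22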

The service list now also includes the ingress service:

[root@ceph-mon-01 ~]# ceph orch ls
NAME                     PORTS                RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager             ?:9093,9094              1/1  24s ago    2w   count:1
ceph-exporter                                     8/8  7m ago     2w   *
crash                                             8/8  7m ago     2w   *
grafana                  ?:3000                   1/1  24s ago    2w   count:1
ingress.rgw.s3.vmik.lab  10.10.10.22:80,1967      4/4  24s ago    85s  ceph-rgw-01;ceph-rgw-02
mgr                                               3/3  7m ago     2w   ceph-mon-01;ceph-mon-02;ceph-mon-03;count:3
mon                                               3/3  7m ago     2w   ceph-mon-01;ceph-mon-02;ceph-mon-03;count:3
node-exporter            ?:9100                   8/8  7m ago     2w   *
osd                                                 9  7m ago     -    <unmanaged>
prometheus               ?:9095                   1/1  24s ago    2w   count:1
rgw.s3.vmik.lab          ?:8080                   2/2  24s ago    7m   count-per-host:1;label:rgw

Now you can access the virtual IP address:

[root@ceph-mon-01 ~]# curl ceph-rgw.vmik.lab
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

In my case, the DNS name ceph-rgw.vmik.lab points to 10.10.10.22, the ingress virtual address.

We have configured the RGW services and added fault tolerance through the ingress service. Now that clients connect to the ingress address, the loss of any RGW node will not require changing connection settings on the client side.

With the entry point deployed, we need to create accounts and provide access credentials to clients.

This can be done with the radosgw-admin utility.

Let’s create an account:

[root@ceph-mon-01 ~]# radosgw-admin user create --uid=vmik --display-name="VMIK Test" --email=vmik@vmik.lab
{
    "user_id": "vmik",
    "display_name": "VMIK Test",
    "email": "vmik@vmik.lab",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "vmik",
            "access_key": "06QVFAULFXRCHTS1RNYU",
            "secret_key": "Dx54FCCmu1XGaHSj8aDomHAsqJ71xNsyRU4QT6bF"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

The important fields here are access_key and secret_key, which must be passed on to the client.
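
If the keys ever need to be rotated, radosgw-admin can generate a new S3 key pair for the user, for example:

[root@ceph-mon-01 ~]# radosgw-admin key create --uid=vmik --key-type=s3 --gen-access-key --gen-secret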

Note that the user initially has no quotas or restrictions, which is probably not what you want.

Quotas can be set either at the bucket level or for the user as a whole. You can read more about quotas in the documentation.
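
For example, a bucket-scope quota for the same user (limiting each of the user’s buckets rather than the account as a whole) could look roughly like this:

[root@ceph-mon-01 ~]# radosgw-admin quota set --quota-scope=bucket --uid=vmik --max-objects=10000 --max-size=1G
[root@ceph-mon-01 ~]# radosgw-admin quota enable --quota-scope=bucket --uid=vmik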

Let’s set a 3 GB quota for the user. The quota must first be set and then enabled:

[root@ceph-mon-01 ~]# radosgw-admin quota set --quota-scope=user --uid=vmik --max-size=3G
[root@ceph-mon-01 ~]# radosgw-admin quota enable --quota-scope=user --uid=vmik

Looking at the user information, you can see that the quota is now in place:

[root@ceph-mon-01 ~]# radosgw-admin user info --uid vmik
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": 3221225472,
        "max_size_kb": 3145728,
        "max_objects": -1
    }

Note that quotas are not enforced instantly; it is worth carefully reading the Quota Cache section of the documentation.
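
In short, RGW caches quota statistics, and several configuration options control how often they are refreshed. For illustration only (the option names below are taken from the quota documentation; verify them and their defaults for your release before changing anything):

[root@ceph-mon-01 ~]# ceph config set client.rgw rgw_bucket_quota_ttl 600
[root@ceph-mon-01 ~]# ceph config set client.rgw rgw_user_quota_bucket_sync_interval 180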

Once the user has been created and quotas have been set, you can try connecting to the storage.

For my own use, I wrote a small Python program based on the boto library that talks to the storage via the S3 API; a minimal sketch of a similar client is shown below.
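
This is not the exact program used in this article, just a minimal boto3 sketch that connects to the ingress endpoint with the keys issued above and performs the same basic operations:

import boto3

# Connect to the RGW ingress endpoint using the S3 keys issued by radosgw-admin above.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.vmik.lab",
    aws_access_key_id="06QVFAULFXRCHTS1RNYU",
    aws_secret_access_key="Dx54FCCmu1XGaHSj8aDomHAsqJ71xNsyRU4QT6bF",
)

# Create a bucket and list all buckets owned by the user.
s3.create_bucket(Bucket="vmik")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])

# Upload an object, then delete it.
s3.upload_file("/mnt/d/CentOS-7-x86_64-Minimal-2009.iso", "vmik", "CentOS-7-x86_64-Minimal-2009.iso")
s3.delete_object(Bucket="vmik", Key="CentOS-7-x86_64-Minimal-2009.iso")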

First, I try to create several buckets:

What do you want to do? (Print help if you don't know what to do): bucket_create
Enter bucket name: vmik
--------------------------------------------------
What do you want to do? (Print help if you don't know what to do): bucket_create
Enter bucket name: vmik2
--------------------------------------------------
What do you want to do? (Print help if you don't know what to do): bucket_create
Enter bucket name: vmik3
--------------------------------------------------
What do you want to do? (Print help if you don't know what to do): bucket_list
My buckets:
vmik
vmik2
vmik3

I created three buckets; let’s check on the Ceph side:

[root@ceph-mon-01 ~]# radosgw-admin bucket list
[
    "vmik2",
    "vmik3",
    "vmik"
]

Now I’ll upload a file to one of the buckets:

What do you want to do? (Print help if you don't know what to do): file_upload
Enter file path: /mnt/d/CentOS-7-x86_64-Minimal-2009.iso
Enter bucket name: vmik

What do you want to do? (Print help if you don't know what to do): bucket_files
Enter bucket name: vmik
CentOS-7-x86_64-Minimal-2009.iso        2024-02-24T02:58:28.416Z

Let’s check the user statistics:

[root@ceph-mon-01 ~]# radosgw-admin user stats --uid vmik --sync-stats
{
    "stats": {
        "size": 1020264448,
        "size_actual": 1020264448,
        "size_kb": 996352,
        "size_kb_actual": 996352,
        "num_objects": 1
    },
    "last_stats_sync": "2024-02-24T03:01:16.739986Z",
    "last_stats_update": "2024-02-24T03:01:16.737704Z"
}

You can see that the user is now consuming some space.
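
If you need per-bucket rather than per-user numbers, radosgw-admin can also show bucket statistics (output omitted here):

[root@ceph-mon-01 ~]# radosgw-admin bucket stats --bucket=vmik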

Now I will delete the previously uploaded file:

What do you want to do? (Print help if you don't know what to do): file_delete
Enter bucket name: vmik
Enter file name to delete: CentOS-7-x86_64-Minimal-2009.iso

Ceph reports that the space has been freed:

[root@ceph-mon-01 ~]# radosgw-admin user stats --uid vmik --sync-stats
{
    "stats": {
        "size": 0,
        "size_actual": 0,
        "size_kb": 0,
        "size_kb_actual": 0,
        "num_objects": 0
    },
    "last_stats_sync": "2024-02-24T03:02:29.482104Z",
    "last_stats_update": "2024-02-24T03:02:29.479980Z"
}

By the way, after creating buckets and uploading data, two more pools appear in Ceph:

[root@ceph-mon-01 ~]# ceph osd lspools
1 .mgr
2 rbd_images_pool
3 .rgw.root
4 default.rgw.log
5 default.rgw.control
6 default.rgw.meta
7 default.rgw.buckets.index
8 default.rgw.buckets.data

Thus, we have deployed a fault-tolerant RGW setup, granted a client access, and set quotas.

That’s all I have on object access in Ceph for now. As a next step, you might consider adding certificates and switching to HTTPS, which is the right thing to do from a security standpoint.
