Enabling Rados Gateway #

The Red Hat production-grade OSG guide1 has a LOT of useful info.
I’d strongly encourage you to take some time to look through it to familiarize yourself with the moving pieces.

There’s a LOT going on in there, and they do a MUCH better job than yours truly at explaining stuff…

Setup #

Without further ado… let’s get to it:

Secrets #

Create the radosgw keyring
ceph-authtool --create-keyring /etc/pve/priv/ceph.client.radosgw.keyring

⚠️ Perform this on each node ⚠️ #

Symlink the keyrings to a location Ceph knows to look in
ln -s /etc/pve/priv/ceph.client.admin.keyring   /etc/ceph/ceph.client.admin.keyring
ln -s /etc/pve/priv/ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring
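If you want a quick sanity check here (my habit, not strictly required), listing the symlinks should show both pointing back into /etc/pve/priv:

Verify the symlinks
ls -l /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.radosgw.keyring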

Node keys #

Create a radosgw client key for each node
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-40 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-41 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-42 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-43 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-44 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-45 --gen-key
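To confirm the keys actually got generated (again, just a sanity check on my part), ceph-authtool can dump the keyring; each client.radosgw.px-m-* entry should now show a key:

List the generated keys
ceph-authtool --list /etc/pve/priv/ceph.client.radosgw.keyring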

Privileges #

Create the privilege tokens #

Grant privilege to each of the newly minted keys
TARGET='/etc/pve/priv/ceph.client.radosgw.keyring'
for H in 40 41 42 43 44 45; do
  CLIENT="client.radosgw.px-m-${H}"
  ceph-authtool -n ${CLIENT} --cap osd 'allow rwx' --cap mon 'allow rwx, allow command "config-key get" with "key" prefix "rgw"' ${TARGET}
  echo "added privileges to ${CLIENT} in ${TARGET}"
done
Add the newly minted auth tokens to the cluster #

Using the admin keyring, add the newly minted tokens to the cluster.

Add the new keys to the cluster
ADMINKEY='/etc/pve/priv/ceph.client.admin.keyring'
TARGET='/etc/pve/priv/ceph.client.radosgw.keyring'
for H in 40 41 42 43 44 45; do
  CLIENT="client.radosgw.px-m-${H}"
  ceph -k ${ADMINKEY} auth add ${CLIENT} -i ${TARGET}
done
Output
added key for client.radosgw.px-m-40
added key for client.radosgw.px-m-41
added key for client.radosgw.px-m-42
added key for client.radosgw.px-m-43
added key for client.radosgw.px-m-44
added key for client.radosgw.px-m-45
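If you’re paranoid like me, you can query one of the entries back out of the cluster to confirm both the key and its caps landed (any of the node names will do):

Verify a key registered with the cluster
ceph auth get client.radosgw.px-m-40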

Config #

Update /etc/services #

Adding RadosGW to /etc/services informs system components of the port assignment.
/etc/services
radosgw         7480/tcp                        # Ceph Rados gw

This is what, for example, allows netstat to resolve the service name in question when viewing network connection states.
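A quick way to confirm the lookup works (purely a sanity check):

Confirm the service name resolves
getent services radosgw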

Adjusting Thread Cache Memory #

The RH Guidance2 is to adjust Ceph’s TCMalloc setting, which tunes how much memory is allocated for Ceph’s thread cache.
In RHEL / CentOS this is adjusted in /etc/sysconfig/ceph. However, Proxmox is Debian / Ubuntu based,
where the “default” config location is /etc/default/. As such, the file to inspect is /etc/default/ceph.

Inspecting Ceph TCMalloc setting
root@px-m-41:/tmp/ceph-px-m-41#  cat /etc/default/ceph
# /etc/default/ceph
#
# Environment file for ceph daemon systemd unit files.
#
# Increase tcmalloc cache size
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
When I looked, it was already set to
what I believe to be an acceptable level, as shown above.
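If it hadn’t been set, something along these lines would add the same 128 MiB value shown above and restart the local Ceph daemons to pick it up. This is just a sketch; I didn’t actually need to run it:

Set the thread cache size (only if it’s missing)
echo 'TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728' >> /etc/default/ceph
systemctl restart ceph.target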

Increase Systemic Limits #

The RH Guidance3 is to increase the file descriptor limits
for the ceph user in /etc/security/limits.conf.

Then validate the change with:
su -s /bin/bash ceph -l -c 'ulimit -a'

Adjust system limits for the ceph user
root@px-m-40:~# grep ceph /etc/security/limits.conf;echo
ceph             soft    nproc           unlimited
ceph             soft    nofile          1048576

root@px-m-40:~# su -s /bin/bash ceph -l -c 'ulimit -a'
real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) 0
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 2061178
max locked memory           (kbytes, -l) 65990836
max memory size             (kbytes, -m) unlimited
open files                          (-n) 1048576
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
stack size                  (kbytes, -s) 16384
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) 2061178
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited

Adjust ceph config file #

Cluster-wide Requirements #

These changes are required to have RadosGW function #

Insert the following blob into the [global] section of /etc/ceph/ceph.conf:

/etc/ceph/ceph.conf
[global]
  #... existing content
  rgw_dns_name = dog.wolfspyre.io
  rgw_relaxed_s3_bucket_names = true
  rgw_resolve_cname = true
  rgw_log_nonexistent_bucket = true
  rgw_enable_ops_log = true
  rgw_enable_usage_log = true
  osd_map_message_max = 10
  objecter_inflight_ops = 24576
  rgw_thread_pool_size = 512
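One Proxmox-specific note from my own poking around (verify on your setup): /etc/ceph/ceph.conf is normally a symlink into the pmxcfs cluster filesystem, so an edit made on one node should show up on all of them. A quick check:

Check where ceph.conf actually lives
ls -l /etc/ceph/ceph.conf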

Add each node-client to ceph.conf #

Append these entries to /etc/ceph/ceph.conf;
they’re required to inform Ceph of the additional RadosGW clients.
(A loop that generates these stanzas is sketched after the listing.)

/etc/ceph/ceph.conf
[client.radosgw.px-m-40]
  host = px-m-40
  keyring = /etc/pve/priv/ceph.client.radosgw.keyring
  log file = /var/log/ceph/client.radosgw.$host.log
  rgw_dns_name = dog.wolfspyre.io
[client.radosgw.px-m-41]
  host = px-m-41
  keyring = /etc/pve/priv/ceph.client.radosgw.keyring
  log file = /var/log/ceph/client.radosgw.$host.log
  rgw_dns_name = dog.wolfspyre.io
[client.radosgw.px-m-42]
  host = px-m-42
  keyring = /etc/pve/priv/ceph.client.radosgw.keyring
  log file = /var/log/ceph/client.radosgw.$host.log
  rgw_dns_name = dog.wolfspyre.io
[client.radosgw.px-m-43]
  host = px-m-43
  keyring = /etc/pve/priv/ceph.client.radosgw.keyring
  log file = /var/log/ceph/client.radosgw.$host.log
  rgw_dns_name = dog.wolfspyre.io
[client.radosgw.px-m-44]
  host = px-m-44
  keyring = /etc/pve/priv/ceph.client.radosgw.keyring
  log file = /var/log/ceph/client.radosgw.$host.log
  rgw_dns_name = dog.wolfspyre.io
[client.radosgw.px-m-45]
  host = px-m-45
  keyring = /etc/pve/priv/ceph.client.radosgw.keyring
  log file = /var/log/ceph/client.radosgw.$host.log
  rgw_dns_name = dog.wolfspyre.io
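Typing six nearly identical stanzas gets old; a small loop like this (my shorthand, producing the same content as above) can append them for you. Note the escaped \$host so Ceph’s own variable survives the heredoc:

Generate the per-node client stanzas
for H in 40 41 42 43 44 45; do
cat <<EOF
[client.radosgw.px-m-${H}]
  host = px-m-${H}
  keyring = /etc/pve/priv/ceph.client.radosgw.keyring
  log file = /var/log/ceph/client.radosgw.\$host.log
  rgw_dns_name = dog.wolfspyre.io
EOF
done >> /etc/ceph/ceph.conf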
Things I considered adjusting, but didn’t #

I thought about adjusting these ceph.conf settings,
but left them at their defaults (most of which are undefined):

rgw_default_region_info_oid
rgw_default_zone_info_oid
rgw_default_zonegroup_info_oid
rgw_realm
rgw_realm_id
rgw_realm_id_oid
rgw_region
rgw_region_root_pool
rgw_zone
rgw_zone_id
rgw_zone_root_pool
rgw_zonegroup
rgw_zonegroup_id
rgw_zonegroup_root_pool

Documentation link references: 4 5 6 7 8 9 10

Installation and Service Enablement #

Wahoo! Ya did it!
Now let’s go ahead and install the necessary packages and start the services!
(Repeat these on each node.)

Package Installation #

root@px-m-40:~# apt-get install radosgw librados2-perl python3-rados librados2 librgw2
root@px-m-40:~# systemctl enable radosgw
root@px-m-40:~# systemctl start radosgw
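Before moving on, a quick sanity check (my own verification step, not from any guide) that the gateway came up and is listening on 7480:

Verify radosgw is running and listening
systemctl status radosgw --no-pager
ss -tlnp | grep 7480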

Service enablement #

Making radosgw start properly #

This seemed to be necessary to get radosgw to start after the shared filesystem is mounted.

Adjusting Radosgw Start sequence
mkdir /lib/systemd/system/radosgw.service.d/
cat <<EOF > /lib/systemd/system/radosgw.service.d/ceph-after-pve-cluster.conf
[Unit]
After=pve-cluster.service
EOF
ln -s /lib/systemd/system/ceph-radosgw.target /etc/systemd/system/ceph.target.wants/ceph-radosgw.target
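After dropping that override in place, systemd needs to re-read its unit files; something like the following should pick up the change and apply the new ordering:

Reload systemd and restart the gateway
systemctl daemon-reload
systemctl restart radosgw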
… Kinda anticlimactic, eh?
Whelp❕
On with the show, right?!
