# Getting things in place

Why are we doing this? · (( You are Here )) · ONWARD!!! Let's get RADOS enabled

## Stuff yer gonna need:

### Proxmox Ceph Cluster

It kinda goes without saying: if you want to use the Ceph storage amalgam provided with Proxmox to enable this, you'll need a functional Proxmox cluster. That means at least two nodes, realistically three.

### A Front-End Gateway

What's going to respond to your HTTP(S) traffic? Ceph's RADOS gateway endpoint doesn't provide any intrinsic request routing or load balancing, so to maintain a durable front door and a reliable service, you'll need a load balancer / front end. How you facilitate this is up to you. In my environment I already have a functional, highly available HAProxy rig hosted on OPNsense, so I use that; there's a rough sketch of the idea at the end of this checklist.

### DNS

You'll need DNS set up to point everything to the right place. That means having:

#### Wildcard and A records

I chose dog.wolfspyre.io as the root subdomain. Since I want my offsite hosts to be able to access this as well, I need to enable both external and internal resolution of the endpoints.

##### External records

```
# wolfspyre.io — public-facing DNS records
dog   IN A 108.221.46.29
*.dog IN A 108.221.46.29
```

##### Internal records

```
# NS entries for the new subdomain
dog IN NS ns01.wolfspyre.io.
dog IN NS ns02.wolfspyre.io.
dog IN NS ns03.wolfspyre.io.
```

##### dog.wolfspyre.io sub-zone

```
# dog.wolfspyre.io
@            IN A 198.19.1.33
skwirreltrap IN A 198.19.198.1
atticus      IN A 198.19.198.2
evey         IN A 198.19.198.3
px-m-40      IN A 198.19.198.40
px-m-41      IN A 198.19.198.41
px-m-42      IN A 198.19.198.42
px-m-43      IN A 198.19.198.43
px-m-44      IN A 198.19.198.44
px-m-45      IN A 198.19.198.45
*            IN A 198.19.1.33
```

### SSL

At minimum, you'll want a wildcard SSL certificate for your S3 apex (in my case `*.dog.wolfspyre.io`). You may also want a wildcard SSL cert for S3 websites, but I'm not really sure (at the moment) how that works :)
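Once the records and cert are in place, it's worth a quick sanity check that they actually line up. A minimal sketch, assuming the names and VIP above; `anybucket` is just a throwaway label I'm using here to exercise the wildcard:

```sh
# Spot-check internal resolution — both should return the HAProxy VIP, 198.19.1.33
dig +short dog.wolfspyre.io
dig +short anybucket.dog.wolfspyre.io

# Confirm the front end serves the wildcard cert for an arbitrary subdomain
openssl s_client -connect dog.wolfspyre.io:443 \
  -servername anybucket.dog.wolfspyre.io </dev/null 2>/dev/null |
  openssl x509 -noout -subject -ext subjectAltName
```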
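And since the front-end piece is the part that trips people up, here's a minimal sketch of what the load-balancer side needs to do. To be clear: this is an illustration using the VIP and radosgw addresses from the diagram below, not a dump of my actual OPNsense config; the frontend/backend names, the certificate path, and the health-check choice are all my own placeholders.

```haproxy
# Hypothetical sketch — terminate TLS on the VIP and spread requests
# across the radosgw instances. Names and the cert path are placeholders.
frontend s3_front
    bind 198.19.1.33:443 ssl crt /etc/haproxy/certs/wildcard.dog.wolfspyre.io.pem
    mode http
    default_backend radosgw_back

backend radosgw_back
    mode http
    balance roundrobin
    # radosgw answers 200 on /swift/healthcheck, which makes a cheap probe
    option httpchk GET /swift/healthcheck
    server px-m-40 198.19.198.40:7480 check
    server px-m-41 198.19.198.41:7480 check
    server px-m-42 198.19.198.42:7480 check
    server px-m-43 198.19.198.43:7480 check
    server px-m-44 198.19.198.44:7480 check
    server px-m-45 198.19.198.45:7480 check
```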
## Stuff ya may wanna read:

- Avi Mor's blog post[^1] does a pretty good job explaining how to think about Realms, Zone Groups, and Zones.
- Red Hat's Ceph OSG guide[^2] covers a LOT of the care and feeding of Ceph. It's worth looking through.

I'm going to lay out RADOS in alignment with the failure boundaries already established within my existing cluster. You may have different needs.

```mermaid
flowchart RL
  subgraph rm0["fa:fa-bolt Realm (namespace)"]
    subgraph zzz["RADOS Traffic Flow"]
      direction LR
    end
    subgraph zg0["Zone Group: Barn"]
      subgraph zzy["Note:"]
        zzya["Zone Groups contain one or more zones.<br>They must have one master zone."]
        direction LR
      end
      subgraph z0["Zone - PXM Master"]
        subgraph zzx["Note:"]
          zzxa["Zones define an isolation/replication boundary."]
          direction LR
        end
        subgraph n40["Physical host: px-m-40"]
          n40v198["Node 40 Ceph Network<br>198.19.198.40"]
          r40a["RADOS OSG Process 40A<br>198.19.198.40:7480"]
        end
        subgraph n41["Physical host: px-m-41"]
          n41v198["Node 41 Ceph Network<br>198.19.198.41"]
          r41a["RADOS OSG Process 41A<br>198.19.198.41:7480"]
        end
        subgraph n42["Physical host: px-m-42"]
          n42v198["Node 42 Ceph Network<br>198.19.198.42"]
          r42a["RADOS OSG Process 42A<br>198.19.198.42:7480"]
        end
        subgraph n43["Physical host: px-m-43"]
          n43v198["Node 43 Ceph Network<br>198.19.198.43"]
          r43a["RADOS OSG Process 43A<br>198.19.198.43:7480"]
        end
        subgraph n44["Physical host: px-m-44"]
          n44v198["Node 44 Ceph Network<br>198.19.198.44"]
          r44a["RADOS OSG Process 44A<br>198.19.198.44:7480"]
        end
        subgraph n45["Physical host: px-m-45"]
          n45v198["Node 45 Ceph Network<br>198.19.198.45"]
          r45a["RADOS OSG Process 45A<br>198.19.198.45:7480"]
        end
      end
    end
  end
  subgraph world["public requests"]
    direction BT
    usera["User Requests"]
    userb["from outside"]
    userc["the cluster"]
  end
  subgraph op["OPNSense Cluster"]
    direction BT
    subgraph OPNHAP["OPNSense HAProxy"]
      direction BT
      zvip0["https://*.dog.wolfspyre.io<br>198.19.1.33:443"]
    end
    opv1["OPNSense VIP Network<br>198.19.1.1"]
    opv198["OPNSense CEPH Network<br>198.19.198.1"]
    opv2["OPNSense Public Network"]
  end
  r40a -.-> n40v198 --- n40 --> opv198
  r41a -.-> n41v198 --- n41 --> opv198
  r42a -.-> n42v198 --- n42 --> opv198
  r43a -.-> n43v198 --- n43 --> opv198
  r44a -.-> n44v198 --- n44 --> opv198
  r45a -.-> n45v198 --- n45 --> opv198
  opv198 -.-> opv1 -.-> zvip0 ===> r40a & r41a & r42a & r43a & r44a & r45a
  usera & userb & userc -.- world ---> opv2 -.- opv1 -.-> zvip0 ---> world
```

1. Reasoning
2. PreRequisites (( You are Here ))
3. Enabling RADOS
4. Configuring RADOS
5. Load Balancing
6. Testing
7. Maintenance and Monitoring
8. Reading and References

[^1]: https://medium.com/@avmor/how-to-configure-rgw-multisite-in-ceph-65e89a075c1f
[^2]: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage