Trying Ceph for the First Time: Start With 10GbE

My beginner-friendly Ceph networking note, including the 10GbE adapter I would actually recommend first for a small Proxmox homelab.

6 min read
  • #homelab
  • #devops
  • #proxmox
  • #ceph
  • #networking
  • #10gbe

The short version

I started looking at Ceph because I want my Proxmox storage to be less tied to one physical box. The attractive idea is simple: if one node has a bad day, the VMs and storage are not immediately trapped on that machine.

The less fun part is that Ceph is hungry for network bandwidth. You can test it on 1GbE, but I would not spend money on disks, RAM, and extra nodes while leaving Ceph traffic on a normal gigabit network.

My beginner recommendation is:

Buy an Intel X550-T2 if you want the least annoying 10GbE upgrade.

It is a dual-port RJ45 10GBASE-T card, it works with normal Ethernet-style cabling, it can fall back to slower speeds, and it is boring in the exact way a storage network card should be boring.

Disclosure: some product links on this page may be affiliate links. If you buy through them, I may earn a commission at no extra cost to you. I only want to recommend parts I would be comfortable putting in my own homelab.

Why I am not starting Ceph on 1GbE

Ceph is not just "a NAS, but clustered." It is constantly moving data between machines: writes, reads, replication, backfill, recovery, rebalancing, and client traffic all touch the network.

The official Ceph hardware recommendations say to provision at least 10Gb/s networking between Ceph hosts and clients, with 25Gb/s making sense for heavier workloads. The Proxmox Ceph docs make the same point in more practical homelab terms: use at least 10GbE, ideally dedicated to Ceph, because recovery traffic can interfere with other services and even destabilize the Proxmox cluster stack (corosync).

That is the part I care about as a beginner. I do not want my first Ceph test to be a mystery performance problem that is really just "the network is too slow."
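
To put a number on that, here is a rough back-of-envelope sketch in Python. The 2TB of recovery data and the 70% efficiency factor are illustrative assumptions of mine, not figures from the Ceph or Proxmox docs:

    # Rough estimate: how long re-replicating data takes when the
    # network is the bottleneck. All numbers are illustrative.
    def recovery_hours(data_tb, link_gbps, efficiency=0.7):
        bits = data_tb * 1e12 * 8  # terabytes -> bits
        return bits / (link_gbps * 1e9 * efficiency) / 3600

    for gbps in (1, 10, 25):
        print(f"{gbps:>2} GbE: {recovery_hours(2.0, gbps):4.1f} h to move 2 TB")

On those assumptions, the same 2TB of recovery traffic takes roughly six hours on 1GbE and under 40 minutes on 10GbE. The exact numbers matter less than the ratio.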

The adapter I would recommend first

Best beginner pick: Intel X550-T2

Check price for the Intel X550-T2 on Amazon

The Intel X550-T2 is my top pick for a first 10GbE Ceph card because it keeps the rest of the setup familiar:

  • RJ45 ports, so the physical connector looks like normal Ethernet
  • dual 10GbE ports, which gives room for a dedicated Ceph link and another fast network later
  • supports 10GbE, 5GbE, 2.5GbE, 1GbE, and 100Mb/s per Intel's specs (see the link-speed check after this list)
  • uses Cat6 up to 55m or Cat6A up to 100m for 10GbE, according to Intel
  • PCIe 3.0 x4, which is easy to fit in a lot of used workstations and servers
  • SR-IOV capable, which is useful if I later want to pass virtual functions into VMs
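
Once the card is in, I want to confirm it actually negotiated 10GbE instead of silently falling back to one of those slower rates. A minimal check on a Linux/Proxmox host, reading the negotiated speed from sysfs (the interface name enp1s0f0 is a placeholder; yours will differ):

    # Read the negotiated link speed from sysfs (reported in Mb/s).
    # The file may read -1 or fail outright while the link is down.
    from pathlib import Path

    iface = "enp1s0f0"  # placeholder; list real names with `ip link`
    speed_mbps = int(Path(f"/sys/class/net/{iface}/speed").read_text())
    if speed_mbps >= 10_000:
        print(f"{iface}: {speed_mbps} Mb/s, ready for Ceph")
    else:
        print(f"{iface}: only {speed_mbps} Mb/s, check cabling and negotiation")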

For a new homelab builder, the important point is not that this is the absolute cheapest possible 10GbE card. It is that the X550-T2 is a known server adapter with mature Linux driver support (ixgbe) and a simple cabling story. That matters when everything else in the project is new and Ceph itself is already plenty to learn.

If I were buying for a small Proxmox/Ceph lab today, I would look for a real Intel X550-T2 or a reputable OEM card based on the Intel X550 controller. I would avoid mystery listings that hide the controller, use suspiciously generic product photos, or look like counterfeit Intel cards.
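
One quick way to check what a card really is on a Linux box is to read its PCI IDs from sysfs. A small sketch, again with a placeholder interface name: 0x8086 is Intel's PCI vendor ID, and 0x1563 is the device ID commonly listed for the X550 10GBASE-T controller (cross-check against `lspci -nn` on your own hardware):

    # Identify the controller behind a NIC via its PCI vendor/device IDs.
    from pathlib import Path

    dev = Path("/sys/class/net/enp1s0f0/device")  # placeholder interface
    vendor = (dev / "vendor").read_text().strip()
    device = (dev / "device").read_text().strip()
    print(f"vendor={vendor} device={device}")
    if vendor == "0x8086" and device == "0x1563":
        print("reports itself as an Intel X550")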

When I would not buy the X550-T2

The X550-T2 is the easy recommendation, but it is not always the best value.

If the nodes are close together, SFP+ gear can be cheaper and runs cooler. A used Mellanox/NVIDIA ConnectX card with DAC (direct attach copper) cables is a very common homelab path, and DAC works well when the machines share a rack or a shelf.

The card I would consider instead is a Mellanox/NVIDIA ConnectX-4 Lx. It is more of a server-networking part than a beginner retail product, but it supports faster paths than plain 10GbE, including 25GbE on the right adapters. NVIDIA's ConnectX-4 Lx page lists support for 1, 10, 25, 40, and 50GbE, plus RDMA features. The catch is that the product line is older and NVIDIA's own manual marks it end-of-life, so I would treat it as a used-market value pick, not a clean retail recommendation for everyone.

My personal rule:

  • choose Intel X550-T2 if I want RJ45, simple shopping, and fewer surprises
  • choose Mellanox/NVIDIA SFP+ or SFP28 if I am comfortable with used server gear, DAC cables, firmware checks, and switch compatibility
  • do not buy a random cheap 10GbE card unless I can identify the controller and confirm Linux/Proxmox support first

What I would buy for a tiny Ceph lab

For a two-node experiment, I would probably keep it simple:

  • one Intel X550-T2 in each node
  • one direct Cat6A cable between the nodes for Ceph testing (throughput-checked with the sketch below)
  • the existing 1GbE network left alone for management
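
Before pointing Ceph at that direct link, I would sanity-check its raw throughput. iperf3 is the proper tool for this; purely as a dependency-free sketch, something like the following also works (the port number and transfer size are arbitrary choices of mine):

    # Poor man's iperf: run "server" on one node, "client <ip>" on the other.
    import socket, sys, time

    PORT, CHUNK = 5201, 1 << 20   # 1 MiB chunks
    TOTAL = 2000 * CHUNK          # ~2 GiB test transfer

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                received, start = 0, time.monotonic()
                while data := conn.recv(CHUNK):
                    received += len(data)
                secs = time.monotonic() - start
                print(f"{received * 8 / secs / 1e9:.2f} Gbit/s")

    def client(host):
        buf = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            for _ in range(TOTAL // CHUNK):
                conn.sendall(buf)

    # usage: test.py server  |  test.py client <server-ip>
    server() if sys.argv[1] == "server" else client(sys.argv[2])

A result far below line rate on an otherwise idle direct link points at cabling, negotiation, or CPU limits, and that is much cheaper to discover before Ceph is running on top of it.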

For a real three-node Ceph setup, I would rather have:

  • one 10GbE adapter per node at minimum
  • a dedicated 10GbE switch or a properly planned full-mesh setup
  • separate management/corosync traffic on the normal network
  • enough airflow over the NICs, because 10GBASE-T cards can run warm
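
The full-mesh option above is mostly a port-counting question: n nodes need n*(n-1)/2 point-to-point links and n-1 ports per node. A trivial sketch:

    # Port math for a switchless full mesh of n Ceph nodes.
    for n in (2, 3, 4):
        print(f"{n} nodes: {n * (n - 1) // 2} links, {n - 1} ports per node")

Three nodes means three direct links and two ports per node, so a switchless mesh of dual-port cards works out exactly; at four nodes the mesh already wants three ports per node, and a switch starts to look cheaper than more NICs.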

This is also where the dual-port X550-T2 starts to make more sense. One port can be the Ceph network and the second port can be used later for VM traffic, migration traffic, another storage network, or a direct link while I am still figuring out the final layout.

The thing I am trying to avoid

The beginner mistake is thinking Ceph is mainly a disk choice. Disks matter, but Ceph performance and recovery behavior are tightly coupled to the network.

If I build a cluster with three machines and 1GbE between them, I may learn the commands, but I am also teaching myself the wrong performance baseline. A slow recovery or a sluggish VM might not mean Ceph is bad. It might mean I built a storage cluster on a network that was already the bottleneck.

So my first real Ceph purchase is not another SSD. It is the network card.

Sources I used