Along the way of implementing cluster-api on top of metal-stack, we learned quite a few things about kubebuilder, which enables us to write reconciliation logic easily, and we want to share that knowledge with you. To that end, we built the project xcluster, an extremely simplified version of a cluster that contains metal-stack resources. We assume you have already gone through the Kubebuilder book and are looking for more hands-on examples. By referencing the code in this project, you will be able to create a CustomResourceDefinition (CRD), write its reconciliation logic, and deploy it.


We created two CRDs, XCluster and XFirewall, as shown in the following figure. XCluster represents a cluster containing a metal-stack network and an XFirewall. XFirewall corresponds to a metal-stack firewall. The circular arrows indicate the reconciling nature of the resources and also the corresponding controllers, which reconcile the states of the resources.



metal-api manages all metal-stack resources, including machines, firewalls, switches, OS images, IPs, networks and more. These are the constructs that enable you to turn your data center into elastic cloud infrastructure. You can try it out in the mini-lab, a local development platform where you can play with metal-stack resources and where we built this project. In this project, metal-api does the real job: it allocates the network and creates the firewall, fulfilling what you wish for in xcluster.yaml.


Clone the repos of mini-lab and xcluster into the same folder.

├── mini-lab
└── xcluster

Download the prerequisites of mini-lab. Then,

cd mini-lab

It's going to take some time to finish. Behind the scenes, a kind cluster is created, metal-api-related Kubernetes resources are deployed, and multiple Linux-kernel-based virtual machines are created for metal-stack switches and machines.

From time to time, run

docker-compose run metalctl machine ls

until you see Waiting in the LAST EVENT column, as follows:

ID                                          LAST EVENT   WHEN     AGE  HOSTNAME  PROJECT  SIZE          IMAGE  PARTITION
e0ab02d2-27cd-5a5e-8efc-080ba80cf258        Waiting      8s                               v1-small-x86         vagrant
2294c949-88f6-5390-8154-fa53d93a3313        Waiting      8s                               v1-small-x86         vagrant

Then, in another terminal, but still in the folder mini-lab (this is a must!), run

eval $(make dev-env) # for talking to metal-api in this shell
cd ../xcluster

Now you should be in the folder xcluster. Then,


Behind the scenes, all related Kubernetes resources are deployed:

  • CRD of XCluster and XFirewall
  • Deployment xcluster-controller-manager, which manages the two controllers containing the reconciliation logic for XCluster and XFirewall, respectively
  • ClusterRole and ClusterRoleBinding, which entitle your manager to manage the resources XCluster and XFirewall
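As a rough sketch, such a ClusterRole contains rules along these lines. The API group and role name here are assumptions, and in the real project kubebuilder generates this manifest from // +kubebuilder:rbac markers on the reconcilers:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xcluster-manager-role            # illustrative name
rules:
  - apiGroups: ["cluster.example.com"]   # assumed API group of the CRDs
    resources: ["xclusters", "xfirewalls"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["cluster.example.com"]
    resources: ["xclusters/status", "xfirewalls/status"]
    verbs: ["get", "update", "patch"]
```

The status subresource gets its own rule because controllers typically update status separately from spec.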

Then, check out your xcluster-controller-manager running alongside other metal-stack deployments.

kubectl get deployment -A

Then, deploy your xcluster.

kubectl apply -f config/samples/xcluster.yaml
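For orientation, a manifest of this kind is roughly of the following shape; the apiVersion group and the spec fields are illustrative assumptions, so check config/samples/xcluster.yaml for the project's actual schema:

```yaml
apiVersion: cluster.example.com/v1alpha1   # assumed group/version
kind: XCluster
metadata:
  name: xcluster-sample
spec:
  projectID: "00000000-0000-0000-0000-000000000000"   # assumed field
  partition: vagrant                                  # assumed field
```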

Check out your brand new custom resources.

kubectl get xcluster,xfirewall -A

The results should show READY true for both resources:

NAME                                           READY   true

NAME                                            READY   true

Then go back to the previous terminal where you ran

docker-compose run metalctl machine ls

Repeat the command and you should see a metal-stack firewall running.

ID                                                      LAST EVENT      WHEN    AGE     HOSTNAME                PROJECT                                 SIZE            IMAGE                          PARTITION
e0ab02d2-27cd-5a5e-8efc-080ba80cf258                    Waiting         41s                                                                             v1-small-x86                                   vagrant
2294c949-88f6-5390-8154-fa53d93a3313                    Phoned Home     21s     14m 19s x-cellent-firewall      00000000-0000-0000-0000-000000000000    v1-small-x86    Firewall 2 Ubuntu 20201126     vagrant

The reconciliation logic in the reconcilers did the job of delivering what's described in the sample manifest. This manifest is the only thing the user has to worry about.

To be continued

In the second part of this series, we will dive into the nitty-gritty of the implementation.