In this section, we will prepare our cluster to handle Spot interruptions.
When EC2 needs the capacity of a particular instance type back (for example, to satisfy rising On-Demand demand), the Spot Instance is sent an interruption notice two minutes before it is reclaimed so it can gracefully wrap things up. We will deploy a pod on each Spot Instance to detect the notice and drain the node so applications are rescheduled elsewhere in the cluster.
The first thing we need to do is deploy the Spot Interrupt Handler on each Spot Instance. It will monitor the EC2 instance metadata service on the instance for an interruption notice.
The workflow can be summarized as:
- Detect that the Spot Instance is being reclaimed by watching the instance metadata service for an interruption notice.
- Use the two-minute window to prepare the node for termination.
- Cordon the node so no new pods are scheduled onto it.
- Drain the node so running pods are rescheduled elsewhere in the cluster.
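To make the detection step concrete, here is a minimal shell sketch of the polling loop such a handler performs. It uses the documented EC2 instance metadata endpoint for Spot interruption notices and kubectl drain as the cleanup action; NODE_NAME is assumed to be passed into the pod, and the handler shipped in the example manifest may implement this differently.

#!/bin/sh
# Minimal sketch: poll the instance metadata service for a Spot interruption notice.
# NODE_NAME is assumed to be injected into the pod (e.g. via the Downward API).
while true; do
  # The endpoint returns 404 until an interruption is scheduled, so curl -f fails until then.
  if curl -sf http://169.254.169.254/latest/meta-data/spot/instance-action > /dev/null; then
    echo "Interruption notice received; draining ${NODE_NAME}"
    kubectl drain "${NODE_NAME}" --ignore-daemonsets --delete-emptydir-data --force
    break
  fi
  sleep 5
done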
We have provided an example K8s DaemonSet manifest. A DaemonSet ensures that one copy of a pod runs on each node in the cluster (or on a selected subset of nodes, as we will configure below).
mkdir ~/environment/spot
cd ~/environment/spot
wget https://eksworkshop.com/spot/managespot/deployhandler.files/spot-interrupt-handler-example.yml
As written, the manifest will deploy pods to all nodes, including On-Demand nodes, which wastes resources. We want to edit the DaemonSet so it is deployed only on Spot Instances. Let's use the node labels to identify the right nodes.
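In this workshop the Spot nodes are expected to carry the label lifecycle=Ec2Spot. You can confirm which nodes have it by showing that label as a column:

kubectl get nodes -L lifecycle

Spot nodes should show Ec2Spot in the LIFECYCLE column; On-Demand nodes will show a different value or none.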
Use a nodeSelector to constrain our deployment to Spot Instances. See the Kubernetes documentation on assigning pods to nodes for more details.
Configure our Spot Handler to use nodeSelector
Place this at the end of the DaemonSet manifest under spec.template.spec:

nodeSelector:
  lifecycle: Ec2Spot
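For orientation, the placement looks roughly like the sketch below. The metadata names and container image are placeholders rather than the actual contents of the downloaded manifest; only the final nodeSelector lines are what you are adding.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: spot-interrupt-handler            # placeholder name
spec:
  selector:
    matchLabels:
      app: spot-interrupt-handler
  template:
    metadata:
      labels:
        app: spot-interrupt-handler
    spec:
      containers:
      - name: spot-interrupt-handler
        image: example/spot-interrupt-handler:latest   # placeholder image
      nodeSelector:                        # added: schedule only onto Spot nodes
        lifecycle: Ec2Spot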
Deploy the DaemonSet
kubectl apply -f ~/environment/spot/spot-interrupt-handler-example.yml
If you receive an error deploying the DaemonSet, there is likely a small error in the YAML file. We have provided a solution file at the bottom of this page that you can use to compare.
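To check the manifest for syntax problems without creating anything in the cluster, you can run a client-side dry run first:

kubectl apply --dry-run=client -f ~/environment/spot/spot-interrupt-handler-example.yml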
View the DaemonSet. Its DESIRED and READY counts should match the number of Spot nodes, since it runs one pod per Spot node.
kubectl get daemonsets
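To see which nodes the handler pods actually landed on, you can also list the pods with their node assignments (the pod names depend on the name defined in the manifest):

kubectl get pods -o wide

Each handler pod's NODE column should correspond to a Spot Instance, and none should be running on On-Demand nodes.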