Deploy The Spot Interrupt Handler

In this section, we will prepare our cluster to handle Spot interruptions.

When EC2 needs the capacity back, the Spot Instance is sent an interruption notice two minutes before it is reclaimed so that it can wrap things up gracefully. We will deploy a pod on each Spot Instance to detect the notice and reschedule applications elsewhere in the cluster.

The first thing we need to do is deploy the Spot Interrupt Handler on each Spot Instance. It will poll the EC2 instance metadata service on the instance for an interruption notice.
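The polling the handler performs can be sketched with curl. This only works from inside an EC2 instance; the `spot/instance-action` path is the documented interruption-notice endpoint, and the IMDSv2 token step is included for completeness:

```shell
# Request an IMDSv2 session token (only resolvable from inside an EC2 instance)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Poll for a Spot interruption notice. The endpoint returns 404 until an
# interruption is scheduled, then a JSON body with the action and time.
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/spot/instance-action
```

The handler pod runs the equivalent of this check in a loop and triggers the drain workflow when the endpoint starts returning a notice.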

The workflow can be summarized as:

  • Identify that a Spot Instance is being reclaimed.
  • Use the 2-minute notification window to gracefully prepare the node for termination.
  • Taint the node and cordon it off to prevent new pods from being placed.
  • Drain connections on the running pods.
  • Reschedule the evicted pods on the remaining nodes to maintain the desired capacity.
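The steps above correspond to commands you could run by hand against a node that is about to be reclaimed. This is a sketch of what the handler automates; the node name and taint key below are hypothetical:

```shell
# Hypothetical node name, for illustration only
NODE=ip-192-168-42-10.us-east-2.compute.internal

# Cordon: mark the node unschedulable so no new pods are placed on it
kubectl cordon "$NODE"

# Taint: a marker that workloads or controllers can react to (key is an example)
kubectl taint nodes "$NODE" spotInstanceTerminating=true:NoSchedule

# Drain: evict the running pods so their controllers recreate them on the
# remaining nodes; DaemonSet pods are skipped since they are node-bound
kubectl drain "$NODE" --ignore-daemonsets --force
```

Because the interruption window is only two minutes, the handler runs these steps immediately on detecting the notice rather than waiting for pods to finish.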

We have provided an example K8s DaemonSet manifest. A DaemonSet runs one pod per node.

mkdir ~/environment/spot
cd ~/environment/spot
wget https://eksworkshop.com/spot/managespot/deployhandler.files/spot-interrupt-handler-example.yml

As written, the manifest deploys a pod to every node, including On-Demand ones, which wastes resources. We want the DaemonSet to run only on Spot Instances, so let's use node labels to identify the right nodes.

Use a nodeSelector to constrain the DaemonSet to Spot Instances. See the Kubernetes documentation on assigning pods to nodes for more details.

Challenge

Configure our Spot Handler to use nodeSelector

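One possible solution looks like the fragment below. It assumes your Spot nodes carry a `lifecycle: Ec2Spot` label; the actual label depends on how your node groups were provisioned, so check with `kubectl get nodes --show-labels` first:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        lifecycle: Ec2Spot   # label name is an assumption; match your node labels
```

With this in the pod template, the DaemonSet controller only creates pods on nodes whose labels include `lifecycle: Ec2Spot`.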

Deploy the DaemonSet

kubectl apply -f ~/environment/spot/spot-interrupt-handler-example.yml

If you receive an error deploying the DaemonSet, there is likely a small error in the YAML file. We have provided a solution file at the bottom of this page that you can use to compare.

View the DaemonSet. The DESIRED and READY counts should equal the number of Spot nodes in the cluster.

kubectl get daemonsets
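To confirm that the handler pods landed only on Spot nodes, you can compare pod placement against the labeled nodes. The `lifecycle=Ec2Spot` label here is an assumption about how your nodes are labeled; adjust it to match your node groups:

```shell
# Show each pod along with the node it was scheduled on
kubectl get pods -o wide

# List only the Spot nodes (label assumed; verify with --show-labels)
kubectl get nodes -l lifecycle=Ec2Spot
```

Every handler pod's NODE column should match one of the Spot nodes, and no handler pod should appear on an On-Demand node.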