How to Deploy a GKE Cluster with Spot Instance Node Pool

Google Kubernetes Engine (GKE)

The GKE service allows users to provision Kubernetes clusters to host containerized applications on Google Cloud infrastructure. A GKE cluster consists of one or more Compute Engine instances. With GKE, users benefit from advanced cluster management features such as load balancing, node pools, automatic scaling, automatic upgrades, auto-repair, and logging and monitoring.

GKE clusters have two modes of operation to choose from:
Autopilot: GKE manages the entire cluster and node infrastructure with a pre-configured cluster setup.
Standard: Provides node configuration flexibility and full control over managing clusters and node infrastructure.

GKE provides two types of clusters: zonal (single-zone or multi-zone) and regional clusters.

Node Pools:

A node pool is a group of nodes within a cluster that all have the same configuration.
When creating a cluster, the number and type of nodes we specify are used to create the cluster's first node pool. Additional node pools of different sizes and types can then be added to the cluster.
Each node pool in a cluster can run a different Kubernetes node version, be upgraded independently, and be targeted by specific deployments. All new node pools run the same version of Kubernetes as the control plane.
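As a sketch, an additional node pool can also be added to an existing cluster with gcloud; the pool name and machine type below are hypothetical:

```
gcloud container node-pools create extra-pool \
  --cluster demo-cluster \
  --zone us-central1-a \
  --machine-type e2-small \
  --num-nodes 2
```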

In this demo, we are going to deploy a cluster with a Spot instance node pool inside a VPC using Terraform. We deployed the VPC in an earlier blog post; you can find it here.

Spot Instances:

Spot instances are Compute Engine instances priced significantly lower than standard Compute Engine instances, but Compute Engine can reclaim (preempt) them at any time.
We can use Spot instances in GKE node pools to run stateless, batch, or fault-tolerant workloads that can tolerate such disruptions.
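GKE labels Spot nodes with `cloud.google.com/gke-spot=true`, so a fault-tolerant workload can be pinned to them with a nodeSelector. A minimal sketch, where the Deployment name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker        # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"   # schedule only on Spot nodes
      containers:
      - name: worker
        image: busybox      # placeholder image
        command: ["sleep", "3600"]
```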

Following is the Terraform code we are going to use to deploy a GKE cluster with a Spot instance node pool.

Note: we are using the google-beta provider because the Spot instance option is only available in the beta provider.

provider "google-beta" {
  region = "us-central1"
}

resource "google_container_cluster" "demo-cluster" {
  provider                 = google-beta
  name                     = "demo-cluster"
  project                  = "devops-counsel-demo"
  location                 = "us-central1-a"
  network                  = google_compute_network.vpc_network.id
  subnetwork               = google_compute_subnetwork.subnet-1.id
  remove_default_node_pool = true
  min_master_version       = "1.27.3-gke.100"
  initial_node_count       = 1

  ip_allocation_policy {
    cluster_secondary_range_name  = "k8s-pods"
    services_secondary_range_name = "k8s-services"
  }
}

resource "google_container_node_pool" "demo-gke-node-pool" {
  provider           = google-beta
  project            = "devops-counsel-demo"
  name               = "demo-gke-node-pool"
  location           = "us-central1-a"
  cluster            = google_container_cluster.demo-cluster.name
  initial_node_count = 1

  autoscaling {
    min_node_count = 1
    max_node_count = 1
  }

  node_config {
    machine_type = "e2-medium"
    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/trace.append",
      "https://www.googleapis.com/auth/monitoring.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
    spot         = true          # provision Spot VMs for this pool
    disk_size_gb = 10
    disk_type    = "pd-standard"
  }
}

We save the above Terraform code in a “gke.tf” file and run the “terraform apply” command. Because remove_default_node_pool is set to true, Terraform first creates the cluster with a default node pool, deletes that pool, and then creates the Spot instance node pool.
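The usual Terraform workflow applies; a sketch, run from the directory containing gke.tf:

```
terraform init      # download the google-beta provider
terraform plan      # review the cluster and node pool to be created
terraform apply     # create the resources (type "yes" to confirm)
```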

See the cluster details below after creation.

gke cluster

A node pool has been created with one Spot instance (e2-medium).

gke cluster node pool

We need to run the below gcloud command to connect to the cluster. It writes cluster credentials to the kubeconfig file (~/.kube/config by default).

gcloud container clusters get-credentials demo-cluster --zone us-central1-a --project devops-counsel-demo

After running the above command to get the cluster credentials, we can manage the cluster with kubectl.

cloudshell:~/gke-deployment$ gcloud container clusters get-credentials demo-cluster --zone us-central1-a --project devops-counsel-demo
Fetching cluster endpoint and auth data.
kubeconfig entry generated for demo-cluster.

cloudshell:~/gke-deployment$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   21m
kube-node-lease   Active   21m
kube-public       Active   21m
kube-system       Active   21m
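To verify that the node pool is actually running on Spot capacity, we can filter nodes by the label GKE applies to Spot nodes:

```
kubectl get nodes -l cloud.google.com/gke-spot=true
```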

Conclusion

In this quick start demo, we used the google-beta Terraform provider to deploy a GKE cluster inside a VPC with a separately managed node pool. The node pool uses Spot instances, which cost significantly less than on-demand instances. You can find the VPC and GKE Terraform code in this git repo.

You can find more information about GKE in the official documentation.
