Deploying Istio on GKE

By Ohad Ben Nun


Istio offers a great array of tools for viewing, monitoring & controlling our internal & external K8S traffic, but deploying & managing Istio can sometimes be a bit of a hassle.

Luckily, GKE now provides an option to deploy Istio in one step!
Deploying Istio on your K8S cluster will:

  • Encrypt services traffic (mTLS)
  • Provide monitoring tools (Prometheus & Grafana)
  • Allow you to control ingress, egress & internal cluster traffic

Before we begin

The GKE Istio add-on is currently available in Google Cloud as a beta.

This article assumes:

  1. You have already set up a GCP account with the required billing information & permissions to execute the commands shown here (see Istio GKE overview)
  2. You are familiar with Kubernetes & Google Cloud, and have the required binaries (kubectl & the gcloud CLI)
  3. You have SSL certificates (both public & private keys) to use Istio’s mTLS features

Deploying Istio

Deploying Istio can be done in several ways:

  1. Using a Helm Chart
  2. Using GCP CLI (full installation guide)

I’ve also heard that the Istio team is developing a Kubernetes Operator…

Set gcloud context

Run the following:
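A minimal sketch of the context setup, assuming placeholder project and zone names (substitute your own):

```shell
# Point gcloud at your project and default zone
# ("my-project-id" and "europe-west1-b" are placeholders)
gcloud config set project my-project-id
gcloud config set compute/zone europe-west1-b
```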

Note: you can get all available zones with:

gcloud compute zones list

Creating a cluster with Istio on GKE

Run the following (should take a couple of minutes):
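A sketch of the cluster-creation command, assuming a placeholder cluster name (“istio-demo”) and machine type; the Istio add-on flags are the beta ones documented by Google:

```shell
# Create a 3-node GKE cluster with the Istio add-on.
# MTLS_PERMISSIVE lets services accept both mTLS and plain-text
# traffic, which eases migration of existing workloads.
gcloud beta container clusters create istio-demo \
    --addons=Istio \
    --istio-config=auth=MTLS_PERMISSIVE \
    --machine-type=n1-standard-2 \
    --num-nodes=3
```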

After the operation is completed, you’ll have:

  1. A GKE cluster with 3 nodes
  2. A TCP Load Balancer for ingress traffic

Set ‘kubectl context’ to the new cluster:
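Assuming the same placeholder cluster name as above:

```shell
# Fetch credentials for the new cluster and make it
# the active context for kubectl
gcloud container clusters get-credentials istio-demo
```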

Validating Istio Deployment

Run the following:
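```shell
# List Istio's control-plane pods; all should reach Running/Completed
kubectl get pods -n istio-system
```

You should see pods such as istio-pilot, istio-ingressgateway, istio-citadel and the telemetry components (exact names vary by Istio version).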

You should get an output with all the running pods under the namespace ‘istio-system’.

Enabling Istio’s sidecar injection

To let Istio actually manage your services, each service in your application needs to have an Envoy sidecar proxy running in its pod to proxy network traffic between itself and other services.

Istio provides “Automatic sidecar injection” for namespaces with the label ‘istio-injection=enabled’.

Let’s enable the sidecar injection for our desired namespace by running the following:
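Assuming the ‘default’ namespace as an example (use your own):

```shell
# Label the namespace so Istio's mutating webhook injects the
# Envoy sidecar into every pod created from now on
kubectl label namespace default istio-injection=enabled
```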

Now, Istio will auto-inject the sidecar to each new pod in the namespace!

If some pods were already running before sidecar injection was enabled, delete them so their controllers recreate them; Istio will then inject the sidecar into the new pods.

If you add subsequent namespaces, remember to enable Istio!

Allowing ingress traffic from GCP Load Balancer

To allow traffic from the load-balancer to the cluster, you can follow the steps described here.

For GKE clusters, you can follow these steps:

First, let’s find the ingress ports (HTTP & HTTPS) we should allow with firewall rules:
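A sketch of how to read the two nodePorts off the ingress gateway service, following the approach in the Istio GKE docs:

```shell
# Print the nodePorts the istio-ingressgateway service uses
# for HTTP ("http2") and HTTPS traffic
kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}{"\n"}{.spec.ports[?(@.name=="https")].nodePort}{"\n"}'
```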

Now, let’s create the firewall rules to allow the incoming traffic:
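A sketch using Istio’s historical default nodePorts (31380/31443); substitute the values printed by the previous command, and your own rule names:

```shell
# Open the ingress nodePorts on the cluster's nodes
gcloud compute firewall-rules create allow-gateway-http --allow "tcp:31380"
gcloud compute firewall-rules create allow-gateway-https --allow "tcp:31443"
```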

Deploying an app in our cluster

We can now deploy our own application, or continue with a K8S demo application like this one:

To get a timezonedb token you would normally sign up on the TimeZoneDB site, but for now you can just deploy with a random string, since this outbound call will be blocked by Istio anyway.

Routing Ingress Traffic to the App

Adding a Gateway

At this point we have a load-balancer which is implemented by the Istio ingress controller (using the Envoy proxy).

Now we add an Istio Gateway, which describes the allowed hosts, ports & protocols for the controller.
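A minimal Gateway sketch, bound to Istio’s default ingress gateway and accepting plain HTTP on port 80 from any host (the name “demo-gateway” is a placeholder; tighten `hosts` in production):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

Apply it with `kubectl apply -f gateway.yaml`.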

Note: the example above uses http. If you want to implement https you will need to add certificates, as follows:
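In Istio 1.0, the default ingress gateway mounts a secret named istio-ingressgateway-certs at /etc/istio/ingressgateway-certs. A sketch of creating it from your certificate files (file names are placeholders):

```shell
# Create the TLS secret the ingress gateway mounts automatically
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
    --cert=server.crt --key=server.key
```

You then add a port-443 HTTPS server to the Gateway whose `tls` section points at the mounted tls.crt and tls.key paths.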

Virtual Service

Now we need to route URI paths to services using Virtual Services:
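A sketch of one such VirtualService; the service name/port (“customer”, 8080) and the gateway name (“demo-gateway”) are assumptions based on the demo app, and the admin route is a second, analogous VirtualService with prefix `/admin`:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customer
spec:
  hosts:
  - "*"
  gateways:
  - demo-gateway          # attach to the ingress Gateway defined earlier
  http:
  - match:
    - uri:
        prefix: /customer  # route by URI path
    route:
    - destination:
        host: customer     # in-cluster service name
        port:
          number: 8080
```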

Testing Ingress Connectivity

Find the ingress IP:
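A sketch of exporting the gateway’s external address and port into the variables used below:

```shell
# External IP of the istio-ingressgateway LoadBalancer service
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Service port named "http2" (the plain-HTTP ingress port)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
```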

Get the customer URL:
echo http://$INGRESS_HOST:$INGRESS_PORT/customer/
And open the URL in your browser. You should see the demo app’s customer page.

Now open the admin URL:
echo http://$INGRESS_HOST:$INGRESS_PORT/admin/


The times are invalid because Istio 1.0.x blocks all outbound access by default. Istio 1.1 will change this to allow all egress traffic by default.

Summary

  1. We created a K8S cluster with Istio installed as an add-on. Google created a load-balancer for us as part of the cluster creation. Istio is configured in Permissive mode, which means that services will accept both mTLS and clear HTTP (we will show you how to improve this in a subsequent post).
  2. We created a Gateway which receives all ingress traffic (Layers 4-6)
  3. We created two Virtual Services that route the traffic from the gateway to a specific service according to URI path (Layer 7)
  4. We deployed an application into the cluster
[Image: Network Diagram of GKE + Istio]

Congrats! We’ve deployed our first Istio enabled K8S app!

Written by

From the Security Policy Company. This blog is dedicated to cloud-native topics such as Kubernetes, cloud security and micro-services.
