Posts Tagged - Gloo

Canary Deployments with Gloo Function Gateway using Weighted Destinations

This is the 3rd post in my 3 part series on doing Canary Releases with Solo.io Gloo.

This post shows a different way of doing a Canary release: using weighted routes to send a fraction of the request traffic to the new version (the canary). For example, you could initially route 5% of your request traffic to the new version to validate that it’s working correctly in production without risking too much if it fails. As you gain confidence in the new version, you can route more and more traffic to it until you cut over completely, i.e. 100% to the new version, and decommission the old one.

All of the Kubernetes manifests are located at https://github.com/scranton/gloo-canary-example. I’d suggest you clone that repo locally to make it easier to try these examples yourself. All command examples assume you are in the top-level directory of that repo.

Review

Quickly reviewing the previous 2 posts: we learned that Gloo can help with function-level routing, and that routing can be used as part of a Canary release process, that is, slowly testing a new version of our service in an environment. In the last post, we used Gloo to create a special routing rule that forwarded requests to our new version only when they included a specific request header. That let us deploy our new service into production while only allowing request traffic from specific clients, i.e. clients that know to set that request header. Once we were confident that our new version was working as expected, we changed the Gloo routing rules so that all request traffic went to the new service. This is a great way to validate that a new deployment is correctly configured in your environment before sending any important traffic to it.

In this post, we’re going to expand on that approach with a more sophisticated pattern: weighted routes. With this capability we can route a percentage of the request traffic across two or more functions. This enhances the previous header-based approach, as we can now validate that our new service can handle a managed load of traffic, and as we gain confidence we can route higher loads to the new version until it’s handling 100% of the request traffic. If at any point we see errors, we can either roll back 100% of traffic to the original working version, or debug our service to better understand why it started to have problems handling a fraction of our target load, which in theory should help us fix our new version more quickly.

You can also combine header routing, weighted destination routing, and the other routing options Gloo provides.

Setup

This post assumes you’ve already run through the Canary Deployments with Gloo Function Gateway post, and that you already have a Kubernetes environment set up with Gloo. If not, please refer back to that post for setup instructions and the basics of VirtualServices and Routes with Gloo.

By the end of that post, we had 100% of findPets function traffic going to our petstore-v2 service, and the other functions going to the original petstore-v1. Let’s validate our services before we make any changes.

export PROXY_URL=$(glooctl proxy url)
curl ${PROXY_URL}/findPets

The call to findPets should have been routed to petstore-v2, which should return the following result.

[{"id":1,"name":"Dog","status":"v2"},{"id":2,"name":"Cat","status":"v2"},{"id":3,"name":"Parrot","status":"v2"}]

And calls to findPetWithId should route to petstore-v1, which has only 2 pets (Dog and Cat), with statuses of available and pending respectively (versus a status of v2 in petstore-v2 responses).

curl ${PROXY_URL}/findPetWithId/1
{"id":1,"name":"Dog","status":"available"}
curl ${PROXY_URL}/findPetWithId/2
{"id":2,"name":"Cat","status":"pending"}
curl ${PROXY_URL}/findPetWithId/3
{"code":404,"message":"not found: pet 3"}

So let’s do a Canary release with weighted destinations to migrate the findPetWithId function.

Setting up Weighted Destinations in Gloo

Let’s start by looking at our existing virtual service, coalmine.

kubectl get virtualservice coalmine --namespace gloo-system --output yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: coalmine
  namespace: gloo-system
spec:
  displayName: coalmine
  virtualHost:
    domains:
    - '*'
    name: gloo-system.coalmine
    routes:
    - matcher:
        prefix: /findPets
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPets
              parameters: {}
          upstream:
            name: default-petstore-v2-8080
            namespace: gloo-system
    - matcher:
        prefix: /findPetWithId
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
    - matcher:
        prefix: /petstore
      routeAction:
        single:
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /api/pets/

To create a weighted destination, we need to change the routeAction from single to multi and provide two or more destinations, each pairing a destination (its destinationSpec and upstream) with a weight. For example, the following routes 10% of findPetWithId request traffic to petstore-v2 and the remaining 90% to petstore-v1.

kubectl apply -f coalmine-virtual-service-part-3-weighted.yaml

Here’s the relevant part of the virtual service manifest showing the weighted destination spec.

coalmine-virtual-service-part-3-weighted.yaml
  routeAction:
    multi:
      destinations:
      - destination:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v2-8080
            namespace: gloo-system
        weight: 10
      - destination:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
        weight: 90
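Gloo normalizes each weight by the sum of the weights on the route, so the split above is just simple arithmetic; a quick shell sketch (variable names are only for illustration):

```shell
# Each destination's share of traffic is weight / sum-of-weights
V2_WEIGHT=10
V1_WEIGHT=90
TOTAL=$((V2_WEIGHT + V1_WEIGHT))
echo "petstore-v2 share: $((100 * V2_WEIGHT / TOTAL))%"   # petstore-v2 share: 10%
echo "petstore-v1 share: $((100 * V1_WEIGHT / TOTAL))%"   # petstore-v1 share: 90%
```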

Let’s run a shell loop to test; remember that petstore-v2 responses have a status field of v2. The following command calls our function 20 times, and we should see ~2 responses (~10%) returning "status":"v2".

COUNTER=0
while [ $COUNTER -lt 20 ]; do
    curl ${PROXY_URL}/findPetWithId/1
    let COUNTER=COUNTER+1
done
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
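If you’d rather count than eyeball, you can pipe a larger run through grep; a sketch, assuming PROXY_URL is still exported as above (with a weight of 10, expect roughly 10 v2 hits out of 100):

```shell
# Call the function 100 times and count responses that came from petstore-v2
for i in $(seq 1 100); do
    curl -s ${PROXY_URL}/findPetWithId/1
    echo
done | grep -c '"status":"v2"'
```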

Now if we want to increase the traffic to our new version, we just need to update the weight attributes in the 2 destination objects. Gloo sums all of the weight values within a given weighted destination route and routes the respective percentage to each destination. So if we set both route weights to 1, each route would get 1/2, or 50%, of the request traffic. I’d recommend choosing values that sum to 100 so they read as percentages. The following example updates our routes to a 50/50 traffic split.

kubectl apply -f coalmine-virtual-service-part-3-weighted-50-50.yaml
coalmine-virtual-service-part-3-weighted-50-50.yaml
  routeAction:
    multi:
      destinations:
      - destination:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v2-8080
            namespace: gloo-system
        weight: 50
      - destination:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
        weight: 50

And if we run our test loop again, we should see about 10 of the 20 requests returning "status":"v2".

COUNTER=0
while [ $COUNTER -lt 20 ]; do
    curl ${PROXY_URL}/findPetWithId/1
    let COUNTER=COUNTER+1
done
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"v2"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"available"}
{"id":1,"name":"Dog","status":"v2"}

Summary

This series has hopefully given you a taste of how Solo.io Gloo can help you build more interesting applications and enhance your application delivery approaches. These posts have shown how to do function-level request routing, and how you can enhance those routing rules by requiring the presence of request headers and by managing load through the percentage of traffic going to individual upstream destinations. Gloo supports many more options, and I hope you’ll continue your journey by going to https://gloo.solo.io to learn more.

Read More

Canary Deployments with Gloo Function Gateway

This is the 2nd post in my 3 part series on doing Canary Releases with Solo.io Gloo.

This post expands on the Function Routing with Gloo post to show you how to do a Canary release of a new version of a function. Gloo is a function gateway that gives users a number of benefits including sophisticated function level routing, and deep service discovery with introspection of OpenAPI (Swagger) definitions, gRPC reflection, Lambda discovery and more. This post will show a simple example of Gloo discovering 2 different deployments of a service, and setting up some routes. The route rules will use the presence of a request header x-canary:true to influence runtime routing to either version 1 or version 2 of our function. Then once we’re happy with our new version, we will update the route so all requests now go to version 2 of our service. All without changing or even redeploying our 2 services. But first, let’s set some context…

Background

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.

Danilo Sato Canary Release

The idea of a Canary release is that no matter how much testing you do on a new implementation, until you deploy it into your production environment you can’t be positive everything will work as expected. So having a way to release a new version into production concurrently with the existing version(s), with some way to route traffic between them, can be helpful. Ideally, we’d like to route most traffic to the existing, known-working version and have a way for some (test) requests to go to the new version. Once you’re comfortable that the new version is working as expected, then and only then do you start routing most or all requests to it, and eventually decommission the original service.

Being able to change request routes without needing to change or redeploy your code is, I think, very helpful in building confidence that your code is ready for production. That is, if you need to change your code or use a code-based feature flag, then you’re exercising different code paths and/or changing deployed configuration settings. I feel it’s better if you can deploy your service, code and configuration, all ready for production, and use an external mechanism to manage request routing.

Gloo uses Envoy, a very high-performance service proxy, to do the request routing. In this example, we’ll use a request header to influence the routing, though we could also use other variables, like the IP range of the requestor, to drive routing decisions. That is, if requests are coming from specific test machines, we can route them to our new version. Lots more information on how Gloo and Envoy work can be found on the Solo.io website. On to the example…

This post assumes you’ve already run through the Function Routing with Gloo post, and that you already have a Kubernetes environment set up with Gloo. If not, please refer back to that post for setup instructions and the basics of VirtualServices and Routes with Gloo.

All of the Kubernetes manifests are located at https://github.com/scranton/gloo-canary-example. I’d suggest you clone that repo locally to make it easier to try these examples yourself. All command examples assume you are in the top-level directory of that repo.

Review

In the previous post, we had a single service, petstore-v1, and we set up Gloo to route requests to its findPets REST function. Let’s test that it’s still working as expected. Remember, we need to get Gloo’s proxy URL by calling the glooctl proxy url command, and then we can make requests against it with the /findPets route that we previously set up. If everything is still working correctly, we should get 2 pets back.

export PROXY_URL=$(glooctl proxy url)
curl ${PROXY_URL}/findPets
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

Canary Routing

Now let’s deploy version 2 of our service and set up a canary route for the findPets function. That is, by default we’ll route to version 1 of the function, and if the request header x-canary:true is set, we’ll route that request to version 2 of our function.

Install and verify petstore version 2 example service

Let’s first deploy version 2 of our petstore service. This version has been modified to return 3 pets.

kubectl apply -f petstore-v2.yaml
petstore-v2.yaml
---
# petstore-v2
apiVersion: v1
kind: Service
metadata:
  name: petstore-v2
  namespace: default
  labels:
    app: petstore-v2
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: petstore-v2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petstore-v2
  namespace: default
  labels:
    app: petstore-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petstore-v2
  template:
    metadata:
      labels:
        app: petstore-v2
    spec:
      containers:
      - name: petstore-v2
        image: scottcranton/petstore:v2
        ports:
        - containerPort: 8080

Verify it’s set up correctly.

kubectl get services --namespace default
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    22h
petstore-v1   ClusterIP   10.110.99.86    <none>        8080/TCP   33m
petstore-v2   ClusterIP   10.109.91.120   <none>        8080/TCP   6s

Now let’s set up a port forward to see if it works. When we do a GET against /api/pets, we should get back 3 pets.

kubectl port-forward services/petstore-v2 8080:8080

And in a different terminal, run the following to see if we get back 3 pets for version 2 of our service.

curl localhost:8080/api/pets
[{"id":1,"name":"Dog","status":"v2"},{"id":2,"name":"Cat","status":"v2"},{"id":3,"name":"Parrot","status":"v2"}]

You should kill all port forwarding as we’ll use Gloo to proxy future tests.

Setup Canary Route

Let’s set up a new function route rule for the petstore version 2 findPets function that depends on the presence of the x-canary:true request header.

glooctl add route \
   --name coalmine \
   --path-prefix /findPets \
   --dest-name default-petstore-v2-8080 \
   --rest-function-name findPets \
   --header x-canary=true

Default routing should still go to petstore version 1, and return only 2 pets.

curl ${PROXY_URL}/findPets
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

If we make a request with the x-canary:true set, it should route to petstore version 2, and return 3 pets.

curl -H "x-canary:true" ${PROXY_URL}/findPets
[{"id":1,"name":"Dog","status":"v2"},{"id":2,"name":"Cat","status":"v2"},{"id":3,"name":"Parrot","status":"v2"}]

Just to verify, let’s set the header to a different value, e.g. x-canary:false, to confirm that it routes to petstore v1.

curl -H "x-canary:false" ${PROXY_URL}/findPets
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

Here’s the complete YAML for our coalmine virtual service, which you could kubectl apply if you wanted to recreate it.

coalmine-virtual-service-part-2-header.yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: coalmine
  namespace: gloo-system
spec:
  displayName: coalmine
  virtualHost:
    domains:
    - '*'
    name: gloo-system.coalmine
    routes:
    - matcher:
        headers:
        - name: x-canary
          regex: true
          value: "true"
        prefix: /findPets
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPets
              parameters: {}
          upstream:
            name: default-petstore-v2-8080
            namespace: gloo-system
    - matcher:
        prefix: /findPetWithId
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
    - matcher:
        prefix: /findPets
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPets
              parameters: {}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
    - matcher:
        prefix: /petstore
      routeAction:
        single:
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /api/pets

The part of the virtual service manifest that specifies the header-based routing is as follows.

- matcher:
    headers:
    - name: x-canary
      regex: true
      value: "true"
    prefix: /findPets
  routeAction:

Make version 2 the default for all requests

Once we’re feeling good about version 2 of our function, we can make the default call to /findPets go to version 2. Note that with Gloo as your function gateway, you do not have to route all function requests to version 2 of the petstore service. In this example, we’re only routing requests for the findPets function to version 2; all other requests go to version 1 of petstore. This partial routing may not always work for all services; this post is showing that Gloo makes this level of granularity possible, which can help you fine-tune your application upgrade decisions. For example, this may make sense if you want to patch a critical bug but are not ready to roll out other breaking changes in a new service version.

The easiest way to change the routing rules so all requests go to the version 2 findPets is by applying a YAML file. You could use the glooctl command-line tool to add and remove routes, but it takes several calls.

coalmine-virtual-service-part-2-v2.yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: coalmine
  namespace: gloo-system
spec:
  displayName: coalmine
  virtualHost:
    domains:
    - '*'
    name: gloo-system.coalmine
    routes:
    - matcher:
        prefix: /findPets
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPets
              parameters: {}
          upstream:
            name: default-petstore-v2-8080
            namespace: gloo-system
    - matcher:
        prefix: /findPetWithId
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
    - matcher:
        prefix: /petstore
      routeAction:
        single:
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /api/pets

Summary

This post has shown you how to leverage the Gloo function gateway to do a Canary release of a new version of a function, using very granular, function-level routing to validate that your new function is working correctly. Then it showed changing the routing rules so all traffic goes to the new version, all without redeploying either of the 2 service implementations. In this post we used the presence of a request header to influence function routing; we could also have routed based on the IP range of the incoming request or other variables. This hopefully shows the power and flexibility that the Gloo function gateway can provide in your journey to microservices and service mesh.

Read More

Routing with Gloo Function Gateway

This is the 1st post in my 3 part series on doing Canary Releases with Solo.io Gloo.

This post introduces how to use the open source Solo.io Gloo project to route traffic to your Kubernetes-hosted services. Gloo is a function gateway that gives users a number of benefits including sophisticated function-level routing, and deep service discovery with introspection of OpenAPI (Swagger) definitions, gRPC reflection, Lambda discovery and more. This post starts with a simple example of Gloo discovering and routing requests to a service that exposes REST functions. Later posts will build on this initial example to highlight increasingly complex scenarios.

Prerequisites

I’m assuming you’re already running a Kubernetes installation. I’m using minikube for this post, though any recent Kubernetes installation should work as long as you have kubectl set up and configured correctly for your cluster.

Setup

Setup example service

All of the Kubernetes manifests are located at https://github.com/scranton/gloo-canary-example. I’d suggest you clone that repo locally to make it easier to try these examples yourself. All command examples assume you are in the top-level directory of that repo.

Let’s start by installing an example service that exposes 4 REST functions. This service is based on the go-swagger petstore example.

kubectl apply -f petstore-v1.yaml
petstore-v1.yaml
---
# petstore-v1
apiVersion: v1
kind: Service
metadata:
  name: petstore-v1
  namespace: default
  labels:
    app: petstore-v1
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: petstore-v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petstore-v1
  namespace: default
  labels:
    app: petstore-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petstore-v1
  template:
    metadata:
      labels:
        app: petstore-v1
    spec:
      containers:
      - name: petstore-v1
        image: scottcranton/petstore:v1
        ports:
        - containerPort: 8080

We’ve installed this service into the default namespace, so we can look there to see if it’s installed correctly.

kubectl get all --namespace default
NAME                              READY   STATUS    RESTARTS   AGE
pod/petstore-v1-986747fc8-6hn9p   1/1     Running   0          16s

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP    22h
service/petstore-v1   ClusterIP   10.110.99.86   <none>        8080/TCP   17s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/petstore-v1   1/1     1            1           16s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/petstore-v1-986747fc8   1         1         1       16s

Let’s test our service to make sure it installed correctly. The service is set up to expose port 8080 and will return the list of all pets for GET requests on the query path /api/pets. The easiest way to test is to port-forward the service so we can access it locally. We’ll need the service name for the port forwarding; make sure the service name matches the one on your system. This will forward port 8080 from the service running in your Kubernetes installation to your local machine, i.e. localhost:8080.

Get the service names for your installation

kubectl get service --namespace default
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP    22h
petstore-v1   ClusterIP   10.110.99.86   <none>        8080/TCP   42s

Setup the port forwarding

kubectl port-forward service/petstore-v1 8080:8080

In a separate terminal, run the following. The petstore function should return 2 pets: Dog and Cat.

curl localhost:8080/api/pets
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

You can also get the Swagger spec as well.

curl localhost:8080/swagger.json
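You can also skim that spec for the operation paths Gloo will discover; a rough sketch using plain grep (a JSON tool like jq would be tidier; this assumes the port-forward above is still running):

```shell
# List the distinct /api/pets paths declared in the Swagger spec
curl -s localhost:8080/swagger.json | grep -o '"/api/pets[^"]*"' | sort -u
```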

You can kill all the port forwards. Now we’ll set up Gloo…

Setup Gloo

Let’s set up the glooctl command-line tool, which makes installation, upgrade, and operation of Gloo easier. Full installation instructions are on the https://gloo.solo.io site; here are the quick setup instructions.

If you’re a Mac or Linux Homebrew user, I’d recommend installing as follows.

brew install glooctl

Now let’s install Gloo into your Kubernetes installation.

glooctl install gateway

Pretty easy, eh? Let’s verify that it’s installed and running correctly. Gloo by default installs into the gloo-system namespace, so let’s look at everything running there.

kubectl get all --namespace gloo-system

And the output should look something like the following.

NAME                                 READY   STATUS    RESTARTS   AGE
pod/discovery-66c865f9bc-h6v8f       1/1     Running   0          22h
pod/gateway-777cf4486c-8mzj5         1/1     Running   0          22h
pod/gateway-proxy-5f58774ccc-rcmdv   1/1     Running   0          22h
pod/gloo-5c6c4466f-ptc8v             1/1     Running   0          22h

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/gateway-proxy   LoadBalancer   10.97.13.246    <pending>     80:31333/TCP,443:32470/TCP   22h
service/gloo            ClusterIP      10.104.80.219   <none>        9977/TCP                     22h

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/discovery       1/1     1            1           22h
deployment.apps/gateway         1/1     1            1           22h
deployment.apps/gateway-proxy   1/1     1            1           22h
deployment.apps/gloo            1/1     1            1           22h

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/discovery-66c865f9bc       1         1         1       22h
replicaset.apps/gateway-777cf4486c         1         1         1       22h
replicaset.apps/gateway-proxy-5f58774ccc   1         1         1       22h
replicaset.apps/gloo-5c6c4466f             1         1         1       22h

Routing

Upstreams

Before we get into routing, let’s talk a little about the concept of Upstreams. Upstreams are the services that Gloo has discovered automatically. Let’s look at the upstreams that Gloo has discovered in our Kubernetes cluster.

glooctl get upstreams

You may see some different entries than what follows, depending on your Kubernetes cluster and what is currently running.

+-------------------------------+------------+----------+------------------------------+
|           UPSTREAM            |    TYPE    |  STATUS  |           DETAILS            |
+-------------------------------+------------+----------+------------------------------+
| default-kubernetes-443        | Kubernetes | Accepted | svc name:      kubernetes    |
|                               |            |          | svc namespace: default       |
|                               |            |          | port:          443           |
|                               |            |          |                              |
| default-petstore-v1-8080      | Kubernetes | Accepted | svc name:      petstore-v1   |
|                               |            |          | svc namespace: default       |
|                               |            |          | port:          8080          |
|                               |            |          | REST service:                |
|                               |            |          | functions:                   |
|                               |            |          | - addPet                     |
|                               |            |          | - deletePet                  |
|                               |            |          | - findPetById                |
|                               |            |          | - findPets                   |
|                               |            |          |                              |
| gloo-system-gateway-proxy-443 | Kubernetes | Accepted | svc name:      gateway-proxy |
|                               |            |          | svc namespace: gloo-system   |
|                               |            |          | port:          443           |
|                               |            |          |                              |
| gloo-system-gateway-proxy-80  | Kubernetes | Accepted | svc name:      gateway-proxy |
|                               |            |          | svc namespace: gloo-system   |
|                               |            |          | port:          80            |
|                               |            |          |                              |
| gloo-system-gloo-9977         | Kubernetes | Accepted | svc name:      gloo          |
|                               |            |          | svc namespace: gloo-system   |
|                               |            |          | port:          9977          |
|                               |            |          |                              |
| kube-system-kube-dns-53       | Kubernetes | Accepted | svc name:      kube-dns      |
|                               |            |          | svc namespace: kube-system   |
|                               |            |          | port:          53            |
|                               |            |          |                              |
+-------------------------------+------------+----------+------------------------------+

Notice that our petstore service default-petstore-v1-8080 is different from the other upstreams in that its details list 4 REST service functions: addPet, deletePet, findPetById, and findPets. This is because Gloo can auto-detect OpenAPI / Swagger definitions. This allows Gloo to route to individual functions, whereas most traditional API Gateways only let you route to host:port granular services. Let’s see that in action.

Let’s look a little closer at our petstore upstream. The glooctl command lets us output the full details in YAML or JSON.

glooctl get upstream default-petstore-v1-8080 --output yaml
discoveryMetadata: {}
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"petstore-v1"},"name":"petstore-v1","namespace":"default"},"spec":{"ports":[{"name":"http","port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"app":"petstore-v1"},"type":"ClusterIP"}}
  labels:
    app: petstore-v1
    discovered_by: kubernetesplugin
  name: default-petstore-v1-8080
  namespace: gloo-system
  resourceVersion: "20387"
status:
  reportedBy: gloo
  state: Accepted
upstreamSpec:
  kube:
    selector:
      app: petstore-v1
    serviceName: petstore-v1
    serviceNamespace: default
    servicePort: 8080
    serviceSpec:
      rest:
        swaggerInfo:
          url: http://petstore-v1.default.svc.cluster.local:8080/swagger.json
        transformations:
          addPet:
            body:
              text: '{"id": ,"name": "","tag":
                ""}'
            headers:
              :method:
                text: POST
              :path:
                text: /api/pets
              content-type:
                text: application/json
          deletePet:
            headers:
              :method:
                text: DELETE
              :path:
                text: /api/pets/
              content-type:
                text: application/json
          findPetById:
            body: {}
            headers:
              :method:
                text: GET
              :path:
                text: /api/pets/
              content-length:
                text: "0"
              content-type: {}
              transfer-encoding: {}
          findPets:
            body: {}
            headers:
              :method:
                text: GET
              :path:
                text: /api/pets?tags=&limit=
              content-length:
                text: "0"
              content-type: {}
              transfer-encoding: {}

Here we see that the findPets REST function is looking for requests on /api/pets, and findPetById is looking for requests on /api/pets/{id}, where {id} is the id number of the single pet whose details are to be returned.

Basic Routing

Gloo acts like a (better) Kubernetes Ingress, which means it allows requests from outside the Kubernetes cluster to access services running inside the cluster. Gloo uses a concept called VirtualService to set up routes to Kubernetes hosted services.

This post will show you how to configure Gloo using the command line tools, and I’ll explain a little of what’s happening with each command. I’ll also include the YAML at the end of each step if you’d prefer to work in a purely declarative fashion (versus imperative commands).

Set up a VirtualService. This gives us a place to define a set of related routes. It won’t do much until we create some routes in the next steps.

glooctl create virtualservice --name coalmine

Here’s the YAML that will create the same resource as the glooctl command we just ran. Note that by default the glooctl command creates resources in the gloo-system namespace.

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: coalmine
  namespace: gloo-system
spec:
  displayName: coalmine
  virtualHost:
    domains:
    - '*'
    name: gloo-system.coalmine

Create a route for all traffic to go to our service.

glooctl add route \
  --name coalmine \
  --path-prefix /petstore \
  --dest-name default-petstore-v1-8080 \
  --prefix-rewrite /api/pets

This sets up a simple ingress route so that all requests sent to the Gloo proxy’s /petstore path are forwarded to the default-petstore-v1-8080 service at /api/pets. Let’s test it. To get the Gloo proxy host and port number (remember that Gloo is acting like a Kubernetes Ingress), we need to call glooctl proxy url. Then let’s call the route path.

export PROXY_URL=$(glooctl proxy url)
curl ${PROXY_URL}/petstore

And we should see the same results as when we called the port forwarded service.

[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]
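Under the hood, the prefixRewrite plugin swaps the matched /petstore prefix for /api/pets before Envoy forwards the request upstream. Here’s a pure shell illustration of that path transformation (just string manipulation to show the idea, not Gloo code):

```shell
# Illustration only: how a matched route prefix is rewritten before forwarding.
match="/petstore"
rewrite="/api/pets"

for path in /petstore /petstore/1; do
  # Strip the matched prefix, then prepend the rewrite value.
  upstream_path="${rewrite}${path#"$match"}"
  echo "$path -> $upstream_path"
done
# Prints:
# /petstore -> /api/pets
# /petstore/1 -> /api/pets/1
```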

Here’s the full YAML for the coalmine virtual service created so far.

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: coalmine
  namespace: gloo-system
spec:
  displayName: coalmine
  virtualHost:
    domains:
    - '*'
    name: gloo-system.coalmine
    routes:
    - matcher:
        prefix: /petstore
      routeAction:
        single:
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /api/pets

Function Routing

Wouldn’t it be better if we could route to the named REST function instead of having to know the specifics of the query path (i.e. /api/pets) the service is expecting? Gloo can help us with that. Let’s set up a route to the findPets REST function.

glooctl add route \
   --name coalmine \
   --path-prefix /findPets \
   --dest-name default-petstore-v1-8080 \
   --rest-function-name findPets

And test it. We should see the same results as the request to /petstore, as both examples exercise the findPets REST function in the petstore service. This also shows that Gloo allows you to create multiple routing rules for the same REST function, if you want.

curl ${PROXY_URL}/findPets

If we want to route to a function with parameters, we can do that too by telling Gloo how to find the id parameter. In this case, it happens to be a path parameter, but it could come from other parts of the request.

Note: We’re about to create a route with a different path prefix (findPetWithId) than the name of the function (findPetById) it routes to. Gloo allows you to map any prefix path to any function name.

glooctl add route \
   --name coalmine \
   --path-prefix /findPetWithId \
   --dest-name default-petstore-v1-8080 \
   --rest-function-name findPetById \
   --rest-parameters ':path=/findPetWithId/{id}'

Let’s look up the details for the pet with id 1

curl ${PROXY_URL}/findPetWithId/1
{"id":1,"name":"Dog","status":"available"}

And the pet with id 2

curl ${PROXY_URL}/findPetWithId/2
{"id":2,"name":"Cat","status":"pending"}
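The ':path=/findPetWithId/{id}' parameter tells Gloo where in the request path to find the id value. As a rough shell sketch of that capture (illustration only; the actual matching happens inside Envoy, driven by Gloo’s configuration):

```shell
# Illustration only: capturing the {id} segment from the request path.
template_prefix="/findPetWithId/"

for path in /findPetWithId/1 /findPetWithId/2; do
  # Strip the fixed part of the template; what remains is the {id} value.
  id="${path#"$template_prefix"}"
  echo "extracted id: $id"
done
# Prints:
# extracted id: 1
# extracted id: 2
```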

Here’s the complete YAML for the virtual service setup we just built. To recreate it, you could just kubectl apply the following.

coalmine-virtual-service-part-1.yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: coalmine
  namespace: gloo-system
spec:
  displayName: coalmine
  virtualHost:
    domains:
    - '*'
    name: gloo-system.coalmine
    routes:
    - matcher:
        prefix: /findPetWithId
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPetById
              parameters:
                headers:
                  :path: /findPetWithId/{id}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
    - matcher:
        prefix: /findPets
      routeAction:
        single:
          destinationSpec:
            rest:
              functionName: findPets
              parameters: {}
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
    - matcher:
        prefix: /petstore
      routeAction:
        single:
          upstream:
            name: default-petstore-v1-8080
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /api/pets
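As mentioned earlier, rest parameters don’t have to come from the path. As a hypothetical sketch (the /findPetByHeader prefix and x-pet-id header name are my own inventions, not part of the example repo), a route could instead pull {id} from a custom request header:

```yaml
# Hypothetical route variant: source the {id} parameter from a custom header.
- matcher:
    prefix: /findPetByHeader
  routeAction:
    single:
      destinationSpec:
        rest:
          functionName: findPetById
          parameters:
            headers:
              x-pet-id: '{id}'
      upstream:
        name: default-petstore-v1-8080
        namespace: gloo-system
```

A client would then set the header on the request, e.g. curl -H 'x-pet-id: 1' ${PROXY_URL}/findPetByHeader.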

Summary

This post is just the beginning of our function gateway journey with Gloo. Hopefully it’s given you a taste of the more sophisticated function-level routing options available to you. I’ll try to follow up with more posts on even more options available to you.

Read More