
Kubernetes Ingress Past, Present, and Future

Photo by Luke Porter.

Overview

This post was inspired by the February 19, 2019, episode of the Kubernetes Podcast, “Ingress, with Tim Hockin.” The Kubernetes Podcast is turning out to be a very well done podcast overall, and well worth the listen. In the Ingress episode, the hosts interview Tim Hockin, one of the original Kubernetes co-founders, a team lead on the Kubernetes predecessors Borg and Omega, and still very active within the Kubernetes community, for example chairing the Kubernetes Network Special Interest Group that currently owns the Ingress resource specification. Tim talks about the history of Kubernetes Ingress, current developments around Ingress, and proposed futures. The episode inspired me to reflect on both Ingress Controllers (which implement Ingress manifests) and Ingress the concept (allowing clients outside the Kubernetes cluster to access services running inside it).

So what’s a Kubernetes Ingress?

To paraphrase from the Kubernetes Ingress documentation, Ingress is an L7 network service that exposes HTTP(S) routes from outside to inside a Kubernetes cluster. A Kubernetes cluster may have one or more Ingress Controllers running, and each controller manages service reachability, load balancing, TLS/SSL termination, and other services for that controller’s associated routes.

Gloo as Ingress

Each Ingress manifest includes an annotation that indicates which Ingress controller should manage that Ingress resource. For example, to have Solo.io Gloo manage a specific Ingress resource, you would specify the following. Note the included annotation kubernetes.io/ingress.class: gloo.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: gloo
  labels:
    chart: jsonplaceholder-v0.1.0
  name: jsonplaceholder-jsonplaceholder
  namespace: default
spec:
  rules:
  - host: gloo.example.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: jsonplaceholder-jsonplaceholder
          servicePort: 8080

Ingress Challenges

Ingress has existed as a beta extension since Kubernetes 1.1, and it’s proven to be a lowest-common-denominator API. For example, the community NGINX Ingress Controller is used by many in production, but it requires NGINX-specific Ingress annotations for all but the simplest use cases. The current Kubernetes Ingress resource specification also has many limitations; for example, all referenced services and secrets MUST be in the same namespace as the Ingress, i.e., no cross-namespace referencing. And there have been long debates about how exactly to interpret the path attribute: is it a regular expression, as the documentation implies, or a path prefix, as controllers like NGINX implement it? These challenges have made it difficult in practice to write an Ingress manifest that is portable across implementations. The current Ingress manifest has also proven difficult to round-trip sync with Custom Resources (CRDs), which is unfortunate as CRDs are proving to be a beneficial way to extend Kubernetes.
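To make the namespace restriction concrete, here’s a hypothetical sketch (the names are illustrative, not from a real cluster): the backend Service must live in the same namespace as the Ingress that references it.

# Hypothetical: this Ingress lives in namespace "web", so serviceName can only
# reference a Service in "web". There is no way to point it at an "api-svc"
# Service living in a separate "api" namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cross-namespace-example
  namespace: web
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-svc   # must exist in namespace "web"
          servicePort: 8080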

What’s Next for Ingress?

In the podcast, Tim Hockin says that given how many people are using the current beta Ingress spec in production, there is a push to move the existing spec to GA status and then start work on a next-generation specification, either an Ingress v2 or a break-up of Ingress across multiple CRDs. Tim mentions that the Kubernetes community is looking at several Envoy-based Ingress implementations for inspiration. For example, Heptio Contour has created a very interesting, implementation-neutral CRD called IngressRoute. IngressRoute looks to address the governance challenges with Ingress: if a company wants to expose an /eng route path, the current Ingress model makes that hard because anyone can create a conflicting Ingress manifest for /eng. IngressRoute provides governance and delegation; for example, cluster admins can define the /eng route on a virtual host and explicitly delegate its implementation to the eng namespace, which prevents others from overriding that route path.
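As a rough sketch of that delegation model (field names follow Contour’s IngressRoute documentation from that era; the route and namespace names here are illustrative), a cluster admin could own the root routes and hand /eng off to the eng team:

apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: root
  namespace: default
spec:
  virtualhost:
    fqdn: example.com
  routes:
  - match: /eng
    delegate:
      name: eng-routes        # an IngressRoute owned by the eng team
      namespace: eng          # only that namespace can define what /eng serves

The eng team then manages its own eng-routes IngressRoute in the eng namespace, and no Ingress elsewhere in the cluster can hijack /eng.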

The Istio community, whose data plane is also built on Envoy like Heptio Contour, is also defining its own ingress CRDs.
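Istio, for instance, splits the job between a Gateway resource (which ports and hosts its ingress Envoy listens on) and its own VirtualService resource (how traffic for those hosts is routed). A minimal sketch of the Gateway half, assuming the stock istio-ingressgateway deployment:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress Envoy
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.example.com"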

It will be fascinating to see how Ingress evolves in the not too distant future.

Related reading: API Gateways are going through an identity crisis.

Demo Time

I find it helpful to see working code to help make concepts more real, so let’s run through a few examples of Ingress and beyond.

For this example, I’m going to use a Kubernetes service created from https://jsonplaceholder.typicode.com/, which provides a quick set of REST APIs that return different JSON outputs helpful for testing. It’s based on the Node.js json-server, which is very cool and worth looking at on its own. I forked the original GitHub jsonplaceholder repository, ran draft create on the project, and made a couple of tweaks to the generated Helm chart. Draft is a super fast and easy way to bootstrap existing code into Kubernetes. I’m running this entire example locally using minikube.
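If you’re curious what Draft generated before my tweaks, you can run draft create against a fresh clone yourself; roughly speaking (the exact file set and chart contents vary by Draft version), it detects the language and drops its packaging files next to the source:

draft create
# Draft detects Node.js and generates, approximately:
#   Dockerfile   - builds the app into a container image
#   charts/      - a default Helm chart for deploying that image
#   draft.toml   - Draft's own application configuration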

The jsonplaceholder service comes with six common resources, each of which returns a set of JSON objects. For this example, we’ll be getting the first user resource at /users/1.

  • /posts 100 posts
  • /comments 500 comments
  • /albums 100 albums
  • /photos 5000 photos
  • /todos 200 todos
  • /users 10 users

Following is a script to try this example yourself, and there’s also an asciinema playback so you can see what it looks like running on my machine. We’ll unpack what’s happening following the playback.

# Install tooling
brew update
brew cask install minikube
brew install kubernetes-cli \
  kubernetes-helm \
  azure/draft/draft \
  glooctl

# Create and set up local Kubernetes Cluster
minikube start
helm init
draft init
glooctl install ingress

# Draft runs better locally if you configure
# against minikube docker daemon
eval $(minikube docker-env)

# Get and run the example
git clone https://github.com/scranton/jsonplaceholder.git
cd jsonplaceholder
draft up

# Validate all is running
kubectl get all --namespace default
kubectl get all --namespace gloo-system
kubectl get ingress --namespace default
curl --header "Host: gloo.example.com" \
  $(glooctl proxy url --name ingress-proxy)/users/1

What Happened?

We installed the local tooling; you can check the respective websites for full install details.

We then started up a local Kubernetes cluster (minikube) and initialized Helm and Draft. We also installed Gloo ingress into our local cluster.

We then ran git clone on our example and used draft up to build and deploy it to our cluster. Let’s spend a minute on what happened in this step. I originally forked the jsonplaceholder GitHub repository and ran draft create against its code. Draft autodetects the source code language, in this case Node.js, and creates both a Dockerfile that builds our example application into a container image and a default Helm chart. I then made a few minor tweaks to the Helm chart to enable its Ingress. Let’s look at that Ingress template (shown with the values it renders to for this chart). The main changes are the addition of the kubernetes.io/ingress.class: gloo annotation to mark this Ingress for Gloo’s Ingress Controller, and setting the host to gloo.example.com, which is why our curl command included --header "Host: gloo.example.com".


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jsonplaceholder-jsonplaceholder
  labels:
    chart: "jsonplaceholder-v0.1.0"
  annotations:
    kubernetes.io/ingress.class: gloo
spec:
  rules:
  - host: gloo.example.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: jsonplaceholder-jsonplaceholder
          servicePort: 8080
charts/templates/ingress.yaml

For more examples of using Gloo as a basic Ingress controller, you can check out Kubernetes Ingress Control using Gloo.

You may have also noticed the call to $(glooctl proxy url --name ingress-proxy) in the curl command. This is needed when you’re running in a local environment like minikube and need to get the host IP and port of the Gloo proxy server. When Gloo is deployed to a cloud provider like Google or AWS, those environments associate a static IP with the proxy and allow port 80 (or 443 for HTTPS) to be used, and that static IP would be registered with a DNS server; i.e., when Gloo is deployed to cloud-managed Kubernetes you could simply do curl http://gloo.example.com/users/1.
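On minikube, that sub-command expands to the ingress proxy’s NodePort address, so the earlier curl is really hitting something like this (your minikube IP and port will differ):

glooctl proxy url --name ingress-proxy
# e.g. http://192.168.99.100:31500   <- minikube IP + NodePort; illustrative only

# which makes the full request equivalent to:
curl --header "Host: gloo.example.com" http://192.168.99.100:31500/users/1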

Ingress Example Challenges

Let’s say we wanted to remap the existing /users/1 to /people/1, since users are people too. That becomes tricky with Ingress manifests: we can add a second rule for /people, but we need to rewrite that path to /users before sending it to our service, as the service doesn’t know how to handle requests for /people. If you were using the NGINX Ingress controller, you could add another annotation, nginx.ingress.kubernetes.io/rewrite-target: /, but now we’re adding implementation-specific annotations; that is, the nginx annotation won’t be recognized by other Ingress Controllers. Annotations are also a flat namespace, so piling them on gets quite messy, which is part of why Custom Resources (CRDs) were created. A sketch of the NGINX version follows, and then we’ll see what the original route, and our path-rewriting route, would look like in a CRD-based Ingress: Gloo.
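For comparison, here’s roughly what that second rule could look like with the NGINX Ingress controller (a sketch only; the exact rewrite-target value and behavior vary across NGINX controller versions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jsonplaceholder-people
  annotations:
    kubernetes.io/ingress.class: nginx
    # NGINX-specific; other Ingress controllers silently ignore it
    nginx.ingress.kubernetes.io/rewrite-target: /users
spec:
  rules:
  - host: gloo.example.com
    http:
      paths:
      - path: /people
        backend:
          serviceName: jsonplaceholder-jsonplaceholder
          servicePort: 8080

Either way, the rewrite logic lives in an annotation the rest of the Ingress ecosystem knows nothing about, which is exactly the portability problem described above.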

Gloo Virtual Services

Gloo uses a concept called a Virtual Service, derived from similar ideas in Istio and Envoy, that is conceptually equivalent to an Ingress resource. It’s easiest to show the equivalent of the example Ingress we’ve created so far as a Gloo Virtual Service.

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - gloo.example.com
    routes:
    - matcher:
        prefix: /
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-jsonplaceholder-8080
            namespace: gloo-system

You’ll notice that it looks very similar to the Ingress we had previously created, with a few subtle changes. The path specifier is prefix: /, which is generally what people intend, i.e., if the beginning of the request path matches the route’s path specifier, then apply the route actions. If we wanted to match the previous Ingress exactly, we could use regex: /.* instead. Virtual Services allow you to specify paths by prefix, exact match, or regular expression. You can also see that instead of backend: with serviceName and servicePort, a Virtual Service has a routeAction that delegates to a single upstream. Gloo upstreams are auto-discovered and can refer to Kubernetes Services, REST/gRPC functions, cloud functions like AWS Lambda and Google Cloud Functions, and other services external to Kubernetes.
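For example, if we only wanted to serve exactly /users/1, or wanted to reproduce the original regex behavior of the Ingress, the matcher could be written in either of these alternative styles (a sketch; only the matcher changes, the routeAction stays the same):

    # exact match: only /users/1, nothing else
    - matcher:
        exact: /users/1
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-jsonplaceholder-8080
            namespace: gloo-system

    # regex match: equivalent to the /.* path in the earlier Ingress
    - matcher:
        regex: /.*
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-jsonplaceholder-8080
            namespace: gloo-system

Running glooctl get upstreams should list the auto-discovered upstream names if you want to point a route at a different service.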

More details on Gloo are available in the Gloo documentation.

Let’s go back to our example and update our Virtual Service to do the path rewrite we wanted, i.e., /people => /users:

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - gloo.example.com
    routes:
    - matcher:
        prefix: /people
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-jsonplaceholder-8080
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /users
    - matcher:
        prefix: /
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-jsonplaceholder-8080
            namespace: gloo-system

We’ve added a second route matcher, just like adding a second route path in an Ingress, and specified prefix: /people. This will match all requests that start with /people, while all other calls to the gloo.example.com domain are handled by the other route matcher. We also added a routePlugins section that rewrites the request path to /users so that our service will correctly handle the request. Route plugins let you operate on both the request to the upstream service and the response coming back from it. This is best shown with an example, so for our new /people route let’s also transform the response: add a new header x-test-phone with a value taken from the response body, and trim the response body down to a couple of fields: name, plus the address street and city. In the Virtual Service below, the {{ … }} values are Inja template expressions that reference fields of the upstream JSON response (name, address.street, address.city, and phone).

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  creationTimestamp: "2019-04-08T21:43:45Z"
  generation: 1
  name: default
  namespace: gloo-system
  resourceVersion: "772"
  selfLink: /apis/gateway.solo.io/v1/namespaces/gloo-system/virtualservices/default
  uid: 6267ee31-5a47-11e9-bc30-867df7be8a8a
spec:
  virtualHost:
    domains:
    - gloo.example.com
    routes:
    - matcher:
        prefix: /people
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-jsonplaceholder-8080
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /users
        transformations:
          responseTransformation:
            transformation_template:
              body:
                text: '{ "name": "", "address":
                  { "street": "",
                    "city": "" } }'
              headers:
                x-test-phone:
                  text: ''
    - matcher:
        prefix: /
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-jsonplaceholder-8080
            namespace: gloo-system

Let’s see what that looks like. My example GitHub repository already includes the full Gloo Virtual Service we just examined. We need to configure Gloo in gateway mode, which adds another proxy that handles Virtual Services in addition to Ingress resources. We’ll use draft up to ensure our example is fully deployed, including the full Virtual Service, and then we’ll call both /users/1 and /people/1 to see the differences.

# Install Gloo and update example
glooctl install gateway
draft up

# Call service
curl --verbose --header "Host: gloo.example.com" \
  $(glooctl proxy url --name gateway-proxy)/users/1

curl --verbose --header "Host: gloo.example.com" \
  $(glooctl proxy url --name gateway-proxy)/people/1
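If everything is wired up, the two calls should differ only in shape: /users/1 returns the full jsonplaceholder user record, while /people/1 returns the trimmed body plus the extra header. Roughly (the field values are jsonplaceholder’s canned data for user 1, abbreviated here):

# /users/1 - full upstream response (abbreviated)
{ "id": 1, "name": "Leanne Graham", "username": "Bret", ... }

# /people/1 - transformed response; note the added header in the --verbose output
< x-test-phone: 1-770-736-8031 x56442
{ "name": "Leanne Graham", "address": { "street": "Kulas Light", "city": "Gwenborough" } }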

Mind Blown

Ok, well not that mind-blowing if you’ve used other L7 networking products or done other integration work, but still pretty cool relative to standard Ingress objects. Gloo is using Inja Templates to process the JSON response body. More details in the Gloo documentation.

Summary

In this article, we touched on some of the history of and difficulties with the existing Kubernetes Ingress resource. Ingress resources continue to play a role within Kubernetes deployments despite the many challenges of annotation-based extensions. Kubernetes Custom Resources (CRDs) were created to address some of those extension challenges and can provide a cleaner way to extend Kubernetes, as you saw in the Gloo Ingress and Gateway examples. I’m a big believer in the potential of Envoy-based solutions, as are others in the Istio and Contour communities, and it will be exciting to see how the Kubernetes community evolves Ingress after the existing resource spec finally moves to GA status.


Kubernetes Ingress Control using Gloo

Kubernetes is excellent and makes it easier to create and manage highly distributed applications. A challenge, then, is how to share your great Kubernetes-hosted applications with the rest of the world. Many lean towards Kubernetes Ingress objects, and this article will show you how to use the open source Solo.io Gloo to fill this need.

Gloo as Ingress

Gloo is a function gateway that gives users many benefits, including sophisticated function-level routing and extensive service discovery with introspection of OpenAPI (Swagger) definitions, gRPC reflection, Lambda discovery, and more. Gloo can act as an Ingress Controller, that is, it routes traffic from outside Kubernetes to services hosted in the cluster based on the path routing rules defined in an Ingress object. I’m a big believer in showing technology through examples, so let’s quickly run through one to show you what’s possible.

Prerequisites

This example assumes you’re running a local minikube instance and that you also have kubectl running. You can run this same example on your favorite cloud provider’s managed Kubernetes cluster; you’d just need to make a few tweaks. You’ll also need Gloo installed. Let’s use Homebrew to set all of this up, then start minikube and install Gloo. It will take a few minutes to download, install, and start everything on your local machine.

brew update
brew cask install minikube
brew install kubectl glooctl curl

minikube start
glooctl install ingress

One more thing before we dive into Ingress objects, let’s set up an example service deployed on Kubernetes that we can reference.

kubectl apply \
  --filename https://raw.githubusercontent.com/solo-io/gloo/master/example/petstore/petstore.yaml

Setting up an Ingress to our example Petstore

Let’s set up an Ingress object that routes all HTTP traffic to our petstore service. To make this a little more exciting and challenging, and who doesn’t like a good tech challenge, let’s also configure a host domain, which will require a little extra curl magic to call correctly on our local Kubernetes cluster. The following Ingress definition will route all requests to http://gloo.example.com to our petstore service listening on port 8080 within our cluster. The petstore service provides some REST functions listening on the query path /api/pets that will return JSON for the inventory of pets in our (small) store.

If you are trying this example in a public cloud Kubernetes instance, you’ll most likely need to configure a Cloud Load Balancer. Make sure you configure that Load Balancer for the service/ingress-proxy running in the gloo-system namespace.

The important details of our example Ingress definition are:

  • Annotation kubernetes.io/ingress.class: gloo, which is the standard way to mark an Ingress object as handled by a specific Ingress controller, i.e., Gloo. This requirement will go away soon as we add the ability for Gloo to be the cluster default Ingress controller.
  • Path wildcard /.* to indicate that all traffic to http://gloo.example.com is routed to our petstore service

cat <<EOF | kubectl apply --filename -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: petstore-ingress
  annotations:
    kubernetes.io/ingress.class: gloo
spec:
  rules:
  - host: gloo.example.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: petstore
          servicePort: 8080
EOF

We can validate that Kubernetes created our Ingress correctly with the following command.

kubectl get ingress petstore-ingress

NAME               HOSTS              ADDRESS   PORTS   AGE
petstore-ingress   gloo.example.com             80      14h

To test, we’ll use curl to call our local cluster. As I said earlier, by defining host: gloo.example.com in our Ingress, we need to do a little extra to call this without touching DNS or our local /etc/hosts file. I’m going to use the relatively recent curl --connect-to option; you can read more about it in the curl man pages.

The glooctl command-line tool helps us get the local host IP and port for the proxy with the glooctl proxy address --name <ingress name> --port http command. It returns the address (host IP:port) of the Gloo Ingress proxy that gives us external access to our local Kubernetes cluster. If you are trying this example in a public cloud managed Kubernetes, most providers will handle the DNS mapping for your specified domain (which you should own) and the Gloo Ingress service, so in that case you do NOT need the --connect-to magic; just curl http://gloo.example.com/api/pets should work.
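Concretely, on minikube the address sub-command returns a bare host:port pair, something like this (illustrative; your minikube IP and NodePort will differ):

glooctl proxy address --name ingress-proxy --port http
# e.g. 192.168.99.100:31080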

curl --connect-to gloo.example.com:80:$(glooctl proxy address --name ingress-proxy --port http) \
    http://gloo.example.com/api/pets

This should return the following JSON:

[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

TLS Configuration

These days, most people want to use TLS to secure their communications. Gloo Ingress can act as a TLS terminator, so let’s quickly run through what that setup looks like.

Any Kubernetes Ingress doing TLS will need a Kubernetes TLS secret created, so let’s create a self-signed certificate we can use for our example gloo.example.com domain. The following two commands will produce a certificate and generate a TLS secret named my-tls-secret in minikube.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout my_key.key -out my_cert.cert -subj "/CN=gloo.example.com/O=gloo.example.com"

kubectl create secret tls my-tls-secret --key my_key.key --cert my_cert.cert

Now let’s update our Ingress object with the needed TLS configuration. It’s important that the TLS host and the rules host match, and that secretName matches the name of the Kubernetes secret deployed previously.

cat <<EOF | kubectl apply --filename -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: petstore-ingress
  annotations:
    kubernetes.io/ingress.class: gloo
spec:
  tls:
  - hosts:
    - gloo.example.com
    secretName: my-tls-secret
  rules:
  - host: gloo.example.com
    http:
      paths:
      - path: /.*
        backend:
          serviceName: petstore
          servicePort: 8080
EOF

If all went well, our petstore should now be listening on https://gloo.example.com. Let’s try it, again using our curl magic, which we need both to resolve the host and port and to validate our certificate. Notice that we’re asking glooctl for --port https this time, and we’re curling https://gloo.example.com on port 443. We’ll also have curl validate our TLS certificate using curl --cacert my_cert.cert with the certificate we created and used in our Kubernetes secret.

curl --cacert my_cert.cert \
    --connect-to gloo.example.com:443:$(glooctl proxy address --name ingress-proxy --port https) \
    https://gloo.example.com/api/pets
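If the certificate and the routing are both happy, you should get back the same pet inventory as the plain HTTP call:

[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]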

Next Steps

This was a quick tour of how Gloo can act as your Kubernetes Ingress controller making very minimal changes to your existing Kubernetes manifests. Please try it out and let us know what you think at our community Slack channel.

If you’re interested in powering up your Gloo superpowers, try Gloo in gateway mode (glooctl install gateway), which unlocks a set of Kubernetes CRDs (Custom Resources) that give you a more standard, and far more powerful, way of doing advanced traffic shifting, rate limiting, and more without the annotation smell in your Kubernetes cluster. Check out these other articles for more details on Gloo’s extra powers.
