Ingress controllers are quite confusing. Here's the general summary:
- General best practice for Kubernetes clusters is to have a proxy that routes external traffic to internal services
- This is because Kubernetes provides its own internal network space, so your internal services aren't directly exposed externally (although you can expose them directly if you like)
- Kubernetes has a spec for how this is done, the "ingress" spec
- An ingress controller implements the ingress spec
- The ingress spec is pretty limited in functionality (basic routing only; no support for timeouts, for example)
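To make the "basic routing" point concrete, here's roughly what a minimal Ingress resource looks like (the host and service names are placeholders): host/path matching and a backend service, and not much else.

```yaml
# Minimal Ingress: route requests for example.com/ to a Service.
# "example.com" and "my-service" are placeholder names.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-routing
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```

Any conformant ingress controller will handle a resource like this the same way; the spec doesn't go much beyond it.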
So if you just need to do basic routing, any ingress controller is going to do the same thing, more or less.
If you want to do more than that (which is very likely), then you'll want to compare the ingress controllers beyond basic ingress. That's where NGINX vs Envoy Proxy vs Traefik etc come into play as your core data plane proxy, and then how much stuff comes on top of it.
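This is where controller-specific extensions come in. For example, the community ingress-nginx controller exposes timeouts (which the Ingress spec itself doesn't cover) through annotations on the Ingress resource; other controllers use their own annotations or CRDs for the same thing. A sketch:

```yaml
# ingress-nginx-specific annotations -- not part of the Ingress spec,
# and not portable to other controllers. Values are in seconds.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
```

The moment you depend on annotations like these, you're comparing controllers, not the spec.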
Hope that helps.
(Disclosure: I work on Ambassador, one of the Envoy-based ingress controllers)
I was about to suggest using something like Ambassador in response to your comment and then saw at the end you are the Ambassador guy. Loved Ambassador as it made the hell of using Ingress go away for me at the time. I'm focused on OpenStack nowadays, but wanted to give you all a shout-out!
Istio does have an ingress gateway. Again, if you're doing basic routing it's more-or-less the same as any other ingress (that's what a standard is for). That being said — Istio is focused more on traffic inside the data center ("east-west") vs getting traffic into the data center ("north-south"). And these really are distinct problems, e.g., you don't need to worry about stuff like PROXY protocol, redirect-to-HTTPS, OpenID Connect, etc inside the data center. But you definitely need to worry about these at the edge.
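For reference, Istio's ingress gateway is configured with its own Gateway resource rather than the Ingress spec; something like the sketch below (the gateway name and host are placeholders) handles one of those edge concerns, redirect-to-HTTPS, via the `httpsRedirect` field:

```yaml
# Istio Gateway (not a Kubernetes Ingress). "my-gateway" and
# "example.com" are placeholders; selector matches Istio's
# default ingress gateway deployment.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
    tls:
      httpsRedirect: true  # edge concern: send HTTP traffic a 301 to HTTPS
```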
This is the first I've heard of this project. Is anyone using it who can say something about how it works for them and, in particular, whether it works well for bare-metal clusters?
I don't see myself using something like this on a public cloud when I could just use an Ingress Controller offered by the cloud provider unless there are very good reasons for it. Which there might be! Which is what I ask for. :)
I tried to install a Kubernetes ingress but it turned out to be too difficult, so I used the standard service from my cloud. Do you know any good articles about how this mechanism works?