About a month ago, I started seriously considering what to do with my cluster, since ingress-nginx is reaching EOL. My first thought was to take this as an opportunity to migrate from the frozen Ingress API to the Gateway API. I dug through the documentation of most implementations of both APIs, but none of them really ticked all the boxes.
Well, as you might have guessed from the title, I ended up writing my own Ingress controller. The initial idea was to create a basic controller that would fit my needs. In retrospect, that was quite an optimistic goal: an Ingress controller combines a Kubernetes controller and a reverse proxy, and neither of those is trivial to implement.
This short post is just a handful of thoughts after spending a couple of days writing an Ingress controller from scratch. The road to switching my own cluster over to this implementation was a bit more involved than this post makes it look. Also, there were actually two instances of ingress-nginx in my cluster, so I expect even more eventful hours of debugging during the second migration.
First steps
I had never programmed a Kubernetes controller before, which made me think the first step would be really painful. However, in a relatively short time I was able to set up the Go build, draft the initial manifests, and deploy a pod that would print information about the Ingresses in the cluster. Kudos to the Go K8s controller libraries. Not long after, I was even able to route basic HTTP traffic!
Well, who would have guessed that TLS would be the first interesting roadblock on the way? Not that setting up a TLS endpoint in Go is hard; that part was fairly simple. The interaction with the Kubernetes control plane was more involved, because the controller now needed to watch for changes to the Secrets referenced by Ingresses, not only the Ingress objects themselves.
Proxying
My naive idea of writing a fairly simple handler that would just make a request on behalf of the client worked until about the time I decided to proxy the first real webpage (in this case, Immich).
The first immediately obvious hiccup was WebSockets. Immich uses them, and my implementation simply had no notion of anything like that. This oversight became the first detour on the way to using s-ingress for real traffic.
The next significant detour came with the deployment of some other services and the realization that there were ingress-nginx features I could not switch without. In order of implementation: IP allowlisting, custom response headers, TCP proxying, subrequest authorization, and basic authentication. At that point I decided to split this functionality into modules, which in retrospect turned out to be quite a good decision.
Deployment
After fiddling with the implementation for a few days, I finally got the courage to swap out the primary controller in my cluster. Well, it turned out that there were still some rough edges. Moreover, s-ingress was not yet on codeberg.org but on my local Forgejo instance, which was running behind the very ingress-nginx I was trying to replace. This led (at least twice) to situations in which I cut myself (and the cluster nodes) off from the package registry, unable to push a fixed version. However, after about two hours of struggle, I had successfully replaced my ingress-nginx with s-ingress. Yay!
Can you try it?
# Yes
helm install s-ingress oci://codeberg.org/oidq/charts/s-ingress
Should you try it? Probably not.
It is public now at codeberg.org/oidq/s-ingress, so you technically can try it; I have even written a short paragraph on how to do that. But you most definitely should not deploy it to do any important job, like being the single entrypoint for all the services in a cluster.
Another fact is that it has been tailor-fitted to my own use case, with very limited room to maneuver. I would definitely like to add some configurability to the implementation, but it is not there yet.
I guess time will tell whether this was a reasonable decision, or whether in a few weeks I will be painstakingly throwing my work out of the window and reverting to some other controller. Well, at least I learned a lot about Kubernetes and the inner workings of a reverse proxy.