Previously they mostly used a typical LAMP stack on the front end. The PHP apps pulled data from Java backend services, and they had a strict separation policy between the two.
The different parts of the BBC were essentially different apps, with a proxy in front doing path-based routing.
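In nginx terms, that proxy layer would look something like this (a sketch only — I don't know which proxy they actually ran, and the paths and upstream names here are invented for illustration):

```nginx
# Hypothetical sketch of path-based routing: one upstream app per
# BBC "product", selected purely by URL path prefix.
upstream news_app    { server news.internal:8080; }
upstream sport_app   { server sport.internal:8080; }
upstream weather_app { server weather.internal:8080; }

server {
    listen 80;

    # Each path prefix is proxied to its own independent app.
    location /news/    { proxy_pass http://news_app; }
    location /sport/   { proxy_pass http://sport_app; }
    location /weather/ { proxy_pass http://weather_app; }
}
```

The nice property of this layout is that each team's app stays completely independent; the proxy is the only shared piece.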
You’d generate an RPM with your PHP app or Java app, and that got deployed.
That had a few drawbacks, mostly around process: it was a pain to get releases out, as everything had to go through a single team who could deploy your RPM. You also had the inflexibility of a fixed-size pool of servers in a DC you manage yourself.
When they first started using the cloud, that mostly remained the same, just with a streamlined process. You provided an RPM built on the existing LAMP/Java stack, and a build process baked it into an AMI you could deploy. That made deployments more flexible, in that you were no longer constrained by the physical hardware available, and it removed the dependency on a specific team doing the deployment manually on a shared host.
I imagine the hosting started to get expensive with dedicated hosts per service. I'm guessing that slowly, the more they used AWS services and trialled things, they ended up where they are now, which sounds super complex.
I’m not familiar with where they are now other than the article, but I’d bet that going back to an app — PHP, Java, Ruby, whatever — on the front end and binpacking them with Kubernetes would be simpler than dealing with thousands of lambdas on a black-box runtime. Most of the traffic at the BBC hits the edge proxies/cache anyway, so the origins are fairly idle.
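Concretely, the binpacking idea is just running each app as an ordinary Deployment with resource requests, and letting the scheduler pack them onto a shared node pool. A rough sketch — the names, image, and numbers are all made up for illustration:

```yaml
# Hypothetical: one front-end app (any of the stacks above) as a
# Kubernetes Deployment. The resource requests are what let the
# scheduler bin-pack many such apps onto shared nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: news-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: news-frontend
  template:
    metadata:
      labels:
        app: news-frontend
    spec:
      containers:
        - name: app
          image: example/news-frontend:1.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:          # scheduler packs pods by these
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```

Since the origins sit behind the edge cache and are mostly idle, small requests like these would let a lot of apps share a modest node pool.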