Routing with Cloudflare Workers

Routing a static site with Serverless on a distributed edge network

Jason Conway-Williams
5 min read · May 31, 2020

What is Cloudflare?

Some people would say it’s a CDN service, but that’s an oversimplified answer.

Cloudflare is one of the biggest networks operating on the Internet.

Cloudflare offers a multitude of services for websites and apps operating on the internet, backed by a bold mission statement:

Cloudflare is on a mission to help build a better Internet.

The Cloudflare network consists of well over one hundred edge locations and is built around a distributed-services model: services run at each edge location, placing them closer to the website or app user.

Cloudflare edge locations

What are Cloudflare Workers?

One of Cloudflare’s more recent offerings is Workers and, like many of the other services the company offers, Workers are extremely impressive. I’m not going to go into detail about how they work, but if you want to know the intricacies of Workers and the differences between them and other Serverless platforms such as AWS Lambda, take a look at Zack Bloom’s article. He explains how Workers use V8 isolates rather than the containers prevalent on other Serverless platforms; it is absolutely worth a read.

Workers provide the ability to run JavaScript functions at Cloudflare edge locations in a Serverless manner, with a pricing model similar to other Serverless platforms. The best place to get an explanation of what a Worker is is probably the Cloudflare website:

Cloudflare Workers are a platform for enabling serverless functions to run as close as possible to the end user. In essence, the serverless code itself is ‘cached’ on the network, and runs when it receives the right type of request. Cloudflare Workers are written in JavaScript against the service workers API, meaning they can use all the functionality offered by service workers. They leverage the Chrome V8 engine for execution. Cloudflare Workers code is hosted in Cloudflare’s vast network of data centers around the world.
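To make that description concrete, here is a minimal, hypothetical Worker (not from the article’s codebase): it listens for `fetch` events via the service workers API and responds at the edge.

```javascript
// Minimal Cloudflare Worker sketch: respond to a fetch event at the edge.
// The handler and response body are illustrative, not from the article.
async function handleRequest(request) {
  const url = new URL(request.url);
  return new Response(`Hello from the edge! You requested ${url.pathname}`, {
    status: 200,
    headers: { 'content-type': 'text/plain' },
  });
}

// The Workers runtime provides addEventListener; guard it so the
// snippet can also be loaded outside that runtime without crashing.
if (typeof addEventListener === 'function') {
  addEventListener('fetch', (event) => {
    event.respondWith(handleRequest(event.request));
  });
}
```

Deploying this single file is all it takes: Cloudflare replicates it across its data centers, and the function runs in whichever edge location is closest to the user.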

Serving Static Assets

One area of focus for me over the past year or so has been moving our front-end stack from our monolithic CMS to a static Serverless architecture. This has involved not only storing our image, JavaScript and CSS assets on AWS S3, but also pre-rendering our HTML pages and storing them on S3 as well. In our existing stack, pre Cloudflare Workers, we had to provide a mechanism to route asset requests to S3 and return the response. This was typically done within our AWS network through routing rules on our Nginx instances, which meant a blue-green deployment to each environment every time we wanted to add or change a route.

Utilising Workers would allow us to remove two network hops from the request, since the worker makes the request directly to S3, and would also let us cache the assets at the edge location.

Routes

Routes are defined when deploying a worker to Cloudflare, either through a deployment platform such as Serverless, the Cloudflare API or the Cloudflare console. The cardinality between workers and routes is as follows: a worker can serve many routes, but a route can only be served by one worker. Wildcards can also be used when defining routes, which works well in my case since the start of the URL stem for each asset type is unique. All JS and CSS assets are served under <domain>/assets/*, all images are served under <domain>/images/* and all HTML is served under <domain>/en-gb/*. As you can see, the worker solution would need to support three wildcarded routes. Since the pricing model for Workers is “you pay for what you use”, we can deploy one worker for each of these routes rather than a single worker for all of them, which allows us to define specific, static response headers in each worker for its asset type. The cost of deploying three workers is the same as deploying a single worker to handle all of the routes.
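The per-asset-type header idea can be sketched as a simple lookup keyed by route prefix. The header values below are illustrative assumptions, not the production configuration; in the three-worker setup described above, each worker would ship with just the entry for its own route.

```javascript
// Illustrative static header sets, one per wildcarded route prefix.
// Values are examples only, not the actual production headers.
const DEFAULT_HEADERS = {
  '/assets/': {
    'cache-control': 'public, max-age=31536000, immutable',
  },
  '/images/': {
    'cache-control': 'public, max-age=86400',
  },
  '/en-gb/': {
    'cache-control': 'public, max-age=300',
    'content-type': 'text/html; charset=utf-8',
  },
};

// Pick the header set whose route prefix matches the request path.
function headersForPath(pathname) {
  const prefix = Object.keys(DEFAULT_HEADERS).find((p) => pathname.startsWith(p));
  return prefix ? DEFAULT_HEADERS[prefix] : {};
}
```

Keeping one worker per route means each lookup table collapses to a single static entry, which is part of what makes the three-worker split attractive at no extra cost.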

The Assets Worker

As with Lambdas, I use the Serverless framework with Cloudflare Workers. Serverless allows me to separate my code into ES6 classes, bundle the code base for deployment using Webpack, and use a standard development environment and CI/CD setup. Below is a screenshot of the RequestHandler class used in the worker to process the asset request, retrieve the asset from S3, and configure and return the response.

Firstly, I create an instance of URL to ease the processing of the URL path and query params. I then return a response if one is present in the cache for the requested URL. If not, the worker makes a request to S3 via the service worker fetch API. If an unsuccessful response is returned from S3, a new Response is created based on the initial response and a set of no-cache headers is added using the HeaderUtils class. If the S3 response is successful, a new Response is created based on the initial response, the content type of the asset is determined, and a set of default headers for that content type is set on the response. The complete response is then added to the cache for future requests before being returned.
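For readers without access to the screenshot, the flow above can be sketched roughly as follows. This is an assumption-laden reconstruction, not the actual class: the S3 origin, the HeaderUtils behaviour, the content-type table and the header values are all placeholders.

```javascript
// Rough sketch of the RequestHandler flow described above.
// S3_ORIGIN and all header values are illustrative assumptions.
const S3_ORIGIN = 'https://my-bucket.s3.amazonaws.com';

class HeaderUtils {
  // Hypothetical stand-in for the article's HeaderUtils class.
  static noCacheHeaders() {
    return { 'cache-control': 'no-store, no-cache, must-revalidate' };
  }
  static defaultHeadersFor(contentType) {
    return {
      'content-type': contentType,
      'cache-control': 'public, max-age=86400',
    };
  }
}

class RequestHandler {
  // Map a file extension to a content type (illustrative subset).
  static contentTypeFor(pathname) {
    const types = {
      js: 'application/javascript',
      css: 'text/css',
      png: 'image/png',
      html: 'text/html; charset=utf-8',
    };
    const ext = pathname.split('.').pop();
    return types[ext] || 'application/octet-stream';
  }

  async handle(event) {
    const url = new URL(event.request.url);
    const cache = caches.default;

    // 1. Return the cached response if one exists for this URL.
    const cached = await cache.match(event.request);
    if (cached) return cached;

    // 2. Otherwise fetch the asset from S3 via the fetch API.
    const s3Response = await fetch(`${S3_ORIGIN}${url.pathname}`);

    // 3. On failure, return a non-cacheable copy of the S3 response.
    if (!s3Response.ok) {
      const errorResponse = new Response(s3Response.body, s3Response);
      for (const [k, v] of Object.entries(HeaderUtils.noCacheHeaders())) {
        errorResponse.headers.set(k, v);
      }
      return errorResponse;
    }

    // 4. On success, apply default headers for the asset's content type,
    //    cache a copy for future requests, and return the response.
    const response = new Response(s3Response.body, s3Response);
    const contentType = RequestHandler.contentTypeFor(url.pathname);
    for (const [k, v] of Object.entries(
      HeaderUtils.defaultHeadersFor(contentType)
    )) {
      response.headers.set(k, v);
    }
    event.waitUntil(cache.put(event.request, response.clone()));
    return response;
  }
}
```

Note the `event.waitUntil` around the cache write: it lets the response return immediately while the Workers runtime keeps the event alive until the cache put completes.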

The code for the assets worker can be found on GitHub.
