Save Money by Caching Paid APIs with Amazon CloudFront

The modern developer enjoys a wide variety of third-party platforms that make life easier. Developers can tap into specialized functionality and curated data from the cloud, allowing them to focus on their own unique business challenges.

Of course, everything comes at a price, and third-party providers, be they small teams or tech giants, may charge a fee for their efforts. Totally fair.

However, you may want to reduce your call expenses where you can. Or maybe you need to deliver data faster, making your application more responsive. Or both. In this article I explore an option offered by Amazon Web Services that can help you achieve those requirements.

Important: whenever you consider storing or caching data from an external service, you must consult its data retention policy. A few providers will let you store data indefinitely, others not at all, while some allow a limited time period. Don't forget to read their source data attribution policy either; you might have to display the data source's name to the user.

The use case

Your fictional website lists real-world places of interest (say, restaurants) in a city. Best, worst, cheapest: it matters less. Once a user clicks on such a location, a more detailed view is shown. Each place, as a data entity, has an id and its corresponding richer information. This makes it a good candidate for caching or storing, with the id as the key.
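To make the idea concrete, here is a minimal sketch of that pattern in plain JavaScript. All names here are hypothetical, and fetchPlaceDetails stands in for the real, paid API call:

```javascript
// Minimal illustration of the caching idea: place details keyed by id.
const cache = new Map();

function getPlace(id, fetchPlaceDetails) {
  if (cache.has(id)) return cache.get(id); // cache hit: no paid call made
  const details = fetchPlaceDetails(id);   // cache miss: call the paid API
  cache.set(id, details);
  return details;
}
```

In the rest of the article, this role is played by CloudFront itself rather than by application code.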

At first I was thinking of adopting Places from the Google Maps Platform. However, their terms about caching sound a little too strict. This led me to Foursquare's Places API, which allows retaining their data for up to 24 hours.

Note: the main criteria for choosing an API were that it is served over HTTP, that it green-lights retention, and that it has a price. Any other provider meeting these criteria could have been used for this article.

As a caching solution I'll go with Amazon CloudFront. It's easy to set up and offers prized functionality: you can deliver, with low latency, both static and dynamic content (by proxying an HTTP server). This approach keeps client applications code-wise agnostic to the backend, the only change being an endpoint string value.
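As a sketch of what "the only change being an endpoint string value" means in practice, a client could build its URLs like this (the CloudFront domain is a placeholder; the Foursquare host and the v parameter are taken from their API):

```javascript
// Switching between origin and CDN is a one-string change on the client.
const USE_CDN = true; // flip this to go back to the origin API

const BASE_URL = USE_CDN
  ? "https://<cloudfrontid>.cloudfront.net/v2/venues" // placeholder domain
  : "https://api.foursquare.com/v2/venues";

// `credentials` is the "client_id=...&client_secret=..." query fragment
function venueDetailsUrl(id, credentials) {
  return `${BASE_URL}/${id}?${credentials}&v=20190817`;
}
```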

When comparing the costs we can ignore Foursquare's $599 monthly API fee for commercial usage, as well as the initial calls to the search endpoint that fetch the venue ids, as those are the same in both cases.

With this in mind, the goal is to determine the price and performance of the source versus the cache technology, simplified as a contest between the two endpoints:


Benchmarking setup

Both the original API and the CDN will undergo the same measurements. The stress tests simulate a set of users calling the exposed RESTful API directly, as if from a browser or mobile app. We'll use the k6 tool to output valuable stats, running it over the internet from my everyday notebook.

Below is the k6 config script; it will be used in both cases.

// benchmark.js
import http from "k6/http";
import { check } from "k6";

export let options = {
  vus: 10,
  iterations: 4000,
};

let url = __ENV.API_FULL_URL;
let params = {
  headers: { "Accept-Encoding": "gzip" },
};

export default function () {
  let response = http.get(url, params);

  check(response, {
    "http2 is used": (r) => r.proto === "HTTP/2.0",
    "status is 200": (r) => r.status === 200,
    // this check is only expected to pass for the CloudFront run
    "hit cache": (r) => r.headers["X-Cache"] === "Hit from cloudfront",
  });
}
Due to the restrictive quota of 500 calls per day on the venue details endpoint on the Personal API plan, I will instead call the /search endpoint with coordinates that yield the same payload size; this allows almost 100k calls per day.



First, let’s see how much we’d have to pay if we just used the API directly. Consulting their tier plan, the cost for a fictitious total of 1 million requests, at the $0.003 per-call fee, would be $3000.

Taking a look at the rate limits, we learn about the 5000 requests per hour limitation on the venues/* endpoints. A spike in traffic, or some batch processing, could make a caller hit that threshold. Moreover, they state they offer only 4 queries per second (tests say it’s actually more).
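If you do call the origin directly, a small guard can help you back off before blowing through the hourly quota. The header name below is an assumption; check which rate-limit headers your plan actually returns:

```javascript
// Hypothetical guard: back off when the remaining-call header runs low.
// The "X-RateLimit-Remaining" header name is assumed, not confirmed.
function shouldThrottle(headers, threshold = 50) {
  const remaining = Number(headers["X-RateLimit-Remaining"]);
  return Number.isFinite(remaining) && remaining < threshold;
}
```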


Running k6 in a terminal, with the URL containing your credentials, as

$ k6 run -e API_FULL_URL=",-73.01&client_id=<id>&client_secret=<secret>&limit=38&v=20190817" ./benchmark.js

we get the results in a detailed output, of which the two most important lines are copied in the tables:




For the costs, I put the AWS Simple Monthly Calculator to use. Assume you have already used up your free tier privileges.

Let’s also suppose that on average half of the requests hit the cache and half don’t. Given 1 million requests per month, each with an Average Object Size of 5 kilobytes, that would be 5 GB/month of Data Transfer Out and another 5 of Data Transfer Out to Origin, if I understood correctly. Checking HTTPS as the Type of Requests, the estimation yields a value of $1.70.

Adding to this the price of the half a million requests that will not hit the CloudFront cache but the Foursquare API, which is $1500 (0.5M times $0.003), we have a total estimate of $1501.70.
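Written out, the arithmetic above, with the same naive assumptions (1 million monthly requests, a 50% hit ratio, the calculator's $1.70 estimate), looks like this:

```javascript
// Back-of-the-envelope comparison: direct API vs CloudFront in front of it.
const requestsPerMonth = 1000000;
const pricePerApiCall = 0.003;  // Foursquare's per-call fee, in USD
const cacheHitRatio = 0.5;      // assumed: half the requests hit the cache
const cloudFrontEstimate = 1.7; // USD, from the AWS calculator

const directCost = requestsPerMonth * pricePerApiCall;
const missCost = requestsPerMonth * (1 - cacheHitRatio) * pricePerApiCall;
const cachedCost = missCost + cloudFrontEstimate;

console.log(directCost); // 3000
console.log(cachedCost); // 1501.7
```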

On the Create Distribution page there are a few settings to take care of:

  • we have the Origin Domain Name field; in our case it’s “”.
  • the Origin Path is “/v2/venues”
  • set the Origin Protocol Policy to Match Viewer, otherwise you might get some annoying 301 (Moved Permanently) status codes redirecting you to the origin API
  • at Object Caching, select Customize; the granted retention period is 24h, so put the equivalent of 86400 seconds in all three fields
  • for Query String Forwarding and Caching I selected Forward all, cache based on whitelist and wrote client_id, client_secret and v, each on a new line. This matters less here because the venue id is part of the route (venues/:id) and not a query string parameter.
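For reference, the same settings can be expressed as a fragment of the DistributionConfig JSON accepted by `aws cloudfront create-distribution --distribution-config file://config.json`. Field names come from the CloudFront API; the origin domain is my assumption, and required fields such as CallerReference are omitted for brevity:

```json
{
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "foursquare-origin",
        "DomainName": "api.foursquare.com",
        "OriginPath": "/v2/venues",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "match-viewer"
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "foursquare-origin",
    "MinTTL": 86400,
    "DefaultTTL": 86400,
    "MaxTTL": 86400,
    "ForwardedValues": {
      "QueryString": true,
      "QueryStringCacheKeys": {
        "Quantity": 3,
        "Items": ["client_id", "client_secret", "v"]
      }
    }
  }
}
```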


Using k6 on the CloudFront endpoint this time, we get the following output.

$ k6 run -e API_FULL_URL="https://<cloudfrontid><id>&client_secret=<secret>v=20190817" ./benchmark.js


Comparing $3000 versus roughly $1500 in expenses and 269ms versus 33ms in response times, we happily notice that we get roughly eight times the speed at half the cost. So there you have it: the numbers are too promising not to give CloudFront a try.

Even though I chose to cache Foursquare’s API, this does not mean there’s anything wrong with it. On the contrary, the API is easy to get started with thanks to the sandbox environment, has solid documentation, and provides valuable data.

However, don’t generalize; study your particular case. Quotas, rates and prices will almost certainly differ. For the sake of easy calculations I’ve worked with naive estimates for request count and request size. This article is a guideline on how you could reduce expenses and deliver content faster, and serves as proof that it is an achievable goal.

Bonus? Pre-cache using Lambda

There may be better-suited solutions out there, but here’s one.

There is a possibility you need the speed of the CDN as soon as possible. This could make your users happier or earn a higher ranking from search engines, which would pay off in the long run. Below I showcase an imaginary scenario where you have a daily report with 1000 critical objects that should be pre-cached. The venue ids are stored in S3, and with the help of a Lambda function you could hit CloudFront to trigger caching. The function can be invoked by a CloudWatch event fired every 24 hours and 5 minutes.

According to the Lambda cost calculator, with a 125900 ms duration and 128 MB allocated for the script, running 30 times per month, the cost would be $0.01. Negligible. The cost of the calls to the original API stays the same; those objects would have been requested organically eventually.