
Building an Asynchronous API to Improve Performance

One of the challenges we’ve had to deal with at Swiftype is that we have had customers pushing a lot of search and indexing traffic from very early on. When a customer is pushing hundreds of index updates per second, it’s important to respond quickly so we don’t start dropping requests.

In order to do that, we’ve built bulk create/update API endpoints to reduce the number of HTTP requests required to index a batch of documents and moved most processing out of the web request. We’ve also invested in front-end routing technology to limit the impact customers have on each other.

However, we were not satisfied. Sometimes, when a large customer was indexing a huge number of documents, our front-end queues would still back up. In pursuit of even better response times for our customers, we've built an asynchronous indexing API. Our goals for the new API were high throughput, bulk support for every interaction, and excellent developer ergonomics. In short, we wanted an API that was fast and easy to use.

Here’s how it works.

[Diagram: asynchronous bulk API flow]

First, customers submit a batch of documents to create or update. The request for this looks just like our pre-existing bulk create or update API, but goes to a new endpoint.
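To make that concrete, here's a rough sketch of what a bulk submission might look like from Ruby using Net::HTTP. The endpoint path, engine and document type names, and field layout below are illustrative placeholders, not the exact API.

    require 'net/http'
    require 'json'
    require 'uri'

    # Hypothetical async bulk endpoint; check the API documentation for the real path.
    uri = URI('https://api.swiftype.com/api/v1/engines/bookstore/document_types/books/documents/async_bulk_create.json')

    payload = {
      auth_token: 'YOUR_API_KEY',  # placeholder credential
      documents: [
        { external_id: '1', fields: [{ name: 'title', value: 'First Book',  type: 'string' }] },
        { external_id: '2', fields: [{ name: 'title', value: 'Second Book', type: 'string' }] }
      ]
    }

    response = Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')
    puts response.body  # includes links for checking the document receipts (more on that below)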

When our server receives the request, it performs a quick sanity check on the input without hitting the database. If all the input parameters are present and correctly formatted, we create two records in our database for each document that was submitted: a document creation journal and a document receipt.
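As an illustration, that pre-database check amounts to a purely structural validation of each document, something along these lines (the field names are assumptions):

    # Illustrative only: verify each document is structurally valid without touching the database.
    def quick_validate(documents)
      documents.each_with_object({}) do |doc, errors|
        doc_errors = []
        doc_errors << 'external_id is required' if doc['external_id'].to_s.empty?
        doc_errors << 'fields must be an array' unless doc['fields'].is_a?(Array)
        errors[doc['external_id']] = doc_errors if doc_errors.any?
      end
    end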

For performance, we insert these rows using activerecord-import, a great library that batches multiple rows into a single INSERT statement. This results in a massive speed improvement over standard ActiveRecord when saving a large number of records. We also generate the IDs ahead of time using BSON. Generating the IDs ahead of time means we don't need to read them back from the database after inserting, and using BSON lets us encode a timestamp in the ID at the cost of a larger ID column.
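Here is a rough sketch of that bulk insert, given the validated documents array and assuming ActiveRecord models named DocumentReceipt and DocumentCreationJournal with the columns shown (the model and column names are illustrative):

    require 'activerecord-import'
    require 'bson'

    receipts = []
    journals = []

    documents.each do |doc|
      receipt_id = BSON::ObjectId.new.to_s  # generated up front; a BSON ObjectId encodes a timestamp
      receipts << DocumentReceipt.new(id: receipt_id, status: 'pending')
      journals << DocumentCreationJournal.new(
        id:                  BSON::ObjectId.new.to_s,
        document_receipt_id: receipt_id,
        payload:             doc.to_json
      )
    end

    # activerecord-import batches each collection into a single multi-row INSERT
    # instead of issuing one INSERT per record.
    DocumentReceipt.import(receipts)
    DocumentCreationJournal.import(journals)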

Once created, we enqueue a message for each document creation journal onto a queue that is read by a pool of loops workers. Loops is a dead-simple background processing library written by our Technical Operations Lead, Oleksiy Kovyrin. It makes it easy to write code that does one thing forever; in this case, that means reading messages off the queue and creating the associated document in the database.
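The shape of such a worker is roughly the following; the class, base class, and queue helper here are an approximation rather than the exact loops API.

    # Approximate sketch of a loops worker: it does one thing forever.
    class DocumentCreationLoop < Loops::Base
      def run
        loop do
          journal_id = queue.pop        # hypothetical helper: blocks until a journal ID arrives
          process_journal(journal_id)   # create the document and update its receipt (see below)
        end
      end
    end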

The response to the API request includes a way to check the status of all the document receipts. To make the API easy to use, we're including URLs to the created resources. Though we're not following all of its precepts, this approach is inspired by the idea of hypermedia APIs. These URLs make it easy for both humans and computers to find the resource.
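A response might look something like the parsed hash below. The exact keys and URL structure are illustrative; the point is that each submitted document comes back with a link to its receipt.

    # Hypothetical shape of JSON.parse(response.body) for the request above.
    parsed_response = {
      "batch_link"        => "https://api.swiftype.com/api/v1/document_receipts.json?ids=...",
      "document_receipts" => [
        { "external_id" => "1", "status" => "pending",
          "link" => "https://api.swiftype.com/api/v1/document_receipts/RECEIPT_ID_1.json" },
        { "external_id" => "2", "status" => "pending",
          "link" => "https://api.swiftype.com/api/v1/document_receipts/RECEIPT_ID_2.json" }
      ]
    }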

Since the API is asynchronous, users must poll the document receipts API to check for the status of the document creation. We’ve built an abstraction in our Ruby client library that allows developers to simulate a synchronous request, although we recommend that only for development.
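Under the hood, that abstraction is just a polling loop with a timeout, along these lines (the client method and parameter names are illustrative, not the library's actual interface):

    # Illustrative polling helper: block until every receipt has left the "pending" state.
    def wait_for_receipts(client, receipt_ids, timeout: 30, interval: 0.5)
      deadline = Time.now + timeout
      loop do
        receipts = client.document_receipts(receipt_ids)  # hypothetical client call
        return receipts if receipts.none? { |r| r['status'] == 'pending' }
        raise 'Timed out waiting for document receipts' if Time.now > deadline
        sleep interval
      end
    end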

By pushing all work except for JSON parsing and the most minimal input validation to the backend, we’re able to respond to these API requests very quickly. On the backend, the loops workers read messages off the queue and create documents. When a loops worker attempts to create a document, it updates the document receipt (either with the status of “complete” and a link to the created/updated document, or with the status “failed” and a list of error messages) and deletes the document creation journal.
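Processing a single journal entry might look like this sketch, reusing the hypothetical model and column names from above; create_or_update_document and document_url are stand-ins for the real indexing and URL-building code.

    def process_journal(journal_id)
      journal = DocumentCreationJournal.find(journal_id)
      receipt = DocumentReceipt.find(journal.document_receipt_id)

      begin
        document = create_or_update_document(journal.payload)  # the actual indexing work (not shown)
        receipt.update!(status: 'complete', link: document_url(document))
      rescue StandardError => e
        receipt.update!(status: 'failed', error_messages: [e.message])
      ensure
        journal.destroy  # the journal is removed whether the document succeeded or failed
      end
    end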

This brings us to one final aspect of the asynchronous API: how we make sure it keeps working. If our loops workers started failing, the document creation journals would back up without being processed, and no documents would be created/updated. To guard against this, we have built a monitoring framework that alerts us when the oldest record in the table is older than a certain threshold.
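The check itself is straightforward; conceptually it boils down to something like this (the threshold value and the alert! call are placeholders):

    STALENESS_THRESHOLD = 10 * 60  # seconds; arbitrary example value

    oldest = DocumentCreationJournal.minimum(:created_at)
    if oldest && (age = Time.now - oldest) > STALENESS_THRESHOLD
      # alert! is a stand-in for whatever pages the on-call engineer.
      alert!("Document creation journals are backing up: oldest entry is #{age.round} seconds old")
    end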

This solution has been successful for us in beta tests with our largest API users, and we have now rolled it out to everyone.

We hope this helps you build out your next high-throughput API. If this is the kind of thing you’re interested in, we’re hiring engineers for our core product and infrastructure teams.
