How We Dogfood Loader.io in SendGrid Engineering

SendGrid Engineering
November 14, 2013
Guest Post, Product, Technical

This post comes from Sr. Software Engineer Sam Nguyen.

At SendGrid, we send hundreds of millions of emails every day, and the traffic keeps growing. Recently, we worked with one of our major high-volume customers, who was starting to push our mail sending API service to its limits. We had to figure out how to help them send more email, faster. To identify areas for improvement, we turned to Loader.io — a project of SendGrid Labs — to benchmark performance before, during, and after the optimization process.

Within seconds of starting a benchmark, Loader.io can spin up 10,000 concurrent clients to hit your service. By turning this tool built for external developers around on our own internal code, we were able to implement and demonstrate a 5x reduction in request latency.

In this case, we identified our authentication flow as the bottleneck. Each server handling incoming mail runs a local service to process authentication requests. Even though we were caching results within the authentication service, the sheer volume of traffic on each server was causing requests to the authentication service to queue up and time out. Our solution was to cache authentication results closer to where they are used, so that most of those requests never have to be made at all.

Now all SendGrid customers benefit from the increased performance that came from using our own Labs product. But we're not done. Loader.io pushed our servers so hard that we uncovered a new bottleneck in our system, which had been masked by the slower performance of our old code. When we implement the next round of performance optimizations, we'll definitely use Loader.io to prove that we've done our job correctly.
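The fix described above — caching authentication results in the mail-handling process itself, instead of hitting the local auth service on every request — can be sketched roughly as follows. This is a minimal illustration, not SendGrid's actual implementation; the `check_credentials` callable stands in for whatever call the mail server makes to its authentication service:

```python
import time

def make_cached_authenticator(check_credentials, ttl_seconds=60):
    """Wrap a slow credential check with an in-process TTL cache.

    check_credentials: callable(api_key) -> bool, standing in for a
    request to the local authentication service (hypothetical name).
    """
    cache = {}  # api_key -> (result, expiry timestamp)

    def authenticate(api_key):
        now = time.monotonic()
        entry = cache.get(api_key)
        if entry is not None and entry[1] > now:
            return entry[0]  # cache hit: no call to the auth service
        result = check_credentials(api_key)  # cache miss: one real check
        cache[api_key] = (result, now + ttl_seconds)
        return result

    return authenticate
```

With a short TTL, each mail server makes at most one authentication request per customer per window, no matter how many messages that customer sends, which keeps the queue in front of the auth service from building up under load.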