We’ve written a lot of end-to-end (E2E) Cypress tests to validate that our web applications still work as expected with the backend. After writing these browser automation tests, we would like these Cypress tests to always run, or be triggered in some way like our unit tests, before we merge code and deploy to certain environments. This led us down the path of wanting to run our Cypress tests in a Docker container to integrate with our continuous integration (CI) provider and the machines we use in the cloud to run these containers.
When it comes to deployment flows, we use Buildkite as our CI provider. This allows us to generate a build of automated steps for our application in a Buildkite pipeline whenever we move code through our environments. For more context, a pipeline is a place, usually tied to an application’s repository, where we can view or trigger builds with certain steps that run when we create pull requests, push new code changes, merge code to master, and deploy to different environments. We create multiple pipelines for separate purposes, such as deployment, triggered Cypress tests, and specific Cypress tests running on a schedule.
This blog post assumes you’ve already written Cypress tests before and have some tests running, but would like ideas for how to run these tests all the time in your development and deployment flows. If you would like more of an overview about writing Cypress tests instead, you may check out this earlier blog post and then revisit this when you have something to run.
We aim to walk you through ideas for how you can integrate Cypress tests in a Docker container with your CI provider by taking a look at how we’ve done it with Docker Compose and Buildkite in our deployment pipeline. These ideas can be expanded upon in your infrastructure for the strategies, commands, and environment variables to apply when triggering Cypress tests.
Our standard CI/CD flow
In our standard development and deployment flow, we set up two pipelines:
- The first handles our deployment steps for when we push code.
- The second triggers our Cypress tests to run in parallel and to be recorded. The success or failure of this affects the deployment pipeline.
In our deployment pipeline, we build out our web application assets, run unit tests, and have steps to trigger selected Cypress tests before deploying to each environment. We make sure they pass before ungating the ability to do a push button deploy. These triggered Cypress tests in the second pipeline also run in a Docker container and are hooked up to the paid Cypress Dashboard through a recording key so we can look back on the videos, screenshots, and console output from those Cypress tests to debug any issues.
Using Buildkite’s select inputs, we devised a dynamic, choose-your-own-adventure flow so users could select “Yes” or “No” to decide which Cypress spec folders to run and verify as we push more code. The default answer is “No” for all the options, while the “Yes” value for each option is the glob path to the corresponding Cypress spec folder.
At times, we do not want to run all the Cypress tests when a code change does not affect other pages; instead, we only want to trigger the tests we know will be affected. We may also need to deploy a quick fix to production for an urgent bug when we feel confident enough to skip the Cypress tests, which can take up to 10 minutes depending on how many tests we trigger. We provide an example of this, both visually and in the YML steps.
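As a sketch of that select step, a Buildkite block step with select fields might look like the following; the field keys and spec folder paths here are hypothetical, not our actual ones:

```yaml
steps:
  - block: "Select Cypress specs to run"
    fields:
      - select: "Run todos page specs?"
        key: "run-todos-specs"
        default: "No"
        options:
          - label: "Yes"
            value: "cypress/integration/todos/**/*"
          - label: "No"
            value: "No"
      - select: "Run settings page specs?"
        key: "run-settings-specs"
        default: "No"
        options:
          - label: "Yes"
            value: "cypress/integration/settings/**/*"
          - label: "No"
            value: "No"
```

Each “Yes” answer stores the spec glob in the build’s meta-data under its key, which a later step can read back.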
Next, we implemented our own Bash script called `runCypress.sh` to run after that select step and parse out the selected “Yes” or “No” values. We do this to form a comma-separated list of spec paths to append as the `--spec` option to the eventual Cypress command that runs in a Docker container in the triggered pipeline. We export environment variables such as the formed list of specs in “CYPRESS_SPECS” and the current test environment in “CYPRESS_TEST_ENV” to be used in the pipeline we trigger at the end of the script with `buildkite-agent pipeline upload "$DIRNAME"/triggerCypress.yml`.
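A minimal sketch of what `runCypress.sh` might look like is below; the meta-data keys and spec globs are hypothetical, and the real script handles more options:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of runCypress.sh; keys and paths are illustrative.
set -euo pipefail

# join_specs drops "No" answers and joins the remaining spec globs
# with commas, ready for Cypress's --spec option.
join_specs() {
  local specs="" value
  for value in "$@"; do
    [ "$value" = "No" ] && continue
    specs="${specs:+$specs,}$value"
  done
  printf '%s\n' "$specs"
}

# Only talk to Buildkite when the agent CLI is actually available.
if command -v buildkite-agent >/dev/null 2>&1; then
  todos="$(buildkite-agent meta-data get "run-todos-specs")"
  settings="$(buildkite-agent meta-data get "run-settings-specs")"

  export CYPRESS_SPECS="$(join_specs "$todos" "$settings")"
  export CYPRESS_TEST_ENV="${CYPRESS_TEST_ENV:-staging}"
  export ASYNC="${ASYNC:-true}"

  # Trigger the second pipeline with these environment variables applied.
  buildkite-agent pipeline upload "$(dirname "$0")/triggerCypress.yml"
fi
```

The exported variables are interpolated into `triggerCypress.yml` when the agent uploads it.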
You may have noticed that we also export an “ASYNC” environment variable. In Buildkite, you can choose to have a triggered build step be blocking or non-blocking. If we have “ASYNC” set to true, our main deployment pipeline steps will continue to run and will not wait for the triggered Cypress tests in a different pipeline to finish. The success or failure of the triggered pipeline does not affect the success or failure of the deployment pipeline.
If we have “ASYNC” set to false, our main deployment pipeline steps will be blocked until the triggered Cypress tests in a different pipeline finish. The success or failure of the triggered build determines the overall success or failure of the deployment pipeline where it picks up after.
When our code is still in a feature branch with a pull request open, we like to push more changes, trigger some Cypress tests, and see how things behave. But we don’t always want to block the rest of the deployment pipeline steps from running if the triggered tests fail, since there are potentially more changes along the way. In this scenario, we set “ASYNC” to true so we don’t block if Cypress tests fail. For the case where we already merged our pull request into master and deployed to staging but want to trigger Cypress tests before we deploy to production, we set “ASYNC” to false, since we do want the Cypress tests to always pass before going out to production.
Returning to `runCypress.sh`, recall that the script triggers the second pipeline by uploading the `triggerCypress.yml` file with the assigned environment variable values. The `triggerCypress.yml` file looks something like this. You’ll notice the “trigger” step and the interpolation of values into the build messages, which are helpful for debugging and dynamic step names.
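Here is a hedged sketch of what such a `triggerCypress.yml` might contain; the pipeline slug `cypress-tests` is a hypothetical name:

```yaml
steps:
  - trigger: "cypress-tests"
    # Interpolated values make the step name dynamic and easy to debug.
    label: "Triggered ${CYPRESS_SPECS}"
    # ASYNC controls whether this step blocks the deployment pipeline.
    async: "${ASYNC}"
    build:
      message: "Run ${CYPRESS_SPECS} against ${CYPRESS_TEST_ENV}"
      env:
        CYPRESS_SPECS: "${CYPRESS_SPECS}"
        CYPRESS_TEST_ENV: "${CYPRESS_TEST_ENV}"
```

The `${…}` values are filled in from the environment variables exported by the script at the moment `buildkite-agent pipeline upload` runs.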
Whether we trigger the Cypress tests to run from our deployment pipeline to a separate trigger pipeline or run the Cypress tests on a schedule in a dedicated pipeline, we follow and reuse the same steps while only changing up the environment variable values.
These steps involve:
- Building the Docker image with a latest tag and unique version tag
- Pushing up the Docker image to our private registry
- Pulling down that same image to run our Cypress tests based on our environment variable values in a Docker container
These steps are outlined in a `pipeline.cypress.yml` file like so:
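We can’t share the exact file, but a sketch of the shape of `pipeline.cypress.yml`, assuming a hypothetical private registry host and image name, could be:

```yaml
steps:
  - label: "Build and push Cypress Docker image"
    command:
      # Tag with both latest and a unique version for this build.
      - docker build -t registry.example.com/app-cypress:latest -t registry.example.com/app-cypress:$BUILDKITE_BUILD_ID .
      - docker push registry.example.com/app-cypress:latest
      - docker push registry.example.com/app-cypress:$BUILDKITE_BUILD_ID
  - wait
  - label: "Run Cypress tests in Docker"
    # Pulls the image pushed above and runs the selected specs.
    command: docker-compose -f docker-compose.cypress.yml run cypress
```

The `wait` step ensures the image is pushed before any agent tries to pull it back down.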
When we trigger Cypress tests to run, it kicks off a separate build in the Cypress trigger pipeline. Based on the success or failure of that build, the Cypress test run will either block or allow a deploy to production when we are going from staging to production for master branch builds.
Clicking the “Triggered cypress/integration/…” step will take you to the triggered pipeline’s build with a view like this to see how the tests went.
If you are curious about how the Docker part is all connected, our `docker-compose.cypress.yml` uses the environment variables exported from our pipelines to select the proper Cypress command from our application’s `package.json`, pointing at the right test environment and running the selected spec files. The snippets below show our general approach, which you can expand on and improve to be more flexible.
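As a sketch of that approach (the service name, registry image, and `package.json` script names are assumptions on our part):

```yaml
version: "3"
services:
  cypress:
    image: "registry.example.com/app-cypress:${BUILDKITE_BUILD_ID:-latest}"
    # Pass through the values exported by the pipeline steps.
    environment:
      - CYPRESS_SPECS
      - CYPRESS_TEST_ENV
      - CYPRESS_RECORD_KEY
      - BUILDKITE_BUILD_ID
    # Runs e.g. "cypress:run:staging" from package.json with the selected specs.
    command: npm run "cypress:run:${CYPRESS_TEST_ENV}" -- --spec "${CYPRESS_SPECS}"
```

Docker Compose interpolates `${…}` from the shell environment when it parses the file, so the same compose file serves every environment and spec selection.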
Outside of the tests run during our usual integration and deployment cycles, we created dedicated Buildkite pipelines. These pipelines run important tests on a schedule against our staging environment to ensure our frontend and backend services are working correctly. We reused similar pipeline steps, adjusted certain environment variable values in each pipeline’s settings, and set up a cron schedule. This helps us catch many bugs and issues with the staging environment as we monitor how our tests are doing and whether anything downstream, or from our own code pushes, may have led to failing tests.
We also utilize the parallelization flag to take advantage of the AWS machines we can spin up from our queue of build agents set up by our Ops team. Buildkite brings up the number of machines we set in its “parallelism” property, and Cypress then spreads the tests out to run in parallel across those machines while still recording each test for the specific build run. This boosted our test run times dramatically: we were able to run over 200 tests in around 5 minutes for one of our application repos.
Here are some tips when parallelizing your Cypress tests:
- Follow the suggestions in the Dashboard Service for the optimal number of machines and have the number of machines set in an environment variable for flexibility in your pipelines.
- Split into smaller test files, especially breaking longer-running tests out into chunks that parallelize better across machines.
- Make sure your Cypress tests are isolated and do not affect each other or depend on each other. When dealing with update, create, or delete-related flows, use separate users and data resources to avoid tests stomping on each other and running into race conditions. Your test files can run in any order so make sure that is not an issue when running all of your tests.
- For Buildkite, remember to pass the Buildkite build ID environment variable value into the `--ci-build-id` option in addition to the `--parallel` option, so Cypress knows which unique build run to associate with when parallelizing tests across machines.
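Putting those tips together, a parallel Cypress step in a Buildkite pipeline might look like this sketch; the parallelism count, compose file, and record key variable are illustrative:

```yaml
steps:
  - label: "Run Cypress specs in parallel"
    # Buildkite spins up this many agents; Cypress load-balances specs across them.
    parallelism: 5
    command: >
      docker-compose -f docker-compose.cypress.yml run cypress
      npx cypress run --record --key "$CYPRESS_RECORD_KEY"
      --parallel --ci-build-id "$BUILDKITE_BUILD_ID"
```

Every one of the five agents runs the same command; the shared `--ci-build-id` is what lets the Dashboard Service treat them as a single run and hand out specs.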
In order to hook up your Cypress tests to your CI provider such as Buildkite, you will need to:
- Build a Docker image with your application code, using the necessary Cypress base image and dependencies required to run the tests in a Node environment against certain browsers.
- Push your Docker image up to a registry with certain tags.
- Pull the same image down in a later step.
- Run your Cypress tests in headless mode and with recording keys if you are using the Cypress Dashboard Service.
- Set different environment variable values and plug them into the commands you run for Cypress to trigger selected Cypress tests against a certain test environment in those Docker containers.
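For the first of those steps, a minimal Dockerfile sketch might start from one of Cypress’s published base images; the tag and file layout here are illustrative, not our actual setup:

```dockerfile
# Cypress base image with Node and the OS dependencies Cypress needs.
FROM cypress/base:14
WORKDIR /app
# Install dependencies first to take advantage of Docker layer caching.
COPY package.json package-lock.json ./
RUN npm ci
# Copy in the application code and Cypress specs.
COPY . .
```

Copying `package.json` before the rest of the code means dependency layers are rebuilt only when dependencies actually change.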
These general steps can be reused and applied to Cypress tests running on a schedule and other use cases, such as triggering tests to run against selected browsers in addition to your deployment pipelines. The key is leveraging the capabilities of your CI provider and setting up your commands to be flexible and configurable based on environment variable values.
Once you have your tests running in Docker with your CI provider (and if you pay for the Dashboard Service), you can take advantage of parallelizing your tests across multiple machines. You may have to modify existing tests and resources so they are not dependent on one another, to avoid tests stomping on each other.
We also discussed ideas you can try out for yourself such as creating a test suite to validate your backend API or triggering tests to run against a browser you choose. There are also more ways to set up continuous integration here in the Cypress docs.
Moreover, it’s important to run these Cypress tests during deployment flows or scheduled intervals to be sure your development environments are working as expected all the time. There have been countless times where our Cypress tests have caught issues related to downstream backend services that were down or changed in some way, manifesting in frontend application errors. They especially saved us from unexpected bugs in our web pages after we pushed out new React code changes.
Maintaining passing tests and diligently monitoring failing test runs in our test environments leads to fewer support tickets and happier customers in production. Keeping a healthy, stable suite of Cypress tests running when you push new code changes provides greater confidence that things are working well, and we recommend that you and your teams do the same with your Cypress tests.
For more resources on Cypress tests, check out the following articles: