As your number of Cypress tests grows, mocked API calls can quickly become hard to maintain, which is exactly what happened on one of my projects.
We were mocking all of our API calls so that we could focus our tests on UI behaviors and interactions. The code for the mocks typically lived in each test, which was fine for a few tests, but with over 1,000 Cypress tests you can imagine how much duplicate code there was and how difficult it was to maintain.
It was difficult to find an existing mock for a new test, so many times the developer writing the test just created a new mock and a new fixture file.
It was even more time consuming when API responses changed over time, because we had to find every place a mock was used and update it to match the new response.
It was also time consuming to switch from the old way of mocking in Cypress, cy.server, to cy.intercept.
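One common way to rein this in (not necessarily the exact approach this post lands on) is to pull the cy.intercept() call into a shared helper so the mock lives in one place instead of being copied into every test. The helper name, route, and fixture file below are hypothetical, not the project's actual code, and are only a minimal sketch:

```typescript
/// <reference types="cypress" />
// support/mocks.ts - hypothetical shared mock helper.
// Centralizes the cy.intercept() call so every test reuses the same mock.
export function mockGetUsers(fixture = 'users.json'): void {
  cy.intercept('GET', '/api/users', { fixture }).as('getUsers');
}

// In a spec file:
//   import { mockGetUsers } from '../support/mocks';
//
//   it('shows the user list', () => {
//     mockGetUsers();          // one line instead of a copy-pasted intercept
//     cy.visit('/users');
//     cy.wait('@getUsers');    // wait for the mocked response
//     cy.get('[data-cy=user-row]').should('have.length.greaterThan', 0);
//   });
```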
One of the most essential things to do before jumping into writing tests is to develop your test strategy. I know it sounds like a daunting task. If you are reading this post, you are most likely a developer. We developers… we just want to write code, not spend time coming up with and documenting something like a test strategy. However, not having a test strategy is like driving a car in a new city without a GPS and hoping you will reach your destination as quickly as possible. You may get there eventually without the GPS, but there are most likely going to be a lot of wrong turns, stops for directions (which you probably won't make until you are really, really lost, if you ever stop to ask at all), and a lot of backtracking and starting over. I don't know about you, but I prefer the most direct route and a GPS to guide me.
In this post, you will discover the questions you need to answer to develop your test strategy, and once you answer those questions, you will have your GPS guide. These questions are how we decided on our testing strategy when we started, and answering them has taken us from zero UI tests in 2019 to around 1,100 tests as of the writing of this post.
So let’s jump right into it.
Using XUnit, I was creating a bunch of exception test methods that performed the exact same test, with the only difference being the type of exception being thrown, and I thought there had to be a way to use an XUnit theory to write a single test method that validates the various exception scenarios.
Luckily, it was really easy to accomplish the goal of having a single test method that validates multiple types of exceptions using an XUnit theory, but it took a bit to figure out how.
In our previous post, we talked about how TeamCity automatically combines multiple dotCover outputs into a single code coverage report, but what if we wanted to keep those reports separate?
You might be thinking: why would you want to do this? Don't you want to see the overall test coverage?
The overall coverage is nice, but when you have both unit tests and integration tests, or multiple test projects serving different purposes, there is a high potential that you are reporting lines of code as covered that were only called but not actually tested.
In this post, we are going to look at how, in TeamCity, you can run multiple test builds that generate individual reports and still have a combined report.
In TeamCity, what if you need to combine the code coverage results from, say, unit test and integration test projects that run as separate build steps?
Luckily, TeamCity does this work for you, but it is not obvious that it will.
To get TeamCity to combine the multiple code coverage results into a single code coverage report, you just need to add the echo command that imports the data into the build, like we did in our previous post.
In this post, we are going to add code coverage to our TeamCity builds so that we can run our unit tests with code coverage as part of the automated builds, see the code coverage metrics summary after the build, view the code coverage report right in TeamCity, and add failure conditions for when the code coverage percentage drops.
In part 1 of ASP.NET Code Coverage Using dotCover, we hooked up dotCover to our project to generate our code coverage report. It was pretty easy to hook up, but the number is not the most accurate, as it includes every file that is part of the solution, including our test project and all of the ASP.NET WebApi code.
When looking at code coverage, we only want to include files that make sense to test so that our code coverage number is as accurate as possible. That means we should exclude our unit test project from the report, since we are not going to run tests against our test projects. The same goes for the ASP.NET WebApi Startup.cs, Program.cs, and Controllers, as we would want to test those with integration tests rather than unit tests.
Having automated tests is a good thing for your code quality, but having those tests without any idea of how much of your code is actually being tested is a really bad thing.
To figure out how much of our code we are actually testing, we need to create a code coverage report. To generate our code coverage report, we are going to use the JetBrains dotCover tool.
Welcome to part two of our two-part series on Angular code coverage.
In the previous article, we set up Cypress code coverage for our Angular project so that we could run it locally on our development machine. In this article, we are going to take it to the next step and add it to our automated build.
I am a big believer in DevOps and having automated builds and deployments for all of my projects. In fact, I have had automated builds and deployments since 2002, long before DevOps became a thing.
I will be using TeamCity as the automated build platform and am assuming that you already have your Angular build working and are just adding in code coverage to the build.
Welcome to part one of our two-part series on Angular code coverage for Cypress tests. As I have implemented more automated tests, one of the must-haves for me is code coverage reports. Code coverage allows me to quickly and easily see which lines of the code are not being tested so I can close any critical testing gaps. Today, I am going to be talking specifically about how to implement code coverage for an Angular project that was generated with the Angular CLI.
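The details come later in the post, but as a rough, hedged sketch of the Cypress side of the wiring, registering the @cypress/code-coverage plugin usually looks something like the following (assuming that is the plugin in use and a Cypress 10+ config file; the Angular build still has to serve Istanbul-instrumented code for it to collect anything):

```typescript
// cypress.config.ts (sketch) - register the code coverage tasks.
// Older Cypress versions wire this up in cypress/plugins/index.js instead.
import { defineConfig } from 'cypress';

export default defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      // Merges the window.__coverage__ objects collected after each test
      // and writes the coverage reports when the run finishes.
      require('@cypress/code-coverage/task')(on, config);
      return config;
    },
  },
});

// cypress/support/e2e.ts also needs the collector side of the plugin:
//   import '@cypress/code-coverage/support';
```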
If you are using RxJS debounce in your Angular application and running Cypress tests, you may have run into times when your tests do not consistently get past the debounce wait time and appear to be flaky.
Debounce is a way to wait X number of milliseconds for something to happen before continuing, such as waiting for a user to stop typing in a field before making an API call. This way, you are not making an API call for each character typed into the field.
In Cypress, you could just use a wait statement to get past the debounce time, but adding time-based wait statements in Cypress is an anti-pattern.
Instead, in Cypress you should use the cy.clock() and cy.tick() commands to forward virtual time and cause the debounce to fire. However, I found this was not consistently getting past the debounce; RxJS was acting like we had not waited for the debounce time.
Luckily, after much troubleshooting, the solution ended up being quite simple and only involved test code changes.
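For context, the basic clock-control pattern from the Cypress docs looks roughly like the sketch below; the 300 ms debounce and the selectors are made up for illustration, and the post's actual fix builds on this pattern rather than being exactly this code.

```typescript
// Hypothetical spec: the 300 ms debounce and the selectors are illustrative only.
it('searches once the debounce window has passed', () => {
  cy.clock();                      // take control of the app's timers before the page loads
  cy.visit('/search');
  cy.get('[data-cy=search-input]').type('cypress');
  cy.tick(300);                    // advance virtual time past the debounce window
  cy.get('[data-cy=search-results]').should('be.visible');
});
```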
Recently, on my build servers, Chrome updated to version 95 and all of a sudden my automated builds stopped being able to communicate between Cypress and Chrome, causing Cypress to crash instantly on the first test when using cypress run. We could log in to the machine and run all of the tests using Chrome as a headed browser, but any time we tried to use Chrome as a headless browser it would instantly fail the build on the first test.
We have seen other random issues like this show up with Chrome updates in the past, and previously the fix was to disable the Chrome auto-update feature and then install a previous version of Chrome. That was a hack, though, as there is no supported way to downgrade Chrome.
Thankfully, there is an easier way by using a custom browser when running Cypress. Cypress even maintains a Chromium repository so that you can easily download previous versions of Chrome.
So far with the Cypress grep plugin we have looked at how to run tests with certain tags, how to run tests that have no tags, and how to increase the performance of the plugin when running filtered tests.
In this post, we are going to look at how to run tests multiple times to ensure that they are flake-free. I have several times run into issues where we think a test is working great, only to find out when it runs a second time that it fails or causes other tests to fail, since Cypress without the Cypress grep plugin will only run each passing test once.
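As a hedged preview, the plugin exposes a burn option for exactly this; the filter value and count below are made up, and the same values can also be passed on the command line with --env.

```typescript
// cypress.config.ts (sketch): repeat the grep-filtered tests to surface flake.
// Command-line equivalent: npx cypress run --env grep="checkout",burn=5
import { defineConfig } from 'cypress';

export default defineConfig({
  e2e: {
    env: {
      grep: 'checkout', // hypothetical filter: only tests with "checkout" in the title
      burn: 5,          // run each matching test 5 times in a row
    },
  },
});
```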
By default, when the Cypress grep plugin uses grep and grepTags, all of the specs are executed and then the filters are applied. This can be very wasteful if only a few specs contain the grep in their test titles. Thus, when doing a positive grep, you can pre-filter specs using the grepFilterSpecs=true parameter.
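Enabling it is just an environment value, shown here as a cypress.config.ts sketch; it can equally be passed as --env grepFilterSpecs=true on the command line, and it assumes the plugin's node-side piece is already registered in setupNodeEvents.

```typescript
// cypress.config.ts (sketch): pre-filter spec files when doing a positive grep.
import { defineConfig } from 'cypress';

export default defineConfig({
  e2e: {
    env: {
      grepFilterSpecs: true, // skip spec files that contain no matching test titles
    },
  },
});
```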
In the previous post on the Cypress grep plugin, we installed the plugin and went through the basics of how to run just the tests that have certain tags, but what if you want to run all tests that do not have any tags?
One of the features that I wish Cypress had is a way to group feature tests together so that I can run all tests for the feature I am currently coding or testing without having to put them all into the same spec file. Now you can with the Cypress grep plugin.
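A rough sketch of what that grouping looks like using the plugin's tags option; the feature name and tags below are hypothetical:

```typescript
// Hypothetical spec: tag related tests so a whole feature can be run together,
// even when its tests are spread across multiple spec files.
describe('Checkout', { tags: '@checkout' }, () => {
  it('applies a discount code', { tags: ['@checkout', '@discounts'] }, () => {
    // ...test body...
  });
});

// Run everything tagged @checkout, regardless of which spec file it lives in:
//   npx cypress run --env grepTags=@checkout
```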
Welcome to the start of my series on code coverage. In this article, we are going to talk about why you need code coverage reports, and then in future articles we will implement the reports in our Angular-based UI and ASP.NET Core-based API. For years, I pushed back against implementing code coverage on my projects, and I am here to say that I was wrong. In the past, every time someone tried to implement code coverage on my projects or suggested it, all they cared about was the percent of code coverage.