In our previous post, we talked about how TeamCity automatically combines multiple dotCover outputs into a single code coverage report, but what if we wanted to keep those reports separate?
You might be wondering why you would want to do this. Don’t you want to see the overall test coverage?
The overall coverage number is nice, but when you have both unit tests and integration tests, or multiple test projects serving different purposes, there is a real risk of reporting lines as covered that were merely executed, not actually tested.
In this post, we are going to look at how you can run multiple test builds in TeamCity to generate individual reports while still getting a combined report.
In TeamCity, what if you need to combine the code coverage results from, say, unit test and integration test projects that run as separate build steps?
Luckily, TeamCity does this work for you, but it is not obvious that it will.
To get TeamCity to combine the multiple code coverage results into a single code coverage report, you just need to add an echo command that imports the coverage data into the build, like we did in our previous post.
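As a refresher, that service message looks like the following when echoed from a command line build step. The snapshot file names here are placeholders for whatever your dotCover steps actually produced:

```
rem One importData message per dotCover snapshot; TeamCity merges them
rem into the single coverage report for the build.
echo ##teamcity[importData type='dotNetCoverage' tool='dotcover' path='unit-tests.dcvr']
echo ##teamcity[importData type='dotNetCoverage' tool='dotcover' path='integration-tests.dcvr']
```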
In part 1 and part 2 of this article series, we set up and optimized our code coverage using the free command line version of dotCover.
In this post, we are going to add code coverage to our TeamCity builds so that our unit tests run with code coverage as part of the automated build, the code coverage metrics summary shows up after the build, the code coverage report is viewable right in TeamCity, and the build fails if the code coverage percentage drops.
In part 1 of ASP.NET Code Coverage Using dotCover, we hooked up dotCover to our project to generate our code coverage report. It was pretty easy to hook up, but the number is not the most accurate, as it includes every file that is part of the solution, including our test project and all of the ASP.NET WebApi code.
When looking at code coverage, we only want to include files that make sense to test against so that our code coverage number is as accurate as possible. That means we should exclude our unit test project from the report, since we are not going to run tests against our test projects. The same goes for the ASP.NET WebApi Startup.cs, Program.cs, and Controllers, as we would want to test those using integration tests, not unit tests.
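With the dotCover console runner, those exclusions can be expressed through the /Filters parameter. A sketch, where the module and class masks are assumptions based on example project names and should be adjusted to match your assemblies:

```
rem coverage.xml holds the target/output settings for the analyse command;
rem each -: entry excludes matching code from the report. Masks are illustrative.
dotCover.exe analyse coverage.xml ^
  /Filters="-:module=MyApp.Tests;-:class=MyApp.Startup;-:class=MyApp.Program;-:class=MyApp.Controllers.*"
```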
Having automated tests is a good thing for your code quality, but having those tests without any idea of how much of your code is actually being tested is a really bad thing.
To figure out how much of our code we are actually testing, we need to create a code coverage report. To generate our code coverage report, we are going to use the JetBrains dotCover tool.
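To give a flavor of what that looks like, here is a sketch of a dotCover console runner invocation that runs the tests under coverage and writes an HTML report in one step. The test runner path, test assembly, and output file are all placeholders for your project's actual values:

```
rem All paths below are placeholders for your project's actual values.
dotCover.exe analyse ^
  /TargetExecutable="tools\xunit.console.exe" ^
  /TargetArguments="MyApp.Tests\bin\Release\MyApp.Tests.dll" ^
  /Output="CoverageReport.html" ^
  /ReportType="HTML"
```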
Welcome to part two of our two-part series on Angular code coverage.
In the previous article, we set up Cypress code coverage for our Angular project so that we could run it locally on our development machine. In this article, we are going to take it to the next step and add it to our automated build.
I am a big believer in DevOps and having automated builds and deployments for all of my projects. In fact, I have had automated builds and deployments since 2002, long before DevOps became a thing.
I will be using TeamCity as the automated build platform and am assuming that you already have your Angular build working and are just adding in code coverage to the build.
Welcome to part one of our two-part series on Angular code coverage for Cypress tests. As I have implemented more automated tests, one of the must-haves for me is code coverage reports. Code coverage allows me to quickly and easily see which lines of the code are not being tested so I can close any critical testing gaps. Today, I am going to be talking specifically about how to implement code coverage for an Angular project that was generated from the Angular CLI.
Being able to schedule a post is one of the features that I miss when using Hugo for a site.
Out of the box, Hugo has no way to schedule a post because it is a static site generator, which means the HTML is generated when you build the site and only updated when you build the site again.
The manual workaround for scheduling a post is to create a published post with a future date on it and then, on that day, rebuild and redeploy your site. Even though this works, it depends on me remembering to do it rather than it being automatic, which is what I personally want. I do not want to have to remember to build and deploy the site on the day that I want a post to be published. If it depends on me, then chances are I am going to forget or get busy and the post won’t get published when it is supposed to be.
Luckily, since I am already using Netlify to build my Hugo site, and with the code for the site residing on GitHub, I can easily create a scheduled GitHub Action that triggers the Netlify build for me.
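The whole setup is small enough to fit in one workflow file. A minimal sketch, assuming a repository secret named NETLIFY_BUILD_HOOK_URL that holds your Netlify build hook URL:

```yaml
# .github/workflows/scheduled-build.yml
name: Scheduled Netlify build
on:
  schedule:
    - cron: '0 13 * * *' # every day at 13:00 UTC; adjust to taste
jobs:
  trigger-build:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Netlify build hook
        # NETLIFY_BUILD_HOOK_URL is an assumed secret name; create the hook
        # in Netlify under Site settings > Build & deploy > Build hooks.
        run: curl -X POST -d '{}' "${{ secrets.NETLIFY_BUILD_HOOK_URL }}"
```

On each scheduled run, Netlify rebuilds the site, and any post whose publish date has now passed goes live.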
Since the Angular CLI was released, it has included linting using the ng lint command. With the release of Angular v11, it was announced that TSLint, which ng lint used behind the scenes for linting, was being replaced with ESLint.
To make the migration to ESLint easier for your existing project, the Angular ESLint team created a couple of tools that automate almost the whole migration process for us.
I was able to complete the migration, start to finish, in about 30 minutes.
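For reference, the tooling in question is the angular-eslint schematics, and the migration boils down to two commands (the project name at the end is a placeholder for your own):

```
# Add the angular-eslint schematics to the workspace
ng add @angular-eslint/schematics

# Convert one project's TSLint setup to ESLint; repeat per project
ng g @angular-eslint/schematics:convert-tslint-to-eslint my-app
```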
In your Angular application, if you are using RxJS debounce and running Cypress tests, you may have run into times when your tests do not consistently get past the debounce wait time and appear to be flaky.
Debounce is a way to wait X number of milliseconds for something to happen before continuing, such as waiting for a user to stop typing in a field before making an API call. This way you are not making an API call for each character typed into the field.
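In RxJS terms, a hypothetical search box looks like this; the names and the 500 ms window are made up for illustration:

```ts
import { Subject } from 'rxjs';
import { debounceTime } from 'rxjs/operators';

// Emits on every keystroke.
const searchTerms = new Subject<string>();

// Only reaches the subscriber once the user has paused for 500 ms,
// so we make one API call per pause instead of one per character.
searchTerms.pipe(debounceTime(500)).subscribe((term) => {
  console.log(`calling the search API for "${term}"`);
});

searchTerms.next('c');
searchTerms.next('co');
searchTerms.next('cov'); // only 'cov' fires, 500 ms after the last keystroke
```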
In Cypress, you could just use a wait statement to get past the debounce time, but adding time-based wait statements in Cypress is an anti-pattern.
Instead, in Cypress you should use the cy.clock() and cy.tick() commands to advance the virtual time and cause the debounce to fire. However, I found this was not consistently getting past the debounce; RxJS was acting like we had not waited for the debounce time.
Luckily, after much troubleshooting, the solution ended up being quite simple and only involved test code changes.
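To make the pattern concrete, here is a minimal sketch of the cy.clock()/cy.tick() approach. The route, selectors, and 500 ms debounce window are assumptions for illustration, and this shows the general technique rather than the exact fix the post arrives at:

```ts
it('searches once the debounce window has passed', () => {
  cy.clock(); // take over the timers before the app loads
  cy.visit('/search');
  cy.get('[data-cy=search-input]').type('coverage');
  cy.tick(500); // advance virtual time past debounceTime(500)
  cy.get('[data-cy=search-results]').should('be.visible');
});
```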