Enhancing the review process with automated tools and services in an open source application
Mohd Shamoon
January 12, 2022

One of the hardest things, I feel, is to verify that a new piece of code will not break another part of the application. A reviewer can only be as good as the knowledge and experience they have, and even the best reviewer can't deal with the unknown. Thankfully we have automated tools to help with that.

In this blog I will share my journey with Glific (an open source two-way communication platform) and how the codebase, tools and processes evolved over time to help the reviewer.


Started with unit testing

The first thing we started with was adding unit test cases for each component in React. The task was simple: create a component, think of all the possible states for the component, and write a test for each of those states.

With Jest (the test runner) we get code coverage metrics, so it can tell whether a component is well covered across all the possible scenarios or not. It even reports the line numbers that the tests never reached.
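
As an illustration, a state-based test along these lines might look like the sketch below, using Jest with React Testing Library. The `Tag` component, its props and the import path are hypothetical examples, not our actual code.

```tsx
// A sketch of state-based unit tests with Jest and React Testing Library.
// `Tag`, its props and the import path are hypothetical examples.
import React from 'react';
import { render, screen } from '@testing-library/react';
import '@testing-library/jest-dom';
import { Tag } from './Tag';

describe('<Tag />', () => {
  it('renders the label', () => {
    render(<Tag label="Important" />);
    expect(screen.getByText('Important')).toBeInTheDocument();
  });

  it('renders the disabled state', () => {
    render(<Tag label="Important" disabled />);
    expect(screen.getByText('Important')).toHaveAttribute('aria-disabled', 'true');
  });
});
```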




Now we have a tool to measure coverage, but how does the reviewer get to know whether test cases were added, or whether the coverage increased or decreased? For this the reviewer would need to keep track of the coverage percentage and compare it with the new one by hand.

Thankfully we have a service called Codecov that does this for us if we integrate it with our GitHub CI. All one needs to do is run the tests in GitHub CI and send the coverage report generated by Jest to Codecov, and it will generate the metrics for you.
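
For example, a Jest configuration along these lines produces an lcov report that the Codecov uploader can pick up in CI. This is a minimal sketch; the glob patterns are illustrative, not our exact setup.

```ts
// jest.config.ts: a minimal sketch. The glob patterns are illustrative,
// not our exact setup.
import type { Config } from '@jest/types';

const config: Config.InitialOptions = {
  collectCoverage: true,
  collectCoverageFrom: ['src/**/*.{ts,tsx}', '!src/**/*.test.{ts,tsx}'],
  // 'lcov' writes coverage/lcov.info, the file the Codecov uploader
  // looks for in CI; 'text' prints a summary in the terminal.
  coverageReporters: ['lcov', 'text'],
};

export default config;
```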

The coverage details are then added as a comment on the PR by Codecov.

It even highlights the lines for which test cases were not added or which the tests never reached.



And finally, a badge in our README file showing the coverage percentage. We are still at 77 percent and hopefully will reach 90 sometime soon 😁.
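
The badge itself is a single line in the README that points at the repository's Codecov page; `<org>` and `<repo>` below are placeholders, not our actual repository path.

```md
[![codecov](https://codecov.io/gh/<org>/<repo>/branch/main/graph/badge.svg)](https://codecov.io/gh/<org>/<repo>)
```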



Start of integration testing

With unit testing we ensured that individual components worked fine, but it was not enough. How do we make sure that the components also work when linked together?

With this thought we turned to Cypress. It's an amazing library for automated integration and end-to-end testing: we can test a complete feature from one end to the other and ensure it works exactly the same way on each test run.

Our initial test cases followed this pattern (a sketch follows the list):
– Open a page
– Create an item
– Test that the functionality works as intended
– Delete the item
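
As an illustration, a minimal Cypress spec following that pattern could look like this; the route, selectors and labels are made up for the example, not Glific's actual ones.

```ts
// A sketch of the open → create → verify → delete pattern.
// The route, selectors and labels are illustrative, not Glific's actual ones.
describe('Tags', () => {
  it('creates, verifies and deletes a tag', () => {
    // Open a page
    cy.visit('/tag');

    // Create an item
    cy.contains('Create Tag').click();
    cy.get('input[name="label"]').type('Greeting');
    cy.contains('Save').click();

    // Test that the functionality works as intended
    cy.contains('Greeting').should('be.visible');

    // Delete the item so each run starts from a clean slate
    cy.contains('Greeting').parent().find('[data-testid="delete"]').click();
    cy.contains('Confirm').click();
    cy.contains('Greeting').should('not.exist');
  });
});
```

Deleting whatever a test creates keeps the runs independent of each other, so one failing test does not poison the next.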

With this we ensured our app's functionality remained intact with each new change. Thanks to the open source plan that we got from Cypress, we also got additional features like support for the Dashboard service, which keeps track of all previous test runs and records videos of failing test cases, making it easier to debug where a test is failing.

With the Dashboard service in place, we were able to see screenshots and videos of the errors, which helped us debug issues really quickly.



This ensured that functionality was not broken by a new change, and we covered most of the features with these test cases.


UI and cross-browser testing

We made sure that the functionality remained intact on new changes by adding unit and integration test cases, but what about the UI and compatibility with other browsers? We tried to make our code cross-browser compatible, but how do we ensure that it is?

Earlier I had a Windows system, and when we got an issue in Safari I had no way to test it; I took help from my colleagues and resolved it. Fast forward some time later, I got a MacBook Pro. And then we got another issue: a scroll-related bug on Chrome in Windows 😅.

With these browser- and system-specific issues coming up, we decided to get a service for easier testing across multiple devices. Thanks to the team at BrowserStack for giving us a free open source plan. Using BrowserStack we were able to easily test and debug on multiple devices.




With this we made sure our application works as expected across multiple devices on each new change, but even this was not enough. Our next challenge was to reduce the reviewer's time.


Start with a code linter

With the linting process we did something extra. Since we are using TypeScript and there is a compilation step to JavaScript, we added the linting to the compilation step itself.

Now if we don't follow a rule enforced by the linter, the code produces a warning during compilation that needs to be fixed before we can continue working. This way we removed the overhead of the developer checking warnings at the end, which, believe me, never happens.
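
How this is wired up depends on the build setup. The sketch below assumes a webpack-based build and the eslint-webpack-plugin package, which is an assumption for illustration rather than our exact configuration; Create React App ships a similar check out of the box.

```ts
// webpack.config.ts: a sketch that assumes the eslint-webpack-plugin
// package; the exact wiring depends on the build tooling.
import ESLintPlugin from 'eslint-webpack-plugin';

export default {
  // ...entry, output, loaders and the rest of the build config...
  plugins: [
    new ESLintPlugin({
      extensions: ['ts', 'tsx'],
      // Fail the compilation on lint errors so they have to be fixed
      // before work can continue, instead of piling up as warnings.
      failOnError: true,
    }),
  ],
};
```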




Further, we added a service called DeepScan to our checks; it scans our code for JavaScript best practices that need to be followed.





And last but not least, checking for security issues in our code. For this we added two services:

Snyk – This checks for any vulnerabilities in the third-party libraries that we have used in our project and tells us the version in which each vulnerability is resolved.

GitGuardian Security Checks – This makes sure that a developer does not check in an API key by mistake. Believe me, this happens a lot with some devs.

Now, with these services and checks, a reviewer can be at ease that most of the basic things are working as expected.


The final problem

Testing locally: if reviewers have to ensure the functionality works as expected before merging a PR, they may have to test the branch on their local system. This takes some time, and the time increases significantly if there are package updates.

Thanks to Netlify for giving us an open source hosting plan, with which we can deploy and preview our branches without needing to check out the code locally.



All the services and processes that we added ensure that the quality of the code stays good, ensure stability with new releases and, most importantly, make the code review process much smoother.

In the end I would like to thank Cypress, BrowserStack, Netlify, Codecov, DeepScan, Snyk and GitGuardian for supporting us with their free open source plans, which helped us in our journey to ensure a quality product.