+++ title = "Hits-of-Code Badges" description = "Building a web service for readme badges" date = "2019-05-03T16:00:00+02:00" publishdate = "2019-05-03T16:00:00+02:00" draft = false categories = ["rust", "programming"] tags = ["rust", "actic-web", "hits-of-code", "code metric"] +++

There are a few metrics that try to evaluate a codebase. Some give a glimpse of code quality, like cyclomatic complexity, code duplication, dependency graphs and, the most accurate of all, WTFs per minute (WTFs/min). Others, such as source lines of code (SLoC), are less suited to evaluating the quality of a code base. Counting SLoC might seem like a good measure of the work invested in a piece of software at first, but on closer inspection, refactorings and the removal of duplicate code through new abstractions can reduce the SLoC even though work was invested.

*(Image: WTFs per minute)*

## Hits-of-Code

A few years ago, Yegor Bugayenko proposed Hits-of-Code as an alternative to SLoC. The idea is to count the changes made to the codebase over time instead of simply counting the current number of lines. The metric can be calculated from the commit history and gives a better picture of the amount of work that went into a project. The score grows with every commit and can never shrink.
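To make the difference concrete, here is a small illustrative sketch (the commit history and numbers are made up): SLoC is the net result of all changes and can shrink, while HoC counts every changed line and only grows.

```rust
fn main() {
    // Hypothetical history: (lines added, lines deleted) per commit.
    let commits = [(100i64, 0i64), (20, 5), (0, 40)];

    // SLoC is the net result and can shrink; HoC counts every change.
    let sloc: i64 = commits.iter().map(|&(a, d)| a - d).sum();
    let hoc: i64 = commits.iter().map(|&(a, d)| a + d).sum();

    println!("SLoC: {}", sloc); // 75  -- the cleanup commit shrank it
    println!("HoC:  {}", hoc);  // 165 -- the cleanup commit still counts
}
```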

While the metric says nothing about code quality, I still think it is useful, so I decided to implement a small web service that generates badges anyone can include in their readme files: hitsofcode.com.

*(Image: example Hits-of-Code badge)*

Currently only repositories hosted on GitHub, GitLab and Bitbucket are supported. The service is implemented in Rust using the actix-web framework and is deployed as a Docker container. Everything can be self-hosted using the Docker image or by building from source.
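To give an idea of the shape of such a service, here is a minimal actix-web sketch of a badge endpoint. The route layout, handler and placeholder response body are illustrative assumptions, not the actual code behind hitsofcode.com.

```rust
use actix_web::{get, web, App, HttpResponse, HttpServer, Responder};

// Illustrative route; the real service supports more hosts and does the
// actual cloning, caching and badge rendering.
#[get("/github/{user}/{repo}")]
async fn badge(path: web::Path<(String, String)>) -> impl Responder {
    let (user, repo) = path.into_inner();
    // Here the real service would clone/pull the repository, compute the
    // HoC score and render it into an SVG badge.
    HttpResponse::Ok()
        .content_type("image/svg+xml")
        .body(format!("<!-- badge for {}/{} goes here -->", user, repo))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(badge))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```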

The service simply creates a bare clone of the referenced repository and parses the output of git log. I also implemented a simple caching mechanism that stores the commit ref of HEAD together with the HoC score. Subsequent requests pull the repository and compare the old HEAD against the new one: if HEAD changed, the HoC between the old and the new HEAD is calculated and added to the cached score; if HEAD stayed the same, the cached score is returned.
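As a rough sketch of that calculation (not the service's actual code), one could shell out to git and sum the added and deleted line counts that `git log --numstat` reports:

```rust
use std::process::Command;

/// Sum the added and deleted lines over the whole history of a local
/// (bare) clone. With an empty `--format=`, git prints one
/// "<added>\t<deleted>\t<file>" line per changed file and commit;
/// binary files show "-" and are skipped here.
fn hits_of_code(repo: &str) -> std::io::Result<u64> {
    let output = Command::new("git")
        .args(["-C", repo, "log", "--format=", "--numstat"])
        .output()?;

    let hoc = String::from_utf8_lossy(&output.stdout)
        .lines()
        .filter_map(|line| {
            let mut cols = line.split_whitespace();
            let added: u64 = cols.next()?.parse().ok()?;
            let deleted: u64 = cols.next()?.parse().ok()?;
            Some(added + deleted)
        })
        .sum();

    Ok(hoc)
}

fn main() -> std::io::Result<()> {
    println!("HoC: {}", hits_of_code(".")?);
    Ok(())
}
```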

I have some ideas for the future, e.g. calculating the metric using a git library instead of invoking the git binary like the reference implementation does, and implementing nicer overview pages. But for now the service works fine and is already used by some repositories. If you have any feature requests or bugs to report, just open an issue on GitHub or contact me directly.

## Final Words

I think HoC is a cool metric, and the service is a fun project to work on and improve further, but always keep in mind:

> Responsible use of the metrics is just as important as collecting them in the first place.

*Jeff Atwood*