Saved you a click:
- JaCoCo for test coverage,
- PMD for static code analysis,
- SpotBugs (successor of FindBugs) for linting and enforcing coding style/best practices,
- japicmp to check semantic versioning
- codecov and checkstyle.
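For a taste of what the static-analysis ones complain about, here is a tiny made-up example of the kind of pattern SpotBugs/PMD typically flag (string comparison with `==`); the class and method names are invented:

```java
public class DiscountService {
    // The kind of pattern SpotBugs/PMD warn about: comparing String
    // contents with == instead of equals(), which only checks identity.
    public boolean isPremium(String tier) {
        return tier == "PREMIUM";   // flagged; should be "PREMIUM".equals(tier)
    }
}
```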
I wish there was a linter that stopped my colleagues from adding half a gigabyte of Apache Spark artifacts to go with a single line of code, which makes the product impossible to deploy to customers; or from implementing another fucking O(n^2) algorithm that flies through the test suite and craps out in production; or stopped our placement students from trying to get ChatGPT shit through code review. Sigh.
I also fucking hate checkstyle, and any of its friends like Google’s spotless; sometimes you want to format your code so that the underlying thinking is more obvious, perhaps to highlight how some parts differ and to make the things that are not as you’d expect stand out, but no.
Tools that generate warnings, for the assistance of human reviewers? Great. Tools that generate errors, so that you have to do stupid shit to keep a machine happy? Not so great.
I wish there was a linter that stopped my colleagues from adding half a gigabyte of Apache Spark artifacts to go with a single line of code, which makes the product impossible to deploy to customers;
It sounds like this would be simple to catch during code reviews, or at the very least with a regression test running after the packaging stage.
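Something like this, roughly (JUnit 5 sketch; the `target/app.jar` path and the 100 MiB budget are made up, adjust them to your build):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;

class ArtifactSizeTest {

    // Hypothetical location of the packaged artifact; adjust to your build.
    private static final Path ARTIFACT = Path.of("target", "app.jar");

    // Generous budget so only a "someone pulled in Spark" change trips it.
    private static final long MAX_BYTES = 100L * 1024 * 1024; // 100 MiB

    @Test
    void packagedArtifactStaysWithinSizeBudget() throws IOException {
        long size = Files.size(ARTIFACT);
        assertTrue(size <= MAX_BYTES,
                "Artifact is " + size + " bytes, over the " + MAX_BYTES
                        + " byte budget; did a dependency sneak in?");
    }
}
```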
or from implementing another fucking O(n^2) algorithm that flies through the test suite and craps out in production;
It sounds like a job for a performance test.
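Roughly along these lines (JUnit 5 sketch; the `dedupe` method and the thresholds are made-up stand-ins, and a proper JMH benchmark would give steadier numbers, but even a crude check like this catches accidental O(n^2)):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import org.junit.jupiter.api.Test;

class DedupePerformanceTest {

    // Stand-in for the code under test; replace with the real call.
    private static List<Integer> dedupe(List<Integer> input) {
        return new ArrayList<>(new LinkedHashSet<>(input));
    }

    private static long timeMillis(int n) {
        List<Integer> input = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            input.add(i % 1_000);
        }
        long start = System.nanoTime();
        dedupe(input);
        return (System.nanoTime() - start) / 1_000_000;
    }

    @Test
    void runtimeGrowsRoughlyLinearly() {
        timeMillis(10_000);                  // warm-up
        long small = timeMillis(100_000);
        long large = timeMillis(1_000_000);  // 10x the input
        // Linear code should take ~10x as long; quadratic code ~100x.
        // The 30x threshold leaves plenty of slack for noise.
        assertTrue(large <= Math.max(1, small) * 30,
                "dedupe(1M) took " + large + " ms vs dedupe(100k) at " + small
                        + " ms: growth looks worse than linear");
    }
}
```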
or stopped our placement students from trying to get ChatGPT shit through code review.
Aren’t code reviews catching this?
I also fucking hate checkstyle, and any of its friends like Google’s spotless; sometimes you want to format your code so that the underlying thinking is more obvious, perhaps to highlight how some parts differ and to make the things that are not as you’d expect stand out, but no.
I’m afraid that’s not supposed to be handled at the code-formatting level. It’s supposed to be handled by extracting the code into a method or a dedicated class and covering it with unit tests that illustrate how it’s expected to work. If you’re trying to handle that with non-standard code formatting, then I’m afraid you’re writing bad code.
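Something like this (all names invented): the surprising branch gets its own method name and its own tests, instead of special formatting:

```java
// Before: the "different" case is only signalled by formatting/comments.
// After: it gets a name and a test that pins the behaviour.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ShippingCalculator {

    int shippingCents(String country, int weightGrams) {
        if (isFlatRateCountry(country)) {
            return 500;                 // the surprising special case
        }
        return weightGrams * 2;         // the normal path
    }

    // The intent now lives in a name instead of in unusual formatting.
    private boolean isFlatRateCountry(String country) {
        return "IS".equals(country) || "MT".equals(country);
    }
}

class ShippingCalculatorTest {

    @Test
    void flatRateCountriesIgnoreWeight() {
        assertEquals(500, new ShippingCalculator().shippingCents("IS", 20_000));
    }

    @Test
    void otherCountriesPayByWeight() {
        assertEquals(2_000, new ShippingCalculator().shippingCents("DE", 1_000));
    }
}
```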
Tools that generate errors, so that you have to do stupid shit to keep a machine happy?
I’ve lost count of how many companies made me create dumb tests to hit some random-ass definition of code coverage. Once I was literally trying to save production and had to go back and add a boilerplate test first. And somehow this is a standard here in Brazil; everyone loves these “metrics”.
To build on this:
- Micrometer to measure the performance of potentially complex code (see the sketch after this list),
- BeanRunner to test POJOs without writing much code.
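A rough sketch of the Micrometer part (the meter name `reports.generate` and the `ReportService` class are made up; in a Spring Boot app you’d normally get the `MeterRegistry` injected rather than build one by hand):

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.util.concurrent.TimeUnit;

public class ReportService {

    private final Timer reportTimer;

    public ReportService(MeterRegistry registry) {
        // A Timer records call count and latency distribution for this block.
        this.reportTimer = Timer.builder("reports.generate")
                .description("Time spent generating a report")
                .register(registry);
    }

    public String generateReport() {
        // record() times the supplied work and publishes it to the registry.
        return reportTimer.record(() -> {
            // ... the potentially complex code being measured ...
            return "report";
        });
    }

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        new ReportService(registry).generateReport();
        Timer timer = registry.get("reports.generate").timer();
        System.out.println("calls: " + timer.count()
                + ", total ms: " + timer.totalTime(TimeUnit.MILLISECONDS));
    }
}
```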
SonarLint (free IDE plugin) / SonarQube, possibly as an alternative to SpotBugs and PMD.