I have spent a couple of weeks evaluating SonarQube 6.2 for my organisation. We are fully on .NET, with hundreds of ASP.NET MVC, WCF, Web API and Windows Service applications and a few million lines of code. We have a ten-year legacy of code, and over those years we have migrated to newer versions of the .NET Framework, different source control systems, different message queues and different architectures. Migrations to new technologies are expensive with that much code, and bad decisions are costly to undo; when you realise a decision was bad only after you have finished integrating it into everything, it becomes a nightmare.
So, taking that into account, I spent quite some time looking into SonarQube. For me personally, if it isn't a hell yes, then it's not the right thing.
SonarQube is a code quality platform that integrates with various build systems and can analyse a wide range of languages. If you have multiple languages and multiple build systems in your codebase, the chances are that SonarQube can handle it. It has a great-looking web UI where you can:
finely tune quality profiles (the list of rules to be applied) on a global or per-project basis.
finely tune quality gates to block deployments.
manage issues generated by rule violations. Flag false positives, issues you won't fix, fixed issues. You can assign issues, add comments, view history etc.
view the code itself
As far as static code analysis (SCA) goes, SonarQube is a great platform. But SCA is BS to a large degree: it is basically a bunch of rules that are too limited to give a true picture of quality. When you get your A grade, don't think you actually have A-grade code, far from it; you might have crappy code with loads of business logic bugs and failure scenarios. So why bother with SCA?
If you know what SCA is capable of, and you know the rules that have been run over the code, then you know that according to that set of limited analyses, your code is ok. For example, your A tells you that:
you don't have loads of duplicated code
you don't have unchecked null reference scenarios everywhere
you don't have classes and methods that are too big and too complex
you don't have subtle bugs caused by nuances of the language that very few developers are aware of
there is a minimum level of test code coverage. This one can be divisive. Maybe you want close to 100% coverage in a dynamic language like Ruby, but for statically typed languages is that really necessary? I prefer to unit/integration test the areas that are genuinely complex, critical or near impossible to test manually, and leave the rest.
The list goes on.
So SCA is valuable, given that you know what your A grade means.
SonarLint for Visual Studio
SonarLint is a Visual Studio extension that binds VS solutions to SonarQube projects. Once bound, SonarLint will download the analysers and rulesets of the quality profile linked to that SQ project. It enables a "Connected Mode", the idea being that developers can get real-time feedback based on the current rules that have been configured on the server.
This should be great. We get real-time feedback on bad code and can fix it before we make commits to source control. If Roslyn analysers come with Code Fixers then developers can fix bad code with a couple of clicks of the mouse via the light bulb icon.
When I first tried it out I told my colleagues that this thing rocks! But then I started on my path to removing false positives, finely tuning file exclusions so that open source JS files added to Script folders were excluded, flagging issues as Won't Fix; and suddenly SonarLint lost its shine. Then I opened a couple of our monolith applications (that happen to be critical to the organisation and have half the developers working on them), and SonarLint looked like a liability.
The reality is that SonarLint for Visual Studio is not a complete product yet, and it falls down in too many areas to be a viable choice at my organisation. So why don't I like SonarLint? There is a bunch of stuff that we need to talk about first:
Pitfalls of Static Code Analysis
Static Code Analysis strategies
Common Pitfalls of Static Code Analysis
False Positives Undermine the Platform
SCA tends to produce a lot of false positives if care is not taken to limit or eliminate them. When the true rule violations have been fixed, all that are left are the false positives. Without a viable method of suppressing these false positives, developers eventually ignore all SCA warnings because most are garbage. A high rate of false positives in new code also undermines confidence in the tool and leads to over-suppression of rules.
This can be avoided by:
Being very careful not to enable rules that create false positives. The ratio of false positives to true violations should be very low. The acceptable ratio depends on the importance of the rule: important rules can support a higher rate of false positives, but low-priority rules that create false positives probably do more harm than good, as they teach developers to disrespect and ignore the SCA results.
Having a clean mechanism for suppressing individual rule violations. Visibility of those suppressions is also important in order to avoid misuse.
It is very important that we aim for zero warnings in our builds. Once we reach zero warnings, maintaining code quality is easy and sustainable. But when we allow hundreds of warnings to persist, the tool will eventually be ignored or under-utilised.
Death by a thousand rule violations
You turn on your new SCA tool and then get a shock at the 10,000 rule violations it detects. So what do you do about that? Well I'll tell you what not to do - don't just let them live on, build after build, week after week, month after month. That is the perfect strategy for getting people to completely ignore your SCA tool. It breeds contempt and/or apathy in your developers. Developers need to see the point of it all. Either turn them off for legacy code or fix your legacy code.
Slow Performance, High Resource Utilisation
When this occurs on developer PCs, it directly impacts developer productivity. It can also create bad feeling towards the SCA tool among developers, and we need developers to use this tool in a positive way.
Server side, higher resource utilisation can be fixed by adding resources, but an increased build time will still impact team productivity.
People treat your A grade for code quality as if it were real
Static Code Analysis results are not the true picture of quality; they are an indicator based on the limited capacity of the rules applied. SCA does not replace good testing practices. Good testing will do more for code quality than an SCA tool, but testing is applied with varying quality and coverage (often not at all). Testing also ensures the business logic is correct, whereas an SCA tool does not. SCA can enforce minimum test coverage of the code, but it does not ensure that the quality of the tests is good.
An SCA tool offers a standardised, always executed set of simple rules that can detect certain classes of bugs and give a rough estimate of quality.
Rule suppression is a tricky thing. If you need to do it (genuinely) too often then you have a bad rule. Rules should not produce a high rate of false positives. However, sometimes a rule is really important and it happens to produce the occasional false positive. So we need to make the SCA tool stop complaining about it.
Both SonarQube and Visual Studio offer different ways of achieving this. SonarQube allows you to flag individual issues (rule violations) as "false positive" or "won't fix", and then ignores those specific issues from that point on. Problem solved.
If you have your SonarQube rules also running in Visual Studio you can suppress a rule in three ways:
#pragma warning directives in the code, which suppress a specific rule for a specific piece of code.
A SuppressMessage attribute in GlobalSuppressions.cs (not optimal, as it suppresses the rule for a whole method, not a specific violation).
A SuppressMessage attribute on the method containing the rule violation. Again, this suppresses the rule for the whole method, not an individual violation.
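As a sketch, the in-code options look like this in C#. The rule ID S1172 ("unused method parameters should be removed") is just an example, the class and justification are invented, and the category string in SuppressMessage must match whatever category the analyser actually reports:

```
using System.Diagnostics.CodeAnalysis;

public class OrderService
{
    // Option 1: #pragma scopes the suppression to specific lines,
    // but pollutes the code itself.
#pragma warning disable S1172 // example rule: unused method parameters
    public void Process(int legacyFlag) { }
#pragma warning restore S1172

    // Option 2: SuppressMessage silences the rule for the whole method,
    // not just the offending line.
    [SuppressMessage("SonarLint", "S1172:Unused method parameters should be removed",
        Justification = "Signature required by a legacy interface")]
    public void Archive(int legacyFlag) { }
}
```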
You really want to suppress just the specific violation, but I find #pragma warnings an unacceptable pollution of the code. I don't want to see that in my code.
So SonarQube offers me the best option: I can suppress individual violations without polluting my code.
Static Code Analysis Strategies
If you've got a big ship, I don't recommend just installing SonarQube, activating some rules and telling your dev teams to go make the code better. I've been around long enough to see a few initiatives die on the vine, ending up in a zombie state where they aren't alive but aren't quite dead either, and static code analysis is especially vulnerable to this because of the common pitfalls already discussed.
I present two strategies; which one you go for depends on your organisation, the size of your codebase, and so on.
Zero Issues Strategy
The goal should be to have a baseline of 0 issues and 0 compiler warnings. The benefits of this approach are:
• Quality is high
• Code quality maintenance is easier to manage
• New issues are identified and fixed quickly
• Developers do not become indifferent to issues and compiler warnings
Because the Zero Issues Strategy requires an initial effort to resolve all open issues, you might want to roll it out in a controlled way. Target the most frequently changed solutions first. You could have a policy that whenever a project/sprint requires a change to a VS solution not yet added to SonarQube, the effort to get to zero issues must be included.
The benefits are that SQ issues and compiler warnings mean something. Once you are at 0, it is easy to maintain that. Developers treat SonarQube with respect and pay attention to it.
Good Trends Strategy
With this strategy we only care about one thing: code quality improves over time. Every time we commit code we don't add new issues and we clean up some old ones. Quality Gates can be configured to block any builds with issues in all the code or just new code.
Trends can be visualised via the use of two different plugins:
Timeline. Free. A simple chart that can be configured to show trends according to multiple different metrics.
Developer Cockpit plugin. €7,000 per year. Includes lots of nice features, including trend analysis.
Limitations of SonarQube and Trend Analysis
Quality Gates cannot be configured to fail when a key metric trends downwards. They can only be configured with hard-coded thresholds, measured against all code or just new code.
However, we CAN be very strict with new code, for example by allowing zero issues on new code. This by itself should limit trends going in the wrong direction.
Enforcing Good Trends on Old Code
While bad new code can easily be blocked by Quality Gates, old code cannot. It is likely that you will want to define a quality roadmap for an important SQ project (old and new code) and be able to enforce movement towards that goal.
Trend analysis is only viable at an SQ project level. To control trends at a project level via Quality Gates you would need “project owners”, and these owners would need to set these hard coded values and maintain them over time. It also means that we would need a Quality Gate per SQ Project (VS Solution).
The alternative is to not use Quality Gates to enforce good trends on old code. We would need vigilance and good governance on behalf of tech leads (risky depending on organisation culture).
Likely Strategy Implementation
For core and critical solutions: Individual Quality Gates set up to enforce good trends over all code, and maintenance of the quality gates is performed by elected project owners in accordance with a roadmap.
For 90% (invented statistic!) of projects that are not defined as core or critical: Default quality gate enforces 0 issues in new code but ignores old code. Old code trends enforced by the diligence of tech leads.
Why I won't recommend SonarLint
Taking all the above into consideration, why do I not like SonarLint? The promise of SonarLint is to be able to run the rules configured on the server inside Visual Studio and get all the goodness of real-time analysis and code fixers. If it were that simple and worked then this would be a killer feature of SonarQube.
Let's look at how it all falls down, in order of worst to not a big deal.
Analysis in Visual Studio is not the same as on the server
If you are going to run your quality profiles locally, in order to be able to cleanly manage your rule violations it is critical that the experience in Visual Studio is the same as the server. If you use the Zero Issues strategy you want zero compiler warnings, but this is not possible with SonarLint.
All those issues you flagged as "false positive" or "won't fix" are highlighted in your code and produce compiler warnings. All those file exclusions and issue exclusions you set up on the server? Ignored by SonarLint. You compile in Visual Studio and get harassed by all these violations that you already said weren't a problem.
It is essential that you remove spurious violations, I cannot say that enough. Spurious warnings that won't die are a disaster for your code quality strategy. By using SonarLint that is exactly what you produce. Except that maybe developers start using in-code rule suppression to stop the warnings. If you reach that point, then forget using SonarQube server as the single source of truth and a great way of managing issues. It becomes a mess. Some of the main reasons for using SonarQube are thrown away.
I use a Windows machine with an i7, 16 GB of RAM and an SSD. On small projects I noticed a small performance impact on compile times. Incremental builds would sometimes lock up my PC for 2-3 seconds. A compile after changing one line of code would be near instantaneous without SonarLint, but close to a rebuild with SonarLint enabled.
Then I opened a monolith. 300,000 lines of C#. Without SonarLint, a full rebuild normally takes about 1:30 minutes, incremental builds are not noticeable and a build after a single line is changed is around 20 seconds. With SonarLint a rebuild is close to 6 minutes, incremental builds lock up my PC for short periods occasionally and a build after the same line change is 3:30 minutes. Given that half our developers work on critical monolith applications, this is going to be a problem.
So I changed to a quality profile with about 80 rules instead of 220. The rebuild went down to 2:30. Not great, but potentially workable, maybe.
Do I have to compromise the strength of the quality profile so that SonarLint is usable by developers? I don't like that at all.
Developers can turn off rules locally
Ruleset files are added to every VS project in the solution. Developers can open these ruleset files and turn rules off, or add rules, for that specific project. Being able to customise the rules at project level sounds great, except that you can do much the same from the SQ server using issue exclusions.
The danger of this functionality is that developers can turn things off and you don't see that from the SQ Web UI. Your SQ administrators have just lost control!
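To illustrate the risk, all it takes is flipping a rule's Action in the generated .ruleset file; the rule ID and file names below are illustrative:

```
<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="MyApp (bound to SonarQube)" ToolsVersion="14.0">
  <Rules AnalyzerId="SonarAnalyzer.CSharp" RuleNamespace="SonarAnalyzer.CSharp">
    <!-- Changing Action from "Warning" to "None" disables the rule locally,
         and nothing shows up in the SQ Web UI. -->
    <Rule Id="S1172" Action="None" />
  </Rules>
</RuleSet>
```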
Artefacts added to solutions
When you bind a solution to a SonarQube project, the following gets added:
SonarQube folder at the solution level. This contains two files: SolutionBinding.sqconfig and a ruleset file that contains all the C# rules of your quality profile.
A ruleset file per project
A reference to that ruleset in each csproj file
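As a sketch of that last point, the binding wires each project to its ruleset through the standard MSBuild CodeAnalysisRuleSet property (the folder and file names here are illustrative):

```
<!-- Added to each .csproj by the SonarLint binding -->
<PropertyGroup>
  <CodeAnalysisRuleSet>..\SonarQube\MyAppCSharp.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
```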
Probably not such a big deal if you make a strategic move to SonarQube, but it does mean some cleanup will be required if you move to a different SCA tool later on.
I love SonarQube as a server side analysis platform. It has loads of great plug-ins, you can easily add your own analysers. Issue management is great. We are considering a move to different source control and build systems and SonarQube has integration with all of them.
But SonarLint is just not ready yet. If SonarSource/Microsoft could just fix the bad synchronisation and the performance problems, I'd start using SonarLint for Visual Studio without a second thought. The real-time feedback and the use of code fixers would be a major plus, and not having that is a big negative for the platform as a whole. So I'm not ruling out SonarLint forever; I'll keep an eye on it and hope that in future they solve those problems.