At work we have a lot of experience with code quality metrics and their relation to software maintenance and software reliability. Maintenance costs are calculated using these code metrics: the better the code, the easier it is to maintain and the lower the costs.
As Cosa is imho, quality-wise, a step up compared to most of the libraries out there for Arduino, I wondered what the statistical metrics and quality of this package are.
So I downloaded the current Cosa package and ran it through some analysis, with an overall very satisfying result, I must say, mr. Kowalski!
The most widespread and valuable method, but also one of the least understood, is McCabe's Cyclomatic Complexity.
Cyclomatic complexity is defined as measuring “the amount of decision logic in a source code function”, where higher numbers are “bad” and lower numbers are “good”. Simply put, the more decisions that have to be made in code, the more complex it is. We use cyclomatic complexity to get a sense of how hard any given piece of code may be to test, maintain, enhance, refactor or troubleshoot, as well as an indication of how likely the code is to produce errors.
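As a rough illustration (my own toy example, not Cosa code) of how the counting works: every `if`, loop, `case` label and short-circuit operator adds one decision point, and the complexity is that count plus one for the straight-line base path:

```cpp
#include <cassert>

// Toy example (not from Cosa): each decision point (if, loop, case,
// &&, ||) adds 1 to the cyclomatic complexity; the single straight
// path through the function contributes the base 1.
int clamp_sum(const int* values, int n, int limit)
{
  int sum = 0;
  for (int i = 0; i < n; i++) {  // decision: +1
    if (values[i] < 0)           // decision: +1
      continue;                  // skip negative values
    sum += values[i];
  }
  if (sum > limit)               // decision: +1
    sum = limit;
  return sum;                    // base path: 1 => complexity 4
}
```

With a complexity of 4 this sits comfortably in the easy-to-test range; a method scoring 39 has an order of magnitude more independent paths that each need a test case.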
The complexity of the source code is expressed as a number. That number defines the risk of the module, as classified by the Software Engineering Institute, as follows:
|Cyclomatic Complexity|Risk Evaluation|
|---|---|
|1-10|A simple module without much risk|
|11-20|A more complex module with moderate risk|
|21-50|A complex module of high risk|
|51 and greater|An untestable program of very high risk|
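For reference, the SEI bands above can be written down as a trivial classifier (my own sketch; the band numbers 1-4 are just shorthand for the four table rows):

```cpp
#include <cassert>

// The SEI risk bands from the table above, as a classifier
// (illustrative only): returns 1 for complexity 1-10, 2 for 11-20,
// 3 for 21-50, and 4 for 51 and greater.
int risk_band(int cyclomatic_complexity)
{
  if (cyclomatic_complexity <= 10) return 1; // simple, not much risk
  if (cyclomatic_complexity <= 20) return 2; // moderate risk
  if (cyclomatic_complexity <= 50) return 3; // high risk
  return 4;                                  // untestable, very high risk
}
```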
We always strive for numbers below 10, as such code has proven to be the easiest to maintain and debug. The largest number I have seen so far is 85. That software has proven, over a year of maintenance, to be very, very, very hard to modify and almost impossible to test.
So how does Cosa 'perform' code-metrics-wise?
As you can see, Cosa contains 163 files with a total of 20,564 lines. Of these lines, 21% are comments (about 4,380 lines), 20% are branch statements (about 4,215 lines) and 9,933 lines contain normal statements.
If I zoom in on the quality metrics, I get the following Kiviat graph, where green = ok; a value above or below that green band means that the quality metric should be investigated further:
The general impression is good: 5 of the 7 quality metrics are within the green band, one (Max Depth) is just outside it, and one, Max (Cyclomatic) Complexity, is at a value of 39 far outside the required band and should be investigated further.
The next list shows the modules that exceed the Max Complexity value of 10. It also shows each module's average complexity. What we see is that the number of modules with a higher than wanted complexity is not very high, but that the average complexity of some of these modules is also too high: in other words, it is not just one function that has a high complexity.
I picked three modules, MQTT, TWI and Menu, to get an idea of what is causing this high complexity and depth.

Menu:
The Menu module is by far the worst module.
All metrics are OUTSIDE the green band!
The worst offender is the Menu::Walker::on_key_down() method, which scores the highest on both Complexity (39) and block depth (7). The method contains nested switch statements and a lot of decision logic, causing these high complexity and depth numbers.

MQTT:
The MQTT module scores high on statements per method and on both average and maximum complexity. The worst offender is the MQTT::Client::publish() method, which scores the highest on both Complexity (24) and block depth (4). Again a large switch statement seems to be responsible for the high complexity and depth.

TWI:
The last module, TWI, scores well on all metrics except maximum complexity. The worst offender is the ISR() method, which scores the highest on both Complexity (34) and block depth (4). Again a large switch statement seems to be responsible for the high complexity and depth.
These three examples show that (large) switch statements are responsible for most of the high complexity and depth.
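I can't say whether this fits Cosa's state machines (an ISR on an AVR has timing and code-size constraints of its own), but the textbook way to flatten a big switch is table-driven dispatch: move each case body into its own small function and index a handler table by state. Each handler then carries only its own small complexity, and the dispatcher itself drops to 2. A sketch with made-up states and handlers:

```cpp
#include <cassert>

// Hypothetical sketch (not Cosa code). A state machine written as one
// big switch scores one complexity point per case; splitting the case
// bodies into handlers and dispatching through a table spreads that
// complexity out, at the cost of an indirect call per event.
enum State { IDLE, START, DATA, NSTATES };

static int on_idle(int ev)  { return ev + 1; } // former "case IDLE:" body
static int on_start(int ev) { return ev * 2; } // former "case START:" body
static int on_data(int ev)  { return ev - 1; } // former "case DATA:" body

typedef int (*handler_t)(int);
static const handler_t HANDLERS[NSTATES] = { on_idle, on_start, on_data };

int dispatch(unsigned state, int ev)
{
  if (state >= (unsigned) NSTATES)  // guard: +1 => dispatcher complexity 2
    return -1;
  return HANDLERS[state](ev);
}
```

Whether this actually improves the Cosa code, or just moves the numbers around, is exactly the kind of question the metrics alone can't answer.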
I haven't analysed in depth whether these modules could be refactored to improve their quality metrics, or whether the metrics are just the way they have to be: perhaps there is no other way to implement this functionality! I hope kowalski can shed some light on this issue.
Summarized: Cosa shows that there is high-quality software in the Arduino world. The fully object-oriented approach and the constant refactoring that kowalski does are no doubt some of the reasons behind these good code metrics results.
I did like Cosa already, but now even more!