Rascal

pYro_65:

But my real point is that portability as a goal is contrary to efficient use of resources

Time is money, and code written with an eye to the future can save not only the programmers from headaches but also the wallet from possible delays in future updates.

Embedded design is one area where that truism breaks down. When the real cost of a project is the production cost of the hardware (as opposed to simply the software development cost), it makes sense to use the cheapest components that will still accomplish the desired goal. And that is something that requires platform-specific optimizations. Those can be simple code optimizations, but they also include choosing the algorithms that are the most efficient for the architecture... That reduces (or eliminates) the benefit of the OO approach, in my opinion.

pYro_65:

Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture.

True, but most differences are statically marked using #define THIS_PROCESSOR and the like, provided by the architecture's implementation, so there is no reason for things to be inefficient if all the relevant information is available at compile time.

The #define THIS_PROCESSOR approach only allows for limited optimizations. Producing the best, most efficient code requires not simply that the low-level interface code be optimized, but that the whole algorithm be chosen and implemented to complement the hardware's abilities. Trying to accommodate multiple architectures of vastly differing capabilities only results in mediocre performance...
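
To make that concrete, here is a rough sketch of what the #define style of selection buys you (the pin and macro names are just for illustration, assuming an Uno-style board for the AVR branch): only the lowest-level access changes per target, while the algorithm wrapped around it is identical everywhere, which is exactly why the gains are limited.

    #if defined(__AVR__)
        // 8-bit AVR (e.g. Uno): writing a 1 to a PINx bit toggles that output pin directly.
        #define TOGGLE_LED()  (PINB = _BV(PB5))
    #else
        // Any other architecture (a Due branch with its own register trick would go here):
        // fall back to the portable, but much slower, Arduino API.
        #define TOGGLE_LED()  digitalWrite(13, !digitalRead(13))
    #endif

    void setup() {
        pinMode(13, OUTPUT);
    }

    void loop() {
        TOGGLE_LED();   // Only this low-level access changes per platform;
                        // the algorithm built around it stays the same on every target.
    }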

pYro_65:

Code that is easily portable between vastly different architectures will perform very poorly on at least some of those architectures.

True, if you compare, say, an Arduino and a Raspberry Pi. But I'm thinking in practical terms of, say, 8-bit AVRs and the upcoming Due's 32-bit SAM-something-something processor. Sure, the 32-bit part can do vastly more, and do it differently; thankfully, a lot of major communication protocols and such are not bound to particular architectures, and even those that are can still be given an interface that is independent of any one system.

Multiple SPIs on the 32-bit part could be emulated in software on the 8-bit part without the client code that uses the SPI handling code ever knowing the difference. Sure, that takes a performance hit, but that's to be expected when taking features backwards. Extending feature sets, if done correctly, should not impact systems already in place; you still have to explicitly write code for each architecture, but well-formed encapsulation should mean you only have to write it once.
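
What pYro_65 is describing would look roughly like this -- a minimal sketch, with invented class and pin names, of hiding a real hardware SPI peripheral and a bit-banged software emulation behind the same interface:

    #include <Arduino.h>
    #include <SPI.h>

    class SpiPort {
    public:
        virtual uint8_t transfer(uint8_t out) = 0;   // all the client code ever sees
        virtual ~SpiPort() {}
    };

    // 32-bit part: wrap a real hardware SPI peripheral (assumes SPI.begin() was called).
    class HardwareSpiPort : public SpiPort {
    public:
        uint8_t transfer(uint8_t out) { return SPI.transfer(out); }
    };

    // 8-bit part: fake an extra port by bit-banging ordinary GPIO pins (SPI mode 0).
    class SoftwareSpiPort : public SpiPort {
        uint8_t _sck, _mosi, _miso;
    public:
        SoftwareSpiPort(uint8_t sck, uint8_t mosi, uint8_t miso)
            : _sck(sck), _mosi(mosi), _miso(miso) {
            pinMode(_sck, OUTPUT);
            pinMode(_mosi, OUTPUT);
            pinMode(_miso, INPUT);
        }
        uint8_t transfer(uint8_t out) {
            uint8_t in = 0;
            for (int8_t bit = 7; bit >= 0; --bit) {
                digitalWrite(_mosi, (out >> bit) & 1);  // set data while the clock is low
                digitalWrite(_sck, HIGH);               // slave samples on the rising edge
                in = (in << 1) | digitalRead(_miso);
                digitalWrite(_sck, LOW);
            }
            return in;
        }
    };

Client code holds an SpiPort reference and never knows whether the bits are moved by silicon or by bit-banging -- which is his point; mine is about what happens to the algorithms above that interface.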

Again, for me, the issue is one of choosing the appropriate algorithms for the appropriate hardware. Something that works well on the Due is likely to require an entirely different approach when attempting to use an 8-bit uC to accomplish the same task. And in many ways the newer, faster, more powerful uCs (like the Due) are eliminating the need for this approach, much as faster general-purpose computer hardware eliminated the need for such optimizations in those applications and made the "OO way" so useful. It is a better tool for software engineering, but it does not tend to produce efficient code... Not because the syntax requires bloat, but because the design methodology encourages it. And because if you throw enough horsepower at a problem, code that is 5, 10, or even 50% less efficient doesn't matter, since in a short time the hardware running that code will be 200% faster...
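
As a simplified example of what I mean by choosing the algorithm to fit the part (the function names and the 0x07 polynomial are just mine for illustration): the same CRC-8 can be computed bit by bit with no table, which suits a part with 2 KB of RAM, or driven from a 256-byte lookup table, which is the obvious choice on a 32-bit part with memory to spare.

    #include <stdint.h>
    #include <stddef.h>

    // Tiny, table-free version: no RAM cost, but roughly 8 shift/xor steps per byte.
    uint8_t crc8_bitwise(const uint8_t *p, size_t n) {
        uint8_t crc = 0;
        while (n--) {
            crc ^= *p++;
            for (uint8_t i = 0; i < 8; i++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
        }
        return crc;
    }

    // Table-driven version: one lookup per byte, at the cost of a 256-byte table
    // that is trivial on a 32-bit part but a real chunk of a small 8-bit part's RAM.
    static uint8_t crcTable[256];

    void buildCrcTable(void) {
        for (int b = 0; b < 256; b++) {
            uint8_t crc = (uint8_t)b;
            for (uint8_t i = 0; i < 8; i++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
            crcTable[b] = crc;
        }
    }

    uint8_t crc8_table(const uint8_t *p, size_t n) {
        uint8_t crc = 0;
        while (n--) crc = crcTable[crc ^ *p++];
        return crc;
    }

Both produce the same result; which one is the "right" implementation depends entirely on the part it has to run on.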

But embedded design, with its emphasis on least cost, is one area where such optimizations make economic sense... And besides, it makes me nostalgic for the days when I had to get my code running within the 512-2048 bytes available on my machine... :slight_smile: