And what is a fish going to do with a bike?
Quote: "And what is a fish going to do with a bike?"

Have you used new() on Arduino?
Quote from: wanderson on Jul 24, 2012, 02:00 am: "Compound that with OO's feature of masking the low level implementation"

I wholeheartedly disagree with that statement. Masking of low-level implementation has nothing to do with OO vs. procedural. If you're using a third-party library that isn't open source, then the implementation of that library is masked from you regardless of what language it's written in. If the sources are available, then its implementation isn't masked.
Quote: "Compound that with OO's feature of masking the low level implementation"
I've already shown that the majority of C++ features incur no overhead with regards to code size or performance. Bottom line is, there is nothing inherently inefficient about C++ compared to C. That's a fallacy, plain and simple. Yes, it takes experience to write efficient code, but an inexperienced programmer is just as likely to write inefficient C code as they are C++ code. And for the record, though not really applicable to the Arduino, the STL is a very efficient library.
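To make the "no inherent overhead" claim concrete, here is a minimal sketch (the register is hypothetical and simulated as a plain variable so it runs off-target): a C-style bit-twiddling function next to the same operation wrapped in a C++ class template. With the bit number fixed at compile time, a modern compiler inlines both to the same read-modify-write sequence; the class adds no instructions.

```cpp
#include <cstdint>

// Hypothetical memory-mapped port register, modeled here as a plain
// variable so the example runs on a desktop compiler.
static uint8_t PORTB_SIM = 0;

// C style: direct register manipulation.
inline void c_set_bit(uint8_t bit)   { PORTB_SIM = PORTB_SIM |  (1u << bit); }
inline void c_clear_bit(uint8_t bit) { PORTB_SIM = PORTB_SIM & ~(1u << bit); }

// C++ style: the same operations wrapped in a class. Because Bit is a
// compile-time constant, the compiler inlines these static member
// calls down to the identical single read-modify-write sequence.
template <uint8_t Bit>
struct OutputPin {
    static void high() { PORTB_SIM = PORTB_SIM |  (1u << Bit); }
    static void low()  { PORTB_SIM = PORTB_SIM & ~(1u << Bit); }
};
```

Comparing the generated assembly (e.g. with avr-gcc -S) for `c_set_bit(3)` and `OutputPin<3>::high()` is the quickest way to verify the zero-cost claim for yourself.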
Quote: "OO only becomes a clear advantage when the resources are for all intents unlimited or when the project size is extremely large."
Of course simple, traditional function-based code can mask the low-level implementation too. But it is easier to write low-level C with an appreciation of what the assembly output will be than it is to keep that same appreciation in higher-level function-based code, and especially in OO code. It is a question of what is practical, not what is possible...
You are incorrect when you claim "no overhead", though a "proof" would require comparison of the produced assembly.
Quote: "OO code, produced for reusability"
The purpose of inheritance in C++ is to express interface compliance (subtyping), not to get code reuse. In C++, code reuse usually comes via composition rather than via inheritance. In other words, inheritance is mainly a specification technique rather than an implementation technique.
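A small sketch of that distinction (class names are hypothetical): inheritance states that a type complies with an interface, while the actual implementation is reused by containing another class, not deriving from it.

```cpp
// Inheritance expressing interface compliance (subtyping): any sensor
// can be read through the same base-class pointer.
struct Sensor {
    virtual int read() = 0;
    virtual ~Sensor() {}
};

// Code reuse via composition: MovingAverage is reused by *containing*
// it, not by deriving from it.
class MovingAverage {
    int sum = 0, count = 0;
public:
    void add(int v) { sum += v; ++count; }
    int  value() const { return count ? sum / count : 0; }
};

// Hypothetical sensor that reuses MovingAverage through composition
// (a "has-a" member) while complying with the Sensor interface
// through public inheritance (an "is-a" relationship).
class SmoothedSensor : public Sensor {
    MovingAverage avg;            // has-a: implementation reuse
public:
    void sample(int raw) { avg.add(raw); }
    int  read() override { return avg.value(); }
};
```

The inheritance arrow carries the specification ("this is a Sensor"); the member carries the borrowed implementation.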
Wrapping a non-member function into a class will not add overhead, and an 'interface' does not generate instructions. The C++ compiler exploits the added semantic information associated with public inheritance to provide static typing.

Quote: "OO code, produced for reusability"
This is one major problem. Object-oriented paradigm features, especially inheritance, are not for code reuse. If that is your sole reason for using an OO paradigm, you are using it incorrectly.
Quote: "When programming for an embedded environment OO features insulate the programmer from the hardware"
I'm not sure I fully understand this argument. Is it, for example, pointing out the difference between doing explicit port mapping and letting a class do it internally? Because that is not insulation but encapsulation. The greater difference here is that a class can be programmed internally for portability, doing the appropriate port manipulations for the target processor in a generic way. To gain the same level of portability with linear-style code would involve a tremendous amount of conditional branching in a port-I/O-intensive application.
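A minimal sketch of that encapsulation idea (register names and the TARGET_AVR macro are hypothetical, and the registers are simulated as plain variables so it runs off-target): the class presents one target-independent interface, and the conditional branching lives once, inside the class, rather than scattered through client code.

```cpp
#include <cstdint>

// Hypothetical port registers for two targets, simulated as plain
// variables so the example compiles on a desktop toolchain.
static uint8_t  AVR_PORT_SIM = 0;
static uint32_t ARM_PORT_SIM = 0;

// The public interface is target-independent; the internals select
// the right register access per architecture at compile time.
class Led {
public:
    void on() {
#if defined(TARGET_AVR)
        AVR_PORT_SIM = AVR_PORT_SIM | 0x01;   // 8-bit read-modify-write
#else
        ARM_PORT_SIM = 0x01;                  // e.g. a 32-bit set register
#endif
    }
    bool isOn() const {
#if defined(TARGET_AVR)
        return (AVR_PORT_SIM & 0x01) != 0;
#else
        return (ARM_PORT_SIM & 0x01) != 0;
#endif
    }
};
```

Client code only ever writes `led.on()`, so porting to a new target means touching the class internals, not every call site.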
While the class-based portability you describe is aesthetically pleasing, it can be fairly easily mimicked without the "tremendous" effort you ascribe to it.
But my real point is that portability as a goal is contrary to efficient use of resources
Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture.
Code that is easily portable between vastly different architectures will result in code that is very poorly performing on at least some of those architectures.
Quote: "But my real point is that portability as a goal is contrary to efficient use of resources"

Time is money, and forward-looking code can save not only programmers from headaches but also the wallet from possible delays in future updates.
Quote: "Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture."

True, but most differences are statically marked using #define THIS_PROCESSOR and the like, provided by the architecture's implementation, so there is no reason for things to be inefficient if all the relevant information is available at compile time.
Quote: "Code that is easily portably between vastly different architectures will result in code that is very poorly performing on at least some of those architectures."

True, if you compare, say, an Arduino and a Raspberry Pi. But I'm thinking in practical terms of, say, 8-bit AVRs and the upcoming Due's 32-bit sam-something-something processor. Sure, the 32-bit part can do vastly more, and differently; thankfully a lot of major communication protocols are not bound to particular architectures, and even those that are can still be given an interface that is independent of any system.

Multiple SPIs on the 32-bit part could be emulated in software on the 8-bit part with no knowledge required of the client code using the actual SPI handling code. Sure, that is taking a performance hit, but that's to be expected when taking features backwards. Extending feature sets, if done correctly, should not impact systems already in place; you still have to explicitly write code for each architecture, but well-formed encapsulation should mean you only have to write it once.
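The SPI point above can be sketched like this (all names are hypothetical, and the "bus" is just a vector so the example runs off-target): client code is written once against an abstract bus interface, and a software bit-banged implementation can stand in on a chip that lacks a second hardware SPI, with the client none the wiser.

```cpp
#include <cstdint>
#include <vector>

// One SPI interface shared by all client code.
struct SpiBus {
    virtual uint8_t transfer(uint8_t out) = 0;
    virtual ~SpiBus() {}
};

// Software ("bit-banged") stand-in for a missing hardware SPI. Real
// code would toggle clock/data pins bit by bit; here we just record
// what was shifted out so the example is testable.
class SoftSpi : public SpiBus {
public:
    std::vector<uint8_t> wire;                 // bytes shifted out
    uint8_t transfer(uint8_t out) override {
        wire.push_back(out);
        return 0xFF;                           // pretend the slave answered 0xFF
    }
};

// Client code written once against the interface: it runs unchanged
// whether the bus underneath is hardware or software.
uint8_t writeRegister(SpiBus& bus, uint8_t reg, uint8_t val) {
    bus.transfer(reg);
    return bus.transfer(val);
}
```

Swapping in a hardware-backed `SpiBus` subclass on the 32-bit part changes nothing in `writeRegister` or any other client.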
Embedded design is one area that makes this truism false. When the real cost of a project is the production cost of the hardware (as opposed to simply software development costs), it makes sense to use the cheapest components possible that will still accomplish the desired goal. And that is something that requires platform-specific optimizations. Those can include simple code optimizations, but also the choosing of algorithms that are most efficient for the architecture... That reduces (or eliminates) the benefit of the OO approach, in my opinion.
Quote: "the dynamic version was slower"
How do you create a "dynamic version" without the new operator?
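One answer, as a sketch (class names are hypothetical): dynamic dispatch does not require dynamic allocation. Virtual calls work through pointers to statically or stack-allocated objects just as well as through heap ones, so a "dynamic version" needs no call to new at all.

```cpp
// Polymorphic interface: calls through a Shape* dispatch at run time.
struct Shape {
    virtual int area() const = 0;
    virtual ~Shape() {}
};
struct Square : Shape {
    int side;
    explicit Square(int s) : side(s) {}
    int area() const override { return side * side; }
};
struct Rect : Shape {
    int w, h;
    Rect(int w_, int h_) : w(w_), h(h_) {}
    int area() const override { return w * h; }
};

// Static storage: no heap, no new, yet the calls below still
// dispatch dynamically through the vtable.
static Square sq(3);
static Rect   rc(2, 5);
static Shape* shapes[] = { &sq, &rc };

int totalArea() {
    int t = 0;
    for (Shape* s : shapes) t += s->area();   // virtual call per element
    return t;
}
```

This is a common pattern on heap-averse targets like the AVR: the objects live in static storage, and only the dispatch is dynamic.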