
Topic: Rascal (Read 5963 times)

florinc

Quote
And what is a fish going to do with a bike?

Have you used new() on Arduino?

jraskell


Quote
And what is a fish going to do with a bike?

Have you used new() on Arduino?

Nope.

pYro_65

C'mon guys, give C++ a break, lol.
jraskell has some great points that I would back up if I weren't heading to work soon ( great excuse, I know! ), and comparing polymorphism to new is silly. Polymorphism, just like everything else, comes in two flavours: dynamic and static.

Docedison

Very interesting dissertation, though somewhat off topic.....
We do, though, "tend to read what we understand rather than understand what we read..." My statement was about understanding Linux FIRST, in order to learn and use the language for the Rascal (Python? Cool, but later)...
I am having enough trouble learning C and C++ for now. I am one of those people who do things in a serial manner... You know... learn to walk before I run the NY marathon.
The language is slowly coming to have meaning for me as I learn it. I have no desire whatsoever to become anything more than reasonably familiar with C and see what happens from there.
I have more years of electronics experience than most of the people I interface with here in the forum have in writing code... (again, I don't wish to be misunderstood; I won it the hard way, by persevering to the best of my ability).
I am NOT trying to offend people here who are in my age bracket. I know who you are and I respect your knowledge more than I can say. There are, however, others who don't see what I am trying to say.
I WILL do the same with C and C++. Remember, I bought my first Arduino Uno in March of this year and everything else is going well. I can write a sketch to a limited degree, I understand enough of the process to make a sketch do as I want, and I learn more every day.
I thought the Rascal to be an interesting device for "Forum Chatter". Highly educational and very frequently amusing reading the threads generated by this idea and many more like it...
The Rascal is way out of my league now, and I will not divert my attention elsewhere until I am more comfortable with the Arduino brand of C and C++ and these devices, taken singly and as building blocks (I bought two 328, 5V Pro Minis for stand-alone, radio-interconnected tasks for "This Year's Grand Project")...

Doc
Next year I have no doubt I will be after another project but the Arduino will do for now...
--> WA7EMS <--
"The solution of every problem is another problem." -Johann Wolfgang von Goethe
I do answer technical questions PM'd to me with whatever is in my clipboard

wanderson



Quote
Compound that with OO's feature of masking the low level implementation


Quote
I wholeheartedly disagree with that statement.  Masking of low level implementation has nothing to do with OO vs procedural.  If you're using a third party library that isn't open source, then the implementation of that library is masked from you regardless of what language it's written in.  If the sources are available, then its implementation isn't masked.


Of course simple, traditional functional code can mask the low level implementation.  But it is easier to write low level C with an appreciation of what the assembly output will be than it is to write higher level function-based code, and especially OO code.  It is a question of what is practical, not what is possible...


Quote
I've already shown that the majority of C++ features incur no overhead with regards to code size or performance.  Bottom line is, there is nothing inherently inefficient about C++ compared to C.  That's a fallacy, plain and simple.  Yes, it takes experience to write efficient code, but an inexperienced programmer is just as likely to write inefficient C code as they are C++ code.  And for the record, though not really applicable to the Arduino, the STL is a very efficient library.


You are incorrect when you claim "no overhead", though a "proof" would require comparison of the produced assembly.  However, that ignores the crux of my claim.  OO code, produced for reusability, will include components that are simply not needed for every application.  This is also the problem with functional based libraries of code.  A simple example is code that handles multiple data types, when only one is needed for a given application.  It is not a question of the "theoretical" use of any approach but the practical.

Here is a specific example.  I was attempting to test some code on an ATTINY2313.  The core code was small enough for both FLASH and RAM conditions; however, by attempting to use the Serial object I was exceeding available resources.  The only way to get that code to work involved using low level C code to transmit serial data (of the specific form I needed).  At that level OO approaches would be indistinguishable from straight C or really even assembler...


Quote
OO only becomes a clear advantage when the resources are for all intents unlimited or when the project size is extremely large.


Quote
Again, I have to disagree.  The advantages of OO code become clear with just a few thousand lines of code, which doesn't even come close to pushing the limits of the Arduino unless it makes extensive use of RAM or utilizes some fairly large third party libraries (the latter case really meaning it's much larger than a few thousand lines of code).

Encapsulation alone is far better than having a few dozen global variables and dozens of functions whose only real organization is in their naming convention.  Hell, it's better than having half a dozen globals and a handful of functions in even a small project.

Encapsulation is easily obtainable in a straight functional approach.  Indeed the concept was incorporated as a desirable programming practice long before the practical use of OO tools... 

pYro_65

#20
Jul 24, 2012, 02:21 pm Last Edit: Jul 24, 2012, 02:37 pm by pYro_65 Reason: 1
Just my view on the matter

Quote
Of course simple, traditional functional code can mask the low level implementation.  But it is easier to write low level C with an appreciation of what the assembly output will be than it is to write higher level function-based code, and especially OO code.  It is a question of what is practical, not what is possible...

Quote
You are incorrect when you claim "no overhead", though a "proof" would require comparison of the produced assembly.


I believe that was a fairly accurate analysis. Wrapping a non-member function into a class will not add overhead and 'Interface' does not generate instructions. The C++ compiler exploits the added semantic information associated with public inheritance to provide static typing.

Quote
OO code, produced for reusability


This is one major problem. Object Oriented paradigms, especially inheritance, are not for code re-use. If that is your sole reason for using an OO paradigm, you are using it incorrectly.

Quote
The purpose of inheritance in C++ is to express interface compliance (subtyping), not to get code reuse. In C++, code reuse usually comes via composition rather than via inheritance. In other words, inheritance is mainly a specification technique rather than an implementation technique.

wanderson

#21
Jul 24, 2012, 02:48 pm Last Edit: Jul 24, 2012, 03:40 pm by wanderson Reason: 1

Quote
Wrapping a non-member function into a class will not add overhead and 'Interface' does not generate instructions. The C++ compiler exploits the added semantic information associated with public inheritance to provide static typing.

OO code, produced for reusability


While this is true in some cases, and possibly even most, it is not always true...  and that is the crux of the problem.  When programming for an embedded environment OO features insulate the programmer from the hardware


Quote
This is one major problem. Object Oriented paradigms, especially inheritance, are not for code re-use. If that is your sole reason for using an OO paradigm, you are using it incorrectly.


You may not be old enough to remember when OO was first being proselytized, but code re-use was the major selling point at that time.  Interface compliance or inheritance was primarily sold as a method of improving the ability to re-use code over the methods that were then available.  Indeed, the additional design requirements could only be justified by the labor savings such re-use would allow for...


Quote
The purpose of inheritance in C++ is to express interface compliance (subtyping), not to get code reuse. In C++, code reuse usually comes via composition rather than via inheritance. In other words, inheritance is mainly a specification technique rather than an implementation technique.


Again, what you're describing is part of the OO design approach...  And I believe your interpretation differs from mine due to your modern experience base, while mine is influenced by the religion's earlier sales pitch...

pYro_65

#22
Jul 24, 2012, 03:24 pm Last Edit: Jul 24, 2012, 03:26 pm by pYro_65 Reason: 1
Quote
When programming for an embedded environment OO features insulate the programmer from the hardware


I'm not sure I fully understand this argument; is it, for example, pointing out the difference between doing explicit port mapping and letting a class do it internally? 'Cos that is not insulation but encapsulation. The greater difference here is that a class can be programmed internally for portability, doing the appropriate port manipulations for the target processor in a generic way.

To gain the same level of portability with linear style code would involve a tremendous amount of conditional branching when considering a port I/O intensive application.

wanderson


Quote
I'm not sure I fully understand this argument; is it for example: pointing out the difference between doing explicit port mapping and letting a class do it internally, cos that is not insulation but encapsulation. The greater difference here is a class can be programmed internally for portability, doing the appropriate port manipulations for the target processor in a generic way.

To gain the same level of portability with linear style code would involve a tremendous amount of conditional branching when considering a port I/O intensive application.


While the class based portability that you describe is aesthetically pleasing, it can be fairly easily mimicked without the "tremendous" effort you ascribe.  But my real point is that portability as a goal is contrary to efficient use of resources.  Differing architectures will require entirely different algorithms/approaches to make the most efficient use of the underlying architecture.  Code that is easily portable between vastly different architectures will perform very poorly on at least some of those architectures.


pYro_65

#24
Jul 24, 2012, 04:31 pm Last Edit: Jul 24, 2012, 04:45 pm by pYro_65 Reason: 1
Quote
While class based portability that you describe is aesthetically pleasing, it can be fairly easily mimicked without the "tremendous" effort you ascribe.


Haha, maybe I used the wrong words. My example is based on a few things.
My example is based on a few things: either explicitly typing out hard-coded 'portability' for a specific few architectures, or, for example, the amount of expanded code the digitalWriteFast macros would emit before compiling.

Quote
But my real point is that portability as a goal is contrary to efficient use of resources


That is the beauty of encapsulation. There is no need for portability unless explicitly needed; however, because a specific feature set is encapsulated, you can easily upgrade your code, and for another platform if needed. By making your client code rely on a single interface for a specific feature, you only need to change the underlying implementation in one place to affect every instance using the interface.

Time is money, and code that looks to the future can save not only programmers from headaches but also the wallet from possible delays in future updates.

Quote
Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture.


True, but most differences are statically marked using #define THIS_PROCESSOR and the like, provided by the architecture's implementation, so there is no reason for things to be inefficient if all the relevant information is available at compile time.

Quote
Code that is easily portably between vastly different architectures will result in code that is very poorly performing on at least some of those architectures.


True, if you compare, say, an Arduino and a Raspberry Pi. But I'm thinking in practical terms of, say, 8-bit AVRs and the upcoming Due's 32-bit SAM-something-something processor. Sure, the 32-bit can do vastly more, and differently; thankfully a lot of major communication protocols and such are not bound to particular architectures, and even those that are will still be able to have an interface that is independent of any system.

Multiple SPIs on the 32-bit could be emulated in software on the 8-bit with no knowledge needed in the client code using the actual SPI handling code. Sure, that takes a performance hit, but that's to be expected when taking features backwards. Extending feature sets, if done correctly, should not impact systems already in place; you still have to explicitly write code for each architecture, but well-formed encapsulation should mean you only have to write it once.


wanderson


Quote
But my real point is that portability as a goal is contrary to efficient use of resources

Time is money and having code that is looking into the future can not only save programmers from headaches but also the wallet from possible delays in future updates.


Embedded design is one area that makes this truism false.  When the real cost of a project is the production cost of the hardware (as opposed to simply software development costs), it makes sense to use the cheapest components possible that will still accomplish the desired goal.  And that is something that requires platform-specific optimizations.  Those can include simply optimizations of code, but also choosing the algorithms that are most efficient for the architecture...  That reduces (or eliminates) the benefit of the OO approach, in my opinion.


Quote
Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture.


True, but most differences are statically marked using #define THIS_PROCESSOR and such provided by the architectures implementation, so there is no reason for things to be inefficient if all the relevant information is available at compile-time.


The #define THIS_PROCESSOR approach only allows for limited optimizations.  Making the best, most efficient code requires not simply that the low level interface code be optimized, but that the whole algorithm be chosen and implemented to complement the hardware's abilities.  Trying to accommodate multiple architectures of vastly differing capabilities only results in mediocre performance...


Quote
Code that is easily portably between vastly different architectures will result in code that is very poorly performing on at least some of those architectures.


True, if you compare, say, an Arduino and a Raspberry Pi. But I'm thinking in practical terms of, say, 8-bit AVRs and the upcoming Due's 32-bit SAM-something-something processor. Sure, the 32-bit can do vastly more, and differently; thankfully a lot of major communication protocols and such are not bound to particular architectures, and even those that are will still be able to have an interface that is independent of any system.

Multiple SPIs on the 32-bit could be emulated in software on the 8-bit with no knowledge needed in the client code using the actual SPI handling code. Sure, that takes a performance hit, but that's to be expected when taking features backwards. Extending feature sets, if done correctly, should not impact systems already in place; you still have to explicitly write code for each architecture, but well-formed encapsulation should mean you only have to write it once.


Again, for me, the issue is one of choosing the appropriate algorithms for the appropriate hardware.  Something that works well on the Due is likely to require an entirely different approach when attempting to use an 8-bit uC to accomplish the same task.  And in many ways the newer, faster, more powerful uCs (like the Due) are eliminating the need for this approach, much as faster general computer hardware eliminated the need for such optimizations in those applications and made the "OO way" so useful.  It is a better tool for software engineering, but it does not tend to produce efficient code...  Not because the syntax requires bloat, but because the design methodology encourages it.  And because, if you throw enough horsepower at a problem, code that is 5, 10, or even 50% less efficient doesn't matter, because in a short time the hardware running that code will be 200% faster...

But embedded design, with its emphasis on least cost, is one area where such optimizations make economic sense...  And besides, it makes me sentimental for the days when I had to get my code running within the 512-2048 bytes that were available on my machine... :)

pYro_65

#26
Jul 24, 2012, 07:19 pm Last Edit: Jul 24, 2012, 07:22 pm by pYro_65 Reason: 1
@Docedison, sorry mate, it seems we hijacked your thread with no intention of returning it any time soon...

Quote
Embedded design is one area that makes this truism false.  When the real cost of a project is the production costs of the hardware (as opposed to simply software development costs) it makes sense to use the cheapest components possible that will still accomplish the desired goal.  And that is something that requires platform specific optimizations.  Those can include simply optimizations of code, but also include the choosing of algoritms that are the most efficient for the architecture...  That reduces (or eliminates) the benefit of the OO approach in my opinion.


@wanderson, I can see the points that you are making, and under most circumstances you are correct. But in my reality this notion is wrong, and I have proven it to some extent ( hehe my opinion only! ). I have been working on an abstraction layer which not only allows me to write platform independent code, but also connection independent code. It works by using an interface to describe how data is moved, then layers provide the actual transport mechanisms ( SPI, I2C, parallel, shiftIn/Out ). My profiling test is an LCD driver, my code has increased the capabilities of the LCD beyond its native support. I can now use my LCD ( st7920 ) in 8-bit read write mode via SPI using shift registers with no extra code on the client side. And it is literally one line of code difference to change its connection method. I have designed the system in a way that an update for the Due will be relatively easy. The LCD library has not one single line of code that talks directly to any hardware. ( I would be happy to show if interested. )

When used under test scenarios ( digital port manipulation, with 74hc595/74hc165, in static mode ), its optimisations exceeded expectations. All layers of the library evaporated and direct port manipulation was emitted directly into the loop function, providing extremely optimised instructions; the dynamic version was slower, with a v-table overhead, but still many times faster than Arduino's base code.

The system is non-restrictive. A large parallel connection, for instance, can control many devices ( devices can be offset or used inversely; an SPI connection could control many I/O lines, making it extremely easy to expand your system ).

My motivation for this rant is based upon the fact that C++ is a superset of C: it is C + more, whereas C is simply C without the C++.
It was kind of created for the new generation of applications that could not be expressed properly/efficiently in C. There is nothing that C++ can't do that C can do better  :P.

wanderson

My comments are not related to C vs C++, because, as you said, you can write C++ using either an OO approach, a functional approach, or low level bit-banging.  But in my opinion, the OO approach only works successfully (in the vast majority of cases) when there are no appreciable resource limitations placed upon it.  There is a reason that most (as in the vast majority of) embedded applications (in terms of number of units deployed) are created in assembly or low level C (which, if C++ limits itself to that, is really just C again)...

As Moore's law continues to apply to embedded processors, this may alter that equation, but I for one am not looking forward to it.  I am really not looking forward to riding in a car being driven by software created using the same techniques and quality control that places like Microsoft apply to Windows (or the Sync entertainment system in my current car)...

florinc

Quote
the dynamic version was slower

How do you create "dynamic version" without a new operator?

pYro_65

#29
Jul 25, 2012, 01:59 am Last Edit: Jul 25, 2012, 02:07 am by pYro_65 Reason: 1
Quote
How do you create "dynamic version" without a new operator?


Dynamic polymorphism has nothing to do with new; new is merely an operator for allocating bytes at runtime. Sorry if my post was misleading in my use of static/dynamic.
