Arduino Forum

Using Arduino => Microcontrollers => Topic started by: Docedison on Jul 21, 2012, 04:54 pm

Title: Rascal
Post by: Docedison on Jul 21, 2012, 04:54 pm
I saw an ad for a Rascal... Here: http://www.elektor.com/news/rascal-combines-linux-and-arduino.2208065.lynkx?utm_source=UK&utm_medium=email&utm_campaign=news&cat=microprocessor
Is this the "New Arduino"? Thanks to Elektor for the copy material from "Weekly #180". I quote:
"Built around a 400 MHz AT91SAM9G20 ARM9 from Atmel, the Rascal is an open source Linux board compatible with Arduino extension cards or shields. Programming the board is easy thanks to a library written in Python from Pytronics that allows easy access to peripherals and shields. The Rascal's firmware comes with a web server that can serve as a programming interface; you can write your applications directly in a web browser connected to the Rascal board.
Another "Clone" or a different ChipKit? Or the Long awaited answer to the Arduino 8 bit limitations?
It "Looks" very interesting... Although I have enough trouble fumbling my way through C & C++ without learning a new OS as well.
There is no price quoted... "We are temporarily out of stock, enter your email address..." So I did. More (I guess) will be revealed later...
"400 MHz AT91SAM9G20 ARM9" certainly doesn't look very low power, running @ 400 Mhz... and I don't think one could pop a new chip in and "burn a bootloader" But perhaps learning the language in a form more easily relevant to C and C++ in the larger world... Might make learning the language easier, faster and more complete. My only real issue with the Arduino is learning the language... trying to sort what is relevant to the Arduino and what isn't is difficult at times.

Doc
Title: Re: Rascal
Post by: AWOL on Jul 21, 2012, 05:07 pm
Quote
My only real issue with the Arduino is learning the language...

Is Python any simpler?
Title: Re: Rascal
Post by: Docedison on Jul 21, 2012, 07:27 pm
No, qualified though, in that I can learn the whole language rather than a subset of the language. I am currently reading "The C++ Primer, 6th Ed." and "A Book on C". I like reading the Primer, but I feel somehow that a lot of what I am reading simply isn't very applicable, as it refers to C11 rather than C99. The material in A Book on C seems to presuppose that one has a basic knowledge of C, and this makes many of the examples... cryptic. Some I can read and understand fully, and many might as well be ideograms for all I can get from them. In most cases the answers to my questions are to be found here, but figuring out how to search for them is frequently a real challenge, as there are many references to many things, many of which are not necessarily accurate or relevant, i.e. for earlier versions of the IDE or simply "first attempts". It gets difficult at times to find good information. Sometimes I think I cry just to hear myself cry...

Doc
Title: Re: Rascal
Post by: majenko on Jul 21, 2012, 07:41 pm
Why not just get a ChipKit and keep the same programming environment?

Yes, it may take a bit of work to port some unsupported libraries, but still less work than porting to an entire new language...
Title: Re: Rascal
Post by: wanderson on Jul 21, 2012, 09:01 pm
And at a price of $175 it seems overpriced when compared to its competitors (Raspberry Pi, BeagleBoard, etc...)

http://store.rascalmicro.com/products/rascal-beta-unit
Title: Re: Rascal
Post by: keeper63 on Jul 21, 2012, 09:07 pm

Quote
My only real issue with the Arduino is learning the language...


Since you note in your profile that you are a retired electronics engineer, I'm not sure if what I am going to write will have much bearing, but it is what I would recommend to anyone getting into programming. It is a basic level of knowledge to keep in mind, especially if your intention is to become a professional software developer: Focus on the structure, not on the language.

What I mean by that is that virtually all of the modern (and not-so-modern) high-level programming languages you are likely to encounter utilize essentially the same concepts, and the same forms. This is mainly because (especially with more recent programming languages) they all have influenced one another. I would honestly surmise that the languages of the past which have mostly influenced languages of today, would be Fortran, C/C++, and Pascal.

All have basically the same control/branching structures. All have more or less the same forms of variable assignment structures. All have functional/procedural structures.
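To make that concrete, here is a tiny, generic C/C++ sketch (the function name and values are invented for illustration) showing the structures that appear in nearly identical form in Python, Java, the BASIC dialects and the rest: assignment, a branch, a loop, and a function.

Code: [Select]
// A deliberately plain example: the same skeleton exists in nearly every language.
#include <stdio.h>

// a function with parameters and a return value
int sumOfEvensBelow(int limit) {
  int total = 0;                      // variable assignment
  for (int i = 0; i < limit; i++) {   // loop
    if (i % 2 == 0) {                 // branch
      total += i;
    }
  }
  return total;
}

int main(void) {
  printf("%d\n", sumOfEvensBelow(10));  // prints 20
  return 0;
}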

Of course, only C++ (discounting any more recent versions of Fortran and/or Pascal) has object-orientation. You can clearly see its influence, though, on other languages like Java, PHP, etc.

Languages like BASIC clearly have a flavor of Fortran about them (when I was a kid, translating Fortran code I found in books at the library over into BASIC for my home computer was something I enjoyed doing). I'm not sure where others like Python, Lua, and Perl fit in, but despite certain differences, they too have similarities with the legacy languages.

Once you understand the structures, the rest is just syntax. Think of it like the "Romance languages" (Spanish, Italian, French): they all have a very similar structure, with the rest being syntax (it isn't quite the same, though, human languages being human and all, and subject to a much wider array of social, political, and other forces which don't exert themselves in the same way on computer languages, not to mention the age difference); if you learn one, you can (in theory) pick up the others much more easily than, say, English or German.

That said, there are programming languages out there (but none you are likely to encounter or use) that will make you go "WTF?" - and some of them are based on interesting constructs meant to make them easier to utilize in certain scenarios or for certain purposes. Once you begin to appreciate what the world has settled on (mostly) for "standard programming structures", looking into these other languages can be an interesting diversion (like LISP, Prolog, Brainf*ck and Whitespace, etc)...
Title: Re: Rascal
Post by: spcomputing on Jul 21, 2012, 09:15 pm
LOL, wanderson beat me to it with nearly identical comments on the RaspPi and the $175 price.

Only thing I could add is if you want that internet control, the BitLash library could go a ways...



Title: Re: Rascal
Post by: wanderson on Jul 21, 2012, 09:17 pm

Quote
I would honestly surmise that the languages of the past which have mostly influenced languages of today, would be Fortran, C/C++, and Pascal.

All have basically the same control/branching structures. All have more or less the same forms of variable assignment structures. All have functional/procedural structures.


I really don't think Fortran has had much influence on any language in the last 40 years.  Indeed I think that more recent incarnations of Fortran have been more on the receiving end of such influence.


Quote
Of course, only C++ (discounting any more recent versions of Fortran and/or Pascal) has object-orientation. You can clearly see its influence, though, on other languages like Java, PHP, etc.


C++ object orientation, like the OO additions to Pascal-like languages and presumably Fortran-like ones, is really an add-on. You can use OO methods, but you are not required to. Which is why Arduino code for the most part doesn't tend to use much of the OO functionality.


Quote
Languages like BASIC clearly have a flavor of Fortran about them (when I was a kid, translating Fortran code I found in books at the library over into BASIC for my home computer was something I enjoyed doing). I'm not sure where others like Python, Lua, and Perl fit in, but despite certain differences, they too have similarities with the legacy languages.


Very early BASICs were indeed Fortran-like, though thankfully with less emphasis on Hollerith coding, but once Gates got his empire going, he changed it in many modern ways. Modern BASICs don't look like, nor have much in common with, traditional Fortran.


Quote
Once you understand the structures, the rest is just syntax. Think of it like the "romance languages" (Spanish, Italian, French): They all have a very similar structure, with the rest being syntax (it isn't quite the same, though, human languages being human and all, and subject to a much wider array of social, political, and other forces which don't exert themselves in the same way on computer languages, not to mention the age difference); if you learn one, you can (in theory) pick up the others much easier than, say - English or German.


All true, but it is not just syntax, it is methodology. There is a world of difference between a traditional structural coding approach (which I think still works very well for embedded programs) and object-oriented techniques, which were originally developed for building very large applications.


Quote
That said, there are languages out there (but none you are likely to encounter or use) that will make you go "WTF?" - and some of them are based on interesting constructs meant to make them easier to utilize in certain scenarios or for certain purposes. Once you begin to appreciate what the world has settled on (mostly) for "standard programming structures", looking into these other languages can be an interesting diversion...


Forth is an excellent example of a different language, LISP is another...  Both make my eyes water...
Title: Re: Rascal
Post by: jraskell on Jul 23, 2012, 09:41 pm

Quote
Which is why Arduino code for the most part doesn't tend to use much of the OO functionality.


What Arduino code would you be referring to?  Most (if not all) of the Arduino library utilizes OO substantially.  Serial, Servo, Wire, SPI, Ethernet, etc are all objects.

If you are referring to user code, that's entirely up to the user.  The vast majority of samples are written in straight C because they are targeted towards non-programmers.

I write the majority of my own projects utilizing C++.
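As a rough illustration of the kind of OO user code jraskell describes (the Blinker class, pin number and timing below are invented for this example, not taken from the thread), a user-defined class sits in a sketch alongside the library's own objects such as Serial:

Code: [Select]
// Minimal Arduino-style sketch: a user-defined class next to the library's objects.
class Blinker {
  public:
    Blinker(uint8_t pin, unsigned long periodMs)
      : _pin(pin), _periodMs(periodMs), _last(0), _state(false) {}

    void begin() { pinMode(_pin, OUTPUT); }

    // Call repeatedly from loop(); toggles the pin on schedule without delay().
    void update() {
      unsigned long now = millis();
      if (now - _last >= _periodMs) {
        _last = now;
        _state = !_state;
        digitalWrite(_pin, _state ? HIGH : LOW);
      }
    }

  private:
    uint8_t _pin;
    unsigned long _periodMs;
    unsigned long _last;
    bool _state;
};

Blinker led(13, 500);   // an object, just like the Serial or Servo instances

void setup() {
  Serial.begin(9600);   // Serial is itself a C++ object (HardwareSerial)
  led.begin();
}

void loop() {
  led.update();
}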
Title: Re: Rascal
Post by: wanderson on Jul 23, 2012, 09:51 pm


Quote
Which is why Arduino code for the most part doesn't tend to use much of the OO functionality.


What Arduino code would you be referring to?  Most (if not all) of the Arduino library utilizes OO substantially.  Serial, Servo, Wire, SPI, Ethernet, etc are all objects.

If you are referring to user code, that's entirely up to the user.  The vast majority of samples are written in straight C because they are targeted towards non-programmers.

I write the majority of my own projects utilizing C++.



The Arduino libraries do use more of the OO nature of C++, but not by much. One specific example is that I/O tends to be function/procedure based rather than stream based. Another is that even when OO techniques are used, the actual implementations are by nature limited. An example that comes to mind is the print functionality, which doesn't cover all of the actual object/type possibilities within the standard environment.

OO methodologies do not seem to work well for devices with such limited resources. By nature, good OO design tends to be less concerned with resource use, which makes sense since OO was developed to build programs that are larger than any uC can hope to run. And efficient use of resources is the overriding concern of embedded design.
Title: Re: Rascal
Post by: jraskell on Jul 24, 2012, 12:30 am
Quote
OO methodologies do not seem to work well for devices with such limited resources.


I think that's a common misconception.  The constraints imposed by these limited resource micros are no less constraining using procedural methodologies than they are with OO methodologies.  The vast majority of OO features have little to no real impact on either code size or code performance.  And the few exceptions that do exist should be taken into consideration even when used on PC based platforms (at the very least, the programmer should be aware of the impact they have).

Encapsulation:  Classes in and of themselves have no impact on binary size or performance.  A handful of functions and global variables will take up just as much RAM and flash as a class encapsulating them.
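A minimal sketch of that point (a hypothetical counter, not code from the thread): with avr-gcc the two versions below should come out essentially the same in flash and RAM; the class only changes how the pieces are organized.

Code: [Select]
// Procedural version: a global plus two free functions.
static unsigned int g_count = 0;
void resetCount()             { g_count = 0; }
void addCount(unsigned int n) { g_count += n; }

// Encapsulated version: the same data and operations wrapped in a class.
class Counter {
  public:
    Counter() : _count(0) {}
    void reset()               { _count = 0; }
    void add(unsigned int n)   { _count += n; }
    unsigned int value() const { return _count; }
  private:
    unsigned int _count;
};

Counter counter;  // one instance: same RAM footprint as the lone global above

void setup() {
  counter.add(5);   // compiles to much the same code as addCount(5)
  addCount(5);
}

void loop() {}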

Inheritance: Excluding virtuals (which is really polymorphism), utilizing inheritance also has no impact on binary size or performance.  It's just good code reuse practice.

Polymorphism: This does impose some overhead in both performance and RAM (as a vtable needs to be created and then used at runtime to determine which virtual method needs to be called).  If used extensively, the RAM overhead can certainly get out of hand, but judicious high level usage can still provide some value-add even on these micros.
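For example (generic, made-up Sensor classes, assuming the standard Arduino AVR core), this is the pattern that costs a vtable plus an indirect call and little else:

Code: [Select]
// Dynamic polymorphism: the one feature here with a real RAM/flash cost.
class Sensor {
  public:
    virtual int read() = 0;   // forces a vtable; each object carries a vtable pointer
};

class AnalogSensor : public Sensor {
  public:
    AnalogSensor(uint8_t pin) : _pin(pin) {}
    virtual int read() { return analogRead(_pin); }
  private:
    uint8_t _pin;
};

class FakeSensor : public Sensor {
  public:
    virtual int read() { return 42; }   // handy stand-in when testing off-target
};

AnalogSensor a0(A0);
FakeSensor fake;

// The call below is resolved through the vtable at runtime.
int sample(Sensor &s) { return s.read(); }

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(sample(a0));
  Serial.println(sample(fake));
}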

Function Overloading:  Another handy feature that does get used extensively by the Print class.  Again, no impact on code size or performance.
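A quick illustration (the show() helpers are invented): the compiler picks the right overload at compile time, exactly as Print does for Serial.print().

Code: [Select]
// Function overloading: resolved entirely at compile time, no runtime cost.
void show(int value)         { Serial.print("int: ");   Serial.println(value); }
void show(float value)       { Serial.print("float: "); Serial.println(value); }
void show(const char *value) { Serial.print("text: ");  Serial.println(value); }

void setup() {
  Serial.begin(9600);
  show(123);        // calls show(int)
  show(3.14f);      // calls show(float)
  show("hello");    // calls show(const char*)
}

void loop() {}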

Operator Overloading:  Same as Function Overloading.

Templates:  The only thing Templates change is the amount of code you have to write, and it reduces that.  Another win there.
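For instance (a hypothetical clampValue() helper): code is generated only for the instantiations actually used.

Code: [Select]
// A template is a recipe: the compiler emits code only for the types you actually use.
template <typename T>
T clampValue(T value, T lo, T hi) {
  if (value < lo) return lo;
  if (value > hi) return hi;
  return value;
}

void setup() {
  Serial.begin(9600);
  Serial.println(clampValue(1023, 0, 255));      // instantiates clampValue<int>
  Serial.println(clampValue(2.5f, 0.0f, 1.0f));  // instantiates clampValue<float>
}

void loop() {}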

Now, once we start getting into comparing the C 'standard' library to the STL, things become less favorable, but that's mitigated by the fact that Arduino does not provide an implementation of the STL (though one or two third party implementations are floating around out there), nor is it a valid reason for not utilizing all the other strengths of the C++ language.
Title: Re: Rascal
Post by: wanderson on Jul 24, 2012, 02:00 am
Code written for reusability will almost always be more bloated than code customized for the specific task. OO only makes sense if the design objective is reusability. The Arduino platform is a perfect example. The little OO that the Arduino does use, which is what makes it so easy and beginner friendly, is precisely why the Arduino way is so much slower and more flash intensive than straight C/C++ for the AVR written for specific tasks.

Compound that with OO's feature of masking the low level implementation, and you have a mechanism that will produce bloated code when used by typical programmers. Now it is certainly possible for a seasoned OO programmer to write code that is as efficient as that written in straight C or even asm, provided they pay attention to the low level implementation of that code. And if they are doing that, there is little (to no) benefit in the OO approach over the previous structured approach. OO only becomes a clear advantage when the resources are for all intents unlimited or when the project size is extremely large. And "large" there is defined on a scale where even the most extensive and powerful embedded application would still qualify as small.
Title: Re: Rascal
Post by: jraskell on Jul 24, 2012, 03:32 am

Quote
Code written for reusability will almost always be more bloated than code customized for the specific task.


That is a rather vague statement, and while I wouldn't argue its technical validity, I will absolutely argue its practical impact. I'm assuming here that you really mean optimized when you say customized, since virtually any code one would write can be considered 'custom'. The reality is, very few projects require optimized code to get the job done. Reusability should ALWAYS be a design objective, with optimizations performed only where deemed necessary to meet the goals of the project.


Quote
Compound that with OO's feature of masking the low level implementation


I wholeheartedly disagree with that statement. Masking of low level implementation has nothing to do with OO vs procedural. If you're using a third party library that isn't open source, then the implementation of that library is masked from you regardless of what language it's written in. If the sources are available, then its implementation isn't masked.


Quote
Now it is certainly possible for a seasoned OO programmer to write code that is as efficient as that written in straight C


I've already shown that the majority of C++ features incur no overhead with regards to code size or performance.  Bottom line is, there is nothing inherently inefficient about C++ compared to C.  That's a fallacy, plain and simple.  Yes, it takes experience to write efficient code, but an inexperienced programmer is just as likely to write inefficient C code as they are C++ code.  And for the record, though not really applicable to the Arduino, the STL is a very efficient library.


Quote
OO only becomes a clear advantage when the resources are for all intents unlimited or when the project size is extremely large.


Again, I have to disagree.  The advantages of OO code become clear with just a few thousand lines of code, which doesn't even come close to pushing the limits of the Arduino unless it makes extensive use of RAM or utilizes some fairly large third party libraries (the latter case really meaning it's much larger than a few thousand lines of code). 

Encapsulation alone is far better than having a few dozen global variables and dozens of functions whose only real organization is in their naming convention.  Hell, it's better than having half a dozen globals and a handful of functions in even a small project.
Title: Re: Rascal
Post by: florinc on Jul 24, 2012, 03:38 am
Quote
Polymorphism: This does impose some overhead in both performance and RAM (as a vtable needs to be created and then used at runtime to determine which virtual method needs to be called).  If used extensively, the RAM overhead can certainly get out of hand, but judicious high level usage can still provide some value-add even on these micros.

Polymorphism without new() is like a fish without a bicycle.
Title: Re: Rascal
Post by: jraskell on Jul 24, 2012, 04:04 am
And what is a fish going to do with a bike?
Title: Re: Rascal
Post by: florinc on Jul 24, 2012, 04:33 am
Quote
And what is a fish going to do with a bike?

Have you used new() on Arduino?
Title: Re: Rascal
Post by: jraskell on Jul 24, 2012, 04:57 am

Quote
And what is a fish going to do with a bike?

Have you used new() on Arduino?

Nope.
Title: Re: Rascal
Post by: pYro_65 on Jul 24, 2012, 05:52 am
C'mon guys, give C++ a break, lol.
jraskell has some great points that I would back up if I wasn't heading to work soon ( great excuse, I know! ), and comparing polymorphism to new is silly. Polymorphism, just like everything else, comes in two flavours... dynamic and static.
Title: Re: Rascal
Post by: Docedison on Jul 24, 2012, 07:22 am
Very interesting dissertation though somewhat off topic.....
We, though, "tend to read what we understand rather than understand what we read..." My statement was about understanding Linux FIRST, in order to learn and use the language for the Rascal (Python? cool, but later)...
I am having enough trouble learning C and C++ for now. I am one of those people who do things in a serial manner... You know... learn to walk before I run the NY marathon.
The language is slowly coming to have meaning for me as I learn it. I have no desire whatsoever to become anything more than reasonably familiar with C and see what happens from there.
I have more years of electronics experience than most of the people I interface with here in the forum have in writing code... (again, I don't wish to be misunderstood; I won it the hard way, by persevering to the best of my ability).
I am NOT trying to offend people here who are in my age bracket. I know who you are and I respect your knowledge more than I can say. There are however others that don't see what I am trying to say.
I WILL do the same with C and C++. Remember I bought my first Arduino Uno in March of this year and everything else is going well. I can write a sketch to a limited degree, I understand enough of the process to make a sketch do as I want and I learn more every day.
I thought the Rascal to be an interesting device for "forum chatter"... highly educational, and very frequently amusing, reading the threads generated by this idea and many more like it...
The Rascal is way out of my league now, and I will not divert my attention elsewhere until I am more comfortable with the Arduino brand of C and C++ and with these devices taken singly and as building blocks (I bought two 328, 5 V Pro Minis for stand-alone, radio-interconnected tasks for "This Year's Grand Project")...

Doc
Next year I have no doubt I will be after another project but the Arduino will do for now...
Title: Re: Rascal
Post by: wanderson on Jul 24, 2012, 01:23 pm


Quote
Compound that with OO's feature of masking the low level implementation


I wholeheartedly disagree with that statement. Masking of low level implementation has nothing to do with OO vs procedural. If you're using a third party library that isn't open source, then the implementation of that library is masked from you regardless of what language it's written in. If the sources are available, then its implementation isn't masked.


Of course simple, traditional functional code can mask the low level implementation. But it is easier to write low level C with an appreciation of what the assembly output will be than it is to write higher level, function-based code, and especially OO code. It is a question of what is practical, not possible...


Quote
I've already shown that the majority of C++ features incur no overhead with regards to code size or performance.  Bottom line is, there is nothing inherently inefficient about C++ compared to C.  That's a fallacy, plain and simple.  Yes, it takes experience to write efficient code, but an inexperienced programmer is just as likely to write inefficient C code as they are C++ code.  And for the record, though not really applicable to the Arduino, the STL is a very efficient library.


You are incorrect when you claim "no overhead", though a "proof" would require comparison of the produced assembly. However, that ignores the crux of my claim. OO code, produced for reusability, will include components that are simply not needed for every application. This is also the problem with function-based libraries of code. A simple example is code that handles multiple data types when only one is needed for a given application. It is not a question of the "theoretical" use of any approach but the practical.

Here is a specific example. I was attempting to test some code on an ATTINY2313. The core code was small enough for both FLASH and RAM conditions; however, by attempting to use the Serial object I was exceeding available resources. The only way to get that code to work involved using low level C code to transmit serial data (of the specific form I needed). At that level OO approaches would be indistinguishable from straight C or really even assembler...
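For readers curious what that kind of bare-metal fallback looks like, here is a minimal sketch of polled USART transmit for an ATtiny2313 using avr-libc register names; it is only illustrative (not wanderson's actual code), and the 8 MHz clock and 9600 baud figures are assumptions.

Code: [Select]
// Minimal polled USART transmit for an ATtiny2313 (avr-gcc / avr-libc).
// Assumes an 8 MHz clock and 9600 baud; adjust F_CPU and the baud rate to suit.
#define F_CPU 8000000UL
#include <avr/io.h>

static void uart_init(void) {
  const uint16_t ubrr = (F_CPU / (16UL * 9600UL)) - 1;  // 51 at 8 MHz
  UBRRH = (uint8_t)(ubrr >> 8);
  UBRRL = (uint8_t)ubrr;
  UCSRB = (1 << TXEN);                   // enable the transmitter only
  UCSRC = (1 << UCSZ1) | (1 << UCSZ0);   // 8 data bits, 1 stop bit, no parity
}

static void uart_putc(char c) {
  while (!(UCSRA & (1 << UDRE))) { }     // wait until the data register is empty
  UDR = c;
}

int main(void) {
  uart_init();
  for (;;) {
    uart_putc('A');
  }
}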


Quote
OO only becomes a clear advantage when the resources are for all intents unlimited or when the project size is extremely large.


Again, I have to disagree.  The advantages of OO code become clear with just a few thousand lines of code, which doesn't even come close to pushing the limits of the Arduino unless it makes extensive use of RAM or utilizes some fairly large third party libraries (the latter case really meaning it's much larger than a few thousand lines of code). 

Encapsulation alone is far better than having a few dozen global variables and dozens of functions whose only real organization is in their naming convention.  Hell, it's better than having half a dozen globals and a handful of functions in even a small project.

Encapsulation is easily obtainable in a straight functional approach.  Indeed the concept was incorporated as a desirable programming practice long before the practical use of OO tools... 
Title: Re: Rascal
Post by: pYro_65 on Jul 24, 2012, 02:21 pm
Just my view on the matter

Quote
Of course simple, traditional functional code can mask the low level implementation. But it is easier to write low level C with an appreciation of what the assembly output will be than it is to write higher level, function-based code, and especially OO code. It is a question of what is practical, not possible...

Quote
You are incorrect when you claim "no overhead", though a "proof" would require comparison of the produced assembly.


I believe that was a fairly accurate analysis. Wrapping a non-member function into a class will not add overhead and 'Interface' does not generate instructions. The C++ compiler exploits the added semantic information associated with public inheritance to provide static typing.

Quote
OO code, produced for reusability


This is one major problem. Object-oriented paradigms, especially inheritance, are not for code re-use. If that is your sole reason for using an OO paradigm, you are using it incorrectly.

Quote
The purpose of inheritance in C++ is to express interface compliance (subtyping), not to get code reuse. In C++, code reuse usually comes via composition rather than via inheritance. In other words, inheritance is mainly a specification technique rather than an implementation technique.
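A small illustration of that distinction (hypothetical Logger/RingBuffer classes, not from any library): inheritance expresses "can be used as", while composition reuses an implementation by containing it.

Code: [Select]
#include <stdint.h>

// Inheritance expresses an interface: "a TemperatureLogger can be used as a Logger".
class Logger {
  public:
    virtual void log(int value) = 0;
};

// Composition reuses an implementation: "a TemperatureLogger is built from a RingBuffer".
class RingBuffer {
  public:
    RingBuffer() : _head(0) {}
    void push(int value) { _data[_head++ % 8] = value; }
  private:
    int _data[8];
    uint8_t _head;
};

class TemperatureLogger : public Logger {   // subtyping: usable wherever a Logger is expected
  public:
    virtual void log(int value) { _history.push(value); }
  private:
    RingBuffer _history;                    // reuse by containment, not by deriving from RingBuffer
};

TemperatureLogger tempLog;   // can be passed around as a Logger&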
Title: Re: Rascal
Post by: wanderson on Jul 24, 2012, 02:48 pm

Quote
Wrapping a non-member function into a class will not add overhead and 'Interface' does not generate instructions. The C++ compiler exploits the added semantic information associated with public inheritance to provide static typing.

OO code, produced for reusability


While this is true in some cases, and possibly even most, it is not always true... and that is the crux of the problem. When programming for an embedded environment, OO features insulate the programmer from the hardware.


Quote
This is one major problem. Object-oriented paradigms, especially inheritance, are not for code re-use. If that is your sole reason for using an OO paradigm, you are using it incorrectly.


You may not be old enough to remember when OO was first being proselytized, but code re-use was the major selling point at that time. Interface compliance or inheritance was primarily sold as a method of improving the ability to re-use code over the methods that were then available. Indeed, the additional design requirements were only able to be justified by the labor savings such re-use would allow for...


Quote
The purpose of inheritance in C++ is to express interface compliance (subtyping), not to get code reuse. In C++, code reuse usually comes via composition rather than via inheritance. In other words, inheritance is mainly a specification technique rather than an implementation technique.


Again, what you're describing is part of the OO design approach... And I believe your interpretation differs from mine due to your modern experience base, while mine is influenced by the religion's earlier sales pitch...
Title: Re: Rascal
Post by: pYro_65 on Jul 24, 2012, 03:24 pm
Quote
When programming for an embedded environment OO features insulate the programmer from the hardware


I'm not sure I fully understand this argument; is it, for example, pointing out the difference between doing explicit port mapping and letting a class do it internally? Because that is not insulation but encapsulation. The greater difference here is that a class can be programmed internally for portability, doing the appropriate port manipulations for the target processor in a generic way.

To gain the same level of portability with linear-style code would involve a tremendous amount of conditional branching when considering a port-I/O-intensive application.
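A rough sketch of the idea (the FastPin class is invented for this example, using the AVR core's standard pin-to-port macros): the client only calls set() and clear(); the architecture-specific port access lives inside the class.

Code: [Select]
// Encapsulating direct port access behind a tiny class (illustrative only).
#include <Arduino.h>

class FastPin {
  public:
    FastPin(uint8_t pin) : _pin(pin) {}

    void begin() { pinMode(_pin, OUTPUT); }

    void set() {
#if defined(__AVR__)
      // AVR: direct port write (read-modify-write, not interrupt-safe; fine for illustration)
      *portOutputRegister(digitalPinToPort(_pin)) |= digitalPinToBitMask(_pin);
#else
      digitalWrite(_pin, HIGH);   // generic fallback for other cores
#endif
    }

    void clear() {
#if defined(__AVR__)
      *portOutputRegister(digitalPinToPort(_pin)) &= ~digitalPinToBitMask(_pin);
#else
      digitalWrite(_pin, LOW);
#endif
    }

  private:
    uint8_t _pin;
};

FastPin led(13);

void setup() { led.begin(); }
void loop()  { led.set(); led.clear(); }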
Title: Re: Rascal
Post by: wanderson on Jul 24, 2012, 03:38 pm

Quote
I'm not sure I fully understand this argument; is it, for example, pointing out the difference between doing explicit port mapping and letting a class do it internally? Because that is not insulation but encapsulation. The greater difference here is that a class can be programmed internally for portability, doing the appropriate port manipulations for the target processor in a generic way.

To gain the same level of portability with linear-style code would involve a tremendous amount of conditional branching when considering a port-I/O-intensive application.


While class based portability that you describe is aesthetically pleasing, it can be fairly easily mimicked without the "tremendous" effort you ascribe.  But my real point is that portability as a goal is contrary to efficient use of resources.  Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture.  Code that is easily portable between vastly different architectures will result in code that is very poorly performing on at least some of those architectures.

Title: Re: Rascal
Post by: pYro_65 on Jul 24, 2012, 04:31 pm
Quote
While class based portability that you describe is aesthetically pleasing, it can be fairly easily mimicked without the "tremendous" effort you ascribe.


Haha, maybe I used the wrong words. My example is based on a few things.
Either explicitly typing out hard-coded 'portability' for a specific few architectures, or, for example, the amount of expanded code the digitalWriteFast macros would emit before compiling.

Quote
But my real point is that portability as a goal is contrary to efficient use of resources


That is the beauty of encapsulation. There is no need for portability unless it is explicitly needed; however, because a specific feature set is encapsulated, you can easily upgrade your code, or port it to another platform if needed. By making your client code rely on a single interface for a specific feature, you only need to change the underlying implementation in one place to update every instance using the interface.

Time is money, and having code that looks to the future can save not only programmers from headaches but also the wallet from possible delays in future updates.

Quote
Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture.


True, but most differences are statically marked using #define THIS_PROCESSOR and such, provided by the architecture's implementation, so there is no reason for things to be inefficient if all the relevant information is available at compile time.
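For example (the __SAM3X8E__ macro for the Due's core is assumed here; treat the specifics as illustrative), the selection happens entirely at compile time, so the non-matching branch never reaches the binary:

Code: [Select]
// Compile-time architecture selection: no runtime test, no dead code in flash.
#if defined(__AVR__)
  // AVR: writing a 1 to the PINx register toggles the output (ATmega328P and friends);
  // PB5 is digital pin 13 on an Uno.
  #define LED_TOGGLE() (PINB = _BV(PB5))
#elif defined(__SAM3X8E__)                      // the Due's SAM3X, as its core defines it
  #define LED_TOGGLE() digitalWrite(13, !digitalRead(13))
#else
  #define LED_TOGGLE() digitalWrite(13, !digitalRead(13))
#endif

void setup() { pinMode(13, OUTPUT); }

void loop() {
  LED_TOGGLE();
  delay(500);
}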

Quote
Code that is easily portable between vastly different architectures will result in code that is very poorly performing on at least some of those architectures.


True, if you compare, say, an Arduino and a Raspberry Pi. But I'm thinking in practical terms of, say, 8-bit AVRs and the upcoming Due's 32-bit SAM-something-something processor. Sure, the 32-bit can do vastly more, and differently; thankfully a lot of major communication protocols and such are not bound to particular architectures, and even those that are will still be able to have an interface that is independent of any system.

Multiple SPIs on the 32-bit could be emulated in software on the 8-bit with no knowledge needed in the client code using the actual SPI handling code. Sure, that is taking a performance hit, but that's to be expected taking features backwards. Extending feature sets, if done correctly, should not impact systems already in place; you still have to explicitly write code for each architecture, but well-formed encapsulation should mean you only have to write it once.

Title: Re: Rascal
Post by: wanderson on Jul 24, 2012, 05:02 pm

Quote
But my real point is that portability as a goal is contrary to efficient use of resources

Time is money, and having code that looks to the future can save not only programmers from headaches but also the wallet from possible delays in future updates.


Embedded design is one area that makes this truism false. When the real cost of a project is the production cost of the hardware (as opposed to simply the software development costs), it makes sense to use the cheapest components possible that will still accomplish the desired goal. And that is something that requires platform-specific optimizations. Those can include simply optimizing the code, but also choosing the algorithms that are the most efficient for the architecture... That reduces (or eliminates) the benefit of the OO approach in my opinion.


Quote
Differing architectures will require entirely different algorithms/approaches to make most efficient use of the underlying architecture.


True, but most differences are statically marked using #define THIS_PROCESSOR and such, provided by the architecture's implementation, so there is no reason for things to be inefficient if all the relevant information is available at compile time.


The #define THIS_PROCESSOR approach only allows for limited optimizations. To make the best, most efficient code requires that not simply the low level interface code be optimized, but also that the whole algorithm be chosen and implemented to complement the hardware's abilities. Trying to accommodate multiple architectures of vastly differing capabilities only results in mediocre performance...


Quote
Code that is easily portable between vastly different architectures will result in code that is very poorly performing on at least some of those architectures.


True, if you compare, say, an Arduino and a Raspberry Pi. But I'm thinking in practical terms of, say, 8-bit AVRs and the upcoming Due's 32-bit SAM-something-something processor. Sure, the 32-bit can do vastly more, and differently; thankfully a lot of major communication protocols and such are not bound to particular architectures, and even those that are will still be able to have an interface that is independent of any system.

Multiple SPIs on the 32-bit could be emulated in software on the 8-bit with no knowledge needed in the client code using the actual SPI handling code. Sure, that is taking a performance hit, but that's to be expected taking features backwards. Extending feature sets, if done correctly, should not impact systems already in place; you still have to explicitly write code for each architecture, but well-formed encapsulation should mean you only have to write it once.


Again, for me, the issue is one of choosing the appropriate algorithms for the appropriate hardware. Something that works well on the Due is likely to require an entirely different approach when attempting to use an 8-bit uC to accomplish the same task. And in many ways the newer, faster, more powerful uCs (like the Due) are eliminating the need for this approach, much like faster general computer hardware eliminated the need for such optimizations in those applications and made the "OO way" so useful. It is a better tool for software engineering, but it does not tend to produce efficient code... Not because the syntax requires bloat, but because the design methodology encourages it. And because, if you throw enough horsepower at a problem, code that is 5, 10, or even 50% less efficient doesn't matter, because in a short time the hardware running that code will be 200% faster...

But embedded design, with its emphasis on least cost, is one area where such optimizations make economic sense...  And besides it makes me sentimental for the days when I had to get my code running within 512-2048 bytes that were available on my machine... :)
Title: Re: Rascal
Post by: pYro_65 on Jul 24, 2012, 07:19 pm
@Docedison, sorry mate, it seems we hijacked your thread with no intention of returning it any time soon...

Quote
Embedded design is one area that makes this truism false. When the real cost of a project is the production cost of the hardware (as opposed to simply the software development costs), it makes sense to use the cheapest components possible that will still accomplish the desired goal. And that is something that requires platform-specific optimizations. Those can include simply optimizing the code, but also choosing the algorithms that are the most efficient for the architecture... That reduces (or eliminates) the benefit of the OO approach in my opinion.


@wanderson, I can see the points that you are making, and under most circumstances you are correct. But in my reality this notion is wrong, and I have proven it to some extent ( hehe, my opinion only! ). I have been working on an abstraction layer which allows me to write not only platform-independent code, but also connection-independent code. It works by using an interface to describe how data is moved; layers then provide the actual transport mechanisms ( SPI, I2C, parallel, shiftIn/Out ). My profiling test is an LCD driver, and my code has increased the capabilities of the LCD beyond its native support. I can now use my LCD ( ST7920 ) in 8-bit read/write mode via SPI using shift registers, with no extra code on the client side. And it is literally one line of code difference to change its connection method. I have designed the system in a way that an update for the Due will be relatively easy. The LCD library has not one single line of code that talks directly to any hardware. ( I would be happy to show it if anyone is interested. )

When used under test scenarios ( digital port manipulation, with 74HC595/74HC165, in static mode ), its optimisations exceeded expectations. All layers of the library evaporated and direct port manipulation was emitted directly into the loop function, providing extremely optimised instructions; the dynamic version was slower, with a v-table overhead, but still many times faster than Arduino's base code.

The system is non-restrictive. A large parallel connection, for instance, can control many devices ( devices can be offset or used inversely; an SPI connection could control many I/O lines, making it extremely easy to expand your system ).

My motivation for this rant is based upon the fact that C++ is a superset of C; it is C + more, whereas C is simply C without the C++.
It was kind of created for the new generation of applications that could not be expressed properly/efficiently in C. There is nothing that C++ can't do that C can do better  :P.
Title: Re: Rascal
Post by: wanderson on Jul 24, 2012, 07:33 pm
My comments are not related to C vs C++, because, like you said, you can write C++ using either an OO approach, a functional approach, or low level bit-banging. But in my opinion the OO approach only works successfully (in the vast majority of cases) when there are no appreciable resource limitations placed upon it. There is a reason that most (as in the vast majority) of embedded applications (in terms of number of units deployed) are created in assembly or low level C (which, if C++ limits itself to that, is really just C again)...

As Moore's law continues to apply to embedded processors, this may alter that equation, but I for one am not looking forward to it.  I am really not looking forward to riding in a car being driven by software created using the same techniques and quality control that places like Microsoft apply to Windows (or the Sync entertainment system in my current car)...
Title: Re: Rascal
Post by: florinc on Jul 24, 2012, 08:44 pm
Quote
the dynamic version was slower

How do you create "dynamic version" without a new operator?
Title: Re: Rascal
Post by: pYro_65 on Jul 25, 2012, 01:59 am
Quote
How do you create "dynamic version" without a new operator?


Dynamic polymorphism has nothing to do with new; new is merely a function for allocating bytes at runtime. Sorry if my post was misleading in my use of static/dynamic.
Title: Re: Rascal
Post by: florinc on Jul 25, 2012, 03:18 am
Quote
Dynamic polymorphism has nothing to do with new

My C++ may be rusty, but I don't remember any other method (than new) to create objects dynamically (at runtime).
Title: Re: Rascal
Post by: pYro_65 on Jul 25, 2012, 05:32 am
Quote
My C++ may be rusty, but I don't remember any other method (than new) to create objects dynamically (at runtime).


You are correct in regards to dynamic allocation of memory ( stack space can be used in a pseudo-dynamic fashion ). However dynamic polymorphism is not related to allocation of memory.

Dynamic polymorphism is a way of using types to provide a layer of commonality between one or many otherwise incompatible types. The dynamic part of this is the need for a v-table to provide the relationship between interface and implementation. Static polymorphism requires each layer of abstraction to be statically typed to its derived implementation. The difference is the loss of the v-table overhead, and also of the dynamic ability to reference incomplete types.

Each paradigm has its own pros and cons.
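A compact sketch of the two flavours (class names invented for illustration): the dynamic version dispatches through a v-table at runtime, while the static (CRTP) version resolves everything at compile time and needs no v-table at all.

Code: [Select]
#include <stdint.h>

// Dynamic polymorphism: dispatch through a v-table at runtime.
struct OutputDyn {
  virtual void write(uint8_t b) = 0;
};

struct SpiOutputDyn : OutputDyn {
  virtual void write(uint8_t b) { /* talk to SPI hardware here */ (void)b; }
};

void sendDyn(OutputDyn &out) { out.write(0x55); }       // indirect call via the v-table

// Static polymorphism (CRTP): the derived type is known at compile time,
// so the call below can be inlined and no v-table exists at all.
template <typename Derived>
struct OutputStat {
  void write(uint8_t b) { static_cast<Derived *>(this)->writeImpl(b); }
};

struct SpiOutputStat : OutputStat<SpiOutputStat> {
  void writeImpl(uint8_t b) { /* talk to SPI hardware here */ (void)b; }
};

template <typename T>
void sendStat(OutputStat<T> &out) { out.write(0x55); }  // direct, inlinable call

SpiOutputDyn d;
SpiOutputStat s;
void demo() { sendDyn(d); sendStat(s); }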
Title: Re: Rascal
Post by: florinc on Jul 25, 2012, 06:09 pm
OK, thanks.
So my question to you then (maybe I should start a new thread) is how do you implement dynamic polymorphism without new?
(I am interested in practical solutions. I don't want to re-implement new() using malloc, create vtables etc., mechanisms that are already provided by the C++ compiler and linker).
Title: Re: Rascal
Post by: pYro_65 on Jul 26, 2012, 04:38 pm
I will try my best to give a good explanation.

Using objects can mean a few things, but to start,  I will clarify that we are talking about using new to create dynamic instances of a polymorphic class rather than polymorphic classes using new to create storage. In my opinion new() is not something I will use on a micro-controller. I have found that due to their limited resources, designs can benefit from a static-as-possible approach ( memory allocation ).

After a while of failing to devise a nice, concise example, I will instead just try to explain my current project, as polymorphism is something that is specific to a cause; it can't just be applied to something as an optimisation endeavour.

I have a pseudo-HAL system that abstracts the Arduino hardware ( transport systems ) at a very low level while providing hardware-independent access to it. My main goal is to have an optimal Arduino implementation that is portable between 8 and 32 bit platforms ( Atmel 8-bit, Due & STM32 ).

I start off at the very bottom with a transport layer. It defines an interface for updating and accessing hardware data ( no specific hardware ).
Next are input and output classes that derive from the transport class and provide an interface for how to read or write data to and from hardware ( also, no specific hardware ).
At the end of the transport chain there is the actual hardware. I have currently implemented classes for ShiftIn & ShiftOut, Parallel ( read/write system ), and SPI ( read/write also ), and I'm also part way through SPIShift ( read/write ). These classes talk to the hardware and provide the data in a way that is compatible with the transport layer interface.

The transport system has been implemented in both dynamic and static versions.

The hardware specific transport classes access the actual physical hardware through a single interface that is hardcoded to the platform. It is the only part of the system that has to be reproduced for different platforms.

This may seem like a lot of work ( it was ) for not much advantage apart from being able to access different hardware under a common interface, especially when accessing the hardware directly through its class. For example, using a 74HC595 and the ShiftOut class, the first advantage is the fact that it accesses the hardware directly, so it is much faster than using the Arduino standard shiftOut; even with the overhead of the virtual function calls it is still many times faster. Now that's using the dynamic method. Using static polymorphism, the hardware classes will emit instructions equivalent to explicitly writing out the direct port mapping yourself. The transport layer completely disappears, and your code will work on all Arduinos.

The whole reason behind the transport system isn't just to allow easy access to the hardware in a portable and fast way, but for an even higher purpose I will rant on about now.


One project I'm working on involves an LCD ( ST7920 ) and I want to use it with a big system. For now it looks to fit on an UNO, but maybe I will end up needing the RAM and outputs of a MEGA, or even the non-existent Due. The point I want to make is that each of these systems has a potentially different way of connecting the LCD more efficiently.

So rather than create a different implementation for each different connection method, the LCD driver talks to the transport layer interface instead. When creating the LCD instance you specify the transport class it is going to use. So to change the connection method from parallel on a Mega to shift in/out on an UNO you only need to modify one line of code.

To show an example, here is an excerpt from the library LCD12864; it is the code it uses to talk to the LCD. Below that is the same piece of code but using my library; as you can see, it has nothing to do with any particular connection method.

LCD12864:
Code: [Select]
void LCD12864::setPins(uint8_t tRS, uint8_t tRW, uint8_t tD7, uint8_t tD6, uint8_t tD5, uint8_t tD4, uint8_t tD3, uint8_t tD2, uint8_t tD1, uint8_t tD0) {
  digitalWrite(EN, 1);
  delayns();

  digitalWrite(RS, tRS);
  digitalWrite(RW, tRW);
  digitalWrite(D7, tD7);
  digitalWrite(D6, tD6);
  digitalWrite(D5, tD5);
  digitalWrite(D4, tD4);
  digitalWrite(D3, tD3);
  digitalWrite(D2, tD2);
  digitalWrite(D1, tD1);
  digitalWrite(D0, tD0);
  delayns();

  digitalWrite(EN, 0);
  delayns();
}


Mine ( comments are part of code not for my post here ):
Code: [Select]
  BASE_TEMPLATE void BASE_TYPE::_ByteOut( byte b_Data )
      {
        FLAG_ON( ST7920_EN );  //this->t_Output.Write( true, _WriteOffset + ST7920_EN );
        this->t_Output[ ST7920_B0 ] = b_Data;
        this->t_Output.Update();
        FLAG_OFF( ST7920_EN );  //Significant speed increase here over: this->t_Output.WriteBit( false, _WriteOffset + ST7920_EN, true );
        this->t_Output.Update();
        return;
      }


Just like my previous example, a test of most LCD features using static polymorphism showed that not only the transport layer but also the LCD class almost entirely dissolved. The only major indirection from linear program flow was the shift-out code, as it is used with every LCD command; all the unique LCD stuff was inlined directly into the calling code.

Now after all this you may ask yourself "what has this got to do with new()?" Well, the answer is nothing. You can use these components dynamically if you want, but as they describe a static system, you gain no benefit over initialising a global variable. All the features I rambled on about are only a fraction of the library's actual capabilities; it employs both dynamic and static polymorphism. And this is where I think you are mixing things up. Memory allocators like new() work with data, whereas polymorphism is entirely about types; the explanation I was trying to make at the start will now make sense.

At first glance you may notice from my explanations that the static version is faster and more concise. This is great but not always the best option.

Let's take a scenario made simple by the library: multiple LCDs connected to a single Arduino ( doesn't matter which board or connection type ). There are a couple of different modes that may be common.

1. Each LCD displaying its own data, or
2. Both LCD's displaying the same thing.

Either method can be done using static polymorphism. However, only the first is more efficient than the dynamic version in terms of scalability. As the static version removes the v-table it produces more direct code, but most of that code will be replicated between the LCDs doing the same thing.

The dynamic version comes into play as it would allow both LCDs to be referenced in an array of the base transport type. So each LCD could have a different connection method, and still have the hardware code in a single location, as the v-table maps the transport interface to the hardware implementation.
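To illustrate the shape of that (purely hypothetical class names, not pYro_65's actual library), the dynamic version lets two displays on different transports sit behind one base type:

Code: [Select]
#include <stdint.h>

// The driver only knows the abstract transport; the wiring differences live behind the v-table.
struct Transport {
  virtual void write(uint8_t b) = 0;
};

struct ParallelTransport : Transport {
  virtual void write(uint8_t b) { /* drive the data pins directly */ (void)b; }
};

struct ShiftTransport : Transport {
  virtual void write(uint8_t b) { /* clock the byte out through a 74HC595 */ (void)b; }
};

struct Lcd {
  Lcd(Transport &t) : _t(t) {}
  void command(uint8_t c) { _t.write(c); }   // same driver code for every connection method
  Transport &_t;
};

ParallelTransport par;
ShiftTransport shifter;
Lcd lcdA(par);
Lcd lcdB(shifter);

Lcd *lcds[2] = { &lcdA, &lcdB };   // both displays handled through one array

void clearAll() {
  for (uint8_t i = 0; i < 2; i++) {
    lcds[i]->command(0x01);        // e.g. a "clear display" command
  }
}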

Hope this helps.

Title: Re: Rascal
Post by: florinc on Jul 27, 2012, 04:26 am
Thanks. I need some time to digest what you wrote.
My first thought was about (source) code readability and accessibility. Many years ago I learned to write code for others rather than for myself. You may understand your code, but if the person who takes over doesn't, then that code becomes unmaintainable (or very expensive to maintain). Good job security, though :)

BTW, you should gather these in your blog. Many would be interested. Class diagram(s) (or any other kind of diagram) would also definitely help.
Title: Re: Rascal
Post by: pYro_65 on Jul 27, 2012, 05:51 am
The readability ( once tidied up and presented well with comments ) should not suffer; I have created it in a way which will hopefully encourage people to extend its functionality, not just use it to improve external library development. I would say that most of the code is just pure interface, just discrete objects encapsulating a single piece of functionality. I also plan to hide the interface behind some #defines to allow a linear-looking usage for those not keen on working with templates. I have also devised a system for some debugging and error handling.

I was planning on pre-releasing it shortly after the release of the Due. I haven't got a blog, but I can have a look at starting one. Unfortunately my design data is a scrapbook. I'm planning on finding a Visio-style program for Linux to do up my diagrams.

It is encouraging to read your comments, so in the next couple of days I'll try to get a detailed description with some examples into the 'software development' forum to see what others think. I have noticed a lot of people are slightly against C++ for different reasons, so I have been holding off a release until I had a fully functional proof of concept. I've been working on it randomly for five months now, so it's probably time to get some feedback anyway.
Title: Re: Rascal
Post by: wanderson on Jul 27, 2012, 01:54 pm

I'm planning on finding a Visio-style program for Linux to do up my diagrams.


Take a look at Dia... I find it very similar to Visio for functionality like that.
Title: Re: Rascal
Post by: pYro_65 on Jul 28, 2012, 10:47 am
Cheers, I've looked at its home page and it seems like it'll suit my needs fine.