
Topic: A mini with a Ferrari's Engine? lol

grantm009

#15
Oct 18, 2013, 12:00 am Last Edit: Oct 18, 2013, 12:05 am by grantm009 Reason: 1
Quote
You can run RISC OS on a Raspberry Pi, it makes Linux look like the dog it is.


Quote
Thing is when I say things, I tend to think about what I say first, you should try that sometime.


Well Grumpy, if you really did think about things first, you probably would not say half the things you throw out there.

RISC OS is a cute 15-year-old OS that has developed into, well, a cute 15-year-old OS. Nice toy!!!  To compare it to Linux might hold water - I'm guessing you mean Linux Mint - the pseudo-random bit bucket  :).

Or perhaps when you type things you don't tend to let your fingers think about it first. You should try that sometime  :smiley-roll:

Now - RISC OS vs LINUX? That's a debate that's likely to last less than a minute. And if you really DO think about things first, you probably won't do much more than give an agreeable grunt in response.

Which would I prefer - the old Ferrari or the new RISC OS ? I'll leave that up to your densely packed finger imagination.

But hey Grumpy, don't get grumpy - I'm just winding you up - which is what you have to do with RISC OS before using it  :D


cheers.

Grumpy_Mike

Quote
I'm just winding you up

Yer I know, you couldn't possibly believe that tosh you wrote.

In case you did, the reason that Linux sucks is that it is a preemptive multitasking system, which means it is useless for real-time control.

grantm009

:)

Quote
Linux sucks is that it is a preemptive multitasking system, which means it is useless for real-time control.


True, but it does not claim to be, and with good use of interrupts it can be "near real time" - or you could use RTLinux, which is real-time Linux.

Jantje


In case you did, the reason that Linux sucks is that it is a preemptive multitasking system, which means it is useless for real-time control.

Do you prefer cooperative multitasking?  ]:D
Jantje
Do not PM me a question unless you are prepared to pay for consultancy.
Nederlandse sectie - http://arduino.cc/forum/index.php/board,77.0.html -

fungus


It played a tune and the data it used to play the tune was the code that played the tune. You can't do that on an Arduino because it is Harvard architecture.


"pgm_read_byte()"?

No, I don't answer questions sent in private messages (but I do accept thank-you notes...)

fungus


It's a rather technical hour-long talk, but (for those not interested in grimy details) the interesting part is the demo they put together.  (It's near the beginning.)  Just unreal what they've accomplished on such limited hardware.  And a perfect case in point here.  They're literally executing code in such a way that memory contents are both program and data, and timed to exact cycles of the hardware.  It's just sick...


Been there, done most of that.

The "Anarchy!" trick at 19:00 was new to me. The rest of it was fairly standard stuff (new combinations of old tricks).

cjdelphi

I was trying to keep it non technical.

Open an Amiga up, now replace all the hardware to modern-day standards... I have no clue what CPUs the Amiga had, but let's say 256k RAM and a 10MHz processor. Using that new multi-million-dollar CPU, execute your old Amiga programs, but thousands of times faster - and imagine the Amiga took off and hadn't died out...

What kind of OS could have been written?

Atari ST with a quad core processor :)


Mike... you're so far up yourself, you want me to buy you a torch for Christmas so you can see what you're doing?...

westfw

Quote
somehow have the real hardware running the real thing only a lot lot more powerful...
   :
Think windows 3.1 on a quad core machine.

In order for me to be satisfied, you're going to have to define exactly what you mean by "the real hardware."
To me, there is no way that a quad-core x86 system is "the same hardware" as the 386 that ran windows 3.1.  I mean, it's got 4 cores, which is pretty fundamentally different!  (However, AFAIK, you can run FreeDOS on one of them, if you want.)
There have been a number of implementations of "classic" CPU chips (like the 6502 used in Apple ][, C64, and many other ancient personal computers) in FPGAs, sometimes running at crazy clock rates. ( http://www.youtube.com/watch?v=rOFKD8A-syU )  Same Hardware?   I dunno; you're the one being picky.

A lot of the old CPUs were microcoded anyway, and I've seen the emulator diehards (who otherwise wouldn't go near a Windows system) say things like "well, the microcode engine happens to be an x86.  And it happens to run Windows to boot itself up.  So?"

A lot of old code used "timing loops" for delays.  If you run it on merely turbo-charged hardware, it will be useless.

Quote
What kind of OS could have been written? [on a supercharged Amiga]

Unix :-(
Even in those days, Unix was THE "small computer operating system of choice", and systems tried to be as Unix-like as they could, within the limitations of, you know, hard disks being exorbitantly expensive.  There were GUI environments for Unix, sort of.  They weren't all that wonderful.  Then everyone would copy Apple.  (Interestingly, the Amiga had very similar hardware to the original Mac - except: Color!  As did a bunch of the early "industry oriented" Unix systems like Sun and HP - except: add the expensive disks anyway.  Somewhere.)

cjdelphi

OK... pick a processor that was once used but went the way of the dodo, and let's "pretend" it never died out.

What would have happened to that CPU if it had evolved and not died out? Does such a CPU exist? If yes...

If [insert CPU] had continued to advance (like ARM and Intel and AMD and Apple (e.g. the A5)) - if it went from a couple of MHz to begin with, and now, 30 years later, that same fundamental processor were still the core (e.g. I could get away with installing DOS and Windows 3.1 on my AMD AM3)...

So I would love to see a company buy out the rights (or, if they already own the rights, give it a makeover), bring that ancient machine up to speed, resurrect it with new technology and then sell it. Coders from the 80's could not only go back to their old source code, they could improve it and bring it up to modern-day standards. It would be a huge job on both sides. Now the question is: does such a CPU exist (once popular, now dead) - an operating system that could not advance because the CPU it was written for simply no longer exists?

Still confused?...


SirNickity

It's not particularly feasible.  Sure, like others have said, you could pretend to do this sort of thing by using FPGAs or emulators to implement an instruction set that is compatible with those old CPUs, but running at clock speeds way beyond their original spec.

But doing this while maintaining the instruction set gets you only two benefits: the ability to run old binaries -- which will fail spectacularly due to the now vastly increased clock speed -- and a crutch for old programmers who don't want to learn newer instruction sets.

Well... the thing is, technology has moved on for a lot of reasons.  The new instruction sets aren't just bigger for the sake of being more complicated (although I have heard that criticism of x86/x64).  There are lots of new instructions that exist in modern CPUs (that obviously didn't in old ones) because it was not possible (or at least economical) to do those things back then.  For example, modern CPU instruction sets have advanced math instructions... not just bit manipulation, shifting, add, subtract, multiply, divide... but things like sin, cos, tan...  Or, for that matter, handling of floating-point numbers at all.  That's a modern convenience that would be hard to let go of, but it was way too complex and expensive to include on the chip (or even via a second chip) until the beginning of the 90s or so.

Which segues into another thing that has changed... number size.  I don't know the original Mac Classic CPU specs off the cuff, but I'm pretty sure it couldn't handle 64-bit integers with single registers -- and therefore, single instruction cycles.  This would be necessary, of course, to give that hot-rodded classic CPU access to the 8GB of RAM you would want for your modernized OS.  If you want multiple cores, there needs to be some way of detecting, initializing, monitoring, and controlling those.  More instructions.

So that means, by necessity, you would have to expand the instruction set to accommodate the types of hardware the CPU will interact with directly (cores, memory controller, floating-point unit, arithmetic unit, etc...)  You've already broken compatibility by this time, but let's ignore that and carry on...

OK, so we're blissfully pretending to run old applications on archaic instruction sets, but at a gazillion times faster clock speed.  Let's also pretend that the software does not implement delays by ticking away clock cycles uselessly, but has a smarter, real-time way of timing itself.  We're also going to pretend the software uses large enough data sizes to comprehend a 1TB hard-drive in a single partition, or open a file that is greater than 64MB.

Next problem... how do you address your Bluetooth mouse through AmigaOS or Win3.1?  Or, even access IPv4 networks?  Or USB?  That hardware wasn't universally supported back then -- most of it didn't exist at all -- so that means re-writing the kernel to develop extensible frameworks, creating drivers, and support applications to interface all of that to the user or other applications.

Let's assume that can be done.  What do you get?  Basically, an x64 running Linux, Windows, or Mac.

To remix our old hardware and old applications, we've had to create new hardware, new OSes, and new applications.  There's literally nothing left to salvage except maybe the look and feel.  OK, thick beige plastic cases and text-based or 2-dimensional graphical interface elements.  That can be done.

westfw

ARM, PPC, 68000, x86, Z80, and 8051 are all examples of microprocessor architectures where you can go out and buy "modern", usually much faster, versions.  x86 is the most dramatically improved, going from 16-bit 5MHz systems to 64-bit 4GHz multicore, while the 68000 went from ~8MHz to ~250MHz and the 8051 went from 1 MIPS (~12MHz, 12 cycles per instruction) to ~100 MIPS (~100MHz, 1 cycle per instruction).
There's a hobbyist sub-segment of re-creating old systems, whether using original hardware, new but mostly compatible hardware, or emulated hardware.  But it's a pretty small segment.  The truth is that there's not a lot of reason to run the old applications; if they were good and important, they have better and more modern replacements.  If they completely died out, that's mostly because they sucked compared to what's come along since; there is only nostalgia.
Also, such re-creations are not cheap by modern standards.   Lacking the economies of mass production, it will cost you more to build a neat reproduction than it will to purchase a used x86 system.
And most of the nostalgia is for games.  Where speeding things up or increasing the graphics resolution is ... neither useful nor desirable.

Quote
coders back from the 80's could not only go back to their old source code, they could improve it and bring it to modern day standards.

No, they couldn't.  Or they already have.

Quote
now the question is, does such a cpu exist (once popular, now dead...) an operating system that could not advance because the CPU it was written for simply no longer exists..

Well, I'd argue that you could pick any DEC computer.  Once the 2nd-largest computer manufacturer in the US, it was gradually acquired and dissipated until nothing was left (not quite true; modern Windows is supposed to be based on some of their core technology, and Unix originated on DEC hardware).  So, PDP-8, PDP-11, PDP-10, VAX...  all viable candidates, often with a large amount of the core software on the net and licensed so anyone can use it.  But not many people want to.

john1993

So a mini with a Ferrari's engine would snap the chassis in two,


lol! sweet.

but your harvard architecture comment is not totally relevant to avr. thanks to spm and lpm instructions its easy to include data in the code space and possible to run executables from ram or ee. also note that linux is quite capable of realtime operation with proper programming. something windows users cant even dream of.

btw speaking of cross-emulation there are several programs for imitating commodore on avr. ive tried both vic20 and c64 versions which did quite good job driving a tv. also i have an intersil 6100 cmos chip which runs pdp8 code but never got it running. iirc requirement for multiple and strange supply voltages stalled the project.

SirNickity

but your harvard architecture comment is not totally relevant to avr. thanks to spm and lpm instructions its easy to include data in the code space and possible to run executables from ram or ee. also note that linux is quite capable of realtime operation with proper programming. something windows users cant even dream of.


*sigh*  I can only assume this whole paragraph is meant to troll.  Nonetheless, I'm going to take the bait...

Re-programming the flash in-code is not quite the same thing as literally executing data, or processing code as data.  That's a trick that, by definition, isn't possible on Harvard Architecture because they're mandated into separate memory spaces.  Yes, you can shuttle those bits from one side of the fence to the other, but it's a fundamentally different thing than executing the same memory contents as you use to feed the executing code with data.  Furthermore, those instructions may not even be available outside the bootloader.  I thought they were restricted on AVRs where there is a dedicated loader area.  (Haven't had the occasion to use them, so I don't remember for sure.)

Second, while I love Linux, and I tolerate Windows, I have to disagree with this polarizing statement.  There do exist phenomenal Windows coders, and horrible Linux coders.  The platform (with all its strengths and flaws) has little to do with the capabilities of its developers.  And no, Linux can not be made to be a real-time OS just because someone is a good programmer.  An actual RTOS does not execute time-slices of code, it executes instructions exactly.  If you write a loop that is 1000 clock cycles long, it will take 1000 cycles to complete.  No more.  That is not possible on Linux because the Linux kernel was not designed that way.  This is not a flaw, it's a deliberate design choice.  RTOSes do not lend themselves to multitasking, or multi-user, desktop and server environments.  Changing the Linux kernel to be a RTOS makes it a different kernel -- so yeah, that can be done, but no coder, no matter how sup3r 1337, is going to write an application, run it on Ubuntu, and achieve RTOS performance.  It can't happen.

fungus


linux is quite capable of realtime operation with proper programming. something windows users cant even dream of.


Nope.

At the very least you need a special version of Linux ( eg. https://en.wikipedia.org/wiki/RTLinux ) and even then it's only an approximation.

No pre-emptive operating system can ever be deterministic (ie. it can't allow threads to disable interrupts for arbitrary amounts of time).


possible to run executables from ram or ee


Nope.

john1993

#29
Oct 18, 2013, 10:39 pm Last Edit: Oct 18, 2013, 11:04 pm by john1993 Reason: 1
thanks for "biting" but there seems to be a little unfamiliarity with avr architecture floating around. first of all lpm allows direct access to flash data with no regard at all to section or any other restrictions. so that makes us halfway von neumann right there.

but wait! theres more!  spm only executes from the boot area (actually RWW, which is not the same) for SOME of the mega but not true of any of the tiny series. even in those cases its only necessary to keep a single spm instruction in RWW which can then be called from anywhere. i make use of this often using my modified "opti with bios". anyway this allows loading code from ram for execution in flash. this is what all bootloaders do. so you can argue about what constitutes harvard but the fact remains code from ee or ram CAN be executed. its mostly a timing issue. even if you want to play semantic games and disallow this last trick the avr still comes out more von neumann than harvard due to lpm.

as far as linux, true its not intended as rtos. however, not sure about ubuntu, but earlier versions did allow entry and exit in and out of protected mode using the correct sequence of opcodes. with the corresponding ability to disable exceptions and interrupts this not only provided direct access to the ports but accompanying rt timing advantages. its true the os is no longer in control for the duration but with proper manipulation of the global descriptor tables things would pick up where they left off. yes, not exactly cooperative but still. and maybe not possible with modern versions, i havent actually tried it lately.

afaik no one has been able to do this in windows since 98. and btw c girls should not even try due to opcode timing issues usually associated with compiled code. at least with gcc from what i can tell. maybe inline asm tricks but i doubt it.

and dont bother asking for links or code examples.  this is a funtime fishing expedition, not watch the smart guy dance. everyone is of course entitled to continue with their own definitions and theories.
