A Mini with a Ferrari's engine? lol

john1993:
lol is right. you have to excuse grumpy and fungus. theyre pretty much my favorite members but dont always get the point. in this case unable to determine the difference between imitating an instruction set (emulation) and running the look-and-feel of programs and os for vintage computers.

What's the difference between software on a Pentium/ARM CPU and 'hardware' implemented on a FPGA (or whatever)?

Writing programs for it is not the same thing as using an API. You're 'seeing' the instruction set and hardware that you wanted.

How the circuit board inside the box works really doesn't matter.
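
To make that concrete: at heart, an emulator is nothing but a fetch-decode-execute loop. Here's a toy sketch (the opcodes are invented for illustration, not any real chip's instruction set):

```c
/* Toy fetch-decode-execute loop for a hypothetical 8-bit CPU.
   The opcodes are made up -- this is the shape of an emulator,
   not a model of any real chip. */
#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0x00, OP_LDA = 0x01, OP_ADD = 0x02, OP_STA = 0x03 };

int main(void) {
    uint8_t mem[256] = {
        OP_LDA, 10,     /* A = mem[10]       */
        OP_ADD, 11,     /* A += mem[11]      */
        OP_STA, 12,     /* mem[12] = A       */
        OP_HALT,
        0, 0, 0,
        2, 3            /* data at 10 and 11 */
    };
    uint8_t pc = 0, a = 0;

    for (;;) {                      /* the emulator's whole job: */
        uint8_t op = mem[pc++];     /* fetch                     */
        switch (op) {               /* decode, then execute      */
        case OP_LDA:  a = mem[mem[pc++]];  break;
        case OP_ADD:  a += mem[mem[pc++]]; break;
        case OP_STA:  mem[mem[pc++]] = a;  break;
        case OP_HALT: printf("mem[12] = %d\n", mem[12]); return 0;
        default:      return 1;     /* unknown opcode */
        }
    }
}
```

Run that loop as software on a Pentium, or wire the same logic into an FPGA -- the program sitting in `mem` can't tell the difference.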

cjdelphi:
Not emulated, somehow have the real hardware running the real thing only a lot lot more powerful...

Think windows 3.1 on a quad core machine. Not emulated. Only let's have hardware from the past updated :slight_smile:

I have to agree -- this seems like a contradiction. Not emulated ... same hardware except more powerful. How can it be the same, but improved? If it's improved, it's not the same. Do you mean the same architecture at higher clock speeds? If so, that throws off the timing of all the legacy software.

Back in the days when software ran on computers less powerful than an Uno, there was a lot of timing-dependency. Especially where computers were all the same CPU at the same clock speed. (I.e., not like PCs where yours could be 2.4GHz, and mine is 2.8GHz, but where we're both running 8MHz CPUs with the same part number.) When you know the clock speed, you can assume it takes XX clock cycles to do whatever task, and you can use this to run delay loops and things like that. Changing the clock speed breaks all those assumptions, and sometimes causes the code to stop working, if the delay was there to wait for some event to take place. Other times it just changes the apparent speed.

To see this in action, try running Windows 3.1 or Windows 95 in a VirtualBox emulator, and win a game of Solitaire. That used to be an animation that took a minute or two to complete. Now, it's just a blur of cards.
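
For anyone who didn't live through it, the delay loops in question looked something like this -- a minimal sketch, with the clock rate and cycles-per-iteration numbers assumed purely for illustration:

```c
/* The classic calibrated busy-wait: "I know my CPU runs at exactly
   1MHz and one pass through this loop costs exactly 5 cycles, so
   N iterations = N*5 microseconds." Both constants are assumptions
   for the sake of the example. */
#define CPU_HZ        1000000UL   /* assumed fixed clock            */
#define CYCLES_PER_IT 5UL         /* assumed cost of one loop pass  */

static void delay_ms(unsigned long ms) {
    volatile unsigned long i;     /* volatile so it isn't optimized away */
    unsigned long iterations = (CPU_HZ / 1000 / CYCLES_PER_IT) * ms;
    for (i = 0; i < iterations; i++)
        ;                         /* burn cycles, do nothing else */
}
```

Double the clock and every "one second" pause becomes half a second. That's the whole problem.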

I don't know how literally you meant the analogy of running Win3.1 on a quad-core, but it's kinda doable now, without emulating anything. It will only use one core, because Win3.1 does not support SMP. Also, you'll have to tweak one of the .ini files to change the memory divisor because it doesn't use large enough integers to track the amount of RAM computers have today. Other than that, it works. Again, I used VirtualBox to do so myself (a couple years ago) because... well.. there's not much to do in Win3.1 these days, and I'm not willing to dedicate hardware to it.

john1993:
lol is right. you have to excuse grumpy and fungus. theyre pretty much my favorite members but dont always get the point. in this case unable to determine the difference between imitating an insruction set (emulation) and running look-and-feel of programs and os for vintage computers.

It could be we're all misunderstanding the point. It's not very clear. What is it you or the OP is looking for? A way to run old software? (The actual binaries of old software...) Re-make something that looks like old software, but with modern hardware support? Just rebuilding an old computer in a new case?

Whatever the goal is here, someone's probably already done it. There's a lot of tech prowess behind nostalgia. But first, someone has to spell out what they're trying to do.. 'cause this Ferrari metaphor is totally lost on me.

SirNickity:
Back in the days when software ran on computers less powerful than an Uno, there was a lot of timing-dependency. Especially where computers were all the same CPU at the same clock speed. (I.e., not like PCs where yours could be 2.4GHz, and mine is 2.8GHz, but where we're both running 8MHz CPUs with the same part number.) When you know the clock speed, you can assume it takes XX clock cycles to do whatever task, and you can use this to run delay loops and things like that. Changing the clock speed breaks all those assumptions,

Yep, I've done a lot of that in the old days - from Spectrums to Atari ST and Amiga. Things like changing the color palette in sync with the video display were very common.

The C64 in particular relied heavily on "raster chasing" for split-screen scrolling and sprite multiplexing. I'm always very impressed that software emulators can run those programs at all. It shows the dedication of their programmers.
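
For the curious, raster chasing boils down to watching the video chip's scanline counter and poking a register at exactly the right moment. A rough sketch in cc65-style C, using the real VIC-II register addresses ($D012 raster counter, $D020 border colour) -- real demos did this in cycle-counted assembly, usually from a raster interrupt rather than a polling loop:

```c
/* Rough sketch of "raster chasing" on a C64 -- change the border
   colour partway down the screen by polling the raster counter. */
#include <stdint.h>

#define VIC_RASTER (*(volatile uint8_t *)0xD012) /* current raster line (low 8 bits) */
#define VIC_BORDER (*(volatile uint8_t *)0xD020) /* border colour register           */

void split_screen(void) {
    for (;;) {
        while (VIC_RASTER != 100) ;  /* busy-wait for scanline 100      */
        VIC_BORDER = 0;              /* black from here down            */
        while (VIC_RASTER != 200) ;  /* wait for scanline 200           */
        VIC_BORDER = 6;              /* blue for the rest of the frame  */
    }
}
```

The only reason this works at all is that the CPU and the video chip run in lockstep. Speed the CPU up and the colour split lands on the wrong scanline.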

There's also a lot of games that will simply run too fast. We've all seen what happens to "delay()" on an Arduino if you choose the wrong clock speed in the menu. Many old games would do the same thing.
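
It's the same failure mode as avr-libc's _delay_ms(), whose loop count is computed at compile time from the F_CPU macro -- a minimal sketch:

```c
/* avr-libc computes _delay_ms() at compile time from F_CPU.
   Build this claiming 16MHz but run it on a part clocked at 8MHz,
   and every "500 ms" blink actually takes a full second. */
#ifndef F_CPU
#define F_CPU 16000000UL   /* what we *tell* the compiler the clock is */
#endif

#include <avr/io.h>
#include <util/delay.h>

int main(void) {
    DDRB |= _BV(PB5);          /* Uno's LED pin (digital 13) as output   */
    for (;;) {
        PORTB ^= _BV(PB5);     /* toggle the LED                         */
        _delay_ms(500);        /* only 500 ms if F_CPU matches reality   */
    }
}
```

Old games baked in exactly that kind of assumption, just with no menu to fix it.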

Bottom line: There's a lot of 'nostalgia' code out there that will break if you change the clock speed.

Have you seen this?

It's a rather technical hour-long talk, but (for those not interested in grimy details) the interesting part is the demo they put together. (It's near the beginning.) Just unreal what they've accomplished on such limited hardware. And a perfect case in point here. They're literally executing code in such a way that memory contents are both program and data, and timed to exact cycles of the hardware. It's just sick...

They're literally executing code in such a way that memory contents are both program and data,

Yep, done that back in '75 - about the third program I ever wrote on a micro.
It played a tune and the data it used to play the tune was the code that played the tune. You can't do that on an Arduino because it is Harvard architecture.
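
For the record, on a von Neumann machine that trick is almost boringly easy, because code is just bytes in the same address space as data. A hedged sketch (reading a function through a data pointer like this is technically implementation-defined C, but it works on typical desktop targets):

```c
/* On a von Neumann machine a program can read -- or play -- its own
   instructions as data. Casting a function pointer to a data pointer
   is implementation-defined, but works on common desktop platforms. */
#include <stdio.h>

static int add(int a, int b) { return a + b; }

int main(void) {
    const unsigned char *p = (const unsigned char *)&add;
    int i;
    printf("first bytes of add(): ");
    for (i = 0; i < 8; i++)
        printf("%02x ", p[i]);    /* the 'data' here IS the code */
    printf("\n");
    return 0;
}
```

On an AVR the same pointer would read RAM while the function lives in flash -- which is exactly the Harvard problem.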

cjdelphi:
There's no limits to your closed off imagination.

More imagination in my little finger, sunshine, than you will ever have if you live to be 300. Seen my projects have you? Where are your projects?

Thing is when I say things, I tend to think about what I say first, you should try that sometime. But then there I am again taking you literally, thinking that your words actually have meaning.

So a Mini with a Ferrari's engine would snap the chassis in two, just like your analogy.

You can run RISC OS on a Raspberry Pi; it makes Linux look like the dog it is.

Thing is when I say things, I tend to think about what I say first, you should try that sometime.

Well Grumpy, if you really did think about things first, you probably would not say half the things you throw out there.

RISC OS is a cute 15-year-old OS that has developed into, well, a cute 15-year-old OS. Nice toy!!! To compare it to Linux might hold water - I'm guessing you mean Linux Mint - the pseudo-random bit bucket :).

Or perhaps when you type things you don't tend to let your fingers think about it first. You should try that sometime :roll_eyes:

Now - RISC OS vs LINUX? That's a debate that's likely to last less than a minute. And if you really DO think about things first, you probably won't do much more than an agreeable grunt in response.

Which would I prefer - the old Ferrari or the new RISC OS ? I'll leave that up to your densely packed finger imagination.

But hey Grumpy, don't get grumpy - I'm just winding you up - which is what you have to do with RISC OS before using it :smiley:

cheers.

I'm just winding you up

Yer I know, you couldn't possibly believe that tosh you wrote.

In case you did: the reason Linux sucks is that it is a preemptive multitasking system, which means it is useless for real-time control.

:slight_smile:

Linux sucks is that it is a preemptive multitasking system, which means it is useless for real-time control.

True, but it does not claim to be, and with good use of interrupts it can be "near real time". Or you could use RTLinux, which is real-time Linux.

Grumpy_Mike:
In case you did: the reason Linux sucks is that it is a preemptive multitasking system, which means it is useless for real-time control.

Do you prefer cooperative multitasking? ]:smiley:
Jantje

Grumpy_Mike:
It played a tune and the data it used to play the tune was the code that played the tune. You can't do that on an Arduino because it is Harvard architecture.

"pgm_read_byte()"?

SirNickity:
It's a rather techincal hour-long talk, but (for those not interested in grimey details) the interesting part is the demo they put together. (It's near the beginning.) Just unreal what they've accomplished on such limited hardware. And a perfect case in point here. They're literally executing code in such a way that memory contents are both program and data, and timed to exact cycles of the hardware. It's just sick...

Been there, done most of that.

The "Anarchy!" trick at 19:00 was new to me. The rest of it was fairly standard stuff (new combinations of old tricks).

I was trying to keep it non-technical.

Open an Amiga up, now replace all the hardware to modern day standards... I have no clue what CPUs the Amiga had, but let's say 256k RAM, 10MHz processor; using that new multi-million dollar CPU... execute your old Amiga programs but thousands of times faster, and if the Amiga had taken off and not died out...

What kind of OS could have been written?

Atari ST with a quad core processor :slight_smile:

Mike... you're so far up yourself, you want me to buy you a torch for Christmas so you can see what you're doing?...

somehow have the real hardware running the real thing only a lot lot more powerful...
:
Think windows 3.1 on a quad core machine.

In order for me to be satisfied, you're going to have to define exactly what you mean by "the real hardware."
To me, there is no way that a quad-core x86 system is "the same hardware" as the 386 that ran windows 3.1. I mean, it's got 4 cores, which is pretty fundamentally different! (However, AFAIK, you can run FreeDOS on one of them, if you want.)
There have been a number of implementations of "classic" CPU chips (like the 6502 used in Apple ][, C64, and many other ancient personal computers) in FPGAs, sometimes running at crazy clock rates. ( 10x overclocked 6502 for Agat-7 - YouTube ) Same Hardware? I dunno; you're the one being picky.

A lot of the old CPUs were microcoded, anyway, and I've seen diehard emulator folks (who otherwise wouldn't go near a Windows system) say things like "well, the microcode engine happens to be an x86. And it happens to run Windows to boot itself up. So?"

A lot of old code used "timing loops" for delays. If you run it on merely turbo-charged hardware, it will be useless.

What kind of OS could have been written? [on a supercharged Amiga]

Unix :frowning:
Even in those days, Unix was THE "small computer operating system of choice", and systems tried to be as unix-like as they could, within the limitations of, you know, hard disks being exorbitantly expensive. There were GUI environments for unix, sort-of. They weren't all that wonderful. Then everyone would copy Apple. (Interestingly, the Amiga had very similar hardware to the original Mac (except: color!), as did a bunch of the early "industry oriented" unix systems like Sun and HP (except: add the expensive disks anyway. Somewhere.))

OK... pick a processor that was once used but went the way of the dodo, and let's "pretend" it never died out.

What would have happened to that CPU if it had evolved and not died out? Does such a CPU exist? If yes....

If [insert CPU] had continued to advance (like ARM and Intel and AMD and Apple (e.g., the A5)), if it went from a couple of MHz to begin with and now, 30 years later, that same fundamental processor would still be the core (e.g., I could get away with installing DOS and Windows 3.1 on my AMD AM3)...

So I would love to see a company buy out the rights (or, if they own the rights, give it a makeover), bring that ancient machine up to speed, resurrect it with new technology, and then sell it. Coders from the '80s could not only go back to their old source code, they could improve it and bring it to modern day standards... it would be a huge job on both sides. Now the question is: does such a CPU exist (once popular, now dead), an operating system that could not advance because the CPU it was written for simply no longer exists?

Still confused?...

It's not particularly feasible. Sure, like others have said, you could pretend to do this sort of thing by using FPGAs or emulators to create an instruction set that is compatible with those old CPUs, but running at clock speeds way beyond their original spec.

But, doing this and maintaining the instruction set gets you only two benefits: The ability to run old binaries -- which will fail spectacularly due to the now vastly increased clock speed; and as a crutch for old programmers who don't want to learn newer instruction sets.

Well... the thing is, technology has moved on for a lot of reasons. The new instruction sets aren't just bigger for the sake of being more complicated (although I have heard that criticism of x86/x64). There are lots of new instructions in modern CPUs (that obviously didn't exist in old ones) because it was not possible (or at least economical) to do those things back then. For example, modern CPU instruction sets have advanced math instructions... not just bit manipulation, shifting, add, subtract, multiply, divide... but things like sin, cos, tan... Or, for that matter, handling of floating-point numbers at all. That's a modern convenience that would be hard to let go of, but it was way too complex and expensive to include on the chip (or even via a second chip) until the beginning of the '90s or so.
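
Just to illustrate what "handling floating-point at all" used to mean: here's a hedged sketch of the standard pre-FPU workaround, fixed-point arithmetic (Q8.8 here; the format choice is arbitrary):

```c
/* Before hardware floating point, "fractions" were done in fixed point.
   Q8.8 = a 16-bit int where the low 8 bits are the fractional part.
   (On a real period machine you'd write the constants by hand rather
   than using floating literals, of course.) */
#include <stdint.h>
#include <stdio.h>

typedef int16_t q8_8;                      /* 8 integer bits, 8 fraction bits */
#define TO_Q8_8(x) ((q8_8)((x) * 256))     /* e.g. 1.5 -> 384                */

static q8_8 q_mul(q8_8 a, q8_8 b) {
    return (q8_8)(((int32_t)a * b) >> 8);  /* widen, multiply, renormalize */
}

int main(void) {
    q8_8 a = TO_Q8_8(1.5);                 /* 384                      */
    q8_8 b = TO_Q8_8(2.25);                /* 576                      */
    q8_8 c = q_mul(a, b);                  /* 864 == 3.375 in Q8.8     */
    printf("%f\n", c / 256.0);             /* prints 3.375000          */
    return 0;
}
```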

Which segues into another thing that has changed... number size. I don't know the original Mac Classic CPU specs off the cuff, but I'm pretty sure it couldn't handle 64-bit integers in single registers -- and therefore, single instruction cycles. This would be necessary, of course, to give that hot-rodded classic CPU access to the 8GB of RAM you would want for your modernized OS. If you want multiple cores, there needs to be some way of detecting, initializing, monitoring, and controlling those. More instructions.
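
The arithmetic is easy to check: 8GB is 2^33 bytes, one bit more than a 32-bit register can hold:

```c
/* Why register width matters for memory size: 8 GB doesn't fit in 32 bits. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t ram = 8ULL * 1024 * 1024 * 1024;  /* 8 GB = 2^33 bytes           */
    uint32_t truncated = (uint32_t)ram;        /* what a 32-bit register sees */
    printf("8 GB needs %llu bytes of address space\n",
           (unsigned long long)ram);
    printf("in 32 bits that truncates to %u\n",
           (unsigned)truncated);               /* prints 0 */
    return 0;
}
```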

So that means, by necessity, you would have to expand the instruction set to accommodate the types of hardware the CPU will interact with directly (cores, memory controller, floating-point unit, arithmetic unit, etc...) You've already broken compatibility by this time, but let's ignore that and carry on...

OK, so we're blissfully pretending to run old applications on archaic instruction sets, but at a gazillion times faster clock speed. Let's also pretend that the software does not implement delays by ticking away clock cycles uselessly, but has a smarter, real-time way of timing itself. We're also going to pretend the software uses large enough data sizes to comprehend a 1TB hard-drive in a single partition, or open a file that is greater than 64MB.

Next problem... how do you address your Bluetooth mouse through AmigaOS or Win3.1? Or, even access IPv4 networks? Or USB? That hardware wasn't universally supported back then -- most of it didn't exist at all -- so that means re-writing the kernel to develop extensible frameworks, creating drivers, and support applications to interface all of that to the user or other applications.

Let's assume that can be done. What do you get? Basically, an x64 running Linux, Windows, or Mac.

To remix our old hardware and old applications, we've had to create new hardware, new OSes, and new applications. There's literally nothing left to salvage except maybe the look and feel. OK, thick beige plastic cases and text-based or 2-dimensional graphical interface elements. That can be done.

ARM, PPC, 68000, x86, Z80, and 8051 are all examples of microprocessor architectures where you can go out and buy "modern", usually much faster, versions. x86 is the most dramatically improved, going from 16-bit 5MHz systems to 64-bit 4GHz multicore, while the 68000 went from ~8MHz to ~250MHz and the 8051 went from 1 MIPS (~12MHz, 12 cycles per instruction) to ~100 MIPS (~100MHz, 1 cycle per instruction).
There's a hobbyist sub-segment of re-creating old systems, whether using original hardware, new but mostly compatible hardware, or emulated hardware. But it's a pretty small segment; the truth is that there's not a lot of reason to run the old applications. If they were good and important, they have better and more modern replacements. If they completely died out, that's mostly because they sucked compared to what's come along since; there is only nostalgia.
Also, such re-creations are not cheap by modern standards. Lacking the economies of mass production, it will cost you more to build a neat reproduction than it will to purchase a used x86 system.
And most of the nostalgia is for games. Where speeding things up or increasing the graphics resolution is ... neither useful nor desirable.

Coders from the '80s could not only go back to their old source code, they could improve it and bring it to modern day standards.

No, they couldn't. Or they already have.

Now the question is: does such a CPU exist (once popular, now dead), an operating system that could not advance because the CPU it was written for simply no longer exists?

Well, I'd argue that you could pick any DEC computer. Once the 2nd largest computer manufacturer in the US, it was gradually acquired and dissipated until nothing was left (not quite true; modern Windows is supposed to be based on some of their core technology, and Unix originated on DEC hardware). So, PDP-8, PDP-11, PDP-10, VAX... all viable candidates, often with a large amount of the core software on the net and licensed so anyone can use it. But not many people want to.

Grumpy_Mike:
So a Mini with a Ferrari's engine would snap the chassis in two,

lol! sweet.

but your harvard architecture comment is not totally relevant to avr. thanks to spm and lpm instructions its easy to include data in the code space and possible to run executables from ram or ee. also note that linux is quite capable of realtime operation with proper programming. something windows users cant even dream of.

btw speaking of cross-emulation there are several programs for imitating commodore on avr. ive tried both vic20 and c64 versions which did quite a good job driving a tv. also i have an intersil 6100 cmos chip which runs pdp8 code but never got it running. iirc the requirement for multiple and strange supply voltages stalled the project.

john1993:
but your harvard architecture comment is not totally relevant to avr. thanks to spm and lpm instructions its easy to include data in the code space and possible to run executables from ram or ee. also note that linux is quite capable of realtime operation with proper programming. something windows users cant even dream of.

*sigh* I can only assume this whole paragraph is meant to troll. Nonetheless, I'm going to take the bait...

Re-programming the flash in-code is not quite the same thing as literally executing data, or processing code as data. That's a trick that, by definition, isn't possible on Harvard Architecture because they're mandated into separate memory spaces. Yes, you can shuttle those bits from one side of the fence to the other, but it's a fundamentally different thing than executing the same memory contents as you use to feed the executing code with data. Furthermore, those instructions may not even be available outside the bootloader. I thought they were restricted on AVRs where there is a dedicated loader area. (Haven't had the occasion to use them, so I don't remember for sure.)

Second, while I love Linux, and I tolerate Windows, I have to disagree with this polarizing statement. There do exist phenomenal Windows coders, and horrible Linux coders. The platform (with all its strengths and flaws) has little to do with the capabilities of its developers.

And no, Linux cannot be made into a real-time OS just because someone is a good programmer. An actual RTOS does not execute time-slices of code; it executes instructions deterministically. If you write a loop that is 1000 clock cycles long, it will take 1000 cycles to complete. No more. That is not possible on Linux because the Linux kernel was not designed that way. This is not a flaw, it's a deliberate design choice. RTOSes do not lend themselves to multitasking, multi-user, desktop, or server environments. Changing the Linux kernel into an RTOS makes it a different kernel -- so yeah, that can be done, but no coder, no matter how sup3r 1337, is going to write an application, run it on Ubuntu, and achieve RTOS performance. It can't happen.
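
If anyone wants to see this for themselves, here's a quick (and unscientific) jitter test: ask stock Linux to sleep for exactly 1ms over and over, and record the worst oversleep:

```c
/* Crude scheduling-jitter test for stock Linux: request 1 ms sleeps
   and measure how late the wakeups actually are. Build with
   -std=c99; uses only POSIX clock_gettime()/nanosleep(). */
#include <stdio.h>
#include <time.h>

static long long ns_of(struct timespec t) {
    return (long long)t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void) {
    struct timespec req = { 0, 1000000 };   /* 1 ms */
    long long worst = 0;
    for (int i = 0; i < 1000; i++) {
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &b);
        long long late = (ns_of(b) - ns_of(a)) - 1000000LL;
        if (late > worst) worst = late;     /* track the worst oversleep */
    }
    printf("worst oversleep: %lld ns\n", worst);
    return 0;
}
```

On a loaded desktop the worst case can run to hundreds of microseconds or more -- fine for a server, fatal for cycle-exact hardware control.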

john1993:
linux is quite capable of realtime operation with proper programming. something windows users cant even dream of.

Nope.

At the very least you need a special version of Linux (e.g. RTLinux - Wikipedia) and even then it's only an approximation.

No pre-emptive operating system can ever be deterministic (i.e., it can't allow threads to disable interrupts for arbitrary amounts of time).

john1993:
possible to run executables from ram or ee

Nope.