It's not particularly feasible. Sure, as others have said, you could approximate this sort of thing by using FPGAs or emulators to implement an instruction set compatible with those old CPUs, but running at clock speeds far beyond their original spec.
But doing this while maintaining the instruction set gets you only two benefits: the ability to run old binaries, many of which will fail spectacularly at the vastly increased clock speed, and a crutch for old programmers who don't want to learn newer instruction sets.
Well... the thing is, technology has moved on for a lot of reasons. The new instruction sets aren't just bigger for the sake of being more complicated (although I have heard that criticism of x86/x64). Modern CPUs have lots of instructions that old ones didn't, because those things weren't possible (or at least economical) back then. For example, modern instruction sets include advanced math instructions: not just bit manipulation, shifting, add, subtract, multiply, and divide, but things like sin, cos, and tan. Or, for that matter, handling of floating-point numbers at all. That's a modern convenience that would be hard to let go of, but for most of the 80s a floating-point unit was at best an optional, expensive coprocessor chip; it didn't become a standard on-die feature until around the beginning of the 90s (the 486DX and 68040 era).
Which segues into another thing that has changed: number size. The original Mac Classic ran a Motorola 68000, with 32-bit data registers, a 16-bit ALU, and a 24-bit address bus, so it certainly couldn't handle 64-bit integers in single registers -- and therefore, in single instruction cycles. That would be necessary, of course, to give that hot-rodded classic CPU access to the 8GB of RAM you would want for your modernized OS. And if you want multiple cores, there needs to be some way of detecting, initializing, monitoring, and controlling those. More instructions.
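For a sense of the cost, here is a minimal sketch (in C, with a hypothetical `u64_pair` type) of what a narrow-register CPU has to do to add two 64-bit numbers: chain two word-sized adds and propagate the carry by hand, where a 64-bit CPU issues one instruction.

```c
#include <stdint.h>

/* Simulating 64-bit addition with only 32-bit words:
 * add the low halves, detect the carry, fold it into the high halves.
 * A 64-bit CPU does all of this in a single ADD instruction. */
typedef struct { uint32_t lo, hi; } u64_pair;

static u64_pair add64(u64_pair a, u64_pair b) {
    u64_pair r;
    r.lo = a.lo + b.lo;
    /* unsigned overflow wrapped around iff the result is smaller
     * than an operand -- that's the carry into the high word */
    r.hi = a.hi + b.hi + (r.lo < a.lo);
    return r;
}
```

Every 64-bit operation (compare, shift, multiply) needs a similar multi-instruction dance, which is why wide registers and wide ALUs were worth adding to the instruction set.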
So that means, by necessity, you would have to expand the instruction set to accommodate the hardware the CPU interacts with directly (cores, the memory controller, the floating-point unit, the arithmetic unit, and so on). You've already broken compatibility by this point, but let's ignore that and carry on...
OK, so we're blissfully pretending to run old applications on archaic instruction sets, but at a gazillion times the clock speed. Let's also pretend the software doesn't implement delays by uselessly burning clock cycles, but has a smarter, real-time way of timing itself. We're also going to pretend the software uses data sizes large enough to comprehend a 1TB hard drive in a single partition, or to open a file larger than 64MB.
Next problem... how do you address your Bluetooth mouse through AmigaOS or Win3.1? Or even access IPv4 networks? Or USB? That hardware wasn't universally supported back then -- most of it didn't exist at all -- so that means rewriting the kernel to add extensible driver frameworks, writing the drivers themselves, and building the support applications that expose all of it to the user and to other applications.
Let's assume all of that can be done. What do you get? Basically, an x64 box running Linux, Windows, or macOS.
To remix our old hardware and old applications, we've had to create new hardware, new OSes, and new applications. There's literally nothing left to salvage except maybe the look and feel: thick beige plastic cases and text-based or two-dimensional graphical interface elements. That can be done.