Thoughts?
I think it is just about the end of everything as we know it. Just look at the adverse effects of social media. When AI takes over, all bets are off, especially if bad actors get control.
This is what worries me.
Hey, the good news is: when the end comes and all of the jobs are lost, they can retrain us as sanitation engineers.
Indeed. I believe we'll see a lot more science fiction become reality. Not Skynet, I hope.
I started to write my thoughts on this, but you know what I think? It doesn't really matter what I think, especially in a forum such as this. I wouldn't be saying anything you all don't already know, so I'll leave you all with a comedy sketch I think applies in some way, from the talented British comedy duo, Mitchell and Webb. Enjoy!
Thoughts? My thoughts: I remember in the 1960s when our programming manager told us what he had learned at a conference or somewhere, that a presenter had researched computer programming and determined that about 15 different programs would cover all the world's needs for computer programs.
I also recall when IBM projected that five large computer sites were all the country needed, and all input/output would be by telephone lines.
So, no one can predict the future based on the past. But the future will definitely be different, and we had better be prepared to advance with the technology.
Who was it that claimed that no computer would ever need more than 64K of memory?
No idea. I did hear of somebody who denies claiming that 640K was enough.
Two things come to mind...
- The article includes many fallacies, which makes it difficult to take the author seriously. I'm left with the impression that if what they describe comes to pass, it won't be because their assessment was correct but because they got lucky.
- Writing code is easy. Writing correct, reliable code is not. A good example is the recovery paths necessary for code that interacts with a network. The internet at large is rife with garbage: code that simply ignores errors, code that handles errors that can never occur, code that overruns a buffer. And that is the source of truth for AI.
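To make the point about recovery paths concrete, here's a minimal sketch in Python of the kind of error handling network code actually needs (the `Flaky` callable standing in for a real network request is hypothetical, purely for illustration): transient failures are retried a bounded number of times with backoff, and the final error is surfaced rather than silently swallowed.

```python
import time

def fetch_with_retry(fetch, retries=3, backoff=0.5):
    """Call fetch(), retrying transient failures with exponential backoff.

    Errors are neither ignored nor blindly swallowed: transient ones are
    retried a bounded number of times, and the last one is re-raised so
    callers can see what actually went wrong.
    """
    delay = backoff
    for attempt in range(retries):
        try:
            return fetch()
        except (ConnectionError, TimeoutError):
            if attempt == retries - 1:
                raise  # out of retries: surface the real error
            time.sleep(delay)
            delay *= 2  # back off before the next attempt

# Hypothetical stand-in for a network call: fails twice, then succeeds.
class Flaky:
    def __init__(self):
        self.calls = 0
    def __call__(self):
        self.calls += 1
        if self.calls < 3:
            raise ConnectionError("transient failure")
        return "payload"

flaky = Flaky()
result = fetch_with_retry(flaky, backoff=0.01)
print(result)  # prints "payload" after two retried failures
```

The point isn't this particular retry loop; it's that every one of these decisions (which exceptions are transient, how many retries, what to do when they run out) is exactly the kind of judgment that sloppy example code on the internet gets wrong.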
There are folks who believe that AI will not replace software developers but will make them much more efficient. That's the path I see.
What worries me is the lack of safeguards. Isaac Asimov published the Three Laws of Robotics as early as 1942. Apparently that was too far ahead of its time; nobody seems to remember them.
Those stories show that bad things can happen even with safeguards in place. In present communication about AI there's not much mention of safeguards, which is why I'm worried.
This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.