If you look at the progress artificial intelligence is making and the potential threats it poses, when is the right time for a purely technology-orientated forum to discuss social and political issues?
Just as a thought experiment: by the time artificial intelligence had completely taken control, it would already be too late.
There are now experiments in which AI has cloned itself and installed itself on other computers. There are reports of AIs that lie, hallucinate and deceive the user.
How do you intend to ensure that all AIs have as their highest maxim ‘The most important thing is to improve the well-being of ALL mankind’?
My opinion is that many people do not realise the scope of what AIs could achieve, and that there needs to be all-encompassing education about the opportunities and dangers of AI to create a worldwide awareness of them. That includes every forum, no matter how technical. It also includes every forum that has adopted ‘political neutrality or abstinence from political issues’ as one of its forum rules.
The issue is too important for any forum or institution to exempt itself from it.
I was using ChatGPT to answer something yesterday and knew its information was old. I challenged/re-phrased the question and it responded with an 'Oh yes, that has changed recently' and then gave me the right information.
I think that is lying in the first response.
Well, I come here for the technical part: to avoid the elephant in the room, cancer, and the political direction this country has taken. Bringing societal issues into the discussion brings in many things that lead to banned conversations and banned users. We do like to panic over things we often can't control; who remembers the amount of time and money spent on Y2K, when the computers were going to revolt and no one would get paid?
We can't avoid it, but I don't think it is as advanced as most fear.
From seeing the drivel posted here and other forums it has a long way to go.
Maybe the current upcoming generation will really need it; I was asked by a second-year college student how to use a hot pot to boil water.
To lie you have to know what morals are and you have to know what the truth is. AI just generates stuff in response to your input.
Me too (mostly).
What 'political direction'?
Which country is 'this country'?
Banned here or banned elsewhere?
I've posted this in another AI-related discussion, but I'll repeat it here: if you want to know how AI works and get an idea of what it can and cannot do, I strongly recommend:
Never, probably. My reading of history is that humans rush headlong into the unknown, regardless of consequences.
I am reminded of the quote "In diplomacy there are two kinds of problems: small ones and large ones. The small ones will go away by themselves and the large ones you will not be able to do anything about."
True, but it did know, as exhibited by its follow-up answer. I'm not contesting the point about morals. I know it's not sentient, but shouldn't we either extend the definition of lying for AI or come up with a new word? I may be wrong; it wouldn't be the first time, but the 'feeling' I had was that the AI knew its first answer was terrible, but it gave it anyway.
I have seen AI make mistakes; when I did, I had the feeling it didn't know something, so it just made something up. Has anyone ever got a response of 'I don't know'?
I think that we can't influence this process.
Of course, you can discuss this topic for a long while, but nothing depends on you anyway. Therefore, I don't see the point in wasting time on it.
Maybe, but I stayed in the same chat and just added on to it. In any case, I saw a few hours ago that one of the AIs will shortly be remembering what we say. I bought the book and will start reading it soon, but my brain injury limits me to only a few minutes a day of reading time.
My understanding from the book is that it remembers nothing. What it does is take whatever you feed it, pass it through its neural network and predict the next word (or word fragment); that is ALL it does. Now you will ask how it generates anything if all it does is predict one word. The output from the above is fed back in, it goes through the same process again and predicts the next word, and this is repeated until the predicted 'next' word is actually the end of the output.
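To make that loop concrete, here is a minimal sketch in Python. The lookup-table "model" and the `<end>` marker are purely illustrative stand-ins (a real LLM replaces `predict_next` with a huge neural network), but the feed-the-output-back-in loop is the shape the book describes:

```python
# Minimal sketch of autoregressive generation: the model only ever predicts
# one next token, and the output is fed back in until an end marker appears.
# TOY_MODEL is a made-up lookup table, purely for illustration.

TOY_MODEL = {
    ("the",): "cat",
    ("the", "cat"): "sat",
    ("the", "cat", "sat"): "<end>",
}

def predict_next(tokens):
    """Return the single most likely next token for the current sequence."""
    return TOY_MODEL.get(tuple(tokens), "<end>")

def generate(prompt_tokens, max_tokens=20):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)   # predict exactly one token...
        if nxt == "<end>":           # ...stop when the 'next word' is the end marker
            break
        tokens.append(nxt)           # ...otherwise feed the output back in and repeat
    return tokens

print(generate(["the"]))   # ['the', 'cat', 'sat']
```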
This tells you all you need to know about the quality of the output: when you are writing something (code, text, whatever) you have something in mind you want to achieve, and you use what you know to generate an appropriate output related to your objective. AI does not have any objective in mind; it does not have a mind in which to hold an objective. It is not lying, because to lie requires some kind of objective (for example, not giving any clues about where the body is buried).
I note a similarity, at least in my opinion, between some of the AI stuff I see here and some of the long-running forum topics where, in my opinion at least, the OP is a troll. I don't know if the OP in these cases is using AI to generate responses (I suspect not) or is using strategies similar to AI to generate their never-ending nonsense.
I guess technology has advanced since that book was written, and is not limited to doing whatever that book says.
AFAIK all basic chatbots store the chat thread and add it to the context window, so for at least that thread they "remember" previous prompts. More advanced models do things like "test-time compute", i.e. they perform new reasoning at inference time in addition to relying on the pre-trained data.
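As a rough illustration (the function names here are made up, not any particular vendor's API), that "memory" is just the thread being appended to and re-sent with every new prompt:

```python
# Sketch of how a basic chatbot "remembers" a thread: every prompt and reply
# is appended to a list, and the whole list is re-sent as context each time.

def call_model(context):
    """Stand-in for the real model call; here it just reports what it was given."""
    return f"(reply based on {len(context)} prior messages)"

def chat():
    context = []                                   # the running thread = the context window
    for user_input in ["Hello", "What did I just say?"]:
        context.append({"role": "user", "content": user_input})
        reply = call_model(context)                # the model sees the whole thread each time
        context.append({"role": "assistant", "content": reply})
        print(reply)

chat()
```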
I think that since Asimov wrote the (3, 4, 5) laws for robots, a lot of people who study ethics have discussed this problem. (IIRC that is why the original 3 laws went to 4 and up.) I think their forums are a lot more interesting for this question than a tech forum.
I heard that 'next word' explanation before, but I recently thought I would make a fool of AI by asking it to write a meaningful chunk of code. I was flabbergasted when it did it. The only way I can see that working is if it has that code, intact, somewhere in its database, taken from someone (legally?).
I think the biggest problem we will face in the future (maybe not me; I am 83) is the young programmers of today believing in AI, and when the systems they create cause havoc, they will be completely unable to debug them because they have only ever used AI.
I'm not sure I was crystal clear there, but I hope you get my point.
I don't agree with it though because if you are really interested in writing software then you learn to write software, you don't get someone (or something) else to do it for you. People who write software that doesn't perform well, regardless of how they do it, don't end up being asked to write more.
I hear you, but I am sure you see what is happening on the forum with so many of what I call wannabe programmers trying to cheat. Maybe it's not representative, but I am worried enough that some will go to the 'dark side' and become a very poor programmer.
I would guess that most of the active helpers on the forum are hardcore programmers who always want to write their own code but will sometimes look at someone else's code to jump-start them. I do that rarely and always end up creating my own version.
So, bottom line, point to Perry
However, at IBM, I saw many career programmers who were terrible at their job. I read there is a 100:1 difference between good and bad programmers. I believe that.
I did internal support, so I got to see what everyone was doing. The very worst brought the same problem back to me over several days, sometimes the very next day.
Just to give you some perspective: in a shop of 300, there were two Staff-level programmers, and I was one of them.