Thoughts on Machine Learning

These days all the kids seem to be drawn to this new trend. It's everywhere: our translation devices, voice recognition, statistical banking... it's all over the place. It seems like our old ways may see a drastic change some time soon. Could this lead to a third industrial revolution? Or is it over-hyped?

Just because you can does not mean that you should ...

There seems to me to be a great risk that the development of intelligent machines will create (or allow) a huge gap between the very few who have the knowledge and skill to build the machines and everyone else.

People with the ability to earn a PhD are not necessarily well equipped in other aspects of human relations. And machine intelligence will probably be the field for the top 10% of PhD types - nerds through and through.

Unlike earlier technical advances intelligent machines will extend the "area" that a designer can influence or control. Once a program is written it is easy to make billions of copies of it. And there is no scope for democratic control. It does not require much of a stretch of the imagination to envisage a global dictator, or economic monopolist.

I'm quite sure our politicians are not now debating whether the people with the skill to make intelligent machines are the people that can safely be entrusted to develop systems that won't harm mankind.

...R

I dunno Robin. Among my closest acquaintances, two are overt supporters of our controversial President. One of them is not very educated and a technophobe. The other is a PhD who can fine-tune very sophisticated, computerized instrumentation. Education and technological prowess don't always map directly to personal politics.

ChrisTenone:
I dunno Robin. Among my closest acquaintances, two are overt supporters of our controversial President. One of them is not very educated and a technophobe. The other is a PhD who can fine-tune very sophisticated, computerized instrumentation. Education and technological prowess don't always map directly to personal politics.

First of all, you describe these people as "closest" acquaintances, which probably means that you have already excluded the weirdos :slight_smile: so yours is not a random sample. In spite of your picture, I don't have you down as a weirdo. :slight_smile:

In any case, to my way of thinking you are looking at the problem from the wrong end :slight_smile:

Imagine a group of the sort of people with the brain power to create intelligent machines (and they may not necessarily have PhDs). Then think about the person within that group who will be most influential in a technical sense. Would you be happy to leave ALL the future decisions of the world in his hands? (And it will probably be a male person).

...R

amine2:
These days all the kids seem to be drawn to this new trend. It's everywhere: our translation devices, voice recognition, statistical banking... it's all over the place. It seems like our old ways may see a drastic change some time soon. Could this lead to a third industrial revolution? Or is it over-hyped?

Yes

amine2:
These days all the kids seem to be drawn to this new trend. It's everywhere: our translation devices, voice recognition, statistical banking... it's all over the place. It seems like our old ways may see a drastic change some time soon. Could this lead to a third industrial revolution? Or is it over-hyped?

No

At the moment Machine Learning tends to be aimed at very specific domains but, as IBM's Project Debater shows, systems are rapidly becoming more general purpose.

Eventually it will not be humans that make AIs but other AIs, and the speed of development could be exponential.

AIs learn by accessing vast amounts of data. Unfortunately any biases or prejudices in the data impact the decisions an AI takes. AIs can certainly see solutions to problems that humans cannot, but some of those solutions could be quite extreme. AIs with autonomous capabilities could definitely be dangerous. Something like Asimov's Laws is going to be needed.
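
To make the bias point concrete, here is a minimal sketch, assuming an invented "loan approval" dataset. Everything in it is hypothetical; it just needs numpy and scikit-learn:

```python
# Toy demonstration: a model trained on biased decisions reproduces the bias.
# All data is synthetic and hypothetical; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # a protected attribute (0 or 1)
income = rng.normal(50.0, 10.0, n)     # a genuinely relevant feature

# Historical approvals were prejudiced: group 1 was approved less often
# even at the same income level.
p_approve = 1.0 / (1.0 + np.exp(-(income - 50.0) / 5.0)) - 0.25 * group
approved = rng.random(n) < np.clip(p_approve, 0.0, 1.0)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The learned weight on 'group' comes out strongly negative: the model has
# absorbed the historical prejudice rather than discovering anything new.
print("weight on income:", model.coef_[0][0])
print("weight on group :", model.coef_[0][1])
```

Simply deleting the 'group' column does not necessarily help, because other features can act as proxies for it; that is part of what makes the problem hard to audit.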

Presently society functions by people working, earning money, purchasing goods and thus funding industry. What happens, though, when very few jobs exist?

AFAIK there are systems within the medical profession for ethical consideration of new techniques and new medical capabilities. I don't plan to comment on whether the systems work well but at least someone has considered that ethical questions can arise.

IMHO the ethical risks are many times greater in the area of Artificial Intelligence and I have not heard of any equivalent systems for their consideration.

...R

amine2:
or is it over-hyped ?

As it was 40 years ago, and again 20 years ago, it depends on the playground.

At this moment "machine learning" works great in the small. Want to optimize battery life for a mobile phone? Need to differentiate between 6x2 and 8x4 Legos? Have a desire to separate little black things from slightly larger white things so your customers look through the clear bag then think to themselves, "That's good rice"?

At this moment it works well in the medium. Want to translate the Declaration of Independence to ancient Scottish Gaelic? Google translate has a high probability of doing a good job.

Anything more complex may or may not work but is extremely fragile. This is a great example...

Basically, a bit of muck on the thing to be recognized can easily cause a false-negative.
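
To show what "fragile" means in practice, here is a minimal sketch of the idea behind that kind of attack, using an invented linear recogniser and plain numpy. Real attacks such as FGSM do the equivalent against deep networks, using the gradient:

```python
# Why recognisers are fragile: for a linear scorer, a small nudge in the
# right direction flips the decision. Weights and input are invented.
import numpy as np

w = np.array([0.9, -0.4, 0.7])   # recogniser weights (hypothetical)
b = -0.5
x = np.array([0.6, 0.2, 0.3])    # an input that is correctly recognised

score = w @ x + b
print("original score:", score)          # positive -> "recognised"

# Adversarial step: move just far enough against the weight vector to
# flip the sign of the score. FGSM-style attacks use the gradient this way.
eps = 1.1 * score / (w @ w)
x_adv = x - eps * w

print("size of the change:", np.linalg.norm(x_adv - x))  # tiny "muck"
print("adversarial score :", w @ x_adv + b)              # now negative
```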

The inverse is certainly true for facial recognition...

I reckon it is the sort of thing that may be "fixed" virtually overnight when someone has a brainwave.

Just think how suddenly Google's inventors changed the search business.

IMHO it is not wise to assume that we can wait until next year to start worrying about the impact of Artificial Intelligence. Political decisions about it should be made BEFORE it happens. Afterwards will be too late.

...R

Coding Badly:
... Anything more complex may or may not work but is extremely fragile. This is a great example...
https://boingboing.net/2017/08/07/nam-shub-of-enki.html
Those attacks exploit weaknesses in the AI system, but humans have exactly the same type of problem. Any optical illusion is an example of a weak point in human perception being exploited. Remember the dress photo that went viral: was it blue and black, or white and gold?

Our hearing is also subject to illusions.

The fantastic "McGurk Effect" demonstrates our visual sense completely overriding our hearing.

Unlike humans, though, AIs might be trained to recognise and overcome their initial limitations.

I think the pace at which these systems have developed in just the last five years is staggering.
An example is voice recognition. You used to have to read a book to a system to train it to your specific voice, and even after that the success rate was low. Now, despite having an accent, you can talk to systems using obscure words and get almost 100% word recognition on the first go. Speech recognition is not easy; despite what you might think, there are seldom gaps between spoken words.

That explains why people think I'm shouting at them.

ChrisTenone:
That explains why people think I'm shouting at them.

The English have known for years that shouting is the only way to get foreigners to understand you :slight_smile:

...R

ChrisTenone:
That explains why people think I'm shouting at them.

Robin2:
The English have known for years that shouting is the only way to get foreigners to understand you :slight_smile:

Nothing wrong with that...

I personally am starting to worry. I am a medical practitioner, and lately I have seen a machine that can interpret auscultatory sounds and predict the type of valvulopathy with high accuracy. These machines can also see things in EMG and ECG signals that a doctor with 30 years of experience can't see. Fellow radiologists should be the first to start worrying, though. About 15 years ago, when 3D reconstruction was invented for CT scanners and standard radiology, they were worried that their days were gone, but were comforted by the fact that machines could never see through human subjectivity. This goes beyond everything I have ever seen.

One rather cynical mate of mine once told me that this is the next step of existential evolution. He said that adrenal glands, emotional swings, egoism and humankind's lack of any near-term capacity for optimization are rather disturbing, and that machines should be the ones to inherit what we have, because their build is obviously stronger and better, and they can be given access to their own structure far sooner than humans ever will, which will grant them unlimited capacity for optimization.

Yet the big problem remains, as stated by psychologists: what will drive the machines, what will the incentive be? After all, we humans are enslaved by endorphins and work to optimize our fitness to ensure the passage of our brand of DNA to the next generation, but what will drive a machine? What could be a valid incentive? Even the most advanced of algorithms requires a stimulus, one that could guide the selection process and orient it towards "good thought". We will most certainly have a lot to think about in the upcoming years.

Oh, I get it - it wasn't "the meek shall inherit the Earth", it was "the mechs shall inherit the Earth".

Makes sense.

I have a vague memory of a science fiction story in which Earth scientists received data from outer space containing the plans for a "machine", which they built with great glee, only it turned out to be an intelligent machine that took over and enslaved all the humans.

Maybe someone else remembers it better than I and can supply the name of the story or the author.

...R

amine2:
I personally am starting to worry. I am a medical practitioner, and lately I have seen a machine that can interpret auscultatory sounds and predict the type of valvulopathy with high accuracy. These machines can also see things in EMG and ECG signals that a doctor with 30 years of experience can't see.
.....
.... Yet the big problem remains, as stated by psychologists: what will drive the machines, what will the incentive be?

In the relatively early days of Artificial Intelligence a lot of the hype surrounded "Expert Systems"; for the most part that approach did not seem to deliver results. However, even then I believe AIs were developed that could outperform the average hospital doctor at diagnosing somebody admitted with unexplained stomach pains. Apparently these are hard to diagnose, but you would know better than me :slight_smile:

The new Machine Learning approach seems considerably more powerful, so yes I think there is good reason for people to take notice of what is happening.

Humans can no longer beat computers at Chess and Go, and soon fields such as medical diagnosis will succumb. How will we feel about ourselves if we cannot beat a computer at a debate?

Of course at the moment AIs are not sentient so the issue of "what will drive the machines" does not really arise. Except, that is, in the sense that we are "teaching" them with data that may contain biases we are not aware of; the decisions made by those AIs may then impact our lives, and we may not be able to understand the basis on which those decisions were reached.

In other words the machines will not "take over" but they may control us because we have given them power in certain areas and we don't comprehend how they then act.

ardly:
In other words the machines will not "take over" but they may control us because we have given them power in certain areas and we don't comprehend how they then act.

As evidence, look how the existence of cars (which have had no intelligence whatever for 80 years or so, and still have very little) has changed the way we live - where we live, where we work, where we shop, where we entertain ourselves. When the Model T Ford was the popular car nobody would have foreseen that.

ardly:
Of course at the moment AIs are not sentient so the issue of "what will drive the machines" does not really arise.

Because we value sentience, I wonder if we are at risk of deluding ourselves? Maybe it is not at all necessary for machines.

...R

Taking cars as an example: imagine you created an AI to control the traffic in major cities, and you defined its goal as making traffic flow as efficiently as possible. You would have to be very, very careful about exactly how you define "efficient" and what the AI was allowed to do to achieve it.

There are two reasons for this:

  • In such a large, complex, real-time environment humans would have no way of verifying the decisions made by the AI.
  • The AI might have noticed some very subtle pattern, and to achieve its goal it might actually be manipulating the traffic in a way that humans would consider unacceptable if they knew it was happening, e.g. perhaps making older drivers wait longer at lights, penalising traffic coming in or out of certain areas, or perhaps even targeting individual drivers.

These are just wild, made-up examples, but I hope you get the idea; the sketch below gives a flavour of the problem.
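
For what it's worth, here is a hedged sketch of the first point, with a completely invented traffic model in Python. The "AI" just picks the green-light split that minimises total delay, and the aggregate optimum quietly makes the side-road drivers pay:

```python
# Invented toy model: an optimiser told only to minimise *total* delay
# will starve a minor road to speed up a major one. All numbers made up.

main_rate = 20.0   # cars per minute arriving on the main road
side_rate = 1.0    # cars per minute arriving on the side road

def per_driver_wait(green_share):
    # Crude assumption: a driver's expected wait is inversely
    # proportional to the share of green time their road gets.
    return 1.0 / max(green_share, 1e-6)

def total_delay(g_main):
    g_side = 1.0 - g_main
    return (main_rate * per_driver_wait(g_main)
            + side_rate * per_driver_wait(g_side))

# The "AI" step: choose the split that minimises the stated objective.
best = min((g / 100.0 for g in range(1, 100)), key=total_delay)

print("green share for main road:", best)                       # ~0.82
print("wait per main-road driver:", per_driver_wait(best))      # short
print("wait per side-road driver:", per_driver_wait(1 - best))  # ~4.5x longer
```

Nobody told it to discriminate against the side road; that behaviour simply fell out of an objective that sounded reasonable, and in a city-sized system nobody would be checking this level of detail.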

I personally feel very intimidated by this; it makes me want to learn machine learning, for if you can't beat it, maybe you can satisfy your ego by knowing how it works. Though some say that being intimidated by an AI program is like being intimidated by a calculator; after all, a calculator is much better than us at simple calculation. Others say that we might be able to optimize ourselves before real AI comes to life, though cracking the DNA code and being able to modify it is extremely difficult. It's all about the prefrontal cortex; that's what separates us from the other mammals anyway. In evolutionary science, the limited size of the prefrontal cortex is put down merely to the size of the birth canal: we can't have bigger heads at birth, and the neural stock cannot grow after (I think) the 8th week or so of pregnancy. So our primate form is also limiting our intelligence. Let's just hope that we'll be able to optimize ourselves before our fellow intelligent sentients see the light.

But beyond that, the major problem with AI is that it's potentially as dangerous as a nuclear weapon. If a single programmer develops real AI with a vicious driving force, like seeking more resources or network domination, and then unleashes it on the World Wide Web, we'd all be doomed, because it would spread across servers all around the world like the metastasis of a malignant tumour, and the internet as we know it would then have "cancer". The only means of defence would then be to use other, friendly AI programs to fight back, since we'd have no hope of tracking the monster ourselves. But as in all matters of life, malignancy is always stronger: it strikes first and harder, and it has access to more resources... the mere thought of it intimidates me.

Even if a benign form of Artificial Intelligence is developed, there will be the risk that humans become too dependent on it and suffer enormously if the technology becomes unavailable for any reason.

For example if doctors rely on it for diagnoses they may lose (or deliberately forgo) the ability to do the diagnosis without the aid of the machine.

In a more mundane example, what percentage of car owners under 50 know how to repair their cars, compared with the percentage of their parents who had that capability?

...R