
Topic: thoughts on Machine learning

amine2

Jun 20, 2018, 03:08 am Last Edit: Jun 20, 2018, 03:11 am by amine2
These days all the kids seem to be drawn to this new trend. It's everywhere: our translation devices, voice recognition, statistical banking... it's all over the place. It seems like our old ways may see a drastic change some time soon. Could this lead to a third industrial revolution, or is it over-hyped?
it's all about the melons.

Robin2

Just because you can does not mean that you should ...

There seems to me to be a great risk that the development of intelligent machines will create (or allow) a huge gap between the very few who have the knowledge and skill to build the machines and everyone else.

People with the ability to earn a PhD are not necessarily well equipped in other aspects of human relations. And machine intelligence will probably be the field for the top 10% of PhD types - nerds through and through.


Unlike earlier technical advances, intelligent machines will extend the "area" that a designer can influence or control. Once a program is written it is easy to make billions of copies of it, and there is no scope for democratic control. It does not require much of a stretch of the imagination to envisage a global dictator or economic monopolist.

I'm quite sure our politicians are not now debating whether the people with the skill to make intelligent machines are the people that can safely be entrusted to develop systems that won't harm mankind.

...R
Two or three hours spent thinking and reading documentation solves most programming problems.

ChrisTenone

I dunno Robin. Among my closest acquaintances, two are overt supporters of our controversial President. One of them is not very educated and a technophobe. The other is a PhD who can fine tune very sophisticated, computerized instrumentation. Education and technological prowess don't always map directly to personal politics.

Robin2

I dunno Robin. Among my closest acquaintances, two are overt supporters of our controversial President. One of them is not very educated and a technophobe. The other is a PhD who can fine tune very sophisticated, computerized instrumentation. Education and technological prowess don't always map directly to personal politics.
First of all, you describe these people as your "closest" acquaintances, which probably means that you have already excluded the weirdos :) so yours is not a random sample. In spite of your picture I don't have you down as a weirdo. :)

In any case, to my way of thinking you are looking at the problem from the wrong end :)

Imagine a group of the sort of people with the brain power to create intelligent machines (and they may not necessarily have PhDs). Then think about the person within that group who will be most influential in a technical sense. Would you be happy to leave ALL the future decisions of the world in his hands? (And it will probably be a male person).

...R

Two or three hours spent thinking and reading documentation solves most programming problems.

ardly

Could this lead to a third industrial revolution?

Yes

Or is it over-hyped?

No

At the moment machine learning tends to be aimed at very specific domains but, as IBM's Project Debater shows, these systems are rapidly becoming more general-purpose;
https://www.youtube.com/watch?v=s_wgf75GwCM

It will not be humans that make AIs but other AIs, and the speed of development could be exponential.

AIs learn by accessing vast amounts of data. Unfortunately, any biases or prejudices in the data affect the decisions an AI takes. AIs can certainly see solutions to problems that humans cannot; however, some of those solutions could be quite extreme. AIs with autonomous capabilities could definitely be dangerous. Something like Asimov's Laws is going to be needed.
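
To make the point about biased data concrete, here is a minimal sketch in Python (entirely made-up data; "skill" and "group" are just placeholder feature names): a model trained on historically prejudiced decisions learns the prejudice along with everything else.

Code:
# Sketch only: synthetic "past decisions" that penalised one group,
# fed to a plain logistic regression trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0, 1, n)        # genuinely relevant feature
group = rng.integers(0, 2, n)      # attribute that *should* be irrelevant

# Historical labels: past decision-makers docked group 1 a fixed margin.
past_ok = (skill - 0.8 * group + rng.normal(0, 0.3, n) > 0).astype(float)

X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):                        # vanilla logistic regression
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - past_ok)) / n

print("learned weights [skill, group, intercept]:", np.round(w, 2))
# The clearly negative weight on 'group' shows the model has absorbed the
# historical prejudice: equally skilled people from group 1 now score lower.

Nothing in that code is malicious; the bias arrives purely through the labels, which is exactly the danger of training on whatever data happens to exist.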

Presently society functions by people working, earning money, purchasing goods and thus funding industry. What happens, though, when very few jobs exist?
"Facts do not cease to exist because they are ignored" - Aldous Huxley

Robin2

AFAIK there are systems within the medical profession for ethical consideration of new techniques and new medical capabilities. I don't plan to comment on whether the systems work well but at least someone has considered that ethical questions can arise.

IMHO the ethical risks are many times greater in the area of Artificial Intelligence and  I have not heard of any equivalent systems for their consideration.


...R
Two or three hours spent thinking and reading documentation solves most programming problems.

Coding Badly

Or is it over-hyped?
As it did 40 years ago and again 20 years ago, it depends on the playground.

At this moment "machine learning" works great in the small.  Want to optimize battery life for a mobile phone?  Need to differentiate between 6x2 and 8x4 Legos?  Have a desire to separate little black things from slightly larger white things so your customers look through the clear bag then think to themselves, "That's good rice"?

At this moment it works well in the medium.  Want to translate the Declaration of Independence to ancient Scottish Gaelic?  Google translate has a high probability of doing a good job.

Anything more complex may or may not work but is extremely fragile.  This is a great example...
https://boingboing.net/2017/08/07/nam-shub-of-enki.html

Basically, a bit of muck on the thing to be recognized can easily cause a false negative.
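
To give a feel for why a bit of muck can do that, here is a toy sketch in Python (a made-up linear "detector" with random weights, not any real vision system): nudging every pixel by a tiny, deliberately chosen amount drags a confident detection below the threshold.

Code:
# Sketch only: stand-in linear detector plus a sign-of-the-weights nudge,
# the same flavour of trick as the published attacks on image classifiers.
import numpy as np

rng = np.random.default_rng(1)
d = 64 * 64                              # pretend this is a 64x64 image

w = rng.normal(0, 1, d)                  # stand-in for a trained detector
x = 2.0 * w / np.linalg.norm(w)          # an input the detector likes

def score(v):
    return float(w @ v)                  # positive score = "object detected"

print("clean score:    ", round(score(x), 1))      # comfortably positive

eps = 0.05                               # tiny per-pixel change (the "muck")
x_adv = x - eps * np.sign(w)             # push every pixel against the detector

print("perturbed score:", round(score(x_adv), 1))  # now negative: not detected
print("largest change to any one pixel:", eps)

Each individual pixel barely moves, but thousands of them pulling the same way swamp the score; that scaling with input size is roughly why large image classifiers are so easy to fool like this.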

The inverse is certainly true for facial recognition...
https://boingboing.net/2018/05/08/cachu-hwch.html


Robin2

As it did 40 years ago and again 20 years ago, it depends on the playground.
I reckon it is the sort of thing that may be "fixed" virtually overnight when someone has a brainwave.

Just think how suddenly Google's inventors changed the search business.


IMHO it is not wise to assume that we can wait until next year to start worrying about the impact of Artificial Intelligence. Political decisions about it should be made BEFORE it happens. Afterwards will be too late.


...R
Two or three hours spent thinking and reading documentation solves most programming problems.

ardly

... Anything more complex may or may not work but is extremely fragile.  This is a great example...
https://boingboing.net/2017/08/07/nam-shub-of-enki.html
Those attacks exploit weaknesses in the AI system, but humans have exactly the same type of problem. Any optical illusion is an example of a weak point in human perception being exploited. Remember the "blue and black or white and gold" dress photo that went viral.

Our hearing is also subject to illusions;
https://www.newscientist.com/article/dn13355-sound-effects-five-great-auditory-illusions/

The fantastic "McGurk Effect" demonstrates our visual sense completely overriding our hearing;
https://www.youtube.com/watch?v=yJ81LLxfHY8

Unlike humans, though, AIs might be trained to recognise and overcome their initial limitations.

I think the pace at which these systems have developed in just the last five years is staggering.
An example is voice recognition. You used to have to read a book to a system to train it to your specific voice, and even after that the success rate was low. Now, despite having an accent, you can talk to systems using obscure words and get almost 100% word recognition on the first go. Speech recognition is not easy; despite what you might think, there are seldom audible gaps between words.

"Facts do not cease to exist because they are ignored" - Aldous Huxley
