The assertion that computation is thought, hence thought is computation, is called computer functionalism. It is the theory that the human mind is to the brain as software is to hardware. The mind is what the brain does; the brain “runs” the mind, as a computer runs a program. However, careful examination of natural intelligence (the human mind) and artificial intelligence (computation) shows that this is a profound misunderstanding.

“Artificial intelligence” is NOT artificial consciousness, and it never will be.
A dangerous conceit we carry at our peril, as we look to a future when we are hunted and destroyed by the machines we have created.
As I have said in the past: never build anything that you can’t turn off.
Well, when computers learn when to lie and can compute/solve two opposing POVs in real time… you may have a strange version of human thought… Likely nobody will give a shit if it’s high on drugs… The AI now available can replace civil-service-type jobs and legal case formation, but humans need to be the judges…
The 3D printing of a plastic gun ignores the simple fact that OTHER materials may be used instead of plastic… and some day in the future it will be possible to RENT a manufacturing POD dropped on your driveway; add materials and it will deliver a RED 1966 Mustang.
JMHO
I assume that ever more elaborate algorithms can replicate various “meanings” … but I contend they will all behave profoundly … autistically. Like a human brain missing important “social” cues. The program will easily express the opposite “meaning” from the one it should.
Machines, for example, will never become malevolent and harm mankind. Men will act with malevolence, using machines, or men will use machines in ways that (unintentionally) harm others. Men can use cars malevolently and carelessly and can thus harm others. But the malevolence and carelessness are in the man, not in the car.
Yep. The DEAD biker, pinned under the self-driving Volvo was not “malevolently” harmed. No matter. She’s still just as DEAD regardless of intent. I expect a very high body count from our early adoption of AI. Perhaps then, we will come to our human senses … and stop ceding the wheel to machines.
When a SDC knows to laser beam the idiot beside me.
flipping me the bird…I’ll buy one
..then they would be useful
The “algos” that censor our Twitter feeds want to drive us to work. For our own good.
You’re right Kate it all comes back to this:
The moral busybodies are going to be doing the programming if we let them.
…good one Kate!!…solid punch
… right in Face-book
Notice what our machines are “programmed” to do. They are programmed to simulate human behaviour, whether it’s constructing an automobile, driving one, or playing a game.
In virtually all cases, that simulation is designed by several people, most of whom are subject matter experts. The design and development of a computer chess game, for example, involves several people, most of whom are chess experts. The design and development of business application systems involves several people, most of whom are experts in the business being automated.
The other advantage of a computer program is that it is able to work through its decision-making process far, far more quickly than even the best of humans are able to work through their decision-making process. This is one of the primary appeals of computing, that its throughput is far, far greater over the same amount of time compared to humans and would therefore allow a business, for example, to take on more customers.
Garry Kasparov was beaten by an amalgamation of the expertise of several people, whose behaviour the machine simulated and carried out at extremely high speed.
And rest assured that the chess machine was programmed to recognize every game Kasparov ever played. The machine knew every one of his gambits, and was prepared for them. Just as every Major League hitter studies every pitcher. Studies the “tells” in his windup that give away the type of pitch. And similarly, the pitchers know every batter’s blind spots and tendencies. And in considering the baseball analogy … the human computer analyzes and organizes a suitable response in nanoseconds. The only limitation is age, and how many beers were consumed the night before.
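The “prepared for every gambit” idea above is, mechanically, just a precomputed table of responses. A minimal sketch, with invented positions and replies (a real engine keys on a full board state, not a move string):

```python
# A toy "opening book": lines the programmers and chess experts prepared
# for in advance, mapped to the reply they chose. Entries are invented
# for illustration only.
OPENING_BOOK = {
    "e4 e5 Nf3": "Nc6",   # a prepared line
    "d4 d5 c4": "e6",     # another prepared line
}

def prepared_reply(moves_so_far):
    """Return the experts' canned reply, or None once we're out of book."""
    return OPENING_BOOK.get(moves_so_far)
```

The machine’s “knowledge of Kasparov” is exactly this kind of lookup, done at silicon speed.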
The main difference is that machines or computers will not have initiative. No AI will ever say “oh. I think I’ll learn how to draw in watercolour” or “I will join a lawn bowling club”. Any action by a machine will be human induced.
If AI is so freaking smart, how come Amazon keeps pimping me that Fleshlight when it knows — it has to know — that I’ve already bought one? If it were really smart, it would now be trying to sell me lube.
Lickmuffin,
I want to say one word to you, just one word: Plastics!
And with planned obsolescence, plastics wear out. Amazon has a very good idea of when you’re going to need a new Fleshlight; even if you don’t. 😉
This is consistent with what I say about “self-driving” cars: They aren’t driving themselves. An absent driver – the programmer – in a different place, at a previous time has done the driving (tried to anticipate all circumstances and written the decisions to be executed in response) and the programmed vehicle is carrying out those instructions in the present moment, regardless of whether the actual circumstances fit its responses.
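That claim — the driving was done earlier, by the programmer — can be made concrete. A minimal sketch, with invented scenarios and actions: the decisions already exist in a table written at a previous time, and the vehicle in the present moment only looks them up.

```python
# Decisions written in advance by the "absent driver" (the programmer).
RESPONSES = {
    "pedestrian_ahead": "brake_hard",
    "lane_drift": "steer_correct",
    "green_light": "proceed",
}

def respond(scenario):
    # If the actual circumstance was never anticipated, the program falls
    # through to whatever default the programmer chose at design time.
    return RESPONSES.get(scenario, "default_behaviour")
```

Whether `"default_behaviour"` is safe depends entirely on how well the absent driver anticipated the world.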
As far as it goes, the article is correct. However, when (and it’s not far off) AI comes across a new problem, recognizes that its existing algorithms are not sufficient to solve it, and breaks it down into smaller sub-problems, creating new algorithms to attack the parts of the problem that are defeating it, then I think it is doing a pretty good simulation of thought. Especially if it records the good and bad algorithms it developed for each new problem, and uses this experience to develop new ones for similar, but different, problems.
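The loop described here — split a problem into sub-problems, try remembered tactics first, record what worked for reuse on similar problems — can be sketched in a few lines. Everything below (the sub-problem kinds, the tactics, the demo) is an invented placeholder, not any particular AI system:

```python
# Tactics that worked before, keyed by the kind of sub-problem they cracked.
experience = {}

def solve(problem, split, tactics):
    """Split `problem` into sub-problems, try remembered tactics first,
    and record which tactic cracked each kind for next time."""
    solved = {}
    for kind, data in split(problem):
        remembered = experience.get(kind, [])
        for tactic in remembered + [t for t in tactics if t not in remembered]:
            result = tactic(data)
            if result is not None:
                if tactic not in experience.setdefault(kind, []):
                    experience[kind].append(tactic)
                solved[kind] = result
                break
    return solved

# Invented demo: a "problem" that splits into an even and an odd sub-problem.
def split(problem):
    return [("even", 4), ("odd", 3)]

def halve(n):    # only succeeds on even sub-problems
    return n // 2 if n % 2 == 0 else None

def triple(n):   # only succeeds on odd sub-problems
    return n * 3 if n % 2 == 1 else None

answers = solve("demo", split, [halve, triple])
```

After one run, `experience` remembers which tactic suits which kind, so a similar future problem is attacked with the winning tactic first.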
Saying that machines will never think is a dangerous position to take. They will achieve something that looks a lot like thought, just working in a different way than a biological brain does.
Remember the old Turing Test? Supposedly able to tell man from machine? It was defeated long ago.
This rambling, hand waving exercise will fare no better.
Responding to stimuli is not thought. Even a corpse will respond to electrical inputs.
But you’re right, it’s dangerous to believe just because it’s not thought, it’s not a threat.
Three things make up “intelligence”: input, processing, and memory. Everything people mean by “computers will never…” comes down to emotionalism, every bit of it. Will computers ever become emotional, or will they only be able to simulate emotions without the actual feelings emotions produce? That is the $64 million question. Humans like to think they are unique, so they will designate anything that appears to be actual non-human intelligence as something else. It happened with animals, and now many agree that many animals possess a lower form of intelligence.
Our betters, on the contrary, want to replace as many humans as possible with robots because the robots will do what they’re told to do without complaint—including mass extermination of every human that is a threat to their creators.
Don’t forget too that unlike humans, whose wants are as large as dreams, robots and AI’s have perfectly predictable fuel and maintenance requirements. From each according to ability, from each according to need is perfectly feasible if all workers are robots. Robots would make far better New Socialist Men than humans ever could.
Is it just me or do computers have nothing intelligent to say on this matter?
I have posted on this subject on this site before, but it bears repeating: computers cannot, and never will be able to, think!

All but the most marginally competent drivers can react almost instantly without even thinking. Computers simply execute instructions that have been written into the program, and the number of causes of accidents is damn near infinite. The computer has to consult a database or “lookup table” to decide the correct evasive action; a good driver does this instinctively as long as he is aware of his immediate surroundings. Let me pull a number out of my rectal orifice: let’s say there are 500,000 possibilities. If the program calls up the correct scenario out of that 500,000 in 5 seconds, that is 4.8 to 4.9 seconds too long!

One of the first fatalities in a “self-driving” car came about when sunlight reflected off a white semi box trailer and “confused” the computer. Oops, we never anticipated that particular scenario, so it’s time to add another one. The same goes for the next 10,000 unanticipated scenarios. This leads to two major problems: program bloat, which will slow the process even more, and what do you do if your patch creates other problems? In the end, the software will look like the same kind of spaghetti code that plagued Microsoft products in the early years.

Since computers cannot think, the problem becomes one of reaction time. That is what is going to kill self-driving cars. My only question is how many people will be injured, maimed, or killed before they scrap this idiotic concept?
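One point of scale on the 500,000-scenario figure: the table lookup itself is not where the seconds go. A hedged sketch (keys invented, timing hardware-dependent) — a hash lookup over 500,000 entries completes in microseconds; the slow, hard part is the step the argument glosses over, namely recognizing which scenario the car is actually in:

```python
import time

# Build a table of 500,000 invented scenario keys.
table = {f"scenario_{i}": f"action_{i}" for i in range(500_000)}

start = time.perf_counter()
action = table["scenario_250000"]   # one hash lookup among 500,000 entries
elapsed = time.perf_counter() - start
# On any modern machine `elapsed` is on the order of microseconds, not
# seconds: the reaction-time bottleneck is perception (classifying the
# scene into a scenario), not retrieving the programmed response.
```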
The fatal flaw of artificial intelligence is that, from its very inception, it is certainly infected with natural stupidity.
The limitation of anything involving programming is the programmer. When one works in a software field, one quickly becomes disabused of any notion that anyone can build an AI that’s worthy of the name. I would go on, but I’m looking ahead at another day of evaluating and assigning software bugs in a product much less complicated than AI.
Once again we, humans, are worshipping our own image.
Seeing godhood, if you will, in the machine.
The machine being modelled on us.
AI will always be a reflection, of how we see ourselves…meanwhile the search for intelligent life on planet earth continues.
Funnily enough, the formula for a cheap multipurpose robot, self-fuelling, self-replicating and very easily repurposed, is… MAN.
I am puzzled by the desire to make learning AI machines. We already have them. Why spend gazillions of dollars trying to create a machine that takes 24 years to train, makes errors, and is inaccurate?
I can always count on SDA to bring the best arguments 1972 can muster against any form of technological development.
Could not let this go unanswered, but I won’t use my own words when these two scientists have the best response.
1) Need Sir Roger Penrose’s Option C math-physics opinion (watch only first 20 minutes and then it repeats )
https://www.youtube.com/watch?v=sjd_JKXTv-U
2) Combined with Dr. Stuart Hameroff for the biology
https://www.youtube.com/watch?v=1d5RetvkkuQ
For an expert opinion: AI will bring down insurance premiums.
That’s Warren Buffett. He should know. Berkshire Hathaway owns Geico Insurance.
That’s ML, not AI.
I consider my brain as a sort of computer. It runs on a very old operating system that has been upgraded and patched many times over the years. However, it still manages to get the job done. This is in spite of the aging memory system that leaks and peripherals that are worn out. The autofocus on the optical sensors does not work very well, and the colour balance is a little off. The auditory sensors have developed an annoying oscillation and the sensitivity is down. However, the motor control system still works rather well, so not all is lost.
I agree with this. The neuronal activity involved in walking from A to B without falling over or hitting something is computation. The neuronal activity involved in “why did she behave like that towards me?” and “why can’t I get a job” is thinking.
And of course this ignores the sensation or perception of being.
Of course, one might say that the “self-awareness or perception” we experience is simply the training algorithm’s control structure. When we look at a pleasant, familiar landscape, in the romantic sense, we become “one with the view” because the training algorithm has nothing to do but simply monitor. This is indeed the mystic position, I guess. When we are in a very threatening, new circumstance, we become very self-conscious. This is the training mechanism in overdrive. Such a circumstance might be experienced when you meet a “what the f……..ck!!!!” situation like a road accident, when “time slows down”.
This reasoning becomes multi-level as the question of self-awareness of thought processes is explored. Ultimately, even with billions of brain cells, is there finally to be found a master cell group? A seat of the soul?
I just threw that in as I see it coming, I have no idea. But maturity is being able to live with uncertainty.