

Oh, they WILL, sooner or later, allow their AIs to make their own decisions. I’ll predict that AI will become very, very good at identifying which targets to attack, with a very high probability of being “correct” in that decision. The problem with that is that AI can be unable to “explain” its rationale for the decision in any way that makes logical sense to its human masters. If the AI is right often enough, we are likely to stop questioning its decisions, and will accept some level of errors as an unavoidable part of the system.
For now, we will choose a set of targets for the AI. Next, AI systems will identify potential targets and rank them; humans will initially verify and select the target(s) recommended by the AI. After that, humans will mostly rubber-stamp the AI choices without critical review. In the end, they WILL allow the AI a great deal of autonomy.
Skynet will be here before we know it.
Gov’ts tend to view infrastructure and capital as assets and the ‘little people’ as liabilities.
It is not difficult, then, to see that bringing AI to the battlefield could ultimately land us in Kirk’s realm with “A Taste of Armageddon”.
https://en.wikipedia.org/wiki/A_Taste_of_Armageddon
watch it here: https://www.youtube.com/watch?v=D2JzKn8XXlQ
In fact, given our current level of tech and mindless politicians, it is quite feasible that we arrive at this point in the next 15-20 years.
So … in this new AI battlefield … where will the embedded reporters be located? Where will the anti-AI-WAR reporters get their film to broadcast on the evening news?
Obama already got away with waging a drone-WAR … sanitized for American liberal acceptance. Adding AI to THAT Un-WAR scares the crap out of me. Talk about Pontius Pilate washing his hands of murder …
Imagine if Stalin or Mao had had an auto-destruct AI button to press against their political enemies. How many more billions of human lives would be bulldozed into mass graves?
I just felt a great disturbance in the Force … like a billion cries of terror
Exactly. William Tecumseh Sherman described war as hell and also said:
“You cannot qualify war in harsher terms than I will. War is cruelty, and you cannot refine it.”
We’ve seen how it’s been sanitized and reduced to the level of a video game by the use of drones. It will become even more so through the use of artificial intelligence.
This could lead to wars becoming not only palatable but perpetual, as there would be little incentive to bring them to a swift end. One of the reasons the Americans used nuclear weapons against Japan was the projected casualty rate on both sides had the Japanese home islands been invaded, with the war possibly lasting until 1947 or ’48.
Today, our side never wants to win wars, so who cares about new technology? Wars keep getting longer and longer, and we usually lose them, at best hoping for a draw. The way you end a war is by brutalizing the enemy so badly that they beg for it to end under any terms. Death has to be so random and so quick that everyone fears they are next. The threat to a soldier’s family is a much better motivator than a threat to the soldier himself. Britain never lost a colonial war after the U.S. fiasco because it employed absolute terror against its enemies.
If the AI identifies that the best targets are either top leaders for decapitation strikes or massed troops for efficiency of kills, I have faith that Skynet will easily pick out the .01%’ers and the masses of EBT card users and leave the rest of us the hell alone.
Didn’t Philip K Dick write a book about this?
Machine learning is not intelligence. It is -flexibility-, which is a different thing. With machine learning a given machine or system can adapt to changes in its environment.
We don’t know what will happen when we release a bunch of machines into the environment, and they start adapting to it, and to each other.
If we want a preview though, we should look at insects. Because that’s what a really top-notch machine-learning AI will approach. It will act like a grasshopper, fly or cockroach. After we get super-duper good at them, they may act like ants and bees.
But at the beginning, they’re going to be really STUPID grasshoppers.
The question arises, would it be a good idea to stick a few hundred pounds of high explosive in something with the brain of a stupid grasshopper? One that changes its behavior as it learns, and those changes are essentially generated by a random-number module in the code.
You won’t know what it’s going to do until it actually goes out of the garage and starts doing it.
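To make that concrete: here’s a minimal toy sketch (all names, rewards and numbers made up for illustration) of a learner in the epsilon-greedy style, where a literal random-number call decides when it does something unpredictable, and every observation nudges its future behavior. Until it runs, you don’t know which action it will settle on:

```python
import random

random.seed(0)  # fixed seed so this toy run is repeatable

# Toy epsilon-greedy learner: its behavior drifts as reward estimates
# update, and a random-number draw decides when it "explores".
class GrasshopperBrain:
    def __init__(self, n_actions, epsilon=0.1):
        self.values = [0.0] * n_actions  # running reward estimate per action
        self.counts = [0] * n_actions
        self.epsilon = epsilon

    def choose(self):
        # The "random-number module": occasionally act unpredictably on purpose.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        # Incremental average: every observation changes future choices.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

brain = GrasshopperBrain(n_actions=3)
for _ in range(1000):
    a = brain.choose()
    # Made-up environment: action 2 quietly pays best; the learner
    # has to stumble onto that fact through random exploration.
    reward = [0.2, 0.5, 0.8][a] + random.gauss(0, 0.1)
    brain.learn(a, reward)

print(max(range(3), key=lambda a: brain.values[a]))  # almost always settles on 2
```

The point of the sketch: the “intelligence” here is just an update rule plus a dice roll, yet the behavior at step 1000 is not something you can read off from the code at step 0.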
We are at the equivalent of 2004, the pre-iPhone age of cellphone development, with this stuff. All the social media, selfies, texting-while-driving, walking into poles because phone, pop-up videos of EVERYTHING: that level of change and development with machine-learning AI is ahead of us. There are “killer apps” out there waiting to be discovered, and they won’t reveal themselves until the problem of many self-directed machines is a thing of everyday life.
Here’s something else to consider. North American technologists are trained to be hugely risk-averse. They will not push the Go button until they know -exactly- what the thing will do.
Other cultures are different, as Lefties are so fond of reminding us. There is a culture in our midst that cheerfully blows up little girls at rock concerts for obscure religious reasons. Those guys will cheerfully turn loose the stupid grasshopper-brained tank and let it kill whoever the f- it wants, even their own side.
The prepared man expects the unexpected. Flexibility of mind is going to be an in-demand commodity.
“Didn’t Philip K Dick write a book about this?”
Perhaps you’re thinking of his short story “Second Variety”. Scary even if many of the little killers were not AI. It’s an early “Terminator” story. It’s online if you want to read it. I recommend it for those who are unfamiliar with Philip K. Dick.
http://www.philipkdickfans.com/mirror/gutenberg/32032-h/32032-h.htm
I’m pretty sure it’s this one:
https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F
Semantics mean nothing; it is still an order to kill.
DP, put the bottle down
“Put the bottle down.”
This is interesting, coming from you. You’ve not been following the Apple/Google/Uber efforts at getting self-driving cars out there, I take it? Paying no attention to the Amazon delivery drone? The cooler-on-wheels that’s going to be delivering pizza in San Francisco?
So tell me, all-knowing sage, what happens to a city when you release ten thousand pizza delivery bots into it? What happens when all the Ubers in a town are robots?
What happens when a robot Uber meets a robot pizza delivery cart at a stop sign?
You don’t know. Nobody does. That’s the point I’m making. We won’t know until they actually do it. It will keep changing too, because they develop new behaviors on their own.
Tell me about the bottle again though. You have some other, more informed opinion?
I guess we will all need to be armed to take preemptive action.
Skynet here we come!
“We are not going to design weapons that decide what target to hit”
Nonsense. Of course we are.
We designed and developed nuclear weapons, and we used them to end the war with Japan. If we were prepared to unleash that scale of destruction in a conflict, then deploying AI-operated weapon systems is truly small potatoes, morally speaking.