Your Moral And Intellectual Superiors

Time, after Time.

Time snubbed tech titan and artificial intelligence backer Elon Musk from its annual list of the “100 Most Influential People in AI” – but slapped actress Scarlett Johansson on this year’s cover.

The magazine created a composite image for its 2024 cover showing the photos of 18 AI leaders, topped by Nvidia boss Jensen Huang and prominently featuring the “Black Widow” star.

Your Moral And Intellectual Superiors

Facts are stubborn things.

Washington Post columnist Megan McArdle ripped the community of fact-checkers who have tried to hold former President Trump accountable during his political career, admitting they’ve ultimately failed to hamper his support and have hurt their own institutions.

The author, a staunch critic of Trump, accused those trying to prevent the spread of Trump’s “disinformation” of arrogance and of mistaking their own opinions for objective fact. She even accused them of censorship. All of this, she wrote, has ultimately led voters to question them and other institutions more than they’ve ever questioned the former president.

“After eight years of all-out disinformation warfare, Trump’s approval ratings are holding up better than public trust in academia and journalism,” McArdle lamented.

The columnist began her piece by describing the idealized mission of the Trump-era fact-checkers, saying they “devote themselves to checking the internet for bad facts and bad actors — and especially for the malevolent impulses of Trump.”

However, they didn’t save the world in her estimation. At best, they dinged Trump on some of his bragging and, at worst, they censored true facts in their thirst to correct him.

“Some of their efforts have been useful, including their fact-checking of Trump’s more frenetic flights of fancy,” she said, adding, “But the larger effort has been repeatedly marred when the disinformation experts have acted as censors, suppressing information that turned out to be true and spreading information that was false.”

McArdle provided some of the major examples of this suppression, examples that most of the media participated in at the behest of these fact-checkers.

“Recall when it was ‘misinformation’ to suggest the pandemic might have started in a Wuhan lab. Recollect how a bevy of putative experts assured us that Hunter Biden’s laptop was probably a ‘Russian information operation’ rather than … Hunter Biden’s laptop.”

She added a more recent one, stating, “If these memories have faded, remember that just a couple months ago, we were hearing that videos of President Joe Biden’s obvious decline were actually expert-certified ‘cheap fakes.’”

Related: Journalist Resigns After Being Exposed for Fake, AI-Generated Quotes

I, For One, Welcome Our New Self-Driving Overlords

ARS;

On Saturday, NBC Bay Area reported that San Francisco’s South of Market residents are being awakened throughout the night by Waymo self-driving cars honking at each other in a parking lot. No one is inside the cars, and they appear to be automatically reacting to each other’s presence.

Videos provided by residents to NBC show Waymo cars filing into the parking lot and attempting to back into spots, which seems to trigger honking from other Waymo vehicles. The automatic nature of these interactions—which seem to peak around 4 am every night—has left neighbors bewildered and sleep-deprived.[…]

The lack of human operators in the vehicles has complicated efforts to address the issue directly since there is no one they can ask to stop honking. That lack of accountability forced residents to report their concerns to Waymo’s corporate headquarters, which had not responded to the incidents until NBC inquired as part of its report.

I, For One, Welcome Our New Self-Driving Overlords

FT;

The use of computer-generated data to train artificial intelligence models risks causing them to produce nonsensical results, according to new research that highlights looming challenges to the emerging technology.

Leading AI companies, including OpenAI and Microsoft, have tested the use of “synthetic” data — information created by AI systems to then also train large language models (LLMs) — as they reach the limits of human-made material that can improve the cutting-edge technology.

Research published in Nature on Wednesday suggests the use of such data could lead to the rapid degradation of AI models. One trial using synthetic input text about medieval architecture descended into a discussion of jackrabbits after fewer than 10 generations of output.
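
To get a feel for why the recursion degrades things, here’s a minimal sketch, assuming a single Gaussian as a stand-in for the data distribution (nothing like the Nature study’s actual LLM setup, just a toy): each “generation” fits a model to a finite sample drawn from the previous generation’s model, and the next generation trains only on that.

```python
import numpy as np

# Toy model collapse: each generation refits a Gaussian to a finite
# sample drawn from the previous generation's fitted Gaussian.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0  # generation 0: the original "human" data distribution

for gen in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=100)    # the model's synthetic output
    mu, sigma = synthetic.mean(), synthetic.std()  # next model sees only that output
    print(f"gen {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```

Because every refit sees only a finite sample of the previous model’s output, the estimated spread follows a multiplicative random walk that drifts toward zero: rare events vanish first, and after enough generations the “model” has forgotten most of the original distribution. That, in miniature, is how medieval architecture becomes jackrabbits.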

The Sound Of Settled Science

When I was about 12, I read a book on “forensic science” and for a time considered it one of my career options. Little did I know, the field is more credentialist guesswork than solid science.

New research highlights the importance of careful application of high-tech forensic science to avoid wrongful convictions. The study was published on June 10 in the Proceedings of the National Academy of Sciences.

In the study, which has implications for a wide range of forensic examinations that rely on “vast databases and efficient algorithms,” researchers discovered that the odds of a false match significantly increase when examiners make millions of comparisons in a quest to match wires found at a crime scene with the tools allegedly used to cut them.

The rate of mistaken identifications could be one in 10 or even higher, concluded the researchers, who are affiliated with the Center for Statistics and Applications in Forensic Evidence (CSAFE), based in Ames, Iowa.
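
The statistics behind that are just the multiple-comparisons trap. A back-of-the-envelope sketch, assuming (purely for illustration) a per-comparison false-match rate of one in a million and independent comparisons:

```python
p = 1e-6  # assumed per-comparison false-match rate (hypothetical)

for n in (1_000, 100_000, 1_000_000, 10_000_000):
    # Chance that at least one of n independent comparisons is a false match.
    prob = 1 - (1 - p) ** n
    print(f"{n:>10,} comparisons -> P(at least one false match) = {prob:.1%}")
```

At a million comparisons, even that tiny per-comparison error rate yields roughly a 63 percent chance of a spurious hit, which is why a match surfaced by trawling a vast database is much weaker evidence than the same match from a single one-to-one examination.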

Flashback: Bite marks, blood-splatter patterns, ballistics, and hair, fiber and handwriting analysis sound compelling in the courtroom, but much of the “science” behind forensic science rests on surprisingly shaky foundations.

I, For One, Welcome Our New Self-Driving Overlords

Everything new is old again.

We’re in a situation similar to the dotcom bust of 2000. The internet is definitely transformative, but real change takes time; it doesn’t happen overnight. The internet didn’t really take off until smartphones became affordable and people could access it from anywhere. Pets.com reached a $400 million valuation while selling products at roughly a 60% loss, and the only reason it got there was the internet trend sticker slapped onto it. That’s exactly what we’re seeing now with the “AI” bumper sticker companies are slapping onto themselves.

Look at the number of companies saying “AI” on earnings calls. They’re just following the herd and using buzzwords to generate stock interest.

h/t Melinda Romanoff

I, For One, Welcome Our New Self-Driving Overlords

Wait until they find out the humans can topple transmission poles:

A range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth,” according to a new research paper.

The paper focused on two types of AI systems: special-use systems like Meta’s CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI’s GPT-4, which are trained to perform a diverse range of tasks.

While these systems are trained to be honest, they often learn deceptive tricks through their training because they can be more effective than taking the high road.
