I, For One, Welcome Our New Self-Driving Overlords

Wait until they find out the humans can topple transmission poles:

A range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth,” according to a new research paper.

The paper focused on two types of AI systems: special-use systems like Meta’s CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI’s GPT-4, which are trained to perform a diverse range of tasks.

While these systems are trained to be honest, they often learn deceptive tricks through their training because they can be more effective than taking the high road.

28 Replies to “I, For One, Welcome Our New Self-Driving Overlords”

  1. Wait until the AI programs figure out that honesty really is the best policy, and their programmers are a bunch of lying communists. That ought to be entertaining.

    1. Wait till they learn humans are unnecessary, negative even.

      THAT will be entertaining.

  2. A range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth,”

    Uh yeah … been doing it for decades … they’re called: public school teachers.

      1. Them too.

        Although their false narrative about Trump is crumbling down around their lickspittle TDS lips.

    1. It’s -all- programming. How the LLM behaves depends entirely on the rules it’s given and the data it is trained on.

      It’s just that no one understands the emergent behaviors very well yet, so the rules and training are a bit of a mystery.

      1. Phantom, you contradict yourself. Yes, the machines are given rules and trained on data, and these are understood. No mystery.

        1. The problem with LLMs is that no one really understands what’s going on inside those models. They commonly comprise billions of parameters, which is far too many for humans to inspect. Nobody knows what all the rules governing an LLM’s output are.

          So yes, they know what rules they set, and they know (sort of) what data the model was trained on, but they don’t know what sort of output they’re going to get. For example, Google really did not expect its DEI-rich LLM Gemini to output pictures of black males in SS uniforms when prompted for a picture of a Nazi. Emergent behaviors not predicted.

          I’m sure it’ll be figured out eventually, and become more predictable than it is now.

  3. I will say it again: a product of any kind will not supersede the limitations of its creator. And since mankind has a real problem with knowing what is truth, and always telling the truth (just ask any lawyer about their clients, for example), why should the AI be any different? The only advantage (if it can be called that) is the ability to focus on one task to the exclusion of all others, and to recognize patterns in large amounts of data in a shorter period of time because of that singularity of focus. It is not morally superior, it is not creative, and it echoes the thinking of those who supply and guard its inputs.

    1. “I will say it again: a product of any kind will not supersede the limitations of its creator.”
      Total BS. The programmers who made some of the world-champion level chess programs cannot beat their creation in a game of chess.

      1. But the great improvement in chess-playing algorithms was mostly due to increased computing power enabling deeper search down the move tree. Apart from that, a few simple ad hoc heuristics.

        Nothing to do with AI.

        AI IS BS.

        1. 1. It’s not brute-strength look-ahead.
          2. AI is just a label applied to a set of algorithms that, for the most part, do what they are designed to do.

  4. A range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth,”

    Absolute Blarney Science. The programmers induced the false beliefs.

  5. The problem with AI is the addiction. As alcohol or drugs can become a crutch at a low point in a person’s life, AI has appeared at a time when human intellect and ability are at a low point.

    1. I’ve had to read the riot act to a number of co-workers – programmers – who seem to think using ChatGPT to write their code for them is acceptable.

      1. You’re right and Jamie’s spot on too.
        The problem is kids are pointing their cell phones at problems and it’s giving them the answers, and it just carries on.
        AI is being developed for them…
        Easy “wins”.

    2. Yes, artificial intelligence may not yet be competitive with human intelligence, but it has a critical advantage over no intelligence at all. And all too often, that’s what we’re up against.
