Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis
Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • @[email protected]
    -7 points · 9 months ago

    So if I find a single example of an AI doing a reasoning task that’s not in its training material, would you agree that you’re wrong and AI does reason?

    • @[email protected]
      3 points · edited · 9 months ago

      You won’t find one. LLMs are incapable of the kind of reasoning you’re describing. All of their solutions are interpolated from training data, no matter how “original” your problem might seem.