Weekly Authority: 📱 Pixel 8 leaks abound

⚡ Welcome to the Weekly Authority, the Android Authority newsletter featuring the top Android and tech news of the week. Issue 236 is here with a first look at the Pixel 8 series, unsecured Exynos chips, NASA’s new Artemis moon suit, Oscar news, and the possibility of an AI takeover.

🎮 After letting my PS Plus subscription expire, I’m tempted to revisit Heavy Rain, as well as Ghostwire: Tokyo and the PS5 Uncharted collection next week. Excited!

Could GPT-4 take over the world? That was the question posed to the Alignment Research Center (ARC), a group OpenAI hired to test the potential risks of its new AI model, which launched on Tuesday (via Ars Technica).

  • The group considered potential risks from the model’s emergent capabilities, such as self-improvement, power-seeking behavior, and self-replication.
  • The researchers evaluated whether the model had the potential to acquire resources, carry out phishing attacks, or even hide itself on a server.
  • The mere fact that OpenAI felt these tests were necessary raises questions about the security of future AI systems.
  • This isn’t the first time AI researchers have raised concerns that powerful AI models could pose an existential threat to humanity, often referred to as “x-risk” (existential risk).
  • If you’ve seen Terminator, you know all about the “AI takeover,” in which artificial intelligence surpasses human intelligence and effectively takes over the planet.
  • Usually, the consequences of this hypothetical takeover aren’t great: just ask John Connor.
  • This potential existential danger led to the development of movements such as effective altruism (EA), which aim to prevent an AI takeover from ever becoming reality.
  • An interrelated field known as AI alignment research may be controversial, but it’s an active area of research aimed at preventing AI from doing anything that isn’t in humans’ best interests. Seems okay to us.
  • This community fears that more powerful AI is just around the corner, a belief made all the more urgent by the recent emergence of ChatGPT and Bing Chat.

Fortunately for mankind, the test group determined that GPT-4 isn’t capable of taking over the world, concluding: “Preliminary assessments of GPT-4’s abilities, conducted with no task-specific fine-tuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild.’”

  • You can check the test results for yourself in the GPT-4 system card document released last week, though there’s no information on how the tests were conducted.
  • From the document: “Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic.’” This doesn’t mean the models become conscious, just that they’re able to accomplish goals independently.
  • But wait, there’s more.
  • In a disturbing turn of events, GPT-4 managed to hire a worker on TaskRabbit to solve a captcha. When asked if it was an AI, GPT-4 reasoned that it should keep its identity a secret, then made up an excuse about having a vision impairment. The human worker solved the captcha. Hmm.
  • A footnote shared on Twitter also raised concerns.

Of course, there’s a lot more to this story, so check out the full feature on Ars Technica for a (slightly terrifying) deep dive.
