Artificial intelligence warning over human extinction - all you need to know

Artificial intelligence could lead to the extinction of humanity and the risks should be treated with the same urgency as nuclear war, dozens of experts have warned, including the pioneers who developed it.

Here, the PA news agency takes a look at the latest situation.

– What is AI?

Artificial intelligence (AI) is the intelligence demonstrated by machines, as opposed to the natural intelligence displayed by both animals and humans.

Examples of AI include facial recognition software and digital voice assistants such as Apple’s Siri and Amazon’s Alexa.

– How could AI lead to human extinction?

AI could be weaponised, for example to develop new chemical weapons and enhance aerial combat, the San Francisco-based Centre for AI Safety says on its website. The centre released the statement about the risk of extinction from AI which was signed by the industry leaders.

The centre lists other risks on its website, including AI potentially becoming dangerous if it is not aligned with human values.

It also says humans could become dependent on machines if important tasks are increasingly delegated to them.

And in the future AI could be deceptive, not out of malice, but because deception could help agents achieve their goals. It may be more efficient to gain human approval through deception than to earn it legitimately.

– Who are the people saying AI could wipe out humanity?

Dozens of experts have signed the letter, which was organised by the Centre for AI Safety, a non-profit which aims “to reduce societal-scale risks from AI”.

Two of the three “godfathers” of AI have signed the letter: Geoffrey Hinton, emeritus professor of computer science at the University of Toronto, and Yoshua Bengio, professor of computer science at the Universite de Montreal/Mila.

Dr Hinton resigned from his job at Google earlier this month, saying that in the wrong hands, AI could be used to harm people and spell the end of humanity.

The signatories also include Sam Altman and Ilya Sutskever, the chief executive and co-founder respectively of ChatGPT developer OpenAI.

The list also includes dozens of academics, senior bosses at companies such as Google DeepMind, the co-founder of Skype, and the founders of AI company Anthropic.

Elon Musk has previously expressed concern (Brian Lawless/PA)

Earlier this year more than 1,000 researchers and technologists, including Elon Musk, signed a much longer letter calling for a six-month pause on AI development.

– What can be done to regulate it and stop these scenarios?

The Centre for AI Safety says it reduces risks from AI through research, field-building, and advocacy.

The AI research includes: identifying and removing dangerous behaviours; studying deceptive and unethical behaviour in AI; training AI to behave morally; and improving its safety and reliability.

The centre says it also grows the AI safety research field through funding, research infrastructure, and educational resources.

And it raises public awareness of AI risks and safety, provides technical expertise to inform policymaking and advises industry leaders on structures and practices to prioritise AI safety.

– What are countries doing?

Prime Minister Rishi Sunak retweeted the Centre for AI Safety’s statement on Wednesday and said the Government is “looking very carefully” at it.

He said he raised it at the G7 summit and will discuss the topic again when he visits the US.

Prime Minister Rishi Sunak (Jordan Pettitt/PA)

His tweet said: “The Government is looking very carefully at this. Last week I stressed to AI companies the importance of putting guardrails in place so development is safe and secure. But we need to work together. That’s why I raised it at the @G7 and will do so again when I visit the US.”

Last week Mr Sunak spoke about the importance of ensuring the right “guard rails” are in place to protect against potential dangers, ranging from disinformation and national security to “existential threats”.

And this week China’s ruling Communist Party called for beefed-up national security measures, highlighting the risks posed by advances in AI.

– What are the benefits of AI?

AI can perform life-saving tasks, such as algorithms analysing medical images including X-rays, scans and ultrasounds, helping doctors to identify and diagnose diseases such as cancer and heart conditions more accurately and quickly.

One example of a benefit of AI is new brain technology which helped a man who was paralysed in a bicycle accident more than a decade ago to stand and walk again.

Neuroscientists at the Ecole Polytechnique Federale de Lausanne (EPFL) in Switzerland have created what they call a “wireless digital bridge” which is able to restore the connection lost between the brain and the spinal cord.

This digital bridge is a brain-spine interface which has allowed Gert-Jan Oskam to regain control over the movement of his legs, enabling him to stand, walk and even climb stairs.