Why governments should not rush to police AI


Note: This article is from the October 28th issue of The Economist.

Think, then act

Governments must not rush into policing AI

A summit in Britain will focus on “extreme” risks. But no one knows what they look like

WILL ARTIFICIAL intelligence kill us all? Some technologists sincerely believe the answer is yes. In one nightmarish scenario, AI eventually outsmarts humanity and goes rogue, taking over computers and factories and filling the sky with killer drones. In another, large language models (LLMs) of the sort that power generative AIs like ChatGPT give bad guys the know-how to create devastating cyberweapons and deadly new pathogens.
It is time to think hard about these doomsday scenarios. Not because they have become more probable—no one knows how likely they are—but because policymakers around the world are mulling measures to guard against them. The European Union is finalising an expansive AI act; the White House is expected soon to issue an executive order aimed at LLMs; and on November 1st and 2nd the British government will convene world leaders and tech bosses for an “AI Safety Summit” to discuss the extreme risks that AI models may pose.
Governments cannot ignore a technology that could change the world profoundly, and any credible threat to humanity should be taken seriously. Regulators have been too slow in the past. Many wish they had acted faster to police social media in the 2010s, and are keen to be on the front foot this time. But there is danger, too, in acting hastily. If they go too fast, policymakers could create global rules and institutions that are aimed at the wrong problems, are ineffective against the real ones and which stifle innovation.
The idea that AI could drive humanity to extinction is still entirely speculative. No one yet knows how such a threat might materialise. No common methods exist to establish what counts as risky, much less to evaluate models against a benchmark for danger. Plenty of research needs to be done before standards and rules can be set. This is why a growing number of tech executives say the world needs a body to study AI much like the Intergovernmental Panel on Climate Change (IPCC), which tracks and explains global warming.
A rush to regulate away tail risks could distract policymakers from less apocalyptic but more pressing problems. New laws may be needed to govern the use of copyrighted materials when training LLMs, or to define privacy rights as models guzzle personal data. And AI will make it much easier to produce disinformation, a thorny problem for every society.
Hasty regulation could also stifle competition and innovation. Because of the computing resources and technical skills required, only a handful of companies have so far developed powerful “frontier” models. New regulation could easily entrench the incumbents and block out competitors, not least because the biggest model-makers are working closely with governments on writing the rule book. A focus on extreme risks is likely to make regulators wary of open-source models, which are freely available and can easily be modified; until recently the White House was rumoured to be considering banning firms from releasing frontier open-source models. Yet if those risks do not materialise, restraining open-source models would serve only to limit an important source of competition.
Regulators must be prepared to react quickly if needed, but should not be rushed into setting rules or building institutions that turn out to be unnecessary or harmful. Too little is known about the direction of generative AI to understand the risks associated with it, let alone manage them.
The best that governments can do now is to set up the infrastructure to study the technology and its potential perils, and ensure that those working on the problem have adequate resources. In today’s fractious world, it will be hard to establish an IPCC-like body, and for it to thrive. But bodies that already work on AI-related questions, such as the OECD and Britain’s newish Frontier AI Taskforce, which aims to gain access to models’ nuts and bolts, could work closely together.
It would help if governments agreed to a code of conduct for model-makers, much like the “voluntary commitments” negotiated by the White House and to which 15 makers of proprietary models have already signed up. These oblige model-makers, among other things, to share information about how they are managing AI risk. Though the commitments are not binding, they may help avoid a dangerous free-for-all. Makers of open-source models, too, should be urged to join up.
As AI develops further, regulators will have a far better idea of what risks they are guarding against, and consequently what the rule book should look like. A fully fledged regime could eventually look rather like those for other technologies of world-changing import, such as nuclear power or bioengineering. But creating it will take time—and deliberation. ■
