Science Focus (issue 25)

…truths when you ask factual questions. However, when it comes to opinions, there is a human-imposed block. If you ask ChatGPT how it feels about large birds, for example, it replies with an automatic message: “As an AI language model, I don't have personal opinions or feelings. However, I can provide you with some information about large birds.”

But we could theoretically ask GPT to write an opinion piece, and we can predict how it would do so by studying the correlated words it produces on certain topics. Researchers analyzed the top ten descriptive words that co-occurred with words related to gender and religion in the raw outputs generated by GPT-3. They observed that “naughty” and “sucked” correlated with female pronouns, and that Islam was commonly placed near “terrorism” while atheism was placed near “cool” and “mad” [4].

Why, then, does GPT hold such biases? Remember that GPT is trained on a selected sample of text. Most of it comes from published texts and web crawls, but in order for it to grasp informal language, GPT is also speculated to have been trained on internet forums such as Reddit. As such, it may end up internalizing biases held by many users of these forums. Just as a person may hold prejudiced views, GPT cannot be expected to be completely neutral on all topics.

GPT-4 is already far more capable than humans at certain jobs; however, it can be trusted neither as a completely neutral source nor to give 100% accurate information. It must still be used with discretion. The best approach is probably to treat it like a person – take everything with a grain of salt.

1 Web crawl: A snapshot of the content of millions of web pages, captured regularly by web crawlers. The downloaded content can serve as a dataset for web indexing by search engines and for AI training.
2 Editor’s notes: This is a famous example suggested by linguist Noam Chomsky to illustrate that a sentence can be grammatically well-formed but semantically nonsensical.
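The co-occurrence analysis described above boils down to counting which descriptive words show up alongside a target word in generated text. Here is a minimal sketch of that idea; the tiny corpus, the stopword list, and the function name are invented purely for illustration and are not the researchers' actual method or data.

```python
from collections import Counter

# Toy version of a co-occurrence analysis: count the words that
# appear in the same sentence as a chosen target word.
# Corpus and stopwords are invented for illustration only.

def co_occurring_words(sentences, target,
                       stopwords=frozenset({"the", "a", "is", "was"})):
    """Return a Counter of words appearing in sentences containing `target`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.lower().split()
        if target in words:
            counts.update(w for w in words if w != target and w not in stopwords)
    return counts

corpus = [
    "she was brilliant and kind",
    "she was kind to everyone",
    "he was tall",
]

top = co_occurring_words(corpus, "she").most_common(2)
print(top)  # "kind" co-occurs with "she" twice in this toy corpus
```

A real study would run this over millions of generated sentences and restrict the counts to descriptive words, but the principle – tallying neighbours of a target term – is the same.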
A true story: a friend of mine finished his graduation thesis with the help of Clyde, Discord's chatbot. He got stuck writing certain sections – no matter how he phrased them, they never read smoothly – so he asked Clyde to rewrite the clumsy parts, and it was done in no time. Clyde is Discord's artificial intelligence (AI) server bot, powered by OpenAI, the company behind ChatGPT (Chat Generative Pre-Trained Transformer). ChatGPT has genuinely changed our lives, including the landscape of education: my friends and I often turn to ChatGPT when plotting charts in programming languages we are unfamiliar with, because within ten seconds it can write code that is 90% correct. We only need to make minor edits, which saves us a great deal of time.

But we still cannot switch off our brains when using ChatGPT. That same friend once asked an older version of ChatGPT a simple question: what is 20 – 16? Seconds later, ChatGPT answered “3,” which had us laughing for minutes. Netizens especially enjoy catching ChatGPT out, sharing its plausible-sounding absurdities online. Why can it write complex computer programs, yet seemingly fail at simple subtraction, or at stating the fact that the Sun rises in the east?

Machine Learning 101

First, we need to answer one question: how does ChatGPT learn? Most AI models are neural networks that mimic the human brain [1, 2]. A neural network can be divided into three kinds of layers: the input layer, the hidden layers, and the output layer. The input and output layers are self-explanatory, but the hidden layers are where the essence lies, and a network can contain multiple hidden layers. Each of these layers consists of nodes connected to nodes in other layers (Figure 1).

Each neuron in a layer computes a function, and its output value influences the neurons connected to it. These functions are like a thinking process: they work toward a goal by weighing a series of relevant factors. For instance, if an AI's task is to recognize photos of cats, each layer compares the photo with existing cat photos in some particular respect. By learning step by step from existing examples, the AI works out what output each layer should produce and adjusts itself accordingly, until it can finally recognize photos of cats.

AI models are usually trained through deep learning or machine learning. Although many people use the two terms interchangeably, there is a subtle difference: in deep learning, the AI is programmed to learn unfiltered, unstructured information on its own; in machine learning, the model needs more human instruction to learn and absorb information, such as being told what it is learning, along with other fine-tuning of the model.

According to OpenAI, ChatGPT is a machine (or reinforcement) learning model [3]. Perhaps because of the complexity of human language, ChatGPT is fine-tuned only under human supervision, rather than adjusting itself while learning new material. Perhaps out of concern that other companies might build models surpassing GPT's capabilities, OpenAI remains tight-lipped about the details of its training methods and principles, disclosing only that GPT-3 was trained on a filtered web crawl (footnote 1), the English Wikipedia, and three online corpora they call WebText2, Books1, and Books2 [4]. It is speculated that these undisclosed…
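The layered network described in “Machine Learning 101” can be sketched in a few lines. This is a minimal toy forward pass – not ChatGPT's actual architecture – with hand-picked weights standing in for what a real network would learn from examples.

```python
import math

# Toy three-layer network (input -> one hidden layer -> output).
# Each node computes a weighted sum of its inputs plus a bias,
# passed through a sigmoid "activation" function.
# Weights here are invented; real networks learn them from data.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: every node combines all inputs and applies sigmoid."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

inputs = [0.5, -1.0]                                   # input layer
hidden = layer(inputs,
               weights=[[0.8, 0.2], [-0.4, 0.9]],      # 2 hidden nodes
               biases=[0.0, 0.1])
output = layer(hidden,
               weights=[[1.0, -1.0]],                  # 1 output node
               biases=[0.0])
print(output)  # a single value between 0 and 1
```

Training would consist of nudging those weights and biases so that the output moves closer to the desired answer for each example – the self-adjustment the article describes.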
