华东师范大学学报(哲学社会科学版) ›› 2025, Vol. 57 ›› Issue (1): 8-21. doi: 10.16382/j.cnki.1000-5579.2025.01.002

• Restarting the Dialogue between Science, Technology and the Humanities •

The Moral Prospects Between Humans and AI Agents: The Ethical Anchor of Artificial Intelligence from the Perspective of Intentionality

Haiping Tian

  • Accepted: 2024-12-17 Online: 2025-01-15 Published: 2025-01-24
  • About the author: Tian Haiping, Professor, Center for Studies of Values and Culture and School of Philosophy, Beijing Normal University (Beijing, 100875)
  • Funding:
    Major Projects of the National Social Science Fund of China: “Research on Preventing the Ethical Risks of Artificial Intelligence” (Project No. 20ZDA040) and “Philosophical Reflections on Artificial Intelligence and Its Logic in the Context of Big Data” (Project No. 19ZDA041).



Abstract:

The evolution of Artificial Intelligence (hereafter “AI”) has opened up an “Age of AI Agents”, in which humans and AI agents mirror each other. AI agents present themselves as “ethical entities” in the social, rational, axiological, and virtuous forms of existence in which they are anchored. Criticizing the computationalist interpretation of AI’s “ethical anchor”, the philosophy of embodiment argues that a computationalist AI agent inevitably falls into the agent traps of “disembodied mind”, “moral machines”, and “meta-act”. Following the general account of “intentionality difference”, a comparison between the “intentionality of machines” and the “intentionality of bats” can explain the ethical anchor of AI agents. The fourfold tension exhibited by the intentionality of machines indicates that AI agents, owing to their “lack of subjectivity”, cannot ultimately transcend the threshold of their initial instrumental form, and that a “new slavery setup” is thus concealed within them. As the underlying framework of machine ethics, the co-existence intentionality of humans and machines takes “the self-awareness escape” as its technological approach; however, this in fact presupposes the importance of “self-awareness” returning to and reflecting upon itself. The unease or anxiety brought about by reflexivity is an “ethical surplus barrier” that the intentionality of machines cannot cross. Clarifying the unease caused by the “agent traps”, the “new slavery setup”, and the “self-awareness escape” can help to dispel some specious notions that have emerged in the study of AI ethics.

Key words: AI agents, intentionality, ethics, morality