HKUST PPOL Fall 2025

SCIENCE, TECHNOLOGY AND INNOVATION POLICY

Yang, Liang, Yan Xu, and Pan Hui. “Framing metaverse identity: A multidimensional framework for governing digital selves.” Telecommunications Policy 49.3 (2025): 102906.

FOCUS OF STUDY

The metaverse, as an emerging digital ecosystem, is redefining the boundaries between physical and virtual realities, offering both challenges and opportunities for societal and personal growth. This study contributes to metaverse governance discourse by proposing a multidimensional framework for understanding and regulating Metaverse Identity, defined as a user’s digital self encompassing personal attributes, data footprints, social roles, and economic elements. The framework introduces two guiding principles: Equivalence and Alignment, which emphasizes coherence between digital and real-world identities to enhance accountability and legal clarity, and Fusion and Expansiveness, which advocates creative, inclusive identity expression beyond traditional constraints. These principles address key governance challenges, including identity interoperability, the complexities of privacy management, risks from deepfakes, and identity fragmentation. By bridging theoretical gaps, the study offers a foundation for future research and for strategies to guide the ethical and inclusive evolution of the metaverse.

POLICY RECOMMENDATION

Policymakers should develop adaptive governance frameworks that balance regulatory oversight with flexibility in identity expression. These frameworks must address challenges such as privacy, interoperability, and ethical risks while safeguarding mental health, fostering inclusivity, and encouraging innovation to ensure a fair and forward-thinking metaverse ecosystem.

Huang, Linus Ta-Lun, Gleb Papyshev, and James K. Wong. “Democratizing value alignment: From authoritarian to democratic AI ethics.” AI and Ethics 5.1 (2025): 11-18.

FOCUS OF STUDY

This paper addresses the critical issue of value alignment in AI systems, highlighting two main challenges: ensuring that AI understands human values and determining which values should be prioritized. It critiques existing approaches such as reinforcement learning from human feedback (RLHF) and Constitutional AI for their lack of transparency and inclusivity, which can result in biased outcomes reflecting only dominant perspectives. The study proposes a Dynamic Value Alignment approach that enhances users’ moral and epistemic agency, allowing them to exert greater control over the values that guide AI behavior. By modeling moral reasoning as a dynamic process, the framework aims to democratize AI ethics, ensuring that a diverse array of human values is represented in AI systems.

POLICY RECOMMENDATION

Policymakers should promote the adoption of the Dynamic Value Alignment approach in AI development, ensuring that value-selection processes are transparent, inclusive, and participatory. This will help mitigate biases and enhance accountability in AI systems, reflecting a broader spectrum of human values.
