HKUST PPOL Fall 2024

STUDY FOCUS
The paper explores the emerging applications of generative AI models in qualitative research within the social sciences and public policy. While these technologies enhance conversational capabilities, they also raise significant ethical concerns regarding data confidentiality and research integrity. Through a review of the scientific literature and discussions with qualitative researchers and affected stakeholders, the study identifies risks associated with the use of Large Language Models (LLMs), such as GPTs. It highlights the need for interventions to mitigate risks affecting three key groups: Reviewers, Researchers, and Research Respondents (the 3Rs). By comparing current AI-related policies from the European Union, Singapore, the United States, the United Kingdom, and China, the authors identify regulatory gaps in addressing the ethical implications of LLM use in qualitative research.

POLICY RECOMMENDATION
To ensure ethical integrity in qualitative research that uses generative AI, policymakers should establish a regulatory framework encompassing both soft law and hard law. The framework should facilitate ongoing dialogue among researchers, regulators, and stakeholders, while implementing guidelines and regulations that protect data confidentiality and uphold research ethics. Emphasizing transparency and accountability will be crucial to fostering trust and promoting the responsible use of AI technologies in research.

STUDY FOCUS
This paper examines the relationship between the risk of robots replacing jobs and citizens' preferences for government policies. It aims to clarify mixed results in existing research by investigating when citizens support government intervention to reduce job-displacement risks and which policies they prefer. A survey in China, the world's largest robot market, found that citizens favor government action for dangerous work environments over routine jobs. They prefer direct assistance, such as training programs, over company-focused regulations. Support for government action is stronger when citizens believe the beneficiaries deserve help.

POLICY RECOMMENDATION
This paper recommends focusing on training programs that directly support workers affected by robot-driven job displacement. Clear communication about the benefits and intended beneficiaries of these policies is essential for gaining public support. Policymakers should also strive to balance technological advancement with job security to protect workers' rights.

SCIENCE, TECHNOLOGY AND INNOVATION POLICY
Fan, Ziteng, Jing Ning, and Alex Jingwei He. "Slowing down or adapting to technological progress? Automation risk and policy preferences." Regulation & Governance. DOI: 10.1111/rego.12642.
Sivarudran Pillai, Vishnu, and Kira Matus. "Regulatory solutions to alleviate the risks of generative AI models in qualitative research." Journal of Asian Public Policy (2024): 1-24.

SCHOLARLY SHOWCASE
