School of Business and Management
Department of Management

Judgment and Decision Making in Organizations
Supervisor: David HAGMANN / MGMT
Student: WANG Ziling / SGFN
Course: UROP 1000, Summer

This research examines the social evaluation paradox arising from AI disclosure in organizational settings: although AI enhances work quality, disclosing its use triggers negative perceptions of the employees who rely on it. In a scenario-based experiment in which participants evaluate a financial consultant's successful AI-assisted project, we test how disclosure audience (supervisor vs. colleagues), framing (replacement vs. augmentation), and timing affect evaluations across eight dimensions (e.g., trustworthiness, laziness, innovativeness). Results reveal significant penalties when AI is framed as replacing tasks, especially when its use is disclosed to supervisors; conversely, augmentation-framed disclosure mitigates the backlash. These findings, grounded in attribution theory (Reif et al., 2025), highlight critical tensions between transparency norms and evaluation biases. We propose actionable frameworks for ethical AI policies, training protocols, and evaluation systems to ensure fair assessment of human-AI collaboration.

Judgment and Decision Making in Organizations
Supervisor: David HAGMANN / MGMT
Student: WU Yutong / ECOF
Course: UROP 1100, Spring

To enhance the effectiveness of the Student Feedback Questionnaire (SFQ) in improving teaching quality at the Hong Kong University of Science and Technology, we redesigned the SFQ to leverage Large Language Models (LLMs) in real time, eliciting more detailed student feedback and synthesizing it into actionable advice for professors. We administered the redesigned questionnaire to 324 students enrolled in six sessions of MGMT 2110, asking them to evaluate their professor, and collected their responses. As anticipated, we observed a substantial increase in the length of students' textual responses. We also observed a significant increase in concreteness, as measured by an algorithm we developed to assess the actionability of participants' advice to their professors (using concreteness as a proxy). Because this initial phase of the experiment was conducted relatively late in the term, data analysis is ongoing. In the second phase of the study, we intend to recruit approximately one hundred participants who identify as teachers from Prolific to evaluate the feasibility of the AI-synthesized advice and assess the overall effectiveness of the redesigned SFQ. Our study explores the potential of LLMs for eliciting valuable user feedback in questionnaire-based surveys, with significant implications for the use of AI in education, service industries, and academia.
RkJQdWJsaXNoZXIy NDk5Njg=