School of Business and Management
Department of Management

Perceptions of AI in Organizations
Supervisor: Thomas Bradford BITTERLY / MGMT
Student: CHEUNG Yau Sum / IS
Course: UROP 1000, Summer

Throughout my UROP 1000 course, Perceptions of AI in Organizations, I joined the team's weekly group meetings and participated in tasks across several research projects, namely the Digital Human Project, the research essay "The Dosage of Deception: How Frequency and Type Influence Trust Evaluations," and the Laughter on Trust Project. Through these projects, I gained invaluable insights into how to coordinate a research project and learned about different approaches and research methods for achieving the most reliable results.

Perceptions of AI in Organizations
Supervisor: Thomas Bradford BITTERLY / MGMT
Student: EE Hong Liang / GBUS
Course: UROP 1000, Summer

The work in this report supported a collection of studies exploring the effects of certain factors (e.g., use of AI, humor) on trust. This report focuses on my involvement in two such studies. One study explored how much humans trust an AI chatbot, measured by the extent to which they share personal information with it. For this study, I independently generated data by coding survey responses. The other study investigated how the use of humor affects trust in organizations depending on whether it occurs in a high-context (e.g., China) or low-context (e.g., the US) culture. For this study, I verified the statistical analyses and replicated the results to identify discrepancies. Overall, the work in this report validated the analyses of these studies and supported the research process of the entire team.
Perceptions of AI in Organizations
Supervisor: Thomas Bradford BITTERLY / MGMT
Student: TSENG Yung-yi / OM
Course: UROP 1100, Spring

This study investigates the factors that influence trust in and adoption of artificial intelligence (AI) within organizations. This semester, through two ongoing experiments, one analyzing disclosure patterns in AI chat interactions and another comparing feedback evaluations (human, AI, and hybrid), we find that while human feedback is initially perceived as warmer, AI-enhanced human feedback outperforms it in competence, warmth, and trustworthiness when assessed blindly. Our current results also highlight a paradox: users who prefer in-person interaction disclose more personal information to AI. These findings advance our understanding of concerns surrounding AI adoption.