School of Engineering
Department of Computer Science and Engineering

Making Large Language Models (LLMs) Interact with Physical World
Supervisor: LI Mo / CSE
Student: YEUNG Wun Lam / CPEG
Course: UROP 2100, Fall

This research introduces an innovative approach to question answering (QA) in automatically generated life journals using large language models (LLMs). Building on previous work in automatic life journaling that collects motion and location data through mobile sensors, our framework generates daily journals and enhances user interaction. A key advancement is an optimized location context generation method that utilizes OpenStreetMap data, satellite imagery, and geospatial analysis to create detailed spatial narratives, reducing reliance on LLMs and improving efficiency. Additionally, we integrate image analysis techniques to align visual data with these narratives. Our specialized QA dataset, derived from mobile data, is evaluated against traditional retrieval-augmented generation methods, demonstrating improved contextual understanding and engagement in personal documentation practices with LLMs.

Assess User Experience to Design Effective Visual Representation and Interaction in Virtual Reality
Supervisor: MA Xiaojuan / CSE
Student: ZHANG Zongmin / COMP
Course: UROP 1100, Spring

As VR/AR technologies continue to evolve rapidly, their applications in areas such as virtual communication, education, and entertainment are becoming increasingly significant. This project focuses on designing and developing multi-user interactive virtual environments using the Unity engine and the Meta XR All-in-One SDK on the Meta Quest 3 headset. The work includes constructing scenes, designing spatial layouts, and implementing interactions between users and game objects or other users across shared or separate environments, leveraging building blocks from the Meta XR Interaction SDK and custom C# scripts.
The project also investigates the integration and compatibility of various XR software packages within a unified system. Extensive debugging and testing were conducted to ensure a stable, smooth, and immersive user experience.

Designing Conversational Agents for Neurocognitive Disorders Screening
Supervisor: MA Xiaojuan / CSE
Student: IU Hei Ching / COSC
Course: UROP 1100, Spring; UROP 3200, Summer

Conversational agents (CAs) have shown potential in neurocognitive disorder screening, but their effectiveness depends heavily on interaction quality. This study investigates prompting strategies to enhance conversational agent performance in an automated neurocognitive disorder screening task. We examine how different prompting structures, such as chain-of-thought (CoT) reasoning and human-knowledge scaffolding, affect response quality and appropriateness. Furthermore, we evaluate the correlation between automatic speech recognition errors and the model’s response quality.