UROP Proceedings 2023-24

School of Engineering
Department of Computer Science and Engineering

Generative AI in Interactive Multimedia Applications
Supervisor: BRAUD Tristan Camille / CSE
Student: LIU Huaiqian / COMP
Course: UROP 1100, Fall

In the era of artificial intelligence and machine learning, visual AI, the use of artificial intelligence to process visual data, has opened multiple new avenues for exploiting visual data through machine learning and other techniques. Ultralytics' YOLO (You Only Look Once) is a high-speed, high-accuracy object detection and image segmentation model. The latest YOLOv8 (version 8) model supports a full range of vision AI tasks, including detection, segmentation, pose estimation, tracking, and classification. This UROP project aims to study how YOLOv8 implements its machine learning algorithms and trains its models, as well as its usages and potential applications.

Generative AI in Interactive Multimedia Applications
Supervisor: BRAUD Tristan Camille / CSE
Student: MA Mingfei / CPEG
Course: UROP 1100, Spring

We want to design a 3D game in which a first-person character can walk around an environment containing flowers, trees, rivers, and many other natural objects, much like wandering through natural settings such as forests, deserts, and the seaside. The most distinctive part is that every object in the environment can change its features according to the physical conditions of the external environment, such as illumination intensity, voice volume, temperature, and compression strength. The result is intended to be a relaxing and engaging game, and all the programming will be done in Unity.

Generative AI in Interactive Multimedia Applications
Supervisor: BRAUD Tristan Camille / CSE
Student: MA Sum Yi / COMP
Course: UROP 1100, Fall; UROP 2100, Spring; UROP 3100, Summer

Through a user evaluation involving 12 players, we explored the design implementations revolving around this novel type of interactive narrative experience. We also discuss the mapping of ambient light data and the benefits of tangible input, and provide recommendations for further development of this type of experience. Our contribution consists of a qualitative analysis and a preliminary framework for integrating tangible data into generative narrative experiences, offering a foundation for further research and development in this emerging area.
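
To make the YOLOv8 workflow mentioned in the first project above concrete, the following is a minimal sketch of the Ultralytics Python interface for training and detection. The weights file (yolov8n.pt) and the small demo dataset description (coco128.yaml) are standard Ultralytics assets, while the image path is a placeholder; none of these choices are taken from the project itself.

    from ultralytics import YOLO

    # Load a pretrained YOLOv8 nano detection model (Ultralytics' standard weights name).
    model = YOLO("yolov8n.pt")

    # Fine-tune on a dataset described by a YAML file; coco128.yaml is the small
    # demo dataset that ships with Ultralytics, used here purely for illustration.
    model.train(data="coco128.yaml", epochs=3, imgsz=640)

    # Run inference on an image (placeholder path) and inspect the detected boxes.
    results = model("example.jpg")
    for box in results[0].boxes:
        print(box.cls, box.conf, box.xyxy)

The same YOLO object exposes the other task families named in the abstract (segmentation, pose estimation, tracking, classification) by loading the corresponding pretrained weights instead of the detection checkpoint.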

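The condition-driven environment described in the second project above amounts to mapping external measurements onto object features. As a rough illustration only, assuming hypothetical readings for illumination (lux), voice volume (dB), and temperature (degrees Celsius), such a mapping could look like the sketch below; the actual project would implement this inside Unity scripts rather than in Python, and the feature names are placeholders.

    def normalize(value, low, high):
        """Clamp a raw sensor reading into the range [0, 1]."""
        return max(0.0, min(1.0, (value - low) / (high - low)))

    def scene_parameters(lux, voice_db, temperature_c):
        """Map ambient readings to example object features (names are hypothetical)."""
        return {
            "flower_bloom": normalize(lux, 0, 10000),          # brighter light, wider bloom
            "river_flow_speed": normalize(voice_db, 30, 90),   # louder voice, faster flow
            "leaf_hue_shift": normalize(temperature_c, 0, 35), # warmer air, warmer colours
        }

    print(scene_parameters(lux=2500, voice_db=55, temperature_c=22))

Normalising each reading to [0, 1] keeps the mapping independent of sensor units, so the same scene parameters can later be driven by different input devices.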
RkJQdWJsaXNoZXIy NDk5Njg=