UROP Proceedings 2023-24

School of Engineering
Department of Computer Science and Engineering

Self-Supervised Scene Depth Estimation from the Wild
Supervisor: XU Dan / CSE
Student: YU Kaiwen / COGBM
Course: UROP 1100, Fall

With the advance of AI systems, the depth information of images and videos is becoming increasingly important, as it is one of the most fundamental inputs to many applications. The initial aim of this project was to advance self-supervised scene depth estimation by comprehensively analyzing and implementing the methodology presented in the paper "Depth from Videos in the Wild". As the research progressed, the approach took a pivotal shift after discussions with my supervisor, Professor Xu: we are now investigating how to integrate the relative depth estimation methods from "Depth from Videos in the Wild" into the ZoeDepth framework, with the aim of improving ZoeDepth's accuracy and robustness across varying environments. While the detailed implementation phase has not yet commenced, this report outlines our progress so far, covering the initial analysis, the strategic pivot, and future plans for enhancing depth estimation in dynamic scenes. (An illustrative sketch of the photometric supervision underlying this line of work is given after the abstracts below.)

Automatic and Scalable Data Processing for LLM
Supervisor: ZHOU Xiaofang / CSE
Student: CHEN Zhenhong / COMP
Course: UROP 1100, Summer

Recent advancements in Large Language Models (LLMs) and Automated Machine Learning (AutoML) have demonstrated significant potential. This report explores the integration of these two fields, in particular how LLMs can be used to improve AutoML configurations. I reviewed several relevant studies on how LLMs can optimize AutoML configurations, and also studied an example of enhancing neural architecture search (NAS) with GPT-4. As a beginner with no prior research experience, I found many of the concepts in these advanced projects hard to understand, but I have tried my best to present my findings in an organized manner in this report. (A conceptual sketch of the LLM-guided search loop is given after the abstracts below.)

Automatic and Scalable Data Processing for LLM
Supervisor: ZHOU Xiaofang / CSE
Student: HOU, Jingcheng / COMP
Course: UROP 3100, Spring

Nowadays, more and more renowned companies and universities are collaborating closely on the development of powerful large language models (LLMs). While it may be straightforward for well-resourced organizations to build a strong model, smaller teams with limited resources face significant challenges due to the time and monetary investments involved. To train a powerful LLM, careful attention must be paid to aspects such as training dataset collection and construction, and base model selection. This report examines the training and fine-tuning dataset construction processes of some of the most influential models released in the past year, aiming to provide a foundation for future LLM research endeavors. (A toy dataset-construction skeleton is given after the abstracts below.)
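To make the self-supervised signal behind "Depth from Videos in the Wild" concrete for the first abstract, the following is a minimal PyTorch sketch of the standard view-synthesis objective: predicted depth and camera motion warp a neighbouring frame into the target view, and the photometric difference supervises the depth network. This is an illustrative simplification, not the paper's exact formulation; the pose convention, the plain L1 loss, and the omission of occlusion masking and learned object-motion fields are all assumptions made for brevity.

```python
import torch
import torch.nn.functional as F

def inverse_warp(source, depth, pose, K):
    """Warp `source` (B,3,H,W) into the target view using the target
    view's predicted depth (B,1,H,W), a target->source camera transform
    `pose` (B,4,4), and intrinsics K (B,3,3). Conventions are assumptions."""
    B, _, H, W = source.shape
    dev = source.device
    # Homogeneous pixel grid of the target view: (B, 3, H*W).
    v, u = torch.meshgrid(
        torch.arange(H, device=dev, dtype=torch.float32),
        torch.arange(W, device=dev, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)]).reshape(1, 3, -1).expand(B, -1, -1)
    # Back-project pixels to 3D, then move the points into the source frame.
    cam = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)
    cam = torch.cat([cam, torch.ones(B, 1, H * W, device=dev)], dim=1)
    src = (pose @ cam)[:, :3]
    # Re-project into source pixel coordinates and sample the source image.
    proj = K @ src
    xy = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    x = 2.0 * xy[:, 0] / (W - 1) - 1.0   # normalize to [-1, 1] for grid_sample
    y = 2.0 * xy[:, 1] / (H - 1) - 1.0
    grid = torch.stack([x, y], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(source, grid, padding_mode="border", align_corners=True)

def photometric_loss(target, source, depth, pose, K):
    """Simplified L1 view-synthesis loss; the paper additionally uses SSIM,
    occlusion-aware masking, and a learned motion field for moving objects."""
    return (inverse_warp(source, depth, pose, K) - target).abs().mean()
```

A distinguishing feature of the paper itself is that the camera intrinsics are also predicted rather than given, which is what makes training on uncalibrated videos "in the wild" possible; the sketch above assumes known intrinsics for simplicity.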
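For the second abstract, the reviewed idea of enhancing NAS and AutoML with GPT-4 can be summarized as a propose-evaluate-feedback loop. The sketch below is conceptual only: `query_llm` and `evaluate_config` are hypothetical placeholders standing in for a chat-completion API and for the user's own training pipeline, not functions from any specific library, and the prompt format is an assumption.

```python
import json

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion call (e.g. GPT-4).
    It should return a JSON string describing one candidate configuration."""
    raise NotImplementedError("plug in your LLM API here")

def evaluate_config(config: dict) -> float:
    """Hypothetical placeholder: train/validate under `config`, return a score."""
    raise NotImplementedError("plug in your training pipeline here")

def llm_guided_search(search_space: dict, budget: int = 10) -> dict:
    """Propose-evaluate-feedback loop: the LLM sees the search space and
    all previous (config, score) pairs, and proposes the next trial."""
    history = []
    for _ in range(budget):
        prompt = (
            "You are configuring a machine-learning experiment.\n"
            f"Search space: {json.dumps(search_space)}\n"
            f"Previous trials as (config, score) pairs: {json.dumps(history)}\n"
            "Reply with the next configuration as a single JSON object."
        )
        config = json.loads(query_llm(prompt))
        score = evaluate_config(config)
        history.append((config, score))
    return max(history, key=lambda t: t[1])[0]
```

The design point this loop illustrates is that the LLM replaces a hand-crafted search strategy: feeding the trial history back into the prompt lets it condition each new proposal on what has already worked.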
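For the third abstract, dataset construction for training or fine-tuning typically combines deduplication with quality filtering before examples are formatted for the model. The skeleton below is a toy illustration under simple assumptions (exact-match deduplication, length-based filtering, an instruction/response record format); the production pipelines described in recent model reports additionally use fuzzy deduplication such as MinHash, model-based quality scoring, and PII/toxicity filtering.

```python
import hashlib
import json

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace before hashing, so trivially
    different duplicates collide on the same key."""
    return " ".join(text.lower().split())

def build_sft_dataset(raw_pairs, min_len=16, max_len=8192):
    """Toy supervised fine-tuning (SFT) dataset construction:
    exact-match dedup plus simple length-based quality filtering."""
    seen, dataset = set(), []
    for instruction, response in raw_pairs:
        key = hashlib.sha256(normalize(instruction + response).encode()).hexdigest()
        if key in seen:
            continue                      # drop exact duplicates
        if not (min_len <= len(response) <= max_len):
            continue                      # drop degenerate or overlong answers
        seen.add(key)
        dataset.append({"instruction": instruction, "response": response})
    return dataset

if __name__ == "__main__":
    pairs = [
        ("What is 2+2?", "2 + 2 = 4, because adding two and two gives four."),
        ("What is 2+2?", "2 + 2 = 4, because adding two and two gives four."),
    ]
    print(json.dumps(build_sft_dataset(pairs), indent=2))  # one record survives
```

Hashing a normalized form of each pair, rather than the raw strings, is a cheap way to catch duplicates that differ only in casing or whitespace; anything subtler requires the fuzzy methods mentioned above.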

RkJQdWJsaXNoZXIy NDk5Njg=