School of Engineering
Department of Computer Science and Engineering

Large Language Models as Your Machine Learning Experts
Supervisor: DI Shimin / CSE
Student: LIU Siwei / COSC
Course: UROP 1100, Fall

Advancements in Large Language Models (LLMs) and Automated Machine Learning (AutoML) have shown considerable promise. This report investigates the integration of these two fields, focusing on how LLMs can enhance AutoML configurations. I reviewed a number of relevant studies to gain insight into how LLMs can optimize various aspects of AutoML, and I explored examples of how Neural Architecture Search (NAS) has been improved with GPT. As a beginner without prior research experience, I found many of the concepts in these advanced projects challenging to grasp; nevertheless, I have aimed to present my findings clearly and in an organized manner throughout this report.

Large Language Models as Your Machine Learning Experts
Supervisor: DI Shimin / CSE
Student: ZHU Ruofan / RMBI
Course: UROP 1100, Fall

This progress report presents the ongoing implementation and study of the DesiGNN framework, as described in the paper “Computation-friendly Graph Neural Network Design by Accumulating Knowledge on Large Language Models”. The project focuses on leveraging Large Language Models (LLMs) to accumulate knowledge for automating Graph Neural Network (GNN) architecture design. Initial efforts include studying the correlation between graph topology, GNN configurations, and their performance, as well as implementing the graph understanding and knowledge retrieval modules. Preliminary experiments on benchmark datasets such as Cora and PubMed have validated some aspects of the framework. Future work includes integrating LLMs for efficient model refinement, exploring dataset similarities, and conducting extensive testing to assess the framework’s scalability and effectiveness.
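To make the graph understanding step described in the second abstract concrete, the following is a minimal Python sketch of extracting plain topology statistics from Cora and PubMed, assuming PyTorch Geometric and NetworkX are available. The function name describe_topology and the chosen statistics are illustrative only, not DesiGNN’s actual interface; the idea is that such statistics could be correlated with GNN configurations or used to retrieve designs that worked well on topologically similar datasets.

    # Hypothetical sketch of a "graph understanding" step: summarise a
    # benchmark graph as plain statistics so an LLM or a retrieval module
    # can compare datasets and transfer GNN design knowledge between
    # similar ones. Names here are illustrative, not DesiGNN's real API.
    import networkx as nx
    from torch_geometric.datasets import Planetoid
    from torch_geometric.utils import to_networkx

    def describe_topology(data) -> dict:
        """Summarise a graph's topology as a dictionary of statistics."""
        g = to_networkx(data, to_undirected=True)
        degrees = [d for _, d in g.degree()]
        return {
            "num_nodes": g.number_of_nodes(),
            "num_edges": g.number_of_edges(),
            "avg_degree": sum(degrees) / len(degrees),
            "density": nx.density(g),
            "avg_clustering": nx.average_clustering(g),
        }

    if __name__ == "__main__":
        cora = Planetoid(root="data", name="Cora")[0]
        pubmed = Planetoid(root="data", name="PubMed")[0]
        # Statistics like these could be serialised into an LLM prompt, or
        # used to rank previously seen datasets by similarity before
        # retrieving the GNN configurations that performed best on the
        # closest match.
        print("Cora:  ", describe_topology(cora))
        print("PubMed:", describe_topology(pubmed))

Representing each dataset by a small statistics vector keeps the comparison cheap: candidate benchmarks can be ranked by similarity without training any GNN, which matches the computation-friendly goal stated in the paper title.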
RkJQdWJsaXNoZXIy NDk5Njg=