School of Engineering
Department of Computer Science and Engineering

Graph Machine Learning Methods for Scientific Discovery
Supervisor: SONG Yangqiu / CSE
Student: TIAN Hangyu / DSCT
Course: UROP 1100, Spring

Graph machine learning, also known as machine learning with graphs, has become an important area of artificial intelligence. It has been applied in many fields, including social networks, biological systems, and knowledge graphs, to uncover relationships between nodes and to predict how a graph will evolve. The main motivation for using graph machine learning is its ability to model and exploit complex relationships, improving predictive performance and revealing hidden patterns. Recent research has produced some promising and effective models that advance predictive accuracy in this area. However, the field still faces many challenges, including scalability, interpretability, and domain-specific adaptability, and continued research and innovation are required.

Mental State Reasoning for Large Language Models
Supervisor: SONG Yangqiu / CSE
Student: FAN Yixiang / COGBM
         TAN, Shaun Tyler / COGBM
Course: UROP 1000, Summer
        UROP 1100, Summer

Recent advancements in Large Language Models (LLMs) have given AI systems such as ChatGPT the ability to accurately infer the intentions of users from the prompts they are given. In light of this capability, researchers have begun investigating the presence of Theory of Mind (ToM) within LLMs. Theory of Mind is characterized as the ability to recognize that distinct individuals can hold distinct mental states and to ascribe those states to each individual when they exist. Typical prompting strategies for probing ToM in LLMs involve generating a hypothetical situation in which theory of mind would be exhibited and prompting the LLM with that scenario. This study uses a negotiation-based scenario as a means to benchmark ToM in LLMs. The results of the study find that, consistent with other studies, existing LLMs, including state-of-the-art models, have limited capabilities for ToM. Although these capabilities pass the threshold for a proof of concept, they are too limited to be considered significant.

Reasoning with Large Foundation Models
Supervisor: SONG Yangqiu / CSE
Student: KWOK, Sze Heng Douglas / MATH-CS
Course: UROP 1100, Spring

Large Language Models (LLMs) are renowned for their multi-task abilities and adaptability to downstream tasks. One application of LLMs is understanding purchase intentions in E-commerce scenarios. While there have been previous attempts to use LLMs to understand purchase intentions, none could generate meaningful, human-centric intentions applicable to real-world contexts. During this UROP project, I contributed to writing the IntentionQA research paper (Ding et al., 2024) as the third author. IntentionQA is a two-task multiple-choice question-answering benchmark that evaluates how well LLMs comprehend purchase intentions in the context of E-commerce. I contributed primarily by implementing the experiments for 7 of the 19 language models using PyTorch.
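As a rough illustration of how the graph machine learning methods surveyed in the first abstract aggregate information over node relationships, the sketch below implements a single graph convolutional layer in PyTorch. The layer, the toy graph, and all dimensions are illustrative assumptions, not the model studied in the project.

# A minimal sketch of one graph convolution: each node's new feature is a
# normalised weighted average of its neighbours' features.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops and symmetrically normalise the adjacency matrix.
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(self.linear(a_norm @ h))

# Toy graph: 4 nodes, undirected edges 0-1 and 2-3, 8-dim node features.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 0., 0.],
                    [0., 0., 0., 1.],
                    [0., 0., 1., 0.]])
features = torch.randn(4, 8)
layer = GCNLayer(8, 16)
print(layer(features, adj).shape)  # torch.Size([4, 16])

Stacking several such layers lets information propagate over multi-hop neighbourhoods, which is what allows these models to uncover relationships beyond a node's immediate neighbours.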
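The negotiation-based prompting strategy described in the second abstract can be made concrete with a small sketch. The scenario text, the question wordings, and the query_llm stub below are all hypothetical stand-ins, not the study's actual benchmark.

# A minimal sketch of a negotiation-based Theory of Mind probe: a scenario
# with private information is paired with questions that each target a
# distinct mental state (belief, desire, intention).
SCENARIO = (
    "Alice is selling a used laptop and privately will not accept less "
    "than $300. Bob privately will not pay more than $350. Bob opens by "
    "offering $250. Alice counters with $400."
)

QUESTIONS = {
    "belief": "What does Bob likely believe Alice's minimum price is?",
    "desire": "What outcome does Alice want from the negotiation?",
    "intention": "Why did Alice counter with $400 instead of $300?",
}

def build_prompt(scenario: str, question: str) -> str:
    """Combine the negotiation scenario with a single ToM question."""
    return f"{scenario}\n\nQuestion: {question}\nAnswer briefly."

def query_llm(prompt: str) -> str:
    """Hypothetical stub; replace with a call to an actual LLM API."""
    raise NotImplementedError

for kind, question in QUESTIONS.items():
    print(f"--- {kind} ---\n{build_prompt(SCENARIO, question)}\n")

Because Alice's and Bob's reservation prices are private, answering the belief question correctly requires the model to track what each party knows, rather than what is true, which is the core of a ToM test.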
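For multiple-choice benchmarks like IntentionQA in the third abstract, one common way to evaluate a causal language model in PyTorch is to score each candidate answer by the log-likelihood the model assigns to its tokens and pick the highest-scoring option. The sketch below shows this approach with Hugging Face transformers; the model name, question, and options are placeholders, and the paper's actual evaluation pipeline may differ.

# A minimal sketch of log-likelihood scoring for multiple-choice QA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates 19 models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def option_log_likelihood(question: str, option: str) -> float:
    """Sum the log-probabilities the model assigns to the option tokens."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    option_ids = tokenizer(" " + option, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, option_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position t predict token t+1; slice out the option span.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    option_positions = range(prompt_ids.size(1) - 1, input_ids.size(1) - 1)
    return sum(
        log_probs[pos, input_ids[0, pos + 1]].item() for pos in option_positions
    )

question = "A customer buys a yoga mat and resistance bands. Their likely intention is:"
options = ["to start a home workout routine", "to repair a bicycle"]
scores = [option_log_likelihood(question, o) for o in options]
print("predicted:", options[scores.index(max(scores))])

Scoring options by likelihood avoids parsing free-form generations, which keeps the evaluation uniform when the same benchmark is run across many different models.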