School of Engineering
Department of Computer Science and Engineering

Using Large Language Models (LLMs) for Software Development
Supervisor: CHEUNG Shing Chi / CSE
Student: WANG Hesong / COMP
         ZHANG Chenyu / COMP
Course: UROP 1100, Fall

This UROP project is on using Large Language Models (LLMs) for software development. We took part in data gathering and basic tasks for the project. This report presents the two tasks we performed and their results. The first task was evaluating the results of a hand recognition model. The second task was testing GPT-3.5-turbo with differential prompting for software flaw detection; the results show the challenges of using LLMs for this problem, but also highlight the improvement over prompting without the differential technique (a workflow sketch is given after these abstracts). Finally, we read a related paper on DeiT to broaden our horizons in this field.

Using Large Language Models (LLMs) for Software Development
Supervisor: CHEUNG Shing Chi / CSE
Student: ZHONG Yingqi / COMP
Course: UROP 1100, Summer

“CCMD is an essential compiler property that a production compiler should support: the compiler should emit the same machine code regardless of enabling debug information.” With this definition, previous research introduced three mutators designed to enhance the detection of CCMD bugs by altering seed programs. This UROP project replicates and refines these mutators, implemented with the Clang LibTooling library. The resulting tool effectively mutates all C programs produced by Csmith, a random C program generator, demonstrating its robustness. The primary outcomes of this UROP include understanding the fundamental concepts of compiler testing, the Abstract Syntax Tree (AST), and the application of the Clang LibTooling library. Future research could focus on identifying the limitations of the current mutators and developing new ones (a minimal CCMD check is sketched below).

Deep Learning for Skin Lesion Segmentation
Supervisor: CHUNG Albert Chi Shing / CSE
Student: WANG Anbang / DSCT
Course: UROP 1000, Summer

Skin lesion segmentation plays a crucial role in the early diagnosis of skin cancer. Deep learning-based methods have shown promising potential in this task, but their large model sizes and the intensive computational resources required for training make them difficult to deploy on small devices such as mobile phones. Making vision models lightweight has therefore gained a tremendous amount of attention; among these approaches, knowledge distillation, which transfers the knowledge captured by a larger model to produce effective small models, is a promising one. In this report, we apply knowledge distillation to skin lesion segmentation. We compare different distillation schemes, examine their influence on the segmentation task, and attempt to explain the results (a distillation-loss sketch follows below).
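A note on the differential-prompting task in the first LLM project above: the abstract does not describe the exact setup, so the sketch below shows only one plausible two-step workflow (ask the model to infer the program's intention, then generate an independent reference implementation whose behaviour can be compared with the original). The prompts, the `ask` helper, and the file name `candidate.py` are illustrative assumptions rather than the project's actual code; only the GPT-3.5-turbo chat API usage is standard.

```python
# Minimal differential-prompting sketch (illustrative; the project's actual
# prompts and pipeline are not described in the abstract).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt to GPT-3.5-turbo and return the reply text."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


buggy_program = open("candidate.py").read()  # program suspected to contain a flaw

# Step 1: infer the program's intention from its code.
intention = ask("Describe, in one paragraph, what the following program is "
                "intended to do:\n\n" + buggy_program)

# Step 2: generate an independent reference implementation from that intention.
# Divergence between the original and the reference on the same inputs then
# points to a potential flaw.
reference = ask("Write a Python program that implements exactly this "
                "behaviour:\n\n" + intention)
print(reference)
```

Running both versions on shared inputs and diffing their outputs would complete the check, but that harness is omitted here.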
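The CCMD property quoted in the second LLM project (identical machine code whether or not debug information is enabled) lends itself to a simple differential-build check. The sketch below is an assumed harness around a Csmith-generated seed program; the compiler flags and file names are placeholders, and the project's actual Clang LibTooling mutators are not reproduced here.

```python
# Sketch of a CCMD check: compile the same seed program with and without -g,
# then compare the disassembled code sections. Flags and paths are illustrative.
import subprocess


def disassembly(src: str, debug: bool) -> str:
    """Compile src to an object file and return its disassembled text."""
    obj = src + (".g.o" if debug else ".o")
    flags = ["-g"] if debug else []
    subprocess.run(["clang", "-O2", "-c", src, "-o", obj, *flags], check=True)
    out = subprocess.run(["objdump", "-d", obj],
                         check=True, capture_output=True, text=True)
    # Skip the header lines, which contain the object file name and so differ trivially.
    return "\n".join(out.stdout.splitlines()[2:])


src = "seed.c"  # e.g. produced beforehand with a command like: csmith > seed.c

if disassembly(src, debug=False) != disassembly(src, debug=True):
    print("Potential CCMD violation: machine code changes when -g is added.")
else:
    print("Machine code is identical with and without debug information.")
```

A mutator-based workflow would apply the LibTooling transformations to the seed program first and rerun the same comparison on each mutant.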
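For the skin lesion segmentation project, the abstract does not specify which distillation scheme was used, so the following PyTorch sketch shows one common formulation for binary segmentation: the student is trained against both the ground-truth mask and the teacher's temperature-softened per-pixel predictions. The temperature, weighting, and tensor shapes are assumptions for illustration.

```python
# A common knowledge-distillation loss for binary segmentation (a sketch only;
# the report's actual scheme, models, and hyperparameters are not given here).
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      target_mask: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Combine a supervised term with a soft-target term from the teacher.

    All tensors are assumed to have shape (N, 1, H, W); target_mask holds
    0/1 labels stored as floats.
    """
    # Supervised term: student prediction vs. ground-truth mask.
    hard = F.binary_cross_entropy_with_logits(student_logits, target_mask)

    # Soft term: student matches the teacher's temperature-softened probabilities;
    # the temperature**2 factor keeps gradients comparable across temperatures.
    teacher_prob = torch.sigmoid(teacher_logits / temperature)
    student_prob = torch.sigmoid(student_logits / temperature)
    soft = F.binary_cross_entropy(student_prob, teacher_prob) * (temperature ** 2)

    return alpha * hard + (1.0 - alpha) * soft
```

In training, the teacher would typically be run under torch.no_grad() and kept frozen, with only the student's parameters updated.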