UROP Proceedings 2022-23

School of Engineering
Department of Computer Science and Engineering

Trustworthy Machine Learning
Supervisor: CHENG, Minhao / CSE
Student: BAI, Yihan / COMP
         YE, Yilin / DSCT
Course: UROP1100, Summer
        UROP1000, Summer

Large language models (LLMs) have become increasingly popular due to their capacity to generate high-quality text. Identifying the source of generated text is essential for safeguarding user copyright, ensuring security, and combating malicious behavior. In this report, we assess the detection success rate of a watermark model under varying numbers of tokens, under model interference, and across different LLMs. The watermark model embeds an identification code in the generated output to establish a one-to-one association between a client and their output, so that a single output can be matched to a single client. Finally, we evaluate the accuracy achieved in each experiment to determine the influence of each variable.

Trustworthy Machine Learning
Supervisor: CHENG, Minhao / CSE
Student: PENG, Yiqing / CPEG
Course: UROP1100, Fall

Adversarial examples are malicious inputs crafted by malicious users to confuse machine learning models. For an investigator, it is important to trace the user who carried out an adversarial attack. In an industrial context, machine learning models are distributed to users by a service provider, and an adversarial attack is conducted by one user through the model assigned to them. The investigator's duty is to identify the malicious user by analyzing the adversarial examples that the user generated. I have reviewed papers on generating adversarial examples, defending against adversarial attacks, and embedding watermarks to protect the intellectual property of machine learning models.

Building a Blockchain and Smart Contract Application
Supervisor: CHEUNG, Shing-Chi / CSE
Student: PANG, Lok Chi / COMP
Course: UROP1100, Fall

Smart contracts can contain vulnerabilities that make them susceptible to attacks by malicious parties. A common attack is the frontrunning attack, in which an attacker leverages knowledge of pending transactions to act, before those transactions are mined, in order to profit. Attackers may learn of pending transactions by observing their broadcast to the network. However, even with knowledge of a pending transaction, an actual attack is only possible when the smart contract is vulnerable. The goal of this project is to examine a set of frontrunning attacks on the Ethereum blockchain and analyze the vulnerabilities in the affected contracts.
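The watermarking abstract above does not specify the embedding scheme, so the following is only a minimal sketch of one published approach, the green-list watermark of Kirchenbauer et al. (2023), chosen as an illustrative assumption. The function names, the GAMMA parameter, and the idea of folding a per-client key into the seed are hypothetical, not taken from the project.

```python
import hashlib
import math
import random

GAMMA = 0.5  # assumed fraction of the vocabulary placed on the "green" list


def green_list(prev_token_id: int, client_key: str, vocab_size: int) -> set:
    """Pseudo-randomly partition the vocabulary, seeding on the previous
    token and (hypothetically) a per-client key, so that detection on a
    text also identifies which client's key produced it."""
    digest = hashlib.sha256(f"{client_key}:{prev_token_id}".encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    return set(rng.sample(range(vocab_size), int(GAMMA * vocab_size)))


def detection_z_score(token_ids: list, client_key: str, vocab_size: int) -> float:
    """Count tokens that land on their green list. Unwatermarked text hits
    the list with probability GAMMA, so a large z-score signals a watermark."""
    n = len(token_ids) - 1
    hits = sum(
        1
        for prev, cur in zip(token_ids, token_ids[1:])
        if cur in green_list(prev, client_key, vocab_size)
    )
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

In this scheme, generation adds a small bias to the logits of green-list tokens; detection then needs only the token ids and a candidate client key, which is consistent with the report's goal of matching a single output to a single client.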
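For the adversarial-examples project, the abstract mentions surveying papers on generating adversarial examples; the classic method in that literature is the Fast Gradient Sign Method (FGSM) of Goodfellow et al., sketched below in PyTorch. The `model` argument and the `eps` value are placeholders, and the clamp assumes inputs normalized to [0, 1].

```python
import torch
import torch.nn.functional as F


def fgsm(model: torch.nn.Module, x: torch.Tensor, label: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: perturb the input in the direction that
    maximally increases the classification loss, bounded by eps in L-inf."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed-gradient step; the clamp assumes inputs live in [0, 1].
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```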
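The frontrunning abstract explains the attack in prose; the self-contained toy below shows why a large pending trade is profitable to frontrun on a constant-product automated market maker, where any buy predictably raises the price. All reserves and trade sizes are invented for illustration, and this sandwich-style frontrun is only one pattern among the real Ethereum attacks the project examines.

```python
class ConstantProductAMM:
    """Toy x*y=k market: swaps move the price, which is what makes a
    pending large trade profitable to frontrun."""

    def __init__(self, reserve_eth: float, reserve_tok: float):
        self.eth, self.tok = reserve_eth, reserve_tok

    def buy_tokens(self, eth_in: float) -> float:
        """Swap eth_in for tokens; returns tokens received."""
        k = self.eth * self.tok
        self.eth += eth_in
        out = self.tok - k / self.eth
        self.tok -= out
        return out

    def sell_tokens(self, tok_in: float) -> float:
        """Swap tok_in for ETH; returns ETH received."""
        k = self.eth * self.tok
        self.tok += tok_in
        out = self.eth - k / self.tok
        self.eth -= out
        return out


amm = ConstantProductAMM(reserve_eth=1000.0, reserve_tok=1000.0)

# Attacker observes a pending 100-ETH buy in the mempool and frontruns it.
atk_tokens = amm.buy_tokens(50.0)      # attacker's transaction is mined first
victim_tokens = amm.buy_tokens(100.0)  # victim now buys at a worse price
atk_eth = amm.sell_tokens(atk_tokens)  # attacker sells into the higher price

print(f"attacker profit: {atk_eth - 50.0:.2f} ETH")
```

Running the toy prints a profit of roughly 9.7 ETH, extracted entirely from the victim's worsened execution price.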

RkJQdWJsaXNoZXIy NDk5Njg=