School of Engineering
Department of Electronic and Computer Engineering

The Integration and Characterization of an Artificial Compound Eye

Supervisor: WONG Man / ECE
Student: HONG Guangbin / ELEC
Course: UROP 2100, Fall

This project continues the work of the summer semester. Toward the ultimate goal of using a Spiking Neural Network (SNN) to recognize dynamically written digits, it serves as a trial and verification of various algorithms and techniques. A revised model with updated algorithms is built to address deficiencies in the previous one and to integrate several promising ideas to raise prediction accuracy. Although the current model is restricted to identifying static handwriting, combining it with software that records and analyzes writing dynamically opens the way for further upgrades. This project plays a critical role in exploring the capabilities and prospects of SNNs.

Using Large Language Model (LLM) to Assist Digital Circuit Design

Supervisor: XIE Zhiyao / ECE
Student: XIANG Jiuyao / CPEG
Course: UROP 1100, Spring

This study explores the application of Large Language Models (LLMs) in automating Verilog code optimization for low-power and area-efficient hardware design. While LLMs cannot fundamentally alter the algorithmic bounds of a design’s power-area tradeoffs, they significantly enhance optimization efficiency through structured prompt engineering. By embedding hardware-specific constraints into prompts and employing Chain-of-Thought workflows, LLMs automate repetitive tasks such as logic substitution, resource sharing, and syntax correction, reducing trial-and-error cycles. Case studies on ALUs and counters demonstrate reductions in design iterations, with LLMs proposing synthesis-aware modifications. The framework bridges semantic gaps between abstract code generation and physical implementation, enabling engineers to focus on high-impact architectural decisions. Though transformative power-performance-area (PPA) gains remain architecture-dependent, LLMs emerge as collaborative tools that democratize optimization expertise and accelerate design convergence, particularly for complex, multi-objective RTL refinement.

Using Large Language Model (LLM) to Assist Digital Circuit Design

Supervisor: XIE Zhiyao / ECE
Student: ZHOU Hangan / CPEG
Course: UROP 1100, Summer

In recent years, Large Language Models (LLMs) have shown immense promise in automating Register Transfer Level (RTL) design. However, accurately evaluating their true capabilities is hindered by flaws in existing benchmarking methodologies. This report details my contributions to the “RTL-Rev” project, which conducted a comprehensive analysis of these benchmarks. My primary role involved systematically evaluating a wide range of LLMs, from commercial models like GPT-4o to open-source and fine-tuned versions, against established benchmarks such as VerilogEval, RTLLM, and CVDP. I performed in-depth failure analysis, categorizing errors to distinguish between genuine model limitations and issues stemming from ambiguous prompts or flawed testbenches. My work was instrumental in identifying and documenting these “fake challenges”, ultimately contributing to a more accurate assessment of LLM performance and the development of improved evaluation practices.
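The first abstract above does not specify its network architecture, but the general technique it names can be pictured with a minimal sketch: a static digit image is rate-coded into spike trains and fed through one layer of leaky integrate-and-fire neurons. The functions `poisson_encode` and `lif_layer`, and every parameter value below, are illustrative assumptions, not the project’s actual model.

```python
import numpy as np

def poisson_encode(image, n_steps=100, max_rate=0.5, rng=None):
    """Rate-code a grayscale image (values in [0, 1]) into a binary spike
    train of shape (n_steps, n_pixels): brighter pixels fire with higher
    probability at each time step."""
    rng = np.random.default_rng(rng)
    rates = image.ravel() * max_rate            # per-step firing probability
    return (rng.random((n_steps, rates.size)) < rates).astype(np.float32)

def lif_layer(spikes, weights, tau=20.0, v_th=1.0):
    """Drive a layer of leaky integrate-and-fire neurons with the encoded
    spikes; the membrane potential decays with time constant tau and
    resets to zero after an output spike."""
    v = np.zeros(weights.shape[1])
    out = np.zeros((spikes.shape[0], weights.shape[1]), dtype=np.float32)
    decay = np.exp(-1.0 / tau)
    for t, s in enumerate(spikes):
        v = decay * v + s @ weights             # leak, then integrate input
        fired = v >= v_th
        out[t] = fired
        v[fired] = 0.0                          # reset neurons that spiked
    return out

# Toy usage: a random 28x28 "digit" fed to 10 output neurons; the most
# active output would be taken as the predicted class.
img = np.random.default_rng(0).random((28, 28))
spikes = poisson_encode(img)
w = np.random.default_rng(1).normal(0, 0.1, size=(28 * 28, 10))
counts = lif_layer(spikes, w).sum(axis=0)
print("predicted class:", int(counts.argmax()))
```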
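The prompt-engineering workflow described in the second abstract can likewise be sketched: hardware constraints are embedded directly in the prompt, and the model is walked through an explicit chain-of-thought checklist before emitting any RTL. The template wording, the constraint fields, and the `llm_complete` callable are all assumptions; any chat-completion API could be substituted.

```python
# Hypothetical prompt template for constraint-aware Verilog optimization.
PROMPT_TEMPLATE = """You are a Verilog optimization assistant.

Target constraints:
- Max area: {max_area} um^2 (post-synthesis, {library} library)
- Max dynamic power: {max_power} mW at {clock_mhz} MHz
- Functional behavior must be preserved exactly.

Reason step by step before writing code:
1. Identify duplicated operators that could share one resource.
2. Propose logic substitutions (e.g. shift-add for a constant multiply).
3. Check that every change is synthesizable and latch-free.
4. Only then emit the optimized module.

Original RTL:
{rtl}
"""

def build_prompt(rtl: str, max_area: float, max_power: float,
                 clock_mhz: int = 100, library: str = "generic 28nm") -> str:
    """Embed the design and its hardware constraints into one prompt."""
    return PROMPT_TEMPLATE.format(rtl=rtl, max_area=max_area,
                                  max_power=max_power, clock_mhz=clock_mhz,
                                  library=library)

def optimize(rtl: str, llm_complete) -> str:
    """One optimization round. llm_complete(prompt) -> str is assumed to
    be supplied by the caller (a commercial or local model endpoint)."""
    return llm_complete(build_prompt(rtl, max_area=1200.0, max_power=0.8))
```

Putting the checklist ahead of the code request is the point of the chain-of-thought step: the model commits to sharing and substitution decisions before generating RTL, which is what reduces the trial-and-error cycles the abstract mentions.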
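Finally, the failure analysis described in the third abstract can be pictured as a small triage routine that attributes each benchmark failure either to the model or to the benchmark itself. The category names, the `Result` fields, and the decision order (testbench validity first, then specification quality, then the model) are assumptions about how such triage might be organized, not the RTL-Rev project’s actual rules.

```python
from dataclasses import dataclass
from enum import Enum

class FailureCategory(Enum):
    MODEL_SYNTAX = "model: invalid Verilog"
    MODEL_FUNCTIONAL = "model: wrong behavior"
    BENCH_AMBIGUOUS_SPEC = "benchmark: ambiguous prompt"
    BENCH_FLAWED_TESTBENCH = "benchmark: flawed testbench"

@dataclass
class Result:
    task_id: str
    compiled: bool          # did the generated RTL compile?
    passed: bool            # did it pass the benchmark testbench?
    reference_passes: bool  # does the benchmark's own golden solution pass?
    spec_flagged: bool      # human-flagged ambiguous specification

def categorize(r: Result) -> FailureCategory | None:
    """Return None for a clean pass; otherwise attribute the failure.
    Benchmark-side problems are checked first, so that a "fake challenge"
    is never counted as a genuine model limitation."""
    if r.passed:
        return None
    if not r.reference_passes:   # the golden answer also fails: bad testbench
        return FailureCategory.BENCH_FLAWED_TESTBENCH
    if r.spec_flagged:           # the prompt itself is ambiguous
        return FailureCategory.BENCH_AMBIGUOUS_SPEC
    if not r.compiled:           # the model emitted uncompilable RTL
        return FailureCategory.MODEL_SYNTAX
    return FailureCategory.MODEL_FUNCTIONAL
```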