School of Engineering
Department of Mechanical and Aerospace Engineering

Ambular: An Emergency Response Drone
Supervisor: Larry LI / MAE
Student: RAO Dev / AE
Course: UROP 1100, Spring

The Ambular project, initiated by the International Civil Aviation Organization (ICAO) in collaboration with key partners such as HKUST, EHang, Concordia University, CAACSRI, and IMAGINACTIVE, aims to revolutionize urban air mobility for emergency medical transport. Inspired by the visionary designs of futurist Charles Bombardier and the think-tank IMAGINACTIVE, the project focuses on developing an electric vertical take-off and landing (eVTOL) vehicle designed to rapidly transport individuals facing medical emergencies to hospitals. This multidisciplinary aerospace project spans several stages of the design and development process, including aerodynamic testing and analysis in wind tunnels. The project not only seeks to innovate in aircraft design but also emphasizes understanding the practical applications and impacts of these new technologies on quality of life. Through collaborative efforts and extensive research, the Ambular project is helping to establish new standards and practices in aviation, addressing the challenges and risks associated with realizing advanced urban air mobility solutions. In this progress report, we present findings from computational fluid dynamics simulations conducted at angles of attack ranging from -20 to 5 degrees. These simulations provide critical insights into the aerodynamic performance and stability of the eVTOL prototype under various flight conditions, supporting its operational readiness and safety for real-world emergency scenarios.

Generative AI in Design and Manufacturing
Supervisor: LU Yanglong / MAE
Student: FARID Muhammad Shaheer Bin / COMP
JESSANI Anzila / TEMG
Course: UROP 1000, Summer

This study evaluates the capability of current generative AI systems to support computer-aided design (CAD) and computer-aided manufacturing (CAM) tasks across two modalities: text-to-image and text-to-video. Widely used tools, including ChatGPT image generation, DALL-E, and Stable Diffusion, were assessed for producing CAD-style visuals. Two text-to-video pipelines, an open-source ModelScope/Diffusers setup and the web-based Pika workflow, were tested for depicting a basic manufacturing sequence involving 3D printing a cube. Using a shared prompt set and an evaluation rubric covering visual clarity, technical fidelity, adherence to numeric constraints, CAD-style rendering, spatial consistency, and camera stability, the study found that ChatGPT’s image generator produces the most consistent CAD-like, instruction-following linework, while DALL-E often generates more photorealistic imagery but introduces extraneous or incorrect features. Stable Diffusion demonstrated limited reliability for strict CAD patterns. In text-to-video generation, the ModelScope pipeline produced plausible framing of printer and nozzle motion but frequently violated physical causality, and Pika improved composition and anchoring yet struggled with realistic additive deposition. Effective prompt-design heuristics, namely prioritizing early camera constraints, explicitly anchoring moving and fixed elements, and minimizing competing nouns, align with CLIP-based text encoders that truncate prompts at roughly 77 tokens.
Current models are effective for ideation, teaching, and storyboarding but remain unsuitable for precision-critical engineering drawings or accurate process simulation, underscoring the need for constraint-aware, CAD-native, and physics-informed generative systems.
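The ~77-token truncation noted above can be checked before a prompt is submitted. The minimal sketch below is illustrative only, assuming the Hugging Face transformers CLIPTokenizer and the openai/clip-vit-large-patch14 checkpoint used by Stable Diffusion v1.x text encoders; the example prompt is hypothetical and not drawn from the study's prompt set.

    # Python sketch (assumption: transformers installed; checkpoint name is illustrative)
    from transformers import CLIPTokenizer

    # Tokenizer matching the CLIP text encoder used by Stable Diffusion v1.x
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

    # Hypothetical CAD-style prompt, not one of the study's prompts
    prompt = ("Orthographic CAD line drawing of a 20 mm aluminium cube, front view, "
              "thin black linework, dimension annotations, plain white background")

    tokens = tokenizer(prompt)["input_ids"]
    limit = tokenizer.model_max_length  # 77 for CLIP-based encoders

    print(f"{len(tokens)} tokens used out of {limit}")
    if len(tokens) > limit:
        # Text beyond the limit is silently dropped by the encoder,
        # so show what would actually survive truncation.
        kept = tokenizer.decode(tokens[:limit], skip_special_tokens=True)
        print("Prompt exceeds the encoder limit; text kept after truncation:")
        print(kept)

Placing camera and anchoring constraints early in the prompt, as the heuristics above suggest, keeps them inside the window that survives this truncation.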