Interdisciplinary Programs Office
Division of Emerging Interdisciplinary Areas

Can We Build a Cloud from the Crowd? Cloud Computing on Smartphones
Supervisor: HUI Pan / EMIA
Student: CHAN Kei Chi / COMP
Course: UROP1100, Spring

The proliferation of mobile devices and the advancement of their capabilities have opened new methods for handling computationally intensive tasks. The latest mobile phones can render complex graphics in real time for games with their advanced hardware. What if we could gather these capable tiny computers and have them solve a complex task collaboratively? Places such as educational institutions and companies are crowded with people, and thus have many phones available; they are also where research requiring taxing computation, such as machine learning and AR, is often performed. This research focuses on unleashing this potential and finding a new way to handle such tasks quickly and cheaply. This report proposes a task scheduler that coordinates mobile devices to tackle complex tasks together. It captures the structure of a task as a graph, breaks the task down, and assigns the sub-tasks to devices over the network. It also automatically prioritizes sub-tasks that could become bottlenecks and ensures they execute at the fastest speed. To tolerate network errors, it includes a recovery mechanism that restarts any sub-task that does not return successfully. The main goal of the scheduler is to finish the task as fast as possible by distributing sub-tasks across a scalable network of mobile devices. This report delves into its design choices, the development progress, and future steps.

Augmented Reality on Wearable Devices
Supervisor: HUI Pan / EMIA
Student: CHE Siu Hei / COMP
Course: UROP1100, Spring

This report demonstrates a method for hand gesture recognition using machine learning models.
Instead of using camera tracking, which is what most commercial virtual reality and augmented reality headsets use for hand gesture recognition, a glove-based approach is not subject to a camera's limitations: it still works when the user's hands are occluded or outside the camera's field of view. Using an IMU glove for data collection, the data are preprocessed and fed into two kinds of machine learning models, a CNN and an FNN. A real-time test is then conducted to assess the performance of the models. The results indicated that the FNN model is better at recognizing gestures than the CNN model.
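The pipeline described above (preprocessed IMU glove features fed into a feed-forward classifier) can be sketched as follows. This is a minimal illustration only: the feature dimension, layer sizes, number of gesture classes, and the randomly initialized weights are assumptions standing in for the report's actual trained model and data.

```python
import numpy as np

# Illustrative shapes: each preprocessed IMU glove sample is assumed to be
# a 60-dim feature vector (e.g. 10 sensors x 6 accel/gyro channels),
# classified into one of 4 hypothetical gesture classes.
N_FEATURES, N_HIDDEN, N_CLASSES = 60, 32, 4

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for trained parameters.
W1 = rng.normal(0.0, 0.1, (N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def fnn_predict(x: np.ndarray) -> int:
    """Forward pass of a small feed-forward network (FNN):
    one ReLU hidden layer followed by a softmax output."""
    h = np.maximum(0.0, x @ W1 + b1)         # hidden layer with ReLU
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs))             # index of predicted gesture

# A single synthetic "preprocessed IMU sample" run through the network.
sample = rng.normal(0.0, 1.0, N_FEATURES)
gesture = fnn_predict(sample)
```

In a real-time setting, `fnn_predict` would be called on each incoming window of glove readings; a CNN variant would instead convolve over the time axis of the raw sensor sequence before the dense layers.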