My Project Portfolio

About Me 🤓

I am Zhouhao ZHANG, a final-year undergraduate majoring in Automation at Beihang University (BUAA).

This portfolio showcases a selection of projects I undertook during my undergraduate studies. The majority of these projects were completed independently, driven by my own curiosity and enthusiasm, without the guidance of a mentor. They span a wide range of subjects, including 3D computer vision, traditional image processing, deep learning, intelligent algorithms, robotics, and SLAM.

These experiences, although not leading-edge research, have significantly enriched my academic journey and helped me identify my passions. I firmly believe that this valuable practical experience will greatly assist me in my future scientific research endeavors.

My Awards 🏆:

  • CATIC Scholarship (10 places in BUAA), 12/2023
  • The First Prize of the Discipline Competition Scholarship, BUAA, 11/2023
  • The Second Prize of the Outstanding Social Work Scholarship, BUAA, 11/2023
  • Outstanding Student Leader, BUAA, 11/2023
  • The Second Prize of The 22nd China University Robot Competition (ROBOCON), 07/2023
  • The Second Prize of China Intelligent Robot Fighting and Gaming Competition 2022, 03/2023
  • National Scholarship (The top 3 out of 236), The Ministry of Education of the People's Republic of China, 12/2022
  • The Top Prize of Learning Excellence Scholarship (The top 3 out of 154), BUAA, 12/2022
  • The First Prize, The Artificial Intelligence & Robot Creative Design Competition of the 2022 Robot Competition for College Students in Five Provinces (Municipalities and Autonomous Regions) of North China, 11/2022
  • University-level Outstanding Student (The top 2 out of 154), BUAA, 09/2022
  • The Third Prize of the 32nd "FengRu Cup" Competition & Yuyuan Robots Competition, BUAA, 06/2022
  • The Third Prize of The 38th National Physics Competition for College Students in Some Regions of China, 12/2021

Internship 🧑🏻‍💻

  • A national invention patent is pending.

  • Auto Keystone Correction Projector with Structured Light Pair: Calibration is performed with local homographies and Gray-code structured light, while correction is achieved by triangulating keypoint depths and then fitting the projection plane; an accelerometer simultaneously measures the direction of gravity. The correspondence between keypoints on the wall and keypoints on the projection screen is used to compute a homography matrix, from which the actual display area is recovered (a homography sketch appears after this list). The core problem of this work is finding the largest and sharpest inscribed rectangle within an arbitrary convex projected quadrilateral. In the future, an algorithm for automatically avoiding obstacles on the wall will be developed to improve the user experience.

  • Auto Keystone Correction Projector with TOF: Plane detection uses the VL53L5CX multi-zone Time-of-Flight (TOF) sensor from STMicroelectronics. Data fluctuations are reduced by filtering, and robustness is further improved with the Random Sample Consensus (RANSAC) algorithm; a plane-fitting sketch follows this list. The projection geometry of the ultra-short-throw projector is described with an equivalent ideal pinhole model. In other respects, this project follows a methodology similar to the previous one.
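
A minimal sketch of the homography step in the structured-light project above, assuming OpenCV; the screen resolution, wall coordinates, and keypoints below are illustrative stand-ins rather than real calibration data.

```python
import cv2
import numpy as np

# Hypothetical matched keypoints: corners in projector-screen coordinates and the
# corresponding points observed on the wall (values are placeholders).
screen_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
wall_pts   = np.float32([[35, 60], [1850, 20], [1900, 1110], [10, 1050]])

# Homography mapping projector-screen coordinates onto the wall.
H, _ = cv2.findHomography(screen_pts, wall_pts, method=0)

# Recover the actual display area by warping the screen corners through H;
# the largest sharp inscribed rectangle is then searched for inside this quadrilateral.
corners = screen_pts.reshape(-1, 1, 2)
display_area = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
print(display_area)
```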
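
And a minimal sketch of RANSAC plane fitting for the ToF project, in plain NumPy. The 8x8 zone layout, field of view, depth values, and threshold are assumptions for illustration, not the sensor configuration used in the product.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.01):
    """Fit a plane n.p + d = 0 to 3-D points with RANSAC; return (n, d, inlier mask)."""
    rng = np.random.default_rng(0)
    best = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:          # skip degenerate (collinear) samples
            continue
        n = n / np.linalg.norm(n)
        d = -n.dot(sample[0])
        inliers = np.abs(points @ n + d) < inlier_thresh
        if best is None or inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Hypothetical 8x8 ToF zones viewing a flat wall 1.5 m away with a ~45 degree FoV;
# each zone's axial depth is back-projected through its ray direction.
z = np.full(64, 1.5)                                  # placeholder depth map (metres)
ang = (np.arange(8) - 3.5) * np.deg2rad(45 / 8)       # per-zone ray angles
ax, ay = np.meshgrid(ang, ang)
pts = np.column_stack([z * np.tan(ax.ravel()), z * np.tan(ay.ravel()), z])
n, d, inliers = ransac_plane(pts)
print(n, d, inliers.sum())
```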

Praise from the project manager: "You have changed my impression of the post-00s generation. Introduce me to more students with good character like you, Zhang. I'll bring you all together as a team, and you'll be the lead."

Robotics Team of BUAA 🤖

  • Auto-shoot Algorithm Based on Deep Learning for Racing Robot in CURC ROBOCON 2023: A priori localization for target identification is obtained by fusing data from a LiDAR, wheel odometry, and an Inertial Measurement Unit (IMU). Target identification relies on a well-trained deep learning model with data augmentation. Combining the detections with the localization data, precise angular deviations are computed and sent to the motor driver, enabling precise, automated shooting (see the aiming sketch after the related video below). Our robot's strong performance at CURC ROBOCON 2023 validated the algorithm's precision and robustness.

Related Video
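
A toy sketch of the aiming step described above: given the fused robot pose and a known target position on the field, compute the yaw deviation that would be sent to the motor driver. The names and numbers are illustrative, not the team's actual code.

```python
import math

def yaw_error(robot_x, robot_y, robot_yaw, target_x, target_y):
    """Angular deviation (rad) between the robot's heading and the bearing to the target."""
    bearing = math.atan2(target_y - robot_y, target_x - robot_x)
    err = bearing - robot_yaw
    return math.atan2(math.sin(err), math.cos(err))       # wrap to (-pi, pi]

# Example: pose from the LiDAR / wheel-odometry / IMU fusion, target pole from the detector.
robot_pose = (2.0, 1.0, math.radians(30))                  # x [m], y [m], yaw [rad]
target = (5.0, 4.0)                                        # a priori target position [m]
print(math.degrees(yaw_error(*robot_pose, *target)))       # deviation commanded to the turret
```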

  • Decision-making algorithm for autonomous robots in ROBOCON 2024: The ROBOCON 2024 theme requires autonomous robots to achieve a significant victory in Zone 3. The victory condition is occupying three granaries; a granary is occupied when at least two of your team's balls are inside it and your team's ball is on top. This places high demands on the robots' autonomous decision-making. To address this, I designed an algorithm based on minimax search with alpha-beta pruning, together with a simulation interface. What sets it apart from a traditional turn-based game tree is that the robot may choose to skip its own turn and wait for the opponent to act, which better matches the context of this competition and gives our autonomous robots greater flexibility (a condensed sketch of this search appears below).
  • Target trajectory analysis with stereo camera: To reduce the computational cost of the deep learning component, a sliding window is introduced that leverages the recognition results of the previous frame. Triangulation is applied to compute the target's three-dimensional coordinates (see the triangulation sketch below), and Kalman filtering is used to smooth the data, predict missing detections, and improve overall robustness.
  • Team entry test: A test I set for prospective team members. It is a camera pose estimation task: with known three-dimensional coordinates, the camera pose is computed using the Perspective-n-Point (PnP) principle, combining corner detections matched against the previous frame to establish 2D-3D correspondences. By continuously recognizing these points, the camera's trajectory can be traced and plotted (a PnP sketch also appears below). I uploaded the demonstration video to the internet, where it received widespread attention and discussion.

Related Video
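
A condensed sketch of the ROBOCON 2024 decision algorithm described above: minimax with alpha-beta pruning in which the move list always contains a "pass" action, so the robot can wait for the opponent. The state interface here is a hypothetical stand-in for the actual simulation interface.

```python
PASS = None  # special action: skip our own turn and let the opponent act

def alphabeta(state, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a state object assumed to expose
    is_terminal(), evaluate(), legal_moves(maximizing) and apply(move, maximizing)."""
    if depth == 0 or state.is_terminal():
        return state.evaluate(), None
    best_move = PASS
    moves = [PASS] + list(state.legal_moves(maximizing))   # passing is always allowed
    if maximizing:
        value = float("-inf")
        for move in moves:
            child = state if move is PASS else state.apply(move, maximizing)
            score, _ = alphabeta(child, depth - 1, alpha, beta, False)
            if score > value:
                value, best_move = score, move
            alpha = max(alpha, value)
            if alpha >= beta:
                break                                       # beta cutoff
    else:
        value = float("inf")
        for move in moves:
            child = state if move is PASS else state.apply(move, maximizing)
            score, _ = alphabeta(child, depth - 1, alpha, beta, True)
            if score < value:
                value, best_move = score, move
            beta = min(beta, value)
            if alpha >= beta:
                break                                       # alpha cutoff
    return value, best_move
```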
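
For the stereo-trajectory item, a sketch of the triangulation step on a rectified stereo pair; the intrinsics, baseline, and pixel coordinates are illustrative, and fx == fy is assumed. The resulting 3-D points would then feed the Kalman filter.

```python
import numpy as np

def triangulate(uv_left, uv_right, fx, cx, cy, baseline):
    """Recover a 3-D point (camera frame) from a rectified stereo pair via disparity."""
    disparity = uv_left[0] - uv_right[0]
    Z = fx * baseline / disparity                 # depth from similar triangles
    X = (uv_left[0] - cx) * Z / fx
    Y = (uv_left[1] - cy) * Z / fx                # assumes fy == fx
    return np.array([X, Y, Z])

# Matched detections of the target in the left and right images (illustrative values).
point = triangulate((700.0, 420.0), (640.0, 420.0),
                    fx=800.0, cx=640.0, cy=360.0, baseline=0.12)
print(point)    # fed to the Kalman filter that smooths and predicts the trajectory
```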
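
And for the entry test, a sketch of the PnP step with OpenCV. The 3-D points, pixel detections, and intrinsics below are placeholders, not the data from the actual test scene.

```python
import cv2
import numpy as np

# Known 3-D coordinates and their detected 2-D pixel locations (placeholder values),
# matched frame-to-frame as described above.
object_pts = np.float32([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0],
                         [0, 0, 0.1], [0.1, 0, 0.1]])
image_pts  = np.float32([[320, 240], [400, 238], [402, 318], [322, 320],
                         [318, 160], [398, 158]])
K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])   # illustrative intrinsics
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec          # camera centre in the world frame
print(ok, camera_position.ravel())     # positions over time trace the plotted trajectory
```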

  • Team training: This slide deck serves as a technical guide for new team members. It is designed to teach newcomers essential topics, including image processing and 3D vision, as well as giving a brief introduction to Linux and ROS.

Soft Robotics Lab 🐙

Related Video

Praise from the doctoral students of the research group after my first group meeting: "Zhouhao, I think you're excellent, and you've already got a rough idea of our project today. Our team members are all hardworking, and we've been striving to do interesting and innovative research. If you're interested in this project, I'd like to invite you to join us. Let's work together and aim to publish a high-quality paper."

AI Program in NUS 🇸🇬

  • Seq2Seq population forecasting model: I led my team in applying a Seq2Seq model to the population forecasting assignment, going beyond the basic regression model required by the professor (a minimal sketch follows). We were named the winning team of the NUS Artificial Intelligence and Machine Learning Summer Course and were praised by Prof. Mehul Motani.
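
A minimal sketch of the kind of encoder-decoder forecaster used, in PyTorch; the GRU sizes, horizon, and input data are illustrative assumptions, not the course model.

```python
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """GRU encoder-decoder: encode a history of yearly values, decode future steps."""
    def __init__(self, hidden=64, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history):                 # history: (batch, seq_len, 1)
        _, h = self.encoder(history)
        step = history[:, -1:, :]                # start decoding from the last observation
        outputs = []
        for _ in range(self.horizon):
            out, h = self.decoder(step, h)
            step = self.head(out)                # next predicted value, fed back in
            outputs.append(step)
        return torch.cat(outputs, dim=1)         # (batch, horizon, 1)

model = Seq2SeqForecaster()
pred = model(torch.randn(8, 20, 1))              # 8 series, 20 past years -> 5 future years
print(pred.shape)
```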

Course Projects 📚

I take every in-class experiment seriously, cherish these practical opportunities, and always go beyond the assigned tasks. This seriousness is also reflected in my grades.

  • PointNet/PointNet++ point cloud segmentation: I led the team in a deep dive into the architectures of PointNet and PointNet++. Using a shared backbone with different heads, both point-cloud classification and segmentation were realized. I explored the properties of T-Net in depth and experimented with changes to its structure, such as adding residual connections, to compare performance (a T-Net sketch appears after this list).
  • Comparison experiments between CNN and dense networks: Inspired by Professor Mu Li's Dive into Deep Learning, I personally constructed various classic neural networks for the MNIST and Fashion-MNIST datasets and compared their performance and parameter counts (see the sketch after this list). To better understand how convolutional neural networks work, I visualized the output of each layer of LeNet. Below are some illustrative figures from my experimental report.
  • Experiments on Medical Image Segmentation (Liver)

  • Experiments on Medical Image Segmentation (Retinal Vessels): The above two are medical image segmentation experiments I conducted. I reproduced the classic U-Net in PyTorch and experimented with various hyperparameters to achieve good training and convergence on a small dataset, with satisfying results. During liver CT segmentation, I noticed that the intensity distribution of the raw data was very uneven; after some initial failures, I normalized the data to overcome this (see the sketch after this list) and ultimately obtained successful results.

  • EEG-based Motor Imagery Classification: Inferring motor imagery from EEG signals has always been challenging. I read the EEGNet paper and implemented its network structure in PyTorch. During the experiment, I gained a deep understanding of the roles of group convolution, depth-wise convolution, and point-wise convolution (a small sketch appears after this list). In the end, my results ranked at the top of the class for both the binary and the four-class classification tasks.
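
A sketch of the T-Net variant experimented with in the PointNet item above: the standard input-transform network with a residual branch added around the point-wise MLP. Layer sizes follow the original PointNet design; the residual branch illustrates the kind of modification tried, and the exact form used in my experiments may differ.

```python
import torch
import torch.nn as nn

class TNet(nn.Module):
    """PointNet input-transform net predicting a k x k alignment matrix,
    with an extra residual connection added around the point-wise MLP."""
    def __init__(self, k=3):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Conv1d(k, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
                                 nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
                                 nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU())
        self.skip = nn.Conv1d(k, 1024, 1)              # residual branch (the tweak being tested)
        self.fc = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(),
                                nn.Linear(512, 256), nn.ReLU(),
                                nn.Linear(256, k * k))

    def forward(self, x):                              # x: (batch, k, num_points)
        feat = self.mlp(x) + self.skip(x)              # residual connection
        feat = torch.max(feat, dim=2).values           # symmetric max pooling over points
        mat = self.fc(feat).view(-1, self.k, self.k)
        eye = torch.eye(self.k, device=x.device).unsqueeze(0)
        return mat + eye                               # bias towards the identity transform

points = torch.randn(4, 3, 1024)
print(TNet()(points).shape)                            # (4, 3, 3)
```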
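
For the CNN-versus-dense comparison, a sketch of two of the kinds of models built on (Fashion-)MNIST and how their parameter counts can be compared; the dense network's width is an illustrative choice, not the exact configuration from the report.

```python
import torch
import torch.nn as nn

# A LeNet-style CNN and a plain dense (fully connected) baseline for 28x28 inputs.
lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5, padding=2), nn.Sigmoid(), nn.AvgPool2d(2),
    nn.Conv2d(6, 16, 5), nn.Sigmoid(), nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Sigmoid(),
    nn.Linear(120, 84), nn.Sigmoid(),
    nn.Linear(84, 10))

dense = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))

for name, model in [("LeNet", lenet), ("Dense", dense)]:
    n_params = sum(p.numel() for p in model.parameters())
    print(name, n_params, model(torch.zeros(1, 1, 28, 28)).shape)

# Per-layer feature maps for visualization can be captured with forward hooks, e.g.
# lenet[0].register_forward_hook(lambda m, inp, out: feature_maps.append(out)).
```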
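
A sketch of the normalization that resolved the liver-CT failures mentioned above: clip the CT intensities to a soft-tissue window and rescale to [0, 1] before feeding slices to the U-Net. The window bounds here are a common soft-tissue choice, assumed rather than taken from the report.

```python
import numpy as np

def normalize_ct(slice_hu, hu_min=-200.0, hu_max=250.0):
    """Clip a CT slice (Hounsfield units) to a soft-tissue window and rescale to [0, 1]."""
    clipped = np.clip(slice_hu, hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)

raw = np.random.uniform(-1000, 1000, size=(512, 512))   # placeholder CT slice in HU
x = normalize_ct(raw)
print(x.min(), x.max())      # the U-Net then trains on the windowed, rescaled slices
```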
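
And a small sketch of the convolution types that EEGNet builds on: a depth-wise convolution (groups equal to the input channels) followed by a point-wise 1x1 convolution. The channel counts and kernel length are illustrative, not EEGNet's exact hyperparameters.

```python
import torch
import torch.nn as nn

# Depthwise-separable convolution: each channel is filtered independently
# (groups=in_channels), then a 1x1 point-wise convolution mixes channels.
in_ch, out_ch = 16, 32
separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=(1, 15), padding=(0, 7), groups=in_ch, bias=False),
    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
)
x = torch.randn(8, in_ch, 1, 128)        # (batch, channels, spatial, time samples)
print(separable(x).shape)                # torch.Size([8, 32, 1, 128])
```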

Related Video: I explored the differences between various heuristic functions in robot path planning tasks and summarized my findings in an experimental report (a sketch of the heuristics follows). A visualization of the experiment has been uploaded to Bilibili.
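
A sketch of the kind of heuristic functions compared in that report, for grid-based path planning; the weight and coordinates are illustrative choices.

```python
import math

# Common grid-search heuristics (a 4-connected grid is assumed).
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])      # admissible for 4-connected moves

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])     # admissible but less tight on this grid

def weighted(a, b, w=1.5):
    return w * manhattan(a, b)                      # inadmissible: faster search, possibly suboptimal paths

start, goal = (0, 0), (7, 5)
for h in (manhattan, euclidean, weighted):
    print(h.__name__, h(start, goal))
```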

to be continued

Gallery 🎞️

The following images were taken on medium format film.
