DustBusterAI

Project Overview

DustBuster is an AI-powered, fully autonomous differential-drive robot designed to clean flat floors in wide areas, such as airports, in the shortest possible time. DustBuster is equipped with advanced path-planning, decision-making, and optimal motion-control algorithms that enable it to navigate through complex environments while avoiding obstacles. Its powerful electric motors and durable wheels allow it to move quickly and efficiently across large areas, while its integrated cleaning system ensures that every nook and cranny is thoroughly cleaned. The vehicle electronics, including the hardware, firmware, and sensor system, are designed with efficiency and reliability in mind, as the robot needs to operate for extended periods without interruption. This requires careful power management and fault-tolerant design.

DustBusterAI uses the Robot Operating System 2 (ROS2), a well-known and effective framework, to implement AI-based cleaning solutions for the robot. It employs the Gazebo and RViz simulation and monitoring platforms for efficient development.

In addition to its autonomous capabilities, DustBuster also offers a range of features that make it easy to use and maintain. Its intuitive interface allows operators to monitor its progress and adjust settings as needed, while its modular design makes it easy to replace worn components and perform routine maintenance tasks.

Overall, DustBuster represents a significant advancement in the field of autonomous cleaning robots, offering a powerful, efficient, and user-friendly solution for cleaning large areas quickly and effectively.

Autonomous Overview

The autonomous architecture of our DustBuster robot consists of five primary stages: perception, interpretation, planning, and navigation, followed by the actuation (motor drive) stage. Detailed documentation for each stage can be found in the corresponding project folders, and the stages are summarized in the subsections below.

Sense

In the perception (sensing) stage, the robot collects data from various sensors. We first employ a 2D Lidar. This sensor sweeps a laser beam across a 240-degree field of view to detect structures in the environment in two dimensions. Next in line are the wheel encoders, which provide information about the vehicle’s location by measuring the degree of wheel rotation. Finally, the Inertial Measurement Unit (IMU) complements the other sensors and improves reliability by detecting the vehicle’s instantaneous motion.

The raw data obtained is often noisy and may contain outliers or irrelevant information. To ensure the quality and reliability of the data, we employ various filtering techniques to reduce the noise and preserve the valuable information.
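
One simple example of such a technique is a sliding-median filter, which suppresses single-sample outliers (e.g. spurious Lidar reflections). This is a minimal sketch of the idea, not the project's actual filter chain:

```python
from statistics import median

def median_filter(ranges, window=3):
    """Smooth a 1D Lidar scan with a sliding median to suppress outliers.

    `ranges` is a list of distance readings; the window is centered on
    each sample and clamped at the edges of the scan.
    """
    half = window // 2
    filtered = []
    for i in range(len(ranges)):
        lo = max(0, i - half)
        hi = min(len(ranges), i + half + 1)
        filtered.append(median(ranges[lo:hi]))
    return filtered

# A single spurious spike (e.g. a stray reflection) is replaced by a
# neighboring reading:
scan = [2.0, 2.1, 9.9, 2.2, 2.3]
smoothed = median_filter(scan)
```

A median is preferred over a plain moving average here because one extreme outlier barely shifts the median, while it would drag an average noticeably.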

For a deeper dive into DustBuster’s sensing mechanisms, please refer to the following resources:

Interpret

The next stage is interpretation. In this stage, the processed sensor data is evaluated to gain an understanding of the robot’s environment. This process aims to create environmental awareness for the robot, answering critical questions such as “Where am I currently located?” and “What obstacles are present in my surroundings?”

We combine sensor data for localization and mapping, thereby obtaining more precise and reliable information. Initially, we determine the vehicle’s current position by passing encoder and IMU data through a Kalman filter. We then utilize the Lidar data directly to generate the map.
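
As a minimal sketch of this fusion, consider a scalar Kalman filter that predicts with an encoder increment and corrects with an independent position measurement. The one-dimensional state and the noise variances here are illustrative simplifications of the real multi-dimensional filter:

```python
def kalman_step(x, P, u, z, Q=0.05, R=0.2):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : previous state estimate and its variance
    u    : odometry increment (e.g. distance from the wheel encoders)
    z    : independent measurement of the state
    Q, R : process and measurement noise variances (illustrative values)
    """
    # Predict: advance the state with the encoder increment.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the measurement, weighted by the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Start from an uncertain estimate and fuse three odometry/measurement pairs.
x, P = 0.0, 1.0
for u, z in [(0.1, 0.12), (0.1, 0.21), (0.1, 0.33)]:
    x, P = kalman_step(x, P, u, z)
# The variance P has shrunk well below its prior of 1.0.
```

The estimate settles between pure odometry and the raw measurements, weighted by how much each source is trusted.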

The sensing and interpretation stages can be summarized with the schematic below.

Sensing & Interpretation Stages Schematic

For a comprehensive understanding of the interpretation process, please refer to the following detailed documentation:

Plan

We then proceed to the planning stage. Using artificial intelligence algorithms and map data, we determine the general route that the robot will take. The goal here is to find the optimal path that ensures efficiency and safety while achieving the desired cleaning objectives.

We use evolutionary algorithms to determine the optimal path at specified time stamps according to the situation. The simulation below shows the optimal path determined at the robot’s initial state on the Unity platform.

Route Determined by Genetic Algorithm Visualized in Unity Platform
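
The idea can be illustrated with a toy genetic algorithm that orders a handful of cleaning waypoints. The operators (a simplified order crossover, swap mutation, truncation selection), fitness, and parameters here are assumptions for the sketch, not the project's actual implementation:

```python
import random

def tour_length(order, pts):
    """Total Euclidean length of visiting the points in the given order."""
    return sum(((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

def crossover(p1, p2):
    """Simplified order crossover: keep a slice of p1, fill the rest from p2."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = p1[a:b]
    child += [g for g in p2 if g not in child]
    return child

def mutate(order, rate=0.1):
    """Occasionally swap two waypoints."""
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def evolve(pts, pop_size=40, generations=100):
    """Evolve visiting orders; truncation selection keeps the best half."""
    pop = [random.sample(range(len(pts)), len(pts)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda o: tour_length(o, pts))

random.seed(0)  # deterministic for the example
waypoints = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # hypothetical cleaning spots
best = evolve(waypoints)
```

With only five waypoints this quickly converges to a near-shortest visiting order; the real planner presumably evaluates candidate routes against the mapped environment rather than straight-line distances.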

Navigate

Following planning, we reach the navigation stage. Here, the paths to the planned waypoints are generated, producing real-time velocity commands for the robot. This stage ensures the smooth and accurate execution of the planned route, responding dynamically to any changes in the environment.
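
As an illustration of turning a waypoint into velocity commands, here is a simple proportional go-to-goal law for a differential-drive base. The gains and speed limit are assumed values for this sketch, not the project's tuned parameters:

```python
import math

def velocity_command(pose, waypoint, k_lin=0.5, k_ang=1.5, v_max=0.4):
    """Steer a differential-drive robot from pose = (x, y, heading) toward
    waypoint = (x, y), returning a (linear, angular) velocity pair.
    """
    x, y, theta = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - theta
    # Wrap the heading error into [-pi, pi].
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    v = min(k_lin * distance, v_max)  # slow down near the goal, cap elsewhere
    w = k_ang * heading_error         # turn toward the waypoint
    return v, w

# From the origin, facing +x, head toward a waypoint at (1, 1).
v, w = velocity_command((0.0, 0.0, 0.0), (1.0, 1.0))
```

Because the command is recomputed every cycle from the current pose, the robot responds dynamically if the environment or the plan changes mid-route.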

Below is a video demonstration showcasing how DustBuster operates within an environment that is completely unknown to it initially.

Demonstration of Route Navigation via Genetic Algorithm in Gazebo Simulation

For a more thorough understanding of our navigation approach, consider checking out the following resources:

Actuate

The final stage in DustBusterAI’s autonomous architecture is motor drive or actuation. Here, the velocity command generated during the navigation phase is converted into a voltage signal suitable for the DC motor. This is achieved using a digital PI-PD controller embedded within the Arduino, a development that highlights the critical role of control theory in our design process. Leveraging tools like Matlab and Simulink, we’ve ensured the controller is well-tuned to provide efficient, responsive, and robust actuation. The result is a robot that settles to desired commands quickly, without overshoots, and remains stable across a range of conditions.
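
The structure of such a controller can be sketched as follows. All gains, the sample time, and the voltage limits are illustrative placeholders, not the values tuned in Matlab/Simulink, and the real controller runs in the Arduino firmware rather than Python:

```python
class PIPDController:
    """Discrete PI-PD controller sketch.

    The PI terms act on the error, while an inner P term and the D term
    act on the measured speed, which avoids "derivative kick" when the
    setpoint changes abruptly.
    """

    def __init__(self, kp=1.2, ki=0.8, kd=0.05, kp_fb=0.3,
                 dt=0.01, u_min=-12.0, u_max=12.0):
        self.kp, self.ki, self.kd, self.kp_fb = kp, ki, kd, kp_fb
        self.dt = dt                          # sample time [s]
        self.u_min, self.u_max = u_min, u_max # motor driver voltage range
        self.integral = 0.0
        self.prev_meas = 0.0

    def update(self, setpoint, measured):
        """Return a motor voltage for the current speed setpoint/measurement."""
        error = setpoint - measured
        self.integral += error * self.dt
        d_meas = (measured - self.prev_meas) / self.dt
        self.prev_meas = measured
        u = (self.kp * error + self.ki * self.integral
             - self.kp_fb * measured - self.kd * d_meas)
        # Clamp to the driver's voltage range (anti-windup omitted for brevity).
        return max(self.u_min, min(self.u_max, u))

controller = PIPDController()
voltage = controller.update(setpoint=1.0, measured=0.0)  # first control step
```

Taking the derivative of the measurement instead of the error is the design choice that lets the controller settle quickly without overshooting on step changes in the commanded speed.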

The efficacy of our controller design can be appreciated in the real-time wheel speed graph produced by the system. The following image demonstrates the performance of DustBusterAI’s motor control in real-world conditions:

Real-time Wheel Speed Graph

For in-depth information on our controller design and motor actuation strategies, refer to the following resources:

Overall

The entire operation of DustBusterAI, our robot, is a carefully coordinated process where multiple subsystems work together to achieve autonomous cleaning. The block diagram provided below offers a clear picture of how these different subsystems interact and work together in a step-by-step manner, from initial sensing of the environment, through interpretation of data and route planning, to navigation and final actuation of the motors.

To understand the details of this process and the design decisions made along the way, we invite you to explore our main and sub-repositories, as well as the extensive documentation provided for each subsystem. These resources will offer a comprehensive understanding of the inner workings of DustBusterAI.

General Schematic

Sub-repositories

DustBusterAI (this) is the main repository of the project and contains general planning, documentation, management and visualisation.

The sub-repositories are:

Note that, as DustBusterAI continues to evolve, new sub-repositories may be created to cater to evolving project needs. This could include a dedicated repository for advanced artificial intelligence algorithms, robot monitoring written in Java, or a repository for a web interface built using ASP.NET. Be sure to follow our GitHub for the latest project updates.

Project Goals

The goals of the DustBusterAI project in the initial phase are:

Project Team & Academic

The DustBusterAI project is being developed by a team of engineering students. The team includes:

A part of this project has been developed as academic research within the Autonomous Systems Research Laboratory at our university.

The development and progress of the project were carefully monitored and guided by our esteemed faculty member, Prof. Dr. Erdinç Altuğ. His invaluable insights and direction were instrumental in achieving the objectives of this project.

For academic insights, mechanical designs, manufactured items, theses, presentations, and publications related to this project, please visit our Project Documentations.

Technologies Used

In the development of DustBusterAI, we benefit from a wide array of technologies across different disciplines. Here is a categorised breakdown:

Software Development:

Electronics and Firmware Development:

Control System Design:

Algorithm Simulation:

Mechanical Design:

Schematic and Graphic Design:

Video Editing:

Supportive:

Project Timeline

Project Budget

The project budget for DustBusterAI is estimated for the first prototype to be $650, including costs for materials, components, hardware and firmware development, software development, algorithm optimization, and prototype testing. This budget will be allocated across the different phases of the project according to the timeline outlined above.

It’s worth noting that not all items were purchased during the academic timeline of the project, as some were already available in the laboratory. For further details, please refer to the Bill of Materials.

Donate or Join to Support the DustBusterAI Project

Economic conditions in our country often make necessary materials and manufacturing services, such as PCB production and 3D printing, prohibitively expensive. We rely on the support of donors like you to develop the DustBusterAI project further. Your contribution will help us purchase the equipment and materials needed to build a working prototype of the robot. Alternatively, if you are interested in and eager to develop this project, you can join our team to further develop the software and electronics that make it run, and to design the mechanical parts that empower the system. By donating or joining us, you can be a part of revolutionizing the cleaning industry and making our world a cleaner, more efficient place. Thank you for your support!

Reach out to us by contacting Onur Ulusoy.

Project Challenges

The DustBusterAI project faces several challenges, including:

Project Status

As of June 14, 2023, the initial prototype designs of DustBusterAI are completed and we are currently in the physical prototyping stage. The software team is still in the process of developing and refining the decision-making algorithms, conducting tests using the ROS2 framework in conjunction with Gazebo and RViz platforms. The project is progressing according to the set milestones and timelines.

Algorithm Simulations

Since most of the algorithms are handled by the development team, it is crucial to validate the underlying algorithms that will govern the robot’s behavior before creating the ROS environment or investing resources into building a physical prototype. This involves simulating the robot’s movements and interactions with the environment to verify that the algorithms function as intended.

Unity, a popular game engine and simulation platform, provides a powerful toolset for creating these simulations. By building a virtual prototype in Unity, developers can test and refine the algorithms that will ultimately drive the physical robot’s behavior, without the risks and expenses associated with building and testing a physical prototype. This helps identify and resolve potential issues early in the development process, ensuring that the final product is as efficient and effective as possible. Once the algorithms have been thoroughly tested and optimized in the Unity simulation, developers can confidently move forward with building the physical robot.

The current Unity simulation status can be checked in Simulation Stages.

https://user-images.githubusercontent.com/95442568/222244447-768bc50f-41e9-4a67-80c4-689e2ca3722b.mp4

Note: The initial simulations in the Unity Engine are complete; we have moved on to developing these algorithms in the ROS environment using the Gazebo and RViz tools. See DustBusterAI-Software.

License

All software and hardware designs created for the DustBusterAI project are released under the MIT License. This includes the source code, CAD files, and any other materials created for the project. You are free to use, modify, and distribute this project for both commercial and non-commercial purposes. However, the authors of this project make no warranties or guarantees as to the reliability, suitability, or accuracy of the materials provided. Use at your own risk.

The media files accompanying the DustBusterAI project, including but not limited to graphics, videos, and other visual content, are licensed under a separate restrictive license. These media files, created by Onur Ulusoy, are his exclusive property and may not be used, reproduced, distributed, modified, or exploited in any manner, whether commercial or private, without explicit written permission. Unauthorized use or exploitation of the media files may result in legal action.

Acknowledgements

We would like to take this opportunity to extend our gratitude to all those who have made significant contributions to the DustBusterAI project. This includes everyone from those who worked on the technical aspects to those who provided valuable insights and ideas.

In particular, we would like to acknowledge the exceptional work of the designers who brought our brand to life with their creative artwork. Their contributions have given DustBusterAI its unique and recognizable identity.

For a detailed list of acknowledgements, please refer to our ACKNOWLEDGEMENTS document.

Conclusion

The DustBusterAI project is an ambitious undertaking that aims to develop an autonomous robot capable of cleaning a room using path-planning, decision-making, and motion-control algorithms. By utilizing the latest technologies and techniques in robotics, the project hopes to create a robot that can handle different room sizes and shapes, as well as different types of obstacles.