Edge AI for Robots: TinyML, On-Device Inference & Optimization Training Course
Edge AI empowers artificial intelligence models to operate directly on embedded or resource-limited devices. This approach significantly reduces latency and power consumption while enhancing autonomy and privacy within robotic systems.
This instructor-led live training, available online or onsite, is designed for intermediate-level embedded developers and robotics engineers. The course focuses on implementing machine learning inference and optimization techniques directly on robotic hardware using TinyML and edge AI frameworks.
Upon completion of this training, participants will be capable of:
- Grasping the core principles of TinyML and edge AI applied to robotics.
- Converting and deploying AI models for on-device inference.
- Optimizing models to improve speed, reduce size, and enhance energy efficiency.
- Integrating edge AI systems into robotic control architectures.
- Evaluating performance and accuracy in real-world operational scenarios.
Course Format
- Interactive lectures and group discussions.
- Hands-on practice with TinyML and edge AI development toolchains.
- Practical exercises conducted on embedded and robotic hardware platforms.
Customization Options
- For organizations wishing to request a customized version of this course, please contact us to arrange the details.
Course Outline
Introduction to Edge AI and TinyML
- Overview of edge AI applications
- Benefits and challenges of deploying AI on devices
- Key use cases in robotics and automation
Fundamentals of TinyML
- Machine learning tailored for resource-constrained systems
- Techniques such as model quantization, pruning, and compression
- Supported frameworks and compatible hardware platforms
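To make the quantization bullet above concrete: affine (asymmetric) int8 quantization maps a float range onto 8-bit integers using a scale and a zero-point. The sketch below shows that arithmetic in plain Python; it is an illustration of the general technique, not code from any particular framework, and all function names are invented for this example.

```python
# Minimal sketch of per-tensor affine int8 quantization, the arithmetic
# behind post-training quantization in TinyML toolchains. Illustrative
# only; real converters handle per-channel scales, calibration, etc.

def quant_params(values, qmin=-128, qmax=127):
    """Derive scale and zero-point mapping [min, max] onto the int8 range."""
    lo, hi = min(values + [0.0]), max(values + [0.0])  # range must include 0
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    """Round to the nearest int8 code, clamping to the representable range."""
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(q, scale, zero_point):
    """Recover approximate floats from int8 codes."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, 0.0, 0.35, 0.8, 2.1]     # toy "weights"
s, z = quant_params(weights)
q = quantize(weights, s, z)
recovered = dequantize(q, s, z)            # close to the originals, within one scale step
```

Storing `q` as int8 uses a quarter of the memory of float32 weights, at the cost of a bounded rounding error per value; this size/accuracy trade-off is the core of the quantization techniques covered in this module.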
Model Development and Conversion
- Training lightweight models using TensorFlow or PyTorch
- Converting models to TensorFlow Lite and PyTorch Mobile formats
- Testing and validating model accuracy
Implementing On-Device Inference
- Deploying AI models to embedded boards (e.g., Arduino, Raspberry Pi, Jetson Nano)
- Integrating inference capabilities with robotic perception and control systems
- Executing real-time predictions and monitoring system performance
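The real-time prediction and monitoring bullet above typically takes the shape of a loop that times each inference and feeds the result to the controller. The sketch below uses a dummy stand-in for the model (on real hardware this would wrap, e.g., a TensorFlow Lite interpreter's invoke call); all names are illustrative.

```python
import time
from collections import deque

def infer(sensor_frame):
    """Hypothetical stand-in for an on-device model's forward pass."""
    return sum(sensor_frame) / len(sensor_frame)  # dummy "prediction"

def control_step(prediction):
    """Placeholder for feeding the prediction into the robot's control loop."""
    return prediction > 0.5

latencies = deque(maxlen=100)   # rolling window of recent inference times
for frame_id in range(10):
    frame = [0.1 * frame_id] * 8              # fake sensor input
    t0 = time.perf_counter()
    pred = infer(frame)
    latencies.append(time.perf_counter() - t0)  # record per-frame latency
    control_step(pred)

avg_ms = 1000 * sum(latencies) / len(latencies)  # rolling average latency
```

Keeping a rolling latency window like this lets the system detect when inference starts missing its real-time budget, which is a recurring theme in the hands-on exercises.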
Optimizing for Edge Performance
- Strategies to reduce latency and energy consumption
- Leveraging hardware acceleration via NPUs and GPUs
- Benchmarking and profiling embedded inference performance
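The benchmarking bullet above boils down to a small harness: warm up, time repeated runs, and report percentiles rather than a single number, since worst-case latency matters most for robots. A generic sketch (not tied to any profiling tool) follows.

```python
import statistics
import time

def benchmark(fn, arg, warmup=5, runs=50):
    """Time repeated calls and report median, p95, and worst-case latency in ms.
    fn is any callable standing in for an embedded inference routine."""
    for _ in range(warmup):        # warm-up runs exclude one-time setup costs
        fn(arg)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(arg)
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "max_ms": samples[-1],
    }

# A cheap compute kernel stands in for model inference here
stats = benchmark(lambda xs: sum(x * x for x in xs), list(range(256)))
```

On target hardware the same harness can compare float32 against quantized models, or CPU execution against an NPU/GPU delegate.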
Edge AI Frameworks and Tools
- Utilizing TensorFlow Lite and Edge Impulse
- Exploring deployment options with PyTorch Mobile
- Debugging and tuning embedded machine learning workflows
Practical Integration and Case Studies
- Designing edge AI perception systems for robots
- Integrating TinyML with ROS-based robotics architectures
- Case studies covering autonomous navigation, object detection, and predictive maintenance
Summary and Next Steps
Requirements
- Knowledge of embedded systems
- Proficiency in Python or C++ programming
- Familiarity with fundamental machine learning concepts
Target Audience
- Embedded developers
- Robotics engineers
- System integrators specializing in intelligent devices
Open Training Courses require 5+ participants.
Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core: why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
Artificial Intelligence (AI) for Robotics integrates machine learning, control systems, and sensor fusion to build intelligent machines capable of perceiving, reasoning, and acting autonomously. Leveraging modern tools such as ROS 2, TensorFlow, and OpenCV, engineers can now design robots that intelligently navigate, plan, and interact with real-world environments.
This instructor-led, live training (available online or onsite) targets intermediate-level engineers aiming to develop, train, and deploy AI-driven robotic systems using current open-source technologies and frameworks.
By the end of this training, participants will be able to:
- Use Python and ROS 2 to build and simulate robotic behaviors.
- Implement Kalman and Particle Filters for localization and tracking.
- Apply computer vision techniques using OpenCV for perception and object detection.
- Use TensorFlow for motion prediction and learning-based control.
- Integrate SLAM (Simultaneous Localization and Mapping) for autonomous navigation.
- Develop reinforcement learning models to improve robotic decision-making.
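The Kalman filter bullet above can be illustrated with a minimal one-dimensional version: each cycle predicts, then blends the prediction with a new measurement according to their relative uncertainties. The noise constants and readings below are invented for demonstration and are not tuned for any real sensor.

```python
# Minimal 1-D Kalman filter tracking a robot's position from noisy
# range measurements; constants are illustrative only.

def kalman_step(x, p, z, q=0.01, r=0.5):
    """One predict/update cycle.
    x, p: prior state estimate and its variance
    z:    new measurement
    q, r: process and measurement noise variances."""
    p = p + q                 # predict: stationary model, uncertainty grows by q
    k = p / (p + r)           # Kalman gain: how much to trust the measurement
    x = x + k * (z - x)       # update: move estimate toward the measurement
    p = (1 - k) * p           # uncertainty shrinks after incorporating z
    return x, p

x, p = 0.0, 1.0                       # initial guess and variance
for z in [1.2, 0.9, 1.1, 1.0, 1.05]:  # noisy readings around true position 1.0
    x, p = kalman_step(x, p, z)
# x converges toward 1.0 while p (the uncertainty) shrinks
```

The full course generalizes this to multi-dimensional state vectors and to particle filters, which replace the Gaussian assumption with a sampled belief.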
Format of the Course
- Interactive lecture and discussion.
- Hands-on implementation using ROS 2 and Python.
- Practical exercises with simulated and real robotic environments.
Course Customization Options
To request a customized training for this course, please contact us to arrange the details.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led live training in Norway (online or onsite), participants will learn the different technologies, frameworks and techniques for programming different types of robots to be used in the field of nuclear technology and environmental systems.
The six-week course is held five days a week. Each day is four hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Extend a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
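The PID control objective in the list above can be sketched in a few lines: the controller sums a proportional, an integral, and a derivative term of the tracking error. The gains and the crude first-order "plant" below are illustrative choices for a self-contained demo, not values for any real robot.

```python
# Minimal PID controller regulating a simulated speed toward a setpoint.
# Gains and plant model are invented for demonstration purposes.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                    # accumulate error
        derivative = (error - self.prev_error) / self.dt    # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.0, ki=1.0, kd=0.05, dt=0.1)
speed = 0.0
for _ in range(100):
    command = pid.update(setpoint=1.0, measured=speed)
    speed += 0.1 * (command - speed)   # crude first-order plant response
# speed settles close to the setpoint of 1.0
```

In the lab exercises, the same structure drives a simulated robot's wheel velocities, and tuning kp, ki, and kd against overshoot and steady-state error is part of the practical work.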
AI and Robotics for Nuclear
80 Hours
In this instructor-led live training in Norway (online or onsite), participants will learn the different technologies, frameworks, and techniques for programming various types of robots for use in nuclear technology and environmental systems.
The four-week course runs five days a week. Each day is four hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The code will then be loaded onto physical hardware (Arduino or other) for final deployment testing. The ROS (Robot Operating System) open-source framework, C++, and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for autonomous navigation applications.
- Implement SLAM algorithms for mapping and localization.
- Integrate sensors such as LiDAR and cameras with ROS 2.
- Simulate and test autonomous navigation in Gazebo.
- Deploy navigation stacks on physical robots.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using ROS 2 tools and simulation environments.
- Live-lab implementation and testing on virtual or physical robots.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service unites the strengths of the Microsoft Bot Framework and Azure Functions, offering a robust platform for rapidly creating intelligent bots.
During this instructor-led live training, participants will investigate how to efficiently develop intelligent bots using Microsoft Azure.
Upon completion of the training, participants will be capable of:
- Grasping the fundamental concepts behind intelligent bots.
- Constructing intelligent bots utilizing cloud-based applications.
- Acquiring practical knowledge of the Microsoft Bot Framework, the Bot Builder SDK, and Azure Bot Service.
- Applying well-established bot design patterns in practical scenarios.
- Creating and deploying their initial intelligent bot using Microsoft Azure.
Target Audience
This course is tailored for developers, enthusiasts, engineers, and IT professionals with an interest in bot development.
Course Structure
The training blends lectures and discussions with exercises, placing a strong emphasis on hands-on practice.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV is an open-source computer vision library that enables real-time image processing, while deep learning frameworks such as TensorFlow provide the tools for intelligent perception and decision-making in robotic systems.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers, computer vision practitioners, and machine learning engineers who wish to apply computer vision and deep learning techniques for robotic perception and autonomy.
By the end of this training, participants will be able to:
- Implement computer vision pipelines using OpenCV.
- Integrate deep learning models for object detection and recognition.
- Use vision-based data for robotic control and navigation.
- Combine classical vision algorithms with deep neural networks.
- Deploy computer vision systems on embedded and robotic platforms.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using OpenCV and TensorFlow.
- Live-lab implementation on simulated or physical robotic systems.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Developing a Bot
14 Hours
A bot, or chatbot, functions as a digital assistant designed to automate user interactions across various messaging platforms, enabling tasks to be completed more efficiently without requiring direct human contact.
Through this instructor-led live training, participants will learn how to begin developing bots by working through the creation of sample chatbots using specialized bot development tools and frameworks.
Upon completion of this training, participants will be capable of:
- Understanding the diverse uses and applications of bots
- Gaining insight into the complete bot development process
- Exploring the various tools and platforms utilized for bot construction
- Constructing a sample chatbot for Facebook Messenger
- Building a sample chatbot using the Microsoft Bot Framework
Target Audience
- Developers interested in creating their own bot
Course Format
- A combination of lectures, discussions, exercises, and extensive hands-on practice
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led, live training in Norway (online or onsite) is designed for intermediate-level participants eager to investigate the impact of collaborative robots (cobots) and other human-focused AI systems in contemporary work environments.
Upon completion of this training, participants will be equipped to:
- Grasp the core principles of Human-Centric Physical AI and its practical applications.
- Investigate how collaborative robots contribute to improved workplace productivity.
- Recognize and resolve challenges associated with human-machine interaction.
- Develop workflows that maximize collaboration between humans and AI-driven systems.
- Foster a culture of innovation and adaptability in workplaces utilizing AI.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical, hands-on course aimed at introducing participants to the design and implementation of intuitive interfaces for human–robot communication. This training blends theoretical foundations, design principles, and programming practice to help build natural and responsive interaction systems utilizing speech, gesture, and shared control techniques. Participants will gain skills in integrating perception modules, developing multimodal input systems, and designing robots capable of safe collaboration with humans.
Delivered as instructor-led live training, available either online or onsite, this program targets beginner to intermediate-level participants seeking to design and implement human–robot interaction systems that improve usability, safety, and overall user experience.
Upon completion of this training, participants will be able to:
- Grasp the foundational concepts and design principles of human–robot interaction.
- Create voice-based control and response mechanisms for robotic systems.
- Implement gesture recognition using computer vision techniques.
- Design collaborative control systems that ensure safe and shared autonomy.
- Evaluate HRI systems based on usability, safety standards, and human factors.
Format of the Course
- Interactive lectures and live demonstrations.
- Hands-on coding and design exercises.
- Practical experiments conducted in simulation or real robotic environments.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins is a practical course designed to bridge the gap between industrial automation and contemporary robotics frameworks. Participants will gain the skills needed to integrate ROS-based robotic systems with PLCs for synchronized operations, while exploring digital twin environments to simulate, monitor, and optimize production processes. The curriculum emphasizes interoperability, real-time control, and predictive analysis utilizing digital replicas of physical systems.
This instructor-led, live training (available online or onsite) targets intermediate-level professionals aiming to develop practical competencies in connecting ROS-controlled robots with PLC environments and implementing digital twins to enhance automation and manufacturing efficiency.
Upon completing this training, participants will be able to:
- Comprehend the communication protocols facilitating interaction between ROS and PLC systems.
- Execute real-time data exchange between robots and industrial controllers.
- Create digital twins for monitoring, testing, and process simulation.
- Integrate sensors, actuators, and robotic manipulators into industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Format of the Course
- Interactive lectures and architectural walkthroughs.
- Hands-on exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led live training in Norway (online or onsite) is aimed at engineers who wish to learn about the applicability of artificial intelligence to mechatronic systems.
By the end of this training, participants will be able to:
- Gain an overview of artificial intelligence, machine learning, and computational intelligence.
- Understand the concepts of neural networks and different learning methods.
- Choose artificial intelligence approaches effectively for real-life problems.
- Implement AI applications in mechatronic engineering.
Multi-Robot Systems and Swarm Intelligence
28 Hours
The Multi-Robot Systems and Swarm Intelligence course is an advanced program designed to explore the design, coordination, and control of robotic teams, drawing inspiration from biological swarm behaviors. Participants will gain insights into modeling interactions, implementing distributed decision-making processes, and optimizing collaboration across multiple agents. By combining theoretical knowledge with practical simulation exercises, the course prepares learners for applications in logistics, defense, search and rescue operations, and autonomous exploration.
This instructor-led live training, available online or onsite, targets advanced-level professionals interested in designing, simulating, and implementing multi-robot and swarm-based systems using open-source frameworks and algorithms.
Upon completion of this training, participants will be able to:
- Grasp the principles and dynamics of swarm intelligence and cooperative robotics.
- Design effective communication and coordination strategies for multi-robot systems.
- Implement distributed decision-making and consensus algorithms.
- Simulate collective behaviors, including formation control, flocking, and coverage.
- Apply swarm-based techniques to real-world scenarios and optimization challenges.
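A building block behind several of the outcomes above is distributed consensus: each agent repeatedly nudges its state toward the average of its neighbors until the team agrees. The sketch below uses a four-agent ring topology and a gain chosen purely for illustration; it is a generic demonstration, not code from any specific swarm framework.

```python
# Minimal distributed consensus (average agreement) sketch.
# Topology and gain are illustrative choices.

def consensus_step(states, neighbors, gain=0.3):
    """Each agent moves toward the mean of its neighbors' states."""
    return [
        x + gain * sum(states[j] - x for j in neighbors[i]) / len(neighbors[i])
        for i, x in enumerate(states)
    ]

# Four agents on a ring, starting with different heading estimates
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = [0.0, 1.0, 2.0, 3.0]
target = sum(states) / len(states)   # symmetric updates preserve the average
for _ in range(50):
    states = consensus_step(states, neighbors)
# all agents converge to the common value `target`
```

The same update rule, applied to positions or velocities instead of scalar headings, underlies the formation control and flocking behaviors simulated in the course.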
Course Format
- Advanced lectures featuring in-depth algorithmic discussions.
- Practical coding and simulation exercises using ROS 2 and Gazebo.
- A collaborative project focused on applying swarm intelligence principles.
Customization Options
- For personalized training arrangements, please contact us.
Multimodal AI in Robotics
21 Hours
This instructor-led, live training in Norway (online or onsite) is designed for advanced robotics engineers and AI researchers seeking to leverage Multimodal AI to integrate diverse sensory inputs. The objective is to build more autonomous and efficient robots that possess the ability to see, hear, and touch.
Upon completion of this training, participants will be able to:
- Implement multimodal sensing solutions within robotic systems.
- Develop AI algorithms for sensor fusion and strategic decision-making.
- Design robots capable of executing complex tasks in dynamic environments.
- Resolve challenges related to real-time data processing and actuation.
Smart Robots for Developers
84 Hours
A smart robot represents an Artificial Intelligence (AI) system capable of learning from its surroundings and past experiences to enhance its capabilities. These robots can collaborate effectively with humans, operating alongside them and adapting to their behaviors. Beyond performing manual labor, they are equipped to handle complex cognitive tasks. In addition to physical hardware, smart robots can also exist as pure software applications within a computer, devoid of moving parts or direct physical interaction with the environment.
In this instructor-led live training, participants will explore the various technologies, frameworks, and techniques required to program different types of mechanical smart robots. Attendees will apply this knowledge to develop and complete their own smart robot projects.
The course is structured into four sections, each spanning three days of lectures, discussions, and hands-on robot development in a live lab environment. Each section concludes with a practical project, allowing participants to practice and demonstrate the skills they have acquired.
The course utilizes 3D simulation software to emulate the target hardware. Programming the robots will be done using the ROS (Robot Operating System) open-source framework, along with C++ and Python.
Upon completion of this training, participants will be able to:
- Grasp the key concepts underlying robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Comprehend and implement the software components that support smart robots.
- Build and operate a simulated mechanical smart robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans via voice.
- Enhance a smart robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a smart robot in realistic scenarios.
Audience
- Developers
- Engineers
Format of the course
- A combination of lectures, discussions, exercises, and extensive hands-on practice.
Note
- To customize any aspect of this course (such as programming language or robot model), please contact us to arrange the details.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics involves embedding artificial intelligence into robotic systems to enhance perception, decision-making, and autonomous control capabilities.
This instructor-led live training, available both online and onsite, is designed for advanced robotics engineers, systems integrators, and automation leads who want to implement AI-driven perception, planning, and control within smart manufacturing settings.
Upon completion of this training, participants will be able to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms for collaborative and industrial robots.
- Deploy learning-based control strategies for real-time decision making.
- Integrate intelligent robotic systems into smart factory workflows.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.