Whereas in the past the term “robotics” mainly evoked the distant cosmic future and science fiction, today robots have already become part of our lives. They already serve humans in many ways: they make us coffee, wash and dry our clothes, and regulate the temperature and humidity in the house. They work actively in industry and in warehouses, where they scan barcodes and pick up, move, and deliver cargo. The most advanced models even perform surgical operations, becoming more capable and smarter every year.

Sadly, Russian-language sources mainly cover Arduino and Raspberry Pi, while truly modern technologies rarely make it into such reviews. We are trying to fix that in our blog so that you can learn about the most advanced developments. In this article, we will talk about the technologies that exist and are actively used in 2019.

What Are Processors and Why Are They Needed?

A processor is the main component of a computer, determining its basic functions; it is the machine’s “brain.” Therefore, it is essential to choose it carefully, keeping in mind that the appropriate processor for a robot depends largely on the goals the device is required to meet.

In robotics, the following types of processors are used: MCU, CPU, GPU, and TPU. The differences between them are as follows.

Processor specifications

MCU (Microcontroller Unit)

  • Generally inexpensive and designed to work with sensors.
  • Used to control hardware.
  • Distinctive features: inexpensive and versatile for controlling electronics, but requires a special development and programming environment.
  • Robotics platforms: Arduino, STM32, ESP8266, ESP32, and others.

CPU (Central Processing Unit)

  • Allows you to run full-fledged software on operating systems such as Linux.
  • Easy to program and supports virtually any programming environment.
  • Distinctive features: when it comes to machine learning, the CPU loses to other processors, but CPUs are cheap and consume relatively little power; in all other cases, they are worth using.
  • Robotics platforms: Raspberry Pi, Orange Pi, and other single-board computers.

GPU (Graphics Processing Unit)

  • Designed specifically for image and video processing.
  • Supports a much larger number of cores, which makes it perfect for parallel data processing; therefore, GPUs are used for training and running neural networks.
  • Distinctive features: used for video stream processing, training and running neural networks, 3D graphics, and animation.
  • Robotics platforms: NVidia Jetson.

TPU (Tensor Processing Unit)

  • Developed by Google for TensorFlow, an open-source machine learning framework.
  • Very well suited for applications where vector and matrix computing predominate.
  • Distinctive features: designed specifically for working with neural networks.
  • Robotics platforms: Google Coral.
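To see why parallel hardware matters here, consider the arithmetic at the core of a neural network layer. The sketch below is pure Python for clarity (real systems would use NumPy, TensorFlow, or similar, and the numbers are made up); it shows that such a layer boils down to a matrix multiplication whose output elements are independent of one another, which is exactly the kind of workload that thousands of GPU or TPU cores can compute in parallel.

```python
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), plain Python."""
    n, p = len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(a))]

# One toy "layer": 2 input features -> 3 outputs. Every output element
# depends only on the inputs and weights, not on the other outputs,
# so they can all be computed at the same time on parallel hardware.
x = [[1.0, 2.0]]            # a batch with one input vector
w = [[0.5, -1.0, 2.0],      # made-up weight matrix
     [1.5,  0.0, 0.5]]
print(matmul(x, w))         # [[3.5, -1.0, 3.0]]
```

A CPU walks through these sums largely one after another; a GPU or TPU dispatches them across many cores at once, which is why training and running neural networks is so much faster there.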

 


Building efficient machines requires combining several processors, e.g., a CPU and a GPU, or a CPU and a TPU. This requirement became the basis for the development of single-board computers, and we will return to them below.

Modern Hardware Platforms Used in Robotics

Robots are built around single-board computers. These machines differ in size, functionality, and power. For example, the Raspberry Pi minicomputers and Arduino microcontroller boards, well known to many, are ideal for simple, often amateur-built robots. However, when it comes to producing more powerful and functional machines based on artificial intelligence, you should pay attention to the following hardware solutions:

NXP BlueBox

NXP BlueBox is a series of platforms specifically designed for autonomous vehicles; it supports advanced classification tasks, object detection, localization, mapping, and decision-making about behavior on the road. BlueBox consists of two chips: one for processing visuals and a more powerful one for making decisions. The larger chip integrates eight 64-bit ARM cores running at 2 GHz. According to NXP, it can execute 90,000 million instructions per second while consuming under 40 watts of power. Combined with a variety of sensors, the NXP BlueBox platform delivers the performance needed to analyze driving conditions, assess risk factors, and then control the vehicle's behavior.

NXP BlueBox

Nvidia Drive PX

Nvidia Drive PX is a computer that allows vehicle builders to accelerate the production of automated and autonomous vehicles. The board is available in various configurations, ranging from a single passively cooled mobile processor running at 10 watts to a multi-chip configuration with four high-performance processors delivering 320 trillion machine learning operations per second. The Nvidia Drive PX platform combines machine learning, sensor fusion, and visual odometry by merging data from multiple cameras. Consequently, it can understand in real time what is happening around the car, precisely determine its location on a satellite map, and plan a safe path.

Nvidia Drive PX

Nvidia Jetson AGX Xavier

Jetson AGX Xavier is the world’s first computer designed specifically for building intelligent robots. It carries an 8-core ARM processor for general computation, a graphics processor with tensor cores designed for deep learning tasks, and specialized units for video processing. Jetson AGX Xavier delivers workstation-level performance, with a maximum computing power of 32 trillion operations per second and a data transfer rate of 750 Gbit per second.

This is sufficient for visual odometry, sensor fusion, localization and mapping, object recognition, route planning, and other tasks whose solution requires artificial intelligence. Jetson AGX Xavier also comes with a full set of NVIDIA AI tools, which allows developers to train and deploy neural networks quickly.

Jetson AGX Xavier

On top of that, Nvidia produces several more ready-to-use Jetson platforms, including the Nvidia Jetson Nano, which, due to its small size, is often compared with various versions of the Raspberry Pi.

Nvidia Jetson Nano and Raspberry Pi comparison

Ready-Made Software Solutions Used in Robotics

So, we have chosen the required hardware. What else is needed to build a robot? You have to understand real-time programming (motor control, for example), artificial intelligence, and often image and video processing, and this may not be the full list. Frameworks equipped with ready-made functions and the necessary tools offer an easier path, and there are plenty of them: separate libraries exist for simulation, environment perception, navigation, machine learning, and much more. Although none of them is a fully equipped framework for all needs, we have selected some of today’s most popular robotics programming software and will cover it in more detail below.

ROS: Robot Operating System

ROS is a framework containing a set of tools, libraries, and solutions designed to simplify the creation of robots. ROS was founded to encourage collaborative software development for robotics and code reuse in research and development. Along with the specific capabilities provided by many ROS packages, the platform’s benefits include:

  • Hardware abstraction
  • Low-level device management
  • Messaging support
  • Package management

As a result, ROS pays off when creating custom, specialized, or academic robots. It can be of great help when you have to coordinate advanced sensors, motors, and controllers in situations where performance is not a problem. Some developers may consider ROS an operating system, but despite its name it is not one: the framework is installed and runs on Linux.
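The messaging support listed among those benefits is central to how ROS nodes cooperate: nodes exchange messages through named topics instead of calling each other directly. The toy sketch below illustrates only that publish/subscribe pattern; it is not the rospy API, and the topic name and message fields are invented for illustration.

```python
class Topic:
    """A named channel that delivers messages to all subscribers."""

    def __init__(self, name):
        self.name = name
        self.subscribers = []   # callbacks registered by listening "nodes"

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Deliver the message to every subscriber, in registration order.
        for callback in self.subscribers:
            callback(message)

# Hypothetical topic name; a real ROS system might use /scan or /cmd_vel.
sensor_topic = Topic("/range_sensor")
readings = []
sensor_topic.subscribe(readings.append)      # a "navigation node" listens
sensor_topic.publish({"distance_m": 0.42})   # a "driver node" publishes
print(readings)   # [{'distance_m': 0.42}]
```

The point of the pattern is decoupling: the publishing node needs no knowledge of who consumes its data, so sensors, planners, and loggers can be swapped independently, which is what makes ROS packages so reusable.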

Isaac SDK: Software Development Kit

NVIDIA Isaac Software Development Kit (SDK) is a set of tools and libraries designed to accelerate the development of AI-based robots. It also allows you to create modular applications. This approach provides high-performance image processing and the use of deep learning technology to create intelligent robots. By using computing graphs and a component system, the framework allows developers to implement a sophisticated robot control system using small building blocks. Robot developers can access key AI-based robotics features, including obstacle detection, speech recognition, and stereo depth estimation. Each of the functions is an essential component of robotic systems.
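To make the “computing graph of small building blocks” idea concrete, here is a minimal pure-Python sketch of a three-component pipeline. The component names and logic are invented for illustration and are not the Isaac SDK API; they only show how small, independent stages can be chained into a larger control flow.

```python
def camera_stub():
    # Stand-in for a sensor component: a fake image as brightness values.
    return [10, 200, 30, 250]

def threshold(image, limit=128):
    # Toy "obstacle detection" component: mark bright pixels with 1.
    return [1 if px > limit else 0 for px in image]

def decide(mask):
    # Toy "planning" component: stop if anything was detected.
    return "stop" if any(mask) else "go"

# Chain the components into a pipeline, as a graph runtime would:
# each stage consumes the previous stage's output.
pipeline = [camera_stub, threshold, decide]
data = None
for stage in pipeline:
    data = stage(data) if data is not None else stage()
print(data)   # 'stop'
```

Because each block has a narrow, well-defined input and output, blocks can be developed, tested, and replaced in isolation, which is the practical benefit of the component-graph approach the SDK promotes.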

Orocos

Open Robot Control Software (Orocos) is a framework designed for controlling robots. It consists of three main libraries written in C++:

  • The Real-Time Toolkit (RTT) for building component-based real-time applications;
  • The Kinematics and Dynamics Library (KDL) for computing kinematic chains;
  • The Bayesian Filtering Library (BFL) for estimation techniques such as Kalman and particle filters.

One of Orocos's main characteristics is that complex software systems are composed at runtime rather than at compile time. The framework is supported by a community of developers, and its main advantage is that the platform is free and focused on controlling robots in real time.

iRobot Aware 2

iRobot Aware 2 is, for the most part, an open framework used by iRobot in many of its robotic products. However, high-level tools such as the robot control system and the operator control unit (OCU) are proprietary. iRobot Aware 2 provides facilities for real-time data acquisition, logging, web-based data management, and more. Like ROS, it runs on Linux. Notably, Aware 2 is trusted by the US armed forces and was used to create the iRobot 710 Warrior and iRobot 510 PackBot military robots.

Military Robots

Above, we described what we see as the most impressive specialized frameworks for bringing robots to “life.” In fact, many more of them exist, for example:

  • Player – a framework for controlling sensors and robot interfaces.
  • MOOS – cross-platform software for robotics research.
  • Robotnav – libraries related to navigation.
  • Gazebo – a robot simulation framework.
  • Waffles – tools for machine learning.
  • TensorFlow – libraries for machine learning and for visualizing network models.
  • Caffe – a framework known for its speed, used for modeling neural networks.
  • Chainer – a framework for working with neural networks.

Robots have already come into our lives in many ways, and year by year they become an increasingly customary part of it. They replace human employees, play the role of pets, even star in movies.

Have you assembled a robot but run into problems bringing it to “life”? To do this, you will need machine learning and computer vision technologies. We know how to develop computer vision systems, train them, and put them to work. Don’t waste your time: contact us!

02.12.2019