
Autonomous robots: answering three basic questions

Stefan Iulian
Stefan Iulian is a precision agriculture professional, currently working for Syngenta in Romania.
Autonomous robot navigating in a field. - Photo: Farmdroid

Autonomous robots are among the hottest trends in precision agriculture at this moment. These robots are equipped with a wide range of technologies and sensors. Because farming is about achieving the best results with minimal effort, I advise farmers to consider the capabilities and the level of autonomy a robot offers before purchasing one. This matters more than fancy lights, colors, and options that don’t add any value to their basic operations.

For a robot to be truly autonomous, it must find solutions to three basic questions:

  1. Where is it located at this moment?
  2. Where is it going?
  3. How does it get there?

In addition to these questions, the robot must also consider how to use the attached implement and how to control it properly, but this is a topic for another article.

To answer the three basic questions mentioned above, a robot has to handle the following inputs and outputs:

  1. Have a model of the environment: In agriculture, the environment in which robots navigate is relatively simple, with few variables, but it changes rapidly throughout the year (crops grow, soil is prepared, irrigation equipment moves, etc.). In most cases, the environment model is given to the robot, or at least most of it (for example, by loading shapefiles with boundaries, obstacles, and other features marked on them). However, the robot still has to (at least partially) perceive and analyze the environment through its sensors and algorithms.
  2. Determine its position and orientation within that environment, again through data collected by sensors and processed by algorithms.
  3. Plan and execute the given task safely and properly, using the motion hardware it is equipped with.
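The three questions and the three inputs above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the field size, step length, and waypoint are invented for the example): the robot knows a boundary model of its environment, checks where it is, and steps towards a goal while verifying it stays inside the field.

```python
from dataclasses import dataclass
import math

@dataclass
class Pose:
    x: float   # metres east of the field origin
    y: float   # metres north of the field origin

def inside_boundary(p: Pose, boundary: list) -> bool:
    """Ray-casting point-in-polygon test against the field boundary model."""
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        if (y1 > p.y) != (y2 > p.y):
            x_cross = x1 + (p.y - y1) * (x2 - x1) / (y2 - y1)
            if p.x < x_cross:
                inside = not inside
    return inside

def step_towards(pose: Pose, goal: Pose, step: float = 1.0) -> Pose:
    """Move one step straight towards the goal ('how does it get there?')."""
    dx, dy = goal.x - pose.x, goal.y - pose.y
    dist = math.hypot(dx, dy)
    if dist <= step:
        return goal
    return Pose(pose.x + step * dx / dist, pose.y + step * dy / dist)

# hypothetical 100 m x 100 m square field
field = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
pose, goal = Pose(10.0, 10.0), Pose(40.0, 10.0)
while (pose.x, pose.y) != (goal.x, goal.y):
    pose = step_towards(pose, goal)       # answers question 3
    assert inside_boundary(pose, field)   # safety check against the model
```

A real robot would of course replace the straight-line stepping with proper path planning and the assertion with an emergency stop, but the structure of the loop is the same.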


Naio robot with LiDAR, camera and GNSS antenna in the top center front. - Photo: Mark Pasveer

Analyze the environment

Now, let’s discuss which sensors robots can use to perceive and analyze the environment, and what each of them does.

Laser scanners, or LiDAR (Light Detection And Ranging), provide accurate, real-time environmental sensing and mapping. LiDAR helps robots navigate, avoid obstacles, and understand their surroundings. Think of it as radar technology, but with light (laser) pulses instead of radio waves; by measuring the distance and angle of each reflected pulse, advanced LiDAR systems can build a detailed 3D map of the environment.
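The core of LiDAR processing can be shown in a small sketch (the beam count and ranges below are invented for illustration): each beam returns a distance at a known angle, which converts to a point in the robot's frame, and the closest point drives obstacle avoidance.

```python
import math

def scan_to_points(ranges, angle_min, angle_step):
    """Convert a planar LiDAR scan (one range per beam) to x/y points
    in the robot frame."""
    points = []
    for i, r in enumerate(ranges):
        if r is None or math.isinf(r):
            continue  # no return for this beam
        a = angle_min + i * angle_step
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

def nearest_obstacle(points):
    """Distance to the closest detected point, used for obstacle avoidance."""
    return min(math.hypot(x, y) for x, y in points)

# hypothetical 5-beam scan sweeping -90 deg to +90 deg in front of the robot
ranges = [4.0, 2.5, 1.2, 3.0, float("inf")]
pts = scan_to_points(ranges, angle_min=-math.pi / 2, angle_step=math.pi / 4)
print(round(nearest_obstacle(pts), 2))  # 1.2 (the beam straight ahead)
```

Real scanners return hundreds of beams per sweep, and 3D units add a vertical angle per beam, but the polar-to-Cartesian conversion is the same.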

GNSS (better known as GPS) plays a significant role in helping robots navigate by providing accurate positioning information. It tells the robot precisely where it is and confirms that it is heading in the right direction or on the right path. In scenarios where multiple robots need to collaborate in the same field, GNSS is also used to coordinate their activity. It is important to note that while GNSS is very valuable in outdoor applications, it can be limited or impossible to use indoors, such as in greenhouses.
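To use a GNSS fix for navigation, robots typically project latitude/longitude onto a local metric grid around the field. A minimal sketch, using the equirectangular approximation (good enough over field-sized distances; the coordinates below are invented):

```python
import math

EARTH_RADIUS = 6_371_000.0  # metres, mean Earth radius

def gnss_to_local(lat, lon, ref_lat, ref_lon):
    """Project a GNSS fix to local east/north metres around a field
    reference point (equirectangular approximation)."""
    east = math.radians(lon - ref_lon) * EARTH_RADIUS * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_RADIUS
    return east, north

# hypothetical reference corner of a field, and a fix 0.001 deg further north
east, north = gnss_to_local(45.0010, 25.0000, 45.0000, 25.0000)
print(round(north))  # 111 (0.001 deg of latitude is about 111 m)
```

Production systems use proper map projections (e.g. UTM) and RTK corrections for centimetre accuracy, but the idea of working in local metres is the same.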

Valuable information with cameras

Cameras are another kind of sensor that provides valuable information to robots, and here the more, the better. Digital cameras provide a wide field of view by capturing images and video from multiple directions. Cameras are used more and more in SLAM (Simultaneous Localization and Mapping) algorithms, which allow robots to build a map of the environment while simultaneously determining their own location within it. It must be noted that algorithms that use camera data need more computing power than average.
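The "simultaneous" part of SLAM is the key idea: pose and map are estimated together, in the same frame. A deliberately tiny, noise-free 1D sketch (not a real SLAM algorithm, and every number is invented): the robot dead-reckons along a row while repeatedly ranging one landmark, and each observation is mapped in the robot's own estimated frame.

```python
# Toy 1D "SLAM" sketch: estimate the robot pose from odometry while
# building a one-landmark map from repeated range observations.
true_robot, true_landmark = 0.0, 10.0
est_robot = 0.0
landmark_obs = []

for _ in range(5):
    true_robot += 1.0          # real motion
    est_robot += 1.0           # odometry update (noise-free here, for clarity)
    z = true_landmark - true_robot        # range measurement (e.g. from a camera)
    landmark_obs.append(est_robot + z)    # place the landmark in the estimated frame

est_landmark = sum(landmark_obs) / len(landmark_obs)
print(est_landmark)  # 10.0 — pose and map estimated together
```

Real visual SLAM tracks thousands of image features, handles noise with probabilistic filters or graph optimisation, and corrects accumulated drift on loop closure, which is exactly why camera pipelines demand so much computing power.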

Inertial Measurement Units (IMUs) are sensors that measure and report specific force and angular rate. In agricultural robots they provide crucial information for motion tracking, orientation estimation, and control: for example, the robot’s roll, pitch, and yaw angles, and, by integrating the measurements over time, estimates of its speed.
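Orientation from an IMU is typically obtained by integrating the gyro's angular-rate readings over time. A minimal sketch (the sample rate and turn rate are invented; real systems fuse this with GNSS or magnetometers because pure integration drifts):

```python
import math

def integrate_yaw(yaw0, yaw_rates, dt):
    """Dead-reckon heading by Euler-integrating the IMU's yaw-rate (gyro)
    samples; the result is wrapped into [0, 2*pi)."""
    yaw = yaw0
    for rate in yaw_rates:
        yaw += rate * dt
    return yaw % (2 * math.pi)

# robot turns at 0.1 rad/s, sampled 10 times at 0.5 s intervals -> 0.5 rad total
print(round(integrate_yaw(0.0, [0.1] * 10, 0.5), 3))  # 0.5
```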

As most of the robots deployed in agriculture are wheeled or tracked, wheel encoders are another important sensor to know about. They measure the rotation and speed of the robot’s wheels, and also feed odometry, navigation, and error correction (for example, detecting slippage or uneven terrain).
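Encoder-based odometry can be sketched with the standard differential-drive model (the tick resolution, wheel radius, and track width below are assumed values, not any particular robot's):

```python
import math

TICKS_PER_REV = 1024          # encoder resolution (assumed)
WHEEL_RADIUS = 0.30           # wheel radius in metres (assumed)
TRACK_WIDTH = 1.20            # distance between left and right wheels (assumed)

def odometry_step(x, y, heading, left_ticks, right_ticks):
    """Update the pose from one encoder sample (differential-drive model)."""
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2                     # distance travelled by the centre
    dtheta = (dr - dl) / TRACK_WIDTH      # change of heading
    x += d * math.cos(heading + dtheta / 2)
    y += d * math.sin(heading + dtheta / 2)
    return x, y, heading + dtheta

# equal tick counts on both wheels -> straight-line motion
x, y, h = odometry_step(0.0, 0.0, 0.0, 1024, 1024)
print(round(x, 3), round(y, 3))  # 1.885 0.0 (one wheel revolution straight ahead)
```

When one wheel reports more ticks than the other, `dtheta` captures the turn; comparing this dead-reckoned pose against GNSS is how slippage gets detected.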

AgXeed robot with front safety bumper. - Photo: Mark Pasveer

Sensors

Other sensors commonly found in commercially available robots today, focused mainly on safety or quality, are:

  1. Proximity sensors, which use infrared or ultrasonic technology to detect the presence or nearness of foreign objects, animals, or people.
  2. Touch and pressure sensors, such as force-sensitive resistors, which allow robots to detect physical contact with other objects or surfaces. These sensors are integrated especially into the safety bumpers of agricultural robots.
  3. Temperature and humidity sensors, which are being implemented in agricultural robots more and more, as certain operations need to be stopped if the weather conditions change mid-operation, in order to preserve the quality of the work.
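How such safety sensors feed into the robot's control loop can be sketched as a simple speed envelope (the distances and speeds below are invented thresholds, not any vendor's safety logic):

```python
def safe_speed(target_speed, proximity_m, bumper_pressed,
               stop_dist=1.0, slow_dist=3.0):
    """Simple safety envelope: stop on bumper contact or a close obstacle,
    and slow down proportionally inside the warning zone."""
    if bumper_pressed or proximity_m <= stop_dist:
        return 0.0
    if proximity_m < slow_dist:
        # linear ramp between stop_dist and slow_dist
        return target_speed * (proximity_m - stop_dist) / (slow_dist - stop_dist)
    return target_speed

print(safe_speed(2.0, proximity_m=5.0, bumper_pressed=False))  # 2.0
print(safe_speed(2.0, proximity_m=2.0, bumper_pressed=False))  # 1.0
print(safe_speed(2.0, proximity_m=4.0, bumper_pressed=True))   # 0.0
```

Certified machines implement this in dedicated safety hardware rather than application code, but the behaviour a buyer should look for is the same: contact or close proximity must always win over the task.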

And we should not forget the sensors more commonly used in agriculture, such as flow sensors, tilt sensors, encoders, potentiometers, and RFID sensors. We won’t spend time explaining these, as most farmers are already familiar with them.
