Creating effective deep-learning capabilities for herbicide application is becoming easier. All it takes is a good algorithm for weed identification – but good algorithms take time.
Artificial intelligence (AI) is increasingly enabling farmers and ranchers to produce food with more precision. The algorithms behind effective AI programming, however, can take considerable time to construct – particularly if the goal is to develop deep-learning capability.
This challenge is particularly pronounced when the problem being addressed has a lot of variation. Weeds, for example, can be tall, short, thick, thin, different in colour, or similar looking to surrounding crops, making weed identification significantly more difficult.
According to Guy Coleman, weed control researcher at the University of Sydney, the creation of effective deep-learning capabilities for herbicide application is becoming easier. It’s still not a quick pursuit, though, in comparison to simpler and more general AI programming.
Regardless, both systems have benefits and drawbacks, and are playing an increasing role in herbicide application systems.
Broad-acreage application has remained the standard practice for weed control because of the difficulty of identifying the location of weeds, says Coleman. Advances have been made, however, particularly with colouration sensors working on brown (bare soil) backgrounds.
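The colour cue behind green-on-brown detection can be sketched in a few lines. The excess-green index used here is a well-known vegetation index; the threshold and the sample pixel values are illustrative assumptions, not figures from any commercial sensor.

```python
def excess_green(r, g, b):
    """Excess-green index (ExG = 2G - R - B): high for green
    vegetation, near zero for brown soil."""
    return 2 * g - r - b

def is_vegetation(pixel, threshold=20):
    # `threshold` is an assumed tuning value, not a published constant.
    return excess_green(*pixel) > threshold

soil = (120, 90, 60)   # typical brown-soil RGB (illustrative values)
weed = (40, 160, 50)   # green plant material (illustrative values)
```

A sensor applying a rule like this per pixel can flag plants against bare soil in real time, which is why green-on-brown was the first case solved commercially.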
Identifying weeds in a green-on-green environment is the real challenge. Success has been found with drone and satellite imaging, although the approach does not work in real-time.
“I guess in the future you might be able to mount sensors and nozzles to drones. The payloads are going to be an issue,” says Coleman. Ground-based systems – which take action based on information from cameras collecting in-depth colour, shape, and spatial information – are an important way forward.
The problem is weeds change in colour throughout the day. They also change colour throughout the year.
“We can use this information to, not necessarily go to machine learning just yet, but start to work with just shape, colour, texture, location, using fairly simple algorithms…The problem is weeds change in colour throughout the day, different lighting conditions from the sun. They also change colour throughout the year, the season, in different drought stress [and other] environments. So that’s why this method isn’t always the best, but it does work in some scenarios.”
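The kind of simple, hand-crafted approach Coleman describes – working with shape rather than machine learning – might look like the sketch below. The elongation rule and its cutoff are hypothetical illustrations, not an actual classifier from his research.

```python
def blob_shape(pixels):
    """Simple shape cues for one segmented plant blob, given its
    pixel coordinates as (row, col) pairs: area and elongation
    (long-side over short-side of the bounding box)."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    elongation = max(height, width) / min(height, width)
    return len(pixels), elongation

def looks_like_grass(pixels, elong_cutoff=3.0):
    # Hypothetical rule of thumb: long thin blobs resemble
    # grass-type weeds; rounder blobs resemble broadleaf plants.
    _, elongation = blob_shape(pixels)
    return elongation >= elong_cutoff

# A tall, narrow blob: 30 pixels high, 2 pixels wide.
blob = [(r, c) for r in range(10, 40) for c in (24, 25)]
```

Rules like this are fast and transparent, but – as Coleman notes – they break when lighting, season, or drought stress shifts the very cues they depend on.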
Older AI algorithms and algorithm-generation practices have already made real-time weed identification a reality. With the right hardware incorporated onto ground application equipment, Coleman says algorithms fed with small numbers of images can help growers identify and control a large percentage of weeds in a given acre.
An Australian grower he recently worked with, for example, found success by inputting just 120 field images into the AI algorithm. More than 1,000 weeds had to be manually identified in those 120 images, though. Coleman estimates the identification process would have taken between half a day and a full day – not unreasonable, but still fairly time-consuming.
Coleman reiterates it is possible for people to create fairly effective programs – in the area of 80 per cent effective, for example – with a couple of hours of work. The issue is that a limited number of images fed to the algorithm means it can’t generalize. The algorithm might work, potentially quite well, but only in the exact environment for which it was designed. Being effective in new fields and new conditions requires additional time.
Deep-learning algorithms – those that can learn and generalize – are the goal. But where 100 images might work in an older system, deep-learning systems require thousands, hundreds of thousands, perhaps even millions of images to develop active learning capacity. Effectiveness increases with the amount of training data – but so does the time commitment on the part of developers.
“Getting that first part is the most difficult part because the algorithm doesn’t know anything until you give it information,” says Coleman, adding there are also diminishing returns as more data is fed in. The first 50,000 images might produce a weed-identification rate of 90 per cent, for instance, whereas another 100,000 images might be required to account for the last five or 10 per cent.
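Coleman’s diminishing-returns point can be made concrete with a toy learning curve. The power-law form and its parameters below are assumptions chosen only to reproduce his example figures (90 per cent at 50,000 images, roughly 95 per cent at 150,000); they are not fitted to any real weed dataset.

```python
import math

# Assumed power-law learning curve: error shrinks as images are
# added, but ever more slowly. Parameters are picked to match the
# article's example figures, nothing more.
B = math.log(2) / math.log(3)      # tripling the data halves the error
A = 0.10 * 50_000 ** B             # anchors 50,000 images at 90 per cent

def est_accuracy(n_images):
    """Illustrative accuracy after training on n_images."""
    return 1.0 - A * n_images ** -B

first = est_accuracy(50_000)       # 90 per cent from the first 50,000
later = est_accuracy(150_000)      # 95 per cent after 100,000 more
```

The shape, not the exact numbers, is the point: each extra percentage point of accuracy costs far more labelled images than the last.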
Just think of the scale of these things and the amount of computation that’s happening
“Deep learning is showing it’s much better adapting to those different conditions. These things are very, very large. A normal network is, I guess, around 100 million parameters…That’s the scale of it these days. Just think of the scale of these things and the amount of computation that’s happening.”
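The 100-million-parameter figure is easy to sanity-check with basic arithmetic. The layer sizes below are typical illustrative values for a mid-sized convolutional network, not the specification of any particular product.

```python
def conv2d_params(in_ch, out_ch, kernel):
    """Weights in one 2-D convolution layer: one kernel-by-kernel
    filter per (input channel, output channel) pair, plus one bias
    per output channel."""
    return out_ch * (in_ch * kernel * kernel + 1)

# A single mid-network 3x3 layer already holds over a million
# parameters; stack dozens of such layers and totals in the tens
# or hundreds of millions follow quickly.
mid_layer = conv2d_params(256, 512, 3)
```

At that scale, the computation happening on every camera frame is exactly the “amount of computation” Coleman highlights.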
Daunting as it may be, Coleman says the amount of effort being put into deep-learning weed identification systems is substantial, and is already generating real results. Indeed, he says many large manufacturers have made significant strides in on-the-go weed control, whether in-house or by partnering with companies specializing in the space.
“The fact that this works and we’re seeing examples in the field, in real time, is pretty cool…That’s why everybody is so excited about deep learning, it’s doing much better than traditional machine learning.”
One of the most exciting and surprising things, from Coleman’s perspective, is the availability – and quality – of weed identification data. This, he believes, is a major boon to developers, and to those interested in precision control more generally.
“The level of open-source information, it’s something that really blew me away. The machine learning community, I think it has flourished partly because of openness… Someone has already done the coding for you,” he says.
“If agriculture can head down that path, I think it would help address some of the issues around [weed identification difficulties] and shift things away from data collection and boring annotation…It can open this up to people who can make a difference but that wouldn’t consider weed detection in their daily lives.”