News | January 18, 2024

Responsible AI For Food And Ecosystems

Many WUR groups are increasingly working with artificial intelligence (AI). What opportunities and dilemmas does this offer for Wageningen research? And how can WUR invest in so-called 'responsible AI'?

The Wageningen research groups are producing and using more and more data. These data are fundamental to our understanding of the food system, ecosystems and all processes studied in Wageningen. AI offers many new ways to interpret this data.

For example, the Netherlands Plant Eco-phenotyping Center (NPEC) in Wageningen carries out a huge number of measurements on plants under controlled conditions. By growing and measuring multiple crop varieties under different environmental conditions, researchers try to find out how the interaction between DNA and environmental conditions works. These measurements generate an enormous amount of data: currently more than 1,000 terabytes.

NPEC already uses AI, say Professor Mark Aarts and Rick van de Zedde of NPEC. For example, plant researchers take images of plants in the greenhouse, and an AI program filters those images so that only the relevant plant parts are analyzed; irrelevant elements such as the background, pots and sticks are filtered out. But the researchers are now discovering that this enormous mountain of data holds more information than they can currently extract.
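The filtering idea can be sketched in a few lines. In this minimal example, a simple "green dominates" rule stands in for the trained AI model that NPEC actually uses; the pixel values are invented for illustration.

```python
# Minimal sketch: mask out non-plant pixels so only plant material is analyzed.
# A simple "green dominates" rule stands in for a trained segmentation model.

def plant_mask(image):
    """Boolean mask: True where a pixel looks like plant material.

    `image` is a grid (list of rows) of (r, g, b) tuples, values 0-255.
    """
    return [[g > r and g > b and g > 60 for (r, g, b) in row] for row in image]

def masked_pixels(image, mask):
    """Keep only pixels flagged as plant; drop background, pots and sticks."""
    return [px for row, mrow in zip(image, mask)
            for px, keep in zip(row, mrow) if keep]

# Tiny 2x2 frame: a leafy green pixel, a brown pot pixel,
# a grey background pixel and a beige stick pixel.
img = [[(30, 160, 40), (120, 70, 30)],
       [(128, 128, 128), (200, 180, 140)]]
mask = plant_mask(img)
plant = masked_pixels(img, mask)
print(plant)  # only the green pixel survives the filter
```

A real pipeline would apply a learned model per pixel instead of a fixed color rule, but the downstream step is the same: only the masked pixels reach the analysis.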

Disease detection in plants
For example, there is a project on disease detection in plants. The researchers grow different varieties in the greenhouse, introduce a pathogen and then monitor the health of the plants. They use a series of fully automated imaging systems to determine whether the plants are sick, and if so, how sick. They are now training an AI system to recognize the disease stages in the camera images, on the assumption that with AI the computer can detect the diseases earlier and more accurately than we can with the naked eye.
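One simple way to frame stage recognition is as classification of image-derived features. The sketch below uses a nearest-centroid classifier on two invented features (fraction of yellowed pixels, lesion count); the real system trains on the camera images themselves, and all numbers here are illustrative.

```python
# Illustrative sketch: disease-stage recognition as nearest-centroid
# classification on image features. All feature values are invented.

TRAINING = {
    "healthy":  [(0.02, 0), (0.03, 1), (0.01, 0)],
    "early":    [(0.10, 4), (0.12, 5), (0.09, 3)],
    "advanced": [(0.35, 18), (0.40, 22), (0.30, 15)],
}

def centroid(points):
    """Average feature vector of a set of labelled examples."""
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

CENTROIDS = {stage: centroid(pts) for stage, pts in TRAINING.items()}

def classify(features):
    """Assign the disease stage whose centroid is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda s: dist2(features, CENTROIDS[s]))

print(classify((0.11, 4)))  # -> "early"
```

A deep network replaces the hand-picked features with learned ones, but the output is the same kind of decision: which stage does this plant most resemble?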

AI can also help find and explain strange deviations in experiments. If the data for some plants deviate from the average, is that because they received less water (a clogged tube) or because that genotype responds differently to the treatment? Researchers often only notice such deviations at a later stage, when the cause, especially in large data sets, can no longer be determined. With AI, the computer can notice and analyze a deviation immediately, or trace it back in the database.
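The "notice it immediately" step can be as simple as a robust outlier check on each measurement batch. The sketch below flags plants that deviate strongly from the group median; the leaf-area numbers and the clogged-tube scenario are invented for the example.

```python
# Illustrative sketch: flag plants whose measurement deviates strongly from
# the group, using the robust median absolute deviation (MAD).
from statistics import median

def flag_outliers(measurements, threshold=3.5):
    """Return ids whose value lies more than `threshold` robust
    standard deviations (1.4826 * MAD) from the median."""
    values = list(measurements.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return [pid for pid, v in measurements.items()
            if abs(v - med) > threshold * 1.4826 * mad]

# Hypothetical leaf areas (cm^2); plant_07 received no water (clogged tube).
leaf_area = {"plant_01": 52.1, "plant_02": 49.8, "plant_03": 51.5,
             "plant_04": 50.2, "plant_05": 48.9, "plant_06": 51.0,
             "plant_07": 12.3}
print(flag_outliers(leaf_area))  # flags plant_07
```

The median-based rule is used here instead of mean and standard deviation because a single extreme plant would otherwise inflate the threshold and mask itself. Explaining *why* a flagged plant deviates (clogged tube versus genotype) still requires the researcher, or the richer AI models the text describes.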

NPEC works together with the three professors in the field of AI whom WUR appointed more than two years ago. Anna Fensel, Ricardo da Silva Torres and Ioannis Athanasiadis work in various chair groups with support from the Wageningen Data Competence Center. They create, develop and apply AI knowledge for issues in the Wageningen domain.

In recent years, AI professor Da Silva Torres has been involved in several of the seven fellowship projects set up to strengthen AI in Wageningen research fields. In one of these projects, Da Silva Torres collaborated with the Aquatic Ecology and Water Quality Management group, which conducts research into the resilience of ecosystems and possible tipping points if that resilience decreases.

Coral reef resilience
The group wanted to use remote sensing images of vegetation to determine the resilience of an ecosystem using Turing patterns. A Turing pattern is a mathematical explanation for the emergence of patterns in biology from the interaction of two variables. Using such patterns, the ecologists wanted to determine, for example, the resilience of a coral reef and whether that ecosystem was about to die. Da Silva Torres contributed the AI knowledge: he wrote an algorithm that assesses and classifies the remote sensing images in terms of resilience.
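The two-variable mechanism behind Turing patterns can be sketched with a standard reaction-diffusion system. The example below simulates the Gray-Scott model in one dimension, where two interacting, diffusing quantities turn a near-uniform field into localized structure; this is a textbook illustration, not the group's actual method, and the parameter values are illustrative (other choices give different patterns or none at all).

```python
# Sketch of Turing-style pattern formation: the Gray-Scott model in 1D.
# Two variables, a substrate u and an activator v, diffuse at different
# rates and react; small perturbations can grow into spatial patterns.
N, STEPS = 100, 4000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060  # illustrative parameters

u = [1.0] * N            # substrate, initially abundant everywhere
v = [0.0] * N            # activator, initially absent ...
for i in range(45, 55):  # ... except for a small central seed
    u[i], v[i] = 0.50, 0.25

for _ in range(STEPS):
    # discrete Laplacian on a ring (periodic boundary)
    lap_u = [u[(i - 1) % N] + u[(i + 1) % N] - 2 * u[i] for i in range(N)]
    lap_v = [v[(i - 1) % N] + v[(i + 1) % N] - 2 * v[i] for i in range(N)]
    uv2 = [u[i] * v[i] * v[i] for i in range(N)]
    u = [u[i] + Du * lap_u[i] - uv2[i] + F * (1 - u[i]) for i in range(N)]
    v = [v[i] + Dv * lap_v[i] + uv2[i] - (F + k) * v[i] for i in range(N)]

# Render the activator as a stripe: '#' where v is high, '.' where low.
stripe = "".join("#" if x > 0.2 else "." for x in v)
print(stripe)
```

The ecological application runs this logic in reverse: given an observed pattern in remote sensing imagery, classify which dynamical regime, and hence how much resilience, the ecosystem is in.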

With such an algorithm, you can also determine the resilience of a savannah or rainforest from satellite images, says Da Silva Torres. He always needs the domain knowledge of the chair groups to create a good algorithm: the research groups provide the correct context and variables and indicate which properties of an entity are relevant. Moreover, they can test the algorithm against their research expertise. This collaboration between AI and domain experts is mutually beneficial and offers many opportunities in both directions.

According to the AI professors, the interaction between humans and machines is crucial, because much AI cannot be fully automated: there is a risk that the computer will interpret and classify data incorrectly. To keep a grip on the development of AI, it is therefore important that the origin and use of data are transparent. In other words, data must be FAIR: findable, accessible, interoperable and reusable. This makes FAIR a sustainable alternative to the current data management practices of the large technology companies.

Algorithm testing centers
Secondly, it is important that we properly record instructions and algorithms. There is now EU legislation requiring Member States to categorize algorithms and indicate how risky they are. There will also be test centers for algorithms: WUR will become part of the European center for testing and experimentation with AI applications in the agri-food sector. Professor Ioannis Athanasiadis is involved in this test center, together with colleagues from Wageningen Research and the Agrotechnology and Geo-information Science chair groups.

Third, we need to answer the question: who owns the data, and what AI do we actually want? That is why WUR conducts research into the ethical, legal and societal aspects (ELSA) of AI. Wageningen technology philosopher Vincent Blok is involved in this, as is AI professor Da Silva Torres. ELSA is a virtual lab, Blok emphasizes: it experiments with the ethical, legal and social sides of artificial intelligence in concrete practices.

For example, Blok researches milking robots. Farmers use them for milking, but the milking robot contains artificial intelligence and can therefore also be used for animal health and medical diagnostics. Do farmers want that? There would then be an algorithm that assesses the health of the cows. Who owns that data, and who is responsible if a cow remains ill or even dies? The ELSA lab raises these kinds of questions so they can be included in the design process of a smart AI milking robot.

Something else plays a role here: AI is a decision support system; it gives the farmer advice, but does the farmer believe and trust that advice? Blok: 'Then you have to ensure that AI is not a black box; you have to be able to clearly explain how the AI arrives at its advice. You may also need to be able to bring the farmer's expertise into the AI system and take it into account.' In this way, AI becomes an interface between technology and behavior.

There are more examples of 'responsible AI'. Professor Anna Fensel has developed technical solutions for responsible access and use of data. Such solutions include descriptions and data sharing tools that comply with the principles of the EU General Data Protection Regulation. Suppose you drive a car. As a motorist, do you want to share data about your journey for road safety? Probably. This is how you achieve 'positive data sharing'. This also allows consumers to share data about diseases, lifestyle and behavior in a transparent manner for purposes that benefit everyone. In that case, it is important to consider what data people want to share, with whom, how and for how long.

Face recognition
Many problems have less to do with the technology itself than with the fact that a few large companies are in control, says Blok. That is difficult to solve in the ELSA design process; for that, you should look instead to the EU AI Act, which regulates the power and competition of tech companies and which, for example, requires consumer consent when AI is used for facial recognition.

For Blok, the question around facial recognition software is not whether to ban it, but how to use it responsibly. It can be useful in nutritional research, for example. Much research into what people eat is now done with questionnaires, which we know people do not complete accurately. A smartwatch that tracks what test subjects consume could offer a solution.

Going a step further, nutrition researchers install cameras in nursing homes to see how much and what residents eat and how long they chew their food, says Blok. This is not possible without the residents' consent. One option is for the cameras to record only food intake and the chewing movements of the part of the face around the mouth. You can then create AI protocols that limit the camera images in such a way that nutritional research is possible without facial recognition.
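The core of such a protocol is simple: crop every frame to a fixed region around the mouth before anything is stored or analyzed, so the retained images cannot support facial recognition. The sketch below shows that step with illustrative coordinates; a real system would locate the mouth region per frame.

```python
# Sketch of a privacy-limiting protocol: keep only the mouth region of each
# frame. Coordinates are illustrative placeholders.

def crop_to_mouth(frame, top=60, bottom=90, left=30, right=70):
    """Return only the rows/columns around the mouth; discard the rest."""
    return [row[left:right] for row in frame[top:bottom]]

# A dummy 100x100 greyscale frame (pixel values 0-255).
frame = [[(r * 100 + c) % 256 for c in range(100)] for r in range(100)]
roi = crop_to_mouth(frame)
print(len(roi), len(roi[0]))  # 30 rows x 40 columns survive
```

The point of the protocol is *where* this crop happens: applied on the camera itself, before storage, the full face never enters the research data set.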

How are these AI protocols actually created? That is the research area of AI professor Anna Fensel. She works on knowledge graphs, which organize and connect information so that researchers can make meaningful connections and discover valuable insights. This creates an extensive knowledge network that serves as the basis for FAIR data. For researchers, knowledge graphs can be an alternative to manually searching large amounts of literature to find relevant information and grasp the bigger picture. Such a knowledge graph is currently being developed, for example, in the EU project SoilWise, which collects data on soil health.
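At its simplest, a knowledge graph is a set of subject-predicate-object triples, the basic shape used in semantic-web tooling. The sketch below shows how queries over triples connect pieces of information; the soil facts are invented placeholders in the spirit of a project like SoilWise, not real data.

```python
# Sketch of a knowledge graph as subject-predicate-object triples.
# All facts below are invented placeholders for illustration.

triples = {
    ("field_A", "has_soil_type", "clay"),
    ("field_A", "measured_by", "sensor_12"),
    ("clay", "retains", "water"),
    ("sensor_12", "reports", "moisture"),
}

def related(entity, graph):
    """Outgoing edges from an entity: the basic 'connect information' step."""
    return {(p, o) for (s, p, o) in graph if s == entity}

def two_hop(entity, graph):
    """Follow two edges, e.g. field -> soil type -> soil property,
    to surface connections that are not stated directly."""
    return {(p1, o1, p2, o2)
            for (s1, p1, o1) in graph if s1 == entity
            for (s2, p2, o2) in graph if s2 == o1}

print(related("field_A", triples))
print(two_hop("field_A", triples))
```

The two-hop query is the small-scale version of what makes knowledge graphs useful: field_A is never directly linked to water retention, yet the connection falls out of traversing the graph.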

These knowledge graphs are semantic networks, says Fensel: they use the meaning of words and symbols to structure data. AI is essentially not about zeros and ones, but about language, identity and meaning. Semantic networks ensure that information gains meaning through the collaboration of humans and machines.

Digital tomato
Philosopher Vincent Blok applies this semantics to the tomato, for example. Blok: 'We are doing a project with digital twins, digital copies or representations of real things to experiment with those copies. For example, we have made a digital copy of a tomato, but what data do you need? The plant and food scientist will say: we need information about shape, color, water content and vitamins. The supermarket mentions uniformity and price and the consumer mentions other elements. You soon discover that there are many implicit definitions and characteristics to capture a digital tomato – it is not a neutral representation, although engineers often think so. Decisions about the desired tomato are often made for commercial reasons and defining the digital tomato makes those reasons visible.'
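Blok's point, that a digital twin encodes someone's choice of attributes, can be made concrete with a small data structure. The attributes below are the ones named in the text; the values are invented placeholders.

```python
# Sketch of a "digital tomato": each stakeholder models different attributes,
# so the twin is a choice, not a neutral copy. Values are illustrative.
from dataclasses import dataclass, field

@dataclass
class DigitalTomato:
    # plant and food science perspective
    shape: str = "round"
    color: str = "red"
    water_content_pct: float = 94.0
    vitamins_mg: dict = field(default_factory=lambda: {"C": 14.0})
    # supermarket perspective
    uniformity: float = 0.9
    price_eur: float = 0.35

def view(twin, attrs):
    """Each stakeholder sees only the attributes they chose to model."""
    return {a: getattr(twin, a) for a in attrs}

t = DigitalTomato()
print(view(t, ["shape", "color", "water_content_pct"]))  # scientist's view
print(view(t, ["uniformity", "price_eur"]))              # supermarket's view
```

Writing the schema down is exactly what Blok describes: the act of defining the fields makes visible whose interests, and which commercial reasons, shaped the "desired" tomato.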

But in the next phase of AI, the computer may design its own tomato without precise instructions from humans, based on hypotheses from the collected data. At NPEC, this future is already within reach. Mark Aarts: 'The next step is for AI to recognize patterns in large data files that the researchers had not yet noticed. We can then investigate AI-generated hypotheses. For example, AI identifies a pattern and asks researchers: is this an interesting pattern? We are convinced that this will play an important role in plant breeding.'

Is there still work for the plant researcher? Aarts: 'Yes, the domain experts continue to play an important role in designing and applying AI methods for data interpretation, together with the AI experts. This allows us to make progress in both domain knowledge and AI methods.'

Source: Wageningen University & Research