Charcoal risk model hackathon

Our Sensing Clues colleagues Melanie and Chiel organised a hackathon together with Big Data Republic from the Netherlands. The hackathon took place on two afternoons, 19 and 26 January 2024.

The goal of the hackathon

The goal of the hackathon was to improve the risk model previously developed by Melanie, using newly available data to make it better at predicting where illegal charcoaling events will happen.

Illegal charcoaling is one of the main causes of environmental degradation and biodiversity loss, especially in Africa, where the practice is widespread. Charcoaling is the process of converting wood or woody biomass from bushes into charcoal through controlled burning and carbonisation. For this hackathon, we collaborated with Wildlife Works, our field partner in Kenya, to develop and test the charcoal event prediction model.

The participants of the hackathon

Four teams of four members each participated in the hackathon, one of which would be selected as the winner! The teams were free to use other methods, models, and data sources (e.g. Lidar data), and to bring their creativity to the hackathon. In the end, the models were run against actual data from Wildlife Works, and the outcomes were compared with the base model we had developed earlier.

We had a winner. Guess who it was!

As the graph shows, Team 2 (green) developed a significantly improved model compared to the base model (red). However, Team 4’s model (light blue) scored better on the left side of the graph, which proved to be more relevant (explained below). Hence Team 4 emerged as the winner.

The winners didn’t go empty-handed

The winners received branded Sensing Clues beanies and the opportunity to meet with Wildlife Works to learn how the risk model is used in their daily work.

 

What we will do with the models

As for the models created during the hackathon, we will study the different approaches of the teams and will implement any ideas that improve the performance of the model, including ideas from the teams that did not win. This will help us develop an improved risk map of charcoaling events in the area.

The impact of the hackathon

The charcoal risk model helps rangers find more kilns with less patrolling effort. The risk model makes predictions, and in doing so it makes two types of errors: False Negatives (FN) and False Positives (FP). A False Negative occurs when we miss a charcoaling event happening in the area. A False Positive occurs when we patrol unnecessarily in an area where no charcoaling event has taken place. The objective of the improved charcoaling model is to lower both errors, thus increasing patrol efficiency.

By taking into account the prevalence (rate of occurrence) of charcoaling, which is estimated at 2% of all areas, it is possible to express the potential of a classification model for charcoaling in an overall cost amount. This is shown in the graph above, where the straight lines indicate 'equal cost'; lines more to the left have lower overall cost.

As we want to avoid false negatives (missed events) more than we want to minimise patrolling of places where no events are happening (false positives), the model of Team 4 came out as the winner.
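To make the trade-off concrete, the overall cost of a model can be sketched as prevalence-weighted error costs. The following minimal sketch illustrates why a model that avoids false negatives can win even if it triggers more needless patrols; all cost figures are hypothetical, not the project's actual numbers:

```python
# Sketch of the cost trade-off described above: with a prevalence of ~2%,
# the expected cost combines missed events (FN) and needless patrols (FP).
# The cost figures are hypothetical and only illustrate the idea.

def expected_cost(sensitivity, specificity,
                  prevalence=0.02,  # ~2% of areas have charcoaling events
                  cost_fn=50.0,     # missing an event is very costly
                  cost_fp=1.0):     # an unnecessary patrol is cheaper
    fn_rate = prevalence * (1 - sensitivity)        # missed events
    fp_rate = (1 - prevalence) * (1 - specificity)  # needless patrols
    return cost_fn * fn_rate + cost_fp * fp_rate

# When missed events dominate the cost, the model that avoids false
# negatives wins, even though it patrols more areas unnecessarily:
cautious = expected_cost(sensitivity=0.95, specificity=0.80)
precise = expected_cost(sensitivity=0.80, specificity=0.90)
print(cautious < precise)  # True
```

Lowering `cost_fn` relative to `cost_fp` tilts the comparison the other way, which is exactly what the slope of the 'equal cost' lines expresses in the graph.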

How New Relic is used to monitor our platform

It is very important that our tools are available 24/7 and that we can proactively react to potential issues. To ensure high availability of our solution, we use the monitoring tool suite from New Relic. Not only can we use their application for free, they also support us through their Pro Bono program: once or twice a year, they put together a team to assist us and ensure we use their products in the best way possible.

You can read more about New Relic and how we monitor our platform here: https://newrelic.com/blog/how-to-relic/sensing-clues-ngo-alerts-dashboards

The crane migration is on!

Author: Assistant Professor Koen de Koning, partner in Nature FIRST

Autumn is normally marked by colder weather, giving many thousands of birds the go-ahead to migrate south to their wintering grounds in southern Europe and Africa. This fall, however, has been particularly mild so far, so few birds have felt the need to begin their autumn migration. That is now changing! Colder weather is approaching, and that will change the minds of many thousands of migratory birds. There may even be a real mass migration in the offing, and the cranes are among them!


Some time ago I examined GPS data from cranes fitted with transmitters, and combined it with historical weather data, to get a picture of how the weather affects their migratory behavior. What emerged? Cranes are particularly strongly influenced by the weather in the fall. They wait patiently until optimal conditions literally give them a boost to fly southward. A very strong relationship can be found between a few key weather characteristics and the decision to depart: wind speed, wind direction, precipitation, cloud cover and temperature. The day of departure is heralded by a sharp drop in temperature, a substantial decrease in wind speed (calm weather), drier weather, more sun, and a wind turning to the northeast. Sunshine provides the necessary thermals, allowing cranes to gain altitude easily, and the northeasterly wind naturally blows the birds south. Especially in northwestern Europe, where headwinds (southwesterly winds) are dominant, this turning of the wind to the northeast is a very important starting signal for the autumn migration. And as it happens, all these favorable weather factors come together early next week, which promises spectacular bird migration!

Can we expect thousands of cranes in the Netherlands? That remains to be seen, but the predictions are favorable. The only thing is that most of the cranes have only just made the crossing from Scandinavia to Germany, so the question is whether they have been sufficiently 'refueled' to make the next leg. In addition, the wind has to come in from the east to 'blow' the cranes to our country. In any case, there is plenty of reason to keep a close eye on the crane radar!

And a nice piece of news for those who are already familiar with the crane radar: we have recently been working hard on a new version that also incorporates these weather forecasts, in order to track the migration even more accurately on the radar!


Credits:

Nature FIRST and Waarneming.nl

WWF-Ukraine introduces Sensing Clues

In May, our WWF-Ukraine colleagues in Nature FIRST trained staff of the Verkhovyna National Nature Park and Yasinya Forestry in collecting and analysing data about large carnivores.

Some impressions!

"It is necessary to systematically collect and analyse information on the populations of large carnivores and other species to develop sustainable management plans for territories, biodiversity conservation, and the prevention of conflicts in the region. The modern tools of the Sensing Clues software suite can provide this. We provided partners with 10 test smartphones with ready-to-use mobile applications and installed these applications on employees' phones. We also taught them how to work with the monitoring and analysis capabilities of the software package", — Roman Cherepanyn, WWF-Ukraine expert and project manager.

"We met motivated conservationists who strive to improve the primary data collection process and modernise their analysis during these trainings. We received thoughtful questions and collected comments and suggestions regarding the organisation of work with the presented mobile application. Today, this package of programs for analysis and reporting has no available analogues; it has received favourable reviews", — Ostap Reshetylo, WWF-Ukraine expert and project manager.

Intro and Demo of Solutions for Biodiversity Monitoring

Join our online demo

Join an online session where we will showcase the achievements, solutions, and technologies of Nature FIRST, a project focused on biodiversity preservation. Our goal is to gather feedback from key stakeholders in the ecosystem, as they are the future users of our technology.
The session will begin with an introduction to the project, followed by a demonstration of our achievements and solutions.

After the demonstrations, we will open the floor to a Q&A session where you can ask questions and provide feedback. We hope to start a discussion about the ongoing challenges in biodiversity preservation and gather insights from key players in the field.
The session will take place on April 26, 14:00 CET. We look forward to your participation and contribution to this vital discussion. Please let us know if you can join by completing the form below. After completing the form, you will receive a link for the session!


Agenda (CET)

14:00 – 14:10

Introduction of Nature FIRST by Jan Kees from Sensing Clues

14:10 – 14:45

Nature FIRST tech & solutions demonstrations

  • Taxonomy crossovers: EUNIS, CLC, IUCN Red List, Natura2000 and more by Albin from the Semantic Web Company

  • Ecosystem base maps by Melanie from Sensing Clues

  • Intro to the Habitat Mapping method by Jan Kees from Sensing Clues

  • TrapTagger for Species Recognition by Judith from Sensing Clues

  • Towards the Nature FIRST Knowledge Graph by Jan Kees

14:45 – 15:00

Q&A


About Nature FIRST

As a Horizon Europe project funded by the European Commission, Nature FIRST is developing predictive, proactive, and preventative tools for nature conservation.

Stay tuned to learn more about how we combine forensic intelligence, remote sensing technologies and digital twins to protect and restore biodiversity in Europe and beyond. The tools we are developing are tested and demonstrated in the following regions:

  • The Carpathian Mountains, a 1,500 km-long range in Central and Eastern Europe.

  • The Danube Delta, Europe’s largest remaining natural wetland. The greater part of the delta lies in Romania, and a smaller part in Ukraine.

  • The Stara Planina Mountains, a mountain range in the eastern part of the Balkan Peninsula.

  • And the Ancares y O Courel, the largest green reserve in Galicia, Spain.

Learn more about the project on the Nature FIRST website.

Cluey version 3.0 - Community Work

Cluey is being used in a rapidly growing number of countries across the globe to record data related to biodiversity, human-wildlife conflicts, (illegal) human activities, and related points of interest. Next to these themes, most (if not all) projects are deeply engaged with local communities.

To accommodate the needs of these and future projects, and to facilitate the collection of data related to community work, we carried out a series of upgrades. In this email you can read all about them.


Community work
Cluey now supports the registration of activities conducted for and with the local community. The design is based on the requirements of projects in six different countries. Nonetheless, we realise that some information might still be missing. If you think that's the case, please contact us.

The first version of the Community module works as follows:

First, to record an activity that has been conducted, 3 mandatory fields are provided:

  • Activity type contains a list of themes, subjects and types of activities (what);

  • Beneficiary type describes the target group for the selected activity (for whom);

  • Mode of engagement describes how knowledge transfer and other interactions are organised (how);

To complete the report of who participated and what was done and achieved, three optional data entry forms are made available: General, Finance, and Natural resources. Please check them out and send us your feedback!

Activating the Community work module
The new Community work module can be activated in two ways:

  1. Go to Group information and edit the Selected tags. Edit the Selected types list and select the Community work option from the ObservationsTypesList (only group owners can do this). Note: for it to work properly, also select and modify the three mandatory fields Activity types, Beneficiaries and Modes of engagement. In addition, you can customise the fields Agents and Land-ownership. Again, if any options are missing, contact us so we can include them for you.

  2. Create a new group and (de)select the fields of choice.


Addition of new classes 
Adding new classes (such as a species) to existing categories (such as Mammals) is now even easier and faster than before. Easier: you can load the newly added classes by simply pulling your group list. Faster: new classes can be added, translated, tested and released within a week.

Following requests from the field, a number of categories and classes have been added. These are the most important ones:

  • Animal sighting / Animal health (healthy, weak, wounded)

  • Animal sighting / Cause of death / Caught in fence, Starvation and Hit by train

  • Point of Interest / Fire (controlled fire, crown fire, smoke, wild fire)

Tracking types have been expanded with the following options: on Fence inspection, on Perimeter track and on Snare Sweep.

And last but not least: the values of Actions taken can now be customised by the group owner.
 

Tips

  1. Update your Cluey app through the Google Playstore

  2. Refresh your groups list by pulling it. A turning wheel appears. Wait until it disappears again before you continue.

  3. The Community work module is brand-new. If any values that you would expect are missing, we can quite easily add them for you. Just send us a note with a short description. If you can send us a nice icon as well, even better.

  4. For group owners only: check the Selected tags in your Group information for new options per list that have not been described in these release notes.

  5. As of version 3, the observation types that appear when you hit the Add observation button can be customised as well. Go to Group information, Selected tags, and (de)select the options you want in the ObservationTypesList.

Crane Radar Hackathon

The Crane Radar

The Crane Radar is a predictive map that gives birdwatchers a real-time indication of where and when cranes can be spotted and the direction they are heading.

In March of this year, Wageningen University (WUR) and Sensing Clues tested the first online version. Based on observations made by citizen scientists via the platform Waarneming.nl, flocks of cranes appeared on the map. An animation then showed the likely location of these flocks, based on the most recent observations, more than 20 years of historical observations, and knowledge about, for example, their average flying speed.

Challenges

To improve the predictions, the hackathon was aimed at finding ways to incorporate additional environmental factors, such as local weather conditions. In addition, we wanted to crack several technical issues in the web application, to allow as many bird enthusiasts as possible to use the Crane Radar during the upcoming autumn migration.

Results

Based on GPS data provided by the Swedish University of Agricultural Sciences, we were able to confirm that wind speed and direction significantly affect the speed and direction of cranes. We also cracked the mathematics behind drift and compensation, two important factors for improving our predictions.
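To give an idea of what drift means here: a bird's track over the ground is the vector sum of its own airspeed vector and the wind. The sketch below illustrates that basic geometry; the numbers are illustrative, not fitted values from the GPS data:

```python
import math

# A bird's ground velocity is its airspeed vector plus the wind vector.
# Compass convention: 0 degrees = north, angles increase clockwise.

def ground_vector(airspeed, heading_deg, wind_speed, wind_to_deg):
    """Ground velocity (vx=east, vy=north) given heading and wind."""
    hx = airspeed * math.sin(math.radians(heading_deg))
    hy = airspeed * math.cos(math.radians(heading_deg))
    wx = wind_speed * math.sin(math.radians(wind_to_deg))
    wy = wind_speed * math.cos(math.radians(wind_to_deg))
    return hx + wx, hy + wy

def track_deg(vx, vy):
    """Direction of travel over the ground, in compass degrees."""
    return math.degrees(math.atan2(vx, vy)) % 360

# A crane heading due south (180 deg) at 15 m/s, with a 5 m/s wind
# blowing towards the southwest (225 deg), drifts west of its
# intended track:
vx, vy = ground_vector(15, 180, 5, 225)
print(round(track_deg(vx, vy), 1))  # ~190.8 instead of 180.0
```

Compensation is the opposite computation: adjusting the heading so that, after adding the wind, the resulting track matches the intended one.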

What’s next

In the upcoming months, WUR and Sensing Clues will refine the models and prepare the website. We’ll bring it live in September, about a month before the actual migration starts.

Keep an eye out for our new and improved Crane Radar!

Digital twins to foster peaceful human-wildlife coexistence

Quick-response team in action. Photo by Keith Hellyer

All over the world, human-wildlife coexistence is under pressure. There are many different reasons, but almost all are related to land competition. People tend to use more and more space, and if natural areas become too small, fragmented or depleted, wildlife is forced to enter human-dominated landscapes in search of food, water, mates and shelter. The consequences are sometimes grave: crops are eaten, livestock and people are attacked, and people and wildlife even get killed.

To reduce this problem, many nature conservation organisations engage with local communities to mitigate the damage and establish conditions for peaceful coexistence between people and wildlife. Known solutions include shepherds and guard dogs, financial schemes to compensate farmers, the funding and placement of fences to keep wildlife at bay, and early warning systems to detect wildlife.

All these measures are very costly and not always effective. Hence, innovations are needed to drive down the costs and boost the effectiveness of existing measures.

The Digital Twin innovation that we are working on is aimed at boosting the effectiveness of existing measures. This is possible because Digital Twins are sophisticated simulation models that do three things:

  1. they predict where (groups of) animals are right now,

  2. the predictions are continuously updated through real-time observations in the field, and

  3. the prediction algorithms automatically become better through time as they learn from every new observation.
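The loop behind these three steps can be sketched in miniature. The following toy example is entirely illustrative (one-dimensional, with a fixed blending weight); in a real digital twin the movement model and the weight would themselves be learned from data, which is the third step:

```python
# Toy sketch of a digital twin's predict/update loop, reduced to one
# dimension: an animal's position along a track. Illustrative only.

def predict(position, speed, dt):
    """Step 1: move the estimate forward using a movement model."""
    return position + speed * dt

def update(predicted, observed, weight=0.5):
    """Step 2: blend the prediction with a new field observation."""
    return (1 - weight) * predicted + weight * observed

position = 0.0
position = predict(position, speed=2.0, dt=1.0)  # model says 2.0
position = update(position, observed=2.6)        # ranger saw it at 2.6
print(position)  # 2.3: halfway between model and observation
```

Real systems use far richer models (habitat, weather, group behaviour), but the rhythm is the same: predict, observe, correct, learn.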

Together with Wageningen University and others, we are developing digital twins for cranes, bears, and elephants (more species will follow!).

Stay tuned if you want to use our Digital Twins to foster peaceful human-wildlife coexistence!

Intro to CAIMAN

Camera traps are very handy tools for nature conservation professionals. Amongst others, they are used to

  • spot rare species,

  • record nocturnal species,

  • assess biodiversity,

  • recognise individuals,

  • conduct a structured census,

  • monitor places of interest, such as water holes or bird nests,

  • detect poachers and other intruders.

Each use case brings its own challenges and implications for the camera setup and the processing of the recorded images. To cater for each of the above use cases, our CAIMAN solution consists of four services that can be fine-tuned:

  1. a connection service,

  2. a process configuration service,

  3. a human-in-the-loop service,

  4. and reporting services.

Step 1: Connection Service

If real-time response is of the essence, for example when you want to intercept poachers, the cameras need to be connected to the internet. If they are, they can stream their images directly to CAIMAN through an API (for the technicians amongst us).

Most cameras, though, store their images on an SD card, which is collected every now and then. After the images are collected from the field, they can be uploaded to the data upload service of Sensing Clues.

Step 2: Process configuration service

In essence, AI-driven image classification is a statistical exercise. Species are identified with a certain level of probability. As some species are easy to recognise while others are very hard to distinguish from similar species, the quality of the AI model varies per species. A confidence of 100% is very hard, if not impossible, to reach; above 80 to 90% is often more realistic.

AI models are being trained and made available per geographic region and per use case, as illustrated above. The first solution that we are currently testing is aimed at identifying over 200 species that live in Southern Africa.

To tune the classification process to your needs, minimise mistakes, and minimise the time you spend behind the computer, thresholds can be set per species. If a species is very important to you, you can set the threshold for automatically accepting the outcome of the algorithm very high. If a species is more abundant and classification mistakes are less costly, you can lower the threshold for that species.
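A per-species threshold scheme like the one described can be sketched as follows; the species names and threshold values are hypothetical, chosen only to illustrate the idea:

```python
# Sketch of per-species acceptance thresholds: detections at or above
# their species threshold are auto-accepted; the rest go to human review.
# Species names and threshold values are illustrative.

DEFAULT_THRESHOLD = 0.85

THRESHOLDS = {
    "african_wild_dog": 0.95,  # rare and important: demand high confidence
    "impala": 0.70,            # abundant: mistakes are cheap
}

def route(species, confidence):
    """Return 'accept' or 'review' for one classified image."""
    threshold = THRESHOLDS.get(species, DEFAULT_THRESHOLD)
    return "accept" if confidence >= threshold else "review"

print(route("impala", 0.75))            # accept
print(route("african_wild_dog", 0.90))  # review
```

The same detection confidence can thus be auto-accepted for one species and sent to review for another, which is exactly the tuning knob described above.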

STEP 3: Human-in-the-loop validation service

Classifications with probabilities below the threshold are treated in a separate process. In this process we select the images from which the model can learn most. As soon as you’ve verified the images and confirmed or corrected their class, the AI model is re-trained. This speeds up the learning process and decreases the number of images that need to be sifted through manually.

PS: we are still working on the Human-in-the-loop app. The picture above shows an experiment to quickly verify series of images and find oddities, potentially saving you hundreds of hours.
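One common way to select the most informative images is uncertainty sampling: prioritise detections whose confidence lies closest to the acceptance threshold. This is a sketch of that general technique, not necessarily the exact selection rule CAIMAN uses:

```python
# Sketch of uncertainty sampling for the human-in-the-loop queue:
# below-threshold detections are sorted so that the most uncertain
# ones (closest to the threshold) come first. Illustrative only.

def review_queue(detections, threshold=0.85):
    """Sort below-threshold detections, most informative first."""
    pending = [d for d in detections if d["confidence"] < threshold]
    # Closest to the threshold = most uncertain = most informative.
    return sorted(pending, key=lambda d: threshold - d["confidence"])

detections = [
    {"image": "img_001.jpg", "confidence": 0.84},
    {"image": "img_002.jpg", "confidence": 0.30},
    {"image": "img_003.jpg", "confidence": 0.91},  # auto-accepted
]
print([d["image"] for d in review_queue(detections)])
```

Labelling the borderline cases first gives the retraining step the biggest improvement per verified image.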

STEP 4: Reporting services

The classified images are stored in the WITS data platform. Similar to Cluey observations, classified camera-trap images are treated as observations. Hence, they are organised in a Group and are made by an Agent (in this case, the name of the camera). And like Cluey observations, you can visualise and analyse them with Focus, WildCAT, ArcGIS Online, or any other tool of your preference (e.g. RStudio, Python, Jupyter Notebooks).

Sound recognition - live in Amsterdam!

Below is a translated article about our sound event recognition sensor SERVAL. To speed up development we open-sourced it, resulting in a great collaboration with the IoT Sensemakers AMS.

Enjoy the read!

How does the Marineterrein sound?

10 AUGUST 2021

Noise pollution is a big problem in the city. But to tackle it, you first need to know what exactly causes the noise. At the Marineterrein Amsterdam, tests are now being conducted with a sensor that can classify sounds and thus pinpoint their source. And it does so with technology that originated in the jungle to stop poachers.

It all started with a gunshot in the jungle of Laos, followed by the sound of a boat sailing away. It made Jan Kees Schakel of Sensing Clues realise that it is easy for poachers to make off with their 'loot' under cover of night. But he also realised that the sound the poachers produce is the way to stop them.

Sound sensor

'Humans are noisy creatures,' says Jan Kees. 'Basically any sound we make - talking, driving a vehicle and certainly gunshots - carries far and can therefore be picked up well by a sensor. That is a big advantage over cameras, which are limited by what their lens can "see". A sound sensor can help conservationists map out what is happening, and where, at any given time. After all, if voices can be heard in the jungle in the middle of the night, you can be reasonably sure that something is wrong.'

Complex

That sounds good, but it is easier said than done. Until about five years ago, only the number of decibels of a sound could be detected; the sound could not be classified. In other words, a sensor could indicate that a loud sound was being produced somewhere, but not whether it came from a slamming car door or an elephant. 'And it is extremely difficult to make that distinction,' says Jan Kees. 'You need an enormous database of sounds to serve as a frame of reference for a complex algorithm to classify sounds.'

Practical challenges

'But since 2016, artificial intelligence and machine learning have taken off,' he continues. 'The technology is getting better and better and we can now recognise multiple sounds with one algorithm. But it is still important to have a large database of sounds, and there are also practical challenges. There has to be power, a good internet connection, and the hardware around the sensor has to be able to withstand the elements. To get everything working optimally, we need to do a lot of testing.'

Test location: urban jungle

Since testing sound sensors in the jungle is expensive and complicated, Jan Kees decided to do it closer to home. In the urban jungle to be precise. The Marineterrein has recently been equipped with a sound sensor that maps out city noise and noise pollution. Jan Kees joined forces with Sensemakers AMS, an old friend of ours who has been active at the Marineterrein for years, measuring and interpreting the water quality in the inner harbour, for example. With their combined knowledge, they developed a test set-up that will provide insight into the sound of the Marineterrein, and the extent to which there is nuisance.

The sensor of Sensing Clues and the IoT Sensemakers AMS


Training algorithms

'With this project, we can gain a wealth of experience in training our algorithms,' says Jan Kees. 'That is useful for conservationists, because in the jungle we look for the same kinds of sounds as here: voices, laughter, the sound of car engines and doors slamming. But with what we do here now, we can also tackle nuisances in the city. Noise pollution can reduce residents' enjoyment of living and even cause stress. But it is often difficult for people to say exactly what is bothering them. By classifying urban noise, we can pinpoint the exact time and source of various noises and use this information to tackle the problem in a targeted way.'

Alarming noises

According to Jan Kees, this mainly concerns intrusive, loud noises. 'As a resident, you often no longer hear overflying aircraft or trams; you get used to them. But sudden, loud noises trigger us. It is still deeply ingrained in our cognition to be alarmed by such sounds, as a reaction to possible danger. Think of accelerating scooters and alarms, but also of bicycle bells. By classifying these types of sounds, we can pinpoint exactly where the nuisance comes from. It is important that we do this while preserving privacy. The sound is processed on the sensor and not stored. Only the sound labels, such as "moped" or "scooter alarm", are stored. So only labels remain, and no sound.'

Jungletech also for a liveable city

With the knowledge gained from this test at the Marineterrein, the technology can be used on a larger scale in the city in the future: in squares, in entertainment areas, and even in individual cafés, to make visitors aware of the potential nuisance they are causing. In this way, Jan Kees hopes that his 'jungle technology' can contribute to a liveable city. 'But of course we eventually want to bring the lessons we learn here back to the jungle. That will require some further development: in the bush there are no power outlets or wifi, so we have to find good and cheap solutions for that. The most important thing is that the technology remains affordable for conservationists, so that they can always stay a step ahead of poachers.'

Sensing Clues

Sensing Clues is a non-profit foundation that largely relies on volunteers. You can support Jan Kees' fight against poachers by making a donation. Do you have the knowledge and skills to contribute to this project? Then get in touch with Sensemakers AMS. Electronics engineers and people who enjoy data analysis and visualisation are particularly welcome.

Text: Sjoerd Ponstein

Translated by Deepl.com

Sponsor our open source projects!

CAIMAN

In the Kaggle iWildCam 2021 competition, the species recognition and counting algorithm of the Sensing Clues team came second by a split second. To increase its spread and impact, we recently open-sourced the project. With the funds we receive from sponsors, we integrate the results of CAIMAN into our WITS platform, which increases the value of these images tremendously.

OpenEars & SERVAL

With our volunteer friends of IoT Sensemakers Amsterdam we have developed a highly sophisticated sound recognition sensor. It recognises many different sounds related to the presence of people, ranging from gunshots, motorbikes, trucks and chainsaws to music, barking dogs and cattle. The sensor is currently being tested in the urban jungle of Amsterdam. To be ready for monitoring and protecting the jungles of Africa, Asia and the Amazon, a few more development steps have to be taken. Your sponsorship brings that reality closer!

Oh yes, OpenEars is the name of the sensor, hardware and all; SERVAL is the name of the sound recognition algorithm that we created. And both are open source.