CAIMAN

Intro to CAIMAN

Camera traps are very handy tools for nature conservation professionals. Amongst other things, they are used to

  • spot rare species,

  • record nocturnal species,

  • assess biodiversity,

  • recognise individuals,

  • conduct a structured census,

  • monitor places of interest, such as water holes or bird nests,

  • detect poachers and other intruders.

Each use case brings its own challenges and implications for the camera setup and the processing of the recorded images. To cater for each of the above use cases, our CAIMAN solution consists of four services that can be fine-tuned:

  1. a connection service,

  2. a process configuration service,

  3. a human-in-the-loop validation service,

  4. and reporting services.

Step 1: Connection service

If real-time detection is of the essence, for instance when you want to intercept poachers, the cameras need to be connected to the internet. If they are, they can stream their images directly to CAIMAN, through an API (for the technicians amongst us).

Most cameras, though, store their images on an SD card, which is collected every now and then. After collecting the images from the field, you can upload them to the data upload service of Sensing Clues.
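For the technicians: a minimal sketch of what such an upload could look like in Python. The endpoint URL, API key, and folder layout below are illustrative assumptions; the actual CAIMAN API and authentication scheme may differ.

```python
import requests
from pathlib import Path

# Hypothetical endpoint and API key for illustration; the actual CAIMAN
# upload service and its authentication scheme may differ.
CAIMAN_URL = "https://api.example.org/caiman/v1/images"
API_KEY = "your-api-key"

def upload_image(path: Path, camera_id: str) -> None:
    """Send one camera-trap image to CAIMAN, tagged with its camera."""
    with open(path, "rb") as f:
        response = requests.post(
            CAIMAN_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            data={"camera_id": camera_id},
            files={"image": f},
        )
    response.raise_for_status()

# Batch-upload a folder of images collected from an SD card.
for image in sorted(Path("/media/sdcard/DCIM/100MEDIA").glob("*.jpg")):
    upload_image(image, camera_id="waterhole-cam-3")
```

A connected camera would make the same call per image as soon as it is taken; the folder loop covers the SD-card workflow.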

Step 2: Process configuration service

In essence, AI-driven image classification is a statistical exercise: species are identified with a certain probability. As some species are easy to recognise while others are very hard to distinguish from similar species, the quality of the AI model varies per species. A confidence of 100% is very hard, if not impossible, to reach; above 80 to 90% is often more realistic.
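To make this concrete: for each image, a classifier returns a probability per candidate species, and the highest one becomes the proposed label. A sketch with made-up species and numbers:

```python
# Illustrative classifier output for a single image: a probability per
# candidate species. Species names and numbers are made up.
predictions = {"impala": 0.87, "springbok": 0.09, "kudu": 0.03, "other": 0.01}

best_species = max(predictions, key=predictions.get)
print(f"{best_species}: {predictions[best_species]:.0%}")  # impala: 87%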

AI models are trained and made available per geographic region and per use case, as illustrated above. The first model, which we are currently testing, is aimed at identifying over 200 species that live in Southern Africa.

To tune the classification process to your needs, minimise mistakes, and minimise the time you spend behind the computer, thresholds can be set per species. If a species is very important to you, you can set the threshold for automatically accepting the outcome of the algorithm very high. If a species is more abundant and classification mistakes are less costly, you can lower the threshold for that species.
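A sketch of how such per-species thresholds could work. The species names, threshold values, and function below are illustrative assumptions, not CAIMAN defaults:

```python
# Illustrative per-species thresholds: strict for a priority species,
# relaxed for an abundant one where mistakes are cheap. Not CAIMAN defaults.
THRESHOLDS = {"black rhino": 0.99, "impala": 0.80}
DEFAULT_THRESHOLD = 0.90

def route(species: str, confidence: float) -> str:
    """Accept the classification automatically or queue it for human review."""
    threshold = THRESHOLDS.get(species, DEFAULT_THRESHOLD)
    return "accept" if confidence >= threshold else "review"

print(route("impala", 0.87))       # accept
print(route("black rhino", 0.95))  # review: priority species, strict threshold
```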

Step 3: Human-in-the-loop validation service

Classifications with probabilities below the threshold are handled in a separate process, in which we select the images from which the model can learn most. As soon as you have verified the images and confirmed or corrected their class, the AI model is retrained. This speeds up the learning process and decreases the number of images that need to be sifted through manually.
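One common way to select the most instructive images is uncertainty sampling: prioritise the images whose top prediction is least confident. A sketch of the idea, not necessarily the selection strategy CAIMAN uses:

```python
def most_informative(classified, n=25):
    """Return the n below-threshold images with the least confident top
    prediction (uncertainty sampling), to be verified by a human first."""
    below = [c for c in classified if c["confidence"] < c["threshold"]]
    return sorted(below, key=lambda c: c["confidence"])[:n]

batch = [
    {"image": "img_001.jpg", "species": "kudu",   "confidence": 0.55, "threshold": 0.90},
    {"image": "img_002.jpg", "species": "impala", "confidence": 0.97, "threshold": 0.80},
]
for item in most_informative(batch):
    print(item["image"], item["species"], item["confidence"])  # img_001.jpg kudu 0.55
```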

P.S. We are still working on the human-in-the-loop app. The picture above shows an experiment to quickly verify a series of images and find oddities, potentially saving you hundreds of hours.

Step 4: Reporting services

The classified images are stored in the WITS data platform. Like Cluey observations, classified camera-trap images are treated as observations. Hence, they are organised in a Group and are made by an Agent (in this case, the name of the camera). And like Cluey observations, you can visualise and analyse them with Focus, WildCAT, ArcGIS Online, or any other tool of your preference (e.g. RStudio, Python, Jupyter Notebooks).
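As an example of that last option: once the observations are exported, a few lines of Python suffice to summarise sightings per camera. The column names below are assumptions for illustration; an actual WITS export may be laid out differently.

```python
import pandas as pd

# A mock export of classified camera-trap observations; the column names
# ("agent", "species") are assumptions, not the actual WITS schema.
observations = pd.DataFrame({
    "agent":   ["waterhole-cam-3", "waterhole-cam-3", "nest-cam-1"],
    "species": ["impala", "impala", "fish eagle"],
})

# Number of sightings per species, per camera (the Agent).
print(observations.groupby(["agent", "species"]).size())
```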