Autonomous machines are on the rise, and software and algorithms are integral to many of their capabilities.

UAVs, for example, need to decide where to go (“mission planning”), manoeuvre around each other (“detect-and-avoid” – DAA), and potentially talk to other members of a swarm (“communication protocols”).

For DAA in particular, the ability to estimate the location and expected trajectory of other aircraft is vital, and not only for avoiding mid-air collisions: if the location estimation algorithms aren’t robust enough, they could cause erratic manoeuvres, which in turn could throw off other drones.

So we need a way of assuring the autonomous algorithms we use to estimate location and trajectory of other aircraft. These algorithms are often of a statistical nature, which means they reason using probabilities and rely on measurements that are possibly unreliable and noisy.

Are there any methods suitable for the often more complex and inherently less deterministic reality of object tracking algorithms?

In a previous post, we introduced formal methods as a way of assuring software algorithms, so it won’t come as a surprise that, yes, there are. Here, we share our experience using ‘probabilistic model checking’ to analyse one such statistical algorithm, the widely used Kalman filter, and reflect on how this affects the software development process.

Our choice to analyse a classical algorithm – as opposed to recently popularised methods in DAA such as neural networks – was largely motivated by regulations, which are still cautious with regards to using AI in safety-critical applications.

The Problem: How can we analyse the impact of noise in Kalman filters?

Kalman filters are a commonly used type of algorithm for estimating the state of a system based on incomplete and possibly inaccurate measurements.

Say you are using a camera on a drone to track an obstacle and want to predict where it will end up, so that you can avoid colliding with it: the state you are trying to estimate is the obstacle’s trajectory, and the input is the obstacle’s position measurement from each camera frame.

The Kalman filter continuously updates its estimate with each new measurement in a way that is, mathematically speaking, optimal. In practice, however, the quality of its outputs depends heavily on our assumptions about how much noise is present in the system.
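To make this concrete, here is a minimal sketch of the kind of filter we are talking about: a 1D constant-velocity model with position-only measurements. The matrices, noise levels and measurements below are purely illustrative, not those of our actual tracking filter.

```python
# A minimal 1D constant-velocity Kalman filter sketch. All values are
# illustrative; a real DAA tracker would use a richer state and tuned noise.
import numpy as np

dt = 0.1                                  # time between camera frames (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                # we only measure position
Q = 0.01 * np.eye(2)                      # assumed process noise covariance
R = np.array([[0.5]])                     # assumed measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a single position measurement z."""
    x_pred = F @ x                        # predict the next state...
    P_pred = F @ P @ F.T + Q              # ...and its uncertainty
    y = np.array([[z]]) - H @ x_pred      # innovation (measurement residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

x, P = np.zeros((2, 1)), 10.0 * np.eye(2)  # initial estimate and (large) uncertainty
for z in [0.9, 2.1, 2.9, 4.2]:             # noisy position measurements
    x, P = kalman_step(x, P, z)
print("estimate:", x.ravel(), "uncertainty (trace):", P.trace())
```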

The camera may not be accurate due to weather conditions or the movement of the drone itself, yet we need to ensure that the algorithm won’t suddenly diverge, that is, start producing increasingly poor outputs as the internal maths is thrown off by inaccuracies in the input.

A big problem when reasoning about noise is that, theoretically, there are an infinite number of possible outcomes, so we need a way of abstracting these into a finite set of potential noise values that are statistically representative of the way the noise will likely behave.

A Solution: Probabilistic Model Checking

The paper Quantitative Verification of Kalman Filters (by Alexandros Evangelidis and David Parker) describes one possible approach to the problem of quantitatively analysing the impact of noise in Kalman filters.

We re-implemented the paper for a realistic object tracking filter, building upon it to deal with more complicated noise, and assessed the usefulness of this approach in modern software development.

The approach uses probabilistic model checking.

Model checking in general refers to a formal method where a system is modelled as a set of possible states and the transitions between them. We can then run a model checker, a piece of software which computes every state that can be reached by evaluating all possible transitions in the model.

Probabilistic model checking extends this idea by adding probabilities and costs to each transition, so we can ask questions like “What is the probability of reaching an error state?” or “How long, on average, does it take to reach a success state?”. In other words, we can now reason rigorously about the probabilistic behaviour of our system.
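As a toy illustration of what such a query means, consider a hand-written three-state Markov chain (not generated from any real system). Dedicated tools such as PRISM or Storm answer these queries far more efficiently and over much richer models, but the underlying idea is the same.

```python
# A toy "probability of reaching an error state" computation over a
# hand-written Markov chain. States: 0 = "ok", 1 = "degraded", 2 = "error".
import numpy as np

P = np.array([
    [0.90, 0.08, 0.02],   # transition probabilities from "ok"
    [0.50, 0.40, 0.10],   # from "degraded"
    [0.00, 0.00, 1.00],   # "error" is absorbing: once there, we stay there
])

dist = np.array([1.0, 0.0, 0.0])   # start in "ok" with probability 1
for _ in range(20):                # propagate the distribution for 20 steps
    dist = dist @ P
print(f"P(reach 'error' within 20 steps) = {dist[2]:.3f}")
```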

How can we model a Kalman filter using this method? As described earlier, the algorithm is applied over and over as we receive measurements, producing a new estimated state at each input step. At each step, we want to apply different noise values to an ideal measurement and assess whether the filter still satisfies a set of expected mathematical properties. So, we need some representative noise values with attached probabilities.

The trick is to divide the expected noise distribution into sections: each section covers a range of possible noise values, and we take the most likely value within that section as its representative, which is the value we apply to the measurement. The transition probabilities are simply the probabilities of the noise landing in each section. Using this approach, we can keep branching step after step, exploring a wide range of likely scenarios.
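Here is a sketch of that discretisation, assuming zero-mean Gaussian measurement noise split into equal-width sections; the section count, cut-off and sigma below are illustrative choices, not the ones used in the paper or in our filter.

```python
# Discretise a zero-mean Gaussian into a finite set of noise branches:
# one representative value (the most likely point in each section) plus
# the probability of the noise landing in that section.
from scipy.stats import norm

def discretise_noise(sigma, num_sections=5, cutoff=3.0):
    width = 2 * cutoff * sigma / num_sections
    edges = [-cutoff * sigma + i * width for i in range(num_sections + 1)]
    branches = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        rep = min(max(0.0, lo), hi)   # point of highest density: closest to the mean
        prob = norm.cdf(hi, scale=sigma) - norm.cdf(lo, scale=sigma)
        branches.append((rep, prob))
    total = sum(p for _, p in branches)   # renormalise: ignore the cut-off tails
    return [(rep, p / total) for rep, p in branches]

for rep, prob in discretise_noise(sigma=0.7):
    print(f"noise value {rep:+.2f} with probability {prob:.3f}")
```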

After creating this model for a pre-determined number of steps, we can run a probabilistic model checker on it, computing the overall probability that our desired properties still hold in the final state, whatever noise has been applied at each step.
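To give a feel for what the model checker is computing, the sketch below brute-forces the same question for a scalar Kalman filter tracking a constant true value, using a small hand-picked set of noise branches. A real probabilistic model checker explores the model far more cleverly than plain enumeration, and none of the numbers here come from our actual analysis.

```python
# Enumerate every sequence of discretised noise branches over a few steps,
# run a scalar Kalman filter along each path, and sum the probabilities of
# the paths where the final estimation error stays within a bound.
from itertools import product

BRANCHES = [(-1.0, 0.2), (0.0, 0.6), (1.0, 0.2)]   # (noise value, probability), illustrative
TRUE_VALUE, STEPS, ERROR_BOUND = 5.0, 4, 0.8
Q, R = 0.01, 0.5                                   # filter's assumed noise variances

prob_ok = 0.0
for path in product(BRANCHES, repeat=STEPS):       # every possible noise sequence
    x, p, path_prob = 0.0, 10.0, 1.0               # estimate, variance, path probability
    for noise, prob in path:
        z = TRUE_VALUE + noise                     # ideal measurement plus branch noise
        p += Q                                     # predict
        k = p / (p + R)                            # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p        # update
        path_prob *= prob
    if abs(x - TRUE_VALUE) <= ERROR_BOUND:         # the property we want at the final state
        prob_ok += path_prob

print(f"P(final estimation error <= {ERROR_BOUND}) = {prob_ok:.3f}")
```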

How does this solution fit into the modern software development process?

A key part of the process described above is selecting the noise distribution we assume will affect the input: varying this assumption and re-running the model is what tells us how robust our Kalman filter really is.

This way, we can narrow down the amount of uncertainty our system can cope with, an insight which can then be fed into later stages of the software development process. A runtime monitor, for example, can alert us if some of these assumptions are broken while the autonomous algorithm is running.
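As an example of what such a monitor might look like (an illustrative sketch, not a monitor we deploy), one common idea is to watch the filter’s normalised innovation: if the residuals are repeatedly larger than the assumed noise level predicts, our offline assumptions are probably being violated.

```python
# A hypothetical runtime monitor for the noise assumptions. It checks the
# normalised innovation squared (NIS) after each Kalman update; thresholds
# and the fallback action are illustrative.

class NoiseAssumptionMonitor:
    def __init__(self, threshold=3.84, max_violations=5):
        self.threshold = threshold          # 3.84 = 95% chi-square bound, 1 degree of freedom
        self.max_violations = max_violations
        self.violations = 0

    def check(self, innovation: float, innovation_variance: float) -> bool:
        """Return True while the noise assumptions still look plausible."""
        nis = innovation ** 2 / innovation_variance
        if nis > self.threshold:
            self.violations += 1
        else:
            self.violations = max(0, self.violations - 1)
        return self.violations < self.max_violations

monitor = NoiseAssumptionMonitor()
# Inside the tracking loop, after each Kalman update:
#   if not monitor.check(innovation, innovation_variance):
#       fall back to a safe behaviour (hypothetical), e.g. widen avoidance margins
```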

One problem with this approach is that model checkers can require large amounts of computational resources, making it hard to run the model many times. This turned out to be the case with our Kalman filter analysis. After implementing it for an object tracking filter, we quickly discovered that building the model for a realistic number of filter steps is computationally expensive (both in terms of time and memory).

In the context of assuring drone software, this might mean that we start out with standard simulations as the first step, and then invest in more computational resources and apply a model checker once we have more concrete knowledge about the amount of noise our system needs to withstand.

Where next for autonomous algorithms? Onwards and upwards…

Probabilistic model checking can be used in any situation where you want to analyse the safety or performance of a system design in the face of noise and uncertainty.

To name just a few examples, we can use it to model dynamic fault trees, an important technique for safety assurance often required for certification; to find optimal strategies for a robotic mission by modelling the environment that has to be traversed; or to check whether a protocol is likely to succeed in sending a message even when individual messages may be lost.

It can be a key tool to assure the safety of autonomous algorithms.

Want to find out more?

Speak to our Autonomous Team today
Tatiana Sedelnikov
Autonomous Technology Consultant