The challenges of traditional flood forecasting
Flood forecasting has always been a challenge. Predicting when, where, and how flooding will occur is complicated by the unpredictability of the natural world. Traditionally, forecasts have relied on physical models based on rainfall, river flows, tides, and topography. While these models have steadily improved, they can still struggle for several reasons:
- Accuracy – no physical system can be fully modelled. Any physical model will require parameterisation of certain elements – for example, how much friction water experiences as it flows over different surfaces. This might mean assigning a single value to represent rough grass, concrete, or woodland. But in reality, conditions vary greatly and change over time (e.g., a grassy floodplain may be smooth in summer but clogged with debris in winter). These simplifications can reduce the model’s ability to accurately simulate how water moves.
- Computational efficiency – physical models solve complex equations to represent the real world. But this detail comes at a cost: in many cases, models can take longer to run than the flood itself lasts, making them unsuitable for real-time forecasting.
- Uncertainty – physical models rely on assumptions about inputs such as soil moisture, rainfall, or upstream flows. Small errors in these inputs can compound into large errors in the forecast. And because a deterministic model simulates only one possible outcome per run, it often struggles to express the full range of what might happen.
Machine learning (ML) and artificial intelligence (AI) can help alleviate some of these issues. By learning patterns directly from data – from rainfall records to river levels to tides – ML models can spot complex relationships that traditional models might miss. Importantly, they can often produce forecasts faster, enabling real-time warnings and decision-making.
Keeping construction safe: Predicting water levels on the River Doe Lea, Derbyshire
When engineers are working within a river channel, accurate water level forecasts are essential. Overpredict floods, and works may be unnecessarily delayed - costing time and money. Underpredict floods, and safety risks rise. On the River Doe Lea, Derbyshire, we developed a Long Short-Term Memory (LSTM) deep learning model that significantly outperforms traditional rainfall-runoff models. Trained initially on data from 66 catchments across England - and then fine-tuned using local river data - our model predicts small-scale floods with far greater accuracy.
Over a one-year test period, it would have prevented more than 15 false alerts compared to traditional methods (calibrated rainfall-runoff models), while still capturing all significant flood events. Deployed through Delft-FEWS, our LSTM model means safer, more efficient river construction works, and a model that can be easily adapted to other sites.
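To give a flavour of how an LSTM turns a sequence of observations into a level prediction, here is a minimal NumPy sketch of a single LSTM cell stepped over a synthetic 24-hour input sequence. This is an illustration of the mechanism only, not our operational Doe Lea model: the weights are random rather than trained, and the feature names are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from current input x and previous hidden state h."""
    n = h.size
    z = W @ x + U @ h + b              # stacked pre-activations for all four gates, shape (4n,)
    i = sigmoid(z[0:n])                # input gate: how much new information to store
    f = sigmoid(z[n:2*n])              # forget gate: how much old cell state to keep
    o = sigmoid(z[2*n:3*n])            # output gate: how much cell state to expose
    g = np.tanh(z[3*n:4*n])            # candidate cell update
    c_new = f * c + i * g              # cell state carries long-term memory through time
    h_new = o * np.tanh(c_new)         # hidden state is the step's output
    return h_new, c_new

rng = np.random.default_rng(0)
n_features, n_hidden = 3, 8            # e.g. rainfall, upstream level, tide (illustrative)
W = rng.normal(0, 0.1, (4 * n_hidden, n_features))
U = rng.normal(0, 0.1, (4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
w_out = rng.normal(0, 0.1, n_hidden)   # linear head: hidden state -> water level (m)

sequence = rng.normal(0, 1, (24, n_features))   # 24 hourly observation vectors, synthetic
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in sequence:
    h, c = lstm_step(x, h, c, W, U, b)
predicted_level = float(w_out @ h)
```

In practice, pre-training on the 66-catchment dataset would fit `W`, `U`, `b`, and `w_out` across many rivers, and fine-tuning would then continue training on local Doe Lea records.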

Protecting London: Enhancing tidal forecasts for the Thames Barrier
The Thames Barrier is a critical line of defence protecting London from tidal surges. Current operational forecasts rely on a 1D hydraulic model to simulate how tidal levels at the coast propagate up the Thames Estuary. However, inaccuracies in modelling this tidal movement can lead to conservative decision-making - resulting in more barrier closures than necessary. Each additional closure places wear on the structure and can shorten the barrier’s operational lifetime.
We developed an LSTM model to better predict tidal levels along the Thames Estuary. By learning from observed tidal levels, simulated model outputs, river flows, and local wind conditions, the LSTM reduced forecast errors to within ±0.1 m for 92% of high tides at Silvertown, compared to only 68% using the existing model.
This improvement could help optimise barrier closures, reducing unnecessary use, saving on maintenance costs, and improving long-term reliability - all of which are crucial for enhancing climate resilience as sea levels continue to rise and extreme weather events become more frequent.
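The headline metric above – the share of high tides forecast to within ±0.1 m – is straightforward to compute from paired forecasts and observations. A short sketch, using made-up gauge values rather than real Silvertown data:

```python
import numpy as np

def within_tolerance_rate(predicted, observed, tol=0.1):
    """Fraction of high tides whose absolute forecast error is within ±tol metres."""
    errors = np.abs(np.asarray(predicted) - np.asarray(observed))
    return float(np.mean(errors <= tol))

# Illustrative (synthetic) high-tide levels in metres at a gauge:
observed  = np.array([3.95, 4.10, 4.32, 4.05, 4.21])
predicted = np.array([4.00, 4.05, 4.50, 4.02, 4.18])  # one miss: 0.18 m error

rate = within_tolerance_rate(predicted, observed)      # 4 of 5 within 0.1 m -> 0.8
```

Applied to a full season of high tides, the same calculation yields the 92% (LSTM) versus 68% (existing model) comparison quoted above.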

Real-time flood warning system for Eastbourne: from days to seconds
Eastbourne faces a complex flood risk, with tidal surges, river floods, and surface water all interacting. While there is a detailed Integrated Catchment Model (ICM) of the town, it is too slow for real-time use, taking 16 days to simulate just 24 hours. We are using Convolutional Neural Networks (CNNs) to emulate the ICM, delivering fast, accurate flood forecasts in seconds.
This makes it possible to issue postcode-level flood warnings based on real-time rainfall forecasts - including probabilistic information to reflect uncertainty.
As part of the Flood and Coastal Resilience Innovation Programme (FCRIP), this work will bring powerful, life-saving information directly to local communities when it matters most.
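The core idea of a CNN emulator is that convolutions map one spatial grid (e.g. rainfall) to another (e.g. flood depth) in a single fast pass, rather than time-stepping hydraulic equations. The sketch below shows one convolutional layer with random weights on a synthetic rainfall grid; in a trained emulator the kernels would be learned from paired ICM inputs and outputs, and the network would be much deeper.

```python
import numpy as np

def conv2d(grid, kernel):
    """'Valid' 2D convolution: each output cell aggregates its local neighbourhood."""
    kh, kw = kernel.shape
    H, W = grid.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(grid[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(1)
rainfall = rng.random((64, 64))           # gridded rainfall field (synthetic, e.g. mm/h)
kernel = rng.normal(0, 0.1, (3, 3))       # illustrative weights; learned in a real emulator
depth_map = np.maximum(conv2d(rainfall, kernel), 0.0)  # ReLU: flood depths are non-negative
```

A forward pass like this runs in milliseconds even for large grids, which is what turns a 16-day ICM simulation into a seconds-scale forecast once the emulator is trained.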

Building the future of flood forecasting
Across these projects, we are demonstrating how machine learning can enhance and complement traditional flood models, offering faster, more reliable, and more usable forecasts.
By working closely with partners like the Environment Agency, Deltares, and local authorities, we are helping to translate cutting-edge research into operational practice, delivering economic, social, and environmental benefits.
Machine learning is not a replacement for hydrological expertise - it's a new tool, enabling us to forecast the future more clearly and act more decisively in the face of flooding risks.
Contact Jenny Roberts for more information.