What do AI technologies mean for the field of foresight and futures analysis?
We are at a turning point in how we analyse and understand data. The COVID pandemic and the shift to greater remote working created the conditions for a huge wave of investment in, and flourishing of, AI technologies. We all rely on so many processes and workflows virtually every day that it is no surprise AI is being applied everywhere, all at once…or at least it feels that way at the moment.
I’m not going to debate the pros and cons of this technology shift. Instead, I’m going to look at the implications of the trend toward greater use of AI in analysis and decision making, and consider its strategic implications for a particular field - one called foresight, or futures to some people (horizon scanning to others).
Traditional foresight is generally a human-led service
As explained in my other blogs, many government departments have a requirement to understand and anticipate the long term. Since the mid-90s this has generally been met through techniques collectively described as ‘foresight’. What’s interesting about this field - and a tension I’ve long felt with it (I’ve been a futures analyst since around 2005) - is that its techniques are largely geared to producing human-based outputs. For example, foresight tends to be based on either:
Tools to help generate ideas about the future
Techniques to help decision makers rehearse the future.
Both of these are useful things to do, and I’m generalising a bit here. But if you look at the current Government Office for Science Futures Toolkit, all of the techniques described are essentially based on facilitation and discussion of how to think about the future.
What's also clear is that the toolkit generally needs to be delivered by humans teaching humans - i.e. futures consultants sell processes to people who need to better understand the future. The toolkit aims to train policy makers and executives to think about the future, so that our leaders and strategy makers can do this using the power of their minds, their wisdom and experience, and collective internal agreement on likely future priorities and threats.
So, if you take the current UK Strategic Defence and Security Review (SDSR) as an example - ultimately the people leading the inquiry will need to recommend and drive decisions that do impact the future. That is a consequence of power and of the government’s role in providing security. It means some bets will inevitably need to be made on how we do things and what we endeavour to protect ourselves against in the future.
But are ‘traditional’ foresight methods the best means of helping us gather, understand and bet on future threats?
AI and the impact on futures analysis
I would argue that the advance of AI tooling increasingly reduces the need for foresight to be practised in its current form. Don’t get me wrong - there is still a strong need for some of the human-led training and tools it proposes, particularly simulations and the use of gaming, scenarios and rehearsals to help decision makers prepare for real-life crises. But it’s also important to stress that futures analysis based entirely on the current tools and techniques does not handle trend data and information very well.
In my last blog I looked at how well ChatGPT could generate trend data compared to the outputs in Global Strategic Trends 7 (GST7).
It was a simple, basic comparison, but in reality the trends presented aren’t all that different. Clearly, side by side, the outputs from ChatGPT fall down against the analysis in GST7 because they have no provenance (on what basis are these trends being highlighted? Where is the reachback to the source evidence?). Using GST7 as a reference, you can fairly easily trace trend ideas back to the references from which they were generated. With the current version of ChatGPT this is harder to do - it is not clear where the evidence for the trends it articulates actually originates. But this is just a feature of its current design…other LLM-based summarisation and generative AIs increasingly provide reachback to the source information used in summaries and suggestions, as information provenance becomes increasingly valued.
So this leads to the question: is it not better to invest in systems that gather and assess current information on trends about the future - actual, reputable evidence and analysis - and then use this as the baseline for extracting trends and assessing their likely probability? Interestingly, in parallel to traditional foresight, this research question is currently being asked. Not (as far as I’m aware) by foresight practitioners, but by data scientists and ML engineers trying to better understand probabilities, threats and risks.
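To make the idea concrete, here is a minimal sketch of what an evidence-first trend record might look like. This is purely illustrative - the class names, fields and example data are hypothetical, not taken from any real foresight or ML tool - but it captures the core principle: every trend statement carries the sources it can be traced back to, alongside an explicit probability estimate.

```python
from dataclasses import dataclass


@dataclass
class Source:
    """A reputable piece of evidence a trend claim can be traced back to."""
    title: str
    url: str
    published: str  # ISO date, e.g. "2024-01-15"


@dataclass
class Trend:
    """A trend statement with provenance and an explicit probability estimate."""
    statement: str
    sources: list       # supporting evidence (instances of Source)
    probability: float  # estimated likelihood the trend holds, 0.0 to 1.0

    def has_provenance(self) -> bool:
        # A trend is only usable for decision making if it can be
        # traced back to at least one piece of source evidence.
        return len(self.sources) > 0


# Hypothetical example record
trend = Trend(
    statement="Demand for AI-assisted horizon scanning grows",
    sources=[Source("Example trends report", "https://example.org/report", "2024-01-15")],
    probability=0.7,
)
print(trend.has_provenance())
```

The design choice worth noting is that provenance is structural, not optional: a ChatGPT-style output with no sources would fail the `has_provenance()` check, which is exactly the weakness identified in the GST7 comparison above.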
And this, to me, is where there is an exciting intersection in the Venn diagram of AI research and foresight research, and it comes down to the question: how good are the data and underlying information we use to make our decisions and guide our actions?
How different is the question "How good is the information we used to make our judgements about the future?" from the question "How good is the data we use to train our AI?"
In both cases, all paths lead back to data quality. The models we use to understand the world are the models that will deliver better decision making.
And that will be the subject of the next blog - Advanced Information Modelling (AIM) and the importance of high-quality data to decision making.
https://youtu.be/OvEIDJC6fLM