It isn’t uncommon to hear the future talked about as if it were a single, unified, static thing that can be held in a petri dish and precisely analyzed or, worse, quantified. We see futures as multiple, fluid, fuzzy, and constantly changing. They reflect and extend today’s richness and become scaffolding for building tomorrow. If we are to learn from them, we must approach futures not only with an open mind (see agnostic futurism), but also with an open map.
The futures cone, devised by Hancock & Bezold and later updated by Joseph Voros, depicts the field of possible futures as a cone widening over time, with the most probable futures at its center. It also marks ‘preferable’ futures, a notion that, like that of a singular ‘today,’ should be questioned: preferable for whom? The idea of preferable futures usually goes hand in hand with the notion of utopia, and with its more recent evil twin: dystopia. Yet, looking at the etymology, utopia is not the ‘good place.’ It originally meant the ‘non-place’: an island in time that can only exist through the imagination of its author and readers.
While such ideals provide excellent platforms for debating and comparing people’s expectations of a ‘perfect society,’ neither can exist: such absolutes do not, and should not, happen in real life. Everything has trade-offs. The cone also shows that not every future sits in the same place. Some lie further from the central axis, indicating a lower degree of plausibility. Others lie further from the starting point: they are further in time. We prefer not to use dates, because dates come with expectations. So let’s steer clear of such formatting and call today Horizon 0.
HORIZON 0 — "Now Now"
Consider today our starting position. If you were to place it on a timeline, it would be reductive to view it as a single point — our world is complex, full of contradictions and conflicting agendas. There is no reason any point in time should be simpler — if anything, entropy wants things to get wilder with time.
HORIZON 1 — "If we told you this exists somewhere else, you would believe us"
Our next stop is the near future, or the short range. This is where most speculation about "what might be" occurs. What is happening today impacts what is to come in more measurable ways, so we can extrapolate signals of change and imagine how they might play out in the near future. On this horizon, innovation is rather predictable, and one has a good sense of what is plausible. What isn't predictable, however, is how people will respond to such changes. That makes it an ideal playground for more provocative ideas, especially those exploring social implications. This opens up unique possibilities for depicting new and compelling techno-social phenomena. As in Charlie Brooker's Black Mirror, fiction can probe what's acceptable by making things feel imminent, or as if they were already happening elsewhere.
HORIZON 2 — "The Land of Hopes and Dreams"
Further in time is Horizon 2: the far future, which is far too remote to be reasonably guessed, and can only be imagined. That horizon, however, has the power to influence the present by sparking new imaginaries about what *could* become. It is the terrain of social and political idealism, and yes, in general that of utopias. But it is difficult to visualize — at such a distance, visions get blurry.
In essence, our practice consists of pulling a red thread from the long-range visions (Horizon 2) to the near-future ones (Horizon 1) — a process known in futures studies as ‘backcasting.’ We take visions that can seem like wishful thinking and make them ‘feel’ as likely as possible through a form of reverse-engineering of ideas. We like to use the analogy of Future Fishing: we cast our line as far as possible — we imagine, and therefore create, a Horizon 2 future — and once we have caught something with some weight, we reel it back closer to look at it, and maybe capture it by prototyping parts of it. This makes our futures more tangible and lets us turn abstract ideas into visions that can be examined critically. We always prioritize objects that exemplify conflicts of interest: the kinds that solve issues but come with trade-offs.