Mission


Modern politics runs on headlines. Leaders announce bold plans, enjoy the news cycle, and move on long before anyone checks whether those promises were kept. When the actual policy turns out to be impractical, unpopular, or inconvenient, leaders walk it back quietly, in hopes that it'll fade into the background noise.

This tactic isn’t new, but it has become supercharged by an information environment built to overwhelm us. The increasingly popular strategy of “flooding the zone” makes it nearly impossible for ordinary people to keep track of what was said, when it was supposed to happen, and what the outcome actually was. As a result, many of us end up relying on fragmented social media feeds, vulnerable to manipulation by algorithms and corporate incentives we do not control.

The Follow Up exists to restore memory and accountability. We systematically log concrete promises made to the press and perform regular check-ins to determine whether they've been kept. We aim to capture as much of the "zone" as possible, and from there to sift what's important from what's not.
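
As a rough illustration of what "logging a promise" could mean in practice, here is a minimal sketch of one record. The field names and statuses are hypothetical placeholders for illustration, not our actual schema:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Status(Enum):
    PENDING = "pending"           # deadline not yet reached
    KEPT = "kept"                 # verifiably delivered
    BROKEN = "broken"             # deadline passed, not delivered
    WALKED_BACK = "walked_back"   # publicly retracted or revised

@dataclass
class Promise:
    """One concrete, checkable promise made to the press."""
    speaker: str                  # who made the promise
    statement: str                # the promise, quoted as closely as possible
    source_url: str               # where it was reported
    made_on: date                 # when it was said
    due_by: date | None           # stated or implied deadline, if any
    status: Status = Status.PENDING
    check_ins: list[str] = field(default_factory=list)  # notes from each follow-up
```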

In an era of disposable headlines, The Follow Up is built around a simple idea: if they said it, we should remember it.

Note on AI Usage

We use AI for much of the processing and fact-checking done by The Follow Up. While this is hopefully obvious, we feel it's important to state it explicitly; too often, organizations try to sneak AI past you and then say, "see! it wasn't so bad!"

We know many people are uncomfortable with AI, in large part because of how aggressively large companies push it. Despite this, we believe it is an immensely valuable tool that we all, collectively, would be foolish to reject.

The difference, which we hope is apparent, is that we are not dogmatic about AI. It is just a tool, one that is fallible and one that can break. This is why we split our processing into well-defined, bounded tasks. One of the biggest mistakes made in the AI industry is throwing vague, underspecified problems at AI models in the false hope that a higher intelligence lies within.
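
To make "well-defined, bounded tasks" concrete, here is a minimal sketch of what that decomposition might look like. Everything here is assumed for illustration: the function names, the prompts, and the `ask_model` wrapper are hypothetical, not our actual pipeline.

```python
def ask_model(instruction: str, content: str) -> str:
    """Hypothetical LLM wrapper; a real implementation would call a model API."""
    raise NotImplementedError("plug in a model provider here")

def extract_promises(article_text: str) -> list[str]:
    """Task 1: pull out only sentences containing a concrete promise."""
    return ask_model(
        "List each sentence from the text that contains a specific, "
        "checkable commitment. Output one sentence per line.",
        article_text,
    ).splitlines()

def extract_deadline(promise: str) -> str | None:
    """Task 2: for a single promise, find its stated deadline, if any."""
    answer = ask_model(
        "Does this promise include a deadline? Reply with the date "
        "or the single word NONE.",
        promise,
    ).strip()
    return None if answer == "NONE" else answer

def check_delivery(promise: str, follow_up_reporting: str) -> str:
    """Task 3: given later reporting, classify the outcome."""
    return ask_model(
        "Based only on the provided reporting, answer with exactly one "
        "of: KEPT, BROKEN, WALKED_BACK, UNCLEAR.",
        f"Promise: {promise}\n\nReporting: {follow_up_reporting}",
    )
```

Each step asks one narrow, verifiable question, so a failure in any one of them is easy to spot and correct, rather than handing the model the whole problem at once.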

As time goes on, we hope to introduce more ways to "check and balance" the AI with human feedback, and, if funding allows, our own models informed by human review. But that's a long way off. This is an individual project built between a million other tasks, and a single person can only do so much.