Why “predictive programming” is psychologically implausible
If you think that popular culture – movies, TV, and music – has been kind of samey lately, you’re not alone. Peter Suderman at Slate has proposed that most summer blockbusters follow the same basic formula laid out in a screenwriting book from 2005. Others, though, think that this problem goes back much further, and reflects a sinister conspiracy to indoctrinate the public into accepting a totalitarian future.
This is a conspiracy theory called “predictive programming.” It’s the brainchild of a man by the name of Alan Watt, and has been popularised by luminaries of conspiracism like Alex Jones and David Icke, among many others. Here’s how it works.
What is predictive programming?
Imagine that you’re part of a sinister conspiracy to do a particular thing in the future – let’s say that you’re planning to install GPS trackers in people’s heads in order to cement the government’s control over the population. You’re all set to go: the implanting centres are fully staffed and your bulk order of GPS receivers has come in from Foxconn. But you’re worried about how people will react – will they accept the implants? Will they balk at the legal requirement and take to the streets, risking an overthrow of the totalitarian system you’ve worked so hard to build? What’s a conspirator to do?
This is where predictive programming comes in. According to the theory, you can lay the psychological groundwork for the implantation program by planting images in popular media. For instance, you could sponsor a re-release of the original Total Recall, a film in which Arnold Schwarzenegger has a tracking bug implanted in his head. You could get a puppet TV network to release a new sci-fi TV show depicting a future where everyone has a brain implant. Through this media campaign, people will come to accept brain implants as an inevitability. When the time comes, they will accept them without question.
“Predictive programming is a subtle form of psychological conditioning provided by the media to acquaint the public with planned societal changes to be implemented by our leaders. If and when these changes are put through, the public will already be familiarized with them and will accept them as natural progressions, thus lessening possible public resistance and commotion.” - Alan Watt
This is why the pilot episode of The Lone Gunmen featured the attempted destruction of the Twin Towers, why a map in The Dark Knight Rises had a location marked “Sandy Hook,” and why Family Guy had a joke about the Boston Marathon. According to predictive programming theorists, these were all planted within the media in order to prepare the public for these events – events that were planned well in advance by the Powers That Be. When the events happened, people shrugged and went on with their daily lives rather than reacting to them as they otherwise would have.
But it’s not all bad news for the sheeple. By finding common themes in popular culture, we can figure out what they’re planning next. For instance, look at how many science fiction films feature a dystopian future with an evil, totalitarian government – Logan’s Run, Robocop, Starship Troopers, V for Vendetta, Minority Report, The Hunger Games, and so on. This is no coincidence: it’s predictive programming. The conspirators are preparing the world for a totalitarian government takeover.
So that’s the theory of predictive programming. At its heart, it’s a psychological claim. So is it psychologically plausible? I argue that the answer is no. First, social learning theory shows that context is important when presenting something that’s meant to be a model for future behaviour. Second, the supposed outcomes of predictive programming seem to have nothing to do with the methods used. Third, the mechanisms by which predictive programming is supposed to work don’t make nearly as much sense as they seem to. Fourth, neurolinguistic programming, the most commonly cited psychological justification for why predictive programming could be expected to work, has been thoroughly discredited by research. Finally, predictive programming is not very good at actual predictions.
1. Conflict with social learning theory.
A major component of predictive programming theory is the idea that when someone encounters in real life something they’ve seen depicted in fiction, they react to it with resigned indifference and maybe a half-hearted protest. According to this view, the mere portrayal of some social condition in fiction programs people with the idea that it is inevitable and should not be resisted.
To understand why this is implausible, consider one of the most famous psychological experiments of all time: Albert Bandura’s “Bobo Doll” experiments. In this series of studies, Bandura and his team recruited two groups of children. In one group, each child was shown a short film of an adult hitting an inflatable clown doll; in the other group, the adult in the film ignored the Bobo doll. After watching whatever film they were assigned to, each child was then put into a room with a variety of toys, including a Bobo doll. The children who had been shown the aggressive video overwhelmingly mimicked the adult and beat up the doll, while the other group left the doll alone.
What does this mean for predictive programming? It completely debunks the idea that simply portraying something will elicit the same reaction regardless of context. Watching a model hit the Bobo doll made the children want to do the same; watching a model ignore it led them to leave it alone. The children’s reaction was driven not by the simple presence of the doll, but by the adult model’s behaviour towards it. It’s relevant, then, that in nearly every film which is supposedly carrying out predictive programming in aid of some dystopian future government, the dystopian society is seen as evil and resistance is seen as a moral imperative.
Consider The Hunger Games, a film about a teenage girl rebelling against the totalitarian government that rules the shattered remnants of North America with an iron fist, described by Alex Jones as “one hundred percent predictive programming.” The filmmakers try to make us sympathise with the heroine, her friends, and the downtrodden masses in their fight for freedom. The idea that this would make people less likely to resist a totalitarian government is both baseless and counterintuitive. It flies in the face of half a century of research on social learning and how we model our own behaviour after the behaviour of others around us. If you were trying to institute an evil world government, would you really want to put it out there that people who fight against evil world governments are the heroes? You could use the same reasoning to say that the Ku Klux Klan hagiography Birth of a Nation was really a way of preparing the world for racial integration and mixed marriages. Portrayal is not endorsement, and attitudes are determined in a more complex way than simple presence versus absence.
One more example. The cheesy 70s sci-fi classic Logan’s Run depicts a dystopian future where people are ceremonially executed upon reaching the age of 30. It’s a favourite of predictive programming theorists who think that it presages a world of enforced population control, even though the world is portrayed as cruel and unjust and the hero of the film ultimately destroys the totalitarian society in question. What’s more, a Google search for “logan’s run” + “obamacare” returns over 110,000 results. Far from accepting the film’s future as an inevitability, people use Logan’s Run as a way to resist that future, to give context to their fears about euthanasia and end-of-life care. This is the opposite of what one would expect to happen if predictive programming were a legitimate phenomenon.
2. Poorly defined purposes.
Sometimes the supposed point of predictive programming is not to program some sort of large social change like the institution of a totalitarian government, but instead to manage the impact of a particular event. A good example of this is the various lists of popular media “references” to 9/11 before the event – depictions of the Twin Towers exploding, and so on. One article from last year discusses the idea that Independence Day was in fact predictive programming for 9/11 – America having iconic buildings blown up by an alien enemy, a heroic president who fights back with force of arms, Will Smith flying a UFO, and so on. As with the other examples above, the claim is that predictive programming prevented people from reacting to 9/11; instead, they just accepted it as inevitable and moved on.
An article on InfoWars talks about how the appearance of a 9/11-type government plot in the pilot episode of The Lone Gunmen was a way of discrediting the 9/11 truth movement before it began:
The show was used to subconsciously manipulate people to believe that if these events did actually happen, it would be like a film, not a part of reality, therefore we should not worry too much. Anyone who would dare to say that the Government were responsible for such terrorist attacks would immediately be branded a “lunatic conspiracy theorist, like those guys from the X-Files.”
But there’s a contradiction here. On the one hand, showing the public a fictional 9/11-type event as an external attack is meant to make them more likely to believe that this is the case. On the other hand, showing them a fictional 9/11-type event as an inside job makes them less likely to believe that this is the case. You can’t have it both ways. Besides, if the point of both of these was to simply stop people from having any reaction at all to 9/11, interpretation aside, then all this programming certainly didn’t do a very good job.
Sometimes the motives are said to be even more obscure. The words “Sandy Hook” appear on a map in a scene in The Dark Knight Rises, for instance. But what would be the purpose of this? Even if you buy that predictive programming in the “prepare the public for social change” sense works, it strains belief to the breaking point to think that a brief appearance of the name of a town in a popular film would somehow prevent people from reacting to a tragedy that takes place there. Moreover, this would contradict all of the Sandy Hook conspiracy theories that allege that the shooting never took place and was a staged hoax meant to provoke a reaction from gun-control advocates. Mind-controlling people not to react to it would be totally counterproductive. So what would be the point?
This is where things get a little weird. Alan Watt, when writing about the reasons for predictive programming, said that “legally, they must tell you what they’re doing. And they do – all the time.” James Farganne reasons that the conspirators’ “own belief system seems to mandate that they notify their victims.” At this point, claims about predictive programming cease to be psychological so it’d be off-topic to deal with them in any sort of rigorous way.
3. Implausible psychological mechanisms.
The basic idea of predictive programming is that seeing something portrayed in popular media will prevent people from reacting to the same event when it happens in their own lives. Rather than resisting it, they will accept it and move on. This is something that people are not aware of – they are persuaded unconsciously, subliminally, without their knowledge or consent.
In fact, there has been a good amount of psychological research on subliminal persuasion. For instance, Karremans, Stroebe, and Claus (2006) showed that subliminally showing people the name of a drink will increase the chance that they’ll pick that drink when presented with a choice – but only if they’re thirsty. However, this body of research conflicts with the idea of predictive programming on a number of counts. For instance, the core idea of predictive programming is that showing people things in fiction will prevent them from reacting to them in real life, and that the tone of the portrayal doesn’t matter. However, subliminal priming research shows the importance of positive or negative emotions – for instance, Sweeny, Grabowecky, Suzuki, and Paller (2009) showed people a series of surprised-looking faces. Unbeknownst to the participants in the study, they were also subliminally shown fearful, happy, or neutral faces along with the surprised ones. Participants remembered the surprised faces better, and rated them more positively, when they were matched with subliminal happy faces. This study and others like it make it implausible that portraying something in a positive or negative light doesn’t affect how it’s perceived.
Then again, things like the government in The Hunger Games and the age-based euthanasia in Logan’s Run are hardly subliminal – they’re major plot points. It’s questionable whether they would have “subliminal” effects at all. So if predictive programming doesn’t work subliminally, how is it supposed to work? One possible candidate is the “mere exposure” effect – showing people a pleasant or neutral stimulus repeatedly will lead them to like it more and more over time. This is a well-established effect in psychology, and works even when the stimuli are not consciously perceived. However, and importantly for predictive programming, this doesn’t work for negative stimuli. In fact, Perlman and Oskamp (1970) showed that repeatedly presenting photographs of people in negative settings makes participants’ evaluations of those people harsher – they became more and more disliked over time. This is a knockout blow for the idea that repeatedly presenting a type of government or social policy in a negative light would somehow prevent people from feeling bad about it. In fact, based on what we know about the mere exposure effect, it would make things worse.
4. Pseudoscientific underpinnings.
Many of the people who traffic in predictive programming cite neurolinguistic programming (NLP) as a scientific-sounding basis for claims about its effects, or even use the two terms interchangeably – in the sense of “OMG look at this NLP / PREDICTIVE PROGRAMMING in the new James Bond movie!!” NLP, at least originally, was a generally well-specified psychological theory that made specific predictions. Other than the idea that both of them are supposed to do vaguely spooky things to your brain, however, there’s no clear link between NLP and predictive programming.
But let’s imagine that there is some consistent, NLP-based justification for predictive programming as a theory. The problem with this is that despite its popularity on the Internet, NLP has been systematically discredited as a theory of thought and behaviour – while it makes fairly straightforward predictions about counselling, learning, and eye movement, for instance, these ideas simply don’t pan out when examined empirically (see Sharpley, 1987, and Sturt et al., 2012, for reviews). There is simply no consistent evidence that NLP works – its predictions haven’t panned out after decades of testing. Proposing that predictive programming is consistent with NLP principles doesn’t do the former any favours. Building a theory of mind control on NLP is like building a castle on top of quicksand.
5. Lack of predictive validity.
But what’s the track record of predictive programming itself as a theory? As expressed by Alan Watt, predictive programming is a relatively straightforward theory. Seeing something portrayed in fiction makes people more likely to shrug and accept it when it comes along in real life. While Watt and other proponents mostly apply it to large social changes and world events, there’s no reason why it shouldn’t apply equally to things on a smaller scale – getting fired from your job, having your partner cheat on you, getting bilked out of money, and so on. These are predictions that are easily amenable to empirical research, and I’d very much like to do a study on this subject sometime to see if the predictions of the theory stand up.
However, there is already a track record of predictions in the theory’s traditional domains. Despite the name, predictive programming has not done very well when it comes to actually predicting things. In theory, you’re supposed to be able to see what They are up to by looking at what’s going on in popular media. This is meant to give an idea of what the conspirators are psychologically preparing the population for. Indeed, after some event like 9/11, people will inevitably go back and find things in the media that seem to match up. But efforts to go the other way – to predict things in advance based on what’s in the media – fall completely flat.
Three examples come immediately to mind: the Simpsons clock fiasco, the Comet Elenin panic, and the London Olympics false-flag-that-wasn’t. The Simpsons clock fiasco happened a couple of years ago when an episode of the long-running TV show featured a giant clock exploding and landing next to a sleeping Homer Simpson, who subsequently wakes up, yawns, stretches, and walks off camera. Predictive programming enthusiasts made much of the position of the hands on the clock (supposedly indicating a date), the way Homer stretched when he got up from his hammock (“obvious masonic hand gestures”, opined one YouTube commenter), the importance of clock faces in Project Monarch, the nuclear-looking explosion, and so on. Of course, none of the predictions of doom came to pass. The Comet Elenin panic was an even weirder example, where a set of coincidences led predictive programming enthusiasts to believe that the film Deep Impact was made in order to condition people to accept an extinction-level impact from the eponymous comet. The comet disintegrated while passing through the solar system in August 2011 and killed absolutely nobody. Finally, pretty much everyone who was into predictive programming (particularly Ian R. Crane) thought that there would be a false-flag attack of some kind at the 2012 Olympics in London – but the Games went off without a hitch.
The “predictive” element of predictive programming is really retrodictive – it can’t be used to predict in advance what’s going to happen, any more than flipping a coin or reading bird entrails can. There are a few possible reasons for this – either the conspirators are putting out fake “programming” in order to throw people off the trail (which decreases the chance that there’s any signal getting through all the noise), the programming is so arcane that it’s impossible to pick it out except after the fact (which makes it less plausible that it’s having any prospective effect on us), or the apparent cases of predictive programming are nothing more than the result of hunting for vague resemblances after the fact.
Clearly there are a lot of reasons to believe that predictive programming probably doesn’t work: it runs counter to one of the foundational experiments in social psychology, its effects and aims are vague and poorly defined, it doesn’t agree with decades of psychological research on mere exposure and subliminal persuasion, its “scientific” justification is completely unsupported by research, and the predictions made by its advocates simply don’t pan out.
However, predictive programming is amenable to research. There’s nothing stopping someone from putting it to the test – I would like to do this myself at some point. Get two groups of participants, like in the Bobo Doll study. While one group watches neutral videos, the other watches a series of videos where people get cheated out of things: a gambler loses money in a rigged poker game, a business deal with a corrupt politician goes bad, a bank wrongly forecloses on a mortgage and puts a family out on the streets. The participants are then put in an economic game scenario where someone else behaves unfairly, and they have the opportunity to punish them. If predictive programming works, the people who were exposed to fictional depictions of cheating should be less likely to punish the other person – rather than resisting, they’ll simply accept what happens. How do you think this experiment would turn out?
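To make the proposed study concrete, here’s a minimal sketch of how its data might be simulated and analysed. Everything here is hypothetical for illustration: the punishment rates are made-up numbers (not real data), and the two-proportion z-test is just one simple analysis choice for comparing punishment rates between the two groups.

```python
import math
import random

def simulate_group(n, punish_prob, rng):
    """Simulate n participants; each chooses to punish the unfair
    player with probability punish_prob. Returns the punisher count."""
    return sum(1 for _ in range(n) if rng.random() < punish_prob)

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: do the groups punish at different rates?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

rng = random.Random(42)  # fixed seed so the simulation is reproducible
n = 200  # hypothetical sample size per group

# Hypothetical punishment rates: predictive programming predicts that the
# group shown fictional depictions of cheating punishes LESS often than
# the neutral-videos group. These numbers are invented for the sketch.
control_punishers = simulate_group(n, 0.70, rng)
primed_punishers = simulate_group(n, 0.55, rng)

z = two_proportion_z(control_punishers, n, primed_punishers, n)
print(f"control: {control_punishers}/{n}, primed: {primed_punishers}/{n}, z = {z:.2f}")
```

If predictive programming were real, the z-statistic should come out reliably positive (control group punishing more); social learning theory, by contrast, predicts little difference or even the reverse, since the videos model the victims’ plight rather than endorse the cheating.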