# Milgram Obedience Experiments
Stanley Milgram's 1961-1963 Yale experiments on obedience to authority — in which participants delivered what they believed were dangerous electric shocks to a confederate on researcher instruction — have held up better than the Stanford Prison Experiment, but modern readings emphasize conditional engaged followership rather than blind obedience.
The **Milgram Obedience Experiments** were conducted by psychologist Stanley Milgram at Yale University between 1961 and 1963, originally motivated by the Adolf Eichmann trial and the question of whether ordinary people would participate in atrocities under authority.

## Design

Participants ('teachers') were instructed by an experimenter in a lab coat to administer electric shocks of increasing intensity to a 'learner' (a confederate) for wrong answers. Shock levels rose from 15V to a maximum labelled 450V ('XXX'). No shocks were actually delivered; the learner's distress was scripted audio.

Primary finding: **65% of participants delivered the maximum 450V shock** when instructed to continue, despite the learner's audible and escalating distress. This rate was far higher than pre-experiment predictions: surveyed psychologists expected roughly 1% compliance.

## Reception

The experiments were hugely influential: foundational to social psychology curricula, dramatized on screen (*The Tenth Level*, 1976), and widely cited in discussions of the Holocaust, the My Lai Massacre, and corporate wrongdoing.

## Modern status

Unlike the Stanford Prison Experiment (see *Stanford Prison Experiment Debunked*), Milgram's core finding has partially survived replication. Key updates:

- **Jerry Burger's 2009 partial replication** (capped at 150V for ethical reasons) found ~70% compliance up to that point, consistent with Milgram's results.
- **Dozens of cross-cultural replications** from the 1960s to the present show similar patterns.
- **Archival reanalysis** (Gina Perry's *Behind the Shock Machine*, 2012): Milgram's published rates obscure significant variation across his 24 condition variations. Some conditions produced near-zero obedience, so the headline compliance rate was not obtained uniformly.
- **Engaged-followership reinterpretation** (Stephen Reicher and Alex Haslam, 2010s): participants did not obey blindly. They complied when they identified with the scientific project and believed the experimenter was a legitimate authority acting for a legitimate cause ('help science').
When the experimenter was framed as harming the learner for no good reason, compliance dropped sharply. This reframes Milgram as a study of *legitimacy-based followership*, not mindless obedience.

## Key implications (post-revision)

- Authority alone is insufficient to produce harmful compliance.
- Legitimacy framing ('this is how important science is done') is the load-bearing variable.
- Disconnection from the harm (physical distance, bureaucratic fragmentation) substantially raises compliance. Milgram's own condition variations established this: putting the learner in the same room dropped compliance to 40%; requiring the participant to physically press the learner's hand onto the shock plate dropped it to 30%.
- People who identify with *the victim's group* rather than the authority's group resist.

## Contrast with SPE

Milgram and the SPE are often paired in psychology curricula, but they should be treated very differently:

- Milgram: coercive pressure produced partial compliance that is reproducible and conditional on legitimacy.
- SPE (see *Stanford Prison Experiment Debunked*): coached guards performing assigned roles, not science.

The combined honest teaching lesson: institutional cruelty is not inevitable and not pure 'human nature.' It emerges from specific conditions: authority legitimacy claims, moral disengagement, physical and social distance from victims, and the loss of dissenting voices. Those conditions are engineerable in both directions.