He predicted the dark side of the Internet 30 years ago. Why did no one listen?

In 1994 – before most Americans had an email address or Internet access or even a personal computer – Philip Agre foresaw that computers would one day facilitate the mass collection of data on everything in society.

That process would change and simplify human behavior, wrote the then-UCLA humanities professor. And because that data would be collected not by a single, powerful “big brother” government but by lots of entities for lots of different purposes, he predicted that people would willingly part with massive amounts of information about their most personal fears and desires.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” wrote Agre, who has a doctorate in computer science from the Massachusetts Institute of Technology, in an academic paper.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation, and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Then, no one listened. Now, many of Agre’s former colleagues and friends say they’ve been thinking about him more in recent years, and rereading his work, as pitfalls of the Internet’s explosive and unchecked growth have come into relief, eroding democracy and helping to facilitate a violent uprising on the steps of the U.S. Capitol in January.

“We’re living in the aftermath of ignoring people like Phil,” said Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director of the Center for AI and Digital Policy.

Charlotte Lee, who studied under Agre as a graduate student at UCLA, and is now a professor of human-centered design and engineering at the University of Washington, said she is still studying his work and learning from it today. She said she wishes he were around to help her understand it even better.

But Agre isn’t available. In 2009, he simply dropped off the face of the earth, abandoning his position at UCLA. When friends reported Agre missing, police located him and confirmed that he was OK, but Agre never returned to the public debate. His closest friends declined to further discuss details of his disappearance, citing respect for Agre’s privacy.

Instead, many of the ideas and conclusions that Agre explored in his academic research and writing have only recently begun cropping up at think tanks and nonprofits focused on holding technology companies accountable.

“I’m seeing things Phil wrote about in the ’90s being said today as though they’re new ideas,” said Christine Borgman, a professor of information studies at UCLA who helped recruit Agre for his professorship at the school.

The Washington Post sent a message to Agre’s last known email address. It bounced back. Attempts to contact his sister and other family members were unsuccessful. A dozen former colleagues and friends had no idea where Agre is living today. Some said that, as of a few years ago, he was living somewhere around Los Angeles.

Agre was a child math prodigy who became a popular blogger and contributor to Wired. Now he has been all but forgotten in mainstream technology circles. But his work is still regularly cited by technology researchers in academia and is considered foundational reading in the field of social informatics, or the study of the effects of computers on society.

Agre earned his doctorate at MIT in 1989, the same year the World Wide Web was invented. At that time, even among Silicon Valley venture capitalists betting on the rise of computers, few people foresaw just how deeply and quickly the computerization of everything would change life, economics or even politics.

A small group of academics, Agre included, observed that computer scientists viewed their work in a vacuum largely disconnected from the world around it. At the same time, people outside that world lacked a deep enough understanding of technology or how it was about to change their lives.

By the early 1990s, Agre came to believe the field of artificial intelligence had gone astray, and that a lack of criticism of the profession was one of the main reasons. In those early days of artificial intelligence, most people in AI were focused on complex math problems aimed at automating human tasks, with limited success. Yet the industry described the code it was writing as “intelligent,” giving it human attributes that didn’t actually exist.

His landmark 1997 paper, “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI,” is still widely considered a classic, said Geoffrey Bowker, professor emeritus of informatics at the University of California at Irvine. Agre noticed that those building artificial intelligence ignored critiques of the technology from outsiders. But Agre argued that criticism should be part of the process of building AI. “The conclusion is quite brilliant and has taken us as a field many years to understand. One foot planted in the craftwork in design and the other foot planted in a critique,” Bowker said.

Nevertheless, AI has barreled ahead unencumbered, weaving itself into even “low tech” industries and affecting the lives of most people who use the Internet. It guides what people watch and read on YouTube and Facebook, helps determine sentences for convicted criminals, allows companies to automate and eliminate jobs, and lets authoritarian regimes monitor citizens with greater efficiency and thwart attempts at democracy.

Today’s AI, which has largely abandoned the type of work Agre and others were doing in the ’80s and ’90s, is focused on ingesting massive amounts of data and analyzing it with the world’s most powerful computers. But as the new form of AI has progressed, it has created problems – ranging from discrimination to filter bubbles to the spread of disinformation – and some academics say that is in part because it suffers from the same lack of self-criticism that Agre identified 30 years ago.

In December, Google’s firing of AI research scientist Timnit Gebru, after she wrote a paper on the ethical issues facing Google’s AI efforts, highlighted the continued tension over the ethics of artificial intelligence and the industry’s aversion to criticism.

“It’s such a homogenous field and people in that field don’t see that maybe what they’re doing could be criticized,” said Sofian Audry, a professor of computational media at the University of Quebec in Montreal who began his career as an artificial intelligence researcher. “What Agre says is that it is worthwhile and necessary that the people who develop these technologies are critical,” Audry said.

Agre grew up in Maryland, where he said he was “constructed to be a math prodigy” by a psychologist in the region. He said in his 1997 paper that school integration led to a search for gifted and talented students. Agre later became angry at his parents for sending him off to college early and his relationship with them suffered as a result, according to a friend, who spoke on the condition of anonymity because Agre did not give him permission to speak about his personal life.

Agre wrote that when he entered college, he wasn’t required to learn much other than math and “arrived in graduate school at MIT with little genuine knowledge beyond math and computers.” He took a year off from graduate school to travel and read, “trying in an indiscriminate way, and on my own resources, to become an educated person,” he wrote.

Agre began to rebel, in a sense, against his profession, seeking out critics of artificial intelligence and studying philosophy and other academic disciplines. At first, he found the texts “impenetrable,” he wrote, because he had trained his mind to dissect everything he read as he would a technical paper on math or computer science. “It finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms,” he wrote.

Agre’s blossoming intellectual interest took him away from computer science and transformed him into something unusual at that time: a brilliant mathematician with a deep understanding of the most advanced theories in artificial intelligence, who could also step outside of that realm and look at it critically from the perspective of an outsider.

For this reason, Agre became a sought-after academic. Several former colleagues told stories about Agre’s insatiable appetite for books from across the academic and popular landscape, piled high in his office or in the library. He became known for original thinking buoyed by wide-ranging curiosity.

“He was a very enlightening person to think with – someone you would want to have a meal with at every opportunity,” Borgman said.

Agre combined his understanding of the humanities and technology to dissect the impact technology would have on society as it progressed. Today, many of his analyses read like predictions come true.

In a 1994 paper, published a year before the launches of Yahoo, Amazon and eBay, Agre foresaw that computers could facilitate the mass collection of data on everything in society, and that people would overlook the privacy concerns because, rather than “big brother” collecting data to surveil citizens, it would be many different entities collecting the data for lots of purposes, some good and some problematic.

More profoundly, though, Agre wrote in the paper that the mass collection of data would change and simplify human behavior to make it easier to quantify. That has happened on a scale few people could have imagined, as social media and other online networks have corralled human interactions into easily quantifiable metrics, such as being friends or not, liking or not, a follower or someone who is followed. And the data generated by those interactions has been used to further shape behavior, by targeting messages meant to manipulate people psychologically.
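Agre called that reshaping “capture”: before behavior can be recorded at scale, it must first be forced into discrete, machine-readable categories. The minimal Python sketch below illustrates the idea; the platform, field names and scoring weights are hypothetical, invented for illustration rather than drawn from any real system.

```python
from dataclasses import dataclass

# Hypothetical sketch of Agre's point: a platform cannot store a lived
# relationship, so it first reduces the relationship to a few discrete,
# countable fields. All names and weights below are invented.
@dataclass
class Relationship:
    user_id: str
    other_id: str
    is_friend: bool = False    # "friends or not" becomes a single bit
    follows: bool = False      # "a follower ..."
    followed_by: bool = False  # "... or someone who is followed"
    likes_given: int = 0       # affinity becomes a counter
    messages_sent: int = 0

def engagement_score(r: Relationship) -> float:
    """Once behavior is captured this way, it can be scored, ranked and
    targeted, the step that lets the data further shape behavior."""
    return (2.0 * r.is_friend + 1.0 * r.follows + 1.0 * r.followed_by
            + 0.1 * r.likes_given + 0.5 * r.messages_sent)

if __name__ == "__main__":
    r = Relationship("alice", "bob", is_friend=True, likes_given=14)
    print(engagement_score(r))  # 3.4: the friendship, as the platform sees it
```

However crude, reductions like this are what make the targeting Agre anticipated computationally tractable: once a relationship is a handful of numbers, it can be compared, ranked and optimized against.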

In 2001, he wrote that “your face is not a bar code,” arguing against the use of facial recognition in public places. In the article, he predicted that, if the technology continued to develop in the West, it would eventually be adopted elsewhere, allowing, for instance, the Chinese government to track everyone inside its country within 20 years.

Twenty years later, a debate is raging in the U.S. over the use of facial recognition technology by law enforcement and immigration officials and some states have begun to ban the technology in public places. Despite outcry, it may be too late to curtail the proliferation of the technology. China, as Agre predicted, has already begun employing it on a mass scale, allowing an unprecedented level of surveillance by the Communist Party.

Agre brought his work into the mainstream with an Internet mailing list called the Red Rock Eater News Service, named after a joke in Bennett Cerf’s Book of Riddles. It’s considered an early example of what would eventually become blogs.

Agre was also, at times, deeply frustrated with the limitations of his work, which was so far ahead of its time that it went unheeded for a quarter-century. “He felt that people didn’t get what he was saying. He was writing for an audience of the benighted and the benighted were unable to understand what he was saying,” Bowker said.

“He was certainly frustrated that there wasn’t more uptake. But people who are a generation ahead of themselves, they’re always a generation ahead of themselves,” Borgman said.

Agre’s final project was what friends and colleagues colloquially called “The Bible of the Internet,” a definitive book that would dissect the foundations of the Internet from the ground up. But he never finished it.

From time to time, Agre has resurfaced, according to a former colleague, but he has not been seen in years.

“Why do certain kinds of insightful scholars or even people with such an insightful understanding of some field essentially throw their arms in the air and go I’m done with this?” asked Simon Penny, a professor of fine arts at the University of California at Irvine who has studied Agre’s work extensively. “Psychologically, people have these breaks. It’s a big question. Who goes on and why? Who continues to be engaged in some sort of battle, some sort of intellectual project, and at what point do they go I’m done? Or say, ‘this is not relevant to me anymore and I’ve seen the error of my ways.’”

Several years ago, former colleagues at UCLA attempted to put together a collection of his work, but Agre resurfaced, telling them to stop.

Agre’s life’s work was left uncompleted, questions posed but unanswered. John Seberger, a postdoctoral fellow in the Department of Informatics at Indiana University who has studied Agre’s work extensively, said that’s not necessarily a bad thing.

Seberger said Agre’s work offers a way of thinking about the problems that face an increasingly digital society. But today, more than a decade after Agre’s disappearance, the problems are more clearly understood and there are more people studying them.

“Especially right now when we are dealing with profound social unrest, the possibility to involve more diverse groups of scholars in answering these questions that he left unanswered can only benefit us,” he said.
