Technology

Does Google Manipulate Your Search Results?

by Sissi Cao

Click here to read the article: “Does Google Manipulate Your Search Results?”

Fix the Machine, Not the Person

by Aaron Swartz

#systemanalysis, #proposal, #technology, #automotiveindustry, #logos, #research, #sharedvalues, #cognitivebias

Close-up image of a foosball player
“Photo” is in the Public Domain, CC0

This post is part seven of the series Raw Nerve.

The General Motors plant in Fremont was a disaster. “Everything was a fight,” the head of the union admits. “They spent more time on grievances and on things like that than they did on producing cars. They had strikes all the time. It was just chaos constantly. … It was considered the worst workforce in the automobile industry in the United States.”

“One of the expressions was, you can buy anything you want in the GM plant in Fremont,” adds Jeffrey Liker, a professor who studied the plant. “If you want sex, if you want drugs, if you want alcohol, it’s there. During breaks, during lunch time, if you want to gamble illegally—any illegal activity was available for the asking within that plant.” Absenteeism was so bad that some mornings they didn’t have enough employees to start the assembly line; they had to go across the street and drag people out of the bar.

When management tried to punish workers, workers tried to punish them right back: scratching cars, loosening parts in hard-to-reach places, filing union grievances, sometimes even building cars unsafely. It was war.

In 1982, GM finally closed the plant. But the very next year, when Toyota was planning to start its first plant in the US, it decided to partner with GM to reopen it, hiring back the same old disastrous workers into the very same jobs. And so began the most fascinating experiment in management history.

Toyota flew this rowdy crew to Japan, to see an entirely different way of working: The Toyota Way. At Toyota, labor and management considered themselves on the same team; when workers got stuck, managers didn’t yell at them, but asked how they could help and solicited suggestions. It was a revelation. “You had union workers—grizzled old folks that had worked on the plant floor for 30 years, and they were hugging their Japanese counterparts, just absolutely in tears,” recalls their Toyota trainer. “And it might sound flowery to say 25 years later, but they had had such a powerful emotional experience of learning a new way of working, a way that people could actually work together collaboratively—as a team.”

Three months after they got back to the US and reopened the plant, everything had changed. Grievances and absenteeism fell away and workers started saying they actually enjoyed coming to work. The Fremont factory, once one of the worst in the US, had skyrocketed to become the best. The cars they made got near-perfect quality ratings. And the cost to make them had plummeted. It wasn’t the workers who were the problem; it was the system.1

An organization is not just a pile of people, it’s also a set of structures. It’s almost like a machine made of men and women. Think of an assembly line. If you just took a bunch of people and threw them in a warehouse with a bunch of car parts and a manual, it’d probably be a disaster. Instead, a careful structure has been built: car parts roll down on a conveyor belt, each worker does one step of the process, everything is carefully designed and routinized. Order out of chaos.

And when the system isn’t working, it doesn’t make sense to just yell at the people in it — any more than you’d try to fix a machine by yelling at the gears. True, sometimes you have the wrong gears and need to replace them, but more often you’re just using them in the wrong way. When there’s a problem, you shouldn’t get angry with the gears — you should fix the machine.

If you have goals in life, you’re probably going to need some sort of organization. Even if it’s an organization of just you, it’s still helpful to think of it as a kind of machine. You don’t need to do every part of the process yourself — you just need to set up the machine so that the right outcomes happen.

For example, let’s say you want to build a treehouse in the backyard. You’re great at sawing and hammering, but architecture is not your forte. You build and build, but the treehouses keep falling down. Sure, you can try to get better at architecture, develop a better design, but you can also step back, look at the machine as a whole, and decide to fire yourself as the architect. Instead, you find a friend who loves that sort of thing to design the treehouse for you and you stick to actually building it. After all, your goal was to build a treehouse whose design you like — does it really matter whether you’re the one who actually designed it?2

Or let’s say you really want to get in shape, but never remember to exercise. You can keep beating yourself up for your forgetfulness, or you can put a system in place. Maybe you have your roommate check to see that you exercise before you leave your house in the morning or you set a regular time to consistently go to the gym together. Life isn’t a high school exam; you don’t have to solve your problems on your own.

In 1967, Edward Jones and Victor Harris gathered a group of college students and asked them to judge another student’s exam (the student was a fictional character, but let’s call him Jim). The exam always had one question, asking Jim to write an essay on Fidel Castro “as if [he] were giving the opening statement in a debate.” But what sort of essay Jim was supposed to write varied: some of them required Jim to write a defense of Castro, others required Jim to write a critique of Castro, the rest left the choice up to Jim. The kids in the experiment were asked to read Jim’s essay and then were asked whether they thought Jim himself was pro- or anti-Castro.

Jones and Harris weren’t expecting any shocking results here; their goal was just to show the obvious: that people would conclude Jim was pro-Castro when he voluntarily chose to write a pro-Castro essay, but not when he was forced to by the teacher. But what they found surprised them: even when the students could easily see the question required Jim to write a pro-Castro essay, they still rated Jim as significantly more pro-Castro. It seemed hard to believe. “Perhaps some of the subjects were inattentive and did not clearly understand the context,” they suspected.

So they tried again. This time they explained the essay was written for a debate tournament, where the student had been randomly assigned to either the for or against side of the debate. They wrote it in big letters on the blackboard, just to make this perfectly clear. But again they got the same results — even more clearly this time. They still couldn’t believe it. Maybe, they figured, students thought Jim’s arguments were so compelling he must really believe them to be able to come up with them.

So they tried a third time — this time recording Jim on tape along with the experimenter giving him the arguments to use. Surely no one would think Jim came up with them on his own now. Again, the same striking results: students were persuaded Jim believed the arguments he said, even when they knew he had no choice in making them.3

This was an extreme case, but we make the same mistake all the time. We see a sloppily-parked car and we think “what a terrible driver,” not “he must have been in a real hurry.” Someone keeps bumping into you at a concert and you think “what a jerk,” not “poor guy, people must keep bumping into him.” A policeman beats up a protestor and we think “what an awful person,” not “what terrible training.” The mistake is so common that in 1977 Lee Ross decided to name it the “fundamental attribution error”: we attribute people’s behavior to their personality, not their situation.4

Our natural reaction when someone screws up is to get mad at them. This is what happened at the old GM plant: workers would make a mistake and management would yell and scream. If asked to explain the yelling, they’d probably say that since people don’t like getting yelled at, it’d teach them to be more careful next time.

But this explanation doesn’t really add up. Do you think the workers liked screwing up? Do you think they enjoyed making crappy cars? Well, we don’t have to speculate: we know the very same workers, when given the chance to do good work, took pride in it and started actually enjoying their jobs.

They’re just like you, when you’re trying to exercise but failing. Would it have helped to have your friend just yell and scream at you for being such a lazy loser? Probably not — it probably would have just made you feel worse. What worked wasn’t yelling, but changing the system around you so that it was easier to do what you already wanted to do.

The same is true for other people. Chances are, they don’t want to annoy you, they don’t like screwing up. So what’s going to work isn’t yelling at them, but figuring out how to change the situation. Sometimes that means changing how you behave. Sometimes that means bringing another person into the mix. And sometimes it just means simple stuff, like changing the way things are laid out or putting up reminders.

At the old GM plant, in Fremont, workers were constantly screwing things up: “cars with engines put in backwards, cars without steering wheels or brakes. Some were so messed up they wouldn’t start, and had to be towed off the line.” Management would yell at the workers, but what could you do? Things were moving so fast. “A car a minute don’t seem like it’s moving that fast,” noted one worker, “but when you don’t get it, you’re in the hole. There’s nobody to pull you out at General Motors, so you’re going to let something go.”

At the Toyota plant, they didn’t just let things go. There was a red cord running above the assembly line, known as an andon cord, and if you ever found yourself in the hole, all you had to do was pull it, and the whole line would stop. Management would come over and ask you how they could help, if there was a way they could fix the problem. And they’d actually listen — and do it!

You saw the results all over the factory: mats and cushions for the workers to kneel on; hanging shelves traveling along with the cars, carrying parts; special tools invented specifically to solve problems the workers had identified. Those little things added up to make a big difference.

When you’re upset with someone, all you want to do is change the way they’re acting. But you can’t control what’s inside a person’s head. Yelling at them isn’t going to make them come around, it’s just going to make them more defiant, like the GM workers who keyed the cars they made.

No, you can’t force other people to change. You can, however, change just about everything else. And usually, that’s enough.

Notes:

  1. This story has been told several places, but the quotes here are from Frank Langfitt with Brian Reed, “NUMMI,” This American Life 403 (26 March 2010; visited 2012-09-23). Quotes are taken from the show’s transcript which sometimes differ slightly from the aired version.
  2. Some of the concepts and terms here were inspired by Ray Dalio, Principles (2001), part 2 (visited 2012-09-01).
  3. Edward E. Jones and Victor A. Harris, “The Attribution of Attitudes,” Journal of Experimental Social Psychology 3:1 (January 1967), 1–24.
  4. Lee Ross, “The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process,” Advances in Experimental Social Psychology 10 (1977), 173–220.

____________________

Aaron Swartz was an American computer programmer, entrepreneur, writer, political organizer, and Internet hacktivist. He was involved in the development of the web feed format RSS and the Markdown publishing format, the organization Creative Commons, and the website framework web.py, and was a co-founder of the social news site Reddit. He died in 2013. His writings can be found on his blog, aaronsw.com.

Creative Commons License

Fix the Machine, Not the Person by Aaron Swartz is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Getting a Scientific Message Across Means Taking Human Nature into Account

by Rose Hendricks

#reportinginformation, #cognitivebias, #logos, #ethos, #kairos, #pathos, #currentevents, #science

Image of a man acting out "Hear no evil, Speak no evil, and See no evil"
“Hear no evil. Speak no evil. See no evil” by smileham is licensed under CC BY-NC 2.0

We humans have collectively accumulated a lot of science knowledge. We’ve developed vaccines that can eradicate some of the most devastating diseases. We’ve engineered bridges and cities and the internet. We’ve created massive metal vehicles that rise tens of thousands of feet and then safely set down on the other side of the globe. And this is just the tip of the iceberg (which, by the way, we’ve discovered is melting). While this shared knowledge is impressive, it’s not distributed evenly. Not even close. On too many important issues, science has reached a consensus that the public has not.

Scientists and the media need to communicate more science and communicate it better. Good communication ensures that scientific progress benefits society, bolsters democracy, weakens the potency of fake news and misinformation, and fulfills researchers’ responsibility to engage with the public. Such beliefs have motivated training programs, workshops and a research agenda from the National Academies of Sciences, Engineering, and Medicine on learning more about science communication. A resounding question remains for science communicators: What can we do better?

A common intuition is that the main goal of science communication is to present facts; once people encounter those facts, they will think and behave accordingly. The National Academies’ recent report refers to this as the “deficit model.”

But in reality, just knowing facts doesn’t necessarily guarantee that one’s opinions and behaviors will be consistent with them. For example, many people “know” that recycling is beneficial but still throw plastic bottles in the trash. Or they read an online article by a scientist about the necessity of vaccines, but leave comments expressing outrage that doctors are trying to further a pro-vaccine agenda. Convincing people that scientific evidence has merit and should guide behavior may be the greatest science communication challenge, particularly in our “post-truth” era.

Luckily, we know a lot about human psychology – how people perceive, reason and learn about the world – and many lessons from psychology can be applied to science communication endeavors.

 

Consider human nature

Regardless of your religious affiliation, imagine that you’ve always learned that God created human beings just as we are today. Your parents, teachers and books all told you so. You’ve also noticed throughout your life that science is pretty useful – you especially love heating up a frozen dinner in the microwave while browsing Snapchat on your iPhone.

One day you read that scientists have evidence for human evolution. You feel uncomfortable: Were your parents, teachers and books wrong about where people originally came from? Are these scientists wrong? You experience cognitive dissonance – the uneasiness that results from entertaining two conflicting ideas.

Psychologist Leon Festinger first articulated the theory of cognitive dissonance in 1957, noting that it’s human nature to be uncomfortable with maintaining two conflicting beliefs at the same time. That discomfort leads us to try to reconcile the competing ideas we come across. Regardless of political leaning, we’re hesitant to accept new information that contradicts our existing worldviews.

One way we subconsciously avoid cognitive dissonance is through confirmation bias – a tendency to seek information that confirms what we already believe and discard information that doesn’t.

This human tendency was first exposed by psychologist Peter Wason in the 1960s in a simple logic experiment. He found that people tend to seek confirmatory information and avoid information that would potentially disprove their beliefs.

The concept of confirmation bias scales up to larger issues, too. For example, psychologists John Cook and Stephan Lewandowsky asked people about their beliefs concerning global warming and then gave them information stating that 97 percent of scientists agree that human activity causes climate change. The researchers measured whether the information about the scientific consensus influenced people’s beliefs about global warming.

Those who initially opposed the idea of human-caused global warming became even less accepting after reading about the scientific consensus on the issue. People who had already believed that human actions cause global warming supported their position even more strongly after learning about the scientific consensus. Presenting these participants with factual information ended up further polarizing their views, strengthening everyone’s resolve in their initial positions. It was a case of confirmation bias at work: New information consistent with prior beliefs strengthened those beliefs; new information conflicting with existing beliefs led people to discredit the message as a way to hold on to their original position.

Image of a man yelling into a megaphone

Just shouting louder isn’t going to help. Photo by Maritime Union of New Zealand is licensed under CC BY 2.0

 

Overcoming cognitive biases

How can science communicators share their messages in a way that leads people to change their beliefs and actions about important science issues, given our natural cognitive biases?

The first step is to acknowledge that every audience has preexisting beliefs about the world. Expect those beliefs to color the way they receive your message. Anticipate that people will accept information that is consistent with their prior beliefs and discredit information that is not.

Then, focus on framing. No message can contain all the information available on a topic, so any communication will emphasize some aspects while downplaying others. While it’s unhelpful to cherry-pick and present only evidence in your favor – which can backfire anyway – it is helpful to focus on what an audience cares about.

For example, researchers at the University of California point out that the idea of climate change causing rising sea levels may not alarm an inland farmer dealing with drought as much as it does someone living on the coast. Referring to the impact our actions today may have for our grandchildren might be more compelling to those who actually have grandchildren than to those who don’t. By anticipating what an audience believes and what’s important to them, communicators can choose more effective frames for their messages – focusing on the most compelling aspects of the issue for their audience and presenting it in a way the audience can identify with.

In addition to the ideas expressed in a frame, the specific words used matter. Psychologists Amos Tversky and Daniel Kahneman first showed that when numerical information is presented in different ways, people think about it differently. Here’s an example from their 1981 study:

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimate of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is ⅓ probability that 600 people will be saved, and ⅔ probability that no people will be saved.

Both programs have an expected value of 200 lives saved. But 72 percent of participants chose Program A. We reason about mathematically equivalent options differently when they’re framed differently: Our intuitions are often not consistent with probabilities and other math concepts.
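
To see the equivalence at a glance, here is a quick arithmetic check (my own illustration, using only the numbers quoted above):

```python
# Expected number of lives saved under each program in the 1981 scenario.
program_a = 200                        # 200 people saved for certain
program_b = (1/3) * 600 + (2/3) * 0    # one-in-three chance of saving all 600, otherwise none

print(program_a, program_b)  # 200 200.0 -- identical on average, yet framed very differently
```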

Metaphors can also act as linguistic frames. Psychologists Paul Thibodeau and Lera Boroditsky found that people who read that crime is a beast proposed different solutions than those who read that crime is a virus – even if they had no memory of reading the metaphor. The metaphors guided people’s reasoning, encouraging them to transfer solutions they’d propose for real beasts (cage them) or viruses (find the source) to dealing with crime (harsher law enforcement or more social programs).

The words we use to package our ideas can drastically influence how people think about those ideas.

 

What’s next?

We have a lot to learn. Quantitative research on the efficacy of science communication strategies is in its infancy but becoming an increasing priority. As we continue to untangle more about what works and why, it’s important for science communicators to be conscious of the biases they and their audiences bring to their exchanges and the frames they select to share their messages.

____________________

Rose Hendricks is a Ph.D. Candidate in Cognitive Science, University of California San Diego. Her essay was originally published in The Conversation.

Creative Commons License

Getting a Scientific Message Across Means Taking Human Nature into Account by Rose Hendricks is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

How Technology Is Reinventing Education

by Carrie Spector

Click here to read the article: “How Technology Is Reinventing Education”

How Trolls Are Ruining the Internet

by Time Staff

Click here to read the article: “How Trolls Are Ruining the Internet” 

Is Google Making Us Stupid?

by Nicholas Carr

Click here to read the article: “Is Google Making Us Stupid?”

Social Media and Mental Health

by Lawrence Robinson and Melinda Smith

Click here to read the article: “Social Media and Mental Health” 

When Your Mother Is a Cyberbully

by Justin W. Patchin

Click here to read the article: “When Your Mother Is a Cyberbully”

Why Good People Turn Bad Online

By Gaia Vince

#technology #analysis #causalargument #systemanalysis #sharedvalues #research

Link to audio

Clipart image of a troll toy with a red no symbol over it
“No Trolls wanted here” by John Pons is licensed under CC BY-SA 3.0

On the evening of 17 February 2018, Professor Mary Beard posted on Twitter a photograph of herself crying. The eminent University of Cambridge classicist, who has almost 200,000 Twitter followers, was distraught after receiving a storm of abuse online. This was the reaction to a comment she had made about Haiti. She also tweeted: “I speak from the heart (and of course I may be wrong). But the crap I get in response just isn’t on; really it isn’t.”

In the days that followed, Beard received support from several high-profile people. Greg Jenner, a fellow celebrity historian, tweeted about his own experience of a Twitterstorm: “I’ll always remember how traumatic it was to suddenly be hated by strangers. Regardless of morality – I may have been wrong or right in my opinion – I was amazed (later, when I recovered) at how psychologically destabilizing it was to me.”

Those tweeting support for Beard – irrespective of whether they agreed with her initial tweet that had triggered the abusive responses – were themselves then targeted. And when one of Beard’s critics, fellow Cambridge academic Priyamvada Gopal, a woman of Asian heritage, set out her response to Beard’s original tweet in an online article, she received her own torrent of abuse.

There is overwhelming evidence that women and members of ethnic minority groups are disproportionately the target of Twitter abuse. Where these identity markers intersect, the bullying can become particularly intense, as experienced by black female MP Diane Abbott, who alone received nearly half of all the abusive tweets sent to female MPs during the run-up to the 2017 UK general election. Black and Asian female MPs received on average 35 per cent more abusive tweets than their white female colleagues even when Abbott was excluded from the total.

The constant barrage of abuse, including death threats and threats of sexual violence, is silencing people, pushing them off online platforms and further reducing the diversity of online voices and opinion. And it shows no sign of abating. A survey last year found that 41 per cent of American adults had personally experienced online abuse, with almost half of them receiving severe forms of harassment, including physical threats and stalking, and 70 per cent of women described online harassment as a “major problem”.

The business models of social media platforms, such as YouTube and Facebook, promote content that is more likely to get a response from other users, because more engagement means better opportunities for advertising. But this favors divisive and strongly emotive or extreme content, which can in turn nurture online “bubbles” of groups who reflect and reinforce each other’s opinions, helping propel the spread of more extreme content and providing a niche for “fake news”. In recent months, researchers have revealed many ways that various vested interests, including Russian operatives, have sought to manipulate public opinion by infiltrating social media bubbles.

Image of a graffiti noose
Photo by sensesmaybenumbed is licensed under CC BY-NC-SA 2.0

Our human ability to communicate ideas across networks of people enabled us to build the modern world. The internet offers unparalleled promise of cooperation and communication between all of humanity. But instead of embracing a massive extension of our social circles online, we seem to be reverting to tribalism and conflict, and belief in the potential of the internet to bring humanity together in a glorious collaborating network now begins to seem naive. While we generally conduct our real-life interactions with strangers politely and respectfully, online we can be horrible. How can we relearn the collaborative techniques that enabled us to find common ground and thrive as a species?

“Don’t overthink it, just press the button!”

I click an amount, impoverishing myself in an instant, and quickly move on to the next question, aware that we’re all playing against the clock. My teammates are far away and unknown to me. I have no idea if we’re all in it together or whether I’m being played for a fool, but I press on, knowing that the others are depending on me.

I’m playing in a so-called public goods game at Yale University’s Human Cooperation Lab. The researchers here use it as a tool to help understand how and why we cooperate, and whether we can enhance our prosocial behavior.

Over the years, scientists have proposed various theories about why humans cooperate so well that we form strong societies. The evolutionary roots of our general niceness, most researchers now believe, can be found in the individual survival advantage humans experience when we cooperate as a group. I’ve come to New Haven, Connecticut, in a snowy February, to visit a cluster of labs where researchers are using experiments to explore further our extraordinary impulse to be nice to others even at our own expense.

The game I’m playing, on Amazon’s Mechanical Turk online platform, is one of the lab’s ongoing experiments. I’m in a team of four people in different locations, and each of us is given the same amount of money to play with. We are asked to choose how much money we will contribute to a group pot, on the understanding that this pot will then be doubled and split equally among us.

This sort of social dilemma, like all cooperation, relies on a certain level of trust that the others in your group will be nice. If everybody in the group contributes all of their money, all the money gets doubled, redistributed four ways, and everyone doubles their money. Win–win!

“But if you think about it from the perspective of an individual,” says lab director David Rand, “for each dollar that you contribute, it gets doubled to two dollars and then split four ways – which means each person only gets 50 cents back for the dollar they contributed.”

Even though everyone is better off collectively by contributing to a group project that no one could manage alone – in real life, this could be paying towards a hospital building, or digging a community irrigation ditch – there is a cost at the individual level. Financially, you make more money by being more selfish.
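
A minimal sketch of the payoff arithmetic Rand describes, under the four-player, doubled-pot rules above (the function and example numbers are my own illustration, not the lab’s code):

```python
def public_goods_payoffs(contributions, endowment=1.0, multiplier=2.0):
    """Each player starts with an endowment, contributes some of it to a shared
    pot, the pot is multiplied, and the proceeds are split equally."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone contributes everything: the group does best and each player doubles their money.
print(public_goods_payoffs([1.0, 1.0, 1.0, 1.0]))  # [2.0, 2.0, 2.0, 2.0]

# One free-rider among three full contributors: the free-rider ends up ahead.
print(public_goods_payoffs([0.0, 1.0, 1.0, 1.0]))  # [2.5, 1.5, 1.5, 1.5]
```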

Rand’s team has run this game with thousands of players. Half of them are asked, as I was, to decide their contribution rapidly – within 10 seconds – whereas the other half are asked to take their time and carefully consider their decision. It turns out that when people go with their gut, they are much more generous than when they spend time deliberating.

“There is a lot of evidence that cooperation is a central feature of human evolution,” says Rand. Individuals benefit, and are more likely to survive, by cooperating with the group. And being allowed to stay in the group and benefit from it is reliant on our reputation for behaving cooperatively.

“In the small-scale societies that our ancestors were living in, all our interactions were with people that you were going to see again and interact with in the immediate future,” Rand says. That kept in check any temptation to act aggressively or take advantage and free-ride off other people’s contributions. “It makes sense, in a self-interested way, to be cooperative.”

Cooperation breeds more cooperation in a mutually beneficial cycle. Rather than work out every time whether it’s in our long-term interests to be nice, it’s more efficient and less effort to have the basic rule: be nice to other people. That’s why our unthinking response in the experiment is a generous one.

Throughout our lives, we learn from the society around us how cooperative to be. But our learned behaviors can also change quickly.

Those in Rand’s experiment who play the quickfire round are mostly generous and receive generous dividends, reinforcing their generous outlook, whereas those who deliberate over their decisions are more selfish, resulting in a meagre group pot and reinforcing the idea that it doesn’t pay to rely on the group. So, in a further experiment, Rand gave some money to people who had played a round of the game. They were then asked how much they wanted to give to an anonymous stranger. This time, there was no incentive to give; they would be acting entirely charitably.

It turned out there were big differences. The people who had got used to cooperating in the first stage gave twice as much money in the second stage as the people who had got used to being selfish did. “So we’re affecting people’s internal lives and behavior,” Rand says. “The way they behave even when no one’s watching and when there’s no institution in place to punish or reward them.”

Rand’s team have tested how people in different countries play the game, to see how the strength of social institutions – such as government, family, education and legal systems – influences behavior. In Kenya, where public sector corruption is high, players initially gave less generously to the stranger than players in the US, which has less corruption. This suggests that people who can rely on relatively fair social institutions behave in a more public-spirited way; those whose institutions are less reliable are more protectionist. However, after playing just one round of the cooperation-promoting version of the public goods game, the Kenyans’ generosity equaled the Americans’. And it cut both ways: Americans who were trained to be selfish gave a lot less.

So is there something about online social media culture that makes some people behave meanly? Unlike ancient hunter-gatherer societies, which rely on cooperation and sharing to survive and often have rules for when to offer food to whom across their social network, social media have weak institutions. They offer physical distance, relative anonymity and little reputational or punitive risk for bad behavior: if you’re mean, no one you know is going to see.

I trudge a couple of blocks through driving snow to find Molly Crockett’s Psychology Lab, where researchers are investigating moral decision-making in society. One area they focus on is how social emotions are transformed online, in particular moral outrage. Brain-imaging studies show that when people act on their moral outrage, their brain’s reward center is activated – they feel good about it. This reinforces their behavior, so they are more likely to intervene in a similar way again. So, if they see somebody acting in a way that violates a social norm, by allowing their dog to foul a playground, for instance, and they publicly confront the perpetrator about it, they feel good afterwards. And while challenging a violator of your community’s social norms has its risks – you may get attacked – it also boosts your reputation.

In our relatively peaceful lives, we are rarely faced with outrageous behavior, so we rarely see moral outrage expressed. Open up Twitter or Facebook and you get a very different picture. Recent research shows that messages with both moral and emotional words are more likely to spread on social media – each moral or emotional word in a tweet increases the likelihood of it being retweeted by 20 per cent.

“Content that triggers outrage and that expresses outrage is much more likely to be shared,” Crockett says. What we’ve created online is “an ecosystem that selects for the most outrageous content, paired with a platform where it’s easier than ever before to express outrage”.

Unlike in the offline world, there is no personal risk in confronting and exposing someone. It only takes a few clicks of a button and you don’t have to be physically nearby, so there is a lot more outrage expressed online. And it feeds itself. “If you punish somebody for violating a norm, that makes you seem more trustworthy to others, so you can broadcast your moral character by expressing outrage and punishing social norm violations,” Crockett says. “And people believe that they are spreading good by expressing outrage – that it comes from a place of morality and righteousness.

“When you go from offline – where you might boost your reputation for whoever happens to be standing around at the moment – to online, where you broadcast it to your entire social network, then that dramatically amplifies the personal rewards of expressing outrage.”

This is compounded by the feedback people get on social media, in the form of likes and retweets and so on. “Our hypothesis is that the design of these platforms could make expressing outrage into a habit, and a habit is something that’s done without regard to its consequences – it’s insensitive to what happens next, it’s just a blind response to a stimulus,” Crockett explains.

“I think it’s worth having a conversation as a society as to whether we want our morality to be under the control of algorithms whose purpose is to make money for giant tech companies,” she adds. “I think we would all like to believe and feel that our moral emotions, thoughts and behaviors are intentional and not knee-jerk reactions to whatever is placed in front of us that our smartphone designer thinks will bring them the most profit.”

On the upside, the lower costs of expressing outrage online have allowed marginalized, less-empowered groups to promote causes that have traditionally been harder to advance. Moral outrage on social media played an important role in focusing attention on the sexual abuse of women by high-status men. And in February 2018, Florida teens railing on social media against yet another high-school shooting in their state helped to shift public opinion, as well as shaming a number of big corporations into dropping their discount schemes for National Rifle Association members.

“I think that there must be ways to maintain the benefits of the online world,” says Crockett, “while thinking more carefully about redesigning these interactions to do away with some of the more costly bits.”

Someone who’s thought a great deal about the design of our interactions in social networks is Nicholas Christakis, director of Yale’s Human Nature Lab, located just a few more snowy blocks away. His team studies how our position in a social network influences our behavior, and even how certain influential individuals can dramatically alter the culture of a whole network.

The team is exploring ways to identify these individuals and enlist them in public health programmes that could benefit the community. In Honduras, they are using this approach to influence vaccination enrolment and maternal care, for example. Online, such people have the potential to turn a bullying culture into a supportive one.

Corporations already use a crude system of identifying so-called Instagram influencers to advertise their brands for them. But Christakis is looking not just at how popular an individual is, but also at their position in the network and the shape of that network. In some networks, like a small isolated village, everyone is closely connected and you’re likely to know everyone at a party; in a city, by contrast, people may live more densely packed together, but you are less likely to know everyone at a party there. How thoroughly interconnected a network is affects how behaviors and information spread around it, he explains.

“If you take carbon atoms and you assemble them one way, they become graphite, which is soft and dark. Take the same carbon atoms and assemble them a different way, and it becomes diamond, which is hard and clear. These properties of hardness and clearness aren’t properties of the carbon atoms – they’re properties of the collection of carbon atoms and depend on how you connect the carbon atoms to each other,” he says. “And it’s the same with human groups.”

Christakis has designed software to explore this by creating temporary artificial societies online. “We drop people in and then we let them interact with each other and see how they play a public goods game, for example, to assess how kind they are to other people.”

Then he manipulates the network. “By engineering their interactions one way, I can make them really sweet to each other, work well together, and they are healthy and happy and they cooperate. Or you take the same people and connect them a different way and they’re mean jerks to each other and they don’t cooperate and they don’t share information and they are not kind to each other.”

In one experiment, he randomly assigned strangers to play the public goods game with each other. In the beginning, he says, about two-thirds of people were cooperative. “But some of the people they interact with will take advantage of them and, because their only option is either to be kind and cooperative or to be a defector, they choose to defect because they’re stuck with these people taking advantage of them. And by the end of the experiment everyone is a jerk to everyone else.”

Christakis turned this around simply by giving each person a little bit of control over who they were connected to after each round. “They had to make two decisions: am I kind to my neighbors or am I not; and do I stick with this neighbor or do I not.” The only thing each player knew about their neighbors was whether each had cooperated or defected in the round before. “What we were able to show is that people cut ties to defectors and form ties to cooperators, and the network rewired itself and converted itself into a diamond-like structure instead of a graphite-like structure.” In other words, a cooperative prosocial structure instead of an uncooperative structure.
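
To make the rewiring idea concrete, here is a rough sketch of one such update step, under my own simplifying assumptions (the data structures and probabilities are illustrative placeholders, not the published experimental design):

```python
import random

def rewire(network, last_action, keep_defector=0.3, add_cooperator=0.3):
    """One rewiring round: players tend to cut ties to defectors and form new
    ties to cooperators. `network` maps each player to a set of neighbors;
    `last_action` maps each player to 'C' (cooperated) or 'D' (defected)."""
    players = list(network)
    for p in players:
        for q in list(network[p]):
            # Cut most ties to neighbors who defected last round.
            if last_action[q] == 'D' and random.random() > keep_defector:
                network[p].discard(q)
                network[q].discard(p)
        # Sometimes form a new tie to a cooperator we aren't yet connected to.
        candidates = [q for q in players
                      if q != p and q not in network[p] and last_action[q] == 'C']
        if candidates and random.random() < add_cooperator:
            q = random.choice(candidates)
            network[p].add(q)
            network[q].add(p)
    return network

# Toy example: player 1 defected last round, the rest cooperated.
net = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
print(rewire(net, {0: 'C', 1: 'D', 2: 'C', 3: 'C'}))
```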

In an attempt to generate more cooperative online communities, Christakis’s team have started adding bots to their temporary societies. He takes me over to a laptop and sets me up on a different game. In this game, anonymous players have to work together as a team to solve a dilemma that tilers will be familiar with: each of us has to pick from one of three colors, but the colors of players directly connected to each other must be different. If we solve the puzzle within a time limit, we all get a share of the prize money; if we fail, no one gets anything. I’m playing with at least 30 other people. None of us can see the whole network of connections, only the people we are directly connected to – nevertheless, we have to cooperate to win.
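
The puzzle is, in effect, a distributed graph-coloring problem: the group succeeds only if no two directly connected players end up showing the same color. Here is a tiny sketch of that win condition, using a made-up toy network rather than the actual one I played on:

```python
def is_solved(edges, colors):
    """The team wins when every pair of connected players has different colors."""
    return all(colors[a] != colors[b] for a, b in edges)

# Toy network: player 0 is connected to players 1 and 2, who are each connected to player 3.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(is_solved(edges, {0: "red", 1: "green", 2: "blue", 3: "red"}))   # True: no conflicts
print(is_solved(edges, {0: "red", 1: "red", 2: "blue", 3: "green"}))   # False: players 0 and 1 clash
```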

I’m connected to two neighbors, whose colors are green and blue, so I pick red. My left neighbor then changes to red so I quickly change to blue. The game continues and I become increasingly tense, cursing my slow reaction times. I frequently have to switch my color, responding to unseen changes elsewhere in the network, which send a cascade of changes along the connections. Time’s up before we solve the puzzle, prompting irate responses in the game’s comments box from remote players condemning everyone else’s stupidity. Personally, I’m relieved it’s over and there’s no longer anyone depending on my cack-handed gaming skills to earn money.

Christakis tells me that some of the networks are so complex that the puzzle is impossible to solve in the timeframe. My relief is short-lived, however: the one I played was solvable. He rewinds the game, revealing for the first time the whole network to me. I see now that I was on a lower branch off the main hub of the network. Some of the players were connected to just one other person, but most were connected to three or more. Thousands of people from around the world play these games on Amazon Mechanical Turk, drawn by the small fee they earn per round. But as I’m watching the game I just played unfold, Christakis reveals that three of these players are actually planted bots. “We call them ‘dumb AI’,” he says.

His team is not interested in inventing super-smart AI to replace human cognition. Instead, the plan is to infiltrate a population of smart humans with dumb-bots to help the humans help themselves.

“We wanted to see if we could use the dumb-bots to get the people unstuck so they can cooperate and coordinate a little bit more – so that their native capacity to perform well can be revealed by a little assistance,” Christakis says. He found that if the bots played perfectly, that didn’t help the humans. But if the bots made some mistakes, they unlocked the potential of the group to find a solution.

“Some of these bots made counter-intuitive choices. Even though their neighbors all had green and they should have picked orange, instead they also picked green.” When they did that, it allowed one of the green neighbors to pick orange, “which unlocks the next guy over, he can pick a different color and, wow, now we solve the problem”. Without the bot, those human players would probably all have stuck with green, not realizing that was the problem. “Increasing the conflicts temporarily allows their neighbors to make better choices.”
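
One way to picture such a “dumb AI” is as a bot that usually avoids conflicts with its neighbors but occasionally picks a clashing color anyway; the 10 per cent noise rate below is an illustrative placeholder rather than the study’s exact setting:

```python
import random

def bot_choice(neighbor_colors, palette=("red", "green", "blue"), noise=0.1):
    """Mostly pick a color that avoids conflicts with neighbors, but with
    probability `noise` pick at random even if it creates a conflict."""
    if random.random() < noise:
        return random.choice(palette)
    free = [c for c in palette if c not in neighbor_colors]
    return random.choice(free) if free else random.choice(palette)

print(bot_choice({"green", "blue"}))  # usually "red", occasionally a deliberate clash
```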

By adding a little noise into the system, the bots helped the network to function more efficiently. Perhaps a version of this model could involve infiltrating the newsfeeds of partisan people with occasional items offering a different perspective, helping to shift people out of their social media comfort-bubbles and allow society as a whole to cooperate more.

Much antisocial behavior online stems from the anonymity of internet interactions – the reputational costs of being mean are much lower than offline. Here, bots may also offer a solution. One experiment found that the level of racist abuse tweeted at black users could be dramatically slashed by using bot accounts with white profile images to respond to racist tweeters. A typical bot response to a racist tweet would be: “Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.” Simply cultivating a little empathy in such tweeters reduced their racist tweets almost to zero for weeks afterwards.

Another way of addressing the low reputational cost for bad behavior online is to engineer in some form of social punishment. One game, League of Legends, did that by introducing a “Tribunal” feature, in which negative play is punished by other players. Its developer reported that 280,000 players were “reformed” in one year, meaning that after being punished by the Tribunal they had changed their behavior and then achieved a positive standing in the community. Developers could also build in social rewards for good behavior, encouraging more cooperative elements that help build relationships.

Researchers are already starting to learn how to predict when an exchange is about to turn bad – the moment at which it could benefit from pre-emptive intervention. “You might think that there is a minority of sociopaths online, which we call trolls, who are doing all this harm,” says Cristian Danescu-Niculescu-Mizil, at Cornell University’s Department of Information Science. “What we actually find in our work is that ordinary people, just like you and me, can engage in such antisocial behavior. For a specific period of time, you can actually become a troll. And that’s surprising.”

It’s also alarming. I mentally flick back through my own recent tweets, hoping I haven’t veered into bullying in some awkward attempt to appear funny or cool to my online followers. After all, it can be very tempting to be abusive to someone far away, who you don’t know, if you think it will impress your social group.

Danescu-Niculescu-Mizil has been investigating the comments sections below online articles. He identifies two main triggers for trolling: the context of the exchange – how other users are behaving – and your mood. “If you’re having a bad day, or if it happens to be Monday, for example, you’re much more likely to troll in the same situation,” he says. “You’re nicer on a Saturday morning.”

After collecting data, including from people who had engaged in trolling behavior in the past, Danescu-Niculescu-Mizil built an algorithm that predicts with 80 per cent accuracy when someone is about to become abusive online. This provides an opportunity to, for example, introduce a delay in how fast they can post their response. If people have to think twice before they write something, that improves the context of the exchange for everyone: you’re less likely to witness people misbehaving, and so less likely to misbehave yourself.
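
Here is a hypothetical sketch of how such a prediction might gate a comment form; the classifier interface, threshold, and delay are my own placeholders, not details from the Cornell work:

```python
import time

def publish(text):
    # Placeholder for the platform's actual publish call.
    print("Posted:", text)

def submit_comment(text, context, classifier, threshold=0.8, cooldown_seconds=60):
    """Hold a comment briefly when the model rates it as likely to be abusive,
    giving the author a chance to reconsider before it goes live."""
    risk = classifier(text, context)  # estimated probability that the comment is abusive
    if risk >= threshold:
        print("Your comment will post in a minute. Take a moment to reread it.")
        time.sleep(cooldown_seconds)
    publish(text)
```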

The good news is that, in spite of the horrible behavior many of us have experienced online, the majority of interactions are nice and cooperative. Justified moral outrage is usefully employed in challenging hateful tweets. A recent British study looking at anti-Semitism on Twitter found that posts challenging anti-Semitic tweets are shared far more widely than the anti-Semitic tweets themselves. Most hateful posts were ignored or only shared within a small echo chamber of similar accounts. Perhaps we’re already starting to do the work of the bots ourselves.

As Danescu-Niculescu-Mizil points out, we’ve had thousands of years to hone our person-to-person interactions, but only 20 years of social media. “Offline, we have all these cues from facial expressions to body language to pitch… whereas online we discuss things only through text. I think we shouldn’t be surprised that we’re having so much difficulty in finding the right way to discuss and cooperate online.”

As our online behavior develops, we may well introduce subtle signals, digital equivalents of facial cues, to help smooth online discussions. In the meantime, the advice for dealing with online abuse is to stay calm and remember that it is not your fault. Don’t retaliate; block and ignore bullies or, if you feel up to it, tell them to stop. Talk to family or friends about what’s happening and ask them to help you. Take screenshots and report online harassment to the social media service where it’s happening, and if it includes physical threats, report it to the police.

If social media as we know it is going to survive, the companies running these platforms are going to have to keep steering their algorithms, perhaps informed by behavioral science, to encourage cooperation rather than division, positive online experiences rather than abuse. As users, we too may well learn to adapt to this new communication environment so that civil and productive interaction remains the norm online as it is offline.

“I’m optimistic,” Danescu-Niculescu-Mizil says. “This is just a different game and we have to evolve.”

References

The New Statesman tracked abusive tweets sent to women MPs in the run-up to the 2017 UK general election.

A 2017 Pew Research Center survey showed that 41 percent of Americans have experienced online harassment.

Researchers at University College London investigated what hunter-gatherers can tell us about social networks.

Research published in PNAS showed that emotion influences how content spreads online.

In 2016, Ars Technica reported a study showing how Twitter bots can reduce racist slurs.

Community Security Trust, a charity that protects British Jews from anti-Semitism, published a report about anti-Semitic content on Twitter in 2018.

____________________

Gaia Vince is a journalist, broadcaster and author specialising in science, the environment and social issues. Her article first appeared in Mosaic.

Creative Commons License

Why Good People Turn Bad Online by Gaia Vince is licensed under a Creative Commons Attribution 4.0 International License.

Attributions

88 Open Essays – A Reader for Students of Composition & Rhetoric, edited and compiled by Sarah Wangler & Tina Ulrich is licensed under CC BY 4.0

License

Icon for the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

ENG 101 & 102 Rhetoric Copyright © 2024 by Central Arizona College; Shelley Decker; Kolette Draegan; Tatiana Keeling; Heather Moulton; and Lynn Gelfand is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
