CRES [NOT] ON FACEBOOK
The EU takes action--
A worthy attempt to mitigate Facebook evils
This is a long and valuable article explaining why the US went rapidly down the road to hell right after 2009, with changes in Facebook features and the like. Unless we deal with this danger, we are doomed as a democracy.
Belshazzar's Feast (Daniel 5) -- with Facebook
Zuckerberg . . . is an enemy of the state, and I mean the United States of America. He doesn’t give a shit about us, the United States. He knows he can transcend it. He can get away to any place. And so it’s just about filthy lucre, that’s it. . . . Because these people — and Sheryl is a complicit . . . .
“The truth is that these companies won’t fundamentally change because . . .”
--Sacha Baron Cohen

Here is a link to the posts page on the CRES Facebook site. Friends have prevailed upon me to allow them to post good stuff to Facebook because they are "more realistic" than I am. I think Facebook is evil, but I know it also does a lot of good. I am not letting my own judgment override the opinion of so many.
But below I am retaining analyses about the dangers of Facebook.
I also question Twitter and will not use it.
FACEBOOK DANGER updates:
Facebook and Democracy
What's to be done about this evil? 211005 WaPo
NYTimes: Facebook: Delay, Deny and Deflect
LONDON — As the upstart voter-profiling company Cambridge Analytica prepared to wade into the 2014 American midterm elections, it had a problem.
The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic.
Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.”
“They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”
Details of Cambridge’s acquisition and use of Facebook data have surfaced in several accounts since the business began working on the 2016 campaign, setting off a furious debate about the merits of the firm’s so-called psychographic modeling techniques.
But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.
Cambridge paid to acquire the personal information through an outside researcher who, Facebook says, claimed to be collecting it for academic purposes.
During a week of inquiries from The Times, Facebook downplayed the scope of the leak and questioned whether any of the data still remained out of its control. But on Friday, the company posted a statement expressing alarm and promising to take action.
“This was a scam — and a fraud,” Paul Grewal, a vice president and deputy general counsel at the social network, said in a statement to The Times earlier on Friday. He added that the company was suspending Cambridge Analytica, Mr. Wylie and the researcher, Aleksandr Kogan, a Russian-American academic, from Facebook. “We will take whatever steps are required to see that the data in question is deleted once and for all — and take action against all offending parties,” Mr. Grewal said.
Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.
In Britain, Cambridge Analytica is facing intertwined investigations by Parliament and government regulators into allegations that it performed illegal work on the “Brexit” campaign. The country has strict privacy laws, and its information commissioner announced on Saturday that she was looking into whether the Facebook data was “illegally acquired and used.”
In the United States, Mr. Mercer’s daughter, Rebekah, a board member, Mr. Bannon and Mr. Nix received warnings from their lawyer that it was illegal to employ foreigners in political campaigns, according to company documents and former employees.
Congressional investigators have questioned Mr. Nix about the company’s role in the Trump campaign. And the Justice Department’s special counsel, Robert S. Mueller III, has demanded the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election.
While the substance of Mr. Mueller’s interest is a closely guarded secret, documents viewed by The Times indicate that the firm’s British affiliate claims to have worked in Russia and Ukraine. And the WikiLeaks founder, Julian Assange, disclosed in October that Mr. Nix had reached out to him during the campaign in hopes of obtaining private emails belonging to Mr. Trump’s Democratic opponent, Hillary Clinton.
The documents also raise new questions about Facebook, which is already grappling with intense criticism over the spread of Russian propaganda and fake news. The data Cambridge collected from profiles, a portion of which was viewed by The Times, included details on users’ identities, friend networks and “likes.” Only a tiny fraction of the users had agreed to release their information to a third party.
“Protecting people’s information is at the heart of everything we do,” Mr. Grewal said. “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”
Still, he added, “it’s a serious abuse of our rules.”
Reading Voters’ Minds
The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate.
Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University.
The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said.
Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth.
Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr. Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and Mr. Mercer declined to comment.
Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward.
The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win.
When Mr. Wylie’s colleagues failed to produce a memo explaining their work to Mr. Neugebauer, Mr. Nix castigated them over email.
“ITS 2 PAGES!! 4 hours work max (or an hour each). What have you all been doing??” he wrote.
Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior.
But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages.
Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed.
When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records.
All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.”
He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.
[Image: An email from Dr. Kogan to Mr. Wylie describing traits that could be predicted.]
Mr. Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers.
“We wanted as much as we could get,” he acknowledged. “Where it came from, who said we could have it — we weren’t really asking.”
Mr. Nix tells a different story. Appearing before a parliamentary committee last month, he described Dr. Kogan’s contributions as “fruitless.”
An International Effort
Just as Dr. Kogan’s efforts were getting underway, Mr. Mercer agreed to invest $15 million in a joint venture with SCL’s elections division. The partners devised a convoluted corporate structure, forming a new American company, owned almost entirely by Mr. Mercer, with a license to the psychographics platform developed by Mr. Wylie’s team, according to company documents. Mr. Bannon, who became a board member and investor, chose the name: Cambridge Analytica.
The firm was effectively a shell. According to the documents and former employees, any contracts won by Cambridge, originally incorporated in Delaware, would be serviced by London-based SCL and overseen by Mr. Nix, a British citizen who held dual appointments at Cambridge Analytica and SCL. Most SCL employees and contractors were Canadian, like Mr. Wylie, or European.
But in July 2014, an American election lawyer advising the company, Laurence Levy, warned that the arrangement could violate laws limiting the involvement of foreign nationals in American elections.
In a memo to Mr. Bannon, Ms. Mercer and Mr. Nix, the lawyer, then at the firm Bracewell & Giuliani, warned that Mr. Nix would have to recuse himself “from substantive management” of any clients involved in United States elections. The data firm would also have to find American citizens or green card holders, Mr. Levy wrote, “to manage the work and decision making functions, relative to campaign messaging and expenditures.”
In summer and fall 2014, Cambridge Analytica dived into the American midterm elections, mobilizing SCL contractors and employees around the country. Few Americans were involved in the work, which included polling, focus groups and message development for the John Bolton Super PAC, conservative groups in Colorado and the campaign of Senator Thom Tillis, the North Carolina Republican.
Cambridge Analytica, in its statement to The Times, said that all “personnel in strategic roles were U.S. nationals or green card holders.” Mr. Nix “never had any strategic or operational role” in an American election campaign, the company said.
Whether the company’s American ventures violated election laws would depend on foreign employees’ roles in each campaign, and on whether their work counted as strategic advice under Federal Election Commission rules.
Cambridge Analytica appears to have exhibited a similar pattern in the 2016 election cycle, when the company worked for the campaigns of Mr. Cruz and then Mr. Trump. While Cambridge hired more Americans to work on the races that year, most of its data scientists were citizens of the United Kingdom or other European countries, according to two former employees.
Under the guidance of Brad Parscale, Mr. Trump’s digital director in 2016 and now the campaign manager for his 2020 re-election effort, Cambridge performed a variety of services, former campaign officials said. That included designing target audiences for digital ads and fund-raising appeals, modeling voter turnout, buying $5 million in television ads and determining where Mr. Trump should travel to best drum up support.
Cambridge executives have offered conflicting accounts about the use of psychographic data on the campaign. Mr. Nix has said that the firm’s profiles helped shape Mr. Trump’s strategy — statements disputed by other campaign officials — but also that Cambridge did not have enough time to comprehensively model Trump voters.
In a BBC interview last December, Mr. Nix said that the Trump efforts drew on “legacy psychographics” built for the Cruz campaign.
After the Leak
By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored.
Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.”
Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it.
Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by The Times. “It cannot be used legitimately in the future and must be deleted immediately.”
Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.”
But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained.
While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted.
Today, as Cambridge Analytica seeks to expand its business in the United States and overseas, Mr. Nix has mentioned some questionable practices. This January, in undercover footage filmed by Channel 4 News in Britain and viewed by The Times, he boasted of employing front companies and former spies on behalf of political clients around the world, and even suggested ways to entrap politicians in compromising situations.
All the scrutiny appears to have damaged Cambridge Analytica’s political business. No American campaigns or “super PACs” have yet reported paying the company for work in the 2018 midterms, and it is unclear whether Cambridge will be asked to join Mr. Trump’s re-election campaign.
In the meantime, Mr. Nix is seeking to take psychographics to the commercial advertising market. He has repositioned himself as a guru for the digital ad age — a “Math Man,” he puts it. In the United States last year, a former employee said, Cambridge pitched Mercedes-Benz, MetLife and the brewer AB InBev, but has not signed them on.
Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr reported from London. Gabriel J.X. Dance contributed reporting from London, and Danny Hakim from New York.
Facebook and Democracy - 2
The Secret Agenda of a Facebook Quiz
By McKENZIE FUNK
NOV. 19, 2016
Do you panic easily? Do you often feel blue? Do you have a sharp tongue? Do you get chores done right away? Do you believe in the importance of art?
If ever you’ve answered questions like these on one of the free personality quizzes floating around Facebook, you’ll have learned what’s known as your Ocean score: How you rate according to the big five psychological traits of Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. You may also be responsible the next time America is shocked by an election upset.
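Mechanically, scoring one of these "big five" traits is simple. As a rough sketch — not Cambridge Analytica's actual method, and with invented quiz items — a trait score is typically just the average of Likert-scale answers, with "reverse-keyed" items (ones phrased in the opposite direction) flipped first:

```python
# Hedged sketch: scoring one "Ocean" trait from a short 1-5 Likert quiz.
# The items and keying below are invented for illustration; real
# instruments define their own item lists and reverse-keyed entries.

def score_trait(responses, reverse_keyed):
    """Average 1-5 Likert responses into a single trait score.

    responses: list of ints in 1..5, one per quiz item.
    reverse_keyed: set of item indices whose scale is flipped
    (e.g. "I am relaxed most of the time" counts *against* Neuroticism).
    """
    adjusted = [
        (6 - r) if i in reverse_keyed else r  # flip a 1-5 answer: 1<->5, 2<->4
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# Four hypothetical Neuroticism items: "Do you panic easily?",
# "Do you often feel blue?", plus two reverse-keyed calm/contented items.
neuroticism = score_trait([5, 4, 2, 1], reverse_keyed={2, 3})
print(neuroticism)  # (5 + 4 + 4 + 5) / 4 = 4.5
```

The point is that the quiz itself is trivial; the value to a data firm lies not in the score but in the Facebook profile and friend network attached to whoever took it.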
For several years, a data firm eventually hired by the Trump campaign, Cambridge Analytica, has been using Facebook as a tool to build psychological profiles that represent some 230 million adult Americans. A spinoff of a British consulting company and sometime-defense contractor known for its counterterrorism “psy ops” work in Afghanistan, the firm does so by seeding the social network with personality quizzes. Respondents — by now hundreds of thousands of us, mostly female and mostly young but enough male and older for the firm to make inferences about others with similar behaviors and demographics — get a free look at their Ocean scores. Cambridge Analytica also gets a look at their scores and, thanks to Facebook, gains access to their profiles and real names.
Cambridge Analytica worked on the “Leave” side of the Brexit campaign. In the United States it takes only Republicans as clients: Senator Ted Cruz in the primaries, Mr. Trump in the general election. Cambridge is reportedly backed by Robert Mercer, a hedge fund billionaire and a major Republican donor; a key board member is Stephen K. Bannon, the head of Breitbart News who became Mr. Trump’s campaign chairman and is set to be his chief strategist in the White House.
In the age of Facebook, it has become far easier for campaigners or marketers to combine our online personas with our offline selves, a process that was once controversial but is now so commonplace that there’s a term for it, “onboarding.” Cambridge Analytica says it has as many as 3,000 to 5,000 data points on each of us, be it voting histories or full-spectrum demographics — age, income, debt, hobbies, criminal histories, purchase histories, religious leanings, health concerns, gun ownership, car ownership, homeownership — from consumer-data giants.
No data point is very informative on its own, but profiling voters, says Cambridge Analytica, is like baking a cake. “It’s the sum of the ingredients,” its chief executive officer, Alexander Nix, told NBC News. Because the United States lacks European-style restrictions on second- or thirdhand use of our data, and because our freedom-of-information laws give data brokers broad access to the intimate records kept by local and state governments, our lives are open books even without social media or personality quizzes.
Ever since the advertising executive Lester Wunderman coined the term “direct marketing” in 1961, the ability to target specific consumers with ads — rather than blanketing the airwaves with mass appeals and hoping the right people will hear them — has been the marketer’s holy grail. What’s new is the efficiency with which individually tailored digital ads can be tested and matched to our personalities. Facebook is the microtargeter’s ultimate weapon.
The explosive growth of Facebook’s ad business has been overshadowed by its increasing role in how we get our news, real or fake. In July, the social network posted record earnings: quarterly sales were up 59 percent from the previous year, and profits almost tripled to $2.06 billion. While active users of Facebook — now 1.71 billion monthly active users — were up 15 percent, the real story was how much each individual user was worth. The company makes $3.82 a year from each global user, up from $2.76 a year ago, and an average of $14.34 per user in the United States, up from $9.30 a year ago. Much of this growth comes from the fact that advertisers not only have an enormous audience in Facebook but an audience they can slice into the tranches they hope to reach.
One recent advertising product on Facebook is the so-called “dark post”: A newsfeed message seen by no one aside from the users being targeted. With the help of Cambridge Analytica, Mr. Trump’s digital team used dark posts to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times.
Imagine the full capability of this kind of “psychographic” advertising. In future Republican campaigns, a pro-gun voter whose Ocean score ranks him high on neuroticism could see storm clouds and a threat: The Democrat wants to take his guns away. A separate pro-gun voter deemed agreeable and introverted might see an ad emphasizing tradition and community values, a father and son hunting together.
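The targeting logic the article imagines can be sketched in a few lines. This is a toy illustration under invented assumptions — the trait names follow the article's Ocean model, but the ad variants, scores, and selection rule are hypothetical, not any campaign's actual system:

```python
# Hedged sketch of the "psychographic" targeting idea described above:
# choose an ad variant for a pro-gun voter based on which Ocean trait
# dominates their profile. All variants and thresholds are invented.

ADS = {
    "neuroticism": "threat framing: 'The Democrat wants to take your guns away'",
    "agreeableness": "community framing: a father and son hunting together",
}

def pick_ad(profile):
    """profile: dict mapping Ocean trait name -> score in [0, 1]."""
    dominant = max(profile, key=profile.get)  # highest-scoring trait wins
    # Fall back to a generic message when no tailored variant exists.
    return ADS.get(dominant, "generic pro-gun message")

voter = {"openness": 0.2, "neuroticism": 0.9, "agreeableness": 0.5}
print(pick_ad(voter))  # the threat-framed variant
```

Even this crude rule, multiplied across thousands of ad variants and millions of profiles, is what makes the "dark post" model described next so hard to scrutinize: each voter sees only the variant selected for them.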
In this election, dark posts were used to try to suppress the African-American vote. According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous “super predator” line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake. Federal Election Commission rules are unclear when it comes to Facebook posts, but even if they do apply and the facts are skewed and the dog whistles loud, the already weakening power of social opprobrium is gone when no one else sees the ad you see — and no one else sees “I’m Donald Trump, and I approved this message.”
While Hillary Clinton spent more than $140 million on television spots, old-media experts scoffed at Trump’s lack of old-media ad buys. Instead, his campaign pumped its money into digital, especially Facebook. One day in August, it flooded the social network with 100,000 ad variations, so-called A/B testing on a biblical scale, surely more ads than could easily be vetted by human eyes for compliance with Facebook’s “community standards.”
Perhaps out of necessity, the Trump team was embracing a new-media lesson: It didn’t have to build everything from scratch. Mark Zuckerberg and others had already built the infrastructure the campaign needed to reach voters directly. When “Trump TV” went live on Facebook before and after the second debate it raked in $9 million in donations in 120 minutes.
In the immediate wake of Mr. Trump’s surprise election, so many polls and experts were so wrong that it became fashionable to declare that big data was dead. But it isn’t, not when its most obvious avatar, Facebook, was so crucial to victory.
On Monday, after a similar announcement from Google, Facebook said it would no longer allow fake-news websites to show ads, on their own sites, from Facebook’s ad network — a half-step that neither blocks what appears on your newsfeed nor affects how advertisers can microtarget users on the social network.
There are surely more changes to come. Mr. Zuckerberg is young, still skeptical that his radiant transparency machine could be anything but a force for good, rightly wary of policing what the world’s diverse citizens say and share on his network, so far mostly dismissive of Facebook’s role in the election. If Mr. Zuckerberg takes seriously his oft-stated commitments to diversity and openness, he must grapple honestly with the fact that Facebook is no longer just a social network. It’s an advertising medium that’s now dangerously easy to weaponize.
A Trump administration is unlikely to enforce transparency about who is targeted by dark posts and other hidden political ads — or to ensure that politicians take meaningful ownership of what the ads say. But Facebook can.
Facebook and Democracy - 3
Who is afraid of special counsel Robert S. Mueller III? President Trump is afraid. So are those who worked on his campaign. But they are not alone.
Over the weekend, Rob Goldman made it clear that some of America’s biggest social media companies are scared of Mueller, too. Goldman is Facebook’s vice president for advertising, and according to his Twitter bio, a “student, seeker, raconteur, burner.” On Friday, he took to Twitter to proclaim his company’s innocence. He was, he wrote, “very excited to see the Mueller indictment today,” since Facebook had “shared Russian ads with Congress, Mueller and the American people.” But “still, there are key facts about the Russian actions that are still not well understood.”
He went on: “Most of the coverage of Russian meddling involves their attempt to effect the outcome of the 2016 US election. I have seen all of the Russian ads and I can say very definitively that swaying the election was *NOT* the main goal.” Instead, he said, the main goal was to “divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well.”
In a short string of tweets, in other words, Facebook’s vice president for advertising twisted and obfuscated the issues almost beyond recognition. For one, the indictment states clearly that the Russians were not merely buying ads: It alleges that they used fake American identities, fraudulently obtained PayPal accounts and fraudulent Social Security numbers to set up Facebook pages for groups such as “Blacktivist,” “Secured Borders” and “Army of Jesus.” They did indeed use those pages to spread fear and hatred, reaching tens and possibly hundreds of millions of people.
They began this project in 2014, well before the election. And when the election began, they were under clear instructions, according to the indictment, to “use any opportunity to criticize Hillary [Clinton] and the rest (except [Bernie] Sanders and Trump—we support them).” By the time the election began in earnest, the attempt to “divide America” was an attempt to elect Trump. They pushed anti-Clinton messages on websites aimed at the far-right fringe and tried to suppress voter turnout on websites aimed at minorities. I’m not sure where Goldman’s idea that “swaying the election was not the main goal” comes from, but it is diametrically opposed to the content of Mueller’s indictment. No wonder Trump tweeted this on Saturday: “The Fake News Media never fails. Hard to ignore the fact from the Vice President of Facebook Ads, Rob Goldman!”
But Goldman is right to be afraid. The social media companies, including Facebook as well as Twitter, YouTube and Reddit, really do bear a part of the responsibility for the growing polarization and bitter partisanship in American life that the Russians, and not only the Russians, sought to exploit. They have not become conduits for Russian propaganda, and not only Russian propaganda, by accident. The Facebook algorithm, by its very nature, is pushing Americans, and everybody else, into ever more partisan echo chambers — and people who read highly partisan material are much more likely to believe false stories.
At the same time, Facebook has declared itself free of responsibility: The company continues to argue that it is not legally liable for material that appears on its platform because it is not a “publisher,” even though it behaves in every other way like a publisher, including by collecting advertising revenue that used to go to publishers. The result is that anyone who seeks to spread false information on Facebook or any other social media site is, in practice, no longer bound by laws on libel or false advertising that were explicitly designed to stop them.
This is not the only problem: There is plenty of evidence now that the very nature of the platforms encourages ever more extreme, ever more offensive material. Studies of YouTube have shown how automated video production, governed by algorithms, not humans, leads inexorably to more violent and more disturbing videos. One recent survey suggests that up to 15 percent of Twitter accounts — some 48 million — may not be human at all. Many think that is a gross underestimate.
Don’t let them off the hook: Until they take responsibility for what appears on their platforms — or until they are held legally liable — the social media companies will continue to fuel the division that Goldman piously denounces. They are not accidental victims of Russia’s information war. They are its tools.
Legally mandated open application programming interfaces for social media platforms . . .would help the public identify what is being delivered by social media algorithms, and thus help protect our democracy.
How Evil Is Tech?
NYTimes | OP-ED COLUMNIST
David Brooks NOV. 20, 2017
Not long ago, tech was the coolest industry. Everybody wanted to work at Google, Facebook and Apple. But over the past year the mood has shifted.
Some now believe tech is like the tobacco industry — corporations that make billions of dollars peddling a destructive addiction. Some believe it is like the N.F.L. — something millions of people love, but which everybody knows leaves a trail of human wreckage in its wake.
Surely the people in tech — who generally want to make the world a better place — don’t want to go down this road. It will be interesting to see if they can take the actions necessary to prevent their companies from becoming social pariahs.
There are three main critiques of big tech.
The first is that it is destroying the young. Social media promises an end to loneliness but actually produces an increase in solitude and an intense awareness of social exclusion. Texting and other technologies give you more control over your social interactions but also lead to thinner interactions and less real engagement with the world.
As Jean Twenge has demonstrated in book and essay, since the spread of the smartphone, teens are much less likely to hang out with friends, they are less likely to date, they are less likely to work.
Eighth graders who spend 10 or more hours a week on social media are 56 percent more likely to say they are unhappy than those who spend less time. Eighth graders who are heavy users of social media increase their risk of depression by 27 percent. Teens who spend three or more hours a day on electronic devices are 35 percent more likely to have a risk factor for suicide, like making a plan for how to do it. Girls, especially hard hit, have experienced a 50 percent rise in depressive symptoms.
The second critique of the tech industry is that it is causing this addiction on purpose, to make money. Tech companies understand what causes dopamine surges in the brain and they lace their products with “hijacking techniques” that lure us in and create “compulsion loops.”
Snapchat has Snapstreak, which rewards friends who snap each other every single day, thus encouraging addictive behavior. News feeds are structured as “bottomless bowls” so that one page view leads down to another and another and so on forever. Most social media sites create irregularly timed rewards; you have to check your device compulsively because you never know when a burst of social affirmation from a Facebook like may come.
The third critique is that Apple, Amazon, Google and Facebook are near monopolies that use their market power to invade the private lives of their users and impose unfair conditions on content creators and smaller competitors. The political assault on this front is gaining steam. The left is attacking tech companies because they are mammoth corporations; the right is attacking them because they are culturally progressive. Tech will have few defenders on the national scene.
Obviously, the smart play would be for the tech industry to get out in front and clean up its own pollution. There are activists like Tristan Harris of Time Well Spent, who is trying to move the tech world in the right directions. There are even some good engineering responses. I use an app called Moment to track and control my phone usage.
The big breakthrough will come when tech executives clearly acknowledge the central truth: Their technologies are extremely useful for the tasks and pleasures that require shallower forms of consciousness, but they often crowd out and destroy the deeper forms of consciousness people need to thrive.
Online is a place for human contact but not intimacy. Online is a place for information but not reflection. It gives you the first stereotypical thought about a person or a situation, but it’s hard to carve out time and space for the third, 15th and 43rd thought.
Online is a place for exploration but discourages cohesion. It grabs control of your attention and scatters it across a vast range of diverting things. But we are happiest when we have brought our lives to a point, when we have focused attention and will on one thing, wholeheartedly with all our might.
Rabbi Abraham Joshua Heschel wrote that we take a break from the distractions of the world not as a rest to give us more strength to dive back in, but as the climax of living. “The seventh day is a palace in time which we build. It is made of soul, joy and reticence,” he said. By cutting off work and technology we enter a different state of consciousness, a different dimension of time and a different atmosphere, a “mine where the spirit’s precious metal can be found.”
Imagine if instead of claiming to offer us the best things in life, tech merely saw itself as providing efficiency devices. Its innovations can save us time on lower-level tasks so we can get offline and there experience the best things in life.
Imagine if tech pitched itself that way. That would be an amazing show of realism and, especially, humility, which these days is the ultimate and most disruptive technology.
Excerpt from a column by Margaret Sullivan:
The Post’s reviewer, Susan Benkelman of the American Press Institute, summed up its takeaway: “The company has put growth and profits above all else, even when it was clear that misinformation and hate speech were circulating across the platform and that the company was violating the privacy of its users.”
Facebook’s strategy: Avert disaster, apologize and keep growing
And then I remembered a few things — like having been present at the Senate hearing in 2018 when Facebook founder Mark Zuckerberg tried to defend the company policies that enabled the political consulting firm Cambridge Analytica, intent on electing Donald Trump as president, to get its hands on data from millions of users.
I remembered how thoroughly incapable many senators were of even understanding the way Facebook works, much less regulating it effectively, and how at one point, 84-year-old Orrin Hatch of Utah asked a question about the company’s business model so basic that Zuckerberg was able to answer it in four words: “Senator, we run ads.”
From 2018: Members of Congress can’t possibly regulate Facebook. They don’t understand it.
. . . .
To the casual user posting wedding photos or recipes, doing research or finding that old friend, using Facebook may seem to be not only fun but free.
But in reality, the price is astonishingly high. And it’s only going in one direction.
211005 Washington Post: Margaret Sullivan
Facebook is harming our society.
Frances Haugen, a Facebook whistleblower, speaking to Scott Pelley on “60 Minutes” on Sunday. Haugen showed the extent of Facebook’s toxicity, Sullivan writes, but our current tools of government aren’t equipped to confront it. (Robert Fortunato/AFP/Getty Images)
Frances Haugen, who revealed herself Sunday as the Facebook whistleblower, could not have made things any clearer.
“Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money,” the former member of Facebook’s civic integrity team, who left the company this spring, told Scott Pelley of CBS’s “60 Minutes.”
This wasn’t just Haugen’s opinion as a digital-economy veteran, with a long stint at Google before she joined Facebook. She had the goods. The huge trove of documents that she took when she left the behemoth social network spells out its ugly incentive structure in case you had any remaining doubt: Outrage, hate and lies are what drive digital engagement, and therefore revenue.
The system is broken. And we all suffer from it.
But how to fix it? A problem that threatens the underpinnings of our civil society calls for a radical solution: A new federal agency focused on the digital economy.
The idea comes from none other than a former Federal Communications Commission chairman, Tom Wheeler, who maintains that neither his former agency nor the Federal Trade Commission is nimble or tech-savvy enough to protect consumers in this volatile and evolving industry.
“You need an agency that doesn’t say ‘here are the rigid rules,’ when the rules become obsolete almost immediately,” Wheeler, who headed the FCC from 2013 to 2017, told me Monday.
Too much of the digital world operates according to Mark Zuckerberg’s famous motto: “Move fast and break things.” That’s a perfect expression of what Wheeler called “consequence-free behavior.”
So if we really want to think about the public interest in the fast-paced digital world, it’ll be necessary to revise “the cumbersome, top-down rule-making process that has been in place since the industrial era,” as Wheeler wrote in a paper for Harvard’s Shorenstein Center, co-authored with Phil Verveer, the Justice Department’s lead counsel on the suit that resulted in the breakup of AT&T, and Gene Kimmelman, a prominent consumer-protection advocate.
Digital platforms like Facebook and Google have become “pseudo-governments that make the rules,” Wheeler told me. No surprise that they make the rules to benefit themselves.
The existing regulatory structure just doesn’t work, he argued in a Brookings Institution piece. The FCC and FTC are filled with dedicated professionals but are constrained. Their antitrust actions may grab headlines but can’t protect against more general consumer abuses — like those take-it-or-leave-it “terms and conditions” they force on their customers.
And it’s not as though Facebook hasn’t been punished for its offenses. In 2019, the FTC slapped the company with a record-breaking $5 billion fine for deceiving billions of users and failing to protect their privacy.
But such a penalty doesn’t address the issues that Haugen was talking about Sunday, or those that she’s expected to discuss Tuesday when she testifies before Congress.
“The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook,” she told CBS. “Facebook, over and over again, chose to optimize for its own interests, like making more money.”
Haugen’s simple delivery made for a powerful interview, hammering home the details of the shocking information she shared with the Wall Street Journal for its recent blockbuster investigation, “The Facebook Files.”
Among the revelations: Facebook is thoroughly aware that the mental health of teens is damaged by engagement with Instagram, which it owns (“Teens blame Instagram for increases in the rate of anxiety and depression,” stated one slide from an internal presentation) but has done little to change that.
And despite its constant protestations to the contrary, Facebook has built a business model that it knows full well relies on the anger and outrage of its nearly 3 billion users to keep them engaged and clicking. (“Misinformation, toxicity and violent content are inordinately prevalent among reshares,” its own data scientists concluded, according to the Journal report.)
As Haugen explained, this phenomenon motivates politicians not just to communicate differently but to govern differently, by embracing less reasonable, more outrage-inducing policy positions. You can see this playing out in extreme rhetoric on emotional issues like immigration policy. Facebook’s practices, she believes, even propelled the Jan. 6 insurrection at the Capitol by allowing misinformation to flourish and organizers to congregate on its sites.
Facebook’s representatives deny many of her charges, calling some of them ludicrous.
“We continue to make significant improvements to tackle the spread of misinformation and harmful content,” said Facebook spokesperson Lena Pietsch. “To suggest we encourage bad content and do nothing is just not true.”
Company founder Zuckerberg, meanwhile, has repeatedly said he thinks more regulation is necessary. (As long as it doesn’t cut into profits, one can assume he means.)
That all sounds mighty reasonable. And mighty familiar. Zuckerberg loves to apologize sincerely and carry on.
Facebook keeps growing in size, value and influence — vividly demonstrated when the massive platform crashed Monday, along with its subsidiaries Instagram and WhatsApp, and disrupted an enormous chunk of the planet that has come to rely on them.
Something has to change. And that doesn’t mean a little tinkering around the edges of what already exists. The digital revolution requires a revolutionary change in restraining out-of-control practitioners.