Bad actors and rival nations have already manipulated tech platforms in attempts to shape American political outcomes. Given how much data is stored on their servers, data privacy experts fear the tech companies themselves could influence the electorate when they face an existential threat like the gig companies do in California.
“If you’re Uber, you’re using your monopoly power and your position in the sector to push advocacy that affects your bottom line,” said Dipayan Ghosh, a computer scientist who worked on privacy issues at Facebook and now directs the Digital Platforms & Democracy Project at Harvard’s Kennedy School. “I think that that’s highly problematic.”
The gig companies behind Proposition 22 have spent nearly $200 million on the initiative. It would carve rideshare and delivery services out of Assembly Bill 5, a labor law the California Legislature passed in 2019 to codify a state Supreme Court ruling on worker classification.
The platforms say the law poses an existential threat to their business models by requiring them to treat their workers as employees, while labor advocates counter that the companies have been depriving gig workers of health care and other benefits to which they are entitled.
Companies “risk annoying their customers” by targeting them with messages when they’re waiting for dinner or a ride home, said Bob Stern, the architect of California’s campaign finance rules. But such actions do not violate state election law, as long as they’re properly reported, he said.
Other legal scholars and technology experts shared a similar view: The use of a platform to advance policy favoring a company’s interests would likely be protected as free speech under the First Amendment unless the businesses violate their terms of service or data privacy regulations.
“A lot of the protections we have are not legal protections,” said Ashkan Soltani, a former chief technologist for the Federal Trade Commission. “They’re just norms.”
The labor battle in California is highly visible to rideshare and meal delivery users. But the potential for influence below the surface concerns data privacy experts even more.
In theory, some suggest, Google could give lower search rankings to news stories about the need for stringent data privacy regulation, and Facebook could boost content in its news feeds about the harms of gutting a legal shield for social media platforms — policies in which the companies have a deep business interest. There is no indication that either has done so.
“There’s not much to keep a company, legally speaking, from doing this,” said Mary Anne Franks, a professor at the University of Miami School of Law. “Clearly they make choices all the time about what kind of stories rise up and what is trending. I can’t think of any wires it would trip.”
Google says its search algorithms are designed to help people find the most relevant information from trusted sources, not to advance political agendas, and stresses that it does not sell higher search rankings. Different rules govern paid ads, which are labeled and often appear at the top of a search page.
A spokesperson noted the company’s announcement last year that it would stop allowing political advertisers to serve small groups of people with election ads based on narrow demographic profiles, a practice known as microtargeting. That policy would apply to Google itself, if it placed political ads, the spokesperson said.
Facebook declined to comment for this story.
Eric Goldman, a professor at the Santa Clara University School of Law in Silicon Valley, argues that fears of powerful tech companies secretly manipulating the public and perverting democracy are overblown, especially given the backlash they already face in Washington and elsewhere over data privacy, disinformation and antitrust issues.
The U.S. Department of Justice filed a long-anticipated antitrust suit against Google on Tuesday, while social media companies struggling to combat online disinformation have been slammed with allegations of anti-conservative bias from President Donald Trump and other Republicans in Washington.
“It sounds really scary, ‘Oh my God, these people could totally muck with my mind,'” Goldman said. “The reality is Facebook and Google are doing everything in their power to avoid anything that smells like that.”
In California, Prop. 22 would give workers health care subsidies and other wage and job protections, and the campaign and the companies behind it have defended their tactics as a way to reach people most directly affected by the issue at hand.
“Uber’s app is sharing the voice of tens of thousands of drivers, 72% of whom support Prop 22, with millions of riders in California and keeping them informed of the stakes on this issue,” the company said in a statement. “We have previously shared videos from drivers with riders and MADD’s endorsement of Prop 22 because of ridesharing’s impact on reducing drunk driving.”
But the unorthodox campaign strategy has created a backlash as consumers grow increasingly sensitive about the use of their personal data and unwanted intrusions into their lives.
“You can see that it cracks the door open,” said Hany Farid, a computer science professor at the University of California, Berkeley, who noted he was served an ad along with his meal on Uber Eats. “They can have a blatant ad on the app about a proposition that benefits them financially. What’s the next step? It just feels like you’re opening this door to manipulating the electorate in a way that doesn’t feel right to me.”
If a company says it won’t use someone’s personal information for a certain purpose but does so anyway, that could trigger an investigation by the Federal Trade Commission or a state attorney general, especially in places like California that have enacted strong data privacy laws.
And less than two weeks before Election Day, drivers sued Uber on Thursday, alleging the company was illegally pressuring them to support Prop. 22 by requiring them to click through messages about the measure in their apps.
But legal experts say it’s otherwise legal for companies like Google, Facebook or Twitter to try to influence consumers by presenting or withholding certain information based on what they know about them.
In California, a wealthy activist behind a data privacy measure also on the November ballot wants the state to force companies to disclose whether they manipulate their algorithms to influence election outcomes — even if it can’t prevent such manipulations. Alastair Mactaggart included such a proposal in an early draft of his ballot initiative but later removed it, citing concerns about a First Amendment challenge bringing down the entire privacy law.
Mactaggart said last week that he will lobby to have an algorithmic disclosure bill taken up in the California legislature next year.
The problem is that there’s no way to know what’s taking place under the hood, said Soltani, who has consulted for Mactaggart.
“If someone wants to put their thumb on the scale and rank information based on what they know about a user and what they’re more susceptible to, not only is that permitted by the law but that’s the definition of behavioral advertising,” Soltani said. “That’s what the entire system is designed to do — to show the right content to the right person at the right time.”