
Russian trolls and bots disrupting US democracy via Facebook and Twitter


There was the ISIS attack on a chemical plant in southern Louisiana last September. Two months later came an outbreak of the deadly Ebola virus in Atlanta, on the same day a video of an unarmed black woman being shot to death by police in that city began circulating on social media.

All of these shocking and disparate stories have two things in common: they are not true, and they all originated from a group of Russian cyber trolls working out of a nondescript office building in St. Petersburg.

In addition, throughout last year's presidential election season, dozens of stories circulated on Twitter, Facebook and other social media attacking Democratic candidate Hillary Clinton over everything from her supposedly poor mental health to alleged secret ties with Islamic extremists.

(Photo: Vladimir Putin. Reuters)

With top executives from Facebook, Twitter and Google testifying Tuesday before a Senate Judiciary subcommittee on Russia’s attempts to influence U.S. elections and sow discord across the country via social media, many questions remain about how these Russian trolls operate, the veracity of the so-called news they are spreading and what tech giants in Silicon Valley are doing to combat this scourge.

“We’re pretty sure Russia is behind this given that we’ve seen the building they use, know that they’ve put hundreds of millions of dollars into this effort and that they have about 200 employees working on this,” Philip Howard, a professor of internet studies at the University of Oxford and research director of the Oxford Internet Institute, told Fox News. “The problem combatting them is different with each social media platform.”

On the world’s most popular social media site, Facebook, Russian trolls use fake names and backgrounds paired with stolen photos to pose as American citizens and spread either false news stories or hacked information.

In one instance, an account under the name of Melvin Reddick of Harrisburg, Pa., posted a link last June to a website called DCLeaks, which displayed material stolen from a number of prominent American political figures. While the information posted on the site appeared to be genuine, Reddick himself was a phantom, with no records in Pennsylvania under his name and with photos taken from an unsuspecting Brazilian.

While trolls in Russia have also used individual accounts on Twitter to disseminate false or incendiary news, the more common practice on that platform is so-called bot farming, in which hundreds or, at times, thousands of automated accounts send out identical messages seconds apart, in the exact alphabetical order of their made-up names.
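That pattern, identical text, near-simultaneous posting and alphabetically ordered handles, is distinctive enough that researchers can flag it with simple heuristics. The sketch below is only an illustration of that idea; the sample records, field layout and ten-second threshold are hypothetical, not any platform's actual data or detection method.

```python
from datetime import datetime

# Hypothetical post records: (account handle, message text, UTC timestamp).
# Handles, texts and timestamps are made up for illustration.
posts = [
    ("alpha_ann",   "#WarAgainstDemocrats", datetime(2016, 11, 8, 14, 0, 1)),
    ("bravo_bill",  "#WarAgainstDemocrats", datetime(2016, 11, 8, 14, 0, 4)),
    ("charlie_cox", "#WarAgainstDemocrats", datetime(2016, 11, 8, 14, 0, 7)),
]

def looks_like_bot_farm(records, max_gap_seconds=10):
    """Flag a burst of identical messages posted seconds apart by accounts
    whose handles run in alphabetical order -- the signature described above."""
    if len(records) < 3:
        return False
    if len({text for _, text, _ in records}) != 1:   # messages must be identical
        return False
    ordered = sorted(records, key=lambda r: r[2])    # sort by timestamp
    handles = [handle for handle, _, _ in ordered]
    if handles != sorted(handles):                   # posting order mirrors alphabetical order
        return False
    gaps = [(b[2] - a[2]).total_seconds() for a, b in zip(ordered, ordered[1:])]
    return all(gap <= max_gap_seconds for gap in gaps)

print(looks_like_bot_farm(posts))  # True for the sample burst above
```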

On Election Day last year, a group of bots on Twitter blasted out the hashtag #WarAgainstDemocrats more than 1,700 times.

Experts say that while these trolls may use various methods to try to disguise their Russian identities, such as changing their IP addresses, they are easily tracked through their frequent screw-ups (some accidentally leave their location setting on in Twitter, use credit cards linked to the Russian government or write their posts in Cyrillic).
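The Cyrillic tell, at least, is straightforward to check for in text. The snippet below is a rough, hypothetical illustration of that single signal using Python's standard Unicode tables; it is not a description of how any platform or research group actually screens accounts.

```python
import unicodedata

def contains_cyrillic(text):
    """Rough check for one tell described above: does the post contain
    any character from the Cyrillic script?"""
    return any("CYRILLIC" in unicodedata.name(ch, "") for ch in text)

print(contains_cyrillic("Vote now!"))         # False
print(contains_cyrillic("Голосуйте сейчас"))  # True
```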

Preventing them from operating, however, is a different story.

Groups like Howard's Oxford Internet Institute say that they can monitor questionable accounts and alert tech companies, but to really make progress, and to openly prove that Russia is behind these attacks, Facebook and Twitter need to be more transparent.

“The only way to see if these accounts are actually Russian is…
