Few people walking around a modern city centre minding their own business will be aware that sophisticated surveillance technologies are monitoring their every move. These same individuals will have become belatedly aware of the abuses of social media companies, following the Facebook and Cambridge Analytica scandal. But what they won’t know is that it isn’t even necessary to be online for their data to be tracked and recorded by invisible agencies.
Bill Gates is a technology visionary, but sometimes his soothsaying powers have let him down, such as when he claimed Microsoft would soon trump Google in internet searches. One of his most notoriously imprecise prophecies came at the World Economic Forum in 2004, when he predicted that the power of financial markets would kill off spam within two years. The polar opposite happened. The problem of spam, and of other fake information spread on the internet, has become far more widespread and far more complex. In 2018, the era of both Cambridge Analytica and Russian Government propaganda, the public is more confused about the online threat than ever before.
The big three email providers – Gmail, Microsoft Outlook and Yahoo – have invested heavily in improving their filters since 2004, which has helped them to dominate the market. But messages selling fake goods and linking to dodgy websites still get through regularly, preying on the vulnerable and the naive. The problem of spam, however, is no longer confined to email. Criminal gangs are increasingly turning their botnets – networks of private computers infected with malicious software that generates spam – towards social media, where most young people now spend their time. The key to ‘social spamming’ is to create fake accounts that try to befriend, or follow, verified accounts.
These botnets are controlled by technologically proficient criminal gangs and hired out to spammers for a fee. The botnets are based all over the world, but hotspots include China, India, Vietnam, Russia and Brazil. Some are state-controlled and there is strong evidence that the Russian Government used them to sway the US presidential election. As if all that wasn’t bewildering enough, the public now has to contend with seemingly legitimate companies, such as the UK’s Cambridge Analytica, exploiting their Facebook data.
It’s easy to see that Bill Gates’ crystal ball was malfunctioning badly when he underestimated the spam problem. But why was he so wide of the mark? “At the time of his prediction, the spammers were much less sophisticated and operated on a smaller scale. They used to write scripts and try to fool people using a handful of computers,” says Dr Panagiotis Andriotis, a lecturer in Computer Forensics and Security at the University of the West of England, who uses machine learning to track down botnets. “Today, it’s easy to hire botnets to do your dirty work and there’s a constant cat-and-mouse battle with security experts. We’ve become clever at catching them, but eventually they always evade our systems.”
Spam is born
Spam was officially born on April 12, 1994, when two American immigration lawyers sent out the first mass mailing of a commercial message via the internet, advertising their services to 6,000 potential applicants for US Green Card lotteries on the Usenet discussion group network. From that moment, the problem grew steadily more serious. But it wasn’t until 2003 – a year before Bill Gates’ prediction – that the first botnets appeared. The gangs had found a way to infect networks of zombie computers with malware, organising the machines in a militaristic hierarchy: early zombies infected downstream computers, and some became middle managers that relayed commands from the central servers.
The largest botnet on record, known as Rustock, infected more than a million computers and could send 30 billion spam emails per day, before it was taken down in March 2011. Microsoft financially supported a complex international operation across different legal jurisdictions because Rustock sent its emails through Windows Live Hotmail accounts. The drug company Pfizer was also involved because the Rustock spam advertised counterfeit versions of their patent-protected Viagra.
The cat-and-mouse game with the bots is never-ending. To frustrate the botnets, security experts introduced CAPTCHAs, those irritating fuzzy tests designed to distinguish bots from humans. But the gangs behind the botnets responded by paying online workers in developing countries to solve thousands of them. In a 2010 paper, The Economics of Spam, US researcher David Reiley said these labourers, mainly in India, China and Southeast Asia, were paid just US$1 for every thousand CAPTCHAs they solved.
In the past few years, botnets have become much better at concealing their tracks, and security experts now need advanced computing techniques to locate them. As recently as 2011, it was enough to set a ‘honeypot trap’ to detect them. For example, a team at Texas A&M University created a few fake Twitter accounts that produced gibberish tweets of no interest to humans, but which attracted thousands of bots eager to grow their social circles. Dr Andriotis says that today’s social bots can be programmed to create far more realistic personas. They even search the internet for information to fill out their profiles and post material at predetermined times that mirror human behaviour. Meanwhile, natural language algorithms allow them to engage in conversations, comment on posts and answer questions.
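The logic behind the Texas A&M honeypot is simple: post content no human would ever want to read, so any account that follows or engages with it is almost certainly automated. A toy sketch of such a bait account’s output (the word counts and lengths are invented for illustration) might look like this:

```python
import random
import string

random.seed(1)  # seeded so the example is reproducible

def gibberish_tweet(words=6, word_len=8):
    """Generate a tweet of random letter strings that no human would follow,
    so accounts that engage with it are likely automated."""
    return " ".join(
        "".join(random.choices(string.ascii_lowercase, k=word_len))
        for _ in range(words)
    )

print(gibberish_tweet())
```

The honeypot account simply posts such strings on a schedule and records which accounts follow or retweet it; that follower list becomes a labelled sample of probable bots.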
To defeat such a formidable opponent, Dr Andriotis says machine learning has become an essential tool. “We use it to work out the specific characteristics of the social botnet accounts. We extract characteristics of accounts we know to be human and of other accounts that we know to be bots. Then we create algorithms to distinguish between humans and bots on Twitter. The AI can classify an account if it looks like one it has seen before.”
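The general idea – extract features from known human and known bot accounts, then classify new accounts by similarity – can be sketched with a tiny nearest-centroid classifier. The feature set and every value below are invented for illustration; real systems like Dr Andriotis’ use far richer features and proper statistical models:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Toy feature vectors: (tweets per day, retweet ratio, account age in years).
# These labelled examples are made up for the sketch.
humans = [(5, 0.2, 6.0), (12, 0.3, 3.5), (2, 0.1, 8.0)]
bots   = [(400, 0.9, 0.2), (250, 0.95, 0.1), (600, 0.85, 0.3)]

def centroid(points):
    """Mean of each feature across a set of labelled accounts."""
    return tuple(sum(p[i] for p in points) / len(points)
                 for i in range(len(points[0])))

HUMAN_C, BOT_C = centroid(humans), centroid(bots)

def classify(account):
    """Label an account by whichever centroid it sits closer to."""
    return "bot" if dist(account, BOT_C) < dist(account, HUMAN_C) else "human"

print(classify((350, 0.9, 0.2)))  # high-volume, retweet-heavy account -> "bot"
print(classify((8, 0.25, 5.0)))   # modest, varied account -> "human"
```

The “classify it if it looks like a previous account” intuition is exactly this: a new account inherits the label of the training examples it most resembles.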
Twitter used a similar approach to determine whether social botnets were trying to influence the US elections. It concluded that more than 50,000 Russian bots had retweeted Donald Trump nearly 500,000 times in the 10 weeks leading up to the US election in November 2016. Twitter examined signals such as the timing of tweets and the engagement with them. Other indications of a link included accounts set up in Russia, associations with Russian phone carriers, Cyrillic characters in display names and Russian IP addresses.
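One of those signals – Cyrillic characters in a display name – is straightforward to check programmatically. This is a minimal sketch of the idea, not Twitter’s actual implementation:

```python
import unicodedata

def contains_cyrillic(text: str) -> bool:
    """Return True if any character in the text has a Cyrillic Unicode name."""
    return any("CYRILLIC" in unicodedata.name(ch, "") for ch in text)

print(contains_cyrillic("Иван Петров"))  # Cyrillic display name -> True
print(contains_cyrillic("John Smith"))   # Latin display name -> False
```

On its own such a check proves nothing – plenty of legitimate users write in Cyrillic – which is why it was only one of several corroborating indicators alongside phone carriers, IP addresses and sign-up location.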
Facebook also revealed last year that it had uncovered 3,000 ads from 470 accounts connected to a Russian bot operation.
Emilio Ferrara, a world-renowned investigator in this field based at the University of Southern California, estimated that about 400,000 bots were engaged in the political discussion around the US presidential election. He calculated that they produced about 3.8 million tweets, a fifth of the entire election conversation. Ferrara used several indicators to spot the bots, including whether the Twitter profile had been customised, as bots are more likely to keep the default profile settings. Another strong indicator was the absence of geographical metadata, because humans use smartphones that record digital footprints. Further evidence came from activity statistics, such as incessant activity and excessive numbers of tweets: bots retweet content far more often than they generate new tweets.
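Ferrara’s indicators – a never-customised profile, missing geographical metadata, incessant activity, and retweet-heavy behaviour – can be caricatured as a simple checklist score. The thresholds here are invented for illustration; a real detector weights many more signals statistically rather than counting rules:

```python
def bot_score(account: dict) -> int:
    """Count how many of the article's four indicators an account trips.
    Thresholds are hypothetical, chosen only to make the sketch concrete."""
    score = 0
    if account.get("default_profile"):          # profile never customised
        score += 1
    if not account.get("geo_enabled"):          # no geographical metadata
        score += 1
    if account.get("tweets_per_day", 0) > 100:  # incessant activity
        score += 1
    retweets = account.get("retweets", 0)
    originals = account.get("tweets", 0) - retweets
    if retweets > originals:                    # retweets more than it posts
        score += 1
    return score

suspect = {"default_profile": True, "geo_enabled": False,
           "tweets_per_day": 240, "retweets": 900, "tweets": 1000}
print(bot_score(suspect))  # trips all four indicators -> 4
```

A high score flags an account for closer inspection rather than proving automation, since each individual signal also occurs among genuine users.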
The botnets spread some outrageous claims on social media during the US elections, but a lot of people were taken in. “I have relatives in the US and, like so many social media users, they believed the most incredible stories on Facebook, and shared them with friends, including me,” says Dr Andriotis. “People trust what they see on their monitors, which they’re looking at for two or three hours a day.”
Some parts of the world have been much better at regulating to prevent spam. Chris Thompson, from the Spamhaus Project, an international organisation that tracks spam and related cyber threats, says Canada, Australia and the EU have taken strong positions against spam, and in support of an email address owner’s property rights. Australia, in particular, generates less spam than most countries, and that could be because of the well-publicised prosecution of the notorious spammer Wayne Mansfield. The Australian Federal Court fined Mansfield AU$1 million personally and his company Clarity1 AU$4.5 million for sending 75 million emails between April 2004 and April 2006. “Likewise, Canada’s CASL legislation seems to inhibit large-scale spam operations there, and existing and forthcoming EU policies seem to reduce ‘mainsleaze spam’,” says Thompson. “In the EU, we are hearing from senders who are in the process of tightening their mailing list policies to comply with GDPR regulations this spring.”
US regulators have been comparatively lax, Thompson says, because powerful US business lobbies have vested interests in sending spam. “The Direct Marketing Association of the USA favoured an ‘opt out’ spam policy – which essentially condones spam – and its influence led directly to the USA’s weak CAN-SPAM legislation of 2003,” he says. “Also in the USA, both Salesforce and InfoGroup – huge corporations, not little fly-by-nights – still sell third-party email addresses. Their Jigsaw/Data.com and Walter Karl companies, respectively, offer email address ‘appending’ and other forms of distribution. That, in turn, gives a green light to smaller, often less accurate or principled firms to offer similar services, selling your and my personal contacts to any and all buyers.”
Although security experts will continue their ceaseless battle to protect the public, Dr Panagiotis Andriotis says we cannot rely on them entirely. We are all responsible for defending ourselves and our society, he argues, because unlike physical crime, spamming is difficult to police internationally. “Specific organisations, such as Europol and Interpol, identify and fight physical crime across borders. But cybercrime is different. You cannot see who is doing what, so it is not well-defined. This puts more onus on us as digital citizens to be aware and not wait for other people to make us safe. What we’re doing online is important and lots of people have no idea of the dangers,” he says.