Research company investigates 40,000 fake accounts to find impersonator tactics.
Say you just got laid off from your job. Bills are piling up and the pressure to get a new job quickly is building. Your desperation has you taking chances you wouldn’t normally take, such as clicking on a link to a job offer — even if something about it doesn’t quite look right.
Research firm ZeroFOX has found that unless a company has a verified recruiting account, it can be difficult for an applicant to distinguish a legitimate account from an impersonator. One tell is the contact address: impersonators commonly list Gmail, Yahoo, or other free email provider addresses through which applicants can inquire about a job and send their resumes (though more advanced scammers can spoof company email domains). Some also include links to official job sites and LinkedIn for follow-up. In most cases, the impersonator uses the company logo to portray themselves as an official recruiter for the company.
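The free-email-provider tell lends itself to a simple check. The sketch below is a hypothetical helper, not ZeroFOX's tooling: it flags a recruiter contact address that comes from a free provider rather than the company's own domain. As the article notes, spoofed domains can defeat this, so it is a signal, not proof.

```python
# Hypothetical heuristic: flag recruiter contact emails from free providers
# instead of the company's own domain. A spoofed domain would pass this
# check, so treat a True result as one signal among several.
FREE_PROVIDERS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "aol.com"}

def is_suspicious_contact(email: str, company_domain: str) -> bool:
    """True if the address is not on the company domain and uses a free provider."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain == company_domain.lower():
        return False  # official company address
    return domain in FREE_PROVIDERS

print(is_suspicious_contact("acme.hiring@gmail.com", "acme.com"))  # True
print(is_suspicious_contact("careers@acme.com", "acme.com"))       # False
```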
Once the impersonator receives an email, he or she will either try to extract personally identifiable information (PII) or demand payment for an application fee. Some companies are aware of recruitment scams and have a page on their site asking job seekers to be aware of scammers using unofficial company email addresses.
In investigating the 40,000 fake accounts, ZeroFOX created honeypot accounts, engaged with the impersonators, and observed the social engineering attacks within a sandboxed environment. This allowed the firm to reveal the anatomy of the attacks, identify commonalities and differences among them, and more clearly understand the attackers' motives.
“Social media is no longer used solely as a personal communication tool. It has evolved into a critical business application – helping businesses dramatically increase revenue and productivity – while strengthening and growing customer relationships. As businesses increasingly look to leverage social media, so are cybercriminals,” said ZeroFOX’s Evan Blair.
The overall number of malicious impersonations increased 11-fold between December 2014 and December 2016, according to ZeroFOX. In its research, ZeroFOX uses a suite of machine learning, natural language processing, image recognition, and other data science techniques to measure the relative similarity between an impersonating profile and the genuine account.
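ZeroFOX's actual pipeline combines ML, NLP, and image recognition; as a much simplified illustration of one ingredient of profile-similarity scoring, the sketch below compares display names with a character-level similarity ratio from Python's standard library. The account names are invented examples.

```python
# Simplified illustration only: compare two profile display names with a
# character-level similarity ratio (Ratcliff/Obershelp via difflib).
# A production system would also compare avatars, bios, and posting behavior.
from difflib import SequenceMatcher

def name_similarity(genuine: str, candidate: str) -> float:
    """Similarity in [0, 1] between two profile display names."""
    return SequenceMatcher(None, genuine.lower(), candidate.lower()).ratio()

# A one-character swap ("o" -> "0") still scores very close to 1.0,
# which is exactly what makes such impersonations hard to eyeball.
print(name_similarity("Acme Bank Support", "Acme Bank Supp0rt"))
```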
Nearly half (48.1 percent) of all malicious social media impersonators disguise their payload as a fake coupon or giveaway using the brand to attract promotions seekers. More than 1,000 impersonators incorporated credibility-building words like “official,” “authentic,” “real,” “authorized,” “actual,” and “legitimate” within their names, screen names and descriptions.
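The credibility-word pattern ZeroFOX describes can be sketched as a naive keyword scan across a profile's name, screen name, and description. This is an illustrative simplification (for instance, plain substring matching would also hit words like "really"), not ZeroFOX's detector.

```python
# Naive sketch of the credibility-word check: report which
# trust-building words appear anywhere in the profile's text fields.
# Substring matching is deliberately simple and will over-match
# (e.g. "real" inside "really"); a real detector would tokenize.
CREDIBILITY_WORDS = {"official", "authentic", "real", "authorized", "actual", "legitimate"}

def credibility_words_used(name: str, screen_name: str, description: str) -> set:
    """Return the credibility-building words found in the profile fields."""
    text = f"{name} {screen_name} {description}".lower()
    return {w for w in CREDIBILITY_WORDS if w in text}

print(credibility_words_used("Acme Official", "acme_real_support", "The legitimate Acme page"))
```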
“In our new digital lives, where people are free to assume others’ identities and perpetrate malicious activity in their name, businesses – regardless of size – are at an all-time high risk of financial and reputational losses. Social media and digital security is now a mission-critical function for brands to protect themselves – and more importantly, their customers – from falling victim to safety, privacy, reputation and revenue vulnerabilities,” Blair said.
The social networks have taken a first step in combating the impersonator problem by verifying accounts, indicating to users that the profile they’re interacting with is legitimate and not an imposter. This is similar to websites being verified with digital certificates and browsers highlighting the URL in green. But what this approach doesn’t provide is any indication of a nefarious account, ZeroFOX reports. Social networks rely on abuse reports from their users or on manual triage to identify and respond to these accounts, an approach that cannot keep up with the constant flux of impersonating accounts created and deleted each day.
The problem of fraudulent accounts is systemic across the social networks and the tactics are broad and diverse. Proactively hunting for these accounts requires sophisticated, layered methods using account verification, threat detection, and machine learning.
This approach can subsequently be integrated to allow large-scale, cross-network analysis and improved detection accuracy. Machine learning classifiers can then report on threats targeting an individual or enterprise at scale. An organization can then take a more proactive and timely approach to thwarting threats, requesting account takedowns, and mitigating risk.
ZeroFOX has shared some of the scenarios they saw in setting the trap:
From product complaints, to account security issues, to undelivered packages, customers publicly express their discontent by directly mentioning the company’s social media account. Companies have responded by forming rapid response teams who address such customer inquiries. But they aren’t the only ones to do so. Impersonators have latched on to the inherent trust that customers place in these support accounts.
Other than the blue verified checkmark, the differences between the real account and its two impersonators are negligible to the human eye. Customers with bank accounts identify themselves by mentioning the authentic bank’s account alongside a personal question, and the impersonator then uses this publicly posted information as a one-stop-shop for victim acquisition.
Another common theme involves impersonators who target military members and veterans. From the data collected, 1,047 impersonators incorporated military-associated words like “military,” “navy,” “army,” “air force,” “marines,” and “nato” within their names, screen names and descriptions. Impersonators try to penetrate the social media circles of military members to try to steal personal and sensitive information.
Some impersonators garner followers and likes by promising vouchers, gift certificates, and other fake giveaway promotions. In most instances they request a @mention or repost of the contest along with an email address or photo. Obtaining followers allows them to inflate their own prominence on social media, a tactic called fame farming.
The value of inflating followers count is threefold:
More followers create a more credible account: There is a feedback loop between offering fake promotions for likes and having a strong following. A strong following increases an account’s credibility, and more credibility means more follows. Accounts build this following until they are ready to do something else with the account, almost always something malicious.
Followers now are victims later: By building a following over time without conducting any overtly malicious activity, the followers are less likely to suspect malicious activity once the account does spring into action. The cybercriminal may begin direct messaging its followers or posting more overtly malicious content, such as phishing links disguised as fake offers or malware in the form of fake contests.
Robust accounts can be sold: Scammers, spammers, and cybercriminals pay a hefty price for accounts with a pre-built following. Building and selling accounts, called “account flipping,” is a lucrative tradecraft in the social media cybercrime economy.
Enterprises should be concerned about these tactics because there is a profound element of brand reputation that is not part of the traditional cost analysis of an incident, ZeroFOX said. These attacks target a brand’s customer base, especially those that are particularly engaged. Organizations ought to assess these attacks in terms of the value of a single customer, not just the direct financial fallout of the attack.
Another way for a cybercriminal to ensure their attack is viewed by a huge number of potential victims is to use paid promotion, which broadcasts the phishing link to wider audiences. Promotion is a service offered to social media marketers to display an ad to users beyond just their followers, and it is the basis for revenue for most social networks. Scammers using this method take a huge risk because the social networks review ads before they are posted and the scammer may have their entire account banned if the network deems their purposes to be nefarious. Scammers must invest extra time and energy ensuring their promoted content will dupe the network’s filters.
In the image to the left, a website offering counterfeit sunglasses at a too-good-to-be-true discount is promoted on Instagram. The website sells fake merchandise despite adopting the real brand’s logo. The more scammers are willing to pay, the more the networks will distribute the post.
Impersonators use a variety of techniques to avoid detection by the social networks. One of the most popular is creating an account but letting it sit dormant for significant periods of time before springing into action. They can return to dormancy just as quickly. The reasons for this might be:
1. Older accounts are more credible
For a user doing a cursory check on a potentially malicious impersonation account, the account’s age is a good indicator of legitimacy. Users expect the authentic account of well-known brands to have been around for quite a while. For a scammer, this means “aging” the account makes it more authentic. During this aging process, the account must remain undetected, and thus the perpetrators leave the account dormant and blank.
2. Dormant accounts are more likely to fly under the radar
Cybercriminals regularly wipe an account’s content between attacks. Wiping the account helps cover the attacker’s tracks and keeps the account from being flagged while it waits for the next campaign.
3. The account may have been recently sold
Accounts are bought and sold regularly. Cybercriminals might buy a dormant account with a lucrative handle, perhaps one very similar to that of the brand they intend to impersonate. Once the account has changed hands, it may spring to life and start spreading its attack campaign.
The authentic Twitter user @verified posts a URL with information about how users can get their accounts verified. Its impersonator uses the same default image, similar background, and a deceptive @HeIpSupport username with a homoglyph uppercase “I” replacing the lowercase “l.” The account lay dormant for four years before starting to phish, but now actively engages by posting and liking often, following other users, and following back similar accounts spreading malicious URLs.
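A homoglyph swap like @HeIpSupport can be caught by mapping easily confused characters to a canonical "skeleton" before comparing handles. The sketch below covers only a few Latin look-alikes; a real detector would draw on the full Unicode confusables data.

```python
# Minimal homoglyph check inspired by the @HeIpSupport example: map a few
# easily confused characters to a canonical form, then compare handles.
# Unicode defines far more confusable pairs than this toy table covers.
HOMOGLYPHS = str.maketrans({"I": "l", "1": "l", "0": "o", "5": "s", "3": "e"})

def skeleton(handle: str) -> str:
    """Canonical 'skeleton' of a handle for confusable comparison."""
    return handle.translate(HOMOGLYPHS).lower()

def is_homoglyph_of(handle: str, target: str) -> bool:
    """True if the handles differ but collapse to the same skeleton."""
    return handle != target and skeleton(handle) == skeleton(target)

print(is_homoglyph_of("HeIpSupport", "HelpSupport"))  # True: capital I mimics l
```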
Retailers are targets
Retailers are also targets for fraud and scams. The fake gift card, coupon, and promotion impersonators can be used to phish information from coupon-clippers, provide discount codes that bait-and-switch to malware, and even generate usable gift card numbers from fake mobile apps.
Retail scams distribute links that redirect the user to a page to enter the contest, harvesting name, address, email, birthdate, and other PII. Even when users follow the registration instructions, they never receive an entry confirmation. Instead, the page leads to multiple pop-ups serving malware and eventually redirects to a website designed for data extraction.
Other impersonator accounts simply request an email address in conjunction with a repost. Once entered, the email is sold to spam lists. The user is typically encouraged to follow steps for providing contact information in exchange for a gift card that never arrives. Additionally, a perpetrator can check harvested addresses against exposed-account lists such as Have I Been Pwned. The social network account can also be reviewed to identify the user’s posts, hobbies, and more for password-guessing purposes.
Financial services firms are obvious targets for fraud and scams, such as money-flipping scams, work from home scams, card cracking, and more. Financial scammers hijack banks’ logos in an attempt to make their services look official. They monitor legitimate bank profiles on social media and identify when they’re followed by a new user. The scammer will then immediately tag them or use an @mention to ask if the user would like to make a quick return on their money. Then the perpetrator takes the conversation with the user to private direct messages (DM) to engage off the radar. This activity is not completely hidden; the initial post is public to all including the bank.
In another scenario, a scammer offered to money-flip for a number of banks, going as far as providing their phone number. The bulk of the malicious activity is carried out via DM or off of the platform entirely, making it difficult to detect.
The scammers target victims in dire financial need, often appending hashtags like #help, #debt, and even #singlemom. They also target members of the military and holiday shoppers, who make for lucrative targets. At the end of the day, it’s often the banks who eat the costs of these scams, which combined across platforms, could total in the hundreds of millions annually, ZeroFOX said.