Monthly Archives: July 2018

How to set up a rule in Microsoft Exchange to send an alert of a phishing attack

Empowering your employees to easily notify IT security personnel of a phishing attack requires an Exchange rule. This tutorial explains how to set one up.

In general, IT cybersecurity experts agree that when it comes to enterprise phishing emails, the most effective defense, and the only one that can reliably stop such attacks, is a well-trained and educated workforce. While technologies like artificial intelligence and machine learning may stop many phishing emails from reaching user inboxes, those solutions cannot overcome a careless click on a malicious link by one of your employees when the technology fails.

As we have mentioned before, a 2018 report shows that about 50% of an enterprise’s computer-using employees will click on a link sent via email from an unknown sender without first thinking of the potential consequences. To overcome this lack of caution, so prevalent among users, IT professionals should task the entire workforce with the responsibility of immediately reporting phishing emails when they are discovered.

The Office 365 add-in, Report Message, allows Outlook users to report a phishing or other suspicious email with the click of a single icon on the standard Office Ribbon interface. However, by adding a new rule to Microsoft Exchange, admins can also receive a copy of the report—with no additional effort on the employee’s part.

This how-to article explains how to set up a rule in Exchange that will piggyback on Report Message to notify the proper IT security team in your organization that a phishing email has been reported.

Set up the Rule

Creating or modifying rules using the following technique requires Exchange Online Administrator authentication status. This tutorial also assumes you have installed and enabled the Report Message add-in for Outlook. (Check out the previous article for details.)

Open the Office 365 online portal and log on with administrator credentials. Navigate to the Admin Center and then open the Exchange Admin Center submenu. Click the Mail Flow link in the left navigation bar. You should see something similar to Figure A. (Note: the example has no rules yet.)


Figure A

Click on the Plus button to create a new rule. Name your new rule (Phishing Submission) and then open the Apply this rule if dropdown box. Choose the entry: The recipient address includes. Add these two email addresses to the list as shown in Figure B.



Figure B



In the Do the following box, choose the Bcc the message to entry and add the appropriate security administrator or team as designated by your intrusion detection policy. Set Audit this rule with severity level to Medium, as shown in Figure C, and click Save.


Figure C

Once this rule is established, whenever an employee reports an email using the Report Message add-in, the appropriate security personnel will receive a copy of the message automatically. This will allow your security teams to act swiftly and decisively to mitigate and counteract phishing attacks in accordance with your enterprise’s policies.
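For admins who prefer scripting, the same rule can also be created through Exchange Online PowerShell. Treat the following as a sketch rather than an official Microsoft recipe: the two recipient addresses are the ones the Report Message add-in is commonly documented to submit reports to, and secteam@contoso.com is a placeholder for whatever mailbox your own policy designates.

```powershell
# Connect to Exchange Online (using the ExchangeOnlineManagement module).
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com

# Bcc the security team whenever a message is addressed to the
# Report Message submission addresses. secteam@contoso.com is a
# placeholder; substitute the mailbox your policy designates.
New-TransportRule -Name "Phishing Submission" `
    -RecipientAddressContainsWords "junk@office365.microsoft.com", "phish@office365.microsoft.com" `
    -BlindCopyTo "secteam@contoso.com" `
    -SetAuditSeverity "Medium"
```

Running Get-TransportRule "Phishing Submission" afterwards will confirm the rule was created.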


via:  techrepublic

New York kicks Charter out of the state after failure to honor conditions of Time-Warner merger

Broadband providers! They love to make noise about how dedicated they are to improving your service, rolling out new features and generally adhering to both the law and their own code of ethics. So how can it be that Charter has so badly failed the terms imposed on its purchase of Time-Warner Cable in 2016 that the state of New York is showing them (specifically their subsidiary Spectrum) the door? Could all these promises be only so many words? Say it ain’t so!

Yes, to the surprise of no one but to the continued detriment of New York’s broadband customers, Charter has failed to meet various obligations, lied about compliance and performance and apparently has even been operating unsafely out in the field.

New York’s Public Service Commission approved the merger at the state level in 2016 on condition that the company expand broadband offerings in both quality and quantity; at a national level the FCC set its own conditions.

Unfortunately, Charter has failed repeatedly and publicly to meet the NY PSC’s requirements. The latter wrote in a press release (PDF):

Charter, doing business as Spectrum, has — through word and deed — made clear that it has no intention of providing the public benefits upon which the Commission’s earlier approval was conditioned.

These recurring failures led the Commission to the broader conclusion that the company was not interested in being a good corporate citizen and that the Commission could no longer in good faith and conscience allow it to operate in New York.

Charter is the largest cable provider in the state, serving some 2 million people in a variety of urban communities, so this isn’t a matter of swapping out a couple of neighborhoods. The company has 60 days to provide a plan for “an orderly transition to a successor provider(s).” Difficulty level: “Charter must ensure no interruption in service is experienced by customers.”

The PSC has clearly had it with the company and gladly recounts its sins:

By its own admission, Charter has failed to meet its commitment to expand its service network that was specifically called for as part of the Commission’s decision to approve the merger between Charter and Time Warner Cable. Its failure to meet its June 18, 2018 target by more than 40 percent is only the most recent example. Rather than accept responsibility Charter has tried to pass the blame for its failure on other companies, such as utility pole owners, which have processed tens of thousands of pole applications submitted by Charter.

Despite missing every network expansion target since the merger was approved in 2016, Charter has falsely claimed in advertisements it is exceeding its commitments to the State and is on track to deliver its network expansion. This led to the Commission’s general counsel referring a false advertising claim to the Attorney General’s office for enforcement.

Not only has Charter’s performance been wholly deficient and its behavior before the Commission contrary to the laws of New York State and regulations of the Commission, but it has also repeatedly claimed not to be bound by the terms of the Commission’s approval. Such egregious conduct cannot be condoned and the only reasonable remedy that remains is for the Commission to revoke the 2016 merger approval…

…and its subsequent removal from the state. It has also been ordered to pay $3 million in fines.

The company would not be able to operate in New York, but it could continue to do business in other states. That said, a string of failures this prominent is sure to draw federal attention; the FCC requirements included some broadband deployment ones, and Charter’s negligence in such a major market will not go unnoticed.

Charter told Ars Technica that it will fight the PSC’s order, and in a statement said that election season had caused the “rhetoric” to become “politically charged,” and that it had expanded to 86,000 new homes since 2016.


via:   techcrunch

Google Assistant can now do things automatically at a scheduled time

Back at Google I/O, Google announced two new features for Google Assistant: custom routines and schedules — both focusing on automating things you do regularly, but in different ways.

The first lets you trigger multiple commands with a single custom phrase — like saying “Hey Google, I’m awake” to unsilence your phone, turn on the lights and read the news. Schedules, meanwhile, could trigger a series of commands at a specific time on specific days, without you needing to say a thing.

While custom routines launched almost immediately after I/O, scheduling has been curiously absent. It’s starting to roll out today.

As first noticed by DroidLife, it looks like scheduling has started rolling out to users by way of the Google Home app.

To make a schedule:

  • Open the Google Home app
  • Go to Settings > Routines
  • Create a new routine with the + button
  • Scroll to the “Set a time and day” option to schedule things ahead of time

If you don’t see the “time and day” option yet, check back in a day or two. Google is rolling the feature out over the next few days (a staggered rollout, generally done in case there’s some bug it missed), so it might pop up without much fanfare.

Want your bedroom lights to turn on every morning at 7 am on workdays? You can do that. Want that song from the Six Flags commercials to play every day at noon to get you over the hump and/or drive your roommates up a wall? Sure! Want to double-check the door lock, dim the downstairs lights and make sure your entertainment center is off at 2 am? If you’ve got all the smart home hardware required, it should be able to handle it.

While a lot of things you might use Google Assistant for can already be scheduled through their respective third-party apps (most smart lights, for example, have apps with built-in scheduling options), this moves to bring everything under one roof while letting you fire off more complicated sequences all at once. And if something breaks? You’ll know where to look.


via:  techcrunch

Ransomware Infection Cripples Shipping Giant COSCO’s American Network

A ransomware infection has crippled the US network of COSCO (China Ocean Shipping Company), one of the world’s largest shipping companies.

“Due to local network breakdown within our America regions, local email and network telephone cannot work properly at the moment,” said the company in a press release. “For safety precautions, we have shut down the connections with other regions for further investigations.”

But while the company described the incident as a “network breakdown,” internal emails seen by several maritime news sites [1, 2] referred to it as a ransomware infection.

COSCO warns employees not to open suspicious emails

COSCO warned employees in other regions not to open “suspicious emails” and urged its IT staff to perform a sweep of internal networks with antivirus software.

The type of ransomware that infected the company’s network is still unknown. COSCO did not respond to multiple requests for comment sent by Bleeping Computer.

The incident took place on Tuesday, July 24, but as of today the company’s American region IT infrastructure was still down, including email servers and the telephone network, according to a Facebook post. The company’s US website was also offline and remained so at the time of this article’s publication.

The company’s US employees have resorted to using public Yahoo email accounts to answer customer problems reported via social media.

Incident not as big as Maersk’s NotPetya problems

COSCO is the world’s fourth-largest maritime shipping company. A.P. Møller-Maersk, the world’s largest shipping firm, also suffered a ransomware infection last year when it was one of the NotPetya ransomware outbreak’s largest victims.


Speaking at a panel on securing the future of cyberspace at the World Economic Forum held in January in Davos, Switzerland, Maersk’s CEO said the company’s engineers had to reinstall over 4,000 servers, 45,000 PCs, and 2,500 applications over the course of ten days in late June and early July 2017, following the NotPetya outbreak.

The COSCO incident is much smaller in size and nature compared to Maersk’s NotPetya troubles. Some of Maersk’s shipments were trapped in some ports because of NotPetya, something that doesn’t seem to have happened to COSCO, according to current reports.



via:  bleepingcomputer

Researchers Can Earn Up to $100K via Microsoft Identity Bounty Program

Microsoft announced its Identity Bounty Program through which security researchers can earn up to $100,000 for an eligible submission.

On 17 July, Microsoft Security Response Center (MSRC) unveiled the creation of a new bug bounty program to help it remediate vulnerabilities affecting its Identity services.

Phillip Misner, principal security group manager of MSRC, noted that security today depends largely upon protecting a customer’s digital identity. This helps explain Microsoft’s commitment to identity-based solutions, as Misner said in a blog post:

Modern security depends today on collaborative communication of identities and identity data within and across domains. A customer’s digital identity is often the key to accessing services and interacting across the internet. Microsoft has invested heavily in the security and privacy of both our consumer (Microsoft Account) and enterprise (Azure Active Directory) identity solutions. We have strongly invested in the creation, implementation, and improvement of identity-related specifications that foster strong authentication, secure sign-on, sessions, API security, and other critical infrastructure tasks, as part of the community of standards experts within official standards bodies such as IETF, W3C, or the OpenID Foundation. In recognition of that strong commitment to our customer’s security we are launching the Microsoft Identity Bounty Program.

According to its terms and conditions, the Microsoft Identity Bounty Program welcomes reports detailing previously unreported critical or important vulnerabilities that affect one of its in-scope Identity services. Those include Microsoft’s mobile Authenticator app, amongst others, as well as several standards such as OpenID Connect Core and OAuth 2.0 Form Post Response Types. Certain issues such as reports from automated scans, denial-of-service (DoS) flaws and vulnerabilities likely requiring user interaction aren’t in scope.

When it comes to the rewards security researchers can receive for an eligible submission, the amounts vary widely. Participants can expect to make at least $500 for an incomplete submission detailing an authorization flaw or instance of sensitive data exposure. On the other end of the spectrum, they can earn up to $100,000 for a high-quality submission disclosing enough information for an engineer to reproduce, understand, and fix a multi-factor authentication bypass or a standards design vulnerability.

The reward scheme for the Microsoft Identity Bounty Program

Microsoft notes that participants of its Identity Bounty Program must avoid privacy violations and the destruction of data. They are also prohibited from using certain methods of research such as DoS testing, automated security testing and phishing testing against company employees.

Researchers are required to set up test accounts and test tenants in order to look for security issues. Specifically, they can create an Azure free trial and/or a Microsoft test account.

News of this program follows approximately two months after Microsoft launched a limited-time bug bounty program to help discover and address vulnerabilities similar to Spectre and Meltdown.


via:  tripwire 

Europe takes another step towards copyright pre-filters for user generated content

In a key vote the European Parliament’s legal affairs committee has backed the two most controversial elements of a digital copyright reform package — which critics warn could have a chilling effect on Internet norms like memes and also damage freedom of expression online.

In the draft copyright directive, Article 11, “Protection of press publications concerning online uses” — which targets news aggregator business models by setting out a neighboring right for snippets of journalistic content that requires a license from the publisher to use this type of content (aka ‘the link tax’, as critics dub it) — was adopted by a 13:12 majority of the legal committee.

Meanwhile, Article 13, “Use of protected content by online content sharing service providers” — which makes platforms directly liable for copyright infringements by their users, thereby pushing them towards creating filters that monitor all content uploads, with all the associated potential chilling effects (aka ‘censorship machines’) — was adopted by a 15:10 majority.

MEPs critical of the proposals have vowed to continue to oppose the measures, and the EU parliament will eventually need to vote as a whole.


EU Member State representatives in the EU Council will also need to vote on the reforms before the directive can become law. Though, as it stands, a majority of European governments appear to back the proposals.

European digital rights group EDRi, a long-standing critic of Article 13, has a breakdown of the next steps for the copyright directive here. It’s possible there could be another key vote in the parliament next month — ahead of negotiations with the European Council, which could be finished by fall. A final vote on a legally checked text will take place in the parliament — perhaps before the end of the year.

Derailing the proposals now essentially rests on whether enough MEPs can be convinced it’s politically expedient to do so — factoring in a timeline that includes the next EU parliament elections, in May 2019.


A coalition of original Internet architects, computer scientists, academics and supporters — including Sir Tim Berners-Lee, Vint Cerf, Bruce Schneier, Jimmy Wales and Mitch Kapor — penned an open letter to the European Parliament’s president to oppose Article 13, warning that while “well-intended” the requirement that Internet platforms perform automatic filtering of all content uploaded by users “takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users”.

“As creators ourselves, we share the concern that there should be a fair distribution of revenues from the online use of copyright works, that benefits creators, publishers, and platforms alike. But Article 13 is not the right way to achieve this,” they write in the letter.

“By inverting this liability model and essentially making platforms directly responsible for ensuring the legality of content in the first instance, the business models and investments of platforms large and small will be impacted. The damage that this may do to the free and open Internet as we know it is hard to predict, but in our opinions could be substantial.”

The Wikimedia Foundation also blogged separately, setting out some specific concerns about the impact that mandatory upload filters could have on Wikipedia.

“Any sort of law which mandates the deployment of automatic filters to screen all uploaded content using AI or related technologies does not leave room for the types of community processes which have been so effective on the Wikimedia projects,” it warned last week. “As previously mentioned, upload filters as they exist today view content through a broad lens, that can miss a lot of the nuances which are crucial for the review of content and assessments of legality or veracity.”

More generally, critics warn that expressive and creative remix formats like memes and GIFs — which have come to form an integral part of the rich communication currency of the Internet — will be at risk if the proposals become law…


Regarding Article 11, Europe already has experience with a neighboring right for news, after an ancillary copyright law was enacted in Germany in 2013. But local publishers ended up granting Google free consent to display their snippets after they saw traffic fall substantially when Google stopped showing their content rather than pay for it.

Spain also enacted a similar law for publishers in 2014, but its implementation required publishers to charge for using their snippets — leading Google to permanently close its news aggregation service in the country.

Critics of this component of the digital copyright reform package also warn it’s unclear what kinds of news content will constitute a snippet, and thus fall under the proposal — even suggesting a URL including the headline of an article could fall foul of the copyright extension; ergo that the hyperlink itself could be in danger.

They also argue that an amendment giving Member States the flexibility to decide whether or not a snippet should be considered “insubstantial” (and thus freely shared) or not, does not clear up problems — saying it just risks causing fresh fragmentation across the bloc, at a time when the Commission is keenly pushing a so-called ‘Digital Single Market’ strategy.

“Instead of one Europe-wide law, we’d have 28,” warns MEP Julia Reda on that point. “With the most extreme becoming the de-facto standard: To avoid being sued, international internet platforms would be motivated to comply with the strictest version implemented by any member state.”

However several European news and magazine publisher groups have welcomed the committee’s backing for Article 11. In a joint statement on behalf of publishing groups EMMA, ENPA, EPC and NME a spokesperson said: “The Internet is only as useful as the content that populates it. This Publisher’s neighbouring Right will be key to encouraging further investment in professional, diverse, fact-checked content for the enrichment and enjoyment of everyone, everywhere.”

Returning to Article 13, the EU’s executive, the Commission — the body responsible for drafting the copyright reforms — has also been pushing online platforms towards pre-filtering content as a mechanism for combating terrorist content, setting out a “one hour rule” for takedowns of this type of content earlier this year, for example.

But again critics of the copyright reforms argue it’s outrageously disproportionate to seek to apply the same measures that are being applied to try to clamp down on terrorist propaganda and serious criminal offenses like child exploitation to police copyright.

“For copyrighted content these automated tools simply undermine copyright exceptions. And they are not proportionate,” Reda told us last year. “We are not talking about violent crimes here in the way that terrorism or child abuse are. We’re talking about something that is a really widespread phenomenon and that’s dealt with by providing attractive legal offers to people. And not by treating them as criminals.”

Responding to today’s committee vote, Jim Killock, executive director of digital rights group, the Open Rights Group, attacked what he dubbed a “dreadful law”, warning it would have a chilling effect on freedom of expression online.

“Article 13 must go,” he said in a statement. “The EU Parliament’s duty is to defend citizens from unfair and unjust laws. MEPs must reject this law, which would create a Robo-copyright regime intended to zap any image, text, meme or video that appears to include copyright material, even when it is entirely legal material.”

Also reacting to the vote today, Monique Goyens, director general of European consumer rights group BEUC, said: “The internet as we know it will change when platforms will need to systematically filter content that users want to upload. The internet will change from a place where consumers can enjoy sharing creations and ideas to an environment that is restricted and controlled. Fair remuneration for creators is important, but consumers should not be at the losing end.”

Goyens blamed the “pressure of the copyright industry” for scuppering “even modest attempts to modernize copyright law”.

“Today’s rules are outdated and patchy. It is high time that copyright laws take into account that consumers share and create videos, music and photos on a daily basis. The majority of MEPs failed to find a solution that would have benefitted consumers and creators,” she added in a statement.


via:  techcrunch


Ease the Squeeze – Cyber Security with Small Teams

The competition is fierce; each team looking to find the best talent and get the most from every member. Sometimes, to fill a position you have to go to your bench, but this is a battle, and you are in it to win it.

No, it isn’t the national team looking to grab top honors at the World Cup, it’s your cyber security team working to defend the enterprise every day, find top talent and bring in reserves from IT. The cyber security skills gap is well-documented, and those who do have the skills are highly sought after. Having those members on your team is a huge boon, but keeping them remains a challenge when other firms are aggressively seeking that talent.

Sometimes the best approach to filling out the team is training from within, bringing IT staff into the security fold. While this approach is a great way to meet the recruitment challenge, it costs time and money to invest in training staff and bring them up to speed. The investment is certainly worthwhile if you are able to build your security talent pool, but even this strategy has its limits.

The fact is that there is rarely enough budget to hire all the people you need, and with the growing requirements and responsibilities of cyber security, organizations are looking to do more and more with less and less.

The squeeze is real. With an ever-increasing number of tools, platforms, operating systems and threats, security teams are forced to focus on the most critical assets while leaving others vulnerable. With businesses moving to the cloud, setting up hybrid environments and adopting DevOps practices, keeping up with the latest technologies and trends takes discipline, and doing so while not falling behind on your current environment is an impressive juggling act.

Security teams are talented, but there are only so many balls they can keep in the air by themselves. One way to keep those balls from hitting the ground is automation.

Automating manual tasks that could be scripted or handled by software workflows will make the team more efficient. Beware, though, that automation has a downside: a recent study found that automation may actually make the skills gap worse. This isn’t to say you shouldn’t automate – nobody wants to click buttons when a machine can do it for them – but it does mean that automation can’t fully replace skilled cyber security professionals.
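As a concrete illustration of the kind of manual task worth scripting, here is a minimal sketch that flags source IPs with repeated failed logins in an authentication log. The log format, function name, and threshold are all hypothetical; the point is only that routine triage like this is easy to hand off to software.

```python
# Sketch: flag source IPs with repeated failed logins, the kind of
# repetitive triage that is easy to script. The log format here is a
# simplified, hypothetical one where the source IP is the last field.
from collections import Counter

def flag_brute_force(log_lines, threshold=5):
    """Return the IPs with at least `threshold` failed login attempts."""
    failures = Counter(
        line.split()[-1]                      # last field: source IP
        for line in log_lines
        if "FAILED LOGIN" in line
    )
    return sorted(ip for ip, count in failures.items() if count >= threshold)

# Six failures from one IP, a single failure from another:
lines = ["Jul 28 10:00:0%d FAILED LOGIN from 203.0.113.7" % i for i in range(6)]
lines += ["Jul 28 10:01:00 FAILED LOGIN from 198.51.100.2"]
print(flag_brute_force(lines))  # ['203.0.113.7']
```

A script like this could run on a schedule and open a ticket automatically, freeing an analyst from scanning logs by hand.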

Speaking of skilled professionals, another way to ease the squeeze is to pass some of those balls off to a juggling partner. When it comes to cyber security, this can mean bringing in managed services. A managed services partner can take on many of the administrative and monitoring tasks that fill up the hours of a security analyst’s day.

These teammates allow small teams to focus on their strategic objectives and provide valuable expertise to bolster core security staff.

Finding a trusted partner for your team is an important job, as you need to ensure the managed service is providing the right tools, right information, and right guidance to make your team more efficient and your enterprise more secure. It’s not just a matter of finding someone to operate a set of tools; you need someone to be another expert on your team, come alongside them and deliver the valuable insights those tools provide.

Integrating into your organization’s business processes, change management system, and analytics tools is also an important consideration. You don’t want to simply add yet another set of security tools that provide disparate information that someone then needs to make sense of. Everything should come together to provide a single view into the state of your security environment.

If you are a small team looking to ease the squeeze of managing your cyber security, whether it’s in the cloud or on-premises, consider training from within, implementing automation, and adding a trusted managed services partner. Tripwire’s managed services offering ExpertOps is a great way to add strength to your security team while saving money, as it combines the tools you need with security expertise tailored to your organization’s specific context.

To learn more about great security with small teams, download the ExpertOps Solution Brief.


via:  tripwire


157 GB of sensitive data from Tesla, GM, Toyota & others Exposed

The IT security researchers at cyber resilience firm Upguard discovered a massive trove of highly sensitive data publicly available for anyone to access. The data belonged to hundreds of automotive companies including Tesla, Ford, Toyota, GM, Fiat, ThyssenKrupp, and Volkswagen, thanks to a publicly exposed server owned by Level One Robotics, a Canadian firm providing industrial automation services.

The data was discovered on July 1st, 2018, when 157 gigabytes of files (47,000 documents) were found on a server without any security. The analysis of the exposed data revealed that it includes trade secrets and other sensitive information from the automotive giants, including scanned copies of passports, driver licenses, invoices, banking data, contracts, non-disclosure agreements, robotic configurations, and 10 years of assembly line schematics.

157 GB of sensitive data from Tesla, GM, Toyota, others exposed online

Image credit: Upguard

According to Upguard’s blog post, “The data was exposed via rsync, a common file transfer protocol used to mirror or backup large data sets. The rsync server was not restricted by IP or user, and the data set was downloadable to any rsync client that connected to the rsync port.”
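To make the quoted finding concrete: an rsync daemon that is not restricted by IP or user will answer anonymous clients. The host below is a placeholder, not Level One’s actual server; this is only a sketch of how any rsync client could have enumerated and mirrored the exposed data.

```shell
# List the modules (top-level shares) an rsync daemon exposes;
# an unrestricted server answers without any credentials.
# rsync.example.com is a placeholder host.
rsync rsync://rsync.example.com/

# Mirror an exposed module locally (-a preserves structure, -v is verbose).
rsync -av rsync://rsync.example.com/backups ./dump/
```

Restricting the daemon with the `hosts allow` and `auth users` directives in rsyncd.conf prevents exactly this kind of anonymous access.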


Level One Robotics was informed about the breach on July 9th, and at the time of publishing this article the files had been taken offline. However, it is unclear if the data was accessed by anyone other than Upguard. If it was, it could be a disaster for the companies, since automotive firms prefer to keep their plans secret to prevent competitors from accessing them.


“Level One takes these allegations very seriously and is diligently working to conduct a full investigation of the nature, extent, and ramifications of this alleged data exposure,” Level One chief executive Milan Gasko told The New York Times. “In order to preserve the integrity of this investigation, we will not be providing comment at this time.”

There was no comment from the automotive firms affected by the breach.


via:  hackread

This Is How Much a ‘Mega Breach’ Really Costs

The average cost of a data breach is $3.86 million, but breaches affecting more than 1 million records are far more expensive.

Companies hit with a data breach pay an average of $3.86 million around the world, marking a 6.4% increase from last year. It’s no small amount for any company, but a few million is only a small fraction of the cost of “mega breaches,” which compromise at least 1 million records.

The “2018 Cost of a Data Breach Study,” sponsored by IBM Security and conducted by the Ponemon Institute, annually evaluates the total cost of security incidents. This marks the first time researchers calculated costs associated with breaches ranging from 1 million to 50 million lost records.

So what’s the damage? It turns out massive security breaches come with equally large price tags, ranging from $40 million for 1 million records lost to $350 million for 50 million records lost.

Researchers had been wanting to dig into the financial impact of mega breaches but previously lacked the data to do so, explains Caleb Barlow, vice president at IBM Security.

“You have to remember if we go back three or four years ago, when you got into these scenarios, unless they were credit-card-related, companies didn’t need to disclose,” he says. Now, as a result of breach disclosure laws, analysts have more data to work with.

As for the overall price increase of 6.4%, Barlow says the most concerning aspect isn’t the percentage itself, but the fact it continues to grow at all. The potential impact on consumers, and the companies supporting them, has become significant enough to gain board-level attention.

“With the average cost just under $4 million – and [we’re] not talking about mega breaches when we say $4 million – the fact this continues to grow is indicative that we as an industry haven’t got our arms around this yet,” he says.

According to the study, nearly half of data breaches (48%) come from malicious or criminal attacks, which are the most expensive, at $157 per capita. Twenty-seven percent are caused by human error – for example, negligent employees or contractors ($131 per capita). One-quarter result from system glitches, including both technical and business process failures ($128 per capita).
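Combining the study's figures, a blended per-record cost across all three root causes can be computed as a weighted average. The individual figures come straight from the report; rolling them into one expected value is our own back-of-envelope step:

```python
# Blended per-record breach cost across the study's three root causes.
# The shares and per-capita costs are the article's figures; combining
# them into a single expected value is our own illustrative calculation.

causes = {
    # cause: (share of breaches, cost per record in USD)
    "malicious/criminal": (0.48, 157),
    "human error":        (0.27, 131),
    "system glitch":      (0.25, 128),
}

expected_cost = sum(share * cost for share, cost in causes.values())
print(f"${expected_cost:.2f} per record")  # about $142.73
```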

Hundreds of factors influence the cost of a data breach. Third-party involvement is the most significant: If a third party causes an incident, it costs the company about $13 per record stolen, the report reveals. “Every company nowadays is not a product only of what they do, but their supply chain as well,” Barlow points out. Extensive cloud migration and compliance failure both contribute to the growing cost, adding about $12 per record to the total expense.

One of the factors researchers can now better calculate is the reputational side of breach cost, including customer churn and brand damage. If a customer decides not to conduct business with a company due to a breach, or to wait to process a transaction, it can have a devastating impact, Barlow continues. Attackers are taking notice.

“Not only is that potential impact understood now in the cost of a data breach, but it’s something the adversary is very much aware of as well,” Barlow says. Companies with less than 1% loss of existing customers have an average breach cost of $2.7 million. Researchers estimate those with a churn rate over 4% face an average cost of $4.9 million.

Cost of data breach disclosure is another pricey factor, and it’s highest in the United States. Not only is the attack surface larger there compared with other nations, but there are also 49 unique breach disclosure laws to deal with. Companies will likely have to handle the process differently in each jurisdiction where they do business, increasing the cost of alerting users.

Fortunately, a few practices can lessen the financial burden of a data breach.

“There are definitely some things that can reduce the cost,” says Barlow, and not just breach prevention. This marks the second year in a row that incident response – having a team and plan in place for remediation – is the top cost-cutting measure ($14 per record).

Extensive use of encryption is another cost saver, cutting about $13 per record, followed by business continuity management (BCM) involvement and employee training ($9.30 each), participation in threat sharing ($8.70), artificial intelligence platforms ($8.20), and use of security analytics ($6.90).
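Tabulating those mitigators side by side shows how they stack up. Note the report does not say the savings are additive, so the combined total below is only an optimistic sketch under that assumption:

```python
# Per-record cost mitigators listed in the 2018 Cost of a Data Breach Study.
# Whether these savings actually stack additively is NOT stated in the
# report; the combined total is an upper-bound sketch under that assumption.

savings_per_record = {
    "incident response team/plan": 14.00,
    "extensive encryption":        13.00,
    "BCM involvement":              9.30,
    "employee training":            9.30,
    "threat intel sharing":         8.70,
    "AI platforms":                 8.20,
    "security analytics":           6.90,
}

total = sum(savings_per_record.values())
print(f"combined savings if additive: ${total:.2f} per record")  # $69.40
```

Against a per-record cost in the $130–$160 range, even partial adoption of these measures can meaningfully shrink the bill.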


via:  darkreading

Hackers automate the laundering of money via Clash of Clans

According to a new report, popular smartphone games such as “Clash of Clans” are being used to launder hundreds of thousands of dollars on behalf of credit card thieves.

Researchers at Kromtech Security describe how they first came across the money-laundering ring in mid-June when they analyzed an unsecured MongoDB database.

The database, which was freely accessible to the public without a password, contained thousands of credit card details. However, the researchers quickly surmised that they had not stumbled across an all-too-familiar story of a corporation being sloppy with its customer data but rather a database belonging to credit card thieves (commonly known as carders).

And this particular gang was hoping to launder money stolen from these credit card accounts through mobile games.

As anyone who has played many of the most popular smartphone games will know, the demand for in-game currency is substantial. Many players are addicted to the notion of advancing in the game or frustrated by a free game’s mechanics that force them to wait a long period of time for features to be unlocked. Inevitably, this has resulted in some players trying to find unofficial shortcuts to make progress.

The security researchers realized that they were dealing with a carder gang who had created a sophisticated automated mechanism for creating fake Apple ID accounts with stolen card information and buying virtual “gold”, “gems”, and other in-game power-ups within games.

These virtual goodies would then be sold to other game players on third-party markets such as G2G. In short, the gang was receiving money in exchange for the game currency or power-ups, without making any obvious link to the stolen credit card data.

In this particular instance, the fraudsters are said to have targeted popular games such as “Clash of Clans” and “Clash Royale” as well as Kabam’s “Marvel Contest of Champions”. Kromtech says that these three games alone have over 250 million aggregate users, generating approximately $330 million in revenue each year.

The sheer popularity of such games, and the money they generate, was clearly too tempting for the criminals to resist.

Supercell, developer of “Clash of Clans” and “Clash Royale”, warns players not to be duped into buying cheap gems or diamonds from unauthorized third-party sites. Not only could your account be permanently banned, but you could be handing control of your Apple ID and Google Play account over to criminals:

Certain websites and individuals might offer cheaper gems/diamonds. Don’t be fooled – it’s a scam.

Such services request private login data (such as Apple ID, Google Play credentials, etc) in order to access your game account. These vendors will gain access to your account and oftentimes, hijack the account and try selling it to other players.

IMPORTANT: If you release your private information/credentials to 3rd parties, you’re permanently placing your game and financial/online security in a high-risk situation.

Consequences of misconduct: Purchasing gems or diamonds from 3rd party vendors can lead to revoked in-app currency and can even get your account permanently banned.

In the opinion of the researchers, more can be done to prevent organized criminals from laundering money via mobile games. They argue that more steps should be taken to better verify credit card details, names, and addresses when Apple ID accounts are created. Furthermore, service providers are called upon to better secure their account creation processes from being abused by automated tools. And both Apple and the game developers themselves are urged to improve policy enforcement and better track abusers.

Of course, we probably never would have known this criminal scheme was taking place if the money launderers hadn’t carelessly left their database of credit card details exposed on the public internet.


via:  tripwire