Monthly Archives: May 2016

Dropbox Addresses Security Concerns for New Initiative’s Kernel Access

Dropbox has responded to security concerns regarding one of its new technology’s abilities to obtain kernel access.

Back in April, the secure file sharing and storage service announced “Project Infinite,” an initiative which will help revolutionize the way Dropbox interfaces with a user’s computer.

Dropbox software engineer Damien Deville provides more information in a blog post.

“Traditionally, Dropbox operated entirely in user space as a program just like any other on your machine. With Dropbox Infinite, we’re going… into the kernel—the core of the operating system. [We’re] evolving from a process that passively watches what happens on your local disk to one that actively plays a role in your filesystem.”

Dropbox currently overlays a green check icon on all files that are available locally. Project Infinite will add a cloud icon as a second overlay that indicates a file is available online but not yet locally. Users can therefore download that file and interact with it as they would any other file.

Dropbox has also published a video demonstrating how the new initiative works.

Dropbox designed Project Infinite to grant users access to all of their saved files regardless of how much space they have available locally on their hard drives.

While this explanation might appeal to end users, Sam Bowne, who teaches Ethical Hacking at City College San Francisco, is worried about the level of access the new initiative would require. Per Bowne’s conversation with Motherboard:

“By moving from userland to kernel-land, Dropbox will take on a large responsibility. The way Dropbox works now, it’s like a vendor setting up a cart outside your home selling hot dogs. But they are now proposing to copy the keys to your house, move in, and live with you.”

Bowne and other security experts worry that if flaws existed in Project Infinite, attackers could use those vulnerabilities to escalate their access and assume control of a user’s computer.

In light of these concerns, Deville has released the following update from Dropbox:

“We wanted to address some comments about Project Infinite and the kernel. It’s important to understand that many pieces of everyday software load components in the kernel, from simple device drivers for your mouse to highly complex anti-virus programs. We approach the kernel with extreme caution and respect. Because the kernel connects applications to the physical memory, CPU, and external devices, any bug introduced to the kernel can adversely affect the whole machine. We’ve been running this kernel extension internally at Dropbox for almost a year and have battle-tested its stability and integrity.

“File systems exist in the kernel, so if you are going to extend the file system itself, you need to interface with the kernel. In order to innovate on the user’s experience of the file system, as we are with Project Infinite, we need to catch file operation events on Dropbox files before other applications try to act on those files. After careful design and consideration, we concluded that this kernel extension is the smallest and therefore most secure surface through which we can deliver Project Infinite. By focusing exclusively on Dropbox file actions in the kernel, we can ensure the best combination of privacy and usability.

“We understand the concerns around this type of implementation, and our solution takes into consideration the security and stability of our users’ experience, while providing what we believe will be a really useful feature.”

Dropbox is currently in the process of testing Project Infinite. It intends to roll out the initiative to a broader set of users soon.

News of this announcement comes two years after the file-sharing service confirmed the existence of a vulnerability that exposed sensitive files associated with shared links, allowing them to turn up in Google search results.

Via: tripwire

Hacker Selling 65 Million Passwords From Tumblr Data Breach

Earlier this month, Tumblr revealed that a third party had obtained access to a set of e-mail addresses and passwords dating back to early 2013, before the company was acquired by Yahoo.

At the time, Tumblr did not reveal the number of affected users. In reality, credentials for around 65,469,298 accounts were leaked in the 2013 Tumblr data breach, according to security expert Troy Hunt, who runs the site Have I Been Pwned.

“As soon as we became aware of this, our security team thoroughly investigated the matter. Our analysis gives us no reason to believe that this information was used to access Tumblr accounts,” read Tumblr’s blog.

A hacker going by the handle “peace_of_mind” is selling the Tumblr data for 0.4255 Bitcoin ($225) on the darknet marketplace The Real Deal.

The compromised data includes 65,469,298 unique e-mail addresses and “salted & hashed passwords.”

The same hacker is also selling compromised login data from Fling, LinkedIn, and MySpace, and may well have more data sets yet to sell.

Salting makes the passwords harder to crack, but affected users should still change theirs.
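To see why the “salted & hashed” protection matters, here is a minimal sketch of salted password hashing using Python’s standard library. The iteration count and password strings are illustrative only, not details of Tumblr’s actual scheme.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; real deployments tune this upward

def hash_password(password, salt=None):
    """Hash a password with a random per-user salt using PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-derive the digest with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)

salt, stored = hash_password("hunter2")
print(verify_password("hunter2", salt, stored))   # True
print(verify_password("letmein", salt, stored))   # False
```

Because each account gets its own random salt, identical passwords produce different stored digests, so an attacker must brute-force each account separately. Weak passwords remain crackable even so, which is why changing them is still advisable.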

Via: thehackernews

Microsoft, Facebook To Establish Innovative New Subsea Cable Across Atlantic Ocean

Microsoft Corporation and Facebook disclosed an agreement to build a new, state-of-the-art subsea cable across the Atlantic. According to the companies, the new “MAREA” cable will help meet growing customer demand for high-speed, reliable connections for the cloud and online services of Microsoft, Facebook, and their customers.

The two companies indicated that the parties have cleared conditions to go Contract-In-Force (CIF) with their plans, and that the construction of the cable would begin in August 2016 with completion expected in October 2017.

Microsoft’s GM for Datacenter Strategy, Planning & Development, Christian Belady, commented, “As the world is increasingly moving toward a future based on cloud computing, Microsoft continues to invest in our cloud infrastructure to meet current and future growing global demand for our more than 200 cloud services, including Bing, Office 365, Skype, Xbox Live and the Microsoft Azure platform.”

He continued, “The MAREA transatlantic cable we’re building with Facebook and Telxius will provide new, low-latency connectivity that will help meet the increasing demand for higher-speed capacity across the Atlantic. By building the cable along this new southern route, we will also increase the resiliency of our global network, helping ensure even greater reliability for our customers.”

The two companies are aligning to accelerate the development of next-generation Internet infrastructure and to support the explosion of data consumption and the rapid growth of their respective cloud and online services. They pointed out that MAREA would be the highest-capacity subsea cable ever to cross the Atlantic, with eight fiber pairs and an initial estimated design capacity of 160 Tbps.

Similarly, Facebook’s VP of Network Engineering, Najam Ahmad, said, “Facebook wants to make it possible for people to have deep connections and shared experiences with the people who matter to them most — anywhere in the world, and at any time.”

He added, “We’re always evaluating new technologies and systems in order to provide the best connectivity possible. By creating a vendor-agnostic design with Microsoft and Telxius, we can choose the hardware and software that best serves the system and ultimately increase the pace of innovation. We want to do more of these projects in this manner — allowing us to move fast with more collaboration. We think this is how most subsea cable systems will be built in the future.”

via: benzinga

Business users get live chat in Office Online

Microsoft takes another swing at Google’s productivity suite with the new feature

A new Skype for Business integration makes it possible for users to hold live chats to discuss documents stored in OneDrive for Business and SharePoint Online.

Microsoft’s attempts to catch up with Google in the online collaboration space took a step forward Wednesday, when the company announced that it’s giving business users live chat in Office Online.

The new feature will allow users to discuss documents stored in SharePoint and OneDrive for Business using chat sessions powered by Skype for Business.

When more than one person is working on a shared document inside Word, Excel, OneNote or PowerPoint Online, they’ll see a chat button show up in the Web app’s toolbar. When clicked, it’ll open a chat sidebar so everyone with the document open can discuss it.

It’s an enterprise-grade improvement to the Skype chat Microsoft already offers for consumers using Office Online, part of the company’s push to better compete with other productivity suites that feature real-time collaboration. Skype for Business chats complement other functionality in Office Online, like support for real-time co-authoring of documents shared between users.

The chats aren’t designed to replace traditional document collaboration tools like leaving comments and tracking changes, but they can help a team of people all looking at the same document to better work together in a more rapid-fire way.

It’s a feature that has been core to Google’s Docs productivity suite for quite some time, and this update means that businesses using Office 365 have another reason to consider sticking with Microsoft rather than switching to one of its competitors.

Microsoft has recently rolled out a number of other updates to Office, including new watch face support in Outlook on Android Wear and the launch of SharePoint for iOS, which the company announced earlier this month.

In addition, Microsoft is offering discounts for consumers who want to buy either Office 365 Home or its subscription-free Office Home and Student 2016 software.

Via: itworld

Google Announces Plans to Help Kill Off Passwords on Android Devices

Google has announced plans that will help kill off the need for passwords on Android mobile devices.

During his Friday talk at Google I/O, an annual software developer conference, Daniel Kaufman of the tech giant’s Advanced Technology and Projects (ATAP) division revealed the upcoming roll-out of Trust API.

Instead of relying on passwords, Trust API will use biometrics like facial recognition as well as the way a user types, swipes, and even walks to evaluate user behavior, reports MIT’s Technology Review.

Each of those metrics will help build a “trust score.” If that score remains above a certain threshold, the user will remain authenticated. If not, they will need to provide more information to re-authenticate themselves.
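The threshold logic described above might be sketched as follows. The signal names, weights, and threshold here are illustrative assumptions, not details of Google’s actual Trust API.

```python
# Hypothetical sketch of continuous authentication via a weighted trust score.
# Signal names, weights, and the threshold are illustrative assumptions only.

THRESHOLD = 0.7

def trust_score(signals):
    """Combine per-sensor confidence values (0.0-1.0) into one weighted score."""
    weights = {"face": 0.4, "typing": 0.3, "gait": 0.2, "location": 0.1}
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

def check(signals):
    """Stay authenticated while the score clears the threshold; else re-prompt."""
    score = trust_score(signals)
    return "authenticated" if score >= THRESHOLD else "re-authentication required"

print(check({"face": 0.9, "typing": 0.8, "gait": 0.7, "location": 1.0}))
# authenticated (score 0.84)
print(check({"face": 0.2, "typing": 0.5}))
# re-authentication required (score 0.23)
```

The key design point is that no single sensor is decisive: a low reading from one signal (say, an unusual gait) can be offset by strong readings elsewhere, and the score degrades gracefully as signals go missing.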

Kaufman explained that Trust API will run in the background and use an Android device’s sensors to constantly monitor a user’s behavior. As quoted by Mashable:

“We have a phone and these phones have all these sensors in them, why couldn’t it just know who I was so I don’t need a password, I should just be able to work.”

Some industry experts are excited about Trust API, just one element of Google’s “Project Abacus” initiative.

For example, Richard Lack of customer identity management firm Gigya told The Guardian that approaches like Google’s will pay off:

“Consumers tell us that they are struggling to remember what is now an average of over 100 passwords in Europe. At a time when the number of devices we own is rising sharply, this frustration has relegated the registration process to being the most broken thing about the internet. The future lies in methods of authentication without passwords, which consumers clearly favour, both in terms of convenience and enhanced security. Biometric authentication is a powerful enabler, allowing businesses smart enough to deploy it to significantly increase rates of registration, gaining data and insight about their customers, while also increasing customer security. This is a win/win scenario which sounds the death-knell for awkward and insecure passwords sooner than we may imagine.”

Even so, privacy advocates are likely to resent Google’s use of mobile sensors to continuously track their habits.

Kaufman said the tech giant plans to test Trust API with “several very large financial institutions” in June, with the intention of making the tool available to every Android developer by the end of the year.

In the meantime, users who are concerned about their password security should consider using a password manager.

Via: tripwire

4 Reasons Why the Cloud Is More Secure Than Legacy Systems

We tend to fear what we do not understand. Especially when it comes to new technologies. We oftentimes worry… and worry some more… before finally embracing a new gadget, platform, or feature and deciding to incorporate it into our lives.

Brian David Johnson, futurist at Intel, is responsible for creating models that predict how people will interact with technology in the next 10 to 15 years. He describes four stages to show the paradoxical relationship between fear and technology.

  • Fear – “It will kill us all!”
  • Personal struggle to make sense of the new technology – “It will steal my daughter!”
  • Denial of its usefulness – “I’ll never use it!”
  • Acceptance – “What are you going on about?”

The pattern applies to many situations, including the Internet, computers, smartphones, wearables and the Cloud.

When it comes to cloud computing, businesses and IT professionals alike remain especially wary.

Indeed, enterprises express mixed feelings about security in the cloud despite wide-scale adoption. More than 90 percent of enterprises in the United States use the Cloud, and 52 percent of small and medium-sized businesses (SMBs) use the cloud specifically for storage.

These adoption trends demonstrate that most businesses have passed through the fear stage and now reside in stages two and three, where they are still struggling to make sense of the new technology or denying its usefulness.

This article explores four reasons why the Cloud is more secure than on-premise backup, storage and computing systems, otherwise known as “legacy systems.” It aims not only to demonstrate cloud computing’s usefulness but also to address existing concerns about security.


How does legacy system security compare to cloud security?

Sixty-four percent of IT professionals say the Cloud is more secure than legacy systems. But breaking the data down shows that 38 percent see the Cloud only as “somewhat more secure” rather than “much more secure.”

This leaves roughly a quarter of respondents unsure whether the two significantly differ, with about 10 percent doubting the efficacy of cloud security overall.

High-profile data breaches at Target, Home Depot, and Apple’s iCloud received a lot of media attention. However, the media largely ignored that all three breaches resulted from human error, not shortcomings in the Cloud.

For example, in the cases of Target and Home Depot, hackers got ahold of personal information from third-party vendors and not by hacking the Cloud.

Incidents of this nature, combined with a lack of knowledge about the safeguards that exist to protect data stored in the Cloud, paralyze businesses in stages two and three of the fear-technology paradox.

“I think what we’re seeing now, when it comes to the Cloud and security, is a bit of a myth that the Cloud is less secure. I’ve heard this many times, but it does not seem to be true in real life.” — David Linthicum, Senior Vice President, Cloud Technology Partners

Four features of cloud security demonstrate why fear of the Cloud is more of a myth than a reality.

1. Strong Perimeters and Surveillance

Legacy system security can be unreliable and difficult to implement. These systems, which include the terminal, workstation, and browser, originated before computer crimes became prevalent, when preventing physical access to on-site computers was often enough to block hackers.

But businesses still rely on these systems today, often using them in tandem with cloud infrastructure and backup and recovery services. This makes legacy systems increasingly vulnerable to hackers.

Additionally, addressing legacy system security concerns is a multi-step process, with the best option often being to replace the legacy system itself.

In most offices, a locked door is the main defense to protect IT equipment, important files, and personal- and business-related data.

In contrast, the top cloud service providers’ (CSP) data centers have multi-layered security defenses. Precautions include high fences, barbed wire, concrete barriers, guards that patrol the area, and security cameras.

These defenses not only prevent unauthorized entry to the data center but also enable monitoring of activity near the space.

2. Controlled Access

When data is stored off-site in the Cloud, employees, vendors and visitors are physically separated from a company’s mission-critical data.

This lack of physical access makes it more difficult for third parties to stumble across data and misuse it, reducing the amount of human risk.

3. Cyber Security Expertise

CSPs specialize in keeping data safe. Cloud infrastructure is monitored at all times in order to head off potential security threats.

“If you were to look at the skill set in a single organization and compare it to another organization that specializes in a specific solution, all things being equal, you would expect the specialized company to provide the best service. This is how it is with the Cloud. The cloud vendor will have good, if not better, security and support for security than any one company.” — Duane Tharp, Vice President of Technical Sales and Services, Cloud-Elements

With the Cloud, you get access not only to the best data centers but also to highly skilled IT professionals.

4. Thorough and Frequent Auditing

CSPs undergo yearly audits to protect against flaws in their security systems. On-premise legacy systems, however, are not subject to this requirement.

“If you have an on-premise solution for five years, within those five years, it may get audited once, which leaves room for gaps in security to arise.” — Jason Reichl, CEO, Go Nimbly

Additionally, legacy systems can be difficult to update, especially as they grow alongside the company.

“Legacy systems are more difficult to keep updated because enterprises may have to go around to several hundred thousand platforms to check and update security systems. It’s easier for legacy systems to fall behind.” — David Linthicum, Senior Vice President, Cloud Technology Partners


As more businesses begin using the Cloud, it is inevitable that they will see tangible benefits, such as improved business efficiency, better access to data, and stronger security. These benefits will fuel the transition to stage four on the technology-fear continuum: acceptance.

Big data, virtual reality and the Internet of Things (IoT) are stealing the technology industry’s spotlight now. All three new technologies work in tandem with the Cloud.

Greater reliance on cloud services will quell remaining security concerns and indirectly show how the Cloud is more secure than antiquated on-premises systems.

Via: tripwire

Understanding Prioritization – Patches and Vulnerabilities

At Tripwire, one of the responsibilities of VERT (Vulnerability and Exposure Research Team) is the monthly publication of the Patch Priority Index (PPI). Equal parts science and art, the PPI is released by VERT researchers who deal with vulnerabilities resolved by these patches on a daily basis. When this process first began, it prompted a very interesting discussion among the project’s stakeholders.

At first, they looked at using scoring as a method of prioritizing patches and considered both our Tripwire IP360 Scoring System and CVSSv2. Neither system is designed to assist with the prioritization of patches; instead they’re designed to describe the criticality of a vulnerability.

While some may argue that we’re dealing in semantics at this point, the concepts are, in fact, very different. When you start to discuss vulnerability prioritization, there are common considerations:

  • What level of access can be gained by a compromise?
  • What is the attack vector?
  • How easily can the vulnerability be exploited?
  • Is the vulnerability being actively exploited?

These can be considered common because they surface in multiple scoring systems and most discussions around the severity of vulnerabilities.

Patches, on the other hand, resolve multiple vulnerabilities, which immediately implies that any prioritization will be much more complex. One immediate thought is to simply combine the vulnerability scores. Consider the following scenario:

You have a system with four vulnerabilities: A, B, C, and D. Patch X resolves A and B, while Patch Y resolves C and D. Time is limited, which patch do you apply first?

CVSS Scores

A – 10.0 (AV:N/AC:L/Au:N/C:C/I:C/A:C)

B – 0.8 (AV:L/AC:H/Au:M/C:N/I:N/A:P)

C – 6.0 (AV:L/AC:H/Au:S/C:C/I:C/A:C)

D – 7.2 (AV:L/AC:L/Au:N/C:C/I:C/A:C)

Ignoring the vectors for a moment, assume you combined CVSS scores with simple arithmetic. It would be easy to mistake the resulting inequality for a prioritization:

A + B = 10.0 + 0.8 = 10.8

C + D = 6.0 + 7.2 = 13.2

Since C + D > A + B, Patch Y appears to be the priority.

It’s easy to see how one could conclude that Patch Y (resolving C and D) should be applied first. This conclusion could be further strengthened by documentation from an Approved Scanning Vendor (ASV) noting that Patch Y resolves two vulnerabilities above the 4.0 CVSS fail threshold, as opposed to one. The CVSS vectors, though, indicate that the single 10.0 vulnerability is potentially more severe than either of the other two, and this could factor in when prioritizing patches. Even so, we’re only beginning to scratch the surface of the distinction between vulnerability prioritization and patch prioritization.
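The pitfall above can be made concrete in a few lines. This is only a sketch contrasting two naive ranking strategies over the example scores, not a real prioritization algorithm:

```python
# Sketch: why naive summation of CVSS base scores can mislead patch priority.
# The scores below are the example values from the scenario above.

patches = {
    "X": [10.0, 0.8],  # vulnerabilities A and B
    "Y": [6.0, 7.2],   # vulnerabilities C and D
}

# Naive approach: rank each patch by the sum of the scores it resolves.
by_sum = max(patches, key=lambda p: sum(patches[p]))

# Alternative: rank by the single worst vulnerability each patch resolves.
by_max = max(patches, key=lambda p: max(patches[p]))

print(by_sum)  # Y  (13.2 > 10.8)
print(by_max)  # X  (10.0 > 7.2)
```

The two strategies disagree, which is precisely the point: no purely arithmetic combination captures what the vectors reveal about the remotely exploitable 10.0 vulnerability.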

Vulnerability scoring is a science. While some would have you believe that there are subjective aspects to measuring the criticality of a vulnerability (e.g. media coverage and fancy names), it simply isn’t true. There’s a reason why the 1-5 scoring system for vulnerability severity didn’t last.

Vulnerability scoring is objective; there are repeatable steps in its reproduction and observed outcomes of exploitation. Since we can measure this, we can say, without a doubt, that this scoring is science. (For more information on vulnerability scoring, please see our earlier article series on the subject.)

The mistake that many of us make is assuming that because vulnerability scoring is a science, patch prioritization is also a science. After three separate experiences, I now feel I can say that it is not a science but an art. The first time I experienced this realization was, as mentioned above, during the development of the PPI. The second came at RSA in 2015.

There, I had submitted a Peer2Peer session on vulnerability scoring with the expectation that we would discuss the math and science behind various scoring algorithms. These sessions are incredibly beneficial because, unlike regular conference talks, they have only a loose outline and grow organically based on attendance. This session shifted in ways I never could have imagined and resulted in a truly great discussion around patch management.

During the conversation, questions were raised regarding the failure of vendors to factor in reboot requirements or patch installation complexity. This surprised me, as these are not aspects of a vulnerability, and there are no established systems of measurement for topics like “installation complexity.” It became clear that we were moving away from the science of vulnerability scoring. I was fascinated by some of the revelations made during the session and wrote down a number of points that I wanted to consider in later research.

This later research ended up being our patch management survey, which resulted in a white paper on the concept of Patch Fatigue. One of the questions we were able to ask was around the elements considered when prioritizing patches. The responses solidified my belief that patch prioritization is a well-practiced art.

While several pieces of objective data were referenced (CVSS, exploit availability and reboot requirements), there were also a number of references to subjective data. This included post patch configuration, multi-stage updates, internal policies, and online resources and publications.

All of this has reinforced my belief that patch prioritization is an art and, while there’s still plenty of science that we can incorporate and lots of subjective data that we could distill into objective buckets, it requires an experienced practitioner to be truly effective. That’s why, with over eleven decades of combined IT and security experience, VERT takes pride in our monthly Patch Priority Index and the effort that goes into it.

Via: tripwire

89 Percent of Healthcare Organizations Were Breached in the Past Two Years

And 45 percent were breached five or more times in the same period of time, a recent survey found.

According to the results of a recent survey of senior-level personnel at 91 healthcare providers and 84 business associates that handle protected health information (PHI) for healthcare organizations, fully 89 percent of healthcare organizations and 60 percent of business associates have experienced data breaches in the past two years.

The Ponemon Institute’s Sixth Annual Benchmark Study on Privacy & Security of Healthcare Data, sponsored by ID Experts, also found that 79 percent of healthcare organizations experienced two or more data breaches in the past two years, and 45 percent experienced five or more breaches.

The most commonly exposed data are medical records, followed by billing and insurance records and payment information.

Half of all data breaches in healthcare, the study found, are caused by criminal attacks, and the other half are caused by mistakes — unintentional employee actions, third-party errors and stolen computer devices.

Over the past two years, the average cost of a data breach for healthcare organizations is estimated to be more than $2.2 million, and the average cost of a data breach to business associates is more than $1 million.

Still, almost half of all healthcare organizations, and more than half of all business associates, have little or no confidence that they can detect all patient data loss or theft. In fact, 60 percent of business associates and 59 percent of healthcare organizations don’t think their organization’s security budget is sufficient to curtail or minimize data breaches.

And while 38 percent of healthcare organizations and 26 percent of business associates are aware of medical identity theft cases affecting their own patients and customers, 64 percent of healthcare organizations and 67 percent of business associates don’t offer any protection services for victims whose information has been breached.

“In the last six years of conducting this study, it’s clear that efforts to safeguard patient data are not improving,” Ponemon Institute chairman and founder Dr. Larry Ponemon said in a statement. “More healthcare organizations are experiencing data breaches now than six years ago.”

“Negligence — sloppy employee mistakes and unsecured devices — was a noted problem in the first years of this research and it continues,” Ponemon added. “New cyber threats, such as ransomware, are exacerbating the problem.”

When asked what type of security incident worries them the most, 69 percent of healthcare organizations listed negligent or careless employees, followed by cyber attackers (45 percent) and the use of insecure mobile devices (30 percent).

A separate Skycure study, based on millions of monthly security tests between October and December of 2015, found that 27.79 million devices with medical apps installed on them may also be infected with high-risk malware.

The Skycure study also found that 11 percent of mobile devices running an outdated operating system with high-severity vulnerabilities may have patient data stored on them, and that 14 percent of mobile devices holding patient data appear to have no passcode to protect them.

“Hackers have a giant bullseye on the healthcare sector right now, because they know that many organizations still rely on simplistic, dated approaches to cybersecurity,” Axcient CEO Justin Moore told eSecurity Planet by email. “Fact is, many organizations have already been breached, and the only way to both prevent and withstand attacks is by taking a multilayered approach.”

“IT resiliency today involves implementing protections for the organization, protecting related communities and supply chains from attack and then stopping existing attacks before they become breaches,” Moore added. “Until CIOs hit all three objectives, they’ll remain easy pickings for hackers.”

Via: esecurityplanet

SWIFT Acknowledges Major Malware Attack on Second Bank

The attack is ‘part of a wider and highly adaptive campaign targeting banks,’ according to SWIFT.

The SWIFT network recently announced that a second bank has been hit by a malware attack similar to the one that led to the theft of $81 million from Bangladesh Bank in February, the Guardian reports.

In the newer instance, the attack specifically targets the PDF Reader used by customers to download statements.

“Once installed on an infected local machine, the Trojan PDF reader gains an icon and file description that matches legitimate software,” SWIFT said in a statement. “When opening PDF files containing local reports of customer specific SWIFT confirmation messages, the Trojan will manipulate the PDF reports to remove traces of the fraudulent instructions.”

The second attack, according to SWIFT, “evidences that the malware used in the earlier reported customer incident was not a single occurrence, but part of a wider and highly adaptive campaign targeting banks.”

“The attackers clearly exhibit a deep and sophisticated knowledge of specific operational controls within the targeted banks — knowledge that may have been gained from malicious insiders or cyber attacks, or a combination of both,” SWIFT added.

The methods of attack (both in the Bangladesh Bank case and in the more recent one) are as follows, according to SWIFT:

  1. Attackers compromise the bank’s environment
  2. Attackers obtain valid operator credentials that have the authority to create, approve and submit SWIFT messages from customers’ back-offices or from their local interfaces to the SWIFT network
  3. Attackers submit fraudulent messages by impersonating the operators from whom they stole the credentials
  4. Attackers hide evidence by removing some of the traces of the fraudulent messages

Splunk security evangelist Matthias Maier told eSecurity Planet by email that the news of a second attack should be a wake-up call for banks worldwide. “These are not isolated incidents,” he said. “Serious investigations must follow given the custom built nature of the malware used in these attacks.”

“It appears to have been created by someone with an intimate knowledge of how the SWIFT software works as well as its business processes, which is cause for concern,” Maier added. “However, basic system monitoring at the bank would have stopped this at the server endpoint by tracking system changes in real time, triggering alerts to analysts.”
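Maier’s suggestion of tracking system changes in real time is essentially file-integrity monitoring. Here is a minimal polling sketch under stated assumptions: the file name is illustrative, and real products (Tripwire Enterprise, for example) hook change events at the kernel or agent level rather than re-hashing on demand.

```python
# Minimal sketch of file-integrity monitoring: record a baseline of file
# hashes, then flag any file whose contents later change.
import hashlib
import os
import tempfile

def snapshot(paths):
    """Map each path to the SHA-256 digest of its contents."""
    state = {}
    for path in paths:
        with open(path, "rb") as f:
            state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def detect_changes(baseline, paths):
    """Return the paths whose current digest differs from the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current.get(p) != baseline.get(p)]

# Demo with a throwaway file standing in for a monitored PDF reader binary.
demo_dir = tempfile.mkdtemp()
target = os.path.join(demo_dir, "pdf_reader.bin")
with open(target, "wb") as f:
    f.write(b"legitimate software")

baseline = snapshot([target])

with open(target, "wb") as f:  # simulate a Trojan replacing the binary
    f.write(b"trojanized software")

print(detect_changes(baseline, [target]))  # the tampered path is reported
```

A monitor of this kind would have flagged the moment the legitimate PDF reader was swapped for the Trojanized one, before any fraudulent statements were scrubbed.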

According to the results of the Financial Services Edition of the 2016 Vormetric Data Threat Report, 90 percent of IT security professionals in the financial services industry feel vulnerable to a data breach, and 44 percent have already experienced one.

The report, based on responses from 1,100 senior IT security executives at large enterprises, including more than 100 at U.S. financial services organizations, also found that the leading barriers to adoption of better data security include complexity (68 percent) and lack of staff (35 percent).

In response, 70 percent of respondents are planning to increase spending to protect sensitive data, and 48 percent plan to invest in data-at-rest defenses in the coming year.

Still, 66 percent view meeting compliance requirements as a “very” or “extremely” effective way to protect sensitive data.

“Financial services organizations continue to feel the heat from cyber attackers,” Vormetric vice president of marketing Tina Stewart said in a statement. “They are investing to help solve the problem, but surprisingly, are failing to connect the dots about the best solutions to use.”

“With the world’s financial data in their custody, the most effective way to protect this information, once networks and systems are penetrated, is to enhance data protection investments,” Stewart added.

Via: esecurityplanet

Hacking competitions that will get you noticed

Here are some of the most highly recommended hacking competitions that will get your name and skills noticed by the right people.

Hack the Pentagon

From the Hack the Pentagon announcement to the Facebook Hacker Cup, there are loads of opportunities for those new to security to either participate in educational hacking competitions or simply learn by watching others compete. Michiel Prins, co-founder, HackerOne, and Ryan Stortz, security researcher, Trail of Bits, offered up a list of popular competitions and what they like most about some of them.

Uber Engineering Bug Bounty

The engineering security team at Uber has published a bug hunter "treasure map" inviting hackers to find vulnerabilities in its services, including the backend systems that the Android and iOS apps communicate with. Prins said, "Uber's program is unique because it offers a first of its kind loyalty program and the treasure map gives hackers unprecedented transparency."

Yahoo’s Hack U

Hack U, run by the Yahoo Developer Network, offers a platform for different hacking competitions with "no rules or limitations." Prins said, "Yahoo! has a large footprint on the web and a diverse portfolio of products, so there is always something new for bug hunters to find. This makes it a great program for newer hackers."

GitHub at the core of it all

The GitHub Bug Bounty Program offers a minimum prize of $200. Prins said, “GitHub is a core product for nearly all development teams — if you are able to hack it and report a vulnerability you are potentially helping millions.”


Google Bug Hunter University

Unlike the unencumbered opportunities at Hack U, Google Bug Hunter University is much more explicit about its boundaries and expectations. "Google's program is great for bug hunters. They are very particular and transparent about how they determine bounty awards and what technology is in scope. Google's Bug Hunter University is also a great resource for hackers wanting to look for bugs in Google and any other program," Prins said.

Capture the Flag (CTF)

“Many competitions (mine included) target the CTF community and tend to punish new people. Much like jazz musicians, we build off of challenges from our peers to pay homage and to show off. Unfortunately this means challenge, sophistication, and difficulty goes way up in a horrible feedback loop,” Stortz said.

Competitions like PicoCTF and Microcorruption, by contrast, are specifically targeted at new players. "They are meant to slowly build up fundamental skills (and in the case of PicoCTF specifically – recruit you to Carnegie Mellon)," Stortz said.

A few more recommendations

Dropbox — It pays competitive bounties, stores a lot of data, and has many components, such as the iPhone app and the desktop syncing client. It is more than just a web app, which creates unique challenges and makes it a fun target for hackers.

CyberCompEx is another community of highly skilled and talented researchers looking to connect through an online platform and various competitions. You can engage in a competition or view past competitions to get a taste of what they are all about.

Some other sites worth exploring:

I have always enjoyed trying to gain access to things I’m not really supposed to play around with. I found Hack This Site a long time ago and I learned a lot from it.

HellBound Hackers is the quintessential site for hacking tutorials. Covering an expansive range of topics including ethics, social engineering and phreaking, the site’s articles hold an impressive wealth of material. With a community of almost 50,000 members it’s also one of the largest hacking sites out there, making it ideal for newbies and experienced pros alike.

In a similar vein to Hack This Site, Hacker Games offers a range of challenges and war games that should pique any budding hacker’s interest. While there’s not much in the way of actual tutorials, the site provides a great, safe avenue for investigating complex security setups.

Current list:

  • Over The Wire — Lots of small hacking challenges, such as code analysis, simple TCP communication applications, and crypto cracking.
  • We Chall — Similar to Over The Wire, with lots of challenges. It also maintains a large list of other sites offering similar challenges.
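To give a flavor of the "crypto cracking" challenges these wargame sites host, a classic entry-level exercise is breaking a Caesar cipher. The sketch below is illustrative only and not taken from any specific Over The Wire level: it brute-forces all 26 shifts and filters candidates using a known "crib" word expected in the plaintext.

```python
def caesar_shift(text, shift):
    """Shift each letter by `shift` positions, leaving other characters alone."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)


def brute_force(ciphertext, crib):
    """Try all 26 shifts; return (shift, plaintext) pairs containing the crib."""
    hits = []
    for shift in range(26):
        candidate = caesar_shift(ciphertext, -shift)
        if crib in candidate.lower():
            hits.append((shift, candidate))
    return hits
```

Because the keyspace is tiny, exhaustive search plus a crib is enough; harder challenge levels typically move on to frequency analysis, XOR ciphers, and real protocol traffic.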

Via: csoonline