
How automated incident response can help security

Automated incident response can benefit security both in the cloud and in traditional settings. Read on to learn what it can be used for and how it helps.

Despite the increase in breaches and security incidents we hear about regularly, many incident response teams are understaffed or struggling to find the right skill sets to get the work done.

Today, more enterprise incident response teams actively look for opportunities to automate processes that often take up too much time for highly skilled analysts, as well as those that require lots of repetition and provide little value in investigations. Common activities that many teams consider automating include the following:

  • Identifying and correlating alerts: Many analysts spend inordinate amounts of time wading through repetitive alerts and alarms from many log and event sources, then piecing together correlation strategies for similar events. While this is valuable in the later stages of investigations, it is also highly repetitive and can be automated to some degree (see the sketch after this list).
  • Identifying and suppressing false positives: This can be tedious work on a good day and overwhelming on a bad one. Identifying false positives can often be streamlined or automated using modern event management and incident response automation tools.
  • Initial investigation and threat hunting: Analysts need to quickly find evidence of a compromised system or unusual activity, and they often need to do so at scale.
  • Opening and updating incident tickets/cases: Due to improved integration with ticketing systems, event management and monitoring tools used by response teams can often generate tickets to the right team members and update these as evidence comes in.
  • Producing reports and metrics: Once evidence has been collected and cases are underway or resolved, generating reports and metrics can take a lot of analysts’ time.
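As a small illustration of the first two items, the sketch below collapses repetitive alerts that share a source and rule into correlated groups before an analyst ever sees them. The alert fields and the deduplication window are assumptions for illustration, not the schema of any particular SIEM:

    from collections import defaultdict
    from datetime import timedelta

    # Assumed alert shape: {"source_ip": ..., "rule": ..., "timestamp": datetime}
    DEDUP_WINDOW = timedelta(minutes=10)

    def correlate(alerts):
        """Group alerts sharing a source IP and rule; suppress near-duplicates."""
        groups = defaultdict(list)
        for alert in sorted(alerts, key=lambda a: a["timestamp"]):
            key = (alert["source_ip"], alert["rule"])
            bucket = groups[key]
            if bucket and alert["timestamp"] - bucket[-1]["timestamp"] < DEDUP_WINDOW:
                # Near-duplicate: bump a counter instead of surfacing a new alert.
                bucket[-1]["count"] += 1
            else:
                bucket.append(dict(alert, count=1))
        return groups

An analyst then reviews one correlated group per source-and-rule pair rather than hundreds of raw alarms.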

James Carder and Jessica Hebenstreit of Mayo Clinic provided several tactical examples of automated incident response in a past RSA Conference presentation:

  • automated domain name system (DNS) lookups of domain names never seen before, driven by proxy and DNS logs (a sketch of this follows the list);
  • automated searches for detected indicators of compromise;
  • automated forensic imaging of disk and memory from a suspect system driven by alerts triggered in network and host-based antimalware platforms and tools; and
  • network access controls automatically blocking outbound command-and-control channels from a suspected system.
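The first of these reduces to a very small piece of automation: keep a set of domains already observed in DNS or proxy logs and automatically look up anything new. This is a minimal sketch, assuming the historical domain set is loaded from log data and that enrichment and alerting happen downstream:

    import socket

    seen_domains = set()  # in practice, preloaded from historical DNS/proxy logs

    def check_domain(domain):
        """Flag and look up a domain the first time it appears in our logs."""
        if domain in seen_domains:
            return None
        seen_domains.add(domain)
        try:
            addresses = socket.gethostbyname_ex(domain)[2]  # automated DNS lookup
        except socket.gaierror:
            addresses = []  # NXDOMAIN or lookup failure is itself interesting
        return {"domain": domain, "first_seen": True, "resolves_to": addresses}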

There are many more areas where automated incident response can help, especially in forensic evidence gathering, threat hunting, and even automated quarantine or remediation activities on suspect systems.

Endpoint security vendors have begun to emphasize response automation and integration across detection, response and forensics capabilities. Analysts need to quickly identify indicators of compromise and perform lookups across other systems, and automating as much of this as possible is a common goal today.
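As one example of such a lookup, the sketch below sweeps a directory tree for files matching known-bad SHA-256 hashes. In practice the hash list would come from a threat feed and the sweep would run through an EDR agent at scale; both are assumptions here:

    import hashlib
    from pathlib import Path

    KNOWN_BAD_SHA256 = {
        "0" * 64,  # placeholder; a real run would load hashes from a threat feed
    }

    def sweep(root):
        """Return paths whose SHA-256 matches a known indicator of compromise."""
        hits = []
        for path in Path(root).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in KNOWN_BAD_SHA256:
                    hits.append(path)
        return hits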

There are a fair number of vendors and tools that can help integrate automation activities and unify disparate tools and platforms used for detection and response. These include Swimlane, FireEye Security Orchestrator, CyberSponse, Phantom, IBM Resilient Incident Response Platform, Hexadite and more, most of which use APIs with other platforms and tools to enable them to share data and create streamlined response workflows.

Things to consider when evaluating these types of products include maturity of the vendor, integration partners, alignment with SIEM and event management, and the ease of use and implementation.

Automated incident response in the cloud

Incident response in the cloud may rely on scripting, automation and continuous monitoring more heavily than in-house incident response does. Many of the detection and response tools emerging for the cloud are heavily geared toward automation, tend to be written against a specific provider's APIs, and at the moment are focused largely on Amazon Web Services (AWS).

Teri Radichel wrote a paper on AWS automated incident response and released a simple toolkit to help with it, as well.

The ThreatResponse toolkit developed by Andrew Krug, Alex McCormack, Joel Ferrier and Jeff Parr can also be used to automate incident response collection, forensics and reporting for cloud environments.

To truly implement automated incident response in the cloud, incident response teams will need to build automated triggers for event types that run all the time — such as AWS CloudWatch filters — especially as the environment gets more dynamic.

Deciding what triggers to implement and what actions to take is the most time-consuming aspect of building a semi-automated or automated response framework in the cloud. Do you focus on user actions? Specific events generated by instances or storage objects? Failure events? Spending time learning about cloud environment behaviors and working to better understand normal patterns of use may be invaluable here.
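As a minimal sketch of one trigger-and-action pair, the Lambda handler below quarantines an EC2 instance by swapping its security groups for a pre-created deny-all group. The event shape, the group ID and the wiring to a CloudWatch Events rule are all assumptions for illustration:

    import boto3

    QUARANTINE_SG = "sg-0123456789abcdef0"  # assumed pre-created, deny-all group

    def handler(event, context):
        """Isolate the EC2 instance named in the triggering event."""
        instance_id = event["detail"]["instance-id"]  # shape depends on the rule
        ec2 = boto3.client("ec2")
        # Replace all security groups with the quarantine group so the
        # instance is preserved for forensics but cut off from the network.
        ec2.modify_instance_attribute(InstanceId=instance_id,
                                      Groups=[QUARANTINE_SG])
        return {"quarantined": instance_id}

Whether the right action is isolation, snapshotting or simply paging a human depends on the trigger, which is exactly why the trigger-selection work described above matters.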

None of these tools and methods will replace skilled, knowledgeable security analysts who understand the environment and how to properly react during an incident scenario. However, unless we start detecting and responding more quickly, there’s no way we’ll ever get ahead of the attackers we face now and in the future.

via: techtarget

NERC CIP Audits: Top 8 Dos and Don’ts

Over my career, I have been involved with quite a few projects: CIP compliance audits, investigations, auditor training, and many advisory sessions. Typically, I advised entities across North America on tactics, techniques, and insights from the best practices I have seen in the field. I want to share a few of the dos and don’ts from that experience.

8) Do Practice a Mock Audit

You will be audited. I cannot believe how many times I would walk into an entity and find out they had never performed a mock audit with their staff. They didn’t know the types of questions they would be asked, the evidence to produce, or the responses they should prepare. Everyone was yelling at each other. It was a mess. Don’t let this be your entity; practice several mock audits to understand where you may have weaknesses. If you do nothing else listed here, do this.

7) Don’t Lawyer Up Every Conversation

While having lawyers is very important for any dispute, settlement, or compliance program process, they aren’t always the best choice for the front line in answering questions. For example, you don’t want your corporate attorney answering technical questions about how your Electronic Security Perimeters (ESPs) are designed and configured.

6) Do Show Your Work

A lot of times, I would see an entity provide only the results as evidence. But auditors will often ask to see how you got to those results. A great example here is the Cyber Vulnerability Assessment, or CVA.

I remember one entity that performed their CVA and got a pile of results/action items to fix. They then showed a piece of paper that said “Results” with a completed check mark. When the auditors asked how they completed some of these tasks, or whether they could see the steps taken to reach this result, the entity had no answers. They couldn’t even confirm that all of the CVA findings were fixed because they didn’t have documentation for themselves.

5) Don’t Redact All Your Documentation and Evidence

The goal of the auditor is to help your entity demonstrate compliance to the NERC CIP standards, not to find areas of non-compliance.

I have been on audits where the entity would not even allow the auditors to view evidence by themselves – it had to be on an entity-owned machine with limited access, and the documents were mostly blacked out. All this did was extend the audit another week and create a starting point for more questions.

Please help the auditors by making evidence accessible and useful.

4) Do Be Polite and Patient

When an auditor asks for information, they are usually just trying to get an understanding of your environment. This isn’t a court hearing. The audit team is just trying to gain an understanding of the entire picture because they don’t know your environment as well as you do.

They may also not be familiar with certain acronyms, diagrams and other procedures at your organization. Take your time and explain them, since the auditors will help tell your story of compliance.

3) Don’t Scramble for Documentation

A perfect example here always came up with CIP-004 R2 training and R1 security awareness program records. The CIP training standard dictates that authorized staff with unescorted physical or electronic access to BES Cyber Assets, otherwise known as BCAs, must go through a NERC CIP compliance training program. The NERC CIP security awareness requirements under R1 simply say you need to prove that you made the program known to the staff and personnel in scope. Seems easy, but it’s not unless you work together with your departments.

Any of your staff, contractors, vendors, and even the cleaning crew might fall into the scope of this requirement. Make sure you have reports and records of your security awareness program content available for the entire audit period so that you are not scrambling during the audit. Every department is going to have a different set of personnel, so make sure each one is covered.

2) Do Listen to CIP Auditors’ Advice

I have worked with the CIP audit and compliance teams in every region across North America. Your auditors have a lot of experience. They have seen more implementations, configurations, environments and procedures than you could ever imagine.

Listen to them if they talk about best practices or advice for additional approaches towards demonstrating compliance. Sometimes it can really help open your eyes to a different point of view.

1) Don’t Argue Over Every Word

During old CIP Version 3 audits, I saw words like “significant,” “annual” and other non-defined terms used in every possible way you could imagine. Of course, some of that language has been cleaned up in the modern CIP standards, but you get the point. If you do have an undefined term, define it somewhere in your internal documents to show the audit team what you mean. Listen to best practices across your region and from NERC. Don’t try to reinvent the wheel.

These are just some basic tips I have personally experienced along the way. Audits are going to be tough no matter how prepared you are; knowing that going in is half the battle. Make sure you have a plan, get your employees to communicate that plan, and execute. If every program were perfect, we wouldn’t need these types of compliance regulations. Mistakes happen, and learning from them is the goal of a successful compliance program.

Learn more about how Tripwire can help make your NERC CIP audit simpler, including insights on generating RSAWs and responding appropriately to pre-audit requests, by downloading a new paper here.

via: tripwire

Proactive System Hardening: Continuous Hardening’s Coming of Age

The first article in this series examined configuration hardening—essentially looking at ports, processes and services where security configuration management (SCM) is key. The second article looked at application and version hardening strategies. This third installment will discuss the role of automation in the coming of age of what’s called “continuous hardening.”

Known Vulnerabilities vs. Conditional Vulnerabilities

If I want to harden my systems against “known vulnerabilities”—weaknesses or deficiencies for which there are known common vulnerabilities and exposures (CVEs)—I use a vulnerability management solution. If I need to harden my systems against “conditional vulnerabilities”—weaknesses based on the way they’re configured—I use an SCM solution. But without automation to provide the element of “continuousness” to these efforts, we rapidly find ourselves back at square one.

What is Configuration Drift?

To stick with our house analogy: If I’ve checked the configurations of all my doors and windows, but I have no way to know when the state has changed and I instead rely on periodic inspection by human eyes, a phenomenon known as “configuration drift” invariably occurs.

I open the fire escape window to water the potted hydrangea sitting out there but forget to close it afterward: configuration drift. I enable Telnet to maintain or update a server and then forget to disable it afterward: configuration drift.
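The Telnet example translates directly into a small drift check: record a baseline of expected listening ports once, then compare the live state against it on a schedule. This sketch assumes the third-party psutil package and treats any unexpected listener as drift:

    import psutil

    BASELINE_PORTS = {22, 443}  # captured when the server was known-good

    def detect_drift():
        """Report listening ports that have drifted from the baseline."""
        listening = {
            conn.laddr.port
            for conn in psutil.net_connections(kind="inet")
            if conn.status == psutil.CONN_LISTEN
        }
        drift = listening - BASELINE_PORTS
        if drift:
            print(f"Configuration drift: unexpected listeners on {sorted(drift)}")
        return drift

The forgotten Telnet daemon from the example above would surface here as an unexpected listener on port 23.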

The Role of Automation in Continuous System Hardening

A primary weakness of our house analogy is actually useful here, as it shows us the critical need for automation. In real life, most people have one house. But most organizations have hundreds—if not many, many thousands—of servers, desktop systems, laptops and devices. These represent an almost inexhaustible supply of attack surface and potential beachheads. How can we win a war at this scale?

Automation requires us not only to create continuous, ongoing routines to assess state across this vast array of targets, but also to make allowances for the constantly changing conditions that give meaning and relevance to risk.

In the case of our house, it’s useful to know that, over the last two years, the leafy maple out back has grown a large solid branch that’s close enough to an upstairs bedroom for a tall thief to reach the window. And the inverse is sometimes true: If the old kitchen window was painted shut twenty years ago, who needs to waste time including it in our daily “is it locked” checklist?

This critical need for current “state” information has caused the security community to create more persistent real-time agents, more effective scanning processes that are “aware” of network constraints and ways to avoid “mega scans” in favor of continuous segmented scanning.

Integrating Disparate Security Systems

They’ve also broken down barriers between infosec solutions themselves and addressed another critical requirement for achieving this attribute of “continuousness”: Information security systems must talk to one another. A few simple examples illustrate this need:

  • Vulnerability Management: Vulnerability management (VM) systems are quite good at finding unexpected (and likely unsecured) systems. When one of these is discovered, the VM system can tell the SCM system about the new asset and ask it to perform an on-the-spot configuration assessment (a sketch of this handoff follows the list).
  • Security Configuration Management: Similarly, SCM systems are evolving intelligent ways to classify assets: by business unit, by system owner, by critical application, and even by the type and criticality of data stored on the system. This helps manage and prioritize their own risks, but when shared with a VM system, this also helps clarify and prioritize remediation efforts.
  • Security Information and Event Management: Both VM and SCM systems are being used extensively by SIEM systems as a foundational source of security information: in the first case, correlating known vulnerabilities with detected threats, and in the second case, using sudden configuration changes (“Why is the ‘Telnet should not be enabled’ test suddenly failing?”) to power real-time threat intelligence models.
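The VM-to-SCM handoff might look something like the sketch below. Both API endpoints are hypothetical placeholders; real products expose their own interfaces for asset registration and assessment:

    import requests

    VM_API = "https://vm.example.internal/api"    # hypothetical VM scanner API
    SCM_API = "https://scm.example.internal/api"  # hypothetical SCM API

    def hand_off_new_assets():
        """Queue each newly discovered host for an on-the-spot
        configuration assessment in the SCM system."""
        new_assets = requests.get(f"{VM_API}/assets?status=new", timeout=30).json()
        for asset in new_assets:
            requests.post(f"{SCM_API}/assessments",
                          json={"address": asset["address"], "priority": "high"},
                          timeout=30)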

SC Magazine summed up these needs in a prescient review of policy management systems—what we’ve called “security configuration management” systems in this article—way back in 2010: “The only reasonable answer to the challenges of compliance, security and configuration management is to automate the tasks.”

The key to continuous system hardening as a goal and a discipline is a willingness to seek out and employ automation wherever possible. Gone are the days when isolated, siloed systems can harden information systems and keep them that way in the face of continuous drift.

Highly interactive solutions that understand the ever-shifting nature of “state” and talk to each other regularly—security configuration and vulnerability management solutions in particular—are the first, best and often the last line of defense.

via: tripwire

Proactively Hardening Systems: Application and Version Hardening

The first article in this series examined configuration hardening, essentially looking at ports, processes and services as the “doors, gates and windows” into a network where security configuration management (SCM) becomes the job of determining which of these gateways should be open, closed, or locked at any given time. Now it’s time to look at application and version hardening.

What is System Hardening?

If configuration hardening settings are “conditional,” meaning they must find and keep that balance between security and productivity, then hardening against known vulnerabilities in applications and versions is much more black-and-white.

If an exploit path has been found in an operating system or application, the vendor rushes to create a patch or upgrade that removes the vulnerability. “Hardening” in this sense means “making sure the holes are known and that the most current security patches are deployed.”

One Way Hackers Exploit Known Vulnerabilities

To go back to our “secure house” analogy from the previous article in this series for a moment, imagine that the house I’m protecting has three external doors and that they all use Secure-A-Door Model 800 high-strength locks.

But a tester at the Secure-A-Door factory (or worse, a professional burglar) has just discovered an interesting thing: If you slide a credit card along the door jamb at 15 degrees while pulling up on the handle, the Secure-A-Door 800 pops open like a Coke can.

One of the most famous examples of this kind of exploitation began in 2008. That’s when the makers of the Conficker worm discovered and exploited a weakness in the Windows Server service that was reachable over port 445.

The worm created a remote procedure call that dropped a DLL on the system, unloaded two distinct packets for data and code, and hid itself in a remote thread to make itself at home. (It was infinitely more complex and clever than that, but you get the idea.)

In effect, the worm popped the Secure-A-Door Model 800, let itself in, repaired the lock, installed a new phone line to listen for orders, and sat in a comfy chair waiting for instructions. It was able to leverage the internet, could register new domain names in which to hide, and created an extensive botnet that by 2010 had infected, according to Panda Security, as many as 18 million PCs—6 percent of the world’s PC population at the time.

Common Vulnerabilities and Exposures (CVEs)

This type of design failure or exploit is usually repaired by a patch. In the case of Conficker, Microsoft Security Bulletin MS08-067 made the danger known to the worldwide Microsoft community and introduced a patch that closed off the easy violation via port 445.

The bulletin was in turn catalogued by the Common Vulnerabilities and Exposures program as CVE-2008-4250 and given a Common Vulnerability Scoring System (CVSS) rating of 10—the most severe rating possible.

Vulnerability Management

Vulnerability management (VM) systems do their part in system hardening differently from SCM systems, which check that doors and gates and windows are locked. VM systems make sure the proper patch levels are maintained and that any available defenses have been utilized. Using our analogy, we’d be conducting the following checks (sketched in code after the list):

  • Proactively discovering whether I have any Secure-A-Door Model 800 locks installed
  • If I do, reporting on whether they’re the corrected “B” version made after October 2012
  • Verifying that any “bad” ones I have are only on inside doors and don’t serve as a primary defense
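Stripped of the analogy, those three checks reduce to an inventory-versus-advisory comparison. Everything below (the inventory format, the revision names and the location field) is invented purely to mirror the example:

    # Hypothetical inventory: lock model, revision, and where it is installed.
    INVENTORY = [
        {"model": "Secure-A-Door 800", "revision": "A", "location": "front door"},
        {"model": "Secure-A-Door 800", "revision": "B", "location": "back door"},
    ]

    def audit_locks():
        """Flag pre-fix ('A') revisions that are still installed."""
        findings = []
        for lock in INVENTORY:
            if lock["model"] == "Secure-A-Door 800" and lock["revision"] != "B":
                findings.append(
                    f"Vulnerable {lock['model']} rev {lock['revision']} "
                    f"at {lock['location']}")
        return findings

A real VM system does the same thing at scale: match discovered software and versions against advisories like MS08-067 and report what is exposed.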

VM systems enable continuous hardening by making sure that CVE-2008-4250—and its many thousands of friends—are understood, mitigated, and more-or-less unexploitable when the right steps are taken.

More mature solutions provide an ongoing assessment of overall risk based on whether these vulnerabilities are mitigated or ignored.

via: tripwire

California passes law that bans default passwords in connected devices

Good news!

California has passed a law banning default passwords like “admin,” “123456” and the old classic “password” in all new consumer electronics starting in 2020.

Every new gadget sold in the state, from routers to smart home tech, will have to come with “reasonable” security features out of the box. The law specifically calls for each device to come with a preprogrammed password “unique to each device.”

It also mandates that any new device “contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time,” forcing users to change the unique password to something new the first time the device is switched on.
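In device firmware, that requirement boils down to logic along these lines. The state fields and messages here are assumptions; the point is only that the factory password cannot survive first use:

    device_state = {
        "password": "x7Kq-9mP2-tR4w",  # unique, preprogrammed per device
        "first_login_done": False,
    }

    def login(supplied_password, new_password=None):
        """Refuse access until the unique factory password is replaced."""
        if supplied_password != device_state["password"]:
            return "access denied"
        if not device_state["first_login_done"]:
            if not new_password or new_password == supplied_password:
                return "set a new password before first use"
            device_state["password"] = new_password
            device_state["first_login_done"] = True
        return "access granted"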

For years, botnets have utilized the power of badly secured connected devices to pummel sites with huge amounts of internet traffic — so-called distributed denial-of-service (DDoS) attacks. Botnets typically rely on default passwords that are hardcoded into devices when they’re built and that aren’t later changed by the user. Malware breaks into the devices using publicly available default passwords, hijacks them and ensnares them into conducting cyberattacks without the user’s knowledge.

Two years ago, the notorious Mirai botnet dragged thousands of devices together to target Dyn, a networking company that provides domain name service to major sites. By knocking Dyn offline, other sites that relied on its services were also inaccessible — like Twitter, Spotify and SoundCloud.

Mirai was a relatively rudimentary, albeit powerful, botnet that relied on default passwords. This law is a step in the right direction toward preventing these kinds of botnets, but it falls short on wider security issues.

Other, more advanced botnets don’t need to guess a password because they instead exploit known vulnerabilities in Internet of Things devices — like smart bulbs, alarms and home electronics.

As noted by others, the law as signed does not require device makers to update their software when bugs are found. The big device makers, like Amazon, Apple and Google, do update their software, but many of the lesser-known brands do not.

Still, as it stands, the law is better than nothing — even if there’s room for improvement in the future.

via: techcrunch

Google+ Shutting Down After Bug Leaks Info of 500k Accounts

Google has announced that it is closing the consumer functionality of Google+ due to lack of adoption and an API bug that leaked the personal information of up to 500,000 Google+ accounts.

While no evidence was found that indicates this bug was ever misused, it was determined that the complexity of protecting and operating a social network like Google+ was not a worthwhile endeavor when so few users actually used the service for any length of time.

“This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps,” stated a blog post by Google regarding the Google+ closure. “The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.”

The consumer functionality of Google+ will be shut down over a 10-month period while Google transitions the product to internal enterprise use.

API bug caused data leak

During a code review of the Google+ APIs called Project Strobe, Google discovered a bug that could leak the private information of Google+ accounts. The bug could allow a user’s installed apps to utilize the API to access non-public information belonging to that user’s friends. The accessible non-public information includes an account holder’s name, email address, occupation, gender and age.

Google’s announcement described the bug as follows: “Underlining this, as part of our Project Strobe audit, we discovered a bug in one of the Google+ People APIs:”

  • Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API.
  • The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public. 
  • This data is limited to static, optional Google+ Profile fields including name, email address, occupation, gender and age. (See the full list on our developer site.) It does not include any other data you may have posted or connected to Google+ or any other service, like Google+ posts, messages, Google account data, phone numbers or G Suite content.
  • We discovered and immediately patched this bug in March 2018. We believe it occurred after launch as a result of the API’s interaction with a subsequent Google+ code change.
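Google has not published the faulty code, but the class of bug it describes (returning fields shared with a user rather than only fields marked public) can be illustrated with a hypothetical visibility check:

    def visible_fields(requester, profile):
        """Hypothetical illustration of the bug class Google described.

        profile maps field name -> (value, audience), where audience is
        "public" or the set of users the field was shared with.
        """
        result = {}
        for field, (value, audience) in profile.items():
            if audience == "public":
                # Intended behavior: expose only fields marked public.
                result[field] = value
            elif requester in audience:
                # The flaw as described: fields merely shared with the
                # requesting user also flowed out through the API.
                result[field] = value
        return result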

As Google only keeps two weeks of API logs for its Google+ service, it was impossible for the company to determine whether the bug was ever misused over its lifetime. Google was able to confirm only that the bug was not misused during the two weeks for which it had log data.

Google knew about leak in May but did not disclose

According to a report by the Wall Street Journal, the bug in the Google+ API existed from 2015 until March 2018, when Google discovered and fixed it. The Journal also reported that an internal committee at Google decided not to disclose the bug even though the company could not be 100% sure it had not been abused.

The Wall Street Journal reported that it reviewed a memo prepared by Google’s legal and policy staff, which warned that disclosing the data breach could lead to scrutiny by government regulatory agencies.

According to the memo, disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

In a statement, a Google spokesperson said that the company’s Privacy & Data Protection Office felt disclosure was not necessary because the incident did not meet the thresholds that would warrant it.

“Every year, we send millions of notifications to users about privacy and security bugs and issues. Whenever user data may have been affected, we go beyond our legal requirements and apply several criteria focused on our users in determining whether to provide notice.

Our Privacy & Data Protection Office reviewed this issue, looking at the type of data involved, whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met in this instance.

The review did highlight the significant challenges in creating and maintaining a successful Google+ that meets consumers’ expectations. Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of Google+.” – Google Spokesperson.

via: bleepingcomputer