Navigating the Tech Industry’s ‘Great Shakeout’: Expert’s Advice for Securely Migrating to the Cloud

All indications suggest organizations’ adoption of the cloud is going to ramp up considerably in the next few years. According to Cisco’s Global Cloud Index: Forecast and Methodology (2016–2021) white paper, cloud data centers will process 94 percent of workloads and compute instances by 2021. Close to three-quarters of those resources will be Software-as-a-Service (SaaS) assets processed in the public cloud.

Global digital security strategist Ian Trump thinks these trends suggest the world is moving away from on-premise and private cloud data centers. Trump believes those developments could profoundly change how businesses deliver their services and how security teams work to protect those services.

For that reason, he recommends companies seriously consider migrating to the cloud if they haven’t done so already:

“A great shakeout in the tech industry is coming. If your business can’t afford to move to public cloud SaaS from its existing systems, a scrappy cloud startup is going to take your lunch money on the playground. For those in the current security space, adapt to this SaaS trend or become irrelevant to business,” warns Trump.

Of course, organizations can’t just pick up and move all their IT resources to the cloud. They need to keep a few security concerns in mind if they decide to migrate. First and foremost, companies need to figure out what type of deployment model will work best for them.

“I’m an advocate of migrating to the cloud and of the intrinsic improvements in security and compliance driven by multiple other clients, but this doesn’t mean you can set it and forget it,” explains Matthew Pascucci, Cyber Security Practice Manager at CCSI. “When migrating to the cloud, the deployment model is important to understand first. Will you be in a private, public, SaaS, or PaaS infrastructure? Understanding this will allow organizations to get a better feel for where their risks lie,” Pascucci says.

Companies must then formulate a security strategy for the applications and other assets that they’ll actively deploy in the cloud. Whitney Champion, a Senior Systems Architect, feels organizations need to go through this assessment by asking themselves if they intend to review their code and how regularly they’ll do so, how they’ll set up networks, and what operating systems they’ll use.

According to Champion, doing so can further elevate organizations’ awareness of the issues involved with cloud migration:

“It is crucial to be aware that not every cloud provider is the same, and many of these processes will be implemented differently across different platforms. Each organization needs to be mindful of these requirements and perform their due diligence to be prepared for the implications of moving any of their systems to the cloud,” she says.

Once companies have figured out what they want out of their cloud environment, it’s time for them to begin looking for a cloud service provider (CSP) that meets their needs. Digital security specialist Zoe Rose thinks companies should choose their CSP carefully. That’s especially the case if they’re looking to host sensitive data in the cloud.

“The cloud is simply computers someone else has ownership of and maintains,” Rose notes.

“If information is highly sensitive, you will want to review contractual requirements on security, patch management, and reporting of incidents for the third-party hosting company along with your agreed requirements with the data owners,” added Rose.

At this point in the migration process, it’s important to remember that signing a contract doesn’t mark the end of an organization’s responsibility for its cloud-based data. Under the Shared Responsibility Model, CSPs are responsible only for ensuring security of the cloud, or the infrastructure which supports their cloud computing services. Organizations are still responsible for security in the cloud, or the process of taking adequate measures to protect their data.
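To make that division of labor concrete, here is a minimal sketch, assuming an AWS environment and the boto3 SDK, of two customer-side “security in the cloud” measures: default server-side encryption and a public-access block on a storage bucket (the bucket name is hypothetical). The provider secures the underlying infrastructure; settings like these stay on the customer’s side of the line.

```python
# Sketch: customer-side "security in the cloud" measures on AWS.
# Assumes boto3 is installed, credentials are configured, and
# "example-customer-data" is a hypothetical bucket name.
import boto3

s3 = boto3.client("s3")
bucket = "example-customer-data"

# Turn on default server-side encryption for new objects in the bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Block all forms of public access to the bucket and its objects.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```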

According to Ean Meyer, security controls for the cloud should factor into companies’ strategies for how to defend their cloud-based data against digital attackers.

“All too often, companies taking their first steps into the cloud make the mistake of believing security will be completely handled by their cloud hosting provider. Don’t make this mistake,” says Meyer.

“Take time to evaluate your current controls and look at how to enable them in your cloud instances. Once you have your existing controls in place, look at what additional controls cloud deployments can offer. Cloud systems often offer security features that many organizations couldn’t deploy on-premise. If you keep these things in mind when you start to migrate to the cloud, you will be well on your way to making the right security decisions,” suggests Meyer.

Stay tuned for a future post that explores these security controls for the cloud in detail.

 

via:  tripwire



The internet’s worst-case scenario finally happened in real life: An entire country was taken offline, and no one knows why


A map of undersea internet cables showing Mauritania’s single link to the global infrastructure. (TeleGeography)

  • Mauritania was taken offline for two days late last month after a submarine internet cable was cut.
  • No one knows why or how it was cut, though Sierra Leone’s government appears to have interfered with its citizens’ internet access around that time.
  • Undersea web cables are uniquely vulnerable to sabotage.
  • UK and US military officials have previously indicated that Russia is capable of trying something like this, though there is no indication that it was involved in this break.

For years, countries have worried that a hostile foreign power might cut the undersea cables that supply the world with internet service.

Late last month, we got a taste of what that might be like. An entire country, Mauritania, was taken offline for two days because an undersea cable was cut.

The 17,000-kilometer African Coast to Europe submarine cable, which connects 22 countries from France to South Africa, was severed on March 30, cutting off web access partially or totally to the residents of Sierra Leone and Mauritania.

It also affected service in Ivory Coast, Senegal, Equatorial Guinea, Guinea, Guinea Bissau, Liberia, Gambia, and Benin, according to Dyn, a web-infrastructure company owned by Oracle.

[Map: the ACE cable route (Oracle Dyn)]

It is not clear how the cable was cut. But the government of Sierra Leone seems to have imposed an internet blackout on the night of March 31 in an attempt to influence an election there.

There had not been a significant outage along the cable in the past five years.

Loss of service to Mauritania was particularly severe, as the Dyn chart below shows.

“The most significant and longest-lasting disruption was seen in Mauritania, with a complete outage lasting for nearly 48 hours, followed by partial restoration of connectivity,” David Belson wrote in a Dyn research blog on Thursday.

[Chart: disruption along the ACE cable (Oracle Dyn)]

The international cable system has several levels of built-in redundancy that allowed providers such as Africell, Orange, Sierra Leone Cable, and Sierratel to restore service.

But the break shows just how vulnerable the worldwide web is to the simple act of cutting a cable. About 97% of all international data is carried on such cables, according to the Asia-Pacific Economic Cooperation forum.

Here’s a map from the telecom analytics company TeleGeography of the cables in Europe:

[Map: undersea internet cables in Europe (TeleGeography)]

And those connecting the US:

[Map: undersea internet cables connecting the US (TeleGeography)]

UK and US military intelligence officials have repeatedly warned that relatively little is done to guard the safety of the cables and that Russia’s navy continually conducts activities near them.

In 2013, three divers were arrested in Egypt after attempting to cut submarine web cables.

“In the most severe scenario of an all-out attack upon undersea cable infrastructure by a hostile actor the impact of connectivity loss is potentially catastrophic, but even relatively limited sabotage has the potential to cause significant economic disruption and damage military communications,” James Stavridis, a retired US Navy admiral, said in a 2017 report for the think tank Policy Exchange.

“Russian submarine forces have undertaken detailed monitoring and targeting activities in the vicinity of North Atlantic deep-sea cable infrastructure,” he added.

There is no indication that Russia was involved in the ACE breakage. But military strategists are likely to study the Mauritania break as an example of the effect of knocking a country off the web by cutting its cables.

 

via:  businessinsider



Cloud vs. On-Premises: Understanding the Security Differences

More and more organizations are now entrusting their IT resources and processing to the cloud. This trend is likely to grow in the coming years. To illustrate, Gartner predicts that cloud data centers will process 92 percent of workloads by 2020. Cloud workloads are expected to increase 3.2 times in that same span of time, Cisco forecasts.

With migration on their minds, many organizations are beginning to wake up to the security challenges of hosting their data in the cloud. Some might be struggling to identify who’s responsible for their cloud security under the shared responsibility model with their chosen cloud service provider (CSP). Others might be looking to the OneLogin breach and worrying about falling victim to a similar incident that compromises their cloud-based data, not to mention succumbing to other threats that jeopardize their cloud security.

These concerns are all valid. But while cloud security does have its challenges, it’s not impossible to figure out.

Australian web security expert Troy Hunt recommends that organizations begin by not thinking about cloud security in binary terms. Rather than labeling elements as “secure” or not, he suggests viewing aspects of the cloud as “differently secure.” The same goes for comparing the security of the cloud with that of physical hardware and datacenters.

“On the one hand, you may hand over physical control, but on the other hand, you’re almost certainly doing so to an organization better-equipped to manage computing environments than your own,” Hunt observes. “Then there are concerns around the increased attack surface of putting services in the cloud, but there’s great things that can be done with virtualized networks and access to features that were previously cost-prohibitive for many organizations (WAFs, HSMs, etc.). So think of the cloud as ‘different’ and make the most of those hybrid scenarios where you can gradually move assets across in a fashion that suits your own organization’s comfort level.”

The cloud is certainly different from on-premises resources, so it makes sense that security would be different, too. It follows that organizations must sometimes rethink how they’re currently doing things with respect to implementing security in the cloud.

Adrian Sanabria, Director of Threatcare, says it’s not possible for companies to just “lift and shift” to Amazon Web Services (AWS) or Microsoft Azure without inviting a very expensive disappointment. Instead, they must pay attention to the differences and put them to use. For Sanabria, one of the most important differences in the cloud is the management plane:

“Since everything in the cloud is virtualized, it’s possible to access almost everything through a console. Failing to secure everything from the console’s perspective is a common (and BIG) mistake. Understanding access controls for your AWS S3 buckets is a big example of this. Just try Googling ‘exposed S3 bucket’ to see what I mean.”
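As a hedged illustration of what securing that management plane can involve, the sketch below (assuming AWS and the boto3 SDK) walks the account’s S3 buckets and flags any whose ACL grants access to all users, the classic “exposed S3 bucket” misconfiguration Sanabria points to. A fuller audit would also check bucket policies and IAM permissions.

```python
# Sketch: flag S3 buckets whose ACLs grant access to the public.
# Assumes boto3 is installed and the caller can read bucket ACLs.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("Type") == "Group"
        and grant["Grantee"].get("URI") in PUBLIC_GROUPS
    ]
    if public_grants:
        print(f"PUBLIC: {name} grants {public_grants} to everyone")
```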

Consoles aren’t the only factor that separates the cloud from physical hardware. Craig Young, a security researcher with Tripwire’s Vulnerability and Exposures Research Team (VERT), says the ways in which organizations can choose to process data in the cloud also stand out:

“Cloud service providers allow customers to build complex private network environments suitable for processing even the most sensitive data. The confidentiality of this data rests on security controls unlike those commonly used on-premise, and a slight mistake can ultimately expose this sensitive data to the public Internet. Network administrators need to keep a close eye on the external view of all IP space allocated for their cloud. Vulnerability scanners like Tripwire IP360 make it easy to recognize exposed services and close them up before attackers can exploit them.”
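What “keeping a close eye on the external view” might look like in its simplest form is sketched below, again assuming AWS and the boto3 SDK: enumerate every public IPv4 address the account exposes through EC2 instances or Elastic IPs, producing the target list an external vulnerability scan (such as the Tripwire IP360 product Young mentions) should cover.

```python
# Sketch: list the public IPv4 addresses an AWS account exposes via EC2.
# Assumes boto3 is installed and ec2:Describe* permissions in one region.
import boto3

ec2 = boto3.client("ec2")
public_ips = set()

# Public addresses attached to instances in this region.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        ip = instance.get("PublicIpAddress")
        if ip:
            public_ips.add(ip)

# Allocated Elastic IPs, associated or not.
for address in ec2.describe_addresses()["Addresses"]:
    public_ips.add(address["PublicIp"])

print(f"{len(public_ips)} externally reachable addresses to scan:")
for ip in sorted(public_ips):
    print(" ", ip)
```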

Understanding how cloud security differs from datacenter security is crucial for organizations. They need that knowledge not only to migrate to the cloud but also to implement security controls once they’ve completed the move.

 

via:  tripwire



ISO/IEC 27001 and Why It Matters for Your Business

ISO/IEC 27001 is a set of standards for information security management systems (ISMS) created by the International Organization for Standardization and the International Electrotechnical Commission, both independent, non-governmental organizations. ISO/IEC 27001 is part of the broader ISO/IEC 27000 family, a set of standards designed to “[help] organizations keep information assets secure.”

As we’ll discuss below, the 27001 specification is incredibly important for businesses. From internally auditing your security posture to externally receiving certifications, the specific points within ISO/IEC 27001 should play an active role in managing your business’ data and information security.

What is ISO/IEC 27001?

ISO/IEC 27001 provides standards for enterprises, governments and other organizations to use and maintain their information security management systems. As the ISO defines it, an ISMS is a systematic approach to securing sensitive company information. This can be anything from financial data to intellectual property to employee details to third-party information. And although it has the word ‘system’ in it, an ISMS isn’t constrained to just technology. People and processes are an equally important part of securing the information your business uses day in and day out.

Because the ISO is a non-governmental organization that writes general compliance principles – not instructions for how to implement them – the organization has no authority in and of itself to enforce “violations” of its standards. That said, many institutions that do have legal or regulatory authority rely on it for guidance. It has even been referred to as the “umbrella” for ISMS policies because of this fact.

WHO CARES?

If your business wants to comply with a specific set of industry standards, it’s highly likely that ISO/IEC 27001 plays a role – or at least offers similar high-level guidance. This is the case with everything from J-SOX in Japan to the Data Protection Directive (DPD) in Europe to the Payment Card Industry Data Security Standard (PCI DSS) in the United States. Many regulations that already apply to your organization can be aided by following the ISO/IEC 27001 guidelines.

You can also receive certification against the standard itself, whereby an accredited certification body audits and certifies your business’ ISMS. Not only does this improve your brand image with clients, but it will also make you stand out from (or catch up with) your competitors. In today’s market environment, cybersecurity is obviously a benefit. We can even imagine a certificate better attracting technical staff or incentivizing organizations to partner with you. If others can trust how you manage and secure your information, that’s a huge benefit for your business. ISO/IEC 27001 strengthens such trust.

(In the event none of that is convincing, check these statistics: by the end of 2016, well over 1.6 million ISO/IEC certificates were recorded worldwide – over 33,000 of them specifically for ISO/IEC 27001.)

WHAT EXACTLY DOES ISO/IEC 27001 SAY?

ISO/IEC 27001 uses a top-down, risk-based approach to information security management systems. One of its strongest features is that it’s not technology-specific – it doesn’t matter which devices or operating systems your business is running; you can still apply the standard’s principles.

As already mentioned, the standard outlines high-level planning and processes. For instance, clause 6 deals with planning, which includes information security risk assessments and general security objectives; clause 8 deals with operation, including the execution of security goals and the regular testing of those goals (i.e. setting and evaluating benchmarks); and clause 9 focuses entirely on performance evaluation, including monitoring, analysis, internal audits, and management reviews.

The specification then dives into greater detail on particular security techniques, from information exchange procedures to clock synchronization to password management. This detail is designed to help businesses plan out their security policies in a checklist-oriented fashion.

For instance, the specification gives the following structure for access control policies:

  1. Introduction
  2. Policy Statement
  3. Roles and Responsibilities
  4. Information/Systems Access
  5. User Registration/De-Registration
  6. Secure Log-On Requirements
  7. Physical Access Controls

As numerous security experts have pointed out, ISO/IEC 27001 compliance is important for everyone from IT staff all the way to CEOs. Businesses can use the standards to establish high-level security policies that then cascade down the organization, turning into more detailed procedures at each level (e.g. translating from policy goals into operational tasks into technical rules).

NEXT STEPS?

Much like many regulatory guidelines, ISO/IEC 27001 isn’t exactly light reading. The documentation is long, detailed, and complex. It should be clear at this point, though, that such compliance is incredibly important.

You should turn to an ISO/IEC 27001 expert to audit your organization and understand the next steps to compliance. Filling existing gaps is especially important. It’s obviously possible to do so yourself, but it’ll likely take significantly more time and money than the alternative. Regardless, once you are compliant, invest resources in getting certified and staying certified. If there’s one thing that we know for certain in cybersecurity, it’s that stagnancy is death, so constantly reassessing policies and procedures to strengthen ISMS is essential.

 

 

via:  tripwire



In re Zappos: The 9th Circuit Recognizes Data Breach Harm

In In re Zappos.com, Inc., Customer Data Security Breach Litigation (9th Cir., Mar. 8, 2018), the U.S. Court of Appeals for the 9th Circuit issued a decision that represents a more expansive way to understand data security harm.  The case arises out of a breach where hackers stole personal data on 24 million+ individuals.  Although some plaintiffs alleged they suffered identity theft as a result of the breach, other plaintiffs did not.  The district court held that the plaintiffs that hadn’t yet suffered an identity theft lacked standing.

Standing is a requirement in federal court that plaintiffs allege they have suffered an “injury in fact” — an injury that is concrete, particularized, and actual or imminent.  If plaintiffs lack standing, their case is dismissed and can’t proceed.  For a long time, most litigation arising out of data breaches was dismissed for lack of standing because courts held that plaintiffs whose data was compromised in a breach didn’t suffer any harm.  A key case is Clapper v. Amnesty International USA, 568 U.S. 398 (2013), in which the Supreme Court held that the plaintiffs couldn’t prove for certain that they were under surveillance.  The Court concluded that the plaintiffs were merely speculating about future possible harm.

Early on, most courts rejected standing in data breach cases.  A few courts resisted this trend, including the 9th Circuit in Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010).  There, the court held that an increased future risk of harm could be sufficient to establish standing.

Then along came Clapper, adding ammunition to the courts rejecting standing.  Courts found no standing in cases brought by plaintiffs with a theory that a breach resulted in an increased risk of future harm.

But in the past few years, some courts have begun to embrace the theory that increased risk of future harm is a sufficient injury to satisfy the standing requirement.  In Zappos, the defendants argued that Clapper rejected the theory in Krottner, and thus, Krottner should no longer be viable.  The 9th Circuit, however, held that Clapper didn’t reject the risk of future injury theory entirely, only when there wasn’t a “substantial risk that the harm will occur.”

The Zappos court concluded that in the Zappos breach, there was such a substantial risk.  The court reasoned that the “information taken in the data breach still gave hackers the means to commit fraud or identity theft, as Zappos itself effectively acknowledged by urging affected customers to change their passwords on any other account where they may have used ‘the same or a similar password.’”

Now, there’s a major circuit split on the issue of whether the increased risk of future harm can be sufficient for standing.  Here’s a chart of some of the cases in the split over the past few years:

[Chart: Standing for Data Breach Harm]

For those of you who are interested in the issue of data breach harm, I recently published an article about it:

Daniel J. Solove & Danielle Keats Citron, Risk and Anxiety: A Theory of Data Breach Harms, 96 Texas Law Review 737 (2018)

 


via:  teachprivacy




Amazon rolls out remote access to its FreeTime parental controls


Amazon is making it easier for parents to manage their child’s device usage from their own phone, tablet, or PC with an update to the Parent Dashboard in Amazon FreeTime. Since its launch in 2012, Amazon’s FreeTime Unlimited has been one of the better implementations of combining kid-friendly content with customizable profiles and parental controls. Today, parents can monitor and manage kids’ screen time, time limits, daily educational goals, device activity, and more while allowing children to access family-friendly content like books, videos, apps and games.

Last year, Amazon introduced a Parent Dashboard as another means of helping parents monitor screen time as well as have conversations with kids about what they’re doing on their devices. For example, if the child was reading a particular book, the dashboard might prompt parents with questions they could ask about the book’s content. The dashboard also provided a summary of the child’s daily device use, including things like what books were read, videos watched, apps or games played, and websites visited, and for how long.

According to a research study Amazon commissioned with Kelton Global Research, the company found that 97 percent of parents monitor or manage their kids’ use of tablets and smartphones, but 75 percent don’t want to hover over kids when they’re using their devices.

On Thursday, Amazon addressed this problem by allowing parents to remotely configure parental control settings from the online Parent Dashboard, so they can manage a child’s device from afar using their own phone, tablet, or computer.

The controls are the same as those available through the child’s device itself. Parents can set a device bedtime, daily goals and time limits, adjust their smart filter, and enable the web browser remotely. They can also remotely add new books, videos, apps and games to their child’s FreeTime profile, and lock or unlock the device for a set period of time.

The addition comes following last year’s launch of FreeTime on Android, and Google’s own entry into the parental control software space with the public launch of Family Link last fall. Apple also this year made vague promises about improving its existing parental controls in the future, in response to pressure from two Apple shareholder groups, Jana Partners LLC and the California State Teachers’ Retirement System.

With the increased activity in the parental control market, Amazon’s FreeTime may lose some of its competitive advantages. Amazon also needed to catch up to the remote control capabilities provided with Google’s Family Link.

There are those who argue that parental controls that do things like limit kids’ activity on apps and games or turn off access to the internet are enablers of lazy parenting, where devices instead of people are setting the rules. But few parents use parental controls in that fashion. Rather, they establish house rules, then use software to remind children that the rules exist and to enforce them.

The updated FreeTime Parent Dashboard is available via a mobile-optimized website at parents.amazon.com.

 

via:  techcrunch



Panera Bread’s Website Reportedly Leaked Millions of Customer Records

The personal information of millions of Panera Bread customers was reportedly left exposed online for at least eight months.

According to reports, the popular US bakery-café chain, which operates over 2,100 locations, was initially alerted to the data leak back in August 2017.

As reported by security journalist Brian Krebs, researcher Dylan Houlihan contacted the firm and was told it was “working on a resolution.” However, the issue remained unfixed.

The leaked records – exposed in plain text – appeared to belong to customers who had signed up for an account to place an order online at panerabread.com.

The data included customer names, email addresses, physical addresses, dates of birth and loyalty card numbers, as well as the last four digits of credit card numbers.

Panera Bread acknowledged the breach on Monday, telling Fox Business that 10,000 customer records were impacted.

The St. Louis-based company released the following statement:

“Panera takes data security very seriously and this issue is resolved. Following reports today of a potential problem on our website, we suspended the functionality to repair the issue. Our investigation is continuing, but there is no evidence of payment card information nor a large number of records being accessed or retrieved.”

Meanwhile, Krebs claims Panera’s remediation continued to leave the data exposed for some time afterward.

“The vulnerabilities also appear to have extended to Panera’s commercial division, which serves countless catering companies. At last count, the number of customer records exposed in this breach appear to exceed 37 million,” wrote Krebs.

Tim Erlin, VP of product management and strategy at Tripwire, adds that the incident serves as a reminder that “security is often as much about response as prevention.”

“Organizations that collect, store and transmit customer data need to have plans in place to deal with reported vulnerabilities. The time to plan is before an incident occurs, not during,” said Erlin.

 

 

via:  tripwire



Walmart is reportedly in early-stage acquisition talks with Humana

Walmart has begun discussing a possible acquisition of health insurer Humana, The Wall Street Journal first reported Thursday, citing people familiar with the matter. Reuters also reported the companies are discussing a partnership, but that a full acquisition is also possible.

Shares of Humana soared as much as 13 percent in after-hours trade on Thursday. Walmart shares edged slightly lower in extended trade.

The newspaper said that details of the potential deal were not immediately clear and that it’s possible one may not materialize.

Walmart said in a statement to CNBC that it doesn’t comment on rumors and speculation. Humana did not immediately respond to CNBC’s request for comment.

As of their Thursday close, Humana had a market value of about $37 billion, according to FactSet. Shares of the insurer have surged 30 percent in the past year, while Walmart shares have jumped more than 25 percent.

The news comes amid a rush of deal chatter as insurers are under pressure to lower medical care costs.

In December, CVS Health announced a $69 billion deal to buy insurer Aetna. That deal would combine CVS’ pharmacies and pharmacy benefit manager platform with Aetna’s insurance business.

Online retail giant Amazon has pledged to partner with J.P. Morgan and Berkshire Hathaway to tackle rising employee health-benefit costs. CNBC has also reported that Amazon has participated in exploratory talks with generic-drug makers.

 

via:  cnbc



U.S. Department of Defense Kicks Off Fifth Bug Bounty Challenge With HackerOne

The DoD Invites Hackers to Test Enterprise System Security Used for Global Operations.

HackerOne, the leading hacker-powered security platform, today announced the fifth U.S. Department of Defense bug bounty program. The program opened registration on April 1, 2018, is scheduled to conclude on April 29, 2018, and will focus on a Department of Defense (DoD) enterprise system relied on by millions of employees for global operations.


“The DoD has seen tremendous success to date working with hackers to secure our vital systems, and we’re looking forward to taking a page from their playbook,” said Jack Messer, project lead at Defense Manpower Data Center. “We’re excited to be working with the global ethical hacker community, and the diverse perspectives they bring to the table, to continue to secure our critical systems.”

To be eligible to participate in the bug bounty challenge, individuals from the public must be United States taxpayers or citizens of, or eligible to work in, the United Kingdom, Canada, Australia, or New Zealand. U.S. government active military members and contractor personnel are also eligible to participate but not eligible for financial rewards. See full eligibility requirements and register here.

“Millions of government employees and contractors use and rely upon key enterprise systems every day,” said Reina Staley, Chief of Staff at Defense Digital Service. “Any compromise of the system or the sensitive information it handles would be detrimental to our people and our mission. These bug bounty challenges are a way to give talent outside the public sector a channel to safely disclose security issues and get rewarded for these acts of patriotism.”

Since the Hack the Pentagon program kicked off in 2016, over 3,000 vulnerabilities have been resolved in government systems. The first Hack the Air Force bug bounty challenge resulted in 207 valid reports and hackers earned more than $130,000 for their contributions. The second Hack the Air Force resulted in 106 valid vulnerabilities surfaced and $103,883 paid to hackers. Hack the Army in December 2016 surfaced 118 valid vulnerabilities and paid $100,000, and Hack the Pentagon in May 2016 resulted in 138 valid vulnerabilities resolved and tens of thousands paid to ethical hackers for their efforts. Hack the Air Force 2.0 demonstrates continued momentum of the Hack the Pentagon program beyond just its first year, as well as a hardened attack surface.

“The most security mature organizations look to others for help,” said Alex Rice, co-founder and CTO at HackerOne. “The Department of Defense continues to innovate with each bug bounty challenge, and the latest challenge is no exception. We’re excited to bring a fresh, mission-critical asset to the hacker community with the goal of protecting the sensitive government data it contains.”

 

via:  businesswire

