Congress Approves Creation of New Cybersecurity Agency at DHS

U.S. DEPARTMENT OF HOMELAND SECURITY

Office of Public Affairs


FOR IMMEDIATE RELEASE

November 16, 2018

Cybersecurity and Infrastructure Security Agency

On November 16, 2018, President Trump signed into law the Cybersecurity and Infrastructure Security Agency Act of 2018. This landmark legislation elevates the mission of the former National Protection and Programs Directorate (NPPD) within DHS and establishes the Cybersecurity and Infrastructure Security Agency (CISA).

  • CISA leads the national effort to defend critical infrastructure against the threats of today, while working with partners across all levels of government and in the private sector to secure against the evolving risks of tomorrow.
  • The name CISA brings recognition to the work being done, improving the agency's ability to engage with partners and stakeholders and to recruit top cybersecurity talent.

What Does CISA Do?

CISA is responsible for protecting the Nation’s critical infrastructure from physical and cyber threats. This mission requires effective coordination and collaboration among a broad spectrum of government and private sector organizations.

Proactive Cyber Protection:

  • CISA’s National Cybersecurity and Communications Integration Center (NCCIC) provides 24×7 cyber situational awareness, analysis, incident response and cyber defense capabilities to the Federal government; state, local, tribal and territorial governments; the private sector and international partners.
  • CISA provides cybersecurity tools, incident response services and assessment capabilities to safeguard the ‘.gov’ networks that support the essential operations of partner departments and agencies.

Infrastructure Resilience:

  • CISA coordinates security and resilience efforts using trusted partnerships across the private and public sectors, and delivers training, technical assistance, and assessments to federal stakeholders as well as to infrastructure owners and operators nationwide.
  • CISA provides consolidated all-hazards risk analysis for U.S. critical infrastructure through the National Risk Management Center.

Emergency Communications:

  • CISA enhances public safety interoperable communications at all levels of government, providing training, coordination, tools and guidance to help partners across the country develop their emergency communications capabilities.
  • Working with stakeholders across the country, CISA conducts extensive, nationwide outreach to support and promote the ability of emergency response providers and relevant government officials to continue to communicate in the event of natural disasters, acts of terrorism, and other man-made disasters.

Organizational Changes Related to the CISA Act

The CISA Act establishes three divisions in the new agency: Cybersecurity, Infrastructure Security and Emergency Communications.

  • The Act transfers the Office of Biometric Identity Management (OBIM) to DHS’s Management Directorate. Placement within the DHS Headquarters supports expanded collaboration and ensures OBIM’s capabilities are available across the DHS enterprise and the interagency.
  • The bill provides the Secretary of Homeland Security the flexibility to determine an alignment of the Federal Protective Service (FPS) that best supports its critical role of protecting federal employees and securing federal facilities across the nation and territories.


How automated incident response can help security

Automated incident response can benefit security both in the cloud and in traditional settings. Read what it can be used for and how it helps.

Despite the increase in breaches and security incidents we hear about regularly, many incident response teams are understaffed or struggling to find the right skill sets to get the work done.

Today, more enterprise incident response teams actively look for opportunities to automate processes that often take up too much time for highly skilled analysts, as well as those that require lots of repetition and provide little value in investigations. Common activities that many teams consider automating include the following:

  • Identifying and correlating alerts: Many analysts spend inordinate amounts of time wading through repetitive alerts and alarms from many log and event sources, and then spend time piecing together correlation strategies for similar events. While this is valuable for the later stages of investigations, it can also be highly repetitive, and can be automated to some degree (see the sketch after this list).
  • Identifying and suppressing false positives: This can be tedious work on a good day and overwhelming on a bad one. Identifying false positives can often be streamlined or automated using modern event management and incident response automation tools.
  • Initial investigation and threat hunting: Analysts need to quickly find evidence of a compromised system or unusual activity, and they often need to do so at scale.
  • Opening and updating incident tickets/cases: Due to improved integration with ticketing systems, event management and monitoring tools used by response teams can often generate tickets to the right team members and update these as evidence comes in.
  • Producing reports and metrics: Once evidence has been collected and cases are underway or resolved, generating reports and metrics can take a lot of analysts’ time.
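
To make the first activity above concrete, here is a minimal, generic sketch that collapses repetitive alerts into correlated groups so an analyst reviews one summarized record instead of hundreds of near-duplicates. The alert fields ("source", "signature", "timestamp") and the grouping key are assumptions for illustration, not any particular SIEM's schema.

```python
from collections import defaultdict
from datetime import datetime

# Sketch: collapse repetitive alerts into correlated groups.
# The alert dictionary fields are illustrative assumptions, not a
# specific SIEM's schema.

def correlate(alerts):
    """Group raw alerts by (source, signature) and summarize each group."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["source"], alert["signature"])].append(alert)

    summaries = []
    for (source, signature), items in groups.items():
        times = sorted(datetime.fromisoformat(a["timestamp"]) for a in items)
        summaries.append({
            "source": source,
            "signature": signature,
            "count": len(items),
            "first_seen": times[0].isoformat(),
            "last_seen": times[-1].isoformat(),
        })
    # Surface the noisiest groups first so analysts can triage them together.
    return sorted(summaries, key=lambda s: s["count"], reverse=True)

if __name__ == "__main__":
    raw_alerts = [
        {"source": "10.0.0.5", "signature": "ssh-brute-force", "timestamp": "2018-10-01T12:00:00"},
        {"source": "10.0.0.5", "signature": "ssh-brute-force", "timestamp": "2018-10-01T12:00:05"},
        {"source": "10.0.0.9", "signature": "port-scan", "timestamp": "2018-10-01T12:01:00"},
    ]
    for summary in correlate(raw_alerts):
        print(summary)
```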

James Carder and Jessica Hebenstreit of Mayo Clinic provided several tactical examples of automated incident response in a past RSA Conference presentation:

  • automated domain name system (DNS) lookups of never-before-seen domain names, driven by proxy and DNS logs (a minimal sketch follows this list);
  • automated searches for detected indicators of compromise;
  • automated forensic imaging of disk and memory from a suspect system driven by alerts triggered in network and host-based antimalware platforms and tools; and
  • network access controls automatically blocking outbound command-and-control channels from a suspected system.
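
As a rough sketch of the first bullet above (and only a sketch; the presenters' actual tooling is not described here), the script below reads domain names from a DNS or proxy log, keeps a persistent record of domains already seen, and resolves only the new ones for analyst follow-up. The file names and the one-domain-per-line log format are assumptions.

```python
import socket

# Sketch: resolve only domains we have never seen before, driven by a
# proxy/DNS log. The log format (one domain per line) and file names are
# assumptions for illustration.

SEEN_FILE = "seen_domains.txt"
LOG_FILE = "dns_queries.log"

def load_seen(path):
    try:
        with open(path) as f:
            return set(line.strip() for line in f if line.strip())
    except FileNotFoundError:
        return set()

def main():
    seen = load_seen(SEEN_FILE)
    new_domains = []
    with open(LOG_FILE) as f:
        for line in f:
            domain = line.strip().lower()
            if domain and domain not in seen:
                new_domains.append(domain)
                seen.add(domain)

    for domain in new_domains:
        try:
            addresses = socket.gethostbyname_ex(domain)[2]
            # Hand the result off to an analyst queue or enrichment step.
            print(f"NEW DOMAIN {domain} -> {addresses}")
        except socket.gaierror:
            print(f"NEW DOMAIN {domain} -> did not resolve")

    with open(SEEN_FILE, "w") as f:
        f.write("\n".join(sorted(seen)))

if __name__ == "__main__":
    main()
```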

There are many more areas where automated incident response can help, especially in forensic evidence gathering, threat hunting, and even automated quarantine or remediation activities on suspect systems.

Endpoint security vendors have begun to emphasize response automation activities and integration with detection, response and forensics capabilities. Analysts need to quickly identify indicators of compromise and perform lookup actions across other systems, and automating as much of this as possible is a common goal today.

There are a fair number of vendors and tools that can help integrate automation activities and unify disparate tools and platforms used for detection and response. These include Swimlane, FireEye Security Orchestrator, CyberSponse, Phantom, IBM Resilient Incident Response Platform, Hexadite and more, most of which use APIs with other platforms and tools to enable them to share data and create streamlined response workflows.

Things to consider when evaluating these types of products include maturity of the vendor, integration partners, alignment with SIEM and event management, and the ease of use and implementation.

Automated incident response in the cloud

Incident response in the cloud may rely on scripting, automation and continuous monitoring more heavily than in-house incident response does. Many of the detection and response tools emerging for the cloud are heavily geared toward automation capabilities, which tend to be written to work with a specific provider’s APIs, most commonly Amazon Web Services (AWS) at the moment.

Teri Radichel wrote a paper on AWS automated incident response and released a simple toolkit to help with it, as well.

The ThreatResponse toolkit developed by Andrew Krug, Alex McCormack, Joel Ferrier and Jeff Parr can also be used to automate incident response collection, forensics and reporting for cloud environments.

To truly implement automated incident response in the cloud, incident response teams will need to build automated triggers for event types that run all the time — such as AWS CloudWatch filters — especially as the environment gets more dynamic.

Deciding what triggers to implement and what actions to take is the most time-consuming aspect of building a semi-automated or automated response framework in the cloud. Do you focus on user actions? Specific events generated by instances or storage objects? Failure events? Spending time learning about cloud environment behaviors and working to better understand normal patterns of use may be invaluable here.
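
As one hedged example of such a trigger, the sketch below uses boto3 to create a CloudWatch Events rule that matches security group ingress changes recorded by CloudTrail and routes them to a responder Lambda function. The rule name and function ARN are placeholders, and whether this particular event is the right trigger depends entirely on your environment.

```python
import json
import boto3

# Hedged sketch: wire a CloudWatch Events rule to a responder Lambda so that
# security group ingress changes trigger an automated review. The rule name
# and Lambda ARN are placeholders; adapt the event pattern to the actions
# that matter in your environment.

events = boto3.client("events")

RULE_NAME = "sg-ingress-change-responder"  # placeholder name
RESPONDER_ARN = "arn:aws:lambda:us-east-1:123456789012:function:ir-responder"  # placeholder

event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["AuthorizeSecurityGroupIngress",
                             "RevokeSecurityGroupIngress"]},
}

# Create (or update) the rule that matches the CloudTrail events of interest.
events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
    Description="Route security group ingress changes to the IR responder",
)

# Point the rule at the Lambda function that performs the automated response.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "ir-responder", "Arn": RESPONDER_ARN}],
)
```

In a real deployment you would also grant CloudWatch Events permission to invoke the function (for example with lambda add-permission) and decide what the responder actually does: tag the resource, notify the on-call analyst, or revert the change.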

None of these tools and methods will replace skilled, knowledgeable security analysts who understand the environment and how to properly react during an incident scenario. However, unless we start detecting and responding more quickly, there’s no way we’ll ever get ahead of the attackers we face now and in the future.

 

via: techtarget



NERC CIP Audits: Top 8 Dos and Don’ts

Over my career, I have been involved with quite a few projects, including CIP compliance audits, investigations, auditor training, and many advisory sessions. Typically, I advised entities across North America on different tactics, techniques, and insights drawn from the best practices I have seen. I want to share a few of the dos and don’ts from my experience out in the field.

8) Do Practice a Mock Audit

You will be audited. I cannot believe how many times I would walk into an entity and find out they had never performed a mock audit with their staff. They didn’t know the types of questions they would be asked, the evidence to produce, or the responses they should prepare. Everyone was yelling at each other. It was a mess. Don’t let this be your entity; make sure you practice several mock audits to understand where you may have some weaknesses. If you do nothing else listed here, this is highly recommended.

7) Don’t Lawyer up Every Conversation

While having lawyers is very important for any dispute, settlement, or compliance program process, they aren’t always the best choice to be the front line for answering questions. For example, you don’t want your corporate attorney answering technical questions about how your Electronic Security Perimeters (ESPs) are designed and configured.

6) Do Show Your Work

A lot of times, I would see an entity provide evidence of results. Sometimes you will hear auditors ask to see how you got to your results. A great example here is a Cyber Vulnerability Assessment or CVA.

I remember one entity that performed their CVA and ended up with a pile of results/action items to fix. They then showed a piece of paper that said “Results” with a completed check mark next to it. When the auditors asked how they completed some of these tasks, or whether they could see the steps taken to get this result, the entity had no answers. They couldn’t even confirm that all of the CVA findings were fixed because they didn’t have documentation for themselves.

5) Don’t Redact all Your Documentation and Evidence

The goal of the auditor is to help your entity demonstrate compliance with the NERC CIP standards, not to find areas of non-compliance.

I have been on audits where the entity would not even allow the auditors to view evidence by themselves – it had to be on an entity-owned machine with limited access, and the documents were mostly blacked out. All this did was extend the audit another week and create a starting point for more questions.

Please help the auditors by making evidence accessible and useful.

4) Do be Polite and Patient

When an auditor asks for information, they are usually just trying to get an understanding of your environment. This isn’t a court hearing. The audit team is just trying to gain an understanding of the entire picture because they don’t know your environment as well as you do.

They may also not be familiar with certain acronyms, diagrams and other procedures at your organization. Take your time and explain things to them, since they will help tell your story of compliance.

3) Don’t Scramble for Documentation

A perfect example here is CIP-004 R2 and R1 training and awareness program records. The CIP training standards dictate that authorized staff with unescorted physical or electronic access to BES Cyber Assets, otherwise known as BCAs, must go through a NERC CIP compliance training program. The NERC CIP security awareness program requirements under R1 simply say you need to prove that you made the staff and personnel in scope aware of the program. Seems easy, but it’s not unless you work together with your departments.

Any of your staff, contractors, vendors, and even cleaning crew might fall into the scope of this requirement. Make sure you have reports and records of your security awareness training program content available for the entire audit scope period so that you are not scrambling during the audit. Every department is going to have a different set of personnel to keep compliant.

2) Do Listen to CIP Auditors’ Advice

I have worked with the CIP audit and compliance teams in every region across North America. Your auditors have a lot of experience. They have seen more implementations, configurations, environments and procedures than you could ever imagine.

Listen to them if they talk about best practices or advice for additional approaches towards demonstrating compliance. Sometimes it can really help open your eyes to a different point of view.

1) Don’t Argue Over Every Word

During old CIP Version 3 audits, I have seen words like “significant,” “annual” and other non-defined terms used in every possible way you could imagine. Of course, some of that language has been cleaned up in the modern CIP standards, but you get the point. If you do have an undefined term, ensure you define it somewhere in your internal documents to show the audit team what you mean. Listen to best practices across your region and from NERC. Don’t try and re-invent the wheel.

These are just some basic tips I have personally experienced along the way. Audits are going to be tough no matter how prepared you are. Knowing that going in is half the battle. Make sure you have a plan, get your employees to communicate that plan, and execute. If every program were perfect, we wouldn’t need these types of compliance regulations. Mistakes happen, and how you learn from these mistakes is the goal of a successful compliance program.

Learn more about how Tripwire can help make your NERC CIP audit simpler, including insights on generating RSAWs and responding appropriately to pre-audit requests, by downloading a new paper here.

 

 

via:  tripwire



Proactive System Hardening: Continuous Hardening’s Coming of Age

The first article in this series examined configuration hardening—essentially looking at ports, processes and services where security configuration management (SCM) is key. The second article looked at application and version hardening strategies. This third installment will discuss the role of automation in the coming of age of what’s called “continuous hardening.”

Known Vulnerabilities vs. Conditional Vulnerabilities

If I want to harden my systems against “known vulnerabilities”—weaknesses or deficiencies for which there are known common vulnerabilities and exposures (CVEs)—I use a vulnerability management solution. If I need to harden my systems against “conditional vulnerabilities”—weaknesses based on the way they’re configured—I use an SCM solution. But without automation to provide the element of “continuousness” to these efforts, we rapidly find ourselves back at square one.

What is Configuration Drift?

To stick with our house analogy: If I’ve checked the configurations of all my doors and windows, but I have no way to know when the state has changed and I instead rely on periodic inspection by human eyes, a phenomenon known as “configuration drift” invariably occurs.

I open the fire escape window to water the potted hydrangea sitting out there but forget to close it afterward: configuration drift. I enable Telnet to maintain or update a server and then forget to disable it afterward: configuration drift.
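
To make drift detection concrete, here is a minimal sketch (not a description of any particular SCM product) that compares the current state of a few services on a Linux host against a stored baseline and reports anything that has changed. The baseline file format and the services in it are assumptions.

```python
import json
import subprocess

# Minimal drift-check sketch: compare current service state on a Linux host
# against a saved baseline. The baseline file format and the services listed
# are illustrative assumptions, not any particular SCM product's behavior.

BASELINE_FILE = "service_baseline.json"  # e.g. {"telnet.socket": "inactive", "sshd": "active"}

def service_state(name):
    """Return the systemd state of a service, e.g. 'active' or 'inactive'."""
    result = subprocess.run(["systemctl", "is-active", name],
                            capture_output=True, text=True)
    return result.stdout.strip() or "unknown"

def main():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)

    for name, expected in baseline.items():
        actual = service_state(name)
        if actual != expected:
            # e.g. "telnet.socket drifted: expected inactive, found active"
            print(f"{name} drifted: expected {expected}, found {actual}")

if __name__ == "__main__":
    main()
```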

The Role of Automation in Continuous System Hardening

A primary weakness of our house analogy is actually useful here, as it shows us the critical need for automation. In real life, most people have one house. But most organizations have hundreds—if not many, many thousands—of servers, desktop systems, laptops and devices. These represent an almost inexhaustible supply of attack surface and potential beachheads. How can we win a war at this scale?

Automation requires us not only to create continuous, ongoing routines to assess state across this vast array of targets but also to make allowances for the constantly changing conditions that give meaning and relevance to risk.

In the case of our house, it’s useful to know that, over the last two years, the leafy maple out back has grown a large solid branch that’s close enough to an upstairs bedroom for a tall thief to reach the window. And the inverse is sometimes true: If the old kitchen window was painted shut twenty years ago, who needs to waste time including it in our daily “is it locked” checklist?

This critical need for current “state” information has caused the security community to create more persistent real-time agents, more effective scanning processes that are “aware” of network constraints and ways to avoid “mega scans” in favor of continuous segmented scanning.

Integrating Disparate Security Systems

They’ve also broken down barriers between infosec solutions themselves and addressed another critical requirement for achieving this attribute of “continuousness”: Information security systems must talk to one another. A few simple examples illustrate this need:

  • Vulnerability Management: Vulnerability management (VM) systems are quite good at finding unexpected (and likely unsecured) systems. When one of these is discovered, the VM system can tell the SCM system about the new asset and ask it to perform an on-the-spot configuration assessment (see the sketch after this list).
  • Security Configuration Management: Similarly, SCM systems are evolving intelligent ways to classify assets: by business unit, by system owner, by critical application, and even by the type and criticality of data stored on the system. This helps manage and prioritize their own risks, but when shared with a VM system, this also helps clarify and prioritize remediation efforts.
  • Security Information and Event Management: SIEM systems draw on both of the above extensively as foundational sources of security information: in the first case, correlating known vulnerabilities with detected threats, and in the second case, using sudden configuration changes (“Why is the ‘Telnet should not be enabled’ test suddenly failing?”) to power real-time threat intelligence models.
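
The first integration above might look something like the sketch below, in which newly discovered assets reported by a VM system are handed to an SCM system for an on-the-spot configuration assessment. The endpoints, token, payload fields and policy name are all hypothetical; real products expose their own APIs for this kind of hand-off.

```python
import requests

# Hypothetical sketch of a VM -> SCM hand-off. The URLs, token and payload
# fields below are invented for illustration; real vulnerability management
# and security configuration management products each expose their own APIs.

VM_API = "https://vm.example.internal/api/assets/new"     # hypothetical
SCM_API = "https://scm.example.internal/api/assessments"  # hypothetical
HEADERS = {"Authorization": "Bearer REPLACE_ME"}

def main():
    # 1. Ask the VM system for assets it discovered since the last sync.
    new_assets = requests.get(VM_API, headers=HEADERS, timeout=30).json()

    # 2. For each one, ask the SCM system to run an on-the-spot configuration assessment.
    for asset in new_assets:
        payload = {
            "address": asset["ip"],
            "hostname": asset.get("hostname", ""),
            "policy": "baseline-hardening",   # hypothetical policy name
            "reason": "discovered-by-vm-scan",
        }
        resp = requests.post(SCM_API, json=payload, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        print(f"Queued configuration assessment for {asset['ip']}")

if __name__ == "__main__":
    main()
```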

SC Magazine summed up these needs in a prescient review of policy management systems—what we’ve called “security configuration management” systems in this article—way back in 2010: “The only reasonable answer to the challenges of compliance, security and configuration management is to automate the tasks.”

The key to continuous system hardening as a goal and a discipline is a willingness to seek out and employ automation wherever possible. Gone are the days when isolated, siloed systems can harden information systems and keep them that way in the face of continuous drift.

Highly interactive solutions that understand the ever-shifting nature of “state” and talk to each other regularly—security configuration and vulnerability management solutions in particular—are the first, best and often the last line of defense.

 

via:  tripwire



Proactively Hardening Systems: Application and Version Hardening

The first article in this series examined configuration hardening, essentially looking at ports, processes and services as the “doors, gates and windows” into a network where security configuration management (SCM) becomes the job of determining which of these gateways should be open, closed, or locked at any given time. Now it’s time to look at application and version hardening.

What is System Hardening?

If configuration hardening settings are “conditional,” meaning they must find and keep that balance between security and productivity, then hardening against known vulnerabilities in applications and versions is much more black-and-white.

If an exploit path has been found in an operating system or application, the vendor rushes to create a patch or upgrade that removes the vulnerability. “Hardening” in this sense means “making sure the holes are known and that the most current security patches are deployed.”

One Way Hackers Exploit Known Vulnerabilities

To go back to our “secure house” analogy from the previous article in this series for a moment, imagine that the house I’m protecting has three external doors and that they all use Secure-A-Door Model 800 high-strength locks.

But a tester at the Secure-A-Door factory (or worse, a professional burglar) has just discovered an interesting thing: If you slide a credit card along the door jamb at 15 degrees while pulling up on the handle, the Secure-A-Door 800 pops open like a Coke can.

One of the most famous examples of this kind of exploitation began in 2008. That’s when the makers of the Conficker worm discovered and exploited an underlying weakness in the Windows Server service, exposed on port 445.

The worm created a remote procedure call that dropped a DLL on the system, unloaded two distinct packets for data and code, and hid itself in a remote thread to make itself at home. (It was infinitely more complex and clever than that, but you get the idea.)

In effect, the worm popped the Secure-A-Door Model 800, let itself in, repaired the lock, installed a new phone line to listen for orders, and sat in a comfy chair waiting for instructions. It was able to leverage the internet, could register new domain names in which to hide, and created an extensive botnet that by 2010 had infected, according to Panda Security, as many as 18 million PCs—6 percent of the world’s PC population at the time.

Common Vulnerabilities and Exposures (CVEs)

This type of design failure or exploit is usually repaired by a patch. In the case of Conficker, Microsoft Security Bulletin MS08-067 made the danger known to the worldwide Microsoft community and introduced a patch to prevent easy exploitation over port 445.

The MS bulletin was in turn translated by the Common Vulnerabilities and Exposures site as CVE-2008-4250 and given a Common Vulnerability Scoring System (CVSS) rating of 10—the most severe rating possible.

Vulnerability Management

Vulnerability management (VM) systems, unlike SCM systems that check to see that doors and gates and windows are locked, do their part in system hardening differently. They make sure the proper patch levels are maintained and that any available defenses have been utilized. Using our analogy, we’d be conducting the following checks:

  • Proactively discovering whether I have any Secure-A-Door Model 800 locks installed
  • If I do, reporting on whether they’re the corrected “B” version made after October 2012
  • Verifying that any “bad” ones I have are only on inside doors and don’t serve as a primary defense

VM systems enable continuous hardening by making sure that CVE-2008-4250—and its many thousands of friends—are understood, mitigated, and more-or-less unexploitable when the right steps are taken.
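
As a toy illustration of the kind of check a VM system automates (greatly simplified, and borrowing the lock analogy rather than real product data), the sketch below compares an inventory of installed software versions against a small list of known-vulnerable versions.

```python
# Toy sketch of the version check a vulnerability management system automates.
# The inventory and the vulnerable-version list are made-up examples; real VM
# products pull CVE data from feeds such as NVD and scan hosts directly.

KNOWN_VULNERABLE = {
    # product: set of affected versions (illustrative only)
    "secure-a-door-lock": {"800"},          # the analogy's flawed lock model
    "example-smb-service": {"1.0", "1.1"},
}

installed = [
    {"host": "web01", "product": "example-smb-service", "version": "1.1"},
    {"host": "web02", "product": "example-smb-service", "version": "2.0"},
    {"host": "door03", "product": "secure-a-door-lock", "version": "800"},
]

def findings(inventory):
    for item in inventory:
        bad_versions = KNOWN_VULNERABLE.get(item["product"], set())
        if item["version"] in bad_versions:
            yield f'{item["host"]}: {item["product"]} {item["version"]} is a known-vulnerable version'

for finding in findings(installed):
    print(finding)
```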

More mature solutions provide an ongoing assessment of overall risk based on whether these vulnerabilities are mitigated or ignored.

 

via:  tripwire



California passes law that bans default passwords in connected devices

Good news!

California has passed a law banning default passwords like “admin,” “123456” and the old classic “password” in all new consumer electronics starting in 2020.

Every new gadget sold in the state, from routers to smart home tech, will have to come with “reasonable” security features out of the box. The law specifically calls for each device to come with a preprogrammed password “unique to each device.”

Alternatively, a device can satisfy the law by containing “a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time,” forcing users to set a new, unique password as soon as the device is switched on for the first time.

For years, botnets have utilized the power of badly secured connected devices to pummel sites with huge amounts of internet traffic — so-called distributed denial-of-service (DDoS) attacks. Botnets typically rely on default passwords that are hardcoded into devices when they’re built and aren’t later changed by the user. Malware breaks into devices using publicly available default passwords, hijacks them and ensnares them into conducting cyberattacks without the user’s knowledge.

Two years ago, the notorious Mirai botnet dragged thousands of devices together to target Dyn, a networking company that provides domain name service to major sites. By knocking Dyn offline, other sites that relied on its services were also inaccessible — like Twitter, Spotify and SoundCloud.

Mirai was a relatively rudimentary, albeit powerful botnet that relied on default passwords. This law is a step in the right direction to prevent these kinds of botnets, but falls short on wider security issues.

Other, more advanced botnets don’t need to guess a password because they instead exploit known vulnerabilities in Internet of Things devices — like smart bulbs, alarms and home electronics.

As noted by others, the law as signed does not mandate device makers to update their software when bugs are found. The big device makers, like Amazon, Apple and Google, do update their software, but many of the lesser-known brands do not.

Still, as it stands, the law is better than nothing — even if there’s room for improvement in the future.

 

via:  techcrunch



Google+ Shutting Down After Bug Leaks Info of 500k Accounts

Google has announced that it is closing the consumer functionality of Google+ due to a lack of adoption and an API bug that leaked the personal information of up to 500,000 Google+ accounts.

While no evidence was found that indicates this bug was ever misused, it was determined that the complexity of protecting and operating a social network like Google+ was not a worthwhile endeavor when so few users actually used the service for any length of time.

“This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps,” stated a blog post by Google regarding the Google+ closure. “The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.”

The consumer functionality of Google+ will be shut down over a 10-month period while Google transitions the product into an offering for enterprise customers.

API bug caused data leak

After performing a code review of the Google+ APIs, called Project Strobe, Google stated they discovered a bug that could leak the private information of Google+ accounts. This bug could allow a user’s installed apps to utilize the API and access non-public information belonging to that user’s friends. The non-public information that was accessible includes an account holder’s name, email address, occupation, gender and age.

As part of the Project Strobe audit, Google discovered a bug in one of the Google+ People APIs, which the company described as follows:

  • Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API.
  • The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public. 
  • This data is limited to static, optional Google+ Profile fields including name, email address, occupation, gender and age. (See the full list on our developer site.) It does not include any other data you may have posted or connected to Google+ or any other service, like Google+ posts, messages, Google account data, phone numbers or G Suite content.
  • We discovered and immediately patched this bug in March 2018. We believe it occurred after launch as a result of the API’s interaction with a subsequent Google+ code change.

As Google only keeps two weeks of API logs for its Google+ service, it was impossible for them to determine if the bug was ever misused. They were able to determine that the bug was not misused during the two weeks that they had log data.

Google knew about leak in May but did not disclose

According to a report by the Wall Street Journal, the bug in the Google+ API existed between 2015 and March 2018, which was when Google discovered and fixed the bug. According to their reporting, an internal committee at Google decided not to disclose the bug even though they were not 100% sure that it was not abused.

The Wall Street Journal reported that it had reviewed a memo prepared by Google’s legal and policy staff, which indicated that disclosing the data breach could lead to scrutiny by government regulatory agencies.

According to the Journal, the memo warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

In a statement, a Google spokesperson said that the company’s Privacy & Data Protection Office felt disclosure was not necessary because the issue did not meet the thresholds that would warrant it.

“Every year, we send millions of notifications to users about privacy and security bugs and issues. Whenever user data may have been affected, we go beyond our legal requirements and apply several criteria focused on our users in determining whether to provide notice.

Our Privacy & Data Protection Office reviewed this issue, looking at the type of data involved, whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met in this instance.

The review did highlight the significant challenges in creating and maintaining a successful Google+ that meets consumers’ expectations. Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of Google+.” – Google Spokesperson.

 

via:  bleepingcomputer


The Coders of Kentucky

A bipartisan effort to revitalize the heartland, one tech job at a time.

Matthew Watson opened his car door at a gas station outside Hueysville, Ky., sprang out and exclaimed, “I got a new job!” He blushed slightly; he was not one to boast. But for this slender, 33-year-old man with a red beard, a father of two small daughters who had once been ashamed of supplementing his low-pay, long-hours job with food stamps, this was fantastic news.

I’d driven to Hueysville past trucks with “Diggin’ Coal” decals, on a road slicing through mountains that rose in steep, majestic steps up to tops flattened by dynamite, past turnoffs to forgotten union halls where the eight-hour workday had been won and billboards that had recently read, “Trump for President.” (Kentucky went 63 percent for him.) Mr. Watson’s home, like much of Appalachia, reflects the landscape and culture of coal, without the coal mining jobs. And there was little hope of alternatives — until now.

“After I got my two associate’s degrees, the best job I could find was selling cigarettes behind the counter in Hazard, a 45-minute commute from home, for $10 an hour, and that was after a promotion to manager,” Mr. Watson told me the first time we met. “Some of my customers were opioid addicts, who slurred their speech, scratched their arms, laid their heads on my counter. In the back of my mind, I always think, ‘If I want to stay living here, if I didn’t have this job, I’d be working that job.’”

Then one day Mr. Watson heard an ad on the car radio. “It was for a 24-week course in coding, with an eight-week apprenticeship, which I later learned could qualify me for a $40,000-plus job designing apps for cellphones,” he said. The advertisement had been put out by a Louisville tech start-up called Interapt. “I immediately applied online, got interviewed, aced the test, and they hired me as an intern and then as a junior software developer,” Mr. Watson said. Within a year, he was offered yet another job as a software engineer, for a Florida-based company, for a salary well over $50,000.

On its first run in 2016, Interapt had 800 applicants, accepted 50 and graduated 35. (Some of the 15 who dropped out did so to tend a sick relative, join the military or take a non-tech job.) Of the 35 graduates, 25 were given job offers by Interapt, and 10 were hired by other tech companies in the area. This year Interapt will train approximately 90 people; next year Interapt expects that number to rise to more than 150.

Ankur Gopal, a University of Illinois graduate from Owensboro, Ky., started Interapt in his basement in Louisville in 2011, when he was 35. He is now renovating an empty warehouse in a run-down part of the city, investing nearly $4 million and creating jobs in the process. “With millions of U.S. tech jobs out there,” Mr. Gopal said, “we could help transform eastern Kentucky. Well, hey — Middle America.”

Mr. Gopal is at the forefront of a new movement to bring money and jobs from the coastal capitals of high tech to a discouraged, outsource-whipped Middle America. Ro Khanna, the Democratic representative from California whose district includes Apple, Intel, LinkedIn and Yahoo, was among the first politicians to float the idea of Silicon Valley venturing inland. “Why outsource coding jobs to Bangalore when we can insource jobs to eastern Kentucky, poor in jobs but rich in work ethic, and every one I.T. job brings four or five other jobs with it?” he said.

The stories of these Interapt graduates in the green hamlets of eastern Kentucky begin with dead ends and end with new beginnings.

“Nights I was manning the reception desk at Super 8, for $7.50 an hour, and days I was working at Little Caesars and still struggling to pay family bills,” Shea Maynard told me. Now, she said, “I’m modifying the information architecture of Interapt products.” She continued, “I never thought it was possible for a person like me to have a career I love.”

Most described feeling engrossed in the work. “Sitting at the desk in my trailer, I can go till 2 a.m.,” one man said. “I have to remember to stop.”

Starting when Crystal Adkins was 13, she almost single-handedly fed, dressed and raised her two younger siblings, while her own interest in school faded. Now she is Interapt’s star trainer. In addition to teaching, Ms. Adkins has been learning new coding languages and training her own children to code.

The success of the Interapt training program has depended on the enthusiasm of politicians from disconnected regions and increasingly hostile political parties.

Mr. Gopal first gathered support from Gov. Matt Bevin of Kentucky and Representative Hal Rogers, both Republicans. They were instrumental in the Appalachian Regional Commission approving $2.7 million to get the training program off the ground. The Department of Labor authorized apprenticeship status for its graduates.

Mr. Rogers is a conservative who represents Kentucky’s Fifth District, home to many unemployed coal miners and one of the poorest and most population-depleted districts in the country. He found an unlikely ally in Mr. Khanna, a progressive Democrat and former official in the Obama administration, who represents California’s 17th District, one of the richest, fastest-growing and most liberal districts in the country. In the 2016 presidential vote, it went 73.9 percent for Hillary Clinton. Mr. Rogers’s district went 79.6 percent for Mr. Trump. But Mr. Rogers’s office called Mr. Khanna’s, and invited him to see Interapt in a widely promoted visit last year.

Mr. Rogers wants the tech companies in Mr. Khanna’s district to consider investing in Kentucky and hiring its citizens. Mr. Khanna was remarkably open to the idea. “We believe in distributed jobs,” he said. “There is no reason these companies can’t engage thousands of talented workers in Iowa, Kentucky or West Virginia for projects.”

Despite these gestures of bipartisanship, the initiative has had to overcome stereotypes, the first one being about Interapt itself. Many locals were suspicious of outsiders’ intentions. Maybe Interapt was associated with some big-government, Obama-era program, or maybe it was a fraud pulled on rural towns by fast-talking city people. “Even after I was chosen,” a trainee told me, “I didn’t completely trust the program until we were asked to open our folders and I found a check for $400,” the weekly stipend for trainees. “Then I knew it was for real.”

Then there were the stereotypes held by the companies to which Interapt was pitching its graduates; many potential employers were skeptical of the apprenticeship model. As Ervin Dimeny, the former commissioner of the Kentucky Labor Cabinet’s Department of Workplace Standards, explained to me: “We think of apprenticeship as a way to certify 19th-century metalworkers. Or we associate it with boring high school shop class. We need to re-envision apprenticeships as passports to respectable middle-class careers.”

Worse, some saw rural Kentuckians as dubious recruits — tooth-free, grinning, moonshine-drinking hillbillies. “It’s a terrible myth,” an Interapt administrator who is the daughter of an unemployed Pikeville coal miner told me. “A hillbilly can do anything. Out in the hollows, you can’t call in specialists; you fix that stalled truck, that leaky roof, that broken radio yourself.” It’s the “car heads” — who can fix anything under a hood — who turn out to be inspired app developers, a recruiter told me. Those car heads include women too, who made up about a third of the first class.

Other investors are following Mr. Gopal’s lead. For example, the former chief executive of AOL, Steve Case, started an initiative called “Rise of the Rest,” which involves driving a big red bus around the country (it has visited 38 cities so far) and giving out $150 million in seed money to entrepreneurs. J.D. Vance, author of the best-selling “Hillbilly Elegy,” was brought on as a managing partner. As Mr. Case told an audience of hundreds in Louisville’s Speed Art Museum in May, 75 percent of venture capital now goes to three states: California, New York and Massachusetts. And half of all venture capital goes to Silicon Valley. Yet start-ups account for half of all new jobs in the United States. Why can’t those start-ups start somewhere else?

I.T. training is not going to solve all the problems of eastern Kentucky, of course. It may be hard to scale up. Not all of us warm to or can do I.T. work. And like coal-mining itself, I.T. jobs can be lost to automation.

If they are, could these visionary ventures crash into new dead ends? Interapt was itself experimenting with a new software that could improve the process of selecting trainees — possibly reducing tasks associated with one job right there. “Over time, some I.T. jobs will disappear, as will jobs for truck drivers, machine-tool makers and a lot of others too,” Mr. Gopal said. “But we teach our trainees to keep learning.”

If you know French, a trainer explained, “you can get the hang of Spanish and Portuguese. You stay ahead of the curve like that.”

For now, there is so much demand for I.T. workers — 10,000 estimated openings by 2020 in the Louisville metro area alone — that Mr. Gopal is reaching out to new groups. “We’re talking with the Department of Defense about a 16-week, eight-hour-a-day coding training program for vets returning from Afghanistan and Iraq to Fort Knox,” he said.

This is a good-news story. But continuing to increase access to good jobs in Middle America will take deliberate efforts to cooperate across the bitter political and regional divide. President Trump is not helping by proposing cuts in education funding that will raise the cost of student loans by more than $200 billion over the next decade. Last year, he tried to cut all funding for the Appalachian Regional Commission, which paid Interapt students’ stipends. A group of representatives — eight Democrats and two Republicans — signed a joint letter urging Trump to restore the money (it was).

On my last visit to Hueysville, Mr. Watson introduced me to his wife (“I married an outsider,” he said jokingly. “Nicole’s from Martin County, I’m from Floyd.”), his aunt, uncle and cousin, all schoolteachers, and his 93-year-old grandmother, a retired teacher who sews a brightly colored quilt for each new grandchild. His daughters played with dolls and nibbled on chocolate Easter eggs on the living room floor. “We’re really proud of Matthew,” his aunt said.

“My new employer is a home repair services company based in Florida,” Mr. Watson said later, “and I do feature development that had once been outsourced to India. I get to work from home. My 3-year-old asks me to get her juice as if I had nothing better to do.” He chuckled. “But it’s such a blessing. These mountains hug me, and my family is my rock. I thought I’d be forced to leave, and maybe one day I’ll have to. But why would I ever want to?”

 

via:  nytimes



Adobe to Acquire Marketo

Combination of Adobe Experience Cloud and Marketo Engagement Platform Widens Adobe’s Lead in Customer Experience Across B2C and B2B.

Adobe (Nasdaq:ADBE) today announced it has entered into a definitive agreement to acquire Marketo, the market-leading cloud platform for B2B marketing engagement, for $4.75 billion, subject to customary purchase price adjustments. With nearly 5,000 customers, Marketo brings together planning, engagement and measurement capabilities into an integrated B2B marketing platform. Adding Marketo’s engagement platform to Adobe Experience Cloud will enable Adobe to offer an unrivaled set of solutions for delivering transformative customer experiences across industries and companies of all sizes.

Today, consumers have a very high bar for what constitutes a great customer experience and Adobe Experience Cloud has enabled B2C companies to successfully drive business impact by harnessing massive volumes of customer data and content in order to deliver real-time, cross-channel experiences that are personalized and consistent. When businesses buy from other businesses, they now have the same high expectations as consumers.

Marketo’s platform is feature-rich and cloud-native with significant opportunities for integration across Adobe Experience Cloud. Enterprises of all sizes across industries rely on Marketo’s marketing applications to drive engagement and customer loyalty. Marketo’s ecosystem includes over 500 partners and an engaged marketing community with over 65,000 members.

This acquisition brings together the richness of Adobe Experience Cloud analytics, content, personalization, advertising and commerce capabilities with Marketo’s lead management and account-based marketing technology to provide B2B companies with the ability to create, manage and execute marketing engagement at scale.

“The imperative for marketers across all industries is a laser focus on providing relevant, personalized and engaging experiences,” said Brad Rencher, executive vice president and general manager, Digital Experience, Adobe. “The acquisition of Marketo widens Adobe’s lead in customer experience across B2C and B2B and puts Adobe Experience Cloud at the heart of all marketing.”

“Adobe and Marketo both share an unwavering belief in the power of content and data to drive business results,” said Steve Lucas, CEO, Marketo. “Marketo delivers the leading B2B marketing engagement platform for the modern marketer, and there is no better home for Marketo to continue to rapidly innovate than Adobe.”

The transaction, which is expected to close during the fourth quarter of Adobe’s 2018 fiscal year, is subject to regulatory approval and customary closing conditions. Until the transaction closes, each company will continue to operate independently.

Upon close, Marketo CEO Steve Lucas will join Adobe’s senior leadership team and continue to lead the Marketo team as part of Adobe’s Digital Experience business, reporting to executive vice president and general manager Brad Rencher.

Conference Call Scheduled for 2 p.m. PT September 20th.

Adobe executives will comment on the acquisition of Marketo today during a live conference call, which is scheduled to begin at 2 p.m. PT. Analysts, investors, press and other interested parties can participate in the call by dialing (877) 376-9431 and using passcode 2867298. International callers should dial (402) 875-4755. The call will last approximately 30 minutes and an audio archive of the call will be made available later in the day. Questions related to accessing the conference call can be directed to Adobe Investor Relations by calling 408-536-4416 or sending an email to ir@adobe.com.

Forward-Looking Statements Disclosure

This press release includes forward-looking statements within the meaning of applicable securities law. All statements, other than statements of historical fact, are statements that could be deemed forward-looking statements. Forward-looking statements relate to future events and future performance and reflect Adobe’s expectations regarding the ability to extend its leadership in the experience business through the addition of Marketo’s platform and other anticipated benefits of the transaction. Forward looking statements involve risks, including general risks associated with Adobe’s and Marketo’s business, uncertainties and other factors that may cause actual results to differ materially from those referred to in the forward-looking statements. Factors that could cause or contribute to such differences include, but are not limited to: Adobe’s ability to embed Marketo technology into Adobe Experience Cloud; the effectiveness of Marketo technology; potential benefits of the transaction to Adobe and Marketo customers, the ability of Adobe and Marketo to close the announced transaction; the possibility that the closing of the transaction may be delayed; and any statements of assumptions underlying any of the foregoing. The reader is cautioned not to rely on these forward-looking statements. All forward-looking statements are based on information currently available to Adobe and are qualified in their entirety by this cautionary statement. For a discussion of these and other risks and uncertainties, individuals should refer to Adobe’s SEC filings. Adobe does not assume any obligation to update any such forward-looking statements or other statements included in this press release.

 

via:  adobe



Is Your Security Dashboard Ready for the Cloud?

The ability to feed key security information onto a big screen dashboard opens up many new opportunities for managing the day-to-day security and maintenance workload as well as providing a useful method of highlighting new incidents faster than “just another email alert.”

Most Security Operations Centers I’ve visited in recent years have embraced having a few dedicated big-screen displays, but most are restricted to monitoring the on-premises architecture such as local firewalls and servers rather than taking a more holistic approach and accounting for the increasing use of cloud-hosted infrastructure and services.

Security no longer starts and ends at the “front door,” with cloud playing a bigger role in more and more organizations. Here are four things I think every company that uses cloud infrastructure should consider surfacing on their security dashboards.

Inventory and Discovery

The traditional model of server provisioning started changing with the growth of virtualization. No longer can you assume that new hardware will be purchased and entered into a CMDB.

With the growth of cloud infrastructure, the provisioning of new virtual infrastructure became even easier, but with that comes new challenges for your security processes. For that reason, making sure that newly detected devices are highlighted front and center on a dashboard makes a lot of sense and can help to understand the changes going on during provisioning of a new or updated application during the DevOps cycle. Ensuring security coverage against these new devices is key to making sure that gaps don’t develop over time.
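
A simple way to surface this on a dashboard, sketched below with assumed inputs, is to diff the hosts found by discovery scanning against the CMDB and flag anything the CMDB does not know about.

```python
# Sketch: highlight newly discovered devices that are not yet in the CMDB,
# so they can be surfaced front and center on the dashboard. Both lists are
# assumed inputs; in practice they would come from your discovery scan and
# CMDB exports or APIs.

cmdb_hosts = {"10.0.1.10", "10.0.1.11", "10.0.1.12"}
discovered_hosts = {"10.0.1.10", "10.0.1.11", "10.0.1.12", "10.0.1.57"}

# Anything discovered but not recorded is a coverage gap worth investigating.
for host in sorted(discovered_hosts - cmdb_hosts):
    print(f"NEW DEVICE not in CMDB: {host}")
```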

Vulnerabilities and Priorities

When vulnerabilities are detected, it’s important that they are presented in a practical fashion. Simply listing every missing patch or misconfiguration often isn’t a sensible approach to managing your workload. A good dashboard should help reveal the most common and highest risk vulnerabilities in an easy-to-read fashion.

Tracking progress of investigations is important, too, in order to ensure you’re keeping on top of what’s been discovered as well as giving your security team a goal. Showing how old a vulnerability is, alongside its potential risk, can help provide a focus for teams as well as a sense of accomplishment when you clear down a challenging vulnerability from the dashboard.

Coverage

If you’re carrying out regular scans of your cloud infrastructure via one or more scanning appliances and/or applications, it’s important to account not just for the health of the environment you’re monitoring but also for the status of the tools you’re using to provide the monitoring. Availability indicators for your monitoring architecture as well as alerting for whether or not scans are completing successfully ensures that you always have the full picture.
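
One way to keep that monitoring health visible, sketched below with an assumed record format and a 24-hour freshness window, is to flag any scan job that has failed or has not completed successfully recently enough to trust.

```python
from datetime import datetime, timedelta

# Sketch: flag scan jobs that have not completed successfully recently enough
# to trust the dashboard's picture. The record format and the 24-hour window
# are assumptions; pull real data from your scanner's API or database.

MAX_AGE = timedelta(hours=24)

scan_jobs = [
    {"name": "prod-vpc-scan", "last_success": "2018-10-08T02:00:00", "status": "ok"},
    {"name": "dev-vpc-scan", "last_success": "2018-10-05T02:00:00", "status": "ok"},
    {"name": "dmz-scan", "last_success": "2018-10-08T02:00:00", "status": "failed"},
]

def stale_or_failing(jobs, now=None):
    now = now or datetime.utcnow()
    for job in jobs:
        age = now - datetime.fromisoformat(job["last_success"])
        if job["status"] != "ok" or age > MAX_AGE:
            yield job["name"], job["status"], age

for name, status, age in stale_or_failing(scan_jobs):
    print(f"ATTENTION: {name} (status={status}, last success {age} ago)")
```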

Compliance

Alongside triaging vulnerabilities, ensuring compliance with your internal security hardening requirements is key.

Making sure that you are proactively and consistently implementing security procedures helps to minimize your company’s risk, and showing compliance levels (typically through a simple percentage score) can verify not just how secure your environment is today but also allow you to track your success over time, helping to demonstrate how everyday investment in your security configuration can help improve your security posture.
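
Computing that simple percentage score can be as small as the sketch below, which takes pass/fail results from your policy checks (an assumed record format) and reports a compliance percentage per policy for the dashboard.

```python
from collections import Counter

# Sketch: turn raw policy check results into the percentage scores a
# dashboard typically shows. The result records use an assumed format.

results = [
    {"policy": "CIS-Linux", "check": "ssh-root-login-disabled", "passed": True},
    {"policy": "CIS-Linux", "check": "telnet-disabled", "passed": False},
    {"policy": "CIS-Linux", "check": "password-max-age", "passed": True},
    {"policy": "Internal-Hardening", "check": "disk-encryption", "passed": True},
]

def compliance_by_policy(records):
    totals, passes = Counter(), Counter()
    for r in records:
        totals[r["policy"]] += 1
        passes[r["policy"]] += int(r["passed"])
    return {policy: 100.0 * passes[policy] / totals[policy] for policy in totals}

for policy, pct in compliance_by_policy(results).items():
    print(f"{policy}: {pct:.0f}% compliant")
```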

Getting the right information out and visible to your SOC team is key. Hopefully, these starting points will help you plan for your security dashboards to provide better overviews of your cloud security.

 

via:  tripwire

