
Proactive System Hardening: Continuous Hardening’s Coming of Age

The first article in this series examined configuration hardening—essentially looking at ports, processes and services where security configuration management (SCM) is key. The second article looked at application and version hardening strategies. This third installment will discuss the role of automation in the coming of age of what’s called “continuous hardening.”

Known Vulnerabilities vs. Conditional Vulnerabilities

If I want to harden my systems against “known vulnerabilities”—weaknesses or deficiencies for which there are known common vulnerabilities and exposures (CVEs)—I use a vulnerability management solution. If I need to harden my systems against “conditional vulnerabilities”—weaknesses based on the way they’re configured—I use an SCM solution. But without automation to provide the element of “continuousness” to these efforts, we rapidly find ourselves back at square one.

What is Configuration Drift?

To stick with our house analogy: If I’ve checked the configurations of all my doors and windows, but I have no way to know when the state has changed and I instead rely on periodic inspection by human eyes, a phenomenon known as “configuration drift” invariably occurs.

I open the fire escape window to water the potted hydrangea sitting out there but forget to close it afterward: configuration drift. I enable Telnet to maintain or update a server and then forget to disable it afterward: configuration drift.
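The drift check itself is straightforward to automate. A minimal sketch, assuming a hypothetical baseline of approved service states:

```python
# Compare the current state of a few monitored "doors and windows" against
# an approved baseline. All names and states here are illustrative.
BASELINE = {
    "telnet": "disabled",
    "ssh": "enabled",
    "ftp": "disabled",
}

def detect_drift(current_state: dict) -> list:
    """Return (item, expected, actual) tuples for anything that has
    drifted from the baseline."""
    drift = []
    for item, expected in BASELINE.items():
        actual = current_state.get(item, "unknown")
        if actual != expected:
            drift.append((item, expected, actual))
    return drift

# Telnet was enabled for maintenance and never turned back off:
observed = {"telnet": "enabled", "ssh": "enabled", "ftp": "disabled"}
print(detect_drift(observed))  # [('telnet', 'disabled', 'enabled')]
```

Run continuously instead of by periodic human inspection, a check like this is what turns a one-time hardening pass into an ongoing state.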

The Role of Automation in Continuous System Hardening

A primary weakness of our house analogy is actually useful here, as it shows us the critical need for automation. In real life, most people have one house. But most organizations have hundreds—if not many, many thousands—of servers, desktop systems, laptops and devices. These represent an almost inexhaustible supply of attack surface and potential beachheads. How can we win a war at this scale?

Automation requires us not only to create continuous, ongoing routines to assess state across this vast array of targets, but also to make allowances for the constantly changing conditions that give meaning and relevance to risk.

In the case of our house, it’s useful to know that, over the last two years, the leafy maple out back has grown a large solid branch that’s close enough to an upstairs bedroom for a tall thief to reach the window. And the inverse is sometimes true: If the old kitchen window was painted shut twenty years ago, who needs to waste time including it in our daily “is it locked” checklist?

This critical need for current “state” information has caused the security community to create more persistent real-time agents, more effective scanning processes that are “aware” of network constraints and ways to avoid “mega scans” in favor of continuous segmented scanning.

Integrating Disparate Security Systems

They’ve also broken down barriers between infosec solutions themselves and addressed another critical requirement for achieving this attribute of “continuousness”: Information security systems must talk to one another. A few simple examples illustrate this need:

  • Vulnerability Management: Vulnerability management (VM) systems are quite good at finding unexpected (and likely unsecured) systems. When one of these is discovered, the VM system can tell the SCM system about the new asset and ask it to perform an on-the-spot configuration assessment.
  • Security Configuration Management: Similarly, SCM systems are evolving intelligent ways to classify assets: by business unit, by system owner, by critical application, and even by the type and criticality of data stored on the system. This helps manage and prioritize their own risks, but when shared with a VM system, this also helps clarify and prioritize remediation efforts.
  • Security Information and Event Management: Both of these systems are used extensively by SIEM systems as a foundational source of security information: in the first case, correlating known vulnerabilities with detected threats, and in the second case, using sudden configuration changes (“Why is the ‘Telnet should not be enabled’ test suddenly failing?”) to power real-time threat intelligence models.
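The VM-to-SCM hand-off in the first bullet can be sketched as a simple event hook. The classes and method names below are illustrative, not any real product's API:

```python
# When the vulnerability scanner discovers an unknown asset, it asks the
# SCM system for an on-the-spot configuration assessment instead of
# waiting for the next scheduled audit. Hypothetical sketch.
class SCMSystem:
    def assess(self, asset: str) -> dict:
        # A real deployment would run the full configuration policy here.
        return {"asset": asset, "policy": "baseline-v2", "status": "pending"}

class VMSystem:
    def __init__(self, scm: SCMSystem):
        self.scm = scm
        self.known_assets = {"web01", "db01"}

    def on_asset_discovered(self, asset: str):
        if asset not in self.known_assets:
            self.known_assets.add(asset)
            # New, possibly unsecured system: hand it to SCM immediately.
            return self.scm.assess(asset)
        return None  # already inventoried; nothing to do

vm = VMSystem(SCMSystem())
print(vm.on_asset_discovered("rogue-laptop-17"))
```

The point is less the code than the coupling: neither system on its own can close the loop between "a new thing exists" and "the new thing is configured correctly."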

SC Magazine summed up these needs in a prescient review of policy management systems—what we’ve called “security configuration management” systems in this article—way back in 2010: “The only reasonable answer to the challenges of compliance, security and configuration management is to automate the tasks.”

The key to continuous system hardening as a goal and a discipline is a willingness to seek out and employ automation wherever possible. Gone are the days when isolated, siloed systems can harden information systems and keep them that way in the face of continuous drift.

Highly interactive solutions that understand the ever-shifting nature of “state” and talk to each other regularly—security configuration and vulnerability management solutions in particular—are the first, best and often the last line of defense.


via:  tripwire


Proactively Hardening Systems: Application and Version Hardening

The first article in this series examined configuration hardening, essentially looking at ports, processes and services as the “doors, gates and windows” into a network where security configuration management (SCM) becomes the job of determining which of these gateways should be open, closed, or locked at any given time. Now it’s time to look at application and version hardening.

What is System Hardening?

If configuration hardening settings are “conditional,” meaning they must find and keep that balance between security and productivity, then hardening against known vulnerabilities in applications and versions is much more black-and-white.

If an exploit path has been found in an operating system or application, the vendor rushes to create a patch or upgrade that removes the vulnerability. “Hardening” in this sense means “making sure the holes are known and that the most current security patches are deployed.”

One Way Hackers Exploit Known Vulnerabilities

To go back to our “secure house” analogy from the previous article in this series for a moment, imagine that the house I’m protecting has three external doors and that they all use Secure-A-Door Model 800 high-strength locks.

But a tester at the Secure-A-Door factory (or worse, a professional burglar) has just discovered an interesting thing: If you slide a credit card along the door jamb at 15 degrees while pulling up on the handle, the Secure-A-Door 800 pops open like a Coke can.

One of the most famous examples of this kind of exploitation began in 2008. That’s when the makers of the Conficker worm discovered and exploited an underlying weakness in the Windows Server service, reachable over port 445.

The worm created a remote procedure call that dropped a DLL on the system, unloaded two distinct packets for data and code, and hid itself in a remote thread to make itself at home. (It was infinitely more complex and clever than that, but you get the idea.)

In effect, the worm popped the Secure-A-Door Model 800, let itself in, repaired the lock, installed a new phone line to listen for orders, and sat in a comfy chair waiting for instructions. It was able to leverage the internet, could register new domain names in which to hide, and created an extensive botnet that by 2010 had infected, according to Panda Security, as many as 18 million PCs—6 percent of the world’s PC population at the time.

Common Vulnerabilities and Exposures (CVEs)

This type of design failure or exploit is usually repaired by a patch. In the case of Conficker, Windows Security bulletin MS08-067 made the danger known to the worldwide Microsoft community and introduced a patch to prevent easy violation of Port 445.

The MS bulletin was in turn translated by the Common Vulnerabilities and Exposures site as CVE-2008-4250 and given a Common Vulnerability Scoring System (CVSS) rating of 10—the most severe rating possible.

Vulnerability Management

Vulnerability management (VM) systems, unlike SCM systems that check to see that doors and gates and windows are locked, do their part in system hardening differently. They make sure the proper patch levels are maintained and that any available defenses have been utilized. Using our analogy, we’d be conducting the following checks:

  • Proactively discovering whether I have any Secure-A-Door Model 800 locks installed
  • If I do, reporting on whether they’re the corrected “B” version made after October 2012
  • Verifying that any “bad” ones I have are only on inside doors and don’t serve as a primary defense

VM systems enable continuous hardening by making sure that CVE-2008-4250—and its many thousands of friends—are understood, mitigated, and more-or-less unexploitable when the right steps are taken.

More mature solutions provide an ongoing assessment of overall risk based on whether these vulnerabilities are mitigated or ignored.
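At its core, this kind of check is a lookup of installed patches against known advisories, ranked by severity. A minimal sketch: the CVE and bulletin IDs below are the real ones from this article, but the patch inventory and ranking logic are hypothetical.

```python
# Map each advisory to the patch that mitigates it and its CVSS score.
ADVISORIES = {
    "CVE-2008-4250": {"patch": "MS08-067", "cvss": 10.0},
}

def unmitigated(installed_patches: set) -> list:
    """Return (cve, cvss) pairs whose patch is missing, worst first."""
    missing = [
        (cve, info["cvss"])
        for cve, info in ADVISORIES.items()
        if info["patch"] not in installed_patches
    ]
    return sorted(missing, key=lambda m: m[1], reverse=True)

print(unmitigated({"MS08-001"}))  # MS08-067 missing: CVE-2008-4250 flagged
print(unmitigated({"MS08-067"}))  # fully patched: []
```

A real VM system multiplies this across thousands of CVEs and hosts, which is why the ranking step matters as much as the detection step.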


via:  tripwire


California passes law that bans default passwords in connected devices

Good news!

California has passed a law banning default passwords like “admin,” “123456” and the old classic “password” in all new consumer electronics starting in 2020.

Every new gadget sold in the state, from routers to smart home tech, will have to come with “reasonable” security features out of the box. The law specifically calls for each device to come with a preprogrammed password “unique to each device.”

It also mandates that any new device “contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time,” forcing users to change the unique password to something new as soon as it’s switched on for the first time.
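The two requirements, a unique preprogrammed password and a forced credential change before first access, can be sketched together. The `Device` class below is purely illustrative, not any vendor's firmware:

```python
import secrets

class Device:
    def __init__(self):
        # "Unique to each device": generated at the factory, never "admin".
        self.password = secrets.token_urlsafe(12)
        self.first_login_done = False

    def login(self, password, new_password=None):
        if password != self.password:
            return False
        if not self.first_login_done:
            # Block access until the user sets a genuinely new credential.
            if not new_password or new_password == self.password:
                return False
            self.password = new_password
            self.first_login_done = True
        return True
```

A device built this way defeats the hardcoded-default attack described below, because there is no shared credential for malware to look up.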

For years, botnets have utilized the power of badly secured connected devices to pummel sites with huge amounts of internet traffic — so-called distributed denial-of-service (DDoS) attacks. Botnets typically rely on default passwords that are hardcoded into devices when they’re built that aren’t later changed by the user. Malware breaks into the devices using publicly available default passwords, hijacks the device and ensnares the device into conducting cyberattacks without the user’s knowledge.

Two years ago, the notorious Mirai botnet dragged thousands of devices together to target Dyn, a networking company that provides domain name service to major sites. By knocking Dyn offline, other sites that relied on its services were also inaccessible — like Twitter, Spotify and SoundCloud.

Mirai was a relatively rudimentary, albeit powerful botnet that relied on default passwords. This law is a step in the right direction to prevent these kinds of botnets, but falls short on wider security issues.

Other, more advanced botnets don’t need to guess a password because they instead exploit known vulnerabilities in Internet of Things devices — like smart bulbs, alarms and home electronics.

As noted by others, the law as signed does not require device makers to update their software when bugs are found. The big device makers, like Amazon, Apple and Google, do update their software, but many of the lesser-known brands do not.

Still, as it stands, the law is better than nothing — even if there’s room for improvement in the future.


via:  techcrunch


Google+ Shutting Down After Bug Leaks Info of 500k Accounts

Google has announced that they are closing the consumer functionality of Google+ due to lack of adoption and an API bug that leaked the personal information of up to 500,000 Google+ accounts.

While no evidence was found that indicates this bug was ever misused, it was determined that the complexity of protecting and operating a social network like Google+ was not a worthwhile endeavor when so few users actually used the service for any length of time.

“This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps,” stated a blog post by Google regarding the Google+ closure. “The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.”

The consumer functionality of Google+ will be closing over a 10-month period, while Google transitions the product into an offering for enterprise customers.

API bug caused data leak

After performing a code review of the Google+ APIs, called Project Strobe, Google stated they discovered a bug that could leak the private information of Google+ accounts. This bug could allow a user’s installed apps to utilize the API and access non-public information belonging to that user’s friends. The non-public information that was accessible includes an account holder’s name, email address, occupation, gender and age.

Google’s announcement describes the bug in more detail: “Underlining this, as part of our Project Strobe audit, we discovered a bug in one of the Google+ People APIs:”

  • Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API.
  • The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public. 
  • This data is limited to static, optional Google+ Profile fields including name, email address, occupation, gender and age. (See the full list on our developer site.) It does not include any other data you may have posted or connected to Google+ or any other service, like Google+ posts, messages, Google account data, phone numbers or G Suite content.
  • We discovered and immediately patched this bug in March 2018. We believe it occurred after launch as a result of the API’s interaction with a subsequent Google+ code change.

As Google only keeps two weeks of API logs for its Google+ service, it was impossible for them to determine if the bug was ever misused. They were able to determine that the bug was not misused during the two weeks that they had log data.
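The class of bug described above amounts to a missing visibility filter in an API response. A hypothetical reconstruction (not Google's actual code), using the field names mentioned in the article:

```python
# A friend's profile, with per-field visibility settings.
friend_profile = {
    "name":       {"value": "Alex",             "visibility": "public"},
    "email":      {"value": "alex@example.com", "visibility": "friends"},
    "occupation": {"value": "Engineer",         "visibility": "friends"},
}

def profile_fields_buggy(profile):
    # Bug: returns every field shared *with the user*, public or not,
    # so any app the user installed could read it too.
    return {k: v["value"] for k, v in profile.items()}

def profile_fields_fixed(profile):
    # Fix: only fields explicitly marked public reach third-party apps.
    return {k: v["value"] for k, v in profile.items()
            if v["visibility"] == "public"}

print(profile_fields_buggy(friend_profile))  # leaks email and occupation
print(profile_fields_fixed(friend_profile))  # {'name': 'Alex'}
```

The subtlety, as Google's description suggests, is that "visible to this user" and "public" are different permission levels, and conflating them in one code path is an easy regression to introduce.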

Google knew about leak in May but did not disclose

According to a report by the Wall Street Journal, the bug in the Google+ API existed between 2015 and March 2018, when Google discovered and fixed it. The paper also reported that an internal committee at Google decided not to disclose the bug even though Google could not be certain it had never been abused.

The Wall Street Journal reported that it had reviewed a memo prepared by Google’s legal and policy staff, which indicated that disclosing the data breach could lead to scrutiny by government regulatory agencies.

According to the memo, disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

In a statement, a Google Spokesperson said that their Privacy & Data Protection Office felt it was not necessary to disclose as it did not meet the threshold that would warrant it.

“Every year, we send millions of notifications to users about privacy and security bugs and issues. Whenever user data may have been affected, we go beyond our legal requirements and apply several criteria focused on our users in determining whether to provide notice.

Our Privacy & Data Protection Office reviewed this issue, looking at the type of data involved, whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met in this instance.

The review did highlight the significant challenges in creating and maintaining a successful Google+ that meets consumers’ expectations. Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of Google+.” – Google Spokesperson.


via:  bleepingcomputer


The Coders of Kentucky

A bipartisan effort to revitalize the heartland, one tech job at a time.

Matthew Watson opened his car door at a gas station outside Hueysville, Ky., sprang out and exclaimed, “I got a new job!” He blushed slightly; he was not one to boast. But for this slender, 33-year-old man with a red beard, a father of two small daughters who had once been ashamed of supplementing his low-pay, long-hours job with food stamps, this was fantastic news.

I’d driven to Hueysville past trucks with “Diggin’ Coal” decals, on a road slicing through mountains that rose in steep, majestic steps up to tops flattened by dynamite, past turnoffs to forgotten union halls where the eight-hour workday had been won and billboards that had recently read, “Trump for President.” (Kentucky went 63 percent for him.) Mr. Watson’s home, like much of Appalachia, reflects the landscape and culture of coal, without the coal mining jobs. And there was little hope of alternatives — until now.

“After I got my two associate’s degrees, the best job I could find was selling cigarettes behind the counter in Hazard, a 45-minute commute from home, for $10 an hour, and that was after a promotion to manager,” Mr. Watson told me the first time we met. “Some of my customers were opioid addicts, who slurred their speech, scratched their arms, laid their heads on my counter. In the back of my mind, I always think, ‘If I want to stay living here, if I didn’t have this job, I’d be working that job.’”

Then one day Mr. Watson heard an ad on the car radio. “It was for a 24-week course in coding, with an eight-week apprenticeship, which I later learned could qualify me for a $40,000-plus job designing apps for cellphones,” he said. The advertisement had been put out by a Louisville tech start-up called Interapt. “I immediately applied online, got interviewed, aced the test, and they hired me as an intern and then as a junior software developer,” Mr. Watson said. Within a year, he was offered yet another job as a software engineer, for a Florida-based company, for a salary well over $50,000.

On its first run in 2016, Interapt had 800 applicants, accepted 50 and graduated 35. (Some of the 15 who dropped out did so to tend a sick relative, join the military or take a non-tech job.) Of the 35 graduates, 25 were given job offers by Interapt, and 10 were hired by other tech companies in the area. This year Interapt will train approximately 90 people; next year Interapt expects that number to rise to more than 150.

Ankur Gopal, a University of Illinois graduate from Owensboro, Ky., started Interapt in his basement in Louisville in 2011, when he was 35. He is now renovating an empty warehouse in a run-down part of the city, investing nearly $4 million and creating jobs in the process. “With millions of U.S. tech jobs out there,” Mr. Gopal said, “we could help transform eastern Kentucky. Well, hey — Middle America.”

Mr. Gopal is at the forefront of a new movement to bring money and jobs from the coastal capitals of high tech to a discouraged, outsource-whipped Middle America. Ro Khanna, the Democratic representative from California whose district includes Apple, Intel, LinkedIn and Yahoo, was among the first politicians to float the idea of Silicon Valley venturing inland. “Why outsource coding jobs to Bangalore when we can insource jobs to eastern Kentucky, poor in jobs but rich in work ethic, and every one I.T. job brings four or five other jobs with it?” he said.

The stories of these Interapt graduates in the green hamlets of eastern Kentucky begin with dead ends and end with new beginnings.

“Nights I was manning the reception desk at Super 8, for $7.50 an hour, and days I was working at Little Caesars and still struggling to pay family bills,” Shea Maynard told me. Now, she said, “I’m modifying the information architecture of Interapt products.” She continued, “I never thought it was possible for a person like me to have a career I love.”

Most described feeling engrossed in the work. “Sitting at the desk in my trailer, I can go till 2 a.m.,” one man said. “I have to remember to stop.”

Starting when Crystal Adkins was 13, she almost single-handedly fed, dressed and raised her two younger siblings, while her own interest in school faded. Now she is Interapt’s star trainer. In addition to teaching, Ms. Adkins has been learning new coding languages and training her own children to code.

The success of the Interapt training program has depended on the enthusiasm of politicians from disconnected regions and increasingly hostile political parties.

Mr. Gopal first gathered support from Gov. Matt Bevin of Kentucky and Representative Hal Rogers, both Republicans. They were instrumental in the Appalachian Regional Commission approving $2.7 million to get the training program off the ground. The Department of Labor authorized apprenticeship status for its graduates.

Mr. Rogers is a conservative who represents Kentucky’s Fifth District, home to many unemployed coal miners and one of the poorest and most population-depleted districts in the country. He found an unlikely ally in Mr. Khanna, a progressive Democrat and former official in the Obama administration, who represents California’s 17th District, one of the richest, fastest-growing and most liberal districts in the country. In the 2016 presidential vote, it went 73.9 percent for Hillary Clinton. Mr. Rogers’s district went 79.6 percent for Mr. Trump. But Mr. Rogers’s office called Mr. Khanna’s, and invited him to see Interapt in a widely promoted visit last year.

Mr. Rogers wants the tech companies in Mr. Khanna’s district to consider investing in Kentucky and hiring its citizens. Mr. Khanna was remarkably open to the idea. “We believe in distributed jobs,” he said. “There is no reason these companies can’t engage thousands of talented workers in Iowa, Kentucky or West Virginia for projects.”

Despite these gestures of bipartisanship, the initiative has had to overcome stereotypes, the first one being about Interapt itself. Many locals were suspicious of outsiders’ intentions. Maybe Interapt was associated with some big-government, Obama-era program, or maybe it was a fraud pulled on rural towns by fast-talking city people. “Even after I was chosen,” a trainee told me, “I didn’t completely trust the program until we were asked to open our folders and I found a check for $400,” the weekly stipend for trainees. “Then I knew it was for real.”

Then there were the stereotypes held by the companies to which Interapt was pitching its graduates; many potential employers were skeptical of the apprenticeship model. As Ervin Dimeny, the former commissioner of the Kentucky Labor Cabinet’s Department of Workplace Standards, explained to me: “We think of apprenticeship as a way to certify 19th-century metalworkers. Or we associate it with boring high school shop class. We need to re-envision apprenticeships as passports to respectable middle-class careers.”

Worse, some saw rural Kentuckians as dubious recruits — tooth-free, grinning, moonshine-drinking hillbillies. “It’s a terrible myth,” an Interapt administrator who is the daughter of an unemployed Pikeville coal miner told me. “A hillbilly can do anything. Out in the hollows, you can’t call in specialists; you fix that stalled truck, that leaky roof, that broken radio yourself.” It’s the “car heads” — who can fix anything under a hood — who turn out to be inspired app developers, a recruiter told me. Those car heads include women too, who made up about a third of the first class.

Other investors are following Mr. Gopal’s lead. For example, the former chief executive of AOL, Steve Case, started an initiative called “Rise of the Rest,” which involves driving a big red bus around the country (it has visited 38 cities so far) and giving out $150 million in seed money to entrepreneurs. J.D. Vance, author of the best-selling “Hillbilly Elegy,” was brought on as a managing partner. As Mr. Case told an audience of hundreds in Louisville’s Speed Art Museum in May, 75 percent of venture capital now goes to three states: California, New York and Massachusetts. And half of all venture capital goes to Silicon Valley. Yet start-ups account for half of all new jobs in the United States. Why can’t those start-ups start somewhere else?

I.T. training is not going to solve all the problems of eastern Kentucky, of course. It may be hard to scale up. Not all of us warm to or can do I.T. work. And like coal-mining itself, I.T. jobs can be lost to automation.

If they are, could these visionary ventures crash into new dead ends? Interapt was itself experimenting with new software that could improve the process of selecting trainees — possibly reducing tasks associated with one job right there. “Over time, some I.T. jobs will disappear, as will jobs for truck drivers, machine-tool makers and a lot of others too,” Mr. Gopal said. “But we teach our trainees to keep learning.”

If you know French, a trainer explained, “you can get the hang of Spanish and Portuguese. You stay ahead of the curve like that.”

For now, there is so much demand for I.T. workers — 10,000 estimated openings by 2020 in the Louisville metro area alone — that Mr. Gopal is reaching out to new groups. “We’re talking with the Department of Defense about a 16-week, eight-hour-a-day coding training program for vets returning from Afghanistan and Iraq to Fort Knox,” he said.

This is a good-news story. But continuing to increase access to good jobs in Middle America will take deliberate efforts to cooperate across the bitter political and regional divide. President Trump is not helping by proposing cuts in education funding that will raise the cost of student loans by more than $200 billion over the next decade. Last year, he tried to cut all funding for the Appalachian Regional Commission, which paid Interapt students’ stipends. A group of representatives — eight Democrats and two Republicans — signed a joint letter urging Trump to restore the money (it was).

On my last visit to Hueysville, Mr. Watson introduced me to his wife (“I married an outsider,” he said jokingly. “Nicole’s from Martin County, I’m from Floyd.”), his aunt, uncle and cousin, all schoolteachers, and his 93-year-old grandmother, a retired teacher who sews a brightly colored quilt for each new grandchild. His daughters played with dolls and nibbled on chocolate Easter eggs on the living room floor. “We’re really proud of Matthew,” his aunt said.

“My new employer is a home repair services company based in Florida,” Mr. Watson said later, “and I do feature development that had once been outsourced to India. I get to work from home. My 3-year-old asks me to get her juice as if I had nothing better to do.” He chuckled. “But it’s such a blessing. These mountains hug me, and my family is my rock. I thought I’d be forced to leave, and maybe one day I’ll have to. But why would I ever want to?”


via:  nytimes


Adobe to Acquire Marketo

Combination of Adobe Experience Cloud and Marketo Engagement Platform Widens Adobe’s Lead in Customer Experience Across B2C and B2B.

Adobe (Nasdaq:ADBE) today announced it has entered into a definitive agreement to acquire Marketo, the market-leading cloud platform for B2B marketing engagement, for $4.75 billion, subject to customary purchase price adjustments. With nearly 5,000 customers, Marketo brings together planning, engagement and measurement capabilities into an integrated B2B marketing platform. Adding Marketo’s engagement platform to Adobe Experience Cloud will enable Adobe to offer an unrivaled set of solutions for delivering transformative customer experiences across industries and companies of all sizes.

Today, consumers have a very high bar for what constitutes a great customer experience and Adobe Experience Cloud has enabled B2C companies to successfully drive business impact by harnessing massive volumes of customer data and content in order to deliver real-time, cross-channel experiences that are personalized and consistent. When businesses buy from other businesses, they now have the same high expectations as consumers.

Marketo’s platform is feature-rich and cloud-native with significant opportunities for integration across Adobe Experience Cloud. Enterprises of all sizes across industries rely on Marketo’s marketing applications to drive engagement and customer loyalty. Marketo’s ecosystem includes over 500 partners and an engaged marketing community with over 65,000 members.

This acquisition brings together the richness of Adobe Experience Cloud analytics, content, personalization, advertising and commerce capabilities with Marketo’s lead management and account-based marketing technology to provide B2B companies with the ability to create, manage and execute marketing engagement at scale.

“The imperative for marketers across all industries is a laser focus on providing relevant, personalized and engaging experiences,” said Brad Rencher, executive vice president and general manager, Digital Experience, Adobe. “The acquisition of Marketo widens Adobe’s lead in customer experience across B2C and B2B and puts Adobe Experience Cloud at the heart of all marketing.”

“Adobe and Marketo both share an unwavering belief in the power of content and data to drive business results,” said Steve Lucas, CEO, Marketo. “Marketo delivers the leading B2B marketing engagement platform for the modern marketer, and there is no better home for Marketo to continue to rapidly innovate than Adobe.”

The transaction, which is expected to close during the fourth quarter of Adobe’s 2018 fiscal year, is subject to regulatory approval and customary closing conditions. Until the transaction closes, each company will continue to operate independently.

Upon close, Marketo CEO Steve Lucas will join Adobe’s senior leadership team and continue to lead the Marketo team as part of Adobe’s Digital Experience business, reporting to executive vice president and general manager Brad Rencher.

Conference Call Scheduled for 2 p.m. PT September 20th.

Adobe executives will comment on the acquisition of Marketo today during a live conference call, which is scheduled to begin at 2 p.m. PT. Analysts, investors, press and other interested parties can participate in the call by dialing (877) 376-9431 and using passcode 2867298. International callers should dial (402) 875-4755. The call will last approximately 30 minutes and an audio archive of the call will be made available later in the day. Questions related to accessing the conference call can be directed to Adobe Investor Relations by calling 408-536-4416 or sending an email to

Forward-Looking Statements Disclosure

This press release includes forward-looking statements within the meaning of applicable securities law. All statements, other than statements of historical fact, are statements that could be deemed forward-looking statements. Forward-looking statements relate to future events and future performance and reflect Adobe’s expectations regarding the ability to extend its leadership in the experience business through the addition of Marketo’s platform and other anticipated benefits of the transaction. Forward looking statements involve risks, including general risks associated with Adobe’s and Marketo’s business, uncertainties and other factors that may cause actual results to differ materially from those referred to in the forward-looking statements. Factors that could cause or contribute to such differences include, but are not limited to: Adobe’s ability to embed Marketo technology into Adobe Experience Cloud; the effectiveness of Marketo technology; potential benefits of the transaction to Adobe and Marketo customers, the ability of Adobe and Marketo to close the announced transaction; the possibility that the closing of the transaction may be delayed; and any statements of assumptions underlying any of the foregoing. The reader is cautioned not to rely on these forward-looking statements. All forward-looking statements are based on information currently available to Adobe and are qualified in their entirety by this cautionary statement. For a discussion of these and other risks and uncertainties, individuals should refer to Adobe’s SEC filings. Adobe does not assume any obligation to update any such forward-looking statements or other statements included in this press release.


via:  adobe


Is Your Security Dashboard Ready for the Cloud?

The ability to feed key security information onto a big-screen dashboard opens up many new opportunities for managing the day-to-day security and maintenance workload, as well as providing a useful method of highlighting new incidents faster than “just another email alert.”

Most Security Operations Centers I’ve visited in recent years have embraced having a few dedicated big-screen displays, but most are restricted to monitoring on-premises architecture such as local firewalls and servers rather than taking a more holistic approach and accounting for the increasing use of cloud-hosted infrastructure and services.

Security no longer starts and ends at the “front door,” with cloud playing a bigger role in more and more organizations. Here are four things I think every company that uses cloud infrastructure should consider surfacing on its security dashboards.

Inventory and Discovery

The traditional model of server provisioning started changing with the growth of virtualization. No longer can you assume that new hardware will be purchased and entered into a CMDB.

With the growth of cloud infrastructure, provisioning new virtual infrastructure became even easier, but with that ease come new challenges for your security processes. For that reason, highlighting newly detected devices front and center on a dashboard makes a lot of sense and can help you understand the changes taking place as a new or updated application is provisioned during the DevOps cycle. Ensuring security coverage of these new devices is key to making sure that gaps don’t develop over time.
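The comparison behind that dashboard panel is straightforward to sketch. Here is a minimal illustration in Python; the host names and the set-based “CMDB” stand in for a real inventory system and are invented for the example:

```python
# Sketch: flag newly discovered devices that have no record in the CMDB,
# so they can be surfaced front and center on the dashboard.

def find_unmanaged(discovered, cmdb_hosts):
    """Return discovered hosts absent from the CMDB, sorted for display."""
    return sorted(set(discovered) - set(cmdb_hosts))

cmdb_hosts = {"web-01", "web-02", "db-01"}
discovered = {"web-01", "web-02", "db-01", "web-03", "cache-01"}

for host in find_unmanaged(discovered, cmdb_hosts):
    print(f"NEW DEVICE (not in CMDB): {host}")
```

In practice the `discovered` set would come from your scanning tool and `cmdb_hosts` from your asset database, but the gap analysis is the same set difference.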

Vulnerabilities and Priorities

When vulnerabilities are detected, it’s important that they are presented in a practical fashion. Simply listing every missing patch or misconfiguration often isn’t a sensible approach to managing your workload. A good dashboard should help reveal the most common and highest risk vulnerabilities in an easy-to-read fashion.

Tracking progress of investigations is important, too, in order to ensure you’re keeping on top of what’s been discovered as well as giving your security team a goal. Showing how old a vulnerability is, alongside its potential risk, can help provide a focus for teams as well as a sense of accomplishment when you clear down a challenging vulnerability from the dashboard.
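Ranking by risk and age together can be sketched in a few lines. This is an illustrative Python fragment, not any particular product’s scoring logic; the CVSS-style scores and ages are sample data:

```python
# Sketch: order open vulnerabilities for a dashboard — highest risk first,
# and among equal risks, the oldest first so long-lived issues stay visible.

def prioritize(vulns):
    return sorted(vulns, key=lambda v: (-v["cvss"], -v["age_days"]))

vulns = [
    {"id": "VULN-1", "cvss": 5.3, "age_days": 120},
    {"id": "VULN-2", "cvss": 9.8, "age_days": 3},
    {"id": "VULN-3", "cvss": 9.8, "age_days": 45},
]

for v in prioritize(vulns):
    print(v["id"], v["cvss"], f"{v['age_days']} days old")
```

The negated sort key is the design choice here: it keeps a single `sorted` call while making both risk and age descending.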


Scan Health and Availability

If you’re carrying out regular scans of your cloud infrastructure via one or more scanning appliances and/or applications, it’s important to account not just for the health of the environment you’re monitoring but also for the status of the tools you’re using to provide the monitoring. Availability indicators for your monitoring architecture, as well as alerts for whether or not scans are completing successfully, ensure that you always have the full picture.
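One simple way to surface that status is a staleness check against each scanner’s last successful run. A sketch in Python, with invented appliance names and an assumed 24-hour threshold:

```python
# Sketch: flag scanning appliances whose last successful scan is older than
# the allowed window, so monitoring gaps show up on the dashboard.
from datetime import datetime, timedelta

def stale_scanners(last_success, now, max_age=timedelta(hours=24)):
    """Return the names of scanners with no recent successful scan."""
    return [name for name, ts in last_success.items() if now - ts > max_age]

now = datetime(2018, 9, 24, 12, 0)
last_success = {
    "scanner-eu": datetime(2018, 9, 24, 6, 0),   # 6 hours ago — healthy
    "scanner-us": datetime(2018, 9, 22, 23, 0),  # 37 hours ago — stale
}
print(stale_scanners(last_success, now))  # ['scanner-us']
```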


Hardening Compliance

Alongside triaging vulnerabilities, ensuring compliance with your internal security hardening requirements is key.

Making sure that you are proactively and consistently implementing security procedures helps to minimize your company’s risk, and showing compliance levels (typically through a simple percentage score) can verify not just how secure your environment is today but also allow you to track your success over time, helping to demonstrate how everyday investment in your security configuration can help improve your security posture.
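The simple percentage score mentioned above reduces to passed checks over total checks. A minimal sketch in Python, with invented check names standing in for your real hardening policy:

```python
# Sketch: compute a compliance percentage from pass/fail hardening checks.

def compliance_score(results):
    """Percentage of checks that passed, rounded to one decimal place."""
    if not results:
        return 0.0
    return round(100 * sum(results.values()) / len(results), 1)

results = {
    "ssh-root-login-disabled": True,
    "telnet-disabled": True,
    "password-max-age-set": False,
    "firewall-default-deny": True,
}
print(f"Compliance: {compliance_score(results)}%")  # Compliance: 75.0%
```

Tracked over time, the same number becomes the trend line that demonstrates improving posture.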

Getting the right information out and visible to your SOC team is key. Hopefully, these starting points will help you plan for your security dashboards to provide better overviews of your cloud security.


via:  tripwire


Computer System Security Requirements for IRS 1075: What You Need to Know

The IRS 1075 publication lays out a framework of compliance regulations to ensure federal tax information, or FTI, is treated with adequate security provisioning to protect its confidentiality. This may sound simple enough, but IRS 1075 puts forth a complex set of managerial, operational and technical security controls you must continuously follow in order to maintain ongoing compliance.

Any organization or agency that receives FTI needs to prove that it is protecting that data properly with IRS 1075 compliance. Federal, state, county and local entities – as well as the contractors they employ – are all within its scope.

IRS 1075 comprises the following sections:

  1. Introduction
  2. Federal Tax Information and Reviews
  3. Recordkeeping Requirement: IRC 6103(p)(4)(A)
  4. Secure Storage: IRC 6103(p)(4)(B)
  5. Restricting Access: IRC 6103(p)(4)(C)
  6. Other Safeguards: IRC 6103(p)(4)(D)
  7. Reporting Requirements: IRC 6103(p)(4)(E)
  8. Disposing of FTI: IRC 6103(p)(4)(F)
  9. Computer System Security
  10. Reporting Improper Inspections or Disclosures
  11. Disclosure to Other Persons
  12. Return Information in Statistical Report

The complete document describing IRS 1075 requirements is available on the IRS website.

All agency information systems used for receiving, processing, storing or transmitting FTI must be hardened in accordance with the requirements in IRS 1075. Agency information systems include the equipment, facilities and people that collect, process, store, display and disseminate information. This includes computers, hardware, software and communications as well as policies and procedures for their use.

The computer security framework was primarily developed using guidelines specified in NIST SP 800-30 Revision 1, Guide for Conducting Risk Assessments, and NIST SP 800-53 Revision 4, Security and Privacy Controls for Federal Information Systems and Organizations. Only applicable NIST SP 800-53 controls are included in IRS 1075 as a baseline. Applicability was determined by selecting controls required to protect the confidentiality of FTI.

Let’s focus on Section 9: Computer System Security.

IRS 1075 requires organizations and agencies to protect FTI using core cybersecurity best practices like file integrity monitoring (FIM) and security configuration management (SCM). Both of these technologies depend upon a known, secure baseline. Any deviations from this baseline signal authorized or unauthorized changes that could bring your systems out of compliance or expose them to attacks.
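The baseline idea behind FIM can be sketched in a few lines: hash files once to form a known-good snapshot, then re-hash later and report anything that differs. This is an illustration of the concept only, not Tripwire’s implementation, and the file paths are whatever you choose to monitor:

```python
# Sketch: a toy file-integrity baseline — SHA-256 each monitored file,
# then compare a later snapshot against the stored baseline.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each path to the SHA-256 digest of its current contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def deviations(baseline, current):
    """Paths whose digest changed (or disappeared) since the baseline."""
    return sorted(p for p in baseline if current.get(p) != baseline[p])
```

A real FIM tool adds scheduling, tamper-resistant storage of the baseline, and change attribution, but the comparison at its core is this one.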

According to IRS 1075, all organizations and agencies that handle FTI must do the following:

  • Determine the types of changes to the information system that are configuration controlled
  • Review proposed configuration-controlled changes to the information system and approve or disapprove such changes with explicit consideration for security impact analyses
  • Document configuration change decisions associated with the information system
  • Implement approved configuration-controlled changes to the information system
  • Retain records of configuration-controlled changes to the information system for the life of the system
  • Audit and review activities associated with configuration-controlled changes to the information system
  • Coordinate and provide oversight for configuration change control activities through a Configuration Control Board that convenes when configuration changes occur
  • Test, validate and document changes to the information system before implementing the changes on the operational system
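The record-keeping side of the requirements above can be sketched as a minimal change record that captures a decision, its security impact analysis, and an audit trail. The field names and workflow here are assumptions for illustration, not anything mandated by IRS 1075:

```python
# Sketch: a minimal configuration-change record with an approval decision
# and an append-only audit trail retained with the record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRecord:
    change_id: str
    description: str
    security_impact: str          # outcome of the security impact analysis
    approved: bool = False
    audit_trail: List[str] = field(default_factory=list)

    def decide(self, approver: str, approved: bool) -> None:
        """Record an approve/disapprove decision and who made it."""
        self.approved = approved
        verdict = "approved" if approved else "disapproved"
        self.audit_trail.append(f"{verdict} by {approver}")

rec = ChangeRecord("CHG-042", "Enable TLS 1.2 only",
                   "Reduces exposure; low operational impact")
rec.decide("Configuration Control Board", True)
print(rec.audit_trail)  # ['approved by Configuration Control Board']
```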

Tripwire can help with its Tripwire Enterprise software.

One of Tripwire Enterprise’s most fundamental capabilities is establishing a secure baseline configuration for your system and tracking all changes against that baseline. Tripwire Enterprise ensures the integrity of your files and systems, keeping a record of all changes that take place and producing audit-ready reports to make proof of compliance easier.

Tripwire Enterprise supports IRS 1075 Policy Compliance hardening guidelines out of the box.

If your organization or agency handles federal income tax information of any sort, you are required to stay in compliance with IRS 1075. Failure to do so can lead to heavy fines and even criminal charges, but Tripwire technology makes ongoing compliance simple and keeps you audit-ready at all points in time.



via:  tripwire


14 Million Customer Records Exposed in GovPayNow Leak

GovPayNow, a payment system used by thousands of federal and state government agencies in the U.S. and recently acquired by Securus Technologies, has leaked 14 million customer records.

Information exposed includes the last four digits of payment cards, names, phone numbers and addresses, according to Brian Krebs, who discovered the leak.

Anyone could view the information by changing the digits in the URL of an online receipt that the service gives users when they pay parking citations, fines or make other financial transactions.

“GovPayNet [which is doing business as GovPayNow] has addressed a potential issue with our online system that allows users to access copies of their receipts, but did not adequately restrict access only to authorized recipients,” according to a company statement sent to KrebsOnSecurity, which also said there was no “indication that any improperly accessed information was used to harm any customer, and receipts do not contain information that can be used to initiate a financial transaction.”

Noting that most of the information exposed “is a matter of public record that may be accessed through other means,” the company said. “Nonetheless, out of an abundance of caution and to maximize security for users, GovPayNet has updated this system to ensure that only authorized users will be able to view their receipts.”

Calling the breach at the Indianapolis-based company “fairly minor” compared to others over the last year, Nick Bilogorskiy, cybersecurity strategist at Juniper Networks, said, “Online payment providers, especially those doing business with the government, should take special care to protect their customers’ receipts by using HTTPS and checking that the user is logged in and has permissions to view them.”
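The class of fix Bilogorskiy describes is an ownership check: never serve a receipt based solely on the identifier in the URL, but verify that the authenticated user owns it. A framework-free sketch in Python, with an invented in-memory receipt store and user names:

```python
# Sketch: deny receipt access unless the requester is authenticated AND
# owns the receipt — changing digits in the URL then returns nothing.

RECEIPTS = {
    "1001": {"owner": "alice", "amount": "$75.00"},
    "1002": {"owner": "bob",   "amount": "$120.00"},
}

def get_receipt(receipt_id, authenticated_user):
    """Return the receipt only for its authenticated owner, else None."""
    receipt = RECEIPTS.get(receipt_id)
    if receipt is None or authenticated_user is None:
        return None                       # unknown ID or anonymous visitor
    if receipt["owner"] != authenticated_user:
        return None                       # enumeration attempt blocked
    return receipt

print(get_receipt("1001", "alice"))  # owner sees the receipt
print(get_receipt("1002", "alice"))  # None — someone else's receipt
```

The GovPayNow flaw was exactly the absence of this check: the receipt ID alone acted as the credential.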

Bilogorskiy also recommended that, to “avoid information disclosure and directory traversal issues,” companies deny “anonymous web visitors the ability to read permissions for any sensitive data files” and remove “any unnecessary files from web-accessible directories.”

Pravin Kothari, CEO of CipherCloud, noted the security incident – which exposed data from as far back as 2012 – isn’t the first for Securus, which bought the company in January.

“Securus has had other issues with cybersecurity over the past few years including the misuse of a service that tracked convicted felons’ cellphones, hackers penetrating this same system and subsequently stealing logins and legitimate credentials, and finally another flaw in May that allowed unauthorized access to accounts by guessing answers to the security questions,” he explained.

In the spring, a hacker swiped 2,800 logins and passwords from Securus, on the heels of Sen. Ron Wyden, D-Ore., asking the Federal Communications Commission (FCC) to investigate the wireless carriers that allow law enforcement to have “unrestricted access to the location data” of their customers after a former Missouri sheriff was indicted for, among other things, tracking the cell phones of numerous persons, including some state troopers, without the benefit of a court order.

The issues prompted wireless carriers like Verizon to review their location aggregator programs and terminate existing location data sharing agreements with third-party brokers.

Many of the “flaws are simple to find and fix. That’s not the issue,” said Kothari. “The issue is that there will always be open vulnerabilities, misconfigurations, and missing updates that attackers can exploit. You cannot fix them all.”

“It’s inevitable that attackers will penetrate networks, given increasing numbers and an escalating volume of persistent attacks,” he said.

“Best practices today position safekeeping of your data, at all times, in a pseudonymized form,” Kothari said. “This makes it an order of magnitude harder for the attackers to acquire useful information which they can exploit from within your on-premise networks or your cloud services.”
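One common way to pseudonymize records is to replace direct identifiers with keyed HMAC digests, so a leaked record no longer identifies anyone directly while the same input still maps to the same token. A simplified sketch in Python; the hard-coded key is a placeholder, and real deployments keep the key in managed secret storage:

```python
# Sketch: keyed pseudonymization of direct identifiers with HMAC-SHA256.
# Non-identifying fields (like the amount) are left untouched.
import hashlib
import hmac

SECRET_KEY = b"placeholder-key-store-me-in-a-vault"  # assumption, not real key handling

def pseudonymize(value: str) -> str:
    """Deterministic keyed token for an identifier (truncated for display)."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "phone": "555-0100", "amount": "$75.00"}
IDENTIFIERS = {"name", "phone"}
safe = {k: pseudonymize(v) if k in IDENTIFIERS else v for k, v in record.items()}
print(safe)
```

Because the digest is keyed, an attacker who steals the pseudonymized records but not the key cannot simply brute-force common names back out of them.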


via:  scmagazine


ICO Receiving 500 Breach-Related Calls a Week Since GDPR Took Effect

The United Kingdom’s Information Commissioner’s Office (ICO) has been receiving 500 calls a week pertaining to data breaches since the European Union’s General Data Protection Regulation (GDPR) took effect.

Speaking before hundreds of senior business leaders at the Confederation of British Industry’s (CBI’s) fourth annual Cyber Security Conference, ICO deputy commissioner James Dipple-Johnstone revealed that of the 500 breach-related calls received weekly by the Office, a third of them aren’t warranted or pertain to events that don’t qualify as data security incidents.

All of these unnecessary reports could be an indication that organizations are eager to comply. Dipple-Johnstone clarified that many reporting organizations tend to “over-report” the details of a perceived security incident. He attributed this phenomenon to organizations’ desire to manage their risk or a prevailing perception that they need to report everything, reported ITPro.

Despite these attempts to maintain transparency, some companies failed to comply with the ICO’s reporting requirements. Dipple-Johnstone explained that some of the data breach reports received by the Office were incomplete. In other notices, organizations mistook the mandatory reporting period of 72 hours as 72 “business” hours, not three consecutive days from the moment of discovery.
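The 72-hour misunderstanding is easy to make concrete: the clock runs in consecutive hours from discovery, weekends included. A small sketch with an invented discovery time:

```python
# Sketch: the GDPR notification deadline is 72 consecutive hours from
# discovery — a breach found Friday evening is due Monday evening.
from datetime import datetime, timedelta

discovered = datetime(2018, 9, 21, 17, 30)   # Friday 17:30
deadline = discovered + timedelta(hours=72)  # Monday 17:30 — not 72 business hours
print(deadline)  # 2018-09-24 17:30:00
```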

These findings came at around the same time that cloud and data firm Talend disclosed that a majority of organizations fail to comply with certain elements of GDPR. Specifically, it found that just 35 percent of EU-based companies were fulfilling subject access requests (SARs) filed by customers looking to access their data held by controllers within the legal time frame. Outside of Europe, only half of organizations were meeting those deadlines.

Dipple-Johnstone said the ICO will be working with organizations to help them with their data protection efforts going forward. He also made a point of indicating how the ICO doesn’t always issue fines following an investigation into a potential data security incident. As quoted by ITPro:

The small number of fines we issue always seem to get the headlines, but we close many thousands of incidents each year without financial penalty but with advice, guidance and reassurance. For every investigation which ends in a fine, we have dozens of audits, advisory visits and guidance sessions. That is the real norm of the work we do.

Data protection goes beyond implementing security technologies like encryption and machine learning. It also involves investing in those who use those solutions.


via:  tripwire

