Kaspersky Open Sources Internal Distributed YARA Scanner

Kaspersky Lab has released the source code of an internally-developed distributed YARA scanner as a way of giving back to the infosec community.

Originally developed by VirusTotal software engineer Victor Alvarez, YARA is a tool that allows researchers to analyze and detect malware by creating rules that describe threats based on textual or binary patterns.
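For readers unfamiliar with the format, a YARA rule pairs a set of strings (textual or binary patterns) with a boolean condition over them. The following is a purely illustrative rule (all names and byte patterns invented for this example), not taken from any real ruleset:

```yara
rule Example_Trojan_Illustrative
{
    strings:
        $cmd = "cmd.exe /c" ascii        // a textual pattern
        $hex = { 6A 40 68 00 30 00 00 }  // a binary pattern
    condition:
        $cmd and $hex                    // match only if both appear
}
```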

Kaspersky Lab has developed its own YARA-based scanning tool. Named KLara, the Python-based application relies on a distributed architecture to allow researchers to quickly scan large collections of malware samples.


Looking for potential threats in the wild requires a significant amount of resources, which can be provided by cloud systems. Using a distributed architecture, KLara allows researchers to efficiently scan large data collections with one or more YARA rules – Kaspersky says it can scan 10TB of files in roughly 30 minutes.

“The project uses the dispatcher/worker model, with the usual architecture of one dispatcher and multiple workers. Worker and dispatcher agents are written in Python. Because the worker agents are written in Python, they can be deployed in any compatible ecosystem (Windows or UNIX). The same logic applies to the YARA scanner (used by KLara): it can be compiled on both platforms,” Kaspersky explained.
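KLara's actual implementation lives in its GitHub repository; as a loose illustration of the dispatcher/worker model described above, here is a minimal Python sketch in which a dispatcher shards a sample collection across worker threads, each applying a trivial byte-pattern match as a stand-in for a full YARA scan (all names here are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a compiled YARA rule: flag samples containing a byte pattern.
PATTERN = b"EVIL"

def worker_scan(job):
    """A worker scans its assigned shard of samples and reports matching names."""
    worker_id, samples = job
    return [name for name, data in samples if PATTERN in data]

def dispatch(samples, n_workers=3):
    """The dispatcher splits the sample set across workers and merges their results."""
    shards = [samples[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(worker_scan, enumerate(shards))
    return sorted(match for partial in results for match in partial)

corpus = [("clean.bin", b"hello"), ("mal1.bin", b"xxEVILxx"), ("mal2.bin", b"EVIL")]
print(dispatch(corpus))  # ['mal1.bin', 'mal2.bin']
```

In the real system the workers are separate machines rather than threads, which is what lets the scan rate grow with the size of the cluster.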

KLara provides a web-based interface where users can submit jobs, check their status, and view results. Results can also be sent to a specified email address.

The tool also provides an API that can be used to submit new jobs, get job results and details, and retrieve the matched MD5 hashes.

Kaspersky Lab has relied on YARA in many of its investigations, but one of the most notable cases involved the 2015 Hacking Team breach. The security firm wrote a YARA rule based on information from the leaked Hacking Team files, and several months later it led to the discovery of a Silverlight zero-day vulnerability.

The KLara source code is available on GitHub under a GNU General Public License v3.0. Kaspersky says it welcomes contributions to the project.

This is not the first time Kaspersky has made available the source code of one of its internal tools. Last year, it released the source code of Bitscout, a compact and customizable tool designed for remote digital forensics operations.


via:  securityweek


Saks, Lord & Taylor breached, 5 million payment cards likely compromised

Five million customer credit and debit cards offered for sale by the JokerStash hacking syndicate, also known as Fin7, likely came from records stolen from Saks Fifth Avenue and Lord & Taylor sometime between May 2017 and their March 28 release.

“Based on the analysis of the available data, the entire network of Lord & Taylor and 83 Saks Fifth Avenue locations [has] been compromised” and the majority of cards were “obtained from New York and New Jersey locations,” according to a Gemini Advisory report, which states that approximately 125,000 records were for sale, with the remainder of the cache, advertised on the dark web as BIGBADABOOM-2, expected to be rolled out in the coming months.

“While locale-specific attacks like these aren’t uncommon, the volume of records is a bit larger than usual, which could be a lead to how long the infection was present before detection,” said Terry Ray, CTO of Imperva, noting that organizations often struggle to identify a breach or infection in a reasonable time-frame. “Most attacks are designed to run under the radar and the methods of breach constantly evolve. This requires that cybersecurity teams have effective funding, adequate staff and vast expertise. Sadly, none of those three are common,” Ray added.

Gemini expressed “a high level of confidence” that the stolen cards came from Saks Fifth Avenue, its discount outlet Saks Fifth Avenue OFF 5TH, and Lord & Taylor Stores, all operated by Hudson’s Bay Company (HBC), a Canadian firm.

“We recently became aware of a data security issue involving customer payment card data at certain Saks Fifth Avenue, Saks OFF 5TH, and Lord & Taylor stores in North America,” reads a company statement from Saks Fifth Avenue. “We identified the issue, took steps to contain it, and believe it no longer poses a risk to customers shopping at our stores. While the investigation is ongoing, there is no indication at this time that this affects our e-commerce or other digital platforms, Hudson’s Bay, Home Outfitters, or HBC Europe.”

The company added that it is coordinating with law enforcement authorities and payment card companies and assured customers that there is no evidence that Social Security and Social Insurance numbers, driver’s license numbers, and PINs were affected.

Fin7 has successfully hacked hotel chains like Trump Hotels and Omni Hotels & Resorts, as well as retailers like Whole Foods, Jason’s Deli and Chipotle. The group last year also launched spearphishing campaigns targeting Securities and Exchange Commission (SEC) filings using a fileless attack framework.

“This incident shows once again merchants still need to protect themselves against POS system infiltration attacks targeting cardholder data. A multi-layer security strategy is necessary,” including segmenting POS networks and upping monitoring and threat detection capabilities, said Mark Cline, vice president at Netsurion. “If nothing else, dwell time of such an attack would be reduced to hours or days. After all, the report is that this attack has persisted for almost a year, just as we have seen in previous massive card breaches.”


via:  scmagazine


School uses game-based initiative to find future cyber talent

Skinners’ Academy introduces government-backed Cyber Discovery programme to find cyber security professionals of the future.

Skinners’ Academy in Woodberry Grove, London, has been testing its students with the government-backed Cyber Discovery initiative to find any with a particular aptitude for cyber security.

The scheme uses several game-like stages to assess whether students aged between 14 and 18 might have the talent to become cyber security professionals.

Alex Holmes, deputy director of cyber security at the Department for Digital, Culture, Media and Sport, said that to ensure the UK becomes the “world’s leading digital economy”, it also must be secure.

Holmes said cyber attackers can try to cause harm to the UK as a whole using various methods, such as attempting to sabotage the nation’s energy supply or transport infrastructure, and that the best way to prevent such attacks is to have the appropriate protection in place.

But the UK needs more young people to take an interest in cyber security as a career and “help to defend the country”, he said.

“We don’t have enough skilled professionals in the UK to protect the country right now,” said Holmes. “The game you’re playing [Cyber Discovery] is to help you understand and potentially help you become the cyber security experts of tomorrow.”

Cyber Discovery is part of the government’s Cyber Schools Programme, which was launched in early 2017 with the aim of reaching at least 5,700 highly skilled teenagers by 2021, teaching them a cyber security curriculum through a mixture of online and offline teaching.

The £20m funding available for the programme will go towards extra-curricular clubs and activities, as well as the Cyber Discovery online game.

The game has four stages: cyberstart assess, cyberstart game, cyberstart essentials and cyberstart elite, each of which involves puzzles and challenges that will improve students’ cyber security knowledge and pick out those who might make good cyber security specialists in the future.

James Lyne, head of research and development at SANS Institute, said 23,000 people across the UK took part in the first stage, cyberstart assess, and 12,000 of those showed the talent to progress to the next stage, cyberstart game.

“Everything you’ll do here today will help secure the technology that will become important in the future,” he told Skinners’ Academy students.

The game gives students access to both knowledge and tools similar to those used in the industry, and students face problems based on real-world examples, such as court cases, software flaws or activity by criminal gangs.

Since more headline stories about cyber attacks and cyber crime have hit the media in recent years, there is now more awareness of cyber security among the general public, said Lyne, but this can have both a positive and negative impact.

In some cases, people feel disengaged because they think there is nothing they can do to prevent attacks, but others have become more aware of potential cyber careers, he said.

“I have had more conversations with kids recently where they have context of why cyber is important,” said Lyne.

Many young people, especially girls, make decisions about whether or not to study science, technology, engineering and maths (Stem) subjects at a very early age, which means that if they are not introduced to these concepts early on, they are less likely to pursue them in the future.

Although new security problems arise every day, Lyne said that if people are introduced to the basic concepts of cyber security from an early age, it will be easier to encourage them into careers in cyber and get them up to speed later.

But some teachers say they don’t have the skills to teach Stem subjects, and even those who do may lack the breadth of knowledge about Stem careers that those in industry have.

Nazleen Rao, head of the IT department at Skinners’ Academy, said the Cyber Discovery initiative helped to give depth to material on cyber security as part of the curriculum.

“I couldn’t teach the students what they’ve been learning during this programme – this just makes what I’ve been teaching that much more exciting for them,” she said.

“When I have been teaching cyber security to our students, it’s usually one part of the course you teach and it can seem like a small part.”

Skills in demand

Demand is increasing for professionals with cyber security skills, but there are too few workers in the UK with the skills needed to fill current roles.

Rao said cyber security can be “very challenging” to teach, but it is important not just to fill the cyber skills gap, but also to ensure young people who will grow up to be technology users are aware of the risks.

“Some of the students actually said they didn’t realize there was a need for cyber experts out there,” she said. “Even if they’re not interested in being cyber professionals, it’s just raising awareness among them.”

Like many male-dominated sectors, there is a lack of women in the cyber security space, and it has been suggested that recruiting more women into the sector could be the key to closing the skills gap.

But Rao said very few girls choose to take computer science, and getting them interested in such subjects is a “constant struggle”.

“They have got so much to give and they have so many amazing ideas,” she added.

The girls who did choose to take part in both computer science as part of their year nine GCSE options and in the Cyber Discovery challenge were “nervous” at first, said Rao, but having a female teacher helps to build their confidence.

She also said that for those who did not want to study computer science, Skinners’ Academy offers digital media and creative iMedia as subjects, which are slightly less technology-focused.

“If they don’t want to go into the computer science field, they can go into creative iMedia, which allows them to be more free with their creative skills,” said Rao.

As automation begins to make some jobs redundant, creative skills in the technology industry have been emphasized as a future necessity.


via:  computerweekly


Ben is a chatbot that lets you learn about and buy Bitcoin


It’s generally a given that whenever a new technology takes off people rush into the space to build everything under the sun, and eventually natural selection kicks in and only the truly useful remain. For example, chatbots became trendy last year and we quickly began seeing chatbots for weather, movie recommendations, personal finance, etc. Some of these are useful, but until natural language processing improves you’re probably better off just doing the task yourself.

But there are a few exceptions, with one in particular being chatbots designed for the purpose of making a very complex topic or task approachable to the average person.

Like cryptocurrencies.

Ben is a chatbot that lets anyone become familiar with cryptocurrencies via a recognizable chat interface. By talking with “Ben”, users can do things like take lessons and learn about cryptocurrency, read the latest industry news, and of course buy and sell Bitcoin.

By focusing on an underserved market (i.e. people who have no idea what Bitcoin is or how to buy it) Ben has the unique advantage of not having to go head to head with established crypto titans like Coinbase or Circle.

The startup is part of Y Combinator’s Winter ’18 batch, and previously raised a $580K pre-seed from Third Kind Venture Capital and various angel investors.

After completing a KYC check (which is also done via chat) users in 21 states can buy and sell Bitcoin, with other states and support for Ethereum, Ripple, and Bitcoin Cash rolling out in the coming months. The startup charges 1% for buys and sells, which is in line with or lower than the fees at most major exchanges.

The app also has a social feature where you can link with friends to see their returns (only on a percentage basis) to see who is a better investor.

Users’ cryptocurrency is stored in the cloud, but their private keys live only on their own personal devices, which isn’t as secure as complete cold storage but does ensure that their bitcoin can’t be spent without someone having access to their phone. Ben also gives new users a backup seed to write down in case they lose their phone.

But Ben isn’t necessarily meant to support an experienced crypto user who has a high-value portfolio and needs advanced features and security.

Instead, the startup’s goal is to make buying and learning about cryptocurrency accessible to anyone, especially those without the technical knowledge or desire to spend the time learning how an exchange works. And as natural language technology evolves Ben will be able to answer more and more questions over time, making it a perfect on-ramp for people who need a little more hand holding before they open their wallet and trade their (actual) benjamins for a string of ones and zeros.


via:  techcrunch


Microsoft makes it simpler to port your favorite distros: Linux on Windows 10

The company is releasing code designed to streamline the process of porting a Linux distribution to run on the Windows Subsystem for Linux (WSL).

Microsoft is making it easier for Linux-based operating systems to run on top of Windows 10.

The company is releasing code designed to streamline the process of porting a Linux distribution to run on the Windows Subsystem for Linux (WSL).

The WSL allows Windows 10 to run various GNU/Linux distros from the Windows Store, providing access to Ubuntu, openSUSE, Fedora, and Kali Linux, with Debian due soon, and other distros to be added over time.

“We know that many Linux distros rely entirely on open-source software, so we would like to bring WSL closer to the OSS community,” said Tara Raj of Microsoft’s WSL team, announcing the release of the code for a “reference implementation for a WSL distribution installer application” on the code repository GitHub.

“We hope open-sourcing this project will help increase community engagement and bring more of your favorite distros to the Microsoft Store.”

WSL distros run with a command line shell, rather than offering graphical desktops, and support a range of command line tools, as well as applications such as Apache web server and Oracle MySQL.

Those managing Linux distributions will be able to study the sample code for Microsoft’s reference installer to help them turn their distribution into an app that can be submitted to the Microsoft Store.

Raj also announced that developers will be able to sideload custom Linux distros on their Windows 10 machine, although these custom distros will typically not be distributed through the Windows Store.

WSL allows different Linux distros to run side-by-side within Windows and Microsoft has previously stated that its aim with the WSL is to provide “the best development environment, regardless of the technologies that developers use, or the platforms they wish to target”.

However, at present, the WSL also has many disadvantages compared with running a dedicated GNU/Linux system. Microsoft doesn’t support desktop environments or graphical applications running on WSL, and also says it is not suitable for running production workloads, for example an Apache server supporting a website.

WSL is a work in progress, with Microsoft adding new features and support over time.


Image (Microsoft): Calling tools from different Linux distros from the Windows command line.



via:  techrepublic


Why Does Data Exfiltration Remain an Almost Unsolvable Challenge?

From hacked IoT devices to corporate infrastructures hijacked for crypto-mining to automated ransomware, novel and sophisticated cyber-attacks are notoriously hard to catch. It is no wonder that defending against these silent and never-seen-before threats dominates our security agendas. But while we grapple with the challenge of detecting the unknown, data exfiltration – an old and very well-known risk – doesn’t command nearly the same amount of attention. Yet data exfiltration happens, and it happens by the gigabyte.

As attackers improve their methods of purloining the sensitive data we trust our organizations to keep safe, one critical question remains: why does data exfiltration present the security community with such a formidable challenge?

Gigawatts and Flux Capacitors. Let’s go Back in Time.

All data exfiltration attacks share one common trait: the early warning signs of anomalous activity on the network were present, but traditional security tools failed to catch them. Regardless of the level of subtlety, or the number of devices involved, perimeter tools missed the window of opportunity between initial compromise and unauthorized data transfer – allowing hundreds of gigabytes of data to be exfiltrated from the organization.

The Sony hack of 2014 brought the world to a startling halt when it was revealed that attackers had spent over a year leaking 100 terabytes of data from the network. The next year brought us the Panama Papers, where allegedly 2.6 terabytes of data were leaked, causing reputational damage to some of the world’s most recognizable public figures. And in 2016, allegedly 80 gigabytes of data escaped from the Democratic National Committee’s network, launching two years of skepticism and distrust around the US elections. Each of these cases of sizeable data exfiltration remained undetected for months, or even years – only to be discovered when the data had already long been lost.

When we look at this cycle of stealthy and silent data breaches, we have to ask ourselves: how can such tremendous amounts of data leave our corporate networks without raising any alarms?

Data Exfiltration

Modern Networks: Living Organisms

The challenge in identifying indicators of data exfiltration lies partly in the structure of today’s networks. As our businesses continue to innovate, we open the door to increased digital complexity and vulnerability – from BYOD to third party supply chains, organizations significantly amplify their cyber risk profile in the name of optimal efficiency.

Against this backdrop, our security teams are hard-pressed to identify the subtle telltale signs of a data exfiltration attempt in the hope of stopping it in its tracks. To add to the complexity, they need to find the proverbial needle in an ever-growing haystack of hundreds of thousands of devices on their network that they did not build, install, or even know existed.

Networks today are much like living organisms: they grow, they shrink, and they evolve at a rapid rate. If we think about a network as a massive data set that changes hundreds, if not thousands, of times per second, then we have to realize that no security team will ever be able to keep up with which actions are authorized versus which actions are indicative of data exfiltration.

The Old Approach Needs Victims Before it Can Offer Solutions

Compounding the challenge of today’s labyrinthine networks, stretched security teams are always on the offense – fighting back-to-back battles against the latest form of unpredictable threat. So how can security teams cut through the noise and discern the subtle differences between legitimate activity and criminal data exfiltration campaigns?

Five years ago, we relied on historical intelligence to define tomorrow’s attack. But the never-ending cycle of data breaches has taught us that these approaches were just as insufficient then as they are now. Identifying data exfiltration should be low-hanging fruit for security teams, but to get there, we need to rely upon technologies that make no assumptions about what ‘malicious’ activity looks like.

Organizations are increasingly turning to AI technology for the answer, capable of identifying subtle deviations from normal network activity. By understanding the nuances of day-to-day network activity, self-learning technology correlates seemingly-irrelevant pieces of information to form a comprehensive picture of what is happening within our network borders. Consequently, AI spots the subtle indicators of exfiltration as it’s happening – giving security teams valuable time to mitigate the crisis before it becomes a headline.
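Commercial products model far richer behavior than any short example can, but the core idea of flagging deviations from a learned baseline can be sketched in a few lines of Python. This toy version (all names and numbers invented) learns each host's normal daily outbound volume and alerts when today's transfer sits far outside it:

```python
import statistics

def exfil_alerts(baseline, today, threshold=3.0):
    """Flag hosts whose outbound volume deviates sharply from their own baseline.

    baseline: {host: [daily outbound byte counts]}; today: {host: bytes sent today}.
    A crude z-score stand-in for the self-learning modelling described above.
    """
    alerts = []
    for host, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
        if (today.get(host, 0) - mean) / stdev > threshold:
            alerts.append(host)
    return alerts

history = {"db01": [90, 110, 100, 95], "web01": [200, 210, 190, 205]}
print(exfil_alerts(history, {"db01": 5_000, "web01": 205}))  # ['db01']
```

The point of the sketch is the shape of the approach: no signature of "malicious" is needed, only a per-host notion of normal, which is why a sudden multi-gigabyte transfer stands out even when the protocol it rides on is perfectly legitimate.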

To break the cycle of high-profile data breaches, we must embrace AI technologies that evolve with our organizations, strengthen their defenses over time, and identify data exfiltration tactics before our sensitive information is long past the network perimeter. And as we face a global cyber skills shortage, it is now more imperative than ever that we work in tandem with technology capable of doing the heavy lifting for us. Attackers seeking to leak our most sensitive data are evolving to keep up with our defenses – are we evolving too?


via:  securityweek


World celebrates, cyber-snoops cry as TLS 1.3 internet crypto approved

Forward-secrecy protocol comes with the 28th draft.

A much-needed update to internet security has finally passed at the Internet Engineering Task Force (IETF), after four years and 28 drafts.

Internet engineers meeting in London, England, approved the updated TLS 1.3 protocol despite a wave of last-minute concerns that it could cause networking nightmares.

TLS 1.3 won unanimous approval (well, one “no objection” amid the yeses), paving the way for its widespread implementation and use in software and products from Oracle’s Java to Google’s Chrome browser.

The new protocol aims to comprehensively thwart any attempts by the NSA and other eavesdroppers to decrypt intercepted HTTPS connections and other encrypted network packets. TLS 1.3 should also speed up secure communications thanks to its streamlined approach.

The critical nature of the protocol, however, has meant that progress has been slow and, on occasion, controversial. This time last year, Google paused its plan to support the new protocol in Chrome when an IT schools administrator in Maryland reported that a third of the 50,000 Chromebooks he managed bricked themselves after being updated to use the tech.

Most recently, banks and businesses complained that, thanks to the way the new protocol does security, they will be cut off from being able to inspect and analyze TLS 1.3 encrypted traffic flowing through their networks, and so potentially be at greater risk from attack.

Unfortunately, that self-same ability to decrypt secure traffic on your own network can also be potentially used by third parties to grab and decrypt communications.

An effort to effectively insert a backdoor into the protocol was met with disdain and some anger by internet engineers, many of whom pointed out that it will still be possible to introduce middleware to monitor and analyze internal network traffic.


The backdoor proposal did not move forward, meaning the internet as a whole will become more secure and faster, while banks and similar outfits will have to do a little extra work to accommodate and inspect TLS 1.3 connections as required.

At the heart of the change – and the complaints – are two key elements: forward secrecy, and ephemeral encryption keys.

TLS – standing for Transport Layer Security – basically works by creating a secure connection between a client and a server – your laptop, for example, and a company’s website. All this is done before any real information is shared – like credit card details or personal information.

Under TLS 1.2 this is a fairly lengthy process that can take as much as half a second:

  • The client says hi to the server and offers a range of strong encryption systems it can work with
  • The server says hi back, explains which encryption system it will use and sends an encryption key
  • The client takes that key and uses it to encrypt and send back a random series of letters
  • Together they use this exchange to create two new keys: a master key and a session key – the master key being stronger; the session key weaker.
  • The client then says which encryption system it plans to use for the weaker, session key – which allows data to be sent much faster because it doesn’t have to be processed as much
  • The server acknowledges that system will be used, and then the two start sharing the actual information that the whole exchange is about

TLS 1.3 speeds that whole process up by bundling several steps together:

  • The client says hi, here’s the systems I plan to use
  • The server gets back saying hi, ok let’s use them, here’s my key, we should be good to go
  • The client responds saying, yep that all looks good, here are the session keys

As well as being faster, TLS 1.3 is much more secure because it ditches many of the older encryption algorithms that TLS 1.2 supports, in which people have found holes over the years. Effectively, the older crypto-systems potentially allowed miscreants to figure out what previous keys had been used (a lack of “forward secrecy”) and so decrypt previous conversations.

A little less conversation

For example, snoopers could, under TLS 1.2, force the exchange to use older and weaker encryption algorithms that they knew how to crack.

People using TLS 1.3 will only be able to use more recent systems that are much harder to crack – at least for now. Any effort to force the conversation to use a weaker 1.2 system will be detected and flagged as a problem.
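On the client side, refusing a downgrade can be as simple as pinning the minimum protocol version. As a minimal sketch using Python's standard-library `ssl` module (requires Python 3.7+ built against OpenSSL 1.1.1 or later for TLS 1.3 support), a context configured this way will abort any handshake that offers only TLS 1.2 or below:

```python
import ssl

# Build a client context that refuses anything below TLS 1.3, so a forced
# downgrade to TLS 1.2 and its weaker cipher suites fails the handshake outright.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Any socket wrapped with this context will now only negotiate TLS 1.3.
print(ctx.minimum_version)
```

Browsers and servers apply the same policy internally; the downgrade detection built into TLS 1.3 itself catches the cases where an attacker tampers with the version negotiation in transit.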

Another very important advantage to TLS 1.3 – but also one that some security experts are concerned about – is called “0-RTT Resumption” which effectively allows the client and server to remember if they have spoken before, and so forego all the checks, using previous keys to start talking immediately.

That will make connections much faster, but the concern, of course, is that someone malicious could get hold of the “0-RTT Resumption” information and pose as one of the parties. Internet engineers are less concerned about this risk – which would require access to a machine – than about the TLS 1.2 weaknesses that allowed people to hijack and listen in on a conversation.

In short, it’s a win-win but will require people to put in some effort to make it all work properly.

The big losers will be criminals and security services who will be shut out of secure communications – at least until they figure out a way to crack this new protocol. At which point the IETF will start on TLS 1.4.


via:  theregister


Why do the Vast Majority of Applications Still Not Undergo Security Testing?

Did you know that 84% of all cyber attacks target applications, not networks? What’s even more curious is that 80% of Internet of Things (IoT) applications aren’t even tested for security vulnerabilities.

It is 2018, and despite all the evidence around us, we haven’t fully accepted the problem at hand when it comes to software security. Because we haven’t accepted the problem, we are not making progress in addressing the associated vulnerabilities. Which is why after an active 2017, we are already seeing numerous new attacks before we leave the first quarter of the year.

So why the lack of progress?

The evidence that software is a primary attack point is everywhere, yet many choose to ignore security testing—at least for four out of every five IoT applications running today. Since IoT has proven to be an attractive attack vector, one would think that securing them would be of the utmost importance. Apparently not.

The mythology around limiting testing to perceived high-risk applications has been written about in other columns, so I will not cover that ground today. In summary, the evidence is overwhelming: there have been numerous cases where an application perceived as low-risk was used as the entry point to eventually breach high-risk applications and access high-value targets.

A testing regime that ignores large blocks of an organization’s software is no longer viable. However, doing cursory testing simply to check a box is not much better, and may create a false sense of security. Running a test because an auditor dictates that a test be run is not security. Running a test and addressing the findings is a step forward. You would be shocked by the number of organizations I have seen that generate lots of test results but never act on them.

Effectively evaluating secure code

The RSA Conference will be upon us in April, and a trip through the exhibit hall will find numerous application security testing (AST) vendors of all shapes, sizes, and approaches, each breathlessly promising you they are the one silver bullet you need to test your software security. At best they are telling you a partial truth, as the nature of today’s software demands multiple tests to comprehensively evaluate the security of any application. That is because applications contain three specific components where vulnerabilities can be found, and each must be tested in a different way for security testing to be complete.

1. The code you write. In spite of the adoption of open source and the move to agile methodologies, one thing remains constant: Your coders still write code. Source code analysis (static analysis) is designed to find security vulnerabilities and quality issues in your code as it’s being developed.

2. The code you get from open source. With the growing use of open source, the amount of code from external sources in any application is rising exponentially. This open source code may contain profound vulnerabilities that immediately become part of your software. Software composition analysis (SCA) detects open source and third-party component risks in development and production. It also identifies potential licensing issues in open source code used in your applications.

3. The running application. When code is deployed on the web, the runtime environment must be tested for vulnerabilities through dynamic testing. Testing the application in its running state will reveal problems simply not detectable by static analysis. For high-risk applications, many organizations step up their game by including the human element in the dynamic testing process in the form of ethical hacking.
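Of the three, software composition analysis (point 2) is the easiest to illustrate: at its simplest it is a lookup of your pinned dependency versions against an advisory feed. The following Python toy (the library name, version, and advisory string are all invented placeholders, and real SCA tools also resolve transitive dependencies and version ranges) shows the basic shape:

```python
# A toy software-composition-analysis pass: compare a project's pinned
# dependencies against a (hypothetical) advisory feed of known-bad versions.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.2"): "CVE-XXXX-YYYY (illustrative placeholder)",
}

def sca_report(dependencies):
    """dependencies: {name: version} as parsed from a lockfile.

    Returns the subset of pinned packages that match a known advisory."""
    return {
        (name, ver): advisory
        for (name, ver), advisory in KNOWN_VULNERABLE.items()
        if dependencies.get(name) == ver
    }

deps = {"examplelib": "1.0.2", "otherlib": "2.4.0"}
print(sca_report(deps))
```

Even this trivial check makes the column's point concrete: the vulnerable code arrived in your application without any of your developers writing a line of it, so no amount of static analysis of your own source would have surfaced it.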

Getting a sense of the problem here? Taking IoT as a widespread example, 80% of these applications are not tested at all. For the one-fifth that does receive some form of testing, the testing is likely incomplete. And we already established that many organizations find but do not fix problems.

No wonder the news in 2018 sounds all too familiar.

Until organizations shift their security priorities from endpoint and network security and start paying more attention to software security, I do not see the carousel stopping anytime soon. I estimate that at any large IT security conference, only 10% of the conference is focused on software security, while the traditional emphasis on perimeter defenses continues to dominate the conversation.

Practical steps to move forward

The best way to reduce the impact of security practices on development is to establish an emphasis on building secure code at the source by integrating secure coding practices into the secure development life cycle. This is a subject near and dear to my heart that I addressed in a previous column.

So how do you move your organization forward? While I do not have a silver bullet for you, I do have practical advice:

● Rebalance your IT security priorities and budgets to shift the emphasis where the problem exists—software security.

● Build a software security group that can then construct and manage a rational and comprehensive software testing program.

● Employ tools and programs that empower developers to write secure, quality software from the start. Building security in is a far better approach than trying to test yourself clean.

It is time for a balanced approach to IT security that places the appropriate emphasis on where you are being attacked: your software. The path to effectively addressing the problem is known, so make the hard choices to give the problem the attention it deserves. 2019 will be here sooner than you think.


via:  securityweek


GitHub Security Alerts Lead to Fewer Vulnerable Code Libraries

GitHub says the introduction of security alerts last year has led to a significantly smaller number of vulnerable code libraries on the platform.

The code hosting service announced in mid-November 2017 the introduction of a new security feature designed to warn developers if the software libraries used by their projects contain any known vulnerabilities.

The new feature looks for vulnerable Ruby gems and JavaScript NPM packages based on MITRE’s Common Vulnerabilities and Exposures (CVE) list. When a new flaw is added to this list, all repositories that use the affected version are identified and their maintainers informed. Users can choose to be notified via the GitHub user interface or via email.

When it introduced security alerts, GitHub compared the list of vulnerable libraries to the Dependency Graph in all public code repositories.

The Dependency Graph is a feature in the Insights section of GitHub that lists the libraries used by a project. Since the introduction of security alerts, this section also informs users about vulnerable dependencies, including CVE identifiers and severity of the flaws, and provides advice on how to address the issues.
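The matching GitHub describes can be sketched as a lookup over the dependency graph: when a new CVE names an affected package version, collect every repository pinned to it. Repository and package names below are invented, and the data shapes are assumptions for illustration:

```python
# Assumed shape of a dependency graph: repository -> list of
# (package, version) pairs it depends on. All names are made up.
dependency_graph = {
    "org/app-one": [("pad-lib", "1.0.0")],
    "org/app-two": [("pad-lib", "2.0.0"), ("gem-example", "3.1.0")],
    "org/app-three": [("gem-example", "3.1.0")],
}

def repos_to_alert(package, affected_version):
    # Walk the graph and collect every repository using the affected
    # version, i.e. the maintainers who would receive an alert.
    return sorted(repo for repo, deps in dependency_graph.items()
                  if (package, affected_version) in deps)

print(repos_to_alert("gem-example", "3.1.0"))
```

Doing this across all public repositories whenever the CVE list changes is what lets GitHub notify maintainers as soon as a new flaw is published.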

The initial scan conducted by GitHub revealed more than 4 million vulnerabilities in over 500,000 repositories. Affected users were immediately notified and by December 1, roughly two weeks after the launch of the new feature, more than 450,000 of the flaws were addressed either by updating the affected library or removing it altogether.

According to GitHub, in the vast majority of cases, active developers address vulnerabilities within a week of being alerted.

“Since [December 1], our rate of vulnerabilities resolved in the first seven days of detection has been about 30 percent,” GitHub said. “Additionally, 15 percent of alerts are dismissed within seven days—that means nearly half of all alerts are responded to within a week. Of the remaining alerts that are unaddressed or unresolved, the majority belong to repositories that have not had a contribution in the last 90 days.”

GitHub was recently hit by a record-breaking distributed denial-of-service (DDoS) attack that peaked at 1.3 Tbps, but the service was down for less than 10 minutes.



via:  securityweek


Microsoft to lock out Windows RDP clients if they are not patched against hijack bug

No update installed? No connection.

Microsoft will prevent Windows Server from authenticating RDP clients that have not been patched to address a security flaw that can be exploited by miscreants to hijack systems and laterally move across a network.

The bug, CVE-2018-0886, was fixed in March’s Patch Tuesday software update, and involves Microsoft’s implementation of its Credential Security Support Provider protocol (CredSSP). A miscreant-in-the-middle on a corporate network can abuse the flaw to send arbitrary commands to a server to execute while masquerading as a legit user or admin.

From there, lateral movement through an intranet becomes possible, and that’s just the sort of thing bad actors love. The flaw was discovered by security company Preempt, which explained it in a video accompanying its disclosure.

Microsoft’s documentation for the patch reads: “Mitigation consists of installing the update on all eligible client and server operating systems and then using included Group Policy settings or registry-based equivalents to manage the setting options on the client and server computers.

“We recommend that administrators apply the policy and set it to ‘Force updated clients’ or ‘Mitigated’ on client and server computers as soon as possible.”
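For reference, Microsoft’s guidance for CVE-2018-0886 maps these policy options to a registry value (worth verifying against the advisory for your Windows version before deploying). A registry fragment enforcing the “Force updated clients” setting would look roughly like this:

```
Windows Registry Editor Version 5.00

; AllowEncryptionOracle: 0 = Force updated clients, 1 = Mitigated, 2 = Vulnerable
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters]
"AllowEncryptionOracle"=dword:00000000
```

Note that “Force updated clients” will refuse connections from unpatched clients, so it should be rolled out only after the update has reached the whole RDP client fleet.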

The Microsoft advisory also mentions two planned actions to address the vulnerability. On April 17, 2018, an update to Microsoft’s RDP client “will enhance the error message that is presented when an updated client fails to connect to a server that has not been updated.” And on May 8, or perhaps later, “an update to change the default setting from vulnerable to mitigated” will arrive.

On Friday March 23rd, Preempt personnel told the Black Hat Asia conference in Singapore that the May patches will cause un-patched RDP clients to be rejected by patched Windows Server boxes, so that the vulnerability can’t be exploited.

It seems sensible to keep a close eye on April and May’s Patch Tuesday dump. It’s also worth looking for updates from vendors of third-party RDP clients, as they can also fall foul of this vulnerability.


via:   theregister
