Monthly Archives: April 2014

Amazon Merges Kindle Personal Documents With Cloud Drive

Amazon announced through an email to its customers that all personal documents archived in your Kindle e-reader library are also now being made available from Amazon Cloud Drive. The files will be placed in a new folder called “My Send-to-Kindle Docs” where you’ll then be able to manage the items as you would any other file, including being able to organize, share or delete them as need be.

Personal documents are those which include files you’ve sent to your Kindle device, like Word documents, PDFs, images, online news articles or blogs, or um, you know, e-books you’ve…ahem…acquired. You can upload these to your Kindle device via the browser, desktop, mobile device, or email.

The change is taking place without any need for end user involvement, similar to other actions Amazon has taken in the past, such as when it switched on "AutoRip" functionality for CDs and vinyl, automatically placing music files in users' online storage drives. (Well, that's one way to work around the challenge of generating traction for Amazon Cloud Drive!)

Says Amazon:

And as always, you can use Manage Your Kindle to see a list of your documents, re-deliver them to Kindle devices and free reading apps, delete them, or turn off auto-saving of documents to the cloud. Documents will be delivered just as they have in the past and you will continue to have 5 GB of free cloud storage for your personal documents. Just “Send Once, Read Everywhere.”

In addition, the documents you store in your Amazon Cloud Drive will be stored in their native format, like Microsoft Word (DOC) or TXT for example, so you’ll be able to access them from anywhere using the Cloud Drive service. Previously, Amazon would automatically convert things like DOC files to Kindle-friendly formats. It still does this for the sake of reading, but now keeps a copy of the original in your Cloud Drive.

However, the change doesn't mean your Cloud Drive will automatically turn into a free, web-based e-book reader of sorts. Things that you can do on the Kindle, like bookmarking pages or keeping track of reading progress, won't work on Cloud Drive.

But there is one benefit, as others have noted: because Kindle owners received 5 GB in free space for Personal Documents and Cloud Drive users received 5 GB for free file storage, the resulting merger means you’ll now have 10 GB of free file storage to play around with.


Via: techcrunch

Facebook ‘Nearby Friends’ Will Track Your Location History To Target You With Ads

Facebook says it’s not using its new Nearby Friends feature to target ads yet, but after I asked why it’s tracking “Location History” it admitted it will eventually use the data for marketing purposes.

This morning, the proximity sharing feature began rolling out to iOS users after its launch, and with it I discovered a new “Location History” setting that must be left on to use Nearby Friends.

For an overview of how the Nearby Friends option lets you share your proximity or real-time exact location, and its privacy implications, read our feature article from launch.

The description below the Location History setting in Nearby Friends reads “When Location History is on, Facebook builds a history of your precise location, even when you’re not using the app. See or delete this information in the Activity Log on your profile.” Notice the careful use of ‘builds a history’ instead of the scarier word ‘tracks’.

Behind the Learn More link, Facebook explains that you can turn this tracking off but, "Location History must be turned on for some location features to work on Facebook, including Nearby Friends." It also notes that "Facebook may still receive your most recent precise location so that you can, for example, post content that's tagged with your location or find nearby places." So even if you turn it off, Facebook will still collect location data when necessary, as it did before Nearby Friends debuted.

If you leave it on, you'll see your coordinates periodically added to your Activity Log. However, you'll only see your Location History if you scroll to the bottom of the filter options and look at this category of data specifically. It's a bit sketchy that these maps don't show up in the default view of Activity Log like most other actions. It's almost like Facebook is trying to discourage use of the Clear Location History button.

There are plenty of ways Facebook could use this data to make your experience better. The company says "Location History helps us know when it makes the most sense to notify you (for example, by making sure we don't send you a notification every time a Facebook friend who works with you is also in the office)."

By tracking where you are, Facebook could show you more relevant News Feed stories from friends or Pages nearby. For example, if Facebook sees I'm in the North Beach district of San Francisco instead of my home district of the Mission in the south of the city, it might increase the likelihood that I'd see status updates from friends who live up there, or from an Italian restaurant in the area whose Page I Liked. It could also suggest Events happening nearby.

If I travel far from home, say to London, Facebook could show more posts from friends who live there or Pages based in the U.K. It could also know not to show me checkins or Events from home that would be irrelevant while I’m away.

Location Advertising

But there are also big opportunities for Facebook to use Location History to help itself make money. When I asked if it could power advertising, a Facebook spokesperson told me “at this time it’s not being used for advertising or marketing, but in the future it will be.”

It wouldn't confirm exactly how, but I foresee it targeting you with ads for businesses that could actually be in sight or just a few hundred feet away. An ad for a brick-and-mortar clothing shop would surely be more relevant if shown when you're on the same block. The ability to generate foot traffic that leads to sales could let Location History-powered Facebook ads generate big returns on investment for meatspace business advertisers. That means they'd be willing to pay more for these hyper-local ads than for ones aimed at users who are far away and much less likely to visit their store.

Facebook’s own VP Carolyn Everson discussed how she imagined Facebook ads might evolve in a call with Bloomberg in 2012, saying “Phones can be location-specific so you can start to imagine what the product evolution might look like over time, particularly for retailers”. And back in 2011, Facebook acquired a hyperlocal ad targeting startup called Rel8tion.

Luckily, Facebook says that putting Location History in the Activity Log “gives people a way to view and delete the underlying data.” That’s a relief, as it means you could use Nearby Friends with Location History turned on and then clear your history in the Activity Log to prevent that data from being used for marketing if you’re really concerned about that. The company explains that “When you hit delete we remove data from the user interface immediately and start working to permanently delete the data from the system.”

While there’s always a vocal minority angry about Facebook’s aggressive ad targeting, and the wider public is generally uneasy about it, we’ve seen that these feelings don’t necessarily change people’s behavior in the long-term. That’s why Facebook is still thriving with big revenues. And over time I believe the world will get more comfortable with ad targeting. There’s going to be ads on Facebook no matter what, and I personally would rather see relevant ones for local businesses than ads for random apps or websites.

Still, the realization that Location History will be used for ad targeting could scare some people away from using Nearby Friends.

This sentiment is dangerous for Facebook because Nearby Friends is off by default and users have to actively turn it on. It’s also stuck in the navigation menu of its main iOS and Android apps. If users turn it off or refuse to turn it on in the first place, it may be too buried for them to voluntarily go in and activate it.

Facebook may have anticipated this, which could partly be why it plans to use News Feed teaser stories like “4 friends are nearby” to try to coax users into turning on Nearby Friends and Location History. With both social and marketing privacy concerns looming, though, Facebook will have to clearly demonstrate that Nearby Friends makes your life better by helping you gather with friends, or people won’t give up their data to use it.


Via: techcrunch

Massive FBI facial recognition database raises privacy fears

The FBI is building a massive facial recognition database that could contain as many as 52 million images by 2015, according to information obtained by the EFF via a freedom of information request.

The agency’s Next Generation Identification (NGI) system is an update to its existing fingerprint database – which itself contains over 100 million records – and has been in development for years.

In addition to photos, it also includes biometric data such as iris scans and palm prints.

In 2012 the NGI database contained 13.6 million images, covering somewhere between 7 and 8 million individuals. By the middle of last year that had grown to some 16 million images.

The new system will be capable of processing 55,000 direct photo enrolments daily and has the ability to conduct tens of thousands of searches every day. It is estimated that by 2015 the database will contain 52 million facial images.

The EFF's biggest concern is that the FBI anticipates having as many as 4.3 million images of non-criminals stored in the NGI database by next year.

Under the current system there is a clear separation between the records of criminals and non-criminals, which are stored in separate databases.

But with the new system that distinction will be blurred – all records will now be stored on one database, irrespective of whether or not someone has been arrested for a crime.

In the past, the FBI has never linked the criminal and non-criminal fingerprint databases. This has meant that any search of the criminal print database (such as to identify a suspect or a latent print at a crime scene) would not touch the non-criminal database. This will also change with NGI. Now every record – whether criminal or non – will have a “Universal Control Number” (UCN), and every search will be run against all records in the database. This means that even if you have never been arrested for a crime, if your employer requires you to submit a photo as part of your background check, your face image could be searched – and you could be implicated as a criminal suspect – just by virtue of having that image in the non-criminal file.

Furthermore, non-criminals who previously applied for jobs requiring fingerprints may already have had those prints passed along to the old database. With NGI, employers will pass along photos as well.
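The structural shift the EFF describes can be sketched as a toy data model (the records, names and fields below are invented for illustration and bear no resemblance to the FBI's actual schema):

```python
# Toy illustration (hypothetical data model, not the FBI's actual schema):
# under the old system, a criminal search never touched non-criminal records;
# under a unified "Universal Control Number" index, every search scans both.

criminal_records = {
    "UCN-0001": {"name": "suspect_a", "source": "arrest booking"},
}
noncriminal_records = {
    "UCN-0002": {"name": "applicant_b", "source": "employment background check"},
}

def old_search(query):
    # Segregated model: only criminal records are ever candidates.
    return [ucn for ucn, rec in criminal_records.items() if rec["name"] == query]

def ngi_search(query):
    # Unified model: one index over all records, regardless of origin.
    unified = {**criminal_records, **noncriminal_records}
    return [ucn for ucn, rec in unified.items() if rec["name"] == query]

print(old_search("applicant_b"))  # [] -- job applicants were invisible
print(ngi_search("applicant_b"))  # ['UCN-0002'] -- now they are candidates
```

Under the segregated model, a background-check photo could never surface in a criminal investigation; under the unified index, every search sweeps it in.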

EFF also expressed concerns over the origins of some of the images.

By 2015, the FBI estimates that the NGI will include images drawn from several categories, but it doesn't define either the "Special Population Cognizant" or the "New Repositories" category.

This, the group says, is a problem as there is no way of determining what rules govern these categories, where the data comes from, how the images are gathered, who has access to them, and whose privacy is impacted.

Finally, EFF has concerns over NGI serving up questionable responses to search queries:

We know from researchers that the risk of false positives increases as the size of the dataset increases – and, at 52 million images, the FBI's face recognition database is a very large dataset. This means that many people will be presented as suspects for crimes they didn't commit. This is not how our system of justice was designed and should not be a system that Americans tacitly consent to move towards.
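The scaling point is straightforward arithmetic: holding the per-comparison false match rate fixed, the expected number of innocent people flagged per search grows linearly with the size of the gallery. A back-of-the-envelope sketch (the match rate below is an assumed, illustrative figure, not an FBI statistic):

```python
# Back-of-the-envelope: expected false positives grow linearly with gallery size.
# The per-comparison false match rate is illustrative, not an FBI figure.

false_match_rate = 1e-6   # hypothetical: one false match per million comparisons

for gallery_size in (13_600_000, 16_000_000, 52_000_000):
    expected_false_hits = gallery_size * false_match_rate
    print(f"{gallery_size:>11,} images -> ~{expected_false_hits:.0f} false matches per search")
```

Even with a generously low one-in-a-million match rate, a 52-million-image gallery would surface dozens of innocent candidates on every single search.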


Via: nakedsecurity

Google Patents Tiny Cameras Embedded In Contact Lenses

Google has a new patent application with the USPTO (via 9to5Google), which takes one of the basic concepts of Glass and extends it even further: tiny cameras embedded in contact lenses for various uses, including photographing what a wearer sees, or providing the basic input for a contact-based assistive device for the visually impaired.

Google has previously detailed a plan to build smart contacts that measure blood glucose levels in diabetics to provide non-invasive, constant feedback to a wearer and potentially their doctor, too. This new system describes uses that could also benefit the medical community, like using input from the camera to spot obstacles and alert a wearer who has vision problems to their surroundings. The lenses could also offer vision augmentation for people regardless of ocular health, and even act as a next-gen platform for a Glass-like computing experience.

Obviously, big tech companies patent stuff all the time, and only a fraction of that ever makes it to shipping products. Plus, wearing contacts is something that anyone who doesn’t have to likely won’t warm up to easily. Still, as an assistive device, and an alternative to other, more obvious gadgets and intrusive tech like hearing aids or cochlear implants, this could be a tech that has legs in the near future.


Via: techcrunch

Condoleezza Rice Joins Dropbox’s Board As It Names New CFO, COO

Condoleezza Rice, former United States Secretary of State and National Security Advisor, has joined the board of cloud file storage and syncing firm Dropbox.

Dropbox is in the news today after launching a number of new products and features at a morning event in San Francisco. The company debuted Carousel, a photo storage and sharing service, along with the release of its Dropbox for Business offering to the general public, and an Android client for its Mailbox email solution.

Rice is a famous figure, known in almost equal parts for her ferocious intelligence and her controversial role in the Bush administration, which included comments on the weapons of mass destruction that Saddam Hussein was thought at the time to possess.

BusinessWeek initially reported the board pickup in a longer piece on the company. According to the magazine, Rice’s firm RiceHadleyGates has been an active advisor to Dropbox. TechCrunch confirmed the hire.

What's interesting about bringing Rice onto Dropbox's board is how normal it feels. Dropbox needs people with international experience to help it both deal with foreign governments that have blocked its use – China, for example – and spread a product developed in one country to others that are culturally different.

Rice certainly possesses that expertise. Box, a rival to Dropbox, has also made a recent push to expand internationally. In a market as competitive as this, you must be everywhere.


Dropbox announced two more executive changes. The company has a new CFO, Sujay Jaswa, who is being promoted into the role internally. Also, hailing from Google is Dropbox's new COO: Dennis Woodside. In the post announcing those changes, Dropbox reaffirmed the above, indicating that Rice will help the company with its international operations.



Via: techcrunch

Facebook Is Forcing All Users To Download Messenger By Ripping Chat Out Of Its Main Apps

Facebook is taking its standalone app strategy to a new extreme today. It’s starting to notify users they’ll no longer have the option to send and receive messages in Facebook for iOS and Android, and will instead have to download Facebook Messenger to chat on mobile.

Facebook’s main apps have always included a full-featured messaging tab. Then a few months ago, users who also had Facebook’s standalone Messenger app installed had the chat tab of their main apps replaced with a hotlink button that would open Messenger. But this was optional. If you wanted to message inside Facebook for iOS or Android, you just didn’t download Messenger. That’s not going to be an option anymore.

Soon, all iOS and Android users will have a hotlink at the bottom of their Facebook app that will open Messenger.

Notifications about the change are going out to some users in Europe starting today, and they'll have about two weeks, and see multiple alerts, before the requirement to download Messenger kicks in. Eventually, all Facebook users will be migrated to this new protocol. And you can bet some users are going to be angry.

The only way to escape the migration is to have a low-end Android device with an OS too old to run Messenger, to use Facebook's mobile web site, or to use Facebook's standalone content reader app Paper.

In an onstage talk I did with Mark Zuckerberg in November, the CEO revealed an explanation for today’s change that Facebook’s PR team just referred me to:

“the other thing that we’re doing with Messenger is making it so once you have the standalone Messenger app, we are actually taking messaging out of the main Facebook app. And the reason why we’re doing that is we found that having it as a second-class thing inside the Facebook app makes it so there’s more friction to replying to messages, so we would rather have people be using a more focused experience for that.” 

Essentially, Facebook sees messaging within its main apps as slow, buried, and sub-optimal overall. Its numbers probably indicate that people message more and have a better experience on the standalone Messenger app.

But forcing users to adopt a new messaging behavior could be very unpopular. Not everyone wants to manage multiple Facebook apps on their homescreen or stick them in a folder. A portion of Facebook users may prefer to keep things simple with one app for everything Facebook, even if it means it’s slower and it takes more taps to get to their messages.

Facebook was criticized for its bloated main apps, but this announcement seems like an over-correction, swinging wildly in the direction of each function having its own app. Obviously there's merit to only having to maintain one mobile chat interface: it promotes faster feature development and better stability. And once users go through the chore of setting up Messenger and adapting to its style, they may like it better. Personally, I like Messenger's clean look and feel, playful sounds, and quick performance.

Still, a unilateral forced migration is the exact kind of change Facebook users hate, and this will only breed more paranoia that their social network could change without their consent. Taking a slower “We’re switching everyone eventually, so you might as well do it now” approach might have gone over better than “Your familiar chat interface will be destroyed in two weeks whether you like it or not”.

The only real explanation for moving this quickly is that desperate times call for desperate measures. Facebook is fighting a war overseas for the fate of messaging. While it bought WhatsApp for $19 billion, it still has to battle lean standalone messaging apps like WeChat, Kik, KakaoTalk, and Line. Unless forced, users might have stuck with the old Facebook app’s messaging interface instead of seeing there was something that could better compete with these other apps. That isn’t going to make this change much easier to swallow, though.



Via: techcrunch

Mumsnet becomes first known UK victim of Heartbleed bug

Parenting website Mumsnet is the first known UK victim of hackers exploiting the recently discovered Heartbleed bug.

The site revealed it has reason to believe hackers could access the passwords and messages of its 1.5 million users before the vulnerability was fixed.

The revelation came within hours of the Canada Revenue Agency announcing that hackers exploiting the Heartbleed bug had stolen the social insurance numbers of 900 Canadians.

The vulnerability is caused by a flaw in OpenSSL software, which is widely used on the internet to provide security and privacy by encrypting data exchanges.

Mumsnet founder Justine Roberts told the BBC it became apparent that user data was at risk when her own username and password were used to post a message online.

Hackers later informed the site’s administrators that the breach was enabled by the Heartbleed bug and that the site’s data was not safe.

“On Friday 11 April, it became apparent that what is widely known as the Heartbleed bug had been used to access data from Mumsnet users’ accounts,” the London-based website said in an email to members.

Mumsnet is resetting all member passwords because it said there was no way of knowing which accounts were affected, and it had to work on the assumption that all accounts may have been exposed.

However, site administrators said there was no evidence that any account had been used for anything other than to highlight the security vulnerability.

Independent security advisor Graham Cluley said he was pleased that Mumsnet advised users to change other passwords if they used their Mumsnet password elsewhere on the net.

“You should never use the same password in more than one place – otherwise you could have an account breach on a site, which might not be critically important (Mumsnet, for instance) leading to much more serious hacks of your personal information elsewhere,” he wrote in a blog post.

While the Canadian tax agency is informing people of its breach by registered letter, Mumsnet has reportedly been criticised for sending users an email containing a link to its password reset page.

Standard online security advice dictates that users should be wary of clicking links in emails. An email urging users to visit the Mumsnet site to reset passwords would have been better, critics said.

Keith Bird, UK managing director of security firm Check Point, said it is important that people are cautious about clicking on any links in emails that they receive from organisations claiming that their security has been affected as a result of Heartbleed.

“There is a real risk that these are simply phishing emails, aiming to trick users into giving away personal details and passwords,” he said.

Although it was unwittingly introduced into the OpenSSL code in December 2011, the Heartbleed bug was made public only on 8 April 2014 by researchers at Google and Finnish security firm Codenomicon.

The researchers discovered that a coding flaw could enable hackers to access 64KB of unencrypted data repeatedly from the memory of systems using vulnerable versions of OpenSSL.
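In miniature, the flaw worked like this: the heartbeat handler copied as many bytes as the requester claimed to have sent, without checking that claim against the actual payload, so adjacent process memory came back in the reply. A simplified Python model of the missing bounds check (the real bug lived in OpenSSL's C code; the buffer contents here are made up):

```python
# Simplified model of the Heartbleed overread (the real bug was in OpenSSL's
# C heartbeat handler; this just mimics the missing bounds check).

process_memory = bytearray(b"HEARTBEAT-PAYLOAD" + b"secret-key-material-nearby" * 2)

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # Bug: trusts claimed_len and copies that many bytes starting at the
    # payload's location in memory -- overreading into adjacent data.
    start = process_memory.find(payload)
    return bytes(process_memory[start:start + claimed_len])

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    # Fix: refuse requests whose claimed length exceeds the actual payload.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload; request dropped")
    return payload[:claimed_len]

leak = heartbeat_vulnerable(b"HEARTBEAT-PAYLOAD", 40)
print(leak)  # echoes the 17-byte payload plus 23 bytes of adjacent "memory"
```

Repeating the request against a real server returned a fresh 64KB window each time, which is why passwords and keys sitting in memory were at risk.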

Large hardware, software and internet service providers have moved quickly since the bug was made public. But hundreds of thousands of IT systems will remain vulnerable to data theft until the affected versions of OpenSSL can be updated.

Millions of Android devices remain vulnerable to the bug a week after the flaw was made public and Google announced that devices running version 4.1.1 of its mobile operating system (OS) were at risk.

Google has created a fix, but it has yet to be pushed out to many of the devices that cannot run higher versions of the OS, potentially putting users at risk of data theft, reports the BBC.

Security firms have warned that hundreds of apps available across multiple platforms still need to be fixed and that hardware including smartphones, routers and cable boxes are all potentially affected.


Via: computerweekly

US Government Will Detail Internet Exploits, Except When It Doesn’t Want To

Heartbleed kicked off a new chapter in the rollicking discussion of privacy, digital security, and the role of government in protecting its citizenry from threats both real and imagined.

News of Heartbleed broke early last week, setting off a bout of soul-searching and scrambling by services large and small as they examined their own networks and products to see if they were exposed to the flaw. Much work remains for those affected to get their services air-tight and patched, with certificates revoked and replaced. It's no small task, and one that isn't nearly done.

Friday brought allegations that the NSA not only knew of Heartbleed, but had used the exploit for some time, perhaps two years. The NSA, in a statement, denied this. The White House followed suit. Since then we’ve learned a few things that are worth keeping in mind.

Let's begin with the U.S. government's policy on revealing flaws in Internet security. The New York Times wrote the key report on this, based on sourcing from "senior administration officials." The gist is that the U.S. government now claims to have a bent towards disclosing what flaws it does find, unless, as quoted by The Times, there is "a clear national security or law enforcement need."

While it is easy to appreciate a leaning towards disclosure, the above leaves the American people in a position of either trusting the government or not. Put simply, as the government gets to decide for itself what constitutes a "clear national security or law enforcement need," we, the average folk, have no window into what is not disclosed, and why.

There's reason for that, naturally: If the NSA decided to tell the world each and every exploit that it found and intended to use, they would all slam shut, and its job would become far harder if not impossible. At the same time, we haven't answered the following question: If the NSA had known about Heartbleed – and some remain convinced that, denials aside, it did – would it have told the Internet community?

If we can’t be sure that Heartbleed wouldn’t have passed the anti-efficacy test — the idea that a flaw is so dangerous to the public safety that it must be disclosed, potential offensive capabilities be damned — we are left essentially nowhere. That tension negates the fact that the NSA claims to have not known; if we can’t be sure of its own methods for determining what is to be disclosed and what not, at least in the abstract, any single case is simply an occluded data point with no axes to measure from.

The NSA doesn’t even need to know of an exploit in advance to, well, exploit it. The Guardian did a fine job explaining this yesterday [I quote at length to preserve tone]:

The agency’s recently-disclosed minimization procedures permit “retention of all communications that are enciphered.” In other words, when NSA encounters encryption it can’t crack, it’s allowed to – and apparently does – vacuum up all that scrambled traffic and store it indefinitely, in hopes of finding a way to break into it months or years in the future. As security experts recently confirmed, Heartbleed can be used to steal a site’s master encryption keys – keys that would suddenly enable anyone with a huge database of encrypted traffic to unlock it, at least for the vast majority of sites that don’t generate new keys as a safeguard against retroactive exposure.

If NSA moved quickly enough – as dedicated spies are supposed to – the agency could have exploited the bug to steal those keys before most sites got around to fixing the bug, gaining access to a vast treasure trove of stored traffic.

The NSA isn’t building those datacenters to hold its internal email, of course.
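The Guardian's point about master keys and stored traffic hinges on key lifetime. A toy sketch makes it concrete – the XOR stream cipher below is a deliberately crude stand-in for illustration, nothing like real TLS: with one long-lived key, every recorded session falls retroactively if that key ever leaks; with per-session ephemeral keys that are discarded, recorded ciphertext stays opaque.

```python
import hashlib
import os

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in stream cipher: XOR with a SHA-256-derived keystream.
    # (Illustration only -- real TLS uses vetted ciphers, not this.)
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# A site with one long-lived master key: every recorded session falls at once
# if that key is ever stolen (e.g. via Heartbleed).
master_key = b"long-lived-master-key"
recorded = [toy_encrypt(master_key, m) for m in (b"session one", b"session two")]
stolen_key = master_key  # obtained years later
print([toy_decrypt(stolen_key, c) for c in recorded])  # [b'session one', b'session two']

# With ephemeral per-session keys (forward secrecy), the key is gone by the
# time an attacker comes asking -- the recorded ciphertext stays opaque.
ephemeral = os.urandom(32)
recorded_fs = toy_encrypt(ephemeral, b"session three")
del ephemeral  # discarded after the session; nothing left to steal later
```

This is why the quote singles out "sites that don't generate new keys": forward secrecy is precisely the safeguard against retroactive exposure of a stored-traffic archive.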

Presuming for the moment that the NSA and the larger US government didn't know about Heartbleed – and you have to ask why, given their supposed prowess – the loophole remains open until all is patched. And we have little concrete in the way of promises that the government would have disclosed it had it known. And we have nothing to say that it would do so in the future.

The only upside to this situation is that we are engaged in what Professor Dawkins would call "consciousness raising" – a period of rising public knowledge of a situation that needs massive course correction. We're going to need better, and more, encryption, with more open-source technology and even more minds parsing the code to ferret out the weaknesses. But at least we understand where we stand.

The childhood and adolescence of the Internet are over. It’s time to grow up.



Via: techcrunch

Why some insurers are dumping utilities

Here’s a thoughtful piece from three IBM security experts that presents a little-known danger. Along with all the other challenges from our grid security inadequacies, some insurance companies are now refusing to insure utilities against cyberattacks.

But this article is about much more than the uninsurable risk. It lays out a path to a “convergence of all things security” that I believe is an essential step to a more secure grid. – Jesse Berst

By Diana Kelley, Pete Allor and Craig Heilmann

Why the smart grid needs “security intelligence”


BBC News recently posted a thought-provoking piece explaining why many energy companies (including power and utilities) are being turned down for insurance policies to cover cyber-attacks. The net: audits of existing defense and protection strategies “concluded that protections were inadequate.”

Given that we're talking about the entities that supply power to governments, cities and consumers around the globe, the knowledge that their protections from cyber-attacks aren't even considered adequate is fairly alarming. This doesn't mean that a Die Hard-style takedown of the United States' power grid is imminent. However, it does point to a few key facts that security professionals working with energy and utilities have been discussing for a few years now.


One point that often comes up is the convergence of networks, operations, communications and information technology. As the technology converges, so too must the way we monitor and manage security across it. The old way is no longer effective, and a new, more intelligent security paradigm is required.

Where we were – the not-so-smart grid


Not too long ago, the operations technology (OT) side of the energy house was run separately from the information technology (IT) side. On the OT side, Industrial Control System (ICS) devices, substations and other gear were managed over telecommunications links rather than IP-based networks. OT technologies like supervisory control and data acquisition (SCADA) systems used proprietary protocols such as DNP3 or Modbus over closed-loop networks, and the software was rarely updated.
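To give a sense of how bare these protocols are, here is a minimal sketch of a Modbus/TCP read request built from raw bytes (the register address and unit ID are arbitrary examples). Note what the frame lacks: no credentials, no session, no integrity check of any kind.

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, count: int) -> bytes:
    # MBAP header: transaction id, protocol id (0 = Modbus), remaining length,
    # unit id -- followed by function code 0x03 (read holding registers).
    # Note what is absent: no authentication, no encryption, no checksum
    # beyond what TCP itself provides.
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                      start_addr=0x0000, count=2)
print(frame.hex())  # 000100000006010300000002
```

Anyone who can reach the device on the network can speak this protocol, which is why closed-loop isolation was doing all the security work – and why IP-enabling these networks changes the risk picture so sharply.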

Back-end business software lived on the IT side and supported activities like billing and customer management. And when the Internet rose to prominence, IT got connected and adopted emerging security controls like firewalls, anti-malware and intrusion detection systems (IDS). Automatic software updates and patching later became commonplace.

Front-end OT stayed locked down and managed by a small number of administrators. The primary remit of OT is to keep operations running smoothly and without interruption – reliability of the system was paramount. Security controls like IDS weren't deemed ready for OT, and software updates were made very cautiously and slowly, and then only after approval by the OT vendor.



Where we are – the smarter grid

A number of priorities in energy changed the old, very separate model and ushered in an era of convergence between OT and IT. The need to remotely manage and control systems on the OT network has led to IP enablement and Internet connectivity for those systems – both to reduce staffing requirements and to allow infrequent vendor updates while maintaining system reliability. And often, to keep legacy systems accessible, a web front end is placed in front of them to support browser-based access and simplify management.

Next generation energy systems and the smart grid have also driven convergence. Smart grids improve efficiency and reliability by joining information from multiple suppliers and their consumers. Additionally, bringing that data together requires aggregation not only from partners but also from traditional IT systems.

Smart and advanced meters and smart houses are blurring the lines between IT and OT even more. Further, all of this technology is going mobile in one form or another – whether it’s a customer application to monitor energy usage at a solar-powered house that’s selling excess energy back to the grid, an energy employee reading meters with a handheld device, or data sent via telecom links to a central data repository.

Where we need to go – the intelligent grid

So where are we today? We are at a point where insurers deem current protections inadequate. This is not necessarily because the protections themselves were inadequate, but because of the recent convergence activity and changes in how energy companies do business with their partners and customers in the smart grid age.

One point that wasn’t raised in the BBC article is the lack of actuarial data for insurers to determine risk models and construct tables with. Without actuarial data, it’s impossible for the companies to estimate the real costs of a cyber-breach, to ensure adequate reserves for any breaches, or to set policy premiums. In fact, this is an area needing more work under Executive Order 13636, Improving Critical Infrastructure Cybersecurity.

So what can we do? Take a fresh look at the protections in the new converged OT/IT world and implement intelligent security controls and processes that understand and support the “convergence of all things security.” Traditionally siloed functional areas such as telecom and physical security are now connected to IT, OT and business applications (such as SAP transactions). They must be managed in a unified and intelligent way.

The importance of sensors

All of the above should be instrumented with sensors. Monitoring and network forensics tools can capture data from the sensors about events and intrusions, which can then be leveraged in risk models. Traditional IT security controls like security information and event management (SIEM) and identity and access management (IAM) can be used on converged energy networks to provide improved analysis and better protection. Finally, the findings can be rolled up to GRC tools for consistent execution of governance, risk management and compliance across domains.

Some traditional IT security software may be ready for the converged energy networks today. For example, access control to a web-connected legacy system, or application security testing for vulnerabilities in that web interface. Others, like SIEM, may require some tweaks to parse OT protocols like Modbus and DNP3, and new rule sets to capture activity on those networks.
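As a minimal sketch of what such parsing involves, the fragment below decodes the MBAP header of a Modbus/TCP frame and flags state-changing function codes, the kind of field a SIEM rule set would key on. The function name and the choice of which codes to alert on are illustrative assumptions, not taken from any particular SIEM product.

```python
import struct

# Modbus function codes that modify device state; a SIEM rule might
# alert when these appear from an unexpected source on the OT network.
WRITE_FUNCTION_CODES = {5, 6, 15, 16}  # write coil/register operations

def parse_mbap(frame: bytes) -> dict:
    """Parse the 7-byte Modbus/TCP MBAP header plus the function code."""
    if len(frame) < 8:
        raise ValueError("frame too short for MBAP header + function code")
    tx_id, proto_id, length, unit_id = struct.unpack(">HHHB", frame[:7])
    function_code = frame[7]
    return {
        "transaction_id": tx_id,
        "protocol_id": proto_id,   # always 0 for Modbus
        "length": length,          # byte count following this field
        "unit_id": unit_id,
        "function_code": function_code,
        "is_write": function_code in WRITE_FUNCTION_CODES,
    }

# Example: a "write single register" (0x06) request to unit 1,
# register 0x0010, value 0x0003.
frame = bytes.fromhex("000100000006010600100003")
event = parse_mbap(frame)
print(event["function_code"], event["is_write"])  # 6 True
```

A real deployment would feed events like this into the SIEM’s normalization pipeline, where correlation rules could flag write operations from hosts outside an approved engineering-workstation list.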


The importance of a framework

Another key to getting to a protected intelligent grid is the recently released NIST Framework for Improving Critical Infrastructure Cybersecurity: applying it as a risk management strategy across both IT and OT networks, and then incorporating it into enterprise risk. The framework “provides organization and structure to today’s multiple approaches to cybersecurity by assembling standards, guidelines, and practices that are working effectively in industry today.” As energy companies begin the journey, the NIST Framework is an excellent starting point and encompasses all of the areas – hardware, software, communications, people, data and infrastructure – that need to be addressed to build a cohesive solution.
For a framework to have value, it must be put into action. The first function in the framework is Identify. That includes “Understanding the business context, the resources that support critical functions, and the related cybersecurity risks enables an organization to focus and prioritize its efforts, consistent with its risk management strategy and business needs.”
Bringing energy companies up to insurable “speed” isn’t going to happen overnight. It’s going to require a lot of work on all sides – including the security vendors that supply solutions and the insurers that need to amass the actuarial data.
But nothing will happen if someone doesn’t start the conversation. To do this requires assessing where we are now along with future goals and strategies. We’re ready to get started – are you?
Diana Kelley is a security strategist for IBM Security Systems. She is an internationally recognized security expert with 25 years of IT security experience.
Craig Heilmann is an Associate Partner within IBM’s Global Technology Services organization and practice leader for Industrial Control Systems Security services. His career summary spans twenty years of technical, professional, managerial and entrepreneurial experience specifically applied in areas of information security, controls and governance.
Peter Allor is a Security Strategist in IBM’s Critical Infrastructure Group, assisting in guiding the company’s overall security initiatives and participation in enterprise and government implementation strategies. He is responsible for security strategies, especially as they intersect with critical infrastructures and Central Government Operations / Strategy.



Via: smartgridnews

Amazon offers employees $5,000 to quit

Amazon is offering its warehouse employees up to $5,000 to quit their jobs, even as the company is in the process of adding workers and locations.

The “Pay to Quit” program, which was announced by CEO Jeff Bezos in his letter to shareholders late Thursday, is an effort to make sure that the Internet retailer’s employees really want to be there.

“The goal is to encourage folks to take a moment and think about what they really want,” he wrote in the letter. “In the long-run, an employee staying somewhere they don’t want to be isn’t healthy for the employee or the company.”

Bezos said the offer is made under the headline “Please Don’t Take This Offer.” Amazon will offer to pay its associates to quit once a year.

The company has experimented with this program in recent years, but rolled it out to its 40,000 warehouse employees in January, according to a company spokeswoman.

Newer employees are offered $2,000 to quit. The plan is to increase that offer by $1,000 each year until the amount hits $5,000.

Fewer than 10% of the employees who got the offer took it and left the company.

Bezos said the idea came from Zappos, the online footwear and clothing retailer which Amazon purchased in 2009. Zappos continues to operate as a separate unit from the main Amazon site.



Amazon is in the process of adding warehouses so that it can cut delivery times to customers. Today it has 96 such locations. Company filings show it had 117,300 full-time and part-time employees at the end of last year, up by nearly a third from its employment level a year earlier.

Amazon declines to say how much it pays its warehouse workers, although it says it pays about 30% more than a typical retail worker.

According to data gathered last year by a career website, Amazon pays its warehouse workers an average wage of about $12 an hour, which comes to just about $25,000 for a full year. Its full-time workers also get stock grants, which Amazon said last year had averaged about 9% of employees’ pay.



Via: cnn