Monthly Archives: November 2013

Facebook says yes, your posts can be used for ads

The company amended its privacy policies, reworking some language related to advertising involving minors.

Facebook on Friday moved ahead with some proposed changes to its privacy policies to clarify that users’ posts on the site can be used in advertisements, but that users have controls to limit their appearance.

In August Facebook proposed some revisions to its Statement of Rights and Responsibilities, in which it said that it can use the names, profile pictures and other data of its members to deliver ads. The proposed changes drew sharp criticism from some users, privacy groups and the U.S. Federal Trade Commission, which launched an examination into whether the amendments violated Facebook’s 2011 consent agreement with the agency.

The updated policies largely stand and will go into effect immediately, Facebook said on Friday.

“Your feedback was clear — we can do better — and it led to a number of clarifying edits,” said Erin Egan, Facebook’s chief privacy officer for policy. Nothing about the update, however, has changed Facebook’s advertising policies or practices, she said in a blog post.

“The goal of the update was to clarify language, not to change policies or practices,” she said.

But there was a change made in the policy’s language addressing the presumption that minors on the site had received permission from their parents to have their data used in connection with ads. In Facebook’s August proposal, one clause said that if a user was under the age of 18, “you represent that at least one of your parents or legal guardians has also agreed to the terms of this section (and the use of your name, profile picture, content and information) on your behalf.”

Based on feedback received, Facebook determined that the language was confusing and so it was removed, the company said on Friday.

“This language was about getting a conversation started; we were not seeking and would not have gained any additional rights as a result of this addition,” Egan said.

Facebook has a number of ways that it uses people’s data to deliver advertisements. One of its most important ad products is its “sponsored stories” program, which led to a class-action lawsuit and then a US$20 million settlement earlier this year.

For some sponsored content, for instance, Facebook pairs a member’s profile name and picture with an ad based on a location check-in or a “like,” if that “like” was given to a participating business. That advertisement is then eligible to appear to the person’s friends elsewhere on Facebook, such as in the news feed, on the timeline, or through the site’s new search engine, called Graph Search.

But members can still limit who sees these types of ads, Facebook said, based on who is allowed to see “likes.” So if a person only allows family members to see that he “liked” a particular business, then only the family members would see the ad paired with the “like,” the company said.

Users can also opt out of social advertising, Facebook said.

Advertising, the company seems to be saying, is par for the course on the Facebook site. “You connect to your friends and the things you care about, you see what your friends are doing and you like, comment, share and interact with all of this content,” Egan said, adding, “it’s social.”

The FTC’s review, meanwhile, was launched to determine whether Facebook’s proposed changes violated that earlier consent order, which governs how users’ data may be displayed to certain audiences.

In a statement, the agency said it could not comment on particular cases. However, “the FTC rigorously monitors compliance with all of its orders,” a spokesman said, “and that includes reviewing any material changes to the privacy policy of a company that is under a privacy order from the FTC.”

Via: itworld

Apple’s ground-breaking bet on its clean energy infrastructure, with photos

Apple’s two solar farms and one fuel cell farm near its data center in North Carolina are now all live and generating power. The projects are unprecedented in the industry and have helped usher in real change.

Check out this special report.

——-

Last week a utility in North Carolina announced something seemingly mundane on the surface, but it was a transcendent moment for those who have been following the clean energy sector. Duke Energy, which generates the bulk of its energy in the state from dirty and aging coal and nuclear plants, officially asked the state’s regulators if it could sell clean power (from new sources like solar and wind farms) to large energy customers willing to buy it. Yes, shockingly enough, thanks to restrictive regulations and an electricity industry that moves at a glacial pace, this previously wasn’t allowed.

For years Duke Energy largely ignored clean energy in North Carolina (with a few exceptions), mostly with the explanation that customers wouldn’t pay a premium for it. But it turns out that when those large energy customers are internet companies — with globally influential consumer brands, huge data centers that suck up lots of energy and large margins that give them leeway to experiment — they can be pretty persuasive.

Moments after the utility’s filing hit the public record on Friday, Google, which has been publicly working with Duke Energy since the spring of this year on the clean energy buying project and has a large energy-consuming data center in Lenoir, North Carolina, published a blog post celebrating the utility’s move. Google has spent over a billion dollars — through both equity investments and power buying contracts — on clean energy projects over the years, and has been a very public face of the movement to “green” the internet.


Apple’s solar farm next to its data center in Maiden, North Carolina

But absent from a lot of the public dialogue has been the one company that arguably has had a greater effect on bringing clean power to the state of North Carolina than any other: Apple. While the state’s utility has just now become more willing to supply clean energy to corporate customers, several years ago Apple took the stance that if clean power wasn’t going to be available from the local utility for its huge data center in Maiden, North Carolina, it would, quite simply, build its own.

In an unprecedented move — and one that hasn’t yet been repeated by other companies — Apple spent millions of dollars building two massive solar panel farms and a large fuel cell farm near its data center. These projects are now fully operational; similar utility-owned facilities have cost in the range of $150 million to $200 million to build. Apple’s are the largest privately owned clean energy facilities in the U.S., and more importantly, they represent an entirely new way for an internet company to source and think about power.

Apple has long been reticent about speaking to the media about its operations, green or otherwise. But I’ve pieced together a much more detailed picture of its clean energy operations after talking to dozens of people, many of them over the years. And last week I got a chance to see these fully operational facilities for myself.

I walked around these pioneering landscapes, took these exclusive photos, and pondered why Apple made this move and why it’s important. This is Apple’s story of clean power plans, told comprehensively for the first time.

Apple as a power pioneer

When Duke Energy’s news hit the wires on Friday, I was flying back from a day of driving around Apple’s solar farms and fuel cell farm. In the summer of 2012, I took the same drive around North Carolina, visiting not just Apple’s data center but also Facebook’s and Google’s, all of which are within an hour or two’s drive of each other. The internet companies built their data centers in this North Carolina corridor to serve East Coast traffic, and because (among many reasons) power is cheap and readily available. Historically, however, it has been pretty dirty.

Back in the summer of 2012 Apple had already surprised the world by starting construction on its first solar farm. During my road trip then, the plot of land across the street from Apple’s data center had been cleared and poles that would eventually hold solar panels were being installed. A sign in front of the entrance to the plot of land read “Dolphin Solar,” a name that Apple adopted to keep the project under wraps. Apple has long taken a secretive approach to building its clean energy projects, which hasn’t necessarily been all that beneficial for its public image around the issue.

But now all of Apple’s clean power farms are fully constructed, connected to the grid, generating clean power, and — as I was happy to see — very visible from public areas. The 100-acre, 20 megawatt (MW) solar farm across the street from Apple’s data center was finished in 2012, and even drivers on the highway next to the huge parcel of land can’t help but catch glimpses of the panels as they pass.

Adjacent to Apple’s data center is a 10 MW fuel cell farm, which uses fuel cells from Silicon Valley company Bloom Energy and which has been providing energy since earlier this year. Fuel cells are devices that use a chemical reaction to create electricity from a fuel like natural gas (or in Apple’s case biogas) and oxygen.

Apple’s second 20 MW solar panel farm, which is about 15 miles away from the data center near the town of Conover, North Carolina, is also up and running. All told, the three facilities are creating 50 MW of power, which is about 10 MW more than what Apple’s data center uses. Because of state laws, the energy is being pumped into the power grid, and Apple then uses the energy it needs from the grid. But this setup also means Apple doesn’t need large batteries, or other forms of energy storage, to keep the power going when the sun goes down and its solar panels stop producing electricity.
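
To make the arithmetic concrete, here is a minimal sketch that checks the capacity figures above and converts nameplate megawatts into rough annual energy. The capacity factors are my own assumptions for illustration, not figures from Apple or this article.

```python
# Sanity-checking the cited capacity figures. The capacity factors below
# are assumptions for illustration, not numbers from Apple or the article.
HOURS_PER_YEAR = 8760

solar_mw = 20 + 20           # the two 20 MW solar farms
fuel_cell_mw = 10            # the Bloom Energy fuel cell farm
total_mw = solar_mw + fuel_cell_mw
print(total_mw)              # 50 MW, "about 10 MW more" than the data center draws

solar_cf = 0.20              # assumed capacity factor; solar output varies by season
fuel_cell_cf = 0.90          # assumed; fuel cells can run around the clock

annual_gwh = (solar_mw * solar_cf + fuel_cell_mw * fuel_cell_cf) * HOURS_PER_YEAR / 1000
print(round(annual_gwh))     # about 149 GWh per year under these assumptions
```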

The solar farms


Apple’s solar farm next to its data center in Maiden, North Carolina

Apple’s solar panel farms were built and are operated by Bay Area company SunPower. SunPower manufactures high-efficiency solar panels and solar panel trackers, and also develops solar panel projects like Apple’s. The solar farm across from the data center has over 50,000 panels on 100 acres, and it took about a year to build the entire thing.

Each solar panel on Apple’s farms has a microcontroller on its back, and the panels are attached to long, large trackers (the steel poles in the picture). During the day, the computers automatically and gradually tilt the solar panels so that the face of the panels follow the sun throughout the day. The above picture was taken in the late morning, so by the end of the day, the panels will have completely rotated to face where I was standing. The trackers used are single-axis trackers, which basically means they are less complex and less expensive than more precise dual-axis trackers.
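
To make the tracking geometry concrete, here is a minimal sketch of the standard rotation rule for a horizontal north-south single-axis tracker; it is my own illustration, not SunPower’s control code. The idea: project the sun’s direction onto the east-up plane and rotate the panel face toward that projection.

```python
import math

def tracker_rotation_deg(sun_elevation_deg: float, sun_azimuth_deg: float) -> float:
    """Rotation angle for a horizontal single-axis tracker on a north-south axis.

    Azimuth is measured clockwise from north. 0 means the panel faces straight
    up; positive values tilt the panel toward the east, negative toward the west.
    """
    el = math.radians(sun_elevation_deg)
    az = math.radians(sun_azimuth_deg)
    # Sun direction in east/north/up components; the tracker rotates about the
    # north axis, so only the east and up components matter.
    east = math.cos(el) * math.sin(az)
    up = math.sin(el)
    return math.degrees(math.atan2(east, up))

# Late morning, sun in the southeast: panels still tilt east...
print(round(tracker_rotation_deg(40, 135), 1))   # 40.1
# ...and by late afternoon, sun in the southwest, they have swung west.
print(round(tracker_rotation_deg(40, 225), 1))   # -40.1
```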

North Carolina isn’t exactly the sunniest place. During my visit it was quite cloudy, as you can see in the pictures. During the winter months — from October to February — when the sun sits lower in the sky, Apple’s solar farms are no doubt generating less energy than they do during the peak summer months.


Apple’s solar power farm stretches for 100 acres

You can see in the above picture that the grass is neatly maintained. Apple manages the grass under the panels in a variety of ways, but one of those is a little more unusual. Apple works with a company that ropes in sheep that eat the grass on a portion of the solar farm; when the sheep finish grazing on one spot, they’re moved to the next.

It’s a more sustainable option than running gas-powered mowers across the farm, and also has the added benefit that sheep can get into smaller spaces and up close to the panels. Some companies use goats to eat grass on plots of land, but goats could chew on the farm’s wiring and solar panel parts.


Close up shot of the panels at Apple’s solar farm

Apple’s second 20-MW solar farm is a 15-minute or so drive away from the data center, past a Big Kmart and the Catawba Valley Rifle and Pistol Club. The second solar farm is less discussed, perhaps because it’s nestled behind a neighborhood and well camouflaged by landscaping from the road. Apple built both of its solar farms with large berms around them, and the rows of panels themselves are mostly nestled down below sight level.


Apple’s second solar farm about 15 miles from its data center in North Carolina

Since the second solar farm is a ways away from the data center, it’s also an example of why Apple’s business with the utility is important. The power goes into the power grid near the solar farm, and Apple can use the equivalent back at its data center.


Apple’s second solar farm in Conover, North Carolina

Apple is building another 20 MW solar panel farm next to its data center in Reno, Nevada, and has said it’s working closely with Nevada utility NV Energy on that one. Apple is one of the first companies to take advantage of a new green tariff approved by Nevada’s utility commission that will enable Apple to pay for the cost of building the solar panel farm. Once again in Reno, Apple is working with SunPower on the solar farm, but this time it is using a different kind of solar technology, a combination of panels and mirrors that concentrates the sunlight, to increase the amount of power generated.

The fuel cell farm

Compared to the massive acreage that Apple’s solar panels cover, the fuel cell farm looks quaint. You can walk around it in a couple minutes. In the grand scheme of power generation, fuel cells are weird in that nothing is burned in a fuel cell, in contrast to the combustion that occurs in natural gas and coal plants, or traditional cars that run on fuel. Instead of combustion, fuel cells use a chemical reaction, almost like a battery, to produce electricity.

Apple’s fuel cells were manufactured and are operated by Bloom Energy, a Sunnyvale, Calif.-based company that has raised over a billion dollars in venture capital funding from VCs like Kleiner Perkins and NEA. The boxes suck up a fuel, usually natural gas (but in Apple’s case biogas), combine it with oxygen, run it over plates lined with a catalyst, and through a complex reaction create electricity on site.


Close up of Apple’s fuel cells, made by Bloom Energy

There are, according to my back-of-the-envelope count, about 50 Bloom Energy boxes at Apple’s fuel cell farm. In total the farm produces 10 MW of power, with each fuel cell producing 200 kW. Apple originally planned to install 24 fuel cells (for 4.8 MW), but later decided to double the size of the facility.

When Bloom Energy first publicly launched its fuel cells in 2010 they cost between $700,000 and $800,000 each, though the price has probably come down since then. Bloom also sells fuel cell energy contracts, where the customer doesn’t pay for the installation but instead pays for the energy over a multi-year term.


Close-up rows of Apple’s fuel cells, made by Bloom Energy

An interesting thing to note: when I was observing the fuel cells operate, I noticed that they produce a lot of heat. You can actually see heat waves rising from the top of the fuel cells. The fuel cells also produce a noise from fans that hum to keep the fuel cells cool. I don’t think I quite captured the heat waves rising from the top in the photo, but took this little video clip so you could hear the noise of the fans.


When I was walking around the outside of the fuel cell facility I could also see a couple of people doing maintenance work on some of the fuel cells. I’m not sure what they were doing exactly, but fuel cells need some level of maintenance to keep them supplied with fuel, as well as to replace moving parts like fans. Every few years a key part called the stack also needs to be replaced, which can mean expensive maintenance costs for the fuel cell operator.


Apple’s fuel cell farm next to its data center in Maiden, North Carolina

Apple opted to have its fuel cells powered with biogas instead of natural gas. Biogas is methane captured from decomposing organic matter, such as waste at landfills, animal waste on farms, and water treatment facilities. It can be used in place of natural gas as a cleaner fuel to run buses, cars and trucks, or to run fuel cells. Biogas has the benefit of being cleaner, given that natural gas is a fossil fuel.

But the problem with biogas is that it’s notoriously difficult to economically source in large amounts and pipe to places like the Apple data center. Biogas is also more expensive than natural gas, which likely added even more onto the cost of Apple’s clean power facilities.


Apple’s fuel cell farm in Maiden has a total output of 10 MW

An unconventional move

Apple’s solar farms at one point were controversial both outside of Apple and likely inside, too. The solar farms use a large amount of land, which had to be razed and prepared for the panels. Back in 2011, when Apple was clearing the land, some local residents complained about burning foliage and smoke blowing toward their houses.

Buying clean power from a utility that offered it would no doubt have been a considerably cheaper option — but in 2010 and 2011 that wasn’t available. It still isn’t officially available now, pending the state regulator’s approval of Duke Energy’s pilot project.

But the cost of the clean power installations added to Apple’s data center project in North Carolina was probably substantial. A 20 MW solar panel farm could cost around $100 million to build back in 2010 and 2011, though the cost could be less now that the price of solar panels has dropped in recent years. Apple could also have negotiated a lower price, since it’s such a high-profile company. Apple’s entire data center project in North Carolina was billed as a billion-dollar data center when it was announced years ago.

It’s also a controversial move for an internet company to get into the energy generation business. But as more and more megascale data centers are built, and more web services move to the cloud, internet companies are spreading their investments and innovations from the server level inside the data center out to the energy level beyond it. For example, last week Amazon said that it has been building its own electric substations and even has firmware engineers rewrite the archaic code that normally runs on the switchgear controlling the flow of power in its electricity infrastructure.

But more efficient energy infrastructure is one thing, and clean power — not an obvious economic advantage — is another. Apple’s peers Facebook and Google have not (yet) followed in Apple’s footsteps when it comes to building their own clean power plants. Microsoft and eBay have been experimenting with clean power for their data centers, but on a much smaller scale.

I’ve asked Google and Facebook execs multiple times over the years if they plan to build their own clean energy generation, and many times they’ve said that while they haven’t ruled it out, they aren’t yet publicly planning anything. I know that Google has discussed this issue internally at length, and I’ve heard it has even gone so far as to hire the former director of the Department of Energy’s ARPA-E program, Arun Majumdar, in part to help look into this issue. But Google hasn’t announced any plans.

Of course, Google’s clean energy investments have definitely had an impact. The company has put more than a billion dollars into a Hoover Dam’s worth of clean energy projects, mostly wind and solar farms, over a period of several years. The bulk of its investments have been made by taking equity in a wind or solar farm while also contracting with a utility to buy clean power from the project for a nearby data center.

But Apple’s move was unusual: it was an aggressive push into an entirely new area for Apple, it was pretty secretive (although that’s standard operating procedure at Apple), and it cost more money than the standard approach. And Apple plans to continue this method, allowing it to use what it has learned for future projects.

In the world of clean energy there are a lot of ways that companies can pay to green their operations — many buy renewable energy credits that offset consumption of fossil fuel based energy. But building solar farms and a fuel cell farm next to a data center could be the surest way to add clean power in a way that can be validated and seen by the public. It seems like Apple execs thought if they were going to commit to the whole idea of clean energy, it was going to be all the way.

The effect of the clean energy projects on Apple’s brand also can’t be discounted. Apple has a powerful and potentially fragile consumer brand, and the data center in North Carolina was a major push for Apple to move more heavily into cloud services. A record for the largest privately owned solar farm in the U.S. could add significant cachet to a brand trying to stay on top.

The effect of Apple’s clean energy move

I hope that this series of photos I took shows the extent to which Apple has gone to create its own clean power sources in a state that at one time wasn’t offering it any other options. And while Apple doesn’t publicly comment on its moves, its actions have no doubt had a strong effect on Duke Energy, on its internet peers, and on North Carolina’s clean power options. Many conversations I’ve had with execs over the past few years have confirmed this.

Google has publicly been working with Duke Energy all year on the recently announced clean power buying plan, but Apple’s decision to move forward with its own facilities — with or without the utility — likely provided a key leverage point. What’s more influential and powerful: friendly encouragement, or a fear that you’re going to be bypassed and lose out on opportunities?

The reality is that data centers, and the internet companies that are building them, are becoming major power users. Some 2 percent of the total electricity in the U.S. as of 2010 was consumed by data centers, and this consumption will only grow as more web services are put in the cloud. Most data centers are largely run off of power grids supplied by coal and natural gas plants.

But a handful of these leading internet companies like Apple, Google, Facebook and others have been investing — both money and time — in ways to find and create more clean power for their data centers. The past five years have been a time of transition for these companies, as they move clean power up the list of priorities for their data centers. Another example of how far they’ve come: last week Facebook announced that it’s building a data center in Iowa that will be fully powered by wind energy.

The times, they are a-changing. Of course, not every data center operator is able to pay a premium to buy clean power, and very few are willing to invest even more in building their own clean power projects. But eventually clean power from sources like huge wind farms won’t carry a premium — it already doesn’t in some places, like Iowa. Google, which has invested in a variety of wind farms, has seen the costs of its contracts drop over the years.

Down the road more companies that use colocation data center services will want clean power options, too. Last week at an event organized by Greenpeace (and moderated by me), Box and Rackspace talked about some of the options they had for clean power. Greenpeace, and many in the industry, are hoping that Amazon and its dominant AWS will start to adopt more clean power down the road.

Change often happens incrementally. From the outside, that is how it happened with clean power and internet companies in North Carolina. But sometimes crucial change happens with a single brush stroke or a single outlier decision. That’s how I see Apple’s clean power facilities in North Carolina — right now, they stand alone.

Via: gigaom

Hackers throw 16 attacks at HealthCare.gov plus a DoS for good measure

Hackers have thrown about 16 attacks at the US’s HealthCare.gov website, a top US Department of Homeland Security (DHS) official says.

According to CNN, Acting Assistant Homeland Security Secretary Roberta Stempfley of the Office of Cybersecurity and Communications says that the attacks, now under investigation, all failed.

Ms. Stempfley testified at a hearing of the House Homeland Security Committee, saying that the attempts were made between 6 and 8 November, but that none were successful.

Authorities are also investigating a separate report of a denial of service (DoS) tool designed to bombard the healthcare site with more requests than it can handle without going belly-up.

The tool was spotted for download from a few sites and mentioned in social media, as Arbor Networks researcher Marc Eisenbarth first described in a blog posting on 7 November.

Eisenbarth wrote at the time that there had been no evidence that HealthCare.gov had been subjected to any significant denial of service attacks since it went live in October.

He also said that the detected tool’s request rate, non-distributed attack architecture and other limitations mean that the tool is “unlikely to succeed in affecting the availability of the healthcare.gov site.”

The tool is designed to put a strain on the site by repeatedly alternating requests between the https://www.healthcare.gov and https://www.healthcare.gov/contact-us addresses.

If the tool were to make enough requests over a short period of time, it could overload some of the applications that the site relies on to make timely responses.
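
The standard defense against that kind of request flood is server-side rate limiting. Here is a minimal token-bucket sketch of my own (it has nothing to do with HealthCare.gov’s actual setup) showing how a site can cap a single client’s request rate:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the limit: reject the request (e.g., HTTP 429)

# One bucket per client IP: a normal browser stays under the limit, while a
# tool hammering two URLs in a tight loop from one machine quickly runs dry.
buckets: dict[str, TokenBucket] = {}

def check(client_ip: str) -> bool:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=5, capacity=10))
    return bucket.allow()
```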

Eisenbarth said that the tool follows a recent trend wherein DoS attacks are used as tools of social or political protest, in retaliation against a policy, legal rulings or government actions.

Here’s the text from a screenshot of the tool:

Destroy Obama Care.

This program continually displays alternate page of the ObamaCare website. It has no virus, trojans, worms, or cookies.

The purpose is to overload the ObamaCare website, to deny service to users and perhaps overload and crash the system.

You can open as many copies of the program as you want. Each copy opens multiple links to the site.

ObamaCare is an affront to the Constitutional rights of the people. We HAVE the right to CIVIL disobedience!

At any rate, the tool doesn’t appear to have been activated.

Dan Holden, director of security research for Arbor Networks, told CNN that the site’s availability problems don’t seem to have been caused by the “Destroy Obama Care” tool:

We have not monitored any attacks. We have not seen any sizable, or anything to believe that these problems are related to DDOS. I don’t believe that the problems with the site’s availability is due to any kind of DDOS attack.

CNN also reports that a top Health and Human Services official, Chief Information Officer Frank Baitman, said in a separate hearing that his department had engaged an ethical hacker to perform penetration testing of the site – i.e., testing that simulates internal and external attacks that can then be used to evaluate computer and network defenses.

One would sure like to believe that the US government has enough security expertise on staff to limit the number of gaping holes a pen test would reveal.

And, indeed, Baitman said that the pen tester described between 7 and 10 items related to attempted security breaches, none of which Baitman said he would describe as serious, and most of which had been resolved.

Others have testified before the committee regarding “subpar” website design – assuredly a grievous accusation from a taxpayer’s perspective, given that the site cost millions of dollars, if not hundreds of millions.

If the US government wants to spare us from paying through the nose to pen-test that deluxe-but-creaky site, they might want to do us all a favor and check out these tips on how to manage cost-effective pen testing.

Just a thought!

Via: nakedsecurity

SecretInk Lets You Send Self-Destructing Messages Over Email Or SMS Right From Your Inbox


PowerInbox, the email platform company which merged with competitor ActivePath a year ago, is today launching technology called SecretInk that enables “self-destructing” messages that can be sent over email or SMS. The system works online or inside Gmail and other webmail services using the company’s PowerInbox add-on. This utility also enables other interactive email content from dozens of social networks, news sites, and more. But SecretInk is one of the first email applications the company has funded itself.

The move to launch an email add-on that enables additional privacy comes at a time when other encrypted email services like Silent Circle and Lavabit have preemptively shuttered their services in the wake of the NSA’s spying agenda, deciding it would be better to not exist at all than to risk their users’ privacy.

SecretInk, meanwhile, wants to help fill that void – for at least as long as it can. Says PowerInbox Chief Product Officer Matt Thazhmon, “we can go on the record stating that we have never been contacted by the NSA to compromise our server. If we ever were, we would shut the service down so the NSA would never read your message.” (So it’s only a matter of time, then, before SecretInk is no more?)

The email application enables messages to be sent fully encrypted over the network using HTTPS. “No plain text or message content ever goes through third-party servers,” Thazhmon explains. “We also remove the message from our servers as soon as it’s opened. No copy is retained or backed up in any way. Messages are never retrievable after they’ve been opened,” he adds.

When a sender creates a message using SecretInk on the web, no login is required, and the recipient is then alerted via either email or SMS that they have a message waiting. In other words, the notification itself doesn’t contain the message content.
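
PowerInbox hasn’t published its implementation, but the “removed as soon as it’s opened” behavior can be sketched in a few lines: store the message server-side under a random token, send only the token in the email or SMS, and destroy the message on first read. A minimal illustration, with an in-memory store standing in for a real encrypted database:

```python
import secrets
from typing import Optional

_messages: dict[str, str] = {}   # in-memory stand-in; a real service would encrypt at rest

def create_message(body: str) -> str:
    """Store a message and return the opaque token sent in the email or SMS."""
    token = secrets.token_urlsafe(32)
    _messages[token] = body
    return token

def read_message(token: str) -> Optional[str]:
    """Return the message and destroy it; a second read gets nothing."""
    return _messages.pop(token, None)

token = create_message("meet at 6")
print(read_message(token))   # meet at 6
print(read_message(token))   # None (the message has self-destructed)
```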

In addition to Gmail, SecretInk’s system works with Hotmail, Yahoo, and any other clients PowerInbox supports, including, soon, some mobile clients too. And it also works over SMS, which is something that makes it different from other encrypted email providers.

The SMS aspect may have some appeal, as users today turn to a variety of mobile messaging apps to communicate more privately. In those cases, it’s often less about fears of government spying and more a reflection of a sea change in what users want from their social services – that is, something smaller and more intimate than Facebook.

Apps for self-destructing texts are hugely popular right now. Snapchat thinks it’s worth more than $3 billion. Other competitors are also trying to get in on the action with private messaging apps like Privatext, Wickr, Frankly, Gryphn, and many more, which themselves followed TigerText and others’ first steps in this space many years ago.


With SecretInk, though, you have an app that straddles the line between a web and mobile offering, which is something of a twist. While it works over SMS, it works right from your inbox, too.

Thazhmon, of course, likens the new service to Snapchat, noting that it shares one of that service’s faults, too – a user could take a screenshot of the message to keep a copy for themselves. In other words, the system is more about keeping sensitive information out of the hands of the government, or anyone else who could be spying on your inbox (including, perhaps, hackers or snooping spouses) – but you’ll still need to have some trust in the person on the other end of your missives.

As a side note, the SecretInk app was built by former members of the TweetDeck team, says Thazhmon. This group is now working to add more features including animations, plus support for pictures and attachments, as well as dedicated mobile apps.

SecretInk is a free service, but there will be a premium tier for marketers who will be able to send time-sensitive messages needing immediate attention in the future. (Hey, is that an actual business model I smell?)


Via: techcrunch

Microsoft Releases ‘3D Builder,’ A 3D Printing App For Windows 8.1


Out from Microsoft is a 3D-printing application called 3D Builder that will help the amateur set dig into 3D printing, provided that they 1) have a Windows 8.1 machine, and 2) have a Windows 8.1-ready 3D printer.

So, it’s a small group. But that’s just fine. Every technology has an incubation phase apart from the mainstream, and 3D printing is only now enjoying public awareness, let alone mass adoption.

Windows 8.1 was designed to support 3D printing natively, a move that verges on gimmick but lands as cool rather than moonshotty, given the falling price of consumer-grade 3D printers, such as those MakerBot produces. MakerBot will support Windows 8.1 this year, if you didn’t know.

The application is designed to help you design. It has a catalog of built-in pieces, and you can add your own to zazz things up a touch. From the looks of it, if you recall the creature stage of Spore, it should feel somewhat similar. I didn’t get to road test the app as I don’t have a 3D printer (AOL? Hey?), but reviews will tell the tale over the next few days.

Microsoft was late to the Internet and missed the smartphone train, but it appears hell-bent on being early to 3D printing. If the technology advances far enough and the price falls quickly enough, this could be a winning move for Microsoft in the next five years. For now, you can probably only make little lumpen dinosaurs for your kid, if you have all the hardware.

From humble beginnings.

Via: techcrunch

Amazon WorkSpaces delivers Windows desktops on demand

WorkSpaces provides cloud-hosted Windows 7 desktops in four virtual machine configurations, but current VDI solutions may still have an edge in terms of flexibility.

Amazon Web Services is determined to make buying a desktop machine a thing of the past.

Yesterday at AWS re:Invent, Amazon announced a virtual desktop service called Amazon WorkSpaces that promises to provide Windows 7 desktops on demand to almost any client device.

Amazon also claims it can deliver those desktops at a better per-user price than pretty much anyone else on the market, with the lowest-tier desktops available for $35 per user per month. For a 1,000-user setup, Amazon claimed around 59 percent cost savings over delivering the same desktops on-premises.
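
A quick back-of-the-envelope on that claim, using only the figures in the article; the on-premises number is implied by the claimed savings rather than quoted by Amazon:

```python
users = 1000
workspaces_monthly = 35        # lowest-tier bundle, per user per month
claimed_savings = 0.59         # Amazon's claimed savings for a 1,000-user setup

aws_annual = users * workspaces_monthly * 12
# If AWS is 59 percent cheaper, the implied on-premises cost is aws / (1 - 0.59).
implied_on_prem_annual = aws_annual / (1 - claimed_savings)

print(aws_annual)                     # 420000 dollars per year
print(round(implied_on_prem_annual))  # 1024390 dollars per year, implied
```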

The desktops are provided to clients by way of a client app, available for most major platforms: Mac OS X, iOS, Android, and Windows (ironically enough). Four different virtual machine configurations are available — from a single virtual CPU with 3.75GB of memory and 50GB of persistent storage to a dual-vCPU, 7.5GB, 100GB storage model. Organizations with an existing Active Directory repository or other in-network resources can connect those up to the desktops by way of a VPN.

Software is included with the systems as well, although the standard bundles are nothing that can’t be obtained for free (Adobe Reader, Adobe Flash, Firefox, and 7-Zip). Higher-end bundles also include Microsoft Office Professional 2013 and Trend Micro’s antivirus, and Amazon allows customization of bundles. Existing Windows software licenses can be moved into WorkSpaces, albeit for a fee.

WorkSpaces follows hot on the heels of another desktop-oriented announcement, a graphics-as-a-service offering from Amazon Web Services via its new G2 instance type. In many ways, WorkSpaces and G2 instances are complementary: the former for day-to-day desktop jobs, the latter for high-end performance work.

Amazon’s cloud offerings have typically competed with similar services from Google, Rackspace, IBM, and so on. WorkSpaces, on the other hand, puts the company in competition with all the VDI (virtual desktop infrastructure) providers out there. Citrix and VMware are two of the biggest names, but Microsoft also has some VDI presence.

VMware isn’t all that impressed with what it sees, though. Erik Frieberg, VMware’s vice president of product marketing for end-user computing, thinks VMware’s Desktone product is still a better deal. “In terms of features,” Frieberg said in an email, “Amazon WorkSpaces offers limited capabilities compared to VMware. The limited nature of the four bundles and the dependency on the Windows 2008 Server OS will make Amazon WorkSpaces incompatible with many enterprise applications and desktop management tools.”

Frieberg also criticized “peripheral handling, touchscreen support, unified communications and a range of other areas” where Amazon WorkSpaces, in his view, falls short of VMware’s Horizon View.

Still, it would be a mistake to write off Amazon in this space from the get-go. AWS rose from being a curiosity to being one of the most foundational of modern computing technologies. There’s little that says Amazon can’t, in time, refine WorkSpaces into everything VMware would really fear — provided it doesn’t get hobbled by the same inflexibility and erratic service that has plagued AWS before.

Via: infoworld

Amazon bashes private clouds, launches virtual desktops

The head of Amazon Web Services bashes IBM and launches a VDI service at this year’s AWS Reinvent conference.

Private clouds offer “none of the benefits” of a robust public cloud, and are only a stopgap solution perpetuated by “old-guard” IT companies such as IBM, said Andy Jassy, Amazon senior vice president who heads up Amazon Web Services.

“If you’re not planning on using the public cloud in some significant fashion, you will be at a significant competitive disadvantage,” Jassy told a packed auditorium of nearly 9,000 IT pros Wednesday in Las Vegas, for the opening keynote of the AWS Reinvent conference.

Jassy split his time between extolling the benefits of using large public clouds such as Amazon’s and introducing new services.

While he spent much of his presentation discussing the benefits of cloud computing, arguing that it offers increased agility, better security and lower costs, he also took time to criticize private clouds, or cloud infrastructures that organizations have set up in-house for their own use.

To set up a private cloud, an organization still needs to invest a considerable amount of money in hardware and software, so it requires up-front capital costs that a public cloud doesn’t, he said. Private clouds don’t offer the agility of public clouds, in that the enterprise still can’t change to a new platform or set of software as quickly. Nor do they offer the economic advantages of buying hardware in large amounts.

Some organizations, such as governments and health-care providers that have strict regulatory requirements, still need to run operations in private data centers, he said, but over time, these specialized-use cases will diminish as more of the features required will be available on public clouds.

Amazon offers a number of services that help organizations run hybrid clouds that are partially run on Amazon and partially in-house, including VPNs (virtual private networks), and identity and access management. The company also works with traditional enterprise IT management tool providers, such as Eucalyptus, CA Technologies and BMC Software, to provide a single view of both on-premises and cloud operations.

But AWS pitches these services and partnerships as ways to help customers move almost entirely to the AWS public cloud.

“We have a pretty different view of how hybrid is evolving than the old-guard IT companies,” Jassy said. The approach popular with companies such as Hewlett-Packard, Microsoft and IBM, for instance, assumes an enterprise will want to run most of its operations in-house and use public clouds to augment operations when traffic is heavy.

“We believe in the fullness of time, very few enterprises will run their own data centers,” Jassy said, noting the difference in the AWS approach. “That informs our approach in what we build. We will meet enterprises where they are now, but we will make it simple to transition to where the future workloads will be, in the cloud.”

“I think a lot of old-guard technology companies aren’t so thrilled about how fast things are moving to cloud,” Jassy said. He showed a slide of one of a number of advertisements that IBM has placed on buses this week in Las Vegas that claim that the IBM Cloud service hosts “30 percent more top websites” than any other cloud provider.

“It’s creative, I’ll say that,” Jassy said. “I don’t think anybody who knows anything about cloud computing would argue [IBM] has a larger cloud business than AWS.”

In June, IBM purchased SoftLayer to boost its public cloud offerings.

Jassy also took time to announce some new services.

Perhaps the most notable launch for the company is a new VDI (virtual desktop infrastructure) service, called Amazon WorkSpaces.

WorkSpaces provides a virtual desktop for an organization’s employees that can be accessed from Apple Macs, Microsoft Windows computers or Android devices. It provides a “persistent state,” Jassy said, meaning that the desktop’s contents will remain the same no matter what device the desktop is accessed from.

Despite the advantages it offers administrators in managing their users’ computers, VDI thus far has not made major inroads into the enterprise IT market, though Amazon is hoping WorkSpaces will prove cost-effective and easy enough to manage that it will be appealing.

WorkSpaces will cost about half as much as the current average VDI implementation, he said. The service, which is now offered in a limited preview, can be paid for on a month-by-month basis. A WorkSpaces desktop with one virtual CPU and 50GB of storage space will cost US$35 a month, and the “performance” desktop with two virtual CPUs and 100GB of storage will cost $60 per month.

With WorkSpaces, an organization can bring its own licenses for Microsoft Office and security software, or Amazon will offer these applications for an additional $15 a month.

AWS also launched a security service that can provide customers with detailed log reports of who is accessing their APIs (application programming interfaces) and what services they consume, as well as a streaming service for apps.

Via: networkworld

International Space Station Infected With USB Stick Malware

Renowned security expert Eugene Kaspersky reveals that the International Space Station was infected by a USB stick carried into space by a Russian astronaut.

Russian security expert Eugene Kaspersky has also told journalists that the infamous Stuxnet had infected an unnamed Russian nuclear plant and that in terms of cyber-espionage “all the data is stolen globally… at least twice.”

Kaspersky revealed that Russian astronauts carried a removable device into space which infected systems on the space station. He did not elaborate on the impact of the infection on operations of the International Space Station (ISS).

Kaspersky said he had been told that from time to time there were “virus epidemics” on the station.

Kaspersky didn’t give any details about when the infection he was told about took place, but it appears to have been prior to May of this year, when the United Space Alliance, the group which oversees the operation of the ISS, moved all systems entirely to Linux to make them more “stable and reliable.”

Windows XP

Prior to this move the “dozens of laptops” used on board the space station had been using Windows XP, which is inherently more vulnerable to infection from malware than Linux.

According to Kaspersky the infections occurred on laptops used by scientists who used Windows as their main platform and carried USB sticks into space when visiting the ISS.

The ISS’s control systems (known generally as SCADA systems) were already running various flavours of Linux prior to this switch for laptops last May.

According to a report on ExtremeTech, as far back as 2008 a Windows XP laptop infected with the W32.Gammima.AG worm was brought onto the ISS by a Russian astronaut; the worm quickly spread to other laptops on the station – all of which were running Windows XP.

Stuxnet

The Russian said this example shows that not being connected to the internet does not prevent you from being infected. In another example, Kaspersky revealed that an unnamed Russian nuclear facility, which is also cut off from the public internet, was infected with the infamous Stuxnet malware.


Founder of Kaspersky security company, Eugene Kaspersky, reveals the International Space Station was infected with malware carried on USB sticks. (Screengrab)

Quoting an employee of the plant, Kaspersky said:

“[The staffer said] their nuclear plant network which was disconnected from the internet … was badly infected by Stuxnet. So unfortunately these people who were responsible for offensive technologies, they recognise cyber weapons as an opportunity.”

Infamous

Stuxnet is one of the most infamous pieces of malware ever created, though it was never designed to come to the attention of the public.

Never officially confirmed by either government, the widely-held belief is that Stuxnet was created jointly by the US and Israeli governments to target and disable the Natanz nuclear enrichment facility in Iran, in a bid to disrupt the country’s development of nuclear weapons.

The malware was introduced to the Natanz facility, which is also disconnected from the internet, through a USB stick, and went on to force centrifuges to spin out of control and cause physical damage to the plant.

Stuxnet only became known to the public when an employee of the Natanz facility took an infected work laptop home and connected it to the internet, with the malware quickly spreading around the globe and infecting millions of PCs.

Expensive

Kaspersky told the Press Club that creating malware like Stuxnet, Gauss, Flame and Red October is a highly complex process which would cost up to $10 million to develop.

Speaking about cyber-crime, Kaspersky said that half of all criminal malware was written in Chinese, with a third written in Spanish or Portuguese. Kaspersky added that Russian-based malware was the next most prevalent threat, but that it was also the most sophisticated.

He also added that Chinese malware authors were not very interested in their own security, with some leaving social media accounts and personal photos on the servers hosting the malware.


Via: ibtimes

Manufacturers building security flaws into Android smartphones

North Carolina State University study finds that companies like Samsung and HTC create vulnerabilities while customizing phones.

Android smartphone manufacturers that customize their devices to make them stand out in the market are compromising security by building vulnerabilities into the products, a university study shows.

On average, 60 percent of the vulnerabilities in the 10 smartphone models evaluated by researchers at North Carolina State University came from the manufacturers. The study covered an old and a new model from each of five companies, including Samsung, HTC, LG, Sony and Google.

Device manufacturers preload on average 80 percent of the apps that come with a device. Models running Android version 2.x had an average of 22 vulnerabilities per device, while models powered by version 4.x had an average of 18 vulnerabilities, according to the study.

However, that did not mean newer models were more secure. Those with more serious vulnerabilities presented a higher risk to buyers, according to the study. Of the smartphones evaluated, Google’s Nexus 4 had the fewest flaws.

Among the problems found were apps that could record audio and make phone calls without the user’s permission. Some apps could wipe out the user’s data. In general, a vulnerability was defined as being a flaw that an attacker could use to steal data or grab permissions to use phone services.

Depending on the model, from 65 percent to 85 percent of vulnerabilities were due to vendor customizations. The only exceptions were the Sony models, which had substantially fewer flaws.

Fully 85 percent of the apps were over-privileged, meaning that they required users to give them permissions they did not use. While this may benefit developers, who retain the option of using the permissions in future updates, it compromises user control.
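
The “requested” half of over-privilege is easy to inspect yourself. Here is a minimal sketch using the Android SDK’s aapt tool to list the permissions an APK declares; this is my own illustration, not the NCSU methodology, and deciding whether a permission is actually *used* requires the kind of static analysis the study describes. The “preloaded.apk” path is a placeholder.

```python
import subprocess

def declared_permissions(apk_path: str) -> list[str]:
    """List the permissions an APK requests, via the Android SDK's aapt tool."""
    out = subprocess.run(["aapt", "dump", "permissions", apk_path],
                         capture_output=True, text=True, check=True).stdout
    # aapt prints lines like "uses-permission: android.permission.CAMERA"
    # (the exact formatting varies slightly across SDK versions).
    return sorted(line.partition(":")[2].strip()
                  for line in out.splitlines()
                  if line.startswith("uses-permission"))

# "preloaded.apk" is a placeholder for a vendor-preloaded app pulled off a device.
for perm in declared_permissions("preloaded.apk"):
    print(perm)
```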

Google produces a baseline version of Android, which the company makes freely available through the Android Open Source Project (AOSP). Device manufacturers and wireless carriers are free to customize the mobile operating system however they want.

The customizations have become increasingly sophisticated since the release of Android in 2007. The first phone based on the OS was the HTC Dream, released a year later.

“Flagship devices today often offer a substantially different look and feel (than the baseline version), along with a plethora of pre-loaded third-party apps,” the study said.

Because so many players have their hands in the Android pie — Google, device manufacturers, carriers and third-party app developers — it’s important to identify who is responsible for any security issues, so they can be fixed.

“It is worrisome to notice that vendor customizations were, on the whole, responsible for the bulk of the security problems suffered by each device,” the study said.

The researchers found that the number of vulnerabilities varied little between old and new models, with the exception being HTC. Security was markedly better in the new HTC smartphone.

In February, the Federal Trade Commission dropped the hammer on HTC for failing to protect consumers’ personal data and privacy in software it designed and customized for millions of mobile devices. In settling the FTC complaint, HTC agreed to put in place a process for patching vulnerabilities, to make security part of the device development process and to take responsibility for securing customers’ personal data.

At the time, industry observers saw the settlement as a warning to other manufacturers that failed to protect the privacy and data of customers. The FTC did not respond to a request for comment on the latest study.

Whether the commission will address the current problem is not known. However, manufacturers are unlikely to change as long as there is no financial incentive to do the development work needed to fit Android updates into their customized software, Christopher Soghoian, principal technologist for the American Civil Liberties Union’s Project on Speech, Privacy and Technology, said.

Once a smartphone is sold, manufacturers no longer get any revenue from the device. On the other hand, carriers receive monthly income from smartphones, but have been unwilling to share any of it with manufacturers in return for security updates.

“For the market to deliver a solution, either consumers have to pay the handset manufacturers for the updates or the carriers are going to have to pay them for the updates,” Soghoian said.

The NCSU study will be presented Wednesday at the ACM Conference on Computer and Communications Security in Berlin.

Via: csoonline

F-Secure Safe Search: Searching the Internet with Peace of Mind

Conventional wisdom tells us to choose a search engine based on its ability to deliver the most relevant search results, but is that really enough?

F-Secure definitely does not think so.

I’m very happy to introduce you to F-Secure Safe Search – a search engine that delivers you the most relevant and SAFE results! Built into the search results are safety badges that indicate how safe or potentially malicious a web-link might be. In other words, we tell you which web-links you should and should NOT be clicking on!


Wondering what makes F-Secure Safe Search so good? First of all, the web-link ratings are powered by F-Secure’s own best-in-class reputation engine. If you have ever tried F-Secure’s Browsing Protection you will be familiar with how this works.

On top of that, to develop this product they partnered with one of the industry’s best and most popular search giants: Google. I’m sure we don’t need to explain how accurate Google’s search results are, or how popular Google has become: so popular that people use the word ‘Google’ as a verb in their daily conversations.

Ready for the best part of this product? F-Secure Safe Search is available to you today completely FREE. Surf on over to http://search.f-secure.com to try it out for yourself! If you are using Internet Explorer, Chrome or Firefox, we even provide a one-click feature that lets you set F-Secure Safe Search as your browser’s search provider! Existing F-Secure Internet Security users will also receive a FREE update that includes Safe Search as a new feature.

We hope that you will enjoy using F-Secure Safe Search! If you do, tell your friends about it and keep their Internet searches safe too.

Via: safeandsavvy