Monthly Archives: March 2015

Windows 10’s speedy new Project Spartan browser will ditch Internet Explorer name

Microsoft hasn’t quite made it official, but a Microsoft executive recently came the closest yet to confirming that Project Spartan—the primary browser in Windows 10—will not be named Internet Explorer.

During Microsoft’s Convergence conference in Atlanta on Monday, Microsoft’s marketing chief Chris Capossela pretty much admitted that the name Internet Explorer was on its way out, as first reported by The Verge.

“We’re now researching what the new brand, or the new name, for our browser should be in Windows 10,” Capossela said. “We’ll continue to have Internet Explorer, but we’ll also have a new browser called Project Spartan, which is codenamed Project Spartan. We have to name the thing.”

Alongside Project Spartan, Microsoft will also release Internet Explorer 11 as a legacy option primarily for enterprises with applications and internal sites designed for Microsoft’s longtime browser.

Why this matters: Internet Explorer is still the most popular browser on Windows, but it is far from the most capable. Browsers like Chrome and Firefox offer far more consumer-friendly features and a broad catalog of extensions and add-ons that IE has never been able to match. Google’s Chrome is also morphing from just a browser into a complete desktop-like working environment with the web at its center. To keep the built-in Windows browser relevant, a fresh start is sorely needed, and dumping the Internet Explorer brand as part of the reboot would go a long way toward reinforcing the new browser’s departure from IE.

Cortana inside

Project Spartan has yet to roll out to Windows 10 users, but rumors and leaks about it have been popping up in recent weeks. The new browser will come packed with Cortana, Microsoft’s personal digital assistant that is also built into the Windows desktop.

Based on a leak earlier in March, Spartan will have a streamlined interface that is very basic and almost Chrome-like. The Spartan browser also comes with a brand new rendering engine called Edge. The new browser engine promises to be much speedier than Trident, which powers IE. Edge is already built into the Windows 10 preview and can be enabled for Internet Explorer 11.

We should get our first look at Project Spartan later in March after the browser is added to an upcoming preview build of Windows 10. In the meantime, check out the 10 must-try new features already active in Windows 10.

Via: networkworld

Yahoo releases e2e encryption source code and launches ‘on-demand’ passwords

Yahoo took advantage of South by Southwest’s (SXSW) opening weekend this week to make two major security announcements. First, the company introduced its new “on-demand” passwords, and it followed up with news that the end-to-end encryption source code for Yahoo Mail was available on GitHub.

The company’s on-demand passwords will, as Chris Stoner, Yahoo’s director of product management, explained in a blog post, make the login process “less anxiety-inducing.” Essentially, users won’t need to use a predetermined password to log into an account. Instead, each time they want to log in, they will receive a text message with a verification code.

The system is distinct from two-factor authentication, however. Two-factor authentication requires two forms of verification, often including a text-message code; on-demand passwords require only a single factor.
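A minimal sketch of how such an SMS one-time-code flow can work (hypothetical code, not Yahoo’s implementation): a short random code is generated per login attempt, sent out of band, and accepted exactly once before it expires.

```python
import secrets
import time

class OnDemandPassword:
    """Toy SMS one-time-code flow (hypothetical, not Yahoo's code)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._pending = {}  # account -> (code, expiry timestamp)

    def request_code(self, account):
        # A short numeric code, as typically delivered by SMS.
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[account] = (code, time.time() + self.ttl)
        return code  # in a real system this is texted, never returned

    def verify(self, account, code):
        # Codes are single-use: pop removes the entry either way.
        entry = self._pending.pop(account, None)
        if entry is None:
            return False
        expected, expiry = entry
        # Expiry check plus constant-time comparison.
        return time.time() < expiry and secrets.compare_digest(expected, code)
```

Note the single-use semantics: as Tim Erlin’s comment below suggests, anything that weakens them (replayable codes, long expiry windows) widens the SMS attack surface.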

Yahoo is not the first to implement the technology, but a Yahoo spokesperson said in an email to SCMagazine.com that it’s “still a relatively new trend in the industry, so we’re excited to be leading on this for our users.”

The prospect of phasing out passwords might be exciting to many in the security community, but multiple professionals have noted its possible security lapses. In particular, the password program didn’t consider, or at least take seriously enough, mobile malware and the chances of a device being compromised.

“While Yahoo is lifting the burden of remembering a password, they are maintaining a single target for compromise: your SMS messages,” said Tim Erlin, director of product management, security and IT risk strategist for Tripwire, in prepared commentary to SCMagazine.com. “Malware on your phone could be used to grab those SMS messages and then have full access to your account.”

Furthermore, two-factor authentication and on-demand passwords are mutually exclusive, so users will have to choose between the two.

At the same time, John Bradley, senior technical architect at Ping Identity, noted in prepared commentary to SCMagazine.com that this move optimized account recovery, and he said receiving a new password through SMS could be more secure than through email.

The feature is currently available only to U.S.-based users.

Also during SXSW, Yahoo’s Chief Information Security Officer Alex Stamos elaborated on plans to integrate end-to-end encryption into Yahoo Mail. The company released the encryption extension source code on GitHub for security researchers to sift through for possible bugs.

The encryption is slated to deploy for all users by the end of the year.

Via: scmagazine

Premera Blue Cross Announces Data Breach, Estimates 11 Million People Affected

The health insurer Premera Blue Cross announced on Tuesday it was the victim of a “sophisticated” cyberattack that could potentially impact at least 11 million people.

The Pacific Northwest-based company said the network intrusion dated back to May of last year, but it did not discover the breach until late January.

Premera said the compromise could have exposed sensitive customer data, including claims, clinical information, bank account numbers and Social Security numbers, as well as birth dates, mailing and email addresses, and phone numbers.

The data breach affects users of Premera Blue Cross, Premera Blue Cross Blue Shield of Alaska, and Vivacity and Connexion Insurance Solutions.

The company is currently working with the FBI and a cybersecurity firm to further investigate the cause of the attack.

Earlier this year, a similar incident hit Anthem, the second largest insurer in the United States, which affected nearly 80 million customers. Additionally, about 19 million non-customers were impacted by the breach.

“When the Anthem breach hit, many in the security industry were well aware [the company] was not alone,” said Tripwire Senior Security Analyst Ken Westin.

“Organized criminal syndicates targeting this type of data don’t target one organization—they target an entire industry.”

Westin explained it is not uncommon for many of the vulnerabilities or security lapses found in one organization to appear in multiple organizations within the same industry.

“The fact that the breach went undiscovered for seven months indicated that the institution likely did not have proper detective controls in place to identify an attacker was inside the network,” said Westin.

Consumers affected by the breach will be provided two years of free credit monitoring, as well as identity protection services.

Premera currently serves millions of customers across Washington, Oregon and Alaska, among other states.

Via: tripwire

The cloud is full of zombies, but that’s OK

Zombie VMs make up as much as half of the public cloud, but enterprise IT needs to embrace the public cloud anyway: that’s where the best new stuff is being built.

Microsoft wants you to believe that Amazon Web Services is “a bridge to nowhere,” but nothing could be further from the truth. In fact, as Gartner says, “New stuff [workloads] tends to go to the public cloud … and new stuff is simply growing faster” than the traditional workloads that currently feed the data center.

Most of that “new stuff” is heading for AWS, although Microsoft Azure is an increasingly credible play.

In fact, both reflect the reality that the future belongs to the public cloud. This is in part a matter of price, as Actuate’s Bernard Golden posits, but it’s mostly a matter of flexibility and convenience. While the convenience may lead to plenty of waste in the form of unused VMs, it’s a necessary evil on the road to building the future.

Public cloud: Big and getting bigger

Analysts now peg the value of Amazon Web Services at a whopping $50 billion. That’s an amazing figure, and it’s buttressed by an estimate that AWS will generate $20 billion in annual revenue by 2020, up from roughly $5 billion in 2014.

We’ve had doubters hate on such prognostications before, and they’ve been wrong — every single time.

Clearly, there’s an industrywide, tectonic shift toward the scale and convenience of public cloud computing, as Gartner analyst Thomas Bittman’s research shows.


According to Gartner, the number of VMs running in the public cloud tripled from 2011 to 2014.

What’s clear from these charts is that, overall, the number of active VMs has tripled, as has the number of private cloud VMs — not bad.

But much more impressive is the groundswell for VMs running in the public cloud. As Bittman highlights, “The number of active VMs in the public cloud has increased by a factor of twenty. Public cloud IaaS now accounts for about 20 percent of all VMs – and there are now roughly six times more active VMs in the public cloud than in on-premises private clouds.”

In other words, the private cloud is growing at a reasonable clip, but the public cloud is growing at a torrid pace.

A false number?

Of course, a significant chunk of that public cloud growth is vapor. As Bittman notes, “Lifecycle management and governance for VMs in the public cloud are not nearly as rigorous as management and governance in on-premises private clouds,” leading to 30 to 50 percent of public cloud VMs being “zombies,” or VMs that are paid for but not used.

That number may be conservative. In my own conversations with a variety of enterprises large and small, I’ve seen VM waste as high as 80 percent.

Not that this will be much of a surprise to data center pros. According to McKinsey estimates, data center utilization stands at a sorry 6 percent. While Gartner gives hope — estimating utilization at 12 percent — this still speaks of terrible inefficiencies in hardware use.

In other words, there’s always a fair amount of waste in IT, whether it’s running in public or private clouds or in traditional data centers. Yes, there are tools like Cloudyn to help track actual cloud usage. Even AWS, which theoretically stands to lose revenue if customers turn off 30 to 50 percent of unused capacity, has its CloudWatch monitoring service to help its customers avoid waste. But that isn’t really the point.
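As a rough illustration of how that kind of usage tracking works, the sketch below flags instances whose average CPU utilization stays near zero, a simple “zombie” heuristic. The instance names and threshold are hypothetical; in practice the samples would come from a monitoring service such as CloudWatch or a third-party tool like Cloudyn.

```python
def find_zombie_vms(cpu_samples, threshold=2.0):
    """Flag instances whose average CPU utilization (percent) stays
    below `threshold`. `cpu_samples` maps an instance id to a list of
    utilization readings (hypothetical data for illustration)."""
    zombies = []
    for instance_id, samples in cpu_samples.items():
        # Skip instances with no data rather than guessing.
        if samples and sum(samples) / len(samples) < threshold:
            zombies.append(instance_id)
    return sorted(zombies)
```

A real policy would combine CPU with network and disk metrics over weeks, not a handful of readings, before declaring anything a zombie.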

Inventing the future

The reality is that the public cloud has exploded in popularity because it’s helping enterprises transform their businesses. The very convenience that makes it so easy for developers to spin up new server instances also makes it likely they’ll forget those instances are running when the next project comes along.

This is a strength, not a weakness, of the public cloud. As Matt Wood, AWS head of data science, told me in an interview recently:

Those that go out and buy expensive infrastructure find that the problem scope and domain shift really quickly. By the time they get around to answering the original question, the business has moved on. You need an environment that is flexible and allows you to quickly respond to changing big data requirements. Your resource mix is continually evolving; if you buy infrastructure it’s almost immediately irrelevant to your business because it’s frozen in time. It’s solving a problem you may not have or care about any more.

Sure, it would be more cost-effective to shut down unused VMs. But in the rush to invent the future, tracking them down can be more trouble than it’s worth. Back to Bittman, who characterizes public vs. private cloud workloads as follows:

Public cloud VMs are much more likely to be used for horizontally scalable, cloud-friendly, short-term instances, while private cloud tends to have much more vertically scalable, traditional, long-term instances. There are certainly examples of new cloud-friendly instances in private clouds, and examples of traditional workloads migrated to public cloud IaaS, but those aren’t the norm. New stuff tends to go to the public cloud, while doing old stuff in new ways tends to go to private clouds.

Pay attention to that last line, because it’s the clearest indication why every company needs to invest heavily in the public cloud, and why private cloud feels to me like a short-term stopgap. Yes, there may be workloads that today feel inappropriate for the public cloud. But they won’t last.

Via: infoworld

‘TeslaCrypt’ holds video game files hostage in ransomware first

Online gamers are no longer spared the wrath of crypto-ransomware, with a recently discovered attack encrypting game files, as well as iTunes files.

Bromium Labs, and in a separate post, Bleeping Computer, detailed a specific campaign of the ransomware being spread through a local U.S. newspaper’s website. The WordPress-based site redirects visitors to the Angler Exploit Kit through a Flash clip.

The redirect only operates in Internet Explorer (IE) and Opera, and before dropping any malware, Angler checks for the presence of virtual machines and anti-virus products. If none are present, the kit delivers a Flash exploit (CVE-2015-0311) and an IE exploit (CVE-2013-2551).

Then, a new ransomware, identified as TeslaCrypt, drops and claims to be a new version of Cryptolocker, although Bromium Labs’ Senior Security Researcher Vadim Kotov wrote that it most likely is just a re-brand.

The variant targets 185 file extensions, most of which pertain to video games. iTunes files are also affected, but not as much as images and documents.

One reason for this could be because gamers are dedicated to their games, Kotov said in an interview with SCMagazine.com, and might not be able to restore their data, including level maps or online game session replays.

“It’s also a purely psychological effect,” he said. “[When seeing that files are encrypted] a person might actually panic and go and pay [the attackers].”

Affected games include Call of Duty, Minecraft, and Assassin’s Creed, among others.

The ransomware also appears to generate a bitcoin address for each infected device, making finding the attackers difficult.

Also because of this, Kotov couldn’t provide a number of those infected or where they are primarily based. He did note, however, that because this particular infected website is based in the U.S., those affected likely live in the U.S. as well.

Kotov recommended keeping a backup external hard drive updated and disconnected from any computer to avoid having to pay a ransom to gain back files.

“The problem is that once you’re infected with this, there’s no way to reverse it unless you pay, and we wouldn’t recommend doing that,” he said.

The compromised website has yet to be cleared of the ransomware.

Via: scmagazine

Google Steps Up Safe Browsing Protections

Malware and phishing Web sites can often lurk among the legitimate sites you’ll find when conducting an online search, so Google has been tweaking its Safe Browsing technology to make it easier to identify and avoid such unwanted sites. The search giant recently began providing new warnings alerting surfers to the possibility that sites they’re about to visit could contain unwanted software that might hijack their browsers.

The warnings appear for people searching online using the Google Chrome, Apple Safari or Mozilla Firefox Web browsers. Microsoft’s Internet Explorer browser uses its own filter — SmartScreen — to warn surfers about phishing and malware sites.

Google recently also began providing automatic notifications about Web pages with potential malware to Google Analytics users. In December, it also revised its Google AdWords requirements with an updated policy on unwanted software.

Defining ‘Unwanted Software’

According to Google’s new policy, unwanted software includes apps that are deceptive, affect user systems in unexpected ways, are secretly bundled with other software or use trickery or piggybacking on other programs to get people to install them. Unwanted apps can also be difficult to remove or they can collect and transmit information about users without their knowledge.

For marketers who use Google, this means that “advertisers with software downloads hosted on their sites or linked to from their sites must comply with the Unwanted Software policy, regardless of the devices on which the software is installed. All such software downloads must comply with this new policy, whether or not these downloads are promoted through AdWords.”

Not all sites with unwanted software deliver their malware intentionally. Some may have been hacked, and Google’s Safe Browsing tool has been designed to identify those sites as well as intentionally harmful sites. Karl Sigler, Threat Intelligence Manager at the cybersecurity firm Trustwave, told us that sites with unwanted software are significant problems on the Web.

“(T)here’s an entire underground economy surrounding the practice. Criminals create networks of malicious Web sites called exploit kits. They then rent those malicious Web sites out to other criminals that use them to compromise victims. Some of these exploit kit campaigns breach hundreds of thousands of computers,” Sigler said.

“Many times criminals don’t need to exploit a vulnerability. They can often use social engineering to trick a victim into installing malware on their own systems,” he added. “This often occurs by prompting a user to install a fake software update that is actually malware.” Sigler called Google’s latest Safe Browsing changes “a wonderful service.”

Scanning ‘Millions of Web Sites’

“Safe Browsing scans millions of Web sites to identify those sites that install malware without a user’s knowledge,” according to Google’s Transparency Report. “We discover and categorize these sites by autonomous system numbers, thousands of which exist on the Internet.”

Recently, Google’s scanning identified, in just one day, 6,062 attack sites among the more than 73,000 sites managed by a single autonomous system, Hostspace Networks. Google noted that the success of various malware techniques can shift rapidly, leading to spikes in the number of problematic sites discovered over time.

Google continually revises and updates how its algorithms work to refine the search results it produces. In addition to making it easier for users to identify potentially harmful sites, it is also putting a growing priority on ensuring that its search results provide reliable and factual information.

Last month, for example, it added a new feature to deliver medically validated facts in its Knowledge Graph for health-related searches. It also has researchers working on systems to bump Web sites with trustworthy information higher up in its search rankings.

Via: enterprise-security-today

Google Leaks Private Data from Hundreds of Thousands of Domains

The private information of hundreds of thousands of domain owners was inadvertently released to the public, thanks to a mistake by Google. The hidden Whois data for more than 282,000 domains was accidentally leaked by Google Apps, according to a report by the Web site Ars Technica.

The error affected domains that Google had registered with its partner, domain registrar eNom. Around 94 percent of the domains Google registered with eNom had their hidden Whois information made public.

Whois is a query and response protocol that identifies the individual or company behind the registration of a domain name, essentially revealing the owner of a Web site. The error stems from a software bug in the Google Apps for Work platform that arose in 2013. As a result of the defect, the database used by Google Apps leaked the Whois data for a domain whenever the owner renewed it.
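Whois itself is a very simple protocol (RFC 3912): the client opens a TCP connection to port 43 on a registry’s Whois server, sends the query terminated by CRLF, and reads back plain text. A minimal sketch follows; the server name is an assumption (Verisign’s registry server handles .com), and the correct server depends on the TLD.

```python
import socket

def build_whois_query(domain):
    """Whois (RFC 3912) requests are just the query string plus CRLF."""
    return domain.encode("ascii") + b"\r\n"

def whois_lookup(domain, server="whois.verisign-grs.com", timeout=10):
    """Send a Whois query over TCP port 43 and return the text response."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(build_whois_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")
```

Because the protocol returns whatever the registry has on file, a privacy-protection lapse like the one described here is immediately visible to anyone who queries.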

The Phone Book of the Internet

Although the bug has existed for almost two years, Google only recently became aware of the issue and took the steps necessary to fix it. The flaw was initially discovered in February by the Talos Security Intelligence and Research Group, a division of Cisco Systems, as part of Google’s Vulnerability Rewards Program. The bug was patched within five days of its discovery, according to Ars Technica.

The information that was made public by the breach includes full names, street addresses, phone numbers and e-mail addresses for the domains. The information leak exposed the affected users to a number of possible threats, including being targeted by spammers, spearphishers, or other online threats, according to a blog post by the Talos Security team. In fact, eNom had specifically marketed itself to customers as providing the security precautions necessary to keep their information secure.

“Whois acts as the phone book of the Internet, allowing anyone to query who owns what domain and how to contact them,” the Cisco researchers wrote in a blog post. “This is a requirement prescribed by ICANN, who organizes and manages the conventions used in domain names. Domain Name privacy protections are used to mask this information from always being publicly displayed. Just as it’s possible to pay to have your name removed from the phonebook.”

Repercussions for Years

Unfortunately for the individuals and companies affected by the breach, the information that was leaked is now a permanent part of the Internet record, since there are a number of services that keep Whois data archived. However, the news is not entirely negative: the leak has also identified several domains that have already been linked to malicious activity.

Domains such as “federalbureauinvestigations.com” and “hfcbankonline.com” both have extremely low reputation scores, and are likely to be involved in activities that are not entirely on the up-and-up, according to the Talos team.

Nevertheless, many domain owners opt to keep their personal and corporate information private for completely legitimate reasons. Those parties are likely to experience significant repercussions as a result of the breach for years to come, as the information will remain available to anyone with access to a cached version of the Whois database.

“Organizations that handle any sensitive information must ensure that the appropriate systems are safeguarded and that the processes handle failure gracefully,” according to Talos. “In this instance, a simple check on domains changing state from being privacy protected to not being privacy protected could have identified the problem as it started to occur.”

Via: enterprise-security-today

Attackers spread worm via Facebook, leverage cloud services

Facebook users who clicked an Ow.ly link in a post promising pornographic content may have become infected with a worm – believed to belong to the Kilim family – that then spread the same link to all of their contacts and groups, according to a Thursday post by Malwarebytes.

Kilim targets social media networks – particularly Facebook and Twitter – by installing a rogue extension within the Google Chrome browser, Jerome Segura, senior security researcher at Malwarebytes, told SCMagazine.com in a Friday email correspondence. The malware can be used to post new messages, like a page, follow users and send direct messages, he explained.

“The goal [of this current attack] is to harvest as many users as possible to create a very large [botnet] consisting of social networks profiles which can be leveraged in various ways, [such as by] reselling Facebook friends and likes, reselling Twitter followers, [and] generating pay per click revenue by visiting sites and clicking ads,” Segura said, adding this attack seems to target Chrome specifically.

To infect users, attackers are taking advantage of a multi-layer redirection architecture that leverages cloud services, the post indicates. Segura said the attackers may be using this method to “make it harder to pinpoint exactly how the malicious redirection takes place, but also to be able to switch services quickly if they get blacklisted.”

Upon clicking the Ow.ly link claiming to deliver “sex photos of teen girls in school,” Facebook users are redirected to another Ow.ly link, which then redirects to an Amazon Web Services page, which then redirects to a malicious website, according to the post.

At this point, the malicious website checks the user’s system. Mobile users are “taken to an offer page based on their geographic location and language,” Segura said. “These offers usually end up being bogus apps or surveys.”

Computer users are instead sent to a Box website where they are prompted to download a file, the post indicates. Running the file will result in the machine becoming infected, which then leads to additional components – the worm – being downloaded and the original Ow.ly link being spread to the infected user’s Facebook contacts and groups.

“The file hosted on Box is trimmed down to a minimum size and its only purpose is to download additional components,” Segura said. “This is typically done to avoid initial detection, but also to allow the bad guys to update the backend code on the server so that the trojan downloader can retrieve the latest versions of each module. After the additional components are downloaded (Chrome extension, worm binary) they are installed on the machine and simply wait for the user to log into Facebook.”

Box is aware of the attack, according to a statement emailed to SCMagazine.com on Friday. To address the issue, the company is removing the files, eliminating sharing privileges for malicious accounts and is continuously scanning for viruses and related activity.

Facebook is also aware of the threat. Working with the other companies targeted in the attack, the social media giant spent the past week blocking associated links and stopping the links from spreading on its platform, according to a statement emailed to SCMagazine.com on Friday.

In a statement, an Amazon Web Services (AWS) spokesperson told SCMagazine.com on Friday that the “activity being reported is not currently happening on AWS.”

Via: scmagazine

Defending Against PoS RAM Scrapers

Stealing payment card data has become an everyday crime that yields quick monetary gains. Attackers aim to steal the data stored in the magnetic stripe of payment cards, optionally clone the cards, and run charges on the accounts associated with them. The topic of PoS RAM scraper malware always prompts businesses and retailers to ask two important questions: “How do I protect myself?” and “What new technologies are vendors introducing to protect businesses and consumers?”

This blog entry seeks to answer these questions by discussing a PoS Defense Model and new technologies that can protect businesses and consumers from PoS RAM attacks.

PoS Defense Model

Based on our analysis of the PoS RAM scraper attack chain and PCI-DSS and PA-DSS requirements, we have created a multi-tiered PoS Defense Model that businesses and retailers can implement to defend against PoS RAM scraper malware attacks.


Figure 1. Multi-tiered PoS Defense Model

The four layers of the PoS Defense Model are:

  1. Infection Layer – this is the first and most important line of defense against PoS RAM scrapers as it aims to prevent initial infection, or block the malware’s execution before it causes damage.
  2. Lateral Movement Layer – if the infection layer fails to stop the malware, then the next layer of defense aims to identify suspicious or malicious behavior when the malware attempts to spread and blocks it.
  3. Data Collection Layer – PoS RAM scraper attacks might involve other information-stealing components that sniff network traffic, log keystrokes, and steal sensitive files. This layer of defense aims to prevent data theft.
  4. C&C and Data Exfiltration Layer – the stolen credit card data is only valuable after it has been exfiltrated from the victim machine. The final layer of defense aims to prevent the malware from communicating with the C&C servers and prevent exfiltration of stolen data.

We have identified 26 defensive technologies and strategies that businesses and retailers can implement in their environments to defend against PoS RAM scraper attacks. The following Venn diagram shows these defensive technologies and strategies placed within the PoS Defense Model.


Figure 2. Defensive technologies and strategies

Next Generation Payment Technologies

The new reality is that any Internet-connected device that processes payment card data should be viewed as a data theft target. Buyer security rests on the shoulders of several key players – device manufacturers, service providers, businesses, banks, and even credit card brands. Strong IT defense goes a long way in preventing PoS system breaches but it is not a magic bullet. New secure payment technologies must also be deployed alongside strong IT defenses to protect against PoS RAM scrapers. Two technologies that are being widely deployed are:

EMV or Chip-and-PIN cards


Figure 3. Encrypted data stored in chip (outlined in red)

EuroPay, MasterCard, and Visa (EMV) is the global standard for Integrated Circuit Cards (ICC). EMV cards store encrypted Track 1 and Track 2 data on a chip in the card. The chip stores a cryptogram that allows banks to determine whether cards or transactions have been modified, along with a counter that is incremented with each transaction; duplicate or skipped counter values indicate potential fraudulent activity. EMV cards interact with PoS terminals that have ICC readers and use the EMV-defined protocol for transactions. As with debit cards, cardholders must enter a PIN for authentication before the transaction is processed.
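The counter-based fraud signal can be illustrated with a small sketch (hypothetical issuer-side logic, not any real bank’s implementation): given the sequence of Application Transaction Counter values reported by a card, duplicates suggest a cloned card, and large gaps suggest transactions happening out of band.

```python
def check_transaction_counters(counters):
    """Flag duplicate and skipped Application Transaction Counter (ATC)
    values in the order they were reported. A simplified sketch of the
    fraud signal described above."""
    anomalies = []
    seen = set()
    prev = None
    for atc in counters:
        if atc in seen:
            anomalies.append(("duplicate", atc))   # possible cloned card
        elif prev is not None and atc > prev + 1:
            anomalies.append(("gap", atc))          # skipped transactions
        seen.add(atc)
        prev = atc if prev is None else max(prev, atc)
    return anomalies
```

Real issuer logic is far more tolerant (offline transactions legitimately create gaps), but the principle is the same: the chip makes the transaction history tamper-evident in a way a magnetic stripe cannot.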

Encryption plus Tokenization

PoS RAM scrapers will have nothing to steal if credit card Tracks 1 and 2 data are not present in the PoS system’s RAM. This is the underlying principle behind the new payment processing architectures being developed and deployed today. One implementation uses tokenization, a process that replaces a high-value credential such as a credit card with a surrogate value that is used in transactions in place of the high-value credential, and encryption.


Figure 4. Process flow for Encryption and Tokenization

The workflow is as follows:

  1. Customer swipes their credit card at the merchant’s PoS terminal to complete the purchase.
  2. The PoS terminal reads and encrypts the credit card data and transmits it to the Payment Service Provider (PSP) for processing.
  3. The PSP forwards the credit card data to the banks (acquirers & issuers) for authorization.
  4. The PSP uses a tokenization algorithm to replace the actual credit card data with a token.
  5. The generated token and bank authorization status are sent back to the merchant’s PoS system.
  6. The merchant’s PoS system stores the token instead of the actual credit card data in all places.
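Steps 4 through 6 can be sketched as a toy token vault (hypothetical code, not a PCI-compliant implementation): the token is random, so it reveals nothing about the card number, and only the PSP’s vault can map it back.

```python
import secrets

class TokenVault:
    """Toy PSP-side tokenization vault: swaps a card number (PAN) for a
    random surrogate; merchants store only the token."""

    def __init__(self):
        self._token_to_pan = {}
        self._pan_to_token = {}

    def tokenize(self, pan):
        # Reuse the existing mapping so one card always gets one token.
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        # Random token: not derived from the PAN, so it can't be reversed.
        token = "tok_" + secrets.token_hex(8)
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token):
        # Only the PSP's vault can recover the PAN; the merchant never can.
        return self._token_to_pan[token]
```

The point, as the section above explains, is that a RAM scraper on the merchant’s PoS system finds only tokens, which are worthless outside the PSP’s vault.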

The Future for PoS RAM Scraper Attacks

As PoS RAM scrapers become more prominent threats, big businesses will invest heavily in cybersecurity to prevent attacks against their PoS environments. Attackers will thus refocus on SMBs, which may not have the cybersecurity budgets that enterprises have to prevent PoS system breaches. We expect to see more SMBs get compromised; collectively, these breaches could expose more data than the compromise of a few large enterprises.

Rollout of new security measures will significantly change the PoS playing field for attackers. As businesses upgrade to new secure payment systems, attackers will attempt to come up with new strategies against improved systems and environments.

For an in-depth analysis about protecting your business against the threat of PoS RAM Scraper malware, please read the Trend Micro paper, Defending Against PoS RAM Scrapers – Current and Next-Generation Technologies.

Via: trendmicro

Amazon’s ‘Write On’ Crowd-Publishing Platform Opens To All


Amazon has a new crowd-publishing platform called Write On, a direct competitor to Wattpad, the social network where self-publishing authors offer up their content for free and work with the community to incorporate feedback into their ongoing work. The Amazon version launched last October as an invite-only beta, but now it’s a full-fledged product available to all, and the beta label is gone.

The Amazon platform allows anyone to share anything they’re working on at any stage. They can offer full works, chapters, outlines, vague character sketches or even just single snippets and poll the community for feedback. You don’t have to write to participate, either – anyone who wants only to read has plenty of content to browse and nibble on, organized by genre, and there’s a “shuffle” feature that brings you to a random work.

Amazon has a lot of catching up to do to match Wattpad’s engagement – the nine-year-old company has 40 million active users on the network monthly, according to its latest shared stats, and those members post over 24 hours’ worth of new material for reading every single day.

Write On isn’t Amazon’s only product that looks to leverage crowdsourcing to serve readers – the company also recently launched Kindle Scout, where authors can submit completed manuscripts to be vetted by the user community, and to be potentially chosen based on crowd response for digital publication by Kindle’s publishing arm. In theory, then, you could start, write and fine tune a book on Write On, submit it to Scout, and have it made available for sale, all inside Amazon’s ever-loving embrace.

Via: techcrunch