Monthly Archives: October 2016

Mirai Malware Simplifies Internet Attacks

A massive internet attack that paralyzed Twitter, Netflix and other services is being blamed on a specific kind of malware designed to harness the power of ordinary consumer devices.

The bad news: Using it isn’t particularly hard and doesn’t require much money. The malware, known as Mirai, was recently posted online for others to adapt for their own attacks.

Researchers say Mirai exploited security vulnerabilities in thousands of internet-connected devices such as web cameras, then used those devices to attack a major internet firm, resulting in widespread outages. Researchers say Mirai has been used before, but not on the scale of Friday’s attacks.

Here’s a look at Mirai and what makes it so destructive.

What Happened?

Dyn Inc., an internet company in Manchester, New Hampshire, said its servers were hit by a distributed denial-of-service attack. These types of attacks work by overwhelming targeted computers with junk traffic, so legitimate traffic can’t get through.
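The mechanics can be captured in a toy model (not an attack tool): a server can handle only a fixed number of requests per second, and once junk traffic exceeds that capacity, legitimate requests get crowded out. All numbers below are illustrative assumptions, not figures from the Dyn incident.

```python
def served_legitimate(capacity_rps: int, legit_rps: int, junk_rps: int) -> int:
    """Return how many legitimate requests per second actually get served,
    assuming the server picks requests at random from the combined queue."""
    total = legit_rps + junk_rps
    if total <= capacity_rps:
        return legit_rps
    # Under overload, legitimate traffic gets through only in proportion
    # to its share of the flood.
    return capacity_rps * legit_rps // total

# Normal day: all 1,000 legitimate requests per second are served.
print(served_legitimate(10_000, 1_000, 0))          # 1000
# Under a 1M-rps flood, barely any legitimate traffic survives.
print(served_legitimate(10_000, 1_000, 1_000_000))  # 9
```

The point of the model is that the attacker never needs to break into the target; sheer volume is enough to starve out real users.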

Jason Read, founder of the internet performance monitoring firm CloudHarmony, said his company tracked a half-hour-long disruption early Friday affecting access to many popular sites from the East Coast. A second attack later in the day spread disruption to the West Coast as well as some users in Europe.

What Made This Attack So Nasty?

While distributed denial-of-service attacks have been around for years, hackers have many more devices they can use to pull off their attacks, thanks to the proliferation of internet-connected cameras, thermostats, lights and more.

And Mirai makes it easy for a would-be attacker to scan the internet for devices to take over and turn into “botnets” for launching coordinated attacks, Chris Carlson of the cybersecurity firm Qualys said.
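Mirai-style bots spread by trying a short list of factory-default username/password pairs against exposed devices. The same idea can be turned around defensively: audit your own device inventory for those defaults before an attacker finds them. The inventory data and credential list below are hypothetical, illustrative examples.

```python
# A small, illustrative subset of the kind of default credential pairs
# Mirai was reported to try; real lists are longer.
DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("root", "12345")}

def at_risk(inventory):
    """Return names of devices still configured with a known default credential."""
    return [name for name, creds in inventory.items() if creds in DEFAULT_CREDS]

devices = {
    "lobby-camera": ("admin", "admin"),    # never reconfigured after install
    "dvr-01":       ("root", "S3cure!pw"), # changed at deployment time
}
print(at_risk(devices))  # ['lobby-camera']
```

A device flagged by a check like this is exactly the kind of easy target a botnet scanner takes over first.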

While botnets have been used as weapons for nearly a decade, they have typically been employed by organized crime groups that targeted websites involved in less-than-savory businesses such as pornography or gambling. Those sites pay extortion money to make the problem go away quietly, Carlson said.

“But when you bring it to Dyn, and a lot of the internet gets shut down, people take notice,” Carlson said.

What Kinds of Devices Were Affected?

Researchers at the cybersecurity firm Flashpoint say very few devices in the U.S. seem to be involved.

Most of the junk traffic heaped on Dyn came from internet-connected cameras and video-recording devices that had components made by an obscure Chinese company. Those components had little security protection, so the devices they went into became easy to exploit.

Because the components were put into a variety of devices that were then packaged and rebranded, it’s hard to tell exactly where they ended up. But Flashpoint researchers Allison Nixon and Zach Wikholm say their research shows that the bulk of them ended up in Vietnam, Brazil, Turkey, Taiwan and China.

Who’s Behind It?

That remains unclear. Nixon and Wikholm say it’s unlikely that this is a state-sponsored attack. Because the blueprints, or source code, for Mirai were public, an attack like this wouldn’t need a government’s resources.

Hacker groups have claimed responsibility through Twitter, but those claims haven’t been verified and the pair says it’s likely that they’re all lying.

“These guys are amateurs and they managed to get this far. That’s kind of scary,” Nixon said.

Are More Attacks Coming?

Probably. Hacker groups have threatened targets ranging from the Russian government to major corporations and the U.S. presidential election. But it’s unclear if those groups are actually capable, or just making empty threats.

Experts say that whatever the target, more attacks are inevitable in light of the continued growth of connected devices and the lack of security requirements for them. Therefore, the solution lies in boosting device security at the hardware level.

“At the end of the day, these attacks aren’t super sophisticated,” Carlson says. “They’re just a blunt hammer and whoever has the biggest hammer wins.”

The Department of Homeland Security said Monday that it’s been working on security practices for internet-connected gadgets and will release them in the coming weeks.


via:  enterprise-security-today

New FCC rules boost privacy for ISP customers

The U.S. Federal Communications Commission (FCC) has just issued long-awaited rules about how Internet Service Providers (ISPs) can use and share the personal information they capture while you’re using their internet connections. The rules are a whole lot tougher than the ISPs would like. While most privacy advocates seem pleased, others wanted the FCC to go even further, applying the same rules to powerful non-ISPs like Google and Facebook.

The FCC says it’s aiming to give consumers:

Increased choice, transparency, and security online… ISPs serve as a consumer’s “on-ramp” to the internet. Providers have the ability to see a tremendous amount of their customers’ personal information that passes over that internet connection, including their browsing habits.

Consumers deserve the right to decide how that information is used and shared – and to protect their privacy and their children’s privacy online.

To begin, broadband ISPs will have to tell customers what kind of information they’re collecting, how and when they share it, and the “types of entities” they share it with.

Where your personal information is sensitive, you’ll have to opt in before they can use or share it. What’s “sensitive”? Your precise geolocation; information about your children, your health, and your finances; social security numbers; web browsing and app usage histories; and the content of your communications. (Note that your ISP can use and share this info if you give them explicit permission – so watch out for those inviting, gently worded dialog boxes.)

For “all other individually identifiable customer information” – such as the tier of internet service you subscribe to – your ISP can use and share it unless you opt out.

Your consent is “inferred” for its use of non-sensitive information “to provide and market services and equipment typically marketed with [your] broadband service… to provide the broadband service, and bill and collect for [it, and] to protect the broadband provider and its customers from [fraud].”

Your ISP can use “de-identified information” that has been disconnected from your identity – but only if they take strong precautions against re-identifying it. ISPs will no longer be permitted to refuse your business if you won’t opt in to their use of your private data. And if they want to give you a discount in exchange for your precious info, they’ll have to provide some to-be-determined form of “heightened disclosure”.
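The consent model the rules describe can be summarized in a few lines of logic: sensitive data requires opt-in, other identifiable data is usable unless the customer opts out, and consent is inferred for providing and billing the service itself. The category names below are assumptions chosen for illustration, not terms from the FCC order.

```python
# Categories the article lists as "sensitive" (opt-in required).
SENSITIVE = {"geolocation", "children", "health", "finances", "ssn",
             "browsing_history", "app_usage", "communications_content"}
# Uses for which consent is "inferred" per the rules.
SERVICE_PROVISION = {"billing", "service_delivery", "fraud_protection"}

def consent_required(category: str) -> str:
    """Return which consent regime applies to a given data category."""
    if category in SERVICE_PROVISION:
        return "inferred"
    if category in SENSITIVE:
        return "opt-in"
    return "opt-out"  # all other individually identifiable information

print(consent_required("browsing_history"))  # opt-in
print(consent_required("service_tier"))      # opt-out
print(consent_required("billing"))           # inferred
```

Note that browsing history lands in the opt-in bucket, which is precisely what the advertising industry objects to below.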

The New York Times quoted two US privacy advocacy organizations – the Center for Digital Democracy and Public Knowledge – as welcoming the new rules. Another, the Electronic Privacy Information Center (EPIC), has argued that the rules don’t go nearly far enough:

While ISPs are clearly engaged in invasive consumer tracking and profiling practices, they are not the only ‘gatekeepers’ to the internet who have extensive and detailed views of consumers’ online activities.

Indeed, many of the largest email, search, and social media companies far exceed the data collection practices of ISPs.

One dissenter from the FCC’s 3-2 vote, Republican FCC Commissioner Ajit Pai, wrote that “There is no good reason to single out ISPs – new entrants in the online advertising space – for disparate treatment.” “Selectively burdening ISPs,” he adds, “confers a windfall to those who are already winning” – companies like Google and Facebook, which face much less rigorous regulation by another agency, the US Federal Trade Commission.

Pai also argues that, as encryption spreads on the internet, less of your private data will remain visible to your friendly ISP anyhow.

You won’t be shocked to hear that the National Cable and Telecommunications Association called the new rules “profoundly disappointing… regulatory opportunism”. And you can just imagine what advertisers think.

OK, we’ll save you the trouble. Here’s Dan Jaffe, vice-president of government relations for the Association of National Advertisers, as quoted by AdExchanger:

This… terrible and unprecedented [proposal]… sweeps all browsing data into the category of ‘sensitive information,’ even if it’s just someone interested in their local weather or whether the orange juice with or without pulp is on sale.

Large ISPs will get a year to implement the new rules; smaller ISPs will get two years. That assumes the rules don’t get tossed out in court. Of course, with three Democrats voting “yes,” and two Republicans voting “no,” they might also get Trumped by a change of presidential administration… but we’ll know about that soon enough.


via:  nakedsecurity

Red Cross data breach shows security is still not a priority

The Australian Red Cross Blood Service has responded quickly to a breach of 550,000 donor details, but security commentators say the incident shows security is still not a priority for many organizations.

The Australian Red Cross Blood Service has admitted that the personal details of 550,000 donors were placed on a publicly accessible web server by mistake.

Security commentators say the error could have exposed the donors to identity theft or other crimes and underlines the fact that data security is still not a top priority for many organizations.

The Red Cross said on 26 October that its blood service had become aware that a file containing donor information had been placed in an insecure environment by a third-party website developer.

The file contained registration information collected between 2010 and 2016, including details such as names, addresses, dates of birth and other personal details.

The Red Cross said someone scanning for security vulnerabilities had alerted the Australian Cyber Emergency Response Team (AusCERT), which helped the blood service to address the problem.

The blood service has also contacted the Australian Cyber Security Centre, the Australian Federal Police and the Office of the Australian Information Commissioner.

According to the blood service, IDCARE, a national identity and cyber support service, assessed the information accessed as being at low risk of future direct misuse.

“To our knowledge, all known copies of the data have been deleted,” said Shelly Park, chief executive of the blood service. “However, investigations are continuing.”

Park said the online forms do not connect to the service’s secure databases, which contain more sensitive medical information.

“The blood service continues to take a strong approach to cyber safety so that donors and the Australian public can feel confident in using our systems,” she said.

“We are incredibly sorry to our donors. We are deeply disappointed this could happen. We take full responsibility and I assure the public we are doing everything in our power to not only right this, but to prevent it from happening again.”

The blood service is trying to contact everyone who made an application to be a blood donor on the site and inform them of the potential data breach. The organization has also set up a hotline, website and email address to provide information for donors.

While some commentators have praised the organization for the way it responded to the breach – described as the worst in Australia to date – others have been critical of the lax attitude to security that led to the breach in the first place.

“In this age of data-sharing, many organizations look at logistics before security,” said Mark James, security specialist at ESET. “If the data needs to be accessible by many people, then that priority is top of the list.”

According to James, protecting data requires multi-layered defense comprising security software, hardware, education and expertise.

“Failure to ensure software is patched and up to date is one of the biggest problems,” he said. “As a result, many webservers are using outdated software that still has vulnerabilities or flaws waiting to be exploited.”

With software available to scan multiple IP addresses looking for certain types of file, most of the hard work has already been done for the attacker, said James.

Correct authentication methods

However, he said the likelihood of breaches could be reduced significantly if the correct authentication methods are in place and there are periodic security reviews on all servers holding or handling private data.

“Having open facing servers available for plunder by all and sundry is just sloppy these days and is easily fixable,” said James.

Steve Murphy, senior vice-president for Europe at data giant Informatica, said that if organizations do not track where their data is moving and who holds it, it is only a matter of time before a damaging breach occurs.

“With sensitive data often passing between multiple companies during partnerships and sales, it is essential that organizations have a data-centric security strategy in place to ensure that data is secure wherever it goes,” he said.

The cost of poor data security is now far more than just financial, said Murphy. “Consumers are sharing more and more personal information with a wide range of organizations, from medical trusts to e-vendors, and, as a result, businesses that fail to secure that data risk inadvertently exposing their customers to blackmail, impersonation and scams – not to mention the reputational damage to the company.”


via:  computerweekly

Crack for Charity — GCHQ launches ‘Puzzle Book’ Challenge for Cryptographers

The UK’s Signals Intelligence and Cyber Security agency GCHQ has launched its first ever puzzle book, challenging researchers and cryptographers to crack codes for charity.

Dubbed “The GCHQ Puzzle Book,” the book features more than 140 pages of codes, puzzles, and challenges created by expert code breakers at the British intelligence agency.

Ranging from easy to complex, the GCHQ challenges include ciphers and tests of numeracy and literacy, substitution codes, along with picture and music challenges.

Writing in the GCHQ Puzzle Book’s introduction, here’s what GCHQ director Robert Hannigan says:

“For nearly one hundred years, the men and women of GCHQ, both civilian and military, have been solving problems. They have done so in pursuit of our mission to keep the United Kingdom safe. GCHQ has a proud history of valuing and supporting individuals who think differently; without them, we would be of little value to the country. Not all are geniuses or brilliant mathematicians or famous names, but each is valued for his or her contribution to our mission.”

The idea for the GCHQ Puzzle Book came after the success of last year’s cryptographic puzzle challenge that was dubbed the ‘hardest puzzle in the world’ and featured in Hannigan’s Christmas card.

Nearly 600,000 people from across the globe took part in the challenge; only 30,000 made it to the final stage. Three people came very close and were considered winners by GCHQ.

However, the solution to the Christmas puzzle, including explanations from the puzzle-setters, was made publicly available earlier this year for anyone to look over.

The GCHQ Puzzle Book, published by Penguin Random House, will be on sale from 20th October at High Street book retailers and online.

All GCHQ earnings from the book will be donated to Heads Together — the “campaign spearheaded by the Duke and Duchess of Cambridge and Prince Harry, to tackle stigma, raise awareness and provide vital help for people with mental health challenges.”


via:  thehackernews

Mozilla strives for performance boost with new Project Quantum

As the web becomes less about static webpages and more about intricate web apps, browsers are being pushed to their limits to display interactive content without lag and erratic frame rates. Today, in a blog post, Mozilla outlined the development of a new project it is calling Quantum, a browser engine designed to address these changes at a fundamental level. When the project is completed, it’s promised to bring a smoother browsing experience to Firefox users.

Work on Quantum is leveraging previous work on Servo and Rust to deliver a smoother browsing experience on more intensive websites. Rust, a programming language, was initially created as the side project of a Mozilla employee. It was designed to be fast while ensuring thread and memory safety when developing parallel programs.

This is important because Servo, the second piece to the puzzle, is a Mozilla-sponsored, community-based parallel web engine. Servo will be the source of many of the underlying components for Quantum that will actually improve the rendering of webpages.

Separate from this endeavor, Mozilla has for quite some time been hard at work rolling out Electrolysis to bring the benefits of a multiprocess architecture to Firefox users. Though Mozilla has put a large amount of its resources into the development of Electrolysis, the company has consistently insisted there was more up its sleeve.

Electrolysis is being painted as a necessary first step that laid the groundwork for Quantum development. From here, Mozilla wants to throw out major components of its Gecko engine, replacing them with more efficient components that will play better with parallelization and GPU offloading.

“We’ll be re-engineering foundational building blocks, like how we apply CSS styles, how we execute DOM operations, and how we render graphics to your screen,” said David Bryant, Head of Platform Engineering at Mozilla.

The new engine will also focus processing power more effectively to prioritize the most important web content. Together with Electrolysis, set to roll out to all Firefox desktop users in the coming months, Quantum should improve the stability, security and overall quality of the browsing experience.

Mozilla hopes to push out an initial iteration of Quantum by the end of 2017 for Android, Windows, Mac and Linux Firefox users. That means that, for now, iOS users are not going to be invited to the party, but Mozilla says it hopes to include them in future releases.


via:  techcrunch

St. Joseph Health to pay $2 million for HIPAA violations

After an incident exposed the protected health information of 31,800 people, the organization failed to conduct a proper risk analysis, according to federal officials.

St. Joseph Health will pay $2,140,500 to settle potential violations of the Health Insurance Portability and Accountability Act of 1996 Privacy and Security Rules.

At issue, according to the Office for Civil Rights, which oversees HIPAA rules, were files containing electronic protected health information that were publicly accessible through internet search engines from 2011 until 2012.

SJH, a nonprofit integrated Catholic healthcare delivery system sponsored by the St. Joseph Health Ministry, will also adopt a comprehensive corrective action plan as part of the settlement.

The health system operates 14 acute care hospitals, home health agencies, hospice care, outpatient services, skilled nursing facilities, community clinics and physician organizations throughout California and in parts of Texas and New Mexico.

On Feb. 14, 2012, SJH reported to OCR that certain files it created for its participation in the meaningful use program, which contained electronic PHI, were publicly accessible on the Internet from Feb. 1, 2011, until Feb. 13, 2012, via Google and also perhaps through other search engines.

The server SJH purchased to store the files included a file-sharing application whose default settings allowed anyone with an Internet connection to access them. The problem occurred after SJH rolled out the server and the file-sharing application, but failed to examine and evaluate how they were working.
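The failure mode here is worth spelling out: the breach came not from an exploit but from a file-sharing application whose default settings made uploads world-readable, combined with the absence of any post-deployment review. A review pass as simple as the sketch below (walk the configured shares, flag anything publicly readable) would have caught it. The data layout is hypothetical; a real check would query the actual application's ACLs.

```python
# Hypothetical share configuration, modeled on the scenario in the article:
# a vendor default left one share publicly readable and no one reviewed it.
shares = {
    "/shares/meaningful-use-reports": {"public_read": True},
    "/shares/internal-docs":          {"public_read": False},
}

def publicly_exposed(share_config):
    """Return paths of shares that anyone on the internet could read."""
    return [path for path, acl in share_config.items() if acl.get("public_read")]

print(publicly_exposed(shares))  # ['/shares/meaningful-use-reports']
```

The OCR finding quoted below makes the same point: evaluating how a newly deployed system actually behaves is part of the required risk analysis, not an optional extra.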

The public had unrestricted access to PDF files containing the ePHI of 31,800 individuals, including patient names, health statuses, diagnoses, and demographic information.

Moreover, OCR concluded that although SJH hired contractors to assess the risks and vulnerabilities to the confidentiality, integrity and availability of ePHI, the work was conducted in a patchwork fashion and did not result in an enterprise-wide risk analysis, as required by the HIPAA Security Rule.

“Entities must not only conduct a comprehensive risk analysis, but must also evaluate and address potential security risks when implementing enterprise changes impacting ePHI,” OCR Director Jocelyn Samuels said in a statement.

In addition to the monetary settlement, SJH has agreed to a corrective action plan that requires the organization to conduct an enterprise-wide risk analysis, develop and implement a risk management plan, revise its policies and procedures, and train its staff on these policies.


via:  healthcareitnews

Checks and Balances – Asset + Vulnerability Management

Creating a Positive Feedback Loop

Recently I’ve focused on some specific use cases for vulnerability analytics within a security operations program.  Today, we’re taking a step back to discuss tying vulnerability management back into asset management to create a positive feedback loop.  This progressive, strategic method can mitigate issues and oversights caused by purely tactical, find-fix vulnerability cycles.  And it can be done using vulnerability scan data, creating additional value from your ongoing security operations.

Consider the top four CIS Critical Security Controls from the lens of asset and vulnerability management:

  • CSC 1 – Inventory of Authorized and Unauthorized Devices
  • CSC 2 – Inventory of Authorized and Unauthorized Software
  • CSC 3 – Secure configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
  • CSC 4 – Continuous Vulnerability Assessment and Remediation

The top three controls can be roughly grouped together as “Do Asset Management.”  Number four can be roughly described as “Do Vulnerability Management.”  Organizations often address these as completely separate problems, when in reality they are part of the same lifecycle:

  1. Many vulnerability management findings stem from oversights or problems with asset management programs.
  2. Many vulnerability scan details can be used to help audit and improve asset management programs.


Consider the down-cycle flow (asset management gaps producing vulnerability findings) to be point one, and the up-cycle flow (scan data feeding back into asset management) to be point two.  The down-cycle is a given, based on the relationship between asset management and vulnerability management.  The up-cycle requires proactive lifecycle management to implement properly.  The examples below describe creating asset review processes as a practical way to leverage vulnerability management data for improved asset management.

Asset & Software Inventory Review

Vulnerability management scanning generally begins with preliminary network scanning using tools like nmap.  Before reviewing vulnerability check results, the basic network scan data can provide significant value for a checks-and-balances review.  Consider the following short vignettes, boiled down from actual conversations in the field.

Admin: “We just finally replaced the last of our Windows XP machines six months ago, it took forever!  We finally had to put a task-force team together to get it done.”

< Run a discovery scan, review results >

Consultant: “It looks like you still have 12 Windows XP machines in your environment.”

< Cue: “We’re putting the band back together” >

Consultant: “You would be surprised how many times I find non-secure FTP, or even TELNET running in an environment.”

Admin: “We have a strictly locked down network; we know everything that’s running out there and we definitely don’t have FTP or TELNET.”

Consultant: “Well, it can’t hurt to do a search.  It might just come back empty.”

< Run a quick search in vulnerability scan data >

Consultant: “Yep, looks like there are a handful of each here.”

Admin: “Oh ****, that one’s in our DMZ.”

< Cue: “Quick, to the Batcave!” >

The broad theme here is that vulnerability scan data is good for more than just vulnerability analysis; these vignettes do not represent specific vulnerabilities, but misdeployed or misconfigured assets within the environment.  By creating an inventory review cycle, you can realize additional value from the scan data you are already collecting.
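An inventory-review pass over discovery-scan output can be sketched as below. The record format is hypothetical; real data would come from an nmap export or a vulnerability management product. The point the vignettes make is that plain service and OS data, not vulnerability checks, is enough to catch policy violations like lingering telnet or end-of-life Windows XP.

```python
# Hypothetical discovery-scan records, mirroring the vignettes above.
scan_results = [
    {"host": "10.0.1.5",  "os": "Windows XP",          "ports": {80: "http"}},
    {"host": "10.0.2.9",  "os": "Linux 4.4",           "ports": {21: "ftp", 22: "ssh"}},
    {"host": "10.0.3.17", "os": "Windows Server 2012", "ports": {23: "telnet"}},
]

BANNED_SERVICES = {"ftp", "telnet"}     # non-secure protocols policy forbids
EOL_OS_PREFIXES = ("Windows XP",)       # end-of-life operating systems

def policy_violations(results):
    """Flag hosts running end-of-life OSes or banned plaintext services."""
    findings = []
    for r in results:
        if r["os"].startswith(EOL_OS_PREFIXES):
            findings.append((r["host"], "end-of-life OS: " + r["os"]))
        for port, svc in r["ports"].items():
            if svc in BANNED_SERVICES:
                findings.append((r["host"], f"banned service {svc} on port {port}"))
    return findings

for host, issue in policy_violations(scan_results):
    print(host, "-", issue)
```

Run on a schedule, a check like this is the "checks-and-balances" review the vignettes argue for: it audits the asset management program with data the scanner already collected.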

Vulnerability Remediation Review

While it is important to validate asset management through checks and balances review, it is also important to continue proactively validating vulnerability remediations as a check-and-balance for the vulnerability management program.  Consider the following scenario, again pulled from field conversation.

Admin: “We had a massive fire drill a few months ago for [ insert topical buzz-heavy vulnerability ].  It was a lot of long nights, but it’s good to have it done.”

Consultant: “Well, we should set up some searches to make sure that vulnerability doesn’t show back up in the environment again.”

< Run a quick search for a specific vulnerability title or CVE Identifier >

Consultant: “Yeah, a few of these assets definitely have that same vulnerability.  They must have fallen through the cracks, or someone re-installed the older software version after your cleanup effort.”

< Cue: Fire Alarm >

That last item specifically describes a vulnerability regression, which has been written about before.
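A recurring regression check can be expressed as a search over current findings for CVEs that were supposedly remediated. The data layout below is hypothetical; most scanners expose an equivalent saved-search or API feature.

```python
def regressions(findings, remediated_cves):
    """Return (host, cve) pairs where a previously fixed vulnerability
    has reappeared in current scan findings."""
    return [(f["host"], f["cve"]) for f in findings if f["cve"] in remediated_cves]

current_findings = [
    {"host": "10.0.2.9",  "cve": "CVE-2014-0160"},  # an old version reinstalled?
    {"host": "10.0.5.40", "cve": "CVE-2016-0800"},
]
# CVE identifiers here are real but chosen only as examples.
print(regressions(current_findings, {"CVE-2014-0160"}))  # [('10.0.2.9', 'CVE-2014-0160')]
```

Any non-empty result is the "fire alarm" moment in the vignette: the remediation effort needs to be reopened before the fix quietly erodes.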

The Big Picture

The above scenarios outline a few basic risks associated with a tactical, find-fix vulnerability management approach.  With a purely tactical approach:

  • There may be unmanaged, untracked, and unnoticed assets on the network.
  • There may be undesirable, insecure, and unnoticed network services running on some assets.
  • High-criticality vulnerabilities may be reintroduced to the environment and remain unnoticed.

By building a more strategic vulnerability management program with more thorough, iterative review cycles you can leverage your existing scan data to more effectively lock down your environment.

The need for strategic vulnerability management is not only linked to asset management validation; it is also linked to the penetration testing (CSC 20) response and remediation cycle. For more discussion, see my colleague Joe Tegg’s talk from DerbyCon: “We’re a Shooting Gallery, Now What?”



via:  rapid7

Fleex now lets you learn English by streaming Netflix shows

When I first covered Fleex, it was a neat little video player that let you learn English using your favorite movies or TV shows. Since then, Reverso has acqui-hired the team behind Fleex and now plans to relaunch the language learning platform with a new killer feature — Netflix shows.

Maybe I’m biased because it’s basically how I learned English, but I think watching all your movies and TV shows in a foreign language will drastically help you when it comes to learning that language.

At first, you start with subtitles, then you switch the language of the subtitles so that both the audio and the subtitles are in the foreign language. Then you drop the subtitles altogether. You’ve got to push yourself: each time you move to the next phase, it should feel difficult to understand at first.

On Fleex, you start with subtitles in both your native language and English. Slowly, Fleex removes subtitles in your native language. For the hardest parts of the video, you still get subtitles in both languages, but not all the time. Then Fleex removes subtitles completely.
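The progressive-subtitle idea can be sketched as a small decision function: which subtitle tracks to show depends on the learner's level and on how hard a given line is. The level names and difficulty threshold below are invented for illustration; Fleex's actual logic is not public.

```python
def tracks_to_show(level: str, line_difficulty: float):
    """Decide which subtitle tracks to display for one line of dialogue.
    level: 'beginner' | 'intermediate' | 'advanced' (assumed names);
    line_difficulty: 0.0 (easy) to 1.0 (hard)."""
    if level == "beginner":
        return ["native", "english"]  # dual subtitles throughout
    if level == "intermediate":
        # English subtitles always; native-language ones only for hard lines
        return ["native", "english"] if line_difficulty > 0.7 else ["english"]
    return []                          # advanced: no subtitles at all

print(tracks_to_show("intermediate", 0.9))  # ['native', 'english']
print(tracks_to_show("intermediate", 0.3))  # ['english']
```

As the learner advances a level, the same line gets fewer crutches, which is the "it should feel difficult at first" effect described above.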

At any time, you can pause the video, click on a word to look up the definition in Reverso Context and add it to your list of words you want to learn.

Fleex costs €6.90 per month or €39 per year (about $7.60 and $43.00). It still works with TED talks and your personal videos if you download the Fleex player on your computer. But the company is also adding Netflix as a source. I couldn’t try it yet, but it’s supposed to go live any day.

So how does Netflix support work? It’s a client-side integration. Netflix doesn’t have an API, but uses an HTML5 player that browser extensions can play with. For instance, you can add subtitles and interactions on top of the Netflix player.

“Netflix wasn’t available internationally a couple of years ago,” Reverso CEO Theo Hoffenberg told me. “But now, Netflix is quite open and we can work with the Netflix player directly. It’s not an API, but it’s open.”

The good thing is that Netflix can’t stop them from doing that as everything happens in your browser. From Netflix’s servers, it looks like yet another person streaming a show on Netflix. And Netflix will probably be happy that you’re spending more time (and money) watching Netflix content anyway — it’s a win-win.


via:  techcrunch

Amity’s interactive messaging app one-ups iOS 10’s iMessage, and works on Android, too

With the arrival of iOS 10, iMessage is finally getting a much-needed revamp that will see it incorporating third-party apps, as well as more engaging and interactive features like message bubbles, animations, handwriting, tapbacks, invisible ink, and more. But unless your friends are also on iOS 10, you won’t be able to use these additions in your group chats. However, a new messaging app called Amity is launching now to bring a similar – perhaps even upgraded – experience, but one that works across platforms.

Based in Brisbane, Australia, Amity’s bootstrapped team of eight has been working to create this more interactive messaging app over the course of the past two years.

As with many messaging clients, Amity offers the ability to send rich media in your chats – that is, things like photos (with filters, naturally), videos, links, voice messages, emojis, stickers, your location and more. But in Amity, you’re able to add these items by tapping buttons in the app itself – you don’t have to switch to another screen or app.

Plus, Amity offers its own collections of custom, original stickers to choose from, eliminating the need for add-on keyboards.

The app also keeps track of the media you’ve shared in your conversations. A “Memories” section, for example, archives all the photos, videos, links, news articles, YouTube videos, and “postcards” (a feature that involves sharing a location along with crowdsourced photos and other information pulled from Foursquare) in a single place you can revisit anytime.

But what makes Amity really fun are its interactive features. I have to admit that when chatting with the founder by phone this morning, I probably spent half the interview just tapping buttons in the chat app to try out all the different options.

For instance, you can “high five” a friend, which makes an animated version of this gesture appear in your chat, or you can “nudge” them, which actually makes the whole screen appear to shake while your phone buzzes.

Amity lets you ask your friends to send you media, too – you can request a photo, video or location with just the press of a button. A timer starts, encouraging the friend to press another button that appears (e.g. “Send Location”) to respond to your request.

In addition, Amity introduces a feature it calls “Live Mode” which activates whenever two or more people join a chat together on the same screen.

In this mode, you can send live emojis, live touch gestures, and emoji bursts.

You pick from several emojis (e.g. a smiley, hearts, heart eyes, etc.) and then drag the emoji onto the screen where dozens “explode” in a burst-like fashion.

Another experience (see below) is akin to the little “hearts” you send a broadcaster on Periscope or the Likes you send when viewing a Facebook Live video – the only difference is that it’s in chat, not on social media.

Amity was co-founded by Johnny Cheng (CEO), who previously founded a mobile gaming company with 3 million users; Nick Pestov (CTO), the former head of engineering for an e-commerce company; Kieran Harper, a programmer who worked in government; and Jackson Cheng, Johnny’s brother and a designer.

Though only at its 1.0 release, Amity comes across as a fairly polished app. (Unfortunately it’s crashing on the iOS 10 developer build, but a recent update has made several of my apps unstable; the team says it hasn’t seen this problem on other builds.)

Being immediately engaging and usable is by design, we’re told.

“We set out to come out on day one with something that’s complete and more compelling than anything out there as a starting point – that was really important to us,” says Johnny.

Of course, it’s challenging to get anyone to adopt a new messaging app these days in a world where apps like Messenger, WhatsApp, Snapchat, and others dominate, and where many are fine with using just basic SMS texting or iMessage. Amity’s bells and whistles are incredible and fun, but ultimately, I found myself wishing they were just Facebook Messenger’s new features.

Not to mention, with the upgrade to iOS 10, Amity will have heavy competition from iMessage.

“That was a surprise to us,” admits Johnny, when asked about the upgraded Apple messaging app. After all, Amity had begun its work even before WhatsApp sold to Facebook – it was prepared to offer something fresh and new, but now will have to prove itself against Apple’s built-in messenger.

The company hopes its product is interesting enough to thrive even in this competitive landscape. To help encourage growth, it has added a ton of ways for users to add friends. You can add them by mobile number, invite them from your contacts, add them by username, add them from Twitter, or even add friends who are nearby (as detected via Wi-Fi and Bluetooth).

Amity is now preparing to raise a seed round. Its first investor is Mick Johnson, Facebook’s former Director of Product for Mobile, who offered Amity a five-figure investment.

The app is a free download on iOS and Android.


via:  techcrunch

Amazon Echo could soon start talking to you unprompted

The speaker that speaks might soon speak without you speaking to ask it to speak. Amazon’s Echo is set to get push notifications, according to The Information, which would allow it to give you a heads up about activity from its connected services, so it could, for example, tell you when your connected doorbell rings or pipe up and tell you when a loved one’s flight has landed.

Currently, Echo only speaks when spoken to; a user has to use the activation word “Alexa” to prompt it to begin listening for a command or request, and then it’ll respond to said input with its own vocal response. Alexa hasn’t supported the ability to provide any kind of audio notice unprompted as a result of data it receives from a user’s connected services – the closest it comes is being able to sound an alert based on an alarm or timer.

Echo has both audio and visual capabilities, thanks to a light ring that surrounds its upper edge, and The Information suggests Amazon could allow developers access to both for push notifications, so that users can choose how much of an intrusion said notices provide.

Alexa is also a service that exists unbound from the Echo hardware itself, and it’s very possible that any push notification support would extend to other hardware that uses the Alexa API, including the Nucleus smart intercom. The Information says the use of push notices would be part of a larger plan to give developers more control of third-party apps and gadgets overall via Echo and Alexa.

Notifications will be a tricky thing to get right on Echo, since a push alert with voice in a device with no display is a very different thing from a subtle vibration or screen-based alert on a smartphone. Still, it’s a feature that could do very well provided the user has total and intuitive control over when they’re alerted, and how.


via:  techcrunch