Monthly Archives: March 2018

Microsoft forces Windows 10 update on PCs that were set up to block it

Some users reported being pushed to the Win10 1709 upgrade with no advance warning.

  • Certain Windows 10 users are being forced to upgrade to version 1709, even if they have deferred the Feature Updates.
  • All users who have been forced to upgrade to Windows 10 version 1709 seem to have limited the Diagnostic Data that could be collected by Microsoft.

Some Windows 10 users are reportedly being forced to upgrade to version 1709, even if they had chosen to opt out of automatic updates.

As reported by Windows blog AskWoody, Windows 10 users on versions 1607 and 1703 were pushed into the update, even if they had Feature Updates deferred. In a separate Woody on Windows column in Computerworld, it was also noted that the updates were forced on users with no advance warning.

Version 1607 is also known as the Anniversary Update, and version 1703 as the Creators Update. The push to version 1709 is an upgrade to what is known as the Fall Creators Update, originally released on October 17, 2017.

The forced updates are interesting because they seem to bypass a safeguard feature that prevents automatic updates. By deferring feature updates, Windows users can push back certain updates for quite a long time, placing them on a path called “Current Branch for Business.” But this surprise upgrade was unavoidable for some users, even with the deferral in place.

However, as noted in Computerworld’s report, Microsoft has done this three times before: once in July 2017, once in November 2017, and once in January 2018.

This is causing problems for some users. A user named bobcat5536 posted on the AskWoody site that the update caused their PC to boot into version 1709, but with no sound or color.

This forced upgrade didn’t hit all users of version 1607 and version 1703. But the users who were forced to upgrade seem to have had the Diagnostic Data level set to zero, Computerworld reported. To put it another way, upgrades were pushed to users who were sending “the minimum amount of telemetry to Microsoft,” the report said.

If a user’s Diagnostic Data level is set to Full or Basic, they likely won’t get the update, the report noted. The forced update may also stem from the upgrade being delivered outside the normal Windows Update channel, and therefore ignoring the configured deferral settings.
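
A quick way to check which camp a given machine falls into is to read the telemetry policy value directly. Below is a minimal Python sketch that inspects the commonly documented AllowTelemetry registry value; the key path and level meanings are standard Windows policy settings, but whether this exact value drove Microsoft’s targeting is an assumption, not something Microsoft has confirmed.

```python
import winreg

# Standard Windows policy location for the telemetry level
# (0 = Security/minimum, 1 = Basic, 2 = Enhanced, 3 = Full).
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

def telemetry_level():
    """Return the configured AllowTelemetry level, or None if no policy is set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _value_type = winreg.QueryValueEx(key, "AllowTelemetry")
            return value
    except FileNotFoundError:
        return None  # no policy set; Windows falls back to its default level

if __name__ == "__main__":
    level = telemetry_level()
    if level == 0:
        print("Telemetry at minimum -- the group reportedly hit by the forced upgrade.")
    else:
        print(f"AllowTelemetry = {level}")
```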

 

via:  techrepublic

These programs will save your butt when Mac users need to remove malware


No wonder they moved on to High Sierra.

 

Yes, Virginia, Macs do get viruses. By 2017, McAfee said it had detected over 700,000 Mac malware strains. The lion’s share of Mac malware is adware. It’s certainly better to get infected by adware than by ransomware (although Mac ransomware is a thing, too), but adware is still something you want to get rid of: some adware engages in spyware behavior that violates your privacy and puts your sensitive data at risk. And pretty much all malware consumes CPU cycles and memory that could be better allocated to the applications you actually want to run!

Now that the “Macs don’t get malware” myth is gradually starting to fade away, it’s likely you will be called upon to remove malware from someone’s Mac.

 

Before I start recommending programs, I’ll show you a couple of little procedures I was taught that may help users and tech support with very mild forms of Mac malware. If a user’s Mac behaves suspiciously, try these steps first and run malware removal applications second.

This is what you can do if a user’s Mac gets “you’ve got a virus” scareware in their web browser. (This applies to any web browser in macOS, not just Safari.)

Close the web browser right away. The user can always retrieve the tabs they were using later.

Open the Downloads folder. Drag every installer and other unfamiliar file to the Trash, then empty the Trash and relaunch the web browser. If you don’t see the scareware pages again, chances are you removed the web malware. But I would still run malware removal tools afterwards.
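
If you would rather script that Downloads sweep, or need to repeat it on several Macs, here is a rough Python sketch of the same idea. It is a hypothetical helper, not an established tool: the installer suffix list and the quarantine folder name are my own assumptions, and it moves files aside instead of deleting them so nothing legitimate is lost by accident.

```python
import shutil
from pathlib import Path

# Common macOS installer file types (an assumption; extend as needed).
INSTALLER_SUFFIXES = {".dmg", ".pkg", ".mpkg"}

def quarantine_installers(downloads: Path = Path.home() / "Downloads",
                          quarantine: Path = Path.home() / "Quarantine") -> None:
    """Move installer files out of Downloads into a quarantine folder."""
    quarantine.mkdir(exist_ok=True)
    for item in downloads.iterdir():
        if item.is_file() and item.suffix.lower() in INSTALLER_SUFFIXES:
            print(f"Quarantining {item.name}")
            shutil.move(str(item), str(quarantine / item.name))

if __name__ == "__main__":
    quarantine_installers()  # unfamiliar non-installer files still need a manual look
```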

Here’s something you can try if you see the UI of an app that you suspect is malicious. Note the name of the app, then try to close it. If you can’t close it and are reduced to dragging the window out of the way, that’s a good reason to be suspicious. Open the Utilities folder and launch Activity Monitor. Look under All Processes for the name of the suspicious app, or anything else you don’t recognize, and click Quit Process for each of them. Then check your Applications folder for the suspicious app’s name; if you find it, drag the icon to the Trash and empty the Trash. Whether or not you were able to trash the malicious application, you should still run malware removal tools afterwards. My malware removal experience has taught me that malware, even after removal, can leave behind malicious files and unwelcome changes to configuration files.
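
The same hunt can be scripted if Activity Monitor itself is being interfered with. Here is a minimal Python sketch of the equivalent, shelling out to the standard ps command; “MacDefender” is just a stand-in for whatever suspicious app name you noted.

```python
import os
import signal
import subprocess

def find_processes(name_fragment: str):
    """Return (pid, command) pairs whose command name contains name_fragment."""
    output = subprocess.check_output(["ps", "-axo", "pid=,comm="], text=True)
    matches = []
    for line in output.splitlines():
        parts = line.strip().split(None, 1)
        if len(parts) != 2:
            continue
        pid, comm = parts
        if name_fragment.lower() in comm.lower():
            matches.append((int(pid), comm))
    return matches

if __name__ == "__main__":
    for pid, comm in find_processes("MacDefender"):  # stand-in name
        print(f"Sending SIGTERM to {comm} (pid {pid})")
        os.kill(pid, signal.SIGTERM)  # polite quit; SIGKILL only as a last resort
```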

As in my Windows piece, I recommend putting these apps onto both USB sticks and DVDs. Carry them with you in both mediums, just in case only one method works on a given Mac: many MacBooks lack optical drives, and you may also find a Mac with a functioning optical drive but malfunctioning USB ports. As I said, be prepared for anything.

Malware removal

Malwarebytes for Mac

I recommended Malwarebytes in my Windows piece. The Mac version is great, too! The free version of Malwarebytes for Mac will scan your disks and remove any malware it recognizes, and the UI is nice and simple. You can download it from here.

No consumer malware removal tool will help with zero-day or fileless attacks. But the majority of Mac malware can be removed with Malwarebytes for Mac, provided you have updated its signatures recently.

Mac Rogue Remover Tool

Some older versions of OS X still have a serious problem with the Mac Defender, Mac Security, Mac Protector, and Mac Guard rogue anti-spyware programs. If your user runs the Leopard, Snow Leopard, Lion, or Mountain Lion versions of OS X, BleepingComputer’s tool will remove those particular trojans which plague those operating systems.

Download BleepingComputer’s free tool here.

Kaspersky Virus Scanner for Mac

Kaspersky’s freeware tool for Mac can detect and remove malware that targets Windows and Android. Windows and Android malware may not noticeably affect your Mac, but you don’t want to pass that malware along to Windows PCs or Android devices when they connect to your Mac over the internet, get mounted, or share disks.

Kaspersky Virus Scanner will also remove malware that targets macOS specifically, so it’s worth a try. You can learn more here.

Bootable OS

It’s not unheard of for a Mac to become difficult or impossible to boot into macOS properly; some Mac malware may damage the file system or boot sector. Insert a DVD or USB stick containing the following OS into the user’s Mac and reboot it. Before the Mac tries to boot into macOS or OS X, hold the Option (⌥) key. This launches Startup Manager, from which you can select the optical or USB disk.

Disk images on a USB stick need to be written with software which makes them bootable. Again, you can use UNetbootin to make a bootable USB drive. There are Windows, Mac, and Linux versions of UNetbootin you can download from here.

PartedMagic

I recommended PartedMagic for Windows. But as it supports HFS and HFS+ as well, you can also use PartedMagic to fix broken file systems on a Mac. PartedMagic can partition, rescue data, fix how your HDD boots, and even do disk cloning.

You can download it here.

77% of companies don’t have a consistent cybersecurity response plan – Report

An IBM security report found that the time to resolve security issues is increasing, and that is costing companies more money.

  • In a study of cyber resilience, 77% of respondents didn’t have a formal cybersecurity incident response plan (CSIRP) applied consistently across their organization. — IBM, 2018
  • 57% of business leaders said it’s taking longer to resolve cyber incidents and 65% said attack severity is increasing. — IBM, 2018

Despite the rapid proliferation of new cyber threats, 77% of business leaders admitted that they don’t have a formal cybersecurity incident response plan (CSIRP) that’s applied consistently in their organization.

That statistic comes from a new IBM report on cybersecurity resilience—a study of 2,800 security and IT professionals from around the world—released Wednesday. Although a formal CSIRP can be considered a core part of cyber readiness, nearly half of those surveyed said that their response plan is informal or ad hoc, if it exists at all.

Even though a majority of the respondents didn’t have a formal plan applied properly in their business, 72% felt that they were more cyber resilient today than they were at the same time last year. Of those that felt confident in their resilience, 61% said it was due to their ability to hire skilled security staff.

But, as any security expert knows, an organization needs the right people and the right tools to stay safe. Apparently, many respondents felt that way too, as 60% said a lack of investment in next-gen tech like artificial intelligence (AI) and machine learning was holding them back from achieving proper resilience to cyberattacks.

Despite this confidence, 57% said it’s taking longer to resolve cybersecurity incidents than before. Additionally, 65% said the severity of cyberattacks is increasing. And what makes this worse is that only 31% had the proper budget in place to boost their security capabilities.

“Organizations may be feeling more Cyber Resilient today, and the biggest reason why was hiring skilled personnel,” Ted Julian, co-founder of IBM Resilient, said in a press release. “Having the right staff in place is critical but arming them with the most modern tools to augment their work is equally as important.”

The lack of proper security planning could hit these businesses in their wallets as well. The 2017 Cost of a Data Breach Study, also from IBM, found that a data breach costs roughly $1 million less, on average, if the victim can contain it within 30 days.

 

via:  techrepublic

How PostgreSQL just might replace your Oracle database

Although heavily dependent on Oracle today, Salesforce seems to be seeking database freedom—and its efforts could result in the same freedom for all enterprises.

Despite being filled with Oracle veterans, Salesforce.com can’t seem to stop flirting with rival databases, with reports surfacing that the SaaS vendor has made “significant progress” toward moving away from Oracle with its own homegrown database. This comes on the heels of Salesforce adding to its investment in NoSQL database leader MongoDB, which compounds the company’s long-standing interest in PostgreSQL.

With Silicon Valley at the vanguard of change, Salesforce’s infidelity to Oracle could be a sign of, or at least a spark to, a broader shift in enterprise database decisions.

This looking beyond Oracle shouldn’t be happening

Oracle has dominated the database industry for decades, using that heft to catapult it into enterprise applications and other adjacent markets. Lately, however, the wheels seem to be wobbling on its database gravy train. As Gartner analyst Merv Adrian has made clear, although Oracle still has a commanding lead in database market share, it has bled share every year since 2013. The only thing keeping the wheels on that train is inertia: “When someone has invested in the schema design, physical data placement, network architecture, etc. around a particular tool, that doesn’t get lifted and shifted easily, something that Gartner calls ‘entanglement.’”

Such entanglement has been particularly strong at Salesforce. With nearly two decades invested in Oracle, the pain involved in moving off Oracle would be substantial. Even so, and despite a 2013 megadeal between Salesforce and Oracle to cement Salesforce’s dependence on the database giant for nine years, Salesforce has never really stopped shopping around for alternatives.

The reason? Data sovereignty. Even if Oracle weren’t a fierce Salesforce competitor (and it is), having another vendor—any vendor—own such a critical part of a company’s data infrastructure necessarily reduces its agility.

Shopping around for database freedom

And so Salesforce has been looking for alternatives to Oracle. Although attempts to build its own database are relatively new, Salesforce’s efforts to evaluate rival databases have been going on for years, most recently with MongoDB. As reported, Salesforce just increased its investment in NoSQL leader MongoDB by nearly 45,000 shares, having first invested while MongoDB was still a private company. Between the two investments, Salesforce’s MongoDB stake represents 6 percent of its institutional holdings, the second-largest such investment it has made.

Salesforce has been an active investor in a variety of startups over the years, using such investments to strategically keep a pulse on the market (while keeping competitors out). With investments as varied as Twilio, Jitterbit, and SessionM, Salesforce has been a very active investor with tens of millions of dollars plowed into dozens of companies.

Seen this way, the MongoDB investment is no big deal.

Indeed, Salesforce’s MongoDB investment is a rounding error in MongoDB’s current $1.9 billion market cap. Even so, the fact that the SaaS vendor opted to put money into an Oracle database rival suggests an interest in keeping a foot firmly planted outside the Oracle camp. Nor is it alone: MongoDB counts more than 6,000 customers, indicating broad interest in moving beyond Oracle for modern applications.

And yet Salesforce’s database wanderlust points to a different database than MongoDB that could spoil Oracle’s dominance.

A long-term flirtation with PostgreSQL

If, in fact, Salesforce is developing a homegrown replacement for Oracle’s database, it might well be building it on PostgreSQL, the database Salesforce has actively flirted with since 2012. In 2013, Salesforce hired Tom Lane, a prominent PostgreSQL developer. In that same year, it hired several more, and even today PostgreSQL experience is called out in dozens of jobs advertised on the company’s career page. Just as Facebook, Google, and other web giants have shaped MySQL to meet their aggressive demands for scale, so too might Salesforce mold PostgreSQL to wean itself from its dependence on Oracle.

Could Salesforce opt to tweak MongoDB or another NoSQL database? Sure, but it’s more likely that Salesforce would modify PostgreSQL to suit its needs than MongoDB, for a few reasons:

  • Although MongoDB is licensed under an open source license (AGPL version 3), it’s a license that raises questions about whether Salesforce could modify it and run a public service on top without either contributing those changes back to MongoDB (which it is unlikely to want to do) or paying MongoDB a great deal of money (also unlikely).
  • More important, while MongoDB is an excellent database (disclosure: I worked at MongoDB for a few years), it’s not as close a replacement for Oracle as PostgreSQL is. PostgreSQL is by no means a drop-in replacement for Oracle’s database, but a developer or DBA who is familiar with Oracle will find PostgreSQL similar.

Oracle would claim that it isn’t worried, but the DB-Engines database popularity ranking, which measures database popularity across a range of factors, should give it pause. For years, PostgreSQL has been on the rise, even as Oracle and MySQL (its open source database) have faded. PostgreSQL is now a strong fourth place, with MongoDB right behind it. If you talk to Silicon Valley startups and enterprise giants alike, you quickly see that PostgreSQL is having a “moment,” one that has been going on for years.

That moment, however, could become a serious movement with a tech bellwether like Salesforce behind it. If Salesforce jumped to PostgreSQL, or a variant thereof—or even if it managed to build a completely unrelated, custom database—that would be a serious signal to the rest of the Global 2000 that Oracle’s era of dominance is at an end.

 

via: infoworld

EU plans new laws to force companies to hand over data held outside the EU on request

EU Justice Commissioner Vera Jourova claims such measures will speed up legal investigations.

The European Commission is planning new measures in forthcoming law enforcement legislation that would force technology and social media companies to hand over customer data held outside the EU. It claims that the measures, due to be unveiled before the end of March, will speed up legal investigations.

But the new laws would be little different from the demand at issue in the ongoing US case in which the Department of Justice (DoJ) wants Microsoft to hand over emails held in a Microsoft data center located in Ireland.

Microsoft has argued that the data is outside US legal jurisdiction and that the DoJ should, hence, take its order to Ireland. Microsoft is supported in that case by the European Union; in December it was reported that the EU planned to make a submission in support of Microsoft’s position.

But according to Reuters, European officials are planning new laws that will compel organizations to turn over personal data on request, even if that data is held outside the European Union.

The new measures will almost certainly be opposed by privacy campaigners, who claim that such extra-territorial jurisdiction not only erodes well-established legal principles, but will undermine privacy rights.

Technology firms, meanwhile, fear that it will undermine trust in cloud computing and cloud services, not to mention clashing with privacy laws, such as the EU’s own General Data Protection Regulation (GDPR).

Under the proposals, according to Reuters’ sources, the personal data of anyone “linked” with an investigation by an EU state could be compromised, regardless of whether they are an EU citizen or not.

This could potentially put EU states at loggerheads with other governments around the world.

Reuters adds that the proposed legislation is still in its drafting stage and will go before member states by the end of March. The resulting directive could take two years to be agreed.

European Justice Commissioner Vera Jourova appeared to confirm the plans, telling Reuters that current measures for accessing cross-border evidence held on computers were “very slow and non-efficient”.

 

 

via:  v3

Shortly, even the CEO will be outsourced to an online labor marketplace

Over the past decade, there has been a ferocious rise in the freelance economy in the United States. Millions of people today work on platforms ranging from Uber and Lyft to Taskrabbit and Fiverr, accepting what are usually short-term tasks that can be completed efficiently and repeatedly. While these casual jobs have been the focus of intense scrutiny about their pay structures and work security, their effects have been mostly limited to talent not engaged in business professions.

Times are quickly changing, though, and marketplaces are increasingly entering white collar territory with better product design.

Take for example Clora, which works with the highly specialized talent required to produce and launch a new life sciences product. Or Paro, which connects businesses to bookkeepers and other financial professionals to manage a company’s finance department. Or Catalant (formerly HourlyNerd), which works with independent business consultants who can solve a range of complex problems like market sizing or product marketing.

It might be an exaggeration, but it is only a matter of time until rent-a-CEO options exist as well.

Part of this transformation of white collar work certainly comes from companies and workers desiring more flexible work arrangements. However, I would also argue that a renewed focus on product design has been critical for effectively building a marketplace for online talent.

Different tasks often have wildly different requirements and workflows, and the product needs to match the kinds of talent it hopes to attract. These newer marketplaces understand that professional work is often ambiguous and hard to judge for quality, and have built key product features to handle those challenges.

That’s very different from the early years of the consumer internet, when online labor marketplaces were designed as free-for-alls, with both sides of the market competing for transactions to occur. A customer might post a potential job to Craigslist, or a worker might post their availability to be hired. These were marketplaces built around serendipity, with almost no guidance from the platform on what to charge, how to charge, or how to find the best talent for a particular project.

Over time, it became clear that some tasks were much more popular than others, and they were quite repeatable as well. Uber takes a passenger from one origin to one destination, and Taskrabbit allows you to hire someone to install IKEA furniture. You don’t need to use paragraphs to describe what you are looking to do, nor should you have to negotiate a price every single time you want to get into a car or get a MALM bed frame installed.

That regularization became the product itself — suddenly the free-for-all marketplace became a tight menu of options with standardized pricing and rating systems. Even more importantly, the identities of the workers themselves are often shielded from the customer on these platforms. You are buying the company’s brand of quality, not the worker’s guarantee that they can do the job.

There is a marketplace today for pretty much every simple and easy-to-define task imaginable. The challenge has been how to design marketplaces to handle more complex forms of work and evaluate the quality of that labor, particularly in cases where quality can be in the eye of the beholder.

One answer has been to focus on hyper-specific (yet lucrative) verticals. Clora and Paro are good examples of this trend. Clora’s strength is knowing the hiring model for the pharmaceutical and life sciences industry. They understand what talent is needed, and even more importantly, how to evaluate that talent. By focusing on accounting, Paro has a relatively objective standard on what quality work looks like, and that makes evaluation a bit easier.

The key to each of these platforms is that pricing, ratings, quality, and product offerings are not standardized in the same way as the productized marketplaces that came before. Each of these platforms recognizes that, say, the launch of a cancer drug is going to be very different from a male-pattern baldness therapeutic. There is no secret algorithm that is going to be able to detect these nuanced differences without human judgment entering the equation.

So to compensate for that variability, these marketplaces put much more of the judgment around the quality of work on the shoulders of the professionals themselves. Unlike Uber, there is no GPS telling someone to move from this work to that work. As such, these newer marketplaces are something of a hybrid between the first-generation free-for-all Craigslist-style marketplaces and the second-generation productized ones like Uber we have seen more recently.

B12, a company I profiled recently, is trying to build a unified infrastructure for handling exactly these sorts of ambiguous jobs. Its open-source Orchestra platform is designed to handle not just the non-linear work patterns that arise in these contexts, but also how to judge quality on the platform (its answer is to have professionals evaluate other professionals). It’s an audacious idea, although I still believe we will see many verticalized marketplaces, since sales and marketing into each of these industries is challenging.

When we really evaluate what is happening around work, particularly professional work, there can be incredible complications that make machine learning and algorithms hard to apply. Product designers need to build the ambiguity, complexity, and human social factors into labor marketplaces to properly attack these industries. Thankfully, we are seeing a new crop of startups do exactly that, and that will not just provide opportunities for founders and VCs to make a bunch of money, but potentially for millions more workers as well.

 

via:  techcrunch

Google Search comes to iMessage

Google Search is now available within iMessage. In an effort to more deeply integrate Google’s search engine on iOS devices, the company announced today that its Search app for iOS has added an iMessage extension, allowing iPhone and iPad users to search the web, then quickly add those search results to their iMessage conversations.

The extension itself is clearly inspired by Google’s experiments with Gboard, Google’s own third-party keyboard app for iOS devices, and comes at a time when the popularity of alternative iOS keyboards seems to be declining. Nuance, for example, just shut down its Swype keyboard app, and SwiftKey was acquired by Microsoft a couple of years ago.

With Google’s iMessage extension, users can type a query in the search box, or tap a button below for a specific type of search – like Weather, Food, Nearby (venues/businesses), Trending (news), or Videos – all very similar to Gboard.

Google highlights restaurant search in particular in its announcement, as deciding where to eat is a common theme in iMessage chats. Tapping “Nearby” can help quickly pull up business listings or points of interest near your current location, which is also useful when communicating about a particular location.

Each search result includes a “Share” button that, when tapped, adds the item directly into an iMessage conversation as a card. When the recipient taps on the card, they’ll go to the Google search result.

In addition, Google’s iMessage app offers a GIF search engine. This is available by tapping on the “GIF” button to the right of the search box, which will immediately load a selection of GIFs from sites like Giphy and Tenor, which you can then filter further by performing a search.

GIF search is a popular feature in Gboard, following that app’s update last year.

Making Gboard’s features available via an iMessage extension makes sense because not everyone wants to swap out their default keyboard on their iOS device.

It also gives Google another way to reach users on iOS, even when they’re not in Google’s app itself. This is important because Google’s traffic acquisition costs have been climbing due to the shift to mobile devices, where apps and assistants like Siri serve up many of the answers users once turned to Google for on the desktop. Google also has to pay Apple billions to be the default search engine on iOS.

The app, like other iMessage extensions, is available by tapping the iMessage apps drawer in iMessage, then scrolling over to the Google app icon. (If you have a lot of iMessage apps already installed, it may be tucked away under “More.”)

The iMessage extension is one of three new features Google announced today for iOS users. However, it has actually been available in the Google app since February 7, 2018, according to Sensor Tower, indicating more of a soft launch.

Google also added two other features that integrate its search engine into iOS in new ways. One is a new share sheet option: now, when you share a webpage from Safari with Google, it will also show you suggestions for related content. A similar feature launched last year in the Google Search app, but it is annoying there because it pops up over the content when you reach the bottom of the page. The new Share with Google functionality makes a bit more sense, as the idea is that you can move from a webpage in Safari directly into a set of Google search results on the topic.

Also new is support for drag and drop on iPad. You can use the feature to move text, images, and links to and from the Google app, share articles from Google into iMessage, or save them into Notes for later reading.

All features are available today, but the iMessage extension is currently U.S.-only, Google says.

 

via:  techcrunch

Amazon is buying smart doorbell maker Ring

With Nest’s first smart video doorbell right around the corner, Amazon is busy buying up the competition.

After acquiring Blink just two months ago, Amazon is now acquiring Ring, makers of the self-titled Ring doorbell (plus a bunch of other security gear, like solar security cameras, floodlight cams and an in-home alarm system).

GeekWire broke the rumor this afternoon, and we’ve just received independent confirmation.

Details on the deal are still pretty light; the financial terms of the deal, for example, haven’t trickled out just yet. Update: Reuters is reporting, via tweet, that the sale price was more than $1 billion. The company had raised around $209 million to date, according to Crunchbase.

This acquisition makes plenty of sense. Amazon has already built a few connected cameras of its own — but hardware is, as they say, hard, and that’s not going to change. With nearly a dozen solid products to its name, the Ring team has proven themselves more than capable of building hardware (and I’m sure its array of patents doesn’t hurt, either.) With Amazon, Google, Apple et al. all duking it out for physical space in and around your home, someone was going to make a big offer — and I’d be surprised if Amazon was the only bidder in the mix. Plus, who on earth is responsible for more doorbell presses than Amazon?

(Fun bit of trivia: Ring debuted to the world on Shark Tank back in 2013, then known as “DoorBot.” They wanted $700,000 for 10 percent of the company, but no one took the deal.)

 

via:  techcrunch

84% of cybersecurity pros are open to switching companies in 2018

Cybersecurity workers say they are seeking workplaces that take their opinions seriously more than those that offer a high salary, according to (ISC)².

As demand for cybersecurity professionals continues to grow, those with the coveted skillset are looking for workplaces that offer more than just a large salary, according to a new report from (ISC)².

Of the 250 cybersecurity pros surveyed across the US and Canada, 14% said they plan to search for a new job in 2018, while 70% said they are open to new job opportunities, the report found. Just 15% said they have no plans to switch jobs this year.

The data suggests that unmet expectations between companies and their cybersecurity workforce, both during the hiring process and on the job, combined with high demand for security skills and frequent contact from recruiters, may be luring cybersecurity pros away from their current workplaces.

“The cybersecurity workforce gap is growing rapidly, and turnover within cybersecurity teams makes filling those roles even more challenging,” (ISC)² COO Wesley Simpson said in a press release. “It is more critical than ever for organizations to ensure their recruitment and employment retention strategies are aligned with what cybersecurity professionals want most from an employer.”

When asked what’s most important for cybersecurity pros’ personal fulfillment at work, the top response (68%) was wanting to work where their opinions are taken seriously, the survey found. Other top drivers were wanting to work where they can protect people and their data (62%), wanting to work for an employer that adheres to a strong code of ethics (59%), and wanting a high salary (49%).

In terms of professional goals, 62% of cyber pros said they want to work for a company with clearly defined ownership of cybersecurity responsibilities, while 59% said they want an employer that views cybersecurity more broadly than just technology. Another 59% said they want to work for an organization that trains employees on cybersecurity best practices.

Due to demand, cybersecurity workers are being aggressively targeted by recruiters: 13% said they are contacted by recruiters many times a day, while 8% said once a day, 16% said a few times a week, and 34% said a couple of times each month.

Employers often fail to impress cybersecurity jobseekers and staff, the report found. An organization demonstrates a lack of cybersecurity knowledge to a cyber pro when it offers vague job descriptions (52%), job descriptions that do not accurately reflect the role responsibilities (44%), and job postings that ask for insufficient qualifications (42%).

Before taking a job with a new company, 85% of cyber pros said they would investigate that employer’s security capabilities—and that what they discover would influence their decision. More than half of respondents (52%) said that they are more likely to take a job with an organization that takes security seriously.

 

via:  techrepublic

Google’s DeepMind and the NHS: A glimpse of what AI means for the future of healthcare

The Google subsidiary has struck a series of deals with organizations in the UK health service — so what’s really happening?

Healthcare has always been seen as rich pickings for artificial intelligence: when IBM first decided to kit out Watson for use in the enterprise, its earliest commercial test was in cancer care.

There are a number of reasons why health and artificial intelligence might seem like a good fit.

One is simply that healthcare organizations around the world, and in the UK in particular, need to save money: any task that can be taken off a clinician’s workload and automated by AI potentially represents a cost saving.

What’s more, healthcare has loads of data — test results, scans, consultation notes, details of appointment follow-ups — most of which is unstructured. For AI companies, that means lots of material that can be used to train up AI systems, and for healthcare providers, it means a lot of data that needs organizing and turning into usable information.

For an NHS under pressure to deliver better healthcare at lower cost, the lure of AI will prove hard to resist.

Take the agreements between Google subsidiary DeepMind and the likes of Moorfields Eye Hospital and University College London Hospitals (UCLH) Trust: both pave the way for a future where the routine work of reading scans is done by an algorithm rather than a healthcare professional, leaving clinicians more free time to attend to patients.

The deals lay the foundation for greater use of AI in the NHS by providing data that DeepMind can use to train up its algorithms for healthcare work. In the case of Moorfields, a million eye scans along with associated information about the conditions they represent will be fed into the DeepMind software, teaching it how to recognize eye illnesses from such scans alone in future.
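
DeepMind has not published the details of that system, but the paragraph above describes ordinary supervised learning: labeled scans go in, and a model that predicts conditions from scans alone comes out. Here is a toy Python sketch of that idea, with random arrays standing in for eye scans; nothing here reflects DeepMind’s actual models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: each "scan" is a flattened 8x8 grayscale image,
# each label one of three made-up eye conditions (0, 1, 2).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on labeled examples, then predict conditions from scans alone.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~chance on random data
```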

Under the UCLH deal, 700 scans of head and neck cancers will be given to DeepMind to see if its AI can be used in ‘segmentation’, the lengthy process whereby the areas to be treated or avoided during radiotherapy are delineated using patient scans. Currently, it’s a process that takes four hours — a figure that DeepMind and the trust claim could eventually be cut down to one hour with the use of AI.

It’s not quite clear who will be benefitting most from these deals: DeepMind or the NHS. Moorfields’ scans will allow the Google subsidiary to improve the commercial viability of its systems, by improving the accuracy with which it can detect particular eye diseases — potentially making a commercial version of the software a must-buy for the hospital. However, according to a freedom of information (FOI) request filed by ZDNet, there has been no deal agreed between the two organizations to roll out the software once it’s trained up, and DeepMind is only paying Moorfields for the staff time involved in processing the data before handing it on to the AI company.

It’s a similar story with UCLH: “The collaboration between UCLH and DeepMind is focused on research with the goal of publishing the results. There are currently no plans for future rollouts… DeepMind is not providing financial compensation to UCLH for access to data. DeepMind will support the costs of UCLH staff time spent on the de-identification and secure transfer of data,” UCLH said in response to an FOI filed by ZDNet.

Both trusts have already made clear that they, rather than DeepMind, remain the data controller and that ownership of the scans remains with them. DeepMind for its part will appoint a data guardian to control who has access to the scans, and will destroy the data once the agreement ends.

The need to champion good data hygiene likely comes from an earlier deal DeepMind made with the Royal Free NHS Trust, which came in for a good deal of criticism over its handling of patient data.

After a New Scientist investigation revealed that the details of 1.6 million people were being made available to DeepMind — including data over and above that which related to acute kidney injury — the Information Commissioner’s Office began an investigation into the pair’s arrangement. It was also recently criticized in an academic paper, Google DeepMind and healthcare in an age of algorithms, which said “the collaboration has suffered from a lack of clarity and openness”, adding: “if DeepMind and Royal Free had endeavored to inform past and present patients of plans for their data, initially and as they evolved, either through email or by letter, much of the subsequent fallout would have been mitigated”.

The Royal Free entered its agreement with DeepMind last year, when it announced it would be using an app called Streams to identify people who could be at risk of acute kidney injury. By keeping tabs on patients’ blood test results and other data, the DeepMind system can alert clinicians through the Streams app on a dedicated handheld device about when patients are experiencing a deterioration in their condition, and so aid medical staff to take preventative action.
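
Streams applies an existing national algorithm (the quote from Gartner’s Anurag Gupta below credits NICE), and the heart of that kind of acute kidney injury rule is the ratio of a patient’s current creatinine result to a recent baseline, with escalating alert stages. The simplified Python sketch below uses the widely cited KDIGO-style cut-offs; treat the thresholds as an illustration of the mechanism, not a clinical reference.

```python
def aki_stage(current_creatinine: float, baseline_creatinine: float) -> int:
    """Rough AKI staging by creatinine ratio (simplified, illustrative only).

    KDIGO-style cut-offs: >= 1.5x baseline is stage 1, >= 2x is stage 2,
    >= 3x is stage 3. The real national algorithm also handles baseline
    selection windows and absolute rises, which are omitted here.
    """
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0

def should_alert(current: float, baseline: float) -> bool:
    """Flag any non-zero stage for clinician review via the app."""
    return aki_stage(current, baseline) > 0

if __name__ == "__main__":
    print(aki_stage(180.0, 70.0))    # ~2.6x baseline -> stage 2
    print(should_alert(75.0, 70.0))  # small rise -> no alert
```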

The system was first used with live patient data in January of this year, the Royal Free Trust told ZDNet in response to an FOI. Up to 40 clinicians will be using Streams in the first phase of the rollout, and “the implementation will be phased across all Trust sites starting with the Royal Free Hospital”.

A similar agreement with the Imperial Trust looks to be more of a slow burner than the Royal Free’s: when the trust announced the Google deal, it said it had signed up for an API that would allow data to be moved between electronic patient record systems and clinical apps, be they DeepMind or other companies’.

The trust did note that the deal would cover the eventual rollout of Streams. In response to an FOI request by ZDNet, Imperial said it will begin piloting the Streams app in either April or May, and the trial is currently working its way through the Trust’s governance processes for technical and clinical approval.

As a result, Imperial said it couldn’t provide details of where Streams will be piloted and by how many staff, but added it will be used “in a limited environment and in parallel with existing response processes and procedures”.

It also appears not to be following in the footsteps of the Royal Free when it comes to functionality just yet: the initial deployment won’t use the alerting facility, so no targets have been set around when staff should respond to any alerts sent.

Interestingly, given DeepMind is best known as an AI company, there is no whiff of AI used in Streams, as Imperial’s website makes clear, saying: “this partnership does not use artificial intelligence (AI) technology and the agreement between the Trust and DeepMind does not allow the use of artificial intelligence”.

Streams is, according to Gartner analyst Anurag Gupta, more akin to standard-issue analytics software than AI.

“Streams has nothing to do with AI at this point in time. It basically collects information from a few different systems, it then uses NICE’s proprietary algorithm on top of that… and it presents information from multiple systems in an easy to understand format. It’s common sense. It’s more like business intelligence.”

While DeepMind’s work with the UK’s health service seems to have attracted a great deal of headlines — many of them negative — there’s no doubt that we’ll be seeing more artificial intelligence in healthcare in general and in the NHS in particular. Anecdotally, the non-AI Streams app has already been saving hours of nurses’ time, and the benefits of such systems could potentially be even greater once AI has been properly brought to bear. Used well, with proper data governance, patient buy-in, and competition among AI providers, it can help the NHS deliver better patient care by freeing up clinicians from some of the more mundane tasks.

“In the short term, our skills and the AI’s skills are complementary: things that AI systems can do very well, crunching a lot of information, making sense of a lot of information in a narrow domain, doing things in a repetitive fashion — that work can be done by AI, and [humans’ skills] can sit on top of that AI, because we are much better in building the context to that information,” Gupta said.

Diagnosis, he adds, is “part science and part art”. Artificial intelligence can provide the former, but only human intelligence can do both.

 

via:  zdnet