Monthly Archives: March 2017

How to use the collaborative editing features in iWork for iOS 10

Apple’s iWork, a great productivity tool for iOS users, was recently improved with revamped collaboration tools. Here’s how to use the tools in a business environment with mixed operating systems.

iWork is Apple’s first-party suite of productivity apps, handling word processing (Pages), spreadsheet creation (Numbers), and presentations (Keynote). These apps have existed on iOS since the original iPad was released, and over the years they have gained iPhone and iCloud Drive support. The newest iteration includes an overhauled collaborative editing feature that works on Mac, Windows, and iOS through native apps and a fully featured website. The new collaborative editing features are backed by iCloud’s CloudKit.

I’ll walk you through the process of saving a document for collaborative editing, editing in the collaborative environment in the native iOS app, and opening a document sent to you for collaboration purposes.

How to send a collaboration invite

To send a collaboration invite, save the document in your iCloud Drive folder; if you don’t save the document there, you’ll be prompted to move it before continuing (which the iWork apps can do automatically).

In any of the iWork apps, follow these steps to start sharing a document.

  1. Select the more menu item (three dots) in the navigation bar.
  2. Select Collaborate With Others.
  3. Select how you’d like to add people (iMessage, Mail, Direct Link, etc.) (Figure A).
  4. Select any additional Share Options you wish to apply.

Figure A



Select between messaging, emailing, or sharing the link directly or through a supported app on your device to start the collaboration process.

Cory Bohon / TechRepublic

After you select a share option, the document will be available to the people with whom you’ve shared the document. If you copy a link, you can paste that anywhere for people to click on.

In the Share Options menu item, you can select who has access to the document: choose between Only People You Invite and Anyone With The Link. You can also select permissions: choose whether anyone with access Can Make Changes or View Only. When you select Anyone With The Link as a sharing option, you also get the ability to add a password.

How to use the collaborative editing tools

Any tool that you can utilize inside of Pages, Numbers, or Keynote can be used in the collaborative editing interface—use comments to banter back and forth on changes that need to be made and edit the document as you normally would. As each collaborator makes changes, their insertion point will be visible in a different color (Figure B).


Figure B



When edits are being made, each collaborator will receive a different color insertion point to keep track of who is editing what.

Cory Bohon / TechRepublic


You can easily see who has joined the document in collaborative or view-only mode. To do this, tap the new icon in the navigation bar on iOS that looks like the silhouette of a person. Doing this shows who has access to the document and lets you change the share options (Figure C). The number beside the icon also denotes how many people are currently editing or viewing the document.


Figure C



This Collaboration view shows who is editing the document, and gives quick access to the document link, the ability to stop sharing with everyone, and the ability to change sharing settings.

Cory Bohon / TechRepublic

How to open a collaborative document

Whenever you receive a link to collaborate on a document, whether it’s inside a text message, an email, or a chat app, the process is the same: Simply click or tap the link and, if your system is capable of opening the document in a native iWork app, it will do so; otherwise, the iCloud website will be launched.

If the native app launches, perform the edits, and the app will automatically save the changes. If the website launches, you’ll be required to log in with your iCloud account, and then join the shared document. Once you’re in the web editor, you’ll have access to most iWork app tools to make changes. In this web interface, your changes will be saved automatically and shared back to the collaboration group’s devices.


via:  techrepublic

The Real Reason You’re Not Allowed To Work From Home

Back in 1984, every HR conference I attended included at least one session on Managing the Flexible Workforce.

Back in 1984, everyone who studied the workplace predicted that most white-collar employees would be working from home or somewhere else — the beach or a coffee shop, for instance — by now.

Thirty-three years later, the prediction that most white-collar employees would be working from home and/or making their own schedule has not come true.


According to the Bureau of Labor Statistics, about 24% of employed people did some or all of their work from home on the days they worked in 2015.

With so much technology available to make remote work faster, less expensive and more effective, why is this number so low?

Some large organizations (including Yahoo! when CEO Marissa Mayer took the helm) have pulled their formerly flexible workforces back into the office.

Why would a company tell employees “You may no longer work from home — come back and work in the office”?

Office space is expensive.

Back in the 1980s and 1990s, employers started realizing how expensive their office space was.

They started a practice called ‘hoteling’ where employees take an available workstation for the day when they are in the office, rather than having a fixed office or workstation of their own. Employers could cut down on office space that way.

Wouldn’t it be cheaper for most or all employers to let their white-collar, Knowledge Worker employees work from home?

It would be cheaper. Most of us grew up learning that business is the art of investing wisely, but sometimes emotions overpower financial decision-making in the business world.

The real reason you’re not allowed to work from home is that managers at all levels are fearful of change and especially fearful of change that requires them to step out of their comfort zone.

A leader whose employees work from home or from Starbucks has to trust their teammates. If the leader is fearful, the first way that fear will show itself is in the policies the leader hands down.

Leaders make their fear and trust levels clear in their words and even more so in their actions.

Leaders who cannot trust themselves enough to hire people they can trust will always revert to power and control mechanisms, including forcing people to drive a car or take a train to work every day so that their supervisors can keep an eye on them.

Those control mechanisms keep the leader’s fear at bay.

Managers often say “I need my employees here in the office! That’s where collaboration and teamwork spring up!”

In their hearts they know that collaboration and teamwork are things that spring up organically when people feel free to be themselves, and only then.

You will never get organic teamwork or collaboration out of people who are forced to be in a place they don’t want to be.

The reason you’re not allowed to work from home is that fear grips the corporate and institutional landscape, and many leaders are afraid to trust their employees whenever they’re out of sight.

They may assume that an employee who’s working from home is watching TV soap operas and eating bon-bons instead of getting their work done.

That lack of trust in themselves is a failure of leadership, and it hurts communities and individuals as well as the organization’s own customers and shareholders.

Clearing roads and highways of morning and afternoon commuters would be good for the planet, as well as the physical and emotional health of commuters.

Allowing employees to work from home would give them better life/work balance, more chances to stretch during the day and a less hectic environment in which to have big ideas.

Your customers need and expect you to staff your organization with people who are charged up and set free to accomplish great things.

Your customers would not approve of a leadership style that only trusts your own hand-picked employees when they are right in front of you!

It’s time to ease up on your fear and let your employees work from home. You can begin with a pilot project and expand your work-from-home options from there.

It is time to step out of managerial fear and trust the people you hired to run your company.

If you can’t trust the people you carefully vetted and selected from a group of qualified candidates, who can you trust?

If you don’t trust yourself to lead, why should any customer, employee or shareholder trust you?


via:  forbes

Honeypot catches social engineering scams on social media

Research company investigates 40,000 fake accounts to find impersonator tactics.

Say you just got laid off from your job. Bills are piling up and the pressure to get a new job quickly is building. Your desperation has you taking chances you wouldn’t normally take, such as clicking on a link to a job offer — even if something about it doesn’t quite look right.

Research firm ZeroFOX has found that unless a company has a verified recruiting account, it can be difficult for an applicant to distinguish a legitimate account from an impersonator. One way to spot an impersonator is that they commonly provide Gmail, Yahoo, and other free email provider addresses through which applicants can inquire about a job and send their resumes (more advanced scammers can spoof company email domains). Some also include links to official job sites and LinkedIn for follow-up. In most cases, the impersonator uses the company logo to portray themselves as an official recruiter for the company.

Once the impersonator receives an email, he or she will either try to extract personally identifiable information (PII) or demand payment for an application fee. Some companies are aware of recruitment scams and have a page on their site asking job seekers to be aware of scammers using unofficial company email addresses.

In investigating 40,000 fake accounts, ZeroFOX created honeypot accounts, engaged with the impersonators, and observed the social engineering attacks within a sandboxed environment. This allowed the research company to reveal the anatomy of the attacks, identify commonalities and differences among them, and more clearly understand motives.

“Social media is no longer used solely as a personal communication tool. It has evolved into a critical business application – helping businesses dramatically increase revenue and productivity – while strengthening and growing customer relationships. As businesses increasingly look to leverage social media, so are cybercriminals,” said ZeroFOX’s Evan Blair.



The overall number of malicious impersonations increased 11-fold between December 2014 and December 2016, according to ZeroFOX. In its research, ZeroFOX uses a suite of machine learning, natural language processing, image recognition, and other data science techniques to measure the relative similarity between an impersonating profile and the genuine account.
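ZeroFOX’s production pipeline is far more elaborate, but the core idea of scoring how closely a suspect profile resembles the genuine one can be sketched with nothing more than an edit-distance ratio on account handles. The function name and example handles below are illustrative, not ZeroFOX’s actual model:

```python
from difflib import SequenceMatcher

def impersonation_score(genuine_handle: str, candidate_handle: str) -> float:
    """Return a 0..1 similarity score between two account handles.

    A candidate handle that closely resembles a known genuine handle is
    one weak signal of impersonation.
    """
    return SequenceMatcher(
        None, genuine_handle.lower(), candidate_handle.lower()
    ).ratio()

# A lookalike handle scores close to 1.0; an unrelated one scores much lower.
print(impersonation_score("BankOfExample", "BankOfExampIe"))
print(impersonation_score("BankOfExample", "cat_pictures99"))
```

A real detector would combine many such weak signals (bio text, profile imagery, posting behavior) rather than relying on handle similarity alone.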

Nearly half (48.1 percent) of all malicious social media impersonators disguise their payload as a fake coupon or giveaway, using the brand to attract promotion seekers. More than 1,000 impersonators incorporated credibility-building words like “official,” “authentic,” “real,” “authorized,” “actual,” and “legitimate” within their names, screen names and descriptions.
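A first-pass filter for that credibility-word tactic can be as simple as intersecting a profile’s visible text with the word list. This is a sketch; the word list and function name are hypothetical, not ZeroFOX’s implementation:

```python
# Credibility-building words observed in impersonator profiles.
CREDIBILITY_WORDS = {"official", "authentic", "real", "authorized",
                     "actual", "legitimate"}

def credibility_word_flags(profile_text: str) -> set:
    """Return which credibility-building words appear in a profile's
    name, screen name, or description (passed in as one string)."""
    return set(profile_text.lower().split()) & CREDIBILITY_WORDS

# Flags "official" and "real"; an unrelated profile returns an empty set.
print(credibility_word_flags("Example Bank Official real support desk"))
```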

“In our new digital lives, where people are free to assume others’ identities and perpetrate malicious activity in their name, businesses – regardless of size –  are at an all-time high risk of financial and reputational losses. Social media and digital security is now a mission-critical function for brands to protect themselves – and more importantly, their customers – from falling victim to safety, privacy, reputation and revenue vulnerabilities,” Blair said.

The social networks have taken the first step in combatting the impersonator problem by verifying accounts, indicating to a user that the profile they’re interacting with is legitimate and not an imposter. This is similar to websites that are verified using website digital certificates, and browsers that highlight the URL in green. But what this approach doesn’t provide is any indication of a nefarious account, ZeroFOX reports. Social networks rely on abuse reports from their users or manual triage in order to identify and respond to these accounts. This approach cannot keep up with the constant flux of impersonating accounts as they are created and deleted each day.

The problem of fraudulent accounts is systemic across the social networks and the tactics are broad and diverse. Proactively hunting for these accounts requires sophisticated, layered methods using account verification, threat detection, and machine learning.

This approach can subsequently be integrated to allow large-scale, cross-network analysis and improved detection accuracy. Machine learning classifiers can report on these threats targeting an individual or enterprise at large scale. An organization can then take a more proactive and timely approach to thwarting threats, requesting account takedowns, and mitigating risk.

ZeroFOX has shared some of the scenarios they saw in setting the trap:

From product complaints, to account security issues, to undelivered packages, customers publicly express their discontent by directly mentioning the company’s social media account. Companies have responded by forming rapid response teams who address such customer inquiries. But they aren’t the only ones to do so. Impersonators have latched on to the inherent trust that customers place in these support accounts.

Other than the blue verified checkmark, the differences between the real account and its two impersonators are negligible to the human eye. Customers with bank accounts identify themselves by mentioning the authentic bank’s account alongside a personal question, and the impersonator then uses this publicly posted information as a one-stop-shop for victim acquisition.

Another common theme involves impersonators who target military members and veterans. From the data collected, 1,047 impersonators incorporated military-associated words like “military,” “navy,” “army,” “air force,” “marines,” and “nato” within their names, screen names and descriptions. Impersonators try to penetrate the social media circles of military members in order to steal personal and sensitive information.

Some impersonators garner followers and likes by promising vouchers, gift certificates, and other fake giveaway promotions. In most instances they request a @mention or repost of the contest along with an email address or photo. Obtaining followers allows them to inflate their own prominence on social media, a tactic called fame farming.


The value of inflating followers count is threefold:


More followers make for a more credible account: There is a feedback loop between offering fake promotions for likes and having a strong following. A strong following increases an account’s credibility, and more credibility means more follows. Accounts build this following until they are ready to do something else, almost always something malicious, with the account.

Followers now are victims later: By building a following over time without conducting any overtly malicious activity, the followers are less likely to suspect malicious activity once the account does spring into action. The cybercriminal may begin direct messaging its followers or posting more overtly malicious content, such as phishing links disguised as fake offers or malware in the form of fake contests.

Robust accounts can be sold: Scammers, spammers, and cybercriminals pay a hefty price for accounts with a pre-built following. Building and selling accounts, called “account flipping,” is a lucrative tradecraft in the social media cybercrime economy.

Enterprises should be concerned about these tactics because there is a profound element of brand reputation that is not part of the traditional cost analysis of an incident, ZeroFOX said. These attacks target a brand’s customer base, especially those customers who are particularly engaged. Organizations ought to assess these attacks in terms of the value of a single customer, not just the direct financial fallout of the attack.

Paid promotion

Another way for a cybercriminal to ensure their attack is viewed by a huge number of potential victims is to use paid promotion, which broadcasts the phishing link to wider audiences. Promotion is a service offered to social media marketers to display an ad to users beyond just their followers, and it is the basis for revenue for most social networks. Scammers using this method take a huge risk because the social networks review ads before they are posted and the scammer may have their entire account banned if the network deems their purposes to be nefarious. Scammers must invest extra time and energy ensuring their promoted content will dupe the network’s filters.

In one example, a website offering counterfeit sunglasses at a too-good-to-be-true discount was promoted on Instagram. The website sells fake merchandise despite adopting the real brand’s logo. The more scammers are willing to pay, the more widely the networks will distribute the post.

Impersonators use a variety of techniques to avoid detection by the social networks. One of the most popular is creating an account but letting it sit dormant for significant periods of time before springing into action. They can return to dormancy just as quickly. The reasons for this might be:

1. Older accounts are more credible

For a user doing a cursory check on a potentially malicious impersonation account, the account’s age is a good indicator of legitimacy. Users expect the authentic account of well-known brands to have been around for quite a while. For a scammer, this means “aging” the account makes it more authentic. During this aging process, the account must remain undetected, and thus the perpetrators leave the account dormant and blank.

2. Dormant accounts are more likely to fly under the radar

Cyber criminals regularly wipe the account to avoid detection. Wiping the account helps cover the tracks of the attacker and avoid detection in between attacks.

3. The account may have been recently sold

Accounts are bought and sold regularly. Cybercriminals might buy a dormant account with a lucrative handle, perhaps one very similar to that of the brand they intend to impersonate. Once the account has changed hands, it may spring to life and start spreading its attack campaign.
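The dormant-then-active pattern described above lends itself to a crude heuristic: an account with a long silent gap followed by a sudden burst of posts deserves a closer look. A minimal sketch, with invented thresholds:

```python
from datetime import datetime, timedelta

def dormant_burst_flag(post_times, now,
                       min_gap_days=365, burst_window_days=7, burst_posts=20):
    """Flag accounts that sat silent for a long stretch and then
    suddenly produced a burst of posts. Thresholds are illustrative."""
    recent = [t for t in post_times
              if now - t <= timedelta(days=burst_window_days)]
    older = [t for t in post_times
             if now - t > timedelta(days=burst_window_days)]
    if len(recent) < burst_posts or not older:
        return False
    # Gap between the last pre-burst post and the first burst post.
    return min(recent) - max(older) >= timedelta(days=min_gap_days)

now = datetime(2017, 3, 1)
# One post years ago, then a flurry of posts in the last day.
dormant_then_burst = [datetime(2013, 6, 1)] + \
    [now - timedelta(hours=i) for i in range(25)]
# A steady poster: one post roughly every 30 days.
steady_poster = [now - timedelta(days=30 * i) for i in range(24)]

print(dormant_burst_flag(dormant_then_burst, now))  # True
print(dormant_burst_flag(steady_poster, now))       # False
```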

Sneaky ways

The authentic Twitter user @verified posts a URL with information about how users can get their accounts verified. Its impersonator uses the same default image, similar background, and a deceptive @HeIpSupport username with a homoglyph uppercase “i” replacing the lowercase “l.” The account lay dormant for four years before starting to phish, but now actively engages by posting and liking often, following other users, and following back similar accounts spreading malicious URLs.
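That homoglyph trick (an uppercase “i” standing in for a lowercase “l”) can be caught by folding confusable characters together before comparing handles. The sketch below covers only a few substitutions; a real detector would use a fuller confusables table, such as Unicode’s:

```python
# Fold a few common homoglyphs onto a canonical character.
HOMOGLYPHS = str.maketrans({
    "I": "l",  # uppercase I mimics lowercase l in sans-serif fonts
    "0": "o",
    "1": "l",
    "5": "s",
})

def canonical_handle(handle: str) -> str:
    """Lowercase a handle after folding known homoglyphs together."""
    return handle.translate(HOMOGLYPHS).lower()

# "HeIpSupport" (capital I) collides with the real "HelpSupport".
print(canonical_handle("HeIpSupport") == canonical_handle("HelpSupport"))  # True
```

Two handles that canonicalize to the same string are candidates for manual review, not automatic takedown, since legitimate variations can collide too.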

On Facebook, verification scams target both “pages” and “profiles.” Pages are used by businesses and organizations for marketing purposes, while profiles are intended for individuals. The actual verified accounts have the blue badge adjacent to the username. The name of one fake account is “Get Verified on Your Account,” and its banner advertises verification services. The post on the page instructs the victim to download a linked text file with JavaScript code.

In another situation, a perpetrator instructs a victim to open the developer console in Firefox while on their Facebook page. Fortunately, Facebook warns of the dangers of using this console to run JavaScript. If the user ignores the warning, they are told to paste the JavaScript into the console. The JavaScript begins by capturing the user’s Facebook session cookie, a very common technique used for account hijacking. Additionally, the code uses the hijacked session to like multiple pages and lists those accounts later in the script.

Retailers are targets

Retailers are also targets for fraud and scams. The fake gift card, coupon, and promotion impersonators can be used to phish information from coupon-clippers, provide discount codes that bait-and-switch to malware, and even generate usable gift card numbers from fake mobile apps.

Retail scams distribute links that redirect the user to a page to enter the contest, thus harvesting name, address, email, birthdate, and other PII. Despite following registration instructions, entry confirmation is never received. Instead, the page leads to multiple pop-ups with malware and eventually redirects to a website designed for data extraction.

Other impersonator accounts simply request an email address in conjunction with a repost. Once entered, the email is sold to spam lists. The user is typically encouraged to follow steps for providing contact information in exchange for an unfulfilled card. Additionally, a perpetrator can check these against exposed account lists such as haveibeenpwned. The social network account can also be reviewed to identify the user’s posts, hobbies, and more for password guessing purposes.

Financial services firms are obvious targets for fraud and scams, such as money-flipping scams, work from home scams, card cracking, and more. Financial scammers hijack banks’ logos in an attempt to make their services look official. They monitor legitimate bank profiles on social media and identify when they’re followed by a new user. The scammer will then immediately tag them or use an @mention to ask if the user would like to make a quick return on their money. Then the perpetrator takes the conversation with the user to private direct messages (DM) to engage off the radar. This activity is not completely hidden; the initial post is public to all including the bank.

In another scenario, a scammer offered to money-flip for a number of banks, going as far as providing their phone number. The bulk of the malicious activity is carried out via DM or off of the platform entirely, making it difficult to detect.

The scammers target victims in dire financial need, often appending hashtags like #help, #debt, and even #singlemom. They also target members of the military and holiday shoppers, who make for lucrative targets. At the end of the day, it’s often the banks who eat the costs of these scams, which, combined across platforms, could total in the hundreds of millions annually, ZeroFOX said.


via:  csoonline

Gang Used 3D Printers for ATM Skimmers

An ATM skimmer gang stole more than $400,000 using skimming devices built with the help of high-tech 3D printers, federal prosecutors say.

Before I get to the gang, let me explain briefly how ATM skimmers work, and why 3D printing is a noteworthy development in this type of fraud. Many of the ATM skimmers profiled in my skimmer series are carefully hand-made and crafted to blend in with the targeted cash machine in both form and paint color. Some skimmer makers even ask customers for a photo of the targeted cash machine before beginning their work.

The skimmer components typically include a card skimmer that fits over the card acceptance slot and steals the data stored on the card’s magnetic stripe, and a pinhole camera built into a false panel that thieves can fit above or beside the PIN pad. If these components don’t match just-so, they’re more likely to be discovered and removed by customers or bank personnel, leaving the thieves without their stolen card data.

Enter the 3D printer. This fascinating technology, explained succinctly in the video below from 3D printing company i.materialise, takes two dimensional computer images and builds them into three dimensional models by laying down successive layers of powder that are heated, shaped and hardened.

3D printing in action from i.materialise on Vimeo.


Apparently, word is spreading in the cybercrime underworld that 3D printers produce flawless skimmer devices with exacting precision. Last year, i.materialise blogged about receiving a client’s order for building a card skimmer. The company said it denied the request when it became clear the ordered product was a fraud device.

3D printer firm i.materialise received and promptly declined orders for this skimmer device – a card acceptance slot overlay

In June, a federal court indicted four men from South Texas (PDF) whom authorities say had reinvested the profits from skimming scams to purchase a 3D printer. According to statements by the U.S. Secret Service, the gang’s leader, Jason Lall of Houston, was sent to prison for ATM fraud in 2009. Lall was instrumental in obtaining skimming devices, and the gang soon found themselves needing to procure their own skimmers. The trouble is, skimmer kits aren’t cheap: They range from $2,000 to more than $10,000 per kit.

Secret Service agents said in court records that on May 4, 2011, their undercover informer engaged in a secretly taped discussion with the ring’s members about a strategy for obtaining new skimmers. John Paz of Houston, one of the defendants, was allegedly the techie who built the skimming devices using a 3D printer that the suspects purchased together. The Secret Service allege they have Paz on tape explaining the purchase of the expensive printer.

“When [Lall was] put in jail, we asked, ‘What are we going to do?’ and we had to figure it out and that’s when we came up with this unit,” Paz allegedly told the undercover officer.

The government alleges Paz also was the guy who encoded the stolen card data onto counterfeit cards. The feds say Albert Richard of Missouri City, Texas prepared ATMs at numerous banks where the skimming devices were installed, by covering the ATM cameras or spray-painting over them, and by acting as a lookout.

A fourth defendant, John Griffin, is alleged to have used the counterfeit cards to withdraw funds at different ATMs around Texas. Prosecutors allege the group stole more than $400,000 between Aug. 2009 and June 2011. Prior to their arrests this summer, the gang had started making decent money, though they split the profits among themselves. Federal prosecutors say the men stole $57,808.14 in the month of April 2011 alone (yes, that’s an odd amount to have come out of ATMs, but I digress).

The court documents don’t say how much the men spent on the 3D printer, nor do they include pictures of the fraud devices. The Secret Service declined to offer more details, citing an ongoing investigation. But i.materialise’s Franky De Schouwer said a high quality 3D printer can be had for between $10,000 and $20,000.

“Just looking at the idea of 3D printing a potential skimming device, a criminal could invest in buying a desktop 3D printer,” De Schouwer wrote in an email to KrebsOnSecurity. “Not a kit printer in the line of a Makerbot or a RepMan but a desktop printer of a high end manufacturer of 3D printers like Objet, 3D Systems or Stratasys (HP). You could get one of those between $10,000 – $20,000 and they will print a high quality skimming device that, including some post finishing, will look like the real thing.”

De Schouwer said his company thankfully hasn’t had any more requests to print ATM skimming devices. But that doesn’t mean the demand has gone away.

“We do notice that some people end up on our blog with the keywords ‘I want to buy an ATM skimming device,’” he said.

A copy of the original complaint in this case is available here (PDF).


via:  krebsonsecurity

U.S. and UK Put Restrictions on Carrying Electronics From 10 Airports

Citing threats that American and British intelligence agencies have been tracking, the two governments have put restrictions on electronics carried from 10 airports in Middle Eastern and North African countries. The move follows reports that militant groups in those countries want to smuggle explosive devices hidden in electronic gadgets.

The Department of Homeland Security said passengers traveling from those airports could not bring devices larger than a cellphone, such as tablets, portable DVD players, laptops and cameras, into the main cabin. Instead, such devices must be stored in checked baggage.

The airports covered by the restrictions are Cairo; Istanbul; Kuwait City; Doha, Qatar; Casablanca, Morocco; Amman, Jordan; Riyadh and Jeddah, Saudi Arabia; and Dubai and Abu Dhabi in the United Arab Emirates.

What does this mean for the airlines? Several of them, including Turkish Airlines, Etihad and Qatar Airways, said early on Tuesday that they were moving quickly to comply. Royal Jordanian and Saudi Airlines said on Monday that they were immediately putting the directive into place.

The restrictions follow several attacks and attempted attacks. Al Qaeda in the Arabian Peninsula (AQAP), based in Yemen, claimed responsibility in 2015 for the attack on the Charlie Hebdo magazine offices in Paris, and the same group claimed responsibility for a failed attempt by a Nigerian Islamist to bring down an airliner over Detroit; that device, hidden in the man’s underwear, thankfully failed to detonate. Also, in 2010, security officials in Britain and Dubai intercepted parcel bombs sent from Yemen to the United States.

According to the Trump administration, this has nothing to do with the Muslim ban; rather, the Department of Homeland Security has multiple reports that radical Islamist groups want to bring such devices on board and use them as explosives.

This is a step to increase security on carriers flying from those countries, as well as for passengers connecting through those airports, whether they continue on the same airline or board other airlines’ planes at connecting airports elsewhere in the world, with a final destination in the US or UK.

What are your thoughts about this step? Do you think that it was necessary in order to prevent any further attacks? Who will suffer in the end? The passengers or the airline companies of those countries?

Ethical Hacking: The Most Important Job No One Talks About

If your company doesn’t have an ethical hacker on the security team, it’s playing a one-sided game of defense against attackers.

Great power comes with great responsibility, and all heroes face the decision of using their powers for good or evil. These heroes I speak of are called white hat hackers, legal hackers, or, most commonly, ethical hackers. All these labels mean the same thing: A hacker who helps organizations uncover security issues with the goal of preventing those security flaws from being exploited. If companies don’t have an ethical hacker working for them, they’re in a one-sided game, only playing defense against attackers.

Meet the Hackers
Companies house both developer and security teams to build out code, but unfortunately, there is often little communication between the two teams until code is in its final stages. DevSecOps (developer and security teams working together) incorporates both sides throughout the coding process to catch vulnerabilities early on, as opposed to at the end, when making updates becomes harder for developers.

Although secure coding practices and code analysis should be automated and a standard step in the development process, hackers will always try to leverage other techniques if they can’t find code vulnerabilities. Ethical hackers, as part of the DevSecOps team, enhance the developers’ secure coding practices through knowledge sharing and by testing for vulnerabilities that could easily be taken advantage of by someone outside the company.

Take, for example, Jared Demott. Microsoft hosts the BlueHat competition for ethical hackers to find bugs in its code, and Demott found a way to bypass all of the company’s security measures. Let that sink in for a moment: he found a way to bypass all of Microsoft’s security measures. Can you imagine the repercussions if that flaw had been discovered by a malicious hacker?

Let the Hackers Hack
Security solutions (such as application security testing and intrusion detection and prevention systems) are a company’s first line of defense because they’re important for automatically cleaning out most risks, leaving the more unique attack techniques for the ethical hackers to expose. These could include things such as social engineering or logical flaws that expose a risk. Mature application security programs will use ethical hackers to ensure continuous security throughout the organization and its applications. Many organizations also use them to ensure compliance with regulatory standards such as PCI-DSS and HIPAA, alongside defensive techniques, including static application security testing.

You may be thinking, “What about security audits? Wouldn’t they do the trick?” No, not fully. Ethical hacking builds real-world attack scenarios against an application or the organization as a whole, as opposed to the more analytical, risk-based analysis achieved through security audits. The ethical hacker’s goal is to find as many vulnerabilities as possible, no matter the risk level, and report them back to the organization.

Another advantage is that once ethical hackers detect a risk, vendors can add the detection capability to their products, enhancing detection quality in the long run. For example, David Sopas, security research team leader at Checkmarx, discovered a potentially malicious reflected filename download attack on LinkedIn. This hack could have had a number of outcomes, including the full-blown hijacking of victims’ computers had they run the file. It’s safe to say that an audit alone wouldn’t have identified this hidden flaw.

How to Hack
The good news for companies searching for someone to fill this role is that there are several resources their own employees can use to learn more about ethical hacking and become a more valuable asset.

The first step is to get certified. EC-Council has resources and certifications available, and if you want to keep brushing up on your ethical hacking skills, OWASP has you covered. While certification isn’t a requirement, I highly recommend it, because getting the basics down provides a foundation to build on. From there, many tools and automated processes can be used, but ethical hackers usually rely on penetration testing and other, mostly offensive, techniques to probe an organization’s networks, systems, and applications. In essence, ethical hackers use the same techniques, tools, and methods that malicious hackers use to find real vulnerabilities.
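
As an illustration of the kind of offensive technique mentioned above, here is a minimal TCP port-scan sketch in Python. The target host and port list are placeholders, and this is far simpler than real penetration-testing tools; only scan systems you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds (port open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder target: probe a few common service ports on localhost.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

In practice an ethical hacker would follow a scan like this with service fingerprinting and vulnerability checks, which is exactly why unused open ports are worth closing.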

One Small Step for Companies, One Giant Leap for Hackers
What does all this mean for companies? Well, companies must first acknowledge how ethical hackers can help them. Strong application security programs need to focus on code security both as it’s being developed and in its running state, and that’s where ethical hacking comes into play. Nothing beats secure coding from the get-go, but mistakes do happen along the way, and that’s where ethical hacking experts can make a difference in an organization.

At the next meeting on staffing, ethical hackers should be right at the top of the list of priorities to keep your company, and its data, safe.


via:  darkreading

Build an effective cyberattack recovery playbook by following this NIST guide

Cybersecurity prevention efforts should not trump response capabilities. Experts at NIST spell out four steps to recovering from a cyberattack.

Preventing cybersecurity disasters—large or small—rather than having to recover from them is preferable, for obvious reasons. However, experts at the National Institute of Standards and Technology (NIST) are concerned that overreliance on prevention is as bad as being underprepared. In the NIST special publication Guide for Cybersecurity Event Recovery (PDF), authors Michael Bartock, Jeffrey Cichonski, Karen Scarfone, Matthew Smith, Murugiah Souppaya, and Greg Witte explain why:

“There has been widespread recognition that some cybersecurity events cannot be stopped and solely focusing on preventing cyber events from occurring is a flawed approach.”

That attitude among NIST experts started gaining traction two years ago when the Federal Government’s Office of Management and Budget published its Cybersecurity Strategy and Implementation Plan (CSIP). The following quote, in particular, captured the attention of NIST personnel:

“CSIP identified significant inconsistencies in cyber-event response capabilities among federal agencies. The CSIP stated that agencies must improve their response capabilities.”

The CSIP defines recovery as, “The development and implementation of plans, processes, and procedures for recovery and full restoration, in a timely manner, of any capabilities or services that are impaired due to a cyber event.”

The report continues, “Although there are existing federal policies, standards, and guidelines on cyber event handling, none of them focuses solely on improving cybersecurity recovery capabilities, and the fundamental information is not captured in a single document. The previous recovery content tends to be spread out in documents such as security, contingency, disaster recovery, and business continuity plans.”

NIST’s Guide for Cybersecurity Event Recovery

Enter NIST’s Guide for Cybersecurity Event Recovery, a compilation of information and processes that private and public organizations can use to create recovery plans and be better prepared if a cybersecurity event occurs.

The Guide’s authors believe the recovery function consists of two phases: “The immediate tactical recovery phase is largely achieved through the execution of the recovery playbook planned prior to the incident with input from the NIST Cybersecurity Framework (CSF).”

More subtle is the second strategic phase, which according to the authors, allows organizations to improve pre-recovery functions mentioned in the CSF, in particular: Identify, Protect, Detect, and Respond (Figure A), reducing the likelihood and impact of future incidents.


Figure A


Image: NIST, Michael Bartock, Jeffrey Cichonski, Karen Scarfone, Matthew Smith, Murugiah Souppaya, Greg Witte

Four steps to recovering from a cyberattack

The authors of the Guide go into detail on how to develop an effective recovery process. A brief overview of each step follows.


1. Plan for cyber-event recovery

Effective planning is critical, according to the authors. Planning enables organizations to:

  • determine crisis-management and incident-management roles;
  • make arrangements for alternate communication channels, services, and facilities;
  • explore “what if” scenarios based on recent cyber events that have negatively impacted other organizations;
  • identify and address gaps before a crisis occurs, reducing their impact on business; and
  • exercise technical and non-technical aspects of recovery, such as personnel considerations, legal concerns, and facility issues.

2. Continuous improvement

The Guide’s authors warn that recovery planning is not static, adding, “Cyber-event recovery planning is not a one-time activity. The plans, policies, and procedures created for recovery should be continually improved by addressing lessons learned during recovery efforts and by periodically validating the recovery capabilities themselves.”


3. Recovery metrics

Rather than guessing whether the recovery process worked as planned during a cybersecurity event, the authors suggest metrics to remove the guesswork. “It is beneficial to determine these metrics in advance, both to understand what should be measured and to implement the processes to collect relevant data,” the authors note. “This process also requires the ability to determine where the metrics that have been identified can be most beneficial to the recovery activity and identify which activities cannot be measured in an accurate and repeatable way.”

Some suggested metrics are:

  • Costs due to the loss of competitive edge from the release of proprietary or sensitive information
  • Legal costs
  • Hardware, software, and labor costs to execute the recovery plan
  • Costs relating to business disruption, such as system downtime, lost employee productivity, and lost sales
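
The metrics above can be rolled up into a single figure for a recovery report. Here is a minimal sketch; the category names and dollar amounts are illustrative placeholders, not values from the Guide.

```python
# Illustrative recovery-cost roll-up; categories mirror the suggested metrics,
# but the figures are made up for the example.
recovery_costs = {
    "competitive_edge_loss": 250_000,   # release of proprietary/sensitive info
    "legal": 40_000,
    "plan_execution": 75_000,           # hardware, software, and labor
    "business_disruption": 120_000,     # downtime, lost productivity, lost sales
}

def total_recovery_cost(costs):
    """Sum the per-category costs into one reportable figure."""
    return sum(costs.values())

print(total_recovery_cost(recovery_costs))  # 485000
```

Collecting these numbers per incident, as the authors suggest, also makes recovery exercises comparable over time.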

4. Building the playbook

The authors did not forget one of the more serious concerns presented in the Cybersecurity Strategy and Implementation Plan: Recovery guidelines do not reside in a single document, but are spread throughout security, contingency, disaster-recovery, and business-continuity plans.

Understanding mission-supporting information systems, as well as any dependencies surrounding them, is important under normal operating conditions. “In the event of a cybersecurity event, this information becomes paramount, and the processes and procedures need to be presented in an actionable manner to effectively restore business functions quickly and holistically,” conclude the authors. “The playbook is a way to express tasks and processes required to recover from an event in a way that provides actions and milestones specifically relevant for each organization’s systems.”


Throughout the Guide, the authors stress that the document’s main purpose is to provide guidance. “This document is not intended to be used by organizations responding to an active cyber event, but as a guide for developing recovery plans in the form of customized playbooks,” the authors explain in the executive summary. “As referred to in this document, a playbook is an action plan that documents an actionable set of steps an organization can follow to recover successfully from a cyber event.”


via: techrepublic

Windows 10 gets even more ads: Here’s how to disable them all

Users report that promos for OneDrive have been added to Windows 10’s File Explorer. Here’s how to ensure you never see these, or most other ads, in the OS.

Windows 10 already shows users ads on the lock screen and the Start Menu, but now Microsoft appears to be promoting its services via Windows’ File Explorer.

Various Windows 10 users are reporting seeing adverts for Microsoft’s cloud storage service OneDrive while browsing files on their machine.

The ad, shown in the screenshot above, offers 1TB of OneDrive storage for $6.99 per month, and is technically a ‘sync notification’, designed to let people know they can get more than the 5GB of free storage that comes with a Microsoft account.

Ads for apps and services are already shown throughout Windows 10, and can be found on the Start Menu and lock screen.

The introduction of promotions to File Explorer has been heavily criticized by some Microsoft watchers, and marks a widening of advertising to new areas of Windows 10.

Most of the ads in Windows 10 are pitched as suggestions for apps and services that might appeal to the user, and some users don’t appear to notice them.

But some find them intrusive, and if you do too, there are steps you can take to remove them. Follow the video guide above to ensure you won’t see these ads again.
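
For reference, the File Explorer promos correspond to the “Show sync provider notifications” option (View > Options in File Explorer), and the Start Menu and lock screen ads have similar toggles. The sketch below collects the community-documented registry values behind these settings; the paths and value names are not an official Microsoft API, so verify them on your own build before applying anything.

```python
import sys

# Community-documented HKCU registry values (DWORD, 0 = off) behind
# Windows 10's promotional surfaces. Treat these as assumptions to verify.
ADVANCED = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"
CDM = r"Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager"

TWEAKS = {
    # File Explorer OneDrive/Office 365 promos ("sync provider notifications")
    (ADVANCED, "ShowSyncProviderNotifications"): 0,
    # "Occasionally show suggestions in Start" app promos
    (CDM, "SystemPaneSuggestionsEnabled"): 0,
    # Windows Spotlight "tips" overlays on the lock screen
    (CDM, "RotatingLockScreenOverlayEnabled"): 0,
}

def apply_tweaks(tweaks=TWEAKS):
    """Write the DWORD values under HKEY_CURRENT_USER (Windows only)."""
    if sys.platform != "win32":
        raise OSError("These registry tweaks only apply on Windows.")
    import winreg  # standard library, but only available on Windows
    for (subkey, name), value in tweaks.items():
        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, subkey) as key:
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
```

After applying, restart Explorer or sign out for the changes to take effect; the same switches can also be flipped through the Settings app and File Explorer’s Folder Options dialog.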


The OneDrive/Office 365 promotions appearing in Windows 10 File Explorer are technically ‘sync’ notifications.

Image: Tall_Ships_for_Life/Reddit/Microsoft


via:  techrepublic

Google Maps’ latest trick is remembering where you parked

It depends on you manually dropping a pin, though.

Google Now kept track of parking locations before, but not with any great accuracy. The latest version of the Android Google Maps app sidesteps that inaccuracy by having you mark the parking spot yourself. That’s a pretty stark contrast to the dark magic (read: GPS and other data) that Now relied on.

Simply open the application after parking, tap the blue location dot and you’re good to go. From there you can add notes (helpful for jotting down your location in parking ramps) and even take photos to remind you which blue Toyota Camry is yours. Additionally, you can add a timer so you know when the meter will expire. All of this info can be pulled into notifications and alerts, too.

As Android Police points out, though, this appears to only work for one car at a time. Not a huge deal, but it does rule out keeping track of your car at home, and a rental car in another city while on vacation.

AP also notes that some Android Auto users might see a new arrival screen too. Oh, and folks using Maps to find their way around via public transit could see weather alerts.


via:  engadget

Here is a tiny GameBoy emulator for your tiny Apple Watch screen

Are your fingers small enough?


Image: Gabriel O’Flaherty-Chan

The last place you’d probably want to play a video game is on an Apple Watch. The wearable has a tiny screen, almost no buttons and can only be operated with one hand. It’s a completely impractical gaming device, but developer Gabriel O’Flaherty-Chan made a Game Boy emulator for it anyway.

Named after Pokémon’s Giovanni, the wrist-worn Game Boy emulator crams Nintendo’s original gaming portable into an Apple Watch Series 2. It doesn’t quite play games at full speed, but it is fully functional. On-screen buttons underneath the game display let users tap in start, select and B button inputs, and swiping up, down, left or right emulates the d-pad inputs. Want to press the A button? Just tap on the right side of the watch’s face.
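
The control scheme described above can be pictured as a simple gesture-to-button mapping. The sketch below is a hypothetical paraphrase of that scheme in Python; Giovanni is a watchOS app, and these gesture names and button positions are my illustrative labels, not the emulator’s actual code.

```python
# Hypothetical gesture-to-button table paraphrasing Giovanni's controls:
# taps below the display, swipes for the d-pad, and a tap on the right
# side of the watch face for A. Labels are illustrative only.
GESTURE_TO_BUTTON = {
    "tap_below_display_left": "start",
    "tap_below_display_middle": "select",
    "tap_below_display_right": "B",
    "swipe_up": "d-pad up",
    "swipe_down": "d-pad down",
    "swipe_left": "d-pad left",
    "swipe_right": "d-pad right",
    "tap_right_of_face": "A",
}

def button_for(gesture):
    """Translate a watch gesture into a Game Boy button input."""
    return GESTURE_TO_BUTTON.get(gesture, "no-op")
```

Mapping gestures to buttons like this is the natural workaround for a screen too small to host a full on-screen d-pad.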

The project is a fork of Gambatte, an existing Game Boy emulator, but O’Flaherty-Chan says it wasn’t an easy port. Apple’s watchOS didn’t support any of the graphics standards the original emulator relied on, and was never really meant to play complex games. Still, the project is a neat proof of concept, albeit one that will never see official App Store support. If you want to check it out for yourself, hit up the GitHub source link below. Giovanni is open-source, after all.


via:  engadget