
How Machine Learning Can Speed The Spread Of Solar-Powered Homes


A California startup is using big data to predict who is literally ready to go off the grid—so it can convince them to buy solar panels.

For all its promise of delivering a bold future, the green energy industry is still decidedly low tech when it comes to moving merchandise. Solar panels are still sold door-to-door, a slow and expensive process that has impeded the industry's growth.

Now, a California startup thinks it has a solution, using sophisticated data science to find consumers likely to adopt solar power and let them know just how much they can save on their electric bills. PowerScout, a machine-learning-enabled eCommerce platform for solar energy, aims to eliminate marketing costs that, according to CEO Attila Toth, can exceed the cost of the actual equipment for some green-power vendors.

"This is very absurd—very crazy," Toth says. "[I]f somebody is trying to go solar, that person is going to pay more for the sales guy, for the marketing costs, than for the panels themselves, and that's the reality today."

PowerScout, which was founded in 2014 and has received $5.2 million in funding, including an award from the Department of Energy's SunShot Initiative announced this week, uses a mix of data from commercial databases and LIDAR imaging to predict which households are most likely to be interested in using solar energy. The company began sales in the first quarter of this year and has since signed customers in four states, with sales increasing each month, says Toth.

The map shows how much solar radiation a neighborhood receives. It takes into account the heights of buildings, vegetation, and other objects that could cast shade on an area. The red pixels receive the most sunlight, and the blue pixels receive the least.

Families with fuel-efficient cars are much more inclined to be interested in powering their homes on green energy, for instance, and other factors like education levels, household size, credit scores—since most solar installations are financed—and income levels all factor in, Toth says.
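PowerScout hasn't published its model, but the basic pattern, scoring households on features like these with an off-the-shelf classifier, can be sketched in a few lines of Python. The file, column names, and data below are hypothetical, and the sketch assumes the features have already been numerically encoded:

```python
# Hypothetical sketch of solar-adoption propensity scoring, not
# PowerScout's actual model. Assumes features are numerically encoded.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

households = pd.read_csv("households.csv")  # hypothetical training data
features = ["owns_fuel_efficient_car", "education_level", "household_size",
            "credit_score", "income", "neighborhood_adoption_rate"]
X, y = households[features], households["adopted_solar"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Rank prospects so marketing dollars go to the likeliest converts.
households["propensity"] = model.predict_proba(X)[:, 1]
print(households.sort_values("propensity", ascending=False).head())
```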

"Lower-income families, they adopt because this is cost savings every month to the bottom line, zero money down," while higher-income families adopt more for reasons for prestige, he says. "The middle-income households are the ones where most of the marketing dollars need to spend on convincing."

And more generally, areas that already have a high degree of solar adoption are likely to see more, he says.

"There's a herd effect here, so if people keep seeing a lot of solar in the neighborhood, they become increasingly more intrigued, which is natural," he says. The company can even estimate potential customers' existing electric bills based on an existing sample set of about 100,000 electric bills and dozens of available data points, he says.

And while there's no national database of who has or doesn't have rooftop solar, other than confidential tax credit records, PowerScout's image analysis tools can help the company figure it out.

Those tools can also estimate, with a decent degree of accuracy, how much energy a home's rooftop can harvest, without anyone needing to take measurements in person.

"We do the latest convolutional neural network image recognition," says Toth. "Very few companies in energy do that."

PowerScout can target direct mail and online marketing to the most promising customers and quickly give them online estimates. Then, those who are interested in rooftop solar can choose a financing plan and get connected to a local installation partner to have it installed, Toth says.

And for those who rent, or otherwise might better benefit from connecting to a shared, community solar project, the platform can still give them a savings estimate and help large-scale solar installation developers acquire the customers they need to sell their generating capacity, he says.

In the future, as smart electric meters tracking precise data on usage become more prominent, potential customers will be able to share more data with PowerScout to get more precise estimates, Toth says.

"Once we have that, we can tailor any clean energy product to your home in a precise fashion without setting foot in your kitchen," he says. That could include increasingly inexpensive battery storage options, like Tesla's Powerwall system, he says.

PowerScout's systems are able to perform all their calculations thanks to the on-demand power of Amazon Web Services' private cloud systems, enabling computation on a scale that wouldn't have been available just five years ago, Toth says.

"This is probably the largest big data problem of this century, because the electric grid is the largest man-made machine," he says.


Pinterest Is Using Machine Learning To Help You Find What You'll Pin Next


The social-sharing platform uses cutting-edge techniques to tailor recommendations to each user and boost engagement.

With 100 million users active on its platform every month, Pinterest is increasingly relying on machine learning to help those users make new online discoveries.

People come to Pinterest to explore, save, and share images and posts from around the internet. Finding content they like naturally keeps them engrossed in the platform: The company says 30% of engagement and 25% of in-Pinterest purchases are driven by the platform's recommendations of related content. To get those recommendations right, the company relies on cutting-edge, data-driven techniques and lots of experimentation.

"A lot of what I"m doing here is trying to shape what direction we go in approaching the discovery problem," says Pinterest's lead discovery science engineer Mohammad Shahangian. "We launch hundreds of experiments that actually make small changes to our algorithms, and every single one of these changes has places where it helps, and places where it hurts."

One advantage is that the platform is explicitly built around recording people's interests, as users save products, posts, and images from around the web to virtual pinboards. That means Pinterest doesn't have to guess what users find interesting from, say, click patterns or time spent on particular pages, as other social networks might. And it means its algorithms can infer which of the 75 billion pinned items in its database are related to each other, since related items are more likely to be saved to the same boards.
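Pinterest's production systems are far more elaborate, but the core signal, that related pins co-occur on boards, can be illustrated with a simple Jaccard similarity over each pin's set of boards. The pins and boards below are invented:

```python
# Toy sketch of board-co-occurrence similarity: two pins score as
# related in proportion to how many boards they share. This is an
# illustration, not Pinterest's production algorithm.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical mapping from pin ID to the boards it's been saved to.
pin_boards = {
    "farmhouse_sink": {"kitchen_reno", "dream_home", "plumbing"},
    "subway_tile":    {"kitchen_reno", "dream_home", "bathrooms"},
    "wedding_dress":  {"big_day"},
}

print(jaccard(pin_boards["farmhouse_sink"], pin_boards["subway_tile"]))    # 0.5
print(jaccard(pin_boards["farmhouse_sink"], pin_boards["wedding_dress"]))  # 0.0
```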

"A lot of companies are trying to infer what interests users have off of inputs or signals," Shahangian says. "At Pinterest, users are explicitly giving that signal, saying this is what I'm interested in."

Pinterest visitors are essentially contributing to an ever-growing, three-part social graph, with billions of connections between users, the items they pin, and the boards to which they pin them. And all that data lets Pinterest populate users' home feeds, search results, and related-pin recommendations with a greater degree of nuance: Simply showing users recommendations based on who they follow is less than ideal—think of a case where a user is planning a wedding and pinning dresses while her followers are not—and just suggesting similar items can get repetitive, according to Shahangian.

"If you pinned a kitchen sink, do we want to send you 10,000 more kitchen sinks, or inspiration for how you could design your kitchen overall?" he asks.

To make those kinds of decisions, the company's engineers have experimented with a variety of machine learning algorithms. They've studied how those different formulas perform on test sets of similar and dissimilar pins and, ultimately, how they impact the engagement of real-world users.
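As a toy version of that offline step, imagine scoring a candidate similarity function against human-labeled pairs of related and unrelated pins before any live experiment. The stand-in similarity function, labels, and threshold below are all invented:

```python
# Sketch of offline evaluation: score a candidate similarity function
# against human-labeled (pin_a, pin_b, related?) pairs before risking
# a live experiment. Everything here is invented for illustration.

def evaluate(similarity_fn, labeled_pairs, threshold=0.2):
    correct = sum(
        (similarity_fn(a, b) >= threshold) == related
        for a, b, related in labeled_pairs
    )
    return correct / len(labeled_pairs)

# Trivial stand-in similarity: word overlap between pin descriptions.
def word_overlap(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

labeled_pairs = [
    ("white farmhouse sink", "farmhouse kitchen decor", True),
    ("white farmhouse sink", "lace wedding dress", False),
]
print(evaluate(word_overlap, labeled_pairs))  # 1.0 on this tiny set
```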

"We do have live experiments, but there are cases where we actually know a lot before exposing it to users," Shahangian says.

Of course, there's no way, short of actual testing, to know for a fact whether a given user will prefer a new set of recommendations. "I can't pay somebody money to tell me whether or not Jack or Susie is going to like this pin," Shahangian says. But looking at whether the algorithms accurately recommend content that human testers agree is related to a particular pin has proven to be a decent approximation.

Moving to an algorithmically generated feed, instead of a purely chronological display of followed users' posts, has boosted engagement by a factor of five or 10, with additional boosts as the algorithms have gotten better.

"We've seen a lot of gains throughout its history," Shahangian says. "Personalization has been one of the biggest levers for boosting user engagement."

At the same time, the company has also been working on improving visual search, helping users find pinned images that are similar to other pictures. Pinterest engineers have worked with researchers from the Berkeley Vision and Learning Center at the University of California, Berkeley, to develop the technology, which as of earlier this year can automatically detect objects in images using deep learning techniques. Users can then tap those objects to find similar examples across Pinterest's library of saved content.

"It's not quite like a classification task where we try to figure out, is this a cat or a dog," says Dmitry Kislyuk, a lead visual search engineer at Pinterest. "We're actually trying to find some visual similarity between every single image, and we want to do this in real time."

The visual search tool works particularly well for finding home decor and fashion products saved to the site, he says. And in the future, the company hopes to improve its ability to map objects to categories, making it more useful for other types of searches. One example might be helping users find new recipes that are alike in ways beyond having similar photos of food.

"I think our models can become more semantic," says lead visual search engineer Andrew Zhai, referring to the idea of using deep learning to effectively map images to more conceptual categories. "We can eventually get better at those types of pins."

In the meantime, Pinterest's engineers have focused on perfecting object detection and search, with an eye toward potentially developing an app that would let smartphone users take pictures of objects in the real world, then get recommendations of related pins on the platform.

"It's just such an exciting time in the deep learning, computer vision field—everything moves so quickly," Kislyuk says. "The state of the art changes every couple of months."

MIT Scientists Learn To Track Emotions Using Wireless Signals


The scientists can remotely read emotions by scanning heart and breathing rates, but what does it mean for privacy?

Researchers at the Massachusetts Institute of Technology say they've developed the first known system able to read people's emotions by bouncing wireless signals off a person's body.

Potential applications include more adaptive user interfaces as discussed in Co.Design. And while the team from MIT's Computer Science and Artificial Intelligence Lab is taking measures to make it difficult to scan people's emotions without their consent, the experiment still raises questions about privacy that some experts say current legal frameworks may be ill-equipped to handle.

"The whole thing started by trying to understand how we can extract information about people's emotions and health in general using something that's completely passive—does not require people to wear anything on their body or have to express things themselves actively," says Prof. Dina Katabi, who conducted the research along with graduate students Mingmin Zhao and Fadel Adib.

The system, called EQ-Radio, works by generating a low-power wireless signal and measuring the time it takes the signal to reflect off objects in its vicinity. Since the reflection time from people's bodies varies as they inhale and exhale, and as their hearts beat, it can distinguish humans from other objects, which generate static reflections, according to a paper the team plans to present next month at the Association for Computing Machinery's International Conference on Mobile Computing and Networking.

Then, the system learns to distinguish heartbeats, which cause faster but smaller changes in reflections, from breathing, which leads to slower but larger differences. It's roughly as accurate at measuring heartbeat time as a traditional electrocardiogram, say the MIT scientists, who are also working with researchers at Massachusetts General Hospital to study potential medical applications.
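EQ-Radio's actual signal processing is described in the team's paper; a crude approximation of that separation step is a pair of band-pass filters, one tuned to typical breathing frequencies and one to typical heart rates. The cutoffs below are generic physiological ranges, not the researchers' parameters:

```python
# Rough sketch of separating breathing from heartbeat in a reflection
# signal using band-pass filters. Cutoffs are generic physiological
# ranges, not EQ-Radio's actual processing.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0  # samples per second from the reflected signal
t = np.arange(0, 30, 1 / fs)
# Synthetic reflection: large slow breathing wave + small fast heartbeat.
signal = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.sin(2 * np.pi * 1.2 * t)

def bandpass(x, low, high, fs, order=3):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

breathing = bandpass(signal, 0.1, 0.5, fs)   # ~6-30 breaths per minute
heartbeat = bandpass(signal, 0.8, 2.5, fs)   # ~48-150 beats per minute
```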

"We are able to extract breathing and heart rate in a very passive way without asking the user to do anything except for what he does naturally," says Katabi, who in 2013 was awarded a MacArthur Foundation "genius grant" for her work on wireless networks.

Both sets of measurements are then fed into a machine-learning process that observes people in emotional states including anger, joy, and sadness, along with their heart and breathing rates. Once trained, EQ-Radio is about 87% accurate at recognizing emotions in people it observed during training, and more than 70% accurate for people it hasn't seen before, the researchers say.
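As a toy illustration of that classification step (not the researchers' pipeline, which their paper details), features derived from heart and breathing measurements could feed any off-the-shelf classifier; the features and data here are invented:

```python
# Toy illustration of the classification step: hand-built features from
# heart and breathing measurements feeding an off-the-shelf classifier.
# Features, values, and labels are invented for illustration.
from sklearn.svm import SVC

# Each row: [mean heart rate, heart-rate variability, breathing rate]
X_train = [[62, 0.9, 12], [95, 0.3, 22], [70, 0.7, 14], [88, 0.4, 20]]
y_train = ["sadness", "anger", "joy", "anger"]

clf = SVC().fit(X_train, y_train)
print(clf.predict([[66, 0.8, 13]]))  # e.g. ['sadness']
```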

And while they emphasize that commercializing the system won't happen overnight, they do envision some potential applications. Similar systems might be able to help diagnose mental health conditions like depression or bipolar disorder, they say. They could also help movie studios and advertisers track people's emotional reactions to their work. Katabi says the technology is likely to find its way into tools made by Emerald, a company she founded that's also developing wireless tools to sound an alert if elderly people fall in their homes.

Professor Dina Katabi (middle) explains how PhD student Fadel Adib (right) has a neutral facial expression, but EQ-Radio's analysis of his heartbeat and breathing shows that he is sad. [Photo: Jason Dorfman, MIT CSAIL]

EQ-Radio is far from the first attempt to scientifically determine people's emotions: the medieval Persian physician Avicenna wrote around the year 1000 of diagnosing melancholia, similar to what would now be called depression, by measuring a patient's pulse, wrote Manuel Garcia-Garcia, a senior vice president for Research and Innovation at the Advertising Research Foundation, in an email to Fast Company.

And more recently, companies ranging from fledgling startups to giants like Microsoft and IBM have offered software to infer emotional state from facial expressions, spoken words, and written language. And while these tools can be useful to companies looking to understand their customers' emotional states or even to consumers looking to track their own feelings over time, there's also plenty of potential for abuse, especially if people's emotions, or simply their heart rate and breathing, are tracked without their consent or even knowledge.

"I think that any kind of nonconsensual monitoring of people's metabolism is a pretty serious invasion of privacy," says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. "I wouldn't be surprised if we see security applications for that, maybe even commercial applications, where people aren't even aware that they're being monitored, let alone having given permission for it."

Security officials could treat elevated heart rates as evidence of lying or suspicious behavior, or employers might shy away from hiring job candidates whose vital signs suggest potential health issues, he suggests.

The MIT researchers say their current prototypes are designed so they can only be used consensually: The existing version of the device prompts users to make certain distinctive motions that it can wirelessly detect, in order to effectively authorize it to begin tracking, says Katabi. And, she says, they've already developed ways for people to block such a system from taking measurements where it's known to be in use, essentially by transmitting interference at similar frequencies.

"You want to block the information this wireless signal has by countering it with another wireless signal," she says.

But in some cases, like in employer-employee relationships, people might still find themselves coerced into allowing such technology to be used, potentially with little recourse under current laws, says Stanley.

"If an employer informed its employees that it was doing it, and was very up front about it and made it a condition of employment, I'm not sure whether it would be illegal," he says. "Even so, it's an extremely intrusive thing to do to your workers."

The Anti-Drone Arms Race Is Taking Off


On the margins of technology and the law, companies and governments are hacking unmanned vehicles and putting up their drone shields.

Last week, the director of the Federal Aviation Administration reported that his agency is receiving an average of 2,000 new registration requests for drones every day, and that it has registered some half a million drones since new rules went into effect in January. But as sales of drones have increased, so too have other, more worrying numbers. The FAA also says it receives more than 100 reports per month of drones flying around airports and other forbidden places, where they could damage infrastructure or accidentally collide with the engine of a landing airplane.

Then there are the more deliberate misuses: It's thought that terrorists will inevitably use unmanned aircraft to deliver explosives, just as they've already been used to smuggle drugs across prison walls. Last year, the Secret Service reported at least two incidents where unmanned aircraft flew in restricted airspace around the White House, while in Japan, an antinuclear activist was charged with using a drone to deliver a tiny amount of radioactive sand to Prime Minister Shinzo Abe's office.

Meanwhile, camera-equipped drones are already being used to surreptitiously invade people's privacy, and they can intercept data, too, says Gilad Beeri, a software engineer with experience in cybersecurity and radio communication.

"I can buy a $300 or $500 drone and just send a device that is a computer, cloud-connected, with HD camera," onto private property. "I can use it to hop your network, because it can come with all of those kinds of radio sensors close to your computer."

Beeri is cofounder and chief technology officer of Palo Alto-based ApolloShield, one of a number of startups and defense companies selling ways to take down drones behaving badly.

Unlike physical options for taking down a drone—which have included nets, guns, and birds of prey—ApolloShield's handheld device leaves drones intact and functional, sending them back to their pilots nearby, Beeri says.

"The drone never crashes, just goes back to the operator and lands safely near the original operator," he says.

Beeri, 30, and his friend and cofounder Nimo Shkedy, 33, who honed their skills in Israel's elite signals intelligence unit, Unit 8200, first focused on the drone problem in 2014, according to a report on the blog of the incubator Y Combinator. While the pair were watching a soccer game between Albania and Serbia, a drone carrying an Albanian flag zipped into the stadium, causing havoc. A few weeks later, they were playing volleyball at the beach when a quadcopter with a professional camera began hovering ominously overhead. Beeri and Shkedy resolved to develop a solution.

ApolloShield, which resembles a wireless router and costs about $30,000 a year, is designed to detect nearby drones up to two miles high—well above the FAA-established ceiling for drones of 400 feet—and record their unique identifying numbers. It also gives its users the option of spoofing a drone's "go home" signal, ordering it to return to its operator and land.
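ApolloShield hasn't detailed how its detection works. One generic technique, since many consumer drones act as Wi-Fi access points, is to sniff 802.11 beacon frames and match the transmitter's MAC-address prefix against known drone vendors. The sketch below is detection-only, uses a placeholder vendor list, and is not ApolloShield's method:

```python
# Illustrative detection-only sketch: sniff Wi-Fi beacon frames and
# match the transmitter MAC prefix (OUI) against drone vendors.
# The OUI set is a placeholder; this is not ApolloShield's method.
from scapy.all import sniff, Dot11, Dot11Beacon

DRONE_OUIS = {"60:60:1f"}  # placeholder vendor prefix list

def flag_drone(pkt):
    if pkt.haslayer(Dot11Beacon):
        mac = pkt[Dot11].addr2 or ""
        if mac.lower()[:8] in DRONE_OUIS:
            print(f"Possible drone beacon from {mac}")

# Requires root and a wireless interface in monitor mode.
sniff(iface="wlan0mon", prn=flag_drone, store=False)
```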

The process involves buying commercially available drones and reverse engineering the ways they communicate, something that generally takes about two weeks per model, Beeri says. "We actually learn the language of the drone and the remote control, and we teach our system how to speak that language," he says.

As new drones come on the market, the company can push updates to its customers, who, he says, only include those in charge of protecting airports, stadiums, prisons, power plants, and other critical infrastructure.

So far, Beeri and Shkedy have raised $500,000 from Y Combinator and other angel investors. They declined to name any of the company's existing customers, but in an email Beeri wrote that "each customer is screened and vetted on a case-by-case basis, depending on vertical and geography. The purpose is to make sure only customers who should be able to have such a system get it. The process is still being formalized with time so I can't share any more details right now."

Unmanned But Not Unlawyered

Unlike drones themselves, anti-drone systems are subject to no specific rules about who may use them or under what conditions. Under FAA rules that went into effect this summer, drone pilots are required to register aircraft weighing more than roughly half a pound with the agency, and commercial pilots are required to obtain a special certificate from the FAA. Anyone flying a drone within five miles of an airport is required to coordinate with air traffic controllers, and commercial pilots aren't allowed to fly over people or launch from a moving vehicle.


Additional restrictions apply to secured airspace around Washington, D.C., and various military installations, according to the FAA. Drones also aren't allowed near certain high-profile sporting events or firefighting operations, and some states place additional restrictions on where, when, and how drones can fly and take pictures.

Still, there aren't yet any standardized technical measures for keeping unmanned vehicles from going where they don't belong. Generally, federal law and FAA regulations make it illegal to damage or tamper with any aircraft, including drones, and Federal Communications Commission rules make it illegal to jam any kind of radio transmissions, according to Dallas attorney Jason Melvin, who's been called the "Texas Drone Lawyer." But, he says, courts have yet to specifically deal with the question of the legality of anti-drone systems.

"None of these issues have been litigated, and there aren't a lot of laws and regulations that address the situation yet," says Melvin. Even in cases where, say, federal law enforcement agencies are granted permission to take down drones, there are still likely to be questions of liability if a drone is unintentionally damaged, or even crashes into someone, when it's sent an overriding signal, he says. Moreover, a powerful signal from an anti-drone jammer could also disable other nearby communications technology, like cell phones and navigation systems.

ApolloShield says its customers are required to comply with local laws, wherever in the world they're based, and some customers have asked for restrictions to be applied to their devices to ensure compliance.

Research and development firm Battelle also offers an anti-drone device, a rifle-shaped anti-drone radio transmitter called the DroneDefender, but Federal Communications Commission regulations on radio transmissions mean the company can only test it under restricted circumstances and can't sell it to civilians. While Battelle is reluctant to disclose too much about which federally authorized customers are using the DroneDefender, Battelle researcher Dan Stamm says it's sold about 100 of the devices to the Department of Defense and Department of Homeland Security. In July, one was spotted on a military base in Iraq. The devices run in the five-figure range, he says, well under $100,000.


The rifle-shaped radio transmitter and external 10-pound battery pack wouldn't look out of place in the Ghostbusters arsenal. Because the DroneDefender is directional, it shouldn't interfere with drones—or perhaps other electronics—that aren't in its line of fire, according to Battelle researcher Alex Morrow.

While ApolloShield's device sends a drone back to its operator by impersonating a drone's operator and sending its own commands, the Battelle device effectively blocks the radio signals the operator is sending. It transmits a "proprietary waveform" that's designed to interfere with any commercial drone it's aimed at, without needing to understand the specifics of how the device communicates. (Most commercial drones are programmed to safely return to their pilots when they lose signal.)

"We really wanted to focus on not destroying the aircraft in the air," says Morrow. "Many of the incidents are just wrong place, wrong time—maybe a 14-year-old kid flying their aircraft too close to the airport and not knowing the rules involved."

Some drones can be programmed to avoid sensitive areas, like airports or private property, a feature that popular Shenzhen-based drone manufacturer DJI began including on its newer drones, along with a restriction on flights around Washington, D.C.

But government and military agencies aren't taking any chances. Last year, it was revealed that the Secret Service was testing a drone shield around the White House, and earlier this year the FAA and Department of Homeland Security began testing drone detection systems intended to locate errant drones around airports and other secure locations, including one that can both detect and block radio communications with misbehaving drones. To stop drones from interfering with firefighting aircraft during wildfires, the Interior Department announced in July that it was testing a "geofencing" system to send software warnings to nearby drone pilots, as part of a collaboration with the drone industry.

Meanwhile, NASA and the Defense Advanced Research Projects Agency, the Pentagon's R&D arm, are developing and seeking proposals for building a system that could track all drones flying below a certain altitude across a city, perhaps using tracking systems mounted on additional drones.

Stamm, who developed the DroneDefender with Morrow, says that no anti-drone system will work every time. "There are certainly drones that are out there, that will be resistant to our effects, for sure, but we like to say that we're effective against the vast majority of commercial [unmanned aircraft] that are out there."

Anti-drone vendors will struggle to stay one step ahead of drone makers in the quest for vulnerabilities. "Like any kind of security company, it's always a bit of an arms race," says Grant Jordan, CEO at anti-drone tech startup SkySafe. SkySafe provides a subscription service including hardware and software to identify, land, and potentially even take control of unauthorized drones. The company's currently focused on the public safety market as it builds out new capabilities, and is not currently selling to individual end users, says Jordan.

The security flaws used by anti-drone devices could also be used by people with less beneficent intentions to hijack legitimate drones. In March, an American researcher demonstrated he could hijack a heavy-duty quadcopter used by police and fire departments and for industrial applications from a mile away, with only a laptop and a cheap digital radio. Not only was the drone's Wi-Fi connection dependent upon "WEP" encryption, which is known to be weak, but the connection between the operator and the drone used an even less-secure radio protocol, leaving the drone open to a man-in-the-middle attack. Such vulnerabilities, he told the RSA security conference, may apply to a broad swathe of high-end drones.

Meanwhile, with rules around anti-drone equipment still unclear, it's not certain when consumers will be able to legally buy, build, or use tools to keep unwanted drone flights away from their property, says Jordan. "For some of these things, we're just going to have to wait and see."


How This Cloud-Based Security Tool Protected The Super Bowl From Hackers


ProtectWise says handling security analytics in the cloud lets it store more data and move faster than its competitors.

When football fans checked their email and uploaded photos from Super Bowl 50 this year, the Denver-based security software startup ProtectWise was monitoring traffic from Levi's Stadium in Santa Clara, California, for potential threats.

"We were invited by Norwich University's computer security program to participate with the Santa Clara Police Department as part of their security architecture for the Super Bowl," says ProtectWise cofounder and CEO Scott Chasin.

Unlike other security companies, ProtectWise generally doesn't provide hardware devices to attach to a network, or cumbersome software that can slow down individual computers by scanning files and network traffic. Instead, it provides lightweight software "sensors" that simply upload compressed network traffic from customer machines to its private cloud for analysis and monitoring.


"We have a very lightweight software sensor that acts like a virtual camera," Chasin says. "Essentially, we take that recording, compress it, optimize it, and replay it in real time or near real time for our platform in the cloud."

That approach let ProtectWise get up and running "in minutes" at the Super Bowl, where the company's software monitored about nine terabytes of data transfer to about 17 million different websites, Chasin says. He can't reveal too much about what the software discovered at the Super Bowl, other than that it found about 19 potential threats amid all that network data.

But in other cases, ProtectWise's software has been able to spot malware infections and hacking attempts—and use archived traffic history data to trace them back to their origins and determine which machines were compromised when. If a machine is spotted communicating with a malware command-and-control server, for instance, ProtectWise can replay prior data to determine how and when the machine was first infected.
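Conceptually, that retrospection is a scan of archived records against indicators learned later. A minimal sketch, assuming flows are stored as (timestamp, source, destination) tuples:

```python
# Minimal sketch of retrospection: once a destination is identified as
# a malware command-and-control server, scan archived flow records to
# find which machines talked to it, and when they first did so.

def retrospect(archived_flows, bad_hosts):
    first_contact = {}
    for ts, src, dst in sorted(archived_flows):
        if dst in bad_hosts and src not in first_contact:
            first_contact[src] = ts
    return first_contact

flows = [  # hypothetical archive: (timestamp, source, destination)
    (1001, "10.0.0.5", "example.com"),
    (1040, "10.0.0.7", "c2.badhost.example"),
    (1100, "10.0.0.7", "c2.badhost.example"),
]
print(retrospect(flows, {"c2.badhost.example"}))  # {'10.0.0.7': 1040}
```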

"Today's attacks are extremely complex, and they happen over a really long period of time," says Chasin.

A February report from security firm Mandiant found that in one set of security breaches, it took companies a median of 146 days to realize they'd been compromised. ProtectWise can store data for weeks, months, or longer, so it can be replayed for further analysis in the event of a security issue. Customers pay varying fees depending on how much data they wish to store and for how long, and ProtectWise's cloud-based approach means they don't need to allocate their own servers to store the data or worry about keeping it safe.

And as the company learns of new kinds of attacks from published reports or its own research, it can automatically scan for them in its customers' recorded internet traffic, notifying them if, for instance, it's discovered their employees have been sending data to a known phishing site.

"We have many examples of retrospection," Chasin says. "They run the gamut from what you would think would be more traditional malware whose existence wasn't known previously to zero-day vulnerabilities, where we only learned about them recently, but in fact they're used in breaches historically, to ransomware phishing attacks that weren't discovered until a retrospective scan had taken place."

And network data is generally enough to catch most attacks, even those that come in through other channels like compromised USB keys, since they'll ultimately involve someone trying to remotely control machines, extract data, or do something else that relies on the internet, he says.

"We like to say the network doesn't lie," he says. "It's our true north."


Yahoo Hack, Among Largest Ever, Could Be Work of China, Experts Say


It's still unclear when Yahoo learned of the breach or what it will mean for its planned $4.8 billion asset sale to Verizon.

A day after Yahoo announced that login credentials for at least 500 million accounts had been stolen in one of the biggest known data breaches in history, questions still remain about who orchestrated the attack and why it took so long for the internet giant to inform users.

Yahoo, which is in the midst of selling its core business to Verizon, attributed the attack to a "state-sponsored actor," saying data including usernames, passwords, dates of birth, security questions, and contact information was stolen in late 2014. It's unclear when the company learned of the compromise, and members of Congress have already called for stricter data-breach notification rules and, potentially, an investigation into whether Yahoo knew of the hack and failed to disclose it while negotiating the $4.8 billion Verizon deal.

"This breach demonstrates the urgent need for Congress to enact data breach and security legislation—only stiffer enforcement and stringent penalties will make sure companies are properly and promptly notifying consumers when their data has been compromised," said Connecticut Sen. Richard Blumenthal in a statement. "As law enforcement and regulators examine this incident, they should investigate whether Yahoo may have concealed its knowledge of this breach in order to artificially bolster its valuation in its pending acquisition by Verizon."

One possibility, says Neill Feather, the president of Scottsdale-based security firm SiteLock, is that the breach was discovered in preparation for the acquisition. While reports surfaced over the summer of an anonymous dark web vendor offering to sell the credentials to hundreds of millions of Yahoo accounts, it's not clear whether that offer was legitimate or linked to the same breach.

To Chris Finan, former director for cybersecurity legislation and policy on the National Security Council staff, it's more likely that the just-announced hack was the work of China, which was heavily involved in hacking public networks to track political enemies around the time of the breach.

"Back in that 2013/2014 time period, there was quite a bit of state-sponsored, or at least state-aligned group activity targeting credentials, and the theory at least was that it was a means of monitoring dissidents in China and abroad," he says, before talks between President Barack Obama and Chinese President Xi Jinping reduced the number of hacks.

Under that theory, it's more likely that the attackers would have only been interested in a small number of accounts connected to political targets, perhaps even harnessing reused Yahoo credentials or cross-site login features to access their accounts on other sites as well.

"When I'm talking to individual companies around the globe, you'd be shocked how many people use the same two or three or four passwords," says Miller Newton, CEO of security firm PKWare.

If only a few accounts were actually accessed, that could explain why Yahoo took so long to notice its servers had been breached, says Finan, who is now CEO of blockchain-powered security startup Manifold Technology.

"If the credentials weren't used en masse, it would make it more difficult to realize they had been stolen," he says. "Still, to go two years, that seems a little surprising."

Yahoo credentials have long been relatively inexpensive on the black market—the 200 million listed earlier this year were reportedly offered for under $2,000—which could be a reflection of the ease with which hackers can obtain them, Finan says.

But aside from its size, which seems to dwarf even other large-scale data breaches—MySpace saw data on 360 million accounts stolen earlier this year, and about 167 million LinkedIn account logins were reportedly offered for sale in May as the result of a 2012 breach—the Yahoo attack is also unusual in that it compromised additional credentials like dates of birth and security questions and answers, which may be hard for users to change, or even to recall where else they've used them, says Feather.

"I don't know where I've used the same security questions," he says.

For Yahoo's users, security firms are offering the advice that's become almost routine to hear alongside reports of major data breaches: Pick strong passwords and don't reuse them, store them in a password manager if possible, and enable two-factor authentication with services that support it.

"Also, everyone should be aware of what's going on," said Comodo Enterprise vice president and general manager John Peterson in a statement. "If an organization that you interact with reports a breach, don't wait to update your password. Do it immediately."

And for Yahoo and its shareholders, it's still uncertain what the hack could mean for the pending Verizon acquisition. In a statement emailed to Fast Company Thursday, the telecom giant indicated it was still seeking to learn more details about the breach.

"We understand that Yahoo is conducting an active investigation of this matter, but we otherwise have limited information and understanding of the impact," a company spokesman said in the statement. "We will evaluate as the investigation continues through the lens of overall Verizon interests, including consumers, customers, shareholders, and related communities."

How IBM's Bluemix Garages Woo Enterprises And Startups To The Big Blue Cloud


The locations let IBM teach both startups and big companies how to harness its cloud services.

At one time, a tech industry truism held that "nobody ever got fired for buying IBM." The company was practically synonymous with computing in many industries, whether it was offering mainframes or early PCs. But when it comes to new technologies like cloud computing, younger programmers at startups today are less likely to instinctively reach for offerings from Big Blue, the company readily acknowledges.

"This new kind of emerging, new-style programmer doesn't think positively, they don't think negatively, IBM's just kind of invisible to them," says Steve Robinson, general manager for client engagement.

That's part of why IBM started its Bluemix Garages. They're locations that are typically embedded within incubator or coworking spaces popular with startups, where developers can get assistance from the company in exploring its Bluemix cloud platform, he says.

The first Bluemix Garage opened in 2014 at the San Francisco branch of Galvanize, a company offering workspace and tech training at locations across the country; Galvanize hosts about 220 startups at that location alone. Since then, IBM has opened additional Garages in cities including Toronto, New York, London, and Nice, France, with more planned for Melbourne, Tokyo, and Singapore.

"We set them up where there are these larger groups of startups," Robinson says. "We are a citizen of their community, and we bring the Bluemix and the IBM story there as well."

The facilities offer collaborative sessions where IBM staffers help companies brainstorm potential ideas and spec out ways to reach particular types of users, or even work together over a period of weeks building out working apps harnessing IBM technology.

"They go home with not just a prototype, but a live, active application running on the cloud," Robinson says.

IBM isn't the only cloud vendor to offer walk-in locations for developers to ask questions. Amazon Web Services, which according to data from industry analyst Synergy Research Group still dominates the cloud market, operates what it calls AWS Pop-Up Lofts in New York and San Francisco. The Lofts feature training sessions, walk-in office hours and workspace for developers, and, of course, plenty of other vendors regularly demonstrate their offerings at meetups and conferences.

But Robinson says the emphasis on design thinking and serious collaboration—IBM encourages companies using the Garage to bring developers, designers, and business staff to meet with counterparts from within the company—sets it apart from the competition and helps IBM learn what its cloud clients really need.

"It's given a chance to have our IBM groups be much closer to the pulse of startups," he says. "They, in turn, get to see IBM in a newer light."

And, it turns out, the sessions don't just attract startups: They also bring in more established companies looking to learn modern design and development practices while turning out new products for the cloud, he says.

Visiting developers typically pair-program with IBM engineers, sitting at computers equipped with two keyboards and building a fledgling application together using various IBM APIs, from the Watson artificial intelligence and machine learning suite to weather data feeds. In the New York Garage, in an area of SoHo not far from Wall Street or the financial industry's Jersey City data centers, many companies are interested in exploring IBM's blockchain tools, Robinson says.
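The specifics vary by service, but Bluemix-era APIs were typically authenticated REST endpoints, so a fledgling app's first call might look like the sketch below. The endpoint, credentials, and response fields are placeholders, not IBM's documented interface:

```python
# Generic sketch of calling a Watson-style REST API from a new app.
# Endpoint, credentials, and response fields are placeholders, not
# IBM's documented interface.
import requests

ENDPOINT = "https://api.example.bluemix.net/tone/v1/analyze"  # placeholder
CREDS = ("apiuser", "apikey")                                 # placeholder

resp = requests.post(ENDPOINT, auth=CREDS,
                     json={"text": "Our Q3 priorities are growth and trust."})
resp.raise_for_status()
print(resp.json())  # e.g. detected tones, to compare against marketing copy
```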

"We've done everything—we had one company looking to work with the Watson APIs where they wanted to take a look at their executive speeches and see whether they were online with the marketing messages they wanted to put out," he says. "We had another bank in who wanted to open up some APIs to their extended business partner community."

For PLM Industries, a startup working on digital trackers for freight shipments, the staff of the San Francisco Bluemix Garage helped bring together engineers expert in IBM's Internet of Things platform (which the company relies upon as a backend), designers, and even IBM employees who had previously worked in the logistics industry and could understand the company's goals, says PLM President Tim Parker.

IBM helped the startup focus on what was necessary for a basic version it could quickly test with potential customers, says Vernice London, vice president at PLM.

"We came in with tons and tons of ideas and functionality we wanted to put into the system," he says. "They allowed us to focus in on the end user and get that minimal viable product and showed us the way to quickly get to market and what they call land and expand."

Even In The Tech Industry, Sticky Tape Remains A Preferred Security Measure


Preventing hackers from spying through digital webcams and microphones often involves surprisingly analog solutions.

Although they presumably have access to cutting-edge security tools and first-rate professional advice, Facebook CEO Mark Zuckerberg and Federal Bureau of Investigation Director James Comey still use a surprisingly low-tech piece of security equipment.

Both reportedly place a piece of sticky tape over their computers' webcams to make sure that even if hackers get access to the machines, they still can't spy through their cameras.

"There's some sensible things you should be doing, and that's one of them," Comey said at a September conference.

Years of reports have said that hackers, including some employed by the FBI, can sometimes activate webcams without turning on the indicator light, and research released last year by a team at the University of California, Berkeley, suggested that distracted users may not even notice the lights when they do turn on unexpectedly.

All of this has made physical security precautions to block out webcams and other recording devices—Zuckerberg reportedly also covers his computer's microphone—more mainstream than ever. A recent survey from virtual private network provider Hide My Ass! reported that 82% of computer users polled were concerned about webcam spying, and that 35% had already taken steps to cover their cameras.

Hide My Ass! even released a promotional webcam cover, designed to allow would-be intruders to see nothing but a static image of a cat. "Instead of just blacking out a computer's webcam with a piece of tape or a shutter, we wanted to give the public a way to protect their everyday privacy and simultaneously send a personalized message to intruders," said Cian Mckenna-Charley, the company's marketing director, in a statement.

A Real Threat

While there's no sign that Zuckerberg and Comey themselves have ever had their webcams tampered with, the problem is more than just a theoretical worry. In 2014, a California man pleaded guilty to charges related to allegedly hacking into multiple women's computers, surreptitiously taking nude photos and attempting to blackmail his victims. Pranksters have reportedly also taken control of networked baby monitors, frightening parents and children. Computer stores have been accused of installing spying software on the devices they sell, and a Pennsylvania school district agreed in 2010 to pay more than $600,000 to settle a lawsuit alleging district employees spied on students through cameras installed on school-issued laptops.

In some cases, malware used to remotely activate webcams and spy on users has been used for international espionage, says Kevin Haley, director of security response at Norton by Symantec, the security vendor.

"We see these used by nation-states to spy on others," he says. "They're done by criminals when they get inside an organization and they want to steal something."

Hackers can even hijack webcams in offices to observe people typing in passwords, he says. The sheer size and complexity of video files mean that it's generally not possible to fully automate webcam snooping—a human typically has to be on the other end, actually spying, to make use of the hacked device. But there are still enough hackers willing to actively watch webcam feeds to make such attacks a real threat, he says.

And since any software tools used to disable webcams, microphones, and other input tools can generally be overridden by hackers with sufficient control over a computer, physically obscuring the devices generally does make sense, he says.

"You could turn off the driver if you knew what you were doing on the machine, but of course, if somebody's on your machine, they could turn it back on," he says, referring to the low-level software that controls the camera. "Blocking your webcam is kind of your last line of defense."

The Analog Solution

Makers of computers, phones, and other connected devices historically haven't provided built-in ways to physically disable potential spy gear. They may be reluctant to do so because it would add to the complexity of the devices—and make potential customers think of hacking risk, which is hardly a great marketing point, Haley suggests.

"The last thing they want to do is make you think about somebody being able to spy on you when you're trying to decide on a new computer," says Haley.

It also runs against a decades-old culture in the computing industry that emphasizes software control, rather than physical switches, in digital electronics, says Gunter Ollmann, the chief security officer at San Jose-based security firm Vectra Networks.

The risk isn't just limited to traditional webcams, says Ollmann, whose company reported on vulnerabilities in one inexpensive networked camera earlier this year. He adds that internet-enabled household tools like home security cameras and networked TVs with cameras and microphones can also be hacked. So can videoconferencing tools often installed in offices, which can sometimes be used as a gateway into other office machines.

"Those webcams themselves are compromised as if they were a computer and used for additional nefarious harm into the network it's connected to," Ollmann says. "The current generations of these technologies are still highly vulnerable to network exploitation and compromise."

In addition to taking physical steps to block or disable webcams and microphones—connecting an external microphone cord with nothing attached will often disable a laptop's internal microphone—computer users can take traditional steps like keeping firewalls and security software up to date and monitoring device makers' websites for security patches, he says. But many devices, especially those geared toward home users, simply don't deliver security updates, even if newer versions are safer, he warns.

"There's not an awful lot of product support for year-old technology in the consumer sphere," he says. "They're just not patched or updated. Security is a cost to these companies."

Ultimately, that may mean that as such devices, along with smartphones and wearable recording devices like Google Glass and Snapchat Spectacles, become ever more ubiquitous, people may simply learn to guard themselves around any electronics and to seek out deliberately private spaces for private activities, he says.

Edward Snowden, the National Security Agency whistleblower, reportedly insisted his Hong Kong lawyers place their cell phones in a refrigerator to avoid remote spying, and some government agencies restrict where employees can bring personal devices for security's sake.

"It's not just our home to which we're deploying these technologies," says Ollmann. "They're now in our workspace and public and private areas, so we'll constantly be monitoring against these things."


The Feds Want To Stop Election Hackers, But States And Voters Are Wary


Cyber threats to U.S. elections are real—and so is distrust in the federal government.

After hackers said to be linked to Russia stole data from voter registration systems in Arizona and Illinois earlier this year, the federal Department of Homeland Security offered digital security assistance to state and local election officials around the country.

In August, Homeland Security Secretary Jeh Johnson also raised the possibility of declaring some election-related systems to be "critical infrastructure." Under an executive order issued by President Barack Obama in 2013, that would likely mean federal officials would work with local authorities to coordinate voluntary security standards for those systems.

So far, 21 states have reached out to DHS for assistance, Johnson said in a statement released on Saturday. But some state officials and activists have expressed fears that even voluntary assistance programs, and especially a future critical infrastructure designation, could lead to an unprecedented level of federal involvement in elections.

"This suggestion caught many elections officials by surprise and rightfully so," Georgia Secretary of State Brian Kemp told a Congressional subcommittee last week. "The administration of elections is a state responsibility. Moreover, this suggestion came from an agency completely unfamiliar with the elections space and raised the level of public concern beyond what was necessary."

Kemp and other state and local officials have expressed concern about federal officials setting standards about how elections are conducted. While Johnson has emphasized that taking assistance from DHS is voluntary, skeptics worry federal officials will ultimately set legal or de facto standards that states will feel compelled to follow.

"It's one thing if they want to make recommendations to the states on how to improve cybersecurity," says Hans von Spakovsky, a manager at the conservative Heritage Foundation's Election Law Reform Initiative and a former member of the Federal Election Commission. "It's quite another if they want to come in and dictate what states do, because that then brings the federal government into trying to run election administration, which is not a role given to the federal government. That's been done by the states throughout our entire history."

In terms of digital threats, security researchers have warned for years that some electronic voting machines are disturbingly easy to tamper with. Just last week, Princeton University computer science professor Andrew Appel again urged Congress to help phase out touchscreen machines that don't generate a backup paper record of ballots cast, a gap that makes them especially vulnerable to tampering or accidental data loss.

But experts say it's unlikely that hackers could exploit bugs in voting machines to reliably sway a national election. Since the machines aren't internet-connected, that would require hackers surreptitiously getting physical access to large numbers of individual machines at precincts scattered across the country.

"What is more realistic is a smaller number of confirmed intrusions that maybe again aren't enough to change the outcome on the national level but are enough to undermine people's confidence in the results," says Julian Sanchez, a senior fellow at the libertarian-leaning Cato Institute who studies cybersecurity. "I think that's a more likely scenario."

Attackers looking to undermine confidence could steer clear of voting machines altogether, and focus their attacks on internet-enabled systems. That could mean entering false names into online registration systems or tampering with them to cause check-in delays and long lines at polling places, according to Joe Kiniry, CEO and chief scientist of Free & Fair, a company that develops open-source voting software.

"You don't even have to touch the voting machines," he says. "You just mess up the database."

Or, Kiniry says, attackers could interfere with online systems that publicize results after ballots are cast: Even if they can't actually change the official tallies and the right numbers ultimately make it to the public, they could still sow doubt among voters about the results.

In theory, DHS should be able to help local election authorities with limited tech resources to keep their systems more secure, providing services like digital vulnerability scans to agencies that request them and helping share information about known risks.

"They do offer that: You can contact DHS if you're an elections jurisdiction and they'll come and help you," says Pamela Smith, president of Verified Voting, which advocates for transparency and verifiability in election technology. "It's important to do vulnerability scanning or testing. If you haven't done anything like that or you're not even sure what that means because you didn't used to have know those things to be an election official, then [they're] here for you."

Verified Voting advocated for declaring voting equipment to be critical infrastructure back in 2013, when federal agencies sought public input on implementing Obama's cybersecurity order. The designation could help marshal more resources to protect elections, at a time when some local authorities lack the tech expertise needed to implement their own cybersecurity programs.

In a 2013 filing, Smith and others from the group wrote, "Given that large corporate entities, banks, government institutions, and others have experienced security breaches and sometimes sustained significant losses despite being well-resourced, it is unlikely that an under-resourced elections office, if targeted, would be able to evade similar breaches or even detect them in a timely manner."

But with trust in the federal government at near-historic lows, increased DHS involvement might not be the best way to address worries about vote tampering and legitimacy, Sanchez notes. And formally classifying election technology to be critical infrastructure, alongside the nation's dams, nuclear facilities, food supply, and other sectors, doesn't necessarily mean it'll be kept free from hackers. "Lots of things are designated critical infrastructure," Sanchez says. "It doesn't prevent those companies from getting hacked from time to time."

Twitter Met With Senate Staffers To Discuss Concerns Over Russian Propaganda


Among the concerns was that Russia could be using social media to spread misinformation designed to sway the U.S. presidential election.

Representatives from Twitter have met with Senate staffers to discuss concerns about Russian-backed efforts to use the network to spread propaganda and misinformation to manipulate the U.S. election, both sides confirmed on Wednesday. The meeting was the result of a letter sent by Delaware Sen. Tom Carper, ranking Democrat on the Senate Homeland Security Committee, to Twitter CEO Jack Dorsey about the issue.

"These 'social' cyberattacks are made possible through the proliferation of 'bots,' automated and often false accounts controlled by a single entity, that pollute information streams by generating messages that appear to come from many different users," Carper wrote last month in the letter to Dorsey, inquiring about steps the company takes to curb automated bots on the network.

"Our staff recently met with Senator Carper's staff, to explain our content policies and anti-spam tools," Twitter spokesman Nu Wexler wrote in an email to Fast Company Wednesday, though Wexler declined to comment further on the meeting or on any statistics the company may have on such bots on the network. A spokesperson for Carper also declined to comment, beyond saying that the senator's staffers were "pleased to have a substantive meeting" with the Twitter representatives.

A report in The Guardian last year described a Russian state-sponsored "troll army," paid to intersperse innocuous clickbait posts on blogs, forums, and social networks, with content praising Russian President Vladimir Putin and critiquing enemies of his regime at home and abroad.

Both Russian-funded official media, like the RT television network, and government-backed internet trolls have reportedly taken aim at U.S. targets before, notably including now-White House Communications Director Jen Psaki during her earlier stint at the State Department. Adrian Chen, a staff writer for The New Yorker who's reported on Russian internet propaganda, wrote in July that some Twitter accounts linked to Russian trolls had begun to promote Donald Trump.

And after reports that Kremlin-backed hackers were behind hacks on the Democratic National Committee's networks and subsequent embarrassing leaks, security experts expressed concern that the Russian government could be attempting to influence the U.S. election.

"Election officials at every level of government should take this lesson to heart: Our electoral process could be a target for reckless foreign governments and terrorist groups," warned members of the Aspen Institute Homeland Security Group, including former top Department of Homeland Security officials, in a July statement. "The voting process is critical to our democracy and must be proof against such attacks or the threat of such attacks."

While this year's foreign digital attacks on the U.S. political system are believed to be a first, the Russian government has reportedly used similar tactics to influence politics in Ukraine and Georgia.

Recent hacks on online voter registration systems in Arizona and Illinois have also been tentatively linked to Russia. Carper also wrote last month to the heads of the National Governors Association, urging them to work with federal officials to safeguard state election systems.

Cybersecurity experts have said it's unlikely that Russian hackers or other digital attackers could manage to digitally alter vote totals and sway the election, but they've warned that even limited tampering could undermine public confidence in the vote at a time of high partisan distrust.

As Airlines Digitize, They Are Confronted With Increased Cybersecurity Risks

As aviation systems increasingly resemble standard computer networks, airlines are learning to deal with familiar cybersecurity risks.

Since the start of last year, major airlines including United, American, Delta, Southwest, and JetBlue have all seen flights delayed or canceled due to on-the-ground computer issues.

And while none of the outages have been linked to deliberate sabotage, it's likely that hackers do probe aviation systems looking for potential vulnerabilities, whether in ticketing systems, air traffic control networks, or computer systems onboard planes, experts say.

"We don't have a lot [of hacker attempts] in the airline systems yet where they've been successful," says Mickey Roach, a partner at PricewaterhouseCoopers who works with cybersecurity issues. "We know that they're trying."

Last year, United reportedly banned security researcher Chris Roberts after he implied he could take control of the plane's digital systems by connecting to a computer accessible from his seat. And while the airline has said the technique wouldn't actually work, a report issued last year by the Government Accountability Office warned that increasingly connected systems on planes could boost the possibility of cyberattacks or malware entering through computers brought on board by airline staff.

"For example, the presence of personal smartphones and tablets in the cockpit increases the risk of a system being compromised by trusted insiders, both malicious and non-malicious, if these devices have the capability to transmit information to aircraft avionics systems," according to the report.

Similarly, the GAO warned that plans by the Federal Aviation Administration for more interconnected air traffic control systems would likely require greater attention to cybersecurity—not as necessary in existing systems with limited connectivity. In essence, as aviation technology modernizes and more closely resembles other computer networks, it becomes vulnerable to the same threats seen in other industries and to a wider range of attackers with the knowledge necessary to inflict damage, says Tim Erlin, senior director of IT security and risk strategy at the security firm Tripwire.

"These traditional systems require physical presence or physical access. They require specialized equipment to access them," he says. "There's a tendency to make an assumption of security through obscurity."

Airlines are making progress, he says, by being more mindful of potential threats and how to prevent them. They're also increasingly sharing information on potential digital threats through organizations like the Aviation Information Sharing and Analysis Center.

"The mitigation strategies are sharing information between all parties and collaboration," wrote Pascal Buchner, CIO of the industry trade group International Air Transport Association, via email.

Even if hackers don't gain access to in-flight systems, they can still potentially cause disruptions, tampering with ticketing systems, maintenance tracking systems, or even the computers that track where flight crews are spending the night, according to Roach. If airlines can't figure out who has a valid boarding pass, whether a plane's had all of its necessary maintenance, or if the flight crew has had enough time off to fly legally, they will be forced to cancel flights.

In other cases, airlines can lose money and face angry customers because of online fraudsters gaining access to frequent-flyer accounts. A Florida man was arrested this spring on charges that he stole more than $260,000 worth of American Airlines miles, and a man said to have knowledge of Air India's frequent flyer systems was arrested in July after he allegedly used a combination of illicitly obtained login credentials and forged paperwork to steal miles and sell airline tickets to travel agents.

"It's a big problem, because what happens is, it's not the major hacking groups that are doing this—usually it's this one-off kind of stuff," Roach says. "People's individual accounts get hacked, they transfer the points out, and then people complain, and [airlines] have to replace the points."

To help curb attacks on consumer-facing systems, last year United became the first major airline and one of the first large non-tech companies to launch a bug bounty program, rewarding hackers who report security flaws in the company's systems.

"We did it because our overriding concern in everything we do is to ensure our customers' information is well secured and that their private data is in good hands with us," says Arlan McMillan, the airline's chief information security officer.

Participants who report bugs are rewarded with frequent flyer miles rather than the cash payouts some other bug bounty programs offer, and they aren't allowed to experiment with in-flight systems. So far, McMillan says, the program has delivered valuable results, though he declined to go into detail about the number or nature of detected bugs, or the number of miles paid out. While the company already had standard security measures like penetration testing in place across its servers, bug bounty hunters have still found additional flaws, says McMillan. Participants can earn up to 1 million miles for a severe bug that allows hackers to execute code on United's servers.

"We've found some interesting business logic situations that the moon has to be aligned perfectly for this vulnerability to actually present itself, so very unique cases like that," he says. "My team loves puzzles, and you can think of these types of researchers in very much the same way: They look for puzzles."

Generally, airlines have been quick to adopt new technologies, saving money and giving customers more options in how to do business with them, Tripwire's Erlin says. But many of those technologies also increase the number of ways the airlines' increasingly complex processes can go awry, whether due to out-and-out sabotage or simply unexpected technical flaws.

"In adopting that technology, they've adopted not just the security risks but the operational risks that come with that technology," he says. "The tricky part with IT is there are always new and interesting ways for things for fail."

Old-Fashioned Stings Nab Weapons Buyers As Illicit Markets Move Online

Encryption on dark websites is no match for a tried-and-true law enforcement technique: impersonating vendors of illegal merchandise.

According to prosecutors, a Houston man arrested last month on federal explosives charges tried to buy dynamite, a grenade, and a remote detonator through the anonymous "dark web" market called AlphaBay.

The man, 50-year-old Cary Lee Osborn, allegedly took precautions to keep his identity a secret, as he sought explosives to ensure a building "burns to the ground" and to "send [a] message" to its occupant, according to Federal Bureau of Investigation transcripts of his alleged online messages.

In addition to shopping on a black market site only available through Tor, the anonymizing web service, and paying with bitcoin, Osborn wrote of using a "multi hop VPN" to further obscure his digital address, according to the transcripts. He allegedly rented a post office box with a false name and fake driver's license in order to receive the explosives, officials say.

"Dont know exactly whats inside but person using for apartment," he's alleged to have written to an AlphaBay vendor, explaining his need for the explosives. "Person will not be there when set off."

But despite his alleged use of modern cryptographic tools and old-fashioned deception, Osborn was quickly arrested for a surprisingly simple reason: The online vendor he's accused of contacting to order the materials was an undercover employee of the FBI. According to court records, the explosives he received were fake, and he was arrested soon after opening the package.

The case, in which prosecutors say Osborn could face up to 10 years in prison, is one of a number of recent incidents where alleged buyers of illegal goods on dark web sites have been arrested attempting to buy from vendors who are actually undercover law enforcement agents. The sites, which can offer anonymized marketplaces for drugs, weapons, and other illegal goods, complete with Amazon-style vendor reviews, allow users to do business without revealing their internet protocol addresses, email addresses, or phone numbers.

But to buy physical goods, they're still ultimately forced to trust those unknown vendors with some sort of address where they can receive their merchandise, which leaves them vulnerable to old-fashioned sting operations.

"We see fraudsters of all kinds, whether it's health care or just trying to steal your banking transactions, trying to operate in a way that we can't see,"FBI director James Comey told Congress last September. "And so they think if they go to the dark web—the hidden layers, so called, of the internet—that they can hide from us. They're kidding themselves, because of the effort that's been put in by all of us in the government over the last five years or so, that they are out of view."

An FBI spokesperson declined to comment on the number of investigations, past or present, that have involved undercover work on the dark web. But the FBI and prosecutors have previously cited such work in a number of cases. Last year, a then-22-year-old Manhattan man named Cheng Le was sentenced to 16 years in prison after being convicted on charges that he attempted to buy the fatal poison ricin from an undercover FBI employee on an unnamed dark web site.

"This might sound blunt but do you sell ricin?" he allegedly asked, before repeatedly hinting at reselling the poison or using it for murder for hire, according to court documents.

"I'll be trying out new methods in the future," he wrote, according to the documents. "After all, it is death itself we're selling here, and the more risk-free, the more efficient we can make it, the better."

Also last year, a computer programmer from Liverpool, England, named Mohammed Ali was sentenced by a U.K. court to eight years in prison, also accused of attempting to buy ricin from an undercover U.S. investigator, who reported the case to British authorities. Ali told the court he was simply curious after exploring the dark web and learning about ricin from a Breaking Bad episode and didn't realize the chemical was illegal, according to a report in The Guardian.

In a more controversial case, dubbed Operation Pacifier by the FBI, agents seized a server last year belonging to a notorious dark web child pornography site called Playpen and obtained a warrant letting them install malware on visitors' computers in order to locate them despite Tor's anonymized connections. Since the illegal goods were entirely digital, users would be unlikely to supply physical addresses or other identifying information. Officials arrested more than 135 people, though some of those accused are challenging the legality of operating the server and distributing malware, even with a warrant.

"The FBI carried out thousands of searches and seizures, in locations around the world, based on a single warrant,"the Electronic Frontier Foundation has argued."The particularity requirement of the Fourth Amendment was designed to prevent precisely this type of sweeping authority."

But in cases where law enforcement officers are simply impersonating sellers of illegal goods, and not hacking computers belonging to potential buyers, there's likely little legal protection for those caught in such a sting, says Frank Rubino, a defense lawyer with offices in Houston and Miami. The technique is essentially the same as one that's been used offline long before the dark web existed, he says.

"The government is allowed to act as a seller of illegal objects, no matter what they be," he says. "It's been going on for years, where the government poses as, for example, a drug dealer."

And while laws protect against actual entrapment—where someone's induced to commit a crime they otherwise had "no predisposition to commit," he says—there's no legal problem with the government pretending to offer illegal merchandise to those clearly looking to buy.

"If a guy's out there looking to buy bomb material to blow you and I up, I'm thrilled the FBI is the one selling it to them, because they're going to bust him," he says. "And you and I are going to live happily ever after."

Tech Support Scams Are Getting More Sophisticated

Scammers selling bogus tech support services have moved from cold calls to targeted pop-ups and malware, according to security researchers.

For years, scammers have phoned unsophisticated computer users claiming to be from software companies and internet providers and charging hundreds of dollars to fix nonexistent technical problems.

Last September, Microsoft warned customers not to fall for fraudsters claiming to work for the company, estimating that 3.3 million U.S. users would pay $1.5 billion to tech support scammers in 2015 alone. Now, according to security vendor Malwarebytes, such scammers are getting more sophisticated than ever, placing online ads that generate fake error messages adapted to each victim's computer setup.

"The evolution of this scam is leading to more victims and much greater consequences for the general public," the company warned in a report issued this week.

The error messages urge users to call hotlines operated by the scammers for help fixing bogus computer problems, and call center workers charge them inflated prices for basic services like running antivirus scans and clearing software caches, or for essentially nothing at all, says Malwarebytes CEO Marcin Kleczynski.

Often, the scammers use JavaScript to generate a series of pop-up error windows that make it hard for unskilled users to even close their browsers. And in roughly the last six months, Malwarebytes researchers have seen scammers taking a page from ransomware attackers, installing malware to lock victims out of their computers until they pay to have it removed.

"We're going to see more aggressive techniques," says a Malwarebytes researcher who asked not to be named because he's involved in active investigations of the scams. "In particular, I wouldn't be surprised if they started using ransomware and encrypting people's files."

But unlike with traditional ransomware attacks, where users are openly blackmailed into paying to have their computers repaired, victims of tech support scams may not even realize they haven't paid for real tech support service, says Kleczynski. That's enabled scammers to operate through seemingly legitimate companies in the U.S. and abroad, accepting credit cards for payments without immediately generating suspicious numbers of complaints to banks.

Workers at fraudulent call centers may not even realize they're part of a scam, since they're often isolated from the parts of the company deploying malware or fraudulent ads. And scam operators often deliberately hire employees incentivized not to ask too many questions, even advertising in classified ads that they're willing to hire employees with criminal records who might have a difficult time finding work.

"The upper management is aware that they're hiring people who may not find a job elsewhere and may be easier to manipulate," says the Malwarebytes researcher.

Malwarebytes has worked with the Federal Trade Commission to shut down some scam operations and provided experts to testify in one case in which an alleged scam company called OMG Tech Help agreed earlier this year to surrender its assets to a court-appointed receiver. But even as regulators strike back, other fraudsters continue to take advantage of users who don't know to watch out for the scams, Kleczynski says.

"We've got to keep screaming this from the rooftops," he says.

The scammers have also gotten adept at evading detection, switching IP and web addresses to evade blocking by browser vendors and security software. And to get around filtering by the online advertising networks they use to deploy misleading pop-ups, they'll often purchase legitimate ads for a time, then begin injecting nefarious content.

"These things are embedded in real time," Kleczynski says of internet ads. "You've got criminals serving good advertising for a while and then swapping it out for bad advertising."

Sometimes the scammers will even filter calls from unknown numbers or numbers tied to government investigators or security firms, he says, in an effort to evade detection.

For internet users looking to dodge scams, Kleczynski advises following typical online security advice: Keep operating systems patched and use security software to filter out malware that could be used by scammers; avoid browsing dodgy websites that are more likely to allow unsavory advertisers; and be skeptical of unsolicited messages or calls from anyone claiming to represent companies like Microsoft or Apple.

Microsoft has worked with AARP to help inform seniors about the scams, but elderly users remain more likely to fall for the fraudulent messages and cold calls.

"I would not pick up a random phone call. My grandmother would," Kleczynski says. "I think that just [effectively] selects who's going to be talking to a lot of these scammers."

After Years Of Warnings, Internet Of Things Devices To Blame For Big Internet Attack

Hundreds of thousands of cameras, routers, and DVRs have been hijacked by malware for use in massive denial of service attacks.

On Friday, a series of massive distributed denial of service attacks disrupted access to major internet services including GitHub, Twitter, Spotify, and Netflix.

The attackers apparently used tens of thousands of hacked internet of things devices—household appliances such as digital video recorders, security cameras, and internet routers—to generate a massive amount of digital traffic. That digital noise was sent to Dyn, a domain name service provider used by major online companies, disrupting its ability to translate human-readable internet addresses into the IP addresses networks use to route traffic.
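For a sense of what that translation step looks like in practice, here is a minimal sketch using Python's standard library; the hostnames are just examples, not a reconstruction of Friday's traffic.

```python
import socket

# A DNS provider like Dyn answers queries that map human-readable
# hostnames to the numeric IP addresses networks route traffic by.
for hostname in ["github.com", "twitter.com"]:  # example domains
    try:
        print(hostname, "->", socket.gethostbyname(hostname))
    except socket.gaierror:
        # During the attack, lookups like this simply failed, making
        # sites unreachable even though their servers were still up.
        print(hostname, "-> DNS lookup failed")
```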

The attack came after years of warnings from security experts that the makers of many internet-enabled devices paid too little attention to security, shipping internet-connected hardware with preset passwords, insecure default connections, and other vulnerabilities.

"It is just a matter of time until attackers find a way to profit from attacking IoT devices," a report from security firm Symantec warned last year. "This may lead to connected toasters that mine cryptocurrencies or smart TVs that are held ransom by malware. Unfortunately, the current state of IoT security does not make it difficult for attackers to compromise these devices once they see the benefit of doing so."

Hackers and security researchers have previously exploited vulnerabilities to get access to devices like baby monitors and webcams. Researchers from security company Pen Test Partners even demonstrated earlier this year how hackers could install ransomware on an internet-connected thermostat, leaving victims sweltering or shivering until a ransom is paid.

And in Friday's attack, compromised IoT devices were coordinated as part of a botnet—a network of hacked machines essentially turned into remote-controlled robots by malware—dubbed Mirai. Between 500,000 and 550,000 hacked devices around the world are now part of the Mirai botnet, and about 10% of those were involved in Friday's attack, said Level 3 Communications chief security officer Dale Drew on the internet backbone provider's Periscope channel Friday.

"With a rapidly increasing market for these devices and little attention being paid to security, the threat from these botnets is growing," according to a blog post published by Level 3 just days before the attack.

Mirai-controlled devices were also key components in a September denial of service attack on Krebs on Security, the high-profile blog by security journalist Brian Krebs that's both required reading for many in the industry and a juicy target for the hacking groups Krebs covers. At the time, Krebs reported that the attack was the largest ever seen by content distribution network provider Akamai, nearly twice the size of the existing record holder.

Devices compromised by Mirai have been detected in at least 164 countries, researchers from security firm Imperva reported earlier this month, with the bot programmed essentially to scan wide swaths of the internet looking for more devices with default or easily predictable passwords that it can infect. It's still not known who created the initial Mirai malware, although the source code powering the botnet was released by a hacker using the name Anna_Senpai earlier this month.

It's also unclear whether the botnet's initial creators are directly behind the attack on Dyn or whether they're effectively selling access to the attackers.

"The person who's buying time on that bonnet could be buying time on quite a few other botnets as well," Drew said on the Level3 Periscope channel. The Department of Homeland Security and Federal Bureau of Investigation have said they're investigating Friday's attack.

Security experts advise users of IoT devices to take simple steps like changing default passwords and installing any security updates that manufacturers provide, but it can be difficult to make many such devices fully secure against a determined hacker. Some manufacturers don't provide updates at all, and some only provide them through an insecure online channel, letting hackers effectively generate their own malicious updates, according to last year's Symantec report.

"Unfortunately, it is difficult for a user to secure their IoT devices themselves, as most devices do not provide a secure mode of operation," says the report, which also urges manufacturers to implement basic security measures on their connected products.

Requiring users to set their own secure passwords when setting up the devices, and disabling unneeded avenues for remote control, would help keep hackers out, according to Level 3's Mirai report.

Users can often also configure the devices to disable remote login and use free tools to make sure those connections are actually disabled, says Imperva.
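As a rough illustration of that kind of check, the sketch below probes a device you own for the remote-administration ports Mirai-style bots are known to abuse. The address and the port list are assumptions for the example, not a substitute for a proper audit tool—and it should only ever be pointed at your own equipment.

```python
import socket

# Common remote-administration ports abused by Mirai-style bots.
PORTS = {23: "telnet", 2323: "telnet-alt", 22: "ssh", 7547: "tr-069"}

def check_device(host, timeout=1.0):
    """Report which remote-login ports a device on your own network accepts."""
    open_ports = []
    for port, name in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                open_ports.append(f"{port}/{name}")
    return open_ports

if __name__ == "__main__":
    # Hypothetical LAN address of a camera or DVR you own.
    found = check_device("192.168.1.50")
    print("open remote-login ports:", found or "none - remote login disabled")
```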

"With over a quarter billion CCTV cameras around the world alone, as well as the continued growth of other IoT devices, basic security practices like these should become the new norm," says the company. "Make no mistake; Mirai is neither the first nor the last malware to take advantage of lackluster security practices."

Why Verizon's Due Diligence May Not Have Caught Yahoo's Massive Security Breach

Cyber due diligence typically looks at overall policies and broad risk rather than scouring networks from top to bottom, experts say.

After Yahoo announced its users had been the victims of one of the largest known security breaches of all time, Verizon suggested it would take at least a second look at its plans to acquire the company's core businesses.

After all, the breach, said to have compromised user login credentials and other information as early as 2014, affected at least 500 million users and has reportedly led some users to close their accounts altogether. But if the hack proves significant enough to scuttle the Verizon deal, or even to affect the ultimate sale price, that raises questions about why the security failure wasn't uncovered during Verizon's due diligence process prior to the deal's announcement.

"It's very surprising to me, because Verizon has an excellent incident response and data breach response [team]," says John Reed Stark, a security consultant and author of The Cybersecurity Due Diligence Handbook. "They have their own professional consulting arm that is extremely good at responding to data breaches."

Just as companies will hire accounting experts to pore over an acquisition target's financials to uncover any irregularities or surprises, they'll increasingly engage digital security experts to uncover any cyber risks that might lie hidden in a company's networks or security procedures.

"There are so many categories of information that are worth looking at," Stark says. "You're going to look at every single one of them to try to quantify the risk, and it's very important, because any sort of data breach, any sort of cyberattack, can really cripple a company."

That can include talking to current and former employees about security frameworks and any prior known incidents, reviewing penetration tests and outside audits, and investigating security's role in the company's culture—everything from who's ultimately in charge of digital security and where they sit in the corporate hierarchy to what procedures are in place when a digital alarm sounds in the middle of the night, Stark says.

"Like any sort of due diligence exercise, you're gonna dig down and get granular and look at the people who are really doing the work," Stark says.

But in practice, experts say, cybersecurity due diligence is often limited by time, budget, access, and even expertise, with security skills in severe shortage across digital industries.

"Some firms do it very, very well and some firms don't," Stark says. "Sometimes circumstances don't allow for it and it just means increased risk."

Even talented investigators may only be given a few days to figure out the security risks in sprawling sets of computer networks. They may also get only limited, if any, direct access to the systems involved, says Sean Curran, a director in the security and infrastructure practice at Chicago consulting firm West Monroe Partners.

To acquiring companies, cybersecurity is usually just one part of a larger due diligence process, and to companies being vetted for acquisition, it's a disruption they're looking to minimize. And with both sides often looking to move fast, especially when multiple bids are in play, that can mean only a few days' access to people, records, and computers and a focus on overall signs of risk rather than particular breaches and vulnerabilities, he says.

"The ability to identify an ongoing breach that's actually occurring at the time of the breach is nigh on impossible unless you're talking to someone who's aware of the fact," Curran says.

After all, he points out, Verizon's own annual industry-wide study of security breaches has found many take weeks or even months to discover, often only with the help of reports from outside sources like law enforcement.

Yahoo's breach, which is said to have been the work of state-sponsored attackers, apparently went unreported for several years, meaning detection in a short diligence process may have been difficult. Still, that may be of little comfort to shareholders in either company affected by the uncertainty after the breach announcement.

"It could have been beyond the scope, but I'm sure the investors are going to be asking if it was beyond the scope, then why was it," says Scott Shackelford, an associate professor of business law and ethics at Indiana University's Kelley School of Business who's written about cybersecurity due diligence.

Increasingly, companies are having to quickly decide which deals are too risky to do based on digital security risk, and the answers aren't always clear cut. A fast-moving internet startup might allow developers greater freedom to install software on their own machines than other companies, but take steps to ensure those machines can't compromise important data, Curran says.

"You've got to make a decision between the risk of this happening and the potential that you're going to miss out on this organization," Curran says. "In a competitive world, and a competitive landscape, that may be a very difficult position to be in."

IBM Will Learn How You Interact With Your Bank Site To Detect Fraud

IBM is betting that automatically learning how you move the mouse will help detect unauthorized logins.

Traditionally, banking websites have relied primarily on passwords and PIN codes to make sure people logging in are really who they claim to be.

But users can be tricked by phishing attacks into entering their bank credentials into fake websites, or they can have their login information stolen by malware eavesdropping on their devices, letting thieves access their accounts and potentially steal funds. According to reports from IBM Security's X-Force team, almost 20 million financial records were breached last year alone, with each costing financial institutions an average of $215.

To help make it easier for banks to detect unauthorized logins, IBM is introducing what it calls behavioral biometrics to its Trusteer Pinpoint Detect anti-bank-fraud toolkit. The new feature will automatically use machine learning to build statistical models of how individual users move the cursor while using banking sites and flag unusual behavior.

"The system automatically learns normal user behavior," says Brooke Satti Charles, financial crime prevention strategist at IBM Trusteer, a formerly independent security company acquired by the computing giant in 2013. And since there's no new credential for a user to accidentally reveal, the system should be harder for fraudsters to fool than those based on passwords alone, she says.

"It's about what the user does, not what the user knows," she says.

When the new feature rolls out later this year, it will work in conjunction with existing Pinpoint Detect features that look for unusual changes in a user's location, device, or software settings. The software itself won't ever make the decision to lock a user out of an account, Charles emphasizes, but it will flag any suspicious findings for banks' own systems to review and use to take action.

The system should be sophisticated enough to learn multiple patterns of normal behavior for accounts with multiple users, like joint bank accounts, she says. Since it looks at overall patterns in how a user moves the cursor, not at what elements of the page they actually click on, it shouldn't penalize account holders who access new areas of their banks' sites, she says.
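IBM hasn't published the details of its models, but the general approach—summarizing cursor traces as feature vectors and flagging statistical outliers—can be sketched with off-the-shelf tools. The feature choices and synthetic traces below are illustrative assumptions, not Trusteer's actual method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def trace_features(xs, ys, ts):
    """Summarize one cursor trace as a feature vector: speed and turn stats."""
    dx, dy, dt = np.diff(xs), np.diff(ys), np.diff(ts)
    speed = np.hypot(dx, dy) / np.maximum(dt, 1e-6)
    turn = np.abs(np.diff(np.arctan2(dy, dx)))
    return [speed.mean(), speed.std(), turn.mean(), turn.std(), ts[-1] - ts[0]]

# Train on a user's past sessions (synthetic traces stand in for real data)...
rng = np.random.default_rng(0)
normal = [trace_features(np.cumsum(rng.normal(2, 1, 50)),
                         np.cumsum(rng.normal(1, 1, 50)),
                         np.cumsum(rng.uniform(0.01, 0.05, 50)))
          for _ in range(200)]
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# ...then score a new login's trace; -1 flags it for the bank to review.
new_trace = trace_features(np.cumsum(rng.normal(10, 5, 50)),   # much faster,
                           np.cumsum(rng.normal(-8, 5, 50)),   # jerkier movement
                           np.cumsum(rng.uniform(0.001, 0.01, 50)))
print("suspicious" if model.predict([new_trace])[0] == -1 else "normal")
```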

"The really cool, unique part is it's seamless and non-invasive to an end user, so it supports the online customer experience, basically stopping fraud—not productivity," she says.

It also won't be possible for fraudsters to simply capture users' exact mouse movements and replay them, since the system will detect that they're suspiciously identical, like a forged signature that matches too well. And the data and machine learning models that IBM builds in will be anonymized and can't be used to extract account credentials or other confidential information, Charles says.

When it first rolls out, the new feature will focus on learning how users move laptop and desktop mice and trackpads, but the company may introduce comparable mobile tools in the future. Pinpoint Detect already offers tools to detect malware and compromised operating systems on mobile devices.

NASA Is Harnessing Graph Databases To Organize Lessons Learned From Past Projects

The space agency has a new tool to discover unexpected patterns in past projects.

NASA famously maintains a "lessons learned" database containing valuable information from its past programs and projects. But the vast system, which has been online since 1994, is not always easy to navigate. Now the agency is modernizing it with help from a tool more familiar to social media than space missions: graph databases.

The genesis of the change began about a year and a half ago when an engineer, attempting to search "lessons learned" for relevant documents, found the number of possible results overwhelming. "He was getting things that really were not relevant to what he was looking for," David Meza, NASA's chief knowledge architect, recalls.

Looking to make the database more useful, and help users investigate relationships beyond what basic keyword searches could uncover, Meza experimented with storing the information in a graph database—that is, a database optimized to store information in terms of data records and the connections between them. In recent years, such network graphs have become a familiar feature of online social networks.

The individual lesson write-ups themselves were nodes in the network, as were topics to which the lessons were associated by a machine learning algorithm. And to store and organize that data, Meza turned to Neo4j, a database system that's specifically designed to store graph data more efficiently than traditional, SQL-powered relational databases.
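As a rough sketch of what such a model might look like with Neo4j's official Python driver—the node labels, properties, and credentials here are invented for illustration, not NASA's actual schema:

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_lesson(tx, lesson_id, title, topics):
    # Each lesson write-up is a node; each machine-derived topic is a node;
    # ABOUT edges connect a lesson to the topics assigned to it.
    tx.run(
        "MERGE (l:Lesson {id: $id}) SET l.title = $title "
        "WITH l UNWIND $topics AS t "
        "MERGE (topic:Topic {name: t}) "
        "MERGE (l)-[:ABOUT]->(topic)",
        id=lesson_id, title=title, topics=topics,
    )

with driver.session() as session:
    session.execute_write(load_lesson, 101, "Valve contamination on test stand",
                          ["valve contamination", "fluid systems"])
    session.execute_write(load_lesson, 202, "Battery leak caused fire hazard",
                          ["battery hazards", "valve contamination"])
driver.close()
```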

"We frequently have customers telling us that we're a thousand times faster or a million times faster than a relational database," says Emil Eifrem, CEO of Neo Technology, the San Mateo, California, company behind Neo4j.

The tool was also notably used by the International Consortium of Investigative Journalists to map connections between people and companies identified in the massive leaked collection of offshore finance data dubbed the Panama Papers. And, says Eifrem, it's frequently used by e-commerce companies looking to generate automated product recommendations based on relationships between users and products, and by financial institutions looking to identify suspicious sets of transactions—even in cases where the individual transactions don't look suspicious on their own.

"A fraud ring is all about relationships," says Eifrem.

And at NASA, Neo4j and a graph visualization tool called Linkurious helped Meza's team build an interface to explore the databases of lessons, finding documents relating to particular topics and even uncovering connections between disparate subjects. In one case, Meza says, he came across a puzzlingly strong connection between lessons relating to contaminated fluid valves and those dealing with battery fire risks.

"I couldn't figure out how valve contamination was actually correlated to fire hazards within batteries," he says. "I realized the topic that talked about battery hazards and fires, there were issues where lead leaked out of the batteries and contaminated the water."

That could let an engineer researching valve contamination issues discover potentially relevant documents about battery issues that might not have popped up in a keyword search.
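A graph query of that general shape might hop from one topic through the lessons that share it to surface neighboring topics the engineer never searched for. The Cypher below is hypothetical, reusing the invented schema from the earlier sketch:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Hypothetical Cypher: starting from one topic, hop through the lessons
# that share it to reach related topics and the lessons filed under them.
RELATED = """
MATCH (:Topic {name: $topic})<-[:ABOUT]-(:Lesson)-[:ABOUT]->(other:Topic)
MATCH (other)<-[:ABOUT]-(suggested:Lesson)
RETURN DISTINCT suggested.title AS title, other.name AS via
"""

with driver.session() as session:
    for record in session.run(RELATED, topic="valve contamination"):
        print(f'{record["title"]} (via topic: {record["via"]})')
driver.close()
```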

Meza says he's now looking at analyzing how the lessons are clustered by time and geographical location, which might help uncover trends in what sorts of issues are being reported or situations where particular NASA sites are reporting more problems of a certain type.

He's also looking at using Neo4j to store relationships between other types of documents, particularly when one document cites another. As authoritative documents like policy directives change over time, it can take some time for those changes to propagate through chains of documents citing one another, he says.

"With a graph database, I can see being able to find out really quickly which documents could be affected," Meza says.

The tool might also be able to help track how particular NASA research projects influence other research or industrial developments, even indirect cases where a product is influenced by another invention, itself influenced by NASA research, he says.

Amid Security Skills Shortage, Intel's McAfee Moves Toward Data Sharing And Automation

The company is adding machine learning, automation, and interoperability features to ease the load on overburdened security engineers.

As hacking attacks become more prevalent and the cybersecurity industry continues to struggle with a shortage of skilled workers, the sector needs to rely more on automation and data sharing to let experts focus quickly on the hardest problems, says Chris Young, general manager of Intel Security.

"What you see, unfortunately, in a lot of cybersecurity shops today is that the humans are drowning in alerts," says Young, who is slated to remain at the helm of the security unit after it's spun off next year under the name McAfee. (In 2010, Intel acquired the firm McAfee and the company has used the brand name for various purposes since, but Young emphasizes the firm has been unaffiliated with controversial founder John McAfee for many years).

In many networks, Young says, warnings and notifications fire from a variety of security products from different vendors, and the software tools often aren't set up to communicate with one another. That means engineers need to spend valuable time connecting data from different sources themselves and determining what is and isn't a threat.

"Right now the humans are the glue between the disparate parts of cyber infrastructure in many, many places," he says.

At the same time, the industry is facing a rising number of threats—"In our Q3 threat report, we've seen a 125% increase in variants of ransomware, for example, year on year," says Young—and attacks against new types of targets, like the internet of things devices implicated in the recent record-breaking denial of service attack on infrastructure provider Dyn.

To help engineers devote more time to fewer, more complex security issues, Intel has developed what it calls the McAfee Data Exchange Layer, which lets products from different vendors directly share data with one another. The company has also developed a tool dubbed the McAfee ePolicy Orchestrator, which lets security workers gather data and change configurations for multiple security products from one interface. That helps avoid situations where customers need to separately configure scores of security systems across their networks.
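The article doesn't detail the Data Exchange Layer's API, but the publish/subscribe pattern underlying this kind of layer is straightforward. Here is a toy sketch of the idea in Python; the topic names, event fields, and handlers are invented for illustration, not McAfee's actual interfaces.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MessageBus:
    """A toy publish/subscribe bus: the pattern a data exchange layer
    uses to let security products from different vendors share events."""
    subscribers: dict = field(default_factory=lambda: defaultdict(list))

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict):
        for handler in self.subscribers[topic]:
            handler(event)

bus = MessageBus()

# An endpoint product publishes a malware sighting; a firewall from a
# different vendor subscribes and blocks the indicator automatically,
# with no human copying data between consoles. (Names are hypothetical.)
bus.subscribe("threat/file/detected",
              lambda e: print(f"firewall: blocking hash {e['sha256']}"))
bus.publish("threat/file/detected", {"sha256": "ab12...", "host": "laptop-42"})
```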

"They may have upwards of 50 or 60 vendors that they're using in their own cybersecurity infrastructure," Young says. "They all have their own management. They all have their own information repositories."

The idea may not sound novel to engineers working in other areas of information technology, where being able to configure multiple servers or software products through one visual interface or set of scripts is now relatively commonplace—and practically a cornerstone of the popular DevOps methodology. But such efficiency is still relatively rare in security, says Young, despite a skills shortage that networking firm Cisco estimated last year leaves more than 1 million security jobs unfilled.

"We're not kind of fully caught up to the rest of IT," says Young. And as threats and hacking attacks get more complex, customers increasingly need to be able to coordinate their responses across multiple computers and cloud systems, he says.

"Threats are going to hit me at all different parts of their infrastructure, so I need to have an integrated response at all different parts of my infrastructure," he says.

Intel Security is also working to share its own data on security threats with rivals such as Symantec, Palo Alto Networks, and Fortinet, through what the companies call the Cyber Threat Alliance.

And the company is increasingly integrating machine learning features into products to let automated systems detect and respond to more basic attacks and let engineers focus their attention on more complicated ones. Ideally, those statistically based tools can spot suspicious behavior and new variants of malware before they cause damage—and without risking engineers chasing down minor malware infections while bigger attacks are a more serious risk.

"If you can use the machines to deal with the volume so that the humans can focus on kind of the more difficult-to-detect attacks, that's when you get the balance right," says Young.

This Simulation Of Chess Champion Magnus Carlsen Is Ready To Checkmate You

A smartphone app called Play Magnus mimics Carlsen's playing style at different stages in his career, beginning at age 5.

As world chess champion Magnus Carlsen defends his title against Russian grandmaster Sergey Karjakin in this month's championship in New York, a smartphone app called Play Magnus invites users to test their own skill against simulated versions of Carlsen from different stages throughout his life and career.

The app, originally released in 2014, was developed by a team including Tord Romstad, one of the creators of Stockfish, a celebrated open-source chess engine, and does more than simply regurgitate moves historically played by Carlsen. The engine actually emulates his style of play at various ages—progressing from the aggressive style of his early years as a chess prodigy to his more thoughtful and defensive adult play.
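Play Magnus's engine is proprietary, but the simpler idea of dialing a chess engine down to beatable strength can be sketched with the open-source python-chess library driving Stockfish—with the caveat that a strength setting is a crude stand-in for emulating a particular player's style at a particular age. The Stockfish path below is an assumption for the example.

```python
import chess
import chess.engine

# Stockfish must be installed; the path here is an assumption.
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

# Stockfish's UCI option "Skill Level" ranges 0-20; low values make the
# engine play weaker, human-beatable moves. Emulating a specific player's
# style, as Play Magnus does, requires far more than this strength dial.
engine.configure({"Skill Level": 5})

board = chess.Board()
while not board.is_game_over() and board.fullmove_number <= 5:
    result = engine.play(board, chess.engine.Limit(time=0.1))
    board.push(result.move)  # play out the first few moves of a self-game
print(board)
engine.quit()
```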

Magnus Carlsen

"Openings are played like he had played at that age, endgames are played in the same way," says Kate Murphy, CEO of Play Magnus. "When Magnus plays himself in the app he'll actually say, 'I remember this game, it was against so-and-so on such-and-such a date.'"

Since its launch, the Play Magnus app has seen close to 1 million downloads, but its creators noticed a limitation: Somewhere around the age that most of us are mastering our times tables, Carlsen simply became too good a chess player for many amateur players to compete with. After all, Carlsen, who turns 26 later this month, was first named a grandmaster at age 13, one of the youngest in the game's history to hold that rank.

"Everyone kept getting stuck on the Play Magnus app," says Murphy. "Everyone kept getting stuck on age 8 and age 9."

To give users a chance of improving their games, the Play Magnus team released a second app this week, dubbed Magnus Trainer. That app, available for iOS with an Android release to follow, provides a series of gamelike chess training exercises designed to help players improve their skills, even if they're still learning how the pieces move.

Some feature more abstract challenges, like navigating chess pieces through a layout based on Carlsen's family cabin in his native Norway, or even moving chess pieces in their signature patterns to escape from nightmarish monsters. One game, called Beach Bounty, begins with players navigating chess pieces to capture stationary sea shells on the beach, but as players progress, they begin having to face off against other chess pieces that could capture them back.

More advanced players, or those looking for a more conventional experience, can also step through interactive versions of Carlsen's historic games, punctuated by challenges to guess which move Carlsen took in a particular position and explanations of his actual tactics.

"Even very advanced players can come in and learn from that section of theory," says Murphy.

Magnus Trainer launched with 11 mini games, developed in part by Carlsen and other grandmasters, including one of his former coaches, and the company plans to roll out more games in the weeks to follow. The games are tested on actual players to make sure they teach the skills they're intended to, Murphy says.

The company also plans to unveil multiplayer support in future versions of Play Magnus and will add the current championship competition's games to the theory section of Magnus Trainer. Ultimately, they'll also help train a version of Play Magnus's engine emulating Carlsen's play at age 26, Murphy says, though it's likely that his current level of play will be out of reach for most at-home opponents.

"Most people can defeat Magnus, age 5," says Murphy. "From the age of 7 is when he started to have competitive fervor."

A New Way To Pay To Get Hacked: Battalions Of Freelance Hackers

Synack, whose clients include the federal government, limits hack attempts to prescreened researchers working through logged connections.

Bug bounty programs have become an increasingly common tool in cybersecurity, with startups and established companies inviting hacker bounty hunters from around the world to attempt to find security holes in their networks in exchange for rewards. Even Apple, which long dismissed the idea, began offering bug bounties this year—for as much as $200,000—amid widespread questions about iOS vulnerabilities.

Organizations offering bug bounties often say the programs allow them to test their digital safeguards against a wider range of attackers than they could possibly have on staff, often at a fraction of the cost. One bounty management provider, San Francisco-based Bugcrowd, reported earlier this year that companies using its platform paid out more than $2 million between January 2013 and March 2016.

But for some companies and government agencies, inviting arbitrary strangers from across the internet to probe their security systems is a bigger risk than they're willing to take, says Jay Kaplan, CEO of Redwood City, California, security firm Synack.

"When you open the doors, so to speak, and say, 'come attack us and if you attack us, we're going to pay you some money,' and you don't know who those people are and you have no auditability of what they're doing, it brings a lot of risk into the equation, especially for conservative enterprises," says Kaplan.

To allow those organizations to still get some of the benefits of a bug bounty program while maintaining discretion and security, Synack employs a network of freelance security researchers around the world, including programmers, engineers, and academic researchers. Once properly vetted, they're assigned to probe particular customers' networks based on their strengths and interests and are rewarded for the vulnerabilities they uncover.

In addition to corporate clients in the health care, energy, finance, and other sectors, Synack recently inked a contract with the Department of Defense, focused on the department's more sensitive IT assets. Contracts with Synack and San Francisco-based HackerOne are together worth $7 million. In a three-week pilot "Hack the Pentagon" program, the Defense Department reportedly received nearly 1,200 bug reports from about 1,400 participating hackers, paying out a total of about $150,000.

And this week Synack also announced a $2 million deal with the Internal Revenue Service to protect systems on the irs.gov domain, the first crowdsourced security bug-hunting effort by a civilian federal agency. Those government clients generally require strict background checks and limit participation to U.S.-based hackers, says Kaplan, who founded Synack in 2013 with CTO Mark Kuhr after both had worked at the National Security Agency.

Jay Kaplan, co-founder of Synack

In the future, Synack might be able to supply hackers with security clearances for projects with potential access to classified data, though the current Pentagon assignments don't involve systems with any such information, Kaplan says.

Critics of bug bounty programs have also complained that bounty hunters are only picking up the easy-to-find flaws while leaving more difficult vulnerabilities undiscovered. According to security firm High-Tech Bridge, nine in ten companies with public or private bug bounty programs have at least two high- or critical-risk vulnerabilities detected in less than three days of professional auditing, issues that were missed by the crowd.

Synack's approach is intended to supplement, not replace, a company's dedicated security specialists. But its human-led, machine-supported system offers a more systematic approach to the problem than traditional bug bounty free-for-alls, Kaplan says.

Synack, and other companies that manage crowdsourced security testing, including Bugcrowd, HackerOne, and San Francisco-based Cobalt Labs, also help alleviate the need for companies to manage bug search programs internally, a potential challenge for organizations already looking for solutions to an industry-wide shortage of security talent. Many of the managed crowdsourcing vendors also maintain their own points systems or other reputation rankings for participating researchers across multiple client engagements, letting them build their own profiles in addition to receiving monetary rewards.

Unlike a typical bug bounty challenge, Synack's security researchers don't simply connect from their own computers to client machines. Instead, they route their attempted hacks through a connection similar to a corporate virtual private network, which automatically logs their interactions with client systems.

"All of their traffic when they're conducting their work actually routes through us, and then we have an ability to audit that traffic on an ongoing basis," Kaplan says. Since the program is limited to approved researchers connecting through Synack's computers, it reduces the risk of hackers taking advantage of a bounty program and penetrating a network for malicious purposes, he says.

The logging data isn't just used for security reasons: It's also used to let clients know what types of hacks researchers have tried and how much time they've spent probing the network. In traditional bug bounty programs, without such logging, companies offering rewards often have no way of knowing whether hackers haven't reported bugs in a system because they've tried to find them and failed or simply because they haven't looked for them, he says.
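Synack hasn't published how its gateway works, but the core concept—relaying a researcher's traffic and recording it for audit—can be sketched in a few lines. Everything below, from the addresses to the log format, is a hypothetical illustration of the pattern, not Synack's implementation.

```python
import socket
import threading
from datetime import datetime, timezone

def pipe(src, dst, log, direction):
    """Forward bytes one way and append a timestamped entry per chunk."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        log.write(f"{datetime.now(timezone.utc).isoformat()} {direction} "
                  f"{len(data)} bytes\n")
        log.flush()
        dst.sendall(data)

def audited_relay(listen_port, target_host, target_port, log_path="audit.log"):
    """Accept one researcher connection and relay it to the client system,
    logging traffic volume in both directions for later audit."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen(1)
    researcher, _ = server.accept()
    target = socket.create_connection((target_host, target_port))
    with open(log_path, "a") as log:
        threading.Thread(target=pipe, args=(target, researcher, log, "<-"),
                         daemon=True).start()
        pipe(researcher, target, log, "->")  # researcher-to-client traffic

# Usage (hypothetical): audited_relay(8080, "client.example.com", 443)
```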

When bugs are found, Synack provides assistance in rooting them out. Though it doesn't have access to internal source code, it generally can provide enough information to let clients discover and fix the underlying problem, Kaplan says. Clients can also notify researchers when they've updated particular components of their systems, letting them know good areas to probe for new vulnerabilities, he says.

Synack charges clients a steady subscription rate, rather than charging based on the number of bugs found, and separately pays out rewards to its researchers. While it's possible that one client could have an unexpected burst of bugs, Synack's total risk is essentially spread across its clients, similar to an insurance company, Kaplan says.

And clients see benefits in paying for a continuous evaluation of their systems, rather than for finding individual flaws, he says.

"They're paying for validation that there are no new vulnerabilities in their system, and that positive validation is just as important and valuable to pay for as constantly paying for new vulnerabilities that come out," he says.
