Image-rec startup for cops, Feds can probably identify you from 3 billion pics it's scraped from Facebook, YouTube etc

Plus other 'fun' news from the world of AI

Roundup Here's a roundup of news from the world of machine learning, beyond what we've already covered.

Huge facial-recognition database scrutinized: An AI startup called Clearview claims to have a database of more than three billion photos, scraped from Facebook, YouTube, Venmo, and other websites and services, all of it used to train a massive facial-recognition system.

Access to the technology is sold to the FBI and cops. Officers or agents can submit a picture of a person, and Clearview will attempt to identify them from its database, complete with links to relevant profiles and pages. Investigators have, it is reported, used Clearview to catch criminals from gym selfies and similar snaps.
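
Clearview hasn't detailed its internals, but face-search systems of this kind typically convert each photo into a numerical embedding, then look up the closest matches in a database of embeddings computed beforehand. Here's a minimal, hypothetical sketch of that lookup step - the embeddings, database size, and URLs below are all invented for illustration:

```python
import numpy as np

# Hypothetical sketch of embedding-based face search. A real system
# would compute embeddings with a deep face-recognition model; here
# random 128-dim vectors stand in for them.

def cosine_similarity(query: np.ndarray, db: np.ndarray) -> np.ndarray:
    """Similarity between one query vector and each row of a matrix."""
    query = query / np.linalg.norm(query)
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    return db @ query

# "Database": one embedding per scraped photo, plus its source page.
db_embeddings = np.random.randn(100_000, 128)
db_urls = [f"https://example.com/profile/{i}" for i in range(100_000)]

def identify(query_embedding: np.ndarray, top_k: int = 5):
    """Return the top-k closest database entries for a query face."""
    scores = cosine_similarity(query_embedding, db_embeddings)
    best = np.argsort(scores)[::-1][:top_k]
    return [(db_urls[i], float(scores[i])) for i in best]

matches = identify(np.random.randn(128))  # links an investigator would see
```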

Clearview says it's using publicly shared photos, and thus isn't doing anything wrong. The biz received funding from Facebook board member and zillionaire Peter Thiel, we note. Facebook is said to be double-checking the startup's use of its social network.

You can read a deep dive into this image-recognition tech, by Kashmir Hill, here. When the reporter fed her pictures into the system to test it, Clearview contacted its police users to ask if they were talking to the media, it is reported, suggesting the outfit has some kind of filter-and-alert system for journalists, at least.

There is the fear that this kind of technology effectively kills privacy dead: it is virtually impossible to be anonymous now.

Apple snaps up AI startup for $200m: Apple has splashed out on a shiny new AI startup, Xnor.ai, for $200m in a bid to cram machine-learning tools into its future iPhones and other hardware.

The Seattle-based outfit emerged as a spin-off from the Allen Institute for Artificial Intelligence – a lab known as AI2 founded by the late Microsoft cofounder Paul Allen. Led by Ali Farhadi, co-founder and CXO (chief xnor officer, groan), Xnor.ai focuses on developing software, and has dabbled in hardware, that runs AI applications at low power on so-called “edge devices” that don’t rely on cloud services.
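
The name nods to XNOR-Net, a technique Farhadi co-authored in which a neural network's weights and activations are binarized to +1 or -1, so that an expensive floating-point dot product collapses into cheap bitwise XNOR and popcount operations - a big win on power-constrained edge hardware. A toy sketch of that trick, with invented vectors:

```python
import numpy as np

# Toy illustration of the XNOR-Net idea: binarize values to +/-1 so a
# dot product becomes a bitwise XNOR plus a popcount. A simplified
# sketch, not Xnor.ai's actual code.

def binarize(x: np.ndarray) -> np.ndarray:
    """Encode real values as bits: 1 for >= 0 (meaning +1), 0 for < 0 (-1)."""
    return (x >= 0).astype(np.uint8)

def xnor_dot(a_bits: np.ndarray, b_bits: np.ndarray) -> int:
    """Dot product of two +/-1 vectors via XNOR and popcount:
    every matching bit contributes +1, every mismatch -1."""
    matches = np.count_nonzero(~(a_bits ^ b_bits) & 1)  # popcount of XNOR
    return 2 * matches - a_bits.size

a, b = np.random.randn(256), np.random.randn(256)
assert xnor_dot(binarize(a), binarize(b)) == int(np.sign(a) @ np.sign(b))
```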

“Xnor’s AI-enabled image recognition tools could well become standard features in future iPhones and webcams,” speculated Geekwire, which first reported the acquisition.

Apple has acquired similar startups. Back in 2016, it snapped up Turi, another AI startup based in Seattle that built tools for developers to crank out AI-based apps. It also acquired Spectral Edge, an upstart in the UK that has come up with various techniques to enhance digital photographs.

Hey Google, is it raining? Researchers at Google have trained a neural network to predict whether it’s going to rain up to six hours ahead of time by analyzing radar images.

Described as “precipitation nowcasting”, the technique uses a convolutional neural network fed a series of weather-radar images that capture the motion of rain over a specific area.

“Since radar data is organized into images, we can pose this prediction as a computer vision problem, inferring the meteorological evolution from the sequence of input images,” Jason Hickey, a senior software engineer working on a research team at Google, explained this week.

By training on such sequences of images, the CNN learns to predict what the next image will look like - that is, where and when the rain will move - given the previous set of images. The CNN developed by Google Research can generate forecasts at one-kilometer resolution in just five to ten minutes - a much quicker turnaround than traditional prediction methods that rely on complex numerical models. It specifically looks at two properties: the movement and the formation of clouds over time.
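
To make the setup concrete, here's a minimal sketch of how a nowcasting problem like this can be framed: the last few radar frames become input channels, the next frame is the prediction target, and a single convolution stands in for Google's much deeper network. All shapes and data below are invented:

```python
import numpy as np

# Minimal sketch of the nowcasting setup: stack the last few radar
# frames as input channels and predict the next frame. Google's model
# is a deep CNN; here one learned 3x3 convolution per channel stands
# in for it.

H, W, PAST = 64, 64, 4                    # grid size, frames of history
frames = np.random.rand(100, H, W)        # fake radar reflectivity movie

def make_example(t: int):
    """Input: the PAST frames ending at time t; target: frame t+1."""
    return frames[t - PAST + 1 : t + 1], frames[t + 1]

def conv_predict(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Apply a 3x3 convolution per input channel and sum the results."""
    out = np.zeros((H, W))
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    for c in range(PAST):
        for i in range(H):
            for j in range(W):
                out[i, j] += np.sum(padded[c, i:i+3, j:j+3] * kernel[c])
    return out

x, y = make_example(t=50)
kernel = np.random.randn(PAST, 3, 3) * 0.05        # untrained weights
mse = np.mean((conv_predict(x, kernel) - y) ** 2)  # training minimizes this
```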

Although relying on neural networks provides a quicker and cheaper alternative to other predictive modelling techniques, it won’t be useful for forecasting weather events far ahead of time. That requires a more in-depth knowledge of all the different physical processes that actually affect weather patterns, not just handy guesswork.

Here’s a paper [PDF] describing Google’s precipitation nowcasting in more detail.

What do reinforcement learning algorithms and dopamine in our brains have in common? Folks over at DeepMind have published a new paper in Nature supporting the idea that our brains function like reinforcement learning algorithms.

In short, reinforcement learning (RL) describes a method of teaching bots to complete certain tasks by incentivizing them with some sort of reward. A high score is awarded when a bot carries out an action that gets it closer to completing a goal, and a low score is given to discourage it from repeating an action that hinders it from reaching that goal.

The agent is designed to chase rewards, so it should learn how to perform a specific task over time through trial and error. The trick to crafting clever bots is to build an effective reward function.
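
For the uninitiated, here's a minimal sketch of the classic temporal difference (TD) update at the heart of many RL methods: the agent nudges its value estimate for each state toward the reward it actually receives plus its estimate of what follows. The toy chain environment below is invented for illustration:

```python
import numpy as np

# Minimal tabular TD(0) sketch: the agent nudges its value estimate
# for a state toward the observed reward plus the discounted value
# of the next state. The five-state chain below is invented.

n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)                    # learned value per state

def step(s: int):
    """Move one state to the right; reaching the end pays reward 1."""
    s_next = s + 1
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

for _ in range(1000):                     # episodes of trial and error
    s = 0
    while s < n_states - 1:
        s_next, r = step(s)
        td_error = r + gamma * V[s_next] - V[s]   # prediction error
        V[s] += alpha * td_error                  # move toward target
        s = s_next
# V now approximates the gamma-discounted distance to the reward.
```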

A distributional temporal difference (TD) network predicts a distribution of rewards. In other words, the same action can lead to larger or smaller rewards, and a distributional TD network learns to map out this whole spread of outcomes, from high to low, rather than just its average.
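
One way to picture the idea, in the spirit of the paper: run many value predictors in parallel, each weighting positive and negative prediction errors differently. Optimistic predictors settle on high points of the reward distribution, pessimistic ones on low points, and each ends up with its own reversal point - the reward size that produces zero error. A toy sketch with invented numbers:

```python
import numpy as np

# Sketch of the distributional TD idea tested in the paper: many value
# predictors run in parallel, each amplifying positive prediction
# errors and damping negative ones to a different degree (or vice
# versa). Each converges to a different point of the reward
# distribution - its own reversal point. Numbers are illustrative.

rng = np.random.default_rng(0)
rewards = rng.choice([0.1, 1.0, 5.0], size=50_000)  # uncertain payout

n_cells = 9
optimism = np.linspace(0.1, 0.9, n_cells)  # weight given to positive errors
V = np.zeros(n_cells)                      # one prediction per "cell"
lr = 0.01

for r in rewards:
    err = r - V                                    # per-cell prediction error
    scale = np.where(err > 0, optimism, 1 - optimism)
    V += lr * scale * err                          # asymmetric TD update

print(np.round(V, 2))  # spans the low to high end of the reward distribution
```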

“Is distributional TD used in the brain?” the researchers asked. “This was the driving question behind our paper recently published in Nature.”

To answer that question, the researchers partnered with a team at Harvard University to test their theories on mice. They recorded the activity of dopamine neurons in the animals while they performed specific tasks, after which they were rewarded with a drop of water as a treat, given an annoying puff of air to the face, or given nothing at all.

Different tasks earned different rewards. “We know that there should be zero prediction error when a reward is received that is the exact size as what a cell had predicted, and therefore no change in firing rate,” DeepMind explained this week.

“For each dopamine cell, we determined the reward size for which it didn’t change its baseline firing rate. We call this the cell’s ‘reversal point’. We wanted to know whether these reversal points were different between cells.”

Interestingly enough, it turns out that in some recordings, where the mice weren’t sure what reward they would receive, the activity of their dopamine neurons varied in a way that supported the idea that distributional TD was happening in their brains.

“We show that there were marked differences between cells, with some cells predicting very large amounts of reward, and other cells predicting very little reward. These differences were above and beyond the amount of difference we would expect to see from random variability inherent in the recordings,” it said.

You can read the full paper here. ®
