Microsoft wields ML to catch child predators, city drops 7-year facial-recognition experiment after no arrests...

Plus: Hollywood wants to revamp the film business with AI, Nvidia improves StyleGAN

Roundup Welcome to the first AI roundup of the year. AI continues to spread like wildfire and everyone wants a slice of the pie - even Hollywood. Read on for the latest flop in facial recognition, too.

Hollywood is cozying up to AI algos: Warner Bros, the massive American film studio and entertainment conglomerate, is employing algorithmic tools to help it decide whether a film will become a blockbuster or go bust at the cinema.

Studios like Warner Bros have a limited budget to splash on new projects every year. Directors bid fiercely to fund the films they believe will make everyone the most profit. But there are a vast number of factors to consider, weighing them all is time consuming, and the effort is wasted if a film eventually flops. So, why not employ a machine to help you decide?

Warner Bros has signed a deal with Cinelytic, an AI analytics startup based in Los Angeles, to do just that, according to The Hollywood Reporter. Cinelytic’s software will help predict a particular film’s profits, helping studios decide what to release, and where.

“The platform reduces executives’ time spent on low-value, repetitive tasks and instead focuses on generating actionable insights for packaging, green-lighting, marketing and distribution decisions in real time,” according to a statement from Cinelytic.
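Cinelytic hasn’t explained how its models actually work, but for a rough idea of what “predicting a film’s profits” can look like in code, here’s a toy sketch: a regression model trained on made-up film features to estimate a worldwide gross. Every feature, number, and model choice below is an assumption for illustration only, not Cinelytic’s method.

```python
# Purely illustrative sketch: a toy regression predicting a film's box-office
# take from a handful of features. Cinelytic's actual models and inputs are
# not public; the feature names and figures below are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical features per film:
# [budget ($M), franchise flag, star-power score, runtime (min), summer-release flag]
X = np.array([
    [200, 1, 0.9, 140, 1],
    [ 30, 0, 0.4, 100, 0],
    [ 90, 0, 0.7, 120, 1],
])
y = np.array([750.0, 25.0, 180.0])  # worldwide gross ($M), made-up numbers

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

candidate = np.array([[150, 1, 0.8, 130, 1]])  # a proposed project
print(f"Predicted gross: ${model.predict(candidate)[0]:.0f}M")
```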

The ultimate decision on whether to fund a film, however, is still up to humans. Hopefully that'll stop more mistakes like Cats.

Over Christmas... Nvidia improved its StyleGAN software – capable of generating realistic photos of faces, buildings, and so on, from scratch – to version two, ironing out artifacts that give away the fact the images were imagined by a computer.

Microsoft is licensing software that catches child groomers on Xbox: Redmond has deployed a tool it has been developing with academics to prevent online child abuse.

Codenamed Project Artemis, the software analyzes text conversations, rates how inappropriate the interactions are, and decides whether the messages should be flagged for human moderators to review. Those humans then report suspected sexual exploitation to law enforcement.

Microsoft’s chief digital safety officer Courtney Gregoire did not reveal how Project Artemis works in a blog post this week, so we spoke to the boffins behind it directly.

The tool was developed internally with the help of academics, who participated in a hackathon in 2018. Hany Farid, a professor working at the University of California Berkeley’s department of electrical engineering and computer science and the school of information, told The Register that no fancy deep learning was used; instead, the system is based on some “fairly standard non-linear regression to learn a numeric risk score based on the text-based conversation between two people.”
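Farid’s description is enough to sketch the general shape of such a system: featurize the text of a conversation, fit a non-linear regressor that outputs a numeric risk score, and flag anything above a threshold for human review. The following is a minimal illustration along those lines, not Microsoft’s actual code; the features, model choice, threshold, and training data are all assumptions.

```python
# Illustrative sketch only -- not Microsoft's Project Artemis code. It shows the
# general approach Farid describes: turn a two-person text conversation into
# features, fit a non-linear regressor to produce a numeric risk score, and
# flag high-scoring chats for human moderators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each conversation is the concatenated text of an
# exchange; labels are human-assigned risk scores between 0 and 1.
conversations = [
    "hey how was school today",
    "don't tell your parents about our chats, it's our secret",
]
risk_labels = [0.05, 0.9]

# TF-IDF text features feeding a non-linear (tree-ensemble) regression model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    GradientBoostingRegressor(),
)
model.fit(conversations, risk_labels)

FLAG_THRESHOLD = 0.7  # assumed cut-off; a real system would tune this carefully

def score_conversation(text: str) -> float:
    """Return a numeric risk score for a text-based conversation."""
    return float(model.predict([text])[0])

def needs_human_review(text: str) -> bool:
    """Flag conversations whose risk score exceeds the threshold."""
    return score_conversation(text) >= FLAG_THRESHOLD
```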

Companies interested in licensing the technology should contact Thorn, a tech company building software applications aimed at protecting children against sexual abuse.

Uh oh! Contractors have snooped in on thousands of Skype calls: Stop us if you’ve heard this one before, but contractors working on behalf of tech companies have been listening to sensitive audio clips gleaned from users in the hopes of improving their services.

This time it’s Skype owner Microsoft. A former contractor working in Beijing revealed that he had listened to thousands of sensitive and disturbing recordings from Skype and Cortana. There was little security, and workers in China could access the clips via a web app in Google Chrome, as reported by The Guardian.

The leaker was also encouraged to use the same password for all his Microsoft accounts, apparently. Contractors were not given any security training either, a risky move considering the data could be stolen by miscreants. He said he heard “all kinds of unusual conversations, including what could have been domestic violence.”

Microsoft has since said that it has updated its privacy statement to make it clear that humans are sometimes listening in on Skype calls or interactions with its voice-enabled assistant Cortana. And it said that recorded audio clips flagged for review are only ten seconds long, so that contractors don’t have access to longer conversations.

Here’s how the White House wants America’s companies to develop AI tech: The Trump Administration is working to expand its national AI strategy to broach the topic of regulation.

There is little oversight, and there are few rules, on how AI technology should be used by the private sector at the moment. So the US government wants to take a stab at changing that by, erm, “proposing a first-of-its-kind set of regulatory principles”.

These principles probably won’t do much; they’re not real policies unless backed up by law. Nevertheless, the Trump Administration wants to make some sort of attempt at guiding regulation.

“Must we decide between embracing this emerging technology and following our moral compass?” the chief technology officer of the US, Michael Kratsios, wrote in an op-ed published in Bloomberg this week.

“That’s a false choice. We can advance emerging technology in a way that reflects our values of freedom, human rights and respect for human dignity,” he continued. Kratsios proposes that federal agencies should make it easier for the public, academics, companies, and non-profits to comment and give feedback on any proposed AI policies.

Agencies like the National Institute of Standards and Technology (NIST) should assess a product’s risk and cost before regulating a particular technology. They should also take into account issues like transparency, safety, security, and fairness, in keeping with American values.

“Americans have long embraced technology as a tool to improve people’s lives. With artificial intelligence, we are ready to do it again,” Kratsios concluded.

San Diego has ended its seven-year experiment with facial recognition: Finally, here’s a long read on how San Diego’s law enforcement used facial recognition over seven years to hunt for criminals prowling the American city’s streets.

A network of 1,300 cameras embedded in smartphones and tablets operated by staff recorded over 65,000 faces from 2012 to 2019. These images were then run against a database of mugshots to look for potential matches.
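For a sense of what that matching step involves, here is a bare-bones sketch: compare the embedding of a captured face against a gallery of mugshot embeddings and return any that score above a similarity threshold. The embedding model, threshold, and data below are stand-ins; the article does not describe the internals of San Diego’s actual system.

```python
# Minimal sketch of the matching step described above, not San Diego's system:
# captured face images are converted to embedding vectors and compared against
# a gallery of mugshot embeddings by cosine similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_candidate_matches(probe: np.ndarray,
                           mugshots: dict[str, np.ndarray],
                           threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return mugshot IDs whose embeddings are similar enough to the probe.

    `probe` and the values of `mugshots` are face embeddings produced by some
    face-recognition model (assumed here, not specified in the article). The
    threshold is illustrative; a real deployment would need to validate it.
    """
    hits = [(mid, cosine_similarity(probe, emb)) for mid, emb in mugshots.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

# Example with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
gallery = {f"mugshot_{i}": rng.normal(size=128) for i in range(5)}
probe_embedding = rng.normal(size=128)
print(find_candidate_matches(probe_embedding, gallery))
```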

And over those seven years, not a single arrest resulted from the technology, according to Fast Company. But for some bizarre reason, police didn’t track the results of the experiment, so there is no solid evaluation of the system’s performance.

As of 2020, San Diego has shut down the experiment. You can read more about that here. ®
