New stuff in robots and AI. Jan 6 edition

Videos (& other entertainments)

PowerEgg, because flying things come from eggs.

 

Gorgeous butoh video from Google’s machine learning collaborations between engineers and artists.

 

Mayfield’s Kuri

Toyota’s insanely sleek Concept-i self-driving car.

LG is launching robotics everything, apparently.

Itty-bitty Dobby drone.

Intel demoing their new Segway robot.


Chatter (the week’s news and PR, with notes)

DeepMind announced that they were behind a secretive new online Go player who just kept beating everyone. Lots of people were surprised, but mostly because they’d never thought about online Go before. AlphaGo is now the uncontested champion.

Amazon announced they are making flying warehouse blimps (excuse me, “Airborne fulfillment centers”) serviced by delivery drones. I can’t say how much I love this. I’m a huge delivery drone skeptic, but throw in a dirigible and I’m sold.

Autonomous cars were serious business at CES. Ford is putting Alexa in their cars, and they debuted their planned gas-free, driver-free cars. Audi and Nvidia are teaming up on full autonomy, claiming they’ll ship a vehicle in 3 years. And Honda is getting very smart about the use of local control/swarm behaviour.

And DJI bought Hasselblad.


Deepthoughts (musings and commentary on the state of the art)

There were the usual scare stories about robots taking jobs, like this BBC story about Japanese insurance adjusters being replaced by an IBM Watson-based system. It’s almost as if the insurance industry hasn’t been heavily computerized since the 1950s, depriving untold millions of spreadsheet tabulators of their right to drudgery (erm, employment). It’s worth noting that people still make the final decisions, thereby allowing the Kafkaesque whims of the insurance industry to stay firmly in human hands.

Luke Dormehl had an excellent piece on inspectability of algorithms. There are several problems packed in there: who owns data about you; the conflicting private (IP, business secrets) and public (privacy, security) interests in the details of algorithm design; and the fundamental human illegibility of our shiniest new techniques. The first two are addressable legal and social questions. The last, however, is much trickier. We’re used to technology being analyzable, but large neural networks are defiantly analysis-proof. They’re complex systems that we use because they work, not because we can granularly explain their output. They’re empirical beasts that might be best studied like biological systems, not mechanical ones, especially when they are black boxes embedded inside other black boxes. Sam Arbesman’s new book Overcomplicated is a compelling picture of living with such complex technologies.
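To make the “study it like a biological system” point concrete, here’s a toy sketch (my own illustration, not anything from the linked pieces; the black_box function and the 0.1 perturbation size are made up). It treats a model as a query-only oracle and measures which input features its output reacts to most, the kind of poke-and-observe experiment you run when you can’t open the box.

```python
import numpy as np

# Hypothetical stand-in for an opaque model: we can query it, but not
# read its internals. In practice this would be a trained network
# behind a predict() call we don't control.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 32))
W2 = rng.normal(size=(32, 1))

def black_box(x):
    """Return a scalar score for a 10-dimensional input vector."""
    return (np.tanh(x @ W1).clip(min=0) @ W2).item()

# Empirical probe: nudge one input feature at a time and record how far
# the output moves. No weights, gradients, or source code assumed.
x0 = rng.normal(size=10)
baseline = black_box(x0)
sensitivity = []
for i in range(len(x0)):
    x = x0.copy()
    x[i] += 0.1  # small perturbation to feature i
    sensitivity.append(abs(black_box(x) - baseline))

# Report the three features the model reacts to most strongly.
for i, s in sorted(enumerate(sensitivity), key=lambda t: -t[1])[:3]:
    print(f"feature {i}: output shifted by {s:.4f}")
```

Probing like this doesn’t explain the model, but it is the sort of repeatable input-output experiment an outside auditor could run without ever touching the IP, which is exactly where the legal questions and the technical ones meet.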

Alan Winfield’s new blog post on the insertion of autonomous systems into what he calls the infrastructure of life deals with similar questions. How do we navigate the relationship between human decisions and algorithmic decisions, especially when we can’t inspect the algorithms? What are the implications for our safety and security?


Data (info on filings, acquisitions, usage, uptake…)

Frank Tobe’s The Robot Report did a typically excellent job summing up the year in robot startup funding. $1,950,000,000 is a lot of cash. The biggest chunks went to LIDAR makers, with a range of social, industrial, medical and other shops getting big sums as well. Also, $19,000,000,000 in acquisitions. Wow.


Products (new hardware, software, and services)

Franka Emika is an extraordinary-looking new self-replicating robot arm. It looks sleek, is cheap (relatively; it’s still almost €10,000, so cheap in manufacturing terms, not 9th-birthday-present terms), and is designed to have a robust software ecosystem.

Lego announced Boost, a kid-friendly new robot/coding toy.

At CES, there were lots, and lots, and lots of social robots. Mayfield’s Kuri seems to have stolen the show. The new devices are uniformly white, curvilinear, vaguely humanoid objects that either sit still, roll on little wheels, or totter on awkward legs. IEEE Spectrum’s Evan Ackerman wondered why they all look the same. The answer he was given was “what really differentiates robots is what’s on the inside”. Which, well, yes. But it sure makes robots sound like just another software platform. If it doesn’t matter what your robot looks like, is shaped like, or how it moves, why not make it an app?