New stuff in robots and AI. Feb 7 edition

Our collection of robot and AI news, chatter, and entertaining videos. №7.

Chatter (the week’s news and PR, with notes)

Mercedes is teaming up with Uber. Mercedes will build and operate the fleet of self-driving cars, and Uber will manage the front end, which allows it to maintain a low-capital model. It’s not clear what Mercedes gets out of the deal. (Link. Link.) It’s worth noting that Mercedes’s autonomous cars are not best in class. (Link.)

A CMU AI system beat top-rated human poker players in an epic tournament (120,000 hands). This is notable because poker, unlike chess and Go, is a game of incomplete information and misinformation. To win, you have to bluff and spot bluffs. You have to model your opponent. That’s tough. (Link. Link. Link. Link. Link.)
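To see why modelling your opponent matters, take the simplest decision poker forces: calling a bet that might be a bluff. A minimal sketch in Python (pot size, bet size, and the belief probability are hypothetical, not figures from the tournament):

```python
# Expected value of calling a bet when your hand only beats a bluff.
# If the opponent is bluffing you win the pot plus their bet;
# otherwise you lose your call. All chip figures are made up.

def call_ev(p_bluff: float, pot: float, bet: float) -> float:
    """EV of calling `bet` into `pot`, given belief `p_bluff` that it's a bluff."""
    return p_bluff * (pot + bet) - (1 - p_bluff) * bet

# With 100 chips in the pot and a 50-chip bet, calling breaks even when
# p_bluff * 150 = (1 - p_bluff) * 50, i.e. p_bluff = 0.25.
print(call_ev(0.25, 100, 50))  # → 0.0
print(call_ev(0.40, 100, 50))  # positive: call
print(call_ev(0.10, 100, 50))  # negative: fold
```

The catch, of course, is that `p_bluff` isn’t given. It has to come from a model of the opponent, which is exactly the part that’s tough.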

The Future of Life Institute’s Beneficial AI conference resulted in the release of the Asilomar AI Principles (Link.), a set of guidelines and, well, principles for the ethical development of AI technologies. I’m normally skeptical of the obsession with AI ethics. It strikes me as a bit grandiose and aloof from the very real benefits and risks at hand. However, this set of principles is (mostly) reassuringly modest and focused on real-world objectives. (Link.)

Starting in May, a wave-gliding tsunami-sentinel robot will patrol the seas in a three-month test. (Link.)

More on EPFL’s robot building builder (in-situ constructor). (Link. Link.)

Deepthoughts (musings and commentary on the state of the art)

John Markoff makes a clear and cogent argument for why we shouldn’t worry about robots taking our jobs in this interview with Kara Swisher. (Link.)

Similarly, Alan Bundy argues that the hype and worry around a superintelligent singularity is “based on an oversimplified and false understanding of intelligence” in Smart Machines Are Not a Threat to Humanity. (Link.)

The New York Times reminds us that always-on cloud AI systems are great vectors for hacking, hijacking, and surveillance. Careful out there, kids. (Link.)

Andra Keay had a smart assessment of why being a woman is a benefit in robotics. (Link.)

Robotics legend Rodney Brooks has a new blog. His first post was on self-driving cars. (Link.)

Also, a good interview with Gill Pratt in Spectrum. (Link.)

Yves Behar has new principles for designing with AI. (Link.) Also a good piece in The Verge about the future of interaction. (Link.) As well as some thoughts in Wired about why it is hard to design human-ish robots. (Link.) In a nod to the tricky nature of emotional/affective awareness, MIT is making a wearable that does sentiment/emotion analysis on voice conversations. (Link.)

The TERESA project also focuses on the subtle social dimensions of robotics and AI. (Link.)

Part of the problem with deploying AI and robots as decision-making agents is that they tend to inherit the biases of their creators and trainers. The last year has seen several high-profile examples. Microsoft’s Tay experiment was explicitly hijacked by trolls exploiting its training algorithm. ProPublica exposed the biases of software-based recidivism ‘predictors’. (Link.) And Google’s image tagger was famously racist on launch. One answer: making AI more diverse. (Link.)

Amazing Wired article on the roboticists building war bots in Iraq. (Link.)

Most chatbots are the “Interactive Voice Response” of the 2010s — aka inhuman hellscapes whose sole purpose is to reduce customer service cost-centres. Occasionally they’re well designed and functional. (Link.)

IBM announces their 5 in 5 innovations that will change lives in 5 years. (Link.)

Connecting up the robots is predicted to be a big tech trend in 2017. (Link.)

Data (info on filings, acquisitions, usage, uptake…)

The Chan-Zuckerberg initiative bought Toronto’s Meta, a platform for discovering research. (Link.)

Frank Tobe’s summary of deals in January. (Link.)

$250M for the Advanced Robotics Manufacturing Institute in Pittsburgh. (Link.)

RBC, U of A and Richard Sutton open machine learning lab in Edmonton. (Link.)

Microsoft bought Waterloo’s Maluuba. (Link.)

Robot ETFs are growing fast. (Link.)

Products (new hardware, software, and services)

H&R Block goes ahead and admits that computers do taxes better than they do (ok, they’re “using” IBM Watson to find deductions). Side note — you’ve known this was true since you bought that copy of TurboTax at Staples in 2001. Side note 2 — it takes one of history’s most sophisticated and powerful machines (Watson) to do a personal tax return? Something is wrong with that. (Link.)

I am so pumped for Piaggio’s Gita. (Link.) This thing looks delightful. (Link. Link. Link.)

Line-us — a neat little sketching robot that’s on Kickstarter. (Link.)

Segway’s ‘mobility robot platform’ is going mass. (Link.)

Exoskeletons are cool. Soft exoskeletons are cooler. (Link.)

A soft robot that pumps your heart so you don’t have to. (Link. Link. Link.)

More drone deliveries, this time in our back yard. (Link.)

Flying cars are something people are talking about semi-seriously now. (Link. Link.) But only semi-seriously, because they’re still a terrible idea.

Cyborg dragonfly spies are a thing now. (Link.)

Self-driving cars are getting better. Much better. (Link.)

Videos (& other entertainments)

Boston Dynamics made a new robot. Everyone, including Marc Raibert, says it’s “nightmare-inducing”. I don’t see it. Looks pretty neat to me. (Link.)

Robots grabbing vegetables might not sound exciting, but…

Fast hydrogel robots catch fish. Details on MIT News (link).

ELVIA, the Alexa powered Elvis robot/nightmare.

New stuff in robots and AI. Jan 13 edition

Chatter (the week’s news and PR, with notes)

Another crowdfunded robot/drone project died. Lily this time. Rough for them and their would-be customers, but these stories are getting old. They raised $15 million and had $34 million in orders, but still couldn’t sort out actually making and delivering the thing? While they didn’t use Kickstarter or Indiegogo, the promise was similar — give us a threshold amount, and we’ll be able to get the project off the ground (natch). Yes, the market changed. Yes, hardware is hard. They knew that going in. Undermining the market with pre-sold vapourware and giving capitalized competitors a head start couldn’t have helped. If the goal was to secure massive venture funding on the strength of preorders, oops. But that’s not what customers bought. It looks like the San Francisco DA was similarly nonplussed. Here’s another critical take from way back in May:

Self-driving cars might get cheap, fast. Hyundai is making a sub-$30,000 self-driving car. Waymo claims to have cut the cost of Lidar by 90%. And training up driving algorithms will be much easier (at least in the early stages), thanks to a Grand Theft Auto V self-driving car simulation that is integrated into Universe, OpenAI’s platform for training AIs with games, websites, and other digital ephemera.

Waymo is talking about keeping its cars offline to hinder hackers. It seems so obvious as to not need mentioning, but that’s only the second-best reason for keeping all driving functions onboard. The other is basic reliability. Cars should function fully and properly even when they can’t connect to the internet. Autonomy needs to mean just that — the car drives on its own.

Google Brain’s Jeff Dean penned a nostalgic trip down 2016’s memory lane. Which, fair enough, they really did do a ton of interesting work in 2016.

Gigaom had a nice interview with Baidu’s Andrew Ng.

Deepthoughts (Other people's musings and commentary on the state of the art)

Alan Winfield continued his excellent series on the infrastructure of life. This time investigating the nature of transparency in autonomous systems. This is tricky stuff, and “transparency” is subjective. SD Marlow’s comment on the piece is worth quoting:

“there is an academic disconnect between verification (a system of checks and balances) and social norms. People want to know the food they eat won’t make them sick, but there is no monthly pilgrimage by millions of people wanting to directly observe how cow becomes burger.”

Figuring out how we navigate the boundaries between autonomy, trust, control, privacy, and ease of use becomes more pressing as autonomous consumer systems proliferate.

Sorry, drones won’t deliver packages to your house (but robot carts might, see Starship below). The argument is that there’s no simple solution to the legal muddle they present. I’m not persuaded. Laws can change. But a thousand whining delivery drones perpetually overhead is a nightmare scenario I doubt people will accept. Flying things around is also wildly energy intensive, especially when rolling them is a viable option (again, see Starship below). I’m partial to pneumatic tube delivery (and waste disposal), but that’s another conversation.

This is so right: ‘AI-powered’ is tech’s meaningless equivalent of ‘all natural’. ‘AI’ is the latest in a noble lineage of buzzwords, but that doesn’t mean it refers to something in particular. Or something new. AI is poorly defined and has been around for a while. Pretty much any digital tech can make a dubious claim to being “AI” powered. They can, and do. As always, beware BS.

Data (info on filings, acquisitions, usage, uptake…)

Starship Technologies, the upstart Estonian delivery-robot maker, raised $17.2 million from Daimler AG. Local delivery is a thing. Here’s hoping they figure out how to deal with the massive piles of empty boxes in Amazon’s wake.

Rethink raised another pile of cash. And they’re apparently selling like hotcakes in China.

Products (new hardware, software, and services)

Yves Behar’s Fuseproject teamed with Intuition Robotics on a social robot for the elderly — ElliQ. Don Norman is listed as “helping the company achieve its vision”. Good crew! They also made the only social robot promo video I’ve seen with a sense of humour.

Videos (& other entertainments)

The hum of a 100-drone swarm is the stuff of nightmares. Especially if they’re hunting you.

Great Google Tech Talk with the legendary Don Norman and Mick McManus on Design in the Age of AI.

Another one from UBTECH. They’re just cranking out the humanoid/social/personal/whatever robots.

Nice Bloomberg piece on Kuri, which was the social robot hit at CES. (Need to quibble with using “her” to refer to an object. Animate or not, it definitely isn’t male or female.)

New stuff in robots and AI. Jan 6 edition

Videos (& other entertainments)

PowerEgg, because flying things come from eggs.


Gorgeous butoh video from Google’s machine learning collaborations between engineers and artists.


Mayfield’s Kuri

Toyota’s insanely sleek Concept-i self-driving car.

LG is launching robotics everything, apparently.

Itty-bitty Dobby drone.

Intel demoing their new Segway robot.

Chatter (the week’s news and PR, with notes)

Deepmind announced that they were behind a secretive new online Go player who just kept beating everyone. Lots of people were surprised, but mostly because they’d never thought about online Go before. AlphaGo is now the uncontested champion.

Amazon announced they are making flying warehouse blimps (excuse me, “airborne fulfillment centers”) serviced by delivery drones. I can’t say how much I love this. I’m a huge delivery drone skeptic, but throw in a dirigible and I’m sold.

Autonomous cars were serious business at CES. Ford is putting Alexa in their cars. They debuted their planned gas-free, driver-free cars. Audi and Nvidia are teaming up on full autonomy, claiming they’ll ship a vehicle in 3 years. And Honda is getting very smart about the use of local control/swarm behaviour.

And DJI bought Hasselblad.

Deepthoughts (musings and commentary on the state of the art)

There were the usual scare stories about robots taking jobs, like this BBC story about Japanese insurance adjusters being replaced by an IBM Watson-based system. It’s almost as if the insurance industry hasn’t been heavily computerized since the 1950s, depriving untold millions of spreadsheet tabulators of their right to drudgery (erm, employment). It’s worth noting that people still make the final decisions, thereby allowing the Kafkaesque whims of the insurance industry to stay firmly in human hands.

Luke Dormehl had an excellent piece on inspectability of algorithms. There are several problems packed in there — who owns data about you; the conflicting private (IP, business secrets) and public (privacy, security) interests in the details of algorithm design, and the fundamental human illegibility of our shiniest new techniques. The first two are addressable legal and social questions. The last, however, is much trickier. We’re used to technology being analyzable, but large neural networks are defiantly analysis-proof. They’re complex systems that we use because they work, not because we can granularly explain their output. They’re empirical beasts that might be best studied like biological systems, not mechanical ones, especially when they are black boxes embedded inside other black boxes. Sam Arbesman’s new book Overcomplicated is a compelling picture of living with such complex technologies.

Alan Winfield’s new blog post on the insertion of autonomous systems into what he calls the infrastructure of life deals with similar questions. How do we navigate the relationship between human decisions and algorithmic decisions, especially when we can’t inspect the algorithms? What are the implications for our safety and security?

Data (info on filings, acquisitions, usage, uptake…)

Frank Tobe’s The Robot Report did a typically excellent job summing up the year in robot startup fundings. $1,950,000,000 is a lot of cash. The biggest chunks went to LIDAR makers, with a range of social, industrial, medical and other shops getting big sums as well. Also, $19,000,000,000 on acquisitions. Wow.

Products (new hardware, software, and services)

Franka Emika is an extraordinary-looking new self-replicating robot arm. It looks sleek, is cheap (Relatively. It’s still almost €10,000. So, cheap in manufacturing terms, not 9th birthday present terms.), and is designed to have a robust software ecosystem.

Lego announced Boost — a kid friendly new robot/coding toy.

At CES, there were lots, and lots, and lots of social robots. Mayfield’s Kuri seems to have stolen the show. The new devices are uniformly white, curvilinear, vaguely humanoid objects that are either stationary, have little wheels, or awkward legs. IEEE Spectrum’s Evan Ackerman wondered why they all look the same. The answer he was given was “what really differentiates robots is what’s on the inside”. Which, well, yes. But it sure makes robots sound like just another software platform. If it doesn’t matter what your robot looks like, is shaped like, or how it moves, why not make it an app?

New stuff in robots and AI. Dec 16 edition.

Videos (& other entertainments)

Bloomberg did a good bit on Reach Robotics’ amazing looking MekaMon system: augmented reality robot battles, in your living room. I’m sold.

Researchers in the University of Minnesota’s Biomedical Engineering department published a report in Nature demonstrating a novel EEG brain-computer interface controlling a robot arm.

UBTECH produced a cute little series about their Alpha 1 and Meebot going to entertain at a Man City game.

TED-Ed does a pretty good job introducing the wide world of affective computing.

The Verge went on a field trip to Huaqiangbei Market in Shenzhen. It’s pretty clear that this is the beating heart of global technology. Robots will come to life here first.

Real-life use cases for robots and AI aren’t always flashy. They’re more often boring, routine, unskilled, high-risk/low-reward, like towing cars around an auto plant parking lot. Nice one, Nissan.

Here is ArtiMinds’ contribution to the venerable robot Christmas video genre.

Chatter (the week’s news and PR, with notes)

The IEEE is developing a guide for ethically designing AI. The draft guide is available for review, and they’re soliciting feedback through March 17, 2017. The goal is to clarify and produce guidelines around collection of personal data, machine autonomy, and other legal/economic issues. This is a very, very good thing. They’ve avoided the hypey talk about AI megalomania and focused on the real problems that accumulate around centralized information and control in algorithmic environments. Smart. Here are some comments from around the web.

An Amazon Prime Air drone delivered popcorn to a guy in England.

It was big news, but not everyone is convinced that airborne delivery is the most efficient way to get people popcorn. It seems better suited to high-value, time-constrained deliveries — primarily medicine and tissue. My guess is that’s Amazon’s plan, and this is just the splashy sales pitch.

We got an answer to the question of the week — what will Google/Alphabet call their self-driving car unit, since Drive is already taken? The answer — um — Waymo. Name aside, I’m excited to see what they have planned. Even though the particulars remain a mystery, it looks like it won’t be the long-teased steering-wheel-free car of the future.

Magic Leap got pummelled after a report from The Information (paywall) suggested that they had oversold their technology. And, troublingly, that the work Weta did on a demo video wasn’t just game graphics; it simulated the AR itself. The video was faked, leading Recode to lump them in with Theranos as “fake tech”. Ouch. Still, several of their other videos specifically claim to be SFX-free. The Weta video didn’t. And it doesn’t seem fair to equate having trouble resolving a technology (Magic Leap) with just up and lying (Theranos). Rony Abovitz’s end-of-year blog post came out after The Information’s story without addressing it directly. For now, though, the world’s hopes for AR rest with the HoloLens (Which is real. And works. And which people think is pretty awesome.) and Apple’s rumoured AR/VR/smart glass project.

A robot hand got a soft touch by combining optical waveguides, 3D printing, and soft robotics. Robert Shepherd’s Organic Robotics Lab at Cornell developed the approach, which was published in Science Robotics.

Uber launched self-driving cars in San Francisco on Wednesday morning. Yay!

On Wednesday afternoon, California ordered them to stop and start over by applying through the usual channels. Oof. There was also a little red light incident that didn’t help Uber’s PR case. Uber says human drivers were in control of the offending vehicles, but we’re left with the nagging question of how other drivers will react when they believe a vehicle is driving itself. It is increasingly obvious that self-driving cars are going to cause regulators a lot of headaches.

Rumour has it that the ex-Google X self-driving car lead (he gave a great TED talk last year) is starting up his own self-driving car company. We count a lot of players in the AI for self-driving car space — Waymo, Uber, Drive.ai, Comma.ai, nuTonomy, BlackBerry (via QNX), and now IBM+BMW. I wonder how many of them can survive. It seems likely that self-driving cars will be dominated by 1 or 2 AI service providers, bundled into OEM hardware, competing with proprietary systems from big car makers. At least until we enter the brave new world of 100% on-demand transport in the late 2030s.

GM announced they’re expanding self-driving vehicle tests to Detroit, and will be building their test vehicles there.

Deepthoughts (musings and commentary on the state of the art)

Riva-Melissa Tez warned against uncritical hype in AI. Case study: RocketAI, a “Temporally Recurrent Optimal Learning” (TROL…see what they did there?) startup “launched” at NIPS. It was 100% fake, and a perfectly calibrated AI cred shibboleth. People who knew, knew. Everyone else looked stupid. But it highlights a real problem. Many non-insiders are baffled/awed/excited by developments in AI — and therefore prone to BS of all kinds. Investors, users, and the public need to be educated about real developments, not wowed by exaggerated claims.

CMU’s Stephanie Rosenthal wrote a post clearly articulating the need for human language communication and trust between robots and people. Roboticists tend to be comfortable auditing log files to understand system behaviour. Most other people, to put it mildly, are not. But, if robots are going to play a larger role in our lives, “most other people” need to be able to communicate with them effortlessly, in real time. Otherwise, we won’t feel secure using them. Transparency, clarity, and ease of use are fundamental. Robots need nuanced design that is both human centred and system centred. Human/system centred design?

Intel Capital’s Ramamurthy Sivakumar paints a nice picture of the introduction of robots. He argues that Savioke, Fetch, Fellow, and Starship are clear signals that the proliferation of cheap sensors and processors, and better machine learning, is finally opening previously robot-free industries (service, retail, delivery) and will eventually lead to a general purpose robot.

Steven Levy wrote a great post-mortem on the Pebble failure. Summary: good people worked hard and did something exciting, but got overwhelmed by a fickle market.

There were some interesting arguments about why we need to ban/prevent killer robots — 1) in robot v. human fights, the robot will win, 2) a little software tweak can make a robot change sides. I agree. But why stop at killer robots? I’d include killer airplanes, killer missiles, killer guns, killer bombs, killer landmines, killer tanks, killer boats, etc., in the list of killer things worth eliminating.

Edward Snowden and Jack Dorsey had a chat on Periscope where Snowden made a case for control of data being a central question/problem from here on in. That question, along with the risks of manipulation and hacking of AI algorithms, security obligations, and the risks built into trusting algorithms to make decisions, is getting more pressing by the day. Maybe it’s finally time to get serious about a New Deal on Data.

Data (info on filings, acquisitions, usage, uptake…)

New York ETF shop Global X launched a robot-themed ETF — the catchily named Global X Robotics & Artificial Intelligence Thematic ETF. It joins a handful of others: Indxx, Robo Global, and Pictet.


Remembering Boston Dynamics’ 2015 festive greetings. Note: teams of robodeer should be harnessed and lined.

New stuff in robots and AI. Dec 12 edition. No. 3


SALTO, Berkeley’s Biomimetic Millisystems Lab’s mini parkour robot, burned up the internet with its crazy jumping antics.

This is ChainFORM, the fancy “novel type of shape changing interface” (read: robot) from MIT’s Tangible Media Lab. One small step closer to programmable matter?

Boston Dynamics’ Spot Mini is always a delight. This demo from NIPS is pretty darn impressive.

More Spot Mini sightings:

OK. This 3 min celebration of Cybathlon, the world’s biggest cyborg athletic meet, really hits the spot.

We are huge suckers for building with robots. Kuka is ace. We see construction as the biggest sector primed for automation. After, well, automobiles of course.

Tesla’s sort-of-POV video of a self-driving car completing a commute is stunning, if totally uneventful.

Chatter (the week’s news and PR, with notes)

NIPS happened. Apple made it clear that they’ll start publishing, in an effort to remain an attractive destination for researchers and be a good citizen in the AI community. NVIDIA and Yann LeCun announced a toolkit for teaching deep learning at the university level. NIPS spotlight talks are available here.

It was, as usual, a big week in self-driving cars. There was lots of talk about Apple making moves. Baidu started testing in China. Google is spinning cars out of X. Now they’ll have their very own division. Any guesses on names, given that Google Drive is taken?

Pebble failed, and was bought by Fitbit for less than its outstanding debt. Seems that everyone went underwater. Innovator and victim of the wearables boom and bust. Ouch.

We’re going to keep seeing these stories of racist/sexist/otherwise horrible decisions made by and blamed on AI. (Remember the ProPublica report about a biased machine learning based recidivism “predictor”?) It’s obvious that we shouldn’t be deploying these systems without very carefully thinking about the biases we’re burying inside them. Otherwise we’re just building racist machines to do our racist dirty work.

However, creating a herd of robot rhinos to deter poachers is such an obviously bad idea that it might just be genius. Unless of course the robot rhinos displace all the real rhinos and we end up in robot Jurassic Park. Could happen.

Tencent is opening an AI lab, making them the latest to do so in China.

Nishogakusha University unveiled an android remake of a long-dead author, made by Hiroshi Ishiguro. It looks and feels like an animatronic display from an 80s anthropology museum, which is essentially what it is. I don’t really see the point. Still skeptical of androids.

Researchers at Johns Hopkins have successfully run a proof-of-concept showing that remote drones can be used to deliver blood without spoilage.

Deepthoughts (musings and commentary on the state of the art)

Senator Ted Cruz pontificated about AI in his committee hearing. It was surprisingly good, especially given the source and context.

Robin Hanson is skeptical about the overpromises of deep learning/AI. It boils down to a belief that the benefits of enhanced prediction aren’t as easy to capture as we seem to think. He thinks we’re in for some disappointment.

Here is a great interview with Google X’s Astro Teller, including a lot of insight on the future of personal robotics. As well as some semi-clarifications on what X is doing with all the roboticists they bought in 2013.

Morgan Stanley’s Ruchir Sharma adapted a reassuring essay from his book The Rise and Fall of Nations: Forces of Change in the Post-Crisis World for the Washington Post. Summary: don’t worry, robots aren’t taking your job.

Data (info on filings, acquisitions, usage, uptake…)

Thermal sensor maker FLIR bought mini drone maker Prox, in a bet that tiny sensor-laden drones will be a hot military product.

Boeing bought Liquid Robotics.

Uber bought Geometric Intelligence. So, along with Tencent, they’re the newest kid on the AI block.

Products (new hardware, software, and services)

DeepMind and OpenAI launched open platforms allowing users to train their AI on simulated games, websites, etc. OpenAI’s is called Universe. DeepMind’s is DeepMind Lab. Both allow agents to perceive and control pixels, on the rationale that it approximates the low-information environments that humans learn in.
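Neither platform’s exact API is reproduced here, but the pixels-in, actions-out contract they share can be sketched as a toy gym-style loop. `ToyPixelEnv` and `random_agent` below are invented stand-ins for illustration, not part of Universe or DeepMind Lab:

```python
import numpy as np

class ToyPixelEnv:
    """Stand-in for a Universe/Lab-style environment: the agent sees
    only raw pixels and emits low-level actions, just as a human would."""
    def __init__(self, width=84, height=84):
        self.shape = (height, width, 3)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return np.zeros(self.shape, dtype=np.uint8)  # first frame

    def step(self, action):
        self.steps += 1
        frame = np.random.randint(0, 256, self.shape, dtype=np.uint8)
        reward = 1.0 if action == "forward" else 0.0  # toy reward signal
        done = self.steps >= 10                       # short toy episode
        return frame, reward, done

def random_agent(frame):
    """A real agent would map pixels to an action; this one just guesses."""
    return np.random.choice(["forward", "left", "right"])

env = ToyPixelEnv()
frame, total, done = env.reset(), 0.0, False
while not done:
    frame, reward, done = env.step(random_agent(frame))
    total += reward
print(total)
```

The point of the low-information design is visible even in the sketch: the agent gets nothing but an array of pixels and a scalar reward, so anything it learns has to be learned the way a human player would.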

Local Motors’ drone equipped 3d printed autonomous car (software by Mouser Electronics) is certifiably bananas. Amplifying “the sensory experience that you have when you travel through space” seems like a crazy fun, if niche, idea to me.

Toronto’s The Sky Guys have launched a new fixed-wing drone through their Defiant Labs technology division. It’s a sleek-looking, long-range unit (1,500 km) designed for remote inspection.

New stuff in robots and AI. Dec 2 edition.

Videos (& other entertainments)

One of Marco Tempest’s delightful drone magic performances.

Researchers at EPFL are developing brain-spine interfaces, potentially allowing direct control of artificial limbs.

Fei-Fei Li on the challenges of computer vision, even for our fanciest algorithms.

CSAIL’s Carl Vondrick, along with Hamed Pirsiavash and Antonio Torralba, taught a computer to extrapolate video from still images. Fancy. More info is here.

Florida Institute for Human & Machine Cognition is teaching robots to walk on uneven terrain. It may not look too impressive, but it really, really is.

Chatter (the week’s news and PR, with notes)

Starship and Just Eat completed the “World’s First” drone meal delivery. The dreamer in me refuses to believe that no one ever retrofitted an Egemin Mailmobile to deliver bagels, so this can’t be the first. Still, pretty cool.

AI and art are getting progressively awesomer. AIs write music now. A group of artists used a bunch of toolkits to generate city maps from drawings. And researchers are building a nice labelled data set to help train up future AI composers.

Here’s some fancy drone flying. Polytechnique Montréal’s Mobile Robotics Lab landed a self-flying quadcopter on a car driving at 50km/h (OK, that one was dicey, but still). The suggestion is that this could further the cause of aerial drone delivery by extending their range.

Last month, Comma.ai, George Hotz’s autonomous car startup, scrapped their Comma One product under pressure from regulators. This week, they open sourced it.

George Hotz on Bloomberg, explaining why Comma.ai is open sourcing. (Spoiler: because regulators.)

Hotz is clear that they need a heftier data set to compete with Tesla et al. Inclined Android users can help them collect data with chffr.

Intel has signed a deal with Delphi and Mobileye to provide self-driving car CPU muscle.

Ontario is the latest jurisdiction to jump on the self-driving wagon, allowing BlackBerry, Waterloo, and Erwin Hymer to test on public roads starting in 2017. BlackBerry isn’t just there for local colour. They’ve been in the space since buying embedded systems OS maker QNX in 2010. Across the Atlantic, Ford will be testing cars in Europe. The battle of tax breaks and lax regulations is heating up. Winner gets to be the home of autonomous cars, I guess. Unless they just get made in the existing factories. Which, frankly, seems pretty likely.

Since fake news delivered us into the Trumpian dystopia we call “reality”, punditry has been all aflutter about it. Turns out Facebook has the technological capacity to fix it, but fixing it is an executive decision. Better late than never, I guess? Thomson Reuters has already built a tool to filter fake news. Still, given the strong motives for making and spreading misinformation, it won’t be long before these tools are gamed, like the search engines of yore. “News Filter Optimizer” is the grossest job title of 2017. Barf.

Further to the “we really need to sort out security for the whole Internet of Things” thing, massive new botnets are emerging and are available for rent. Cheap.

Deepthoughts (musings and commentary on the state of the art)

Fretting about the dangers of AI and robots is at peak fret. Sensibly, major players are taking this seriously, especially in the context of unsupervised systems. DeepMind has a paper outlining techniques for safely interrupting agents. SVRobo set up the Good Robot Design Council, complete with an excellent set of 5 Laws of Robotics (no weapons, obey the law, be safe & reliable, no pretending to be people, discoverable ownership). Quanta Magazine has a nice interview with Cynthia Dwork on building fairness into our algorithms. And VentureBeat had articles on chatbots masquerading as people and using AI to augment, not replace, human workers.

The takeaway: let’s establish ground rules to prevent us from building systems that relentlessly exploit human weakness. Unless that ship has already sailed. In which case, whatever, pass the Cheetos.

Products (new hardware, software, and services)

On Wednesday Amazon announced a suite of AI services on AWS: Rekognition image recognition (see what they did there?), Polly speech synthesis, and Lex chatbots. They look pretty awesome, and are already available.

Hease is unveiling a retail robot at CES. It appears to be a hybrid AI/telepresence controlled system designed as roaming customer service for malls, hotels, and conferences.

Dashbot is a startup that wants to put Alexa in your car. Their Kickstarter is funded, though it isn’t clear how this space isn’t already filled by dash-mounted phones, CarPlay, etc.

There is now a robot mini submarine that can be launched from a robot boat. Magic. It’s being tested, and will be released early next year. Then they’ll be used for persistent underwater monitoring, mapping, etc.


This vintage view of the robots of the 2000s is magical.

New stuff in robots and AI, Nov 25 edition.

This is our first weekly roundup of news and happenings in robotics and AI. Stay tuned.

Videos (& other entertainments):

UCLA’s Robotics and Mechanisms Lab (RoMeLa) shows off its Non-Anthropomorphic Bipedal Robotic System. It’s simple, lightweight, and apparently very effective. We're especially fond of the cardboard box and Sharpie face.

Another RoMeLa video, this time of BALLU (Buoyancy Assisted Lightweight Legged Unit), an absurd/wonderful humanoid balloon robot.

A few weeks late, but here’s Otto (the truck people, not the Clearpath spinoff factory material transport people) touting the first ever commercial delivery by autonomous truck. Of Bud, obviously.

Anki is pushing an ability update to their Cozmo robot, including a remote control mode and a pet detector. This announcement video is appropriately cute.

Finally, here is Yann LeCun's deep dive lecture on unsupervised learning at CMU Robotics. 

Chatter (the week's news and PR, with notes):

Google's new Neural Machine Translation tool appears to develop internal representations of abstract ideas, independent of language. The hype machine has now decided that machines can think.

A DeepMind/CIFAR funded Oxford study produced an AI that reads lips with 93.4% accuracy (at least from a neatly labelled set of well lit videos). The model is reportedly the first that reads lips at the sentence level, rather than word level, allowing it to learn contextual movement/phoneme correspondences.

Another Oxford/DeepMind paper trained up a system on BBC newscasts, finding that it could get to 46.8% accuracy on “in the wild” videos. This is impressive, given the variability of data (facial angle, lighting, accent, idiom).

Siemens continues work on their Spiders, now as a swarm-robot lunar construction crew.

A reminder of the cold reality of manufacturing & jobs: it really is a lot cheaper to employ a robot than a human for certain highly repeatable tasks. $8/hr vs $25/hr for a spot welder.
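The arithmetic behind that comparison is worth making explicit. Here's a minimal back-of-the-envelope sketch using the article's hourly rates; the purchase price and annual operating hours below are hypothetical assumptions, and the $8/hr figure is treated as a running cost separate from the up-front price.

```python
# Back-of-the-envelope payback calculation for a robot spot welder.
# The hourly rates come from the article; the purchase price and
# hours-per-year figures are hypothetical assumptions.

HUMAN_RATE = 25.0   # $/hr, human spot welder (from the article)
ROBOT_RATE = 8.0    # $/hr, robot running cost (from the article)

def payback_years(purchase_price, hours_per_year=2000):
    """Years until the hourly savings cover the up-front robot cost."""
    savings_per_hour = HUMAN_RATE - ROBOT_RATE   # $17/hr
    return purchase_price / (savings_per_hour * hours_per_year)

if __name__ == "__main__":
    # Hypothetical $100k robot cell, one shift (2000 hr/yr):
    print(f"Payback: {payback_years(100_000):.1f} years")        # ~2.9 years
    # Running two shifts halves the payback time:
    print(f"Payback: {payback_years(100_000, 4000):.1f} years")  # ~1.5 years
```

The striking part is how fast the savings compound: at a $17/hr spread, every extra shift of utilization pulls the break-even point in proportionally.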

Are AIs boys or girls? Here's an excellent argument for why the question matters. It’s going to matter a lot more as AIs and robots become more deeply embedded in our lives. I think gender neutrality is the only way to go. Calling a thing “he” or “she” is basic anthropomorphism. It leads us to project cheap human stereotypes onto things, reinforcing those cheap stereotypes. But machines aren't people, so let's just agree to call them "it".

Blockchain may be losing steam as the technology du jour while AI reaches peak hype. I remain convinced that a blockchain style technology is the best option for securing our myriad IOT and robotic devices.

Phrenology, machine learning edition. Here's a paper detailing a system able to identify "criminals" by their facial features. It is an object lesson in how ethics-free AI and machine learning applications can have deeply disturbing implications. Takeaway: don't be awful.

A Japanese consortium aims to improve mapping tech so that driverless cars can ply the roads during the Tokyo 2020 Olympics.

Softbank’s humanoid robot Pepper made its debut as a shop assistant at a couple of malls in California.

Jibo is pushing its release (again) to 2017, putting them 2 years behind schedule. In a letter to supporters, they are saying that findings from their “Beta 2” in-home test have led to the delay. Early backers are getting irate, but I’m cautiously optimistic. It’s not clear what Jibo is for, and taking a little longer to sort that out is probably better than releasing too early. Hardware ≠ software. Moving fast and breaking things isn’t always the best strategy. Though being late to market doesn’t help either.

Thalmic teams up with Po to make inexpensive gesture controlled prosthetics.

Expanding on their existing Singapore fleet, NuTonomy announces the introduction of (one) hailable self-driving taxi in Boston by the end of the year, with more to come in 2017.

Cheap wifi security camera compromised in 98 seconds, reminding us that we’ll need bulletproof security if IOT and robotics are going to get real.

Hyperloop One, the Elon Musk inspired pneumatic train company, is jumping on the autonomous car, um,...bandwagon.

Boston’s autonomous vehicle testing seaport is closing, but there is a push to allow street tests within the next year.

Michigan is also getting in on the autonomous vehicle test city game. In their case, bills are already approved.

Disney is adding sound to silent videos. Using an unsupervised clustering system trained on reams of video, it correlates sound and image and is able to create soundtracks for audio-stripped video.
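The basic idea, clustering image features and attaching a representative sound to each cluster, can be sketched in toy form. Everything below (the scalar "features", the 1-D k-means, the sample values) is a made-up illustration of the general technique, not Disney's actual method.

```python
# Toy sketch: cluster image features, store the average sound feature per
# cluster, then "add sound" to a new silent frame by looking up the sound
# of its nearest cluster. Features here are scalars purely for illustration.

def kmeans_1d(values, k, iters=20):
    """Simple 1-D k-means with deterministic initialization."""
    svals = sorted(values)
    centroids = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[idx].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def build_sound_model(pairs, k=2):
    """pairs: (image_feature, sound_feature) tuples from training video."""
    centroids = kmeans_1d([img for img, _ in pairs], k)
    sums, counts = [0.0] * k, [0] * k
    for img, snd in pairs:
        idx = min(range(k), key=lambda i: abs(img - centroids[i]))
        sums[idx] += snd
        counts[idx] += 1
    sounds = [sums[i] / counts[i] if counts[i] else 0.0 for i in range(k)]
    return centroids, sounds

def predict_sound(centroids, sounds, image_feature):
    """For a silent frame, emit the sound of the nearest image cluster."""
    idx = min(range(len(centroids)), key=lambda i: abs(image_feature - centroids[i]))
    return sounds[idx]
```

In the real system the features would be high-dimensional embeddings rather than scalars, but the lookup logic (frame, nearest cluster, associated audio) is the same shape.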

Trump's election is making Canada an even more attractive destination, visa edition.

Deepthoughts (musings and commentary on the state of the art):

The Rotman School's Agrawal, Gans, and Goldfarb have a great piece on machine learning as the technology of prediction. The radical drop in the price (and associated rise in availability) of prediction will radically reshape industries. Many activities will be reframed as predictive tasks and overtaken by machine learning solutions.

Here’s a talk from Autodesk’s Jeff Kowalski on the use of AI in design, and what that means for designers.

University at Buffalo professor Theo Karppi questions the practicality of banning killer robots, encouraging us to focus on the chains of causation leading to them instead.

The fine folks from Algorithmia on why Deep Learning is transformative, and what the frontiers look like.

Info (filings, acquisitions, fundings, usage, uptake...):

Andy Rubin is raising another $500 million for Playground Ventures, his hardware incubator. There’s pretty good reason to think that the guy who named his company Android and went on the biggest robot buying binge in history will direct some (most? all?) of that money toward cool robot and AI ventures.

In a pleasant change of pace from acquisition, there are IPO filings from 2 robotics companies - Myomo (MYO) on the NYSE MKT exchange and ZMP (7316) on the TYO Mothers market.

GE is buying AI companies, adding AI capabilities to its industrial IOT Predix Platform.

Google is investing $4.5 million in Yoshua Bengio’s Montreal AI lab. (Bengio, along with Geoff Hinton and Yann LeCun, is one of the big three of deep learning.)

Aeryon unveils a new $3 million drone environmental simulation and testing complex.

Phew. That was a long first kick at the can. Thanks for reading all the way to the end! Would love your feedback/comments. Hoping to make this little summary of the week as useful as possible. Let us know what you think!