In Tensorflow: The Confusing Parts (1), I described the abstractions underlying Tensorflow at a high level in an intuitive manner. In this follow-up post, I dig deeper and examine how these abstractions are actually implemented. Understanding these implementation details isn’t essential to writing and using Tensorflow, but it allows us to inspect and debug computational graphs.
Inspecting Graphs

The computational graph is not just a nebulous, immaterial abstraction; it is a computational object that exists, and can be inspected.
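As a small sketch of what "inspecting" means in practice, the snippet below builds a tiny graph and prints the names of its nodes. It uses the `tf.compat.v1` namespace so that the TF1-era graph API this post describes also runs under TensorFlow 2; the node names (`two`, `three`, `total`) are illustrative choices, not anything mandated by the library.

```python
import tensorflow as tf

tf1 = tf.compat.v1  # this post predates TF 2; compat.v1 exposes the graph-mode API

# Build a tiny graph explicitly, rather than relying on the default graph.
g = tf1.Graph()
with g.as_default():
    two = tf1.constant(2, name="two")
    three = tf1.constant(3, name="three")
    total = tf1.add(two, three, name="total")

# as_graph_def() returns the underlying GraphDef protobuf, which lists every node.
names = [node.name for node in g.as_graph_def().node]
print(names)  # ['two', 'three', 'total']
```

Printing the full `g.as_graph_def()` object (rather than just the names) shows each node's op type, inputs, and attributes, which is often the quickest way to see what a graph actually contains.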
Introduction

What is this? Who are you?

I’m Jacob, a Google AI Resident. When I started the residency program in the summer of 2017, I had a lot of experience programming, and a good understanding of machine learning, but I had never used Tensorflow before. I figured that given my background I’d be able to pick it up quickly. To my surprise, the learning curve was fairly steep, and even months into the residency, I would occasionally find myself confused about how to turn ideas into Tensorflow code.