Remember the classic videogame Breakout on the Atari 2600? When you first sat down to try it, you probably learned to play well pretty quickly, because you already knew how to bounce a ball off a wall in real life. You may have even worked up a strategy to maximize your overall score at the expense of more immediate rewards. But what if you didn't possess that real-world knowledge — and only had the pixels on the screen, the control paddle in your hand, and the score to go on? How would you, or equally any intelligent agent faced with this situation, learn this task totally from scratch?

This is exactly the question that we set out to answer in our paper “Human-level control through deep reinforcement learning”, published in Nature this week. We demonstrate that a novel algorithm called a deep Q-network (DQN) is up to this challenge, excelling not only at Breakout but also at a wide variety of classic videogames: everything from side-scrolling shooters (River Raid) to boxing (Boxing) and 3D car racing (Enduro). Strikingly, DQN was able to work straight “out of the box” across all these games — using the same network architecture and tuning parameters throughout, and provided only with the raw screen pixels, the set of available actions, and the game score as input.

The results: DQN outperformed previous machine learning methods in 43 of the 49 games. In fact, in more than half the games, it performed at more than 75% of the level of a professional human player. In certain games, DQN even came up with surprisingly far-sighted strategies that allowed it to achieve the maximum attainable score—for example, in Breakout, it learned to first dig a tunnel at one end of the brick wall so the ball could bounce around the back and knock out bricks from behind.


So how does it work? DQN incorporated several key features that for the first time enabled the power of Deep Neural Networks (DNNs) to be combined in a scalable fashion with Reinforcement Learning (RL)—a machine learning framework that prescribes how agents should act in an environment in order to maximize future cumulative reward (e.g., a game score). Foremost among these was a neurobiologically inspired mechanism, termed “experience replay,” whereby during the learning phase DQN was trained on samples drawn from a pool of stored episodes—a process thought to occur in the brain, in a structure called the hippocampus, through the ultra-fast reactivation of recent experiences during rest periods (e.g., sleep). Indeed, experience replay was critical to the success of DQN: disabling it caused a severe deterioration in performance.
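The core of experience replay can be sketched in a few lines of code: store transitions in a bounded buffer and train on random minibatches drawn from it, rather than on consecutive frames. A minimal sketch follows; the capacity, batch size, and transition layout are illustrative assumptions, not the paper's exact settings.

```python
import random
from collections import deque

class ReplayBuffer:
    """A minimal experience replay buffer (illustrative sketch)."""

    def __init__(self, capacity=100_000):
        # deque with maxlen evicts the oldest experiences automatically
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        # Store one transition observed while playing
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive frames, which stabilizes training
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

During training, the agent would draw minibatches with `buffer.sample()` and perform a Q-learning update on each batch, instead of learning from experiences strictly in the order they occurred.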
Comparison of the DQN agent with the best reinforcement learning methods in the literature. The performance of DQN is normalized with respect to a professional human games tester (100% level) and random play (0% level). Note that the normalized performance of DQN, expressed as a percentage, is calculated as: 100 × (DQN score − random play score) / (human score − random play score). Error bars indicate s.d. across the 30 evaluation episodes, starting with different initial conditions. Figure courtesy of Mnih et al. “Human-level control through deep reinforcement learning”, Nature 26 Feb. 2015.
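The caption's normalization can be written directly in code. The scores in the comment below are illustrative numbers, not results from the paper:

```python
def normalized_score(agent_score, human_score, random_score):
    """Express an agent's score on a scale where random play is 0%
    and a professional human games tester is 100%."""
    return 100.0 * (agent_score - random_score) / (human_score - random_score)

# With illustrative numbers: an agent scoring 320 on a game where a human
# scores 400 and random play scores 20 sits at about 78.9% of human level.
```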
This work offers the first demonstration of a general purpose learning agent that can be trained end-to-end to handle a wide variety of challenging tasks, taking in only raw pixels as inputs and transforming these into actions that can be executed in real-time. This kind of technology should help us build more useful products—imagine if you could ask the Google app to complete any kind of complex task (“Okay Google, plan me a great backpacking trip through Europe!”).

We also hope this kind of domain-general learning algorithm will give researchers new ways to make sense of complex, large-scale data, creating the potential for exciting discoveries in fields such as climate science, physics, medicine and genomics. And it may even help scientists better understand the process by which humans learn. After all, as the great physicist Richard Feynman famously said: “What I cannot create, I do not understand.”


We have just completed another round of the Google Faculty Research Awards, our biannual open call for research proposals on Computer Science and related topics, including systems, machine perception, structured data, robotics, and mobile. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.

This round we received 808 proposals, an increase of 12% over last round, covering 55 countries on 6 continents. After expert reviews and committee discussions, we decided to fund 122 projects, with 20% of the funding awarded to universities outside the U.S. The subject areas that received the highest level of support were systems, human-computer interaction, and machine perception.

The Faculty Research Award program enables us to build strong relationships with faculty around the world who are pursuing innovative research, and plays an important role for Google’s Research organization by fostering an exchange of ideas that advances the state of the art. Each round, we receive proposals from faculty who may be just starting their careers, or who might be experimenting in new areas that help us look forward and innovate on what's emerging in the CS community.

Congratulations to the well-deserving recipients of this round’s awards. If you are interested in applying for the next round (deadline is April 15), please visit our website for more information.


(Cross-posted from the Google for Education Blog)

Science is about observing and experimenting. It’s about exploring unanswered questions, solving problems through curiosity, learning as you go and always trying again.

That’s the spirit behind the fifth annual Google Science Fair, kicking off today. Together with LEGO Education, National Geographic, Scientific American and Virgin Galactic, we’re calling on all young researchers, explorers, builders, technologists and inventors to try something ambitious. Something imaginative, or maybe even unimaginable. Something that might just change the world around us.

From now through May 18, students around the world ages 13-18 can submit projects online across all scientific fields, from biology to computer science to anthropology and everything in between. Prizes include $100,000 in scholarships and classroom grants from Scientific American and Google, a National Geographic Expedition to the Galapagos, an opportunity to visit LEGO designers at their Denmark headquarters, and the chance to tour Virgin Galactic’s new spaceship at the Mojave Air and Space Port. This year we’re also introducing an award to recognize an Inspiring Educator, as well as a Community Impact Award honoring a project that addresses an environmental or health challenge.

It’s only through trying something that we can get somewhere. Flashlights required batteries, then Ann Makosinski tried the heat of her hand. His grandfather would wander out of bed at night, until Kenneth Shinozuka tried a wearable sensor. The power supply was constantly unstable in her Indian village, so Harine Ravichandran tried to build a different kind of regulator. Previous Science Fair winners have blown us away with their ideas. Now it’s your turn.

Big ideas that have the potential to make a big impact often start from something small. Something that makes you curious. Something you love, you’re good at, and want to try.

So, what will you try?


In 2009, Google created the PhD Fellowship program to recognize and support outstanding graduate students doing exceptional work in Computer Science (CS) and related disciplines. In that time we’ve seen past recipients add depth and breadth to CS by developing new ideas and research directions, from building new intelligence models to changing the way in which we interact with computers to advancing into faculty positions, where they go on to train the next generation of researchers.

Reflecting our continuing commitment to building strong relations with the global academic community, we are excited to announce the latest North American Google PhD Fellows. The following 15 fellowship recipients were chosen from a highly competitive group, and represent the outstanding quality of nominees provided by our university partners:

  • Justin Meza, Google US/Canada Fellowship in Systems Reliability (Carnegie Mellon University)
  • Waleed Ammar, Google US/Canada Fellowship in Natural Language Processing (Carnegie Mellon University)
  • Aaron Parks, Google US/Canada Fellowship in Mobile Networking (University of Washington)
  • Kyle Rector, Google US/Canada Fellowship in Human Computer Interaction (University of Washington)
  • Nick Arnosti, Google US/Canada Fellowship in Market Algorithms (Stanford University)
  • Osbert Bastani, Google US/Canada Fellowship in Programming Languages (Stanford University)
  • Carl Vondrick, Google US/Canada Fellowship in Machine Perception (Massachusetts Institute of Technology)
  • Wojciech Zaremba, Google US/Canada Fellowship in Machine Learning (New York University)
  • Xiaolan Wang, Google US/Canada Fellowship in Structured Data (University of Massachusetts Amherst)
  • Muhammad Naveed, Google US/Canada Fellowship in Security (University of Illinois at Urbana-Champaign)
  • Masoud Moshref Javadi, Google US/Canada Fellowship in Computer Networking (University of Southern California)
  • Riley Spahn, Google US/Canada Fellowship in Privacy (Columbia University)
  • Saurabh Gupta, Google US/Canada Fellowship in Computer Vision (University of California, Berkeley)
  • Yun Teng, Google US/Canada Fellowship in Computer Graphics (University of California, Santa Barbara)
  • Tan Zhang, Google US/Canada Fellowship in Mobile Systems (University of Wisconsin-Madison)

This group of students represents the next generation of researchers who will endeavor to solve some of the most interesting challenges in Computer Science. We offer our congratulations, and look forward with high expectations to their future contributions to the research community.


Nature reserves play a vital role in protecting biodiversity and its many functions. However, there is often insufficient information available to determine where to most effectively invest conservation efforts to prevent future extinctions, or which species may be left out of conservation actions entirely.

To help address these issues, Map of Life, in collaboration with Google Earth Engine, has now pre-released a new service to pinpoint at-risk species and where in the world they occur. At the fingertips of regional naturalists, conservation groups, resource managers and global threat assessors, the tool has the potential to help identify and close key information gaps and highlight the species of greatest concern.

Take the Tamaulipas Pygmy Owl, one of the smallest owls in the world, restricted to highland forests in Mexico. The consensus range map for the species indicates a broad distribution of over 50,000 km2:
Left: Tamaulipas Pygmy Owl (Glaucidium sanchezi, photo credit: Adam Kent). Right: Map of Life consensus range map showing the potentially habitable range of this species.

But accounting for available habitat in the area using remotely sensed information presents a different picture: less than 10% of this range is forested and at a suitable elevation.
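The habitat refinement described here amounts to intersecting a consensus range with remotely sensed land cover and elevation layers. A minimal sketch on a gridded landscape follows; the data layout, field names, and thresholds are hypothetical illustrations, not Map of Life's actual pipeline:

```python
def refine_range(cells, min_forest=0.5, min_elev=900, max_elev=2500):
    """Keep only range cells matching a species' habitat association.

    cells: list of dicts, each with 'area_km2' (cell area),
    'forest_frac' (fraction forested) and 'elevation_m' keys.
    Returns (suitable_cells, total_suitable_area_km2).
    All thresholds are hypothetical example values.
    """
    suitable = [
        c for c in cells
        if c["forest_frac"] >= min_forest
        and min_elev <= c["elevation_m"] <= max_elev
    ]
    return suitable, sum(c["area_km2"] for c in suitable)
```

Varying the thresholds and recomputing the suitable area mirrors the kind of on-the-fly sensitivity exploration the tool offers.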
Users can change the habitat association settings and explore on-the-fly how this affects the distribution and map quality. This refined range map now allows a much improved evaluation of the owl’s potential protection. Furthermore, the sensitivity of conservation assessments to various assumptions can be directly explored in this tool.
Of the owl’s potential range, only around 1,000 km2 is under formal protection, spread across seven reserves, of which only two are larger than 100 km2. This is much less than would be desirable for a species with so small a global range.

Another species example, the Hildegard’s Tomb Bat, is similarly concerning: less than 6,000 km2 of suitable range remains for this forest specialist in East Africa, with less than half currently under protection.

A demonstration of this tool for 15 example species was pre-released at the decadal World Parks Congress in Sydney, Australia last November to the global community of conservation scientists and practitioners. In the coming months this interactive evaluation will be expanded to thousands more species, providing a valuable resource to aid in global conservation efforts. For more information and updates, follow Map of Life.


Last July, Google and the Institute of Electrical and Electronics Engineers Power Electronics Society (IEEE PELS) announced the Little Box Challenge, a competition designed to push the forefront of new technologies in the research and development of small, high power density inverters.

In parallel, we announced the Little Box Challenge award program designed to help support academics pursuing groundbreaking research in the area of increasing the power density for DC-­to­-AC power conversion. We received over 100 proposals and today we are proud to announce the following recipients of the academic awards:

Primary academic institutions of the principal investigators:

  • University of Colorado Boulder
  • National Taiwan University of Science and Technology
  • Universidad Politécnica de Madrid
  • Texas A&M University
  • ETH Zürich
  • University of Bristol
  • Case Western Reserve University
  • University of Illinois Urbana-Champaign
  • University of Stuttgart
  • Queensland University of Technology

The recipients hail from many different parts of the world and were chosen based on their very strong and thoughtful entries dealing with all the issues raised in the request for proposals. Each of these researchers will receive approximately $30,000 US to support their research into high power density inverters, and are encouraged to use this work to attempt to win the $1,000,000 US grand prize for the Little Box Challenge.

There were many submissions beyond those chosen here that reviewers also considered to be very promising. We encourage all those who did not receive funding to still participate in the Little Box Challenge, and to pursue improvements not only in power density, but also in the reliability, efficiency, safety, and cost of inverters (and, of course, to attempt to win the grand prize!).


Imagine a world in which access to networked technology defies the constraints of desktops, laptops or smartphones. A future where we work seamlessly with connected systems, services, devices and “things” to support work practices, education, and daily interactions. While the Internet of Things (IoT) conjures a vision of “anytime, any place” connectivity for all things, the realization is complex given the need to work across interconnected and heterogeneous systems, and the special considerations needed for security, privacy, and safety.

Google is excited about the opportunities the IoT presents for future products and services. To further the development of open standards, facilitate ease of use, and ensure that privacy and security are fundamental values throughout the evolution of the field, we are in the process of establishing an open innovation and research program around the IoT. We plan to bring together a community of academics, Google experts and potentially other parties to pursue an open and shared mission in this area.

As a first step, we are announcing an open call for research proposals for the Open Web of Things:

  • Researchers interested in the Expedition Lead Grant should build a team of PIs and put forward a proposal outlining a draft research roadmap both for their team(s), as well as how they propose to integrate related research that is implemented outside their labs (e.g., Individual Project Grants).
  • For the Individual Project Grants, we are seeking research proposals relating to the IoT in the following areas: (1) user interface and application development, (2) privacy & security, and (3) systems & protocols research.

Importantly, we are open to new and unorthodox solutions in all three of these areas, for example, novel interactions, usable security models, and new approaches for open standards and evolution of protocols.

Additionally, to facilitate hands-on work supporting our mission-driven research, we plan to provide participating faculty access to hardware, software and systems from Google. We look forward to your submission by January 21, 2015, and expect to select proposals in early spring. Selected PIs will be invited to participate in a kick-off workshop at Google shortly after.