As Google celebrates its 20th anniversary this month, the American company inaugurated an artificial intelligence (AI) research center in Paris on Tuesday, September 18. In just a few years, these technologies have spread into all of Google’s products, making it one of the most advanced companies in the sector. Jeff Dean joined the Mountain View firm in 1999 and now leads Google’s artificial intelligence research worldwide. He must also face the growing social debates around the technology.
How is artificial intelligence progressing? What is Google’s strategy?
The discipline is progressing rapidly. We are able to do things that were impossible five, six or seven years ago. Computers are now able to see. We have also made progress in language understanding, speech recognition, translation…
Our strategy is to do fundamental research and then work with Google’s product teams to improve our own services. We publish almost all of our research in order to advance science. We also study applications of artificial intelligence to other fields, such as health or robotics. And we seek to work on important problems whose resolution would make the world a better place.
What do you think are the most promising avenues?
I am particularly enthusiastic about health, because the potential of artificial intelligence there could be enormous. Think about how doctors are trained: they see 20,000 to 30,000 patients over their careers. A machine learning system could learn from countless more interactions than that. Imagine a health system that included tens of millions of patients, 20,000 doctors, ten years of data per patient…
There is an incredible amount of accumulated knowledge that these systems could integrate. What worked? What didn’t? What went wrong? That is the promise of machine learning applied to health, even if it is also a complex and highly regulated field, for very good reasons.
Why open an artificial intelligence laboratory in France?
The French education system and researchers are of very high quality. The possibility of forging partnerships with different research organizations to identify interesting problems is very useful. It is also important to attract talent, and people want to live in Paris because it is a fantastic place.
What are your ethical requirements and how do you implement them?
We want artificial intelligence to be applied in a thoughtful way. We have therefore worked to set out a series of principles: artificial intelligence must be fair and unbiased [in the United States, software for predicting the risk of recidivism among defendants has, for example, been accused of giving higher scores to black people]. It must also be safe. And the data used must respect privacy.
We are not only stating these principles, we are conducting research on how to apply them. One example: if the data used to train software contain biases, you don’t necessarily want to eliminate those examples entirely, because they help the machine recognize recurring patterns; but you can require that each person in category A has the same chance of getting a loan, or anything else, as a person in category B. If there is an imbalance in the data, the algorithm can be adjusted.
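As a minimal sketch of the kind of adjustment Dean alludes to (not Google’s actual method), one simple option is to choose a separate decision threshold per group so that both groups end up with the same rate of positive decisions. The scores, group labels and target rate below are hypothetical.

```python
# Hypothetical post-hoc adjustment: equalize the approval rate between two
# groups by picking a per-group score threshold. Not Google's method; a sketch.
import numpy as np

def per_group_thresholds(scores, groups, target_rate):
    """For each group, pick the score threshold that approves roughly
    `target_rate` of its members, so all groups share the same approval rate."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # The (1 - target_rate) quantile approves about target_rate of the group.
        thresholds[g] = np.quantile(group_scores, 1 - target_rate)
    return thresholds

# Made-up model scores for applicants from groups "A" and "B".
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.5, 0.1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)

thresholds = per_group_thresholds(scores, groups, target_rate=0.3)
decisions = scores >= np.vectorize(thresholds.get)(groups)
for g in ("A", "B"):
    print(g, decisions[groups == g].mean())  # approval rates are now roughly equal
```

This particular choice (equal approval rates) is only one of several possible fairness criteria; which one is appropriate depends on the application.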
Shouldn’t there be legislation or oversight by an independent external committee?
It is up to governments to determine what kind of laws or regulations are relevant. We have published our principles because we are convinced that we must follow them, whether or not laws impose them. This is a delicate issue because the sector is evolving very quickly. It is therefore difficult to design the appropriate regulation. In addition, in many cases, there are already rules in place, for example in medicine.
Google has committed not to use its technologies for autonomous weapons, but do you rule out working with defense altogether?
We want to work with the authorities on many things, in line with our principles. We have drawn a clear line on autonomous weapons. But if we are asked to provide our Gmail email service to the defense department, for example, we would be happy to do so.
Do you believe in the emergence of a superintelligence that transcends the human, or is it a fantasy?
We see a lot of speculation on this subject, and opinions vary enormously: is it possible? Will it happen in three years (in my opinion, that is impossible) or in a hundred years? Some software is trained to do a particular task and can, in some cases, do it better than humans. But one or more major breakthroughs would still be needed to achieve general intelligence (the term for an intelligence capable of acting and reasoning across all fields, like that of humans). And that kind of leap is difficult to predict.
Humans have something unique: if I ask you to do something, you can rely on the world around you and all the things you know how to do to design a way to accomplish this new task. We are not building software today that can do that. I don’t know when we’ll do it.
Autonomous cars have encountered difficulties on the road that seem more complex than expected. Will engineers manage to build cars without a driver?
Yes, they will succeed! In early 2018, Waymo (a subsidiary of Alphabet, Google’s parent company) began testing cars with no driver able to take control in the event of a problem. Approximately 100 cars were driven, with passengers in the back, through the streets of Phoenix. Admittedly, this Arizona city is particularly well suited to autonomous cars, with its wide streets and low number of pedestrians… But the test went well. The autonomous car is not at all a crazy or distant prospect.
How can we create artificial intelligences capable of “providing relevant explanations” for the decisions they make, as one of your principles requires?
This is a central point: artificial intelligence must be accountable to humans. We are therefore conducting research on the “explainability” of algorithms. For example, we have trained software on anonymized patient medical records while building a model that can be interpreted. After reading the documents, this artificial intelligence can state a diagnosis but also point to the extracts from the files used to establish it: a sentence from a consultation two years ago, an insulin level in an analytical result…
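As a toy illustration of this idea (not the system Dean describes), an interpretable model can point back to the record extracts that drive a prediction, for instance by scoring each sentence of a record and reporting the ones that contribute most. All data, labels and sentences below are invented for the example.

```python
# Toy sketch of "explainability": a simple linear model whose per-sentence
# scores let a reviewer see which extracts of a (fictional) anonymized record
# support the prediction. Everything here is made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fictional training snippets labeled 1 = diabetes-related, 0 = not.
snippets = [
    "elevated fasting insulin level noted at last analysis",
    "patient reports frequent thirst and fatigue",
    "routine vaccination administered, no complications",
    "knee pain after minor fall, prescribed rest",
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(snippets), labels)

# New fictional record, split into sentences; rank each sentence by the model's
# score so the extracts supporting the prediction can be shown to a human.
record = [
    "consultation two years ago mentioned persistent thirst",
    "insulin level above reference range in latest analytical result",
    "no issues reported during annual eye exam",
]
scores = model.decision_function(vectorizer.transform(record))
for sentence, score in sorted(zip(record, scores), key=lambda p: -p[1]):
    print(f"{score:+.2f}  {sentence}")
```

A linear bag-of-words model is of course far simpler than what a clinical system would use, but it shows the basic pattern: the model’s evidence is traced back to specific passages rather than presented as an opaque verdict.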
Donald Trump and others accuse Google of being a partisan microcosm, leaning to the left. Is the lack of diversity a real concern?
It is important that all points of view can be heard, on different subjects, especially in artificial intelligence. The people who create these technologies must reflect the diversity of the people who will use them. This principle partly explains why we have opened an artificial intelligence laboratory in Ghana, in Africa. Diversity is not a principle to be dismissed, whether in IT, in engineering recruitment or in politics. It must be reflected in our products, our employees and our actions. Not all areas are at the same point, and there is still a lot of work to be done.