My college-bound daughter recently asked for my opinion on “the importance of majoring in something technology-based.” I advised her to follow her interests, because ultimately she should work in a field she enjoys while earning a living. In the same breath, I noted that just about every major now has a technology component. Technology affects every facet of life; it is multidisciplinary and the keystone of most subject areas as they are practiced in this digital era. Whichever industry one aspires to join requires a basic understanding of the technology employed within it. Even a student who is not majoring outright in a technology field such as robotics engineering gains an advantage by keeping up with the latest vetted technology in his or her chosen specialty.

Later in the day, this little piece of advice led me to contemplate what the future, and a future on this planet, really means for this next group of college graduates. Do we really have only about 100 years left on Earth, as Stephen Hawking predicts? He’s a really smart guy, but as Amazon’s Jeff Bezos has said, really smart people are not always right. So maybe our forthcoming college graduates and the generations after them are not doomed after all. If we can get our planet back in shape and duck the North Korean missiles, perhaps it will be livable beyond 100 years. Yet there are other existential risks besides climate change and missiles to worry about, such as artificial intelligence (AI). Embodied Intelligence is developing self-taught AI using complex algorithms, and it is not alone; other labs across the nation are working on similar advances. As we have already seen, thanks to DeepMind, machines have the capacity to learn and, in certain domains, to outperform the smartest human beings. Eventually, humans may become a hindrance to a robot society. Why would robots cohabit with a bunch of demanding half-wits when they can ensure a super-smart culture for themselves, by themselves?

The Future of Life Institute (FLI), which boasts an impressive roster of founders and advisory board members, exists to “ensure tomorrow’s most powerful technologies are beneficial for humanity.” The FLI advisory group includes bright folks like Nick Bostrom. For Bostrom, the AI threat is very real: “[i]f we create a machine intelligence superior to our own, and then give it freedom to grow and learn through access to the internet, there is no reason to suggest that it will not evolve strategies to secure its dominance, just as in the biological world” (The Guardian, 2016). Another FLI member, Stuart Russell, whom I had the pleasure of hearing at ASU Law’s most recent annual LSI Conference on Governance of Emerging Technologies & Science, echoes Bostrom. Of super-intelligent AI, Russell notes, “[e]ven when you think you’ve put fences around what an AI system can do, it will tend to find loopholes” (The Guardian, 2016). The troublesome part is that those loopholes cannot all be predicted, so AI software should have built-in safety parameters. The challenge is designing effective ones that don’t come back to bite, or completely extinguish, the human race. It’s a matter of control versus capabilities. We should retain control. But can we retain control if we allow AI’s intellectual capabilities to exceed ours, and if so, how? Should AI’s progress be regulated, and how? Perhaps this is what tomorrow’s graduates should focus on. Thoughts welcome.