We may have to reframe our mental picture of Captain James T Kirk. Thus far, we've imagined the human entrusted with carrying our exploratory hopes and fears, however fictional they may be, into deep space as a heroic, square-jawed adventurer.
The real Captain Kirk, the ambassador from Earth who will travel to the farthest reaches of the solar system and the galaxy, is much more likely to be an assemblage of microprocessors and intelligent software than a Spandex-clad space captain.
In fact, it's probably crucial that we hand over the deep space swashbuckling to the computers and machines, mostly because humans are too squishy, and too short-lived, to be of much use in such arenas. Dr Steve Chien is the technical group supervisor of the artificial intelligence (AI) group (and, to give him his complete title, senior research scientist in the mission planning and execution section) at Nasa's famed Jet Propulsion Laboratory (JPL) in Pasadena, California. The Irish Times meets him on a miserably wet morning at Convention Centre Dublin on the north quays, as he's in town to speak to the National Analytics Summit, organised by the Analytics Institute.
Quashing my Kirkian dreams with a gentle smile and a soft voice, he says: “When I talk about how AI is essential to the search for life beyond Earth, it’s more about how hard it is to send humans anywhere. Humans are very sensitive, very fragile. Sending humans even to low Earth orbit requires an amazing endeavour; sending humans to the Moon required an enormous endeavour; sending humans to Mars is even more challenging. And those are all places where we’re certain that there isn’t life.
“If you look within the solar system, there are several places where we believe that there could be life. Basically, the strategy is to look for water, for liquid water. One of the most promising places is Europa, a moon of Jupiter. And you can’t really send people there because the Jupiter radiation is very harsh, plus you need a very long mission to go there, and so we have to send robots to look for life. When we send robots to look for life, we can’t really manage them: because of the distances involved, the communication is very difficult. And so the robots have to be smart enough to look on their own. And that’s what I really mean when I say we need AI to look for life.”
Go-to plot device
Artificial intelligence is as closely entwined with popular science fiction as Shatner's staccato line delivery or Leonard Nimoy's pointy ears. It's the go-to plot device for generations of writers and script editors looking to create some kind of existential threat for the crew of a starship, or humanity itself, to face. Heck, Arnold Schwarzenegger wouldn't have a film career without AI.
The thing is, AI isn’t really like that. We cannot, at least not yet, create machines or computers that can think like a human, or even outsmart one in a general sense. We probably will do one day, and that day will have ramifications for the human race as profound as the discovery of fire, electricity, the power of the split atom, and climate change. Until then, computers and software using AI can be – pretty easily – taught how to be better than humans at one or two specific tasks. We can teach an artificially intelligent computer to beat us at chess or Jeopardy, for example. Teaching one how to successfully cross the road is another matter.
"There's a central question of how smart the machines need to be, and that's poking at the edges of AGI, or artificial general intelligence. And that's what people talk about when they think of characters such as Data, from Star Trek," says Dr Chien. "Something that could interact at a peer level with humans on a broad spread of topics, just like a human could. We don't really need that in space. What we need is a specialised intelligence. The reason being that a smart spacecraft doesn't need to know how to take a bus in Dublin, or it doesn't need to know how to book a flight, it doesn't need to know that if I'm going to Ireland in November, it's likely to be rainy.
"What it does need to know are very specific things, such as how its sensors work; it needs to know about the science that people want it to do. So by no means are we saying, when we send an AI space craft to another world, that it has to be fully intelligent. Let's take an example, which is the M2020 rover that Nasa is sending to Mars next year: it will be very smart, in the sense that it will have unprecedented ability to take images, to target some of its instruments, if activities such as a drive to a particular point takes longer than expected, to then adjust the scheduling of further activities as needed.
“However, from another perspective, it’s quite stupid. It doesn’t know how to talk with a reporter, or hold a press conference, or present its results. It doesn’t know the context of its results. So where we are now, in AI, is we can have these narrow expertises, and in many of those we can significantly outperform humans. The analogy that I make is: why did we start having computers? We started having computers because we wanted to process large numbers, and that’s something that most humans are not very good at, but which computers are exceptionally good at. So we can make these narrow-focus computers very good at their tasks.
"So, in my specific area, which is logistics – planning and scheduling of the space missions – it's about planning and conserving your energy, managing where the spacecraft is. That is something that people can't understand, can't get a grip on, the thousands of variables involved. So if I think about a modern-day logistics company transporting things all over Europe, it has to consider how perishable its products are, it has to think about customs, about weather affecting arrival times, and it has to absorb all of that, which is a lot of information, and make good decisions based on that information. That's a niche that people are less good at.
“However, people will know things, commonsense things, that it’s harder to have the computer know. So, people know about holidays, and that the traffic is going to be horrible. People know about weather patterns. And the challenge is how to make those things meet. So it’s about how do we bring computers towards humans, and how do we bring humans towards computers, and how they best work together. That’s the fundamental challenge.”
The biggest task that Dr Chien wants to pass over to his future robotic explorers is recognising that something is important, and being able to turn their sensors and cameras to suit, rather than simply running through a schedule.
"So a great example is that fantastic mission called New Horizons, which was run by a colleague of mine, Alan Stern, who was the principal investigator. They did some amazing things – flew by Pluto, flew by Ultima Thule, [an asteroid since renamed Arrokoth] and did some incredible science, but they pre-planned out how they would fly by those places," says Dr Chien. "They said: 'We're going to take this image, then this image, then this image.' And what we would like in the future for the spacecraft to be smarter, and look for certain things. So it would look for satellites, for moons and moonlets, those are every, very interesting for the scientists, so the spacecraft would search for those, and if it finds them, would take extra images of them.
“If you see plumes, little geysers, well those are a remarkable scientific phenomenon, so again the spacecraft would ‘know’ to take more and more images of those. And that seems like it would be very easy; that you would just tell the software to do that. But it turns out that it’s actually really complicated: you have to know how the spacecraft is moving, how and where to point, and most of all the spacecraft has to understand ‘That’s a plume.’ These are all things that humans do very well. We walk around the world, and we say ‘That’s a chair’ or ‘That’s rain falling’ but making computers understand these things is not so easy.”
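What "that's a plume" might look like in software can be suggested with a toy Python sketch. It assumes a grayscale frame and a precomputed mask of the body's disc, both hypothetical inputs, and flags bright pixels off the limb as candidates for follow-up imaging; real onboard detection involves far more careful photometry and spacecraft geometry.

```python
import numpy as np

def find_off_limb_bright_spots(frame: np.ndarray, body_mask: np.ndarray,
                               threshold: float = 200.0) -> np.ndarray:
    """Return (row, col) coordinates of pixels that are bright but lie
    outside the body's disc: candidate moonlets or plume tops."""
    candidates = (frame > threshold) & ~body_mask
    return np.argwhere(candidates)

def plan_follow_ups(frame: np.ndarray, body_mask: np.ndarray,
                    max_extra_images: int = 5) -> list:
    """Turn detections into a short list of re-pointing targets,
    capped by a (hypothetical) imaging and downlink budget."""
    spots = find_off_limb_bright_spots(frame, body_mask)
    return [tuple(int(v) for v in xy) for xy in spots[:max_extra_images]]

# Hypothetical 8-bit frame: dark sky, a bright body, one stray off-limb blob.
frame = np.zeros((100, 100), dtype=np.uint8)
body_mask = np.zeros((100, 100), dtype=bool)
body_mask[30:70, 30:70] = True     # the body's disc fills this region
frame[body_mask] = 220             # the body itself is bright...
frame[10, 80] = 250                # ...and so is one off-limb pixel
print(plan_follow_ups(frame, body_mask))   # -> [(10, 80)]
```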
Applications
For all those reaching for the traditional (but erroneous) complaint that all that money spent on the Apollo Moon landings only gave us some pretty pictures and Velcro, there are some significant Earthly applications of the AI on which Dr Chien is working. “There are fantastic terrestrial applications of that technology,” he says. “For example, the excitement in AI right now is that we’re starting to become better and better at these things, and also we’re able to see the kinds of specific niche problems that the computers can be very good at.
"So recognising cat videos on YouTube – they're surprisingly good at that. But independent of that, there are many more things that AI can be used for that are a little more practical. Looking at medical results, for example, and saying: 'That's a cancerous tumour, but that's not.' My wife is a biologist, and it was fascinating for me to learn that for many, many years in the medical industry, humans would look and count the different cells in a sample, and only count a small portion but then extrapolate out to the rest of the sample. I think that artificial intelligence and computer vision can automate those kinds of tests and even do more subtle things, such as interpreting an X-ray, or an MRI. So, we're seeing an explosion of these kinds of applications.
"We want to go under the Ross ice shelf in Antarctica, to help us understand how climate change is affecting things under there. Climate change is affecting everywhere on the Earth, so we need to understand that and study it, and part of that means sending vehicles under the ice in both the Arctic and Antarctic."
Presumably, if one can design a spacecraft that can pilot itself, one could also turn that software to the task of driving us around in an autonomous car? Well, actually, in an echo of the Nasa engineer who once said ‘Putting a man on the Moon was tough, but I wouldn’t have wanted to be the guy who had to make Concorde work’, Dr Chien says that, oddly, space is in some ways easier than O’Connell Street: “I would actually say that making a self-driving car and navigating around the streets of, say, Dublin, is quite a bit more challenging than an autonomous spacecraft. They are challenging in different ways.
"The space craft is more challenging because the environment is somewhat unknown. If we send a spacecraft to Europa, we're going there because we know very little about it. But we're fairly certain that it won't have to deal with the variables of a bunch of crazy people driving about on the street, dealing with rain. For instance, where I'm from in Los Angeles, when it rains it's crazy because people aren't used to the rain. When I first moved there – and I'm from the midwest, where it rains all the time – I said these Californians were crazy; they don't know how to drive in the rain. And now I'm one of them.
“So, while an autonomous car is, I think, inevitable, it’s proving to be quite a bit harder than people thought. It’s hard because when someone else doesn’t play by the rules you still have to behave and not run into them. If a pedestrian dashes in front of the car, you still want to stop; you don’t want to say ‘Well, there’s no crosswalk there; they weren’t supposed to do that.’”
Wonder
There’s surely one shortcoming to sending out probes and robots, however clever we can make them, to explore the universe, and that is wonder, or lack thereof. One could say that the most fundamental part of being human is a sense of wonder, of curiosity. Without it, perhaps we’d never have come down from the trees, or left the rift valley, or crossed the first sea. Do we have to surrender that sense, if we start putting robots into space, instead of a human on the Moon?
Dr Chien says no, not necessarily; that is one of the great challenges of creating these exploring AIs. “It depends on what you see as the vision for space exploration. There’s what I would call a bottom-up version, and a top-down version. In the bottom-up version, AI is just a tool. So right now we have to write all the software to make the rover, for example, operate. We communicate with it only two or three times a day. So imagine getting your teenage children to do things for you, but you can only talk to them two or three times a day. You might imagine that could be challenging.
“A rover is, I guess, not quite as rebellious as a teenager, but it doesn’t have common sense. So currently, the way we operate most space missions is we tell them very specifically what to do, and if a rover finishes something ahead of schedule, it just sits there. And these are very expensive space missions, so we don’t want that; what we want is to give it enough common sense that if it completes a task early, it then goes on to the next thing, and then the next. Telling it to go on to the next thing is the easiest part of it. So maybe we tell it to take more images.
“Better yet, let’s teach it to take more images, then analyse those, decide that they’re blurry, or washed out, or just rubbish images in some way, and then take more images, better ones, and decide to keep those and transmit those instead. Send the interesting ones. So that’s the bottom-up version, with the AI as a tool, and I think we see a lot of the same kind of scenarios in industry.
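A rough sketch of that keep-or-retake loop, in Python, might look like the following. The variance-of-the-Laplacian focus measure is a standard blur proxy in computer vision, not Nasa flight software, and the threshold is an invented placeholder that would need calibrating to a real camera.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Variance of a 5-point discrete Laplacian: low values suggest a
    blurry, washed-out or featureless frame."""
    lap = (
        -4.0 * img[1:-1, 1:-1]
        + img[:-2, 1:-1] + img[2:, 1:-1]
        + img[1:-1, :-2] + img[1:-1, 2:]
    )
    return float(lap.var())

def triage(images, blur_threshold=50.0):
    """Split captured frames into keepers (worth transmitting) and
    rejects (candidates for a retake)."""
    keep, retake = [], []
    for idx, img in enumerate(images):
        (keep if sharpness(img) >= blur_threshold else retake).append(idx)
    return keep, retake

# Hypothetical frames: one full of detail, one with none at all.
rng = np.random.default_rng(0)
sharp_frame = rng.random((64, 64)) * 255.0   # high-frequency detail
flat_frame = np.full((64, 64), 128.0)        # featureless, zero Laplacian
print(triage([sharp_frame, flat_frame]))     # -> ([0], [1])
```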
“From the other side, you need to look at why we are there. Why are we exploring things? We want to increase knowledge. I often say that at JPL and Nasa, we’re in the knowledge business, not the data business. You can return lots of data, but we really want the data that changes our beliefs about the world and the galaxy, because that’s what makes it a scientific discovery. And so that’s a much more challenging problem, but we are already working on things like that.
“Interpreting all that data, however, is very hard. What we want to do is search intelligently, and we do that with things we call a white list and a black list. A white list contains specific things you’re looking for. So it might include sulphur, because sulphur is a sign of life, for example. A black list is where you’re expecting to see certain things, but if we see something else, something we weren’t expecting to see, that’s interesting. So you search, you see all of the things you expect, but – whoa! – you spot something you weren’t ready for, like maybe a Martian runs past the lens. That is the big step towards intelligence – recognising that which is unusual, and exciting. If all of my images are of smooth sand, let’s say, and here’s a patch of ruffled sand, well that’s interesting, and the inverse applies too. Scientific discoveries are made that way. That’s how we discovered that there are liquid lakes trapped under kilometres of ice in Antarctica, because there were patches of weirdly smooth ice on top of the glaciers.
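Stripped to its essentials, the white list/black list idea is a small decision rule. The sketch below uses invented signature names purely to illustrate the logic Dr Chien outlines; it is not a real detection pipeline, and the interesting case is precisely whatever matches neither list.

```python
# Invented signature names; only the decision logic is the point.
WHITE_LIST = {"sulphur", "liquid_water"}        # actively sought signatures
BLACK_LIST = {"basalt", "dust", "smooth_sand"}  # expected, routine findings

def classify(detection: str) -> str:
    if detection in WHITE_LIST:
        return "priority: target signature, schedule follow-up observations"
    if detection in BLACK_LIST:
        return "routine: log it and move on"
    # Neither sought nor expected: the 'whoa!' case worth extra imaging.
    return "anomaly: flag for extra images and downlink"

for d in ["basalt", "sulphur", "ruffled_sand"]:
    print(d, "->", classify(d))
```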
“When you talk about curiosity, and you talk about wonder, the first steps to that are being able to spot what’s unusual, what’s different. And that’s what we’re trying to teach our AI right now.”
As long as they call the software Kirk, right? I mean, they have to, surely...