Douglas Adams, in The Restaurant at the End of the Universe, tells the story of the Golgafrinchans. The people of planet Golgafrincham, the story goes, figured out how to get rid of an entire useless third of their population by duping them into thinking the planet was doomed and that they were eligible for the first ship out. This group was, apparently, designated by profession: Doctors, teachers, and (presumably) writers of humorous science fiction were deemed worthy to remain; telephone sanitizers, hairdressers, and jingle writers were shanghaied.
Sometimes, remembering this story, I wonder whether this field of UX—to which I’ve given my professional life—would qualify me for the ship. After all, we create no shelter, food, or clothing for anyone; our work rarely inspires anyone to the point of tears (unless they be tears of frustration); and I’ve never met a 6-year-old who wants to be one of us when they grow up.
Despite this, I consider us a pretty useful lot. Indeed, lately I have begun to believe that UX might literally save the world—possibly fairly soon. To be sure, it might happen in such a way that it will never be known. (Perhaps it is happening right now. I certainly hope so.)
Serious thought about the end of humanity at the hands of computers has crescendoed recently. Stephen Hawking and Elon Musk have both pronounced it a real possibility, and last year's open letter to the International Joint Conference on Artificial Intelligence, signed by them and many others, has been hyped (far out of proportion, when one reads the text) as a warning that our fate may end up in the hands of our machines.
Phil Torres, writing for Salon, concludes, “[I]t’s increasingly important for the public to understand exactly why the experts are nervous about superintelligent machines.” His points are, essentially, that
- A prodigiously capable AI may harm or destroy humanity as an unintended consequence of pursuing goals not fundamentally rooted in human well-being; and
- In contrast to other types of human development efforts, the development of AI affords no room for initial failures or dead ends, because even one could be catastrophic.
As Torres elaborates in his first point, correctly setting a self-aware AI’s priorities may be tricky and consequential. For instance, merely tell it to minimize all human suffering, and it might try eliminating all humans; tell it to maximize our dopamine and serotonin release, and we might all end up like the Soma-addled vegetables in Brave New World.
But, one might say, ’twas ever thus. Across aeons, as humanity has increased its power through its machines, we have always faced the question of what it is, exactly, that we want from all our power.
Among all our various human desires, for which we enlist all our various non-human devices, is there a common element to be distilled and defined? What will ultimately satisfy us, anyway?
The stakes involved in advanced AI development simply focus us unavoidably on this question, which has been there all along. From one perspective, it seems only appropriate that this most powerful tool of all in the pursuit of human happiness should require us to understand what happiness is.
The pursuit of advanced AI is humanity’s going double-or-nothing on the prospect of fulfillment through tools, a prospect we have embraced since we first put flint to tinder. Of course we will have to figure out what fulfillment really means, in language our nonhuman companions in the universe can understand, if we want to enjoy the “double” and avoid the “nothing.”
And who will be the heroes that accomplish this feat, if any can? Who among us is equipped to solve this puzzle? Why, UX practitioners.
We are the ones who professionally, habitually, bring human concerns to the mechanical world and ask that world to accommodate them. We are the ones who consult our fellow humans to ascertain what they want, with an eye to how machines can best meet their priorities. We are the right ones for the job. We have, one might say, the right stuff.
To function for long in our position, between the human and the mechanical, is to develop a faith that the two realms are compatible. A crucial part of that compatibility is the ability of human needs and wants to be decoded from, say, a desire to see ourselves as cool, or to provide for our progeny, and re-encoded into electron positions on silicon chips. The challenge of dealing with advanced AI is different from others we have faced in the past—but quantitatively so, not qualitatively.
Torres’ other problem—of having no room for error and no luxury of prototyping—is, to my mind, questionable. Even the first nuclear bomb, whose makers reportedly described themselves as only 95% sure it would not annihilate the entire earth when used, was prototyped. But let us say that it is a problem, just as Torres proposes. If so, then it is a problem, fundamentally, of design.
No design can anticipate all outcomes of emergent phenomena like intelligence. These phenomena are characteristically chaotic, meaning imperceptibly small variations in initial conditions will recursively magnify to produce substantial variations in outcome. Thus, no one can predict all possible literature from the design of an alphabet.
What one can do instead is design initial conditions that reflect our best understanding of humanity. Possible alphabets (to stick with the metaphor) can be evaluated with respect to things like legibility and pronounceability—in other words, with respect to their human usability. Based on such evaluation, we can at least feel fairly confident that whatever is ultimately written will have maximal relevance, utility, and respect for human beings. Again, UX designers are the right people to do this.
Perhaps it should not be so revolutionary to think of our humble (and, by many accounts, young) profession as so vital to the survival and success of humankind. Our function, if not our job title, has been important for a while now. We might assume that mechanical engineering, for example, was responsible for the wheel, but to me it seems reasonable to think that without UX, a cart would still be a travois.
Without engineers and developers, AI would never exist. But without us, we might end up wishing it didn’t.
Oh, and planet Golgafrincham? Entire remaining population wiped out by a disease contracted from a dirty telephone. And the presumed undesirables they cast off went on, so the story goes, to populate planet Earth.