Juneleung Chan


NOTE:
  • Content copyright belongs to the original author.
  • This page only archives personal study notes and will not be used for any commercial purposes.


The Worlds I See



1

Unlikely as it seemed at the time, I found my way to the furthest reaches of that world in the years that followed. Not aerospace, but the science of the mind, and the still-nascent study of intelligent machines.

I was certain the future of AI would depend on institutions far beyond science, including education, activism, and, of course, government.

2

He was possessed by an unrepentant adolescent spirit and didn’t reject the obligations of adulthood so much as he seemed genuinely unable to perceive them, as if lacking some basic sense that comes naturally for everyone else.

It must have been obvious even from a distance that the traditional hierarchy of parent and child was absent between us, as he carried himself more like a peer than a father, unburdened by paternal neuroses.

My intuitions had run aground, robbing me of the fluency I’d displayed in my math classes and confounding every attempt at understanding.

But even as this new skill seemed to pour out of me, I was humbled—thrilled, really—by how much more there was to learn. In physics, I didn’t see complexity so much as I saw grandeur. It offered the elegance and certainty of mathematics, the tangibility of chemistry, and, most arresting of all, something I never imagined the sciences could offer: a sense of humanity that felt as poetic as the literature I was raised on. Its history read like drama, rich and vibrant, reaching across centuries, and I couldn’t get enough of it.

3

My life had begun in the East, a hemisphere’s diameter from the science I would grow to love. It was a gulf that couldn’t have been much wider, in terrestrial terms at least, as the doors of our 747 swung shut, muffling the engines and triggering a slow lurch across the tarmac. Our destination, unbeknownst to either of us, was ground zero of a young field still struggling to establish a fraction of the legitimacy enjoyed by traditional disciplines, but destined, ultimately, to herald a revolution.

a loose community of scientists and engineers in the U.S. and U.K.—from Cambridge to Boston to Northern California—were already decades into a scientific quest that would one day rank among the most profound in the history of our species.

Finally, there were the people themselves. Rowdiness and irreverence seemed to be the norm among the kids. Even from behind a still-towering language barrier, I knew I’d never seen students talk to teachers the way Americans did. But what astonished me most was how the informalities appeared to cut both ways. Their dynamic was often adversarial, but jocular as well. Even warm.

4

thanks to tools like the Hubble, we—as a species—are getting our first glimpses. That’s why I’m showing you this image on our last day—because I don’t want you to ever forget this feeling. Stay curious. Stay bold. Be forever willing to ask impossible questions.

Before long, a period known as an “AI winter” had set in—a long season of austerity for a now unmoored research community. Even the term “artificial intelligence” itself—seen by many as hopelessly broad, if not delusional—was downplayed in favor of narrower pursuits like decision-making, pattern recognition, and natural language processing, which attempted to understand human speech and writing. “Artificial intelligence” seemed destined to remain the domain of science fiction writers, not academics. Just as the history of physics follows a sinusoid-like pattern as the abundance of discovery ebbs and flows, AI was revealing itself to be a temperamental pursuit.

5

Two years had passed since that life-altering moment in a darkened lab—those crackling and whooshing sounds that yielded my first glimpse of the inner workings of a mind other than my own. Two years of a pursuit that had only just begun. I was intrigued and challenged by the art of engineering, but I didn’t want to be an engineer. And although I was enthralled by the mysteries of neuroscience, I didn’t want to be a neuroscientist. I wanted to draw on both while constrained by neither.

6

Evolution bore down on a single photosensitive protein for a half-billion years, pushing relentlessly as it blossomed across eons into an apparatus so exquisite that it nearly defies comprehension.

That technology finally arrived in the form of neuroscientific tools like the EEG and functional magnetic resonance imaging, or fMRI, arming researchers with higher grades of clinical precision than ever before. Thorpe’s paper was among the most notable, but it was far from the only one. Equally important was the work of the MIT cognitive neuroscientist Nancy Kanwisher and her students, who used fMRI analysis to identify a number of brain regions associated with precisely the kind of processing that was necessary to deliver the fast, accurate feats of perception that researchers like Thorpe and Biederman had uncovered. Whereas EEG measures electrical impulses across the brain, which are exceedingly fast but spread diffusely over its surface area, fMRI measures blood oxygen level changes when specific patches of neurons are engaged.

7

WordNet was a revelation. It provided an answer, or at least a hint, to the questions that had consumed so much of my waking life in the nearly four years since stumbling upon Biederman’s number. It was a map of human meaning itself, uncompromising in both the scope of its reach and the veracity of its contents. I didn’t yet know how computer vision would achieve the scale Biederman imagined, but now, at least, I had proof such an effort was conceivable. There was a path before me for the first time, and I could see the next step.

Whether I was on the verge of a breakthrough or a failure, I was excited. Science may be an incremental pursuit, but its progress is punctuated by sudden moments of seismic inflection—not because of the ambitions of some lone genius, but because of the contributions of many, all brought together by sheer fortune. As I reflected on all the threads of possibility that had had to align to spur this idea, I began to wonder if this might be just such a moment.

“Yeah, unfortunately. A little too boring for the undergrads we hired. And it was hardly meaningful research, so no PhD student wanted to touch it.”

“Isn’t ‘out there’ exactly the kind of idea you’ve been looking for?”

I’m sorry, but this just doesn’t make any sense.

The more I discussed the idea for ImageNet with my colleagues, the lonelier I felt. Silvio’s pep talks notwithstanding, the nearly unanimous rejection was a bad sign at the outset of an undertaking defined by its sheer size; I might need a whole army of contributors, and I couldn’t seem to find a single one. Worst of all, whether or not I agreed with them, I couldn’t deny the validity of their criticisms.

There was no escaping the fact that algorithms were the center of our universe in 2006, and data just wasn’t a particularly interesting topic. If machine intelligence was analogous to the biological kind, then algorithms were something like the synapses, or the intricate wiring woven throughout the brain. What could be more important than making that wiring better, faster, and more capable? I thought back to the attention our paper on one-shot learning had enjoyed—the instant conversation-starting power of a shiny new algorithm richly adorned with fancy math. Data lived in its shadow, considered little more than a training tool, like the toys a growing child plays with.

But that was exactly why I believed it deserved more attention. After all, biological intelligence wasn’t designed the way algorithms are—it evolved. And what is evolution if not the influence of an environment on the organisms within it? Even now, our cognition bears the imprint of the world inhabited by countless generations of ancestors who lived, died, and, over time, adapted. It’s what made the findings of Thorpe and Biederman, and even our own lab at Caltech, so striking: we recognize natural imagery nearly instantaneously because that’s the kind of sensory stimuli—the data, in other words—that shaped us. ImageNet would be a chance to give our algorithms that same experience: the same breadth, the same depth, the same spectacular messiness.

however, was his need to find a new advisor for an exceptionally bright student named Jia Deng. Kai described him as the perfect collaborator: a young mind with engineering talent to spare, eager for a new challenge.

Brainpower aside, Jia’s status as a newcomer to the field caught my attention. His unusual background not only endowed him with engineering skills of a caliber the average computer vision student would be unlikely to have, but spared him the burden of expectations. This was an unorthodox project, if not an outright risky one, and far out of step with the fashions of the field at the time. Jia didn’t know that.

Jia and I watched from the corner of the lab as the row of undergrads produced a steady beat of mouse clicks and key presses. The response to the email we’d sent out earlier in the week had been quick. Wanted: Undergrads willing to help download and label images from the internet. Flexible shifts. $10/hr. It seemed like a fair trade: we’d take a step toward a new age of machine intelligence and they’d get some beer money. It was a satisfying moment, but it didn’t take long for reality to sink in.

Luckily for me, Jia was the kind of partner who reacted to a frustrating problem by thinking harder. Human participation was the costliest part of our process, both in terms of time and money, and that’s where he began his counterattack: making it his personal mission to reduce that cost to the absolute minimum.

“Looks like we’ve hit a bit of a speed bump. Uh … yep. Google’s banned us.”

More important, the real argument against automating the labeling process wasn’t technological but philosophical.

“The trick to science is to grow with your field. Not to leap so far ahead of it.”

It was a clever name, taken from the original Mechanical Turk, an eighteenth-century chess-playing automaton that toured the world for years as both a marvel of engineering and a formidable opponent, even for experienced players. The device was actually a hoax; concealed in its base was a human chess master, who controlled the machine to the delight and bewilderment of its audiences.

Centuries later, the emerging practice of crowdsourcing was predicated on the same idea: that truly intelligent automation was still best performed by humans. Amazon Mechanical Turk, or AMT, built a marketplace around the concept, allowing “requesters” to advertise “human intelligence tasks” to be completed by contributors, known as “Turkers,” who could be anywhere in the world. It made sense in theory and seemed to promise everything we wanted: the intelligence of human labeling, but at a speed and scale on par with that of automation. Amusingly—and quite perceptively—Amazon called it “artificial artificial intelligence.”

With completion close at hand, we no longer had to use our imaginations; for the first time, it was obvious to everyone that we were building something worth sharing with the world.

As a scientist, however, the decision was much simpler. I was part of a young, fast-evolving field poised to change the world, maybe within my lifetime, and the people I met at Stanford believed that as sincerely as I did. Princeton felt like home, but I couldn’t deny that Stanford seemed like an even more hospitable backdrop for my research. In fact, the more I thought about it, the more I worried that a place like “home” might be too comfortable for times like these. Moving somewhere new appealed to me precisely because it wasn’t comfortable. It felt uncertain—maybe even risky—and I needed that.

8

As in many of our experiments from the era, the accuracy of the algorithms we used was spotty and much work remained to be done—even simple image recognition was still nascent, after all—but the rough edges only heightened the spirit of adventure that gripped us. Our work felt daring and forward-looking, unrefined but provocative. Much of it was conceptually simple, too.

In place of the previous flight’s manic thoughts and burning questions was something unexpected. It wasn’t quite serenity, but rather a dawning sense of awareness. Reflection. I was content to sit in silence this time, from takeoff till landing, with a single thought reverberating in my head: history had just been made, and only a handful of people in the world knew it.

9

Meanwhile, a new generation of students had arrived, their fidgety eagerness contrasting endearingly with the veterans’ poise. Thanks to ImageNet’s success, our lab had become a magnet for a particular kind of young thinker. As the first generation of students to come of academic age in this era of newly revitalized AI, they enjoyed a rare privilege. They were old enough to recognize history in the making, but young enough to catch it at the dawn of their careers.

Each of them followed the news, online, on television, and in the buzz they’d overhear as they walked the halls or talked to their professors. It all pointed to a future that seemed to be arriving decades ahead of schedule, and one that offered them more than any previous generation could have expected. For the first time, the highest ambition of a computer vision student wasn’t one of a handful of coveted faculty positions scattered across the country, but a path into the technology industry, whether a job with a start-up or one of the giants.

It was an uncommonly exciting prospect in a world like ours, and maybe even a lucrative one. But our actions suggested a simpler motivation, even among the rookies: that we’d never been more eager to explore, the unknown stretching far over the horizon. We were possessed by an especially ambitious brand of creativity, the kind that makes for manic days and sleepless nights. So, while the industries of the world surely had their own plans for ImageNet and the many applications they’d no doubt wring out of it, we knew that was their path, not ours. The North Star was still out there. We weren’t yet done with the science.

With or without the time constraints, I found this ability captivating. Photographs may be still, but we excel at extracting the motion frozen within them, from the grand and sweeping to the nearly imperceptible, and all with impressive acumen. We naturally consider the angle of bodies, arms, and legs, and instantly sense where they came from and where they’re going; speed and force, weight and balance, energy and potential. We imagine the circumstances leading to the moment the picture captures and the outcome that may result, as in the fraction of a second following a photograph of a skateboarder leaping off a curb, or the lifetime that follows an image of a young couple exchanging wedding vows.

Even intent can be inferred. We can write volumes about the tension we sense in a figure’s pose, the proximity of one person to another, or something as simple as the angle of an eyebrow. It’s often more than enough to conclude who we’re looking at, how they relate to one another, and what they want. An impatient boss looms over an overworked employee. A sympathetic parent helps a struggling child. Close friends. Complete strangers. Affection or anger. Work or play. Safety or danger.

Language and vision are very different things. The fundamental unit of an image is the “pixel”—a now common term that began as a contraction of “picture element”—an almost imperceptible dot capturing the color at a single tiny point within a scene. It can take hundreds of pixels, if not thousands, or more, to depict anything meaningful. The phones in our pockets capture massively detailed images composed of tens of millions of such points. But pixels themselves tell us essentially nothing about an image when evaluated individually. The job of a vision algorithm, whether the gray matter in our skulls or the silicon in our machines, is to group these pixels into ever-larger regions of a two-dimensional image, then somehow scan for patterns within them that correspond to the three-dimensional features of the real world: space, volumes, surfaces, textures, and the like.
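
A personal note, not from the book: the passage above describes a vision algorithm's job as grouping pixels into regions and scanning them for patterns. Below is a minimal Python sketch of that idea, assuming nothing beyond NumPy; the toy image, the vertical-edge kernel, and the convolve2d helper are illustrative inventions of mine, not anything from the text.

# A single pixel is just a number, but sliding a small pattern detector
# (here, a vertical-edge filter) over groups of neighboring pixels starts
# to reveal structure in the image.
import numpy as np

# Toy 6x6 grayscale "image": a dark left half and a bright right half.
image = np.array([
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
], dtype=float)

# A 3x3 vertical-edge filter: it responds when pixels on the right side of
# a neighborhood are brighter than pixels on the left side.
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, k):
    """Slide the filter over every 3x3 patch and record its response."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + kh, j:j + kw]   # a small group of pixels
            out[i, j] = np.sum(patch * k)     # pattern-match score
    return out

response = convolve2d(image, kernel)
print(response)  # large values mark where the dark-to-bright edge sits

Run as written, this prints a 4x4 grid whose large values trace the boundary between the dark and bright halves: no individual pixel carried that information, but small groups of them did.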

“Why not?” the other student replied. “If it were intelligent enough to converse at a human level—I mean real human conversation; you know, like the way we’re talking right now—who’s to say there wouldn’t be potential for something like a romantic connection?”

“I don’t know … It just sounds a little ridiculous to me.”

Today we were discussing Superintelligence, a provocative tome by Oxford philosopher Nick Bostrom exploring the future of AI. The book had become an unexpected mainstream success after figures like Bill Gates and Elon Musk tweeted both their praise for it and their fears about its implications, reviving the age-old sci-fi cliché of an impending showdown between man and machine. Our conversation had been appropriately eclectic, spanning killer robots, the potential for subjective consciousness within algorithms, and, in the final moments, the idea of falling in love with a computer. But even the afternoon’s most provocative detours carried a weight I wouldn’t have expected in previous years. It’s hard to dismiss talk of the future when it suddenly seems to be arriving so fast.

It was the birth of an entirely new paradigm, much as the early years of the twentieth century were for physics. I was reminded of the stories that captured my imagination as a teenage girl, daydreaming about life as a physicist in those heady days, trying to conjure the mystery and awe those early pioneers must have felt. It was hard not to envy them, their view of reality elevated—so radically, and so suddenly—by an awakening to the mysteries of the quantum world and the relativistic majesty of the cosmos. They were born at just the right time and in just the right places to receive some of history’s most breathtaking gifts. It didn’t feel like hyperbole to wonder if this modern incarnation of neural networks was our generation’s equivalent.

Even then, though, there were reasons to acknowledge the future wouldn’t be a purely poetic one. Among the more jarring harbingers of change was the transformation of academic conferences related to AI. They’d been modest affairs for decades, attended exclusively by professors, researchers, and students, blissfully free of media attention and endearingly cash-strapped. Corporate sponsors were rare, generally limited to academic publishers like Springer, and relegated to a few long bench desks in the corner of an exhibition hall.

A hunger gripped the field as the demand for more set in. More layers to make neural networks deeper and more powerful. More silicon to speed up the training process and make ever-larger networks feasible to deploy. And, of course, more data. More imagery, more video, more audio, more text, and anything else a network might be trained to understand. More of everything.

It was exciting to think about the capabilities this newly organized data might enable, but harrowing as well; in my own lab, we’d already seen that more was always hidden in the stuff than we’d realized. It was never just imagery, or audio, or text—data allowed a model to form a representation of the world, and bigger data meant more powerful, nuanced representations. Relationships, connections, and ideas. Truths and falsehoods. Insights and prejudices. New understandings, but also new pitfalls. The deep learning revolution had arrived, and none of us were prepared for it.

In the meantime, our lab’s research agenda was showing a voraciousness of its own; no matter how much we achieved, each new publication seemed to spawn ten follow-on ideas that someone, whether a postdoc or a first-year grad student, was willing to pick up and run with. That’s exactly how I liked it, even if it often felt overwhelming.

I wondered, in fact, if the true value of the North Star as a metaphor wasn’t just its ability to guide but the fact that its distance remains perpetually infinite. It can be pursued until the point of exhaustion, the object of a lifetime’s obsession, but never be reached. It’s a symbol of the scientist’s most distinctive trait: a curiosity so restless that it repels satisfaction, like opposing magnets, forever. A star in the night, a mirage in the distance, a road without end. This, I realized, was what AI was becoming for me.

10

Arnie and I envisioned a technology meant to fill a space with smart, reliable awareness, but defined by its unobtrusiveness. Unlike human auditors, our technology would blend into the background discreetly, keeping a silent watch and speaking up only when it sensed danger. We called it “ambient intelligence.”

My work with Arnie taught me two essential lessons: that the greatest triumphs of AI wouldn’t merely be scientific, but humanistic as well, and that achieving them would be impossible without help.

11

The world was becoming a surreal place. My colleagues and I had spent our careers exploring the science of AI, but we were suddenly confronted by something like—I didn’t have precisely the right word for it—the phenomenon of AI. For all the mysteries posed by the technology, its suddenly growing interactions with industries and governments, journalists and commentators, and even the public at large were every bit as complex. After decades spent in vitro, AI was now in vivo. It was restless, hungry, and eager to explore. And although I hesitate to liken it too explicitly to a living organism (our field’s history is replete with attempts at anthropomorphization that are more misleading than insightful), it had undeniably evolved into something new.

To the ears of researchers like us, the new term sounded a bit superfluous. But it was catchy, and it made the ultimate ambitions of our field clear to outsiders. And it positioned DeepMind as an unusually bold player in an already competitive ecosystem.

Where all of this would lead was anyone’s guess. Our field had been through more ups and downs than most; the term “AI winter” is a testament to its storied history of great expectations and false starts. But this felt different. As the analysis of more and more pundits took shape, a term was gaining acceptance from tech to finance and beyond: “the Fourth Industrial Revolution.” Even accounting for the usual hyperbole behind such buzz phrases, it rang true enough, and decision makers were taking it to heart. Whether driven by genuine enthusiasm, pressure from the outside, or some combination of the two, Silicon Valley’s executive class was making faster, bolder, and, in some cases, more reckless moves than ever. We were all about to find out what that philosophy would yield.

Words continued to fail. “Phenomenon” was too passive. “Disruption” too brash. “Revolution” too self-congratulatory. Modern AI was revealing itself to be a puzzle, and one whose pieces bore sharp edges. Nevertheless, as disturbing as it was to realize, this growing sense of danger was also the kind of thing scientists are wired to appreciate. It stoked a different form of curiosity in me, uncomfortable but compelling. I just needed a way to see it up close.

AI was becoming a privilege. An exceptionally exclusive one.

Since the days of ImageNet it had been clear that scale was important, but the notion had taken on nearly religious significance in recent years. The media was saturated with stock photos of server facilities the size of city blocks and endless talk about “big data,” reinforcing the idea of scale as a kind of magical catalyst, the ghost in the machine that separated the old era of AI from a breathless, fantastical future. And although the analysis could get a bit reductive, it wasn’t wrong. No one could deny that neural networks were, indeed, thriving in this era of abundance: staggering quantities of data, massively layered architectures, and acres of interconnected silicon really had made a historic difference.

What did it mean for the science? What did it say about our efforts as thinkers if the secret to our work could be reduced to something so nakedly quantitative? To what felt, in the end, like brute force? If ideas that appeared to fail given too few layers, or too few training examples, or too few GPUs suddenly sprung to life when the numbers were simply increased sufficiently, what lessons were we to draw about the inner workings of our algorithms? More and more, we found ourselves observing AI, empirically, as if it were emerging on its own. As if AI were something to be identified first and understood later, rather than engineered from first principles.

Granted, it was possible that more engineering might help. A new, encouraging avenue of research known as “explainable AI,” or simply “explainability,” sought to reduce neural networks’ almost magical deliberations into a form humans could scrutinize and understand. But it was in its infancy, and there was no assurance it would ever reach the heights its proponents hoped for. In the meantime, the very models it was intended to illuminate were proliferating around the world.

Even fully explainable AI would be only a first step; shoehorning safety and transparency into the equation after the fact, no matter how sophisticated, wouldn’t be enough. The next generation of AI had to be developed with a fundamentally different attitude from the start. Enthusiasm was a good first step, but true progress in addressing such complex, unglamorous challenges demanded a kind of reverence that Silicon Valley just didn’t seem to have.

Academics had long been aware of the negative potential of AI when it came to issues like these—the lack of transparency, the susceptibility to bias and adversarial influence, and the like—but given the limited scale of our research, the risks had always been theoretical. Even ambient intelligence, the most consequential work my lab had ever done, would have ample opportunities to confront these pitfalls, as our excitement was always tempered by clinical regulations. But now that companies with market capitalizations approaching a trillion dollars were in the driver’s seat, the pace had accelerated radically. Ready or not, these were problems that needed to be addressed at the speed of business.

As scary as each of these issues was in isolation, they pointed toward a future that would be characterized by less oversight, more inequality, and, in the wrong hands, possibly even a kind of looming, digital authoritarianism. It was an awkward thought to process while walking the halls of one of the world’s largest companies, especially when I considered my colleagues’ sincerity and good intentions. These were institutional issues, not personal ones, and the lack of obvious, mustache-twirling villains only made the challenge more confounding.

Silicon Valley had never been accused of a lack of hubris, but the era of AI was elevating corporate bluster to new heights, even as our understanding of its pitfalls seemed to grow. CEOs on stages around the world delivered keynote speeches that ranged from the visionary to the clumsy to the downright insulting, promising cars that would soon drive themselves, virtuoso tumor detection algorithms, and end-to-end automation in factories. As for the fates of the people these advances would displace—taxi drivers, long-haul truckers, assembly-line workers, and even radiologists—corporate sentiment seemed to settle somewhere between half-hearted talk of “reskilling” and thinly veiled indifference.

But no matter how thoroughly the words of CEOs and self-proclaimed futurists might alienate the public, the growing deployments of the technology would give people even greater reasons to fear AI. It was an era of milestones, and the darkest imaginable kind was approaching. For the first time in the history of our field, blood would be shed.

12

“Oh, you mean, um … human-centric AI?”

“Human-centered,” I replied with a laugh. “At least, I think. Still working on the name, too.”

“Hmm…” The student scratched his head. “That sounds interesting, but it wasn’t what I expected to hear in a class like this. I guess it makes me wonder … what does ethics and society have to do with writing code and stuff?”

The Gates Computer Science Building feels both grand and humble to me. With its high ceiling and marble floors, its lobby echoes like a museum, and its vaulted, theater-sized classrooms pay fitting homage to the power of ideas. But I’ve come to know best the cramped hallways of its upper floors, where my lab is located, along with SAIL. Now, the building is home to something new, in a refurbished wing on the ground floor: the headquarters of the Stanford Institute for Human-Centered Artificial Intelligence, or Stanford HAI.

I’m heartened by the symbolism of such an explicitly humanistic organization in the heart of one of the nation’s oldest computer science departments. But Stanford HAI’s ambition—to become a hub for cross-disciplinary collaboration—is more than poetic, and it’s already becoming real. On any given day, I’m bound to run into someone like Dan Ho from the Stanford Law School; Rob Reich, a professor of political science; Michele Elam, a professor of the humanities; or Surya Ganguli, a string theory physicist turned computational neuroscientist. Each readily agreed to become a part of HAI, working directly with students and researchers in AI, exploring the intersections between our fields and sharing the expertise they’ve gained over the course of their careers and lives. We’ve even attracted partners from beyond the campus entirely, including Erik Brynjolfsson, the renowned MIT economist, who moved across the country to help HAI better understand AI’s impacts on jobs, wealth, and the concentration of power in the modern world. It sometimes feels as if the whole discipline is being born anew, in a more vibrant form than I could have imagined even a few years ago.

One partnership in particular has done more than any other to transform my thinking about what’s possible. When I first met John Etchemendy a decade earlier, he was the university provost, and I was an East Coast transplant obsessively focused on the still-unfinished ImageNet. We became neighbors and friends in the years since, and my regard for the sheer depth of his intellect as a scholar has only grown. But over many years as an administrator, John developed an expertise on the inner workings of higher education as well—the good, the bad, and the downright Kafkaesque—and knew exactly what it’d take to bring HAI’s unlikely vision to life. Not merely to talk about human-centered AI, or to debate its merits, but to build it, brick by brick. So when he agreed to partner with me as a codirector of Stanford HAI, I knew we actually had a shot at making it work.

Among my favorite achievements of our partnership is the National Research Cloud, or NRC, a shared AI development platform supported entirely by public funding and resources, rather than by the private sector. Its goal is to keep AI research within reach for scholars, start-ups, NGOs, and governments around the world, ensuring that our field isn’t forever monopolized by the tech giants, or even universities like ours.

Two years before, the NRC was nothing more than an idea. And without Stanford HAI, that’s likely all it ever would have been. But in the hands of a more diverse team, including experts in law and public policy, it became a mission. John, in particular, called in a career’s worth of favors, recruiting universities across the country to form a coalition as impressive as any I’d ever seen in academia, and kicked off a flurry of ideas, suggestions, cross-country flights, and debate that soon became a fully realized legislative blueprint on its way to Capitol Hill. We still have a long way to go to make AI a truly inclusive pursuit, but achievements like the NRC are significant steps in the right direction.

The future of AI remains deeply uncertain, and we have as many reasons for optimism as we do for concern. But it’s all a product of something deeper and far more consequential than mere technology: the question of what motivates us, in our hearts and our minds, as we create. I believe the answer to that question—more, perhaps, than any other—will shape our future. So much depends on who answers it. As this field slowly grows more diverse, more inclusive, and more open to expertise from other disciplines, I grow more confident in our chances of answering it right.

In the real world, there’s one North Star—Polaris, the brightest in the Ursa Minor constellation. But in the mind, such navigational guides are limitless. Each new pursuit—each new obsession—hangs in the dark over its horizon, another gleaming trace of iridescence, beckoning. That’s why my greatest joy comes from knowing that this journey will never be complete. Neither will I. There will always be something new to chase. To a scientist, the imagination is a sky full of North Stars.


@juneleung