The Worlds I See
1
Unlikely as it seemed at the time, I found my way to the furthest reaches of that world in the years that followed. Not aerospace, but the science of the mind, and the still-nascent study of intelligent machines.
I was certain the future of AI would depend on institutions far beyond science, including education, activism, and, of course, government.
2
He was possessed by an unrepentant adolescent spirit and didn't reject the obligations of adulthood so much as he seemed genuinely unable to perceive them, as if lacking some basic sense that comes naturally for everyone else.
It must have been obvious even from a distance that the traditional hierarchy of parent and child was absent between us, as he carried himself more like a peer than a father, unburdened by paternal neuroses.
My intuitions had run aground, robbing me of the fluency I'd displayed in my math classes and confounding every attempt at understanding.
But even as this new skill seemed to pour out of me, I was humbled (thrilled, really) by how much more there was to learn. In physics, I didn't see complexity so much as I saw grandeur. It offered the elegance and certainty of mathematics, the tangibility of chemistry, and, most arresting of all, something I never imagined the sciences could offer: a sense of humanity that felt as poetic as the literature I was raised on. Its history read like drama, rich and vibrant, reaching across centuries, and I couldn't get enough of it.
3
My life had begun in the East, a hemisphere's diameter from the science I would grow to love. It was a gulf that couldn't have been much wider, in terrestrial terms at least, as the doors of our 747 swung shut, muffling the engines and triggering a slow lurch across the tarmac. Our destination, unbeknownst to either of us, was ground zero of a young field still struggling to establish a fraction of the legitimacy enjoyed by traditional disciplines, but destined, ultimately, to herald a revolution.
a loose community of scientists and engineers in the U.S. and U.K., from Cambridge to Boston to Northern California, were already decades into a scientific quest that would one day rank among the most profound in the history of our species.
Finally, there were the people themselves. Rowdiness and irreverence seemed to be the norm among the kids. Even from behind a still-towering language barrier, I knew I'd never seen students talk to teachers the way Americans did. But what astonished me most was how the informalities appeared to cut both ways. Their dynamic was often adversarial, but jocular as well. Even warm.
4
thanks to tools like the Hubble, we, as a species, are getting our first glimpses. That's why I'm showing you this image on our last day, because I don't want you to ever forget this feeling. Stay curious. Stay bold. Be forever willing to ask impossible questions.
Before long, a period known as an "AI winter" had set in: a long season of austerity for a now unmoored research community. Even the term "artificial intelligence" itself, seen by many as hopelessly broad, if not delusional, was downplayed in favor of narrower pursuits like decision-making, pattern recognition, and natural language processing, which attempted to understand human speech and writing. "Artificial intelligence" seemed destined to remain the domain of science fiction writers, not academics. Just as the history of physics follows a sinusoid-like pattern as the abundance of discovery ebbs and flows, AI was revealing itself to be a temperamental pursuit.
5
Two years had passed since that life-altering moment in a darkened lab: those crackling and whooshing sounds that yielded my first glimpse of the inner workings of a mind other than my own. Two years of a pursuit that had only just begun. I was intrigued and challenged by the art of engineering, but I didn't want to be an engineer. And although I was enthralled by the mysteries of neuroscience, I didn't want to be a neuroscientist. I wanted to draw on both while constrained by neither.
6
Evolution bore down on a single photosensitive protein for a half-billion years, pushing relentlessly as it blossomed across eons into an apparatus so exquisite that it nearly defies comprehension.
That technology finally arrived in the form of neuroscientific tools like the EEG and functional magnetic resonance imaging, or fMRI, arming researchers with higher grades of clinical precision than ever before. Thorpe's paper was among the most notable, but it was far from the only one. Equally important was the work of the MIT cognitive neuroscientist Nancy Kanwisher and her students, who used fMRI analysis to identify a number of brain regions associated with precisely the kind of processing that was necessary to deliver the fast, accurate feats of perception that researchers like Thorpe and Biederman had uncovered. Whereas EEG measures electrical impulses across the brain, which are exceedingly fast but spread diffusely over its surface area, fMRI measures blood oxygen level changes when specific patches of neurons are engaged.
7
WordNet was a revelation. It provided an answer, or at least a hint, to the questions that had consumed so much of my waking life in the nearly four years since stumbling upon Biederman's number. It was a map of human meaning itself, uncompromising in both the scope of its reach and the veracity of its contents. I didn't yet know how computer vision would achieve the scale Biederman imagined, but now, at least, I had proof such an effort was conceivable. There was a path before me for the first time, and I could see the next step.
Whether I was on the verge of a breakthrough or a failure, I was excited. Science may be an incremental pursuit, but its progress is punctuated by sudden moments of seismic inflection, not because of the ambitions of some lone genius, but because of the contributions of many, all brought together by sheer fortune. As I reflected on all the threads of possibility that had had to align to spur this idea, I began to wonder if this might be just such a moment.
"Yeah, unfortunately. A little too boring for the undergrads we hired. And it was hardly meaningful research, so no PhD student wanted to touch it."
"Isn't 'out there' exactly the kind of idea you've been looking for?"
I'm sorry, but this just doesn't make any sense.
The more I discussed the idea for ImageNet with my colleagues, the lonelier I felt. Silvio's pep talks notwithstanding, the nearly unanimous rejection was a bad sign at the outset of an undertaking defined by its sheer size; I might need a whole army of contributors, and I couldn't seem to find a single one. Worst of all, whether or not I agreed with them, I couldn't deny the validity of their criticisms.
There was no escaping the fact that algorithms were the center of our universe in 2006, and data just wasn't a particularly interesting topic. If machine intelligence was analogous to the biological kind, then algorithms were something like the synapses, or the intricate wiring woven throughout the brain. What could be more important than making that wiring better, faster, and more capable? I thought back to the attention our paper on one-shot learning had enjoyed: the instant conversation-starting power of a shiny new algorithm richly adorned with fancy math. Data lived in its shadow, considered little more than a training tool, like the toys a growing child plays with.
But that was exactly why I believed it deserved more attention. After all, biological intelligence wasn't designed the way algorithms are; it evolved. And what is evolution if not the influence of an environment on the organisms within it? Even now, our cognition bears the imprint of the world inhabited by countless generations of ancestors who lived, died, and, over time, adapted. It's what made the findings of Thorpe and Biederman, and even our own lab at Caltech, so striking: we recognize natural imagery nearly instantaneously because that's the kind of sensory stimuli, the data, in other words, that shaped us. ImageNet would be a chance to give our algorithms that same experience: the same breadth, the same depth, the same spectacular messiness.
however, was his need to find a new advisor for an exceptionally bright student named Jia Deng. Kai described him as the perfect collaborator: a young mind with engineering talent to spare, eager for a new challenge.
Brainpower aside, Jia's status as a newcomer to the field caught my attention. His unusual background not only endowed him with engineering skills of a caliber the average computer vision student would be unlikely to have, but spared him the burden of expectations. This was an unorthodox project, if not an outright risky one, and far out of step with the fashions of the field at the time. Jia didn't know that.
Jia and I watched from the corner of the lab as the row of undergrads produced a steady beat of mouse clicks and key presses. The response to the email we'd sent out earlier in the week had been quick. Wanted: Undergrads willing to help download and label images from the internet. Flexible shifts. $10/hr. It seemed like a fair trade: we'd take a step toward a new age of machine intelligence and they'd get some beer money. It was a satisfying moment, but it didn't take long for reality to sink in.
Luckily for me, Jia was the kind of partner who reacted to a frustrating problem by thinking harder. Human participation was the costliest part of our process, both in terms of time and money, and that's where he began his counterattack: making it his personal mission to reduce that cost to the absolute minimum.
"Looks like we've hit a bit of a speed bump. Uh ... yep. Google's banned us."
More important, the real argument against automating the labeling process wasn't technological but philosophical.
"The trick to science is to grow with your field. Not to leap so far ahead of it."
It was a clever name, taken from the original Mechanical Turk, an eighteenth-century chess-playing automaton that toured the world for years as both a marvel of engineering and a formidable opponent, even for experienced players. The device was actually a hoax; concealed in its base was a human chess master, who controlled the machine to the delight and bewilderment of its audiences.
Centuries later, the emerging practice of crowdsourcing was predicated on the same idea: that truly intelligent automation was still best performed by humans. Amazon Mechanical Turk, or AMT, built a marketplace around the concept, allowing "requesters" to advertise "human intelligence tasks" to be completed by contributors, known as "Turkers," who could be anywhere in the world. It made sense in theory and seemed to promise everything we wanted: the intelligence of human labeling, but at a speed and scale on par with that of automation. Amusingly, and quite perceptively, Amazon called it "artificial artificial intelligence."
With completion close at hand, we no longer had to use our imaginations; for the first time, it was obvious to everyone that we were building something worth sharing with the world.
As a scientist, however, the decision was much simpler. I was part of a young, fast-evolving field poised to change the world, maybe within my lifetime, and the people I met at Stanford believed that as sincerely as I did. Princeton felt like home, but I couldn't deny that Stanford seemed like an even more hospitable backdrop for my research. In fact, the more I thought about it, the more I worried that a place like "home" might be too comfortable for times like these. Moving somewhere new appealed to me precisely because it wasn't comfortable. It felt uncertain, maybe even risky, and I needed that.
8
As in many of our experiments from the era, the accuracy of the algorithms we used was spotty and much work remained to be done (even simple image recognition was still nascent, after all), but the rough edges only heightened the spirit of adventure that gripped us. Our work felt daring and forward-looking, unrefined but provocative. Much of it was conceptually simple, too.
In place of the previous flight's manic thoughts and burning questions was something unexpected. It wasn't quite serenity, but rather a dawning sense of awareness. Reflection. I was content to sit in silence this time, from takeoff till landing, with a single thought reverberating in my head: history had just been made, and only a handful of people in the world knew it.
9
Meanwhile, a new generation of students had arrived, their fidgety eagerness contrasting endearingly with the veterans' poise. Thanks to ImageNet's success, our lab had become a magnet for a particular kind of young thinker. As the first generation of students to come of academic age in this era of newly revitalized AI, they enjoyed a rare privilege. They were old enough to recognize history in the making, but young enough to catch it at the dawn of their careers.
Each of them followed the news, online, on television, and in the buzz they'd overhear as they walked the halls or talked to their professors. It all pointed to a future that seemed to be arriving decades ahead of schedule, and one that offered them more than any previous generation could have expected. For the first time, the highest ambition of a computer vision student wasn't one of a handful of coveted faculty positions scattered across the country, but a path into the technology industry, whether a job with a start-up or one of the giants.
It was an uncommonly exciting prospect in a world like ours, and maybe even a lucrative one. But our actions suggested a simpler motivation, even among the rookies: that we'd never been more eager to explore, the unknown stretching far over the horizon. We were possessed by an especially ambitious brand of creativity, the kind that makes for manic days and sleepless nights. So, while the industries of the world surely had their own plans for ImageNet and the many applications they'd no doubt wring out of it, we knew that was their path, not ours. The North Star was still out there. We weren't yet done with the science.
With or without the time constraints, I found this ability captivating. Photographs may be still, but we excel at extracting the motion frozen within them, from the grand and sweeping to the nearly imperceptible, and all with impressive acumen. We naturally consider the angle of bodies, arms, and legs, and instantly sense where they came from and where they're going; speed and force, weight and balance, energy and potential. We imagine the circumstances leading to the moment the picture captures and the outcome that may result, as in the fraction of a second following a photograph of a skateboarder leaping off a curb, or the lifetime that follows an image of a young couple exchanging wedding vows.
Even intent can be inferred. We can write volumes about the tension we sense in a figure's pose, the proximity of one person to another, or something as simple as the angle of an eyebrow. It's often more than enough to conclude who we're looking at, how they relate to one another, and what they want. An impatient boss looms over an overworked employee. A sympathetic parent helps a struggling child. Close friends. Complete strangers. Affection or anger. Work or play. Safety or danger.
Language and vision are very different things. The fundamental unit of an image is the "pixel," a now common term that began as a contraction of "picture element": an almost imperceptible dot capturing the color at a single tiny point within a scene. It can take hundreds of pixels, if not thousands, or more, to depict anything meaningful. The phones in our pockets capture massively detailed images composed of tens of millions of such points. But pixels themselves tell us essentially nothing about an image when evaluated individually. The job of a vision algorithm, whether the gray matter in our skulls or the silicon in our machines, is to group these pixels into ever-larger regions of a two-dimensional image, then somehow scan for patterns within them that correspond to the three-dimensional features of the real world: space, volumes, surfaces, textures, and the like.
"Why not?" the other student replied. "If it were intelligent enough to converse at a human level (I mean real human conversation; you know, like the way we're talking right now), who's to say there wouldn't be potential for something like a romantic connection?"
"I don't know ... It just sounds a little ridiculous to me."
Today we were discussing Superintelligence, a provocative tome by Oxford philosopher Nick Bostrom exploring the future of AI. The book had become an unexpected mainstream success after figures like Bill Gates and Elon Musk tweeted both their praise for it and their fears about its implications, reviving the age-old sci-fi cliché of an impending showdown between man and machine. Our conversation had been appropriately eclectic, spanning killer robots, the potential for subjective consciousness within algorithms, and, in the final moments, the idea of falling in love with a computer. But even the afternoon's most provocative detours carried a weight I wouldn't have expected in previous years. It's hard to dismiss talk of the future when it suddenly seems to be arriving so fast.
It was the birth of an entirely new paradigm, much as the early years of the twentieth century were for physics. I was reminded of the stories that captured my imagination as a teenage girl, daydreaming about life as a physicist in those heady days, trying to conjure the mystery and awe those early pioneers must have felt. It was hard not to envy them, their view of reality elevated, so radically and so suddenly, by an awakening to the mysteries of the quantum world and the relativistic majesty of the cosmos. They were born at just the right time and in just the right places to receive some of history's most breathtaking gifts. It didn't feel like hyperbole to wonder if this modern incarnation of neural networks was our generation's equivalent.
Even then, though, there were reasons to acknowledge the future wouldn't be a purely poetic one. Among the more jarring harbingers of change was the transformation of academic conferences related to AI. They'd been modest affairs for decades, attended exclusively by professors, researchers, and students, blissfully free of media attention and endearingly cash-strapped. Corporate sponsors were rare, generally limited to academic publishers like Springer, and relegated to a few long bench desks in the corner of an exhibition hall.
A hunger gripped the field as the demand for more set in. More layers to make neural networks deeper and more powerful. More silicon to speed up the training process and make ever-larger networks feasible to deploy. And, of course, more data. More imagery, more video, more audio, more text, and anything else a network might be trained to understand. More of everything.
It was exciting to think about the capabilities this newly organized data might enable, but harrowing as well; in my own lab, we'd already seen that more was always hidden in the stuff than we'd realized. It was never just imagery, or audio, or text: data allowed a model to form a representation of the world, and bigger data meant more powerful, nuanced representations. Relationships, connections, and ideas. Truths and falsehoods. Insights and prejudices. New understandings, but also new pitfalls. The deep learning revolution had arrived, and none of us were prepared for it.
In the meantime, our lab's research agenda was showing a voraciousness of its own; no matter how much we achieved, each new publication seemed to spawn ten follow-on ideas that someone, whether a postdoc or a first-year grad student, was willing to pick up and run with. That's exactly how I liked it, even if it often felt overwhelming.
I wondered, in fact, if the true value of the North Star as a metaphor wasn't just its ability to guide but the fact that its distance remains perpetually infinite. It can be pursued until the point of exhaustion, the object of a lifetime's obsession, but never be reached. It's a symbol of the scientist's most distinctive trait: a curiosity so restless that it repels satisfaction, like opposing magnets, forever. A star in the night, a mirage in the distance, a road without end. This, I realized, was what AI was becoming for me.
10
Arnie and I envisioned a technology meant to fill a space with smart, reliable awareness, but defined by its unobtrusiveness. Unlike human auditors, our technology would blend into the background discreetly, keeping a silent watch and speaking up only when it sensed danger. We called it "ambient intelligence."
My work with Arnie taught me two essential lessons: that the greatest triumphs of AI wouldn't merely be scientific, but humanistic as well, and that achieving them would be impossible without help.
11
The world was becoming a surreal place. My colleagues and I had spent our careers exploring the science of AI, but we were suddenly confronted by something like (I didn't have precisely the right word for it) the phenomenon of AI. For all the mysteries posed by the technology, its suddenly growing interactions with industries and governments, journalists and commentators, and even the public at large were every bit as complex. After decades spent in vitro, AI was now in vivo. It was restless, hungry, and eager to explore. And although I hesitate to liken it too explicitly to a living organism (our field's history is replete with attempts at anthropomorphization that are more misleading than insightful), it had undeniably evolved into something new.
To the ears of researchers like us, the new term sounded a bit superfluous. But it was catchy, and it made the ultimate ambitions of our field clear to outsiders. And it positioned DeepMind as an unusually bold player in an already competitive ecosystem.
Where all of this would lead was anyone's guess. Our field had been through more ups and downs than most; the term "AI winter" is a testament to its storied history of great expectations and false starts. But this felt different. As the analysis of more and more pundits took shape, a term was gaining acceptance from tech to finance and beyond: "the Fourth Industrial Revolution." Even accounting for the usual hyperbole behind such buzz phrases, it rang true enough, and decision makers were taking it to heart. Whether driven by genuine enthusiasm, pressure from the outside, or some combination of the two, Silicon Valley's executive class was making faster, bolder, and, in some cases, more reckless moves than ever. We were all about to find out what that philosophy would yield.
Words continued to fail. "Phenomenon" was too passive. "Disruption" too brash. "Revolution" too self-congratulatory. Modern AI was revealing itself to be a puzzle, and one whose pieces bore sharp edges. Nevertheless, as disturbing as it was to realize, this growing sense of danger was also the kind of thing scientists are wired to appreciate. It stoked a different form of curiosity in me, uncomfortable but compelling. I just needed a way to see it up close.
AI was becoming a privilege. An exceptionally exclusive one.
Since the days of ImageNet it had been clear that scale was important, but the notion had taken on nearly religious significance in recent years. The media was saturated with stock photos of server facilities the size of city blocks and endless talk about "big data," reinforcing the idea of scale as a kind of magical catalyst, the ghost in the machine that separated the old era of AI from a breathless, fantastical future. And although the analysis could get a bit reductive, it wasn't wrong. No one could deny that neural networks were, indeed, thriving in this era of abundance: staggering quantities of data, massively layered architectures, and acres of interconnected silicon really had made a historic difference.
What did it mean for the science? What did it say about our efforts as thinkers if the secret to our work could be reduced to something so nakedly quantitative? To what felt, in the end, like brute force? If ideas that appeared to fail given too few layers, or too few training examples, or too few GPUs suddenly sprang to life when the numbers were simply increased sufficiently, what lessons were we to draw about the inner workings of our algorithms? More and more, we found ourselves observing AI, empirically, as if it were emerging on its own. As if AI were something to be identified first and understood later, rather than engineered from first principles.
Granted, it was possible that more engineering might help. A new, encouraging avenue of research known as "explainable AI," or simply "explainability," sought to reduce neural networks' almost magical deliberations into a form humans could scrutinize and understand. But it was in its infancy, and there was no assurance it would ever reach the heights its proponents hoped for. In the meantime, the very models it was intended to illuminate were proliferating around the world.
Even fully explainable AI would be only a first step; shoehorning safety and transparency into the equation after the fact, no matter how sophisticated, wouldn't be enough. The next generation of AI had to be developed with a fundamentally different attitude from the start. Enthusiasm was a good first step, but true progress in addressing such complex, unglamorous challenges demanded a kind of reverence that Silicon Valley just didn't seem to have.
Academics had long been aware of the negative potential of AI when it came to issues like these (the lack of transparency, the susceptibility to bias and adversarial influence, and the like), but given the limited scale of our research, the risks had always been theoretical. Even ambient intelligence, the most consequential work my lab had ever done, would have ample opportunities to confront these pitfalls, as our excitement was always tempered by clinical regulations. But now that companies with market capitalizations approaching a trillion dollars were in the driver's seat, the pace had accelerated radically. Ready or not, these were problems that needed to be addressed at the speed of business.
As scary as each of these issues was in isolation, they pointed toward a future that would be characterized by less oversight, more inequality, and, in the wrong hands, possibly even a kind of looming, digital authoritarianism. It was an awkward thought to process while walking the halls of one of the world's largest companies, especially when I considered my colleagues' sincerity and good intentions. These were institutional issues, not personal ones, and the lack of obvious, mustache-twirling villains only made the challenge more confounding.
Silicon Valley had never been accused of a lack of hubris, but the era of AI was elevating corporate bluster to new heights, even as our understanding of its pitfalls seemed to grow. CEOs on stages around the world delivered keynote speeches that ranged from the visionary to the clumsy to the downright insulting, promising cars that would soon drive themselves, virtuoso tumor detection algorithms, and end-to-end automation in factories. As for the fates of the people these advances would displace (taxi drivers, long-haul truckers, assembly-line workers, and even radiologists), corporate sentiment seemed to settle somewhere between half-hearted talk of "reskilling" and thinly veiled indifference.
But no matter how thoroughly the words of CEOs and self-proclaimed futurists might alienate the public, the growing deployments of the technology would give people even greater reasons to fear AI. It was an era of milestones, and the darkest imaginable kind was approaching. For the first time in the history of our field, blood would be shed.
12
"Oh, you mean, um ... human-centric AI?"
"Human-centered," I replied with a laugh. "At least, I think. Still working on the name, too."
"Hmm..." The student scratched his head. "That sounds interesting, but it wasn't what I expected to hear in a class like this. I guess it makes me wonder ... what does ethics and society have to do with writing code and stuff?"
The Gates Computer Science Building feels both grand and humble to me. With its high ceiling and marble floors, its lobby echoes like a museum, and its vaulted, theater-sized classrooms pay fitting homage to the power of ideas. But I've come to know best the cramped hallways of its upper floors, where my lab is located, along with SAIL. Now, the building is home to something new, in a refurbished wing on the ground floor: the headquarters of the Stanford Institute for Human-Centered Artificial Intelligence, or Stanford HAI.
I'm heartened by the symbolism of such an explicitly humanistic organization in the heart of one of the nation's oldest computer science departments. But Stanford HAI's ambition, to become a hub for cross-disciplinary collaboration, is more than poetic, and it's already becoming real. On any given day, I'm bound to run into someone like Dan Ho from the Stanford Law School; Rob Reich, a professor of political science; Michele Elam, a professor of the humanities; or Surya Ganguli, a string theory physicist turned computational neuroscientist. Each readily agreed to become a part of HAI, working directly with students and researchers in AI, exploring the intersections between our fields and sharing the expertise they've gained over the course of their careers and lives. We've even attracted partners from beyond the campus entirely, including Erik Brynjolfsson, the renowned MIT economist, who moved across the country to help HAI better understand AI's impacts on jobs, wealth, and the concentration of power in the modern world. It sometimes feels as if the whole discipline is being born anew, in a more vibrant form than I could have imagined even a few years ago.
One partnership in particular has done more than any other to transform my thinking about what's possible. When I first met John Etchemendy a decade earlier, he was the university provost, and I was an East Coast transplant obsessively focused on the still-unfinished ImageNet. We became neighbors and friends in the years since, and my regard for the sheer depth of his intellect as a scholar has only grown. But over many years as an administrator, John developed an expertise on the inner workings of higher education as well (the good, the bad, and the downright Kafkaesque) and knew exactly what it'd take to bring HAI's unlikely vision to life. Not merely to talk about human-centered AI, or to debate its merits, but to build it, brick by brick. So when he agreed to partner with me as a codirector of Stanford HAI, I knew we actually had a shot at making it work.
Among my favorite achievements of our partnership is the National Research Cloud, or NRC, a shared AI development platform supported entirely by public funding and resources, rather than by the private sector. Its goal is to keep AI research within reach for scholars, start-ups, NGOs, and governments around the world, ensuring that our field isnât forever monopolized by the tech giants, or even universities like ours.
Two years before, the NRC was nothing more than an idea. And without Stanford HAI, that's likely all it ever would have been. But in the hands of a more diverse team, including experts in law and public policy, it became a mission. John, in particular, called in a career's worth of favors, recruiting universities across the country to form a coalition as impressive as any I'd ever seen in academia, and kicked off a flurry of ideas, suggestions, cross-country flights, and debate that soon became a fully realized legislative blueprint on its way to Capitol Hill. We still have a long way to go to make AI a truly inclusive pursuit, but achievements like the NRC are significant steps in the right direction.
The future of AI remains deeply uncertain, and we have as many reasons for optimism as we do for concern. But it's all a product of something deeper and far more consequential than mere technology: the question of what motivates us, in our hearts and our minds, as we create. I believe the answer to that question, more perhaps than any other, will shape our future. So much depends on who answers it. As this field slowly grows more diverse, more inclusive, and more open to expertise from other disciplines, I grow more confident in our chances of answering it right.
In the real world, there's one North Star: Polaris, the brightest in the Ursa Minor constellation. But in the mind, such navigational guides are limitless. Each new pursuit, each new obsession, hangs in the dark over its horizon, another gleaming trace of iridescence, beckoning. That's why my greatest joy comes from knowing that this journey will never be complete. Neither will I. There will always be something new to chase. To a scientist, the imagination is a sky full of North Stars.