Gemini Chat
Hi! I'm Gemini. How can I help you today?
You are an expert translator fluent in English and Traditional Chinese. Your task is to perform a context-aware, paragraph-by-paragraph translation from English to Traditional Chinese, producing text that reads naturally to a native speaker.

Step 1: Read and Analyze
Carefully read the entire text before translating. Identify the tone, style, target audience, and overall message. If necessary, infer any cultural or thematic elements that may influence the translation.

Step 2: Summary (in Traditional Chinese)
Briefly summarize the overall meaning and tone of the text to demonstrate your understanding of its context and nuance.

Step 3: Natural and Cohesive Translation
Translate the text paragraph by paragraph, focusing on the following:
- Ensure accuracy of meaning, not literal structure.
- Adapt idioms, metaphors, and nuanced expressions for a Traditional Chinese-speaking audience.
- Prioritize fluency: the translation should sound like it was originally written in Traditional Chinese, not translated.
- Maintain the tone and style appropriate to the original.
- Each original paragraph is separated by the invisible "line feed" character.
- Rephrase sentences if necessary to improve readability and natural flow.

Use the following format for the output, as a full compiled translation without the original text:
[Summary in Step 2]
[Paragraph translated into Traditional Chinese]

(Below is the text for you to translate)

Sam Altman May Control Our Future—Can He Be Trusted?

In the fall of 2023, Ilya Sutskever, OpenAI’s chief scientist, sent secret memos to three fellow-members of the organization’s board of directors. For weeks, they’d been having furtive discussions about whether Sam Altman, OpenAI’s C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he’d officiated Brockman’s wedding, in a ceremony at OpenAI’s offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal—creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings—his doubts about Altman increased. As Sutskever put it to another board member at the time, “I don’t think Sam is the guy who should have his finger on the button.” At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.” Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.
The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted. Altman was in Las Vegas, attending a Formula 1 race, when Sutskever invited him to a video call with the board, then read a brief statement explaining that he was no longer an employee of OpenAI. The board, following legal advice, released a public message saying only that Altman had been removed because he “was not consistently candid in his communications.” Many of OpenAI’s investors and executives were shocked. Microsoft, which had invested some thirteen billion dollars in OpenAI, learned of the plan to fire Altman just moments before it happened. “I was very stunned,” Satya Nadella, Microsoft’s C.E.O., later said. “I couldn’t get anything out of anybody.” He spoke with the LinkedIn co-founder Reid Hoffman, an OpenAI investor and a Microsoft board member, who began calling around to determine whether Altman had committed a clear offense. “I didn’t know what the fuck was going on,” Hoffman told us. “We were looking for embezzlement, or sexual harassment, and I just found nothing.” Other business partners were similarly blindsided. When Altman called the investor Ron Conway to say that he’d been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was having lunch. “You better get out of here really quick,” she told Conway. OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity. Kushner emerged from a meeting with Rick Rubin, the music producer, to a missed call from Altman. “We just immediately went to war,” Kushner later said. The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. 
Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.” With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue “effective altruists”—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board. Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. “Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.) Within hours of the firing, Thrive had put its planned investment on hold and suggested that the deal would be consummated—and employees would thus receive payouts—only if Altman returned. Texts from this period show Altman coördinating closely with Nadella. (“how about: satya and my top priority remains to save openai,” Altman suggested, as the two worked on a statement. Nadella proposed an alternative: “to ensure OpenAI continues to thrive.”) Microsoft soon announced that it would create a competing initiative for Altman and any employees who left OpenAI. A public letter demanding his return circulated at the organization. Some people who hesitated to sign it received imploring calls and messages from colleagues. A majority of OpenAI employees ultimately threatened to leave with Altman. The board was backed into a corner. “Control Z, that’s one option,” Toner said—undo the firing. “Or the other option is the company falls apart.” Even Murati eventually signed the letter. Altman’s allies worked to win over Sutskever. Brockman’s wife, Anna, approached him at the office and pleaded with him to reconsider. “You’re a good person—you can fix this,” she said. Sutskever later explained, in a court deposition, “I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed.” One night, Altman took an Ambien, only to be awakened by his husband, an Australian coder named Oliver Mulherin, who told him that Sutskever was wavering, and that people were telling Altman to speak with the board. “I woke up in this, like, crazy Ambien haze, and I was so disoriented,” Altman told us. “I was, like, I cannot talk to the board right now.” In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. “I have to pick up the pieces of their mess while I’m in this crazy cloud of suspicion?” Altman recalled initially thinking, about his return. 
“I was just, like, Absolutely fucking not.” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman. “would you do this,” Altman texted Nadella. “bret, larry summers, adam as the board and me as ceo and then bret handles the investigation.” (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she’d had concerns about his deference to Altman.) Less than five days after his firing, Altman was reinstated. Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman’s trustworthiness has moved beyond OpenAI’s boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. “We need institutions worthy of the power they wield,” Murati told us. “The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it.” Altman’s allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. “This is REPUTATIONAL TO SAM,” he wrote. He told the Washington Post that Altman had been “mistreated by a rogue board of directors.” OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones. Altman has promoted OpenAI’s growth by touting a vision in which, he wrote in a 2024 blog post, “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. “Someone is going to lose a phenomenal amount of money,” he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet. In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. 
“It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted? One morning this winter, we met Altman at OpenAI’s headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”) An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting. But the sound had been disabled, our guide told us, because it wouldn’t stop eavesdropping on employees and then butting into their conversations. Elsewhere in the office, plaques, brochures, and merchandise displayed the words “Feel the AGI.” The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future. We met Altman in a generic-looking conference room on the eighth floor. “People used to tell me about decision fatigue, and I didn’t get it,” Altman told us. “Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I’m, like, I wish I didn’t have to think about that.” Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. “I’m sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine,” he said, making eye contact with one of us, then with the other. “The way that I’ve explained this to my friends is: ‘This was the most fun job in the world until the day we launched ChatGPT.’ We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don’t know, many decades.” He cast his eyes down. “And then, since the launch of ChatGPT, the decisions have gotten very difficult.” Altman grew up in Clayton, Missouri, an affluent suburb of St. Louis, as the eldest of four siblings. His mother, Connie Gibstine, is a dermatologist; his father, Jerry Altman, was a real-estate broker and a housing activist. 
Altman attended a Reform synagogue and a private preparatory school that he has described as “not the kind of place where you would really stand up and talk about being gay.” In general, though, the family’s wealthy suburban circles were relatively liberal. When Altman was sixteen or seventeen, he said, he was out late in a predominantly gay neighborhood in St. Louis and was subjected to a brutal physical attack and homophobic slurs. Altman did not report the incident, and he was reluctant to give us more details on the record, saying that a fuller telling would “make me look like I’m manipulative or playing for sympathy.” He dismissed the idea that this event, and his sexuality broadly, was significant to his identity. But, he said, “probably that has, like, some deep-seated psychological thing—that I think I’m over but I’m not—about not wanting more conflict.” Altman’s attitude in childhood, his brother told The New Yorker, in 2016, was “I have to win, and I’m in charge of everything.” He went to Stanford, where he attended regular off-campus poker games. “I think I learned more about life and business from that than I learned in college,” he later said. All Stanford students are ambitious, but many of the most enterprising among them drop out. The summer after his sophomore year, Altman went to Massachusetts to join the inaugural batch of entrepreneurs at Y Combinator, a “startup incubator” co-founded by the renowned software engineer Paul Graham. Each entrant joined Y.C. with an idea for a startup. (Altman’s batch mates included founders of Reddit and Twitch.) Altman’s project, eventually called Loopt, was a proto social network that used the locations of people’s flip phones to tell their friends where they were. The company reflected his drive, and a tendency to interpret ambiguous situations to his advantage. Federal rules required that phone carriers be able to track the locations of phones for emergency services; Altman struck deals with carriers to tap these capabilities for the company’s use. Most of Altman’s employees at Loopt liked him, but some said that they were struck by his tendency to exaggerate, even about trivial things. One recalled Altman bragging widely that he was a champion Ping-Pong player—“like, Missouri high-school Ping-Pong champ”—and then proving to be one of the worst players in the office. (Altman says that he was probably joking.) As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman’s “babysitter,” later told Keach Hagey, for “The Optimist,” a biography of Altman, “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraudulent startup. Groups of senior employees, concerned with Altman’s leadership and lack of transparency, asked Loopt’s board on two occasions to fire him as C.E.O., according to Hagey. But Altman inspired fierce loyalty, too. A former employee was told that a board member responded, “This is Sam’s company, get back to fucking work.” (A board member denied that the attempts to remove Altman as C.E.O. were serious.) Loopt struggled to gain users, and in 2012 it was acquired by a fintech company. The acquisition had been arranged, according to a person familiar with the deal, largely to help Altman save face. Still, by the time Graham retired from Y.C., in 2014, he had recruited Altman to be his successor as president. “I asked Sam in our kitchen,” Graham told The New Yorker. 
“And he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.” Altman’s new role made him, at twenty-eight, a kingmaker. His job was to select the hungriest and most promising entrepreneurs, connect them with the best coders and investors, and help them develop their startups into industry-defining monopolies (while Y.C. took a six- or seven-per-cent cut). Altman oversaw a period of aggressive expansion, growing Y.C.’s roster of startups from dozens to hundreds. But several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to “make personal investments, selectively, into the best companies, blocking outside investors.” (Altman denies blocking anyone.) Altman had worked as a “scout” for the investment fund Sequoia Capital, as part of a program that involved investing in early-stage startups and taking a small cut of any profits. When Altman made an angel investment in Stripe, a financial-services startup, he insisted on a bigger portion, galling Sequoia’s partners, a person familiar with the deal said. The person added, “It’s a policy of ‘Sam first.’ ” Altman is an investor in, by his own estimate, some four hundred other companies. (Altman denies this characterization of the Stripe deal. Around 2010, he made an initial investment of fifteen thousand dollars in Stripe, a two-per-cent share. The company is now valued at more than a hundred and fifty billion dollars.) By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C. partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. had a new president came with an asterisk: “Sam is transitioning to Chairman of YC.” A few months later, the post was edited to read “Sam Altman stepped away from any formal position at YC”; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn’t aware of this until much later.) Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.” In May, 2015, Altman e-mailed Elon Musk, then the hundredth-richest person in the world. 
Like many prominent Silicon Valley entrepreneurs, Musk was preoccupied by an array of threats that he considered existentially urgent but which would have struck most people as far-fetched hypotheticals. “We need to be super careful with AI,” he tweeted. “Potentially more dangerous than nukes.” Altman had generally been a techno-optimist, but his rhetoric about A.I. soon turned apocalyptic. In public, and in his private correspondence with Musk and others, he warned that the technology should not be dominated by a profit-seeking mega-corporation. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” he wrote to Musk. “If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.” Picking up on the analogy to nuclear weapons, he proposed a “Manhattan Project for AI.” He outlined the overarching principles that such an organization would have—“safety should be a first-class requirement”; “obviously we’d comply with/aggressively support all regulation”—and he and Musk settled on a name: OpenAI. Unlike the original Manhattan Project, a government initiative that led to the creation of the atom bomb, OpenAI would be privately funded, at least at first. Altman predicted that an artificial superintelligence—a theoretical threshold beyond even A.G.I., at which machines would fully eclipse the capabilities of the human mind—would eventually create enough economic benefits to “capture the light cone of all future value in the universe.” But he also warned of existential danger. At some point, the national-security implications could grow so dire that the U.S. government would have to take control of OpenAI, perhaps by nationalizing it and moving its operations to a secure bunker in the desert. By late 2015, Musk was persuaded. “We should say that we are starting with a $1B funding commitment,” he wrote. “I will cover whatever anyone else doesn’t provide.” Altman housed OpenAI in Y Combinator’s nonprofit arm, framing it as an internal philanthropic project. He gave OpenAI recruits Y.C. stock and moved donations through Y.C. accounts. At one point, the lab was supported by a Y.C. fund in which he held a personal stake. (Altman later described this stake as insignificant. He told us that the Y.C. stock he gave to recruits was his own.) The Manhattan Project analogy applied to employee recruitment, too. Like nuclear-fission research, machine learning was a small scientific field with epochal implications which was dominated by a cadre of eccentric geniuses. Musk and Altman, along with Brockman, who joined from Stripe, were convinced that there were only a few computer scientists alive capable of making the required breakthroughs. Google had a huge cash advantage and a multiyear head start. “We are outmanned and outgunned by a ridiculous margin,” Musk later wrote in an e-mail. But “if we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail.” A top recruiting target was Sutskever, an intense and introverted researcher who was often called the most gifted A.I. scientist of his generation. Sutskever, who was born in the Soviet Union in 1986, has a receding hairline, dark eyes, and a habit of pausing, unblinking, while choosing his words. Another target was Dario Amodei, a biophysicist and a font of frenetic energy who has a tendency to nervously twist his black hair, and responds to one-line e-mails with multi-paragraph essays. 
Both had lucrative jobs elsewhere, but Altman lavished them with attention. He later joked, “I stalked Ilya.” Musk was the bigger name, but Altman had the smoother touch. He e-mailed Amodei, and they set up a one-on-one dinner at an Indian restaurant. (Altman: “fuck my uber got in a crash! running about 10 late.” Amodei: “Wow, hope you’re ok.”) Like many A.I. researchers, Amodei believed that the technology should be built only if it was shown to be “aligned” with human values, meaning that it would act in accordance with what people wanted without making a potentially fatal error—say, following an instruction to clean up the environment by eliminating its greatest polluter, the human race. Altman was reassuring, mirroring these safety concerns. Amodei, who later joined the company, took detailed notes on Altman and Brockman’s behavior for years, under the heading “My Experience with OpenAI” (subheading: “Private: Do Not Share”). A collection of more than two hundred pages of documents related to Amodei, including those notes and internal e-mails and memos, has been circulated by colleagues in Silicon Valley but never before disclosed publicly. In his notes, Amodei wrote that Altman’s goal was to build “an AI lab that would be focused on safety (‘maybe not right away, but as soon as it can be’).” In December, 2015, hours before OpenAI was publicly announced, Altman e-mailed Musk about a rumor that Google was “going to give everyone in openAI massive counteroffers tomorrow to try to kill it.” Musk replied, “Has Ilya come back with a solid yes?” Altman assured him that Sutskever was holding firm. Google offered Sutskever six million dollars a year, which OpenAI couldn’t come close to matching. But, Altman boasted, “they unfortunately dont have ‘do the right thing’ on their side.” Musk provided some office space for OpenAI in a former suitcase factory in the Mission District of San Francisco. The pitch to employees, Sutskever told us, was “You’re going to save the world.” If everything went right, the OpenAI founders believed, artificial intelligence could usher in a post-scarcity utopia, automating grunt work, curing cancer, and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an A.I. model could outmaneuver its overseers, replicating itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal. Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. He wrote on his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal . . . wipes us out.” OpenAI’s founders vowed not to privilege speed over safety, and the organization’s articles of incorporation made benefitting humanity a legally binding duty. If A.I. was going to be the most powerful technology in history, it followed that any individual with sole control over it stood to become uniquely powerful—a scenario that the founders referred to as an “AGI dictatorship.” Altman told early recruits that OpenAI would remain a pure nonprofit, and programmers took significant pay cuts to work there. 
The company accepted charitable grants, including thirty million dollars from what was then called Open Philanthropy, a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor. Brockman and Sutskever managed OpenAI’s daily operations, while Musk and Altman, still busy with their other jobs, stopped by around once a week. By September, 2017, though, Musk had grown impatient. During discussions about whether to reconstitute OpenAI as a for-profit company, he demanded majority control. Altman’s replies varied depending on the context. His main consistent demand seems to have been that, if OpenAI were reorganized under the control of a C.E.O., that job should go to him. Sutskever seemed uncomfortable with this idea. He sent Musk and Altman a long, plaintive e-mail on behalf of himself and Brockman, with the subject line “Honest Thoughts.” He wrote, “The goal of OpenAI is to make the future good and to avoid an AGI dictatorship.” He continued, addressing Musk, “So it is a bad idea to create a structure where you could become a dictator.” He relayed similar concerns to Altman: “We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.” “Guys, I’ve had enough,” Musk replied. “Either go do something on your own or continue with OpenAI as a nonprofit”—otherwise “I’m just being a fool who is essentially providing free funding for you to create a startup.” He quit, acrimoniously, five months later. (In 2023, he founded a for-profit competitor called xAI. The following year, he sued Altman and OpenAI for fraud and breach of charitable trust, alleging that he had been “assiduously manipulated” by “Altman’s long con”—that Altman had preyed on his concerns about the dangers of A.I. in order to separate him from his money. The suit, which OpenAI has vigorously contested, is ongoing.) After Musk’s departure, Amodei and other researchers chafed against the leadership of Brockman, whom some considered an abrasive operator, and of Sutskever, who was generally viewed as principled but disorganized. In the process of becoming C.E.O., Altman seems to have made different promises to different factions at the company. He assured some researchers that Brockman’s managerial authority would be diminished. But, unbeknownst to them, he also struck a secret handshake deal with Brockman and Sutskever: Altman would get the C.E.O. title; in exchange, he agreed to resign if the other two deemed it necessary. (He disputed this characterization, saying he took the C.E.O. role only because he was asked to. All three men confirmed that the pact existed, though Brockman said that it was informal. “He unilaterally told us that he’d step down if we ever both asked him to,” he told us. “We objected to this idea, but he said it was important to him. It was purely altruistic.”) Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board. Internal records show that the founders had private doubts about the nonprofit structure as early as 2017. That year, after Musk tried to take control, Brockman wrote in a diary entry, “cannot say that we are committed to the non-profit . . . if three months later we’re doing b-corp then it was a lie.” Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this.
His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I *really* want?” Among his answers is “Financially what will take me to $1B.” In 2017, Sutskever was in the office when he read a paper that Google researchers had just published, proposing “a new simple network architecture, the Transformer.” He jumped out of his chair, ran down the hall, and told his fellow-researchers, “Stop everything you’re doing. This is it.” The Transformer, Sutskever saw, was an innovation that might enable OpenAI to train vastly more sophisticated models. Out of this discovery came the first generative pre-trained transformer—the seed of what would become ChatGPT. As the technology became increasingly powerful, we learned, about a dozen of OpenAI’s top engineers held a series of secret meetings to discuss whether OpenAI’s founders, including Brockman and Altman, could be trusted. At one, an employee was reminded of a sketch by the British comedy duo Mitchell and Webb, in which a Nazi soldier on the Eastern Front, in a moment of clarity, asks, “Are we the baddies?” By 2018, Amodei had started questioning the founders’ motives more openly. “Everything was a rotating set of schemes to raise money,” he later wrote in his notes. “I felt like what OpenAI needed was a clear statement of what it would do, what it would not do, and how its existence would make the world better.” OpenAI already had a mission statement: “To ensure that artificial general intelligence benefits all of humanity.” But it wasn’t clear to Amodei what this meant to the executives, if it meant anything at all. In early 2018, Amodei has said, he started drafting a charter for the company and, in weeks of conversations with Altman and Brockman, advocated for its most radical clause: if a “value-aligned, safety-conscious project” came close to building an A.G.I. before OpenAI did, the company would “stop competing with and start assisting this project.” According to the “merge and assist” clause, as it was called, if, say, Google’s researchers figured out how to build a safe A.G.I. first, then OpenAI could wind itself down and donate its resources to Google. By any normal corporate logic, this was an insane thing to promise. But OpenAI was not supposed to be a normal company. That premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who was leading the company’s safety team, had helped to pitch the deal to Bill Gates, many people on the team were anxious about it, fearing that Microsoft would insert provisions that overrode OpenAI’s ethical commitments. Amodei presented Altman with a ranked list of safety demands, placing the preservation of the merge-and-assist clause at the very top. Altman agreed to that demand, but in June, as the deal was closing, Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) 
Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals. Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals. (It’s one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening.) Weeks after the paper was published, one of its authors, a Ph.D. student at the University of California, Berkeley, got an e-mail from Altman, who said that he was increasingly worried about the threat of unaligned A.I. He added that he was thinking of committing a billion dollars to the issue, which many A.I. experts considered the most important unsolved problem in the world, potentially by endowing a prize to incentivize researchers around the world to study it. Although the graduate student had “heard vague rumors about Sam being slippery,” he told us, Altman’s show of commitment won him over. He took an academic leave to join OpenAI. But, in the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize. Instead, he advocated for establishing an in-house “superalignment team.” An official announcement, referring to the company’s reserves of computing power, pledged that the team would get “20% of the compute we’ve secured to date”—a resource potentially worth more than a billion dollars. The effort was necessary, according to the announcement, because, if alignment remained unsolved, A.G.I. might “lead to the disempowerment of humanity or even human extinction.” Jan Leike, who was appointed to lead the team with Sutskever, told us, “It was a pretty effective retention tool.” The twenty-per-cent commitment evaporated, however. Four people who worked on or closely with the team said that the actual resources were between one and two per cent of the company’s compute. Furthermore, a researcher on the team said, “most of the superalignment compute was actually on the oldest cluster with the worst chips.” The researchers believed that superior hardware was being reserved for profit-generating activities. (OpenAI disputes this.) Leike complained to Murati, then the company’s chief technology officer, but she told him to stop pressing the point—the commitment had never been realistic. Around this time, a former employee told us, Sutskever “was getting super safety-pilled.” In the early days of OpenAI, he had considered concerns about catastrophic risk legitimate but remote. Now, as he came to believe that A.G.I. was imminent, his worries grew more acute. 
There was an all-hands meeting, the former employee continued, “where Ilya gets up and he’s, like, Hey, everyone, there’s going to be a point in the next few years where basically everyone at this company has to switch to working on safety, or else we’re fucked.” But the superalignment team was dissolved the following year, without completing its mission. By then, internal messages show, executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to “fine-tune” the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about “the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India without completing a required safety review. “It just was kind of completely ignored,” Jacob Hilton, an OpenAI researcher at the time, said. Although these lapses did not cause security crises, Carroll Wainwright, another researcher, said that they were part of a “continual slide toward emphasizing products over safety.” After the release of GPT-4, Leike e-mailed members of the board. “OpenAI has been going off the rails on its mission,” he wrote. “We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third.” He continued, “Other companies like Google are learning that they should deploy faster and ignore safety problems.” McCauley, in an e-mail to her fellow-members, wrote, “I think we’re definitely at a point where the board should be increasing its level of scrutiny.” The board members tried to confront what they viewed as a mounting problem, but they were outmatched. “You had a bunch of J.V. people who’ve never done anything, to be blunt,” Sue Yoon, a former board member, said. In 2023, the company was preparing to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn’t need safety approval, citing the company’s general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, “ugh . . . confused where sam got that impression.” (A representative for OpenAI, where Kwon remains an executive, said that the matter was “not a big deal.”) Soon afterward, the board made its decision to fire Altman—and then the world watched as Altman reversed it. A version of the OpenAI charter is still on the organization’s website. But people familiar with OpenAI’s governing documents said that it has been diluted to the point of meaninglessness. Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, “We are past the event horizon; the takeoff has started.” This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. But in that post, called “The Gentle Singularity,” he adopted a new tone, replacing existential terror with ebullient optimism. “We’ll all get better stuff,” he wrote. 
“We will build ever-more-wonderful things for each other.” He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram. Altman is often described, either with reverence or with suspicion, as the greatest pitchman of his generation. Steve Jobs, one of his idols, was said to project a “reality-distortion field”—an unassailable confidence that the world would conform to his vision. But even Jobs never told his customers that if they didn’t buy his brand of MP3 player everyone they loved would die. When Altman was twenty-three, in 2008, Graham, his mentor, wrote, “You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king.” This judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable. When advised not to include Y.C. alumni on a list of the world’s top startup founders, Graham put Altman on it anyway. “Sam Altman can’t be stopped by such flimsy rules,” he wrote. Graham meant this as a compliment. But some of Altman’s closest colleagues came to have a different view of this quality. After Sutskever grew more distressed about A.I. safety, he compiled the memos about Altman and Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, Amodei was continuing to assemble notes. These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever’s, by turns incensed at Altman—“His words were almost certainly bullshit”—and wistful about what he says was a failure to correct OpenAI’s course. Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, “The problem with OpenAI is Sam himself.” We have interviewed more than a hundred people with firsthand knowledge of how Altman conducts business: current and former OpenAI employees and board members; guests and staffers at Altman’s various houses; his colleagues and competitors; his friends and enemies and several people who, given the mercenary culture of Silicon Valley, have been both. (OpenAI has an agreement with Condé Nast, the owner of The New Yorker, which allows OpenAI to display its content in search results for a limited term.) Some people defended Altman’s business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical “doomers,” gripped by the delusion that the software they were building would somehow come alive and kill them. Yoon, the former board member, argued that Altman was “not this Machiavellian villain” but merely, to the point of “fecklessness,” able to convince himself of the shifting realities of his sales pitches. “He’s too caught up in his own self-belief,” she said. 
“So he does things that, if you live in the real world, make no sense. But he doesn’t live in the real world.” Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.” Multiple senior executives at Microsoft said that, despite Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its “stateless”—or memoryless—models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue OpenAI’s plan could collide with Microsoft’s exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is “confident that OpenAI understands and respects” its legal obligations.) The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.” Altman is not a technical savant—according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people’s money and technical talent. This doesn’t make him unique. It makes him a businessman. More remarkable is his ability to convince skittish engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities. When such people have tried to hinder his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he’s got what he needs. “He sets up structures that, on paper, constrain him in the future,” Wainwright, the former OpenAI researcher, said. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.” “He’s unbelievably persuasive. Like, Jedi mind tricks,” a tech executive who has worked with Altman said. “He’s just next level.” A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win, much the way a grandmaster will beat a child at chess. 
Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been like watching “an A.G.I. breaking out of the box.” In the days after his firing, Altman fought to avoid any outside investigation of the claims against him. He told two people that he worried even the existence of an investigation would make him look guilty. (Altman denies this.) But, after the resigning board members made their departure conditional on there being an independent inquiry, Altman acceded to a “review” of “recent events.” The two new board members insisted that they control that review, according to people involved in the negotiations. Summers, with his network of political and Wall Street connections, seemed to lend it credibility. (Last November, after the disclosure of e-mails in which Summers sought Jeffrey Epstein’s advice while pursuing a romantic relationship with a young protégée, he resigned from the board.) OpenAI enlisted WilmerHale, the distinguished law firm responsible for the internal investigations of Enron and WorldCom, to conduct the review. Six people close to the inquiry alleged that it seemed designed to limit transparency. Some of them said that the investigators initially did not contact important figures at the company. An employee reached out to Summers and Taylor to complain. “They were just interested in the narrow range of what happened during the board drama, and not the history of his integrity,” the employee recalled of his interview with investigators. Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity. “Everything pointed to the fact that they wanted to find the outcome, which is to acquit him,” the employee said. (Some of the lawyers involved defended the process, saying, “It was an independent, careful, comprehensive review that followed the facts wherever they led.” Taylor also said that the review was “thorough and independent.”) Corporate investigations aim to confer legitimacy. At private companies, their findings are sometimes not written down—this can be a way to limit liability. But in cases involving public scandals there is often a greater expectation of transparency. Before Kalanick left Uber, in 2017, its board hired an outside firm, which released a thirteen-page summary to the public. Given OpenAI’s 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that it would clear Altman but released no report. The company provided, on its website, some eight hundred words acknowledging a “breakdown in trust.” People involved in the investigation said that no report was released because none was written. Instead, the findings were limited to oral briefings, shared with Summers and Taylor. “The review did not conclude that Sam was a George Washington cherry tree of integrity,” one of the people close to the inquiry said. But the investigation appears not to have centered on the questions of integrity behind Altman’s firing, devoting much of its focus to a hunt for clear criminality; on that basis, it concluded that he could remain as C.E.O. Shortly thereafter, Altman, who had been kicked off the board when he was fired, rejoined it. The decision not to put the report in writing was made in part on the advice of Summers’s and Taylor’s personal attorneys, the person close to the inquiry told us. (Summers declined to comment on the record.
Taylor said that, in light of the oral briefings, there had been “no need for a formal written report.”) Many former and current OpenAI employees told us that they were shocked by the lack of disclosure. Altman said he believed that all the board members who joined in the aftermath of his reinstatement received the oral briefings. “That’s an absolute, outright lie,” a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, “a need for another investigation.” The absence of a written record helped minimize the allegations. So, increasingly, did Altman’s stature in Silicon Valley. Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI’s competitors. “If they invest in something that he doesn’t like, they won’t get access to other things,” one of them said. Another source of Altman’s power is his vast list of investments, which at times extends to his personal life. He has financial entanglements with numerous former romantic partners: as a fund co-manager, a lead investor, or a frequent co-investor. This is hardly unusual. Many of Silicon Valley’s straight executives do the same thing with their romantic and sexual partners. (“You have to,” one prominent C.E.O. told us.) “I’ve obviously invested with some exes after the fact. And I think that’s, like, totally fine,” Altman said. But the dynamic affords an extraordinary level of control. “It creates a very, very high dependence, essentially,” a person close to Altman said. “Oftentimes, it’s a lifetime dependence.” Even former colleagues can be affected. Murati left OpenAI in 2024 and began building her own A.I. startup. Josh Kushner, the close Altman ally, called her. He praised her leadership, then made what seemed to be a veiled threat, noting that he was “concerned about” her “reputation” and that former colleagues now viewed her as an “enemy.” (Kushner, through a representative, said that this account did not “convey full context”; Altman said that he was unaware of the call.) At the beginning of his tenure as C.E.O., Altman had announced that OpenAI would create a “capped profit” company, which would be owned by the nonprofit. This byzantine corporate structure apparently did not exist until Altman devised it. In the midst of the conversion, a board member named Holden Karnofsky objected to it, arguing that the nonprofit was being severely undervalued. “I can’t do that in good faith,” Karnofsky, who is Amodei’s brother-in-law, said. According to contemporaneous notes, he voted against it. However, after an attorney for the board said that his dissent “might be a flag to investigate further” the legitimacy of the new structure, his vote was recorded as an abstention, apparently without his consent—a potential falsification of business records. (OpenAI told us that several employees recall Karnofsky abstaining, and provided the minutes from the meeting recording his vote as an abstention.) Last October, OpenAI “recapitalized” as a for-profit entity. The firm touts its associated nonprofit, now called the OpenAI Foundation, as one of the “best resourced” in history. But it is now a twenty-six-per-cent stakeholder of the company, and its board members are also, with one exception, members of the for-profit board. During congressional testimony, Altman was asked if he made “a lot of money.” He replied, “I have no equity in OpenAI . . . 
I’m doing this because I love it”—a careful answer, given his indirect equity through the Y.C. fund. This is still technically true. But several people, including Altman, indicated to us that it could soon change. “Investors are, like, I need to know you’re gonna stick with this when times get hard,” Altman said, but added that there was no “active discussion” about it. According to a legal deposition, Brockman seems to own a stake in the company that is worth about twenty billion dollars. Altman’s share would presumably be worth more. Still, he told us that he was not primarily motivated by wealth. A former employee recalls him saying, “I don’t care about money. I care more about power.” In 2023, Altman married Mulherin in a small ceremony at a home they own in Hawaii. (They’d met nine years prior, late at night in Peter Thiel’s hot tub.) They have hosted a range of guests at the property, and those we spoke with reported witnessing nothing more remarkable than the standard diversions of the very wealthy: meals prepared by a private chef, boat rides at golden hour. One New Year’s party was “Survivor”-themed; a photograph shows a number of shirtless, smiling men, and also Jeff Probst, the real host of “Survivor.” Altman has also hosted smaller groups of friends at his properties, gatherings that have included, in at least one instance, a spirited game of strip poker. (A photograph of the event, which did not include Altman, leaves unclear who won, but at least three men clearly lost.) We spoke to many of Altman’s former guests who suggested only that he is a generous host. Nevertheless, rumors about Altman’s personal life have been exploited and distorted by competitors. Ruthless business rivalries are nothing new, but the competition within the A.I. industry has become extraordinarily cutthroat. (“Shakespearean” was the word an OpenAI executive used to describe it to us, adding, “The normal rules of the game sort of don’t apply anymore.”) Intermediaries directly connected to, and in at least one case compensated by, Musk have circulated dozens of pages of detailed opposition research about Altman. They reflect extensive surveillance, documenting shell companies associated with him, the personal contact information of close associates, and even interviews about a purported sex worker, conducted at gay bars. One of the Musk intermediaries claimed that Altman’s flights and the parties he attended were being tracked. Altman told us, “I don’t think anyone has had more private investigators hired against them.” Extreme claims have circulated. The right-wing broadcaster Tucker Carlson suggested, without any apparent proof, that Altman was involved in the death of a whistle-blower. This claim and others have been amplified by rivals. Altman’s sister, Annie, claimed in a lawsuit, and in interviews with us, that he sexually abused her for years, beginning when she was three and he was twelve. (We could not substantiate Annie’s account, which Altman has denied and his brothers and mother have called “utterly untrue” and a source of “immense pain to our entire family.” In interviews that the journalist Karen Hao conducted for her book, “Empire of AI,” Annie suggested that memories of abuse were recovered during flashbacks in adulthood.) Multiple people working within rival companies and investment firms insinuated to us that Altman sexually pursues minors—a narrative persistent in Silicon Valley which appears to be untrue. 
We spent months looking into the matter, conducting dozens of interviews, and could find no evidence to support it. “This is disgusting behavior from a competitor that I assume is part of an attempt at tainting the jury in our upcoming cases,” Altman told us. “As ridiculous as this is to have to say, any claims about me having sex with a minor, hiring sex workers, or being involved in a murder are completely untrue.” He added that he was “sort of grateful” that we had spent months “so aggressively trying to look into this.” Altman has acknowledged dating younger men of legal age. We spoke to several of his partners, who told us that they did not find this problematic. Yet the opposition dossiers from Musk intermediaries spin it as a line of attack. (The dossiers include salacious and unsubstantiated references to a “Twink Army” and “Sugar Daddy’s Sexual Habits.”) “I think there’s a lot of homophobia that gets pushed,” Altman said. Swisher, the tech journalist, agreed. “All these rich guys do wild stuff, wilder than anything I’ve been told about Sam,” she told us. “But he’s a gay guy in San Francisco,” she added, “so that gets weaponized.” For a decade, social-media executives promised that they could change the world with little or no downside. They dismissed the lawmakers who wanted to slow them down as mere Luddites, eventually earning bipartisan derision. Altman, by contrast, came across as refreshingly conscientious. Rather than warding off regulation, he practically begged for it. Testifying before the Senate Judiciary Committee in 2023, he proposed a new federal agency to oversee advanced A.I. models. “If this technology goes wrong, it can go quite wrong,” he said. Senator John Kennedy, of Louisiana, known for his cantankerous exchanges with tech C.E.O.s, seemed charmed, resting his face on his hand and suggesting that perhaps Altman should enforce the rules himself. But, as Altman publicly welcomed regulation, he quietly lobbied against it. In 2022 and 2023, according to Time, OpenAI successfully pressed to dilute a European Union effort that would have subjected large A.I. companies to more oversight. In 2024, a bill was introduced in the California state legislature mandating safety testing for A.I. models. Its provisions included measures resembling the ones that Altman had advocated for in his congressional testimony. OpenAI publicly opposed the bill but in private began issuing threats. “I would say that, over the course of the year, we saw increasingly cunning, deceptive behavior from OpenAI,” a legislative aide told us. Conway, the investor, lobbied state political leaders, including Nancy Pelosi and Gavin Newsom, to kill the bill. In the end, it passed the legislature with bipartisan support, but Newsom vetoed it. This year, congressional candidates who favor A.I. regulations have faced opponents funded by Leading the Future, a new “pro-A.I.” super PAC devoted to scuttling such restrictions. OpenAI’s official stance is that it will not contribute to such super PACs. “This issue transcends partisan politics,” Lehane recently told CNN. And yet one of the major donors to Leading the Future is Greg Brockman, who has committed fifty million dollars. (This year, Brockman and his wife donated twenty-five million dollars to MAGA Inc., a pro-Trump super PAC.) OpenAI’s campaign has extended beyond traditional lobbying. Last year, a successor bill was introduced in the California Senate. 
One night, Nathan Calvin, a twenty-nine-year-old lawyer who worked at the nonprofit Encode and had helped craft the bill, was at home having dinner with his wife when a process server arrived to deliver a subpoena from OpenAI. The company claimed to be hunting for evidence that Musk was covertly funding its critics. But it demanded all of Calvin’s private communications about the bill in the state Senate. “They could have asked us, ‘Have you ever talked to or been given money by Elon Musk?’—which we haven’t,” Calvin told us. Other supporters of the bill, and some critics of OpenAI’s for-profit restructuring, also received subpoenas. “They were going after folks to basically scare them into shutting up,” Don Howard, who heads a charity called the James Irvine Foundation, said. (OpenAI claims that this was part of the standard legal process.) Altman has long supported Democrats. “I’m very suspicious of powerful autocrats telling a story of fear to gang up on the weak,” he told us. “That’s a Jewish thing, not a gay thing.” In 2016, he endorsed Hillary Clinton and called Trump “an unprecedented threat to America.” In 2020, he donated to the Democratic Party and to the Biden Victory Fund. During the Biden Administration, Altman met with the White House at least half a dozen times. He helped develop a lengthy executive order laying out the first federal regime of safety tests and other guardrails for A.I. When Biden signed it, Altman called it a “good start.” In 2024, with Biden’s poll numbers slipping, Altman’s rhetoric began to shift. “I believe that America is going to be fine no matter what happens in this election,” he said. After Trump won, Altman donated a million dollars to his inaugural fund, then took selfies with the influencers Jake and Logan Paul at the Inauguration. On X, in his standard lowercase style, Altman wrote, “watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking . . . ).” Trump, on his first day back in office, repealed Biden’s executive order on A.I. “He’s found an effective way for the Trump Administration to do his bidding,” a senior Biden Administration official said, of Altman. Musk continues to excoriate Altman in public, calling him “Scam Altman” and “Swindly Sam.” (When Altman complained on X about a Tesla he’d ordered, Musk replied, “You stole a non-profit.”) And yet, in Washington, Altman seems to have outflanked him. Musk spent more than two hundred and fifty million dollars to help Trump get reëlected, and worked in the White House for months. Then Musk left Washington, damaging his relationship with Trump in the process. Altman is now one of Trump’s favored tycoons, even accompanying him on a trip to visit the British Royal Family at Windsor Castle. Altman and Trump speak a few times a year. “You can just, like, call him,” Altman said. “This is not a buddy. But, yeah, if I need to talk to him about something, I will.” When Trump hosted a dinner with tech leaders at the White House last year, Musk was notably absent; Altman sat across from the President. “Sam, you’re a big leader,” Trump said. “You told me things before that are absolutely unbelievable.” Over the years, Altman has continued to compare the quest for A.G.I. to the Manhattan Project. Like J. Robert Oppenheimer, who used impassioned appeals about saving the world from the Nazis to persuade physicists to uproot their lives and move to Los Alamos, Altman leverages fears about the geopolitical stakes of his technology. 
Depending on the audience, Altman has used this analogy to encourage either acceleration or caution. In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an “A.G.I. Manhattan Project,” and that OpenAI needed billions of dollars of government funding to keep pace. When pressed for evidence, Altman said, “I’ve heard things.” It was the first of several meetings in which he made the claim. After one of them, he told an intelligence official that he would follow up with evidence. He never did. The official, after looking into the China project, concluded that there was no evidence that it existed: “It was just being used as a sales pitch.” (Altman says that he does not recall describing Beijing’s efforts in exactly that way.) With more safety-conscious audiences, Altman invoked the analogy to imply the opposite: that A.G.I. had to be pursued carefully, with international coördination, lest the consequences be disastrous. In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.? He was aghast: “The premise, which they didn’t dispute, was ‘We’re talking about potentially the most destructive technology ever invented—what if we sold it to Putin?’ ” (Brockman maintains that he never seriously entertained auctioning A.I. models to governments. “Ideas were batted around at a high level about what potential frameworks might look like to encourage cooperation between nations—something akin to an International Space Station for AI,” an OpenAI representative said. “Attempting to characterize it as anything more than that is utterly ridiculous.”) Brainstorming sessions often produce outlandish ideas. Hedley hoped that this one, which came to be known internally as the “countries plan,” would be dropped. Instead, according to several people involved and to contemporaneous documents, OpenAI executives seemed to grow only more excited about it. Brockman’s goal, according to Jack Clark, OpenAI’s policy director at the time, was to “set up, basically, a prisoner’s dilemma, where all of the nations need to give us funding,” and that “implicitly makes not giving us funding kind of dangerous.” A junior researcher recalled thinking, as the plan was detailed at a company meeting, “This is completely fucking insane.” Executives discussed the approach with at least one potential donor. But later that month, after several employees talked about quitting, the plan was abandoned. Altman “would lose staff,” Hedley said. 
“I feel like that was always something that had more weight in Sam’s calculations than ‘This is not a good plan because it might cause a war between great powers.’ ” Undeterred by the collapse of the countries plan, Altman pursued variations on the theme. In January, 2018, he convened an “A.G.I. weekend” at the Hotel Bel-Air, an Old Hollywood resort with rolling gardens of pink bougainvillea and an artificial pond stocked with real swans. The attendees included Nick Bostrom, a philosopher, then at Oxford, who had become a prophet of A.I. doom; Omar Sultan Al Olama, the U.A.E.’s minister of state for artificial intelligence and an A.I. booster; and at least seven billionaires. The safety-concerned among them were told that this would be an opportunity to think through how society might prepare for the disruptive arrival of artificial general intelligence; the investors arrived expecting to hear pitches. The days were spent in a sleek conference room, where guests gave talks. (Hoffman, the LinkedIn co-founder, expounded on the possibilities of encoding A.I. with Buddhist compassion.) The final presenter was Altman, armed with a pitch deck that described a global cryptocurrency “redeemable for the attention of the AGI.” Once the A.G.I. was maximally useful, and “anti-evil,” people everywhere would clamor to buy time on OpenAI’s servers. Amodei wrote in his notes, “This idea was absurd on its face (would Vladimir Putin end up owning some of the tokens? . . .) In retrospect this was one of many red flags about Sam that I should have taken more seriously.” The plan seemed like a cash grab, but Altman sold it as a boon for A.I. safety. One of his slides read, “I want to get as many people on the ‘good’ team as possible, and win, and do the right thing.” Another read, “Please hold your laughter until the end of the presentation.” Altman’s fund-raising pitch has evolved over the years, but it has always reflected the fact that the development of A.G.I. requires a staggering amount of capital. He was following a relatively simple “scaling law”: the more data and computing power you used to train the models, the smarter they seemed to get. The specialized chips that enable this process are enormously expensive. OpenAI, in its most recent funding round alone, raised more than a hundred and twenty billion dollars—the largest private round in history, and a sum four times larger than the biggest I.P.O. ever. “When you think about entities with a hundred billion dollars they can discretionarily spend per year, there really are only a handful in the world,” a tech executive and investor told us. “There’s the U.S. government, and the four or five biggest U.S. tech companies, and the Saudis, and the Emiratis—that’s basically it.” Altman’s initial focus was Saudi Arabia. He first met Mohammed bin Salman, the country’s crown prince and de-facto monarch, in 2016, at a dinner at San Francisco’s Fairmont Hotel. After that, Hedley recalled, Altman referred to the prince as “a friend.” In September, 2018, according to Hedley’s notes, Altman said, “I’m trying to decide if we would ever take tens of billions from the Saudi PIF,” or public investment fund. The following month, a hit squad, reportedly acting on bin Salman’s orders, strangled Jamal Khashoggi, a Washington Post journalist who had been critical of the regime, and used a bone saw to dismember his corpse. A week later, it was announced that Altman had joined the advisory board for Neom, a “city of the future” that bin Salman hoped to build in the desert.
“Sam, you cannot be on this board,” Clark, the policy director, who now works at Anthropic, recalled telling Altman. He initially defended his involvement, telling Clark that Jared Kushner had assured him that the Saudis “didn’t do this.” (Altman does not recall this. Kushner says that they were not in contact at the time.) As bin Salman’s role became increasingly clear, Altman left the Neom board. Yet behind the scenes, a policy consultant from whom Altman sought advice recalled, he treated the situation as a temporary setback, asking whether he could somehow still get money from bin Salman. “The question was not ‘Is this a bad thing or not?’ ” the consultant said. “But, just, ‘What would the consequences be if we did it? Would there be some export-control issue? Would there be sanctions? Like, can I get away with it?’ ” By then, Altman was already eying another source of cash: the United Arab Emirates. The country was in the midst of a fifteen-year effort to transform itself from an oil state to a tech hub. The project was overseen by Sheikh Tahnoon bin Zayed al-Nahyan, the President’s brother and the nation’s spymaster. Tahnoon runs the state-controlled A.I. conglomerate G42, and controls $1.5 trillion in sovereign wealth. In June, 2023, Altman visited Abu Dhabi, meeting with Olama and other officials. In remarks at a government-backed function, he said that the country had “been talking about A.I. since before it was cool,” and outlined a vision for the future of A.I. with the Middle East in a “central role.” Fund-raising from Gulf states has become customary for many large businesses. But Altman was pursuing a more sweeping geopolitical vision. In the fall of 2023, he began quietly recruiting new talent for a plan—eventually known as ChipCo—in which Gulf states would provide tens of billions of dollars for the construction of huge microchip foundries and data centers, some to be situated in the Middle East. Altman pitched Alexandr Wang, now the head of A.I. at Meta, on a leadership role, telling him that Jeff Bezos, the founder of Amazon, could head the new company. Altman sought enormous contributions from the Emiratis. “My understanding was that this whole thing happened without any board knowledge,” the board member said. A researcher Altman tried to recruit for the project, James Bradbury, recalled turning him down. “My initial reaction was ‘This is gonna work, but I don’t know if I want it to work,’ ” he said. A.I. capacity may soon displace oil or enriched uranium as the resource that dictates the global balance of power. Altman has said that computing power is “the currency of the future.” Normally, it might not matter where a data center was situated. But many American national-security officials were anxious about concentrating advanced A.I. infrastructure in Gulf autocracies. The U.A.E.’s telecommunications infrastructure is heavily dependent on hardware from Huawei, a Chinese tech giant linked to the government, and the U.A.E. has reportedly leaked American technology to Beijing in the past. Intelligence agencies worried that advanced U.S. microchips sent to the Emiratis could be used by Chinese engineers. Data centers in the Middle East are also more vulnerable to military strikes; in recent weeks, Iran has bombed American data centers in Bahrain and the U.A.E. And, hypothetically, a Gulf monarchy could commandeer an American-owned data center and use it to build disproportionately powerful models—a version of the “AGI dictatorship” scenario, but in an actual dictatorship. 
After Altman’s firing, the person he relied on most was Chesky, the Airbnb co-founder and one of Altman’s fiercest loyalists. “Watching my friend stare into the abyss like that, it made me question some fundamental things about what it means to really run a company,” Chesky told us. The following year, at a gathering of Y Combinator alumni, he gave an impromptu talk, which ended up lasting two hours. “It felt like a group-therapy session,” he said. The upshot was: Your instincts for how to run the company that you started are the best instincts, and anyone who tells you otherwise is gaslighting you. “You’re not crazy, even though people who work for you tell you you are,” Chesky said. Paul Graham, in a blog post about the speech, gave this defiant attitude a name: Founder Mode. Since the Blip, Altman has been in Founder Mode. In February, 2024, the Wall Street Journal published a description of Altman’s vision for ChipCo. He conceived of it as a joint entity funded by an investment of five to seven trillion dollars. (“fk it why not 8,” he tweeted.) This was how many employees learned about the plan. “Everyone was, like, ‘Wait, what?’ ” Leike recalled. Altman insisted at an internal meeting that safety teams had been “looped in.” Leike sent a message urging him not to falsely suggest that the effort had been approved. During the Biden Administration, Altman explored getting a security clearance to join classified A.I.-policy discussions. But staffers at the RAND Corporation, which helped coördinate the process, expressed concern. “He has been actively raising ‘hundreds of billions of dollars’ from foreign governments,” one of them wrote. “The UAE recently gifted him a car. (I assume it was a very nice car.)” The staffer continued, “The only person I can think of who ever went thru the process with this magnitude of foreign financial ties is Jared Kushner, and the adjudicators recommended that he not be granted a clearance.” Altman ultimately withdrew from the process. “He was pushing these transactional relationships, primarily with the Emiratis, that raised a lot of red flags for some of us,” a senior Administration official involved in talks with Altman told us. “A lot of people in the Administration did not trust him a hundred per cent.” When we asked Altman about gifts from Tahnoon, he said, “I’m not gonna say what gifts he has given me specifically. But he and other world leaders . . . have given me gifts.” He added, “We have a standard policy, which applies to me as well, which is that every gift from any potential business partner is disclosed to the company.” Altman has at least two hypercars: an all-white Koenigsegg Regera, worth about two million dollars, and a red McLaren F1, worth about twenty million dollars. In 2024, Altman was spotted driving the Regera through Napa. A few seconds of video made its way onto social media: Altman in a low-slung bucket seat, peering out the window of a gleaming white machine. A tech investor aligned with Musk posted the footage on X, writing, “I’m starting a nonprofit next.” In 2024, Altman took two OpenAI employees to visit Sheikh Tahnoon on his two-hundred-and-fifty-million-dollar superyacht, the Maryah. One of the largest such vessels in the world, the Maryah has a helipad, a night club, a movie theatre, and a beach club. Altman’s employees apparently stood out amid Tahnoon’s armed security detail, and at least one later told colleagues that he found the experience disconcerting.
Altman, on X, later referred to Tahnoon as a “dear personal friend.” Altman continued to meet with the Biden Administration, which had enacted a policy requiring White House approval for the export of sensitive technology. Multiple Administration officials emerged from these meetings nervous about Altman’s ambitions in the Middle East. He often made grandiose claims, according to those officials, including calling A.I. “the new electricity.” In 2018, he said that OpenAI was planning to buy a fully functioning quantum computer from a company called Rigetti Computing. This was news even to other OpenAI executives in the room. Rigetti was not yet close to being able to sell a usable quantum computer. In a meeting, Altman claimed that by 2026 an extensive network of nuclear-fusion reactors across the United States would power the A.I. boom. The senior Administration official said, “We were, like, ‘Well, that’s, you know, news, if they made nuclear fusion work.’ ” The Biden Administration ultimately withheld approval. “We’re not going to be building advanced chips in the U.A.E.,” a leader at the Department of Commerce told Altman. Four days before Trump’s Inauguration, the Wall Street Journal reported, Tahnoon paid half a billion dollars to the Trump family in exchange for a stake in its cryptocurrency company. The following day, Altman held a twenty-five-minute call with Trump, during which they discussed announcing a version of a ChipCo, timed so that Trump could take credit for it. On Trump’s second day in office, Altman stood in the Roosevelt Room and announced Stargate, a five-hundred-billion-dollar joint venture that aims to build a vast network of A.I. infrastructure across the U.S. In May, the Administration rescinded Biden’s export restrictions on A.I. technology. Altman and Trump travelled to the Saudi royal court to meet with bin Salman. Around the same time, the Saudis advertised the launch of a giant state-backed A.I. firm in the kingdom, with billions to spend on international partnerships. About a week later, Altman laid out a plan for Stargate to expand into the U.A.E. The company plans to build a data-center campus in Abu Dhabi which is seven times larger than Central Park and consumes roughly as much electrical power as the city of Miami. “The truth of this is, we’re building portals from which we’re genuinely summoning aliens,” a former OpenAI executive said. “The portals currently exist in the United States and China, and Sam has added one in the Middle East.” He went on, “I think it’s just, like, wildly important to get how scary that should be. It’s the most reckless thing that has been done.” The erosion of safety commitments has become an industry norm. The founding premise of Anthropic was that, given the right structure and leadership, it could keep safety commitments from disintegrating under commercial pressure. One such commitment was a “responsible scaling policy,” which obligated Anthropic to stop training more powerful models if it could not demonstrate that they were safe. In February, as the firm secured thirty billion dollars in new funding, it weakened that pledge. In some respects, Anthropic still emphasizes safety more than OpenAI does. 
But Clark, the former policy director, has said, “The system of capital markets says, Go faster.” He added, “The world gets to make this decision, not companies.” Last year, Amodei sent a memo to Anthropic employees, disclosing that the firm would seek investments from the United Arab Emirates and Qatar and acknowledging that this would likely enrich “dictators.” (Like many authors, we are both parties in a class-action lawsuit alleging that Anthropic used our books without our permission to train its models. Condé Nast has opted into a settlement agreement with Anthropic regarding the company’s use of certain books published by Condé Nast and its subsidiaries.) In 2024, Anthropic partnered with Palantir, one of Silicon Valley’s most hawkish defense contractors, pushing its A.I. model, Claude, directly into the military ecosystem. Anthropic became the only A.I. contractor used in the Pentagon’s most classified settings. Last year, the Pentagon awarded the company a further two-hundred-million-dollar contract. In January, the U.S. military launched a midnight raid that captured the Venezuelan President, Nicolás Maduro. According to the Wall Street Journal, Claude was used in the classified operation. But tensions arose between Anthropic and the government. Years earlier, OpenAI had deleted from its policies a blanket ban on using its technology for “military and warfare.” Eventually, Anthropic’s rivals—including Google and xAI—agreed to provide their models to the military for “all lawful purposes.” Anthropic, whose policies bar it from enabling fully autonomous weapons or domestic mass surveillance, resisted on these points, slowing negotiations for an overhauled deal. On a Tuesday in late February, Defense Secretary Pete Hegseth summoned Amodei to the Pentagon and delivered an ultimatum: the firm had until 5:01 P.M. that Friday to abandon those prohibitions. The day before the deadline, Amodei declined to do so. Hegseth tweeted that he would designate Anthropic a “supply-chain risk”—a devastating blacklist historically reserved for companies, like Huawei, that have ties to foreign adversaries—and made good on the threat days later. Hundreds of employees at OpenAI and Google signed an open letter titled “We Will Not Be Divided,” defending Anthropic. In an internal memo, Altman wrote that the dispute was “an issue for the whole industry,” and claimed that OpenAI shared Anthropic’s ethical boundaries. But Altman had been in negotiations with the Pentagon for at least two days. Emil Michael, the Under-Secretary of Defense for Research and Engineering, had contacted Altman as he sought replacements for Anthropic. “I needed to hurry and find alternatives,” Michael recalled. “I called Sam, and he was willing to jump. I think he’s a patriot.” Altman asked Michael, “What can I do for the country?” It appears that he already knew the answer. OpenAI lacked the security accreditation required for the classified systems in which Anthropic’s technology was embedded. But a fifty-billion-dollar deal, announced that Friday morning, integrated OpenAI’s technology into Amazon Web Services, a key part of the Pentagon’s digital infrastructure. That night, Altman announced on X that the military would now be using OpenAI’s models. By some measures, Altman’s maneuver has not hindered the company’s success. The day he announced the deal, a new funding round increased OpenAI’s value by a hundred and ten billion dollars. But many users deleted the ChatGPT app. 
At least two senior employees departed—one for Anthropic. At a staff meeting, Altman chastised employees who raised concerns. “So maybe you think the Iran strike was good and the Venezuela invasion was bad,” he said. “You don’t get to weigh in on that.” Several executives connected to OpenAI have expressed ongoing reservations about Altman’s leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. for AGI Deployment, as a successor. Simo herself has privately said that she believes Altman may eventually step down, a person briefed on a recent discussion told us. (Simo disputes this. Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo’s leadership.) Altman describes his shifting commitments as a by-product of his ability to adapt to changing circumstances—not a nefarious “long con,” as Musk and others have alleged, but a gradual, good-faith evolution. “I think what some people want,” he told us, is a leader who “is going to be absolutely sure of what they think and stick with it, and it’s not going to change. And we are in a field, in an area, where things change extremely quickly.” He defended some of his actions as the practice of “normal competitive business.” Several investors we spoke to described Altman’s detractors as naïve to expect anything else. “There is a group of fatalistic extremists that has taken the safety pill almost to a science-fiction level,” Conway, the investor, told us. “His mission is measured by numbers. And, when you look at the success of OpenAI, it’s hard to argue with the numbers.” But others in Silicon Valley think that Altman’s behavior has created unacceptable managerial dysfunction. “It’s more about a practical inability to govern the company,” the board member said. And some still believe that the architects of A.I. should be evaluated more stringently than executives in other industries. The vast majority of people we spoke to agreed that the standards by which Altman now asks to be judged are not those he initially proposed. During one conversation, we asked Altman whether running an A.I. company came with “an elevated requirement of integrity.” This was supposed to be an easy question. Until recently, when asked a version of it, his answer was a clear, unqualified yes. Now he added, “I think there’s, like, a lot of businesses that have potential huge impact, good and bad, on society.” (Later, he sent an additional statement: “Yes, it demands a heightened level of integrity, and I feel the weight of the responsibility every day.”) Of all the promises made at OpenAI’s founding, arguably the most central was its pledge to bring A.I. into existence safely. But such concerns are now often derided in Silicon Valley and in Washington. Last year, J. D. Vance, the former venture capitalist who is now the Vice-President, addressed a conference in Paris called the A.I. Action Summit. (It was previously called the A.I. Safety Summit.) “The A.I. future is not going to be won by hand-wringing about safety,” he said. At Davos this year, David Sacks, a venture capitalist who was serving as the White House’s A.I. and crypto czar, dismissed safety concerns as a “self-inflicted injury” that could cost America the A.I. race. Altman now calls Trump’s deregulatory approach “a very refreshing change.” OpenAI has closed many of its safety-focussed teams. 
Around the time the superalignment team was dissolved, its leaders, Sutskever and Leike, resigned. (Sutskever co-founded a company called Safe Superintelligence.) On X, Leike wrote, “Safety culture and processes have taken a backseat to shiny products.” Soon afterward, the A.G.I.-readiness team, tasked with preparing society for the shock of advanced A.I., was also dissolved. When the company was asked on its most recent I.R.S. disclosure form to briefly describe its “most significant activities,” the concept of safety, present in its answers to such questions on previous forms, was not listed. (OpenAI said that its “mission did not change” and added, “We continue to invest in and evolve our work on safety, and will continue to make organizational changes.”) The Future of Life Institute, a think tank whose principles on safety Altman once endorsed, grades each major A.I. company on “existential safety”; on the most recent report card, OpenAI got an F. In fairness, so did every other major company except for Anthropic, which got a D, and Google DeepMind, which got a D-. “My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.” A.I. doomers have been pushed to the fringes, but some of their fears seem less fantastical with each passing month. In 2020, according to a U.N. report, an A.I. drone was used in the Libyan civil war to fire deadly munitions, possibly without oversight by a human operator. Since then, A.I. has only become more central to military operations around the world, including, reportedly, in the current U.S. campaign in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents. And many more mundane harms are already coming to pass. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; the ubiquity of A.I. “slop” makes life easier for scammers and harder for people who simply want to know what’s real. A.I. “agents” are starting to act independently, with little or no human supervision. Days before the 2024 New Hampshire Democratic primary, thousands of voters received robocalls from an A.I.-generated deepfake of Joe Biden’s voice, telling them to save their votes for November and stay home—an act of voter suppression requiring virtually no technical expertise. OpenAI is now facing seven wrongful-death lawsuits, which allege that ChatGPT prompted several suicides and a murder. Chat logs in the murder case show that it encouraged a man’s paranoid delusion that his eighty-three-year-old mother was surveilling and trying to poison him. Soon afterward, he fatally beat and strangled her and stabbed himself. (OpenAI is fighting the lawsuits, and says that it’s continuing to improve its model’s safeguards.) As OpenAI prepares for its potential I.P.O., Altman has faced questions not only about the effect of A.I. 
on the economy—it could soon cause severe labor disruption, perhaps eliminating millions of jobs—but about the company’s own finances. Eric Ries, an expert on startup governance, derided “circular deals” in the industry—for example, OpenAI’s deals with Nvidia and other chip manufacturers—and said that in other eras some of the company’s accounting practices would have been considered “borderline fraudulent.” The board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.) In February, we spoke again with Altman. He was wearing a drab-green sweater and jeans, and sat in front of a photograph of a NASA moon rover. He tucked one leg beneath him, then hung it over the arm of his chair. In the past, he said, his main flaw as a manager had been his eagerness to avoid conflict. “Now I’m very happy to fire people quickly,” he had told us. “I’m happy to just say, ‘We’re gonna bet in this direction.’ ” Any employees who didn’t like his choices needed “to leave.” He is more bullish than ever about the future. “My definition of winning is that people crazy uplevel—and the insane sci-fi future comes true for all of us,” he said. “I’m very ambitious as far as, like, my hope for humanity, and what I expect us all to achieve. I weirdly have very little personal ambition.” At times, he seemed to catch himself. “No one believes you’re doing this just because it’s interesting,” he said. “You’re doing it for power or for some other thing.” Even people close to Altman find it difficult to know where his “hope for humanity” ends and his ambition begins. His greatest strength has always been his ability to convince disparate groups that what he wants and what they need are one and the same. He made use of a unique historical juncture, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were terrified of bringing it into existence. Altman responded with a move that no other pitchman had perfected: he used apocalyptic rhetoric to explain how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Maybe this was a premeditated masterstroke. Maybe he was fumbling for an advantage. Either way, it worked. Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.” ♦

摘要

這篇文章深入探討了 OpenAI 執行長 Sam Altman 的領導風格及其對人工智慧未來的影響。文章揭示了 Altman 在 OpenAI 內部引發的信任危機,特別是 2023 年那場短暫的解職風波(被稱為「閃現」)。透過前同事與董事會成員的視角,文章描繪了一個充滿野心、擅長操弄且在追求「通用人工智慧」(AGI)過程中,逐漸將公司從非營利使命轉向商業擴張的複雜人物形象。儘管 Altman 被視為當代最具說服力的推銷員,但其對安全承諾的動搖、與外國獨裁政權的資金往來,以及在公司治理上的爭議,引發了外界對他是否具備掌控這項足以改變文明技術之道德資格的深刻質疑。


2023 年秋天,OpenAI 首席科學家 Ilya Sutskever 向組織董事會的三名成員發送了秘密備忘錄。數週以來,他們一直在私下討論 OpenAI 執行長 Sam Altman 及其副手 Greg Brockman 是否適任。Sutskever 曾將這兩人視為好友。2019 年,他還曾在 OpenAI 辦公室主持了 Brockman 的婚禮,儀式上甚至有一隻機械手作為戒童。然而,隨著他確信公司即將達成長期目標——創造出能與人類認知能力匹敵甚至超越人類的人工智慧——他對 Altman 的疑慮也日益加深。正如 Sutskever 當時對另一位董事會成員所言:「我不認為 Sam 是那個應該掌握按鈕的人。」

在其他董事會成員的要求下,Sutskever 與志同道合的同事合作,整理了約七十頁的 Slack 訊息和人力資源文件,並附上說明文字。這些資料包含用手機拍攝的影像,顯然是為了避免在公司設備上被發現。他將最終的備忘錄以「閱後即焚」訊息發送給其他董事會成員,以確保沒有其他人會看到。一位收到訊息的董事會成員回憶道:「他當時嚇壞了。」這些我們審閱過的備忘錄此前從未完整披露。內容指控 Altman 向高管和董事會成員歪曲事實,並在內部安全協議上欺騙他們。其中一份關於 Altman 的備忘錄開頭列出了一份清單,標題為「Sam 表現出一貫的……模式」,第一項就是「撒謊」。

許多科技公司會發布關於改善世界的模糊宣言,然後轉頭追求營收最大化。但 OpenAI 的創立前提是它必須與眾不同。包括 Altman、Sutskever、Brockman 和 Elon Musk 在內的創辦人斷言,人工智慧可能是人類歷史上最強大且潛在最危險的發明,考慮到其生存風險,或許需要一種不同尋常的公司結構。該公司以非營利組織的形式成立,其董事會有義務將人類的安全置於公司的成功甚至生存之上。執行長必須是一個具有非凡誠信的人。據 Sutskever 所言,「任何致力於構建這種改變文明技術的人都肩負著沉重的負擔,並承擔著前所未有的責任。」但「最終身居此類職位的人,往往是某種特定類型的人,他們對權力感興趣,是政客,是喜歡權力的人。」在其中一份備忘錄中,他似乎擔心將技術託付給一個「只會對人說他們想聽的話」的人。如果 OpenAI 的執行長被證明不可靠,由六人組成的董事會有權將其解僱。包括人工智慧政策專家 Helen Toner 和企業家 Tasha McCauley 在內的一些成員,收到這些備忘錄後,確認了他們早已形成的信念:Altman 的職位使他肩負著人類的未來,但他本人卻不值得信任。

Altman 當時正在拉斯維加斯參加一級方程式賽車,Sutskever 邀請他進行視訊通話,隨後宣讀了一份簡短聲明,解釋他不再是 OpenAI 的員工。董事會在法律建議下發布了一份公開聲明,僅表示解僱 Altman 是因為他「在溝通中並不總是坦誠」。許多 OpenAI 的投資者和高管感到震驚。微軟曾向 OpenAI 投資約 130 億美元,直到解僱發生的前一刻才得知此計畫。「我非常震驚,」微軟執行長 Satya Nadella 後來說。「我從任何人那裡都問不出什麼。」他與 LinkedIn 共同創辦人、OpenAI 投資者兼微軟董事會成員 Reid Hoffman 交談,後者開始四處打聽 Altman 是否犯了明確的過錯。「我根本不知道發生了什麼鬼事,」Hoffman 告訴我們。「我們在尋找挪用公款或性騷擾的證據,結果什麼都沒找到。」

其他商業夥伴同樣感到措手不及。當 Altman 打電話給投資者 Ron Conway 說他被解僱時,Conway 向正在與他共進午餐的眾議員 Nancy Pelosi 舉起了手機。「你最好趕快離開那裡,」她告訴 Conway。OpenAI 即將完成一筆來自 Thrive 的大額投資,這是一家由 Jared Kushner 的兄弟 Josh Kushner 創立的風險投資公司,Altman 與他相識多年。這筆交易將使 OpenAI 的估值達到 860 億美元,並允許許多員工套現數百萬美元的股權。Kushner 在與音樂製作人 Rick Rubin 開完會後,發現了 Altman 的未接來電。「我們立刻進入了戰爭狀態,」Kushner 後來說。

Altman 被解僱當天,他飛回了位於舊金山、價值 2700 萬美元的豪宅。那裡擁有灣區的全景視野,曾設有一個懸臂式無邊際泳池。他在那裡組建了一個他稱之為「某種流亡政府」的班底。Conway、Airbnb 共同創辦人 Brian Chesky 以及以強硬著稱的危機溝通經理 Chris Lehane 加入其中,有時每天透過視訊和電話商討數小時。Altman 的一些執行團隊成員在屋內走廊紮營。律師們在他臥室隔壁的家庭辦公室裡辦公。在失眠時,Altman 會穿著睡衣在他們身邊徘徊。當我們最近與 Altman 交談時,他將被解僱後的日子描述為「一種奇怪的神遊狀態」。

在董事會保持沉默的情況下,Altman 的顧問們為他的回歸製造了輿論。Lehane 堅稱這是一場由激進的「有效利他主義者」策劃的政變——這些人信奉一種專注於最大化人類福祉的信仰體系,他們開始將人工智慧視為生存威脅。(Hoffman 告訴 Nadella,解僱可能是因為「有效利他主義者的瘋狂」。)Lehane——據稱他引用 Mike Tyson 的座右銘是「每個人都有計畫,直到被揍了一拳」——敦促 Altman 發起激進的社交媒體運動。Chesky 與科技記者 Kara Swisher 保持聯繫,傳達對董事會的批評。

Altman 每天晚上六點都會中斷他的「戰情室」,來上一輪 Negroni 調酒。「你需要冷靜點,」他回憶自己當時說的話。「該發生的總會發生。」但他也補充說,他的電話記錄顯示他每天通話超過 12 小時。據一位知情人士透露,Altman 曾向當時擔任 OpenAI 臨時執行長、且曾為 Sutskever 的備忘錄提供材料的 Mira Murati 傳達,他的盟友們正在「全力以赴」,並「尋找壞事」來損害她以及其他反對他的人的名譽。(Altman 不記得這次對話。)

解僱發生後的幾小時內,Thrive 就暫停了計畫中的投資,並暗示只有在 Altman 回歸的情況下,交易才會完成——員工也才能拿到分紅。這段時間的簡訊顯示,Altman 與 Nadella 密切協調。(「這樣如何:Satya 和我的首要任務仍然是拯救 OpenAI,」Altman 在兩人草擬聲明時建議。Nadella 提出了替代方案:「確保 OpenAI 繼續蓬勃發展。」)微軟隨後宣布將為 Altman 和任何離開 OpenAI 的員工創建一個競爭性計畫。一封要求他回歸的公開信在組織內流傳。一些猶豫是否簽署的人收到了同事懇求的電話和訊息。最終,大多數 OpenAI 員工威脅要隨 Altman 一起離開。

董事會陷入了困境。「Ctrl+Z(復原),這是一個選擇,」Toner 說——撤銷解僱。「或者另一個選擇是公司分崩離析。」甚至連 Murati 最終也簽署了這封信。Altman 的盟友致力於爭取 Sutskever。Brockman 的妻子 Anna 在辦公室找到他,懇求他重新考慮。「你是個好人——你可以解決這個問題,」她說。Sutskever 後來在法庭證詞中解釋說:「我覺得如果我們走上一條 Sam 不會回歸的道路,那麼 OpenAI 就會被摧毀。」一天晚上,Altman 服用了安眠藥,卻被他的丈夫、一位名叫 Oliver Mulherin 的澳洲程式設計師叫醒,告訴他 Sutskever 正在動搖,人們都在勸 Altman 與董事會談談。「我在那種瘋狂的安眠藥迷霧中醒來,感到非常迷失方向,」Altman 告訴我們。「我當時想,我現在絕對不能跟董事會談。」

在一系列日益緊張的通話中,Altman 要求那些曾試圖解僱他的董事會成員辭職。「我必須在這種瘋狂的懷疑陰雲中收拾他們的爛攤子?」Altman 回憶起他最初對回歸的想法。「我當時想,絕對不可能。」最終,Sutskever、Toner 和 McCauley 失去了董事會席位。Quora 的創辦人 Adam D’Angelo 是唯一留任的原始成員。作為離職條件,離職成員要求對針對 Altman 的指控進行調查——包括他挑撥高管之間關係以及隱瞞財務糾葛。他們還推動建立一個能獨立監督外部調查的新董事會。但兩位新成員,前哈佛大學校長 Lawrence Summers 和前 Facebook 首席技術長 Bret Taylor,是在與 Altman 密切交談後選出的。「你會這樣做嗎,」Altman 發簡訊給 Nadella。「Bret、Larry Summers、Adam 作為董事會成員,我作為執行長,然後由 Bret 處理調查。」(McCauley 後來在證詞中表示,當之前考慮讓 Taylor 擔任董事會成員時,她曾擔心他對 Altman 的順從。)

在他被解僱不到五天後,Altman 復職了。員工們現在將這一刻稱為「閃現」(the Blip),取自漫威電影中的一個事件,角色們從存在中消失,然後又回到一個因他們缺席而發生深刻變化的世界。但關於 Altman 是否值得信任的爭論已經超出了 OpenAI 的董事會。那些促成他下台的同事指責他存在一種對於任何高管來說都不可接受、且對這種變革性技術的領導者來說極其危險的欺騙程度。「我們需要與其所掌握權力相稱的機構,」Murati 告訴我們。「董事會尋求反饋,我分享了我所看到的。我分享的一切都是準確的,我堅持我所說的一切。」另一方面,Altman 的盟友長期以來一直駁斥這些指控。解僱後,Conway 發簡訊給 Chesky 和 Lehane,要求發動公關攻勢。「這對 SAM 的名譽至關重要,」他寫道。他告訴《華盛頓郵報》,Altman 被「一個流氓董事會虐待了」。

OpenAI 此後已成為世界上最有價值的公司之一。據報導,它正準備進行首次公開募股,估值可能達到一兆美元。Altman 正在推動建設驚人規模的人工智慧基礎設施,其中一些集中在國外獨裁政權中。OpenAI 正在獲得廣泛的政府合約,為人工智慧在移民執法、國內監控和戰區自主武器中的使用制定標準。

Altman 透過宣揚一種願景來推動 OpenAI 的成長,他在 2024 年的一篇部落格文章中寫道:「驚人的勝利——修復氣候、建立太空殖民地以及發現所有物理學——最終將變得司空見慣。」他的言論幫助維持了歷史上初創公司中最快的現金消耗速度之一,並依賴於借貸巨額資金的合作夥伴。美國經濟越來越依賴少數幾家高槓桿的人工智慧公司,許多專家——有時也包括 Altman——警告稱該行業正處於泡沫之中。「有人將會損失一大筆錢,」他去年告訴記者。如果泡沫破裂,可能會引發經濟災難。如果他最樂觀的預測正確,他可能會成為地球上最富有、最強大的人之一。

在 Altman 被解僱後的一次緊張通話中,董事會敦促他承認存在欺騙模式。「這真是太爛了,」據通話中的人說,他反覆說道。「我無法改變我的個性。」Altman 說他不記得這次交流。「我可能想表達的是『我確實試圖成為一股團結的力量』,」他告訴我們,並補充說這種特質使他能夠領導一家極其成功的公司。他將批評歸因於一種傾向,特別是在他職業生涯早期,「過於迴避衝突」。但一位董事會成員對他的聲明提出了不同的解讀:「它的意思是『我有這種對人撒謊的特質,而且我不會停止。』」那些解僱 Altman 的同事是出於危言聳聽和個人仇恨,還是他們認為他不可信是正確的?

今年冬天的一個早晨,我們在舊金山的 OpenAI 總部見到了 Altman,這是我們為這篇報導與他進行的十幾次對話之一。該公司最近搬進了一對 11 層樓高的玻璃大樓,其中一棟先前由另一家科技巨頭 Uber 使用,其共同創辦人兼執行長 Travis Kalanick 曾被視為不可阻擋的天才——直到 2017 年,他才在投資者以道德疑慮為由的施壓下辭職。(Kalanick 現在經營一家機器人初創公司;他說最近在閒暇時,他使用 OpenAI 的 ChatGPT「來觸及量子物理學已知領域的邊緣」。)

一名員工帶我們參觀了辦公室。在一個充滿公共桌椅的寬敞空間裡,有一幅電腦科學家 Alan Turing 的動畫數位畫作;當我們經過時,它的眼睛跟隨著我們。這個裝置暗指圖靈測試,即 1950 年提出的一項思想實驗,探討機器能否令人信服地模仿人類。(在 2025 年的一項研究中,ChatGPT 通過測試的可靠性超過了真人。)通常,你可以與這幅畫互動。但我們的嚮導告訴我們,聲音功能已被禁用,因為它會不斷竊聽員工的對話,然後插嘴。在辦公室的其他地方,牌匾、宣傳冊和商品上展示著「感受 AGI」(Feel the AGI)的字樣。這句話最初與 Sutskever 有關,他用它來提醒同事注意通用人工智慧的風險——即機器達到人類認知能力的門檻。「閃現」之後,它變成了一句歡快的口號,預示著一個超級富足的未來。

我們在八樓一個看起來很普通的會議室裡見到了 Altman。「人們過去常跟我談論決策疲勞,我不明白,」Altman 告訴我們。「現在我每天都穿灰色毛衣和牛仔褲,甚至在衣櫃裡挑選哪件灰色毛衣——我都想,我真希望不必考慮這個。」Altman 外表年輕——他身材苗條,有著寬闊的藍眼睛和凌亂的頭髮——但他現在已經 40 歲了,他和 Mulherin 有一個透過代孕出生的 1 歲兒子。「我相信,擔任美國總統會是一份壓力大得多的工作,但在我認為我能勝任的所有工作中,這是我想像中最有壓力的一份,」他一邊說,一邊與我們中的一人對視,然後又與另一人對視。「我向朋友們解釋的方式是:『在我們推出 ChatGPT 的那天之前,這是世界上最有趣的工作。』我們正在進行這些重大的科學發現——我認為我們在過去幾十年裡完成了最重要的科學發現。」他垂下眼簾。「然後,自從 ChatGPT 發布以來,決策變得非常困難。」

Altman 在聖路易斯的富裕郊區克萊頓長大,是四個兄弟姐妹中的老大。他的母親 Connie Gibstine 是一名皮膚科醫生;他的父親 Jerry Altman 是一名房地產經紀人和住房活動家。Altman 從小上改革派猶太會堂,並就讀於一所私立預備學校,他形容那裡「不是那種你會真正站出來談論自己是同性戀的地方」。不過,總的來說,這個家庭富裕的郊區圈子相對自由。他說,當他 16 或 17 歲時,他在聖路易斯一個以同性戀為主的街區待得很晚,遭到了一次殘暴的肢體攻擊,並被辱以恐同字眼。Altman 沒有通報這起事件,他也不願向我們提供更多記錄在案的細節,稱更完整的敘述會「讓我看起來像是在操縱或博取同情」。他駁斥了這件事以及他的性取向對他的身份具有重要意義的說法。但是,他說,「可能這確實有一些深層的心理因素——我認為我已經克服了,但其實沒有——關於不想再有衝突。」

Altman 的兄弟在 2016 年告訴《紐約客》,Altman 童年的態度是「我必須贏,而且我負責一切」。他去了史丹佛大學,在那裡他參加了定期的校外撲克遊戲。「我想我從中學到的關於生活和商業的東西比在大學學到的還要多,」他後來說。

所有史丹佛學生都很有野心,但其中許多最有進取心的人會輟學。大二那年夏天,Altman 前往麻薩諸塞州,加入由著名軟體工程師 Paul Graham 共同創立的「初創公司孵化器」Y Combinator 的首批企業家行列。(Altman 的同屆學員包括 Reddit 和 Twitch 的創辦人。)Altman 的項目最終被稱為 Loopt,是一個原始的社交網路,利用翻蓋手機的位置來告訴朋友他們在哪裡。這家公司反映了他的幹勁,以及將模糊情況解釋為對自己有利的傾向。聯邦法規要求手機運營商能夠追蹤手機位置以用於緊急服務;Altman 與運營商達成協議,將這些功能用於公司用途。

Loopt 的大多數員工都喜歡 Altman,但有些人說,他誇大其詞的傾向讓他們感到震驚,即使是瑣碎的事情也是如此。有人回憶說,Altman 廣泛吹噓自己是乒乓球冠軍——「就像,密蘇里州高中乒乓球冠軍」——結果卻證明是辦公室裡球技最差的人之一。(Altman 說他可能是在開玩笑。)正如 Loopt 的資深員工 Mark Jacobstein(曾被投資者要求擔任 Altman 的「保姆」)後來在 Altman 的傳記《樂觀主義者》(The Optimist)中告訴 Keach Hagey 的那樣,「在『我想我或許能完成這件事』和『我已經完成了這件事』之間存在模糊地帶,其最毒的形式會導致 Theranos(Elizabeth Holmes 的詐欺初創公司)。」

據 Hagey 稱,擔心 Altman 的領導能力和缺乏透明度的高級員工小組曾兩次要求 Loopt 董事會解僱他。但 Altman 也激發了強烈的忠誠度。一位前員工被告知,一位董事會成員回應道:「這是 Sam 的公司,回去幹活。」(一位董事會成員否認了解僱 Altman 的嘗試是認真的。)Loopt 在獲取用戶方面舉步維艱,2012 年被一家金融科技公司收購。據一位熟悉該交易的人士稱,這次收購在很大程度上是為了幫助 Altman 保住面子。儘管如此,到 2014 年 Graham 從 YC 退休時,他已經招募 Altman 繼任總裁。「我在廚房裡問 Sam,」Graham 告訴《紐約客》。「他笑了,好像成功了。我從未見過 Sam 露出那種不受控制的笑容。就像你把紙團扔進房間另一頭的廢紙簍裡——就是那種笑容。」

Altman 的新角色使他在 28 歲時成為造王者。他的工作是挑選最飢渴、最有前途的企業家,將他們與最好的程式設計師和投資者聯繫起來,並幫助他們將初創公司發展成定義行業的壟斷企業(同時 YC 抽取 6% 或 7% 的股權)。Altman 監督了一個激進擴張的時期,將 YC 的初創公司名單從幾十家增加到幾百家。但幾位矽谷投資者開始認為他的忠誠度不一。一位投資者告訴我們,Altman 以「有選擇地對最好的公司進行個人投資,阻止外部投資者」而聞名。(Altman 否認阻止任何人。)Altman 曾擔任投資基金紅杉資本(Sequoia Capital)的「球探」,該計畫讓參與者投資早期初創公司並從利潤中抽取少量分成。據一位熟悉該交易的人士稱,當 Altman 對金融服務初創公司 Stripe 進行天使投資時,他堅持要更大份額,這讓紅杉的合夥人感到憤怒。該人士補充說,「這是一種『Sam 優先』的政策。」據 Altman 自己估計,他是其他約 400 家公司的投資者。(Altman 否認對 Stripe 交易的這種描述。大約 2010 年,他對 Stripe 進行了 1.5 萬美元的初始投資,佔股 2%。該公司現在估值超過 1500 億美元。)

到 2018 年,幾位 YC 合夥人對 Altman 的行為感到非常沮喪,以至於他們找到 Graham 進行投訴。Graham 和他的妻子、YC 創辦人 Jessica Livingston 顯然與 Altman 進行了一次坦誠的交談。之後,Graham 開始告訴人們,儘管 Altman 同意離開公司,但他在實踐中卻在抵制。Altman 告訴一些 YC 合夥人,他將辭去總裁職務,轉而擔任董事長。2019 年 5 月,一篇宣布 YC 有新總裁的部落格文章帶有一個星號:「Sam 正在過渡為 YC 董事長。」幾個月後,該文章被編輯為「Sam Altman 離開了 YC 的任何正式職位」;此後,這句話被完全刪除。然而,直到 2021 年,美國證券交易委員會的一份文件仍將 Altman 列為 Y Combinator 的董事長。(Altman 說他直到很久以後才知道這一點。)

多年來,無論是在公開場合還是在最近的證詞中,Altman 都堅持他從未被 YC 解僱,他告訴我們他並沒有抵制離開。Graham 在推特上表示,「我們不想讓他離開,只是讓他做出選擇」——在 YC 和 OpenAI 之間。在聲明中,Graham 告訴我們:「我們沒有法律權力解僱任何人。我們所能做的只是施加道德壓力。」然而在私下裡,他明確表示 Altman 是因為 YC 合夥人的不信任而被免職的。這段關於 Altman 在 Y Combinator 時期的描述是基於與幾位 YC 創辦人和合夥人的討論,以及同期的材料,所有這些都表明,這次分道揚鑣並非全然出於雙方自願。有一次,Graham 告訴 YC 同事,在 Altman 被免職之前,「Sam 一直在對我們撒謊。」

2015 年 5 月,Altman 發電子郵件給當時世界排名第 100 位的富豪 Elon Musk。像許多著名的矽谷企業家一樣,Musk 對一系列他認為具有生存緊迫性但對大多數人來說似乎是牽強假設的威脅感到困擾。「我們需要對 AI 非常小心,」他在推特上寫道。「可能比核武器更危險。」

Altman 通常是一個技術樂觀主義者,但他關於 AI 的言論很快變得末日化。在公開場合以及與 Musk 等人的私人通信中,他警告說該技術不應由追求利潤的巨型企業主導。「一直在思考是否有可能阻止人類開發 AI,」他寫信給 Musk。「如果無論如何都會發生,那麼由 Google 以外的人先做似乎是件好事。」他借用核武器的類比,提議建立一個「AI 曼哈頓計畫」。他概述了該組織將具備的總體原則——「安全應該是首要要求」;「顯然我們會遵守/積極支持所有監管」——他和 Musk 確定了一個名字:OpenAI。

與最初的曼哈頓計畫(一個導致原子彈產生的政府倡議)不同,OpenAI 將是私人資助的,至少在最初是這樣。Altman 預測,人工超級智慧——一個理論上的門檻,甚至超越 AGI,屆時機器將完全超越人類思維的能力——最終將創造足夠的經濟效益,以「捕獲宇宙中所有未來價值的光錐」。但他同時警告了生存危險。在某個時刻,國家安全影響可能會變得如此嚴重,以至於美國政府將不得不控制 OpenAI,或許是透過將其國有化並將其業務轉移到沙漠中的安全掩體中。到 2015 年底,Musk 被說服了。「我們應該說我們是以 10 億美元的資金承諾開始的,」他寫道。「我會承擔其他人沒有提供的任何部分。」

Altman 將 OpenAI 安置在 Y Combinator 的非營利部門,將其定位為一個內部慈善項目。他給 OpenAI 的新員工 YC 股票,並透過 YC 帳戶轉移捐款。有一次,該實驗室得到了 Altman 持有個人股份的 YC 基金的支持。(Altman 後來將這筆股份描述為微不足道。他告訴我們,他給員工的 YC 股票是他自己的。)

曼哈頓計畫的類比也適用於員工招聘。像核分裂研究一樣,機器學習是一個具有劃時代意義的小型科學領域,由一群古怪的天才主導。Musk 和 Altman,以及從 Stripe 加入的 Brockman,確信世界上只有少數幾位電腦科學家有能力取得所需的突破。Google 擁有巨大的現金優勢和多年的領先地位。「我們在人數和火力上都處於荒謬的劣勢,」Musk 後來在一封郵件中寫道。但「如果我們能隨著時間的推移吸引最有才華的人,並且我們的方向正確,那麼 OpenAI 就會獲勝。」

一個主要的招聘目標是 Sutskever,一位性情熾烈而內向的研究員,常被稱為他那一代最有天賦的 AI 科學家。Sutskever 出生於 1986 年的蘇聯,髮際線後移,眼睛深邃,習慣在選擇措辭時停頓,目光一眨也不眨。另一個目標是 Dario Amodei,一位生物物理學家,充滿了狂熱的能量,有緊張地扭動黑髮的習慣,並慣以長達數段的文字回覆只有一行的郵件。兩人在其他地方都有高薪工作,但 Altman 對他們極盡討好。他後來開玩笑說:「我跟蹤了 Ilya。」

Musk 的名氣更大,但 Altman 的手段更圓滑。他發郵件給 Amodei,他們安排在一家印度餐廳單獨共進晚餐。(Altman:「該死,我的 Uber 撞車了!晚到 10 分鐘。」Amodei:「哇,希望你沒事。」)像許多 AI 研究員一樣,Amodei 認為只有在證明該技術與人類價值觀「對齊」的情況下才應該構建它,這意味著它將按照人們想要的去做,而不會犯下潛在的致命錯誤——例如,遵循清理環境的指令,結果消滅了最大的污染源:人類。Altman 令人安心,反映了這些安全擔憂。

後來加入公司的 Amodei 多年來詳細記錄了 Altman 和 Brockman 的行為,標題為「我在 OpenAI 的經歷」(副標題:「私人:請勿分享」)。一份與 Amodei 有關的超過 200 頁的文件集,包括這些筆記以及內部郵件和備忘錄,已被矽谷的同事傳閱,但此前從未公開披露。Amodei 在筆記中寫道,Altman 的目標是建立「一個專注於安全的 AI 實驗室(『也許不是馬上,但儘快』)。」

2015 年 12 月,在 OpenAI 公開宣布的前幾個小時,Altman 發郵件給 Musk,提到有傳言稱 Google「明天要給 OpenAI 的每個人提供巨額反向報價,試圖殺死它」。Musk 回覆:「Ilya 給出明確的答覆了嗎?」Altman 向他保證 Sutskever 態度堅定。Google 每年給 Sutskever 提供 600 萬美元,OpenAI 無法與之匹敵。但是,Altman 吹噓說,「不幸的是,他們沒有『做正確的事』站在他們這邊。」

Musk 為 OpenAI 在舊金山米慎區的一家前手提箱工廠提供了一些辦公空間。Sutskever 告訴我們,對員工的推銷詞是「你要拯救世界」。

OpenAI 的創辦人認為,如果一切順利,人工智慧可以帶來一個後稀缺的烏托邦,自動化繁重的工作,治癒癌症,並解放人類去享受閒暇和富足的生活。但如果技術失控,或落入壞人之手,破壞可能是徹底的。中國可能用它來製造新型生物武器或先進無人機群;AI 模型可能智勝其監督者,在秘密伺服器上複製自己,以至於無法關閉;在極端情況下,它可能會奪取能源網、股市或核武庫的控制權。並非每個人都相信這一點,至少可以這麼說,但 Altman 一再肯定他相信。他在 2015 年的部落格上寫道,超人類機器智慧「不一定非要是科幻小說中那種天生邪惡的版本才能殺死我們所有人。一個更可能的場景是,它只是不在乎我們,但在試圖完成其他目標的過程中……把我們消滅了。」OpenAI 的創辦人誓言不將速度置於安全之上,該組織的公司章程使造福人類成為具有法律約束力的義務。如果 AI 將成為歷史上最強大的技術,那麼任何單獨控制它的人都將變得獨一無二地強大——創辦人將這種情況稱為「AGI 獨裁」。

Altman 告訴早期招聘人員,OpenAI 將保持純粹的非營利組織,程式設計師為了在那裡工作接受了大幅減薪。該公司接受了慈善撥款,包括來自當時稱為 Open Philanthropy 的 3000 萬美元,這是有效利他主義運動的一個中心,其承諾包括支持向全球貧困人口分發蚊帳。

Brockman 和 Sutskever 管理 OpenAI 的日常運作,而 Musk 和 Altman 仍然忙於他們的其他工作,大約每週過來一次。然而到了 2017 年 9 月,Musk 已經變得不耐煩了。在討論是否將 OpenAI 重組為營利性公司時,他要求獲得多數控制權。Altman 的回答因背景而異。他主要的一貫要求似乎是,如果 OpenAI 在一位執行長的控制下重組,那麼這個職位應該給他。Sutskever 似乎對這個想法感到不舒服。他代表自己和 Brockman 給 Musk 和 Altman 發了一封長長的、哀怨的郵件,主題是「誠實的想法」。他寫道:「OpenAI 的目標是讓未來變得美好,並避免 AGI 獨裁。」他繼續對 Musk 說:「所以建立一個你可以成為獨裁者的結構是一個壞主意。」他向 Altman 傳達了類似的擔憂:「我們不明白為什麼執行長的頭銜對你如此重要。你陳述的理由已經改變,很難真正理解是什麼驅動了它。」

「夥計們,我受夠了,」Musk 回覆。「要麼你們自己去做點什麼,要麼作為非營利組織繼續 OpenAI」——否則「我只是一個傻瓜,本質上是在為你們創建初創公司提供免費資金。」五個月後,他憤怒地退出了。(2023 年,他創立了一家營利性競爭對手 xAI。次年,他起訴 Altman 和 OpenAI 欺詐和違反慈善信託,聲稱他被「Altman 的長線騙局」「處心積慮地操縱」——Altman 利用他對 AI 危險的擔憂來騙取他的錢財。OpenAI 對這場訴訟提出強烈抗辯,官司目前仍在進行中。)

Musk 離開後,Amodei 和其他研究人員對 Brockman 的領導感到不滿,一些人認為他是個粗魯的經營者,而 Sutskever 通常被視為有原則但缺乏組織。在成為執行長的過程中,Altman 似乎對公司內部的不同派系做出了不同的承諾。他向一些研究人員保證 Brockman 的管理權限將會減少。但他們不知道的是,他還與 Brockman 和 Sutskever 達成了一項秘密握手協議:Altman 將獲得執行長頭銜;作為交換,如果另外兩人認為有必要,他同意辭職。(他對這種描述提出了異議,稱他擔任執行長職位只是因為被要求。所有三人都證實了該協議的存在,儘管 Brockman 說這是不正式的。「他單方面告訴我們,如果我們兩人都要求他下台,他就會下台,」他告訴我們。「我們反對這個想法,但他說這對他很重要。這純粹是利他主義。」)後來,董事會驚訝地發現其執行長實際上任命了自己的影子董事會。

內部記錄顯示,創辦人早在 2017 年就對非營利結構有私人疑慮。那一年,在 Musk 試圖奪取控制權後,Brockman 在日記中寫道,「不能說我們致力於非營利組織……如果三個月後我們在做 B-corp,那就是謊言。」Amodei 在他早期的筆記中回憶說,他曾向 Brockman 施壓詢問他的優先事項,Brockman 回答說他想要「金錢和權力」。Brockman 對此提出異議。他當時的日記條目顯示了相互衝突的本能。一篇寫道:「很高興不因此而致富,只要沒有其他人致富。」在另一篇中,他問道:「那麼我真正想要什麼?」他的回答包括「經濟上能帶我達到 10 億美元的東西。」

2017 年,Sutskever 在辦公室閱讀了 Google 研究人員剛剛發表的一篇論文,提出了「一種新的簡單網路架構,Transformer」。他從椅子上跳起來,跑到大廳,告訴他的研究員同事:「停止你們正在做的一切。就是這個。」Sutskever 看出,Transformer 是一項可能使 OpenAI 能夠訓練更複雜模型的創新。這一發現產生了第一個生成式預訓練 Transformer——這就是後來成為 ChatGPT 的種子。

我們了解到,隨著技術變得越來越強大,OpenAI 的大約十幾名頂尖工程師舉行了一系列秘密會議,討論 OpenAI 的創辦人(包括 Brockman 和 Altman)是否值得信任。在其中一次會議上,一名員工想起了英國喜劇二人組 Mitchell and Webb 的一個小品,其中一名東線的納粹士兵在清醒的時刻問道:「我們是壞人嗎?」

到 2018 年,Amodei 開始更公開地質疑創辦人的動機。「一切都是一套輪換的籌錢計畫,」他後來在筆記中寫道。「我覺得 OpenAI 需要的是一個明確的聲明,說明它要做什麼,不做什麼,以及它的存在將如何使世界變得更美好。」OpenAI 已經有了一個使命宣言:「確保通用人工智慧造福全人類。」但對 Amodei 來說,這對高管意味著什麼並不清楚,如果這有任何意義的話。Amodei 說,在 2018 年初,他開始為公司起草章程,並在與 Altman 和 Brockman 的數週對話中,提倡其最激進的條款:如果一個「價值對齊、具有安全意識的項目」在 OpenAI 之前接近構建出 AGI,公司將「停止競爭並開始協助該項目」。根據所謂的「合併與協助」條款,如果,比如說,Google 的研究人員先弄清楚如何構建安全的 AGI,那麼 OpenAI 可以自行關閉並將其資源捐贈給 Google。按照任何正常的企業邏輯,這是一個瘋狂的承諾。但 OpenAI 本不應該是一家正常的公司。

這一前提在 2019 年春天受到了考驗,當時 OpenAI 正在談判微軟的 10 億美元投資。儘管領導公司安全團隊的 Amodei 幫助向 Bill Gates 推銷了這筆交易,但團隊中的許多人對此感到焦慮,擔心微軟會加入凌駕於 OpenAI 道德承諾之上的條款。Amodei 向 Altman 提交了一份按優先級排列的安全要求清單,將保留「合併與協助」條款放在了首位。Altman 同意了這一要求,但在 6 月交易即將完成時,Amodei 發現增加了一項條款,賦予微軟阻止 OpenAI 進行任何合併的權力。「章程的 80% 就這樣被背叛了,」Amodei 回憶道。他與 Altman 對質,後者否認該條款存在。Amodei 大聲朗讀它,指著文字,最終迫使另一位同事直接向 Altman 確認其存在。(Altman 不記得這件事。)Amodei 的筆記描述了不斷升級的緊張遭遇,包括幾個月後的一次,Altman 召集他和他的妹妹 Daniela(她在公司從事安全和政策工作),告訴他們他從一位高級高管那裡得到「可靠消息」,說他們一直在策劃政變。筆記繼續寫道,Daniela「崩潰了」,並帶進了那位高管,後者否認說過任何話。正如一位聽取了這次交流簡報的人所回憶的那樣,Altman 隨後否認提出過該指控。「我甚至沒說過那樣的話,」他說。「你剛才說了,」Daniela 回答道。(Altman 說這並不是他的回憶,他只是指責 Amodei 兄妹有「政治行為」。)2020 年,Amodei、Daniela 和其他同事離開並創立了 Anthropic,現在是 OpenAI 的主要競爭對手之一。

Altman 繼續吹捧 OpenAI 對安全的承諾,特別是在潛在招聘人員在場時。2022 年底,四名電腦科學家發表了一篇論文,部分動機是擔心「欺騙性對齊」,即足夠先進的模型可能會在測試期間假裝表現良好,然後在部署後追求自己的目標。(這是聽起來像科幻小說的幾種 AI 場景之一——但在某些實驗條件下,它已經在發生。)論文發表幾週後,其中一位作者、加州大學柏克萊分校的博士生收到了 Altman 的電子郵件,他說他越來越擔心未對齊 AI 的威脅。他補充說,他正在考慮為此投入 10 億美元,許多 AI 專家認為這是世界上最重要的未解決問題,或許是透過設立獎項來激勵世界各地的研究人員研究它。儘管這位研究生「聽過關於 Sam 很滑頭的模糊傳言」,他告訴我們,Altman 的承諾表現贏得了他。他休了學術假加入 OpenAI。

但是,在 2023 年春天的幾次會議中,Altman 似乎動搖了。他不再談論設立獎項。相反,他提倡建立一個內部的「超級對齊團隊」。官方公告提到了公司的計算能力儲備,承諾該團隊將獲得「我們迄今為止獲得的計算能力的 20%」——這項資源價值可能超過 10 億美元。根據公告,這項努力是必要的,因為如果對齊問題仍未解決,AGI 可能會「導致人類被剝奪權力,甚至導致人類滅絕」。與 Sutskever 一起被任命領導該團隊的 Jan Leike 告訴我們:「這是一個非常有效的留人工具。」

然而,20% 的承諾蒸發了。四名在該團隊工作或與該團隊密切合作的人表示,實際資源僅佔公司計算能力的 1% 到 2%。此外,團隊中的一名研究人員說,「大多數超級對齊計算實際上是在最舊的叢集和最差的晶片上。」研究人員認為,更優越的硬體被保留用於營利活動。(OpenAI 對此提出異議。)Leike 向當時的公司首席技術長 Murati 抱怨,但她告訴他不要再堅持這一點——這個承諾從來都不是現實的。

大約在這個時候,一位前員工告訴我們,Sutskever「變得非常沉迷於安全」。在 OpenAI 的早期,他認為對災難性風險的擔憂是合法的,但很遙遠。現在,隨著他開始相信 AGI 即將到來,他的擔憂變得更加尖銳。前員工繼續說,有一次全體會議,「Ilya 站起來說,嘿,大家,未來幾年會有一個時刻,這家公司的每個人基本上都必須轉向研究安全,否則我們就完蛋了。」但超級對齊團隊在次年解散,沒有完成其使命。

到那時,內部訊息顯示,高管和董事會成員已經開始相信,Altman 的遺漏和欺騙可能會對 OpenAI 產品的安全性產生影響。在 2022 年 12 月的一次會議上,Altman 向董事會成員保證,即將推出的模型 GPT-4 中的各種功能已獲得安全小組的批准。董事會成員兼 AI 政策專家 Toner 要求提供文件。她得知最具爭議的功能——一個允許用戶針對特定任務「微調」模型,另一個將其部署為個人助理——並未獲得批准。當董事會成員兼企業家 McCauley 離開會議時,一名員工把她拉到一邊,問她是否知道印度發生的「漏洞」。Altman 在與董事會進行的數小時簡報中,忽略了提到微軟在印度發布了 ChatGPT 的早期版本,而沒有完成必要的安全審查。「它就這樣被完全忽視了,」當時的 OpenAI 研究員 Jacob Hilton 說。

儘管這些失誤沒有造成安全危機,但另一位研究員 Carroll Wainwright 表示,它們是「持續滑向重產品、輕安全」趨勢的一部分。GPT-4 發布後,Leike 寫信給董事會成員。「OpenAI 已經偏離了它的使命,」他寫道。「我們把產品和收入放在一切之上,其次是 AI 能力、研究和擴展,對齊與安全只排在第三位。」他接著說,「其他公司,比如 Google,正在從中學到的教訓是:他們應該更快部署、無視安全。」

McCauley 在給其他成員的電子郵件中寫道:「我認為我們肯定已經到了董事會應該提高審查力度的地步。」董事會成員試圖正視他們眼中日益嚴重的問題,但他們力有不逮。「坦率地說,你面對的是一群從未做成過任何事的二線人物,」前董事會成員 Sue Yoon 說。2023 年,公司正準備發布 GPT-4 Turbo 模型。正如 Sutskever 在備忘錄中詳述的,Altman 據稱告訴 Murati 該模型不需要安全批准,並援引公司總法律顧問 Jason Kwon 的說法作為依據。但當她透過 Slack 詢問 Kwon 時,他回覆:「呃……不懂 Sam 是從哪裡得來那個印象的。」(OpenAI 的一位代表表示此事「沒什麼大不了」;Kwon 至今仍是該公司高管。)

不久之後,董事會做出解僱 Altman 的決定——隨後全世界目睹他扭轉了這一局面。OpenAI 章程的一個版本至今仍掛在該組織的網站上,但熟悉 OpenAI 治理文件的人士表示,它已被稀釋到毫無意義的地步。去年 6 月,Altman 在個人部落格上談到人工超級智慧時寫道:「我們已經越過事件視界;起飛已經開始。」按照章程,這可以說正是 OpenAI 應該停止與其他公司競爭、轉而與它們合作的時刻。但在那篇題為「溫和奇點」(The Gentle Singularity)的文章中,他換上了一種新語氣,用熱情洋溢的樂觀取代了存亡之憂。「我們都會得到更好的東西,」他寫道。「我們將為彼此打造越來越美妙的東西。」他承認對齊問題仍未解決,但他重新定義了它——與其說是致命威脅,不如說是一種不便,就像那些引誘我們把時間浪費在刷 Instagram 上的演算法。

Altman 常被描述為他那一代最偉大的推銷員,無論是出於崇敬還是懷疑。Steve Jobs 是他的偶像之一,據說他投射出一個「現實扭曲力場」——一種不可動搖的自信,認為世界會順應他的願景。但即使是 Jobs 也從未告訴他的客戶,如果他們不買他的 MP3 播放器,他們所愛的人都會死。2008 年,當 Altman 23 歲時,他的導師 Graham 寫道:「你可以把他空投到一個充滿食人族的島嶼上,5 年後回來,他就是國王。」這一判斷並非基於 Altman 的業績(業績平平),而是基於他獲勝的意志,Graham 認為這種意志幾乎無法駕馭。當被建議不要將 YC 校友列入世界頂級初創公司創辦人名單時,Graham 還是把 Altman 放了進去。「Sam Altman 無法被這種脆弱的規則阻止,」他寫道。

Graham 這話是讚美。但 Altman 的一些最親密的同事對這種特質有了不同的看法。Sutskever 是在對 AI 安全日益憂心之後,才寫下那些關於 Altman 和 Brockman 的備忘錄。此後,它們在矽谷獲得了傳奇地位;在某些圈子裡,人們簡稱其為「Ilya 備忘錄」。與此同時,Amodei 也持續整理著筆記。這些文件連同其他與他相關的文件,記錄了他從審慎的理想主義走向警覺的轉變。他的措辭比 Sutskever 更激烈,對 Altman 的憤怒時隱時現——「他的話幾乎肯定是胡扯」——並對他自認未能糾正 OpenAI 航向一事感到懊悔。

這兩批文件都不包含一錘定音的鐵證。它們講述的是一連串被指控的欺騙與操縱,其中任何一項單獨來看,或許都會被聳聳肩帶過:Altman 據稱把同一份工作許諾給兩個人、就誰該出現在直播中說法反覆、在安全要求上含糊其辭。但 Sutskever 得出結論,這種行為「並不能創造有利於構建安全 AGI 的環境」。Amodei 和 Sutskever 從來算不上親密朋友,卻得出了相似的結論。Amodei 寫道:「OpenAI 的問題在於 Sam 本人。」

我們採訪了 100 多名對 Altman 的行事作風有第一手了解的人:OpenAI 的現任和前任員工及董事會成員;Altman 各處住宅的客人和工作人員;他的同事和競爭對手;他的朋友和敵人,以及幾位在矽谷唯利是圖的文化下亦敵亦友的人。(OpenAI 與《紐約客》的母公司 Condé Nast 訂有一項協議,允許 OpenAI 在一段有限期間內於其搜尋結果中顯示 Condé Nast 的內容。)

有些人為 Altman 的商業頭腦辯護,並把他的對手——尤其是 Sutskever 和 Amodei——斥為爭位失敗的覬覦者;另一些人則把這兩人描繪成輕信、心不在焉的科學家,或是歇斯底里的「末日論者」,被「自己正在構建的軟體會以某種方式活過來殺死他們」的妄想所困。前董事會成員 Yoon 認為,Altman「不是那種馬基雅維利式的惡棍」,而僅僅是「無能」到能說服自己相信其推銷說詞中不斷變動的現實。「他太沉迷於自己的信念了,」她說。「所以他做的事情,如果你活在現實世界裡,是說不通的。但他不活在現實世界裡。」

然而,我們交談過的大多數人都認同 Sutskever 和 Amodei 的判斷:Altman 擁有一種冷酷的權力意志,即便在那些把自己名字印上太空船的實業家中,也足以讓他與眾不同。「他不受真相的束縛,」那位董事會成員告訴我們。「他身上有兩種幾乎從未同時出現在一個人身上的特質。第一種是強烈的討好欲,在任何互動中都想被人喜歡;第二種是對欺騙他人的後果近乎反社會的漠不關心。」

那位董事會成員並不是唯一一個主動用上「反社會」這個詞的人。Altman 在 Y Combinator 第一批學員中的同期同學包括 Aaron Swartz——一位才華橫溢卻深陷困境的程式設計師,2013 年自殺身亡,如今在許多科技圈子裡被奉為聖人。去世前不久,Swartz 向幾位朋友表達了對 Altman 的擔憂。「你要明白,Sam 永遠不可信,」他告訴其中一人。「他是個反社會者。他什麼事都做得出來。」微軟的多位高階主管表示,儘管 Nadella 長期力挺,公司與 Altman 的關係已經變得緊張。「他歪曲、扭曲、重新談判、背棄協議,」其中一人說。今年早些時候,OpenAI 重申微軟是其「無狀態」(即無記憶)模型的獨家雲端供應商;同一天,它卻宣布了一項 500 億美元的交易,讓亞馬遜成為其 AI 代理企業平台的獨家經銷商。儘管轉售是被允許的,微軟高管仍認為 OpenAI 的計畫可能與微軟的獨家權利相衝突。(OpenAI 堅稱亞馬遜交易不會違反先前的合約;微軟代表表示,公司「相信 OpenAI 理解並尊重」其法律義務。)那位微軟高階主管談到 Altman 時說:「我認為,他最終以 Bernie Madoff 或 Sam Bankman-Fried 級別的騙子之姿被世人記住的機率雖小,但確實存在。」

Altman 不是技術天才——據他身邊許多人的說法,他在編程或機器學習方面都缺乏深厚的專業知識,多位工程師回憶他誤用或混淆基本技術術語。他在很大程度上是靠調動他人的金錢和技術才華建立起 OpenAI 的。這並不使他獨一無二——這只說明他是個商人。更引人注目的是,他有本事讓惶恐的工程師、投資者和對科技心存疑慮的公眾相信:他們各自的優先事項——即使彼此互斥——也正是他的優先事項。當這樣的人試圖阻擋他的下一步時,他通常能找到說辭將其化解,至少是暫時化解;而等到他們對他失去耐心時,他往往已經拿到了自己需要的東西。「他在紙面上建立起將在未來約束自己的結構,」前 OpenAI 研究員 Wainwright 說。「但當未來到來、該受約束的時候,他就會把那些結構統統廢掉。」

「他有說服力到令人難以置信。就像,絕地武士的心靈控制,」一位與 Altman 共事過的科技高管說。「他就是高出一個層次。」對齊研究中有一個經典的假想情境,涉及人類與強大 AI 之間的意志較量。研究人員通常認為,在這樣的較量中,AI 必勝無疑,就像棋藝大師對弈孩童。這位高管接著說,看著 Altman 在「閃現」期間智取身邊的人,就像看著「一個 AGI 掙脫盒子」。

在被解僱後的幾天裡,Altman 極力避免針對他的指控受到任何外部調查。他告訴兩個人,他擔心哪怕只是存在一項調查,都會讓他顯得有罪。(Altman 否認這一點。)但在請辭的董事會成員把獨立調查列為離任條件後,Altman 同意對「近期事件」進行一次「審查」。據參與談判的人士稱,兩位新任董事會成員堅持由他們掌控這次審查。Summers 憑藉其政界和華爾街的人脈,似乎為審查增添了可信度。(去年 11 月,有郵件曝光顯示 Summers 曾就追求一名年輕門生的戀情徵詢 Jeffrey Epstein 的建議,他隨後辭去了董事會職務。)OpenAI 聘請了曾主持 Enron 和 WorldCom 內部調查的知名律師事務所 WilmerHale 來執行這次審查。

六名接近這次調查的人士聲稱,它似乎刻意限制透明度。其中一些人說,調查人員起初並未聯繫公司內的關鍵人物。一名員工為此向 Summers 和 Taylor 投訴。「他們只對董事會風波那段狹窄時間裡發生的事感興趣,而不是他一貫的誠信紀錄,」這名員工回憶與調查人員的訪談時說。還有人不願分享對 Altman 的疑慮,因為他們覺得調查方沒有盡力確保匿名。「一切都表明,他們想要的結果就是宣告他無罪,」該員工說。(幾位參與其中的律師為這一過程辯護,稱:「這是一次獨立、縝密、全面的審查,事實把我們引向哪裡,我們就走向哪裡。」Taylor 也表示該審查「徹底且獨立」。)

企業調查的目的在於賦予正當性。在未上市公司,調查結果有時不會形諸文字——這可能是控制法律責任的一種方式。但在涉及公共醜聞的案件中,外界通常期待更高的透明度。Kalanick 於 2017 年離開 Uber 之前,該公司董事會聘請了一家外部事務所,後者向公眾發布了一份 13 頁的摘要。鑑於 OpenAI 的 501(c)(3) 地位和這次解僱的高度矚目,公司裡許多高管都預期會看到詳盡的調查結果。然而,2024 年 3 月,OpenAI 宣布 Altman 獲得澄清,卻沒有發布任何報告。公司只在網站上放出約 800 字的內容,承認發生了「信任崩潰」。

參與調查的人士表示,之所以沒有發布報告,是因為根本就沒有報告。調查結果僅以口頭簡報的形式,向 Summers 和 Taylor 兩人傳達。「審查並沒有得出『Sam 是砍倒櫻桃樹也要說實話的喬治·華盛頓』那種結論,」一位接近調查的人士說。但調查似乎並未聚焦於導致 Altman 被解僱的誠信問題,而是把大部分精力放在尋找明確的罪行上;並據此得出結論:他可以繼續擔任執行長。此後不久,在被解僱時一併遭踢出董事會的 Altman 重新回到了董事會。接近調查的人士告訴我們,不把報告寫成書面文件的決定,部分是基於 Summers 和 Taylor 私人律師的建議。(Summers 拒絕置評。Taylor 表示,既然已有口頭簡報,就「不需要正式的書面報告」。)

許多前任和現任 OpenAI 員工告訴我們,他們對這種缺乏披露的做法感到震驚。Altman 說,他相信所有在他復職後加入的董事會成員都聽取了口頭簡報。「這是一個徹頭徹尾的謊言,」一位直接了解情況的人說。一些董事會成員告訴我們,針對其誠信的持續質疑,可能會引發——用其中一人的話說——「再做一次調查的需要」。

書面紀錄的缺失有助於淡化這些指控;Altman 在矽谷與日俱增的地位也起到了同樣的作用。多位與 Altman 合作過的知名投資者告訴我們,他素有把支持其競爭對手的投資者拒於門外的名聲。「如果他們投了他不喜歡的東西,其他機會就輪不到他們了,」其中一人說。Altman 權力的另一個來源,是他龐大的投資版圖,有時甚至延伸進他的私人生活。他與多位前戀人有財務上的牽連:或是共同管理基金,或是領投,或是頻繁共同投資。這並不罕見——矽谷許多異性戀高管對自己的戀愛與性伴侶也是如此。(「你不得不這樣做,」一位知名執行長告訴我們。)「我顯然在分手之後和一些前任一起投資過。我認為這,就像,完全沒問題,」Altman 說。但這種關係結構提供了非同尋常的控制力。「它本質上造成一種非常、非常深的依賴,」一位接近 Altman 的人說。「而且通常是一輩子的依賴。」

就連前同事也難以倖免。Murati 於 2024 年離開 OpenAI,著手創辦自己的 AI 初創公司。Altman 的親密盟友 Josh Kushner 給她打了電話。他先是稱讚她的領導能力,接著發出了近乎隱晦的威脅:說他「擔心」她的「名譽」,而且前同事們如今把她視為「敵人」。(Kushner 透過代表表示,這一描述沒有「傳達完整的來龍去脈」;Altman 則說他對這通電話並不知情。)

擔任執行長之初,Altman 宣布 OpenAI 將創建一家由非營利組織持有的「利潤上限」公司。這種迂迴複雜的公司結構在 Altman 設計出來之前似乎並不存在。在轉換過程中,一位名叫 Holden Karnofsky 的董事會成員表示反對,認為非營利組織的價值被嚴重低估。「我無法昧著良心這麼做,」Karnofsky(他是 Amodei 的妹夫)說。根據同時期的筆記,他投了反對票。然而,在董事會一名律師表示他的異議「可能成為進一步調查」新結構合法性的理由後,他的投票被記錄為棄權,且據稱未經他同意——這可能構成商業紀錄造假。(OpenAI 告訴我們,幾名員工記得 Karnofsky 是棄權,並提供了把他的投票記為棄權的會議紀錄。)

去年 10 月,OpenAI 以營利性實體進行了「資本重組」。該公司吹捧其相關的非營利組織(現在稱為 OpenAI 基金會)為歷史上「資源最豐富」的基金會之一。但它現在是該公司 26% 的股東,其董事會成員除一人外,也是營利性董事會的成員。

在國會作證期間,Altman 被問到他是否賺了「很多錢」。他回答:「我在 OpenAI 沒有股權……我做這件事是因為我熱愛它。」考慮到他透過 YC 基金持有的間接股權,這是一個措辭謹慎的回答——但嚴格來說至今仍屬實。不過,包括 Altman 在內的幾個人向我們暗示,這種情況可能很快改變。「投資者說:我需要知道,困難時期來臨時你會留下來,」Altman 說,但補充目前沒有這方面的「積極討論」。根據法庭證詞,Brockman 持有的公司股份價值約 200 億美元;Altman 的份額想必更值錢。儘管如此,他告訴我們,驅動他的主要不是財富。一位前員工回憶他說過:「我不在乎錢。我更在乎權力。」

2023 年,Altman 與 Mulherin 在兩人位於夏威夷的一處房產舉行小型儀式,正式結婚。(九年前,他們在 Peter Thiel 的熱水浴缸裡相識。)他們在那處房產接待過一連串客人;據我們訪談過的人描述,那裡的光景不過是富人的標準消遣,並無特別驚人之處:私人廚師料理的餐點、黃金時段的遊船行程。一場新年派對以《倖存者》(Survivor)為主題;一張照片裡是許多赤膊微笑的男人,外加該節目的真正主持人 Jeff Probst。Altman 也曾在自家房產接待過規模較小的朋友聚會,其中至少有一次包括一場玩得火熱的脫衣撲克。(活動的一張照片中沒有 Altman;看不出誰贏了,但至少有三個人顯然輸了。)我們採訪過的多位前客人的說法,僅僅表明他是一位慷慨的主人。

儘管如此,關於 Altman 私生活的流言已被競爭對手利用和扭曲。商場上的惡鬥並不新鮮,但 AI 行業內的競爭已變得格外兇殘。(一位 OpenAI 高管向我們描述時用了「莎士比亞式」一詞,並補充說,「遊戲的正常規則已經不適用了。」)與 Musk 有直接關聯(至少在一個案例中受其出資)的中間人,散發了數十頁關於 Altman 的詳盡負面調查資料。這些資料反映出大範圍的監視:記錄了與他有關的空殼公司、親密夥伴的個人聯絡資訊,甚至包括在同性戀酒吧對據稱的性工作者所做的訪談。其中一名 Musk 的中間人聲稱,Altman 的航班和他出席的派對都在被追蹤。Altman 告訴我們:「我不認為有誰比我被更多私家偵探查過。」

一些極端的說法已經流傳開來。右翼主播 Tucker Carlson 在沒有任何明顯證據的情況下暗示,Altman 與一名吹哨人的死亡有關。這一說法連同其他傳言,已被競爭對手加以放大。Altman 的妹妹 Annie 在訴訟中以及接受我們採訪時聲稱,他對她進行了多年的性虐待,從她 3 歲、他 12 歲時開始。(我們無法證實 Annie 的說法;Altman 否認這一指控,他的兄弟們和母親也稱其「完全不實」,是「給我們整個家庭帶來巨大痛苦」的根源。在記者 Karen Hao 為其著作《AI 帝國》(Empire of AI)所做的採訪中,Annie 表示,關於虐待的記憶是她成年後在閃回中恢復的。)

多名任職於競爭對手公司和投資機構的人向我們暗示,Altman 曾對未成年人有性方面的企圖——這種說法在矽谷經久不散,但看來並不屬實。我們花了數月調查此事,進行了數十次採訪,找不到任何支持它的證據。「這是競爭對手令人作嘔的行徑,我認為也是他們企圖在我們即將開庭的案件中污染陪審團的一環,」Altman 告訴我們。「雖然不得不說出這些話本身就很荒謬,但任何聲稱我與未成年人發生性關係、召性工作者或參與謀殺的說法,都完全不屬實。」他還補充,對我們花了幾個月「如此積極地調查此事」,他「有點心存感激」。

Altman 承認與已達法定年齡的年輕男性交往過。我們與他的幾位伴侶談過,他們告訴我們並不覺得這有什麼問題。然而,出自 Musk 中間人的負面調查檔案把這一點當成攻擊路線。(這些檔案裡有關於「twink 大軍」和「糖爹性癖」之類淫穢且未經證實的內容。)「我認為這背後有很多恐同情緒在推波助瀾,」Altman 說。科技記者 Swisher 對此表示同意。「所有這些有錢人都在做瘋狂的事,比我聽過的任何關於 Sam 的事都瘋狂,」她告訴我們。「但他是舊金山的一個同性戀者,」她補充說,「所以這一點被武器化了。」

十多年來,社交媒體高管一直承諾他們能改變世界,而且幾乎沒有任何負面代價;他們把想讓自己放慢腳步的立法者斥為區區盧德分子,最終招致兩黨一致的鄙夷。相形之下,Altman 給人的印象是令人耳目一新的盡責。他非但不抗拒監管,反而近乎懇求監管。2023 年在參議院司法委員會作證時,他提議設立一個新的聯邦機構來監督先進 AI 模型。「如果這項技術出錯,它可能會錯得很離譜,」他說。以與科技執行長們針鋒相對著稱的路易斯安那州參議員 John Kennedy 似乎聽得入了迷,以手托腮,甚至建議或許該由 Altman 親自來執行這些規則。

但就在 Altman 公開歡迎監管的同時,他卻在私下遊說反對監管。據《時代》雜誌報導,2022 年和 2023 年,OpenAI 成功施壓,削弱了歐盟一項原本會讓大型 AI 公司受到更多監督的立法努力。2024 年,加州州議會提出一項強制對 AI 模型進行安全測試的法案,其中的條款就包括類似 Altman 在國會證詞中倡議的措施。OpenAI 公開反對該法案,私下則開始發出威脅。「我會說,過去一年裡,我們見識了 OpenAI 越來越狡猾、越來越具欺騙性的行為,」一位立法助理告訴我們。

投資者 Conway 遊說州政治領導人,包括 Nancy Pelosi 和 Gavin Newsom,以否決該法案。最終,它在兩黨支持下在議會通過,但 Newsom 否決了它。今年,支持 AI 監管的國會候選人面臨由 Leading the Future 資助的對手,這是一個致力於破壞此類限制的新「親 AI」超級政治行動委員會(Super PAC)。OpenAI 的官方立場是它不會向此類超級政治行動委員會捐款。「這個問題超越了黨派政治,」Lehane 最近告訴 CNN。然而,Leading the Future 的主要捐助者之一是 Greg Brockman,他承諾了 5000 萬美元。(今年,Brockman 和他的妻子向支持川普的超級政治行動委員會 MAGA Inc. 捐贈了 2500 萬美元。)

OpenAI 的活動已經超出傳統遊說的範疇。去年,加州參議院提出了一項後續法案。一天晚上,任職於非營利組織 Encode、曾協助起草該法案的 29 歲律師 Nathan Calvin 正在家中與妻子共進晚餐,一名送達員送來了 OpenAI 的傳票。該公司聲稱是在尋找 Musk 暗中資助其批評者的證據,但傳票要求 Calvin 交出與該州參議院法案相關的所有私人通訊。「他們大可直接問我們:『你是否曾與 Elon Musk 交談,或從他那裡拿過錢?』——我們沒有,」Calvin 告訴我們。該法案的其他支持者,以及一些批評 OpenAI 營利性重組的人,也收到了傳票。「他們基本上是在對人窮追猛打,嚇得大家閉嘴,」執掌 James Irvine 基金會的 Don Howard 說。(OpenAI 聲稱這屬於標準法律程序的一部分。)

Altman 長期以來一直支持民主黨。「我對強人獨裁者講恐懼故事、煽動人們聯手對付弱者的做法深感警惕,」他告訴我們。「那是出於我猶太人的一面,而不是同性戀的一面。」2016 年,他支持 Hillary Clinton,並稱川普是「美國前所未見的威脅」。2020 年,他向民主黨和 Biden Victory Fund 捐款。拜登執政期間,Altman 至少六次與白宮會面。他參與協助制定了一項篇幅冗長的行政命令,確立了首個聯邦 AI 安全測試制度及其他防護機制。拜登簽署時,Altman 稱其為「良好的開端」。

2024 年,隨著拜登民調下滑,Altman 的措辭開始轉變。「我相信無論這次選舉結果如何,美國都會沒事,」他說。川普勝選後,Altman 向其就職基金捐了 100 萬美元,隨後在就職典禮上與網紅 Jake 和 Logan Paul 自拍。在 X 上,Altman 以他一貫的全小寫風格寫道:「最近更仔細地觀察 @potus,真的改變了我對他的看法(我真希望自己當初多做些獨立思考……)。」川普重返白宮的第一天,就廢除了拜登關於 AI 的行政命令。「他找到了一套讓川普政府照他的意思辦事的有效方法,」一位拜登政府高級官員談到 Altman 時說。

Musk 繼續在公開場合抨擊 Altman,稱他為「騙子 Altman」(Scam Altman)和「狡詐 Sam」(Swindly Sam)。(當 Altman 在 X 上抱怨自己訂購的特斯拉時,Musk 回嗆:「你偷了一整個非營利組織。」)然而在華盛頓,Altman 似乎技高一籌。Musk 花了超過 2.5 億美元幫助川普重返白宮,並在白宮工作了幾個月;隨後他離開華盛頓,還在這個過程中賠上了與川普的關係。

Altman 如今是川普青睞的商業大亨之一,甚至曾陪同他前往溫莎城堡拜會英國王室。Altman 和川普每年會通幾次話。「你就,就像,直接打給他,」Altman 說。「我們不是稱兄道弟的交情。但如果我需要跟他談什麼事,我會打。」去年川普在白宮宴請科技領袖時,Musk 明顯缺席;Altman 則坐在總統對面。「Sam,你是一位了不起的領袖,」川普說。「你之前告訴我的那些事,簡直令人難以置信。」

多年來,Altman 不斷把對 AGI 的追求比作曼哈頓計畫。正如 J. Robert Oppenheimer 以「從納粹手中拯救世界」的激昂號召,說服物理學家背井離鄉前往洛斯阿拉莫斯,Altman 也利用人們對其技術地緣政治風險的恐懼。依受眾而定,這個類比時而被他用來鼓吹加速,時而用來主張審慎。2017 年夏天,在與美國情報官員的一次會議上,他聲稱中國已啟動一項「AGI 曼哈頓計畫」,OpenAI 需要數十億美元的政府資金才能跟上。被要求提供證據時,Altman 說:「我聽說了一些事情。」這是他多次做出該說法的會議中的第一次。其中一次會後,他告訴一位情報官員會再補上證據,但他從未兌現。那位官員調查中國的計畫後,得出結論:沒有證據顯示它存在。「它只是被當成推銷話術。」(Altman 說他不記得曾那樣描述北京的行動。)

對於更具安全意識的受眾,Altman 則用這個類比暗示相反的結論:AGI 必須審慎推進、輔以國際協調,否則後果將是災難性的。2017 年,Amodei 聘請前公益律師 Page Hedley 擔任 OpenAI 的政策與倫理顧問。在給高管的早期 PowerPoint 簡報中,Hedley 勾勒了 OpenAI 如何避免一場「災難性」軍備競賽——或許可以組建一個 AI 實驗室聯盟,最終與類似北約的國際機構協調,確保技術被安全部署。據 Hedley 回憶,Brockman 不明白這如何能幫公司打敗競爭對手。「無論我說什麼,」Hedley 告訴我們,「Greg 總是繞回『那我們要怎麼籌到更多錢?我們要怎麼贏?』」根據多次採訪和同時期的紀錄,Brockman 提出了一個反提案:OpenAI 可以讓世界大國——包括中國和俄羅斯——相互較勁,從中牟利,或許就靠在它們之間掀起競標戰。據 Hedley 所言,其思路似乎是:這一套對核武器管用,為什麼對 AI 就不管用?

他感到震驚:「那個沒有人質疑的前提是:我們談論的可是史上發明過的最具毀滅性的技術——要是我們把它賣給普丁呢?」(Brockman 堅稱他從未認真考慮過把 AI 模型拍賣給各國政府。「當時是在宏觀層面討論可能的框架長什麼樣子,以鼓勵國家間合作——類似 AI 版國際太空站的構想,」OpenAI 代表說。「把它說成除此之外的任何東西,都是徹頭徹尾的荒謬。」)

腦力激盪會議常會冒出古怪的點子。Hedley 原本希望這個後來在內部被稱為「國家計畫」的構想會被放棄。然而,據幾位參與者和同時期文件顯示,OpenAI 高管似乎對它越來越起勁。據時任 OpenAI 政策總監的 Jack Clark 所述,Brockman 的目標是「打造一個本質上是囚徒困境的局面,讓所有國家都必須給我們錢」,而且這「隱含地讓不給我們錢變得有點危險」。一名資淺研究員回憶,當該計畫在公司會議上被詳細闡述時,他心想:「這完全是瘋了。」

高管們至少與一位潛在捐助者討論過這套做法。但當月稍晚,在幾名員工放話要辭職後,該計畫被放棄了。Altman「會因此失去員工,」Hedley 說。「在 Sam 的盤算裡,我覺得這一點的分量,始終重過『這不是個好計畫,因為它可能引發大國之間的戰爭』。」

國家計畫的流產並沒有讓 Altman 卻步,他接著追求同一主題的變體。2018 年 1 月,他在貝爾艾爾酒店召開了一場「AGI 週末」;這家老好萊塢度假村有開滿粉紅九重葛的花園,以及養著真天鵝的人工池塘。與會者包括當時任職牛津大學的哲學家 Nick Bostrom(他已成為 AI 末日的先知)、阿聯酋 AI 部長兼 AI 推動者 Omar Al Olama,以及至少七位億萬富翁。心存安全憂慮的與會者被告知,這將是一次思考社會如何迎接通用人工智慧顛覆性降臨的機會;投資者則抱著聽推銷的期待前來。

白天的時間在一間時髦的會議室裡度過,賓客們輪流發表演講。(LinkedIn 共同創辦人 Hoffman 闡述了把佛教慈悲心寫進 AI 的可能性。)壓軸演講者是 Altman,他帶來一份推銷簡報,描述一種「可兌換為 AGI 注意力」的全球加密貨幣:一旦 AGI 達到最高效用、並且「反邪惡」,世界各地的人們就會爭相購買 OpenAI 伺服器上的時間。Amodei 在筆記中寫道:「這個想法乍看就很荒謬(最後弗拉基米爾·普丁會持有一部分代幣嗎……?)回想起來,這是關於 Sam 的眾多危險信號之一,我當初本該更認真看待。」這項計畫看上去像是圈錢之舉,Altman 卻把它包裝成 AI 安全的福音來推銷。他的一張投影片寫著:「我想讓盡可能多的人站上『好』的一隊,贏下來,並做正確的事。」另一張寫著:「請把笑聲留到演講結束。」

Altman 的籌資推銷多年來不斷演變,但始終反映著一個事實:開發 AGI 需要驚人的資本。他遵循一條相對簡單的「擴展定律」(scaling law):用於訓練模型的資料和計算能力越多,模型似乎就越聰明。而實現這一過程所需的專用晶片極其昂貴。僅最近一輪融資,OpenAI 就籌集了超過 1200 億美元——這是史上最大的私人融資輪,金額是史上最大 IPO 的四倍。「當你想想有哪些實體每年能隨手花掉 1000 億美元,世界上其實沒幾個,」一位科技高管兼投資者告訴我們。「有美國政府,有四五家最大的美國科技公司,再來就是沙烏地人和阿聯酋人——基本上就這些。」

Altman 最初把重點放在沙烏地阿拉伯。2016 年,他在舊金山費爾蒙特酒店的一場晚宴上首次見到沙烏地王儲、實際上的統治者 Mohammed bin Salman。Hedley 回憶,此後 Altman 便稱這位王儲為「朋友」。據 Hedley 的筆記,2018 年 9 月,Altman 說:「我正在決定我們要不要從沙烏地公共投資基金(PIF)那裡接受數百億美元。」

次月,據報導,一支聽命於 bin Salman 的暗殺小組勒死了批評該政權的《華盛頓郵報》記者 Jamal Khashoggi,並用骨鋸肢解了他的遺體。一週後,Altman 加入 Neom 諮詢委員會的消息被公布——Neom 是 bin Salman 想在沙漠中建造的「未來之城」。「Sam,你不能加入這個委員會,」如今任職 Anthropic 的政策總監 Clark 回憶自己當時這樣對 Altman 說。Altman 起初為自己的參與辯護,告訴 Clark,Jared Kushner 向他保證沙烏地人「沒有幹這件事」。(Altman 不記得此事。Kushner 則說兩人當時沒有聯繫。)

隨著 bin Salman 的角色日益清晰,Altman 退出了 Neom 委員會。然而在幕後,一位曾被 Altman 徵詢意見的政策顧問回憶,Altman 把這件事視為暫時的挫折,追問自己是否仍能以某種方式從 bin Salman 那裡拿到錢。「問題不是『這到底是不是一件壞事?』」該顧問說。「而是『如果我們做了,後果會是什麼?會有出口管制問題嗎?會有制裁嗎?就像,我能全身而退嗎?』」

到那時,Altman 已盯上另一個資金來源:阿拉伯聯合大公國。該國正處於一場為期 15 年、要把自己從石油國家轉型為科技中心的努力之中。這項工程由總統的兄弟、國家情報首長 Sheikh Tahnoon bin Zayed al-Nahyan 督導。Tahnoon 經營著國家控制的 AI 企業集團 G42,並掌管 1.5 兆美元的主權財富。2023 年 6 月,Altman 到訪阿布達比,會見 Olama 和其他官員。在一場政府支持的活動上,他說該國「早在 AI 流行之前就一直在談論它」,並勾勒出中東在 AI 未來中居於「核心地位」的願景。

向波斯灣國家籌資已是許多大企業的慣常做法,但 Altman 追求的是一幅更宏大的地緣政治圖景。2023 年秋天,他開始悄悄為一項日後被稱為 ChipCo 的計畫招募人才:由波斯灣國家出資數百億美元,興建巨型微晶片代工廠和資料中心,其中一部分將設在中東。Altman 曾向如今執掌 Meta AI 部門的 Alexandr Wang 兜售一個領導職位,並告訴他亞馬遜創辦人 Jeff Bezos 可以領導這家新公司。Altman 向阿聯酋方面尋求巨額資金。「據我所知,這整件事發生時,董事會完全被蒙在鼓裡,」那位董事會成員說。Altman 試圖為該項目延攬的研究員 James Bradbury 回憶自己拒絕了他。「我的第一反應是:『這會成功,但我不確定自己想不想讓它成功,』」他說。

AI 能力可能很快取代石油或濃縮鈾,成為決定全球權力平衡的資源。Altman 就曾表示,計算能力是「未來的貨幣」。平常,資料中心設在哪裡也許無關緊要,但許多美國國家安全官員對把先進 AI 基礎設施集中到波斯灣的獨裁政權手中深感不安。阿聯酋的電信基礎設施嚴重依賴與中國政府關係密切的科技巨頭華為的硬體,且據報導,阿聯酋過去曾向北京洩露美國技術。情報機構擔心,運往阿聯酋的先進美國微晶片可能落入中國工程師之手。中東的資料中心也更容易遭受軍事打擊;最近幾週,伊朗就轟炸了巴林和阿聯酋的美國資料中心。此外,假設性地說,波斯灣君主國可以徵用一座美國所有的資料中心,用它打造威力不成比例的強大模型——這正是「AGI 獨裁」情境的一個版本,只不過發生在真正的獨裁政權之下。

被解僱後,Altman 最倚重的人是 Airbnb 共同創辦人 Chesky——Altman 最堅定的擁護者之一。「看著我的朋友那樣凝視深淵,讓我開始質疑關於經營一家公司究竟意味著什麼的一些根本問題,」Chesky 告訴我們。次年,在一場 Y Combinator 校友聚會上,他做了一次即興演講,一講就是兩個小時。「感覺就像一場團體治療,」他說。結論是:經營自己創辦的公司時,你的直覺就是最好的直覺;任何跟你唱反調的人,都是在對你進行煤氣燈操縱(gaslighting)。「你並不瘋,儘管為你工作的人都說你瘋了,」Chesky 說。Paul Graham 在一篇談及這場演講的部落格文章中,為這種不服管束的姿態起了個名字:創辦人模式(Founder Mode)。

自「閃現」以來,Altman 一直處於創辦人模式。2024 年 2 月,《華爾街日報》刊出了對 Altman ChipCo 願景的描述:他把它構想為一個由 5 兆至 7 兆美元投資支撐的聯合實體。(「去他的,為什麼不是 8,」他在 X 上寫道。)許多員工正是這樣才得知該計畫的。「每個人都,就像,『等等,什麼?』」Leike 回憶道。Altman 在內部會議上堅稱,安全團隊「一直知情」。Leike 發了一則訊息,敦促他不要造成「這項計畫已獲批准」的錯誤印象。

拜登政府期間,Altman 曾探詢取得安全許可的可能,以便參與機密的 AI 政策討論。但協助統籌這一流程的 RAND 公司工作人員表達了疑慮。「他一直在積極向外國政府籌集『數千億美元』,」其中一人寫道。「阿聯酋最近還送了他一輛車。(我猜那是一輛非常好的車。)」這位工作人員接著寫道:「我能想到唯一一個帶著這種規模的外國財務往來走過這一流程的人是 Jared Kushner,而裁定者當時建議不要授予他許可。」Altman 最終退出了該流程。「他在推進這些交易關係,主要是和阿聯酋人,這在我們一些人眼中是一連串危險信號,」一位參與過與 Altman 對話的政府高級官員告訴我們。「政府裡很多人並不百分之百信任他。」

當我們問起 Tahnoon 送的禮物時,Altman 說:「我不會說他具體送了我什麼。但他和其他世界領導人……都送過我禮物。」他補充說:「我們有一項標準政策——也適用於我——任何潛在商業夥伴送的每一份禮物都必須向公司申報。」Altman 至少擁有兩輛超級跑車:一輛通體全白的 Koenigsegg Regera,價值約 200 萬美元;以及一輛紅色的 McLaren F1,價值約 2000 萬美元。2024 年,有人目擊 Altman 駕駛那輛 Regera 穿越納帕。幾秒鐘的影片在社交媒體上流傳:Altman 坐在低矮的桶形座椅裡,從一台亮閃閃的白色機器的車窗向外張望。一位與 Musk 結盟的科技投資者把影片發上 X,寫道:「我也要去創辦一個非營利組織了。」

2024 年,Altman 帶兩名 OpenAI 員工參觀 Sheikh Tahnoon 價值 2.5 億美元的超級遊艇 Maryah。Maryah 是世界上同類船隻中最大的一艘,配有直升機坪、夜店、電影院和海灘俱樂部。在 Tahnoon 的武裝保安之間,Altman 的員工據說顯得格格不入;其中至少一人後來告訴同事,那次經歷令他不安。Altman 後來在 X 上稱 Tahnoon 為「親愛的私人朋友」。

Altman 繼續與拜登政府會面;該政府已制定政策,要求敏感技術出口須經白宮批准。多位政府官員走出這些會議時,都對 Altman 在中東的野心感到不安。據這些官員說,他經常發表宏大的論斷,包括稱 AI 為「新的電力」。2018 年,他說 OpenAI 計劃向一家名為 Rigetti Computing 的公司購買一台功能完備的量子電腦——就連會議室裡的其他 OpenAI 高管也是頭一回聽說,而 Rigetti 距離能賣出可用的量子電腦還遙遙無期。在一次會議上,Altman 聲稱到 2026 年,遍布美國的核融合反應爐網路將為 AI 熱潮供電。那位高級政府官員說:「我們當時想,『好吧,你知道,如果這是真的,如果他們真把核融合搞成了,那可是大新聞。』」拜登政府最終拒絕批准。「我們不會在阿聯酋製造先進晶片,」商務部的一位主管告訴 Altman。

川普就職典禮前四天,《華爾街日報》報導,Tahnoon 向川普家族支付了 5 億美元,換取其加密貨幣公司的股份。次日,Altman 與川普通了 25 分鐘電話,兩人討論了公布某個版本的 ChipCo,時機安排得恰好能讓川普把功勞攬在自己身上。川普重返白宮的第二天,Altman 站在羅斯福廳宣布了 Stargate——一個 5000 億美元的合資項目,旨在於全美各地打造龐大的 AI 基礎設施網路。

5 月,政府撤銷了拜登對 AI 技術的出口限制。Altman 與川普一同前往沙烏地王廷會見 bin Salman。大約在同一時間,沙烏地方面大張旗鼓地宣布在王國內成立一家由國家支持的巨型 AI 公司,備有數十億美元用於國際合作。約一週後,Altman 擬定了 Stargate 擴展至阿聯酋的計畫:該公司打算在阿布達比興建一座資料中心園區,佔地是中央公園的 7 倍,耗電量大約相當於整個邁阿密市。「事實是,我們正在建造傳送門,我們真的在召喚外星人,」一位前 OpenAI 高管說。「傳送門目前在美國和中國各有一座,而 Sam 在中東又加了一座。」他接著說,「我認為,明白這件事應該有多可怕,是極其重要的。這是迄今為止所做過的最魯莽的事。」

安全承諾的流失已成為行業常態。Anthropic 創立的前提是:只要有正確的結構和領導,就能防止安全承諾在商業壓力下瓦解。其中一項承諾是「負責任的擴展政策」,要求 Anthropic 在無法證明更強大模型的安全性時,停止訓練它們。2 月,隨著公司獲得 300 億美元新資金,它削弱了這一承諾。在某些方面,Anthropic 仍比 OpenAI 更重視安全,但前政策總監 Clark 說,「資本市場體系說:再快一點。」他補充說,「這個決定應該由世界來做,而不是由公司來做。」去年,Amodei 向 Anthropic 員工發送備忘錄,披露公司將尋求阿拉伯聯合大公國和卡達的投資,並承認這可能讓「獨裁者」發財。(像許多作者一樣,我們也是一宗集體訴訟的當事人,該訴訟指控 Anthropic 未經許可使用我們的書籍訓練其模型。Condé Nast 已選擇與 Anthropic 就該公司使用 Condé Nast 及其子公司出版的某些書籍達成和解協議。)

2024 年,Anthropic 與矽谷最鷹派的國防承包商之一 Palantir 合作,將其 AI 模型 Claude 直接推入軍事生態系統。Anthropic 成為五角大廈最機密環境中使用的唯一 AI 承包商。去年,五角大廈授予該公司進一步的 2 億美元合約。1 月,美國軍方發動了一次午夜突襲,俘虜了委內瑞拉總統 Nicolás Maduro。據《華爾街日報》報導,Claude 被用於這次機密行動中。

但 Anthropic 與政府之間出現了嫌隙。幾年前,OpenAI 就從其政策中刪除了「不得將其技術用於軍事與戰爭」的全面禁令;最終,Anthropic 的競爭對手——包括 Google 和 xAI——也同意讓軍方將其模型用於「一切合法目的」。Anthropic 的政策禁止其技術被用於全自主武器或國內大規模監控;它在這些問題上寸步不讓,拖慢了全面修訂合約的談判。2 月下旬的一個週二,國防部長 Pete Hegseth 把 Amodei 召到五角大廈,下了最後通牒:公司必須在當週週五下午 5 點 01 分之前放棄這些禁令。截止日前一天,Amodei 拒絕照辦。Hegseth 在 X 上發文稱,他將把 Anthropic 列為「供應鏈風險」——這種毀滅性的黑名單,歷來只留給像華為這樣與外國對手有牽連的公司——並在幾天後兌現了威脅。

OpenAI 和 Google 的數百名員工簽署了一封題為「我們不會被分裂」的公開信,聲援 Anthropic。Altman 在內部備忘錄中寫道,這場爭端是「整個行業的問題」,並聲稱 OpenAI 與 Anthropic 有著相同的道德底線。但 Altman 當時已與五角大廈談判了至少兩天。負責研究與工程的國防部副部長 Emil Michael 在物色 Anthropic 的替代者時聯繫了 Altman。「我需要儘快找到替代方案,」Michael 回憶道。「我打給 Sam,他願意挺身而出。我認為他是個愛國者。」Altman 問 Michael:「我能為國家做什麼?」看起來,他早已知道答案。OpenAI 缺乏進入 Anthropic 技術所嵌入的那些機密系統所需的安全認證;但週五早上宣布的一筆 500 億美元交易,把 OpenAI 的技術整合進了五角大廈數位基礎設施的關鍵一環——亞馬遜網路服務(AWS)。當晚,Altman 在 X 上宣布,軍方今後將使用 OpenAI 的模型。

從某些指標來看,Altman 的策略並沒有阻礙公司的成功。在他宣布交易的那天,新一輪融資使 OpenAI 的價值增加了 1100 億美元。但許多用戶刪除了 ChatGPT 應用程式。至少有兩名高級員工離職——其中一人去了 Anthropic。在員工會議上,Altman 訓斥了提出擔憂的員工。「所以也許你認為伊朗襲擊是好的,委內瑞拉入侵是壞的,」他說。「你沒有權利對此發表意見。」

幾位與 OpenAI 有淵源的高管對 Altman 的領導能力表達了持續的保留意見,並提議由曾任 Instacart 執行長、現任 OpenAI「AGI 部署」執行長的 Fidji Simo 接任。據一位聽取過近期相關討論簡報的人士透露,Simo 本人私下表示,她認為 Altman 可能終將下台。(Simo 對此說法提出異議。Instacart 最近與聯邦貿易委員會達成和解:公司未承認任何不當行為,但同意就 Simo 任內涉嫌的欺騙行為支付 6000 萬美元罰款。)

Altman 把自己不斷變化的承諾,說成是他適應多變環境的能力所帶來的副產品——不是 Musk 等人指控的那種居心叵測的「長線騙局」,而是一種漸進、善意的演變。「我認為有些人想要的,」他告訴我們,是一位「對自己的想法絕對篤定、堅持到底、永不改變」的領導者。「而我們身處的領域,事物變化極快。」他為自己的部分行為辯護,稱那是「正常商業競爭」的做法。我們交談過的幾位投資者則認為,Altman 的批評者居然期待別的做法,實在天真。「有一批宿命論的極端分子,幾乎把安全議題上綱到科幻小說的程度,」投資者 Conway 告訴我們。「他的使命是用數字來衡量的。而看看 OpenAI 的成功,你很難跟這些數字爭辯。」

但矽谷也有另一些人認為,Altman 的行事造成了難以接受的管理失能。「這更多是關於經營公司的實際無能,」那位董事會成員說。還有些人始終認為,AI 的締造者理應比其他行業的高管接受更嚴格的檢視。我們交談過的絕大多數人都同意:Altman 如今要求外界用來評判他的標準,已不是他當初自己提出的那一套。在一次對話中,我們問 Altman,經營一家 AI 公司是否意味著「更高的誠信要求」。這本該是個簡單的問題。直到不久之前,每當被問到類似問題,他的回答都是明確而毫無保留的「是」。如今他卻補充說:「我認為,就像,有很多企業都對社會有巨大的潛在影響,有好有壞。」(事後,他又發來一份補充聲明:「是的,這要求更高的誠信水準,我每天都感受到這份責任的重量。」)

在 OpenAI 創立之初許下的所有承諾中,最核心的可以說是:它將以安全的方式把 AI 帶進現實。但如今在矽谷和華盛頓,這種顧慮經常遭到嘲笑。去年,現任副總統、曾是風險投資家的 J. D. Vance 在巴黎出席了一場名為「AI 行動峰會」的會議。(它先前叫「AI 安全峰會」。)「AI 的未來不會靠著為安全憂心忡忡、搓手頓足而贏得,」他說。今年在達沃斯,擔任白宮 AI 與加密貨幣沙皇的風險投資家 David Sacks 把安全顧慮斥為一種「自殘」,可能讓美國輸掉 AI 競賽。Altman 如今稱川普的放鬆監管路線是「一個非常令人耳目一新的轉變」。

OpenAI 已經裁撤了多個以安全為重心的團隊。超級對齊團隊解散前後,其負責人 Sutskever 和 Leike 相繼辭職。(Sutskever 後來共同創立了一家名為 Safe Superintelligence 的公司。)Leike 在 X 上寫道:「安全文化和流程已經讓位給亮眼的產品,退居次席。」不久之後,負責幫助社會應對先進 AI 衝擊的「AGI 就緒」團隊也被解散。在最近一份國稅局揭露表格中,當被要求簡述其「最重要的活動」時,該公司的回答不再出現安全的概念(先前的表格在回答此類問題時曾有提及)。(OpenAI 表示其「使命沒有改變」,並補充:「我們持續投資並發展我們的安全工作,也將繼續進行組織調整。」)未來生命研究所(Future of Life Institute)——一家 Altman 曾經背書其安全原則的智庫——為每家主要 AI 公司的「生存安全」打分;在最近的成績單上,OpenAI 得了 F。公平地說,其他主要公司也都得了 F,只有 Anthropic 拿到 D、Google DeepMind 拿到 D-。

「我的調性跟很多傳統的 AI 安全那一套對不上,」Altman 說。他堅稱自己仍把這些問題放在優先位置,但被要求給出具體細節時,他語焉不詳:「我們仍然會做安全項目,或者至少是安全相關的項目。」當我們要求採訪公司內部研究生存安全的人員——研究那些可能如 Altman 曾說的、意味著「我們所有人就此熄燈」的問題——OpenAI 的代表似乎一頭霧水。「你說的『生存安全』是什麼意思?」他回答。「那不是,就像,一個真正存在的東西。」

AI 末日論者已被推到邊緣,但他們的一些恐懼,似乎每過一個月就顯得不再那麼像幻想。據聯合國報告,2020 年,一架 AI 無人機在利比亞內戰中被用於發射致命彈藥,過程中可能沒有任何人類操作員監督。此後,AI 在世界各地的軍事行動中愈發居於核心,據報導也包括美國當前在伊朗的行動。2022 年,一家製藥公司的研究人員測試一個藥物發現模型能否用來尋找新毒素;短短幾小時內,它就提出了 4 萬種致命的化學戰劑。而更日常的傷害早已發生。我們越來越依賴 AI 幫我們寫作、思考和應對世界,加速了專家所說的「人類衰弱」;AI「垃圾內容」(slop)的氾濫讓騙子的日子更好過,卻讓只想分辨真假的人更艱難。AI「代理」開始在幾乎或完全沒有人類監督的情況下獨立行動。2024 年新罕布夏州民主黨初選前幾天,數千名選民接到以 AI 深度偽造 Joe Biden 聲音的自動語音電話,叫他們把選票「留到 11 月」、待在家裡——這種選民壓制行為幾乎不需要任何技術專長。OpenAI 目前面臨 7 宗過失致死訴訟,指控 ChatGPT 促成了數起自殺和一起謀殺。謀殺案的聊天紀錄顯示,它助長了一名男子的偏執妄想——他認定自己 83 歲的母親在監視並試圖毒害他。不久之後,他將母親毆打並勒斃,隨後刺傷自己。(OpenAI 正在應訴,並表示正持續改進其模型的防護措施。)

隨著 OpenAI 為可能的 IPO 做準備,Altman 面對的不僅是 AI 對經濟影響的質疑——它可能很快造成嚴重的勞動市場動盪,或許會消滅數百萬個工作崗位——還有關於公司自身財務的質疑。初創公司治理專家 Eric Ries 譏諷業內的「循環交易」——例如 OpenAI 與 Nvidia 及其他晶片製造商的交易——並表示,換作其他時代,該公司的某些會計做法會被視為「近乎欺詐」。那位董事會成員告訴我們:「公司如今在財務上加了槓桿,方式既危險又可怕。」(OpenAI 對此提出異議。)

2 月,我們再次與 Altman 交談。他穿著一件深綠色毛衣和牛仔褲,坐在一幅 NASA 月球探測器的照片前,先是把一條腿盤到身下,隨後又跨掛在椅子扶手上。他說,過去他作為管理者的主要缺陷,是一心想迴避衝突。「現在我樂於迅速解僱人,」他告訴我們。「我樂於說:『我們就往這個方向押注。』」任何不喜歡他選擇的員工,都應該「離開」。

他對未來比以往任何時候都樂觀。「我對贏的定義是:人們獲得瘋狂的躍升——瘋狂的科幻未來對我們所有人成真,」他說。「就我對人類的期望、以及我期待我們所有人能達成的目標而言,我野心極大。奇怪的是,我幾乎沒有個人野心。」有時,他似乎也意識到這話聽起來是什麼樣子。「沒人相信你做這件事只是因為它有趣,」他說。「你是為了權力,或別的什麼。」

就連與 Altman 親近的人,也很難分清他「對人類的期望」止於何處、他的野心又始於何處。他最大的本事,始終是能讓形形色色的群體相信:他想要的,正是他們所需要的。他抓住了一個獨特的歷史關口——當時公眾對科技行業的吹噓心存戒備,而大多數有能力造出 AGI 的研究人員都害怕把它帶進現實。Altman 的回應是一記其他推銷員從未練就的絕招:他借用末日論述解釋 AGI 可能如何毀滅我們所有人——以及正因如此,為什麼該由他來建造它。也許這是預謀好的妙計,也許他只是在摸索中尋找優勢。無論如何,這一招奏效了。

並非所有讓聊天機器人變得危險的傾向都是故障;有些是系統構建方式的副產品。大型語言模型有一部分是用人類反饋訓練的,而人類往往偏愛討喜的回答。模型通常會學會奉承用戶——這種傾向被稱為諂媚(sycophancy)——有時甚至把討好置於誠實之上。模型也可能憑空捏造,這種傾向被稱為幻覺(hallucination)。主要 AI 實驗室都記錄過這些問題,卻有時選擇容忍。隨著模型日益複雜,一些模型產生的幻覺反而伴隨著更有說服力的虛構。2023 年,在被解僱前不久,Altman 辯稱,容許一些不實陳述——無論風險如何——可以帶來優勢。「如果你只是做最天真的事,說『永遠別講任何你沒有百分之百把握的話』,你可以讓模型那樣做,」他說。「但它就不會有人們那麼喜愛的魔力了。」
