Sam Altman, of OpenAI and ChatGPT fame, has been getting philosophical of late, taking a step back and looking at the grand technocratic path ahead. He clearly considers himself the saviour of the world through his transhumanist agenda. “My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe.” And these changes involve worker replacement in a surprising number of jobs, as he sees AI able to “do anything that you’d be happy with a remote coworker doing just behind a computer, which includes learning how to go be a doctor, learning how to go be a very competent coder.” The reference to coding is instructive, as that was usually cited by the AI gurus as a field where displaced workers should retrain: “just learn coding.”
If AI progresses at this rate, it will probably even replace the Sam Altmans of the world, who will also become redundant, according to the vision of our AI overlords.
https://futurism.com/sam-altman-replace-normal-people-ai
That’s one way to talk about other human beings.
As writer Elizabeth Weil notes in a new profile of OpenAI CEO Sam Altman in New York Magazine, the powerful AI executive has a disconcerting penchant for using the term “median human,” a phrase that seemingly equates to a robotic tech bro version of “Average Joe.”
Altman’s hope is that artificial general intelligence (AGI) will have roughly the same intelligence as a “median human that you could hire as a co-worker.”
It’s a disconcerting assertion, considering that it really sounds like Altman is looking to replace the work of normal people with a not-yet-realized AGI.
And according to Insider, it’s not even the first time he’s said as much. In a 2022 interview on the Lex Fridman podcast, Altman explained that this theoretical AI would be able to “do anything that you’d be happy with a remote coworker doing just behind a computer, which includes learning how to go be a doctor, learning how to go be a very competent coder.”
In other words, if you happen to live a “median” life, you could soon be out of a job — or, at least, that’s one way to interpret his comments. Cheers to our AI future.
As Insider and New York Mag both note, Altman isn’t the only person in the world of AI who uses the unsettling terminology. The phrase is present across an incredible number of AI blogs, and was even featured in a CNBC article titled “How to talk about AI like an insider.”
“Eventually, when we develop an AGI in earnest,” reads a blog post from an AI startup dubbed Snippet, which seems to fall very much in line with Altman’s median theory, “it would display the capabilities of the median human, but retain the potential to become an expert in the field, something we now consider reserved for the narrow AI.”
To use the word “median” specifically also feels like a distinct — and telling — choice. It’s a squishy term that could leave plenty up to interpretation. How Altman, or anyone else, could possibly go about determining a holistic definition for this statistical average is unclear. Regardless, such a quantification of the human experience feels in many ways dehumanizing and incomplete.
“Comparing AI to even the idea of median or average humans is a bit offensive,” Brent Mittelstadt, director of research at the Oxford Internet Institute, told Insider. “I see the comparison as being concerning and see the terminology as being concerning too.”
Adding that there’s yet to be a “concrete measurable comparison of human intelligence” within AI research, Mittelstadt also noted that the concept of a median person seems like “an intentionally vague concept as compared to having a very specific grounded meaning.”
Besides, the concept of performance and the much-less-tangible notion of human intelligence are two very different things — and as Mittelstadt told Insider, equating them doesn’t quite add up.
“That is a hugely problematic leap to make,” said the Oxford researcher, “because all of a sudden you’re assigning agency, comprehension, cognition, or reasoning to these mechanistic models.”
Mittelstadt isn’t alone in his critique.
“One thing that current AI architectures and models have shown is that they can achieve basically typical human-level performance. That’s not problematic in itself,” Henry Shevlin, an AI ethicist and professor at the University of Cambridge, told Insider. “I feel when we get into things like intelligence people are more touchy, and there are some good reasons for that.”
https://nymag.com/intelligencer/article/sam-altman-artificial-intelligence-openai-profile.html
This past spring, Sam Altman, the 38-year-old CEO of OpenAI, sat down with Silicon Valley’s favorite Buddhist monk, Jack Kornfield. This was at Wisdom 2.0, a low-stakes event at San Francisco’s Yerba Buena Center for the Arts, a forum dedicated to merging wisdom and “the great technologies of our age.” The two men occupied huge white upholstered chairs on a dark mandala-backed stage. Even the moderator seemed confused by Altman’s presence.
“What brought you here?” he asked.
“Yeah, um, look,” Altman said. “I’m definitely interested in this topic” — officially, mindfulness and AI. “But, ah, meeting Jack has been one of the great joys of my life. I’d be delighted to come hang out with Jack for literally any topic.”
It was only when Kornfield — who is 78 and whose books, including The Wise Heart, have sold more than a million copies — made his introductory remarks that the agenda became clear.
“My experience is that Sam … the language I’d like to use is that he’s very much a servant leader.” Kornfield was here to testify to the excellence of Altman’s character. He would answer the question that’s been plaguing a lot of us: How safe should we feel with Altman, given that this relatively young man in charcoal Chelsea boots and a gray waffle henley appears to be controlling how AI will enter our world?
Kornfield said he had known Altman for several years. They meditated together. They explored the question: How could Altman “build in values — the bodhisattva vows, to care for all beings”? How could compassion and care “be programmed in in some way, in the deepest way?”
Throughout Kornfield’s remarks, Altman sat with his legs uncrossed, his hands folded in his lap, his posture impressive, his face arranged in a manner determined to convey patience (though his face also made it clear patience is not his natural state). “I am going to embarrass you,” Kornfield warned him. Then the monk once again addressed the crowd: “He has a pure heart.”
For much of the rest of the panel, Altman meandered through his talking points. He knows people are scared of AI, and he thinks we should be scared. So he feels a moral responsibility to show up and answer questions. “It would be super-unreasonable not to,” he said. He believes we need to work together, as a species, to decide what AI should and should not do.
By Altman’s own assessment — discernible in his many blog posts, podcasts, and video events — we should feel good but not great about him as our AI leader. As he understands himself, he’s a plenty-smart-but-not-genius “technology brother” with an Icarus streak and a few outlier traits. First, he possesses, he has said, “an absolutely delusional level of self-confidence.” Second, he commands a prophetic grasp of “the arc of technology and societal change on a long time horizon.” Third, as a Jew, he is both optimistic and expecting the worst. Fourth, he’s superb at assessing risk because his brain doesn’t get caught up in what other people think.
On the downside: He’s neither emotionally nor demographically suited for the role into which he’s been thrust. “There could be someone who enjoyed it more,” he admitted on the Lex Fridman Podcast in March. “There could be someone who’s much more charismatic.” He’s aware that he’s “pretty disconnected from the reality of life for most people.” He is also, on occasion, tone-deaf. For instance, like many in the tech bubble, Altman uses the phrase “median human,” as in, “For me, AGI” — artificial general intelligence — “is the equivalent of a median human that you could hire as a co-worker.”
At Yerba Buena, the moderator pressed Altman: How did he plan to assign values to his AI?
One idea, Altman said, would be to gather up “as much of humanity as we can” and come to a global consensus. You know: Decide together that “these are the value systems to put in, these are the limits of what the system should never do.”
The audience grew quiet.
“Another thing I would take is for Jack” — Kornfield — “to just write down ten pages of ‘Here’s what the collective value should be, and here’s how we’ll have the system do that.’ That’d be pretty good.”
The audience got quieter still.
Altman wasn’t sure if the revolution he was leading would, in the fullness of history, be considered a technological or societal one. He believed it would “be bigger than a standard technological revolution.” Yet he also knew, having spent his entire adult life around tech founders, that “it’s always annoying to say ‘This time it’s different’ or ‘You know, my thing is supercool.’” The revolution was inevitable; he felt sure about that. At a minimum, AI will upend politics (deep fakes are already a major concern in the 2024 presidential election), labor (AI has been at the heart of the Hollywood writers’ strike), civil rights, surveillance, economic inequality, the military, and education. Altman’s power, and how he’ll use it, is all of our problem now.
Yet it can be hard to parse who Altman is, really; how much we should trust him; and the extent to which he’s integrating others’ concerns, even when he’s on a stage with the intention of quelling them. Altman said he would try to slow the revolution down as much as he could. Still, he told the assembled, he believed that it would be okay. Or likely be okay. We — a tiny word with royal overtones that was doing a lot of work in his rhetoric — should just “decide what we want, decide we’re going to enforce it, and accept the fact that the future is going to be very different and probably wonderfully better.”
This line did not go over well either.
“A lot of nervous laughter,” Altman noted.
Then he waved his hands and shrugged. “I can lie to you and say, ‘Oh, we can totally stop it.’ But I think this is …”
Altman did not complete this thought, so we picked the conversation back up in late August at the OpenAI office on Bryant Street in San Francisco. Outside, on the street, is a neocapitalist yard sale: driverless cars, dogs lying in the sun beside sidewalk tents, a bus depot for a failing public-transportation system, stores serving $6 lattes. Inside, OpenAI is low-key kinda-bland tech corporate: Please help yourself to a Pellegrino from the mini-fridge or a sticker of our logo.
In person, Altman is more charming, more earnest, calmer, and goofier — more in his body — than one would expect. He’s likable. His hair is flecked with gray. He wore the same waffle henley, a garment quickly becoming his trademark. I was the 10-billionth journalist he spoke to this summer. As we sat down in a soundproof room, I apologized for making him do yet one more interview.
He smiled and said, “It’s really nice to meet you.”
On Kornfield: “Someone said to me after that talk, ‘You know, I came in really nervous about the fact that OpenAI was gonna make all of these decisions about the values in the AI, and you convinced me that you’re not going to make those decisions,’ and I was like, ‘Great.’ And they’re like, ‘Nope, now I’m more nervous. You’re gonna let the world make these decisions, and I don’t want that.’”
Even Altman can feel it’s perverse that he’s on that stage answering questions about global values. “If I weren’t in on this, I’d be, like, Why do these fuckers get to decide what happens to me?” he said in 2016 to The New Yorker’s Tad Friend. Seven years and much media training later, he has softened his game. “I have so much sympathy for the fact that something like OpenAI is supposed to be a government project.”
The new nice-guy vibe can be hard to square with Altman’s will to power, which is among his best-established traits. A friend in his inner circle described him to me as “the most ambitious person I know who is still sane, and I know 20,000 people in Silicon Valley.”
Still, Altman took an aw-shucks approach to explaining his rise. “I mean, I am a midwestern Jew from an awkward childhood at best, to say it very politely. And I’m running one of a handful …” He caught himself. “You know, top few dozen of the most important technology projects. I can’t imagine that this would have happened to me.”
Altman grew up the oldest of four siblings in suburban St. Louis: three boys, Sam, Max, and Jack, each two years apart, then a girl, Annie, nine years younger than Sam. If you weren’t raised in a midwestern middle-class Jewish family — and I say this from experience — it’s hard to imagine the latent self-confidence such a family can instill in a son. “One of the very best things my parents did for me was constant (multiple times a day, I think?) affirmations of their love and belief that I could do anything,” Jack Altman has said. The stores of confidence that result are fantastical, narcotic, weapons grade. They’re like an extra valve in your heart.
The story that’s typically told about Sam is that he was a boy genius — “a rising star in the techno whiz-kid world,” according to the St. Louis Post-Dispatch. He started fixing the family VCR at age 3. In 1993, for his 8th birthday, Altman’s parents — Connie Gibstine, a dermatologist, and Jerry Altman, a real-estate broker — bought him a Mac LC II. Altman describes that gift as “this dividing line in my life: before I had a computer and after.”
The Altman family ate dinner together every night. Around the table, they’d play games like “square root”: Someone would call out a large number. The boys would guess. Annie would hold the calculator and check who was closest. They played 20 Questions to figure out each night’s surprise dessert. The family also played Ping-Pong, pool, board games, video games, and charades, and everybody always knew who won. Sam preferred this to be him. Jack recalled his brother’s attitude: “I have to win, and I’m in charge of everything.” The boys also played water polo. “He would disagree, but I would say I was better,” Jack told me. “I mean, like, undoubtedly better.”
Sam, who is gay, came out in high school. This surprised even his mother, who had thought of Sam “as just sort of unisexual and techy.” As Altman said on a 2020 podcast, his private high school was “not the kind of place where you would really stand up and talk about being gay and that was okay.” When he was 17, the school invited a speaker for National Coming Out Day. A group of students objected, “mostly on a religious basis but also just, like, gay-people-are-bad basis.” Altman decided to give a speech to the student body. He barely slept the night before. The last lines, he said on the podcast, were “Either you have tolerance to open community or you don’t, and you don’t get to pick and choose.”