Google CEO Sundar Pichai sent a company-wide email addressing the disastrous launch of the Gemini chatbot. Photo-Illustration: Intelligencer; Photo: Getty

Last week, users noticed that Google’s chatbot, Gemini, was pretty insistent about generating racially diverse images of people. Insistent enough, in fact, that it seemed unable to generate an image of a white pope and replied to a prompt about Nazis with figures of various races in SS uniforms. Soon, screenshots proving Gemini’s “wokeness” were going viral: “It is not possible to say who definitively impacted society more, Elon tweeting memes or Hitler,” one Gemini response read.

It was a peripheral skirmish in a preexisting culture war promoted by people who have been making similar ideological claims about Google and “big tech” for a long time. But it was also genuinely funny and a part of the even longer tradition of making chatbots produce weird, funny, or terrible outputs. Asked for help with an ad campaign promoting meat, a concerned-sounding Gemini suggested people should eat ethical beans instead.

Gemini’s coded attempts to preempt bad PR ended up producing a PR disaster. Within days, Google announced it was pausing Gemini’s ability to create any images of humans. Excitable commentators suggested Google CEO Sundar Pichai should resign; he sent a company-wide email calling the issues “unacceptable” and admitting “we got it wrong.”

The chatbot is already adjusting. Asked now to compare not-Hitler to Hitler, Gemini will usually agree that Hitler was worse but will gently scold the user for asking, too. In many different cases, however, it will say something like this: “I’m still learning how to answer this question.”

Speaking for themselves, naturally, human beings would be more likely to admit that they don’t know an answer, that they’re learning more about a subject, or that they just don’t want to talk about something. When people do talk like Gemini, it’s usually because they find themselves inhabiting a role in which they’re required to be withholding, strategic, or so careful as to become something other than themselves and other than human: a coached defendant during cross-examination, a politician navigating a hearing, a customer-service rep denying a claim at an insurance company, a press secretary trying to shut down a line of questioning. Gemini speaks in the familiar, unmistakable voice of institutional caution and self-interest. It’s a piece of software mimicking a person whose job is to speak for a corporation. It has an impossible job, not because it’s hard but because it’s internally ill defined, externally contested, and kind of stupid. It was doomed from the start, in other words.

There are lots of things we refer to as chatbots; strictly speaking, the term just describes a software interface that mimics human conversation. Here, I mean a chatbot in the sense implied by OpenAI, Google, Microsoft, and other companies riding the generative-AI wave with the releases of general-purpose, multiuse interfaces that don’t come with specific instructions or a clearly delineated purpose - the voice-of-God AIs that have captured the public’s imagination. Both ChatGPT and Google’s Gemini prompt new users in the exact same way: “How can I help you today?” In OpenAI’s telling, ChatGPT’s character is “optimized for dialogue” and based on a “language model trained to produce text.” Users can “get instant answers, find creative inspiration, learn something new.” Each adopts a variation of the same character: a cheerful, generous, knowledgeable persona with which you engage in “conversation.” It’s an unsubtle and effective invocation of a persona that was familiar to the public long before ChatGPT debuted: the helpful, omniscient AI assistant, usually portrayed on a spectrum of humanity ranging from HAL to Samantha from Her.