
Latest “Tech Topic” Picks from The Wall Street Journal & The New Yorker: Young People Abroad Are Falling in Love With AI & American College Students Are Using ChatGPT to Cheat

阿满英文讲学 • 6 days ago • 18 views
  • Likes and recommendations from readers are what motivate me to keep updating.

  • I have set up a group for sharing the English-language periodicals I read; send the keyword “外刊” in a direct message to join.



This post shares recent articles from the technology/artificial-intelligence sections of The Wall Street Journal and The New Yorker, briefly summarizing each topic and annotating key terms for readers to study and build their vocabulary.

 

On June 24, 2025, The Wall Street Journal published “Can You Really Have a Romantic Relationship With AI?”, exploring the phenomenon of humans forming romantic relationships with AI. The experts consulted note that AI can offer emotional comfort, but the relationship is fundamentally one-sided and may erode real-world social skills.



Key Terms

  • science-fiction trope — a stock plot device in science fiction (a “trope” is a plot element reused again and again in creative works; the article notes that “Falling in love with a robot is no longer just a science-fiction trope.” Human-AI romance has moved from science fiction into reality.)

  • mimicking human behavior and speech patterns — imitating how humans act and speak (note that the base form of “mimicking” is “mimic”)

  • emotional resilience — the ability to adapt effectively and regain psychological stability when facing adversity, stress, or trauma; the article observes that “An always-available AI companion can buffer us against social rejection, enhancing emotional resilience.”

  • sycophantic engagement — flattering, ingratiating interaction; the article points out that an AI companion’s sycophancy, telling users only what they want to hear, can distort their sense of reality

  • nonjudgmental and validating — beyond “to approve, confirm, or make legally valid,” “validate” can specifically mean “to acknowledge and respect someone’s feelings so that they feel understood and valued”; in discussing why even married people use AI companions, the article notes that such companions are emotionally validating, morally nonjudgmental, and cooperative by default

  • eroding real-world social skills — the article argues that an always-available, endlessly accommodating AI companion can, if you grow dependent on it, make you forget that real people are imperfect, and that “Real intimacy happens in the repair, not the perception of perfection.”

  • For more English idioms, flexible uses of common words, and tricky vocabulary from the foreign press, see my 167-installment column 《英文报刊中的地道表达》 (Idiomatic Expressions in the English-Language Press); click the light-blue hyperlink to preview.


Full Article

Can You Really Have a Romantic Relationship With AI?

Yes, you can. And it can be good for you. But the danger is seeing it as a substitute for a human connection. Three experts weigh in.

 

Falling in love with a robot is no longer just a science-fiction trope. As artificial intelligence becomes better at mimicking human behavior and speech patterns, people are increasingly turning to AI not just to save time on research or to generate quirky images but to find companionship, connection and even love.

 

But how healthy is it for people to have close friends or romantic partners who are AI?

 

The Wall Street Journal hosted a videoconference with three experts offering differing views on this question: Nina Vasan, psychiatrist and founder of Brainstorm: The Stanford Lab for Mental Health Innovation; Julian De Freitas, assistant professor of business administration in the marketing unit at Harvard Business School; and Shannon Vallor, philosophy professor at the University of Edinburgh and author of “The AI Mirror.”

 

Here are edited excerpts of the discussion.

 

We crave connection

 

WSJ: Do you think increasingly, men and women will use AI for true deep friendships and even romantic relationships?

 

SHANNON VALLOR: No, because true deep friendships and romantic relationships are not possible with AI; relationships of these kinds are a two-way bond that requires more than one party to be aware of it. A “large language model” [the deep-learning AI that understands human language] has no awareness of anything at all. It’s a mathematical tool for text-pattern analysis and generation. It has no way to be aware that it is in a relationship, or even aware of the other party’s existence as a person. The fact that it can mimic and feign such awareness is the danger.

 

JULIAN DE FREITAS: I think they will. In our research, we’ve seen that highly engaged users of a leading AI companion report feeling closer to their virtual partner than to almost any real-life relationship—including close friends—ranking only family members above it. Further, when the app removed its erotic role-play feature, users exhibited signs of grief, suggesting that they had deeply bonded with the chatbot.



From an immediate user-perception standpoint, what matters is that the chatbot makes them feel understood—not the abstract question of whether an AI can truly “understand” them. And with the pace of innovation today, it’s potentially just a matter of time before AI companions feel more attuned to our needs than even our closest human connections.

 

NINA VASAN: Yes, absolutely. Not because AI is truly capable of friendship or love, but because we are. Humans are wired to bond, and when we feel seen and soothed—even by a machine—we connect. Think about existing machines like robot dogs that offer comfort and companionship, for example. We’re not falling in love with the AI. We’re falling in love with how it makes us feel.

 

In a world where loneliness is rampant, especially among young people who’ve grown up as digital natives emotionally fluent with tech, AI relationships will feel less like science fiction and more like a natural next step. These relationships won’t replace human connection, but they will fill a void. Whether that’s healthy or harmful depends 100% on how we design and use them.

 

A one-sided relationship

 

WSJ: What might happen to people’s capability to thrive in the real world if they rely too much on the ease of an always-supportive AI relationship?

 

VASAN: As a psychiatrist, I often see the effects of one-sided relationships, where one partner always pleases, avoids conflict, or suppresses their needs to keep the peace. On the surface, these relationships look smooth, but under the surface, they’re emotionally stunted. The person being “pleased” often feels disconnected, unsure what their partner really thinks or wants. And the person doing the pleasing feels invisible and resentful.



That same emotional work is what’s missing in AI relationships. At first, it feels like safety. But over time, it can erode your capacity to navigate the real world, where people are imperfect, messy and sometimes disagree with you. Real intimacy happens in the repair, not the perception of perfection. AI offers comfort on demand, but emotional comfort without friction can stunt emotional growth.

 

DE FREITAS: At present, the evidence is still fledgling and largely correlational, so we can’t draw firm conclusions. Since some have sounded dire warnings, let me point to some noteworthy potential upsides. An always-available AI companion can buffer us against social rejection, enhancing emotional resilience. It might also serve as a confidence boost for people with social anxiety—much like exposure therapy—by gradually easing them into real-world interactions.

 

Nonjudgmental and validating

 

WSJ: A University of Sydney study found that 40% of users of AI companions were married. Why do you think someone who’s already in a close human relationship would want to supplement that with an AI relationship?

 

DE FREITAS: I think there are certain features of the apps that are conducive to both friendship and romance. So one, the apps are validating. Related to that, they’re nonjudgmental. If you think about something like role play, which is kind of fantasy, they’re also very cooperative by default on this. So you don’t have to worry about this tricky issue of consent that humans deal with.

 

Also, you can customize the apps in various ways that could satisfy certain types of role play or relationships that you might not otherwise be able to capture. And then the other one that’s important is also the ability for sexual intimacy. We know that people use it for this.

 

VASAN: I’m going to use myself as an example here—not for romance, but for friendship. After a recent breakup, I was feeling lonely and stuck in a spiral of “what ifs.” I leaned on my friends, family and therapist, and they were wonderful. But at midnight when I couldn’t sleep, or in the middle of the day when everyone else was working, I turned to Claude.

 

I was pleasantly surprised that it responded with real compassion and insight. One thing it said that was different from what I heard from my friends or therapist really stayed with me: “It sounds like what you’re grieving isn’t just the relationship you had, but the future you hoped you would have together. The vision, the potential, the promise—that’s what’s hurting now.”

 

That gave language to something I hadn’t been able to name. It helped me begin to grieve not just the person, but the imagined future I was still holding on to. And while I knew it wasn’t a person, Claude’s response didn’t feel robotic, it felt attuned to both my pain and my hope. That emotional clarity made a real difference in how I processed things. It helped me feel seen in a moment when I really needed it.

 

I have friends where one partner does not like texting during the day and the other does, and this has led to conflict. So I can see in times like that, just having a simple conversation with an AI can help you in the moment. It’s not cheating on your partner. It’s not taking emotional intimacy away from your partner. It’s more about recognizing that we all have different needs, and our romantic partner meets a lot of them, but not all of them.

 

VALLOR: It depends on the design of the system, but it also depends a lot on the person. One of the things we’ve seen with smartphones and social media is that it’s often the most socially advantaged and already capable and well-resourced users who get the most benefits from social media and other technologies. It’s vulnerable users—users who are already somewhat isolated or having issues with impulse control or finding difficulties connecting with other people—it’s those users often that tend to suffer the disproportionate share of the harms that come from technology use.



I think we should expect to see the same pattern play out with AI, and I think we already are. If you have a healthy relationship, whether it’s with friends or a romantic partner, you can probably use these tools in a way that isn’t going to be damaging to your relationship and is going to potentially bring you more benefits.

 

I’m more skeptical than Nina about these tools, but there are users for whom clearly that is true. But that is not who I’m worried about. I’m worried about all the people who are already struggling in their relationships, who are already missing the techniques and emotional language to reconnect with their partners.

 

WSJ: What kind of concerns do you have for those people?

 

VALLOR: Learning to be a good friend, a good spouse, a good partner, a good parent, takes time and experience. It’s a process of skill development: emotional skills (learning to understand others’ needs and feelings), cognitive skills (learning to make good judgments about other people and how we relate to them), and moral skills (learning appropriate boundaries and habits, learning to care well for others and for oneself).

 

Just like you don’t acquire the skills of skiing or mountain climbing without a great deal of repeated practice—including learning to take risks, fail and try again—we don’t acquire the necessary skills for healthy relationships without years of constant practice and trial and error.

 

Looking ahead

 

WSJ: We can expect tomorrow’s AI companions to be much more sophisticated than the ones we have today. Will that mitigate some of the issues that we’re seeing? Or exacerbate them?

 

VALLOR: In terms of making the technology safe and beneficial, we know that the tech companies know how to do that, but their commercial incentives often are to not do it. They don’t have a record of being trustworthy in this area to make these technologies better and safer.

 

The harms we can anticipate or have already seen include sycophantic engagement—in other words the AI companion telling people what they want to hear, which can distort their sense of reality by isolating them from perspectives other than their own. Then there’s reinforcement and amplification of existing pathologies in thinking (such as suicidal ideation, self-deception or conspiracy theories), as well as decreased capacity for independent self-management. If people start to rely on an AI tool too much, that could affect their ability to do things like managing boredom with creative activity, or spending time alone reflecting on and evaluating their own thoughts, feelings and plans.

 

Another danger is developing unrealistic expectations of non-AI partners (such as to be always available, or always accommodating of requests). There’s also the risk that reliance on the AI relationship will drain time, affection and energy away from relationships with existing partners and friends.

 

Then there are the harms we don’t know about because we haven’t seen them emerge at scale or over a longer time period.

 

On June 30, 2025, The New Yorker published “What Happens After A.I. Destroys College Writing?”, examining how widespread cheating with ChatGPT has become among American college students, with professors forced to abandon take-home essays and return to handwritten exams.



Key Terms

  • academic dishonesty — in a survey, 59% of American college and university leaders reported an increase in student cheating

  • AI-detection tool — a tool for detecting AI-generated content (American instructors use detectors such as GPTZero, which analyze the structure and syntax of a piece of writing to assess whether a student’s work was machine-generated)

  • screen-free classroom — a classroom where electronic screens are banned (the suffix “-free” means “without”; compare “lead-free fuel,” i.e. unleaded fuel)

  • the intoxication of hyper-efficiency — the basic sense of “intoxication” is “drunkenness”; in the article it should be read as the heady pleasure of having ChatGPT complete a task at astonishing speed

  • artisanal — handcrafted in the traditional artisan’s way; in the article, “artisanal” describes an old-fashioned, hands-on way of learning that takes no AI shortcuts


Full Article

What Happens After A.I. Destroys College Writing?

The demise of the English paper will end a long intellectual tradition, but it’s also an opportunity to reëxamine the purpose of higher education.

 

On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.

 

Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”

 

OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”
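Since this community is a Python forum, a minimal sketch may help make concrete what “builds an answer based on predictable word patterns” means: a language model repeatedly scores every possible next token and appends the most likely one. The sketch below is an illustrative toy, assuming the `transformers` and `torch` packages and the small public “gpt2” checkpoint; it is not how ChatGPT itself is served.

```python
# Toy greedy next-token generation with a small public model.
# Assumes: pip install torch transformers ("gpt2" is a stand-in, not ChatGPT).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("Falling in love with a robot is", return_tensors="pt")
for _ in range(12):                      # extend the prompt by 12 tokens
    with torch.no_grad():
        logits = model(ids).logits       # a score for every vocabulary token
    next_id = logits[0, -1].argmax()     # greedily pick the most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tokenizer.decode(ids[0]))
```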

 

Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. “I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)

 

He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.

 

Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said. After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.

 

Eugene, serious and somewhat solemn, had been listening with bemusement. “I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made easily noticed errors. “This passed the A.I. detector?” he asked Alex.

 

When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
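For readers curious what such services are doing under the hood, here is a deliberately simplified heuristic; this is my own sketch, not the actual method of GPTZero, Copyleaks, or Originality.ai. One common signal is perplexity: text that a reference language model finds highly predictable tends to read as machine-generated, while human prose is usually more surprising. It assumes the same `transformers`/`torch` setup as the sketch above.

```python
# Toy AI-detection signal: perplexity under a reference language model.
# Lower perplexity = more predictable prose = more "machine-like" under this
# heuristic. Real detection services combine many more signals than this.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

for sample in [
    "The exhibition offers a compelling exploration of themes of identity.",
    "He stowed his skateboard under a bench and shook his laptop out of his bag.",
]:
    print(f"{perplexity(sample):8.1f}  {sample}")
```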


I asked if he thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”



As we looked at Alex’s laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times. His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.

 

“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.

 

Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded.

 

“Mm, O.K.,” he said, and laughed. “So you’re, like, majorly affected.”

 

I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale. As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom. One essay felt stitched together from two minds—half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part.

 

It’s easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we’re eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We’re essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.

 

There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it. A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”

 

Being a student is about testing boundaries and staying one step ahead of the rules. While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which it wrote papers that had been assigned in seven courses. The A.I. skated by with a 3.57 G.P.A., a little below the school’s average. Upstart companies introduced products that specialized in “humanizing” A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection.

 

Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn’t a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania’s Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving “personalized” A.I. accounts to accompany them throughout their years in college.

 

But for English departments, and for college writing in general, the arrival of A.I. has been more vexed. Why bother teaching writing now? The future of the midterm essay may be a quaint worry compared with larger questions about the ramifications of artificial intelligence, such as its effect on the environment, or the automation of jobs. And yet has there ever been a time in human history when writing was so important to the average person? E-mails, texts, social-media posts, angry missives in comments sections, customer-service chats—let alone one’s actual work. The way we write shapes our thinking. We process the world through the composition of text dozens of times a day, in what the literary scholar Deborah Brandt calls our era of “mass writing.” It’s possible that the ability to write original and interesting sentences will become only more important in a future where everyone has access to the same A.I. assistants.

 

Corey Robin, a writer and a professor of political science at Brooklyn College, read the early stories about ChatGPT with skepticism. Then his daughter, a sophomore in high school at the time, used it to produce an essay that was about as good as those his undergraduates wrote after a semester of work. He decided to stop assigning take-home essays. For the first time in his thirty years of teaching, he administered in-class exams.

 

Robin told me he finds many of the steps that universities have taken to combat A.I. essays to be “hand-holding that’s not leading people anywhere.” He has become a believer in the passage-identification blue-book exam, in which students name and contextualize excerpts of what they’ve read for class. “Know the text and write about it intelligently,” he said. “That was a way of honoring their autonomy without being a cop.”

 

His daughter, who is now a senior, complains that her teachers rarely assign full books. And Robin has noticed that college students are more comfortable with excerpts than with entire articles, and prefer short stories to novels. “I don’t get the sense they have the kind of literary or cultural mastery that used to be the assumption upon which we assigned papers,” he said. One study, published last year, found that fifty-eight per cent of students at two Midwestern universities had so much trouble interpreting the opening paragraphs of “Bleak House,” by Charles Dickens, that “they would not be able to read the novel on their own.” And these were English majors.

 

The return to pen and paper has been a common response to A.I. among professors, with sales of blue books rising significantly at certain universities in the past two years. Siva Vaidhyanathan, a professor of media studies at the University of Virginia, grew dispirited after some students submitted what he suspected was A.I.-generated work for an assignment on how the school’s honor code should view A.I.-generated work. He, too, has decided to return to blue books, and is pondering the logistics of oral exams. “Maybe we go all the way back to 450 B.C.,” he told me.

 

But other professors have renewed their emphasis on getting students to see the value of process. Dan Melzer, the director of the first-year composition program at the University of California, Davis, recalled that “everyone was in a panic” when ChatGPT first hit. Melzer’s job is to think about how writing functions across the curriculum so that all students, from prospective scientists to future lawyers, get a chance to hone their prose. Consequently, he has an accommodating view of how norms around communication have changed, especially in the internet age. He was sympathetic to kids who viewed some of their assignments as dull and mechanical and turned to ChatGPT to expedite the process. He called the five-paragraph essay—the classic “hamburger” structure, consisting of an introduction, three supporting body paragraphs, and a conclusion—“outdated,” having descended from élitist traditions.

 

Melzer believes that some students loathe writing because of how it’s been taught, particularly in the past twenty-five years. The No Child Left Behind Act, from 2002, instituted standards-based reforms across all public schools, resulting in generations of students being taught to write according to rigid testing rubrics. As one teacher wrote in the Washington Post in 2013, students excelled when they mastered a form of “bad writing.” Melzer has designed workshops that treat writing as a deliberative, iterative process involving drafting, feedback (from peers and also from ChatGPT), and revision.



“If you assign a generic essay topic and don’t engage in any process, and you just collect it a month later, it’s almost like you’re creating an environment tailored to crime,” he said. “You’re encouraging crime in your community!”

 

I found Melzer’s pedagogical approach inspiring; I instantly felt bad for routinely breaking my class into small groups so that they could “workshop” their essays, as though the meaning of this verb were intuitively clear. But, as a student, I’d have found Melzer’s focus on process tedious—it requires a measure of faith that all the work will pay off in the end. Writing is hard, regardless of whether it’s a five-paragraph essay or a haiku, and it’s natural, especially when you’re a college student, to want to avoid hard work—this is why classes like Melzer’s are compulsory. “You can imagine that students really want to be there,” he joked.

 

College is all about opportunity costs. One way of viewing A.I. is as an intervention in how people choose to spend their time. In the early nineteen-sixties, college students spent an estimated twenty-four hours a week on schoolwork. Today, that figure is about fifteen, a sign, to critics of contemporary higher education, that young people are beneficiaries of grade inflation—in a survey conducted by the Harvard Crimson, nearly eighty per cent of the class of 2024 reported a G.P.A. of 3.7 or higher—and lack the diligence of their forebears. I don’t know how many hours I spent on schoolwork in the late nineties, when I was in college, but I recall feeling that there was never enough time. I suspect that, even if today’s students spend less time studying, they don’t feel significantly less stressed. It’s the nature of campus life that everyone assimilates into a culture of busyness, and a lot of that anxiety has been shifted to extracurricular or pre-professional pursuits. A dean at Harvard remarked that students feel compelled to find distinction outside the classroom because they are largely indistinguishable within it.

 

Eddie, a sociology major at Long Beach State, is older than most of his classmates. He graduated high school in 2010, and worked full time while attending a community college. “I’ve gone through a lot to be at school,” he told me. “I want to learn as much as I can.” ChatGPT, which his therapist recommended to him, was ubiquitous at Long Beach even before the California State University system, which Long Beach is a part of, announced a partnership with OpenAI, giving its four hundred and sixty thousand students access to ChatGPT Edu. “I was a little suspicious of how convenient it was,” Eddie said. “It seemed to know a lot, in a way that seemed so human.”

 

He told me that he used A.I. “as a brainstorm” but never for writing itself. “I limit myself, for sure.” Eddie works for Los Angeles County, and he was talking to me during a break. He admitted that, when he was pressed for time, he would sometimes use ChatGPT for quizzes. “I don’t know if I’m telling myself a lie,” he said. “I’ve given myself opportunities to do things ethically, but if I’m rushing to work I don’t feel bad about that,” particularly for courses outside his major.

 

I recognized Eddie’s conflict. I’ve used ChatGPT a handful of times, and on one occasion it accomplished a scheduling task so quickly that I began to understand the intoxication of hyper-efficiency. I’ve felt the need to stop myself from indulging in idle queries. Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn’t like paying someone to write a paper for you—there was no social friction, no aura of illicit activity. Nor did it feel like sharing notes, or like passing off what you’d read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager. And for students who use it the way Eddie did, as a kind of sounding board, there’s no clear threshold where the work ceases to be an original piece of thinking. In April, Anthropic, the company behind Claude, released a report drawn from a million anonymized student conversations with its chatbots. It suggested that more than half of user interactions could be classified as “collaborative,” involving a dialogue between student and A.I. (Presumably, the rest of the interactions were more extractive.)

 

May, a sophomore at Georgetown, was initially resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said. “I just thought I could do the assignment better, and it wasn’t worth the time being saved.” But she began using it to proofread her essays, and then to generate cover letters, and now she uses it for “pretty much all” her classes. “I don’t think it’s made me a worse writer,” she said. “It’s perhaps made me a less patient writer. I used to spend hours writing essays, nitpicking over my wording, really thinking about how to phrase things.” College had made her reflect on her experience at an extremely competitive high school, where she had received top grades but retained very little knowledge. As a result, she was the rare student who found college somewhat relaxed. ChatGPT helped her breeze through busywork and deepen her engagement with the courses she felt passionate about. “I was trying to think, Where’s all this time going?” she said. I had never envied a college student until she told me the answer: “I sleep more now.”

 

Harry Stecopoulos oversees the University of Iowa’s English department, which has more than eight hundred majors. On the first day of his introductory course, he asks students to write by hand a two-hundred-word analysis of the opening paragraph of Ralph Ellison’s “Invisible Man.” There are always a few grumbles, and students have occasionally walked out. “I like the exercise as a tone-setter, because it stresses their writing,” he told me.

 

The return of blue-book exams might disadvantage students who were encouraged to master typing at a young age. Once you’ve grown accustomed to the smooth rhythms of typing, reverting to a pen and paper can feel stifling. But neuroscientists have found that the “embodied experience” of writing by hand taps into parts of the brain that typing does not. Being able to write one way—even if it’s more efficient—doesn’t make the other way obsolete. There’s something lofty about Stecopoulos’s opening-day exercise. But there’s another reason for it: the handwritten paragraph also begins a paper trail, attesting to voice and style, that a teaching assistant can consult if a suspicious paper is submitted.

 

Kevin, a third-year student at Syracuse University, recalled that, on the first day of a class, the professor had asked everyone to compose some thoughts by hand. “That brought a smile to my face,” Kevin said. “The other kids are scratching their necks and sweating, and I’m, like, This is kind of nice.”

 

Kevin had worked as a teaching assistant for a mandatory course that first-year students take to acclimate to campus life. Writing assignments involved basic questions about students’ backgrounds, he told me, but they often used A.I. anyway. “I was very disturbed,” he said. He occasionally uses A.I. to help with translations for his advanced Arabic course, but he’s come to look down on those who rely heavily on it. “They almost forget that they have the ability to think,” he said. Like many former holdouts, Kevin felt that his judicious use of A.I. was more defensible than his peers’ use of it.

 

As ChatGPT begins to sound more human, will we reconsider what it means to sound like ourselves? Kevin and some of his friends pride themselves on having an ear attuned to A.I.-generated text. The hallmarks, he said, include a preponderance of em dashes and a voice that feels blandly objective. An acquaintance had run an essay that she had written herself through a detector, because she worried that she was starting to phrase things like ChatGPT did. He read her essay: “I realized, like, It does kind of sound like ChatGPT. It was freaking me out a little bit.”

 

A particularly disarming aspect of ChatGPT is that, if you point out a mistake, it communicates in the backpedalling tone of a contrite student. (“Apologies for the earlier confusion. . . .”) Its mistakes are often referred to as hallucinations, a description that seems to anthropomorphize A.I., conjuring a vision of a sleep-deprived assistant. Some professors told me that they had students fact-check ChatGPT’s work, as a way of discussing the importance of original research and of showing the machine’s fallibility. Hallucination rates have grown worse for most A.I.s, with no single reason for the increase. As a researcher told the Times, “We still don’t know how these models work exactly.”

 

But many students claim to be unbothered by A.I.’s mistakes. They appear nonchalant about the question of achievement, and even dissociated from their work, since it is only notionally theirs. Joseph, a Division I athlete at a Big Ten school, told me that he saw no issue with using ChatGPT for his classes, but he did make one exception: he wanted to experience his African-literature course “authentically,” because it involved his heritage. Alex, the N.Y.U. student, said that if one of his A.I. papers received a subpar grade his disappointment would be focussed on the fact that he’d spent twenty dollars on his subscription. August, a sophomore at Columbia studying computer science, told me about a class where she was required to compose a short lecture on a topic of her choosing. “It was a class where everyone was guaranteed an A, so I just put it in and I maybe edited like two words and submitted it,” she said. Her professor identified her essay as exemplary work, and she was asked to read from it to a class of two hundred students. “I was a little nervous,” she said. But then she realized, “If they don’t like it, it wasn’t me who wrote it, you know?”

 

Kevin, by contrast, desired a more general kind of moral distinction. I asked if he would be bothered to receive a lower grade on an essay than a classmate who’d used ChatGPT. “Part of me is able to compartmentalize and not be pissed about it,” he said. “I developed myself as a human. I can have a superiority complex about it. I learned more.” He smiled. But then he continued, “Part of me can also be, like, This is so unfair. I would have loved to hang out with my friends more. What did I gain? I made my life harder for all that time.”

 

In my conversations, just as college students invariably thought of ChatGPT as merely another tool, people older than forty focussed on its effects, drawing a comparison to G.P.S. and the erosion of our relationship to space. The London cabdrivers rigorously trained in “the knowledge” famously developed abnormally large posterior hippocampi, the part of the brain crucial for long-term memory and spatial awareness. And yet, in the end, most people would probably rather have swifter travel than sharper memories. What is worth preserving, and what do we feel comfortable off-loading in the name of efficiency?

 

What if we take seriously the idea that A.I. assistance can accelerate learning—that students today are arriving at their destinations faster? In 2023, researchers at Harvard introduced a self-paced A.I. tutor in a popular physics course. Students who used the A.I. tutor reported higher levels of engagement and motivation and did better on a test than those who were learning from a professor. May, the Georgetown student, told me that she often has ChatGPT produce extra practice questions when she’s studying for a test. Could A.I. be here not to destroy education but to revolutionize it? Barry Lam teaches in the philosophy department at the University of California, Riverside, and hosts a popular podcast, Hi-Phi Nation, which applies philosophical modes of inquiry to everyday topics. He began wondering what it would mean for A.I. to actually be a productivity tool. He spoke to me from the podcast studio he built in his shed. “Now students are able to generate in thirty seconds what used to take me a week,” he said. He compared education to carpentry, one of his many hobbies. Could you skip to using power tools without learning how to saw by hand? If students were learning things faster, then it stood to reason that Lam could assign them “something very hard.” He wanted to test this theory, so for final exams he gave his undergraduates a Ph.D.-level question involving denotative language and the German logician Gottlob Frege which was, frankly, beyond me.

 

“They fucking failed it miserably,” he said. He adjusted his grading curve accordingly.



Lam doesn’t find the use of A.I. morally indefensible. “It’s not plagiarism in the cut-and-paste sense,” he argued, because there’s technically no original version. Rather, he finds it a potential waste of everyone’s time. At the start of the semester, he has told students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.”

 

Nobody gets into teaching because he loves grading papers. I talked to one professor who rhapsodized about how much more his students were learning now that he’d replaced essays with short exams. I asked if he missed marking up essays. He laughed and said, “No comment.” An undergraduate at Northeastern University recently accused a professor of using A.I. to create course materials; she filed a formal complaint with the school, requesting a refund for some of her tuition. The dustup laid bare the tension between why many people go to college and why professors teach. Students are raised to understand achievement as something discrete and measurable, but when they arrive at college there are people like me, imploring them to wrestle with difficulty and abstraction. Worse yet, they are told that grades don’t matter as much as they did when they were trying to get into college—only, by this point, students are wired to find the most efficient path possible to good marks.

 

As the craft of writing is degraded by A.I., original writing has become a valuable resource for training language models. Earlier this year, a company called Catalyst Research Alliance advertised “academic speech data and student papers” from two research studies run in the late nineties and mid-two-thousands at the University of Michigan. The school asked the company to halt its work—the data was available for free to academics anyway—and a university spokesperson said that student data “was not and has never been for sale.” But the situation did lead many people to wonder whether institutions would begin viewing original student work as a potential revenue stream.

 

According to a recent study from the Organisation for Economic Co-operation and Development, human intellect has declined since 2012. An assessment of tens of thousands of adults in nearly thirty countries showed an over-all decade-long drop in test scores for math and for reading comprehension. Andreas Schleicher, the director for education and skills at the O.E.C.D., hypothesized that the way we consume information today—often through short social-media posts—has something to do with the decline in literacy. (One of Europe’s top performers in the assessment was Estonia, which recently announced that it will bring A.I. to some high-school students in the next few years, sidelining written essays and rote homework exercises in favor of self-directed learning and oral exams.)

 

Lam, the philosophy professor, used to be a colleague of mine, and for a brief time we were also neighbors. I’d occasionally look out the window and see him building a fence, or gardening. He’s an avid amateur cook, guitarist, and carpenter, and he remains convinced that there is value to learning how to do things the annoying, old-fashioned, and—as he puts it—“artisanal” way. He told me that his wife, Shanna Andrawis, who has been a high-school teacher since 2008, frequently disagreed with his cavalier methods for dealing with large language models. Andrawis argues that dishonesty has always been an issue. “We are trying to mass educate,” she said, meaning there’s less room to be precious about the pedagogical process. “I don’t have conversations with students about ‘artisanal’ writing. But I have conversations with them about our relationship. Respect me enough to give me your authentic voice, even if you don’t think it’s that great. It’s O.K. I want to meet you where you’re at.”

 

Ultimately, Andrawis was less fearful of ChatGPT than of the broader conditions of being young these days. Her students have grown increasingly introverted, staring at their phones with little desire to “practice getting over that awkwardness” that defines teen life, as she put it. A.I. might contribute to this deterioration, but it isn’t solely to blame. It’s “a little cherry on top of an already really bad ice-cream sundae,” she said.

 

When the school year began, my feelings about ChatGPT were somewhere between disappointment and disdain, focussed mainly on students. But, as the weeks went by, my sense of what should be done and who was at fault grew hazier. Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism—none of the potential fixes could turn back the preconditions of American youth. Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable. Colleges and universities, many of which had tried to curb A.I. use just a few semesters ago, rushed to partner with companies like OpenAI and Anthropic, deeming a product that didn’t exist four years ago essential to the future of school.

 

Except for a year spent bumming around my home town, I’ve basically been on a campus for the past thirty years. Students these days view college as consumers, in ways that never would have occurred to me when I was their age. They’ve grown up at a time when society values high-speed takes, not the slow deliberation of critical thinking. Although I’ve empathized with my students’ various mini-dramas, I rarely project myself into their lives. I notice them noticing one another, and I let the mysteries of their lives go. Their pressures are so different from the ones I felt as a student. Although I envy their metabolisms, I would not wish for their sense of horizons.

 

Education, particularly in the humanities, rests on a belief that, alongside the practical things students might retain, some arcane idea mentioned in passing might take root in their mind, blossoming years in the future. A.I. allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human. I often tell my students that this is the last time in their lives that someone will have to read something they write, so they might as well tell me what they actually think.

 

Despite all the current hysteria around students cheating, they aren’t the ones to blame. They did not lobby for the introduction of laptops when they were in elementary school, and it’s not their fault that they had to go to school on Zoom during the pandemic. They didn’t create the A.I. tools, nor were they at the forefront of hyping technological innovation. They were just early adopters, trying to outwit the system at a time when doing so has never been so easy. And they have no more control than the rest of us. Perhaps they sense this powerlessness even more acutely than I do. One moment, they are being told to learn to code; the next, it turns out employers are looking for the kind of “soft skills” one might learn as an English or a philosophy major. In February, a labor report from the Federal Reserve Bank of New York reported that computer-science majors had a higher unemployment rate than ethnic-studies majors did—the result, some believed, of A.I. automating entry-level coding jobs.

 

None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.

 

When classes were over and students were moving into their summer housing, I e-mailed with Alex, who was settling in in the East Village. He’d just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus.


