Resource Overview

Capstone Question 02: Prediction Trends for Reading Comprehension Passages C and D

An analysis of recent years' gaokao English reading comprehension passages C and D shows that expository texts on artificial intelligence are a high-frequency capstone genre. The material tracks current hot topics; the texts are typically drawn from British and American science media, research reports, and university research releases, and they focus on AI technical principles, application scenarios, ethical controversies, social impact, and future development. These articles are tightly argued, dense with long, complex sentences and technical terms, and chiefly test higher-order skills such as locating information, logical inference, and summarizing the main idea. The 2026 exam is expected to keep AI as a core focus of passages C and D, with questions increasingly tied to AI in education, healthcare, daily life, scientific research, environmental protection, and copyright, emphasizing dialectical thinking and practical application.

High-frequency question types:
1. Inference questions
2. Title questions
3. Detail comprehension questions
4. Word-meaning guessing questions
5. Main-idea questions
6. Opinion-and-attitude questions

Basic Patterns of AI Expository Texts and Keys to Solving Them

I. Structure. Gaokao AI reading passages are usually untitled, with a stable structure and clear logic, generally in four parts:
Opening paragraph: introduces the core AI topic directly, such as a new technology, new model, new controversy, or new study.
Background: describes the current state of AI development, the limits of traditional technology, real-world needs, or the origin of a controversy.
Body: explains in detail how the AI works, its features, experimental data, application scenarios, advantages, and problems.
Conclusion: sums up the AI's value, future prospects, remaining challenges, expert opinions, or social reflection.

II. Techniques for AI expository passages
1. Grasp the structure to catch the main idea quickly
Skim the first and last paragraphs plus the first and last sentences of each paragraph, circling AI-related key words (model, algorithm, LLM, chatbot, robot, etc.). Common organizational patterns:
Technology introduction: principle → features → advantages → applications → limitations
Social controversy: phenomenon → arguments for → arguments against → author's attitude
Research finding: purpose of the experiment → procedure → data → conclusion → outlook
2. Locate signal words to crack detail and inference questions precisely
Key words in the question stem: names of people, institutions, technical terms, numbers, dates, transition words.
Handling long, complex sentences: find the main clause and its verb first, strip away attributives, adverbials, and parenthetical phrases, and seize the core meaning.
Answer principle: a synonymous paraphrase of the original text or a reasonable summary; never rely on subjective speculation.
3. Pay special attention to opinions and quotations
What researchers, experts, developers, or authorities say in the text is often the key to opinion and inference questions. Watch for attitude words: positive, negative, concerned, skeptical, optimistic, cautious.
4. Follow the transitions to lock onto core information
High-frequency transitions in AI texts: however / but / yet / while / although / on the other hand. What follows a transition is often the author's real opinion, the core problem, or an important conclusion, and is a favorite target for question setters.
5. Know the common distractor traps and eliminate them quickly
Correct options: synonymous rewording or reasonable summary of information in the text.
Distractors:
Misattribution (crediting B with A's function)
Substitution (changing the degree, scope, or object)
Fabrication (not mentioned in the text)
Overgeneralization (presenting a part as the main idea)
6. Title-selection techniques (for AI passages)
The title must contain the core concept (AI, technology, model, etc.).
Its scope should be appropriate, neither exaggerated nor too narrow.
Common patterns: AI + function / controversy / future / application.

02 AI Passages

1. (2026 · Qingdao · Mock Exam 1) Artificial intelligence (AI) researchers have long dreamed of tools to supercharge science: asking novel questions, designing and running experiments. Recently, large language models (LLMs) have made discoveries that some AI developers claim have inched us closer to that future. But how do you test whether an AI model can truly do science? For answers, researchers turn to benchmarks (基准): standardized sets of questions or tasks that help measure an AI's efficiency and reliability and compare it against other models. But the complexity of science makes assessing their aptitude especially challenging. As Hao Peng, a computer scientist at the University of Illinois Urbana-Champaign, puts it: "Models have all this knowledge.
Do they know how to use it?"

Dozens of new science-focused benchmarks have emerged over the past year to answer that question, but scientists have yet to settle on a single best approach. One of the most popular, published in Nature, is Humanity's Last Exam (HLE). It uses 2,500 questions drawn from "the frontier of human knowledge" to put LLMs through their paces. One, for example, asks how many types of sensory receptors the human skin contains. "We wanted a diverse dataset that only experts who have been working on a field for a long time can answer," says Long Phan, a research engineer with the HLE's developer.

Since the HLE first appeared as a preprint in January 2025, the benchmark has become an important proving ground for LLMs, and HLE scores are now a common talking point for AI companies seeking to highlight the capabilities of their products. At the HLE's launch, the leading developer OpenAI's AI model achieved the best score, a mere 8.3%. Earlier this month, Google claimed that its latest reasoning model for science, called Gemini 3 Deep Think, had achieved a new record HLE score of 48.4%.

But some scientists argue that many of the HLE's questions test for little-known or even useless knowledge, rather than an ability to do meaningful research. A Nature editorial accompanying the HLE's publication also raised this issue: "We think that more scientists should be asking: What would it take to develop an AI benchmark that truly measures expert-level thinking?"

1. What does the underlined word "aptitude" in paragraph 2 mean?
A. Knowledge.  B. Performance.  C. Intelligence.  D. Progress.
2. What does Long Phan stress about HLE?
A. Its topic diversity.  B. Experts' involvement in it.
C. The expertise of its dataset.  D. Its data-backed popularity.
3. What is paragraph 4 mainly about?
A. HLE's role as a key AI test.  B. Companies' use of HLE.
C. HLE scores of leading AI models.  D. The process of HLE's launch.
4. By sharing its view, the Nature editorial aimed to ________.
A. back the current testing  B. express concern over HLE
C. propose a workable solution  D. predict future AI benchmarks

2. (2026 · Liaocheng · Mock Exam 1) In the age of large language models (LLMs) and generative AI, we are witnessing an unprecedented transformation in how knowledge is produced, spread and consumed.

LLMs, we are told, make us more efficient, simplify complex work, automate boring tasks and allow us to focus on what matters. But even as we marvel at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they eroding our capacity for independent thought, judgment and critical reflection?

Efficiency is not a neutral term. The current narrative around generative AI treats efficiency as progress. It suggests that the faster something is done, the better. But faster is not always better. And not everything that can be automated should be.

The popular belief is that LLMs allow humans to assign repetitive work to machines and reserve their energy for more reflective tasks, but the opposite is often true. As more intellectual labor — writing, summarizing and decision-making, for example — is handed over to AI, the less we will engage with it ourselves. Instead of reserving our thoughtfulness for higher tasks, we will increasingly lose the opportunities, and perhaps even the ability, to think critically.

So what do we really mean by "efficiency"? If it means shortening the time it takes to write a report, perhaps we have succeeded. But if it means replacing the intellectual effort that creates depth, coherence and reflection, then it's not a gain; it's a loss. The moment we accept LLMs as thought substitutes, rather than thought aids, we begin to worsen the very conditions under which human reasoning thrives: questioning, dialogue, uncertainty and contradiction.

There is no turning back the presence of LLMs in our lives. But we can choose how to live with them. The question is not whether they will think for us, but whether we will let them define what it means to think at all. Efficiency, in the true sense, should not be about doing more with less thought. It should be about doing better, with deeper attention, stronger ethics and sustained human insight.

5. What does the underlined word "eroding" in paragraph 2 mean?
A. Changing.  B. Improving.  C. Destroying.  D. Expanding.
6. What do LLMs lead to, according to paragraph 4?
A. We get more reflective labor.  B. We do independent thinking less.
C. We engage in more repetitive tasks.  D. We reduce our work efficiency indeed.
7. What does the author advocate about our using LLMs?
A. Putting efficiency first.  B. Reducing intellectual effort.
C. Achieving more with less time.  D. Increasing human engagement.
8. What is the author's purpose in writing the text?
A. To describe the fast development of LLMs.
B. To reflect on the negative effects of LLMs.
C. To question the necessity of pursuing efficiency.
D. To challenge the traditional definition of efficiency.

3. (2026 · Guangzhou · Mock Exam 1) Survey data shows that most freshmen regularly use generative AI, often treating it as "an intellectual partner", Professor John Hampson reported at a faculty (全体教师) meeting at Elite Technology University (ETU). Students most commonly use it to understand difficult concepts, search, generate study materials, and edit writing. Interestingly, the lowest reported use is for generating text.

Meanwhile, students are using faculty office hours and the speaking and writing centers less. In last year's computer science courses, scores on problem sets increased, yet exam scores declined. "This is concerning," noted Hampson. "If they were using AI as a study pal, they weren't absorbing as much as they might think."

Students want clearer AI policies, and Hampson advised faculty to carefully consider and share what level of use they permit, the reasoning behind it, how to cite use of AI, and examples of what's permissible.
He also encouraged department-wide discussions to best prepare students for a workplace where they will need to know how to write or code with its assistance. "I also believe that students need to learn to write and code unaided, to develop critical thinking skills, their agency as citizens, and also meaning-making: the ideas that help them understand their own lives," he added.

Some professors expressed concerns about how AI use is impacting students' mental health and learning. Professor George Wilson noted that students are often highly competitive, and "it's important to create rules so that competition leads to healthy behaviors that make them better educated people." While some suggested more one-on-one time with students, others noted that budget restrictions would make that difficult.

Professor Poly Burnett observed that lecture attendance is also down. She urged faculty to make lectures something students genuinely want to attend. She also noted that many teachers are making only small changes, in hopes of continuing to teach as they have previously taught. "We actually have to see this less as a problem and more as an opportunity," Burnett suggested. "How can ETU lead in rethinking how we teach, how we learn... and have our students be benefiting and being at the leading edge of that?"

9. What does the author imply about the survey findings by using "interestingly" in paragraph 1?
A. They indicate a promising trend.  B. They contradict a common assumption.
C. They capture the faculty's interest.  D. They require further investigation.
10. Which of the following changes is mentioned in paragraph 2?
A. Students are interacting more with others.
B. AI use has led to better learning outcomes.
C. Exam scores rose while homework scores fell.
D. Students are using off-line academic services less.
11. Why does Hampson emphasize students writing and coding without AI?
A. To clarify acceptable uses of AI in coursework.
B. To prepare students for future workplace demands.
C. To ensure students develop essential human capacities.
D. To improve students' long-term academic performance.
12. What is Burnett's suggestion to the faculty?
A. Make lectures more entertaining.
B. Let students take the leading role.
C. Take the chance to reform education.
D. Adjust teaching slightly to AI challenges.

4. (2026 · Northern Jiangsu Seven Cities · Mock Exam 2) Generative AI tools have exploded in popularity, enabling users to create text, images, music and video in seconds. But behind the innovation lies a controversial issue: the unauthorized use of copyrighted material to train these models and the uncredited reproduction of protected works in AI-generated content.

For artists, writers and filmmakers, the rise of generative AI feels like a threat. Many AI systems are trained on vast datasets scraped from the internet — including novels, paintings, songs and films — without permission or compensation to the original creators. When AI produces new content that closely mimics the style or even specific elements of copyrighted works, it often does so without attribution, leaving creators feeling their labor is exploited.

Take visual artists as an example. A digital painter might spend years developing a unique style, only to find AI tools can replicate that style instantly. Some artists have filed lawsuits against AI companies, arguing that training models on their work without consent violates copyright law. They demand fair compensation and clearer rules on how AI can use creative content.

Tech companies, however, argue that AI training falls under "fair use," a legal doctrine that allows limited use of copyrighted material without permission for purposes such as education, research or innovation. They claim AI transforms the original material into something new, thus not infringing on copyright. Yet this argument fails to address the core concern of many creators: that AI profits from their work without giving anything back.

The debate is far from settled. Governments around the world are struggling to update copyright laws to keep pace with AI. The European Union's AI Act requires transparency about training data, while the US Copyright Office has refused to grant copyright to purely AI-generated works. In China, new regulations mandate labeling AI-generated content to prevent deception.

As generative AI continues to evolve, finding a balance between innovation and protection will be critical. Without clear rules, both creators and AI developers will face uncertainty. The goal should be to foster AI progress while ensuring that those who create original work are respected and rewarded.

13. What problem is mainly discussed in Paragraph 1?
A. The rapid development of AI technology.
B. The lack of legal protection for AI users.
C. The illegal use of copyrighted material by AI.
D. The difficulty in creating original content.
14. Why do many artists oppose generative AI?
A. AI makes their works less popular.
B. AI copies their styles without permission.
C. AI reduces the value of creative jobs.
D. AI fails to produce high-quality content.
15. What do tech companies claim about AI training?
A. It should be strictly banned by law.
B. It belongs to the category of fair use.
C. It needs full permission from creators.
D. It has nothing to do with copyright.
16. What can be inferred from the last two paragraphs?
A. Global laws on AI copyright are consistent.
B. AI-generated works can get copyright in the US.
C. China requires clear labels for AI content.
D. Innovation should come before copyright protection.

5. (2026 · Yuncheng · Mock Exam 1) New research challenges the widespread belief that artificial intelligence (AI) is driving a major rise in global greenhouse gas emissions. Scientists from the University of Waterloo and the Georgia Institute of Technology analyzed U.S. economic data alongside estimates of how frequently AI tools are used across different industries.
Their aim was to understand what might happen to the environment if AI adoption increases along its current path.

According to the U.S. Energy Information Administration, 83 percent of the nation's economic activity relies on petrol, coal and natural gas. These fuels release greenhouse gases when burned. The researchers noted that total energy use from AI in the U.S. matched the electricity consumption of Iceland, yet this amount remained insignificant when viewed at national or global levels.

"It is important to note that the increase in energy use is not going to be uniform. It's going to be felt more in the places where electricity is produced to power the data centers," said Dr Juan Moreno-Cruz, a professor at the School of Environment, Enterprise and Development at the University of Waterloo and Canada Research Chair in Energy Transitions. "If you look at that energy from the local perspective, that's a big deal, because some places could see double the amount of electricity output and emissions. But at a larger scale, AI's use of energy won't be noticeable."

"For people who believe that the use of AI will be a major problem for the climate and think we should avoid it, we're offering a different perspective," Moreno-Cruz added. "The effects on climate are not that significant, and we can use AI to develop green technologies or to improve existing ones."

To develop their findings, environmental economists Moreno-Cruz and Dr Anthony Harding reviewed a variety of economic sectors, the types of jobs within those sectors, and the share of tasks that could potentially be performed by AI. Moreno-Cruz and Harding intend to apply the same approach to additional countries in order to understand how AI adoption may affect environmental outcomes across different regions of the world.

17. What is the primary goal of the research?
A. To promote the development of green AI.  B. To measure energy consumption worldwide.
C. To warn about AI's growing energy demands.  D. To assess AI's potential environmental effects.
18. What can be said about AI energy consumption in the U.S.?
A. It contributes to petrol-based activities.  B. It will soon reach the global emission target.
C. It has small influence at the national level.  D. It exceeds Iceland's electricity consumption.
19. What do researchers plan to do next?
A. Extend their research to more countries.  B. Shift focus to AI's economic advantages.
C. Develop AI applications to stop emissions.  D. Reduce the energy use of AI in data centers.
20. Which of the following is the main idea of the text?
A. AI technology drives greenhouse gas emissions.
B. AI energy consumption urgently needs regulating.
C. Data centers emit more than previously estimated.
D. AI's impact on climate is much smaller than believed.

6. (2026 · Zhengzhou · Mock Exam 1) AI technology has long been able to recognize patterns in music preferences and create personalized playlists. Now, a new AI system has taken this a step further by analyzing how people listen to music and identifying their unique "listening styles". This advancement changes how music streaming services tailor playlists to individual users, making them more enjoyable.

Music recommendation algorithms (算法) have been highly effective at suggesting new songs and artists. But Dr. Emily Carter, a music data scientist at the University of Music and Technology, notes that these algorithms often use a one-size-fits-all approach that doesn't capture the subtle differences in individual listening behavior. To better understand and satisfy individual preferences, researchers need to analyze each user's unique listening patterns.

To develop and train their AI, the researchers collected data from over 50 million listening sessions and fed it into a neural network. They tested the system by seeing how well it could distinguish between different users' listening habits. The system was given 100 listening sessions from each of about 3,000 known users and 100 new sessions from an unknown user. The AI looked for the best match and identified the unknown user 86% of the time, according to a study presented at the International Society for Music Information Retrieval (ISMIR).

"We were quite surprised by the accuracy," says Alex Johnson, a doctoral student in Carter's lab and the lead author of the study. A non-AI method was only 28% accurate.

"The work is innovative," says Dr. Sarah Kim, a music researcher. "Personalized music experiences could transform how we interact with music platforms."

The researchers are aware of the privacy impact of their system, which could potentially identify users based on their listening habits. In theory, similar systems could also analyze other behaviors, such as the types of podcasts (播客) people listen to or the timing of their music consumption. ISMIR organizers found the study impressive but questionable, and accepted it on condition that the researchers detail the privacy risks. Carter says they have decided, for now, not to release the software publicly.

21. What advancement of AI is mentioned in paragraph 1?
A. Protecting people's privacy.  B. Recognizing music patterns.
C. Tailoring personalized playlists.  D. Improving music streaming quality.
22. What does Carter say about the music recommendation algorithms?
A. They consider listening styles.  B. They renew networks constantly.
C. They recommend popular songs.  D. They ignore individual preferences.
23. What is the main concern about the new AI system?
A. Its technical weaknesses in analyzing data.
B. Its inability to distinguish between users' habits.
C. Its limited accuracy compared to non-AI methods.
D. Its potential privacy risk from tracking listening habits.
24. How do ISMIR organizers feel about the new AI system study?
A. Careful.  B. Disappointed.  C. Favorable.  D.
Uninterested.

7. (2026 · Xi'an · March) An AI-powered robot was able to separate a gall bladder (胆囊) from the liver of a dead pig in what researchers claim is the first realistic surgery by a machine with almost no human intervention.

The robot is powered by a two-tier AI system trained on 17 hours of video containing 16,000 motions performed by human surgeons during operations. When put to work, the first layer of the AI system watches the video, monitors the surgery and issues plain-language instructions, while the second AI layer turns each instruction into 3D tool motions. In all, the gall bladder surgery requires 17 separate tasks. The robotic system has performed the operation eight times, achieving 100 percent success in all of the tasks.

"Current surgical robotic technology has made some procedures less invasive, but risks haven't really dropped from previous laparoscopic (使用腹腔镜的) surgeries by human surgeons," says team member Axel Krieger at Johns Hopkins University in Maryland. "This made us look into what is the next generation of robotic systems that can help patients and surgeons."

"The study really highlights the art of the possible with AI and surgical robotics," says Danail Stoyanov at University College London. "Incredible advances in computer vision for surgical video, with the availability of open robotic platforms for research, make it possible to demonstrate surgical automation."

But many challenges remain to make the system practical in clinical use. "While the robot completed the task with 100% success, it had to self-correct six times per case. For example, this could mean a gripper (夹持器) designed to grasp an artery missed its hold on the first try," Stoyanov said.

"There were a lot of instances where it had to self-correct, but this was all fully autonomous," says Krieger. "It would correctly identify the initial mistake and then fix itself." The robot also had to ask a human to change one of its surgical instruments for another, meaning some level of human intervention was required.

The next step, says Krieger, is to let a robot operate autonomously on a live animal, where breathing and bleeding could complicate things. "But with continued research, we're confident that we can overcome these obstacles step by step."

25. What are the two-tier tasks that the AI system is trained to perform?
A. Giving instructions and performing motions.
B. Monitoring the surgery and issuing commands.
C. Analyzing video and choosing surgical tools.
D. Imitating human surgeons and separating tasks.
26. What breakthrough does the new robot achieve over traditional laparoscopic surgeries?
A. Minimal invasiveness with no danger.
B. Near autonomy with a high success rate.
C. Low risks in complex surgical tasks.
D. Faster self-correction speed in operations.
27. What may prove challenging in a robot operation according to the last paragraph?
A. Adapting to real-time variability.
B. Identifying surgical mistakes quicker.
C. Reducing human help for crucial tasks.
D. Dealing with complicated surgeries.
28. Which of the following best summarizes the passage?
A. The Robotic Surgery: Cutting Medical Risks
B. The Robotic Surgery: Great Clinical Progress
C. The Robotic Surgery: Simplifying Surgery
D. The Robotic Surgery: Success and Ongoing Issues

8. (2026 · Hengyang · March) Chinese scientists have unveiled the world's first AI-powered breeding robot, named GEAIR. It can cruise autonomously and carry out cross-pollination (异花授粉), promising reduced breeding costs, shorter breeding cycles and improved breeding efficiency.

GEAIR was built with a combination of two technologies: AI and biotechnology. Xu Cao, a researcher from the Chinese Academy of Sciences, led the research team that built the robot.

Cross-pollination, also known as hybrid pollination, is the process of transferring pollen (花粉) from a flower of one plant to another. This process helps create hybrids of plants, a practice also known as hybrid breeding.

The aim of hybrid breeding is to develop crop varieties with improved traits, thereby achieving enhanced yield and quality. However, according to Xu, doing this process repeatedly is time-consuming. GEAIR can help reduce the time and also avoid human errors.

Living up to its promised potential, the robot carried out a trial in a greenhouse. It identified a flower accurately and extended its arm gently to complete the hybrid pollination process. The entire breeding process was done with inch-perfect precision. The researchers also built the first "intelligent robotic breeding factory", which can quickly and efficiently develop new, high-quality plant varieties.

GEAIR will start a new era backed by AI and biotechnology in the breeding industry. "Our new study has initiated an intelligent breeding model of integrated biotechnology, AI and robot labor — marking China's successful pioneering efforts in the construction of a closed-loop (闭环的) technology system for intelligent robotized hybrid breeding," Xu said. "It also shows the application prospects of 'AI for science' in the sector of biological breeding."

With biotechnology as its foundation, AI as empowerment, and robots as operators, this study could help China take the lead in the race to create breeding robots that are fully autonomous and intelligent.

29. What is the primary function of the GEAIR robot?
A. To take care of human gardeners.
B. To monitor plant growth conditions.
C. To conduct hybrid pollination tasks.
D. To harvest mature crops automatically.
30. What problem of traditional hybrid breeding does GEAIR solve?
A. Lack of pollen sources.
B. Long time and mistakes.
C. High costs of hybridization.
D. A narrow range of hybrid types.
31. What can we infer about the "intelligent robotic breeding factory"?
A. It is popular worldwide now.
B. It can work without any power.
C. It mainly focuses on common crops.
D.
It can enhance the diversity of agriculture.
32. What is the significance of GEAIR's development?
A. It makes organic farming possible.
B. It lowers the cost of traditional farming.
C. It proves robots can work better than humans.
D. It shows China's leadership in agricultural technology.

9. (2026 · Henan · Mock Exam 1) For years, the dream of the future kitchen looked like something from a sci-fi film: robots flipping burgers, mechanical arms moving wildly. But at CES (International Consumer Electronics Show) 2026, industry experts painted a different picture. The future isn't arriving with robots that look like us. It's arriving quietly, invisibly, and it's already here.

Early smart kitchen products made a critical mistake. As Nicole Papantoniou from the Good Housekeeping Institute put it, "A lot of people were putting smart features, which you didn't really need, into products." Today's successful ideas aren't about adding technology for its own sake. They're about friction reduction — making cooking easier without the user even noticing the intelligence at work.

This shift is clear in the latest AI appliances. Several brands offer ovens (烤箱) with systems that "see" what you put inside. Simply place the food in, and the machine automatically selects the best cooking option. No buttons, no guesswork. Refrigerators are changing in a similar way. The latest AI models have cameras that identify ingredients, track best-before dates, and suggest recipes based on what you have. A partnership with chef Jamie Oliver brings AI-made recipes tailored to your needs. But perhaps the most unexpected use of AI in the kitchen has nothing to do with cooking. Companies are developing smart range hoods (抽油烟机) that use airflow to create a low-pressure zone above the pan, trapping very small particles (颗粒) before they reach your lungs.

So will robots replace human cooks? At a CES discussion, chef Tyler Florence gave a firm answer. "Human-made will become the new luxury item," he said. "Machines excel at repetitive, boring tasks. But creativity, the human touch — these will only become more precious as technology advances."

The vision from CES 2026 is not a kitchen without cooks. It's a kitchen where invisible intelligence handles the heavy work, and humans are freed to turn ingredients into meals, and meals into memories.

33. What is the big change in today's smart kitchen ideas?
A. Creating more robot lookalikes.  B. Reducing trouble while cooking.
C. Designing more sci-fi products.  D. Adding more complex functions.
34. How do new AI ovens simplify the cooking process?
A. They recognize food and set the right mode.
B. They bring AI-made recipes tailored to needs.
C. They suggest recipes based on what you have.
D. They use airflow to create a low-pressure zone.
35. What can be inferred from Tyler Florence's words?
A. Human creativity will be highly valuable.
B. AI will take the place of human creativity.
C. Human-made food is more than expensive.
D. Machines are better at innovative cooking.
36. What can be a suitable title for the text?
A. AI in Kitchens: A Smart Master for Cooking
B. Smart Kitchens: More Robotic, Less Human
C. CES 2026: When Kitchens Finally Go Sci-Fi
D. Hidden AI: The New Face of Future Kitchens

10. (2026 · Hohhot · Mock Exam 1) Around Christmas, 50-year-old New Yorker Holly Jespersen felt unwell but hesitated to see a doctor. She turned to ChatGPT, which advised her against visiting one. Days later, with a high fever and headaches, and again using the chatbot to decide when to go, she finally went to urgent care and was diagnosed with influenza A.

Holly is far from alone. According to OpenAI, ChatGPT receives over 40 million health-related enquiries daily, and 230 million weekly. In January, it announced ChatGPT Health, allowing users to upload medical records for customized (定制的) support. The company stresses it is meant to "support, not replace" medical care: not for diagnosis or treatment, but to help with everyday questions and pattern recognition.

Yet concerns arise. Family physician Dr. Alexa Mieses Malchuk warns that ChatGPT, like WebMD, prioritizes being helpful over being accurate. A 2023 study found ChatGPT's cancer treatment plans contained many errors, some hard even for experts to detect. However, newer research on colon cancer showed its answers on symptoms and prevention were highly accurate, suggesting LLMs (大型语言模型) may assist patient education but not clinical decisions.

Beyond accuracy, psychologists highlight anxiety risks. A 2013 study confirmed that online symptom searches can intensify health anxiety, especially for those intolerant of uncertainty. Clinical psychologist Elizabeth Sadock notes that ChatGPT, always available and affirming, fuels reassurance-seeking (寻求慰藉) behavior, trapping users in a cycle of anxiety. For some patients, limiting ChatGPT use may now be part of treatment.

Privacy is another puzzle. Biomedical informatics professor Bradley Malin acknowledges OpenAI's security efforts, but stresses that ChatGPT Health falls outside HIPAA regulation. Patients may unknowingly lose legal protections when their data flows from secured medical records to an unregulated third party.

Yet some see value. Dermatologist Kumar views ChatGPT Health as educational rather than diagnostic, clarifying terms like sunscreen types. He distinguishes it from WebMD's curated, reviewed content, noting that ChatGPT's AI may mislead.

Thus, ChatGPT Health enters America's broken system as a double-edged sword: a round-the-clock assistant that may empower (赋权) patients, yet risks misinforming, over-reassuring, and exposing them to unregulated data practices.

37. Why does OpenAI launch ChatGPT Health?
A. To replace medical care totally.  B. To provide consultation timely.
C. To treat the patients early.  D. To diagnose diseases quickly.
38. What can we learn from paragraphs 3-5?
A. ChatGPT may lead to more risks than benefits.
B. ChatGPT is always available, helpful and accurate.
C. Psychologists advise people not to use ChatGPT.
D. People will have no privacy when using ChatGPT.
39.
How does Kumar find ChatGPT A. It teaches patients some medical terms.B. It can be used as an assistant to patients.C. It can help more patients cure diseases.D. It has more advantages than disadvantages.40. What is the author’s attitude toward ChatGPT Health A. Enthusiastic and supportive. B. Cautious and optimistic.C. Disapproving and negative. D. Critical and loyal.11.(2026·江西赣南·一模)During a golden sunset, Sharon Wilson pointed a thermal-imaging (热成像) camera at a flagship data centre, revealing the enormous heat its AI supercomputer had been releasing into the sky. Meanwhile, the facility’s core product, like many other AI chatbots, kept generating floods of false or harmful content for users worldwide. “It’s a horrible waste,” said Wilson, director of the campaign group Oilfield Witness.Wilson is not alone in having this concern. Scientists are watching the AI expansion with unease as it pollutes the natural world with carbon and the digital world with dangers ranging from misinformation to poisonous videos.Data centres currently consume about 1% of global electricity, but that share may jump soon. Their slice of power is projected to hit 8.6% by 2035, while the International Energy Agency (IEA) expects data centres to account for at least a fifth of electricity-demand growth to the end of the decade.What if AI could pay off its energy debts by saving carbon elsewhere That idea was put forward in an IEA report, which argued that AI applications could cut emissions (排放) by far more than data centres produce. A research paper reached a similar conclusion after modelling cases in which AI would help integrate solar and wind into power networks, improve battery chemistry in electric cars, and encourage consumers to make climate-friendly choices.The projected carbon savings carry large uncertainties-greater efficiency can lead to greater use, the IEA warns, and rebound effects may undercut the gains, such as self-driving cars undermining public transport. 
But other sectors are so polluting, the researchers say, AI would need to cut their emissions by only a small percentage to cover its own carbon cost.Ultimately, given the massive energy consumed by algorithms (算法), it is essential that AI be employed to “do good in terms of fighting the climate crisis-designing the next generation of batteries, tracking deforestation,” as Sasha Luccioni, climate lead at an AI firm, said, rather than “create social-media websites filled with rubbish while data centres are still powered by coal-fired generators.”41. What do the underlined words “this concern” in paragraph 2 refer to? A. The shortage of AI service. B. The unreliability of AI output.C. The release of heat by AI centers. D. The misuse of energy by AI systems.42. What do the IEA report and the research paper in paragraph 4 agree on? A. AI can be a net carbon saver. B. AI can be energy-efficient.C. AI can provide computing power. D. AI can direct electricity distribution.43. What is the purpose of paragraph 5? A. To put forward an opposite position. B. To offer a more comprehensive view.C. To add some background information. D. To demonstrate the previous argument.44. What does Sasha Luccioni argue about AI? A. Its design calls for improvement. B. Its energy use demands restriction.C. Its application requires wise guidance. D. Its development deserves public support.12.(2026·天津·统考)The question of whether artificial intelligence (AI) will take away our jobs is on many people’s minds today. Current applications, from AI robotics performing complex surgeries to large language models like ChatGPT writing academic essays and solving tough problems, have not only demonstrated remarkable capabilities but also sparked significant moral concerns.Broadly speaking, public opinion is divided. Some view AI as the ultimate tool for solving society’s most pressing challenges, from disease to climate change. Others, however, fear that AI will overtake human intelligence.
Both views rest on a common assumption that AI possesses, or will possess, a superior form of intelligence that could replace human decision-making. But given the fact that technology is the product of human civilization, the challenge from AI is something we have created for ourselves as we keep pushing our own boundaries. In other words, AI’s progress, functions and future direction are all directed by the human mind.Therefore, before AI evolves into a potential threat, the global community must reach an agreement on the role it is to play. More importantly, related laws and regulations must ensure that AI will benefit society and prevent it from threatening human life. For instance, while future robots might develop a form of emotional intelligence, enabling them to recognize, understand and express emotions in a way that is similar to humans, we must establish clear boundaries to prevent AI copying human emotions. Without legal restrictions, AI may become a social disaster.The new industrial revolution, driven by AI, is an unstoppable force. This change, much like the steam and internet revolutions that brought once-unimaginable shifts, will definitely reshape the world of work, meaning some jobs will disappear. Yet, history repeatedly shows that humanity possesses a great capacity for adaptation. Following each technological leap, new forms of work have emerged, often more creative and fulfilling than the previous ones. Consequently, it’s unnecessary to worry that AI will replace our jobs. While technology advances at a rapid pace, what we need to do is to welcome the AI era rather than resist its progress for fear of the unknown.45. Why does the author provide examples of AI applications in Paragraph 1? A. To compare the functions of different AIs.B. To explain the principles of deep learning.C. To show evidence for worries about AI.D. To predict breakthroughs in medical fields.46. What does the author imply about AI’s progress? A.
It will be too complex to control.B. It depends on human innovation.C. It will overtake human intelligence.D. It helps humans break boundaries.47. How can we prevent AI’s potential threat? A. By preventing it threatening humans.B. By stopping it expressing emotions.C. By changing global agreements.D. By setting clear rules and laws.48. What does the writer suggest readers do with the coming of the AI era? A. Deal with it positively.B. Accept it passively.C. Respond to it randomly.D. Defend it unconditionally.49. Where is the passage most probably taken from? A. A newspaper column on science.B. A textbook on computer science.C. An advertisement for AI software.D. A research paper on AI development.13.(2026·深圳·一模)People around the globe have suffered the anxiety of waiting months to find out if their homes have been damaged by wildfires. Now, once the smoke has cleared for aerial photography, researchers have found a way to identify building damage within minutes. Through a system called DamageMap, a team at Stanford University has brought an AI approach to building assessment: Instead of comparing before-and-after photos, they’ve trained a program using machine learning to rely only on post-fire images.The current method of assessing damage involves people going door-to-door to check every building. While DamageMap is not intended to replace in-person damage assessment, it could be used as a supplementary tool by offering immediate results and providing the exact locations of the affected buildings. The researchers tested it using a variety of satellite and aerial photography with at least 92 percent accuracy.Most computational systems now cannot efficiently classify building damage because the AI compares post-disaster photos with pre-disaster images that must use the same satellite, camera angle and lighting conditions, which can be expensive to obtain or unavailable. Therefore, DamageMap first uses pre-fire photos to map the area and confirm building locations.
Then, the program analyzes post-wildfire images to identify damage through features like blackened surfaces, collapsed roofs or the absence of structures.Structural damage from wildfires in California is typically divided into four categories: almost no damage, minor damage, major damage or destroyed. Because DamageMap is based on aerial images, the researchers quickly realized the system could not make such detailed assessments and trained the machine to simply determine if there was fire damage or not.Because the team used a deep learning technique, their model can continue to be improved by feeding it more data. The researchers said the tool can be applied to any area suffering from wildfires and hope it could also be trained to classify damages from other disasters, such as floods or hurricanes. “So far our results suggest that this can be generalized, and we can keep improving it,” said lead study author Marios Galanis, a graduate student at Stanford’s School of Engineering.50. What is the advantage of using DamageMap? A. It helps improve the evaluation efficiency. B. It operates automatically after self-learning.C. It analyzes large numbers of disaster photos. D. It takes the place of the traditional measures.51. How does DamageMap work? A. It identifies damage with pre-fire photos.B. It confirms locations with post-fire photos.C. It assesses damage through the features of buildings.D. It maps the fire-affected area through comparing photos.52. What would the future study focus on according to Marios Galanis? A. Accuracy improvement. B. A wider range of application.C. Techniques development. D. A higher speed of machine learning.53. What does the text mainly talk about? A. The impact of wildfires on local residents. B. Main challenges to classify structural damage.C. Possible solutions to identify natural disasters. D.
An AI system for rapid fire damage evaluation.14.(2026·襄阳·一模)Last February, John Hester, a retired software developer who lives in Southern California, asked Grok 3, a large language model, to write him code (代码) for a game that he could play on his computer. Some two hours later, he had “a playable, functional game.” “It’s so amazing,” he says.Rather than being programmed to search through a set of options, generative AI models learn from a huge number of examples. Some video games now use generative AI. You can try a demo (演示版游戏) called Oasis, like an AI-generated version of Minecraft. In the real game Minecraft, a map and rules govern everything around you. Not here. Oasis, which was released in 2024, is based on a new type of generative AI called a world model. Whatever is on the screen now feeds into the AI world model. It predicts what you will see next based on what you’re seeing now and builds virtual environments you can move through on the spot. Millions of hours of Minecraft gameplay videos went into training the world model behind Oasis.Cook, a researcher and game designer, sees some drawbacks to using generative AI to create all or parts of video games. With generative AI, typically only big companies get to make decisions about how the models work. Besides, using generative AI or world models to make lots of automated game content “might lead to more boring stuff being made,” cautions Cook. A person’s creative work reflects their experience of living in the world. And today’s generative AI can only copy what people have already created.Tessa Kaur, editor at The Gamer magazine, writes that AI-generated dialogue doesn’t produce fascinating characters. AI “simply cannot be creative enough,” she writes. When you care about game characters, it’s “because someone took the time to craft that dialogue for you, with many rewrites and deep thought.”54. Why was John Hester impressed? A. Grok 3 taught him game coding. B. He developed a new piece of software.C.
Grok 3 coded a game for him quickly. D. He updated his computer successfully.55. What can Oasis provide for game players? A. Pre-programmed game scenes. B. AI-generated virtual environments.C. Personalized game maps and rules. D. Numerous world model training data.56. What do Cook and Tessa both agree with? A. AI crafts fascinating dialogue. B. Boring characters need AI polish.C. Humans create vivid game content. D. Big firms control AI game design.57. What can be a suitable title for the text? A. AI, Create Awesome Video Games B. AI, Train World Game Models C. Grok 3, Generate Vivid Game Roles D. Grok 3, Beat the Original Game15.(2026·日照·一模)A new study by researchers at the Cluster of Excellence Science of Intelligence shows that a combination of uncertainty and heterogeneity (异质性) plays a crucial role in how groups reach agreement.Classic models of decision-making assume that all individuals contribute equally to consensus (共识), but in reality, groups are diverse and heterogeneous in both knowledge and influence. Just as some people are experts in a topic, some individuals have more accurate or reliable information than the rest of the group. Others might be more “connected,” which causes their opinions to spread more widely.These two types of diversity, namely level of knowledge and number of connections, are not independent, as uncertainty influences how the two shape decision-making. In other words, individuals with more initial knowledge tend to become more central and influential, helping others reduce uncertainty, while those who interact with many others obtain more information and thus become less uncertain over time. This dynamic allows groups to naturally remove weak or biased information and come to reliable conclusions — as long as central individuals don’t become overconfident too quickly.To explore these effects, the researchers built a model where individuals adjust their beliefs and certainty dynamically as new information comes in.
Uncertain individuals relied more on their peers, while confident ones shaped the group’s direction of opinion. But position within the network mattered just as much — highly connected agents spread their opinions widely, whether they were right or wrong.The researchers found that a mix of perspectives wasn’t enough to improve decisions. Groups reached smarter and faster decisions when guided by uncertainty. When everyone had equal certainty and connections, consensus was slow and unreliable. But in heterogeneous groups, uncertainty helped weigh opinions, so that decisions were faster and more accurate.In artificial intelligence and robotics, this research offers a new way to design systems that make better collective decisions. Self-driving cars could assess not just sensor inputs, but also the confidence of other nearby vehicles, improving safety. Many natural systems already follow the principle of adapting to uncertainty. Schools of fish, flocks of birds, and ant colonies don’t treat all input equally but adapt dynamically. We can use that knowledge to build better AI and improve human collaboration.58. What do classic models of decision-making ignore? A. Group discussion. B. Individual difference.C. Equal contribution. D. Interpersonal relationship.59. What can be inferred about “knowledge” and “connections”? A. They can be misleading. B. They can remove overconfidence.C. They rely on central individuals. D. They interact through uncertainty.60. How can uncertainty assist with decision-making according to the research? A. By balancing different views. B. By encouraging more participation.C. By making people decisive. D. By reducing unnecessary conflicts.61. What does the author mainly discuss in the last paragraph? A. Choice of new research methods. B. Possible directions of AI technology.C. Ways of adapting to uncertainty. D.
Potential application of the findings.
21世纪教育网(www.21cnjy.com)
1.(2026·青岛·一模)Artificial intelligence (AI) researchers have long dreamed of tools to supercharge science: asking novel questions, designing and running experiments. Recently, large language models (LLMs) have made discoveries that some AI developers claim have inched us closer to that future. But how do you test whether an AI model can truly do science? For answers, researchers turn to benchmarks (基准): standardized sets of questions or tasks that help measure an AI’s efficiency and reliability and compare it against other models. But the complexity of science makes assessing their aptitude especially challenging.
As Hao Peng, a computer scientist at the University of Illinois Urbana-Champaign, puts it: “Models have all this knowledge. Do they know how to use it?”Dozens of new science-focused benchmarks have emerged over the past year to answer that question, but scientists have yet to settle on a single best approach. One of the most popular, published in Nature, is Humanity’s Last Exam (HLE). It uses 2500 questions drawn from “the frontier of human knowledge” to put LLMs through their paces. One, for example, asks how many types of sensory receptors the human skin contains. “We wanted a diverse dataset that only experts who have been working on a field for a long time can answer,” says Long Phan, a research engineer with the HLE’s developer.Since the HLE first appeared as a preprint in January 2025, the benchmark has become an important proving ground for LLMs and HLE scores are now a common talking point for AI companies seeking to highlight the capabilities of their products. At the HLE’s launch, the leading developer OpenAI’s AI model won the best score at a mere 8.3%. Earlier this month, Google claimed that its latest reasoning model for science, called Gemini 3 Deep Think, had achieved a new record HLE score of 48.4%.But some scientists argue that many of the HLE’s questions test for little-known or even useless knowledge, rather than an ability to do meaningful research. A Nature editorial accompanying the HLE’s publication also raised this issue: “We think that more scientists should be asking: What would it take to develop an AI benchmark that truly measures expert-level thinking?”1. What does the underlined word “aptitude” in paragraph 2 mean? A. Knowledge. B. Performance. C. Intelligence. D. Progress.2. What does Long Phan stress about HLE? A. Its topic diversity. B. Experts’ involvement in it.C. The expertise of its dataset. D. Its data-backed popularity.3. What is paragraph 4 mainly about? A. HLE’s role as a key AI test. B. Companies’ use of HLE.C.
HLE scores of leading AI models. D. The process of HLE’s launch.4. By sharing its view, the Nature editorial aimed to ________.A. back the current testing B. express concern over HLEC. propose a workable solution D. predict future AI benchmarks【答案】1. B 2. C 3. A 4. B【解析】【导语】本文是一篇议论文。文章围绕如何检验一个人工智能模型是否真的能够进行科学研究展开,其中人类终极考试(HLE)作为核心AI测试平台备受关注,文章介绍了它的设计定位和作用;最后文章指出,一些科学家和《自然》社论对HLE提出质疑,引发了学界对“如何开发真正能测量专家级科研思维的AI基准”的思考。【1题详解】词句猜测题。根据划线词前文“For answers, researchers turn to benchmarks (基准): standardized sets of questions or tasks that help measure an AI’s efficiency and reliability and compare it against other models.(为寻找答案,研究人员转向了基准测试:一系列标准化的问题或任务,用于衡量人工智能的效率和可靠性,并将其与其他模型进行比较)”和后文“As Hao Peng, a computer scientist at the University of Illinois Urbana-Champaign, puts it: “Models have all this knowledge. Do they know how to use it ”(正如伊利诺伊大学厄巴纳-香槟分校的计算机科学家Hao Peng所说:“模型拥有所有这些知识。但它们是否知道如何运用这些知识呢?”)”可知,前文提出核心问题:如何测试AI是否真的能开展科学研究,后文补充“模型已经拥有大量知识,问题是它们会不会运用知识”,所以此处指科学的复杂性让评估AI做科学的能力/天资格外困难,aptitude意为“能力”,故选B。【2题详解】细节理解题。根据第三段中““We wanted a diverse dataset that only experts who have been working on a field for a long time can answer,” says Long Phan, a research engineer with the HLE’s developer.(“我们希望获得一个内容多样、只有长期深耕某一领域的专家才能解答的数据集,”HLE的开发者之一Long Phan说道)”可知,他强调HLE的数据集只有深耕领域的资深专家才能作答,核心是突出数据集的专业性,故选C。【3题详解】主旨大意题。根据第四段“Since the HLE first appeared as a preprint in January 2025, the benchmark has become an important proving ground for LLMs and HLE scores are now a common talking point for AI companies seeking to highlight the capabilities of their products. At the HLE’s launch, the leading developer OpenAI’s AI model won the best score at a mere 8. 3%. Earlier this month, Google claimed that its latest reasoning model for science, called Gemini 3 Deep Think, had achieved a new record HLE score of 48. 
4%.(自2025年1月HLE以预印本形式首次亮相以来,该基准测试已成为大型语言模型的重要验证平台,而HLE的得分如今已成为AI公司展示其产品能力时的常见话题。在HLE的发布仪式上,领先的开发者OpenAI的模型以仅 8.3%的得分赢得了最佳成绩。本月早些时候,谷歌宣称其最新的科学推理模型——名为“Geminis 3 深度思考”的模型,已取得了新的HLE成绩记录——48.4%)”可知,本段主要介绍HLE问世后已经成为大语言模型的重要试验场,HLE分数是AI公司展示产品能力的通用依据,后文的OpenAI、Google分数例子都是细节支撑。因此本段主要介绍HLE作为核心AI测试平台的作用,故选A。【4题详解】推理判断题。根据最后一段“But some scientists argue that many of the HLE’s questions test for little-known or even useless knowledge, rather than an ability to do meaningful research. A Nature editorial accompanying the HLE’s publication also raised this issue: “We think that more scientists should be asking: What would it take to develop an AI benchmark that truly measures expert-level thinking ”(但一些科学家认为,HLE的许多问题所测试的更多是鲜为人知甚至毫无用处的知识,而非进行有意义研究的能力。与HLE发布相关的《自然》杂志的一篇社论也提出了这一问题:“我们认为,更多的科学家应该思考:要开发一个真正能衡量专家思维水平的AI基准,需要具备哪些条件?”)”可知,科学家批评HLE多考察偏门无用知识,而非真正的研究能力,《自然》社论也认同这个问题,呼吁学界思考“如何开发真正能测量专家级思维的AI基准”,因此社论的目的是对HLE现存的问题表达担忧,故选B。2.(2026·聊城·一模)In the age of large language models (LLMs) and generative AI, we are witnessing an unprecedented transformation in how knowledge is produced, spread and consumed.LLMs, we are told, make us more efficient, simplify complex work, automate boring tasks and allow us to focus on what matters. But as we feel surprised at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they eroding our capacity for independent thought, judgment and critical reflection Efficiency is not a neutral term. The current narrative around generative AI treats efficiency as progress. It suggests that the faster something is done, the better. But faster is not always better. And not everything that can be automated should be.The popular belief is that LLMs allow humans to assign repetitive work to machines and reserve their energy for more reflective tasks, but the opposite is often true. 
As the more intellectual labor — writing, summarizing and decision-making, for example — is handed over to AI, the less we will engage with it ourselves. Instead of reserving our thoughtfulness for higher tasks, we will increasingly lose the opportunities, and perhaps even the ability, to think critically.So what do we really mean by “efficiency”? If it means shortening the time it takes to write a report, perhaps we have succeeded. But if it means replacing the intellectual effort that creates depth, coherence and reflection, then it’s not a gain; it’s a loss. The moment we accept LLMs as thought substitutes, rather than thought aids, we begin to worsen the very conditions under which human reasoning thrives: questioning, dialogue, uncertainty and contradiction.There is no turning back the presence of LLMs in our lives. But we can choose how to live with them. The question is not whether they will think for us, but whether we will let them define what it means to think at all. Efficiency, in the true sense, should not be about doing more with less thought. It should be about doing better, with deeper attention, stronger ethics and sustained human insight.5. What does the underlined word “eroding” in paragraph 2 mean? A. Changing. B. Improving. C. Destroying. D. Expanding.6. What do LLMs lead to, according to paragraph 4? A. We get more reflective labor. B. We do independent thinking less.C. We engage in more repetitive tasks. D. We reduce our work efficiency indeed.7. What does the author advocate about our using LLMs? A. Putting efficiency first. B. Reducing intellectual effort.C. Achieving more with less time. D. Increasing human engagement.8. What is the author’s purpose in writing the text? A. To describe the fast development of LLMs.B. To reflect on the negative effects of LLMs.C. To question the necessity of pursuing efficiency.D. To challenge the traditional definition of efficiency.【答案】5. C 6. B 7. D 8.
B【解析】【导语】本文是一篇说明文。主要介绍大型语言模型在带来效率提升的同时,可能削弱人们独立思考与批判性思维能力,并反思其真正价值。【5题详解】词句猜测题。根据第二段中的“But as we feel surprised at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they eroding our capacity for independent thought, judgment and critical reflection (但当我们对它们的能力感到惊讶时,一个迫切的担忧出现了:这些模型是真正提高了效率,还是eroding我们独立思考、判断和批判性反思的能力?)”可知,句中使用了选择对比结构 “genuinely boosting efficiency” 与“eroding our capacity”形成反义关系,boosting表示 “提升、增强”,与之相反的eroding应表示“逐渐损害、破坏、削弱”。故选C项。【6题详解】细节理解题。根据第四段中的“As the more intellectual labor — writing, summarizing and decision-making, for example — is handed over to AI, the less we will engage with it ourselves. Instead of reserving our thoughtfulness for higher tasks, we will increasingly lose the opportunities, and perhaps even the ability, to think critically. (随着更多的智力劳动——例如写作、总结和决策——被交给人工智能,我们自己参与其中的程度就会越低。我们不会把思考留给更高层次的任务,反而会越来越失去批判性思考的机会,甚至可能失去这种能力。)”可知,大型语言模型会导致人们独立思考减少。故选B项。【7题详解】推理判断题。根据最后一段中的“Efficiency, in the true sense, should not be about doing more with less thought. It should be about doing better, with deeper attention, stronger ethics and sustained human insight.(真正意义上的效率,不应该是用更少的思考做更多的事,而应该是用更深入的关注、更强的道德感和持续的人类洞察力把事情做得更好)”可知,作者主张使用大型语言模型时增加人类参与。故选D项。【8题详解】推理判断题。通读全文,尤其是第二段中的“But as we feel surprised at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they eroding our capacity for independent thought, judgment and critical reflection (但当我们对它们的能力感到惊讶时,一个迫切的担忧出现了:这些模型是真正提高了效率,还是正在削弱我们独立思考、判断和批判性反思的能力?)”可知,作者写作目的是反思大型语言模型带来的负面影响。故选B项。3.(2026·广州·一模)Survey data shows that most freshmen regularly use generative AI, often treating it as “an intellectual partner”, Professor John Hampson reported at a faculty (全体教师) meeting in Elite Technology University (ETU). Students most commonly use it to understand difficult concepts, search, generate study materials, and edit writing. 
Interestingly, the lowest reported use is for generating text.Meanwhile, students are using faculty office hours and the speaking and writing centers less. In last year’s computer science courses, scores on problem sets increased, yet exam scores declined. “This is concerning,” noted Hampson. “If they were using AI as a study pal, they weren’t absorbing as much as they might think.”Students want clearer AI policies, and Hampson advised faculty to carefully consider and share what level of use they permit, the reasoning behind it, how to cite use of AI, and examples of what’s permissible. He also encouraged department-wide discussions to best prepare students for a workplace where they will need to know how to write or code with its assistance. “I also believe that students need to learn to write and code unaided, to develop critical thinking skills, their agency as citizens, and also meaning — making the ideas that help them understand their own lives,” he added.Some professors expressed concerns about how AI use is impacting students’ mental health and learning. Professor George Wilson noted that students are often highly competitive, and “it’s important to create rules so that competition leads to healthy behaviors that make them better educated people.” While some suggested more one-on-one time with students, others noted that budget restrictions would make that difficult.Professor Poly Burnett observed that lecture attendance is also down. She urged faculty to make lectures something students genuinely want to attend. She also noted that many teachers are making small changes, in hopes of continuing teaching as they’ve previously taught. “We actually have to see this less as a problem and more as an opportunity,” Burnett suggested. “How can ETU lead in rethinking how we teach, how we learn... and have our students be benefiting and being at the leading edge of that ”9. 
What does the author imply about the survey findings by using “interestingly” in paragraph 1? A. They indicate a promising trend. B. They contradict a common assumption.C. They capture the faculty’s interest. D. They require further investigation.10. Which of the following changes is mentioned in paragraph 2? A. Students are interacting more with others.B. AI use has led to better learning outcomes.C. Exam scores rose while homework scores fell.D. Students are using off-line academic services less.11. Why does Hampson emphasize students writing and coding without AI? A. To clarify acceptable uses of AI in coursework.B. To prepare students for future workplace demands.C. To ensure students develop essential human capacities.D. To improve students’ long-term academic performance.12. What is Burnett’s suggestion to the faculty? A. Make lectures more entertaining.B. Let students take the leading role.C. Take the chance to reform education.D. Adjust teaching slightly to AI challenges.【答案】9. B 10. D 11. C 12. C【解析】【导语】本文是一篇议论文。主要介绍ETU大学关于新生使用生成式AI的调查结果、引发的教学问题及教师们的讨论与建议。【9题详解】推理判断题。根据第一段中的“Students most commonly use it to understand difficult concepts, search, generate study materials, and edit writing. Interestingly, the lowest reported use is for generating text. (学生们最常使用它来理解难懂的概念、搜索、生成学习资料和编辑写作。有趣的是,据报告,使用最少的是生成文本。)”可知,人们通常认为生成式AI主要用于生成文本,而调查结果与之相反,因此这与普遍的假设相矛盾。故选B项。【10题详解】细节理解题。根据第二段中的“Meanwhile, students are using faculty office hours and the speaking and writing centers less.(与此同时,学生去教师答疑时间和前往口语与写作中心求助的次数减少了。)”可知,学生们正在减少使用线下学术服务。故选D项。【11题详解】推理判断题。根据第三段中的““I also believe that students need to learn to write and code unaided, to develop critical thinking skills, their agency as citizens, and also meaning — making the ideas that help them understand their own lives,” he added.
(他补充道:“我还认为,学生需要学会独立写作和编程,以此培养批判性思维能力、作为公民的自主能动性,同时也要建立意义——构建那些能帮助他们理解自身生活的理念。”)”可知,汉普森强调学生在没有AI的情况下写作和编程是为了确保学生发展基本的人类能力。故选C项。【12题详解】细节理解题。根据最后一段中的““We actually have to see this less as a problem and more as an opportunity,” Burnett suggested. “How can ETU lead in rethinking how we teach, how we learn… and have our students be benefiting and being at the leading edge of that ”(伯内特表示:“事实上,我们不该把这更多看作一个问题,而应更多看作一个机遇。ETU该如何在重新思考教学方式、学习方式……并让我们的学生从中受益、走在前沿这方面起到引领作用?”)”可知,伯内特建议教师们抓住机会改革教育。故选C项。4.(2026·苏北七市·二模)Generative AI tools have exploded in popularity, enabling users to create text, images, music and video in seconds. But behind the innovation lies a controversial issue: the unauthorized use of copyrighted material to train these models and the uncredited reproduction of protected works in AI-generated content.For artists, writers and filmmakers, the rise of generative AI feels like a threat. Many AI systems are trained on vast datasets scraped from the internet—including novels, paintings, songs and films—without permission or compensation to the original creators. When AI produces new content that closely mimics the style or even specific elements of copyrighted works, it often does so without attribution, leaving creators feeling their labor is exploited.Take visual artists as an example. A digital painter might spend years developing a unique style, only to find AI tools can replicate that style instantly. Some artists have filed lawsuits against AI companies, arguing that training models on their work without consent violates copyright law. They demand fair compensation and clearer rules on how AI can use creative content.Tech companies, however, argue that AI training falls under "fair use," a legal doctrine that allows limited use of copyrighted material without permission for purposes such as education, research or innovation. They claim AI transforms the original material into something new, thus not infringing on copyright. 
Yet this argument fails to address the core concern of many creators: that AI profits from their work without giving anything back.The debate is far from settled. Governments around the world are struggling to update copyright laws to keep pace with AI. The European Union’s AI Act requires transparency about training data, while the US Copyright Office has refused to grant copyright to purely AI-generated works. In China, new regulations mandate labeling AI-generated content to prevent deception.As generative AI continues to evolve, finding a balance between innovation and protection will be critical. Without clear rules, both creators and AI developers will face uncertainty. The goal should be to foster AI progress while ensuring that those who create original work are respected and rewarded.13. What problem is mainly discussed in Paragraph 1? A. The rapid development of AI technology.B. The lack of legal protection for AI users.C. The illegal use of copyrighted material by AI.D. The difficulty in creating original content.14. Why do many artists oppose generative AI? A. AI makes their works less popular.B. AI copies their styles without permission.C. AI reduces the value of creative jobs.D. AI fails to produce high-quality content.15. What do tech companies claim about AI training? A. It should be strictly banned by law.B. It belongs to the category of fair use.C. It needs full permission from creators.D. It has nothing to do with copyright.16. What can be inferred from the last two paragraphs? A. Global laws on AI copyright are consistent.B. AI-generated works can get copyright in the US.C. China requires clear labels for AI content.D. Innovation should come before copyright protection.【答案】13. C 14. B 15. B 16.
C【解析】【导语]本文是一篇议论文。主要介绍生成式AI的普及所引发的版权争议,围绕AI未经授权使用版权素材、创作者维权与科技公司的分歧展开,探讨全球版权法规如何适配AI发展,寻求创新与版权保护的平衡。【13题详解】细节理解题。根据第一段中的“But behind the innovation lies a controversial issue: the unauthorized use of copyrighted material to train these models and the uncredited reproduction of protected works in AI-generated content. (但在这项创新背后,存在一个有争议的问题:未经授权使用受版权保护的素材来训练这些模型,以及在AI生成的内容中未经署名复制受保护的作品。)”可知,第一段主要讨论的核心问题是AI非法使用受版权保护的素材。A项仅提及AI技术的快速发展,未涉及争议问题;B项“缺乏对AI用户的法律保护”文中未提及;D项“创作原创内容的困难”与第一段无关。故选C项。【14题详解】细节理解题。根据第二段中的“Many AI systems are trained on vast datasets scraped from the internet—including novels, paintings, songs and films—without permission or compensation to the original creators. (许多AI系统是在从互联网上抓取的海量数据集上训练的——包括小说、绘画、歌曲和电影——却没有获得原创创作者的许可,也没有向他们支付报酬。)”以及第三段中的“A digital painter might spend years developing a unique style, only to find AI tools can replicate that style instantly. (一位数字画家可能会花费数年时间培养独特的风格,却发现AI工具能立即复制这种风格。)”可知,许多艺术家反对生成式AI,是因为AI未经许可就复制他们的风格,侵犯了他们的权益。A项“AI使他们的作品不那么受欢迎”、C项“AI降低了创意工作的价值”、D项“AI无法生成高质量的内容”文中均未提及。故选B项。【15题详解】细节理解题。根据第四段中的“Tech companies, however, argue that AI training falls under "fair use," a legal doctrine that allows limited use of copyrighted material without permission for purposes such as education, research or innovation. (然而,科技公司认为,AI训练属于“合理使用”——这是一项法律原则,允许在未经许可的情况下,为教育、研究或创新等目的有限使用受版权保护的素材。)”可知,科技公司声称AI训练属于合理使用的范畴。A项“它应该被法律严格禁止”与科技公司的观点相反;C项“它需要获得创作者的完全许可”不符合文意;D项“它与版权无关”表述错误,科技公司只是认为属于合理使用,并非与版权无关。故选B项。【16题详解】推理判断题。根据倒数第二段中的“In China, new regulations mandate labeling AI-generated content to prevent deception. 
(在中国,新法规强制要求为AI生成的内容标注标签,以防止欺骗。)”可知,中国要求为AI生成的内容标注清晰的标签。A项“全球关于AI版权的法律是一致的”表述错误,文中提到欧盟、美国、中国的法规各有不同;B项“AI生成的作品在美国可以获得版权”与文意不符,美国版权局拒绝为纯AI生成的作品授予版权;D项“创新应该优先于版权保护”错误,最后一段明确提到“找到创新与保护之间的平衡至关重要”。故选C项。

5.(2026·运城·一模)New research challenges the widespread belief that artificial intelligence (AI) is driving a major rise in global greenhouse gas emissions. Scientists from the University of Waterloo and the Georgia Institute of Technology analyzed U.S. economic data alongside estimates of how frequently AI tools are used across different industries. Their aim was to understand what might happen to the environment if AI adoption increases along its current path.

According to the U.S. Energy Information Administration, 83 percent of the nation’s economic activity relies on petrol, coal and natural gas. These fuels release greenhouse gases when burned. The researchers noted that total energy use from AI in the U.S. matched the electricity consumption of Iceland, yet this amount remained insignificant when viewed at national or global levels.

“It is important to note that the increase in energy use is not going to be uniform. It’s going to be felt more in the places where electricity is produced to power the data centers,” said Dr Juan Moreno-Cruz, a professor at the School of Environment, Enterprise and Development at the University of Waterloo and Canada Research Chair in Energy Transitions. “If you look at that energy from the local perspective, that’s a big deal because some places could see double the amount of electricity output and emissions. But at a larger scale, AI’s use of energy won’t be noticeable.”

“For people who believe that the use of AI will be a major problem for the climate and think we should avoid it, we’re offering a different perspective,” Moreno-Cruz added.
“The effects on climate are not that significant, and we can use AI to develop green technologies or to improve existing ones.”

To develop their findings, environmental economists Moreno-Cruz and Dr Anthony Harding reviewed a variety of economic sectors, the types of jobs within those sectors, and the share of tasks that could potentially be performed by AI. Moreno-Cruz and Harding intend to apply the same approach to additional countries in order to understand how AI adoption may affect environmental outcomes across different regions of the world.

17. What is the primary goal of the research?
A. To promote the development of green AI.
B. To measure energy consumption worldwide.
C. To warn about AI’s growing energy demands.
D. To assess AI’s potential environmental effects.

18. What can be said about AI energy consumption in the U.S.?
A. It contributes to petrol-based activities.
B. It will soon reach the global emission target.
C. It has small influence at the national level.
D. It exceeds Iceland’s electricity consumption.

19. What do researchers plan to do next?
A. Extend their research to more countries.
B. Shift focus to AI’s economic advantages.
C. Develop AI applications to stop emissions.
D. Reduce the energy use of AI in data centers.

20. Which of the following is the main idea of the text?
A. AI technology drives greenhouse gas emissions.
B. AI energy consumption urgently needs regulating.
C. Data centers emit more than previously estimated.
D. AI’s impact on climate is much smaller than believed.

【答案】17. D 18. C 19. A 20. D
【解析】【导语】本文是一篇说明文。文章主要讲述了新研究对人工智能(AI)是否会大幅增加全球温室气体排放这一普遍观点提出质疑,介绍了研究的过程、发现及未来计划。【17题详解】细节理解题。根据第一段“Their aim was to understand what might happen to the environment if AI adoption increases along its current path. (他们的目的是了解如果人工智能的采用沿着目前的路径增加,环境可能会发生什么)”可知,该研究的主要目的是评估人工智能对环境的潜在影响。故选D项。【18题详解】细节理解题。根据第二段“The researchers noted that total energy use from AI in the U.S.
matched the electricity consumption of Iceland, yet this amount remained insignificant when viewed at national or global levels. (研究人员指出,美国人工智能的总能源使用量与冰岛的电力消耗相当,但从国家或全球层面来看,这一数字仍然微不足道)”可知,美国人工智能的能源消耗在国家层面上影响较小。故选C项。【19题详解】细节理解题。根据最后一段“Moreno-Cruz and Harding intend to apply the same approach to additional countries in order to understand how AI adoption may affect environmental outcomes across different regions of the world. (Moreno-Cruz和Harding打算将同样的方法应用于更多的国家,以便了解人工智能的采用可能如何影响世界不同地区的环境结果)”可知,研究人员计划将他们的研究扩展到更多国家。故选A项。【20题详解】主旨大意题。根据第一段“New research challenges the widespread belief that artificial intelligence (AI) is driving a major rise in global greenhouse gas emissions. (新研究对人工智能(AI)正在推动全球温室气体排放大幅上升的普遍看法提出了挑战)”以及全文内容可知,本文主要讲述了新研究对人工智能(AI)是否会大幅增加全球温室气体排放这一普遍观点提出质疑,研究发现人工智能对气候的影响比人们认为的要小得多。故选D项。6.(2026·郑州·一模)AI technology has long been able to recognize patterns in music preferences and create personalized playlists. Now, a new AI system has taken this a step further by analyzing how people listen to music and identifying their unique “listening styles”. This advancement changes how music streaming services tailor playlists to individual users, making them more enjoyable.Music recommendation algorithms (算法) have been highly effective at suggesting new songs and artists. But Dr. Emily Carter, a music data scientist at the University of Music and Technology, notes that these algorithms often use a one-size-fits-all approach that doesn’t record the slight differences of individual listening behavior. To better understand and satisfy individual preferences, researchers need to analyze each user’s unique listening patterns.To develop and train their AI, the researchers collected data from over 50 million listening sessions and fed it into a neural network. They tested the system by seeing how well it could distinguish between different users’ listening habits. 
The system was given 100 listening sessions from each of about 3,000 known users and 100 new sessions from an unknown user. The AI looked for the best match and identified the unknown user 86% of the time, according to a study presented at the International Society for Music Information Retrieval (ISMIR).

“We were quite surprised by the accuracy,” says Alex Johnson, a doctoral student in Carter’s lab and the lead author of the study. A non-AI method was only 28% accurate.

“The work is innovative,” says Dr. Sarah Kim, a music researcher. “Personalized music experiences could transform how we interact with music platforms.”

The researchers are aware of the privacy impact of their system, which could potentially identify users based on their listening habits. In theory, similar systems could also analyze other behaviors, such as the types of podcasts (播客) people listen to or the timing of their music consumption. ISMIR organizers found the study impressive but questionable, and accepted it on condition that the researchers detail the privacy risks. Carter says they have decided, for now, not to release the software publicly.

21. What advancement of AI is mentioned in paragraph 1?
A. Protecting people’s privacy.
B. Recognizing music patterns.
C. Tailoring personalized playlists.
D. Improving music streaming quality.

22. What does Carter say about the music recommendation algorithms?
A. They consider listening styles.
B. They renew networks constantly.
C. They recommend popular songs.
D. They ignore individual preferences.

23. What is the main concern about the new AI system?
A. Its technical weaknesses in analyzing data.
B. Its inability to distinguish between users’ habits.
C. Its limited accuracy compared to non-AI methods.
D. Its potential privacy risk from tracking listening habits.

24. How do ISMIR organizers feel about the new AI system study?
A. Careful. B. Disappointed. C. Favorable. D. Uninterested.

【答案】21. C 22. D 23. D 24.
A【解析】【导语】本文是一篇说明文。文章介绍了一种新的AI系统,该系统能分析人们如何听音乐并识别其独特的“聆听风格”,进而为音乐流媒体服务定制个性化播放列表,同时研究人员也关注到了其潜在的隐私问题。【21题详解】细节理解题。根据文章第一段中的“Now, a new AI system has taken this a step further by analyzing how people listen to music and identifying their unique ‘listening styles’. This advancement changes how music streaming services tailor playlists to individual users, making them more enjoyable.(现在,一种新的人工智能系统通过分析人们如何听音乐并识别他们独特的“聆听风格”,将这一技术向前推进了一步。这一进步改变了音乐流媒体服务为个人用户定制播放列表的方式,使其更加令人愉快。)”可知,第一段中提到的AI的进步是为个人用户定制播放列表。故选C项。【22题详解】细节理解题。根据文章第二段中的“But Dr. Emily Carter, a music data scientist at the University of Music and Technology, notes that these algorithms often use a one-size-fits-all approach that doesn’t record the slight differences of individual listening behavior. To better understand and satisfy individual preferences, researchers need to analyze each user’s unique listening patterns.(但是,音乐与技术大学(University of Music and Technology)的音乐数据科学家Emily Carter博士指出,这些算法通常采用一刀切的方法,无法记录个体聆听行为的细微差异。为了更好地理解和满足个人偏好,研究人员需要分析每个用户独特的聆听模式。)”可知,Carter博士认为这些算法通常采用一刀切的方法,无法记录个体聆听行为的细微差异即音乐推荐算法忽略了个体偏好。故选D项。【23题详解】细节理解题。根据文章最后一段中的“The researchers are aware of the privacy impact of their system, which could potentially identify users based on their listening habits.(研究人员意识到他们的系统对隐私的影响,该系统可能会根据用户的聆听习惯识别用户。)”可知,新AI系统的主要问题是它可能通过追踪聆听习惯而带来的隐私风险。故选D项。【24题详解】推理判断题。根据文章最后一段中的“ISMIR organizers found the study impressive but questionable, and accepted it on condition that the researchers detail the privacy risks.(ISMIR组织者认为这项研究令人印象深刻,但也存在疑问,并在研究人员详细说明隐私风险的情况下接受了这项研究。)”可知,ISMIR组织者对新AI系统研究的态度是谨慎的,A选项“Careful(谨慎的)”符合题意。故选A项。7.(2026·西安·3月)An AI-powered robot was able to separate a gall bladder (胆藏) from the liver of a dead pig in what researchers claim is the first realistic surgery by a machine with almost no human intervention.The robot is powered by a two-tier AI system trained on 17 hours of video containing 16,000 motions performed by human surgeons during operations. 
When put to work, the first layer of the AI system watches video, monitors the surgery and issues plain-language instructions, while the second AI layer turns each instruction into 3D tool motions. In all, the gall bladder surgery requires 17 separate tasks. The robotic system has performed the operation eight times, achieving 100 percent success in all of the tasks.

“Current surgical robotic technology has made some procedures less invasive, but risks haven't really dropped from previous laparoscopic (使用腹腔镜的) surgeries by human surgeons,” says team member Axel Krieger at Johns Hopkins University in Maryland. “This made us look into what is the next generation of robotic systems that can help patients and surgeons.”

“The study really highlights the art of the possibility with AI and surgical robotics,” says Danail Stoyanov at University College London. “Incredible advances in computer vision for surgical video with the availability of open robotic platforms for research make it possible to demonstrate surgical automation.” But many challenges remain to make the system practical in clinical use. “While the robot completed the task with 100% success, it had to self-correct six times per case. For example, this could mean a gripper (夹持器) designed to grasp an artery missed its hold on the first try,” Stoyanov said.

“There were a lot of instances where it had to self-correct, but this was all fully autonomous,” says Krieger. “It would correctly identify the initial mistake and then fix itself.” The robot also had to ask a human to change one of its surgical instruments for another, meaning some level of human intervention was required. The next step, says Krieger, is to let a robot operate autonomously on a live animal, where breathing and bleeding could complicate things. “But with continued research, we're confident that we can overcome these obstacles step by step.”

25. What are the two-tier tasks that the AI system is trained to perform?
A.
Giving instructions and performing motions.
B. Monitoring the surgery and issuing commands.
C. Analyzing video and choosing surgical tools.
D. Imitating human surgeons and separating tasks.

26. What breakthrough does the new robot achieve over traditional laparoscopic surgeries?
A. Minimal invasiveness with no danger.
B. Near autonomy with high success rate.
C. Low risks in complex surgical tasks.
D. Faster self-correction speed in operations.

27. What may prove challenging in a robot operation according to the last paragraph?
A. Adapting to real-time variability.
B. Identifying surgical mistakes quicker.
C. Reducing human help for crucial tasks.
D. Dealing with complicated surgeries.

28. Which of the following best summarizes the passage?
A. The Robotic Surgery: Cutting Medical Risks
B. The Robotic Surgery: Great Clinical Progress
C. The Robotic Surgery: Simplifying Surgery
D. The Robotic Surgery: Success and Ongoing Issue

【答案】25. A 26. B 27. A 28. D
【解析】【导语】本文是一篇说明文。主要介绍了一款由双层人工智能系统驱动的手术机器人,它能在几乎无人干预的情况下成功完成猪的胆囊分离手术,展现了手术自动化的突破,同时也指出其目前仍存在需要自我修正、依赖少量人工协助等待解决的问题。【25题详解】细节理解题。根据题干关键信息 “the two-tier tasks” 将信息线索定位至第二段。根据第二段第一、二句 “The robot is powered by a two-tier AI system trained on 17 hours of video containing 16,000 motions performed by human surgeons during operations.
When put to work, the first layer of the AI system watches video, monitors the surgery and issues plain-language instructions, while the second AI layer turns each instruction into 3D tool motions.” 可知该机器人由一个双层人工智能系统驱动,该系统基于 17 个小时的视频训练而成,这些视频包含了人类外科医生在手术过程中做出的 16000 个动作。在投入使用后,人工智能系统的第一层会观看视频,监测手术并给出通俗指令,而第二层将每条指令转化为 3D 工具动作。由此可知,人工智能系统被训练执行的两层任务是给出指令并执行动作。故选 A 项。【26题详解】细节理解题。根据题干关键信息 “breakthrough”“over traditional laparoscopic surgeries” 可知,我们应在文中找出新型机器人相比传统腹腔镜手术的核心优势。根据第三段中的 “with the availability of open robotic platforms for research make it possible to demonstrate surgical automation” 可知,开放式的机器人研究平台使实现手术自动化成为可能。再结合第二段最后一句 “The robotic system has performed the operation eight times, achieving 100 percent success in all of the tasks.” 及第四段中的 “the robot completed the task with 100% success” 可知,新型机器人手术成功率很高。因此,B 项 “接近自主且成功率高” 符合题意。故选 B 项。【27题详解】推理判断题。根据最后一段最后两句 “The next step, says Krieger, is to let a robot operate autonomously on a live animal, where breathing and bleeding could complicate things. ‘But with continued research, we're confident that we can overcome these obstacles step by step.’” 可知,下一步要让机器人在活体动物上自主手术,在那里,呼吸和出血可能会使情况复杂化。这说明机器人需要处理活体手术中呼吸和出血带来的动态和不可预测的变化。由此,A 项 “适应实时变化” 可能是机器人手术面临的挑战。故选 A 项。【28题详解】主旨大意题。通读全文可知,文章前半部分重点介绍了这款 AI 手术机器人在无人干预下成功完成胆囊分离手术,实现了高成功率与手术自动化的成功突破;后半部分则阐述了其仍存在需要多次自我修正、需人工更换器械、难以应对活体复杂情况等现存挑战与待解决问题。D 项 “机器人手术:成功与现存问题” 能够全面概括文章内容。故选 D 项。8.(2026·衡阳·3月)Chinese scientists have uncovered the world’s first AI - powered breeding robot named GEAIR. It can cruise autonomously and carry out cross–pollination (异花授粉), promising reduced breeding costs, short breeding cycles, and improved breeding efficiency.GEAIR has been built with a combination of two technologies: AI and biotechnology. Xu Cao, a researcher from the Chinese Academy of Sciences, led the research team that built the robot.Cross-pollination, also known as hybrid pollination, is the process of transferring pollen (花粉) from a flower of one plant to another. 
This process helps in creating hybrid flowers of plants, also known as hybrid breeding.

The aim of hybrid breeding is to develop crop varieties with improved traits, thereby achieving enhanced yield and quality. However, according to Xu, doing this process repeatedly is time-consuming. GEAIR can help reduce the time and also avoid human errors.

Living up to its promised potential, the robot carried out a trial in a greenhouse. It identified a flower accurately and extended its arm gently to complete the hybrid pollination process. The entire breeding process was done with inch-perfect precision. The researchers also built the first “intelligent robotic breeding factory”, which can quickly and efficiently develop new, high-quality plant varieties.

GEAIR will start a new era backed by AI and biotechnology in the breeding industry. “Our new study has initiated an intelligent breeding model of integrated biotechnology, AI and robot labor — marking China’s successful pioneering efforts in the construction of a closed-loop (闭环的) technology system for intelligent robotized hybrid breeding,” Xu said. “It also shows the application prospects of ‘AI for science’ in the sector of biological breeding.”

With biotechnology as its foundation, AI as empowerment, and robots as operators, this study could help China take the lead in the race to create breeding robots that are fully autonomous and intelligent.

29. What is the primary function of the GEAIR robot?
A. To take care of human gardeners.
B. To monitor plant growth conditions.
C. To conduct hybrid pollination tasks.
D. To harvest mature crops automatically.

30. What problem of traditional hybrid breeding does GEAIR solve?
A. Lack of pollen sources.
B. Long time and mistakes.
C. High costs of hybridization.
D. A narrow range of hybrid types.

31. What can we infer about the “intelligent robotic breeding factory”?
A. It is popular worldwide now.
B. It can work without any power.
C. It mainly focuses on common crops.
D.
It can enhance the diversity of agriculture.

32. What is the significance of GEAIR’s development?
A. It makes organic farming possible.
B. It lowers the cost of traditional farming.
C. It proves robots can work better than humans.
D. It shows China’s leadership in agricultural technology.

【答案】29. C 30. B 31. D 32. D
【解析】【导语】这是一篇说明文。本文介绍了中国科学家研发出全球首个人工智能授粉机器人GEAIR,该机器人结合了人工智能和生物技术,能够自主巡航并进行异花授粉,有望降低育种成本、缩短育种周期并提高育种效率。【29题详解】细节理解题。根据第一段“It can cruise autonomously and carry out cross-pollination (异花授粉), promising reduced breeding costs, short breeding cycles, and improved breeding efficiency.(它可以自主巡航并进行异花授粉,有望降低育种成本、缩短育种周期并提高育种效率。)”可知,GEAIR机器人的主要功能是进行杂交授粉任务。故选C。【30题详解】细节理解题。根据第四段“However, according to Xu, doing this process repeatedly is time-consuming. GEAIR can help reduce the time and also avoid human errors.(然而,据徐说,重复这个过程很耗时。GEAIR可以帮助减少时间,也可以避免人为错误。)”可知,GEAIR解决了传统杂交育种耗时长且容易出错的问题。故选B。【31题详解】推理判断题。根据第五段“The researchers also built the first “intelligent robotic breeding factory”, which can quickly and efficiently develop new, high-quality plant varieties.(研究人员还建造了第一个“智能机器人育种工厂”,可以快速高效地开发出新的高质量植物品种。)”可知,智能机器人育种工厂可以快速高效地开发出新的高质量植物品种,这可以增强农业的多样性。故选D。【32题详解】细节理解题。根据最后一段“With biotechnology as its foundation, AI as empowerment, and robots as operators, this study could help China take the lead in the race to create breeding robots that are fully autonomous and intelligent.(这项研究以生物技术为基础,人工智能为赋能,机器人为操作员,可以帮助中国在创造完全自主和智能的育种机器人的竞赛中领先。)”可知,GEAIR的发展显示了中国在农业技术方面的领先地位。故选D。

9.(2026·河南·一模)For years, the dream future kitchen looked like something from a sci-fi film: robots turning burgers, mechanical arms moving wildly. But at CES (International Consumer Electronics Show) 2026, industry experts painted a different prospect. The future isn’t arriving with robots looking like us. It’s arriving quietly, invisibly, and it’s already here.

Early smart kitchen products made a critical mistake.
As Nicole Papantoniou from the Good Housekeeping Institute put it, “A lot of people were putting smart features, which you didn’t really need, into products.” Today’s successful ideas aren’t about adding technology for its own purpose. They’re about friction reduction — making cooking easier without the user even noticing the intelligence at work.

This shift is clear in the latest AI appliances. Several brands offer ovens (烤箱) with systems that “see” what you put inside. Simply place the food in, and the machine automatically selects the best cooking option. No buttons, no guesswork. Refrigerators are changing in a similar way. The latest AI models have cameras that identify ingredients, track best-before dates, and suggest recipes based on what you have. A partnership with chef Jamie Oliver brings AI-made recipes tailored to your needs. But perhaps the most unexpected use of AI in the kitchen has nothing to do with cooking. Companies are developing smart range hoods (抽油烟机) that use airflow to create a low-pressure zone above the pan, trapping very small particles (颗粒) before they reach your lungs.

So will robots replace human cooks? At a CES Discussion, chef Tyler Florence gave a firm answer. “Human-made will become the new luxury item,” he said, “Machines excel at repetitive, boring tasks. But creativity, the human touch — these will only become precious as technology advances.”

The vision from CES 2026 is not a kitchen without cooks. It’s a kitchen where invisible intelligence handles the heavy work, and humans are freed to turn ingredients into meals, and meals into memories.

33. What is the big change of today’s smart kitchen ideas?
A. Creating more robot lookalikes.
B. Reducing trouble while cooking.
C. Designing more sci-fi products.
D. Adding more complex functions.

34. How do new AI ovens simplify the cooking process?
A. They recognize food and set the right mode.
B. They bring AI-made recipes tailored to needs.
C. They suggest recipes based on what you have.
D.
They use airflow to create a low-pressure zone.

35. What can be inferred from Tyler Florence’s words?
A. Human creativity will be highly valuable.
B. AI will take the place of human creativity.
C. Human-made food is more than expensive.
D. Machines are better at innovative cooking.

36. What can be a suitable title for the text?
A. AI in Kitchens: A Smart Master for Cooking
B. Smart Kitchens: More Robotic, Less Human
C. CES 2026: When Kitchens Finally Go Sci-Fi
D. Hidden AI: The New Face of Future Kitchens

【答案】33. B 34. A 35. A 36. D
【解析】【导语】本文是一篇说明文。文章主要讲述了未来厨房中隐藏的人工智能技术及其带来的变革。【33题详解】细节理解题。根据第二段中“Today’s successful ideas aren’t about adding technology for its own purpose. They’re about friction reduction — making cooking easier without the user even noticing the intelligence at work.(如今,受欢迎的设计不再是为了科技而堆砌科技。它们旨在减少操作麻烦—— 让烹饪变得更简单,而用户甚至感受不到智能技术正在运转。)”可知,如今智能厨房理念的大变化是减少烹饪时的麻烦。故选B。【34题详解】细节理解题。根据第三段中“Several brands offer ovens (烤箱) with systems that “see” what you put inside. Simply place the food in, and the machine automatically selects the best cooking option. No buttons, no guesswork.(几个品牌提供带有“看到”你放入里面东西的系统的烤箱。只需将食物放入,机器就会自动选择最佳的烹饪选项。无需按钮,无需猜测。)”可知,新型人工智能烤箱通过识别食物并设置正确的模式来简化烹饪过程。故选A。【35题详解】推理判断题。根据倒数第二段中““Human-made will become the new luxury item,” he said, “Machines excel at repetitive, boring tasks. But creativity, the human touch — these will only become precious as technology advances.(“人工制作的将成为新的奢侈品,”他说,“机器擅长重复、无聊的任务。但是创造力,人的巧思——随着技术的进步,这些只会变得更加珍贵。”)”可知,从Tyler Florence的话中可以推断出人类创造力将具有极高的价值。故选A。【36题详解】主旨大意题。通读全文,尤其是根据第一段中“But at CES (International Consumer Electronics Show) 2026, industry experts painted a different prospect. The future isn’t arriving with robots looking like us. It’s arriving quietly, invisibly, and it’s already here.(但在2026年国际消费电子展上,行业专家描绘了一个不同的前景。未来不会以和我们长得一样的机器人形式到来。它正在悄悄地、无形地到来,而且已经在这里了。)”以及最后一段中“The vision from CES 2026 is not a kitchen without cooks.
It’s a kitchen where invisible intelligence handles the heavy work, and humans are freed to turn ingredients into meals, and meals into memories.(2026年国际消费电子展上的愿景并不是没有厨师的厨房。这是一个厨房,无形的智能处理繁重的工作,人类得以自由地将食材变成食物,将食物变成回忆。)”可知,文章主要介绍了未来厨房中隐藏的人工智能技术及其带来的变革,因此D选项“Hidden AI: The New Face of Future Kitchens(隐藏的人工智能:未来厨房的新面貌)”最符合文章主旨。故选D。

10.(2026·呼和浩特·一模)Around Christmas, 50-year-old New Yorker Holly Jespersen felt unwell but hesitated to see a doctor. She turned to ChatGPT, which advised her against visiting. Days later, with a high fever and headaches, again using the chatbot to decide when, she finally went to urgent care and was diagnosed with influenza A.

Holly is far from alone. According to OpenAI, the chatbot handles over 40 million daily health-related enquiries, with 230 million weekly. In January, it announced ChatGPT Health, allowing users to upload medical records for customized (定制的) support. The company stresses it is meant to “support, not replace” medical care, not for diagnosis or treatment, but to help with everyday questions and pattern recognition.

Yet concerns arise. Family physician Dr. Alexa Mieses Malchuk warns that ChatGPT, like WebMD, prioritizes being helpful over accurate. A 2023 study found ChatGPT’s cancer treatment plans contained many errors, some hard even for experts to detect. However, newer research on colon cancer showed its answers on symptoms and prevention were highly accurate, suggesting LLMs (大型语言模型) may assist patient education but not clinical decisions.

Beyond accuracy, psychologists highlight anxiety risks. A 2013 study confirmed that online symptom searches can intensify health anxiety, especially for those intolerant of uncertainty. Clinical psychologist Elizabeth Sadock notes that ChatGPT, always available and affirming, fuels reassurance-seeking (寻求慰藉) behavior, trapping users in a cycle of anxiety. For some patients, limiting ChatGPT use may now be part of treatment.

Privacy is another puzzle.
Biomedical informatics professor Bradley Malin acknowledges OpenAI’s security efforts, but stresses ChatGPT Health falls outside HIPAA regulation. Patients may unknowingly lose legal protections when their data flows from secured medical records to an unregulated third party.

Yet some see value. Dermatologist Kumar views ChatGPT Health as educational, clarifying terms like sunscreen types, not diagnostic. He distinguishes it from WebMD’s curated, reviewed content, while ChatGPT’s AI may mislead.

Thus, ChatGPT Health enters America’s broken system as a double-edged sword: a round-the-clock assistant that may empower (赋权) patients, yet risks misinforming, over-reassuring, and exposing them to unregulated data practices.

37. Why does OpenAI launch ChatGPT Health?
A. To replace medical care totally.
B. To provide consultation timely.
C. To treat the patients early.
D. To diagnose diseases quickly.

38. What can we learn from paragraphs 3-5?
A. ChatGPT may lead to more risks than benefits.
B. ChatGPT is always available, helpful and accurate.
C. Psychologists advise people not to use ChatGPT.
D. People will have no privacy when using ChatGPT.

39. How does Kumar find ChatGPT?
A. It teaches patients some medical terms.
B. It can be used as an assistant to patients.
C. It can help more patients cure diseases.
D. It has more advantages than disadvantages.

40. What is the author’s attitude toward ChatGPT Health?
A. Enthusiastic and supportive.
B. Cautious and optimistic.
C. Disapproving and negative.
D. Critical and loyal.

【答案】37. B 38. A 39. A 40.
B【解析】【导语】本文是一篇说明文。文章主要介绍OpenAI推出的ChatGPT Health及其用途,同时分析其在准确性、焦虑风险和隐私方面的隐患与部分价值。【37题详解】细节理解题。根据第二段中的“The company stresses it is meant to “support, not replace” medical care, not for diagnosis or treatment, but to help with everyday questions and pattern recognition.(该公司强调,它旨在“支持而非取代”医疗服务,不用于诊断或治疗,而是帮助解决日常问题和模式识别)”可知,OpenAI推出ChatGPT Health是为了帮助用户解决日常健康问题,提供及时的咨询帮助。故选B项。【38题详解】推理判断题。根据第三段中的“A 2023 study found ChatGPT’s cancer treatment plans contained many errors, some hard even for experts to detect.(2023年的一项研究发现,ChatGPT的癌症治疗方案包含许多错误,有些甚至专家都难以发现)”、第四段中的“Beyond accuracy, psychologists highlight anxiety risks.(除了准确性问题外,心理学家强调了焦虑风险)”以及第五段中的“Privacy is another puzzle.(隐私是另一个难题)”可推断,ChatGPT可能带来的风险多于益处。故选A项。【39题详解】细节理解题。根据第六段中的“Dermatologist Kumar views ChatGPT Health as educational, clarifying terms like sunscreen types, not diagnostic.(皮肤科医生Kumar认为ChatGPT Health具有教育意义,可解释防晒霜类型等术语,而非用于诊断)” 可知,Kumar认为ChatGPT可以教患者一些医学术语。故选A项。【40题详解】推理判断题。根据最后一段中的“Thus, ChatGPT Health enters America’s broken system as a double-edged sword: a round-the-clock assistant that may empower (赋权) patients, yet risks misinforming, over-reassuring, and exposing them to unregulated data practices.(因此,ChatGPT Health作为一把双刃剑进入美国不完善的医疗体系:它是一个全天候的助手,可能赋予患者权力,但也存在提供错误信息、过度安慰以及使他们面临不受监管的数据操作的风险)”可推断,作者对ChatGPT Health的态度是谨慎且乐观的,既看到了其价值,也指出了其隐患。故选B项。11.(2026·江西赣南·一模)During a golden sunset, Sharon Wilson pointed a thermal-imaging (热成像) camera at a flagship data centre, revealing the enormous heat its AI supercomputer had been releasing into the sky. Meanwhile, the facility’s core product, like many other AI chatbots, kept generating floods of false or harmful content for users worldwide. “It’s a horrible waste,” said Wilson, director of the campaign group Oilfield Witness.Wilson is not alone in having this concern. 
Scientists are watching the AI expansion with unease as it pollutes the natural world with carbon and the digital world with dangers ranging from misinformation to poisonous videos.

Data centres currently consume about 1% of global electricity, but that share may jump soon. Their slice of power is projected to hit 8.6% by 2035, while the International Energy Agency (IEA) expects data centres to account for at least a fifth of electricity-demand growth to the end of the decade.

What if AI could pay off its energy debts by saving carbon elsewhere? That idea was put forward in an IEA report, which argued that AI applications could cut emissions (排放) by far more than data centres produce. A research paper reached a similar conclusion after modelling cases in which AI would help integrate solar and wind into power networks, improve battery chemistry in electric cars, and encourage consumers to make climate-friendly choices.

The projected carbon savings carry large uncertainties—greater efficiency can lead to greater use, the IEA warns, and rebound effects may undercut the gains, such as self-driving cars undermining public transport. But other sectors are so polluting, the researchers say, AI would need to cut their emissions by only a small percentage to cover its own carbon cost.

Ultimately, given the massive energy consumed by algorithms (算法), it is essential that AI be employed to “do good in terms of fighting the climate crisis—designing the next generation of batteries, tracking deforestation,” as Sasha Luccioni, climate lead at an AI firm, said, rather than “create social-media websites filled with rubbish while data centres are still powered by coal-fired generators.”

41. What do the underlined words “this concern” in paragraph 2 refer to?
A. The shortage of AI service.
B. The unreliability of AI output.
C. The release of heat by AI centers.
D. The misuse of energy by AI systems.

42. What do the IEA report and the research paper in paragraph 4 agree on?
A.
AI can be a net carbon saver.
B. AI can be energy-efficient.
C. AI can provide computing power.
D. AI can direct electricity distribution.

43. What is the purpose of paragraph 5?
A. To put forward an opposite position.
B. To offer a more comprehensive view.
C. To add some background information.
D. To demonstrate the previous argument.

44. What does Sasha Luccioni argue about AI?
A. Its design calls for improvement.
B. Its energy use demands restriction.
C. Its application requires wise guidance.
D. Its development deserves public support.

【答案】41. D 42. A 43. B 44. C
【解析】【导语】本文是一篇说明文。主要介绍AI发展带来能源消耗与碳排放问题,同时探讨AI可助力减排的可能,并呼吁合理引导AI应用应对气候危机。【41题详解】词句猜测题。根据第一段中的“During a golden sunset, Sharon Wilson pointed a thermal-imaging (热成像) camera at a flagship data centre, revealing the enormous heat its AI supercomputer had been releasing into the sky. Meanwhile, the facility’s core product, like many other AI chatbots, kept generating floods of false or harmful content for users worldwide. “It’s a horrible waste,” said Wilson, director of the campaign group Oilfield Witness. (在金色的日落时分,莎伦·威尔逊将一台热成像相机对准一个旗舰数据中心,揭示出其人工智能超级计算机向空中释放的巨大热量。与此同时,该设施的核心产品,和许多其他人工智能聊天机器人一样,不断为全球用户生成大量虚假或有害内容。“这是一种可怕的浪费,”活动组织“油田见证”的负责人威尔逊说。)”可知,this concern指的是AI系统对能源的滥用。故选D项。【42题详解】细节理解题。根据第四段中的“That idea was put forward in an IEA report, which argued that AI applications could cut emissions (排放) by far more than data centres produce. A research paper reached a similar conclusion after modelling cases in which AI would help integrate solar and wind into power networks, improve battery chemistry in electric cars, and encourage consumers to make climate-friendly choices.
Both agree that AI can be a net carbon saver. Hence A.

【Q43】Inference. Paragraph 5 says: “The projected carbon savings carry large uncertainties – greater efficiency can lead to greater use, the IEA warns, and rebound effects may undercut the gains, such as self-driving cars undermining public transport. But other sectors are so polluting, the researchers say, AI would need to cut their emissions by only a small percentage to cover its own carbon cost.” The paragraph points out the uncertainties while also showing that AI can still deliver a worthwhile emissions cut, so its purpose is to offer a more comprehensive view. Hence B.

【Q44】Inference. The last paragraph says: “Ultimately, given the massive energy consumed by algorithms (算法), it is essential that AI be employed to ‘do good in terms of fighting the climate crisis – designing the next generation of batteries, tracking deforestation,’ as Sasha Luccioni, climate lead at an AI firm, said, rather than ‘create social-media websites filled with rubbish while data centres are still powered by coal-fired generators.’” Sasha Luccioni argues that the application of AI requires wise guidance. Hence C.

12. (2026 · Tianjin · Unified Exam)

The question of whether artificial intelligence (AI) will take away our jobs is on many people’s minds today. Current applications, from AI robotics performing complex surgeries to large language models like ChatGPT writing academic essays and solving tough problems, have not only demonstrated remarkable capabilities but also sparked significant moral concerns.

Broadly speaking, public opinion is divided. Some view AI as the ultimate tool for solving society’s most pressing challenges, from disease to climate change. Others, however, fear that AI will overtake human intelligence.
Both views rest on a common assumption that AI possesses, or will possess, a superior form of intelligence that could replace human decision-making. But given the fact that technology is the product of human civilization, the challenge from AI is something we have created for ourselves as we keep pushing our own boundaries. In other words, AI’s progress, functions and future direction are all directed by the human mind.

Therefore, before AI evolves into a potential threat, the global community must reach an agreement on the role it is to play. More importantly, related laws and regulations must ensure that AI will benefit society and prevent it from threatening human life. For instance, while future robots might develop a form of emotional intelligence, enabling them to recognize, understand and express emotions in a way that is similar to humans, we must establish clear boundaries to prevent AI copying human emotions. Without legal restrictions, AI may become a social disaster.

The new industrial revolution, driven by AI, is an unstoppable force. This change, much like the steam and internet revolutions that brought once-unimaginable shifts, will definitely reshape the world of work, meaning some jobs will disappear. Yet, history repeatedly shows that humanity possesses a great capacity for adaptation. Following each technological leap, new forms of work have emerged, often more creative and fulfilling than the previous ones. Consequently, it is unnecessary to worry that AI will replace our jobs. While technology advances at a rapid pace, what we need to do is to welcome the AI era rather than resist its progress for fear of the unknown.

45. Why does the author provide examples of AI applications in Paragraph 1?
A. To compare the functions of different AIs.
B. To explain the principles of deep learning.
C. To show evidence for worries about AI.
D. To predict breakthroughs in medical fields.

46. What does the author imply about AI’s progress?
A.
It will be too complex to control.
B. It depends on human innovation.
C. It will overtake human intelligence.
D. It helps humans break boundaries.

47. How can we prevent AI’s potential threat?
A. By preventing it threatening humans.
B. By stopping it expressing emotions.
C. By changing global agreements.
D. By setting clear rules and laws.

48. What does the writer suggest readers do with the coming of the AI era?
A. Deal with it positively.
B. Accept it passively.
C. Respond to it randomly.
D. Defend it unconditionally.

49. Where is the passage most probably taken from?
A. A newspaper column on science.
B. A textbook on computer science.
C. An advertisement for AI software.
D. A research paper on AI development.

【Answers】45. C  46. B  47. D  48. A  49. A

【Analysis】
【Overview】An argumentative essay. It centres on whether AI will take away our jobs, reviews the public’s divided views, points out that AI’s development is directed by the human mind, proposes laws and regulations to guard against AI’s potential threats, and concludes that the AI-driven transformation is unstoppable and should be welcomed rather than resisted.

【Q45】Inference. Paragraph 1 says: “The question of whether artificia