2026 Zhongkao (Senior High School Entrance Exam) English Third-Round Review: Current Hot Topic "Artificial Intelligence" Reading Comprehension High-Frequency Practice (with Answers)

Artificial Intelligence - Reading Comprehension
With Manus, AI experimentation has burst into the open. Give Manus an easy task that can be accomplished online, such as building up a promotional network of social-media accounts, researching and writing strategy documents, or booking tickets and hotels for a conference, and it will finish the job quickly. Manus can also complete some tough tasks: it will write a detailed plan, spin up a version of itself to browse the web, and give the task its best shot.
Manus is a general-purpose AI agent product globally launched by the Chinese large-model team Monica in the early morning of March 6, 2025. It is a system built on top of existing models that can interact with the internet and perform a series of tasks without asking a human user for permission. One of its makers said: “We have built the world’s first general AI agent that ‘turns your thoughts into actions’. It can think, plan, and carry out complex tasks independently.” Yet AI labs around the world have already been experimenting with this “agentic” approach in private.
What makes Manus famous is not that it exists, but that it has been fully unleashed (释放) by its creators. A new age of experimentation is here, and it is happening not within labs, but out in the real world. Spend more time using Manus and it becomes clear that it still has a lot further to go to become consistently useful. Confusing answers, frustrating delays and never-ending loops can make the experience disappointing. Even so, by releasing it, its makers have opened this experiment to everyone.
The emergence of Manus shows important progress in AI in the field of general-purpose intelligent agents, and it provides users with a brand-new AI application mode. However, it is still in the development stage, and its future development potential and application prospects are worth paying attention to.
1.What can we know about Manus in the first paragraph?
A.It has no limitations in handling online tasks.
B.It can perform a simple task like writing a specific plan.
C.It has a certain degree of independence in task-handling.
D.It always needs human guidance when performing tasks.
2.What did one of the makers highlight in paragraph 2?
A.Manus can accomplish tasks like humans.
B.Their AI agent could do multiple tasks at a time.
C.Manus can take the place of humans completely.
D.Other AI labs experimented on Manus privately before.
3.What is the author’s attitude towards Manus?
A.Positive. B.Negative. C.Unclear. D.Doubtful.
4.What is the best title of the passage?
A.Model: The Perfect AI Agent B.Monica: An Appealing Creator
C.Manus: A Breakthrough in AI Labs D.Global AI Labs: Joint Efforts for Manus
Inside an amazing show center in Beijing, a two-armed robot is busy making sugar paintings. Its smooth movements attract a crowd of visitors. The robot is acting as a “skilled worker” of this traditional Beijing art. But the machine also acts as both a witness (见证) to and a guide for the growing mix of knowledge from different fields and places, a mix that challenges the limits of time and geography.
At the show, hundreds of experts from home and abroad are considering the relationship between cultural heritage and new technologies. For many, the promotion of traditional culture is one of the areas in which technologies such as artificial intelligence (AI) can play a key role.
“AI helps pass on culture,” said Guo Chunchao, head of Tencent’s Hunyuan Text to 3D project. Yu Minjing, a teacher at Tianjin University, agreed with him. She led her team to use VR technology to stage a firework show and recreate traditional Hanfu clothes. “With AI, the inheritance (传承) of traditional culture can overcome the limits of space and geography, which helps traditional skills that used to be passed on orally to spread more widely and more effectively,” she said.
“The mix of AI technology with VR, 3D and other technologies has also helped create cloud walking-around experiences, attracting young people to traditional culture,” Guo said.
However, some experts warned that while technology is important for preservation, it’s even more important to think about the people in the process.
Renata Sansone, from Italy, said that some archaeological sites are very difficult for everybody to understand, but the use of technologies may make a difference in getting more people involved and playing their part in protection. Many Chinese experts agree with her. Yu Minjing thinks that saving and passing on cultural relics should focus on people. How much people like and accept it is very important for protecting cultural relics well.
5.Why is the robot mentioned in paragraph 1?
A.To introduce the challenges the robot faces.
B.To call for people to protect traditional cultures.
C.To praise sugar paintings’ role in cultural protection.
D.To show a new function of technology in traditional culture.
6.What is the main reason for Yu Minjing’s team using AI and VR technologies?
A.Solving space problems in culture sharing.
B.Making new technologies develop well.
C.Attracting more young people to their show.
D.Taking the place of old teaching methods.
7.Which of the following statements would Renata Sansone agree with?
A.Schools will play a key role in cultural protection.
B.Archaeological sites are best explored with technology.
C.It’s important to make the public join in cultural protection.
D.Cloud walking-around has become a major way to experience culture.
8.What’s the best title for the text?
A.The Popular Rise of Robot Artists
B.The Role of Technology in Cultural Protection
C.The Influence of AI on Cultural Exchange
D.The Future Development of Traditional Skills
Developing a new drug is a long, expensive, and high-risk process. On average, the process takes over ten years and costs more than $2 billion before a drug receives approval. Even then, success is far from guaranteed — each year, the U.S. Food and Drug Administration (FDA) approves only about 53 new drugs. Given these high risks, pharmaceutical (制药) companies are eager for breakthroughs. Long regarded as a game-changer, artificial intelligence (AI) is now beginning to deliver on its promise in drug discovery.
Before a drug reaches the market, it must undergo three critical phases of clinical (临床的) trials. Phase one primarily assesses safety, testing a small group of healthy volunteers to identify potential side effects. Phase two, often the most vital stage, evaluates both safety and effectiveness in a larger group of patients with the target disease. Only if successful does the drug proceed to phase three, where it is tested on an even broader population to confirm its benefits and monitor for rare side effects.
AI is already proving its value in this process. Insilico, an AI-driven biotech startup, has demonstrated its potential by identifying a drug target and designing a molecule (分子) suitable for human trials in just 18 months — at a cost of only $2.7 million, a fraction of traditional costs. Encouragingly, AI-designed drugs are now advancing through clinical trials. In 2025, key results from phase-two trials will be revealed for several AI-developed treatments. According to Christoph Meier of BCG, AI could potentially double research and development productivity. If successful, four or five AI-designed drugs could advance to phase-three trials this year — a milestone for the industry.
Though AI has yet to shorten clinical trial timelines, its impact is already visible in cost reduction and smarter decision-making. Even a modest 20% decrease in failures at the phase-two stage could save nearly $450 million per drug, according to research from Cambridge University’s Andreas Bender. With AI-driven innovation speeding up, the future of drug development looks more promising than ever.
9.Why do pharmaceutical companies turn to AI?
A.To improve the efficiency. B.To guarantee the success.
C.To monitor the industry. D.To deliver the promise.
10.What is the feature of phase two?
A.Critical. B.Stable. C.Costly. D.Effective.
11.What does the underlined word “fraction” in paragraph 3 mean?
A.Majority. B.Set. C.Slice. D.Matter.
12.What can AI contribute to drug development according to the text?
A.Cut down its costs. B.Shorten its timeline.
C.Double its productivity. D.Reduce its failures sharply.
At the recycling center, two AI-powered robots act like powerful mechanical arms, sorting items from the conveyor belt covered with garbage all day long. One sorts juice boxes and plastic bottles that can be reprocessed, while the other searches for contaminants among paper products. This clearly shows that the recycling industry is also stepping into the AI revolution.
In theory, material recovery facilities (MRFs) are responsible for collecting, sorting waste, and selling recyclable materials. However, in practice, MRFs don’t perform well. Recycling plants have always struggled to classify materials with the precision required for reuse. Traditional recycling methods can only roughly separate waste into categories like paper, glass, and metal, and finer-level sorting, especially for plastics, is often overlooked. Recycling workers find it difficult to determine whether a container was originally used for shampoo, cooking oil, or some other products.
AI is expected to bring about changes. It can provide recycling plants with a more detailed perspective on packaging identification. AI-powered recycling robots are “vision systems” and are trained in a similar way to ChatGPT. They can absorb a large number of photos of discarded items in various damaged states, and then identify the subtle differences in the color, shape, texture, or logo of products. Robot companies claim that their accuracy rate can reach 99%, which is higher than the 85-95% of traditional systems.
Shifting to AI doesn’t completely solve the recycling problem. High-tech systems are not cheap. A single robot can cost up to $300,000. Even if the cost decreases, recycling robots can’t change the fact that recycling efficiency remains low. From the perspective of plastic pollution, the best solution may not be recycling single-use products, but not using them at all.
13.What is the problem with traditional recycling methods?
A.They lack professional recycling workers.
B.They can’t separate waste into basic categories.
C.They are unable to accurately sort materials for reuse.
D.They pay too much attention to the sorting of plastics.
14.How do AI-powered recycling robots achieve high-precision item identification?
A.By relying on workers to manually input product information.
B.By analyzing a vast number of photos to spot subtle differences.
C.Through recognizing the color and shape of the discarded items.
D.Through using a fixed pattern to match the items on the conveyor belt.
15.What can be expected for the future of the recycling industry?
A.Depend completely on AI technology.
B.Abandon traditional recycling methods.
C.Focus greatly on cutting disposable items’ production.
D.See a significant decrease in the cost of all recycling equipment.
16.What is the main idea of the passage?
A.Show AI’s role and limits in recycling.
B.Introduce MRFs’ working process.
C.Analyze traditional recycling methods.
D.Compare AI and traditional recycling robots.
In the desert in Peru, over 300 new ancient drawings were discovered by using Artificial Intelligence (AI) and drones (无人机). The Nazca people, who lived in the area over 2,000 years ago, made the drawings by removing the reddish top layer (层) of rocks in the desert. The drawings are called “geoglyphs (地画)”. The Nazca Lines, which are a famous group of huge ancient drawings, include many different kinds of geoglyphs, from simple shapes to pictures of animals and plants, usually drawn with a single long, winding line.
The first geoglyph was discovered in 1927. Since then, scientists have found around 430 geoglyphs in the area. Now, using AI and drones, scientists have found 303 new geoglyphs. The discovery of one geoglyph used to take three or four years, but now may be done in two or three months. However, the AI program also made lots of mistakes — for every 36 possible geoglyphs the program found, the scientists discovered only one actual geoglyph. Even so, the new method of turning up geoglyphs was much faster than the scientists could have managed without drones and AI.
To find the new geoglyphs, the scientists trained a special AI program to analyze satellite images for potential geoglyphs. They spent over 2,600 hours looking at places the AI had advised. They took lots of pictures with drones and worked on the ground in Peru to check out the locations. Most of the new-found geoglyphs are smaller than the ones already discovered, and are about nine meters in length. Many of them seem to show humans as well as animals.
The scientists are still trying to figure out what the geoglyphs mean. For one thing, the scientists discovered paths in the Nazca Desert. The scientists believe these paths were made by the Nazca people walking through the desert, and that the smaller geoglyphs were meant to be seen by these travellers. The scientists also believe the larger geoglyphs are different and may have been used for celebrations or other special events.
17.How were the Nazca geoglyphs created?
A.By adding red rocks. B.By carving in the soil.
C.By copying winding lines. D.By clearing rock surface layers.
18.Why does the author list the figures in paragraph 2?
A.To stress the efficiency of the new method. B.To show the drones’ imperfection.
C.To argue for more funding. D.To prove AI’s accuracy.
19.What can be inferred about the process of discovering new geoglyphs?
A.Drones were used for initial checks. B.Fieldwork confirmed AI suggestions.
C.AI identified all geoglyphs precisely. D.Satellites ignored some new geoglyphs.
20.What do the scientists think the larger geoglyphs were likely to be used for?
A.Wildlife adoration. B.Desert decoration. C.Special occasions. D.Road signs.
A team of scientists at Columbia University has developed an artificial intelligence model called general expression transformer (GET) to predict how genes within a cell influence its behavior. This breakthrough has the potential to deepen our understanding of cancer and genetic diseases.
Inspired by the approach used to train ChatGPT, GET learns the rules of gene regulation — how genes are turned on or off or adjusted in activity — a process known as gene expression. This process determines which proteins are produced and in what quantities, which is critical since proteins are involved in nearly every bodily function.
While still in its early stages, GET could follow in the footsteps of AlphaFold2, the AI system that predicts protein structures and was honored with the 2024 Nobel Prize in Chemistry. Both gene regulation and protein structure are pivotal to life, and disturbance in either can lead to disease. Raul Rabadan, one of the study’s authors, described this as part of a broader revolution (革命) in biology, transforming it into a predictive science.
The GET model was trained using data from over 1.3 million normal human cells across 213 different cell types, a departure from previous efforts that often focused on abnormal cells like those found in cancers. Remarkably, the model could predict the behavior of a specific cell type, such as astrocytes (星状细胞组织) in the central nervous system, even when data from that cell type was left out during training.
Experts like Mark Gerstein of Yale School of Medicine and Jian Ma of Carnegie Mellon University have praised the model’s ability to tackle one of biology’s greatest challenges: understanding how the same genome (基因组) can drive diverse behaviors in different cell types. Humans have about 20,000 genes, but their expression varies widely across cell types, such as neurons, muscle cells, or skin cells. While much of this regulatory “grammar” remains unclear, GET represents a significant step toward decoding it. This advancement could ultimately lead to new insights into health and disease, offering hope for more precise and effective treatments.
21.What do the scientists probably expect from GET?
A.Changing proteins’ structure and quantity. B.Helping uncover gene regulation rules.
C.Predicting human behaviors by genes. D.Improving human psychological health.
22.What does the underlined word “pivotal” mean in Paragraph 3?
A.Fundamental. B.Permanent. C.Unpredictable. D.Beneficial.
23.What sets the GET model apart from previous ones?
A.It included the central nervous system. B.It involved a larger amount of cell types.
C.It concentrated more on human behaviors. D.It employed data from normal human cells.
24.What will probably be talked about in the following paragraph?
A.The potential applications of GET in medicine. B.The technical limitations of the GET model.
C.The challenges the GET model will face. D.The different opinions on the GET model.
On January 20, 2025, a Chinese tech company named DeepSeek released a new AI called DeepSeek-R1, which has emerged as a pioneer especially in educational technology. This smart program can solve math problems, write code, and answer questions as well as top models such as OpenAI’s GPT-4o, but it costs much less to build. It combines advanced machine learning methods to provide personalized learning solutions for students worldwide.
Unlike traditional AI models that rely on pre-programmed answers, DeepSeek-R1 learns by trying many times and getting better, just as students practice math. DeepSeek-R1 improves by itself by using reinforcement (强化) learning to simulate (模拟) human reasoning. This allows it to guide students through problem-solving step by step, much like a patient tutor. For example, when a student struggles with a math equation, DeepSeek-R1 doesn’t just give the answer; it breaks down the logic, identifies errors, and encourages critical thinking.
The model’s applications extend beyond academics. In language learning, it analyzes students’ pronunciation through AI speech recognition and offers real-time response. For teachers, DeepSeek-R1 can provide lesson plans that agree with curriculum standards and even predict students’ learning difficulties based on historical data. Its “adaptive testing” feature creates customized exams that adjust difficulty according to individual progress.
However, challenges remain. Critics argue that over-reliance on AI might reduce human interaction in education. DeepSeek’s developers address this by emphasizing its role as a “supplement (补充), not a replacement.” As Dr. Li, a DeepSeek researcher, stated, “Our goal is to free teachers from repetitive tasks so they can focus on inspiring creativity.”
Looking ahead, DeepSeek aims to bring virtual reality (VR) into its platform, allowing students to explore historical events or scientific concepts in immersive 3D environments. While ethical (道德的) debates about AI in education still exist, one thing is clear: tools like DeepSeek are reshaping how we learn, combining technology with human wisdom.
25.What makes DeepSeek-R1 different from traditional AI models?
A.It uses pre-programmed answers.
B.It focuses on memorization techniques.
C.It creates fixed test patterns for all the students.
D.It employs reinforcement learning for reasoning.
26.What does the underlined word “adaptive” in paragraph 3 most likely mean?
A.fixed B.adjustable C.complex D.outdated
27.What future development does DeepSeek plan to add to its platform?
A.Real-time response to strengthen real-time classroom interaction.
B.Virtual reality (VR) to enable immersive learning experiences.
C.Advanced robotics to assist in repetitive tasks.
D.New technology to secure student data.
28.What’s the author’s attitude towards the use of DeepSeek-R1?
A.doubtful B.critical C.positive D.indifferent
29.What is the main purpose of writing the passage?
A.To advertise DeepSeek products.
B.To criticize the risks of DeepSeek in schools.
C.To discuss the role of DeepSeek in transforming education.
D.To compare different AI models.
GitHub is going multi-model for its Copilot code completion and programming tool (副驾驶代码完成和编程工具). Developers will soon be able to choose models from Anthropic, Google, and OpenAI for GitHub Copilot. GitHub is also announcing Spark, an AI tool for building web apps, and updates to GitHub Copilot in VS Code, Copilot for Xcode, and more at its GitHub Universe conference today.
GitHub Copilot users on the web or VS Code can select Claude 3.5, with Gemini 1.5 Pro in the coming weeks. OpenAI’s GPT-4o, o1-preview, and o1-mini models will also be available in GitHub Copilot soon. Developers will be able to toggle (切换) between models during a conversation with Copilot Chat to find the model that’s best for a particular task.
“There is no one model to rule every scenario (场景), and developers expect the agency to build with the models that work best for them,” says GitHub CEO Thomas Dohmke. “It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice.”
Microsoft-owned GitHub was the first to launch its AI tool called Copilot in 2021, ahead of Microsoft’s push to make Copilot the center of its AI efforts. It was the first major result of Microsoft’s initial $1 billion investment into OpenAI, and GitHub Copilot now has more than 1.8 million paid subscribers. It will be interesting to see if Microsoft adopts GitHub’s multi-model approach and opens up its own Copilot AI assistant to models from rivals like Google and Anthropic.
GitHub is also announcing Spark today, an AI tool that makes it easier to build web apps using natural language. An initial prompt (初始提示) uses OpenAI and Anthropic models to produce live previews of what the web app will look like, and GitHub Spark users can compare versions as they make changes. GitHub Spark lets experienced developers directly manage code, while beginners can create a web app entirely using natural language.
Once the app is created, you can run it on a desktop, tablet, or mobile device and also share the app with others to let people remix and build on top of Spark apps. GitHub Spark is part of GitHub’s vision (愿景) to get to 1 billion developers. “For too long, there has been a barrier of entry separating a vast majority of the world’s population from building software,” says Dohmke. “With Spark, we will enable over one billion personal computer and mobile phone users to build and share their own micro apps directly on GitHub.”
GitHub is also announcing more updates to Copilot at its GitHub Universe today. Multi-file edit for GitHub Copilot in VS Code is arriving on November 1st, allowing users to make edits across multiple files at the same time using Copilot Chat. Copilot Extensions will also be available in early 2025, GitHub Copilot for Xcode enters public preview, and Copilot now has a new code review capability.
Sign up for Notepad by Tom Warren, a weekly newsletter uncovering the secrets and strategy behind Microsoft’s era-defining bets on AI, gaming, and computing. Subscribe to get the latest straight to your inbox.
30.Which of the following statements is true?
A.Google and Anthropic have used Copilot to develop their own AI assistant.
B.Developers cannot modify models during a conversation.
C.Developers can choose models from Anthropic, OpenAI and Google for GitHub Copilot shortly.
D.GitHub Spark can let beginners directly manage code using natural language.
31.GitHub Copilot has gained a large number of users because ______.
A.people nowadays are increasingly interested in building software
B.GitHub Spark can assist individuals in building software more easily
C.Microsoft initially invested $1 billion in OpenAI
D.developers expect the agency to build with the models that work best for them
32.What can we learn about GitHub Spark from paragraphs 5 and 6?
A.GitHub Spark is a part of GitHub’s vision which has got 1 billion developers.
B.GitHub Spark can help individuals to create a web app which can be used on an iPad.
C.OpenAI and Anthropic models can compare versions as they make changes.
D.Individuals can create and share their own micro apps using GitHub Copilot.
33.What is the passage intended to do?
A.Share some latest information about GitHub Spark.
B.Showcase the AI development of Microsoft.
C.Appeal to more users to create apps.
D.Make a subscription of a weekly newsletter.
Where is the best place to build a data centre? Not on Earth at all, but in orbit, claims Philip Johnston, chief executive of Starcloud. The cost of launching things into space is falling fast, and once it has fallen far enough, “it’s completely inevitable that all data centres will go into space,” he says.
An orbiting data centre, in a dawn-dusk sun-synchronous polar orbit that keeps it in continuous sunlight, could use abundant solar energy. The freezing vacuum (真空) of space should make cooling easier, too, because cooling systems are more efficient when the surrounding temperature is lower. Satellite-internet constellations (星座) can provide fast connectivity with the ground, and computing clusters (集群) could be arranged in three dimensions rather than two as on Earth, to speed up data transfer.
This summer Starcloud is due to launch Starcloud 1, a demonstrator satellite containing AI chips made by Nvidia. These chips will have 100 times more processing power than any put into space before. A second satellite, Starcloud 2, is planned for the end of 2026, with 100 times more solar capacity and 100 times the computing power. The first commercial Starcloud satellite would follow by the early 2030s. Several of these could then be connected and powered by an enormous solar array.
There is no doubting Starcloud’s ambition. But critics say its numbers do not add up. One analysis argues that Starcloud has overlooked the protection solar panels need in orbit, overestimated solar power output and ignored the problem of collision (撞击) avoidance.
Everything depends on launch costs. If they fall far enough, the cost of sending a data centre into space could be more than canceled out by the availability of abundant, cheap solar energy. Starcloud expects reusable, heavy-lift rockets to cut launch costs by more than 99% within a few years. “The first thing you would do, if there’s low-cost launch,” says one analyst, “is build very large data centres in space.”
34.What does paragraph 2 mainly focus on?
A.The possibility of building a space station. B.The importance of speeding up data transfer.
C.The working efficiency of the cooling systems. D.The strengths of building data centres in space.
35.What can be inferred about the critics’ view on Starcloud’s plan?
A.They fully support the idea but question the timeline.
B.They think Starcloud has underestimated the challenges.
C.They believe space data centres are technically impossible.
D.They argue that Earth-based data centres are more efficient.
36.According to the text, which is the biggest barrier to building a data centre in space?
A.Launch costs. B.Collision possibility.
C.Power shortage. D.Environmental issues.
37.What is the main idea of the passage?
A.The history of satellite technology development.
B.The challenges of building data centres in space.
C.The potential benefits and plans for orbital data centres.
D.A comparison between Earth-based and space-based data centres.
Esther Kimani, a 29-year-old pioneer in agritech, is changing the lives of smallholder farmers across Africa. As the founder of Farmer Lifeline Technologies (FLT), she has applied artificial intelligence (AI) to fight against crop pests (害虫) and diseases, significantly reducing losses for rural farmers.
Kimani's journey began in a small Kenyan village on the Aberdare Mountains. Witnessing firsthand the severe impact of pests and diseases on local farmers' crops — and consequently, their income — she understood early how agricultural losses could mean unmet basic needs like school fees and healthcare. Despite these challenges, Kimani became the first girl from her village to attend university, studying computer science. It was there that she recognized the potential of technology to solve rural farmers' struggles, and that's how FLT was born.
In Kenya alone, 7.5 million smallholder farmers lose up to 50% of their yield (产量) to pests and diseases annually — losses that could feed millions. Traditional solutions like hiring agricultural consultants or using drones (无人机) are prohibitively expensive. To solve this critical issue, she developed an AI-powered camera, which is set up on farms at no upfront cost. It scans crops continuously and warns farmers through Short Message Service (SMS) for $3 per month when pests or diseases are detected.
A key focus for Kimani is supporting women farmers, who make up 43% of the agricultural labor force in developing nations, but who often lack access to technology. "Men in rural communities tend to have smartphones, while women rely on basic feature phones," she notes. "Through SMS, we ensure women aren't left behind."
Kimani's innovation has already impacted thousands of farmers, 78% of whom have reported a yield increase of over 36%. Her team aims to reach 200 thousand farms across the country within five years. For Kimani, success in 2030 means seeing 200 thousand smallholder farmers living with dignity — affording education, healthcare, and financial stability through improved yields. Kimani is not just building a company; she's reshaping the future of African agriculture.
38.Why did Kimani found FLT?
A.To fund rural farmers. B.To transform farming.
C.To expand AI industry. D.To research crop types.
39.How does the AI-powered camera help farmers?
A.By sending them timely warnings. B.By connecting them to consultants.
C.By controlling drones to scan crops. D.By driving pests away automatically.
40.What is an advantage of Kimani's innovation?
A.Equal access. B.Tailored service.
C.Large storage. D.Easy maintenance.
41.What can be inferred from the last paragraph?
A.Financial policies affect agriculture. B.African agriculture will take the lead.
C.Kimani will pursue further education. D.Kimani's innovation powers a bright future.
Answer Key
1.C 2.A 3.A 4.C
5.D 6.A 7.C 8.B
9.A 10.A 11.C 12.A
13.C 14.B 15.C 16.A
17.D 18.A 19.B 20.C
21.B 22.A 23.D 24.A
25.D 26.B 27.B 28.C 29.C
30.C 31.D 32.B 33.A
34.D 35.B 36.A 37.C
38.B 39.A 40.A 41.D
