The High School Affiliated to Yunnan Normal University, Class of 2027, Grade 11 Second Semester Experience Activity (II), English (PDF version, with answers)


Grade 11 (Class of 2027), Second Semester, English: Experience Activity (II)
Part One: Reading (22.5 points)
Section One (4 questions; 2.5 points each, 10 points total)
Recent research revealed that one of the most popular AI chatbots many people rely on has been sharing inaccurate coding and computer programming advice. This highlights a major issue facing AI today: these evolving algorithms can "hallucinate," a term describing situations where AI models generate statements that sound reasonable but have been entirely made up.
Generative AI applications like large language models serve, functionally, as prediction programs. When users ask questions, the AI scans its knowledge base for relevant information and predicts word sequences it considers appropriate as responses. These predictions build upon one another again and again.
But Rayid Ghani, a professor at Carnegie Mellon University's Machine Learning Department and Heinz College of Information Systems and Public Policy, explains that this process puts greater emphasis on probability than on truth. "Most generative AI models have been trained on vast Internet data pools, but no one has checked the accuracy of those data, nor can the AI distinguish trustworthy sources," he notes. This led to Google's stupid AI suggestion about using glue on pizza to prevent cheese from sliding off, a proposal that relied on a years-old Reddit joke.
Ghani observes that while humans easily understand human errors, since we recognize that people aren't perfect, we hold machines to higher standards, which makes AI errors harder to forgive. However, he emphasizes that AIs are human-made systems. By examining both AI processes and the misleading datasets they train on, we might not only improve AI but also reflect on our social and cultural biases embedded in training data.
"We need guardrails, like fact-checking tools for chatbots. So third-party checking tools will likely emerge," Ghani predicts. "But remember: humans also make mistakes. Just because a book is in the library doesn't guarantee its accuracy."
1. How do generative AI models respond to users' questions?
A.By accessing checked databases.
B.Through repeated word predictions.
C.Using pre-written answer models.
D.By consulting human experts.
2. What message does the "glue on pizza" example convey?
A.AI may fail to identify unreliable sources.
B.AI will prioritize outdated information.
C.AI should improve cooking techniques.
D.AI can intentionally create humorous content.
3. Why do people find AI mistakes harder to accept than human errors?
A.AI lacks emotional intelligence.
B.AI errors cause greater harm.
C.Machines are expected to be perfect.
D.Human errors are less frequent.
Page 1 of 4
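The passage describes generative AI as a prediction program that builds responses one word at a time, each prediction feeding into the next. That idea can be sketched with a toy example; the tiny probability table and greedy word choice below are illustrative assumptions, not how any real chatbot is built:

```python
# Toy sketch of repeated word prediction: at each step, pick the
# most probable next word given the current word, then feed the
# result back in. The bigram table is invented for illustration;
# real models learn such probabilities from vast Internet text.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def predict_sequence(start, steps):
    """Greedily extend a sentence one predicted word at a time."""
    words = [start]
    for _ in range(steps):
        options = bigram_probs.get(words[-1])
        if not options:
            break  # no continuation known for this word
        # choose the highest-probability continuation, whether or
        # not it is true -- probability, not truth, drives the pick
        words.append(max(options, key=options.get))
    return " ".join(words)

print(predict_sequence("the", 3))  # the cat sat down
```

Because each choice depends only on likelihood, a fluent-sounding but wrong chain of words is exactly the "hallucination" the passage describes.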
