New research by the psychologists Lucía Vicente and Helena Matute of the University of Deusto in Bilbao, Spain, provides evidence that people can inherit artificial intelligence biases (systematic errors in AI outputs) in their own decisions.
The striking results achieved by artificial intelligence systems, which can, for example, hold a conversation much as a human does, have given this technology an image of high reliability. More and more professional fields are adopting AI-based tools to support specialists' decision-making and reduce their errors. However, this technology is not without risk, because AI results can be biased. The data used to train AI models reflect past human decisions, and if those data conceal patterns of systematic error, the AI algorithm will learn and reproduce them. Indeed, extensive evidence indicates that AI systems do inherit and amplify human biases.
The most relevant finding of Vicente and Matute’s research is that the opposite effect may also occur: humans can inherit AI biases. That is, not only would AI inherit its biases from human data, but people could also inherit those biases from the AI, with the risk of getting trapped in a dangerous loop. The results of Vicente and Matute’s research are published in Scientific Reports.
In a series of three experiments conducted by these researchers, volunteers performed a medical diagnosis task. One group of participants was assisted by a biased AI system (it exhibited a systematic error) during this task, while the control group was unassisted. The AI, the medical diagnosis task, and the disease were all fictitious; the whole setting was a simulation designed to avoid interference with real situations.
The participants assisted by the biased AI system made the same type of errors as the AI, while the control group did not. Thus, the AI recommendations influenced participants’ decisions. Yet the most significant finding of the research was that, after interacting with the AI system, those volunteers continued to mimic its systematic error when they switched to performing the diagnosis task unaided. In other words, participants who were first assisted by the biased AI replicated its bias in a context without this support, showing an inherited bias. This effect was not observed in the control group, whose participants performed the task unaided from the beginning.
These results show that biased information from an artificial intelligence model can have a lasting negative impact on human decisions. The finding of an inherited AI bias effect points to the need for further psychological and multidisciplinary research on AI-human interaction. Evidence-based regulation is also needed to guarantee fair and ethical AI, considering not only the technical features of the AI but also the psychological aspects of AI-human collaboration.
Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources. This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that, relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time and is larger for posts related to the most widely used programming languages. Posts made after ChatGPT's release receive voting scores similar to those made before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content. These results suggest that more users are adopting large language models to answer questions and that the models are better substitutes for Stack Overflow for languages for which they have more training data. Using models like ChatGPT may be more efficient for solving certain programming problems, but their widespread adoption and the resulting shift away from public exchange on the web will limit the open data that people and models can learn from in the future.
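To make the difference-in-differences estimate concrete, the sketch below fits the standard two-group, two-period interaction model on a synthetic weekly panel. The column names (posts, treated, post_release), the comparison groups, and the data are all hypothetical illustrations, not the authors' dataset or code.

```python
# Minimal difference-in-differences sketch on synthetic data (illustrative,
# not the paper's dataset or code). Columns: posts, treated, post_release.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = np.arange(104)
post_release = (weeks >= 52).astype(int)

frames = []
for treated in (0, 1):
    base = 1000.0 if treated else 800.0
    # Hypothetical effect: the treated platform loses ~16% of posts after release.
    effect = np.where(post_release == 1, 0.84 if treated else 1.0, 1.0)
    posts = base * effect + rng.normal(0.0, 20.0, weeks.size)
    frames.append(pd.DataFrame({
        "week": weeks,
        "treated": treated,
        "post_release": post_release,
        "posts": posts,
    }))
panel = pd.concat(frames, ignore_index=True)

# With a log outcome, the interaction coefficient reads roughly as a percent change.
panel["log_posts"] = np.log(panel["posts"])
model = smf.ols("log_posts ~ treated * post_release", data=panel).fit()
print(model.params["treated:post_release"])  # about -0.17, i.e. roughly a 16% drop
```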
The use of AI-powered educational technologies (AI-EdTech) offers a range of advantages to students, instructors, and educational institutions. While much has been achieved, several challenges in managing the data underpinning AI-EdTech are limiting progress in the field. This paper outlines some of these challenges and argues that data management research has the potential to provide solutions that can enable responsible and effective learner-supporting, teacher-supporting, and institution-supporting AI-EdTech. Our hope is to establish a common ground for collaboration and to foster partnerships among educational experts, AI developers and data management researchers in order to respond effectively to the rapidly evolving global educational landscape and drive the development of AI-EdTech.
This article examines the potential impact of large language models (LLMs) on higher education, using the integration of ChatGPT in Australian universities as a case study. Drawing on the experience of the first 100 days of integration, the authors conducted a content analysis of university websites and quotes from spokespeople in the media. Despite the potential benefits of LLMs in transforming teaching and learning, early media coverage has primarily focused on the obstacles to their adoption. The authors argue that the lack of official recommendations for Artificial Intelligence (AI) implementation has further impeded progress. Several recommendations for successful AI integration in higher education are proposed to address these challenges. These include developing a clear AI strategy that aligns with institutional goals, investing in infrastructure and staff training, and establishing guidelines for the ethical and transparent use of AI. The importance of involving all stakeholders in the decision-making process to ensure successful adoption is also stressed. This article offers valuable insights for policymakers and university leaders interested in harnessing the potential of AI to improve the quality of education and enhance the student experience.
This study aims to develop an AI education policy for higher education by examining the perceptions and implications of text generative AI technologies. Data was collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, using both quantitative and qualitative research methods. Based on the findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. The framework fosters a nuanced understanding of the implications of AI integration in academic settings, ensuring that stakeholders are aware of their responsibilities and can take appropriate actions accordingly.
This working paper discusses the risks and benefits of generative AI for teachers and students in writing, literature, and language programs, and makes principle-driven recommendations for how educators, administrators, and policy makers can work together to develop ethical, mission-driven policies and support the broad development of critical AI literacy.
Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for artificial intelligence generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and are mainly biased towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of the tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the results of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.
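As an illustration of the accuracy and error-type analysis described above, the sketch below computes overall accuracy together with the two error types that matter in this setting: AI-generated text passed off as human (false negatives) and human writing wrongly flagged as AI (false positives). The labels and detector outputs are made up, and the metric definitions are standard conventions rather than the paper's exact protocol.

```python
# Minimal sketch of accuracy and error-type analysis for an AI-text detector.
# Labels are made up: 1 = AI-generated, 0 = human-written.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # hypothetical detector output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
false_negative_rate = fn / (fn + tp)  # AI-generated text passed off as human
false_positive_rate = fp / (fp + tn)  # human writing wrongly flagged as AI

print(f"accuracy={accuracy:.2f}, "
      f"missed AI text={false_negative_rate:.2f}, "
      f"false accusations={false_positive_rate:.2f}")
```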
Rania Abdelghani, Yen-Hsiang Wang, Xingdi Yuan, Tong Wang, Pauline Lucas, Hélène Sauzéon, Pierre-Yves Oudeyer. arXiv (01/05/2023)
In order to train children's ability to ask curiosity-driven questions, previous research has explored designing specific exercises that provide semantic and linguistic cues to help formulate such questions. Despite showing pedagogical efficiency, this method is still limited because the cues are generated by hand, which can be a very costly process. In this context, we propose to leverage advances in the field of natural language processing (NLP) and investigate the efficiency of using a large language model (LLM) to automate the production of the pedagogical content for curious question-asking (QA) training. We generate this content using the "prompt-based" method, which consists of explaining the task to the LLM in natural text. We evaluate the output using annotations by human experts and comparisons with hand-generated content. Results indeed suggested the relevance and usefulness of this content. We also conduct a field study in a primary school (75 children aged 9-10), in which we evaluate children's QA performance after this training. We compare 3 types of content: 1) hand-generated content that proposes "closed" cues leading to predefined questions; 2) GPT-3-generated content that proposes the same type of cues; 3) GPT-3-generated content that proposes "open" cues leading to several possible questions. We see similar QA performance between the two "closed" trainings (showing the scalability of the approach using GPT-3), and better performance for participants with the "open" training. These results suggest the efficiency of using LLMs to support children in generating more curious questions, via a natural language prompting approach that is usable by teachers and other users who are not specialists in AI techniques. Furthermore, the results also show that open-ended content may be more suitable for training curious question-asking skills.
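The "prompt-based" method mentioned above amounts to describing the cue-generation task to the model in natural language. A minimal sketch follows; it uses the current OpenAI chat API and an illustrative model name rather than the GPT-3 completion setup used in the study, and the prompt wording is invented for illustration.

```python
# Minimal sketch of prompt-based cue generation (illustrative; the study used
# GPT-3 completions, while this uses the current OpenAI chat API).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_question_cues(reading_text: str, open_ended: bool = True) -> str:
    style = ("open cues that could each lead to several possible questions"
             if open_ended
             else "closed cues that each lead to one predefined question")
    prompt = (
        "You help 9-10 year-old children ask curious questions about a text.\n"
        f"Text: {reading_text}\n"
        f"Write three short {style}, in simple language."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, not the study's model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_question_cues("Bees tell each other where flowers are by dancing."))
```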
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.
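The NLLB models described above have been publicly released, so a distilled checkpoint can be tried directly. The sketch below is a usage illustration (not the paper's training code), assuming the Hugging Face transformers library and the facebook/nllb-200-distilled-600M checkpoint with Flores-200 language codes.

```python
# Usage sketch of a released NLLB checkpoint (not the paper's training code).
# Requires the transformers library; model weights download on first run.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",  # distilled public checkpoint
    src_lang="eng_Latn",                       # Flores-200 language codes
    tgt_lang="spa_Latn",
)

result = translator("Machine translation should leave no language behind.")
print(result[0]["translation_text"])
```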
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
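The bias evaluation reported above boils down to comparing false positive rates (human-written essays flagged as AI-generated) across writer groups. A minimal sketch with made-up data and an illustrative group column:

```python
# Minimal sketch of a group-wise fairness check for a GPT detector.
# All data are made up; "group" marks native vs. non-native English writers.
import pandas as pd

df = pd.DataFrame({
    "group":   ["native"] * 5 + ["non_native"] * 5,
    "is_ai":   [0] * 10,                          # every essay is human-written
    "flagged": [0, 0, 1, 0, 0, 1, 1, 0, 1, 1],    # hypothetical detector output
})

# False positive rate per group: share of human-written essays flagged as AI.
fpr = df[df["is_ai"] == 0].groupby("group")["flagged"].mean()
print(fpr)  # a large gap between groups would reflect the bias reported above
```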