You have surely heard of Midjourney, and you probably know that it is one of the new artificial intelligence tools having the greatest impact on how we can create visual materials. From text prompts, Midjourney can generate all kinds of images, from abstract illustrations to realistic depictions of people, objects and scenes. As you can see, this technology offers great potential for language teachers to create more engaging and relevant materials for our Spanish classes.
Observatorio IA
PDFPeer is an AI-powered platform that lets you interact with PDF documents through a chatbot and extract information from them.
New research by the psychologists Lucía Vicente and Helena Matute of Deusto University in Bilbao, Spain, provides evidence that people can inherit artificial intelligence biases (systematic errors in AI outputs) and carry them into their own decisions.
The astonishing results achieved by artificial intelligence systems, which can, for example, hold a conversation as a human does, have given this technology an image of high reliability. More and more professional fields are implementing AI-based tools to support specialists' decision-making and minimise errors. However, this technology is not without risks, owing to biases in AI results. We must consider that the data used to train AI models reflects past human decisions. If this data hides patterns of systematic errors, the AI algorithm will learn and reproduce them. Indeed, extensive evidence indicates that AI systems do inherit and amplify human biases.
The most relevant finding of Vicente and Matute's research is that the opposite effect may also occur: humans may inherit AI biases. That is, not only would AI inherit its biases from human data, but people could also inherit those biases from AI, with the risk of getting trapped in a dangerous loop. The results of Vicente and Matute's research are published in Scientific Reports.
In the series of three experiments conducted by these researchers, volunteers performed a medical diagnosis task. One group of participants was assisted during this task by a biased AI system (it exhibited a systematic error), while the control group was unassisted. The AI, the medical diagnosis task, and the disease were all fictitious; the whole setting was a simulation to avoid interference with real situations.
The participants assisted by the biased AI system made the same type of errors as the AI, while the control group did not make these mistakes; thus, the AI's recommendations influenced participants' decisions. Yet the most significant finding of the research was that, after interacting with the AI system, those volunteers continued to mimic its systematic error when they switched to performing the diagnosis task unaided. In other words, participants who were first assisted by the biased AI replicated its bias in a context without this support, thus showing an inherited bias. This effect was not observed in the control group, who performed the task unaided from the beginning.
These results show that biased information from an artificial intelligence model can have a lasting negative impact on human decisions. The finding of this inherited AI-bias effect points to the need for further psychological and multidisciplinary research on AI-human interaction. Evidence-based regulation is also needed to guarantee fair and ethical AI, considering not only the technical features of AI systems but also the psychological aspects of AI-human collaboration.
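To make the two-phase design concrete, here is a toy simulation in Python. Every threshold, trial count and learning rule below is invented for illustration; this is not the authors' materials or analysis. An "assisted" participant first receives recommendations from an AI that systematically over-diagnoses and gradually drifts toward that error, then performs the task unaided; a control participant is unaided throughout.

    import random

    random.seed(0)  # reproducible toy run

    def diagnose(signal, bias):
        # A participant reports "positive" when the evidence plus any
        # acquired tendency to over-diagnose crosses the 0.5 threshold.
        return signal + bias > 0.5

    def run_group(assisted, n_cases=200):
        acquired_bias = 0.0
        unaided_errors = 0
        for i in range(n_cases):
            signal = random.random()   # strength of evidence for the disease
            truth = signal > 0.5       # ground truth in this toy world
            if assisted and i < n_cases // 2:
                # Phase 1: the biased AI recommends "positive" too often
                # (threshold 0.3 instead of 0.5), and the participant
                # drifts slightly toward that systematic error.
                if signal > 0.3 and not truth:
                    acquired_bias += 0.01
            else:
                # Phase 2 (or the control group throughout): unaided
                # decisions, scored against the ground truth.
                if diagnose(signal, acquired_bias) != truth:
                    unaided_errors += 1
        return unaided_errors

    print("assisted group, unaided-phase errors:", run_group(assisted=True))
    print("control group, unaided errors:", run_group(assisted=False))

In this toy world the control group makes no unaided errors, while the previously assisted group keeps over-diagnosing after the AI is removed, which is the shape of the inherited-bias effect the paper reports.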
That we humans introduce our own biases into algorithms is nothing new, and it is one of the main sources of concern about these technologies. What we did not know until now is that the process can also run in reverse: the machine can pass its systematic errors on to us, and these errors can become perpetuated. That is the conclusion of the Spanish researchers Lucía Vicente and Helena Matute after a series of experiments whose results are published this Tuesday in the journal Scientific Reports.
Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources. This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that, relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow decreased significantly. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time and is larger for posts related to the most widely used programming languages. Posts made after ChatGPT's release receive voting scores similar to those made before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content. These results suggest that more users are adopting large language models to answer questions, and that they are better substitutes for Stack Overflow for languages for which they have more training data. Using models like ChatGPT may be more efficient for solving certain programming problems, but its widespread adoption and the resulting shift away from public exchange on the web will limit the open data that people and models can learn from in the future.
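As a rough illustration of the difference-in-differences logic behind that estimate (the numbers below are invented, not the paper's data), the estimator compares the before/after change on the treated platform with the change on a comparable control platform:

    import math

    # Hypothetical mean weekly posts before and after ChatGPT's release.
    # "Treated" = Stack Overflow (ChatGPT widely available);
    # "control" = a counterpart platform where ChatGPT access is limited.
    treated_before, treated_after = 100_000, 87_000
    control_before, control_after = 50_000, 51_000

    # Working in logs makes the estimate read as an approximate
    # percentage change in weekly posts.
    did = (
        (math.log(treated_after) - math.log(treated_before))
        - (math.log(control_after) - math.log(control_before))
    )

    print(f"DiD estimate: {did:.1%}")  # about -16% with these invented numbers

The point of the control series is to net out whatever would have happened to posting activity anyway, attributing only the extra drop on the treated platform to ChatGPT.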
We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.
This week, I attended a round table discussion at the House of Commons with politicians and experts from across the education sector to feed into UK policy on AI in Higher Education.
Unsurprisingly, one of the key areas of concern and discussion was the impact of AI on academic integrity: in a world where AI can write an essay, what does AI mean for what we assess and how we assess it? And how do we respond in the short term?
In this week’s blog post I’ll summarise the discussion and share what we agreed would be the most likely new model of assessment in HE in the post-AI world.
At the start of the new school year, here are MIT Technology Review’s six essential tips for how to get started on giving your kid an AI education.
Here are six essential tips published by MIT Technology Review on how to start giving your child an AI education.
Since I found it interesting, I made this translation, which will also be useful for teachers.
The MLA's method for citing sources uses a template of core elements: standardized criteria that writers can use to evaluate sources and create works-cited-list entries based on that evaluation. The emergence of new technologies like ChatGPT is a key reason the MLA has adopted this approach to citation: it gives writers the flexibility to apply the style when they encounter new types of sources. In what follows, we offer recommendations for citing generative AI, defined as a tool that "can analyze or summarize content from a huge set of information, including web pages, books and other writing available on the internet, and use that data to create original new content" (Weed).
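For illustration, a works-cited entry built from those core elements for a hypothetical ChatGPT exchange (the prompt, version and date below are invented) might look like this, following the MLA's recommendation to treat the prompt as the title of the source and ChatGPT as the container:

    "Explain the difference between ser and estar in Spanish" prompt. ChatGPT, 25 Sept. version, OpenAI, 2 Oct. 2023, chat.openai.com/chat.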
About the observatory
A section collecting noteworthy publications on artificial intelligence that are not specifically devoted to the teaching of Spanish as a foreign language (ELE). The effects of AI on language teaching and learning are going to be, and indeed already are, very significant, and it is important to stay informed and up to date on this topic.