Observatorio IA

You have surely already heard of Midjourney, and you probably know that it is one of the new artificial intelligence tools having the greatest impact on the way we can create visual materials. From text prompts, Midjourney can generate all kinds of images, from abstract illustrations to realistic depictions of people, objects and scenes. As you can see, this technology offers great potential for language teachers to create more engaging and relevant materials for our Spanish classes.
New research by the psychologists Lucía Vicente and Helena Matute from Deusto University in Bilbao, Spain, provides evidence that people can inherit artificial intelligence biases (systematic errors in AI outputs) in their decisions. The astonishing results achieved by artificial intelligence systems, which can, for example, hold a conversation as a human does, have given this technology an image of high reliability. More and more professional fields are implementing AI-based tools to support specialists' decision-making and minimise errors. However, this technology is not without risks, owing to biases in AI results. We must consider that the data used to train AI models reflect past human decisions. If these data hide patterns of systematic errors, the AI algorithm will learn and reproduce them. Indeed, extensive evidence indicates that AI systems do inherit and amplify human biases.

The most relevant finding of Vicente and Matute's research is that the opposite effect may also occur: humans may inherit AI biases. That is, not only would AI inherit its biases from human data, but people could also inherit those biases from the AI, with the risk of getting trapped in a dangerous loop. The results of Vicente and Matute's research are published in Scientific Reports.

In the series of three experiments conducted by these researchers, volunteers performed a medical diagnosis task. One group of participants was assisted by a biased AI system (it exhibited a systematic error) during this task, while the control group was unassisted. The AI, the medical diagnosis task, and the disease were fictitious; the whole setting was a simulation designed to avoid interference with real situations.

The participants assisted by the biased AI system made the same type of errors as the AI, while the control group did not. AI recommendations thus influenced participants' decisions. Yet the most significant finding of the research was that, after interacting with the AI system, those volunteers continued to mimic its systematic error when they switched to performing the diagnosis task unaided. In other words, participants who were first assisted by the biased AI replicated its bias in a context without this support, showing an inherited bias. This effect was not observed in the control group, who performed the task unaided from the beginning.

These results show that biased information from an artificial intelligence model can have a lasting negative impact on human decisions. The finding of an inherited AI bias effect points to the need for further psychological and multidisciplinary research on AI-human interaction. Furthermore, evidence-based regulation is also needed to guarantee fair and ethical AI, considering not only the technical features of AI systems but also the psychological aspects of AI-human collaboration.
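To make the two-phase design easier to picture, here is a minimal, purely illustrative simulation of the "inherited bias" idea: a simulated participant who adapts their decision criterion toward a systematically biased recommender keeps making the same type of error once the recommender is gone. The cutoffs, the adaptation rule and all numbers below are assumptions made for illustration; they are not taken from Vicente and Matute's materials.

```python
# Toy simulation of bias inheritance (illustrative only; not the authors' study).
import numpy as np

rng = np.random.default_rng(42)

TRUE_CUTOFF = 0.50   # fictitious biomarker level above which the disease is present
AI_CUTOFF = 0.65     # the biased AI systematically under-diagnoses (shifted cutoff)
ADAPTATION = 0.05    # how strongly the simulated participant drifts toward the advice
N_TRIALS = 300

def phase(cutoff, advice_cutoff=None):
    """Run one block of trials; return (error_rate, final_cutoff)."""
    errors = 0
    for _ in range(N_TRIALS):
        marker = rng.uniform(0.0, 1.0)
        truth = marker > TRUE_CUTOFF
        decision = marker > cutoff
        errors += decision != truth
        if advice_cutoff is not None:
            # participant's criterion drifts toward the one implied by the AI's advice
            cutoff += ADAPTATION * (advice_cutoff - cutoff)
    return errors / N_TRIALS, cutoff

# Assisted group: biased advice in phase 1, then unaided in phase 2
_, learned_cutoff = phase(TRUE_CUTOFF, advice_cutoff=AI_CUTOFF)
assisted_phase2_errors, _ = phase(learned_cutoff)

# Control group: unaided in both phases
control_phase2_errors, _ = phase(TRUE_CUTOFF)

print(f"unaided error rate after biased AI: {assisted_phase2_errors:.2%}")
print(f"unaided error rate, control group:  {control_phase2_errors:.2%}")
```

In this toy version, the assisted participant's criterion ends up near the AI's shifted cutoff, so they keep misclassifying the same band of cases even after the advice disappears, while the control participant does not.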
That we humans introduce our biases into algorithms is nothing new, and it is one of the main sources of concern about these technologies. What we did not know until now is that the process can also happen in reverse: the machine can pass its systematic errors on to us, and these can then persist. That is the conclusion of the Spanish researchers Lucía Vicente and Helena Matute after a series of experiments whose results are published this Tuesday in the journal Scientific Reports.
Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources. This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing the activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that, relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time and is larger for posts related to the most widely used programming languages. Posts made after ChatGPT receive voting scores similar to those made before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content. These results suggest that more users are adopting large language models to answer questions and that they are better substitutes for Stack Overflow for languages for which they have more training data. Using models like ChatGPT may be more efficient for solving certain programming problems, but their widespread adoption and the resulting shift away from public exchange on the web will limit the open data people and models can learn from in the future.
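The estimate quoted above comes from a difference-in-differences (DiD) comparison: treated versus control platforms, before versus after ChatGPT's release, with the interaction term capturing the effect. The sketch below shows how such an estimate is typically computed; the data are synthetic and the column names, dates and minimal OLS specification are assumptions for illustration, not the paper's actual model.

```python
# Hedged sketch of a difference-in-differences estimate on synthetic weekly post counts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

weeks = pd.date_range("2022-06-01", "2023-06-01", freq="W")
release = pd.Timestamp("2022-11-30")   # ChatGPT public release

rows = []
for platform, treated, base in [("stackoverflow", 1, 100_000), ("control_forum", 0, 20_000)]:
    for week in weeks:
        post_period = int(week >= release)
        # synthetic counts: the treated platform loses roughly 16% after the release
        effect = 0.84 if (treated and post_period) else 1.0
        posts = base * effect * rng.lognormal(sigma=0.05)
        rows.append({"platform": platform, "treated": treated,
                     "post": post_period, "log_posts": np.log(posts)})

df = pd.DataFrame(rows)

# The DiD estimate is the coefficient on the interaction term treated:post,
# interpretable as an approximate log-point change in weekly posts.
model = smf.ols("log_posts ~ treated * post", data=df).fit()
print(model.params["treated:post"])   # about -0.17 log points, i.e. roughly a 16% drop
```

The design choice of working in logs is what lets the interaction coefficient be read as a percentage change; the real study controls for more than this two-by-two comparison.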
This week, I attended a round table discussion at the House of Commons with politicians and experts from across the education sector to feed into UK policy on AI in Higher Education. Unsurprisingly, one of the key areas of concern and discussion was the impact of AI on academic integrity: in a world where AI can write an essay, what does AI mean for what we assess and how we assess it? And how do we respond in the short term? In this week’s blog post I’ll summarise the discussion and share what we agreed would be the most likely new model of assessment in HE in the post-AI world.
MLA Style Center (17/03/2023)
The MLA’s method for citing sources uses a template of core elements—standardized criteria that writers can use to evaluate sources and create works-cited-list entries based on that evaluation. That new technologies like ChatGPT emerge is a key reason why the MLA has adopted this approach to citation—to give writers flexibility to apply the style when they encounter new types of sources. In what follows, we offer recommendations for citing generative AI, defined as a tool that “can analyze or summarize content from a huge set of information, including web pages, books and other writing available on the internet, and use that data to create original new content” (Weed). 

About the observatory

A section collecting noteworthy publications on artificial intelligence that are not specifically devoted to the teaching of Spanish as a foreign language (ELE). The effects of AI on language teaching and learning are going to be, indeed already are, very significant, and it is important to stay informed and up to date on this topic.
