Observatorio IA


The use of generative AI tools on campus is an excellent opportunity for technology and other leaders to provide guidance to students, faculty, and staff about how to navigate these new technological waters. In April 2023, we were involved in a panel with students at College Unbound. The conversation, "Generative AI and Higher Education: Disruption, Opportunities, and Challenges," offered many highlights, and the students brought rich thoughts, provocative considerations, and smart ideas, reinforcing the fact that discussions around what to do about generative AI (or about anything else, for that matter) are enhanced when students are involved. Toward the end of the panel conversation, Stan asked the students what they thought could be done to help faculty, students, and staff navigate the rise of AI. Essentially, he was curious to hear about the roles that technology and other leaders could fulfill. After thinking about their answers and engaging in further reflection, we came up with ten suggestions for how to step into the generative AI discussion in higher education.
Brian Basgen, EDUCAUSE (15/08/2023)
Generative artificial intelligence (AI) is in a renaissance amid a profusion of new discoveries and a breathless frenzy to keep up with emergent developments. Yet understanding the current state of the technology requires understanding its origins. With the state of AI science changing quickly, we should first take a breath and establish proper footing. To help, this article provides a reading list relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.
I meet with Sutskever, co-founder and chief scientist of OpenAI, at his company's offices on a nondescript street in San Francisco's Mission District (California, USA). The goal is for him to share his predictions about the future of a technology known worldwide, one whose development he has had a great deal to do with. I also want to know what he thinks will come next and, in particular, why building the next generation of OpenAI's flagship generative models is no longer the focus of his work. Instead of building the next GPT or the image generator DALL-E, Sutskever says his new priority is figuring out how to prevent an artificial superintelligence, a hypothetical future technology whose arrival he anticipates with the conviction of a true believer, from going rogue.
Recording of the interview conducted by Lorena Fernández Álvarez, Director of Digital Communication at the Universidad de Deusto, with Gemma Galdon Clavell, an expert in public policy analysis specializing in the social impact of digitalization and artificial intelligence, and CEO of Eticas Consulting. Reflections on the social impact of algorithmic systems and on how artificial intelligence processes reproduce society's biases. AI outputs tend to generate small, homogeneous worlds instead of demanding larger, more heterogeneous ones. The interview took place on 26 October 2023, on the occasion of her participation as keynote speaker at the DeustoForum event "Los Retos Éticos ante la Inteligencia Artificial" at the Universidad de Deusto (Bilbao campus).
You have surely already heard of Midjourney, and you probably know that it is one of the new artificial intelligence tools having the greatest impact on how we can create visual materials. Through text prompts, Midjourney can generate all kinds of images, from abstract illustrations to realistic depictions of people, objects, and scenes. As you can see, this technology offers great potential for those of us who teach languages to create more engaging and relevant materials for our Spanish classes.
New research by the psychologists Lucía Vicente and Helena Matute from Deusto University in Bilbao, Spain, provides evidence that people can inherit artificial intelligence biases (systematic errors in AI outputs) in their decisions. The astonishing results achieved by artificial intelligence systems, which can, for example, hold a conversation as a human does, have given this technology an image of high reliability. More and more professional fields are implementing AI-based tools to support the decision-making of specialists and minimise errors in their decisions. However, this technology is not without risks due to biases in AI results. We must consider that the data used to train AI models reflects past human decisions. If this data hides patterns of systematic errors, the AI algorithm will learn and reproduce these errors. Indeed, extensive evidence indicates that AI systems do inherit and amplify human biases. The most relevant finding of Vicente and Matute's research is that the opposite effect may also occur: that humans inherit AI biases. That is, not only would AI inherit its biases from human data, but people could also inherit those biases from AI, with the risk of getting trapped in a dangerous loop. Scientific Reports publishes the results of Vicente and Matute's research. In the series of three experiments conducted by these researchers, volunteers performed a medical diagnosis task. One group of participants was assisted by a biased AI system (it exhibited a systematic error) during this task, while the control group was unassisted. The AI, the medical diagnosis task, and the disease were fictitious. The whole setting was a simulation to avoid interference with real situations. The participants assisted by the biased AI system made the same type of errors as the AI, while the control group did not make these mistakes. Thus, AI recommendations influenced participants' decisions.
Yet the most significant finding of the research was that, after interaction with the AI system, those volunteers continued to mimic its systematic error when they switched to performing the diagnosis task unaided. In other words, participants who were first assisted by the biased AI replicated its bias in a context without this support, thus showing an inherited bias. This effect was not observed for the participants in the control group, who performed the task unaided from the beginning. These results show that biased information from an artificial intelligence model can have a lasting negative impact on human decisions. The finding of an inherited AI bias effect points to the need for further psychological and multidisciplinary research on AI-human interaction. Furthermore, evidence-based regulation is also needed to guarantee fair and ethical AI, considering not only the AI's technical features but also the psychological aspects of AI-human collaboration.
That we humans introduce our biases into algorithms is nothing new, and it is one of the main sources of concern about these technologies. What we did not know until now is that the process can also run in reverse: the machine can pass its systematic errors on to us, and those errors can become entrenched. That is the conclusion reached by the Spanish researchers Lucía Vicente and Helena Matute after conducting a series of experiments whose results are published this Tuesday in the journal Scientific Reports.
Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources. This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing the activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time and is larger for posts related to the most widely used programming languages. Posts made after ChatGPT receive voting scores similar to those made before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content. These results suggest that more users are adopting large language models to answer questions, and that they are better substitutes for Stack Overflow for languages for which they have more training data. Using models like ChatGPT may be more efficient for solving certain programming problems, but its widespread adoption and the resulting shift away from public exchange on the web will limit the open data that people and models can learn from in the future.
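The 16% figure comes from a difference-in-differences (DiD) model: the change on the treated platform (Stack Overflow, where ChatGPT is widely accessible) is compared against the change on control platforms over the same period. A minimal sketch of that mechanics, using made-up post counts rather than the paper's actual data, might look like this:

```python
# Illustrative difference-in-differences (DiD) on log post counts.
# The numbers below are invented for demonstration; they are NOT the
# study's data, and the paper's actual model is a regression, not this
# two-by-two shortcut.
import math

# Hypothetical mean weekly posts (before ChatGPT, after ChatGPT).
posts = {
    "stackoverflow": (10000, 8400),  # treated: ChatGPT widely accessible
    "control_forum": (5000, 5000),   # control: ChatGPT access limited
}

def did_log_estimate(treated, control):
    """DiD on log counts: (log change in treated) - (log change in control).
    On the log scale, the result approximates the proportional change in
    activity attributable to the treatment."""
    t_pre, t_post = treated
    c_pre, c_post = control
    return (math.log(t_post) - math.log(t_pre)) - (
        math.log(c_post) - math.log(c_pre)
    )

effect = did_log_estimate(posts["stackoverflow"], posts["control_forum"])
print(f"Estimated treatment effect: {effect:.1%}")
```

Differencing against the control nets out platform-wide trends (seasonality, general forum decline) that would otherwise be misattributed to ChatGPT's release.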


About the observatory

A section collecting noteworthy publications on artificial intelligence that are not specifically devoted to the teaching of Spanish as a foreign language (ELE). The effects of AI on language teaching and learning are going to be, indeed already are, remarkable, and it is important to stay informed and up to date on this topic.