Observatorio IA

Enrique Dans (30/07/2023)
We are like in Poltergeist: sitting in front of the untuned television, saying "they're here"... No, this is not bad, it is not negative, we are not going to ban it or regulate it out of existence. It is simply inevitable. Either we develop a way of distributing wealth that adapts to these new times that are already here, or we will have a very serious problem. And it will not be a problem of the technology: it will be entirely our own.
There's been a lot of discussion in recent months about the risks that the rise of generative AI poses for higher education. Much of the conversation has centred on the threat that tools like ChatGPT - which can generate essays and other text-based assessments in seconds - pose to academic integrity. More recently, others have started to explore more subtle risks of AI in the classroom, including issues of equity and the impact on the teacher-student relationship. Much less work has been done on exploring the negative consequences that might result from not embracing AI in education.
This working paper discusses the risks and benefits of generative AI for teachers and students in writing, literature, and language programs and makes principle-driven recommendations for how educators, administrators, and policymakers can work together to develop ethical, mission-driven policies and support the broad development of critical AI literacy.
There are a lot of AI-powered "summariser" tools on the market. These tools let us paste in unstructured text and have AI identify important sentences, extract key phrases and summarise the main points of the document. My research shows that many of us are using AI summariser tools to get more out of the notes we take in class or at work, and while reading documents, watching videos, listening to podcasts and so on. But while summarising and giving structure to information can help to manage cognitive load and support basic recall, it doesn't in itself help us to learn.
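The post treats these summarisers as black boxes, but the basic mechanics of extractive summarisation are easy to picture. The sketch below is a deliberately naive, frequency-based illustration in Python; it is not how any particular commercial tool works, just one way to see what "identify important sentences" can mean in practice.

```python
import re
from collections import Counter

def summarise(text: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: keep the sentences whose words occur
    most often across the whole document, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    # Score each sentence by the average document-wide frequency of its words.
    def score(sentence: str) -> float:
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    return " ".join(s for s in sentences if s in top)

if __name__ == "__main__":
    notes = ("AI summarisers pull out key sentences and phrases. "
             "That can reduce cognitive load when reviewing notes. "
             "It supports recall, but extraction alone is not learning. "
             "Learning still needs active engagement with the ideas.")
    print(summarise(notes))
```

The point of the toy example mirrors the point of the post: the output is a condensed restatement of what was already there, which is useful for review but does not by itself produce the active processing that learning requires.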
Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts to find ways of detecting such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error-type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and are mainly biased towards classifying output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of the tools. The study makes several significant contributions. First, it summarises comparable scientific and non-scientific efforts in the field to date. Second, it presents the results of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.
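The evaluation the paper describes (accuracy plus error-type analysis) amounts to comparing each tool's verdict with the known origin of every test document and then separating the two ways a tool can be wrong. A minimal sketch of that bookkeeping, with invented labels and sample results purely for illustration:

```python
from collections import Counter

# Hypothetical benchmark records: (true origin, detector verdict). The label
# names "human" / "ai" and the sample values are invented for illustration.
results = [
    ("human", "human"), ("human", "human"), ("human", "ai"),
    ("ai", "ai"), ("ai", "human"), ("ai", "human"),
]

counts = Counter(results)
accuracy = sum(n for (truth, verdict), n in counts.items()
               if truth == verdict) / len(results)

# Error types matter as much as raw accuracy: a false positive accuses a human
# author of using AI, while a false negative lets AI-generated text pass as human.
false_positives = counts[("human", "ai")]   # human text flagged as AI
false_negatives = counts[("ai", "human")]   # AI text classified as human

print(f"accuracy: {accuracy:.2f}")
print(f"false positives (human flagged as AI): {false_positives}")
print(f"false negatives (AI passed off as human): {false_negatives}")
```

Separating the two error counts is what lets the study say the tools lean towards labelling text as human-written: in that case false negatives dominate, even when overall accuracy looks tolerable.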
My initial research suggests that just six months after OpenAI gave the world access to AI, we are already seeing the emergence of a significant AI-education divide. If the current trend continues, there is a very real risk that - rather than democratising education - the rise of AI will widen the digital divide and deepen socio-economic inequality. In this week's blog post I'll share some examples of how AI has negatively affected educational equity and - on a more positive note - suggest some ways to reverse this trend and decrease, rather than increase, the digital and socio-economic divide.
Stella Tan, New York Times (28/06/2023)
Since its introduction less than a year ago, ChatGPT, the artificial intelligence platform that can write essays, solve math problems and produce computer code, has sparked an anguished debate in the world of education. Is it a useful research tool or an irresistible license to cheat? Stella Tan, a producer on The Daily, speaks to teachers and students as they finish their first semester with ChatGPT about how it is changing the classroom.

About the observatory

A section collecting interesting publications on artificial intelligence that are not specifically devoted to the teaching of Spanish as a foreign language (ELE). The effects of AI on language teaching and learning are going to be, and indeed already are, very significant, and it is important to stay informed and up to date on this topic.
