Observatorio IA - article

Stanford education scholars Victor Lee and Denise Pope discuss ongoing research into why and how often students cheat. The launch of ChatGPT and other artificial intelligence (AI) chatbots has triggered an alarm for many educators, who worry about students using the technology to cheat by passing its writing off as their own. But two Stanford researchers say that concern is misdirected, based on their ongoing research into cheating among U.S. high school students before and after the release of ChatGPT.  
The use of generative AI tools on campus is an excellent opportunity for technology and other leaders to provide guidance to students, faculty, and staff about how to navigate these new technological waters. In April 2023, we were involved in a panel with students at College Unbound. The conversation, "Generative AI and Higher Education: Disruption, Opportunities, and Challenges," offered many highlights, and the students brought rich thoughts, provocative considerations, and smart ideas, reinforcing the fact that discussions about what to do with generative AI (or about anything else, for that matter) are enhanced when students are involved. Toward the end of the panel conversation, Stan asked the students what they thought could be done to help faculty, students, and staff navigate the rise of AI. Essentially, he was curious to hear about the roles that technology and other leaders could fulfill. After considering their answers and engaging in further reflection, we came up with ten suggestions for how to step into the generative AI discussion in higher education.
Brian Basgen, EDUCAUSE (15/08/2023)
Generative artificial intelligence (AI) is in a renaissance amid a profusion of new discoveries and a breathless frenzy to keep up with emergent developments. Yet understanding the current state of technology requires understanding its origins. With the state of AI science changing quickly, we should first take a breath and establish proper footings. To help, this article provides a reading list relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.
I meet with Sutskever, co-founder and chief scientist of OpenAI, at his company's offices on a nondescript street in San Francisco's Mission District (California, USA). The goal is for him to share his predictions about the future of a technology known worldwide, one in whose development he has played a major role. I also want to know what he thinks will come next and, in particular, why building the next generation of OpenAI's flagship generative models is no longer the focus of his work. Instead of building the next GPT or the image generator DALL-E, Sutskever says his new priority is figuring out how to prevent an artificial superintelligence, a hypothetical future technology he sees coming with the conviction of a true believer, from going rogue.
You have surely already heard of Midjourney, and you probably know it is one of the new artificial intelligence tools having the greatest impact on how we can create visual materials. Through text prompts, Midjourney can generate all kinds of images, from abstract illustrations to realistic depictions of people, objects, and scenes. As you can see, this technology offers great potential for those of us who teach languages to create more engaging and relevant materials for our Spanish classes.
That humans introduce our biases into algorithms is nothing new, and it is one of the main sources of concern about these technologies. What we did not know until now is that the process can also work in reverse: the machine can pass its systematic errors on to us, and these errors can become entrenched. That is the conclusion of Spanish researchers Lucía Vicente and Helena Matute after a series of experiments whose results were published this Tuesday in the journal Scientific Reports.
This week, I attended a round table discussion at the House of Commons with politicians and experts from across the education sector to feed into UK policy on AI in Higher Education. Unsurprisingly, one of the key areas of concern and discussion was the impact of AI on academic integrity: in a world where AI can write an essay, what does AI mean for what we assess and how we assess it? And how do we respond in the short term? In this week’s blog post I’ll summarise the discussion and share what we agreed would be the most likely new model of assessment in HE in the post-AI world.