Observatorio IA - educación

Ethan Mollick (28/08/2023)
I wanted a place to translate research into advice, or commentary, in a way that was short and useful - covering One Useful Thing in each post. Due to timing and circumstance, I have very much come to focus on AI, and its impacts on work and education. So that is now the One Useful Thing of this blog.
(01/01/2023)
Progress in the learning sciences, learning analytics, educational data mining, and AI in education is accelerating. We are launching this new publication, Learning Letters, to speed the movement of learning science from the lab to dissemination. The traditional publication process often takes 12 to 18 months from article submission to publication. We are interested in developing a new approach to publishing research in the “educational technology, learning analytics, and AI in learning” space. In particular, we want to reduce time to publication and put a sharper focus on the results and outputs of studies. Learning Letters features innovative discoveries and advanced conceptual papers at the intersection of technology, learning sciences, design, psychology, computer science, and AI. Our commitment is a two-week turnaround from submission to notification. Once revisions have been made, the article will be published within a week of final editing. As a result, an article could move from submission to publication in less than four weeks, while having undergone rigorous peer review.
The use of AI-powered educational technologies (AI-EdTech) offers a range of advantages to students, instructors, and educational institutions. While much has been achieved, several challenges in managing the data underpinning AI-EdTech are limiting progress in the field. This paper outlines some of these challenges and argues that data management research has the potential to provide solutions that can enable responsible and effective learner-supporting, teacher-supporting, and institution-supporting AI-EdTech. Our hope is to establish a common ground for collaboration and to foster partnerships among educational experts, AI developers and data management researchers in order to respond effectively to the rapidly evolving global educational landscape and drive the development of AI-EdTech.
A new set of principles has been created to help universities ensure students and staff are ‘AI literate’ so they can capitalise on the opportunities technological breakthroughs provide for teaching and learning. The statement, published today (4 July) and backed by the 24 Vice Chancellors of the Russell Group, will shape institution- and course-level work to support the ethical and responsible use of generative AI and new technology and software like ChatGPT. Developed in partnership with AI and educational experts, the new principles recognise the risks and opportunities of generative AI and commit Russell Group universities to helping staff and students become leaders in an increasingly AI-enabled world. The five principles set out in today’s joint statement are:
- Universities will support students and staff to become AI-literate.
- Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
- Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
- Universities will ensure academic rigour and integrity are upheld.
- Universities will work collaboratively to share best practice as the technology and its application in education evolve.
This working paper discusses the risks and benefits of generative AI for teachers and students in writing, literature, and language programs, and makes principle-driven recommendations for how educators, administrators, and policy makers can work together to develop ethical, mission-driven policies and support the broad development of critical AI literacy.
My initial research suggests that just six months after OpenAI gave the world access to AI, we are already seeing the emergence of a significant AI-education divide. If the current trend continues, there is a very real risk that - rather than democratising education - the rise of AI will widen the digital divide and deepen socio-economic inequality. In this week’s blog post I’ll share some examples of how AI has negatively impacted educational equity and - on a more positive note - suggest some ways to reverse this trend and decrease, rather than increase, the digital and socio-economic divide.
Stella Tan, New York Times (28/06/2023)
Since its introduction less than a year ago, ChatGPT, the artificial intelligence platform that can write essays, solve math problems and write computer code, has sparked an anguished debate in the world of education. Is it a useful research tool or an irresistible license to cheat? Stella Tan, a producer on The Daily, speaks to teachers and students as they finish their first semester with ChatGPT about how it is changing the classroom.
Generative A.I.’s specialty is language — guessing which word comes next — and students quickly realized that they could use ChatGPT and other chatbots to write essays. That created an awkward situation in many classrooms. It turns out, it’s easy to get caught cheating with generative A.I. because it is prone to making stuff up, a phenomenon known as “hallucinating.” But generative A.I. can also be used as a study assistant. Some tools make highlights in long research papers and even answer questions about the material. Others can assemble study aids, like quizzes and flashcards.
