Observatorio IA - LLM

Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources. This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that, relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time, and is larger for posts related to the most widely used programming languages. Posts made after ChatGPT's release receive voting scores similar to those made before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content. These results suggest that more users are adopting large language models to answer questions, and that the models are better substitutes for Stack Overflow for languages for which they have more training data. Using models like ChatGPT may be more efficient for solving certain programming problems, but their widespread adoption and the resulting shift away from public exchange on the web will limit the open data that people and models can learn from in the future.
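The difference-in-differences estimate mentioned above compares the change in activity on the treated platform (Stack Overflow) with the change on control platforms (its Russian and Chinese counterparts, and mathematics forums). A minimal sketch of the simple 2x2 version of that estimator, using made-up weekly post counts purely for illustration (these are not the paper's data):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Simple 2x2 difference-in-differences: the change in the treated
    group's mean outcome minus the change in the control group's mean."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Illustrative (invented) weekly post counts, indexed before/after release:
so_pre   = [100, 102, 98, 101]  # treated platform, pre-period
so_post  = [82, 84, 80, 86]     # treated platform, post-period
ctrl_pre = [50, 51, 49, 50]     # control platform, pre-period
ctrl_post = [49, 50, 48, 51]    # control platform, post-period

effect = did_estimate(so_pre, so_post, ctrl_pre, ctrl_post)
# Negative effect = decline in treated activity beyond the control trend.
```

The paper's actual model is a regression with additional controls; this sketch only shows the core logic of differencing out the control group's trend.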
Emily Bender, YouTube (12/08/2023)
Ever since OpenAI released ChatGPT, the internet has been awash in synthetic text, with suggested applications including robo-lawyers, robo-therapists, and robo-journalists. I will give an overview of how language models work and why they can seem to be using language meaningfully, despite only modeling the distribution of word forms. This leads into a discussion of the risks we identified in the Stochastic Parrots paper (Bender, Gebru et al. 2021) and how they are playing out in the era of ChatGPT. Finally, I will explore what must hold for an appropriate use case for text synthesis.