
AI @ Monsignor William Barry Memorial Library: The Future of Education

At the Monsignor William Barry Memorial Library, we embrace the transformative power of artificial intelligence (AI) in shaping our daily lives. This page provides resources and general guidance on AI for students, faculty, and staff.

The impact of generative AI on higher education learning and teaching

Artificial intelligence (AI) is becoming ubiquitous in modern developed societies. It is a significant factor in marketing, design, and entertainment, and it is increasingly present in higher education (HE). Many AI systems already operate behind the scenes, affecting a wide range of activities in contemporary life. A significant development occurred in November 2022, however, when the San Francisco-based company OpenAI released its Chat Generative Pre-trained Transformer, ChatGPT, to the public. ChatGPT is a large language model (LLM) chatbot that uses natural language processing (NLP) to create human-like responses to users' prompts. This development has already had a considerable impact on education and has prompted the release of numerous other generative AI tools.
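For readers curious about what "responding to a prompt" looks like in practice, the short Python sketch below shows one common pattern: a program sends a user's prompt to a hosted LLM and prints the generated reply. It is only an illustration, and it assumes the OpenAI Python SDK is installed and an API key is configured; the model name is an example, not a recommendation.

    # Minimal illustrative sketch: send a prompt to a hosted LLM and print the reply.
    # Assumes the OpenAI Python SDK ("pip install openai") is installed and the
    # OPENAI_API_KEY environment variable is set; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Explain photosynthesis in two sentences."}],
    )
    print(response.choices[0].message.content)  # the model's human-like reply

The chat interfaces most users see wrap this same prompt-and-response exchange in a conversational web page.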

ChatGPT was rapidly adopted by the general public, reaching 100 million users within two months of its public release and becoming the fastest-growing consumer application in history (Hu, 2023). Other recent generative AI developments influencing HE include Midjourney (released in July 2022), Microsoft's Bing AI Chat (February 2023), Google's chatbot Bard (February 2023), and DALL-E (January 2021). More specialized generative AI tools affecting HE include Synthesia, a video generation tool, and Amper Music, a cloud-based platform that quickly generates soundtracks for films or digital games. The ultimate impact of AI, or what it might become, is as yet unknown; however, its potential to "trigger transformative change is undeniable" (Bozkurt, 2023, p. 199). Commentators have made strong claims about AI's impact, including that it is "poised to have a more substantial impact than the introduction of electricity" (Thurzo, Strunga, Urban, Surovková, & Afrashtehfar, 2023).

Education and Generative AI

Leaders in education explore the future of technology in and beyond the classroom.

Generative AI has already changed education.

Students are already using generative AI tools like ChatGPT for homework assistance, which alarms educators because students may bypass the assignment's intended learning objective. For example, essays are often used to teach the mechanics of writing, but learners won't hone that skill if they prompt AI to generate an entire essay for them. Panelists framed this technology as both a potential opportunity and a potential hindrance. If educators reevaluate what they want students to learn, they can revise their curricula to facilitate higher levels of cognitive processing. They can also consider the new opportunities generative AI tools offer to both educators and learners.

However, this isn't just about technical skills. The Scratch programming language, a product of the Lifelong Kindergarten group at the MIT Media Lab, is used by millions of children and adults around the world to create and share multimedia projects. But Mitch Resnick, professor and director of the Lifelong Kindergarten group, noted that "we want people to learn about processes and strategies of design that go beyond coding." Educators should consider how to get students to work creatively, to think beyond simple mechanics, and to reflect deeply on their work.

In a world where information and processes change rapidly and continually, educators and researchers are questioning the value of mere memorization and narrow skills. Many instead favor pedagogies that develop agile learners capable of adapting to new and unexpected scenarios. Panelists emphasized the importance of fostering opportunities for learners to become creative, collaborative, and curious thinkers. With this in mind, educators could leverage generative AI in their teaching to foster higher-level skills such as critical thinking, analysis, and strategy.

Janet Rankin, director of the MIT Teaching + Learning Lab, said educators should be guided by what they want their students to be able to do. Once educators know that, they can think about how generative AI fits with those ideas, she said. Schools have seen plenty of disruptive technologies, including calculators and the internet, but they also have a long history of technologies that generated considerable hype and had little lasting impact. Panelists stressed the need to understand which path AI is on.

Educators and policymakers must rethink the existing education model.

Many educators and researchers advocate for hands-on constructionist learning, which centers students in the learning process and encourages them to develop their own understanding. However, the instructivist model, in which teachers deliver instruction to students, remains dominant in many schools. Panelists pointed out that, regardless of the technology at hand, our education system should be moving toward more constructionist approaches, in which students take on hands-on, project-based learning. With that idea in mind, the question becomes: How can AI support that model?

Modern education balances multiple purposes: instruction, workforce preparation, citizen development, and more. "Historically, technologists have not done a great job of understanding those complex, social, technical systems," said Justin Reich, director of the MIT Teaching Systems Lab, resulting in new tools designed for the way we wish students learned and schools operated, instead of the way they actually work. "If you don't understand these systems you're building for, then you're not going to build things that work for those systems," Reich said.

Pattie Maes, Germeshausen Professor of Media Arts and Sciences at MIT Media Lab, has been thinking about the future and the ways that AI could play a role in learning. When asked about her moonshot, Maes envisioned a context-aware device that is with learners at all times, so its educational assistance would be informed by learners’ experiences. The device could serve as “a mentor, thought-provoker, encouraging you to see things differently and go deeper,” Maes said.

Keep equity and access top of mind.

A recurring goal for multiple speakers was to give learners from a broad range of backgrounds control and agency over technology. They expressed concern about these powerful technologies being developed from limited perspectives, whether by a small number of companies in the field or by programmers from a narrow demographic. "Who gets included in technology and who does not? What happens when more people participate in tech?" asked Randi Williams, research assistant in the MIT Personal Robots group.

Panelists also expressed concern over the increasing disparities that AI technology could bring. If the best AI technologies come with a price tag and take resources to be used effectively, they may privilege well-resourced schools. Panelists stressed the need to think about ways to address these concerns so that AI narrows, rather than widens, existing disparities.

Hal Abelson, professor of computer science and engineering at MIT, argued that generative AI technology should be a tool for everyone, not just highly educated or well-resourced people or those with a technical background. Computational action — a model that seeks to empower children to make a difference in their communities through technology — shows that all children can create tools that improve lives and have meaningful social impacts. For example, high schoolers in Moldova developed a mobile app where people can enter and view clean sources of water on a shared map, a resource that addresses a nationwide problem. Speakers called for the creation of policies to address biases in generative AI and ensure that everyone has access to these powerful technologies.

 

Source: https://openlearning.mit.edu/news/what-will-future-education-look-world-generative-ai

AI and Education

AI and Potential Dangers

1. Algorithmic Bias and Discrimination:

  • AI systems learn from data, and if that data reflects existing societal biases, the AI will likely perpetuate and amplify those biases. 
  • This can lead to unfair or discriminatory outcomes in areas like hiring, loan applications, and even criminal justice (a minimal illustrative sketch appears after this list). 
  • For example, facial recognition systems have been shown to be less accurate on people with darker skin tones, leading to misidentification and potential wrongful arrests. 

2. Cybersecurity Threats:

  • AI can be used to create more sophisticated and effective cyberattacks, making it harder to defend against them. 
  • AI-powered phishing and malware can be more convincing and harder to detect. 
  • AI can also be used to identify vulnerabilities in systems, making them easier to exploit. 

3. Misinformation and Manipulation:

  • AI can generate realistic deepfakes, making it difficult to distinguish between real and fabricated content.
  • AI-powered bots can be used to spread misinformation and manipulate public opinion on social media.
  • This can erode trust in institutions, fuel social division, and even threaten democratic processes. 

4. Data Privacy and Security:

  • Personal information could be compromised through data breaches or misused by AI systems.
  • There are also concerns about the lack of regulation and oversight in the development and deployment of AI, particularly in relation to data privacy. 

5. Job Displacement: 

  • AI and automation have the potential to displace workers in various industries, leading to job losses and economic disruption. 
  • While AI may also create new jobs, there are concerns about the pace of change and the ability of workers to adapt to new roles. 

6. Existential Risks:

  • Some researchers and experts express concerns about the long-term potential for AI to surpass human intelligence and potentially pose an existential threat. 
  • This could involve AI systems developing goals that are misaligned with human values, or even becoming uncontrollable and potentially harmful. 
  • While this scenario is considered unlikely by some, it remains a topic of serious debate and concern within the AI research community.
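
To make the first danger above concrete, the short Python sketch below trains a simple model on synthetic "hiring" data in which past decisions favored one group. The trained model reproduces that bias, scoring two equally skilled candidates differently. The data and numbers are invented purely for illustration; the sketch assumes NumPy and scikit-learn are installed.

    # Illustrative sketch of algorithmic bias: a model trained on biased historical
    # decisions learns to reproduce the bias. Synthetic data; assumes numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)           # protected attribute (0 or 1)
    skill = rng.normal(0.0, 1.0, n)         # genuinely job-relevant feature
    # Historical "hired" labels depended on skill AND on group membership (the bias).
    hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

    X = np.column_stack([group, skill])
    model = LogisticRegression().fit(X, hired)

    # Two equally skilled candidates who differ only in group membership:
    candidates = np.array([[0, 1.0], [1, 1.0]])
    print(model.predict_proba(candidates)[:, 1])  # the historically favored group scores higher

Mitigating this kind of bias in real systems involves auditing training data, measuring outcomes across groups, and keeping humans in the decision loop, not just writing better code.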