Artificial Intelligence at MFS and Beyond
The quest to understand and replicate human intelligence is not new. References to machines and creations that think like people date back to antiquity and continue to the modern day, with fascinating stories that include automatons (like Talos, the bronze giant created by the god Hephaestus to guard Crete), artificial humans (as in Mary Shelley’s Frankenstein), and real-life inventions (like Blaise Pascal’s calculator in 1642). Ultimately, when a number of philosophical, mathematical, and computational theories coalesced at a workshop at Dartmouth in the Summer of 1956, the field of artificial intelligence was born. After more than six decades of rocky progress and policy and funding drama, artificial intelligence reached a pivotal moment that grabbed the attention of educators in November 2022, when the research company OpenAI publicly released its generative AI app, ChatGPT.
WHAT IS ARTIFICIAL INTELLIGENCE?
“We must never forget that artificial intelligence offers intelligence without consciousness.” – Office of Educational Technology’s May 2023 report “Artificial Intelligence and the Future of Teaching and Learning”
Artificial intelligence is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and even exercising creativity. This sounds pretty daunting, and in many ways, it is; in reality, however, you’ve been using different types of AI for a while now, even if you didn’t realize it. Examples include: Netflix or Hulu offering suggestions of what to watch next; anything that uses biometrics, like Face ID; voice assistants like Siri and Alexa; customer service chatbots that appear on websites; Google Translate; predictive text on your phone or in Google Docs, where I’m writing this article; and many more.
Generative artificial intelligence, like ChatGPT, is a subset of AI that uses machine learning, the same kind of pattern recognition and prediction behind those Netflix suggestions, to create new, original content, such as images, text, or music, based on patterns and structures it learns from existing data. The key to understanding generative AI is that it produces fresh and unique content, but it does so by drawing on patterns in pre-existing sources.
Of course, it’s this generative capacity that both intrigues and concerns educators. News and policy articles about the promises of artificial intelligence sound a lot like the initial promises of computers in the classroom — the new technologies will make life easier for teachers; they will close equity gaps and make educational resources available to all; they will help differentiate instruction; and the old standby promise of saving time. But the reality is that developments in AI are outpacing our ability to craft thoughtful policies and to properly assess safety, bias, efficacy, equity, privacy, and ethical use.
MFS EMBRACES THE MOMENT
As educators, we have to acknowledge that the easy availability of a tool like ChatGPT means there are now programs our students can use to generate content for a wide range of assignments. Educators are naturally concerned that apps like ChatGPT can produce anything from pretty convincing research papers to essays to computer code. While these programs are in their infancy (think MySpace vs. Facebook or a flip phone vs. an iPhone), they are constantly improving and changing, meaning that we have to be intentional about our approach to generative AI.
To this end, a group of faculty volunteers was tasked in Summer 2023 with researching and discussing the promises and drawbacks of AI and making policy recommendations. I was joined by Director of Technology Steve Kolaris, Director of Teaching and Learning Jackie Dawson, now-retired Middle/Upper School Computer Science Teacher Gail Barna, Librarian Nicole Weber, and Chester Reagan Chair for Quaker & Religious Studies Dan Zemaitis ’98. Almost immediately, the group reached consensus that this new technology needs to be examined, understood, and evaluated — in short, it needs to be embraced and not banned. The committee also wanted our
thinking about AI to be deeply rooted in Quaker philosophy. We wrote the following statement to be woven into policies and syllabi:
Moorestown Friends School is committed to the Quaker values of integrity, inclusivity, and independent thinking. Our curriculum fosters the responsible and ethical use of artificial intelligence (AI) as we strive to provide a learning environment that equips students with the skills and knowledge required for the future. Concerns regarding privacy, bias, equity, and academic integrity presented by AI technologies will guide our divisional and departmental policies and practices related to generative AI. MFS embraces the still-evolving nature and transformative potential of AI in education and beyond and recognizes the need to continually reevaluate our engagement
with these technologies.
Our discussions were wide-ranging and productive. We identified many ways that AI could be a useful tool for both teachers and students, while acknowledging that there are deeply concerning aspects of the technology, like bias, unreliability, and threats to privacy. Our group supported teachers by making policy recommendations and offering language they could adapt for their syllabi.
AI IN THE CLASSROOM AT MFS
Photoshop: Pixels to Print Course
ChatGPT isn’t the only generative AI; it’s just gotten the most media attention. Many common apps and software programs now have artificial intelligence embedded in their features; one example is Firefly, the generative AI built into Photoshop. Among other things, Firefly allows Photoshop users to select areas of an image and replace them with something else, or with a background extrapolated from the existing background in the image. Students in this elective technology course used this tool last Fall to help them create their dispersion images. The goal was to create the illusion of an object, landscape, or person breaking apart and floating away.
Seventh Grade Computer Science Course
In Seventh Grade Computer Science, students use Google’s Teachable Machine to create a machine learning model that can distinguish between simple drawings of happy faces and sad faces. Teachable Machine makes it easy for students to create machine learning models and to test them. In our project, students build the model and test it, first with a small amount of data; then they add more data and retest to see if the model got “smarter” at telling the faces apart. Students learn that the more data the artificial intelligence has, the more it learns and the better it performs. We also read news articles about the ways artificial intelligence tools are used in the world today, and the students ask questions and discuss any concerns that arise.
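For readers curious about what is happening under the hood, here is a minimal sketch in Python of the same idea the students see in class: a classifier trained on more examples usually gets better at telling two categories apart. This is not Teachable Machine’s own code; it uses the scikit-learn library, and the “drawings” are synthetic number vectors invented purely for illustration.

# A minimal sketch of the classroom idea, not the actual Teachable Machine code:
# a classifier trained on more examples usually distinguishes two categories better.
# The "drawings" here are synthetic 16-number vectors made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_faces(n_per_class):
    # Fake data: "happy" drawings cluster around +0.5, "sad" around -0.5, with noise.
    happy = rng.normal(loc=0.5, scale=2.0, size=(n_per_class, 16))
    sad = rng.normal(loc=-0.5, scale=2.0, size=(n_per_class, 16))
    X = np.vstack([happy, sad])
    y = np.array([1] * n_per_class + [0] * n_per_class)  # 1 = happy, 0 = sad
    return X, y

X_test, y_test = make_faces(500)  # held-out drawings the model never trains on

for n_train in (5, 50, 500):  # first a little data, then more, then a lot
    X_train, y_train = make_faces(n_train)
    model = LogisticRegression().fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"{n_train:4d} examples per class -> test accuracy {accuracy:.2f}")

With only five example drawings per class, the model’s guesses on new faces are noticeably shakier than after it has seen five hundred, which mirrors what students observe when they add more drawings in Teachable Machine and retest.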
Seventh Grade Picture Books Project
Last year, seventh grade teachers Paul Rizzo (English) and Deborah Bruvik (Science) collaborated on a fun and meaningful research, writing, and image creation project where students wrote and illustrated picture books and then read them to Lower School students. First, in science class, Deborah asked students to research scientists from marginalized backgrounds. Students then wrote the text for their picture books.
Under Paul’s guidance, students then used AI image generators to illustrate the picture books. Paul used the activity as an opportunity to teach seventh graders about different aspects of artificial intelligence. First, he taught them how to create effective prompts by demonstrating that specifics, like strong action verbs, descriptive language, and identity markers such as age, race, and ethnicity, helped generate better images. Students experimented with different generators, like DALL-E, Pixlr, and Microsoft’s Copilot Designer, which ended up being the app that generated the best results.
Students concluded that the AI image generators were useful tools, but that they had their limits; the generators could not, for example, produce multiple images of the same character. Additionally, students quickly learned that the quality of the prompt was directly related to the quality of the output: garbage in, garbage out. Paul also took the opportunity to ask an AI to generate a piece of writing and had students analyze it to identify its strengths and weaknesses.
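As a rough illustration of how prompt specificity plays out when an image generator is driven programmatically rather than through a web app, the short Python sketch below sends a vague prompt and a detailed one to OpenAI’s image API. The class worked in the generators’ own apps, not with this code; the model name, the example prompt about a real scientist, and the assumption that the openai package is installed and an API key is set in the environment are ours.

# A hypothetical sketch, not the workflow used in class (students worked in the
# generators' web apps). Assumes the openai Python package is installed and an
# OPENAI_API_KEY is configured in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "a scientist"
detailed_prompt = (
    "A warm watercolor picture-book illustration of Dr. Mae Jemison, a Black woman "
    "astronaut in her 30s, smiling as she floats inside the Space Shuttle, "
    "surrounded by drifting notebooks and a small plant experiment"
)

for prompt in (vague_prompt, detailed_prompt):
    result = client.images.generate(
        model="dall-e-3",   # assumed model name; any available image model would do
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(prompt[:40], "->", result.data[0].url)

The detailed prompt bundles the specifics the students practiced, including descriptive language and identity markers, which is exactly why it tends to return a far more usable illustration than the vague one.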
Finally, the students read an article about using AI for facial recognition and discussed how this type of software can unfairly target people of color.
OPPORTUNITIES AND CHALLENGES
“AI often arrives in new applications with the aura of magic, but educators and procurement policies require that edtech show efficacy.” – Office of Educational Technology’s May 2023 report “Artificial Intelligence and the Future of Teaching and Learning”
The National Education Association’s Task Force on Artificial Intelligence in Education recently published a report stressing the importance of teaching not only with artificial intelligence but also about AI, so students can understand its positive and negative impacts. For example, teachers can explain that we typically don’t know the sources an AI uses to generate its output, or whether an AI app or program even had permission to use those sources. Additionally, AI is known to “hallucinate,” or completely fabricate information, making it crucial to fact-check every assertion an AI makes. Further, AI is susceptible to all sorts of bias; you might have read news stories about facial recognition software failing to recognize the faces of non-white people, or being used by stores and public venues to surveil customers. Artificial intelligence draws upon data shaped by humans and is prone to all of the biases inherent in that data.
In order to make smart, ethical choices about the use of these tools, students need to be aware of concerns like those noted above. But because of the rapid developments in artificial intelligence, there is a lack of clarity around how to navigate issues like intellectual property, bias, and reliability, and around how much “help” with researching and writing is too much and veers into academic dishonesty. Additionally, it is unclear how equitable AI can be as a replacement for other types of learning support, especially human ones. We don’t yet know whether AI saves time, or how it saves time; even if it does end up being a help, there are very real concerns about privacy, security, and liability, especially as related to vulnerable populations like children. There is also the risk that overreliance on AI tools could have a negative impact on the development of critical thinking skills.
In spite of the many concerning aspects of artificial intelligence, there are myriad opportunities that capture the imagination of many educators. Artificial intelligence offers:
1. Pathways for unique and interactive assignments (fostering cross-departmental connections) and new forms of personalized learning, data-driven insights, augmented reality, virtual reality, and intelligent tutoring systems.
2. The ability for students to learn how to use tools that are becoming increasingly prevalent in many professions and are part of our future.
3. An opportunity to discuss important issues, like the ethical use of technology in education; the nature of human bias and how this bias can make its way into machines; and the reasoning behind copyright and intellectual property laws.
4. An opening for educators to explore how AI could provide opportunities for equity and accessibility; for example, do students who previously lacked these benefits now have access to high-quality tutoring or the chance to pursue additional learning opportunities?
5. Time savings for teachers and administrators through automated grading and feedback, content and lesson planning, and other administrative tasks.
The potential power and mystery of artificial intelligence offers educators an invitation to reinvigorate important conversations about education, ethics, and the place of technology in the classroom. We can use this moment to make plain the importance of creativity, original thought, and scholarship. We can structure (or restructure) our classrooms so students can think and create. Just as the launch of Wikipedia allowed us to teach students about research, transparency, and the reliability of information, the development of generative AI tools gives us opportunities to teach about the purpose of education and the nature of human thought, creativity, and intelligence.
As educators, we must balance the creative and innovative capacities of new tools with the need to encourage originality; in the end, centering human decision-making, consciousness, ethics, critical thinking, and, well, humanity, is the goal.