An EdTech Minimally Viable Product
Created with Image Creator from Designer with Prompt: “An AI bot sitting in a classroom with students learning alongside them”

Since publishing my collaborative post with Nick Potkalitsky, PhD, entitled “Reimagining Middle School Education: The Synergy of AI and Montessori Principles,” we’ve received numerous comments complimenting our approach, our choice of student age group (middle school), and our integration of the Montessori Adolescent Guide pedagogical theory. One reply in particular caught my attention:

I’m still new to my voyage of discovery about how AI can meaningfully support students’ learning, instead of just being the shallow EdTech bonanza I fear it may be. I came across this article posted by another educator, and it’s one of the first that actually helps me understand some of the possibilities.
What I like most about it is that the AI-driven learning opportunities are grounded in a larger construct of shifting pedagogy. I also love that this is grounded in middle school, which has a special place in my heart.

— Sarah Schumacher M.Ed

What occurred to me is that while I’ve enumerated numerous use cases for Generative AI within the education sector, I have yet to lay the foundation for a Minimally Viable Product that would penetrate the school market and drive adoption among educators. As a disclaimer, I do not sit on school boards, follow individual states’ education politics, or keep current on the disparate school policies that exist. What I do know is that a select few educators, such as Nick, have embraced Generative AI and are starting to build curricula rooted in pedagogy that takes an AI-centric approach, while others still shy away in fear of the “shallow EdTech bonanza.” I will lean on educators such as Nick to formulate pedagogical practices that are AI-first and truly disrupt the student learning experience, much as analysts such as Ben Thompson derived Aggregation Theory simply by understanding the internet. Maybe with enough experience in the education space, and some calming of the entropic, imaginative thoughts I have about education, I will arrive there too. In the meantime, this blog will lay the groundwork for augmenting education with AI to drive initial adoption and catalyze new, AI-first pedagogical paradigms.

Defining the EdTech AI MVP

A Minimally Viable Product (MVP) is the simplest version of a new product that a company can release. It has just enough features to be usable by early customers, who can then provide feedback for future product development. Think of it as a basic prototype that works well enough to start gathering user reactions. It’s a way for companies to test a product’s market viability before investing significant time and money in fully developing it.

For any Artificial Intelligence solution within Education, there exists a number of prerequisites that underpin the “Minimal” aspect of the MVP definition, including:

  1. Guardrailing
  2. Model Scope
  3. Response Output Correction
  4. Integrations
  5. Student Journey Analysis
  6. Common Use Cases
  7. Privacy

Guardrailing

While the general populace is blind to the character traits hidden within Generative AI chatbots such as OpenAI’s ChatGPT or Google’s Gemini, those very traits are what set these two solutions apart. (This assumes, by the way, that foundation models are inherently commoditized and that the ongoing development of these engines is an arms race for ever-growing context windows and knowledge sources.)

For example, Gemini recently got into trouble when its image generation feature started producing historically distorted images. One prompt requested a picture of the United States founding fathers, all of whom were white men whose names we know, yet the system produced a picture that included an African American male. Under the hood, a system prompt injected before the user prompt guided the model to err on the side of diversity as much as possible. Another example is OpenAI’s ChatGPT: when a user requests explicit content, or content that could be viewed as dangerous, the system replies that the request goes against the chatbot’s policies.

This process is known within the industry as guardrailing. It entails a combination of system prompt injections before every prompt; natural language classification systems that filter harmful content along with the synonyms users might employ to circumvent the guardrails semantically; and reinforcement learning from human feedback (RLHF). Beyond the malicious- and negative-content filters that should be assumed in nearly all solutions, a base EdTech solution should employ its own guardrails.

What should these guardrails protect against? Explicit inquiries that imply cheating (and semantic derivations thereof); content that is not age-appropriate or falls outside the bounds of the bot’s knowledge capabilities (more on that shortly); and any external function calls (again, more on that shortly) that would send information outside the AI system and could cause privacy issues.
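To make the mechanics concrete, here is a minimal sketch of the two guardrail layers described above: a system prompt injected before every student prompt, and a content filter that blocks cheating-adjacent requests. The patterns and prompt wording are illustrative assumptions; a production system would use a trained natural-language classifier rather than regular expressions.

```python
import re

# Hypothetical system prompt injected ahead of every student prompt.
SYSTEM_PROMPT = (
    "You are a middle-school tutor. Guide students toward answers "
    "with questions; never complete graded work for them."
)

# Illustrative patterns for cheating-adjacent requests; a real system
# would rely on a trained classifier, not a keyword list.
BLOCKED_PATTERNS = [
    r"\bwrite my (essay|homework|assignment)\b",
    r"\banswers? to (the )?(test|quiz|exam)\b",
]

def guard(user_prompt: str) -> tuple[bool, str]:
    """Return (allowed, prompt_or_refusal). Blocked prompts get a refusal."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, user_prompt, re.IGNORECASE):
            return False, "Let's work through this together instead of giving answers."
    # Allowed prompts are wrapped with the guardrail system prompt.
    return True, f"{SYSTEM_PROMPT}\n\nStudent: {user_prompt}"
```

The same wrapper is also the natural place to intercept outbound function calls before any student data leaves the system.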

Model Scope

Trying to adapt a foundation model to a single subject is simply not worth the time and investment. Rather, focus on the broad curriculum spanning the target student cohort for a particular school, or a set of schools within a district or state, depending on the level of implementation. The benefits include:

  • Ability to skill up advanced students who are pacing above their current grade level
  • Reduce training cost and improve overall accuracy of the model
  • Create curriculum that ties together learned subject matters within a grade level

Specifically, one should first fine-tune the parameters of the foundation model for a K-12 curriculum generally (using this cohort broadly as an example). This parameter fine-tuning will be costly and time-intensive, but it could be outsourced to a larger provider (e.g., Microsoft Education on behalf of a startup focused on EdTech) and adopted prior to performing the next step(s) of tuning.

Next, at an individual school level, IT professionals should be able to purchase licensing rights from publishers such as McGraw-Hill, Cengage, Pearson, or Scholastic, to name a few. Once the digital rights are obtained, the digital textbooks can be placed in blob storage and either vectorized (into a vector database) or used for Retrieval Augmented Generation (RAG). Both implementations would be highly beneficial to the model, but the latter, RAG-based approach would generate citations that allow students to jump to the specific textbook pages their initial prompt drew on, replacing a “control + F” approach with surrounding context.
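The citation flow described above can be sketched as follows. This is a toy retriever over hypothetical textbook chunks using naive word overlap; a real deployment would use vector similarity over licensed content, and the chunk format and sample passages here are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    textbook: str
    page: int
    text: str

# Stand-in for vectorized, licensed textbook content in blob storage.
LIBRARY = [
    Chunk("Algebra I", 42, "A linear equation graphs as a straight line."),
    Chunk("Biology", 118, "Photosynthesis converts light energy into glucose."),
]

def retrieve(query: str, k: int = 1) -> list[Chunk]:
    """Rank chunks by naive word overlap; real systems use embeddings."""
    words = set(query.lower().split())
    scored = sorted(
        LIBRARY,
        key=lambda c: len(words & set(c.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citation(query: str) -> str:
    top = retrieve(query)[0]
    # The retrieved passage grounds the LLM's answer; the citation lets
    # the student open the textbook to the exact page.
    return f"{top.text} (see {top.textbook}, p. {top.page})"
```

The page-level citation is what turns a chat answer into a doorway back into the licensed textbook.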

Response Output Correction

Earlier I mentioned Reinforcement Learning from Human Feedback (RLHF), a method for modifying the characteristics of a Large Language Model (more notably, of the chatbot providing the output). For an education use case, one should employ RLHF to modify the model’s behavior in the following ways:

  • Guiding over Answering — have the model respond to student inquiries with questions that guide the student to solve the original problem pragmatically instead of simply providing an answer, building critical thinking skills.
  • Gentle Instructive Tone — ensure the diction connotes a positive, gentle experience that encourages students rather than discouraging them, for a friendlier interaction.
  • Guardrails — as mentioned earlier, preventing students from circumventing the curriculum itself or cheating outright.
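The “guiding over answering” behavior above is typically taught through preference data: for the same student prompt, a “chosen” guiding response and a “rejected” direct answer. The field names below follow a common convention for RLHF datasets but are assumptions here, and the reward check is a deliberately crude heuristic.

```python
# Sketch of one preference example an RLHF-style tuning pipeline consumes.
# "chosen" guides with a question; "rejected" hands over the result.
preference_example = {
    "prompt": "What is 3/4 + 1/8?",
    "chosen": (
        "Good question! What denominator could both fractions share? "
        "Try rewriting 3/4 with that denominator first."
    ),
    "rejected": "The answer is 7/8.",
}

def is_guiding(response: str) -> bool:
    """Crude heuristic reward check: guiding responses ask questions
    rather than stating the answer outright."""
    return "?" in response
```

In practice human raters, not a heuristic, label which response better guides the student; the heuristic merely illustrates what the reward model learns to score.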

Integrations

Such an education-based bot should be widely integrable via a set of easy-to-use APIs to create a broader ecosystem of integrations. Most notably, the model should be supported by Learning Management Systems such as Canvas and Blackboard, passing back analytical statistics (embedded within the API response output for logging) and interoperating with LMS providers that may have LLM-based bots of their own. Furthermore, the system could harness third-party function calls, such as Wolfram Alpha for mathematical computation, or other approved bots, following the GPT Store model introduced by OpenAI.
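One way to embed analytics in the API response, as suggested above, is to wrap each answer in a payload the LMS can log. The payload shape below is an illustrative assumption, not the Canvas or Blackboard API format.

```python
import json
from datetime import datetime, timezone

def build_response(student_id: str, answer: str, intent: str) -> str:
    """Wrap the model's answer with analytics metadata for the LMS to log.
    Field names are hypothetical, chosen for illustration only."""
    payload = {
        "answer": answer,
        "analytics": {
            "student_id": student_id,
            "intent": intent,  # e.g. "question_for_understanding"
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(payload)
```

Keeping analytics inside the same response envelope means the LMS integration needs no second call to collect journey data.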

Student Journey Analysis

As alluded to earlier, the solution should output logs that allow for student journey analysis. Information educators would be interested in includes:

  • Line of Questioning — what questions are student learners asking? What are the follow-ups? Analyzing a student’s questions could yield valuable information about thought processes and comprehension.
  • Gaps in Understanding — highlighting the subjects and particular topics of frequent inquiry to surface areas where students are seeking further assistance
  • Use — bifurcating among simple inquiry out of intellectual curiosity (showing advancement), questioning for understanding (showing gaps), quizzing (for mastery), and others
  • Quantitative Statistics — usage in a particular period of time, number of prompts submitted, etc., to gauge engagement by the student learner
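A minimal sketch of how the “Use” bifurcation and quantitative statistics above might be computed from prompt logs. The intent labels and the keyword rules are hypothetical; a production system would classify intents with a model, not string prefixes.

```python
from collections import Counter

def classify(prompt: str) -> str:
    """Toy intent classifier over the usage categories described above."""
    p = prompt.lower()
    if p.startswith("quiz me"):
        return "quizzing"
    if p.startswith(("why", "how")):
        return "question_for_understanding"
    return "curiosity"

def journey_stats(prompts: list[str]) -> dict:
    """Aggregate per-student engagement statistics from a prompt log."""
    intents = Counter(classify(p) for p in prompts)
    return {"total_prompts": len(prompts), "by_intent": dict(intents)}
```

Aggregates like these, surfaced in an educator dashboard, are what turn raw chat logs into the journey analysis described above.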

Common Use Cases

The MVP should also complete various use case scenarios I’ve enumerated in previous blog posts, including:

  • Multimodality — translating among multiple modalities to explain a complex topic based on student learning style, as explained in “Generative AI and Learning Modalities”
  • Explain-As — questioning and subsequently using subject matter that interests a student learner to explain a complex topic metaphorically, facilitating comprehension, as explained in “3 Generative AI Applications in Ed Tech”

Privacy

For all EdTech practitioners, product managers, student educators, school IT professionals, and the like, protecting student privacy and governing data usage should be at the forefront of any AI-based conversation. This includes compliance with major legislation, including:

  1. Family Educational Rights and Privacy Act (FERPA): This law protects the privacy of student education records. It applies to all schools that receive funds under an applicable program of the U.S. Department of Education.
  2. Protection of Pupil Rights Amendment (PPRA): This law deals with the rights of parents to information about their children’s participation in school surveys or evaluations.
  3. Children’s Online Privacy Protection Act (COPPA): This law applies to the online collection of personal information from children under 13. It details what a website operator must include in a privacy policy, when and how to seek verifiable consent from a parent, and what responsibilities an operator has to protect children’s privacy and safety online.
  4. Children’s Internet Protection Act (CIPA): This law addresses concerns about children’s access to obscene or harmful content over the Internet.
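One practical compliance measure, tying back to the guardrail on external function calls, is to redact personal information before any prompt leaves the school’s boundary. The patterns below are illustrative assumptions, nowhere near a complete PII detector, but they show the shape of the safeguard.

```python
import re

# Illustrative PII patterns; a real FERPA/COPPA-conscious deployment
# would use a vetted PII-detection service, not two regexes.
PII_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone": r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",
}

def redact(text: str) -> str:
    """Replace detected PII with labeled placeholders before the text
    is forwarded to any third-party function call."""
    for label, pattern in PII_PATTERNS.items():
        text = re.sub(pattern, f"[{label} removed]", text)
    return text
```

Redaction at the boundary keeps third-party integrations like Wolfram Alpha useful without exporting student-identifying data.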

This Minimally Viable Product should lay the groundwork for a future roadmap of innovation within the space. I am fully cognizant of how disruptive this model could be, and should be, paired with the AI-first curricula and solutioning I have long argued for in “The Disruption of Design.” My intent with this blog post is to catalyze the creation of these bots in an “open” model, as I am publicly posting this guide on the internet. As I mentioned earlier, my hope is that educators such as Nick use an eventual implementation of this solution as a means to systematically change our education system, imbuing AI as a fundamental tool for augmenting our educational practices! I am truly excited for the future here.

Using AI to enhance student learning is the way forward. 📚 #EdTechRevolution
