Contact North | Contact Nord: AI and the Future of Teaching and Learning!

Dear Commons Community,

Contact North | Contact Nord, an open-access resource for Canadians enrolled or interested in online courses, has an article this morning entitled, AI and the Future of Teaching & Learning. It is a good summary of several of the critical aspects of AI as applied to instruction. Its opening sentence mentions “a start-up company [that] recently created a 19-lesson, fully online, three-hour multimedia course in just 10 hours using ChatGPT.” The video above is a brief interview with Alex Londo, the CEO of the start-up. Worth a view.

The entire Contact North | Contact Nord article is below.

Tony

————————————————————————————–

Contact North | Contact Nord

AI and the Future of Teaching & Learning

In Minnesota, a start-up company recently created a 19-lesson, fully online, three-hour multimedia course in just 10 hours using ChatGPT, the artificial intelligence tool launched in November 2022.

ChatGPT found images and relevant video materials and developed a quiz to assess learning. Subsequent courses created by this same team are being created in less time — just one hour to create a three-hour learning module. Elsewhere, ChatGPT is used to create multimedia webpages that can be quickly inserted into websites, and to create code in Python (and other computer languages) that can be incorporated into apps or web spaces.

ChatGPT is one of many similar AI services that use natural language to respond to user questions or requirements. Other AI systems can generate art, video, audio from text and music, provide simultaneous translation, solve math problems, offer career guidance or engage in a deeply personal conversation.

In higher education, AI systems and applications can be used to:

  1. Strengthen and automate student support: AI can provide students with instant answers to their questions and concerns. This includes academic support, such as help with coursework or research projects, and more general support such as information about campus resources and services.
  2. Improve course management: AI can help with course management tasks, such as posting announcements and answering frequently asked questions about assignments or exams. It can be used to “nudge” students to complete assignments, log into their learning management system or prepare for an exam.
  3. Increase student engagement: AI can facilitate student engagement in online or hybrid courses, for example, by acting as a discussion moderator or by providing prompts for group discussions.
  4. Provide research assistance: AI can help students with research tasks, such as finding and accessing relevant articles or data sources, reviewing available papers and books, and suggesting readings or videos for review. This could be exceptionally helpful for project-based or work-based learning.
  5. Expand tutoring: AI (especially chatbots) can provide individualized tutoring support, particularly in subjects with limited availability of human tutors. This is already occurring on sites such as tutorme.com, which offers a combination of chatbots and people to support its registered learners.
  6. Improve retention and completion: AI can provide “real-time” analytics and data to predict student performance, allowing tutorial or student supports to be focused on those students most at risk of dropping out or failing.
  7. Pathway advising: Course choice is a major challenge for students. AI is increasingly being used to provide 24/7 course choice advice, using current student performance data to suggest which “next course” is best for them, given their program profile and career intentions.
  8. Student counselling: A growing number of online counselling systems display not only high levels of empathy with students struggling with stress, anxiety or depression but also high levels of efficacy. An evaluation by the UK’s National Institute for Health and Care Excellence (NICE) showed that Velibra — used without therapist guidance alongside usual care — was more effective than usual care alone in people with social anxiety disorder.

Not all Good News

There are several pitfalls associated with AI in higher education, including:

  1. Lack of personalized support: AI systems are generally not able to provide personalized support to students based on their individual needs and learning styles. Indeed, the lack of empathy and genuine connection with those the systems are serving is a major criticism of many current AI systems. Although work is under way to add “artificial empathy” to client-facing systems, most systems do not connect well to their users.
  2. Dependence on technology: Using AI as a support resource depends on technology and Internet access, which we know in Canada is not available to all students. This can create a digital divide, with some students accessing more support resources than others. During the COVID-19 pandemic, this was a very real issue.
  3. Ethical considerations: There are ethical considerations to using chatbots in education, such as the potential for students to become overly reliant on automated support or not fully understand the limitations of the chatbot — that it is not the same as “chatting” with an instructor. The chatbot can only use the data and algorithms available and has no access to intuition, insights about the specific student, the class the student is a part of or the struggles many students have with specific forms of learning. A parallel is the number of automotive accidents (some fatal) caused by the inappropriate use of GPS systems (in the United Kingdom alone, in-car GPS devices caused an estimated 300,000 car accidents).
  4. Limited scope: AI systems are only able to provide support within the scope of their programming and “training.” If a student has a question or concern that falls outside this scope, the system may not be able to provide a helpful response. For example, most AI systems are poor at predicting economic futures. Chatbots and other AI systems have to be trained to respond to questions. In one very specific example, ChatGPT was asked to answer all the questions on the Institute of Chartered Accountants of England and Wales assurance exam. It scored 42% — less than the pass mark of 55%. The system was weak when more nuanced understanding and approaches were required. There were also some wrong answers and questionable mathematics.
  5. Lack of transparency: We cannot easily trace the sources of information or data pathways used to create responses in an AI system. Nor is it often clear what the algorithmic biases are in analytic systems that predict student success or failure. This lack of transparency is deeply problematic and is an issue many AI developers are working on.
  6. Abuse of AI: Students can use AI systems like ChatGPT to cheat. In fact, New York City schools, concerned about this possibility, have sought to ban it, as have others. The concern about the abuse of AI is real and has led to the development of a new kind of plagiarism detection system that can detect AI-generated materials.

Responsible and Trustworthy AI

For the above reasons, frameworks for the responsible and trustworthy deployment of AI are emerging. Some are supported by major vendors (e.g. IBM, Google and Microsoft) as well as by the OECD. These frameworks require AI deployments in colleges and universities to be:

  • Inclusive – Significant efforts are made to ensure all students have access to and support for their use of AI rather than AI being in the service of the privileged. To make this effective, the transparency of AI and the exposure of bias within AI systems are essential. The intention should be to make education more accessible to all rather than less so.
  • Empathic and human-centered – Although accuracy and appropriateness of responses are critical, AI systems intended to interact with people should be empathic, warm and genuine. They must be able to respond not just accurately but in a tone and manner that reflects the identity of the user. They must also become increasingly sensitive to user needs.
  • Transparent and explainable – Transparency means enabling people to understand how an AI system is developed, trained, operated, and deployed so users can make more informed choices about the outputs such systems produce. A user needs to understand how AI came to the conclusion it did: What were its sources of information and how was it trained to use and interpret these sources?
  • Robust, secure and safe – To function, AI systems need access to significant datasets, including personal data about students, their backgrounds, performance and interaction with college or university systems. Such AI systems need to be able to withstand cybersecurity threats and be safe for students and staff to use. Colleges and universities are a target for such attacks.
  • Accountable – This refers to the expectation that organizations or individuals ensure the proper functioning, throughout their lifecycle, of the AI systems they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and that they demonstrate this through their actions and decision-making process (for example, by providing documentation on key decisions or conducting or allowing auditing where justified). AI systems must meet regulatory and legal requirements that all the university or college staff are required to meet — for example, with respect to disabilities and exceptionalities or privacy.

What to Expect in the Next Five Years

The launch of ChatGPT caused a significant stir in higher education, but we have seen nothing yet. As AI becomes more widespread, transparent, responsible and integrated (text + video + art + music + translation all in one place), we can expect more instructors to experiment and explore. Microsoft, which has an exclusive licence to deploy ChatGPT across its systems, intends to invest US$10 billion over the next 3-5 years, integrating it into the Office products used by the vast majority of educational institutions and into its search engine.

New AI systems for assessment are also emerging, enabling automated item generation, real-time assessment of soft and hard skills during a simulation or game, automated grading and personalized and adaptive assessment. Some of these systems are already integrated into widely available LMS systems (e.g. Examity is integrated into Brightspace at Purdue University in the US) and others will follow.

We can also expect the more widespread deployment of more empathic and responsive chatbots as tutors, student advisors and counsellors. In the UK, the technology coordinating body for colleges and universities (JISC) is supporting several sophisticated deployments of AI tools and resources: chatbots, question generators, research suggestions for readings and other tools. They are evaluating them for their effectiveness and efficiency. The chatbot ADA is in widespread use.

As more instructors and students experiment with AI, more best practice examples will emerge of the effective use of AI to support teaching and learning. We can expect a flood of new AI tools and examples of effective practice.
