Who Does the Thinking: The Role of Generative AI in Higher Education

Captions
I see new names and I also see familiar names, so welcome to everyone; I think we will start little by little. I'm very pleased to welcome you to this webinar, which forms part of the IAU global series on the future of higher education. Today we have decided to devote this session to the role of generative AI in higher education, and we look forward to a conversation about the impact that we see, the opportunities, some of the challenges, and then some thoughts about what it means for the future of higher education.

I'm very pleased to have three excellent speakers with me today. In a little while I'll introduce them to you, but let me first explain the format. We are going to have this as a conversation, so we will go through a series of questions. If you have any questions from the audience, you are very welcome to post them in the Q&A function of Zoom; I will try to monitor it, though I cannot guarantee that we will be able to respond to all questions. If you want to introduce yourself, you are very welcome to do so in the chat, but I will be monitoring questions only through the Q&A function. So welcome to everyone, and I'm looking very much forward to this exciting conversation.

To get started, let me first introduce the three speakers we have here today. First of all, it is my pleasure to introduce Chris Dede, who is a senior research fellow at the Harvard Graduate School of Education and was for 22 years its Timothy E. Wirth Professor in Learning Technologies. His fields of scholarship include emerging technologies, policy and leadership. In 2020 Chris co-founded the Silver Lining for Learning initiative, a webinar series that I highly recommend you go and discover as well. He is currently also a member of the OECD 2030 Scientific Committee and an advisor to the Alliance for the Future of Digital Learning. Chris is also co-principal investigator and associate director for research of the NSF-funded National Artificial Intelligence Institute in Adult Learning and Online Education. So a very warm welcome to you, Chris; we are very pleased to have you here.

Let me now turn to you, Frank. Frank is an assistant professor of computational linguistics within the Center for Language and Cognition at the University of Groningen in the Netherlands. Previously he was a lecturer in computational linguistics at the Department of Artificial Intelligence, in the Institute for Mathematics, Computer Science and Artificial Intelligence at the same institution. He holds a PhD in neuro- and psycholinguistics, and his current research focuses on the use of large language models and other machine learning approaches to investigate speech, language and human cognition, with a particular emphasis on disordered speech and language. So Frank, we are also very pleased to have you as part of this conversation today.

Last but not least, Kate, I am also very pleased to introduce you, and very happy that you are with us despite the fact that you are in Australia and that it is very late on your end; thank you for this effort to unite these regions despite the time differences. Kate, you are an associate professor in the Faculty of Creative Industries, Education and Social Justice, in the School of Teacher Education and Leadership at Queensland University of Technology in Australia.
Kate is the leader of the Digital Learning Centre for Change, a research group that places its focus on the processes of learning and teaching, particularly with technology. Her research is primarily focused on digital pedagogies, understanding complex social-environmental systems, collaborative learning, the learning sciences more generally, and the development of innovative methods to inform design for learning in complex learning environments, and on progress towards the completion of a task that can be used to provide feedback to learners, instructors, designers and researchers. So Kate, a very warm welcome to you as well.

Now that we have done the general introductions, maybe I should mention that my name is Trine Jensen and that I lead our priority on digital transformation here at the IAU. We are very excited to have this conversation on the impact of generative AI on higher education, so welcome to everyone.

One of the things we see when it comes to generative AI is that there are very different levels of understanding of what it actually means. So we are going to kick off this conversation with a first question: what is generative AI, and why is it creating this buzz around its impact on higher education? Chris, I'm going to start with you, but before I give you the floor I just want to read out a quote from Sam Altman, the CEO of OpenAI, the company behind the launch of ChatGPT. When he was asked to describe GPT-4, the newest version, he responded: it is slow, it is buggy, it doesn't do a lot of things very well, but neither did the first and earliest computers, and they still pointed a path to something that was going to be very important in our lives, although it took a few decades to get there. So Chris, what is your perception of generative AI? How would you explain it in simple terms?

Well, I'm happy we have multiple people here, because if you have three experts on AI you're going to hear six or seven different opinions about what it is and isn't, and we're still watching it evolve. So I'll use some analogies. Generative AI is like moonlight: the moon doesn't actually have light, it reflects light from the sun, it doesn't generate its own light. Similarly, generative AI reflects things from humans, essentially what's on the web; it doesn't generate its own new things. It's also like a mirror, but a cloudy mirror, because the current models only have parts of what's on the World Wide Web. We know that the World Wide Web has misconceptions and myths as well as many strengths in terms of what it captures about knowledge, and because the web is compressed to get it into a large language model, there are also some losses that take place in that condensation. So it's a cloudy mirror that reflects, as it were, moonlight rather than sunlight.

Another way of thinking about it is that it's a brain without a mind. Its structure is like the brain, but it doesn't have a sensory system, it doesn't have consciousness, it doesn't have experiences; it's kind of like a brain in a vat, all by itself. That doesn't mean it can't do interesting things, but it does mean that it can't explain itself, and you can't interrogate it to find out why it's doing what it's doing: it's truly a black box.

The other analogy I'll use is that it's like a parrot. Parrots can have an interesting vocabulary and can say a lot of things, but they don't understand the meaning of what they're saying at all.
The pirate's parrot that says "pieces of eight" has no idea that it's talking about a currency, or anything other than a set of syllables that it has heard and is repeating back. And because it doesn't understand the meaning of what it's saying, when it says something that is false, when it does what's technically called hallucination, based on how it works, it doesn't understand that either. So to sum it up, it's like an alien semi-intelligence. It's alien because there's nothing on this planet that thinks, if we can use that term about generative AI, in the way that it does; and it's a semi-intelligence because it's really good with language, but language is only one part of how we would define intelligence.

Thank you very much, Chris, for explaining generative AI through these analogies. What I hear cutting across them is the fact that it is really a machine, and it does not hold the same types of competencies as humans. What about you, Frank, do you agree here, or do you have other examples of how you would explain it? You come from computer science, so you have the more technical background here.

I needed to unmute myself. Thank you for this opportunity. I'm going to pick up from where Chris left off, when he mentioned language, because that's also what I focus on. I believe that for most of you here, the most common generative AI that you know is probably ChatGPT, which is basically a large language model. The way I think of a large language model is to assume that somewhere on the spectrum there is a small, or the smallest, type of language model. The smallest type of language model we call a unigram. "Uni" means one, and a gram could be a word, a syllable or a letter, but in this case let's take it to be a word, the word "hello" for example. What the unigram type of model does is basically to say: given this document, what is the most frequently occurring word in the document? If we know the most frequently occurring word in the document, like "hello", then we have some sort of idea that if we are to predict the next word, it is probably going to be "hello", because that defines the content of the document.

If you move along the spectrum towards large language models, you have a bigram; "bi" means two, and a bigram basically says: given the current or previous word, how can we predict the next word? Then we have trigrams: if you have "hello ladies", a trigram says, given these two words in sequence, "hello ladies", how can we predict the next word, which will probably be "and" if you want to predict "hello ladies and gentlemen". But if you think of GPT-4, for example, instead of a unigram or a bigram or a trigram, it actually has 32,000 "grams". If you put that into words, it's around 24,000 words; if you put it in the form of a book, it's a book 48 pages long with words covering each page. What this model can do is take all these 24,000 words, process them at the same time, and use them as context to predict the next word. The principle here is that if it can predict the next word, it can predict the next phrase and the next sentences, and that's the reason why ChatGPT is able to generate complex, long output.
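A minimal sketch of the bigram idea Frank describes, predicting the next word purely from the previous one by counting frequencies, might look like the toy Python below. The example corpus and function name are invented for illustration; a real large language model conditions on thousands of preceding tokens with a trained neural network rather than on raw counts.

    # Toy bigram "language model": predict the next word from the previous one
    # by counting, in a tiny example corpus, which word most often follows it.
    from collections import Counter, defaultdict

    corpus = "hello ladies and gentlemen hello everyone hello ladies and friends".split()

    # For each word, count how often each possible next word follows it.
    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        """Return the continuation most frequently seen after `word`."""
        if word not in followers:
            return None
        return followers[word].most_common(1)[0][0]

    print(predict_next("hello"))   # -> 'ladies'
    print(predict_next("ladies"))  # -> 'and'

Scaling the context from one previous word up to tens of thousands of tokens, and replacing the frequency counts with a trained neural network, is essentially the jump from this toy to the models Frank is describing.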
But what is really transformative about ChatGPT has to do with something we call alignment. The previous models, including GPT-3 itself, were very robotic in their output. What OpenAI did with ChatGPT was to add human feedback into the training loop: instead of just training the algorithm by exposing it to text, they added human feedback to it, so that the end output looks like something that aligns with what a human would expect. If you compare it with the technologies we are used to, like Google, it is not Google, because Google is a document retrieval system, whereas a large language model, a generative AI, is able to generate text and images, and I believe the next step will be videos, based on next-word, or next-unit, prediction.

Thank you very much, Frank, for explaining that in terms that I think many people can relate to. And what about you, Kate, do you have something to add here from your perspective to this conversation about what it is all about?

Sure. I think those were both great explanations; I'm going to steal some of those analogies and explanations next time I have to explain it to a class. But to get back to your original question about why it has generated this level of debate and interest: alongside the developments in the algorithm that sits behind it, and all the models and everything, there has been the way that we are able to interact with it. There was a leap in the interaction design, and that has meant that it is now accessible to a much wider range of people; they know how to actually interact with the model that is sitting behind it. All of those features of the interaction design really take the end user into account, and that has meant that, because it is so accessible and so interesting to interact with, and because it helps you be more curious, ask more questions, give feedback and engage in that process, it is more commonly used, which means we all now have to know about it and figure out what to do about it.

Thank you very much, all three of you, for getting us started this way, so that we have at least a common understanding within this room of what it is, what it can do, and some of its limitations. Now let us move to what this tool implies for higher education and how it impacts higher education. Right now, of course, we are in a phase of transition, in the sense that it is relatively new; we don't have that much experience on which to take informed decisions based on past or previous experience. So I think that what we are seeing right now, to some extent, is that the tool is evolving faster than universities are capable of responding through policies. In preparation for this webinar, I asked all three of you to reflect on how you would advise the leadership of your institution today when it comes to generative AI and tools like ChatGPT, and how it impacts you. Kate, in this round I would like to start with you, and that is also because we have seen various responses in different parts of the world. I know that in Australia a lot of thought has gone into redesigning assessments, and I have even seen some examples of returning to assessment by pen and paper as a way of ensuring that there is no cheating using ChatGPT. So maybe you can bring in some perspectives from the response in Australia, while also giving us some thoughts about how you would advise the leadership of your institution in terms of dealing with generative AI. So, over to you, Kate.
Thank you. I think for the Australian context some of this helped, because our semester system means that our first semester of the year started around the end of February or the beginning of March, which meant we actually did have the summer to consider a response to it at most universities. And look, some states and universities and schools banned ChatGPT; some of that was because of age restrictions, and some of it was a reaction to academic integrity, that is, how are we going to stop people cheating, so the response was just to ban it. But I think what everyone has really come back to is that it is an opportunity to rethink how we are looking at assessment. At my university everyone was allowed to do whatever they wanted with it: for whatever we were teaching we could ban it, we could embrace it, anything on that spectrum. And I think in lots of ways that freedom meant there was a lot of uncertainty among people who didn't understand what it was and didn't understand what the potential implications could be, so it was very hard to make those decisions, and their reaction was to just say: none of it. And yes, pen-and-paper exams, oral presentations, sometimes direct questioning; I have heard of some examples of assessment being carried out that way.

But I think any rethinking of assessment, and this comes back to the advice that I give to the university, is really an opportunity to come back to what it is that we think of as learning. What is it that we would like students to be able to demonstrate when they finish? It is not so much about a university stamping off that this person is qualified; what do we actually think learning is about in all of the different disciplines, and how can we think of some authentic ways to assess that, and have that, in lots of ways, not be as restricted as it was before? Some of the ways we do assessment and testing have been implemented because we have to do it at scale: we do multiple-choice questions because we have to do it for hundreds of students, and we have to be able to mark it and do that fairly. So if we are rethinking how generative AI can help us with that, could that be the thing that is doing the testing? Could we be thinking about other ways, so that we can get to authentic assessment of student learning at scale? I think it is a great opportunity, if we embrace it as an opportunity instead of reacting by putting a whole lot of barriers around its implementation; then I think we could actually do some really good work out of it.

Thank you very much, Kate. So it is also pushing us to rethink procedures and processes already in place and to ask whether these are the right ones. Frank, I know that you also form part of a committee within your university, I think a transdisciplinary committee that has been put together within the University of Groningen in order to come up with policy advice for the institution. So maybe you can share a little bit on that as well: where you are at, and what type of recommendations you are considering.

Yeah, I think that, in a way, to some extent I understand the way some institutions have reacted to this, especially those on the side of "let's ban it", although some of them have also reversed their decisions as time went on.
I think the reason has to do with the fact that we were not prepared for this. We sort of thought that somewhere in the future AI was going to be able to do these kinds of things, but we didn't think it was going to be now, so it came a bit as a shock, and for that matter some of us reacted this way. But what I would say is that these generative AI models are here to stay; they are here to stay in society. So I think the way we as educationalists, or we as academics, need to approach this is to look at innovative ways to make the best out of these tools that we have.

One of the questions we have been thinking about is: can we prevent students from using this? I think the answer is no; they will use it anyway. Can we watermark it, can we see what is coming from ChatGPT? At the moment there isn't any model out there which is really accurate in detecting text coming from a generative AI model, and actually the goal of the makers of these algorithms is to make the output more human-like, which is to make it more difficult to detect or to classify as distinct from what a human wrote. So I don't think that is going to be possible in the future.

But I think what this is really doing, as Kate said, is giving us an opportunity to rethink what it is that we are teaching our students and how we are preparing them to live in a society where ChatGPT exists. I believe this is going to change the way we do work within academia and outside academia, so it is already going to change society, and I think the question has to centre around how we change the things that we teach, or the things on which we assess students, to make them fit better into a society in which ChatGPT exists.

In my committee, the conversation has mostly been around rethinking the learning outcomes: what do we want each student to get from the courses that we teach? If a student uses ChatGPT to produce answers for an assignment, and the learning outcomes of the course have been outsourced to a generative AI, then we are not doing things the right way; we are not meeting the learning outcomes. But if the learning outcomes are met, and the student still uses a generative AI model for a part of the assignment which does not fall within the learning outcomes, then this should be something to embrace. In my department, for example, most of the courses in information science are coding based, and there is very little about writing, although from time to time we ask students to write reports and so on. This means that the learning goal of our programme is not to teach them to be good writers; that is not a central or core part of our programme. So if a student hands in an assignment where they used a generative AI, ChatGPT, to correct their grammatical mistakes, then we shouldn't be punishing them for doing that, because, coming back to the learning goals of our programme, this is not really key to the learning outcomes. So I think it is more a case of: let's revisit the learning outcomes and make sure that the way we assess, the way we teach, and the knowledge we give to students all align with the learning outcomes; and for everything else that revolves around those outcomes, let's embrace the use of ChatGPT if it is going to make students more productive.
Thank you very much, Frank. So I hear your voice saying that it is actually pushing us to reflect on how we operate today in the institution, whether that is teaching and learning or assessment. Is that something that you see yourself in as well, Chris, or do you have a different view on things here?

Well, I want to build on what Kate and Frank have said, but I'll begin by saying that I have a history with this, because I've been a faculty member for more than half a century, and that means I've lived through about nine hype cycles of AI. I've heard a lot of gee-whiz predictions in each hype cycle, and I've heard a lot of doomsday predictions in each hype cycle, and I have a healthy skepticism for gee-whiz. Certainly in every cycle AI has gotten more powerful, but not as powerful as AI enthusiasts would claim. As a graduate student I read the first published scholarly paper on AI in education, in 1970, and the author confidently predicted we wouldn't need teachers in six years; we've seen that that didn't turn out so well as a forecast. So it's important to keep a sense of perspective. It is true that this hype cycle has generated something unexpected, and that there is a lot of curiosity about what the limits are of this unexpected attribute of large language models and what they mean.

So let me talk about an immediate effect and a longer-term effect. An immediate effect is a lot of concern about plagiarism, and that certainly is true at Harvard; different faculty have different views on this, and the different schools at Harvard may have different views on this, but I have a view that is perhaps different from many of my colleagues'. When we talk about learning, and I'm not talking now about using generative AI in the workplace, but about someone who is trying to learn in an educational system, it's important to remember that what matters is not the destination but the journey, and that in fact the destination is the journey. We use proxies to measure whether or not students know something: a multiple-choice test, an essay, some other kind of production, a computer program. If they produce the proxy, we say, well, they must have gone through the journey and learned. But of course that is only true if you don't take a shortcut, and what plagiarism has been in the past is taking credit for the work of another human being without citing them, and thereby short-circuiting the journey.

There was an interesting news article in the US this past month about graduates from nursing schools. There were some bogus nursing schools set up in one of our regions, and in those bogus nursing schools students paid a lot of money to learn how to take the national nursing exam. They took the national nursing exam, they were licensed as nurses, and they were hired. But then the hospitals and doctors' offices were really puzzled, because they didn't know how to take blood pressure, they didn't know how to give a shot, they didn't know how to take vital signs; they didn't, in fact, know how to be a nurse. All that they knew was how to achieve the proxy. And that is the risk I think we have with generative AI.

I'm going to quote from an article by Ted Chiang in a magazine here called The New Yorker; the article was called "ChatGPT Is a Blurry JPEG of the Web". I'll quote him because I think he says it so well.
He says: your first draft isn't an unoriginal idea expressed clearly; it's an original idea expressed poorly, and it is accompanied by your dissatisfaction, your awareness of the distance between what it says and what you want it to say. If you're a writer, you're going to write a lot of unoriginal work before you write something original, and the time spent on that unoriginal work isn't wasted; it's precisely what enables you to eventually write something original. The hours spent rearranging sentences and choosing the right words are what teach you how meaning is conveyed by prose. Having students write essays isn't merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. And so what we risk when we say, well, you don't really need to draft the essay, just ask ChatGPT and then you can add your own thoughts to what ChatGPT has done: that's a huge mistake, in my opinion. Now, I'm not speaking here for any institution, and I think we have many points of view about what plagiarism is and what learning is, but this is what I would say, and what I tell my students, frankly.

Suppose you are going for a job in marketing, and when you get to the interview these days the employer doesn't simply say, oh, what a nice resume you have, and look at these great letters of recommendation. Now employers are saying: I want you to write a marketing plan right here on the spot, I'll give you 30 minutes, here's your prompt, show me what you can do. But of course the employer is then going to type the same prompt into ChatGPT, and if you can't write a substantially better marketing plan than what comes out of the generative AI, you're not going to be hired, because why hire somebody when you can get it for free? So that's the trap of saying "I'm going to be as smart as generative AI by working in partnership with it". No: you want to be smarter, you want to be original. So there's an immediate question about how you use it, and I would differentiate between using it for advice and using it for production; that's where this crucial distinction between journey and destination comes out. I've spent too long on one answer, but later in the interview I want to talk about the work I do with this National AI Institute on Adult Learning and Online Education and the concept of intelligence augmentation, IA rather than AI, which is about what happens when humans and AIs work together productively.

Thank you very much, Chris, and I think your answer, although long, transitions very well into the next part of the questions, because we had these two more general rounds and now I would like to focus more specifically on the impact on teaching and learning, going into more detail on what it means for assessment and the impact on research, covering more thematic areas. I think all three of you have already broached this in explaining how this tool operates, Frank in the sense of how fast these machines can process data; but there are limitations as well, as we heard from Chris and from Kate, to what it can do. It can process data, it can use statistics or calculations in order to predict certain things, and it can provide a specific analysis, but it cannot add judgment and it cannot put things into context.
And from what I understand, the important role of universities, and the education that higher education institutions offer, is precisely about those kinds of capacities. So is having generative AI not just a way of saying that higher education is actually more important than ever, to learn how to have a critical mindset, how to analyse the data these types of machines are providing you, and how to look critically at the sources? With this comment I would like to tune in a little on teaching and learning. Kate, how do you think AI is going to change teaching and learning? You already talked about the fact that it is pushing us to think differently about how we do things, but maybe you can elaborate a bit further on that question.

I think there are some broad categories where you can talk about the use of technology in higher education, and we are always in need of good ways to give feedback, to do assessment, and to think about innovative ways to do that. I think there are people doing some really good research on what sort of feedback generative AI could be giving students on their work, and how to build that into their process of evaluating their own work, if you are thinking about iterations of an essay or iterations of some design task they are doing. One of the things I'm really enjoying doing myself is listening to so many practitioners, educators, whether they are school educators or university educators, because the technology is changing and our practice is changing, and every time I talk to people there is a new way, one I had not thought of, in which someone is using this in their practice. It could be to get a different perspective on a piece of work, which wouldn't normally be part of a process; you might accidentally do that because you are telling someone about your piece of work, but to actually build that into a process of thinking about a piece of writing is a really powerful tool in terms of learning, and of learning to get different perspectives on work, which you don't always have the opportunity to do as much as we would like. We have some good research to tell us how ideal learning environments should work in terms of collaboration, access to technology, access to teaching staff and all the rest, but that doesn't always work out, and so having tools there that can work in similar ways, in case the people part is not as reliable as we would like it to be, is, I think, really interesting to look at.

Another really interesting aspect is people using it to help them with communication. A very easy example is people using it to help them write emails, but people are also using generative AI to help them understand the tone of things that are being presented to them. So I can see all of these impacts on how we help students learn how to do group work: these are often very intangible things, difficult for us to figure out, what is potentially going wrong in a group, and there could be tools that help students do that management themselves. But one of the things is that lots of people are really scared to use it. I teach a unit on big data and learning analytics to Master of Education students, so they are teachers who are doing a masters, and many of them, in my unit of about 120 students this year, had not used it, because they were just worried about what it looked like, what they were going to put in there, and whether it was going to give them a right answer.
So I think one of the impacts of it is that it again lets us reflect on what truth is, what we expect out of these models, and what risk means when computer scientists talk about risk, which is different from other fields. So, and I'm kind of with Chris on this, I see lots of the impacts of this as very similar to the impacts of other types of technology on learning and teaching; it's just that this one everyone is very excited about and would like to be part of, and we've got universities right behind us implementing policies in a way that they don't usually do when it comes to technology and education.

Thank you very much, Kate, for sharing that experience. I would like to continue with you, Frank, here, because you were also sharing, when we had the preparatory talk, that it also depends very much on the discipline, and that is what I'm hearing from Kate as well. Here in this room you are experts; you follow the developments, and you invest quite a lot of time to follow what is happening and to form opinions based on the knowledge that you gain. But across institutions we have loads of different subjects and disciplines, and, as you were saying, Frank, ChatGPT will exist in society; students from all kinds of disciplines will make use of these types of tools, just as they use a browser to go and collect information. So it is going to be widely used. But then what is required within the institutions in order to build the capacities? Because that is also what you were alluding to, Kate: the necessary time to actually understand the potential of these tools and how they impact your discipline, also if you are not in the area of AI or new technologies. So Frank, what is your experience on your end here?

Yeah, I wanted to quickly also talk a bit about teaching and the opportunities this tool actually presents to us as teachers and as educators. Last week I gave a similar talk, and one of the things I did was to demonstrate how you can actually use ChatGPT, depending on how you prompt it, to develop a whole curriculum. I came up with a course that didn't exist at the university, and then I asked ChatGPT to come up with a curriculum and suggest courses to teach for this one-year master's programme spread over two semesters, and it did it: for each semester it gave me suggestions for courses. This is a programme related to natural language processing, and with me as an expert I looked at the topics it was recommending and thought, okay, this is not far from what we would teach in such a programme. Then I went on to ask it: now that I have courses, take one course, and I want you to generate learning outcomes for this course, and it did that. I went on: weekly topics spread over the entire semester for this particular course, and then lecture plans, assignments, grading criteria, a rubric, and a final exam. So it was, to some extent, able to generate all of these.
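A rough sketch of the staged prompting workflow Frank describes (curriculum first, then learning outcomes, then weekly topics and assessment for one course) might look like the Python below. The prompts, the helper function and the use of the openai package's chat interface are assumptions added for illustration, not something shown in the webinar; any chat-based model interface could be substituted.

    # Sketch only: staged prompting for curriculum design, assuming the
    # openai Python package (v1-style client) and an OPENAI_API_KEY in the
    # environment. Keeping the whole exchange in `history` lets each later
    # prompt refer back to earlier answers, as in Frank's demonstration.
    from openai import OpenAI

    client = OpenAI()
    history = []

    def ask(prompt: str) -> str:
        """Add a user prompt to the running conversation and return the reply."""
        history.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(model="gpt-4", messages=history)
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    curriculum = ask("Propose a curriculum for a one-year master's programme in "
                     "natural language processing, spread over two semesters, "
                     "listing the courses for each semester.")
    outcomes = ask("Take one of those courses and generate its learning outcomes.")
    plan = ask("Now give weekly topics for a full semester, a lecture plan, one "
               "assignment with grading criteria and a rubric, and a final exam.")
    print(curriculum, outcomes, plan, sep="\n\n")

As Frank stresses, an expert still has to check every suggestion this produces, because the model will happily make up plausible-sounding content.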
If I think of us in academia, one very common comment is that teaching takes up all the time and there is very little time for research, but what we now have is a tool that can actually assist us in developing some of these materials. It requires expertise, because what happens in developing some of these materials, especially using it for a lecture plan or for developing a lecture, is that it also makes things up, things that do not exist, non-factual things. So you first need to have the expertise in that topic to be able to get the best out of ChatGPT. But what we see is a tool that could be very beneficial, give us a lot of space, and be a very good starting point for brainstorming: coming up with lectures, coming up with a course, developing a whole curriculum for an entire programme.

On the learning side, I think Kate mentioned most of it, and Chris hinted at some of it in the beginning. What I see as the biggest challenge is the outsourcing of our ability to think, our ability to code, our ability to write. As Chris mentioned: do we want to outsource the very things that define us as academics, the very things that define a student who has been through higher education, to a generative AI? Just hypothetically, imagine a student who has been through a whole bachelor's programme or a whole master's programme, used generative AI throughout, was never caught, and then gets a degree. At the end, the question is: is the degree worth anything, if a student has gone through such a process? But then again, this brings us back to the question of what our learning outcomes are, what we really want to teach students; I think we always have to go back to that, to what we want them to gain from their education, what we want them to take away at the end of their programme.

In my working group, the ChatGPT working group trying to come up with a policy, what we have decided, because there is so much diversity in the courses taught at the university, is that it is really difficult to use a top-down approach, which would be the university coming up with one big policy that then applies to all the courses; it is almost impossible, because the courses are so different. So what we are doing is handling this bottom-up: we ask each programme, each department, to think about what it is they want their students to be able to do at the end of their education, at the end of their studies, what the learning objectives and learning goals are, and then we build it up from there. Then, is there a space where we can encourage students, or even equip them, to use a tool like generative AI to make the most of those learning objectives? In that case you are using generative AI as just a tool to equip students to be more productive, or more efficient, in meeting the learning outcomes or learning goals. I personally like this approach, because in the end every department comes up with some sort of mini policy, which will then develop into a bigger policy that is applicable to the whole university. So again, I think the key point here is that whatever we want to teach the students, we want to make sure that they don't outsource this responsibility to a generative AI, because we want them to be responsible agents in the society that we are preparing them to be part of.
Yes, so part of the role as well is to make them understand the potential, but also the limitations, of the tools they are using and how they are using them. Chris, can I be provocative? Hearing Frank say that you can actually use this as an assistant to build the curriculum and the learning plans, will higher education institutions just become places where you buy the credentials that you need? Because, in theory, you could learn all of that yourself if you have access to it. What is really the added value of higher education here, when we have tools like ChatGPT available to us?

Well, I have worked with many learning technologies over my half century, and one thing that you can say about every single one of them is that these are double-edged swords: they can be used well, and they can be used poorly. I think that many of the proposed uses for generative AI are not very good uses, but that doesn't mean that it can't be used well, that it can't solve some problems we've had for a long time and make an exciting difference. So I'm going to express one more concern, and then I'm going to be balanced by talking about what I see as some of the real opportunities.

Let's contrast ChatGPT with the search engine, which is also AI. It's not generative AI, although people are now adding generative AI into search engines; it's a different kind of AI, a machine learning kind of AI. At Harvard we have wonderful reference librarians, and if you turn a reference librarian loose with a search engine, they can find all sorts of amazing sources that someone like me, just using common terms with a search engine, can't. In contrast to when I started my career, when I would go to the library and wrestle with the Dewey Decimal System to try to find something that might be relevant, now a skilled person in partnership with that kind of AI can find amazing and important things. But what I see students doing, and faculty for that matter, and lawyers and so on, is thinking that the generative AI is like the reference librarian: that this is some kind of intelligence that is going to do a search, that is going to know better than the search engine what you really want, because search engines, as you know, can return a lot of false hits, and that it is going to select those for you and give you the answer that you want. That is simply not the case. This is not Skynet in The Terminator; this is not something with consciousness that can think and plan and, for better or for worse, decide that it knows better than human beings. This is a large language model, and as Frank said, its job is to complete sentences; that is its mission. So what you are doing is taking your own intelligence out of the loop, the looking at what the search engine gives you, deciding which results are relevant and how to combine them, and turning all of that over to a generative AI. That's where "generative" comes from: the agency passes to the AI, and it gives you, well, here's what you wanted. And of course there are a lot of problems with that. There are problems with hallucinations, because it will make things up, because it doesn't understand what it is doing; if the next word leads down a path that doesn't exist, it will cheerfully go down that path. But it also replicates. I'm on the advisory group for a major publisher, and other members of the advisory group are lawyers, both in the US and in Europe, and they deal with issues of intellectual property, issues of bias, and issues of privacy.
Generative AI can give you back up to 1,500 words verbatim from something that was fed into it, and in what gets fed in there is no regard for copyright, no regard for intellectual property; these companies are just throwing in everything on the web. So if I don't get a hallucination, what I may get is a curriculum that somebody else has already written, which the generative AI is simply regurgitating without any attribution. That is dangerous; that is really dangerous. I think using generative AI to write references is malpractice: it has no idea who you are writing the reference for. I think using generative AI to screen applicants for university, or applicants for a job, is malpractice: it has no idea who these people are, it is just finding patterns.

Having said all that, I don't want to come across as the grumpy old man who hates generative AI. What am I excited about? I'm excited about one of the things that Frank discussed, which is that this has actually solved the natural language processing problem. You can express anything to generative AI, using idioms, using paraphrasing, using unusual terms, and it will understand what you are saying: understand in the sense that it will make an appropriate response, not understand in the way that we would understand. That's a big deal, because now, for many tools, you can do natural language processing across the full range of languages and interact in a whole different way, a way that makes the tool much more accessible to you. That is a really big deal. There is a company called Wonda, w-o-n-d-a, I'll put a link into the chat, that is doing really interesting things with chatbots with generative AI back ends for language learning, creating language-learning situations where context and culture are important; I've worked with my colleague Professor Nicole Mills at Harvard on this, she is head of the Department of Romance Languages.

The other thing that I'm excited about is that generative AI can take over routine parts of some work. I'm not talking about learning now, where you can short-circuit the journey; I'm talking about the workplace, when you already know what you are doing and can use generative AI to help you with routine parts of your work. That is where this concept of intelligence augmentation comes in: just as with other forms of automation, when you are able to turn over routine parts of work, you can be de-skilled by that if you don't choose to do anything more yourself, or you can upskill, and now you can do all the things that generative AI can't. I'll give you just one quick example, because I know this isn't a lecture. Science fiction has long thought about intelligence augmentation, although it hasn't used that term. Some of you may be fans of Star Trek: The Next Generation, and if so, you know that Captain Picard is the wise human being who runs the starship, and Data is the android, the machine who looks like a human being but who in fact is a computer with artificial intelligence. The two of them form a really effective partnership, because Data can ingest far more data and make all sorts of calculative predictions with it, but Picard is the human being who has wisdom, who knows how to apply the predictions in a reasonable way. And what we are doing now in education is training people to lose to AI, because our measures of success are things like high-stakes tests and descriptive essays that AI can do.
As part of intelligence augmentation, what we need to be doing is shifting our outcomes to judgment, the things that AI can't do, and until we make that shift we are automating the wrong things to teach students in this brave new world of generative AI, instead of innovating to take advantage of how we can teach the right things to students.

Thank you very much, Chris, for adding both the concerns and some of the challenges before us in terms of how these tools can, as you say, either be used for good or be misused. I'm just thinking: is that not one of the reasons why educating students about these tools, their limitations and the opportunities for how they can be used, is part of the role of universities today, as part of the learning process, more or less across the different disciplines? I also see questions in the Q&A about academic integrity: what is the way forward? Are we going to rely on being able to trust humans to use these tools for good, to educate humans to use them for good, or are we going to try to get rid of them? That is not what I'm hearing from you; I hear that this is something that will exist in society whether we adopt it or not within the walls of higher education institutions. So what do we do to make sure that there is the necessary knowledge about its use, and that students are prepared and know how to act, what they can and cannot do in terms of using these tools? We have different types of exams, depending on the disciplines and depending on the numbers, as you were saying, Kate, when it comes to assessment, because it is about scaling up, and we have limited resources, so we cannot always do what we would actually prefer to do. What would your advice be in terms of integrating this within higher education institutions, as part of the curriculum and in different disciplines? Is that something that you see for the future? Kate, I'll start with you this time.

One of the things, I think, is that it is being used in people's lives and workplaces, and so one thing I think is important is to acknowledge the way in which students are using it: not make it a "you can or you can't use it", but ask in what way you have used it to inform your thinking. I was talking to someone just last week who has adopted it in the referencing style that the students need to use; I think they were doing lesson plans or something, and so the students cite that the idea, or whatever it is, came from ChatGPT, and in the reference list they put the prompt that they used. So I think some of it is about helping students understand how they are connecting that information, using other information as well, from searching, from other sources, and putting it all together; I think that is a really powerful way to do it. That is certainly how we are looking at it in the research space as well: with the journal that I am one of the editors of, it is really about acknowledging how people are using it to develop the ideas that they have. And so, if we are thinking back to it being an authentic assessment, and trying to do things at scale, that would be the bare minimum: you actually don't have to change too much about what you are doing, it is just another source that people are drawing on.
But then we can be thinking about broadening the way the AI is interacting with the students and how they are developing knowledge. Just one thing to acknowledge in all of that, when it comes to assessment, when it comes to any of these things, is that universities are so slow to change, at least in Australia, and particularly in some disciplines more than others. Teacher education in Australia, for example, is an area where there are accreditation processes that exist nationally as well, so changing any part of the assessment process can take years. So I think that in the meantime, while we are thinking about all of these great ideas, for some disciplines we need to figure out ways that we can incorporate it into current practice as well.

Yes, thank you, Kate. So this is of course also a challenge in terms of national policies, in terms of the traditional ways of operating, and then how to change that. Frank, in terms of assessments, are there other things that you see, beyond the points that have already been mentioned here, that would need to change, or some advice that you would give? Is it the end of essays and dissertations for assessment, or do we have to think differently? Or can we also set up rules, as for some exams where you are allowed to use books or calculators, and then say yes or no, you are allowed to use ChatGPT as one of the tools for your exam? Is that what you see as a way of getting around some of the challenges here?

Yeah, I think I would approach it in two ways. Before I talk about the assessment of theses specifically, I think the first issue we are dealing with here is: do we even trust our students to responsibly use this technology? For me this is the key question, because when this was launched, and all of a sudden we were shocked that we have this technology among us which students can easily copy from, I think the immediate conclusion, at least from the reaction I saw around me, was that we actually don't trust our students: they are going to cheat, they are going to copy, they are going to take things from ChatGPT and present them as their own work. And I think this raises a further, very important question: what is it that we are teaching our students? One of the key things we want students to take from higher education is academic integrity: that we are raising students who will be out there, whether in academia or outside academia, exercising this integrity, this transparency about wherever they take something from. Google has been in existence for a very long time; if you take something from Google, from a book you read at the library, or from a blog post, you have to cite it, you have to reference it if it is not your own idea. So for me, the reaction that all of a sudden we can't trust our students anymore, that they are going to cheat and take things from ChatGPT, brings up the issue that perhaps we are not teaching our students to uphold academic integrity. I also believe that we should include students in this discussion. Much of it has been among staff: how do we deal with this, how do we prevent students from cheating, without hearing from students what they really want and how they see this.
Including them in the process of policy-making, I think, is really important.

On the second part, which has to do with dissertations: I'm personally not so worried about that. I don't believe you can use ChatGPT to write a whole dissertation, and anyone who is going to be able to do that actually needs to first have expertise in that topic to be able to write a whole thesis with ChatGPT and get a very good grade for it, because otherwise it is going to be very shallow, it is going to make things up, it is going to give you references or citations that do not exist. It means that for every output you take from ChatGPT to include in your thesis, you would have to spend time reading on that particular topic to check whether the output you are getting is actually correct. So I think anybody who went down this path would end up becoming an expert on that topic, by using ChatGPT and then cross-checking its output against real, actual research and doing proper citations and all of that. So I'm personally not worried about theses. And, as I mentioned earlier on, if my student is using it to correct grammatical errors and typos, with the content left intact, so the student has written the content but is not sure about the grammatical mistakes, then for me this is not problematic, as long as they also reference that ChatGPT was used to correct the grammatical errors.

I actually experimented with this in one of the courses I taught last semester. I made up the assignment in a way that they could not directly use ChatGPT, and then I said to my students: if you happen to use ChatGPT, make sure you cite it, reference that "I used ChatGPT for this part of the assignment, to do this specific task", and so on. To my surprise, three students did use ChatGPT, and they acknowledged it as well. I was happy to see that, because at least they were transparent about how they used it, and the way they used it wasn't to generate the content, which was actually not possible because of how I structured the assignment. So I think the issue here has to do with whether we are teaching students to be transparent, and if we are scared that they are going to cheat, then perhaps we should revisit this goal of teaching our students to be transparent about wherever they are getting their ideas and their output from.

So, actually, while we hear some saying that tools like ChatGPT will undermine academic values, maybe, on the other hand, it is actually calling for a reinforcement of those academic values, for them to be at the centre, because with those as the guiding or steering wheel you can use tools like ChatGPT while being aware of their limitations, or not. I think that also makes us shift into the conversation about the information that is used in these tools. Chris, of course I'll give you the floor now, and you can still comment on the assessment part, which I know you already started at the end of your previous round of comments, but I also want to bring in the question of the reliability, accuracy and representation of the data that goes into this tool, and the biases that may be generated here. What is the role for higher education here? We cannot necessarily control the amount of data that is being used in these tools, so what is the role of higher education in discussing these types of issues?
So while we hear some saying that tools like ChatGPT will undermine academic values, maybe on the other hand this is actually a call to reinforce those academic values and put them at the centre, because with those as the guiding or steering wheel you can use tools such as ChatGPT while being aware of their limitations. I think that also shifts us into the conversation about the information that is used in these tools. Chris, I will give you the floor now, and you can still comment on the assessment part, which I know you started at the end of your previous round of comments, but I also want to bring in the question of reliability, accuracy and representation of the data that goes into these tools, and the biases that may be generated here. What role is there for higher education? We cannot necessarily control the amount of data being used in these tools, so what is the role of higher education in discussing this? Chris, in your introduction you called it a foggy mirror, which I thought was a nice allegory, because the way I see it, the information in ChatGPT is basically the digital products of humanity being given back to us in some form. The problem is that not all parts of the world have the same level of digital content. So what is at stake here for higher education, Chris?

Well, let's talk for a minute about dissertations and comprehensive exams. For a doctoral student, a written and oral comprehensive exam is not about regurgitating what you've read; it's about synthesizing what you've read and experienced and forming an original opinion about it, your opinion, rather than just quoting everybody else's. And a dissertation is supposed to be an original contribution to knowledge. Now, I started by saying that ChatGPT and generative AI are moonlight, not sunlight. If we turn off the sunlight, the moonlight is going to fade quickly, and the human contribution of originality and the extension of knowledge is the sunlight. So we certainly don't want to say, well, we've got generative AI now, we don't need the sunlight, our moon is going to take care of us; that's not how it works. We need to think about keeping performance assessments. Colleagues and I build virtual worlds in which students learn ecosystem science by entering the virtual world as an avatar, just like in an internet game, experiencing a virtual ecosystem and performing as an ecosystem scientist would. Then you look at the kind of summative products they produced and the kind of step-by-step formative actions they took, and you can assess what they know and don't know about inquiry and ecosystem science. That is a kind of assessment generative AI cannot do: it is not a psychometric, find-the-right-answer-embedded-in-tempting-wrong-answers proxy; it is actually the behaviour you are trying to measure. There are now systems for producing different kinds of practice environments and summative assessments that I think are very interesting and powerful. For example, I'm an advisor to a company called Mursion, which is like a flight simulator for human skills; we're looking at Mursion to teach negotiation, and to teach teachers how to be equitable in how they distribute classroom discussion and personalize learning to each student. So we should keep the assessments that involve creation, that involve not proxies but the direct performance of what people are going to be doing in life, and throw out the ones that are proxies generative AI can do well on. Anything generative AI can do on a test it is going to do in the workplace (that work goes into the AI part of the job), so we had better be teaching people what they themselves can perform.

As for the kinds of bias: one issue is that AI as a large language model is limited to whatever was put into its training and aligned through the process Frank described, and the world changes. New words come along, or the alignment that was performed becomes out of date. An example of everything changing is the pandemic: a pre-pandemic-trained AI is dead as a doornail the minute the pandemic comes along, because it has to be redone from scratch in terms of the training it receives.
So we don't live in the kind of stable world where we can train an AI and say, fine, it isn't going to adapt, it isn't going to grow, what you put in is what you get out. There are also issues with words themselves, and an interesting example is the word "not". We use "not" frequently in natural language; it is an important way of expressing things. "Not" is very difficult for large language models, because if you say "a dog is", the model can find all sorts of things that a dog is on the web and use those words to answer. If you say "a dog is not", well, a dog is not a shoe, but that may not be a useful thing to say; a dog is not president of the United States, and that may not be a useful thing to say either. So it sort of frantically searches for "not"s, and if it can't find them for dogs it finds them for pets, and you can get something like "a dog is not good at repeating things like a parrot", which is not something people think about in terms of dogs. So even in its understanding of what human beings are saying to it, and of how to give a good response, it sits somewhere between outright plagiarism (copying things it was trained on), hallucinating in different ways, and auto-completing in random ways when it hits a word like "not". It is problematic, and it is problematic to use. One of the things my word processor does is autocompletion: you are typing and it suggests what you want to do next. That is a very dangerous thing, and I always turn autocompletion off. Say I am writing a recommendation letter and this thing is busy auto-completing for me: I am being biased unconsciously towards what it says, and I have to consciously say no, that isn't right, and delete it, as opposed to just hitting Tab and moving on. That is something sitting on your shoulder whispering in your ear, and I would be very careful about what you have whispered into your ear. So I think there are a lot of reasons to be cautious about the weaknesses of these large language models in addition to their strengths.
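Chris's point about the word "not" can be probed with any small open model. The sketch below is only an illustration, not something the panel built: it uses the Hugging Face transformers library with GPT-2, and the model choice, sampling settings and prompts are arbitrary assumptions, so the continuations will differ from run to run.

```python
# Minimal sketch: compare how a small language model continues an affirmative
# prompt versus a negated one, in the spirit of the "a dog is not" example.
# GPT-2 is chosen only because it is small and freely available.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations repeatable
generator = pipeline("text-generation", model="gpt2")

for prompt in ["A dog is", "A dog is not"]:
    outputs = generator(
        prompt,
        max_new_tokens=15,
        num_return_sequences=3,
        do_sample=True,
        pad_token_id=50256,  # GPT-2's end-of-text id, silences a padding warning
    )
    print(f"\nPrompt: {prompt!r}")
    for out in outputs:
        print("  ->", out["generated_text"])
```

The point is only to make the comparison easy to run; nothing here proves how a larger, aligned model would behave on negation.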
Thank you very much, Chris. What about you, Kate, in terms of the data? You and Chris come from countries where English is the primary language, at least for most people or at least the language used in schools, whereas Frank and I do not. What does it mean that we have different languages, different cultures, different representations? What does it mean for the type of answers we are getting? Is it going to influence the way we perceive the world, in a way that is limited to a biased slice of digital production, because this is how we operate in the real world and it is then mirrored through these systems? And what can we do here; should it be part of the education cycle as well?

Yes, I think it is absolutely a risk in terms of the diversity of the data that the models are drawing on, with English definitely being the primary source. And if you then think about who is producing that text: even among English-speaking countries, that does not necessarily mean those countries are generating much of the text these models draw on. So if the models are then using that to make predictions about how the world is seen and what the answers to these questions are, we are not getting a diversity of views, and what we do get may already be a couple of years old by the time it has been put together. But one of the interesting things I heard about just the other week, at a conference for the learning sciences, the science of how people learn, is that some universities in the US are talking about universities having a responsibility to create databases, the data that models would then draw on. So there are steps being taken to try to address this, at least. I see it as a really big problem, and therefore a really big topic to be talking about with students: when they look critically at the answers they get from their prompts, not only asking whether the sentence makes sense or whether it is hallucinating and all the rest, but asking what assumptions sat behind the data it is drawing on.

Thank you very much, Kate. I also want to pose this question to you, Frank, with your experience both in how these models are built and from your own life experience.

Yes, a few things to mention. The first has to do with the fact that these systems can skew information, or manipulate information in favour of specific interests. First of all, it comes down to the data: these models have been trained on data produced by humans, and a lot of the time whatever a model gives you is based on that data. If I gave you an exercise right now to go on Google and search for "professor", you might have to scroll before you find a female professor; if you look at the images, almost everything it gives you will be male-dominated. That problem comes from the data. The other important thing has to do with the fact that these models are being created by for-profit companies in Silicon Valley, and I am not sure we want the ideologies and interests of Silicon Valley to be represented in society. To put it differently, I am not sure we can trust Silicon Valley with the instructions they put into a model that society at large will use. Part of ChatGPT, namely GPT-3.5, is based on InstructGPT, where on top of all these parameters you give the model instructions about what to say and what not to say, what is a no-go and what is a goal. If you have people working in Silicon Valley giving that sort of instruction to these models, I am not sure it is the right way to go about it. Then there is the topic of inequality. We do not know for sure, because OpenAI was not very transparent about the exact data they used, but we can at least infer from the data used for GPT-3, on which ChatGPT is based, that about 92.5 percent of that data is English; other languages are involved as well. Every language comes with its culture, and this goes back to the comment I made (I think Chris also mentioned it): if the language we get from these models comes with a specific culture, we should again be cautious about what the model says to us, because it can easily lead you down a path. And if language comes with culture, and the training data is mostly English with its particular culture,
and this model is then made accessible to everyone across the globe, I am not sure it helps with the inequality problem we already have. If you have a model trained with 92.5 percent of its data coming from one specific language, with one specific culture, what about the other languages? Look ten years from now: if we think of advancement in technology and imagine one specific technology being shipped worldwide by one single culture, who even decides that that is the right culture to make accessible to all people across the globe? Inequality already exists in our society, but models like this are only going to amplify these kinds of inequalities. One thing I did the other day was to hypothetically come up with a non-factual event in Ghana. I gave the prompt: can you describe to me, in 100 words, how Great Hall, one of the halls at my alma mater in Ghana, the University of Ghana, was destroyed by an earthquake? And ChatGPT duly described how the hall was destroyed by an earthquake. But this is not true; it never happened. The reason is that it had only very little training data, in English, from Ghana. It may be the same for someone from Sri Lanka, or for someone from Lithuania. So this inequality is only going to get wider, and if it is in the hands of people who are in it for business, then I do not see much hope for the future unless we do something about it.
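Frank's figure, roughly 92.5 percent English in the GPT-3 training text, raises the question of how the language mix of a corpus is even measured. The sketch below is a minimal illustration of such an audit over a few placeholder sentences; langdetect is just one possible detector, and nothing here reflects how OpenAI actually assembled or reported its data.

```python
# Minimal sketch: estimate the language mix of a small collection of texts,
# the kind of audit that figures like "92.5 percent English" come from.
# The sample sentences are placeholders, not real training data.
from collections import Counter
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make language detection deterministic

documents = [
    "The university announced a new policy on generative AI.",
    "De universiteit kondigde een nieuw beleid aan over generatieve AI.",
    "L'université a annoncé une nouvelle politique sur l'IA générative.",
    "Another English sentence, because English dominates most web corpora.",
]

counts = Counter(detect(doc) for doc in documents)
total = sum(counts.values())
for lang, n in counts.most_common():
    print(f"{lang}: {n}/{total} documents ({n / total:.0%})")
```

Run over a large sample of documents rather than four sentences, the same tally gives a first estimate of how heavily one language dominates a collection.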
Thank you, Frank, for pointing that out to us. I think you are pointing to something that is also a personal concern of mine: the commercial and political interests that can be injected into these tools and that could skew or bias the information we are actually using. But is that not one of the reasons why, again, higher education has a key role to play here, as you were saying, Frank: that we are aware of the limitations of these tools and try to optimize the opportunities? Chris, you mentioned examples of why this is actually exciting for learning and how we can use these tools to enhance the quality of teaching and learning, but we also have to be very aware of some of the limitations of the data being used and the potential uniformisation of our understanding of the world that could come through biased information. I saw one of the interviews with Sam Altman from OpenAI, and he himself was calling for an entity similar to the International Atomic Energy Agency, which was set up at the time nuclear technology came to be considered a potential threat to humanity. So maybe for this last round, and then we will have to round off this very exciting conversation even though there are many questions we could continue to debate: is this something we would need? It goes a bit beyond the domain of higher education, but at the same time it also impacts higher education, in terms of the societies we interact with. Do we need an international regulating body, so that developing these types of technologies is not left in the hands of commercial interests? Is that one solution to try to fix the problems of data, representation and inequality, and then actually enhance these tools, or is that utopian and never going to happen? Chris, I give you the floor.

Well, I worry about regulatory bodies in a different way than I worry about political groups or commercial groups, but regulatory bodies have their problems too. I think this should be something that everybody talks about, that everybody develops opinions about, and that bottom-up initiatives advocate different ways of thinking about. I don't want to delegate my trust to any small organization, whether it is Silicon Valley, the U.S. government, or a regulatory body. I think we now have the tools, with the internet and remote interaction, to do better than that. But I do worry, if we are not very thoughtful, about throwing the baby out with the bathwater with this technology. And to build on what both Kate and Frank said, which was terrific: languages are ways of thinking. A different language is not simply using different symbols or different sounds to say the same thing; it is a way of thinking that comes out of context and culture. So making any advising device multilingual is a risk, as they have said. And saying, well, now that we have fluent translation let's just do everything in English and translate everything from the other languages, and they will gradually vanish because we will all speak the same thing, that is a terrible mistake as well. We really have to think deeply about what we are doing, which may be a good thing: it may be that in trying to avoid these dangers we actually find better ways of doing what we are doing now.

Thank you very much, Chris. Kate, over to you.

Just a really quick one on this. I am okay with some regulation, mostly because I can see it can help people know how to use these tools in a safe and productive way. A recent article has just come out reporting a survey of Australian students, and a lot of them had not used ChatGPT this year so far simply because they did not know whether they were allowed to, or what the rules were. That is the micro level, but some regulation, I don't think, is a terrible thing. I think part of the role of higher education, though, is that we are preparing the people who in the future will be making these laws, looking at the changes in systems and developing all of these approaches. So all of this that we have been talking about, preparing people to be critical thinkers about it and to actually understand what is going on, is going to mean that they can participate in society and in making decisions around this in the future.

Thank you, Kate. Frank, you get the last word here.

Yes, two quick things. The first, which I think Chris also mentioned a while back, has to do with the fact that in these models there is something we call a black box in natural language processing: we actually do not know how these models learn. We can speculate about it, and there are people now doing research on this, but there is this big black box we do not understand. What makes it worse is that, on top of that, there is also a lack of transparency from the companies that make these models. OpenAI, for example, has not made its model public; it is not open source; we have no idea of the data sets or how it was trained, and that makes things worse. This is where I see regulation being very important: if
regulators legally required these companies to disclose how they built these models, that would be very important. And that ties into my next point, which is the role of higher education, or the role of research institutes, in this. If these models are made transparent, then the next step, I believe, is to equip research institutes and educators, through investment in infrastructure, to actually build and evaluate such models. The way these models are trained is on a cluster of supercomputers: you take something like a hundred supercomputers, put their power together, and then you are able to train one model. Microsoft, for example, built a dedicated computer cluster for exactly this. My university recently got a new cluster of computers for training large language models and other tasks as well, and if we were to train a model like GPT-4 on our cluster it would take us 64 years; OpenAI did it in one to three months. So you can see that research institutes are already way behind what the companies are doing. If we can have some sort of investment in infrastructure, in research, in people and in education, then we will be in a better position to evaluate the harms, the biases, and all the ins and outs of some of these models that are on the market and already having their influence on society. So thank you.

And I see that, unfortunately, we have come to an end; we are running out of time. It has been a fantastic session, and it has been a great pleasure for me to listen to all three of you; thank you for sharing your views and your expertise. I think we have not provided absolute answers to everything here, but we have broached a lot of important topics and shown some of the opportunities and some of the challenges. I think there was agreement, to some extent, that this is going to be part of society whether higher education embarks on that road or not, so it is also a matter of adapting to society and then, as you were saying, Frank, perhaps calling for more transparency, and for universities to be engaged not only in educating students to be knowledgeable about the limitations and opportunities of these tools, but also in understanding the longer-term impact, being able to do research on that impact and to inform policy making, hopefully, as you were also referring to, Kate, through the students exiting the universities as well. I think this is only the first conversation about AI and its impact on higher education from our side at IAU, and it is a topic that will come back in different forms, but I was very pleased to have the three of you with me here today. I thank you all, and I thank all the participants for being with us and listening in, and I look forward to continuing the conversations beyond this webinar. So thank you to all three of you. Bye.

All right, thank you for having us. Bye. Thank you.
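As a rough check on the training-time figures Frank quotes near the end (about 64 years on his university's cluster versus one to three months at OpenAI), the back-of-envelope arithmetic below uses only the numbers as stated in the discussion.

```python
# Back-of-envelope reading of the compute gap described above: training that
# would take ~64 years on a university cluster reportedly took OpenAI
# roughly one to three months. Both figures are as quoted in the webinar.
university_years = 64
university_months = university_years * 12  # 768 months of cluster time

for openai_months in (1, 3):
    ratio = university_months / openai_months
    print(f"If OpenAI needed {openai_months} month(s), the implied gap is about {ratio:.0f}x")
# Prints an implied gap of roughly 256x to 768x in effective training capacity.
```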
Info
Channel: International Association of Universities
Views: 9,445
Id: gE_GKsdTPAs
Length: 92min 39sec (5559 seconds)
Published: Tue Jun 27 2023