22 00:17:33.840 --> 00:17:56.990 Debbie McVitty: Good afternoon, everyone, and a really warm welcome to this webinar, Agents of Change. Inevitably, because this is a webinar about technology, the 1st thing that happened was we had some tech snafus. So thank goodness we're all here, and thank you very much indeed for your patience while we get started. This promises to be a really interesting conversation, and can I encourage you to get posting in the chat box? 23 00:17:56.990 --> 00:18:12.789 Debbie McVitty: Ask lots of questions, get involved. Our topic today is autonomous AI agents. We're working in partnership with our friends at Salesforce, who are doing some really, really interesting work in this space, and we want to talk about what the implications might be for universities' 24 00:18:12.790 --> 00:18:16.020 Debbie McVitty: activity around recruitment and admissions, which is obviously an incredibly 25 00:18:16.020 --> 00:18:22.880 Debbie McVitty: timely moment to be thinking about how we make that work more efficient, more streamlined, more impactful, and so on. 26 00:18:22.880 --> 00:18:45.420 Debbie McVitty: Autonomous AI agents are very definitely the subject of an awful lot of chatter over in tech world, and I think, as ever, there's an awful lot of excitement and hype. We're going to talk a little bit about what they are and how they might be used, but then we're going to try and make that real through conversations with our expert interlocutors. 27 00:18:45.740 --> 00:18:46.670 Debbie McVitty: So 28 00:18:46.890 --> 00:18:56.220 Debbie McVitty: if you discount the idea that there's going to be some kind of general artificial intelligence that's going to take over the world in the next 10 years (you may believe that's true, but there's very little we can do about it).
29 00:18:56.540 --> 00:19:14.029 Debbie McVitty: The big promise of AI is really about transforming work, and that's about doing things like picking up complicated but routine tasks and automating those. Really importantly, it's about becoming more efficient and more effective by using AI to interpret 30 00:19:14.030 --> 00:19:41.659 Debbie McVitty: large data sets and then inform decisions off the back of that. An autonomous AI agent is very firmly in that sort of second category of efficiency and impact, rather than, you know, going out there talking to students on your behalf; it's being able to interpret specific data sets and respond and adapt what it's doing, without needing human intervention at every stage. 31 00:19:41.810 --> 00:19:48.849 Debbie McVitty: But rather than trying to define precisely what autonomous agents are, it's probably easier just to look at some use cases. So 32 00:19:49.100 --> 00:19:50.970 Debbie McVitty: autonomous agents, 33 00:19:51.050 --> 00:20:14.360 Debbie McVitty: both in higher education and in lots and lots of other industries and contexts, can do things like handle routine customer inquiries and make personalized recommendations to customers. If you've been to the cinema recently, you might have seen Matthew McConaughey wearing a silly hat because his personal shopper had not deployed an autonomous agent to personalize the recommendations. 34 00:20:14.360 --> 00:20:29.039 Debbie McVitty: Autonomous agents can create and optimize marketing campaigns, so they can send out content to a segmented group of users, look at how those users are responding, and then adapt the content in light of that user response. 35 00:20:29.040 --> 00:20:44.500 Debbie McVitty: And autonomous agents can also act, to an extent, as coaches. They can role-play scenarios.
They can give feedback, and then they can link that back to the performance of the thing that they're trying to coach on, and so on. And, you know, we're very much at the start of this technology, and 36 00:20:44.500 --> 00:21:00.770 Debbie McVitty: the possibilities for its application are quite, quite broad. But also we do want to detach the hype and the speculation from what's actually possible right now, and what might be possible in the future. 37 00:21:00.960 --> 00:21:29.379 Debbie McVitty: Technology is a tool, not a panacea. Any autonomous AI agent is only going to be as good as the people and organizations that are deploying it. So with that in mind, here to help us get to grips with all those possibilities and pitfalls are 2 expert guests. Paul Napleton is head of digital and marketing automation at the University of East Anglia, so you can imagine the sorts of questions he gets in his inbox every day, and Maëlle Lavenant is senior product marketing manager at Salesforce. So, Paul, we're going to start with you, 38 00:21:29.670 --> 00:21:49.189 Debbie McVitty: and I guess, you know, you've got some thoughts to set the scene, but broadly, what's the picture from where you sit about those broad trends in the world of tech, and then how they apply in the recruitment and admissions space? And I'm going to have to ask, are you in a position to share slides, or would you like us to? 39 00:21:49.190 --> 00:22:05.100 Paul Napleton: Yeah, sure. 1st of all, it's a pleasure to be here, and thanks for the introduction. This is such a fast-paced, exciting topic. And what's really fascinating is, this week alone we've seen, you know, loads of government announcements and excitement around 40 00:22:05.100 --> 00:22:29.869 Paul Napleton: AI, both from Keir Starmer talking about the UK being open for business for AI, and a big investment, as well as then
the Education Secretary thinking about how AI could be a real force for good in higher education. So, a fast-moving topic, and it's fascinating to be able to talk about some of the things that we've been doing at UEA, but also how I see the uses of it going forward when it comes to AI, and in particular agentic AI 41 00:22:29.870 --> 00:22:42.949 Paul Napleton: going forward. I will just trust in the technology and try and share my screen, because I've only got a couple of scene-setting slides, which I hope will be useful to provide some kind of context. So I'm going to 42 00:22:43.110 --> 00:22:50.380 Paul Napleton: hopefully see if you can see my screen. Does this look okay from your perspective? 43 00:22:53.070 --> 00:22:53.905 Paul Napleton: So 44 00:22:55.100 --> 00:23:08.990 Paul Napleton: what I've got here is just an infographic, which I think is really, really nice, and hopefully will speak to a lot of the common themes that we've all been living through over the last few years. 45 00:23:09.200 --> 00:23:27.550 Paul Napleton: There's no particular order to this, but it's just highlighting some of the geopolitical changes, some of the economic changes, the brand changes, both macro and micro, that are affecting how we do things, and why technology, and in particular AI, is going to help us navigate our way through all of this. 46 00:23:27.550 --> 00:23:40.359 Paul Napleton: So we know increasingly that our students and potential students are demanding those Netflix-type experiences. They don't see a distinction as such. So increasingly we have to be thinking about: how do we personalize?
47 00:23:40.360 --> 00:24:02.850 Paul Napleton: How do we enable our prospects and our existing students to see themselves at the university, building that sense of community and engagement? And that cuts across all different touch points, through open days, through to social media, through to peer-to-peer marketing, but the website and CRM is an obvious one as well. So how do you build that sense of belonging? How do you evidence that? 48 00:24:02.850 --> 00:24:26.809 Paul Napleton: We know channels are changing rapidly. We know technology is changing. We can't just rely on, say, Google to help us, because you can rank well for your web pages, but AI overviews are changing search engine optimization. People are not even getting to your pages; they're trusting in the AI. Apple Intelligence, for example, is now changing how people consume emails, and so on. So the technology is constantly changing. 49 00:24:26.810 --> 00:24:50.390 Paul Napleton: Universities are thinking about how to implement this new technology into existing legacy systems, and so on. So there's a lot of change going on in the sector. But technology can be a positive force for good, and in particular AI, which I believe can be a great way to save you some time, take some of the monotony out of the work that we do, spot those opportunities early, and give you a chance of having a go at 50 00:24:50.490 --> 00:25:00.110 Paul Napleton: building those lifelong relationships, that 360 view of students, which has always slightly eluded us. But technology is the best chance that we've got at trying to build that. 51 00:25:01.540 --> 00:25:10.500 Paul Napleton: But if you think of AI, it isn't just one thing. You know, we need to be thinking about AI as part of a broader 52 00:25:10.820 --> 00:25:30.240 Paul Napleton: digital transformation, which I'm sure all of us are going through at different paces and different speeds, and so on.
But, you know, in my world I work predominantly in marketing, and the marketing data ecosystem alone is really fast-changing in higher education. So, you know, I see AI as a real enabler when we're thinking about 53 00:25:30.370 --> 00:25:43.850 Paul Napleton: that data ecosystem. So some of the things around this slide are other areas that are related to AI but that we also have to be considering now. So, you know, at the heart of it should be the customer data platform and, increasingly, 54 00:25:44.100 --> 00:26:12.679 Paul Napleton: education-specific student information systems. So that's your kind of mission control, if you like, your HQ for routing everything through. It should be at the heart of your ecosystem. But then, in order to build that 360 view of students, we need to be thinking about: how do we build digital-first, mobile-first? How do we think about the evolution of chatbots and agents, as we'll come on to talk about? How do we transfer data between those bots? 55 00:26:12.680 --> 00:26:23.270 Paul Napleton: You know, how do we get increasingly real-time insights and optimization when it comes to our marketing campaigns? Augmented and virtual reality isn't just some kind of 56 00:26:23.270 --> 00:26:36.129 Paul Napleton: futuristic thing; it's a great way to engage people and connect and build that sense of community, and so on. So, you know, that ecosystem is radically changing, and AI is a core part of that. 57 00:26:36.160 --> 00:26:36.970 Paul Napleton: And 58 00:26:37.370 --> 00:27:02.349 Paul Napleton: just leaning more into AI itself, I pulled this slide together because I think there are different waves of AI. Everyone's been getting very excited recently over the last year or 2, because, you know, people were thinking about generative AI and the explosion of things like ChatGPT, and so on. But AI itself, as a term, has been around since the 1950s.
And 59 00:27:02.664 --> 00:27:15.559 Paul Napleton: you know, we've all been living with predictive AI for some time now: that kind of analytical, next-best-action or recommendation engine that you see on Amazon, and things like that. However, the 60 00:27:15.580 --> 00:27:37.090 Paul Napleton: pace of the technology is moving so much that the horse has very much bolted, and if you're waiting for it to settle, the best thing is just to jump on board and try and ride the waves, because we've progressed into generative AI, in terms of the ability not just to spot opportunities, which is what predictive AI does, 61 00:27:37.090 --> 00:27:59.729 Paul Napleton: but then increasingly to create content and to utilize that, to act on the opportunities that we are predicting and spotting. We can use AI to gap-analyze and to make all these cool improvements when it comes to content. So our marketing is enhanced because of that; our recruitment, our ability to attract and retain students, has improved. 62 00:27:59.800 --> 00:28:11.540 Paul Napleton: However, what we're now coming onto, which is really exciting, is the ability not just to talk to data, which is where the copilot wave came in, the advancements of data in that sense, but 63 00:28:11.620 --> 00:28:23.119 Paul Napleton: agentic AI: you know, the ability to have proactive, autonomous AI that can have a focus on a specific task that you set it and control, 64 00:28:23.120 --> 00:28:45.700 Paul Napleton: but then have very limited human interaction after that, because it can talk to other AI, it can have that transfer of data, it can work 24/7 to help deliver added value. There are numerous waves after that, too, progressing more towards the science-fiction end of films like, you know,
65 00:28:45.730 --> 00:29:10.440 Paul Napleton: WALL-E and Blade Runner and so on, and Star Wars. But we're some way off that yet, in terms of that artificial general intelligence. But it's a really exciting thing to consider, isn't it? AI isn't just one thing, and it isn't new; it's been around a while. But it is evolving, and the way in which we can utilize it within higher education will only improve as we think about how we find ways to spot opportunities, 66 00:29:10.540 --> 00:29:16.240 Paul Napleton: enrich our student services and improve the student experience. 67 00:29:24.470 --> 00:29:38.989 Debbie McVitty: Fantastic. Thank you so much, Paul, and a really useful scene-setter for thinking about not only how things are changing now, but how they might change in the future. I guess, I mean, what's really interesting about your role is that it speaks to the way that 68 00:29:39.270 --> 00:29:53.610 Debbie McVitty: different departments of universities, so in this instance the marketing, recruitment and admissions area of the university, are increasingly bringing in digital expertise: people who have to have one foot in both camps. You've got a foot in that digital camp, but also a foot in that marketing, recruitment, admissions camp, and being 69 00:29:53.610 --> 00:29:54.170 Debbie McVitty: able to 70 00:29:54.170 --> 00:30:09.510 Debbie McVitty: integrate those things. But from that perspective, I guess, what's your view about how marketing, recruitment and admissions is changing and evolving, and what the cutting-edge challenges are there, such that you might then want to bring in technology to try and address those? 71 00:30:09.760 --> 00:30:31.139 Paul Napleton: Yeah, I mean, I'm always a big proponent of that.
If you have to use the word digital, then you probably don't quite understand it well enough yet, or it's not mature enough yet, because it means that you're doing something a little bit different. And there was some consultancy work done, I think it was by Deloitte, a year or 2 ago, that described the difference between doing digital, 72 00:30:31.140 --> 00:30:55.020 Paul Napleton: which everyone on this call will be doing: that stuff that we do digitally now that used to be more physical. But then, to truly be digital, you have to transcend, and you have to use new technologies like AI and automation and advanced analytics, and so on, to free up the time to create added-value opportunities for your human resources. So once you start thinking about it like that, 73 00:30:55.020 --> 00:31:08.730 Paul Napleton: it actually becomes more exciting and it becomes more empowering for your teams, because, you know, inevitably, when you're talking about AI and digital change generally, there can be some resistance: it's new, I don't understand it, will it affect my job, and so on. And 74 00:31:08.730 --> 00:31:23.669 Paul Napleton: I'm a big proponent of the fact that it isn't AI or humans, it's AI with humans. And I think if you start thinking about it in that sense, then your marketing and your ability to drive admissions, recruitment and marketing 75 00:31:23.670 --> 00:31:44.449 Paul Napleton: takes a slightly different lens. It's: how can we use the technology to enable us to make our content get seen better, to spot those opportunities better, to reassure prospective students that this is a place where they belong and is the best place for them? How do we create that learner-to-earner journey that's empowered by technology? Because, 76 00:31:44.450 --> 00:32:02.939 Paul Napleton: you know, to do all that personalization can take a bit of time the old way of doing things.
It can be quite inefficient; we can waste a lot of money on wasted advertising, and so on. The more that we can utilize technology to save money, improve efficiencies and build those bridges across all of our different pockets of data that we have across 77 00:32:02.940 --> 00:32:15.900 Paul Napleton: recruitment, housing, finance, alumni, and so on, the better we can build that 360 view that I talked about earlier. So I think technology as an enabler, technology as a way to focus on genuine, 78 00:32:15.900 --> 00:32:36.489 Paul Napleton: really new added value, means that we can begin to think about how we can win through data. We talk a lot in this sector about, you know, having the best course, or the best rankings, or the best location, and so on. But what about having the best relationships with our students, having the best data? If you can build those relationships, 79 00:32:36.520 --> 00:33:04.160 Paul Napleton: you know, you're more likely to retain those students once they join, and you're less likely to have dropouts and people wanting to shift in clearing, and so on. So it's a really exciting way to approach data, thinking about it as an enabler, and it brings teams along with you. It can be very easy to get excited and want everything to be perfect and wait for all the planets to align, but this technology is moving fast, and my recommendation is always to start somewhere. 80 00:33:04.190 --> 00:33:33.049 Paul Napleton: I talk to my teams a lot about having a crawl, walk, run methodology.
So find those use cases, those journeys, whether it's, I don't know, undergraduate prospecting, or whatever it might be. Show that there's an improvement by using AI or automation, show the time savings that can be gained by experimenting with a kind of agentic AI bot, and so on. Build a chatbot, build a virtual assistant, find ways in which you can 81 00:33:33.050 --> 00:33:51.349 Paul Napleton: reduce 80% of the repetitive questions that come into your service desk, and so on. Free up the time, show it works, and then all of a sudden that builds the snowball, and other teams are wanting to get on board with it. And again, it becomes less about digital being done over there by a team; it just becomes the way we do things now. 82 00:33:51.350 --> 00:33:52.070 Debbie McVitty: Hmm! 83 00:33:52.340 --> 00:34:13.940 Debbie McVitty: There's something really important there, isn't there, about taking control of the thing you're trying to achieve, which, again, is not a technology thing. For example, the rankings are not controlled by the institution. But knowing that you're achieving your objectives, knowing that you're the best, is something that you have a kind of sense of ownership over. You've got the capability to build that data set. 84 00:34:13.940 --> 00:34:14.510 Paul Napleton: Yeah. 85 00:34:14.510 --> 00:34:19.129 Debbie McVitty: You get to sort of decide how you measure that, and then work within those parameters. 86 00:34:19.270 --> 00:34:44.260 Paul Napleton: Yeah, absolutely. And AI is only as good as people's trust in the data that runs it. So trust is a big thing when it comes to AI, and the natural thing to progress onto, I guess, is to think about the ethics of AI and having your north star of where you want to get to when it comes to AI.
So have your AI principles across not just the admissions, recruitment and marketing world, but think about it 87 00:34:44.260 --> 00:34:54.749 Paul Napleton: at a university level. How could AI help you with your recruitment and enrollment, and so on? But also, what does it mean when it comes to teaching and research and innovation, and that kind of thing? Because, 88 00:34:54.860 --> 00:34:58.180 Paul Napleton: you know, why should students 89 00:34:58.250 --> 00:35:27.029 Paul Napleton: trust to come and spend many thousands of pounds, and an investment of their time and their future, in organizations that really don't understand AI and don't prepare them for a future where it'll be front and center? So I think having that north star and those principles, which we're trying to align with our brand values, means we can say: look, okay, we're going to work with only a few trusted partners that share our principles, and, you know, these considerations around ethics and around sustainability and around 90 00:35:27.030 --> 00:35:41.770 Paul Napleton: transparency, that kind of thing. That really helps, because it is a bit like the Wild West out there at the moment when it comes to AI; everyone's got a solution, and so on. And I think it's important for anyone who's interested in AI within higher education to 91 00:35:41.770 --> 00:35:55.759 Paul Napleton: pin a star on their chest and be like the sheriff of AI, right? So have those principles, know who you're going to work with, set yourself up for success in a trusted way. I think that builds confidence, and that enables you then to move faster than the competition. 92 00:35:56.460 --> 00:36:02.620 Debbie McVitty: Have you got a use case example you can share with us to, I don't know, I guess, bring some of this to life? 93 00:36:02.620 --> 00:36:25.609 Paul Napleton: Yeah, sure.
So, I mean, we use artificial intelligence in 3 key ways, particularly in my world and in what I've tried to bring into the team since I joined UEA. So the 1st one is that we use AI to support data-driven decision making. So that's the sort of thing where we use AI to help us with gap analysis: where do we have some information about some students, and where don't we? 94 00:36:25.610 --> 00:36:44.479 Paul Napleton: You know, where do we have incomplete information? Where are the trends that we can't quite predict because of the information, and so on? We use it for targeted advertising, so, you know, when it comes to lookalike audiences, customers like this and so on that match to records. It helps us a lot with search engine optimization, 95 00:36:44.480 --> 00:36:50.740 Paul Napleton: for example, in terms of improving the copy and making sure that it helps with the rankings, and so forth. 96 00:36:50.780 --> 00:36:59.419 Paul Napleton: The second big area is around personalizing our marketing. So I touched on that on the earlier infographic, but that's a big one. So, you know, how can we 97 00:36:59.420 --> 00:37:22.650 Paul Napleton: use things like send-time optimization or engagement rate scoring within our CRM journeys? So, sending an email to you, Debbie, at 4 o'clock on a Thursday afternoon, because that's when the AI knows you open your emails best, or sending you a different journey that includes SMS and WhatsApp, depending on how you like to consume your channels. And again, AI can spot that. 98 00:37:22.830 --> 00:37:41.609 Paul Napleton: It's about hyper-personalization at scale, which is kind of buzzwordy, but it's what we're trying to achieve. You know, it's: forget about personas and segments, to an extent; think about how I can communicate and engage with you as a person and build that relationship through technology.
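[Editor's note: the send-time optimization Paul describes can be illustrated in a few lines of code. This is a minimal sketch, assuming a simple data shape (a list of historical email-open timestamps per contact); it is illustrative only, not UEA's or Salesforce's actual implementation, and the function and constant names are invented.]

```python
from collections import Counter
from datetime import datetime

DEFAULT_SEND_HOUR = 10  # assumed fallback when a contact has no open history

def best_send_hour(open_timestamps):
    """Return the hour of day at which this contact most often opens email."""
    if not open_timestamps:
        return DEFAULT_SEND_HOUR
    # Count opens per hour of day and pick the most frequent one.
    hours = Counter(ts.hour for ts in open_timestamps)
    return hours.most_common(1)[0][0]

# Toy history: this contact tends to open emails around 16:00 on Thursdays.
history = [
    datetime(2025, 1, 9, 16, 5),
    datetime(2025, 1, 16, 16, 40),
    datetime(2025, 1, 23, 9, 15),
]
print(best_send_hour(history))  # 16
```

A real CRM journey would feed this from engagement-event data and combine it with channel-preference scoring, but the core idea is just this per-contact aggregation.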
So that's what we're trying to do there, 99 00:37:41.610 --> 00:38:01.570 Paul Napleton: as well as, then, of course, with our marketing, looking at ways in which we can use AI to help us with content creation or content augmentation, or even things like ideation. You know, you're kicking off something: let's get a 1st draft from AI, but then let's think about how it can be improved and added to, and so on. 100 00:38:01.770 --> 00:38:04.200 Paul Napleton: So there's a whole area there. 101 00:38:04.220 --> 00:38:15.839 Paul Napleton: And then the 3rd key area where we're using AI, which might be useful for people to know, is thinking about how we use AI to improve the student experience. And this is an area that goes well beyond marketing and into 102 00:38:15.840 --> 00:38:41.419 Paul Napleton: sort of the rest of the division. So how can we think about how, through, I don't know, virtual assistants and chatbots, we can augment the service desk, and so on? So a student could be chatting to a person, but the AI is running in the background, and it's going: oh, okay, this is really interesting, why don't you recommend this open day, why don't you recommend this webinar, or this event that you've got coming up, which the human may or may not know about? 103 00:38:41.470 --> 00:38:56.660 Paul Napleton: And then it's already written an automated email follow-up, so the person can get on to helping another person. And then all of that data gets stored into the CRM, and then you've got a perfect record: what they're interested in, what are they concerned about, what are they worried about? 104 00:38:58.160 --> 00:39:26.550 Paul Napleton: Another way in which we use AI is in things like conversation summaries on peer-to-peer chat. We work with Unibuddy, and we have a lot of peer-to-peer chat conversations, and that's great.
But, you know, the back and forth can sometimes be quite lengthy, and you couldn't necessarily extract the data, or the insights: what might we need to create a blog on, or a web page on? What are the trends? AI conversation summaries can summarize all of that, for example, and give you a perfect super-summary. So 105 00:39:26.620 --> 00:39:50.200 Paul Napleton: the list goes on, but I think this kind of shows 3 core ways in which we're using it at the moment. I think agentic AI is only going to further explode the opportunities for us as we begin to think about how we can use that 24/7, always-on task focus to make more proactive, autonomous savings in efficiency and time. 106 00:39:50.350 --> 00:39:51.000 Debbie McVitty: Mm. 107 00:39:51.310 --> 00:40:08.430 Debbie McVitty: I think, I mean, Paul, I'm definitely going to come back to you to chat about, I guess, the sort of conditions for making something like this really work in a team in a university. But I'd like to bring Maëlle on now to chat a bit more about some of the things that are coming out in the chat box. 108 00:40:09.180 --> 00:40:28.580 Debbie McVitty: These webinars get confusing, with chatbots and chat boxes. Things that are coming out of the chat box: people are sort of trying to understand how this works in practice, and I guess what sort of guardrails you put around it, and that sort of thing. So actually, Maëlle, let's start with chatbots, because one of the things I think people might sort of 109 00:40:28.760 --> 00:40:42.299 Debbie McVitty: struggle with, and I certainly do, is: what's the difference between, I guess, the sort of traditional chatbot and something like an autonomous agent that can do chatbotty-type things, but perhaps in a more exciting way? 110 00:40:42.300 --> 00:41:11.799 Maëlle Lavenant: That's a great question.
And one we often get asked, actually: is an autonomous agent just another chatbot? And I think the answer is no, and I'll explain. Paul just walked us through the different waves of AI: the 1st was predictive AI, the next generative AI, and we are surfing right now the agentic wave, and each wave has brought new capabilities to support recruiters and admissions staff more effectively. Chatbots belong more to the predictive wave, 111 00:41:12.110 --> 00:41:18.149 Maëlle Lavenant: and while they are useful, they are also sometimes limited, because they 112 00:41:18.360 --> 00:41:34.869 Maëlle Lavenant: are great at handling simple tasks and repetitive queries; they will simulate conversation, and they will deliver consistent, standardized responses to your prospective students. For instance: what is my application deadline, or where can I find my course catalog? 113 00:41:34.940 --> 00:42:02.889 Maëlle Lavenant: But they do have limitations. They can't handle complexity, and will struggle with nuanced, I would say, vocabulary or languages sometimes, or unscripted questions. For instance, if you have an international student, let's say: I'm an international student with a 3-year undergraduate degree and work experience; which program do you have for me? Well, the chatbot will often fall short, because it won't be 114 00:42:02.910 --> 00:42:11.170 Maëlle Lavenant: set up or programmed to answer this question in a personalized way. And this is where autonomous agents 115 00:42:11.480 --> 00:42:28.979 Maëlle Lavenant: truly shine, because, unlike chatbots, those autonomous agents are capable of making decisions instantly and taking action without human intervention. They aren't just another tool; they are truly transformative in the way that they will allow teams to do more with less, and they will 116 00:42:29.310 --> 00:42:54.850 Maëlle Lavenant: really act as an extension of your team.
And I'm just quoting Paul from a previous webinar, where he was saying that AI was like having an additional member of your team on board. It's a new digital workforce that can step in whenever humans can't scale. And if you think about application periods like the one we are about to enter, when there are a lot of queries flooding in, but the recruitment and admissions 117 00:42:54.890 --> 00:43:13.529 Maëlle Lavenant: office isn't any bigger, or will struggle to recruit new hands on board, agents can step in and just handle those interactions and answer the questions on the fly, because they can manage complex and nuanced conversations. For instance, 118 00:43:13.890 --> 00:43:36.709 Maëlle Lavenant: if a student is transferring from another institution, the agent can dynamically adjust and provide tailored guidance based on their profile, because it will have the contextual information and the memory to continue the conversation, so it eventually achieves the best outcome for the student's questions. 119 00:43:37.100 --> 00:43:53.800 Maëlle Lavenant: And it works around the clock: it doesn't stop at 7, it doesn't stop at 9 PM; it can even work at 1 AM and answer questions for students wherever they are, whenever they have their query. And that's really, really 120 00:43:54.380 --> 00:44:21.770 Maëlle Lavenant: helping teams focus on things other than inquiries, rather than every morning scheduling an hour to handle all of the cases that came in, or the questions that they had, and instead focus on building those deep relationships that Paul was just talking about.
I think autonomous agents will reduce the workload 121 00:44:21.980 --> 00:44:28.749 Maëlle Lavenant: of recruiters and admissions officers, too, and ensure that no student feels overlooked, 122 00:44:29.180 --> 00:44:39.190 Maëlle Lavenant: and that staff truly focus on what matters, which can be building relationships, or brand advocacy, or even application decisioning. 123 00:44:39.380 --> 00:44:44.500 Maëlle Lavenant: That's, I think, where the dynamic will make a big difference. 124 00:44:48.100 --> 00:44:49.340 Debbie McVitty: Brilliant. I think, I mean, 125 00:44:49.430 --> 00:44:58.350 Debbie McVitty: the chat that's unfolding in the chat box right now, if I was to sort of parse it out: I think it is inevitable that 126 00:44:58.400 --> 00:45:21.480 Debbie McVitty: everybody wants to understand the ethics of the choices that are being made when we deploy technology, particularly in enabling interaction. And I think the idea of being autonomous is particularly challenging, because I think there's always that worry that, you know, the robot will go off and do a thing, and it'll sort of be out of your control. 127 00:45:21.590 --> 00:45:37.659 Debbie McVitty: And I wonder, can you chat a little bit, I guess, about that kind of machine-human interface in the context of autonomous agents, and I guess what the limits are? Because we're not talking about them going off and deploying and delivering your entire marketing strategy; we're 128 00:45:37.660 --> 00:45:38.159 Maëlle Lavenant: Thinking about the. 129 00:45:38.160 --> 00:45:41.220 Debbie McVitty: doing quite well-defined tasks. 130 00:45:41.220 --> 00:45:41.580 Debbie McVitty: Yeah, 131 00:45:41.580 --> 00:45:55.370 Debbie McVitty: but also just that broader ethical picture from a Salesforce perspective. I mean, I know, as a company,
you've obviously had to think about this in a lot of depth. So what guardrails do you put around your technology? 132 00:45:55.370 --> 00:46:03.159 Maëlle Lavenant: Yeah, this is a crucial question to address, and one that needs addressing before you even start, to be honest. And I think 133 00:46:04.250 --> 00:46:07.325 Maëlle Lavenant: the answer is twofold, 134 00:46:07.980 --> 00:46:29.780 Maëlle Lavenant: because, and I'll speak just for Salesforce and what we currently provide to our customers, there is what we provide, and then there is what institutions are doing with it. First, if you look at Agentforce, which is our platform to build and deploy autonomous agents, 135 00:46:29.930 --> 00:46:51.140 Maëlle Lavenant: it's powered by a reasoning engine. You can think of a reasoning engine as the brain behind the agent. Ours is called Atlas, the Atlas reasoning engine, and it reasons over your data and business processes until it is confident it knows how to accomplish your goal. 136 00:46:51.520 --> 00:47:13.900 Maëlle Lavenant: The reasoning engine will generate a plan, evaluate it, refine it, and figure out what structured or unstructured data it needs to pull, from Data Cloud in our case, to achieve the goal. And it will take action across the entire ecosystem, while ensuring all the policies and guardrails are respected. 137 00:47:14.280 --> 00:47:16.839 Maëlle Lavenant: And the best part is that 138 00:47:17.180 --> 00:47:27.850 Maëlle Lavenant: it learns as it's being used. It learns from outcomes, feeding new information back into the system, so it gets smarter with each interaction. So that's, 139 00:47:27.960 --> 00:47:30.000 Maëlle Lavenant: on the one hand,
140 00:47:30.660 --> 00:47:35.920 Maëlle Lavenant: one of the guardrails that Salesforce provides on top of all the AI 141 00:47:36.380 --> 00:47:55.369 Maëlle Lavenant: architecture of the platform. For instance, I think I saw someone asking in the chat earlier how we ensure that the data isn't used to train a large language model. Well, there are mechanisms embedded in the Salesforce platform to avoid that: 142 00:47:55.560 --> 00:48:10.740 Maëlle Lavenant: data masking, secure data retrieval, and zero-data-retention mechanisms that ensure the data remains yours, the prompts remain yours, and nothing is used 143 00:48:11.090 --> 00:48:17.629 Maëlle Lavenant: to train a large language model, which I can understand can raise 144 00:48:18.030 --> 00:48:31.360 Maëlle Lavenant: questions or concerns for institutions. But the next part, which to me is very critical, is that whenever we work with a customer implementing agents, we always 145 00:48:31.670 --> 00:48:55.009 Maëlle Lavenant: tell them to follow five simple key steps so they can create an effective and responsible agent, and I will list them for the attendees to bear in mind, because they are pretty simple. The first is to define the agent's role: what do you want it to do, and who will it serve? 146 00:48:55.270 --> 00:49:20.340 Maëlle Lavenant: This is a foundational step to ensure the agent is purpose-built and aligned with the organization's goals. The second is to ensure it has access to the right data, so you define which data it can draw from to deliver meaningful interactions and accurate responses. 147 00:49:21.090 --> 00:49:23.969 Maëlle Lavenant: The third is: what actions will it be able to take? 148 00:49:24.070 --> 00:49:52.200 Maëlle Lavenant: It can go beyond simply answering questions.
For example, based on the information a prospective student provides, it can invite them to register for a campus tour, or ask them for information about their academic record. You set all of this up before deploying: in our Agent Builder you define the 149 00:49:52.200 --> 00:50:03.739 Maëlle Lavenant: different actions the agent will be able to take, and those will be the only ones it will perform. Then you have what we call the guardrails. Establishing the guardrails is the fourth 150 00:50:03.900 --> 00:50:13.080 Maëlle Lavenant: step we ask our customers to take: define what it should do and what it shouldn't do. For example, 151 00:50:13.340 --> 00:50:14.982 Maëlle Lavenant: an 152 00:50:16.070 --> 00:50:44.610 Maëlle Lavenant: agent can help a student find course information. But if a student is deciding on a major, or considering taking time off, the agent can escalate to an admissions officer or someone else at the university to explore further, rather than give a scripted answer the way a chatbot probably would have, or just say, "Sorry, I can't answer your question, let me connect you to a real human." And finally, 153 00:50:44.700 --> 00:50:57.360 Maëlle Lavenant: the last one is to identify the channel in which it will operate. Will it be the website? Will it be your CRM? Will it be over email? There are many platforms 154 00:50:57.490 --> 00:50:58.830 Maëlle Lavenant: out there 155 00:50:58.960 --> 00:51:06.160 Maëlle Lavenant: that universities and institutions use to interact with students, and you can define on which ones your agent will operate. 156 00:51:06.340 --> 00:51:07.850 Maëlle Lavenant: And that's, I think, 157 00:51:08.280 --> 00:51:21.129 Maëlle Lavenant: by following those steps,
That's how you ensure that autonomous technology is responsible, valuable, and aligned with your institution's goals and the key use cases where it can deliver value. 158 00:51:22.620 --> 00:51:35.830 Debbie McVitty: Fantastic. We've got some more of those contextual questions coming through now, which is timely, because that's where we want to take the conversation. So, Paul, why don't you come back and join us? Let me spotlight you back in, and 159 00:51:37.220 --> 00:51:54.120 Debbie McVitty: we've got a poll out there for you to tell us where your thinking is, so do fill that in. Have a little think about where you're up to with this conversation, and we'll get the mood of the group. But, 160 00:51:54.410 --> 00:51:55.270 Debbie McVitty: Paul, 161 00:51:56.530 --> 00:52:16.110 Debbie McVitty: on that question about implementing something like this: you've obviously, as an institution, been on quite a journey with deployment, and one of the things that has come up in the chat box, and that we're really mindful of, is that if you're going to deploy AI in any way, you need your data to be in good shape, right? 162 00:52:16.110 --> 00:52:16.570 Paul Napleton: Good evening. 163 00:52:16.570 --> 00:52:33.590 Debbie McVitty: That's fundamental. And you talked about pulling in data from lots of disparate places, and of course lots of universities really struggle with that: siloed data, platforms not talking to each other, and all the rest of it. So what conditions would you say need to be in place for 164 00:52:33.590 --> 00:52:34.040 Paul Napleton: Yeah. 165 00:52:34.040 --> 00:52:35.729 Debbie McVitty: something like this to actually be realized?
166 00:52:35.880 --> 00:52:49.619 Paul Napleton: I think it's a really good question, and if there's ever a year for this, 2025 should be the year to really try and bust those silos that can exist across institutions. Because 167 00:52:50.150 --> 00:53:13.079 Paul Napleton: there are loads of examples, in probably any institution, of legacy systems: we're still working off spreadsheets and Word documents when the internet exists, the cloud exists, agentic AI exists, and so on. So there are lots of legacy systems that don't talk to each other, and people are at different levels of digital skills and understanding. So I think it comes back to having a 168 00:53:13.390 --> 00:53:31.110 Paul Napleton: central CRM or customer data platform, and having those trusted partners you can work with. I think that's the first step. We work really closely with Salesforce, and that's been a core part of our success. But then you build that ecosystem around it, and you try to 169 00:53:31.350 --> 00:53:46.810 Paul Napleton: bring people with you on the journey. I mentioned crawl, walk, run, but it's more than that. It's about having a combination of vision, the skills that you need, the technology itself, the platform, and then an action plan. 170 00:53:46.840 --> 00:54:12.040 Paul Napleton: And those four things are really quite crucial. If you don't have a vision, there's just confusion: without that North Star, without a digital transformation strategy, there's confusion. If you don't have the skills, if you don't invest in your people and in the skills and the training, that leads to anxiety and worry: am I doing this right? Can I trust it? Is the data correct? And so on.
171 00:54:12.040 --> 00:54:19.459 Paul Napleton: If you don't have the technology, then of course you're going to fall behind, and progress is going to be slow. 172 00:54:19.460 --> 00:54:38.180 Paul Napleton: And if you don't have an action plan, if you don't actually start doing stuff, then there's no payoff and you don't begin to build the snowball. So I think those four things are a really useful shorthand for how you can accelerate time to value when it comes to AI and digital transformation, in my opinion. 173 00:54:39.140 --> 00:54:56.759 Debbie McVitty: And we have some chatter going on in the box about the sustainability dimension of this, because if we're talking about using AI to a greater and greater extent, obviously there are environmental implications. You talked about those North Star principles. How are you 174 00:54:56.760 --> 00:54:57.300 Paul Napleton: Yeah. 175 00:54:57.300 --> 00:54:58.890 Debbie McVitty: approaching the sustainability challenge at UEA? 176 00:54:59.070 --> 00:55:02.489 Paul Napleton: I mean, again, if you stop thinking about it 177 00:55:02.830 --> 00:55:17.320 Paul Napleton: just from an admissions and marketing perspective, it's one thing; if you start thinking from a holistic, all-of-university perspective, it's another. But like everyone, we're concerned about sustainability; it's a really important thing. And I think 178 00:55:17.320 --> 00:55:41.130 Paul Napleton: that absolutely should be one of the principles before you do anything. Like I said, we have key principles that we look to adhere to. But when it comes to sustainability, I think sometimes we can offset some of the wastage we would see elsewhere if we weren't using AI. I'm not saying it's completely equal,
but I think we have to be aware that there are both positives and negatives in anything we decide to do. Without AI 179 00:55:41.130 --> 00:56:00.399 Paul Napleton: there's wastage, inefficiency, and wasted spend as well. But you absolutely should be focusing on sustainability and the true cost of all of this, alongside the other key things, hallucinations, bias, and everything else, and thinking about it holistically and ethically before you do anything is really important, I think, 180 00:56:01.180 --> 00:56:25.690 Paul Napleton: and aligning it with the other sustainability goals you have at an institutional level as well. Sometimes AI just gets discussed over here, or in certain teams, and I'm a big believer in saying: look, this is transformational technology, we really should be thinking about this and integrating it across all avenues of the university. Not everyone's in the same boat, but I think that's where we're beginning to head. 181 00:56:26.170 --> 00:56:36.700 Debbie McVitty: And in terms of supporting people: one of the real goals and aspirations here is freeing up time for humans to do more human things, right? 182 00:56:37.050 --> 00:56:42.709 Debbie McVitty: So I guess there are two dimensions to this. One is making that 183 00:56:43.440 --> 00:57:07.700 Debbie McVitty: decision about what a human needs to be able to see to make sure the autonomous agents are doing what they're supposed to do, and skilling people up to do that effectively, to give the prompts and set the agent up to go and do its business. The other, I suppose, is your aspiration for what humans might be doing if they didn't have to do some of that other stuff.
184 00:57:07.700 --> 00:57:31.029 Paul Napleton: Exactly. Going back to what we said earlier, as Maëlle mentioned, it's like having an extra member or two on your team, and that's really exciting, because I'm sure if you asked everyone on this call whether they'd like an extra couple of pairs of hands, people who could help on a day-to-day basis, everyone would say yeah, great. And AI is like that. But I think we have to be mindful 185 00:57:31.030 --> 00:57:53.960 Paul Napleton: that if we're removing monotony, if we're spotting opportunities early and so on, some of that might take a bit of time to set up initially, if we're being honest, to get it all right. The transformation might take a bit of time to establish. No one's sitting here as an expert in agents, necessarily; you have to try these things, and it's so much easier than ever before when you use tools like Salesforce and Agentforce, 186 00:57:53.960 --> 00:58:18.050 Paul Napleton: and that's the beauty of it. But you still need some time to work it all out, because no one is really massively leading the way in higher education yet, in my opinion. So you need to put in that time initially. But then afterwards you're going to save time, save efficiencies, save budget, take monotony out, and people will start, perhaps, having more time for that added-value piece, which is incredibly motivating, I think, for people: 187 00:58:18.050 --> 00:58:26.520 Paul Napleton: to be able to ask that "what if" question, to encourage curiosity. I think that's always quite motivating for teams. 188 00:58:26.520 --> 00:58:41.799 Paul Napleton: So the way I often pitch it is: have a growth mindset, approach technology with a growth mindset. You're going to fail sometimes, it's not always going to work, but fail forward.
We learn, we move again, and it becomes even better next time. 189 00:58:43.080 --> 00:59:09.019 Debbie McVitty: When people come to Salesforce and say they want to deploy autonomous agents, what, generally speaking, in your experience, are the problems they're trying to solve? I'm going to go into the answers to the poll now and pull out some of that, particularly those questions about what is actually taking up lots of people's time. And I suppose the subtext of the question is a little bit: 190 00:59:09.020 --> 00:59:18.000 Debbie McVitty: are people trying to solve the problems of today, or are they thinking a bit more about the problems of tomorrow as well? 191 00:59:18.000 --> 00:59:18.600 Maëlle Lavenant: Yes. 192 00:59:18.600 --> 00:59:33.580 Debbie McVitty: There are lots of things that could probably be fixed straight out of the gate, but there's also potential, as Paul was saying, to perhaps rethink the process in its entirety. So where are people's heads at with that? And while you answer, I'm going to go and take a look at these answers. 193 00:59:34.290 --> 00:59:57.100 Maëlle Lavenant: I mean, if we look at recruitment and admissions teams, and I love how you explained the crawl, walk, run approach, Paul, I think you have to start with the present and act on what you can control. To your point, Debbie, when you deploy those autonomous agents, when we're 194 00:59:57.330 --> 01:00:12.359 Maëlle Lavenant: talking with our customers, the one thing we hear time and again is the incredible opportunity agents represent to completely transform the request-for-information process.
195 01:00:12.380 --> 01:00:32.360 Maëlle Lavenant: And that's why we will be launching our student recruitment agent in February, to make this vision a reality, so institutions can immediately deliver value to prospective students who need answers fast, who don't want to wait on hold, who want empathetic service and 196 01:00:32.670 --> 01:00:41.079 Maëlle Lavenant: full information summarized at their fingertips quickly. That being said, to Paul's point, 197 01:00:41.440 --> 01:00:49.650 Maëlle Lavenant: and to your question about how to make staff work more valuable and more interesting, 198 01:00:49.850 --> 01:01:10.950 Maëlle Lavenant: we can also look at agents internally. Think about the marketers in your team who are building an outreach campaign and need to dive deep into low-funnel data to understand their different segments and how to engage with the various students. I think 199 01:01:11.230 --> 01:01:33.630 Maëlle Lavenant: today there are still many institutions where people jump from one platform to another and try to juggle different environments to do their work. Well, if they have their autonomous agent right in their CRM, they can just ask it for the information, and assuming it's set up and 200 01:01:33.900 --> 01:01:55.200 Maëlle Lavenant: built the right way, following the steps we talked about, it can provide that information and accelerate the game, giving them more time to work on creative, on messaging, and on strengthening their brand with a human identity. 201 01:01:55.320 --> 01:02:05.649 Maëlle Lavenant: There's another case I wanted to share, around application experiences. 202 01:02:06.050 --> 01:02:20.179 Maëlle Lavenant: I wouldn't be able to quote the source,
but I recall that about 25% of applicants don't finish their application because the process is too 203 01:02:20.470 --> 01:02:35.019 Maëlle Lavenant: cumbersome, so they never end up completing it. An agent can help streamline this process as well, and orient the applicant 204 01:02:35.360 --> 01:02:54.980 Maëlle Lavenant: as they progress through the steps, explaining what needs to happen. And when you think about international students, it can be really overwhelming to complete all the forms, gather all the documents, and keep track of all the deadlines and time zones to submit everything on time, 205 01:02:54.980 --> 01:03:07.239 Maëlle Lavenant: and with language on top of that, it can be very complicated. An agent can step in and provide the personalized support that reduces the anxiety and makes the process 206 01:03:07.290 --> 01:03:13.190 Maëlle Lavenant: smoother and far more accessible. So I think there are definitely 207 01:03:13.420 --> 01:03:30.489 Maëlle Lavenant: key use cases that come to mind. For me, the first one is the request for information. Actually, we have one customer who has been deploying it and speaking about it in the news recently, Unity Environmental University, 208 01:03:30.710 --> 01:04:00.610 Maëlle Lavenant: which has been using Salesforce to recruit its cohorts. They implemented an agent on their website that was able to answer questions about programs and how to apply, and it has saved staff days, even weeks, in being able to respond to students and to those surges in inquiries when applications flood in. 209 01:04:00.840 --> 01:04:03.010 Maëlle Lavenant: So I think 210 01:04:04.030 --> 01:04:14.860 Maëlle Lavenant: it can be student-facing, but it can also be internal or staff-facing as well.
And here, I mean, 211 01:04:15.080 --> 01:04:41.079 Maëlle Lavenant: I was just chatting, actually, with an IT agent at Salesforce. We have deployed an agent internally for IT that helps us navigate all the different tech glitches or difficulties we may have, or password resets or whatnot, and we can interact with it live and have a natural conversation. So there are many, many use cases 212 01:04:41.110 --> 01:04:51.240 Maëlle Lavenant: that autonomous agents can address. I hope that helps shed some light on some of them; happy to 213 01:04:51.390 --> 01:04:52.959 Maëlle Lavenant: talk more about it. 214 01:04:53.370 --> 01:05:18.590 Debbie McVitty: I've now unpacked the responses to question 4 of the poll, and what's coming out of the poll, I think, is quite interesting. Inevitably, for this audience there's quite a high level of interest in the topic of autonomous agents; otherwise you don't sign up for the webinar. But the majority, 60% or so, think they'll probably be deploying autonomous agents within the next one to three years. 215 01:05:18.630 --> 01:05:37.749 Debbie McVitty: But looking at the sorts of things that are taking up people's time in recruitment and admissions offices, and I appreciate that not everyone on the call works in that space, but there's almost certainly resonance, I can identify four things, and I would test with you, Maëlle, whether these align with the capabilities 216 01:05:37.750 --> 01:05:50.059 Debbie McVitty: of an autonomous agent. The first one, obviously, we've talked about: inquiries. People asking "What fees am I eligible for?", those really very routine inquiries.
217 01:05:50.150 --> 01:06:13.840 Debbie McVitty: But another one we've probably touched on is scheduling interviews, if that's the environment you're working in, where you're doing a lot of work with individuals, trying to be at a particular time and place. There are two more that I think are also relevant. One is screening applications for eligibility and completeness. So 218 01:06:13.840 --> 01:06:27.940 Debbie McVitty: if the application isn't complete, you have to check whether all the bits are there, but also whether the candidate is actually eligible. These are the sorts of things, presumably, that you could easily deploy AI to tell you without having to do it manually, 219 01:06:29.160 --> 01:06:42.189 Debbie McVitty: and that have genuinely no bearing on the applicant's experience of feeling that they've been seen by a human, because it's just routine checks. But am I correct in saying that, or am I jumping to conclusions there? 220 01:06:42.190 --> 01:07:02.480 Maëlle Lavenant: No, no, you're correct. I think handling inquiries will be the game changer next year, for sure. Actually, one question we haven't asked our attendees is how many inquiries they receive per day, and how many people they have in their team 221 01:07:02.480 --> 01:07:16.320 Maëlle Lavenant: to tackle them. When you look at the ratio, it can be pretty overwhelming just to realize there are not so many hands. So managing inquiries is definitely an area where agents 222 01:07:16.530 --> 01:07:18.680 Maëlle Lavenant: will shine.
223 01:07:18.920 --> 01:07:30.980 Maëlle Lavenant: And next, to your point, there are areas where agents will help accelerate the game and boost efficiency and productivity, because they will 224 01:07:31.210 --> 01:07:37.119 Maëlle Lavenant: reduce the number of repetitive tasks that don't really add value 225 01:07:37.830 --> 01:07:59.269 Maëlle Lavenant: and won't really be gratifying for staff to do constantly. Checking a test score against a criterion is pretty easy, pretty straightforward, and you can program that pretty easily by giving a simple prompt in natural language to your agent, so it can do that seamlessly 226 01:07:59.540 --> 01:08:01.723 Maëlle Lavenant: on your behalf. 227 01:08:02.450 --> 01:08:16.030 Maëlle Lavenant: And yes, for the two other situations you raised, when it comes to application support too, that's exactly where agents will be able to help. 228 01:08:16.550 --> 01:08:33.810 Debbie McVitty: I've got one more, which I thought was quite interesting, which is chasing applicants. So presumably, having determined that an applicant is eligible but their form is incomplete, you then want to follow up with them and say: hello, can you send me the other bit of information? Your portfolio hasn't come through, or whatever it is. 229 01:08:34.330 --> 01:08:37.069 Debbie McVitty: Could you train an agent to do that for you? 230 01:08:37.620 --> 01:08:41.640 Maëlle Lavenant: You don't have to train an agent; that's actually something I did. 231 01:08:41.640 --> 01:08:42.510 Maëlle Lavenant: But at least... 232 01:08:42.510 --> 01:08:46.540 Debbie McVitty: So it doesn't require technology quite that advanced? 233 01:08:46.540 --> 01:09:15.319 Maëlle Lavenant: No. I mean, speaking for Salesforce, if you are leveraging our platform to build and deploy your agent,
you don't have to train the agent. You just build the prompt and define, via those five steps, what it's supposed to do, or leverage some of our out-of-the-box template agents, the suite of agents we already have available, and enable them. 234 01:09:15.590 --> 01:09:21.850 Maëlle Lavenant: I mean, 235 01:09:22.470 --> 01:09:39.329 Maëlle Lavenant: I think the question also flags the importance of automation here. Depending on the behavior, interaction, and engagement a prospective student has with your institution, you can trigger workflows to 236 01:09:39.380 --> 01:09:56.870 Maëlle Lavenant: engage with them or send a personalized email. You can have an agent working in the background, sure, but you can also already do that, and Paul will probably confirm this, with Marketing Cloud and marketing automation platforms, because they can 237 01:09:57.850 --> 01:10:05.740 Maëlle Lavenant: read against an engagement score and define what action to take next. So 238 01:10:06.810 --> 01:10:08.900 Maëlle Lavenant: yeah, you don't even need an agent. 239 01:10:08.900 --> 01:10:28.400 Debbie McVitty: Yeah, so you could absolutely build that kind of process into it. And Paul, just to conclude, coming back to that question about the conditions for deploying this kind of thing: we talked a little bit about data, but one of the other things is that you need to be quite clear about 240 01:10:28.550 --> 01:10:38.089 Debbie McVitty: what your strategy is and what those processes look like, in terms of okay, if this, then that. And as a human being you still really need to have a grip on those things, don't you? 241 01:10:38.460 --> 01:11:01.959 Paul Napleton: Absolutely.
And if anything, digital transformation and AI generally just help you focus on the things you need to do anyway. So in terms of really focusing on getting your data right, and building that transparency and added value in how you collect data: at the end of the day, a lot of the data is our customers' data, and they choose to share it with us. Thinking about it that way is really important, and we should be doing that anyway. 242 01:11:02.020 --> 01:11:26.289 Paul Napleton: We should be creating content that's engaging, relevant, and superb. We should be looking to build personal, one-to-one relationships. But the thing is, by looking at this in the round, as part of digital transformation, you've got a chance to influence multiple parts of that ecosystem I talked about earlier, and have it all in one place. So you've got your clear North Star, your vision, your strategy, your principles. 243 01:11:26.420 --> 01:11:44.940 Paul Napleton: These are all things you can build solid foundations on, and then start incrementally, crawl, walk, run again, so you know you're heading in the right direction. That's the approach I've taken at UEA, how I work with the team and how we're encouraged to think, and I think that certainly helps, because this is such a fast-moving space. 244 01:11:44.970 --> 01:12:06.180 Paul Napleton: This is a really fascinating time to live through, because how things are done is changing, in terms of the enablers and the technology and so on. There's a US AI writer I like called Eliezer Yudkowsky, and there's a great quote of his: "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it." 245 01:12:06.230 --> 01:12:27.089 Paul Napleton: I talked about those waves earlier, and technology is only going to improve and change.
And we have to jump on board and start trialling some of this, failing forward, but having those guardrails, as Maëlle says, having those clear principles, and working with trusted partners. If you do that, and you have vision, skills, technology, and an action plan 246 01:12:27.290 --> 01:12:30.250 Paul Napleton: in the right kind of balance, then I think you're onto a winner. 247 01:12:30.250 --> 01:12:54.220 Debbie McVitty: Yeah, and going back to that broader picture of change: that ability as an organization to test something on a small scale, check you're comfortable with it, have the conversation, then try it on a larger scale and move forward is just really important. We're not going to be able to say it's going to do this, this, and this straight out of the gate. It's going to be much more about how, as an organization, you pick it up, test it, and develop those internal capabilities, isn't it? 248 01:12:54.800 --> 01:13:12.570 Debbie McVitty: I'm sure we could talk about this for much longer, but we are out of time. Paul, Maëlle, thank you so much for joining us; it's been a really fascinating conversation. Thank you to all of you out there for coming on the call today. We will be sharing the slides and the recording of the whole conversation, as well as the 249 01:13:12.570 --> 01:13:26.549 Debbie McVitty: chat box transcript, so you can see how the conversation panned out, as well as the poll results, of course. And do get in touch if you have any feedback or questions. It's been a pleasure talking to you all today. Thank you so much, and have a lovely day. 250 01:13:26.790 --> 01:13:27.250 Maëlle Lavenant: Take care! 251 01:13:27.250 --> 01:13:28.023 Paul Napleton: Thank you.
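The five setup steps Maëlle walks through earlier in the session (define the role, grant the right data access, list the allowed actions, set the guardrails, choose the channels) can be sketched as a small configuration object. This is a hypothetical illustration only: every name and the whole schema below are invented for the sketch and are not part of Salesforce Agentforce, where these choices are made declaratively in Agent Builder rather than in code.

```python
# Hypothetical sketch of the five agent-setup steps discussed in the webinar.
# All identifiers are invented for illustration.
from dataclasses import dataclass


@dataclass
class AgentDefinition:
    role: str                   # step 1: what the agent does, and who it serves
    data_sources: list[str]     # step 2: which data it may draw from
    actions: list[str]          # step 3: the only actions it may perform
    escalate_topics: list[str]  # step 4: guardrails, hand these off to a human
    channels: list[str]         # step 5: where it operates

    def handle(self, topic: str, action: str) -> str:
        """Route a request: escalate guarded topics, reject undefined actions."""
        if topic in self.escalate_topics:
            return "escalate_to_human"
        if action not in self.actions:
            return "action_not_permitted"
        return action


recruitment_agent = AgentDefinition(
    role="answer prospective-student questions about programmes and applications",
    data_sources=["course_catalogue", "admissions_faq"],
    actions=["answer_question", "book_campus_tour", "request_documents"],
    escalate_topics=["choosing_a_major", "taking_time_off"],
    channels=["website", "email"],
)

# A routine fees question stays with the agent; a major-choice question
# is escalated even though answering it would otherwise be an allowed action.
print(recruitment_agent.handle("tuition_fees", "answer_question"))
print(recruitment_agent.handle("choosing_a_major", "answer_question"))
```

The point of the sketch is the distinction the speakers draw: the action list bounds what the agent can ever do, while the guardrails route sensitive topics to a human regardless of which action was requested.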