
Your AI questions, answered

Looking for clear-cut answers to your AI questions? Straightforward answers help you make the most informed choices for your organization. We are committed to offering you authoritative insights and impartial analysis.

Graham Glass, CEO and Founder of CYPHER Learning, provides his candid responses to AI questions.

Can a managed AI do a better job than open ChatGPT?

Check out Graham’s latest answer to this question.

Data privacy and security

or read the answer:

The answer really depends on how you are sharing that data.

So for example, OpenAI is very upfront that if you're using the free version of ChatGPT and you're copying and pasting your data into its buffer, so to speak, then that data is going to be used to train OpenAI's models. So you don't want to share important proprietary data with the free version of ChatGPT.

That being said, they're also upfront that if you're using the enterprise version of ChatGPT, or if you are sharing data via their APIs, your data is not used for training and is not stored anywhere; it's just transient and will be destroyed.

At CYPHER Learning, we interface with AIs through these APIs, and we only do that when the vendor that owns the API confirms the data is completely private. So if you're using a system like the CYPHER platform, which uses the APIs, you do not have to worry about your private data being shared with anyone.
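
To make the distinction concrete, here is a minimal sketch of sending data through the API route rather than the consumer app. It assumes the official openai Python client and an OPENAI_API_KEY environment variable; per OpenAI's stated policy, API inputs are not used for training by default.

```python
# Sketch: calling the OpenAI API directly, the integration path described above.
# Assumes the official `openai` client (pip install openai) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a course-planning assistant."},
        {"role": "user", "content": "Summarize this internal document: ..."},
    ],
)
print(response.choices[0].message.content)
```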

or read the answer:

The first thing is that the accuracy of AI is rapidly increasing. We've all heard of the so-called hallucination effect, where the AI sometimes gets the answer wrong or just makes up an answer that sounds very convincing but is in fact incorrect. Humans, by the way, do the same thing, so let's acknowledge that this is not unique to AIs; it's a general issue with any information source. That being said, GPT-3.5, as an example, hallucinates more often than GPT-4, and OpenAI has stated publicly that they believe future iterations are going to get more and more accurate. So the first thing is that I do think the error rate of AI is going to go down dramatically over the next year or so.

The second thing is, let's face it, we don't want to be silly. Just because an AI gives you something doesn't mean it's perfect. Generally speaking, you should apply the same kind of critical thinking and cross-checking to information from an AI that you would apply to information from the web or from another human being. So basically, put on your critical thinking hat.

In the CYPHER Learning platform, when we're creating educational content using AI 360 with Copilot, we are very honest and upfront: AIs do sometimes make mistakes, and you have to review the information. We're not trying to convince anyone that everything you produce using an AI is perfect.

We are, however, doing some things to reduce the probability of a hallucination and to make it easier for human reviewers to find possible errors.

Something called AI Crosscheck, which will be coming out fairly soon from CYPHER Learning, will use more than one AI: one AI prechecks the accuracy of the information coming out of the other. Once again, that doesn't guarantee the result is 100% accurate, but it will tend to minimize errors.
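
The internals of AI Crosscheck aren't public, but the general pattern is easy to sketch: one model drafts, a second independent model reviews. In this minimal sketch, generate is a stand-in stub for a real LLM API call, and the model names are illustrative, not CYPHER's actual stack.

```python
# Sketch of the cross-check pattern: a second AI scrubs the first AI's output.
# `generate` is a stub standing in for a real LLM call; model names are made up.

def generate(model: str, prompt: str) -> str:
    # In a real system this would call the model's API.
    if model == "writer-llm":
        return "Photosynthesis converts sunlight, water, and CO2 into glucose."
    return "OK"  # canned reviewer verdict for this runnable example

def crosscheck(content: str) -> str:
    review_prompt = (
        "Review the following content for factual errors. "
        "List any suspect claims, or reply 'OK':\n\n" + content
    )
    return generate("reviewer-llm", review_prompt)

draft = generate("writer-llm", "Write one sentence about photosynthesis.")
verdict = crosscheck(draft)
if verdict.strip() != "OK":
    print("Flag for human review:", verdict)
else:
    print("Reviewer found no obvious issues; still worth a human pass.")
```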

So I think, generally speaking, AIs are just like any other information source: you've got to be careful and you've got to check them, but they are going to get more and more accurate quite quickly.

or read the answer:

99% of the time, no, but 1% of the time, maybe.

A quick review of how AIs actually work: they do not copy and paste all the content on the internet into their digital brain and then regurgitate it. What they do is review huge amounts of content, extract the essence of that content, and codify it as numbers.

So the AI almost certainly could never reproduce exactly what it read in the first place, because that's not how it stores information. Much like a human being, it generates fresh information from everything it has seen. Anyone who has used something like ChatGPT knows that every time you ask it a question, it gives you a slightly different answer, and that is one example of how it is not copying and pasting.

One of the things that OpenAI and Microsoft have done is announce something called "Copyright Shield", which underscores their confidence in, and commitment to, protecting the output of AI against copyright issues. So if you're using one of their products and someone comes after you and says you violated a copyright, Microsoft or OpenAI will actually come to your legal defense, which is a really great move. So generally, you can use the output in confidence, knowing that it's not copied and pasted from any particular source. That covers 99% of cases.

The 1% of cases that I think is worth acknowledging is that there are some content creators, especially artists, who have spent years crafting a very specific, very unique identity. Picasso is an obvious example from the past, but there are people doing this today. If an image-generation AI learned all about Picasso and then started generating 100,000 Picassos using his specific, crafted identity, that almost certainly would be a case for a copyright violation.

So there are very, very few vertical cases where you do need to be careful, but in at least 99% of cases, you don't have to worry at all.

or read the answer:

Sometimes, if you give it a high degree of autonomy. If you have an agent in a box, you give it limited access to various proprietary data, and you don't give it access to your wallet, then what is the risk? There's really not much; it's just like regular AI. But if you say, "I'm going to give you my credit card number, my security code, my driver's license number," hand over all that information, and let it loose, then yes, there are obviously security concerns - one of which is that it might empty your bank account. So we are focused on the style of agent where you don't give it a lot of personal information and it cannot do irreversible things on your behalf.

By ethical, what we really mean is not using it for destructive, nefarious purposes. The most obvious approach to this is guardrails, which does assume you have some central place where you can apply guardrails across all your enterprise use of AI. For example, in an upcoming learner agent, an administrator can go in and put in a guardrail that doesn't allow people to request learning about anything that's not directly related to their job title. That's a simple one. Or, in the case of K-20, you don't want them learning about building bombs and guns and so on, so obviously you would put in a guardrail for that. You can also have standards within your company and HR training about how to use AI ethically. But from a pure technical perspective, having a product that allows you to put in guardrails is, I think, a must-have.
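
To illustrate the idea of a central control point, here is a minimal sketch of a topic guardrail applied before any request reaches the underlying AI. Real products would use classifiers rather than keyword lists; the topic sets below are illustrative, not CYPHER's actual configuration.

```python
# Sketch: a guardrail check that runs before a learner request is sent to an AI.
BLOCKED_TOPICS = {"weapons", "explosives"}                       # K-20 safety rule
ALLOWED_JOB_TOPICS = {"marketing", "seo", "brand management"}    # job-title rule

def passes_guardrails(request: str, job_related_only: bool = True) -> bool:
    text = request.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return False  # centrally blocked regardless of role
    if job_related_only and not any(t in text for t in ALLOWED_JOB_TOPICS):
        return False  # outside this learner's job-related scope
    return True

for req in ["Teach me SEO basics", "How do I build explosives?"]:
    print(req, "->", "allowed" if passes_guardrails(req) else "blocked")
```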

I think there are a few things about that. Obviously, if you're building courses, you would like a human in the loop, so a human expert can make sure these things are not obviously fake.

The second thing, which is something we've implemented, is AI Crosscheck technology. What that does is, in addition to a human, use another, different AI to scrub your content. If it's a video, that means listening to the video and making sure it's not stating obvious falsehoods. So those are two things you can do, but it's certainly a concern, especially outside the area of learning, because humans do need to be able to rely on factual truth in order to make good decisions.

AI implementation

or read the answer:

I think there are some areas of your business that could immediately take advantage of AI without introducing any existential risk to your business. And there are other areas, especially ones subject to high degrees of government regulation, where you might need to be careful.

But the other thing to remember is that your competitors might not hesitate in certain key areas, and the longer you take to apply AI, the greater the disadvantage you're going to have.

For example, with e-learning: if you are building courses 100 times faster than your competitors and at a tenth of the cost, is that going to give you a strategic advantage?

If you are a training organization or an IT organization, the answer is almost certainly yes - and if it's your competitors doing it instead, you would be the one at a disadvantage.

or read the answer:

Now, I have to admit I'm slightly biased because I use ChatGPT Enterprise for at least one hour a day, which is quite a lot. One of the things I really like in ChatGPT recently is a brand-new feature that learns about you, updates its memory of you, and then ensures that that updated memory is taken into account in every interaction. For example, if I tell ChatGPT, "Hey, can you keep it a bit more concise?" you'll actually see it updating my memory, and from then on the interactions are more concise. Similarly, if I said I like astronomy, it might find that relevant and take it into account every time I interact with it: "By the way, Graham likes astronomy." I think that's a pretty cool feature. I haven't seen it in any other AI so far.

I also get the impression that ChatGPT right now is probably the best, or at least in the top one or two, in terms of accuracy and the enjoyment of interacting with it. It certainly gets a lot right when I'm talking with it, and there's a new ChatGPT 5.0 coming out in the not-too-distant future, so that will raise the bar again. It also has extremely good integration with tools: if you want to do some financial analysis, you can upload a spreadsheet and it'll automatically generate a new spreadsheet for you. That's another area where it really excels. I'm a fan but, you know, the world is changing quickly; maybe in six months' time I'll have a different favorite.

or read the answer:

I think there are actually quite a few, in different buckets. The first one I'll mention is that it's not real-time; by that I mean it doesn't have access to what happened five minutes ago, half an hour ago, or even a week ago. I'm sure that's coming. The way it works right now is they do a big training push that looks at huge amounts of data at a certain snapshot in time, and that is what the new version of the model starts with. It's not incrementally updated daily, but I almost guarantee it will be. Pretty soon it'll know everything up to the last week, then up to the last day, and at a certain point, up to the last second. So that's one limitation.

Another limitation has to do with its depth of reasoning. Now, I use ChatGPT all the time, and I have to say, I think its reasoning is better than most humans'. I don't view humans as the ultimate reasoning machines. So it's really, really good, but there are times when it's not as good as a human, especially on really complicated logic problems. That is being addressed quite quickly: with ChatGPT 5.0, the emphasis is going to be on raising the bar on reasoning ability. My working assumption is that ChatGPT 5.0 will be better than 98% of humans, and ChatGPT 6.0 better than 99%. There will always be humans who are better than ChatGPT at something, for the foreseeable future. But I don't think lack of reasoning skill is going to be anything more than a short-term weakness.

I think another weakness is the way it handles proprietary data. Right now, ChatGPT is trained on a huge corpus of public information. But if you've got a whole bunch of private data, there's no inexpensive way right now to say, "By the way, for the purposes of our interaction, here are five gigabytes of additional information, and I want you to index it to the same level of depth as you index everything else." ChatGPT 4.0 does have the ability to remember files you upload - it can index them, much like a vector database - but it doesn't ingest them to the level of depth of public knowledge. I think it will be a breakthrough when you can give it a whole bunch of your information and its brain gets bigger and different when it's talking to you versus somebody else. So those are the main three limitations I can see right now.
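
For readers unfamiliar with the vector-database idea mentioned above, here is a minimal sketch: private documents are turned into vectors, and the closest chunks are retrieved at question time to feed the model as context. A toy bag-of-words embedding stands in for a real embedding model so the sketch runs without any API.

```python
# Sketch of vector-style retrieval over private documents. The bag-of-words
# "embedding" here is a toy stand-in for a real embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Q3 revenue grew 12 percent on enterprise sales",
    "The onboarding course covers security policy and SSO setup",
]
index = [(d, embed(d)) for d in docs]  # the "ingest" step

query = embed("what did the security onboarding course cover")
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])  # the retrieved chunk would be added to the prompt as context
```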

or read the answer:

The answer is definitely yes. The reason being: let's take the example of building a sophisticated modern course. In a managed AI, you basically just fill in the blanks for what you need in the course, including nudges and any special elements you need. Under the hood, it will use the same technology as open ChatGPT, but it does it super fast: every single prompt is very carefully engineered and, not only that, it actually builds the course in our platform in minutes. If you were to do the same thing in ChatGPT, the exact same task could take you hours, because you would have to carefully craft the prompts, carefully scrub all the answers, and then manually copy and paste every single thing into our platform. So the answer is unequivocally yes.

or read the answer:

The answer is not really, but sort of. What I mean is that CYPHER Copilot is something from CYPHER Learning that is completely focused on making the lives of educators and learners more pleasant and efficient. In that regard, it's a completely separate product from Microsoft Copilot. There are lots of copilots - there's GitHub Copilot, there's Fubar Copilot. Copilot is just a term we use to emphasize that the human is in charge; it's like your sidekick. The way they are a little bit similar is that they both use OpenAI technology behind the scenes.

Microsoft licensed the technology from OpenAI, so Microsoft Copilot behind the scenes happens to use the OpenAI API. CYPHER Copilot uses a lot of AIs: it doesn't just use one AI for generating text, it uses multiple. It uses an image AI, it has a voice AI, and pretty soon it's going to have a video AI. One of the AIs we use happens to be the OpenAI large language model. So they have that tiny little bit in common, but aside from that, they're really quite different.

or read the answer:

We currently use 5 different AIs and over the next month or so, that's probably going to grow to 7 or 8 AIs.

The general idea is that we pick AIs based on what they're best at, how fast they are, and how much they cost. So we use one AI, for example, for image synthesis, another AI for voiceover, another AI for video transcription, another AI for content generation.

But the nice thing about our system is that none of this complexity surfaces to the user. You just click a checkbox: "I want to synthesize images." And hiding this from the user gives us the flexibility to replace these AIs at any point when something better becomes available.
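
The underlying pattern is a routing table from task to model, which can be swapped without the user ever noticing. Here is a minimal sketch; the model names are placeholders, not the vendors CYPHER actually uses.

```python
# Sketch: routing each task type to a different AI behind one simple UI.
MODEL_FOR_TASK = {
    "image_synthesis": "image-model-v2",
    "voiceover": "tts-model-a",
    "video_transcription": "speech-to-text-b",
    "content_generation": "llm-c",
}

def run_task(task: str, payload: str) -> str:
    model = MODEL_FOR_TASK[task]  # the selection is invisible to the user
    return f"[{model}] processed: {payload}"

# Upgrading a backend is a one-line change, not a UI change:
MODEL_FOR_TASK["image_synthesis"] = "image-model-v3"
print(run_task("image_synthesis", "diagram of the water cycle"))
```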

So obviously, we commit to our end customers that all their private data is completely private - we're not going to share it with anyone - and we strive very hard to make our systems work fast and reliably. But aside from that, there's really no need for anyone to know which AIs we're using behind the scenes.

or read the answer:

Our belief is that you should be able to create state-of-the-art, engaging, full-featured courses in minutes using an AI, and you should not have to know anything whatsoever about AI in order to do that.

So for example, in our system, using AI 360 with CYPHER Copilot, you just say "build me a course" and specify exactly what you want using simple checkboxes and drop-down menus, for example:

  • I want a professional tone of voice
  • I want to have an automatic voiceover
  • I want it to generate images synthetically
  • I want automated questions
  • I want group projects
  • I want rubrics
  • I want a study guide
  • I want a glossary
  • I want gamification
  • I want competency-based learning

Then just click the button and, a few minutes later, it will have done absolutely everything. Not only that, you can drag and drop in a PDF, a video, or a Microsoft document, and it will generate those courses inspired by the material in that private information.

None of our competitors do this at all. 

They might be able to generate a question bank, they might be able to generate a small course outline, they might have a little tool that lets you invoke ChatGPT from your learning platform - but you still have a huge amount of work to do to get to the point where you have a decent course. We've figured out a way to harness that AI and make it super easy and super efficient. No compromise.
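
As a rough picture of what those checkboxes and menus amount to, here is a hypothetical sketch of the structured request they might produce. These field names are illustrative only, not CYPHER's actual schema; the point is that the user supplies choices, not prompts.

```python
# Hypothetical course request assembled from checkbox/menu choices.
course_request = {
    "topic": "Introduction to Digital Marketing",
    "tone": "professional",
    "voiceover": True,
    "synthetic_images": True,
    "auto_questions": True,
    "group_projects": True,
    "rubrics": True,
    "study_guide": True,
    "glossary": True,
    "gamification": True,
    "competency_based": True,
    "source_files": ["marketing-handbook.pdf"],  # optional private material
}
```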

or read the answer:

An AI prompt is basically just something that you say to an AI, and depending on how you ask the question, you might get a different kind of response. Within our engineering team, we've built up a lot of expertise in how to ask questions in exactly the right way to get the best response back.

You don't have to know anything about AI prompts when you're using the CYPHER platform, because we package the AIs and hide all the nasty details from you. All you need to do is check a few boxes, select a few menu options, and say Go.

Another question is: why can't I just do exactly what you're doing using AI prompts? The answer is, you actually could, as long as you don't mind working about 100 times slower than the CYPHER Learning platform. To give you an idea, when you build a course on our platform, we trigger a minimum of 200 prompts, many of which operate in parallel at superhuman speed. We then take the results from all of those prompts and, through tight integration, generate the modules, the pages, the assessments, the quizzes, and the glossaries at superhuman speed.
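
The speed difference largely comes from running prompts concurrently instead of one at a time. Here is a minimal sketch of that idea; call_llm is a toy stand-in for a real async API call, not the platform's actual pipeline.

```python
# Sketch: firing 200 prompts concurrently rather than sequentially.
import asyncio

async def call_llm(prompt: str) -> str:
    await asyncio.sleep(0.1)  # stands in for real network latency
    return f"result for: {prompt}"

async def build_course(topic: str) -> list[str]:
    prompts = [f"Write section {i} of a course on {topic}" for i in range(200)]
    # All 200 calls run concurrently; total time is ~one call, not 200.
    return await asyncio.gather(*(call_llm(p) for p in prompts))

sections = asyncio.run(build_course("digital marketing"))
print(len(sections), "sections generated concurrently")
```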

If you were trying to do this with ChatGPT, what we do in 10 minutes would probably take you around 10 to 20 hours - and every time you wanted to repeat it, you'd have to spend another 10 to 20 hours.

So, yes, you could do it. But I don't think it's a good use of your time.

or read the answer:

Generally speaking, what we do is try to make AI extremely accessible to anyone in the education industry. It's packaged up, it has guardrails, and you can't really do anything crazy with it; we've made it as safe as we possibly can. We can't force anyone to use AI against their wishes, but anyone who does want to use AI will find that it fits very well with global AI policies.

or read the answer:

AI 360 is the label we give to our technology, which is going to help all stakeholders in the educational process. Our first offering as part of AI 360 was CYPHER Copilot, which is incredibly good at building sophisticated, modern, skills-ready classes - in less than five minutes now, because it keeps getting faster. That feature has automatic AI Crosscheck for accuracy, voice synthesis, image synthesis, content generation - you name it, it's in there.

We're also working on some stuff that's quite amazing but that we haven't released yet; it's not related to course development, and it's another part of the AI 360 umbrella. Overall, I would say what makes us unique is the velocity at which we're moving and the breadth of stakeholders we are helping with AI.

or read the answer:

If you look at how people typically use learning platforms, it's to log in and take a traditional course, which is typically broken up into modules and sections. If you're lucky, it's gamified, and if you're super lucky, it will also track your competencies. That is an obvious area where AI can assist.

Specifically, AI can allow an instructor, professor, or teacher to automatically create amazingly sophisticated courses in as little as 10 minutes. It's not going to do 100% of the job, but it will do at least 80%. But that's a fairly straightforward answer.

I think there's a much more interesting answer if you look beyond the way people have traditionally learned using platforms. A lot of people have something they want to learn right now - something very important, mission critical - and they don't have the time or the inclination to rummage around, find a course in your jukebox of courses, and take it.

One of the cool things you can do with generative AI is learn something right there, on demand, in an easy-to-consume format, without having to take a course. This is something I think of as instant learning, or just-in-time learning if you like.

I think the more profound thing that's going to happen is that these platforms will pretty soon have personal agents. A personal agent is something running 24/7 in the background, trying to find ways to help you accomplish your learning objectives. It might recommend content, automatically notice useful news articles and summarize them for you, or automatically follow up a week after you've consumed some material and give you a little refresher. This is something I think is going to revolutionize the way learning platforms work. So stay tuned and let's see if this all comes true.

or read the answer:

The short answer is no. Why do I feel this way? I used to be a teacher - I taught Computer Science at the University of Texas at Dallas. People really liked my classes, and one of the reasons is the energy, enthusiasm, and anecdotes I brought to the classroom, which got everyone excited about learning and put them in the right frame of mind. I obviously delivered the core materials, but there was so much more that I did - lots of really cool hands-on projects, like "let's build a neural network simulator." I always got great reviews, people really enjoyed my classes, and they still send me emails randomly today.

So would an AI have done all that? I don't think so. However, if I had gone into a lecture and all I said was, "OK everybody, turn to page 15. Now let's all read together," then yes, I probably could have been replaced by an AI, if I was that bad and that boring.

But I do like to think that a majority of educators actually do a lot more. And the most important thing that you can do is to inspire and motivate your students.

or read the answer:

Part one is that the AI does not generate the whole course. The AI is meant to generate about 80% of the course - the core that, frankly, no one really wants to spend their time on - so that an instructor can spend the remaining 20% adding their own flavor and anecdotes, making it theirs. If all the courses come out the same, that means people aren't doing their job: they're just pressing a button and teaching whatever comes out, which is not our philosophy at all.

The second thing is that if you are going to build courses using AI, you also want those courses to be modern and engaging. That means adopting a competency-based approach, so all the materials are tagged back to the things they are teaching and assessing. You would like them to be gamified, and ideally to have things like digital voiceovers to make them engaging. What we've done in our course-building Copilot tool is build all of that directly into the tool, to save people a lot of time and money.

The last thing, honestly - the thing that gives universities the biggest competitive edge - is the kinds of instructors they hire. There are poor instructors who just teach by the book, and there are really gifted instructors who know how to make any subject lively, fun, and memorable. You want to hire those kinds of people, deploy the best platform you can for them, and give them the best AI integration you can. That'll make them happy, and then they can spend more time doing what they do best, which is inspiring their students.

or read the answer:

You may or may not know that I used to be a teacher; specifically, I taught computer science at UT Dallas. I distinctly remember my summer vacations being filled with tons of course preparation: I had to build a new course quite regularly, summer was when I built it, and boom, there goes my summer. I would say the number one thing you can do with an AI is have it help you create your new courses for the next semester. Obviously, CYPHER Learning has an amazing tool for helping you build engaging courses in 10 minutes or less, but you could probably do something similar with ChatGPT; it might take a lot longer, but it's still going to be much quicker than starting from scratch.

or read the answer:

So, definitely use CYPHER Learning - but apart from that obvious one, I would say there are different ways to use AI. Some are quite shallow: you create a traditional class, maybe tag a few videos using AI, and then you're done. That is shallow use.

And then there is deep use of AI, where you auto-generate some of the content, use AI Crosscheck to make sure it's accurate, tag almost everything with the various skills, and build skills maps dynamically using AI.

There's so much more you can do with AI and at CYPHER Learning, we're focused on deep use of AI. So if you want to check out our platform, you'll find we do a lot with artificial intelligence.

or read the answer:

If you think about it, one of the main things you want to do in HR or L&D is help people evolve in their careers, help them do their jobs better, make sure they stay current with the latest technologies, et cetera. As the world moves faster and faster, it's more and more important for people to learn. But you don't just learn random stuff; you learn very specific skills that are relevant to your job.

So in the ideal environment, somebody - your manager, your peers, or an AI agent - says, "Hey Graham, to do your job better, you should get better at skills A, B, C, and D," and I say, "Great, please help me do that." Then, ideally, it would analyze the skills you have and all the materials out there, including human experts and cohorts that could help you, map those things together, and send you on your merry way.

But that matching process is nontrivial, because one department might call a particular skill X, a different department calls it Y, and an educational vendor calls it Z. How do you know these three are actually the same thing?

It is possible to do this without AI, but it would require quite a brittle one-to-many set of connections. An AI, by contrast, can adapt and is much more fluid in its understanding. An AI can say: of course X, Y, and Z are all the same thing - I'm going to bring them together.
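
One way to picture this matching is mapping differently named skills onto a canonical skill via similarity. In this minimal sketch, difflib is a crude stand-in for a real embedding model or LLM, which would handle true synonyms far better.

```python
# Sketch: canonicalizing skill names by similarity. difflib is a toy stand-in
# for semantic matching with embeddings or an LLM.
from difflib import SequenceMatcher

CANONICAL_SKILLS = ["search engine optimization", "pay-per-click advertising"]

def canonicalize(skill_name: str) -> str:
    return max(
        CANONICAL_SKILLS,
        key=lambda c: SequenceMatcher(None, skill_name.lower(), c).ratio(),
    )

# Three departments, three names, one skill:
for name in ["SEO", "search optimization", "optimizing for search engines"]:
    print(name, "->", canonicalize(name))
```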

So AI has the potential to break down a lot of the barriers to understanding skills, mapping them, and then helping people to get better at the skills they need for their job.

or read the answer:

One of the things we're doing is making it much easier to create fresh, modern educational content and associate it with fine-grained skills. What I mean by fine-grained skills is this: if you build a course about digital marketing, you don't want to say the course is just "marketing." You want to say: here's SEO, here's pay-per-click, here's brand management.

So every single assessment, every quiz question, every video - everything - is mapped to those skills. By doing that, if somebody takes the course and goes through it, you can figure out their understanding of all those various skills. And because things are tagged, people who have a deficiency in a skill, or who want to get good at a skill, can find those materials very easily. We have found that without AI, creating skills-ready materials is a much, much bigger burden on L&D staff.
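
Here is a minimal, illustrative sketch of why that tagging pays off: once each quiz item maps to a skill, results roll up into a per-skill picture of understanding. The data is made up for the example.

```python
# Sketch: rolling quiz results up into per-skill mastery via skill tags.
question_skills = {
    "q1": "SEO",
    "q2": "SEO",
    "q3": "pay-per-click",
    "q4": "brand management",
}
answers = {"q1": True, "q2": False, "q3": True, "q4": True}  # learner results

mastery: dict[str, list[bool]] = {}
for q, skill in question_skills.items():
    mastery.setdefault(skill, []).append(answers[q])

for skill, results in mastery.items():
    print(f"{skill}: {sum(results)}/{len(results)} correct")
```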

Now, we're doing even cooler stuff by dynamically generating personalized content based on your skill requirements - more about that later. So stay tuned.

or read the answer:

I would say yes and no. I think it's very good for people to learn enough about AI to realize that it makes mistakes and that you have to be persistent and conversational with it to get where you want to be. I think that is good.

But I think spending time training to become your own prompt engineer is mostly a waste of time. For example, if you want to build courses in a few minutes, you go to our platform; you don't have to know anything about prompt engineering. You just say, "Build me a course about this," fill in the blanks, and hit Go.

We do think that, more and more, AI is going to be packaged and managed so you don't have to know much about it. But you do need to realize it's not perfect, and you are still in control.

or read the answer:

It depends. I'll give you some obvious cases of cheating. Case number one: you've got a kid at school, they get a homework question, they put it into ChatGPT, copy and paste the answer, and don't use their brain. Obviously cheating. Case number two: I ask an engineer for a daily report, and rather than actually writing out what they did, they use ChatGPT. Obviously cheating.

However, a couple of other examples. Let's say you're in K-20, working on a really complicated term paper, and at the end you want to run it by an AI to improve your writing style, et cetera. Not cheating - that's using an AI to augment what your brain has already mostly done. Or you're a developer working on a really tough bit of engineering, and there's one particular piece of code you want to crank out. It's fairly junior code, but you don't want to spend the time on it, so you use an AI. Not cheating. So there are a couple of examples each way: definitely cheating, and definitely not cheating.

or read the answer:

I think AI is certainly going to get better and better, which means that a human being who knows how to tag-team with an AI will be able to magnify their productive powers enormously. But it also means that humans who don't know how to use AI will gradually get left behind in their chosen careers. So yes, AIs - especially with the advent of agent technology - are definitely going to get better, and they're going to be increasingly productive when tag-teamed with a human who knows how to utilize them.

or read the answer:

I would say generally speaking, yes. A few reasons come to mind. First of all, these AIs are getting better, which means people are going to have to stay more and more in touch with modern skills. And if you're in a role that is ultimately going to be completely replaced by AI, you need to be thinking about how to take your skills and apply them to an industry that is completely open for human workers. That's an obvious one.

The second one, which is not quite so obvious, is that some businesses are too enthusiastic and gung-ho about using AI. Your boss might be telling you, "Yeah, use an AI for that," and you, as the experienced worker, might say, "It's not ready for that yet, please don't do it." So people do have to push back sometimes, and maybe alert senior management if they see AI being used in ways that are not appropriate, at least at this point in time.

Future trends and impacts of AI

or read the answer:

Overall, I think it makes complete sense; I don't think there's anything weird in there. To give you an idea of the kinds of things they're thinking about: one of them is privacy. If you're going to use an AI, you want to be assured it's not going to take all of your data and share it with other people. That makes good sense.

Another thing is related to national security. The government is going to work with key vendors such as OpenAI and Microsoft to try and make it as difficult as possible to use an advanced AI for nefarious purposes - such as building a biological agent or blowing up a nuclear reactor.

Another very important one: there's no question that AIs are going to be used more and more to algorithmically calculate outcomes that can affect your livelihood. For example, you might be applying for credit or for housing, and we want to make sure the AIs do not directly or indirectly discriminate against any particular group, thereby putting them at a disadvantage.

And the last one is related to jobs. It's quite obvious that AI is going to displace a whole bunch of jobs over the next few decades, but ideally, people in those jobs can transition to other jobs through upskilling. One of the things the US government wants to do is make sure that displacement doesn't happen too abruptly, to give people in those jobs plenty of time to transition.

So overall, I view the US Bill of Rights for AI as being very sensible and I don't see any particular issues with it.

or read the answer:

I would say the answer is yes but with caveats.

Let's assume the AI has access to your outputs: if you're an engineer, the code you write; if you're a creative writer, the text you write. You can certainly evaluate that; I don't think there's any magic involved. But there's a huge number of things you do during the day that really help your organization - whether you're fun to be around, whether you've got a great attitude and good team spirit, whether you rise to the challenge when tough times hit. Those are things the AI probably won't have direct access to. Maybe someone could write them up, but unless we're surrounded by security cameras, it's not going to see you at 10 pm fixing that really tough bug for an important customer.

So I think that yes, they will be able to evaluate outputs over time, but I think there's a lot more to evaluation than just evaluating your outputs.

or read the answer:

I'm going to use an example from schools and universities, but it applies equally well to businesses.

The thing most people don't talk much about right now is personal agents. What I mean by a personal agent is something that knows a lot about you, helps you, is available 24/7, and continually adjusts to your goals and your interactions with it.

Let's say you're interested in a particular topic, and you ask your agent, "Hey, can you tell me a bit more about X?" The agent now knows you're interested in X. It has access to the entire internet, so it can start finding lots of interesting related topics once it has interacted with you. If it gets the sense you didn't quite understand, it might say, "Hey, let me explain this a different way." A week later, it might say, "I just want to give you a quick pop quiz to see if you remember." By the way, I do all of these things for my kids, so I'm like their personal agent, if you like.

Compare that against a traditional learning environment, where you go from one teacher one year to another teacher the next, then you change schools; or, in the world of L&D, you learn in one organization, move to a different organization in the same business, then join a different business. It's constantly changing.

But with a personal agent, that won't matter, because it's yours; it doesn't report to the business per se. The other thing is that it will have access to so much information on the internet in real time. If there's a new subject or new related material, it can let you know within 10 minutes. Try doing that at school.

I think the new AI generation - my kids, for example - are already beginning to interact with AI, and they're enjoying it. They're going to come to expect this kind of customization and detailed interaction. There's always the danger that if their school, university, or business isn't somewhat up to speed, they're going to feel like they're in the dark ages and will just go back to their trusty personal agent.

That's what I think is going to be a pretty big deal in the world of education.

or read the answer:

It really depends on how they're using it. If you teach the class the same way you always did and give the same kinds of questions, and the students can just turn around and use ChatGPT to quickly find an answer, then yes, it's probably going to make lazy students even lazier, shall we say. That's asking for problems: if you're just teaching the same way while people are using AI, you're doing them a disservice.

What you need to do is create a much more complex set of tasks that get students to think deeply: how do I use an AI? How do I team up with it to achieve a much higher level of performance? In those situations, students actually have to think creatively about how to engage with the AI - how to engineer their prompts, how to scrub the output to make sure it's true - because if they just copy and paste something that's obviously wrong, it makes them look bad.

There are a lot of things they have to do but, as I said, I think the main onus is on the teacher or instructor to raise the bar of what's expected, so that people can't get mentally lazy.

or read the answer:

One thing that often goes unspoken is that during the day you interact with a lot of different people - a creative writer picking your brain about which angle to take, or, if you happen to be in medical services, people working in the ER, or a neurosurgeon. I can think of tons of people you interact with where the way they interact with you is crucial to both of your outcomes.

If I go into an ER and the doctor is unempathetic and rude and doesn't see that I'm in huge pain and need priority, then obviously I'm going to be very dissatisfied. Whereas I've had some very positive interactions with the medical profession, where they make you feel at ease, understand where you're coming from, and give you priority when necessary. It's a huge part of being a successful professional, and I think soft skills have always been pretty important.

I think people are now starting to ask, "Hey, what is the difference between me and some AI?" One of the first things they'll come up with is soft skills. I think a lot of people around the world are going to be quite motivated to improve their soft skills. Interestingly enough, software powered by AI might be one of the ways you can accelerate your ability to develop very good soft skills.

I know at CYPHER we're doing a lot of work with organizations and schools to use competency-based learning to measure soft skills and to target specific areas of strength and weakness.

So yes, they're definitely related, and I think there's going to be a lot more emphasis on soft skills as a result of the resurgence of AI.

or read the answer:

So today's question is “I know that CYPHER Copilot does an amazing job at allowing me to build engaging modern classes in minutes. However, do you see a time where Copilot will, in addition, help a learner to recognize their talent, to learn things much quicker that they're confused about, and otherwise help them on their educational journey?” 

The answer is absolutely! What you're referring to is generally known as an agent, and CYPHER Learning is dedicated to building out our Copilot functionality to include exactly this kind of thing.

or read the answer:

Generally speaking, an agent is something that operates autonomously on your behalf. It's not something you ask a question and it responds; you basically say, "Hey, I want to learn about these kinds of topics," and it operates like a little squirrel, 24/7, hunting down nuggets of information, serving them up, and making suggestions. We really think agents are the future of AI - and a lot of other people do too; this is not just my personal opinion. We're in the education industry, so we're working very hard on bringing the power of personal agents to education. That applies to learners - how do you help a learner along their educational journey? - but also to educators who want to create engaging materials; an agent can work on their behalf as well.

or read the answer:

Turnitin recently announced that they were going to significantly downsize their engineering team and replace a bunch of the engineers with AI. I have a few thoughts on that.

So first of all, let's just take AI off the plate completely. The best way to build software efficiently is to have small teams of really experienced people who create reusable frameworks and bring in the most advanced tooling, so that a small number of people can do a lot. Those are high performance engineering teams and that's the kind of team that we have here at CYPHER Learning.

There are certainly engineering teams that haven't adopted the latest software best practices, where a larger number of less talented engineers, shall we say, build a lot of mediocre stuff that could be done much more efficiently. If you have an engineering organization like that, it's certainly possible you could downsize somewhat by leveraging AI. But I really do think AI is a red herring in this case: if your team is so easily replaceable by AI, you could probably have reduced it regardless.

At CYPHER Learning, there's absolutely no way I could replace anyone on our engineering team with AI - they're just way too good - but they do use AI on a daily basis to improve their own productivity.

or read the answer:

I think almost any job in the workforce can benefit from AI, unless it's flipping burgers at your local In-N-Out. But it's really important for people joining the workforce to be well versed in how to use AI right out of the starting gate. I know at CYPHER Learning, we pretty much expect most people at this point to be using AI to become more productive, and I would get quite concerned if an employee in almost any position wasn't using it. So it definitely is going to make you more productive - but you don't want to be that one clueless graduate starting a new job while everybody around you is using AI and you're not really sure what it's about.

or read the answer:

There are a couple of things. First of all, let's look at the word "ensure." There are a lot of products and courses out there that will teach you generic things like marketing or sales. But most companies that want to ensure somebody has that training want some degree of traceability down to individual skills. Within marketing, for example, there might be 30 or 40 different skills you want to ensure somebody actually understands and has mastered - not just that they took a course and maybe didn't understand anything. I would say adopting a skills-based training methodology, where all the content, educational material, quizzes, and assessments are tied back to skills, is a really important way to ensure they have it.

Now, in terms of doing this quickly: at CYPHER Learning we've got a copilot technology - CYPHER Copilot, not to be confused with Microsoft Copilot, by the way - which allows L&D staff to create these courses, completely tied back to skills, in about 10 minutes. Otherwise, it is somewhat laborious work. But I want everyone to know there are AI-powered tools right now that can create really engaging courses that tie back to skills. I'll also say that, coming soon, I think you're going to start seeing straight-to-the-learner offerings on the marketplace, where somebody who doesn't have access to a course, or isn't inclined to take one, can engage with an AI and learn new skills - once again tied back to skills in a way that their learning and understanding is actually ensured. So stay tuned for what's coming from CYPHER Learning.

or read the answer:

My viewpoint is that right now there is a whole cluster of AIs with somewhat similar capabilities - ChatGPT 4.0, Anthropic's Claude, Gemini from Google. They're all in the same ballpark; nothing is one massive breakaway or a huge step up. I think everyone in the AI industry is expecting a step up fairly soon; there have been various leaks about a ChatGPT 5.0, a new version of Google Gemini, etc. I think you're going to see them get more accurate quite soon - so fewer hallucinations - and much better, more powerful reasoning.

I read somewhere about a guy who's an expert in electronics, and one of the things he does to find out how good an AI is, is to have it design a complicated electronic device with all the resistors and capacitors and everything. This guy apparently stumbled across an early version of the next ChatGPT and said it did an amazing job - no other AI had managed that before. So I think you're going to see a step function pretty soon, you're going to see prices continue to drop on the basics, and you're going to see much more powerful video and image synthesis. I think all of these things are going to happen in the next 6 to 9 months. But then I think the bigger wave is going to be AI agents, which was the subject of a previous LinkedIn post from yours truly.

or read the answer:

The answer is yes. You can think of it as a spectrum. On one side you've got an agent in a box: the agent can do things for you, but it can't spend your money or actually book a plane flight. On the other end are very autonomous agents, where you give them the freedom to open your wallet and buy things. We think the agent in the box is, at least for the time being, going to be much more popular and much more common, and that's the kind of agent we are working on right now.
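
One simple way to picture that spectrum is as a set of granted capabilities. Here is a minimal sketch; the capability names are illustrative, not any product's actual permission model.

```python
# Sketch: "agent in a box" vs autonomous agent as a difference in granted capabilities.
SAFE_CAPABILITIES = {"search", "summarize", "recommend"}   # agent in a box
AUTONOMOUS_EXTRAS = {"spend_money", "book_travel"}         # riskier grants

def allowed(action: str, granted: set[str]) -> bool:
    return action in granted

boxed = SAFE_CAPABILITIES
print(allowed("recommend", boxed))    # True
print(allowed("spend_money", boxed))  # False: irreversible actions withheld
```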

or read the answer:

A copilot is something you typically tag-team with on a task - it's literally like you're flying the airplane with your copilot next to you. For example, if you're using the CYPHER platform to build a course, we call that a copilot because you are building the course with the help of an AI.

Agents, on the other hand, are typically a bit more autonomous. You can give them a task, and it might take them an hour, a day, or a week to complete it. While they'll probably touch base with you just to let you know what's going on, they're a lot more autonomous than a copilot.

And at CYPHER, we are actually working on agent technology as we speak. So stay tuned for more information.

or read the answer:

A stateful AI is particularly important when you're talking about agents, because an agent is given a mission - say, booking a lovely vacation to Barbados - and it knows all the steps necessary. It goes from step one to step two to step three, and it always knows where it is relative to its goal. You will be in the loop, so you are part of the stateful agent's goal.

A copilot, by contrast, typically doesn't carry much state: it's a short job, and then you're finished. That's why state is so important, especially with regard to AI agents.
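
To make "stateful" concrete, here is a minimal sketch of an agent that holds a mission, a plan, and a pointer to where it is, so it can resume after each step and keep the human in the loop. All names are illustrative.

```python
# Sketch: a stateful agent knows its mission, its plan, and its current step.
from dataclasses import dataclass, field

@dataclass
class StatefulAgent:
    mission: str
    plan: list[str]
    step: int = 0
    log: list[str] = field(default_factory=list)

    def advance(self) -> None:
        if self.step < len(self.plan):
            self.log.append(f"done: {self.plan[self.step]}")
            self.step += 1

    def status(self) -> str:  # how the agent keeps the human in the loop
        return f"{self.mission}: step {self.step}/{len(self.plan)}"

agent = StatefulAgent(
    mission="Book a vacation to Barbados",
    plan=["pick dates", "compare flights", "hold hotel", "confirm with user"],
)
agent.advance()
agent.advance()
print(agent.status())  # -> Book a vacation to Barbados: step 2/4
```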

or read the answer:

I have to say, I don't think the tech industry is evolving all that rapidly. I mean, we know who our competitors are, and they seem to be moving quite slowly, to be honest with you. The way we innovate is not based on how rapidly anybody else is innovating.

We're very passionate about where we think the future is going, and we are building it as fast as we can, independently of what everybody else is doing. But I do have to say that when we look around, it seems like a lot of companies are moving really slowly right now.

or read the answer:

AI is moving so fast that "recent" could be yesterday for all I know, but I will say there have been a lot of advancements in video generation. Pretty soon you're going to see real-time videos teaching you almost anything, without you having to go to YouTube, for example. They could be personalized; they could even be done with your voice or your face - it's gonna be quite crazy.

The reasoning aspect is getting stronger and stronger - you've probably heard about Project Strawberry - and we're expecting to see more advancements there. Advanced reasoning means AI agents can become smarter, so they will be able to assist you in much more profound ways with whatever it is you want to learn.

I also think the price-performance is amazing. We dropped the prices of Copilot by around 30% last week, it's getting faster, and its multi-language support is increasing. Prices are just going to keep going down, which makes all of this more accessible to our customers.

or read the answer:

AI is basically proving very, very useful in pretty much all industries, as far as I can tell, but one area in particular is drug discovery. Drug discovery has typically been quite a hit-or-miss problem, and thanks to innovations such as AlphaFold from Google DeepMind, and other work by OpenAI, researchers are using AI to speed up drug discovery and make it a far more directed approach rather than hit-or-miss. That's just one example where AI can do an amazing job, and it would be really, really hard for human beings to be as sophisticated at drug discovery without it.

or read the answer:

I would say the biggest misconception is that people assume AI is always going to hallucinate and make mistakes. Now, it's always possible to make a mistake, but these AIs are getting better and better, and more thoughtful about scrubbing their answers. If you've been tracking OpenAI: yesterday they released the o1 model, and in the professional version, o1 can actually think for up to one minute before it gives you an answer. So I think it's not going to be long before AIs make far fewer mistakes than they used to. The rate will never go to zero, but it'll be way, way more accurate than even a human expert.

What's my biggest prediction for AI in 2025? It's three words: agents, agents, agents.

To explain what I mean, I'll give you an example from the learning industry, which is obviously where CYPHER is. In the past, people were not necessarily that great at self-directed learning: they weren't sure what they really needed, and they weren't sure how to study appropriately. Managers might have a difficult time tracking what their learners are up to and making recommendations. But AI is pretty good at this stuff. If it knows what you want out of your career, what you're good at, and what you need help with, it can craft a personalized learning path for you and generate personalized content just for you. If you're a manager, it can automatically track how all of your workers are doing and where they need help. And it can all be done proactively by an AI agent.

We're really working hard on agent technology. We've got the first learner agent coming out very soon, and a lot of what we're doing in 2025 is applying proactive AI, in the form of agents, to all of the stakeholders who use the CYPHER platform.


Ask Graham!

Got a question about AI? Ask it here and we'll see what Graham says!