The data shows that employees – perhaps your own employees are really concerned about AI replacing them. This isn’t from one source, or just one well-known entity espousing their study results, rather it is appearing in multiple studies, and not with just a few employees – but looking at it globally, with thousands of employees in most cases.
What I have seen though is the lack of constant and consistent training/learning for employees, association members, customers (B2B), even execs when it comes to the latest around AI – in this case Generative AI. This isn’t a one-time let’s put out an online course or some online materials or send folks to our intranet or have them go to a repository to find these materials or send them links to some site that has links to free online courses around AI including learning more about it.
The latter is one of the worst options because those courses, quality aside – you have no idea who created them – in terms of instructional design skill sets (and a lot are beyond awful – the courses design wise – okay they are all awful) – but are they updated weekly? What happens if Jane Widget rumbles into the “let’s learn about Gen AI” – and it was last updated five months ago. OR let’s say it is you that sees a bunch of course options and takes one. Congrats you know the latest.
There is of course another option – you, yes you – spends your wonderful evenings or day and reads lots of publications and some studies that go beyond the usual, and then sends daily tidings to everyone so that they know the latest. Now you can say, “Well, what’s the big deal if so-so came out, and why is that even relevant to my employees or customers or members?
Yes, it is a fair objection because one LLM is the same as another (it isn’t) and therefore if one let’s say does better with links compared to another LLM (which is common – i.e. each has strengths and weaknesses) and the vendor’s system you are using pushes out links with the LLM they are using – and Tada – someone clicks and it goes to Wild World of XXX. Who’s responsible here? Trust me, it won’t be the vendor. They will scatter like flies right after you bring out the fly swatter and unleash the Zorro in you.
On your side though, as the person overseeing L&D or Training or HR (and L&D has been gutted so you are now running the show OR you lead onboarding, while Training does external for example), it is your responsibility to be the expert, not some person’s YouTube video, or stuff some person puts links to.
As the expert it is the company’s expectation that you will learn yourself – or at least it should be. However, you have lots on your plate. And this Gen AI thing is what? Isn’t it AI? What is this generative thing? I see transformers – what the heck? Prompt leaking? I thought my employees can type in anything on ChatGPT (because they still are using it) or another Gen AI offering, and it will push out accurate stuff.
Guardrails? Why does Craig always mention hallucinations? What is he on drugs? (Drum roll). He always seems to bring it up. Who cares!
Trust me you will. Which is why my good friend, I’m going to cover some of the latest – info that is/will be relevant not just to you in L&D or Training or overseeing education for an association or organization, or people in HR, or the CEO who went with this AI thing and is now upset, or the learning tech or learning system vendor who hasn’t decided what to use, or has, and isn’t paying attention besides making sure it works.
The last you would think would be able to answer lots of key questions around that LLM or LLMs or SLM (I haven’t seen any vendor in our space with one) in their solution, that you have signed an agreement.
The Terms of Relevance – Right Now – Beyond what you may have already heard
There is so much out there – and a lot of the jargon is beyond confusing. Multimodal? Doesn’t every LLM offer that? (No). Different versions, different vendors – why is everyone sticking with OpenAI? – Way too many vendors in our space do, even if they are using another LLM vendor.
Prompt leaking – it’s real, and you do not need a Doctorate to pull it off. You just need someone who is lazy and has nothing to do and wants to see if they can make it happen, or are re-using ones that do work – hoping the LLM vendor hasn’t plugged it (i.e. fixed it).
Hallucinations – it isn’t going away – and while there are many AI companies out there trying to find a way to do it, no one has succeeded. There are enough experts who doubt it will ever go away – be fixed. My gut says it will, too many people trying to solve it, and too many companies holding back from implementing gen AI (It has become a major issue, and very obvious why).
But it won’t be fixed in 2024, and the earliest I could see it, is late 2025 or into 2026. Nowadays, there are folks who have created AI solutions that go either with a R.A.G. or enable you to use it without using a R.A.G. (most people just say “RAG”). The real name is Retrieval Augmented Generation.
Why is it relevant?
It’s been around for a few years, but as of late, it is starting to get more eyeballs and interest with it for AI.
One type of SSM architecture. Big win – handles long range sequences of data far better than what exists today (well, in theory at least).
Should I pay attention to this?
For folks who are developers, yes. For others? Well, not likely, although the SSM is something to just put down for a rainy day, because of what the possibilities are when it comes to the whole architecture thing.
Mamba with a SSM foundation improves performance and efficiency. That’s a good thing. Whether or not, it does so, far more than an LLM that isn’t using it, is still up to debate. This thing gets into transformers, which will bore you, so let’s move on.
Citations – with your Gen AI
Oh boy is this one starting to appear with a few learning systems. Heck, you can buy Gen AI with the citation capability ready to go, and yep, it can do quite a bit including the ol APA style that nobody cares about, except some old stuffy professor looking at your thesis with a microscope. Or was that just mine?
Anyway, citations seem like a great capability, because the output shows the response with a citation of where it is within the context of the content. Yeah. Seems perfect.
And yes, it could be, somewhat solid, but any vendor even AI tech vendors that use or include citation mode and claim it works 100%, with your own content is not being really clear here. Those pesky hallucinations still can exist. You may get a cite, click the cite and it goes nowhere in your material(s). You may click it and it shows the wrong section or page or specific area.
If your employees at home are typing into a Gen AI solution with citations, and says “Hey, this is perfect, I’ll just use this cite,” without verifying it – it may just lead to nowhere or someplace nobody show could to in eons.
I see citation mode as a nice to have, depending on the situation. But it still needs to be made clear that even with your own materials/content within the system even if it is via an intranet where you house your content, it can still create a hallucination (Fake or false information – vendors like to call it little mistakes).
This is where after someone has inputted a question or statement or whatever, the output produces the information with a citation to go to that exact source. The pluses are obvious, but as with anything if the material doesn’t have that exact sentence or paragraph or whatever the cite points towards, then citation mode while looking cool, isn’t effective.
MOE – Mixture of Experts
What is it?
It’s me hanging out with a few other experts talking about infinity and learning. I kid I kid.
Rather than dive into a bunch of what is this and huh, I found this quip to be the best to articulate it:
“Enables the ability to “combine large “tradeoff” between the greater capacity of a large model and greater efficacy of a smaller model.” (Bergman, D. (2025, April 5). What is mixture of experts?)
Does it matter to me?
If you are a vendor who is looking for an LLM, yeah it should matter. There are numerous LLMs that have MOE and all the big players including Google, AWS and Microsoft have LLMs that have it. From the SLM standpoint it is rare, but there are a few out there. SLM – is a small language model.
As noted earlier, I am really surprised that nobody in the industry – ours – has a SLM, especially when they have a mobile app, i.e. mobile learning. A SLM is ideal for this. Yes, the parameters are less, yes the inputs are less (i.e. the amount of text you can input at one time – but it is still quite large – highly doubt someone will burn through it) and yes, there are a couple of other minuses. However, a huge plus is that the whole water thing is dramatically lowered.
Oh, didn’t know about water and those CPUs for AI? It takes a lot of water to cool them down, and there are real worries about the environment, especially since the amount of water in relation to droughts is nothing to ignore.
For example, Microsoft reported 42% of the water it consumed for its data centers were from areas with “water stress” in 2023. Google? 15%.
Hello Doctor? Is there an LLM you can recommend for my learning?
There continues to be either an ignorance or lack of foresight when it comes to what LLM or LLMs to use. This on top of the continuance of seeing the same, seriously the same stuff vendors are using Gen AI for in their system. Let’s run them down – assessment tool – some are able to do multimodal because the LLM can; ditto on the content creator. Tada!
There is a couple that are starting to use Gen AI in the form of the WalkMe solution, which besides the WM folks never figuring out a pricing model for learning system vendors could use, the offering has limitations. The cool angle of say a learning guide (Cornerstone for example has one) is that it is available on the learner side too. Most of these things are admin only. Think of the WalkMe as a DAP – that’s really what it is.
What – another term? This has nothing to do with AI, but DAP – digital adoption platform – track record in the industry is mixed. There are enough of them out there to showcase at a show, but not that many ever appear, and those that do – it looks cool, but applying it into a system per se, is all over the map.
Thus, there are vendors now using DAP with Gen AI on their learner side. I think that is brilliant. Because think this way – who is going to run into more issues – the learner or the admin? My experience is the learner, especially nowadays where providing how to guides, etc. just isn’t on the top of rollouts by L&D or Training, let alone Marketing using a learning system for their customer base.
Anyway, I think you are going to see more learning system vendors using a DAP with Gen AI on their learner side by the end of 2025. I know of one vendor who already has it being developed as we speak. Depending on how the Gen AI model works – and it must be multimodal – along with more pluses than minuses on what it is going to be used in the system, it opens up a lot of opportunities.
Right now, I see a lot of chatbots in systems. Those are the worst. I hate ’em. I hate seeing them on web sites – a lot of vendors have these babies pop up – even one vendor goes old school MIDI and plays music when you rumble into their site. Hey, it is 1995 again!
I digress.
A worry – this angles more on the marketing side of the house, but let’s face it learning system vendors succeed or fail based on their marketing strategy especially with the hype and wide usage of content marketing. Nowadays zing over to the net and see how much garbage is out there thanks to content marketing. It is beyond awful.
Let’s say though, you are allowing Gen AI to create your learning content that you have visible when someone goes to your web site – I’m talking to you – company and your L&D or Training person who may not have the skill sets for marketing, so they go Gen AI.
Then let’s toss in those vendors, in e-learning, regardless of authoring tool, learning system, learning tech, publisher, etc. who decide to use Gen AI to create content for the web site.
What do you think may happen?
Oh, let’s not forget that an LLM is trained on the data it finds off the internet (sorry to surprise many who had no idea – well, surprise!!) – I mean how do you think it figures out the words to use and info?
The LLM is scouring the net to train on.
And the content is being generated by AI.
In basic sense it becomes the snake eating its tail. In the actual term sense, it is called Model Collapse.
The LLM starts to train itself on the same data and content generated by previous versions of itself.
Think about that, the next time, you decide to save time by just using Gen AI to create all your content on the web or in a document that an LLM can find – because you are aware of that little issue around authors, publishers, videos and so on, with an LLM extracting the data to train itself on.
The end result of all this? The LLM will output gibberish. And someone will still use it.
Ice cream break anyone?
The Data you have missed
- S&P Global Partners with Accenture rolled out an AI training program for 35,000 employees
- 40% of the workforce will need to reskill over the next three years to stay relevant (IBM Institute of Business Value in talking to a vareity of executives) – Hey, who is that guy who keeps screaming about reskilling as the focus and not just upskilling? Hmm, I think he is writing this as you read.
- 85 million jobs (worldwide) could be disrupted in medium and large businesses as soon as 2025, due to automation (World Economic Forum) – that’s automation with AI and it’s across quite of number of industries.
- Over in China, Longwriter-6k can generate over 10,000 words (which trust me, is a lot) – the downside? Those pesky hallucinations and stuff like spam and other garbage being outputted. The Pluses? Longwriter -9B-DPO did better overall in two categories than anything OpenAI, Anthropic, Meta, Mistral produced (Github)
- Microsoft’s Phi-3.5-V-4.2B (Vision, SLM) outperformed GPT-4o (latest from OpenAI); Phi 3.5 offers a mini version, a MOE version, and Vision version (Results based on the 5-shot MMLU)
- Phi 3.5 – all its versions are SLMs. There are vendors who use the LLM of Llama (Meta) – its one of the open source language models out there; but because nobody is using a SLM yet, Nvidia created Llama-3.1-Minitron 4B – (For those interested in learning more about pruning and distillation, LMK – otherwise, let’s continue)
Fun Stuff
Midjourney the folks suing Stability AI (and probably will win even if it goes to arbitration – uh, not talking about Stability here) now anyone can access more than 20 AI image generations. Midjourney is by far the best image generation model out there. There are others who are close, but MJ dominates. I should note there is a slight learning curve for it, but the communities are outstanding, especially one for newbies. If you want all the goodies, you pay. But still.
I use it, and getimg.ai (which rocks and offers a lot of image LLMs out there). It’s free version should suffice for a lot of people. Plus, that whole issue of generating copyrighted images or someone pulling an image of the web, ignoring that it might be copyrighted goes right out the window.
What you want those tool directories? I’ve used a few – and you can’t rely on just one, just like a grocery store – some are better at buying vegetables, others are better at buying stale bread.
- Futurepedia
- Creati.ai
- Producthunt – It isn’t just for AI, it’s for everything – same angle though – you pay us – and tada – you are listed. Then folks rank up or down. I love how they “review” them. Sorry, I’m laughing too hard now. That said, there are some cool ones in there.
- AI Tools Directory
- Freework.ai
- Github AI Directories – Github does far more than directories, but it’s a nice list of well, directories
Just like any directory site even in our industry, you will find “vendors” who are no longer in business but has a listing in the directory. This is a problem with AI platforms – i.e. products – plenty of companies go out of business in a short timeframe. I used several from late 23 into early 24, that are out of business today. When you have so many flooding the market, there are well, only so many that can generate enough revenue to offset the costs of those cpus needed for AI. That’s the problem. Even with capital being infused (and that is going down), at some point, those people putting up the money expect results.
Right now, AI is a losing business – revenue wise.
I always said from the get go that it reminds me so much of the dot com days, where a lot of products rolled out, a lot of money went in, and folks such as pets.com (sniff), Webvan (Instacart should thank this company), raised a lot of cash, and then were gone.
I think you are going to see the repeat here with a lot of AI products in the market and those coming.
Bottom Line
This is just a few tadbits of the latest with Gen AI without me trying to bore you too much. For fans of Gen AI, I think you will like this post. For others? A snoozer.
Here’s the thing though
AI isn’t going anywhere, and while it still is not at the level of a human when it comes to reasoning from a business standpoint, let alone a human standpoint (Beyond STEM and similar), it is coming for people’s jobs. Coders are already experiencing it.
Yes, there will be jobs lost, and yes, new too.
Which is why I won’t stop saying Reskill should be your priority, not upskilling. They can go hand in hand, but as with anything, one will eventually take over.
Today is still is upskilling
Tommorow?
You may not find out.
Because nobody reskilled you.
E-Learning 24/7