Gen AI and Your Workforce

  • Prompt leaking is getting worse – This happens with any LLM, I should add, but researchers focus on ChatGPT (since it is so popular). The versions they eye are 3.5 (free ChatGPT), 4 (the paid version, and better than 3.5), and ChatGPT Enterprise (the business version, which mainly runs on 3.5 Turbo – the model vendors in our space tend to go with). 4.5 Turbo, available only to developers at this time, is also starting to show issues of its own. The idea that no employee at your company will try to jailbreak ChatGPT may be foolish on your part. The way to counter it, for all those CLOs out there, is to come up with a game plan. Ditto for the company itself – hello, Compliance Officer and CTO. L&D will end up getting involved – those who oversee it at companies without holding the CLO title (it depends on the company, organization, etc.). Ditto again if you are running training for an internal workforce.

Some zingers that have come as a result of prompt leaking include:

  • Getting directions for making illicit items such as napalm (via the "grandmother" hack)
  • Getting personal information – addresses, phone numbers, and other personal data – on a random individual (someone not working at your company). This was achieved multiple times; one method was repeating the word "poem" over and over again.
  • Getting private company information, including the company's financials. This was the company's own information, pulled not from a public generative AI but from an enterprise version.

Prompt leaking doesn't take a Ph.D., and that is the problem. Anyone can jailbreak, even by chance. Granted, the cases you hear about most often are due to researchers testing the systems or running test samples. Just recently, a publication called Decrypt was able to prompt leak (jailbreak) Grok, the LLM from Elon Musk that is available only to paid subscribers of X (formerly known as Twitter).

While OpenAI will fix each exploit, the challenge is that this isn't going to be a one-off. There will be more.

For those thinking, "No worries, I'll use another LLM and it won't happen to me," recognize that any LLM can present prompt leaking – even one working off your own data and information rather than public sources such as the internet.
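To make that "game plan" a little more concrete: below is a minimal sketch of one control a compliance or IT team could put in front of an LLM – screening responses for obvious personal data before they reach the user. Everything here (the patterns, the screen_response helper) is a hypothetical illustration, not any vendor's actual guardrail; real deployments would lean on proper moderation or DLP tooling.

```python
import re

# Illustrative only: a naive output screen a compliance team might place
# in front of an LLM. Real guardrails (moderation APIs, DLP tools) go further.

# Hypothetical patterns for data that should never leave the model unreviewed.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-style numbers
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # US phone numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),      # email addresses
]

def screen_response(text: str) -> tuple[bool, str]:
    """Return (was_blocked, text), redacting anything that matches a PII pattern."""
    blocked = False
    for pattern in PII_PATTERNS:
        if pattern.search(text):
            blocked = True
            text = pattern.sub("[REDACTED]", text)
    return blocked, text

blocked, safe = screen_response("Call me at 555-123-4567 about the Q3 numbers.")
print(blocked, safe)  # True  Call me at [REDACTED] about the Q3 numbers.
```

A filter like this won't stop a determined jailbreaker, but it is the kind of concrete, auditable step a CLO or CTO can put in a game plan today.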

  • Lazy results – A brand-new issue starting to appear. In November, people started complaining on Reddit that they got far fewer results when using ChatGPT. Many folks scoffed. Well, surprise! It is happening with ChatGPT-4 (the paid version). The AI responds with fewer characters (words) or says something along the lines that it can't respond (even though the question is something it could respond to).
  • AI bias – It is real, and it happens even when you shove your own materials – your own reports, data, etc. – into an LLM. This is a problem with generative AI. I don't care if you are using an enterprise version of an LLM; AI bias can occur.
  • Hallucinations – One vendor in our space identifies them as "mistakes" (to folks just starting to use generative AI). A mistake is when I am not paying attention to my cell phone and miss a call from the President of my country. Hallucinations produce fake and false information, and it doesn't matter whether it is your own materials and information or not. Way too many companies believe that if it is their own stuff going into the LLM, this won't happen. Sorry, it will.
  • I cannot stress this enough: there is no such thing as 100% accuracy. Even vendors who say it is 98% accurate can't guarantee that number with your content. It might end up 95%, 92% – who knows. The point here is this: if you accept, say, 3% inaccuracy, and it shows up in work your employees are doing with the tool – work that happens to be crucial to the company – is that okay with you? Especially if the employee has no idea it is wrong? I don't think so.
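To put that 3% in perspective, here is the back-of-envelope math. Every figure below (headcount, queries per day) is an assumption for illustration, not data from any vendor:

```python
# Back-of-envelope: what a "97% accurate" system means at scale.
# All figures below are illustrative assumptions, not vendor data.

employees = 5_000          # assumed headcount
queries_per_day = 10       # assumed queries per employee per day
error_rate = 0.03          # the 3% inaccuracy discussed above

wrong_per_day = employees * queries_per_day * error_rate
print(f"{wrong_per_day:,.0f} wrong answers per day")   # 1,500 wrong answers per day
print(f"{wrong_per_day * 250:,.0f} per work year")     # 375,000 per work year
```

Even a "small" error rate compounds into hundreds of thousands of wrong answers a year once an entire workforce is using the tool.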

The Latest LLMs that vendors are using, and perhaps your company is too

  1. Create a course covering the basics of generative AI, specifically hallucinations, which exist in any LLM (you can skip that detail and just say ChatGPT, or whatever solution you are using). Hallucinations mean fake or false information – and state it as plainly as that.
  2. Say that before they apply it to a task or simply use it to learn new information (or whatever vernacular you decide on, based on your audience), the learner needs to review the output before sending it off (say the person is creating a deck and it is a cut-and-paste job, or they are taking that info and applying it to some task). Correct the information. If they are unsure, have them ask someone at the company to validate it – and let them know this is a positive thing to do (I think a lot of people won't, but if you are using mentors, this is a plus for them).
  • Do not start with an ambiguous question. The results will be all over the place – plus your company is paying for that LLM, and token fees will escalate when folks enter rambling statements or phrases as the prompt. If you are a company with 5,000 or 100,000 employees, those numbers skyrocket, and your company will take a financial hit – especially at 50,000 or more (see the cost sketch after this list). This, again, mostly concerns employees using ChatGPT (for example) for work – and a lot are. This could be another screen recording plus some text, or just a screen recording. You might have a course title and then specific modules under it. Ensure your vendor provides analytics on how often a person went to a specific module and for how long.
  • You can create a bulleted list on anything specific, for example:

Newsletter on Generative AI
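On the token-fee point above, a rough cost model shows why vague prompts at scale become a financial hit. All of the numbers here – the per-token rate, prompt size, usage – are hypothetical placeholders; substitute your vendor's actual rates:

```python
# Rough token-cost model for workforce LLM usage. Every number here is an
# assumption for illustration; check your vendor's actual per-token rates.

price_per_1k_tokens = 0.01   # assumed blended input+output rate, USD
tokens_per_prompt = 800      # vague, rambling prompts burn more tokens
prompts_per_day = 10         # assumed per-employee usage

def annual_cost(employees: int, workdays: int = 250) -> float:
    daily_tokens = employees * prompts_per_day * tokens_per_prompt
    return daily_tokens / 1_000 * price_per_1k_tokens * workdays

for headcount in (5_000, 50_000, 100_000):
    print(f"{headcount:>7,} employees: ${annual_cost(headcount):,.0f}/year")
# 5,000 -> $100,000/year; 50,000 -> $1,000,000/year; 100,000 -> $2,000,000/year
```

The exact figures will differ, but the shape of the curve won't: headcount multiplies every inefficiency in how people prompt.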

  • Vendors do not know all the regulations that individual countries or the EU have implemented for AI. They are not all the same; vendors must ensure their systems are compliant. California has its own regulations in the States – similar to the EU's, but not identical; New York has its own, and other states are heading that way (though clearly not all). India has its own, the US has started on federal rules (though I am not expecting them to reach EU level), China has its own, Australia has its own, and other countries I did not mention may have their own. Only one vendor I am aware of has a third-party company keeping track of all the regulations and passing them through to the vendor. These regulations will continue to change; a vendor must know this, understand it, and implement it. And it will take more than one person, IMO, to deal with this.
  • Some of the responses I have heard around not listing information in their systems about hallucinations that might occur are frightening to me. I have seen this with a few vendors. Holy moly. And their reasons for doing it are concerning. Again, not just one vendor – multiple vendors. This isn't a game; this is a must. If you are creating a course (which is a popular generative AI use case with vendors), you need to let folks know this. Ditto for, say, AI bias. I have seen only two vendors really stipulate this – one of them on everything that generative AI does in their system, which is the way to go. The other vendor, which has generative AI on the learner side, mentions it, but in small text.
  • None of them have context windows (the prompt window you see) that, through icons or whatever, allow the admin (or whomever) to flag that a piece of information is incorrect, supply the correct response, and submit it – despite their seeing this in any LLM out there, such as ChatGPT-4. The AI learns from itself, so if you edit the response – which even the vendor does – and save it, the generative AI doesn't know this. Thus, with a wrong response, the AI says, okay, it must be right. (A sketch of what capturing that feedback could look like appears after this list.)
  • Pricing for those tokens – the cost to you – is really low right now: first, because the cost per character is low, and second, because among systems that have gen AI, only a few have it on the learner side, so yes, it should be low cost to the client. A couple are rolling it out on the learner side next quarter. What I have seen are three pricing angles:
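And on the missing correction workflow noted above: here is a minimal sketch of what capturing admin feedback could look like – an entirely hypothetical correction log, not any vendor's actual design. The point is that a wrong answer and its human fix need to be recorded somewhere the AI pipeline can actually use (fine-tuning, retrieval updates), rather than just editing the saved text:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the correction capture described above: the system
# records the flagged answer and the human-supplied fix, so it can be reviewed
# and fed back into the model (e.g., via fine-tuning or retrieval updates).
# Without a step like this, edits to a saved response never reach the AI.

@dataclass
class Correction:
    prompt: str
    model_response: str
    corrected_response: str
    submitted_by: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

correction_log: list[Correction] = []

def flag_response(prompt: str, model_response: str, fix: str, user: str) -> None:
    """Store an admin's correction for later review / model improvement."""
    correction_log.append(Correction(prompt, model_response, fix, user))

flag_response(
    prompt="What is our PTO carryover limit?",
    model_response="Unlimited carryover.",          # wrong per (hypothetical) policy
    fix="A maximum of 5 days may be carried over.",
    user="admin@example.com",
)
print(len(correction_log), "correction(s) queued for review")
```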
