Survey Report: AI Adoption Within Learning and Development Teams

Peter Enestrom
CEO & Founder, Learnexus

Executive Summary

This report presents findings on how AI technologies are being incorporated into the learning and development sector. A significant finding is that 68% of Learning & Development teams encounter at least some resistance to using AI for creating learning materials.


68% of Learning & Development Teams face at least some resistance in using AI to help create learning assets.

Learnexus March 2024 Survey on AI Adoption

Large and regulated organizations exhibit the greatest reluctance, primarily due to concerns over proprietary information ownership, potential legal issues related to plagiarism and copyright conflicts, and the accuracy and safety of the training content.

This resistance is exemplified by companies contemplating bans on AI-enabled tools such as Vyond and Articulate, and by reluctance to use public AI systems like ChatGPT for fear of exposing proprietary data.

Large and regulated organizations exhibit the greatest reluctance to AI due to concerns over proprietary information ownership, potential legal issues…and the accuracy of the training content.

Learnexus March 2024 Survey on AI Adoption

To counter these challenges, many firms are developing internal Large Language Models (LLMs) to maintain control over their data and address security concerns. These in-house AI solutions serve as content assistants, though they require validation by subject matter experts, indicating they are viewed more as time-saving tools than autonomous content creators.

Despite the adoption of these AI tools, there remains a pervasive skepticism regarding the quality and accuracy of AI-generated content.

We expect the speed of adoption within Learning & Development teams, particularly at large and regulated companies, to lag behind other corporate functions.

Learnexus March 2024 Survey on AI Adoption

Managers express concerns about AI’s tendency to fabricate information and about the potential devaluation of human input in content creation. The result is a cautious approach to integrating AI into the workflow: it is used primarily for drafting outlines or basic communications rather than fully developed content.

We therefore expect the speed of adoption within enterprise Learning & Development teams, particularly at large and regulated companies, to lag behind other parts of the organization.

Large & regulated organizations face the most resistance because of fears around ownership

The reluctance to integrate AI in Learning & Development (L&D) roles, particularly in content development, is underscored by proprietary and security concerns.

I’ve heard of Instructional Designers not being allowed to use chat bots because the company’s information is proprietary.

Learning Professional

There are instances where companies restrict the use of AI tools like chatbots to protect sensitive information. Some organizations are contemplating banning popular AI-driven platforms such as Vyond and Articulate, fearing that the embedded AI might lead to unintended disclosure of proprietary data.

My company is considering banning Vyond and Articulate because they include AI.

Learning Professional

The hesitation also stems from the potential legal entanglements around plagiarism and copyright violations, with a sentiment that the use of open AI systems like ChatGPT could expose a firm to significant risks, especially for larger corporations.

We do not and will not use the open ChatGPT because of the way it works. There is almost no way to manage for potential plagiarism or even copyright conflicts. It also learns from the user, which potentially exposes proprietary information back out into the Web. It’s a legal nightmare waiting to happen for any corporation. The bigger you are, the bigger the risk.

Learning Professional

Furthermore, the need for content accuracy and adherence to stringent government regulations has led some companies to consider developing internal AI solutions, though such plans often remain under wraps.

There is zero discussion within my company currently (I am assuming they are maybe developing an internal system, but that hasn’t been communicated). However, I recognize this is due to government contract restrictions and the need to safeguard information. Another reason is because we need to make sure content is 100% accurate or safety is impacted for trainees.

Learning Professional

These challenges highlight the intricate balance L&D teams must navigate between leveraging AI’s potential and safeguarding against its perceived threats.

Many organizations are leveraging internal LLMs to safeguard data

The development of proprietary AI solutions, such as internal LLMs, reflects a strategic move by organizations to harness the benefits of AI while mitigating risks associated with external platforms. Companies are pioneering the creation of in-house AI tools tailored to their specific needs.

Our IT innovations team has developed an internal version of ChatGPT that we can use as a content assistant. However, just like the other one, everything it generates needs to be validated by SMEs, so it’s a time-saver, but not a ‘content generator’.

Learning Professional

These tools serve as content assistants, streamlining the content creation process by providing initial drafts and suggestions that, while requiring further validation by Subject Matter Experts (SMEs), significantly reduce the time and effort involved in content generation. This approach not only safeguards proprietary information but also allows firms to closely monitor and control the quality and accuracy of the AI-generated content, ensuring it aligns with their stringent standards and operational requirements.

I’m working at one of the Big Four and they made their own chatGPT app that they have released for internal use. My bosses have directed us to see where it can be employed.

Learning Professional

The direction from leadership to explore the application of these internal AI tools signifies a proactive stance towards integrating AI into their workflow, albeit with a careful, measured approach to its deployment and utilization.

Managers remain concerned over the quality and accuracy of AI

The cautious approach to AI adoption in content development is reflected in the sentiments expressed by professionals who emphasize the importance of human oversight in AI-generated outputs.

We’ve been told to experiment with it if we want but at this point the quality is poor enough no one is using it for anything except creating a general framework for our scripts.

Learning Professional

While organizations encourage experimentation with AI tools, the quality concerns limit their use primarily to creating rough frameworks or initial drafts.

I’d be very hesitant to use it for anything more than that considering ChatGPT’s habit of making up information on a pretty regular basis.

Learning Professional

The skepticism around the accuracy of AI, with instances of it generating misleading or fabricated information, has led to hesitancy in relying on it for comprehensive content creation.

AI can be a great tool for efficiency, such as content length truncating or voiceover, but letting the machine develop the content is simply lazy, and will lower your value to the organization.

Learning Professional

Instead, AI is viewed as a supportive tool to enhance efficiency in specific tasks, such as truncating content length or assisting in voiceovers.

I only use it to draft up communications about a new course. Other than that I haven’t been satisfied with AI outputs.

Learning Professional

This pragmatic use of AI, where it serves to draft preliminary communications or provide an outline for new courses, underscores the prevailing opinion that while AI can streamline certain aspects of content development, the critical and creative input necessary for high-quality, valuable content must remain a human-driven process.

We are only allowed to use one of the Generative AI tools for content development. That said, we can only use it for an outline, and we have to review the content before we start working with it. We are not allowed to use it as it is.

Learning Professional

Consequently, the integration of AI in content development workflows is carefully managed, with a clear emphasis on its role as an assistant rather than a replacement for human expertise.

Slow Adoption But Roles Will Change

Even though AI adoption within Learning & Development teams is expected to be slower than in other corporate functions, roles within Training & Learning groups will undoubtedly shift.

Over 75% of Learning & Development professionals expect their job roles to substantially transform over the next 3-5 years. It will thus be more crucial than ever for Learning & Development leaders to adapt to this new normal and ensure they are ahead of technological change.


Apply to join LHQ, the Community for Learning Leaders

We invite you to apply to join LHQ, the number one place for L&D leaders to learn from each other through expert-led roundtables, our active forum, and data-driven resources.

You’ll have access to reports like these on a weekly basis, must-know stats & data around AI and Learning & Development, and shared knowledge from your peers. Plus, it’s free to join. Apply to join here.