July 15, 2024
In the world of generative AI (Gen AI), the true challenge lies not just in providing clear and rigorous prompt instructions (explicit knowledge) but also in applying the unstated reasoning and logic embedded within the tasks (implicit knowledge).
This is crucial for enhancing the performance and efficiency of large language models (LLMs), like those integrated in the Gen AI Amplifier for Software & Quality Engineering, a platform that accelerates and improves the effectiveness of quality engineering of applications from the start – and at every step of the way. This article delves into the importance of both explicit and implicit knowledge, illustrating these concepts through practical examples from the Gen AI Amplifier platform.
In academic terms, Polanyi (1966) famously stated, “We can know more than we can tell.” Polanyi’s paradox, named after the British-Hungarian philosopher Michael Polanyi, proposes that a significant portion of human knowledge, including our understanding of the world and our own abilities, remains beyond our explicit comprehension. This highlights the nature of implicit knowledge: much of what we understand and use in our daily tasks is not easily articulated.
While LLMs excel in processing explicit knowledge due to their training on vast datasets, they often struggle with tasks that require implicit knowledge. This limitation is evident in scenarios where nuanced understanding and contextual awareness are critical.
Consider an example involving a “book-ordering system.” Here, the parameters have multiple equivalence classes, and an additional parameter, “Ordering period,” has been added.
We ask ChatGPT 4o to “generate test cases” for this scenario, and it displays 108 test cases to cover all possible combinations.
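The count of 108 comes from the cross product of all equivalence classes. The article does not reproduce the parameter table, so the split below is a hypothetical one that yields the same total (3 × 3 × 3 × 4 = 108); the parameter names and values are illustrative assumptions:

```python
from itertools import product
from math import prod

# Hypothetical equivalence classes for the book-ordering system;
# the actual parameter table is not reproduced in the article.
params = {
    "Customer type":   ["new", "returning", "guest"],
    "Payment method":  ["card", "invoice", "voucher"],
    "Delivery option": ["standard", "express", "pickup"],
    "Ordering period": ["weekday", "weekend", "holiday", "pre-order"],
}

# Exhaustive enumeration: every combination of one class per parameter.
all_combos = list(product(*params.values()))
print(len(all_combos))  # 3 * 3 * 3 * 4 = 108
```

Enumerating every combination is exactly what a naive “generate test cases” prompt produces, and the count grows multiplicatively with each new parameter, which is why “Ordering period” alone inflates the suite.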
To supply this implicit knowledge, the process should:
Document the testing process before merely generating test cases, and break it down into manageable steps. Have the generative AI focus on pre-steps, asking clarifying questions about the business and risk value of the input before generating test cases.
Implement a “Test Advisor” to guide users in selecting appropriate test design techniques based on the context and requirements. The “Test Advisor” can prompt users with questions to determine the most suitable testing method.
Clearly delineate which parts of the process should be handled by Gen AI and which by traditional code. Use Gen AI for initial data combination suggestions and traditional code for precise pairwise testing generation.
Enable users to interact with the system to integrate deeper contextual knowledge and typical testing scenarios. Users can provide feedback and additional context to refine test case generation.
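The third recommendation, using traditional code for precise pairwise generation, can be sketched with a greedy all-pairs algorithm. This is a minimal illustrative sketch, not the Gen AI Amplifier’s actual implementation, and the parameter table is again a hypothetical stand-in for the one in the example:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy all-pairs selection: at each step, pick the full combination
    that covers the most not-yet-covered (parameter value, parameter value)
    pairs, until every pair is covered."""
    names = list(parameters)
    # Enumerate every value pair across every pair of parameters.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add((a, va, b, vb))
    suite = []
    while uncovered:
        best_case, best_gain = None, -1
        for combo in product(*(parameters[n] for n in names)):
            case = dict(zip(names, combo))
            gain = sum(1 for (a, va, b, vb) in uncovered
                       if case[a] == va and case[b] == vb)
            if gain > best_gain:
                best_case, best_gain = case, gain
        suite.append(best_case)
        uncovered = {(a, va, b, vb) for (a, va, b, vb) in uncovered
                     if not (best_case[a] == va and best_case[b] == vb)}
    return suite

# Hypothetical book-ordering parameters (3 * 3 * 3 * 4 = 108 full combinations).
params = {
    "Customer type":   ["new", "returning", "guest"],
    "Payment method":  ["card", "invoice", "voucher"],
    "Delivery option": ["standard", "express", "pickup"],
    "Ordering period": ["weekday", "weekend", "holiday", "pre-order"],
}
suite = pairwise_suite(params)
print(len(suite))  # far fewer cases than the 108 exhaustive combinations
```

Because every pair of values still appears in at least one test case, defects triggered by two-way parameter interactions remain detectable, while the suite shrinks from 108 cases to roughly the product of the two largest class counts. This is the kind of deterministic step that belongs in traditional code rather than in an LLM prompt.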
Implicit knowledge is a cornerstone of effective AI utilization, transforming basic responses into deeply informed, contextually aware solutions. Through the Gen AI Amplifier, we’ve experienced the profound impact of integrating both explicit and implicit knowledge into our AI systems.
By iteratively refining our prompts and embedding domain expertise, we can unlock the full potential of LLMs, ensuring that they not only meet explicit requirements but also address the nuanced challenges of real-world applications. This approach not only enhances AI performance but also drives greater efficiency and accuracy in software development processes, paving the way for more reliable and innovative solutions.
As Polanyi might say in the context of generative AI, “We can know more than we can instruct.” This underlines the necessity of having an expert in the loop to harness both explicit and implicit knowledge effectively.
CTO for Quality Engineering & Testing, Sogeti