The adoption of GenAI is becoming a priority within company strategies this year as businesses seek innovative solutions. Yet implementing AI is not without its complexities, and governance must sit at the core of the process. We previously discussed the importance of defining policies, principles and templates to mitigate the risks of AI, but how do you build a use case for AI? Here are the 5 stages to build an AI use case.
1. What Input is Required
The successful use of a Large Language Model (LLM) begins with the input: consider what information you have to feed into the model and what result you want back. Here are some key aspects to consider when creating the input:
- Identify Data:
Pinpoint the relevant data in the platform or files that needs to be submitted to the LLM.
- Data Formatting:
Organise the data before submission, ensuring clarity and an easy-to-understand objective.
- Data Classification:
Determine if open-source AI can be utilised based on the classification of the data.
- Word Submission:
AI models are typically priced in tokens. As a common rule of thumb, one token corresponds to roughly 0.75 words, so submitting 10 words would cost around 13 tokens. Ensure that you understand the token pricing model of your chosen provider.
- Response Size:
Do you have a limit for your response length? If you need it to be a certain number of characters e.g. 250 characters, specify this in your input.
- Question Submission:
Determine whether you can get the desired response by submitting a single question, without a conversational exchange. Conversational use cases are much more complex, as the user needs unfettered access directly to the LLM.
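The token arithmetic above can be sketched in a few lines. This is an illustrative estimate only: the 0.75-words-per-token ratio is a common rule of thumb, and real tokenisers vary by model, so the function name and ratio here are assumptions for the sake of the example.

```python
import math

def estimate_tokens(text: str, words_per_token: float = 0.75) -> int:
    """Rough token estimate using the ~0.75-words-per-token rule of thumb."""
    word_count = len(text.split())
    return math.ceil(word_count / words_per_token)

# e.g. a 10-word prompt comes out at roughly 13-14 tokens under this estimate
prompt = "Summarise the attached supplier contract in 250 characters or fewer."
print(estimate_tokens(prompt))
```

For accurate billing figures you would use the tokeniser published for the specific model rather than a word-count heuristic.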
2. Choosing the Right LLM
How do you know which LLM is right for you? Selecting an appropriate model is crucial to the success of your use case. Here are some things to consider when making this decision:
- LLM Type:
What type of LLM are you looking to use, based on the input and data you have? For example, is a language or image model more suitable for your needs?
- Capabilities:
Clearly define the capabilities required, such as generating new text or locating specific knowledge.
- Cost Model:
Understand the cost implications, including the cost per token for both input and output in the selected model.
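The cost comparison above can be made concrete with a small calculation. The model names and per-token prices below are illustrative placeholders, not real published rates; the point is that input and output tokens are often priced differently, so both sides of the exchange matter.

```python
# Illustrative per-1,000-token prices (placeholders, not real published rates).
MODELS = {
    "model-a": {"input_per_1k": 0.0005, "output_per_1k": 0.0015},
    "model-b": {"input_per_1k": 0.0100, "output_per_1k": 0.0300},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: input and output tokens are priced separately."""
    rates = MODELS[model]
    return (input_tokens / 1000) * rates["input_per_1k"] \
         + (output_tokens / 1000) * rates["output_per_1k"]

# Compare the same request (500 tokens in, 250 tokens out) across models
for name in MODELS:
    print(name, round(estimate_cost(name, 500, 250), 6))
```

Multiplying the per-request figure by expected monthly volume gives a quick sanity check on whether a more capable (and more expensive) model is justified.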
3. Testing the Waters
Before fully implementing your chosen LLM, make sure it is the best model for your desired outcome. To do this, consider the following steps:
- Questioning Approach:
Detail how questions were posed during previous testing.
- Result Evaluation:
Analyse the results obtained in previous examples and assess whether they are immediately usable or require further refinement to reach the desired result.
- Transformation Process:
Understand how results need to be transformed into a usable format.
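One way to structure the evaluation step is a small harness that records each test question alongside whether its response was usable as-is. Everything here is a hypothetical sketch: the length check stands in for whatever acceptance criteria your use case actually defines.

```python
def evaluate_responses(responses, max_chars=250):
    """Flag responses that need refinement before use.

    `responses` is a list of (question, answer) pairs; the max_chars check
    is a stand-in for the use case's real acceptance criteria.
    """
    results = []
    for question, answer in responses:
        usable = len(answer) <= max_chars
        results.append({"question": question, "usable_as_is": usable})
    return results

# e.g. review which test prompts produced outputs needing transformation
report = evaluate_responses([("Summarise clause 4", "Clause 4 limits liability.")])
print(report)
```

Keeping this record per question makes it easier to compare candidate models on the same test set before committing to one.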
4. Inference Data Requirements
To ensure accurate outputs, meticulous consideration of inference data is essential. Address the following:
- Data Needs:
Identify the data that the LLM requires but can’t be submitted with the prompt.
- Data Refresh:
Assess the frequency of data changes and whether regular updates are necessary.
- Data Location:
Determine where the required data is currently stored.
- Data Accessibility:
Clarify if the requestor possesses the needed data or if it needs to be sourced from elsewhere.
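The data-refresh question above reduces to a simple staleness check. The threshold and timestamps below are illustrative assumptions, not values from the article:

```python
from datetime import datetime, timedelta

def needs_refresh(last_updated: datetime, max_age: timedelta) -> bool:
    """True if the inference data is older than the allowed maximum age."""
    return datetime.now() - last_updated > max_age

# e.g. refresh supplier reference data if it is more than 7 days old
stale = needs_refresh(datetime(2024, 1, 1), timedelta(days=7))
```

In practice the acceptable age depends on how quickly the underlying data changes; fast-moving data may need a scheduled refresh pipeline rather than an on-demand check.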
5. Output Management
Efficient handling of output is critical for deriving maximum value from your AI use case. Consider the following:
- Presentation Format:
Define how the output should ideally be presented.
- Storage Requirements:
Decide if the output needs to be stored for future use and determine where.
- Indexing and Searchability:
Evaluate whether indexing and searchability are necessary for efficient retrieval.
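The storage and searchability points above can be sketched with a simple in-memory store and inverted keyword index. This is a minimal illustration only; a production system would likely use a database or search engine, and the class and method names here are assumptions.

```python
from collections import defaultdict

class OutputStore:
    """Store generated outputs with a simple inverted index for keyword search."""

    def __init__(self):
        self.outputs = {}              # output id -> stored text
        self.index = defaultdict(set)  # lowercased word -> set of output ids

    def store(self, output_id: str, text: str) -> None:
        """Save an output and index each word it contains."""
        self.outputs[output_id] = text
        for word in text.lower().split():
            self.index[word].add(output_id)

    def search(self, word: str) -> list:
        """Return all stored outputs containing the given word."""
        return [self.outputs[i] for i in sorted(self.index.get(word.lower(), ()))]

# e.g. store a generated summary and retrieve it later by keyword
store = OutputStore()
store.store("summary-001", "Supplier contract summary for Q3")
```

Deciding up front whether outputs need this kind of retrieval avoids retrofitting indexing after outputs have accumulated in an unsearchable format.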
6. Next Steps
Are you just getting started on your AI journey? For a deeper dive into GenAI and how it can help transform your Customer-Supplier management strategy, download our white paper.