Building Bespoke AI Products: 5 Key Options

Many organisations are now looking to build AI into their operations and customer journeys, and a custom generative AI solution is often the best choice.
Tools such as ChatGPT do allow for basic customisation within the platform's regular user interface, but to create a truly bespoke solution we need to consider options that allow for deeper integration.
Essentially, these services let an organisation combine a pre-trained model with its own data, so the model behaves in a way customised to the organisation's needs. You reap the benefits of capable AI without having to train your own model - a substantial undertaking in cost, data and expertise.
What this means for us as a digital products agency is that we can create highly customised solutions that focus on the product itself - and therefore on the user experience and on how your brand naturally comes across.
Whether you're a CTO, Head of Product or CEO, as a strategic leader you need to know your options.
Here we look at five options currently available, along with their common use cases.
1. Google Vertex AI: For End-to-End Machine Learning Operations
Vertex AI offers a unified platform for building and managing custom ML models at scale. It provides access to Google's powerful foundation models for highly tailored solutions.
Google Cloud's Vertex AI is a unified MLOps (Machine Learning Operations) platform, engineered to support businesses in building, deploying, and managing machine learning models at scale.
It provides a comprehensive toolkit that addresses every stage of the ML lifecycle, from data preparation and custom model training through to deployment, monitoring, and governance.
A key advantage is its access to Google's own powerful foundation models, such as Gemini and Imagen, which can serve as a base for custom solutions.
Where the Vercel AI SDK (covered below) excels at rapid prototyping by tapping into multiple different models, Google Vertex provides the tools to create something truly custom based on training data the organisation provides.
The main characteristics are:
Ideal Business Cases
Vertex is particularly well-suited for:
Technical Considerations
Google's AI services are worth considering even if you aren't already in the Google ecosystem. Many companies are on Azure or AWS, but Google is still worth a look simply because its models are very strong and its pricing can be more attractive.
A primary consideration is, again, data - and with Vertex, you need as much of it as possible. The development of effective custom models within Vertex AI is fundamentally dependent on access to substantial, high-quality, and relevant training datasets. Vertex AI readily integrates with GCP data sources like BigQuery and Cloud Storage, which can be an advantage if your data already lives there.
Vertex requires team expertise in machine learning concepts and MLOps principles, and a working knowledge of the broader Google Cloud Platform and its services is an advantage. In this respect, the platform suits enterprises that already have a data team.
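To make the data point concrete, here is a minimal Python sketch of the kind of sanity check worth running before any custom tuning: shuffle labelled examples into train/eval splits and fail early when the dataset is too small. This is illustrative only - the threshold is an assumption, not a Vertex AI requirement, and real dataset formats depend on the tuning method you choose.

```python
import random

MIN_EXAMPLES = 100  # illustrative threshold; real needs vary by task and model


def split_for_tuning(rows, eval_fraction=0.1, seed=42):
    """Shuffle labelled examples and split them into train/eval sets,
    raising early if the dataset is likely too small to tune on."""
    if len(rows) < MIN_EXAMPLES:
        raise ValueError(
            f"only {len(rows)} examples; effective custom tuning usually needs far more"
        )
    shuffled = list(rows)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cut], shuffled[cut:]
```

Holding back an evaluation split like this is what lets you measure whether the tuned model actually outperforms the base model on your own tasks.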
https://cloud.google.com/vertex-ai
2. OpenAI API & Fine-Tuning: Tailoring to Niche Tasks
OpenAI provides programmatic access to its advanced large language models (LLMs), including the GPT series, via an API.
Beyond standard prompting, a key capability offered is fine-tuning. This process allows organisations to adapt these powerful, pre-trained foundation models by further training them on their own specific datasets.
The result is a customised model that exhibits improved performance, adheres to a particular style, or possesses specialised knowledge relevant to unique requirements.
Whilst this isn't a fully custom model in the way Vertex allows, the result can feel much more personalised, because the model has been further trained on a curated set of example prompts and responses.
The main characteristics are:
Ideal Business Cases
These are particularly well-suited for:
Technical Considerations
OpenAI advises that dataset quality is more important than quantity, though still, hundreds or even thousands of well-crafted examples are often needed for noticeable improvements. This data must be highly representative of the tasks the fine-tuned model is expected to perform post-deployment.
From a skills perspective, an understanding of LLM prompting techniques ("prompt engineering") is also beneficial, as well-crafted prompts in the training data lead to better fine-tuned models. However, context windows are getting larger and larger, and it's possible that this kind of prompt customisation will become less important in the future.
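As a rough sketch of the preparation step, the snippet below converts question/answer pairs into the JSONL chat format described in OpenAI's fine-tuning guide - one JSON object per line, each holding a system/user/assistant message triple. The system prompt and pairs here are invented placeholders.

```python
import json


def to_finetune_jsonl(pairs, system_prompt):
    """Render (question, answer) pairs as JSONL lines in the chat
    fine-tuning format: one JSON object per line, each containing
    a system/user/assistant message triple."""
    lines = []
    for question, answer in pairs:
        lines.append(json.dumps({
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }, ensure_ascii=False))
    return "\n".join(lines)
```

Each line becomes one training example in the uploaded file, which is why the quality and representativeness of the pairs matter more than sheer volume.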
https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset
3. AWS AI: For Enhanced Customer Experience
Amazon Web Services (AWS) offers an extensive suite of AI services and machine learning capabilities, allowing organisations to build customer experience (CX) solutions using infrastructure provided by AWS.
They present a portfolio of interoperable services:
The core philosophy behind AWS AI for CX is to provide the components that can be built upon, forming the backbone of your CX stack.
Ideal Business Cases
These are particularly well-suited for:
Technical Considerations
Data is the main consideration here. High-quality, relevant data is essential and needs to be scoped out in advance.
For example, Amazon Personalize requires structured historical user-item interaction data, item metadata, and user metadata.
Data often requires preprocessing, cleaning, and transformation to meet the specific formatting requirements of each AWS service. This is a key consideration for these services, and expertise in this area is essential.
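As an illustration of that preprocessing step, this Python sketch shapes raw event dicts into the minimal interactions CSV that Amazon Personalize expects, with its required USER_ID, ITEM_ID, and TIMESTAMP columns. The input field names are invented for the example.

```python
import csv
import io

REQUIRED_COLUMNS = ["USER_ID", "ITEM_ID", "TIMESTAMP"]


def to_interactions_csv(events):
    """Transform raw event dicts into the Personalize interactions
    schema: a header row plus one row per user-item interaction,
    with the timestamp coerced to epoch seconds."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(REQUIRED_COLUMNS)
    for event in events:
        writer.writerow([event["user_id"], event["item_id"], int(event["timestamp"])])
    return buf.getvalue()
```

In practice this transformation step usually runs in a data pipeline (e.g. AWS Glue) before the file is uploaded to S3 for import.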
An integration strategy is another essential piece: seamless integration with existing enterprise applications such as CRMs, ERPs, websites and mobile apps is key.
This typically involves making secure API calls to the respective AWS AI service endpoints. AWS SDKs provide the necessary tools to facilitate these integrations, but a clear understanding of API versioning, error handling, and network architecture is needed.
https://aws.amazon.com/ai/generative-ai/
4. Vercel AI SDK: Prototyping, Quickly
The Vercel AI SDK emerges as a compelling toolkit specifically engineered to simplify the integration of AI capabilities directly into web user interfaces.
Its primary design goal is to provide developers with a streamlined approach to building AI-powered applications with first-class support for streaming text, structured data, and components.
This allows for dynamic, real-time interactions without complex backend configuration for managing AI model responses on the client side. Whilst it is not as deeply customisable as the other options here, its streamlined approach brings advantages of its own.
The main characteristics are:
Ideal Business Cases
The Vercel AI SDK is particularly advantageous for:
Technical Considerations
While the SDK itself is a frontend-focused library, you'll need a Node.js environment for development and deployment - meaning Vercel's solution is particularly useful for teams already building on their platform.
When it comes to data considerations, the Vercel AI SDK itself does not handle model training or data preparation for the AI models it connects to. Instead, the focus is on the quality and nature of the prompts sent from the frontend application, as these significantly influence the output.
Once again a robust integration strategy will be needed; however, integration is primarily at the frontend layer, and the SDK provides many helpers to make this as seamless as possible.
5. Anthropic Claude: Enterprise-Grade Conversational AI
Anthropic, an AI safety and research company, offers its advanced large language models (LLMs), primarily the Claude series, for enterprise integration. Their focus is on developing helpful, harmless, and honest AI, aiming to provide robust and secure conversational AI solutions for businesses. Anthropic emphasises enterprise-grade features, including significant context windows and data security, to enable complex, knowledge-intensive applications.
Essentially, Anthropic's offerings empower organisations to leverage powerful, pre-trained conversational AI, which can then be deeply integrated and tailored to specific business contexts and workflows.
The main characteristics are:
Ideal Business Cases
Anthropic Claude is particularly well-suited for:
Technical Considerations
The primary technical consideration involves data preparation for contextualisation and, where fine-tuning is available, for further training. While Claude's large context window allows for substantial input, fine-tuning requires high-quality, relevant prompt-completion pairs that are representative of the desired behaviour and knowledge. This data quality is paramount for achieving optimal performance.
Integration with existing systems is achieved through Anthropic's API. For enterprises already within the AWS ecosystem, Claude models are also available through Amazon Bedrock, which can streamline deployment and leverage existing cloud infrastructure, security protocols, and data sources within AWS. Expertise in API integration and an understanding of prompt engineering techniques are beneficial for maximising Claude's effectiveness.
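As a small sketch of that contextualisation pattern, the snippet below assembles a Messages API-style request body, packing reference documents into the system prompt so the large context window can ground the answer. The model name, prompt wording, and document contents are assumptions for illustration, not Anthropic recommendations.

```python
def build_claude_request(user_message, documents,
                         model="claude-sonnet-4-20250514", max_tokens=1024):
    """Assemble a Messages API-style payload that grounds the model
    in supplied reference documents via the system prompt."""
    system = (
        "Answer using only the reference material below. "
        "If the answer is not present, say so.\n\n"
        + "\n---\n".join(documents)
    )
    return {
        "model": model,
        "max_tokens": max_tokens,
        "system": system,
        "messages": [{"role": "user", "content": user_message}],
    }
```

A payload like this would then be sent via Anthropic's API (or Amazon Bedrock); keeping the assembly logic in one place makes it easy to swap in retrieved documents per request.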
https://www.anthropic.com/enterprise
—
Choosing the right approach for bespoke AI development is a critical decision for technical leaders. If you're ready to translate these insights into tangible value-driven products or wish to delve deeper into how specific platforms can meet your unique needs, our team is here to help. Get in touch to schedule a discussion with our team.