Building Bespoke AI Products: 5 Key Options

18 July 2025

Many organisations are now looking to build AI into their operations and customer journeys, and a custom generative AI solution is often the best choice.

Tools such as ChatGPT do allow for basic customisation within the platform’s regular user interface, but to create a truly bespoke solution we need to consider options that allow for deeper integration.

Essentially, these services mean that an organisation can utilise a pre-trained model with its own data, so the model behaves in a way customised to the organisation's needs. You reap the benefits of intelligent AI without having to train your own model - a substantial undertaking in cost, data and expertise.

What this means for us as a digital products agency is that we can create incredible customised solutions that focus on the product - and therefore on the user experience and on how your brand naturally comes across.

Whether you’re a CTO, Head of Product or CEO, as a strategic leader you need to know your options.

Here we look at the options currently available, with common use cases to illustrate each.

1. Google Vertex AI: For End-to-End Machine Learning Operations

Vertex AI offers a unified platform for building and managing custom ML models at scale. It provides access to Google's powerful foundation models for highly tailored solutions.

Google Cloud's Vertex AI stands as a unified MLOps (Machine Learning Operations) platform, engineered to support businesses in building, deploying, and managing machine learning models at scale.

It provides a comprehensive toolkit that addresses every stage of the ML lifecycle, from data preparation and model training through to deployment, monitoring, and governance.

A key advantage is its access to Google's own powerful foundation models, such as Gemini and Imagen, which can serve as a base for custom solutions.

Where the Vercel AI SDK (covered later in this list) is great at letting us prototype quickly by tapping into multiple different models, Google Vertex provides the tools to truly create something custom, based on training data the organisation provides.

The main characteristics are:

  • Custom training - robust support for building, training and deploying custom models using popular frameworks such as TensorFlow and PyTorch
  • Access to Google’s foundation models - it provides access to Google's cutting-edge foundation models, including Gemini, allowing organisations to ground these models in their own data
  • Scalable training and prediction - much like AWS, Vertex leverages Google Cloud’s scalable infrastructure for deploying models with high availability

Ideal Business Cases

Vertex is particularly well-suited for:

  • Organisations committed to custom ML: developing sophisticated, custom machine learning models rather than simply consuming pre-built AI APIs
  • Teams wanting an integrated platform: for managing the entire ML lifecycle, ensuring reproducibility, scalability, governance, and maintainability of AI solutions
  • Companies dealing with large, complex datasets: that necessitate significant custom model development, intricate training procedures, and meticulous tuning

Technical Considerations

Google's AI services are worth considering even if you aren't already in the Google ecosystem. Many companies will be on Azure or AWS, but Google's models are very strong and its pricing may be more attractive.

A primary consideration is again data, and with Vertex you need as much of it as possible. The development of effective custom models within Vertex AI is fundamentally dependent on access to substantial, high-quality, and relevant training datasets. Vertex AI readily integrates with GCP data sources such as BigQuery and Cloud Storage, which can be a real advantage.

Vertex requires team expertise in machine learning concepts and MLOps principles, and a working knowledge of the broader Google Cloud Platform is an advantage. In this respect, the platform suits enterprises with an established data team.
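To give a feel for the integration surface, here is a minimal sketch of the request body Vertex AI's REST `generateContent` endpoint expects for a Gemini model. The URL template and prompt are illustrative assumptions to check against the current docs; in practice you would more likely use the `google-cloud-aiplatform` SDK.

```python
import json

# Hedged sketch: the REST endpoint template for Gemini models on
# Vertex AI (values in braces are placeholders).
VERTEX_URL = (
    "https://{region}-aiplatform.googleapis.com/v1/projects/{project}"
    "/locations/{region}/publishers/google/models/{model}:generateContent"
)

def build_generate_content_request(prompt: str, temperature: float = 0.2) -> str:
    """Build the JSON body for a simple text prompt."""
    payload = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {"temperature": temperature, "maxOutputTokens": 1024},
    }
    return json.dumps(payload)

body = build_generate_content_request("Summarise our Q3 support tickets.")
```

The same `contents`/`parts` structure carries multi-turn conversations and multimodal inputs, which is why even a simple call uses it.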

https://cloud.google.com/vertex-ai

2. OpenAI API & Fine-Tuning: Tailoring to Niche Tasks

OpenAI provides programmatic access to its advanced large language models (LLMs), including the GPT series, via an API.

Beyond standard prompting, a key capability offered is fine-tuning. This process allows organisations to adapt these powerful, pre-trained foundation models by further training them on their own specific datasets.

The result is a customised model that exhibits improved performance, adheres to a particular style, or possesses specialised knowledge relevant to unique requirements.

Whilst this isn’t a fully custom solution like Vertex, the result can feel much more personalised, because the model is trained on a curated set of example prompts and responses.

The main characteristics are:

  • Adaptation of powerful foundation models - it enables the customisation of state-of-the-art LLMs like GPT-4o, which have been pre-trained on vast amounts of diverse text data, or the use of o3, which focuses specifically on reasoning - something that is very important when building agentic AI systems
  • API-driven access and deployment - both base models and fine-tuned models are accessed via the same API structure, simplifying integration into existing applications and workflows once a model is prepared
  • Task-specific - while general models are broadly capable, fine-tuning can significantly enhance performance on narrow, well-defined tasks where specific context or output style is critical
  • Managed training process - the actual fine-tuning computation is handled by OpenAI's infrastructure, abstracting away the complexities of managing GPU clusters and training environments from the end user

Ideal Business Cases

These are particularly well-suited for:

  • Businesses that require high-quality generative AI outputs incorporating specific company context, internal jargon, detailed product knowledge, or industry-specific language that a general-purpose foundation model might not fully grasp
  • Applications where a consistent and distinct brand voice, tone, or persona is critical in AI-generated text, such as customer service communications, marketing copy, or internal documentation
  • Use cases where the performance of a base OpenAI model is already strong but requires a demonstrable uplift in accuracy, relevance, or adherence to specific formats

Technical Considerations

OpenAI advises that dataset quality matters more than quantity, though hundreds or even thousands of well-crafted examples are often needed for noticeable improvements. This data must be highly representative of the tasks the fine-tuned model is expected to perform post-deployment.
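Concretely, OpenAI's chat fine-tuning format is a JSONL file with one `{"messages": [...]}` object per line. A minimal sketch of preparing such a file (the example content is invented, and a real dataset would need many more examples):

```python
import json

# Hedged sketch: writes a tiny fine-tuning dataset in OpenAI's chat
# JSONL format. The company name and answers are invented placeholders.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme's support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Head to Settings > Security and choose 'Reset password'."},
        ]
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The file is then uploaded via the Files API and referenced when creating a fine-tuning job.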

From a skills perspective, an understanding of LLM prompting techniques ("prompt engineering") is also beneficial, as well-crafted prompts in the training data lead to better fine-tuned models. However, context windows are getting larger and larger, and it’s possible that hand-tuning prompts will become less important in the future.

https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset

3. AWS AI: For Enhanced Customer Experience

Amazon Web Services (AWS) offers an extensive suite of AI services and machine learning capabilities, allowing organisations to build customer experience (CX) solutions on infrastructure provided by AWS.

They present a portfolio of interoperable services:

  • Amazon Bedrock - provides access to many different Large Language Models (LLMs) that can be used when communicating with customers
  • Amazon Polly - transforms text into lifelike speech for voice-enabled applications
  • Amazon Transcribe - provides accurate speech recognition for converting audio to text
  • Amazon Comprehend - a natural language processing (NLP) service to extract insights and relationships from text (e.g. sentiment analysis)
  • Amazon Personalize - for creating applications with real-time personalised recommendations

The core philosophy behind AWS AI for CX is to provide components that can be built upon, forming the backbone of your CX stack.

Ideal Business Cases

These are particularly well-suited for:

  • Organisations already integrated with the AWS ecosystem: leveraging existing infrastructure, data stores such as S3, and security protocols means development can be streamlined from the start
  • Enterprises requiring high reliability: AWS is often the go-to for global application infrastructure, making it a natural choice for enterprises with strict uptime requirements
  • Businesses aiming to embed AI across multiple customer touchpoints: thanks to the modular nature of AWS AI services, they can be used individually or in combination, so tools can be rolled out across various stages of the customer journey

Technical Considerations

Data is the main consideration here. It is important that high-quality, relevant data is used, and its availability should be scoped out in advance.

For example, Amazon Personalize requires structured historical user-item interaction data, item metadata, and user metadata.

Data often requires preprocessing, cleaning, and transformation to meet the specific formatting requirements of each AWS service. This is a key consideration, and expertise in this area is essential.
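As an illustration of that preprocessing step, Amazon Personalize's interactions dataset is a CSV with at minimum USER_ID, ITEM_ID and TIMESTAMP columns (Unix epoch seconds). A minimal sketch of reshaping raw events into that layout, where the raw field names are invented for the example:

```python
import csv
import io
from datetime import datetime

# Hedged sketch: reshape raw analytics events into the minimal CSV
# layout Amazon Personalize expects for an interactions dataset.
# The raw field names ("customer", "product", "at") are invented.
raw_events = [
    {"customer": "u-1", "product": "sku-42", "at": "2025-07-01T09:30:00+00:00"},
    {"customer": "u-2", "product": "sku-17", "at": "2025-07-01T10:05:00+00:00"},
]

out = io.StringIO()
writer = csv.writer(out, lineterminator="\n")
writer.writerow(["USER_ID", "ITEM_ID", "TIMESTAMP"])
for event in raw_events:
    # Personalize expects timestamps as Unix epoch seconds
    epoch = int(datetime.fromisoformat(event["at"]).timestamp())
    writer.writerow([event["customer"], event["product"], epoch])

interactions_csv = out.getvalue()
```

The resulting CSV would be uploaded to S3 and imported via a Personalize dataset import job.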

Similarly, an integration strategy is another essential piece. Seamless integration with existing enterprise applications such as CRMs, ERPs, websites and mobile apps is key.

This typically involves making secure API calls to the respective AWS AI service endpoints. AWS SDKs provide the necessary tools to facilitate these integrations, but a clear understanding of API versioning, error handling, and network architecture is needed.

https://aws.amazon.com/ai/generative-ai/

4. Vercel AI SDK: Prototyping, Quickly

The Vercel AI SDK is a compelling toolkit specifically engineered to simplify the integration of AI capabilities directly into web user interfaces.

Its primary design goal is to provide developers with a streamlined approach to building AI-powered applications, with first-class support for streaming text, structured data, and components.

This allows for dynamic, real-time interactions without complex backend configurations for managing AI model responses on the client side. Whilst it is not as robust as the other methods mentioned, its streamlined approach brings advantages of its own, above all speed.

The main characteristics are:

  • Frontend-first AI - it abstracts away many of the complexities of dealing with AI model APIs directly
  • Model agnostic - while developed by Vercel (the creators of Next.js, a technology we’re a big fan of), the SDK is not tied to a specific AI provider, meaning you can use models from OpenAI, Hugging Face, Anthropic and more
  • Streaming support - a core strength is its native support for streaming responses. This is crucial for applications like chatbots or live content generation, where users expect immediate, incremental feedback

Ideal Business Cases

The Vercel AI SDK is particularly advantageous for:

  • Web-centric organisations: companies whose primary customer touchpoints are websites and web applications
  • Development teams leveraging modern JavaScript: especially those already using Next.js, React, Svelte, or Vue, where the SDK’s design patterns and hooks feel native
  • Rapid prototyping and iteration: certainly its main strength - we can quickly experiment with and deploy AI-enhanced user experiences without incurring significant backend overhead

Technical Considerations

While the SDK itself is a frontend-focused library, you'll still need a Node.js environment for development and deployment, meaning Vercel’s solution is particularly useful for those already on its main platform.

When it comes to data, the Vercel AI SDK does not itself handle model training or data preparation for the AI models it connects to. Instead, the focus is on the quality and nature of the prompts sent from the frontend application, as these significantly influence the output.

Once again a robust integration strategy is needed; however, integration here sits primarily at the frontend layer, and the SDK provides many methods for making this as seamless as possible.
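The value of streaming is easiest to see in code. The sketch below simulates the idea in Python for brevity (the SDK itself is TypeScript): a generator stands in for the model, yielding tokens one at a time, while the consumer updates its output as each chunk arrives rather than waiting for the full response.

```python
# Conceptual sketch of streamed responses, not Vercel AI SDK code:
# a generator simulates an LLM emitting tokens incrementally.
def fake_model_stream(answer: str):
    """Simulate a model streaming its answer token by token."""
    for token in answer.split(" "):
        yield token + " "

def render_incrementally(stream) -> str:
    """Consume the stream, updating the 'UI' as each chunk arrives."""
    shown = ""
    for chunk in stream:
        shown += chunk  # in a real UI this would trigger a re-render
    return shown.strip()

text = render_incrementally(fake_model_stream("Streaming feels faster to users"))
```

This is why perceived latency drops so much with streaming: users start reading the first tokens while the rest are still being generated.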

https://vercel.com/ai

5. Anthropic Claude: Enterprise-Grade Conversational AI

Anthropic, an AI safety and research company, offers its advanced large language models (LLMs), primarily the Claude series, for enterprise integration. Their focus is on developing helpful, harmless, and honest AI, aiming to provide robust and secure conversational AI solutions for businesses. Anthropic emphasises enterprise-grade features, including significant context windows and data security, to enable complex, knowledge-intensive applications.

Essentially, Anthropic's offerings empower organisations to leverage powerful, pre-trained conversational AI, which can then be deeply integrated and tailored to specific business contexts and workflows.

The main characteristics are:

  • Expanded context window - Claude models, especially the enterprise versions, boast exceptionally large context windows (e.g. up to 500k tokens on enterprise plans), allowing them to process and recall vast amounts of information in a single interaction. This is crucial for handling lengthy documents, complex codebases, or extended conversations
  • Safety and responsible AI - Anthropic builds its models with a strong emphasis on "constitutional AI", guiding the models to adhere to ethical principles and minimise harmful or biased outputs. This focus on safety and alignment is a core differentiator for enterprise deployments
  • Fine-tuning capabilities - similar to OpenAI, Anthropic offers fine-tuning for certain Claude models. This allows businesses to adapt the pre-trained models to specific tasks, integrate proprietary knowledge, or align the model's output with a consistent brand voice and style
  • Enterprise-grade security & controls - Anthropic's enterprise offerings include features such as single sign-on (SSO), role-based access controls, audit logs, and data retention policies, ensuring secure management and protection of sensitive company data. Anthropic explicitly states that customer data is not used to train its models by default

Ideal Business Cases

Anthropic Claude is particularly well-suited for:

  • Organisations requiring extensive context understanding: where AI needs to process and understand vast amounts of internal documents, customer interactions, or technical specifications for tasks such as legal summarisation, research analysis, or comprehensive customer support
  • Enterprises prioritising AI safety and ethics: those with a strong commitment to responsible AI development and deployment, valuing Anthropic's foundational focus on harmlessness and honesty
  • Development teams integrating AI into engineering workflows: with capabilities like GitHub integration and a focus on assisting with coding tasks, Claude is valuable for teams looking to enhance productivity and streamline software development

Technical Considerations

The primary technical consideration involves data preparation for fine-tuning and contextualisation. While Claude's large context window allows for substantial input, fine-tuning requires high-quality, relevant prompt-completion pairs that are representative of the desired behaviour and knowledge. This data quality is paramount for achieving optimal performance.

Integration with existing systems is achieved through Anthropic's API. For enterprises already within the AWS ecosystem, Claude models are also available through Amazon Bedrock, which can streamline deployment and leverage existing cloud infrastructure, security protocols, and data sources within AWS. Expertise in API integration and an understanding of prompt engineering techniques are beneficial for maximising Claude's effectiveness.
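To show the shape of a direct integration, here is a minimal sketch of the request body for Anthropic's Messages API (`POST /v1/messages`). The model identifier and system prompt are placeholders to verify against the current documentation.

```python
import json

# Hedged sketch: builds the JSON body for Anthropic's Messages API
# (POST https://api.anthropic.com/v1/messages). Model id and system
# prompt below are placeholder assumptions, not verified values.
def build_messages_request(user_text: str) -> str:
    payload = {
        "model": "claude-sonnet-4-20250514",  # placeholder model id
        "max_tokens": 1024,                   # required by the API
        "system": "You are Acme's internal knowledge assistant.",
        "messages": [
            {"role": "user", "content": user_text},
        ],
    }
    return json.dumps(payload)

body = build_messages_request("Summarise our returns policy for a customer.")
```

When calling through Amazon Bedrock instead, the same conversational structure applies but the request is routed via the Bedrock runtime API and AWS credentials.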

https://www.anthropic.com/enterprise

Choosing the right approach for bespoke AI development is a critical decision for technical leaders. If you're ready to translate these insights into tangible, value-driven products, or wish to delve deeper into how specific platforms can meet your unique needs, our team is here to help. Get in touch to schedule a discussion.
