5 AI Myths Slowing You Down

November 20, 2024

I’ve scoped hundreds of applied AI projects over the last 6 months at Fractional AI, and there’s a common set of myths and misconceptions that pop up over and over again. 

I get it—genAI chatter is incessant, the field is evolving fast, every company you know is 'AI washing,' and 'hot takes' are nearly impossible to escape (irony noted).  

Distinguishing signal from noise is tough, so we wrote this guide to some of the more common places where people get tripped up.

Myth 1: “I need to train my own model” 

Reality: No, you don’t. 

Training a custom model is an incredibly resource-intensive process that requires massive amounts of data, computational power, and expertise for results that won’t keep up with frontier models.

Another way of thinking about this: OpenAI and Anthropic already did the hard work of building models trained on the vast knowledge of the internet. By training your own, you’re throwing away the most significant part of their invention. 

Don’t take my word for it, take Bloomberg’s: Bloomberg spent over $10M training a GPT-3.5-era model on its own financial data, only to find that GPT-4 (without any specialized finance training) beat it on almost all finance tasks out of the box.

A few caveats: 

  • Training is distinct from fine-tuning
    • Often, folks say “custom model” or “training our own model” when what they’re really thinking about is fine-tuning.
    • Fine-tuning a frontier model on your specific data is often a powerful technique, depending on the use case (see the sketch after this list).
  • Edge cases where a custom model might make sense
    • Non-NLP tasks: There are some ML applications (e.g., time series anomaly detection) where your ML team might want to try replacing a legacy model with a genAI-era model.
    • Resources and scale: Let’s say you’re Apple – with all its resources and reach. You may want to build a custom model especially suited to run on Apple devices (...but even so, Apple has already announced a partnership with OpenAI).
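To make the training-vs.-fine-tuning distinction concrete, here’s roughly what fine-tuning looks like with the OpenAI fine-tuning API: a minimal sketch in which the training file and model snapshot are illustrative. Note the contrast with pretraining: you supply a modest JSONL file of example conversations, not terabytes of raw text and a GPU cluster.

```python
# Minimal fine-tuning sketch using the OpenAI Python SDK (openai>=1.0).
# The file name and model snapshot below are illustrative.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of {"messages": [...]} training examples.
training_file = client.files.create(
    file=open("finetune_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job on top of an existing frontier model --
# you start from their pretrained weights rather than from scratch.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```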

Myth 2: “I need to pick which model to use” 

Reality: Model selection is a minor implementation detail, not the main event. 

You should architect your AI solution so that you can easily change the models in your pipeline as you go, if for no other reason than you’ll want your solution to be able to easily upgrade to the next version of whatever model you choose.

More substantively, you should build an evaluation framework that lets you easily experiment with various models. In the vast majority of cases, start with a frontier model from OpenAI (GPT-4o) or Anthropic (Claude 3.5 Sonnet) and let data-driven experiments guide your selection from there.
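What does “easy to change models” look like in practice? Here’s a minimal sketch assuming the official OpenAI and Anthropic Python SDKs; the ModelConfig wrapper, the toy substring scorer, and the test case are illustrative, not a specific eval framework:

```python
# A minimal sketch of keeping model choice as a swappable config knob
# so an eval harness can sweep candidates. ModelConfig and the toy
# scorer below are illustrative, not a specific library.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    provider: str  # "openai" or "anthropic"
    name: str      # e.g. "gpt-4o" or "claude-3-5-sonnet-20240620"

def complete(cfg: ModelConfig, prompt: str) -> str:
    # Route to the right SDK behind one interface; swapping models
    # (or upgrading to next year's version) is a one-line config change.
    if cfg.provider == "openai":
        from openai import OpenAI
        resp = OpenAI().chat.completions.create(
            model=cfg.name,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    if cfg.provider == "anthropic":
        import anthropic
        resp = anthropic.Anthropic().messages.create(
            model=cfg.name,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    raise ValueError(f"unknown provider: {cfg.provider}")

# Evaluation loop: run the same labeled cases against each candidate
# and let the scores drive selection. Substring matching is a toy
# scorer; real evals should be task-specific.
candidates = [
    ModelConfig("openai", "gpt-4o"),
    ModelConfig("anthropic", "claude-3-5-sonnet-20240620"),
]
test_cases = [("What is the capital of France?", "paris")]
for cfg in candidates:
    correct = sum(
        expected in complete(cfg, prompt).lower()
        for prompt, expected in test_cases
    )
    print(f"{cfg.name}: {correct}/{len(test_cases)}")
```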

A note on open vs. closed-source models:

  • Open models (Llama, Mistral) have their benefits (e.g., they’re cheaper and can run locally), but also their costs: namely, you need to find a place to host them. Sometimes that’s your own hardware (expensive); sometimes you pay a service to host them for you. Either way, it means extra hoops to jump through for models that usually perform a little worse than frontier models. The caveat here is that there’s a long tail of open-source models built for specific use cases that might be particularly good for that application.
  • Beyond typically performing better, closed models (OpenAI, Anthropic) are also feature-rich and highly ergonomic for developers: things like structured output (see the sketch after this list), cost tracking, and prompt playgrounds are all helpful.
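To make “structured output” concrete, here’s a minimal sketch using the OpenAI Python SDK’s parse helper (openai>=1.40, pydantic installed); the Invoice schema and example text are illustrative:

```python
# Minimal structured-output sketch: the model's response is
# constrained to a schema you define, so you get a validated
# object back instead of free text you have to regex apart.
from openai import OpenAI
from pydantic import BaseModel

class Invoice(BaseModel):
    vendor: str
    total_usd: float
    due_date: str

client = OpenAI()
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the invoice fields."},
        {"role": "user", "content": "ACME Corp invoice: $1,200.00 due 2025-01-15"},
    ],
    response_format=Invoice,  # output is constrained to this schema
)
invoice = completion.choices[0].message.parsed  # a validated Invoice
print(invoice.vendor, invoice.total_usd, invoice.due_date)
```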

Myth 3: “I need to self-host”

Reality: No, unless you’re the CIA, you don’t. 

While there is a small sliver of use cases where self-hosting (running models on your own hardware) may be required, in the vast majority of cases self-hosting is distractingly difficult. Your team will spend time, money, and brainpower reinventing the wheel instead of working on the truly hard parts of your AI project. Self-hosting also means you’re using open-source models or your own custom model (see Myths 1 and 2 for the considerations there).

There are a few versions of this that I hear, and it’s worth breaking them down. 

Version 1: “I need to buy GPUs”

I get where this one is coming from: everyone’s talking about NVIDIA, and with all the billboards on the 101, it’s hard not to feel FOMO, like everyone else is buying GPUs and you’re missing out.

Unless you're an AI infrastructure company, you should be able to build on a combination of cloud services and hosted products without having to go down the rabbit hole of buying or renting GPUs. 

Version 2: “My data is so private that we need to self-host”

This one is more common nowadays. Like most things in privacy and security, this is a risk/reward tradeoff. For the vast majority of use cases, your data is not all that special, today’s cloud options are just as secure as the methods you’re already using, and the risks of a data breach do not outweigh the benefits of AI development.

Let’s look at three tiers:

  • Tier 1 - The vast majority of companies
    • You should be fine using the APIs provided by OpenAI or Anthropic 
    • If you review their privacy policies, you’ll see that the guarantees you’re looking for are largely already in place: see OpenAI’s policy (“you own and control your data…we do not train on your business data”)
  • Tier 2 - You’re in a regulated space and/or have contractual obligations
    • Frontier labs and cloud providers have already teamed up to make these models accessible to you in such a way that the data never leaves your cloud infrastructure (see the sketch after this list):
      • Azure makes OpenAI models available through Azure OpenAI Service 
      • Amazon makes Anthropic models available through AWS Bedrock
  • Tier 3 - You are the CIA
    • If you’re working with classified information in an air-gapped environment, then, yes, you should self-host.
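As an illustration of the Tier 2 path, here’s a minimal sketch of calling an Anthropic model through AWS Bedrock with boto3, so requests stay inside your AWS environment; the model ID, region, and prompt are illustrative:

```python
# Minimal sketch of calling Claude through AWS Bedrock with boto3.
# Requests are served inside your AWS environment rather than hitting
# Anthropic's public API. Check which models are enabled in your
# account; model ID and region here are illustrative.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": "Classify this support ticket: ..."}
        ],
    }),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```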

Myth 4: “I should be fine-tuning or doing RAG”

Reality: These are just techniques for building with LLMs. They’re useful in the right circumstances, but seeking a “RAG solution” is putting the cart before the horse. 

The way this one usually comes up: business leader X is getting pressure from the Board to invest in AI. They, understandably, are seeing buzzwords like “RAG” (retrieval augmented generation) and “fine-tuning” everywhere. Rationally, they think “I need AI transformation, I should invest in RAG.” This is a bit like “we need spreadsheets, I should invest in VLOOKUPs.”

Instead, leaders should start by asking, “What existing manual workflows take a lot of time and money?” and then work with their AI engineering partner to identify the best way to automate those workflows (which may or may not involve techniques like fine-tuning and RAG).

One caveat: It’s worth distinguishing between (i) RAG as a technique and (ii) RAG as shorthand for an application you’re trying to buy. Sometimes when people say “I should be doing RAG,” it’s coming from a place of “I want a chat-with-my-documents tool” (a common RAG application). If it’s the latter, there are off-the-shelf products you can turn to, though I’d approach these with some caution: I’ve seen many “chat with my documents” initiatives end in disappointment.
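To ground the terminology: RAG-as-a-technique just means retrieving the documents most relevant to a query and handing them to the model as context. A minimal, in-memory sketch assuming the OpenAI Python SDK and numpy; the documents, brute-force cosine search, and prompt are illustrative stand-ins for a real vector store and retrieval pipeline:

```python
# Minimal RAG sketch: embed documents, retrieve the most relevant
# one for a question, and pass it to the model as context. Brute
# force and in-memory for clarity; production systems typically
# use a vector store.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am-5pm Pacific, Monday through Friday.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    # Retrieve: rank docs by cosine similarity to the question.
    q = embed([question])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = docs[int(np.argmax(sims))]
    # Generate: answer grounded in the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("Can I return something I bought three weeks ago?"))
```

The point is that this is a building block, not a product strategy: whether it belongs in your pipeline depends on the workflow you’re automating.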

Myth 5: “I need a chatbot”

Reality: A chatbot is just one of many user interfaces, and it’s often not the right fit.

Again, all these myths have understandable origins. We all have such a strong association between AI and chatbots thanks to great tools like ChatGPT. 

The reality is that there are many possible ways for a human to interact with an AI system – a chatbot is one such way, but not the optimal way for every use case. The open-ended nature of a conversational interface can obscure the underlying functionality in a way that’s confusing or frustrating to users.

If you’re automating a manual workflow with AI, you’ll want to start by studying the current user experience and how to best design the AI solution to seamlessly integrate into the flow of work.

Let’s look at two non-chatbot examples: 

  • Airbyte - We partnered with Airbyte to automate the process of building API integrations. Developers already had access to a powerful UI for building new connectors, but they were forced to tab back and forth to review complex API documentation. So the AI solution integrated right into this UI and pre-fills each textbox and dropdown for you – no chatbot.
  • Sincera - We partnered with Sincera to make a really messy data stream of millions of monthly records usable. The user experience that worked best for their team wasn’t all that sexy: on a recurring basis, a CSV of messy data is dropped into an S3 bucket, the AI system runs, and a clean version of the data lands as a new CSV in another S3 bucket. No chatbot, but integrated into the flow of work (a skeleton of this pattern is sketched below).
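For the curious, a skeleton of that batch pattern might look something like this; it is not Sincera’s actual production system, and the bucket names, model, and cleaning prompt are all illustrative:

```python
# Skeleton of the batch, no-chatbot pattern: a messy CSV lands in S3,
# an LLM normalizes each record, and a clean CSV is written to another
# bucket. Not Sincera's actual system; bucket names, model, and the
# cleaning prompt are illustrative.
import csv
import io

import boto3
from openai import OpenAI

s3 = boto3.client("s3")
llm = OpenAI()

def clean_record(raw: dict) -> str:
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Normalize this record into 'seller, domain':\n{raw}",
        }],
    )
    return resp.choices[0].message.content.strip()

def run(in_bucket: str, key: str, out_bucket: str) -> None:
    # Read the messy CSV dropped into the input bucket.
    body = s3.get_object(Bucket=in_bucket, Key=key)["Body"].read().decode()
    rows = csv.DictReader(io.StringIO(body))
    # Clean each record with the LLM and collect the results.
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["cleaned"])
    for row in rows:
        writer.writerow([clean_record(row)])
    # Write the clean CSV to the output bucket.
    s3.put_object(Bucket=out_bucket, Key=key, Body=out.getvalue())

run("raw-records", "records.csv", "clean-records")
```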

Ultimately, automating workflows with AI looks more familiar than most people expect. It looks less like buying GPUs and more like software engineering.

These myths – worrying about training a custom model, figuring out how to self-host, debating different model choices at the outset, getting distracted by specific techniques (RAG, fine-tuning), fitting every AI project into the mental model of a chatbot – just slow you down.

If you’re ready to get started, check out our white paper on scoping AI projects.

Eddie Siegel is the Chief Technology Officer at Fractional AI. Before launching Fractional AI, Eddie was CTO of Xip, CEO of Wove, and VP of Engineering at LiveRamp.
