AI use cases don't scale (yet) and it's fine

One common argument against new AI projects in SMEs and large corporates is: "Yes, but it doesn't scale."

This implies that if a solution can't be easily expanded to serve a massive audience or handle enormous amounts of data, it's not worth pursuing.

Relax, breathe, and repeat this mantra after me: "not everything needs to scale".

But let's not have a short memory: in many corporate environments, much of the work is not done at scale.

We use Excel files every day for ad-hoc analysis or modeling, and most of them aren't built to handle thousands of rows of data or to be shared across hundreds of users. When one is shared, #REF errors might pop up; they're annoying, but we accept them and they get fixed.

The same goes for other tools, or even processes, that we use individually for our own productivity. We don't build them for mass distribution.

So why do we set different expectations for LLM applications?

Many tasks would benefit from AI without needing a robust, scalable deployment. Manual, repetitive copy-pasting and long waits on complex Excel formulas can be entirely outsourced to an LLM. Drafting contracts, performing a one-shot sentiment analysis on customer reviews, reviewing the syntax and grammar of a memo: all of these can already benefit from small LLM applications tailored to a single person's or a small team's needs, without scaling them across the entire organization.
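To make the "one-shot sentiment analysis" example concrete, here is a minimal Python sketch of what such a throwaway, single-user tool could look like. The helper names and the JSON-list-of-labels convention are my own assumptions, and the actual LLM call is left as a stub, since any chat-completion API would do:

```python
import json

def build_prompt(reviews):
    """Pack a batch of reviews into one prompt asking for JSON labels."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(reviews))
    return (
        'Classify the sentiment of each customer review as "positive", '
        '"negative" or "neutral". Reply with a JSON list of labels, '
        "one per review.\n\n" + numbered
    )

def parse_labels(raw_reply, expected):
    """Parse the model's JSON reply, failing loudly if it doesn't line up."""
    labels = json.loads(raw_reply)
    if len(labels) != expected:
        raise ValueError(f"expected {expected} labels, got {len(labels)}")
    return labels

reviews = ["Great product, fast shipping!", "Broke after two days."]
prompt = build_prompt(reviews)
# The LLM call would go here; the reply below is a stand-in for the
# model's answer, just to show the round trip.
fake_reply = '["positive", "negative"]'
print(parse_labels(fake_reply, expected=len(reviews)))
```

That's the whole tool: no deployment, no user management, no scaling. It serves exactly one person's need, and that's enough.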

Sometimes, the most powerful and effective tools are the ones designed for you, the individual user. This can only be discovered through trial and error and by putting these tools in the hands of every single person in the organization.
