I’ve been spending a lot of time thinking about making AI actionable. The motivation for this is simple: we live in a world of abundance. Every organisation has tonnes of data and things to do. Every organisation also has resource constraints, and never enough people to do the things they want to do.
So, if AI were actionable, and could take the rudimentary things off our plate, we could focus on tasks that are more cognitively demanding. This means AI will change how we work and what we spend time on; it does not mean AI will replace us.
Today’s essay is about how AI might become more actionable than it is today.
Software is text
You’ve probably heard the saying that software is just 0s and 1s. Here’s what that means under the hood.
The first computer program was written back in the 1800s by Ada Lovelace. The goal was to calculate a set of Bernoulli numbers. The beauty of this invention was that Ada wrote the program based on what the machine could do in principle: the program itself was never tested, because Babbage’s Analytical Engine, which it targeted, was never built. Incidentally, OpenAI’s fastest text model is named after her.
In the 1940s, programmers used Assembly Language to write programs. Assembly is a “low-level” programming language because it communicates directly with the hardware of a computer.
The picture below is a sample command using Assembly Language. Notice the amount of text you need just to multiply a number by 6. As the picture makes obvious, it took a lot of effort to write even a simple program.
Eventually, we started using “high-level” programming languages. These languages sit at a high level of abstraction above what the computer actually understands. A “compiler” translates the high-level language into something the machine can execute.
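To make the jump in abstraction concrete, here is the same multiply-a-number-by-6 task from the assembly example, written in a high-level language (Python here, purely as an illustration; any high-level language makes the same point):

```python
def multiply_by_six(x):
    # One readable line; the compiler/interpreter worries about
    # the registers, loads and stores that assembly spells out by hand.
    return x * 6

print(multiply_by_six(7))  # 42
```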
Whilst new computer languages come and go, the way we communicate with machines has not changed. We either communicate directly, which no one really does anymore, or we communicate through a conduit.
AI as a compiler
AI is the next level of abstraction.
Instead of writing “code”, you ask for what you want. AI is one of the conduits between you, the user, and the computer.
Today, you can already convert text into code:
When I write code, GitHub Copilot takes a comment and converts it into code. For example, I can write “function to convert a string into a number” and it will write the function for me. It’s not always perfect, but it saves me a tonne of time. I’ll happily pay GitHub $10 a month for the time saved. If you’re feeling adventurous, go to OpenAI’s examples page and play with the natural language to Stripe API example to see this in action.
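For a feel of what that looks like, here is roughly the kind of function Copilot might produce from that comment. This is a hand-written sketch in Python, not Copilot’s actual output, which varies:

```python
# function to convert a string into a number
def string_to_number(s: str):
    """Return the string as an int when possible, otherwise as a float."""
    try:
        return int(s)
    except ValueError:
        return float(s)

print(string_to_number("42"))    # 42
print(string_to_number("3.14"))  # 3.14
```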
Basic actions
Let’s use an example to illustrate. Imagine you want to make a dinner reservation. Here are your options today:
Walk to the restaurant and ask for a reservation
Call the restaurant
Book online using opentable.com or an alternative
In its simplest form, an AI experience will look something like this: a chat-like interface that lets a user complete their reservation. If a table is not available, the assistant is smart enough to have a back and forth, offer some alternatives, and complete the task.
This is neither new nor revolutionary. Technically speaking, you’ve been able to do this for a few years. It’s a closed-loop problem, and solving it with a computer program is trivial. It’s simple because the range of outcomes is finite.
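To show how small that outcome space is, here is a toy sketch of the whole closed loop in Python. The time slots and function names are made up for illustration:

```python
# Toy availability table: the entire set of outcomes is known up front.
AVAILABLE_SLOTS = {"19:00", "21:30"}

def book_table(requested_time: str, party_size: int) -> str:
    """Confirm the requested slot, or fall back to the finite list of alternatives."""
    if requested_time in AVAILABLE_SLOTS:
        AVAILABLE_SLOTS.remove(requested_time)
        return f"Confirmed: table for {party_size} at {requested_time}."
    alternatives = ", ".join(sorted(AVAILABLE_SLOTS))
    return f"Sorry, {requested_time} is taken. We do have: {alternatives}."

print(book_table("20:00", 2))  # offers 19:00 or 21:30 instead
print(book_table("19:00", 2))  # confirms the booking
```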
Complex actions
Let’s consider a more complex example.
Imagine you get 1,000 emails a day. It’s impossible for you to respond to them all, and you want AI to help. Reasonable request. In fact, lots of people are building tools to try and solve this.
The dynamics of this problem are different:
Variations are infinite → it is not a closed loop like the restaurant reservation.
Accuracy is critical → you do not want to send someone a reply that is gibberish.
This is precisely why the builder of the tool above has chosen to augment rather than automate. Because of the variance of outcomes and the importance of accuracy, the app will write responses but not send them. The user reviews the response, edits it, and then sends it.
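Here is a minimal sketch of that augment-not-automate loop in Python. The draft_reply() function is a hypothetical stand-in for whatever model the tool calls; the point is that nothing goes out without a human’s approval:

```python
def draft_reply(email_body: str) -> str:
    # Hypothetical stand-in for a call to a language model.
    return f"Thanks for your note. (Draft reply to: {email_body[:40]}...)"

def handle_email(email_body: str) -> None:
    draft = draft_reply(email_body)
    print("--- Draft (not yet sent) ---")
    print(draft)
    # The human stays in the loop: review, edit, then explicitly approve.
    if input("Send this reply? [y/N] ").strip().lower() == "y":
        print("Sent.")  # A real tool would call the email client's send API here.
    else:
        print("Discarded. Edit the draft and try again.")

handle_email("Hi, could we move our meeting to Thursday?")
```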
The final frontier
The final frontier is really not having to say anything at all. Meaning, there is no “email” assistant. There is just an AI for everyone. It does things without you telling it to do so. It remembers to wake you up at 6 AM, and tells you to exercise if you haven’t already. Some people call this artificial general intelligence (AGI) — though I don’t think anyone agrees on what the definition of AGI is.
I don’t spend a lot of time thinking about this because it seems very far away. And unless you are building an LLM from scratch and are one of OpenAI, Google or Stability, it’s largely a philosophical debate for now.
How AI becomes actionable
Anyway, the purpose of this essay was to lay out a framework for how AI will become more actionable. Here’s my view:
We start with small, confined use cases. This is like the restaurant reservation problem: the number of outcomes is limited and it’s a closed loop. These are not the highest-value problems, but they are easy to do.
Then, we start to look at problems that have a larger variety of outcomes. Within this set, we start with those that have the lowest consequences of failure (e.g. copywriting). As AI improves, it makes inroads into problems where failure is less acceptable: medicine, law, accounting.
In all of this, I strongly believe AI will augment, not replace. I believe the greatest strength humans have is the ability to adapt. So the rules of the game change, but we continue to play. I’m excited about the future. Are you?