
Rich Stokoe


What if your Operating System was an LLM?

Posted on March 13, 2026 by rich

I recently posed myself a question: what if your operating system understood English? And so began a little thought experiment….

Your operating system (OS) consists of windows, a desktop wallpaper showing a recent holiday, and a collection of small tools that help you manage files, run programs, and interact with your hardware.

Your OS also comes with hundreds of small, composable command-line utilities making up a powerful toolkit. On Linux, Mac and anything else with UNIX DNA you’ll recognise ls, cp, cat, grep, sort, find. Each one does exactly one thing, does it well, and plays nicely with others through pipes and standard I/O.

This is the UNIX philosophy, and it’s beautiful.

But it’s also showing its age. Not in capability – these tools are as powerful as ever – but in usability.

For decades we’ve been typing incomprehensible strings of letters and punctuation into terminals, such as ls -laSh and find / -name "*.pdf" -mtime -7 -size +1M. Terse, I’ll grant you, and you definitely feel smarter using the command line like this. But impenetrable abbreviations, flags, options and arcane syntax simply aren’t required anymore – and they’re a huge barrier for the ordinary user who just wants to get something done when there isn’t a nice GUI option.

You feel smarter when you use a command line interface

– Me, just now

Introducing ai-coreutils

My thought experiment led me to build ai-coreutils, a collection of command-line utilities, each powered by generative AI. Think of it as a natural-language layer for your operating system.

Every utility is a small, focused application that does one job well. Depending on the LLM you plug them into, they can understand your intent, reason about the outcome you really want, and even protect you from silly mistakes such as overwriting your boot image or running rm -rf / (don’t do that, kids). And they stream to stdout like any good UNIX citizen.

Built with .NET 10 and the Microsoft.Agents SDK, ai-coreutils talks to an LLM host via an OpenAI- or Ollama-compatible API (I personally use LM Studio, and it’s all I’ve tested with), so you can choose the power of frontier models, or local models where nothing leaves your machine. Nobody will know you wanted to list your Music folder by the length of the lead guitarist’s hair.

Let’s look at some examples, starting with the llm utility, which simply sends a prompt to an LLM and displays a response:

llm "Explain the difference between TCP and UDP in two sentences"

That’s it. Your prompt goes in and a response comes out, on stdout. No browser. No chat window. Just text.

And because it’s stdout, you can pipe utilities together. The summarise utility can be used either by itself – where you pass in the path to a text file – or you can pipe a stream into it from anything (including curl and non-AI powered apps) and do this:

llm "Explain quantum physics to me" | summarise

or

curl -v "https://feeds.bbci.co.uk/news/rss.xml" | summarise

The summarise tool reads from stdin, sends the content to the LLM with a summarisation prompt, and streams the result back. Piping LLM output into another LLM prompt is very UNIX (in my opinion).

Because it’s all just command line stuff, you can use them in bash variables too.
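For instance, capturing a reply into a variable is just ordinary command substitution. A minimal sketch – since `llm` needs a configured model on your PATH, it’s stubbed here with a canned reply; drop the stub to use the real tool:

```shell
# Stub standing in for the real ai-coreutils `llm` tool (assumption:
# the real one streams its answer to stdout, as described in the post).
llm() { echo "TCP guarantees delivery; UDP does not."; }

# Capture the LLM's answer in a shell variable, then reuse it.
answer="$(llm "Summarise TCP vs UDP in one sentence")"
echo "Stored reply: ${answer}"
```

Once the reply is in a variable, it composes with everything else in your shell: loops, conditionals, other pipelines.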

A Conversational Operating System

Now we’re in a world where we can ask our computer to “copy the newest file from my downloads folder to the desktop” just as easily as typing cp $(ls -t ~/Downloads | head -1) ~/Desktop/.

Here’s what’s in the box so far:

list-models and select-model – Utilities for managing which model you want the utilities to use. When you’ve got a dozen models downloaded in LM Studio, you need a way to switch without leaving the terminal.

llm – General-purpose prompt from the command line. You type a question, it streams the answer. Dead simple.

summarise – Feed it a file or pipe text into it. It produces a concise summary scaled to the complexity of the input. Handy for those 47-page PDFs that could have been an email. Feel free to rename this to “summarize” if you’re averse to OG English…

aicp – This is where things get interesting. Natural-language file copy. You can type aicp "copy the newest file from ~/AppImages to /tmp" and the LLM figures out what you mean. Behind the scenes, the Microsoft Agent Framework agent has access to a ListDirectory tool so it can actually inspect your file system, find the newest file, and propose a plan.

ails – Natural-language ls. Can’t remember the flags for sorting by size in human-readable format? Neither can I. Just tell it what you want: ails "show all files sorted by size, human readable" and let the LLM translate your intent into the right command.

remindme – Schedule a desktop notification using natural language. remindme "in 15 minutes" "Take the pizza out of the oven" parses the time expression via the LLM and schedules a notify-send via systemd-run (cron doesn’t support sub-minute timings). Linux only for now – but feel free to submit a PR to enable it cross-platform as this is all open source – and honestly, this one has already saved a pizza.
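To make the remindme flow concrete, here’s a sketch of the sort of command it might hand off once the LLM has turned “in 15 minutes” into a delay. The exact flags are my assumption, based only on the post’s mention of systemd-run and notify-send:

```shell
# Inputs remindme would derive: the LLM parses the natural-language
# time expression into a systemd-style delay.
delay="15min"
message="Take the pizza out of the oven"

# Build the scheduled command as an array (hypothetical invocation):
# systemd-run runs notify-send once, after the delay, in the user session.
cmd=(systemd-run --user --on-active="${delay}" -- notify-send "Reminder" "${message}")

# Print rather than execute, so the sketch is safe to run anywhere.
printf '%s\n' "${cmd[*]}"
```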

And now your OS has the foundations for agentic superpowers….

How It Works Under The Hood

All the utilities read endpoint and model configuration from ~/.ai-coreutils/config.json and then create an Agent to interact with the LLM.
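The post doesn’t spell out the config schema, but a file along these lines is what I’d expect – the field names below are illustrative guesses, not the actual format:

```json
{
  "endpoint": "http://localhost:1234/v1",
  "model": "qwen2.5-7b-instruct",
  "apiKey": ""
}
```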

Each utility then prompts the LLM and either streams the response directly (like llm and summarise) or parses structured output to take action (like aicp and remindme).

Some utilities also use tools – functions that the LLM can invoke during execution. For example, the agent in aicp lets the LLM know about a DirectoryListTool which runs ls to inspect directories by date, size, or name. This is the agentic pattern I wrote about in State of the AI Nation 5 – giving LLMs not just the ability to chat, but to do things.
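The shape of helper a tool like DirectoryListTool wraps is simple: a function the LLM can call with a directory and a sort order, which runs plain ls underneath. A sketch (the flag choices are my assumption; the real tool lives in the .NET agent):

```shell
# Hypothetical equivalent of aicp's DirectoryListTool: the LLM asks for
# a listing sorted by date, size, or name, and gets ls output back.
list_directory() {
  local dir="$1" sort_by="${2:-name}"
  case "${sort_by}" in
    date) ls -1t -- "${dir}" ;;   # newest first
    size) ls -1S -- "${dir}" ;;   # largest first
    *)    ls -1  -- "${dir}" ;;   # alphabetical
  esac
}

# Demo against a throwaway directory so the sketch runs anywhere.
demo="$(mktemp -d)"
touch "${demo}/alpha.txt" "${demo}/beta.txt"
list_directory "${demo}" name
```

The agent framework’s job is to expose that function to the model, let it inspect the results, and fold them into its plan.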

It’s déjà vu all over again

Every major shift in computing has been about raising the level of abstraction. Assembly gave way to C. C gave way to Python. Command lines gave way to GUIs. GUIs gave way to touch interfaces. Each layer didn’t replace the one below – it made it accessible to more people and reduced the friction for everyone.

I see natural language as just the next layer. Software developers don’t need to code anymore (caveats abound), they “just” need to provide a comprehensive set of requirements (a “spec”).

(Fate is not without a sense of irony here as developers – myself included – have complained for aeons about not having “good requirements”, and now it’s our job).

And now, the same is true of interacting with your operating system. (Don’t even try to tell me this is what Cortana could do – it couldn’t, whatever you were about to say. This is a hill I’m happy to battle and die on).

What’s Next?

Natural language may replace flags and options for many operating system interactions (probably via voice rather than the command line, in reality), and generative AI enables completely new utilities such as summarise. The original coreutils aren’t going anywhere though – you can pry grep from my cold, dead hands – but for those commands you use twice a year, can’t remember the syntax for, and can’t be bothered to RTFManpage, an LLM intermediary is a genuine quality-of-life improvement.

There are a few directions I’m keen to explore:

More tools. The GNU coreutils package contains over a hundred utilities. There’s no reason AI-powered equivalents of find, diff, sed, and friends couldn’t exist. Consider aidiff "what changed in this config file since last Tuesday?" or aifind "locate all Python files that import requests and haven't been modified in 6 months".

Multi-tool pipelines. The real power of UNIX is chaining tools together. Right now, llm | summarise works. But imagine longer chains: fetch an RSS feed, extract the interesting articles, summarise each one, and compile a daily digest – all in a single pipeline, all powered by a local LLM, all without opening a browser.
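The plumbing for that digest pipeline already exists in the shell; only the AI stages are new. A sketch with the AI stage stubbed out so it runs anywhere – swap the stub for the real summarise and feed it a real RSS URL via curl:

```shell
# Pull <title> text out of RSS-like XML (a crude sketch, not a real
# XML parser – enough to illustrate the pipeline shape).
extract_titles() { grep -o '<title>[^<]*</title>' | sed 's/<[^>]*>//g'; }

# Stub for ai-coreutils' summarise, which would prompt an LLM per item.
summarise() { sed 's/^/- /'; }

# Fake feed input keeps the sketch self-contained and offline.
printf '%s\n' \
  '<item><title>Headline one</title></item>' \
  '<item><title>Headline two</title></item>' |
  extract_titles | summarise
```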

Better local models. As I covered in a previous post, the open-source model space is moving at breakneck speed. Qwen, Deepseek, and K2 models are getting smarter and smaller. The models that run on a laptop today are better than the cloud-hosted models of two years ago. This only makes local, private, AI-powered command-line tools more practical.

A natural-language shell? This is a natural progression. What if instead of bash or zsh, your default shell was an LLM-powered interpreter that could understand both traditional commands and natural language, seamlessly? You’d type ls when you know what you want, and “show me the biggest files modified today” when you don’t. But I think voice is the right interaction mode for natural language, not typing.

In Conclusion

The operating system is the bridge between human intent and machine execution. For fifty years, we’ve crossed that bridge via operating system utilities, flags, options, pipes and complex syntax.

LLMs offer a different bridge – one built in a much more expressive language. “ai-coreutils” is my attempt to bring that idea to life.

I think Doug McIlroy would approve of us piping the output from one simple LLM-enabled tool into another.

The project is open source and available at github.com/richstokoe/ai-coreutils. Contributions and suggestions are all welcome.

Please note, this is a thought experiment. There is no warranty. Use this at your own risk and inside a sandbox / virtual machine you don’t mind destroying.

Category: Artificial Intelligence, Code, Featured, Software
