People use their brains differently today than they did 30 years ago.
In 1995, we all had to learn and retain knowledge. Referencing books and even using Microsoft Encarta was slow and cumbersome. And then Google arrived.
In the early 2000s, we learned fewer things and retained less knowledge, but we started to learn and retain something new: an “index” of places online we could get information from when we needed it.
Remembering the addresses of Wikipedia and Google was often all we needed to reach our desired knowledge.
More recently, with the launch of generative AI and – most importantly – agentic AI assistants which can do things on our behalf, we learn and retain even less knowledge. We don’t even need to build an index of places where knowledge can be found.
Instead, we only need to learn and retain the prompts that get AI agents to retrieve the knowledge for us, and now they can apply that knowledge too.
Building a 2nd Brain
The challenge with off-the-shelf generative AI tools (such as ChatGPT) is that they’re general-purpose and not focused on your particular knowledge-management problems.
With the proliferation of AI tools that we can personalise and run on our own computers, we can build our own reference library and use these tools as a virtual assistant to help us manage it.

Building a personal, AI-powered knowledge management assistant is simple and inexpensive. Our goal is to build a library that is as easy for humans as for AIs to contribute to and use. And as I value control and privacy of my data – as well as being a total cheapskate! – you’ll note my setup prefers locally-hosted, open-source and free technologies throughout.
First, let’s review which components we need to build a conventional knowledge management library, and then we’ll move on to adding AI superpowers:
1. Document Library
Start with a folder on your computer. Many AI assistants are designed to work with software codebases, so working with text files in folders is just fine.
My recommendation is to use a highly interoperable format to ensure you can use your library in as many tools and ways as possible. Markdown is a great choice: it supports tables, headings, images and links, as well as a range of other styling features. Markdown is also easily readable and editable by humans as well as machines and AIs, and there are myriad tools out there which support it, including Visual Studio Code.
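As a rough illustration, a note in the library might look something like this (the file names and content are made up):

```markdown
# Local LLM Tools

## Summary
| Tool      | What it does                          |
| --------- | ------------------------------------- |
| Ollama    | Runs open-weight models locally       |
| LM Studio | Desktop app for running local models  |

## Notes
Longer notes go here, alongside embedded images like
![screenshot](attachments/lm-studio.png).
```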
2. Document Editor and Cross-Referencing
I’ve found that Obsidian is a great (free!) tool for editing Markdown files.
You give Obsidian a root folder to work within (this root folder and all the child files and folders are called a “vault”). The UI presents a simple tree view of directories and files (called “notes”) in the vault and automatically displays connections between files where information is cross-referenced using Markdown hyperlinks.
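For example, a cross-reference is just a normal Markdown link (Obsidian also understands its own double-bracket “wikilink” shorthand), and either style shows up as a connection in the graph view; the note names below are hypothetical:

```markdown
Sync options are compared in [Cloud Storage](cloud-storage.md),
or, using Obsidian's shorthand, in [[Cloud Storage]].
```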
One challenge with using Obsidian is that it is installed on each device separately and works with local file systems. You can pay $4/month (at the time of writing) for Obsidian Sync which will copy files to the cloud and synchronise them across devices, but I will describe a free alternative to this…
A huge amount of credit is due to my good friends Scott Altham and Jesse Cary for educating me on the extensibility features of Obsidian. I now pretty much use it for everything, from a personal Markdown-based, Trello-style kanban board, to Mermaid diagrams, to Miro-style whiteboarding with Excalidraw.

3. Synchronising your Library Across Devices
Rather than paying $4/month for Obsidian Sync (which, I have to say, is perfectly reasonable and represents good value for money), I have set up a free Dropbox account (for transparency, that’s my referral link) which offers 2GB of cloud space and synchronises across devices where the Dropbox app is installed.
Dropbox is a good choice for me as I exclusively use Linux and Mac machines, but you might find OneDrive or Box.com better for your circumstances. They all offer free tiers which are usually more than sufficient to store text files and attachments. Add the root folder (or “vault” if you’re using Obsidian) of your library to your cloud storage folder (e.g. your Dropbox folder, as in my case) and you’ll see your library synchronise across devices seamlessly.
You can also use a hosted Git repository, such as GitHub or GitLab. This will give you version history too, but syncing is more involved.
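If you go down the Git route, a minimal sketch of the workflow looks something like this (assuming you’ve already created an empty private repository called knowledge-library under your account on GitHub):

```bash
# One-off setup inside your vault's root folder
cd ~/knowledge-library
git init
git add .
git commit -m "Initial import of my notes"
git branch -M main
git remote add origin git@github.com:YOUR-USERNAME/knowledge-library.git
git push -u origin main

# On each device, whenever you've edited notes
git pull --rebase   # pick up changes made on other devices
git add .
git commit -m "Update notes"
git push
```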
Now that you have a personal reference library, let’s sprinkle some AI fairy dust on it.

4. Exposing your Knowledge Library to an LLM
There are several ways you can connect a large language model (LLM) to your knowledge library:
a) Connect a local LLM via the Filesystem MCP Server
By adding the Filesystem MCP (Model Context Protocol) Server to your locally-hosted LLM (Ollama and LM Studio are popular options for this), you can give the model direct access to a specific folder on your machine – for example, your Obsidian vault.
Because the model runs locally, this MCP server – together with other MCP servers such as Fetcher, which enables web search – gives you maximum privacy: you can be assured that your input to the LLM isn’t being used to train it!
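As a rough illustration, MCP-aware clients are typically pointed at the Filesystem server with a small JSON configuration along these lines (the exact file name and location vary by client, and the vault path here is hypothetical):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/home/me/Dropbox/knowledge-library"
      ]
    }
  }
}
```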

b) Build your own Knowledge Management Agents
By far the most complicated option, but the one that offers the most power and control.
There are many agent frameworks you can choose from, and with them you can build agents specifically designed to research, write, proof-read and publish information.
If you’re new to building agentic systems, there are worse places to start than this amazing free, 4-hour workshop, which includes a step-by-step walk-through of building the agents I’ve mentioned above!
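To give a flavour of what “building your own” involves, here’s a minimal sketch of the simplest possible helper: a script that asks a locally-hosted model to summarise each note in the vault. It assumes Ollama is running locally with the ollama Python package installed; the model name and vault path are placeholders, and a real agent framework would layer tool use and orchestration on top of this kind of call:

```python
from pathlib import Path

import ollama  # pip install ollama; assumes the Ollama server is running locally

VAULT = Path.home() / "Dropbox" / "knowledge-library"  # hypothetical vault location
MODEL = "llama3.1"  # any model you've already pulled with `ollama pull`


def summarise_note(note_path: Path) -> str:
    """Ask the local model for a short summary of a single Markdown note."""
    text = note_path.read_text(encoding="utf-8")
    response = ollama.chat(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You summarise Markdown notes into short cheat sheets."},
            {"role": "user", "content": f"Summarise this note in five bullet points:\n\n{text}"},
        ],
    )
    return response["message"]["content"]


if __name__ == "__main__":
    # Print a summary for every Markdown note in the vault's root folder
    for note in sorted(VAULT.glob("*.md")):
        print(f"## {note.stem}\n{summarise_note(note)}\n")
```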
c) Use a general-purpose Coding Assistant
I tried the local LLM option with MCP servers for a little while but it didn’t demonstrate the directory-wide reasoning that I’d come to expect from Cursor and Claude Code.
I also made a start on building a set of agents to manage my library; however, this quickly took over my life as I had to iterate intensively to get them to work as I wanted.
My advice would be not to reinvent the wheel.
I landed on a really nice solution that simply uses Claude Code. I’m already paying for a Pro subscription for coding, but when I point it at my knowledge library it demonstrates the same powerful reasoning skills and treats the whole library as a connected set of information.
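In practice, “pointing it at the library” just means launching Claude Code from the vault’s root folder so that it treats the directory as its workspace (the path below is hypothetical):

```bash
cd ~/Dropbox/knowledge-library   # your vault's root folder
claude                           # start Claude Code in that directory
```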
I’ve found this combination to be extremely powerful and effective.
For example, when a new popular LLM arrives on the scene, Claude Code researches it and generates Markdown files summarising the model for me. Claude Code also follows my strict “Documentation Guidelines” Markdown file, which requires all documents to have a Cheat Sheet summary at the top for quick reference.
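For illustration, the guidelines file is just another note in the vault; a stripped-down, hypothetical version might look like this:

```markdown
# Documentation Guidelines

- Every document starts with a "Cheat Sheet" section: a short table or bullet
  list summarising the key facts for quick reference.
- Use descriptive, kebab-case file names (e.g. new-llm-notes.md).
- Link to related notes rather than duplicating their content.
- Keep images and other attachments in the attachments/ folder.
```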

Conclusion
Information growth is only accelerating, so it’s vital to have an effective way to manage all of this knowledge. Most generative AI models today understand natural language deeply, making them good companions for managing your personal reference library.
But by adding the power of AI agents, specifically those designed to work with directories of complex, inter-linked text files, you can automate and simplify how you extract actionable meaning from that information.
