Founder Insight

Why Knowledge Graphs Beat Fine-Tuning for Representing Real Human Minds

Dara Ladjevardian, CEO & Co-Founder at Delphi

When most AI companies fine-tune a model on a person’s data, they’re trading transparency for speed and accuracy: you get a model that answers questions well, but you lose visibility into why. Delphi made the opposite bet. Instead of fine-tuning, Dara Ladjevardian’s team built a temporal knowledge graph, a structured representation of how a person thinks that prioritizes explainability and control above all else.

The technical choice reflects a philosophical one: if you’re representing a real human mind, people need to trust it. And trust requires visibility.

“The problem with fine-tuning models is you lose explainability and you lose control,” Dara explains. “But if you’re representing a human, explainability and control are really important because this is all about trust. This is the difference between us and ChatGPT or Claude: this is how this human mind works.”

The Architecture of a Mind

The foundation of Delphi’s approach starts with a borrowed concept from cognitive science. In 2012, Ray Kurzweil published “How to Create a Mind,” arguing that the human mind is a hierarchy of pattern recognizers. When large language models rose to prominence around 2021, Dara realized something: an LLM is also a pattern recognizer. It recognizes patterns in words and generates output based on those patterns. A human mind, then, could be modeled as a hierarchy of LLMs working together, provided you represent the data correctly.

But raw data is noise. You need structure. That’s where the knowledge graph comes in.

A Delphi knowledge graph isn’t a simple embedding space or a fine-tuned model parameter set. It’s a structured graph of primitives: events, entities, heuristics, and the connections between them. When someone creates a Delphi, their podcasts, tweets, essays, and interviews are ingested not as raw text but as a knowledge graph. An event is a specific decision or moment. An entity is a person, company, or concept. A heuristic is a decision rule: “If I’m asked about hiring, I think about culture first.”

These primitives are connected. The graph learns how the person reasons across different domains by observing patterns in how they’ve connected events, entities, and heuristics in their past decisions and statements.
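The three primitives described above can be sketched as a small data model. This is an illustrative sketch only; the class names, fields, and the example graph are assumptions for clarity, not Delphi’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str  # a person, company, or concept

@dataclass
class Event:
    description: str            # a specific decision or moment
    timestamp: str              # when it happened, for temporal weighting
    entities: list = field(default_factory=list)  # Entities involved

@dataclass
class Heuristic:
    rule: str                   # a decision rule, stated in plain language
    evidence: list = field(default_factory=list)  # Events supporting the rule

# Connecting primitives into a (hypothetical) graph fragment:
hiring = Entity("hiring")
culture = Entity("culture")
e = Event("Chose candidate B over A for culture fit", "2023-06-01",
          [hiring, culture])
h = Heuristic("When asked about hiring, reason about culture first", [e])
```

Because each heuristic points back at the events that support it, an answer can always be traced to its evidence, which is the explainability property the article emphasizes.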

Temporal Learning: Why Time Matters

Most AI representations treat all data as equivalent. A person’s opinion from five years ago carries the same weight as a statement made last week. But humans aren’t static. We change. We learn. We update our beliefs. A knowledge graph that doesn’t track time is already obsolete.

“The mind is temporal. It changes over time,” Dara says. “You need to be able to track how someone’s beliefs and opinions change over time and the recency bias of new things.”

Delphi’s temporal knowledge graph weights recent data more heavily and explicitly models how a person’s positions have evolved. If you changed your mind about a technology, the graph knows when and why. If you doubled down on a belief after an experience, it reflects that commitment in the structure.

This matters practically. When someone asks Delphi a question about a new situation, the system can say: “Based on how you reasoned about similar situations in 2023, and updating for your recent statements about X, I think you’d approach this like Y.” It’s not inventing new opinions—it’s extrapolating along a trajectory the person has been on.
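One simple way to realize the recency bias described above is exponential decay over the age of a statement. This is a generic sketch of the idea, not Delphi’s implementation; the half-life value is an arbitrary assumption.

```python
from datetime import date

def recency_weight(stated_on: date, today: date,
                   half_life_days: float = 365.0) -> float:
    """Weight a statement by its age: halves every `half_life_days` days."""
    age_days = (today - stated_on).days
    return 0.5 ** (age_days / half_life_days)

today = date(2024, 6, 1)
old = recency_weight(date(2019, 6, 1), today)  # opinion from five years ago
new = recency_weight(date(2024, 5, 1), today)  # statement from last month
# When aggregating conflicting positions, the newer statement dominates,
# while the older one still contributes to the trajectory.
```

Under this scheme a five-year-old opinion still exists in the graph, but it is down-weighted roughly 30x relative to last month’s statement, matching the article’s claim that the person’s current trajectory, not a frozen snapshot, drives answers.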

Two Modes: Strict and Adaptive

To preserve trust, Delphi gives users two knobs: strict mode and adaptive mode.

In strict mode, Delphi will only answer questions it has direct evidence for. If you’ve never explicitly discussed a topic, it says “I don’t know.” This is maximally conservative and maintains zero hallucination risk—you can never be surprised by your own Delphi saying something you’d never say.

Adaptive mode is more powerful. It answers questions where the knowledge graph can infer what you might say in a new situation. If you’ve written about how you think about hiring, professional development, and compensation, and someone asks how you’d set pay for a new role, adaptive mode might extrapolate: “Based on how you’ve reasoned about these adjacent topics, here’s what I think you’d consider.”

But this extrapolation isn’t magical. It’s based on heuristics learned from the graph structure. “If someone asks my Delphi how they can use the product as a lobbyist, and I have no direct data about lobbying, the system might infer what I’d say based on how I’ve thought about other professions,” Dara explains. The system looks for the heuristic—the reasoning pattern—and applies it to new ground.
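The contrast between the two modes can be shown as a small dispatch sketch. The `graph` dict and its lookups are stand-ins for whatever retrieval Delphi actually performs; only the control flow illustrates the article’s point.

```python
def answer(question_topic: str, graph: dict, mode: str = "strict") -> str:
    """Answer from direct evidence; extrapolate only in adaptive mode."""
    direct = graph.get(question_topic)  # direct evidence for this topic
    if direct:
        return direct
    if mode == "strict":
        return "I don't know."          # strict mode never extrapolates
    # Adaptive mode: borrow reasoning patterns from adjacent topics.
    related = [h for topic, h in graph.items() if topic != question_topic]
    if related:
        return f"Extrapolating from adjacent reasoning: {related[0]}"
    return "I don't know."

graph = {"hiring": "I think about culture first."}
print(answer("lobbying", graph, mode="strict"))    # "I don't know."
print(answer("lobbying", graph, mode="adaptive"))  # extrapolates from hiring
```

The key property is that strict mode fails closed: with no direct evidence it refuses, so the owner can never be surprised by an invented answer, while adaptive mode makes its borrowed heuristic explicit in the response.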

Why This Matters More Than Model Size

The industry’s instinct is to throw more data and parameters at the problem. Bigger model, better performance. But Dara’s argument is that you’re solving the wrong problem if you’re starting with a model-centric view.

“The architecture of the data is really important,” he emphasizes. You can fine-tune a model on someone’s life’s work and get something that passes a test. But a few months later, when you need to explain why that model made a decision—or worse, when you need to correct an error—you’re stuck. You can’t point to the heuristic it got wrong because the heuristic is diffused across billions of parameters.

With a knowledge graph, corrections are surgical. If the system inferred something wrong, you can see which heuristic led to it and update that specific piece of reasoning. The owner maintains control over their own representation.

This also means the model doesn’t become a ghost version of the person, making its own decisions. It’s a decision support tool grounded in their actual thinking patterns, not an autonomous agent pretending to be them.

The Evaluation Problem

All of this works only if you can verify that the inferences are actually faithful to the person. How do you test whether adaptive mode captures the person correctly?

“We’re creating data sets on what it means to reason, what are the heuristics,” Dara explains. The process is still emerging, but the intuition is: if you can show that the system consistently applies the same heuristics the person applies in new domains, then you’ve captured something real about how they think.

This is where the temporal graph becomes crucial. You’re not evaluating against a frozen snapshot of the person. You’re evaluating whether the system tracks their evolution and can extrapolate along a consistent trajectory.

FAQ

What’s the difference between a knowledge graph and a fine-tuned LLM?

A fine-tuned LLM is a black box: you train on data, get back weights, and hope it generalizes. A knowledge graph is explicit structure—events, entities, heuristics, and connections. It’s the difference between knowing a neural network somehow learned to recognize cats and being able to state that cats have whiskers, pointy ears, and a characteristic way of moving.

Can’t a large language model also capture how someone thinks?

A large model trained on a person’s data will capture patterns, but you lose visibility. Fine-tuning creates a model that’s difficult to audit, correct, or explain. If your digital mind says something you’d never say, you can’t trace why. With a knowledge graph, you can trace the heuristic and fix it.

Why does temporal structure matter for representing beliefs?

People change. A knowledge graph that treats your 2020 opinion the same as your 2024 belief is out of date. Temporal structure lets the system weight recent positions more heavily and understand how your thinking has evolved, so it can extrapolate more accurately.

Is strict mode too limited? Can it actually answer meaningful questions?

Strict mode is limiting by design—it’s the safety mode. It works well for common questions you’ve already answered. For new situations, most users will opt into adaptive mode. The key is that you choose, not that the system silently makes a choice for you.

How do you prevent adaptive mode from hallucinating?

By building evals that verify the inference actually reflects your heuristics, not the LLM’s best guess. If you’ve reasoned about hiring in a way that shows you prioritize culture, adaptive mode should infer your approach to other talent decisions along those same lines—not based on general knowledge about hiring, but based on your specific framework.

Can someone else game my knowledge graph and make me say things I wouldn’t say?

No. The knowledge graph is owned by the person it represents, trained only on curated data they provide, and not exposed to the open internet. Your heuristics can’t be poisoned by public information you didn’t personally validate.

What if I want to hide part of how I think?

You choose what data goes into your knowledge graph. Podcasts, tweets, essays, Google Drive documents—you curate the input. Sensitive reasoning doesn’t have to be included. The graph only knows what you feed it.

Does a knowledge graph scale to complex human reasoning?

That’s the active research problem. Simple heuristics scale easily. Deep, domain-specific reasoning scales if you have enough structured data. The bigger the person’s body of work, the more sophisticated the graph can be. That’s why Delphi works well for experts and founders—they have plenty of public reasoning to work from.

What happens if my thinking is contradictory or changes rapidly?

The temporal graph captures that. Contradictions are tracked—the system knows when you held position A and when you shifted to position B. Rapid change gets reflected in high recency bias. The goal isn’t to flatten out contradictions but to represent your actual reasoning, including its evolution.

Is this approach better than fine-tuning for all use cases?

No. Fine-tuning is faster and can scale to enormous amounts of data. But if explainability and control matter—and they do when representing a real person’s mind—knowledge graphs win. It’s a trade-off between performance and transparency.

Can Delphi also support a fine-tuned model layer on top of the knowledge graph?

Technically, yes. Right now Delphi uses the knowledge graph as the primary reasoning engine. But a fine-tuned model could provide auxiliary pattern matching. The graph stays as the interpretable core—the human-understandable layer that maintains trust.

Full episode coming soon

This conversation with Dara Ladjevardian is on its way.