Who Is Jan Liphardt?

Jan Liphardt is a Stanford bioengineering professor who couldn't stay in the classroom once he realized the stakes were too high. Two years ago, he opened a box in his living room containing a humanoid robot — and was underwhelmed. But what could have been a disappointing consumer unboxing became a founding impulse: if this technology is important enough to put in people's homes, it's too important to leave proprietary. Today, he's building OpenMind, an open-source robotic operating system that lets developers, educators, and families customize what robots do — instead of accepting whatever the manufacturer decided was "magic."

He brings a physicist's systematic mind, a teacher's patience for explanation, and a father's conviction about what kids should be able to see and understand inside the technology around them.


The Archetype: The Sage

Primary

The Sage

Secondary

The Creator

Journey Stage

Tests & Allies

Jan's default mode is understanding and teaching. In any conversation, he organizes the world into taxonomies: form factors bifurcate into tall-strong or kid-sized; humanoid use cases sort into education, safety, and health companionship; the school bus dilemma becomes a case study in competing ethics. His signature move is taking abstract questions and building frameworks to hold them.

But Jan isn't content with understanding alone. He's building to make understanding possible. His entire company is structured around transparency: "I want children to be able to look inside and regulators and teachers and developers everywhere to be able to peer into the brain of these robots." This is the Sage's core drive — not gatekeeping knowledge, but opening it.

His secondary archetype is The Creator. The underwhelming box became software. The kitchen table moment with his son asking a robot dog for homework help became the seed for the education use case. The Sage notices problems; the Creator builds solutions. "I learn best by doing things with my hands," he says, and his lab is proof — piles of robot heads, cameras, soldering irons, battery chargers. Ideas live in explanation; reality lives in the maker's space.


The Hero Match

Classical Hero

Prometheus

Prometheus stole fire from the gods and gave it to humanity — not for power or profit, but because he believed people deserved access to the tools that would let them shape their own futures. Jan's conviction about open-source software echoes that exact sacrifice:

"I do not want to live in a future where a humanoid shows up at your front door and says, 'my software is secret and magic.' That would be horrifically sad. It would be very depressing because this technology would appear like complete magic that parachutes from the sky."

The parallel is precise. Magic, to Prometheus, was the problem — divine mystery kept humans dependent. Fire was the antidote: understandable, shareable, your own. For Jan, proprietary software is the modern magic. Open source is fire.

And like Prometheus, Jan faces his own version of the chaining. Raising $20M from crypto-heavy investors while trying to keep the technology genuinely open isn't comfortable. The .org domain choice — a negligible cost difference from .com, but a deliberate philosophical signal — trades commercial polish for principle. His investors expect returns on tokenomics; his conviction demands accessibility. That tension is the rock.

Pop Culture Hero

Doc Emmett Brown — Back to the Future Part III

Late-trilogy Doc Brown is the professor whose greatest satisfaction isn't legacy or status — it's still building. He's still tinkering, still curious, still delighted by his own invention. Jan mirrors that energy exactly: "I'm still building. I'm not in my golden years. I'm not sitting there contemplating the sunset yet."

Both are scientists who left the institutional path (Doc's time machine obsession would never pass departmental review; Jan's conviction about open robotics required leaving tenure comfort for startup risk). Both live in workshops surrounded by their own creations' parts. Both integrate their inventions into family life without drama — it's just what they do. Doc's kids ride the time train; Jan's son asks a robot dog for homework help.

But the deepest parallel is in their explanation style. When Doc walks to the chalkboard, he doesn't perform intelligence — he delights in the problem itself. Jan does the same. When he describes the school bus dilemma or walks through the architecture choices around safety, he's not lecturing. He's solving out loud, inviting you into the thinking. That's what makes him compelling.


The Story Behind OpenMind

The Founder's Journey ↔ The Company's Journey

Jan Liphardt's Arc

Physicist → professor → fascinated by LLMs → convinced robotics would change daily life → underwhelmed by a box in his living room → founded a company to solve the problem he experienced.

OpenMind's Arc

Software platform for humanoid operating systems → open-source architecture → App Store for robots → infrastructure for guardrails and governance → moving toward a future where the technology is transparent enough that teachers, regulators, developers, and children can understand what the robot is actually doing.

The same conviction drives both: understanding is freedom. Jan couldn't stay in academia because the technology was too important to leave in closed systems. OpenMind couldn't be a proprietary platform because robotics in homes required transparency. The founder's personal breakthrough (I want to modify what this robot does) became the company's structural principle (anyone should be able to modify what a robot does).


How Jan Leads

Jan is a consensus builder on engineering and business decisions. He frames problems systematically, considers tradeoffs explicitly, and invites input. But on questions of principle, he's a sole decision-maker. When it comes to open-source philosophy, children's access to technology, or what safety means, he shifts from "we" to "I."

"I do not want to live in a future where..." These declarative statements — rare in his otherwise careful speech — mark where his conviction overrides strategic calculation.

He leads with humility about what he doesn't know. He admits he's a horrible driver, that his fortune-telling abilities are limited, and that he doesn't have the legal or philosophical acumen to optimize robot rule sets. But this humility sits alongside clarity on core values. He's uncomfortable with his own limitations on governance questions — so he's built infrastructure that allows others to contribute.

The core tension: Professor vs. Builder. Jan oscillates between explaining the world and changing it. The professor in him builds taxonomies and references Asimov. The builder in him opens a box, gets underwhelmed, and writes code. The professor asks questions; the builder demands answers. This tension produces his energy. He can't just explain — the technology is too urgent. He can't just build — the implications demand careful thought. The professor makes the builder responsible. The builder makes the professor relevant.

Founder Superpowers

Superpower

Translating Complexity Through Everyday Scenes

Jan doesn't simplify ideas — he relocates them. When he needs someone to understand Asimov's Laws, he doesn't explain the philosophy. He puts you in a school bus full of kids and asks what your main priority is. When he explains hallucination, he doesn't talk about model architecture — he says he's been grading student homework for 15 years, and humans hallucinate too. Every abstract concept gets a kitchen table, a living room, a Walmart car seat. The audience understands the stakes without any jargon.

Superpower

Disarming Through Genuine Self-Deprecation

"I'm a horrible driver." "I have a horrible secret." "I was super underwhelmed." Jan uses real admissions as bridges to deeper points. Horrible driver leads to why his kids are safer in a Waymo. Horrible secret leads to a reframe on hallucination. Underwhelmed leads to his founding conviction. Most founders protect their credibility. Jan spends his strategically — and the result is that the people around him open up in return.

Superpower

Building Moral Clarity From Disappointment

Most founders start from opportunity. Jan starts from disappointment that becomes conviction. He opened a box, got underwhelmed, and instead of moving on, turned that moment into a moral position about open technology. The arc — encounter, disappointment, moral clarity, company — is what makes his founding story land. He doesn't skip to the vision. He lingers in the moment where things weren't good enough.


What It's Like to Work With Jan

Jan's conversational style is measured and warm — a rare combination. He pauses before answering, respects turn-taking, and rarely dominates a conversation. His word choice is precise: "my suspicion is" instead of "I think," "it seems like" instead of declarative statements. He listens to questions fully before responding.

But this carefulness isn't cold. He remembers details about people he's just met. He finds genuine curiosity in tangents — a COVID mask supply chain story emerges naturally from a geopolitics question; a homework story emerges from a household adoption question. He's high-consideration with high warmth.

His growth mindset is explicit and constant. "Every one of us for the rest of our lives should spend a moment every day learning something new." He means it. When he describes learning by doing, he's not being inspirational — he's describing how his brain actually works. He pays attention to what's happening and anticipates learning something tomorrow.

This creates a particular kind of work environment: systematic, humble, growth-oriented, but grounded in principle. People around Jan know where the lines are (open source is not negotiable) and where the space is (everything else is up for discussion).


Why This Matters (For You)

If You're an Institution Deploying Robots

Schools, universities, research labs, and organizations evaluating robotics platforms face a critical question: how much control do we actually have over what these machines do? Jan's work on OpenMind directly addresses that tension. He's built a system where educators, administrators, and institutional leaders can look inside the robot's decision-making, understand the rules it operates under, and modify them if needed — without waiting for a manufacturer to release updates or having to accept someone else's vision of "smart." The kitchen table moment with his son tutoring instead of solving homework illustrates the stakes: your institution's values need to shape how robots behave, not the other way around. If you're deploying humanoids in schools or labs, that verifiability isn't nice-to-have — it's foundational to responsible adoption.

If You're an Engineer Building Robotics or AI Systems

Jan's architecture choices reveal a philosophy: never optimize for closed elegance. The OpenMind design explicitly accommodates third-party modification. The OM1 architecture includes infrastructure for guardrails and rules that others can verify and change. This is harder than shipping a finished, proprietary system. It's also more resilient.

His decision to use a one-to-two-second decision cycle for safety-critical operations isn't because faster is impossible — it's because imperfect-but-verifiable is more trustworthy than perfect-and-opaque. When you're building systems that will be in homes, this distinction matters.

What would it change about your own architecture if you had to make every decision verifiable to a non-engineer?

If You're Early in Your Career

Jan's path — physics professor, healthcare computing interest, fascination with LLMs, then robotics — wasn't linear. But each step followed curiosity. He didn't move because it was strategic. He moved because the problem became more interesting than the previous one.

He also made trade-offs. Stanford tenure is comfortable. OpenMind is not. He chose the more uncertain path because the stakes of the technology demanded it. His advice is consistent: "Try stuff out, pay attention to what's going on, and you should anticipate learning new things every day for the rest of your life."

The implication: career strategy comes later. Curiosity and attention to what's actually happening come first.

If You're Considering Joining OpenMind

Jan's founding conviction — open source is non-negotiable — sets the tone for the entire organization. People here aren't shipping a closed platform and pretending it's open. The .org domain was intentional. The App Store for robots was intentional. The infrastructure for verifiable guardrails was intentional.

He's also honest about what he doesn't know. OpenMind is actively looking for lawyers, historians, and philosophers who can improve the rule sets for robot behavior. This isn't rhetoric — there's a specific gap he's identified and he's willing to bring in expertise that outmatches his own.

The trade-off: working here means working on hard problems without perfect clarity. But it means the problems you're solving actually matter to the future Jan is trying to build.


Go Deeper

The full conversation with Jan Liphardt is on its way. Check out other episodes in the meantime.

Join OpenMind

Now that you know how Jan Liphardt leads, see if there's a role for you.