AI Literacy Isn't Optional — Here's How to Give Your Kids What Schools Can't
Steve Ruiz, Founder & CEO at tldraw
Steve Ruiz’s daughter started playing with tldraw Computer around age 7. It’s an AI-powered canvas where you connect nodes, write prompts, and watch models execute them visually. She wasn’t learning to code. She wasn’t learning AI in the academic sense. She was doing something more important: learning to think with tools that amplify her thinking.
“I’m very happy to be able to know enough about the technology to safely … give some experience with this,” Ruiz explains.
The underlying philosophy sounds uncontroversial: his daughter needs AI literacy the way previous generations needed computer literacy. Not because she’ll become an AI researcher, but because AI will be a tool in whatever she does.
The radical part: most schools, most parents, and most of the debate about kids and AI are treating it all wrong.
The Literacy Framing Changes Everything
When people worry about “kids and AI,” they usually mean two things: (1) Will AI take their jobs? (2) Will AI make them stupid?
Ruiz isn’t dismissive of those concerns. But they’re backward. The real risk is illiteracy.
“Whether someone is comfortable with these tools, or afraid of them, or thinks they can do too many things or too few … having a familiarity, a kind of numeracy with AI, is something that I can give my kid that I realize not a lot of parents can.”
That word — “numeracy” — is the key. Numeracy isn’t about being a mathematician. It’s about understanding numbers well enough to navigate the world without being scammed or confused. You need it to read a loan agreement, understand inflation, tip correctly, budget.
AI numeracy means understanding what AI can do, what it can’t, where it hallucinates, where it excels, how to talk to it, when to use it, and when not to. It means your kid won’t be intimidated by the AI tools they encounter, and won’t over-trust them either.
Ruiz’s approach is to let his daughter use them: safely, early.
Why “Ridiculous Images” Matter
Ruiz’s commitment to teaching AI literacy starts with something parents might find alarming: he lets his daughter generate absurd AI images. “Even if it starts off with ridiculous images and stuff like that, it’s still a good, I think it’s an important thing to be doing.”
This is the opposite of the protective parenting instinct (keep kids away from AI until they’re older). It’s closer to: familiarity, experimentation, safety.
The logic: if your daughter generates silly images with AI at age 7, she’s learning:
- How to prompt (writing matters)
- How AI interprets language (sometimes literal, sometimes not)
- Iteration (if the image is wrong, can I fix it with a better prompt?)
- Limits (AI can’t always do what you ask)
- Delight without worship (it’s a tool, sometimes it’s funny, sometimes it’s useful)
She’s not afraid. She’s not a believer. She’s curious.
By the time she’s 15 and peers are either worshipping or fearing AI, she’ll have 8 years of lived experience. That’s a huge difference.
The Future-of-Work Argument
Ruiz’s optimism about his daughter’s future isn’t naive. It’s informed by what he’s seeing with modern AI tools.
“What I’m recognizing is that, wow, there’s actually a lot of work to do … a lot of tedious knowledge work, a lot of things where, wow, if I had 15 really smart consultants, I could do a lot with that.”
This is the unstated argument: AI isn’t going to eliminate work. It’s going to eliminate tedious work, which means there’s more actual work to do.
“How do I make it faster? How do I run two of these things at once? How do I optimize my God box in order to make this thing more powerful? The impulse to innovate, the impulse to engineer … to do more and to do better and to go deeper.”
The future needs kids who can do more, not kids who wait for AI to do it for them. And the only way a kid learns to do more is by having AI tools available to extend what they can think and build.
The Shallow Work Thesis
Here’s where Ruiz gets provocative. The real reason to teach kids AI literacy isn’t to prepare them for job loss. It’s to prepare them for a world where knowledge work goes deep.
“I think what we’re gonna find is that we’ve actually been having a really shallow engagement with the amount of knowledge work that could be done historically and that it’s probably much, much, much deeper than we’ve thought about.”
Right now, most organizations spend their labor budget on tedious work: formatting reports, copy-pasting data, rewriting the same email in different styles, updating spreadsheets. That’s not because it’s fun. It’s because they don’t have the resources to do deeper work.
But if AI handles the tedious layer, what unlocks is the capacity to do work that actually matters: better business decisions, better education, better data analysis, better product design.
“Our business decisions could have been better informed, our kids could have been better educated, our politicians could have had better data. Everything that could benefit from essentially just applied, engineered knowledge work will be done, and there will still be more to do.”
A kid who grows up with AI literacy won’t be afraid of deeper work. She’ll expect it. She’ll know how to use tools to do it. She’ll thrive.
What AI Literacy Doesn’t Mean
It’s important to note what Ruiz is not saying:
He’s not saying schools should teach kids to code. He’s not saying everyone needs to understand transformers or gradients. He’s not saying kids should spend all day on screens.
He’s saying: “I’m very happy to be able to know enough about the technology to safely … give some experience with this.”
Safe experience. That’s the bar. Not expertise. Not obsession. Comfort.
The irony is that parents who understand technology don’t need to worry about their kids becoming AI-obsessed. They can give them healthy exposure, boundaries, and perspective. It’s the parents who don’t understand it who either avoid it entirely (creating a literacy gap) or panic and over-restrict it (creating a forbidden-fruit problem).
The Broader Moment
Ruiz closed his thoughts on this with a nod to his own experience: “I never thought that I would be doing firmware for little development boards, but I’m doing that now thanks to AI.”
He’s in his 30s. His neural pathways are set. He’s re-teaching himself electronics because AI tools make it possible. He can ask Claude questions in real time, iterate on code, learn by doing.
His daughter is 7. She has 80 years ahead. If she starts now, learning to think with AI tools as a natural extension of thinking in general, the kinds of problems she can solve, the depth she can go to — it’ll be wildly different from the previous generation.
Not because AI is magic. But because starting early with tools compounds.
FAQ
Won’t AI literacy become obsolete if AI keeps changing?
The tools change; the principles persist. Understanding what AI can and can’t do, how to iterate with it, when to trust it — those transfer across generations of models. It’s like computer literacy in 2000 vs. 2024: the specific tools are totally different, but the underlying principles hold.
Is it safe to let kids use AI tools at age 7?
It depends on the tool and parental involvement. tldraw Computer is visual, exploratory, and safe (it’s generating images and diagrams, not accessing the internet). Ruiz is present when his daughter uses it. That’s the safety model: supervised, exploratory, low-risk.
Will AI literacy create a bigger divide between kids with tech-savvy parents and those without?
Probably, yes — at least in the short term. That’s an argument for schools to teach it, or for communities to make tools accessible. But Ruiz’s point is that parents who understand the technology can model healthy use and demystify it.
How is “AI literacy” different from “computer literacy”?
Computer literacy was about understanding computers as tools, knowing how to use an OS, eventually using the internet. AI literacy is about understanding what AI can do, how to interact with it, what outputs to trust. Similar framing; different domain.
Should parents be teaching this or schools?
Ideally both. But schools move slowly. Parents who understand it can start now. Schools can catch up. Right now, the gap matters.
What if my kid becomes obsessed with AI and loses interest in other things?
That’s possible, and it’s a parental boundary-setting issue, not a technology issue. Same way some kids become obsessed with video games. Ruiz’s approach is: exposure, safety, balance, not avoidance.
Will kids who grow up with AI have an unfair advantage?
Almost certainly yes, at least initially. Kids who code have an advantage over kids who don’t. Kids who can think with AI tools will have an advantage over kids who can’t. That’s an argument for access, not an argument for keeping it away.
Should I teach my kid to code if they’re interested in AI?
Not necessarily. Traditional coding is becoming less critical as AI handles more of it. AI literacy and a strong understanding of logic, problem-solving, and how to explain things to a tool matter more.
How do I know when my kid is ready for AI tools?
When they’re curious and you feel comfortable supervising. 7 is young but reasonable for simple, visual tools. 12-13 feels like a natural point where they can handle more complex interfaces. Trust your judgment and start safe.
Isn’t there a risk that AI tools make kids lazy?
Same risk as calculators making kids bad at math, or GPS making kids bad at navigation. The risk is real but manageable with good parenting. Use tools to go deeper, not to avoid thinking.
What’s the one thing parents should know about AI and kids?
Literacy beats fear beats obsession. Teach your kid what AI can do. Let them experiment. Set boundaries. Watch for unhealthy dependence. That’s a healthier outcome than banning it or worshipping it.
Watch the full conversation
Hear Steve Ruiz share the full story on Heroes Behind AI.
Watch on YouTube