Day 1 for Ubundi was on 5 January. A blank canvas with an engineering team that is early-career by title, but unusually senior judging by its early outputs.
That, however, was a very intentional bet. In a world where there are no 20-year experts in AI (at least not in its application layer), we found ourselves in a moment where the playing field was more level than it had been in tech for a long time.
Combining a "beginner's mind" perspective with some youthful exuberance, the team's mandate has been very clear: dive into the deep end, try not to rely on the previous way of doing things, and just keep swimming.
One can define this fast-paced, near-chaotic moment (in society and tech) with varying heuristics. What does seem true is that the moment can be seized by those who get their hands dirty to figure things out and build.
For Ubundi, taking action and building has been especially important given our mission of taking a more humanistic approach to AI. Beyond the implicit black boxes through which we currently interact with AI, we believe there is a clearer, more explicit way to influence and work with it, one that better aligns with our unique humanism.
The Challenge of Context
Our first explorations with Ubundi started with a very simple realisation: in a world where we expect access to AI models to be commoditised and the learning curve to be much shorter than in historical tech, our usage and outputs could suddenly become very homogeneous. But capitalism and our economies still reward differentiation.
So to differentiate my usage of AI from yours, I'd need to give it enough context about who I am so that it can uniquely represent me.
But we've mostly defined context relative to our understanding of the context window, and we use "data" and "context" almost interchangeably.
Real context is much more nuanced though; it's about understanding both the data and how (and why) it was created in the first place.
Assuming that we did have real context to share with AI, the next challenge is to do so in the most holistic way possible. If you're directly using a frontier LLM—say ChatGPT—its own memory functionality (albeit black box-y in nature) is very good at building broader context about you. But you're also using many other point solutions that leverage these frontier models via API, which means your real context is fragmented with no viable way to stitch it together.
Enter tootoo
Today marks the public release of our first product: tootoo.ai.
Put simply: tootoo helps you extract your personal context, turn it into something you can own, and then reuse it across the different AI tools you already use—so the outputs feel less generic and more like you.
For the first version, we were greatly influenced by existing ideas or ways in which humans have tried to either get clarity about themselves or use that clarity/context in some way:
- Whether it's a psychometric test or a consumer/hobbyist personality assessment, humans have used them extensively over the years. The challenge is just that they're kinda boring (nobody gets excited about answering 100+ Likert-scale questions) and they squeeze you into predefined boxes.
- Buster Benson's Book of Beliefs, especially since it includes a changelog to capture both event and state data. This greatly influenced how we implemented your personal codex in tootoo.
- The llms-txt project was an early attempt at creating a standard for AI/agents to better understand and read websites. We borrowed some of that thinking in how we make your tootoo codex available in multiple formats, so that AI can reference this in different ways.
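To make the llms.txt analogy concrete, here is a minimal sketch of the underlying idea: a personal codex kept as a plain-text document that any AI tool can prepend to a prompt. The codex fields, headings, and the `build_prompt` helper below are illustrative assumptions, not tootoo's actual format or API.

```python
# Illustrative only: the real tootoo codex formats are not specified here.
# This sketches the llms.txt-style idea of a plain-text, AI-readable document.

CODEX_TXT = """\
# Codex: Jane Doe
## Beliefs
- I prefer direct, concise communication.
- I value evidence over anecdote.
## Context
- Role: early-stage founder exploring AI tooling.
"""

def build_prompt(codex: str, user_message: str) -> str:
    """Prepend the plain-text codex so any LLM sees the user's context."""
    return (
        "Use the following personal codex to tailor your answer.\n\n"
        f"{codex}\n---\n{user_message}"
    )

if __name__ == "__main__":
    print(build_prompt(CODEX_TXT, "Draft a short bio for my website."))
```

Because the codex is just text, the same document can be pasted into a chat UI, served at a well-known URL, or injected via API, which is what makes an llms.txt-style convention portable across tools.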
If you want to play with it:
- Start a codex on tootoo and answer a few different questions.
- Stress test it in your real prompts and workflows.
- Tell us where it breaks (we're early, and we want the sharp edges).
Surface Areas
One of the more interesting realisations in the last month of building has been that I will need to unlearn many of the things I know about how to build software.
In the past (especially with SaaS), it was easier to define a specific customer with a specific problem, and then reverse-engineer from that back to the present day. One could then build a step-by-step, iterative plan to hopefully create the thing that the defined ideal customer would pay for once built.
It feels like AI requires an evolved paradigm for the building process. I don't think being ignorant of real users/customers with real problems is a good posture to take. But I suspect that if we only tackle those in the most direct, linear fashion, we won't realise some of the value that AI presents.
To lean into that, we've been thinking more about surface areas where we can acquire data so that we may learn. If we can do that, then maybe we can leverage it into a significant invention. The key here is that much of what will be valuable in the future still needs to be discovered, and I think second-order learning is a great way to do that.
We experienced that quite unexpectedly in our first couple of weeks in building tootoo. In constructing the product, what emerged is something that now looks more like an agent that is really good at extracting and retaining context. And irrespective of whether tootoo—as a product—is the first thing we build that shoots the lights out, we might release this agent in some shape in the coming weeks.
What's Next
Much of what we're doing right now is evolving our blank canvas into a solid foundation, so that we can build products of real impact (and commercial value) this year. If this were a typical startup, I'd rely on my many years of experience as a startup founder. A venture studio is a very specific type of startup, though, and in that respect I'm a first-time founder also figuring things out.
With tootoo — our first surface area — out in the wild, we'll spend the next couple of weeks seeing how deep this rabbit hole goes. We're equally curious about how early users use the product, and whether we're onto something interesting with this agent that we've developed.
We're looking forward to sharing all of our learnings and progress with you as we swim from this deep end. We'll be publishing our Field Notes every second week, and it is our intention not only to put our best foot forward, but also to share as broadly as we can.
Part of our bigger mission is to build a community of awesome people who care as much as we do about bringing human context into AI. We're of the strong opinion that it takes a village to make a real impact here, which is also why we've created our "Community Capital" initiative, where community members can earn real equity in Ubundi for their contributions.
