Welcome to our Tools for Thought interview series, where we meet with founders on a mission to help us think better and work smarter. Arun Bahl is the cofounder of Aloe, a tool designed to support human cognition in our information-dense world – an AI computer that acts as a “superhuman attention.”
In this interview, we discussed how information overload hinders deep work and creativity, the power of metacognition, why we should aim to augment human thinking and not automate it away, the importance of mapping a problem space rather than just getting answers, and his plans to continue to develop tools that promote clarity of thought and align with shared human interests. Enjoy the read!
Hi Arun, thanks for agreeing to this interview! You’ve written extensively about the distraction economy and its impact on human cognition. Why do you think we need better tools to handle information overload?
Glad to be here! Absolutely. In short: the distraction economy is a Bad Thing for humanity. We already know that it’s bad for individual mental health, especially social media and its effects on young people. We know that it’s bad for us geopolitically, too – when we monetize the delivery of information independently of whether that information is true or not, we create poor outcomes for any society. Governance needs fact-based discourse.
But there’s a third crucial reason that has slipped under the radar to date: information overload breaks human thinking. I’ll share an anecdote from our own cognitive science research: we spent a lot of time with Millennial and Gen Z knowledge workers getting to know their experiences and pain points, and across our studies there was a surprisingly unanimous baseline. These individuals all expressed the same emotional state: feeling brittle. Feeling stretched too thin, reacting to the next Slack message or rushing to a Zoom call, squeezing in an errand, never getting into deep work. Feeling like their “real” creativity was always a little out of reach.
The problem is much bigger than productivity, however. Let’s imagine modern human thinking as one app among several, running on our brains. When we overtax that modern thinking piece – overtax our attention – we have other apps that will increase their activity to fill in the gaps. Those other systems are prejudice. They are cognitive bias. They are the poor heuristics that make us more susceptible to mis- and dis-information, for example.
From the data, we find that human cognition simply didn’t evolve to handle this level of information saturation. This isn’t a personal productivity problem – it’s a civilizational one.
Humans must now adapt to a fundamental shift in our information ecology. That’s why we started Aloe – to build the tools human thinking needs to succeed, even in an information environment we didn’t evolve to handle. We think it’s the most important problem there is to solve today.
That’s why you decided to build an AI computer to address this fundamental mismatch between human cognition and our information environment.
Exactly. Today’s information world is too large for unassisted human cognition. Our species generates roughly two and a half quintillion bytes of data every day on the Internet – and human working memory tops out at around seven items, Miller’s famous “seven, plus or minus two.” The collision between those two numbers captures the problem we’re trying to solve: the gap between the volume of information we have to contend with and the biological limits of our attention. We need a superhuman attention that we can trust to help boost our own.
As an analogy: humans can only see colors from red to violet. That’s a small fraction of the electromagnetic spectrum, but we’ve long had tools that could see other wavelengths. Those tools take information from outside our visual range and change its representation, presenting it to us in the colors that we can see. Similarly, an AI computer must have attention that’s far greater than ours, but that knows how to present information to us when it’s important, and at the level of detail that’s appropriate to what we’re currently trying to achieve.
We call this ‘pinch-to-zoom’ for information. Just like a map on your phone, Aloe lets you zoom in and out of any website, document, or conversation – and at a given zoom level you may not need to see every surface street, but the capital cities and freeways are still there.
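To make the map analogy concrete, here’s a toy sketch of the idea of zoom-dependent filtering – a minimal illustration with invented importance scores and thresholds, not anything from Aloe’s actual implementation:

```python
# Toy illustration of "pinch-to-zoom for information": at a low zoom
# level, only the most important items stay visible, like capitals and
# freeways on a zoomed-out map. Scores and thresholds are invented.

items = [
    ("capital city", 1.0),
    ("freeway", 0.8),
    ("surface street", 0.3),
]

def visible_at(zoom: float, items: list) -> list:
    # Higher zoom reveals lower-importance details.
    return [name for name, importance in items if importance >= 1.0 - zoom]

visible_at(0.1, items)  # zoomed out: only the capital city
visible_at(0.9, items)  # zoomed in: every item is shown
```

The point of the sketch is that nothing is deleted at low zoom – the detail is still in the underlying data, just not presented until the zoom level calls for it.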
Aloe pairs the most capable generalist AI agent with an intuitive graphical experience – an AI desktop, not just a chatbot. It knows how to present information in-context, like how to visualize and animate concepts within a large dataset, present the top-level items from a document, or engage you in a verbal conversation. You choose the right way to interact for your situation, headspace, and learning style.
Aloe also executes tasks on your behalf as you work together, and it understands collaboration and shared information. It uses tools, and even creates its own tools, as it works.
I’ve been an AI researcher for many years, but my background was in cognitive science. My cofounder and I are longtime Vipassana meditators. Aloe was born out of lifelong reflection on what humans need to think well. Not to automate us away, but to augment the thing that makes us human in the first place: our capacity for clear thought and creativity.
Let’s talk about how the Aloe agent actually works. You mentioned that Aloe can create tools when it encounters problems that existing tools can’t solve. How does this capability work in practice?
That’s right. As you work together, Aloe determines what its objective is, and makes a plan to achieve that objective. Just like other generalist agents out there, Aloe uses tools to help along the way. But importantly, when Aloe determines it needs a tool that doesn’t exist in order to accomplish something for you, it stops and creates that tool first. If it works, it keeps that new tool in its toolbox – remixing and growing its capabilities over time. An individual’s data is always kept private and secure – we take privacy very seriously. But as Aloe builds its toolbox, all users benefit simultaneously.
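The loop described above – plan, check the toolbox, synthesize a missing tool, then execute and keep the result – can be sketched roughly as follows. This is a hypothetical illustration; the class and method names (`ToolboxAgent`, `synthesize_tool`) are invented for the example and are not Aloe’s actual API:

```python
from typing import Callable, Dict

class ToolboxAgent:
    """Minimal sketch of an agent that creates missing tools on demand."""

    def __init__(self) -> None:
        # Persistent toolbox: tools created for one task survive for later ones.
        self.toolbox: Dict[str, Callable[[str], str]] = {}

    def synthesize_tool(self, name: str) -> Callable[[str], str]:
        # Stand-in for the real tool-creation step (e.g. generating and
        # validating code). Here it just returns a labeled function.
        return lambda arg: f"{name}({arg})"

    def run(self, objective: str, required_tool: str) -> str:
        # 1. Plan: decide which tool the objective needs (passed in here).
        # 2. If that tool doesn't exist yet, stop and create it first.
        if required_tool not in self.toolbox:
            self.toolbox[required_tool] = self.synthesize_tool(required_tool)
        # 3. Execute the objective with the (possibly brand-new) tool.
        return self.toolbox[required_tool](objective)

agent = ToolboxAgent()
agent.run("summarize report", "summarizer")
assert "summarizer" in agent.toolbox  # the new tool persists for reuse
```

The key design point the sketch captures is that the toolbox outlives any single task, so capabilities compound over time rather than being rebuilt on each request.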
This is a new species of AI – internally, we nicknamed this version Aloe habilis. It’s what has allowed Aloe to already outperform every other generalist agent you’ve heard of on industry measures of intelligence like the GAIA benchmark. And we’re just getting started.
You’ve emphasized that Aloe has metacognition – the ability to think about its own thinking. Can you explain why this is crucial?
Metacognition is strategic skepticism. As smart humans, we all know to apply skepticism to the information we encounter in the world. We also know that the smartest humans don’t just question external sources – they apply that same skepticism internally, too. Why do I think what I’m thinking? Why do I feel this way? Can I bring that initial intuition into my conscious experience to learn something and make a better decision?
Similarly, Aloe is trained to question itself and to recognize that it, too, is susceptible to a kind of cognitive bias – language models inherit society’s biases from the internet data they’re trained on. To counter this, Aloe is a neurosymbolic system that engages in symbolic reasoning beyond what LLMs alone can do. This reflective reasoning is essential because we don’t want an AI that just gives us answers – it must explain how it got there, and recognize when it’s unsure.
How does it integrate with someone’s existing information ecosystem?
Aloe sees and understands the same information you do. It can connect to sources like websites, your email, Notion, Linear, Slack, and Google Docs – and it can reference and understand things before you’ve even seen them. We liken it to one smart, trusted individual with the same access you have: you can interact with them naturally, without having to manage individual conversations, apps, agents, or context windows.
A lot of the heavy lifting we’ve done is to learn how to make information self-organizing. Imagine that as you work, you’re moving through a map of your own knowledge of the resources, people, and processes that are important to what you’re trying to do. An AI computer like Aloe understands how to illuminate the knowledge neighborhood around where you are so that you can see and understand with clarity the most important questions and get verifiable answers. You become more effective, with less effort.
How did you approach building what you call a “synthetic mind” that can actually help humans think better?
Humans evolved to collaborate with other minds. Dunbar’s work on the evolution of human intelligence suggests that our brain size expanded to manage larger social communities – interacting with other minds is our native interface. It’s why humans anthropomorphize everything – cars, dogs, chatbots. Rather than fighting that, Aloe embraces it. It acts like a single smart individual that you get to bring with you everywhere, both on your desktop and mobile.
But for this to work, you need exceptional privacy and trust. Forget AI for a moment – how does human-to-human delegation work? You wouldn’t trust me with anything important unless you believe three things are true: that I can reason, that I have good information, and that we have aligned interests (I have no ulterior agenda). If any one of those isn’t true, you shouldn’t trust me.
The threshold for trusting an AI is no different. This is why, for example, our business model can never be based on advertising. If my goal is to always be selling you something, our interests aren’t fully aligned and I can’t be trusted. If my goal is to sell your data, absolutely do not trust me. We’ve built this foundation of trust directly into Aloe, but crucially, not in order to automate your thinking. Its job is to show you the concepts behind the information – not just give you answers – so you create understanding. Like showing your work in math class, Aloe reveals its reasoning so your mental map improves through the interaction.
What kind of people are drawn to Aloe, and how do they use it?
There’s a surprising variety. Professionally, they’re executives, consultants, designers, researchers, creators, and students – but they use Aloe in their personal lives just as much. They work in multidisciplinary teams, and sometimes solo. Work happens at a desk or on a smartphone. They don’t have separation between their work and personal lives anymore. They tend to be curious and intentional, and many already practice some form of digital hygiene or social media detox once in a while.
The common thread is that they context switch heavily – and want less time pressure and more spaciousness for the things that matter to them. They want tools that don’t get in the way. They’re aware they’d be better off with less noise. More presence. More creative time. They know they’re the best version of themselves when they have taken back control over their time and attention.
In everyday interactions, Aloe gives them time back by proactively helping them assemble their workspace – an information mise en place – so they don’t have to waste time trying to find relevant bits of data to dive into a project. Aloe shows them the provenance of their information. It helps them stay on top of things despite their back-to-back meeting schedules. It collects relevant information, briefs them on background from both public and private sources, and helps them discern what questions are important to ask. And Aloe helps them get into deep work mode more frequently, for longer – as they have delegated busywork away and freed up time to be goal-focused rather than task-focused.
What about you, how do you use Aloe?
I’m in it all the time – at my desk or out in the world – whenever I need to think or do. Where I really feel the biggest difference is when I’m working on something I don’t yet have clarity on, and the question I need to answer depends on combining several kinds of information: my private notes, my team’s internal docs, prior conversations I’ve had, and public sources on the internet. I use Aloe fundamentally as a tool for visualizing, understanding, and talking things through – not just getting an answer, but seeing the landscape of concepts behind the information so I can think clearly and decide where to go next.
And finally… Looking ahead, what’s next for Aloe?
We’re excited to open up Aloe for more people to use. Our earliest users are already shaping its development in big ways and telling us how they want to use Aloe next. We’re also partnering with some amazing teams in astrophysics and health tech, where privacy and advanced reasoning are critical and current solutions simply don’t cut it. And we’re hiring!
We’re just scratching the surface with Aloe. But more broadly, we want to make a case for the kinds of tools humanity deserves. Can we learn from the previous generation of tech and the ill effects of dopamine mining? Can we steer ourselves away from cognitive offloading and the atrophy of our collective intelligence?
Can we understand the most important challenges we face, and build tools that help nudge us in the right direction?
In a world where anything can be automated, we believe clarity of thought will be a human’s defining trait. Seeing the larger picture, knowing what questions to ask, reclaiming personal agency. And making sure that our tools are deployed in concert with our shared human interests.
Thank you so much for your time, Arun! Where can people learn more about Aloe?
We’d love to have Ness Labs’ community join our priority waitlist to get earlier access. Use the following link. More info on us at our website, and our blog is a place to dig more into these ideas we’ve covered today.
You can also find us occasionally on X, BlueSky, or LinkedIn. And if you’d like to see Aloe in action, check out this short video.