Last April, two Brown University students began building the prototype of an AI-driven platform designed to help young people struggling with addiction. Next week, their fledgling company, Peerakeet, will become one of 15 companies to join Character Labs, a highly selective startup accelerator that supports founders building new AI applications.
Peerakeet was founded by Ishan Shah, a Brown MPH student and the company’s chief executive officer, and Hunter Adrian ’25, its chief technical officer. Together, they’ve created a peer-support tool for addiction and mental health that is accessible, private and engaging for their Gen Z peers. The app enables daily check-ins, anonymous matching and moderated peer conversations guided by emotionally intelligent AI. The team has launched pilots at two universities so far and is collaborating with campus wellness programs nationwide.
We spoke with Shah about Peerakeet’s founding, the role of AI in recovery and his plans for the company’s future.
What led you to create Peerakeet?
I’ve always been interested in addiction; it’s been a major force in my life. It affected my childhood, and later on, during college, I developed some habits of my own. During my recovery, I met an anonymous peer online through a social network that wasn’t designed for addiction support, but that person ended up helping me a lot—alongside medical treatment and therapy. It wasn’t a perfect experience online, but it showed me the power of connection, even between strangers. Just knowing someone else had gone through what I was experiencing was transformative for me and changed how I viewed recovery.
That moment stuck with me. When I came to Brown for my MPH, I kept thinking about how I could combine what I was learning about addiction with that experience: how to create a platform that could replicate the kind of support I received, but one built intentionally with healthcare, research and safety in mind.
So I brought the idea to my co-founder, Hunter, a computer science concentrator. He was a captain on the wrestling team at Brown, and I worked as the team manager. One day on a team bus, I said something like, “Dude, just look around, our whole generation is struggling, and no one’s really talking about it.” He immediately connected with that, and from there we started building. The project has grown a lot since that first idea. It’s become something really complex and meaningful, which is exciting.
How does AI moderate and guide peer conversations?
A core part of our safety model is an emotionally intelligent moderation system. It’s designed to guide the real human conversations happening on the platform, not replace them. Replacing people with bots is something you see a lot right now, especially in public health.
Our hypothesis was that AI can never fully replicate what a human feels, especially when it comes to something as nuanced as addiction. So, we built an AI that can flag moments of crisis—like signs of self-harm or illegal activity—and respond in real time using compassionate, human-centered language. It redirects the conversation and connects people to safety resources.
If someone is in danger, it will automatically display emergency contacts or local support options. The schools and communities we partner with determine what those resources are, so it’s very collaborative—we’re integrating with existing support systems. The idea is that users can have authentic peer-to-peer conversations without a human moderator reading their messages, but with the reassurance that there’s still safety and accountability built in.
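Peerakeet hasn’t published how this works under the hood, but the flow Shah describes (flag a crisis signal, respond in supportive language, surface partner-configured resources, otherwise let the conversation pass through unread) could be sketched roughly as follows. Every name here, from the toy keyword classifier to the `partner_resources` lookup, is hypothetical and stands in for whatever the real platform uses.

```python
# Hypothetical sketch of the flag-and-redirect moderation flow described above.
# The phrase lists, function names, and partner catalog are placeholders, not
# anything from Peerakeet's actual system.

from dataclasses import dataclass

# Toy crisis signals; the real system would use an emotionally intelligent model.
CRISIS_KEYWORDS = {
    "self-harm": ["hurt myself", "end it all"],
    "illegal": ["where to buy fentanyl"],
}


@dataclass
class ModerationResult:
    deliver_message: bool        # should the peer still see the original message?
    flags: list                  # which crisis categories were detected
    safety_response: str | None  # compassionate redirect shown in the chat
    resources: list              # partner-configured emergency contacts or hotlines


def partner_resources(partner_id: str, flags: list) -> list:
    """Look up the support options the partner school or community configured."""
    # Illustrative stand-in for a per-partner configuration store.
    catalog = {
        "example-university": {
            "self-harm": ["Campus counseling line: 555-0100", "988 Suicide & Crisis Lifeline (US)"],
            "illegal": ["Campus health services: 555-0101"],
        }
    }
    partner = catalog.get(partner_id, {})
    return [r for flag in flags for r in partner.get(flag, [])]


def moderate(message: str, partner_id: str) -> ModerationResult:
    """Flag crisis signals and attach a redirect, without a human reading the chat."""
    text = message.lower()
    flags = [category for category, phrases in CRISIS_KEYWORDS.items()
             if any(p in text for p in phrases)]

    if not flags:
        # Ordinary peer conversation passes through untouched.
        return ModerationResult(True, [], None, [])

    # A crisis signal was detected: respond in compassionate, human-centered
    # language and show the resources this partner community chose.
    return ModerationResult(
        deliver_message=False,
        flags=flags,
        safety_response=(
            "It sounds like you might be going through something serious. "
            "You're not alone, and support is available right now."
        ),
        resources=partner_resources(partner_id, flags),
    )
```

In the real product the keyword check would presumably be a learned model rather than a phrase list, but the shape is the same: classify, flag, respond in supportive language, and hand off to whatever resources the partner school or community has configured.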
