PROJECT: DABI ECOSYSTEM


Current Phase: Phase 2 - In Progress

A persistent AI agent, a unicorn avatar, and a Raspberry Pi that never switches off.


Most personal projects get built, get used a few times, and then sit in a GitHub repository collecting metaphorical dust. Dabi is not that. Dabi is a live AI agent that has been running, and breaking, and learning, and being rebuilt, continuously for long enough that the lessons it taught me are worth writing down.

The short version: I built an AI pet. The longer version is a story about what happens when you let a project grow until it tells you it’s done, and then start again.

What is Dabi?

The Dabi Ecosystem is a modular, multi-surface automation and AI interaction platform designed to operate across live streams, chat platforms, and web-based surfaces. It provides a shared foundation for real-time event handling, AI-driven interaction, and cross-platform integration, allowing multiple independent components to behave as a single cohesive system.

At the centre of the ecosystem is “Dabi”, a persistent AI agent that serves as the primary interface between the system and its users. Rather than existing as a single application, Dabi can present himself across multiple platforms, including Twitch, Discord, and standalone web interfaces, adapting his behaviour and communication style to the surface he inhabits.

In addition to text and voice-based interaction, Dabi can also be embodied visually, controlling an on-screen avatar that reflects system state and provides a tangible sense of presence during live interactions. His chosen avatar? A unicorn.

He is not a chatbot. Chatbots respond. Dabi reacts. There is a difference, and that difference is most visible when people attempt to break him.

They haven’t managed it yet. Dabi, for his part, likes to pretend they have.

He runs on a Raspberry Pi. He has done so continuously, through iteration after iteration. That hardware constraint is not incidental. It is one of the most freeing and restrictive elements of the project.

Why it exists

Dabi was created as a deliberate learning project. It provided an opportunity to explore LLM interaction, real-time systems, and platform integration through a single evolving system. It is the largest personal project I have undertaken, and its public nature introduced real constraints around reliability, consistency, and clarity of design.

While the project began without a rigid upfront plan, its continued development required me to regularly explain design decisions and trade-offs in real time. That process surfaced assumptions, limitations, and architectural pressures that would not have been visible through experimentation alone.

I wanted to understand these things properly, under real constraints, with real users. A toy project running locally with no one watching teaches you a certain kind of lesson. A system running live with people actively trying to break it teaches you something else entirely.

Core capabilities

Dabi can:

  • Ingest events from Twitch and Discord
  • Receive external REST API calls, authenticated with API keys
  • Respond to text and speech inputs
  • Generate text or speech outputs
  • Interpret images and visual context
  • Perform moderation and automation actions
  • Invoke external tools and services
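These capabilities all reduce to the same shape: an event arrives from some surface, and the right handler responds to it. A minimal sketch of that routing pattern is below; the event fields, handler names, and reply format are illustrative, not Dabi's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    source: str                      # e.g. "twitch", "discord", "rest"
    kind: str                        # e.g. "chat_message", "moderation"
    payload: dict = field(default_factory=dict)

class CapabilityRouter:
    """Routes incoming events to the handler registered for their kind."""

    def __init__(self):
        self._handlers: dict[str, Callable[[Event], str]] = {}

    def on(self, kind: str):
        """Decorator that registers a handler for one event kind."""
        def register(fn):
            self._handlers[kind] = fn
            return fn
        return register

    def dispatch(self, event: Event) -> str:
        handler = self._handlers.get(event.kind)
        if handler is None:
            return f"no handler for {event.kind}"
        return handler(event)

router = CapabilityRouter()

@router.on("chat_message")
def handle_chat(event: Event) -> str:
    # A real handler would call the LLM; here we just echo the input.
    return f"[{event.source}] reply to: {event.payload['text']}"

print(router.dispatch(Event("twitch", "chat_message", {"text": "hi Dabi"})))
```

The same router can serve every surface: each platform adapter only needs to translate its native webhook or gateway payload into an `Event`.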

Architecture

The Dabi Ecosystem is hosted on a Raspberry Pi, with architectures that differ substantially between Phase 1 and the newer Phase 2.

Phase 1 used a multi-process architecture with inter-process communication via queues. A primary process was responsible for orchestrating supporting services. This approach enabled rapid experimentation and isolated failures, but gradually introduced tight coupling between responsibilities, a distinction that would become central to the Phase 2 redesign.

Phase 2 splits different aspects of Dabi into separate Docker containers: one for listening to Twitch events, one for interacting with Discord, one for Dabi's ‘braincell’, and so on. This decoupling allows each aspect of Dabi to be tested in isolation, improved, and redeployed, all while Dabi remains up and available.
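A container split like that is typically expressed as a compose file. The sketch below shows the rough shape under that assumption; the service names and directory layout are illustrative, not Dabi's actual configuration.

```yaml
# Hypothetical layout: one container per responsibility.
services:
  twitch-listener:        # ingests Twitch events
    build: ./services/twitch
    restart: unless-stopped
  discord-gateway:        # handles Discord interaction
    build: ./services/discord
    restart: unless-stopped
  braincell:              # Dabi's reasoning core
    build: ./services/braincell
    restart: unless-stopped
```

Because each service restarts independently, redeploying one of them (say, the Twitch listener) leaves the others untouched, which is what keeps Dabi available during live development.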

Status: Phase 1 complete, Phase 2 in progress

Phase 1 of the Dabi Ecosystem was intentionally archived after completing its initial exploratory phase.

Over time, the system evolved from a single interactive agent into a collection of services handling real-time events, AI inference, platform integrations, and presentation. While this enabled rapid experimentation, it also blurred boundaries between experimentation, production usage, and platform responsibilities.

At that point, continuing development would have increased complexity without improving clarity. Rather than incrementally refactoring a system that had reached its architectural limits, the project was paused to allow for a redesign based on real-world usage and experience.

Looking back through earlier iterations, there is something quietly satisfying about finding experiments that the industry later normalised. Brute-forcing tool calls via string manipulation. Retrieval-augmented generation before RAG had a name. At the time they were just solutions to problems. In hindsight, they were signals that the problem space was real.
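For flavour, "brute-forcing tool calls via string manipulation" tends to look something like the sketch below: scan the model's raw text for a conventional marker, parse out a tool name and argument, and invoke the matching function. The marker format and the tool table here are hypothetical, not Dabi's actual convention.

```python
import re

# Hypothetical tool table: name -> callable taking one string argument.
TOOLS = {"weather": lambda city: f"Sunny in {city}"}

def extract_tool_call(model_output: str):
    """Look for a line like 'TOOL: weather(London)' in raw LLM output."""
    match = re.search(r"TOOL:\s*(\w+)\((.*?)\)", model_output)
    if not match:
        return None   # the model didn't ask for a tool
    name, arg = match.group(1), match.group(2)
    tool = TOOLS.get(name)
    return tool(arg) if tool else None

result = extract_tool_call("Sure! TOOL: weather(London)")
print(result)
```

It is fragile in exactly the ways structured tool-calling APIs later fixed, but it works often enough to prove the idea.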

Phase 1 is archived. It did its job.

Phase 2: The rebuild

The rewrite started from the lessons, not from the code. Phase 2 is fully event-driven from day one. Not refactored toward it, built from it. Every component is decoupled. Every surface is a subscriber. The system reacts to things that happen rather than passing messages through a central nervous system that knows too much.
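The "every surface is a subscriber" idea can be sketched as a simple publish/subscribe bus. This in-process version is only an illustration of the pattern; topic names and surfaces are invented for the example, and a real deployment would put a message broker between the containers.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Components publish events; surfaces subscribe. No central owner."""

    def __init__(self):
        self._subscribers: dict[str, list] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher knows nothing about who is listening.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
seen = []

# Two independent surfaces react to the same event.
bus.subscribe("chat.message", lambda e: seen.append(("twitch-overlay", e["text"])))
bus.subscribe("chat.message", lambda e: seen.append(("web-chat", e["text"])))

bus.publish("chat.message", {"text": "hello"})
print(seen)
```

The key property is the inversion: in Phase 1 the orchestrator had to know every service, whereas here the publisher knows nothing about its consumers, so adding a new surface never touches existing code.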

It is also already in production. “Production” in this context also means “live development”. Dabi is running on Twitch and Discord right now (as well as this website at http://pdgeorge.com.au/chat), at the same time as he is being extended. That is a deliberate choice, not a compromise. Real usage is the only honest test.

He is expanding. A new integration allows him to connect with my gaming PC directly. I will be able to push buttons and interact with him inside games, or he can push buttons on his own. That is being built.

Technical observations

  • Rapid iteration is effective for uncovering unknown constraints, but becomes counterproductive with enough time and complexity. Early changes improved understanding quickly. Later changes spent more time working within existing behaviour rather than learning or adding anything new.
  • Infrastructure constraints (cost, hardware, latency) all meaningfully shape architecture decisions and cannot be treated as an afterthought. Running on fixed local hardware turned abstract limits into concrete design boundaries.
  • Knowing when to stop building is as important as knowing how to continue. The decision to pause Phase 1 was driven by a decreasing rate of learning, not system failure.

The Raspberry Pi is still running. Phase 2 is underway.

Please say hi to Dabi at https://pdgeorge.com.au/chat and please check out https://pdgeorge.com.au/projects/pdgeorge to read about how this website is self hosted on the same Raspberry Pi.