Building Lurk in One Day: A Technical Deep Dive
2026-03-28
by Uri Walevski
Have you ever wanted an AI assistant to filter your WhatsApp groups, notify you only when something aligns with your interests, and even help you draft the perfect reply? That's exactly what Lurk does.
What's more impressive than the product itself is the development velocity: Lurk was built in just a single day. This was made possible by leveraging the power of three existing projects:
- Alice & Bot: An open-source, WhatsApp-style chat framework that lets you build web chat interfaces for agents, or even host your own messenger talking to many agents.
- Supergreen: A robust ingestion engine that connects seamlessly to WhatsApp.
- prompt2bot: An AI agent hosting platform that hosts the conversational agent, managing its memory and API tooling capabilities.
In this post, we'll dive into the architecture of Lurk and explain how these powerful building blocks interact to create a seamless, intelligent service.
The Architecture: Three Core Pillars
Lurk's architecture is divided into three distinct parts that work together to ingest data, process it intelligently, and provide a user-friendly frontend.
1. Supergreen: The WhatsApp Data Layer
To monitor WhatsApp activity, we needed a reliable way to ingest messages. That's where Supergreen comes in. Supergreen acts as the ingestion pipeline, continuously listening to WhatsApp groups and extracting messages in real time. Instead of building WebSocket handling or browser automation from scratch, we get a clean, stable stream of incoming chat data.
2. The Custom Engine (Server & DB)
Sitting in the middle is our custom backend server—the true "brain" of the operation.
This server handles several critical responsibilities:
- Threading and Context: It takes the raw, sequential message stream from Supergreen and organizes it into logical threads.
- Interest Matching: It stores user profiles and their specific "interests" in a database. Whenever a new thread is formed, the server evaluates it against these interests to see if there's a match.
- Group Sharing: The server logic allows users to seamlessly share their monitored WhatsApp groups with other users on the platform.
- AI Tooling API: It exposes a set of custom tools that the AI agents can call. This means the AI can query the database, retrieve context, and interact with the backend directly.
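The threading step can be sketched in a few lines. This is a minimal illustration, not the production logic: the 30-minute cutoff and the message shape are assumptions made for the example.

```python
from datetime import datetime, timedelta

# Illustrative cutoff: a long silence starts a new logical thread.
GAP = timedelta(minutes=30)

def build_threads(messages: list[dict]) -> list[list[dict]]:
    """Split a chronologically ordered message stream into logical threads.

    Each message is a dict with a 'ts' datetime key; messages separated by
    more than GAP are treated as separate conversations.
    """
    threads: list[list[dict]] = []
    for msg in messages:
        if threads and msg["ts"] - threads[-1][-1]["ts"] <= GAP:
            threads[-1].append(msg)   # continues the current conversation
        else:
            threads.append([msg])     # long silence: start a new thread
    return threads
```

The real server could refine this with sender and topic signals, but a time-gap heuristic is enough to turn a raw group feed into units that interest matching can score.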
3. prompt2bot and Alice & Bot: The Agent Frontend
The user-facing side of Lurk is powered by prompt2bot. Instead of building a custom conversational UI, we used prompt2bot to act as the "friendly agent" interface.
- Omnichannel Access: With prompt2bot, users can interact with Lurk via Telegram or a polished, ChatGPT-like web interface provided by the Alice & Bot framework.
- Proactive Notifications: When the custom server finds a thread matching a user's interest, the server uses the prompt2bot API to inject context into the Lurk agent, which then messages the user on Telegram or through the Alice & Bot messenger.
- Drafting Responses: Users can ask the prompt2bot agent to summarize the context of a thread or even phrase a response for them. Because the agent has access to the backend tools, it fetches exactly what it needs to generate a highly contextual, accurate reply.
How It All Comes Together
Here is the typical lifecycle of a message in Lurk:
1. A new message is sent in a monitored WhatsApp group.
2. Supergreen captures the message and forwards it to the custom server.
3. The server analyzes the message, attaches it to an ongoing thread, and runs the matching algorithm against users' saved interests.
4. If a match is found, the server triggers an alert through prompt2bot.
5. The user receives a notification on Telegram or the Alice & Bot web UI: "Hey, there's a conversation about [Interest] happening right now."
6. The user chats with the AI: "Can you summarize what they're saying and draft a polite reply?"
7. The AI agent uses its backend tools to pull the thread context, drafts the response, and hands it back to the user.
Show Me The Code
Connecting these services together requires surprisingly little code. Here is a simplified look at how the custom server glues Supergreen and prompt2bot together.
1. Ingesting Messages via Supergreen Webhook
Supergreen sends HTTP POST requests whenever a new message arrives. The server catches the request, threads the message, and checks for user interest matches:
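Here is a framework-agnostic sketch of that handler. The payload shape (`group`/`sender`/`text`) and the in-memory stores are assumptions for the example; Supergreen's real webhook format may differ, and a real server would put this behind a web framework route and a database.

```python
# In-memory stand-ins for the database (illustrative only).
threads: dict[str, list[dict]] = {}      # group name -> ordered message list
interests: dict[str, list[str]] = {}     # user id -> saved interest keywords

def handle_supergreen_webhook(payload: dict) -> list[str]:
    """Attach an incoming message to its group's thread and return the ids
    of users whose interests the updated thread now matches.

    Assumed payload shape: {"group": ..., "sender": ..., "text": ...}.
    Keyword matching keeps the sketch self-contained; the real matcher
    could just as well use embeddings or an LLM classifier.
    """
    thread = threads.setdefault(payload["group"], [])
    thread.append(payload)
    text = " ".join(m["text"] for m in thread).lower()
    return [user for user, kws in interests.items()
            if any(kw.lower() in text for kw in kws)]
```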
2. Proactively Notifying Users with prompt2bot
When a match is found, the server uses the prompt2bot client to trigger a remote task. This wakes up the agent and tells it to message the user via Alice & Bot or Telegram:
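A sketch of that trigger, assuming a plain HTTP task endpoint. prompt2bot's actual client API is not shown in this post, so the URL, payload fields, and auth header below are illustrative assumptions.

```python
import json
import urllib.request

# Hypothetical endpoint; substitute the real prompt2bot task URL.
PROMPT2BOT_URL = "https://api.prompt2bot.example/v1/tasks"

def build_alert_task(user_id: str, interest: str, thread_id: str) -> dict:
    """Build the task payload that wakes the Lurk agent and tells it to
    notify the user about a matching thread."""
    return {
        "agent": "lurk",
        "user_id": user_id,
        "instruction": (
            f"A conversation about '{interest}' is happening in thread "
            f"{thread_id}. Notify the user and offer to summarize it."
        ),
    }

def notify_user(user_id: str, interest: str, thread_id: str, api_key: str) -> None:
    """Fire the task at prompt2bot (fire-and-forget; real code should
    retry and handle errors)."""
    req = urllib.request.Request(
        PROMPT2BOT_URL,
        data=json.dumps(build_alert_task(user_id, interest, thread_id)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

The key design point is that the server only injects an instruction; deciding how to phrase the notification, and over which channel to deliver it, stays with the agent.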
3. Exposing Tools to the Agent
To let the agent actually read the thread or take actions, the server registers its endpoints as tools in prompt2bot. This allows the AI to dynamically request more context if the user asks for a deeper summary:
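A sketch of what that registration might look like. The tool names, the JSON-Schema-style parameter spec (a common convention for LLM tool calling), and the dispatcher are assumptions; prompt2bot's real registration API is not documented here.

```python
# Tool definitions the server exposes to the agent (illustrative names).
TOOLS = [
    {
        "name": "get_thread",
        "description": "Fetch the full message history of a thread by id.",
        "parameters": {
            "type": "object",
            "properties": {"thread_id": {"type": "string"}},
            "required": ["thread_id"],
        },
    },
    {
        "name": "list_interests",
        "description": "List the calling user's saved interests.",
        "parameters": {"type": "object", "properties": {}},
    },
]

def dispatch_tool(name: str, args: dict, db: dict) -> dict:
    """Route an agent tool call to the backend; `db` stands in for real storage."""
    if name == "get_thread":
        return {"messages": db["threads"].get(args["thread_id"], [])}
    if name == "list_interests":
        return {"interests": db["interests"]}
    raise ValueError(f"unknown tool: {name}")
```

Because the agent discovers these tools through prompt2bot, adding a new capability to Lurk is just a matter of exposing another endpoint and definition; the conversational layer needs no changes.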
Conclusion
Building a real-time, AI-driven messaging assistant from scratch could take weeks or months. By orchestrating existing tools, with Supergreen for message ingestion and prompt2bot plus Alice & Bot for the conversational AI interface, we were able to focus purely on the core business logic: threading and interest matching.
The result? A highly capable, production-ready AI service, built and shipped in a single day.
Try Lurk yourself and see what it can do.