At Mailbutler, we pride ourselves on delivering exceptional customer support. Even though our support team works almost around the clock, we wanted to offer our clients immediate answers at any time of day. That’s when the idea of introducing an AI bot came up — a solution to provide fast, reliable responses to common questions and free up our human agents for more complex issues.
The project became a personal initiative of mine: I used Helply to build and train an AI bot tailored specifically to our customer success needs.
Day 1: The challenge appears
The challenge was clear: while our support team handled queries efficiently, some customers still had to wait for answers to common, repetitive questions, especially outside peak support hours.
We discovered this by analyzing ticket data and customer feedback, which frequently mentioned delays in getting simple information like plan details, refund policies, or troubleshooting steps for common issues.
The initial assumption was that our Support Center might be enough, but it quickly became apparent that many users prefer interactive, conversational support rather than searching through long articles.
Day 5: Brainstorming begins
We explored several solutions:
- Relying solely on Support Center articles (which lacked interactivity)
- Introducing a basic chatbot with pre-programmed answers (too limited)
- Implementing an AI-driven bot with a trained knowledge base (the most promising option)
While there were internal debates about the bot’s accuracy and whether it could truly handle real customer queries without sounding robotic, we agreed that the right training and data input could make a real difference.
Close collaboration with our customer success team was crucial: it helped identify the most frequent customer questions and gave valuable insight into how we, the human agents, usually answer them.
Day 12: The breakthrough
The “aha” moment came when we realized that the bot wouldn’t have to rely solely on static FAQs. Using Helply, we could train its knowledge base with:
- Support Center articles
- Past conversations from our customer success team
- Manual, scenario-based training sessions I personally added
This approach allowed the bot to understand context better and provide more human-like responses, rather than simply serving predefined answers.
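To make this more concrete, here is a minimal, purely illustrative sketch (in Python) of what a multi-source knowledge base with simple retrieval can look like. It is not how Helply works internally; the KnowledgeBase class, the source labels, and the keyword-overlap matching are assumptions made only to illustrate the idea of pooling articles, past conversations, and manual scenarios into one searchable collection.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    source: str     # "support_center", "past_conversation", or "manual_scenario"
    question: str
    answer: str

class KnowledgeBase:
    """Toy knowledge base that pools several training sources."""

    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def add(self, source: str, question: str, answer: str) -> None:
        self.entries.append(Entry(source, question, answer))

    def best_match(self, user_question: str) -> Entry | None:
        """Return the stored entry whose question shares the most words with the query."""
        query_words = set(user_question.lower().split())
        best_score, best_entry = 0, None
        for entry in self.entries:
            score = len(query_words & set(entry.question.lower().split()))
            if score > best_score:
                best_score, best_entry = score, entry
        return best_entry

# Pool the three kinds of sources mentioned above (example data only).
kb = KnowledgeBase()
kb.add("support_center", "How do I change my plan?", "Go to Account > Plan and choose a new tier.")
kb.add("past_conversation", "What is your refund policy?", "We refund purchases within 14 days.")
kb.add("manual_scenario", "The extension is not loading", "Reinstall the extension and restart your mail client.")

match = kb.best_match("Can you explain the refund policy?")
print(match.answer if match else "No confident match, escalate to a human agent.")
```

In a real system the matching would be far more sophisticated, but the principle is the same: the richer and more varied the entries, the more context the bot has to draw on.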
Day 20: Execution in progress
Implementation wasn’t without its hurdles. One key challenge was ensuring that the AI bot didn’t provide outdated or incorrect information. We tackled this by regularly updating the knowledge base and running test conversations to fine-tune its responses.
Another creative solution was introducing a fallback option: if the bot couldn’t answer a question confidently, it would escalate the conversation directly to a human agent. This ensured that customers always received the help they needed, even if the bot reached its limits.
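For readers wondering how such a fallback works in practice, it usually boils down to a confidence threshold. The short Python sketch below shows the general pattern only; the threshold value, the answer_question stub, and the escalate_to_agent helper are hypothetical stand-ins, not Helply's actual API.

```python
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.6  # hypothetical cut-off; a real bot needs tuning and testing

def answer_question(message: str) -> Tuple[str, float]:
    """Stand-in for the bot's answer engine: returns a draft answer and a confidence score."""
    known_answers = {
        "refund": ("We refund purchases within 14 days.", 0.9),
        "plan": ("You can switch plans under Account > Plan.", 0.85),
    }
    for keyword, (answer, confidence) in known_answers.items():
        if keyword in message.lower():
            return answer, confidence
    return "", 0.0  # nothing matched, so no confidence at all

def escalate_to_agent(message: str) -> None:
    """Stand-in for handing the conversation over to a human agent."""
    print(f"[escalated to a human agent] {message}")

def handle_message(message: str) -> str:
    """Answer directly when confident, otherwise escalate instead of guessing."""
    answer, confidence = answer_question(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    escalate_to_agent(message)
    return "Let me connect you with one of our support agents for this one."

print(handle_message("How does your refund policy work?"))
print(handle_message("My signature looks odd on one specific device"))
```

The key design choice is that the bot never bluffs: anything below the threshold goes straight to a person.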
Unexpectedly, some customers even provided positive feedback on how “friendly” and “natural” the bot felt — a testament to the careful training process.
Day 30: Results + reflections

We started to see positive effects shortly after implementation:
- Improved response times for common questions
- More time for our team to dedicate to complex, high-value queries
- Encouraging early feedback on how quick questions were handled
If we were to start over, I’d likely include more real-user testing earlier on to better refine the bot’s conversational tone. Overall, though, we’re pleased with the first results and view this as a solid foundation for further improvement.
Day 45 and beyond: Continuous improvement
The implementation of the AI bot wasn’t the end of the journey — it marked the beginning of an ongoing process.
To ensure the bot continues to provide relevant and helpful answers, I’ve made it a regular practice to review our customer conversations on a weekly basis. This allows me to identify areas where the bot’s responses could be improved or where new topics have emerged that weren’t previously covered.
By continuously updating the knowledge base and manually fine-tuning its training, we’re making sure the bot evolves alongside our customers’ needs and maintains a high standard of helpful, accurate responses.
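As a rough illustration of what this weekly review can look like when partially automated, here is a small Python sketch. The Conversation fields, the rating scale, and the find_coverage_gaps helper are all hypothetical; the point is simply to surface topics that repeatedly end in escalations or poor ratings so they can be added to the knowledge base.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Conversation:
    topic: str
    escalated: bool        # did the bot hand the chat over to a human?
    customer_rating: int   # 1 (poor) to 5 (great)

def find_coverage_gaps(conversations: list[Conversation], min_count: int = 2) -> list[str]:
    """Topics that repeatedly ended in escalations or poor ratings during the week."""
    problem_topics = Counter(
        c.topic for c in conversations if c.escalated or c.customer_rating <= 2
    )
    return [topic for topic, count in problem_topics.items() if count >= min_count]

# One (made-up) week of bot conversations.
week = [
    Conversation("refund policy", escalated=False, customer_rating=5),
    Conversation("calendar sync", escalated=True, customer_rating=2),
    Conversation("calendar sync", escalated=True, customer_rating=1),
    Conversation("plan upgrade", escalated=False, customer_rating=4),
]
print(find_coverage_gaps(week))  # ['calendar sync'] -> a topic worth adding to the knowledge base
```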
This ongoing process has already shown positive effects, and we’re excited to keep refining the bot’s capabilities in the coming months.
This project highlighted how combining the right technology with hands-on training can create meaningful improvements in customer experience.
As one of our support agents put it:
“It’s like we’ve gained an extra team member who never sleeps!”
And who knows — maybe the next step is training our AI bot to handle even more advanced product support cases. Stay tuned!