Someone DMs an agent in Slack with a client name. Ten seconds later they get back a formatted summary: recent activity, open opportunities, last meeting notes, upcoming tasks. No browser tabs. No Salesforce login. Just a message in the channel they already had open.
That's an OpenClaw agent. Not a chatbot. Not a glorified FAQ widget. A custom-built AI agent with real skills, connected to real systems, living where your team actually works.
I've been building these for clients over the past year and the pattern has gotten pretty dialed in. Here's how the process works from first call to production.
Start with the Skills, Not the Tech
The first conversation isn't about AI models or APIs. It's about friction.
I ask things like: what do you repeat multiple times a week that involves pulling data from one system and using it in another? What questions does your team ask each other in Slack that a system could answer? Where do people lose 10 minutes because they have to context-switch between three tools?
From that, I map out skills. A skill is a discrete thing the agent can do. "Look up a contact in Salesforce" is a skill. "Summarize this week's pipeline changes" is a skill. "Draft a follow-up email from meeting notes" is a skill.
I aim for 3 to 5 high-impact skills per agent. Not 20 hypothetical features. A small set that people would actually use every day. That constraint is the difference between an agent that sticks and one that gets forgotten after a week.
Once the skills are defined, I map inputs, outputs, and data sources for each one. That becomes the blueprint.
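Here's a minimal sketch of what that blueprint looks like in code. The skill names and fields are illustrative, not a fixed OpenClaw schema; the point is that each skill gets its inputs, outputs, and data sources pinned down before any implementation starts.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """One discrete thing the agent can do, mapped before code is written."""
    name: str
    inputs: list          # what the user has to provide
    outputs: str          # what the agent sends back
    data_sources: list    # systems the skill reads from

# A blueprint for a 3-skill agent (names are examples)
blueprint = [
    Skill("lookup_contact",
          inputs=["contact name or email"],
          outputs="contact card with recent activity and open opportunities",
          data_sources=["Salesforce"]),
    Skill("pipeline_summary",
          inputs=["time window, e.g. 'this week'"],
          outputs="summary of pipeline changes",
          data_sources=["Salesforce"]),
    Skill("draft_followup",
          inputs=["meeting notes"],
          outputs="draft follow-up email",
          data_sources=["calendar", "meeting notes"]),
]
```

Three to five entries in a list like this is the whole scope document. If it doesn't fit there, it doesn't fit in 40 hours.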
40 Hours. One Month. Ship It.
I scope every OpenClaw engagement as a 40-hour build over roughly one month, plus a month of support after delivery. The constraint is intentional. It forces hard decisions about what matters and kills the "wouldn't it be cool if..." scope creep that derails AI projects.
The first week is architecture and integrations. I set up the agent framework, configure the LLM layer, and wire up connections to whatever systems the agent needs: Salesforce REST API, a database, an internal service, whatever. I also define the agent's persona and guardrails during this phase. Should it be formal or casual? Should it ever say "I don't know" or escalate? Those decisions affect everything downstream, and most teams don't think about them early enough.
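The persona and guardrail decisions end up as configuration, something like the sketch below. The keys are hypothetical, not a real OpenClaw schema; they just show the kind of choices that get locked in during week one.

```python
# Persona and guardrail settings decided in week one.
# Key names are illustrative, not a fixed config format.
agent_config = {
    "persona": {
        "tone": "casual",               # formal vs. casual, decided with the client
        "refers_to_itself_as": "the assistant",
    },
    "guardrails": {
        "admit_unknown": True,          # say "I don't know" rather than guess
        "escalate_to": "#ops-support",  # where out-of-scope requests get flagged
        "max_response_lines": 12,       # keep replies readable on mobile
    },
}
```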
Week two is the core build. Each skill gets implemented, tested, and refined. Prompt engineering, data retrieval logic, response formatting, error handling. I test with real data (or realistic test data) so the responses are actually useful, not just technically correct.
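Response formatting is a bigger share of week two than people expect. A sketch of the idea, with made-up field names (swap in whatever the client's Salesforce objects actually expose):

```python
def format_contact_summary(record: dict) -> str:
    """Render a CRM record as a short chat reply.

    Field names here are hypothetical; the real version mirrors the
    client's Salesforce object schema.
    """
    lines = [f"*{record['name']}* ({record['company']})"]
    opps = record.get("open_opportunities", [])
    if opps:
        lines.append(f"Open opportunities: {len(opps)}")
        lines += [f"  • {o['name']} ({o['stage']})" for o in opps]
    lines.append(f"Last meeting: {record.get('last_meeting', 'none on record')}")
    return "\n".join(lines)
```

Getting this layer right is what separates "technically correct" from "actually useful": same data, but scannable in three seconds in a Slack thread.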
Week three is deployment and integration testing. The agent goes live in Slack (or WhatsApp, or Telegram) and I run it through edge cases. What happens when someone asks something the agent wasn't built for? What about unexpected API responses? Ambiguous requests? This is where you find the gaps.
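The three failure paths above all need an explicit answer. A minimal sketch of the shape, where keyword matching stands in for the LLM's intent classification (the routing itself isn't the point, the fallbacks are):

```python
def route(request: str, skills: dict) -> str:
    """Dispatch a message to a skill, with graceful fallbacks.

    Keyword matching here stands in for real intent classification;
    the sketch is about the two failure paths, not the routing.
    """
    text = request.lower().strip()
    if not text:
        # Ambiguous or empty request
        return "I didn't catch that. Could you rephrase?"
    for keyword, handler in skills.items():
        if keyword in text:
            try:
                return handler(text)
            except Exception:
                # Unexpected API response: fail politely instead of crashing
                return "Something went wrong pulling that data. I've logged it."
    # Outside the agent's skills: admit it rather than improvise an answer
    return "That's outside what I'm built for. Flagging it for the team."

def broken_lookup(text: str) -> str:
    raise RuntimeError("Salesforce returned 500")  # simulated API failure

skills = {
    "pipeline": lambda text: "3 opportunities changed stage this week.",
    "contact": broken_lookup,
}
```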
Week four is refinement. Tuning prompts based on how real people actually talk to the thing (always different from how you expected). Adjusting response formatting. Documenting everything for the handoff.
Why Slack (or WhatsApp, or Telegram)
I don't build web apps with chat widgets. The agent goes into whatever tool the team already has open all day.
For most of my clients, that's Slack. The agent shows up as a bot in the workspace. DM it or mention it in a channel. It responds in seconds.
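Handling both entry points, DMs and channel mentions, comes down to one small piece of plumbing. This sketch follows Slack's Events API payload shape (`text`, `channel_type`, and the `<@USERID>` mention format are real Slack conventions; the bot ID is a placeholder):

```python
import re

BOT_ID = "U0AGENT"  # placeholder Slack user ID for the bot

def extract_request(event: dict):
    """Pull the actual request out of a Slack message event.

    Handles a DM (channel_type "im") and an @-mention in a channel,
    where Slack embeds the bot's ID as "<@U0AGENT>" in the text.
    Returns None when the message isn't addressed to the agent.
    """
    text = event.get("text", "")
    if event.get("channel_type") == "im":
        return text.strip() or None
    mention = f"<@{BOT_ID}>"
    if mention in text:
        return re.sub(re.escape(mention), "", text).strip() or None
    return None
```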
For client-facing use cases or field teams, WhatsApp and Telegram work well. Same agent, different interface.
The principle is simple: zero adoption friction. If people have to open a new app or bookmark a new URL, usage falls off a cliff within two weeks. I've seen it happen. Put the agent where the conversations already are and people just... use it.
The Support Month Matters More Than You Think
Every engagement includes a month of support after delivery. I used to think of this as a safety net. Now I think of it as the second most important phase after skill design.
The first version of an agent is never the final version. Once real users start poking at it, you learn things fast. A skill that seemed complete during testing needs to handle a data format nobody mentioned. The response length that looked perfect on desktop is too long on mobile. Someone discovers a use case that's a 2-hour add and suddenly the agent is twice as valuable.
The support window covers prompt tuning, bug fixes, and minor skill additions. It's the gap between "we shipped an agent" and "we shipped an agent people rely on."
When This Fits (and When It Doesn't)
OpenClaw works when your team lives in Slack or a messaging platform, you have specific repeatable workflows that span multiple systems, and you want something deployed in weeks.
It's not the move when the use case is deeply Salesforce-native (that's Agentforce territory), when you need complex multi-step orchestration across dozens of systems, or when the APIs and data sources you'd need to connect to don't exist yet.
If you're not sure which camp your use case falls into, grab a 30-minute call and I'll tell you straight.