Yesterday I read a piece about AI agents, and I think it lays out some principles that could be useful when designing an AI agent with Tryton or any similar framework.
I'll just quote the author:
If you’re thinking about building with AI agents, start with these principles:
Define clear boundaries. What exactly can your agent do, and what does it hand off to humans or deterministic systems?
Design for failure. How do you handle the 20-40% of cases where the AI makes mistakes? What are your rollback mechanisms?
Solve the economics. How much does each interaction cost, and how does that scale with usage? Stateless often beats stateful.
Prioritize reliability over autonomy. Users trust tools that work consistently more than they value systems that occasionally do magic.
Build on solid foundations. Use AI for the hard parts (understanding intent, generating content), but rely on traditional software engineering for the critical parts (execution, error handling, state management).
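To make those principles concrete, here is a minimal Python sketch of how the last three points could fit together: the AI handles intent understanding, a confidence threshold and an action whitelist define the boundary, and execution stays in deterministic code with a human handoff for everything else. All names here (`parse_intent`, `handle_request`, the threshold value) are hypothetical illustrations, not any real API; `parse_intent` is a stub standing in for an actual model call.

```python
# Hypothetical stub standing in for an LLM call: the AI only does the
# "hard part" (understanding free-form user text), nothing else.
def parse_intent(text: str) -> dict:
    text = text.lower()
    if "refund" in text:
        return {"action": "refund", "confidence": 0.9}
    if "cancel" in text:
        return {"action": "cancel_order", "confidence": 0.85}
    return {"action": "unknown", "confidence": 0.2}

# Clear boundaries: the agent may only ever trigger these actions.
ALLOWED_ACTIONS = {"refund", "cancel_order"}
CONFIDENCE_THRESHOLD = 0.8  # illustrative value, to be tuned per use case

def execute(action: str) -> str:
    # Critical part: deterministic, traditional code owns execution,
    # error handling, and state. Rollback lives here, not in the model.
    try:
        # ... perform the action transactionally ...
        return f"done:{action}"
    except Exception:
        # rollback mechanism would undo partial changes here
        return "rolled_back"

def handle_request(text: str) -> str:
    intent = parse_intent(text)
    # Design for failure: out-of-scope or low-confidence requests
    # are handed off to a human instead of being guessed at.
    if (intent["action"] not in ALLOWED_ACTIONS
            or intent["confidence"] < CONFIDENCE_THRESHOLD):
        return "escalate_to_human"
    return execute(intent["action"])
```

Calling `handle_request("I want a refund")` goes through the deterministic path, while anything the stub cannot classify confidently returns `"escalate_to_human"`. The design choice is that the model's output is treated as untrusted input: it selects among pre-approved actions but never executes anything itself.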