Taking advantage of LLMs

I read this piece about AI agents yesterday, and I thought it gives some principles that are worth considering when designing an AI agent with Tryton or elsewhere:

I’ll just quote him:

If you’re thinking about building with AI agents, start with these principles:

Define clear boundaries. What exactly can your agent do, and what does it hand off to humans or deterministic systems?

Design for failure. How do you handle the 20-40% of cases where the AI makes mistakes? What are your rollback mechanisms?

Solve the economics. How much does each interaction cost, and how does that scale with usage? Stateless often beats stateful.

Prioritize reliability over autonomy. Users trust tools that work consistently more than they value systems that occasionally do magic.

Build on solid foundations. Use AI for the hard parts (understanding intent, generating content), but rely on traditional software engineering for the critical parts (execution, error handling, state management).
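The last principle, splitting the "hard parts" from the "critical parts", can be sketched in a few lines of Python. This is only an illustration with hypothetical names (`parse_intent`, `execute`, `ALLOWED_ACTIONS`); the LLM call is stubbed out, and a real system would replace it with an actual model request returning structured JSON:

```python
# Hypothetical boundary: the LLM only turns free text into a structured
# intent; everything that actually touches data is plain, testable Python.

def parse_intent(message: str) -> dict:
    # Stand-in for a real LLM call; a real system would send `message`
    # to a model and ask it to reply with JSON.
    if "invoice" in message.lower():
        return {"action": "create_invoice", "amount": 100}
    return {"action": "unknown"}

# Clear boundary: anything outside this set is handed off, not executed.
ALLOWED_ACTIONS = {"create_invoice"}

def execute(intent: dict) -> str:
    # Deterministic execution with explicit validation and error handling,
    # because the model's output is never trusted as-is.
    if intent.get("action") not in ALLOWED_ACTIONS:
        return "handed off to a human"
    amount = intent.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError("invalid amount from the model")
    return f"invoice created for {amount}"

print(execute(parse_intent("Please create an invoice")))
print(execute(parse_intent("Delete all records")))
```

The point is that the fuzzy step (understanding intent) is isolated behind a schema, while the consequential step (execution) stays conventional software that can be validated and tested.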


Thank you @nicoe, that is a good read.

I completely agree with those principles, and I think that making Tryton robust with subtransactions and perhaps other rollback mechanisms would be a great step forward, even if no agentic support is added to core for the moment.
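The subtransaction/rollback idea can be illustrated with SQL savepoints. The sketch below uses SQLite from the Python standard library purely to show the pattern; it is not Tryton's actual transaction API, which has its own mechanisms:

```python
import sqlite3

# Illustration of the rollback pattern: an agent's failed step is
# undone with a savepoint, without losing the work done before it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE party (name TEXT NOT NULL)")

# Work done before the agent acts.
conn.execute("INSERT INTO party VALUES ('trusted record')")

conn.execute("SAVEPOINT agent_step")  # subtransaction boundary
try:
    conn.execute("INSERT INTO party VALUES ('agent guess')")
    raise RuntimeError("the model made a mistake")  # simulated failure
except RuntimeError:
    # Undo only the agent's step, not the whole transaction.
    conn.execute("ROLLBACK TO SAVEPOINT agent_step")
conn.execute("RELEASE SAVEPOINT agent_step")

rows = [r[0] for r in conn.execute("SELECT name FROM party")]
print(rows)  # the trusted record survives; the failed step is gone
```

This is exactly the kind of mechanism that would let an agent attempt an operation and cleanly back out of the 20-40% of cases where it gets things wrong.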

But I would also add that at some point one needs to start working on this stuff, knowing that several things won’t work, in order to start learning and find the right solutions.