Rationale
It might be helpful to integrate AI into Tryton. Imagine writing descriptions for a product, text about a subject in a quote, or an answer to a customer in the upcoming chat module.
Producing pictures could also be interesting. If you have a picture of a product, it would be nice to see it in a real-world situation. For example, you have a pan and tell the AI to make a picture of a cook working with it in a lovely kitchen.
And this is only the beginning…
Proposal
Could there be a way to integrate our own AI system with Tryton? Or is it better to connect to an existing one through an API?
Implementation
No idea, sorry. But if it were a kind of “service” available on each record (like notes, attachments and chat), that might be helpful.
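To make the “service on each record” idea concrete, here is a minimal sketch of how such a service could turn any record's fields into a prompt for a text model. Everything here is hypothetical: `build_prompt` and the plain-dict record are illustration only, not existing Tryton API.

```python
# Hypothetical sketch of a generic per-record AI "service", in the spirit
# of notes and attachments.  `build_prompt` and the field names below are
# assumptions for illustration, not part of Tryton.

def build_prompt(record, task):
    """Turn a record's non-empty fields into a plain-text prompt."""
    lines = [f"Task: {task}"]
    for name, value in sorted(record.items()):
        if value:  # skip empty fields
            lines.append(f"{name}: {value}")
    return "\n".join(lines)

# Example record, represented here as a plain dict:
product = {'name': 'Cast-iron pan', 'code': 'PAN-01', 'notes': ''}
prompt = build_prompt(product, "Write a short product description")
print(prompt)
```

The same helper could then feed a description generator, a translator, or a reply drafter, which is what makes a generic record-level service attractive.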
I have doubts about using generated images to sell a product. Product pictures usually carry legal obligations, so I can not see how a generated image could represent something that actually exists.
I’ve invested a lot of time in these topics, so here are my two cents:
Things that can be valid use cases for Tryton:
Generate the product description from the product’s images.
Generate translated product descriptions for other languages: you write one description, the AI rewrites it in each translation.
Record commercial calls on sale opportunities to generate transcriptions and summaries.
Improve the user docs and dev docs to train a model that helps users and developers navigate Tryton or develop for it (1)
Draft answers to customer inquiries, but we still lack a chat module (the chat currently in development is just for internal communication)
Draft emails to send records (it’s possible, but the simple email templates are enough for now; I can not find a valid use case)
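The translation use case above is the easiest one to sketch: one source description fans out into one prompt per target language, and each prompt goes to whatever model you use. The prompt wording and function name are assumptions for illustration.

```python
# Sketch of the translation use case: one hand-written description,
# one AI prompt per target language.  The exact prompt text is an
# assumption; tune it against your model.

def translation_prompts(description, languages):
    """Build one rewrite prompt per target language."""
    return {
        lang: (f"Rewrite the following product description in {lang}, "
               f"keeping the tone and the technical details:\n\n{description}")
        for lang in languages
    }

prompts = translation_prompts("A sturdy cast-iron pan.", ["German", "Spanish"])
```

In Tryton terms, the source description would come from the product's default language and the results would fill the translations of the same field.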
About product Images:
You will need two strategies here: masks (to replace an item in an existing image) and LoRA training. You will need to provide ~10 photos of your product to train a LoRA. Also expect little hacks here and there (like adding noise to images, upscaling, correction models, inpainting).
Generations can vary a lot due to the many customizable parameters each model exposes.
There is no single model that works well for all use cases.
Mokker AI, for example, takes a different approach: they don’t use only a text description to create the background; they provide assets you can move and compose as in a sketch editor.
For all of the above, my recommendation is to create your own custom flow and experiment until you get consistent image generation.
Implementation:
Most models work over an API, so you can set up your own model on your own server. But if the server is resource-hungry and your usage of the model is low, the server costs normally don’t make sense, so for me it’s better to use an external pay-per-use API.
Setting up AI servers is not an easy task due to incompatibilities with the installed hardware, which for me is another reason to use external services.
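The self-host versus pay-per-use trade-off comes down to simple break-even arithmetic. All the numbers below are made-up assumptions, just to show the shape of the calculation:

```python
# Illustrative break-even arithmetic (both numbers are made-up assumptions):
# a GPU server versus a pay-per-use text API.
server_cost = 300.0        # EUR/month for a self-hosted GPU server (assumed)
price_per_request = 0.01   # EUR per API request (assumed)

# Monthly request volume at which self-hosting starts to pay off:
break_even = server_cost / price_per_request
print(break_even)
```

Below that volume the external API wins on cost alone, before even counting the hardware-compatibility headaches mentioned above.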
For text, there are libraries that let you set the base API URL and the key behind a generic Python wrapper, so you can change the model easily and connect to any provider (your own, Replicate, Anthropic, OpenAI…)
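The point about swapping providers via a base URL can be sketched with nothing but the standard library: many providers expose an OpenAI-compatible `/chat/completions` endpoint, so only the base URL, key and model name change. The URL, key and model below are placeholders; the request is built but deliberately not sent.

```python
# Provider-agnostic sketch: build (but do not send) a request for an
# OpenAI-style chat endpoint.  The base URL, key and model name are
# placeholders; swap them per provider (your own server, Replicate,
# Anthropic via a compatibility layer, OpenAI...).
import json
import urllib.request

def chat_request(base_url, api_key, model, prompt):
    """Build an HTTP request for a /chat/completions-style endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Switching providers is just a different base_url:
req = chat_request("http://localhost:8000/v1", "sk-test", "my-model",
                   "Describe a cast-iron pan.")
```

Libraries like the official OpenAI Python client expose the same idea as a `base_url` parameter, which is what makes the wrapper generic.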