Return result to a queue worker

I’m experimenting with a very simple worker which reads from ir.queue and executes tasks as they appear. Everything works, but I never get any data back after a task is finished.

The idea is that the worker sits between the queue and an external system. When data is changed or added in Tryton, a new task is added to the queue, which is picked up by the worker and processed. This processing is still a function inside Tryton itself, which is fine because it can build a meaningful JSON object. That JSON object can then be returned to the worker, so the worker can send it over to the other side to be processed there.

I got this working by putting the result of the getattr call in a variable and returning that variable at the end of the function ( trytond: 738c1d725255 trytond/ir/queue.py ).
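For illustration, a minimal sketch of what such a change could look like, assuming a simplified run() along the lines of the one in trytond/ir/queue.py (the real method also restores user and context and handles the timestamps; the data keys used here are assumptions for the example):

import datetime

from trytond.pool import Pool


def run(self):
    # Simplified method of the ir.queue model: look up the model and
    # records stored in the task data and call the queued method on them.
    pool = Pool()
    Model = pool.get(self.data['model'])
    instances = Model.browse(self.data['instances'])
    # Keep the value returned by the queued method instead of dropping it
    result = getattr(Model, self.data['method'])(
        instances,
        *self.data.get('args', []), **self.data.get('kwargs', {}))
    self.finished_at = datetime.datetime.now()
    self.save()
    # Hand the result back to the caller, e.g. the worker
    return result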

Is this something anybody is interested in?

We have this proposal: Issue 8318: Parallelism tasks - Tryton issue tracker
But we do not really have a use case for it.

I know that proposal, but there the data is still kept inside Tryton. I’m kind of misusing the queue as a message broker. So it’s not about a queue for long-running tasks or too many tasks at once. It’s about storing information in Tryton and then sending that information to the other side. Of course I could use RabbitMQ or another message broker, but this is way easier and more than enough. Also, why use another piece of software when there is a queue already integrated?

My use case: when data is changed, the changes are also put as a task into the queue. The worker then takes that task and processes it. The result from that task comes back to the worker, which can then send it elsewhere. Below is an example.

Inside a module in Tryton:

@classmethod
def write(cls, records, values, *args):
    super(MasterClass, cls).write(records, values, *args)

    # Push a task onto ir.queue; the worker picks it up later
    cls.__queue__.send_overseas(records)

def send_overseas(self):
    # Executed by the queue task: build a JSON-serializable payload
    res = []
    for data in self:
        res.append({
            'id': data.id,
            'name': data.name,
        })
    return res

Part of a worker. This worker has a connection with Tryton AND a connection with another system:

def ship_overseas(self, data):
    # Connection to the external system; just a print for the example
    print("shipping data overseas", data)

def do_some_work(self):
    # Pick up every task that has not been processed yet
    tasks = self.queue.search([('finished_at', '=', None)])
    for task in tasks:
        # run() executes the queued method in Tryton; with the change
        # mentioned above it also returns that method's result
        res = task.run()
        if res:
            self.ship_overseas(res)

So the worker gets a new task and processes it. The task executes send_overseas and gets some data back. That data can also be returned to the worker, so the worker can call another function which has a connection to the other system and transfers the data.
Currently this data is dropped when the task is run, but it would be nice to have the data available.

I see no point in making a pseudo worker. The first task can just post another task.
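As a sketch of what that could look like (the follow-up method name is hypothetical), the first queued method simply enqueues another task instead of returning anything:

@classmethod
def send_overseas(cls, records):
    payload = [{'id': r.id, 'name': r.name} for r in records]
    # Instead of returning the payload, queue a follow-up task that
    # performs the external call (hypothetical method on the same model)
    cls.__queue__.ship_overseas(records, payload)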

I don’t either. But that’s not the case here. I want certain data to be stored outside Tryton. I could do that by directly changing the create and write methods, but I want to loosely couple things, so I thought about using the queue and worker. It’s about a worker which watches the queue on the Tryton side, but also has a connection with another system, and basically passes data from Tryton to that system.

You can also see it as a kind of notification, sending messages to the client. Or take sending an email, which is now tightly coupled to the transaction: this could also be a task in the queue which generates the email in Tryton and returns the generated message to the worker. The worker then sends the email by connecting to the SMTP server.
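On the worker side that could look roughly like this (a sketch only; the SMTP server name and the keys of the returned data are assumptions):

import smtplib
from email.message import EmailMessage

def ship_email(self, data):
    # 'data' is the rendered email returned by the queued task
    msg = EmailMessage()
    msg['From'] = data['from']
    msg['To'] = data['to']
    msg['Subject'] = data['subject']
    msg.set_content(data['body'])
    # Deliver outside the Tryton transaction
    with smtplib.SMTP('smtp.example.com') as server:
        server.send_message(msg)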

Yes, maybe it’s a wrong use of the queue.

I see no need to store any result in your use case. The queued method can just do all of that.