The idea is simple: give an economic reward to security professionals who are able to find security issues in Tryton using one of these platforms.
We could set the scope of the bounty program, including the types of vulnerabilities accepted and the reward for each of them, as well as the rules for participating.
You can take a look at the provided lists to get an idea of what other companies are doing and offering. There are huge differences in rewards, and I think we could afford some investment provided enough companies are interested.
For me the problem is: who will validate the bounty? Who will fix the issues? And what will be the reward for those doing that work?
But also, why should security issues be rewarded and not other kinds of issues?
Well, that’s something we would have to define, but I don’t think it would be difficult to find a solution if there’s agreement that it’s worth going ahead with such an initiative. For me, if b2ck agreed, we could pay for you to be in charge of those fixes.
In the first place, it seems to me that almost everybody considers security issues to be somewhat special. That’s why this kind of issue is hidden from the general public in the Tryton bug tracker, for example.
It is also a type of issue where a bad actor can cause harm to others (in this case, to the company). If Tryton has a security issue that can be exploited without user access, for example, that means someone who is not part of my organization can harm my company. In contrast, with other kinds of bugs I myself have to take the action that triggers the problem.
Finally, security issues are also special because in many cases users never stumble upon them: the issue does not cause the program to misbehave in normal use, but deliberately searching for such problems can make them apparent.
So to summarize, two things make security issues different: somebody outside the company (or a certain circle of trust) can cause harm, and the issue may never be found unless someone is intentionally looking for it.
I think it’s a good idea to run such tests to get an idea of where security can be improved. The question is how the tests would be done: by source code analysis, by pen-testing, or both? And using the client, or scripting against the server to see where Tryton starts to fail?
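To illustrate the scripting approach, here is a minimal sketch that probes a Tryton server’s JSON-RPC endpoint without credentials, to see which methods answer anonymously. The server URL, database name, method names, and parameter shapes are assumptions for illustration and may differ between Tryton versions and deployments.

```python
#!/usr/bin/env python3
"""Probe a Tryton server's JSON-RPC endpoint without credentials.

Minimal sketch: the URL, database name, method names and parameter
shapes below are assumptions for illustration; they may differ
between Tryton versions and deployments.
"""
import json
import urllib.request

SERVER = "http://localhost:8000"  # assumed local test instance
DATABASE = "test"                 # assumed database name


def rpc_call(url, method, params):
    """Send one JSON-RPC request and return the decoded response."""
    payload = json.dumps({"id": 0, "method": method, "params": params})
    request = urllib.request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    # Methods a Tryton server is commonly expected to answer without a
    # session; any *other* method responding anonymously would be a
    # candidate finding worth reporting.
    for method in ("common.server.version", "common.db.list"):
        try:
            result = rpc_call(f"{SERVER}/{DATABASE}/", method, [None, None])
            print(method, "->", result)
        except Exception as exc:
            print(method, "failed:", exc)
```

From there, the same loop could be extended with malformed or oversized payloads to observe how the server reacts to unexpected input, which is the “scripting until Tryton starts to fail” part of the question.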