Issues and Improvements in Freecash Contribution Evaluation
jennifer last edited by
The Freecash ecosystem has completed two rounds of contribution evaluation and distribution, achieving a broadly fair and equitable distribution based on contributions. However, several issues remain that can be improved.
Dependence on a Center
Issue: Many people are accustomed to traditional management-style evaluation and believe that dedicated managers should record and evaluate each person's contributions. This habit is often the root cause of the other problems below.
Improvement: By promoting and encouraging participation in the evaluation process, more people can understand FCH's decentralized evaluation mechanism.
Issue: When initiating the evaluation, CY_vpAv set up six top-level categories, and the first-level weights assigned to them easily drifted from the real contributions each category covered. For example, the finance category contained few actual contribution projects and correspondingly few achievements, yet it was assigned a 10% weight in the first round of evaluation. The second round lowered it to 3.6%, which was still high relative to the amount distributed and to the contributions in other categories.
Improvement: Replace the top-down category-based evaluation with a bottom-up one. The details are still to be explored.
Issue: In the two evaluations, CY_vpAv, N_B485, and HHQ_fR8N handled the main organizing and data-processing work. Although the key weights were determined by the contributors themselves and the results were broadly successful, the organizers still bore significant pressure and risk, and had to understate their own contributions to avoid giving others grounds for criticism; this itself reinforces dependence on a center. As the community expands, the difficulty and pressure of centralized organization will grow further, and a single mistake could make the process even more centralized.
Improvement: 1) Reduce the workload by developing a contribution-reporting system; 2) Fully implement self-declaration and mutual evaluation by contributors; 3) Establish a clear delegation and authorization mechanism; 4) Adopt a bottom-up evaluation process.
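The self-declaration and mutual-evaluation idea in point 2) can be sketched as a simple weight aggregation. This is a hypothetical design, not the actual FCH mechanism: each contributor submits a weight vector over all contributors, each evaluator's own self-score is discarded to damp self-interest, and the remaining normalized ballots are averaged.

```python
# Minimal sketch of self-declaration plus mutual evaluation
# (an assumed design, not the actual FCH mechanism).
from collections import defaultdict

def aggregate(ballots):
    """ballots: {evaluator: {contributor: weight}} -> final shares."""
    sums = defaultdict(float)
    for evaluator, scores in ballots.items():
        # Drop the evaluator's score for themselves.
        peers = {k: v for k, v in scores.items() if k != evaluator}
        total = sum(peers.values())
        for k, v in peers.items():
            sums[k] += v / total          # normalize each ballot
    grand = sum(sums.values())
    return {k: v / grand for k, v in sums.items()}

# Hypothetical ballots from three contributors A, B, C:
ballots = {
    "A": {"A": 50, "B": 30, "C": 20},
    "B": {"A": 40, "B": 40, "C": 20},
    "C": {"A": 45, "B": 35, "C": 20},
}
final = aggregate(ballots)  # each person's share comes only from peers' ballots
```

Because self-scores are excluded, inflating one's own declared weight has no direct effect on the result; only peers' assessments count.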
Small Contributors vs Large Contributors
Issue: If the contributions within a group of evaluators differ greatly but each person has equal voting rights, the outcome of the self-interested game is that small contributors eat into large contributors' shares. For example, suppose A, B, C, D, E, F, and G evaluate jointly, and A, B, and C each have actual contributions ten times those of the other four. Even if A, B, and C claimed all of the other four's contributions (100%) for themselves, they would increase their own rewards by only about 10%. Conversely, if the other four each overstated their contributions by 100%, they would reduce A, B, and C's rewards by only about 10%. Moreover, small contributors are in the majority: if the large contributors overstate themselves first, they invite harsher retaliation from the small contributors, who win on number of votes. The result of the game is therefore often that large contributors give up part of their interests.
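The arithmetic above can be checked with a small pro-rata reward model. The 10-to-1 contribution ratio is the example's assumption, not real data:

```python
# Payoff asymmetry between large and small contributors, assuming
# A, B, C contributed 10 units each and D, E, F, G contributed 1 unit
# each, with rewards split pro rata by reported contribution.

def group_share(reported, members):
    """Fraction of the total reward that `members` receive."""
    total = sum(reported.values())
    return sum(reported[m] for m in members) / total

actual = {"A": 10, "B": 10, "C": 10, "D": 1, "E": 1, "F": 1, "G": 1}
large = ["A", "B", "C"]

honest = group_share(actual, large)       # 30/34, about 88.2%

# Small contributors each overstate by 100% (report 2 instead of 1):
inflated = dict(actual, D=2, E=2, F=2, G=2)
squeezed = group_share(inflated, large)   # 30/38, about 78.9%
# The large contributors lose roughly 10% of the reward pool.

# Large contributors claim all of the small contributions for themselves:
grabbed = {"A": 34/3, "B": 34/3, "C": 34/3, "D": 0, "E": 0, "F": 0, "G": 0}
maxed = group_share(grabbed, large)       # 34/34 = 100%
# Even total expropriation gains them only about 12% of the pool.
```

The stakes are thus lopsided: small contributors risk little by inflating, while large contributors gain little by doing the same, which is why equal votes tend to drain the large contributors.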
Improvement: 1) The entire evaluation process should be open to the entire ecosystem; 2) Consider allowing independent evaluators to participate in the evaluation, the details of which are under discussion; 3) Consider using a representative system for the initial evaluation in a bottom-up evaluation process, selecting evaluators first and then evaluating weights.
Missing Contribution Reporting
Issue: During the self-reporting process, some contributors may forget, miss announcements, or not understand the mechanism, so that missing contributions are only discovered during or after the evaluation.
Improvement: 1) Promote the principle of self-reporting in evaluations; 2) Develop tools such as WeChat bots to make quick reporting easy; 3) Encourage delegating evaluation agents to record and report contributions.
Lack of Understanding of Evaluation
Issue: Contributors who participate in contribution evaluations may not understand the evaluation mechanism and how to participate.
Improvement: 1) Allow for abstention. Abstention does not mean giving up rewards, but rather accepting the results of others' evaluations; 2) Delegate evaluation agents; 3) Learn and understand the evaluation mechanism.
Lack of Knowledge of Others' Contributions
Issue: Contributors may know only their own contributions and not understand others', making objective evaluation difficult and pushing evaluations toward subjectivity.
Improvement: 1) Encourage contributors to pay attention to the release of contributions and community notifications; 2) Collect and publish contribution information regularly; 3) Delegate evaluation agents.
False Contribution Reporting
Issue: No false reporting of contributions has been found so far, but as the community grows and anonymity increases, it may occur.
Improvement: 1) Link the CID with a WeChat or other account to confirm the contributor's identity and actions; 2) Strengthen the contribution-verification mechanism by inviting others (especially collaborators) to help verify at reporting time.
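The verification idea in point 2) could take the shape of a countersigned report record. This is purely a hypothetical sketch; the field names and the one-confirmation rule are assumptions, not an existing FCH tool:

```python
# Hypothetical contribution report that collaborators countersign
# (an assumed design for illustration, not an existing FCH system).
from dataclasses import dataclass, field

@dataclass
class ContributionReport:
    cid: str                  # reporter's CID
    description: str          # what was done, in the smallest useful unit
    collaborators: list       # CIDs invited to verify this report
    confirmations: set = field(default_factory=set)

    def confirm(self, cid):
        # Only an invited collaborator may countersign.
        if cid in self.collaborators:
            self.confirmations.add(cid)

    def verified(self):
        # Treat the report as verified once any collaborator confirms.
        return len(self.confirmations) > 0

report = ContributionReport("CID_alice", "Translated weekly report", ["CID_bob"])
report.confirm("CID_bob")     # collaborator countersigns
```

Requiring a countersignature from a named collaborator raises the cost of false reporting: a fabricated contribution would need an accomplice willing to attach their own CID to it.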
Contribution Reporting Techniques
Issue: Reporting technique can affect the evaluation of a contribution: the more detailed the report, the more it influences evaluators' judgments and the more rewards it tends to attract.
Improvement: 1) Encourage contributors to report in the smallest detailed unit possible; if everyone does so, no one gains a special advantage; 2) Avoid lumping contributions together out of faded memory or impatience when reporting; 3) Hire evaluation agents to assist.