On Acceptance Criteria in Everipedia

Published by liberdadenow 25 Jul

The simplicity of the decision process that chooses what will and will not be published on Everipedia is delightfully elegant: an absolute majority of equally weighted voters decides on each edit.

This simplicity, however, hinders the emergence of some properties expected of an effective editing platform for a universal knowledge source.

The desired properties of the platform follow from the intended result: a body of knowledge that is

  1. correct and trustworthy,

  2. permanent.

To be correct and trustworthy, the articles must be written and attested by experts in their respective areas, following a protocol like those used by the editorial board of any acknowledged publisher.

The problem here is how to appropriately choose who is in charge of writing and evaluating the content published on a decentralized, permissionless platform. The answer lies in the way experts become experts. They are granted the title by the very readers and users of their intellectual work, who test their claims and verify their self-consistency and validity in daily life. Once a group of experts forms, it takes over the task of recruiting new members simply because it is more effective at that task.

Laypeople remain the foundation of the process because they are the audience for the content. As soon as the content becomes imprecise or useless, laypeople stop trusting the authors of that content and the whole hierarchy they themselves have established, and the hierarchy naturally dissolves.

To mimic this natural process on a decentralized platform, everyone must be allowed to submit content proposals, which the users then decide whether to publish. Besides, good authors have to be incentivised to participate in the curation process. With this in mind, I present a mechanism to improve the editorial process.

Initially, all users have the same voting weight, but as the body of knowledge in a certain area grows, a user's voting power increases in proportion to their contribution to that area. In other words, for each article or edit approved, the author's voting power on subsequent edits increases by a certain amount until it reaches a saturation limit of, say, 20% of all votes. This saturation limit prevents fewer than three people from deciding the merit of an edit on their own.
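The weight rule above can be sketched in a few lines. The increment per approved edit and the base weight are illustrative assumptions (the text only fixes the 20% cap), and the function names are my own, not part of any Everipedia specification:

```python
WEIGHT_STEP = 0.01   # weight gained per approved edit (assumed value)
SATURATION = 0.20    # no single voter may exceed 20% of all votes

def voting_weight(approved_edits: int, base: float = 0.01) -> float:
    """Weight grows with approved contributions, capped at the saturation limit."""
    return min(base + approved_edits * WEIGHT_STEP, SATURATION)

def weighted_majority(votes: dict[str, bool], approvals: dict[str, int]) -> bool:
    """Accept an edit iff the weighted 'yes' votes form an absolute majority."""
    total = sum(voting_weight(approvals[v]) for v in votes)
    yes = sum(voting_weight(approvals[v]) for v, in_favor in votes.items() if in_favor)
    return yes > total / 2
```

For example, a voter with 100 approved edits saturates at weight 0.20 and can outvote two newcomers at 0.01 each, which is exactly the concentration the cap is meant to bound.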

These rules have at least one flaw. Once a handful of authors - just three, for a 20% saturation limit - reach their maximum voting power, they can, as a colluding group, become powerful enough to perpetuate themselves as the curators of a branch even if the quality of their judgement declines. To prevent this undesired effect, a second polling can be added to validate the acceptance of an edit. In this second polling, the edit is rejected if at least 90% of the participants so decide. This time the votes are equally weighted, and a minimum number of participants equal to the turnout of the first polling is required. Every time an author has a proposal rejected, their voting power decreases by a certain amount. Thus the scrutiny itself is supervised by the community, and patent misjudgements by careless experts can be avoided.
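The second polling can be sketched as follows. The 90% rejection threshold and the minimum-turnout requirement come from the text; the penalty value, the function names, and the choice to treat an under-attended poll as a failed veto are my assumptions:

```python
VETO_THRESHOLD = 0.90  # share of equal-weight voters needed to overturn an edit
PENALTY = 0.01         # voting power lost by the author on rejection (assumed)

def second_poll(reject_votes: int, participants: int, min_participants: int) -> bool:
    """Return True if the community veto succeeds and the edit is rejected.

    Each participant carries equal weight. If turnout falls below the
    turnout of the first polling, the veto is treated as failed (assumption).
    """
    if participants < min_participants:
        return False
    return reject_votes / participants >= VETO_THRESHOLD

def apply_rejection(author_weight: float) -> float:
    """Decrease the author's voting power after a rejected proposal."""
    return max(author_weight - PENALTY, 0.0)
```

With 10 participants, 9 rejection votes clear the 90% threshold while 8 do not, so a small clique of saturated experts cannot be overturned by a narrow equal-weight majority, only by near-unanimous community dissent.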

I hope these reflections can somehow contribute to the improvement of the platform.

Thanks!
