Guest post: The impossibility of ratings and necessity of addressing Psychological Incentives

100% on board with this mission, which is very close to my heart.

1. Regarding ratings

All ratings are manipulated.

In 2015, Reddit accidentally revealed that its users were being manipulated. The #1 entry under “Most Addicted Cities” in this now-deleted blog post was Eglin Air Force Base, a major hub for domestic propaganda and online manipulation programs.


Here is a paper funded by Eglin about online social control and information suppression:

Meanwhile, if reputation systems are involved — whether to enforce a standard for raters or to distinguish different kinds of raters — they. will. be. gamed. (See Why Reputation Systems Are Bad Incentive Design, a 3-minute read:

The problem of Identity

If you want to maintain data on raters, identity is key. Even blockchains aren’t prepared to help with this yet—anyone can create multiple crypto wallets and pretend to be multiple people.

KYCing all raters by collecting passport photos doesn’t seem like an option. (Also, governments can make new passports whenever they want.)
Until a breakthrough occurs in the identity area, Sybil attacks are inevitable when something as powerful as public opinion is at stake.
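
As a toy illustration of why wallet-based identities don’t help (this is not a real wallet library — it just hashes random bytes into pseudo-addresses, loosely mimicking key-pair derivation):

```python
import hashlib
import os

def new_pseudo_address() -> str:
    """Hash 32 fresh random bytes into a pseudo-address, loosely
    mimicking how wallet addresses are derived from key pairs."""
    private_key = os.urandom(32)  # no identity check, no meaningful cost
    return hashlib.sha256(private_key).hexdigest()[:40]

# One person, a thousand "independent raters": a Sybil attack in miniature.
sybils = [new_pseudo_address() for _ in range(1000)]
print(len(set(sybils)))  # 1000 distinct addresses, generated in milliseconds
```

Nothing ties any of these addresses to a unique human, which is the whole problem.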

2. Psychological Incentives

The problem of centralized knowledge authority is mirrored by a problem of incentives among information consumers.

We naturally seek “authoritative sources” in order to outsource the responsibility and burden of independent thinking while saving face with our communities. (After all, if the New York Times says it, how can you be blamed for believing it?)

These psychological incentives pervert the knowledge-seeking endeavor and make it possible for centralized knowledge authorities to control public opinion. They are ingrained over millions of years of evolution and are not going away anytime soon.

Psychological incentives are exploited by centralized knowledge authorities to memetically engineer the public in ways that benefit themselves — news outlets polarize their audiences so they’re less likely to leave, magazines publish clickbait to compel attention, and Wikipedia patrols encyclopedia entries to enforce official narratives and taboos.

The only way to liberate ourselves from the psychological incentives that make us vulnerable to (hungry for!) centralized knowledge control is to introduce new incentives on top of them.

How can we do this?

When seeking knowledge, we seek authoritative sources like the New York Times. But when seeking investments, we seek undervalued opportunities. We look for what has been overlooked by the authorities.

This is how knowledge can be enabled to advance.

By imbuing the search for superior truths with the same incentives as the search for superior investments, we can simultaneously 1) liberate ourselves from our psychological incentives to seek authoritative sources and 2) democratize the concept of authority.

Without creating something that reforms the public’s relationship to knowledge, the public remains vulnerable to information manipulation, no matter how sophisticated (or “democratized”) the new technologies are.



  1. This is a great opener to the discussion, Mike.

    I know exactly how most sites/systems are or were manipulated, and you are right. The only real question is usually the value gained versus the effort required.

    The decisions and biases built into tech should be regularly reviewed by real people. The exact way this is done is beyond my pay grade, but there should be enough transparency that you can know what is happening behind the scenes.

  2. Addressing the ratings system as a whole: my initial thought was to use something like two-factor authentication for accounts that wish to rate entries, but it seems the decentralized IPFS implementations of 2FA do not prevent a user from generating multiple accounts and 2FA keys.
    However, after thinking more about the ratings system and the concerns about manipulation, I wonder whether others have considered a citation system, similar to how academic indexes track the number of citations an article has received. Obviously there would need to be a long discussion about how to implement such a system and the possibilities for abuse, but I think it is worth considering. It could be used in lieu of, or in conjunction with, a ratings system. Just wondering what others think of this idea.
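
    A minimal sketch of this citation idea (the article IDs and citation graph below are made up): rank entries by how many other entries cite them, rather than by raw votes.

    ```python
    from collections import Counter

    def citation_counts(citation_graph):
        """citation_graph maps each article ID to the IDs it cites."""
        counts = Counter()
        for cited in citation_graph.values():
            counts.update(set(cited))  # ignore duplicate cites from one source
        return counts

    graph = {"A": ["C"], "B": ["C"], "C": ["D"]}
    print(citation_counts(graph).most_common())  # [('C', 2), ('D', 1)]
    ```

    Of course, citation rings are a known form of abuse here too, which is why the long discussion would be needed.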

  3. Very good points Mike.

    For basing anything on “votes”, you can’t count virtual identities, because of Sybil attacks, as you say.
    Any attempt to use real identities destroys privacy, and is very difficult anyway.
    So for voting, you have to abandon “counting noses” (as Heinlein used to put it) and count something else — useful contributions seems like the obvious choice.
    This is still subject to being gamed, but at least in principle it might be possible to combat it, whereas counting identities is in principle unsolvable.
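
    A sketch of this “count contributions, not noses” approach (the names and numbers are hypothetical): each vote is weighted by the voter’s record of accepted contributions, so fresh Sybil accounts carry zero weight.

    ```python
    def weighted_tally(votes, accepted_contributions):
        """votes: {voter: +1 or -1}; accepted_contributions: {voter: count}.
        A voter with no accepted contributions has zero voting weight."""
        return sum(
            direction * accepted_contributions.get(voter, 0)
            for voter, direction in votes.items()
        )

    contributions = {"alice": 40, "bob": 12}             # earned over time
    votes = {"alice": +1, "bob": -1}
    votes.update({f"sybil{i}": -1 for i in range(100)})  # 100 fresh accounts

    print(weighted_tally(votes, contributions))  # 28: the Sybil swarm is inert
    ```

    As noted above, this only shifts the attack surface to gaming the contribution record, but that at least seems combatable in principle.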

  4. One thought I’ve had is that rating will be done on a specific encyclopedia the same way editing a page is, outsourcing this problem to the encyclopedia authors instead of to us. Individual sites could be in charge of it and if a site gained a reputation for abuse, encyclosphere readers could discount ratings from that site. I’m not sure if this is workable though… and I realize it doesn’t exactly solve the problem so much as outsource it.

  5. The worst of so many human aberrations is the use of the word “truth.” Preferable is verifiable data, or information that can be empirically validated.
    Having a pop vote [if I got this rating idea right] is tendentious. As you pointed out [and as I suspected, because paranoia comes naturally to me: I don’t trust anything], Reddit is push-pull media in oldspeak. Social media is fantasy land. Its inhabitants don’t know it, but they are subject to digital dementia. So asking these attention-deficit defectives to vote is only going to create fake surges.
