
Method for managing blockchain abuse: Deteriorating signatories #3

chrisUsick opened this issue Jun 16, 2017 · 5 comments

@chrisUsick

To ensure a signatory isn't an adversary, implement a form of signatory verification: make the amount of data a signatory can upload a function of how recently they were verified, and require roughly 50% of peers to vouch for a signatory before it counts as verified, preventing small cliques of adversaries from vouching for each other. Signatories can periodically seek re-verification.
This would allow adversarial peers to be removed over time without significantly impacting the main use cases.
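The proposed rate limit could be sketched as follows. This is a hypothetical illustration, assuming an exponential decay of the upload allowance with time since last verification; the base allowance and half-life parameters are invented, not from any implementation:

```python
import time

def upload_allowance(last_verified_ts: float, now: float,
                     base_bytes: int = 1_000_000,
                     half_life_days: float = 30.0) -> int:
    """Bytes a signatory may upload: full allowance right after
    verification, halving every `half_life_days` thereafter."""
    age_days = max(0.0, (now - last_verified_ts) / 86400.0)
    return int(base_bytes * 0.5 ** (age_days / half_life_days))

now = time.time()
assert upload_allowance(now, now) == 1_000_000             # just verified
assert upload_allowance(now - 30 * 86400, now) == 500_000  # one half-life later
```

A signatory that never re-verifies sees its allowance shrink toward zero, which is the "eventual removal" effect described above.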

@ivoras
Owner

ivoras commented Jun 18, 2017

The question is: what is the goal? I actually considered most of these methods while implementing this and came up with counter-arguments:

  • Size as a function of signatory age: I thought about using this to prevent adversarial signatories from spamming the blockchain with useless data and inflating its size... but it's trivial to create "sleeper" signatories which all wake up at once and dump data when needed. Something more complex is needed instead, possibly having blocks signed by multiple signatories.
  • 50% quorum for verifying signatories: currently the quorum is logarithmic, because when a large number of signatories is involved, e.g. tens of thousands, there is practically zero chance of getting half of them to do anything.

I like the idea of periodic signatory verification - that could be the way to go!

@chrisUsick
Author

chrisUsick commented Jun 19, 2017 via email

@ivoras
Owner

ivoras commented Jun 19, 2017

Hi,

I meant "sleepers" in the sense that, if there's a limit on how much data a signatory can add to the chain based on how young or old the signatory is, the signatory can "play nice" until some time in the future (or someone can steal its key and hold it unused) and then, at a convenient time, spam the chain with data.

But as you say, augmenting this with multiple verifications, a "web of trust" between signatories, could solve a large part of the "sleeper" problem.

Currently, there's the (logarithmic) quorum of how many signatories are needed to accept a new signatory into their ranks. Once a candidate collects enough signatures, he is immediately "active" in the sense that he can start adding new blocks.
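A logarithmic quorum rule like the one described could look like this minimal sketch. The exact formula here, ceil(log2(n)) + 1, is my own invention for illustration; the project's actual formula may differ:

```python
import math

def quorum_size(num_signatories: int) -> int:
    """Signatures needed to accept a new signatory: grows
    logarithmically rather than as a fixed 50% fraction."""
    if num_signatories <= 1:
        return 1
    return math.ceil(math.log2(num_signatories)) + 1

# With 10,000 signatories a 50% quorum would need 5,000 votes,
# while the logarithmic rule needs only a handful:
assert quorum_size(10_000) == 15  # ceil(log2(10000)) = 14, plus 1
```

The point is that the required quorum stays reachable even when most signatories are inactive, at the cost of being easier for a coordinated clique to meet.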

@ghost

ghost commented Jun 27, 2017

Something I've thought about for similar cases: trust can be hierarchical and still be somewhat decentralised. If it's assumed that the progenitor(s) of a blockchain are the highest authority on desired usage, that they can be trusted to appoint trusted "deputies", that those deputies have lower-but-still-high trust to appoint further delegates, and so on, then you can have a GPG-style "web of trust" with diminishing trust as distance from a "trusted root" node increases.

The amount of "trust" needed to append data can then be a configurable value for each chain, possibly updatable with a high enough quorum. So initially data might only be appended by people at distance 1 from a trusted root, or by at least 2 people at distance 2, or at least 4 people at distance 3, but the fall-off rate of trust as a function of distance might be reconfigured to be more permissive as the community grows, or more stringent if abuse occurs.
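The doubling fall-off described above (1 endorser at distance 1, 2 at distance 2, 4 at distance 3) could be sketched like this; the function names and the configurable fall-off base are assumptions for illustration:

```python
def endorsers_required(distance: int, falloff_base: int = 2) -> int:
    """Endorsers needed at a given distance from a trusted root.
    Distance 1 needs 1; the count multiplies by `falloff_base`
    for every extra hop."""
    if distance < 1:
        raise ValueError("distance must be >= 1")
    return falloff_base ** (distance - 1)

def may_append(endorsements: dict[int, int], falloff_base: int = 2) -> bool:
    """endorsements maps distance -> number of endorsers at that
    distance; appending is allowed if any single distance tier
    meets its requirement."""
    return any(count >= endorsers_required(d, falloff_base)
               for d, count in endorsements.items())

assert endorsers_required(1) == 1
assert endorsers_required(3) == 4
assert may_append({2: 2}) is True    # 2 endorsers at distance 2 suffice
assert may_append({3: 3}) is False   # distance 3 needs 4 endorsers
```

Making `falloff_base` a per-chain parameter is one way to get the "reconfigure as the community grows" behaviour mentioned above.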

Just a thought; not an easy fix. There are probably no easy fixes without trade-offs that are unacceptable in some cases. Perhaps that means a pluggable policy solution could provide the necessary versatility: e.g. when starting a chain, choose a "policy object" or "policy schema" which is given information about a contributor/contribution and decides, according to a black-box policy, whether to accept it. That might allow some chains to use pure distance-based metrics, others temporal-decay metrics, and others binary "yes/no by contributor" metrics. How to ensure that everyone uses the same metric is another layer of problems to solve. :)
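The "policy object" idea could be expressed as a simple interface configured per chain at creation time; the class names and the two example policies here are purely illustrative, not from any implementation:

```python
from abc import ABC, abstractmethod

class AcceptancePolicy(ABC):
    """Black-box policy: given facts about a contribution,
    decide whether the chain accepts it."""
    @abstractmethod
    def accept(self, contribution: dict) -> bool: ...

class DistancePolicy(AcceptancePolicy):
    """Accept only contributors within a maximum trust distance."""
    def __init__(self, max_distance: int):
        self.max_distance = max_distance
    def accept(self, contribution: dict) -> bool:
        return contribution["trust_distance"] <= self.max_distance

class AllowlistPolicy(AcceptancePolicy):
    """Binary yes/no by contributor identity."""
    def __init__(self, allowed: set):
        self.allowed = allowed
    def accept(self, contribution: dict) -> bool:
        return contribution["contributor"] in self.allowed

policy: AcceptancePolicy = DistancePolicy(max_distance=2)
assert policy.accept({"contributor": "alice", "trust_distance": 1})
assert not policy.accept({"contributor": "bob", "trust_distance": 3})
```

Validation code then only ever calls `policy.accept(...)`, so distance-based, temporal-decay, and allowlist chains can share the same machinery; agreeing on which policy a chain uses remains the open problem noted above.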

@tangshuang

My idea is this: users have to pay to publish content to the blockchain. If they want to generate spam, they have to pay for it; the more spam they publish, the fewer coins they have left in their account.
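A toy sketch of this pay-to-publish idea, with an invented per-byte fee; the account model and fee schedule are assumptions, not part of the project:

```python
class Account:
    def __init__(self, coins: int):
        self.coins = coins

def publish(account: Account, data: bytes, fee_per_byte: int = 1) -> bool:
    """Charge the publisher per byte of content; reject if they
    can't pay. Spamming drains the spammer's own balance."""
    cost = len(data) * fee_per_byte
    if account.coins < cost:
        return False
    account.coins -= cost
    return True

acct = Account(coins=100)
assert publish(acct, b"hello")           # costs 5 coins
assert acct.coins == 95
assert not publish(acct, b"x" * 1000)    # spam too expensive, rejected
assert acct.coins == 95                  # balance untouched on rejection
```

This makes the cost of abuse proportional to its size, though it assumes the chain has a native coin to charge in.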
