Method for managing blockchain abuse: Deteriorating signatories #3
Comments
The question is: what is the goal? I've actually thought about most of these methods while implementing it and came up with counter-arguments:
- Size as a matter of signatory age: I've thought about using this to prevent adversarial signatories which would spam the blockchain with useless data and increase its size... but it's trivial to have "sleeper" signatories which wake up at once and dump data when needed. Something more complex is needed instead - possibly signing blocks by multiple signatories.
- 50% quorum for verifying signatories: currently, it's logarithmic, because when there is a large number of signatories involved, e.g. tens of thousands, there's literally zero chance of getting half of them to do anything.
I like the idea of periodic signatory verification - that could be the way to go!
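For illustration, here is a minimal sketch of why a logarithmic quorum scales where a 50% quorum does not. The base-2 logarithm and the clamping are assumptions for this sketch, not the project's actual formula:

```python
import math

def logarithmic_quorum(total_signatories: int) -> int:
    """Signatures needed to verify a signatory, growing with log2 of the
    population instead of linearly (assumed formula, for illustration)."""
    if total_signatories <= 1:
        return 1
    return max(1, math.ceil(math.log2(total_signatories)))

# With tens of thousands of signatories, the difference is stark:
for n in (10, 1_000, 50_000):
    print(n, "signatories ->", logarithmic_quorum(n), "required (vs", n // 2, "at 50%)")
```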
Thanks for the reply. Periodic verification could prevent sleeper signatories in a few ways that I can think of:
1. Have the amount of data be a function of not only how recently they have been verified but also the number of verifications they have received over time, i.e. require signatories to establish trust with the system as well as maintain it (see the sketch after this list).
2. I'm not entirely clear on how sleepers work, so I may be incorrect in saying this, but if a signatory wanted to be useful it would require verification and thus become an active signatory - unless, of course, getting verified isn't enough of an action to become "active".
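A rough sketch of point 1, assuming (purely for illustration) an exponential decay on verification recency and a cap on accumulated verifications; none of these constants or names come from the project:

```python
import time

BASE_QUOTA_BYTES = 1_000_000   # hypothetical per-signatory baseline
DECAY_HALF_LIFE = 30 * 86400   # hypothetical: quota halves per 30 days unverified
MAX_TRUST_FACTOR = 10          # hypothetical cap on accumulated trust

def upload_quota(last_verified_ts: float, verification_count: int) -> int:
    """Quota grows with the number of verifications received over time
    (establishing trust) and decays as the last verification ages
    (maintaining trust)."""
    age = max(0.0, time.time() - last_verified_ts)
    recency = 0.5 ** (age / DECAY_HALF_LIFE)           # 1.0 when fresh, -> 0 when stale
    trust = min(verification_count, MAX_TRUST_FACTOR)  # diminishing returns
    return int(BASE_QUOTA_BYTES * trust * recency)
```

A freshly verified sleeper with only one verification thus gets a small quota, while a long-standing, recently re-verified signatory gets the full allowance.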
Hi, I meant "sleepers" in the sense that, if there is a limit to how much data a signatory can add to the chain based on how young or old the signatory is, the signatory can "play nice" (or someone can steal its key and not use it) until, at some convenient time in the future, it can spam the chain with data. But as you say, augmenting this with multiple verifications / a "web of trust" between signatories could solve a large part of the "sleeper" issue. Currently, there's the (logarithmic) quorum of how many signatories are needed to accept a new signatory into their ranks, so after a candidate collects enough signatures to accept him, he is immediately "active" in the sense that he can start adding new blocks.
Something I've thought about for similar cases: trust can be hierarchical and still be somewhat decentralised. If it's assumed that the progenitor(s) of a blockchain are the highest authority on desired usage, that they can be trusted to appoint trusted "deputies", that those deputies have a lower-but-still-high trust to appoint further delegates, and so on, then you can have a GPG-style "web of trust" with diminishing trust as distance from a trusted root node increases.
The amount of "trust" needed to append data could then be a configurable value for each chain, possibly allowing the value to be updated with a high enough quorum. So initially data might only be appended by people at distance 1 from a trusted root, or by at least 2 people at distance 2, or at least 4 people at distance 3... but the fall-off rate of trust as a function of distance might be reconfigured to be more permissive as the community grows, or more stringent if abuse occurs.
Just a thought; not an easy fix. Probably there are no easy fixes that don't have trade-offs that are unacceptable in some cases, which suggests that an interfaced policy solution might allow the necessary versatility. E.g. when starting a chain, decide on a "policy object" or "policy schema" which is given information about a contributor/contribution and decides according to a black-box policy whether to accept it. That might allow some chains to use pure distance-based metrics, others to use temporal-decay metrics, and others to use binary "yes/no by contributor" metrics. How to ensure that everyone's using the same metric... is another layer of problems to solve. :)
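A sketch of the "policy object" idea, with the distance-based metric as one pluggable implementation. The interface, the doubling fall-off (1 signer at distance 1, 2 at distance 2, 4 at distance 3, ...), and all names here are hypothetical:

```python
from abc import ABC, abstractmethod

class AppendPolicy(ABC):
    """Black-box policy: given facts about a contribution, accept or reject."""
    @abstractmethod
    def accepts(self, signer_distances: list[int]) -> bool: ...

class DistancePolicy(AppendPolicy):
    """Trust falls off with distance from a trusted root: each extra hop
    doubles the number of co-signers required (configurable per chain)."""
    def __init__(self, falloff_base: int = 2):
        self.falloff_base = falloff_base  # could be changed later by quorum vote

    def required_signers(self, distance: int) -> int:
        return self.falloff_base ** (distance - 1)  # 1, 2, 4, ... at distance 1, 2, 3

    def accepts(self, signer_distances: list[int]) -> bool:
        # Accept if the signers at some distance d meet the requirement for d
        # (signers closer than d also count toward it).
        for d in sorted(set(signer_distances)):
            eligible = sum(1 for x in signer_distances if x <= d)
            if eligible >= self.required_signers(d):
                return True
        return False

policy: AppendPolicy = DistancePolicy()
print(policy.accepts([1]))        # True: one distance-1 signer suffices
print(policy.accepts([3, 3, 3]))  # False: distance 3 needs 4 signers
```

A temporal-decay or binary yes/no policy would just be another `AppendPolicy` implementation, chosen when the chain is created.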
My idea is this: users have to pay for publishing content to the blockchain. If they want to generate spam, they have to pay for it, and the more they spam, the fewer coins they will have left in their account.
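As a sketch, the economics could be as simple as charging per byte before accepting data; the fee schedule and account model here are invented for illustration:

```python
FEE_PER_BYTE = 1  # hypothetical fee, in the chain's smallest coin unit

def try_publish(balances: dict[str, int], account: str, payload: bytes) -> bool:
    """Deduct the publishing fee up front; spamming drains the spammer's balance."""
    fee = FEE_PER_BYTE * len(payload)
    if balances.get(account, 0) < fee:
        return False  # can't afford to publish (or to spam)
    balances[account] -= fee
    return True
```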
In order to ensure a signatory isn't an adversary, implement a form of signatory verification. Make the amount of data a signatory can upload a function of how recently they were verified. Make the number of peers required to become verified around 50%, preventing cliques of adversaries from supporting each other. Signatories can periodically seek re-verification (a sketch follows below).
This would allow for the eventual removal of adversarial peers without significantly impacting the main use cases.
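A minimal sketch of the proposed verification round, assuming a simple majority of currently active peers; the names and bookkeeping are hypothetical:

```python
import time

def verify_signatory(votes_for: int, active_peers: int,
                     verified_at: dict[str, float], signatory: str) -> bool:
    """Mark a signatory verified only if ~50% of active peers vouch for it,
    so a small clique of adversaries can't verify each other."""
    if active_peers == 0:
        return False
    if votes_for / active_peers >= 0.5:
        verified_at[signatory] = time.time()  # recency then feeds the upload quota
        return True
    return False
```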