Merge branch 'master' into 3593-rc-delegation-pool-wallet-support
sgerbino committed Feb 18, 2020
2 parents fd40d47 + 581b1f2 commit bd0c574
Showing 30 changed files with 1,024 additions and 330 deletions.
6 changes: 3 additions & 3 deletions contrib/config-for-ahnode.ini
@@ -44,10 +44,10 @@ follow-max-feed-size = 500
# follow-start-feeds = 0

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
market-history-bucket-size = [15,60,300,3600,86400]
market-history-bucket-size = [15,60,300,3600,21600]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 5760)
market-history-buckets-per-size = 5760
# How far back in time to track market history, measured in seconds (default: 604800)
market-history-track-time = 604800

# The local IP address and port to listen for incoming connections.
# p2p-endpoint =
6 changes: 3 additions & 3 deletions contrib/config-for-broadcaster.ini
@@ -44,10 +44,10 @@ follow-max-feed-size = 500
# follow-start-feeds = 0

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
market-history-bucket-size = [15,60,300,3600,86400]
market-history-bucket-size = [15,60,300,3600,21600]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 5760)
market-history-buckets-per-size = 5760
# How far back in time to track market history, measured in seconds (default: 604800)
market-history-track-time = 604800

# The local IP address and port to listen for incoming connections.
# p2p-endpoint =
6 changes: 3 additions & 3 deletions contrib/config-for-docker.ini
@@ -54,10 +54,10 @@ follow-max-feed-size = 500
# follow-start-feeds = 0

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
market-history-bucket-size = [15,60,300,3600,86400]
market-history-bucket-size = [15,60,300,3600,21600]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 5760)
market-history-buckets-per-size = 5760
# How far back in time to track market history, measured in seconds (default: 604800)
market-history-track-time = 604800

# The local IP address and port to listen for incoming connections.
# p2p-endpoint =
6 changes: 3 additions & 3 deletions contrib/fullnode.config.ini
@@ -44,10 +44,10 @@ follow-max-feed-size = 500
# follow-start-feeds = 0

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
market-history-bucket-size = [15,60,300,3600,86400]
market-history-bucket-size = [15,60,300,3600,21600]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 5760)
market-history-buckets-per-size = 5760
# How far back in time to track market history, measured in seconds (default: 604800)
market-history-track-time = 604800

# The local IP address and port to listen for incoming connections.
# p2p-endpoint =
6 changes: 3 additions & 3 deletions contrib/fullnode.opswhitelist.config.ini
@@ -44,10 +44,10 @@ follow-max-feed-size = 500
# follow-start-feeds = 0

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
market-history-bucket-size = [15,60,300,3600,86400]
market-history-bucket-size = [15,60,300,3600,21600]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 5760)
market-history-buckets-per-size = 5760
# How far back in time to track market history, measured in seconds (default: 604800)
market-history-track-time = 604800

# The local IP address and port to listen for incoming connections.
# p2p-endpoint =
168 changes: 168 additions & 0 deletions doc/devs/delegation_pools.md
@@ -0,0 +1,168 @@
## Delegation Pools

Delegation pools primarily address the shortcomings of the "push" approach of SP delegations. In order to "fund" new accounts, a surplus of SP must be delegated. This incentivizes abuse of faucets and ties up significant amounts of SP in delegations to accounts that may not be using it. The feature has a secondary benefit: it allows users to pool RCs to fund actions for SMTs. SMTs are unique entities in Steem in that actions are performed on behalf of an SMT (similar to the implicit "blockchain" entity used for comment rewards, inflation, etc.), yet SMTs are user created and thus must be rate limited by some mechanism, or else the blockchain would be open to DoS exploits/attacks.

The basic structure looks like this:

```
------------- -------------
| Account A | \ / | Account D |
------------- \ (in del) / -------------
\ /
------------- \ ======== / -------------
| Account B | ---- | Pool | ---- | Account E |
------------- / ======== \ -------------
/ \
------------- / (out del) \ -------------
| Account C | / \ | Account F |
------------- -------------
```

Furthermore, each account can receive from multiple pools.

```
==========
| Pool A | \
========== \ (out del)
\
========== \ -----------
| Pool B | ---- | Account |
========== / -----------
/
========== /
| Pool C | /
==========
```

Unlike SP delegations, which are limitless so long as the account has sufficient SP, outgoing RC delegations are limited in number. This is a consequence of the interactions between soft and hard consensus.

For example, if Alice has 100 SP and delegates 75 RC to Pool A, then Alice still votes with 100 SP but has only 25 RC to fund her actions. If Alice then delegates 50 SP to Bob, she votes with only 50 SP but has delegated a total of 125 RC! That is a problem. Currently, RCs respect SP delegations. We could remove that behavior, fixing this problem but breaking existing use cases for delegations.

It is worth noting that existing delegations could be replaced with a pool of the following form:

```
--------- ================== -------
| Alice | ---- | Anonymous Pool | ---- | Bob |
--------- ================== -------
```

Instead, we need to create a delegation priority. SP delegations will take priority over RC delegations to preserve existing behavior. Continuing with the previous example, when Alice delegates 50 SP to Bob, her delegation to Pool A must implicitly decrease. The in delegation must track both the desired RCs and the actual RCs delegated, and the actual RCs delegated must decrease to 50 so that Alice isn't oversubscribing her RCs. If Alice then realizes her mistake, she can choose either to decrease her RC delegation to Pool A or to decrease her SP delegation to Bob. Either would fix the problem.

If she decides to decrease her SP delegation to Bob, then we need to adjust the actual RCs for her RC delegation to Pool A back up. For this simple case, this does not look bad, but if Alice delegates RCs to many pools, these calculations can become quite expensive. RCs are, at their core, a system to prevent users from spamming costly actions on the blockchain. If the system that tracks this (RCs) is itself costly, then the system is not actually protecting the blockchain.
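
The priority rule above can be sketched as follows (a simplified model assuming, for illustration only, that 1 SP generates 1 RC; names and function signatures are hypothetical):

```python
def actual_pool_rc(sp_balance, sp_delegated_out, desired_rc):
    """SP delegations take priority over RC delegations: the RC actually
    delegated to a pool is capped by the RC remaining after outgoing SP
    delegations (illustrative 1 SP = 1 RC model)."""
    available_rc = max(sp_balance - sp_delegated_out, 0)
    return min(desired_rc, available_rc)

# Alice: 100 SP, desires 75 RC delegated to Pool A
assert actual_pool_rc(100, 0, 75) == 75   # before delegating any SP to Bob
assert actual_pool_rc(100, 50, 75) == 50  # after delegating 50 SP to Bob
```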

This is further exacerbated by the calculations that determine how RCs are removed from the delegations. An easy solution is to cap the number of delegations at a reasonable amount. Delegations are currently capped at 40, which should be sufficient for nearly every use case. This can easily be changed in the future because the RC plugin is soft consensus.

Likewise, each account can only have a limited number of in delegations. When an account charges RCs, it needs to know which pools to charge. An account can have three in delegations. To prevent an unknown pool from occupying a slot that a user does not wish to be occupied, those delegations are broken into slots that the user whitelists.

> - By default, the allowed delegator of Slot 0 is the creator.
> - By default, the allowed delegator of Slot 1 is the account recovery partner.
> - By default, the allowed delegator of Slot 2 is either the creator or the account recovery partner.
> - The (end-)user, or the user the slot is currently accessible to, can execute a (custom) operation to override the default and change that outdel slot's allowed delegator (ejecting any delegation currently using that outdel slot).
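
A minimal sketch of the default slot whitelist described above (account names are hypothetical, and the slot table is an illustration, not the plugin's data structure):

```python
def default_slot_delegators(creator, recovery_partner):
    """Default allowed delegators for the three in-delegation slots,
    per the rules above. The user can later override any slot."""
    return {
        0: {creator},
        1: {recovery_partner},
        2: {creator, recovery_partner},
    }

slots = default_slot_delegators("alice", "bob")
assert slots[2] == {"alice", "bob"}  # slot 2 allows either by default
```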

Each pool is managed using a simple oversubscription model. The sum of out dels may surpass the actual RCs of the pool by as much as the pool owner desires. Each account receiving from the pool cannot use more than the amount specified by the pool owner, regardless of how many reserve RCs the pool contains, and if the pool runs out of RCs altogether, then regardless of how many RCs a user has been delegated, they will not be able to use those RCs. It is the responsibility of the pool owner to manage the oversubscription and usage ratios to provide the desired level of service to delegations.

```
(10 RC) (10 RC)
------------- -------------
| Account A | \ / | Account D |
------------- \ / -------------
(5 RC) \ (20 RC) / (10 RC)
------------- \ ======== / -------------
| Account B | ---- | Pool | ---- | Account E |
------------- / ======== \ -------------
(5 RC) / \ (10 RC)
------------- / \ -------------
| Account C | / \ | Account F |
------------- -------------
```

The above example shows how accounts A, B, and C can pool 20 RC together to provide service worth 30 RCs. So long as accounts D, E, and F don't use more than their 10 RC individually and never use more than 20 RC combined, none of the accounts will see any interruption in their ability to transact.
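
The oversubscription rule can be sketched as follows (a simplified model; real accounting uses regenerating manabars, and these names are illustrative):

```python
def can_spend(pool_rc, caps, usage, account, cost):
    """An account may spend RC from an oversubscribed pool only if it stays
    within its own delegated cap AND the pool still has RC to cover it."""
    within_cap = usage.get(account, 0) + cost <= caps[account]
    pool_covers = sum(usage.values()) + cost <= pool_rc
    return within_cap and pool_covers

caps = {"D": 10, "E": 10, "F": 10}  # 30 RC promised from a 20 RC pool
assert can_spend(20, caps, {}, "D", 10)                     # within cap and reserve
assert not can_spend(20, caps, {"D": 10, "E": 10}, "F", 5)  # pool exhausted
assert not can_spend(20, caps, {}, "D", 11)                 # exceeds own cap
```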


## SMT Delegation

Each SMT has a special case pool. To delegate to an SMT, users delegate to a pool whose name is the NAI string of the SMT. That pool is special in that there is a single implicit out del that receives 100% of the pool. The model looks like the following:

```
(10 RC)
-------------
| Account A | \
------------- \
(5 RC) \ (20 RC) (20 RC)
------------- \ ======== -------------
| Account B | ---- | Pool | ---- | SMT |
------------- / ======== -------------
(5 RC) /
------------- /
| Account C | /
-------------
```

Delegations to SMTs are used to power SMT-specific actions, such as token emissions. Without such delegations, these actions will not be included on chain and the SMT will likely be considered "dead". The number of RCs required to support such behavior will likely be relatively low, but SMT creators and communities should be conscious of this requirement and delegate their RCs to the SMTs that they regularly use.

## Managing RC Delegation Pools

Delegation Pools are implemented as a second layer solution, also called "soft consensus". The rc_plugin has defined several of its own operations that can be included in a `custom_json_operation`.

### Delegate To Pool

Delegate SP to a pool.

The delegated SP no longer generates RC for `from_account`; instead, it generates RC for `to_pool`.

Each account is limited to delegating to 40 pools.

```
struct delegate_to_pool_operation
{
account_name_type from_account;
account_name_type to_pool;
asset amount;
extensions_type extensions;
};
```
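
As an illustrative sketch, a client might wrap this operation in a `custom_json_operation` like so (the `"rc"` id string, the JSON layout, and the active-authority requirement are all assumptions, not the plugin's confirmed wire format):

```python
import json

def delegate_to_pool_payload(from_account, to_pool, amount):
    """Build a hypothetical custom_json payload carrying a
    delegate_to_pool operation for the rc plugin."""
    op = ["delegate_to_pool", {
        "from_account": from_account,
        "to_pool": to_pool,
        "amount": amount,
        "extensions": [],
    }]
    return {
        "id": "rc",                       # assumed plugin id
        "required_auths": [from_account], # assumed active authority
        "required_posting_auths": [],
        "json": json.dumps([op]),
    }

payload = delegate_to_pool_payload("alice", "pool.alpha", "75.000000 VESTS")
assert json.loads(payload["json"])[0][0] == "delegate_to_pool"
```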

### Delegate From Pool

Delegate SP from a pool to a user.

What is actually delegated is DRC (delegated RC). The from_pool account is allowed to "freely print" DRC from its own pool. This is equal to the RC that the pool user is allowed to consume. Pools may be oversubscribed (i.e. total DRC output is greater than RC input).

Executing this operation effectively _replaces_ the effect of the previous `delegate_drc_from_pool_operation` with the same `(to_account, to_slot)` pair.

When deciding whether the affected manabar sections are full or empty, the decision is always made in favor of `to_account`:

- When increasing delegation, new DRC is always created to fill the new manabar section.
- When removing delegation, DRC may be destroyed, but only to the extent necessary for the account not to exceed its newly reduced maximum.
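
A sketch of that manabar rule, simplified to a single integer of current DRC (real accounting uses regenerating manabars; the function is illustrative):

```python
def adjust_drc(current, old_max, new_max):
    """Resolve manabar changes in favor of to_account: an increased
    section starts full; a decrease destroys only the excess DRC."""
    if new_max > old_max:
        return current + (new_max - old_max)  # newly created DRC fills the new section
    return min(current, new_max)              # destroy DRC only down to the new max

assert adjust_drc(5, 10, 20) == 15  # delegation raised: 10 new DRC created
assert adjust_drc(8, 10, 6) == 6    # delegation lowered: excess destroyed
assert adjust_drc(3, 10, 6) == 3    # already under the new max: untouched
```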

```
struct delegate_drc_from_pool_operation
{
account_name_type from_pool;
account_name_type to_account;
int8_t to_slot = 0;
asset_symbol_type asset_symbol;
int64_t drc_max_mana = 0;
extensions_type extensions;
};
```

### Set Slot Delegator

The purpose of `set_slot_delegator_operation` is to set the delegator allowed to delegate to a `(to_account, to_slot)` pair.

To delegate to a slot, from_pool must be considered the delegator of the slot. Normally the delegator is the account creator. The `set_slot_delegator_operation` allows a new delegator to be appointed by either the existing delegator, the account itself, or (in the case of slot 1) the slot may also be set by the account's recovery partner.

```
struct set_slot_delegator_operation
{
account_name_type from_pool;
account_name_type to_account;
int8_t to_slot = 0;
account_name_type signer;
extensions_type extensions;
};
```
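
The signer validation described above might be sketched as follows (account names are hypothetical, and this is a sketch of the stated rules, not the evaluator's actual code):

```python
def can_set_slot_delegator(signer, to_slot, current_delegator, account, recovery_partner):
    """Per the text: the existing delegator or the account itself may appoint
    a new delegator for any slot; for slot 1 the recovery partner may too."""
    allowed = {current_delegator, account}
    if to_slot == 1:
        allowed.add(recovery_partner)
    return signer in allowed

assert can_set_slot_delegator("carol", 0, "alice", "carol", "bob")  # the account itself
assert can_set_slot_delegator("bob", 1, "alice", "carol", "bob")    # recovery partner, slot 1 only
assert not can_set_slot_delegator("bob", 0, "alice", "carol", "bob")
```
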
6 changes: 3 additions & 3 deletions doc/example_config.ini
@@ -48,10 +48,10 @@ follow-max-feed-size = 500
# follow-start-feeds = 0

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
market-history-bucket-size = [15,60,300,3600,86400]
market-history-bucket-size = [15,60,300,3600,21600]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 5760)
market-history-buckets-per-size = 5760
# How far back in time to track market history, measured in seconds (default: 604800)
market-history-track-time = 604800

# The local IP address and port to listen for incoming connections.
# p2p-endpoint =
2 changes: 1 addition & 1 deletion libraries/chain/include/steem/chain/util/smt_token.hpp
@@ -22,7 +22,7 @@ bool schedule_next_contributor_payout( database& db, const asset_symbol_type& a
bool schedule_founder_payout( database& db, const asset_symbol_type& a );

share_type payout( database& db, const asset_symbol_type& symbol, const account_object& account, const std::vector< contribution_payout >& payouts );
fc::optional< share_type > steem_hard_cap( database& db, const asset_symbol_type& a );
fc::optional< share_type > get_ico_steem_hard_cap( database& db, const asset_symbol_type& a );
std::size_t ico_tier_size( database& db, const asset_symbol_type& symbol );
void remove_ico_objects( database& db, const asset_symbol_type& symbol );

39 changes: 36 additions & 3 deletions libraries/chain/smt_evaluator.cpp
@@ -169,21 +169,54 @@ void smt_setup_evaluator::do_apply( const smt_setup_operation& o )
FC_ASSERT( _db.has_hardfork( STEEM_SMT_HARDFORK ), "SMT functionality not enabled until hardfork ${hf}", ("hf", STEEM_SMT_HARDFORK) );

const smt_token_object& _token = common_pre_setup_evaluation( _db, o.symbol, o.control_account );
share_type hard_cap;

if ( o.steem_satoshi_min > 0 )
{
auto possible_hard_cap = util::smt::ico::steem_hard_cap( _db, o.symbol );
auto possible_hard_cap = util::smt::ico::get_ico_steem_hard_cap( _db, o.symbol );

FC_ASSERT( possible_hard_cap.valid(),
"An SMT with a Steem Satoshi Minimum of ${s} cannot succeed without an ICO tier.", ("s", o.steem_satoshi_min) );

share_type hard_cap = *possible_hard_cap;
hard_cap = *possible_hard_cap;

FC_ASSERT( o.steem_satoshi_min <= hard_cap,
"The Steem Satoshi Minimum must be less than the hard cap. Steem Satoshi Minimum: ${s}, Hard Cap: ${c}",
("s", o.steem_satoshi_min)("c", hard_cap) );
}

auto ico_tiers = _db.get_index< smt_ico_tier_index, by_symbol_steem_satoshi_cap >().equal_range( _token.liquid_symbol );
share_type prev_tier_cap = 0;
share_type total_tokens = 0;

for( ; ico_tiers.first != ico_tiers.second; ++ico_tiers.first )
{
fc::uint128_t max_token_units = ( hard_cap / ico_tiers.first->generation_unit.steem_unit_sum() ).value * o.min_unit_ratio;
FC_ASSERT( max_token_units.hi == 0 && max_token_units.lo <= uint64_t( std::numeric_limits<int64_t>::max() ),
"Overflow detected in ICO tier '${t}'", ("t", *ico_tiers.first) );

// This is done with share_types, which are safe< int64_t >. Overflow is detected but does not provide
// a useful error message. Do the checks manually to provide actionable information.
fc::uint128_t new_tokens = ( ( ico_tiers.first->steem_satoshi_cap - prev_tier_cap ).value / ico_tiers.first->generation_unit.steem_unit_sum() );

new_tokens *= o.min_unit_ratio;
FC_ASSERT( new_tokens.hi == 0 && new_tokens.lo <= uint64_t( std::numeric_limits<int64_t>::max() ),
"Overflow detected in ICO tier '${t}'", ("t", *ico_tiers.first) );

new_tokens *= ico_tiers.first->generation_unit.token_unit_sum();
FC_ASSERT( new_tokens.hi == 0 && new_tokens.lo <= uint64_t( std::numeric_limits<int64_t>::max() ),
"Overflow detected in ICO tier '${t}'", ("t", *ico_tiers.first) );

total_tokens += new_tokens.to_int64();
prev_tier_cap = ico_tiers.first->steem_satoshi_cap;
}

FC_ASSERT( total_tokens < STEEM_MAX_SHARE_SUPPLY,
"Max token supply for ${n} cannot exceed ${m}. Calculated: ${c}",
("n", _token.liquid_symbol)
("m", STEEM_MAX_SHARE_SUPPLY)
("c", total_tokens) );

_db.modify( _token, [&]( smt_token_object& token )
{
token.phase = smt_phase::setup_completed;
@@ -385,7 +418,7 @@ void smt_contribute_evaluator::do_apply( const smt_contribute_operation& o )
FC_ASSERT( token->phase >= smt_phase::ico, "SMT has not begun accepting contributions" );
FC_ASSERT( token->phase < smt_phase::ico_completed, "SMT is no longer accepting contributions" );

auto possible_hard_cap = util::smt::ico::steem_hard_cap( _db, o.symbol );
auto possible_hard_cap = util::smt::ico::get_ico_steem_hard_cap( _db, o.symbol );
FC_ASSERT( possible_hard_cap.valid(), "The specified token does not feature an ICO" );
share_type hard_cap = *possible_hard_cap;

12 changes: 6 additions & 6 deletions libraries/chain/util/smt_token.cpp
@@ -206,20 +206,20 @@ static payout_vars calculate_payout_vars( database& db, const smt_ico_object& ic
{
payout_vars vars;

auto hard_cap = util::smt::ico::steem_hard_cap( db, ico.symbol );
auto hard_cap = util::smt::ico::get_ico_steem_hard_cap( db, ico.symbol );
FC_ASSERT( hard_cap.valid(), "Unable to find ICO hard cap." );
share_type steem_hard_cap = *hard_cap;

vars.steem_units_sent = contribution_amount / generation_unit.steem_unit_sum();
auto contributed_steem_units = ico.contributed.amount / generation_unit.steem_unit_sum();
auto total_contributed_steem_units = ico.contributed.amount / generation_unit.steem_unit_sum();
auto steem_units_hard_cap = steem_hard_cap / generation_unit.steem_unit_sum();

auto generated_token_units = std::min(
contributed_steem_units * ico.max_unit_ratio,
auto total_generated_token_units = std::min(
total_contributed_steem_units * ico.max_unit_ratio,
steem_units_hard_cap * ico.min_unit_ratio
);

vars.unit_ratio = generated_token_units / contributed_steem_units;
vars.unit_ratio = total_generated_token_units / total_contributed_steem_units;

return vars;
}
@@ -443,7 +443,7 @@ bool schedule_founder_payout( database& db, const asset_symbol_type& a )
return action_scheduled;
}

fc::optional< share_type > steem_hard_cap( database& db, const asset_symbol_type& a )
fc::optional< share_type > get_ico_steem_hard_cap( database& db, const asset_symbol_type& a )
{
const auto& idx = db.get_index< smt_ico_tier_index, by_symbol_steem_satoshi_cap >();

@@ -282,7 +282,7 @@ void blockchain_statistics_plugin_impl::on_block( const signed_block& b )
for( const auto& bucket : _tracked_buckets )
{
auto open = fc::time_point_sec( ( db.head_block_time().sec_since_epoch() / bucket ) * bucket );
auto itr = bucket_idx.find( boost::make_tuple( bucket, open ) );
auto itr = bucket_idx.find( boost::make_tuple( SBD_SYMBOL, bucket, open ) );

if( itr == bucket_idx.end() )
{