peerplays_migrated/libraries/chain/db_witness_schedule.cpp
pbattu123 0b280882af
Merge beatrice (GPOS changes) with master (#270)
* Created unit test for #325

* remove needless find()

* issue - 154: Don't allow to vote when vesting balance is 0

* Increase block creation timeout to 2500ms

* increase delay for node connection

* remove cache from cli get_account

* add cli tests framework

* Adjust newly merged code to new API

* Merged changes from Bitshares PR 1036

* GRPH-76 - Short-cut long sequences of missed blocks

Fixes database::update_global_dynamic_data to speed up counting missed blocks.
(This also fixes a minor issue with counting - the previous algorithm would skip missed blocks for the witness who signed the first block after the gap.)
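
The idea, in sketch form (simplified; the real implementation is database::update_witness_missed_blocks in the code below, and increment_total_missed is a hypothetical helper):

// Only attribute misses per witness when the gap is shorter than one
// full round of slots; a longer gap means every witness missed anyway,
// so the potentially huge per-slot walk can be skipped entirely.
uint32_t missed = get_slot_at_time( b.timestamp ) - 1;
if( missed < witnesses.size() )
   for( uint32_t i = 0; i < missed; ++i )
      increment_total_missed( get_scheduled_witness( i + 1 ) );  // hypothetical helper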

* Improved resilience of block database against corruption

* Moved reindex logic into database / chain_database, make use of additional blocks in block_database

Fixed tests wrt db.open

* Enable undo + fork database for final blocks in a replay

Don't remove blocks from the block db when popping blocks, handle an edge case in replay wrt fork_db, adapted unit tests

* Log starting block number of replay

* Prevent unsigned integer underflow

* Fixed lock detection

* Don't leave _data_dir empty if db is locked

* Writing the object_database is now almost atomic

* Improved consistency check for block_log

* Cut back block_log index file if inconsistent

* Fixed undo_database

* Added test case for broken merge on empty undo_db

* exclude second undo_db.enable() call in some cases

* Add missing change

* change bitshares to core in message

* Merge pull request #938 from bitshares/fix-block-storing

Store correct block ID when switching forks

* Fixed integer overflow issue

* Fix for history ID mismatch (Bitshares PR #875)

* Update the FC submodule with the changes for GRPH-4

* Merged Bitshares PR #1462 and compilation fixes

* Support/gitlab (#123)

* Updated gitlab process

* Fix undefined references in cli test

* Updated GitLab CI

* Fix #436 object_database created outside of witness data directory

* supplement more comments on database::_opened variable

* prevent segfault when destructing application obj

* Fixed test failures and compilation issue

* minor performance improvement

* Added comment

* Fix compilation in debug mode

* Fixed duplicate ops returned from get_account_history

* Fixed account_history_pagination test

* Removed unrelated comment

* Update to fixed version of fc

* Skip auth check when pushing self-generated blocks

* Extract public keys before pushing a transaction

* Dereference chain_database shared_ptr

* Updated transaction::signees to mutable, and:

* updated get_signature_keys() to return a const reference,
* get_signature_keys() will update signees on first call (see the sketch after this list),
* modified test cases and wallet.cpp accordingly,
* no longer construct a new signed_transaction object before pushing
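
A minimal sketch of the caching pattern described above (simplified; the member and method names follow graphene conventions, but this is not the exact upstream code):

struct signed_transaction : public transaction
{
   vector<signature_type>            signatures;
   // mutable cache so the const getter can fill it on first use
   mutable flat_set<public_key_type> signees;

   const flat_set<public_key_type>& get_signature_keys( const chain_id_type& chain_id )const
   {
      if( signees.empty() )   // first call: do the expensive key recovery
         for( const auto& sig : signatures )
            signees.insert( fc::ecc::public_key( sig, sig_digest( chain_id ), true ) );
      return signees;         // later calls reuse the cached set
   }
};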

* Added get_asset_count API

* No longer extract public keys before pushing a trx

and removed the unused newly added constructor and _get_signature_keys() function from the signed_transaction struct

* changes to withdraw_vesting feature(for both cdd and GPOS)

* Comments update

* update to GPOS hardfork ref

* Remove leftover comment from merge

* fix for get_vesting_balance API call

* braces update

* Allow sufficient space for new undo_session

* Throw for deep nesting

* node.cpp: Check for an attacker/buggy client before updating item ids

The peer is either an attacker or buggy, which means item_hashes_received
is not correct.

Move the check before updating the item ids to save some time in this
case (pattern sketched below).
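
A rough sketch of the reordering (hypothetical helper and variable names, not the actual node.cpp code):

// Validate the peer's response before touching our own bookkeeping.
if( item_hashes_received.size() > max_item_count )      // attacker or buggy peer
{
   disconnect_from_peer( originating_peer );            // bail out before mutating state
   return;
}
update_item_ids( item_hashes_received );                // safe to update now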

* Create .gitlab-ci.yml

* Added cli_test to CI

* fixing build errors (#150)

* fixing build errors

vest type correction

* fixes 

new Dockerfile

* vesting_balance_type correction

vesting_balance_type changed to normal

* gcc5 support to Dockerfile

* use random port numbers in app_test (#154)

* Changes to compile with GCC 7 (Ubuntu 18.04)

* proposal fail_reason bug fixed (#157)

* Added Sonarcloud code_quality to CI (#159)

* Added sonarcloud analysis (#158)

* changes to have separate methods and a single withdrawal fee for multiple vest objects

* 163-fix, Return only non-zero vesting balances

* Support/gitlab develop (#168)

* Added code_quality to CI

* Update .gitlab-ci.yml

* Point to PBSA/peerplays-fc commit f13d063 (#167)

* [GRPH-3] Additional cli tests (#155)

* Additional cli tests

* Compatible with latest fc changes

* Fixed Spacing issues

* [GRPH-106] Added voting tests (#136)

* Added more voting tests

* Added additional option

* Adjust p2p log level (#180)

* merge gpos to develop (#186)

* Revert "Revert "GPOS protocol""

This reverts commit 67616417b7.

* add new line needed to gpos hardfork file

* comment out cli_vote_for_2_witnesses temporarily until refactor or delete

* fix gpos tests

* fix gitlab-ci conflict

* Fixed few error messages

* error message corrections at other places

* Updated FC repository to peerplays-network/peerplays-fc (#189)

Point to fc commit hash 6096e94 [latest-fc branch]

* Project name update in Doxyfile (#146)

* changes to allow user to vote in each sub-period

* Fixed GPOS vesting factor issue when proxy is set

* Added unit test for proxy voting

* Review changes

* changes to update last voting time

* resolve merge conflict

* unit test changes and also separated GPOS test suite

* delete unused variables

* removed witness check

* eliminate time gap between two consecutive vesting periods

* deleted GPOS specific test suite and updated gpos tests

* updated GPOS hf

* Fixed dividend distribution issue and added test case

* fix flag

* clean newlines gpos_tests

* adapt gpos_tests to changed flag

* Fix to roll in GPOS rules, carry votes from 6th sub-period

* check was already modified

* comments updated

* updated comments for the benefit of the reviewer

* Added token symbol name in error messages

* Added token symbol name in error messages (#204)

* case 1: Fixed last voting time issue

* get_account bug fixed

* Fixed flag issue

* Fixed spelling issue

* remove unneeded gcc5 changes to Dockerfile

* GRPH-134 - High CPU issue, websocket changes (#213)

* update submodule branch to refer to the latest commit on latest-fc branch (#214)

* Improve account maintenance performance (#130)

* Improve account maintenance performance

* merge fixes

* Fixed merge issue

* Fixed indentations and extra ';'

* Update CI for syncing gitmodules (#216)

* Added logging for the old update_expired_feeds bug

The old bug is https://github.com/cryptonomex/graphene/issues/615.

Due to the bug, `update_median_feeds()` and `check_call_orders()`
will be called when a feed is not actually expired. Normally this
should not affect consensus, since calling them should not change
any data in the state.

However, the logging indicates that `check_call_orders()` did
change some data under certain circumstances, specifically when
the multiple limit order matching issue (#453) occurred in the same
block (a sketch of this kind of guard logging follows).
* https://github.com/bitshares/bitshares-core/issues/453
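
A sketch of the guard-logging idea (hypothetical; the real change instruments the feed-expiry path in the database, and the field and helper names here are only illustrative):

// Snapshot a value that a supposedly-no-op call should leave untouched,
// make the call, and warn if the state changed anyway.
const price before = bitasset.current_feed.settlement_price;
update_feed_and_calls( bitasset );   // hypothetical stand-in for the real calls
if( bitasset.current_feed.settlement_price != before )
   wlog( "feed changed for an asset whose feed was not actually expired: ${a}",
         ("a", bitasset.asset_id) );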

* Minor performance improvement for price::is_null()

* Use static refs in db_getter for immutable objects

* Minor performance improvement for db_maint

* Minor code updates for asset_evaluator.cpp

* changed an `assert()` to `FC_ASSERT()` (see the illustration after this list)
* replaced one `db.get(asset_id_type())` with `db.get_core_asset()`
* capture only required variables for lambda
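
For context, the practical difference between the two assertion macros (a generic illustration with a made-up condition, not the changed line itself):

// assert(): checked only in non-NDEBUG builds; aborts the whole process.
assert( next_id == asset_id_type() );
// FC_ASSERT(): checked in all builds; throws an fc::exception that the
// caller (e.g. the transaction machinery) can catch and report.
FC_ASSERT( next_id == asset_id_type(), "unexpected asset id" );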

* Improve update_expired_feeds performance #1093

* Change static refs to member pointers of db class

* Added getter for witness schedule object

* Added getter for core dynamic data object

* Use getters

* Removed unused variable

* Add comments for update_expired_feeds in db_block

* Minor refactor of asset_create_evaluator::do_apply()

* Added FC_ASSERT for dynamic data id of core asset

* Added header inclusions in db_management.cpp

* fix global objects usage during replay

* Logging config parsing issue

* added new files

* compilation fix

* Simplified code in database::pay_workers()

* issue with withdrawal

* Added unit test for empty account history

* set extensions default values

* Update GPOS hardfork date and don't allow GPOS features before hardfork time

* refer to latest commit of latest-fc branch (#224)

* account name or id support in all database APIs

* asset id or name support in all asset APIs

* Fixed compilation issues

* Fixed alignment issues

* Externalized some API templates

* Externalize serialization of blocks, tx, ops

* Externalized db objects

* Externalized genesis serialization

* Externalized serialization in protocol library

* Undo superfluous change

* remove default value for extension parameter

* fix compilation issues

* GRPH-46-Quit_command_cliwallet

* removed multiple function definition

* Fixed chain parameter update proposal issue

* Move GPOS withdraw logic to have a single transaction (also a single fee) and update API

* Added log for authorization failure of proposal operations

* Votes consideration on GPOS activation

* bump fc version

* fix gpos tests

* Bump fc version

* Updated gpos/voting_tests

* Fixed withdraw vesting bug

* Added unit test

* Update hardfork date for TESTNET, sync fc module and update logs

* avoid wlog as it is filling up space

* Beatrice hot fix (sync issue fix)

* gpos tests fix

* Set hardfork date to Jan 5th on TESTNET

Co-authored-by: Peter Conrad <github.com@quisquis.de>
Co-authored-by: John M. Jones <jmjatlanta@gmail.com>
Co-authored-by: obucinac <obucinac@users.noreply.github.com>
Co-authored-by: Bobinson K B <bobinson@gmail.com>
Co-authored-by: Alfredo Garcia <oxarbitrage@gmail.com>
Co-authored-by: Miha Čančula <miha@noughmad.eu>
Co-authored-by: Abit <abitmore@users.noreply.github.com>
Co-authored-by: Roshan Syed <r.syed@pbsa.info>
Co-authored-by: Sandip Patel <sandip@knackroot.com>
Co-authored-by: RichardWeiYang <richard.weiyang@gmail.com>
Co-authored-by: gladcow <jahr@yandex.ru>
Co-authored-by: satyakoneru <satyakoneru.iiith@gmail.com>
2020-02-07 21:23:08 +05:30

/*
 * Copyright (c) 2015 Cryptonomex, Inc., and contributors.
 *
 * The MIT License
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
#include <graphene/chain/database.hpp>
#include <graphene/chain/global_property_object.hpp>
#include <graphene/chain/witness_object.hpp>
#include <graphene/chain/witness_schedule_object.hpp>

namespace graphene { namespace chain {

using boost::container::flat_set;

witness_id_type database::get_scheduled_witness( uint32_t slot_num )const
{
   witness_id_type wid;
   const global_property_object& gpo = get_global_properties();
   if( gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SHUFFLED_ALGORITHM )
   {
      const dynamic_global_property_object& dpo = get_dynamic_global_properties();
      const witness_schedule_object& wso = get_witness_schedule_object();
      uint64_t current_aslot = dpo.current_aslot + slot_num;
      return wso.current_shuffled_witnesses[ current_aslot % wso.current_shuffled_witnesses.size() ];
   }
   if( gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SCHEDULED_ALGORITHM &&
       slot_num != 0 )
   {
      const witness_schedule_object& wso = get_witness_schedule_object();
      // ask the near scheduler who goes in the given slot
      bool slot_is_near = wso.scheduler.get_slot( slot_num-1, wid );
      if( !slot_is_near )
      {
         // if the near scheduler doesn't know, we have to extend it to
         // a far scheduler.
         // n.b. instantiating it is slow, but block gaps long enough to
         // need it are likely pretty rare.
         witness_scheduler_rng far_rng( wso.rng_seed.begin(), GRAPHENE_FAR_SCHEDULE_CTR_IV );
         far_future_witness_scheduler far_scheduler =
            far_future_witness_scheduler( wso.scheduler, far_rng );
         if( !far_scheduler.get_slot( slot_num-1, wid ) )
         {
            // no scheduled witness -- somebody set up us the bomb
            // n.b. this code path is impossible, the present
            // implementation of far_future_witness_scheduler
            // returns true unconditionally
            assert( false );
         }
      }
   }
   return wid;
}
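
/**
 * Scheduled production time of the given slot relative to the head block;
 * slot_num == 0 returns the epoch timestamp as a sentinel, and slot 1 is
 * the time of the next block after the head.
 */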
fc::time_point_sec database::get_slot_time( uint32_t slot_num )const
{
   if( slot_num == 0 )
      return fc::time_point_sec();

   auto interval = block_interval();
   const dynamic_global_property_object& dpo = get_dynamic_global_properties();

   if( head_block_num() == 0 )
   {
      // n.b. first block is at genesis_time plus one block interval
      fc::time_point_sec genesis_time = dpo.time;
      return genesis_time + slot_num * interval;
   }

   int64_t head_block_abs_slot = head_block_time().sec_since_epoch() / interval;
   fc::time_point_sec head_slot_time( head_block_abs_slot * interval );

   const global_property_object& gpo = get_global_properties();

   if( dpo.dynamic_flags & dynamic_global_property_object::maintenance_flag )
      slot_num += gpo.parameters.maintenance_skip_slots;

   // "slot 0" is head_slot_time
   // "slot 1" is head_slot_time,
   //   plus maint interval if head block is a maint block
   //   plus block interval if head block is not a maint block
   return head_slot_time + (slot_num * interval);
}
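
/**
 * Inverse of get_slot_time(): the highest slot number whose scheduled
 * time is at or before 'when', or 0 if 'when' precedes slot 1.
 */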
uint32_t database::get_slot_at_time( fc::time_point_sec when )const
{
   fc::time_point_sec first_slot_time = get_slot_time( 1 );
   //@ROL std::cout << "@get_slot_at_time " << when.to_iso_string() << " " << first_slot_time.to_iso_string() << "\n";
   if( when < first_slot_time )
      return 0;
   return (when - first_slot_time).to_seconds() / block_interval() + 1;
}
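
/**
 * Shuffled-algorithm scheduler: once per round (every
 * active_witnesses.size() blocks) rebuild current_shuffled_witnesses
 * with a Fisher-Yates shuffle driven by an xorshift-style mix of the
 * head block time.
 */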
void database::update_witness_schedule()
{
   const witness_schedule_object& wso = get_witness_schedule_object();
   const global_property_object& gpo = get_global_properties();

   if( head_block_num() % gpo.active_witnesses.size() == 0 )
   {
      modify( wso, [&]( witness_schedule_object& _wso )
      {
         _wso.current_shuffled_witnesses.clear();
         _wso.current_shuffled_witnesses.reserve( gpo.active_witnesses.size() );

         for( const witness_id_type& w : gpo.active_witnesses )
            _wso.current_shuffled_witnesses.push_back( w );

         auto now_hi = uint64_t(head_block_time().sec_since_epoch()) << 32;
         for( uint32_t i = 0; i < _wso.current_shuffled_witnesses.size(); ++i )
         {
            /// High performance random generator
            /// http://xorshift.di.unimi.it/
            uint64_t k = now_hi + uint64_t(i)*2685821657736338717ULL;
            k ^= (k >> 12);
            k ^= (k << 25);
            k ^= (k >> 27);
            k *= 2685821657736338717ULL;

            uint32_t jmax = _wso.current_shuffled_witnesses.size() - i;
            uint32_t j = i + k%jmax;
            std::swap( _wso.current_shuffled_witnesses[i],
                       _wso.current_shuffled_witnesses[j] );
         }
      });
   }
}
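
/**
 * List the witnesses already assigned by the near scheduler, in slot order.
 */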
vector<witness_id_type> database::get_near_witness_schedule()const
{
   const witness_schedule_object& wso = get_witness_schedule_object();

   vector<witness_id_type> result;
   result.reserve( wso.scheduler.size() );

   uint32_t slot_num = 0;
   witness_id_type wid;
   while( wso.scheduler.get_slot( slot_num++, wid ) )
      result.emplace_back( wid );

   return result;
}

void database::update_witness_schedule(const signed_block& next_block)
{
   auto start = fc::time_point::now();
   const global_property_object& gpo = get_global_properties();
   const witness_schedule_object& wso = get_witness_schedule_object();

   uint32_t schedule_needs_filled = gpo.active_witnesses.size();
   uint32_t schedule_slot = get_slot_at_time( next_block.timestamp );

   // We shouldn't be able to generate _pending_block with timestamp
   // in the past, and incoming blocks from the network with timestamp
   // in the past shouldn't be able to make it this far without
   // triggering FC_ASSERT elsewhere
   assert( schedule_slot > 0 );

   witness_id_type first_witness;
   bool slot_is_near = wso.scheduler.get_slot( schedule_slot-1, first_witness );

   witness_id_type wit;

   const dynamic_global_property_object& dpo = get_dynamic_global_properties();

   assert( dpo.random.data_size() == witness_scheduler_rng::seed_length );
   assert( witness_scheduler_rng::seed_length == wso.rng_seed.size() );

   modify( wso, [&]( witness_schedule_object& _wso )
   {
      _wso.slots_since_genesis += schedule_slot;
      witness_scheduler_rng rng( wso.rng_seed.data, _wso.slots_since_genesis );

      _wso.scheduler._min_token_count = std::max( int(gpo.active_witnesses.size()) / 2, 1 );

      if( slot_is_near )
      {
         uint32_t drain = schedule_slot;
         while( drain > 0 )
         {
            if( _wso.scheduler.size() == 0 )
               break;
            _wso.scheduler.consume_schedule();
            --drain;
         }
      }
      else
      {
         _wso.scheduler.reset_schedule( first_witness );
      }
      while( !_wso.scheduler.get_slot( schedule_needs_filled, wit ) )
      {
         if( _wso.scheduler.produce_schedule( rng ) & emit_turn )
            memcpy( _wso.rng_seed.begin(), dpo.random.data(), dpo.random.data_size() );
      }
      _wso.last_scheduling_block = next_block.block_num();
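      // Record participation: shift in a 1 for the block being applied,
      // then (schedule_slot - 1) zeros for the slots that went unfilled.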
      _wso.recent_slots_filled = (
            (_wso.recent_slots_filled << 1)
            + 1) << (schedule_slot - 1);
   });

   auto end = fc::time_point::now();
   static uint64_t total_time = 0;
   static uint64_t calls = 0;
   total_time += (end - start).count();
   if( ++calls % 1000 == 0 )
      idump( (double(total_time/1000000.0)/calls) );
}
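
/**
 * Count blocks missed between the head block and block 'b' and bump each
 * missing witness's total_missed counter.  Per-witness attribution is
 * skipped when a full round of slots or more was missed (GRPH-76): every
 * witness missed in that case, and walking each slot of a very long gap
 * would be needlessly slow.
 */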
uint32_t database::update_witness_missed_blocks( const signed_block& b )
{
   uint32_t missed_blocks = get_slot_at_time( b.timestamp );
   FC_ASSERT( missed_blocks != 0, "Trying to push double-produced block onto current block?!" );
   missed_blocks--;

   const auto& witnesses = witness_schedule_id_type()(*this).current_shuffled_witnesses;
   if( missed_blocks < witnesses.size() )
      for( uint32_t i = 0; i < missed_blocks; ++i )
      {
         const auto& witness_missed = get_scheduled_witness( i+1 )(*this);
         modify( witness_missed, []( witness_object& w ) {
            w.total_missed++;
         });
      }
   return missed_blocks;
}
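
/**
 * Fraction of the last 128 slots that were filled with blocks, scaled to
 * GRAPHENE_100_PERCENT; recent_slots_filled is a 128-bit bitmap, hence
 * the division by 128.
 */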
uint32_t database::witness_participation_rate()const
{
   const global_property_object& gpo = get_global_properties();
   if( gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SHUFFLED_ALGORITHM )
   {
      const dynamic_global_property_object& dpo = get_dynamic_global_properties();
      return uint64_t(GRAPHENE_100_PERCENT) * dpo.recent_slots_filled.popcount() / 128;
   }
   if( gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SCHEDULED_ALGORITHM )
   {
      const witness_schedule_object& wso = get_witness_schedule_object();
      return uint64_t(GRAPHENE_100_PERCENT) * wso.recent_slots_filled.popcount() / 128;
   }
   return 0;
}

} }