peerplays_migrated/libraries/plugins/witness/witness.cpp
pbattu123 0b280882af
Merge beatrice (GPOS changes) with master (#270)
* Created unit test for #325

* remove needless find()

* issue - 154: Don't allow to vote when vesting balance is 0

* Increase block creation timeout to 2500ms

* increase delay for node connection

* remove cache from cli get_account

* add cli tests framework

* Adjust newly merged code to new API

* Merged changes from Bitshares PR 1036

* GRPH-76 - Short-cut long sequences of missed blocks

Fixes database::update_global_dynamic_data to speed up counting missed blocks.
(This also fixes a minor issue with counting - the previous algorithm would skip missed blocks for the witness who signed the first block after the gap.)

* Improved resilience of block database against corruption

* Moved reindex logic into database / chain_database, make use of additional blocks in block_database

Fixed tests wrt db.open

* Enable undo + fork database for final blocks in a replay

Don't remove blocks from block db when popping blocks, handle edge case in replay wrt fork_db, adapted unit tests

* Log starting block number of replay

* Prevent unsigned integer underflow

* Fixed lock detection

* Don't leave _data_dir empty if db is locked

* Writing the object_database is now almost atomic

* Improved consistency check for block_log

* Cut back block_log index file if inconsistent

* Fixed undo_database

* Added test case for broken merge on empty undo_db

* exclude second undo_db.enable() call in some cases

* Add missing change

* change bitshares to core in message

* Merge pull request #938 from bitshares/fix-block-storing

Store correct block ID when switching forks

* Fixed integer overflow issue

* Fix for history ID mismatch (Bitshares PR #875)

* Update the FC submodule with the changes for GRPH-4

* Merged Bitshares PR #1462 and compilation fixes

* Support/gitlab (#123)

* Updated gitlab process

* Fix undefined references in cli test

* Updated GitLab CI

* Fix #436 object_database created outside of witness data directory

* supplement more comments on database::_opened variable

* prevent segfault when destructing application obj

* Fixed test failures and compilation issue

* minor performance improvement

* Added comment

* Fix compilation in debug mode

* Fixed duplicate ops returned from get_account_history

* Fixed account_history_pagination test

* Removed unrelated comment

* Update to fixed version of fc

* Skip auth check when pushing self-generated blocks

* Extract public keys before pushing a transaction

* Dereference chain_database shared_ptr

* Updated transaction::signees to mutable

and
* updated get_signature_keys() to return a const reference,
* get_signature_keys() will update signees on first call,
* modified test cases and wallet.cpp accordingly,
* no longer construct a new signed_transaction object before pushing

* Added get_asset_count API

* No longer extract public keys before pushing a trx

and removed unused new added constructor and _get_signature_keys() function from signed_transaction struct

* changes to withdraw_vesting feature(for both cdd and GPOS)

* Comments update

* update to GPOS hardfork ref

* Remove leftover comment from merge

* fix for get_vesting_balance API call

* braces update

* Allow sufficient space for new undo_session

* Throw for deep nesting

* node.cpp: Check the attacker/buggy client before updating items ids

The peer is an attacker or buggy, which means the item_hashes_received is
not correct.

Move the check before updating items ids to save some time in this case.

* Create .gitlab-ci.yml

* Added cli_test to CI

* fixing build errors (#150)

* fixing build errors

vest type correction

* fixing build errors

vest type correction

* fixes 

new Dockerfile

* vesting_balance_type correction

vesting_balance_type changed to normal

* gcc5 support to Dockerfile

gcc5 support to Dockerfile

* use random port numbers in app_test (#154)

* Changes to compile with GCC 7 (Ubuntu 18.04)

* proposal fail_reason bug fixed (#157)

* Added Sonarcloud code_quality to CI (#159)

* Added sonarcloud analysis (#158)

* changes to have separate methods and single withdrawal fee for multiple vest objects

* 163-fix, Return only non-zero vesting balances

* Support/gitlab develop (#168)

* Added code_quality to CI

* Update .gitlab-ci.yml

* Point to PBSA/peerplays-fc commit f13d063 (#167)

* [GRPH-3] Additional cli tests (#155)

* Additional cli tests

* Compatible with latest fc changes

* Fixed Spacing issues

* [GRPH-106] Added voting tests (#136)

* Added more voting tests

* Added additional option

* Adjust p2p log level (#180)

* merge gpos to develop (#186)

* issue - 154: Don't allow to vote when vesting balance is 0

* changes to withdraw_vesting feature(for both cdd and GPOS)

* Comments update

* update to GPOS hardfork ref

* fix for get_vesting_balance API call

* braces update

* Create .gitlab-ci.yml

* fixing build errors (#150)

* fixing build errors

vest type correction

* fixing build errors

vest type correction

* fixes 

new Dockerfile

* vesting_balance_type correction

vesting_balance_type changed to normal

* gcc5 support to Dockerfile

gcc5 support to Dockerfile

* Changes to compile with GCC 7 (Ubuntu 18.04)

* changes to have separate methods and single withdrawal fee for multiple vest objects

* 163-fix, Return only non-zero vesting balances

* Revert "Revert "GPOS protocol""

This reverts commit 67616417b7.

* add new line needed to gpos hardfork file

* temporarily comment out cli_vote_for_2_witnesses until refactor or delete

* fix gpos tests

* fix gitlab-ci conflict

* Fixed few error messages

* error message corrections at other places

* Updated FC repository to peerplays-network/peerplays-fc (#189)

Point to fc commit hash 6096e94 [latest-fc branch]

* Project name update in Doxyfile (#146)

* changes to allow user to vote in each sub-period

* Fixed GPOS vesting factor issue when proxy is set

* Added unit test for proxy voting

* Review changes

* changes to update last voting time

* resolve merge conflict

* unit test changes and also separated GPOS test suite

* delete unused variables

* removed witness check

* eliminate time gap between two consecutive vesting periods

* deleted GPOS specific test suite and updated gpos tests

* updated GPOS hf

* Fixed dividend distribution issue and added test case

* fix flag

* clean newlines gpos_tests

* adapt gpos_tests to changed flag

* Fix to roll in GPOS rules, carry votes from 6th sub-period

* check was already modified

* comments updated

* updated comments to the benefit of reviewer

* Added token symbol name in error messages

* Added token symbol name in error messages (#204)

* case 1: Fixed last voting time issue

* get_account bug fixed

* Fixed flag issue

* Fixed spelling issue

* remove non needed gcc5 changes to dockerfile

* GRPH-134 - High CPU Issue, websocket changes (#213)

* update submodule branch to refer to the latest commit on latest-fc branch (#214)

* Improve account maintenance performance (#130)

* Improve account maintenance performance

* merge fixes

* Fixed merge issue

* Fixed indentations and extra ';'

* Update CI for syncing gitmodules (#216)

* Added logging for the old update_expired_feeds bug

The old bug is https://github.com/cryptonomex/graphene/issues/615 .

Due to the bug, `update_median_feeds()` and `check_call_orders()`
will be called when a feed is not actually expired, normally this
should not affect consensus since calling them should not change
any data in the state.

However, the logging indicates that `check_call_orders()` did
change some data under certain circumstances, specifically, when
multiple limit order matching issue (#453) occurred at same block.
* https://github.com/bitshares/bitshares-core/issues/453

* Minor performance improvement for price::is_null()

* Use static refs in db_getter for immutable objects

* Minor performance improvement for db_maint

* Minor code updates for asset_evaluator.cpp

* changed an `assert()` to `FC_ASSERT()`
* replaced one `db.get(asset_id_type())` with `db.get_core_asset()`
* capture only required variables for lambda

* Improve update_expired_feeds performance #1093

* Change static refs to member pointers of db class

* Added getter for witness schedule object

* Added getter for core dynamic data object

* Use getters

* Removed unused variable

* Add comments for update_expired_feeds in db_block

* Minor refactor of asset_create_evaluator::do_apply()

* Added FC_ASSERT for dynamic data id of core asset

* Added header inclusions in db_management.cpp

* fix global objects usage during replay

* Logging config parsing issue

* added new files

* compilation fix

* Simplified code in database::pay_workers()

* issue with withdrawal

* Added unit test for empty account history

* set extensions default values

* Update GPOS hardfork date and don't allow GPOS features before hardfork time

* refer to latest commit of latest-fc branch (#224)

* account name or id support in all database api

* asset id or name support in all asset APIs

* Fixed compilation issues

* Fixed alignment issues

* Externalized some API templates

* Externalize serialization of blocks, tx, ops

* Externalized db objects

* Externalized genesis serialization

* Externalized serialization in protocol library

* Undo superfluous change

* remove default value for extension parameter

* fix compilation issues

* GRPH-46-Quit_command_cliwallet

* removed multiple function definition

* Fixed chainparameter update proposal issue

* Move GPOS withdraw logic to have single transaction(also single fee) and update API

* Added log for authorization failure of proposal operations

* Votes consideration on GPOS activation

* bump fc version

* fix gpos tests

* Bump fc version

* Updated gpos/voting_tests

* Fixed withdraw vesting bug

* Added unit test

* Update hardfork date for TESTNET, sync fc module and update logs

* avoid wlog as it is filling up space

* Beatrice hot fix(sync issue fix)

* gpos tests fix

* Set hardfork date to Jan5th on TESTNET

Co-authored-by: Peter Conrad <github.com@quisquis.de>
Co-authored-by: John M. Jones <jmjatlanta@gmail.com>
Co-authored-by: obucinac <obucinac@users.noreply.github.com>
Co-authored-by: Bobinson K B <bobinson@gmail.com>
Co-authored-by: Alfredo Garcia <oxarbitrage@gmail.com>
Co-authored-by: Miha Čančula <miha@noughmad.eu>
Co-authored-by: Abit <abitmore@users.noreply.github.com>
Co-authored-by: Roshan Syed <r.syed@pbsa.info>
Co-authored-by: Sandip Patel <sandip@knackroot.com>
Co-authored-by: RichardWeiYang <richard.weiyang@gmail.com>
Co-authored-by: gladcow <jahr@yandex.ru>
Co-authored-by: satyakoneru <satyakoneru.iiith@gmail.com>
2020-02-07 21:23:08 +05:30

/*
 * Copyright (c) 2015 Cryptonomex, Inc., and contributors.
 *
 * The MIT License
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */
#include <graphene/witness/witness.hpp>

#include <graphene/chain/database.hpp>
#include <graphene/chain/witness_object.hpp>
#include <graphene/utilities/key_conversion.hpp>

#include <boost/range/algorithm_ext/insert.hpp>

#include <fc/smart_ref_impl.hpp>
#include <fc/thread/thread.hpp>

#include <iostream>

using namespace graphene::witness_plugin;
using std::string;
using std::vector;

namespace bpo = boost::program_options;

void new_chain_banner( const graphene::chain::database& db )
{
   std::cerr << "\n"
      "********************************\n"
      "*                              *\n"
      "*   ------- NEW CHAIN ------   *\n"
      "*   - Welcome to Graphene! -   *\n"
      "*   ------------------------   *\n"
      "*                              *\n"
      "********************************\n"
      "\n";
   if( db.get_slot_at_time( fc::time_point::now() ) > 200 )
   {
      std::cerr << "Your genesis seems to have an old timestamp\n"
                   "Please consider using the --genesis-timestamp option to give your genesis a recent timestamp\n"
                   "\n";
   }
}

void witness_plugin::plugin_set_program_options(
   boost::program_options::options_description& command_line_options,
   boost::program_options::options_description& config_file_options)
{
   auto default_priv_key = fc::ecc::private_key::regenerate(fc::sha256::hash(std::string("nathan")));
   string witness_id_example = fc::json::to_string(chain::witness_id_type(5));
   string witness_id_example2 = fc::json::to_string(chain::witness_id_type(6));
   command_line_options.add_options()
         ("enable-stale-production", bpo::bool_switch()->notifier([this](bool e){_production_enabled = e;}), "Enable block production, even if the chain is stale.")
         ("required-participation", bpo::bool_switch()->notifier([this](int e){_required_witness_participation = uint32_t(e*GRAPHENE_1_PERCENT);}), "Percent of witnesses (0-99) that must be participating in order to produce blocks")
         ("witness-id,w", bpo::value<vector<string>>()->composing()->multitoken(),
          ("ID of witness controlled by this node (e.g. " + witness_id_example + ", quotes are required, may specify multiple times)").c_str())
         ("witness-ids,W", bpo::value<string>(),
          ("IDs of multiple witnesses controlled by this node (e.g. [" + witness_id_example + ", " + witness_id_example2 + "], quotes are required)").c_str())
         ("private-key", bpo::value<vector<string>>()->composing()->multitoken()->
          DEFAULT_VALUE_VECTOR(std::make_pair(chain::public_key_type(default_priv_key.get_public_key()), graphene::utilities::key_to_wif(default_priv_key))),
          "Tuple of [PublicKey, WIF private key] (may specify multiple times)")
         ;
   config_file_options.add(command_line_options);
}
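
// Example config.ini entries consumed by the options above (illustrative values only):
//   witness-id = "1.6.5"
//   private-key = ["<public key>", "<WIF private key>"]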

std::string witness_plugin::plugin_name()const
{
   return "witness";
}

void witness_plugin::plugin_initialize(const boost::program_options::variables_map& options)
{ try {
   ilog("witness plugin: plugin_initialize() begin");
   _options = &options;
   LOAD_VALUE_SET(options, "witness-id", _witnesses, chain::witness_id_type)
   if (options.count("witness-ids"))
      boost::insert(_witnesses, fc::json::from_string(options.at("witness-ids").as<string>()).as<vector<chain::witness_id_type>>( 5 ));
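   // The numeric argument passed to as<>() above limits fc's recursion depth while
   // converting the parsed JSON variant, guarding against maliciously nested input.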
   if( options.count("private-key") )
   {
      const std::vector<std::string> key_id_to_wif_pair_strings = options["private-key"].as<std::vector<std::string>>();
      for (const std::string& key_id_to_wif_pair_string : key_id_to_wif_pair_strings)
      {
         auto key_id_to_wif_pair = graphene::app::dejsonify<std::pair<chain::public_key_type, std::string> >(key_id_to_wif_pair_string, 5);
         ilog("Public Key: ${public}", ("public", key_id_to_wif_pair.first));
         fc::optional<fc::ecc::private_key> private_key = graphene::utilities::wif_to_key(key_id_to_wif_pair.second);
         if (!private_key)
         {
            // the key isn't in WIF format; see if they are still passing the old native private key format. This is
            // just here to ease the transition, can be removed soon
            try
            {
               private_key = fc::variant(key_id_to_wif_pair.second, 2).as<fc::ecc::private_key>(1);
            }
            catch (const fc::exception&)
            {
               FC_THROW("Invalid WIF-format private key ${key_string}", ("key_string", key_id_to_wif_pair.second));
            }
         }
         _private_keys[key_id_to_wif_pair.first] = *private_key;
      }
   }
   ilog("witness plugin: plugin_initialize() end");
} FC_LOG_AND_RETHROW() }

void witness_plugin::plugin_startup()
{ try {
   ilog("witness plugin: plugin_startup() begin");
   chain::database& d = database();
   if( !_witnesses.empty() )
   {
      ilog("Launching block production for ${n} witnesses.", ("n", _witnesses.size()));
      app().set_block_production(true);
      if( _production_enabled )
      {
         if( d.head_block_num() == 0 )
            new_chain_banner(d);
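         // With stale production enabled the chain may be many slots behind, so skip
         // the usual check on available undo history when generating blocks.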
         _production_skip_flags |= graphene::chain::database::skip_undo_history_check;
      }
      schedule_production_loop();
   } else
      elog("No witnesses configured! Please add witness IDs and private keys to configuration.");
   ilog("witness plugin: plugin_startup() end");
} FC_CAPTURE_AND_RETHROW() }

void witness_plugin::plugin_shutdown()
{
   // nothing to do
}
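
// Worked example for the wakeup computation below: if the current time is 123,456 µs
// past a second boundary, time_to_next_second = 1,000,000 - 123,456 = 876,544 µs; if
// it is 980,000 µs past (less than 50 ms remaining), a full second is added and the
// task sleeps for 1,020,000 µs instead.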
void witness_plugin::schedule_production_loop()
{
   // Schedule for the next second's tick regardless of chain state.
   // If we would wait less than 50ms, wait for the whole second.
   fc::time_point now = fc::time_point::now();
   int64_t time_to_next_second = 1000000 - (now.time_since_epoch().count() % 1000000);
   if( time_to_next_second < 50000 )      // we must sleep for at least 50ms
      time_to_next_second += 1000000;

   fc::time_point next_wakeup( now + fc::microseconds( time_to_next_second ) );

   _block_production_task = fc::schedule([this]{block_production_loop();},
                                         next_wakeup, "Witness Block Production");
}

block_production_condition::block_production_condition_enum witness_plugin::block_production_loop()
{
   block_production_condition::block_production_condition_enum result;
   fc::limited_mutable_variant_object capture( GRAPHENE_MAX_NESTED_OBJECTS );
   try
   {
      result = maybe_produce_block(capture);
   }
   catch( const fc::canceled_exception& )
   {
      // We're trying to exit. Go ahead and let this one out.
      throw;
   }
   catch( const fc::exception& e )
   {
      elog("Got exception while generating block:\n${e}", ("e", e.to_detail_string()));
      result = block_production_condition::exception_producing_block;
      elog("Discarding all pending transactions in an attempt to prevent the same error from occurring the next time we try to produce a block");
      database().clear_pending();
   }

   switch( result )
   {
      case block_production_condition::produced:
         ilog("Generated block #${n} with timestamp ${t} at time ${c}",
              ("n", capture["n"])("t", capture["t"])("c", capture["c"]));
         break;
      case block_production_condition::not_synced:
         ilog("Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)");
         break;
      case block_production_condition::not_my_turn:
         break;
      case block_production_condition::not_time_yet:
         break;
      case block_production_condition::no_private_key:
         ilog("Not producing block because I don't have the private key for ${scheduled_key}",
              ("scheduled_key", capture["scheduled_key"]));
         break;
      case block_production_condition::low_participation:
         elog("Not producing block because node appears to be on a minority fork with only ${pct}% witness participation",
              ("pct", capture["pct"]));
         break;
      case block_production_condition::lag:
         elog("Not producing block because node didn't wake up within 2500ms of the slot time.");
         break;
      case block_production_condition::consecutive:
         elog("Not producing block because the last block was generated by the same witness.\nThis node is probably disconnected from the network so block production has been disabled.\nDisable this check with --allow-consecutive option.");
         break;
      case block_production_condition::exception_producing_block:
         elog( "exception producing block" );
         break;
   }

   schedule_production_loop();
   return result;
}

block_production_condition::block_production_condition_enum witness_plugin::maybe_produce_block( fc::limited_mutable_variant_object& capture )
{
   chain::database& db = database();
   fc::time_point now_fine = fc::time_point::now();
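   // Adding 500ms and truncating to fc::time_point_sec rounds the microsecond clock
   // to the nearest whole second, matching the one-second resolution of block timestamps.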
   fc::time_point_sec now = now_fine + fc::microseconds( 500000 );

   // If the next block production opportunity is in the present or future, we're synced.
   if( !_production_enabled )
   {
      if( db.get_slot_time(1) >= now )
         _production_enabled = true;
      else
         return block_production_condition::not_synced;
   }

   // is anyone scheduled to produce now or one second in the future?
   uint32_t slot = db.get_slot_at_time( now );
   if( slot == 0 )
   {
      capture("next_time", db.get_slot_time(1));
      return block_production_condition::not_time_yet;
   }

   //
   // this assert should not fail, because now <= db.head_block_time()
   // should have resulted in slot == 0.
   //
   // if this assert triggers, there is a serious bug in get_slot_at_time()
   // which would result in allowing a later block to have a timestamp
   // less than or equal to the previous block
   //
   assert( now > db.head_block_time() );

   graphene::chain::witness_id_type scheduled_witness = db.get_scheduled_witness( slot );
   // we must control the witness scheduled to produce the next block.
   if( _witnesses.find( scheduled_witness ) == _witnesses.end() )
   {
      capture("scheduled_witness", scheduled_witness);
      return block_production_condition::not_my_turn;
   }

   fc::time_point_sec scheduled_time = db.get_slot_time( slot );
   wdump((slot)(scheduled_witness)(scheduled_time)(now));
   graphene::chain::public_key_type scheduled_key = scheduled_witness( db ).signing_key;
   auto private_key_itr = _private_keys.find( scheduled_key );
   if( private_key_itr == _private_keys.end() )
   {
      capture("scheduled_key", scheduled_key);
      return block_production_condition::no_private_key;
   }
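
   // witness_participation_rate() is scaled by GRAPHENE_100_PERCENT and reflects the
   // fraction of recent block production slots that were actually filled.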
   uint32_t prate = db.witness_participation_rate();
   if( prate < _required_witness_participation )
   {
      capture("pct", uint32_t(100*uint64_t(prate) / GRAPHENE_1_PERCENT));
      return block_production_condition::low_participation;
   }

   // the local clock must be at least 1 second ahead of head_block_time.
   //if (gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SCHEDULED_ALGORITHM)
   //if( (now - db.head_block_time()).to_seconds() < GRAPHENE_MIN_BLOCK_INTERVAL ) {
   //   return block_production_condition::local_clock; //Not producing block because head block is less than a second old.
   //}
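
   // Produce only if we woke up within 2500ms of the scheduled slot time (see the
   // "Increase block creation timeout to 2500ms" entry in the commit history above);
   // otherwise report lag and skip the slot.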
   if( llabs((scheduled_time - now).count()) > fc::milliseconds( 2500 ).count() )
   {
      capture("scheduled_time", scheduled_time)("now", now);
      return block_production_condition::lag;
   }

   //if (gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SCHEDULED_ALGORITHM)
   //ilog("Witness ${id} production slot has arrived; generating a block now...", ("id", scheduled_witness));
   auto block = db.generate_block(
      scheduled_time,
      scheduled_witness,
      private_key_itr->second,
      _production_skip_flags
      );
   capture("n", block.block_num())("t", block.timestamp)("c", now);

   fc::async( [this,block](){ p2p_node().broadcast(net::block_message(block)); } );
   return block_production_condition::produced;
}