* Created unit test for #325
* remove needless find()
* Issue 154: Don't allow voting when the vesting balance is 0
* Increase block creation timeout to 2500ms
* increase delay for node connection
* remove cache from cli get_account
* add cli tests framework
* Adjust newly merged code to new API
* Merged changes from Bitshares PR 1036
* GRPH-76 - Short-cut long sequences of missed blocks
Fixes database::update_global_dynamic_data to speed up counting missed blocks (a rough sketch of the idea follows below).
(This also fixes a minor counting issue: the previous algorithm would skip missed blocks for the witness who signed the first block after the gap.)
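The snippet below is a minimal, self-contained sketch of how such a short-cut can work under a simplified round-robin witness schedule; the counters and the `main()` harness are illustrative only, not the actual code in `database::update_global_dynamic_data`.

```cpp
// Sketch only: whole rounds of missed slots hit every active witness equally,
// so they can be charged in bulk instead of walking millions of slots.
#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
   std::vector<uint64_t> total_missed( 11, 0 );   // one counter per active witness
   const uint64_t missed_slots = 1000000;         // e.g. a very long gap between blocks
   const uint64_t wit_count = total_missed.size();

   // Short-cut: charge the full rounds in O(wit_count) ...
   for( auto& m : total_missed )
      m += missed_slots / wit_count;

   // ... and only walk the remaining partial round slot by slot.
   for( uint64_t slot = 0; slot < missed_slots % wit_count; ++slot )
      total_missed[ slot % wit_count ] += 1;      // stand-in for get_scheduled_witness()

   for( uint64_t i = 0; i < total_missed.size(); ++i )
      std::cout << "witness " << i << " missed " << total_missed[i] << " blocks\n";
}
```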
* Improved resilience of block database against corruption
* Moved reindex logic into database / chain_database, making use of additional blocks in block_database
Fixed tests w.r.t. db.open
* Enable undo + fork database for final blocks in a replay
Don't remove blocks from the block db when popping blocks, handle an edge case in replay w.r.t. fork_db, adapted unit tests
* Log starting block number of replay
* Prevent unsigned integer underflow
* Fixed lock detection
* Don't leave _data_dir empty if the db is locked
* Writing the object_database is now almost atomic
* Improved consistency check for block_log
* Cut back block_log index file if inconsistent
* Fixed undo_database
* Added test case for broken merge on empty undo_db
* exclude second undo_db.enable() call in some cases
* Add missing change
* change bitshares to core in message
* Merge pull request #938 from bitshares/fix-block-storing
Store correct block ID when switching forks
* Fixed integer overflow issue
* Fix for history ID mismatch (Bitshares PR #875)
* Update the FC submodule with the changes for GRPH-4
* Merged Bitshares PR #1462 and compilation fixes
* Support/gitlab (#123)
* Updated gitlab process
* Fix undefined references in cli test
* Updated GitLab CI
* Fix #436 object_database created outside of witness data directory
* add more comments on the database::_opened variable
* prevent segfault when destructing application obj
* Fixed test failures and compilation issue
* minor performance improvement
* Added comment
* Fix compilation in debug mode
* Fixed duplicate ops returned from get_account_history
* Fixed account_history_pagination test
* Removed unrelated comment
* Update to fixed version of fc
* Skip auth check when pushing self-generated blocks
* Extract public keys before pushing a transaction
* Dereference chain_database shared_ptr
* Updated transaction::signees to be mutable, and:
* updated get_signature_keys() to return a const reference,
* get_signature_keys() will update signees on first call,
* modified test cases and wallet.cpp accordingly,
* no longer construct a new signed_transaction object before pushing
* Added get_asset_count API
* No longer extract public keys before pushing a trx,
and removed the unused newly added constructor and _get_signature_keys() function from the signed_transaction struct (a sketch of the caching pattern follows below)
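As a rough illustration of the `signees` caching described above (illustrative types and names, not the project's exact signatures), a mutable member lets a const transaction recover its signing keys once and then reuse the result:

```cpp
// Minimal sketch of lazy caching behind a const accessor; std::string keys and
// the fake "recovery" step stand in for the real ECDSA public-key recovery.
#include <set>
#include <string>

struct signed_transaction_sketch
{
   std::set<std::string> signatures;          // stand-in for compact signatures

   // Cached result; mutable so get_signature_keys() can stay const.
   mutable std::set<std::string> signees;

   const std::set<std::string>& get_signature_keys() const
   {
      if( signees.empty() )                   // first call: do the expensive work once
         for( const auto& sig : signatures )
            signees.insert( "PUB_" + sig );   // stand-in for key recovery
      return signees;                         // later calls: cheap const reference
   }
};

int main()
{
   signed_transaction_sketch trx;
   trx.signatures = { "sig1", "sig2" };
   trx.get_signature_keys();                  // computes and caches
   trx.get_signature_keys();                  // served from the cache
}
```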
* changes to the withdraw_vesting feature (for both CDD and GPOS)
* Comments update
* update to GPOS hardfork ref
* Remove leftover comment from merge
* fix for get_vesting_balance API call
* braces update
* Allow sufficient space for new undo_session
* Throw for deep nesting
* node.cpp: Check for an attacker/buggy client before updating item ids
If the peer is an attacker or buggy, the item_hashes_received list is not correct.
Moving the check before the item-id update saves some time in this case (see the sketch below).
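A generic, self-contained sketch of that "validate before mutate" ordering; the container names and the validity rule are hypothetical stand-ins, not node.cpp's actual logic:

```cpp
// Sketch only: reject a bogus peer response before touching local bookkeeping,
// so a malicious or buggy peer never triggers the expensive update path.
#include <cstdint>
#include <iostream>
#include <vector>

using item_hash_t = uint64_t;

static bool hashes_look_valid( const std::vector<item_hash_t>& item_hashes_received )
{
   return !item_hashes_received.empty();      // stand-in sanity check
}

int main()
{
   std::vector<item_hash_t> item_hashes_received;   // what the peer claims to have
   std::vector<item_hash_t> items_to_fetch;         // local state we would update

   if( !hashes_look_valid( item_hashes_received ) )
   {
      std::cout << "peer looks buggy or malicious, disconnecting\n";
      return 0;                               // never pay for the update below
   }

   items_to_fetch.insert( items_to_fetch.end(),
                          item_hashes_received.begin(), item_hashes_received.end() );
}
```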
* Create .gitlab-ci.yml
* Added cli_test to CI
* fixing build errors (#150)
* fixing build errors
vest type correction
* fixing build errors
vest type correction
* fixes
new Dockerfile
* vesting_balance_type correction
vesting_balance_type changed to normal
* gcc5 support to Dockerfile
gcc5 support to Dockerfile
* use random port numbers in app_test (#154)
* Changes to compile with GCC 7 (Ubuntu 18.04)
* proposal fail_reason bug fixed (#157)
* Added Sonarcloud code_quality to CI (#159)
* Added sonarcloud analysis (#158)
* changes to have separate methods and a single withdrawal fee for multiple vest objects
* 163-fix, Return only non-zero vesting balances
* Support/gitlab develop (#168)
* Added code_quality to CI
* Update .gitlab-ci.yml
* Point to PBSA/peerplays-fc commit f13d063 (#167)
* [GRPH-3] Additional cli tests (#155)
* Additional cli tests
* Compatible with latest fc changes
* Fixed Spacing issues
* [GRPH-106] Added voting tests (#136)
* Added more voting tests
* Added additional option
* Adjust p2p log level (#180)
* merge gpos to develop (#186)
* Issue 154: Don't allow voting when the vesting balance is 0
* changes to the withdraw_vesting feature (for both CDD and GPOS)
* Comments update
* update to GPOS hardfork ref
* fix for get_vesting_balance API call
* braces update
* Create .gitlab-ci.yml
* fixing build errors (#150)
* fixing build errors
vest type correction
* fixing build errors
vest type correction
* fixes
new Dockerfile
* vesting_balance_type correction
vesting_balance_type changed to normal
* gcc5 support to Dockerfile
gcc5 support to Dockerfile
* Changes to compile with GCC 7 (Ubuntu 18.04)
* changes to have separate methods and a single withdrawal fee for multiple vest objects
* 163-fix, Return only non-zero vesting balances
* Revert "Revert "GPOS protocol""
This reverts commit 67616417b7.
* add new line needed to gpos hardfork file
* comment out cli_vote_for_2_witnesses temporarily until it is refactored or deleted
* fix gpos tests
* fix gitlab-ci conflict
* Fixed few error messages
* error message corrections at other places
* Updated FC repository to peerplays-network/peerplays-fc (#189)
Point to fc commit hash 6096e94 [latest-fc branch]
* Project name update in Doxyfile (#146)
* changes to allow user to vote in each sub-period
* Fixed GPOS vesting factor issue when proxy is set
* Added unit test for proxy voting
* Review changes
* changes to update last voting time
* resolve merge conflict
* unit test changes and also separated GPOS test suite
* delete unused variables
* removed witness check
* eliminate time gap between two consecutive vesting periods
* deleted GPOS specific test suite and updated gpos tests
* updated GPOS hf
* Fixed dividend distribution issue and added test case
* fix flag
* clean newlines gpos_tests
* adapt gpos_tests to changed flag
* Fix to roll in GPOS rules, carry votes from 6th sub-period
* check was already modified
* comments updated
* updated comments for the benefit of the reviewer
* Added token symbol name in error messages
* Added token symbol name in error messages (#204)
* case 1: Fixed last voting time issue
* get_account bug fixed
* Fixed flag issue
* Fixed spelling issue
* remove unneeded gcc5 changes to the Dockerfile
* GRPH-134 - High CPU issue, websocket changes (#213)
* update submodule branch to refer to the latest commit on latest-fc branch (#214)
* Improve account maintenance performance (#130)
* Improve account maintenance performance
* merge fixes
* Fixed merge issue
* Fixed indentations and extra ';'
* Update CI for syncing gitmodules (#216)
* Added logging for the old update_expired_feeds bug
The old bug is https://github.com/cryptonomex/graphene/issues/615 .
Due to the bug, `update_median_feeds()` and `check_call_orders()` are called even when a feed is not actually expired. Normally this should not affect consensus, since calling them should not change any data in the state.
However, the logging indicates that `check_call_orders()` did change some data under certain circumstances, specifically when the multiple limit order matching issue (#453) occurred in the same block (a simplified sketch of the diagnostic follows below).
* https://github.com/bitshares/bitshares-core/issues/453
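A simplified, self-contained sketch of that kind of diagnostic; the `feed_sketch` struct and its expiry rule are illustrative stand-ins for the real bitasset feed data, not the project's code:

```cpp
// Sketch only: warn whenever the "expired feed" path handles a feed that has
// not actually expired yet, so the issue-615 behaviour becomes visible in logs.
#include <cstdint>
#include <iostream>

struct feed_sketch
{
   uint32_t last_update;   // simplified timestamps in seconds
   uint32_t lifetime;      // stand-in for feed_lifetime_sec

   bool is_expired( uint32_t now ) const { return last_update + lifetime <= now; }
};

int main()
{
   const uint32_t head_block_time = 100;
   const feed_sketch feed{ 90, 60 };           // only expires at t = 150

   // The buggy path would still recompute medians / check call orders here;
   // the added logging makes that visible instead of letting it pass silently.
   if( !feed.is_expired( head_block_time ) )
      std::cout << "WARNING: feed processed as expired although it expires at "
                << feed.last_update + feed.lifetime
                << ", head block time is " << head_block_time << "\n";
}
```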
* Minor performance improvement for price::is_null()
* Use static refs in db_getter for immutable objects
* Minor performance improvement for db_maint
* Minor code updates for asset_evaluator.cpp
* changed an `assert()` to `FC_ASSERT()`
* replaced one `db.get(asset_id_type())` with `db.get_core_asset()`
* capture only required variables for lambda
* Improve update_expired_feeds performance #1093
* Change static refs to member pointers of db class
* Added getter for witness schedule object
* Added getter for core dynamic data object
* Use getters
* Removed unused variable
* Add comments for update_expired_feeds in db_block
* Minor refactoring of asset_create_evaluator::do_apply()
* Added FC_ASSERT for dynamic data id of core asset
* Added header inclusions in db_management.cpp
* fix global objects usage during replay
* Logging config parsing issue
* added new files
* compilation fix
* Simplified code in database::pay_workers()
* issue with withdrawal
* Added unit test for empty account history
* set extensions default values
* Update GPOS hardfork date and don't allow GPOS features before hardfork time
* refer to latest commit of latest-fc branch (#224)
* account name or id support in all database APIs
* asset id or name support in all asset APIs
* Fixed compilation issues
* Fixed alignment issues
* Externalized some API templates
* Externalize serialization of blocks, tx, ops
* Externalized db objects
* Externalized genesis serialization
* Externalized serialization in protocol library
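The "Externalized ..." commits above most likely refer to the common extern-template / explicit-instantiation pattern: headers declare that an instantiation exists elsewhere and a single translation unit provides it, so heavy serialization templates are compiled once instead of in every includer. A generic, compilable sketch (illustrative names, not the project's code):

```cpp
// Sketch of the extern-template idea in one file; in a real layout the first
// part lives in a header and the explicit instantiation in one .cpp file.
#include <iostream>

// --- header part: declaration plus a promise not to instantiate here --------
template<typename T>
void serialize_sketch( const T& value );

extern template void serialize_sketch<int>( const int& );

// --- single .cpp part: the definition and the one explicit instantiation ----
template<typename T>
void serialize_sketch( const T& value )
{
   std::cout << "serializing " << value << "\n";
}

template void serialize_sketch<int>( const int& );

int main()
{
   serialize_sketch( 42 );   // every user links against the single instantiation
}
```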
* Undo superfluous change
* remove default value for extension parameter
* fix compilation issues
* GRPH-46-Quit_command_cliwallet
* removed multiple function definition
* Fixed chainparameter update proposal issue
* Move GPOS withdraw logic to have a single transaction (also a single fee) and update API
* Added log for authorization failure of proposal operations
* Votes consideration on GPOS activation
* bump fc version
* fix gpos tests
* Bump fc version
* Updated gpos/voting_tests
* Fixed withdraw vesting bug
* Added unit test
* Update hardfork date for TESTNET, sync fc module and update logs
* avoid wlog as it is filling up space
* Beatrice hot fix (sync issue fix)
* gpos tests fix
* Set hardfork date to Jan 5th on TESTNET
Co-authored-by: Peter Conrad <github.com@quisquis.de>
Co-authored-by: John M. Jones <jmjatlanta@gmail.com>
Co-authored-by: obucinac <obucinac@users.noreply.github.com>
Co-authored-by: Bobinson K B <bobinson@gmail.com>
Co-authored-by: Alfredo Garcia <oxarbitrage@gmail.com>
Co-authored-by: Miha Čančula <miha@noughmad.eu>
Co-authored-by: Abit <abitmore@users.noreply.github.com>
Co-authored-by: Roshan Syed <r.syed@pbsa.info>
Co-authored-by: Sandip Patel <sandip@knackroot.com>
Co-authored-by: RichardWeiYang <richard.weiyang@gmail.com>
Co-authored-by: gladcow <jahr@yandex.ru>
Co-authored-by: satyakoneru <satyakoneru.iiith@gmail.com>
File contents (C++, 321 lines, 10 KiB):
/*
 * Copyright (c) 2015 Cryptonomex, Inc., and contributors.
 *
 * The MIT License
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include <graphene/chain/database.hpp>

#include <graphene/chain/chain_property_object.hpp>
#include <graphene/chain/witness_schedule_object.hpp>
#include <graphene/chain/special_authority_object.hpp>
#include <graphene/chain/operation_history_object.hpp>
#include <graphene/chain/protocol/fee_schedule.hpp>

#include <fc/io/fstream.hpp>

#include <fstream>
#include <functional>
#include <iostream>

namespace graphene { namespace chain {

database::database() :
   _random_number_generator(fc::ripemd160().data())
{
   initialize_indexes();
   initialize_evaluators();
}

database::~database()
{
   clear_pending();
}

// Right now, we leave undo_db enabled when replaying when the bookie plugin is
// enabled. It depends on new/changed/removed object notifications, and those are
// only fired when the undo_db is enabled.
// So we use this helper object to disable undo_db only if it is not forbidden
// with _slow_replays flag.
class auto_undo_enabler
{
   const bool _slow_replays;
   undo_database& _undo_db;
   bool _disabled;
public:
   auto_undo_enabler(bool slow_replays, undo_database& undo_db) :
      _slow_replays(slow_replays),
      _undo_db(undo_db),
      _disabled(false)
   {
   }

   ~auto_undo_enabler()
   {
      try{
         enable();
      } FC_CAPTURE_AND_LOG(("undo_db enabling crash"))
   }

   void enable()
   {
      if(!_disabled)
         return;
      _undo_db.enable();
      _disabled = false;
   }

   void disable()
   {
      if(_disabled)
         return;
      if(_slow_replays)
         return;
      _undo_db.disable();
      _disabled = true;
   }
};

void database::reindex( fc::path data_dir )
{ try {
   auto last_block = _block_id_to_block.last();
   if( !last_block ) {
      elog( "!no last block" );
      edump((last_block));
      return;
   }
   if( last_block->block_num() <= head_block_num()) return;

   ilog( "reindexing blockchain" );
   auto start = fc::time_point::now();
   const auto last_block_num = last_block->block_num();
   uint32_t flush_point = last_block_num < 10000 ? 0 : last_block_num - 10000;
   uint32_t undo_point = last_block_num < 50 ? 0 : last_block_num - 50;

   ilog( "Replaying blocks, starting at ${next}...", ("next",head_block_num() + 1) );
   auto_undo_enabler undo(_slow_replays, _undo_db);
   if( head_block_num() >= undo_point )
   {
      if( head_block_num() > 0 )
         _fork_db.start_block( *fetch_block_by_number( head_block_num() ) );
   }
   else
   {
      undo.disable();
   }
   for( uint32_t i = head_block_num() + 1; i <= last_block_num; ++i )
   {
      if( i % 10000 == 0 ) std::cerr << " " << double(i*100)/last_block_num << "% " << i << " of " << last_block_num << " \n";
      if( i == flush_point )
      {
         ilog( "Writing database to disk at block ${i}", ("i",i) );
         flush();
         ilog( "Done" );
      }
      fc::optional< signed_block > block = _block_id_to_block.fetch_by_number(i);
      if( !block.valid() )
      {
         wlog( "Reindexing terminated due to gap: Block ${i} does not exist!", ("i", i) );
         uint32_t dropped_count = 0;
         while( true )
         {
            fc::optional< block_id_type > last_id = _block_id_to_block.last_id();
            // this can trigger if we attempt to e.g. read a file that has block #2 but no block #1
            if( !last_id.valid() )
               break;
            // we've caught up to the gap
            if( block_header::num_from_id( *last_id ) <= i )
               break;
            _block_id_to_block.remove( *last_id );
            dropped_count++;
         }
         wlog( "Dropped ${n} blocks from after the gap", ("n", dropped_count) );
         break;
      }
      if( i < undo_point && !_slow_replays)
      {
         apply_block(*block, skip_witness_signature |
                             skip_transaction_signatures |
                             skip_transaction_dupe_check |
                             skip_tapos_check |
                             skip_witness_schedule_check |
                             skip_authority_check);
      }
      else
      {
         undo.enable();
         push_block(*block, skip_witness_signature |
                            skip_transaction_signatures |
                            skip_transaction_dupe_check |
                            skip_tapos_check |
                            skip_witness_schedule_check |
                            skip_authority_check);
      }
   }
   undo.enable();
   auto end = fc::time_point::now();
   ilog( "Done reindexing, elapsed time: ${t} sec", ("t",double((end-start).count())/1000000.0 ) );
} FC_CAPTURE_AND_RETHROW( (data_dir) ) }

void database::wipe(const fc::path& data_dir, bool include_blocks)
{
   ilog("Wiping database", ("include_blocks", include_blocks));
   if (_opened) {
      close(false);
   }
   object_database::wipe(data_dir);
   if( include_blocks )
      fc::remove_all( data_dir / "database" );
}

void database::open(
   const fc::path& data_dir,
   std::function<genesis_state_type()> genesis_loader,
   const std::string& db_version)
{
   try
   {
      bool wipe_object_db = false;
      if( !fc::exists( data_dir / "db_version" ) )
         wipe_object_db = true;
      else
      {
         std::string version_string;
         fc::read_file_contents( data_dir / "db_version", version_string );
         wipe_object_db = ( version_string != db_version );
      }
      if( wipe_object_db ) {
         ilog("Wiping object_database due to missing or wrong version");
         object_database::wipe( data_dir );
         std::ofstream version_file( (data_dir / "db_version").generic_string().c_str(),
                                     std::ios::out | std::ios::binary | std::ios::trunc );
         version_file.write( db_version.c_str(), db_version.size() );
         version_file.close();
      }

      object_database::open(data_dir);

      _block_id_to_block.open(data_dir / "database" / "block_num_to_block");

      if( !find(global_property_id_type()) )
         init_genesis(genesis_loader());
      else
      {
         _p_core_asset_obj = &get( asset_id_type() );
         _p_core_dynamic_data_obj = &get( asset_dynamic_data_id_type() );
         _p_global_prop_obj = &get( global_property_id_type() );
         _p_chain_property_obj = &get( chain_property_id_type() );
         _p_dyn_global_prop_obj = &get( dynamic_global_property_id_type() );
         _p_witness_schedule_obj = &get( witness_schedule_id_type() );
      }

      fc::optional<block_id_type> last_block = _block_id_to_block.last_id();
      if( last_block.valid() )
      {
         FC_ASSERT( *last_block >= head_block_id(),
                    "last block ID does not match current chain state",
                    ("last_block->id", last_block)("head_block_id",head_block_num()) );
         reindex( data_dir );
      }
      _opened = true;
   }
   FC_CAPTURE_LOG_AND_RETHROW( (data_dir) )
}

void database::close(bool rewind)
{
   if (!_opened)
      return;

   // TODO: Save pending tx's on close()
   clear_pending();

   // pop all of the blocks that we can given our undo history, this should
   // throw when there is no more undo history to pop
   if( rewind )
   {
      try
      {
         uint32_t cutoff = get_dynamic_global_properties().last_irreversible_block_num;

         while( head_block_num() > cutoff )
         {
            block_id_type popped_block_id = head_block_id();
            pop_block();
            _fork_db.remove(popped_block_id); // doesn't throw on missing
         }
      }
      catch ( const fc::exception& e )
      {
         wlog( "Database close unexpected exception: ${e}", ("e", e) );
      }
   }

   // Since pop_block() will move tx's in the popped blocks into pending,
   // we have to clear_pending() after we're done popping to get a clean
   // DB state (issue #336).
   clear_pending();

   object_database::flush();
   object_database::close();

   if( _block_id_to_block.is_open() )
      _block_id_to_block.close();

   _fork_db.reset();

   _opened = false;
}

void database::force_slow_replays()
{
   ilog("enabling slow replays");
   _slow_replays = true;
}

void database::check_ending_lotteries()
{
   try {
      const auto& lotteries_idx = get_index_type<asset_index>().indices().get<active_lotteries>();
      for( auto checking_asset: lotteries_idx )
      {
         FC_ASSERT( checking_asset.is_lottery() );
         FC_ASSERT( checking_asset.lottery_options->is_active );
         FC_ASSERT( checking_asset.lottery_options->end_date != time_point_sec() );
         if( checking_asset.lottery_options->end_date > head_block_time() ) continue;
         checking_asset.end_lottery(*this);
      }
   } catch( ... ) {}
}

void database::check_lottery_end_by_participants( asset_id_type asset_id )
{
   try {
      asset_object asset_to_check = asset_id( *this );
      auto asset_dyn_props = asset_to_check.dynamic_data( *this );
      FC_ASSERT( asset_dyn_props.current_supply == asset_to_check.options.max_supply );
      FC_ASSERT( asset_to_check.is_lottery() );
      FC_ASSERT( asset_to_check.lottery_options->ending_on_soldout );
      asset_to_check.end_lottery( *this );
   } catch( ... ) {}
}

} }