* Created unit test for #325
* remove needless find()
* Issue 154: Don't allow voting when vesting balance is 0
* Increase block creation timeout to 2500ms
* increase delay for node connection
* remove cache from cli get_account
* add cli tests framework
* Adjust newly merged code to new API
* Merged changes from Bitshares PR 1036
* GRPH-76 - Short-cut long sequences of missed blocks
Fixes database::update_global_dynamic_data to speed up counting missed blocks.
(This also fixes a minor issue with counting - the previous algorithm would skip missed blocks for the witness who signed the first block after the gap.)
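  As a rough illustration of the short-cut (not the actual
  database::update_global_dynamic_data code, and assuming a plain round-robin
  schedule rather than the chain's shuffled per-round schedule):

      // Count per-witness misses for a gap of `missed_slots` slots without
      // walking every slot: whole rounds are credited in O(W), and only the
      // remainder (fewer than W slots) is walked individually.
      #include <cstdint>
      #include <map>
      #include <vector>

      using witness_id = uint32_t;

      std::map<witness_id, uint64_t>
      count_missed( const std::vector<witness_id>& schedule,
                    uint64_t first_missed_slot, uint64_t missed_slots )
      {
         std::map<witness_id, uint64_t> missed;
         const uint64_t W = schedule.size();
         if( W == 0 || missed_slots == 0 ) return missed;

         const uint64_t full_rounds = missed_slots / W;    // short-cut long gaps
         if( full_rounds > 0 )
            for( witness_id w : schedule )
               missed[w] += full_rounds;

         for( uint64_t s = first_missed_slot + full_rounds * W;
              s < first_missed_slot + missed_slots; ++s )
            missed[ schedule[ s % W ] ] += 1;              // walk only the remainder

         return missed;
      }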
* Improved resilience of block database against corruption
* Moved reindex logic into database / chain_database, make use of additional blocks in block_database
Fixed tests wrt db.open
* Enable undo + fork database for final blocks in a replay
Don't remove blocks from block db when popping blocks, handle edge case in replay wrt fork_db, adapted unit tests
* Log starting block number of replay
* Prevent unsigned integer underflow
* Fixed lock detection
* Don't leave _data_dir empty if db is locked
* Writing the object_database is now almost atomic
* Improved consistency check for block_log
* Cut back block_log index file if inconsistent
* Fixed undo_database
* Added test case for broken merge on empty undo_db
* exclude second undo_db.enable() call in some cases
* Add missing change
* change bitshares to core in message
* Merge pull request #938 from bitshares/fix-block-storing
Store correct block ID when switching forks
* Fixed integer overflow issue
* Fix for history ID mismatch (Bitshares PR #875)
* Update the FC submodule with the changes for GRPH-4
* Merged Bitshares PR #1462 and compilation fixes
* Support/gitlab (#123)
* Updated gitlab process
* Fix undefined references in cli test
* Updated GitLab CI
* Fix #436 object_database created outside of witness data directory
* add more comments on the database::_opened variable
* prevent segfault when destructing application obj
* Fixed test failures and compilation issue
* minor performance improvement
* Added comment
* Fix compilation in debug mode
* Fixed duplicate ops returned from get_account_history
* Fixed account_history_pagination test
* Removed unrelated comment
* Update to fixed version of fc
* Skip auth check when pushing self-generated blocks
* Extract public keys before pushing a transaction
* Dereference chain_database shared_ptr
* Updated transaction::signees to mutable, and:
* updated get_signature_keys() to return a const reference,
* get_signature_keys() now populates signees on first call (see the sketch below),
* modified test cases and wallet.cpp accordingly,
* no longer construct a new signed_transaction object before pushing
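  A minimal sketch of the mutable-cache pattern described above, with
  placeholder types instead of the real fc key/signature types (recover_key is
  a stand-in for public-key recovery):

      #include <set>
      #include <string>
      #include <vector>

      struct example_transaction
      {
         std::vector<std::string> signatures;     // stand-in for compact signatures

         // Const getter that memoizes the expensive key recovery in `signees`.
         const std::set<std::string>& get_signature_keys( const std::string& chain_id ) const
         {
            if( signees.empty() )
               for( const auto& sig : signatures )
                  signees.insert( recover_key( sig, chain_id ) );
            return signees;
         }

      private:
         mutable std::set<std::string> signees;   // mutable so the const getter can fill it

         static std::string recover_key( const std::string& sig, const std::string& chain_id )
         {
            return sig + "@" + chain_id;           // placeholder only
         }
      };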
* Added get_asset_count API
* No longer extract public keys before pushing a trx
and removed the unused newly added constructor and _get_signature_keys() function from the signed_transaction struct
* changes to withdraw_vesting feature (for both CDD and GPOS)
* Comments update
* update to GPOS hardfork ref
* Remove leftover comment from merge
* fix for get_vesting_balance API call
* braces update
* Allow sufficient space for new undo_session
* Throw for deep nesting
* node.cpp: Check for an attacker/buggy client before updating item ids
If the peer is an attacker or buggy, the item_hashes_received it sent are
not correct. Moving the check before the item-id update saves that work
in such a case.
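  A hedged sketch of that ordering, with invented names (peer_state,
  looks_like_attacker_or_buggy) standing in for the real node.cpp types; the
  point is only that the sanity check runs before any per-item bookkeeping:

      #include <cstddef>
      #include <utility>
      #include <vector>

      using item_hash_t = std::vector<unsigned char>;

      struct peer_state { std::vector<item_hash_t> items_requested; };

      static bool looks_like_attacker_or_buggy( const std::vector<item_hash_t>& item_hashes_received,
                                                std::size_t max_expected )
      {
         // e.g. the peer returned more hashes than were requested / allowed
         return item_hashes_received.size() > max_expected;
      }

      void on_item_ids_inventory( peer_state& peer,
                                  std::vector<item_hash_t> item_hashes_received,
                                  std::size_t max_expected )
      {
         // Reject a bogus reply before doing any per-item work.
         if( looks_like_attacker_or_buggy( item_hashes_received, max_expected ) )
            return;                                // disconnect/penalize omitted

         // Only now update the locally tracked item ids.
         peer.items_requested = std::move( item_hashes_received );
      }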
* Create .gitlab-ci.yml
* Added cli_test to CI
* fixing build errors (#150)
* fixing build errors
vest type correction
* fixing build errors
vest type correction
* fixes
new Dockerfile
* vesting_balance_type correction
vesting_balance_type changed to normal
* gcc5 support to Dockerfile
* use random port numbers in app_test (#154)
* Changes to compile with GCC 7 (Ubuntu 18.04)
* proposal fail_reason bug fixed (#157)
* Added Sonarcloud code_quality to CI (#159)
* Added sonarcloud analysis (#158)
* changes to have separate methods and a single withdrawal fee for multiple vest objects
* 163-fix, Return only non-zero vesting balances
* Support/gitlab develop (#168)
* Added code_quality to CI
* Update .gitlab-ci.yml
* Point to PBSA/peerplays-fc commit f13d063 (#167)
* [GRPH-3] Additional cli tests (#155)
* Additional cli tests
* Compatible with latest fc changes
* Fixed Spacing issues
* [GRPH-106] Added voting tests (#136)
* Added more voting tests
* Added additional option
* Adjust p2p log level (#180)
* merge gpos to develop (#186)
* Issue 154: Don't allow voting when vesting balance is 0
* changes to withdraw_vesting feature (for both CDD and GPOS)
* Comments update
* update to GPOS hardfork ref
* fix for get_vesting_balance API call
* braces update
* Create .gitlab-ci.yml
* fixing build errors (#150)
* fixing build errors
vest type correction
* fixing build errors
vest type correction
* fixes
new Dockerfile
* vesting_balance_type correction
vesting_balance_type changed to normal
* gcc5 support to Dockerfile
* Changes to compile with GCC 7 (Ubuntu 18.04)
* changes to have separate methods and a single withdrawal fee for multiple vest objects
* 163-fix, Return only non-zero vesting balances
* Revert "Revert "GPOS protocol""
This reverts commit 67616417b7.
* add new line needed to gpos hardfork file
* temporarily comment out cli_vote_for_2_witnesses until refactored or deleted
* fix gpos tests
* fix gitlab-ci conflict
* Fixed few error messages
* error message corrections at other places
* Updated FC repository to peerplays-network/peerplays-fc (#189)
Point to fc commit hash 6096e94 [latest-fc branch]
* Project name update in Doxyfile (#146)
* changes to allow users to vote in each sub-period
* Fixed GPOS vesting factor issue when proxy is set
* Added unit test for proxy voting
* Review changes
* changes to update last voting time
* resolve merge conflict
* unit test changes and also separated GPOS test suite
* delete unused variables
* removed witness check
* eliminate time gap between two consecutive vesting periods
* deleted GPOS specific test suite and updated gpos tests
* updated GPOS hf
* Fixed dividend distribution issue and added test case
* fix flag
* clean newlines gpos_tests
* adapt gpos_tests to changed flag
* Fix to roll in GPOS rules, carry votes from 6th sub-period
* check was already modified
* comments updated
* updated comments for the benefit of the reviewer
* Added token symbol name in error messages
* Added token symbol name in error messages (#204)
* case 1: Fixed last voting time issue
* get_account bug fixed
* Fixed flag issue
* Fixed spelling issue
* remove unneeded gcc5 changes from Dockerfile
* GRPH-134 - High CPU issue, websocket changes (#213)
* update submodule branch to refer to the latest commit on latest-fc branch (#214)
* Improve account maintenance performance (#130)
* Improve account maintenance performance
* merge fixes
* Fixed merge issue
* Fixed indentations and extra ';'
* Update CI for syncing gitmodules (#216)
* Added logging for the old update_expired_feeds bug
The old bug is https://github.com/cryptonomex/graphene/issues/615 .
Due to the bug, `update_median_feeds()` and `check_call_orders()`
get called even when a feed is not actually expired. Normally this
should not affect consensus, since calling them should not change
any data in the state.
However, the logging indicates that `check_call_orders()` did
change some data under certain circumstances, specifically when
the multiple limit order matching issue (#453) occurred in the same block.
* https://github.com/bitshares/bitshares-core/issues/453
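  A rough sketch of the kind of diagnostic this entry describes: snapshot the
  affected state before and after the call and log if it changed.
  state_digest(), check_call_orders_for() and log_warning() are hypothetical
  helpers, not the real chain API:

      #include <string>

      template< typename DB, typename BitAsset >
      void check_call_orders_with_logging( DB& db, const BitAsset& b, bool feed_actually_expired )
      {
         const std::string before = db.state_digest();    // hypothetical state snapshot
         db.check_call_orders_for( b );                    // hypothetical wrapper around the real call
         const std::string after  = db.state_digest();

         // The call only happened because of the old expiration bug; if it
         // nevertheless changed state, that is what the logging should surface.
         if( !feed_actually_expired && before != after )
            db.log_warning( "check_call_orders changed state for a non-expired feed" );
      }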
* Minor performance improvement for price::is_null()
* Use static refs in db_getter for immutable objects
* Minor performance improvement for db_maint
* Minor code updates for asset_evaluator.cpp
* changed an `assert()` to `FC_ASSERT()`
* replaced one `db.get(asset_id_type())` with `db.get_core_asset()`
* capture only required variables for lambda
* Improve update_expired_feeds performance #1093
* Change static refs to member pointers of db class (see the pointer-caching sketch below)
* Added getter for witness schedule object
* Added getter for core dynamic data object
* Use getters
* Removed unused variable
* Add comments for update_expired_feeds in db_block
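  A simplified sketch of that pointer-caching getter pattern: resolve each
  long-lived singleton object once and keep a member pointer in the db class,
  so hot paths avoid repeated index lookups. The class and lookup helpers
  below are placeholders, not the real graphene database:

      struct witness_schedule_object   { /* ... */ };
      struct asset_dynamic_data_object { /* ... */ };

      class example_database
      {
      public:
         void open()
         {
            // Resolved once at open/replay; these objects live for the life of
            // the chain and are never removed, so the pointers stay valid.
            _p_witness_schedule_obj = &lookup_witness_schedule();
            _p_core_dynamic_data    = &lookup_core_dynamic_data();
         }

         const witness_schedule_object& get_witness_schedule_object() const
         { return *_p_witness_schedule_obj; }

         const asset_dynamic_data_object& get_core_dynamic_data() const
         { return *_p_core_dynamic_data; }

      private:
         // Placeholders for the real index lookups.
         const witness_schedule_object& lookup_witness_schedule() const
         { static witness_schedule_object o; return o; }
         const asset_dynamic_data_object& lookup_core_dynamic_data() const
         { static asset_dynamic_data_object o; return o; }

         const witness_schedule_object*   _p_witness_schedule_obj = nullptr;
         const asset_dynamic_data_object* _p_core_dynamic_data    = nullptr;
      };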
* Minor refactor of asset_create_evaluator::do_apply()
* Added FC_ASSERT for dynamic data id of core asset
* Added header inclusions in db_management.cpp
* fix global objects usage during replay
* Logging config parsing issue
* added new files
* compilation fix
* Simplified code in database::pay_workers()
* issue with withdrawal
* Added unit test for empty account history
* set extensions default values
* Update GPOS hardfork date and don't allow GPOS features before hardfork time
* refer to latest commit of latest-fc branch (#224)
* account name or id support in all database APIs
* asset id or name support in all asset APIs
* Fixed compilation issues
* Fixed alignment issues
* Externalized some API templates
* Externalize serialization of blocks, tx, ops
* Externalized db objects
* Externalized genesis serialization
* Externalized serialization in protocol library
* Undo superfluous change
* remove default value for extension parameter
* fix compilation issues
* GRPH-46 - Quit command in cli_wallet
* removed multiple function definition
* Fixed chainparameter update proposal issue
* Move GPOS withdraw logic to use a single transaction (and a single fee) and update the API
* Added log for authorization failure of proposal operations
* Votes consideration on GPOS activation
* bump fc version
* fix gpos tests
* Bump fc version
* Updated gpos/voting_tests
* Fixed withdraw vesting bug
* Added unit test
* Update hardfork date for TESTNET, sync fc module and update logs
* avoid wlog as it is filling up space
* Beatrice hotfix (sync issue fix)
* gpos tests fix
* Set hardfork date to Jan 5th on TESTNET
Co-authored-by: Peter Conrad <github.com@quisquis.de>
Co-authored-by: John M. Jones <jmjatlanta@gmail.com>
Co-authored-by: obucinac <obucinac@users.noreply.github.com>
Co-authored-by: Bobinson K B <bobinson@gmail.com>
Co-authored-by: Alfredo Garcia <oxarbitrage@gmail.com>
Co-authored-by: Miha Čančula <miha@noughmad.eu>
Co-authored-by: Abit <abitmore@users.noreply.github.com>
Co-authored-by: Roshan Syed <r.syed@pbsa.info>
Co-authored-by: Sandip Patel <sandip@knackroot.com>
Co-authored-by: RichardWeiYang <richard.weiyang@gmail.com>
Co-authored-by: gladcow <jahr@yandex.ru>
Co-authored-by: satyakoneru <satyakoneru.iiith@gmail.com>
/*
 * Copyright (c) 2015 Cryptonomex, Inc., and contributors.
 *
 * The MIT License
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include <bitset>
#include <iostream>

#include <boost/test/unit_test.hpp>

#include <graphene/chain/database.hpp>
#include <graphene/chain/protocol/protocol.hpp>
#include <graphene/chain/account_object.hpp>
#include <graphene/chain/proposal_object.hpp>
#include <graphene/chain/witness_schedule_object.hpp>
#include <graphene/chain/vesting_balance_object.hpp>

#include <fc/crypto/digest.hpp>

#include "../common/database_fixture.hpp"

using namespace graphene::chain;

BOOST_AUTO_TEST_SUITE(block_tests)

BOOST_FIXTURE_TEST_CASE( update_account_keys, database_fixture )
{
   try
   {
      const asset_object& core = asset_id_type()(db);
      uint32_t skip_flags =
           database::skip_transaction_dupe_check
         | database::skip_witness_signature
         | database::skip_transaction_signatures
         | database::skip_authority_check
         ;

      // Sam is the creator of accounts
      private_key_type committee_key = init_account_priv_key;
      private_key_type sam_key = generate_private_key("sam");

      //
      // A = old key set
      // B = new key set
      //
      // we measure how many times we test following four cases:
      //
      //                                      A-B        B-A
      // alice    case_count[0]   A == B     empty      empty
      // bob      case_count[1]   A  < B     empty      nonempty
      // charlie  case_count[2]   B  < A     nonempty   empty
      // dan      case_count[3]   A nc B     nonempty   nonempty
      //
      // and assert that all four cases were tested at least once
      //
      account_object sam_account_object = create_account( "sam", sam_key );

      upgrade_to_lifetime_member(sam_account_object.id);
      //Get a sane head block time
      generate_block( skip_flags );

      db.modify(db.get_global_properties(), [](global_property_object& p) {
         p.parameters.committee_proposal_review_period = fc::hours(1).to_seconds();
      });

      transaction tx;
      processed_transaction ptx;

      account_object committee_account_object = committee_account(db);
      // transfer from committee account to Sam account
      transfer(committee_account_object, sam_account_object, core.amount(100000));

      const int num_keys = 5;
      vector< private_key_type > numbered_private_keys;
      vector< vector< public_key_type > > numbered_key_id;
      numbered_private_keys.reserve( num_keys );
      numbered_key_id.push_back( vector<public_key_type>() );
      numbered_key_id.push_back( vector<public_key_type>() );

      for( int i=0; i<num_keys; i++ )
      {
         private_key_type privkey = generate_private_key(
            std::string("key_") + std::to_string(i));
         public_key_type pubkey = privkey.get_public_key();
         address addr( pubkey );

         numbered_private_keys.push_back( privkey );
         numbered_key_id[0].push_back( pubkey );
         //numbered_key_id[1].push_back( addr );
      }

      // each element of possible_key_sched is a list of exactly num_keys
      // indices into numbered_key_id[use_address]. they are defined
      // by repeating selected elements of
      // numbered_private_keys given by a different selector.
      vector< vector< int > > possible_key_sched;
      const int num_key_sched = (1 << num_keys)-1;
      possible_key_sched.reserve( num_key_sched );

      for( int s=1; s<=num_key_sched; s++ )
      {
         vector< int > v;
         int i = 0;
         v.reserve( num_keys );
         while( v.size() < num_keys )
         {
            if( s & (1 << i) )
               v.push_back( i );
            i++;
            if( i >= num_keys )
               i = 0;
         }
         possible_key_sched.push_back( v );
      }

      // we can only undo in blocks
      generate_block( skip_flags );

      std::cout << "update_account_keys: this test will take a few minutes...\n";
      for( int use_addresses=0; use_addresses<1; use_addresses++ )
      {
         vector< public_key_type > key_ids = numbered_key_id[ use_addresses ];
         for( int num_owner_keys=1; num_owner_keys<=2; num_owner_keys++ )
         {
            for( int num_active_keys=1; num_active_keys<=2; num_active_keys++ )
            {
               std::cout << use_addresses << num_owner_keys << num_active_keys << "\n";
               for( const vector< int >& key_sched_before : possible_key_sched )
               {
                  auto it = key_sched_before.begin();
                  vector< const private_key_type* > owner_privkey;
                  vector< const public_key_type* > owner_keyid;
                  owner_privkey.reserve( num_owner_keys );

                  trx.clear();
                  account_create_operation create_op;
                  create_op.name = "alice";

                  for( int owner_index=0; owner_index<num_owner_keys; owner_index++ )
                  {
                     int i = *(it++);
                     create_op.owner.key_auths[ key_ids[ i ] ] = 1;
                     owner_privkey.push_back( &numbered_private_keys[i] );
                     owner_keyid.push_back( &key_ids[ i ] );
                  }
                  // size() < num_owner_keys is possible when some keys are duplicates
                  create_op.owner.weight_threshold = create_op.owner.key_auths.size();

                  for( int active_index=0; active_index<num_active_keys; active_index++ )
                     create_op.active.key_auths[ key_ids[ *(it++) ] ] = 1;
                  // size() < num_active_keys is possible when some keys are duplicates
                  create_op.active.weight_threshold = create_op.active.key_auths.size();

                  create_op.options.memo_key = key_ids[ *(it++) ] ;
                  create_op.registrar = sam_account_object.id;
                  trx.operations.push_back( create_op );
                  // trx.sign( sam_key );
                  //wdump( (trx) );

                  processed_transaction ptx_create = db.push_transaction( trx,
                     database::skip_transaction_dupe_check |
                     database::skip_transaction_signatures |
                     database::skip_authority_check
                     );
                  account_id_type alice_account_id =
                     ptx_create.operation_results[0]
                     .get< object_id_type >();

                  generate_block( skip_flags );
                  for( const vector< int >& key_sched_after : possible_key_sched )
                  {
                     auto it = key_sched_after.begin();

                     trx.clear();
                     account_update_operation update_op;
                     update_op.account = alice_account_id;
                     update_op.owner = authority();
                     update_op.active = authority();
                     update_op.new_options = create_op.options;

                     for( int owner_index=0; owner_index<num_owner_keys; owner_index++ )
                        update_op.owner->key_auths[ key_ids[ *(it++) ] ] = 1;
                     // size() < num_owner_keys is possible when some keys are duplicates
                     update_op.owner->weight_threshold = update_op.owner->key_auths.size();
                     for( int active_index=0; active_index<num_active_keys; active_index++ )
                        update_op.active->key_auths[ key_ids[ *(it++) ] ] = 1;
                     // size() < num_active_keys is possible when some keys are duplicates
                     update_op.active->weight_threshold = update_op.active->key_auths.size();
                     FC_ASSERT( update_op.new_options.valid() );
                     update_op.new_options->memo_key = key_ids[ *(it++) ] ;

                     trx.operations.push_back( update_op );
                     for( int i=0; i<int(create_op.owner.weight_threshold); i++)
                     {
                        sign( trx, *owner_privkey[i] );
                        if( i < int(create_op.owner.weight_threshold-1) )
                        {
                           GRAPHENE_REQUIRE_THROW(db.push_transaction(trx), fc::exception);
                        }
                        else
                        {
                           db.push_transaction( trx,
                              database::skip_transaction_dupe_check |
                              database::skip_transaction_signatures );
                        }
                     }
                     verify_account_history_plugin_index();
                     generate_block( skip_flags );

                     verify_account_history_plugin_index();
                     db.pop_block();
                     verify_account_history_plugin_index();
                  }
                  db.pop_block();
                  verify_account_history_plugin_index();
               }
            }
         }
      }
   }
   catch( const fc::exception& e )
   {
      edump( (e.to_detail_string()) );
      throw;
   }
}

/**
 * To have a secure random number we need to ensure that the same
 * witness does not get to produce two blocks in a row. There is
 * always a chance that the last witness of one round will be the
 * first witness of the next round.
 *
 * This means that when we shuffle witnesses we need to make sure
 * that there are at least N/2 witnesses between consecutive turns
 * of the same witness. This means that during the random
 * shuffle we need to restrict the placement of witnesses to maintain
 * this invariant.
 *
 * This test checks the requirement using a Monte Carlo approach
 * (produce lots of blocks and check that the invariant holds).
 */
BOOST_FIXTURE_TEST_CASE( witness_order_mc_test, database_fixture )
{
   try {
      size_t num_witnesses = db.get_global_properties().active_witnesses.size();
      //size_t dmin = num_witnesses >> 1;

      vector< witness_id_type > cur_round;
      vector< witness_id_type > full_schedule;
      // if we make the maximum witness count testable,
      // we'll need to enlarge this.
      std::bitset< 0x40 > witness_seen;
      size_t total_blocks = 1000000;

      cur_round.reserve( num_witnesses );
      full_schedule.reserve( total_blocks );
      cur_round.push_back( db.get_dynamic_global_properties().current_witness );

      // we assert so the test doesn't continue, which would
      // corrupt memory
      assert( num_witnesses <= witness_seen.size() );

      while( full_schedule.size() < total_blocks )
      {
         if( (db.head_block_num() & 0x3FFF) == 0 )
         {
            wdump( (db.head_block_num()) );
         }
         witness_id_type wid = db.get_scheduled_witness( 1 );
         full_schedule.push_back( wid );
         cur_round.push_back( wid );
         if( cur_round.size() == num_witnesses )
         {
            // check that the current round contains exactly 1 copy
            // of each witness
            witness_seen.reset();
            for( const witness_id_type& w : cur_round )
            {
               uint64_t inst = w.instance.value;
               BOOST_CHECK( !witness_seen.test( inst ) );
               assert( !witness_seen.test( inst ) );
               witness_seen.set( inst );
            }
            cur_round.clear();
         }
         generate_block();
      }

      for( size_t i=num_witnesses, m=full_schedule.size(); i<m; i+=num_witnesses )
      {
         BOOST_CHECK( full_schedule[i] != full_schedule[i-1] );
         assert( full_schedule[i] != full_schedule[i-1] );
      }

   } catch (fc::exception& e) {
      edump((e.to_detail_string()));
      throw;
   }
}


BOOST_FIXTURE_TEST_CASE( tapos_rollover, database_fixture )
{
   try
   {
      ACTORS((alice)(bob));

      BOOST_TEST_MESSAGE( "Give Alice some money" );
      transfer(committee_account, alice_id, asset(10000));
      generate_block();

      BOOST_TEST_MESSAGE( "Generate up to block 0xFF00" );
      generate_blocks( 0xFF00 );
      signed_transaction xfer_tx;

      BOOST_TEST_MESSAGE( "Transfer money at/about 0xFF00" );
      transfer_operation xfer_op;
      xfer_op.from = alice_id;
      xfer_op.to = bob_id;
      xfer_op.amount = asset(1000);

      xfer_tx.operations.push_back( xfer_op );
      xfer_tx.set_expiration( db.head_block_time() + fc::seconds( 0x1000 * db.get_global_properties().parameters.block_interval ) );
      xfer_tx.set_reference_block( db.head_block_id() );

      sign( xfer_tx, alice_private_key );
      PUSH_TX( db, xfer_tx, 0 );
      generate_block();

      BOOST_TEST_MESSAGE( "Sign new tx's" );
      xfer_tx.set_expiration( db.head_block_time() + fc::seconds( 0x1000 * db.get_global_properties().parameters.block_interval ) );
      xfer_tx.set_reference_block( db.head_block_id() );
      xfer_tx.signatures.clear();
      sign( xfer_tx, alice_private_key );

      BOOST_TEST_MESSAGE( "Generate up to block 0x10010" );
      generate_blocks( 0x110 );

      BOOST_TEST_MESSAGE( "Transfer at/about block 0x10010 using reference block at/about 0xFF00" );
      PUSH_TX( db, xfer_tx, 0 );
      generate_block();
   }
   catch (fc::exception& e)
   {
      edump((e.to_detail_string()));
      throw;
   }
}

//BOOST_FIXTURE_TEST_CASE(bulk_discount, database_fixture)
//{ try {
//   ACTOR(nathan);
//   // Give nathan ALLLLLL the money!
//   transfer(GRAPHENE_COMMITTEE_ACCOUNT, nathan_id, db.get_balance(GRAPHENE_COMMITTEE_ACCOUNT, asset_id_type()));
//   enable_fees();//GRAPHENE_BLOCKCHAIN_PRECISION*10);
//   upgrade_to_lifetime_member(nathan_id);
//   share_type new_fees;
//   while( nathan_id(db).statistics(db).lifetime_fees_paid + new_fees < GRAPHENE_DEFAULT_BULK_DISCOUNT_THRESHOLD_MIN )
//   {
//      transfer(nathan_id, GRAPHENE_COMMITTEE_ACCOUNT, asset(1));
//      new_fees += db.current_fee_schedule().calculate_fee(transfer_operation()).amount;
//   }
//   generate_blocks(db.get_dynamic_global_properties().next_maintenance_time);
//   enable_fees();//GRAPHENE_BLOCKCHAIN_PRECISION*10);
//   asset old_cashback;
//   if(nathan.cashback_vb.valid())
//      old_cashback = nathan.cashback_balance(db).balance;
//
//   transfer(nathan_id, GRAPHENE_COMMITTEE_ACCOUNT, asset(1));
//   generate_blocks(db.get_dynamic_global_properties().next_maintenance_time);
//   enable_fees();//GRAPHENE_BLOCKCHAIN_PRECISION*10);
//
//   BOOST_CHECK_EQUAL(nathan_id(db).cashback_balance(db).balance.amount.value,
//                     old_cashback.amount.value + GRAPHENE_BLOCKCHAIN_PRECISION * 8);
//
//   new_fees = 0;
//   while( nathan_id(db).statistics(db).lifetime_fees_paid + new_fees < GRAPHENE_DEFAULT_BULK_DISCOUNT_THRESHOLD_MAX )
//   {
//      transfer(nathan_id, GRAPHENE_COMMITTEE_ACCOUNT, asset(1));
//      new_fees += db.current_fee_schedule().calculate_fee(transfer_operation()).amount;
//   }
//   generate_blocks(db.get_dynamic_global_properties().next_maintenance_time);
//   enable_fees();//GRAPHENE_BLOCKCHAIN_PRECISION*10);
//   old_cashback = nathan_id(db).cashback_balance(db).balance;
//
//   transfer(nathan_id, GRAPHENE_COMMITTEE_ACCOUNT, asset(1));
//   generate_blocks(db.get_dynamic_global_properties().next_maintenance_time);
//
//   BOOST_CHECK_EQUAL(nathan_id(db).cashback_balance(db).balance.amount.value,
//                     old_cashback.amount.value + GRAPHENE_BLOCKCHAIN_PRECISION * 9);
//} FC_LOG_AND_RETHROW() }

BOOST_AUTO_TEST_SUITE_END()