peerplays_migrated/tests/generate_empty_blocks/main.cpp
gladcow 499e318199 [SON-107] Merge develop branch to SONs-base (#166)
* fix rng and get_winner_numbers implemented

* copied code for bitshares fixing 429 and 433 issues

* ticket_purchase_operation implemented. added lottery_options to asset

* lottery end implemented

* minor logic changes. added db_api and cli_wallet methods

* fix reindex on peerplays network

* fix some tests. add gitlab-ci.yml

* add pull to gitlab-ci

* fix

* fix and comment some tests

* added owner to lottery_asset_options. commented async call in on_applied_block callback

* added get_account_lotteries method to db_api and cli, lottery end_date and ticket_price verification

* merge get_account_lotteries branch. fix create_witness test

* fix test genesis and end_date verification

* fixed indices sorting and lottery end checking by date

* update db_version for replay and removed duplicate include files

* Added ntp and upgraded boost version

* Revert "GPOS protocol"

* need to remove backup files

* virtual-op-fix for deterministic virtual_op number

* Merged beatrice into 5050

* Updated gitmodules, changes to allow voting on lottery fee

* Removed submodule libraries/fc

* Added libraries/fc

* added missing comma in types.hpp

* Added sweeps parameters to parameter_extension

* added missing comma in operations.hpp, small changes to config.hpp

* fixed return type in chain_parameters.hpp

* removed sweeps_parameter_extensions

* Changed fc library

* fixed asset_object

* Changed peerplays-fc submodule

* Changed fc submodule to ubuntu 18.04 upgrade

* Removed submodule libraries/fc

* Added fc library back

* fix casting in overloaded function

* Removed blind_sign and unblind_signature functions

* Added new lottery_asset_create_operation

* Changed sweeps hardfork time

* Removed redundant if from asset_evaluator and fixed db_notify

* fixed duplicate code in fee_tests

* removed redundant tgenesis file

* Enable building on Ubuntu 18.04 using GCC 7 compiler

* fix: is_benefactor_reward had the default value of true when not set

* Docker file for Ubuntu 18.04

Base image updated to Ubuntu 18.04
Prerequisite list updated
Basic configuration updated

* Quick fix: Added missing package pkg-config

* Docker file updates

* 5050 fee update and compilation error fix

* Dockerfile, set system locale

Prevents locale::facet::_S_create_c_locale name error

* Update README.md

Fix typo

* Update README.md

* Changed hardfork time for SWEEPS and Core-429

* revert master changes that were brought in previous commit

* Fixed error when account_history_object with id 0 doesn't exist

* Fixed error while loading object database

* test for zero id object in account history

* Reorder operations in Dockerfile, to make image creation faster

- Reorder prevents unnecessary building of Boost libraries

* Fix for irrelevant signature included issue

* fix copyright messages order

* remove double empty lines

* Backport fix for `get_account_history` from https://github.com/bitshares/bitshares-core/pull/628 and add additional account history test case

* NTP client back

* GRPH-53-Log_format_error

* Merge pull request #1036 from jmjatlanta/issue_730

Add fail_reason to proposal_object

* Unit test case fixes and prepared SONs base

* Use offsetof instead of custom macro

* Hide some compiler warnings

* Make all the tests compile

* Add nullptr check in api.cpp for easier testing

* Add test case for broadcast_trx_with_callback API

* Unit test case fixes and prepared SONs base

* Merge pull request #714 from pmconrad/json_fix

JSON fix

* Increase max depth for trx confirmation callback

* Adapt to variant API with `max_depth` argument

* Update fc submodule

* Created unit test for #325

* remove needless find()

* GRPH-4-CliWallet_crash_ctrlD

* fix copyright message

* Make all the tests compile

* increase delay for node connection

* Increase block creation timeout to 2500ms

* remove cache from cli get_account

* add cli tests framework

* Adjust newly merged code to new API

* Improved resilience of block database against corruption

* Merged changes from Bitshares PR 1036

* GRPH-76 - Short-cut long sequences of missed blocks

Fixes database::update_global_dynamic_data to speed up counting missed blocks.
(This also fixes a minor issue with counting - the previous algorithm would skip missed blocks for the witness who signed the first block after the gap.)
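
  (For illustration only: a minimal sketch of the short-cut idea described above, not the actual database::update_global_dynamic_data code. The counter struct and the scheduled_witness_index helper are hypothetical names; the point is that a long gap is charged in whole rounds per witness instead of walking every missed slot.)

    #include <cstdint>
    #include <functional>
    #include <vector>

    // Hypothetical per-witness counter; the real chain keeps this on witness objects.
    struct witness_miss_counter { uint64_t total_missed = 0; };

    void count_missed_blocks( uint32_t missed_slots,
                              std::vector<witness_miss_counter>& counters,
                              const std::function<size_t(uint32_t)>& scheduled_witness_index )
    {
       const uint32_t num_witnesses = static_cast<uint32_t>( counters.size() );
       if( num_witnesses == 0 || missed_slots == 0 )
          return;
       if( missed_slots > 2 * num_witnesses )
       {
          // Short-cut: in a long gap every witness missed roughly the same number
          // of whole rounds, so bump all counters at once ...
          const uint64_t full_rounds = missed_slots / num_witnesses;
          for( auto& c : counters )
             c.total_missed += full_rounds;
          missed_slots %= num_witnesses;
       }
       // ... and only walk the short remainder slot by slot.
       for( uint32_t slot = 1; slot <= missed_slots; ++slot )
          counters[ scheduled_witness_index( slot ) ].total_missed++;
    }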

* Moved reindex logic into database / chain_database, make use of additional blocks in block_database

Fixed tests wrt db.open

* Enable undo + fork database for final blocks in a replay

Don't remove blocks from block db when popping blocks, handle edge case in replay wrt fork_db, adapted unit tests

* Log starting block number of replay

* Prevent unsigned integer underflow

* Fixed lock detection

* Don't leave _data_dir empty if db is locked

* Writing the object_database is now almost atomic

* Improved consistency check for block_log

* Cut back block_log index file if inconsistent

* Fixed undo_database

* Added test case for broken merge on empty undo_db

* Merge pull request #938 from bitshares/fix-block-storing

Store correct block ID when switching forks

* exclude second undo_db.enable() call in some cases

* Add missing change

* change bitshares to core in message

* Fixed integer overflow issue

* Fix for history ID mismatch (Bitshares PR #875)

* Update the FC submodule with the changes for GRPH-4

* Fix #436 object_database created outside of witness data directory

* add more comments on the database::_opened variable

* prevent segfault when destructing application obj

* Fixed duplicate ops returned from get_account_history

* minor performance improvement

* Added comment

* Merged Bitshares PR #1462 and compilation fixes

* Support/gitlab (#123)

* Updated gitlab process

* Fix undefined references in cli test

* Fixed test failures and compilation issue

* Fixed account_history_pagination test

* Fix compilation in debug mode

* Removed unrelated comment

* Skip auth check when pushing self-generated blocks

* Extract public keys before pushing a transaction

* Dereference chain_database shared_ptr

* Updated transaction::signees to mutable

and
* updated get_signature_keys() to return a const reference,
* get_signature_keys() will update signees on first call,
* modified test cases and wallet.cpp accordingly,
* no longer construct a new signed_transaction object before pushing
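
  (A minimal sketch of the mutable lazy-cache pattern the bullets above describe; example_signed_transaction and recover_key are stand-in names, not the real graphene::chain types.)

    #include <set>
    #include <string>

    struct example_signed_transaction
    {
       std::set<std::string> signatures;          // stand-in for the real signature type

       // 'mutable' lets the const getter fill the cache on first use.
       mutable std::set<std::string> signees;

       // Returns a const reference so callers never copy the key set; the
       // expensive key-recovery work runs only on the first call.
       const std::set<std::string>& get_signature_keys() const
       {
          if( signees.empty() )
             for( const auto& sig : signatures )
                signees.insert( recover_key( sig ) );
          return signees;
       }

    private:
       // Placeholder for real public-key recovery from a signature.
       static std::string recover_key( const std::string& sig ) { return "PUB:" + sig; }
    };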

* Added get_asset_count API

* Allow sufficient space for new undo_session

* Throw for deep nesting

* No longer extract public keys before pushing a trx

and removed the unused newly added constructor and _get_signature_keys() function from the signed_transaction struct

* Added cli_test to CI

* use random port numbers in app_test (#154)

* proposal fail_reason bug fixed (#157)

* Added Sonarcloud code_quality to CI (#159)

* Added sonarcloud analysis (#158)

* fix for lottery end

* fix declarations

* fix declarations

* fix boost integer

* fix compilation

* fix chain tests

* fix app_test

* try to fix cli test

* fix incorrect max_depth param

* working cli test

* correct fc version

/*
* Copyright (c) 2015 Cryptonomex, Inc., and contributors.
*
* The MIT License
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <algorithm>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <fc/io/fstream.hpp>
#include <fc/io/json.hpp>
#include <fc/io/stdio.hpp>
#include <fc/smart_ref_impl.hpp>
#include <graphene/app/api.hpp>
#include <graphene/chain/protocol/protocol.hpp>
#include <graphene/egenesis/egenesis.hpp>
#include <graphene/utilities/key_conversion.hpp>
#include <boost/filesystem.hpp>
#ifndef WIN32
#include <csignal>
#endif
using namespace graphene::app;
using namespace graphene::chain;
using namespace graphene::utilities;
using namespace std;
namespace bpo = boost::program_options;

// hack: import create_example_genesis() even though it's a way, way
// specific internal detail
namespace graphene { namespace app { namespace detail {
genesis_state_type create_example_genesis();
} } } // graphene::app::detail

int main( int argc, char** argv )
{
   try
   {
      bpo::options_description cli_options("Graphene empty blocks");
      cli_options.add_options()
            ("help,h", "Print this help message and exit.")
            ("data-dir", bpo::value<boost::filesystem::path>()->default_value("empty_blocks_data_dir"), "Directory containing generator database")
            ("genesis-json,g", bpo::value<boost::filesystem::path>(), "File to read genesis state from")
            ("genesis-time,t", bpo::value<uint32_t>()->default_value(0), "Timestamp for genesis state (0=use value from file/example)")
            ("num-blocks,n", bpo::value<uint32_t>()->default_value(1000000), "Number of blocks to generate")
            ("miss-rate,r", bpo::value<uint32_t>()->default_value(3), "Percentage of blocks to miss")
            ("verbose,v", "Enter verbose mode")
            ;

      bpo::variables_map options;
      try
      {
         boost::program_options::store( boost::program_options::parse_command_line(argc, argv, cli_options), options );
      }
      catch (const boost::program_options::error& e)
      {
         std::cerr << "empty_blocks: error parsing command line: " << e.what() << "\n";
         return 1;
      }

      if( options.count("help") )
      {
         std::cout << cli_options << "\n";
         return 0;
      }

      fc::path data_dir;
      if( options.count("data-dir") )
      {
         data_dir = options["data-dir"].as<boost::filesystem::path>();
         if( data_dir.is_relative() )
            data_dir = fc::current_path() / data_dir;
      }

      genesis_state_type genesis;
      if( options.count("genesis-json") )
      {
         fc::path genesis_json_filename = options["genesis-json"].as<boost::filesystem::path>();
         std::cerr << "empty_blocks: Reading genesis from file " << genesis_json_filename.preferred_string() << "\n";
         std::string genesis_json;
         read_file_contents( genesis_json_filename, genesis_json );
         genesis = fc::json::from_string( genesis_json ).as< genesis_state_type >(20);
      }
      else
         genesis = graphene::app::detail::create_example_genesis();

      uint32_t timestamp = options["genesis-time"].as<uint32_t>();
      if( timestamp != 0 )
      {
         genesis.initial_timestamp = fc::time_point_sec( timestamp );
         std::cerr << "empty_blocks: Genesis timestamp is " << genesis.initial_timestamp.sec_since_epoch() << " (from CLI)\n";
      }
      else
         std::cerr << "empty_blocks: Genesis timestamp is " << genesis.initial_timestamp.sec_since_epoch() << " (from state)\n";

      bool verbose = (options.count("verbose") != 0);
      uint32_t num_blocks = options["num-blocks"].as<uint32_t>();
      uint32_t miss_rate = options["miss-rate"].as<uint32_t>();

      fc::ecc::private_key nathan_priv_key = fc::ecc::private_key::regenerate(fc::sha256::hash(string("nathan")));

      database db;
      fc::path db_path = data_dir / "db";
      db.open(db_path, [&]() { return genesis; }, "TEST" );

      uint32_t slot = 1;
      uint32_t missed = 0;
      for( uint32_t i = 1; i < num_blocks; ++i )
      {
         // Produce a block in the currently selected slot, signed with the
         // well-known "nathan" test key.
         signed_block b = db.generate_block(db.get_slot_time(slot), db.get_scheduled_witness(slot), nathan_priv_key, database::skip_nothing);
         FC_ASSERT( db.head_block_id() == b.id() );

         // Use the block digest as a cheap pseudo-random source: while the value
         // modulo 100 falls below the miss rate, skip ahead one slot so the
         // scheduled witness misses its block.
         fc::sha256 h = b.digest();
         uint64_t rand = h._hash[0];
         slot = 1;
         while(true)
         {
            if( (rand % 100) < miss_rate )
            {
               slot++;
               rand = (rand/100) ^ h._hash[slot&3];
               missed++;
            }
            else
               break;
         }

         witness_id_type prev_witness = b.witness;
         witness_id_type cur_witness = db.get_scheduled_witness(1);
         if( verbose )
         {
            wdump( (prev_witness)(cur_witness) );
         }
         else if( (i%10000) == 0 )
         {
            std::cerr << "\rblock #" << i << " missed " << missed;
         }
         if( slot == 1 ) // can possibly get consecutive production if block missed
         {
            FC_ASSERT( cur_witness != prev_witness );
         }
      }
      std::cerr << "\n";
      db.close();
   }
   catch ( const fc::exception& e )
   {
      std::cout << e.to_detail_string() << "\n";
      return 1;
   }
   return 0;
}