Compare commits

...

327 commits

Author SHA1 Message Date
Bobinson K B
d3d967a2d7 Merge branch 'hotfix/bookie2024' into 'master'
Bookie 2024

See merge request PBSA/peerplays!259
2023-12-18 06:17:23 +00:00
serkixenos
178756bd34 Bookie 2024 2023-12-18 06:17:22 +00:00
Bobinson K B
1f70857d64 Merge branch 'beatrice' into 'master'
Mainnet release

See merge request PBSA/peerplays!251
2023-10-06 10:50:31 +00:00
Vlad Dobromyslov
97e85a849d Merge branch 'develop' into 'beatrice'
Set HARDFORK_SON_FOR_ETHEREUM_TIME to 24 of October

See merge request PBSA/peerplays!250
2023-10-04 16:51:45 +00:00
Vlad Dobromyslov
dc4cdd6e4b Set HARDFORK_SON_FOR_ETHEREUM_TIME to 24 of October 2023-10-04 16:51:45 +00:00
Vlad Dobromyslov
1472066af6 Merge branch 'develop' into 'beatrice'
Fixes for 1.5.25-beta

See merge request PBSA/peerplays!249
2023-10-03 16:24:00 +00:00
Vlad Dobromyslov
a641b8e93f Fixes for 1.5.25-beta 2023-10-03 16:23:59 +00:00
Vlad Dobromyslov
aa099f960f Merge branch 'develop' into 'beatrice'
Fixes for 1.5.24-beta

See merge request PBSA/peerplays!246
2023-08-23 14:31:39 +00:00
Vlad Dobromyslov
f0654e5ffd Fixes for 1.5.24-beta 2023-08-23 14:31:38 +00:00
Vlad Dobromyslov
9fe351300b Merge branch 'develop' into 'beatrice'
Set test-e2e as manual

See merge request PBSA/peerplays!242
2023-07-17 12:49:53 +00:00
Vlad Dobromyslov
5fd79c3e78 Set test-e2e as manual 2023-07-17 12:49:53 +00:00
Vlad Dobromyslov
b56818b8ae Merge branch 'develop' into 'beatrice'
Change DB_VERSION to PPY2.5

See merge request PBSA/peerplays!240
2023-07-13 16:44:03 +00:00
Vlad Dobromyslov
bc0fbeb707 Change DB_VERSION to PPY2.5 2023-07-13 16:44:03 +00:00
Vlad Dobromyslov
accd334a86 Merge branch 'develop' into 'beatrice'
NEW HARDFORK TIME FOR SON ETH

See merge request PBSA/peerplays!238
2023-07-07 12:10:44 +00:00
Vlad Dobromyslov
84a66c6722 NEW HARDFORK TIME FOR SON ETH 2023-07-07 12:10:44 +00:00
Vlad Dobromyslov
abd446d80b Merge branch 'develop' into 'beatrice'
Fix balance discrepancies in 1.5.23-beta

See merge request PBSA/peerplays!235
2023-07-06 05:31:28 +00:00
Vlad Dobromyslov
93fb57c080 Fix balance discrepancies in 1.5.23-beta 2023-07-06 05:31:28 +00:00
Bobinson K B
a8845ffde9 Merge branch 'develop' into 'beatrice'
Fix issue with balance discrepancies in 1.5.23-beta

See merge request PBSA/peerplays!232
2023-06-20 07:37:24 +00:00
Vlad Dobromyslov
435c1f8e96 Fix issue with balance discrepancies in 1.5.23-beta 2023-06-20 07:37:24 +00:00
Bobinson K B
1123ff6f93 Merge branch 'develop' into 'beatrice'
Fixes for public testnet

See merge request PBSA/peerplays!230
2023-06-09 08:10:26 +00:00
Vlad Dobromyslov
c34415b403 Fixes for public testnet 2023-06-09 08:10:26 +00:00
Christopher Sanborn
daca2813ef Merge branch 'testnet-set-hf-dates' into 'beatrice'
Set Hard Fork dates for testnet and mainnet
2023-05-25 13:25:09 -04:00
Christopher Sanborn
0b37a48b02 Set Hard Fork dates for testnet and main net. 2023-05-25 13:23:05 -04:00
Bobinson K B
e3b10cf1ec Merge branch 'testnet-builds' into 'beatrice'
Updated build rules for mainnet and testnet

See merge request PBSA/peerplays!223
2023-05-17 16:25:03 +00:00
Rily Dunlap
f5c6a6310b Updated build rules for mainnet and testnet 2023-05-17 14:50:10 +00:00
Bobinson K B
75ee6fbed3 Merge branch 'develop' into 'beatrice'
New set of functionality

See merge request PBSA/peerplays!220
2023-05-16 11:46:25 +00:00
Vlad Dobromyslov
7516126d01 New set of functionality 2023-05-16 11:46:25 +00:00
Vlad Dobromyslov
f32a51d03b Merge branch 'develop' into 'beatrice'
Alpha release 06.03.2023

See merge request PBSA/peerplays!215
2023-03-06 16:57:45 +00:00
Vlad Dobromyslov
c421453621 Merge branch 'bug/513-num_son_merge' into 'develop'
num_son no overwriting

See merge request PBSA/peerplays!213
2023-03-06 15:50:17 +00:00
Milo M
54ff842db1 num_son no overwriting 2023-03-06 15:50:17 +00:00
Vlad Dobromyslov
16ba10ffab Merge branch 'feature/SON-connection-pool' into 'develop'
SON connection pool

See merge request PBSA/peerplays!181
2023-03-01 06:01:52 +00:00
timur
79974280c0 SON connection pool 2023-03-01 06:01:52 +00:00
Vlad Dobromyslov
5867a8ae27 Merge branch 'bug/501-connection-pool' into 'develop'
#501 - concurrent_unordered_set for connection

See merge request PBSA/peerplays!212
2023-02-27 13:34:41 +00:00
serkixenos
741534c47f Merge branch 'bug/509-hive-withdrawal' into 'develop'
#509 - fix hive withdrawal processing

See merge request PBSA/peerplays!211
2023-02-24 09:37:55 +00:00
serkixenos
f3227fb33d Merge branch 'bug/421-fix-double' into 'develop'
automated test for nft_lottery

See merge request PBSA/peerplays!208
2023-02-24 09:36:10 +00:00
Milo M
bfb961c7be automated test for nft_lottery 2023-02-24 09:36:10 +00:00
serkixenos
5e08b793c5 Merge branch 'bug/495-hive-wallet-update' into 'develop'
#495 hive wallet update

See merge request PBSA/peerplays!210
2023-02-24 09:31:51 +00:00
Vlad Dobromyslov
7af3d037b5 #495 hive wallet update 2023-02-24 09:31:51 +00:00
Vlad Dobromyslov
2788281062 #501 - concurrent_unordered_set for connection 2023-02-23 17:55:49 +02:00
serkixenos
4e2850f826 Fix libbitcoin build in docker and related README instructions 2023-02-17 05:45:33 +01:00
Vlad Dobromyslov
f477af6771 #509 - fix hive withdrawal processing 2023-02-16 12:45:03 +02:00
serkixenos
80d168e5b6 Merge branch 'bug/421-fix-double' into 'develop'
#421 fix double in consensus of nft_lottery_token_purchase

See merge request PBSA/peerplays!205
2023-02-10 14:50:23 +00:00
Milos Milosevic
e44ed0cfe5 #421 fix double in consensus of nft_lottery_token_purchase 2023-02-10 14:50:23 +00:00
serkixenos
142cf5b903 Merge branch 'develop' into 'beatrice'
Merge develop to beatrice 2023-02

See merge request PBSA/peerplays!206
2023-02-10 13:56:17 +00:00
serkixenos
ebc1529c48 Merge branch 'bug/492-eth-withdrawal' into 'develop'
#492 - fix withdrawal encoders for big numbers

See merge request PBSA/peerplays!207
2023-02-10 13:54:53 +00:00
Vlad Dobromyslov
19e0911d64 #492 - fix withdrawal encoders for big numbers 2023-02-10 13:34:53 +02:00
serkixenos
70cd09495e Merge branch 'feature/libbitcoin-son-final' into 'develop'
Bitcoin SON based on libbitcoin

See merge request PBSA/peerplays!164
2023-02-06 22:48:40 +00:00
Davor Hirunda
e9c7021e16 Bitcoin SON based on libbitcoin 2023-02-06 22:48:40 +00:00
serkixenos
9c9aaa03d3 Merge branch 'bug/506-zero-fees' into 'develop'
#506 - fix load fees from genesis file

See merge request PBSA/peerplays!203
2023-02-06 20:57:51 +00:00
serkixenos
038fa37cc6 Merge branch 'bug/507/bitcoin_regression_detected_on_develop_branch' into 'develop'
Fix the issue for wrong number of signers

See merge request PBSA/peerplays!204
2023-02-06 20:48:11 +00:00
hirunda
73b2ba635b Fix the issue for wrong number of signers 2023-02-06 21:09:28 +01:00
Vlad Dobromyslov
936f13d2a1 #506 - fix load fees from genesis file 2023-02-06 08:14:05 +02:00
Vlad Dobromyslov
fc1cdf2629 Fix build errors active_sidechain_types 2023-02-01 20:54:44 +02:00
serkixenos
71f0806b25 Merge branch 'invalid_son_number_reflection' into 'develop'
Fix replay blockchain

See merge request PBSA/peerplays!202
2023-02-01 17:13:03 +00:00
Vlad Dobromyslov
da3a858aa6 Fix replay blockchain 2023-02-01 17:13:02 +00:00
serkixenos
1883f97be2 Merge branch 'bug/500-primary-wallet-transaction' into 'develop'
#500 - fix son_wallet_update_operation

See merge request PBSA/peerplays!201
2023-02-01 12:35:58 +00:00
serkixenos
559769db2b Merge branch 'feature/473-erc20-support' into 'develop'
#473 erc20-support

See merge request PBSA/peerplays!198
2023-01-31 10:48:45 +00:00
Vlad Dobromyslov
0b64f0cfcc #473 erc20-support 2023-01-31 10:48:45 +00:00
serkixenos
96d737fbc2 Merge branch 'master' into beatrice 2023-01-31 11:45:36 +01:00
Vlad Dobromyslov
d89e5e1f23 #500 - fix son_wallet_update_operation 2023-01-30 19:27:27 +02:00
Bobinson K B
6f472d3d3b Merge branch 'develop' into 'beatrice'
Merge develop to beatrice 2022-12

See merge request PBSA/peerplays!193
2023-01-03 07:33:40 +00:00
serkixenos
cb60cbe5d1 Merge branch 'hotfix/extend_get_block_api' into 'develop'
Streamline get_block API from database and cli wallet

See merge request PBSA/peerplays!197
2022-12-28 07:45:45 +00:00
serkixenos
576c54a260 Merge branch 'bug/499-eth-son_wallet_deposit_process_operation' into 'develop'
#499 - son_wallet_deposit_process_operation approve fix

See merge request PBSA/peerplays!196
2022-12-28 07:45:18 +00:00
serkixenos
2f5e12b28e Merge branch 'bug/496-process_primary_wallet' into 'develop'
#496 process primary wallet

See merge request PBSA/peerplays!195
2022-12-28 07:44:29 +00:00
Vlad Dobromyslov
68fbd6f40b #496 process primary wallet 2022-12-28 07:44:29 +00:00
serkixenos
674b38910d Streamline get_block API from database and cli wallet 2022-12-28 08:23:29 +01:00
Vlad Dobromyslov
d264398a6f #499 - son_wallet_deposit_process_operation approve fix 2022-12-23 15:37:15 +02:00
serkixenos
c1d5691ce2 Merge branch 'feature/nft_get_metadata_by_owner' into 'develop'
Add or revive a few NFT-listing APIs

See merge request PBSA/peerplays!194
2022-12-21 17:45:42 +00:00
timur
ca69a692cc Add or revive a few NFT-listing APIs 2022-12-21 17:45:42 +00:00
serkixenos
bd041bc13f Merge branch 'bug/489-error-message-gas-prediction' into 'develop'
#489 - Fix error message

See merge request PBSA/peerplays!190
2022-12-21 17:24:55 +00:00
serkixenos
6037e89df0 Merge branch 'hive-fix-son-account-owner-authority' into 'develop'
Update Hive's son-account owner authority on primary wallet update

See merge request PBSA/peerplays!192
2022-12-16 04:32:04 +00:00
serkixenos
bb7c534b10 Update Hive's son-account owner authority on primary wallet update 2022-12-16 04:27:31 +01:00
serkixenos
1be6636bbf Merge branch 'bug/api-doc-generation-hotfix-2' into 'develop'
Fix Ubuntu 18 build

See merge request PBSA/peerplays!185
2022-12-16 00:34:34 +00:00
timur
578edc56d8 Fix Ubuntu 18 build 2022-12-16 00:34:33 +00:00
Vlad Dobromyslov
d387e324fe #489 - Fix error message 2022-12-14 15:52:28 +02:00
serkixenos
1bf5c82101 Merge branch 'bug/484-multiple-eth-withdrawals' into 'develop'
#484 multiple eth withdrawals

See merge request PBSA/peerplays!189
2022-12-08 14:03:14 +00:00
Vlad Dobromyslov
4883dfe38d #484 multiple eth withdrawals 2022-12-08 14:03:14 +00:00
serkixenos
249276b009 Merge branch 'feature/479-one-bunch-transaction' into 'develop'
#479 one bunch transaction

See merge request PBSA/peerplays!188
2022-12-08 14:02:53 +00:00
Vlad Dobromyslov
ab1e08a756 #479 one bunch transaction 2022-12-08 14:02:53 +00:00
serkixenos
440e4fbb43 Merge branch 'bug/482-burn-eth-from-son-account' into 'develop'
#482 burn eth from son account

See merge request PBSA/peerplays!186
2022-12-06 12:17:09 +00:00
Vlad Dobromyslov
42b3890a7c #482 burn eth from son account 2022-12-06 12:17:09 +00:00
serkixenos
5dff0830fb Merge branch 'feature/479-one-bunch-transaction' into 'develop'
#479 - Send one transaction for all owners

See merge request PBSA/peerplays!183
2022-12-01 01:53:39 +00:00
Vlad Dobromyslov
c3eab0a80b #479 - Send one transaction for all owners 2022-12-01 01:53:39 +00:00
serkixenos
12c0c66f4b Merge branch 'feature/478-estimate-transaction-fee' into 'develop'
#478 - fix information warning

See merge request PBSA/peerplays!184
2022-12-01 01:53:01 +00:00
Vlad Dobromyslov
b7113c4ff3 #478 - fix information warning 2022-11-28 09:35:00 +02:00
serkixenos
d76b752c8c Merge branch 'develop' into 'beatrice'
Merge develop to beatrice 2022-11

See merge request PBSA/peerplays!180
2022-11-24 17:48:46 +00:00
serkixenos
804376b149 Merge branch 'beatrice' into develop 2022-11-23 17:35:57 +01:00
serkixenos
1b340345f3 Merge branch 'feature/478-estimate-transaction-fee' into 'develop'
#478 estimate transaction fee

See merge request PBSA/peerplays!179
2022-11-22 20:44:07 +00:00
Vlad Dobromyslov
022fdeb40a #478 estimate transaction fee 2022-11-22 20:44:07 +00:00
serkixenos
f6d22466fd Merge branch 'bug/462/investigate_trx_which_caused_mainnet_halt' into 'develop'
Fix for undo crash

See merge request PBSA/peerplays!172
2022-11-15 22:34:05 +00:00
Davor Hirunda
811d68ef4d Fix for undo crash 2022-11-15 22:34:05 +00:00
serkixenos
a6da2a6413 Merge branch 'bug/433-choosing-active-sons' into 'develop'
#433 Down active sons are not substituted

See merge request PBSA/peerplays!177
2022-11-15 22:33:04 +00:00
Milos Milosevic
8853a76752 #433 Down active sons are not substituted 2022-11-15 22:33:04 +00:00
serkixenos
f209ab8ee6 Merge branch 'bug/481-ethereum-listener' into 'develop'
#481 ethereum listener

See merge request PBSA/peerplays!178
2022-11-14 13:42:24 +00:00
Vlad Dobromyslov
9620e3c211 #481 ethereum listener 2022-11-14 13:42:23 +00:00
serkixenos
3ebcd29e10 Merge branch 'bug/476-fix-v-signing-value' into 'develop'
#476 - fix calculating v value from chain id

See merge request PBSA/peerplays!174
2022-11-10 11:35:14 +00:00
serkixenos
d5b2b7aeda Merge branch 'bug/470-get-network_id' into 'develop'
#470 Don't use admin_nodeInfo when get network_id

See merge request PBSA/peerplays!173
2022-11-10 11:35:07 +00:00
Vlad Dobromyslov
759dac5d41 #476 - fix calculating v value from chain id 2022-11-07 12:16:59 +02:00
Bobinson K B
058937a3ee Merge branch 'hotfix/sidechain-address' into 'master'
Hotfix: Fix sidechain address editing

See merge request PBSA/peerplays!171
2022-11-04 11:38:30 +00:00
serkixenos
da73e31038 Hotfix: Fix sidechain address editing 2022-11-04 11:38:30 +00:00
Vlad Dobromyslov
a30325660d #470 Don't use admin_nodeInfo when get network_id 2022-11-04 07:51:22 +02:00
serkixenos
d5d6390030 Merge branch 'bug/461-system_error-exception' into 'develop'
#461 - handle exception when execute POST request

See merge request PBSA/peerplays!165
2022-10-30 02:30:59 +00:00
serkixenos
400c3cfb89 Merge branch 'bug/467-prev-set-active-sons' into 'develop'
#467 - fix create_primary_wallet_transaction signers

See merge request PBSA/peerplays!169
2022-10-30 02:30:22 +00:00
Vlad Dobromyslov
aa90f715fd #467 - fix create_primary_wallet_transaction signers 2022-10-30 02:30:21 +00:00
Davor Hirunda
c3b2a598b4 Fix for undo crash 2022-10-30 03:26:53 +01:00
serkixenos
06bc65cc79 #450 Add documentation for undocumented methods 2022-10-30 03:21:18 +01:00
serkixenos
8e8142235a Merge branch 'bug/issue-463' into 'develop'
#463 Fix unused variable son_count_histogram_buffer in Release mode

See merge request PBSA/peerplays!166
2022-10-19 17:37:23 +00:00
Milos Milosevic
194fa6abfa #463 Fix unused variable son_count_histogram_buffer in Release mode 2022-10-19 17:37:23 +00:00
Vlad Dobromyslov
283fbd28f7 #461 - handle exception when execute POST request 2022-10-18 10:19:03 +03:00
serkixenos
71c113c190 Merge branch 'issue-436-cli_tests' into 'develop'
Updated CLI Tests [Issue 436]

See merge request PBSA/peerplays!159
2022-10-11 00:13:04 +00:00
Meheboob Khan
846366139f Updated CLI Tests [Issue 436] 2022-10-11 00:13:04 +00:00
serkixenos
0856e898bb Merge branch 'update-cli-wallet-docs' into 'develop'
Update cli wallet docs

See merge request PBSA/peerplays!162
2022-10-03 17:34:01 +00:00
serkixenos
4db9f3a15b Update cli wallet docs 2022-10-03 17:34:01 +00:00
serkixenos
1b1df25023 Merge branch 'bug/fix-api-doc-generation-for-map' into 'develop'
Fix API docs generation for map<> and flat_map<>

See merge request PBSA/peerplays!161
2022-10-03 17:31:59 +00:00
serkixenos
f9314a4c0c Merge branch 'bug/457-multithread-son-processing' into 'develop'
bug/457-multithread-son-processing

See merge request PBSA/peerplays!160
2022-10-03 17:04:56 +00:00
Vlad Dobromyslov
d4c015d400 bug/457-multithread-son-processing 2022-10-03 17:04:56 +00:00
timur
e6474f5f2a Fix API docs generation for map<> and flat_map<> 2022-10-01 19:11:54 -03:00
serkixenos
f2f4b57ced Merge branch 'bug/455-sidechain-enabled' into 'develop'
bug/455-sidechain-enabled

See merge request PBSA/peerplays!157
2022-09-27 13:58:07 +00:00
Vlad Dobromyslov
2c95ac0b9d bug/455-sidechain-enabled 2022-09-27 13:58:07 +00:00
serkixenos
9831579bfe Merge branch 'feature/update-debug-node' into 'develop'
Update debug_witness plugin

See merge request PBSA/peerplays!153
2022-09-27 12:25:33 +00:00
timur
2d6dec5943 Update debug_witness plugin 2022-09-27 12:25:33 +00:00
serkixenos
46f4770071 Merge branch 'bug/451-update-son-list-on-maintenance' into 'develop'
bug/451-update-son-list-on-maintenance

See merge request PBSA/peerplays!154
2022-09-27 10:41:35 +00:00
Vlad Dobromyslov
2accee53e2 bug/451-update-son-list-on-maintenance 2022-09-27 10:41:35 +00:00
serkixenos
54a11e7662 Merge branch 'issue-389' into 'develop'
Added functionality to check if the device size is less than 50 MB [Issue 389]

See merge request PBSA/peerplays!151
2022-09-26 20:14:32 +00:00
Meheboob Khan
c1f93f58ee Added functionality to check if the device size is less than 50 MB [Issue 389] 2022-09-26 20:14:31 +00:00
serkixenos
2fd6f60112 Merge branch 'feature/son-for-ethereum' into 'develop'
SON for Ethereum

See merge request PBSA/peerplays!133
2022-09-19 19:23:39 +00:00
serkixenos
5f97eb7662 SON for Ethereum 2022-09-19 19:23:39 +00:00
serkixenos
b4e8b76a30 Merge branch 'bug/fix-wallet-api-doc-generation' into 'develop'
Bug/fix wallet api doc generation

See merge request PBSA/peerplays!152
2022-09-16 02:42:31 +00:00
timur
a9267544de Bug/fix wallet api doc generation 2022-09-16 02:42:31 +00:00
serkixenos
e287d8a845 Merge branch 'issue-430' into 'develop'
Improved get_active_sons and get_son_network_status API/CLI [issue 430]

See merge request PBSA/peerplays!150
2022-09-14 18:03:40 +00:00
Meheboob Khan
0f64947f4a Improved get_active_sons and get_son_network_status API/CLI [issue 430] 2022-09-14 18:03:40 +00:00
serkixenos
b895b52b7b Cherrypick important fixes/cosmetics from feature/son-for-ethereum 2022-09-08 17:13:18 +02:00
serkixenos
9c5aab826d Merge branch 'feature/update-delayed-node' into 'develop'
Update delayed node feature

See merge request PBSA/peerplays!145
2022-09-07 13:57:17 +00:00
serkixenos
6a38fb2382 Update delayed node feature 2022-09-07 13:57:17 +00:00
serkixenos
0a9a324277 Merge branch 'port_net_library' into 'develop'
Port net library

See merge request PBSA/peerplays!147
2022-09-07 13:57:00 +00:00
Meheboob Khan
5c416e3a5b Port net library 2022-09-07 13:57:00 +00:00
serkixenos
9268c31ac4 Code formatting 2022-08-25 17:41:44 +02:00
serkixenos
c4f6f522a4 Merge branch 'bug/fix-unit-tests-develop' into 'develop'
Fix unit test failing on develop branch, #418

See merge request PBSA/peerplays!143
2022-08-25 11:14:17 +00:00
serkixenos
f127495c0e Fix unit test failing on develop branch, #418 2022-08-25 12:21:22 +02:00
serkixenos
9b2c60f76c Merge branch 'feature/son-for-hive-voting' into 'develop'
SON for Hive voting

See merge request PBSA/peerplays!81
2022-07-26 23:17:42 +00:00
serkixenos
22fc780a91 SON for Hive voting 2022-07-26 23:17:42 +00:00
serkixenos
005478e3ef Merge branch 'feature/new-rpc-ws-clients' into 'develop'
Boost Beast based RPC client

See merge request PBSA/peerplays!139
2022-07-26 13:47:16 +00:00
serkixenos
5bfd685684 Boost Beast based RPC client 2022-07-26 13:47:16 +00:00
serkixenos
662139ca22 Fix invalid result of nft_get_total_supply (#399) 2022-07-21 20:51:53 +02:00
serkixenos
6844b74e29 Merge branch 'bug/400-verify_sig' into 'develop'
#400 - fix verify_sig function.

See merge request PBSA/peerplays!138
2022-07-15 17:59:25 +00:00
Vlad Dobromyslov
99ed37e834 #400 - fix verify_sig function. 2022-07-15 17:59:25 +00:00
serkixenos
629a6672fd Merge branch 'feature/357-secp256k1-lib' into 'develop'
#357 secp256k1 lib from libbitcoin

See merge request PBSA/peerplays!118
2022-07-07 00:53:24 +00:00
Vlad Dobromyslov
3b5e928094 #357 secp256k1 lib from libbitcoin 2022-07-07 00:53:23 +00:00
serkixenos
09579fbab1 Fix SON cli tests 2022-07-06 01:07:39 +02:00
serkixenos
ff462234af Merge branch 'bug/394/clean_exit__from_all_threads_on_ctrl_c' into 'develop'
Clean exit from CTRL + C

See merge request PBSA/peerplays!134
2022-07-05 11:48:01 +00:00
Davor Hirunda
873dfd788b Clean exit from CTRL + C 2022-07-05 11:48:01 +00:00
serkixenos
b186a2f0ed Merge branch 'bug/fix-son-count-voting' into 'develop'
#387 Allow changing number of SONs by voting, similar to witnesses

See merge request PBSA/peerplays!129
2022-06-27 12:39:35 +00:00
serkixenos
1a196bfcc2 #387 Allow changing number of SONs by voting, similar to witnesses 2022-06-27 12:39:34 +00:00
Bobinson K B
611a63076b Merge branch 'beatrice' into 'master'
Merge beatrice to master 2022-06

See merge request PBSA/peerplays!131
2022-06-27 07:36:08 +00:00
serkixenos
d234c3a8f8 Set HARDFORK_SON3_TIME to 2022-07-16T00:00:00 2022-06-24 14:39:51 +02:00
serkixenos
bc7b0e7788 Set HARDFORK_SON3_TIME to 2022-07-16T00:00:00 2022-06-24 14:38:38 +02:00
serkixenos
d650e197a9 Add port to sync node endpoints 2022-06-20 14:10:08 +02:00
serkixenos
a7f5e1f603 Add port to sync node endpoints 2022-06-20 14:07:17 +02:00
serkixenos
0a38927b0e Allow querying witness_node version by API 2022-06-17 19:08:59 +02:00
serkixenos
9012e86bd1 Add more mainnet seed nodes 2022-06-17 19:08:47 +02:00
serkixenos
eb2894c3d3 Allow querying witness_node version by API 2022-06-17 19:03:27 +02:00
serkixenos
11834c7f53 Add more mainnet seed nodes 2022-06-17 00:10:21 +02:00
serkixenos
1ae9470dab Code formatting 2022-06-16 04:10:29 +02:00
serkixenos
e575334e30 Merge branch 'bug/issue388' into 'develop'
bug fix 388: add ZMQ_RCVTIMEO, graceful thread shutdown

See merge request PBSA/peerplays!128
2022-06-16 01:16:36 +00:00
serkixenos
1788038224 Add hardcoded seed nodes to the config file 2022-06-16 02:06:11 +02:00
serkixenos
8ce0db6ec3 Add hardcoded seed nodes to the config file 2022-06-16 02:04:58 +02:00
serkixenos
02d898d4dc Silence SON logs when not needed 2022-06-15 06:35:13 +02:00
serkixenos
bcdb355f48 Fix P2P port/endpoint error message 2022-06-15 06:31:38 +02:00
serkixenos
d78e0d0e48 Silence SON logs when not needed 2022-06-15 06:30:33 +02:00
hirunda
4d112936d2 Replace fc::random with std random generator 2022-06-15 05:43:00 +02:00
Vlad Dobromyslov
c102bef768 #345 double-free-or-corruption 2022-06-15 05:41:24 +02:00
serkixenos
092a46ae61 Remove unscheduled hardfork CORE 210 2022-06-15 05:41:17 +02:00
serkixenos
f03cc7ee90 Sidechain API, SONs listener log 2022-06-15 05:28:17 +02:00
serkixenos
3294480e20 Update GitLab CI file, more manual build options 2022-06-15 05:13:10 +02:00
Vlad Dobromyslov
7fd12ccce8 #386 - check valid() for optional<operation_history_object> 2022-06-15 05:09:21 +02:00
Davor Hirunda
03e37896d5 Cancel the thread for sync blocks 2022-06-15 05:09:15 +02:00
Davor Hirunda
bc7d03cb22 Fix for scheduler wrong state 2022-06-15 05:09:06 +02:00
serkixenos
b3e426999d Remove unused libreadline-dev library 2022-06-15 05:08:02 +02:00
serkixenos
ab32415d0c Update GitLab CI file, more manual build options 2022-06-15 05:07:56 +02:00
serkixenos
3152d47eea Update GitLab CI file, more manual build options 2022-06-15 05:07:46 +02:00
serkixenos
8f32e4cdb5 Update README instructions for starting docker containers 2022-06-15 05:05:19 +02:00
serkixenos
b4501167ee Update README instructions for docker build 2022-06-15 05:04:59 +02:00
serkixenos
d13551a277 Remove fc based RNG 2022-06-15 05:04:37 +02:00
Pavel Baykov
9dd0747e5d bug fix 388: add ZMQ_RCVTIMEO, graceful thread shutdown 2022-06-13 17:03:20 -03:00
serkixenos
29189b3897 Merge branch 'bug/386-assert-operation_history_object' into 'develop'
#386 - check valid() for optional<operation_history_object>

See merge request PBSA/peerplays!126
2022-06-10 12:18:54 +00:00
Vlad Dobromyslov
2a373a70f7 #386 - check valid() for optional<operation_history_object> 2022-06-09 21:41:19 +03:00
serkixenos
b05c36b4fe Merge branch 'bug/384/ungraceful_shutdown_on_CTRL+C' into 'develop'
Cancel the thread for sync blocks

See merge request PBSA/peerplays!124
2022-06-08 22:02:40 +00:00
Davor Hirunda
e2d9741af8 Cancel the thread for sync blocks 2022-06-08 22:02:40 +00:00
serkixenos
1846b1709e Merge branch 'feature/enable_sync_with_main_net' into 'develop'
Fix for scheduler wrong state

See merge request PBSA/peerplays!122
2022-06-02 16:18:16 +00:00
Davor Hirunda
2c02591e24 Fix for scheduler wrong state 2022-06-02 16:18:15 +00:00
serkixenos
aa2dea6ddf Remove unused libreadline-dev library 2022-05-25 16:48:38 +02:00
serkixenos
9af213190a Merge branch 'feature/88/disconnect_witness_without_hardfork_info' into 'develop'
Disconnect witness which doesn't provide last hardfork time

See merge request PBSA/peerplays!119
2022-05-24 22:16:10 +00:00
hirunda
0b04faec83 Disconnect witness which doesn't provide last hardfork time 2022-05-24 23:29:57 +02:00
serkixenos
a7b4d1cef5 Merge branch 'feature/gitlabcicd_update' into 'develop'
Update GitLab CI file, more manual build options

See merge request PBSA/peerplays!117
2022-05-22 23:14:42 +00:00
serkixenos
0fd22a9945 Update GitLab CI file, more manual build options 2022-05-20 17:48:30 +02:00
serkixenos
3980512543 Merge branch 'develop' of https://gitlab.com/PBSA/peerplays into develop 2022-05-17 00:52:35 +02:00
serkixenos
fc324559eb Merge branch 'issue/367/disconnecting_non_compatible_witness_nodes_happens_too_late' into 'develop'
Resolving the bug with disconnecting non compatible witness

See merge request PBSA/peerplays!113
2022-05-16 22:49:03 +00:00
serkixenos
f16aa73b3e Fix function name typos, disconnet -> disconnect 2022-05-17 00:32:00 +02:00
serkixenos
ca5dc441a7 Update GitLab CI file, more manual build options 2022-05-16 23:04:30 +02:00
serkixenos
2e55b1818a Update README instructions for starting docker containers 2022-05-16 20:43:24 +02:00
serkixenos
a2702cd1f4 Update README instructions for docker build 2022-05-16 20:43:17 +02:00
serkixenos
564af2e19e Merge branch 'bug/349/error_in_fc_crypto_rand_test' into 'develop'
Replace fc::random with std random generator

See merge request PBSA/peerplays!110
2022-05-16 16:06:23 +00:00
serkixenos
5b4a4d18d8 Remove fc based RNG 2022-05-16 18:03:14 +02:00
serkixenos
23cdcec381 Update README instructions for starting docker containers 2022-05-13 03:26:30 +02:00
serkixenos
82a84a06da Update README instructions for docker build 2022-05-12 13:54:52 +02:00
hirunda
8562a4c655 Resolving the bug with disconnecting non compatible witness 2022-05-11 22:34:02 +02:00
serkixenos
852565dcb1 Merge branch 'beatrice' into develop 2022-05-11 22:27:00 +02:00
serkixenos
d461c718ef Set HARDFORK_SON3_TIME to 2022-05-31T00:00:00 2022-05-11 22:22:46 +02:00
serkixenos
2903fc6446 Fix README build instructions 2022-05-11 22:21:18 +02:00
Pavel Baykov
95c5280be2 fix asserts 2022-05-11 22:13:27 +02:00
serkixenos
ffbe0cd592 Merge branch 'bug/issue366' into 'develop'
fix asserts

See merge request PBSA/peerplays!112
2022-05-11 19:28:08 +00:00
Pavel Baykov
27c77ba74b fix asserts 2022-05-11 03:48:25 -03:00
serkixenos
cb3302160b Fix P2P port/endpoint error message 2022-05-09 16:07:07 +02:00
serkixenos
c79c8987dc Ubuntu 18.04 build support 2022-05-07 02:01:00 +02:00
serkixenos
e0d7a6314a Ubuntu 18.04 build support 2022-05-07 01:54:00 +02:00
hirunda
223d2a528d Replace fc::random with std random generator 2022-05-06 13:08:34 +02:00
serkixenos
62f8983c5e Merge branch 'bug/issue360' into 'develop'
fix bug 360, use zmq_setsockopt

See merge request PBSA/peerplays!109
2022-05-06 01:08:14 +00:00
Pavel Baykov
c973131ed2 fix bug 360, use zmq_setsockopt 2022-05-05 11:10:01 -03:00
serkixenos
0c01935ff4 Merge branch 'ubuntu18.04' into 'develop'
libzmq v4.3.4, cppzmq v4.8.1, cmake v3.23 for Docker Ubuntu 18.04

See merge request PBSA/peerplays!105
2022-05-04 16:46:49 +00:00
Pavel Baykov
520505b667 libzmq v4.3.4, cppzmq v4.8.1, cmake v3.23 for Docker Ubuntu 18.04 2022-05-04 16:46:48 +00:00
serkixenos
000aeaa721 Merge branch 'Manual-Build-In-Pipeline-Job-For-Testnet-CMAKE' into 'develop'
Manual build in pipeline job for testnet cmake

See merge request PBSA/peerplays!71
2022-04-26 19:12:52 +00:00
Rily Dunlap
c1048e1509 Manual build in pipeline job for testnet cmake 2022-04-26 19:12:52 +00:00
serkixenos
3664ee67ca Merge branch 'feature/88/enhance_witness_logging_and_rejecting_non_updated_witness' into 'develop'
Disconnect from non updated witness

See merge request PBSA/peerplays!96
2022-04-26 19:04:30 +00:00
Davor Hirunda
bd6f265409 Disconnect from non updated witness 2022-04-26 19:04:30 +00:00
serkixenos
0f0cf62b20 Merge branch 'bug/345-double-free-or-corruption' into 'develop'
#345 double-free-or-corruption

See merge request PBSA/peerplays!101
2022-04-26 13:09:43 +00:00
Vlad Dobromyslov
13c782ccd6 #345 double-free-or-corruption 2022-04-26 13:09:41 +00:00
serkixenos
659a3c9185 Merge branch '352-missing-after-cmake-in-dependencies-installation-qol' into 'develop'
Resolve "Missing \ after cmake in dependencies installation - QOL"

See merge request PBSA/peerplays!104
2022-04-25 11:29:22 +00:00
Rily Dunlap
b0c7a527fa Resolve "Missing \ after cmake in dependencies installation - QOL" 2022-04-25 11:29:22 +00:00
serkixenos
157e6c2fd8 Docker file for Ubuntu 18.04 2022-04-20 23:24:06 +02:00
Bobinson K B
6a59d9efba Merge branch 'beatrice' into 'master'
Merge beatrice to master 2022-04

See merge request PBSA/peerplays!84
2022-04-14 13:18:26 +00:00
Bobinson K B
93b60efba5 Merge branch 'develop' into 'beatrice'
Merge develop to beatrice 2022-04

See merge request PBSA/peerplays!80
2022-04-14 08:38:43 +00:00
serkixenos
6eccec2ba4 Set SON3 hardfork date to 2022-04-30T00:00:00 2022-04-13 10:42:59 -04:00
serkixenos
d7c654500e Merge branch 'bug/336-exception-parse-json' into 'develop'
#336 exception parse json

See merge request PBSA/peerplays!95
2022-04-06 23:23:48 +00:00
Vlad Dobromyslov
d49017ff21 #336 exception parse json 2022-04-06 23:23:47 +00:00
serkixenos
6ee37d0916 Set HARDFORK_SON3_TIME to 2022-04-24T00:00:00 2022-04-05 18:34:27 -04:00
serkixenos
44b2d21d78 Merge branch 'feature/sidechain-api' into 'develop'
Sidechain API, SONs listener log

See merge request PBSA/peerplays!94
2022-04-05 12:47:32 +00:00
serkixenos
03836d3770 Sidechain API, SONs listener log 2022-04-04 23:28:31 -04:00
serkixenos
4809619892 Merge branch 'feature/337-lock-importmulti-unlock' into 'develop'
#337 lock/unlock importmulti

See merge request PBSA/peerplays!93
2022-04-01 12:20:59 +00:00
Vlad Dobromyslov
ad0b5afb79 #337 lock/unlock importmulti 2022-04-01 12:20:59 +00:00
serkixenos
0e5d599fdd Improved error checks and messages for BTC transfer fees 2022-03-23 17:46:44 -04:00
serkixenos
d39f838eb8 Merge branch 'bug/324/refactor_Bitcoin_block_parsing_v0.21_v0.22' into 'develop'
Support parsing addresses for bitcoin v21 and v22

See merge request PBSA/peerplays!90
2022-03-23 17:37:32 +00:00
Davor Hirunda
d07c343be6 Support parsing addresses for bitcoin v21 and v22 2022-03-23 17:37:32 +00:00
serkixenos
c0bbcca0cf Update clang format config to clang-format 10 2022-03-23 13:32:03 -04:00
serkixenos
4ef0163bf2 Add missing cli wallet command parameter descriptions 2022-03-23 09:24:45 -04:00
serkixenos
65ba17adb0 HARDFORK_SON3_TIME to 2022-04-01T00:00:00, limit wallet rescan from 2022-01-01 2022-03-23 09:03:44 -04:00
serkixenos
ddd0d2fd16 Merge branch 'feature/314-sidechain_withdrawal_transaction' into 'develop'
issue 327 fixed, add block_num

See merge request PBSA/peerplays!89
2022-03-23 12:12:36 +00:00
Pavel Baykov
bfa7b13193 issue 327 fixed, add block_num 2022-03-21 13:32:26 -03:00
serkixenos
7f5a92fb1e Remove deposit/withdrawal cli tests 2022-03-21 11:19:05 -04:00
serkixenos
a408ed0dda Merge branch 'feature/325-import-btc-addresses' into 'develop'
#325 Refactor importing BTC addresses into son-wallet

See merge request PBSA/peerplays!88
2022-03-21 14:50:02 +00:00
Vlad Dobromyslov
ee018cf513 #325 Refactor importing BTC addresses into son-wallet 2022-03-21 14:50:02 +00:00
serkixenos
23e40e1004 Update wallet library Doxygen file, for cleaner build output 2022-03-18 19:29:56 -04:00
serkixenos
f4a0b3fb6d Merge branch 'bug/315/remove_hardcoded_chain_parameters' into 'develop'
Fix update son parameters on maintenance

See merge request PBSA/peerplays!85
2022-03-18 22:36:31 +00:00
Davor Hirunda
d1e425e3c9 Fix update son parameters on maintenance 2022-03-18 22:36:31 +00:00
serkixenos
237889f621 Merge branch 'feature/314-sidechain_withdrawal_transaction' into 'develop'
Feature/314 sidechain withdrawal transaction

See merge request PBSA/peerplays!83
2022-03-16 14:50:38 +00:00
Pavel Baykov
a5a8d6c617 sidechain_withdraw_transaction_test 2022-03-11 14:54:44 -04:00
Pavel Baykov
f3666d7468 sidechain_withdrawal_transaction 2022-03-10 10:28:23 -04:00
serkixenos
21e13ac4d4 Merge branch 'bug/149/prevent_misconfiguration_of_blockchain_parameters' into 'develop'
Add constraints in changing global parameters

See merge request PBSA/peerplays!79
2022-03-08 23:23:43 +00:00
Davor Hirunda
7729c09c2e Add constraints in changing global parameters 2022-03-08 23:23:42 +00:00
serkixenos
d347d3d01b Merge branch 'feature/313-son-deposit-manually' into 'develop'
#313 son deposit manually

See merge request PBSA/peerplays!78
2022-03-08 23:21:28 +00:00
Vlad Dobromyslov
f169e7a7ef #313 son deposit manually 2022-03-08 23:21:28 +00:00
serkixenos
0740bceb74 Update fc to last commit 2022-03-04 18:32:40 -04:00
serkixenos
2c411b63d3 Merge branch 'bug/279-randomly-test-fall' into 'develop'
#279 randomly test fall

See merge request PBSA/peerplays!77
2022-03-04 22:02:42 +00:00
Vlad Dobromyslov
ae5237a781 #279 randomly test fall 2022-03-04 22:02:41 +00:00
serkixenos
4efa7e4fb8 Merge branch 'feature/280-remove-fc-reflect-tamplate' into 'develop'
#280 Delete FC_REFLECT_TEMPLATE from votes

See merge request PBSA/peerplays!76
2022-02-28 23:08:12 +00:00
Vlad Dobromyslov
5e81fc0024 #280 Delete FC_REFLECT_TEMPLATE from votes 2022-02-28 23:08:12 +00:00
serkixenos
49c39afbe1 Merge branch 'bug/235-corrupt-chain-file' into 'develop'
#235 Delete small objects for game_object, tournament_object, match_object,...

See merge request PBSA/peerplays!75
2022-02-28 22:56:20 +00:00
serkixenos
8dc8ac0aec #235 Delete small objects for game_object, tournament_object, match_object,... 2022-02-28 22:56:20 +00:00
serkixenos
7d25589499 Merge branch 'bug/289/deprecation_messages_for_auto_unit_test.hpp' into 'develop'
Resolving boost deprecation include

See merge request PBSA/peerplays!74
2022-02-25 18:41:31 +00:00
hirunda
5602790db9 Resolving boost deprecation include 2022-02-24 22:16:10 +01:00
serkixenos
d46c7201bb Fix Boost component list for v1.71 2022-02-22 20:21:54 +00:00
serkixenos
5b8f14dd68 Update README.md 2022-02-21 20:52:32 +00:00
serkixenos
8b732acf17 Update Boost libraries 2022-02-21 15:29:38 -04:00
serkixenos
2981613a9e Merge branch 'bug/282-abort-when-no-argument' into 'develop'
#282 abort when no argument

See merge request PBSA/peerplays!72
2022-02-21 13:57:39 +00:00
Vlad Dobromyslov
93b57f294d #282 abort when no argument 2022-02-21 13:57:38 +00:00
serkixenos
5860002d0a Code formatting 2022-02-16 18:01:29 -04:00
serkixenos
6e2fb6fac5 Merge branch 'Updating-Gitlab-CI-and-ReadMe' into 'develop'
Updating gitlab ci and read me

See merge request PBSA/peerplays!70
2022-02-16 18:28:25 +00:00
Rily Dunlap
f7b3c8935d Updating gitlab ci and read me 2022-02-16 18:28:25 +00:00
serkixenos
65cc4a4df2 Merge branch 'bug/issue111' into 'develop'
zmq::recv_multipart

See merge request PBSA/peerplays!65
2022-02-16 18:25:25 +00:00
Pavel Baykov
6960ccbde9 zmq::recv_multipart 2022-02-16 18:25:25 +00:00
serkixenos
04eb8c33e0 Merge branch 'bug_54_handling_errors_in_cli_wallet' into 'develop'
Handling some of the errors on wrong user inputs

See merge request PBSA/peerplays!69
2022-02-16 18:22:17 +00:00
Davor Hirunda
6d8a158372 Handling some of the errors on wrong user inputs 2022-02-16 18:22:17 +00:00
serkixenos
639e242693 Unknown cli parameters handling in cli_wallet 2022-02-16 13:29:47 -04:00
serkixenos
8be4dd5e3c Code formatting 2022-02-15 10:48:52 -04:00
serkixenos
050c0b27e5 Add short version parameter to cli_wallet 2022-02-15 10:48:32 -04:00
serkixenos
fcd360c2fd Merge branch 'feature_enable_multiple_SON_support_by_default' into 'develop'
Enable multiple SON support by default

See merge request PBSA/peerplays!64
2022-02-11 17:36:26 +00:00
serkixenos
8486b7a736 Merge branch 'feature/270-funtions-unified-form' into 'develop'
#270 functions to unified form

See merge request PBSA/peerplays!61
2022-02-11 15:41:47 +00:00
Vlad Dobromyslov
339adbb054 #270 functions to unified form 2022-02-11 15:41:47 +00:00
serkixenos
8b611c3f95 Merge branch 'bug/266-fee-assets' into 'develop'
#266 Fix hard-coded fee for issuing assets in sidechain plugin

See merge request PBSA/peerplays!66
2022-02-10 23:01:59 +00:00
Vlad Dobromyslov
bd08c4c6b0 #266 Fix hard-coded fee for issuing assets in sidechain plugin 2022-02-10 23:01:59 +00:00
serkixenos
a284f42ac9 Merge branch 'feature/260-voting-info' into 'develop'
#260 Added functions get_votes() and get_voters()

See merge request PBSA/peerplays!59
2022-02-10 21:11:09 +00:00
Vlad Dobromyslov
d7e24bfb07 #260 Added functions get_votes() and get_voters() 2022-02-10 21:11:08 +00:00
serkixenos
99119dbd7d Update git submodules docs and fc 2022-02-10 16:40:03 -04:00
hirunda
18bf848119 Enable multiple SON support by default 2022-02-09 18:36:03 +01:00
serkixenos
494482eba5 Update README.md 2022-02-03 21:18:41 +00:00
serkixenos
18775061ad Merge branch 'feature/switch-to-ubuntu-20-04' into 'develop'
Update README for Ubuntu 20.04

See merge request PBSA/peerplays!58
2022-02-02 13:34:44 +00:00
serkixenos
3c19ea74dd Merge branch 'bug/replace-vulnerable-xml' into 'develop'
Replace vulnerable XML library

See merge request PBSA/peerplays!57
2022-02-02 13:34:26 +00:00
serkixenos
eb77c9dfb3 Update README for Ubuntu 20.04 2022-02-01 13:20:17 -04:00
serkixenos
7a9c90a218 Replace vulnerable XML library 2022-02-01 12:21:32 -04:00
serkixenos
66699f1e15 Merge branch 'bug/245-exception-in-witness' into 'develop'
bug #245 exception seen in witness logs

See merge request PBSA/peerplays!56
2022-01-31 14:14:25 +00:00
Vlad Dobromyslov
78fbf7c3cd bug #245 exception seen in witness logs 2022-01-31 14:14:24 +00:00
serkixenos
5247f76fc2 Merge branch '237-es7-fix' into 'develop'
Resolve "port ES changes from Bitshares"

See merge request PBSA/peerplays!53
2022-01-31 05:25:56 +00:00
Vlad Dobromyslov
39fcacd397 Resolve "port ES changes from Bitshares" 2022-01-31 05:25:56 +00:00
serkixenos
10799a2148 Merge branch 'bug/267-fix-chain_test' into 'develop'
bug #267 Fix error in chain_test in gitlab autobuild

See merge request PBSA/peerplays!55
2022-01-28 15:05:49 +00:00
Vlad Dobromyslov
8c3a424bb6 bug #267 Fix error in chain_test in gitlab autobuild 2022-01-28 15:05:49 +00:00
serkixenos
6f6811eec4 Merge branch 'bug/fix-list-active-sons' into 'develop'
Fix list_active_sons output

See merge request PBSA/peerplays!54
2022-01-26 18:30:21 +00:00
serkixenos
0bcb0487a7 Fix list_active_son command output on deregistered SONs 2022-01-21 12:17:13 -04:00
serkixenos
b5a9a0101a Merge branch 'bug/cli-wallet-memo-display' into 'develop'
Fix cli wallet memo displaying

See merge request PBSA/peerplays!52
2022-01-17 18:25:33 +00:00
serkixenos
5e85079281 Fix cli wallet memo displaying 2021-12-27 00:28:29 -04:00
Bobinson K B
9fd18b32c4 Merge branch 'beatrice' into 'master'
Hotfix: Revert change to a son_update_operation

See merge request PBSA/peerplays!50
2021-12-17 13:33:23 +00:00
serkixenos
632eb4a231 Hotfix: Revert change to a son_update_operation 2021-12-17 13:33:23 +00:00
serkixenos
bfc778068c Merge branch 'beatrice' into develop 2021-12-16 23:15:21 -04:00
serkixenos
f9a40c647e Increase replay's writing to database threshold 2021-12-16 22:27:34 -04:00
serkixenos
e828e7813c Merge branch 'master' into beatrice 2021-12-16 22:08:46 -04:00
serkixenos
0dca13ea7e Revert change to a son_update_operation 2021-12-16 21:59:05 -04:00
serkixenos
b619815077 Revert change to a son_update_operation 2021-12-16 21:34:57 -04:00
Bobinson K B
78730a4564 Merge branch 'beatrice' into 'master'
Merge beatrice to master 2021-12

See merge request PBSA/peerplays!46
2021-12-15 16:36:40 +00:00
serkixenos
c888274846 Merge beatrice to master 2021-12 2021-12-15 16:36:40 +00:00
serkixenos
de2a89ebce Set SON for Hive Mainnet hardfork date to 2021-12-21T00:00:00 2021-12-15 09:27:30 -04:00
serkixenos
f69fb7adae Merge branch '230-docker-image-from-ci-should-be-pushed-to-gitlab-registry' into 'develop'
Update .gitlab-ci.yml to push images to the gitlab registry instead of docker.io

See merge request PBSA/peerplays!48
2021-12-15 03:51:51 +00:00
Sivakumar Yavvari
666dc76ee4 Update .gitlab-ci.yml to push images to the gitlab registry instead of docker.io 2021-12-14 18:22:12 +00:00
serkixenos
f81b6460d1 Merge branch 'bug/94-docker-build' into 'develop'
Fix Docker build

See merge request PBSA/peerplays!44
2021-12-14 06:56:30 +00:00
serkixenos
861e9389ac Fix Docker build 2021-12-14 06:56:30 +00:00
serkixenos
2ba8a7f3a5 Merge branch 'master' into beatrice 2021-12-13 09:49:07 -04:00
Bobinson K B
d8cecab20f Merge branch 'revert-b875f0b8' into 'master'
Revert "Merge branch 'merge-beatrice-to-master-2021-11' into 'master'"

See merge request PBSA/peerplays!38
2021-12-10 06:18:51 +00:00
Bobinson K B
dc04759686 Revert "Merge branch 'merge-beatrice-to-master-2021-11' into 'master'" 2021-12-10 06:18:51 +00:00
Bobinson K B
b875f0b841 Merge branch 'merge-beatrice-to-master-2021-11' into 'master'
Merge Beatrice to Master 2021-11

See merge request PBSA/peerplays!33
2021-12-01 07:54:19 +00:00
serkixenos
9955b390ee Merge Beatrice to Master 2021-11 2021-12-01 07:54:19 +00:00
191 changed files with 14832 additions and 8071 deletions
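
The commit range behind this compare can be reproduced locally with plain git. A minimal sketch, assuming a clone of PBSA/peerplays and using the newest and oldest commits listed above as stand-ins for the compare endpoints (the actual refs are elided at the top of this page):

    # fetch everything so both commits are available locally
    git fetch origin
    # list the commits between the stand-in endpoints, one line each
    git log --oneline 9955b390ee..d3d967a2d7
    # summarize which files changed and by how much
    git diff --stat 9955b390ee..d3d967a2d7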

.clang-format

@@ -1,6 +1,5 @@
---
Language: Cpp
# BasedOnStyle: LLVM
AccessModifierOffset: -3
AlignAfterOpenBracket: Align
AlignConsecutiveMacros: false
@@ -12,7 +11,7 @@ AlignTrailingComments: true
AllowAllArgumentsOnNextLine: true
AllowAllConstructorInitializersOnNextLine: false
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortBlocksOnASingleLine: false
AllowShortBlocksOnASingleLine: Never
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: None
AllowShortLambdasOnASingleLine: None
@@ -57,6 +56,7 @@ ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 6
ContinuationIndentWidth: 6
Cpp11BracedListStyle: true
DeriveLineEnding: true
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
@@ -69,12 +69,17 @@ IncludeBlocks: Preserve
IncludeCategories:
- Regex: '^"(llvm|llvm-c|clang|clang-c)/'
Priority: 2
SortPriority: 0
- Regex: '^(<|"(gtest|gmock|isl|json)/)'
Priority: 3
SortPriority: 0
- Regex: '.*'
Priority: 1
SortPriority: 0
IncludeIsMainRegex: '(Test)?$'
IncludeIsMainSourceRegex: ''
IndentCaseLabels: false
IndentGotoLabels: false
IndentPPDirectives: None
IndentWidth: 3
IndentWrappedFunctionNames: false
@@ -110,18 +115,22 @@ SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInConditionalStatement: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp11
SpaceBeforeSquareBrackets: false
Standard: Latest
StatementMacros:
- Q_UNUSED
- QT_REQUIRE_VERSION
TabWidth: 3
UseCRLF: false
UseTab: Never
...

.gitlab-ci.yml

@@ -8,8 +8,11 @@ include:
stages:
- build
- test
- dockerize
- python-test
- deploy
build:
build-mainnet:
stage: build
script:
- rm -rf .git/modules/docs .git/modules/libraries/fc ./docs ./libraries/fc
@@ -29,25 +32,140 @@ build:
tags:
- builder
dockerize:
stage: build
script:
- docker build . -t $DOCKER_REPO:$CI_COMMIT_REF_NAME
- docker login -u $DOCKER_USER -p $DOCKER_PASS
- docker push $DOCKER_REPO:$CI_COMMIT_REF_NAME
- docker logout
tags:
- builder
when: manual
timeout: 3h
test:
test-mainnet:
stage: test
dependencies:
- build
- build-mainnet
script:
- ./build/libraries/fc/tests/all_tests
- ./build/tests/betting_test --log_level=message
- ./build/tests/chain_test --log_level=message
- ./build/tests/cli_test --log_level=message
tags:
- builder
dockerize-mainnet:
stage: dockerize
variables:
IMAGE: $CI_REGISTRY_IMAGE/mainnet/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
before_script:
- docker info
- docker builder prune -a -f
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --no-cache -t $IMAGE .
- docker push $IMAGE
after_script:
- docker rmi $IMAGE
tags:
- builder
timeout:
3h
build-testnet:
stage: build
script:
- rm -rf .git/modules/docs .git/modules/libraries/fc ./docs ./libraries/fc
- git submodule sync
- git submodule update --init --recursive
- rm -rf build
- mkdir build
- cd build
- cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_PEERPLAYS_TESTNET=1 ..
- make -j$(nproc)
artifacts:
untracked: true
paths:
- build/libraries/
- build/programs/
- build/tests/
when: manual
tags:
- builder
deploy-testnet:
stage: deploy
dependencies:
- build-testnet
script:
- sudo systemctl stop witness
- rm $WORK_DIR/peerplays/witness_node || true
- cp build/programs/witness_node/witness_node $WORK_DIR/peerplays/
- sudo systemctl restart witness
rules:
- if: $CI_COMMIT_BRANCH == "master"
when: always
environment:
name: devnet
url: $DEVNET_URL
tags:
- devnet
test-testnet:
stage: test
dependencies:
- build-testnet
script:
- ./build/libraries/fc/tests/all_tests
- ./build/tests/betting_test --log_level=message
- ./build/tests/chain_test --log_level=message
- ./build/tests/cli_test --log_level=message
tags:
- builder
when:
manual
timeout:
1h
dockerize-testnet:
stage: dockerize
variables:
IMAGE: $CI_REGISTRY_IMAGE/testnet/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
before_script:
- docker info
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --no-cache -t $IMAGE .
- docker push $IMAGE
after_script:
- docker rmi $IMAGE
tags:
- builder
when:
manual
timeout:
3h
test-e2e:
stage: python-test
variables:
IMAGE: $CI_REGISTRY_IMAGE/mainnet/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
before_script:
- docker info
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- git clone https://gitlab.com/PBSA/tools-libs/peerplays-utils.git
- cd peerplays-utils/peerplays-qa-environment
- git checkout origin/feature/python-e2e-tests-for-CI
- cd e2e-tests/
- python3 -m venv venv
- source venv/bin/activate
- pip3 install -r requirements.txt
- docker-compose down --remove-orphans
- docker ps -a
- docker pull $IMAGE
- docker tag $IMAGE peerplays-base:latest
- docker image ls -a
- docker-compose build
- python3 main.py --start all
- docker ps -a
- python3 -m pytest test_btc_init_state.py test_hive_inital_state.py test_pp_inital_state.py
- python3 main.py --stop
- deactivate
- docker ps -a
after_script:
- docker rmi $(docker images -a | grep -v 'hive-for-peerplays\|ethereum-for-peerplays\|bitcoin-for-peerplays\|ubuntu-for-peerplays' | awk '{print $3}')
tags:
- python-tests
when:
manual
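
The dockerize-mainnet, dockerize-testnet, and test-e2e jobs above all address images as $CI_REGISTRY_IMAGE/<network>/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA in the GitLab container registry. A rough sketch of pulling one of those images by hand, assuming the project registry path is registry.gitlab.com/pbsa/peerplays and with the branch slug and commit SHA as placeholders:

    # log in to the GitLab container registry (credentials assumed to exist)
    docker login registry.gitlab.com
    # pull the mainnet image for a given branch slug and commit SHA (placeholders)
    docker pull registry.gitlab.com/pbsa/peerplays/mainnet/<branch-slug>:<commit-sha>
    # retag it the way the test-e2e job does before it runs docker-compose build
    docker tag registry.gitlab.com/pbsa/peerplays/mainnet/<branch-slug>:<commit-sha> peerplays-base:latest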

.gitmodules (vendored, 14 lines changed)

@@ -1,9 +1,9 @@
[submodule "docs"]
path = docs
url = https://github.com/bitshares/bitshares-core.wiki.git
ignore = dirty
path = docs
url = https://github.com/bitshares/bitshares-core.wiki.git
ignore = dirty
[submodule "libraries/fc"]
path = libraries/fc
url = https://github.com/peerplays-network/peerplays-fc.git
branch = latest-fc
ignore = dirty
path = libraries/fc
url = https://gitlab.com/PBSA/tools-libs/peerplays-fc.git
branch = develop
ignore = dirty
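
This change points the libraries/fc submodule at GitLab (PBSA/tools-libs/peerplays-fc, branch develop) instead of the old GitHub remote, so an existing checkout needs to re-sync its submodule configuration before it can fetch from the new URL. A minimal sketch, using the same commands the CI jobs and Dockerfile in this compare already run:

    # re-read .gitmodules so the new GitLab URL replaces the old GitHub one
    git submodule sync --recursive
    # fetch and check out the submodules at the commits recorded in the superproject
    git submodule update --init --recursive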

CMakeLists.txt

@@ -22,6 +22,37 @@ endif()
list( APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/CMakeModules" )
function(get_linux_lsb_release_information)
find_program(LSB_RELEASE_EXEC lsb_release)
if(NOT LSB_RELEASE_EXEC)
message(FATAL_ERROR "Could not detect lsb_release executable, can not gather required information")
endif()
execute_process(COMMAND "${LSB_RELEASE_EXEC}" --short --id OUTPUT_VARIABLE LSB_RELEASE_ID_SHORT OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND "${LSB_RELEASE_EXEC}" --short --release OUTPUT_VARIABLE LSB_RELEASE_VERSION_SHORT OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND "${LSB_RELEASE_EXEC}" --short --codename OUTPUT_VARIABLE LSB_RELEASE_CODENAME_SHORT OUTPUT_STRIP_TRAILING_WHITESPACE)
set(LSB_RELEASE_ID_SHORT "${LSB_RELEASE_ID_SHORT}" PARENT_SCOPE)
set(LSB_RELEASE_VERSION_SHORT "${LSB_RELEASE_VERSION_SHORT}" PARENT_SCOPE)
set(LSB_RELEASE_CODENAME_SHORT "${LSB_RELEASE_CODENAME_SHORT}" PARENT_SCOPE)
endfunction()
if(CMAKE_SYSTEM_NAME MATCHES "Linux")
find_package(cppzmq)
target_link_libraries(cppzmq)
get_linux_lsb_release_information()
message(STATUS "Linux ${LSB_RELEASE_ID_SHORT} ${LSB_RELEASE_VERSION_SHORT} ${LSB_RELEASE_CODENAME_SHORT}")
string(REGEX MATCHALL "([0-9]+)" arg_list ${LSB_RELEASE_VERSION_SHORT})
list( LENGTH arg_list listlen )
if (NOT listlen)
message(FATAL_ERROR "Could not detect Ubuntu version")
endif()
list(GET arg_list 0 output)
message("Ubuntu version is: ${output}")
add_definitions(-DPEERPLAYS_UBUNTU_VERSION=${output})
endif()
# function to help with cUrl
macro(FIND_CURL)
if (NOT WIN32 AND NOT APPLE AND CURL_STATICLIB)
@@ -83,7 +114,6 @@ LIST(APPEND BOOST_COMPONENTS thread
system
filesystem
program_options
signals
serialization
chrono
unit_test_framework

Dockerfile

@@ -1,96 +1,218 @@
FROM ubuntu:18.04
MAINTAINER PeerPlays Blockchain Standards Association
FROM ubuntu:20.04
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
ENV LC_ALL en_US.UTF-8
#===============================================================================
# Ubuntu setup
#===============================================================================
RUN \
apt-get update -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
apt-utils \
autoconf \
bash \
bison \
build-essential \
ca-certificates \
cmake \
dnsutils \
doxygen \
expect \
flex \
git \
graphviz \
libbz2-dev \
libcurl4-openssl-dev \
libncurses-dev \
libreadline-dev \
libsnappy-dev \
libssl-dev \
libtool \
libzmq3-dev \
libzip-dev \
locales \
lsb-release \
mc \
nano \
net-tools \
ntp \
openssh-server \
pkg-config \
wget \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
python3 \
python3-jinja2 \
sudo \
systemd-coredump \
wget
ENV HOME /home/peerplays
RUN useradd -rm -d /home/peerplays -s /bin/bash -g root -G sudo -u 1000 peerplays
RUN echo "peerplays ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/peerplays
RUN chmod 440 /etc/sudoers.d/peerplays
RUN service ssh start
RUN echo 'peerplays:peerplays' | chpasswd
# SSH
EXPOSE 22
WORKDIR /home/peerplays/src
#===============================================================================
# Boost setup
#===============================================================================
RUN \
sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
locale-gen
# Compile Boost
RUN \
BOOST_ROOT=$HOME/boost_1_67_0 && \
wget -c 'http://sourceforge.net/projects/boost/files/boost/1.67.0/boost_1_67_0.tar.gz/download' -O boost_1_67_0.tar.gz &&\
tar -zxvf boost_1_67_0.tar.gz && \
cd boost_1_67_0/ && \
./bootstrap.sh "--prefix=$BOOST_ROOT" && \
wget https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.gz && \
tar -xzf boost_1_72_0.tar.gz && \
cd boost_1_72_0 && \
./bootstrap.sh && \
./b2 install && \
cd ..
ldconfig && \
rm -rf /home/peerplays/src/*
ADD . /peerplays-core
WORKDIR /peerplays-core
#===============================================================================
# cmake setup
#===============================================================================
# Compile Peerplays
RUN \
BOOST_ROOT=$HOME/boost_1_67_0 && \
git submodule sync --recursive && \
git submodule update --init --recursive && \
wget https://github.com/Kitware/CMake/releases/download/v3.24.2/cmake-3.24.2-linux-x86_64.sh && \
chmod 755 ./cmake-3.24.2-linux-x86_64.sh && \
./cmake-3.24.2-linux-x86_64.sh --prefix=/usr --skip-license && \
cmake --version && \
rm -rf /home/peerplays/src/*
#===============================================================================
# libzmq setup
#===============================================================================
RUN \
wget https://github.com/zeromq/libzmq/archive/refs/tags/v4.3.4.tar.gz && \
tar -xzvf v4.3.4.tar.gz && \
cd libzmq-4.3.4 && \
mkdir build && \
mkdir build/release && \
cd build/release && \
cmake \
-DBOOST_ROOT="$BOOST_ROOT" \
-DCMAKE_BUILD_TYPE=Debug \
../.. && \
make witness_node cli_wallet && \
install -s programs/witness_node/witness_node programs/cli_wallet/cli_wallet /usr/local/bin && \
#
# Obtain version
mkdir /etc/peerplays && \
git rev-parse --short HEAD > /etc/peerplays/version && \
cd / && \
rm -rf /peerplays-core
cd build && \
cmake .. && \
make -j$(nproc) && \
make install && \
ldconfig && \
rm -rf /home/peerplays/src/*
# Home directory $HOME
WORKDIR /
RUN useradd -s /bin/bash -m -d /var/lib/peerplays peerplays
ENV HOME /var/lib/peerplays
RUN chown peerplays:peerplays -R /var/lib/peerplays
#===============================================================================
# cppzmq setup
#===============================================================================
# Volume
VOLUME ["/var/lib/peerplays", "/etc/peerplays"]
RUN \
wget https://github.com/zeromq/cppzmq/archive/refs/tags/v4.9.0.tar.gz && \
tar -xzvf v4.9.0.tar.gz && \
cd cppzmq-4.9.0 && \
mkdir build && \
cd build && \
cmake .. && \
make -j$(nproc) && \
make install && \
ldconfig && \
rm -rf /home/peerplays/src/*
# rpc service:
#===============================================================================
# gsl setup
#===============================================================================
RUN \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
libpcre3-dev
RUN \
wget https://github.com/imatix/gsl/archive/refs/tags/v4.1.4.tar.gz && \
tar -xzvf v4.1.4.tar.gz && \
cd gsl-4.1.4 && \
make -j$(nproc) && \
make install && \
rm -rf /home/peerplays/src/*
#===============================================================================
# libbitcoin-build setup
# libbitcoin-explorer setup
#===============================================================================
RUN \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
libsodium-dev
RUN \
git clone --branch version3.8.0 --depth 1 https://gitlab.com/PBSA/peerplays-1.0/libbitcoin-explorer.git && \
cd libbitcoin-explorer && \
./install.sh && \
ldconfig && \
rm -rf /home/peerplays/src/*
#===============================================================================
# Doxygen setup
#===============================================================================
RUN \
sudo apt install -y bison flex && \
wget https://github.com/doxygen/doxygen/archive/refs/tags/Release_1_8_17.tar.gz && \
tar -xvf Release_1_8_17.tar.gz && \
cd doxygen-Release_1_8_17 && \
mkdir build && \
cd build && \
cmake .. && \
make -j$(nproc) install && \
ldconfig
#===============================================================================
# Perl setup
#===============================================================================
RUN \
wget https://github.com/Perl/perl5/archive/refs/tags/v5.30.0.tar.gz && \
tar -xvf v5.30.0.tar.gz && \
cd perl5-5.30.0 && \
./Configure -des && \
make -j$(nproc) install && \
ldconfig
#===============================================================================
# Peerplays setup
#===============================================================================
## Clone Peerplays
#RUN \
# git clone https://gitlab.com/PBSA/peerplays.git && \
# cd peerplays && \
# git checkout develop && \
# git submodule update --init --recursive && \
# git branch --show-current && \
# git log --oneline -n 5
# Add local source
ADD . peerplays
# Configure Peerplays
RUN \
cd peerplays && \
git submodule update --init --recursive && \
git log --oneline -n 5 && \
mkdir build && \
cd build && \
cmake -DCMAKE_BUILD_TYPE=Release ..
# Build Peerplays
RUN \
cd peerplays/build && \
make -j$(nproc) cli_wallet witness_node
WORKDIR /home/peerplays/peerplays-network
# Setup Peerplays runimage
RUN \
ln -s /home/peerplays/src/peerplays/build/programs/cli_wallet/cli_wallet ./ && \
ln -s /home/peerplays/src/peerplays/build/programs/witness_node/witness_node ./
RUN ./witness_node --create-genesis-json genesis.json && \
rm genesis.json
RUN chown peerplays:root -R /home/peerplays/peerplays-network
# Peerplays RPC
EXPOSE 8090
# p2p service:
EXPOSE 1776
# Peerplays P2P:
EXPOSE 9777
# default exec/config files
ADD docker/default_config.ini /etc/peerplays/config.ini
ADD docker/peerplaysentry.sh /usr/local/bin/peerplaysentry.sh
RUN chmod a+x /usr/local/bin/peerplaysentry.sh
# Make Docker send SIGINT instead of SIGTERM to the daemon
STOPSIGNAL SIGINT
# default execute entry
CMD ["/usr/local/bin/peerplaysentry.sh"]
# Peerplays
CMD ["./witness_node", "-d", "./witness_node_data_dir"]

Dockerfile.18.04 (new file, 219 lines)

@@ -0,0 +1,219 @@
FROM ubuntu:18.04
#===============================================================================
# Ubuntu setup
#===============================================================================
RUN \
apt-get update -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
apt-utils \
autoconf \
bash \
bison \
build-essential \
ca-certificates \
dnsutils \
expect \
flex \
git \
graphviz \
libbz2-dev \
libcurl4-openssl-dev \
libncurses-dev \
libsnappy-dev \
libssl-dev \
libtool \
libzip-dev \
locales \
lsb-release \
mc \
nano \
net-tools \
ntp \
openssh-server \
pkg-config \
python3 \
python3-jinja2 \
sudo \
systemd-coredump \
wget
ENV HOME /home/peerplays
RUN useradd -rm -d /home/peerplays -s /bin/bash -g root -G sudo -u 1000 peerplays
RUN echo "peerplays ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/peerplays
RUN chmod 440 /etc/sudoers.d/peerplays
RUN service ssh start
RUN echo 'peerplays:peerplays' | chpasswd
# SSH
EXPOSE 22
WORKDIR /home/peerplays/src
#===============================================================================
# Boost setup
#===============================================================================
RUN \
wget https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.gz && \
tar -xzf boost_1_72_0.tar.gz && \
cd boost_1_72_0 && \
./bootstrap.sh && \
./b2 install && \
ldconfig && \
rm -rf /home/peerplays/src/*
#===============================================================================
# cmake setup
#===============================================================================
RUN \
wget https://github.com/Kitware/CMake/releases/download/v3.24.2/cmake-3.24.2-linux-x86_64.sh && \
chmod 755 ./cmake-3.24.2-linux-x86_64.sh && \
./cmake-3.24.2-linux-x86_64.sh --prefix=/usr --skip-license && \
cmake --version && \
rm -rf /home/peerplays/src/*
#===============================================================================
# libzmq setup
#===============================================================================
RUN \
wget https://github.com/zeromq/libzmq/archive/refs/tags/v4.3.4.tar.gz && \
tar -xzvf v4.3.4.tar.gz && \
cd libzmq-4.3.4 && \
mkdir build && \
cd build && \
cmake .. && \
make -j$(nproc) && \
make install && \
ldconfig && \
rm -rf /home/peerplays/src/*
#===============================================================================
# cppzmq setup
#===============================================================================
RUN \
wget https://github.com/zeromq/cppzmq/archive/refs/tags/v4.9.0.tar.gz && \
tar -xzvf v4.9.0.tar.gz && \
cd cppzmq-4.9.0 && \
mkdir build && \
cd build && \
cmake .. && \
make -j$(nproc) && \
make install && \
ldconfig && \
rm -rf /home/peerplays/src/*
#===============================================================================
# gsl setup
#===============================================================================
RUN \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
libpcre3-dev
RUN \
wget https://github.com/imatix/gsl/archive/refs/tags/v4.1.4.tar.gz && \
tar -xzvf v4.1.4.tar.gz && \
cd gsl-4.1.4 && \
make -j$(nproc) && \
make install && \
rm -rf /home/peerplays/src/*
#===============================================================================
# libbitcoin-build setup
# libbitcoin-explorer setup
#===============================================================================
RUN \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
libsodium-dev
RUN \
git clone --branch version3.8.0 --depth 1 https://gitlab.com/PBSA/peerplays-1.0/libbitcoin-explorer.git && \
cd libbitcoin-explorer && \
./install.sh && \
ldconfig && \
rm -rf /home/peerplays/src/*
#===============================================================================
# Doxygen setup
#===============================================================================
RUN \
sudo apt install -y bison flex && \
wget https://github.com/doxygen/doxygen/archive/refs/tags/Release_1_8_17.tar.gz && \
tar -xvf Release_1_8_17.tar.gz && \
cd doxygen-Release_1_8_17 && \
mkdir build && \
cd build && \
cmake .. && \
make -j$(nproc) install && \
ldconfig
#===============================================================================
# Perl setup
#===============================================================================
RUN \
wget https://github.com/Perl/perl5/archive/refs/tags/v5.30.0.tar.gz && \
tar -xvf v5.30.0.tar.gz && \
cd perl5-5.30.0 && \
./Configure -des && \
make -j$(nproc) install && \
ldconfig
#===============================================================================
# Peerplays setup
#===============================================================================
## Clone Peerplays
#RUN \
# git clone https://gitlab.com/PBSA/peerplays.git && \
# cd peerplays && \
# git checkout develop && \
# git submodule update --init --recursive && \
# git branch --show-current && \
# git log --oneline -n 5
# Add local source
ADD . peerplays
# Configure Peerplays
RUN \
cd peerplays && \
git submodule update --init --recursive && \
git symbolic-ref --short HEAD && \
git log --oneline -n 5 && \
mkdir build && \
cd build && \
cmake -DCMAKE_BUILD_TYPE=Release ..
# Build Peerplays
RUN \
cd peerplays/build && \
make -j$(nproc) cli_wallet witness_node
WORKDIR /home/peerplays/peerplays-network
# Setup Peerplays runimage
RUN \
ln -s /home/peerplays/src/peerplays/build/programs/cli_wallet/cli_wallet ./ && \
ln -s /home/peerplays/src/peerplays/build/programs/witness_node/witness_node ./
RUN ./witness_node --create-genesis-json genesis.json && \
rm genesis.json
RUN chown peerplays:root -R /home/peerplays/peerplays-network
# Peerplays RPC
EXPOSE 8090
# Peerplays P2P:
EXPOSE 9777
# Peerplays
CMD ["./witness_node", "-d", "./witness_node_data_dir"]

README.md

@ -2,95 +2,186 @@ Intro for new developers and witnesses
------------------------
This is a quick introduction to get new developers and witnesses up to speed on the Peerplays blockchain. It is intended for witnesses planning to join a live, already deployed blockchain.
# Building on Ubuntu 18.04 LTS and Installation Instructions
The following dependencies were necessary for a clean install of Ubuntu 18.04 LTS:
```
sudo apt-get install autoconf bash build-essential ca-certificates cmake \
doxygen git graphviz libbz2-dev libcurl4-openssl-dev libncurses-dev \
libreadline-dev libssl-dev libtool libzmq3-dev locales ntp pkg-config \
wget
# Building and Installation Instructions
The officially supported operating systems are Ubuntu 20.04 and Ubuntu 18.04.
## Ubuntu 20.04 and 18.04
The following dependencies are needed for a clean install of Ubuntu 20.04 or Ubuntu 18.04:
```
## Build Boost 1.67.0
```
mkdir $HOME/src
cd $HOME/src
export BOOST_ROOT=$HOME/src/boost_1_67_0
sudo apt-get update
sudo apt-get install -y autotools-dev build-essential libbz2-dev libicu-dev python-dev
wget -c 'http://sourceforge.net/projects/boost/files/boost/1.67.0/boost_1_67_0.tar.bz2/download'\
-O boost_1_67_0.tar.bz2
tar xjf boost_1_67_0.tar.bz2
cd boost_1_67_0/
./bootstrap.sh "--prefix=$BOOST_ROOT"
./b2 install
sudo apt-get install \
autoconf bash bison build-essential ca-certificates dnsutils expect flex git \
graphviz libbz2-dev libcurl4-openssl-dev libncurses-dev libpcre3-dev \
libsnappy-dev libsodium-dev libssl-dev libtool libzip-dev locales lsb-release \
mc nano net-tools ntp openssh-server pkg-config python3 python3-jinja2 sudo \
systemd-coredump wget
```
## Building Peerplays
Boost libraries setup:
```
cd $HOME/src
export BOOST_ROOT=$HOME/src/boost_1_67_0
git clone https://github.com/peerplays-network/peerplays.git
wget https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.gz
tar -xzf boost_1_72_0.tar.gz boost_1_72_0
cd boost_1_72_0
./bootstrap.sh
./b2
sudo ./b2 install
sudo ldconfig
```
cmake setup:
```
wget https://github.com/Kitware/CMake/releases/download/v3.24.2/cmake-3.24.2-linux-x86_64.sh
chmod 755 ./cmake-3.24.2-linux-x86_64.sh
sudo ./cmake-3.24.2-linux-x86_64.sh --prefix=/usr --skip-license
cmake --version
```
libzmq setup:
```
wget https://github.com/zeromq/libzmq/archive/refs/tags/v4.3.4.tar.gz
tar -xzvf v4.3.4.tar.gz
cd libzmq-4.3.4
mkdir build
cd build
cmake ..
make -j$(nproc)
sudo make install
sudo ldconfig
```
cppzmq setup:
```
wget https://github.com/zeromq/cppzmq/archive/refs/tags/v4.9.0.tar.gz
tar -xzvf v4.9.0.tar.gz
cd cppzmq-4.9.0
mkdir build
cd build
cmake ..
make -j$(nproc)
sudo make install
sudo ldconfig
```
gsl setup:
```
wget https://github.com/imatix/gsl/archive/refs/tags/v4.1.4.tar.gz
tar -xzvf v4.1.4.tar.gz
cd gsl-4.1.4
make -j$(nproc)
sudo make install
sudo ldconfig
```
libbitcoin-explorer setup:
```
git clone --branch version3.8.0 --depth 1 https://gitlab.com/PBSA/peerplays-1.0/libbitcoin-explorer.git
cd libbitcoin-explorer
sudo ./install.sh
sudo ldconfig
```
Doxygen setup:
```
wget https://github.com/doxygen/doxygen/archive/refs/tags/Release_1_8_17.tar.gz
tar -xvf Release_1_8_17.tar.gz
cd doxygen-Release_1_8_17
mkdir build
cd build
cmake ..
make -j$(nproc)
sudo make install
sudo ldconfig
```
Perl setup:
```
wget https://github.com/Perl/perl5/archive/refs/tags/v5.30.0.tar.gz
tar -xvf v5.30.0.tar.gz
cd perl5-5.30.0
./Configure -des
make -j$(nproc)
sudo make install
sudo ldconfig
```
Building Peerplays
```
git clone https://gitlab.com/PBSA/peerplays.git
cd peerplays
git submodule update --init --recursive
# If you want to build Mainnet node
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Release
cmake -DCMAKE_BUILD_TYPE=Release .
# If you want to build Testnet node
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Release -DBUILD_PEERPLAYS_TESTNET=1
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_PEERPLAYS_TESTNET=1 .
# Update -j flag depending on your current system specs;
# Recommended 4GB of RAM per 1 CPU core
# make -j2 for 8GB RAM
# make -j4 for 16GB RAM
# make -j8 for 32GB RAM
make -j$(nproc)
make install # installs the executables under /usr/local
sudo make install # installs the executables under /usr/local
```
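After the build, the node and wallet binaries are left under `programs/` in the source tree (and under `/usr/local` if you ran the install step). A quick sanity check, assuming the in-source build shown above:
```
ls programs/witness_node/witness_node programs/cli_wallet/cli_wallet
./programs/witness_node/witness_node --help | head -n 20
```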
## Docker images
## Docker image
Install Docker and add the current user to the docker group.
```
# Install docker
sudo apt install docker.io
# Add current user to docker group
sudo usermod -a -G docker $USER
# Restart your shell session to apply the new group membership
# Type 'groups' to verify that you are a member of the docker group
# Build docker image (from the project root, must be a docker group member)
docker build -t peerplays .
# Start docker image
docker start peerplays
# Exposed ports
# # rpc service:
# EXPOSE 8090
# # p2p service:
# EXPOSE 1776
```
The rest of the instructions on starting the chain remain the same.
### Official docker image for Peerplays Mainnet
```
docker pull datasecuritynode/peerplays:latest
```
### Building docker images manually
```
# Checkout the code
git clone https://gitlab.com/PBSA/peerplays.git
cd peerplays
# Checkout the branch you want
# E.g.
# git checkout beatrice
# git checkout develop
git checkout master
git submodule update --init --recursive
# Execute from the project root, must be a docker group member
# Build docker image, using Ubuntu 20.04 base
docker build --no-cache -f Dockerfile -t peerplays .
# Build docker image, using Ubuntu 18.04 base
docker build --no-cache -f Dockerfile.18.04 -t peerplays-18-04 .
```
### Start docker image
```
# Start docker image, using Ubuntu 20.04 base
docker run peerplays:latest
# Start docker image, using Ubuntu 18.04 base
docker run peerplays-18-04:latest
```
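To reach the node from the host, publish the RPC and P2P ports that the images expose (8090 and 9777). A minimal sketch, assuming the `peerplays` image built above; the container name is illustrative:
```
docker run -d --name peerplays-node \
  -p 8090:8090 -p 9777:9777 \
  peerplays:latest
# Follow the node output
docker logs -f peerplays-node
```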
The rest of the instructions on starting the chain remain the same.
Starting A Peerplays Node
-----------------
For users on Ubuntu 14.04 LTS and up, see
[this guide](https://github.com/cryptonomex/graphene/wiki/build-ubuntu) and
then proceed with:
git clone https://github.com/peerplays-network/peerplays.git
cd peerplays
git submodule update --init --recursive
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Release .
make
./programs/witness_node/witness_node
Launching the witness creates the required directories. Next, **stop the witness** and continue.
$ vi witness_node_data_dir/config.ini
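A typical witness setup edits at least the endpoints and the signing key in `config.ini`. The option names below come from the default configuration shown elsewhere in this repository; the witness ID and key pair are placeholders that must be replaced with your own values:
```
# Endpoint for websocket RPC to listen on
rpc-endpoint = 127.0.0.1:8090
# Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:9777
# ID of witness controlled by this node
witness-id = "1.6.5"
# Tuple of [PublicKey, WIF private key]
private-key = ["<PUBLIC_KEY>","<WIF_PRIVATE_KEY>"]
```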


@ -3,3 +3,4 @@
find ./libraries/app -regex ".*[c|h]pp" | xargs clang-format -i
find ./libraries/chain/hardfork.d -regex ".*hf" | xargs clang-format -i
find ./libraries/plugins/peerplays_sidechain -regex ".*[c|h]pp" | xargs clang-format -i
find ./programs/cli_wallet -regex ".*[c|h]pp" | xargs clang-format -i


@ -1,61 +0,0 @@
# Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:9090
# P2P nodes to connect to on startup (may specify multiple times)
# seed-node =
# JSON array of P2P nodes to connect to on startup
# seed-nodes =
# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =
# Endpoint for websocket RPC to listen on
rpc-endpoint = 0.0.0.0:8090
# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =
# The TLS certificate file for this server
# server-pem =
# Password for this certificate
# server-pem-password =
# File to read Genesis State from
# genesis-json =
# Block signing key to use for init witnesses, overrides genesis file
# dbg-init-key =
# JSON file specifying API permissions
# api-access =
# Enable block production, even if the chain is stale.
enable-stale-production = false
# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false
# ID of witness controlled by this node (e.g. "1.6.5", quotes are required, may specify multiple times)
# witness-id =
# Tuple of [PublicKey, WIF private key] (may specify multiple times)
# private-key = ["BTS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]
# Account ID to track history for (may specify multiple times)
# track-account =
# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
# bucket-size = [15,60,300,3600,86400]
bucket-size = [60,300,900,1800,3600,14400,86400]
# for 1 min, 5 mins, 30 mins, 1h, 4 hs and 1 day. i think this should be the default.
# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000
# Max amount of operations to store in the database, per account (drastically reduces RAM requirements)
max-ops-per-account = 1000
# Remove old operation history # objects from RAM
partial-operations = true


@ -1,87 +0,0 @@
#!/bin/bash
PEERPLAYSD="/usr/local/bin/witness_node"
# For blockchain download
VERSION=`cat /etc/peerplays/version`
## Supported Environmental Variables
#
# * $PEERPLAYSD_SEED_NODES
# * $PEERPLAYSD_RPC_ENDPOINT
# * $PEERPLAYSD_PLUGINS
# * $PEERPLAYSD_REPLAY
# * $PEERPLAYSD_RESYNC
# * $PEERPLAYSD_P2P_ENDPOINT
# * $PEERPLAYSD_WITNESS_ID
# * $PEERPLAYSD_PRIVATE_KEY
# * $PEERPLAYSD_DEBUG_PRIVATE_KEY
# * $PEERPLAYSD_TRACK_ACCOUNTS
# * $PEERPLAYSD_PARTIAL_OPERATIONS
# * $PEERPLAYSD_MAX_OPS_PER_ACCOUNT
# * $PEERPLAYSD_TRUSTED_NODE
#
ARGS=""
# Translate environmental variables
if [[ ! -z "$PEERPLAYSD_SEED_NODES" ]]; then
for NODE in $PEERPLAYSD_SEED_NODES ; do
ARGS+=" --seed-node=$NODE"
done
fi
if [[ ! -z "$PEERPLAYSD_RPC_ENDPOINT" ]]; then
ARGS+=" --rpc-endpoint=${PEERPLAYSD_RPC_ENDPOINT}"
fi
if [[ ! -z "$PEERPLAYSD_REPLAY" ]]; then
ARGS+=" --replay-blockchain"
fi
if [[ ! -z "$PEERPLAYSD_RESYNC" ]]; then
ARGS+=" --resync-blockchain"
fi
if [[ ! -z "$PEERPLAYSD_P2P_ENDPOINT" ]]; then
ARGS+=" --p2p-endpoint=${PEERPLAYSD_P2P_ENDPOINT}"
fi
if [[ ! -z "$PEERPLAYSD_WITNESS_ID" ]]; then
ARGS+=" --witness-id=$PEERPLAYSD_WITNESS_ID"
fi
if [[ ! -z "$PEERPLAYSD_PRIVATE_KEY" ]]; then
ARGS+=" --private-key=$PEERPLAYSD_PRIVATE_KEY"
fi
if [[ ! -z "$PEERPLAYSD_DEBUG_PRIVATE_KEY" ]]; then
ARGS+=" --debug-private-key=$PEERPLAYSD_DEBUG_PRIVATE_KEY"
fi
if [[ ! -z "$PEERPLAYSD_TRACK_ACCOUNTS" ]]; then
for ACCOUNT in $PEERPLAYSD_TRACK_ACCOUNTS ; do
ARGS+=" --track-account=$ACCOUNT"
done
fi
if [[ ! -z "$PEERPLAYSD_PARTIAL_OPERATIONS" ]]; then
ARGS+=" --partial-operations=${PEERPLAYSD_PARTIAL_OPERATIONS}"
fi
if [[ ! -z "$PEERPLAYSD_MAX_OPS_PER_ACCOUNT" ]]; then
ARGS+=" --max-ops-per-account=${PEERPLAYSD_MAX_OPS_PER_ACCOUNT}"
fi
if [[ ! -z "$PEERPLAYSD_TRUSTED_NODE" ]]; then
ARGS+=" --trusted-node=${PEERPLAYSD_TRUSTED_NODE}"
fi
## Link the peerplays config file into home
## This link has already been created in the Dockerfile
ln -f -s /etc/peerplays/config.ini /var/lib/peerplays
# Plugins need to be provided in a space-separated list, which
# makes it necessary to write it like this
if [[ ! -z "$PEERPLAYSD_PLUGINS" ]]; then
$PEERPLAYSD --data-dir ${HOME} ${ARGS} ${PEERPLAYSD_ARGS} --plugins "${PEERPLAYSD_PLUGINS}"
else
$PEERPLAYSD --data-dir ${HOME} ${ARGS} ${PEERPLAYSD_ARGS}
fi
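For reference, a minimal sketch of how these environment variables map onto a container invocation; the image tag and endpoint values are illustrative assumptions:
```
docker run -d --name peerplays-node \
  -e PEERPLAYSD_RPC_ENDPOINT=0.0.0.0:8090 \
  -e PEERPLAYSD_P2P_ENDPOINT=0.0.0.0:9777 \
  -e PEERPLAYSD_PLUGINS="witness account_history market_history" \
  peerplays:latest
```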

docs

@ -1 +1 @@
Subproject commit 8df8f66389853df73ab8f6dd73981be2a6957df8
Subproject commit 1e924950c2f92b166c34ceb294e8b8c4997a6c4e


@ -5,6 +5,7 @@ add_subdirectory( egenesis )
add_subdirectory( fc )
add_subdirectory( net )
add_subdirectory( plugins )
add_subdirectory( sha3 )
add_subdirectory( time )
add_subdirectory( utilities )
add_subdirectory( wallet )


@ -15,7 +15,7 @@ add_library( graphene_app
#target_link_libraries( graphene_app graphene_market_history graphene_account_history graphene_chain fc graphene_db graphene_net graphene_utilities graphene_debug_witness )
target_link_libraries( graphene_app
PUBLIC graphene_net graphene_utilities
graphene_account_history graphene_accounts_list graphene_affiliate_stats graphene_bookie graphene_debug_witness graphene_elasticsearch graphene_es_objects graphene_generate_genesis graphene_market_history )
graphene_account_history graphene_accounts_list graphene_affiliate_stats graphene_bookie graphene_debug_witness graphene_elasticsearch graphene_es_objects graphene_generate_genesis graphene_market_history peerplays_sidechain )
target_include_directories( graphene_app
PUBLIC "${CMAKE_CURRENT_SOURCE_DIR}/include"


@ -45,7 +45,6 @@ template class fc::api<graphene::app::block_api>;
template class fc::api<graphene::app::network_broadcast_api>;
template class fc::api<graphene::app::network_node_api>;
template class fc::api<graphene::app::history_api>;
template class fc::api<graphene::app::crypto_api>;
template class fc::api<graphene::app::asset_api>;
template class fc::api<graphene::debug_witness::debug_api>;
template class fc::api<graphene::app::login_api>;
@ -90,8 +89,6 @@ void login_api::enable_api(const std::string &api_name) {
_history_api = std::make_shared<history_api>(_app);
} else if (api_name == "network_node_api") {
_network_node_api = std::make_shared<network_node_api>(std::ref(_app));
} else if (api_name == "crypto_api") {
_crypto_api = std::make_shared<crypto_api>();
} else if (api_name == "asset_api") {
_asset_api = std::make_shared<asset_api>(_app);
} else if (api_name == "debug_api") {
@ -106,6 +103,10 @@ void login_api::enable_api(const std::string &api_name) {
// can only enable this API if the plugin was loaded
if (_app.get_plugin("affiliate_stats"))
_affiliate_stats_api = std::make_shared<graphene::affiliate_stats::affiliate_stats_api>(std::ref(_app));
} else if (api_name == "sidechain_api") {
// can only enable this API if the plugin was loaded
if (_app.get_plugin("peerplays_sidechain"))
_sidechain_api = std::make_shared<graphene::peerplays_sidechain::sidechain_api>(std::ref(_app));
}
return;
}
@ -209,8 +210,8 @@ network_node_api::network_node_api(application &a) :
}
/*
* Remove expired transactions from pending_transactions
*/
* Remove expired transactions from pending_transactions
*/
for (const auto &transaction : _pending_transactions) {
if (transaction.second.expiration < block.timestamp) {
auto transaction_it = _pending_transactions.find(transaction.second.id());
@ -285,11 +286,6 @@ fc::api<history_api> login_api::history() const {
return *_history_api;
}
fc::api<crypto_api> login_api::crypto() const {
FC_ASSERT(_crypto_api);
return *_crypto_api;
}
fc::api<asset_api> login_api::asset() const {
FC_ASSERT(_asset_api);
return *_asset_api;
@ -310,6 +306,11 @@ fc::api<graphene::affiliate_stats::affiliate_stats_api> login_api::affiliate_sta
return *_affiliate_stats_api;
}
fc::api<graphene::peerplays_sidechain::sidechain_api> login_api::sidechain() const {
FC_ASSERT(_sidechain_api);
return *_sidechain_api;
}
vector<order_history_object> history_api::get_fill_order_history(std::string asset_a, std::string asset_b, uint32_t limit) const {
FC_ASSERT(_app.chain_database());
const auto &db = *_app.chain_database();
@ -513,55 +514,6 @@ vector<bucket_object> history_api::get_market_history(std::string asset_a, std::
FC_CAPTURE_AND_RETHROW((asset_a)(asset_b)(bucket_seconds)(start)(end))
}
crypto_api::crypto_api(){};
commitment_type crypto_api::blind(const blind_factor_type &blind, uint64_t value) {
return fc::ecc::blind(blind, value);
}
blind_factor_type crypto_api::blind_sum(const std::vector<blind_factor_type> &blinds_in, uint32_t non_neg) {
return fc::ecc::blind_sum(blinds_in, non_neg);
}
bool crypto_api::verify_sum(const std::vector<commitment_type> &commits_in, const std::vector<commitment_type> &neg_commits_in, int64_t excess) {
return fc::ecc::verify_sum(commits_in, neg_commits_in, excess);
}
verify_range_result crypto_api::verify_range(const commitment_type &commit, const std::vector<char> &proof) {
verify_range_result result;
result.success = fc::ecc::verify_range(result.min_val, result.max_val, commit, proof);
return result;
}
std::vector<char> crypto_api::range_proof_sign(uint64_t min_value,
const commitment_type &commit,
const blind_factor_type &commit_blind,
const blind_factor_type &nonce,
int8_t base10_exp,
uint8_t min_bits,
uint64_t actual_value) {
return fc::ecc::range_proof_sign(min_value, commit, commit_blind, nonce, base10_exp, min_bits, actual_value);
}
verify_range_proof_rewind_result crypto_api::verify_range_proof_rewind(const blind_factor_type &nonce,
const commitment_type &commit,
const std::vector<char> &proof) {
verify_range_proof_rewind_result result;
result.success = fc::ecc::verify_range_proof_rewind(result.blind_out,
result.value_out,
result.message_out,
nonce,
result.min_val,
result.max_val,
const_cast<commitment_type &>(commit),
proof);
return result;
}
range_proof_info crypto_api::range_get_info(const std::vector<char> &proof) {
return fc::ecc::range_get_info(proof);
}
// asset_api
asset_api::asset_api(graphene::app::application &app) :
_app(app),


@ -48,6 +48,7 @@
#include <boost/range/algorithm/reverse.hpp>
#include <boost/signals2.hpp>
#include <atomic>
#include <iostream>
#include <fc/log/file_appender.hpp>
@ -107,6 +108,7 @@ public:
fc::optional<fc::temp_file> _lock_file;
bool _is_block_producer = false;
bool _force_validate = false;
std::atomic_bool _running{true};
void reset_p2p_node(const fc::path &data_dir) {
try {
@ -115,67 +117,29 @@ public:
_p2p_network->load_configuration(data_dir / "p2p");
_p2p_network->set_node_delegate(this);
vector<string> all_seeds;
if (_options->count("seed-node")) {
auto seeds = _options->at("seed-node").as<vector<string>>();
for (const string &endpoint_string : seeds) {
try {
std::vector<fc::ip::endpoint> endpoints = resolve_string_to_ip_endpoints(endpoint_string);
for (const fc::ip::endpoint &endpoint : endpoints) {
ilog("Adding seed node ${endpoint}", ("endpoint", endpoint));
_p2p_network->add_node(endpoint);
_p2p_network->connect_to_endpoint(endpoint);
}
} catch (const fc::exception &e) {
wlog("caught exception ${e} while adding seed node ${endpoint}",
("e", e.to_detail_string())("endpoint", endpoint_string));
}
}
all_seeds.insert(all_seeds.end(), seeds.begin(), seeds.end());
}
if (_options->count("seed-nodes")) {
auto seeds_str = _options->at("seed-nodes").as<string>();
auto seeds = fc::json::from_string(seeds_str).as<vector<string>>(2);
for (const string &endpoint_string : seeds) {
try {
std::vector<fc::ip::endpoint> endpoints = resolve_string_to_ip_endpoints(endpoint_string);
for (const fc::ip::endpoint &endpoint : endpoints) {
ilog("Adding seed node ${endpoint}", ("endpoint", endpoint));
_p2p_network->add_node(endpoint);
}
} catch (const fc::exception &e) {
wlog("caught exception ${e} while adding seed node ${endpoint}",
("e", e.to_detail_string())("endpoint", endpoint_string));
}
}
} else {
// t.me/peerplays #seednodes
vector<string> seeds = {
#ifdef BUILD_PEERPLAYS_TESTNET
all_seeds.insert(all_seeds.end(), seeds.begin(), seeds.end());
}
#else
"51.222.110.110:9777",
"95.216.90.243:9777",
"96.46.48.98:19777",
"96.46.48.98:29777",
"96.46.48.98:39777",
"96.46.48.98:49777",
"96.46.48.98:59777",
"seed.i9networks.net.br:9777",
"witness.serverpit.com:9777"
#endif
};
for (const string &endpoint_string : seeds) {
try {
std::vector<fc::ip::endpoint> endpoints = resolve_string_to_ip_endpoints(endpoint_string);
for (const fc::ip::endpoint &endpoint : endpoints) {
ilog("Adding seed node ${endpoint}", ("endpoint", endpoint));
_p2p_network->add_node(endpoint);
}
} catch (const fc::exception &e) {
wlog("caught exception ${e} while adding seed node ${endpoint}",
("e", e.to_detail_string())("endpoint", endpoint_string));
for (const string &endpoint_string : all_seeds) {
try {
std::vector<fc::ip::endpoint> endpoints = resolve_string_to_ip_endpoints(endpoint_string);
for (const fc::ip::endpoint &endpoint : endpoints) {
ilog("Adding seed node ${endpoint}", ("endpoint", endpoint));
_p2p_network->add_node(endpoint);
}
} catch (const fc::exception &e) {
wlog("caught exception ${e} while adding seed node ${endpoint}",
("e", e.to_detail_string())("endpoint", endpoint_string));
}
}
@ -398,9 +362,9 @@ public:
wild_access.allowed_apis.push_back("database_api");
wild_access.allowed_apis.push_back("network_broadcast_api");
wild_access.allowed_apis.push_back("history_api");
wild_access.allowed_apis.push_back("crypto_api");
wild_access.allowed_apis.push_back("bookie_api");
wild_access.allowed_apis.push_back("affiliate_stats_api");
wild_access.allowed_apis.push_back("sidechain_api");
_apiaccess.permission_map["*"] = wild_access;
}
@ -427,8 +391,8 @@ public:
}
/**
* If delegate has the item, the network has no need to fetch it.
*/
* If delegate has the item, the network has no need to fetch it.
*/
virtual bool has_item(const net::item_id &id) override {
try {
if (id.item_type == graphene::net::block_message_type)
@ -440,15 +404,21 @@ public:
}
/**
* @brief allows the application to validate an item prior to broadcasting to peers.
*
* @param sync_mode true if the message was fetched through the sync process, false during normal operation
* @returns true if this message caused the blockchain to switch forks, false if it did not
*
* @throws exception if error validating the item, otherwise the item is safe to broadcast on.
*/
* @brief allows the application to validate an item prior to broadcasting to peers.
*
* @param sync_mode true if the message was fetched through the sync process, false during normal operation
* @returns true if this message caused the blockchain to switch forks, false if it did not
*
* @throws exception if error validating the item, otherwise the item is safe to broadcast on.
*/
virtual bool handle_block(const graphene::net::block_message &blk_msg, bool sync_mode,
std::vector<fc::uint160_t> &contained_transaction_message_ids) override {
// checkpoint for threads that may be cancelled on application shutdown
if (!_running.load()) {
return true;
}
try {
auto latency = fc::time_point::now() - blk_msg.block.timestamp;
FC_ASSERT((latency.count() / 1000) > -5000, "Rejecting block with timestamp in the future");
@ -528,14 +498,14 @@ public:
}
/**
* Assuming all data elements are ordered in some way, this method should
* return up to limit ids that occur *after* the last ID in synopsis that
* we recognize.
*
* On return, remaining_item_count will be set to the number of items
* in our blockchain after the last item returned in the result,
* or 0 if the result contains the last item in the blockchain
*/
* Assuming all data elements are ordered in some way, this method should
* return up to limit ids that occur *after* the last ID in synopsis that
* we recognize.
*
* On return, remaining_item_count will be set to the number of items
* in our blockchain after the last item returned in the result,
* or 0 if the result contains the last item in the blockchain
*/
virtual std::vector<item_hash_t> get_block_ids(const std::vector<item_hash_t> &blockchain_synopsis,
uint32_t &remaining_item_count,
uint32_t limit) override {
@ -582,8 +552,8 @@ public:
}
/**
* Given the hash of the requested data, fetch the body.
*/
* Given the hash of the requested data, fetch the body.
*/
virtual message get_item(const item_id &id) override {
try {
// ilog("Request for item ${id}", ("id", id));
@ -606,63 +576,63 @@ public:
}
/**
* Returns a synopsis of the blockchain used for syncing. This consists of a list of
* block hashes at intervals exponentially increasing towards the genesis block.
* When syncing to a peer, the peer uses this data to determine if we're on the same
* fork as they are, and if not, what blocks they need to send us to get us on their
* fork.
*
* In the over-simplified case, this is a straightforward synopsis of our current
* preferred blockchain; when we first connect up to a peer, this is what we will be sending.
* It looks like this:
* If the blockchain is empty, it will return the empty list.
* If the blockchain has one block, it will return a list containing just that block.
* If it contains more than one block:
* the first element in the list will be the hash of the highest numbered block that
* we cannot undo
* the second element will be the hash of an item at the half way point in the undoable
* segment of the blockchain
* the third will be ~3/4 of the way through the undoable segment of the block chain
* the fourth will be at ~7/8...
* &c.
* the last item in the list will be the hash of the most recent block on our preferred chain
* so if the blockchain had 26 blocks labeled a - z, the synopsis would be:
* a n u x z
* the idea being that by sending a small (<30) number of block ids, we can summarize a huge
* blockchain. The block ids are more dense near the end of the chain because we are
* more likely to be almost in sync when we first connect, and forks are likely to be short.
* If the peer we're syncing with in our example is on a fork that started at block 'v',
* then they will reply to our synopsis with a list of all blocks starting from block 'u',
* the last block they know that we had in common.
*
* In the real code, there are several complications.
*
* First, as an optimization, we don't usually send a synopsis of the entire blockchain, we
* send a synopsis of only the segment of the blockchain that we have undo data for. If their
* fork doesn't build off of something in our undo history, we would be unable to switch, so there's
* no reason to fetch the blocks.
*
* Second, when a peer replies to our initial synopsis and gives us a list of the blocks they think
* we are missing, they only send a chunk of a few thousand blocks at once. After we get those
* block ids, we need to request more blocks by sending another synopsis (we can't just say "send me
* the next 2000 ids" because they may have switched forks themselves and they don't track what
* they've sent us). For faster performance, we want to get a fairly long list of block ids first,
* then start downloading the blocks.
* The peer doesn't handle these follow-up block id requests any different from the initial request;
* it treats the synopsis we send as our blockchain and bases its response entirely off that. So to
* get the response we want (the next chunk of block ids following the last one they sent us, or,
* failing that, the shortest fork off of the last list of block ids they sent), we need to construct
* a synopsis as if our blockchain was made up of:
* 1. the blocks in our block chain up to the fork point (if there is a fork) or the head block (if no fork)
* 2. the blocks we've already pushed from their fork (if there's a fork)
* 3. the block ids they've previously sent us
* Segment 3 is handled in the p2p code, it just tells us the number of blocks it has (in
* number_of_blocks_after_reference_point) so we can leave space in the synopsis for them.
* We're responsible for constructing the synopsis of Segments 1 and 2 from our active blockchain and
* fork database. The reference_point parameter is the last block from that peer that has been
* successfully pushed to the blockchain, so that tells us whether the peer is on a fork or on
* the main chain.
*/
* Returns a synopsis of the blockchain used for syncing. This consists of a list of
* block hashes at intervals exponentially increasing towards the genesis block.
* When syncing to a peer, the peer uses this data to determine if we're on the same
* fork as they are, and if not, what blocks they need to send us to get us on their
* fork.
*
* In the over-simplified case, this is a straightforward synopsis of our current
* preferred blockchain; when we first connect up to a peer, this is what we will be sending.
* It looks like this:
* If the blockchain is empty, it will return the empty list.
* If the blockchain has one block, it will return a list containing just that block.
* If it contains more than one block:
* the first element in the list will be the hash of the highest numbered block that
* we cannot undo
* the second element will be the hash of an item at the half way point in the undoable
* segment of the blockchain
* the third will be ~3/4 of the way through the undoable segment of the block chain
* the fourth will be at ~7/8...
* &c.
* the last item in the list will be the hash of the most recent block on our preferred chain
* so if the blockchain had 26 blocks labeled a - z, the synopsis would be:
* a n u x z
* the idea being that by sending a small (<30) number of block ids, we can summarize a huge
* blockchain. The block ids are more dense near the end of the chain because we are
* more likely to be almost in sync when we first connect, and forks are likely to be short.
* If the peer we're syncing with in our example is on a fork that started at block 'v',
* then they will reply to our synopsis with a list of all blocks starting from block 'u',
* the last block they know that we had in common.
*
* In the real code, there are several complications.
*
* First, as an optimization, we don't usually send a synopsis of the entire blockchain, we
* send a synopsis of only the segment of the blockchain that we have undo data for. If their
* fork doesn't build off of something in our undo history, we would be unable to switch, so there's
* no reason to fetch the blocks.
*
* Second, when a peer replies to our initial synopsis and gives us a list of the blocks they think
* we are missing, they only send a chunk of a few thousand blocks at once. After we get those
* block ids, we need to request more blocks by sending another synopsis (we can't just say "send me
* the next 2000 ids" because they may have switched forks themselves and they don't track what
* they've sent us). For faster performance, we want to get a fairly long list of block ids first,
* then start downloading the blocks.
* The peer doesn't handle these follow-up block id requests any different from the initial request;
* it treats the synopsis we send as our blockchain and bases its response entirely off that. So to
* get the response we want (the next chunk of block ids following the last one they sent us, or,
* failing that, the shortest fork off of the last list of block ids they sent), we need to construct
* a synopsis as if our blockchain was made up of:
* 1. the blocks in our block chain up to the fork point (if there is a fork) or the head block (if no fork)
* 2. the blocks we've already pushed from their fork (if there's a fork)
* 3. the block ids they've previously sent us
* Segment 3 is handled in the p2p code, it just tells us the number of blocks it has (in
* number_of_blocks_after_reference_point) so we can leave space in the synopsis for them.
* We're responsible for constructing the synopsis of Segments 1 and 2 from our active blockchain and
* fork database. The reference_point parameter is the last block from that peer that has been
* successfully pushed to the blockchain, so that tells us whether the peer is on a fork or on
* the main chain.
*/
virtual std::vector<item_hash_t> get_blockchain_synopsis(const item_hash_t &reference_point,
uint32_t number_of_blocks_after_reference_point) override {
try {
@ -763,26 +733,26 @@ public:
low_block_num += (true_high_block_num - low_block_num + 2) / 2;
} while (low_block_num <= high_block_num);
//idump((synopsis));
// idump((synopsis));
return synopsis;
}
FC_CAPTURE_AND_RETHROW()
}
/**
* Call this after the call to handle_message succeeds.
*
* @param item_type the type of the item we're synchronizing, will be the same as item passed to the sync_from() call
* @param item_count the number of items known to the node that haven't been sent to handle_item() yet.
* After `item_count` more calls to handle_item(), the node will be in sync
*/
* Call this after the call to handle_message succeeds.
*
* @param item_type the type of the item we're synchronizing, will be the same as item passed to the sync_from() call
* @param item_count the number of items known to the node that haven't been sent to handle_item() yet.
* After `item_count` more calls to handle_item(), the node will be in sync
*/
virtual void sync_status(uint32_t item_type, uint32_t item_count) override {
// any status reports to GUI go here
}
/**
* Call any time the number of connected peers changes.
*/
* Call any time the number of connected peers changes.
*/
virtual void connection_count_changed(uint32_t c) override {
// any status reports to GUI go here
}
@ -794,10 +764,14 @@ public:
FC_CAPTURE_AND_RETHROW((block_id))
}
virtual fc::time_point_sec get_last_known_hardfork_time() override {
return _chain_db->_hardfork_times[_chain_db->_hardfork_times.size() - 1];
}
/**
* Returns the time a block was produced (if block_id = 0, returns genesis time).
* If we don't know about the block, returns time_point_sec::min()
*/
* Returns the time a block was produced (if block_id = 0, returns genesis time).
* If we don't know about the block, returns time_point_sec::min()
*/
virtual fc::time_point_sec get_block_time(const item_hash_t &block_id) override {
try {
auto opt_block = _chain_db->fetch_block_by_id(block_id);
@ -859,11 +833,26 @@ application::~application() {
void application::set_program_options(boost::program_options::options_description &cli,
boost::program_options::options_description &cfg) const {
cfg.add_options()("p2p-endpoint", bpo::value<string>(), "Endpoint for P2P node to listen on");
std::vector<string> seed_nodes = {
#ifdef BUILD_PEERPLAYS_TESTNET
#else
"51.222.110.110:9777",
"95.216.90.243:9777",
"ca.peerplays.info:9777",
"de.peerplays.xyz:9777",
"pl.peerplays.org:9777",
"seed.i9networks.net.br:9777",
"witness.serverpit.com:9777"
#endif
};
std::string seed_nodes_str = fc::json::to_string(seed_nodes);
cfg.add_options()("p2p-endpoint", bpo::value<string>()->default_value("0.0.0.0:9777"), "Endpoint for P2P node to listen on");
cfg.add_options()("seed-node,s", bpo::value<vector<string>>()->composing(), "P2P nodes to connect to on startup (may specify multiple times)");
cfg.add_options()("seed-nodes", bpo::value<string>()->composing(), "JSON array of P2P nodes to connect to on startup");
cfg.add_options()("seed-nodes", bpo::value<string>()->composing()->default_value(seed_nodes_str), "JSON array of P2P nodes to connect to on startup");
cfg.add_options()("checkpoint,c", bpo::value<vector<string>>()->composing(), "Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.");
cfg.add_options()("rpc-endpoint", bpo::value<string>()->implicit_value("127.0.0.1:8090"), "Endpoint for websocket RPC to listen on");
cfg.add_options()("rpc-endpoint", bpo::value<string>()->default_value("127.0.0.1:8090"), "Endpoint for websocket RPC to listen on");
cfg.add_options()("rpc-tls-endpoint", bpo::value<string>()->implicit_value("127.0.0.1:8089"), "Endpoint for TLS websocket RPC to listen on");
cfg.add_options()("server-pem,p", bpo::value<string>()->implicit_value("server.pem"), "The TLS certificate file for this server");
cfg.add_options()("server-pem-password,P", bpo::value<string>()->implicit_value(""), "Password for this certificate");
@ -928,7 +917,8 @@ void application::initialize(const fc::path &data_dir, const boost::program_opti
wanted.insert("accounts_list");
wanted.insert("affiliate_stats");
}
wanted.insert("witness");
if (!wanted.count("delayed_node") && !wanted.count("debug_witness") && !wanted.count("witness")) // explicitly requested delayed_node or debug_witness functionality suppresses witness functions
wanted.insert("witness");
wanted.insert("bookie");
int es_ah_conflict_counter = 0;
@ -960,7 +950,7 @@ void application::startup() {
}
std::shared_ptr<abstract_plugin> application::get_plugin(const string &name) const {
return my->_active_plugins[name];
return is_plugin_enabled(name) ? my->_active_plugins[name] : nullptr;
}
bool application::is_plugin_enabled(const string &name) const {
@ -1007,6 +997,7 @@ void application::shutdown_plugins() {
return;
}
void application::shutdown() {
my->_running.store(false);
if (my->_p2p_network)
my->_p2p_network->close();
if (my->_chain_db)

File diff suppressed because it is too large


@ -28,16 +28,15 @@
#include <graphene/chain/protocol/confidential.hpp>
#include <graphene/chain/protocol/types.hpp>
#include <graphene/net/node.hpp>
#include <graphene/accounts_list/accounts_list_plugin.hpp>
#include <graphene/market_history/market_history_plugin.hpp>
#include <graphene/elasticsearch/elasticsearch_plugin.hpp>
#include <graphene/affiliate_stats/affiliate_stats_api.hpp>
#include <graphene/bookie/bookie_api.hpp>
#include <graphene/debug_witness/debug_api.hpp>
#include <graphene/net/node.hpp>
#include <graphene/elasticsearch/elasticsearch_plugin.hpp>
#include <graphene/market_history/market_history_plugin.hpp>
#include <graphene/peerplays_sidechain/sidechain_api.hpp>
#include <fc/api.hpp>
#include <fc/crypto/elliptic.hpp>
@ -86,10 +85,10 @@ struct asset_holders {
};
/**
* @brief The history_api class implements the RPC API for account history
*
* This API contains methods to access account histories
*/
* @brief The history_api class implements the RPC API for account history
*
* This API contains methods to access account histories
*/
class history_api {
public:
history_api(application &app) :
@ -98,27 +97,27 @@ public:
}
/**
* @brief Get operations relevant to the specified account
* @param account_id_or_name The account ID or name whose history should be queried
* @param stop ID of the earliest operation to retrieve
* @param limit Maximum number of operations to retrieve (must not exceed 100)
* @param start ID of the most recent operation to retrieve
* @return A list of operations performed by account, ordered from most recent to oldest.
*/
* @brief Get operations relevant to the specified account
* @param account_id_or_name The account ID or name whose history should be queried
* @param stop ID of the earliest operation to retrieve
* @param limit Maximum number of operations to retrieve (must not exceed 100)
* @param start ID of the most recent operation to retrieve
* @return A list of operations performed by account, ordered from most recent to oldest.
*/
vector<operation_history_object> get_account_history(const std::string account_id_or_name,
operation_history_id_type stop = operation_history_id_type(),
unsigned limit = 100,
operation_history_id_type start = operation_history_id_type()) const;
/**
* @brief Get only asked operations relevant to the specified account
* @param account_id_or_name The account ID or name whose history should be queried
* @param operation_id The ID of the operation we want to get operations in the account( 0 = transfer , 1 = limit order create, ...)
* @param stop ID of the earliest operation to retrieve
* @param limit Maximum number of operations to retrieve (must not exceed 100)
* @param start ID of the most recent operation to retrieve
* @return A list of operations performed by account, ordered from most recent to oldest.
*/
* @brief Get only asked operations relevant to the specified account
* @param account_id_or_name The account ID or name whose history should be queried
* @param operation_id The ID of the operation we want to get operations in the account( 0 = transfer , 1 = limit order create, ...)
* @param stop ID of the earliest operation to retrieve
* @param limit Maximum number of operations to retrieve (must not exceed 100)
* @param start ID of the most recent operation to retrieve
* @return A list of operations performed by account, ordered from most recent to oldest.
*/
vector<operation_history_object> get_account_history_operations(const std::string account_id_or_name,
int operation_id,
operation_history_id_type start = operation_history_id_type(),
@ -126,17 +125,17 @@ public:
unsigned limit = 100) const;
/**
* @brief Get operations relevant to the specified account referenced
* by an event numbering specific to the account. The current number of operations
* for the account can be found in the account statistics (or use 0 for start).
* @param account_id_or_name The account ID or name whose history should be queried
* @param stop Sequence number of earliest operation. 0 is default and will
* query 'limit' number of operations.
* @param limit Maximum number of operations to retrieve (must not exceed 100)
* @param start Sequence number of the most recent operation to retrieve.
* 0 is default, which will start querying from the most recent operation.
* @return A list of operations performed by account, ordered from most recent to oldest.
*/
* @brief Get operations relevant to the specified account referenced
* by an event numbering specific to the account. The current number of operations
* for the account can be found in the account statistics (or use 0 for start).
* @param account_id_or_name The account ID or name whose history should be queried
* @param stop Sequence number of earliest operation. 0 is default and will
* query 'limit' number of operations.
* @param limit Maximum number of operations to retrieve (must not exceed 100)
* @param start Sequence number of the most recent operation to retrieve.
* 0 is default, which will start querying from the most recent operation.
* @return A list of operations performed by account, ordered from most recent to oldest.
*/
vector<operation_history_object> get_relative_account_history(const std::string account_id_or_name,
uint32_t stop = 0,
unsigned limit = 100,
@ -157,8 +156,8 @@ private:
};
/**
* @brief Block api
*/
* @brief Block api
*/
class block_api {
public:
block_api(graphene::chain::database &db);
@ -171,8 +170,8 @@ private:
};
/**
* @brief The network_broadcast_api class allows broadcasting of transactions.
*/
* @brief The network_broadcast_api class allows broadcasting of transactions.
*/
class network_broadcast_api : public std::enable_shared_from_this<network_broadcast_api> {
public:
network_broadcast_api(application &a);
@ -187,36 +186,36 @@ public:
typedef std::function<void(variant /*transaction_confirmation*/)> confirmation_callback;
/**
* @brief Broadcast a transaction to the network
* @param trx The transaction to broadcast
*
* The transaction will be checked for validity in the local database prior to broadcasting. If it fails to
* apply locally, an error will be thrown and the transaction will not be broadcast.
*/
* @brief Broadcast a transaction to the network
* @param trx The transaction to broadcast
*
* The transaction will be checked for validity in the local database prior to broadcasting. If it fails to
* apply locally, an error will be thrown and the transaction will not be broadcast.
*/
void broadcast_transaction(const signed_transaction &trx);
/** this version of broadcast transaction registers a callback method that will be called when the transaction is
* included into a block. The callback method includes the transaction id, block number, and transaction number in the
* block.
*/
* included into a block. The callback method includes the transaction id, block number, and transaction number in the
* block.
*/
void broadcast_transaction_with_callback(confirmation_callback cb, const signed_transaction &trx);
/** this version of broadcast transaction registers a callback method that will be called when the transaction is
* included into a block. The callback method includes the transaction id, block number, and transaction number in the
* block.
*/
* included into a block. The callback method includes the transaction id, block number, and transaction number in the
* block.
*/
fc::variant broadcast_transaction_synchronous(const signed_transaction &trx);
void broadcast_block(const signed_block &block);
/**
* @brief Not reflected, thus not accessible to API clients.
*
* This function is registered to receive the applied_block
* signal from the chain database when a block is received.
* It then dispatches callbacks to clients who have requested
* to be notified when a particular txid is included in a block.
*/
* @brief Not reflected, thus not accessible to API clients.
*
* This function is registered to receive the applied_block
* signal from the chain database when a block is received.
* It then dispatches callbacks to clients who have requested
* to be notified when a particular txid is included in a block.
*/
void on_applied_block(const signed_block &b);
private:
@ -226,60 +225,60 @@ private:
};
/**
* @brief The network_node_api class allows maintenance of p2p connections.
*/
* @brief The network_node_api class allows maintenance of p2p connections.
*/
class network_node_api {
public:
network_node_api(application &a);
/**
* @brief Return general network information, such as p2p port
*/
* @brief Return general network information, such as p2p port
*/
fc::variant_object get_info() const;
/**
* @brief add_node Connect to a new peer
* @param ep The IP/Port of the peer to connect to
*/
* @brief add_node Connect to a new peer
* @param ep The IP/Port of the peer to connect to
*/
void add_node(const fc::ip::endpoint &ep);
/**
* @brief Get status of all current connections to peers
*/
* @brief Get status of all current connections to peers
*/
std::vector<net::peer_status> get_connected_peers() const;
/**
* @brief Get advanced node parameters, such as desired and max
* number of connections
*/
* @brief Get advanced node parameters, such as desired and max
* number of connections
*/
fc::variant_object get_advanced_node_parameters() const;
/**
* @brief Set advanced node parameters, such as desired and max
* number of connections
* @param params a JSON object containing the name/value pairs for the parameters to set
*/
* @brief Set advanced node parameters, such as desired and max
* number of connections
* @param params a JSON object containing the name/value pairs for the parameters to set
*/
void set_advanced_node_parameters(const fc::variant_object &params);
/**
* @brief Return list of potential peers
*/
* @brief Return list of potential peers
*/
std::vector<net::potential_peer_record> get_potential_peers() const;
/**
* @brief Return list of pending transactions.
*/
* @brief Return list of pending transactions.
*/
map<transaction_id_type, signed_transaction> list_pending_transactions() const;
/**
* @brief Subscribes caller for notifications about pending transactions.
* @param callback a functional object which will be called when new transaction is created.
*/
* @brief Subscribes caller for notifications about pending transactions.
* @param callback a functional object which will be called when new transaction is created.
*/
void subscribe_to_pending_transactions(std::function<void(const variant &)> callback);
/**
* @brief Unsubscribes caller from notifications about pending transactions.
*/
* @brief Unsubscribes caller from notifications about pending transactions.
*/
void unsubscribe_from_pending_transactions();
private:
@ -290,61 +289,34 @@ private:
std::function<void(const variant &)> _on_pending_transaction;
};
class crypto_api {
public:
crypto_api();
fc::ecc::commitment_type blind(const fc::ecc::blind_factor_type &blind, uint64_t value);
fc::ecc::blind_factor_type blind_sum(const std::vector<blind_factor_type> &blinds_in, uint32_t non_neg);
bool verify_sum(const std::vector<commitment_type> &commits_in, const std::vector<commitment_type> &neg_commits_in, int64_t excess);
verify_range_result verify_range(const fc::ecc::commitment_type &commit, const std::vector<char> &proof);
std::vector<char> range_proof_sign(uint64_t min_value,
const commitment_type &commit,
const blind_factor_type &commit_blind,
const blind_factor_type &nonce,
int8_t base10_exp,
uint8_t min_bits,
uint64_t actual_value);
verify_range_proof_rewind_result verify_range_proof_rewind(const blind_factor_type &nonce,
const fc::ecc::commitment_type &commit,
const std::vector<char> &proof);
range_proof_info range_get_info(const std::vector<char> &proof);
};
/**
* @brief
*/
* @brief
*/
class asset_api {
public:
asset_api(graphene::app::application &app);
~asset_api();
/**
* @brief Get asset holders for a specific asset
* @param asset The specific asset id or symbol
* @param start The start index
* @param limit Maximum limit must not exceed 100
* @return A list of asset holders for the specified asset
*/
* @brief Get asset holders for a specific asset
* @param asset The specific asset id or symbol
* @param start The start index
* @param limit Maximum limit must not exceed 100
* @return A list of asset holders for the specified asset
*/
vector<account_asset_balance> get_asset_holders(std::string asset, uint32_t start, uint32_t limit) const;
/**
* @brief Get asset holders count for a specific asset
* @param asset The specific asset id or symbol
* @return Holders count for the specified asset
*/
* @brief Get asset holders count for a specific asset
* @param asset The specific asset id or symbol
* @return Holders count for the specified asset
*/
int get_asset_holders_count(std::string asset) const;
/**
* @brief Get all asset holders
* @return A list of all asset holders
*/
* @brief Get all asset holders
* @return A list of all asset holders
*/
vector<asset_holders> get_all_asset_holders() const;
uint32_t api_limit_get_asset_holders = 100;
@ -360,30 +332,29 @@ extern template class fc::api<graphene::app::block_api>;
extern template class fc::api<graphene::app::network_broadcast_api>;
extern template class fc::api<graphene::app::network_node_api>;
extern template class fc::api<graphene::app::history_api>;
extern template class fc::api<graphene::app::crypto_api>;
extern template class fc::api<graphene::app::asset_api>;
extern template class fc::api<graphene::debug_witness::debug_api>;
namespace graphene { namespace app {
/**
* @brief The login_api class implements the bottom layer of the RPC API
*
* All other APIs must be requested from this API.
*/
* @brief The login_api class implements the bottom layer of the RPC API
*
* All other APIs must be requested from this API.
*/
class login_api {
public:
login_api(application &a);
~login_api();
/**
* @brief Authenticate to the RPC server
* @param user Username to login with
* @param password Password to login with
* @return True if logged in successfully; false otherwise
*
* @note This must be called prior to requesting other APIs. Other APIs may not be accessible until the client
* has successfully authenticated.
*/
* @brief Authenticate to the RPC server
* @param user Username to login with
* @param password Password to login with
* @return True if logged in successfully; false otherwise
*
* @note This must be called prior to requesting other APIs. Other APIs may not be accessible until the client
* has successfully authenticated.
*/
bool login(const string &user, const string &password);
/// @brief Retrieve the network block API
fc::api<block_api> block() const;
@ -395,8 +366,6 @@ public:
fc::api<history_api> history() const;
/// @brief Retrieve the network node API
fc::api<network_node_api> network_node() const;
/// @brief Retrieve the cryptography API
fc::api<crypto_api> crypto() const;
/// @brief Retrieve the asset API
fc::api<asset_api> asset() const;
/// @brief Retrieve the debug API (if available)
@ -405,6 +374,8 @@ public:
fc::api<graphene::bookie::bookie_api> bookie() const;
/// @brief Retrieve the affiliate_stats API (if available)
fc::api<graphene::affiliate_stats::affiliate_stats_api> affiliate_stats() const;
/// @brief Retrieve the sidechain_api API (if available)
fc::api<graphene::peerplays_sidechain::sidechain_api> sidechain() const;
/// @brief Called to enable an API, not reflected.
void enable_api(const string &api_name);
@ -416,11 +387,11 @@ private:
optional<fc::api<network_broadcast_api>> _network_broadcast_api;
optional<fc::api<network_node_api>> _network_node_api;
optional<fc::api<history_api>> _history_api;
optional<fc::api<crypto_api>> _crypto_api;
optional<fc::api<asset_api>> _asset_api;
optional<fc::api<graphene::debug_witness::debug_api>> _debug_api;
optional<fc::api<graphene::bookie::bookie_api>> _bookie_api;
optional<fc::api<graphene::affiliate_stats::affiliate_stats_api>> _affiliate_stats_api;
optional<fc::api<graphene::peerplays_sidechain::sidechain_api>> _sidechain_api;
};
}} // namespace graphene::app
@ -473,15 +444,6 @@ FC_API(graphene::app::network_node_api,
(subscribe_to_pending_transactions)
(unsubscribe_from_pending_transactions))
FC_API(graphene::app::crypto_api,
(blind)
(blind_sum)
(verify_sum)
(verify_range)
(range_proof_sign)
(verify_range_proof_rewind)
(range_get_info))
FC_API(graphene::app::asset_api,
(get_asset_holders)
(get_asset_holders_count)
@ -494,10 +456,10 @@ FC_API(graphene::app::login_api,
(database)
(history)
(network_node)
(crypto)
(asset)
(debug)
(bookie)
(affiliate_stats))
(affiliate_stats)
(sidechain))
// clang-format on
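For reference, a minimal client-side sketch of the updated login flow; this is illustrative only and assumes an application instance named app plus placeholder credentials user and password, none of which come from this merge request. After authenticating, the new sidechain API handle can be requested, while the removed crypto() accessor is no longer part of the API surface.

// Illustrative sketch -- app, user and password are assumed to exist in the caller's context.
graphene::app::login_api login(app);
if (login.login(user, password)) {        // must succeed before other APIs are served
   auto sidechain = login.sidechain();    // fc::api<graphene::peerplays_sidechain::sidechain_api>
   auto asset     = login.asset();        // fc::api<graphene::app::asset_api>
   // login.crypto() was removed together with the confidential (blinded) transfer API.
}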

View file

@ -56,6 +56,8 @@
#include <graphene/chain/custom_permission_object.hpp>
#include <graphene/chain/nft_object.hpp>
#include <graphene/chain/offer_object.hpp>
#include <graphene/chain/voters_info.hpp>
#include <graphene/chain/votes_info.hpp>
#include <graphene/market_history/market_history_plugin.hpp>
@ -80,6 +82,15 @@ using namespace std;
class database_api_impl;
struct signed_block_with_info : public signed_block {
signed_block_with_info();
signed_block_with_info(const signed_block &block);
signed_block_with_info(const signed_block_with_info &block) = default;
block_id_type block_id;
public_key_type signing_key;
vector<transaction_id_type> transaction_ids;
};
struct order {
double price;
double quote;
@ -128,6 +139,14 @@ struct gpos_info {
share_type account_vested_balance;
};
struct version_info {
string version;
string git_revision;
string built;
string openssl;
string boost;
};
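// Returned by database_api::get_version_info(); all fields are plain strings, and the struct is
// reflected for RPC serialization via FC_REFLECT further down in this header.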
/**
* @brief The database_api class implements the RPC API for the chain database.
*
@ -179,10 +198,10 @@ public:
optional<block_header> get_block_header(uint32_t block_num) const;
/**
* @brief Retrieve multiple block headers by block numbers
* @param block_nums Vector containing the heights of the blocks whose headers should be returned
* @return Map of block numbers to the corresponding headers; a header is null if no matching block was found
*/
map<uint32_t, optional<block_header>> get_block_header_batch(const vector<uint32_t> block_nums) const;
/**
@ -192,6 +211,13 @@ public:
*/
optional<signed_block> get_block(uint32_t block_num) const;
/**
* @brief Retrieve a full, signed block, with some extra info
* @param block_num Height of the block to be returned
* @return the referenced block, or null if no matching block was found
*/
optional<signed_block_with_info> get_block2(uint32_t block_num) const;
/**
* @brief Retrieve a list of signed blocks
* @param block_num_from start
@ -216,6 +242,11 @@ public:
// Globals //
/////////////
/**
* @brief Retrieve the @ref version_info associated with the witness node
*/
version_info get_version_info() const;
/**
* @brief Retrieve the @ref chain_property_object associated with the chain
*/
@ -248,12 +279,12 @@ public:
vector<vector<account_id_type>> get_key_references(vector<public_key_type> key) const;
/**
* Determine whether a textual representation of a public key
* (in Base-58 format) is *currently* linked
* to any *registered* (i.e. non-stealth) account on the blockchain
* @param public_key Public key
* @return Whether a public key is known
*/
bool is_public_key_registered(string public_key) const;
//////////////
@ -558,6 +589,13 @@ public:
* @param account The ID of the account whose witness should be retrieved
* @return The witness object, or null if the account does not have a witness
*/
fc::optional<witness_object> get_witness_by_account_id(account_id_type account) const;
/**
* @brief Get the witness owned by a given account
* @param account_name_or_id The ID or name of the account whose witness should be retrieved
* @return The witness object, or null if the account does not have a witness
*/
fc::optional<witness_object> get_witness_by_account(const std::string account_name_or_id) const;
/**
@ -586,6 +624,13 @@ public:
*/
vector<optional<committee_member_object>> get_committee_members(const vector<committee_member_id_type> &committee_member_ids) const;
/**
* @brief Get the committee_member owned by a given account
* @param account The ID of the account whose committee_member should be retrieved
* @return The committee_member object, or null if the account does not have a committee_member
*/
fc::optional<committee_member_object> get_committee_member_by_account_id(account_id_type account) const;
/**
* @brief Get the committee_member owned by a given account
* @param account_id_or_name The ID or name of the account whose committee_member should be retrieved
@ -601,6 +646,11 @@ public:
*/
map<string, committee_member_id_type> lookup_committee_member_accounts(const string &lower_bound_name, uint32_t limit) const;
/**
* @brief Get the total number of committee_members registered with the blockchain
*/
uint64_t get_committee_member_count() const;
/////////////////
// SON members //
/////////////////
@ -619,7 +669,14 @@ public:
* @param account The ID of the account whose SON should be retrieved
* @return The SON object, or null if the account does not have a SON
*/
fc::optional<son_object> get_son_by_account(account_id_type account) const;
fc::optional<son_object> get_son_by_account_id(account_id_type account) const;
/**
* @brief Get the SON owned by a given account
* @param account_id_or_name The ID or name of the account whose SON should be retrieved
* @return The SON object, or null if the account does not have a SON
*/
fc::optional<son_object> get_son_by_account(const std::string account_id_or_name) const;
/**
* @brief Get names and IDs for registered SONs
@ -634,6 +691,32 @@ public:
*/
uint64_t get_son_count() const;
/**
* @brief Get the lists of active SONs for all sidechains
* @return Lists of active SONs, keyed by sidechain type
*/
flat_map<sidechain_type, vector<son_sidechain_info>> get_active_sons();
/**
* @brief Get the list of active SONs for a given sidechain
* @param sidechain Sidechain type [bitcoin|ethereum|hive]
* @return List of active SONs
*/
vector<son_sidechain_info> get_active_sons_by_sidechain(sidechain_type sidechain);
/**
* @brief Get SON network status
* @return SON network status descriptions for each sidechain type, keyed by SON ID
*/
map<sidechain_type, map<son_id_type, string>> get_son_network_status();
/**
* @brief Get SON network status
* @param sidechain Sidechain type [bitcoin|ethereum|hive]
* @return SON network status description for a given sidechain type
*/
map<son_id_type, string> get_son_network_status_by_sidechain(sidechain_type sidechain);
/////////////////////////
// SON Wallets //
/////////////////////////
@ -698,15 +781,46 @@ public:
*/
uint64_t get_sidechain_addresses_count() const;
/// WORKERS
/////////////
// Workers //
/////////////
/**
* @brief Get a list of workers by ID
* @param worker_ids IDs of the workers to retrieve
* @return The workers corresponding to the provided IDs
*
* This function has semantics identical to @ref get_objects
*/
vector<optional<worker_object>> get_workers(const vector<worker_id_type> &worker_ids) const;
/**
* @brief Return the worker objects associated with this account.
* @param account The ID of the account whose workers should be retrieved
* @return The worker objects owned by the account, or an empty list if the account has no workers
*/
vector<worker_object> get_workers_by_account_id(account_id_type account) const;
/**
* @brief Return the worker objects associated with this account.
* @param account_id_or_name The ID or name of the account whose workers should be retrieved
* @return The worker objects owned by the account, or an empty list if the account has no workers
*/
vector<worker_object> get_workers_by_account(const std::string account_id_or_name) const;
/**
* @brief Get names and IDs for registered workers
* @param lower_bound_name Lower bound of the first name to return
* @param limit Maximum number of results to return -- must not exceed 1000
* @return Map of worker names to corresponding IDs
*/
map<string, worker_id_type> lookup_worker_accounts(const string &lower_bound_name, uint32_t limit) const;
/**
* @brief Get the total number of workers registered with the blockchain
*/
uint64_t get_worker_count() const;
///////////
// Votes //
///////////
@ -721,6 +835,39 @@ public:
*/
vector<variant> lookup_vote_ids(const vector<vote_id_type> &votes) const;
/**
* @brief Get the list of vote IDs the given account votes for
* @param account_name_or_id ID or name of the account to get votes for
* @return The list of vote_id_type the account votes for
*
*/
vector<vote_id_type> get_votes_ids(const string &account_name_or_id) const;
/**
* @brief Return the objects the given account votes for
* @param account_name_or_id ID or name of the account to get votes for
* @return The votes_info for account_name_or_id
*
*/
votes_info get_votes(const string &account_name_or_id) const;
/**
*
* @brief Get a list of accounts that vote for vote_id
* @param vote_id The vote ID to look up voters for
* @return The accounts that vote for the provided ID
*
*/
vector<account_object> get_voters_by_id(const vote_id_type &vote_id) const;
/**
* @brief Return the accounts that vote for account_name_or_id
* @param account_name_or_id ID or name of the account to get voters for
* @return The voters_info for account_name_or_id
*
*/
voters_info get_voters(const string &account_name_or_id) const;
////////////////////////////
// Authority / validation //
////////////////////////////
@ -772,15 +919,6 @@ public:
*/
vector<proposal_object> get_proposed_transactions(const std::string account_id_or_name) const;
//////////////////////
// Blinded balances //
//////////////////////
/**
* @return the set of blinded balance objects by commitment ID
*/
vector<blinded_balance_object> get_blinded_balances(const flat_set<commitment_type> &commitments) const;
/////////////////
// Tournaments //
/////////////////
@ -905,14 +1043,25 @@ public:
* @brief Returns a list of all available NFTs
* @param lower_id ID of the first NFT to return
* @param limit Maximum number of results to return
* @return List of all available NFTs
*/
vector<nft_object> nft_get_all_tokens() const;
vector<nft_object> nft_get_all_tokens(const nft_id_type lower_id, uint32_t limit) const;
/**
* @brief Returns NFTs owned by owner
* @param owner NFT owner
* @param lower_id ID of the first NFT to return
* @param limit Maximum number of results to return
* @return List of NFTs owned by owner
*/
vector<nft_object> nft_get_tokens_by_owner(const account_id_type owner) const;
vector<nft_object> nft_get_tokens_by_owner(const account_id_type owner, const nft_id_type lower_id, uint32_t limit) const;
/**
* @brief Returns NFT metadata owned by owner
* @param owner NFT owner
* @param lower_id ID of the first NFT metadata to return
* @param limit Maximum number of results to return
* @return List of NFT metadata objects owned by owner
*/
vector<nft_metadata_object> nft_get_metadata_by_owner(const account_id_type owner, const nft_metadata_id_type lower_id, uint32_t limit) const;
//////////////////
// MARKET PLACE //
@ -942,12 +1091,15 @@ extern template class fc::api<graphene::app::database_api>;
// clang-format off
FC_REFLECT_DERIVED(graphene::app::signed_block_with_info, (graphene::chain::signed_block), (block_id)(signing_key)(transaction_ids));
FC_REFLECT(graphene::app::order, (price)(quote)(base));
FC_REFLECT(graphene::app::order_book, (base)(quote)(bids)(asks));
FC_REFLECT(graphene::app::market_ticker, (base)(quote)(latest)(lowest_ask)(highest_bid)(percent_change)(base_volume)(quote_volume));
FC_REFLECT(graphene::app::market_volume, (base)(quote)(base_volume)(quote_volume));
FC_REFLECT(graphene::app::market_trade, (date)(price)(amount)(value));
FC_REFLECT(graphene::app::gpos_info, (vesting_factor)(award)(total_amount)(current_subperiod)(last_voted_time)(allowed_withdraw_amount)(account_vested_balance));
FC_REFLECT(graphene::app::version_info, (version)(git_revision)(built)(openssl)(boost));
FC_API(graphene::app::database_api,
// Objects
@ -963,11 +1115,13 @@ FC_API(graphene::app::database_api,
(get_block_header)
(get_block_header_batch)
(get_block)
(get_block2)
(get_blocks)
(get_transaction)
(get_recent_transaction_by_id)
// Globals
(get_version_info)
(get_chain_properties)
(get_global_properties)
(get_config)
@ -1033,20 +1187,28 @@ FC_API(graphene::app::database_api,
// Witnesses
(get_witnesses)
(get_witness_by_account_id)
(get_witness_by_account)
(lookup_witness_accounts)
(get_witness_count)
// Committee members
(get_committee_members)
(get_committee_member_by_account_id)
(get_committee_member_by_account)
(lookup_committee_member_accounts)
(get_committee_member_count)
// SON members
(get_sons)
(get_son_by_account_id)
(get_son_by_account)
(lookup_son_accounts)
(get_son_count)
(get_active_sons)
(get_active_sons_by_sidechain)
(get_son_network_status)
(get_son_network_status_by_sidechain)
// SON wallets
(get_active_son_wallet)
@ -1060,10 +1222,19 @@ FC_API(graphene::app::database_api,
(get_sidechain_address_by_account_and_sidechain)
(get_sidechain_addresses_count)
// workers
// Workers
(get_workers)
(get_workers_by_account_id)
(get_workers_by_account)
(lookup_worker_accounts)
(get_worker_count)
// Votes
(lookup_vote_ids)
(get_votes_ids)
(get_votes)
(get_voters_by_id)
(get_voters)
// Authority / validation
(get_transaction_hex)
@ -1078,9 +1249,6 @@ FC_API(graphene::app::database_api,
// Proposed transactions
(get_proposed_transactions)
// Blinded balances
(get_blinded_balances)
// Tournaments
(get_tournaments_in_state)
(get_tournaments_by_state)
@ -1111,6 +1279,7 @@ FC_API(graphene::app::database_api,
(nft_token_of_owner_by_index)
(nft_get_all_tokens)
(nft_get_tokens_by_owner)
(nft_get_metadata_by_owner)
// Marketplace
(list_offers)

View file

@ -53,7 +53,54 @@ void verify_authority_accounts( const database& db, const authority& a )
}
}
void verify_account_votes( const database& db, const account_options& options )
// Copies the num_son values from the origin into the destination for the sidechains present in the origin.
// Keeps the destination's num_son values for the sidechains that are not present in the origin.
// Returns false if an error is detected (the origin references a sidechain that is not currently active).
bool merge_num_sons( flat_map<sidechain_type, uint16_t>& destination,
const flat_map<sidechain_type, uint16_t>& origin,
fc::optional<time_point_sec> head_block_time = {})
{
const auto active_sidechains = head_block_time.valid() ? active_sidechain_types(*head_block_time) : all_sidechain_types;
bool success = true;
for (const auto &ns : origin)
{
destination[ns.first] = ns.second;
if (active_sidechains.find(ns.first) == active_sidechains.end())
{
success = false;
}
}
return success;
}
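// Illustrative example (not part of the change set): with destination = {bitcoin: 5, hive: 3} and
// origin = {bitcoin: 7}, merge_num_sons(destination, origin) leaves destination = {bitcoin: 7, hive: 3}.
// The call returns true only while every sidechain named in the origin is currently active; an entry for
// an inactive sidechain still overwrites the destination value but makes the function return false.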
flat_map<sidechain_type, uint16_t> count_SON_votes_per_sidechain( const flat_set<vote_id_type>& votes )
{
flat_map<sidechain_type, uint16_t> SON_votes_per_sidechain = account_options::ext::empty_num_son();
for (const auto &vote : votes)
{
switch (vote.type())
{
case vote_id_type::son_bitcoin:
SON_votes_per_sidechain[sidechain_type::bitcoin]++;
break;
case vote_id_type::son_hive:
SON_votes_per_sidechain[sidechain_type::hive]++;
break;
case vote_id_type::son_ethereum:
SON_votes_per_sidechain[sidechain_type::ethereum]++;
break;
default:
break;
}
}
return SON_votes_per_sidechain;
}
void verify_account_votes( const database& db, const account_options& options, fc::optional<account_object> account = {} )
{
// ensure account's votes satisfy requirements
// NB only the part of vote checking that requires chain state is here,
@ -62,10 +109,47 @@ void verify_account_votes( const database& db, const account_options& options )
const auto& gpo = db.get_global_properties();
const auto& chain_params = gpo.parameters;
FC_ASSERT( db.find_object(options.voting_account), "Invalid proxy account specified." );
FC_ASSERT( options.num_witness <= chain_params.maximum_witness_count,
"Voted for more witnesses than currently allowed (${c})", ("c", chain_params.maximum_witness_count) );
FC_ASSERT( options.num_committee <= chain_params.maximum_committee_count,
"Voted for more committee members than currently allowed (${c})", ("c", chain_params.maximum_committee_count) );
FC_ASSERT( chain_params.extensions.value.maximum_son_count.valid() , "Invalid maximum son count" );
flat_map<sidechain_type, uint16_t> merged_num_sons = account_options::ext::empty_num_son();
// Merge with existing account if exists
if ( account.valid() && account->options.extensions.value.num_son.valid())
{
merge_num_sons( merged_num_sons, *account->options.extensions.value.num_son, db.head_block_time() );
}
// Apply update operation on top
if ( options.extensions.value.num_son.valid() )
{
merge_num_sons( merged_num_sons, *options.extensions.value.num_son, db.head_block_time() );
}
for(const auto& num_sons : merged_num_sons)
{
FC_ASSERT( num_sons.second <= *chain_params.extensions.value.maximum_son_count,
"Voted for more sons than currently allowed (${c})", ("c", *chain_params.extensions.value.maximum_son_count) );
}
// Count the votes for SONs and confirm that the account did not vote for less SONs than num_son
flat_map<sidechain_type, uint16_t> SON_votes_per_sidechain = count_SON_votes_per_sidechain(options.votes);
for (const auto& number_of_votes : SON_votes_per_sidechain)
{
// The number of votes in account_options is also checked in account_options::do_evaluate,
// but there the value is checked before merging num_sons, so it has to be checked again here
const auto sidechain = number_of_votes.first;
FC_ASSERT( number_of_votes.second >= merged_num_sons[sidechain],
"Voted for less sons than specified in num_son (votes ${v} < num_son ${ns}) for sidechain ${s}",
("v", number_of_votes.second) ("ns", merged_num_sons[sidechain]) ("s", sidechain) );
}
FC_ASSERT( db.find_object(options.voting_account), "Invalid proxy account specified." );
uint32_t max_vote_id = gpo.next_available_vote_id;
@ -179,6 +263,13 @@ object_id_type account_create_evaluator::do_apply( const account_create_operatio
obj.owner = o.owner;
obj.active = o.active;
obj.options = o.options;
obj.options.extensions.value.num_son = account_options::ext::empty_num_son();
if ( o.options.extensions.value.num_son.valid() )
{
merge_num_sons( *obj.options.extensions.value.num_son, *o.options.extensions.value.num_son );
}
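// Note: newly created accounts therefore always carry an explicit per-sidechain num_son map, defaulting
// to zero for any sidechain the creator did not specify.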
obj.statistics = d.create<account_statistics_object>([&obj](account_statistics_object& s){
s.owner = obj.id;
s.name = obj.name;
@ -278,7 +369,7 @@ void_result account_update_evaluator::do_evaluate( const account_update_operatio
acnt = &o.account(d);
if( o.new_options.valid() )
verify_account_votes( d, *o.new_options );
verify_account_votes( d, *o.new_options, *acnt );
return void_result();
} FC_CAPTURE_AND_RETHROW( (o) ) }
@ -317,7 +408,31 @@ void_result account_update_evaluator::do_apply( const account_update_operation&
a.active = *o.active;
a.top_n_control_flags = 0;
}
if( o.new_options ) a.options = *o.new_options;
// New num_son structure initialized to 0
flat_map<sidechain_type, uint16_t> new_num_son = account_options::ext::empty_num_son();
// If num_son of the existing object is valid, merge the existing data first
if ( a.options.extensions.value.num_son.valid() )
{
merge_num_sons( new_num_son, *a.options.extensions.value.num_son );
}
// If num_son of the operation is valid, merge it on top of the existing data
if ( o.new_options )
{
const auto new_options = *o.new_options;
if ( new_options.extensions.value.num_son.valid() )
{
merge_num_sons( new_num_son, *new_options.extensions.value.num_son );
}
a.options = *o.new_options;
}
a.options.extensions.value.num_son = new_num_son;
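// At this point a.options holds the caller-supplied options (when provided), with num_son rebuilt as the
// merge of the account's previous per-sidechain counts and the counts supplied by this operation.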
if( o.extensions.value.owner_special_authority.valid() )
{
a.owner_special_authority = *(o.extensions.value.owner_special_authority);

View file

@ -541,7 +541,7 @@ void betting_market_group_object::dispatch_new_status(database& db, betting_mark
} } // graphene::chain
namespace fc {
// Manually reflect betting_market_group_object to variant to properly reflect "state"
void to_variant(const graphene::chain::betting_market_group_object& betting_market_group_obj, fc::variant& v, uint32_t max_depth)
{

View file

@ -466,7 +466,7 @@ void betting_market_object::on_canceled_event(database& db)
} } // graphene::chain
namespace fc {
// Manually reflect betting_market_object to variant to properly reflect "state"
void to_variant(const graphene::chain::betting_market_object& event_obj, fc::variant& v, uint32_t max_depth)
{
@ -493,4 +493,3 @@ namespace fc {
const_cast<int*>(event_obj.my->state_machine.current_state())[0] = (int)status;
}
} //end namespace fc

View file

@ -33,149 +33,163 @@ namespace graphene { namespace chain {
void_result transfer_to_blind_evaluator::do_evaluate( const transfer_to_blind_operation& o )
{ try {
const auto& d = db();
if( d.head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME )
{
const auto& atype = o.amount.asset_id(d);
FC_ASSERT( atype.allow_confidential() );
FC_ASSERT( !atype.is_transfer_restricted() );
FC_ASSERT( !(atype.options.flags & white_list) );
const auto& atype = o.amount.asset_id(db());
FC_ASSERT( atype.allow_confidential() );
FC_ASSERT( !atype.is_transfer_restricted() );
FC_ASSERT( !(atype.options.flags & white_list) );
for( const auto& out : o.outputs )
{
for( const auto& a : out.owner.account_auths )
a.first(d); // verify all accounts exist and are valid
}
}
for( const auto& out : o.outputs )
{
for( const auto& a : out.owner.account_auths )
a.first(d); // verify all accounts exist and are valid
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (o) ) }
void_result transfer_to_blind_evaluator::do_apply( const transfer_to_blind_operation& o )
{ try {
db().adjust_balance( o.from, -o.amount );
if( db().head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
db().adjust_balance(o.from, -o.amount);
const auto& add = o.amount.asset_id(db()).dynamic_asset_data_id(db()); // verify fee is a legit asset
db().modify( add, [&]( asset_dynamic_data_object& obj ){
obj.confidential_supply += o.amount.amount;
FC_ASSERT( obj.confidential_supply >= 0 );
});
for( const auto& out : o.outputs )
{
db().create<blinded_balance_object>( [&]( blinded_balance_object& obj ){
obj.asset_id = o.amount.asset_id;
obj.owner = out.owner;
obj.commitment = out.commitment;
});
}
return void_result();
const auto &add = o.amount.asset_id(db()).dynamic_asset_data_id(db()); // verify fee is a legit asset
db().modify(add, [&](asset_dynamic_data_object &obj) {
obj.confidential_supply += o.amount.amount;
FC_ASSERT(obj.confidential_supply >= 0);
});
for (const auto &out : o.outputs) {
db().create<blinded_balance_object>([&](blinded_balance_object &obj) {
obj.asset_id = o.amount.asset_id;
obj.owner = out.owner;
obj.commitment = out.commitment;
});
}
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (o) ) }
void transfer_to_blind_evaluator::pay_fee()
{
if( db().head_block_time() >= HARDFORK_563_TIME )
pay_fba_fee( fba_accumulator_id_transfer_to_blind );
else
generic_evaluator::pay_fee();
const auto& d = db();
if( d.head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
if (d.head_block_time() >= HARDFORK_563_TIME)
pay_fba_fee(fba_accumulator_id_transfer_to_blind);
else
generic_evaluator::pay_fee();
}
}
void_result transfer_from_blind_evaluator::do_evaluate( const transfer_from_blind_operation& o )
{ try {
const auto& d = db();
o.fee.asset_id(d); // verify fee is a legit asset
const auto& bbi = d.get_index_type<blinded_balance_index>();
const auto& cidx = bbi.indices().get<by_commitment>();
for( const auto& in : o.inputs )
{
auto itr = cidx.find( in.commitment );
FC_ASSERT( itr != cidx.end() );
FC_ASSERT( itr->asset_id == o.fee.asset_id );
FC_ASSERT( itr->owner == in.owner );
}
return void_result();
const auto& d = db();
if( d.head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
o.fee.asset_id(d); // verify fee is a legit asset
const auto &bbi = d.get_index_type<blinded_balance_index>();
const auto &cidx = bbi.indices().get<by_commitment>();
for (const auto &in : o.inputs) {
auto itr = cidx.find(in.commitment);
FC_ASSERT(itr != cidx.end());
FC_ASSERT(itr->asset_id == o.fee.asset_id);
FC_ASSERT(itr->owner == in.owner);
}
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (o) ) }
void_result transfer_from_blind_evaluator::do_apply( const transfer_from_blind_operation& o )
{ try {
db().adjust_balance( o.fee_payer(), o.fee );
db().adjust_balance( o.to, o.amount );
const auto& bbi = db().get_index_type<blinded_balance_index>();
const auto& cidx = bbi.indices().get<by_commitment>();
for( const auto& in : o.inputs )
{
auto itr = cidx.find( in.commitment );
FC_ASSERT( itr != cidx.end() );
db().remove( *itr );
}
const auto& add = o.amount.asset_id(db()).dynamic_asset_data_id(db()); // verify fee is a legit asset
db().modify( add, [&]( asset_dynamic_data_object& obj ){
obj.confidential_supply -= o.amount.amount + o.fee.amount;
FC_ASSERT( obj.confidential_supply >= 0 );
});
return void_result();
if( db().head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
db().adjust_balance(o.fee_payer(), o.fee);
db().adjust_balance(o.to, o.amount);
const auto &bbi = db().get_index_type<blinded_balance_index>();
const auto &cidx = bbi.indices().get<by_commitment>();
for (const auto &in : o.inputs) {
auto itr = cidx.find(in.commitment);
FC_ASSERT(itr != cidx.end());
db().remove(*itr);
}
const auto &add = o.amount.asset_id(db()).dynamic_asset_data_id(db()); // verify fee is a legit asset
db().modify(add, [&](asset_dynamic_data_object &obj) {
obj.confidential_supply -= o.amount.amount + o.fee.amount;
FC_ASSERT(obj.confidential_supply >= 0);
});
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (o) ) }
void transfer_from_blind_evaluator::pay_fee()
{
if( db().head_block_time() >= HARDFORK_563_TIME )
pay_fba_fee( fba_accumulator_id_transfer_from_blind );
else
generic_evaluator::pay_fee();
const auto& d = db();
if( d.head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
if (d.head_block_time() >= HARDFORK_563_TIME)
pay_fba_fee(fba_accumulator_id_transfer_from_blind);
else
generic_evaluator::pay_fee();
}
}
void_result blind_transfer_evaluator::do_evaluate( const blind_transfer_operation& o )
{ try {
const auto& d = db();
o.fee.asset_id(db()); // verify fee is a legit asset
const auto& bbi = db().get_index_type<blinded_balance_index>();
const auto& cidx = bbi.indices().get<by_commitment>();
for( const auto& out : o.outputs )
{
for( const auto& a : out.owner.account_auths )
a.first(d); // verify all accounts exist and are valid
}
for( const auto& in : o.inputs )
{
auto itr = cidx.find( in.commitment );
GRAPHENE_ASSERT( itr != cidx.end(), blind_transfer_unknown_commitment, "", ("commitment",in.commitment) );
FC_ASSERT( itr->asset_id == o.fee.asset_id );
FC_ASSERT( itr->owner == in.owner );
}
return void_result();
const auto& d = db();
if( d.head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
o.fee.asset_id(d); // verify fee is a legit asset
const auto &bbi = d.get_index_type<blinded_balance_index>();
const auto &cidx = bbi.indices().get<by_commitment>();
for (const auto &out : o.outputs) {
for (const auto &a : out.owner.account_auths)
a.first(d); // verify all accounts exist and are valid
}
for (const auto &in : o.inputs) {
auto itr = cidx.find(in.commitment);
GRAPHENE_ASSERT(itr != cidx.end(), blind_transfer_unknown_commitment, "", ("commitment", in.commitment));
FC_ASSERT(itr->asset_id == o.fee.asset_id);
FC_ASSERT(itr->owner == in.owner);
}
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (o) ) }
void_result blind_transfer_evaluator::do_apply( const blind_transfer_operation& o )
{ try {
db().adjust_balance( o.fee_payer(), o.fee ); // deposit the fee to the temp account
const auto& bbi = db().get_index_type<blinded_balance_index>();
const auto& cidx = bbi.indices().get<by_commitment>();
for( const auto& in : o.inputs )
{
auto itr = cidx.find( in.commitment );
GRAPHENE_ASSERT( itr != cidx.end(), blind_transfer_unknown_commitment, "", ("commitment",in.commitment) );
db().remove( *itr );
}
for( const auto& out : o.outputs )
{
db().create<blinded_balance_object>( [&]( blinded_balance_object& obj ){
obj.asset_id = o.fee.asset_id;
obj.owner = out.owner;
obj.commitment = out.commitment;
});
}
const auto& add = o.fee.asset_id(db()).dynamic_asset_data_id(db());
db().modify( add, [&]( asset_dynamic_data_object& obj ){
obj.confidential_supply -= o.fee.amount;
FC_ASSERT( obj.confidential_supply >= 0 );
});
return void_result();
if( db().head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
db().adjust_balance(o.fee_payer(), o.fee); // deposit the fee to the temp account
const auto &bbi = db().get_index_type<blinded_balance_index>();
const auto &cidx = bbi.indices().get<by_commitment>();
for (const auto &in : o.inputs) {
auto itr = cidx.find(in.commitment);
GRAPHENE_ASSERT(itr != cidx.end(), blind_transfer_unknown_commitment, "", ("commitment", in.commitment));
db().remove(*itr);
}
for (const auto &out : o.outputs) {
db().create<blinded_balance_object>([&](blinded_balance_object &obj) {
obj.asset_id = o.fee.asset_id;
obj.owner = out.owner;
obj.commitment = out.commitment;
});
}
const auto &add = o.fee.asset_id(db()).dynamic_asset_data_id(db());
db().modify(add, [&](asset_dynamic_data_object &obj) {
obj.confidential_supply -= o.fee.amount;
FC_ASSERT(obj.confidential_supply >= 0);
});
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (o) ) }
void blind_transfer_evaluator::pay_fee()
{
if( db().head_block_time() >= HARDFORK_563_TIME )
pay_fba_fee( fba_accumulator_id_blind_transfer );
else
generic_evaluator::pay_fee();
const auto& d = db();
if( d.head_block_time() < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
if (d.head_block_time() >= HARDFORK_563_TIME)
pay_fba_fee(fba_accumulator_id_blind_transfer);
else
generic_evaluator::pay_fee();
}
}
} } // graphene::chain

View file

@ -40,8 +40,10 @@
#include <graphene/chain/exceptions.hpp>
#include <graphene/chain/evaluator.hpp>
#include <graphene/chain/witness_schedule_object.hpp>
#include <graphene/db/object_database.hpp>
#include <fc/crypto/digest.hpp>
#include <boost/filesystem.hpp>
namespace {
@ -160,10 +162,13 @@ void database::check_transaction_for_duplicated_operations(const signed_transact
existed_operations_digests.insert( proposed_operations_digests.begin(), proposed_operations_digests.end() );
});
for (auto& pending_transaction: _pending_tx)
{
auto proposed_operations_digests = gather_proposed_operations_digests(pending_transaction);
existed_operations_digests.insert(proposed_operations_digests.begin(), proposed_operations_digests.end());
const std::lock_guard<std::mutex> pending_tx_lock{_pending_tx_mutex};
for (auto &pending_transaction : _pending_tx)
{
auto proposed_operations_digests = gather_proposed_operations_digests(pending_transaction);
existed_operations_digests.insert(proposed_operations_digests.begin(), proposed_operations_digests.end());
}
}
auto proposed_operations_digests = gather_proposed_operations_digests(trx);
@ -185,7 +190,12 @@ bool database::push_block(const signed_block& new_block, uint32_t skip)
bool result;
detail::with_skip_flags( *this, skip, [&]()
{
detail::without_pending_transactions( *this, std::move(_pending_tx),
std::vector<processed_transaction> pending_tx = [this] {
const std::lock_guard<std::mutex> pending_tx_lock{_pending_tx_mutex};
return std::move(_pending_tx);
}();
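// The pending list is moved out under _pending_tx_mutex so that concurrent push_transaction() calls cannot
// observe a half-moved vector; without_pending_transactions() then works on the extracted vector.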
detail::without_pending_transactions( *this, std::move(pending_tx),
[&]()
{
result = _push_block(new_block);
@ -196,6 +206,9 @@ bool database::push_block(const signed_block& new_block, uint32_t skip)
bool database::_push_block(const signed_block& new_block)
{ try {
boost::filesystem::space_info si = boost::filesystem::space(get_data_dir());
FC_ASSERT((si.available) > 104857600, "Rejecting block due to low disk space"); // 104857600 bytes = 100 MB
uint32_t skip = get_node_properties().skip_flags;
const auto now = fc::time_point::now().sec_since_epoch();
@ -382,17 +395,26 @@ processed_transaction database::_push_transaction( const signed_transaction& trx
{
// If this is the first transaction pushed after applying a block, start a new undo session.
// This allows us to quickly rewind to the clean state of the head block, in case a new block arrives.
if( !_pending_tx_session.valid() )
_pending_tx_session = _undo_db.start_undo_session();
{
const std::lock_guard<std::mutex> pending_tx_session_lock{_pending_tx_session_mutex};
if (!_pending_tx_session.valid()) {
const std::lock_guard<std::mutex> undo_db_lock{_undo_db_mutex};
_pending_tx_session = _undo_db.start_undo_session();
}
}
// Create a temporary undo session as a child of _pending_tx_session.
// The temporary session will be discarded by the destructor if
// _apply_transaction fails. If we make it to merge(), we
// apply the changes.
const std::lock_guard<std::mutex> undo_db_lock{_undo_db_mutex};
auto temp_session = _undo_db.start_undo_session();
auto processed_trx = _apply_transaction( trx );
_pending_tx.push_back(processed_trx);
auto processed_trx = _apply_transaction(trx);
{
const std::lock_guard<std::mutex> pending_tx_lock{_pending_tx_mutex};
_pending_tx.push_back(processed_trx);
}
// notify_changed_objects();
// The transaction applied successfully. Merge its changes into the pending block session.
@ -405,6 +427,7 @@ processed_transaction database::_push_transaction( const signed_transaction& trx
processed_transaction database::validate_transaction( const signed_transaction& trx )
{
const std::lock_guard<std::mutex> undo_db_lock{_undo_db_mutex};
auto session = _undo_db.start_undo_session();
return _apply_transaction( trx );
}
@ -433,7 +456,12 @@ processed_transaction database::push_proposal(const proposal_object& proposal)
{
for( size_t i=old_applied_ops_size,n=_applied_ops.size(); i<n; i++ )
{
ilog( "removing failed operation from applied_ops: ${op}", ("op", *(_applied_ops[i])) );
if(_applied_ops[i].valid()) {
ilog("removing failed operation from applied_ops: ${op}", ("op", *(_applied_ops[i])));
}
else{
ilog("Can't remove failed operation from applied_ops (operation is not valid), op_id : ${op_id}", ("op_id", i));
}
_applied_ops[i].reset();
}
}
@ -499,47 +527,52 @@ signed_block database::_generate_block(
// the value of the "when" variable is known, which means we need to
// re-apply pending transactions in this method.
//
_pending_tx_session.reset();
_pending_tx_session = _undo_db.start_undo_session();
{
const std::lock_guard<std::mutex> pending_tx_session_lock{_pending_tx_session_mutex};
_pending_tx_session.reset();
_pending_tx_session = _undo_db.start_undo_session();
}
uint64_t postponed_tx_count = 0;
// pop pending state (reset to head block state)
for( const processed_transaction& tx : _pending_tx )
{
size_t new_total_size = total_block_size + fc::raw::pack_size( tx );
const std::lock_guard<std::mutex> pending_tx_lock{_pending_tx_mutex};
for (const processed_transaction &tx : _pending_tx) {
size_t new_total_size = total_block_size + fc::raw::pack_size(tx);
// postpone transaction if it would make block too big
if( new_total_size >= maximum_block_size )
{
postponed_tx_count++;
continue;
}
// postpone transaction if it would make block too big
if (new_total_size >= maximum_block_size) {
postponed_tx_count++;
continue;
}
try
{
auto temp_session = _undo_db.start_undo_session();
processed_transaction ptx = _apply_transaction( tx );
temp_session.merge();
try {
auto temp_session = _undo_db.start_undo_session();
processed_transaction ptx = _apply_transaction(tx);
temp_session.merge();
// We have to recompute pack_size(ptx) because it may be different
// than pack_size(tx) (i.e. if one or more results increased
// their size)
total_block_size += fc::raw::pack_size( ptx );
pending_block.transactions.push_back( ptx );
}
catch ( const fc::exception& e )
{
// Do nothing, transaction will not be re-applied
wlog( "Transaction was not processed while generating block due to ${e}", ("e", e) );
wlog( "The transaction was ${t}", ("t", tx) );
// We have to recompute pack_size(ptx) because it may be different
// than pack_size(tx) (i.e. if one or more results increased
// their size)
total_block_size += fc::raw::pack_size(ptx);
pending_block.transactions.push_back(ptx);
} catch (const fc::exception &e) {
// Do nothing, transaction will not be re-applied
wlog("Transaction was not processed while generating block due to ${e}", ("e", e));
wlog("The transaction was ${t}", ("t", tx));
}
}
}
if( postponed_tx_count > 0 )
{
wlog( "Postponed ${n} transactions due to block size limit", ("n", postponed_tx_count) );
}
_pending_tx_session.reset();
{
const std::lock_guard<std::mutex> pending_tx_session_lock{_pending_tx_session_mutex};
_pending_tx_session.reset();
}
// We have temporarily broken the invariant that
// _pending_tx_session is the result of applying _pending_tx, as
@ -587,7 +620,11 @@ signed_block database::_generate_block(
*/
void database::pop_block()
{ try {
_pending_tx_session.reset();
{
const std::lock_guard<std::mutex> pending_tx_session_lock{_pending_tx_session_mutex};
_pending_tx_session.reset();
}
auto head_id = head_block_id();
optional<signed_block> head_block = fetch_block_by_id( head_id );
GRAPHENE_ASSERT( head_block.valid(), pop_empty_chain, "there are no blocks to pop" );
@ -601,6 +638,8 @@ void database::pop_block()
void database::clear_pending()
{ try {
const std::lock_guard<std::mutex> pending_tx_lock{_pending_tx_mutex};
const std::lock_guard<std::mutex> pending_tx_session_lock{_pending_tx_session_mutex};
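// Holding both mutexes makes clearing the pending list and resetting its undo session atomic with respect
// to concurrent transaction pushes.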
assert( (_pending_tx.size() == 0) || _pending_tx_session.valid() );
_pending_tx.clear();
_pending_tx_session.reset();
@ -619,7 +658,7 @@ uint32_t database::push_applied_operation( const operation& op )
void database::set_applied_operation_result( uint32_t op_id, const operation_result& result )
{
assert( op_id < _applied_ops.size() );
if( _applied_ops[op_id] )
if( _applied_ops[op_id].valid() )
_applied_ops[op_id]->result = result;
else
{
@ -700,8 +739,11 @@ void database::_apply_block( const signed_block& next_block )
if (global_props.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SCHEDULED_ALGORITHM) {
update_witness_schedule(next_block);
if(global_props.active_sons.size() > 0) {
update_son_schedule(next_block);
for(const auto& active_sons : global_props.active_sons) {
if(!active_sons.second.empty()) {
update_son_schedule(active_sons.first, next_block);
}
}
}
@ -716,7 +758,7 @@ void database::_apply_block( const signed_block& next_block )
check_ending_lotteries();
check_ending_nft_lotteries();
create_block_summary(next_block);
place_delayed_bets(); // must happen after update_global_dynamic_data() updates the time
clear_expired_transactions();
@ -734,11 +776,15 @@ void database::_apply_block( const signed_block& next_block )
// TODO: figure out if we could collapse this function into
// update_global_dynamic_data() as perhaps these methods only need
// to be called for header validation?
update_maintenance_flag( maint_needed );
if (global_props.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SHUFFLED_ALGORITHM) {
update_witness_schedule();
if(global_props.active_sons.size() > 0) {
update_son_schedule();
for(const auto& active_sidechain_type : active_sidechain_types(dynamic_global_props.time)) {
if(global_props.active_sons.at(active_sidechain_type).size() > 0) {
update_son_schedule(active_sidechain_type);
}
}
}
@ -806,7 +852,7 @@ processed_transaction database::_apply_transaction(const signed_transaction& trx
return get_account_custom_authorities(id, op);
};
trx.verify_authority( chain_id, get_active, get_owner, get_custom,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(head_block_time()),
true,
get_global_properties().parameters.max_authority_depth );
}

View file

@ -222,17 +222,32 @@ std::set<son_id_type> database::get_sons_to_be_deregistered()
for( auto& son : son_idx )
{
if(son.status == son_status::in_maintenance)
bool need_to_be_deregistered = true;
for(const auto& status : son.statuses)
{
auto stats = son.statistics(*this);
// TODO : We need to add a function that returns if we can deregister SON
// i.e. with introduction of PW code, we have to make a decision if the SON
// is needed for release of funds from the PW
if(head_block_time() - stats.last_down_timestamp >= fc::seconds(get_global_properties().parameters.son_deregister_time()))
const auto& sidechain = status.first;
if(status.second != son_status::in_maintenance)
need_to_be_deregistered = false;
if(need_to_be_deregistered)
{
ret.insert(son.id);
auto stats = son.statistics(*this);
// TODO : We need to add a function that returns whether we can deregister a SON,
// i.e. with the introduction of the PW code we have to decide whether the SON
// is needed for the release of funds from the PW
if(stats.last_active_timestamp.contains(sidechain)) {
if (head_block_time() - stats.last_active_timestamp.at(sidechain) < fc::seconds(get_global_properties().parameters.son_deregister_time())) {
need_to_be_deregistered = false;
}
}
}
}
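// After the loop above, the SON is eligible for deregistration only if it is in maintenance on every
// sidechain it serves and has no activity recorded within son_deregister_time on any of them.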
if(need_to_be_deregistered)
{
ret.insert(son.id);
}
}
return ret;
}
@ -289,28 +304,50 @@ bool database::is_son_dereg_valid( son_id_type son_id )
return false;
}
return (son->status == son_status::in_maintenance &&
(head_block_time() - son->statistics(*this).last_down_timestamp >= fc::seconds(get_global_properties().parameters.son_deregister_time())));
bool status_son_dereg_valid = true;
for (const auto &active_sidechain_type : active_sidechain_types(head_block_time())) {
if(son->statuses.at(active_sidechain_type) != son_status::in_maintenance)
status_son_dereg_valid = false;
if(status_son_dereg_valid)
{
if(son->statistics(*this).last_active_timestamp.contains(active_sidechain_type)) {
if (head_block_time() - son->statistics(*this).last_active_timestamp.at(active_sidechain_type) < fc::seconds(get_global_properties().parameters.son_deregister_time())) {
status_son_dereg_valid = false;
}
}
}
}
return status_son_dereg_valid;
}
bool database::is_son_active( son_id_type son_id )
bool database::is_son_active( sidechain_type type, son_id_type son_id )
{
const auto& son_idx = get_index_type<son_index>().indices().get< by_id >();
auto son = son_idx.find( son_id );
if(son == son_idx.end())
{
if(son == son_idx.end()) {
return false;
}
const global_property_object& gpo = get_global_properties();
if(!gpo.active_sons.contains(type)) {
return false;
}
const auto& gpo_as = gpo.active_sons.at(type);
vector<son_id_type> active_son_ids;
active_son_ids.reserve(gpo.active_sons.size());
std::transform(gpo.active_sons.begin(), gpo.active_sons.end(),
active_son_ids.reserve(gpo_as.size());
std::transform(gpo_as.cbegin(), gpo_as.cend(),
std::inserter(active_son_ids, active_son_ids.end()),
[](const son_info& swi) {
[](const son_sidechain_info& swi) {
return swi.son_id;
});
if(active_son_ids.empty()) {
return false;
}
auto it_son = std::find(active_son_ids.begin(), active_son_ids.end(), son_id);
return (it_son != active_son_ids.end());
}
@ -349,23 +386,14 @@ vector<uint64_t> database::get_random_numbers(uint64_t minimum, uint64_t maximum
bool database::is_asset_creation_allowed(const string &symbol)
{
time_point_sec now = head_block_time();
std::unordered_set<std::string> post_son_hf_symbols = {"ETH", "USDT", "BNB", "ADA", "DOGE", "XRP", "USDC", "DOT", "UNI", "BUSD", "BCH", "LTC", "SOL", "LINK", "MATIC", "THETA",
"WBTC", "XLM", "ICP", "DAI", "VET", "ETC", "TRX", "FIL", "XMR", "EGR", "EOS", "SHIB", "AAVE", "CRO", "ALGO", "AMP", "BTCB",
"BSV", "KLAY", "CAKE", "FTT", "LEO", "XTZ", "TFUEL", "MIOTA", "LUNA", "NEO", "ATOM", "MKR", "FEI", "WBNB", "UST", "AVAX",
"STEEM", "HIVE", "HBD", "SBD", "BTS"};
if (symbol == "BTC")
{
if (now < HARDFORK_SON_TIME)
return false;
}
if (post_son_hf_symbols.find(symbol) != post_son_hf_symbols.end())
{
if (now >= HARDFORK_SON_TIME)
if (head_block_time() < HARDFORK_SON_TIME)
return false;
}
return true;
}
} }
}
}

View file

@ -329,10 +329,54 @@ void database::initialize_evaluators()
register_evaluator<random_number_store_evaluator>();
}
void database::initialize_hardforks()
{
_hardfork_times.emplace_back(HARDFORK_357_TIME);
_hardfork_times.emplace_back(HARDFORK_359_TIME);
_hardfork_times.emplace_back(HARDFORK_385_TIME);
_hardfork_times.emplace_back(HARDFORK_409_TIME);
_hardfork_times.emplace_back(HARDFORK_413_TIME);
_hardfork_times.emplace_back(HARDFORK_415_TIME);
_hardfork_times.emplace_back(HARDFORK_416_TIME);
_hardfork_times.emplace_back(HARDFORK_419_TIME);
_hardfork_times.emplace_back(HARDFORK_436_TIME);
_hardfork_times.emplace_back(HARDFORK_445_TIME);
_hardfork_times.emplace_back(HARDFORK_453_TIME);
_hardfork_times.emplace_back(HARDFORK_480_TIME);
_hardfork_times.emplace_back(HARDFORK_483_TIME);
_hardfork_times.emplace_back(HARDFORK_516_TIME);
_hardfork_times.emplace_back(HARDFORK_533_TIME);
_hardfork_times.emplace_back(HARDFORK_538_TIME);
_hardfork_times.emplace_back(HARDFORK_555_TIME);
_hardfork_times.emplace_back(HARDFORK_563_TIME);
_hardfork_times.emplace_back(HARDFORK_572_TIME);
_hardfork_times.emplace_back(HARDFORK_599_TIME);
_hardfork_times.emplace_back(HARDFORK_607_TIME);
_hardfork_times.emplace_back(HARDFORK_613_TIME);
_hardfork_times.emplace_back(HARDFORK_615_TIME);
_hardfork_times.emplace_back(HARDFORK_999_TIME);
_hardfork_times.emplace_back(HARDFORK_1000_TIME);
_hardfork_times.emplace_back(HARDFORK_1001_TIME);
_hardfork_times.emplace_back(HARDFORK_5050_1_TIME);
_hardfork_times.emplace_back(HARDFORK_CORE_429_TIME);
_hardfork_times.emplace_back(HARDFORK_GPOS_TIME);
_hardfork_times.emplace_back(HARDFORK_NFT_TIME);
_hardfork_times.emplace_back(HARDFORK_SON_FOR_HIVE_TIME);
_hardfork_times.emplace_back(HARDFORK_SON_TIME);
_hardfork_times.emplace_back(HARDFORK_SON2_TIME);
_hardfork_times.emplace_back(HARDFORK_SON_FOR_ETHEREUM_TIME);
_hardfork_times.emplace_back(HARDFORK_SWEEPS_TIME);
std::sort(_hardfork_times.begin(), _hardfork_times.end());
}
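// The list is kept sorted so callers can treat _hardfork_times as an ordered timeline
// (e.g. binary-search it with std::upper_bound).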
void database::initialize_indexes()
{
reset_indexes();
_undo_db.set_max_size( GRAPHENE_MIN_UNDO_HISTORY );
const std::lock_guard<std::mutex> undo_db_lock{_undo_db_mutex};
_undo_db.set_max_size(GRAPHENE_MIN_UNDO_HISTORY);
//Protocol object indexes
add_index< primary_index<asset_index, 13> >(); // 8192 assets per chunk
@ -432,7 +476,9 @@ void database::init_genesis(const genesis_state_type& genesis_state)
FC_ASSERT(genesis_state.initial_active_witnesses <= genesis_state.initial_witness_candidates.size(),
"initial_active_witnesses is larger than the number of candidate witnesses.");
const std::lock_guard<std::mutex> undo_db_lock{_undo_db_mutex};
_undo_db.disable();
struct auth_inhibitor {
auth_inhibitor(database& db) : db(db), old_flags(db.node_properties().skip_flags)
{ db.node_properties().skip_flags |= skip_authority_check; }
@ -1058,8 +1104,9 @@ void database::init_genesis(const genesis_state_type& genesis_state)
FC_ASSERT( _p_witness_schedule_obj->id == witness_schedule_id_type() );
// Initialize witness schedule
#ifndef NDEBUG
const son_schedule_object& sso =
const son_schedule_object& ssobitcoin =
#endif
create<son_schedule_object>([&](son_schedule_object& _sso)
{
@ -1068,24 +1115,64 @@ void database::init_genesis(const genesis_state_type& genesis_state)
witness_scheduler_rng rng(_sso.rng_seed.begin(), GRAPHENE_NEAR_SCHEDULE_CTR_IV);
auto init_witnesses = get_global_properties().active_witnesses;
auto init_bitcoin_sons = get_global_properties().active_sons.at(sidechain_type::bitcoin);
_sso.scheduler = son_scheduler();
_sso.scheduler._min_token_count = std::max(int(init_witnesses.size()) / 2, 1);
_sso.scheduler._min_token_count = std::max(int(init_bitcoin_sons.size()) / 2, 1);
_sso.last_scheduling_block = 0;
_sso.recent_slots_filled = fc::uint128::max_value();
});
assert( sso.id == son_schedule_id_type() );
assert( ssobitcoin.id == son_schedule_id_type(get_son_schedule_id(sidechain_type::bitcoin)) );
#ifndef NDEBUG
const son_schedule_object& ssoethereum =
#endif
create<son_schedule_object>([&](son_schedule_object& _sso)
{
// for scheduled
memset(_sso.rng_seed.begin(), 0, _sso.rng_seed.size());
witness_scheduler_rng rng(_sso.rng_seed.begin(), GRAPHENE_NEAR_SCHEDULE_CTR_IV);
auto init_ethereum_sons = get_global_properties().active_sons.at(sidechain_type::ethereum);
_sso.scheduler = son_scheduler();
_sso.scheduler._min_token_count = std::max(int(init_ethereum_sons.size()) / 2, 1);
_sso.last_scheduling_block = 0;
_sso.recent_slots_filled = fc::uint128::max_value();
});
assert( ssoethereum.id == son_schedule_id_type(get_son_schedule_id(sidechain_type::ethereum)) );
#ifndef NDEBUG
const son_schedule_object& ssohive =
#endif
create<son_schedule_object>([&](son_schedule_object& _sso)
{
// for scheduled
memset(_sso.rng_seed.begin(), 0, _sso.rng_seed.size());
witness_scheduler_rng rng(_sso.rng_seed.begin(), GRAPHENE_NEAR_SCHEDULE_CTR_IV);
auto init_hive_sons = get_global_properties().active_sons.at(sidechain_type::hive);
_sso.scheduler = son_scheduler();
_sso.scheduler._min_token_count = std::max(int(init_hive_sons.size()) / 2, 1);
_sso.last_scheduling_block = 0;
_sso.recent_slots_filled = fc::uint128::max_value();
});
assert( ssohive.id == son_schedule_id_type(get_son_schedule_id(sidechain_type::hive)) );
// Enable fees
modify(get_global_properties(), [&genesis_state](global_property_object& p) {
p.parameters.current_fees = genesis_state.initial_parameters.current_fees;
});
// Create FBA counters
create<fba_accumulator_object>([&]( fba_accumulator_object& acc )
{

File diff suppressed because it is too large

View file

@ -44,6 +44,7 @@ database::database() :
{
initialize_indexes();
initialize_evaluators();
initialize_hardforks();
}
database::~database()
@ -123,7 +124,7 @@ void database::reindex( fc::path data_dir )
}
for( uint32_t i = head_block_num() + 1; i <= last_block_num; ++i )
{
if( i % 10000 == 0 )
if( i % 1000000 == 0 )
{
ilog( "Writing database to disk at block ${i}", ("i",i) );
flush();

View file

@ -203,27 +203,10 @@ struct get_impacted_account_visitor
_impacted.insert( op.issuer );
}
void operator()( const transfer_to_blind_operation& op )
{
_impacted.insert( op.from );
for( const auto& out : op.outputs )
add_authority_accounts( _impacted, out.owner );
}
void operator()( const blind_transfer_operation& op )
{
for( const auto& in : op.inputs )
add_authority_accounts( _impacted, in.owner );
for( const auto& out : op.outputs )
add_authority_accounts( _impacted, out.owner );
}
void operator()( const transfer_from_blind_operation& op )
{
_impacted.insert( op.to );
for( const auto& in : op.inputs )
add_authority_accounts( _impacted, in.owner );
}
//! We don't use these operations
void operator()( const transfer_to_blind_operation& op ){}
void operator()( const blind_transfer_operation& op ){}
void operator()( const transfer_from_blind_operation& op ){}
void operator()( const asset_settle_cancel_operation& op )
{
@ -625,7 +608,6 @@ void database::notify_changed_objects()
if( _undo_db.enabled() )
{
const auto& head_undo = _undo_db.head();
auto chain_time = head_block_time();
// New
if( !new_objects.empty() )
@ -637,8 +619,7 @@ void database::notify_changed_objects()
new_ids.push_back(item);
auto obj = find_object(item);
if(obj != nullptr)
get_relevant_accounts(obj, new_accounts_impacted,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(chain_time));
get_relevant_accounts(obj, new_accounts_impacted, true);
}
GRAPHENE_TRY_NOTIFY( new_objects, new_ids, new_accounts_impacted)
@ -652,8 +633,7 @@ void database::notify_changed_objects()
for( const auto& item : head_undo.old_values )
{
changed_ids.push_back(item.first);
get_relevant_accounts(item.second.get(), changed_accounts_impacted,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(chain_time));
get_relevant_accounts(item.second.get(), changed_accounts_impacted, true);
}
GRAPHENE_TRY_NOTIFY( changed_objects, changed_ids, changed_accounts_impacted)
@ -670,8 +650,7 @@ void database::notify_changed_objects()
removed_ids.emplace_back( item.first );
auto obj = item.second.get();
removed.emplace_back( obj );
get_relevant_accounts(obj, removed_accounts_impacted,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(chain_time));
get_relevant_accounts(obj, removed_accounts_impacted, true);
}
GRAPHENE_TRY_NOTIFY( removed_objects, removed_ids, removed, removed_accounts_impacted)

View file

@ -74,21 +74,32 @@ witness_id_type database::get_scheduled_witness( uint32_t slot_num )const
return wid;
}
son_id_type database::get_scheduled_son( uint32_t slot_num )const
unsigned_int database::get_son_schedule_id( sidechain_type type )const
{
static const map<sidechain_type, unsigned_int> schedule_map = {
{ sidechain_type::bitcoin, 0 },
{ sidechain_type::ethereum, 1 },
{ sidechain_type::hive, 2 }
};
return schedule_map.at(type);
}
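// Example: son_schedule_id_type(get_son_schedule_id(sidechain_type::ethereum)) resolves to the second
// son_schedule_object created in init_genesis(), which creates the bitcoin, ethereum and hive schedules
// in that order.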
son_id_type database::get_scheduled_son( sidechain_type type, uint32_t slot_num )const
{
son_id_type sid;
const global_property_object& gpo = get_global_properties();
if (gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SHUFFLED_ALGORITHM)
{
const dynamic_global_property_object& dpo = get_dynamic_global_properties();
const son_schedule_object& sso = son_schedule_id_type()(*this);
const son_schedule_object& sso = son_schedule_id_type(get_son_schedule_id(type))(*this);
uint64_t current_aslot = dpo.current_aslot + slot_num;
return sso.current_shuffled_sons[ current_aslot % sso.current_shuffled_sons.size() ];
}
if (gpo.parameters.witness_schedule_algorithm == GRAPHENE_WITNESS_SCHEDULED_ALGORITHM &&
slot_num != 0 )
{
const son_schedule_object& sso = son_schedule_id_type()(*this);
const son_schedule_object& sso = son_schedule_id_type(get_son_schedule_id(type))(*this);
// ask the near scheduler who goes in the given slot
bool slot_is_near = sso.scheduler.get_slot(slot_num-1, sid);
if(! slot_is_near)
@ -189,36 +200,39 @@ void database::update_witness_schedule()
}
}
void database::update_son_schedule()
void database::update_son_schedule(sidechain_type type)
{
const son_schedule_object& sso = son_schedule_id_type()(*this);
const global_property_object& gpo = get_global_properties();
if( head_block_num() % gpo.active_sons.size() == 0 )
const son_schedule_object& sidechain_sso = get(son_schedule_id_type(get_son_schedule_id(type)));
if( gpo.active_sons.at(type).size() != 0 &&
head_block_num() % gpo.active_sons.at(type).size() == 0)
{
modify( sso, [&]( son_schedule_object& _sso )
modify( sidechain_sso, [&]( son_schedule_object& _sso )
{
_sso.current_shuffled_sons.clear();
_sso.current_shuffled_sons.reserve( gpo.active_sons.size() );
_sso.current_shuffled_sons.reserve( gpo.active_sons.at(type).size() );
for( const son_info& w : gpo.active_sons )
_sso.current_shuffled_sons.push_back( w.son_id );
for ( const auto &w : gpo.active_sons.at(type) ) {
_sso.current_shuffled_sons.push_back(w.son_id);
}
auto now_hi = uint64_t(head_block_time().sec_since_epoch()) << 32;
for( uint32_t i = 0; i < _sso.current_shuffled_sons.size(); ++i )
for (uint32_t i = 0; i < _sso.current_shuffled_sons.size(); ++i)
{
/// High performance random generator
/// http://xorshift.di.unimi.it/
uint64_t k = now_hi + uint64_t(i)*2685821657736338717ULL;
uint64_t k = now_hi + uint64_t(i) * 2685821657736338717ULL;
k ^= (k >> 12);
k ^= (k << 25);
k ^= (k >> 27);
k *= 2685821657736338717ULL;
uint32_t jmax = _sso.current_shuffled_sons.size() - i;
uint32_t j = i + k%jmax;
std::swap( _sso.current_shuffled_sons[i],
_sso.current_shuffled_sons[j] );
uint32_t j = i + k % jmax;
std::swap(_sso.current_shuffled_sons[i],
_sso.current_shuffled_sons[j]);
}
});
}
@ -304,13 +318,15 @@ void database::update_witness_schedule(const signed_block& next_block)
idump( ( double(total_time/1000000.0)/calls) );
}
void database::update_son_schedule(const signed_block& next_block)
void database::update_son_schedule(sidechain_type type, const signed_block& next_block)
{
auto start = fc::time_point::now();
const global_property_object& gpo = get_global_properties();
#ifndef NDEBUG
const son_schedule_object& sso = get(son_schedule_id_type());
uint32_t schedule_needs_filled = gpo.active_sons.size();
uint32_t schedule_slot = get_slot_at_time(next_block.timestamp);
#endif
const global_property_object& gpo = get_global_properties();
const uint32_t schedule_needs_filled = gpo.active_sons.at(type).size();
const uint32_t schedule_slot = get_slot_at_time(next_block.timestamp);
// We shouldn't be able to generate _pending_block with timestamp
// in the past, and incoming blocks from the network with timestamp
@ -319,48 +335,49 @@ void database::update_son_schedule(const signed_block& next_block)
assert( schedule_slot > 0 );
son_id_type first_son;
bool slot_is_near = sso.scheduler.get_slot( schedule_slot-1, first_son );
son_id_type son;
const dynamic_global_property_object& dpo = get_dynamic_global_properties();
assert( dpo.random.data_size() == witness_scheduler_rng::seed_length );
assert( witness_scheduler_rng::seed_length == sso.rng_seed.size() );
modify(sso, [&](son_schedule_object& _sso)
const son_schedule_object& sidechain_sso = get(son_schedule_id_type(get_son_schedule_id(type)));
son_id_type first_son;
bool slot_is_near = sidechain_sso.scheduler.get_slot( schedule_slot-1, first_son );
son_id_type son_id;
modify(sidechain_sso, [&](son_schedule_object& _sso)
{
_sso.slots_since_genesis += schedule_slot;
witness_scheduler_rng rng(sso.rng_seed.data, _sso.slots_since_genesis);
_sso.slots_since_genesis += schedule_slot;
witness_scheduler_rng rng(_sso.rng_seed.data, _sso.slots_since_genesis);
_sso.scheduler._min_token_count = std::max(int(gpo.active_sons.size()) / 2, 1);
_sso.scheduler._min_token_count = std::max(int(gpo.active_sons.at(type).size()) / 2, 1);
if( slot_is_near )
{
uint32_t drain = schedule_slot;
while( drain > 0 )
{
if( _sso.scheduler.size() == 0 )
break;
_sso.scheduler.consume_schedule();
--drain;
}
}
else
{
_sso.scheduler.reset_schedule( first_son );
}
while( !_sso.scheduler.get_slot(schedule_needs_filled, son) )
{
if( _sso.scheduler.produce_schedule(rng) & emit_turn )
memcpy(_sso.rng_seed.begin(), dpo.random.data(), dpo.random.data_size());
}
_sso.last_scheduling_block = next_block.block_num();
_sso.recent_slots_filled = (
(_sso.recent_slots_filled << 1)
+ 1) << (schedule_slot - 1);
if( slot_is_near )
{
uint32_t drain = schedule_slot;
while( drain > 0 )
{
if( _sso.scheduler.size() == 0 )
break;
_sso.scheduler.consume_schedule();
--drain;
}
}
else
{
_sso.scheduler.reset_schedule( first_son );
}
while( !_sso.scheduler.get_slot(schedule_needs_filled, son_id) )
{
if( _sso.scheduler.produce_schedule(rng) & emit_turn )
memcpy(_sso.rng_seed.begin(), dpo.random.data(), dpo.random.data_size());
}
_sso.last_scheduling_block = next_block.block_num();
_sso.recent_slots_filled = (
(_sso.recent_slots_filled << 1)
+ 1) << (schedule_slot - 1);
});
auto end = fc::time_point::now();
static uint64_t total_time = 0;
static uint64_t calls = 0;


@ -47,7 +47,7 @@ namespace graphene { namespace chain {
};
} }
FC_REFLECT_ENUM(graphene::chain::event_state,
(upcoming)
(frozen_upcoming)
(in_progress)
@ -61,12 +61,12 @@ namespace graphene { namespace chain {
namespace msm = boost::msm;
namespace mpl = boost::mpl;
namespace
{
// Events -- most events happen when the witnesses publish an event_update operation with a new
// status, so if they publish an event with the status set to `frozen`, we'll generate a `frozen_event`
struct upcoming_event
{
database& db;
upcoming_event(database& db) : db(db) {}
@ -76,12 +76,12 @@ namespace graphene { namespace chain {
database& db;
in_progress_event(database& db) : db(db) {}
};
struct frozen_event
{
database& db;
frozen_event(database& db) : db(db) {}
};
struct finished_event
{
database& db;
finished_event(database& db) : db(db) {}
@ -104,7 +104,7 @@ namespace graphene { namespace chain {
betting_market_group_resolved_event(database& db, betting_market_group_id_type resolved_group, bool was_canceled) : db(db), resolved_group(resolved_group), was_canceled(was_canceled) {}
};
// event triggered when a betting market group is closed. When we get this,
// if all child betting market groups are closed, transition to finished
struct betting_market_group_closed_event
{
@ -127,7 +127,7 @@ namespace graphene { namespace chain {
void on_entry(const upcoming_event& event, event_state_machine_& fsm) {
dlog("event ${id} -> upcoming", ("id", fsm.event_obj->id));
auto& betting_market_group_index = event.db.get_index_type<betting_market_group_object_index>().indices().get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(fsm.event_obj->id)))
try
{
@ -147,7 +147,7 @@ namespace graphene { namespace chain {
void on_entry(const in_progress_event& event, event_state_machine_& fsm) {
dlog("event ${id} -> in_progress", ("id", fsm.event_obj->id));
auto& betting_market_group_index = event.db.get_index_type<betting_market_group_object_index>().indices().get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(fsm.event_obj->id)))
try
{
@ -203,7 +203,7 @@ namespace graphene { namespace chain {
void freeze_betting_market_groups(const frozen_event& event) {
auto& betting_market_group_index = event.db.get_index_type<betting_market_group_object_index>().indices().get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(event_obj->id)))
{
try
@ -222,7 +222,7 @@ namespace graphene { namespace chain {
void close_all_betting_market_groups(const finished_event& event) {
auto& betting_market_group_index = event.db.get_index_type<betting_market_group_object_index>().indices().get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(event_obj->id)))
{
try
@ -241,7 +241,7 @@ namespace graphene { namespace chain {
void cancel_all_betting_market_groups(const canceled_event& event) {
auto& betting_market_group_index = event.db.template get_index_type<betting_market_group_object_index>().indices().template get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(event_obj->id)))
event.db.modify(betting_market_group, [&event](betting_market_group_object& betting_market_group_obj) {
betting_market_group_obj.on_canceled_event(event.db, true);
@ -252,15 +252,15 @@ namespace graphene { namespace chain {
bool all_betting_market_groups_are_closed(const betting_market_group_closed_event& event)
{
auto& betting_market_group_index = event.db.get_index_type<betting_market_group_object_index>().indices().get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(event_obj->id)))
if (betting_market_group.id != event.closed_group)
{
betting_market_group_status status = betting_market_group.get_status();
if (status != betting_market_group_status::closed &&
status != betting_market_group_status::graded &&
status != betting_market_group_status::re_grading &&
status != betting_market_group_status::settled &&
status != betting_market_group_status::canceled)
return false;
}
@ -276,7 +276,7 @@ namespace graphene { namespace chain {
if (event_obj->at_least_one_betting_market_group_settled)
return false;
auto& betting_market_group_index = event.db.get_index_type<betting_market_group_object_index>().indices().get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(event_obj->id)))
if (betting_market_group.id != event.resolved_group)
if (betting_market_group.get_status() != betting_market_group_status::canceled)
@ -290,7 +290,7 @@ namespace graphene { namespace chain {
event_obj->at_least_one_betting_market_group_settled = true;
auto& betting_market_group_index = event.db.get_index_type<betting_market_group_object_index>().indices().get<by_event_id>();
for (const betting_market_group_object& betting_market_group :
boost::make_iterator_range(betting_market_group_index.equal_range(event_obj->id))) {
if (betting_market_group.id != event.resolved_group) {
betting_market_group_status status = betting_market_group.get_status();
@ -344,7 +344,6 @@ namespace graphene { namespace chain {
{
FC_THROW_EXCEPTION(graphene::chain::no_transition, "No transition");
}
template <class Fsm>
void no_transition(canceled_event const& e, Fsm&, int state)
{
@ -372,7 +371,7 @@ namespace graphene { namespace chain {
{
}
event_object::event_object(const event_object& rhs) :
graphene::db::abstract_object<event_object>(rhs),
name(rhs.name),
season(rhs.season),
@ -408,7 +407,7 @@ namespace graphene { namespace chain {
}
namespace {
bool verify_event_status_constants()
{
unsigned error_count = 0;
@ -443,19 +442,19 @@ namespace graphene { namespace chain {
dlog("Event status constants are correct");
else
wlog("There were ${count} errors in the event status constants", ("count", error_count));
return error_count == 0;
}
} // end anonymous namespace
event_status event_object::get_status() const
{
static bool state_constants_are_correct = verify_event_status_constants();
(void)&state_constants_are_correct;
event_state state = (event_state)my->state_machine.current_state()[0];
ddump((state));
switch (state)
{
case event_state::upcoming:
@ -523,8 +522,8 @@ namespace graphene { namespace chain {
my->state_machine.process_event(betting_market_group_closed_event(db, closed_group));
}
// These are the only statuses that can be explicitly set by witness operations. The missing
// status, 'settled', is automatically set when all of the betting market groups have
// settled/canceled
void event_object::dispatch_new_status(database& db, event_status new_status)
{
@ -533,16 +532,16 @@ namespace graphene { namespace chain {
on_upcoming_event(db);
break;
case event_status::in_progress: // by witnesses when the event starts
on_in_progress_event(db);
break;
case event_status::frozen: // by witnesses when the event needs to be frozen
on_frozen_event(db);
break;
case event_status::finished: // by witnesses when the event is complete
on_finished_event(db);
break;
case event_status::canceled: // by witnesses to cancel the event
on_canceled_event(db);
break;
default:
FC_THROW("Status ${new_status} cannot be explicitly set", ("new_status", new_status));
@ -551,7 +550,7 @@ namespace graphene { namespace chain {
} } // graphene::chain
namespace fc {
// Manually reflect event_object to variant to properly reflect "state"
void to_variant(const graphene::chain::event_object& event_obj, fc::variant& v, uint32_t max_depth)
{


@ -547,7 +547,7 @@ namespace graphene { namespace chain {
} } // graphene::chain
namespace fc {
// Manually reflect game_object to variant to properly reflect "state"
void to_variant(const graphene::chain::game_object& game_obj, fc::variant& v, uint32_t max_depth)
{


@ -1,10 +0,0 @@
// #210 Check authorities on custom_operation
#ifndef HARDFORK_CORE_210_TIME
#ifdef BUILD_PEERPLAYS_TESTNET
#define HARDFORK_CORE_210_TIME (fc::time_point_sec::from_iso_string("2030-01-01T00:00:00")) // (Not yet scheduled)
#else
#define HARDFORK_CORE_210_TIME (fc::time_point_sec::from_iso_string("2030-01-01T00:00:00")) // (Not yet scheduled)
#endif
// Bugfix: pre-HF 210, custom_operation's required_auths field was ignored.
#define MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(chain_time) (chain_time <= HARDFORK_CORE_210_TIME)
#endif


@ -0,0 +1,7 @@
#ifndef HARDFORK_HOTFIX_2024_TIME
#ifdef BUILD_PEERPLAYS_TESTNET
#define HARDFORK_HOTFIX_2024_TIME (fc::time_point_sec::from_iso_string("2023-12-20T00:00:00"))
#else
#define HARDFORK_HOTFIX_2024_TIME (fc::time_point_sec::from_iso_string("2023-12-20T00:00:00"))
#endif
#endif
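Each HARDFORK_*_TIME header pins the activation timestamp once for testnet builds and once for mainnet builds (identical here), and consensus code compares the chain's head block time against it. A hedged sketch of the usual gating pattern; the helper name is an assumption and is not introduced by this diff:

// Illustrative sketch only: how a HARDFORK_*_TIME constant is typically consulted.
// Assumes this sits in a translation unit that includes the hardfork headers.
#include <fc/time.hpp>

inline bool hotfix_2024_active(const fc::time_point_sec block_time) {
   // Before the timestamp the old rules apply, so historical blocks replay unchanged;
   // at or after it the corrected behaviour takes effect on all nodes simultaneously.
   return block_time >= HARDFORK_HOTFIX_2024_TIME;
}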


@ -0,0 +1,7 @@
#ifndef HARDFORK_SIDECHAIN_DELETE_TIME
#ifdef BUILD_PEERPLAYS_TESTNET
#define HARDFORK_SIDECHAIN_DELETE_TIME (fc::time_point_sec::from_iso_string("2022-11-16T02:00:00"))
#else
#define HARDFORK_SIDECHAIN_DELETE_TIME (fc::time_point_sec::from_iso_string("2022-11-16T02:00:00"))
#endif
#endif


@ -0,0 +1,7 @@
#ifndef HARDFORK_SON_FOR_ETHEREUM_TIME
#ifdef BUILD_PEERPLAYS_TESTNET
#define HARDFORK_SON_FOR_ETHEREUM_TIME (fc::time_point_sec::from_iso_string("2023-07-17T12:00:00"))
#else
#define HARDFORK_SON_FOR_ETHEREUM_TIME (fc::time_point_sec::from_iso_string("2023-10-24T12:00:00"))
#endif
#endif


@ -2,6 +2,6 @@
#ifdef BUILD_PEERPLAYS_TESTNET
#define HARDFORK_SON_FOR_HIVE_TIME (fc::time_point_sec::from_iso_string("2021-03-31T00:00:00"))
#else
#define HARDFORK_SON_FOR_HIVE_TIME (fc::time_point_sec::from_iso_string("2021-12-11T00:00:00"))
#define HARDFORK_SON_FOR_HIVE_TIME (fc::time_point_sec::from_iso_string("2021-12-21T00:00:00"))
#endif
#endif


@ -24,19 +24,18 @@
#pragma once
#include <graphene/chain/protocol/types.hpp>
#include <graphene/chain/protocol/betting_market.hpp>
#include <graphene/db/object.hpp>
#include <graphene/db/generic_index.hpp>
#include <sstream>
#include <boost/multi_index/composite_key.hpp>
namespace graphene { namespace chain {
class betting_market_object;
class betting_market_group_object;
} }
namespace fc {
void to_variant(const graphene::chain::betting_market_object& betting_market_obj, fc::variant& v, uint32_t max_depth = 1);
void from_variant(const fc::variant& v, graphene::chain::betting_market_object& betting_market_obj, uint32_t max_depth = 1);
void to_variant(const graphene::chain::betting_market_group_object& betting_market_group_obj, fc::variant& v, uint32_t max_depth = 1);
@ -626,10 +625,9 @@ typedef multi_index_container<
typedef generic_index<betting_market_position_object, betting_market_position_multi_index_type> betting_market_position_index;
template<typename Stream>
inline Stream& operator<<( Stream& s, const betting_market_object& betting_market_obj )
{
// pack all fields exposed in the header in the usual way
// instead of calling the derived pack, just serialize the one field in the base class
// fc::raw::pack<Stream, const graphene::db::abstract_object<betting_market_object> >(s, betting_market_obj);
@ -649,7 +647,7 @@ inline Stream& operator<<( Stream& s, const betting_market_object& betting_marke
}
template<typename Stream>
inline Stream& operator>>( Stream& s, betting_market_object& betting_market_obj )
{
// unpack all fields exposed in the header in the usual way
//fc::raw::unpack<Stream, graphene::db::abstract_object<betting_market_object> >(s, betting_market_obj);
fc::raw::unpack(s, betting_market_obj.id);
@ -663,14 +661,14 @@ inline Stream& operator>>( Stream& s, betting_market_object& betting_market_obj
fc::raw::unpack(s, stringified_stream);
std::istringstream stream(stringified_stream);
betting_market_obj.unpack_impl(stream);
return s;
}
template<typename Stream>
inline Stream& operator<<( Stream& s, const betting_market_group_object& betting_market_group_obj )
{
// pack all fields exposed in the header in the usual way
// instead of calling the derived pack, just serialize the one field in the base class
// fc::raw::pack<Stream, const graphene::db::abstract_object<betting_market_group_object> >(s, betting_market_group_obj);
@ -693,7 +691,7 @@ inline Stream& operator<<( Stream& s, const betting_market_group_object& betting
}
template<typename Stream>
inline Stream& operator>>( Stream& s, betting_market_group_object& betting_market_group_obj )
{
// unpack all fields exposed in the header in the usual way
//fc::raw::unpack<Stream, graphene::db::abstract_object<betting_market_group_object> >(s, betting_market_group_obj);
fc::raw::unpack(s, betting_market_group_obj.id);
@ -711,15 +709,113 @@ inline Stream& operator>>( Stream& s, betting_market_group_object& betting_marke
fc::raw::unpack(s, stringified_stream);
std::istringstream stream(stringified_stream);
betting_market_group_obj.unpack_impl(stream);
return s;
}
} } // graphene::chain
FC_REFLECT_DERIVED( graphene::chain::betting_market_rules_object, (graphene::db::object), (name)(description) )
FC_REFLECT_DERIVED( graphene::chain::betting_market_group_object, (graphene::db::object), (description)(event_id)(rules_id)(asset_id)(total_matched_bets_amount)(never_in_play)(delay_before_settling)(settling_time) )
FC_REFLECT_DERIVED( graphene::chain::betting_market_object, (graphene::db::object), (group_id)(description)(payout_condition)(resolution) )
FC_REFLECT_DERIVED( graphene::chain::bet_object, (graphene::db::object), (bettor_id)(betting_market_id)(amount_to_bet)(backer_multiplier)(back_or_lay)(end_of_delay) )
FC_REFLECT_DERIVED( graphene::chain::betting_market_position_object, (graphene::db::object), (bettor_id)(betting_market_id)(pay_if_payout_condition)(pay_if_not_payout_condition)(pay_if_canceled)(pay_if_not_canceled)(fees_collected) )
namespace fc {
template<>
template<>
inline void if_enum<fc::false_type>::from_variant(const variant &vo, graphene::chain::betting_market_object &v, uint32_t max_depth) {
from_variant(vo, v, max_depth);
}
template<>
template<>
inline void if_enum<fc::false_type>::to_variant(const graphene::chain::betting_market_object &v, variant &vo, uint32_t max_depth) {
to_variant(v, vo, max_depth);
}
namespace raw { namespace detail {
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<size_t> &s, const graphene::chain::betting_market_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<char*> &s, const graphene::chain::betting_market_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::unpack(fc::datastream<const char*> &s, graphene::chain::betting_market_object &v, uint32_t) {
s >> v;
}
} } // namespace fc::raw::detail
template <>
struct get_typename<graphene::chain::betting_market_object> {
static const char *name() {
return "graphene::chain::betting_market_object";
}
};
template <>
struct reflector<graphene::chain::betting_market_object> {
typedef graphene::chain::betting_market_object type;
typedef fc::true_type is_defined;
typedef fc::false_type is_enum;
};
} // namespace fc
namespace fc {
template<>
template<>
inline void if_enum<fc::false_type>::from_variant(const variant &vo, graphene::chain::betting_market_group_object &v, uint32_t max_depth) {
from_variant(vo, v, max_depth);
}
template<>
template<>
inline void if_enum<fc::false_type>::to_variant(const graphene::chain::betting_market_group_object &v, variant &vo, uint32_t max_depth) {
to_variant(v, vo, max_depth);
}
namespace raw { namespace detail {
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<size_t> &s, const graphene::chain::betting_market_group_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<char*> &s, const graphene::chain::betting_market_group_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::unpack(fc::datastream<const char*> &s, graphene::chain::betting_market_group_object &v, uint32_t) {
s >> v;
}
} } // namespace fc::raw::detail
template <>
struct get_typename<graphene::chain::betting_market_group_object> {
static const char *name() {
return "graphene::chain::betting_market_group_object";
}
};
template <>
struct reflector<graphene::chain::betting_market_group_object> {
typedef graphene::chain::betting_market_group_object type;
typedef fc::true_type is_defined;
typedef fc::false_type is_enum;
};
} // namespace fc


@ -158,7 +158,7 @@
#define GRAPHENE_RECENTLY_MISSED_COUNT_INCREMENT 4
#define GRAPHENE_RECENTLY_MISSED_COUNT_DECREMENT 3
#define GRAPHENE_CURRENT_DB_VERSION "PPY2.4"
#define GRAPHENE_CURRENT_DB_VERSION "PPY2.5"
#define GRAPHENE_IRREVERSIBLE_THRESHOLD (70 * GRAPHENE_1_PERCENT)


@ -66,6 +66,8 @@ namespace graphene { namespace chain {
database();
~database();
std::vector<fc::time_point_sec> _hardfork_times;
enum validation_steps
{
skip_nothing = 0,
@ -243,7 +245,16 @@ namespace graphene { namespace chain {
witness_id_type get_scheduled_witness(uint32_t slot_num)const;
/**
* @brief Get the son scheduled for block production in a slot.
* @brief Get son schedule id for the given sidechain_type.
*
* @param type The sidechain_type for which to get the schedule.
*
* @return The id of the schedule object.
*/
unsigned_int get_son_schedule_id(sidechain_type type)const;
/**
* @brief Get the bitcoin or hive son scheduled for block production in a slot.
*
* slot_num always corresponds to a time in the future.
*
@ -256,7 +267,7 @@ namespace graphene { namespace chain {
*
* Passing slot_num == 0 returns GRAPHENE_NULL_WITNESS
*/
son_id_type get_scheduled_son(uint32_t slot_num)const;
son_id_type get_scheduled_son(sidechain_type type, uint32_t slot_num)const;
/**
* Get the time at which the given slot occurs.
@ -281,8 +292,8 @@ namespace graphene { namespace chain {
vector<witness_id_type> get_near_witness_schedule()const;
void update_witness_schedule();
void update_witness_schedule(const signed_block& next_block);
void update_son_schedule();
void update_son_schedule(const signed_block& next_block);
void update_son_schedule(sidechain_type type);
void update_son_schedule(sidechain_type type, const signed_block& next_block);
void check_lottery_end_by_participants( asset_id_type asset_id );
void check_ending_lotteries();
@ -311,7 +322,7 @@ namespace graphene { namespace chain {
fc::optional<operation> create_son_deregister_proposal( son_id_type son_id, account_id_type paying_son );
signed_transaction create_signed_transaction( const fc::ecc::private_key& signing_private_key, const operation& op );
bool is_son_dereg_valid( son_id_type son_id );
bool is_son_active( son_id_type son_id );
bool is_son_active( sidechain_type type, son_id_type son_id );
bool is_asset_creation_allowed(const string& symbol);
time_point_sec head_block_time()const;
@ -332,6 +343,8 @@ namespace graphene { namespace chain {
void initialize_evaluators();
/// Reset the object graph in-memory
void initialize_indexes();
void initialize_hardforks();
void init_genesis(const genesis_state_type& genesis_state = genesis_state_type());
template<typename EvaluatorType>
@ -507,12 +520,16 @@ namespace graphene { namespace chain {
void notify_changed_objects();
private:
std::mutex _pending_tx_session_mutex;
optional<undo_database::session> _pending_tx_session;
vector< unique_ptr<op_evaluator> > _operation_evaluators;
template<class Index>
vector<std::reference_wrapper<const typename Index::object_type>> sort_votable_objects(size_t count)const;
template<class Index>
vector<std::reference_wrapper<const typename Index::object_type>> sort_votable_objects(sidechain_type sidechain, size_t count)const;
//////////////////// db_block.cpp ////////////////////
public:
@ -562,19 +579,22 @@ namespace graphene { namespace chain {
void initialize_budget_record( fc::time_point_sec now, budget_record& rec )const;
void process_budget();
void pay_workers( share_type& budget );
void pay_sons();
void pay_sons_before_hf_ethereum();
void pay_sons_after_hf_ethereum();
void perform_son_tasks();
void perform_chain_maintenance(const signed_block& next_block, const global_property_object& global_props);
void update_active_witnesses();
void update_active_committee_members();
void update_son_metrics( const vector<son_info>& curr_active_sons );
void update_son_metrics( const flat_map<sidechain_type, vector<son_sidechain_info> >& curr_active_sons );
void update_active_sons();
void remove_son_proposal( const proposal_object& proposal );
void remove_inactive_son_down_proposals( const vector<son_id_type>& son_ids_to_remove );
void remove_inactive_son_proposals( const vector<son_id_type>& son_ids_to_remove );
void update_son_statuses( const vector<son_info>& cur_active_sons, const vector<son_info>& new_active_sons );
void update_son_wallet( const vector<son_info>& new_active_sons );
void update_son_statuses( const flat_map<sidechain_type, vector<son_sidechain_info> >& curr_active_sons,
const flat_map<sidechain_type, vector<son_sidechain_info> >& new_active_sons );
void update_son_wallet( const flat_map<sidechain_type, vector<son_sidechain_info> >& new_active_sons );
void update_worker_votes();
void hotfix_2024();
public:
double calculate_vesting_factor(const account_object& stake_account);
@ -585,6 +605,7 @@ namespace graphene { namespace chain {
///@}
///@}
std::mutex _pending_tx_mutex;
vector< processed_transaction > _pending_tx;
fork_database _fork_db;
@ -612,11 +633,17 @@ namespace graphene { namespace chain {
uint16_t _current_op_in_trx = 0;
uint32_t _current_virtual_op = 0;
vector<uint64_t> _son_count_histogram_buffer;
vector<uint64_t> _vote_tally_buffer;
vector<uint64_t> _witness_count_histogram_buffer;
vector<uint64_t> _committee_count_histogram_buffer;
flat_map<sidechain_type, vector<uint64_t> > _son_count_histogram_buffer = []{
flat_map<sidechain_type, vector<uint64_t> > son_count_histogram_buffer;
for(const auto& active_sidechain_type : all_sidechain_types){
son_count_histogram_buffer[active_sidechain_type] = vector<uint64_t>{};
}
return son_count_histogram_buffer;
}();
uint64_t _total_voting_stake;
flat_map<uint32_t,block_id_type> _checkpoints;


@ -35,7 +35,7 @@ namespace graphene { namespace chain {
class event_object;
} }
namespace fc {
void to_variant(const graphene::chain::event_object& event_obj, fc::variant& v, uint32_t max_depth = 1);
void from_variant(const fc::variant& v, graphene::chain::event_object& event_obj, uint32_t max_depth = 1);
} //end namespace fc
@ -56,7 +56,7 @@ class event_object : public graphene::db::abstract_object< event_object >
event_object& operator=(const event_object& rhs);
internationalized_string_type name;
internationalized_string_type season;
optional<time_point_sec> start_time;
@ -114,7 +114,7 @@ typedef generic_index<event_object, event_object_multi_index_type> event_object_
template<typename Stream>
inline Stream& operator<<( Stream& s, const event_object& event_obj )
{
fc_elog(fc::logger::get("event"), "In event_obj to_raw");
// pack all fields exposed in the header in the usual way
// instead of calling the derived pack, just serialize the one field in the base class
@ -137,7 +137,7 @@ typedef generic_index<event_object, event_object_multi_index_type> event_object_
}
template<typename Stream>
inline Stream& operator>>( Stream& s, event_object& event_obj )
{
fc_elog(fc::logger::get("event"), "In event_obj from_raw");
// unpack all fields exposed in the header in the usual way
//fc::raw::unpack<Stream, graphene::db::abstract_object<event_object> >(s, event_obj);
@ -154,10 +154,57 @@ typedef generic_index<event_object, event_object_multi_index_type> event_object_
fc::raw::unpack(s, stringified_stream);
std::istringstream stream(stringified_stream);
event_obj.unpack_impl(stream);
return s;
}
} } // graphene::chain
FC_REFLECT(graphene::chain::event_object, (name)(season)(start_time)(event_group_id)(at_least_one_betting_market_group_settled)(scores))
namespace fc {
template<>
template<>
inline void if_enum<fc::false_type>::from_variant(const variant &vo, graphene::chain::event_object &v, uint32_t max_depth) {
from_variant(vo, v, max_depth);
}
template<>
template<>
inline void if_enum<fc::false_type>::to_variant(const graphene::chain::event_object &v, variant &vo, uint32_t max_depth) {
to_variant(v, vo, max_depth);
}
namespace raw { namespace detail {
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<size_t> &s, const graphene::chain::event_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<char*> &s, const graphene::chain::event_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::unpack(fc::datastream<const char*> &s, graphene::chain::event_object &v, uint32_t) {
s >> v;
}
} } // namespace fc::raw::detail
template <>
struct get_typename<graphene::chain::event_object> {
static const char *name() {
return "graphene::chain::event_object";
}
};
template <>
struct reflector<graphene::chain::event_object> {
typedef graphene::chain::event_object type;
typedef fc::true_type is_defined;
typedef fc::false_type is_enum;
};
} // namespace fc


@ -23,6 +23,7 @@
*/
#pragma once
#include <boost/exception/diagnostic_information.hpp>
#include <fc/exception/exception.hpp>
#include <graphene/chain/protocol/protocol.hpp>
@ -65,19 +66,27 @@
msg \
)
#define GRAPHENE_TRY_NOTIFY( signal, ... ) \
try \
{ \
signal( __VA_ARGS__ ); \
} \
catch( const graphene::chain::plugin_exception& e ) \
{ \
elog( "Caught plugin exception: ${e}", ("e", e.to_detail_string() ) ); \
throw; \
} \
catch( const boost::exception& e ) \
{ \
elog( "Caught plugin boost::exception: ${e}", ("e", boost::diagnostic_information(e) ) ); \
} \
catch( const std::exception& e ) \
{ \
elog( "Caught plugin std::exception: ${e}", ("e", e.what() ) ); \
} \
catch( ... ) \
{ \
wlog( "Caught unexpected exception in plugin" ); \
}
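The widened macro now reports boost:: and std:: exceptions thrown by plugin signal handlers, with diagnostic detail, instead of funnelling everything except plugin_exception into the generic catch-all; only plugin_exception is still rethrown. A hedged usage sketch with an assumed signal and handler:

// Illustrative sketch only (assumed names): a throwing handler is logged via the new
// std::exception clause and block processing continues rather than aborting.
#include <boost/signals2/signal.hpp>
#include <stdexcept>

inline void notify_example() {
   boost::signals2::signal<void(uint32_t)> on_block;        // hypothetical signal
   on_block.connect([](uint32_t) { throw std::runtime_error("plugin handler failed"); });
   GRAPHENE_TRY_NOTIFY(on_block, 123u);                     // logged, not rethrown
}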
namespace graphene { namespace chain {


@ -23,10 +23,8 @@
*/
#pragma once
#include <graphene/chain/match_object.hpp>
#include <graphene/chain/rock_paper_scissors.hpp>
#include <boost/multi_index/composite_key.hpp>
#include <graphene/db/flat_index.hpp>
#include <graphene/db/object.hpp>
#include <graphene/db/generic_index.hpp>
#include <fc/crypto/hex.hpp>
#include <sstream>
@ -35,7 +33,7 @@ namespace graphene { namespace chain {
class game_object;
} }
namespace fc {
void to_variant(const graphene::chain::game_object& game_obj, fc::variant& v, uint32_t max_depth = 1);
void from_variant(const fc::variant& v, graphene::chain::game_object& game_obj, uint32_t max_depth = 1);
} //end namespace fc
@ -82,7 +80,7 @@ namespace graphene { namespace chain {
void on_move(database& db, const game_move_operation& op);
void on_timeout(database& db);
void start_game(database& db, const std::vector<account_id_type>& players);
// serialization functions:
// for serializing to raw, go through a temporary sstream object to avoid
// having to implement serialization in the header file
@ -116,7 +114,7 @@ namespace graphene { namespace chain {
template<typename Stream>
inline Stream& operator<<( Stream& s, const game_object& game_obj )
{
// pack all fields exposed in the header in the usual way
// instead of calling the derived pack, just serialize the one field in the base class
// fc::raw::pack<Stream, const graphene::db::abstract_object<game_object> >(s, game_obj);
@ -138,7 +136,7 @@ namespace graphene { namespace chain {
template<typename Stream>
inline Stream& operator>>( Stream& s, game_object& game_obj )
{
// unpack all fields exposed in the header in the usual way
//fc::raw::unpack<Stream, graphene::db::abstract_object<game_object> >(s, game_obj);
fc::raw::unpack(s, game_obj.id);
@ -153,10 +151,9 @@ namespace graphene { namespace chain {
fc::raw::unpack(s, stringified_stream);
std::istringstream stream(stringified_stream);
game_obj.unpack_impl(stream);
return s;
}
} }
FC_REFLECT_ENUM(graphene::chain::game_state,
@ -165,7 +162,52 @@ FC_REFLECT_ENUM(graphene::chain::game_state,
(expecting_reveal_moves)
(game_complete))
//FC_REFLECT_TYPENAME(graphene::chain::game_object) // manually serialized
FC_REFLECT(graphene::chain::game_object, (players))
namespace fc {
template<>
template<>
inline void if_enum<fc::false_type>::from_variant(const variant &vo, graphene::chain::game_object &v, uint32_t max_depth) {
from_variant(vo, v, max_depth);
}
template<>
template<>
inline void if_enum<fc::false_type>::to_variant(const graphene::chain::game_object &v, variant &vo, uint32_t max_depth) {
to_variant(v, vo, max_depth);
}
namespace raw { namespace detail {
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<size_t> &s, const graphene::chain::game_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<char*> &s, const graphene::chain::game_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::unpack(fc::datastream<const char*> &s, graphene::chain::game_object &v, uint32_t) {
s >> v;
}
} } // namespace fc::raw::detail
template <>
struct get_typename<graphene::chain::game_object> {
static const char *name() {
return "graphene::chain::game_object";
}
};
template <>
struct reflector<graphene::chain::game_object> {
typedef graphene::chain::game_object type;
typedef fc::true_type is_defined;
typedef fc::false_type is_enum;
};
} // namespace fc


@ -27,7 +27,7 @@
#include <graphene/chain/protocol/chain_parameters.hpp>
#include <graphene/chain/protocol/types.hpp>
#include <graphene/chain/database.hpp>
#include <graphene/chain/son_info.hpp>
#include <graphene/chain/son_sidechain_info.hpp>
#include <graphene/db/object.hpp>
namespace graphene { namespace chain {
@ -49,10 +49,18 @@ namespace graphene { namespace chain {
chain_parameters parameters;
optional<chain_parameters> pending_parameters;
uint32_t next_available_vote_id = 0;
vector<committee_member_id_type> active_committee_members; // updated once per maintenance interval
flat_set<witness_id_type> active_witnesses; // updated once per maintenance interval
vector<son_info> active_sons; // updated once per maintenance interval
uint32_t next_available_vote_id = 0;
vector<committee_member_id_type> active_committee_members; // updated once per maintenance interval
flat_set<witness_id_type> active_witnesses; // updated once per maintenance interval
flat_map<sidechain_type, vector<son_sidechain_info> > active_sons = []() // updated once per maintenance interval
{
flat_map<sidechain_type, vector<son_sidechain_info> > active_sons;
for(const auto& active_sidechain_type : all_sidechain_types)
{
active_sons[active_sidechain_type] = vector<son_sidechain_info>();
}
return active_sons;
}();
// n.b. witness scheduling is done by witness_schedule object
};
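active_sons is now keyed by sidechain, and the immediately-invoked lambda in the default member initializer guarantees that every supported sidechain has an (empty) entry before the first maintenance interval, so readers can use .at(type) without an existence check. A generic sketch of that idiom with placeholder types, for illustration only:

// Illustrative sketch only: the default-member-initializer idiom used above,
// seeding one empty entry per enum key so .at(key) never throws for a known key.
#include <boost/container/flat_map.hpp>
#include <vector>

enum class side { bitcoin, hive, ethereum };                 // placeholder enum

struct schedule_state {
   boost::container::flat_map<side, std::vector<int>> per_side = [] {
      boost::container::flat_map<side, std::vector<int>> m;
      for (auto s : {side::bitcoin, side::hive, side::ethereum})
         m[s] = {};                                           // key present, value empty
      return m;
   }();
};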
@ -131,6 +139,7 @@ namespace graphene { namespace chain {
}}
FC_REFLECT_DERIVED( graphene::chain::dynamic_global_property_object, (graphene::db::object),
(random)
(head_block_number)
(head_block_id)
(time)


@ -1,8 +1,5 @@
#pragma once
#include <graphene/chain/protocol/tournament.hpp>
#include <graphene/chain/rock_paper_scissors.hpp>
#include <boost/multi_index/composite_key.hpp>
#include <graphene/db/flat_index.hpp>
#include <graphene/db/object.hpp>
#include <graphene/db/generic_index.hpp>
#include <fc/crypto/hex.hpp>
#include <sstream>
@ -11,11 +8,12 @@ namespace graphene { namespace chain {
class match_object;
} }
namespace fc {
void to_variant(const graphene::chain::match_object& match_obj, fc::variant& v, uint32_t max_depth = 1);
void from_variant(const fc::variant& v, graphene::chain::match_object& match_obj, uint32_t max_depth = 1);
} //end namespace fc
namespace graphene { namespace chain {
class database;
using namespace graphene::db;
@ -89,6 +87,7 @@ namespace graphene { namespace chain {
void pack_impl(std::ostream& stream) const;
void unpack_impl(std::istream& stream);
void on_initiate_match(database& db);
void on_game_complete(database& db, const game_object& game);
game_id_type start_next_game(database& db, match_id_type match_id);
@ -106,7 +105,7 @@ namespace graphene { namespace chain {
template<typename Stream>
inline Stream& operator<<( Stream& s, const match_object& match_obj )
{
// pack all fields exposed in the header in the usual way
// instead of calling the derived pack, just serialize the one field in the base class
// fc::raw::pack<Stream, const graphene::db::abstract_object<match_object> >(s, match_obj);
@ -132,7 +131,7 @@ namespace graphene { namespace chain {
template<typename Stream>
inline Stream& operator>>( Stream& s, match_object& match_obj )
{
// unpack all fields exposed in the header in the usual way
//fc::raw::unpack<Stream, graphene::db::abstract_object<match_object> >(s, match_obj);
fc::raw::unpack(s, match_obj.id);
@ -151,10 +150,9 @@ namespace graphene { namespace chain {
fc::raw::unpack(s, stringified_stream);
std::istringstream stream(stringified_stream);
match_obj.unpack_impl(stream);
return s;
}
} }
FC_REFLECT_ENUM(graphene::chain::match_state,
@ -162,6 +160,52 @@ FC_REFLECT_ENUM(graphene::chain::match_state,
(match_in_progress)
(match_complete))
//FC_REFLECT_TYPENAME(graphene::chain::match_object) // manually serialized
FC_REFLECT(graphene::chain::match_object, (players))
namespace fc {
template<>
template<>
inline void if_enum<fc::false_type>::from_variant(const variant &vo, graphene::chain::match_object &v, uint32_t max_depth) {
from_variant(vo, v, max_depth);
}
template<>
template<>
inline void if_enum<fc::false_type>::to_variant(const graphene::chain::match_object &v, variant &vo, uint32_t max_depth) {
to_variant(v, vo, max_depth);
}
namespace raw { namespace detail {
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<size_t> &s, const graphene::chain::match_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<char*> &s, const graphene::chain::match_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::unpack(fc::datastream<const char*> &s, graphene::chain::match_object &v, uint32_t) {
s >> v;
}
} } // namespace fc::raw::detail
template <>
struct get_typename<graphene::chain::match_object> {
static const char *name() {
return "graphene::chain::match_object";
}
};
template <>
struct reflector<graphene::chain::match_object> {
typedef graphene::chain::match_object type;
typedef fc::true_type is_defined;
typedef fc::false_type is_enum;
};
} // namespace fc


@ -130,6 +130,9 @@ namespace graphene { namespace chain {
std::greater< uint32_t >,
std::greater< object_id_type >
>
>,
ordered_non_unique< tag<by_owner>,
member<nft_metadata_object, account_id_type, &nft_metadata_object::owner>
>
>
>;


@ -28,6 +28,7 @@
#include <graphene/chain/protocol/special_authority.hpp>
#include <graphene/chain/protocol/types.hpp>
#include <graphene/chain/protocol/vote.hpp>
#include <graphene/chain/sidechain_defs.hpp>
namespace graphene { namespace chain {
@ -35,8 +36,28 @@ namespace graphene { namespace chain {
bool is_cheap_name( const string& n );
/// These are the fields which can be updated by the active authority.
struct account_options
{
struct ext
{
/// The number of active son members this account votes the blockchain should appoint
/// Must not exceed the actual number of son members voted for in @ref votes
optional< flat_map<sidechain_type, uint16_t> > num_son;
/// Returns an empty num_son map containing every sidechain
static flat_map<sidechain_type, uint16_t> empty_num_son()
{
flat_map<sidechain_type, uint16_t> num_son;
for(const auto& active_sidechain_type : all_sidechain_types)
{
num_son[active_sidechain_type] = 0;
}
return num_son;
}
};
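The per-account SON vote count moves out of the fixed fields and into this extension as an optional per-sidechain map, with empty_num_son() providing a zeroed entry for every sidechain. A hedged sketch of populating it; the extension<ext>::value access mirrors how other extensions are read later in this diff, and the helper name is hypothetical:

// Illustrative sketch only: filling the new per-sidechain num_son extension.
#include <graphene/chain/protocol/account.hpp>               // assumed header for account_options

graphene::chain::account_options make_options_voting_for_sons() {
   using namespace graphene::chain;
   account_options opts;
   auto num_son = account_options::ext::empty_num_son();     // every sidechain present, count 0
   num_son[sidechain_type::bitcoin]  = 5;                    // vote for 5 bitcoin SONs
   num_son[sidechain_type::ethereum] = 5;                    // and 5 ethereum SONs
   opts.extensions.value.num_son = num_son;
   return opts;
}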
/// The memo key is the key this account will typically use to encrypt/sign transaction memos and other non-
/// validated account activities. This field is here to prevent confusion if the active authority has zero or
/// multiple keys in it.
@ -52,14 +73,11 @@ namespace graphene { namespace chain {
/// The number of active committee members this account votes the blockchain should appoint
/// Must not exceed the actual number of committee members voted for in @ref votes
uint16_t num_committee = 0;
/// The number of active son members this account votes the blockchain should appoint
/// Must not exceed the actual number of son members voted for in @ref votes
uint16_t num_son = 0;
/// This is the list of vote IDs this account votes for. The weight of these votes is determined by this
/// account's balance of core asset.
flat_set<vote_id_type> votes;
extensions_type extensions;
extension< ext > extensions;
/// Whether this account is voting
inline bool is_voting() const
{
@ -244,7 +262,7 @@ namespace graphene { namespace chain {
*/
struct account_upgrade_operation : public base_operation
{
struct fee_parameters_type {
uint64_t membership_annual_fee = 2000 * GRAPHENE_BLOCKCHAIN_PRECISION;
uint64_t membership_lifetime_fee = 10000 * GRAPHENE_BLOCKCHAIN_PRECISION; ///< the cost to upgrade to a lifetime member
};
@ -289,6 +307,7 @@ namespace graphene { namespace chain {
} } // graphene::chain
FC_REFLECT(graphene::chain::account_options::ext, (num_son))
FC_REFLECT(graphene::chain::account_options, (memo_key)(voting_account)(num_witness)(num_committee)(votes)(extensions))
// FC_REFLECT_TYPENAME( graphene::chain::account_whitelist_operation::account_listing)
FC_REFLECT_ENUM( graphene::chain::account_whitelist_operation::account_listing,


@ -70,6 +70,7 @@ namespace graphene { namespace chain {
optional < uint16_t > maximum_son_count = GRAPHENE_DEFAULT_MAX_SONS; ///< maximum number of active SONS
optional < asset_id_type > hbd_asset = asset_id_type();
optional < asset_id_type > hive_asset = asset_id_type();
optional < asset_id_type > eth_asset = asset_id_type();
};
struct chain_parameters
@ -220,6 +221,9 @@ namespace graphene { namespace chain {
inline asset_id_type hive_asset() const {
return extensions.value.hive_asset.valid() ? *extensions.value.hive_asset : asset_id_type();
}
inline asset_id_type eth_asset() const {
return extensions.value.eth_asset.valid() ? *extensions.value.eth_asset : asset_id_type();
}
private:
static void safe_copy(chain_parameters& to, const chain_parameters& from);
};
@ -257,6 +261,7 @@ FC_REFLECT( graphene::chain::parameter_extension,
(maximum_son_count)
(hbd_asset)
(hive_asset)
(eth_asset)
)
FC_REFLECT( graphene::chain::chain_parameters,


@ -111,12 +111,12 @@ struct stealth_confirmation
/**
* Packs *this then encodes as base58 encoded string.
*/
operator string()const;
//operator string()const;
/**
* Unpacks from a base58 string
*/
stealth_confirmation( const std::string& base58 );
stealth_confirmation(){}
//stealth_confirmation( const std::string& base58 );
//stealth_confirmation(){}
public_key_type one_time_key;
optional<public_key_type> to;
@ -152,7 +152,6 @@ struct transfer_to_blind_operation : public base_operation
uint32_t price_per_output = 5*GRAPHENE_BLOCKCHAIN_PRECISION;
};
asset fee;
asset amount;
account_id_type from;
@ -160,8 +159,8 @@ struct transfer_to_blind_operation : public base_operation
vector<blind_output> outputs;
account_id_type fee_payer()const { return from; }
void validate()const;
share_type calculate_fee(const fee_parameters_type& )const;
//void validate()const;
//share_type calculate_fee(const fee_parameters_type& )const;
};
/**
@ -181,13 +180,12 @@ struct transfer_from_blind_operation : public base_operation
vector<blind_input> inputs;
account_id_type fee_payer()const { return GRAPHENE_TEMP_ACCOUNT; }
void validate()const;
void get_required_authorities( vector<authority>& a )const
{
for( const auto& in : inputs )
a.push_back( in.owner );
}
//void validate()const;
//void get_required_authorities( vector<authority>& a )const
//{
// for( const auto& in : inputs )
// a.push_back( in.owner );
//}
};
/**
@ -243,17 +241,16 @@ struct blind_transfer_operation : public base_operation
asset fee;
vector<blind_input> inputs;
vector<blind_output> outputs;
/** graphene TEMP account */
account_id_type fee_payer()const;
void validate()const;
share_type calculate_fee( const fee_parameters_type& k )const;
void get_required_authorities( vector<authority>& a )const
{
for( const auto& in : inputs )
a.push_back( in.owner );
}
/** graphene TEMP account */
account_id_type fee_payer()const { return GRAPHENE_TEMP_ACCOUNT; }
//void validate()const;
//share_type calculate_fee( const fee_parameters_type& k )const;
//void get_required_authorities( vector<authority>& a )const
//{
// for( const auto& in : inputs )
// a.push_back( in.owner );
//}
};
///@} endgroup stealth


@ -18,7 +18,7 @@ namespace graphene
// Buyer purchasing lottery tickets
account_id_type buyer;
// count of tickets to buy
uint64_t tickets_to_buy;
share_type tickets_to_buy;
// amount that can spent
asset amount;
@ -83,4 +83,4 @@ FC_REFLECT(graphene::chain::nft_lottery_reward_operation::fee_parameters_type, (
FC_REFLECT(graphene::chain::nft_lottery_end_operation::fee_parameters_type, (fee))
FC_REFLECT(graphene::chain::nft_lottery_token_purchase_operation, (fee)(lottery_id)(buyer)(tickets_to_buy)(amount)(extensions))
FC_REFLECT(graphene::chain::nft_lottery_reward_operation, (fee)(lottery_id)(winner)(amount)(win_percentage)(is_benefactor_reward)(winner_ticket_id)(extensions))
FC_REFLECT(graphene::chain::nft_lottery_end_operation, (fee)(lottery_id)(extensions))


@ -106,9 +106,9 @@ namespace graphene { namespace chain {
assert_operation,
balance_claim_operation,
override_transfer_operation,
transfer_to_blind_operation,
blind_transfer_operation,
transfer_from_blind_operation,
transfer_to_blind_operation, //! We don't use this operation
blind_transfer_operation, //! We don't use this operation
transfer_from_blind_operation, //! We don't use this operation
asset_settle_cancel_operation, // VIRTUAL
asset_claim_fees_operation,
fba_distribute_operation, // VIRTUAL


@ -71,7 +71,7 @@ FC_REFLECT( graphene::chain::sidechain_transaction_create_operation, (fee)(payer
(sidechain)
(object_id)
(transaction)
(signers) )
(signers))
FC_REFLECT( graphene::chain::sidechain_transaction_sign_operation::fee_parameters_type, (fee) )
FC_REFLECT( graphene::chain::sidechain_transaction_sign_operation, (fee)(signer)(payer)


@ -33,7 +33,6 @@ namespace graphene { namespace chain {
optional<public_key_type> new_signing_key;
optional<flat_map<sidechain_type, string>> new_sidechain_public_keys;
optional<vesting_balance_id_type> new_pay_vb;
optional<son_status> new_status;
account_id_type fee_payer()const { return owner_account; }
share_type calculate_fee(const fee_parameters_type& k)const { return 0; }
@ -105,7 +104,7 @@ FC_REFLECT(graphene::chain::son_create_operation, (fee)(owner_account)(url)(depo
FC_REFLECT(graphene::chain::son_update_operation::fee_parameters_type, (fee) )
FC_REFLECT(graphene::chain::son_update_operation, (fee)(son_id)(owner_account)(new_url)(new_deposit)
(new_signing_key)(new_sidechain_public_keys)(new_pay_vb)(new_status) )
(new_signing_key)(new_sidechain_public_keys)(new_pay_vb) )
FC_REFLECT(graphene::chain::son_deregister_operation::fee_parameters_type, (fee) )
FC_REFLECT(graphene::chain::son_deregister_operation, (fee)(son_id)(payer) )


@ -1,40 +1,47 @@
#pragma once
#include <graphene/chain/protocol/base.hpp>
#include <graphene/chain/son_info.hpp>
#include <graphene/chain/son_sidechain_info.hpp>
namespace graphene { namespace chain {
struct son_wallet_recreate_operation : public base_operation
{
struct fee_parameters_type { uint64_t fee = 0; };
struct ext
{
optional<flat_map<sidechain_type, vector<son_sidechain_info> > > sidechain_sons;
};
asset fee;
account_id_type payer;
vector<son_info> sons;
extension< ext > extensions;
account_id_type fee_payer()const { return payer; }
share_type calculate_fee(const fee_parameters_type& k)const { return 0; }
};
struct son_wallet_update_operation : public base_operation
{
struct fee_parameters_type { uint64_t fee = 0; };
asset fee;
account_id_type payer;
son_wallet_id_type son_wallet_id;
sidechain_type sidechain;
string address;
account_id_type fee_payer()const { return payer; }
share_type calculate_fee(const fee_parameters_type& k)const { return 0; }
};
} } // namespace graphene::chain
FC_REFLECT(graphene::chain::son_wallet_recreate_operation::fee_parameters_type, (fee) )
FC_REFLECT(graphene::chain::son_wallet_recreate_operation, (fee)(payer)(sons) )
FC_REFLECT(graphene::chain::son_wallet_recreate_operation::ext, (sidechain_sons))
FC_REFLECT(graphene::chain::son_wallet_recreate_operation, (fee)(payer)(sons)(extensions) )
FC_REFLECT(graphene::chain::son_wallet_update_operation::fee_parameters_type, (fee) )
FC_REFLECT(graphene::chain::son_wallet_update_operation, (fee)(payer)(son_wallet_id)(sidechain)(address) )


@ -395,6 +395,13 @@ namespace graphene { namespace chain {
bool is_valid_muse( const std::string& base58str );
};
class pubkey_comparator {
public:
inline bool operator()(const public_key_type& a, const public_key_type& b) const {
return a.key_data < b.key_data;
}
};
struct extended_public_key_type
{
struct binary_key
@ -577,6 +584,8 @@ FC_REFLECT_TYPENAME( graphene::chain::fba_accumulator_id_type )
FC_REFLECT_TYPENAME( graphene::chain::betting_market_position_id_type )
FC_REFLECT_TYPENAME( graphene::chain::global_betting_statistics_id_type )
FC_REFLECT_TYPENAME( graphene::chain::tournament_details_id_type )
FC_REFLECT_TYPENAME( graphene::chain::game_id_type )
FC_REFLECT_TYPENAME( graphene::chain::match_id_type )
FC_REFLECT_TYPENAME( graphene::chain::custom_permission_id_type )
FC_REFLECT_TYPENAME( graphene::chain::custom_account_authority_id_type )
FC_REFLECT_TYPENAME( graphene::chain::offer_history_id_type )


@ -59,7 +59,9 @@ struct vote_id_type
committee,
witness,
worker,
son,
son_bitcoin,
son_hive,
son_ethereum,
VOTE_TYPE_COUNT
};
@ -144,7 +146,7 @@ void from_variant( const fc::variant& var, graphene::chain::vote_id_type& vo, ui
FC_REFLECT_TYPENAME( fc::flat_set<graphene::chain::vote_id_type> )
FC_REFLECT_ENUM( graphene::chain::vote_id_type::vote_type, (witness)(committee)(worker)(son)(VOTE_TYPE_COUNT) )
FC_REFLECT_ENUM( graphene::chain::vote_id_type::vote_type, (witness)(committee)(worker)(son_bitcoin)(son_hive)(son_ethereum)(VOTE_TYPE_COUNT) )
FC_REFLECT( graphene::chain::vote_id_type, (content) )
GRAPHENE_EXTERNAL_SERIALIZATION( extern, graphene::chain::vote_id_type )
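The single son vote bucket splits into son_bitcoin, son_hive and son_ethereum, so a SON candidate is tallied separately per sidechain (matching the per-sidechain sidechain_vote_ids map added to son_object later in this diff). A hedged sketch of constructing such an id; it assumes graphene's usual vote_id_type(vote_type, instance) constructor, which this diff does not show:

// Illustrative sketch only; assumes the vote_id_type(vote_type, instance) constructor.
#include <graphene/chain/protocol/vote.hpp>
#include <cstdint>

graphene::chain::vote_id_type nth_ethereum_son_vote(uint32_t instance) {
   using graphene::chain::vote_id_type;
   // One id per (type, instance); the type now identifies which sidechain is being voted on.
   return vote_id_type(vote_id_type::son_ethereum, instance);
}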


@ -31,11 +31,20 @@ namespace graphene { namespace chain {
time_point_sec expires;
sidechain_address_object() :
sidechain(sidechain_type::bitcoin),
sidechain(sidechain_type::bitcoin), //! FIXME - bitcoin ???
deposit_public_key(""),
deposit_address(""),
withdraw_public_key(""),
withdraw_address("") {}
inline string get_deposit_address() const {
if(sidechain_type::ethereum != sidechain)
return deposit_address;
auto deposit_address_lower = deposit_address;
std::transform(deposit_address_lower.begin(), deposit_address_lower.end(), deposit_address_lower.begin(), ::tolower);
return deposit_address_lower;
}
};
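Ethereum deposit addresses are hex strings that usually carry an EIP-55 mixed-case checksum, so the same account can appear with different casing; lowercasing in get_deposit_address() lets the index below match deposits case-insensitively, while Bitcoin and Hive addresses pass through unchanged. A small sketch of the comparison this enables, for illustration only:

// Illustrative sketch only: case-insensitive equality for Ethereum addresses.
// "0xAbC1..." and "0xabc1..." name the same account; only the checksum casing differs.
#include <algorithm>
#include <cctype>
#include <string>

inline bool same_eth_address(std::string a, std::string b) {
   auto lower = [](std::string s) {
      std::transform(s.begin(), s.end(), s.begin(),
                     [](unsigned char c) { return char(std::tolower(c)); });
      return s;
   };
   return lower(std::move(a)) == lower(std::move(b));
}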
struct by_account;
@ -76,7 +85,7 @@ namespace graphene { namespace chain {
ordered_non_unique< tag<by_sidechain_and_deposit_address_and_expires>,
composite_key<sidechain_address_object,
member<sidechain_address_object, sidechain_type, &sidechain_address_object::sidechain>,
member<sidechain_address_object, string, &sidechain_address_object::deposit_address>,
const_mem_fun<sidechain_address_object, string, &sidechain_address_object::get_deposit_address>,
member<sidechain_address_object, time_point_sec, &sidechain_address_object::expires>
>
>


@ -1,6 +1,11 @@
#pragma once
#include <set>
#include <graphene/chain/hardfork.hpp>
#include <fc/reflect/reflect.hpp>
#include <fc/time.hpp>
namespace graphene { namespace chain {
@ -13,12 +18,28 @@ enum class sidechain_type {
hive
};
} }
static const std::set<sidechain_type> all_sidechain_types = {sidechain_type::bitcoin, sidechain_type::ethereum, sidechain_type::hive};
inline std::set<sidechain_type> active_sidechain_types(const fc::time_point_sec block_time) {
std::set<sidechain_type> active_sidechain_types{};
if (block_time >= HARDFORK_SON_TIME)
active_sidechain_types.insert(sidechain_type::bitcoin);
if (block_time >= HARDFORK_SON_FOR_HIVE_TIME)
active_sidechain_types.insert(sidechain_type::hive);
if (block_time >= HARDFORK_SON_FOR_ETHEREUM_TIME)
active_sidechain_types.insert(sidechain_type::ethereum);
return active_sidechain_types;
}
} // namespace chain
} // namespace graphene
FC_REFLECT_ENUM(graphene::chain::sidechain_type,
(unknown)
(bitcoin)
(ethereum)
(eos)
(hive)
(peerplays) )
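all_sidechain_types enumerates every supported sidechain, while active_sidechain_types(block_time) narrows that to the ones whose SON hardfork has activated at the given time, which is why Ethereum-related processing only begins once HARDFORK_SON_FOR_ETHEREUM_TIME has passed. A hedged usage sketch; the loop body comment is an assumption about typical call sites:

// Illustrative sketch only: iterating just the sidechains that are live at a given time.
#include <graphene/chain/sidechain_defs.hpp>      // header shown above; includes fc/time.hpp

inline int count_active_sidechains(const fc::time_point_sec head_time) {
   int n = 0;
   for (const auto sidechain : graphene::chain::active_sidechain_types(head_time)) {
      (void)sidechain;                             // e.g. refresh that sidechain's schedule
      ++n;
   }
   return n;
}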


@ -2,7 +2,7 @@
#include <boost/multi_index/composite_key.hpp>
#include <graphene/chain/protocol/types.hpp>
#include <graphene/chain/sidechain_defs.hpp>
#include <graphene/chain/son_info.hpp>
#include <graphene/chain/son_sidechain_info.hpp>
namespace graphene { namespace chain {
using namespace graphene::db;
@ -30,7 +30,7 @@ namespace graphene { namespace chain {
sidechain_type sidechain = sidechain_type::unknown;
object_id_type object_id;
std::string transaction;
std::vector<son_info> signers;
std::vector<son_sidechain_info> signers;
std::vector<std::pair<son_id_type, std::string>> signatures;
std::string sidechain_transaction;


@ -3,12 +3,11 @@
#include <graphene/chain/sidechain_defs.hpp>
namespace graphene { namespace chain {
using namespace graphene::db;
/**
* @class son_info
* @brief Tracks the SON information required to re/create the primary wallet
* @ingroup object
*/
struct son_info {
son_id_type son_id;
@ -26,11 +25,11 @@ namespace graphene { namespace chain {
if (son_sets_equal) {
bool sidechain_public_keys_equal = true;
for (size_t i = 0; i < sidechain_public_keys.size(); i++) {
const auto lhs_scpk = sidechain_public_keys.nth(i);
const auto rhs_scpk = rhs.sidechain_public_keys.nth(i);
sidechain_public_keys_equal = sidechain_public_keys_equal &&
(lhs_scpk->first == rhs_scpk->first) &&
(lhs_scpk->second == rhs_scpk->second);
}
son_sets_equal = son_sets_equal && sidechain_public_keys_equal;
}
@ -40,8 +39,4 @@ namespace graphene { namespace chain {
} }
FC_REFLECT( graphene::chain::son_info, (son_id) (weight) (signing_key) (sidechain_public_keys) )


@ -35,15 +35,15 @@ namespace graphene { namespace chain {
// Transactions signed since the last son payouts
flat_map<sidechain_type, uint64_t> txs_signed;
// Total Voted Active time i.e. duration selected as part of voted active SONs
uint64_t total_voted_time = 0;
flat_map<sidechain_type, uint64_t> total_voted_time;
// Total Downtime barring the current down time in seconds, used for stats to present to user
uint64_t total_downtime = 0;
flat_map<sidechain_type, uint64_t> total_downtime;
// Current Interval Downtime since last maintenance
uint64_t current_interval_downtime = 0;
flat_map<sidechain_type, uint64_t> current_interval_downtime;
// Down timestamp, if son status is in_maintenance use this
fc::time_point_sec last_down_timestamp;
flat_map<sidechain_type, fc::time_point_sec> last_down_timestamp;
// Last Active heartbeat timestamp
fc::time_point_sec last_active_timestamp;
flat_map<sidechain_type, fc::time_point_sec> last_active_timestamp;
// Deregistered Timestamp
fc::time_point_sec deregistered_timestamp;
// Total sidechain transactions reported by SON network while SON was active
@ -64,23 +64,48 @@ namespace graphene { namespace chain {
static const uint8_t type_id = son_object_type;
account_id_type son_account;
vote_id_type vote_id;
uint64_t total_votes = 0;
flat_map<sidechain_type, vote_id_type> sidechain_vote_ids;
flat_map<sidechain_type, uint64_t> total_votes = []()
{
flat_map<sidechain_type, uint64_t> total_votes;
for(const auto& active_sidechain_type : all_sidechain_types)
{
total_votes[active_sidechain_type] = 0;
}
return total_votes;
}();
string url;
vesting_balance_id_type deposit;
public_key_type signing_key;
vesting_balance_id_type pay_vb;
son_statistics_id_type statistics;
son_status status = son_status::inactive;
flat_map<sidechain_type, son_status> statuses = []()
{
flat_map<sidechain_type, son_status> statuses;
for(const auto& active_sidechain_type : all_sidechain_types)
{
statuses[active_sidechain_type] = son_status::inactive;
}
return statuses;
}();
flat_map<sidechain_type, string> sidechain_public_keys;
void pay_son_fee(share_type pay, database& db);
bool has_valid_config()const;
bool has_valid_config(time_point_sec head_block_time)const;
bool has_valid_config(time_point_sec head_block_time, sidechain_type sidechain) const;
inline optional<vote_id_type> get_sidechain_vote_id(sidechain_type sidechain) const { return sidechain_vote_ids.contains(sidechain) ? sidechain_vote_ids.at(sidechain) : optional<vote_id_type>{}; }
inline optional<vote_id_type> get_bitcoin_vote_id() const { return get_sidechain_vote_id(sidechain_type::bitcoin); }
inline optional<vote_id_type> get_hive_vote_id() const { return get_sidechain_vote_id(sidechain_type::hive); }
inline optional<vote_id_type> get_ethereum_vote_id() const { return get_sidechain_vote_id(sidechain_type::ethereum); }
private:
bool has_valid_config(sidechain_type sidechain) const;
};
struct by_account;
struct by_vote_id;
struct by_vote_id_bitcoin;
struct by_vote_id_hive;
struct by_vote_id_ethereum;
using son_multi_index_type = multi_index_container<
son_object,
indexed_by<
@ -90,8 +115,14 @@ namespace graphene { namespace chain {
ordered_unique< tag<by_account>,
member<son_object, account_id_type, &son_object::son_account>
>,
ordered_unique< tag<by_vote_id>,
member<son_object, vote_id_type, &son_object::vote_id>
ordered_non_unique< tag<by_vote_id_bitcoin>,
const_mem_fun<son_object, optional<vote_id_type>, &son_object::get_bitcoin_vote_id>
>,
ordered_non_unique< tag<by_vote_id_hive>,
const_mem_fun<son_object, optional<vote_id_type>, &son_object::get_hive_vote_id>
>,
ordered_non_unique< tag<by_vote_id_ethereum>,
const_mem_fun<son_object, optional<vote_id_type>, &son_object::get_ethereum_vote_id>
>
>
>;
@ -117,14 +148,14 @@ FC_REFLECT_ENUM(graphene::chain::son_status, (inactive)(active)(request_maintena
FC_REFLECT_DERIVED( graphene::chain::son_object, (graphene::db::object),
(son_account)
(vote_id)
(sidechain_vote_ids)
(total_votes)
(url)
(deposit)
(signing_key)
(pay_vb)
(statistics)
(status)
(statuses)
(sidechain_public_keys)
)
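The two immediately-invoked lambdas above pre-populate the per-sidechain maps so that every sidechain key exists with a defined default before the object is ever read. A minimal standalone sketch of that initialization pattern, using std::map and a stand-in enum in place of the chain's flat_map and sidechain_type (all names here are illustrative, not part of the diff):

#include <cstdint>
#include <iostream>
#include <map>

enum class sidechain { bitcoin, ethereum, hive };                 // stand-in for sidechain_type
constexpr sidechain all_sidechains[] = { sidechain::bitcoin,
                                         sidechain::ethereum,
                                         sidechain::hive };       // stand-in for all_sidechain_types

struct son_like {
   // every key is created up front with a zero default, so later
   // lookups never hit a missing entry
   std::map<sidechain, uint64_t> total_votes = []() {
      std::map<sidechain, uint64_t> votes;
      for (const auto s : all_sidechains)
         votes[s] = 0;
      return votes;
   }();
};

int main() {
   son_like son;
   std::cout << son.total_votes.size() << "\n";   // prints 3
}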

View file

@ -0,0 +1,31 @@
#pragma once
#include <graphene/chain/protocol/types.hpp>
#include <graphene/chain/sidechain_defs.hpp>
namespace graphene { namespace chain {
/**
* @class son_sidechain_info
* @brief tracks the per-sidechain SON information required to re/create the primary wallet
* @ingroup object
*/
struct son_sidechain_info {
son_id_type son_id;
weight_type weight = 0;
public_key_type signing_key;
string public_key;
bool operator==(const son_sidechain_info& rhs) const {
bool son_sets_equal =
(son_id == rhs.son_id) &&
(weight == rhs.weight) &&
(signing_key == rhs.signing_key) &&
(public_key == rhs.public_key);
return son_sets_equal;
}
};
} }
FC_REFLECT( graphene::chain::son_sidechain_info, (son_id) (weight) (signing_key) (public_key) )

View file

@ -18,7 +18,7 @@ namespace graphene { namespace chain {
static const uint8_t type_id = son_wallet_deposit_object_type;
time_point_sec timestamp;
uint32_t block_num;
uint32_t block_num = 0;
sidechain_type sidechain = sidechain_type::unknown;
std::string sidechain_uid;
std::string sidechain_transaction_id;

View file

@ -1,6 +1,6 @@
#pragma once
#include <graphene/chain/protocol/types.hpp>
#include <graphene/chain/son_info.hpp>
#include <graphene/chain/son_sidechain_info.hpp>
#include <graphene/chain/sidechain_defs.hpp>
namespace graphene { namespace chain {
@ -21,7 +21,7 @@ namespace graphene { namespace chain {
time_point_sec expires;
flat_map<sidechain_type, string> addresses;
vector<son_info> sons;
flat_map<sidechain_type, vector<son_sidechain_info> > sons;
};
struct by_valid_from;

View file

@ -18,7 +18,7 @@ namespace graphene { namespace chain {
static const uint8_t type_id = son_wallet_withdraw_object_type;
time_point_sec timestamp;
uint32_t block_num;
uint32_t block_num = 0;
sidechain_type sidechain = sidechain_type::unknown;
std::string peerplays_uid;
std::string peerplays_transaction_id;

View file

@ -1,8 +1,7 @@
#pragma once
#include <graphene/chain/protocol/tournament.hpp>
#include <graphene/chain/rock_paper_scissors.hpp>
#include <boost/multi_index/composite_key.hpp>
#include <graphene/db/flat_index.hpp>
#include <graphene/db/object.hpp>
#include <graphene/db/generic_index.hpp>
#include <fc/crypto/hex.hpp>
#include <sstream>
@ -11,7 +10,7 @@ namespace graphene { namespace chain {
class tournament_object;
} }
namespace fc {
namespace fc {
void to_variant(const graphene::chain::tournament_object& tournament_obj, fc::variant& v, uint32_t max_depth = 1);
void from_variant(const fc::variant& v, graphene::chain::tournament_object& tournament_obj, uint32_t max_depth = 1);
} //end namespace fc
@ -154,10 +153,9 @@ namespace graphene { namespace chain {
> tournament_details_object_multi_index_type;
typedef generic_index<tournament_details_object, tournament_details_object_multi_index_type> tournament_details_index;
template<typename Stream>
inline Stream& operator<<( Stream& s, const tournament_object& tournament_obj )
{
{
fc_elog(fc::logger::get("tournament"), "In tournament_obj to_raw");
// pack all fields exposed in the header in the usual way
// instead of calling the derived pack, just serialize the one field in the base class
@ -175,15 +173,16 @@ namespace graphene { namespace chain {
std::ostringstream stream;
tournament_obj.pack_impl(stream);
std::string stringified_stream(stream.str());
fc_elog(fc::logger::get("tournament"), "Serialized state ${state} to bytes ${bytes}",
fc_elog(fc::logger::get("tournament"), "Serialized state ${state} to bytes ${bytes}",
("state", tournament_obj.get_state())("bytes", fc::to_hex(stringified_stream.c_str(), stringified_stream.size())));
fc::raw::pack(s, stream.str());
return s;
}
template<typename Stream>
inline Stream& operator>>( Stream& s, tournament_object& tournament_obj )
{
{
fc_elog(fc::logger::get("tournament"), "In tournament_obj from_raw");
// unpack all fields exposed in the header in the usual way
//fc::raw::unpack<Stream, graphene::db::abstract_object<tournament_object> >(s, tournament_obj);
@ -201,9 +200,9 @@ namespace graphene { namespace chain {
fc::raw::unpack(s, stringified_stream);
std::istringstream stream(stringified_stream);
tournament_obj.unpack_impl(stream);
fc_elog(fc::logger::get("tournament"), "Deserialized state ${state} from bytes ${bytes}",
fc_elog(fc::logger::get("tournament"), "Deserialized state ${state} from bytes ${bytes}",
("state", tournament_obj.get_state())("bytes", fc::to_hex(stringified_stream.c_str(), stringified_stream.size())));
return s;
}
@ -230,8 +229,6 @@ namespace graphene { namespace chain {
flat_set<account_id_type> before_account_ids;
};
} }
FC_REFLECT_DERIVED(graphene::chain::tournament_details_object, (graphene::db::object),
@ -240,8 +237,7 @@ FC_REFLECT_DERIVED(graphene::chain::tournament_details_object, (graphene::db::ob
(payers)
(players_payers)
(matches))
//FC_REFLECT_TYPENAME(graphene::chain::tournament_object) // manually serialized
FC_REFLECT(graphene::chain::tournament_object, (creator))
FC_REFLECT_ENUM(graphene::chain::tournament_state,
(accepting_registrations)
(awaiting_start)
@ -249,3 +245,52 @@ FC_REFLECT_ENUM(graphene::chain::tournament_state,
(registration_period_expired)
(concluded))
namespace fc {
template<>
template<>
inline void if_enum<fc::false_type>::from_variant(const variant &vo, graphene::chain::tournament_object &v, uint32_t max_depth) {
from_variant(vo, v, max_depth);
}
template<>
template<>
inline void if_enum<fc::false_type>::to_variant(const graphene::chain::tournament_object &v, variant &vo, uint32_t max_depth) {
to_variant(v, vo, max_depth);
}
namespace raw { namespace detail {
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<size_t> &s, const graphene::chain::tournament_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::pack(fc::datastream<char*> &s, const graphene::chain::tournament_object &v, uint32_t) {
s << v;
}
template<>
template<>
inline void if_enum<fc::false_type>::unpack(fc::datastream<const char*> &s, graphene::chain::tournament_object &v, uint32_t) {
s >> v;
}
} } // namespace fc::raw::detail
template <>
struct get_typename<graphene::chain::tournament_object> {
static const char *name() {
return "graphene::chain::tournament_object";
}
};
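The container change above swaps an unordered set for an ordered set with std::greater, so iterating new_ids now visits the most recently created ids first. A tiny standalone illustration of that ordering (values are arbitrary):

#include <cassert>
#include <functional>
#include <set>
#include <vector>

int main() {
   // ids inserted in creation order, iterated highest-first thanks to std::greater
   std::set<int, std::greater<int>> new_ids{101, 102, 103};
   std::vector<int> visit_order(new_ids.begin(), new_ids.end());
   assert((visit_order == std::vector<int>{103, 102, 101}));
}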
template <>
struct reflector<graphene::chain::tournament_object> {
typedef graphene::chain::tournament_object type;
typedef fc::true_type is_defined;
typedef fc::false_type is_enum;
};
} // namespace fc

View file

@ -0,0 +1,40 @@
#pragma once
#include <graphene/chain/protocol/vote.hpp>
namespace graphene { namespace chain {
/**
* @class voters_info_object
* @ingroup object
*/
struct voters_info_object {
vote_id_type vote_id;
vector<account_id_type> voters;
};
/**
* @class voters_info
* @brief tracks information about voters
* @ingroup object
*/
struct voters_info {
optional<voters_info_object> voters_for_committee_member;
optional<voters_info_object> voters_for_witness;
optional<vector<voters_info_object> > voters_for_workers;
optional<vector<voters_info_object> > voters_against_workers;
optional<flat_map<sidechain_type, voters_info_object> > voters_for_son;
};
} } // graphene::chain
FC_REFLECT( graphene::chain::voters_info_object,
(vote_id)
(voters) )
FC_REFLECT( graphene::chain::voters_info,
(voters_for_committee_member)
(voters_for_witness)
(voters_for_workers)
(voters_against_workers)
(voters_for_son))

View file

@ -0,0 +1,40 @@
#pragma once
#include <graphene/chain/protocol/vote.hpp>
namespace graphene { namespace chain {
/**
* @class votes_info_object
* @ingroup object
*/
struct votes_info_object {
vote_id_type vote_id;
object_id_type id;
};
/**
* @class votes_info
* @brief tracks information about votes
* @ingroup object
*/
struct votes_info {
optional< vector< votes_info_object > > votes_for_committee_members;
optional< vector< votes_info_object > > votes_for_witnesses;
optional< vector< votes_info_object > > votes_for_workers;
optional< vector< votes_info_object > > votes_against_workers;
optional< flat_map<sidechain_type, vector< votes_info_object > > > votes_for_sons;
};
} } // graphene::chain
FC_REFLECT( graphene::chain::votes_info_object,
(vote_id)
(id) )
FC_REFLECT( graphene::chain::votes_info,
(votes_for_committee_members)
(votes_for_witnesses)
(votes_for_workers)
(votes_against_workers)
(votes_for_sons))

View file

@ -96,7 +96,7 @@ class son_schedule_object : public graphene::db::abstract_object<son_schedule_ob
static const uint8_t space_id = implementation_ids;
static const uint8_t type_id = impl_son_schedule_object_type;
vector< son_id_type > current_shuffled_sons;
vector<son_id_type > current_shuffled_sons;
son_scheduler scheduler;
uint32_t last_scheduling_block;

View file

@ -162,8 +162,12 @@ class generic_witness_scheduler
_schedule.pop_front();
auto it = _lame_duck.find( result );
if( it != _lame_duck.end() )
_lame_duck.erase( it );
if( it != _lame_duck.end() ) {
set< WitnessID > removal_set;
removal_set.insert(*it);
remove_all( removal_set );
_lame_duck.erase(it);
}
if( debug ) check_invariant();
return result;
}
@ -389,7 +393,7 @@ class generic_witness_scheduler
// scheduled
std::deque < WitnessID > _schedule;
// in _schedule, but not to be replaced
// in _schedule, but must be removed
set< WitnessID > _lame_duck;
};

View file

@ -362,7 +362,7 @@ namespace graphene { namespace chain {
} } // graphene::chain
namespace fc {
namespace fc {
// Manually reflect match_object to variant to properly reflect "state"
void to_variant(const graphene::chain::match_object& match_obj, fc::variant& v, uint32_t max_depth)
{ try {

View file

@ -30,7 +30,7 @@ namespace graphene
auto lottery_options = lottery_md_obj.lottery_data->lottery_options;
FC_ASSERT(lottery_options.ticket_price.asset_id == op.amount.asset_id);
FC_ASSERT((double)op.amount.amount.value / lottery_options.ticket_price.amount.value == (double)op.tickets_to_buy);
FC_ASSERT(op.tickets_to_buy * lottery_options.ticket_price.amount.value == op.amount.amount.value);
return void_result();
}
FC_CAPTURE_AND_RETHROW((op))
@ -142,4 +142,4 @@ namespace graphene
FC_CAPTURE_AND_RETHROW((op))
}
} // namespace chain
} // namespace graphene
} // namespace graphene
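The assertion change above replaces a floating-point division check with an exact integer multiplication (tickets_to_buy * ticket_price == amount), which sidesteps double rounding entirely. A small standalone check contrasting the two forms; the values are chosen purely to expose the rounding, not to reflect realistic chain amounts:

#include <cassert>
#include <cstdint>

int main() {
   const int64_t ticket_price = 1;
   const int64_t tickets_to_buy = (int64_t(1) << 53) + 1;   // not exactly representable as a double
   const int64_t amount = int64_t(1) << 53;                 // one ticket short of the exact total

   // exact integer form (the replacement check) correctly rejects the mismatch
   assert(tickets_to_buy * ticket_price != amount);

   // floating-point form (the replaced check) accepts it, because both sides
   // round to the same double value
   assert((double)amount / ticket_price == (double)tickets_to_buy);
}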

View file

@ -302,8 +302,7 @@ void_result proposal_create_evaluator::do_evaluate( const proposal_create_operat
vector<authority> other;
for( auto& op : o.proposed_ops )
{
operation_get_required_authorities( op.op, auths, auths, other,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(block_time) );
operation_get_required_authorities( op.op, auths, auths, other, true );
}
FC_ASSERT( other.size() == 0 ); // TODO: what about other???
@ -352,8 +351,7 @@ object_id_type proposal_create_evaluator::do_apply( const proposal_create_operat
// TODO: consider caching values from evaluate?
for( auto& op : _proposed_trx.operations )
operation_get_required_authorities( op, required_active, proposal.required_owner_approvals, other,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(chain_time) );
operation_get_required_authorities( op, required_active, proposal.required_owner_approvals, other, true);
//All accounts which must provide both owner and active authority should be omitted from the active authority set;
//owner authority approval implies active authority approval.

View file

@ -39,7 +39,7 @@ bool proposal_object::is_authorized_to_execute( database& db ) const
[&]( account_id_type id ){ return &id(db).owner; },
[&]( account_id_type id, const operation& op ){
return db.get_account_custom_authorities(id, op); },
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS( db.head_block_time() ),
true,
db.get_global_properties().parameters.max_authority_depth,
true, /* allow committee */
available_active_approvals,

View file

@ -174,22 +174,37 @@ void account_options::validate() const
{
auto needed_witnesses = num_witness;
auto needed_committee = num_committee;
auto needed_sons = num_son;
for( vote_id_type id : votes )
if( id.type() == vote_id_type::witness && needed_witnesses )
--needed_witnesses;
else if ( id.type() == vote_id_type::committee && needed_committee )
--needed_committee;
else if ( id.type() == vote_id_type::son && needed_sons )
--needed_sons;
FC_ASSERT( needed_witnesses == 0,
"May not specify fewer witnesses than the number voted for.");
FC_ASSERT( needed_committee == 0,
"May not specify fewer committee members than the number voted for.");
FC_ASSERT( needed_sons == 0,
"May not specify fewer SONs than the number voted for.");
if ( extensions.value.num_son.valid() )
{
flat_map<sidechain_type, uint16_t> needed_sons = *extensions.value.num_son;
for( vote_id_type id : votes )
if ( id.type() == vote_id_type::son_bitcoin && needed_sons[sidechain_type::bitcoin] )
--needed_sons[sidechain_type::bitcoin];
else if ( id.type() == vote_id_type::son_hive && needed_sons[sidechain_type::hive] )
--needed_sons[sidechain_type::hive];
else if ( id.type() == vote_id_type::son_ethereum && needed_sons[sidechain_type::ethereum] )
--needed_sons[sidechain_type::ethereum];
FC_ASSERT( needed_sons[sidechain_type::bitcoin] == 0,
"May not specify fewer Bitcoin SONs than the number voted for.");
FC_ASSERT( needed_sons[sidechain_type::hive] == 0,
"May not specify fewer Hive SONs than the number voted for.");
FC_ASSERT( needed_sons[sidechain_type::ethereum] == 0,
"May not specify fewer Ethereum SONs than the number voted for.");
}
}
void affiliate_reward_distribution::validate() const
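The per-sidechain branch above decrements a separate counter for each SON vote type and then asserts that every counter reached zero. A compact standalone sketch of the same bookkeeping, with stand-in enums and plain asserts in place of vote_id_type and FC_ASSERT (all names illustrative):

#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

enum class vote_kind { witness, committee, son_bitcoin, son_hive, son_ethereum };
enum class sidechain { bitcoin, hive, ethereum };

// returns true when the declared per-sidechain SON counts are all covered by votes
bool son_counts_match(std::map<sidechain, uint16_t> needed_sons,
                      const std::vector<vote_kind>& votes) {
   for (const auto v : votes) {
      if (v == vote_kind::son_bitcoin && needed_sons[sidechain::bitcoin])
         --needed_sons[sidechain::bitcoin];
      else if (v == vote_kind::son_hive && needed_sons[sidechain::hive])
         --needed_sons[sidechain::hive];
      else if (v == vote_kind::son_ethereum && needed_sons[sidechain::ethereum])
         --needed_sons[sidechain::ethereum];
   }
   for (const auto& entry : needed_sons)
      if (entry.second != 0)
         return false;   // declared more SONs than were actually voted for
   return true;
}

int main() {
   std::map<sidechain, uint16_t> declared{{sidechain::bitcoin, 1}, {sidechain::hive, 0}, {sidechain::ethereum, 0}};
   assert(son_counts_match(declared, {vote_kind::son_bitcoin}));
   assert(!son_counts_match({{sidechain::bitcoin, 2}}, {vote_kind::son_bitcoin}));
}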

View file

@ -22,12 +22,10 @@
* THE SOFTWARE.
*/
#include <graphene/chain/protocol/confidential.hpp>
#include <graphene/chain/confidential_evaluator.hpp>
#include <graphene/chain/database.hpp>
#include <fc/crypto/base58.hpp>
#include <fc/io/raw.hpp>
/*
namespace graphene { namespace chain {
void transfer_to_blind_operation::validate()const
@ -47,19 +45,6 @@ void transfer_to_blind_operation::validate()const
FC_ASSERT( !outputs[i].owner.is_impossible() );
}
FC_ASSERT( out.size(), "there must be at least one output" );
auto public_c = fc::ecc::blind(blinding_factor,net_public);
FC_ASSERT( fc::ecc::verify_sum( {public_c}, out, 0 ), "", ("net_public",net_public) );
if( outputs.size() > 1 )
{
for( auto out : outputs )
{
auto info = fc::ecc::range_get_info( out.range_proof );
FC_ASSERT( info.max_value <= GRAPHENE_MAX_SHARE_SUPPLY );
}
}
}
share_type transfer_to_blind_operation::calculate_fee( const fee_parameters_type& k )const
@ -79,31 +64,15 @@ void transfer_from_blind_operation::validate()const
vector<commitment_type> in(inputs.size());
vector<commitment_type> out;
int64_t net_public = fee.amount.value + amount.amount.value;
out.push_back( fc::ecc::blind( blinding_factor, net_public ) );
for( uint32_t i = 0; i < in.size(); ++i )
{
in[i] = inputs[i].commitment;
/// by requiring all inputs to be sorted we also prevent duplicate commitments on the input
if( i > 0 ) FC_ASSERT( in[i-1] < in[i], "all inputs must be sorted by commitment id" );
}
FC_ASSERT( in.size(), "there must be at least one input" );
FC_ASSERT( fc::ecc::verify_sum( in, out, 0 ) );
}
/**
* If fee_payer = temp_account_id, then the fee is paid by the surplus balance of inputs-outputs and
* 100% of the fee goes to the network.
*/
account_id_type blind_transfer_operation::fee_payer()const
{
return GRAPHENE_TEMP_ACCOUNT;
}
/**
* This method can be computationally intensive because it verifies that input commitments - output commitments add up to 0
*/
void blind_transfer_operation::validate()const
{ try {
vector<commitment_type> in(inputs.size());
@ -122,17 +91,6 @@ void blind_transfer_operation::validate()const
FC_ASSERT( !outputs[i].owner.is_impossible() );
}
FC_ASSERT( in.size(), "there must be at least one input" );
FC_ASSERT( fc::ecc::verify_sum( in, out, net_public ), "", ("net_public", net_public) );
if( outputs.size() > 1 )
{
for( auto out : outputs )
{
auto info = fc::ecc::range_get_info( out.range_proof );
FC_ASSERT( info.max_value <= GRAPHENE_MAX_SHARE_SUPPLY );
}
}
FC_ASSERT( fc::ecc::verify_sum( in, out, net_public ), "", ("net_public", net_public) );
} FC_CAPTURE_AND_RETHROW( (*this) ) }
share_type blind_transfer_operation::calculate_fee( const fee_parameters_type& k )const
@ -140,16 +98,12 @@ share_type blind_transfer_operation::calculate_fee( const fee_parameters_type& k
return k.fee + outputs.size() * k.price_per_output;
}
/**
* Packs *this then encodes as base58 encoded string.
*/
stealth_confirmation::operator string()const
{
return fc::to_base58( fc::raw::pack( *this ) );
}
/**
* Unpacks from a base58 string
*/
stealth_confirmation::stealth_confirmation( const std::string& base58 )
{
*this = fc::raw::unpack<stealth_confirmation>( fc::from_base58( base58 ) );
@ -157,6 +111,8 @@ stealth_confirmation::stealth_confirmation( const std::string& base58 )
} } // graphene::chain
*/
GRAPHENE_EXTERNAL_SERIALIZATION( /*not extern*/, graphene::chain::transfer_to_blind_operation::fee_parameters_type )
GRAPHENE_EXTERNAL_SERIALIZATION( /*not extern*/, graphene::chain::transfer_from_blind_operation::fee_parameters_type )
GRAPHENE_EXTERNAL_SERIALIZATION( /*not extern*/, graphene::chain::blind_transfer_operation::fee_parameters_type )

View file

@ -192,6 +192,16 @@ namespace graphene { namespace chain {
FC_ASSERT( *extensions.value.betting_rake_fee_percentage <= TOURNAMENT_MAXIMAL_RAKE_FEE_PERCENTAGE,
"Rake fee percentage must not be greater than ${max}", ("max", TOURNAMENT_MAXIMAL_RAKE_FEE_PERCENTAGE));
}
if( extensions.value.son_heartbeat_frequency.valid() && extensions.value.son_deregister_time.valid() )
FC_ASSERT( *extensions.value.son_heartbeat_frequency < *extensions.value.son_deregister_time );
if( extensions.value.son_heartbeat_frequency.valid() && extensions.value.son_down_time.valid() )
FC_ASSERT( *extensions.value.son_heartbeat_frequency < *extensions.value.son_down_time );
if( extensions.value.son_heartbeat_frequency.valid() && extensions.value.son_pay_time.valid() )
FC_ASSERT( *extensions.value.son_heartbeat_frequency < *extensions.value.son_pay_time );
}
} } // graphene::chain

View file

@ -22,11 +22,14 @@ object_id_type add_sidechain_address_evaluator::do_apply(const sidechain_address
const auto &sidechain_addresses_idx = db().get_index_type<sidechain_address_index>().indices().get<by_account_and_sidechain_and_expires>();
const auto &addr_itr = sidechain_addresses_idx.find(std::make_tuple(op.sidechain_address_account, op.sidechain, time_point_sec::maximum()));
if (addr_itr != sidechain_addresses_idx.end())
{
db().modify(*addr_itr, [&](sidechain_address_object &sao) {
sao.expires = db().head_block_time();
});
if (addr_itr != sidechain_addresses_idx.end()) {
if (db().head_block_time() >= HARDFORK_SIDECHAIN_DELETE_TIME) {
db().remove(*addr_itr);
} else {
db().modify(*addr_itr, [&](sidechain_address_object &sao) {
sao.expires = db().head_block_time();
});
}
}
const auto& new_sidechain_address_object = db().create<sidechain_address_object>( [&]( sidechain_address_object& obj ){
@ -47,7 +50,7 @@ void_result update_sidechain_address_evaluator::do_evaluate(const sidechain_addr
{ try {
const auto& sidx = db().get_index_type<son_index>().indices().get<by_account>();
const auto& son_obj = sidx.find(op.payer);
FC_ASSERT( son_obj != sidx.end() && db().is_son_active(son_obj->id), "Non active SON trying to update deposit address object" );
FC_ASSERT( son_obj != sidx.end() && db().is_son_active(op.sidechain, son_obj->id), "Non active SON trying to update deposit address object" );
const auto& sdpke_idx = db().get_index_type<sidechain_address_index>().indices().get<by_sidechain_and_deposit_public_key_and_expires>();
FC_ASSERT( op.deposit_address.valid() && op.deposit_public_key.valid() && op.deposit_address_data.valid(), "Update operation by SON is not valid");
FC_ASSERT( (*op.deposit_address).length() > 0 && (*op.deposit_public_key).length() > 0 && (*op.deposit_address_data).length() > 0, "SON should create a valid deposit address with valid deposit public key");
@ -105,10 +108,14 @@ void_result delete_sidechain_address_evaluator::do_apply(const sidechain_address
{ try {
const auto& idx = db().get_index_type<sidechain_address_index>().indices().get<by_id>();
auto sidechain_address = idx.find(op.sidechain_address_id);
if(sidechain_address != idx.end()) {
db().modify(*sidechain_address, [&](sidechain_address_object &sao) {
sao.expires = db().head_block_time();
});
if (sidechain_address != idx.end()) {
if (db().head_block_time() >= HARDFORK_SIDECHAIN_DELETE_TIME) {
db().remove(*sidechain_address);
} else {
db().modify(*sidechain_address, [&](sidechain_address_object &sao) {
sao.expires = db().head_block_time();
});
}
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (op) ) }
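Both evaluators above switch, at HARDFORK_SIDECHAIN_DELETE_TIME, from soft-expiring an address object to removing it outright. A minimal sketch of that gate in isolation, with a plain struct and an integer timestamp standing in for the database object and hardfork constant (stand-in names only):

#include <cassert>
#include <ctime>
#include <optional>

struct address_record {
   std::time_t expires;          // "maximum()" would mean still active
};

constexpr std::time_t HF_DELETE_TIME = 1700000000;   // stand-in for HARDFORK_SIDECHAIN_DELETE_TIME

// before the hardfork the record is kept but marked expired;
// afterwards it is removed entirely
void retire(std::optional<address_record>& rec, std::time_t head_block_time) {
   if (!rec)
      return;
   if (head_block_time >= HF_DELETE_TIME)
      rec.reset();                       // analogous to db().remove(*addr_itr)
   else
      rec->expires = head_block_time;    // analogous to sao.expires = db().head_block_time()
}

int main() {
   std::optional<address_record> r{address_record{0}};
   retire(r, HF_DELETE_TIME - 1);
   assert(r && r->expires == HF_DELETE_TIME - 1);
   retire(r, HF_DELETE_TIME);
   assert(!r);
}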

View file

@ -11,7 +11,7 @@ namespace graphene { namespace chain {
void_result sidechain_transaction_create_evaluator::do_evaluate(const sidechain_transaction_create_operation &op)
{ try {
FC_ASSERT(db().head_block_time() >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
FC_ASSERT( op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer." );
FC_ASSERT(op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer.");
FC_ASSERT((op.object_id.is<son_wallet_id_type>() || op.object_id.is<son_wallet_deposit_id_type>() || op.object_id.is<son_wallet_withdraw_id_type>()), "Invalid object id");
@ -28,15 +28,26 @@ void_result sidechain_transaction_create_evaluator::do_evaluate(const sidechain_
object_id_type sidechain_transaction_create_evaluator::do_apply(const sidechain_transaction_create_operation &op)
{ try {
const auto &new_sidechain_transaction_object = db().create<sidechain_transaction_object>([&](sidechain_transaction_object &sto) {
sto.timestamp = db().head_block_time();
sto.sidechain = op.sidechain;
sto.object_id = op.object_id;
sto.transaction = op.transaction;
sto.signers = op.signers;
std::transform(op.signers.begin(), op.signers.end(), std::inserter(sto.signatures, sto.signatures.end()), [](const son_info &si) {
std::vector<son_sidechain_info> signers;
for(const auto& signer : op.signers){
son_sidechain_info ssi;
ssi.son_id = signer.son_id;
ssi.weight = signer.weight;
ssi.signing_key = signer.signing_key;
ssi.public_key = signer.sidechain_public_keys.at(op.sidechain);
signers.emplace_back(std::move(ssi));
}
sto.signers = std::move(signers);
std::transform(sto.signers.begin(), sto.signers.end(), std::inserter(sto.signatures, sto.signatures.end()), [](const son_sidechain_info &si) {
return std::make_pair(si.son_id, std::string());
});
for (const auto &si : op.signers) {
for (const auto &si : sto.signers) {
sto.total_weight = sto.total_weight + si.weight;
}
sto.sidechain_transaction = "";
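The loop above projects each multi-sidechain signer (son_info) down to a single-sidechain record (son_sidechain_info) by picking the public key for the transaction's sidechain. A standalone sketch of that projection with simplified structs; the field names mirror the diff, everything else (types, ids) is illustrative:

#include <cassert>
#include <map>
#include <string>
#include <vector>

enum class sidechain { bitcoin, hive, ethereum };

struct son_info_like {                       // one public key per sidechain
   int son_id = 0;
   int weight = 0;
   std::map<sidechain, std::string> sidechain_public_keys;
};

struct son_sidechain_info_like {             // key for exactly one sidechain
   int son_id = 0;
   int weight = 0;
   std::string public_key;
};

std::vector<son_sidechain_info_like>
project_signers(const std::vector<son_info_like>& signers, sidechain target) {
   std::vector<son_sidechain_info_like> result;
   result.reserve(signers.size());
   for (const auto& s : signers) {
      son_sidechain_info_like ssi;
      ssi.son_id = s.son_id;
      ssi.weight = s.weight;
      ssi.public_key = s.sidechain_public_keys.at(target);   // throws if the key is missing
      result.emplace_back(std::move(ssi));
   }
   return result;
}

int main() {
   son_info_like s{7, 10, {{sidechain::bitcoin, "btc-key"}, {sidechain::hive, "hive-key"}}};
   const auto out = project_signers({s}, sidechain::hive);
   assert(out.size() == 1 && out[0].public_key == "hive-key");
}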

View file

@ -39,13 +39,28 @@ void_result create_son_evaluator::do_evaluate(const son_create_operation& op)
object_id_type create_son_evaluator::do_apply(const son_create_operation& op)
{ try {
vote_id_type vote_id;
db().modify(db().get_global_properties(), [&vote_id](global_property_object& p) {
vote_id = get_next_vote_id(p, vote_id_type::son);
});
flat_map<sidechain_type, vote_id_type> vote_ids;
const auto& new_son_object = db().create<son_object>( [&]( son_object& obj ){
const auto now = db().head_block_time();
if( now < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
db().modify(db().get_global_properties(), [&vote_id](global_property_object &p) {
vote_id = get_next_vote_id(p, vote_id_type::son_bitcoin);
});
}
else {
db().modify(db().get_global_properties(), [&vote_ids](global_property_object &p) {
vote_ids[sidechain_type::bitcoin] = get_next_vote_id(p, vote_id_type::son_bitcoin);
vote_ids[sidechain_type::hive] = get_next_vote_id(p, vote_id_type::son_hive);
vote_ids[sidechain_type::ethereum] = get_next_vote_id(p, vote_id_type::son_ethereum);
});
}
const auto& new_son_object = db().create<son_object>( [&]( son_object& obj ) {
obj.son_account = op.owner_account;
obj.vote_id = vote_id;
if( now < HARDFORK_SON_FOR_ETHEREUM_TIME )
obj.sidechain_vote_ids[sidechain_type::bitcoin] = vote_id;
else
obj.sidechain_vote_ids = vote_ids;
obj.url = op.url;
obj.deposit = op.deposit;
obj.signing_key = op.signing_key;
@ -79,9 +94,6 @@ void_result update_son_evaluator::do_evaluate(const son_update_operation& op)
FC_ASSERT(vbo.policy.which() == vesting_policy::tag<linear_vesting_policy>::value,
"Payment balance must have linear vesting policy");
}
if(op.new_status.valid()) {
FC_ASSERT(db().get(op.son_id).status == son_status::deregistered, "SON must be in deregistered state");
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (op) ) }
@ -97,7 +109,8 @@ object_id_type update_son_evaluator::do_apply(const son_update_operation& op)
if(op.new_signing_key.valid()) so.signing_key = *op.new_signing_key;
if(op.new_sidechain_public_keys.valid()) so.sidechain_public_keys = *op.new_sidechain_public_keys;
if(op.new_pay_vb.valid()) so.pay_vb = *op.new_pay_vb;
if(op.new_status.valid()) so.status = son_status::inactive;
for(auto& status : so.statuses)
if(status.second == son_status::deregistered) status.second = son_status::inactive;
});
}
return op.son_id;
@ -130,7 +143,8 @@ void_result deregister_son_evaluator::do_apply(const son_deregister_operation& o
});
db().modify(*son, [&op](son_object &so) {
so.status = son_status::deregistered;
for(auto& status : so.statuses)
status.second = son_status::deregistered;
});
auto stats_obj = ss_idx.find(son->statistics);
@ -147,18 +161,28 @@ void_result son_heartbeat_evaluator::do_evaluate(const son_heartbeat_operation&
{ try {
FC_ASSERT(db().head_block_time() >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK"); // can be removed after HF date pass
const auto& idx = db().get_index_type<son_index>().indices().get<by_id>();
auto itr = idx.find(op.son_id);
const auto itr = idx.find(op.son_id);
FC_ASSERT( itr != idx.end() );
FC_ASSERT(itr->son_account == op.owner_account);
auto stats = itr->statistics( db() );
// Inactive SONs need not send heartbeats
FC_ASSERT((itr->status == son_status::active) || (itr->status == son_status::in_maintenance) || (itr->status == son_status::request_maintenance), "Inactive SONs need not send heartbeats");
bool status_need_to_send_heartbeats = false;
for(const auto& status : itr->statuses)
{
if( (status.second == son_status::active) || (status.second == son_status::in_maintenance) || (status.second == son_status::request_maintenance) )
status_need_to_send_heartbeats = true;
}
FC_ASSERT(status_need_to_send_heartbeats, "Inactive SONs need not send heartbeats");
// Account for network delays
fc::time_point_sec min_ts = db().head_block_time() - fc::seconds(5 * db().block_interval());
// Account for server ntp sync difference
fc::time_point_sec max_ts = db().head_block_time() + fc::seconds(5 * db().block_interval());
FC_ASSERT(op.ts > stats.last_active_timestamp, "Heartbeat sent without waiting minimum time");
FC_ASSERT(op.ts > stats.last_down_timestamp, "Heartbeat sent is invalid can't be <= last down timestamp");
for(const auto& active_sidechain_type : active_sidechain_types(db().head_block_time())) {
if(stats.last_active_timestamp.contains(active_sidechain_type))
FC_ASSERT(op.ts > stats.last_active_timestamp.at(active_sidechain_type), "Heartbeat sent for sidechain = ${sidechain} without waiting minimum time", ("sidechain", active_sidechain_type));
if(stats.last_down_timestamp.contains(active_sidechain_type))
FC_ASSERT(op.ts > stats.last_down_timestamp.at(active_sidechain_type), "Heartbeat sent for sidechain = ${sidechain} is invalid can't be <= last down timestamp", ("sidechain", active_sidechain_type));
}
FC_ASSERT(op.ts >= min_ts, "Heartbeat ts is behind the min threshold");
FC_ASSERT(op.ts <= max_ts, "Heartbeat ts is above the max threshold");
return void_result();
@ -167,44 +191,48 @@ void_result son_heartbeat_evaluator::do_evaluate(const son_heartbeat_operation&
object_id_type son_heartbeat_evaluator::do_apply(const son_heartbeat_operation& op)
{ try {
const auto& idx = db().get_index_type<son_index>().indices().get<by_id>();
auto itr = idx.find(op.son_id);
const auto itr = idx.find(op.son_id);
if(itr != idx.end())
{
const global_property_object& gpo = db().get_global_properties();
vector<son_id_type> active_son_ids;
active_son_ids.reserve(gpo.active_sons.size());
std::transform(gpo.active_sons.begin(), gpo.active_sons.end(),
std::inserter(active_son_ids, active_son_ids.end()),
[](const son_info& swi) {
return swi.son_id;
});
auto it_son = std::find(active_son_ids.begin(), active_son_ids.end(), op.son_id);
bool is_son_active = true;
for(const auto& active_sidechain_sons : gpo.active_sons) {
const auto& sidechain = active_sidechain_sons.first;
const auto& active_sons = active_sidechain_sons.second;
if(it_son == active_son_ids.end()) {
is_son_active = false;
}
vector<son_id_type> active_son_ids;
active_son_ids.reserve(active_sons.size());
std::transform(active_sons.cbegin(), active_sons.cend(),
std::inserter(active_son_ids, active_son_ids.end()),
[](const son_sidechain_info &swi) {
return swi.son_id;
});
if(itr->status == son_status::in_maintenance) {
db().modify( itr->statistics( db() ), [&]( son_statistics_object& sso )
{
sso.current_interval_downtime += op.ts.sec_since_epoch() - sso.last_down_timestamp.sec_since_epoch();
sso.last_active_timestamp = op.ts;
} );
const auto it_son = std::find(active_son_ids.begin(), active_son_ids.end(), op.son_id);
bool is_son_active = true;
db().modify(*itr, [&is_son_active](son_object &so) {
if(is_son_active) {
so.status = son_status::active;
} else {
so.status = son_status::inactive;
}
});
} else if ((itr->status == son_status::active) || (itr->status == son_status::request_maintenance)) {
db().modify( itr->statistics( db() ), [&]( son_statistics_object& sso )
{
sso.last_active_timestamp = op.ts;
} );
if (it_son == active_son_ids.end()) {
is_son_active = false;
}
if (itr->statuses.at(sidechain) == son_status::in_maintenance) {
db().modify(itr->statistics(db()), [&](son_statistics_object &sso) {
sso.current_interval_downtime[sidechain] += op.ts.sec_since_epoch() - (sso.last_down_timestamp.contains(sidechain) ? sso.last_down_timestamp.at(sidechain).sec_since_epoch() : op.ts.sec_since_epoch());
sso.last_active_timestamp[sidechain] = op.ts;
});
db().modify(*itr, [&is_son_active, &sidechain](son_object &so) {
if (is_son_active) {
so.statuses[sidechain] = son_status::active;
} else {
so.statuses[sidechain] = son_status::inactive;
}
});
} else if ((itr->statuses.at(sidechain) == son_status::active) || (itr->statuses.at(sidechain) == son_status::request_maintenance)) {
db().modify(itr->statistics(db()), [&](son_statistics_object &sso) {
sso.last_active_timestamp[sidechain] = op.ts;
});
}
}
}
return op.son_id;
@ -216,29 +244,41 @@ void_result son_report_down_evaluator::do_evaluate(const son_report_down_operati
FC_ASSERT(op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer.");
const auto& idx = db().get_index_type<son_index>().indices().get<by_id>();
FC_ASSERT( idx.find(op.son_id) != idx.end() );
auto itr = idx.find(op.son_id);
auto stats = itr->statistics( db() );
FC_ASSERT(itr->status == son_status::active || itr->status == son_status::request_maintenance, "Inactive/Deregistered/in_maintenance SONs cannot be reported on as down");
FC_ASSERT(op.down_ts >= stats.last_active_timestamp, "down_ts should be greater than last_active_timestamp");
const auto itr = idx.find(op.son_id);
const auto stats = itr->statistics( db() );
bool status_need_to_report_down = false;
for(const auto& status : itr->statuses)
{
if( (status.second == son_status::active) || (status.second == son_status::request_maintenance) )
status_need_to_report_down = true;
}
FC_ASSERT(status_need_to_report_down, "Inactive/Deregistered/in_maintenance SONs cannot be reported on as down");
for(const auto& active_sidechain_type : active_sidechain_types(db().head_block_time())) {
if(stats.last_active_timestamp.contains(active_sidechain_type))
FC_ASSERT(op.down_ts >= stats.last_active_timestamp.at(active_sidechain_type), "sidechain = ${sidechain} down_ts should be greater than last_active_timestamp", ("sidechain", active_sidechain_type));
}
return void_result();
} FC_CAPTURE_AND_RETHROW( (op) ) }
object_id_type son_report_down_evaluator::do_apply(const son_report_down_operation& op)
{ try {
const auto& idx = db().get_index_type<son_index>().indices().get<by_id>();
auto itr = idx.find(op.son_id);
const auto itr = idx.find(op.son_id);
if(itr != idx.end())
{
if ((itr->status == son_status::active) || (itr->status == son_status::request_maintenance)) {
db().modify( itr->statistics( db() ), [&]( son_statistics_object& sso )
{
sso.last_down_timestamp = op.down_ts;
});
for( const auto& status : itr->statuses ) {
const auto& sidechain = status.first;
db().modify(*itr, [&op](son_object &so) {
so.status = son_status::in_maintenance;
});
}
if ((status.second == son_status::active) || (status.second == son_status::request_maintenance)) {
db().modify(*itr, [&sidechain](son_object &so) {
so.statuses[sidechain] = son_status::in_maintenance;
});
db().modify(itr->statistics(db()), [&](son_statistics_object &sso) {
sso.last_down_timestamp[sidechain] = op.down_ts;
});
}
}
}
return op.son_id;
} FC_CAPTURE_AND_RETHROW( (op) ) }
@ -252,9 +292,19 @@ void_result son_maintenance_evaluator::do_evaluate(const son_maintenance_operati
FC_ASSERT( itr != idx.end() );
// Inactive SONs can't go to maintenance, toggle between active and request_maintenance states
if(op.request_type == son_maintenance_request_type::request_maintenance) {
FC_ASSERT(itr->status == son_status::active, "Inactive SONs can't request for maintenance");
} else if(op.request_type == son_maintenance_request_type::cancel_request_maintenance) {
FC_ASSERT(itr->status == son_status::request_maintenance, "Only maintenance requested SONs can cancel the request");
bool status_active = false;
for(const auto& status : itr->statuses) {
if( (status.second == son_status::active) )
status_active = true;
}
FC_ASSERT(status_active, "Inactive SONs can't request for maintenance");
} else if(op.request_type == son_maintenance_request_type::cancel_request_maintenance) {
bool status_request_maintenance = false;
for(const auto& status : itr->statuses) {
if( (status.second == son_status::request_maintenance) )
status_request_maintenance = true;
}
FC_ASSERT(status_request_maintenance, "Only maintenance requested SONs can cancel the request");
} else {
FC_ASSERT(false, "Invalid maintenance operation");
}
@ -267,15 +317,33 @@ object_id_type son_maintenance_evaluator::do_apply(const son_maintenance_operati
auto itr = idx.find(op.son_id);
if(itr != idx.end())
{
if(itr->status == son_status::active && op.request_type == son_maintenance_request_type::request_maintenance) {
db().modify(*itr, [](son_object &so) {
so.status = son_status::request_maintenance;
});
} else if(itr->status == son_status::request_maintenance && op.request_type == son_maintenance_request_type::cancel_request_maintenance) {
db().modify(*itr, [](son_object &so) {
so.status = son_status::active;
});
}
bool status_active = false;
for(const auto& status : itr->statuses) {
if( (status.second == son_status::active) )
status_active = true;
}
if(status_active && op.request_type == son_maintenance_request_type::request_maintenance) {
db().modify(*itr, [](son_object &so) {
for(auto& status : so.statuses) {
status.second = son_status::request_maintenance;
}
});
}
else
{
bool status_request_maintenance = false;
for(const auto& status : itr->statuses) {
if( (status.second == son_status::request_maintenance) )
status_request_maintenance = true;
}
if(status_request_maintenance && op.request_type == son_maintenance_request_type::cancel_request_maintenance) {
db().modify(*itr, [](son_object &so) {
for(auto& status : so.statuses) {
status.second = son_status::active;
}
});
}
}
}
return op.son_id;
} FC_CAPTURE_AND_RETHROW( (op) ) }
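The heartbeat evaluation earlier in this file bounds op.ts to a window of five block intervals on either side of head_block_time. A worked standalone check of that window using plain integer seconds; the 5-interval margin mirrors the diff, the block interval and timestamps are illustrative:

#include <cassert>
#include <cstdint>

// returns true when a heartbeat timestamp falls inside the accepted window:
// [head_block_time - 5 * block_interval, head_block_time + 5 * block_interval]
bool heartbeat_in_window(int64_t ts, int64_t head_block_time, int64_t block_interval) {
   const int64_t min_ts = head_block_time - 5 * block_interval;   // allows for network delay
   const int64_t max_ts = head_block_time + 5 * block_interval;   // allows for NTP drift
   return ts >= min_ts && ts <= max_ts;
}

int main() {
   const int64_t head = 1000000, interval = 3;                    // 3-second blocks, illustrative
   assert(heartbeat_in_window(head - 15, head, interval));        // exactly at the lower bound
   assert(!heartbeat_in_window(head - 16, head, interval));       // one second too old
   assert(!heartbeat_in_window(head + 16, head, interval));       // one second too far ahead
}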

View file

@ -6,20 +6,22 @@ namespace graphene { namespace chain {
db.adjust_balance(son_account, pay);
}
bool son_object::has_valid_config()const {
return ((std::string(signing_key).length() > 0) &&
(sidechain_public_keys.size() > 0) &&
(sidechain_public_keys.find( sidechain_type::bitcoin ) != sidechain_public_keys.end()) &&
(sidechain_public_keys.at(sidechain_type::bitcoin).length() > 0));
bool son_object::has_valid_config(sidechain_type sidechain) const {
return (sidechain_public_keys.find( sidechain ) != sidechain_public_keys.end()) &&
(sidechain_public_keys.at(sidechain).length() > 0);
}
bool son_object::has_valid_config(time_point_sec head_block_time)const {
bool retval = has_valid_config();
bool son_object::has_valid_config(time_point_sec head_block_time, sidechain_type sidechain) const {
bool retval = (std::string(signing_key).length() > 0) && (sidechain_public_keys.size() > 0);
if (head_block_time >= HARDFORK_SON_FOR_HIVE_TIME) {
retval = retval &&
(sidechain_public_keys.find( sidechain_type::hive ) != sidechain_public_keys.end()) &&
(sidechain_public_keys.at(sidechain_type::hive).length() > 0);
if (head_block_time < HARDFORK_SON_FOR_HIVE_TIME) {
retval = retval && has_valid_config(sidechain_type::bitcoin);
}
if (head_block_time >= HARDFORK_SON_FOR_HIVE_TIME && head_block_time < HARDFORK_SON_FOR_ETHEREUM_TIME) {
retval = retval && has_valid_config(sidechain_type::bitcoin) && has_valid_config(sidechain_type::hive);
}
else if (head_block_time >= HARDFORK_SON_FOR_ETHEREUM_TIME) {
retval = retval && has_valid_config(sidechain);
}
return retval;
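has_valid_config above tightens its requirements in stages: before the Hive hardfork only a Bitcoin key is needed, between the Hive and Ethereum hardforks both Bitcoin and Hive keys, and after the Ethereum hardfork only the key for the sidechain being asked about. A condensed standalone sketch of that staging; the hardfork times, key map, and helper names are stand-ins, and the signing-key check is omitted for brevity:

#include <cassert>
#include <cstdint>
#include <map>
#include <string>

enum class sidechain { bitcoin, hive, ethereum };

constexpr int64_t HF_HIVE = 100;        // stand-in for HARDFORK_SON_FOR_HIVE_TIME
constexpr int64_t HF_ETHEREUM = 200;    // stand-in for HARDFORK_SON_FOR_ETHEREUM_TIME

bool has_key(const std::map<sidechain, std::string>& keys, sidechain s) {
   const auto it = keys.find(s);
   return it != keys.end() && !it->second.empty();
}

bool has_valid_config(const std::map<sidechain, std::string>& keys,
                      int64_t head_block_time, sidechain asked_for) {
   if (head_block_time < HF_HIVE)
      return has_key(keys, sidechain::bitcoin);                        // bitcoin-only era
   if (head_block_time < HF_ETHEREUM)
      return has_key(keys, sidechain::bitcoin) && has_key(keys, sidechain::hive);
   return has_key(keys, asked_for);                                    // per-sidechain era
}

int main() {
   std::map<sidechain, std::string> keys{{sidechain::bitcoin, "btc"}, {sidechain::hive, "hive"}};
   assert(has_valid_config(keys, 50, sidechain::ethereum));            // pre-Hive: a bitcoin key suffices
   assert(!has_valid_config(keys, 250, sidechain::ethereum));          // post-Ethereum: needs an eth key
}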

View file

@ -23,9 +23,9 @@ void_result create_son_wallet_deposit_evaluator::do_evaluate(const son_wallet_de
const auto &swdo_idx = db().get_index_type<son_wallet_deposit_index>().indices().get<by_sidechain_uid>();
const auto swdo = swdo_idx.find(op.sidechain_uid);
if (swdo == swdo_idx.end()) {
auto &gpo = db().get_global_properties();
const auto &gpo = db().get_global_properties();
bool expected = false;
for (auto &si : gpo.active_sons) {
for (auto &si : gpo.active_sons.at(op.sidechain)) {
if (op.son_id == si.son_id) {
expected = true;
break;
@ -78,8 +78,8 @@ object_id_type create_son_wallet_deposit_evaluator::do_apply(const son_wallet_de
swdo.peerplays_to = op.peerplays_to;
swdo.peerplays_asset = op.peerplays_asset;
auto &gpo = db().get_global_properties();
for (auto &si : gpo.active_sons) {
const auto &gpo = db().get_global_properties();
for (auto &si : gpo.active_sons.at(op.sidechain)) {
swdo.expected_reports.insert(std::make_pair(si.son_id, si.weight));
auto stats_itr = db().get_index_type<son_stats_index>().indices().get<by_owner>().find(si.son_id);
@ -142,11 +142,11 @@ void_result process_son_wallet_deposit_evaluator::do_evaluate(const son_wallet_d
{ try{
FC_ASSERT(db().head_block_time() >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
FC_ASSERT( op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer." );
FC_ASSERT(db().get_global_properties().active_sons.size() >= db().get_chain_properties().immutable_parameters.min_son_count, "Min required voted SONs not present");
const auto& idx = db().get_index_type<son_wallet_deposit_index>().indices().get<by_id>();
const auto& itr = idx.find(op.son_wallet_deposit_id);
FC_ASSERT(itr != idx.end(), "Son wallet deposit not found");
FC_ASSERT(db().get_global_properties().active_sons.at(itr->sidechain).size() >= db().get_chain_properties().immutable_parameters.min_son_count, "Min required voted SONs not present");
FC_ASSERT(!itr->processed, "Son wallet deposit is already processed");
return void_result();
} FC_CAPTURE_AND_RETHROW( (op) ) }

View file

@ -7,8 +7,9 @@ namespace graphene { namespace chain {
void_result recreate_son_wallet_evaluator::do_evaluate(const son_wallet_recreate_operation& op)
{ try{
FC_ASSERT(db().head_block_time() >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
FC_ASSERT( op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer." );
const auto now = db().head_block_time();
FC_ASSERT(now >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
FC_ASSERT(op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer.");
const auto& idx = db().get_index_type<son_wallet_index>().indices().get<by_id>();
auto itr = idx.rbegin();
@ -16,12 +17,36 @@ void_result recreate_son_wallet_evaluator::do_evaluate(const son_wallet_recreate
{
// Compare current wallet SONs and to-be lists of active sons
auto cur_wallet_sons = (*itr).sons;
auto new_wallet_sons = op.sons;
flat_map<sidechain_type, vector<son_sidechain_info> > new_wallet_sons;
if( now < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
for(const auto& son : op.sons){
for(const auto& active_sidechain_type : active_sidechain_types(db().head_block_time())){
son_sidechain_info ssi;
ssi.son_id = son.son_id;
ssi.weight = son.weight;
ssi.signing_key = son.signing_key;
ssi.public_key = son.sidechain_public_keys.at(active_sidechain_type);
new_wallet_sons[active_sidechain_type].emplace_back(std::move(ssi));
}
}
}
else{
FC_ASSERT(op.extensions.value.sidechain_sons.valid(), "Sons is not valid");
new_wallet_sons = *op.extensions.value.sidechain_sons;
}
bool son_sets_equal = (cur_wallet_sons.size() == new_wallet_sons.size());
if (son_sets_equal) {
for( size_t i = 0; i < cur_wallet_sons.size(); i++ ) {
son_sets_equal = son_sets_equal && cur_wallet_sons.at(i) == new_wallet_sons.at(i);
for( const auto& cur_wallet_sidechain_sons : cur_wallet_sons ) {
const auto& sidechain = cur_wallet_sidechain_sons.first;
const auto& _cur_wallet_sidechain_sons = cur_wallet_sidechain_sons.second;
son_sets_equal = son_sets_equal && (_cur_wallet_sidechain_sons.size() == new_wallet_sons.at(sidechain).size());
if (son_sets_equal) {
for (size_t i = 0; i < cur_wallet_sons.size(); i++) {
son_sets_equal = son_sets_equal && _cur_wallet_sidechain_sons.at(i) == new_wallet_sons.at(sidechain).at(i);
}
}
}
}
@ -43,9 +68,26 @@ object_id_type recreate_son_wallet_evaluator::do_apply(const son_wallet_recreate
}
const auto& new_son_wallet_object = db().create<son_wallet_object>( [&]( son_wallet_object& obj ){
obj.valid_from = db().head_block_time();
const auto now = db().head_block_time();
obj.valid_from = now;
obj.expires = time_point_sec::maximum();
obj.sons = op.sons;
if( now < HARDFORK_SON_FOR_ETHEREUM_TIME ) {
flat_map<sidechain_type, vector<son_sidechain_info> > sons;
for(const auto& son : op.sons){
for(const auto& active_sidechain_type : active_sidechain_types(db().head_block_time())){
son_sidechain_info ssi;
ssi.son_id = son.son_id;
ssi.weight = son.weight;
ssi.signing_key = son.signing_key;
ssi.public_key = son.sidechain_public_keys.at(active_sidechain_type);
sons[active_sidechain_type].emplace_back(std::move(ssi));
}
}
obj.sons = std::move(sons);
}
else{
obj.sons = *op.extensions.value.sidechain_sons;
}
});
return new_son_wallet_object.id;
} FC_CAPTURE_AND_RETHROW( (op) ) }
@ -55,8 +97,19 @@ void_result update_son_wallet_evaluator::do_evaluate(const son_wallet_update_ope
FC_ASSERT(db().head_block_time() >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
FC_ASSERT( op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer." );
const son_wallet_id_type son_wallet_id = [&]{
if(db().head_block_time() >= HARDFORK_SON_FOR_ETHEREUM_TIME)
{
const auto ast = active_sidechain_types(db().head_block_time());
const auto id = (op.son_wallet_id.instance.value - std::distance(ast.begin(), ast.find(op.sidechain))) / ast.size();
return son_wallet_id_type{ id };
}
return op.son_wallet_id;
}();
const auto& idx = db().get_index_type<son_wallet_index>().indices().get<by_id>();
FC_ASSERT( idx.find(op.son_wallet_id) != idx.end() );
FC_ASSERT( idx.find(son_wallet_id) != idx.end() );
//auto itr = idx.find(op.son_wallet_id);
//FC_ASSERT( itr->addresses.find(op.sidechain) == itr->addresses.end() ||
// itr->addresses.at(op.sidechain).empty(), "Sidechain wallet address already set");
@ -65,8 +118,19 @@ void_result update_son_wallet_evaluator::do_evaluate(const son_wallet_update_ope
object_id_type update_son_wallet_evaluator::do_apply(const son_wallet_update_operation& op)
{ try {
const son_wallet_id_type son_wallet_id = [&]{
if(db().head_block_time() >= HARDFORK_SON_FOR_ETHEREUM_TIME)
{
const auto ast = active_sidechain_types(db().head_block_time());
const auto id = (op.son_wallet_id.instance.value - std::distance(ast.begin(), ast.find(op.sidechain))) / ast.size();
return son_wallet_id_type{ id };
}
return op.son_wallet_id;
}();
const auto& idx = db().get_index_type<son_wallet_index>().indices().get<by_id>();
auto itr = idx.find(op.son_wallet_id);
auto itr = idx.find(son_wallet_id);
if (itr != idx.end())
{
if (itr->addresses.find(op.sidechain) == itr->addresses.end()) {
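The lambda above recovers the base son_wallet id from a combined id that, after the Ethereum hardfork, carries one wallet object per active sidechain: it subtracts the sidechain's position in the active set and divides by the set size. A worked example of that arithmetic with three active sidechains, assuming wallet objects are allocated in consecutive per-sidechain groups (positions and instance values here are illustrative):

#include <cassert>
#include <cstdint>

// base = (instance - position_of_sidechain_in_active_set) / active_set_size
uint64_t base_wallet_id(uint64_t instance, uint64_t sidechain_position, uint64_t active_count) {
   return (instance - sidechain_position) / active_count;
}

int main() {
   // one group of wallets: instances 3, 4, 5 for sidechain positions 0, 1, 2
   assert(base_wallet_id(3, 0, 3) == 1);
   assert(base_wallet_id(4, 1, 3) == 1);
   assert(base_wallet_id(5, 2, 3) == 1);
}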

View file

@ -10,12 +10,13 @@ namespace graphene { namespace chain {
void_result create_son_wallet_withdraw_evaluator::do_evaluate(const son_wallet_withdraw_create_operation& op)
{ try {
FC_ASSERT(db().head_block_time() >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
const auto now = db().head_block_time();
FC_ASSERT(now >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
const auto &son_idx = db().get_index_type<son_index>().indices().get<by_id>();
const auto so = son_idx.find(op.son_id);
FC_ASSERT(so != son_idx.end(), "SON not found");
FC_ASSERT(so->son_account == op.payer, "Payer is not SON account owner");
FC_ASSERT(!(op.sidechain == sidechain_type::peerplays && now >= HARDFORK_SON_FOR_ETHEREUM_TIME), "Peerplays sidechain type is not allowed");
const auto &ss_idx = db().get_index_type<son_stats_index>().indices().get<by_owner>();
FC_ASSERT(ss_idx.find(op.son_id) != ss_idx.end(), "Statistic object for a given SON ID does not exists");
@ -23,15 +24,23 @@ void_result create_son_wallet_withdraw_evaluator::do_evaluate(const son_wallet_w
const auto &swwo_idx = db().get_index_type<son_wallet_withdraw_index>().indices().get<by_peerplays_uid>();
const auto swwo = swwo_idx.find(op.peerplays_uid);
if (swwo == swwo_idx.end()) {
auto &gpo = db().get_global_properties();
const sidechain_type sidechain = [&op]{
if(op.sidechain == sidechain_type::peerplays){
return op.withdraw_sidechain;
}
else
return op.sidechain;
}();
const auto &gpo = db().get_global_properties();
bool expected = false;
for (auto &si : gpo.active_sons) {
for (auto &si : gpo.active_sons.at(sidechain)) {
if (op.son_id == si.son_id) {
expected = true;
break;
}
}
FC_ASSERT(expected, "Only active SON can create deposit");
FC_ASSERT(expected, "Only active SON can create withdraw");
} else {
bool exactly_the_same = true;
exactly_the_same = exactly_the_same && (swwo->sidechain == op.sidechain);
@ -76,8 +85,16 @@ object_id_type create_son_wallet_withdraw_evaluator::do_apply(const son_wallet_w
swwo.withdraw_currency = op.withdraw_currency;
swwo.withdraw_amount = op.withdraw_amount;
auto &gpo = db().get_global_properties();
for (auto &si : gpo.active_sons) {
const sidechain_type sidechain = [&op]{
if(op.sidechain == sidechain_type::peerplays){
return op.withdraw_sidechain;
}
else
return op.sidechain;
}();
const auto &gpo = db().get_global_properties();
for (auto &si : gpo.active_sons.at(sidechain)) {
swwo.expected_reports.insert(std::make_pair(si.son_id, si.weight));
auto stats_itr = db().get_index_type<son_stats_index>().indices().get<by_owner>().find(si.son_id);
@ -138,13 +155,17 @@ object_id_type create_son_wallet_withdraw_evaluator::do_apply(const son_wallet_w
void_result process_son_wallet_withdraw_evaluator::do_evaluate(const son_wallet_withdraw_process_operation& op)
{ try{
FC_ASSERT(db().head_block_time() >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
const auto now = db().head_block_time();
FC_ASSERT(now >= HARDFORK_SON_TIME, "Not allowed until SON HARDFORK");
FC_ASSERT( op.payer == db().get_global_properties().parameters.son_account(), "SON paying account must be set as payer." );
FC_ASSERT(db().get_global_properties().active_sons.size() >= db().get_chain_properties().immutable_parameters.min_son_count, "Min required voted SONs not present");
const auto& idx = db().get_index_type<son_wallet_withdraw_index>().indices().get<by_id>();
const auto& itr = idx.find(op.son_wallet_withdraw_id);
FC_ASSERT(itr != idx.end(), "Son wallet withdraw not found");
FC_ASSERT(!(itr->sidechain == sidechain_type::peerplays && now >= HARDFORK_SON_FOR_ETHEREUM_TIME), "Peerplays sidechain type is not allowed");
if(itr->sidechain != sidechain_type::peerplays) {
FC_ASSERT(db().get_global_properties().active_sons.at(itr->sidechain).size() >= db().get_chain_properties().immutable_parameters.min_son_count, "Min required voted SONs not present");
}
FC_ASSERT(!itr->processed, "Son wallet withdraw is already processed");
return void_result();
} FC_CAPTURE_AND_RETHROW( (op) ) }

View file

@ -721,7 +721,7 @@ namespace graphene { namespace chain {
}
} } // graphene::chain
namespace fc {
namespace fc {
// Manually reflect tournament_object to variant to properly reflect "state"
void to_variant(const graphene::chain::tournament_object& tournament_obj, fc::variant& v, uint32_t max_depth)
{

View file

@ -29,6 +29,7 @@
#include <fc/log/logger.hpp>
#include <map>
#include <mutex>
namespace graphene { namespace db {
@ -144,6 +145,7 @@ namespace graphene { namespace db {
fc::path get_data_dir()const { return _data_dir; }
/** public for testing purposes only... should be private in practice. */
mutable std::mutex _undo_db_mutex;
undo_database _undo_db;
protected:
template<typename IndexType>

View file

@ -34,10 +34,10 @@ namespace graphene { namespace db {
struct undo_state
{
unordered_map<object_id_type, unique_ptr<object> > old_values;
unordered_map<object_id_type, object_id_type> old_index_next_ids;
std::unordered_set<object_id_type> new_ids;
unordered_map<object_id_type, unique_ptr<object> > removed;
unordered_map<object_id_type, unique_ptr<object> > old_values;
unordered_map<object_id_type, object_id_type> old_index_next_ids;
std::set<object_id_type, std::greater<object_id_type> > new_ids;
unordered_map<object_id_type, unique_ptr<object> > removed;
};

@ -1 +1 @@
Subproject commit 488883921936139e8734b99822d3a589afe80da1
Subproject commit 156b0c4e41c9215eadb2af8009b05e0f38c16dda

View file

@ -47,4 +47,3 @@ namespace graphene { namespace net {
const core_message_type_enum get_current_connections_reply_message::type = core_message_type_enum::get_current_connections_reply_message_type;
} } // graphene::net

View file

@ -23,6 +23,8 @@
*/
#pragma once
#include <stddef.h>
#define GRAPHENE_NET_PROTOCOL_VERSION 106
/**
@ -110,3 +112,6 @@
#define GRAPHENE_NET_MAX_NESTED_OBJECTS (250)
#define MAXIMUM_PEERDB_SIZE 1000
constexpr size_t MAX_BLOCKS_TO_HANDLE_AT_ONCE = 200;
constexpr size_t MAX_SYNC_BLOCKS_TO_PREFETCH = 10 * MAX_BLOCKS_TO_HANDLE_AT_ONCE;

View file

@ -61,7 +61,7 @@ namespace graphene { namespace net {
class node_delegate
{
public:
virtual ~node_delegate(){}
virtual ~node_delegate() = default;
/**
* If delegate has the item, the network has no need to fetch it.
@ -71,7 +71,9 @@ namespace graphene { namespace net {
/**
* @brief Called when a new block comes in from the network
*
* @param blk_msg the message which contains the block
* @param sync_mode true if the message was fetched through the sync process, false during normal operation
* @param contained_transaction_msg_ids container for the transactions to write back into
* @returns true if this message caused the blockchain to switch forks, false if it did not
*
* @throws exception if error validating the item, otherwise the item is
@ -152,6 +154,8 @@ namespace graphene { namespace net {
virtual uint32_t get_block_number(const item_hash_t& block_id) = 0;
virtual fc::time_point_sec get_last_known_hardfork_time() = 0;
/**
* Returns the time a block was produced (if block_id = 0, returns genesis time).
* If we don't know about the block, returns time_point_sec::min()
@ -193,7 +197,7 @@ namespace graphene { namespace net {
{
public:
node(const std::string& user_agent);
~node();
virtual ~node();
void close();
@ -211,11 +215,34 @@ namespace graphene { namespace net {
*/
void add_node( const fc::ip::endpoint& ep );
/*****
* @brief add a list of nodes to seed the p2p network
* @param seeds a vector of url strings
*/
void add_seed_nodes( std::vector<std::string> seeds );
/****
* @brief add a node to seed the p2p network
* @param in the url as a string
*/
void add_seed_node( const std::string& in);
/**
* Attempt to connect to the specified endpoint immediately.
*/
virtual void connect_to_endpoint( const fc::ip::endpoint& ep );
/**
* @brief Helper to convert a string to a collection of endpoints
*
* This converts a string (e.g. "bitshares.eu:665535") to a collection of endpoints.
* NOTE: Throws an exception if not in correct format or was unable to resolve URL.
*
* @param in the incoming string
* @returns a vector of endpoints
*/
static std::vector<fc::ip::endpoint> resolve_string_to_ip_endpoints( const std::string& in );
/**
* Specifies the network interface and port upon which incoming
* connections should be accepted.

View file

@ -62,6 +62,7 @@ namespace graphene { namespace net
class peer_connection_delegate
{
public:
virtual ~peer_connection_delegate() = default;
virtual void on_message(peer_connection* originating_peer,
const message& received_message) = 0;
virtual void on_connection_closed(peer_connection* originating_peer) = 0;
@ -125,7 +126,7 @@ namespace graphene { namespace net
* it is sitting on the queue
*/
virtual size_t get_size_in_queue() = 0;
virtual ~queued_message() {}
virtual ~queued_message() = default;
};
/* when you queue up a 'real_queued_message', a full copy of the message is
@ -258,6 +259,8 @@ namespace graphene { namespace net
uint32_t last_known_fork_block_number = 0;
fc::time_point_sec last_known_hardfork_time;
fc::future<void> accept_or_connect_task_done;
firewall_check_state_data *firewall_check_state = nullptr;

View file

@ -97,7 +97,7 @@ namespace graphene { namespace net {
{
public:
peer_database();
~peer_database();
virtual ~peer_database();
void open(const fc::path& databaseFilename);
void close();

File diff suppressed because it is too large

View file

@ -50,7 +50,8 @@ namespace graphene { namespace net {
indexed_by<ordered_non_unique<tag<last_seen_time_index>,
member<potential_peer_record,
fc::time_point_sec,
&potential_peer_record::last_seen_time> >,
&potential_peer_record::last_seen_time>,
std::greater<fc::time_point_sec> >,
hashed_unique<tag<endpoint_index>,
member<potential_peer_record,
fc::ip::endpoint,

View file

@ -79,123 +79,137 @@ account_history_plugin_impl::~account_history_plugin_impl()
void account_history_plugin_impl::update_account_histories( const signed_block& b )
{
graphene::chain::database& db = database();
vector<optional< operation_history_object > >& hist = db.get_applied_operations();
bool is_first = true;
auto skip_oho_id = [&is_first,&db,this]() {
if( is_first && db._undo_db.enabled() ) // this ensures that the current id is rolled back on undo
{
db.remove( db.create<operation_history_object>( []( operation_history_object& obj) {} ) );
is_first = false;
}
else
_oho_index->use_next_id();
};
for( optional< operation_history_object >& o_op : hist )
{
optional<operation_history_object> oho;
auto create_oho = [&]() {
is_first = false;
operation_history_object result = db.create<operation_history_object>( [&]( operation_history_object& h )
try
{
graphene::chain::database& db = database();
vector<optional< operation_history_object > >& hist = db.get_applied_operations();
bool is_first = true;
auto skip_oho_id = [&is_first,&db,this]() {
const std::lock_guard<std::mutex> undo_db_lock{db._undo_db_mutex};
if( is_first && db._undo_db.enabled() ) // this ensures that the current id is rolled back on undo
{
if( o_op.valid() )
h = *o_op;
} );
o_op->id = result.id;
return optional<operation_history_object>(result);
db.remove( db.create<operation_history_object>( []( operation_history_object& obj) {} ) );
is_first = false;
}
else
_oho_index->use_next_id();
};
if( !o_op.valid() || ( _max_ops_per_account == 0 && _partial_operations ) )
for( optional< operation_history_object >& o_op : hist )
{
// Note: the 2nd and 3rd checks above are for better performance, when the db is not clean,
// they will break consistency of account_stats.total_ops and removed_ops and most_recent_op
skip_oho_id();
continue;
}
else if( !_partial_operations )
// add to the operation history index
oho = create_oho();
optional<operation_history_object> oho;
const operation_history_object& op = *o_op;
// get the set of accounts this operation applies to
flat_set<account_id_type> impacted;
vector<authority> other;
// fee payer is added here
operation_get_required_authorities( op.op, impacted, impacted, other,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS( db.head_block_time() ) );
if( op.op.which() == operation::tag< account_create_operation >::value )
impacted.insert( op.result.get<object_id_type>() );
else
graphene::chain::operation_get_impacted_accounts( op.op, impacted,
MUST_IGNORE_CUSTOM_OP_REQD_AUTHS(db.head_block_time()) );
if( op.op.which() == operation::tag< lottery_end_operation >::value )
{
auto lop = op.op.get< lottery_end_operation >();
auto asset_object = lop.lottery( db );
impacted.insert( asset_object.issuer );
for( auto benefactor : asset_object.lottery_options->benefactors )
impacted.insert( benefactor.id );
}
for( auto& a : other )
for( auto& item : a.account_auths )
impacted.insert( item.first );
// be here, either _max_ops_per_account > 0, or _partial_operations == false, or both
// if _partial_operations == false, oho should have been created above
// so the only case should be checked here is:
// whether need to create oho if _max_ops_per_account > 0 and _partial_operations == true
// for each operation this account applies to that is in the config link it into the history
if( _tracked_accounts.size() == 0 ) // tracking all accounts
{
// if tracking all accounts, when impacted is not empty (although it will always be),
// still need to create oho if _max_ops_per_account > 0 and _partial_operations == true
// so always need to create oho if not done
if (!impacted.empty() && !oho.valid()) { oho = create_oho(); }
if( _max_ops_per_account > 0 )
{
// Note: the check above is for better performance, when the db is not clean,
// it breaks consistency of account_stats.total_ops and removed_ops and most_recent_op,
// but it ensures it's safe to remove old entries in add_account_history(...)
for( auto& account_id : impacted )
auto create_oho = [&]() {
is_first = false;
operation_history_object result = db.create<operation_history_object>( [&]( operation_history_object& h )
{
// we don't do index_account_keys here anymore, because
// that indexing now happens in observers' post_evaluate()
if( o_op.valid() )
h = *o_op;
} );
o_op->id = result.id;
return optional<operation_history_object>(result);
};
// add history
add_account_history( account_id, oho->id );
}
if( !o_op.valid() || ( _max_ops_per_account == 0 && _partial_operations ) )
{
// Note: the 2nd and 3rd checks above are for better performance, when the db is not clean,
// they will break consistency of account_stats.total_ops and removed_ops and most_recent_op
skip_oho_id();
continue;
}
}
else // tracking a subset of accounts
{
// whether need to create oho if _max_ops_per_account > 0 and _partial_operations == true ?
// the answer: only need to create oho if a tracked account is impacted and need to save history
else if( !_partial_operations )
// add to the operation history index
oho = create_oho();
if( _max_ops_per_account > 0 )
const operation_history_object& op = *o_op;
// get the set of accounts this operation applies to
flat_set<account_id_type> impacted;
vector<authority> other;
// fee payer is added here
operation_get_required_authorities( op.op, impacted, impacted, other, true );
if( op.op.which() == operation::tag< account_create_operation >::value )
impacted.insert( op.result.get<object_id_type>() );
else
graphene::chain::operation_get_impacted_accounts( op.op, impacted, true );
if( op.op.which() == operation::tag< lottery_end_operation >::value )
{
// Note: the check above is for better performance, when the db is not clean,
// it breaks consistency of account_stats.total_ops and removed_ops and most_recent_op,
// but it ensures it's safe to remove old entries in add_account_history(...)
for( auto account_id : _tracked_accounts )
auto lop = op.op.get< lottery_end_operation >();
auto asset_object = lop.lottery( db );
impacted.insert( asset_object.issuer );
for( auto benefactor : asset_object.lottery_options->benefactors )
impacted.insert( benefactor.id );
}
for( auto& a : other )
for( auto& item : a.account_auths )
impacted.insert( item.first );
// be here, either _max_ops_per_account > 0, or _partial_operations == false, or both
// if _partial_operations == false, oho should have been created above
// so the only case should be checked here is:
// whether need to create oho if _max_ops_per_account > 0 and _partial_operations == true
// for each operation this account applies to that is in the config link it into the history
if( _tracked_accounts.size() == 0 ) // tracking all accounts
{
// if tracking all accounts, when impacted is not empty (although it will always be),
// still need to create oho if _max_ops_per_account > 0 and _partial_operations == true
// so always need to create oho if not done
if (!impacted.empty() && !oho.valid()) { oho = create_oho(); }
if( _max_ops_per_account > 0 )
{
if( impacted.find( account_id ) != impacted.end() )
// Note: the check above is for better performance, when the db is not clean,
// it breaks consistency of account_stats.total_ops and removed_ops and most_recent_op,
// but it ensures it's safe to remove old entries in add_account_history(...)
for( auto& account_id : impacted )
{
if (!oho.valid()) { oho = create_oho(); }
// we don't do index_account_keys here anymore, because
// that indexing now happens in observers' post_evaluate()
// add history
add_account_history( account_id, oho->id );
}
}
}
else // tracking a subset of accounts
{
// whether need to create oho if _max_ops_per_account > 0 and _partial_operations == true ?
// the answer: only need to create oho if a tracked account is impacted and need to save history
if( _max_ops_per_account > 0 )
{
// Note: the check above is for better performance, when the db is not clean,
// it breaks consistency of account_stats.total_ops and removed_ops and most_recent_op,
// but it ensures it's safe to remove old entries in add_account_history(...)
for( auto account_id : _tracked_accounts )
{
if( impacted.find( account_id ) != impacted.end() )
{
if (!oho.valid()) { oho = create_oho(); }
// add history
add_account_history( account_id, oho->id );
}
}
}
}
if (_partial_operations && ! oho.valid())
skip_oho_id();
}
if (_partial_operations && ! oho.valid())
skip_oho_id();
}
catch( const boost::exception& e )
{
elog( "Caught account_history_plugin::update_account_histories(...) boost::exception: ${e}", ("e", boost::diagnostic_information(e) ) );
}
catch( const std::exception& e )
{
elog( "Caught account_history_plugin::update_account_histories(...) std::exception: ${e}", ("e", e.what() ) );
}
catch( ... )
{
wlog( "Caught unexpected exception in account_history_plugin::update_account_histories(...)" );
}
}
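The net effect of the rewrite above is that the whole per-block history update now runs inside one broad try/catch, so a failure in the plugin is logged instead of aborting block processing. A minimal, self-contained sketch of that pattern (std::cerr stands in for the fc elog/wlog macros used in the diff):

#include <boost/exception/diagnostic_information.hpp>
#include <exception>
#include <iostream>

void update_account_histories_guarded()
{
   try
   {
      // ... per-block history bookkeeping would go here ...
   }
   catch( const boost::exception& e )
   {
      std::cerr << "boost::exception: " << boost::diagnostic_information( e ) << std::endl;
   }
   catch( const std::exception& e )
   {
      std::cerr << "std::exception: " << e.what() << std::endl;
   }
   catch( ... )
   {
      std::cerr << "caught unexpected exception" << std::endl;
   }
}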


@@ -29,7 +29,6 @@
#include <graphene/chain/protocol/types.hpp>
#include <graphene/chain/protocol/asset.hpp>
#include <graphene/chain/event_object.hpp>
#include <graphene/chain/operation_history_object.hpp>
#include <graphene/affiliate_stats/affiliate_stats_objects.hpp>


@@ -157,37 +157,45 @@ fc::variants bookie_api_impl::get_objects(const vector<object_id_type>& ids) con
{
case event_id_type::type_id:
{
auto& persistent_events_by_event_id = db->get_index_type<detail::persistent_event_index>().indices().get<by_event_id>();
auto iter = persistent_events_by_event_id.find(id.as<event_id_type>());
if (iter != persistent_events_by_event_id.end())
return iter->ephemeral_event_object.to_variant();
const auto &idx = db->get_index_type<event_object_index>();
const auto &aidx = dynamic_cast<const base_primary_index &>(idx);
const auto &refs = aidx.get_secondary_index<detail::persistent_event_index>();
auto iter = refs.ephemeral_event_object.find(id.as<event_id_type>());
if (iter != refs.ephemeral_event_object.end())
return iter->second.to_variant();
else
return {};
}
case bet_id_type::type_id:
{
auto& persistent_bets_by_bet_id = db->get_index_type<detail::persistent_bet_index>().indices().get<by_bet_id>();
auto iter = persistent_bets_by_bet_id.find(id.as<bet_id_type>());
if (iter != persistent_bets_by_bet_id.end())
return iter->ephemeral_bet_object.to_variant();
const auto &idx = db->get_index_type<bet_object_index>();
const auto &aidx = dynamic_cast<const base_primary_index &>(idx);
const auto &refs = aidx.get_secondary_index<detail::persistent_bet_index>();
auto iter = refs.internal.find(id.as<bet_id_type>());
if (iter != refs.internal.end())
return iter->second.ephemeral_bet_object.to_variant();
else
return {};
}
case betting_market_object::type_id:
{
auto& persistent_betting_markets_by_betting_market_id = db->get_index_type<detail::persistent_betting_market_index>().indices().get<by_betting_market_id>();
auto iter = persistent_betting_markets_by_betting_market_id.find(id.as<betting_market_id_type>());
if (iter != persistent_betting_markets_by_betting_market_id.end())
return iter->ephemeral_betting_market_object.to_variant();
{
const auto &idx = db->get_index_type<betting_market_object_index>();
const auto &aidx = dynamic_cast<const base_primary_index &>(idx);
const auto &refs = aidx.get_secondary_index<detail::persistent_betting_market_index>();
auto iter = refs.ephemeral_betting_market_object.find(id.as<betting_market_id_type>());
if (iter != refs.ephemeral_betting_market_object.end())
return iter->second.to_variant();
else
return {};
}
case betting_market_group_object::type_id:
{
auto& persistent_betting_market_groups_by_betting_market_group_id = db->get_index_type<detail::persistent_betting_market_group_index>().indices().get<by_betting_market_group_id>();
auto iter = persistent_betting_market_groups_by_betting_market_group_id.find(id.as<betting_market_group_id_type>());
if (iter != persistent_betting_market_groups_by_betting_market_group_id.end())
return iter->ephemeral_betting_market_group_object.to_variant();
{
const auto &idx = db->get_index_type<betting_market_group_object_index>();
const auto &aidx = dynamic_cast<const base_primary_index &>(idx);
const auto &refs = aidx.get_secondary_index<detail::persistent_betting_market_group_index>();
auto iter = refs.internal.find(id.as<betting_market_group_id_type>());
if (iter != refs.internal.end())
return iter->second.ephemeral_betting_market_group_object.to_variant();
else
return {};
}
@@ -203,25 +211,28 @@ std::vector<matched_bet_object> bookie_api_impl::get_matched_bets_for_bettor(acc
{
std::vector<matched_bet_object> result;
std::shared_ptr<graphene::chain::database> db = app.chain_database();
auto& persistent_bets_by_bettor_id = db->get_index_type<detail::persistent_bet_index>().indices().get<by_bettor_id>();
auto iter = persistent_bets_by_bettor_id.lower_bound(std::make_tuple(bettor_id, true));
while (iter != persistent_bets_by_bettor_id.end() &&
iter->get_bettor_id() == bettor_id &&
iter->is_matched())
{
matched_bet_object match;
match.id = iter->ephemeral_bet_object.id;
match.bettor_id = iter->ephemeral_bet_object.bettor_id;
match.betting_market_id = iter->ephemeral_bet_object.betting_market_id;
match.amount_to_bet = iter->ephemeral_bet_object.amount_to_bet;
match.back_or_lay = iter->ephemeral_bet_object.back_or_lay;
match.end_of_delay = iter->ephemeral_bet_object.end_of_delay;
match.amount_matched = iter->amount_matched;
match.associated_operations = iter->associated_operations;
result.emplace_back(std::move(match));
const auto &idx = db->get_index_type<bet_object_index>();
const auto &aidx = dynamic_cast<const base_primary_index &>(idx);
const auto &refs = aidx.get_secondary_index<detail::persistent_bet_index>();
++iter;
for( const auto& bet_pair : refs.internal )
{
const auto& bet = bet_pair.second;
if( bet.get_bettor_id() == bettor_id && bet.is_matched() )
{
matched_bet_object match;
match.id = bet.ephemeral_bet_object.id;
match.bettor_id = bet.ephemeral_bet_object.bettor_id;
match.betting_market_id = bet.ephemeral_bet_object.betting_market_id;
match.amount_to_bet = bet.ephemeral_bet_object.amount_to_bet;
match.back_or_lay = bet.ephemeral_bet_object.back_or_lay;
match.end_of_delay = bet.ephemeral_bet_object.end_of_delay;
match.amount_matched = bet.amount_matched;
match.associated_operations = bet.associated_operations;
result.emplace_back(std::move(match));
}
}
return result;
}
@@ -231,29 +242,32 @@ std::vector<matched_bet_object> bookie_api_impl::get_all_matched_bets_for_bettor
std::vector<matched_bet_object> result;
std::shared_ptr<graphene::chain::database> db = app.chain_database();
auto& persistent_bets_by_bettor_id = db->get_index_type<detail::persistent_bet_index>().indices().get<by_bettor_id>();
persistent_bet_multi_index_type::index<by_bettor_id>::type::iterator iter;
if (start == bet_id_type())
iter = persistent_bets_by_bettor_id.lower_bound(std::make_tuple(bettor_id, true));
else
iter = persistent_bets_by_bettor_id.lower_bound(std::make_tuple(bettor_id, true, start));
while (iter != persistent_bets_by_bettor_id.end() &&
iter->get_bettor_id() == bettor_id &&
iter->is_matched() &&
result.size() < limit)
{
matched_bet_object match;
match.id = iter->ephemeral_bet_object.id;
match.bettor_id = iter->ephemeral_bet_object.bettor_id;
match.betting_market_id = iter->ephemeral_bet_object.betting_market_id;
match.amount_to_bet = iter->ephemeral_bet_object.amount_to_bet;
match.back_or_lay = iter->ephemeral_bet_object.back_or_lay;
match.end_of_delay = iter->ephemeral_bet_object.end_of_delay;
match.amount_matched = iter->amount_matched;
result.emplace_back(std::move(match));
const auto &idx = db->get_index_type<bet_object_index>();
const auto &aidx = dynamic_cast<const base_primary_index &>(idx);
const auto &refs = aidx.get_secondary_index<detail::persistent_bet_index>();
++iter;
for( const auto& bet_pair : refs.internal )
{
const auto& bet_id = bet_pair.first;
const auto& bet = bet_pair.second;
if( bet.get_bettor_id() == bettor_id &&
bet.is_matched() &&
bet_id > start &&
result.size() < limit )
{
matched_bet_object match;
match.id = bet.ephemeral_bet_object.id;
match.bettor_id = bet.ephemeral_bet_object.bettor_id;
match.betting_market_id = bet.ephemeral_bet_object.betting_market_id;
match.amount_to_bet = bet.ephemeral_bet_object.amount_to_bet;
match.back_or_lay = bet.ephemeral_bet_object.back_or_lay;
match.end_of_delay = bet.ephemeral_bet_object.end_of_delay;
match.amount_matched = bet.amount_matched;
match.associated_operations = bet.associated_operations;
result.emplace_back(std::move(match));
}
}
return result;
}
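Every lookup in this file now follows the same three-step pattern: fetch the primary index, downcast it to base_primary_index, then read the plugin's map-backed secondary index. Condensed from the diff above for the event case (some_event_id and result are placeholders, not identifiers from the source):

const auto& idx  = db->get_index_type<event_object_index>();                    // primary index
const auto& aidx = dynamic_cast<const base_primary_index&>( idx );              // exposes the secondary indexes
const auto& refs = aidx.get_secondary_index<detail::persistent_event_index>();  // bookie's map-backed index
auto iter = refs.ephemeral_event_object.find( some_event_id );
if( iter != refs.ephemeral_event_object.end() )
   result = iter->second.to_variant();                                          // result: placeholder fc::variant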


@@ -59,143 +59,80 @@ namespace detail
* We do this by creating a secondary index on bet_object. We don't actually use it
* to index any property of the bet, we just use it to register for callbacks.
*/
class persistent_bet_object_helper : public secondary_index
{
public:
virtual ~persistent_bet_object_helper() {}
virtual void object_inserted(const object& obj) override;
//virtual void object_removed( const object& obj ) override;
//virtual void about_to_modify( const object& before ) override;
virtual void object_modified(const object& after) override;
void set_plugin_instance(bookie_plugin* instance) { _bookie_plugin = instance; }
private:
bookie_plugin* _bookie_plugin;
};
void persistent_bet_object_helper::object_inserted(const object& obj)
void persistent_bet_index::object_inserted(const object& obj)
{
const bet_object& bet_obj = *boost::polymorphic_downcast<const bet_object*>(&obj);
_bookie_plugin->database().create<persistent_bet_object>([&](persistent_bet_object& saved_bet_obj) {
saved_bet_obj.ephemeral_bet_object = bet_obj;
});
if(0 == internal.count(bet_obj.id))
internal.insert( {bet_obj.id, bet_obj} );
else
internal[bet_obj.id] = bet_obj;
}
void persistent_bet_object_helper::object_modified(const object& after)
void persistent_bet_index::object_modified(const object& after)
{
database& db = _bookie_plugin->database();
auto& persistent_bets_by_bet_id = db.get_index_type<persistent_bet_index>().indices().get<by_bet_id>();
const bet_object& bet_obj = *boost::polymorphic_downcast<const bet_object*>(&after);
auto iter = persistent_bets_by_bet_id.find(bet_obj.id);
assert (iter != persistent_bets_by_bet_id.end());
if (iter != persistent_bets_by_bet_id.end())
db.modify(*iter, [&](persistent_bet_object& saved_bet_obj) {
saved_bet_obj.ephemeral_bet_object = bet_obj;
});
auto iter = internal.find(bet_obj.id);
assert (iter != internal.end());
if (iter != internal.end())
iter->second = bet_obj;
}
//////////// end bet_object ///////////////////
class persistent_betting_market_object_helper : public secondary_index
{
public:
virtual ~persistent_betting_market_object_helper() {}
virtual void object_inserted(const object& obj) override;
//virtual void object_removed( const object& obj ) override;
//virtual void about_to_modify( const object& before ) override;
virtual void object_modified(const object& after) override;
void set_plugin_instance(bookie_plugin* instance) { _bookie_plugin = instance; }
private:
bookie_plugin* _bookie_plugin;
};
void persistent_betting_market_object_helper::object_inserted(const object& obj)
void persistent_betting_market_index::object_inserted(const object& obj)
{
const betting_market_object& betting_market_obj = *boost::polymorphic_downcast<const betting_market_object*>(&obj);
_bookie_plugin->database().create<persistent_betting_market_object>([&](persistent_betting_market_object& saved_betting_market_obj) {
saved_betting_market_obj.ephemeral_betting_market_object = betting_market_obj;
});
if(0 == ephemeral_betting_market_object.count(betting_market_obj.id))
ephemeral_betting_market_object.insert( {betting_market_obj.id, betting_market_obj} );
else
ephemeral_betting_market_object[betting_market_obj.id] = betting_market_obj;
}
void persistent_betting_market_object_helper::object_modified(const object& after)
void persistent_betting_market_index::object_modified(const object& after)
{
database& db = _bookie_plugin->database();
auto& persistent_betting_markets_by_betting_market_id = db.get_index_type<persistent_betting_market_index>().indices().get<by_betting_market_id>();
const betting_market_object& betting_market_obj = *boost::polymorphic_downcast<const betting_market_object*>(&after);
auto iter = persistent_betting_markets_by_betting_market_id.find(betting_market_obj.id);
assert (iter != persistent_betting_markets_by_betting_market_id.end());
if (iter != persistent_betting_markets_by_betting_market_id.end())
db.modify(*iter, [&](persistent_betting_market_object& saved_betting_market_obj) {
saved_betting_market_obj.ephemeral_betting_market_object = betting_market_obj;
});
auto iter = ephemeral_betting_market_object.find(betting_market_obj.id);
assert (iter != ephemeral_betting_market_object.end());
if (iter != ephemeral_betting_market_object.end())
iter->second = betting_market_obj;
}
//////////// end betting_market_object ///////////////////
class persistent_betting_market_group_object_helper : public secondary_index
{
public:
virtual ~persistent_betting_market_group_object_helper() {}
virtual void object_inserted(const object& obj) override;
//virtual void object_removed( const object& obj ) override;
//virtual void about_to_modify( const object& before ) override;
virtual void object_modified(const object& after) override;
void set_plugin_instance(bookie_plugin* instance) { _bookie_plugin = instance; }
private:
bookie_plugin* _bookie_plugin;
};
void persistent_betting_market_group_object_helper::object_inserted(const object& obj)
void persistent_betting_market_group_index::object_inserted(const object& obj)
{
const betting_market_group_object& betting_market_group_obj = *boost::polymorphic_downcast<const betting_market_group_object*>(&obj);
_bookie_plugin->database().create<persistent_betting_market_group_object>([&](persistent_betting_market_group_object& saved_betting_market_group_obj) {
saved_betting_market_group_obj.ephemeral_betting_market_group_object = betting_market_group_obj;
});
if(0 == internal.count(betting_market_group_obj.id))
internal.insert( {betting_market_group_obj.id, betting_market_group_obj} );
else
internal[betting_market_group_obj.id] = betting_market_group_obj;
}
void persistent_betting_market_group_object_helper::object_modified(const object& after)
void persistent_betting_market_group_index::object_modified(const object& after)
{
database& db = _bookie_plugin->database();
auto& persistent_betting_market_groups_by_betting_market_group_id = db.get_index_type<persistent_betting_market_group_index>().indices().get<by_betting_market_group_id>();
const betting_market_group_object& betting_market_group_obj = *boost::polymorphic_downcast<const betting_market_group_object*>(&after);
auto iter = persistent_betting_market_groups_by_betting_market_group_id.find(betting_market_group_obj.id);
assert (iter != persistent_betting_market_groups_by_betting_market_group_id.end());
if (iter != persistent_betting_market_groups_by_betting_market_group_id.end())
db.modify(*iter, [&](persistent_betting_market_group_object& saved_betting_market_group_obj) {
saved_betting_market_group_obj.ephemeral_betting_market_group_object = betting_market_group_obj;
});
auto iter = internal.find(betting_market_group_obj.id);
assert (iter != internal.end());
if (iter != internal.end())
iter->second = betting_market_group_obj;
}
//////////// end betting_market_group_object ///////////////////
class persistent_event_object_helper : public secondary_index
{
public:
virtual ~persistent_event_object_helper() {}
virtual void object_inserted(const object& obj) override;
//virtual void object_removed( const object& obj ) override;
//virtual void about_to_modify( const object& before ) override;
virtual void object_modified(const object& after) override;
void set_plugin_instance(bookie_plugin* instance) { _bookie_plugin = instance; }
private:
bookie_plugin* _bookie_plugin;
};
void persistent_event_object_helper::object_inserted(const object& obj)
void persistent_event_index::object_inserted(const object& obj)
{
const event_object& event_obj = *boost::polymorphic_downcast<const event_object*>(&obj);
_bookie_plugin->database().create<persistent_event_object>([&](persistent_event_object& saved_event_obj) {
saved_event_obj.ephemeral_event_object = event_obj;
});
if(0 == ephemeral_event_object.count(event_obj.id))
ephemeral_event_object.insert( {event_obj.id, event_obj} );
else
ephemeral_event_object[event_obj.id] = event_obj;
}
void persistent_event_object_helper::object_modified(const object& after)
void persistent_event_index::object_modified(const object& after)
{
database& db = _bookie_plugin->database();
auto& persistent_events_by_event_id = db.get_index_type<persistent_event_index>().indices().get<by_event_id>();
const event_object& event_obj = *boost::polymorphic_downcast<const event_object*>(&after);
auto iter = persistent_events_by_event_id.find(event_obj.id);
assert (iter != persistent_events_by_event_id.end());
if (iter != persistent_events_by_event_id.end())
db.modify(*iter, [&](persistent_event_object& saved_event_obj) {
saved_event_obj.ephemeral_event_object = event_obj;
});
auto iter = ephemeral_event_object.find(event_obj.id);
assert (iter != ephemeral_event_object.end());
if (iter != ephemeral_event_object.end())
iter->second = event_obj;
}
//////////// end event_object ///////////////////
@@ -207,7 +144,6 @@ class bookie_plugin_impl
{ }
virtual ~bookie_plugin_impl();
/**
* Called After a block has been applied and committed. The callback
* should not yield and should execute quickly.
@@ -299,27 +235,35 @@ void bookie_plugin_impl::on_block_applied( const signed_block& )
const asset& amount_bet = bet_matched_op.amount_bet;
// object may no longer exist
//const bet_object& bet = bet_matched_op.bet_id(db);
auto& persistent_bets_by_bet_id = db.get_index_type<persistent_bet_index>().indices().get<by_bet_id>();
auto bet_iter = persistent_bets_by_bet_id.find(bet_matched_op.bet_id);
assert(bet_iter != persistent_bets_by_bet_id.end());
if (bet_iter != persistent_bets_by_bet_id.end())
const auto &idx_bet_object = db.get_index_type<bet_object_index>();
const auto &aidx_bet_object = dynamic_cast<const base_primary_index &>(idx_bet_object);
const auto &refs_bet_object = aidx_bet_object.get_secondary_index<detail::persistent_bet_index>();
auto& nonconst_refs_bet_object = const_cast<persistent_bet_index&>(refs_bet_object);
auto bet_iter = nonconst_refs_bet_object.internal.find(bet_matched_op.bet_id);
assert(bet_iter != nonconst_refs_bet_object.internal.end());
if (bet_iter != nonconst_refs_bet_object.internal.end())
{
db.modify(*bet_iter, [&]( persistent_bet_object& obj ) {
obj.amount_matched += amount_bet.amount;
if (is_operation_history_object_stored(op.id))
obj.associated_operations.emplace_back(op.id);
});
const bet_object& bet_obj = bet_iter->ephemeral_bet_object;
bet_iter->second.amount_matched += amount_bet.amount;
if (is_operation_history_object_stored(op.id))
bet_iter->second.associated_operations.emplace_back(op.id);
auto& persistent_betting_market_idx = db.get_index_type<persistent_betting_market_index>().indices().get<by_betting_market_id>();
auto persistent_betting_market_object_iter = persistent_betting_market_idx.find(bet_obj.betting_market_id);
FC_ASSERT(persistent_betting_market_object_iter != persistent_betting_market_idx.end());
const betting_market_object& betting_market = persistent_betting_market_object_iter->ephemeral_betting_market_object;
const bet_object& bet_obj = bet_iter->second.ephemeral_bet_object;
auto& persistent_betting_market_group_idx = db.get_index_type<persistent_betting_market_group_index>().indices().get<by_betting_market_group_id>();
auto persistent_betting_market_group_object_iter = persistent_betting_market_group_idx.find(betting_market.group_id);
FC_ASSERT(persistent_betting_market_group_object_iter != persistent_betting_market_group_idx.end());
const betting_market_group_object& betting_market_group = persistent_betting_market_group_object_iter->ephemeral_betting_market_group_object;
const auto &idx_betting_market = db.get_index_type<betting_market_object_index>();
const auto &aidx_betting_market = dynamic_cast<const base_primary_index &>(idx_betting_market);
const auto &refs_betting_market = aidx_betting_market.get_secondary_index<detail::persistent_betting_market_index>();
auto persistent_betting_market_object_iter = refs_betting_market.ephemeral_betting_market_object.find(bet_obj.betting_market_id);
FC_ASSERT(persistent_betting_market_object_iter != refs_betting_market.ephemeral_betting_market_object.end());
const betting_market_object& betting_market = persistent_betting_market_object_iter->second;
const auto &idx_betting_market_group = db.get_index_type<betting_market_group_object_index>();
const auto &aidx_betting_market_group = dynamic_cast<const base_primary_index &>(idx_betting_market_group);
const auto &refs_betting_market_group = aidx_betting_market_group.get_secondary_index<detail::persistent_betting_market_group_index>();
auto& nonconst_refs_betting_market_group = const_cast<persistent_betting_market_group_index&>(refs_betting_market_group);
auto persistent_betting_market_group_object_iter = nonconst_refs_betting_market_group.internal.find(betting_market.group_id);
FC_ASSERT(persistent_betting_market_group_object_iter != nonconst_refs_betting_market_group.internal.end());
const betting_market_group_object& betting_market_group = persistent_betting_market_group_object_iter->second.ephemeral_betting_market_group_object;
// if the object is still in the main database, keep the running total there
// otherwise, add it directly to the persistent version
@@ -330,9 +274,7 @@ void bookie_plugin_impl::on_block_applied( const signed_block& )
obj.total_matched_bets_amount += amount_bet.amount;
});
else
db.modify( *persistent_betting_market_group_object_iter, [&]( persistent_betting_market_group_object& obj ){
obj.ephemeral_betting_market_group_object.total_matched_bets_amount += amount_bet.amount;
});
persistent_betting_market_group_object_iter->second.total_matched_bets_amount += amount_bet.amount;
}
}
else if( op.op.which() == operation::tag<event_create_operation>::value )
@@ -364,33 +306,35 @@ void bookie_plugin_impl::on_block_applied( const signed_block& )
else if ( op.op.which() == operation::tag<bet_canceled_operation>::value )
{
const bet_canceled_operation& bet_canceled_op = op.op.get<bet_canceled_operation>();
auto& persistent_bets_by_bet_id = db.get_index_type<persistent_bet_index>().indices().get<by_bet_id>();
auto bet_iter = persistent_bets_by_bet_id.find(bet_canceled_op.bet_id);
assert(bet_iter != persistent_bets_by_bet_id.end());
if (bet_iter != persistent_bets_by_bet_id.end())
const auto &idx_bet_object = db.get_index_type<bet_object_index>();
const auto &aidx_bet_object = dynamic_cast<const base_primary_index &>(idx_bet_object);
const auto &refs_bet_object = aidx_bet_object.get_secondary_index<detail::persistent_bet_index>();
auto& nonconst_refs_bet_object = const_cast<persistent_bet_index&>(refs_bet_object);
auto bet_iter = nonconst_refs_bet_object.internal.find(bet_canceled_op.bet_id);
assert(bet_iter != nonconst_refs_bet_object.internal.end());
if (bet_iter != nonconst_refs_bet_object.internal.end())
{
// ilog("Adding bet_canceled_operation ${canceled_id} to bet ${bet_id}'s associated operations",
// ("canceled_id", op.id)("bet_id", bet_canceled_op.bet_id));
if (is_operation_history_object_stored(op.id))
db.modify(*bet_iter, [&]( persistent_bet_object& obj ) {
obj.associated_operations.emplace_back(op.id);
});
bet_iter->second.associated_operations.emplace_back(op.id);
}
}
else if ( op.op.which() == operation::tag<bet_adjusted_operation>::value )
{
const bet_adjusted_operation& bet_adjusted_op = op.op.get<bet_adjusted_operation>();
auto& persistent_bets_by_bet_id = db.get_index_type<persistent_bet_index>().indices().get<by_bet_id>();
auto bet_iter = persistent_bets_by_bet_id.find(bet_adjusted_op.bet_id);
assert(bet_iter != persistent_bets_by_bet_id.end());
if (bet_iter != persistent_bets_by_bet_id.end())
const auto &idx_bet_object = db.get_index_type<bet_object_index>();
const auto &aidx_bet_object = dynamic_cast<const base_primary_index &>(idx_bet_object);
const auto &refs_bet_object = aidx_bet_object.get_secondary_index<detail::persistent_bet_index>();
auto& nonconst_refs_bet_object = const_cast<persistent_bet_index&>(refs_bet_object);
auto bet_iter = nonconst_refs_bet_object.internal.find(bet_adjusted_op.bet_id);
assert(bet_iter != nonconst_refs_bet_object.internal.end());
if (bet_iter != nonconst_refs_bet_object.internal.end())
{
// ilog("Adding bet_adjusted_operation ${adjusted_id} to bet ${bet_id}'s associated operations",
// ("adjusted_id", op.id)("bet_id", bet_adjusted_op.bet_id));
if (is_operation_history_object_stored(op.id))
db.modify(*bet_iter, [&]( persistent_bet_object& obj ) {
obj.associated_operations.emplace_back(op.id);
});
bet_iter->second.associated_operations.emplace_back(op.id);
}
}
@@ -472,31 +416,21 @@ void bookie_plugin::plugin_initialize(const boost::program_options::variables_ma
database().new_objects.connect([this](const vector<object_id_type>& ids, const flat_set<account_id_type>& impacted_accounts) { my->on_objects_new(ids); });
database().removed_objects.connect([this](const vector<object_id_type>& ids, const vector<const object*>& objs, const flat_set<account_id_type>& impacted_accounts) { my->on_objects_removed(ids); });
//auto event_index =
database().add_index<primary_index<detail::persistent_event_index> >();
database().add_index<primary_index<detail::persistent_betting_market_group_index> >();
database().add_index<primary_index<detail::persistent_betting_market_index> >();
database().add_index<primary_index<detail::persistent_bet_index> >();
const primary_index<bet_object_index>& bet_object_idx = database().get_index_type<primary_index<bet_object_index> >();
primary_index<bet_object_index>& nonconst_bet_object_idx = const_cast<primary_index<bet_object_index>&>(bet_object_idx);
detail::persistent_bet_object_helper* persistent_bet_object_helper_index = nonconst_bet_object_idx.add_secondary_index<detail::persistent_bet_object_helper>();
persistent_bet_object_helper_index->set_plugin_instance(this);
nonconst_bet_object_idx.add_secondary_index<detail::persistent_bet_index>();
const primary_index<betting_market_object_index>& betting_market_object_idx = database().get_index_type<primary_index<betting_market_object_index> >();
primary_index<betting_market_object_index>& nonconst_betting_market_object_idx = const_cast<primary_index<betting_market_object_index>&>(betting_market_object_idx);
detail::persistent_betting_market_object_helper* persistent_betting_market_object_helper_index = nonconst_betting_market_object_idx.add_secondary_index<detail::persistent_betting_market_object_helper>();
persistent_betting_market_object_helper_index->set_plugin_instance(this);
nonconst_betting_market_object_idx.add_secondary_index<detail::persistent_betting_market_index>();
const primary_index<betting_market_group_object_index>& betting_market_group_object_idx = database().get_index_type<primary_index<betting_market_group_object_index> >();
primary_index<betting_market_group_object_index>& nonconst_betting_market_group_object_idx = const_cast<primary_index<betting_market_group_object_index>&>(betting_market_group_object_idx);
detail::persistent_betting_market_group_object_helper* persistent_betting_market_group_object_helper_index = nonconst_betting_market_group_object_idx.add_secondary_index<detail::persistent_betting_market_group_object_helper>();
persistent_betting_market_group_object_helper_index->set_plugin_instance(this);
nonconst_betting_market_group_object_idx.add_secondary_index<detail::persistent_betting_market_group_index>();
const primary_index<event_object_index>& event_object_idx = database().get_index_type<primary_index<event_object_index> >();
primary_index<event_object_index>& nonconst_event_object_idx = const_cast<primary_index<event_object_index>&>(event_object_idx);
detail::persistent_event_object_helper* persistent_event_object_helper_index = nonconst_event_object_idx.add_secondary_index<detail::persistent_event_object_helper>();
persistent_event_object_helper_index->set_plugin_instance(this);
nonconst_event_object_idx.add_secondary_index<detail::persistent_event_index>();
ilog("bookie plugin: plugin_startup() end");
}
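Condensed, the registration that replaces the old helper classes is: look up the (const) primary index, cast away constness, and attach the plugin's secondary index; the same three lines are repeated above for bets, betting markets, betting market groups and events:

const auto& const_idx = database().get_index_type< primary_index<bet_object_index> >();
auto& idx = const_cast< primary_index<bet_object_index>& >( const_idx );
idx.add_secondary_index< detail::persistent_bet_index >();   // no set_plugin_instance() call is needed any more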


@@ -29,39 +29,21 @@
namespace graphene { namespace bookie {
using namespace chain;
enum bookie_object_type
{
persistent_event_object_type,
persistent_betting_market_group_object_type,
persistent_betting_market_object_type,
persistent_bet_object_type,
BOOKIE_OBJECT_TYPE_COUNT ///< Sentry value which contains the number of different object types
};
namespace detail
{
class persistent_event_object : public graphene::db::abstract_object<persistent_event_object>
/**
* @brief This secondary index allows a reverse lookup of all events that have occurred
*/
class persistent_event_index : public secondary_index
{
public:
static const uint8_t space_id = bookie_objects;
static const uint8_t type_id = persistent_event_object_type;
public:
virtual void object_inserted( const object& obj ) override;
virtual void object_modified( const object& after ) override;
event_object ephemeral_event_object;
event_id_type get_event_id() const { return ephemeral_event_object.id; }
map< event_id_type, event_object > ephemeral_event_object;
};
typedef object_id<bookie_objects, persistent_event_object_type, persistent_event_object> persistent_event_id_type;
struct by_event_id;
typedef multi_index_container<
persistent_event_object,
indexed_by<
ordered_unique<tag<by_id>, member<object, object_id_type, &object::id> >,
ordered_unique<tag<by_event_id>, const_mem_fun<persistent_event_object, event_id_type, &persistent_event_object::get_event_id> > > > persistent_event_multi_index_type;
typedef generic_index<persistent_event_object, persistent_event_multi_index_type> persistent_event_index;
#if 0 // we no longer have competitors, just leaving this here as an example of how to do a secondary index
class events_by_competitor_index : public secondary_index
{
@@ -101,95 +83,122 @@ void events_by_competitor_index::object_modified( const object& after )
}
#endif
//////////// betting_market_group_object //////////////////
class persistent_betting_market_group_object : public graphene::db::abstract_object<persistent_betting_market_group_object>
/**
* @brief This secondary index allows a reverse lookup of all betting_market_group_objects that have occurred
*/
class persistent_betting_market_group_index : public secondary_index
{
public:
static const uint8_t space_id = bookie_objects;
static const uint8_t type_id = persistent_betting_market_group_object_type;
public:
struct internal_type
{
internal_type() = default;
internal_type(const betting_market_group_object& other)
: ephemeral_betting_market_group_object{other}
{}
internal_type& operator=(const betting_market_group_object& other)
{
ephemeral_betting_market_group_object = other;
return *this;
}
friend bool operator==(const internal_type& lhs, const internal_type& rhs);
friend bool operator<(const internal_type& lhs, const internal_type& rhs);
friend bool operator>(const internal_type& lhs, const internal_type& rhs);
betting_market_group_object ephemeral_betting_market_group_object;
share_type total_matched_bets_amount;
};
betting_market_group_id_type get_betting_market_group_id() const { return ephemeral_betting_market_group_object.id; }
public:
virtual void object_inserted( const object& obj ) override;
virtual void object_modified( const object& after ) override;
map< betting_market_group_id_type, internal_type > internal;
};
struct by_betting_market_group_id;
typedef multi_index_container<
persistent_betting_market_group_object,
indexed_by<
ordered_unique<tag<by_id>, member<object, object_id_type, &object::id> >,
ordered_unique<tag<by_betting_market_group_id>, const_mem_fun<persistent_betting_market_group_object, betting_market_group_id_type, &persistent_betting_market_group_object::get_betting_market_group_id> > > > persistent_betting_market_group_multi_index_type;
typedef generic_index<persistent_betting_market_group_object, persistent_betting_market_group_multi_index_type> persistent_betting_market_group_index;
//////////// betting_market_object //////////////////
class persistent_betting_market_object : public graphene::db::abstract_object<persistent_betting_market_object>
inline bool operator==(const persistent_betting_market_group_index::internal_type& lhs, const persistent_betting_market_group_index::internal_type& rhs)
{
public:
static const uint8_t space_id = bookie_objects;
static const uint8_t type_id = persistent_betting_market_object_type;
return lhs.ephemeral_betting_market_group_object == rhs.ephemeral_betting_market_group_object;
}
betting_market_object ephemeral_betting_market_object;
inline bool operator<(const persistent_betting_market_group_index::internal_type& lhs, const persistent_betting_market_group_index::internal_type& rhs)
{
return lhs.ephemeral_betting_market_group_object < rhs.ephemeral_betting_market_group_object;
}
share_type total_matched_bets_amount;
inline bool operator>(const persistent_betting_market_group_index::internal_type& lhs, const persistent_betting_market_group_index::internal_type& rhs)
{
return !operator<(lhs, rhs);
}
betting_market_id_type get_betting_market_id() const { return ephemeral_betting_market_object.id; }
/**
* @brief This secondary index allows a reverse lookup of all betting_market_objects that have occurred
*/
class persistent_betting_market_index : public secondary_index
{
public:
virtual void object_inserted( const object& obj ) override;
virtual void object_modified( const object& after ) override;
map< betting_market_id_type, betting_market_object > ephemeral_betting_market_object;
};
struct by_betting_market_id;
typedef multi_index_container<
persistent_betting_market_object,
indexed_by<
ordered_unique<tag<by_id>, member<object, object_id_type, &object::id> >,
ordered_unique<tag<by_betting_market_id>, const_mem_fun<persistent_betting_market_object, betting_market_id_type, &persistent_betting_market_object::get_betting_market_id> > > > persistent_betting_market_multi_index_type;
typedef generic_index<persistent_betting_market_object, persistent_betting_market_multi_index_type> persistent_betting_market_index;
//////////// bet_object //////////////////
class persistent_bet_object : public graphene::db::abstract_object<persistent_bet_object>
/**
* @brief This secondary index allows a reverse lookup of all bet_objects that have occurred
*/
class persistent_bet_index : public secondary_index
{
public:
static const uint8_t space_id = bookie_objects;
static const uint8_t type_id = persistent_bet_object_type;
public:
struct internal_type
{
internal_type() = default;
bet_object ephemeral_bet_object;
internal_type(const bet_object& other)
: ephemeral_bet_object{other}
{}
// total amount of the bet that matched
share_type amount_matched;
internal_type& operator=(const bet_object& other)
{
ephemeral_bet_object = other;
return *this;
}
std::vector<operation_history_id_type> associated_operations;
bet_id_type get_bet_id() const { return ephemeral_bet_object.id; }
account_id_type get_bettor_id() const { return ephemeral_bet_object.bettor_id; }
bool is_matched() const { return amount_matched != share_type(); }
friend bool operator==(const internal_type& lhs, const internal_type& rhs);
friend bool operator<(const internal_type& lhs, const internal_type& rhs);
friend bool operator>(const internal_type& lhs, const internal_type& rhs);
bet_object ephemeral_bet_object;
// total amount of the bet that matched
share_type amount_matched;
std::vector<operation_history_id_type> associated_operations;
};
public:
virtual void object_inserted( const object& obj ) override;
virtual void object_modified( const object& after ) override;
map< bet_id_type, internal_type > internal;
};
struct by_bet_id;
struct by_bettor_id;
typedef multi_index_container<
persistent_bet_object,
indexed_by<
ordered_unique<tag<by_id>, member<object, object_id_type, &object::id> >,
ordered_unique<tag<by_bet_id>, const_mem_fun<persistent_bet_object, bet_id_type, &persistent_bet_object::get_bet_id> >,
ordered_unique<tag<by_bettor_id>,
composite_key<
persistent_bet_object,
const_mem_fun<persistent_bet_object, account_id_type, &persistent_bet_object::get_bettor_id>,
const_mem_fun<persistent_bet_object, bool, &persistent_bet_object::is_matched>,
const_mem_fun<persistent_bet_object, bet_id_type, &persistent_bet_object::get_bet_id> >,
composite_key_compare<
std::less<account_id_type>,
std::less<bool>,
std::greater<bet_id_type> > > > > persistent_bet_multi_index_type;
inline bool operator==(const persistent_bet_index::internal_type& lhs, const persistent_bet_index::internal_type& rhs)
{
return lhs.ephemeral_bet_object == rhs.ephemeral_bet_object;
}
typedef generic_index<persistent_bet_object, persistent_bet_multi_index_type> persistent_bet_index;
inline bool operator<(const persistent_bet_index::internal_type& lhs, const persistent_bet_index::internal_type& rhs)
{
return lhs.ephemeral_bet_object < rhs.ephemeral_bet_object;
}
inline bool operator>(const persistent_bet_index::internal_type& lhs, const persistent_bet_index::internal_type& rhs)
{
return !operator<(lhs, rhs);
}
} } } //graphene::bookie::detail
FC_REFLECT_DERIVED( graphene::bookie::detail::persistent_event_object, (graphene::db::object), (ephemeral_event_object) )
FC_REFLECT_DERIVED( graphene::bookie::detail::persistent_betting_market_group_object, (graphene::db::object), (ephemeral_betting_market_group_object)(total_matched_bets_amount) )
FC_REFLECT_DERIVED( graphene::bookie::detail::persistent_betting_market_object, (graphene::db::object), (ephemeral_betting_market_object) )
FC_REFLECT_DERIVED( graphene::bookie::detail::persistent_bet_object, (graphene::db::object), (ephemeral_bet_object)(amount_matched)(associated_operations) )
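Stripped of the graphene types, the new classes above are callback-driven caches: a secondary index that mirrors the primary objects into a std::map keyed by id. A self-contained sketch of that shape (all types here are hypothetical stand-ins, not the graphene::db API):

#include <cstdint>
#include <map>

struct object { std::uint64_t id = 0; virtual ~object() = default; };  // stand-in for graphene::db::object
struct bet_record : object { long amount_to_bet = 0; };                // stand-in for bet_object

struct secondary_index_base                                            // stand-in for graphene::db::secondary_index
{
   virtual ~secondary_index_base() = default;
   virtual void object_inserted( const object& ) {}
   virtual void object_modified( const object& ) {}
};

// Mirrors every bet_record into a map keyed by its id, the way persistent_bet_index does above.
struct bet_cache : secondary_index_base
{
   std::map<std::uint64_t, bet_record> internal;

   void object_inserted( const object& obj ) override
   {
      const auto& b = static_cast<const bet_record&>( obj );
      internal[b.id] = b;                    // insert or overwrite the cached copy
   }
   void object_modified( const object& after ) override
   {
      const auto& b = static_cast<const bet_record&>( after );
      auto iter = internal.find( b.id );
      if( iter != internal.end() )
         iter->second = b;                   // keep the cached copy in sync
   }
};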


@@ -16,3 +16,4 @@ install( TARGETS
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
)
INSTALL( FILES ${HEADERS} DESTINATION "include/graphene/debug_witness" )

Some files were not shown because too many files have changed in this diff.