dogecoin/test
Commit d7d7d31506 by Wladimir J. van der Laan
Merge #15141: Rewrite DoS interface between validation and net_processing
0ff1c2a838 Separate reason for premature spends (coinbase/locktime) (Suhas Daftuar)
54470e767b Assert validation reasons are contextually correct (Suhas Daftuar)
2120c31521 [refactor] Update some comments in validation.cpp as we arent doing DoS there (Matt Corallo)
12dbdd7a41 [refactor] Drop unused state.DoS(), state.GetDoS(), state.CorruptionPossible() (Matt Corallo)
aa502b88d1 scripted-diff: Remove DoS calls to CValidationState (Matt Corallo)
7721ad64f4 [refactor] Prep for scripted-diff by removing some \ns which annoy sed. (Matt Corallo)
5e78c5734b Allow use of state.Invalid() for all reasons (Matt Corallo)
6b34bc6b6f Fix handling of invalid headers (Suhas Daftuar)
ef54b486d5 [refactor] Use Reasons directly instead of DoS codes (Matt Corallo)
9ab2a0412e CorruptionPossible -> BLOCK_MUTATED (Matt Corallo)
6e55b292b0 CorruptionPossible -> TX_WITNESS_MUTATED (Matt Corallo)
7df16e70e6 LookupBlockIndex -> CACHED_INVALID (Matt Corallo)
c8b0d22698 [refactor] Drop redundant nDoS, corruptionPossible, SetCorruptionPossible (Matt Corallo)
34477ccd39 [refactor] Add useful-for-dos "reason" field to CValidationState (Matt Corallo)
6a7f8777a0 Ban all peers for all block script failures (Suhas Daftuar)
7b999103e2 Clean up banning levels (Matt Corallo)
b8b4c80146 [refactor] drop IsInvalid(nDoSOut) (Matt Corallo)
8818729013 [refactor] Refactor misbehavior ban decisions to MaybePunishNode() (Matt Corallo)
00e11e61c0 [refactor] rename stateDummy -> orphan_state (Matt Corallo)
f34fa719cf Drop obsolete sigops comment (Matt Corallo)

Pull request description:

  This is a rebase of #11639 with some fixes for the last few comments which were not yet addressed.

  The original PR text, with some strikethroughs of text that is no longer correct:

  > This cleans up an old main-carryover - it made sense that main could decide what DoS scores to assign things because the DoS scores were handled in a different part of main, but now validation is telling net_processing what DoS scores to assign to different things, which is utter nonsense. Instead, we replace CValidationState's nDoS and CorruptionPossible with a general ValidationInvalidReason, which net_processing can handle as it sees fit. I keep the behavior changes here to a minimum, but in the future we can utilize these changes for other smarter behavior, such as disconnecting/preferring to rotate outbound peers based on them providing things which are invalid due to SOFT_FORK because we shouldn't ban for such cases.
  >
  > This is somewhat complementary with, though obviously conflicts heavily with #11523, which added enums in place of DoS scores, as well as a few other cleanups (which are still relevant).
  >
  > Compared with previous bans, the following changes are made:
  >
  > • Txn with empty vin/vout or null prevouts move from 10 DoS points to 100.
  > • Loose transactions with a dependency loop now result in a ban instead of 10 DoS points.
  > • ~~BIP68-violation no longer results in a ban as it is SOFT_FORK.~~
  > • ~~Non-SegWit SigOp violation no longer results in a ban as it considers P2SH sigops and is thus SOFT_FORK.~~
  > • ~~Any script violation in a block no longer results in a ban as it may be the result of a SOFT_FORK. This should likely be fixed in the future by differentiating between them.~~
  > • Proof of work failure moves from 50 DoS points to a ban.
  > • Blocks with timestamps under MTP now result in a ban; blocks too far in the future continue to not result in a ban.
  > • Inclusion of non-final transactions in a block now results in a ban instead of 10 DoS points.

  Note: The change to ban all peers for consensus violations is actually NOT the change I'd like to make -- I'd prefer to only ban outbound peers in those situations.  The current behavior is a bit of a mess, however, and so in the interests of advancing this PR I tried to keep the changes to a minimum.  I plan to revisit the behavior in a followup PR.

  EDIT: One reviewer suggested I add some additional context for this PR:

  > The goal of this work was to make net_processing aware of the actual reasons for validation failures, rather than just deal with opaque numbers instructing it to do something.
  >
  > In the future, I'd like to make it so that we use more context to decide how to punish a peer. One example is to differentiate inbound and outbound peer misbehaviors. Another potential example is if we'd treat RECENT_CONSENSUS_CHANGE failures differently (ie after the next consensus change is implemented), and perhaps again we'd want to treat some peers differently than others.

ACKs for commit 0ff1c2:
  jnewbery:
    utACK 0ff1c2a838
  ryanofsky:
    utACK 0ff1c2a838. Only change is dropping the first commit (f3883a321bf4ab289edcd9754b12cae3a648b175), and dropping the temporary `assert(level == GetDoS())` that was in 35ee77f2832eaffce30042e00785c310c5540cdc (now c8b0d22698)

Tree-SHA512: e915a411100876398af5463d0a885920e44d473467bb6af991ef2e8f2681db6c1209bb60f848bd154be72d460f039b5653df20a6840352c5f7ea5486d9f777a3
Committed 2019-05-04 11:58:57 +02:00
Directory contents, with each entry's most recent commit:

  • functional (2019-05-04): Merge #15141: Rewrite DoS interface between validation and net_processing
  • fuzz (2019-02-14): fuzz: test_runner: Better error message when built with afl
  • lint (2019-04-28): lint: Check that all wallet args are hidden
  • sanitizer_suppressions (2019-03-26): Poly1305: tolerate the intentional unsigned wraparound in poly1305.cpp
  • util (2018-12-13): Remove Python 2 import workarounds
  • config.ini.in (2019-04-25): QA: feature_filelock, interface_bitcoin_cli: Use PACKAGE_NAME in messages rather than hardcoding Bitcoin Core
  • README.md (2019-02-05): Merge #14519: tests: add utility to easily profile node performance with perf

This directory contains integration tests that test bitcoind and its utilities in their entirety. It does not contain unit tests, which can be found in /src/test, /src/wallet/test, etc.

This directory contains the following sets of tests:

  • functional, which test the functionality of bitcoind and bitcoin-qt by interacting with them through the RPC and P2P interfaces.
  • util, which tests the bitcoin utilities, currently only bitcoin-tx.
  • lint, which performs various static analysis checks.

The util tests are run as part of the make check target. The functional tests and lint scripts are run by the Travis continuous integration build process whenever a pull request is opened. All sets of tests can also be run locally.

Running tests locally

Before tests can be run locally, Bitcoin Core must be built. See the building instructions for help.

Functional tests

Dependencies

The ZMQ functional test requires a Python ZMQ library. To install it:

  • on Unix, run sudo apt-get install python3-zmq
  • on macOS, run pip3 install pyzmq
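
To check that the bindings are importable before running the test, you can try a quick check in a python3 interpreter. This is a minimal sketch; zmq.zmq_version() is part of pyzmq's public API and reports the version of the underlying libzmq:

import zmq                  # pyzmq installs itself as the zmq module
print(zmq.zmq_version())    # prints the libzmq version the bindings were built against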

Running the tests

Individual tests can be run by directly calling the test script, e.g.:

test/functional/feature_rbf.py

or can be run through the test_runner harness, e.g.:

test/functional/test_runner.py feature_rbf.py

You can run any combination (including duplicates) of tests by calling:

test/functional/test_runner.py <testname1> <testname2> <testname3> ...

Run the regression test suite with:

test/functional/test_runner.py

Run all possible tests with:

test/functional/test_runner.py --extended

By default, up to 4 tests will be run in parallel by test_runner. To specify how many jobs to run, append --jobs=n.

The individual tests and the test_runner harness have many command-line options. Run test_runner.py -h to see them all.

Troubleshooting and debugging test failures

Resource contention

The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make conflicts with other processes unlikely. However, if there is another bitcoind process running on the system (perhaps from a previous test which hasn't successfully killed all its bitcoind nodes), then there may be a port conflict which will cause the test to fail. It is recommended that you run the tests on a system where no other bitcoind processes are running.

On Linux, the test_framework will warn if there is another bitcoind process running when the tests are started.

If there are zombie bitcoind processes after test failure, you can kill them by running the following commands. Note that these commands will kill all bitcoind processes running on the system, so should not be used if any non-test bitcoind processes are being run.

killall bitcoind

or

pkill -9 bitcoind

Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a functional test is run and is stored in test/cache. This speeds up test startup times since new blockchains don't need to be generated for each test. However, the cache may get into a bad state, in which case tests will fail. If this happens, remove the cache directory (and make sure bitcoind processes are stopped as above):

rm -rf cache
killall bitcoind

Test logging

The tests contain logging at different levels (debug, info, warning, etc.). By default:

  • when run through the test_runner harness, all logs are written to test_framework.log and no logs are output to the console.
  • when run directly, all logs are written to test_framework.log and INFO level and above are output to the console.
  • when run on Travis, no logs are output to the console. However, if a test fails, the test_framework.log and bitcoind debug.logs will all be dumped to the console to aid troubleshooting.

To change the level of logs output to the console, use the -l command line argument.
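
For reference, test scripts emit these logs through the framework's per-test logger, available as self.log inside a test. A minimal sketch of the levels (the message text is illustrative):

self.log.debug("connecting the nodes")    # shown on the console only with -l DEBUG
self.log.info("mining 100 blocks")        # shown at the default console level when run directly
self.log.warning("node took a long time to shut down")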

test_framework.log and bitcoind debug.logs can be combined into a single aggregate log by running the combine_logs.py script. The output can be plain text, colorized text or HTML. For example:

combine_logs.py -c <test data directory> | less -r

will pipe the colorized logs from the test into less.

Use --tracerpc to trace out all the RPC calls and responses to the console. For some tests (e.g. any that use submitblock to submit a full block over RPC), this can result in a lot of screen output.

By default, the test data directory will be deleted after a successful run. Use --nocleanup to leave the test data directory intact. The test data directory is never deleted after a failed test.

Attaching a debugger

A Python debugger can be attached to tests at any point. Just add the line:

import pdb; pdb.set_trace()

anywhere in the test. You will then be able to inspect variables, as well as call methods that interact with the bitcoind nodes-under-test.
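
For example, placing the breakpoint just after an RPC call lets you inspect its result before the test proceeds. A sketch, where the surrounding test code and variable name are illustrative:

def run_test(self):
    blockhash = self.nodes[0].generate(1)[0]  # an RPC call against the node under test
    import pdb; pdb.set_trace()               # execution pauses here with a (Pdb) prompt
    # at the prompt, inspect state interactively, e.g.: p self.nodes[0].getblock(blockhash)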

If further introspection of the bitcoind instances themselves becomes necessary, this can be accomplished by first setting a pdb breakpoint at an appropriate location, running the test to that point, and then using gdb to attach to the process and debug.

For instance, to attach to self.nodes[1] during a run:

2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3

use the directory path to get the pid from the pid file:

cat /tmp/user/1000/testo9vsdjo3/node1/regtest/bitcoind.pid
gdb /home/example/bitcoind <pid>

Note: the gdb attach step may require ptrace_scope to be modified, or sudo preceding the gdb command. See this link for considerations: https://www.kernel.org/doc/Documentation/security/Yama.txt

Profiling

An easy way to profile node performance during functional tests is provided for Linux platforms using perf.

Perf will sample the running node and will generate profile data in the node's datadir. The profile data can then be presented using perf report or a graphical tool like hotspot.

To generate a profile during test suite runs, use the --perf flag.

To render the output to text, run:

perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less

For ways to generate more granular profiles, see the README in test/functional.

Util tests

Util tests can be run locally by running test/util/bitcoin-util-test.py. Use the -v option for verbose output.

Lint tests

Dependencies

The lint tests require codespell and flake8. To install: pip3 install codespell flake8.

Running the tests

Individual tests can be run by directly calling the test script, e.g.:

test/lint/lint-filenames.sh

You can run all the shell-based lint tests by running:

test/lint/lint-all.sh

Writing functional tests

You are encouraged to write functional tests for new or existing features. Further information about the functional test framework and individual tests can be found in test/functional.
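
As a starting point, a new test is typically a script that subclasses BitcoinTestFramework and overrides set_test_params and run_test. A minimal sketch; the class name, node count and test logic are illustrative:

#!/usr/bin/env python3
"""Example functional test skeleton (illustrative)."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal

class ExampleTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 2  # launch two bitcoind nodes under test

    def run_test(self):
        self.nodes[0].generate(1)  # mine a block on node 0
        self.sync_all()            # wait for all nodes to agree on the chain tip
        assert_equal(self.nodes[0].getblockcount(),
                     self.nodes[1].getblockcount())

if __name__ == '__main__':
    ExampleTest().main()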