Merge #16421: Conservatively accept RBF bumps bumping one tx at the package limits

5ce822efbe Conservatively accept RBF bumps bumping one tx at the package limits (Matt Corallo)

Pull request description:

  Based on #15681, this adds support for some simple cases of RBF inside large packages. Issue pointed out by sdaftuar in #15681, and this fix (or a broader one) is required to make #15681 fully useful.

  Accept RBF bumps of single transactions (ie which evict exactly one
  transaction) even when that transaction is a member of a package
  which is currently at the package limit iff the new transaction
  does not add any additional mempool dependencies from the original.

  This could be made a bit looser in the future and still be safe,
  but for now this fixes the case where a transaction which was
  accepted by the carve-out rule would otherwise not be directly
  RBF'able.
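
To make the limit arithmetic concrete, here is a minimal Python sketch of the special case, not the actual validation.cpp code: when the incoming transaction conflicts with exactly one mempool entry, the descendant-count limit is raised by one and the descendant-size limit by that conflict's size including its descendants, before the usual ancestor/descendant check runs. The function name, the size_with_descendants attribute, and the default limit values are assumptions standing in for the C++ internals (GetSizeWithDescendants(), -limitdescendantcount, -limitdescendantsize).

# Illustrative model only -- not the Bitcoin Core implementation.

# Assumed defaults (25 descendants, 101 kvB), mirroring
# -limitdescendantcount / -limitdescendantsize * 1000.
DEFAULT_DESCENDANT_LIMIT = 25
DEFAULT_DESCENDANT_SIZE_LIMIT = 101 * 1000


def effective_descendant_limits(direct_conflicts):
    """Return (count_limit, size_limit) to apply when checking the
    replacement transaction's in-mempool package limits.

    direct_conflicts: list of mempool entries the new transaction conflicts
    with directly; each is assumed to expose size_with_descendants.
    """
    count_limit = DEFAULT_DESCENDANT_LIMIT
    size_limit = DEFAULT_DESCENDANT_SIZE_LIMIT
    if len(direct_conflicts) == 1:
        conflict = direct_conflicts[0]
        # Until the conflict and its descendants are actually evicted, the
        # replacement is counted on top of them, so allow one extra
        # descendant and the conflict's full descendant size.
        count_limit += 1
        size_limit += conflict.size_with_descendants
    return count_limit, size_limit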

ACKs for top commit:
  instagibbs:
    re-ACK 5ce822efbe
  ajtowns:
    ACK 5ce822efbe ; GetSizeWithDescendants is only change and makes sense
  sipa:
    Code review ACK 5ce822efbe. I haven't thought hard about the effect on potential DoS issues this policy change may have.

Tree-SHA512: 1cee3bc57393940a30206679eb60c3ec8cb4f4825d27d40d1f062c86bd22542dd5944fa5567601c74c8d9fd425333ed3e686195170925cfc68777e861844bd55
fanquake 2019-09-07 09:32:01 +08:00
commit 0d20c42a01
2 changed files with 51 additions and 3 deletions

src/validation.cpp

@@ -615,17 +615,55 @@ static bool AcceptToMemoryPoolWorker(const CChainParams& chainparams, CTxMemPool
REJECT_HIGHFEE, "absurdly-high-fee",
strprintf("%d > %d", nFees, nAbsurdFee));
const CTxMemPool::setEntries setIterConflicting = pool.GetIterSet(setConflicts);
// Calculate in-mempool ancestors, up to a limit.
CTxMemPool::setEntries setAncestors;
size_t nLimitAncestors = gArgs.GetArg("-limitancestorcount", DEFAULT_ANCESTOR_LIMIT);
size_t nLimitAncestorSize = gArgs.GetArg("-limitancestorsize", DEFAULT_ANCESTOR_SIZE_LIMIT)*1000;
size_t nLimitDescendants = gArgs.GetArg("-limitdescendantcount", DEFAULT_DESCENDANT_LIMIT);
size_t nLimitDescendantSize = gArgs.GetArg("-limitdescendantsize", DEFAULT_DESCENDANT_SIZE_LIMIT)*1000;
if (setConflicts.size() == 1) {
// In general, when we receive an RBF transaction with mempool conflicts, we want to know whether we
// would meet the chain limits after the conflicts have been removed. However, there isn't a practical
// way to do this short of calculating the ancestor and descendant sets with an overlay cache of
// changed mempool entries. Due to both implementation and runtime complexity concerns, this isn't
// very realistic, thus we only ensure a limited set of transactions are RBF'able despite mempool
// conflicts here. Importantly, we need to ensure that some transactions which were accepted using
// the below carve-out are able to be RBF'ed, without impacting the security the carve-out provides
// for off-chain contract systems (see link in the comment below).
//
// Specifically, the subset of RBF transactions which we allow despite chain limits are those which
// conflict directly with exactly one other transaction (but may evict children of said transaction),
// and which are not adding any new mempool dependencies. Note that the "no new mempool dependencies"
// check is accomplished later, so we don't bother doing anything about it here, but if BIP 125 is
// amended, we may need to move that check to here instead of removing it wholesale.
//
// Such transactions are clearly not merging any existing packages, so we are only concerned with
// ensuring that (a) no package is growing past the package size (not count) limits and (b) we are
// not allowing something to effectively use the (below) carve-out spot when it shouldn't be allowed
// to.
//
// To check these we first check if we meet the RBF criteria, above, and increment the descendant
// limits by the direct conflict and its descendants (as these are recalculated in
// CalculateMempoolAncestors by assuming the new transaction being added is a new descendant, with no
// removals, of each parent's existing dependant set). The ancestor count limits are unmodified (as
// the ancestor limits should be the same for both our new transaction and any conflicts).
// We don't bother incrementing nLimitDescendants by the full removal count as that limit never comes
// into force here (as we're only adding a single transaction).
assert(setIterConflicting.size() == 1);
CTxMemPool::txiter conflict = *setIterConflicting.begin();
nLimitDescendants += 1;
nLimitDescendantSize += conflict->GetSizeWithDescendants();
}
std::string errString;
if (!pool.CalculateMemPoolAncestors(entry, setAncestors, nLimitAncestors, nLimitAncestorSize, nLimitDescendants, nLimitDescendantSize, errString)) {
setAncestors.clear();
// If CalculateMemPoolAncestors fails second time, we want the original error string.
std::string dummy_err_string;
// Contracting/payment channels CPFP carve-out:
// If the new transaction is relatively small (up to 40k weight)
// and has at most one ancestor (ie ancestor limit of 2, including
// the new transaction), allow it if its parent has exactly the
@@ -674,7 +712,6 @@ static bool AcceptToMemoryPoolWorker(const CChainParams& chainparams, CTxMemPool
CFeeRate newFeeRate(nModifiedFees, nSize);
std::set<uint256> setConflictsParents;
const int maxDescendantsToVisit = 100;
const CTxMemPool::setEntries setIterConflicting = pool.GetIterSet(setConflicts);
for (const auto& mi : setIterConflicting) {
// Don't allow the replacement to reduce the feerate of the
// mempool.
@@ -734,6 +771,11 @@ static bool AcceptToMemoryPoolWorker(const CChainParams& chainparams, CTxMemPool
// feerate junk to be mined first. Ideally we'd keep track of
// the ancestor feerates and make the decision based on that,
// but for now requiring all new inputs to be confirmed works.
//
// Note that if you relax this to make RBF a little more useful,
// this may break the CalculateMempoolAncestors RBF relaxation,
// above. See the comment above the first CalculateMempoolAncestors
// call for more info.
if (!setConflictsParents.count(tx.vin[j].prevout.hash))
{
// Rather than check the UTXO set - potentially expensive -
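
As context for the warning comment in the hunk above: the surrounding loop implements BIP 125's requirement that a replacement not add unconfirmed inputs beyond those already spent by the transactions it conflicts with, and relaxing that would also undermine the descendant-limit bump applied before CalculateMemPoolAncestors. A rough Python sketch of the check, where replacement_inputs, conflict_parent_txids, and in_mempool are hypothetical stand-ins for the mempool lookups the C++ code performs:

# Illustrative sketch of the "no new unconfirmed inputs" rule; not Core code.

def adds_new_unconfirmed_input(replacement_inputs, conflict_parent_txids, in_mempool):
    """Return True if the replacement spends an in-mempool output that none
    of its direct conflicts already spends.

    replacement_inputs    -- iterable of (txid, vout) outpoints spent by the replacement
    conflict_parent_txids -- set of txids already spent by the conflicting transactions
    in_mempool            -- predicate: is this txid currently in the mempool?
    """
    for txid, _vout in replacement_inputs:
        if txid not in conflict_parent_txids and in_mempool(txid):
            # A genuinely new unconfirmed dependency: it could drag in a
            # low-feerate ancestor, and it would also invalidate the
            # single-conflict limit bump made earlier.
            return True
    return False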

test/functional/mempool_packages.py

@@ -33,7 +33,7 @@ class MempoolPackagesTest(BitcoinTestFramework):
outputs = {}
for i in range(num_outputs):
outputs[node.getnewaddress()] = send_value
rawtx = node.createrawtransaction(inputs, outputs)
rawtx = node.createrawtransaction(inputs, outputs, 0, True)
signedtx = node.signrawtransactionwithwallet(rawtx)
txid = node.sendrawtransaction(signedtx['hex'])
fulltx = node.getrawtransaction(txid, 1)
@@ -75,10 +75,16 @@ class MempoolPackagesTest(BitcoinTestFramework):
# ...especially if its > 40k weight
assert_raises_rpc_error(-26, "too-long-mempool-chain, too many descendants", self.chain_transaction, self.nodes[0], [chain[0][0]], [1], chain[0][1], fee, 350)
# But not if it chains directly off the first transaction
self.chain_transaction(self.nodes[0], [chain[0][0]], [1], chain[0][1], fee, 1)
(replacable_txid, replacable_orig_value) = self.chain_transaction(self.nodes[0], [chain[0][0]], [1], chain[0][1], fee, 1)
# and the second chain should work just fine
self.chain_transaction(self.nodes[0], [second_chain], [0], second_chain_value, fee, 1)
# Make sure we can RBF the chain which used our carve-out rule
second_tx_outputs = {self.nodes[0].getrawtransaction(replacable_txid, True)["vout"][0]['scriptPubKey']['addresses'][0]: replacable_orig_value - (Decimal(1) / Decimal(100))}
second_tx = self.nodes[0].createrawtransaction([{'txid': chain[0][0], 'vout': 1}], second_tx_outputs)
signed_second_tx = self.nodes[0].signrawtransactionwithwallet(second_tx)
self.nodes[0].sendrawtransaction(signed_second_tx['hex'])
# Finally, check that we added two transactions
assert_equal(len(self.nodes[0].getrawmempool(True)), MAX_ANCESTORS + 3)
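
For reference, the replacement the new test lines exercise can also be reproduced against a regtest node over RPC. In the sketch below, node is assumed to be an RPC handle equivalent to self.nodes[0], and parent_txid, child_txid, and child_value stand in for chain[0][0], replacable_txid, and replacable_orig_value; the 0.01 BTC fee bump mirrors the test's Decimal(1) / Decimal(100).

# Usage sketch mirroring the test addition above; assumes a regtest node
# with the parent and carve-out child transactions already in its mempool.
from decimal import Decimal

def rbf_carveout_child(node, parent_txid, child_txid, child_value):
    """Replace child_txid (which spends parent_txid:1) with a version that
    pays an extra 0.01 BTC in fees, so it clears the BIP 125 fee rules."""
    old_child = node.getrawtransaction(child_txid, True)
    dest = old_child["vout"][0]["scriptPubKey"]["addresses"][0]
    outputs = {dest: child_value - Decimal("0.01")}
    raw = node.createrawtransaction([{"txid": parent_txid, "vout": 1}], outputs)
    signed = node.signrawtransactionwithwallet(raw)
    return node.sendrawtransaction(signed["hex"])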