indexes: Don't wipe indexes again when continuing a prior reindex #30132
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage: For detailed information about the code coverage, see the test coverage report.

Reviews: See the guideline for information on the review process. If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

Conflicts: Reviewers, this pull request conflicts with the following ones. If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed 133bf46 to 991f50a
Concept ACK
Updated 991f50a -> 9de8b26 (preserveIndexOnRestart_0 -> preserveIndexOnRestart_1, compare)
EDIT: Never mind, I see the problem now after reading the f27290c commit description. The bug happens because the BlockManager is destroyed on each loop iteration.
Looking good at first glance. It would be nice to add some coverage for it, just so it doesn't happen again. Maybe assert that certain logs are not present during init? Like the "Wiping LevelDB in <index_path>" one.
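A minimal, self-contained sketch of the suggested negative log assertion. The helper name and log contents here are illustrative only, not the actual functional test framework API:

```python
# Hypothetical sketch of asserting that a forbidden log line did NOT appear
# during init; names and log text are illustrative, not the real framework.

def assert_log_absent(debug_log: str, forbidden: str) -> None:
    # Fail if the forbidden substring appeared anywhere in the captured log.
    assert forbidden not in debug_log, f"unexpected log line: {forbidden!r}"

# Init log from a continued reindex that correctly preserved the index:
clean_log = "Loading block index...\nLoaded best chain\n"
assert_log_absent(clean_log, "Wiping LevelDB in")

# A regression that wipes the index again would be caught:
caught = False
try:
    assert_log_absent(clean_log + "Wiping LevelDB in /index\n", "Wiping LevelDB in")
except AssertionError:
    caught = True
assert caught
```

In the real test suite this check would go through the framework's debug-log helpers rather than raw string matching.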
Approach ACK. First 2 commits (0d04433) LGTM, but for the third one I'm going to need to spend a lot more time wrapping my head around the implications.
Updated 9de8b26 -> dd290b3 (preserveIndexOnRestart_1 -> preserveIndexOnRestart_2, compare)
Concept ACK
Updated dd290b3 -> 891784c (preserveIndexOnRestart_2 -> preserveIndexOnRestart_3, compare)
🚧 At least one of the CI tasks failed. Make sure to run all tests locally. Possibly this is due to a silent merge conflict. Leave a comment here, if you need help tracking down a confusing failure.
Thanks for the reviews @maflcko, 891784c -> eeea081 (preserveIndexOnRestart_3 -> preserveIndexOnRestart_4, compare)
Code review ACK eeea081
I like this suggestion. I think the naming is clearer in terms of the actual effect each variable has. I also think not having two … I don't think …
That's a good point. One possible way to address it could be to rename … The renames suggested in 9c643e7 and 38cc045 don't directly relate to this PR, so might make more sense as followups to avoid complicating things. On the other hand, if you took the portion of 9c643e7 replacing …
Looking at the code more, I think setting …
ACK eeea081
Nice catch. Took me a while to see where b47bd95 introduced the bug (the retry logic is confusing indeed!), but both the detailed commit messages and yesterday's PR review club notes/log were very helpful to grok it.
Thanks for the review and ACKs, I will address the leftover nits here shortly; I think re-ACKing should be easy enough.
Reverts a bug introduced in b47bd95 "kernel: De-globalize fReindex". The change leads to a GUI user being prompted to reindex more than once on a chainstate loading failure, as well as the node not actually reindexing if the user chooses to. Fix this by setting the reindexing option instead of the atomic, which can be safely re-used to indicate that a reindex should be attempted. The bug is specifically caused by the chainman, and thus the blockman and its m_reindexing atomic, being destroyed on every iteration of the for loop.

The reindex option for ChainstateLoadOptions is currently also set in a confusing way. By using the reindex atomic, it is not obvious in which scenario it is true or false. The atomic is controlled by the user passing the -reindex option, by the user choosing to reindex if something went wrong during chainstate loading when running the GUI, and by reading the reindexing flag from the block tree database in LoadBlockIndexDB. In practice this read is done through the chainstate module's CompleteChainstateInitialization's call to LoadBlockIndex. Since this only happens after the reindex option is already set, it has no effect on it. Make this clear by using the reindex option from the blockman opts, which is only controlled by the user.
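The retry-loop pitfall described in the commit message can be sketched in a few lines: state stored inside an object that is recreated on every attempt is lost between attempts, while state kept in outer-scope options survives. This is a hypothetical Python sketch, not the actual C++ init code; all names are illustrative.

```python
# Hypothetical sketch of the pitfall: BlockManagerSketch stands in for the
# blockman, which is recreated (destroying its m_reindexing atomic) on
# every chainstate load attempt.

class BlockManagerSketch:
    def __init__(self, reindexing=False):
        self.m_reindexing = reindexing  # analogous to the m_reindexing atomic

def load_chainstate(attempts_until_success, options):
    """Return how many times the user was prompted to reindex."""
    failures = {"left": attempts_until_success}
    prompts = 0
    while True:
        # Recreated each iteration, so any flag set on it last time is gone.
        blockman = BlockManagerSketch(reindexing=options["reindex"])
        if blockman.m_reindexing or failures["left"] == 0:
            return prompts  # loaded (possibly via reindex)
        # Loading failed: prompt the user, who chooses to reindex.
        failures["left"] -= 1
        prompts += 1
        # Buggy variant would set blockman.m_reindexing = True here and lose
        # it next iteration; setting the outer-scope option persists.
        options["reindex"] = True

# With the option persisted, the user is prompted at most once.
assert load_chainstate(3, {"reindex": False}) == 1
assert load_chainstate(0, {"reindex": False}) == 0
```

The fix in this PR follows the same principle: record the reindex decision in options that outlive the retry loop rather than in the short-lived blockman.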
It does not control any actual logic and the log message as well as the comment are obsolete, since no database initialization takes place there anymore. Log messages indicating when indexes and chainstate databases are loaded exist in other places.
Thank you for your suggestions @ryanofsky! I applied both of your suggested patches. eeea081 -> 682f1f1 (preserveIndexOnRestart_4 -> preserveIndexOnRestart_5, compare)
Code LGTM 682f1f1 modulo one small bug in comments.
I couldn't find or think of any occasions where the new reduced wiping behaviour introduces problematic behaviour. The new variable names make things significantly clearer and are a very welcome change.
Drop confusing kernel options:

- BlockManagerOpts::reindex
- ChainstateLoadOptions::reindex
- ChainstateLoadOptions::reindex_chainstate

Replacing them with more straightforward options:

- ChainstateLoadOptions::wipe_block_tree_db
- ChainstateLoadOptions::wipe_chainstate_db

Having two options called "reindex" which did slightly different things was needlessly confusing (one option wiped the block tree database, and the other caused block files to be rescanned). Also the previous set of options did not allow rebuilding the block database without also rebuilding the chainstate database, when it should be possible to do those independently.
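The flag-to-option mapping this commit introduces can be sketched as a small pure function. This is a Python sketch with hypothetical names, mirroring the -reindex / -reindex-chainstate handling discussed in the review below:

```python
# Hypothetical sketch: map the user-facing flags onto the two wipe options.
# -reindex wipes both databases; -reindex-chainstate wipes only the
# chainstate db. Wiping the block tree db always implies wiping the
# chainstate db, never the other way around.

def wipe_options(reindex: bool, reindex_chainstate: bool) -> dict:
    return {
        "wipe_block_tree_db": reindex,
        "wipe_chainstate_db": reindex or reindex_chainstate,
    }

assert wipe_options(True, False) == {
    "wipe_block_tree_db": True, "wipe_chainstate_db": True}
assert wipe_options(False, True) == {
    "wipe_block_tree_db": False, "wipe_chainstate_db": False or True}
assert wipe_options(False, False) == {
    "wipe_block_tree_db": False, "wipe_chainstate_db": False}
```

Note that no flag combination yields wipe_block_tree_db without wipe_chainstate_db, which is the unsupported case the reviewers discuss later in the thread.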
Before this change, continuing a reindex without the -reindex flag set would leave the block and coins db intact, but discard the data of the optional indexes. While not a bug per se, wiping the data again is wasteful, both in terms of having to write it again, and potentially leading to longer startup times.

When initially running a reindex, both the block index and any further activated indexes are wiped. On an index's Init(), both the best block stored by the index and the chain's tip are null. An index's m_synced member is therefore true. This means that it will process blocks through validation events while the reindex is running.

Currently, if the reindex is continued without the user re-specifying the reindex flag, the block index is preserved but further index data is wiped. This leads to the stored best block being null, but the chain tip existing. The m_synced member will be set to false. The index will not process blocks through the validation interface, but instead use the background sync once the reindex is completed.

If the index is preserved (this change), after a restart its best block may potentially match the chain tip. The m_synced member will be set to true and the index can process validation events during the rest of the reindex.
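The m_synced decision described above boils down to comparing the index's stored best block with the chain tip at Init(). This is a hedged Python sketch of that logic (not the actual Bitcoin Core implementation; block names are placeholders):

```python
# Hypothetical sketch of the m_synced decision at index Init():
# an index starts "synced" only when its stored best block equals the
# chain tip (both being null/None also counts as equal).

def index_starts_synced(index_best_block, chain_tip):
    return index_best_block == chain_tip

# Fresh reindex: index and chain both wiped -> synced, follows validation events.
assert index_starts_synced(None, None) is True
# Continued reindex, index wiped again (old behaviour): best block is null but
# the tip exists -> not synced, falls back to background sync after reindex.
assert index_starts_synced(None, "block_750") is False
# Continued reindex with index preserved (this PR): best block can match the
# tip -> synced, keeps processing validation events during the reindex.
assert index_starts_synced("block_750", "block_750") is True
```

The sketch makes the payoff concrete: preserving the index data is what lets the third case occur on restart instead of the second.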
Co-authored-by: furszy <matiasfurszyfer@protonmail.com>
This is just a mechanical change, renaming and inverting the meaning of the indexing variable. "m_blockfiles_indexed" is a more straightforward name for this variable, because it just indicates whether or not <datadir>/blocks/blk?????.dat files have been indexed in the <datadir>/blocks/index LevelDB database. The name "m_reindexing" was more confusing: it could be true even if -reindex was not specified, and false when it was specified. Also, the previous name unnecessarily required thinking about the whole reindexing process just to understand simple checks in validation code about whether blocks were indexed. The motivation for this change is to follow up on previous commits, moving away from having multiple variables called "reindex" internally, and instead naming variables individually after what they do and represent.
Thanks for the review @stickies-v, Updated 682f1f1 -> f68cba2 (preserveIndexOnRestart_5 -> preserveIndexOnRestart_6, compare)
ACK f68cba2
Code review ACK f68cba2
I also confirmed that the new test fails before the changes here are applied.
I think the test could use a bit more explanation on its reasoning but this shouldn't block a merge of this as-is.
self.log.info("Restarting node while reindexing..")
node.stop_node()
with node.busy_wait_for_debug_log([b'initload thread start']):
    node.start(['-blockfilterindex', '-reindex'])
nit: Why was the blockfilterindex chosen here? I am assuming because it's slow? Would be good to add a comment because it may be confusing for others in the future what this choice has to do with the test.
I just picked the one which furszy picked and I'm guessing he just picked one too. I'll check if there is any significant effect when picking a different one.
@@ -73,13 +73,33 @@ def find_block(b, start):
        # All blocks should be accepted and processed.
        assert_equal(self.nodes[0].getblockcount(), 12)

    def continue_reindex_after_shutdown(self):
        node = self.nodes[0]
        self.generate(node, 1500)
nit: I guess this is needed so the node can be stopped fast enough. It's still a race and could turn out to be flaky in the CI, right? I don't have a better idea to fix this right now, but a comment might be good to make this explicit and make future debugging easier if this turns out to be the case.
options.wipe_block_tree_db = m_args.GetBoolArg("-reindex", false);
options.wipe_chainstate_db = m_args.GetBoolArg("-reindex", false) || m_args.GetBoolArg("-reindex-chainstate", false);
tiny nit, suggested change:

options.wipe_block_tree_db = m_args.GetBoolArg("-reindex", false);
options.wipe_chainstate_db = options.wipe_block_tree_db || m_args.GetBoolArg("-reindex-chainstate", false);
Side note:
I don't think this is used anywhere.
I suggested the current approach, see #30132 (comment)
I don't think this is used anywhere.
What do you mean?
I suggested the current approach, see #30132 (comment)
Hmm ok. We should probably go further and deduplicate the init.cpp / setup_common.cpp code somewhere in the future.
I don't think this is used anywhere.
What do you mean?
This is part of the unit test framework and no unit test, benchmark or fuzz test seems to make use of it. "-reindex" and "-reindex-chainstate" are always unset.
No, and I'm not sure it should be, given we don't support this. Maybe we can add a comment and assert that just wiping the block index db is not supported for now?
Code review ACK f68cba2. Only changes since last review were cherry-picking suggested commits that rename variables, improving comments, and making some tweaks to test code.
re: #30132 (comment)
FWIW, my original draft of 804f09d added this code to LoadChainstate:

// For now, don't allow wiping block tree db without also wiping chainstate
// db. There's no reason this could not work in theory, but in practice the
// code path is untested, and to be really robust, the
// LoadExternalBlockFile function should be updated to scan undo files,
// not just block files, and to populate CBlockIndex::nUndoPos, not just
// CBlockIndex::nDataPos.
assert(!options.wipe_block_tree_db || options.wipe_chainstate_db);

I decided to drop it to keep things simpler, since inevitably the kernel API will support combinations of options bitcoin core doesn't exercise or test, and it might be cumbersome to try to warn about all of them. But I could understand wanting to do it in some cases like this. I think the PR is ready to merge, so you can let me know if you want to add an assert or just merge it in its current form.
I think this is rfm.
When restarting bitcoind during an ongoing reindex without setting the -reindex flag again, the block and coins db is left intact, but any data from the optional indexes is discarded. While not a bug per se, wiping the data again is wasteful, both in terms of having to write it again, as well as potentially leading to longer startup times. So keep the index data instead when continuing a prior reindex.

Also includes a bugfix and smaller code cleanups around the reindexing code. The bug was introduced in b47bd95: "kernel: De-globalize fReindex".