Since the docx output otherwise did not render the validity-robustness figures of the discussion section correctly, I split them into two separate figures (no sub-figures) instead. Now they behave like any other figure.
For display in findings summaries we now allow arbitrary strength-of-evidence binning. We simply pass in a dict with the strength (as a float) as the key and the string representation that should appear in the table as the value.
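A minimal sketch of how such a mapping might be applied; the function name and the interpretation of the keys as lower-bound thresholds are assumptions, not the actual API:

```python
# Hypothetical strength-of-evidence bins: keys are lower bounds (floats),
# values are the labels that should appear in the findings summary table.
strength_bins = {
    0.0: "weak",
    0.5: "moderate",
    0.8: "strong",
}

def bin_strength(strength: float, bins: dict[float, str]) -> str:
    """Return the label of the highest bin whose threshold the value reaches.

    Assumes `strength` is non-negative and at least one threshold is 0.0.
    """
    label = ""
    for threshold in sorted(bins):
        if strength >= threshold:
            label = bins[threshold]
    return label

print(bin_strength(0.63, strength_bins))  # -> "moderate"
```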
When first importing bibtex entries, the library emits an 'unsuppressable' warning for things like duplicate entries, which would otherwise always end up in the manuscript. This change simply ensures no warning is displayed by turning off logging for the duration of the import.
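A minimal sketch of the suppression idea, using only the standard library's logging module; the bibtex import call in the comment is a placeholder, not the library's actual API:

```python
import logging
from contextlib import contextmanager

@contextmanager
def logging_disabled(level: int = logging.CRITICAL):
    """Temporarily silence all log records up to `level`, then restore the previous state."""
    previous = logging.root.manager.disable
    logging.disable(level)
    try:
        yield
    finally:
        logging.disable(previous)

# Placeholder usage around the actual import call:
# with logging_disabled():
#     entries = import_bibtex("references.bib")
```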
Since we add the studies that contribute findings to the finding tables manually, this check ensures that we spot any mistakes made when entering them, as well as corresponding discrepancies on the raw-data side.
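One way such a consistency check could look; the data frames and the 'study_id' column are hypothetical, not the repository's actual names:

```python
import pandas as pd

def check_manual_studies(manual: pd.DataFrame, raw: pd.DataFrame) -> None:
    """Raise if the manually entered studies and the raw data disagree.

    Assumes both frames carry a 'study_id' column.
    """
    manual_ids = set(manual["study_id"])
    raw_ids = set(raw["study_id"])
    only_manual = manual_ids - raw_ids
    only_raw = raw_ids - manual_ids
    if only_manual or only_raw:
        raise ValueError(
            f"Study mismatch: {sorted(only_manual)} only entered manually, "
            f"{sorted(only_raw)} only present in the raw data."
        )
```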
Instead of leaving it in the src directory (where, reasonably, only Python data extraction, processing and modelling code should live), I turned the retorque filter into a simple Quarto extension. It could reasonably be repackaged separately from this repo, since I believe other people could benefit from it.
Validity calculation belongs to the modelling, so we put it into the validity module. Extracting our matrix is a processing step, so we gave it its own matrix module and put it there. This should hopefully provide better separation of concerns going forward.
Moved from strength of findings to the more general validity module, which can then in turn contain the 'add_to_findings' function, which unsurprisingly adds validities to findings. This makes more sense to me.
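A rough sketch of what the relocated 'add_to_findings' function might look like; the signature, the pandas-based merge and the 'finding_id' column are assumptions, not the actual implementation:

```python
import pandas as pd

def add_to_findings(findings: pd.DataFrame, validities: pd.DataFrame) -> pd.DataFrame:
    """Attach validity scores to the findings table.

    Assumes both frames share a 'finding_id' column; the real keys and
    column names in the repository may differ.
    """
    return findings.merge(validities, on="finding_id", how="left")
```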