We come across a lot of academic papers and research at MDCalc when figuring out what to add to the site next. There’s a huge range of information that we’ll add to MDCalc, including scores, algorithms, “decision rules,” referenced lists of accepted information (like exclusion criteria for TPA), and actual math equations. (We end up referring to these all as “calculators,” just so that it’s easy to know what we’re referring to.)
But not all “calculators” are created equal, of course. Some are better than others, for a number of reasons.
- How strong is its evidence? First and most importantly, does the calculator appear to do what it's supposed to do? If the paper states its job is to distinguish patients with right ear pain from patients with left ear pain, do the results show that it actually did? And, taking it an important step further (a step we typically require on MDCalc), was it validated?
- Does it solve, or at least help with, a real clinical conundrum? You could imagine someone coming up with a clinical decision instrument for ear pain:
- Which ear does the patient have pain in?
- Does that ear look red?
- Is that ear tender?
But obviously no one needs a score for this; it's simply what you do as a clinician. This is one of the criticisms people have of some of our calculators, including the HEART Score for Major Cardiac Events, specifically its elevated troponin criterion. We all know that patients with chest pain and an elevated troponin are much more likely to have a poor outcome, so those patients obviously require admission to the hospital; no one needs a rule or instrument to tell them that.
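To underline how little a trivial instrument like this adds, here is a minimal sketch of the toy ear-pain "score" above. The criteria, equal weights, and function name are invented for illustration; they come from no study and have no clinical validity:

```python
def ear_pain_score(ear_is_red: bool, ear_is_tender: bool, pain_is_right_sided: bool) -> int:
    """Hypothetical toy decision instrument: one point per yes/no criterion.

    Invented purely to illustrate the point; it encodes nothing beyond
    what any clinician already does at the bedside.
    """
    return int(ear_is_red) + int(ear_is_tender) + int(pain_is_right_sided)


# A red, tender ear with right-sided pain "scores" 3 out of 3.
print(ear_pain_score(True, True, True))  # → 3
```

The score is just a restatement of the exam findings themselves, which is exactly why no one needs it.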
- Are terms well-defined? It often takes detective work to figure out where a particular criterion is defined in the paper; often terms are not clear at all, and we end up contacting authors to determine exactly what they meant by “Heart Rate > 100” or “Recent Surgery.” Heart rate > 100 initially, or at any point? How recent is recent?
- Is it reasonably easy to perform? While hopefully MDCalc makes it much easier to use any decision instrument and removes the need for mnemonics and rote memorization, it’s really important that a user can move through the score with relative ease. For example, the APACHE II Score is widely criticized for being incredibly complex, long, and requiring a huge number of data points. And if you’re missing one of them, you may have to order additional laboratory tests just to calculate it. When possible, scores should be straightforward and easy to perform with as few pieces of clinical data as possible.
Those are some of the criteria that help us determine whether a piece of research should join the MDCalc reference list. Next, we’ll dive deeper into some of these categories, and talk more about poor clinical decision instruments.