Context-Based Decision-Making In Neuromonitoring: Doing It Better

Context-Based Decision-Making In IONM

Say you're speaking with a surgical team about alarm criteria for tcMEP. You say "all-or-none," because that's what the guidelines say.

Are you right? Maybe.

The field is heading toward context-based decision-making, which means you need more information to make the "best" decision.

What you would really like to know is which probabilities the field considers acceptable for intervening, or not intervening, when some sort of change happens.

To get to that point, you'll need to understand how to properly use some tools first.

Sensitivity-Specificity

We'll start where you should: at the beginning, with some definitions.

But being able to define sensitivity-specificity is far less important than being able to apply it.

Definitions:

Sensitivity: the proportion of patients with the disease who test positive.

Specificity: the proportion of patients without disease who test negative.

True Positive: correctly identified, or sick people correctly identified as sick.

True Negative: correctly rejected, or healthy people correctly identified as healthy.

False Positive: incorrectly identified, or healthy people incorrectly identified as sick.

False Negative: incorrectly rejected, or sick people incorrectly identified as healthy.
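To make the four outcomes concrete, here is a minimal Python sketch that sorts monitored cases into those four cells. The case records and field names are invented for illustration, not taken from any real dataset:

```python
# Hypothetical per-case records: did monitoring alarm, and did the
# patient wake up with a new deficit? (Invented for illustration.)
cases = [
    {"alarm": True,  "deficit": True},   # injury flagged       -> true positive
    {"alarm": False, "deficit": False},  # quiet case, fine     -> true negative
    {"alarm": True,  "deficit": False},  # alarm, patient fine  -> false positive
    {"alarm": False, "deficit": True},   # missed injury        -> false negative
]

def classify(case):
    """Sort a case into TP / TN / FP / FN based on alarm vs. outcome."""
    if case["alarm"]:
        return "TP" if case["deficit"] else "FP"
    return "FN" if case["deficit"] else "TN"

for case in cases:
    print(classify(case), case)
```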



Sensitivity

Sensitivity deals with one of the biggest "uh oh" moments in neuromonitoring. The cases that lower your overall monitoring sensitivity are the ones that get the OR staff involved and discussing whether to drop your contract.

Here's what it looks like: you record the case and see no changes, but the patient wakes up with a deficit you did not predict (a false negative).

Now you have to prove that it was a shortcoming of the modality (poor sensitivity, or a poor understanding of what is actually being monitored) and not a failure of your ability to monitor and interpret the results.

The reason a missed injury lowers your sensitivity comes straight from the equation:

Sensitivity = (# of true positives) / (# of true positives + # of false negatives)

Or

Sensitivity = (# of people with surgical injury detected by monitoring) / (# of people with surgical injury detected by monitoring + # of people with a surgical injury that had no significant changes in monitoring)

Or

Sensitivity = probability of a significant monitoring change given there was an injury

So we would like this value to be 1, or 100%. If a deficit is not picked up on monitoring, the denominator grows larger than the numerator, bringing the percentage down.

For example, 11 patients woke up with a deficit, but monitoring only picked up 10.

Sensitivity = (10) / (10 + 1)

= 0.91, or 91%
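The same arithmetic as a quick Python check (a sketch; the function is mine, not from any standard library):

```python
def sensitivity(true_positives, false_negatives):
    """Proportion of injured patients whose injury was flagged by monitoring."""
    return true_positives / (true_positives + false_negatives)

# The worked example above: 11 deficits, 10 caught by monitoring.
print(f"{sensitivity(10, 1):.2f}")  # 0.91, or 91%
```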

You have to be careful when reading papers to check that the modality is being assessed for the purpose it actually serves. For instance, SSEPs monitor the dorsal columns, a sensory tract. Is it fair to call it a false negative if the damage was to the anterior horn cells?

Specificity

Specificity is what we get challenged on all the time. It's what sets off panic mode. You say, "Doc, I have a change in SSEP!" S/he says, "Are you sure it isn't technical? I didn't do anything." What s/he is really saying is that your modality isn't specific, that your positive result (the change) lacks the means to identify an actual injury. S/he is praying for a false positive, because that would mean the patient is OK even though you are saying there is a problem.

But the surgeon doesn't want this to become a recurring situation either, because then s/he can't trust you. A high number of false positives means poor specificity (which translates to, "Why the hell am I using you again? If this keeps up we need to discuss the problem…"), because of the equation used to find specificity:

Specificity = (# of true negatives) / (# of true negatives + # of false positives)

Or

Specificity = (# of people with no surgical injuries and no significant monitoring changes) / (# of people with no surgical injuries and no significant monitoring changes + # of people with no surgical injury but had significant monitoring changes)

Or

Specificity = probability of no significant monitoring findings given there was no injury

So if someone new to the field is calling EMG for every burst they see, they have become a false positive machine. If they did 10 cases and made clinically significant calls in 5 of those cases, and all 10 patients woke up OK, then they have poor specificity.

Specificity = (5) / (5 + 5)

= 0.5, or 50%
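And the mirror-image check for specificity (again a sketch, with a hypothetical function name):

```python
def specificity(true_negatives, false_positives):
    """Proportion of uninjured patients with no significant monitoring changes."""
    return true_negatives / (true_negatives + false_positives)

# The worked example above: 10 clean outcomes, 5 of them alarmed on anyway.
print(f"{specificity(5, 5):.2f}")  # 0.50, or 50%
```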

Application of Sensitivity-Specificity Numbers to tcMEP

For our purposes, a lower specificity is usually more tolerable than a lower sensitivity. And you can't have it both ways: as criteria change to improve one, the other suffers.

That's why you'll see some monitoring groups move away from the all-or-none criteria suggested for tcMEP. When you understand what goes into recording a muscle potential after stimulating the cortex through the cranium, a 100% reduction makes the most sense (and you can argue that the way most groups measure amplitudes lacks accuracy). The specificity goes way up. But since there have been reports of post-op deficits with some CMAP response still present, there is a reduction in sensitivity.

Some groups have adopted 75-80% reduction criteria to further minimize any loss of sensitivity, even though the drop in specificity is far greater.

Should a group choose to really cover their bases against false negatives and lost sensitivity, they might call a 50% reduction in tcMEP a significant change. In my experience, they would have a sensitivity of 100%, but their specificity would be unacceptable (I'm talking about tcMEP for the spinal cord here, not brainstem or peripheral nerve monitoring).
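To see the trade-off in miniature, here is a hedged Python sketch that sweeps the alarm threshold across a handful of invented amplitude drops. The numbers are made up to show the trend, not drawn from any published series:

```python
# Fractional tcMEP amplitude drops, invented for illustration only.
injured   = [1.00, 0.95, 0.85, 0.70, 0.60]  # cases that woke with a deficit
uninjured = [0.80, 0.55, 0.40, 0.30, 0.10]  # cases that woke up fine

def rates(threshold):
    """Sensitivity and specificity if we alarm at >= threshold amplitude drop."""
    tp = sum(drop >= threshold for drop in injured)
    fp = sum(drop >= threshold for drop in uninjured)
    fn = len(injured) - tp
    tn = len(uninjured) - fp
    return tp / (tp + fn), tn / (tn + fp)

for threshold in (1.00, 0.80, 0.50):
    sens, spec = rates(threshold)
    print(f"alarm at >= {threshold:.0%} drop: "
          f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Loosening the criterion catches every injury in this toy data set, but only by alarming on cases that would have woken up fine, which is exactly the tension described above.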

So this is one of the reasons that there are no agreed-upon alarm criteria for a lot of what we do, even if there are guidelines set by our associations.

One last observation on sensitivity-specificity.

For the neuromonitoring tech in the room, we can find ourselves in a tough situation. Many of our surgeons only want to be told about something if it is a real problem. Too many false alarms is an easy way to get kicked out of their room.

As the oversight neuromonitoring doctor, we can find ourselves in a tough situation too. We are there to make sure the surgeon is informed of possible deficits. And because some causes are time-dependent, the sooner the surgeon is informed and can take corrective measures, the better.

I’ve been on both sides of the monitor, as the clinician in the room, remote doc overseeing the case, and doing cases in the operating room without any other oversight.

There are definitely different emotional factors at play.

In the operating room, it is easier to lean toward protecting specificity. You don't want any false negatives, but you're not looking to jump the gun and become a false positive machine either.

Overseeing someone else running the case is a little nerve-racking. There is a loss of control, and the talent level on the other side of the monitor can vary greatly. Human instinct makes you lean toward making sure all changes are reported and sensitivity is as high as possible. Specificity sometimes takes a back seat.

Remember, I am talking about human emotions here.

But having more objective discussions that result in a plan helps the team stay on the same page and removes some of the emotions from the equation.

It's always going to be imperfect. Your best bet is to manage what you can.

Joseph Hartman

Director of Operations | Talks About IONM, EEG, and Managing Remote Teams

I think we're inching along. I would agree that the odds are in your favor to use all-or-none in a mixed anesthesia protocol, whereas you might be better served with stricter alarm criteria as you stated. That's all generally speaking. We might be even more cautious with less myelinated nerves, like CN VII. And now we're inching even further. It would be interesting to look at TIVA usage trending over the years with MEP cases. My gut tells me it continues to get better (outside of supply shortages) as it becomes the way it's taught. Of course, cost consciousness could work against it.

John Sestito

R.EEG/EP T., CLTM, CNIM

The main issue confounding an established decrease-based alarm criterion is the anesthetic regimen, which varies hospital to hospital. How does your hospital's anesthesia team adjust to a surgeon requesting monitoring? All that variability impedes the continuing research necessary to standardize our alarm criteria. So we dumbed it down and said "all or nothing" at one point. I do think that with optimal anesthesia, a percentage-based AUC tcMEP alarm can work. We started using an 80% AUC alarm, based on some recent-ish articles, with accurate prediction of foot drop in a few cases. Right now, tcMEP alarm criteria are like the Wild West. More variability begets more false positives, which leads to surgeon distrust. I'm all for an updated alarm, but we need to conduct more research in an anesthetically optimal environment to get there.
