
Chapter 25 Biostatistics






Basic Concepts



Test Characteristics






4 How does the sensitivity of a test relate to its specificity?


Sensitivity and specificity move in opposite directions as the test's positivity threshold changes. In other words, as sensitivity increases, specificity decreases, and vice versa. This occurs because, to improve the sensitivity of a test (i.e., to detect more people with the disease of interest), the threshold for calling a result positive must be made less stringent. In catching more people with the disease, the test will therefore also yield positive results in more people without the disease.


For example, rheumatoid factor (RF) is often used to aid in the diagnosis of rheumatoid arthritis (RA). RF is positive in 70% of patients with RA. To catch more patients with RA, you can use the erythrocyte sedimentation rate (ESR), which is positive in 90% of patients with RA. However, with this increased sensitivity comes decreased specificity: the ESR is very nonspecific and can be positive in any inflammatory process, from pneumonia to temporal arteritis. Given its high sensitivity, a negative ESR is helpful in ruling out inflammatory disease (see later discussion).
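
To make the tradeoff concrete, the short Python sketch below simulates a continuous test value in diseased and healthy patients and recomputes sensitivity and specificity as the positivity cutoff is loosened. The distributions, cutoffs, and sample sizes are purely hypothetical illustrations, not real RF or ESR data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical continuous test values (e.g., an inflammatory marker):
# diseased patients tend to have higher values than healthy controls.
diseased = rng.normal(loc=60, scale=15, size=10_000)
healthy = rng.normal(loc=40, scale=15, size=10_000)

def sens_spec(cutoff):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = np.mean(diseased >= cutoff)   # diseased correctly called positive
    specificity = np.mean(healthy < cutoff)     # healthy correctly called negative
    return sensitivity, specificity

# Progressively less stringent positivity thresholds.
for cutoff in (65, 55, 45):
    se, sp = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {se:.2f}, specificity {sp:.2f}")
```

Lowering the cutoff drives sensitivity up and specificity down, mirroring the RF-versus-ESR comparison above.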


SPIN and SNOUT are useful mnemonics. SPIN tells us that specific tests rule in disease. That is, the more specific a test, the more likely it is that a positive result indicates real disease. SNOUT tells us that sensitive tests rule out disease. That is, the more sensitive a test, the more likely that a negative result rules out disease. In serious diseases that can be treated effectively if detected, a greater sensitivity is desired (often at the expense of specificity).













14 What are the differences between type I and type II error? How is power related to type II error?


Type I (α) error refers to “false positive error.” In other words, α = the probability of claiming that a true difference exists between two means when in reality none exists. The “null hypothesis” (H0) claims that there is no difference between two means. If a researcher claims a statistical difference between two groups when none exists and falsely rejects the null hypothesis, the researcher has committed a type I error.


By contrast, type II (β) error is “false negative error.” It is the probability of concluding that no difference exists between two means when in fact a difference truly exists. A type II error is a failure to reject the null hypothesis when it should in fact be rejected.


For instance, suppose we wish to determine the factors that account for a difference in mean USMLE scores between two groups of medical students. We hypothesize that time spent studying for the USMLE differs between the two groups and may therefore have a causal relationship with the scores the students obtain. We devise a method to measure this variable and find no statistically significant difference between the two groups. If a difference truly exists, we have committed a type II error.
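
As a rough numerical illustration of both error types, the sketch below repeatedly runs a two-sample t test on simulated score data (the numbers are hypothetical, not real USMLE data): first with no true difference between the groups, where any “significant” result is a type I error, and then with a genuine difference, where any non-significant result is a type II error.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha = 0.05
n_trials, n_per_group = 2_000, 50

# Type I error: both groups drawn from the SAME distribution (H0 is true),
# so every "significant" result is a false positive.
false_pos = 0
for _ in range(n_trials):
    a = rng.normal(230, 20, n_per_group)   # hypothetical score distribution
    b = rng.normal(230, 20, n_per_group)
    if ttest_ind(a, b).pvalue < alpha:
        false_pos += 1

# Type II error: the groups truly differ (H0 is false),
# so every non-significant result is a false negative.
true_diff = 8   # hypothetical true difference in mean scores
false_neg = 0
for _ in range(n_trials):
    a = rng.normal(230, 20, n_per_group)
    b = rng.normal(230 + true_diff, 20, n_per_group)
    if ttest_ind(a, b).pvalue >= alpha:
        false_neg += 1

print(f"estimated type I error rate (alpha): {false_pos / n_trials:.3f}")  # ~0.05 by construction
print(f"estimated type II error rate (beta): {false_neg / n_trials:.3f}")  # depends on effect size and n
```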


The term “statistical power” reflects a study’s ability to correctly detect a difference between means when one truly exists. Power is equal to 1 − β. This makes good sense, right? Since β is the probability of committing a type II error (failing to detect a difference that truly exists), power must fall as β rises.
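
Continuing the hypothetical numbers from the previous sketch, power can also be computed analytically for a two-sample t test; the sketch below assumes the statsmodels library is available and reuses the same illustrative effect size and sample size. The simulated estimate of 1 − β from the previous sketch should land close to this analytic value.

```python
from statsmodels.stats.power import TTestIndPower

# Same illustrative numbers as the simulation above.
true_diff, sd, n_per_group, alpha = 8, 20, 50, 0.05
effect_size = true_diff / sd          # Cohen's d for the assumed true difference

# Solving for power (the one argument left unspecified).
analytic_power = TTestIndPower().solve_power(
    effect_size=effect_size, nobs1=n_per_group, alpha=alpha, ratio=1.0
)
print(f"analytic power: {analytic_power:.2f}")   # power = 1 - beta
# The simulated estimate, 1 - (false_neg / n_trials), should be close to this value.
```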



