<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>PMean &#187; Risk adjustment</title>
	<atom:link href="http://blog.pmean.com/tag/risk-adjustment/feed/" rel="self" type="application/rss+xml" />
	<link>http://blog.pmean.com</link>
	<description>A blog about statistics, evidence-based medicine, and research ethics</description>
	<lastBuildDate>Sat, 22 Jun 2024 17:10:26 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.7.41</generator>
	<item>
		<title>PMean: How do you select variables for a risk adjusted model?</title>
		<link>http://blog.pmean.com/select-variables/</link>
		<comments>http://blog.pmean.com/select-variables/#comments</comments>
		<pubDate>Thu, 16 Jul 2015 18:51:14 +0000</pubDate>
		<dc:creator><![CDATA[pmean]]></dc:creator>
				<category><![CDATA[Statistics]]></category>
		<category><![CDATA[Risk adjustment]]></category>

		<guid isPermaLink="false">http://blog.pmean.com/?p=501</guid>
		<description><![CDATA[I was helping a colleague write a response to a reviewer who asked about a risk adjusted model. How did you select the variables for adjustment? He/she speculated that we had used some type of stepwise selection. I used to do this, but stopped doing it in favor of adjusting for any or all variables [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I was helping a colleague write a response to a reviewer who asked about a risk adjusted model. How did you select the variables for adjustment? He/she speculated that we had used some type of stepwise selection. I used to do this, but stopped doing it in favor of adjusting for any or all variables that were known or suspected to be important. There are serious problems with screening using stepwise approaches to select variables for risk adjustment. But the literature is quite complex and there is no apparent consensus on what is best. Here are some quotes from a few publications about this issue.<span id="more-501"></span></p>
<p>First, several sources said that you should not adjust for intermediate variables or variables in the causal pathway.</p>
<p>&#8220;Confounders should not be confused with intermediate variables, which are part of the causal pathway (Figure 1b), because adjusting for an intermediate variable would attenuate the measure of association.&#8221; Wakkee (2014)</p>
<p>&#8220;An intermediate variable is generally thought of as a post-treatment variable influenced by treatment that may or may not lie on the causal pathway between the treatment and the outcome. When the goal of CER is to estimate the total causal effect of the treatment on the outcome, adjustment for variables on the causal pathway between treatment and outcome is unnecessary and likely to induce bias2 toward a relative risk of 1.0, though the direction can sometimes be in the opposite direction.&#8221; Sauer (2013)</p>
<p>In general, it is better to include a variable in a risk adjusted model than to exclude it, provided that you have enough data.</p>
<p>&#8220;When unmeasured confounding remains, strong arguments exist for error on the side of over-adjustment (adjusting for instruments and colliders) rather than failing to adjust for measured confounders (under-adjustment).&#8221; Sauer (2013)</p>
<p>Of course, this approach sometimes requires a lot of data.</p>
<p>&#8220;For multiple logistic regression and Cox proportional hazards analysis, it is recommended that for every independent variable screened for association there are at least 10 events (Harrell, 2001). In multivariable linear regression it is recommended that for every independent variable approximately 10 subjects are included (Harrell, 2001).&#8221; Wakkee (2014)</p>
<p>If you try to avoid this problem by a preliminary screening step or by using some type of stepwise selection of variables, you may be jumping from the frying pan into the fire.</p>
<p>&#8220;One option is bivariable analysis, where confounders that change the studied association by 10% or more are included in the final model (Vandenbroucke et al., 2007). Variables can also be selected based on a certain P value (e.g., &lt;0.05) in univariable analysis, although a disadvantage of this technique is that variables that are not important in the univariable association, and are therefore excluded, can be important in the full model. Another option is to allow the statistical program to choose the variables by forward or backward selection. In this technique the role of each independent variable is evaluated stepwise based on statistical significance. The most significant variables are added one by one, starting with an empty model (forward selection), or nonsignificant variables are removed stepwise (backward selection), starting with a full model. This can lead to unpredictable effects because the significance may depend on the order of adding or removing covariates. In addition, this technique may also lead to the exclusion of variables you might have preferred to keep in your model based on clinical reasons.&#8221; Wakkee (2014)</p>
<p>&#8220;Variable selection procedures also suffer from the fact that estimates about treatment effects are made after having already used the data to decide on covariates.&#8221; Sauer (2013)</p>
<p>&#8220;In practice, variable selection is dogged by the fact that the final variance estimates tend to be downwardly biased if they do not account for the selection, while the point estimates may suffer related distortions&#8221; Greenland (2008)</p>
<p>Greenland suggests various shrinkage approaches, which are difficult to summarize in a few brief quotes. It is also worth looking at some of the approaches in Sauer (2013).</p>
<p>References</p>
<p>Marlies Wakkee, Loes M Hollestein and Tamar Nijsten. &#8220;Multivariable Analysis&#8221; Journal of Investigative Dermatology (2014) 134, e20. doi:10.1038/jid.2014.132. Available at <a href="http://www.nature.com/jid/journal/v134/n5/full/jid2014132a.html">http://www.nature.com/jid/journal/v134/n5/full/jid2014132a.html</a>.</p>
<p>Sander Greenland (2008) &#8220;Invited Commentary: Variable Selection versus Shrinkage in the Control of Multiple Confounders&#8221;  American Journal of Epidemiology. Volume 167, Issue 5, Pp. 523-529. Available at <a href="http://aje.oxfordjournals.org/content/167/5/523.full">http://aje.oxfordjournals.org/content/167/5/523.full</a>.</p>
<p>Brian C. Sauer, Alan Brookhart, Jason Roy, and Tyler Vanderweele (2013) &#8220;A Review of Covariate Selection for Nonexperimental Comparative Effectiveness Research&#8221; Pharmacoepidemiol Drug Saf. 2013 Nov; 22(11): 1139–1145. doi: 10.1002/pds.3506. Available at <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4190055/">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4190055/</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pmean.com/select-variables/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>PMean: Forget confounding, and think of things in terms of covariate imbalance</title>
		<link>http://blog.pmean.com/covariate-imbalance/</link>
		<comments>http://blog.pmean.com/covariate-imbalance/#comments</comments>
		<pubDate>Wed, 04 Mar 2015 20:50:22 +0000</pubDate>
		<dc:creator><![CDATA[pmean]]></dc:creator>
				<category><![CDATA[Statistics]]></category>
		<category><![CDATA[Risk adjustment]]></category>

		<guid isPermaLink="false">http://blog.pmean.com/?p=414</guid>
		<description><![CDATA[Someone noted in a passing comment in their email that they found the term &#8220;confounding&#8221; to be difficult and confusing. I&#8217;ve been doing this stuff for over thirty years, but to be quite honest, I get a little nervous about this as well. But I took the time to explain a simpler concept, &#8220;covariate imbalance.&#8221; [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Someone noted in a passing comment in their email that they found the term &#8220;confounding&#8221; to be difficult and confusing. I&#8217;ve been doing this stuff for over thirty years, but to be quite honest, I get a little nervous about this as well. But I took the time to explain a simpler concept, &#8220;covariate imbalance.&#8221;<span id="more-414"></span></p>
<p>A covariate is a variable that is not of direct interest in your study, but one that you need to account for because it might potentially influence your outcome variable. If a covariate is imbalanced between the treatment and control group, then there is potential for bias and you need to look at an adjusted model (sometimes called risk adjustment).</p>
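<p>One common way to quantify covariate imbalance (standard practice, though not something I discussed above) is the standardized mean difference between groups; absolute values above roughly 0.1 are often taken as a signal of meaningful imbalance. A sketch with made-up data:</p>

```python
import math

def standardized_difference(treated, control):
    """Standardized mean difference for a continuous covariate:
    difference in group means divided by the pooled standard deviation."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    pooled_sd = math.sqrt((var(treated) + var(control)) / 2)
    return (mean(treated) - mean(control)) / pooled_sd

# Hypothetical ages in two cohorts (numbers invented for illustration)
artillery = [24, 26, 23, 28, 25, 27, 24, 26]
controls  = [30, 29, 33, 31, 28, 32, 30, 34]
d = standardized_difference(artillery, controls)
print(abs(d) > 0.1)  # -> True: age is clearly imbalanced between cohorts
```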
<p>For example, I was looking at a study of soldiers who were part of the artillery. They would fire these big guns and it would shoot forward a large projectile. But at the same time, the big guns would send a puff of smoke backwards. This smoke has a fair amount of lead in it. The scientist wanted to know if the lead exposure caused problems with the soldiers&#8217; sperm counts.</p>
<p>This was a study that you could not randomize because (among other things) if you randomly assigned half of a group of soldiers who had never fired artillery before to go out and fire the big guns for an afternoon, their lack of training would put them at serious risk of killing themselves.</p>
<p>So you had to take a cohort of soldiers with artillery experience and compare them to a cohort of soldiers with a totally different, lead-free experience.</p>
<p>So that&#8217;s what this scientist did and when he collected the data he noticed an imbalance in a key covariate: smoking.</p>
<p>Anyone who works around artillery is strongly discouraged from smoking anywhere near the ammunition. Maybe they could smoke while off duty, but it turns out that smokers were almost non-existent in the artillery cohort. Among the controls, the smoking rate was not especially high, but it was certainly a lot higher than in the artillery group.</p>
<p>Does smoking influence sperm counts? The data is mixed on this, but we decided that we had to adjust for smoking in any comparison of artillery soldiers versus the controls.</p>
<p>So that&#8217;s an example of covariate imbalance. I&#8217;m sure your application doesn&#8217;t require your subjects to fire Howitzers, but I hope you can still draw a rough analogy to your work.</p>
<p>Now covariate imbalance is not quite the same thing as confounding, but the distinction between the two is often not that important.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pmean.com/covariate-imbalance/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Recommended: Comparison of Logistic Regression versus Propensity Score When the Number of Events Is Low and There Are Multiple Confounders</title>
		<link>http://blog.pmean.com/propensity-monte-carlo/</link>
		<comments>http://blog.pmean.com/propensity-monte-carlo/#comments</comments>
		<pubDate>Fri, 20 Dec 2013 21:20:56 +0000</pubDate>
		<dc:creator><![CDATA[pmean]]></dc:creator>
				<category><![CDATA[Recommended]]></category>
		<category><![CDATA[Risk adjustment]]></category>

		<guid isPermaLink="false">http://blog.pmean.com/?p=104</guid>
		<description><![CDATA[This page has moved to a new website.]]></description>
				<content:encoded><![CDATA[<p>This page has moved to <a href="http://new.pmean.com/propensity-monte-carlo/">a new website</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pmean.com/propensity-monte-carlo/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
