One thing from this session that I'd like to hear the speakers say more about is the inherent tension, whenever you do surrogacy modeling, between thinking about association versus causality, or causal modeling, or causal effects. By "association" I am referring to a quantity that you can compute and estimate in the observed data. Dr Daniels alluded to this in his discussion. It is easy to check whether that kind of model is holding true or not. The problem is that in a lot of situations what we care about scientifically is what I am calling causal estimands, and those are going to need more assumptions. Dr Taylor gave a nice example in his application, with 14 parameters of which only ten could in fact be identified from the observed data. That is the inherent tension: do you want to base inference only on the observed data, or try to do causal modeling, which requires more than the observed data? There are many causal modeling frameworks that you can use--Dr Joffe called them "languages for causal inference." Dr Taylor worked largely with the potential outcomes framework. Dr Joffe, in his talk, showed you plenty of graphs (which Judea Pearl [10] calls causal diagrams).

Fundamentally, this question of causality becomes more of a study design problem; surrogacy inherently relates to something that happens after the treatment gets assigned. If you want to assess how a post-treatment event affects your true outcome, you are talking about an embedded observational study. Even when your initial trial has been randomized, once you start looking at post-treatment events and trying to figure out what their effects are within your experiment, you are dealing with a question that is essentially an observational data analysis question, not a randomized trial question.

You make a lot of assumptions when you do causal modeling. People allude to using sensitivity analysis, but it seems like another good thing to do is to assume you have the wrong model to begin with. In other words, do your causal modeling based on some assumed model and then ask: given that I used the wrong model, how bad is my final answer; how sensitive are my results to the wrong model choice? Also, several assumptions are often needed to define causal estimands. One can also consider simultaneous sets of assumptions, exploring them together in sensitivity analyses, rather than fixing all but a single parameter and doing sensitivity analysis just on that; looking at several parameters simultaneously should give you more information about the sensitivity of the results.

My next point is a historical throwback. People used to think a lot about model misspecification in the 1960s and 1970s, with Kullback-Leibler divergence and least false parameter minimizers; that period is when the whole notion of sandwich variance estimators began. Let's say I have these assumptions I need for valid causal inference, but I don't know them to be correct. Is it possible to think of constructing analogies to sandwich variance estimators, which could have some robustness built in against starting with the wrong assumptions? I think Dr Daniels alluded to this in a more nonparametric way when he talked about using nonparametric priors.
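For reference, a compact statement of the classical misspecification result behind that last remark, in generic notation that is not taken from the talks: when the working likelihood $f_\theta$ is wrong, the maximum-likelihood estimator converges to the "least false" parameter, the Kullback-Leibler minimizer, and its limiting variance has the sandwich form

$$
\hat{\theta}_n \;\xrightarrow{p}\; \theta^{*} \;=\; \arg\min_{\theta}\, \mathrm{KL}\!\left(f_{0}\,\middle\|\,f_{\theta}\right),
\qquad
\sqrt{n}\,\bigl(\hat{\theta}_n - \theta^{*}\bigr) \;\xrightarrow{d}\; N\!\bigl(0,\; A^{-1} B\, A^{-1}\bigr),
$$

where $A = -\,\mathbb{E}_{0}\!\left[\partial^{2}\log f_{\theta}(Y)/\partial\theta\,\partial\theta^{\top}\right]\big|_{\theta=\theta^{*}}$ and $B = \mathbb{E}_{0}\!\left[\bigl(\partial\log f_{\theta}(Y)/\partial\theta\bigr)\bigl(\partial\log f_{\theta}(Y)/\partial\theta\bigr)^{\top}\right]\big|_{\theta=\theta^{*}}$, with both expectations taken under the true distribution $f_{0}$; the empirical plug-in of $A^{-1} B A^{-1}$ is the sandwich variance estimator.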
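As a minimal illustration of the earlier point about varying several sensitivity parameters at once rather than one at a time, here is a toy sketch; the estimator, the adjustment, and all parameter names are hypothetical and are not a method from the talks.

import numpy as np

# Toy stand-in for a non-identified causal estimate that depends on two
# sensitivity parameters (delta, rho) which the observed data cannot pin down.
def causal_estimate(y_treated, y_control, delta, rho):
    naive = y_treated.mean() - y_control.mean()
    return naive - delta * (1.0 - rho)  # arbitrary illustrative adjustment

rng = np.random.default_rng(0)
y_treated = rng.normal(1.0, 1.0, size=200)
y_control = rng.normal(0.0, 1.0, size=200)

# Two-dimensional sensitivity analysis: sweep both parameters jointly
# instead of fixing one and varying only the other.
deltas = np.linspace(-1.0, 1.0, 5)
rhos = np.linspace(0.0, 1.0, 5)
grid = np.array([[causal_estimate(y_treated, y_control, d, r)
                  for r in rhos] for d in deltas])

print(grid.round(2))  # rows index delta, columns index rho

Inspecting the whole grid shows how the conclusion moves as the assumptions change jointly, which is the extra information a one-parameter-at-a-time sweep would miss.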