{"id":182,"date":"2021-09-16T22:31:26","date_gmt":"2021-09-17T02:31:26","guid":{"rendered":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/?p=182"},"modified":"2021-09-16T22:33:16","modified_gmt":"2021-09-17T02:33:16","slug":"summary-of-chapter-7","status":"publish","type":"post","link":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/2021\/09\/16\/summary-of-chapter-7\/","title":{"rendered":"Summary of Chapter 7"},"content":{"rendered":"<p><strong><u>Scale Reliability &amp; Validity<\/u><\/strong><\/p>\n<ul>\n<li>Why must we test scales?\n<ol>\n<li>To ensure these scales indeed measure the unobservable construct that we intend to measure (i.e. the scales are \u201cvalid\u201d).<\/li>\n<li>To ensure they measure the intended construct consistently and precisely (i.e. the scales are \u201creliable\u201d).<\/li>\n<li>Reliability and validity are the yardsticks against which the adequacy of our measurement procedures is evaluated in scientific research.<\/li>\n<\/ol>\n<\/li>\n<li>A measure can be reliable but not valid if it measures something very consistently but consistently measures the wrong construct.<\/li>\n<li>A measure can be valid but not reliable if it measures the right construct, but not in a consistent manner.<\/li>\n<\/ul>\n<p><strong><u>Reliability<\/u><\/strong><\/p>\n<ul>\n<li><strong>Reliability <\/strong>is the degree to which the measure of a construct is consistent or dependable. 
In other words, if we use this scale to measure the same construct multiple times, do we get the same result every time, assuming the underlying phenomenon is not changing?\n<ol>\n<li>Reliability implies consistency but not accuracy.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<p><strong>What are the sources of unreliable observations in social science measurements?<\/strong><\/p>\n<ul>\n<li>The observer\u2019s (or researcher\u2019s) subjectivity.<\/li>\n<li>Asking imprecise or ambiguous questions.<\/li>\n<li>Asking questions about issues that the respondents are not familiar with or do not care about.<\/li>\n<\/ul>\n<p><strong>What are the different ways of estimating reliability?<\/strong><\/p>\n<ul>\n<li><strong>Inter-rater reliability<\/strong>\n<ol>\n<li>Also called inter-observer reliability, this is a measure of consistency between two or more independent raters (observers) of the same construct.<\/li>\n<li>Usually, this is assessed in a pilot study.<\/li>\n<\/ol>\n<\/li>\n<li><strong>Test-retest reliability<\/strong>\n<ol>\n<li>A measure of consistency between two measurements (tests) of the same construct administered to the same sample at two different points in time.<\/li>\n<li>The time interval between the two tests is critical.\n<ol>\n<li>Generally, the longer the time gap, the greater the chance that the two observations may change during this time (due to random error).<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<ul>\n<li><strong>Split-half reliability<\/strong>\n<ol>\n<li>A measure of consistency between two halves of a construct measure.<\/li>\n<li>The longer the instrument, the more likely it is that the two halves of the measure will be similar (since random errors are minimized as more items are added).<\/li>\n<\/ol>\n<\/li>\n<li><strong>Internal consistency reliability<\/strong>\n<ol>\n<li>A measure of consistency between different items in a construct.<\/li>\n<li>If a multiple-item construct measure is administered to respondents, the extent to which respondents rate 
those items in a similar manner is a reflection of the internal consistency.<\/li>\n<li><strong>Cronbach\u2019s Alpha<\/strong> \u2013 A reliability measure designed by Lee Cronbach in 1951 that factors scale size into the reliability estimate.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<p><strong><u>Validity<\/u><\/strong><\/p>\n<ul>\n<li>Validity refers to the extent to which a measure adequately represents the underlying construct that it is supposed to measure.<\/li>\n<li>Validity can be assessed using theoretical or empirical approaches, and ideally both.\n<ol>\n<li>Theoretical assessment focuses on how well the idea of a theoretical construct is translated into or represented in an operational measure.\n<ol>\n<li>This is called translation validity and consists of two subtypes: face and content validity.<\/li>\n<\/ol>\n<\/li>\n<li>Empirical assessment examines how well a given measure relates to one or more external criteria, based on empirical observations.\n<ol>\n<li>This is called criterion-related validity and includes four subtypes: convergent, discriminant, concurrent and predictive.<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<p><strong>What are the different ways to measure validity?<\/strong><\/p>\n<ul>\n<li><strong>Face Validity<\/strong>\n<ol>\n<li>Refers to whether an indicator seems to be a reasonable measure of its underlying construct \u201con its face\u201d.<\/li>\n<\/ol>\n<\/li>\n<li><strong>Content Validity<\/strong>\n<ol>\n<li>An assessment of how well a set of scale items matches the relevant content domain of the construct that it is trying to measure.<\/li>\n<li>Requires a detailed description of the entire content domain of a construct.<\/li>\n<\/ol>\n<\/li>\n<li><strong>Convergent Validity<\/strong>\n<ol>\n<li>Refers to the closeness with which a measure relates to (or converges on) the construct that it is purported to measure.<\/li>\n<\/ol>\n<\/li>\n<li><strong>Discriminant Validity<\/strong>\n<ol>\n<li>Refers to the degree to which a measure does not 
measure (or discriminates from) other constructs that it is not supposed to measure.<\/li>\n<li>Usually, convergent and discriminant validity are assessed jointly.\n<ol>\n<li>Convergent and discriminant validity can be evaluated with bivariate correlations, exploratory factor analysis, or the multi-trait multi-method (MTMM) approach.<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<ul>\n<li><strong>Predictive Validity<\/strong>\n<ol>\n<li>The degree to which a measure successfully predicts a future outcome that it is theoretically expected to predict.<\/li>\n<\/ol>\n<\/li>\n<li><strong>Concurrent Validity<\/strong>\n<ol>\n<li>Examines how well one measure relates to another concrete criterion that is presumed to occur simultaneously.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<p><strong><u>Theory of Measurement<\/u><\/strong><\/p>\n<ul>\n<li>Classical Test Theory (also called True Score Theory)\n<ol>\n<li>A psychometric theory that examines how measurements work, what they measure, and what they do not measure.<\/li>\n<li>The theory postulates that every measurement has a <em>true score <\/em>T that could be observed accurately if there were no errors in measurement.<\/li>\n<li>However, the presence of <em>measurement error <\/em>E results in a deviation of the <em>observed score <\/em>X from the true score.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<p>X = T + E<\/p>\n<p>Observed score = True score + Error<\/p>\n<ul>\n<li>Measurement errors can be of two types: random and systematic.\n<ol>\n<li><strong>Random error<\/strong> is error that can be attributed to a set of unknown and uncontrollable external factors that randomly influence some observations but not others.\n<ol>\n<li>Random error reduces the reliability of measurement by increasing variability in observations.<\/li>\n<\/ol>\n<\/li>\n<li><strong>Systematic error<\/strong> is error introduced by factors that systematically affect all observations of a construct across an entire sample.\n<ol>\n<li>Systematic error reduces the validity of measurement by shifting the central tendency measure.<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<p><strong><u>Integrated Approach to Measurement Validation<\/u><\/strong><\/p>\n<ul>\n<li>A complete and adequate assessment of validity must include both theoretical and empirical approaches.<\/li>\n<li>The integrated approach starts in the theoretical realm:\n<ol>\n<li>Conceptualize the constructs of interest.<\/li>\n<li>Select or create items or indicators for each construct based on our conceptualization of the construct.<\/li>\n<li>Use a Q-sort for item refinement and dropping.<\/li>\n<li>Examine face and content validity.<\/li>\n<\/ol>\n<\/li>\n<li>The integrated approach then moves into the empirical realm:\n<ol>\n<li>Collect pilot test data.<\/li>\n<li>Run factor analysis for convergent\/discriminant validity.<\/li>\n<li>Examine reliability and scale dimensionality.<\/li>\n<li>Examine predictive validity.<\/li>\n<li>If the construct measures satisfy most or all of the requirements of reliability and validity, the operational measures are reasonably adequate and accurate.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Scale Reliability &amp; Validity Why must we test scales? 
To ensure these scales indeed measure the unobservable construct that we wanted to test \u2013 (i.e. the scales are \u201cvalid\u201d). To ensure they measure the intended construct consistently and precisely (i.e. the scales are \u201creliable\u201d) Reliability and validity are the yardsticks against which the adequacy of our measurement procedures are evaluated&#8230; <a href=\"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/2021\/09\/16\/summary-of-chapter-7\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":5244,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177784],"tags":[],"class_list":["post-182","post","type-post","status-publish","format-standard","hentry","category-chapter-summary"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/posts\/182","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/users\/5244"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/comments?post=182"}],"version-history":[{"count":9,"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/posts\/182\/revisions"}],"predecessor-version":[{"id":191,"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/posts\/182\/revisions\/191"}],"wp:attachment":[{"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/media?parent=182"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/ca
tegories?post=182"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.richmond.edu\/researchmethods-fall2021\/wp-json\/wp\/v2\/tags?post=182"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}