Thu, Aug 18, 2022: 2:45 PM-3:00 PM
514B
Background/Question/Methods

A major barrier to improving the teaching of complex skills in science is the difficulty of measuring those skills at scale. Evidence-based teaching depends on evidence, and with large classes or across varied populations it can be challenging to find the time and other resources to gather the data needed to design and refine curricula.

Our team has been developing a digital, performance-based assessment of graph construction competence that can be (mostly) auto-scored and used at large scale. The assessment, called GraphSmarts, tests students using a storyline about trophic cascades affecting the conservation of an ecological community. We have gone through six iterations of the assessment, the last three guided by the Evidence-Centered Design framework. This framework specifies three parts for an assessment: a student model defining the practices to be measured; an interface task that students complete to demonstrate their competence; and an evidence model that connects the data from the interface task to the practices in the student model. We have used a combination of literature review, almost 100 student interviews, around 20 faculty interviews, several faculty focus groups, and testing in 11 classrooms in diverse settings to refine and validate the assessment.
Results/Conclusions

We will briefly present the student model (the Graph Construction Conceptual Model), which consists of seventeen practices identified as important for graph construction in the biological sciences. We will show data from 294 students and 17 instructors across multiple institutions that demonstrate the practices where GraphSmarts captures competence well and those where it still needs improvement. We find that GraphSmarts distinguishes graphing competence between populations in the expected pattern. The performance-based assessment correlates significantly (Kendall's τ = 0.51) with questions on the same practices and with a paper-and-pencil version of the assessment. Order effects for the different tasks in the assessment were small and non-significant (p > 0.25 for all comparisons). We have evidence of test-retest reliability from testing multiple semesters of the same class, and of internal reliability from significant (albeit moderate) correlations between assessment tasks. Some practices showed very little variation between students, indicating that the assessment was not sensitive to competence in those practices. We will discuss the implications of these data for GraphSmarts' potential as a tool for wide-scale assessment of graphing competence. We will also provide information on how faculty can sign up to have their own classes assessed for graphing competence.