Overview

Academic Technology Support Services and the Disability Resource Center recruited 30 faculty and staff to participate in a pilot of the Universal Design Online Content Inspection Tool (UDOIT), an accessibility tool integrated into Canvas. Instructors use the tool to evaluate an entire Canvas course for accessibility, and UDOIT offers suggestions on how to fix the errors it finds. OIT is particularly interested in how this tool may influence accessibility practices for instructors, course coordinators, academic technologists, and instructional designers.
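
UDOIT's own checks run inside Canvas and were not examined at the code level during the pilot, but the kind of issue such a scanner flags can be illustrated in a few lines. The hypothetical Python sketch below (not UDOIT's implementation) scans a fragment of course HTML for images with no alternative text, a common accessibility error:

    # Hypothetical illustration of one check an accessibility scanner might run:
    # flag <img> elements that have no alt attribute at all.
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        """Collect the src of every <img> tag that lacks an alt attribute."""
        def __init__(self):
            super().__init__()
            self.flagged = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attr_map = dict(attrs)
                if "alt" not in attr_map:
                    self.flagged.append(attr_map.get("src", "<unknown>"))

    page_html = ('<p>Week 1 readings</p>'
                 '<img src="schedule.png">'
                 '<img src="logo.png" alt="Department logo">')

    checker = MissingAltChecker()
    checker.feed(page_html)
    for src in checker.flagged:
        print(f"Possible error: image '{src}' has no alt text")
    # Prints: Possible error: image 'schedule.png' has no alt text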

Pilot Timeline

The pilot participants used UDOIT in August and September 2019, and the pilot concluded early in Fall Semester 2019. It was conducted in four phases.

  1. Recruitment: Initial recruitment of participants and gathering of baseline data on the course(s) to be used, along with a self-assessment of each participant’s knowledge of and expertise in editing Canvas pages and in accessibility.
  2. Evaluation: Active use by pilot testers, with gathering of technical, functional, usability, remediation-effort, and course scan data. This included an “initial scan” form to record where each volunteer started on a per-course basis and a “post scan” survey to document their corrections and the time spent on each type of error for the same course(s).
  3. Report and Recommendation: Delivery of a discovery report to the vendor with feedback from pilot participants on what worked well and what was challenging, especially for accessibility remediation but also for how UDOIT works with Design Tools.
  4. Debrief and Feedback-Gathering: Focus group sessions for instructors and gathering final feedback from all participants (not yet completed).

Pilot Goals

The goals of the UDOIT pilot are threefold:

  • First, to articulate University-wide requirements for course accessibility assessment. Beyond the legal and ethical dictates of Section 508 compliance, what should be assessed in course sites and activities, to what levels or standards should it be assessed, and how should it be assessed? 
  • Second, to evaluate the functionality and usability of UDOIT. What does it assess? How well? Are there significant gaps in UDOIT’s coverage or significant deficits in terms of its performance? Was it easy to use? Did instructors require additional training or support to use it effectively? How well did UDOIT’s repair/remediation functions address the flagged accessibility issues?
  • Finally, to determine the resource costs of operating UDOIT. What resources were required to install and configure the base application? What resources were devoted to maintaining and regression-testing the application during the operational phase? And what was required to scaffold the use of UDOIT: how many trainers, central support team members, and/or college academic technology staff members were needed to support it?

Pilot Participants

The pilot included 30 unique volunteers and 42 courses of varying complexity, representing four of the five system campuses, three central units, and six collegiate units. Thirteen instructors are known to be currently teaching courses tested with UDOIT. Additionally, three staff members found the tool within Canvas without our guidance, but we have limited data on their experience.

Pilot Process

Participants scanned 42 courses and reported the number of errors and suggestions they encountered on the Initial Scan form. After making all corrections they were able (or willing) to make in those courses, each participant re-ran UDOIT for the course. Then, via a different form, they reported how many errors and suggestions remained.

Pilot Results

For convenience, we broke the courses into four categories based on the number of errors or suggestions UDOIT reported (a short sketch of this categorization appears after the table below):

  1. Large: more than 39 errors or suggestions
  2. Intermediate: 10 to 39 errors or suggestions
  3. Small: 2 to 10 errors or suggestions
  4. None: 0 or 1 errors or suggestions

The table below shows the number of courses in each category on the initial and post scans:

Criteria                  | Initial Scan: Errors | Initial Scan: Suggestions | Post Scan: Errors | Post Scan: Suggestions
Large: >39                | 14                   | 14                        | 6                 | 7
Intermediate: 10 to 39    | 11                   | 12                        | 8                 | 9
Small: 2 to 10            | 12                   | 11                        | 13                | 10
None: 0 or 1              | 5                    | 5                         | 11                | 12
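
To make the category boundaries concrete, the short sketch below (a hypothetical helper of ours, not part of UDOIT) maps a course's error or suggestion count to one of the four reporting categories. Note that the source ranges place a count of 10 in both the “intermediate” and “small” definitions; the sketch assigns it to “intermediate”:

    # Hypothetical helper (not part of UDOIT): map an error/suggestion count
    # to the pilot's reporting categories. A count of 10 appears in both the
    # "Intermediate" and "Small" ranges in the source; it is treated as
    # "Intermediate" here.
    def categorize(count: int) -> str:
        if count > 39:
            return "Large"
        if count >= 10:
            return "Intermediate"
        if count >= 2:
            return "Small"
        return "None"  # 0 or 1 errors or suggestions

    # Example: a course whose initial scan reported 27 errors.
    print(categorize(27))  # Intermediate
    print(categorize(0))   # None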

Improvement Summary

  • Of the 11 “intermediate” courses, 9 improved to the “small” or “none” category, an 81% improvement (the arithmetic behind these percentages is sketched after this list).
  • Of the 14 “large” courses, 6 remained in that category; the other 8 (57%) moved to a smaller category.
  • Of the 12 “small” courses, 5 ended with no errors, a 41% improvement.
  • Of the 42 courses scanned, 4 received no follow-up action after the initial scan; 2 of those were in the “large” category and 2 in the “intermediate” category.
  • Of the 30 volunteers, 4 did not use UDOIT. Note: the admin tool used to discover this information is still in development and, as such, is imperfect.
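
For reference, the percentages quoted above follow directly from the category counts; a quick (hypothetical) check:

    # Quick arithmetic check of the improvement percentages quoted above.
    intermediate_improved = 9 / 11   # reported as 81%
    large_moved_down = 8 / 14        # reported as 57% (6 of 14 stayed "Large")
    small_cleared = 5 / 12           # reported as 41%

    for label, share in [
        ("Intermediate courses that improved", intermediate_improved),
        ("Large courses that moved to a smaller category", large_moved_down),
        ("Small courses that ended with no errors", small_cleared),
    ]:
        print(f"{label}: {share:.1%}")
    # Prints 81.8%, 57.1%, and 41.7% respectively.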

Pilot Participant Feedback

  • 100% of pilot participant respondents recommended that UDOIT be integrated with Canvas.
  • 90% said they would use the tool frequently.
  • After using UDOIT, instructors’ confidence in the material they created jumped to 90%.

Final Result

UDOIT will be integrated with Canvas in Spring Semester 2020.

Getting Started

Additional Resources