Accessibility test report
Lydall (CCVAX)
Autotest `a11y`, version 7
Score: 1091
Introduction
In a comparison of the accessibility of web pages, Lydall (CCVAX) received a score of 1091 (where 0 is the best possible score). This report explains how that score was computed.
The pages were tested with version 7 of the `a11y` procedure of Autotest. A total of 427 tests (of which 411 were bundled into 3 packages) were performed on each page. The resulting data were saved in a JSON-format file.
These tests—like all tests—are fallible. The failures described below merit investigation as potential opportunities for improved accessibility.
Summary
The packages’ and tests’ contributions to the score were:
| test or package | score |
|---|---|
| total | 1091 |
| focInd | 405 |
| hover | 172 |
| ibm | 118 |
| wave | 104 |
| axe | 88 |
| focOp | 64 |
| zIndex | 54 |
| styleDiff | 39 |
| log | 17 |
| labClash | 10 |
| focAll | 9 |
| linkUl | 9 |
| role | 2 |
| bulk | 0 |
| embAc | 0 |
| menuNav | 0 |
| motion | 0 |
| radioSet | 0 |
| tabNav | 0 |
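The total is the sum of the individual test and package scores. As a sketch, assuming the JSON-format file records one `{ which, score }` entry per test (the exact schema of that file is not shown in this report), the total could be recomputed as:

```javascript
// Recompute the total score from per-test records. The { which, score }
// shape is an assumption of this sketch, not the documented file schema.
const results = [
  { which: 'focInd', score: 405 },
  { which: 'hover', score: 172 },
  { which: 'ibm', score: 118 },
  { which: 'wave', score: 104 },
  { which: 'axe', score: 88 },
  { which: 'focOp', score: 64 },
  { which: 'zIndex', score: 54 },
  { which: 'styleDiff', score: 39 },
  { which: 'log', score: 17 },
  { which: 'labClash', score: 10 },
  { which: 'focAll', score: 9 },
  { which: 'linkUl', score: 9 },
  { which: 'role', score: 2 }
];

// Tests that passed contribute 0 and are omitted here.
const total = results.reduce((sum, r) => sum + r.score, 0);
console.log(total); // 1091
```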
Test packages
Most of the tests belong to the following three accessibility test packages created by other specialists.
axe
The page did not pass the `axe` test and received a score of 88 on `axe`. The details are in the JSON-format file, in the section starting with `"which": "axe"`. There was at least one failure of:
- color-contrast: Ensures the contrast between foreground and background colors meets WCAG 2 AA contrast ratio thresholds
- duplicate-id: Ensures every id attribute value is unique
- empty-heading: Ensures headings have discernible text
- heading-order: Ensures the order of headings is semantically correct
- label: Ensures every form element has a label
- landmark-one-main: Ensures the document has a main landmark
- link-name: Ensures links have discernible text
- meta-viewport: Ensures <meta name="viewport"> does not disable text scaling and zooming
- region: Ensures all page content is contained by landmarks
Axe is an open-source package sponsored by the accessibility consulting firm Deque. The `axe` test performs all 138 default tests in the Axe package.
ibm
The page did not pass the `ibm` test and received a score of 118 on `ibm`. The details are in the JSON-format file, in the section starting with `"which": "ibm"`. There was at least one failure of:
- WCAG20_Input_ExplicitLabel: Form control element <input> has no associated label
- WCAG20_Body_FirstASkips_Native_Host_Sematics: The page does not provide a way to quickly navigate to the main content (ARIA "main" landmark or a skip link)
- WCAG20_A_HasText: Hyperlink has no link text, label or image with a text alternative
- RPT_Table_DataHeadingsAria: Table has no headers identified
- RPT_Header_HasContent: Heading element has no descriptive content
- RPT_Elem_UniqueId: The <p> element has the id "copyrights" that is already in use
- Rpt_Aria_OrphanedContent_Native_Host_Sematics: Content is not within a landmark element
- IBMA_Color_Contrast_WCAG2AA: Text contrast of 2.85 with its background is less than the WCAG AA minimum requirements for text of size 11px and weight of 400
- IBMA_Color_Contrast_WCAG2AA: Text contrast of 4.05 with its background is less than the WCAG AA minimum requirements for text of size 10px and weight of 400
- IBMA_Color_Contrast_WCAG2AA: Text contrast of 3.52 with its background is less than the WCAG AA minimum requirements for text of size 18px and weight of 400
- IBMA_Color_Contrast_WCAG2AA: Text contrast of 4.05 with its background is less than the WCAG AA minimum requirements for text of size 16px and weight of 700
- IBMA_Color_Contrast_WCAG2AA: Text contrast of 4.05 with its background is less than the WCAG AA minimum requirements for text of size 14px and weight of 400
Equal Access is an open-source package sponsored by IBM Corporation. The `ibm` test performs all 163 default tests in the Equal Access package.
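Several of the `IBMA_Color_Contrast_WCAG2AA` failures above report ratios such as 2.85 and 4.05 against the WCAG AA minimums. WCAG 2 defines the ratio from the relative luminance of the two colors: each sRGB channel is linearized, luminance is `0.2126 R + 0.7152 G + 0.0722 B`, and the ratio is `(Llighter + 0.05) / (Ldarker + 0.05)`. A minimal sketch of that computation:

```javascript
// WCAG 2 contrast ratio between two sRGB colors (channels 0-255).
function channel(c) {
  const s = c / 255;
  // Linearize the gamma-encoded channel per the WCAG 2 definition.
  return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}

function luminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(2)); // "21.00"
// Mid-gray #777777 on white is about 4.48:1, just under the 4.5:1
// AA minimum for normal-size text.
console.log(contrastRatio([119, 119, 119], [255, 255, 255]) >= 4.5); // false
```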
wave
The page did not pass the `wave` test and received a score of 104 on `wave`. The details are in the JSON-format file, in the section starting with `"which": "wave"`. There was at least one failure of:
- error/label_missing: Missing form label
- error/heading_empty: Empty heading
- error/button_empty: Empty button
- error/link_empty: Empty link
- contrast/contrast: Very low contrast
- alert/alt_suspicious: Suspicious alternative text
- alert/region_missing: No page regions
- alert/heading_skipped: Skipped heading level
- alert/text_small: Very small text
- alert/underline: Underlined text
- alert/title_redundant: Redundant title text
- alert/table_layout: Layout table
WAVE is a proprietary package owned by WebAIM, a program of the Institute for Disability Research, Policy, and Practice at Utah State University. The `wave` test performs all 110 default tests in the WAVE package.
Custom tests
The tests in the above packages are designed to detect some, not all, accessibility problems. The procedure includes the following custom tests that supplement the tests of the packages.
bulk
The page passed the `bulk` test.
The `bulk` test counts the initially visible elements in a page. A page with a large count tends to be complex and busy, frustrating some users, especially if they have visual or motor disabilities, as they try to determine what the page is about, whether it is relevant, and how to find a specific thing in it.
When the count exceeds 250, the procedure begins to assign a non-zero score.
embAc
The page passed the `embAc` test.
The `embAc` test detects improper embedding of interactive elements (links, buttons, inputs, and select lists) within links or buttons. Such embedding violates the HTML standard, complicates user interaction, and creates risks of error: it becomes non-obvious what a user will activate with a click.
focAll
The page did not pass the `focAll` test and received a score of 9 on `focAll`. The details are in the JSON-format file, in the section starting with `"which": "focAll"`.
Summary of the details:
- tabFocusables: 39
- tabFocused: 42
- discrepancy: 3
The `focAll` test detects discrepancies between the counts of focusable elements and Tab-focused elements. Navigating with the Tab key normally moves the focus and does nothing else. If tabbing also focuses extra elements, or fails to focus some focusable elements, navigation becomes more complicated and the page may be unusable for people who must use only a keyboard (not a mouse) or a keyboard-emulating assistive device to navigate.
focInd
The page did not pass the `focInd` test and received a score of 405 on `focInd`. The details are in the JSON-format file, in the section starting with `"which": "focInd"`.
Summary of the details:
- indicatorMissing: 81
- nonOutlinePresent: 0
The `focInd` test detects focusable elements without standard focus indicators. An outline is the standard and most recognizable focus indicator; as you repeatedly press the Tab key, the outline moves through the page. Other focus indicators are more likely to be misunderstood. For example, underlines may be mistaken for selection indicators or links. An absent focus indicator prevents the user from knowing what a keyboard action will act on.
focOp
The page did not pass the `focOp` test and received a score of 64 on `focOp`. The details are in the JSON-format file, in the section starting with `"which": "focOp"`.
Summary of the details:
- onlyFocusable: 0
- onlyOperable: 16
The `focOp` test detects discrepancies between Tab-focusability and operability. The standard practice is to make focusable elements operable and vice versa. If focusable elements are not operable, users are likely to be surprised that nothing happens when they try to operate them. If operable elements are not focusable, users who depend on keyboard navigation cannot operate those elements. The test considers an element operable if it has a non-inherited pointer cursor and is not a `LABEL` element, has an operable tag name (`A`, `BUTTON`, `IFRAME`, `INPUT`, `SELECT`, or `TEXTAREA`), or has an `onclick` attribute. The test considers an element Tab-focusable if its `tabIndex` property has the value 0.
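The operability and focusability heuristics just described can be sketched as predicates. The real test inspects live DOM nodes; the plain-object descriptors and their property names (`hasPointerCursor`, `cursorInherited`, `hasOnClick`) are assumptions of this sketch:

```javascript
// Operable tag names per the focOp description above.
const OPERABLE_TAGS = new Set(['A', 'BUTTON', 'IFRAME', 'INPUT', 'SELECT', 'TEXTAREA']);

// An element is operable if it has a non-inherited pointer cursor and is
// not a LABEL, or has an operable tag name, or has an onclick attribute.
function isOperable(el) {
  return (el.hasPointerCursor && !el.cursorInherited && el.tagName !== 'LABEL')
    || OPERABLE_TAGS.has(el.tagName)
    || el.hasOnClick === true;
}

// An element is Tab-focusable if its tabIndex property is 0.
function isTabFocusable(el) {
  return el.tabIndex === 0;
}

// A DIV with a click handler but no tabindex: operable but not focusable,
// so it would count toward onlyOperable (16 on this page).
const div = { tagName: 'DIV', hasPointerCursor: false, cursorInherited: false, hasOnClick: true, tabIndex: -1 };
console.log(isOperable(div), isTabFocusable(div)); // true false
```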
hover
The page did not pass the `hover` test and received a score of 172 on `hover`. The details are in the JSON-format file, in the section starting with `"which": "hover"`.
Summary of the details:
- triggers: 12
- madeVisible: 62
- opacityChanged: 0
- opacityAffected: 0
- unhoverables: 0
The `hover` test detects unexpected effects of hovering. The normal purpose of hovering is to show the user which element is currently being actually or effectively hovered over and would therefore be the target of a mouse click. When hovering does more than that, the additional effects can confuse or startle users, especially those without precise mouse control. The test detects whether hovering makes elements visible, changes the opacities of elements, affects the opacities of elements by changing the opacities of their ancestors, and fails to reach elements. Only visible elements that have `A`, `BUTTON`, or `LI` tag names or have `onmouseenter` or `onmouseover` attributes are considered as triggers of such effects when hovered over. The effects of hovering are inspected for the descendants of the grandparent of the trigger if the trigger has the tag name `A` or `BUTTON`, or otherwise for the descendants of the trigger. The only elements counted as being made visible by hovering are those with tag names `A`, `BUTTON`, `INPUT`, or `SPAN`, and those with `role="menuitem"` attributes.
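The trigger selection and inspection scope described above can be sketched as follows. The descriptor objects (with `visible`, `hasOnMouseEnter`, `hasOnMouseOver` properties) are assumptions of this sketch; the real test works on live DOM nodes:

```javascript
// Tag names treated as hover triggers per the description above.
const TRIGGER_TAGS = new Set(['A', 'BUTTON', 'LI']);

// A visible element is a trigger if it has a trigger tag name or a
// mouseenter/mouseover attribute.
function isHoverTrigger(el) {
  return el.visible === true
    && (TRIGGER_TAGS.has(el.tagName) || el.hasOnMouseEnter === true || el.hasOnMouseOver === true);
}

// For A and BUTTON triggers, effects are inspected among the descendants
// of the trigger's grandparent; otherwise among the trigger's own descendants.
function inspectionRoot(el) {
  return (el.tagName === 'A' || el.tagName === 'BUTTON') ? 'grandparent' : 'self';
}

const menuItem = { tagName: 'LI', visible: true, hasOnMouseEnter: false, hasOnMouseOver: false };
console.log(isHoverTrigger(menuItem), inspectionRoot(menuItem)); // true self
```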
labClash
The page did not pass the `labClash` test and received a score of 10 on `labClash`. The details are in the JSON-format file, in the section starting with `"which": "labClash"`.
Summary of the details:
- mislabeled: 0
- unlabeled: 5
The `labClash` test detects defects in the labeling of buttons, non-hidden inputs, select lists, and text areas. The defects include missing labels and redundant labels. Redundant labels are labels that are superseded by other labels. Explicit and implicit (wrapped) labels are additive, not conflicting.
linkUl
The page did not pass the `linkUl` test and received a score of 9 on `linkUl`. The details are in the JSON-format file, in the section starting with `"which": "linkUl"`.
Summary of the details:
- total: 3
- underlined: 0
- underlinedPercent: 0
The `linkUl` test detects failures to underline inline links. Underlining and color are the traditional style properties that identify links. Collections of links in blocks can sometimes be recognized without underlines, but inline links are difficult or impossible to distinguish visually from surrounding text if not underlined. Underlining inline links only on hover provides an indicator valuable only to mouse users, and even they must traverse the text with a mouse merely to discover which passages are links.
Warning: This test classifies links as inline or block. Some links classified as inline may not look like inline links to users.
log
The page did not pass the `log` test and received a score of 17 on `log`. The details are in the JSON-format file, in the section starting with `"which": "log"`.
Summary of the details:
- logCount: 18
- logSize: 858
- visitRejectionCount: 0
- prohibitedCount: 0
- visitTimeoutCount: 0
The `log` test detects problems with the behavior of the page or the server. Indicators of such problems are the number of messages logged by the browser, the aggregate size of those messages in characters, the number of rejections with abnormal HTTP statuses, the number of prohibited HTTP statuses, and the number of times the browser timed out trying to reach the page. Although log messages do not always indicate page defects, they mostly do.
menuNav
The page passed the `menuNav` test.
The `menuNav` test detects nonstandard keyboard navigation among menu items in menus that manage the focus of their menu items. Menus that use pseudofocus with the `aria-activedescendant` attribute are not tested. The test is based on WAI-ARIA recommendations.
motion
The page passed the `motion` test.
The `motion` test detects unrequested motion in a page. Accessibility standards minimally require motion to be brief, or else stoppable by the user. But stopping motion can be difficult or impossible, and, by the time a user manages to stop motion, the motion may have caused annoyance or harm. For superior accessibility, a page contains no motion until and unless the user authorizes it. The test compares five screen shots of the initially visible part of the page and assigns a score based on:
- bytes: an array of the sizes of the screen shots, in bytes
- localRatios: an array of the ratios of bytes of the larger to the smaller of adjacent pairs of screen shots
- meanLocalRatio: the mean of the ratios in the localRatios array
- maxLocalRatio: the greatest of the ratios in the localRatios array
- globalRatio: the ratio of bytes of the largest to the smallest screen shot
- pixelChanges: an array of counts of differing pixels between adjacent pairs of screen shots
- meanPixelChange: the mean of the counts in the pixelChanges array
- maxPixelChange: the greatest of the counts in the pixelChanges array
- changeFrequency: what fraction of the adjacent pairs of screen shots has pixel differences
Warning: This test waits 2.4 seconds before making its first screen shot. If a page loads more slowly than that, the test may treat it as exhibiting motion.
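The byte-size comparisons in the list above can be sketched as follows, given the sizes of the five screen shots. The pixel-level metrics would require decoding the images and are omitted from this sketch:

```javascript
// Compute the byte-size ratios that the motion test derives from the
// sizes of its five screen shots.
function byteMetrics(bytes) {
  const localRatios = [];
  for (let i = 1; i < bytes.length; i++) {
    // Ratio of the larger to the smaller of each adjacent pair.
    const [hi, lo] = [bytes[i - 1], bytes[i]].sort((a, b) => b - a);
    localRatios.push(hi / lo);
  }
  return {
    localRatios,
    meanLocalRatio: localRatios.reduce((s, r) => s + r, 0) / localRatios.length,
    maxLocalRatio: Math.max(...localRatios),
    // Ratio of the largest to the smallest screen shot overall.
    globalRatio: Math.max(...bytes) / Math.min(...bytes)
  };
}

// A static page: all five screen shots are identical in size, so every
// ratio is 1 and no motion is inferred from the byte sizes.
console.log(byteMetrics([50000, 50000, 50000, 50000, 50000]).globalRatio); // 1
```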
radioSet
The page passed the `radioSet` test.
The `radioSet` test detects nonstandard groupings of radio buttons. It defines the standard to require that two or more radio buttons with the same name, and no other radio buttons, be grouped in a `fieldset` element with a valid `legend` element.
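That grouping standard can be sketched as a predicate over a fieldset descriptor. The descriptor shape (`hasLegend`, a `radios` array of `{ name }` objects) is an assumption of this sketch:

```javascript
// A fieldset is a standard radio set if it has a legend and contains two
// or more radio buttons that all share one name, and no other radios.
function isStandardRadioSet(fieldset) {
  const radios = fieldset.radios;
  if (fieldset.hasLegend !== true || radios.length < 2) {
    return false;
  }
  const names = new Set(radios.map(r => r.name));
  return names.size === 1;
}

const good = { hasLegend: true, radios: [{ name: 'size' }, { name: 'size' }] };
const mixed = { hasLegend: true, radios: [{ name: 'size' }, { name: 'color' }] };
console.log(isStandardRadioSet(good), isStandardRadioSet(mixed)); // true false
```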
role
The page did not pass the `role` test and received a score of 2 on `role`. The details are in the JSON-format file, in the section starting with `"which": "role"`.
Summary of the details:
- roleElements: 1
- badRoleElements: 1
The `role` test detects nonstandard and confusing role assignments. It is inspired by the WAI-ARIA recommendations on roles and their authoring rules. Abstract roles and roles that are implicit in HTML elements fail the test. The `math` role has been removed because of poor adoption and exclusion from HTML5. The `img` role has accessibility uses, so it does not fail the test, although it is implicit in the HTML `img` element.
styleDiff
The page did not pass the `styleDiff` test and received a score of 39 on `styleDiff`. The details are in the JSON-format file, in the section starting with `"which": "styleDiff"`.
Summary of the details:
- h1: 2 different styles
- h2: 2 different styles
- h3: 1 style
- h5: 3 different styles
- aInline: 2 different styles
- aBlock: 10 different styles
- button: 1 style
The `styleDiff` test detects style inconsistencies among inline links, block links, buttons, and all 6 levels of headings. The principle of consistent identification requires using styles to help users classify content. For example, level-2 headings look the same, and they look different from level-1 headings. Ideally, then, for each of these element types, there would be exactly 1 style. The test considers the style properties `borderStyle`, `borderWidth`, `fontStyle`, `fontWeight`, `lineHeight`, `maxHeight`, `maxWidth`, `minHeight`, `minWidth`, `opacity`, `outlineOffset`, `outlineStyle`, `outlineWidth`, `textDecorationLine`, `textDecorationStyle`, and `textDecorationThickness`. For headings, it also considers the `fontSize` style property.
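Counting "different styles" for an element type amounts to serializing the considered properties of each element and counting distinct serializations. A sketch, using plain objects of computed-style values (an assumption of this sketch) rather than live DOM nodes:

```javascript
// Style properties considered by the styleDiff test, per the list above.
const PROPS = ['borderStyle', 'borderWidth', 'fontStyle', 'fontWeight', 'lineHeight',
  'maxHeight', 'maxWidth', 'minHeight', 'minWidth', 'opacity', 'outlineOffset',
  'outlineStyle', 'outlineWidth', 'textDecorationLine', 'textDecorationStyle',
  'textDecorationThickness'];

// Count distinct combinations of the considered properties; extraProps
// lets headings also include fontSize.
function distinctStyleCount(elements, extraProps = []) {
  const props = [...PROPS, ...extraProps];
  const seen = new Set(elements.map(el => JSON.stringify(props.map(p => el[p]))));
  return seen.size;
}

// Two h1 elements that differ only in fontWeight count as 2 different
// styles, as in the "h1: 2 different styles" summary line above.
const h1s = [
  { fontWeight: '700', lineHeight: '1.2' },
  { fontWeight: '400', lineHeight: '1.2' }
];
console.log(distinctStyleCount(h1s, ['fontSize'])); // 2
```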
tabNav
The page passed the `tabNav` test.
The `tabNav` test detects nonstandard keyboard navigation among tab elements in tab lists. Tab lists let users choose which of several content blocks to display. The Tab key moves the focus into and out of a tab list, but the arrow, Home, and End keys move the focus from tab to tab.
zIndex
The page did not pass the `zIndex` test and received a score of 54 on `zIndex`. The details are in the JSON-format file, in the section starting with `"which": "zIndex"`.
Summary of the details:
- UL: 3
- LI: 6
- DIV: 9
The `zIndex` test detects elements with non-default z indexes. Pages present difficulty for some users when they require users to perceive a third dimension (depth) in the two-dimensional display. Layers, popups, and dialogs that cover other content make it difficult for some users to interpret the content and know what parts of the content can be acted on. Layering also complicates accessibility testing. Tests for visibility of focus, for example, may fail if a focused element is covered by another element.
Testing failures
Some pages prevent some of the tests in this procedure from being performed. This may occur, for example, when a page tries to block any non-human visitor. The procedure estimates high scores when pages prevent tests, because preventing accessibility testing is itself an accessibility deficiency. Specifically:
- Measuring success is a prerequisite for achieving success, so interfering with accessibility measurement interferes with accessibility.
- Users with disabilities often rely on assistive technologies to mediate between them and web applications. Measures that interfere with automated testing are at risk of interfering with some assistive technologies, too.