Accessibility test report


Autotest a11y, version 7

Score: 140


In a comparison of the accessibility of web pages, Passio received a score of 140 (where 0 is the best possible score). This report explains how that score was computed.

The pages were tested with version 7 of the a11y procedure of Autotest. A total of 427 tests (of which 411 were bundled into 3 packages) were performed on each page. The resulting data were saved in a JSON-format file.

These tests—like all tests—are fallible. The failures described below merit investigation as potential opportunities for improved accessibility.


The packages’ and tests’ contributions to the score are itemized below.


Test packages

Most of the tests belong to the following three accessibility test packages created by other specialists.


The page did not pass the axe test and received a score of 13 on axe. The details are in the JSON-format file, in the section starting with "which": "axe". There was at least one test failure.

Axe is an open-source package sponsored by accessibility consulting firm Deque. The axe test performs all 138 default tests in the Axe package.


The page did not pass the ibm test and received a score of 10 on ibm. The details are in the JSON-format file, in the section starting with "which": "ibm". There was at least one test failure.

Equal Access is an open-source package sponsored by IBM Corporation. The ibm test performs all 163 default tests in the Equal Access package.


The page did not pass the wave test and received a score of 72 on wave. The details are in the JSON-format file, in the section starting with "which": "wave". There was at least one test failure.

WAVE is a proprietary package owned by WebAIM, a program of the Institute for Disability Research, Policy, and Practice at Utah State University. The wave test performs all 110 default tests in the WAVE package.

Custom tests

The tests in the above packages are designed to detect some, not all, accessibility problems. The procedure includes the following custom tests that supplement the tests of the packages.


The page passed the bulk test.

The bulk test counts the initially visible elements in a page. A page with a large count tends to be complex and busy, frustrating some users, especially if they have visual or motor disabilities, as they try to determine what the page is about, whether it is relevant, and how to find a specific thing in it.

When the count exceeds 250, the procedure begins to assign a non-zero score.
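As an illustration, the threshold rule can be sketched as follows. The report does not state the exact scoring formula, so the linear per-element weight below is an assumption, not the procedure's real formula.

```python
def bulk_score(visible_count: int, threshold: int = 250,
               per_element: float = 0.1) -> float:
    """Hypothetical bulk score: zero up to the threshold, then a
    fixed fraction of a point per excess initially visible element."""
    return max(0, visible_count - threshold) * per_element
```

A page with 250 or fewer initially visible elements scores 0; each element beyond 250 adds to the score.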


The page passed the embAc test.

The embAc test detects improper embedding of interactive elements (links, buttons, inputs, and select lists) within links or buttons. Such embedding violates the HTML standard, complicates user interaction, and creates risks of error. It becomes non-obvious what a user will activate with a click.
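The kind of nesting the test looks for can be sketched with Python's standard HTML parser. The element set and the depth-counting logic below are illustrative assumptions, not the test's actual implementation.

```python
from html.parser import HTMLParser

# Interactive elements that must not be embedded in links or buttons
INTERACTIVE = {"a", "button", "input", "select"}

class EmbeddingChecker(HTMLParser):
    """Flags interactive elements nested inside <a> or <button>."""
    def __init__(self):
        super().__init__()
        self.container_depth = 0  # count of open <a>/<button> ancestors
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if self.container_depth and tag in INTERACTIVE:
            self.violations.append(tag)
        if tag in ("a", "button"):
            self.container_depth += 1

    def handle_endtag(self, tag):
        if tag in ("a", "button") and self.container_depth:
            self.container_depth -= 1

checker = EmbeddingChecker()
checker.feed('<a href="/x"><button>Go</button></a><button>OK</button>')
print(checker.violations)  # the button inside the link is flagged
```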


The page passed the focAll test.

The focAll test detects discrepancies between the number of focusable elements and the number of elements that pressing the Tab key actually focuses. Navigating with the Tab key normally moves the focus and does nothing else. If Tab navigation focuses elements that are not focusable, or fails to focus elements that are, it complicates navigation and may make the page unusable for people who must use only a keyboard (not a mouse) or a keyboard-emulating assistive device to navigate.


The page passed the focInd test.

The focInd test detects focusable elements without standard focus indicators. An outline is the standard and most recognizable focus indicator; as you repeatedly press the Tab key, the outline moves through the page. Other focus indicators are more likely to be misunderstood. For example, underlines may be mistaken for selection indicators or links. An absent focus indicator prevents the user from knowing what a keyboard action will act on.


The page did not pass the focOp test and received a score of 8 on focOp. The details are in the JSON-format file, in the section starting with "which": "focOp".

Summary of the details:

The focOp test detects discrepancies between Tab-focusability and operability. The standard practice is to make focusable elements operable and vice versa. If focusable elements are not operable, users are likely to be surprised that nothing happens when they try to operate such elements. If operable elements are not focusable, users depending on keyboard navigation are prevented from operating those elements. The test considers an element operable if it has a non-inherited pointer cursor and is not a LABEL element, or has an operable tag name (A, BUTTON, IFRAME, INPUT, SELECT, or TEXTAREA), or has an onclick attribute. The test considers an element Tab-focusable if its tabIndex property has the value 0.
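The stated criteria translate directly into predicates. The dict-based element model below is an assumed data shape for illustration; the real test inspects live DOM elements.

```python
OPERABLE_TAGS = {"A", "BUTTON", "IFRAME", "INPUT", "SELECT", "TEXTAREA"}

def is_operable(el: dict) -> bool:
    """Operable: non-inherited pointer cursor (and not a LABEL),
    or an operable tag name, or an onclick attribute."""
    has_own_pointer = (el.get("cursor") == "pointer"
                       and not el.get("cursorInherited", False))
    return ((has_own_pointer and el["tagName"] != "LABEL")
            or el["tagName"] in OPERABLE_TAGS
            or "onclick" in el.get("attributes", {}))

def is_tab_focusable(el: dict) -> bool:
    """Tab-focusable: the tabIndex property has the value 0."""
    return el.get("tabIndex") == 0

def focop_violations(elements):
    """Elements operable but not Tab-focusable, or vice versa."""
    return [el for el in elements
            if is_operable(el) != is_tab_focusable(el)]
```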


The page passed the hover test.

The hover test detects unexpected effects of hovering. The normal purpose of hovering is to show the user which element is currently being actually or effectively hovered over and would therefore be the target of a mouse click. When hovering does more than that, the additional effects can confuse or startle users, especially those without precise mouse control. The test detects whether hovering makes elements visible, changes the opacities of elements, affects the opacities of elements by changing the opacities of their ancestors, or fails to reach elements. Only visible elements that have an A, BUTTON, or LI tag name, or have onmouseenter or onmouseover attributes, are considered triggers of such effects when hovered over. The effects of hovering are inspected on the descendants of the grandparent of the trigger if the trigger has the tag name A or BUTTON, or otherwise on the descendants of the trigger. The only elements counted as being made visible by hovering are those with the tag name A, BUTTON, INPUT, or SPAN, and those with role="menuitem" attributes.
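The trigger and effect criteria in this description can be sketched as predicates. The dict-based element model is an assumption for illustration only.

```python
TRIGGER_TAGS = {"A", "BUTTON", "LI"}
MADE_VISIBLE_TAGS = {"A", "BUTTON", "INPUT", "SPAN"}

def is_hover_trigger(el: dict) -> bool:
    """A visible element with an A, BUTTON, or LI tag name, or with an
    onmouseenter or onmouseover attribute, counts as a hover trigger."""
    attrs = el.get("attributes", {})
    return el.get("visible", False) and (
        el["tagName"] in TRIGGER_TAGS
        or "onmouseenter" in attrs
        or "onmouseover" in attrs)

def counts_as_made_visible(el: dict) -> bool:
    """Only these elements count as made visible by hovering."""
    return (el["tagName"] in MADE_VISIBLE_TAGS
            or el.get("attributes", {}).get("role") == "menuitem")
```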


The page did not pass the labClash test and received a score of 8 on labClash. The details are in the JSON-format file, in the section starting with "which": "labClash".

Summary of the details:

The labClash test detects defects in the labeling of buttons, non-hidden inputs, select lists, and text areas. The defects include missing labels and redundant labels. Redundant labels are labels that are superseded by other labels. Explicit and implicit (wrapped) labels are additive, not conflicting.
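One way to model the clash logic is to count labeling mechanisms, treating explicit and implicit labels together as a single additive mechanism. This is an interpretation of the description above, not the test's actual code, and the element model is an assumed data shape.

```python
def label_status(el: dict) -> str:
    """Classify a form control's labeling. The element is modeled as a
    dict with an 'attributes' dict and label counts (assumed shape)."""
    attrs = el.get("attributes", {})
    mechanisms = 0
    if "aria-labelledby" in attrs:
        mechanisms += 1
    if "aria-label" in attrs:
        mechanisms += 1
    # Explicit (for=) and implicit (wrapped) labels are additive,
    # so together they count as one mechanism, not two.
    if el.get("explicitLabels", 0) or el.get("implicitLabels", 0):
        mechanisms += 1
    if mechanisms == 0:
        return "unlabeled"
    return "well labeled" if mechanisms == 1 else "redundantly labeled"
```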


The page passed the linkUl test.

The linkUl test detects failures to underline inline links. Underlining and color are the traditional style properties that identify links. Collections of links in blocks can sometimes be recognized without underlines, but inline links are difficult or impossible to distinguish visually from surrounding text if not underlined. Underlining inline links only on hover provides an indicator valuable only to mouse users, and even they must traverse the text with a mouse merely to discover which passages are links.

Warning: This test classifies links as inline or block. Some links classified as inline may not look like inline links to users.


The page did not pass the log test and received a score of 7 on log. The details are in the JSON-format file, in the section starting with "which": "log".

Summary of the details:

The log test detects problems with the behavior of the page or the server. Indicators of such problems are the number of messages logged by the browser, the aggregate size of those messages in characters, the number of rejections with abnormal HTTP statuses, the number of prohibited HTTP statuses, and the number of times the browser timed out trying to reach the page. Although log messages do not always indicate page defects, they mostly do.
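The five indicators might be aggregated as a weighted sum. The weights below are invented for illustration; the report does not disclose the real formula.

```python
def log_score(message_count: int, message_chars: int,
              abnormal_rejections: int, prohibited_statuses: int,
              timeouts: int,
              weights=(0.05, 0.001, 1.0, 3.0, 10.0)) -> int:
    """Hypothetical weighted sum of the five log indicators."""
    indicators = (message_count, message_chars, abnormal_rejections,
                  prohibited_statuses, timeouts)
    return round(sum(w * v for w, v in zip(weights, indicators)))
```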


The page passed the menuNav test.

The menuNav test detects nonstandard keyboard navigation among menu items in menus that manage the focus of their menu items. Menus that use pseudofocus with the aria-activedescendant attribute are not tested. The test is based on WAI-ARIA recommendations.


The page passed the motion test.

The motion test detects unrequested motion in a page. Accessibility standards minimally require motion to be brief, or else stoppable by the user. But stopping motion can be difficult or impossible, and, by the time a user manages to stop motion, the motion may have caused annoyance or harm. For superior accessibility, a page contains no motion until and unless the user authorizes it. The test compares five screen shots of the initially visible part of the page and assigns a score based on:

  1. bytes: an array of the sizes of the screen shots, in bytes
  2. localRatios: an array of the ratios of bytes of the larger to the smaller of adjacent pairs of screen shots
  3. meanLocalRatio: the mean of the ratios in the localRatios array
  4. maxLocalRatio: the greatest of the ratios in the localRatios array
  5. globalRatio: the ratio of bytes of the largest to the smallest screen shot
  6. pixelChanges: an array of counts of differing pixels between adjacent pairs of screen shots
  7. meanPixelChange: the mean of the counts in the pixelChanges array
  8. maxPixelChange: the greatest of the counts in the pixelChanges array
  9. changeFrequency: the fraction of adjacent pairs of screen shots that differ in at least one pixel

Warning: This test waits 2.4 seconds before taking its first screen shot. If a page loads more slowly than that, the test may treat it as exhibiting motion.
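The nine measures can be computed from two arrays: the five screen-shot byte sizes and the four adjacent-pair pixel-change counts. The sketch below assumes that data shape; how the measures combine into the final score is not stated here.

```python
def motion_metrics(byte_sizes, pixel_changes):
    """Compute the nine measures listed above from screen-shot byte
    sizes and adjacent-pair pixel-change counts."""
    local_ratios = [max(a, b) / min(a, b)
                    for a, b in zip(byte_sizes, byte_sizes[1:])]
    return {
        "bytes": byte_sizes,
        "localRatios": local_ratios,
        "meanLocalRatio": sum(local_ratios) / len(local_ratios),
        "maxLocalRatio": max(local_ratios),
        "globalRatio": max(byte_sizes) / min(byte_sizes),
        "pixelChanges": pixel_changes,
        "meanPixelChange": sum(pixel_changes) / len(pixel_changes),
        "maxPixelChange": max(pixel_changes),
        "changeFrequency": (sum(1 for c in pixel_changes if c)
                            / len(pixel_changes)),
    }
```

For a motionless page all five screen shots are identical, so every ratio is 1 and every pixel-change count is 0.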


The page passed the radioSet test.

The radioSet test detects nonstandard groupings of radio buttons. It defines the standard to require that two or more radio buttons with the same name, and no other radio buttons, be grouped in a fieldset element with a valid legend element.
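The stated standard can be checked with two passes over a page's radio buttons. Each radio is modeled as a dict with a name and the id of its enclosing fieldset (None if ungrouped); this model, and the shortcut of assuming each fieldset id stands for a fieldset with a valid legend, are illustrative assumptions.

```python
from collections import defaultdict

def radio_grouping_ok(radios) -> bool:
    """True if every name forms a group of two or more radios in exactly
    one fieldset, and no fieldset mixes radios of different names."""
    by_name = defaultdict(list)
    for r in radios:
        by_name[r["name"]].append(r)
    for group in by_name.values():
        fieldsets = {r["fieldset"] for r in group}
        # a group needs 2+ radios, all in one real fieldset
        if len(group) < 2 or len(fieldsets) != 1 or None in fieldsets:
            return False
    by_fieldset = defaultdict(set)
    for r in radios:
        by_fieldset[r["fieldset"]].add(r["name"])
    # a fieldset must contain no radio buttons with other names
    return all(len(names) == 1 for names in by_fieldset.values())
```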


The page passed the role test.

The role test detects nonstandard and confusing role assignments. It is inspired by the WAI-ARIA recommendations on roles and their authoring rules. Abstract roles and roles that are implicit in HTML elements fail the test. The math role also fails the test, because of poor adoption and its exclusion from HTML5. The img role, although implicit in the HTML img element, has accessibility uses and so does not fail the test.


The page did not pass the styleDiff test and received a score of 19 on styleDiff. The details are in the JSON-format file, in the section starting with "which": "styleDiff".

Summary of the details:

The styleDiff test detects style inconsistencies among inline links, block links, buttons, and all six levels of headings. The principle of consistent identification requires using styles to help users classify content: all level-2 headings, for example, should look the same as one another and different from level-1 headings. Ideally, then, each of these element types would have exactly one style. The test considers the style properties borderStyle, borderWidth, fontStyle, fontWeight, lineHeight, maxHeight, maxWidth, minHeight, minWidth, opacity, outlineOffset, outlineStyle, outlineWidth, textDecorationLine, textDecorationStyle, and textDecorationThickness. For headings, it also considers the fontSize style property.
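Counting distinct combinations of the considered properties captures the ideal of one style per element type. The computed-style dicts below are an assumed data shape for illustration.

```python
PROPS = ("borderStyle", "borderWidth", "fontStyle", "fontWeight",
         "lineHeight", "maxHeight", "maxWidth", "minHeight", "minWidth",
         "opacity", "outlineOffset", "outlineStyle", "outlineWidth",
         "textDecorationLine", "textDecorationStyle",
         "textDecorationThickness")

def distinct_styles(elements, extra_props=()) -> int:
    """Count distinct combinations of the considered style properties
    among elements of one type; ideally the count is 1. Pass
    extra_props=("fontSize",) for headings."""
    props = PROPS + tuple(extra_props)
    return len({tuple(el.get(p) for p in props) for el in elements})
```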


The page passed the tabNav test.

The tabNav test detects nonstandard keyboard navigation among tab elements in tab lists. Tab lists let users choose which of several content blocks to display. The Tab key moves the focus into and out of a tab list, but the arrow, Home, and End keys move the focus from tab to tab.


The page did not pass the zIndex test and received a score of 3 on zIndex. The details are in the JSON-format file, in the section starting with "which": "zIndex".

Summary of the details:

The zIndex test detects elements with non-default z-index values. Pages present difficulty for some users when they require users to perceive a third dimension (depth) in the two-dimensional display. Layers, popups, and dialogs that cover other content make it difficult for some users to interpret the content and know which parts of it can be acted on. Layering also complicates accessibility testing. Tests for visibility of focus, for example, may fail if a focused element is covered by another element.
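Detection itself is a simple filter over computed styles. The dict model of elements below is an assumption for illustration; the real test reads live computed styles.

```python
def zindex_offenders(elements):
    """Elements whose computed z-index is not the default 'auto'."""
    return [el for el in elements if el.get("zIndex", "auto") != "auto"]
```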

Testing failures

Some pages prevent some of the tests in this procedure from being performed. This may occur, for example, when a page tries to block any non-human visitor. The procedure estimates high scores when pages prevent tests, because preventing accessibility testing is itself an accessibility deficiency. Specifically: