AOM at me bro
Phase 1: Accessible Properties
In the new API, ARIA attributes are exposed as properties directly on DOM elements, so updates made in JavaScript are accurately reflected in the HTML. These properties will also be available on the ShadowRoot interface, so custom elements have access to them.
el.role = 'button'
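Since no environment outside the browser implements this reflection, here is a minimal stand-in class (not the real DOM Element) that sketches the idea: writing the IDL property writes through to the corresponding content attribute.

```javascript
// Stub illustrating ARIA property reflection (a sketch, not the browser's
// implementation): setting el.role updates the role="..." attribute.
class StubElement {
  constructor() { this.attrs = new Map(); }
  setAttribute(name, value) { this.attrs.set(name, String(value)); }
  getAttribute(name) { return this.attrs.get(name) ?? null; }
  get role() { return this.getAttribute('role'); }
  set role(value) { this.setAttribute('role', value); }
  get ariaLabel() { return this.getAttribute('aria-label'); }
  set ariaLabel(value) { this.setAttribute('aria-label', value); }
}

const el = new StubElement();
el.role = 'button';            // reflects to role="button"
el.ariaLabel = 'Close dialog'; // reflects to aria-label="Close dialog"
console.log(el.getAttribute('role'));       // "button"
console.log(el.getAttribute('aria-label')); // "Close dialog"
```

In a supporting browser, the same property writes work on a real element, with no string-based setAttribute calls needed.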
Phase 2: User Action Events
The second phase will address handling input events triggered specifically by assistive technologies. This will enable a user to interact with a page through alternate input controls like voice commands. Currently, there is only partial browser support for interacting with native HTML elements via accessible actions. A developer can, for example, implement scrolling for an assistive device via the scrollIntoView method. However, this stopgap doesn't give you full control over handling a semantic event via a web API. Moreover, the current API doesn't yet account for fairly common actions like dismiss for exiting dialogs or increment/decrement for moving a slider. User action events will address these gaps by providing specific event listeners like actionIncrement, actionDecrement and actionDismiss.
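A sketch of what handling these events could look like. The event names (actionIncrement, actionDecrement) come from the draft and may change; a plain EventTarget stands in here for a DOM element so the pattern is visible outside a browser.

```javascript
// Sketch of handling proposed AOM user-action events for a custom slider.
// Event names are from the draft proposal and may change.
const slider = new EventTarget(); // stand-in for the slider element
let value = 50;
const step = 10;

slider.addEventListener('actionIncrement', () => {
  value = Math.min(100, value + step);
});
slider.addEventListener('actionDecrement', () => {
  value = Math.max(0, value - step);
});

// An assistive technology issuing an "increment" command would dispatch:
slider.dispatchEvent(new Event('actionIncrement'));
console.log(value); // 60
```

The point is that the page receives one semantic event ("increment") regardless of whether it originated from a keyboard, a switch device, or a voice command.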
Admittedly, the browser needs to know that input is coming from an assistive technology to take advantage of this new API. This can be problematic for users who choose not to disclose their disability to the browser. To protect the privacy of these users, a new user permission dialog will be triggered before an AOM event listener is fully captured. This way, users can choose whether to disclose their status to the browser in real time.
Phase 3: Virtual Accessibility Nodes
Phase 3 of the spec will introduce the concept of virtual accessibility nodes, so developers can modify the semantics of nodes in the accessibility tree. These virtual nodes are not associated directly with any DOM element and will only be exposed to assistive technologies. As a result, developers have more granular control over the accessibility of custom interfaces. One compelling use case is the canvas element. In current browser implementations, canvas elements, which are used to power complex WebGL components, are inaccessible because there is no standard way to mark up content inside a canvas object. With virtual nodes, developers can express the content of a canvas by building parent/child relationships with other virtual nodes to denote its position and dimensions. 🤯
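Since no browser ships virtual accessibility nodes yet, the shape of the tree the draft describes can only be sketched. The class below is a hypothetical stand-in (not the draft's actual interface) that models the parent/child relationships and the role, label, and geometry a virtual node would carry.

```javascript
// Hypothetical model of a virtual accessibility node tree for a canvas-backed
// chart. Property names (role, label, x/y/width/height) mirror what the draft
// describes, but this class is an illustration, not a real API.
class VirtualAccessibleNode {
  constructor(props = {}) {
    Object.assign(this, { role: null, label: null, children: [] }, props);
  }
  appendChild(child) {
    this.children.push(child);
    return child;
  }
}

// Expose the content of an otherwise-opaque canvas chart:
const chartRoot = new VirtualAccessibleNode({ role: 'group', label: 'Bar chart' });
chartRoot.appendChild(new VirtualAccessibleNode({
  role: 'img',
  label: 'Bar 1 of 3: value 42',
  // Geometry lets assistive technologies target the region on screen:
  x: 10, y: 20, width: 40, height: 120,
}));
console.log(chartRoot.children.length); // 1
```

Only assistive technologies would ever see this tree; sighted users still get the rendered canvas pixels.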
Phase 4: Computed Accessibility Tree
The last phase of the spec will introduce a computed accessibility tree API. Through this, developers gain full access to and control over the accessibility tree and can interact with it programmatically. (Finally!) By directly querying and manipulating the tree, developers can verify that an accessibility property was successfully applied; at the moment, there is no way to do this except through manual trial and error.
let computed = await getComputedAccessibleNode(myListItem);
computed.role; // listitem
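The same call doubles as a test helper. The sketch below (assertRole is a hypothetical helper, not part of the proposal) feature-detects getComputedAccessibleNode before using it, so it degrades gracefully in browsers that don't implement the draft.

```javascript
// Sketch: asserting an element's computed role in a test, with feature
// detection. getComputedAccessibleNode is the draft API shown above;
// assertRole is an illustrative helper name.
async function assertRole(element, expectedRole) {
  if (typeof getComputedAccessibleNode !== 'function') {
    return false; // API not available in this browser; skip the check.
  }
  const computed = await getComputedAccessibleNode(element);
  console.assert(
    computed.role === expectedRole,
    `expected role "${expectedRole}", got "${computed.role}"`
  );
  return true;
}
```

A test suite could call assertRole(myListItem, 'listitem') and catch regressions where a role is silently dropped, something string-based attribute checks cannot guarantee.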
This is useful not only as a way to prevent errors but also as a means of feature detection in browsers. With a computed tree structure, developers can enhance their tests so that the semantics of an element are accurately asserted, beyond just checking the accuracy of a string. Access to the accessibility tree also means more control over how the tree is structured and updated, which translates to a better user experience for assistive device users.
AOM’s all around