Lighthouse

Automated accessibility testing with Google LHCI. Here we're going to tell you how Code Enigma do it using our local development environments.

Lighthouse is Google's website performance and accessibility testing tool, built into the Google Chrome web browser. It provides free testing for any web page across a number of criteria. LighthouseCI, or LHCI for short, is the command-line version of Lighthouse, allowing you to run those tests from a terminal and therefore in a scripted environment. This blog post shows you how Code Enigma implemented it.

At Code Enigma we've been making web accessibility a priority for a while. We offer audits, we help people meet the legal requirements, we advise on best practice, and so on. Indeed, a number of the team are on Deque University courses on their way to International Association of Accessibility Professionals certification. One of the missing pieces of the puzzle for us has been automated accessibility testing. It's been on our roadmap for literally years, but it's not that simple to implement in a complex and custom CI stack like ours.

However, we've finally done it! Our local development stack and our development environments can now run Google's LHCI tests as standard. I'll explain what's involved and how we did it.

Whistle-stop tour of LHCI

The first thing to know about LHCI is there are two components, a client and a server. The great thing is the server simply records test results over time and provides an interface to browse them, so in order to do some basic local testing you can forget the server component entirely. This immediately makes life easier. (If you want to try a server then there's a container on Docker Hub you can grab and run. I did this just to kick the tyres.)
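
If you do fancy trying the server yourself, something along these lines should get it running locally. The image name, port and volume are from my memory of the Docker Hub page rather than copied from the canonical instructions, so treat it as a sketch:

docker volume create lhci-data
docker run -d --name lhci-server -p 9001:9001 -v lhci-data:/data patrickhulce/lhci-server

Once it's up, the reporting interface should appear on http://localhost:9001.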

Next, you need to know that the LHCI client works via a series of commands you run in sequence to produce a complete test run. Typically the two most important ones are:

  • lhci collect - this actually runs the tests you want to run
  • lhci upload - this collates the results into a report and saves them somewhere

So once LHCI is up and running on a machine you can test any URL with those two commands and get a Lighthouse report back. The specifics of how we use this are coming.
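
Out of the box, a bare-bones run against a public site looks something like this (the flags shown are illustrative rather than the exact set we use, and example.com is just a placeholder):

lhci collect --url="https://example.com"
lhci upload --target=filesystem --outputDir=./reports

The first command runs Lighthouse against the URL; the second writes the resulting report into a local reports directory.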

Installing the components

So we need to be able to install the client. We do all infrastructure provisioning, including local development containers, with our ce-provision product these days, which - like LHCI - is free to download and use. It's based on the popular orchestration tool, Ansible, which has a concept of "roles" to do different collections of "tasks", so I needed to create an Ansible "role" in ce-provision containing the "tasks" to install LHCI on any machine I point it at.

Fortunately, the same people who packaged the LHCI server container I linked to above also made a client image. The image itself is no good to me; however, the Dockerfile on GitHub that builds that image contains everything I need for a basic LHCI client install, namely:

  • Chrome
  • npm (the Node package manager)
  • @lhci/cli (installed via npm)
  • lighthouse (also installed via npm)

I didn't do things quite the same way as them, because with Ansible there are tools to make certain actions, such as managing the Google code repository for Debian, more straightforward. We'll look at the finished role in a moment. The TL;DR at this point is that I installed the four packages mentioned above via an Ansible role on a local Docker container. Then I tried to run a simple `lhci collect` command against example.com and it worked fine. So I tried it against a local development build of Drupal 8 with a self-signed certificate and it timed out on me because it couldn't start Chrome to run the tests. Problem number one!

After a fair bit of trial and error, and no small amount of Google searching, I found this GitHub issue (yes, the last comment is me). It turns out you pretty much have to run Chrome in "headful" mode, which means within a desktop environment. But I hadn't installed any of the software required for that yet, so there were some more dependencies to unravel.

Getting Chrome to run in a "headful" state on a Debian Linux container with no desktop installed means meeting a particular set of requirements. Fortunately, I found this post on StackOverflow which describes them well. If you look at my `lhci` role in ce-provision you can see that just before it installs Chrome it also installs the list of dependencies from that post and starts up `Xvfb`, the virtual framebuffer X server that provides the display Chrome needs.
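
Pulling that together, the install side of the role boils down to tasks along these lines. This is a simplified, paraphrased sketch rather than the real ce-provision code: the Xvfb dependency list is trimmed and the repository signing key is skipped.

- name: Install Xvfb (the full X dependency list from that StackOverflow answer is trimmed here).
  ansible.builtin.apt:
    pkg:
      - xvfb
    state: present

# The real role also imports Google's apt signing key before adding the repository.
- name: Add the Google Chrome apt repository.
  ansible.builtin.apt_repository:
    repo: "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main"
    state: present

- name: Install Chrome and npm.
  ansible.builtin.apt:
    pkg:
      - google-chrome-stable
      - npm
    state: present

- name: Install the LHCI client and Lighthouse globally via npm.
  community.general.npm:
    name: "{{ item }}"
    global: true
  with_items:
    - "@lhci/cli"
    - "lighthouse"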

Making LHCI available on containers

Now we have a role in ce-provision to install LHCI wherever we want it, we need to make it available to developers using the ce-dev product. This is a local client tool for macOS and Linux that allows you to easily create custom Docker-based development environments on a per-project basis. It leverages ce-provision to install software on the local containers, so now the role is in ce-provision it is automatically available to ce-dev.

To make life easier, we pack a couple of "template" container images: ready-to-run web server containers for blank projects and also for Drupal. The templates get built by GitHub Actions whenever we push a release of ce-dev, and we have made LHCI standard in these template images by adding the role to their provision.yml file, which contains the provisioning instructions for ce-provision. By way of an example, click this link to see the role in the Drupal 8 template.

If you use ce-dev but with your own containers, or you use ce-provision alone and not ce-dev, you can add `lhci` to any server the same way: just import the role into your YAML file as above.
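
Whether it's a template provision.yml or your own playbook, conceptually the change is just another entry in the roles list, something like this (the other role names below are purely illustrative, not copied from the real template):

roles:
  - nginx # Illustrative - whatever your container already provisions.
  - php-fpm # Illustrative.
  - lhci # Installs Chrome, Xvfb, npm, the LHCI client and Lighthouse.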

Executing a test

I mentioned near the start that running a test basically means executing two commands in LHCI - collect and upload. However, before we can run our tests the desktop environment needs to be up and running. To do this you need to run two commands:

Xvfb -ac :99 -screen 0 1280x1024x16 &
export DISPLAY=:99

The first one fires up the virtual display in the background with a display number of 99 (a deliberately big number, to avoid clashing with any real desktops, which are typically numbered between 0 and 10). The second tells your terminal session it can expect to find a display to use at number 99. Once you've run those two commands, Chrome should be able to run, which in turn means LHCI should be able to start it.

Now, in order to execute tests you need to pass LHCI some settings. You can do this on the command line, but I created a configuration file instead. Details about the LHCI configuration file can be found here. Because we're working with Ansible and YAML anyway, I chose the YAML format for the config file too, creating a `lighthouserc.yml` file that looks like this:

ci:
  collect:
    url:
      - "https://www.test.local"
      - "https://www.test.local/user/login"
    settings:
      chromeFlags:
        - "--no-sandbox"
        - "--ignore-certificate-errors"
    numberOfRuns: 1
    headful: true
  upload:
    outputDir: "./reports"
    target: "filesystem"

Unpicking these settings, let's start with `collect`. The url list is fairly obvious: these are the addresses on my local development container I want to test. The settings, particularly chromeFlags, are what LHCI passes through to Lighthouse as runtime settings for Chrome. Mainly, with the local environment we need to ignore certificate errors, because the local certificates are self-signed and thus "invalid" as far as Chrome is concerned. A full list of possible settings is available via the documentation. The headful line is important: as per the GitHub issue above, without running in "headful" mode the local self-signed certificate causes LHCI to fail, because, for reasons nobody ever really explained, the flag to ignore certificate errors doesn't work in "headless" mode.

As for `upload`, the settings simply control the saving of the Lighthouse report. All I'm doing here is saying save the report locally in a 'reports' directory; LHCI will create the directory if it doesn't exist. You can do all kinds of clever things with `upload` - more information on the `upload` command here - but we're keeping it simple for now.

And that's it. I put lighthouserc.yml in the root of my Git repository and then, when I run `lhci collect && lhci upload`, I get a reports directory in my repo containing my report in both JSON and HTML formats.
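
Putting the pieces together, a full manual run from the repository root on one of our containers is just:

Xvfb -ac :99 -screen 0 1280x1024x16 &
export DISPLAY=:99
lhci collect && lhci upload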

Automated testing

On to the last piece of the puzzle: how to automate all this? This is where we bring in the third Code Enigma product based on Ansible, ce-deploy. This set of Ansible roles exists purely to deploy code. We use it to push code to our different hosted environments, but we also use it with ce-dev to deploy code to local containers. So it's perfectly placed for automating testing.

I created a role called `lhci_run` to execute tests automatically, which you can find here. If you look at the tasks, it does the following:

  • Checks LHCI is installed
  • Checks Xvfb (the desktop) is running - and starts it if it isn't
  • Uses a template to build the `lighthouserc.yml` file based on the provided Ansible variables
  • Runs `lhci collect` with the DISPLAY variable set to :99 as per the Xvfb start-up command
  • Runs `lhci upload` to save the reports in the specified location
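
As a rough illustration of those tasks, here is a heavily simplified and paraphrased sketch rather than a copy of the real role (the variable and template names are mine, not ce-deploy's):

- name: Generate lighthouserc.yml from a template using the provided variables.
  ansible.builtin.template:
    src: lighthouserc.yml.j2
    dest: "{{ deploy_path }}/lighthouserc.yml"

- name: Run the LHCI tests.
  ansible.builtin.command: lhci collect
  args:
    chdir: "{{ deploy_path }}"
  environment:
    DISPLAY: ":99"

- name: Save the LHCI reports.
  ansible.builtin.command: lhci upload
  args:
    chdir: "{{ deploy_path }}"
  environment:
    DISPLAY: ":99"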

Remember our ce-dev containers already have LHCI installed. You can see a sample deploy.yml file for ce-dev here, but all our developers need to do is add something like this to their local deploy.yml file to make LHCI run on every build:

  vars:
  ...
    - lhci_run:
        test_urls:
          - "https://www.test.local"
          - "https://www.test.local/user/login"
  ...
  roles:
    - _init # Sets some variables the deploy scripts rely on.
    - composer # Composer install step.
    - database_backup # This is still needed to generate credentials.
    - config_generate # Generates settings.php
    - database_apply # Run drush updb and config import.
    - lhci_run # <--- WE ADDED THIS - run LHCI on the application
    - _exit # Some common housekeeping.

And that's it, right at the end of each execution of `ce-dev deploy` they get a new LHCI report in the location they specified, or reports/ISO_DATETIME by default. And of course the same ce-deploy configuration can be used on development and/or QA environment servers and become part of the standard deployment path.

If you're interested in using our CI tools and you're not sure where to start...


Contact us