puppeteer/test
Jack Franklin 3e76554fcb
chore: fix async dialog specs when they fail (#5859)
When this test failed, it would prevent any future tests from running. This
was because the `expect` call within the `page.on` callback would throw an
error, which triggered an unhandled promise rejection that caused the test
framework to stop.

The fundamental issue here is making `expect` calls within callbacks. They
are brittle both because the errors they throw are not caught by the test
framework and because you have no guarantee that they will run: if the
callback is never executed, you don't find out.

Although it's slightly more code, using a stub is the better approach. Not
only can we assert that the stub was called, we can also make synchronous
`expect` calls that Mocha will report properly if they fail.
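
For illustration, the pattern looks roughly like this (a sketch assuming a
stubbing library such as sinon; the exact assertions in dialog.spec.js may
differ):

```
const sinon = require('sinon');

it('should allow accepting prompts', async () => {
  const { page } = getTestState();

  // Record the dialog via a stub rather than asserting inside the callback.
  const onDialog = sinon.stub().callsFake((dialog) => {
    dialog.accept('answer!');
  });
  page.on('dialog', onDialog);

  const result = await page.evaluate(() => prompt('question?'));

  // Synchronous assertions that Mocha reports properly if they fail.
  expect(onDialog.callCount).toEqual(1);
  expect(result).toBe('answer!');
});
```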

Before this change, running the tests (with this one made to fail on
purpose) would cause all test execution to stop:

```
> puppeteer@3.0.4-post unit /Users/jacktfranklin/src/puppeteer
> mocha --config mocha-config/puppeteer-unit-tests.js

  .(node:69580) UnhandledPromiseRejectionWarning: Error: expect(received).toBe(expected) // Object.is equality

Expected: "yes."
Received: ""
    at Page.<anonymous> (/Users/jacktfranklin/src/puppeteer/test/dialog.spec.js:42:37)
    [snip]
(node:69580) UnhandledPromiseRejectionWarning: Unhandled promise rejection ... [snip]
```

But with this change, the rest of the tests run:

```
> puppeteer@3.0.4-post unit /Users/jacktfranklin/src/puppeteer
> mocha --config mocha-config/puppeteer-unit-tests.js

  Page.Events.Dialog
    ✓ should fire
    1) should allow accepting prompts
    ✓ should dismiss the prompt

  2 passing (2s)
  1 failing

  1) Page.Events.Dialog
       should allow accepting prompts:
     Error: expect(received).toBe(expected) // Object.is equality

Expected: "yes."
Received: ""
      at Context.<anonymous> (test/dialog.spec.js:53:35)
      at processTicksAndRejections (internal/process/task_queues.js:94:5)
```

This is much better because one failing test now doesn't stop the rest
of the test suite.

This probably isn't the only instance of this pattern in the codebase, so I
propose that, as we encounter them, we fix them using this commit as the
template.
2020-05-14 11:34:22 +02:00
assets chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
fixtures chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
golden-chromium test: add page.screenshot viewport clipping test (#5079) 2019-10-24 14:05:13 +02:00
golden-firefox test: add page.screenshot viewport clipping test (#5079) 2019-10-24 14:05:13 +02:00
accessibility.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
assert-coverage-test.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
browser.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
browsercontext.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
CDPSession.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
chromiumonly.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
click.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
cookies.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
coverage-utils.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
coverage.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
defaultbrowsercontext.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
dialog.spec.js chore: fix async dialog specs when they fail (#5859) 2020-05-14 11:34:22 +02:00
diffstyle.css Implement FrameManager 2017-06-21 14:11:52 -07:00
elementhandle.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
emulation.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
evaluation.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
fixtures.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
frame.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
golden-utils.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
headful.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
ignorehttpserrors.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
input.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
jshandle.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
keyboard.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
launcher.spec.js chore: fetch Firefox from JSON source instead of RegExp (#5864) 2020-05-13 15:48:39 +02:00
mocha-utils.js Warn when given unsupported product name. (#5845) 2020-05-12 10:30:24 +01:00
mouse.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
navigation.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
network.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
oopif.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
page.spec.js chore: restore page.setUserAgent test (#5868) 2020-05-14 10:24:30 +01:00
queryselector.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
README.md chore: add running TSC to test README (#5852) 2020-05-13 09:20:33 +01:00
requestinterception.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
run_static_server.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
screenshot.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
target.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
touchscreen.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
tracing.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
utils.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
waittask.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00
worker.spec.js chore: add Prettier (#5825) 2020-05-07 12:54:55 +02:00

Puppeteer unit tests

Unit tests in Puppeteer are written using Mocha as the test runner and Expect as the assertions library.

Test state

We have some common setup that runs before each test and is defined in mocha-utils.js.

You can use the getTestState function to read state. It exposes the following values that you can use in your tests; these are reset/tidied between tests automatically for you:

  • puppeteer: an instance of the Puppeteer library. This is exactly what you'd get if you ran require('puppeteer').
  • puppeteerPath: the path to the root source file for Puppeteer.
  • defaultBrowserOptions: the default options that the Puppeteer browser is launched with in test mode, so tests can use them and override them if required.
  • server: a dummy test server instance (see utils/testserver for more).
  • httpsServer: a dummy test server HTTPS instance (see utils/testserver for more).
  • isFirefox: true if running in Firefox.
  • isChrome: true if running in Chromium.
  • isHeadless: true if the test is in headless mode.

If your test needs a browser instance, you can use the setupTestBrowserHooks() function, which will automatically configure a browser that is cleaned up between each test suite run. You can access it via getTestState().

If your test needs a Puppeteer page and context, you can use the setupTestPageAndContextHooks() function, which will configure these. You can access page and context from getTestState() once you have done this.

The best way to see how these helpers are used is to look at an existing test.
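
For example, a spec using these hooks might look roughly like this (a sketch, not copied from an existing file):

```
describe('Page.goto', function () {
  setupTestBrowserHooks();
  setupTestPageAndContextHooks();

  it('should navigate to the empty page', async () => {
    const { page, server } = getTestState();
    const response = await page.goto(server.EMPTY_PAGE);
    expect(response.ok()).toBe(true);
  });
});
```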

Skipping tests for Firefox

Tests that are not expected to pass in Firefox can be skipped. You can skip an individual test by using itFailsFirefox rather than it. Similarly you can skip a describe block with describeFailsFirefox.

There is also describeChromeOnly which will only execute the test if running in Chromium. Note that this is different from describeFailsFirefox: the goal is to get any FailsFirefox calls passing in Firefox, whereas describeChromeOnly should be used to test behaviour that will only ever apply in Chromium.
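
For illustration, the helpers are used like this (the describe and test names below are made up):

```
describeChromeOnly('Chromium-only behaviour', function () {
  // Everything in this block runs only when the product is Chromium.
});

describe('Some feature', function () {
  itFailsFirefox('should work', async () => {
    // Skipped when running against Firefox until it passes there.
  });
});
```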

Running tests

Despite being named 'unit', these are integration tests, making sure public API methods and events work as expected.

  • To run all tests:
npm run unit
  • Important: don't forget to first run TypeScript if you're testing local changes:
npm run tsc && npm run unit
  • To run a specific test, substitute the it with it.only:
  ...
  it.only('should work', async function() {
    const {server, page} = getTestState();
    const response = await page.goto(server.EMPTY_PAGE);
    expect(response.ok()).toBe(true);
  });
  • To disable a specific test, substitute the it with xit (mnemonic rule: 'cross it'):
  ...
  // Using "xit" to skip a specific test
  xit('should work', async function() {
    const {server, page} = getTestState();
    const response = await page.goto(server.EMPTY_PAGE);
    expect(response.ok()).toBe(true);
  });
  • To run tests in non-headless mode:
HEADLESS=false npm run unit
  • To run tests with custom browser executable:
BINARY=<path-to-executable> npm run unit