We'd like to pass an abort signal into Helper.waitForEvent in order to interrupt it when the browser/page closes. Several approaches were considered:
1. Pass the CDPSession instance as another parameter to the helper method and listen to its Disconnected event. This would introduce an undesired dependency on the session object.
2. Listen for CDPSession closure at the call sites (e.g. waitForRequest) and pass an abort promise that is fulfilled when such an event fires. The listeners would have to be removed from the session on successful completion of waitForEvent, so we'd have to pass some kind of DisposablePromise that gets disposed during cleanup. Such a parameter looked somewhat hairy.
3. Create a DisconnectPromise on CDPSession. One potential risk is that all chained promises would hang around until the event fires, which might inadvertently cause memory leaks. On the other hand, adding such a promise to Promise.race removes the dependency as soon as the race is finished. So this is the approach we're taking, with one tweak: the promise is created locally inside Page.
Ideally the disconnectPromise would throw when the session is closed, but that could lead to unhandled promise rejections if all chained promises have already resolved. To avoid that, the promise is resolved with an Error and Helper.waitForEvent throws it later, as sketched below.
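A rough sketch of the resulting pattern (the signature and helper names here are illustrative, not the exact code in this patch):

```js
// Illustrative sketch only; the real helper's signature may differ.
function waitForEvent(emitter, eventName, predicate, abortPromise) {
  let resolveCallback;
  const eventPromise = new Promise(resolve => { resolveCallback = resolve; });
  const listener = event => {
    if (predicate(event))
      resolveCallback(event);
  };
  emitter.on(eventName, listener);
  // abortPromise resolves (rather than rejects) with an Error when the
  // session disconnects, so it never produces an unhandled rejection.
  return Promise.race([eventPromise, abortPromise]).then(result => {
    emitter.removeListener(eventName, listener);
    if (result instanceof Error)
      throw result;
    return result;
  });
}
```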
Fix #4733
Our magical node6 transpiler can't handle object spreads :(
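For illustration, a spread like the one below has to be rewritten with `Object.assign` (the variable names are made up, not taken from the patch):

```js
const defaults = {timeout: 30000};
const overrides = {timeout: 0};
// `{...defaults, ...overrides}` is the kind of object spread the node6
// transpiler chokes on; Object.assign produces the same merged object.
const merged = Object.assign({}, defaults, overrides);
```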
Drive-by: use "includes" instead of "startsWith" for the devtools URL check,
since I remember it used to be "chrome-devtools://", and somehow
I saw it as "devtools://" somewhere.
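Something along these lines (paraphrased; only the check itself matters):

```js
// `url` is the target's URL string; includes() covers both the old
// "chrome-devtools://devtools/..." scheme and the newer "devtools://devtools/..." one.
function isDevToolsURL(url) {
  return url.includes('devtools://');
}
```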
This roll includes:
- https://crrev.com/681997 - Turn on default SiteInstance by default.
The default SiteInstance was breaking the "devtools: true" option, so
we now disable this new feature by default.
This keeps pressuring us towards OOPIF support since that's an
inevitable future.
`testRunner.run()` might have 4 different outcomes:
- `ok` - all non-skipped tests passed
- `failed` - some tests failed or timed out
- `terminated` - the process received SIGHUP/SIGINT while the testrunner was running tests. This happens on CIs under certain circumstances, e.g. when
the VM is getting re-scheduled.
- `crashed` - the testrunner terminated test execution due to either an `UnhandledPromiseRejection` or
a failure in one of the hooks (`beforeEach`/`afterEach`/`beforeAll`/`afterAll`).
As a consequence, there are two new test results: `terminated` and `crashed`.
All possible test results are:
- `ok` - test worked just fine
- `skipped` - test was skipped with `xit`
- `timedout` - test timed out
- `failed` - test threw an exception while running
- `terminated` - testrunner got terminated while running this test
- `crashed` - some `beforeEach` / `afterEach` hook corresponding to this
test timed out of threw an exception.
This patch changes a few parts of the testrunner API (a usage sketch follows the list):
- `testRunner.run()` now returns an object `{result: string,
terminationError?: Error, terminationMessage?: string}`
- the same object is dispatched via `testRunner.on('finished')` event
- `testRunner.on('terminated')` got removed
- tests now might have `crashed` and `terminated` results
- `testRunner.on('teststarted')` is dispatched before running all related
`beforeEach` hooks, and `testRunner.on('testfinished')` is dispatched after
running all related `afterEach` hooks.
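A sketch of how a harness script might consume the updated API (the require path and the `name`/`result` properties on the test objects are assumptions, not part of this patch):

```js
const TestRunner = require('./utils/testrunner');
const testRunner = new TestRunner();

// Fired around each test's beforeEach/afterEach hooks.
testRunner.on('teststarted', test => console.log(`started: ${test.name}`));
testRunner.on('testfinished', test => console.log(`${test.name}: ${test.result}`));

// Carries the same payload as the object returned from run().
testRunner.on('finished', ({result, terminationError, terminationMessage}) => {
  if (result === 'terminated' || result === 'crashed')
    console.error(terminationMessage, terminationError);
});

testRunner.run().then(({result}) => {
  // result is one of 'ok', 'failed', 'terminated', 'crashed'.
  process.exit(result === 'ok' ? 0 : 1);
});
```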
Adds a note about Jest's maxWorkers setting as well as the base image to start from. This is an improvement over the previous section I wrote, born from me banging my head against a YAML file all week 🙃.
This came from personal difficulties running Puppeteer tests on CircleCI. I tried to keep the note as brief as possible while still being helpful for an entire CI platform.
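For context, the gist of the note is a Jest config along these lines (the exact worker count is an assumption and depends on the container's resources):

```js
// jest.config.js
module.exports = {
  // Each Puppeteer test can launch its own Chromium instance, so capping the
  // number of parallel Jest workers keeps the CI container from running out
  // of memory.
  maxWorkers: 2,
};
```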
This patch:
- updates the Flakiness Dashboard format to define the version per-build
and to pass COMMIT information
- drops the README.md generation - we'll move on to a dedicated flakiness
dashboard viewer
We used to track API Coverage for public events, but this regressed in the refactoring that
introduced `//lib/Events.js`.
This patch:
- Brings back API Coverage for events
- Combines all coverage-generated tests into a single one. This way
we generate less data for the flakiness dashboard.
- fix `FLAKINESS_DASHBOARD_BUILD_URL` to point to a task instead of a build
- do not pretty-print `dashboard.json` when serializing flakiness results
- filter out 'COVERAGE' test(s) so that they don't add to the `dashboard.json` payload; they are useless there
- validate certain important options of the flakiness dashboard
- add more logging to STDOUT to say which repo and branch are being used
- enhance the commit message with a build URL
- use a more compact JSON format. For 100 runs of 700 tests it yields a 21MB JSON instead of 23MB.
- bump the default number of builds to 100
Make the eval test resilient to variations in error message format between browsers. This will make the test pass without alterations in WebKit as well as Chrome and Firefox.
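A minimal standalone sketch of the idea (the real test lives in the Puppeteer test suite and uses its own harness; the identifier below is just an example):

```js
const assert = require('assert');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const error = await page
    .evaluate(() => notExistingObject.property)
    .catch(e => e);
  // Chrome, Firefox and WebKit all phrase "X is not defined" differently,
  // so assert only on the identifier rather than on the exact message.
  assert(error.message.includes('notExistingObject'));
  await browser.close();
})();
```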
This patch introduces a dashboard that records test results and
uploads them to https://github.com/aslushnikov/puppeteer-flakiness-dashboard
Since many bots might push results in parallel, each bot pushes
results to its own git branch.
FlakinessDashboard also generates a simple README.md with a flakiness
summary. If this proves insufficient, we can build a website that
fetches flakiness data and renders it nicely.
This patch introduces a page.waitForFileChooser() method
that adds a watchdog to wait for file chooser dialogs.
This lets Puppeteer users capture file chooser requests
and fulfill or cancel them if necessary.
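A usage sketch (the inline page content and the accepted file are placeholders):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setContent('<input type="file" id="upload">');
  // Start waiting for the chooser *before* the click that opens it,
  // otherwise the dialog may appear before the watchdog is installed.
  const [fileChooser] = await Promise.all([
    page.waitForFileChooser(),
    page.click('#upload'),
  ]);
  await fileChooser.accept([__filename]); // or: await fileChooser.cancel();
  await browser.close();
})();
```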
Fixes #2946