This CL migrates all the tests to TypeScript. The main benefit of this is that we start consuming our TypeScript definitions and therefore find errors in them. The act of migrating surfaced some bugs in our definitions, and now we can be sure to avoid them going forward.
You'll notice the addition of some `TODO`s in the code; I didn't want this CL to get any bigger than it already is but I intend to follow those up once this lands. It's mostly figuring out how to extend the `expect` types with our `toBeGolden` helpers and some other slight confusions with types that the tests exposed.
Co-authored-by: Mathias Bynens <mathias@qiwi.be>
As far as I can tell these became irrelevant as of v1.15, which added
`puppeteer.errors` and `puppeteer.devices` [1]. This is a breaking change
but one that's easily mitigated. We've said that we don't consider
changes to our folder/file structure a breaking change, but we can't
really do that if we have these two top level files that we've
documented.
[1]: e3abb0aa32 (diff-522b24108d7446af4c59873472a90444)
These files will be used by both the web and node versions of Puppeteer.
Another name for this might be "core" but I don't want to cause
confusion with the puppeteer-core package that we publish at the moment.
Fix child process killing when the parent process SIGINTs.
If you `ctrl + c` the Puppeteer parent process, we would sometimes not properly kill the child processes. This would then leave child processes behind with running Chromium instances, which in turn could block Puppeteer from launching again and result in
cryptic errors.
Instead of using the generic `process.kill` with the process ID (which for some reason has to be the negated pid, which I don't get), we can kill the child process directly by calling `proc.kill`.
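A minimal sketch of the difference (paths and variable names are illustrative, not the actual Launcher code):
```
// Minimal sketch, not the actual Launcher code: kill the spawned browser
// process via its handle instead of signalling the (negated-pid) process group.
import { spawn } from 'child_process';

const proc = spawn('/path/to/chromium', ['--headless'], { stdio: 'ignore' });

process.on('SIGINT', () => {
  // Before: process.kill(-proc.pid, 'SIGKILL'); // negated pid targets the process group
  proc.kill('SIGKILL'); // signal the child directly through its handle
  process.exit(130);
});
```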
Fixes #5729.
Fixes #4796.
Fixes #4963.
Fixes #4333.
Fixes #1825.
* chore: remove "Extracting..." log message
Fixes #5741.
* test: support extra Launcher options and skips
The extra Launcher options and skipping conditions enable
unit tests to be run more easily by third-parties, e.g.
browser vendors that are interested in Puppeteer support.
Extra Launcher options were previously removed as part of
switching away from the custom test harness.
* test: enable more tests for Firefox
We deferred this initially because our Windows CI build wasn't stable
and so debugging this was hard. It's now much more stable, so let's push
this back a month; at the same time I'll reach out to the Moz folks,
as it should be easier to debug reliably now that CI is stable on Windows.
* Don't use expect within Promises (#5466)
If a call to expect fails within a Promise, the Promise will not
be resolved, causing the test to crash.
The patch aligns the code with what is used by all
the other tests.
* chore: fix invalid SSL assertion on Catalina
The error Chrome gives with an invalid cert changes between older Mac
versions and Catalina as detailed here:
https://support.google.com/chrome/thread/18125056?hl=en.
This PR changes Travis to run Catalina (and we think most devs run up-to-date
OS versions), so this fix ensures the test behaviour is consistent
locally and on Travis.
For those on older Mac versions I've left a comment by the tests to
hopefully save them debugging!
Co-authored-by: Mathias Bynens <mathias@qiwi.be>
The coverage utils depend on `src/api.ts` being up to date and pointing to the right modules. If it isn't, you would get a cryptic error on CI:
```
1) "before all" hook in "{root}":
TypeError: Cannot read property 'prototype' of undefined
at traceAPICoverage (test/coverage-utils.js:40:54)
at Context.before (test/coverage-utils.js:103:7)
2) "after all" hook in "{root}":
TypeError: Cannot read property 'stop' of undefined
at Context.after (test/mocha-utils.js:168:22)
```
This change logs a clearer error that highlights the missing class and exits, so it's much easier to realise what's gone wrong.
Ideally the coverage wouldn't need a hardcoded list of sources, but until then this will help spot this error in the future.
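As a rough illustration (not the actual coverage-utils code; the import path and the shape of the API map are assumed here), the guard can be as simple as:
```
// Rough sketch only: bail out with a readable error if an entry in src/api.ts
// doesn't resolve to a class, instead of failing later with a cryptic TypeError.
import * as api from '../src/api'; // path assumed for illustration

for (const [className, classType] of Object.entries(api)) {
  if (!classType) {
    console.error(
      `Coverage error: "${className}" is listed in src/api.ts but could not be loaded. ` +
        'Check that src/api.ts points at the right module.'
    );
    process.exit(1);
  }
}
```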
This corresponds to Chromium 83.0.4103.0.
This roll includes:
- Enable SameSiteByDefaultCookies and CookiesWithoutSameSiteMustBeSecure https://crrev.com/c/2122809
When this test was failing, it would cause no future tests to run. This
was because the `expect` call within the `page.on` callback would throw
an error, and that would trigger an unhandled promise rejection that
caused the test framework to stop.
The fundamental issue here is making `expect` calls within callbacks.
They are brittle because they throw and the test framework won't catch
the error, but also because you have no guarantee that they will run:
if the callback is never executed, you don't know about it.
Although it's slightly more code, using a stub is the way to do this.
Not only can we assert that the stub was called, we can make synchronous
`expect` calls that Mocha will pick up properly if they fail.
Before this change, running the tests (and making it fail on purpose)
would cause all test execution to stop:
```
> puppeteer@3.0.4-post unit /Users/jacktfranklin/src/puppeteer
> mocha --config mocha-config/puppeteer-unit-tests.js
.(node:69580) UnhandledPromiseRejectionWarning: Error: expect(received).toBe(expected) // Object.is equality
Expected: "yes."
Received: ""
at Page.<anonymous> (/Users/jacktfranklin/src/puppeteer/test/dialog.spec.js:42:37)
[snip]
(node:69580) UnhandledPromiseRejectionWarning: Unhandled promise rejection ... [snip]
```
But with this change, the rest of the tests run:
```
> puppeteer@3.0.4-post unit /Users/jacktfranklin/src/puppeteer
> mocha --config mocha-config/puppeteer-unit-tests.js
Page.Events.Dialog
✓ should fire
1) should allow accepting prompts
✓ should dismiss the prompt
2 passing (2s)
1 failing
1) Page.Events.Dialog
should allow accepting prompts:
Error: expect(received).toBe(expected) // Object.is equality
Expected: "yes."
Received: ""
at Context.<anonymous> (test/dialog.spec.js:53:35)
at processTicksAndRejections (internal/process/task_queues.js:94:5)
```
This is much better because one failing test now doesn't stop the rest
of the test suite.
This probably isn't the only instance of this in the codebase, so I
propose that as we encounter them we fix them using this commit as the
template.
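For reference, a hedged sketch of the pattern (the test body and names are illustrative rather than the exact dialog.spec code; `page` is assumed to come from the test harness):
```
// Illustrative sketch of the stub pattern, not the exact dialog.spec code.
import expect from 'expect';
import sinon from 'sinon';
import type { Dialog, Page } from 'puppeteer';

declare const page: Page; // assumed to be provided by the test harness

it('should allow accepting prompts', async () => {
  // Record the dialog via a stub instead of asserting inside the callback.
  const onDialog = sinon.stub().callsFake((dialog: Dialog) => dialog.accept('answer!'));
  page.on('dialog', onDialog);

  const result = await page.evaluate(() => prompt('question?'));

  // Synchronous assertions that Mocha will report properly if they fail.
  expect(onDialog.callCount).toBe(1);
  expect(onDialog.firstCall.args[0].type()).toBe('prompt');
  expect(result).toBe('answer!');
});
```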
* Warn when given unsupported product name.
Fixes #5844.
This change means that when a user launches Puppeteer with a product name
that is not supported (which at the time of this commit means it's not
`firefox` or `chrome`), we will warn them about it.
Decided on just a warning vs an error because the current behaviour is
that we fall back to launching Chrome and I don't think this warrants a
breaking change.
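Sketched roughly (this is illustrative, not the actual Launcher wiring):
```
// Illustrative sketch only: warn on an unknown product name and fall back to
// Chrome instead of throwing, since throwing would be a breaking change.
function resolveProduct(product?: string): 'chrome' | 'firefox' {
  if (product === 'chrome' || product === 'firefox') return product;
  if (product !== undefined) {
    console.warn(`Warning: unknown product name ${product}. Falling back to chrome.`);
  }
  return 'chrome';
}
```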
* chore: update how we track coverage during unit tests
The old method of tracking coverage was causing issues. If a test failed
on CI, that test's failure would be lost because the test failing would
in turn cause the coverage to fail, but the `process.exit(1)` in the
coverage code caused Mocha to not output anything useful.
Instead the coverage checker now:
* tracks the coverage in memory in a Map (this hasn't changed)
* after all tests, writes that to disk in test/coverage.json (which is
gitignored)
* we then run a single Mocha test that asserts every method was called.
This means if the test run fails, the build will fail and give the error
about that test run, and that output won't be lost when the coverage
then fails too.
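A rough sketch of that final assertion (only the coverage.json file name comes from this change; the shape of its contents is assumed here):
```
// Rough sketch: one Mocha test that reads the coverage written after the run
// and fails with a readable list of any API methods that were never called.
import fs from 'fs';
import expect from 'expect';

describe('API coverage', () => {
  it('calls every public API method at least once', () => {
    // Assumed shape: { 'Page.goto': true, 'Page.click': false, ... }
    const coverage: Record<string, boolean> = JSON.parse(
      fs.readFileSync('test/coverage.json', 'utf8')
    );
    const missed = Object.entries(coverage)
      .filter(([, called]) => !called)
      .map(([method]) => method);
    expect(missed).toEqual([]);
  });
});
```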
Co-authored-by: Mathias Bynens <mathias@qiwi.be>
The codebase was incredibly inconsistent with the use of spacing around
curly braces, e.g.:
```
// this?
const a = {b: 1}
// or?
const a = { b: 1 }
```
This extended into import statements also. Google's styleguide is no
spacing, so we're going with that.
* chore: Add Windows to Travis
This commit runs the unit tests on Windows.
There are two tests failing on Windows that we skip.
I spoke to Mathias B and we agreed to
defer debugging this for now in favour of getting tests running on
Windows. But we didn't want to ignore it forever, hence giving the test
a date at which it will start to fail.
This commit updates all the non-Puppeteer unit tests to run using Mocha and then deletes the custom test runner framework from this repository. The documentation has also been updated.
Rather than maintain our own test runner we should instead lean on the community and use Mocha, which is very popular and is also our test runner of choice in DevTools.
Note that this commit doesn't remove the TestRunner source as it's still used for other unit tests, but they will be updated in a future PR and then we can remove the TestRunner.
The bulk of this PR is updating the tests: the old TestRunner passed contextual data via the `it` function callback, whereas Mocha does not, so we introduce some helpers for the tests to make this easier.
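The helper shape is roughly like the following (names here are illustrative rather than the exact ones added; the Mocha globals are assumed to be available):
```
// Illustrative shape of a shared-state helper, since Mocha doesn't pass
// contextual data through the `it` callback the way the old TestRunner did.
import puppeteer from 'puppeteer';
import type { Browser, Page } from 'puppeteer';

interface TestState {
  browser: Browser;
  page: Page;
}

const state: Partial<TestState> = {};

export function getTestState(): TestState {
  return state as TestState;
}

export function setupBrowserHooks(): void {
  before(async () => {
    state.browser = await puppeteer.launch();
  });
  beforeEach(async () => {
    state.page = await state.browser!.newPage();
  });
  afterEach(async () => {
    await state.page!.close();
  });
  after(async () => {
    await state.browser!.close();
  });
}

// In a spec file:
//   setupBrowserHooks();
//   it('does something', async () => {
//     const { page } = getTestState();
//   });
```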
See the large code comment in the diff for a full explanation but we can't rely on the functions being referentially equivalent so instead we test the behaviour in duplicate tests across the deprecated method and the new method.
They fail because cookies in Firefox return a `sameSite` key which the tests don't expect.
This is a solution that at least gets the Travis Firefox build (hopefully!) green again. Longer term it'd be great to allow the assertion to change based on the browser, rather than skipping these tests entirely.
Rather than use our own custom expect library, we can use expect from npm [1], which has an API almost identical to the one Puppeteer has but with more options and better diffing, and it's used by many in the community as it's the default assertion library that comes with Jest.
It's also thoroughly documented [2].
[1]: https://www.npmjs.com/package/expect
[2]: https://jestjs.io/docs/en/expect
* (feat) Add option to fetch Firefox Nightly
Add Firefox support to BrowserFetcher and the install script.
By default, the latest Firefox Nightly is downloaded
directly from archive.mozilla.org (dmg, tar.bz2 and zip).
This also required changes that impact `puppeteer.launch()`
and `puppeteer.executablePath()`.
Fixes #5151
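A hedged usage sketch (the revision string below is illustrative; the install script resolves the actual latest revision):
```
// Hedged usage sketch: download a Firefox Nightly build and launch it.
import puppeteer from 'puppeteer';

(async () => {
  const browserFetcher = puppeteer.createBrowserFetcher({ product: 'firefox' });
  // 'latest' is illustrative; the install script resolves the real revision.
  const revisionInfo = await browserFetcher.download('latest');
  const browser = await puppeteer.launch({
    product: 'firefox',
    executablePath: revisionInfo.executablePath,
  });
  console.log(await browser.version());
  await browser.close();
})();
```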
* Update docs/api.md
Co-Authored-By: Mathias Bynens <mathias@qiwi.be>
* Clean up revision promise
* Improve error handling in revision check
* Remove matchAll
* Use explicit octal mode
* Update .gitignore
Co-authored-by: Mathias Bynens <mathias@qiwi.be>
* feat: Set which browser to launch via PUPPETEER_PRODUCT
This change introduces a PUPPETEER_PRODUCT environment
variable as a first step toward using Puppeteer with
many different browsers. Setting PUPPETEER_PRODUCT=firefox, for
example, enables Firefox-specific Launcher settings.
The state is also exposed as `puppeteer.product` in the API
to support adding other product-specific behaviour as needed.
The bulk of the change is a refactoring in Launcher
to decouple generic browser start-up from product-specific
configuration.
Respecting the puppeteer-core restriction on PUPPETEER_*
environment variables, the Launcher is lazily instantiated
based on a `product` option to `puppeteer.launch()`, if available.
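In practice, a small sketch of the two entry points described above:
```
// Select Firefox at install time:
//   PUPPETEER_PRODUCT=firefox npm install
//
// ...or at launch time via the `product` option:
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch({ product: 'firefox' });
  console.log(puppeteer.product); // 'firefox'
  await browser.close();
})();
```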
* test: Distinguish Juggler unit tests from Firefox
The funit script is renamed to fjunit (j for Juggler, which is
used only by the experimental puppeteer-firefox package).
In contrast, the funit script now refers to running Puppeteer
unit tests against the main puppeteer package with Firefox.
To do so with Firefox Nightly, run:
`BINARY=path/to/firefox npm run funit`
A number of changes in this patch make it easier to run
Puppeteer unit tests in Mozilla's CI.
Node.js v6 was end-of-life'd in April 2019, with AWS Lambda prohibiting updates to the Node.js v6 runtime since June 30, 2019.
This makes it quite safe for us to remove the Node 6 support from the repository.
We'd like to pass an abort signal into Helper.waitForEvent in order to interrupt it when the browser/page closes. Several approaches have been considered:
1. Pass the CDPSession instance as another parameter to the helper method and listen to the Disconnected event on it. This would introduce an undesired dependency on the session object.
2. Listen to the CDPSession closure at the call sites (e.g. waitForRequest) and pass an abort promise which would be fulfilled when such an event is fired. The listeners would have to be removed from the session on successful completion of waitForEvent, so we'd have to pass some kind of DisposablePromise which would be disposed during cleanup. Such a parameter looked somewhat hairy.
3. Create a DisconnectPromise on CDPSession. One potential risk with that is that all chained promises would hang around until the event is fired, which might inadvertently cause memory leaks. On the other hand, adding such a promise to Promise.race will remove the dependency as soon as the race is finished. So this is the approach we're taking, with one tweak: the promise is created locally inside Page.
Ideally the disconnectPromise would throw when the session is closed, but that may lead to uncaught promise errors if all chained promises are resolved. To avoid that, the promise is resolved with an Error and Helper.waitForEvent throws it later.
Fix #4733
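Roughly, the mechanism looks like this (a simplified sketch, not the actual Helper/Page code):
```
// Simplified sketch: the disconnect promise *resolves* with an Error (it never
// rejects, to avoid unhandled rejections), and the waiter re-throws the Error
// if it wins the race against the awaited event.
import { EventEmitter } from 'events';

function waitForEvent<T>(
  emitter: EventEmitter,
  eventName: string,
  abortPromise: Promise<Error>
): Promise<T> {
  let listener!: (event: T) => void;
  const eventPromise = new Promise<T>((resolve) => {
    listener = resolve;
    emitter.on(eventName, listener);
  });
  return Promise.race([eventPromise, abortPromise]).then((result) => {
    emitter.removeListener(eventName, listener);
    if (result instanceof Error) throw result; // session disconnected
    return result as T;
  });
}

// Inside Page, the abort promise is created locally, e.g.:
// const disconnectPromise = new Promise<Error>((resolve) =>
//   client.once('Disconnected', () => resolve(new Error('Target closed')))
// );
```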
Our magical node6 transpiler can't handle object spreads :(
Drive-by: use "includes" instead of "startsWith" for devtools URL
since I remember that it used to be "chrome-devtools://", and somehow
I saw it as "devtools://" somewhere.
This roll includes:
- https://crrev.com/681997 - Turn on default SiteInstance by default.
The default SiteInstance was breaking the "devtools: true" option, so
there's a new feature we now disable by default.
This keeps pressuring us towards OOPIF support since that's an
inevitable future.
`testRunner.run()` might have 4 different outcomes:
- `ok` - all non-skipped tests passed
- `failed` - some tests failed or timed out
- `terminated` - the process received SIGHUP/SIGINT while the testrunner was running tests. This happens on CI under certain circumstances, e.g. when the
VM is getting re-scheduled.
- `crashed` - the testrunner terminated test execution due to either an `UnhandledPromiseRejection` or
a failure in one of the hooks (`beforeEach/afterEach/beforeAll/afterAll`).
As a consequence, there are two new test results: `terminated` and `crashed`.
All possible test results are:
- `ok` - test worked just fine
- `skipped` - test was skipped with `xit`
- `timedout` - test timed out
- `failed` - test threw an exception while running
- `terminated` - testrunner got terminated while running this test
- `crashed` - some `beforeEach` / `afterEach` hook corresponding to this
test timed out or threw an exception.
This patch changes a few parts of the testrunner API:
- `testRunner.run()` now returns an object `{result: string,
terminationError?: Error, terminationMessage?: string}`
- the same object is dispatched via `testRunner.on('finished')` event
- `testRunner.on('terminated')` got removed
- tests now might have `crashed` and `terminated` results
- `testRunner.on('teststarted')` is dispatched before running all related
`beforeEach` hooks, and `testRunner.on('testfinished')` is dispatched after
running all related `afterEach` hooks.
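For clarity, the returned shape and a typical consumer look roughly like this (the field names come from this change; the surrounding harness code is illustrative):
```
// Sketch of consuming the new testRunner.run() result.
interface RunResult {
  result: 'ok' | 'failed' | 'terminated' | 'crashed';
  terminationError?: Error;
  terminationMessage?: string;
}

async function reportRun(run: () => Promise<RunResult>): Promise<void> {
  const { result, terminationError, terminationMessage } = await run();
  if (result === 'terminated' || result === 'crashed') {
    console.error(terminationMessage ?? 'Test run did not finish', terminationError);
    process.exitCode = 1;
    return;
  }
  process.exitCode = result === 'ok' ? 0 : 1;
}
```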
This patch:
- updates Flakiness Dashboard format to define version per-build
and to pass COMMIT information
- drops the README.md generation - we'll move on to a designated flakiness
dashboard viewer
We used to track API Coverage for public events, but this regressed in the refactoring that
introduced `//lib/Events.js`.
This patch:
- Brings back API Coverage for events
- Combines all coverage-generated tests into a single one. This way
we can generate less data for the flakiness dashboard.
- fix `FLAKINESS_DASHBOARD_BUILD_URL` to point to a task instead of a build
- do not pretty-print `dashboard.json` when serializing flakiness results
- filter out 'COVERAGE' test(s) so that they don't add to the `dashboard.json` payload; these are useless there
- validate certain important options of flakiness dashboard
- more logging to STDOUT to actually say which repo and what branch is getting used
- enhance commit message with a build URL
- use a more compact format for JSON. For 100 runs of 700 tests it yields 21MB json instead of 23MB.
- bump default builds number to 100
Make the eval test resilient to variations in error message format between browsers. This will make the test pass without alterations in WebKit as well as Chrome and Firefox.
This patch introduces a dashboard that records test results and
uploads them to https://github.com/aslushnikov/puppeteer-flakiness-dashboard
Since many bots might push results in parallel, each bot pushes
results to its own git branch.
FlakinessDashboard also generates a simple README.md with a flakiness
summary. If this proves insufficient, we can build a website that
fetches flakiness data and renders it nicely.
This patch introduces a page.waitForFileChooser() method
that adds a watchdog to wait for file chooser dialogs.
This lets Puppeteer users capture file chooser requests
and fulfill/cancel them if necessary.
Fixes #2946
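Typical usage looks like this (the URL and selector are illustrative):
```
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/upload'); // illustrative URL

  // Start waiting for the chooser *before* clicking, or the event may be missed.
  const [fileChooser] = await Promise.all([
    page.waitForFileChooser(),
    page.click('input[type=file]'), // illustrative selector
  ]);
  await fileChooser.accept(['./local-file.txt']);

  await browser.close();
})();
```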
Don't ignore stdout and stderr when using a pipe for remote debugging and dumpio is true. In that case the Puppeteer process connects to the stdout/stderr streams of the child process and will not hang.
This patch adds new TestRunner options:
- `disableTimeoutWhenInspectorIsEnabled` - disable the test timeout if the
testrunner detects an enabled inspector.
- `breakOnFailure` - whether the testrunner should terminate the test run on the
first test failure
Chrome has a set of component extensions - e.g. CryptoTokenExtension,
which helps with 2FA.
These extensions are loaded regardless of the `--disable-extensions`
flag we already pass. To disable these extensions, we need to pass the additional
`--disable-component-extensions-with-background-pages` flag.
Fix #4300
This patch teaches page.evaluate to do 1 hop instead of 2 hops.
As a result, things such as `page.select` will not throw an
unfortunate exception when they schedule a navigation.
Fix #4537
* feat(chromium): roll Chromium to r665405
This roll includes:
- https://crrev.com/665226 - DevTools: make interception respect cross-process frame boundaries
This fixes page loading with dynamic OOPIFs - test is added.
Fix #4442
* fix lint
This roll includes:
- [inspector_protocol:8ec18cf](8ec18cf088) Support STRING16 in the template when converting CBOR map keys
to protocol::Value.
- [inspector_protocol:37518ac](37518ac421) fix parsing of the last ASCII character
This fixes protocol handling of UTF8 in both V8 and Chromium.
Fixes #4443.