Current Dev State

This commit is contained in:
Tim Lorsbach
2025-06-23 20:13:54 +02:00
parent b4f9bb277d
commit ded50edaa2
22617 changed files with 4345095 additions and 174 deletions

View File

@ -0,0 +1,87 @@
name: The Browser Object
category: testrunner
tags: guide
index: 3
title: WebdriverIO - The Browser Object
---
The Browser Object
==================
If you use the wdio test runner you can access the webdriver instance through the global `browser` object. The session is initialized by the test runner, so you don't need to call the [`init`](/api/protocol/init.html) command. The same goes for ending the session: that is also done by the test runner process.
Besides all the commands from the [api](/api.html), the browser object provides some more information you might be interested in during your test run:
### Get desired capabilities
```js
console.log(browser.desiredCapabilities);
/**
 * outputs:
 * {
 *     javascriptEnabled: true,
 *     locationContextEnabled: true,
 *     handlesAlerts: true,
 *     rotatable: true,
 *     browserName: 'chrome',
 *     loggingPrefs: { browser: 'ALL', driver: 'ALL' }
 * }
 */
```
### Get wdio config options
```js
// wdio.conf.js
exports.config = {
// ...
foobar: true,
// ...
}
```
```js
console.log(browser.options);
/**
 * outputs:
 * {
 *     port: 4444,
 *     protocol: 'http',
 *     waitforTimeout: 10000,
 *     waitforInterval: 250,
 *     coloredLogs: true,
 *     logLevel: 'verbose',
 *     baseUrl: 'http://localhost',
 *     connectionRetryTimeout: 90000,
 *     connectionRetryCount: 3,
 *     sync: true,
 *     specs: [ 'err.js' ],
 *     foobar: true, // <-- custom option
 *     // ...
 * }
 */
```
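Custom options like `foobar` above can act as simple feature toggles inside your tests. A minimal sketch (the flag name is just the example from above, and `opts` stands in for `browser.options` so the snippet is self-contained):

```js
// Read a custom boolean flag from the wdio options object.
// `opts` stands in for `browser.options` here.
function isFoobarEnabled(opts) {
    return opts.foobar === true;
}

console.log(isFoobarEnabled({ foobar: true })); // true
console.log(isFoobarEnabled({}));               // false
```

Inside a spec you would call it as `isFoobarEnabled(browser.options)` and skip or adjust tests accordingly.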
### Check if capability is a mobile device
```js
var client = require('webdriverio').remote({
desiredCapabilities: {
platformName: 'iOS',
app: 'net.company.SafariLauncher',
udid: '123123123123abc',
deviceName: 'iPhone',
}
});
console.log(client.isMobile); // outputs: true
console.log(client.isIOS); // outputs: true
console.log(client.isAndroid); // outputs: false
```
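These flags are handy for branching platform-specific logic in shared test code. A minimal sketch (the selector values are hypothetical, and `client` stands in for the WebdriverIO instance created above):

```js
// Pick a platform-appropriate selector based on the client's environment flags.
function loginSelector(client) {
    if (client.isIOS) {
        return '~loginButton'; // accessibility id on iOS
    }
    if (client.isAndroid) {
        return 'android=new UiSelector().text("Login")'; // UiAutomator selector
    }
    return '#login'; // CSS selector for desktop browsers
}

console.log(loginSelector({ isMobile: false, isIOS: false, isAndroid: false })); // #login
```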
### Log results
```js
browser.logger.info('some random logging');
```
For more information about the logger class check out [Logger.js](https://github.com/webdriverio/webdriverio/blob/master/lib/utils/Logger.js) on GitHub.

name: configurationfile
category: testrunner
tags: guide
index: 1
title: WebdriverIO - Test Runner Configuration File
---
Configuration File
==================
The configuration file contains all the necessary information to run your test suite. It is a node module that exports a config object. Here is an example configuration with all supported properties and additional information:
```js
exports.config = {
// =====================
// Server Configurations
// =====================
// Host address of the running Selenium server. This information is usually obsolete as
// WebdriverIO automatically connects to localhost. Also if you are using one of the
// supported cloud services like Sauce Labs, Browserstack or Testing Bot you also don't
// need to define host and port information because WebdriverIO can figure that out
// according to your user and key information. However if you are using a private Selenium
// backend you should define the host address, port, and path here.
//
host: '0.0.0.0',
port: 4444,
path: '/wd/hub',
//
// =================
// Service Providers
// =================
// WebdriverIO supports Sauce Labs, Browserstack and Testing Bot (other cloud providers
// should work too though). These services define specific user and key (or access key)
// values you need to put in here in order to connect to these services.
//
user: 'webdriverio',
key: 'xxxxxxxxxxxxxxxx-xxxxxx-xxxxx-xxxxxxxxx',
//
// ==================
// Specify Test Files
// ==================
// Define which test specs should run. The pattern is relative to the directory
// from which `wdio` was called. Notice that, if you are calling `wdio` from an
// NPM script (see https://docs.npmjs.com/cli/run-script) then the current working
// directory is where your package.json resides, so `wdio` will be called from there.
//
specs: [
'test/spec/**'
],
// Patterns to exclude.
exclude: [
'test/spec/multibrowser/**',
'test/spec/mobile/**'
],
//
// ============
// Capabilities
// ============
// Define your capabilities here. WebdriverIO can run multiple capabilities at the same
// time. Depending on the number of capabilities, WebdriverIO launches several test
// sessions. Within your capabilities you can overwrite the spec and exclude option in
// order to group specific specs to a specific capability.
//
//
// First, you can define how many instances should be started at the same time. Let's
// say you have 3 different capabilities (Chrome, Firefox and Safari) and you have
// set maxInstances to 1; wdio will spawn 3 processes. Therefore, if you have 10 spec
// files and you set maxInstances to 10, all spec files will get tested at the same
// time and 30 processes will get spawned. The property basically controls how many
// sessions per capability are allowed to run in parallel.
//
//
maxInstances: 10,
//
// If you have trouble getting all important capabilities together, check out the
// Sauce Labs platform configurator - a great tool to configure your capabilities:
// https://docs.saucelabs.com/reference/platforms-configurator
//
capabilities: [{
browserName: 'chrome'
}, {
// maxInstances can get overwritten per capability. So if you have an in-house Selenium
// grid with only 5 Firefox instances available you can make sure that not more than
// 5 instances get started at a time.
maxInstances: 5,
browserName: 'firefox',
specs: [
'test/ffOnly/*'
]
},{
browserName: 'phantomjs',
exclude: [
'test/spec/alert.js'
]
}],
//
// When enabled opens a debug port for node-inspector and pauses execution
// on `debugger` statements. The node-inspector can be attached with:
// `node-inspector --debug-port 5859 --no-preload`
// When debugging it is also recommended to change the timeout interval of
// test runner (eg. jasmineNodeOpts.defaultTimeoutInterval) to a very high
// value and setting maxInstances to 1.
debug: false,
//
// Additional node arguments to use when starting child processes
execArgv: null,
//
//
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// By default WebdriverIO commands are executed in a synchronous way using
// the wdio-sync package. If you still want to run your tests in an async way
// using promises you can set the sync option to false.
sync: true,
//
// Level of logging verbosity: silent | verbose | command | data | result | error
logLevel: 'silent',
//
// Enables colors for log output.
coloredLogs: true,
//
// If you only want to run your tests until a specific amount of tests have failed use
// bail (default is 0 - don't bail, run all tests).
bail: 0,
//
// Saves a screenshot to a given path if a command fails.
screenshotPath: 'shots',
//
// Set a base URL in order to shorten url command calls. If your url parameter starts
// with "/", the base url gets prepended.
baseUrl: 'http://localhost:9090',
//
// Default timeout for all waitForXXX commands.
waitforTimeout: 1000,
//
// Initialize the browser instance with a WebdriverIO plugin. The object should have the
// plugin name as key and the desired plugin options as property. Make sure you have
// the plugin installed before running any tests. The following plugins are currently
// available:
// WebdriverCSS: https://github.com/webdriverio/webdrivercss
// WebdriverRTC: https://github.com/webdriverio/webdriverrtc
// Browserevent: https://github.com/webdriverio/browserevent
plugins: {
webdrivercss: {
screenshotRoot: 'my-shots',
failedComparisonsRoot: 'diffs',
misMatchTolerance: 0.05,
screenWidth: [320,480,640,1024]
},
webdriverrtc: {},
browserevent: {}
},
//
// Framework you want to run your specs with.
// The following are supported: mocha, jasmine and cucumber
// see also: http://webdriver.io/guide/testrunner/frameworks.html
//
// Make sure you have the wdio adapter package for the specific framework installed before running any tests.
framework: 'mocha',
//
// Test reporter for stdout.
// The only one supported by default is 'dot'
// see also: http://webdriver.io/guide.html and click on "Reporters" in left column
reporters: ['dot', 'allure'],
//
// Some reporters require additional information which should get defined here
reporterOptions: {
//
// If you are using the "xunit" reporter you should define the directory where
// WebdriverIO should save all unit reports.
outputDir: './'
},
//
// Options to be passed to Mocha.
// See the full list at http://mochajs.org/
mochaOpts: {
ui: 'bdd'
},
//
// Options to be passed to Jasmine.
// See also: https://github.com/webdriverio/wdio-jasmine-framework#jasminenodeopts-options
jasmineNodeOpts: {
//
// Jasmine default timeout
defaultTimeoutInterval: 5000,
//
// The Jasmine framework allows you to intercept each assertion in order to log the state
// of the application or website depending on the result. For example, it is pretty handy
// to take a screenshot every time an assertion fails.
expectationResultHandler: function(passed, assertion) {
// do something
},
//
// Make use of Jasmine-specific grep functionality
grep: null,
invertGrep: null
},
//
// If you are using Cucumber you need to specify where your step definitions are located.
// See also: https://github.com/webdriverio/wdio-cucumber-framework#cucumberopts-options
cucumberOpts: {
require: [], // <string[]> (file/dir) require files before executing features
backtrace: false, // <boolean> show full backtrace for errors
compiler: [], // <string[]> ("extension:module") require files with the given EXTENSION after requiring MODULE (repeatable)
dryRun: false, // <boolean> invoke formatters without executing steps
failFast: false, // <boolean> abort the run on first failure
format: ['pretty'], // <string[]> (type[:path]) specify the output format, optionally supply PATH to redirect formatter output (repeatable)
colors: true, // <boolean> enable colors in formatter output
snippets: true, // <boolean> show step definition snippets for pending steps
source: true, // <boolean> show source URIs in the output
profile: [], // <string[]> (name) specify the profile to use
strict: false, // <boolean> fail if there are any undefined or pending steps
tags: [], // <string[]> (expression) only execute the features or scenarios with tags matching the expression
timeout: 20000, // <number> timeout for step definitions
ignoreUndefinedDefinitions: false, // <boolean> Enable this config to treat undefined definitions as warnings.
},
//
// =====
// Hooks
// =====
// WebdriverIO provides several hooks you can use to interfere with the test process in
// order to enhance it and build services around it. You can either apply a single
// function or an array of methods to it. If one of them returns a promise, WebdriverIO
// will wait until that promise is resolved before continuing.
//
/**
* Gets executed once before all workers get launched.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
*/
onPrepare: function (config, capabilities) {
},
/**
* Gets executed just before initialising the webdriver session and test framework. It allows you
* to manipulate configurations depending on the capability or spec.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
*/
beforeSession: function (config, capabilities, specs) {
},
/**
* Gets executed before test execution begins. At this point you can access all global
* variables like `browser`. It is the perfect place to define custom commands.
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that are to be run
*/
before: function (capabilities, specs) {
},
/**
* Hook that gets executed before the suite starts
* @param {Object} suite suite details
*/
beforeSuite: function (suite) {
},
/**
* Hook that gets executed _before_ a hook within the suite starts (e.g. runs before calling
* beforeEach in Mocha)
*/
beforeHook: function () {
},
/**
* Hook that gets executed _after_ a hook within the suite ends (e.g. runs after calling
* afterEach in Mocha)
*/
afterHook: function () {
},
/**
* Function to be executed before a test (in Mocha/Jasmine) or a step (in Cucumber) starts.
* @param {Object} test test details
*/
beforeTest: function (test) {
},
/**
* Runs before a WebdriverIO command gets executed.
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
*/
beforeCommand: function (commandName, args) {
},
/**
* Runs after a WebdriverIO command gets executed
* @param {String} commandName hook command name
* @param {Array} args arguments that command would receive
* @param {Number} result 0 - command success, 1 - command error
* @param {Object} error error object if any
*/
afterCommand: function (commandName, args, result, error) {
},
/**
* Function to be executed after a test (in Mocha/Jasmine) or a step (in Cucumber) ends.
* @param {Object} test test details
*/
afterTest: function (test) {
},
/**
* Hook that gets executed after the suite has ended
* @param {Object} suite suite details
*/
afterSuite: function (suite) {
},
/**
* Gets executed after all tests are done. You still have access to all global variables from
* the test.
* @param {Number} result 0 - test pass, 1 - test fail
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
after: function (result, capabilities, specs) {
},
/**
* Gets executed right after terminating the webdriver session.
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
* @param {Array.<String>} specs List of spec file paths that ran
*/
afterSession: function (config, capabilities, specs) {
},
/**
* Gets executed after all workers got shut down and the process is about to exit.
* @param {Number} exitCode 0 - success, 1 - fail
* @param {Object} config wdio configuration object
* @param {Array.<Object>} capabilities list of capabilities details
*/
onComplete: function (exitCode, config, capabilities) {
},
//
// Cucumber specific hooks
beforeFeature: function (feature) {
},
beforeScenario: function (scenario) {
},
beforeStep: function (step) {
},
afterStep: function (stepResult) {
},
afterScenario: function (scenario) {
},
afterFeature: function (feature) {
}
};
```
You can also find that file with all possible options and variations in the [example folder](https://github.com/webdriverio/webdriverio/blob/master/examples/wdio.conf.js).
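Since the configuration file is a plain node module, you can also build variations from a shared base object instead of duplicating settings, e.g. for a CI-specific config. A minimal sketch (the file layout and names are hypothetical):

```js
// A tiny merge helper: copies `overrides` onto `target` (shallow, ES5-safe).
function merge(target, overrides) {
    for (var key in overrides) {
        target[key] = overrides[key];
    }
    return target;
}

// Shared defaults, e.g. kept in a hypothetical conf/base.js
var base = {
    logLevel: 'silent',
    waitforTimeout: 1000,
    framework: 'mocha'
};

// A CI-specific config that only overrides what differs
var ciConfig = merge(merge({}, base), {
    logLevel: 'verbose',
    maxInstances: 1
});

// In a real wdio.ci.conf.js you would then assign: exports.config = ciConfig;
console.log(ciConfig.framework + '/' + ciConfig.logLevel); // mocha/verbose
```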

name: Debugging
category: testrunner
tags: guide
index: 8
title: WebdriverIO - Debugging
---
Debugging
==========
Debugging is significantly more difficult when there are several processes spawning dozens of tests in multiple browsers.
For starters, it is extremely helpful to limit parallelism by setting `maxInstances` to 1 and targeting only those specs and browsers that need to be debugged.
In `wdio.conf`:
```js
maxInstances: 1,
specs: ['**/myspec.spec.js'],
capabilities: [{ browserName: 'firefox' }]
```
In many cases, you can use [`browser.debug()`](/api/utility/debug.html) to pause your test and inspect the browser. Your command line interface will also switch into a REPL mode that allows you to fiddle around with commands and elements on the page. In REPL mode you can access the browser object or `$` and `$$` functions like you can in your tests.
When using `browser.debug()` you will likely need to increase the timeout of the test runner to prevent it from failing the test for taking too long. For example:
In `wdio.conf`:
```js
jasmineNodeOpts: {
    defaultTimeoutInterval: (24 * 60 * 60 * 1000)
}
```
See [timeouts](/guide/testrunner/timeouts.html) for more information on how to do that using other frameworks.
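With Mocha, for example, the equivalent would be to raise the `timeout` value in `mochaOpts` (a sketch; the one-day value mirrors the Jasmine example above and is meant for interactive debugging only):

```js
mochaOpts: {
    timeout: 24 * 60 * 60 * 1000
}
```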
## Watch files
With `v4.6.0` WebdriverIO introduced a `watch` argument that helps you rerun certain specs when they get updated. To enable it, just run the `wdio` command with the `--watch` flag:
```sh
wdio wdio.conf.js --watch
```
It will initialize the desired Selenium sessions defined in your config and will wait until a file that was defined via the `specs` option has changed. This works regardless of whether you run your tests on a local grid or on cloud services like [SauceLabs](https://saucelabs.com/).
## Node Inspector
**n.b. If you are using Node v6.3 and above, you should use Node's built-in debugger, instead. [See below](#node_debugger)**
For a more comprehensive debugging experience you can enable the `debug` flag to start the test runner processes with an open debugger port.
This allows attaching node-inspector and pausing test execution with `debugger` statements. Each child process will be assigned a new debugging port starting at `5859`.
This feature can be enabled by setting the `debug` flag in wdio.conf:
```js
{
    debug: true
}
```
Once enabled, tests will pause at `debugger` statements. You must then attach the debugger to continue.
If you do not already have `node-inspector` installed, install it with:
```sh
npm install -g node-inspector
```
And attach to the process with:
```sh
node-inspector --debug-port 5859 --no-preload
```
The `--no-preload` option defers loading source files until needed. This helps performance significantly when the project contains a large number of node_modules, but you may need to remove it if you want to navigate your source and add additional breakpoints after attaching the debugger.
## Node built-in debugging with chrome-devtools<a id="node_debugger"></a>
Chrome devtools debugging looks like it's going to be the accepted replacement for node-inspector. This quote is from the node-inspector GitHub README:
> Since version 6.3, Node.js provides a built-in DevTools-based debugger which mostly deprecates Node Inspector, see e.g. this blog post to get started. The built-in debugger is developed directly by the V8/Chromium team and provides certain advanced features (e.g. long/async stack traces) that are too difficult to implement in Node Inspector.
To get it working, you need to pass the `--inspect` flag down to the node process running tests like this:
In `wdio.conf`:
```js
execArgv: ['--inspect']
```
You should see a message like this in the console:
```txt
Debugger listening on port 9229.
Warning: This is an experimental feature and could change at any time.
To start debugging, open the following URL in Chrome:
chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532...
```
You'll want to open that URL, which will attach the debugger.
Tests will pause at `debugger` statements, but ONLY once dev-tools has been opened and the debugger attached. That can be a little awkward if you're trying to debug something close to the start of a test; you can get around that by adding a `browser.debug()` call to pause long enough.
Once execution has finished, the test doesn't actually end until the devtools is closed, so you'll need to close it yourself.
## Dynamic configuration
Note that `wdio.conf` can contain JavaScript. Since you probably do not want to permanently change your timeout value to 1 day, it is often helpful to change these settings from the command line using environment variables. This can be used to dynamically change the configuration:
```js
var debug = process.env.DEBUG;
var defaultCapabilities = ...;
var defaultTimeoutInterval = ...;
var defaultSpecs = ...;
exports.config = {
    debug: debug,
    maxInstances: debug ? 1 : 100,
    capabilities: debug ? [{ browserName: 'chrome' }] : defaultCapabilities,
    specs: process.env.SPEC ? [process.env.SPEC] : defaultSpecs,
    jasmineNodeOpts: {
        defaultTimeoutInterval: debug ? (24 * 60 * 60 * 1000) : defaultTimeoutInterval
    }
};
```
You can then prefix the `wdio` command with your desired values:
```sh
DEBUG=true SPEC=myspec ./node_modules/.bin/wdio wdio.conf
```
## Dynamic REPL with Atom
If you are an [Atom](https://atom.io/) hacker you can try [wdio-repl](https://github.com/kurtharriger/wdio-repl) by [@kurtharriger](https://github.com/kurtharriger), a dynamic REPL that lets you execute single lines of code in Atom. Watch [this](https://www.youtube.com/watch?v=kdM05ChhLQE) YouTube video to see a demo.

name: frameworks
category: testrunner
tags: guide
index: 2
title: WebdriverIO - Test Runner Frameworks
---
Frameworks
==========
The wdio runner currently supports [Mocha](http://mochajs.org/), [Jasmine](http://jasmine.github.io/) (v2.0) and [Cucumber](https://cucumber.io/). To integrate each framework with WebdriverIO there are adapter packages on NPM that need to be installed. Note that these packages need to be installed in the same place WebdriverIO is installed. If you've installed WebdriverIO globally, make sure the adapter package is installed globally as well.
Within your spec files or step definitions you can access the webdriver instance through the global variable `browser`. You don't need to initiate or end the Selenium session; that is taken care of by the wdio test runner.
## Using Mocha
First you need to install the adapter package from NPM:
```sh
npm install wdio-mocha-framework --save-dev
```
If you use Mocha you should additionally install an assertion library for more expressive tests, e.g. [Chai](http://chaijs.com). Initialise that library in the `before` hook in your configuration file:
```js
before: function() {
var chai = require('chai');
global.expect = chai.expect;
chai.Should();
}
```
Once that is done you can write beautiful assertions like:
```js
describe('my awesome website', function() {
it('should do some chai assertions', function() {
browser.url('http://webdriver.io');
browser.getTitle().should.be.equal('WebdriverIO - WebDriver bindings for Node.js');
});
});
```
WebdriverIO supports Mocha's `BDD` (default), `TDD` and `QUnit` [interfaces](https://mochajs.org/#interfaces). If you like to write your specs in the TDD style, set the `ui` property in your `mochaOpts` config to `tdd`. Your test files should then look like this:
```js
suite('my awesome website', function() {
test('should do some chai assertions', function() {
browser.url('http://webdriver.io');
browser.getTitle().should.be.equal('WebdriverIO - WebDriver bindings for Node.js');
});
});
```
If you want to define specific Mocha settings you can do that by adding `mochaOpts` to your configuration file. A list of all options can be found on the [project website](http://mochajs.org/).
Note that since all commands run synchronously there is no need to enable async mode in Mocha. Therefore, you can't use the `done` callback:
```js
it('should test something', function () {
done(); // throws "done is not a function"
})
```
If you want to run something asynchronously you can either use the [`call`](/api/utility/call.html) command or [custom commands](/guide/usage/customcommands.html).
## Using Jasmine
First you need to install the adapter package from NPM:
```sh
npm install wdio-jasmine-framework --save-dev
```
Jasmine already provides assertion methods you can use with WebdriverIO. So there is no need to add another one.
### Intercept Assertion
The Jasmine framework allows you to intercept each assertion in order to log the state of the application or website depending on the result. For example, it is pretty handy to take a screenshot every time an assertion fails. In your `jasmineNodeOpts` you can add a property called `expectationResultHandler` that takes a function to execute. The function's parameters give you information about the result of the assertion. The following example demonstrates how to take a screenshot if an assertion fails:
```js
jasmineNodeOpts: {
defaultTimeoutInterval: 10000,
expectationResultHandler: function(passed, assertion) {
/**
* only take screenshot if assertion failed
*/
if(passed) {
return;
}
browser.saveScreenshot('assertionError_' + assertion.error.message + '.png');
}
},
```
Please note that you can't stop the test execution to do something asynchronous. It might happen that the command takes too much time and the website state has changed by then. Usually, though, the screenshot is taken a couple of commands later, which still gives you valuable information about the error.
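Since the assertion message becomes part of a file name in the example above, it may also help to strip characters that are not filesystem-safe first. A small hypothetical helper (not part of WebdriverIO itself):

```js
// Reduce an arbitrary assertion message to a safe file name fragment.
function toFileName(message) {
    return message
        .replace(/[^A-Za-z0-9 _-]/g, '') // drop characters that are unsafe in file names
        .trim()
        .replace(/\s+/g, '_')            // collapse whitespace into underscores
        .slice(0, 80);                   // keep names reasonably short
}

console.log(toFileName('Expected "foo" to equal "bar"!')); // Expected_foo_to_equal_bar
```

You would then save the screenshot as `'assertionError_' + toFileName(assertion.error.message) + '.png'` instead.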
## Using Cucumber
First you need to install the adapter package from NPM:
```sh
npm install wdio-cucumber-framework --save-dev
```
If you want to use Cucumber, set the `framework` property to `cucumber`, either by adding `framework: 'cucumber'` to the [config file](/guide/testrunner/configurationfile.html) or by adding `-f cucumber` to the command line.
Options for Cucumber can be given in the config file with `cucumberOpts`. Check out the whole list of options [here](https://github.com/webdriverio/wdio-cucumber-framework#cucumberopts-options).
To get up and running quickly with Cucumber, have a look at our [cucumber-boilerplate](https://github.com/webdriverio/cucumber-boilerplate) project that comes with all the step definitions you will probably need, so you can start writing feature files right away.

name: gettingstarted
category: testrunner
tags: guide
index: 0
title: WebdriverIO - Test Runner
---
Getting Started
===============
WebdriverIO comes with its own test runner to help you get started with integration testing as quickly as possible. All the fiddling around hooking up WebdriverIO with a test framework belongs to the past. The WebdriverIO runner does all the work for you and helps you to run your tests as efficiently as possible.
To see the command line interface help just type the following command in your terminal:
```txt
$ ./node_modules/.bin/wdio --help
WebdriverIO CLI runner
Usage: wdio [options] [configFile]
config file defaults to wdio.conf.js
The [options] object will override values from the config file.
Options:
--help, -h prints WebdriverIO help menu
--version, -v prints WebdriverIO version
--host Selenium server host address
--port Selenium server port
--path Selenium server path (default: /wd/hub)
--user, -u username if using a cloud service as Selenium backend
--key, -k corresponding access key to the user
--watch watch specs for changes
--logLevel, -l level of logging verbosity (default: silent)
--coloredLogs, -c if true enables colors for log output (default: true)
--bail stop test runner after specific amount of tests have failed (default: 0 - don't bail)
--screenshotPath, -s saves a screenshot to a given path if a command fails
--baseUrl, -b shorten url command calls by setting a base url
--waitforTimeout, -w timeout for all waitForXXX commands (default: 1000ms)
--framework, -f defines the framework (Mocha, Jasmine or Cucumber) to run the specs (default: mocha)
--reporters, -r reporters to print out the results on stdout
--suite overwrites the specs attribute and runs the defined suite
--spec run only a certain spec file
--cucumberOpts.* Cucumber options, see the full list options at https://github.com/webdriverio/wdio-cucumber-framework#cucumberopts-options
--jasmineOpts.* Jasmine options, see the full list options at https://github.com/webdriverio/wdio-jasmine-framework#jasminenodeopts-options
--mochaOpts.* Mocha options, see the full list options at http://mochajs.org
```
Sweet! Now you need to define a configuration file where all information about your tests, capabilities and settings is set. Switch over to the [Configuration File](/guide/testrunner/configurationfile.html) section to find out what that file should look like. With the `wdio` configuration helper it is super easy to generate your config file. Just run:
```sh
$ ./node_modules/.bin/wdio config
```
and it launches the helper utility. It will ask you questions depending on the answers you give. This way you can generate your config file in less than a minute.
<div class="cliwindow" style="width: 92%">
![WDIO configuration utility](/images/config-utility.gif "WDIO configuration utility")
</div>
Once you have your configuration file set up you can start your
integration tests by calling:
```sh
$ ./node_modules/.bin/wdio wdio.conf.js
```
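To avoid typing the full path every time, you can also expose the runner through an npm script, so the tests run with a plain `npm test` (a hypothetical `package.json` excerpt):

```json
{
  "scripts": {
    "test": "wdio wdio.conf.js"
  }
}
```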
That's it! Now you can access the Selenium instance via the global variable `browser`.
## Run the test runner programmatically
Instead of calling the `wdio` command you can also include the test runner as a module and run it within any arbitrary environment. For that you need to require the launcher module (in `/node_modules/webdriverio/build/launcher`) the following way:
```js
var Launcher = require('webdriverio').Launcher;
```
After that, you create an instance of the launcher and run the test. The `Launcher` class expects the path to the config file as its first parameter and accepts [certain](https://github.com/webdriverio/webdriverio/blob/973f23d8949dae8168e96b1b709e5b19241a373b/lib/cli.js#L51-L55) parameters that will overwrite values in the config.
```js
var wdio = new Launcher(opts.configFile, opts);
wdio.run().then(function (code) {
process.exit(code);
}, function (error) {
console.error('Launcher failed to start the test', error.stacktrace);
process.exit(1);
});
```
The run command returns a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) that is resolved when the tests finished (whether they passed or failed), and rejected if the launcher was not able to start running the tests.

name: jenkins
category: testrunner
tags: guide
index: 7
title: WebdriverIO - Test Runner Jenkins Integration
---
Jenkins Integration
===================
WebdriverIO offers a tight integration with CI systems like [Jenkins](https://jenkins-ci.org/). With the [junit reporter](https://github.com/webdriverio/wdio-junit-reporter) you can easily debug your tests as well as keep track of your test results. The integration is pretty easy. There is a [demo project](https://github.com/christian-bromann/wdio-demo) we use in this tutorial to demonstrate how to integrate a WebdriverIO test suite with Jenkins.
First we need to define `junit` as a test reporter. Make sure you have it installed (`$ npm install --save-dev wdio-junit-reporter`) and that the xunit results are saved in a place where Jenkins can pick them up. Therefore, we define our reporter in our config as follows:
```js
// wdio.conf.js
module.exports = {
// ...
reporters: ['dot', 'junit'],
reporterOptions: {
junit: {
outputDir: './'
}
},
// ...
};
```
It is up to you which framework you choose; the reports will be similar. This tutorial uses Jasmine. After you have written a [couple of tests](https://github.com/christian-bromann/wdio-demo/tree/master/test/specs) you can begin to set up a new Jenkins job. Give it a name and a description:
![Name And Description](/images/jenkins-jobname.png "Name And Description")
Then make sure it always grabs the newest version of your repository:
![Jenkins Git Setup](/images/jenkins-gitsetup.png "Jenkins Git Setup")
Now the important part: create a build step to execute shell commands. The build step needs to build your project. Since this demo project only tests an external app we don't need to build anything; we just install the node dependencies and run our test command `npm test`, which is an alias for `node_modules/.bin/wdio test/wdio.conf.js`.
![Build Step](/images/jenkins-runjob.png "Build Step")
After our tests have run we want Jenkins to track the xunit report. To do so we add a post-build action called _"Publish JUnit test result report"_. You could also install an external xunit plugin to track your reports, but the JUnit one comes with the basic Jenkins installation and is sufficient for now.
According to our config file we store the xunit reports in the workspace root directory. These reports are XML files, so all we need to do in order to track them is point Jenkins to all XML files in the root directory:
![Post-build Action](/images/jenkins-postjob.png "Post-build Action")
That's it! This is all you need to setup Jenkins to run your WebdriverIO jobs. The only thing that didn't got mentioned is that Jenkins is setup in a way that it runs Node.js v0.12 and has the [Sauce Labs](https://saucelabs.com/) environment variables set in the settings.
Your job will now provide detailed test results with history charts, stacktrace information on failed jobs, as well as a list of commands with the payload that got used in each test.
![Jenkins Final Integration](/images/jenkins-final.png "Jenkins Final Integration")

name: organizing suites
category: testrunner
tags: guide
index: 4
title: WebdriverIO - Organize Test Suite
---
Organizing Test Suites
===================
As your project grows you will inevitably add more and more integration tests. This will increase your build time and slow down your productivity. To prevent this you should start to run your tests in parallel. You might have already noticed that WebdriverIO creates a single Selenium session for each spec file (or feature file in Cucumber). In general, try to test a single feature of your app in one spec file. Try not to have too many or too few tests in one file; however, there is no golden rule about that.
Once you have more and more spec files you should start running them concurrently. To do so, adjust the [`maxInstances`](https://github.com/webdriverio/webdriverio/blob/master/examples/wdio.conf.js#L52-L60) property in your config file. WebdriverIO allows you to run your tests with maximum concurrency, meaning that no matter how many files and tests you have, they can all run in parallel (within certain limits, such as the CPU of your machine or the concurrency restrictions of your grid).
> Let's say you have 3 different capabilities (Chrome, Firefox, and Safari) and you have set `maxInstances` to `1`: the wdio test runner will spawn 3 processes. If you have 10 spec files and you set `maxInstances` to `10`, all spec files will get tested at the same time and 30 processes will get spawned.
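The concurrency arithmetic in that example can be sketched as a simplified model (this helper is purely illustrative, not part of WebdriverIO):

```javascript
// Simplified model of the wdio test runner's concurrency: one worker process
// per spec file per capability, capped per capability by maxInstances.
function spawnedProcesses(specFiles, capabilities, maxInstances) {
    return Math.min(specFiles, maxInstances) * capabilities;
}

console.log(spawnedProcesses(10, 3, 1));  // 3  - one concurrent worker per capability
console.log(spawnedProcesses(10, 3, 10)); // 30 - all spec files run at the same time
```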
You can define the `maxInstances` property globally to set the attribute for all browsers. If you run your own Selenium grid it could be that you have more capacity for one browser than for another. In this case you can limit `maxInstances` in the capability object:
```js
// wdio.conf.js
exports.config = {
// ...
    // set maxInstances for all browsers
    maxInstances: 10,
    // ...
    capabilities: [{
        browserName: 'firefox'
    }, {
        // maxInstances can get overwritten per capability. So if you have an in-house
        // Selenium grid with only 5 Chrome instances available, you can make sure that
        // not more than 5 instances get started at a time.
        maxInstances: 5,
        browserName: 'chrome'
    }],
// ...
}
```
## Inherit From Main Config File
If you run your test suite in multiple environments (e.g. dev and integration) it can be helpful to have multiple configuration files to keep them easily manageable. Similar to the [page object concept](/guide/testrunner/pageobjects.html), you first create a main config file that contains all configurations you share across environments. Then, for each environment, you create a file that supplements the information from the main config file with environment-specific options:
```js
// wdio.dev.config.js
var merge = require('deepmerge');
var wdioConf = require('./wdio.conf.js');
// have main config file as default but overwrite environment specific information
exports.config = merge(wdioConf.config, {
capabilities: [
// more caps defined here
// ...
],
// run tests on Sauce Labs instead of locally
user: process.env.SAUCE_USERNAME,
key: process.env.SAUCE_ACCESS_KEY,
services: ['sauce']
});
// add an additional reporter
exports.config.reporters.push('allure');
```
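To see what the merge actually does, here is a minimal hand-rolled stand-in for the `deepmerge` package (illustrative only; use the real package in your config):

```javascript
// Minimal deep merge sketch: environment-specific values win over the defaults,
// while nested objects are merged rather than replaced wholesale.
function deepMerge(base, overrides) {
    var result = {};
    Object.keys(base).forEach(function (key) { result[key] = base[key]; });
    Object.keys(overrides).forEach(function (key) {
        var baseVal = result[key];
        var overVal = overrides[key];
        var bothPlainObjects = baseVal && overVal &&
            typeof baseVal === 'object' && typeof overVal === 'object' &&
            !Array.isArray(baseVal) && !Array.isArray(overVal);
        result[key] = bothPlainObjects ? deepMerge(baseVal, overVal) : overVal;
    });
    return result;
}

var mainConf = { logLevel: 'verbose', mochaOpts: { timeout: 10000 } };
var devConf = deepMerge(mainConf, { services: ['sauce'], mochaOpts: { timeout: 20000 } });

console.log(devConf.logLevel);          // 'verbose' - inherited from the main config
console.log(devConf.mochaOpts.timeout); // 20000 - overridden by the dev config
```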
## Group Test Specs
You can easily group test specs in suites and run single specific suites instead of all of them. To do so you first need to define your suites in your wdio config:
```js
// wdio.conf.js
exports.config = {
// define all tests
specs: ['./test/specs/**/*.spec.js'],
// ...
// define specific suites
suites: {
login: [
'./test/specs/login.success.spec.js',
'./test/specs/login.failure.spec.js'
],
otherFeature: [
// ...
]
},
// ...
}
```
If you now want to run a single suite only, you can pass the suite name as a CLI argument:
```sh
$ wdio wdio.conf.js --suite login
```
or run multiple suites at once
```sh
$ wdio wdio.conf.js --suite login,otherFeature
```
## Run Single Test Suites
If you are working on your WebdriverIO tests you don't want to execute the whole suite every time you add an assertion or change some code. With the `--spec` parameter you can specify which suite (Mocha, Jasmine) or feature (Cucumber) should be run. For example, if you only want to run your login test, do:
```sh
$ wdio wdio.conf.js --spec ./test/specs/e2e/login.js
```
Note that each test file runs in a single test runner process. Since we don't scan files in advance you _can't_ use e.g. `describe.only` at the top of your spec file to tell Mocha to only run that suite. The `--spec` parameter helps you achieve the same thing, though.
## Stop testing after failure
With the `bail` option you can specify when WebdriverIO should stop the test run after test failures. This can be helpful when you have a big test suite and want to avoid long test runs when you already know that your build will break. The option expects a number that specifies after how many spec failures the whole test run should stop. The default is `0`, meaning that it always runs all test specs it can find.
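For example, a config that aborts the whole run after the third failed spec could look like this:

```javascript
// wdio.conf.js
exports.config = {
    // ...
    // stop the whole test run once 3 spec files have failed (0 = never bail)
    bail: 3,
    // ...
};
```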

name: pageobjects
category: testrunner
tags: guide
index: 6
title: WebdriverIO - Page Object Pattern
---
Page Object Pattern
===================
The new version (v4) of WebdriverIO was designed with Page Object Pattern support in mind. By introducing the "elements as first class citizens" principle it is now possible to build up large test suites using this pattern. No additional packages are required to create page objects. It turns out that `Object.create` provides all the features we need:
- inheritance between page objects
- lazy loading of elements and
- encapsulation of methods and actions
The goal behind page objects is to abstract any page information away from the actual tests. Ideally you should store all selectors or specific instructions that are unique to a certain page in a page object, so that you can still run your tests after you've completely redesigned your page.
First off we need a main page object that we call `Page`. It contains general selectors and methods that all page objects will inherit. As opposed to all child page objects, `Page` is created using the prototype model:
```js
function Page () {
this.title = 'My Page';
}
Page.prototype.open = function (path) {
browser.url('/' + path)
}
module.exports = new Page()
```
Or, using an ES6 class:
```js
"use strict";
class Page {
constructor() {
this.title = 'My Page';
}
open(path) {
browser.url('/' + path);
}
}
module.exports = new Page();
```
We will always export an instance of a page object and never create that instance in the test. Since we are writing end-to-end tests we always see the page as a stateless construct, the same way each HTTP request is stateless. Sure, the browser can carry session information and therefore display different pages based on different sessions, but this shouldn't be reflected within a page object. These state changes should emerge from your actual tests.
Let's start testing the first page. For demo purposes we use the [The Internet](http://the-internet.herokuapp.com) website by [Elemental Selenium](http://elementalselenium.com/) as a guinea pig. Let's try to build a page object example for the [login page](http://the-internet.herokuapp.com/login). The first step is to write all important selectors that are required in our `login.page` object as getter functions. As mentioned above, we use the `Object.create` method to inherit the prototype of our main page:
```js
// login.page.js
var Page = require('./page')
var LoginPage = Object.create(Page, {
/**
* define elements
*/
username: { get: function () { return browser.element('#username'); } },
password: { get: function () { return browser.element('#password'); } },
form: { get: function () { return browser.element('#login'); } },
flash: { get: function () { return browser.element('#flash'); } },
/**
* define or overwrite page methods
*/
open: { value: function() {
Page.open.call(this, 'login');
} },
submit: { value: function() {
this.form.submitForm();
} }
});
module.exports = LoginPage;
```
Or, when using an ES6 class:
```js
// login.page.js
"use strict";
var Page = require('./page')
class LoginPage extends Page {
get username() { return browser.element('#username'); }
get password() { return browser.element('#password'); }
get form() { return browser.element('#login'); }
get flash() { return browser.element('#flash'); }
open() {
super.open('login');
}
submit() {
this.form.submitForm();
}
}
module.exports = new LoginPage();
```
Defining selectors in getter functions might look a bit verbose but it is really useful. These functions get evaluated when you actually access the property, not when you create the object. With that you always request the element right before you perform an action on it.
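The lazy evaluation can be demonstrated with plain JavaScript (no WebdriverIO involved; the element lookup is just counted here):

```javascript
// Plain-JS sketch: getters run on property access, not at object creation
var lookups = 0;

var page = Object.create(Object.prototype, {
    username: { get: function () {
        lookups += 1;        // a real page object would call browser.element('#username') here
        return '#username';
    } }
});

console.log(lookups); // 0 - creating the object triggered no lookup
page.username;        // each access re-evaluates the getter ...
page.username;
console.log(lookups); // 2 - ... so the element is always freshly requested
```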
WebdriverIO internally remembers the last result of a command. If you chain an element command with an action command it finds the element from the previous command and uses the result to execute the action. With that you can remove the selector (first parameter) and the command looks as simple as:
```js
LoginPage.username.setValue('Max Mustermann');
```
which is basically the same thing as:
```js
var elem = browser.element('#username');
elem.setValue('Max Mustermann');
```
or
```js
browser.element('#username').setValue('Max Mustermann');
```
or
```js
browser.setValue('#username', 'Max Mustermann');
```
After we've defined all required elements and methods for the page, we can start to write the test for it. All we need to do to use the page object is to require it. The `Object.create` method returns an instance of that page, so we can start using it right away. By adding an additional assertion framework you can make your tests even more expressive:
```js
// login.spec.js
var expect = require('chai').expect;
var LoginPage = require('../pageobjects/login.page');
describe('login form', function () {
it('should deny access with wrong creds', function () {
LoginPage.open();
LoginPage.username.setValue('foo');
LoginPage.password.setValue('bar');
LoginPage.submit();
expect(LoginPage.flash.getText()).to.contain('Your username is invalid!');
});
it('should allow access with correct creds', function () {
LoginPage.open();
LoginPage.username.setValue('tomsmith');
LoginPage.password.setValue('SuperSecretPassword!');
LoginPage.submit();
expect(LoginPage.flash.getText()).to.contain('You logged into a secure area!');
});
});
```
Structurally it makes sense to separate spec files and page objects and put them into different directories. Additionally you can give each page object the file ending `.page.js`. This makes it easy to see that you are actually requiring a page object when you write `var LoginPage = require('../pageobjects/login.page');`.
This is the basic principle of how to write page objects with WebdriverIO. Note that you can build up far more complex page object structures than this. For example, you can have specific page objects for modals, or split a huge page object into different section objects that inherit from the main page object. The pattern really gives you a lot of opportunities to encapsulate page information away from your actual tests, which is important to keep your test suite structured and clear as the project and the number of tests grow.
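As a sketch of that idea (all names here are hypothetical), a section object can inherit from the main page object via `Object.create` just like a regular page:

```javascript
// Hypothetical example: a header section that inherits shared behavior from Page
var Page = Object.create(Object.prototype, {
    // a real page object would call browser.url('/' + path) here
    open: { value: function (path) { return '/' + path; } }
});

var HeaderSection = Object.create(Page, {
    // a real section object would return browser.element('nav .logo') here
    logo: { get: function () { return 'nav .logo'; } }
});

console.log(HeaderSection.open('home')); // '/home' - inherited from Page
console.log(HeaderSection.logo);         // 'nav .logo'
```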
You can find this and some more page object examples in the [example folder](https://github.com/webdriverio/webdriverio/tree/master/examples/pageobject) on GitHub.

name: Retry Flaky Tests
category: testrunner
tags: guide
index: 9
title: WebdriverIO - Retry Flaky Tests
---
Retry Flaky Tests
=================
You can rerun certain tests with the WebdriverIO test runner if they turn out to be unstable due to, e.g., a flaky network or race conditions. However, it is not recommended to just increase the rerun rate whenever tests become unstable.
## Rerun suites in MochaJS
Since version 3 of MochaJS you can rerun whole test suites (everything inside a `describe` block). If you use Mocha you should favor this retry mechanism over the WebdriverIO implementation, which only allows you to rerun certain test blocks (everything within an `it` block). Here is an example of how to rerun a whole suite in MochaJS:
```js
describe('retries', function() {
// Retry all tests in this suite up to 4 times
this.retries(4);
beforeEach(function () {
browser.url('http://www.yahoo.com');
});
it('should succeed on the 3rd try', function () {
// Specify this test to only retry up to 2 times
this.retries(2);
console.log('run');
expect(browser.isVisible('.foo')).to.eventually.be.true;
});
});
```
## Rerun single tests in Jasmine or Mocha
To rerun a certain test block, just pass the number of reruns as the last parameter after the test block function:
```js
describe('my flaky app', function () {
/**
* spec that runs max 4 times (1 actual run + 3 reruns)
*/
it('should rerun a test at least 3 times', function () {
// ...
}, 3);
});
```
The same works for hooks too:
```js
describe('my flaky app', function () {
/**
* hook that runs max 2 times (1 actual run + 1 rerun)
*/
beforeEach(function () {
// ...
}, 1)
// ...
});
```
It is __not__ possible to rerun whole suites, only hooks or test blocks. To use this feature you need the [wdio-mocha-framework](https://github.com/webdriverio/wdio-mocha-framework) adapter `v0.3.0` or greater, or the [wdio-jasmine-framework](https://github.com/webdriverio/wdio-jasmine-framework) adapter `v0.2.0` or greater.
## Rerun Step Definitions in Cucumber
To define a rerun rate for a certain step definition, just apply a retry option to it, like:
```js
module.exports = function () {
/**
* step definition that runs max 3 times (1 actual run + 2 reruns)
*/
this.Given(/^some step definition$/, { retry: 2 }, () => {
// ...
})
    // ...
}
```
Reruns can only be defined in your step definition file, not in your feature file. To use this feature you need the [wdio-cucumber-framework](https://github.com/webdriverio/wdio-cucumber-framework) adapter `v0.1.0` or greater.

name: Timeouts
category: testrunner
tags: guide
index: 5
title: WebdriverIO - Timeouts
---
Timeouts
========
Each command in WebdriverIO is an asynchronous operation where a request is fired to the Selenium server (or a cloud service like [Sauce Labs](https://saucelabs.com/)), and its response contains the result once the action has completed or failed. Therefore time is a crucial component in the whole testing process. When a certain action depends on the state of a different action, you need to make sure that they get executed in the right order. Timeouts play an important role when dealing with these issues.
## Selenium timeouts
### Session Script Timeout
A session has an associated session script timeout that specifies a time to wait for asynchronous scripts to run. Unless stated otherwise it is 30 seconds. You can set this timeout via:
```js
browser.timeouts('script', 60000);
browser.executeAsync(function (done) {
console.log('this should not fail');
setTimeout(done, 59000);
});
```
### Session Page Load Timeout
A session has an associated session page load timeout that specifies a time to wait for the page loading to complete. Unless stated otherwise it is 300,000 milliseconds. You can set this timeout via:
```js
browser.timeouts('pageLoad', 10000);
```
> The `pageLoad` keyword is a part of the official WebDriver [specification](https://www.w3.org/TR/webdriver/#set-timeouts), but might not be [supported](https://github.com/seleniumhq/selenium-google-code-issue-archive/issues/687) for your browser (the previous name is `page load`).
### Session Implicit Wait Timeout
A session has an associated session implicit wait timeout that specifies a time to wait for the implicit element location strategy when locating elements using the [`element`](/api/protocol/element.html) or [`elements`](/api/protocol/elements.html) commands. Unless stated otherwise it is zero milliseconds. You can set this timeout via:
```js
browser.timeouts('implicit', 5000);
```
## WebdriverIO related timeouts
### WaitForXXX timeout
WebdriverIO provides multiple commands to wait until elements reach a certain state (e.g. enabled, visible, existing). These commands take a selector argument and a timeout number which declares how long the instance should wait for that element to reach the state. The `waitforTimeout` option allows you to set the global timeout for all `waitFor*` commands so you don't need to set the same timeout over and over again. Note the lowercase `f` in the option name.
```js
// wdio.conf.js
exports.config = {
// ...
waitforTimeout: 5000,
// ...
};
```
In your tests you can now do this:
```js
var myElem = browser.element('#myElem');
myElem.waitForVisible();
// which is the same as
browser.waitForVisible('#myElem');
// which is the same as
browser.waitForVisible('#myElem', 5000);
```
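Conceptually, a `waitFor*` command is a polling loop bounded by that timeout. Here is a simplified, synchronous sketch (the real implementation additionally sleeps between checks and talks to the browser):

```javascript
// Simplified model of a waitFor* command: poll a condition until it
// holds or the timeout budget is exhausted.
function waitFor(condition, timeout, interval) {
    for (var waited = 0; waited <= timeout; waited += interval) {
        if (condition()) {
            return true; // the real command would sleep `interval` ms between checks
        }
    }
    throw new Error('still waiting after ' + timeout + 'ms');
}

// A condition that only becomes true on the third check:
var checks = 0;
console.log(waitFor(function () { return ++checks >= 3; }, 5000, 250)); // true
```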
## Framework related timeouts
The testing framework you use with WebdriverIO also has to deal with timeouts, especially since everything is asynchronous. It ensures that the test process doesn't get stuck if something goes wrong. By default the timeout is set to 10 seconds, which means that a single test should not take longer than that. A single test in Mocha looks like:
```js
it('should login into the application', function () {
browser.url('/login');
var form = browser.element('form');
var username = browser.element('#username');
var password = browser.element('#password');
username.setValue('userXY');
password.setValue('******');
form.submit();
expect(browser.getTitle()).to.be.equal('Admin Area');
});
```
In Cucumber the timeout applies to a single step definition. However, if you want to increase the timeout because your test takes longer than the default value, you need to set it in the framework options. For Mocha:
```js
// wdio.conf.js
exports.config = {
// ...
framework: 'mocha',
mochaOpts: {
timeout: 20000
},
// ...
}
```
For Jasmine:
```js
// wdio.conf.js
exports.config = {
// ...
framework: 'jasmine',
jasmineNodeOpts: {
defaultTimeoutInterval: 20000
},
// ...
}
```
and for Cucumber:
```js
// wdio.conf.js
exports.config = {
// ...
framework: 'cucumber',
cucumberOpts: {
timeout: 20000
},
// ...
}
```