
phantomcss's Introduction

Unmaintained notice: As of December 22nd 2017 this project will no longer be maintained. It's been a fantastic five years, a project that has hopefully had a positive influence on the shape and extent of Web UI testing. Read more on why it's time to move on.


Cute image of a ghost

CSS regression testing. A CasperJS module for automating visual regression testing with PhantomJS 2 or SlimerJS and Resemble.js. For testing Web apps, live style guides and responsive layouts. Read more on Huddle's Engineering blog: CSS Regression Testing.

Huddle is Hiring! We're looking for talented front-end engineers

What?

PhantomCSS takes screenshots captured by CasperJS and compares them to baseline images using Resemble.js to test for RGB pixel differences. PhantomCSS then generates image diffs to help you find the cause.

A failed visual regression test, pink areas show where padding has changed.

Screenshot-based regression testing can only work when the UI is predictable. It's possible to hide mutable UI components with PhantomCSS, but it is better to test static pages or to drive the UI with faked data during test runs.

Example

casper
	.start( url )
	.then(function(){

		// do something
		casper.click('button#open-dialog');

		// Take a screenshot of the UI component
		phantomcss.screenshot('#the-dialog', 'a screenshot of my dialog');

	});

From the command line/terminal run:

  • casperjs test demo/testsuite.js

or

  • casperjs test demo/testsuite.js --verbose --log-level=debug --xunit=results.xml

Updating from PhantomCSS v0 to v1

Rendering is quite different with PhantomJS 2, so when you update, old visual tests will start failing. If your tests are green and passing before updating, I would recommend rebasing the visual tests, i.e. deleting them and running the test suite to create a new baseline.

You can still use the v0 branch if you wish, though it is now unmaintained.

Download

PhantomCSS can be downloaded in various ways:

  • npm install phantomcss (PhantomCSS is not itself a Node.js module)
  • bower install phantomcss
  • git clone git://github.com/Huddle/PhantomCSS.git

If you are not installing via NPM, you will need to run npm install in the PhantomCSS root folder.

Please note that depending on how you have installed PhantomCSS you may need to set the libraryRoot configuration property to link to the directory in which phantomcss.js resides.
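For example, after an npm install the init call might point at the module folder like this (the path is illustrative; adjust it to wherever phantomcss.js actually lives in your project):

```javascript
phantomcss.init({
	// Directory containing phantomcss.js - here, an npm install
	libraryRoot: './node_modules/phantomcss'
});
```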

Getting started, try the demo

  • For convenience I've included CasperJS.bat for Windows users. If you are not a Windows user, you will have to install the latest version of CasperJS.
  • Download or clone this repo and run casperjs test demo/testsuite.js in command/terminal from the PhantomCSS folder. PhantomJS is the only binary dependency - this should just work
  • Find the screenshot folder and have a look at the (baseline) images
  • Run the tests again with casperjs test demo/testsuite.js. New screenshots will be created to compare against the baseline images. These new images can be ignored; they will be replaced on every test run.
  • To test failure, add/change some CSS in the file demo/coffeemachine.html e.g. make .mug bright green
  • Run the tests again, you should see some reported failures
  • In the failures folder some images should have been created. The images should show bright pink where the screenshot has visually changed
  • If you want to manually compare the images, go to the screenshot folder to see the original/baseline and latest screenshots

SlimerJS

SlimerJS uses the Gecko browser engine rather than Webkit. This has some advantages over PhantomJS, such as a non-headless view. If this is of interest to you, please follow the download and install instructions and ensure SlimerJS is installed globally.

  • casperjs test demo/testsuite.js --engine=slimerjs

Options and setup

If you are using SlimerJS, you will need to specify absolute paths (see 'demo').

phantomcss.init({

	/*
		captureWaitEnabled defaults to true; setting it to false removes a
		small wait/delay on each screenshot capture - useful when you don't
		need to worry about animations and latency in your visual tests.
	*/
	captureWaitEnabled: true,

	/*
		libraryRoot is now optional unless you are using SlimerJS where
		you will need to set it to the correct path. It must point to
		your phantomcss folder. If you are using NPM, this will probably
		be './node_modules/phantomcss'.
	*/
	libraryRoot: './modules/PhantomCSS',

	screenshotRoot: './screenshots',

	/*
		By default, failure images are put in the './failures' folder. 
		If failedComparisonsRoot is set to false a separate folder will 
		not be created but failure images can still be found alongside 
		the original and new images.
	*/
	failedComparisonsRoot: './failures',

	/*
		Remove results directory tree after run.  Use in conjunction 
		with failedComparisonsRoot to see failed comparisons.
	*/
	cleanupComparisonImages: true,

	/*
		A reference to a particular Casper instance. Required for SlimerJS.
	*/
	casper: specific_instance_of_casper,

	/*
		You might want to keep master/baseline images in a completely 
		different folder to the diffs/failures.  Useful when working 
		with version control systems. By default this resolves to the 
		screenshotRoot folder.
	*/
	comparisonResultRoot: './results',

	/* 
		Don't add count number to images. If set to false, a filename is 
		required when capturing screenshots.
	*/
	addIteratorToImage: false,

	/*
		Don't add label to generated failure image
	*/
	addLabelToFailedImage: false,

	/*
		Mismatch tolerance defaults to 0.05%. Increasing this value
		will decrease test coverage.
	*/
	mismatchTolerance: 0.05,

	/*
		Callbacks for your specific integration
	*/
	onFail: function(test){ console.log(test.filename, test.mismatch); },
	
	onPass: function(test){ console.log(test.filename); },
	
	/* 
		Called when creating new baseline images
	*/
	onNewImage: function(test){ console.log(test.filename); },
	
	onTimeout: function(test){ console.log(test.filename); },
	
	onComplete: function(allTests, noOfFails, noOfErrors){
		allTests.forEach(function(test){
			if(test.fail){
				console.log(test.filename, test.mismatch);
			}
		});
	},

	onCaptureFail: function(ex, target) { console.log('Capture of ' + target + ' failed due to ' + ex.message); },

	/*
		Change the output screenshot filenames for your specific 
		integration
	*/
	fileNameGetter: function(root, filename){
		// globally override output filename
		// files must exist under root
		// and use the .diff convention
		// (fs here is PhantomJS's file system module: var fs = require('fs');)
		var name = root + '/somewhere/' + filename;
		if(fs.isFile(name + '.png')){
			return name + '.diff.png';
		} else {
			return name + '.png';
		}
	},

	/*
		Prefix the screenshot number to the filename, instead of suffixing it
	*/
	prefixCount: true,

	/*
		Output styles for image failure outputs generated by Resemble.js
	*/
	outputSettings: {
		errorColor: {
			red: 255,
			green: 255,
			blue: 0
		},
		errorType: 'movement',
		transparency: 0.3
	},

	/*
		Rebase is useful when you want to create new baseline
		images without manually deleting the files, e.g.
		casperjs test demo/testsuite.js --rebase
	*/
	rebase: casper.cli.get("rebase"),

	/*
		If true, test will fail when captures fail (e.g. no element matching selector).
	 */
	failOnCaptureError: false
});

/*
	Turn off CSS transitions and jQuery animations
*/
phantomcss.turnOffAnimations();
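Conceptually, disabling animations amounts to switching off script-driven effects (e.g. setting jQuery.fx.off) and neutralising CSS transitions with a rule along these lines (a sketch of the idea, not the exact stylesheet PhantomCSS injects):

```css
/* Force every element to render its final state immediately */
* {
	transition: none !important;
	animation: none !important;
}
```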

Don't like pink?

A failed visual regression test, yellow areas show where the icon has enlarged and pushed other elements down.

phantomcss.init({
	/*
		Output styles for image failure outputs generated by Resemble.js
	*/
	outputSettings: {

		/*
			Error pixel color, RGB, anything you want, 
			though bright and ugly works best!
		*/
		errorColor: {
			red: 255,
			green: 255,
			blue: 0
		},
		
		/*
			ErrorType values include 'flat', or 'movement'.  
			The latter merges error color with base image
			which makes it a little easier to spot movement.
		*/
		errorType: 'movement',
		
		/*
			Fade unchanged areas to make changed areas more apparent.
		*/
		transparency: 0.3
	}
});

There are different ways to take a screenshot

var delay = 10;
var hideElements = 'input[type=file]';
var screenshotName = 'the_dialog';

phantomcss.screenshot( "#CSS .selector", screenshotName);

// phantomcss.screenshot({
//  	'Screenshot 1 File name': {selector: '.screenshot1', ignore: '.selector'},
//  	'Screenshot 2 File name': '#screenshot2'
// });
// phantomcss.screenshot( "#CSS .selector" );
// phantomcss.screenshot( "#CSS .selector", delay, hideElements, screenshotName);

// phantomcss.screenshot({
//   top: 100,
//   left: 100,
//   width: 500,
//   height: 400
// }, screenshotName);

Compare the images when and how you want

/*
	String is converted into a Regular expression that matches on full image path
*/
phantomcss.compareAll('exclude.test'); 

// phantomcss.compareMatched('include.test', 'exclude.test');
// phantomcss.compareMatched( new RegExp('include.test'), new RegExp('exclude.test'));

/*
	Compare image diffs generated in this test run only
*/
// phantomcss.compareSession();

/*
	Explicitly define what files you want to compare
*/
// phantomcss.compareExplicit(['/dialog.diff.png', '/header.diff.png']);

/*
	Get a list of image diffs generated in this test run
*/
// phantomcss.getCreatedDiffFiles();

/*
	Compare any two images, and wait for the results to complete
*/
// phantomcss.compareFiles(baseFile, diffFile);
// phantomcss.waitForTests();
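As a rough, file-system-free sketch of how a string exclusion filter behaves (the real comparison logic lives in phantomcss.js; the paths here are made up):

```javascript
// Sketch: how a string exclusion filter might be applied to diff paths.
// 'exclude.test' becomes /exclude.test/ and any matching path is skipped.
function filterDiffs(diffPaths, excludeString) {
	var exclude = excludeString ? new RegExp(excludeString) : null;
	return diffPaths.filter(function (path) {
		return !(exclude && exclude.test(path));
	});
}

var diffs = [
	'./screenshots/dialog.diff.png',
	'./screenshots/exclude.test/header.diff.png'
];
console.log(filterDiffs(diffs, 'exclude.test'));
// Only './screenshots/dialog.diff.png' survives
```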

Best Practices

Name your screenshots!

By default PhantomCSS creates a file called screenshot_0.png, which is not very helpful. You can name your screenshot by passing a string as either the second or fourth parameter.

var delay, hideElementsSelector;

phantomcss.screenshot("#feedback-form", delay, hideElementsSelector, "Responsive Feedback Form");

phantomcss.screenshot("#feedback-form", "Responsive Feedback Form");

Perhaps a better way is to use the ‘fileNameGetter’ callback property on the ‘init’ method. This does involve having a bit more structure around your tests. See: https://github.com/Huddle/PhantomFlow/blob/master/lib/phantomCSSAdaptor.js#L41
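The naming logic such a fileNameGetter needs is small. A self-contained sketch of the .diff convention, with the file-existence check injected so it can run anywhere (names and paths are illustrative):

```javascript
// Sketch of the baseline/diff naming convention used by fileNameGetter:
// the first capture becomes <name>.png (the baseline), subsequent captures
// become <name>.diff.png so they can be compared against the baseline.
function resolveScreenshotName(root, filename, isFile) {
	var name = root + '/' + filename;
	return isFile(name + '.png') ? name + '.diff.png' : name + '.png';
}

// Stand-in for fs.isFile so the sketch runs outside PhantomJS
var existing = { './screenshots/dialog.png': true };
function isFile(path) { return !!existing[path]; }

console.log(resolveScreenshotName('./screenshots', 'dialog', isFile));
// → './screenshots/dialog.diff.png' (a baseline already exists)
console.log(resolveScreenshotName('./screenshots', 'header', isFile));
// → './screenshots/header.png' (first run, becomes the baseline)
```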

CSS3 selectors for testing

Try not to use complex CSS3 selectors for asserting or creating screenshots. In the same way that CSS should be written with good content/container separation, so should your test selectors be agnostic of location/context. This might mean you need to add more IDs or data- attributes into your mark-up, but it's worth it: your tests will be more stable and more explicit. This is not a good idea:

phantomcss.screenshot("#sidebar li:nth-child(3) > div form");

But this is:

phantomcss.screenshot("#feedback-form");
PhantomCSS should not be used to replace functional tests

If you needed functional tests before, then you still need them. Automated visual regression testing gives us coverage of CSS and design in a way we didn't have before, but that doesn't mean that conventional test assertions are now invalid. Feedback time is crucial with test automation: the longer it takes, the easier it is to ignore; and the easier it is to ignore, the sooner trust is lost from the team. Unfortunately, comparing images is not, and never will be, as fast as a simple DOM assertion.

Don't try to test all the visuals

I'd argue this applies to all automated testing approaches. As a rule, try to maximise coverage with fewer tests. This is a difficult balancing act because granular feedback/reporting is very important for debugging and build analysis. Testing many things in one assert/screenshot might tell you there is a problem, but makes it harder to get to the root of the bug. As a CSS/HTML dev you'll know which components are more fragile than others, and which are reused and which aren't; concentrate your visual tests on these areas.

Full page screenshots are a bad idea

If you try to test too much in one screenshot then you could end up with lots of failing tests every time someone makes a small change. Say you've set up full-page visual regression tests for your 50-page website, and someone adds 2px padding to the footer - that's 50 failed tests because of one change. It's better to test UI components individually; in this example the footer could have its own test. There is also a technical problem with this approach: the larger the image, the longer it takes to process. An added pixel of padding on the page body will offset everything; at best you'll have a sea of pink in the failed diff, at worst you'll get a TIMEOUT because it took too long to analyse.

Scaling visual regression testing within a large team

Scaling your test suite for many contributors may not be easy. Resemble.js (the core analysis engine of PhantomCSS) tries to account for image differences caused by different operating systems and graphics cards, but it's only so good; you are likely to see problems as more people contribute baseline screenshots. You can mitigate this by hiding problematic elements such as select elements, file upload inputs etc., like so:

phantomcss.screenshot("#feedback-form", undefined, 'input[type=file]');

Below is an example of a false positive caused by antialiasing differences on different machines. How can we solve this? Contributions welcome!

Three images: baseline, latest and diff, where antialiasing has caused the failed diff

Scaling visual regression testing with Git

If you're using a version control system like Git to store the baseline screenshots, the repository size becomes increasingly relevant as your test suite grows. I'd recommend using a tool like https://github.com/joeyh/git-annex or https://github.com/schacon/git-media to store the images outside of the repo.

...You might also be interested in

PhantomFlow and grunt-phantomflow wrap PhantomCSS and provide an experimental way of describing and visualising user flows through tests with CasperJS. As well as providing a terse, readable structure for UI testing, it also produces intriguing graph visualisations that can be used to present PhantomCSS screenshots and failed diffs. We're actively using it at Huddle and it's changing the way we think about UI for the better.

Also, take a look at PhantomXHR for stubbing and mocking XHR requests. Isolated UI testing IS THE FUTURE!

Why is this project no longer maintained?

The introduction of headless Chrome has simply meant that PhantomJS is no longer the best tool for running browser tests. Huddle is making efforts to move away from PhantomJS-based testing, largely to gain better coverage of new browser features such as CSS grid. Interestingly, there still doesn't seem to be a straight replacement for PhantomCSS for Chrome, perhaps because it is now far easier to roll your own VRT suite. The Huddle development team is now actively looking into using Docker containers for running Mocha/Chai test suites against headless Chrome, using Resemble.js directly in Node.js for image comparison.

Huddle Careers

Huddle strongly believe in innovation and give you 20% of work time to spend on innovative projects of your choosing.

If you like what you see and would like to work on this kind of stuff for a job then get in touch.

Visit http://www.huddle.com/careers for open vacancies now, or register your interest for the future.


PhantomCSS was created by James Cryer and the Huddle development team.

phantomcss's People

Contributors

baseinfinity, bbarwick-fdg, blairlearn, cazgp, chris-dura, crocket, designjockey, doryphores, filipchalupa, fracmak, fuhlig, hdennison, jfreax, maks3w, matt-in-a-hat, maxbarrett, mcnuttandrew, micahgodbolt, nighttrax, ntzm, pgmullan, raveclassic, sedovsek, thingsinjars, tony2nite, unional


phantomcss's Issues

Ignore whitespace/size differences?

Hi,
Just getting going with PhantomCSS, and liking it very much. One issue I am getting is where a selector has changed size (for example, something outside of it is holding a longer string). This change I do not want to capture; is there a way to get PhantomCSS to ignore any differences between images that are just whitespace? Or to get it to ignore any additional size on the second image being compared to the baseline?
thanks!

How are fail images created because ours aren't created as expected

I'm working with an implementation of PhantomCSS which has a custom test runner to interface between Phantom & our source tree. The app being tested is a Django app running a local server, all managed and run through Apache Ant.

So instead of reading the files directly, we're calling URLs and having Phantom run that way using our local server.

There doesn't seem to be a problem with our baseline images, and the creation of the diff images, but when fail images are created they only capture the img in the header of our pages :(

I assume the issue is with the following line which just finds that img on the page, rather than what is meant to happen usually with Phantom...
casper.captureSelector(failFile, 'img');

How does the creation of the .fail.png work?

If I change the phantom source to..
casper.captureSelector(failFile, 'body');

The fail image captures what I would expect to see, but above the bar with the file name, it's captured the screen ('body') of our app also. I'm not sure if that helps.

TypeError when calling phantomcss.compareAll()

I'm developing a small proof-of-concept using PhantomCSS on a Windows machine and am getting errors when calling phantomcss.compareAll() when I've taken more than a single screenshot. I'm running with casperjs version 1.1.0-beta3.

testsuite.js

var phantomcss = require('../node_modules/phantomcss/phantomcss.js');

phantomcss.init({
  libraryRoot: './node_modules/phantomcss',
  screenshotRoot: './test/screenshots',
  failedComparisonsRoot: false
});

casper.test.begin('Prototype perceived difference setup', 0, function (test) {

  casper.start('http://localhost:8080');

  casper.then(function () {
    phantomcss.screenshot('h1', 'header');
    phantomcss.screenshot('p', 'body content');
  });

  casper.then(function () {
    phantomcss.compareAll();
  });

  casper.then(function () {
    casper.test.done();
  });

  casper.run(function  () {
    phantom.exit(phantomcss.getExitStatus());
  });
});

Calling this for the first time creates two screenshots as expected with no errors. Here is the log I get when I run the test suite a second time:

Test file: test/testsuite.js
# Prototype perceived difference setup
FAIL TypeError: '/\.diff\./' is not a function (evaluating 'casper.fill('form#im
age-diff', {

                'one': one,

                'two': two

        })')
#    type: uncaughtError
#    file: test/testsuite.js:236
#    error: '/\.diff\./' is not a function (evaluating 'casper.fill('form#image-
diff', {

                'one': one,

                'two': two

        })')
#           TypeError: '/\.diff\./' is not a function (evaluating 'casper.fill('
form#image-diff', {

#                       'one': one,

#                       'two': two

#               })')
#               at asyncCompare (C:/Users/Kriss/dev/myrtle/node_modules/phantomc
ss/phantomcss.js:236)
#               at C:/Users/Kriss/dev/myrtle/node_modules/phantomcss/phantomcss.
js:405
#               at runStep (C:/Users/Kriss/AppData/Roaming/npm/node_modules/casp
erjs/modules/casper.js:1553)
#               at checkStep (C:/Users/Kriss/AppData/Roaming/npm/node_modules/ca
sperjs/modules/casper.js:399)
#    stack: not provided
FAIL TypeError: '/\.diff\./' is not a function (evaluating 'casper.fill('form#im
age-diff', {

                'one': one,

                'two': two

        })')
#    type: uncaughtError
#    file: test/testsuite.js:236
#    error: '/\.diff\./' is not a function (evaluating 'casper.fill('form#image-
diff', {

                'one': one,

                'two': two

        })')
#           TypeError: '/\.diff\./' is not a function (evaluating 'casper.fill('
form#image-diff', {

#                       'one': one,

#                       'two': two

#               })')
#               at asyncCompare (C:/Users/Kriss/dev/myrtle/node_modules/phantomc
ss/phantomcss.js:236)
#               at C:/Users/Kriss/dev/myrtle/node_modules/phantomcss/phantomcss.
js:405
#               at runStep (C:/Users/Kriss/AppData/Roaming/npm/node_modules/casp
erjs/modules/casper.js:1553)
#               at checkStep (C:/Users/Kriss/AppData/Roaming/npm/node_modules/ca
sperjs/modules/casper.js:399)
#    stack: not provided
FAIL 2 tests executed in 1.851s, 0 passed, 2 failed, 0 dubious, 0 skipped.

Details for the 2 failed tests:

In test/testsuite.js:236
  Prototype perceived difference setup
    uncaughtError: TypeError: '/\.diff\./' is not a function (evaluating 'casper
.fill('form#image-diff', {

                'one': one,

                'two': two

        })')
In test/testsuite.js:236
  Prototype perceived difference setup
    uncaughtError: TypeError: '/\.diff\./' is not a function (evaluating 'casper
.fill('form#image-diff', {

                'one': one,

                'two': two

        })')

Done, without errors.

In an effort to determine whether this was something in my code that wasn't set up correctly, I cloned the PhantomCSS repository and ran the demo testsuite. Again, the first run was as expected, but the second run resulted in the following:

C:\Users\Kriss\dev\PhantomCSS [master]> casperjs test .\demo\testsuite.js
C:\Users\Kriss\dev\PhantomCSS [master]> Test file: .\demo\testsuite.js


New screenshot at .\screenshots\open coffee machine button_0.png


New screenshot at .\screenshots\coffee machine dialog_1.png


New screenshot at .\screenshots\cappuccino success_2.png


New screenshot at .\screenshots\coffee machine close success_3.png

Must be your first time?
Some screenshots have been generated in the directory .\screenshots
This is your 'baseline', check the images manually. If they're wrong, delete the
 images.
The next time you run these tests, new screenshots will be taken.  These screens
hots will be compared to the original.
If they are different, PhantomCSS will report a failure.
WARN Looks like you didn't run any test.

C:\Users\Kriss\dev\PhantomCSS [master +1 ~0 -0 !]> casperjs test .\demo\testsuit
e.js
C:\Users\Kriss\dev\PhantomCSS [master +1 ~0 -0 !]> Test file: .\demo\testsuite.j
s
FAIL TypeError: 'undefined' is not an object (evaluating 't.nodeName.toLowerCase
')
#    type: uncaughtError
#    file: .\demo\testsuite.js
#    error: {"message":"'undefined' is not an object (evaluating 't.nodeName.toL
owerCase')","line":4,"sourceId":83498024,"sourceURL":"http://code.jquery.com/jqu
ery-1.9.1.min.js","stack":"TypeError: 'undefined' is not an object (evaluating '
t.nodeName.toLowerCase')\n    at http://code.jquery.com/jquery-1.9.1.min.js:4\n
   at asyncCompare (C:/Users/Kriss/dev/PhantomCSS/phantomcss.js:236)\n    at C:/
Users/Kriss/dev/PhantomCSS/phantomcss.js:405\n    at runStep (C:/Users/Kriss/App
Data/Roaming/npm/node_modules/casperjs/modules/casper.js:1553)\n    at checkStep
 (C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/casper.js:399
)","stackArray":[{"sourceURL":"http://code.jquery.com/jquery-1.9.1.min.js","line
":4},{"function":"asyncCompare","sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phant
omcss.js","line":236},{"sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phantomcss.js"
,"line":405},{"function":"runStep","sourceURL":"C:/Users/Kriss/AppData/Roaming/n
pm/node_modules/casperjs/modules/casper.js","line":1553},{"function":"checkStep"
,"sourceURL":"C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/c
asper.js","line":399}]}
#    stack: not provided
FAIL TypeError: 'undefined' is not an object (evaluating 't.nodeName.toLowerCase
')
#    type: uncaughtError
#    file: .\demo\testsuite.js
#    error: {"message":"'undefined' is not an object (evaluating 't.nodeName.toL
owerCase')","line":4,"sourceId":83498024,"sourceURL":"http://code.jquery.com/jqu
ery-1.9.1.min.js","stack":"TypeError: 'undefined' is not an object (evaluating '
t.nodeName.toLowerCase')\n    at http://code.jquery.com/jquery-1.9.1.min.js:4\n
   at asyncCompare (C:/Users/Kriss/dev/PhantomCSS/phantomcss.js:236)\n    at C:/
Users/Kriss/dev/PhantomCSS/phantomcss.js:405\n    at runStep (C:/Users/Kriss/App
Data/Roaming/npm/node_modules/casperjs/modules/casper.js:1553)\n    at checkStep
 (C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/casper.js:399
)","stackArray":[{"sourceURL":"http://code.jquery.com/jquery-1.9.1.min.js","line
":4},{"function":"asyncCompare","sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phant
omcss.js","line":236},{"sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phantomcss.js"
,"line":405},{"function":"runStep","sourceURL":"C:/Users/Kriss/AppData/Roaming/n
pm/node_modules/casperjs/modules/casper.js","line":1553},{"function":"checkStep"
,"sourceURL":"C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/c
asper.js","line":399}]}
#    stack: not provided
FAIL TypeError: 'undefined' is not an object (evaluating 't.nodeName.toLowerCase
')
#    type: uncaughtError
#    file: .\demo\testsuite.js
#    error: {"message":"'undefined' is not an object (evaluating 't.nodeName.toL
owerCase')","line":4,"sourceId":83498024,"sourceURL":"http://code.jquery.com/jqu
ery-1.9.1.min.js","stack":"TypeError: 'undefined' is not an object (evaluating '
t.nodeName.toLowerCase')\n    at http://code.jquery.com/jquery-1.9.1.min.js:4\n
   at asyncCompare (C:/Users/Kriss/dev/PhantomCSS/phantomcss.js:236)\n    at C:/
Users/Kriss/dev/PhantomCSS/phantomcss.js:405\n    at runStep (C:/Users/Kriss/App
Data/Roaming/npm/node_modules/casperjs/modules/casper.js:1553)\n    at checkStep
 (C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/casper.js:399
)","stackArray":[{"sourceURL":"http://code.jquery.com/jquery-1.9.1.min.js","line
":4},{"function":"asyncCompare","sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phant
omcss.js","line":236},{"sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phantomcss.js"
,"line":405},{"function":"runStep","sourceURL":"C:/Users/Kriss/AppData/Roaming/n
pm/node_modules/casperjs/modules/casper.js","line":1553},{"function":"checkStep"
,"sourceURL":"C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/c
asper.js","line":399}]}
#    stack: not provided
FAIL TypeError: 'undefined' is not an object (evaluating 't.nodeName.toLowerCase
')
#    type: uncaughtError
#    file: .\demo\testsuite.js
#    error: {"message":"'undefined' is not an object (evaluating 't.nodeName.toL
owerCase')","line":4,"sourceId":83498024,"sourceURL":"http://code.jquery.com/jqu
ery-1.9.1.min.js","stack":"TypeError: 'undefined' is not an object (evaluating '
t.nodeName.toLowerCase')\n    at http://code.jquery.com/jquery-1.9.1.min.js:4\n
   at asyncCompare (C:/Users/Kriss/dev/PhantomCSS/phantomcss.js:236)\n    at C:/
Users/Kriss/dev/PhantomCSS/phantomcss.js:405\n    at runStep (C:/Users/Kriss/App
Data/Roaming/npm/node_modules/casperjs/modules/casper.js:1553)\n    at checkStep
 (C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/casper.js:399
)","stackArray":[{"sourceURL":"http://code.jquery.com/jquery-1.9.1.min.js","line
":4},{"function":"asyncCompare","sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phant
omcss.js","line":236},{"sourceURL":"C:/Users/Kriss/dev/PhantomCSS/phantomcss.js"
,"line":405},{"function":"runStep","sourceURL":"C:/Users/Kriss/AppData/Roaming/n
pm/node_modules/casperjs/modules/casper.js","line":1553},{"function":"checkStep"
,"sourceURL":"C:/Users/Kriss/AppData/Roaming/npm/node_modules/casperjs/modules/c
asper.js","line":399}]}
#    stack: not provided

C:\Users\Kriss\dev\PhantomCSS [master +1 ~0 -0 !]> FAIL 4 tests executed in 2.70
1s, 0 passed, 4 failed, 0 dubious, 0 skipped.

Details for the 4 failed tests:

In .\demo\testsuite.js
  Untitled suite in .\demo\testsuite.js
    uncaughtError: TypeError: 'undefined' is not an object (evaluating 't.nodeNa
me.toLowerCase')
In .\demo\testsuite.js
  Untitled suite in .\demo\testsuite.js
    uncaughtError: TypeError: 'undefined' is not an object (evaluating 't.nodeNa
me.toLowerCase')
In .\demo\testsuite.js
  Untitled suite in .\demo\testsuite.js
    uncaughtError: TypeError: 'undefined' is not an object (evaluating 't.nodeNa
me.toLowerCase')
In .\demo\testsuite.js
  Untitled suite in .\demo\testsuite.js
    uncaughtError: TypeError: 'undefined' is not an object (evaluating 't.nodeNa
me.toLowerCase')

Any ideas on where I could be going wrong? Is Windows support something that is expected to work? I normally develop under OS X (where I've had no issues) but this project needs to work on both platforms ideally.

Mismatch percentage should be configurable

https://github.com/Huddle/PhantomCSS/blob/master/phantomcss.js#L343 & https://github.com/Huddle/PhantomCSS/blob/master/phantomcss.js#L351

Mismatch percentage should be configurable, though I solved my problem by selecting a smaller test area.

For a small change like a button border radius, the mismatch can easily be less than 0.05% if the selection is not kept small. In the images below the selection is on a single button and the mismatch is only 0.08%; selecting 3 buttons dropped this below the 0.05% threshold and failed to find the regression.

default button_1
default button_1 fail
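The dilution the reporter describes is just arithmetic; a sketch with made-up dimensions that reproduce the effect (the real mismatch metric comes from Resemble.js and is more sophisticated than a raw pixel count):

```javascript
// Rough model: mismatch percentage = changed pixels / total pixels * 100.
function mismatchPercent(changedPx, width, height) {
	return (changedPx / (width * height)) * 100;
}

// Suppose a border-radius tweak changes ~40 pixels.
var changed = 40;

// Against a single 900x56 button the change is above a 0.05% tolerance...
var oneButton = mismatchPercent(changed, 900, 56);     // ~0.079%

// ...but against three stacked buttons (900x168) it is diluted below it.
var threeButtons = mismatchPercent(changed, 900, 168); // ~0.026%
```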

exe and linux binaries together?

Hi,

I've noticed that you have a phantomjs.exe in the root of your project, but casper has a bin directory with (presumably) linux binaries?

Is this repository looking to provide a bundle of phantom, casper and resemble so users don't have to install them separately?

Also, is there a dependency on nodejs?

I'm sorry for asking these dumb questions, but the wiki link does not seem to be working.

Testing print styles

We've been having regressions around our print.css styles, since we don't see them often. Can PhantomCSS be used to test print styles?

Fail image is incorrect

I am not sure how and why this is occurring but I cannot seem to fathom the root of the problem.

Both the original image and the diff image are being saved correctly, but the failed image that is generated is never correct (it is a screenshot of the same page, but of some other place on the page).

The errors in the console successfully show that the image is different and shows the correct percentage changed. It's just the failed image saved is always incorrect.

minibugs bonus round

Hi guys, I am testing some rather large pages and think perhaps I'm running out of memory or something; the tests (12 pages) take around 20 minutes.

A number of little bugs include:

  • the screenshots taken do not accurately depict my webpage (which uses AngularJS and a bunch of other JS libraries, maybe that's why), with elements shifting out of place, and a large margin down the whole right side.
  • tests fail after running test twice in a row without any interstitial changes.
  • "Could not complete image comparison for ./screenshots/screenshot_(X).png" also shows up for every single page tested.

Please let me know if I'm doing anything wrong, or if you'd like more details.

Ignore certain elements

Hi

Is there a way to ignore certain elements when comparing?
We have Facebook integrated all over the page, and in some cases I cannot take screenshots without Facebook being on there. But sometimes loading Facebook takes longer, so on some screenshots Facebook has already loaded and on others it's still missing.

It would be nice if I could pass a selector like ".fb-like" that gets ignored.
I wouldn't mind having to set it globally if there is no other way.

Thanks
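One workaround (short of a built-in hideSelector argument, which another snippet in this thread suggests the screenshot function may already accept as its third parameter) is to inject a style rule hiding the volatile elements before taking the screenshot. A minimal, hypothetical helper that builds such a rule:

```javascript
// Build a CSS rule that hides the given selectors before a screenshot
// is taken. The !important flag is an assumption, to win over inline styles.
function buildHideRule(selectors) {
  return selectors.join(', ') + ' { visibility: hidden !important; }';
}

console.log(buildHideRule(['.fb-like']));
// .fb-like { visibility: hidden !important; }
console.log(buildHideRule(['.fb-like', '.fb-comments']));
// .fb-like, .fb-comments { visibility: hidden !important; }
```

The resulting string could be appended to the page inside a casper.evaluate call before each screenshot.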

ReferenceError: Can't find variable: Uint8ClampedArray

I know Uint8ClampedArray is not supported by the current version of PhantomJS; however, I compiled PhantomJS from the latest code on the master branch and it works to generate the screenshots I need.

I would like to know how I can use phantomcss with my compiled phantomjs.

Can anyone help me?

Unable to run demo on OS X

Hi. I'm intrigued by your project, but I'm unable to run the demo on my Mac.

I've opened the binary file, and added the line "phantomjs demo/testsuite.js" as suggested in the demo, however, this causes the error message "parse error". Where am I going wrong?

*I realise this isn't an issue per se, but I don't know where else to post this.

fileName not being used in _fileNameGetter

The 'screenshot' function calls '_fileNameGetter' with two arguments, but it declares no parameters, so '_root' is taken from the enclosing scope and 'fileName' is always undefined.

This makes it impossible to run a test with a custom filename:

css.screenshot('body' , 1, '', 'main_page' );

Will create "screenshot_0.png" instead of "main_page_0.png".
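A sketch of what a fixed filename getter might look like. The names and the counter behaviour are assumptions based on this report, not the actual phantomcss.js source:

```javascript
// Hypothetical corrected version: declare the parameters the caller
// already passes, and fall back to the 'screenshot' prefix when no
// custom name is given.
var _count = 0;

function _fileNameGetter(root, fileName) {
  var name = fileName || 'screenshot';
  return root + '/' + name + '_' + (_count++) + '.png';
}

console.log(_fileNameGetter('./screenshots', 'main_page'));
// ./screenshots/main_page_0.png
console.log(_fileNameGetter('./screenshots'));
// ./screenshots/screenshot_1.png
```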

Fails with own instance of Casperjs Tester module

Here is those places:

function _onPass(test){
    console.log('\n');
    casper.test.pass('No changes found for screenshot ' + test.filename);
}

function _onFail(test){
    console.log('\n');
    casper.test.fail('Visual change found for screenshot ' + test.filename + ' (' + test.mismatch + '% mismatch)');
}

function _onTimeout(test){
    console.log('\n');
    casper.test.info('Could not complete image comparison for ' + test.filename);
}

Actually, the tester instance can live under any other key name, for example casper.tester, so these hard-coded references to casper.test break.

some color diffs do not trigger failures

We are targeting a heading for one of our screenshots. When we change the color of the heading from red to #686868, the tests still pass. Is this expected? How does the library account for color?

We're using the library via grunt-phantomcss, which looks like it has the default resemble.js setting (ignoreColors: false).

casper.start( 'http://localhost:8000/all.html' );
casper.viewport(1024, 768);

casper.then(function(){
  phantomcss.screenshot('#responsive_images', 'responsive images section');
});

casper.then( function now_check_the_screenshots(){
  // compare screenshots
  phantomcss.compareAll();
});

casper.then( function end_it(){
  casper.test.done();
});

casper.run(function(){
  console.log('\nTHE END.');
  phantom.exit(phantomcss.getExitStatus());
});

responsive images section_0 diff
responsive images section_0

CasperJS version

Hey,

Currently you are using 1.0.2 but I tried manually upgrading to 1.1.0 without any luck. Everything works up until the image comparison takes place.

Any possibility that this upgrade will occur?

Thanks!

Can't fail test manually

I have a part of the code that needs to verify an alert box. But since PhantomJS is headless, a window doesn't actually exist; the only way for me to verify an alert box is through the remote.alert event handling in CasperJS, so I need to verify the alert box's message with an assert if possible.

I tried to do

casper.start();
casper.then(function(){
    casper.test.fail("fail");
});

But nothing happens. I also tried casper.test.assertMatch, but that doesn't work either. Other Casper functions like casper.echo work. Is PhantomCSS overriding the status output of the test?

Inconsistency in failures - whitespace or elements outside the selector

I'm testing an app with a familiar layout, header at the top, content underneath it.

Generally the screenshot selector is on the content and often the pages can be quite long (1,103px × 3,478px). I am noticing that there is often a diff of just a few pixels causing failures where there isn't actually any layout change, but extra space above the element has been captured in the screenshot forcing a slight difference. Sometimes there is even significant amounts of our header captured also.

Has anybody else encountered this kind of thing? I'm trying to figure out a way to gain greater consistency. Perhaps I need to look at capturing smaller sections of the screen? That just feels like it invalidates a layout test, though: if you want to test that the whole page is displayed correctly, you don't want it split into pieces.

Any suggestions would be appreciated :)

Use PerceptualDiff to get the diff between images

According to the documentation, the actual pixel-based diff from Resemble.js seems to have some difficulties when the variation is very small (like antialiasing).

From http://pdiff.sourceforge.net/:
"PerceptualDiff is an image comparison utility that makes use of a computational model of the human visual system to compare two images."

It may help remove false positives to use this tool (though it may not be easy to embed inside PhantomCSS, as it's not a JS library).

Requesting a screenshot on an invisible item

As far as I can tell, it defaults to taking a full-viewport screenshot when the selector is invisible. Is that what's happening? If so, perhaps a better response would be an empty PNG file.

Running tests from any folder

Hi,

I would like to be able to run the tests from anywhere in the console not only the cloned repo directory.

By example, my app is in: myAppFolder/app/

but tests are in myAppFolder/tests/css

I would like to be able to be in myAppFolder and run

phantomjs tests/css/testsuite.js

but at first I got a bunch of errors about modules not being found. I modified the variables and paths and got it working, but then PhantomCSS fails when comparing screenshots (always a timeout). I suspect it generates the screenshots in one folder but tries to read them from another.

I do see the generated images in the specified folder, but I always get a timeout and a failed test.

Has anyone figured out how to do this?

Does not run in debug mode because of missing casper module

I'm trying to run my testsuite.js in debug mode but am unable to do so because the casper module cannot be found. I presume this is because the debugger uses the context of the browser you run it in and so you lose access to the filesystem.

Steps to reproduce:

  1. From the terminal, run phantomjs --remote-debugger-port=9000 testsuite.js
  2. Navigate to http://localhost:9000 in a browser (I use Chrome)
  3. In the console, type __run()

An exception is thrown with message: cannot find module 'casper'.

It would be great to know how you guys go about debugging against PhantomJS.

GitHub Releases

Could you please create new releases periodically as you bump the package.json version? As mentioned in #20 and #26, it would help other projects build strong dependencies on phantomcss using package managers like npm and bower. It would be great to download just a tarball instead of checking out the entire git repo.

Multi-Browser Support

Even though this project is named PhantomCSS, it would be great if you gave the option of testing in different browsers.

Because as we all know, Chrome != Firefox != IE != ...

Misaligned checkboxes & radio buttons on OSX

We run Phantom across a suite of XP, CentOS and OS X (10.6.8); Windows and Linux behave as you would expect. OS X, however, renders radio buttons and checkboxes away from their corresponding labels, significantly lower on the page.

Is this a bug in PhantomJS or is there something else I've missed here that causes this or may prove a way to improve/fix the issue?

Running PhantomCSS from a Makefile

Hi again,

Here's the current Makefile I have for my project. While the PhantomCSS test works great when executed directly, PhantomCSS reports all sorts of directory problems when run from a Makefile in the parent folder. Its root seems to be based on where the Makefile is, not on the root of the PhantomCSS install (where PhantomCSS's dependencies are located).

serve:
	grunt &
	pserve --reload development.ini &
	casperjs test frontend-tests/404-test.js &
	casperjs test frontend-tests/regression-test.js
.PHONY: serve

[PhantomCSS] Resemble.js not found: ./node_modules/resemblejs/resemble.js

[PhantomCSS] Can't find Resemble container. Perhaps the library root is mis configured. (./resemblejscontainer.html)

Do you think this is a problem related to phantomCSS, or perhaps make? My other casperjs test runs without any problems.

Let me know if I can provide any more details.
Thanks for your continued work!

a question about image quality

Hi

Very nice project. Over the last couple of weeks, I created a similar project with Python/Selenium/PhantomJS (https://github.com/kinkerl/eukalypse), but I ran into some issues with PhantomJS.

We use it as a standalone server in a production environment, and about once a week PhantomJS created images which were a bit "off": everything was shifted a few pixels in one direction. I still have no idea why.

In the end, we still have to use Selenium with Chrome, because Firefox (sometimes) has problems with deterministic antialiasing of web fonts and PhantomJS is (sometimes) just off.

Did you get a similar problem?

Broken demo?

Not sure if this happening for anyone else but the demo is broken for me. I am running PhantomJS 1.9.7_1 and CasperJS 1.1 Beta 3.

The initial screenshot is generated without any styles, and then it fails by timing out while waiting for the modal to come up. I fixed it by downloading the CSS and JS dependencies and referencing them locally in coffeemachine.html.

I'm not sure if this is a bug or if it's supposed to grab CSS and JS externally if they are referenced as such.

Demo file assumes local lib files

I have zero trouble getting your demo to work, but I would rather have npm handle the dependencies. The problem is that I'm having trouble piecing together a test JS file that actually works with npm-controlled libs.

Is it possible to have a walkthrough/demo file, that works with basic NPM setup?

ignoreAntialiasing causes bad quality image

Hello

PhantomCSS generates the following screenshot, which is of pretty bad quality.

With ignoreAntialiasing :

ocmv-2010-step1-desktop_2 fail

I found out that removing the ignoreAntialiasing() call in phantomcss.js line 503 gives me a better quality image.

Without ignoreAntialiasing :

ocmv-2010-step1-desktop_2 fail

Can you provide options for us to enable or disable the ignoreAntialiasing option of Resemble.js?

Best regards

Ignore antialiasing gives false positive

Diffing two images with ignore antialiasing turned on gives me a false positive (the diff is blank). Images are below. Diffing them with ignore nothing or ignore colors produces the expected result.

Image 1

drag

Image 2

drag2

Diff with ignore antialiasing

image

/cc @jamescryer

Question - JavaScript driven anchor doesn't update page when clicked

The page I am trying to test with PhantomCSS updates its content via AJAX on a mousedown event when certain links are clicked. I have a selector that gets the link I want to click (I have verified this by using this.echo(this.getHTML(selector, true))), and I use this.click(selector) to click it. Nothing changes when I take my next screenshot... so I tried waiting 20 seconds to see if something happens, and there is still no change in the screenshot.

Is testing a single-page, ajax-driven application feasible using PhantomCSS? If so, where is a good place to look to debug why nothing is happening?

I haven't yet figured out how to get the console statements from the page's JavaScript to print out.

screenshot() assuming jQuery $ to hide elements

If your page isn't using jQuery, or uses it with noConflict, this will not work.

(in my case, we're using MooTools)

I'll provide a pull request with a plain old JavaScript fix using querySelectorAll

        if(hideSelector || _hideElements){
            casper.evaluate(function(s1, s2){
                if(s1){
                    $(s1).css('visibility', 'hidden');
                }
                $(s2).css('visibility', 'hidden');
            }, {
                s1: _hideElements,
                s2: hideSelector
            });
        }
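The jQuery-free replacement could look something like the sketch below. The element-iterating part is written as a pure function so that inside casper.evaluate only the DOM lookup itself remains a one-liner:

```javascript
// Hide every element in a list by setting visibility:hidden, without jQuery.
// Inside the casper.evaluate callback you would call it as
// hideAll(document.querySelectorAll(selector)).
function hideAll(elements) {
  for (var i = 0; i < elements.length; i++) {
    elements[i].style.visibility = 'hidden';
  }
  return elements;
}

// Quick check with plain objects standing in for DOM nodes:
var fakeNodes = [{ style: {} }, { style: {} }];
hideAll(fakeNodes);
console.log(fakeNodes[0].style.visibility); // hidden
```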

change path in PhantomCSS

Hey!

On my script and on demo too I have strange problem.

I get a timeout when PhantomCSS tries to compare images:

TIMEOUT: /screenshots\img_0.png

but the path should look like /screenshot/image_0.png. How do I change this?
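The mixed separators in /screenshots\img_0.png suggest a path being joined with the Windows-style backslash somewhere. Normalizing the separators before the comparison step is one workaround; this helper is hypothetical, not part of PhantomCSS:

```javascript
// Normalize Windows-style backslashes so the comparison step looks for
// the file at the same path it was written to.
function normalizeSeparators(p) {
  return p.replace(/\\/g, '/');
}

console.log(normalizeSeparators('/screenshots\\img_0.png'));
// /screenshots/img_0.png
```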

When I run the testsuite again, there is no image showing the difference between the new and old screenshots.

I'm trying the example. When I run testsuite.js, it creates screenshot.0.png and screenshot.1.png. Then I updated the CSS and ran it again; I can see the command-line output "1 of them failed", and it creates screenshot.0.diff.png and screenshot.1.diff.png, but these are actually new screenshots rather than the difference between the old and new screenshots.

When I run the testsuite again, could PhantomCSS generate the new screenshot and the diff together? Then there would be screenshot.0.png, screenshot.0.diff.png and screenshot.0.compare.png, which would make it clearer for me to check the difference.

Integrating the screenshots generated by Browserstack with PhantomCSS?

"I'm looking to integrate the screenshots generated by Browserstack with phantomCSS. For example, with Browserstack you can either request a full web page screenshot across a set of devices/browsers (Screenshots API), or a set of screenshots given a css selector (Automate API)

I'm wondering if you had any thoughts as to how I could go about integrating the two?"

-- Question was copied from email.

Visual Regression Testing

You have described PhantomCSS as a tool for visual regression testing. Can you elaborate on the terminology you have chosen? I cannot find any thorough definition of what it means. I know what the tool does; I am just curious whether the buzzword is actually scientifically well-founded or not.

Any feedback or comments are most welcome.

When failing, the fail screenshot is first image on page, not the difference between pages

I'm doing a big CSS overhaul and trying to make the new CSS look like the old CSS. PhantomCSS has been a great tool at helping me ensure parity, but the diff tool doesn't seem to be working correctly. According to the documentation, I believe PhantomCSS is supposed to highlight the parts of the page that differ in the failure screenshot.

The difference between the two attached images is about 1.4, and really hard to eyeball, but the fail screenshot just shows the first image on the page. I've tried removing the image, but it will just grab the next image instead. Is this a bug or am I doing something wrong?

I cloned the project from github yesterday.

BASELINE SCREENSHOT:
screenshot_1

DIFF SCREENSHOT:
screenshot_1 diff

FAILURE SCREENSHOT:
screenshot_1 fail

0.08% unexplained diff only in travis-ci

Before I'll ask I want to say this is an awesome project, changing the way functional testing works.

The problem:

When I run the test in my project locally, it passes, but when Travis CI runs it, it fails with a 0.08% diff. I checked the local vs Travis CasperJS and PhantomJS versions and they look the same to me.

what can be the explanation?

Also, can I set a fail threshold, telling it not to fail on less than, say, a 0.1% difference? (A command-line option, maybe?)

Cannot find module 'casper' when trying to run within Rails app

Hi,

I was trying to use this within my pre-existing Rails app and seem to keep getting a 'Cannot find module "casper"' error message.

I added your Git repo to the root of my app and changed phantom.casperPath to point to the CasperJS directory, but no luck.

Let me know if you can help with this or if you can point me to any useful tutorials on using PhantomCSS in a Rails app.

Thanks

Ted

No easy way to pass in target url

Is there a way to pass a target URL in to the tests?

For an automation pipeline this is useful since the baseline image is the same but the environments are different.
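CasperJS exposes command-line options through casper.cli (e.g. casper.cli.get('url') for a --url=... flag). The parsing itself is simple enough to sketch in plain JS; the flag name and the staging URL below are just illustrative:

```javascript
// Extract a named --key=value option from an argv-style array, the way
// you might pass --url=https://staging.example.com to a test run.
function getOption(argv, name) {
  var prefix = '--' + name + '=';
  for (var i = 0; i < argv.length; i++) {
    if (argv[i].indexOf(prefix) === 0) {
      return argv[i].slice(prefix.length);
    }
  }
  return null; // option not supplied
}

var argv = ['casperjs', 'test', 'suite.js', '--url=https://staging.example.com'];
console.log(getOption(argv, 'url'));
// https://staging.example.com
```

The test would then call casper.start(targetUrl) with the extracted value, keeping the baseline images identical across environments.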

Detect optical illusions

I am not sure if this fits into the scope of PhantomCSS or whether it is time to think about it yet.

It would be great to check UI screenshots for optical illusions.
For example, Trello has an annoying Hermann-grid optical illusion when viewing "My Boards" on larger screens (see screenshot).
Why do I think it is better to have such testing at the implemented-software stage rather than at the design-picture stage? Because optical illusions may appear at screen resolutions that were not covered by the design pictures, and small changes applied to already-implemented software may introduce optical illusions.

optical-illusion trello
