Approval Tests implementation in NodeJS
License: Apache License 2.0
When I run my test suite and an approval test fails, it opens my reporter (p4merge). While I'm reviewing the changes, the test suite continues. Before I get a chance to approve the new received snapshot, the next test completes (also failing) and p4merge now shows the new failure. Also, my clipboard now has the command to approve the second and not the first.
One way I could see to resolve this is to have approvals.verify return a Promise that doesn't resolve until I close p4merge. I could await it in the test suite -- I may have to set an unlimited timeout, but that's fine by me.
Or, am I just using approvals.verify wrong?
Setup
const approvals = require('approvals');
const MultiReporter = approvals.reporters.MultiReporter;
approvals.configure({
reporters: [
new MultiReporter(['p4merge', 'copycommand'])
]
});
Assertion (this.page is a Puppeteer page)
const screenshot = await this.page.screenshot({ fullPage: true });
approvals.verify('features/approvals', sanitizeName(approvalName), screenshot);
Also: thanks for this tool, it's awesome!
the config I put in the home directory is squashed by the default config.
Seems to me that it comes from line no 75 in config.js
var currentConfigObj = _.defaults({}, defaultConfig);
that should probably be
var currentConfigObj = _.defaults({}, homeConfig, defaultConfig);
so that it doesn't get squashed by line no 94.
var resultConfig = _.defaults(configOverrides || {}, currentConfigObj || {}, homeConfig, defaultConfig);
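The fix hinges on the _.defaults precedence rule: keys from earlier sources win, and later sources only fill in missing keys. A minimal plain-JS sketch of that rule (not lodash itself; the config values here are made up for illustration):

```javascript
// Minimal re-implementation of the _.defaults precedence rule:
// earlier sources win; later sources only fill in missing keys.
function defaults(target, ...sources) {
  for (const source of sources) {
    for (const key of Object.keys(source)) {
      if (target[key] === undefined) {
        target[key] = source[key];
      }
    }
  }
  return target;
}

const homeConfig = { reporter: 'p4merge' };             // hypothetical home-dir config
const defaultConfig = { reporter: 'gitdiff', eol: '\n' }; // hypothetical built-in defaults

// Buggy order: the home config is never consulted.
const buggy = defaults({}, defaultConfig);

// Fixed order: home config wins, defaults fill the gaps.
const fixed = defaults({}, homeConfig, defaultConfig);

console.log(buggy.reporter); // 'gitdiff'
console.log(fixed.reporter); // 'p4merge'
```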
I tried migrating my configuration to a file-based config as suggested in the docs, but it looks like my custom reporter is not recognized. I'm not clear whether my format is simply wrong, the config is not JavaScript, or custom reporters are not being recognized. Perhaps the config file isn't being read? I'm just really unclear on where the actual problem lives. Let me know what info you need to help identify the cause and I'll gather it for you.
Beyond Compare 4 won't ever be found because it immediately gets overwritten by Beyond Compare 3 path.
} else if (osTool.platform.isWindows) {
app = autils.searchForExecutable("Beyond Compare 4", "BCompare.exe");
app = autils.searchForExecutable("Beyond Compare 3", "BCompare.exe");
}
I assume this is meant to be
else if (osTool.platform.isWindows) {
app = autils.searchForExecutable("Beyond Compare 4", "BCompare.exe")
|| autils.searchForExecutable("Beyond Compare 3", "BCompare.exe");
}
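The same first-match-wins idea generalizes to any ordered candidate list. A sketch with an injected searcher standing in for autils.searchForExecutable (the paths and the fake searcher are made up for illustration):

```javascript
// Return the first candidate an (injected) searcher can resolve.
// searchForExecutable is a stand-in for autils.searchForExecutable.
function findFirst(candidates, searchForExecutable) {
  for (const { dir, exe } of candidates) {
    const app = searchForExecutable(dir, exe);
    if (app) {
      return app;
    }
  }
  return null;
}

// Simulated searcher: only Beyond Compare 3 is "installed".
const fakeSearch = (dir, exe) =>
  dir === 'Beyond Compare 3' ? 'C:/BC3/BCompare.exe' : null;

const found = findFirst(
  [
    { dir: 'Beyond Compare 4', exe: 'BCompare.exe' },
    { dir: 'Beyond Compare 3', exe: 'BCompare.exe' }
  ],
  fakeSearch
);
console.log(found); // 'C:/BC3/BCompare.exe'
```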
Hey Jason,
This isn't an issue so much as I just wanted to let you know, I built a tiny little module which allows people to generate a default config without needing to specify anything except a reporter. I got to the point where I was tired of copy/pasting my config all over the place, so I thought I would share.
Thanks again for all your hard work on the Approvals project!
https://github.com/cmstead/approvals-config-factory
https://www.npmjs.com/package/approvals-config-factory
-C
Upgrade some tooling
Replace jshint/jscs with a set of eslint rules.
Consider starting from the airbnb eslint-config-airbnb config, adding project (my) preferences on top of that.
Hi,
Great project! Really glad to see that approvals has been ported to JavaScript (I use it extensively via .NET).
I'm not sure on what the performance expectations are for this project on Windows and I'm a little surprised with how long it seems to take to do an approval.
For instance, I just ran the following on a number of computers we have here:
var approvals = require('approvals');
console.log (new Date().getTime() + " Starting");
try {
approvals.verify(__dirname, "test", "asdf");
} catch (e) {
console.log (e);
}
console.log (new Date().getTime() + " Finishing");
And it's taking between 1 and 3 seconds depending on the machine and the diff viewer (kdiff, tortoisediff, p4merge, git diff). For example:
C:\dev\temp\> node main.js
1429163375410 Starting
CMD: C:/Program Files/TortoiseGit/bin/TortoiseGitMerge.exe C:\dev\temp\test.
received.txt C:\dev\temp\test.approved.txt
File sizes do not match:
Approved: C:\dev\temp\test.approved.txt
Received: C:\dev\temp\test.received.txt
1429163377028 Finishing
Is this expected? Is there anything I can do to improve performance? Is there any more information I can send you to help diagnose?
Thanks!
When I install v3.0.3 I get:
npm ERR! code ETARGET
npm ERR! notarget No matching version found for flatmap-stream@^0.1.0
npm ERR! notarget In most cases you or one of your dependencies are requesting
npm ERR! notarget a package version that doesn't exist.
npm ERR! notarget
npm ERR! notarget It was specified as a dependency of 'event-stream'
npm ERR! notarget
Looks like the referenced version of event-stream has a dependency on the removed (malicious) flatmap-stream package perhaps.
Could you please update the referenced version of event-stream?
PS - thanks for fixing the null value bug with the key ordering code.
Hello again,
I've been trying to use Approvals with Jasmine but can't seem to get it to work. The Mocha version on the other hand works fine, so it might be Jasmine-related.
Having [email protected] and [email protected] installed, on Windows 7 x64, when I run the following command it crashes:
C:\wamp\www\Jamstash>jasmine-node spec
F
Failures:
1) With a Jasmine describe should be able to use Approvals
Message:
TypeError: Cannot read property 'description' of undefined
Stacktrace:
TypeError: Cannot read property 'description' of undefined
at getParentName (C:\wamp\www\Jamstash\node_modules\Approvals\lib\Providers\Jasmine\JasmineNamer.js:12:20)
at getFullTestName (C:\wamp\www\Jamstash\node_modules\Approvals\lib\Providers\Jasmine\JasmineNamer.js:20:16)
at JasmineNamer.pathCreator (C:\wamp\www\Jamstash\node_modules\Approvals\lib\Providers\Jasmine\JasmineNamer.js:48:21)
at Namer.getApprovedFile (C:\wamp\www\Jamstash\node_modules\Approvals\lib\Namer.js:24:17)
at Function.FileApprover.verify (C:\wamp\www\Jamstash\node_modules\Approvals\lib\FileApprover.js:13:34)
at verify (C:\wamp\www\Jamstash\node_modules\Approvals\lib\Providers\BeforeEachVerifierBase.js:51:26)
at null.<anonymous> (C:\wamp\www\Jamstash\spec\player_jasmine_spec.js:9:14)
Finished in 2.447 seconds
1 test, 1 assertion, 1 failure, 0 skipped
Inside the C:\wamp\www\Jamstash\spec
folder I have made a player_jasmine_spec.js
which contains the code from the Getting Started Wiki :
require('Approvals').jasmine(__dirname);
describe('With a Jasmine describe', function () {
it('should be able to use Approvals', function () {
var data = "Hello World!";
// Call the Approvals verify with some data.
this.verify(data);
});
});
Let me know if you need more info. Wish I could help you but right now I don't know nearly enough about NodeJS at all...
When an undefined value (neither a string nor a Buffer) is passed to verify, we get a not-so-friendly error.
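A friendlier failure could come from an upfront guard like the following. This is a hypothetical sketch, not the library's actual code:

```javascript
// Hypothetical input guard: fail fast with a descriptive message
// instead of a confusing downstream error.
function assertVerifiable(data) {
  if (typeof data === 'string' || Buffer.isBuffer(data)) {
    return data;
  }
  throw new TypeError(
    'approvals.verify expects a string or Buffer, but received: ' +
    (data === undefined ? 'undefined' : typeof data));
}
```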
When verifying data we run into false positives when the start of the string remains the same but new data is added to the end of the string. This is happening with node code and can easily be reproduced using the CLI tool.
If the approval matches the first call below, it will also match the second call in this example:
echo 'Hello World!' | approvals helloWorldTest --reporter gitdiff --outdir .
echo 'Hello World!2' | approvals helloWorldTest --reporter gitdiff --outdir .
We were able to produce the expected result by adding the following check to FileApprover.js:
if (approvedFileBuffer.length !== receivedFileBuffer.length) {
    throwReporterError("Files do not match");
}
Heya, this is just a friendly note to let you know that a new version of Liftoff and some of the dependencies commonly used in conjunction with it have been released. This release adds better support for situations where your user's configuration files are written using features behind a v8 flag (e.g. --harmony). Windows users and io.js users should see fewer issues if you upgrade!
Some helpful links:
https://github.com/tkellen/node-liftoff/blob/master/UPGRADING.md#100---200
https://github.com/tkellen/node-v8flags/blob/master/README.md#example
https://github.com/tkellen/node-hacker/blob/master/bin/hacker.js#L10-L14
Thanks!
Getting a problem when copy-pasting the received file into the approved one. The diff is still there even though the text is identical.
reporters: Meld, Kdiff3 at least
os: macOS, Linux
I'll dig into this a bit deeper soon, but posting in case someone ran into the same thing
Johan
This gets complicated and needs discussion...
There is currently no seam to add a reporter to the list of available reporters.
Due to the complexity of setting up and using the approvals config, it would probably be helpful (especially when diagnosing line-ending errors) to log the config object when reporting an error.
Is it possible to also publish the declaration file (.d.ts) for module 'approvals'?
=> npm install @types/approvals
=> yarn add @types/approvals
When you use approvals, the require syntax is a bit annoying. Instead of just require('approvals').mocha(), we need to pass in the current directory of the testing file, like require('approvals').mocha(__dirname).
This is an annoying tax on each require of approvals, and it would be ideal to remove it.
Not sure if anyone has any ideas on this; one might be to walk the stacktrace and see if we can get file/directory information from there.
Hey @isidore,
Looks like since I'm not an owner of the org anymore, I don't have access to add Greenkeeper to the project.
It's likely worthwhile to consider having more than a single person be an "owner" of this org. Thinking bus factor...
Regardless, would you mind turning on Greenkeeper? (I have no idea what that process is going to be like yet)
Installed TortoiseSVN on my machine with default installation settings but I receive this message:
Error: the string "No reporter found!" was thrown, throw an Error :)
Maybe an override in the configuration file for setting the path could solve the problem?
As per conversation with Llewellyn
var dateScrubber = function (data) {
    return data.replace(/[0-9]*-[0-9]*-[0-9]*/g, '**scrubbed-name**');
};
var multiScrubber = function () {
    var scrubbers = arguments;
    return function (data) {
        for (var i = 0; i < scrubbers.length; i++) {
            data = scrubbers[i](data);
        }
        return data;
    };
};
var verify = function (data, scrubber) {
    var json = JSON.stringify(data, null, ' ');
    if (scrubber == null) {
        scrubber = function (data) {
            return data;
        };
    }
    json = scrubber(json);
    that.verify(json, { reporters: [DiffMergeReporter] });
};
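For illustration, here is a self-contained use of the multiScrubber pattern from the conversation above. The guidScrubber and the tightened date regex are my own additions, not from the conversation:

```javascript
// Compose several scrubbers into one (same shape as multiScrubber above).
var multiScrubber = function () {
  var scrubbers = arguments;
  return function (data) {
    for (var i = 0; i < scrubbers.length; i++) {
      data = scrubbers[i](data);
    }
    return data;
  };
};

// Tightened variant of the date scrubber (requires at least one digit
// per segment, unlike the original [0-9]* pattern).
var dateScrubber = function (data) {
  return data.replace(/[0-9]+-[0-9]+-[0-9]+/g, '**scrubbed-date**');
};

// Hypothetical second scrubber for GUIDs.
var guidScrubber = function (data) {
  return data.replace(
    /[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi,
    '**scrubbed-guid**');
};

var scrub = multiScrubber(dateScrubber, guidScrubber);
console.log(scrub('run 2024-01-15 id 123e4567-e89b-12d3-a456-426614174000'));
// → 'run **scrubbed-date** id **scrubbed-guid**'
```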
If approvals does not find p4merge on the PATH, it could attempt to look for it in locations such as:
/Applications/p4merge.app/Contents/Resources/launchp4merge
/opt/homebrew-cask/Caskroom/p4merge/2014.3-1007540/p4merge.app/Contents/Resources/launchp4merge
I have a temp work-around with
approvals.configure({
errorOnStaleApprovedFiles: false,
reporters: [
{
name: 'p4merge-hack',
canReportOn: function (receivedFilePath) {
return true;
},
report: function(approvedFilePath, receivedFilePath) {
require('approvals/lib/AUtils').createEmptyFileIfNotExists(approvedFilePath);
require('child_process').spawn('/Applications/p4merge.app/Contents/Resources/launchp4merge', [receivedFilePath, approvedFilePath], {
detached: true,
stdio: ['pipe', 1, 2, 'ipc']
});
}
}
]
});
Using image diffing and thanks to a suggestion from @isidore I have successfully used a command copy reporter for use in moving received images to approved images.
This has only been tested on a Mac
approvals.configure({
reporters: [
new MultiReporter([
'p4merge',
{
name: 'CommandCopy',
canReportOn: function (receivedFilePath) {
return true;
},
report: function(approvedFilePath, receivedFilePath) {
var foundPrograms = false;
var programs = {
'win32': {
clipboard: 'clip',
fileCopy: 'copy'
},
'linux': {
clipboard: 'xclip -selection clipboard',
fileCopy: 'cp'
},
'darwin': {
clipboard: 'pbcopy',
fileCopy: 'cp'
}
};
var selectedPrograms = programs[process.platform];
var copyFragment = "\'" + receivedFilePath + "\' \'" + approvedFilePath + "\'";
if (selectedPrograms) {
var cp = require('child_process');
cp.execSync(selectedPrograms.clipboard, {input: selectedPrograms.fileCopy + ' ' + copyFragment});
} else {
console.log('cp ' + copyFragment);
}
}
}
])
]
});
I have a local change for this - just putting an issue here to remind myself to polish it up and add documentation/tests
here's the commit: 249bb6c
We all know this is a bad practice, especially in a 3rd-party utility/tool like Approvals.
Luckily it only happens at the end of a process, so it's less likely to interfere with others' code, but it's still ugly...
There are several extensions that need to be refactored to remove the native prototype modification.
@jamesrcounts I just pulled the project on a win10 machine and ran gulp - seeing lots of errors (for some reason).
I didn't see any issues with npm install as you mentioned in #31 - what npm install errors are you seeing?
I am replacing a homegrown solution with approvals (after Clare Macrae sold me on the idea!)
I have ~300 approval tests, and when I first ran them I nearly lost control of my computer as 300+ meld sessions attempted to spawn at the same time...
I was hoping I'd get some kind of control over this, so I don't DoS my computer if more than a dozen or so tests fail at once!
Hola,
It looks like the Visual Studio reporter requires edge. This appears to be a problem for users on Mac who can't install the edge module. Would it make sense to extract this particular reporter to a separate module someone can add as a plugin if they need it?
Here's the associated bug in my project:
Hi to all,
What is the correct format for the .approvalsConfig file? It seems not to work!
How can I configure the location of the approved/received files?
In my example, they go in folder 2, but I want the tests in folder 1 to stay in folder 1 and not in folder 2.
You can find attached a test project.
Thanks in advance
Hello
I am using approvals on Windows 7 x64, I have TortoiseGit v1.8.11.0 (latest) installed.
When using --reporter tortoisemerge it cannot find the executable, for the simple reason that it is not named TortoiseMerge.exe but TortoiseGitMerge.exe.
I don't know how it is named on the TortoiseSVN side, however. Just thought I'd let you know.
Thanks for this tool, by the way. I discovered the idea of approval tests last week; can't wait to try it!
There should be a corresponding version of
https://github.com/approvals/ApprovalTests.Net/blob/master/ApprovalTests.Tests/RunMaintenance.cs
to help detect and ensure abandoned files are cleaned up
We have some tests in subfolders
/test
|- /views
|- /zzyzx
When running mocha --recursive, approvals looks for all approval files under zzyzx
I can add a beyond compare 4 reporter, this is just a reminder for myself
Putting the sample code from the README into approvals.js
require('approvals').mocha(__dirname);
describe('When running some tests', function () {
it('should be able to use Approvals', function () {
var data = "Hello World!";
this.verify(data); // or this.verifyAsJSON(data)
});
});
I run mocha approvals.js and I get the following files generated:
tests\_before_each__hook.approved.txt
tests\_before_each__hook.received.txt
Shouldn't they be named for the test?
The global options object is killing the flexibility of this tool... We need to find a better place where we can retrieve global options/overrides, such as reporters, EOL, etc...
- Consider loading a .approvalsrc or .approvalsconfig (other name?) config file. With this approach, each require('approvals').configure(...overrides...) doesn't clobber other config overrides, allowing multiple files to require approvals and override them. (maybe?)
- Store config in package.json?
- This option has yet to be discovered/proposed - <-- fill in the blank...
We also need a user-override option or machine-level override? (picking a different DiffReporter per machine?)
On Windows, with Beyond Compare I often have problems with Beyond Compare updating the line endings in the approval file. Since there doesn't seem to be a way to tell BC not to do this, I usually resort to writing a normalizeLineEndings function like this:
function normalizeLineEndings(value){
var re = new RegExp("\r?\n", "g");
return value.replace(re, "\r\n");
}
It would be nice to have support for this built in so that the custom function doesn't need to be re-implemented on each project. The ideal place would be an option passed to verifyAsJSON, since the issue comes up in that situation most often.
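Until it is built in, the normalization composes naturally with JSON serialization as a pre-pass. toNormalizedJSON is a hypothetical wrapper name, not an approvals API:

```javascript
// Same normalization as above: force CRLF line endings.
function normalizeLineEndings(value) {
  var re = new RegExp('\r?\n', 'g');
  return value.replace(re, '\r\n');
}

// Hypothetical wrapper: serialize to JSON, then normalize. The result
// could then be handed to verify() as a plain string.
function toNormalizedJSON(data) {
  return normalizeLineEndings(JSON.stringify(data, null, '  '));
}
```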
Hi again !
Thanks for your quick fixes. Unfortunately, I still can't seem to figure out how to do what I originally wanted to do with approvals.
I would really like to test and refactor a browser-related JS file. To be more precise : it's an Angular file.
I cannot figure out a way to use require() in my Jasmine test, which needs to be in a browser context so that I can use Angular.
I can see at least two solutions:
1. Bundling so that beforeEach() enables me to use this.verify() while running tests in my browser. I've already tried doing that but I ran into issues beyond my understanding of require(). Apparently browserify doesn't like some of the paths required in Approvals. But anyway I'm not even sure if it would work at all that way, seeing that we call executables on the computer from Approvals.
2. Using require() in a plugin. It might work better this way. Plus, since I'm using karma-coverage, I'd be able to actually see if everything is covered with my approval tests before refactoring, which really helps in being confident that everything is still going to be ok afterwards.
I imagine it's going to take quite some work... but then think of all the code we'd be able to refactor! There's just so much broken, untested JS in browsers. I'll try to use approvals manually (without coverage) in the meantime.
Point to solid tests with examples of approvals tests as links within the readme.
When I don't specify a reporter anywhere, it selects opendiff even though it doesn't exist on Linux:
Error raised by reporter [opendiff]: TypeError: Bad argument
If I recall correctly I had the same behavior on macOS. I'll probably fix it soon, but just reporting.
We have a number of ways to specify configuration which drives approvals behavior.
One ideal piece of configuration I would like to get to would be project-level (err directory level) config files.
Say you have a test project that looks like this.
├───src
│   main.js
│
└───tests
    ├───folder1
    │   test1.js
    │
    └───folder2
        test2.js
Also say the approval tests in one folder require different "configuration" than the rest of the project.
You can do this by passing a common configuration object at the verify method level, like .verify(__dirname, "test-file-name", "the thing to verify", {...config here...}), but I would like to at minimum see a project-level configuration where we look up the folder tree from the test for a .approvalsConfig file, in hopes of allowing for more configuration-over-explicit-coding (is that a term?).
Anyway, what if you could drop a .approvalsConfig in the project like such:
│   .approvalsConfig   <--- project root config
│
├───src
│   main.js
│
└───tests
    ├───folder1
    │   .approvalsConfig   <--- folder-specific config overrides root project config
    │   test1.js
    │
    └───folder2
        test2.js
This could allow for an overall project-level set of configuration, as well as the ability to override some configuration properties at a specific folder level.
Hi staxmanade,
I'm using your port of approvals and I must say it's really easy to set up and use.
Unfortunately I'm having problems with line endings.
I'm using a Win 7 32-bit platform. I had to install Perforce to use approvals, since it couldn't find my TortoiseSVN executable; it seems it is looking for a TortoiseGit executable.
When running the example test it complains, since the received file is 13 bytes while the approved file is 14 bytes; it is probably a difference in file writing between the approvals writer and the Perforce file writer.
Setting the option to ignore line-ending differences didn't work.
Thank you for your work.
Can you do a release so that the Error fixes that #35 added can be available on npm?
In the documentation, the approvalsConfig example file is written in JSON.
When testing it out, I get a YAML error, indicating the documentation is not reflecting the current state of Approvals.NodeJS.
I have version 3.0.5 here, installed via npm.
It looks like Approvals version > 1.2.x has a problem with the optional approvals file path. My configuration setup looks like the following:
require('approvals').configure(approvalsConfig).mocha('./test/approvals')
However, my test failure output looks like the following:
1) Add Export should approve stuff:
Error: Approved file does not exist: C:\Users\cstead\Documents\projects\js-refactor\test\Add_Export.should_approve_stuff.approved.txt:
Approved: C:\Users\cstead\Documents\projects\js-refactor\test\Add_Export.should_approve_stuff.approved.txt
Received: C:\Users\cstead\Documents\projects\js-refactor\test\Add_Export.should_approve_stuff.received.txt
I can commit this test run in a branch on JS Refactorings for your ease of reproduction. At any rate, I would rather not clutter up my main test folder with a mass of approval files, and I used to store everything in a ./test/approvals folder, so losing this functionality presents a hefty challenge.
Side note, I can dig in and issue a pull request if you'd like.