govtechsg / purple-a11y

Purple A11y is a customisable, automated web accessibility testing tool that allows software development teams to find and fix accessibility problems, improving access to digital services for persons with disabilities (PWDs).

License: MIT License

JavaScript 1.27% Mustache 6.52% HTML 3.47% Batchfile 0.06% Shell 0.32% PowerShell 0.35% Dockerfile 0.17% EJS 67.91% TypeScript 19.93%

purple-a11y's Issues

Sitemap Scan Broken

This did work historically, but now it seems to skip the pages and go straight to producing an empty report.

 % node cli.js -u https://www.tech.gov.sg/sitemap.xml  -c 1 -p 500 -k "CivicActions gTracker2:[email protected]"
(node:24191) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)

 Purple A11y version: 0.9.59 
 Starting scan... 
 Fetching URLs. This might take some time... 
 Fetch URLs completed. Beginning scan 
┌────────────────────────┐
│ No pages were scanned. │
└────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Report of this run is at a11y-scan-results.zip                  │
│ Results directory is at results/20240528_171104_www.tech.gov.sg │
└─────────────────────────────────────────────────────────────────┘
% 

Combine efforts

Just saw you put out a new release.

Would love to see some of the work we've done here get pulled back into the main repo - https://github.com/civicactions/purple-hats

I haven't seen what you've done recently but will definitely take a look. It might be worthwhile talking about some of the places where I've worked to extend the code. Sorry for the messy JS. I'm still very much learning Node.

Add to text summary

Right now, when running the cli.js file, I get output like this:

Purple A11y version: 0.9.41
Starting scan...
Fetching URLs. This might take some time...
Fetch URLs completed. Beginning scan
┌─────────────────────────────────────────┐
│ Scan Summary                            │
│                                         │
│ Must Fix: 0 issues / 0 occurrences      │
│ Good to Fix: 1 issues / 353 occurrences │
│ Passed: 135610 occurrences              │
└─────────────────────────────────────────┘
┌────────────────────────────────────────────────────────┐
│ Report of this run is at a11y-scan-results.zip         │
│ Results directory is at results/20240105_181127_custom │
└────────────────────────────────────────────────────────┘

That works reasonably well when you're scanning a single domain.

However, you could also add:
- Domain name
- Number of URLs crawled
- axe impact scores
- WCAG issue numbers

Add a score for better comparison.

I also like the idea of a score so that scans can be compared more easily. This is one approach.

Use the axe criteria to address severity: Score = (critical*3 + serious*2 + moderate*1.5 + minor) / (urls*5). This lets you compare a 1000-page site to a 10000-page site with a single number and see whether the site is getting better or worse for users. https://github.com/CivicActions/purple-hats/blob/master/mergeAxeResults.js#L383

Map that score to a letter grade that is easier for folks to understand:

  • A+ = 0 ; A <= 0.1 ; A- <= 0.3
  • B+ <= 0.5 ; B <= 0.7 ; B- <= 0.9
  • C+ <= 2 ; C <= 4 ; C- <= 6
  • D+ <= 8 ; D <= 10 ; D- <= 13
  • F+ <= 15 ; F <= 20 ; F- > 20
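
A minimal sketch of this scoring in JavaScript, using the weights and grade boundaries above (the function names are illustrative, not taken from Purple A11y or the CivicActions fork):

// score = weighted issue count normalised by number of URLs scanned
function accessibilityScore({ critical, serious, moderate, minor, urls }) {
  return (critical * 3 + serious * 2 + moderate * 1.5 + minor) / (urls * 5);
}

function letterGrade(score) {
  if (score === 0) return 'A+';
  if (score <= 0.1) return 'A';
  if (score <= 0.3) return 'A-';
  if (score <= 0.5) return 'B+';
  if (score <= 0.7) return 'B';
  if (score <= 0.9) return 'B-';
  if (score <= 2) return 'C+';
  if (score <= 4) return 'C';
  if (score <= 6) return 'C-';
  if (score <= 8) return 'D+';
  if (score <= 10) return 'D';
  if (score <= 13) return 'D-';
  if (score <= 15) return 'F+';
  if (score <= 20) return 'F';
  return 'F-';
}

// Example: 1000 pages with 20 critical, 50 serious, 200 moderate, 400 minor occurrences
const score = accessibilityScore({ critical: 20, serious: 50, moderate: 200, minor: 400, urls: 1000 });
console.log(score.toFixed(3), letterGrade(score)); // 0.172 A-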

Error when Chrome is running

There is nothing that says Chrome must not be running before starting a scan.
If that is a requirement, there should be a friendly message rather than an error.


PS D:\repos\purple-a11y> node index
┌────────────────────────────────────────────────────────────┐
│ Purple A11y (ver 0.9.46)                                   │
│ We recommend using Chrome browser for the best experience. │
│                                                            │
│ Welcome back Brian Teeman!                                 │
│ (Refer to readme.txt on how to change your profile)        │
└────────────────────────────────────────────────────────────┘
? What would you like to scan? (Use arrow keys)
❯ Sitemap
? What would you like to scan? Sitemap
? Do you want purple-a11y to run in the background? Yes
? Which screen size would you like to scan? (Use arrow keys) Desktop
? Please enter URL or file path to sitemap, or drag and drop a sitemap file here:  https://brianstest.site

 Error: EBUSY: resource busy or locked, copyfile 'C:\Users\brian\AppData\Local\Google\Chrome\User Data\Default\Network\Cookies' -> 'C:\Users\brian\AppData\Local\Google\Chrome\User Data\Purple-A11y\Default\Network\Cookies' 


C:\Users\brian\AppData\Local\Microsoft\Edge\User Data\Purple-A11y destDir
true cloneLocalStateFileSuccess


 Error: EBUSY: resource busy or locked, copyfile 'C:\Users\brian\AppData\Local\Microsoft\Edge\User Data\Default\Network\Cookies' -> 'C:\Users\brian\AppData\Local\Microsoft\Edge\User Data\Purple-A11y\Default\Network\Cookies'  


C:\Users\brian\AppData\Local\Microsoft\Edge\User Data null getEdgeData
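
A hedged sketch of how the profile-copy step could surface that friendly message instead of the raw error (the function and messages are illustrative, not Purple A11y's actual code):

import fs from 'fs';

function cloneCookieFile(src, dest) {
  try {
    fs.copyFileSync(src, dest);
    return true;
  } catch (err) {
    // EBUSY here almost always means the browser is running and holds a lock on its profile
    if (err.code === 'EBUSY') {
      console.warn('Chrome/Edge appears to be running and has locked its profile. Please close the browser and run the scan again.');
      return false;
    }
    throw err;
  }
}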

Make it easier to find Sitemap.xml files that can be used.

There are a number of sites that have sitemaps but it isn't always easy to find them from a list of URLs.

To reduce the time required to find them, it would be helpful to have a script that checks the common places where sitemaps are declared.
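
A rough sketch of such a script, assuming Node 18+ for the built-in fetch (the candidate paths are common conventions, not an exhaustive list):

const CANDIDATES = ['/sitemap.xml', '/sitemap_index.xml', '/sitemap/sitemap.xml'];

async function findSitemaps(origin) {
  const found = [];

  // robots.txt may declare one or more "Sitemap:" entries
  const robots = await fetch(new URL('/robots.txt', origin)).catch(() => null);
  if (robots && robots.ok) {
    for (const line of (await robots.text()).split('\n')) {
      const match = line.match(/^\s*sitemap:\s*(\S+)/i);
      if (match) found.push(match[1]);
    }
  }

  // fall back to probing well-known paths
  for (const path of CANDIDATES) {
    const res = await fetch(new URL(path, origin), { method: 'HEAD' }).catch(() => null);
    if (res && res.ok) found.push(new URL(path, origin).href);
  }

  return [...new Set(found)];
}

// findSitemaps('https://www.tech.gov.sg').then(console.log);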

Unable to run the application since this build 06.06.24

Hello,
I've tested each install and run of the app since 05.26.24. The 05.26.24 release is the last one I can run successfully. Since the 06.06.24 release, there appear to be modules missing or not being called correctly. See the screenshot of the error after executing "a11y_shell.cmd".

Error:
PS C:\Users\jackc\Desktop\06.06.24\purple-a11y> node index
node:internal/modules/cjs/loader:1147
throw err;
^

Error: Cannot find module 'C:\Users\jackc\Desktop\06.06.24\purple-a11y\index'
at Module._resolveFilename (node:internal/modules/cjs/loader:1144:15)
at Module._load (node:internal/modules/cjs/loader:985:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12)
at node:internal/main/run_main_module:28:49 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}


Also note: veraPDF files t01.xml through t06.xml do not extract - file too long.

Include xpath & severity from axe

Axe includes the xpath in its CSV reports, but Purple Hats does not.

Xpaths are useful for identifying where there may be multiple instances across multiple pages. They are also very useful for finding the exact reference which was causing the error.

I also think the Critical, Serious, Moderate & Minor impact levels are useful. You split yours up into WCAG vs Deque best practices, which is useful too. However, it should be possible to capture this:
https://github.com/dequelabs/axe-core/blob/develop/doc/issue_impact.md

I also liked how you had in earlier versions what types of disabilities were affected by a given SC. That was useful.

More data that may be useful to folks:
https://github.com/CivicActions/accessibility-data-reference
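
For reference, a hedged sketch of where that data lives in axe-core's results (the xpath option and impact field follow axe-core's documented API, as best I recall; names should be checked against the version in use):

const results = await axe.run(document, { xpath: true }); // xpath: true adds XPath selectors to each node
for (const violation of results.violations) {
  for (const node of violation.nodes) {
    // impact is one of critical / serious / moderate / minor
    console.log(violation.id, violation.impact, node.xpath || node.target);
  }
}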

Exclude urls matching a pattern

The sites I am scanning all have rss links in the format example.com/page?format=feed&type=rss

Is it possible to exclude all URLs that match this pattern? I tried adding an entry to exclusions.txt, \.*format=feed&type=rss\.*, but it had no effect.
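
A hedged aside: in a regular expression, \.* matches a run of literal dots, so the entry above can never match those links. Assuming exclusions.txt entries are treated as regular expressions (my assumption), a pattern along these lines should match them:

.*format=feed&type=rss.*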

Error from purple hats, but not axe

I crawled this site lflegal.com and got a number of errors on pages like this:

<html> element must have a lang attribute WCAG 3.1.1

<html dir="ltr">

One of the errors that popped up was:

https://www.lflegal.com/category/articles/settlement-agreement-press-releases/point-of-sale-press-releases/page/2/

HTML on the page is:

<!DOCTYPE html>

<!--[if IE 9]>
<html class="ie ie9" lang="en" prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb#">
 <![endif]-->
<html lang="en" prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb#">

It did catch some other useful errors, but this one seems to be a false positive.

It should be possible to script the execution

I'd love to be able to run this on a bunch of sites from the command line. Even just being able to run it from cron, so that every Sunday night a fresh report is generated for us to look at each week.

The current approach doesn't allow for that kind of execution. The desktop and CLI interfaces are nice if you want to scan a website every once in a while, but after a few weeks you really don't want to go through those steps by hand.
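
A minimal sketch of what this could look like today (my assumption, not an existing feature): a small Node script that loops over sites and shells out to the CLI with the same flags used elsewhere in this thread, which a cron entry could then run every Sunday night.

import { execSync } from 'child_process';

// the site list and the -k name/email are illustrative placeholders
const sites = ['https://www.tech.gov.sg', 'https://www.va.gov'];

for (const url of sites) {
  execSync(`node cli.js -u ${url} -c 2 -p 100 -k "Scan Bot:bot@example.com"`, {
    stdio: 'inherit',
  });
}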

Cannot find module -

After pulling down the latest git repo, I can't seem to get past:

 % npm run cli -- -c 2 -o a11y-scan-results.zip -u "http://localhost:8000" -w 360

 @govtechsg/[email protected] cli
 node dist/cli.js -c 2 -o a11y-scan-results.zip -u http://localhost:8000 -w 360

node:internal/modules/cjs/loader:1093
 throw err;
 ^

Error: Cannot find module '/Users/mgifford/Documents/GitHub/purple-a11y/dist/cli.js'
   at Module._resolveFilename (node:internal/modules/cjs/loader:1090:15)
   at Module._load (node:internal/modules/cjs/loader:934:27)
   at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:83:12)
   at node:internal/main/run_main_module:23:47 {
 code: 'MODULE_NOT_FOUND',
 requireStack: []
}

Node.js v19.9.0

 

I've tried to build it again from a fresh install but can't seem to get it running again from the command line.

Make crawls efficient and fast, improve issue coverage by 23%, and domain coverage

Hi, I saw this project recommended in my GitHub feed. I thought I'd share a project that may save some time and effort on challenges we already solved at scale. Automated web testing is not something that is easily done in a single language (Node.js is not meant for this kind of concurrency: the runtime is forked per pool and does not scale, and the community often confuses concurrency with parallelism, which are not the same thing).
Doing this job at scale requires a micro-service setup: a crawler in Rust, C, C++, etc.; dedicated browsers with configs that handle background processing and script transformation to prevent abuse; RPC streams for communication (REST does not scale for this type of workload and should not be used); a security layer; resource control for headless rendering; and efficient algorithms (most of today's have nested loops with no data-structure design, bad mutations, and ownership problems instead of using iterators). We rewrote the old runners and made them efficient for people who want results from tools like axe or CodeSniffer, and much more. Our goal is to be accurate and performant without any drips or leaks along the way.

We left a Lite version for the community to use, as it is far superior to any combination of tools for dynamic automated web accessibility testing, with 23% more coverage than alternatives and the most efficient crawling at scale: https://github.com/a11ywatch/a11ywatch.

  • Here is a video of a crawl completing with 63% accessibility coverage, subdomain and TLD coverage, code fixes, detailed AI alt-text enhancement, and, most importantly, the most efficient crawling at scale (millions of pages within seconds to minutes).
demo.mp4

I hope this helps solve some of the challenges here, as the system has many ways to integrate. We built several technologies along the way that are used in big tech companies today, e.g. https://github.com/spider-rs/spider.

Here is an example of a PHP integration with a tool called Equalify: https://github.com/bbertucc/equalify/tree/176-a11ywatch-integration (the exact commit with the main code required: https://github.com/j-mendez/equalify/commit/5eddd04653bf91eca15435465b78dab6c30920d8).

Integration for nodejs

You can use the sidecar to integrate without going over the wire.

npm i @a11ywatch/a11ywatch --save

import { scan, multiPageScan, crawlList } from "@a11ywatch/a11ywatch";

// single page website scan.
await scan({ url: "https://jeffmendez.com" });

// single page website scan with lighthouse results.
await scan({ url: "https://jeffmendez.com", pageInsights: true });

// all pages
await multiPageScan({ url: "https://a11ywatch.com" });

// all pages and subdomains
await multiPageScan({
  url: "https://a11ywatch.com",
  subdomains: true,
});

// all pages and tld extensions
await multiPageScan({ url: "https://a11ywatch.com", tld: true });

// all pages, subdomains, and tld extensions
await multiPageScan({
  url: "https://a11ywatch.com",
  subdomains: true,
  tld: true,
});

// all pages, subdomains, sitemap extend, and tld extensions
await multiPageScan({
  url: "https://a11ywatch.com",
  subdomains: true,
  tld: true,
  sitemap: true
});

// multi page scan with callback on each result asynchronously
const callback = ({ data }) => {
  console.log(data);
};
await multiPageScan(
  {
    url: "https://a11ywatch.com",
  },
  callback
);

PS: the only reason for using multiple languages instead of Rust for everything is the need to customize or tweak browsers that are written in different languages.

BasicCrawler limit

Is there a way to get around this?

INFO: BasicCrawler: Crawler reached the maxRequestsPerCrawl limit of 100 requests and will shut down soon. Requests that are in progress will be allowed to finish.
INFO: BasicCrawler: Earlier, the crawler reached the maxRequestsPerCrawl limit of 100 requests and all requests that were in progress at that time have now finished. In total, the crawler processed 104 requests and will shut down.
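
A hedged aside: that cap is Crawlee/Apify's maxRequestsPerCrawl option; elsewhere in this thread the CLI's -p flag is used to raise the page count, and in crawler code the setting looks roughly like this (illustrative, not Purple A11y's actual source):

import { BasicCrawler } from 'crawlee';

const crawler = new BasicCrawler({
  maxRequestsPerCrawl: 1000, // raise the cap that produces the log message above
  requestHandler: async ({ request }) => {
    // ...scan the page...
  },
});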

Provide Proof of Progress

If you are scanning pages with Purple Hats, demonstrate that there is improvement over time.

Build a score based on the axe criteria, which addresses severity: Score = (critical*3 + serious*2 + moderate*1.5 + minor) / (urls*5). This lets you compare a 1000-page site to a 10000-page site with a single number and see whether the site is getting better or worse for users. https://github.com/CivicActions/purple-hats/blob/master/mergeAxeResults.js#L383

This just makes it easier for people to see relative scores.

It is also critical to remind folks about Goodhart's law, which states: "When a measure becomes a target, it ceases to be a good measure." A score based on axe results is just one way to measure accessibility.

PDFs are being scanned when they shouldn't be.

I am not setting the filetype

  -i, --fileTypes                   

With

node --max-old-space-size=6000 --no-deprecation purple-a11y/cli.js -u https://www.whitehouse.gov -c 2 -s same-domain -p 50  -a none --blacklistedPatternsFilename ./pa-gTracker-exclude-medicare.csv -k "Random Example:[email protected]"

But I am still finding PDFs in the list of URLs crawled. This shouldn't be the case. If the default is HTML only, then I shouldn't see any PDFs (or other documents) in my results.

Couldn't keep to the domain

I added -a, though I should be able to use either:

  -a, --additional

With

node --max-old-space-size=6000 --no-deprecation purple-a11y/cli.js -u https://www.whitehouse.gov -c 2 -s same-domain -p 50  -a none --blacklistedPatternsFilename ./pa-gTracker-exclude-medicare.csv -k "Random Example:[email protected]"

It ran fine, but I found sub-domains in the returned results.

01/03/2024 Not able to run on Windows 11

Hello,
I am currently able to run the 12/14 release, but I get an error with the 01/03/2024 release. One of the changes must have had a negative impact, or the installation instructions need to be updated. I'm unsure.

Error:
{"timestamp":"2024-01-12 17:50:10","level":"error","message":"uncaughtException: Invalid URL\nTypeError: Invalid URL\n at new URL (node:internal/url:775:36)\n at getHost (file:///C:/Users/User/Desktop/Purple_A11Y_01032024/purple-a11y/utils.js:15:31)\n at combineRun (file:///C:/Users/User/Desktop/Purple_A11Y_01032024/purple-a11y/combine.js:42:81)\n at runScan (file:///C:/Users/User/Desktop/Purple_A11Y_01032024/purple-a11y/index.js:62:11)\n at file:///C:/Users/User/Desktop/Purple_A11Y_01032024/purple-a11y/index.js:99:11\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"}

Question, more than issue. Every scan results says "Website crawl (100 pages)". Is Purple A11Y limited to 100?

As the title mentions, all of my scan results say "Website crawl (100 pages)" on the left side of the HTML report. Is Purple A11y limited to 100 pages? What concerns me is that it hasn't varied at all; I would expect some sites to have 98 pages, others 130, etc. Is there a JS/config file I can check or edit to change how many pages are scanned? Thanks in advance! I appreciate all you do for this app!


Thanks,
Jack

What do you mean by "customisable"?

By customisable, do you mean that Purple A11y can be rebranded into a theme with custom colours, designs, behaviors, and images?

Or by customisable do you mean that we can create custom flows for scanning?

yargs needed for CLI

I tried to run the CLI using Node.js v20.0.0:

% node cli.js -c -u https://www.va.gov

Got this error:

node:internal/errors:490
   ErrorCaptureStackTrace(err);
   ^

Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/Users/mgifford/node_modules/yargs/helpers' imported from /Users/mgifford/purple-hats/cli.js
...

I think this resolved the problem:

% npm install yargs

ERR_DLOPEN_FAILED

From /purple-hats-portable-mac-arm64/purple-hats when I run $ node index I get the following error. 

node:internal/modules/cjs/loader:1243
  return process.dlopen(module, path.toNamespacedPath(filename));
                 ^
Error: dlopen(/Users/mgifford/node_modules/canvas/build/Release/canvas.node, 0x0001): tried: '/Users/mgifford/node_modules/canvas/build/Release/canvas.node' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/mgifford/node_modules/canvas/build/Release/canvas.node' (no such file), '/Users/mgifford/node_modules/canvas/build/Release/canvas.node' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
    at Module._extensions..node (node:internal/modules/cjs/loader:1243:18)
    at Module.load (node:internal/modules/cjs/loader:1037:32)
    at Module._load (node:internal/modules/cjs/loader:878:12)
    at Module.require (node:internal/modules/cjs/loader:1061:19)
    at require (node:internal/modules/cjs/helpers:103:18)
    at Object.<anonymous> (/Users/mgifford/node_modules/canvas/lib/bindings.js:3:18)
    at Module._compile (node:internal/modules/cjs/loader:1159:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1213:10)
    at Module.load (node:internal/modules/cjs/loader:1037:32)
    at Module._load (node:internal/modules/cjs/loader:878:12) {
  code: 'ERR_DLOPEN_FAILED'
}


Node.js v18.12.1
bash-3.2$
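
A hedged aside: the error shows an x86_64 canvas binding being loaded by an arm64 Node. Rebuilding the native module under the arm64 Node (standard npm commands, not a documented Purple Hats fix) usually resolves this:

npm rebuild canvas
# or, failing that, reinstall dependencies from scratch
rm -rf node_modules && npm install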

Filenames should be unique

The reports should have the URL & date hard-coded into their names, e.g. 18f.gov-25012020-report.html & 18f.gov-25012020-compiledResult.json.

This would make it easier to track.
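
A minimal sketch of building such names (illustrative; the date format follows the 25012020 example above):

const targetUrl = 'https://18f.gov'; // illustrative placeholder
const domain = new URL(targetUrl).hostname.replace(/^www\./, '');
const stamp = new Date().toISOString().slice(0, 10).split('-').reverse().join(''); // ddmmyyyy
const reportName = `${domain}-${stamp}-report.html`;
const resultName = `${domain}-${stamp}-compiledResult.json`;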

Respect Robots.txt Files

The scanner should respect the robots.txt files that some sites use to manage traffic.

Would be great if by default the scanner respected the wishes of the site owner.

Option to eliminate Axe rules best practices

Hello,
How do we eliminate best practices? Currently all issues are reported, both violations and best practices. Many best practices are not required to be implemented, and some look like false positives.

Out of one hundred issues, I see ninety that are Axe best-practice rules.
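
A hedged aside: axe-core tags its best-practice rules with "best-practice", so a scan can be limited to WCAG tags only (option shape per axe-core's documented API; whether Purple A11y exposes this is a separate question):

const results = await axe.run(document, {
  // run only rules tagged with WCAG levels, skipping "best-practice" rules
  runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa', 'wcag21aa'] },
});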

Upgrade to latest version of axe

I've got this in a PR here:
#15

Just making sure it is also in the issue queue.

The current version of Purple Hats is using an older version of the axe-core engine.

Unable to run an audit starting with the 07/14/2023 portable release

Hello,
I haven't been able to use PurpleHats with any release after 07/04.

I execute "Node Index" and it just stops. I found this in "errors.txt":
{"timestamp":"2023-08-22 11:50:11","level":"error","message":"uncaughtException: Unexpected end of JSON input\nSyntaxError: Unexpected end of JSON input\n at JSON.parse ()\n at getUserDataTxt (file:///C:/Users/Jackc/Desktop/Purple0822/purple-hats/utils.js:68:27)\n at file:///C:/Users/Jackc/Desktop/Purple0822/purple-hats/constants/questions.js:12:18\n at ModuleJob.run (node:internal/modules/esm/module_job:193:25)\n at async Promise.all (index 0)\n at async ESMLoader.import (node:internal/modules/esm/loader:530:24)\n at async loadESM (node:internal/process/esm_loader:91:5)\n at async handleMainPromise (node:internal/modules/run_main:65:12)"}

Can you help? I'd like to take advantage of the latest updates. I'm on Windows 10.

Make it easier to amplify the sitemap.xml crawl

The ability to scan sitemap.xml files in Purple A11y is powerful. However, the sitemaps are generally too big to be effectively crawled by this script.

I've created a script that I think can help make existing sitemap.xml files more powerful.

Scanning just a handful of sites is a problem. Scanning all web pages in a site also brings challenges with it, particularly for larger sites. To have confidence in accessibility, a random sample of pages should give us statistical certainty about the accessibility of a whole site.

The trouble is that there are often too many URLs, and you end up with sitemaps of sitemaps.

My script aggregates the XML files into just one and removes all the stuff we don't want to analyze (.doc, .pdf, .zip, etc.), then produces a random sample of URLs; a sketch of the approach is below. This XML file can be reused in the future to test whether the exact same URLs have improved over time (or not). I'm capping the sitemap at 2000 URLs, as that is a pretty decent sample for the sites we work with.
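
A rough sketch of that approach (my assumption of its shape, not the author's actual script), assuming Node 18+ for the built-in fetch:

const SKIP = /\.(pdf|docx?|xlsx?|pptx?|zip)(\?|$)/i;

// flatten a sitemap or sitemap index into one set of page URLs
async function collectUrls(sitemapUrl, out = new Set()) {
  const xml = await (await fetch(sitemapUrl)).text();
  const locs = [...xml.matchAll(/<loc>\s*([^<]+?)\s*<\/loc>/g)].map(m => m[1]);
  for (const loc of locs) {
    if (/\.xml(\?|$)/i.test(loc)) {
      await collectUrls(loc, out); // nested sitemap index
    } else if (!SKIP.test(loc)) {
      out.add(loc);
    }
  }
  return out;
}

// Fisher-Yates shuffle, then cap the sample at 2000 URLs
function sample(urls, cap = 2000) {
  const list = [...urls];
  for (let i = list.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [list[i], list[j]] = [list[j], list[i]];
  }
  return list.slice(0, cap);
}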

It could be enhanced in the future to ensure that files like the home page, search page, representative landing pages, and any unusual pages are included in the scan. This could be something that is just appended.

Are there lists of sitemap.xml tools that folks find useful?

Not scanning all pages for all sitemap.xml files in some, more in others.

I'd like to be able to scan more than a hundred pages. Actually more than a thousand. I try this:

node cli.js -c 1 -u file:///Users/mgifford/myOwnSitemap.xml -p 1500 -k "Me Really:[email protected]"

And it only gets me:

Sitemap crawl (139 pages)
Purple A11y Version 0.9.43

Any thoughts?

I can see another with:

Sitemap crawl (955 pages)
Purple A11y Version 0.9.43

But both of these sitemap.xml files have more than 1000 URLs.

Document how to run against local lists

From this:

  1. A text file with a list of links
  2. An XML file with XML tags in Sitemap Protocol format

It sounds like I should be able to run this against a local file on my local system, but I can't seem to get the formatting to accept a local path.

Closest I can come gives me this error:

Scanning website...
(node:76046) UnhandledPromiseRejectionWarning: Error: Cannot fetch a request list from file:/Users/mikegifford/purple-hats/url_list_Olivero.xml: RequestError: Error: Invalid URI "file:///Users/mikegifford/purple-hats/url_list_Olivero.xml"
    at RequestList._fetchRequestsFromUrl (/Users/mikegifford/purple-hats/node_modules/apify/build/request_list.js:564:13)
    at process._tickCallback (internal/process/next_tick.js:68:7)
    at evalScript (internal/bootstrap/node.js:585:13)
    at startup (internal/bootstrap/node.js:267:9)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:739:3)
(node:76046) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:76046) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
The file /Users/mikegifford/purple-hats/results/2020-9-10/159975560855e6613c80/reports/report.html does not exist.

Make it easier to analyze reports

I created another script to help analyze the reports.

Right now, if you scan a few sites with Purple A11y's CLI tool, you'll be left with a bunch of directories with HTML and CSV files in them. That's great, but it doesn't let you easily gather the metadata behind the scan. I wanted a snapshot, not the detailed reports.

So I created a little script to crawl the CSV files and extract some very basic information that would be a very basic status report.

I wanted a consistent benchmark for our sites that allows us to demonstrate improvement over time. Seeing the individual errors is useful, but if I have scanned 1000 pages, it would make sense that there would be more errors than if I'd just scanned 100.

Knowing the number of URLs, WCAG errors (of each type), and axe impact levels (of each type) gives a better snapshot.
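
A minimal sketch of such a summary script (the results layout and CSV column order are assumptions, not Purple A11y's documented format):

import fs from 'fs';
import path from 'path';

function summarise(resultsDir) {
  const summary = {};
  for (const scan of fs.readdirSync(resultsDir)) {
    const csvPath = path.join(resultsDir, scan, 'report.csv'); // assumed layout
    if (!fs.existsSync(csvPath)) continue;
    const rows = fs.readFileSync(csvPath, 'utf8').trim().split('\n').slice(1);
    const pages = new Set();
    const bySeverity = {};
    for (const row of rows) {
      const [url, , severity] = row.split(','); // assumed column order
      pages.add(url);
      bySeverity[severity] = (bySeverity[severity] || 0) + 1;
    }
    summary[scan] = { urls: pages.size, issues: rows.length, bySeverity };
  }
  return summary;
}

console.table(summarise('./results'));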

Perfection is nice, but for most sites it may be unattainable; it would be better to strive to prove progress.

Report crashes - Allocation failed - JavaScript heap out of memory

If I crawl this sitemap.xml file https://cnib.ca/en/sitemap.xml?region=on

The scan fails. There are over 1800 pages in it, but I had hoped it would be able to manage sites larger than this.

INFO: BasicCrawler: All the requests from request list and/or request queue have been processed, the crawler will shut down.
INFO: Crawler final request statistics: {"avgDurationMillis":8803,"perMinute":34,"finished":1813,"failed":3,"retryHistogram":[1811,2,null,3]}
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

<--- Last few GCs --->

[9364:0x102679000]  3255724 ms: Mark-sweep 1306.0 (1388.6) -> 1305.9 (1357.1) MB, 68.6 / 0.0 ms  (average mu = 0.719, current mu = 0.000) last resort GC in old space requested
[9364:0x102679000]  3255797 ms: Mark-sweep 1305.9 (1357.1) -> 1305.9 (1357.1) MB, 72.7 / 0.0 ms  (average mu = 0.556, current mu = 0.000) last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x200b585878a1]
Security context: 0x1c9e2dd1e6e1 <JSObject>
    1: byteLength(aka byteLength) [0x1c9eb2305ef1] [buffer.js:~510] [pc=0x200b58ca5d2c](this=0x1c9e693826f1 <undefined>,string=0x1c9e2920a331 <Very long string[460960089]>,encoding=0x1c9e2dd3d4f1 <String[4]: utf8>)
    2: arguments adaptor frame: 3->2
    3: fromString(aka fromString) [0x1c9eb231c639] [buffer.js:~335] [pc=0x200b58cec790](this=0x1c9e693826f1 ...

 1: 0x10003ae75 node::Abort() [/usr/local/bin/node]
 2: 0x10003b07f node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 3: 0x1001a7ae5 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 4: 0x100572ef2 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 5: 0x10057c3f4 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
 6: 0x10054e1e4 v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/local/bin/node]
 7: 0x10067fd99 v8::internal::String::SlowFlatten(v8::internal::Handle<v8::internal::ConsString>, v8::internal::PretenureFlag) [/usr/local/bin/node]
 8: 0x1001c587d v8::String::Utf8Length() const [/usr/local/bin/node]
 9: 0x10004e7b6 node::Buffer::(anonymous namespace)::ByteLengthUtf8(v8::FunctionCallbackInfo<v8::Value> const&) [/usr/local/bin/node]
10: 0x200b585878a1 
The file /Users/mikegifford/purple-hats/results/2020-9-24/160096044136e10467e2/reports/report.html does not exist.

What's the best way to deal with this memory error?
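
A hedged aside: elsewhere in this thread the Node heap limit is raised when invoking the CLI, which is the usual workaround for this failure (the value is illustrative):

node --max-old-space-size=6000 cli.js -u https://cnib.ca/en/sitemap.xml?region=on -c 1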

Mac Installation Error

After cloning the repo I get this:

$ chmod +x mac-installer.sh 
$ bash mac-installer.sh 
: command not foundine 2: 
mac-installer.sh: line 49: syntax error: unexpected end of file

Ended up running:

$ dos2unix mac-installer.sh 
dos2unix: converting file mac-installer.sh to Unix format...

And it's working fine now.

Ability to parse list of URLs for scan

A user might have a list of URLs to scan. Beyond the existing sitemap scan, a feature to upload a CSV file of URLs would help users scan a subsite.
