webhintio / rfcs

🛑 This repository is deprecated!! To request changes, new features, hints, etc. please open an issue in https://github.com/webhintio/hint
Original reported by @alrra in webhintio/hint#30
@dstorey had a good idea earlier about this. We could use the WebIDL of the different versions of different browsers to track which APIs have been deprecated and removed. This should be more reliable than a manually curated list, should be more automatic (we just need to pull the new version each time), etc.
It won't be trivial but I think it is worth a shot.
SGTM.
we just need to pull the new version each time
Note: We can (?) even use Travis CI to pull the data periodically and automatically commit any changes.
The Edge WebIDL files have "extended" (browser specific), "deprecated" (deprecated from the spec) and "interop" (included for interop reasons). Older versions just have the "msDeprecated" tag I think.
I'll need to look into the other browsers to see how they mark things.
For Firefox they're all in one directory like Edge: https://dxr.mozilla.org/mozilla-central/source/dom/webidl
Chrome and Safari mix theirs in with their implementation, such as http://trac.webkit.org/browser/webkit/releases/Apple/Safari%20Technology%20Preview%2028/WebCore/html and https://chromium.googlesource.com/chromium/blink/+/master/Source/core/html
Firefox splits out their deprecated members into their own partial interface like we're doing with Edge, but I don't see a tag, so you'll probably need to recognize the URL comment above the partial to know they're deprecated. Chrome doesn't split them out but does have a comment saying obsolete or such. Hopefully those comments are consistent enough to check. In Safari they're all mixed together, so we probably can't pick up any info.
When a feature is dropped, like webkitDropzone, it is removed from the webidl file, so you'll need to keep track of that.
DOM IDL attributes that match the equivalent HTML attributes are tagged with [Reflect] in most browsers, so we should be able to use that to detect HTML attribute support. I'm not sure about attributes that have different names in HTML vs the IDL interface attribute.
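As a sketch of how the WebIDL data could be mined, here's a minimal scan for the `[Reflect]` extended attribute mentioned above. This uses a naive regex purely for illustration; a real implementation should use a proper WebIDL parser (e.g. the `webidl2` package), and the function name is hypothetical.

```typescript
// Minimal sketch: scan a WebIDL fragment for members tagged [Reflect],
// which marks DOM attributes that mirror HTML content attributes.
// A real implementation should parse the IDL properly instead of this regex.
function findReflectedAttributes(idl: string): string[] {
  const names: string[] = [];
  // Matches lines like: `[Reflect] attribute DOMString id;`
  const pattern = /\[[^\]]*\bReflect\b[^\]]*\]\s*attribute\s+\w+\??\s+(\w+)/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(idl)) !== null) {
    names.push(match[1]);
  }
  return names;
}

const sample = `
interface HTMLImageElement {
  [Reflect] attribute DOMString alt;
  [CEReactions, Reflect] attribute DOMString src;
  attribute boolean complete;
};
`;

console.log(findReflectedAttributes(sample)); // → ["alt", "src"]
```

The same approach could be extended to look for a `Deprecated` extended attribute or browser-specific tags, per the discussion above.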
@dstorey We had a discussion about this the other day, can you provide more details here? Thanks!
I'm putting together more details on what could be done here.
We could also expand the scope of this hint to notify users about new APIs which aren't yet available somewhere within their chosen browser support matrix (see #1203 for CSS) and HTML deprecations (see #1204).
We just had a good conversation today with @digitarald and @martinbalfanz from Mozilla and it sounds like the MDN compatibility data would be a good fit to help us build this hint. There's also a Firefox/Chrome DevTools extension called Compat Report, built on this data, that we could look at as an example. Currently the MDN data is complete for CSS, with work in progress to finish the rest of the platform APIs.
I'm particularly interested in this space and would be happy to help us move forward on getting this hint started once I wrap up with the VS Code extension.
I'd still like to see WebIDL used to keep the MDN data up to date, as it is often incorrect (at least for Edge) or out of date (it relies on someone manually checking each release and not missing anything). Having tooling that checks both WebIDL and type mirror to guide updating MDN (if it isn't allowed to be automatic) would be fantastic. E.g. Google's type mirror tool that compares with MDN: https://mdittmer.github.io/mdn-confluence/multi.html and https://github.com/mdittmer/mdn-confluence. Both data sources have their uses, as some data is lost with type mirror (including non-enumerables, parameters, etc.) while sometimes WebIDL may be incorrect (such as if something is off by default or an API is broken somehow).
I'd still like to see WebIDL used to keep the MDN data up to date
@dstorey Totally agree that figuring out how to use WebIDL to help keep whatever data source we use up-to-date is a good idea. I noticed there is an open issue on the MDN compat data repo about looking into WebIDL for keeping in sync with specifications.
I'd be curious to have @patrickkettner's input too, but it seems like it would be worthwhile to start a discussion or new issue there about also using the WebIDL from browsers?
I think so, as it is a good source of truth for the most part, with all browsers using it to generate code (with some caveats).
I'd still like to see WebIDL used to keep the MDN data up to date as it is often incorrect (at least for Edge) or out of date (relies on someone to manually check each release and not miss anything).
@dstorey is that still the case with the recent involvement from all browsers to keep the data fresh? I wonder if it is on webhint to figure this problem out, since the browser-compat-data seems to be working on improving correctness and reducing the update latency.
it is substantially less true than it was previously, but it is still an issue. IDL code will almost always be more accurate than a manually curated list
I can understand the risk of using bad data; it's a GIGO risk and it can erode trust in the audit. My expectation would be, and I am happy to take point on driving this, that MDN's data will keep aiming to be the most complete and up-to-date dataset.
As perfection will never be achieved with this kind of aggregated data, what level of accuracy do we expect in the data that powers the audits? Or more specifically, what is good enough to unblock this discussion?
Part of #48.
The initial webhint browser extension uses a fixed configuration of recommended hints and target browsers.
The goal is to allow configuration of:
Originally reported by @kshyju in webhintio/hint#1156 with participation of @Ruffio
A rule which checks whether images are scaled down in the page.
We can flag the images where computed width/height is less than natural width/height
- The `img` tag
- Any tag (e.g. a `div`) where the background is set as an image.
- Exclude tracking images (usually 1px × 1px) from this rule.

Thoughts?
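A minimal sketch of the comparison described above. Names and the tracking-pixel heuristic are illustrative; in a browser the numbers would come from `naturalWidth`/`naturalHeight` and the computed layout size.

```typescript
// Sketch of the check: an image is flagged when its natural (intrinsic)
// size exceeds its displayed (computed) size, except for tracking pixels.
type ImageInfo = {
  naturalWidth: number;
  naturalHeight: number;
  displayedWidth: number;
  displayedHeight: number;
};

function isScaledDown(img: ImageInfo): boolean {
  // Skip tracking pixels (usually 1px × 1px).
  if (img.naturalWidth <= 1 && img.naturalHeight <= 1) {
    return false;
  }
  return (
    img.naturalWidth > img.displayedWidth ||
    img.naturalHeight > img.displayedHeight
  );
}

// A 1200×800 source rendered at 300×200 would be flagged:
console.log(isScaledDown({ naturalWidth: 1200, naturalHeight: 800, displayedWidth: 300, displayedHeight: 200 })); // → true
// A 1×1 tracking pixel is ignored:
console.log(isScaledDown({ naturalWidth: 1, naturalHeight: 1, displayedWidth: 1, displayedHeight: 1 })); // → false
```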
You should also consider inline css images
@Ruffio what do you think we should do with inline css images? Should we put a threshold on the maximum size to inline? Something else like checking if the image is increasing the size in such a way that another HTTP package should be sent (although cool to figure out, it might be a bit overkill)?
Hmmm, good questions :-)
Inline images are Base64-encoded versions of the original image and will therefore always be larger than the original. The inline image can be placed directly into the webpage or in a separate CSS file. If you are on a mobile device with a slow internet connection, the extra size can be more appealing than having the image as a separate file, where the browser has to open a new connection to download it.
My comment above was actually 'just' to compare the size of the encoded image with what is displayed.
So if the original, and thereby the encoded image, has a size of 100×100, then it should also be displayed as 100×100. I have a site (https://www.rcfed.com/Utilities/Base64Encode) where you can 'upload' an image and then get the encoded version back in two versions:
- for inline usage directly in the webpage
- for usage in a css file
So my point is that we should not only test separate image files for whether the computed width/height equals the natural width/height; this should also be done for inline images.
I don't think we can find out/calculate if the image should be inlined or kept as a separate file, as it depends on transfer speed and the time it takes to open a new connection. My comment was only regarding that the encoded size (height/width) must be the same as the displayed size (height/width), not the actual size/length of the image.
Originally reported by @poshaughnessy in webhintio/hint#769
Jotting this down while I remember (I think @torgo raised it originally?):
It would be great if we could analyse whether permissions (location / push notifications / storage / other?) are requested on initial page load, without a user interaction? (Since this is bad practice and a current bone of contention for many).
Good idea. A couple ideas on how to detect this:
- We could monkey patch the original methods and at the end of the scan see if they have been executed.
- Detect it via the JS parser. This would be the ideal scenario because it would also work locally, but I'm not confident we can make it as reliable as the other option (especially if using something like React or similar)
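The monkey-patching option might look roughly like this. The wrapped object here is a fake stand-in for testing the idea; in a browser you would wrap real entry points such as `navigator.geolocation.getCurrentPosition` or `Notification.requestPermission`.

```typescript
// Sketch of the monkey-patching approach: wrap a permission-requesting
// method so every call is recorded, then inspect the log at the end of
// the scan (any entry logged before the first user interaction is suspect).
const permissionCalls: string[] = [];

function instrument(target: any, method: string): void {
  const original = target[method].bind(target);
  target[method] = (...args: any[]) => {
    permissionCalls.push(method); // record that the API was invoked
    return original(...args);
  };
}

// Fake stand-in so the sketch is runnable outside a browser:
const fakeGeolocation = {
  getCurrentPosition: () => "position-requested"
};

instrument(fakeGeolocation, "getCurrentPosition");
fakeGeolocation.getCurrentPosition();

console.log(permissionCalls); // → ["getCurrentPosition"]
```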
Originally reported by @kshyju in webhintio/hint#1091
The majority of end users aren't as lucky as @alrra to have a tall/wide monitor
We should consider building a rule which recommends lazy loading of offscreen images
https://developers.google.com/web/tools/lighthouse/audits/offscreen-images
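A simplified sketch of the offscreen test such a rule might run. This is illustrative only; real code would use `getBoundingClientRect()` and also consider horizontal position and scrolling behavior.

```typescript
// Sketch: an image whose bounding box lies entirely outside the initial
// viewport (vertically) is a lazy-loading candidate.
type Rect = { top: number; bottom: number };

function isOffscreen(imageRect: Rect, viewportHeight: number): boolean {
  return imageRect.top >= viewportHeight || imageRect.bottom <= 0;
}

console.log(isOffscreen({ top: 2000, bottom: 2300 }, 800)); // → true
console.log(isOffscreen({ top: 100, bottom: 400 }, 800));   // → false
```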
Originally reported by @antross in webhintio/hint#1203
Related to #3, the `doiuse` library warns about using CSS properties which aren't supported in a given set of target browsers, backed by caniuse data (vs stylelint, which focuses more on general CSS mistakes). This could be useful to warn not only about adoption of newer features, but also older features which haven't been implemented in all browsers (e.g. the CSS `zoom` property). There's even a stylelint plugin that wraps this, but I suspect it'd be better to avoid the extra middleman.
Originally reported by @alrra in webhintio/hint#20
`1; mode=block`. (?)

See also:
Originally reported by @alrra in webhintio/hint#29
See also:
- https://blogs.windows.com/msedgedev/2016/12/14/edge-flash-click-run/
- https://blog.mozilla.org/futurereleases/2016/07/20/reducing-adobe-flash-usage-in-firefox/
- https://blog.google/products/chrome/flash-and-chrome/
- https://webkit.org/blog/6589/next-steps-for-legacy-plug-ins/
- https://www.fxsitecompat.com/en-CA/docs/2016/plug-in-support-has-been-dropped-other-than-flash/
- https://developer.mozilla.org/en-US/Add-ons/Plugins
- https://developer.chrome.com/extensions/npapi
- https://blogs.windows.com/msedgedev/2017/07/25/flash-on-windows-timeline/
Originally reported by @alrra in webhintio/hint#462
- `type` is specified.
- `sizes` is accurate.

At this moment, webhint supports the browserslist configured in the `.hintrc` file and inside the `package.json` file. But in some projects users have defined the browserslist in `.browserslistrc`.
To do that it will also be necessary to "merge" the list of browsers (or a similar solution) if the user uses `.browserslistrc` + `.hintrc`.
CC/ @molant
Originally reported by @alrra in webhintio/hint#34
See also:
Originally reported by @alrra in webhintio/hint#146
`/favicon.ico`
Look into octohook, see where ZenHub stores information, etc.
Originally reported by @ThomasArdal in webhintio/hint#624
This is a follow up on the following tweet: https://twitter.com/narwhalnellie/status/926083751175536647
My former product HippoValidator is no longer live. A lot of the ideas in the product are similar to the ones in Sonar. This is a summary of the features implemented for HippoValidator. I still have the code and would love for you to use either the code itself or ideas from it.
Much like your site, HippoValidator consisted of different validators. The following validators are available:
AccessibilityValidator
Basically a client built on top of AChecker (https://achecker.ca/checker/index.php), which validates a website against WCAG. The client itself is available here: https://github.com/HippoValidator/ACheckerAccessibilityValidationClient and generated from the WSDL. I have some private code that takes the result and transforms it into errors, warnings, etc.
Broken link checker
A homemade validator of all links on a URL. The code is based on a project I created called FluentLinkChecker: https://github.com/HippoValidator/FluentLinkChecker.
CSS Lint
A CSS linter for C#, based on the Jurassic NuGet package: https://github.com/HippoValidator/CSSLintValidator
Feed Validation
Extracted any RSS/Atom feed URLs from a URL and validated them against the W3C feed validator: http://validator.w3.org/feed
JSHint
A JSHint validator for .NET, again based on Jurassic: https://github.com/HippoValidator/JsHintValidator
Markup validator
Basically a .NET client for the w3 markup validator.
Mobile validator
A set of 11 rules checking common mobile issues like missing favicon and other rules.
ModernIE
A client for the modern ie website. Don't know if that still exists.
Performance
Client for the Google Page Speed API
RedBot validator
A client for the redbot.org webservice
SEO Validator
Homemade SEO validator consisting of 30+ rules covering common technical SEO issues like missing tags, friendly URLs, etc.
Spell checking
A client for the AfterTheDeadline spell checker service
Style validation
A client for the w3 style validator.
Originally reported by @molant in webhintio/hint#131
The idea would be to use svg-weirdness and warn the user if they are using something that could cause trouble.
Ideally the rule will:
Originally reported by @alrra in webhintio/hint#31
Originally reported by @molant in webhintio/hint#552
Original comment: webhintio/hint#438 (comment)
This should be a security concern and raise some eyebrows if it happens.
Originally opened by @alrra in webhintio/hint#438
Possible checks:
- Multiple redirects (bad for performance) (#641)
- Redirects from HTTPS to HTTP
- Client-side redirects (meta or JavaScript)
- Other?
Originally reported by @alrra in webhintio/hint#1094
e.g.:
- `type="text/css"` for `<style>`
- `type="text/javascript"` (and others) for `<script>` (note: `<script>` with a version parameter will no longer be loaded)
Originally reported by @ffoodd in webhintio/hint#1097
@zachleat came up with faux-pas, whose description is:
A script to highlight elements that are mismatched incorrectly to @font-face blocks, which may result in shoddy faux bold or faux italic rendering.
Not really an error, but certainly something to warn about.
This sounds like a great idea. Some things to figure out:
- Looks like it uses the CSS Font Loading API. I'm not sure that JSDOM (the connector we are using in the online service) supports it, so we will have to see if there's any workaround for this
- We will need to do some modifications to return the element with the mismatch so it works with `report()`, since we don't do any highlighting.

@zachleat from what I see in here you are using the fonts API to know which ones have been loaded, is that right? I wonder if we can use a polyfill like `fontloader` or `font face observer` for the connectors that do not support it. Have you done any tests with any?
I’m almost certain that JSDOM does not support the CSS Font Loading API. I was using JSDOM for the GlyphHanger project and migrated off (for this and other reasons).
The important piece of the CSS Font Loading API is that it includes a collection of web fonts in use on the page. Neither `fontloader` nor `fontfaceobserver` has that functionality. Do y'all have something that parses CSS? You'd need to take that method to find any and all @font-face blocks in use. That'd be the missing piece there.
We don't have a CSS parser but want to add one at some point.
One option would be to not wait and create the rule "limited" to only `chrome` (and thus the `cli`) until we figure out how to support this on `jsdom`.
This API has grown to contain a number of optional parameters which can lead to awkward calls like:
await context.report(resource, null, 'Resource has no content.', undefined, problemLocation);
public async report(resource: string, element: IAsyncHTMLElement | null, message: string, content?: string, location?: ProblemLocation, severity?: Severity, codeSnippet?: string): Promise<void>
The only two pieces that are truly required are the first and third parameters (`resource` and `message`).
I propose either replacing or providing an alternate version of this API which takes an `options` object instead (depending on whether we're willing to take this as a breaking change):
type ReportOptions = {
codeSnippet?: string;
content?: string;
element?: IAsyncHTMLElement | null;
location?: ProblemLocation;
severity?: Severity;
};
public async report(resource: string, message: string, options?: ReportOptions): Promise<void>
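For illustration, a stripped-down mock showing how call sites would look with the proposed options object. The mock `report` below is not the real implementation; it just formats a string so the call shape is visible.

```typescript
// Mock of the proposed API shape (types simplified; the location fields
// are illustrative stand-ins for the real ProblemLocation).
type ProblemLocation = { line: number; column: number };

type ReportOptions = {
  codeSnippet?: string;
  content?: string;
  location?: ProblemLocation;
};

function report(resource: string, message: string, options: ReportOptions = {}): string {
  const where = options.location ? ` @ ${options.location.line}:${options.location.column}` : "";
  return `${resource}: ${message}${where}`;
}

// Callers only pass what they need, by name — no null/undefined placeholders:
console.log(report("https://example.com/app.js", "Resource has no content.", { location: { line: 10, column: 4 } }));
// → "https://example.com/app.js: Resource has no content. @ 10:4"
```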
Originally reported by @rachelnabors in webhintio/hint#561
Anton had a great idea at our last meeting: how about making some rules for animations? Here're my "rules" as it were.
Keep in mind that CSS Animations and Transitions are fairly easy to spot: look for the `animation` and `transition` properties and their component properties (like `animation-property` and `transition-duration`). But when it comes to JavaScript animations, à la GSAP and the Web Animations API, things get trickier. Let's tackle those later (should just be a matter of translating these rules to a different kind of parsing).
Rule ideas
- perf: `animation/transition-property` should be `transform` or `opacity` for greatest performance, or throw a warning.
- perf: Test for the FLIP technique in CSS animations: tricky, but here you're looking for `animation-direction: reverse`. There's a JS lib that does this, worth picking apart.
- style: Single-fire animations that fill forwards could be transitions: `animation: wouldbetransition 1 forwards` or `animation-iteration: 1; animation-fill-mode: forwards;` (Warning: these stack like so: `animation-iteration: 1, infinity; animation-fill-mode: forwards, both;`)
I will add to this as I think of other things.
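As a rough illustration of the first perf rule, here is a naive scan for `transition-property` values other than `transform`/`opacity`. It is regex-based for brevity; a real hint would use a CSS parser such as PostCSS, and the allowed-property list is an assumption.

```typescript
// Naive sketch: collect transition-property values that are expensive
// to animate (anything other than transform/opacity/none).
const CHEAP_PROPERTIES = new Set(["transform", "opacity", "none"]);

function findExpensiveTransitions(css: string): string[] {
  const offenders: string[] = [];
  const pattern = /transition-property\s*:\s*([^;}]+)/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(css)) !== null) {
    for (const prop of match[1].split(",").map((p) => p.trim())) {
      if (!CHEAP_PROPERTIES.has(prop)) {
        offenders.push(prop);
      }
    }
  }
  return offenders;
}

console.log(findExpensiveTransitions(".a { transition-property: width, opacity; }")); // → ["width"]
```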
The ESLint `no-unused-vars` rule generates a number of false positives, in particular when importing types that are only used for type-checking and not at runtime. It also suffers from late discovery as we don't currently run linting with every build.
I propose switching to using the flags supported by `tsc` to check for this instead (see `--noUnusedLocals` and `--noUnusedParameters` in compiler options), gaining better type-awareness and making this a build-blocking error.
Originally reported by @sarvaje in webhintio/hint#1251:
If a web site has errors in the console, it can mean that your scripts are not working as you expect:
- maybe something is deprecated, blocked (because of CSP), etc.
The CLI for webhint already prompts when `hint` or an `@hint/` package is out-of-date and installs the latest versions if accepted. I'd like to see the VS Code extension for webhint do the same.
Part of #48.
The initial webhint browser extension displays results in the console. While this may be an interesting option for some users, displaying results in a devtools panel provides additional functionality and control over the experience.
Currently `debugging-protocol-connector.ts` contains a number of properties which are asynchronously initialized after the CDP connection has been established. As the code is migrated to TypeScript's strict mode, each of these properties is considered potentially uninitialized when read within methods. The spot fix is to override the compiler's assertion for now.
Going forward, this would be better handled by taking all of these late-initialized properties and putting them within a single (initially `null`) sub-object. This provides a single point at which null-checks can be performed in dependent methods without having to check each property individually. It also allows the TypeScript compiler to understand that this check has taken place.
Originally reported by @antross in webhintio/hint#1179
This would expand coverage to catch a number of common CSS mistakes, not only in CSS but in pre-processor languages like SASS and LESS too. It's built on PostCSS, but also supports its own plug-ins which may make it a useful hook for additional rules.
There's a good list of built-in rules already available as well, with the recommended config enabling just the rules checking for possible errors: https://stylelint.io/user-guide/rules/
Originally reported by @molant in webhintio/hint#132
It will use the data from flexbugs.
Similar to webhintio/hint#131, we will have to:
- use `JSON` for the patterns
- make it configurable (at the `.sonarrc` level and not just the rule)

Originally reported by @alrra in webhintio/hint#21
"DENY"
on pages that allow a user to make a state-changing operation (e.g.: login pages, pages that contain one-click purchase links, checkout or bank-transfer confirmation pages, pages that make permanent configuration changes, etc.)

See also:
Originally reported by @alrra in webhintio/hint#464
Hi there,
this issue is based on this Twitter conversation. First things first: thank you for this project.
I'd like to know what is the exact scope of webhint? The website says "webhint is a linting tool that will help you with your site's accessibility, speed, security and more, by checking your code for best practices and common errors.". I'd like to know if it should be treated as a (pluggable?) general purpose hinter/linter?
Why do I ask?
Currently we use multiple linters. Every linter differs slightly in output/configuration/and so on. I'm looking for a tool which tries to generalize and abstract the common behavior. Ideally I would like to have a basic linting foundation which works similarly to prettier: it has some common rules/APIs and is pluggable for new languages.
I'd like to lint my CSS/HTML/JS/TS/etc. files with the same tools, especially when I have a linting rule which affects multiple languages (changing path, renaming things which correspond to each other and so on).
It would be nice if webhint could hook into existing language services (e.g. the one for TypeScript) to use them for analyzing the code and triggering refactors.
Is this in the scope of this project?
Originally reported by @molant in webhintio/hint#127
Addy Osmani published an interesting article about Chrome's preload, prefetch and priorities.
We should:
Originally reported by @molant in webhintio/hint#797
Firefox is deprecating it:
This can probably be done as part of #10
The idea is to create a `connector` around the web extensions model to facilitate running webhint as a browser extension. This would make it easier to run webhint cross-browser so it's more accessible to developers wherever they spend their time. This would be wrapped in an `extension-browser` package to provide the pieces to register and start running webhint.
Originally reported by @alrra in webhintio/hint#25
`X-WebKit-CSP`, `X-Content-Security-Policy`).

TODO: Look into what other checks we can add for this (e.g.: validate the content of the header, `upgrade-insecure-requests`).

See also:
After talking with @molant it seems super useful to have a parser that exposes the content of `package.json` to the hints.
Originally reported by @molant in webhintio/hint#1287
Originally reported by @molant in webhintio/hint#658
There is a new endpoint from Cloudinary where you can upload an image and it will give you information about other formats. It is currently not supported in the npm package but it's available in one of the branches. We should look into adding support for it.
Because we only need to upload images maybe we can remove the dependency on the package and have our own version.
I'll update this issue once I've explored a bit more the code.
Do you have a link to this? I was already thinking about building a desktop tool to "scan" images by just uploading them to Cloudinary and downloading the optimized version, but this sounds more interesting. Would love to have similar functionality built into Sonarwhal...
The new endpoints are exposed in https://github.com/cloudinary/cloudinary_npm/tree/analyze_api and used in https://github.com/cloudinary/web-speed-test-client (code for https://webspeedtest.cloudinary.com/). The code hasn't been published to npm so we will probably have to create our own uploader that talks with the endpoint directly (shouldn't have to be too complicated)
My plan is to start working on this in January (trying to finish #142 and #12 and some other items) but if you want to submit a PR before that would be awesome.
Originally reported by @alrra in webhintio/hint#145
We already have a cloudinary rule so maybe we should explore the `sharp` option from this comment:

Another option could be to use `sharp` and do everything locally. Not entirely sure what parameters we should use. If we decide to go down this path, even for the online version, we should definitely split the tests so the image optimization can run asap.
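Whichever backend is used (Cloudinary's endpoint or `sharp` locally), the hint ultimately compares byte sizes. A hypothetical sketch of that reporting decision; the threshold is an assumption, not anything webhint defines.

```typescript
// Sketch: compare the original file size with the size the optimizer
// reports for the optimized version, and only flag meaningful savings.
function shouldReportSavings(
  originalBytes: number,
  optimizedBytes: number,
  thresholdPercent = 5
): boolean {
  if (optimizedBytes >= originalBytes) {
    return false; // the "optimized" version isn't smaller
  }
  const savings = ((originalBytes - optimizedBytes) / originalBytes) * 100;
  return savings >= thresholdPercent;
}

console.log(shouldReportSavings(100_000, 60_000)); // → true (40% savings)
console.log(shouldReportSavings(100_000, 99_000)); // → false (only 1%)
```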
Originally reported by @antross in webhintio/hint#1204
Includes items like `<frameset>` and `<keygen>`. Full list at https://html.spec.whatwg.org/multipage/obsolete.html#non-conforming-features.
Originally reported by @molant in webhintio/hint#972
After talking with @bterlson today, here goes an initial list of rules for TypeScript we could implement:
- use strict mode (done in webhintio/hint@69aab2d)
- check if TS is installed globally and if so flag it as an error
- inline sourcemaps are the most reliable way to use them
- `importHelpers` can help reduce the final size by avoiding repeated code (done in webhintio/hint@6565acc)
- `esModuleInterop` should be `true` and `allowSyntheticDefaultImports` shouldn't be set. This property helps avoid issues when the generated code is going to be used by another tool like Babel that might `require` the code differently.
- `decorators` should be false
- `forceConsistentCasingInFileNames` should be `true` to avoid possible issues when working with *nix and Windows on the same project (done in webhintio/hint@a9a97fc)
- Look into https://github.com/RyanCavanaugh/tsbuild, it's still early in the development but will be useful for monorepo projects and will have its own configuration file
One thing that came up during the conversation is to have a rule that will check that your bundle is the smallest one. This could actually be a mix of several checks, like `importHelpers` and `no-comments` and maybe other ones.

@sonarwhal/core I'm opening this issue so we can start the discussion, see if there's anything we can do initially, etc.
@bterlson if you know of someone else that could be interested in this kind of rules for TS please add them to the conversation.
/cc @DanielRosenwasser @bowdenk7 @RyanCavanaugh may have some feedback
- I'd say that if you're targeting Webpack or Rollup, you should be using `module: esnext`. This is the easiest win for users by far.
- Use `--isolatedModules` to help ensure TypeScript can support single-file emit (which Babel is limited to).
To address some of the above:

esModuleInterop should be true and allowSyntheticDefaultImports shouldn't be set.

Actually it depends. `allowSyntheticDefaultImports` should be used when targeting `module: esnext`, which is most ideal when handing output to Webpack. `esModuleInterop` is really only useful for `AMD` or `CommonJS`.

decorators should be false

Maybe. It's sort of a tough call given existing code.