stream-http

This module is an implementation of Node's native http module for the browser. It tries to match Node's API and behavior as closely as possible, but some features aren't available, since browsers don't give nearly as much control over requests.

This is heavily inspired by, and intended to replace, http-browserify.

What does it do?

In accordance with its name, stream-http tries to provide data to its caller before the request has completed whenever possible.

Backpressure, allowing the browser to only pull data from the server as fast as it is consumed, is supported in:

  • Chrome >= 58 (using fetch and WritableStream)

The following browsers support true streaming, where only a small amount of the request has to be held in memory at once:

  • Chrome >= 43 (using the fetch API)
  • Firefox >= 9 (using moz-chunked-arraybuffer responseType with xhr)

All other supported browsers support pseudo-streaming, where the data is available before the request finishes, but the entire response must be held in memory. This works for both text and binary data.

IE note:

As of version 3.0.0, IE10 and below are no longer supported. IE11 support will remain for now.

How do you use it?

The intent is to have the same API as the client part of the Node HTTP module. The interfaces are the same wherever practical, although limitations in browsers make an exact clone of the Node API impossible.

This module implements http.request, http.get, and most of http.ClientRequest and http.IncomingMessage in addition to http.METHODS and http.STATUS_CODES. See the Node docs for how these work.

Extra features compared to Node

  • The message.url property provides access to the final URL after all redirects. This is useful since the browser follows all redirects silently, unlike Node. It is available in Chrome 37 and newer, Firefox 32 and newer, and Safari 9 and newer.

  • The options.withCredentials boolean flag, used to indicate if the browser should send cookies or authentication information with a CORS request. Default false.
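A minimal sketch of using these two extras together (the path and field values are illustrative; in a browserify/webpack bundle, require('http') resolves to this module):

```javascript
// Options for a CORS request that should include cookies
// (withCredentials defaults to false).
var opts = {
	path: '/api/user',
	withCredentials: true
};

// Browser-only: guarded so this snippet is a no-op outside the browser.
if (typeof window !== 'undefined') {
	var http = require('http');
	http.get(opts, function (res) {
		// res.url holds the final URL after any silent redirects
		console.log('final URL:', res.url);
	});
}
```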

This module has to make some tradeoffs to support binary data and/or streaming. Generally, the module can make a fairly good decision about which underlying browser features to use, but sometimes it helps to get a little input from the developer.

  • The options.mode field passed into http.request or http.get can take on one of the following values:

    • 'default' (or any falsy value, including undefined): Try to provide partial data before the request completes, but not at the cost of correctness for binary data or correctness of the 'content-type' response header. This mode will also avoid slower code paths whenever possible, which is particularly useful when making large requests in a browser like Safari that has a weaker JavaScript engine.
    • 'allow-wrong-content-type': Provides partial data in more cases than 'default', but at the expense of causing the 'content-type' response header to be incorrectly reported (as 'text/plain; charset=x-user-defined') in some browsers, notably Safari and Chrome 42 and older. Preserves binary data whenever possible. In some cases the implementation may also be a bit slow. This was the default in versions of this module before 1.5.
    • 'prefer-streaming': Provide data before the request completes even if binary data (anything that isn't a single-byte ASCII or UTF8 character) will be corrupted. Of course, this option is only safe for text data. May also cause the 'content-type' response header to be incorrectly reported (as 'text/plain; charset=x-user-defined').
    • 'disable-fetch': Force the use of plain XHR regardless of the browser declaring a fetch capability. Preserves the correctness of binary data and the 'content-type' response header.
    • 'prefer-fast': Deprecated; now a synonym for 'default', which has the same performance characteristics as this mode did in versions before 1.5.
  • options.requestTimeout allows setting a timeout in milliseconds for XHR and fetch (if supported by the browser). This limits how long the entire request, from start to finish, may take. Note that this is not the same as node's setTimeout functions, which apply to pauses in data transfer over the underlying socket, or node's timeout option, which applies to opening the connection.
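For example, the options above might be combined like this (a sketch; the path, mode choice, timeout value, and the 'requestTimeout' event name are illustrative assumptions):

```javascript
// Request options combining mode and requestTimeout (names from the list above):
var opts = {
	path: '/big-file.bin',
	mode: 'disable-fetch', // force plain XHR, e.g. when a test framework mocks XHR
	requestTimeout: 10000  // the whole request must finish within 10 seconds
};

// Browser-only: under browserify/webpack, require('http') resolves to stream-http.
if (typeof window !== 'undefined') {
	var req = require('http').get(opts, function (res) {
		res.on('data', function (chunk) { /* partial data arrives here */ });
	});
	req.on('requestTimeout', function () {
		console.log('request timed out');
	});
}
```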

Features missing compared to Node

  • http.Agent is only a stub
  • The 'socket', 'connect', 'upgrade', and 'continue' events on http.ClientRequest.
  • Any operations, other than request.setTimeout, that operate directly on the underlying socket.
  • Any options that are disallowed for security reasons. This includes setting or getting certain headers.
  • message.httpVersion
  • message.rawHeaders is modified by the browser, and may not quite match what is sent by the server.
  • message.trailers and message.rawTrailers will remain empty.
  • Redirects are followed silently by the browser, so it isn't possible to access the 301/302 redirect pages.

Example

http.get('/bundle.js', function (res) {
	var div = document.getElementById('result');
	div.innerHTML += 'GET /beep<br>';

	res.on('data', function (buf) {
		div.innerHTML += buf;
	});

	res.on('end', function () {
		div.innerHTML += '<br>__END__';
	});
})

Running tests

There are two sets of tests: the tests that run in Node (found in test/node) and the tests that run in the browser (found in test/browser). Normally the browser tests run on Sauce Labs.

Running npm test will run both sets of tests, but in order for the Sauce Labs tests to run you will need to sign up for an account (free for open source projects) and put the credentials in a .airtaprc file. You will also need to run a Sauce Connect Proxy with the same credentials.

To run just the Node tests, run npm run test-node.

To run the browser tests locally, run npm run test-browser-local and point your browser to the link shown in your terminal.

License

MIT. Copyright (C) John Hiesey and other contributors.


stream-http's Issues

Unable to set User-Agent

This is the issue to accompany #89.

Cloning Gists using isomorphic-git in the browser doesn't work, because GitHub enforces a bizarre User-Agent policy, returning 404 Not Found if the User-Agent string doesn't start with "git/". This is being tracked in isomorphic-git/isomorphic-git#247

The chain of dependencies that led me to make PR #89 is as follows:
isomorphic-git > simple-get > webpack/node-libs-browser > https > http

Hopefully we can get this resolved soon, because people seem to like cloning gists. I will also ask Github to stop filtering by user-agent, but am less than optimistic about getting that policy changed.

Crashes when content security policy enabled with fixed connect sources

With content security policy enabled, and the connect sources restricted to certain origins, I get an error in the console (screenshot omitted).

The issue seems to come from this line: https://github.com/jhiesey/stream-http/blob/master/lib/capability.js#L17

How to reproduce:

Serve any browser bundle (via browserify or webpack) containing stream-http over http-server with the header Content-Security-Policy: connect-src 'self'.
I managed to reproduce this using the server from this gist, adding self.send_header("Content-Security-Policy", "connect-src 'self'") at line 45.

I don't know how to fix this issue, so let me know if I can help by setting up a minimal reproducible repo.

Uncaught TypeError: Cannot read property 'from' of undefined

Getting this error when attempting to use this library wrapped by http or https:

Uncaught TypeError: Cannot read property 'from' of undefined
    (anonymous)                                              @ index.js:11
    ./node_modules/safe-buffer/index.js                      @ vendor.2.55.0.js:6783
    __webpack_require__                                      @ manifest.2.55.0.js:55
    (anonymous)                                              @ _stream_readable.js:55
    (anonymous)                                              @ _stream_readable.js:1020
    ./node_modules/readable-stream/lib/_stream_readable.js   @ vendor.2.55.0.js:6709
    __webpack_require__                                      @ manifest.2.55.0.js:55
    (anonymous)                                              @ readable-browser.js:1
    ./node_modules/readable-stream/readable-browser.js       @ vendor.2.55.0.js:6755
    __webpack_require__                                      @ manifest.2.55.0.js:55
    (anonymous)                                              @ response.js:3
    (anonymous)                                              @ response.js:226
    ./node_modules/stream-http/lib/response.js               @ vendor.2.55.0.js:6825
    __webpack_require__                                      @ manifest.2.55.0.js:55
    (anonymous)                                              @ request.js:3
    (anonymous)                                              @ request.js:329
    ./node_modules/stream-http/lib/request.js                @ vendor.2.55.0.js:6818
    __webpack_require__                                      @ manifest.2.55.0.js:55
    (anonymous)                                              @ index.js:1
    (anonymous)                                              @ index.js:86
    ./node_modules/stream-http/index.js                      @ vendor.2.55.0.js:6804

Looks like it's related to:

var buffer = __webpack_require__("./node_modules/webpack/node_modules/node-libs-browser/mock/empty.js")
var Buffer = buffer.Buffer

// alternative to using Object.keys for old browsers
function copyProps (src, dst) {
  for (var key in src) {
    dst[key] = src[key]
  }
}
if (Buffer.from && Buffer.alloc && Buffer.allocUnsafe && Buffer.allocUnsafeSlow) {
  module.exports = buffer
} else {
  // Copy properties from require('buffer')
  copyProps(buffer, exports)
  exports.Buffer = SafeBuffer
}

Occurs on both Chrome 67.0.3396.99  and Firefox 61.0 

Uncaught TypeError: function Buffer(arg)

Uncaught TypeError: function Buffer(arg) {
  if (!(this instanceof Buffer)) {
    // Avoid going through an ArgumentsAdaptorTrampol...<omitted>...
} is not a function

Has anyone seen an error similar to this caused by this lib? I can't figure out why it's happening around the time of request.write.

In fetch mode the setTimeout is not cleared

Description:

In stream-http's request.js, when fetch is used (the capability.abortController branch), a global.setTimeout is set up to emit the requestTimeout event, but I cannot find a matching clearTimeout anywhere in that code path. Our SDK listens for the requestTimeout event, and it continually throws warnings even for requests that completed in time. When I add a clearTimeout to the library locally, it works correctly, so this is an urgent issue for us. Thank you very much.

(Screenshots of the relevant request.js code and the Chrome and Safari versions omitted.)
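The fix described amounts to clearing the timer once the request settles. A generic sketch of the idea (a hypothetical helper, not the module's actual code):

```javascript
// Hypothetical helper: attach a timeout to a promise-returning request and
// always clear the timer once the request settles, so the timeout callback
// cannot fire after completion.
function withRequestTimeout(promise, ms, onTimeout) {
	var timer = setTimeout(onTimeout, ms);
	function clear() { clearTimeout(timer); }
	promise.then(clear, clear);
	return promise;
}
```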

http.IncomingMessage is undefined

Even when using stream-http with browserify, I still get an undefined error on http.IncomingMessage.

This is my code:

var http = require('stream-http');
var req = Object.create(http.IncomingMessage.prototype)

TypeError: undefined is not an object (evaluating 'http.IncomingMessage.prototype')

ie8

Do we still need IE8 support? If not, you can remove 3 dependencies.

Ability to disable use of fetch

Hi there,

Would there be any traction in adding an option that disables the use of fetch? I'm working on a plugin for a testing framework that mocks out XHR, but bundling it with newer versions of browserify without such an option results in fetch being automatically preferred, so the mocking is bypassed. It would be great to be able to disable fetch and avoid custom patches.

If this is something that'd be acceptable I'd be happy to look at a PR.

Thanks, Alex J Burke.

http.request not handling relative paths

If window.location.href is http://localhost:8000/app/#/, one would expect a GET to ./ping to be a request to http://localhost:8000/app/ping. At least, that's how XMLHttpRequest behaves.

But instead it looks like http.request sends the request to http://localhost:8000./ping, which is not a valid URL.
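For comparison, the WHATWG URL constructor resolves a relative path against a base URL the same way XMLHttpRequest does, which illustrates the expected result:

```javascript
// Resolving './ping' against the page URL; the fragment ('#/') is ignored.
var resolved = new URL('./ping', 'http://localhost:8000/app/#/').href;
console.log(resolved); // → 'http://localhost:8000/app/ping'
```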

Limit overhead of adding stream-http to your project

We're debating whether to use jQuery's ajax call or this http module. One good argument from my colleague, who prefers jQuery, is that depending on this package adds an extra 15 kB (gzipped) to be transferred, which would mean an 18% increase in the size of our project's minified gzip.

Could we optimize size here, perhaps by depending on smaller projects in stream-http? Are there quick wins on size to be found?

Reference debate: LinkedDataFragments/Client.js@4dc3e19#commitcomment-18326257

Support for IPv6 addresses

Following code:

http.request('http://[::1]:80/foo',function(res) {
    res.on('end', function() {
        console.log('request status: ' + res.statusCode);
    });
}).on('error', function(e) {
       console.log(e.message);
}).end();

will result in this error message:
Failed to execute 'fetch' on 'Window': Failed to parse URL from http://::1:80/foo

The problem seems to be that the implementation is not aware of literal IPv6 addresses in the URL (see https://www.ietf.org/rfc/rfc2732.txt).

Expose the final requested URI

In a browser, you can use the responseURL property of an XHR object to get the final URL after redirects. When writing an application for Node.js, however, the XHR object is not exposed, and there is no documented way to retrieve the final URL.

I suggest adding a responseUrl property to the response, similar to the XHR object, so that we can access the final URL of the document we've retrieved.

Edge's `fetch` does not allow GET with `body: null`

Hi, I am using stream-http indirectly via browserify. I think I may have hit a regression in 2.6.2

The fix for #67 triggers some misbehavior in Edge, briefly mentioned in a comment to this issue: https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/8546263/

The fetch API seems to disallow GET requests with a body; I only tried Chromium and Edge. Both raise an error if I try to fetch something with GET and a non-empty body.
Edge is particularly picky about this: there, the fetch call reports an error even with body: null.

Try this:

// breaks in both browsers:
fetch(location.origin,{body:"barf"}).then(function(){console.log("yes");}, function(e){console.log("no", e);})
// works in chromium, but not in edge:
fetch(location.origin,{body:null}).then(function(){console.log("yes");}, function(e){console.log("no", e);})
// works in both browsers:
fetch(location.origin,{body:undefined}).then(function(){console.log("yes");}, function(e){console.log("no", e);})

Since 8df10e0, the body variable is initialized to null.
For GET requests, I think it should always be undefined, shouldn't it? Or would that break #67 again?
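One way to express the guard being discussed (a hypothetical helper, not the module's actual code) is to pass a body only for methods that allow one, and undefined otherwise, so Edge's fetch does not reject:

```javascript
// Hypothetical helper: return undefined (never null) for body-less methods.
function fetchBody(method, body) {
	var allowsBody = ['POST', 'PUT', 'PATCH', 'MERGE'].indexOf(method) !== -1;
	return allowsBody ? body : undefined;
}

fetchBody('GET', null);        // → undefined
fetchBody('POST', '{"a":1}');  // → '{"a":1}'
```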

Allow Request Body for custom HTTP verbs other than POST, PUT and PATCH.

Hi guys. For context: I am using Angular 2 and TypeScript, my server is Django, and stream-http==2.6.3.

My server provides a custom verb named GETX, whose purpose is to carry the query parameters in its body instead of in the URL. This works around servers' URL length limits and lets us retrieve a specific set of data with a more complex query.

Instead of:

GET /foo?query=1

We use:

GETX /foo
Content-Type: application/json
{"query":1}

Both calls below work in the Node.js shell, but when the code is compiled with ng build and served to the browser, GETX does not send a body.

let request = require('request');

request({  // #1
    url: "http://localhost:7000/api/blog-articles",
    method: "POST", json: {foo: 1}
    }, (err, res, body) => console.log(res.request.body));
request({  // #2
    url: "http://localhost:7000/api/blog-articles",
    method: "GETX", json: {foo: 1}
    }, (err, res, body) => console.log(res.request.body));

The code that prevents me from using a body for custom verbs is lib/request.js:102 of this project:

if (opts.method === 'POST' || opts.method === 'PUT' || opts.method === 'PATCH' || opts.method === 'MERGE') {
...

May I ask that custom verbs be allowed to have a body, or that some option be added to allow it?

Import fails with CORS

My company's web stack uses CORS. Merely importing stream-http causes a request to be made to example.com, which violates our CORS policy and therefore causes the import to fail. In fact, I'm not importing stream-http directly; I'm importing fetch, so I can't simply try/catch the import or anything like that.

The error appears on the line:
Failed on xhr.open('GET', global.location.host ? '/' : 'https://example.com')
So this is related to the previous issue.

Wrong "content-type" header

I'm seeing an issue in Safari 8 where response.headers["content-type"] is "text/plain; charset=x-user-defined" when the server actually sent text/javascript; charset=utf-8 (which I confirmed in the developer tools network pane).

It looks like the content-type header gets overridden here:

xhr.overrideMimeType('text/plain; charset=x-user-defined')
— but I am not sure why.

IE11 Blob Memory Leaking

Hi,

The latest version of IE11 (11.0.43) leaks memory when uploading with a Blob payload. However, IE now only receives security updates, so it's unlikely this will be fixed on IE's side:

We noticed you’re filing a bug against Internet Explorer 11. Note that, while Internet Explorer 11 is still supported with critical security updates, we do not plan further feature updates. All new feature work is focused on Microsoft Edge at this time. For this reason, only high-impact security bugs will be considered for Internet Explorer servicing.

For IE11, stream-http creates Blob objects as the fetch/xhr payloads. The memory leak happens especially when uploads run in parallel. Whether or not each upload request succeeds, the leak always occurs. Closing or refreshing the IE tab does not let the leaked memory be GC'd.

Here is a leaking demo:
https://ie11-memory-leak.azurewebsites.net/api/HttpTriggerJS1

Can stream-http provide an option for the preferred payload type, or disabling the Blob type payload?

Upload failure because Chrome may fail to GC the blobs

Hi,

I'm from Microsoft and am trying to make the Azure Storage Node.js SDK work in the browser via browserify. stream-http is used to replace the http module in the project.

I found an interesting uploading issue related to Chrome.

Issue Description

I tried to upload a file larger than 1 GB.
Each big file is split into 4 MB chunks.
Each chunk is uploaded via HTTP PUT based on stream-http.

However, every time the file is about 50% uploaded, the remaining chunks start to fail.
After one failure, we cannot upload any further chunks.
Only a page refresh resets the status.

The Chrome console reports "net::ERR_FILE_NOT_FOUND" (screenshot omitted).

Every other browser works fine; only Chrome reports this error.
I tested on the latest Chrome, version 54.0.2840.71 m (64-bit).

Root Cause

I investigated the problem.
The root cause appears to be on Chrome's side.

In stream-http, for Chrome, every data chunk is sent by xhr as a blob.
(The logic is defined in request.js, in ClientRequest.prototype._onFinish.)

However, in my case:

  1. Chrome seems unable to automatically GC the blobs used.
  2. Chrome also has a total blob size limitation.

There also are related issue reports for Chrome:

  1. Blob Objects are not getting cleared once the blob object is de-referenced
  2. the total blobs' size cannot exceed about 500MiB

So, every time I upload a file larger than the size limit, it fails completely.
In the screenshot (omitted), many unused 4 MB blobs still hold one reference and cannot be GC'd.

Solution

Chrome also supports ArrayBuffer and Uint8Array as XMLHttpRequest payloads,
so I tried passing the Uint8Array _body directly as the payload.
The modification is in request.js, in ClientRequest.prototype._onFinish.
And it works!

Chrome will probably take a long time to ship a fix for this.
Until it does, could we use other xhr payload types instead of blob?

Can you please help check this issue?
I'm also willing to help solve the problem or contribute some code.

Best

Support for https?

First of all: awesome work! This really is a great module and a very important improvement for browserify :-)

What I am wondering is whether this module can already support https, or, if not, whether there are any plans to do so. Is this on the roadmap in any way?

Redirect history

Is the redirect history not accessible via fetch's { redirect: 'manual' } option?

Compatibility check doesn't select 'fetch' on Chrome

First of all, thanks for this great library! It seems ReadableByteStream is no longer in the latest Chrome (tried with Chrome 52.0.2743.82 (64-bit) on OS X 10.11.6). This causes the fetch compatibility check to fail, so xhr is selected instead. I checked the fetch response body and it was a ReadableStream. Maybe you should check for ReadableStream instead, or for either ReadableStream or ReadableByteStream; I'm not sure of the best cross-platform way to do this.

Cheers
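A sketch of the broader capability check suggested above (illustrative only; the module's real detection lives in lib/capability.js):

```javascript
// Accept either stream constructor name across Chrome versions (sketch).
var haveStreamingFetchBody =
	typeof ReadableStream !== 'undefined' ||
	typeof ReadableByteStream !== 'undefined';
```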

Investigate occasional failures of the 'large binary request without streaming' test

In older versions of Safari, the non-streaming large binary test used to occasionally fail before I added the cache-control header to the test server. The response length would be too large (larger than the content-length header and what the test expected).

My hunch is that there was a bug in Safari's cache. The issue has gone away since adding cache-control: no-store to the tests, but that isn't a very satisfying answer.

A quick search didn't reveal any bug reports of this happening, but I have seen other bugs in Safari's cache, so it wouldn't entirely surprise me.

Access is denied with AJAX call using CORS

I'm experiencing an exception related to CORS in IE9, specifically in the request.js _onFinish method:

else {
        var xhr = self._xhr = new global.XMLHttpRequest()
        try {
//error occurs here
            xhr.open(self._opts.method, self._opts.url, true)
        } catch (err) {
            process.nextTick(function () {
                self.emit('error', err)
            })
            return
        }

I understand that IE9 uses XDomainRequest for CORS requests. Is this something avoidable in IE9 with stream-http?

req.destroy / req.abort does not close the underlying connection

If you have an open request and call req.destroy(), you won't get data any longer, but the connection stays open (at least in Chrome).

So, from the server side, there is no way to detect that the client is no longer interested in its data, and it therefore cannot clean up afterwards :-(

What should I do to handle this?

Uncaught TypeError

Might be related to #29

I was writing a browser app using Ionic, testing on Android 4.0.3, and using browserify so I could use needle for http requests in the browser.

It worked until I tried to fetch a url like http://data.flightradar24.com/_external/planedata_json.1.4.php?f=87ae8b0&equip_hint=A320&format=2

And I could see an error logged in the console pointing at stream-http/lib/response.js, inside IncomingMessage.prototype._onXHRProgress, on the line containing self.push(...). The error was Uncaught TypeError: Type error.

        case 'arraybuffer':
            if (xhr.readyState !== rStates.DONE)
                break
            response = xhr.response
            self.push(new Buffer(new Uint8Array(response)))
            break

That's all I know, just wanted to let you know.

The default protocol should be 'http:'

This would be consistent with node's http lib. Otherwise the browser complains like this:

TypeError: Failed to execute 'fetch' on 'Window': Failed to parse URL from //10.101.168.94:6000/ (req "error")

Note that //10.101.168.94:6000/ doesn't have the http: prefix.

ClientRequest.prototype.setHeader doesn't handle Array values

Hi, I want to set the 'access-control-allow-origin' header via node-fetch (version "1.6.3"):

const fetch = require('node-fetch');
promise = fetch(url, {headers: { 
		'Access-Control-Allow-Origin': 'http://google.com',
		}})
		.catch(error => console.log(error));

But the value was not set correctly, so I decided to debug it. The problem is inside request.js (in the lib folder).
I traced it to this code:

Object.keys(opts.headers).forEach(function (name) {
	self.setHeader(name, opts.headers[name])
})

I'm passing the name "access-control-allow-origin" and the value Array[1] containing "http://google.com".

ClientRequest.prototype.setHeader = function (name, value) {
	...
	self._headers[lowerName] = {
		name: name,
		value: value // <- The value of the array is not handled
	}
}

@jhiesey What's the best level to fix it?
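One possible fix, sketched as a hypothetical helper (not the actual patch): fold an array value into a single comma-separated string, the way HTTP combines repeated headers:

```javascript
// Hypothetical normalization for header values passed as arrays.
function normalizeHeaderValue(value) {
	return Array.isArray(value) ? value.join(', ') : value;
}

normalizeHeaderValue(['http://google.com']); // → 'http://google.com'
normalizeHeaderValue(['a', 'b']);            // → 'a, b'
```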

Trouble with headers in fetch polyfill

I'm seeing issues on IE with a fetch polyfill that I think are due to the change in 11d14fc.

Basically, headers is getting passed to fetch as an array of arrays, and the implementation is expecting that the forEach callback will be called with value, name.

The MDN docs suggest that the headers property is

a Headers object or an object literal with ByteString values.

I think the spec says that the headers property can be an array of arrays

typedef (sequence<sequence<ByteString>> or record<ByteString, ByteString>) HeadersInit

So I'm guessing that this is an issue with the polyfill instead of this module. But I'm curious if you can give some background on 11d14fc.

TypeError: Invalid array length argument

In this chunk of code:

  case 'arraybuffer':
    if (xhr.readyState !== rStates.DONE)
      break
    response = xhr.response
    self.push(new Buffer(new Uint8Array(response)))
    break

It assumes the response has a valid length. I'm seeing cases with some of my users where new Uint8Array(response) throws

TypeError: Invalid array length argument

Should you check the response first or do you intend to throw an exception in this case and it is the caller's responsibility to catch exceptions like this?

Error on Safari 7 in response._onXHRProgress()

Sorry, I know this is not very helpful or informative, but I'm seeing an error on Safari 7.

User agent:

Mozilla/5.0 (iPad; CPU OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53

Stack trace:

TypeError: Type error
  [native code]
  [email protected]:2869:2050
  [email protected]:2864:3148
  [email protected]:2864:2748

I've narrowed it to response._onXHRProgress(), but I don't have any additional context. I'll try to come up with a reproducible test case or at least a non-minified stack trace.

stream-http `get` call fails on MS Edge 38

Microsoft has released an anniversary update which updates MS Edge to version 38

On this version, stream-http get fails silently and no network request goes out.

I checked the Sauce Labs test results page, and it looks like this version is not tested. Kindly help.

Consider changing default withCredentials value to false

Sending credentials by default increases the risk of leaked credentials. It makes it too easy to shoot yourself in the foot if you don't know any better. Sending sensitive information should be opt-in.

I propose that withCredentials be made an opt-in feature (i.e. set the default to false), forcing the developer to acknowledge the risk involved.

I'm under the impression that browser vendors agree with omitting credentials as the default behavior because that's what GlobalFetch.fetch does.

Thoughts?

Failed on xhr.open('GET', global.location.host ? '/' : 'https://example.com')

Hello,

I tried to use a browserified package which used http in a Firefox extension, and I got an exception on script load caused by these lines

var xhr = new global.XMLHttpRequest()
// If location.host is empty, e.g. if this page/worker was loaded
// from a Blob, then use example.com to avoid an error
xhr.open('GET', global.location.host ? '/' : 'https://example.com')

from capability.js

The error occurred because my extension was loaded on the page chrome://seleniumbuilder3, and global.location.host was equal to "seleniumbuilder3".

(Screenshot of my global.location value omitted.)

I simply replaced the line above with xhr.open('GET', 'https://example.com') and it works.

So, I propose adding a check that global.location.protocol is not "chrome:" or similar (I don't have much experience with this). It could save a couple of hours for other devs who try to use the module in their extensions.
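The proposed guard could be sketched as follows (hypothetical helper; the real probe is in lib/capability.js):

```javascript
// Hypothetical: pick a probe URL that also works on pages without a usable
// host and on privileged schemes like chrome:.
function probeUrl(location) {
	if (!location || !location.host || location.protocol === 'chrome:') {
		return 'https://example.com';
	}
	return '/';
}

probeUrl({ host: 'seleniumbuilder3', protocol: 'chrome:' }); // → 'https://example.com'
probeUrl({ host: 'localhost:8000', protocol: 'http:' });     // → '/'
```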
