
performance-timeline's Introduction

Performance Timeline

Overview

The PerformanceTimeline specification defines ways in which web developers can measure specific aspects of their web applications in order to make them faster. It introduces two main ways to obtain these measurements: via getter methods from the Performance interface and via the PerformanceObserver interface. The latter is the recommended way to reduce the performance impact of querying these measurements.

PerformanceEntry

A PerformanceEntry object can host performance data of a certain metric. A PerformanceEntry has 4 attributes: name, entryType, startTime, and duration. This specification does not define concrete PerformanceEntry objects. Examples of specifications that define new concrete types of PerformanceEntry objects are Paint Timing, User Timing, Resource Timing, and Navigation Timing.

Performance getters

The Performance interface is augmented with three new methods that can return a list of PerformanceEntry objects:

  • getEntries(): returns all of the entries available to the Performance object.
  • getEntriesByType(type): returns all of the entries available to the Performance object whose entryType matches type.
  • getEntriesByName(name, type): returns all of the entries available to the Performance object whose name matches name. If the optional parameter type is specified, it only returns entries whose entryType matches type.

Using the getters in JavaScript

The following example shows how getEntriesByName() could be used to obtain the first paint information:

// Returns the FirstContentfulPaint entry, or null if it does not exist.
function getFirstContentfulPaint() {
  // We want the entry whose name is "first-contentful-paint" and whose entryType is "paint".
  // The getter methods all return arrays of entries.
  const list = performance.getEntriesByName("first-contentful-paint", "paint");
  // If we found the entry, then our list should actually be of length 1,
  // so return the first entry in the list.
  if (list.length > 0)
    return list[0];
  // Otherwise, the entry is not there, so return null.
  else
    return null;
}

PerformanceObserver

A PerformanceObserver object can be notified of new PerformanceEntry objects, according to their entryType value. The constructor of the object must receive a callback, which will be run whenever the user agent dispatches new entries whose entryType value matches one of the ones being observed by the observer. This callback is not run once per PerformanceEntry nor immediately upon creation of a PerformanceEntry. Instead, entries are 'queued' at the PerformanceObserver, and the user agent can execute the callback later. When the callback is executed, all queued entries are passed to the function, and the queue for the PerformanceObserver is reset. The PerformanceObserver initially does not observe anything: the observe() method must be called to specify what kind of PerformanceEntry objects are to be observed. The observe() method can be called with either an 'entryTypes' array or with a single 'type' string, as detailed below. These modes cannot be mixed, or an exception will be thrown.
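
For example, a minimal sketch of constructing an observer and starting observation (the observed type here is just an illustration):

// Sketch: create an observer whose callback receives batches of queued entries,
// then start observing "mark" entries.
const markObserver = new PerformanceObserver((list, observer) => {
  list.getEntries().forEach(entry => {
    console.log(entry.entryType, entry.name, entry.startTime, entry.duration);
  });
});
markObserver.observe({type: "mark"});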

PerformanceObserverCallback

The callback passed onto PerformanceObserver upon construction is a PerformanceObserverCallback. It is a void callback with the following parameters:

  • entries: a PerformanceObserverEntryList object containing the list of entries being dispatched in the callback.
  • observer: the PerformanceObserver object that is receiving the above entries.
  • hasDroppedEntries: a boolean indicating whether observer is currently observing an entryType for which at least one entry has been lost due to the corresponding buffer being full. See the buffered flag section.
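
For illustration, a sketch of a callback that uses all three parameters described above (the variable names are arbitrary):

// Sketch: check for dropped entries before processing the batch.
const sketchObserver = new PerformanceObserver((entries, observer, hasDroppedEntries) => {
  if (hasDroppedEntries) {
    console.warn("At least one entry was dropped because a buffer was full.");
  }
  entries.getEntries().forEach(entry => console.log(entry.name));
  // `observer` refers to sketchObserver, so the callback could also
  // disconnect the observer from here if desired.
});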

supportedEntryTypes

The static PerformanceObserver.supportedEntryTypes returns an array of the entryType values which the user agent supports, sorted in alphabetical order. It can be used to detect support for specific types.
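
For example, a short sketch of such a support check (the type checked here is arbitrary):

// Sketch: only observe "paint" entries if the user agent supports them.
if (PerformanceObserver.supportedEntryTypes.includes("paint")) {
  new PerformanceObserver(list => console.log(list.getEntries()))
    .observe({type: "paint"});
}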

observe(entryTypes)

In this case, the PerformanceObserver can specify various entryTypes values with a single call to observe(). However, no additional parameters are allowed in this case. Multiple observe() calls will override the kinds of objects being observed. Example of a call: observer.observe({entryTypes: ['resource', 'navigation']}).
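
A sketch illustrating the override behavior described above (the entry types are examples):

// Sketch: the second observe() call replaces the first set of entry types,
// so afterwards only "mark" entries are observed.
const overrideObserver = new PerformanceObserver(list => console.log(list.getEntries()));
overrideObserver.observe({entryTypes: ['resource', 'navigation']});
overrideObserver.observe({entryTypes: ['mark']});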

observe(type)

In this case, the PerformanceObserver can only specify a single type per call to the observe() method. Additional parameters are allowed in this case. Multiple observe() calls will stack, unless a call to observe the same type has been made in the past, in which case it will override. Example of a call: observer.observe({type: "mark"}).
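
A sketch illustrating the stacking behavior described above (the entry types are examples):

// Sketch: observe() calls with distinct types stack, so this observer
// receives both marks and measures. A later call with {type: "mark"}
// would override only the earlier "mark" registration, e.g. to change
// parameters such as the buffered flag.
const stackObserver = new PerformanceObserver(list => console.log(list.getEntries()));
stackObserver.observe({type: "mark"});
stackObserver.observe({type: "measure"});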

buffered flag

One parameter that can be used with observe(type) is defined in this specification: the buffered flag, which is unset by default. When this flag is set, the user agent dispatches records that it has buffered prior to the PerformanceObserver's creation, and thus they are received in the first callback after this observe() call occurs. This enables web developers to register PerformanceObservers when it is convenient to do so without missing out on entries dispatched early on during the page load. Example of a call using this flag: observer.observe({type: "measure", buffered: true}).

Each entryType has special characteristics around buffering, described in the registry. In particular, note that there are limits to the number of entries of each type that are buffered. When the buffer of an entryType becomes full, no new entries are buffered. A PerformanceObserver may query whether an entry was dropped (not buffered) due to the buffer being full via the hasDroppedEntries parameter of its callback.

disconnect()

This method can be called when the PerformanceObserver should no longer be notified of new entries.

takeRecords()

This method returns a list of entries that have been queued for the PerformanceObserver but for which the callback has not yet run. The queue of entries is also emptied for the PerformanceObserver. It can be used in tandem with disconnect() to ensure that all entries up to a specific point in time are processed.
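
For example, a sketch combining the two methods (the processing done on each entry is arbitrary):

// Sketch: handle any queued-but-undelivered entries, then stop observing,
// so that no entries after this point are processed.
function stopObserving(observer) {
  observer.takeRecords().forEach(entry => console.log(entry.name));
  observer.disconnect();
}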

Using the PerformanceObserver

The following example logs all User Timing entries by using a PerformanceObserver that observes marks and measures.

// Helper to log a single entry.
function logEntry(entry) {
  const objDict = {
    "entry type": entry.entryType,
    "name": entry.name,
    "start time": entry.startTime,
    "duration": entry.duration
  };
  console.log(objDict);
}

const userTimingObserver = new PerformanceObserver(list => {
  list.getEntries().forEach(entry => {
    logEntry(entry);
  });
});

// Call to log all previous and future User Timing entries.
function logUserTiming() {
  if (!PerformanceObserver.supportedEntryTypes.includes("mark")) {
    console.log("Marks are not observable");
  } else {
    userTimingObserver.observe({type: "mark", buffered: true});
  }
  if (!PerformanceObserver.supportedEntryTypes.includes("measure")) {
    console.log("Measures are not observable");
  } else {
    userTimingObserver.observe({type: "measure", buffered: true});
  }
}

// Call to stop logging entries.
function stopLoggingUserTiming() {
  userTimingObserver.disconnect();
}

// Call to force logging queued entries immediately.
function flushLog() {
  userTimingObserver.takeRecords().forEach(entry => {
    logEntry(entry);
  });
}


performance-timeline's Issues

Very noticeable delay when loading the "Latest editor's draft" (github.io version)

From Chrome's console:

respec-w3c-common:formatted:21894
GET https://labs.w3.org/specrefs/bibrefs?refs=FRAME-TIMING,HR-TIME-2,NAVIGATION-TIMING-2,RESOURCE-TIMING,RFC2119,SERVER-TIMING,USER-TIMING,WebIDL
run @   respec-w3c-common:formatted:21894
(anonymous function)    @   respec-w3c-common:formatted:5970
(anonymous function)    @   respec-w3c-common:formatted:5969

/performance-timeline/#the-performanceobserver-interface:1
Fetch API cannot load https://labs.w3.org/specrefs/bibrefs?refs=FRAME-TIMING,HR-TIME-2,NAVIGATION-TIMING-2,RESOURCE-TIMING,RFC2119,SERVER-TIMING,USER-TIMING,WebIDL. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://w3c.github.io' is therefore not allowed access. The response had HTTP status code 503. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

respec-w3c-common:formatted:21900
TypeError: Failed to fetch(…)
(anonymous function)    @   respec-w3c-common:formatted:21900

Performance should implement EventTarget

Before the definition of performance was moved from NavigationTiming2, it had an update to have Performance be an EventTarget:

interface Performance : EventTarget

I think this got lost when moved to the PerformanceTimeline? For example, ResourceTiming's onresourcetimingbufferfull would still need it.

Redundant throw should be removed

The spec says:

If options' entryTypes attribute is not present, throw a JavaScript TypeError.

You don't need that line, as WebIDL already will throw as entryTypes is marked as required in the corresponding dictionary.

window.performance.createObserver feedback

#29 landed window.performance.createPerformanceObserver, but we have a few -1's on the new API:

PerformanceObserver should lookup the |performance| object in the context on which it's defined. That's how new Text() gets an ownerDocument, how new EventSource() finds an origin, how new Notification() works, how new Worker() gets an origin... most of the web platform is based around the idea of looking up objects in the current context -- #29 (comment).

Strong -1 for changing from new PerformanceObserver to window.performance.createObserver -- #29 (comment)


To look up the performance object we can reference the "entry settings object" (EventSource example).

Any other implementation or other concerns with this API? Should we go back to new PerformanceObserver?

/cc @bzbarsky @esprehn @domenic

Second example disconnects immediately which means nothing is delivered

ex. https://w3c.github.io/performance-timeline/#h-introduction

has:

var observedEvents = [];
var observer = new PerformanceObserver(function(list) {
  observedEvents.push(list); // defer processing
});

// subscribe to Frame-Timing and User-Timing events
observer.observe({eventTypes: ['render', 'composite', 'mark', 'measure']});
observer.disconnect();

which disconnects immediately after observing, which means you should get nothing delivered.

Clarity on buffered-until-onload (and buffers, buffers, everywhere)

Hi everyone!

We've recently integrated the buffered: true flag for the PerformanceObserver, but I'm confused a bit on its behavior.

From our discussions in the past, I believe the intention was that buffered: true would give you all buffered PO entries up to the point the PerformanceObserver.observe({ buffered: true}) is called. I think this is captured in the spec correctly, and it's an awesome addition for RUM.

I think we had also discussed that some of the specs might clear that PO buffer at onload. For example, with ResourceTiming we were discussing that at onload, its PO buffer would be cleared. So if you called PerformanceObserver.observe({ buffered: true}) after onload, you would not get any ResourceTimings that happened before onload.

I know we just merged in #76 and probably just haven't updated ResourceTiming to reflect this behavior, but I think that behavior should be captured either in PerformanceTimeline or ResourceTiming.

Along those lines, I have questions on that behavior. I think I understand the answer to some of these, but I'm not sure these are all explained in the specs:

  1. Does the PO buffer behavior apply to all specs, or is it up to each spec to define how long that buffer lasts? For example, do all specs clear their PO buffer at onload (like RT/ST, which have PT buffer limits) or can some fill the buffer indefinitely (like UT, which has no PT buffer limit)? If the former, we should describe this behavior in the PerformanceTimeline spec. If the latter, we should probably add the details to each other spec, and possibly briefly describe the differences in the PerformanceTimeline spec as well. My preference is to allow each spec to specify its PO buffer clearing behavior, since they each have different PT buffer behaviors.

  2. What happens if you call PerformanceObserver.observe({ buffered: true}) after onload for a spec that clears its buffer at onload? You don't get any buffered entries, just new ones going forward, correct? We should point this out in the spec.

  3. The buffer we're talking about for PerformanceObserver is different from the PerformanceTimeline buffer right? Does any of this behavior with the PerformanceObserver buffer affect the PerformanceTimeline buffer? i.e. I want to make sure that ResourceTiming entries in the PerformanceTimeline wouldn't also get cleared at onload just because the PerformanceObserver's buffer is cleared (which would be a regression from today's behavior).

  4. What happens if someone calls .clearResourceTimings() before onload? This clears the PerformanceTimeline buffer. Does that also affect the PerformanceObserver buffer?

  5. What happens if the PerformanceTimeline buffer for ResourceTiming/ServerTiming (e.g. 150 entries) reaches capacity? Does the PerformanceObserver buffer still fill up without bounds until onload? That would be my preference.

  6. I've heard a bit of confusion on how long new specs like ServerTiming and PaintTiming will keep their entries -- notably that 1) after onload, if no PO is registered, the PT will not get any more entries, or 2) at onload, the PT buffer will get cleared. Both of those would be a challenge for RUM.

I also made this small chart for clarity on how each spec handles buffering and what I think its behavior should be:

| Spec | Static Data | PerformanceTimeline | Expected Entries | PerformanceTimeline Buffer Size | PO buffer cleared at onload |
| --- | --- | --- | --- | --- | --- |
| NavigationTiming | performance.timing | yes | 1 | n/a | no |
| ResourceTiming | n/a | yes | 100+ | 150 | yes |
| UserTiming | n/a | yes | 0-many | n/a | no |
| ServerTiming | n/a | yes | 100+ | 150 | yes |
| PaintTiming | proposed | yes | 2+ | n/a | no |
| LongTasks | no | no | < 100 | n/a | yes? |

(I'm willing to help out with a spec PR once there's clarity on the above).

Add a note regarding feature detection options for new entry types

While reviewing a LongTask implementation patch I noticed that there's no way for developers to detect browser support for new entry types:

I think we should change that so that at least one but ideally both throw or return an error when called with an unsupported entry type.

/cc @spanicker @igrigorik

getEntries* are depending on the scope

PerformanceEntry objects are dependent on their scope, such as
myWorker.getEntriesByType("navigation") --> empty list.

So, a bit of improvement for sentences like:
[[
For each available PerformanceEntry object (entryObject),
]]

Allow creation of PerformanceObserver with flag to "emit timeline"?

If a site wants to use PerformanceObserver in place of the existing performance.getEntries() API, they could encounter scenarios where it is impossible to get a PerformanceObserver created before some PerformanceEntries are emitted.

Would it be helpful to add a flag to the creation of PerformanceObserver that indicates the first event should include the existing performance timeline entries that match the specified filter?

This mechanism should allow PerformanceObserver to 100% replace performance.getEntries().
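
As a rough sketch, assuming the buffered flag described in the README above (the entry type here is only an example), such usage could look like:

// Sketch: register an observer after the fact and still receive entries
// buffered before its creation, in place of performance.getEntries().
const lateObserver = new PerformanceObserver(list => {
  list.getEntries().forEach(entry => console.log(entry.name, entry.startTime));
});
lateObserver.observe({type: "resource", buffered: true});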

Define buffering behaviors & hooks for queued (PerformanceObserver) entries

We recently (see #76) added an optional flag to "queue an entry" that allows the caller to optionally append the entry to the performance entry buffer. For example, Long Tasks or Server Timing can set this to true before onload to allow developers to register and retrieve any LT/ST records captured between the start of page load and max(time when they register the observer, onloadend).

A few interactions that we did not address:

  1. How does the performance entry buffer interact with other buffers defined by individual specs? E.g. ResourceTiming defines its own buffer with methods to query and clear it.
  • When RT's buffer is cleared, does that affect the performance entry buffer?
  2. Do we want to set a global cap on the performance entry buffer?
  • What happens when the limit is reached, do we start dropping items on the floor?

Sidenote: moving forward we're not planning on adding more type-specific buffers. Instead, we want to encourage use of PerfObserver. So, this is mostly an exercise in backwards compat and to make sure that we can pave a clean path for existing users of RT/NT/UT.


For (1), my hunch is "no": calls to clear type-specific buffers should not clear the performance entry buffer. This does mean that we might end up with some double book-keeping, but one of the motivating reasons for PerfObserver was to resolve the race conditions created by consumers clearing buffers from under each other. As such, I propose we treat them as separate entities: it should be possible to clear the RT buffer and still use PerfObserver with buffered: true to get all the entries queued before onload.

For (2), my hunch is "yes", and we should probably recommend a minimum: the user agent should allow a minimum of XXXX entries to be queued, and once the buffer is full, items are silently discarded.

/cc @cvazac @nicjansma @toddreifsteck

getEntries(<filter>) isn't supported

[[
Ilya, Yoav and I took a look and it appears that getEntries({filter}) is not yet implemented in any browser. We also discussed that adding it would be difficult to feature detect. Because of this, I think we can cut it from the Performance Timeline specification.
]]
#57 (comment)

Annotation of performance entries

From: http://lists.w3.org/Archives/Public/public-web-perf/2014Feb/0056.html

@Jatinder: Another piece of feedback I’ve heard on the Timing APIs is that it would be great if we provided the ability to “annotate” entries with app-specific metadata. For example, if someone wanted to add in specific information on what the server was doing during a certain time period, they could annotate the timeline so you can see the complete client and server side interaction. Additionally, one could also annotate why a measure might have taken as long as it did (e.g. how many items were rendered, which column was sorted, etc.). Something like performance.annotateMeasure(“measureName”, “keyName”, “value”). Does something like this sound remotely interesting?

Juan Carlos Estibariz: The annotations would be very useful, and it would be even better if it was possible to add annotations directly from the server (e.g. by adding a header defined by this specification), this would allow direct correlation between client side and server side performance measurements.

@josh: User timings in the response header sounds elegant. We currently shove all our server timings into a hidden element in the page footer as data- attributes. Then we have to parse these out on page load.


Having the ability to somehow annotate an entry with custom metadata would open the doorway to some very interesting tooling improvements with multi-page UIs, e.g. multi-frame pages or multi-process apps like in Firefox OS.

Is there enough information available to work towards this? Based on @Jatinder's recommendation I see:

partial interface Performance {
  void annotateMark(DOMString markName, DOMString key, DOMString value);
  void annotateMeasure(DOMString measureName, DOMString key, DOMString value);
};

PerformanceObserver developer / RUM friendliness

Hi everyone,

As I've been pondering how we're going to use PerformanceObserver to capture things for Boomerang, I wanted to share some thoughts I had on PerformanceObserver versus the PerformanceTimeline.

Part of the reason I'm bringing this up is that we're making improvements to PerformanceObserver to have it buffer-until-onload as a solution to avoid maintaining a global buffer like the PerformanceTimeline does. I'm worried that the group is leaning toward the PerformanceTimeline going away, deprecating it, or having new specs not support it (e.g. LongTasks).

(I totally get the performance reasons to switch to the observer model, so all of these observations are just on the downsides)

So I would like to share some ideas why, as a developer, and similarly for RUM, getting performance data from PerformanceTimeline is more friendly than observing via PerformanceObserver. I'm going to mostly pick on LongTasks here, since it's the Observable-only spec right now (even though I love it :):

  1. As a web developer, if I'm browsing on my local computer and see a performance issue I want to investigate, if the data is only Observable, I will be unable to look at it after-the-fact.

    Example scenario: I'm browsing my personal site and it locked up with a white screen for a second while loading or trying to interact with it. I didn't have the developer tools already open. I want to investigate, but I only have access to things that are in the PerformanceTimeline (like UserTiming, ResourceTiming which can help), and not things that are only Observable (like LongTasks which might have helped more).

    Solutions? LongTasks could support PerformanceTimeline, and keep a reasonable buffer like ResourceTiming/ServerTiming do. Or, if Observable-only, maybe there's a flag I can enable in developer tools to tell it to not clear PerformanceObserver entries at onload? Or, always monitor via TamperMonkey (below).

  2. As a web developer, if I just want to explore or play around with a new spec that is Observable-only, it's really hard to do in Dev Tools. In order to capture data during the page load process, you have to argue with the developer tools in a race to get your Observer started in time.

    Example scenario: I'm trying to see LongTasks for site X during onload, so I hit reload in the browser. I then frantically try to get the Console to accept my PerformanceObserver.observe (below) just so I could see all LongTasks that happened during onload.

    Solution: To work around this, I'm using a third-party extension (e.g. TamperMonkey) to register my PerformanceObserver at the start of the page load.

(new PerformanceObserver(function(list) { console.log(list.getEntries()); }))
.observe({ entryTypes: ["longtask"] });

  3. For RUM, to get a complete-ish picture of the page's Waterfall via ResourceTiming and PerformanceTimeline, we can traverse into all IFRAMEs, e.g. calling performance.getEntriesByType("resource").

    We can't do this using an Observable-only spec, as new IFRAMEs can be added by other (third-party) scripts. There's no way for us to force an observer into those IFRAMEs in time, so we'll miss those events.

    With the PerformanceTimeline, we can leisurely crawl the page's IFRAME tree when we beacon our data, getting all entries when needed.

  4. For RUM, if we want to monitor an Observable-only interface, we need to add a <script> as early as possible in the <HEAD> to turn on the observer and start buffering, since we try to load the main RUM script async (which often arrives after onload). With PerformanceTimeline, we can leisurely capture this data when needed.

I totally get all of the reasons for the Observer, and why UA's might not want to have a global buffer for high-frequency entries. But many of the specs expect only 10s or max 100s of entries by default.

So I guess what I'm saying is, Long Live the PerformanceTimeline!

Notification for new performance entries

From: http://lists.w3.org/Archives/Public/public-web-perf/2014Feb/0052.html

@josh: Navigation and user timings have been useful to gather and analyze on the server side. Though, it's a bit tricky to build tools that just take that data and upload it to a server. There's no notification when new performance entries have been recorded. This requires invasive code to recheck the performance entries buffer.

    var seen = 0;
    function reportPerformance() {
      var entries = performance.getEntries();
      var newEntries = entries.slice(seen);
      seen = entries.length;
      if (newEntries.length) {
        navigator.sendBeacon("/stats",
          JSON.stringify(newEntries));
      }
    }

    // gather all entries from initial load
    window.addEventListener("load", function() {
      reportPerformance();
    });

    // report script timing after async script loads
    var script = document.createElement("script");
    script.src = "/async-script.js";
    script.onload = reportPerformance;
    document.head.appendChild(script);

    // report user timing
    window.performance.mark("mark_fully_loaded");
    reportPerformance();

And knowing when it's safe to read performance.timing stats has a small edge case.

    window.addEventListener("load", function() {
      performance.timing.loadEventEnd; // 0
      setTimeout(function() {
        performance.timing.loadEventEnd; // 123
      }, 0);
    });

It would be helpful to receive an event or callback when the browser knows there are new performance entries available. This should cover any resource and user timings, i.e. anything that will add new entries to performance.getEntries(). The notification need not even be synchronous; batching and throttling the notification would be up to the browser.

As of the NavigationTiming2 spec, the Performance interface is already an EventTarget. Here's an example of using a fictitious event name to log client performance timings back to a server backend.

    performance.addEventListener("entries", function(event) {
      navigator.sendBeacon("/stats",
        JSON.stringify(event.newEntries));
    });

And maybe even add an event or promise for navigation timing readiness.

    performance.timing.ready().then(function() {
      navigator.sendBeacon("/stats",
        JSON.stringify(performance.timing));
    });

@jainarvind: Seems like a good idea. - source

@Jatinder: It sounds like a good idea to me as well. I suppose this would be most useful for the Resource and User Timing specs, as the Navigation Timing data is unlikely to change once the page has loaded. Should we consider a single event in Performance Timeline that indicates that any new PerformanceEntry has been added or should we do individual events in each of the specs? Seems like a single event may be sufficient. - source

@igrigorik: Josh, why wouldn't you just use measure / clearMarks? I don't think you need to traverse the full buffer.
@josh: For user timings, we're already using a wrapper as you mentioned. However, I'd prefer to use the standard interface. It's also possible for libraries outside my control to set new marks that I'd like to report. New resource timing data is often outside of the user's control. New images and scripts can be appended dynamically. - source

Add firstPaint PerformanceEntry

Out of the list of proposed PerformanceEntries, the one missing from my POV is firstPaint. Each browser has a custom way of reporting it, and it would fit nicely to get this as part of the performance API instead.

See: http://dxr.mozilla.org/mozilla-central/source/toolkit/components/startup/StartupTimeline.h#15

Some use cases:

  • measuring time to first paint
  • allow developers to track cases where the paint happened before DOMContentLoadEnd or LoadEnd (which means potential flicker).

performance.getEntries() can't get failed requests.

For example, I added '0.0.0.0 avatars2.githubusercontent.com' to /etc/hosts, then visited github.com.
After the page fully loaded, I ran performance.getEntriesByType('resource').filter(function(entry){return /avatars2\.githubusercontent\.com/.test(entry.name)}); in the console, and got nothing.
However, in a production environment, it's sometimes important to know how many requests time out or are aborted by the browser.
Hope there will be an API or some way to get that.

Clarify sort order for entries with equivalent startTime

The spec says that entries in PerformanceObserverEntryList should be sorted by start time:

Given optional name and type string values this algorithm returns a PerformanceEntryList object that contains a list of PerformanceEntry objects, sorted in chronological order with respect to startTime.

However, many entries may contain equivalent startTime and implementations may differ on how these are sorted.

In this test:

function wait() {
    let now = performance.now();
    while (now === performance.now())
        continue;
}

let observer = new PerformanceObserver((list) => {
    for (let mark of list.getEntries())
        console.log(mark.name + " - " + mark.startTime);
});

observer.observe({entryTypes: ["mark", "measure"]});
performance.mark("mark1");
performance.measure("measure1");
wait(); // Ensure mark1 !== mark2 startTime by making sure performance.now advances.
performance.mark("mark2");
performance.measure("measure2");
performance.measure("measure-matching-mark2-1", "mark2");
wait(); // Ensure mark2 !== mark3 startTime by making sure performance.now advances.
performance.mark("mark3");
performance.measure("measure3");
performance.measure("measure-matching-mark2-2", "mark2");

With times: 0 < t1 < t2:

  • measure1, measure2, and measure3 will have a startTime of 0
  • mark1 will have a startTime of t1
  • mark2, measure-matching-mark2-1, measure-matching-mark2-2 will match t1
  • mark3 will have a startTime of t2

Implementations sort these differently:

  • Chrome (v58)
    measure1 - 0
    measure3 - 0
    measure2 - 0
    mark1 - 110.115
    measure-matching-mark2-1 - 110.14
    mark2 - 110.14
    measure-matching-mark2-2 - 110.14
    mark3 - 110.155

  • Firefox (v54)
    No sort whatsoever.

Take just the top 3 in Chrome. It feels weird to have measure2 sorted after measure3 even though they have the same startTime.

[Maintenance, reminder] Cycle environment variables that were encrypted with Travis

Travis CI sent a message this week alerting of a bug in the way they used to encrypt environment variables. Owners of this repo should have received it.

Depending on the way encryption was done in this repo using the Travis CLI, those values might have leaked.

This is a reminder for maintainers of this repository to make sure they discard old values, and re-encrypt new ones with the latest version of travis, now that the bug has been fixed.

Please close this issue if action has already been taken. Feel free to ping me or sysreq if you need help.

Clarify how PerformanceObserver invokes the callback - arguments and `this` value

The spec text for invoking the PerformanceObserver callback appears incorrect.

It currently says:

If entries is non-empty, call po’s callback with entries as first argument and callback this value. If this throws an exception, report the exception.

I suspect it should be saying:

If entries is non-empty, call po’s callback with entries as first argument and po as the second argument and callback this value. If this throws an exception, report the exception.

This would match behavior of MutationObservers and IntersectionObservers as well as existing PerformanceObserver implementations in Chrome and Firefox.

A simple test would be:

let observer = new PerformanceObserver(function(entries, po) {
    assert(entries instanceof PerformanceObserverEntryList);
    assert(po === observer);
    assert(this === observer);
});
observer.observe({entryTypes: ["mark"]});
performance.mark("test");

When are new entries delivered to observer?

Some entries have attributes that may take a long time to "finalize". For example, it may take 10s+ to get the loadEventEnd timestamp for NavigationTiming. When does an entry with such attributes become visible to the observer? Once all the attributes values are known, or can the entry be delivered sooner? We should clarify this in the spec.

Thinking out loud: it seems like they would have to be delivered once finalized. If they're not, then you run into interesting issues: the observer gets notified and calls getEntries(), which returns a copy of the entry with partial attributes. However, from that point forward the retrieved entry wouldn't be updated with new attribute values, and the observer wouldn't fire notifications for the same entry with new attributes either, so there is no way to observe updated values.

performance.timing shouldn't be deprecated

We tried to ship performance.now() without performance.timing and that caused a whole bunch of compatibility issues, to the extent that we couldn't ship it in Safari.

I don't think we can deprecate this feature anytime soon.

WebIDL serializer has been deprecated in favor of toJSON operation

Hi!

We recently deprecated WebIDL serializers. You can now directly specify toJSON operations instead, which you previously weren't allowed to do.

To deal with common cases, we added a new [Default] extended attribute which triggers the default toJSON operation that behaves similarly to how serializers={attributes} or serializers={attributes, inherit} used to. That is, it serializes all attributes that are of a JSON type into a vanilla JSON object.

It seems only the following interface in this spec is impacted by this change:

It's a good candidate for the default toJSON operation, so the below should be all you need:

[Exposed=(Window,Worker)]
interface PerformanceEntry {
    readonly attribute DOMString           name;
    readonly attribute DOMString           entryType;
    readonly attribute DOMHighResTimeStamp startTime;
    readonly attribute DOMHighResTimeStamp duration;
    [Default] object toJSON();
};

I'm sorry for the inconvenience this causes, but our hope is that this ultimately makes things a lot simpler and clearer for everybody.

Please feel free to reach out if you have any questions.

Thanks!

Clarify disconnecting a PerformanceObserver that has observed entries

My reading of the spec says that if a PerformanceObserver is disconnected, its callback would not fire with entries. However web-platform-tests/performance-timeline/po-disconnect.html tests otherwise.

In this simple test:

var observer = new PerformanceObserver(console.log);
observer.observe({entryTypes: ["mark"]});
performance.mark("mark1");
observer.disconnect();
performance.mark("mark2");
console.log("DONE");

My reading of the spec is:

  1. observer.observe => Registers observer.
  2. performance.mark("mark1") => Triggers queue a PerformanceEntry, which appends entry to observer's buffer and queues a task to later dispatch it.
  3. observer.disconnect => Clears this observer's buffer and unregisters it.
  4. Later the task runs => this observer's buffer is empty, so the callback is not invoked.

Implementations differ, with one matching the spec (Firefox) and one matching the tests (Chrome).

  • Chrome (v58)
    Logs in callback.
    Logs DONE.

  • Firefox (v54)
    Logs DONE.

Expose performance interface to worker

ServiceWorker needs access to Performance interface. To enable this, I think we need to do a bit of spec shuffling and cleanup:

  • Move definition of Performance interface from NavTiming (sec 4.2) into Performance Timeline spec, and update Nav Timing language to "participate" in the timeline.
  • In Perf Timeline expose Performance interface to [Window,Worker].
  • Cleanup Nav/Resource/User Timing and remove references to 'window.performance'.

@plehegar any objections?
