editcontext's Issues
The selectionupdate event and selectionchanged method bleed together, so it's not clear who is responsible for raising and calling each
I don't think we need modifier keys in the editor code example, as they come with KeyboardEvent
Consider making "(which is rarely HTML)" a less disputable fact: "(which is not always HTML)"
For a v2 of this document I think we should have an editor implementation that has a simple model
Building an editor where the model is the EditContext shared buffer seems like it might lead to confusion, since it doesn't show where the document model belongs in real implementations. Additionally, the Google roadmap talks about a clean separation between the model, the view for the user, and the view for the input services. This collapses the model and the view for the input services.
This isn't urgent, but opening issue to track.
Synchronize buffer serialization logic across all editing scenarios
If composition and other text-input-related operations are handled by web developers, should we also let them serialize the content when the user invokes a cut or copy operation? That would keep how we report the edit context buffer to TSF consistent with how we serialize plain text (from HTML content) to the clipboard. The selection model can also complicate things if the developer has a different view of the selection than the rendering engine does. Thoughts?
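As a sketch of what that consistency could look like (the run-based model shape and the function names here are hypothetical illustrations, not part of the proposal), a single serializer can back both the EditContext buffer and the clipboard path:

```javascript
// Hypothetical editor model: an array of styled text runs.
// One serializer backs both the EditContext buffer and clipboard copy,
// so TSF and the clipboard always see identical plain text.
function serializeRange(runs, start, end) {
  const text = runs.map(r => r.text).join("");
  return text.slice(start, end);
}

function onCopy(runs, selStart, selEnd, clipboardData) {
  clipboardData.setData("text/plain", serializeRange(runs, selStart, selEnd));
}
```

Because both code paths call the same function with the same offsets, a divergence between what TSF sees and what lands on the clipboard becomes impossible by construction.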
API details could use code examples
Here's my checklist of things we should show in code (basically every operation):
* Enabling OS input services for an editable part of the document
* Enabling OS input services for two or more parts of the document
* Specifying the mode of input to enable software keyboard specialization
* Providing the text of the document to the OS input layer for context
* Providing selection information to the OS input layer
* Providing position information to the OS input layer
* Applying decorations over the API at the request of the OS input layer
* Replacing the text of the document at the request of the OS input layer
* Adjusting the selection of the document at the request of the OS input layer
* Notifying the OS input layer that text has changed independent of its input
* Resolving conflicts and requesting replay
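A minimal sketch covering the first few items, assuming an EditContext shape close to the one this explainer is heading toward (constructor options and event names may differ from what finally ships; applyToModel is a hypothetical stand-in for the editor's real model update):

```javascript
// Hypothetical helper: splice replacement text into a string-backed model.
function applyToModel(buffer, start, end, text) {
  return buffer.slice(0, start) + text + buffer.slice(end);
}

// Browser-only wiring; guarded so the helper above stays usable anywhere.
if (typeof EditContext !== "undefined") {
  const editView = document.querySelector("#editor");
  let model = "Hello world";

  // Enable OS input services for an editable part of the document,
  // providing initial text and selection for context.
  const ctx = new EditContext({
    text: model,
    selectionStart: model.length,
    selectionEnd: model.length,
  });
  editView.editContext = ctx;

  // Replacing the text of the document at the request of the OS input layer.
  ctx.addEventListener("textupdate", e => {
    model = applyToModel(model, e.updateRangeStart, e.updateRangeEnd, e.text);
    editView.textContent = model; // re-render the view
  });
}
```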
Queuing layout updates
How is this model going to calculate layout updates and communicate them to the input service when the layout of the edit context changes through CSS stylesheets (e.g., setting visibility to hidden)? Will there be a timer task that calculates the layout bounds and checks whether there is any diff between the previous and latest bounds of the edit context?
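One possible answer, sketched below: rather than polling on a timer, the application reports bounds after layout-affecting changes and diffs against the last reported rectangle. The updateLayout method name is a placeholder for whatever the API exposes; boundsChanged is a pure helper.

```javascript
// Pure comparator: has the rectangle actually changed since the last report?
function boundsChanged(prev, next) {
  return !prev || prev.x !== next.x || prev.y !== next.y ||
         prev.width !== next.width || prev.height !== next.height;
}

// Browser wiring (sketch): call this after any layout-affecting change
// instead of running a timer. "updateLayout" is a hypothetical method name.
let lastBounds = null;
function maybeReportLayout(ctx, element) {
  const next = element.getBoundingClientRect();
  if (boundsChanged(lastBounds, next)) {
    lastBounds = next;
    ctx.updateLayout(next);
  }
}
```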
Consider changing "where an empty selection is effectively a caret/insertion point" to "collapsed selection represents an insertion point or caret"
The intro for implementation notes should clarify reasoning for double buffering
I think the key reason is that at least some operating systems require a synchronous response to queries about the contents of the document, the location of the selection, etc. That requirement, combined with a browser's desire to keep the input thread responsive, leads to a design that will most likely have two buffers that need a protocol for synchronization.
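That protocol might be sketched with generation counters, so either side can detect stale state. This is entirely hypothetical; the explainer does not prescribe an implementation.

```javascript
// Two buffers: the input-thread copy answers synchronous OS queries while the
// app's authoritative copy catches up asynchronously. Generation counters
// reveal when the shadow copy is stale.
class SyncedBuffer {
  constructor(text) {
    this.appText = text;    // authoritative copy (app/main thread)
    this.inputText = text;  // shadow copy (input thread)
    this.appGen = 0;
    this.inputGen = 0;
  }
  appEdit(text) {           // app-driven change; shadow copy is now behind
    this.appText = text;
    this.appGen++;
  }
  flushToInput() {          // browser synchronizes the shadow copy
    this.inputText = this.appText;
    this.inputGen = this.appGen;
  }
  inSync() {                // safe to answer a synchronous OS query?
    return this.inputGen === this.appGen;
  }
}
```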
client coordinates could use a link
Consider adding to the overview something about the value of stateful text services
I don't think the reason "scraping of HTML" and "in order to derive the correct representation" are mentioned is clear unless you explain that we must communicate our DOM as text to a stateful text services OS feature in order to get benefits like suggestions.
Make accessibility concern of textformatupdate an open question
Also check how we deal with this today in Edge, and whether we have a bug or a solution
Typo: "view represents and object" ("and" should be "an")
Images could use numbers so the flow can be followed more easily before reading subsequent paragraph
Consider adding architectural diagram to Details section
It's hard to grok the workings of the system from the text alone. Consider adding a diagram depicting the user, their input, the IME, the text services intermediary, the browser, and the web application. Using these entities, specify the flow of a single key press, and then build on that example for the more complicated cases that need to be discussed.
Consider providing high level summary of cooperation before diving into details
I like this intro: "Because the buffer and selection are stateful, updating the contents of the buffer is a cooperative process between the characters coming from the user and changes to the content that are driven by other events."
But would like to suggest expanding on the cooperation at a high-level first instead of going straight into the events. Example:
Cooperation takes place through a series of events dispatched to the web application on the EditContext by the text services framework for the purposes of reading or requesting updates to the buffer or web application's view. The web application can also push changes about its state to the text services framework using method calls against the EditContext object. More specifically,
The text services framework can request information about the shared buffer by reading its:
- location on the screen
- contents
- selection location
The text services framework can also request that the buffer or view of the application be modified by requesting that:
- the text of the buffer be updated
- the selection of the buffer be relocated
- the text of the buffer be highlighted over a particular range
The web application is free to communicate before, after or during a request from the text services framework that its:
- buffer has changed
- selection has changed
- layout has changed
- type of expected input has changed
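The application push side of that summary can be sketched as follows (the method names follow the proposal's direction and may change; clampSelection is a hypothetical helper that keeps offsets valid):

```javascript
// Hypothetical helper: keep selection offsets inside the buffer.
function clampSelection(start, end, length) {
  const s = Math.max(0, Math.min(start, length));
  const e = Math.max(s, Math.min(end, length));
  return [s, e];
}

// App-driven notifications to the text services framework (sketch).
function notifyModelChanged(ctx, oldLength, newText, selStart, selEnd) {
  ctx.updateText(0, oldLength, newText);              // buffer has changed
  const [s, e] = clampSelection(selStart, selEnd, newText.length);
  ctx.updateSelection(s, e);                          // selection has changed
}
```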
After this, I think the architectural diagram mentioned in another issue, with the flow of a single character, would be good; then go into the API surface. After that I would finish with an inventory of all the interesting cases of conflicting changes.
Need API to request the current (on-screen) location of the selection and other arbitrary text ranges in the buffer of the edit context
Consider whether there are other special behaviors being bypassed
One such example might be pasting content. Validate that the paste event can still dispatch on the focused element and that editable elements are not special in some way.
Are there other examples?
I think the keypress event should be included in the event sequence
Need API to set input scope so correct keyboard can be displayed
The word "Write" in the images has a little "o" in front of it; consider removing it
Change "in absolute character positions" to "as offsets" to avoid the ambiguity of character
Prefer these names for the EditContext init dictionary: mode, text, selection
I would get rid of the accessibility claim; it's only mentioned in the overview and not directly addressed by this explainer
Question regarding virtual keyboard logic
So with built-in widgets like input or contenteditable, the user agent knows, for example, that the user is tapping on that area, so it knows when to open a virtual keyboard. Obviously when the VK is opened, the user agent is going to use the EditContext buffer and cursor position to drive suggestions on the VK and so on.
But I'd like to focus on showing and hiding the virtual keyboard within EditContext. What should the heuristic now be for hiding and showing the VK, so that it gives some control to the app as well?
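One possible split, sketched with the separate VirtualKeyboard API proposal: the UA keeps a default heuristic, and the app can opt into manual control. shouldShowKeyboard is a hypothetical heuristic, not anything specified.

```javascript
// Hypothetical heuristic: mirror <input>, which only raises the keyboard on a
// user-initiated focus, by requiring both an attached EditContext and a gesture.
function shouldShowKeyboard(hasEditContext, fromUserGesture) {
  return hasEditContext && fromUserGesture;
}

// Browser wiring (sketch), using the separate VirtualKeyboard API proposal.
if (typeof navigator !== "undefined" && navigator.virtualKeyboard) {
  const editView = document.querySelector("#editor");
  navigator.virtualKeyboard.overlaysContent = true; // app manages occlusion
  editView.addEventListener("pointerup", () => {
    if (shouldShowKeyboard(!!editView.editContext, true)) {
      navigator.virtualKeyboard.show();
    }
  });
}
```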
Need more clarification on format update as it can have accessibility issues
There have been discussions (I can't find the bug number in the Chromium database) on whether the IME should be allowed to apply its own styles in an edit context or not. These formats can affect accessibility, so it is important that we clarify this part of the design. E.g., a blue highlight for IME reconversion may not be what the user wants to see on a webpage that happens to have a blue background color.
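One way an implementation or app could guard against that, sketched with the WCAG 2.x luminance and contrast definitions (pickHighlight and the 3:1 threshold are illustrative choices, not part of any spec discussed here):

```javascript
// WCAG 2.x relative luminance of an [r, g, b] color (0-255 channels).
function luminance([r, g, b]) {
  const lin = c => {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// WCAG contrast ratio between two colors, from 1 (identical) to 21.
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Use the IME-suggested highlight only if it is distinguishable from the
// page background; otherwise fall back to an app-chosen style.
function pickHighlight(imeColor, pageBackground, fallback) {
  return contrastRatio(imeColor, pageBackground) >= 3 ? imeColor : fallback;
}
```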
When you get to the collaborative example, maybe separate it with a heading so it doesn't run together with the previous
Focus needs to be discussed in depth along with the model for how multiple EditContext objects can be associated with a document