material-motion-archive / starmap
Issue tracker for the starmap
Home Page: https://material-motion.github.io/material-motion/
We should support an initializer that encodes the "damping" coefficient present in the following relationship between friction and tension:
friction = sqrt(4 * tension) * damping
If damping < 1: the spring is under-damped and will oscillate
If damping = 1: the spring is critically-damped and will oscillate at most once
If damping > 1: the spring is over-damped and will not oscillate
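As a sketch (JavaScript; function names are assumptions, not from the spec), the relationship and the damping classification above could be encoded as:

```javascript
// Relationship from this issue: friction = sqrt(4 * tension) * damping.
function frictionFor(tension, damping) {
  return Math.sqrt(4 * tension) * damping;
}

// Classify a spring by its damping coefficient.
function dampingKind(damping) {
  if (damping < 1) return 'under-damped';       // will oscillate
  if (damping === 1) return 'critically-damped'; // will oscillate at most once
  return 'over-damped';                          // will not oscillate
}
```

For example, frictionFor(100, 1) yields 20, the critically-damped friction for a tension of 100.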
This tracer protocol should have methods like so:
func didCreatePerformer(named: String)
func didAdd(plan: Plan, toPerformerNamed: String)
etc...
You should be able to add a tracer to the scheduler like so:
scheduler.add(tracer: MyTracerImplementation())
A tracer can then do anything from logging the events to the console (e.g. a LogTracer) to sending events over the wire to a debugging tool.
See the debugging spec here: https://material-motion.gitbooks.io/material-motion-starmap/content/specifications/debugging.html
Note that the current tracer implementation on iOS uses notifications. This results in somewhat complex logic and does not make it particularly easy to capture general behavior.
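A minimal sketch of the direct-protocol alternative in JavaScript (only the method names come from this issue; the event-recording body is an assumption):

```javascript
// Hypothetical LogTracer: the scheduler invokes these hooks directly,
// sidestepping notification-based dispatch. Here we record events in an
// array; a real tracer could log them or send them over the wire.
class LogTracer {
  constructor() {
    this.events = [];
  }
  didCreatePerformer(name) {
    this.events.push(`didCreatePerformer: ${name}`);
  }
  didAdd(plan, performerName) {
    this.events.push(`didAdd: ${plan} -> ${performerName}`);
  }
}
```

A scheduler holding a list of such tracers would call each hook at the corresponding point in its lifecycle, which makes it straightforward to capture general behavior.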
@jverkoey, you use "bridge motion family" often, but haven't defined it in the glossary. I took a stab at a definition. Feel free to clarify (or LGTM) it and close this issue.
Does Material Motion have an opinion on formatting commit messages or PRs (e.g. Gerrit-style squashes vs. including every commit in a PR)?
At least for the JS implementation, I intend to follow rf-release/rf-changelog's commit format. It's great, because it makes it really easy to generate a CHANGELOG if we choose to go that way. (There are also arguments to be made for manually maintaining the CHANGELOGs).
I'm curious how the timeline works when progress is driven in a non-linear way (e.g. by a gesture or a spring). Seems like there's risk for the whole thing to feel on-rails/canned.
This spec will no longer be necessary once we start pursuing explicit view duplication as per #57.
A Runtime will need to translate desired Intentions into Actor instances.
Potential ways this can be achieved:
For the second approach, are there compile-time mechanisms we can use to ensure that an Actor can actually perform a given Intention?
Important emphasis points:
Instead, expose the addPlan/removePlan APIs on the scheduler directly. This will reduce the overall complexity of the runtime. We'll still be able to build abstractions in the future (like the Transaction) that hide direct access to the scheduler.
Also expose addPlan/removePlan APIs on scheduler directly in the spec.
Example:
let duplicate = duplicator.duplicate(target)
addPlan(plan, to: duplicate)
Notable questions to answer:
If you click on one, it will take you to the glossary definition for that word. But then there is no way to get back to where you were on the page you came from without scrolling.
Specifically:
It's easier to find on the filesystem and more in line with our naming conventions.
The Tween family draft refers to directors and elements. My understanding is that we refer to the things plans apply to as "targets" and try to keep the parts of the system separate where reasonable.
When specifying families, should we use the word "target" instead of "element"? Should we talk about plans being added in a director, or be more generic?
Whatever the decision, we should be careful to use the same verbiage consistently across articles.
We must emphasize the “sugar” nature of Expressions.
It should always be possible to create intentions using the standard “initialize + configure” pattern. This assumes somewhat more intimate knowledge of the platform’s primitives.
Does this actually affect the model layer or just the presentation layer (as we would say in iOS)?
E.g. A Timeline whose progress extends beyond the [0...1] range.
What does this primitive look like? How might it work?
E.g. Rotatable, withAnchorPoint(.5, .5) as a plan would lock the anchor point of the view to the center of the view. This would override the default behavior of adjusting the anchor point to the centroid of the gesture.
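As an illustration (JavaScript/DOM; all names here are hypothetical), such a plan might reduce to pinning the target's transform-origin:

```javascript
// Hypothetical plan: pins the target's anchor point, overriding the default
// gesture-centroid behavior described above.
class WithAnchorPoint {
  constructor(x, y) {
    this.anchorPoint = { x, y }; // unit coordinates: (0.5, 0.5) is the center
  }
}

// On the web, a performer could express the anchor point as transform-origin.
function applyAnchorPoint(target, plan) {
  target.style.transformOrigin =
    `${plan.anchorPoint.x * 100}% ${plan.anchorPoint.y * 100}%`;
}
```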
I wrote a GDoc of litmus tests for the sorts of transitions a successful Material Motion implementation should be able to express. I need to move the ones that aren't secret to the Starmap.
ActivityPerforming more accurately reflects what conforming to this protocol means: the performer is expected to indicate to the runtime when some form of work is active, be it an active gesture recognizer or an active animation.
As opposed to using composition via the scheduler to do so.
When executing a move, it can be relative to the viewport or to the document. The difference is apparent when scrolling: document-relative moves will scroll with the document, whereas viewport-relative moves will not.
There are situations where you'd want either/both. For instance, the photos demo should probably be viewport-relative when a photo is expanding, but document-relative when it's collapsing.
Which do we want to do by default? What syntax should we use for specifying this? Perhaps
move().to({ x: 200, y: 200 }).in(DOCUMENT)
and
move().to({ x: 200, y: 200 }).in(VIEWPORT)
?
From markwei:
Why should modifications not be executed? Generally this just means changing a field.
In reference to Expressions => Modifiers.
In a given motion runtime, when should actors be removed from the system?
The current assumption is that actor instances will live for as long as the runtime instance.
No spec page yet.
Each motion family should be communicable across all platforms, so consistency is only important within a given family.
It's likely that each family can/will have its own encoder/decoder. This gives families the flexibility to innovate and resolves the need to define a standard.
This needs to be formalized in the starmap spec.
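A sketch of such a per-family codec (JavaScript, assuming a plain-JSON wire format, which the starmap does not yet define; all names are illustrative):

```javascript
// Hypothetical codec for the Tween family: each family owns its own
// encode/decode pair, so no cross-family standard is required.
const tweenCodec = {
  encode(plan) {
    return JSON.stringify({ family: 'tween', ...plan });
  },
  decode(serialized) {
    const { family, ...plan } = JSON.parse(serialized);
    if (family !== 'tween') throw new Error(`not a tween plan: ${family}`);
    return plan;
  },
};
```

Because the codec is private to the family, a family can change its wire format without coordinating with any other family.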
https://github.com/GitbookIO/gitbook
What do we need to get started with this?
expression = Tween().fadeIn()
Is there a reason this needs to be instantiated? I imagine being able to do something like:
import {
fadeIn,
fadeOut,
} from 'odeon-tween-language';
...
registerIntentions(
{
intentions: [
fadeIn(.5)
],
target: element
}
)
but that would mean that Tween would be a module (not a class). You'd still be able to do:
import Tween from 'odeon-tween-language';
if you wanted the namespace.
This is so that you can chain terms together. If the terms are static methods, then an instance won't be able to invoke them.
If we remove term chaining then we can make the Language class a static-method-only class.
Might be interesting to play with both styles and see which feels better on each platform.
Seems like most of this API design (lists vs. chains and all the follow-on ramifications) can be done in such a way that you could write one style as sugar on top of the other. At the end of the day, they generate the same thing.
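A sketch of that equivalence (JavaScript; the term names are assumptions): the chained style can be a thin wrapper that appends to the same list the declarative style builds by hand.

```javascript
// Each term returns a new Tween with the term appended, so chaining and
// hand-building the list of terms produce identical data.
class Tween {
  constructor(terms = []) {
    this.terms = terms;
  }
  fadeIn() {
    return new Tween([...this.terms, { property: 'opacity', to: 1 }]);
  }
  fadeOut() {
    return new Tween([...this.terms, { property: 'opacity', to: 0 }]);
  }
}
```

Here new Tween().fadeIn().fadeOut().terms is the same array of term objects you would get by writing the list literally, so either style can be layered over the other.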
Discussion thread starts here: material-motion/material-motion-js#20
Right now, the Scheduler spec prescribes an activityState property with two states (active and idle). Why isn't this just a Boolean (e.g. scheduler.isActive)?
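One way to frame the trade-off (JavaScript sketch; the shape is an assumption): keep the richer state and expose the boolean as derived sugar, which leaves room for additional states later.

```javascript
// activityState as in the Scheduler spec; isActive is a derived boolean view.
const ActivityState = Object.freeze({ idle: 'idle', active: 'active' });

class Scheduler {
  constructor() {
    this.activityState = ActivityState.idle;
  }
  get isActive() {
    return this.activityState === ActivityState.active;
  }
}
```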
Our emphasis as a team is change over time. Rather than emphasizing content, our goal is to emphasize form and its movement through time. Our visual aesthetic will ideally emphasize this.
Each demo we create will emphasize a particular aspect of motion, so our aesthetic should mold itself to and draw attention to these aspects.
Advantages of doing so:
Disadvantages of doing so:
Proposed private Transaction API:
/**
Returns all transaction logs committed to this instance and removes the stored logs.
The transaction will be empty after invoking this method.
*/
func extractLogs() -> [MDMTransactionLog]
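A JavaScript analog of that contract (the log shape and commit method are assumptions; only extractLogs and its empty-after-invoking semantics come from the proposal above):

```javascript
// Hypothetical Transaction: extractLogs returns everything committed so far
// and leaves the transaction empty, matching the doc comment above.
class Transaction {
  constructor() {
    this._logs = [];
  }
  commit(log) {
    this._logs.push(log);
  }
  extractLogs() {
    const logs = this._logs;
    this._logs = [];
    return logs;
  }
}
```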
This will likely live in the "Languages" chapter.
@schlem: "I'd love to have a quick animated example for each term once we land on them. It would be really easy to visually capture what it means instead of trying to describe it in words."
@appsforartists: "I imagine it would serve as both a demo and an end-to-end test. One of the responsibilities of introducing a new intention should be adding a demo of it to our sandbox."
https://material-motion.gitbooks.io/material-motion-starmap/content/community_index/
Each platform's index can probably better be represented as a table.
https://material-motion.gitbooks.io/material-motion-starmap/content/community_index/android.html
Goal: use code from a framer.js prototype in a production app.
From markwei:
The Modifiers section may not make sense from Java's point of view. Probably faster to talk about this off-doc. A couple of questions while reading this:
We presently include tables of solutions in the concept articles. These feel somewhat constrained and may not be an ideal way to represent this information.
The purpose of the links is: