
Comments (67)

EGjoni commented on June 8, 2024

Additionally, do note that if you're making a single precision port, you should reference the single precision implementation, specifically with regard to the QCP.java class, as the single precision version of QCP includes some modifications for numerical stability which the double precision version
can and does afford to skip.


EGjoni commented on June 8, 2024

Alright, I just pushed the latest code (with performance and stability improvements) to the modularization branch in this repository, and updated the rc branch of the processing extension repository with a bunch of new commented examples / demos.

In particular, you'll want to take a look at FullExample, which programmatically specifies a humanoid armature, and the Humanoid_Holding_Item demo, which loads an armature from JSON and demos simultaneous head, pelvis, and wrist tracking.


EGjoni commented on June 8, 2024

Also, the constraints are on the entire skeleton rather than a bone chain, right?

If you mean joint constraints (Kusudamas), they are per bone. If you mean the targets or "pins", those are evaluated over the entire section of the skeleton which they bound. Meaning, if you pin a humanoid skeleton at the pelvis, left wrist, and right wrist; then moving the pin/target of the left wrist around enough would likely result in bones in both the spine and the right arm rotating in order to keep satisfying the requirements of all three targets.

For each key-frame in the original skeleton, set the original bone as a target and re-target every key-framed bone to the new kusudama-constrained rig.

Thoughts?

For clarity, let's call the original animated skeleton the "source rig", and the target animation skeleton the "destination rig".

In theory you don't even need to bother with the A-Pose part. Meaning, if you set the Source Rig to have roughly the same scale as the Destination Rig, then it doesn't matter if the two share the same starting pose, nor does it really even matter if there is a one-to-one mapping in the number of bones between the two rigs. All you would have to do (in theory, but I haven't actually tried it) is decide which joints on the Source Rig best correspond to which joints on the Destination Rig. Then, simply enable targets on the bones of interest in the Destination Rig, and have those targets follow the appropriate joints of interest on the Source Rig.

I imagine you would get worse results by trying to match every single bone between the two rigs than by trying to match just the bones of interest.

Is GlobalMBasis the concatenation of every basis to the root?

Yes. GlobalMBasis is basically a cache of the composed transformation of a base with all of its ancestor bases up to and including the transformation at the armature's root. GlobalMBasis is used to avoid recalculating a basis from scratch when none of its parent bases have changed.
"updateGlobal()" basically updates this cached version. There are probably a few places I call updateGlobal() where it doesn't need to be explicitly called (since the node-hierarchy system is already autocaching results). The cost of calling it when the result is already cached is negligible, but I should clean those up at some point.

What are simLocalAxes and simConstraintAxes? What space are they in? Global or the bone relative?

The "LocalAxes" part in simLocalAxes is referring only to fact that they are derived from AbstractBone's "localAxes" object. The "localAxes" of a bone are named that due to an artifact of early implementations, which lacked a dedicated scenegraph. Now that you bring it up, I can see how that would be horribly confusing.
To clarify:
The "sim" part of both "simLocalAxes" and "simConstraintAxes" denotes that they are scratchpad Axes derived from but operated upon separately from the Axes on the actual armature. In essence, the solver maintains something like a doppleganger of your Armature. Each time you call the solver, or update the topology of the armature, the Axes on that doppleganger are updated to have all of the same transformations and parent-child relationships as the Armature. This is so that the solver can run for multiple iterations, and even compute otherwise illegal transformations, without modifying your actual armature until its finished running and ready to show you its presentable solution.

You also swap between getGlobalMBasis and the local Basis.
Yes, some of the calculations are more cleanly done in global space, others are more cleanly done in local space.

In theory a basis can be both scale and rotation, but I think the code only uses it for rotation.

Yeah. And translation. Why do you mention it?

How is a pin different from a bone tip or a bone target (aka multi effector target)? It was a bit unclear.

A pin and a target are the same thing. A bone-tip is the thing that reaches for a target, aka an end-effector. All of these terms are (again, now that you mention it) also horribly confusing artifacts of previous versions.

In future implementations the following conventions will be adopted:
"effector" : will refer to a bone which the solver tries to maintain in a certain position/orientation.
"target" : will refer to a position/orientation which an effector tries to match.
"pin" : will be used only to refer to an operation one can do on a bone, meaning "create a target for this bone at the location/orientation where the bone currently is."
and functions like "enablePin()" will be renamed to "enableTarget()".


EGjoni commented on June 8, 2024

What is updateTargetHeadings() or updateTipHeadings()?

updateTipHeadings() updates vectors that track the headings of all effectors which are descendants of the bone currently being evaluated by the solver. updateTargetHeadings() updates vectors that track the headings of each of those effector's targets.
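
Roughly, the idea (a hedged sketch; the actual library stores these differently) is to collect direction vectors from the evaluated bone's origin to each descendant effector tip and to each corresponding target, so the solver can rotate the bone to best align one set with the other:

//Headings from the evaluated bone's origin to a set of points (tips or targets).
static double[][] headings(double[] boneOrigin, double[][] points) {
    double[][] h = new double[points.length][3];
    for (int i = 0; i < points.length; i++)
        for (int j = 0; j < 3; j++)
            h[i][j] = points[i][j] - boneOrigin[j];
    return h;
}
//tipHeadings    ~ updateTipHeadings():    headings(boneOrigin, effectorTipPositions)
//targetHeadings ~ updateTargetHeadings(): headings(boneOrigin, targetPositions)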


EGjoni commented on June 8, 2024

Do you know what this is in degrees?

In degrees that would be 540, which is equivalent to 180. It can just as well be changed to just Math.PI (It's so high IIRC because I was testing that the modulo logic was working correctly).

I'm keeping the range in radian angles rather than TAU because the documentation is confusing.

I'm not sure I understand what you mean by "radian angles rather than TAU". TAU is already in radian angles. TAU = 2*PI = a full 360 degree turn.

because the documentation is confusing.

That much is absolutely true. But do be careful about taking liberties with any portion of the implementation upon which the "snapToTwistLimits()" function relies. It's a complicated function that assumes very particular things about the representation it's working with, so that the library overall can be agnostic to the handedness of the Axes being used.

Most game engines decide in advance that all bases should be right-handed, so you probably don't actually need this level of agnosticism and can simplify things for your purposes. But do note that if you want to simplify the minAxial and maxAxialRange implementation, you will certainly need to rewrite the logic of snapToTwistLimits(). (In fact, you can see a leftover comment in that function from when I made a quick attempt at structuring it to not be agnostic.)

If you intend to try your luck with such simplifications, I strongly suggest holding off on them until AFTER you've made a more naive one-to-one port that you can verify is working.
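
For reference, the general shape of such wrap-around twist clamping looks like the following (a naive sketch only; the library's snapToTwistLimits() additionally stays agnostic to Axes handedness, which is exactly the complexity being warned about):

//Clamp a twist angle into the wrapped interval [min, min + range] on the circle.
static double clampToTwist(double angle, double min, double range) {
    final double TAU = 2.0 * Math.PI;
    double a = ((angle - min) % TAU + TAU) % TAU;  //offset from min, in [0, TAU)
    if (a <= range) return angle;                  //already inside the window
    //otherwise snap to whichever boundary is closer going around the circle
    return (a - range < TAU - a) ? min + range : min;
}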


fire commented on June 8, 2024

What I did was keep the axial [limits] as a data-only struct. When updating it, I convert from degrees to radians.

Everywhere it is used, I convert with to_tau and from_tau functions.


EGjoni commented on June 8, 2024

The plan currently is to have analytic solvers for the IK system, but it doesn't have sound theory behind it.

This plan is almost certainly not going to work.

Make a visualizer for kusudama constraints

The Kusudama visualizer in the youtube link from my reply in #7 is primarily shader-based; if you'd like, I can share the code for that. Rolling your own visualizer would require understanding the math behind Kusudamas, which might not be entirely obvious to infer from the Java code (and I've yet to write a paper on them).

My peers had some concerns about FABRIK's unnaturalness around the bending of the elbows, given only three sensor points.

Your peers' concerns about "unnaturalness" are well founded, but they aren't a limitation of FABRIK per se. They are more a problem resulting from a combination of the following three factors:

  1. Most IK systems (FABRIK included) don't account for the "twist" component of their end effectors. (If you touch the back of your right hand to your cheek [so your thumb points toward your nose], then rotate your hand around the direction of your middle finger by 360 degrees [so your thumb points toward your nose again], note the huge difference in where your shoulder and elbow had to be in the first pose vs the second pose.)
  2. Poor constraint specification.
  3. The tools of Robotics map poorly to the realities of Biology.

This library currently accounts for factor 1 (the solver can take the orientation of effectors into account -- though, at some cost to speed and convergence stability in edge cases) and factor 2 (via Kusudama constraints).

For your particular use case, factor 3 is both the most important and I think the most trivial to handle. If done correctly, a solution to 3 would (for your use case) almost completely obviate concerns regarding 1 and 2.

Depending on whether you're using this library as a reference, or developing a full port, I can either implement a solution to 3 in this library, or offer some suggestions as to a viable approach, and let you figure out the details as relevant to your specific implementation. But the very broad idea is:

  1. Define a set of (preferably extreme) pre-computed poses.
  2. Determine the n poses in that set that most closely match the user's pose (see the sketch after this list).
  3. Take either a weighted average or the closest of those pre-computed poses, and run the solver using those pre-computed poses as the starting position, and the sensors as the target position.
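
A hedged sketch of steps 1-2 for the simplest case (n = 1); array layouts and names are illustrative, not library API:

//posesSensors[p][s] = xyz of sensor s in pre-computed pose p;
//liveSensors[s]     = xyz of sensor s as currently tracked.
//Returns the index of the pre-computed pose closest to the live sensor readings;
//the solver would then start from that pose, with the sensors as targets.
static int closestPose(double[][][] posesSensors, double[][] liveSensors) {
    int best = 0;
    double bestDist = Double.MAX_VALUE;
    for (int p = 0; p < posesSensors.length; p++) {
        double d = 0;
        for (int s = 0; s < liveSensors.length; s++)
            for (int j = 0; j < 3; j++) {
                double diff = posesSensors[p][s][j] - liveSensors[s][j];
                d += diff * diff;
            }
        if (d < bestDist) { bestDist = d; best = p; }
    }
    return best;
}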


fire commented on June 8, 2024

Thank you for your detailed reply.

If you could implement this in this library, I could reference how it is implemented. The shader code would be useful too.


fire commented on June 8, 2024

I've done some cleanup of the repository, and removed the binary library from the repository.

There's also some updates to the readme.

https://github.com/fire/Everything-WIll-Be-IK


EGjoni commented on June 8, 2024

I've just updated the library with a massively improved algorithm. As far as I can tell, it sort of "just works" and probably won't require any of the preprocessing suggestions above (check the video in the readme). I've also removed all of the old solvers in favor of this much better solver.

If you're still porting this, let me know if you need any help.


fire commented on June 8, 2024

I am still porting, but I was distracted by other tasks. I'll have to review the code.


fire commented on June 8, 2024

Is ConstraintExample_Kusudama.pde compatible with the new api?


fire commented on June 8, 2024

You mentioned you could provide help with the cone based constraint shader. I'm having trouble understanding that approach.


fire commented on June 8, 2024

My goal is to create a sample with three or more sensors to affect a skeleton I've defined in a C++ engine.

This requires visualization and editing of constraints.


EGjoni commented on June 8, 2024

You mentioned you could provide help with the cone based constraint shader. I'm having trouble understanding that approach.

I'm away from my dev machine at the moment, but I'll be home tonight and will upload the shader then.

My goal is to create a sample with three or more sensors to affect a skeleton I've defined in a C++ engine.

This requires visualization and editing of constraints.

In that case, I'll also update the Processing extension of the library, available at https://github.com/EGjoni/Everything-Will-Be-IK-Processing. The latest version will include a visualizer for Kusudamas (and a number of other useful examples for generally getting the most out of the library). If you're willing to put in a little extra work to port the JSON-based armature Saver and Loader, it will likely help you quite a bit in ensuring the armature you create behaves the same way in your C++ port as it does in Java.

Anyway, if you'd like to have something to chew on in the meantime, the basic idea behind Kusudamas is this:

https://i.imgur.com/ERuipqk.png

Except the regions are on a sphere (so the straight lines end up being curves).

In effect, this means you can add a sequence of LimitCones, and the kusudama will treat them as specifying a "path" of allowable space on the sphere that gracefully encompasses the region between one cone and the next. It also means that if you specify just one cone, a kusudama behaves identically to traditional cone-constrained ball-and-socket joints.
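
For the single-cone case, the membership test is the classic one (a minimal Java sketch mirroring the isInCone() check in the shader later in this thread):

//A direction is allowed by a single LimitCone if its angle from the cone's
//axis is within the cone's radius (dir and coneAxis are unit vectors,
//radius in radians).
static boolean inCone(double[] dir, double[] coneAxis, double radius) {
    double dot = dir[0]*coneAxis[0] + dir[1]*coneAxis[1] + dir[2]*coneAxis[2];
    return Math.acos(Math.max(-1.0, Math.min(1.0, dot))) <= radius;
}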

Is ConstraintExample_Kusudama.pde compatible with the new api?

I'm not 100% sure. But on the topic of compatibility, I've made a number of further optimizations (>400% speed gains) and improvements / feature additions since the last pull to the master branch, so if you're starting the port, please use the "modularization" branch to make sure you benefit from the latest changes.


fire commented on June 8, 2024

What is the difference between double and single precision code?

Is it more precise? Faster?


fire commented on June 8, 2024

Thank you for your help.

The processing example code makes it easier for me to understand the implementation.


EGjoni commented on June 8, 2024

Hey, sorry. I'm kind of backlogged with work until Friday. So you'll have to wait until then for the more complete Processing examples (there'll be some very useful stuff for your use case demoed in there that you'll want to look at when I upload it).

What is the difference between double and single precision code?

Is it more precise? Faster?

The double precision version is a little more precise, and in Java also a little faster due to how Java's trigonometric operations are handled (specifically, Java's trig functions operate on doubles, so trigonometric operations in the single precision format cost the same as in the double precision format, plus the cost of casting from double to float).

The reason for the single precision library is pragmatic. Specifically:

  1. If a developer wants to integrate the library into some existing Java framework for game development or graphics applications that uses single-precision transformations, then they can do so more efficiently / conveniently using the single-precision version of the library, and
  2. If a developer wants to port the code outside of Java to a framework that relies on single precision transformations, they don't have to worry about whether any differences in the results / stability of their port are due to precision differences, or implementation errors. If a single precision port is unstable, they have a single precision Java implementation available to check the output against.

Sorry again for the delay. Here is the shader code in the meantime if it helps.

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;

//Model space normal direction of the current fragment
//since we're on a sphere, this is literally just the fragment's position in 
//modelspace
varying vec3 vertNormal;

//This shader can display up to 10 cones (represented by 30 4d vectors [tangentCone, limitCone, tangentCone]) 
// alphachannel represents radius, rgb channels represent xyz coordinates of 
// the cone direction vector in model space
uniform vec4 coneSequence[30];
uniform int coneCount; 
 
//Make this "true" for sceendoor transparency (randomly discarding fragments)
//so that you can blur the result in another pass. Otherwise make it  
//false for a solid shell.  
uniform bool multiPass;

//Following three varyings are 
//Only used for fake lighting. 
//Not conceptually relevant
varying vec3 vertWorldNormal;
varying vec3 vertLightDir;
varying vec4 vertWorldPos;


///NOISE FUNCTIONS FOR FANCY TRANSPARENCY RENDERING
float hash( uint n ) { // from https://www.shadertoy.com/view/llGSzw  Base: Hugo Elias. ToFloat: http://iquilezles.org/www/articles/sfrand/sfrand.htm
	n = (n << 13U) ^ n;
    n = n * (n * n * 15731U + 789221U) + 1376312589U;
    return uintBitsToFloat( (n>>9U) | 0x3f800000U ) - 1.;
}

float noise(vec2 U) {
    return hash(uint(U.x+5000.0*U.y));
}

bool randBit(vec2 U) {
	float dist2 = 1.0;
	return 0.5 < (noise(U) * 4. -(noise(U+vec2(dist2,0.))+noise(U+vec2(0.,dist2))+noise(U-vec2(0.,dist2))+noise(U-vec2(dist2,0.))) + 0.5);
}
///END OF NOISE FUNCTIONS FOR FANCY TRANSPARENCY RENDERING.

//Determine if the fragment is in the path between two limitcones.
//This should only be relied on after verifying that the fragment
//isn't in the two limitcones it's checking between.
bool isInInterConePath(in vec3 normalDir, in vec4 tangent1, in vec4 cone1, in vec4 tangent2, in vec4 cone2) {			
	vec3 c1xc2 = cross(cone1.xyz, cone2.xyz);		
	float c1c2dir = dot(normalDir, c1xc2);
		
	if(c1c2dir < 0.0) { 
		vec3 c1xt1 = cross(cone1.xyz, tangent1.xyz); 
		vec3 t1xc2 = cross(tangent1.xyz, cone2.xyz);	
		float c1t1dir = dot(normalDir, c1xt1);
		float t1c2dir = dot(normalDir, t1xc2);
		
	 	return (c1t1dir > 0.0 && t1c2dir > 0.0); 
			
	}else {
		vec3 t2xc1 = cross(tangent2.xyz, cone1.xyz);	
		vec3 c2xt2 = cross(cone2.xyz, tangent2.xyz);	
		float t2c1dir = dot(normalDir, t2xc1);
		float c2t2dir = dot(normalDir, c2xt2);
		
		return (c2t2dir > 0.0 && t2c1dir > 0.0);
	}	
	return false;
}

//determines the current draw condition based on the desired draw condition in the setTo argument
// -3 = disallowed entirely; 
// -2 = disallowed and on tangentCone boundary
// -1 = disallowed and on controlCone boundary
// 0 =  allowed and empty; 
// 1 =  allowed and on controlCone boundary
// 2  = allowed and on tangentCone boundary
int getAllowabilityCondition(in int currentCondition, in int setTo) {
	if((currentCondition == -1 || currentCondition == -2)
		&& setTo >= 0) {
		return currentCondition *= -1;
	} else if(currentCondition == 0 && (setTo == -1 || setTo == -2)) {
		return setTo *=-2;
	}  	
	return max(currentCondition, setTo);
}

//returns 1 if normalDir is beyond (cone.a) radians from cone.rgb
//returns 0 if normalDir is within (cone.a + boundaryWidth) radians from cone.rgb
//returns -1 if normalDir is less than (cone.a) radians from cone.rgb
int isInCone(in vec3 normalDir, in vec4 cone, in float boundaryWidth) {
	float arcDistToCone = acos(dot(normalDir, cone.rgb));
	if(arcDistToCone > (cone.a+(boundaryWidth/2.))) {
		return 1; 
	}
	if(arcDistToCone < cone.a-(boundaryWidth/2.)) {
		return -1;
	}
	return 0;
} 

//returns a color corresponding to the allowability of this region, or otherwise the boundaries corresponding 
//to various cones and tangentCone 
vec4 colorAllowed(in vec3 normalDir,  in int coneCount, in float boundaryWidth) {
	normalDir = normalize(normalDir);
	int currentCondition = -3;
	vec4 result = vec4(0.0, 0.0, 0.0, 0.0); //accumulated output color
	
	if(coneCount == 1) {
		vec4 cone = coneSequence[0];
		int inCone = isInCone(normalDir, cone, boundaryWidth);
		inCone = inCone == 0 ? -1 : inCone < 0 ? 0 : -3;
		currentCondition = getAllowabilityCondition(currentCondition, inCone);
	} else {
		for(int i=0; i<coneCount-1; i++) {
			
			//each cone occupies 3 consecutive entries (cone + two tangent cones),
			//so the pair starting at cone i begins at index i*3
			int idx = i*3; 
			vec4 cone1 = coneSequence[idx];
			vec4 tangent1 = coneSequence[idx+1];			
			vec4 tangent2 = coneSequence[idx+2];			
			vec4 cone2 = coneSequence[idx+3];
										
			int inCone1 = isInCone(normalDir, cone1, boundaryWidth);
			
			inCone1 = inCone1 == 0 ? -1 : inCone1 < 0 ? 0 : -3;
			currentCondition = getAllowabilityCondition(currentCondition, inCone1);
				
			int inCone2 = isInCone(normalDir, cone2, boundaryWidth);
			inCone2 =  inCone2 == 0 ? -1 : inCone2  < 0 ? 0 : -3;
			currentCondition = getAllowabilityCondition(currentCondition, inCone2);
		
			int inTan1 = isInCone(normalDir, tangent1, boundaryWidth); 
			int inTan2 = isInCone(normalDir, tangent2, boundaryWidth);
			
			if(inTan1 < 1 || inTan2 < 1) {			
				inTan1 =  inTan1 == 0 ? -2 : -3;
				currentCondition = getAllowabilityCondition(currentCondition, inTan1);
				inTan2 =  inTan2 == 0 ? -2 : -3;
				currentCondition = getAllowabilityCondition(currentCondition, inTan2);
			} else {				 
				bool inIntercone = isInInterConePath(normalDir, tangent1, cone1, tangent2, cone2);
				int interconeCondition = inIntercone ? 0 : -3; 
				currentCondition = getAllowabilityCondition(currentCondition, interconeCondition);					
			}
		}
	}	
	
	if(multiPass && (currentCondition == -3 || currentCondition > 0)) {
		
		/////////
		//CODE FOR FANCY BLURRED TRANSPARENCY. 
		//NOT OTHERWISE CONCEPTUALLY RELEVANT TO 
		//TO VISUALIZATION
		////////
		
		vec3 randDir = vec3(normalDir.x  * noise(normalDir.xy)/50.0,  normalDir.y  * noise(normalDir.yz)/50.0, normalDir.z  * noise(normalDir.zx)/50.0);
		randDir = normalDir;
		float zAdd = abs(vertWorldPos.z);
		float lon = atan(randDir.x/randDir.z) + 3.14159265/2.0;
		float lat = atan(randDir.y/randDir.x) + 3.14159265/2.0;
				
		bool latDraw = randBit(vec2(lat, lon));//mod(lat, 0.005) < 0.00499;
		bool lonDraw = randBit(vec2(lon, lat));//mod(lon, 0.005) < 0.00499;
			
		if(randBit(vec2(lon, lat))) {		
			result = vec4(0.0,0.0,0.0,0.0);	
		}
		////////
		//END CODE FOR FANCY BLURRED TRANSPARENCY
		///////
	} else if (currentCondition != 0) {
	
		float onTanBoundary = abs(currentCondition) == 2 ? 0.3 : 0.0; 
		float onConeBoundary = abs(currentCondition) == 1 ? 0.3 : 0.0;	
	
		//return distCol;
		result += vec4(0.0, onConeBoundary, onTanBoundary, 1.0);
	} else {
		discard;
	}
	return result;
			
}

void main() {

  vec3 normalDir = normalize(vertNormal); // the vertex normal in Model Space.
  float lightScalar = dot(vertLightDir, vec3(0.5,-1.,0.5)); 
  lightScalar *= lightScalar*lightScalar;
  vec4 colorAllowed = colorAllowed(normalDir, coneCount, 0.02);  

  if(colorAllowed.a == 0.0)
  	discard;
  	
  colorAllowed += (colorAllowed + fwidth(colorAllowed)); 
  colorAllowed /= 2.0;
  vec3 lightCol = vec3(1.0,0.8,0.0);
  float gain = vertWorldNormal.z < 0.0 ? -0.3 : 0.5;
  colorAllowed.rgb = (colorAllowed.rgb + lightCol*(lightScalar + gain)) / 2.;
  vec4 specCol = vec4(1.0, 1.0, 0.6, colorAllowed.a);
  colorAllowed = colorAllowed.g > 0.8 ? colorAllowed+specCol : colorAllowed;
  	
  gl_FragColor = colorAllowed;
}


fire commented on June 8, 2024

My plan is to make a single precision port, that has an alias for using double precision. Thanks for the insight.


fire commented on June 8, 2024

I'm away this weekend, but I'm starting to get back into this code.

For Godot Engine, I have gltf2 scene export. In theory, anything that is in a scene can be exported as possibly animated geometry.


EGjoni commented on June 8, 2024

Interesting. I was unaware of gltf2 as a thing that exists.

It should be moderately straightforward to convert from EWBIK's save/load format to gltf2. The formats are already extremely similar, with the exceptions that EWBIK's format allows for skew/shear, that EWBIK's quaternions are saved in [w,x,y,z] order (as opposed to gltf2's [x,y,z,w]), and that EWBIK's format references nodes by identity hash instead of by array index.
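
The quaternion reorder, for instance, is a one-liner (illustrative helper, not library API):

//[w,x,y,z] (EWBIK order) -> [x,y,z,w] (glTF 2.0 order)
static double[] ewbikToGltfQuat(double[] wxyz) {
    return new double[]{ wxyz[1], wxyz[2], wxyz[3], wxyz[0] };
}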

But I'm not sure if it would be more straightforward than just porting the save/load code already provided.


fire commented on June 8, 2024

The idea is that this code is ported to C++ and included in Godot.

The animations are captured and constrained to the sensor positions.

The animations can be exported again from the software as something Blender can read.

My plan is to port the code using the internal apis.

Exporting to the GLTF2 format is notable, though, because it can be used to export scenes from your projects.


fire commented on June 8, 2024

I'm working on the API. The IK system isn't functional yet, but I'm trying to make the interface usable.
image


EGjoni commented on June 8, 2024

Tantalizing!

Keep me updated on any interesting progress you make or approaches you take or snags you hit, if you could.

It would be immensely helpful for figuring out the least painful ways to structure future versions of the library.


EGjoni commented on June 8, 2024

(Also, the algorithm isn't technically CCD. Something like "Cyclic Mean Deviation Descent" [CMDD] might be more accurate, give or take a few letters.)


fire commented on June 8, 2024

I renamed the class "Cyclic Mean Deviation Descent" [CMDD].

Also, the constraints are on the entire skeleton rather than a bone chain, right?


fire commented on June 8, 2024

It should be possible in theory to use this for skeletal animation re-targeting too.

[Make the scale relatively match]

[Set both skeletons to a standard pose, let's say an "A" pose]

[Reset the original skeleton and animations to be based on the "A" pose]

[Map original skeleton bone to the new skeleton bone]

For each key-frame in the original skeleton, set the original bone as a target and re-target every key-framed bone to the new kusudama-constrained rig.

Thoughts?


fire commented on June 8, 2024

I'm having trouble understanding what these are:

  sb.simLocalAxes.updateGlobal();
  sb.forBone.setAxesToSnapped(sb.simLocalAxes, sb.simConstraintAxes, boneDamp);
  sb.simLocalAxes.translateByGlobal(translateBy);
  sb.simConstraintAxes.translateByGlobal(translateBy);

What are simLocalAxes and simConstraintAxes? What space are they in? Global or the bone relative?

What is updateTargetHeadings() or updateTipHeadings()?

You also swap between getGlobalMBasis and the local Basis.

In theory a basis can be both scale and rotation, but I think the code only uses it for rotation.

Is GlobalMBasis the concatenation of every basis to the root?


fire commented on June 8, 2024

How is a pin different from a bone tip or a bone target (aka multi effector target)? It was a bit unclear.


fire commented on June 8, 2024

In Godot Engine, a Basis is a 3x3 matrix representing a rotation and a scale. A Transform is a rotation, scale, and origin.

I believe you only use LimitingAxes as a rotation.

In the current prototype, I have split the Global Transform from the local transform. Do you think there are any problems with that? Nothing is functional, so I can't test.


fire commented on June 8, 2024

I've used these concepts:

There's a Godot Engine bone, which is replicated in a Chain (similar to a skeleton) and a ChainItem (similar to a bone).

I want to make a ChainItem also an IKTarget, depending on whether it is pinned.

Thoughts?

There is also a

	struct EndEffector {
		BoneId effector_bone;
		Transform goal_transform;
	};


EGjoni commented on June 8, 2024

In the current prototype, I have split the Global Transform from the local transform. Do you think there's any problems with that? Nothing is functional so I can't test.

Depends on what you mean by "split," and where.

I believe you only use LimitingAxes as a rotation.

IIRC, yes.

I want to make a ChainItem also an IKTarget, depending on whether it is pinned.

If I'm understanding correctly, you want to make Bone A, be the target of Bone B?

If so, then the correct way to do that would be to define Bone B's Target Transform as having Bone A's Axes as its parent (and as having no rotation or translation in the local space of that parent). That way, Bone B's Target always ends up having the same position and orientation as Bone A.
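
As a hedged sketch of that setup (accessor names taken from snippets quoted elsewhere in this thread; exact signatures may differ, and boneA/boneB stand for bone instances):

AbstractAxes aAxes = boneA.localAxes();       //Bone A's transform frame
boneB.getIKPin().getAxes().setParent(aAxes);  //define B's target in A's frame
boneB.getIKPin().alignToAxes(aAxes);          //zero local offset, so the target
                                              //now tracks Bone A exactly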


fire commented on June 8, 2024

Worked on the editor.

[I think the control point, the radius, and the hidden connected to are the main properties of the cone limit.]

image

Worked heavily on the internal implementation, which is not done.

[Edited picture]


fire commented on June 8, 2024

@EGjoni Do you know what this is in degrees?

https://github.com/EGjoni/Everything-Will-Be-IK/blob/modularization/ewbIK/src/IK/floatIK/AbstractKusudama.java#L48

The range variable default?

I'm keeping the range in radian angles rather than TAU because the documentation is confusing.


EGjoni commented on June 8, 2024

That sounds reasonable (so long as you're not converting back and forth internally every time the solver uses it)

By the way, in your screenshot, what are OverrideTipBasis and UseMagnet?


fire commented on June 8, 2024

They're from the API of the current Godot Engine FABRIK IK solver. I haven't touched it.

My friend mentioned that limit cones and axial limits are hard to understand.

Bullet calls axial limits sliders with rotations.

image

Bullet calls limit cones spherical constraints.

image

https://www.panda3d.org/manual/?title=Bullet_Constraints

The axial and cone limits are still hard to understand.

We talked about using the words twist vs rotation.

Any suggestions?


EGjoni commented on June 8, 2024

How about "twist" and "direction"?


EGjoni commented on June 8, 2024

(Also, I think Bullet's "Slider" constraints allow for translation along an axis in addition to rotation around it. Axial limits, which you would be referring to as "twist", do not support translation, and in general ewbIK does not support prismatic joints!)


fire commented on June 8, 2024

"twist" for axial limits and "direction" for cone limits makes sense.

We also discussed that changing the limit cone control point and radius is difficult via a vector 3 and a float.

They need to be controllable by a graphical representation.

We're imagining either a handle that rotates a pole extending from the center of the sphere outward through the control point, or moving the point on the sphere directly through a screen-space-to-sphere-space transform.

Thoughts?


EGjoni commented on June 8, 2024

What I do is show a sphere which the user can click on. Wherever the user clicks / drags, a ray is cast from the camera through the sphere. The closer of those intersections to the camera represents the direction of the LimitCone when transformed to the local reference frame of the constraint.

For letting the user specify radius, I put a little draggable handle on the perimeter of the cone, and the user can drag that outward or inward from the cone's on-screen center to make the cone wider or thinner.

That approach is easy to program and intuitive to use, but does have the drawback of requiring that the user change their viewing angle if they want to drag their cone to the opposite side of the sphere. The most bang-for-buck solution for programming a convenience around that is to change which of the two camera-ray / constraint-sphere intersection points is used, based on the number of times the user has dragged outside of the sphere while manipulating the cone's control point.
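
A minimal sketch of that picking math (assumptions: ray origin/direction already transformed into the sphere's frame, rayDir normalized, sphere centered at the origin with radius r):

//Intersect a camera ray with the constraint sphere and return the unit
//direction of the nearer hit, to use as the new cone direction; null on a miss.
static double[] pickConeDirection(double[] rayOrigin, double[] rayDir, double r) {
    double b = dot(rayOrigin, rayDir);
    double c = dot(rayOrigin, rayOrigin) - r * r;
    double disc = b * b - c;
    if (disc < 0) return null;            //ray misses the sphere
    double t = -b - Math.sqrt(disc);      //nearer of the two intersections
    if (t < 0) t = -b + Math.sqrt(disc);  //origin inside the sphere: take far hit
    if (t < 0) return null;
    double[] p = { rayOrigin[0] + t * rayDir[0],
                   rayOrigin[1] + t * rayDir[1],
                   rayOrigin[2] + t * rayDir[2] };
    double inv = 1.0 / r;                 //point on the sphere / r = unit direction
    return new double[]{ p[0] * inv, p[1] * inv, p[2] * inv };
}

static double dot(double[] a, double[] b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}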


fire commented on June 8, 2024

I had a lot of trouble exporting the processing demos as stand-alone executables. The shaders could not be found.


fire commented on June 8, 2024

Can you explain how alignToAxes works? Why do you do this?

/**
 * Translates and rotates the pin to match the position
 * and orientation of the input Axes. The orientation
 * is only relevant for orientation-aware solvers.
 * @param inAxes
 */
public void alignToAxes(AbstractAxes inAxes)

I have an IKAxes class and three variables in a ChainItem, which is the substitute for your bone class.

// Positions relative to the root bone
Transform initial_transform;
// Position from this bone to the child
Vector3 current_pos;
// Direction from this bone to the child
Vector3 current_ori;

My problem is converting alignToAxes().

Chain is my substitute for your armature class. This is because Godot Engine already has a Skeleton and Bones.


fire commented on June 8, 2024

image

Some parts are working! Not very well, but it's a good start.


EGjoni commented on June 8, 2024

I had a lot of trouble exporting the processing demos as stand-alone executables. The shaders could not be found.

Does the issue occur when running the demos from within the processing IDE?

Can you explain how alignToAxes works? Why do you do this?

It used to be a convenience method, but I don't think it's used at all anymore.

I have a IKAxes class and three variables in a ChainItem which is the substitute of your bones class.
// Positions relative to the root bone
Transform initial_transform;
// Position from this bone to the child
Vector3 current_pos;
// Direction from this bone to the child
Vector3 current_ori;
My problem is converting alignToAxes().
Chain is my substitute for your armature class. This is because Godot Engine already has a Skeleton and Bones.

You've lost me. Are the variables listed (initial_transform, current_pos, current_ori) part of your IKAxes class, or part of your ChainItem (bone) class?

Some parts are working! Not very well, but it's a good start.

Looks like a modern art installation!


fire commented on June 8, 2024

Here's the https://github.com/fire/Everything-Will-Be-IK-Processing/tree/v3-rc branch with the changes.

The key insight is that the data folder is where the ".pde" is. I copied the armature and the shaders because it was simpler but less elegant. Can you think of a better way?

You've lost me. Are the variables listed (initial_transform, current_pos, current_ori) part of your IKAxes class, or part of your ChainItem (bone) class?

"initial_transform, current_pos, current_ori" are part of the ChainItem (Bone) class.


EGjoni commented on June 8, 2024

The key insight is the data folder is where the ".pde" is. I copied the armature and the shaders because it was simpler but less elegant. Can you think of a better way?

Sorry, could you be more specific? What is the goal you are trying to accomplish by copying the .pde files into the data folder? (Also, I don't see any .pdes in the data folder of the branch you linked to.) Is this to try to get the examples running in the Processing IDE, or to try to get the examples running as self-contained executables? Or to try to port the shaders for use with the Godot engine? I'm not clear.

"initial_transform, current_pos, current_ori" are part of the ChainItem (Bone) class.

If initial_transform is supposed to correspond to localAxes, then it should be defined relative to the bone's parent bone's transform, not relative to the root bone's transform.

// Position from this bone to the child
Vector3 current_pos;
// Direction from this bone to the child
Vector3 current_ori;

A bone can have multiple child bones, so what does it mean to define a bone's position or orientation relative to the position or orientation of just one of its children?
Also defining current_ori as a Vector3 would imply something like a Euler angle representation for orientation, but the library operates on quaternion rotations. So what's going on there?


fire commented on June 8, 2024

image

The goal is to get standalone processing binaries exported from the Processing IDE. These are also known as self-contained executables.


EGjoni commented on June 8, 2024

And what's the purpose of trying to get standalone processing binaries exported from the processing IDE?


fire commented on June 8, 2024

My use case was to demonstrate your proof of concept to interested friends without them learning git, the Processing IDE, and other miscellaneous things.

I wanted other people to evaluate the proof of concept. Someone who's an artist or a rigger.

I'm also typing a response for the transforms question.


fire commented on June 8, 2024
"initial_transform, current_pos, current_ori" are part of the ChainItem (Bone) class.

If initial_transform is supposed to correspond to localAxes, then it should be defined relative to the bone's parent bone's transform, not relative to the root bone's transform.

// Position from this bone to the child
Vector3 current_pos;
// Direction from this bone to the child
Vector3 current_ori;

A bone can have multiple child bones, so what does it mean to define a bone's position or orientation relative to the position or orientation of just one of its children?

Also defining current_ori as a Vector3 would imply something like a Euler angle representation for orientation, but the library operates on quaternion rotations. So what's going on there?

That was the first question that came to mind too. Why does the Godot IK API split the rotation in Euler angles from the translation when the Transform data structure handles everything?

Godot's internal Transform is a 3x4 matrix consisting of a 3x3 matrix for Basis (rotation and scale) and a Vector3 origin for translation.

A Basis is convertible to Quat.

My thought was to remove the current_ori and current_pos and use one transform. Do I want it to be global or local? Do I want a transform for the global transform (relative to the root bone) and a local transform (relative to the parent bone)?


EGjoni commented on June 8, 2024

Why does the Godot IK API split the rotation in Euler angles from the translation when the Transform data structure handles everything?

Maybe a user convenience? (Easier for humans to think in Euler angles).

Godot's internal Transform is a 3x4 matrix consisting of a 3x3 matrix for Basis (rotation and scale) and a Vector3 origin for translation.

This is a highly atypical representation. Usually transform matrices are square matrices. In 3D, a 4x4 matrix is used with the fourth column or row representing perspective.

My thought was to remove the current_ori and current_pos and use one transform. Do I want it to be global or local? Do I want a transform for the global transform (relative to the root bone) and a local transform (relative to the parent bone).

For each Bone, you want a single transformation defined relative to its parent bone's transformation. The bone's global transformation is important for doing a bunch of things, but that global transformation can and should only ever be inferred by composing it with all of its ancestor transformations. The result of that composition should ideally be cached, so that the global transformation is only ever recomputed when one of its ancestor transformations change, and even more ideally so that the recomputation traversal is carried out only up to the ancestor that was changed.

That entire process (of defining transformations relative to parent transformations, caching the global transformation that results from composing that transformation with all of its ancestors, and keeping track of whether a global transformation should be recomputed before returning the result to the user), is what the AbstractAxes class is dedicated to.

I imagine the Godot engine must have SOME scheme that does something similar with its own transformations internally (or else the Godot engine is highly inefficient).

A Basis is convertible to Quat.

Almost. A Basis also contains a translation component. So, a Basis is convertible to a Quat + a Vector3.

My use case was to demonstrate your proof of concept to interested friends without learning git, processing IDE and other misc thing.
I wanted other people to evaluate the proof of concept. Someone who's an artist or a rigger.

I see. Yes, that's probably a good thing for the library to have in general. Processing is pretty finicky about directory structures, so if you haven't managed to get self-contained executables working, I'll make some and link you, hopefully sometime tonight. If you HAVE managed to get them working, then feel free to contribute them back ;).


fire commented on June 8, 2024

https://github.com/fire/Everything-Will-Be-IK-Processing/tree/v3-rc is my branch with the fixes to:

  • ewbIK/examples/FullExample/SinglePrecision/SinglePrecision.pde
  • ewbIK/examples/FullExample/SinglePrecision/UI.pde
  • ewbIK/examples/HumanoidLoaderExample/SinglePrecision/Human_Holding_Item/Human_Holding_Item.pde
  • ewbIK/examples/HumanoidLoaderExample/SinglePrecision/Human_Holding_Item/UI.pde

I have also made builds here:


fire commented on June 8, 2024

Most 4x4 matrices can be converted to Godot's 3x4 matrix, as I discovered.

The Godot::Basis is only rotation and [scale].

For the IK system, I've used the ChainItem as a scratchpad for in-progress IK changes; it's a duplicate of the existing Godot::Skeleton (with bones) system. So if I go to the pure local transform schema, I'd need to add caching.

[Edited]

Chain is Godot::Skeleton, while ChainItem is a bone equivalent.

There's another data structure for a camera.


EGjoni commented on June 8, 2024

I have also made builds here:

Great, thanks!

Most 4x4 matrixes can be converted to Godot's 3x4 matrix as I discovered.

Yes, it's not difficult. It's just a weird choice.

So if I go to the pure local transform schema, I'd need to add caching.

Well, if it's just a scratchpad, you can just have a getGlobalBasis() function that recomputes the global transform each time you need it, and worry about optimizing later. Alternatively, you can just start with a naive port of AbstractAxes. Ultimately, the IK Solver works by creating and maintaining a separate internal representation of the skeleton anyway (so intermediate computations don't interfere with what's being displayed, and so that results are atomic), so converting between the two representations on the mimicry step wouldn't be much harder than mimicking under the same representation.

But I would be fairly surprised if Godot doesn't already have some caching scheme implemented. If it doesn't, you might want to consider taking this opportunity to give it one.

I can take a look at what Godot is doing if you wanna point me in the right general direction.


EGjoni commented on June 8, 2024

Alright, I looked into Godot's scheme a bit. I believe their Spatial class is a very close analog to ewbIK's AbstractAxes class, and Godot's Spatial class does indeed seem to take care of caching and recomputing the global transformation as needed.

Godot's Transform class analogizes most closely to ewbIK's AbstractBasis class, and Godot's Basis class analogizes most closely to ewbIK's Rot class.

Let me know if you need help.


fire commented on June 8, 2024

Note that Spatial is a scene node, while Transform and Basis are objects.


EGjoni commented on June 8, 2024

Does it being a scene node present some implementation problem?


fire commented on June 8, 2024

Yes, because I shouldn't need to add new spatial nodes. The bones are in the skeleton node.

The skeleton IK CMDD solver is a different node.

Currently I'm trying to discover what is global, what is skeleton-global, and what is bone-local. This explains why the bones are [at] the root of the mesh rather than relative to the other bones.

Edited:

There's also the root of the chain vs. a bone in the chain. NOTE: this isn't the root bone of the skeleton.


fire commented on June 8, 2024

Here's a comparison. Note the location of the skeleton root (the orange bones start from a point near the feet).

image


EGjoni commented on June 8, 2024

(the orange bones start from a point near the feet)

Are you sure? Unless you're using an atypical visualization of Bones, it looks rather like your bones start at the hands and end at the feet.

Currently I'm trying to discover what is global, what is skeleton global and what is bone local. This explains why the bones are [at] the root of the mesh rather than relative to the other bones.

For reference:

  • Bones are transformed relative to their parent bone. The algorithm assumes this.
  • Kusudamas are transformed relative to the parent bone of the bone they are constraining. The algorithm assumes this.
  • The simLocalAxes corresponding to any bone are transformed relative to the simLocalAxes corresponding to their bone's parent bone. The algorithm assumes this.
  • The simConstraintAxes corresponding to any bone are transformed relative to the simLocalAxes corresponding to their bone's parent bone. The algorithm assumes this.
  • IKPins are, if I remember correctly (but I'll double-check) transformed "global-ish", in that by default their transformations are defined relative to the skeleton root's parent. If the skeleton's parent is the global transformation, then IKPins are defined relative to global coordinates. If the skeleton root's parent is defined relative to a moving bus instead, then placing an IKPin for the skeleton's wrists on a handrail inside the bus will cause the skeleton to always keep its wrist at that spot on the handrail, even if the rest of the skeleton is scooting over to the next seat. But this is just the default. You are free to redefine IKPins as being relative to basically anything at any time. In the humanoid box example, I redefined the IKPin transformations of the humanoid's wrist as being relative to the transformation of the box.

(If I didn't make IKPins' default transformation relative to the skeleton's parent, then you should definitely make it like that for your port!)


fire commented on June 8, 2024

I noticed something architecturally different.

Thank you for your explanations.

IKPins are, if I remember correctly (but I'll double-check) transformed "global-ish",

The Godot IK API currently has a limit of a single goal that is in the world. In this demo it is a button on the wall.

This project appears to add IK pins all the way to the root.

I'm calling pins Godot ChainTargets.

Here is the current code. The C++ implementation (.cpp) is in the same folder.

https://github.com/fire/godot/blob/ik-cmdd-kusudama/scene/animation/skeleton_ik_cmdd.h#L631

  //UI helper: rebuilds the list of pinned bones so the user can cycle through pins
  public void updatePinList() {
    pins.clear();
    recursivelyAddToPinnedList(pins, humanArmature.getRootBone());
    if(pins.size() > 0) {
      activePin = pins.get(pins.size()-1);
    }
  }

  //Depth-first traversal: collect each pinned bone's pin, parent its axes to
  //the world, then recurse into the pinned bones' children for deeper pins.
  public void recursivelyAddToPinnedList(ArrayList<IKPin> pins, Bone descendedFrom) {
    ArrayList<Bone> pinnedChildren = (ArrayList<Bone>) descendedFrom.getMostImmediatelyPinnedDescendants();
    for(Bone b : pinnedChildren) {
      pins.add((IKPin)b.getIKPin());
      b.getIKPin().getAxes().setParent(worldAxes);
    }
    for(Bone b : pinnedChildren) {
      ArrayList<Bone> children = b.getChildren();
      for(Bone b2 : children) {
        recursivelyAddToPinnedList(pins, b2);
      }
    }
  }

I wonder if this is a significant change?

It is possible to add all the bones from the root (which root? chain root, or skeleton root) to the target array.

I'm not going to support multiroots in Godot yet (Godot has support for multiroots).


fire commented on June 8, 2024

I'll post the IK demo here so I can reference it.

Note that it has only twist constraints and zero direction constraints.

ik.zip

[Edited]

Good night!


EGjoni commented on June 8, 2024

This project appears to add IK pins all the way to the root.

I'm not sure what you mean. ewbIK only adds IKPins on the bones the user wants to pin.

The Processing code you quoted is just for UI purposes. It traverses the skeleton to find which bones have pins attached, so that the user can cycle through them with the up and down arrows.

It is possible to add all the bones from the root (which root? chain root, or skeleton root) to the target array.

Within the actual library itself, targets aren't kept track of in an array; they're kept track of in a tree structure represented by the SegmentedArmature class.

I can (and probably should) draw a diagram if you feel unclear about what's going on there.


fire commented on June 8, 2024

Ah, I'm stuck trying to find why the bones aren't attached to their previous bone and why the position is at the floor beside the feet.

Thanks for your help though. The holidays are almost over, so things will slow down.


EGjoni commented on June 8, 2024

At which point in your code does a bone's local transform get composed with the transforms of its ancestors in order to determine the bone's global position/orientation?


fire commented on June 8, 2024

I'm having a hard time finding the bug. Is there an order of operations that I can verify step by step?

The idea is to start from a good point.

I found several places that grabbed the skeleton's global pose when given a bone id.


EGjoni commented on June 8, 2024

Oh, sorry. I missed this comment somehow.

We should probably try to discuss this over live chat or something. You can walk me through the approach you're taking in Godot, and I can walk you through what the current library expects.

You can e-mail me with your preferred chat venue at [email protected].

