mrdoob / three.js
JavaScript 3D Library.
Home Page: https://threejs.org/
License: MIT License
Facilitate hierarchies within scene components, where applicable properties of parent components (for example position, scale and orientation) "cascade" down onto child components.
This is an essential feature in modern 3D applications, so I argue that it should be an integral part of three.js.
As I've been working on thingiview.js, https://github.com/tbuser/thingiview.js (it uses three.js, but parses STL and OBJ files directly in JavaScript without having to preprocess the files), I've noticed that the WebGL renderer has gotten a lot slower than it used to be. I'm not sure which update caused it. The canvas renderer seems to have gotten much faster and now even seems to outperform WebGL in most cases.
function normalToComponent(normal) {
return normal < 0 ? _min((1 + normal) * 0.5, 0.5) : 0.5 + _min(normal * 0.5, 0.5);
}
As far as I can read, this maps a -1 to 1 range onto a 0 to 1 range, but if you enter a normal of -2 it returns -0.5 while it should return 0.
Or am I misunderstanding something?
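A possible fix sketch: clamp the input into [-1, 1] first, so out-of-range normals saturate at 0 or 1 instead of escaping the range. For in-range inputs this is equivalent to the original (both branches reduce to (n + 1) * 0.5); the naming here just follows the snippet above.

```javascript
// Clamp first, then remap [-1, 1] onto [0, 1].
// normalToComponent(-2) now yields 0 instead of -0.5.
function normalToComponent( normal ) {
	var n = Math.max( -1, Math.min( 1, normal ) );
	return ( n + 1 ) * 0.5;
}
```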
At the moment CanvasRenderer, in conjunction with Projector, uses an implementation of the painter's algorithm, or depth sorting, to ensure foreground objects are rendered on top of background objects.
This is fully effective only in basic scenes which do not involve intersecting objects or objects with transparency. If the scene contains transparent objects or objects which pass through one another, the illusion created by the painter's algorithm may fail, and objects might be observed "popping" in front of and behind one another.
Ideally CanvasRenderer should maintain a z-buffer or depth buffer, which it writes depth information to (perhaps using DepthMaterial) as pixels are written to the framebuffer. When drawing an object, the depth buffer is used to determine whether or not the pixel is written to the framebuffer using various compositing operations (GREATER_THAN, GREATER_THAN_OR_EQUAL, etc).
This removes the need to depth-sort objects and solves many issues pertaining to intersecting/transparent elements "popping" in and out of each other.
This may also resolve the issue with diagonal lines appearing when drawing transparent objects in CanvasRenderer as mentioned in https://github.com/mrdoob/three.js/issues/closed#issue/41
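The per-pixel test being proposed can be sketched roughly like this (illustrative only, not CanvasRenderer code): keep a depth value per pixel alongside the colour, and only accept a write when the incoming fragment is closer than what is already stored, so draw order stops mattering.

```javascript
// Minimal framebuffer + depth buffer pair; smaller z means closer to camera.
function createBuffers( width, height ) {
	return {
		color: new Array( width * height ).fill( 0 ),
		depth: new Array( width * height ).fill( Infinity )
	};
}

// LESS_THAN compositing: write only if the new fragment is nearer.
function writePixel( buffers, index, color, z ) {
	if ( z < buffers.depth[ index ] ) {
		buffers.depth[ index ] = z;
		buffers.color[ index ] = color;
		return true;  // pixel accepted
	}
	return false;     // occluded, discarded
}
```

With this scheme a far object drawn after a near one simply fails the depth test instead of overwriting it.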
I'm going to take a stab at establishing support for geometry instancing in three.js (applicable to the WebGL renderer only).
At the same time I will be adding support for consolidating very large numbers of static meshes into a single mesh (to reduce the amount of draw calls) for efficient drawing.
This is scratching a personal itch, as it will facilitate the creation of games like Minecraft/Cubelands/Blockland/Infiniminer in three.js, but I am sure others will find these features useful in the future.
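The consolidation step can be sketched roughly like this (illustrative data structures, not the actual three.js Geometry API): append each static geometry's vertices to one big list and offset its face indices by the vertex count so far, so the whole batch renders in one draw call.

```javascript
// Merge many static geometries into a single one to cut down draw calls.
// Each geometry here is { vertices: [...], faces: [[i0, i1, i2], ...] }.
function mergeGeometries( geometries ) {
	var merged = { vertices: [], faces: [] };
	for ( var g = 0; g < geometries.length; g++ ) {
		var geo = geometries[ g ];
		var offset = merged.vertices.length; // shift indices past earlier vertices
		for ( var v = 0; v < geo.vertices.length; v++ ) {
			merged.vertices.push( geo.vertices[ v ] );
		}
		for ( var f = 0; f < geo.faces.length; f++ ) {
			var face = geo.faces[ f ];
			merged.faces.push( [ face[ 0 ] + offset, face[ 1 ] + offset, face[ 2 ] + offset ] );
		}
	}
	return merged;
}
```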
Hello,
I want the transparency of a plane to drop when you move the mouse over it. This works. But now I want to know how to make the plane cover the entire picture, and how to hide the diagonal lines.
screenshot: http://srvdemo.pytalhost.at/screen.jpg
Looking around at other WebGL libraries/frameworks, I noticed a lot of them support loading COLLADA (.dae) files. I've also noticed a lot of modelling packages support this format. Has anyone looked at supporting this format?
Are there plans to support WebGL fog?
Sorry to ask this here, but I did not know any better place:
how can I bring my branch in sync with yours, MrDoob? That is, get your latest files while merging in my changes. Or, if that is not possible, only get your files, i.e. do a 'refork'?
Not entirely sure what changes were made to the camera system in r10 so I'm not sure what to change to get raw2sculpt working with r10.
As you can see it works with r9: http://svc.sl.marvulous.co.uk/raw2sculpt.html
As pyrotechnick was pointing out over Twitter, maybe it's worth making Matrix4 Float32Array/Array based. In my system, having it object based is 80-90% slower.
http://jsperf.com/array-vs-object-matrix-multiplication
http://jsperf.com/array-vs-object-matrix-inversion
What do you think Mr.Qualia?
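A sketch of what the flat-array layout looks like (hypothetical function and a column-major convention assumed; the jsperf pages above benchmark the real comparison): the matrix is 16 numbers in one array, so multiplication is pure index arithmetic with no property lookups.

```javascript
// Multiply two 4x4 matrices stored as flat 16-element arrays,
// column-major: element (row, col) lives at index col * 4 + row.
function multiplyFlat( a, b ) {
	var r = new Array( 16 );
	for ( var col = 0; col < 4; col++ ) {
		for ( var row = 0; row < 4; row++ ) {
			var s = 0;
			for ( var k = 0; k < 4; k++ ) {
				s += a[ k * 4 + row ] * b[ col * 4 + k ];
			}
			r[ col * 4 + row ] = s;
		}
	}
	return r;
}
```

The same code works unchanged on a Float32Array, which is also what WebGL wants for uniform uploads.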
Since the culling of cube faces was implemented recently by MrDoob, I wondered the following:
I've not implemented culling yet (but will do for static faces in between cubes) and am wondering if it would be worthwhile to do backface culling, i.e. only render the faces that are visible to the player. That would require quite some CPU, as it has to be recalculated after every move of the camera, but it lessens the burden on the GPU quite a bit (right?). What would you advise?
As far as I know most platforms which support SVG support Canvas too. SVG is just another layer of complexity to what is looking like an awesome tool. In my opinion it's better to focus dev on just Canvas.
I installed the provided Blender export script into Blender's .blender folder but couldn't get it to work. The error message when I tried running the script (I had to manually add a new entry to the Bpymenus file) is:
8/22/10 3:38:49 AM org.blenderfoundation.blender[62679]:
File "", line 1, in
File "/Applications/blender.app/Contents/MacOS/.blender/scripts/export_threejs_25b.py", line 158
check_existing = BoolProperty(name="Check Existing", description="Check and warn on overwriting existing files", default=True, options={'HIDDEN'})
^
SyntaxError: invalid syntax
I'm not a Python guy, nor do I really use Blender. I was trying to export a text object to see if I can play with three.js. Let me know if you need more details to track this issue down. I'm using Blender 2.49b on Mac OS X Snow Leopard.
Hello,
In my script (app) I want to walk through a room (with the arrow keys).
As you can see, the lines change their position from some points of view in a way they shouldn't. Can you tell me where the mistake is?
I look forward to your replies.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>3d room</title>
<meta http-equiv="content-type" content="text/html; charset=windows-1250">
<style type="text/css">
body {
background-color: white;
margin: 0px;
overflow: hidden;
}
</style>
</head>
<body>
<script type="text/javascript" src="../build/Three.js"></script>
<script type="text/javascript">
var keyDown = new Array();
for ( var i = 0; i < 300; i++ ) {
keyDown[ i ] = false;
}
document.onkeydown = function(event) {
keyDown[event.keyCode] = true;
}
document.onkeyup = function(event) {
keyDown[event.keyCode] = false;
}
var SEPARATION = 200,
AMOUNTX = 10,
AMOUNTY = 10,
camera, scene, renderer, angle = 90;
init();
function init()
{
var container = document.createElement('div');
document.body.appendChild(container);
camera = new THREE.Camera( 75, window.innerWidth / window.innerHeight, 1, 10000 );
camera.position.z = 0;
camera.position.x = 250;
camera.position.y = 170;
camera.target.position.x = 250;
camera.target.position.y = 150;
camera.target.position.z = 300;
scene = new THREE.Scene();
renderer = new THREE.CanvasRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
container.appendChild( renderer.domElement );
var geometry1 = new THREE.Geometry();
geometry1.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,0,0) ) );
geometry1.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,0,0) ) );
geometry1.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,290,0) ) );
geometry1.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,290,0) ) );
geometry1.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,0,0) ) );
var geometry2 = new THREE.Geometry();
geometry2.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,0,600) ) );
geometry2.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,0,600) ) );
geometry2.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,290,600) ) );
geometry2.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,290,600) ) );
geometry2.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,0,600) ) );
var geometry3 = new THREE.Geometry();
geometry3.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,0,0) ) );
geometry3.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,0,600) ) );
var geometry4 = new THREE.Geometry();
geometry4.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,290,0) ) );
geometry4.vertices.push( new THREE.Vertex( new THREE.Vector3 (0,290,600) ) );
var geometry5 = new THREE.Geometry();
geometry5.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,0,0) ) );
geometry5.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,0,600) ) );
var geometry6 = new THREE.Geometry();
geometry6.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,290,0) ) );
geometry6.vertices.push( new THREE.Vertex( new THREE.Vector3 (500,290,600) ) );
var line1 = new THREE.Line( geometry1, new THREE.LineBasicMaterial( { color: 0x000000 } ) );
scene.addObject(line1);
var line2 = new THREE.Line( geometry2, new THREE.LineBasicMaterial( { color: 0x000000 } ) );
scene.addObject(line2);
var line3 = new THREE.Line( geometry3, new THREE.LineBasicMaterial( { color: 0x000000 } ) );
scene.addObject(line3);
var line4 = new THREE.Line( geometry4, new THREE.LineBasicMaterial( { color: 0x000000 } ) );
scene.addObject(line4);
var line5 = new THREE.Line( geometry5, new THREE.LineBasicMaterial( { color: 0x000000 } ) );
scene.addObject(line5);
var line6 = new THREE.Line( geometry6, new THREE.LineBasicMaterial( { color: 0x000000 } ) );
scene.addObject(line6);
renderer.render(scene, camera);
setTimeout( loop, 1000 / 60 );
}
function loop()
{
if (keyDown[38] == true) {
camera.position.x += Math.cos( (angle/360)*2*Math.PI ) * 5;
camera.target.position.x += Math.cos( (angle/360)*2*Math.PI ) * 5;
camera.position.z += Math.sin( (angle/360)*2*Math.PI ) * 5;
camera.target.position.z += Math.sin( (angle/360)*2*Math.PI ) * 5;
}
if (keyDown[40] == true) {
camera.position.x -= Math.cos( (angle/360)*2*Math.PI ) * 5;
camera.target.position.x -= Math.cos( (angle/360)*2*Math.PI ) * 5;
camera.position.z -= Math.sin( (angle/360)*2*Math.PI ) * 5;
camera.target.position.z -= Math.sin( (angle/360)*2*Math.PI ) * 5;
}
if (keyDown[37] == true) {
angle -=5;
camera.target.position.x = camera.position.x + Math.cos( (angle/360)*2*Math.PI ) * 300;
camera.target.position.z = camera.position.z + Math.sin( (angle/360)*2*Math.PI ) * 300;
//turn left
}
if (keyDown[39] == true) {
angle += 5;
camera.target.position.x = camera.position.x + Math.cos( (angle/360)*2*Math.PI ) * 300;
camera.target.position.z = camera.position.z + Math.sin( (angle/360)*2*Math.PI ) * 300;
// turn right
}
if (camera.position.x < 0)
{
camera.position.x = 0;
camera.target.position.x = camera.position.x + Math.cos( (angle/360)*2*Math.PI ) * 300;
camera.target.position.z = camera.position.z + Math.sin( (angle/360)*2*Math.PI ) * 300;
}
if (camera.position.x > 500)
{
camera.position.x = 500;
camera.target.position.x = camera.position.x + Math.cos( (angle/360)*2*Math.PI ) * 300;
camera.target.position.z = camera.position.z + Math.sin( (angle/360)*2*Math.PI ) * 300;
}
if (camera.position.z < 2)
{
camera.position.z = 2;
camera.target.position.x = camera.position.x + Math.cos( (angle/360)*2*Math.PI ) * 300;
camera.target.position.z = camera.position.z + Math.sin( (angle/360)*2*Math.PI ) * 300;
}
if (camera.position.z > 900)
{
camera.position.z = 900;
camera.target.position.x = camera.position.x + Math.cos( (angle/360)*2*Math.PI ) * 300;
camera.target.position.z = camera.position.z + Math.sin( (angle/360)*2*Math.PI ) * 300;
}
renderer.render(scene, camera);
setTimeout( loop, 1000 / 60 );
}
</script>
</body>
</html>
I've added two methods to THREE.Camera to implement zooming, but my math is a little off on the zoom function.
See http://github.com/SignpostMarv/three.js/blob/master/examples/camera_zoom.html for the slightly borked but functional example and http://github.com/SignpostMarv/three.js/blob/master/src/cameras/Camera.js#L30 for the changes.
Any chance of adding this? I noticed the meshes have a visibility property, but I couldn't find a method to calculate visibility to the camera.
In r9, doodling the terrain resulted in a nicely smooth visual.
In r10, we get a visual wireframe-like effect (this problem appeared in previous revisions too).
Are there any plans for collision detection?
I use a very simplistic approach in my to-be-game
http://fabricasapiens.nl/projecten/spel/spel.html
But it is based on known positions of square blocks, and I wouldn't know how to easily create collision detection for other meshes. Maybe something with the functions found in Ray.js, but I'm completely new to 3D, so I can only use my web-development knowledge to get my head around the things I find :-)
Anyways, are you planning to work on it, or do you have any lead on how I could implement it?
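For what it's worth, the known-positions-of-square-blocks case described above can be sketched as a plain axis-aligned bounding-box overlap test (illustrative structures, not a three.js API):

```javascript
// Two AABBs overlap iff they overlap on every axis.
function aabbOverlap( a, b ) {
	return a.min.x < b.max.x && a.max.x > b.min.x &&
	       a.min.y < b.max.y && a.max.y > b.min.y &&
	       a.min.z < b.max.z && a.max.z > b.min.z;
}

// Blocks are axis-aligned cubes given by their minimum corner.
function collidesWithBlocks( playerBox, blocks, blockSize ) {
	for ( var i = 0; i < blocks.length; i++ ) {
		var p = blocks[ i ];
		var box = {
			min: { x: p.x, y: p.y, z: p.z },
			max: { x: p.x + blockSize, y: p.y + blockSize, z: p.z + blockSize }
		};
		if ( aabbOverlap( playerBox, box ) ) return true;
	}
	return false;
}
```

Arbitrary meshes are a much harder problem; this only covers the grid case.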
Thanks
Could you please describe how this page looks and how fast it runs:
http://fabricasapiens.nl/projecten/spel/spel.html
In my case (Ubuntu 10.04 / Chrome 7 --with-webgl) it looks like this:
http://fabricasapiens.nl/projecten/spel/Schermafdruk.png
and runs at approximately 30 fps.
I hoped there would be some sort of shading/gradients on the cubes, but I don't seem to get it working... Might it be a problem with the Linux/Chrome combination?
I can't find a way to color the particles based on depth, for example to simulate a 3D star field.
I have looked at all the examples but had no luck; the colors are always set at startup.
I have tried enabling the fog too, but it looks like it has no effect on particles.
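One workaround sketch, updating colors yourself each frame rather than at startup: compute a brightness factor from each particle's distance to the camera and multiply it into the particle's colour. The helper below is illustrative; how you feed the factor into the material (e.g. scaling its RGB components per frame) is an assumption, not a three.js API.

```javascript
// Returns 1 at the camera, fading linearly to 0 at maxDist and beyond.
function depthShade( particlePos, cameraPos, maxDist ) {
	var dx = particlePos.x - cameraPos.x,
	    dy = particlePos.y - cameraPos.y,
	    dz = particlePos.z - cameraPos.z;
	var dist = Math.sqrt( dx * dx + dy * dy + dz * dz );
	return Math.max( 0, 1 - dist / maxDist );
}
```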
So far I'm just relying on the examples; is there a small documentation page listing the objects and methods?
Thanks.
This is inconsistent with lines, which do not scale their position (only the relative positions of their vertices). I'm not sure which way this should be, but it should probably be consistent.
Is this do-able?
Thanks,
Steven.
I set the location of the camera at xyz 0,1000,0.
Then I set the target of the camera at xyz 0,0,0.
I am thus looking down. I would expect the target (0,0,0) to appear in the center of the screen, but instead it appears at the very bottom of the screen.
I guess that's a bug? Test is (again) here:
http://fabricasapiens.nl/projecten/spel/spel.html
Thanks
As a user of terrible Australian internet, I'm still at the mercy of large file sizes, and at over 50MB three.js can take a while to download.
Of course, given the nature of Git, this is irrelevant once the repo has been cloned initially. However, I'm raising the issue that people with internet access similar to mine may experience it. This problem will only grow larger as more and more examples are contributed by the community.
Possible solutions
My vote depends on how the examples will be targeted in the future. Do we wish for them to keep working against master forever?
I say they should be shipped with their own specific version of three.js and therefore separated into their own repo. You should not have to fear breaking the API just to keep all of the examples working.
Of course to actually reduce the size of the git repo, the large objects will need to be purged from the internal git history (in some respects, "breaking" the repo) if one of these solutions is implemented. Alternatively, the git repo could be reinitialised from scratch but this seems rather harsh.
When I zoom into an object (which I do by incrementing the z component of the camera position), I find that once I get too close, lines start disappearing while they should still be in the viewing area. Perhaps this is a problem with my near clip (0.000001) or far clip (1000000) being set poorly (those values were based on no real 3D programming experience), but it could also be a bug, so I'm posting it here in hopes of some help with the issue.
The lines in the screenshots below use THREE.ColorStrokeMaterial and have overdraw set to true (by the way, what does overdraw do?). I've drawn a 2D grid with a part on it (this is part of some CNC software I'm working on), but as you can see, the grid lines that intersect the bottom of the camera disappear as I zoom into them.
A few screen shots of the problem (zooming into the object by incrementing camera.position.z): http://imgur.com/a/JA9ET/threejs_lines_disappearing_due_to_camera_clip_issue
Any advice on how to solve this problem would be greatly appreciated!
The sphere appears behind the octahedron, even though the octahedron is within the sphere. Any ideas?
I've got a version at this link:
http://davies.gotgeeks.com/test/three.js/
Hi there,
I was wondering if you have any plans for adding some kind of animation support? Either skeletal animations or vertex animations?
Thanks!
And great work, everyone involved!
Hi all,
I have implemented JigLibJS physics for three.js, and it kind of works.
I reworked one of their demos (only with cubes) into three.js here: http://fabricasapiens.nl/projecten/threejs/physics/collisions_similar.htm
But I must be doing something wrong when I fetch the position and rotation from the physics objects and copy them to the three.js objects. See around line 170. Do any of you have experience with this, or would anyone care to have a look at it?
A zip of the necessary files can be found here:
http://fabricasapiens.nl/projecten/threejs/physics/physics.zip
Clearly some of the examples are written as stress tests rather than as legitimate examples for the community. One such example is http://mrdoob.github.com/three.js/examples/geometry_earth.html, which has a lot of people confused as to why it's so slow. Perhaps even remove them from the README altogether...
I think it would reduce the confusion amongst newcomers.
I have created a sphere (using the code below), and I can see its mesh when I include the debug code, but I can't see the sphere without the debug include.
Here's what I'm using to create the sphere, on Chrome 7.0.517.44. Can anyone see any mistakes?
var geometry = new Sphere( 100, 14, 8, false );
sphere = new THREE.Mesh(
	geometry,
	new THREE.MeshBasicMaterial( {
		color: 0xffffff,
		blending: THREE.AdditiveBlending,
		wireframe: true
	} )
);
sphere.overdraw = false;
sphere.doubleSided = true;
scene.addObject( sphere );
Thanks
The faces of a cube have some offset if the cube is not a regular hexahedron. This bug can be reproduced by editing the camera_orthographic demo.
Hi,
Demo: http://srvdemo.lima-city.de/3dtemplate/html/3droom.html
Screenshot: http://srvdemo.lima-city.de/screen.jpg
I want the image to get highlighted when you move the mouse over it. This should be done with a transparent plane, and it works well so far. But the transparent plane doesn't cover the whole image from all points of view (see screenshot). I would like to know why this is.
A pivot of rotation for objects would be useful. Right now I am doing that using a locus equation, but a built-in feature would be better.
I'm trying to get the README example working with the WebGL renderer, but the only thing I get is an empty screen. How would I go about doing this?
If there are significant differences between the renderers in terms of support, is there a list of what is and what is not supported by the different renderers?
Should I stick with the canvas renderer until the WebGL renderer is at the same level as the canvas renderer, or is it simply a matter of, e.g., not using particles but always meshes?
A ray class would be handy (see Ogre::Ray for a reference). I might mock one up for use with the plane class.
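A minimal sketch of what such a class could look like, in the spirit of Ogre::Ray (hypothetical names; the plane is given as a normal n and constant d with n·p = d):

```javascript
// A ray is an origin plus a direction (assumed normalized).
function Ray( origin, direction ) {
	this.origin = origin;
	this.direction = direction;
}

// Point at parameter t along the ray.
Ray.prototype.getPoint = function ( t ) {
	return {
		x: this.origin.x + this.direction.x * t,
		y: this.origin.y + this.direction.y * t,
		z: this.origin.z + this.direction.z * t
	};
};

// Parameter t where the ray hits the plane n . p = d, or null if parallel.
Ray.prototype.intersectPlane = function ( normal, d ) {
	var denom = normal.x * this.direction.x + normal.y * this.direction.y + normal.z * this.direction.z;
	if ( Math.abs( denom ) < 1e-10 ) return null;
	var num = d - ( normal.x * this.origin.x + normal.y * this.origin.y + normal.z * this.origin.z );
	return num / denom;
};
```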
I'm not sure if this is related to the Matrix4.makeOrtho method or something in Projector.projectScene, but something weird is happening when using an orthographic projection. The ortho projection matrix seems to be correct, and in the projectScene method, vertices are set to visible only if they lie within the 0 to 1 range.
vertex.__visible = vertexPositionScreen.z > 0 && vertexPositionScreen.z < 1;
This seems fine for the perspective projection matrix; however, the orthographic matrix seems to transform the same coords into the 0 to -1 range, which therefore means they are set to invisible.
I can't quite figure it out. Maybe the matrix needs to be negatively scaled along the z axis? I've tried the following but it doesn't quite work.
m.multiply( THREE.Matrix4.scaleMatrix( 1, 1, -1 ))
As far as I understand, objects are buffered on the GPU once they are created (using gl.createBuffer() etc.). Thus, once all elements are created, I would say that there is no pushing of elements to the GPU anymore, only telling it how they have changed (right?).
Still, when I draw 10,000 cubes on the screen once, and do not change the object matrix in any way, I only get < 1 fps with WebGL. How come?
I would argue (with my limited knowledge) that after the first push to the GPU, the rendering would speed up greatly, as the objects don't have to be changed after that point. One could simply tell the GPU: draw your buffers onto the screen, et voilà! Or does it not work like that?
Hi all,
I've been profiling my WebGL Minecraft demo, and I found that Matrix4.multiplySelf is the function that takes the most time. So I went about optimizing it a little, with a 20% boost. It's quite simple: it just limits the number of object lookups.
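The pattern is roughly this, shown on a 2x2 stand-in for brevity (illustrative code, not the actual patch): hoist every element into a local variable once, so the multiply reads locals instead of performing repeated property lookups; the real Matrix4 version would do the same for all 16 elements.

```javascript
// In-place multiply m = m * other, with one property lookup per element.
function multiplySelf2x2( m, other ) {
	var a11 = m.n11, a12 = m.n12, a21 = m.n21, a22 = m.n22;
	var b11 = other.n11, b12 = other.n12, b21 = other.n21, b22 = other.n22;

	m.n11 = a11 * b11 + a12 * b21;
	m.n12 = a11 * b12 + a12 * b22;
	m.n21 = a21 * b11 + a22 * b21;
	m.n22 = a21 * b12 + a22 * b22;
	return m;
}
```

Caching into locals also avoids reading elements that the earlier assignments have already overwritten, which is why the in-place version needs it for correctness as well as speed.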
Dashed and dotted color stroke material options would be handy for drawing dashed and dotted lines. Unfortunately, I'm not really sure what the best way to do this would be, given that canvas seems not to support dashed or dotted lines. Any thoughts?
https://gist.github.com/737878
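On the dashed-line question above: one way to emulate dashes on a canvas without native support is to split each segment into alternating pen-down/pen-up runs. An illustrative helper; "ctx" here is assumed to be any object exposing the standard 2D context drawing methods.

```javascript
// Draw (x1,y1)-(x2,y2) as dashes of `dash` px separated by `gap` px.
function dashedLine( ctx, x1, y1, x2, y2, dash, gap ) {
	var dx = x2 - x1, dy = y2 - y1;
	var length = Math.sqrt( dx * dx + dy * dy );
	var ux = dx / length, uy = dy / length; // unit direction
	var pos = 0, penDown = true;
	ctx.beginPath();
	ctx.moveTo( x1, y1 );
	while ( pos < length ) {
		var run = Math.min( penDown ? dash : gap, length - pos );
		pos += run;
		var x = x1 + ux * pos, y = y1 + uy * pos;
		if ( penDown ) ctx.lineTo( x, y ); else ctx.moveTo( x, y );
		penDown = ! penDown;
	}
	ctx.stroke();
}
```

Dots are the same with a very short dash length; a per-material dash pattern could drive this from the renderer.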
There are problems with texture mapping. Or maybe I need a specialized texture file.
If the add function returned this, then you could do stuff like:
return a.add(b).add(c).add(d).divideScalar(4);
I'm happy to push a patch - any objections?
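The proposed change, sketched on a minimal stand-in (not the real THREE.Vector3): each mutating method ends with `return this`, which is all that's needed to make chaining work.

```javascript
function Vec3( x, y, z ) { this.x = x; this.y = y; this.z = z; }

Vec3.prototype.add = function ( v ) {
	this.x += v.x; this.y += v.y; this.z += v.z;
	return this; // enables chaining
};

Vec3.prototype.divideScalar = function ( s ) {
	this.x /= s; this.y /= s; this.z /= s;
	return this;
};
```

With that, averaging four points is a one-liner: a.add(b).add(c).add(d).divideScalar(4).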
Absolutely lovely!
What would be the difficulty in adding text to each particle? I have a personal project I would love to incorporate this into, but without being able to add labels, it would be difficult for users to know what's going on.
Being able to draw lines between particles would also be useful, along with event handlers per particle (at the very least, associate a URL with them so that if I click one, I could "drill down" or go to a page which explains what the user is looking at).
Cheers,
Ovid
Just throwing it out there but it might be valuable to collect some statistics from the people trying out the examples.
For instance, you could sample the FPS and have them sent back to a database (along with browser/graphics card information).
If this is implemented, it might be best to do it as an opt-in thing, since I'm not sure you can actually capture the graphics card info programmatically. I'm also not sure of the privacy implications of such a facility.
Hello, how are you?
To begin, a very big congratulations for this library! It is amazing =D
Then, I have just a little question:
I use the library to create a cube. It works very well, and I can rotate it on x and y! Now I just want to know how I can make the cube always come to rest on one of its faces, so we can rotate it and, when the rotation is finished, we see just one face.
Sorry for my English :)
Thank you !
When running the materials.html example, the grid lines show, but when I switch to WebGL renderer, no grid lines show.
Are lines supported?
In messing around with the Text object, I've noticed that lines and particles with a common position vector and vertex do not line up. Can someone verify that this is not a new bug and that it behaves the same in mrdoob's three.js?
My test case (where canvas.model.scene is the three.js scene object, and a render is performed immediately after):
var material = new THREE.ColorFillMaterial( 0xAAAADD, 1 ); // JS has no keyword arguments; "hex = ..." would just assign globals
//var text = new THREE.Text("Hello World", material);
var text = new THREE.Particle( material );
//text.parent = canvas.model.canvas_group;
text.autoUpdateMatrix = true;
text.context = "3d";
text.fontScaling = true;
text.scale.setScalar( 1 );
text.position.set( 5, 0.1, 0.5 );
canvas.model.scene.add( text );
var geometry = new THREE.Geometry();
for ( var i = 0; i < 2; i++ )
{
geometry.vertices[ i ] = new THREE.Vertex( new THREE.Vector3( i * text.position.x, text.position.y, text.position.z * i ) );
}
var lineMaterial = new THREE.ColorStrokeMaterial( 5, 0xAAAAAA, 1 );
//Creating the line
var line = new THREE.Line( geometry, lineMaterial );
//hacking in a parent object
//line.parent = canvas.model.canvas_group;
canvas.model.scene.add( line );
I'm working on some speedups for the canvas renderer, and I want to ask if the core classes are going to change anytime soon. Because if they don't change, I can make some general assumptions.