Introduction: Instructables Universe in Three.js
What's Instructables?
When you come to visit us at Instructables, you'll see a giant touchscreen whose job is to help explain what we do. On this touchscreen are about 20,000 points of light, each representing a project on Instructables: the top-performing of all of our featured content. It's there to help answer that question for visitors who may not be familiar with the site at all, and to give an idea of the sheer breadth of passions in our community. The Instructables Galaxy is one part data visualization and one part interactive exhibit. It's not meant to dig into each project, but to introduce them all, and their relationships to each other.
The Galaxy project has been ongoing for most of a year at this point. It's gone through many iterations. For the first time, it is now small enough to work on the web. You'll need Chrome. This fully interactive 3D demonstration will take everything your computer has to offer.
ENTER THE GALAXY
This Instructable is about how the Galaxy is made. I'll take you on my journey. It hasn't been linear. I started in Kinetic.js, and then ditched it for Three.js so that I could get all the stars moving, and get a little more whiz-bang motion when someone touches the screen. Here's where we're going:
- Processing: a sketch that generates the background images
- Kinetic.js: How the "stars" are clustered
- Kinetic.js: How to generate suggestive-looking constellations
- Kinetic.js: Hit-performance with 20,000 individual points, rendered in canvas
- Three.js: Getting Started
- Three.js: Particle Systems
- Three.js: Particle Systems in motion
- Three.js: A different brightness for each star: WebGL Shaders
- Three.js: Hit-performance with 20,000 individual points in 3D
- TweenMax: Easy Animation
- Three.js: Camera motion
- Three.js: Post-processing & effects: dimming, blurring
- Other libraries (special thanks)
Step 1: Processing: Generate the Background Images
First, I spent some time looking at space. I almost ditched the idea right then and there when I noticed that the Mac default desktop wallpaper was strikingly close to where I was headed, but I pressed on and noticed some details about the images of space and star clusters themselves:
- Individual stars vary in size, brightness, and color (red to blue), as do the surrounding clouds of gas
- Clusters appear both because of accumulations of stars and because of the way they light gasses more in their vicinity
- The beauty comes largely from mixed colors
- "Haze" is critical
- Brightness clusters into groups that feel concentric, but aren't uniform
- Many images have darker areas around the edges (a vignette effect)
- Star colors need to "blend" with their background color; there are very few blue stars in otherwise red space
The processing sketch for the background image attempts to turn these rules into code:
PVector center;
float diagonal;

void setup() {
  int width = 960, height = 440;
  size(width, height);
  center = new PVector(width/2, height/2);
  diagonal = dist(0, 0, center.x, center.y);
  noiseDetail(5, .5);
  colorMode(HSB, 1);
  for (int i = 1; i < 100; i++) {
    makeNew(i);
  }
}

void makeNew(int index) {
  float hueSeed = random(0.4, 1),
        saturationSeed = random(0.4, 1),
        brightnessSeed = random(0.2);
  color dark = color(hueSeed, saturationSeed, brightnessSeed);
  color light = color(hueSeed - random(0.4), saturationSeed - random(0.4), brightnessSeed + random(0.3));
  setGradient(0, 0, (float)width, (float)height, dark, light);
  noStroke();
  clouds(1, random(1,3), random(.5,1.5), random(.005,.02), random(.8));
  clouds(1, random(1,3), random(1,5), random(.005,.02), random(.8));
  clouds(1, random(3,5), random(3,4), random(.005,.02), random(.8));
  save("backgrounds/" + index + ".jpg");
}

void clouds(float xCoeff, float yCoeff, float lightnessMultiplier, float kNoiseDetail, float maxOpacity) {
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      float v = noise(x*kNoiseDetail*1.2, y*kNoiseDetail*1.2, millis()*.0001);
      float hue, saturation, lightness, alpha, distance;
      // note that distance is calculated ellipsoidally
      distance = dist(xCoeff*x, yCoeff*y, xCoeff*center.x, yCoeff*center.y);
      hue = 1; // seek range of 0.7-->0.4 (wrapping)
      saturation = 0.75 - v;
      lightness = v*lightnessMultiplier; // brighter towards middle
      alpha = maxOpacity - distance*0.6/diagonal;
      fill(hue, saturation, lightness, alpha);
      rect(x, y, 1, 1);
    }
  }
}

void setGradient(int x, int y, float w, float h, color c1, color c2) {
  noFill();
  for (int i = y; i <= y+h; i++) {
    float inter = map(i, y, y+h, 0, 1);
    color c = lerpColor(c1, c2, inter);
    stroke(c);
    line(x, i, x+w, i);
  }
}
There's nothing fancy here:
makeNew chooses two colors (one a randomized hue, the other a darker shade of it). It then calls clouds three times with different parameters to generate several superimposed variations of haze. Then it saves the image.
clouds loops over each pixel, mixing Perlin noise with a distance-based dropoff for the alpha and brightness of the cloud. This adds up to a splotchy-plus-vignette effect for each image, regardless of the "hardness" of the cloud's edge, the size of the cloud in x or y, or the colors involved. The many magic numbers in this function are the result of trial and error, not any sort of rigor.
setGradient applies a darker hue to the bottom part of the screen than the top.
setup calls makeNew in a loop (99 times, as written), so there are plenty of background images to choose from.
These images are later vignetted in JavaScript's canvas to hide the edges. This could certainly have been done in many other places (Processing, Photoshop/GIMP, ThreeJS), but doing it in JavaScript has two advantages:
1) The images don't need to be vignetted beforehand; if I change my mind about the vignette qualities, I can adjust them after I see all the pieces together, and
2) With the image loaded on the canvas, I have the opportunity to sample its pixels to choose a background color for three.js that blends well with the particular background image.
The code that does this essentially just loads a background image (a random selection from the Processing output) and a pre-set transparency image (drawn in Gimp). It uses the transparency JPEG for the alpha channel, and assigns an RGBA pixel based on the background image and the transparency image. The combined output is loaded as a texture for three.js. Inspiration for the vignetting technique comes from this code, full tutorial here.
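As a rough sketch of what that compositing step looks like (the function and variable names here are mine, not the project's), the per-pixel logic can be separated from the canvas plumbing. In the browser, the two flat RGBA arrays would come from ctx.getImageData(...).data after drawing each image to a canvas:

```javascript
// Hypothetical sketch of the vignette compositing: take RGB from the
// background image and use the grayscale mask's red channel as alpha.
function compositeVignette(bgPixels, maskPixels) {
  // bgPixels and maskPixels are flat RGBA arrays of equal length
  var out = new Uint8ClampedArray(bgPixels.length);
  for (var i = 0; i < bgPixels.length; i += 4) {
    out[i]     = bgPixels[i];     // R from the background image
    out[i + 1] = bgPixels[i + 1]; // G
    out[i + 2] = bgPixels[i + 2]; // B
    out[i + 3] = maskPixels[i];   // alpha from the mask's red channel
  }
  return out;
}
```

From there you would write the result back with ctx.putImageData and hand the canvas to three.js as a texture.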
Step 2: Kinetic.js: Clustering the Stars
Though not immediately apparent, the Galaxy is not arranged randomly.
Click one of the top-level categories, for example, and you'll see a ring:
Zoom into an individual channel, and you'll see a tight cluster of projects:
The projects are clustered by Instructables' categories and channels. The six top-level category rings emanate outwards from the center of the universe, and each channel within each category gets an equal pie-slice of its ring. This results in a "clustered" distribution of the stars, since some categories are fuller than others, and some channels within categories are fuller than others. These clusters both reflect the balance of projects across Instructables' categories and channels and provide some aesthetic appeal.
The code that makes this happen was written in KineticJS, though it could have been written in plain JavaScript, Processing, or anything else. It assigns a ring to each of the six categories, and a random point (zero to 2*pi) along that ring serves as the centerpoint for each channel. This is all done with basic trig functions: x = r*cos(theta) and y = r*sin(theta), where r (radius) is derived from the ring on which each project resides, and theta is derived from the "center angle" of each channel. As projects are processed, Kinetic creates a new layer for the channel (if need be) and adds the project to that layer. The layer is added to the category's layer, and all the layers are added to the stage. Kinetic makes it simple to collapse all these numbers down to world XY coordinates, which is why this code was never rewritten when the project moved to Three.js. Each project gets a random z, within a narrow range.
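The placement math above can be sketched in a few lines. This is an illustrative reconstruction, not the project's code; the ring radius and channel angle would come from the category and channel data:

```javascript
// Hypothetical sketch of the ring-placement trig described above:
// a channel's centerpoint is just polar coordinates converted to XY.
function channelCenter(ringRadius, channelAngle) {
  return {
    x: ringRadius * Math.cos(channelAngle),
    y: ringRadius * Math.sin(channelAngle)
  };
}

// e.g. the third ring out (radius is an assumption here), with a channel
// whose center angle is a quarter turn around the ring:
var center = channelCenter(300, Math.PI / 2);
```

Individual projects then scatter around that centerpoint, which is where the Gaussian distribution below comes in.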
Perhaps the most interesting part is creating a Gaussian distribution, rather than random scatter around these center points:
// (shown here wrapped in an object so the two helpers can reference each other)
var Distributions = {
  // approximately one sample from a standard normal distribution
  rnd_snd : function() {
    return (Math.random()*2-1) + (Math.random()*2-1) + (Math.random()*2-1);
  },
  // scale and shift to the target mean and standard deviation
  random : function(mean, stdev) {
    return this.rnd_snd()*stdev + mean;
  }
};
This helpful trick was provided by proton fish. If you have a target mean and standard deviation for a normal distribution you're trying to generate, a handy near-approximation is to add three uniform randoms (each spanning [-1, 1]) together; that sum has a standard deviation of 1, so you can scale and shift it to taste.
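A quick sanity check of the approximation, with the two helpers inlined as plain functions so this runs on its own:

```javascript
// Sum of three uniform randoms on [-1, 1]: approximately standard normal.
function rnd_snd() {
  return (Math.random() * 2 - 1) + (Math.random() * 2 - 1) + (Math.random() * 2 - 1);
}
function random(mean, stdev) {
  return rnd_snd() * stdev + mean;
}

// Draw a large sample targeting mean 5, stdev 2; the sample mean should
// land very close to 5, and every value is bounded by mean +/- 3*stdev
// (a side effect of the trick: no extreme outliers, which is fine here).
var samples = [];
for (var i = 0; i < 20000; i++) samples.push(random(5, 2));
var mean = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
```

That bounded tail is actually a feature for star placement: no project ever lands absurdly far from its channel's centerpoint.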
Step 3: Kinetic.js: Suggestive Constellations
When you squint, a constellation looks like something.
My approach here was, again, to look at real constellations. The constellations I saw seldom looked like specific things (it seemed to me that one person's lion might be another person's mouse, or even just a square with some lines coming out of it), but they did seem to share a suggestive quality derived from some simple and consistent geometric rules:
- There are no crossing lines
- Points (stars) connect mostly with adjacent or near-adjacent stars. It's unusual to have noticeably longer lines.
- There tends to be one (sometimes zero, sometimes two) closed polygon... a "body" of some kind
- Points have one, two, three, or four connections. There are almost never five connections to a single point.
- Constellations consist of approximately 3-20 stars
In pseudo-code, it's something like this:
First pass:
- Start with a random star
- Propose a line to the nearest un-attached star
- Test that this line does not cross any existing lines
- Draw this line if it passes, not if it doesn't
- Move to the next closest star
- Repeat
Second Pass:
- Find stars with no connections
- Find at least one non-crossing line to draw from these stars to connect them
Third Pass:
- Add a handful of non-connecting lines
And finally, the actual code I ended up with:
function ConstellationMaker3D(options){
  if (_.isUndefined(THREE) || _.isUndefined(Galaxy) || _.isUndefined(Galaxy.Utilities) || _.isUndefined(Galaxy.TopScene)) {
    throw new Error("Missing dependencies for ConstellationMaker3D");
  }
  // ConstellationMaker3D is a function of a camera object because the
  // 2-dimensional rules need a particular projection to work from
  this.init(options);
}

ConstellationMaker3D.prototype.init = function(options){
  var camera = options.camera || Galaxy.Utilities.makeTemporaryCamera();
  var nodes = options.nodes;
  _.bindAll(this, 'getConnections');
  this.camera = camera;                   // three.js camera object
  this.nodes = this.projectPoints(nodes); // Vector2's (math -- flattened representation of XYZ points)
  this.segments = [];                     // Line3's (math). Note these are 2D line segments; the 3d ones are rendered, but not part of the constellation construction
  this.connections = [];                  // Array of connected instructable ids. ie, [[id1,id2],[id2,id3]]
  this.disconnectedNodes = [];            // Vector3's not yet dealt with
  this.lineObject = null;                 // THREE.Line() object

  this.calculateConstellation();
  if (options.hidden !== true) this.displayConstellation();
};

ConstellationMaker3D.prototype.projectPoints = function(vector3List){
  var that = this;
  return _.map(vector3List, function(vec){
    var position = Galaxy.Utilities.vectorWorldToScreenXY(vec, that.camera),
        vec2 = new THREE.Vector2(position.x, position.y);
    vec2.instructableId = vec.instructableId;
    return vec2;
  });
};

ConstellationMaker3D.prototype.spatialPointsForConnections = function(connectionList){
  return _.map(connectionList, function(connectionPair){
    return Galaxy.Utilities.worldPointsFromIbleIds(connectionPair);
  });
};

ConstellationMaker3D.prototype.displayConstellation = function(callback){
  // Place THREE.JS objects corresponding to the calculated objects into the scene
  var connectedPoints3d = this.spatialPointsForConnections(this.connections);
  var that = this;
  if (!_.isEmpty(connectedPoints3d)) {
    // Initialize geometry, add first point
    var lineGeometry = new THREE.Geometry();
    // connect subsequent dots along the chain of connected points
    _.each(connectedPoints3d, function(pair){
      var closerPair = pair;
      lineGeometry.vertices.push( closerPair[0] );
      lineGeometry.vertices.push( closerPair[1] );
    });
    // display the line
    var material = new THREE.LineBasicMaterial({
      linecap: "round",
      color: 0xffffff,
      linewidth: 2,
      transparent: true,
      opacity: 0.5
    });
    this.lineObject = new THREE.Line( lineGeometry, material, THREE.LinePieces );
    this.lineObject.name = "constellation";
    Galaxy.TopScene.add( this.lineObject );
  }
  if (typeof callback === "function") { callback(); }
};

ConstellationMaker3D.prototype.movePointsCloser = function(pair){
  // part of displaying the constellation lines is shortening the segments for graphic effect.
  var end1 = pair[0].clone();
  var end2 = pair[1].clone();
  // move each point slightly towards the other
  var diff = end2.clone().sub(end1.clone());
  diff.multiplyScalar(0.08);
  return [end1.add(diff.clone()), end2.sub(diff.clone())];
};

ConstellationMaker3D.prototype.clear = function(){
  if (!_.isNull(this.lineObject)) {
    Galaxy.TopScene.remove(this.lineObject);
  }
};

ConstellationMaker3D.prototype.calculateConstellation = function(){
  var currentNode = this.nodes.shift(), that = this;
  while (this.nodes.length > 0) {
    currentNode = this.addSegmentFromNode(currentNode);
  }
};

ConstellationMaker3D.prototype.closestNodeToNodeFromNodeSet = function(testNode, nodesToTest){
  _.each(nodesToTest, function(potentialNextNode){
    potentialNextNode.distance = testNode.distanceTo(potentialNextNode);
  });
  var sorted = _.sortBy(nodesToTest, "distance");
  return sorted;
};

ConstellationMaker3D.prototype.findLineLineIntersection = function(line1, line2){
  var eqn1, eqn2, intx, inty;
  // if the two lines share an end (ie, they are drawn from the same node), pass
  if (this.shareEndpoint(line1, line2) === true) return false;
  eqn1 = this.equationForLine(line1);
  eqn2 = this.equationForLine(line2);
  // same slope = no intersection
  if (eqn1.m == eqn2.m) return false;
  // x-value of intersection point
  intx = (eqn2.b - eqn1.b) / (eqn1.m - eqn2.m);
  // y-value of intersection point
  inty = eqn1.m * intx + eqn1.b;
  // if x or y are out of range for either line, there's no intersection
  var range = {
    minx: Math.min(line1.start.x, line1.end.x),
    maxx: Math.max(line1.start.x, line1.end.x),
    miny: Math.min(line1.start.y, line1.end.y),
    maxy: Math.max(line1.start.y, line1.end.y)
  };
  if (intx < range.minx || intx > range.maxx) return false;
  if (inty < range.miny || inty > range.maxy) return false;
  range = {
    minx: Math.min(line2.start.x, line2.end.x),
    maxx: Math.max(line2.start.x, line2.end.x),
    miny: Math.min(line2.start.y, line2.end.y),
    maxy: Math.max(line2.start.y, line2.end.y)
  };
  if (intx < range.minx || intx > range.maxx) return false;
  if (inty < range.miny || inty > range.maxy) return false;
  return true;
};

ConstellationMaker3D.prototype.equationForLine = function(line){
  // eqn's store m & b from y = mx + b
  var m, b;
  // slope
  m = (line.end.y - line.start.y) / (line.end.x - line.start.x);
  // y-intercept: b = y - mx. Sub in values from a known point.
  b = line.end.y - m * line.end.x;
  return {m: m, b: b};
};

ConstellationMaker3D.prototype.shareEndpoint = function(line1, line2){
  if (line1.start.x == line2.end.x && line1.start.y == line2.end.y) return true;
  if (line1.end.x == line2.start.x && line1.end.y == line2.start.y) return true;
  if (line1.end.x == line2.end.x && line1.end.y == line2.end.y) return true;
  if (line1.start.x == line2.start.x && line1.start.y == line2.start.y) return true;
  return false;
};

ConstellationMaker3D.prototype.addSegmentFromNode = function(node){
  var nextNodeList = this.closestNodeToNodeFromNodeSet(node, this.nodes);
  var proposedLine = this.lineConnectingNodes2D(node, nextNodeList[0]);
  if (this.lineIntersectsPriorLines(proposedLine) == true) {
    this.disconnectedNodes.push(node);
  } else {
    this.connections.push([node.instructableId, nextNodeList[0].instructableId]);
    this.segments.push(proposedLine);
  }
  this.nodes = _.without(this.nodes, nextNodeList[0]);
  return nextNodeList[0];
};

ConstellationMaker3D.prototype.connectNodeMultipleTimes = function(node, times){
  var closest = this.closestNodeToNodeFromNodeSet(node, this.allNodes),
      lineCount = 0;
  for (var i = 2; i < closest.length && lineCount < times; i++) {
    var proposedLine = this.lineConnectingNodes2D(node, closest[i]);
    if (!this.lineIntersectsPriorLines(proposedLine)) {
      this.segments.push(proposedLine);
      this.constellationLayer.add(proposedLine);
      lineCount++;
    }
  }
};

ConstellationMaker3D.prototype.lineIntersectsPriorLines = function(proposedLine){
  var that = this, intersectionFound = false;
  _.each(this.segments, function(testSegment){
    var intersect = that.findLineLineIntersection.apply(that, [testSegment, proposedLine]);
    if (intersect === true) { intersectionFound = true; }
  });
  return intersectionFound;
};

ConstellationMaker3D.prototype.lineConnectingNodes2D = function(node1, node2){
  return new THREE.Line3(new THREE.Vector3(node1.x, node1.y, 0), new THREE.Vector3(node2.x, node2.y, 0));
};

ConstellationMaker3D.prototype.getConnections = function(instructableId){
  // returns an array of instructable id's to which the supplied id has connections.
  var flat = _.uniq(_.flatten(this.connections));
  var index = _.indexOf(flat, instructableId);
  switch(index) {
    case -1: return [];
    case 0: return [flat[1]];
    case flat.length - 1: return flat[flat.length - 2];
    default: return [flat[index - 1], flat[index + 1]];
  }
};
The move from KineticJS to ThreeJS decidedly complicates things. Constellations are fundamentally 2D in nature: they connect points that sit in three dimensions (even if you ask Ptolemy), but the constellation itself assumes one particular perspective, the view from Earth. Lines that appear to us not to cross may in fact cross when you view them from the side, as they do in the interactive demo.
Since ThreeJS operates on 3d objects, a method of collapsing the data to a camera plane became necessary. I introduced some utility methods to get the screen XY coordinates of a world XYZ point, given a camera position:
vectorWorldToScreenXY : function(vector, camera){
  // vector assumed to be in world xyz coordinates coming in.
  var widthHalf = window.Galaxy.Settings.width / 2,
      heightHalf = window.Galaxy.Settings.height / 2,
      projector = new THREE.Projector(),
      screenPosition;
  projector.projectVector( vector, camera );
  screenPosition = {
    x: ( vector.x * widthHalf ) + widthHalf,
    y: - ( vector.y * heightHalf ) + heightHalf
  };
  return screenPosition;
},
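The second half of that utility is just a mapping from normalized device coordinates to pixels, and it's worth sanity-checking on its own. This is a sketch with names of my own choosing: projectVector leaves the vector in NDC space, where x and y each run from -1 to 1, and this converts NDC to screen pixels:

```javascript
// Map normalized device coordinates (x, y each in [-1, 1]) to pixels.
// Note screen y grows downward, hence the negation.
function ndcToScreen(ndc, width, height) {
  var widthHalf = width / 2, heightHalf = height / 2;
  return {
    x: (ndc.x * widthHalf) + widthHalf,
    y: -(ndc.y * heightHalf) + heightHalf
  };
}

// the center of NDC space lands at the center of the screen,
// and NDC (-1, 1) is the top-left pixel:
var mid = ndcToScreen({ x: 0, y: 0 }, 1920, 1080);
var topLeft = ndcToScreen({ x: -1, y: 1 }, 1920, 1080);
```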
Step 4: Kinetic.js: Hit-Performance With 20,000 Points, Rendered in Canvas
KineticJS is not even my favorite 2D canvas library. I chose it initially over the many, many other canvas and SVG JavaScript options (easel, paper, processing, raphael, fabric, the list goes on) for one reason only: this demo.
This is just a gif. Click for the full demo!
Kinetic has built-in event delegation for objects added to its canvases. You register only a single event listener for the entire canvas, and Kinetic provides an easy way to retrieve the single particle that was clicked on. This makes it extremely fast (as canvas goes anyway), even with 20,000 objects onscreen at once. You can do this:
stage.on("click", function(e){
  var node = e.targetNode;
  console.log(node); // the individual object that was clicked!
});
... and it will log just about as fast as if there were only one node onscreen.
Kinetic was great for getting us to version 1.0 of this project, but ultimately we scrapped it entirely. The problem? Performance, and not in the spot I expected: it was the glowing and un-glowing of the stars as they were selected. A simple effect, but it performed poorly. There were other problems with 1.0 that were mine: I used a series of "relation" words that people found confusing, and there was some question as to whether the galaxy was about Instructables at all. I'd played down the projects too much.
That, and noahw thought we needed more whiz-bang. Movement, 3D, walk up to the screen and poke at it violently with a fat finger, that sort of thing. This ultimately led into the world of ThreeJS, which made the display much more attention-grabbing. In many ways, though, version 1 was purer and, for me, better. Click to check out the Kinetic version, but please be aware that this version of the project was never web-optimized at all. That link turns on debug mode, which gives you a small dataset and leaves the mouse cursor visible; it's still a 10mb download. The touchscreen mode is almost 60mb, and there's no loader, so caveat emptor.
Step 5: Three.js: Getting Started
This is just a gif. Click for the full JSFiddle!
Three.js can be challenging to step into for the first time. For me, the confusion was in figuring out all the pieces I needed to assemble for even the most basic "Hello, World" kind of example. I could find good tutorials that would give me the code to put a rotating cube on the screen, but I couldn't figure out what all the pieces were doing; I'd change something and the whole mess would break.
I hope the diagram above solves this for some of my readers. To get started with ThreeJS you need every piece in that diagram, and you need to assemble them as described, even for the most minimal example. There's no such thing as a one-liner to "draw a cube". I'll step through it; if you need more help, I'd start with Aerotwist's tutorial, the best of the ones I found. Also very useful: the "Creating a Scene" page in the ThreeJS docs.
The ThreeJS tutorial mentioned above proceeds, to me, in a strange order. Though this annotation will certainly seem redundant to someone who already understands it, I would walk through the "animating cube" example like this (the code is the same as the ThreeJS doc, repeated here with a different explanation):
Create a cube:

var geometry = new THREE.CubeGeometry(1, 1, 1);
var material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
var cube = new THREE.Mesh( geometry, material );

Create a Scene:

var scene = new THREE.Scene();

Add the Cube to the Scene:

scene.add( cube );

Create a Renderer:

var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

Add a Camera, and Render:

var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
camera.position.z = 5; // back the camera away from the origin, or the cube won't be visible
renderer.render(scene, camera);
Step back for a second. Did it work? Ok, now animate.
Add a Rendering Loop, replacing the single renderer.render() call with:

function render() {
  requestAnimationFrame(render);
  cube.rotation.x += 0.1;
  cube.rotation.y += 0.1;
  renderer.render(scene, camera);
}
render();
... and go home happy!
requestAnimationFrame(render) is a shim for the native method of (approximately) the same name, which you can think of as similar to JavaScript's setInterval(). Except there are lots of advantages: requestAnimationFrame won't fire when the page isn't visible, for example, so you don't spend resources on an animation nobody is watching. A tree-falls-in-the-forest kind of situation. It's also an API that allows the browser to redraw many things at once (from JS, CSS, WebGL, etc.), so it can optimize your redraw cycle. Read more.
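A pattern I'd suggest alongside this (illustrative names, not the Galaxy's code): keep the per-frame state change in a small pure function so it can be tested outside the browser, and let requestAnimationFrame drive it when one exists.

```javascript
// The per-frame work, isolated as a pure function.
function advanceRotation(rotation, speed) {
  return rotation + speed;
}

var state = { rotation: 0 };
function frame() {
  state.rotation = advanceRotation(state.rotation, 0.1);
  // renderer.render(scene, camera); // the draw call would go here
  // reschedule only where requestAnimationFrame exists (i.e. a browser)
  if (typeof requestAnimationFrame !== 'undefined') requestAnimationFrame(frame);
}
frame(); // in a browser this now schedules itself ~60 times per second
```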
Step 6: Three.js: Particle Systems
WebGL, and ThreeJS by extension, are great at displaying large and detailed meshes in 3-space. That's what many 3D models consist of, and even when models are other things ("solids" or "NURBs") they are eventually rendered to your computer screen by being first turned into so-called "render meshes".
"Points" aren't really a thing.
Particle systems are basically a way of managing 3-dimensional mesh geometry when you don't care about the edges. If you forget about the edges of a mesh, make them invisible, and then give some kind of point-like material quality to the vertices, you have a particle system. Particle systems allow you to move all of the points of a mesh around independently, and can be great for effects that are typically rendered as particles. Yes, you could model an hourglass this way as the individual particles of sand flow past each other, but you could also do something like clouds, snow, yes, stars, crazy self-organization, or a holy mess. Vertices are powerful in other ways, too. If you're willing to write yourself a vertex shader, it's pretty quick to get to a nice chrome ball.
I actually don't use particle systems for much, but ThreeJS makes it easy to apply vertex-only materials to ParticleSystems. Particle systems go hand in hand with particle-system materials. So instead of cubeGeometry + material = cube (from above), we can say particleGeometry + particleMaterial = ParticleSystem, and add that Object3D to the scene. For more detail on this basic version, I recommend Aerotwist's tutorial on particles. I ended up going a slightly different way: my own custom-rolled vertex and fragment shaders.
The JavaScript is straightforward enough:
var particleGeometry = new THREE.Geometry();

// add a bunch of vertices to the geometry
var particle = new THREE.Vector3(pX, pY, pZ);
particleGeometry.vertices.push(particle);
// repeat for every point

var material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  attributes: attributes,
  vertexShader: document.getElementById( 'vertexshader' ).textContent,
  fragmentShader: document.getElementById( 'fragmentshader' ).textContent,
  blending: THREE.AdditiveBlending,
  depthTest: false,
  transparent: true
});

var system = new THREE.ParticleSystem( particleGeometry, material );
// scene defined elsewhere
scene.add(system);
But the "vertexshader" and "fragmentshader" elements mentioned are WebGL code from an alien planet:
<script type="x-shader/x-vertex" id="vertexshader">
  attribute float alpha;
  attribute float size;
  attribute vec3 ca;
  varying vec3 vColor;
  varying float vAlpha;
  void main() {
    vColor = ca;
    vAlpha = alpha;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_PointSize = size * (1.0 + 300.0 / length( mvPosition.xyz ));
    gl_Position = projectionMatrix * mvPosition;
  }
</script>

<script type="x-shader/x-fragment" id="fragmentshader">
  uniform vec3 color;
  uniform sampler2D texture;
  varying vec3 vColor;
  varying float vAlpha;
  void main() {
    gl_FragColor = vec4( vColor, vAlpha );
    gl_FragColor = vAlpha * texture2D( texture, gl_PointCoord );
  }
</script>
Writing this shader code was painful for me, because I didn't know WebGL and still don't. Beyond some minutiae that aren't worth sharing, what I learned along the way was one interesting fact. I'd never known why a GPU was useful. Sure, it's good to have a second processor, but why not two CPUs? Is a GPU just cheaper?
The difference is parallel processing. For a CPU to paint every pixel on the screen, it basically has to follow code that handles each pixel in sequence:
for (each row) {
  for (each column) {
    do something to pixel at (row, column);
  }
}
The GPU code looks different because GPUs process pixels in parallel, not in sequence. You apply an effect to the whole screen at once, and layer effects on top of each other. This lets you do some really exciting stuff with very little code. See post-processing later for a taste.
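To make the contrast concrete, here's the fragment-shader mental model sketched on the CPU (illustrative only, not the Galaxy's code): each pixel's color is a pure function of its own coordinates, with no dependence on any other pixel, which is exactly what lets the GPU compute them all at the same time.

```javascript
// One "fragment shader" as a plain function: given only a pixel's own
// coordinates, return its brightness. A simple radial falloff, much like
// the vignette from Step 1: 1 at the center, fading to 0 at the edges.
function pixelColor(x, y, width, height) {
  var dx = x / width - 0.5, dy = y / height - 0.5;
  var d = Math.sqrt(dx * dx + dy * dy);
  return Math.max(0, 1 - d * 2);
}
```

On a CPU you'd call this inside the nested loop above; on a GPU, every pixel runs it simultaneously.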
Step 7: Three.js: Particle Systems in Motion
The stars could be animated in several different ways:
- Each vertex could be moved each frame. This offers enormous flexibility (any point anywhere any time), and is how most of the fireworks / fountain / snow particle-webgl examples work. But with large numbers of points, performance becomes a concern because the JavaScript has to loop over every particle, every frame. If you have 20,000 points with 3 dimensions changing every frame and you want to maintain a silky smooth 60fps frame rate, that's 3.6 million calculations per second. It's likely to slip a little.
- Use vertex shaders to displace the vertices directly in webgl. This is probably the best solution for performance (the JavaScript does nothing each frame, and all of the animation is direct in webgl). You do the displacement and noise calculations directly on the GPU, leaving the CPU entirely free for other tasks, such as user interaction. Here's an excellent demo and tutorial for this kind of trick. You should check out the tornado too, which also uses a related strategy. Though cool looking, this makes it difficult to handle things like a user tapping or clicking on a star. So far as JavaScript is concerned, the vertices are fixed. Mapping the webgl location back to a JavaScript object was either beyond me or not practically possible. I needed to be able to locate stars in space based on user interaction, so this option was out.
- Group the points into objects, and animate each object independently. This ended up being my solution. The six rings for Instructables' six top-level categories are six independent particle systems in ThreeJS. To create an illusion that the Galaxy is constantly in motion, I spin each of the particleSystems at different speeds, in different directions, and around different center points. This requires JavaScript to calculate a new rotation for six objects each frame, but the great majority of the work is done on the GPU which maps each vertex to a point on screen. Since a JavaScript reference is maintained to each point, it's possible to figure out which point a user is tapping when they tap the screen.
Each frame, this happens:
// animation loop
function update() {
  // note: three.js includes requestAnimationFrame shim
  requestAnimationFrame(update);

  // Move things around as need be:
  if (interactionHandler.frozen === false) {
    particleSystemsArray[0].rotation.z -= 0.00008;
    particleSystemsArray[1].rotation.z += 0.00002;
    particleSystemsArray[2].rotation.z += 0.00012;
    particleSystemsArray[3].rotation.z -= 0.00009;
    particleSystemsArray[4].rotation.z += 0.00016;
    particleSystemsArray[5].rotation.z -= 0.00005;
    sky.rotation.z += 0.00015; // rotate the background image too!
  }

  // The little tags that travel with stars need to have updated positions
  interactionHandler.getTagManager().updateActiveTagPositions();

  // draw. I'll explain this code later, but you can think of it
  // for now as renderer.render()
  _.each(Galaxy.Composers, function(composer){
    composer.render();
  });
}
Step 8: Three.js: a Different Brightness for Each Star: WebGL Shaders
Stars aren't uniform. They're different sizes, different brightnesses, and different colors. My trick with having six objects from before didn't seem to apply here: I really wanted each star to be unique, and to reflect the relative importance of the project. Basically, the projects that are the "brightest stars" should shine. Making each vertex unique is a perfect use for WebGL shaders. Recall the shader code from before:
<script type="x-shader/x-vertex" id="vertexshader">
  attribute float alpha;
  attribute float size;
  attribute vec3 ca;
  varying vec3 vColor;
  varying float vAlpha;
  void main() {
    vColor = ca;
    vAlpha = alpha;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_PointSize = size * (1.0 + 300.0 / length( mvPosition.xyz ));
    gl_Position = projectionMatrix * mvPosition;
  }
</script>

<script type="x-shader/x-fragment" id="fragmentshader">
  uniform vec3 color;
  uniform sampler2D texture;
  varying vec3 vColor;
  varying float vAlpha;
  void main() {
    gl_FragColor = vec4( vColor, vAlpha );
    gl_FragColor = vAlpha * texture2D( texture, gl_PointCoord );
  }
</script>
These shaders work hand-in-hand with this ThreeJS code:
Galaxy.Utilities.projectData = parsedData;
Galaxy.Datasource = parsedData;
var instructableIds = _.keys(parsedData);

// create geometries for each of the six rings, so the particle systems can move independently
var particleGeometries = [];
_.each(window.Galaxy.Settings.categories, function(){
  particleGeometries.push(new THREE.Geometry());
  attributes = {
    size: { type: 'f', value: [] },
    ca: { type: 'c', value: [] },
    alpha: { type: 'f', value: [] }
  };
  uniforms = {
    color: { type: "c", value: new THREE.Color( 0xffffff ) },
    texture: { type: "t", value: THREE.ImageUtils.loadTexture("images/particle4B.png") }
  };
  shaderMaterialsArray.push(new THREE.ShaderMaterial({
    uniforms: uniforms,
    attributes: attributes,
    vertexShader: document.getElementById( 'vertexshader' ).textContent,
    fragmentShader: document.getElementById( 'fragmentshader' ).textContent,
    blending: THREE.AdditiveBlending,
    depthTest: false,
    transparent: true
  }));
});

_.each(instructableIds, function(id){
  var pX = parsedData[id].x - window.Galaxy.Settings.width/2,
      pY = parsedData[id].y - window.Galaxy.Settings.height/2,
      pZ = Galaxy.Utilities.random(0, 10),
      particle = new THREE.Vector3(pX, pY, pZ);

  // add each particle to the correct geometry (ring) so it will end up in an associated particle system later
  var ring = indexForCategory(parsedData[id].category);
  if (ring !== -1) {
    particleGeometries[ring].vertices.push(particle);
    var appearance = Galaxy.Utilities.vertexAppearanceByViews(parsedData[id].views);
    shaderMaterialsArray[ring].attributes.size.value.push(appearance.size);
    shaderMaterialsArray[ring].attributes.ca.value.push(appearance.ca);
    shaderMaterialsArray[ring].attributes.alpha.value.push(appearance.alpha);

    // we need to keep references both directions. User clicks particle, we need to look up details by id.
    // Also, if we want to highlight related instructables, we'll need fast easy access to vertices with referenced ids.
    particle.instructableId = id;
    parsedData[id].particleMaterial = shaderMaterialsArray[ring];
    parsedData[id].vertexNumber = particleGeometries[ring].vertices.length - 1;
  }
});

// main scene, for all regular galaxy appearances
var scene = new THREE.Scene();
Galaxy.Scene = scene;

// create the particle systems
_.each(particleGeometries, function(particleGeometry, index){
  particleGeometry.applyMatrix( new THREE.Matrix4().makeTranslation( Math.random()*50, Math.random()*50, 0 ) );
  var system = new THREE.ParticleSystem( particleGeometry, shaderMaterialsArray[index] );
  particleSystemsArray.push(system);
  scene.add(system);
});
Here's what's happening in plain English:
1. I set up a THREE.ShaderMaterial for each THREE.ParticleSystem. Recall from the last step that there are six ParticleSystems, one for each category of Instructables, and that each ParticleSystem needs two things to be instantiated: a Geometry and a Material. (see ThreeJS: Getting Started above)
2. Each THREE.ShaderMaterial is essentially the same at this point: they're set up to use the fragment and vertex shaders loaded in the script tags above. They include uniforms and attributes passed in from the JavaScript. These are two of the three types of variables you can send to WebGL. As well explained on html5rocks.com:
- Uniforms don't change within a given frame. They are sent to both fragment and vertex shaders. In this case, the color and the texture image (the nice glowing star-like image) are the same for each star.
- Attributes apply to individual vertices. They are sent to vertex shaders only. In this case, the attributes that can vary by star are color, size, and alpha.
- Varyings allow the vertex shaders to pass values into the fragment shader.
3. With the empty THREE.ShaderMaterials defined, my next block steps over every Instructable that appears on screen. It passes the Instructable to a helper function that determines how the star should appear, based on the number of views for that Instructable:
vertexAppearanceByViews: function(viewcount){
    var prominence = (Math.pow(Math.log(viewcount),3))/(1000);
    return {
        size: prominence*9,
        ca: Galaxy.Utilities.whiteColor,
        alpha: Math.min(0.15 + 0.3*prominence, 0.9)
    }
}
In the final version of the code, I don't actually vary the color of the stars! Alpha does that for me, since each star appears on top of a colored background.
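To get a feel for that log-cubed curve, here's a standalone sketch of the same mapping (the `ca` value is a placeholder string here; in the real code it's `Galaxy.Utilities.whiteColor`, a color object):

```javascript
// Standalone sketch of the view-count → star-appearance mapping.
// "ca" is a plain string here for illustration; the real code uses
// Galaxy.Utilities.whiteColor.
function vertexAppearanceByViews(viewcount) {
  var prominence = Math.pow(Math.log(viewcount), 3) / 1000;
  return {
    size: prominence * 9,
    ca: 'white',
    alpha: Math.min(0.15 + 0.3 * prominence, 0.9)
  };
}

// The cube-of-log curve keeps mega-hit projects from drowning out the rest:
// 1,000,000 views is only 8x the size of 1,000 views, not 1000x,
// and alpha is clamped so nothing ever exceeds 0.9.
vertexAppearanceByViews(1000).alpha;    // ≈ 0.25
vertexAppearanceByViews(1000000).alpha; // → 0.9 (capped)
```

Because ln(1,000,000) is exactly twice ln(1,000), cubing the logarithm gives that tidy 8x size ratio.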
4. Inside this loop, each Particle is pushed into an array of vertices in the Geometry:
var particle = new THREE.Vector3(pX, pY, pZ);
...
particleGeometries[ring].vertices.push(particle);
and a corresponding set of "attributes" values is pushed into the corresponding ShaderMaterial:
var appearance = Galaxy.Utilities.vertexAppearanceByViews(parsedData[id].views);
shaderMaterialsArray[ring].attributes.size.value.push(appearance.size);
shaderMaterialsArray[ring].attributes.ca.value.push(appearance.ca);
shaderMaterialsArray[ring].attributes.alpha.value.push(appearance.alpha);
Finally, with an array of particle geometries and corresponding array of particle materials, the two are merged into a single array of ParticleSystems, and each ParticleSystem is added to the scene:
// for each item in the particleGeometries array:
var system = new THREE.ParticleSystem( particleGeometry, shaderMaterialsArray[index] );
...
scene.add(system);
Step 9: Three.js: Hit-Performance With 20,000 Points in 3D
ThreeJS has gotten us a long way. Instead of 20,000 flat, unmoving points, we now have 20,000 points in three-space, spinning around uneven centers, with the promise that we can animate the camera, too, and really explore inside the Galaxy.
But when you tap a star, how fast can ThreeJS locate it? Pretty fast, it turns out. As of this writing, ThreeJS doesn't support raycaster intersections for ParticleSystems (womp womp), but fortunately someone else has figured this out. There are lots of forks you can use, or you can just put the code in the library yourself and rebuild a custom copy. This is what you'd add to Raycaster.js (adapted from similar code I found all over, like here):
... } else if (object instanceof THREE.ParticleSystem) {
    // See: https://github.com/mrdoob/three.js/issues/3492
    var vertices = object.geometry.vertices;
    var point, distance, intersect, threshold = 3;
    var localMatrix = new THREE.Matrix4();
    var localtempRay = raycaster.ray.clone();
    var localOrigin = localtempRay.origin;
    var localDirection = localtempRay.direction;

    // transform the ray into the particle system's local coordinate space
    localMatrix.getInverse(object.matrixWorld);
    localOrigin.applyMatrix4(localMatrix);
    localDirection.transformDirection(localMatrix);

    for ( var i = 0; i < vertices.length; i ++ ) {
        point = vertices[ i ];
        distance = localtempRay.distanceToPoint(point);
        if ( distance > threshold ) {
            continue;
        }
        intersect = {
            distance: distance,
            point: point,
            face: null,
            object: object,
            vertex: i
        };
        intersects.push(intersect);
    }
} else if ...
Once the library is built appropriately (I'll leave this as an exercise for the reader), getting the intersection from a user's tap or mouse-click is pretty straightforward: it's a raycaster using the current camera's position, the scene, and the (x,y) coordinates of the user's click. Pseudocode:
- Get (x,y) of user's click
- Turn this into a THREE.Vector3()
- Un-project the vector based on the camera position
- Set up a raycaster with the camera position and unprojected vector
- Run ray.intersectObjects()
- Do something with the results
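The heart of the Raycaster patch above is the question "is this vertex within a few units of the pick ray?" Here's a minimal sketch of that distance test with plain objects (the real code uses THREE.Ray.distanceToPoint and first transforms the ray into the particle system's local space; the function name here is mine):

```javascript
// Distance from a ray (origin + normalized direction) to a point.
// Project the point onto the ray; clamp behind the origin; measure the gap.
function distanceRayToPoint(origin, dir, p) {
  var toP = { x: p.x - origin.x, y: p.y - origin.y, z: p.z - origin.z };
  var t = toP.x * dir.x + toP.y * dir.y + toP.z * dir.z; // projection length
  if (t < 0) t = 0; // point is behind the ray: nearest ray point is the origin
  var cx = toP.x - t * dir.x, cy = toP.y - t * dir.y, cz = toP.z - t * dir.z;
  return Math.sqrt(cx * cx + cy * cy + cz * cz);
}

var THRESHOLD = 3; // same tolerance as the Raycaster patch
var origin = { x: 0, y: 0, z: 0 }, dir = { x: 1, y: 0, z: 0 };
distanceRayToPoint(origin, dir, { x: 50, y: 2, z: 0 }) < THRESHOLD; // → true (hit)
distanceRayToPoint(origin, dir, { x: 50, y: 9, z: 0 }) < THRESHOLD; // → false (miss)
```

Every vertex within the threshold gets pushed as a candidate intersection; sorting by distance afterward picks the best one.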
The reality of the code for this project, with its multiple particle systems and the desire to find the closest intersection among hits from all of them, is somewhat more complex. Plus, as explained by Jens Arps, there's some mystery meat in turning a screen point into a 3D vector (you see this in my code too: new THREE.Vector3( ( e.clientX / Galaxy.Settings.width ) * 2 - 1, - ( e.clientY / Galaxy.Settings.height ) * 2 + 1, 0.5 ) ):
canvasClickEvent: function(e){
    e.preventDefault();
    e.stopPropagation();
    this.resetInteractionTimer(); // stops "auto mode" from resuming for 90s

    var vector = new THREE.Vector3( ( e.clientX / Galaxy.Settings.width ) * 2 - 1, - ( e.clientY / Galaxy.Settings.height ) * 2 + 1, 0.5 );
    var projector = new THREE.Projector();
    projector.unprojectVector( vector, this.camera );
    var ray = new THREE.Raycaster( this.camera.position, vector.sub( this.camera.position ).normalize() );

    // If there are already selected stars out in the field, ie, from an author constellation or related group,
    // we assume the user is trying to select one of those. However, if each of these systems contains
    // only a single vertex, that indicates the user may just be clicking around individually. So don't use pre-selected
    // stars for the intersection in that case.
    var intersectSystems = this.particleSystemsArray, that = this;
    if (!_.isUndefined(this.__glowingParticleSystems)) {
        _.each(this.__glowingParticleSystems,function(system){
            if (system.geometry.vertices.length !== 1) {
                // intersect with the glowing systems instead
                intersectSystems = that.__glowingParticleSystems;
            }
        });
    }

    // When the camera is very close to the star that's selected, distance is deceiving.
    // We basically need to adjust hit tolerance based on the distance to camera.
    // Calculate the distance camera --> star by converting star's position to world coords, then measuring
    // intersection.point = Vector3
    // intersection.object = ParticleSystem it's a part of
    var getCameraDistanceForHit = function(intersection){
        var intersectionVect = intersection.point.clone();
        intersectionVect = intersection.object.localToWorld(intersectionVect);
        return intersectionVect.distanceTo(that.camera.position.clone());
    };

    // intersects sorted by distance so the first item is the "best fit"
    var intersects = _.sortBy(ray.intersectObjects( intersectSystems, true ),function(intersection){
        return getCameraDistanceForHit(intersection) / intersection.distance;
    });

    // When a hit is too close to the camera for its hit tolerance, it doesn't count. Remove those values.
    intersects = _.filter(intersects, function(intersection){
        return getCameraDistanceForHit(intersection) / intersection.distance > 100;
    });

    if ( intersects.length > 0 ) {
        this.selectVertex(intersects[0])
    } else {
        // no intersections are within tolerance.
        this.reset({projectTagsAddAfterCameraReset: true});
    }
},
And it turns out to be pretty snappy!
Step 10: TweenMax: Easy Animation (Camera Motion)
TweenMax is dead simple. Put the library in your project, pass in an object with numerical values along with the destination values you want them to reach, give it a duration and any options you like (easing, callbacks, etc.), and watch your thing animate nicely. My main use for this is actually part of the next step (camera motions), but it removes all the pain of calculating the midpoints along an animation path. It also lets you stop, reverse, repeat, and so on. Reversing an animation that has already run halfway is not a small bit of code to write yourself.
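This is not TweenMax's implementation, just a sketch of the core idea it wraps: every frame, set each numeric property to start + (end - start) * eased(progress). All the names here are mine, for illustration:

```javascript
// Conceptual sketch of what a tween engine computes each frame.
// progress runs 0 → 1 over the duration; ease reshapes it (linear if omitted).
function tweenStep(startVals, endVals, progress, ease) {
  var eased = ease ? ease(progress) : progress;
  var out = {};
  for (var key in endVals) {
    out[key] = startVals[key] + (endVals[key] - startVals[key]) * eased;
  }
  return out;
}

// Halfway through a linear tween of a camera position:
tweenStep({ x: 0, y: 0, z: 100 }, { x: 10, y: 50, z: 0 }, 0.5);
// → { x: 5, y: 25, z: 50 }
```

TweenMax layers the hard parts on top of this: timing, easing families, and the ability to pause, reverse, or repeat mid-flight.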
For my camera motions, the animation was the easy part, even though it requires me to separately animate the camera position, target, and up vector. Here's a random animation:
TweenMax.to(upCurrent, duration/1.5, {x: upGoal.x, y: upGoal.y, z: upGoal.z});
TweenMax.to(targetCurrent, duration/1.5, {x: center.x, y: center.y, z: center.z});
TweenMax.to(positionCurrent, duration, {
    x: home.x, y: home.y, z: home.z,
    ease: Power1.easeInOut,
    onUpdate: function(){
        // every frame, update the camera with all of the current values IN THIS ORDER!
        that.target = new THREE.Vector3(targetCurrent.x,targetCurrent.y,targetCurrent.z);
        that.camera.position.set(positionCurrent.x,positionCurrent.y,positionCurrent.z);
        that.camera.up.set( upCurrent.x,upCurrent.y,upCurrent.z );
        that.camera.lookAt(that.target.clone());
        that.camera.updateProjectionMatrix();
    },
    onComplete: function(){
        // when the animation finishes
        that.endAnimation();
        that.firstClick = true;
        if (typeof callback === "function") callback();
    },
    onStart: that.startAnimation
})
This can look a little confusing since there are a lot of things happening, but it's simple. There's a variable holding each of these things:
- Camera position (the value that tweens)
- Camera position (the end point)
- Up vector (the value that tweens)
- Up vector (the destination orientation)
- Target (the value that tweens)
- Target (what we'd like to end up looking at)
In addition, in the same scope, it's important to .clone() the THREE.Vector3's representing start position, up, and target, so that the actual values aren't changed by calculations during the animation.
With the exception of the two different durations, all of this animation code could easily be turned into a single TweenMax() call simply by combining the objects into one. For clarity and modularity, I left it separate.
Step 11: Three.js: Camera Positioning
Camera control is probably the hardest part of working with ThreeJS. I strongly encourage everyone not to bother with this exercise in frustration. Pick one of the example generic camera controllers, insert the script on your page, and enjoy. If you do need to calculate custom camera parameters, you're probably best off learning about Quaternions, which I did not.
Going the plain-Jane trig route leaves you in a world of pain. You have essentially no useful tools to debug your math, and even once your math is right, the results can look wrong because of things like the "Euler order" (set camera.rotation.order = "YXZ" if you want the camera's controls to feel like pitch, roll, and yaw). The camera's rotations are in the camera's own coordinate system, so you always have to remember to give it instructions that way, or use the hacky "target" and "lookAt" strategy (which I did), which will invariably lead you to totally screwy orientations. That's when you start having to set the camera's "up" vector manually to keep it facing up. You also have to be on the lookout for gimbal lock (where you lose a degree of freedom because two of the camera's axes become parallel), and for the fact that tweening the *values* of the camera's rotation vectors may take you correctly from A to B in space, but along the wrong rotational direction. You may think it's natural to turn 180 degrees by turning your head sideways, but the simplest rotational path may well be the one that goes straight overhead instead. Blech!
I'm not going to take you through the solutions to all of those issues, and if you look closely you'll find that my solutions are still, in some places, a little rough. Instead, I'm just going to paste all of the camera motion code along with little notes here about what parts of it do. If you're really diving into a piece and you'd like more explanation, go ahead and post a comment so I can flesh out that part.
Camera Criteria: Highlights
- Putting a selected star in the center of the screen (zoomAndDollyToPoint)
- Putting a selected star in the center of the screen with the center of the galaxy in the background to prevent browsing off the edge of the universe (zoomToFitPointsFrom, CAMERA_RELATION.TOWARD_CENTER)
- Moving from one selected star to another without changing the camera's angle (strafeFromPointToPoint)
- Finding the bounding sphere for a group of stars, and then finding a camera position such that those stars would fit on the screen (zoomToFitPointsFrom, all CAMERA_RELATIONs)
- Put adjacent stars in the same cluster in comfortable screen positions relative to a single "currently selected" star in an author's constellation (showThreePointsNicely)
- Return to a "home" position (reset)
- Parameterizing a path for the camera so that it can slowly fly through the galaxy on its own when not attended (wait 90s without clicking to see the animation start) -- (beginAutomaticTravel and cameraSetupForParameter)
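One bit of math from the list above is worth isolating: the "fit a group of stars on screen" distance used by zoomToFitPointsFrom. Given the bounding sphere's radius and the camera's vertical field of view in degrees, this sketch (function name mine) computes how far back the camera must sit:

```javascript
// The distance math behind zoomToFitPointsFrom: to fit a bounding sphere of
// a given radius, back the camera off by radius / tan(fov/2).
// (Math.PI * fovDegrees / 360 is fov/2 converted to radians.)
function fitDistance(radius, fovDegrees) {
  return radius / Math.tan(Math.PI * fovDegrees / 360);
}

// With a 90-degree fov, tan(45°) = 1, so the distance equals the radius
// (within floating-point error):
fitDistance(30, 90); // ≈ 30
```

A narrower field of view gives a smaller tangent, so the camera has to retreat farther to frame the same cluster, which matches the targetDistance expression in the code below.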
The Code:
Galaxy.Settings = Galaxy.Settings || {};

Galaxy.CameraMotions = function(camera){
    _.bindAll(this,'zoomToFitPointsFrom','startAnimation','endAnimation','cameraSetupForParameter','beginAutomaticTravel');
    this.target = Galaxy.Settings.cameraDefaultTarget.clone();
    this.camera = camera;
    // delete this property eventually
    this.firstClick = true;
    this.isAnimating = false;
}

Galaxy.CameraMotions.prototype = {
    constructor: Galaxy.CameraMotions,

    startAnimation: function(){
        // startAnimation refers to user-initiated animations. The default animation must be removed if ongoing.
        this.endAutomaticTravel();
        this.isAnimating = true;
    },

    endAnimation: function(){ this.isAnimating = false; },

    zoomAndDollyToPoint: function(point,callback){
        if (this.isAnimating === true) return;
        // temporarily: the first click will zoom in, and we'll strafe after that.
        if (this.firstClick === false) {
            //this.strafeFromPointToPoint(this.target,point,callback);
            this.zoomToFitPointsFrom([point],this.CAMERA_RELATION.TOWARD_CENTER,callback);
            return;
        }
        this.firstClick = false;
        var that = this,
            pointClone = point.clone(),
            cameraPath = this.cameraPathToPoint(this.camera.position.clone(), point.clone()),
            currentPosition = {now: 0},
            duration = 1.3,
            upClone = Galaxy.Settings.cameraDefaultUp.clone(),
            targetCurrent = this.target.clone();
        TweenMax.to(targetCurrent,duration/1.5,{ x:pointClone.x, y:pointClone.y, z:pointClone.z });
        TweenMax.to(currentPosition,duration,{
            now: 0.8,
            onUpdate: function(){
                var pos = cameraPath.getPoint(currentPosition.now);
                that.target = new THREE.Vector3(targetCurrent.x,targetCurrent.y,targetCurrent.z);
                that.camera.position.set(pos.x,pos.y,pos.z);
                that.camera.up.set(upClone.x,upClone.y,upClone.z);
                that.camera.lookAt(that.target);
                that.camera.updateProjectionMatrix();
            },
            onStart: that.startAnimation,
            onComplete: function(){
                that.endAnimation();
                if (typeof callback === "function") callback();
            }
        });
    },

    cameraPathToPoint: function(fromPoint,toPoint){
        var spline = new THREE.SplineCurve3([
            fromPoint,
            new THREE.Vector3(
                (toPoint.x-fromPoint.x)*0.5 + fromPoint.x,
                (toPoint.y-fromPoint.y)*0.5 + fromPoint.y,
                (toPoint.z-fromPoint.z)*0.7 + fromPoint.z),
            toPoint
        ]);
        return spline;
    },

    strafeFromPointToPoint: function(fromPoint,toPoint,callback){
        var dest = toPoint.clone(),
            current = this.camera.position.clone(),
            duration = 0.5,
            that = this;
        dest.sub(fromPoint.clone());
        dest.add(current.clone());
        //console.log("\n\n",fromPoint,toPoint,current,dest);
        if (that.isAnimating === true) return;
        TweenMax.to(this.camera.position,duration,{
            x: dest.x, y: dest.y, z: dest.z,
            onComplete: function(){
                that.endAnimation();
                that.camera.lookAt(toPoint.clone());
                that.target = toPoint.clone();
                if (typeof callback === "function") callback();
            },
            onStart: that.startAnimation
        })
    },

    reset: function(callback){
        var duration = 2,
            that = this,
            home = Galaxy.Settings.cameraDefaultPosition.clone(),
            center = Galaxy.Settings.cameraDefaultTarget.clone(),
            upGoal = Galaxy.Settings.cameraDefaultUp.clone(),
            upCurrent = this.camera.up.clone(),
            targetCurrent = this.target.clone(),
            positionCurrent = this.camera.position.clone();
        // never do anything when nothing will suffice. The callback should have no delay.
        if (this.camera.up.equals(Galaxy.Settings.cameraDefaultUp) &&
            this.camera.position.equals(Galaxy.Settings.cameraDefaultPosition) &&
            this.target.equals(Galaxy.Settings.cameraDefaultTarget)) {
            duration = 0.1;
        }
        if (that.isAnimating === true) return;
        TweenMax.to(upCurrent,duration/1.5,{x: upGoal.x,y: upGoal.y,z: upGoal.z});
        TweenMax.to(targetCurrent,duration/1.5,{x: center.x,y: center.y, z: center.z});
        TweenMax.to(positionCurrent,duration,{
            x: home.x, y: home.y, z: home.z,
            ease: Power1.easeInOut,
            onUpdate: function(){
                that.target = new THREE.Vector3(targetCurrent.x,targetCurrent.y,targetCurrent.z);
                that.camera.position.set(positionCurrent.x,positionCurrent.y,positionCurrent.z);
                that.camera.up.set( upCurrent.x,upCurrent.y,upCurrent.z );
                that.camera.lookAt(that.target.clone());
                that.camera.updateProjectionMatrix();
            },
            onComplete: function(){
                that.endAnimation();
                that.firstClick = true;
                if (typeof callback === "function") callback();
            },
            onStart: that.startAnimation
        })
    },

    CAMERA_RELATION: { ABOVE: 0, SAME_ANGLE: 1, TOWARD_CENTER: 2 },

    zoomToFitPointsFrom: function(pointList,cameraRelation,callback) {
        if (!_.has(_.values(this.CAMERA_RELATION),cameraRelation)) {
            // console.log(_.values(this.CAMERA_RELATION));
            console.error(cameraRelation + " is not one of RELATIVE_LOCATION");
            return;
        }
        if (this.isAnimating === true) return;
        // pointList assumed to already be in world coordinates. Figure out bounding sphere, then move camera relative to its center
        var bSphere = new THREE.Sphere(new THREE.Vector3(0,0,0),5);
        bSphere.setFromPoints(pointList);
        // how far away do we need to be to fit this sphere?
        var targetDistance = (bSphere.radius / (Math.tan(Math.PI*this.camera.fov/360))),
            cameraPositionEnd,
            that = this,
            duration = 1,
            up = this.camera.up.clone(),
            currentCameraPosition = this.camera.position.clone();
        switch (cameraRelation) {
            case 0: // CAMERA_RELATION.ABOVE
                cameraPositionEnd = bSphere.center.clone().add(new THREE.Vector3(40,40,targetDistance));
                break;
            case 1: // CAMERA_RELATION.SAME_ANGLE dollies the camera in/out such that these points become visible
                var center = bSphere.center.clone(),
                    currentPos = that.camera.position.clone(),
                    finalViewAngle = currentPos.sub(center).setLength(targetDistance);
                cameraPositionEnd = bSphere.center.clone().add(finalViewAngle);
                // to prevent camera from going under the background plane:
                cameraPositionEnd.z = Math.max(cameraPositionEnd.z,40);
                break;
            case 2: // CAMERA_RELATION.TOWARD_CENTER draws a line from world origin through the bounding sphere's center point,
                    // and puts the camera at the end of a vector twice that length.
                cameraPositionEnd = bSphere.center.clone().multiplyScalar(2);
                if (cameraPositionEnd.length() < 125) cameraPositionEnd.setLength(125); // It's weird when the camera gets too close to stars in the middle
                break;
        }
        var cameraTargetCurrent = {x: this.target.x, y: this.target.y, z: this.target.z};
        var cameraTargetEnd = bSphere.center.clone();
        // that.logVec('up',that.camera.up.clone());
        // that.logVec('target',that.target.clone());
        // that.logVec('position',that.camera.position.clone());
        TweenMax.to(cameraTargetCurrent,duration/1.5,{x: cameraTargetEnd.x,y: cameraTargetEnd.y, z: cameraTargetEnd.z});
        // DO NOT change "up" for high angle. It gets screwy and spins the camera unpleasantly.
        if (cameraRelation !== 0) { TweenMax.to(up,duration/1.5,{x: 0,y: 0, z: 1}); }
        TweenMax.to(currentCameraPosition,duration,{
            x: cameraPositionEnd.x, y: cameraPositionEnd.y, z: cameraPositionEnd.z,
            onUpdate: function(){
                that.target = new THREE.Vector3(cameraTargetCurrent.x,cameraTargetCurrent.y,cameraTargetCurrent.z);
                that.camera.position.set(currentCameraPosition.x,currentCameraPosition.y,currentCameraPosition.z);
                that.camera.up.set( up.x,up.y,up.z );
                that.camera.lookAt(that.target.clone());
                that.camera.updateProjectionMatrix();
            },
            onComplete: function(){
                // that.logVec('up',that.camera.up.clone());
                // that.logVec('target',that.target.clone());
                // that.logVec('position',that.camera.position.clone());
                that.endAnimation();
                if (typeof callback === "function") callback();
            },
            onStart: that.startAnimation
        })
    },

    showThreePointsNicely: function(pointList, callback){
        // Find a camera location and rotation such that the first point appears towards the bottom of the screen, and
        // the other two appear up and to the left and right. Or so.
        if (this.isAnimating === true) return;
        this.firstClick = false;
        if (!_.isArray(pointList)) {
            throw new Error("Array of points required for showThreePointsNicely");
        } else if (pointList.length !== 3) {
            // just show the first one.
            this.zoomAndDollyToPoint(pointList[0],callback);
            return;
        }
        var pointZero = pointList[0].clone();
        // look at the world from the perspective of the star that will be centered:
        var viewFromPointZero = function(vector){
            return vector.clone().sub(pointZero.clone());
        };
        // The "bisect" vector is a central angle between the Left and Right stars that we're trying to make visible on screen, along with vector Zero
        var bisectLocal = viewFromPointZero(pointList[1]).add(viewFromPointZero(pointList[2])).multiplyScalar(0.5);
        // The linear path would be described as....
        var A = viewFromPointZero(pointList[1]);
        var B = viewFromPointZero(pointList[2]);
        var theta = Math.acos(A.clone().dot(B.clone()) / A.length() / B.length());
        var distanceAwayBasedOnAngle = Math.min(Math.max(theta*2.5,2),4);
        var cameraEndPosition = pointZero.clone().sub(bisectLocal.clone().multiplyScalar(distanceAwayBasedOnAngle));
        var cameraStartPosition = this.camera.position.clone();
        var cameraPathMidpoint = cameraEndPosition.clone().add(cameraStartPosition.clone()).multiplyScalar(0.5);
        // The circular path around the linear path's midpoint would be, then:
        var radius = cameraStartPosition.clone().sub(cameraPathMidpoint.clone()).length();
        var that = this;
        var worldPointOnCircularPath = function(t){
            var x = radius * Math.cos(t);
            var y = radius * Math.sin(t);
            var vectorPointLocal = new THREE.Vector3(x,y,0);
            return vectorPointLocal.add(cameraPathMidpoint.clone());
        };
        // backsolve the start angle for the circular path. It's the inverse of x=a+r*cos(theta) => theta = acos((x-a)/r);
        var startPointRelativeToMidpoint = cameraStartPosition.clone().sub(cameraPathMidpoint.clone());
        var startAngle = Math.atan(startPointRelativeToMidpoint.y/startPointRelativeToMidpoint.x);
        // Is this the start angle or the end angle? It's one or the other, but we need to know which.... The other will be this + PI
        if (worldPointOnCircularPath(startAngle).setZ(0).distanceTo(cameraStartPosition.clone().setZ(0)) > 100) {
            // gotta start halfway around instead. such is the world of inverse trig functions
            startAngle += Math.PI;
        }
        var parameters = { t: startAngle, z: cameraStartPosition.clone().z },
            duration = 2.0,
            up = that.camera.up.clone();
        var pointZeroClone = pointZero.clone();
        TweenMax.to(that.target,duration/1.5,{x: pointZeroClone.x,y: pointZeroClone.y,z: pointZeroClone.z});
        TweenMax.to(up,duration/1.5,{x: 0,y: 0, z: 1});
        TweenMax.to(parameters,duration,{
            t: startAngle - Math.PI,
            z: cameraEndPosition.clone().z,
            ease: Power1.easeOut,
            onUpdate: function(){
                var xyCurrent = worldPointOnCircularPath(parameters.t);
                that.camera.position.set(xyCurrent.x,xyCurrent.y,parameters.z);
                that.camera.up.set( up.x,up.y,up.z );
                that.camera.lookAt(that.target);
                that.camera.updateProjectionMatrix();
            },
            onComplete: function(){
                that.endAnimation();
                if (typeof callback === "function") callback();
            },
            onStart: that.startAnimation
        });
    },

    // The camera can also "travel" while in unattended mode. This behavior requires some code to parametrically define and then animate the complex path,
    // but it is somewhat different in kind from the user-initiated camera motions described above.
    beginAutomaticTravel: function(){
        // This function absolutely positively must begin from the camera home positions.
        // console.log('commencing automatic camera travel');
        var obj = {cameraParameter: Math.PI/2},
            that = this,
            loopConstants = this.loopConstants();
        this.reset(function(){
            that.__automaticCameraAnimation = TweenMax.to(obj, loopConstants.duration, {
                cameraParameter: 5*Math.PI/2,
                onUpdate: function(){
                    that.cameraSetupForParameter(obj.cameraParameter,loopConstants);
                },
                ease: null,
                repeat: -1 // loop infinitely
            });
        });
    },

    loopConstants: function(){
        var galaxyLoopStart = new THREE.Vector3(200,0,15),
            targetLoopStart = new THREE.Vector3(200,200,5),
            upLoopStart = new THREE.Vector3(0,0,1);
        return {
            duration: 400, // seconds
            galaxyLoopStart: galaxyLoopStart,
            targetLoopStart: targetLoopStart,
            upLoopStart: upLoopStart,
            galaxyLoopToHome: Galaxy.Settings.cameraDefaultPosition.clone().sub(galaxyLoopStart.clone()),
            targetLoopToHome: Galaxy.Settings.cameraDefaultTarget.clone().sub(targetLoopStart.clone()),
            upLoopToHome: Galaxy.Settings.cameraDefaultUp.clone().sub(upLoopStart.clone())
        }
    },

    cameraSetupForParameter: function(cameraParameter,loopConstants){
        var pos,
            lookAt = loopConstants.targetLoopStart.clone(),
            up = loopConstants.upLoopStart.clone();
        if (cameraParameter < 2*Math.PI && cameraParameter > Math.PI) {
            cameraParameter -= Math.PI;
            // go a full circle around the galaxy
            pos = new THREE.Vector3(200*Math.cos(cameraParameter),200*Math.sin(cameraParameter),15);
            var copy = pos.clone();
            lookAt = new THREE.Vector3(copy.x, copy.y + copy.x, 5);
        } else {
            // after going a full circle around the galaxy, animate all camera characteristics to the "home" position, then repeat from start
            var pathMultiplier = Math.sin(cameraParameter); // good from 0 to PI. Outside that range, this goes negative and looks haywire.
            pos = loopConstants.galaxyLoopStart.clone().add(loopConstants.galaxyLoopToHome.clone().multiplyScalar(pathMultiplier));
            lookAt = loopConstants.targetLoopStart.clone().add(loopConstants.targetLoopToHome.clone().multiplyScalar(pathMultiplier));
            up = loopConstants.upLoopStart.clone().add(loopConstants.upLoopToHome.clone().multiplyScalar(pathMultiplier));
        }
        this.camera.position.set(pos.x,pos.y,pos.z);
        this.camera.up.set(up.x,up.y,up.z);
        this.target = lookAt;
        this.camera.lookAt(lookAt);
        this.camera.updateProjectionMatrix();
    },

    endAutomaticTravel: function(){
        if (this.__automaticCameraAnimation) {
            this.__automaticCameraAnimation.kill();
        }
    },

    // DEBUGGING TOOLS
    logVec: function(message,vec){
        console.log(message + ": " + vec.x + " " + vec.y + " " + vec.z);
    },

    addTestCubeAtPosition: function(position){
        var cube = new THREE.Mesh( new THREE.CubeGeometry( 5, 5, 5 ), new THREE.MeshNormalMaterial() );
        cube.position = position.clone();
        Galaxy.Scene.add( cube );
    }
}
Step 12: Three.js: Post-processing & Effects: Dimming, Blurring
Check this out! (and the tutorial that goes with it)
Crazy effects are possible with postprocessing in ThreeJS, but I just needed something simple: to dim everything except for the constellation I wanted to show. Bright white lines and points on top of an otherwise dimmed scene.
Turns out, the 3D world isn't like KineticJS. I can't just add a layer on top and count on the browser rendering the alpha channel of that layer to dim the stuff behind it. Or rather I could, but I'd need a whole separate scene, canvas, and context, and it's not totally clear what happens when you have two webgl windows on top of each other. I don't believe it's trivial to make a semi-transparent webgl rendering context.
So the next best thing was to add postprocesses to my existing renderer. This is how, when I was talking about Particle Systems in Motion,
renderer.render() became
_.each(Galaxy.Composers,function(composer){ composer.render(); });
Prepare:
1. In the threejs repo, take a look at examples/js/shaders and examples/js/postprocessing
2. Choose some shaders. These are just WebGL code stored in .js files, but they're the same in principle as the shaders I went through when describing A Different Brightness for Each Star.
3. The postprocessing directory has a bunch of utilities, essentially, that let you layer effects on top of each other
Assemble (put the scripts on your page):
- THREE.EffectComposer
- THREE.RenderPass (you need a render pass so your effects apply to something)
- THREE.ShaderPass (or two or three -- these wrap your effects)
- THREE.CopyShader (in case you want to display without effects sometimes)
- THREE.AdditiveBlendShader (to blend other shaders)
- some shaders you like
The Code:
Galaxy.ComposeScene = function(options){
    var composers = [];
    var mainComposer = new THREE.EffectComposer( renderer, renderTarget );
    var renderPass = new THREE.RenderPass( scene, camera );
    mainComposer.addPass( renderPass );

    if (_.isObject(options) && options.blur === true) {
        var bluriness = 0.9;
        // Prepare the blur shader passes
        var hblur = new THREE.ShaderPass( THREE.HorizontalBlurShader );
        hblur.uniforms[ "h" ].value = bluriness / Galaxy.Settings.width;
        mainComposer.addPass(hblur);

        var vblur = new THREE.ShaderPass( THREE.VerticalBlurShader );
        vblur.uniforms[ "v" ].value = bluriness / Galaxy.Settings.height;
        mainComposer.addPass( vblur );

        var brightnessContrastPass = new THREE.ShaderPass( THREE.BrightnessContrastShader );
        brightnessContrastPass.uniforms[ "brightness" ].value = -0.3;
        brightnessContrastPass.uniforms[ "contrast" ].value = -0.2;
        mainComposer.addPass(brightnessContrastPass);
    } else {
        mainComposer.addPass( new THREE.ShaderPass( THREE.CopyShader ) );
    }

    var topComposer = new THREE.EffectComposer(renderer, renderTarget2);
    var topRenderPass = new THREE.RenderPass(topScene,camera);
    topComposer.addPass(topRenderPass);
    topComposer.addPass( new THREE.ShaderPass( THREE.CopyShader ) );

    ////////////////////////////////////////////////////////////////////////
    // final composer will blend composer2.render() results with the scene
    ////////////////////////////////////////////////////////////////////////
    var blendPass = new THREE.ShaderPass( THREE.AdditiveBlendShader );
    blendPass.uniforms[ 'tBase' ].value = mainComposer.renderTarget1;
    blendPass.uniforms[ 'tAdd' ].value = topComposer.renderTarget1;
    var blendComposer = new THREE.EffectComposer( renderer );
    blendComposer.addPass( blendPass );
    blendPass.renderToScreen = true;

    composers.push(mainComposer,topComposer,blendComposer);
    return composers;
};
Basically you'll be calling .render() on a composer now, instead of directly on a renderer. You can assemble pretty much any range of effects you like by first doing a render pass, adding it to the composer, then adding subsequent shaderpasses to the same composer. Eventually, you'd want to add a copyshader to gather everything together, set it to renderToScreen, and render. Boiled down, it might look like this:
var mainComposer = new THREE.EffectComposer( renderer, renderTarget );
var renderPass = new THREE.RenderPass( scene, camera );
mainComposer.addPass( renderPass );

// add a shader
var shaderpass = new THREE.ShaderPass( THREE.SomeShader );
... (do some settings for the shader here)
mainComposer.addPass(shaderpass);

// add a copy, so webgl knows what to render:
var copypass = new THREE.ShaderPass( THREE.CopyShader );
copypass.renderToScreen = true;
mainComposer.addPass(copypass);

// render
mainComposer.render();
But in the case of what I've done here, there are actually two separate composers that render two separate sets of objects in the scene. When a constellation is active, its stars and connecting lines are rendered without blur, and at full brightness. The background stars, however, are darkened and blurred. So to get the effects applied to one set of things and not the other, you need a whole separate scene with cloned objects (topScene), and a whole separate composer stack.
This introduces a slight complication: the two results have to be blended together as a last step. I could simply render the topScene on top, but it would obscure the bottom scene. So the final thing that gets rendered in my case is an additive blend of the two composers' results:
var blendPass = new THREE.ShaderPass( THREE.AdditiveBlendShader );
blendPass.uniforms[ 'tBase' ].value = mainComposer.renderTarget1;
blendPass.uniforms[ 'tAdd' ].value = topComposer.renderTarget1;

var blendComposer = new THREE.EffectComposer( renderer );
blendComposer.addPass( blendPass );
blendPass.renderToScreen = true;

The final bit: you have to render all three composers each frame, or their contents won't update:
// in the ComposeScene function
// (note: Array.push returns the new length, not the array,
// so push first and return the array itself)
composers.push( mainComposer, topComposer, blendComposer );
return composers;

...

// results of ComposeScene are stored each time the user enters/exits a constellation
Galaxy.Composers = Galaxy.ComposeScene();

...

// in the render loop, each composer is rendered
_.each( Galaxy.Composers, function (composer) {
  composer.render();
});
That's all there is to it!
Step 13: Other Libraries (Special Thanks)
There are many open-source libraries that help to make this project possible, even though they aren't central to its mission. I've found them all useful, and I recommend checking them out if they look enticing!
- For the loading indicator when running on the web, Pace.js is awesome and easy to implement
- Animation made easy with TweenMax
- Parsing and displaying dates: Moment.js
- Of the many jQuery-based on-screen keyboards I came across, Chris Cook's was by far the best
- This project and many others owe a debt to jQuery, Backbone, and Underscore
- I love tinyColor.js. It's best-in-class for dealing with colors in JavaScript
- For CSS-based UI elements, Bootstrap cuts time here and there
- Touch-enabled scrolling in a web environment would be a hassle without Overscroll.js
- Cycle2 is one in a crowded field of image sliders, but it's simple and effective
- Mentioned already, but a stupid-simple approach to generating normally-distributed random numbers deserves a shout-out
- The world may love require, but I still love head.js
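Since the normal-distribution trick came up again, here's a quick illustration of one stupid-simple way to do it: the classic Box-Muller transform. This is a sketch of the general technique, not the project's actual code, and the function name randNormal is mine:

```javascript
// Box-Muller transform: turn two uniform random numbers into one
// normally-distributed number with the given mean and standard deviation.
// (Illustrative sketch; randNormal is a hypothetical name, not from the project.)
function randNormal(mean, stdDev) {
  mean = mean || 0;
  stdDev = stdDev === undefined ? 1 : stdDev;
  var u1 = 1 - Math.random(); // shift to (0, 1] so Math.log(u1) is never -Infinity
  var u2 = Math.random();
  var z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return mean + stdDev * z;
}
```

Something like this is handy for scattering stars around a cluster center, e.g. `var x = clusterX + randNormal(0, 40);` for a cluster with a spread of about 40 pixels.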
Thanks for reading!