Every three.js project needs at least one HTML file to define the webpage, and a JavaScript file to run your three.js code. The structure and naming choices below aren't required, but will be used throughout this guide for consistency.
index.html —
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My first three.js app</title>
    <style>
      body { margin: 0; }
    </style>
  </head>
  <body>
    <script type="module" src="/main.js"></script>
  </body>
</html>
main.js —
import * as THREE from 'three';
...
public/
The public/ folder is sometimes also called a "static" folder, because the files it contains are pushed to the website unchanged. Usually textures, audio, and 3D models will go here.
Now that we've set up the basic project structure, we need a way to run the project locally and access it through a web browser. Installation and local development can be accomplished with npm and a build tool, or by importing three.js from a CDN. Both options are explained in the sections below.
Option 1: Install with NPM and a build tool
Development
Installing from the npm package registry and using a build tool is the recommended approach for most users — the more dependencies your project needs, the more likely you are to run into problems that static hosting cannot easily resolve. With a build tool, importing local JavaScript files and npm packages should work out of the box, without import maps.
Install Node.js. We'll need it to manage dependencies and to run our build tool.
Install three.js and a build tool, Vite, using a terminal in your project folder. Vite will be used during development, but it isn't part of the final webpage. If you prefer to use another build tool, that's fine — we support modern build tools that can import ES Modules.
# three.js
npm install --save three
# vite
npm install --save-dev vite
From your terminal, run:
npx vite
If everything went well, you'll see a URL like http://localhost:5173 appear in your terminal, and can open that URL to see your web application.
The page will be blank — you're ready to create a scene.
If you want to learn more about these tools before you continue, see:
Later, when you're ready to deploy your web application, you'll just need to tell Vite to run a production build — npx vite build. Everything used by the application will be compiled, optimized, and copied into the dist/ folder. The contents of that folder are ready to be hosted on your website.
Option 2: Import from a CDN
Development
Installing without build tools will require some changes to the project structure given above.
We imported code from 'three' (an npm package) in main.js, and web browsers don't know what that means. In index.html we'll need to add an import map defining where to get the package. Put the code below inside the <head></head> tag, after the styles.
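A sketch of such an import map is shown below. The jsdelivr URLs are one example of a CDN that hosts three.js builds; other CDNs work the same way, as long as both entries point at the same version and the same host.

```html
<script type="importmap">
  {
    "imports": {
      "three": "https://cdn.jsdelivr.net/npm/three@<version>/build/three.module.js",
      "three/addons/": "https://cdn.jsdelivr.net/npm/three@<version>/examples/jsm/"
    }
  }
</script>
```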
Don't forget to replace <version> with an actual version of three.js, like "v0.149.0". The most recent version can be found on the npm version list.
We'll also need to run a local server to host these files at a URL the web browser can access. While it's technically possible to double-click an HTML file and open it in your browser, important features that we'll implement later do not work when the page is opened this way, for security reasons.
Install Node.js, then run serve to start a local server in the project's directory:
npx serve .
If everything went well, you'll see a URL like http://localhost:3000 appear in your terminal, and can open that URL to see your web application.
The page will be blank — you're ready to create a scene.
Many other local static servers are available — some use different languages instead of Node.js, and others are desktop applications. They all work basically the same way, and we've provided a few alternatives below.
More local servers
Command Line
Command line local servers run from a terminal window. The associated programming language may need to be installed first.
npx http-server (Node.js)
npx five-server (Node.js)
python -m SimpleHTTPServer (Python 2.x)
python -m http.server (Python 3.x)
php -S localhost:8000 (PHP 5.4+)
GUI
GUI local servers run as an application window on your computer, and may have a user interface.
When you're ready to deploy your web application, push the source files to your web hosting provider — no need to build or compile anything. The downside of that tradeoff is that you'll need to be careful to keep the import map updated with any dependencies (and dependencies of dependencies!) that your application requires. If the CDN hosting your dependencies goes down temporarily, your website will stop working too.
IMPORTANT: Import all dependencies from the same version of three.js, and from the same CDN. Mixing files from different sources may cause duplicate code to be included, or even break the application in unexpected ways.
Addons
Out of the box, three.js includes the fundamentals of a 3D engine. Other three.js components — such as controls, loaders, and post-processing effects — are part of the addons/ directory. Addons do not need to be installed separately, but do need to be imported separately.
The example below shows how to import three.js with the OrbitControls and GLTFLoader addons. Where necessary, this will also be mentioned in each addon's documentation or examples.
import * as THREE from 'three';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const controls = new OrbitControls( camera, renderer.domElement );

const loader = new GLTFLoader();
Some excellent third-party projects are available for three.js, too. These need to be installed separately — see Libraries and Plugins.
The goal of this section is to give a brief introduction to three.js. We will start by setting up a scene, with a spinning cube. A working example is provided at the bottom of the page in case you get stuck and need help.
Before we start
If you haven't yet, go through the Installation guide. We'll assume you've already set up the same project structure (including index.html and main.js), have installed three.js, and are either running a build tool, or using a local server with a CDN and import maps.
Creating the scene
To actually be able to display anything with three.js, we need three things: a scene, a camera, and a renderer, so that we can render the scene with the camera.
main.js —
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );

const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
Let's take a moment to explain what's going on here. We have now set up the scene, our camera and the renderer.
There are a few different cameras in three.js. For now, let's use a PerspectiveCamera.
The first attribute is the field of view. FOV is the extent of the scene that is seen on the display at any given moment. The value is in degrees.
The second one is the aspect ratio. You almost always want to use the width of the element divided by the height, or you'll get the same result as when you play old movies on a widescreen TV - the image looks squished.
The next two attributes are the near and far clipping planes. Objects further away from the camera than the value of far, or closer than near, won't be rendered. You don't have to worry about this now, but you may want to use other values in your apps to get better performance.
Next up is the renderer. In addition to creating the renderer instance, we also need to set the size at which we want it to render our app. It's a good idea to use the width and height of the area we want to fill with our app - in this case, the width and height of the browser window. For performance intensive apps, you can also give setSize smaller values, like window.innerWidth/2 and window.innerHeight/2, which will make the app render at quarter size.
If you wish to keep the size of your app but render it at a lower resolution, you can do so by calling setSize with false as updateStyle (the third argument). For example, setSize( window.innerWidth / 2, window.innerHeight / 2, false ) will render your app at half resolution, given that your <canvas> has 100% width and height.
Last but not least, we add the renderer element to our HTML document. This is a <canvas> element the renderer uses to display the scene to us.
"That's all good, but where's that cube you promised?" Let's add it now.
const geometry = new THREE.BoxGeometry( 1, 1, 1 );
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
const cube = new THREE.Mesh( geometry, material );
scene.add( cube );

camera.position.z = 5;
To create a cube, we need a BoxGeometry. This is an object that contains all the points (vertices) and fill (faces) of the cube. We'll explore this more in the future.
In addition to the geometry, we need a material to color it. Three.js comes with several materials, but we'll stick to the MeshBasicMaterial for now. All materials take an object of properties which will be applied to them. To keep things very simple, we only supply a color attribute of 0x00ff00, which is green. This works the same way that colors work in CSS or Photoshop (hex colors).
The third thing we need is a Mesh. A mesh is an object that takes a geometry, and applies a material to it, which we then can insert to our scene, and move freely around.
By default, when we call scene.add(), the thing we add will be added to the coordinates (0,0,0). This would cause both the camera and the cube to be inside each other. To avoid this, we simply move the camera out a bit.
Rendering the scene
If you copied the code from above into the main.js file we created earlier, you wouldn't be able to see anything. This is because we're not actually rendering anything yet. For that, we need what's called a render or animation loop.
function animate() {

  renderer.render( scene, camera );

}
renderer.setAnimationLoop( animate );
This will create a loop that causes the renderer to draw the scene every time the screen is refreshed (on a typical screen this means 60 times per second). If you're new to writing games in the browser, you might say "why don't we just create a setInterval ?" The thing is - we could, but requestAnimationFrame which is internally used in WebGLRenderer has a number of advantages. Perhaps the most important one is that it pauses when the user navigates to another browser tab, hence not wasting their precious processing power and battery life.
Animating the cube
If you insert all the code above into the file you created before we began, you should see a green box. Let's make it all a little more interesting by rotating it.
Add the following code right above the renderer.render call in your animate function:
cube.rotation.x += 0.01;
cube.rotation.y += 0.01;
This will be run every frame (normally 60 times per second), and give the cube a nice rotation animation. Basically, anything you want to move or change while the app is running has to go through the animation loop. You can of course call other functions from there, so that you don't end up with an animate function that's hundreds of lines.
The result
Congratulations! You have now completed your first three.js application. It's simple, but you have to start somewhere.
The full code is available below and as an editable live example. Play around with it to get a better understanding of how it works.
index.html —
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My first three.js app</title>
    <style>
      body { margin: 0; }
    </style>
  </head>
  <body>
    <script type="module" src="/main.js"></script>
  </body>
</html>
main.js —
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );

const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
renderer.setAnimationLoop( animate );
document.body.appendChild( renderer.domElement );

const geometry = new THREE.BoxGeometry( 1, 1, 1 );
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
const cube = new THREE.Mesh( geometry, material );
scene.add( cube );

camera.position.z = 5;

function animate() {

  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;

  renderer.render( scene, camera );

}
WebGL compatibility check
Even though this is becoming less and less of a problem, some devices or browsers may still not support WebGL 2.
The following method allows you to check if it is supported and display a message to the user if it is not.
Import the WebGL support detection module, and run the following before attempting to render anything.
import WebGL from 'three/addons/capabilities/WebGL.js';

if ( WebGL.isWebGL2Available() ) {

  // Initiate function or other initializations here
  animate();

} else {

  const warning = WebGL.getWebGL2ErrorMessage();
  document.getElementById( 'container' ).appendChild( warning );

}
Drawing lines
Let's say you want to draw a line or a circle, not a wireframe Mesh.
First we need to set up the renderer, scene and camera (see the Creating a scene page).
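A minimal sketch of drawing a line, assuming a scene, camera and renderer already exist as set up in that page: build a LineBasicMaterial and a BufferGeometry whose vertices come from an array of Vector3 points.

```javascript
import * as THREE from 'three';

// assumes `scene` is an existing THREE.Scene, as in "Creating a scene"
const material = new THREE.LineBasicMaterial( { color: 0x0000ff } );

// geometry from a list of points; consecutive points are connected
const points = [];
points.push( new THREE.Vector3( - 10, 0, 0 ) );
points.push( new THREE.Vector3( 0, 10, 0 ) );
points.push( new THREE.Vector3( 10, 0, 0 ) );

const geometry = new THREE.BufferGeometry().setFromPoints( points );

const line = new THREE.Line( geometry, material );
scene.add( line );
```

Lines are drawn between each consecutive pair of vertices (not between the first and last), so the three points above produce two connected segments.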
2. Use CSS2DRenderer or CSS3DRenderer
Use these renderers to draw high-quality text contained in DOM elements to your three.js scene.
This is similar to option 1, except that with these renderers elements can be integrated more tightly and dynamically into the scene.
3. Draw text to a canvas and use it as a Texture
Use this method if you wish to draw text easily on a plane in your three.js scene.
4. Create a model in your favourite 3D application and export to three.js
Use this method if you prefer working with your 3D application and importing the models to three.js.
5. Procedural Text Geometry
If you prefer to work purely in three.js or to create procedural and dynamic 3D
text geometries, you can create a mesh whose geometry is an instance of THREE.TextGeometry:
new THREE.TextGeometry( text, parameters );
In order for this to work, however, your TextGeometry will need an instance of THREE.Font
to be set on its "font" parameter.
See the TextGeometry page for more info on how this can be done, descriptions of each
accepted parameter, and a list of the JSON fonts that come with the three.js distribution itself.
BMFonts (bitmap fonts) allow batching glyphs into a single BufferGeometry. BMFont rendering
supports word-wrapping, letter spacing, kerning, signed distance fields with standard
derivatives, multi-channel signed distance fields, multi-texture fonts, and more.
See three-mesh-ui or three-bmfont-text.
Stock fonts are available in projects like
A-Frame Fonts, or you can create your own
from any .TTF font, optimizing to include only characters required for a project.
The troika-three-text package renders
quality antialiased text using a similar technique as BMFonts, but works directly with any .TTF
or .WOFF font file so you don't have to pregenerate a glyph texture offline. It also adds
capabilities including:
Effects like strokes, drop shadows, and curvature
The ability to apply any three.js Material, even a custom ShaderMaterial
Support for font ligatures, scripts with joined letters, and right-to-left/bidirectional layout
Optimization for large amounts of dynamic text, performing most work off the main thread in a web worker
Loading 3D models
3D models are available in hundreds of file formats, each with different
purposes, assorted features, and varying complexity. Although
three.js provides many loaders, choosing the right format and
workflow will save time and frustration later on. Some formats are
difficult to work with, inefficient for realtime experiences, or simply not
fully supported at this time.
This guide provides a workflow recommended for most users, and suggestions
for what to try if things don't go as expected.
Before we start
If you're new to running a local server, begin with
installation
first. Many common errors viewing 3D models can be avoided by hosting files
correctly.
Recommended workflow
Where possible, we recommend using glTF (GL Transmission Format). Both
.GLB and .GLTF versions of the format are
well supported. Because glTF is focused on runtime asset delivery, it is
compact to transmit and fast to load. Features include meshes, materials,
textures, skins, skeletons, morph targets, animations, lights, and
cameras.
Public-domain glTF files are available on sites like
Sketchfab, and various tools include glTF export:
Once you've imported a loader, you're ready to add a model to your scene. Syntax varies among
different loaders — when using another format, check the examples and documentation for that
loader. For glTF, usage with global scripts would be:
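As a sketch of glTF usage with the ES module import style from the Installation guide (the model path below is hypothetical — substitute your own file):

```javascript
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();

// 'models/example.glb' is a placeholder path — point it at your own model
loader.load( 'models/example.glb', function ( gltf ) {

  // the loaded scene graph is added under the existing `scene` object
  scene.add( gltf.scene );

}, undefined, function ( error ) {

  // always log load errors — see the troubleshooting steps below
  console.error( error );

} );
```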
You've spent hours modeling an artisanal masterpiece, you load it into
the webpage, and — oh no! 😭 It's distorted, miscolored, or missing entirely.
Start with these troubleshooting steps:
Check the JavaScript console for errors, and make sure you've used an
onError callback when calling .load() to log the result.
View the model in another application. For glTF, drag-and-drop viewers
are available for
three.js and
babylon.js. If the model
appears correctly in one or more applications,
file a bug against three.js.
If the model cannot be shown in any application, we strongly encourage
filing a bug with the application used to create the model.
Try scaling the model up or down by a factor of 1000. Many models are
scaled differently, and large models may not appear if the camera is
inside the model.
Try to add and position a light source. The model may be hidden in the dark.
Look for failed texture requests in the network tab, like
"C:\Path\To\Model\texture.jpg". Use paths relative to your
model instead, such as images/texture.jpg — this may require
editing the model file in a text editor.
Asking for help
If you've gone through the troubleshooting process above and your model
still isn't working, the right approach to asking for help will get you to
a solution faster. Post a question on the
three.js forum and, whenever possible,
include your model (or a simpler model with the same problem) in any formats
you have available. Include enough information for someone else to reproduce
the issue quickly — ideally, a live demo.
Libraries and Plugins
Listed here are externally developed libraries and plugins compatible with three.js. This
list and the associated packages are maintained by the community and are not guaranteed
to be up to date. If you'd like to update this list, make a PR!
tresjs - Vue components for 3D graphics built on Three.
Giro3D - Versatile framework built on Three for visualizing and interacting with Geospatial 2D, 2.5D and 3D data.
FAQ
Which 3D model format is best supported?
The recommended format for importing and exporting assets is glTF (GL Transmission Format). Because glTF is focused on runtime asset delivery, it is compact to transmit and fast to load.
three.js provides loaders for many other popular formats like FBX, Collada or OBJ as well. Nevertheless, you should always try to establish a glTF based workflow in your projects first. For more information, see loading 3D models.
We want all objects, regardless of their distance from the camera, to appear the same size, even as the window is resized.
The key to solving this is the following formula for the visible height at a given distance:
visible_height = 2 * Math.tan( ( Math.PI / 180 ) * camera.fov / 2 ) * distance_from_camera;
If we increase the window height by a certain percentage, then what we want is the visible height at all distances
to increase by the same percentage.
This cannot be done by changing the camera position. Instead, you have to change the camera's field of view.
Example.
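The reasoning above can be sketched in plain JavaScript. Note that visibleHeightAtDistance and fovForVisibleHeight are helper names introduced here for illustration, not three.js API:

```javascript
// Visible frustum height at a given distance, from the formula above.
function visibleHeightAtDistance( fovDegrees, distance ) {
  return 2 * Math.tan( ( Math.PI / 180 ) * fovDegrees / 2 ) * distance;
}

// Inverse: the vertical fov (in degrees) that yields a given visible height.
function fovForVisibleHeight( visibleHeight, distance ) {
  return ( 180 / Math.PI ) * 2 * Math.atan( visibleHeight / ( 2 * distance ) );
}

// If the window height grows by 20%, scale the visible height by the same
// factor and solve for the new fov instead of moving the camera.
const oldFov = 75;
const distance = 10;
const oldHeight = visibleHeightAtDistance( oldFov, distance );
const newFov = fovForVisibleHeight( oldHeight * 1.2, distance );
```

With a real camera you would then assign the result to camera.fov and call camera.updateProjectionMatrix() so the change takes effect.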
Why is part of my object invisible?
This could be because of face culling. Faces have an orientation that decides which side is which, and culling removes the back side under normal circumstances. To see if this is your problem, change the material's side to THREE.DoubleSide:
material.side = THREE.DoubleSide;
Why does three.js sometimes return strange results for invalid inputs?
For performance reasons, three.js doesn't validate inputs in most cases. It's your app's responsibility to make sure that all inputs are valid.
Can I use three.js in Node.js?
Because three.js is built for the web, it depends on browser and DOM APIs that don't always exist in Node.js. Some of these issues can be avoided by using shims like headless-gl and jsdom-global, or by replacing components like TextureLoader with custom alternatives. Other DOM APIs may be deeply intertwined with the code that uses them, and will be harder to work around. We welcome simple and maintainable pull requests to improve Node.js support, but recommend opening an issue to discuss your improvements first.
Useful links
The following is a collection of links that you might find useful when learning three.js.
If you find something that you'd like to add here, or think that one of the links below is no longer
relevant or working, feel free to click the 'edit' button in the bottom right and make some changes!
Note also that as three.js is under rapid development, a lot of these links will contain information that is
out of date - if something isn't working as you'd expect or as one of these links says it should,
check the browser console for warnings or errors. Also check the relevant docs pages.
Help forums
Three.js officially uses the forum and Stack Overflow for help requests.
If you need assistance with something, that's the place to go. Do NOT open an issue on GitHub for help requests.
Three.js Bookshelf - Looking for more resources about three.js or computer graphics in general?
Check out the selection of literature recommended by the community.
Official three.js examples - these examples are
maintained as part of the three.js repository, and always use the latest version of three.js.
Official three.js dev branch examples —
same as the above, except these use the dev branch of three.js, and are used to check that
everything is working as three.js is being developed.
Tools
physgl.org - JavaScript front-end with wrappers to three.js, to bring WebGL
graphics to students learning physics and math.
Whitestorm.js – Modular three.js framework with AmmoNext physics plugin.
webgl-reference-card.pdf - Reference of all WebGL and GLSL keywords, terminology, syntax and definitions.
Old Links
These links are kept for historical purposes - you may still find them useful, but be warned that
they may have information relating to very old versions of three.js.
All objects by default automatically update their matrices if they have been added to the scene with
const object = new THREE.Object3D();
scene.add( object );
or if they are the child of another object that has been added to the scene:
const object1 = new THREE.Object3D();
const object2 = new THREE.Object3D();

object1.add( object2 );
scene.add( object1 ); // object1 and object2 will automatically update their matrices
However, if you know the object will be static, you can disable this and update the transform matrix manually just when needed.
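A minimal sketch of that opt-out, using the Object3D properties matrixAutoUpdate and updateMatrix() (object here is any Object3D you have already created):

```javascript
// disable automatic per-frame matrix updates for a static object
object.matrixAutoUpdate = false;

// after changing position, rotation, or scale, rebuild the matrix by hand
object.updateMatrix();
```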
BufferGeometries store information (such as vertex positions, face indices, normals, colors,
UVs, and any custom attributes) in buffers - that is,
typed arrays.
This makes them generally faster than standard Geometries, at the cost of being somewhat harder to
work with.
With regards to updating BufferGeometries, the most important thing to understand is that
you cannot resize buffers (this is very costly, basically the equivalent to creating a new geometry).
You can however update the content of buffers.
This means that if you know an attribute of your BufferGeometry will grow, say the number of vertices,
you must pre-allocate a buffer large enough to hold any new vertices that may be created. Of
course, this also means that there will be a maximum size for your BufferGeometry - there is
no way to create a BufferGeometry that can efficiently be extended indefinitely.
We'll use the example of a line that gets extended at render time. We'll allocate space
in the buffer for 500 vertices but draw only two at first, using BufferGeometry.drawRange.
const MAX_POINTS = 500;

// geometry
const geometry = new THREE.BufferGeometry();

// attributes
const positions = new Float32Array( MAX_POINTS * 3 ); // 3 floats (x, y and z) per point
geometry.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );

// draw range
const drawCount = 2; // draw the first 2 points, only
geometry.setDrawRange( 0, drawCount );

// material
const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// line
const line = new THREE.Line( geometry, material );
scene.add( line );
scene.add( line );
Next we'll randomly add points to the line using a pattern like:
const positionAttribute = line.geometry.getAttribute( 'position' );

let x = 0, y = 0, z = 0;

for ( let i = 0; i < positionAttribute.count; i ++ ) {

  positionAttribute.setXYZ( i, x, y, z );

  x += ( Math.random() - 0.5 ) * 30;
  y += ( Math.random() - 0.5 ) * 30;
  z += ( Math.random() - 0.5 ) * 30;

}
If you want to change the number of points rendered after the first render, do this:
line.geometry.setDrawRange(0, newValue );
If you want to change the position data values after the first render, you need to
set the needsUpdate flag like so:
positionAttribute.needsUpdate =true;// required after the first render
If you change the position data values after the initial render, you may need to recompute
bounding volumes so other features of the engine like view frustum culling or helpers properly work.
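For the line above, that recomputation is a one-time call to the two BufferGeometry methods after updating the position data:

```javascript
// recompute bounding volumes after changing position data,
// so frustum culling and helpers keep working correctly
line.geometry.computeBoundingBox();
line.geometry.computeBoundingSphere();
```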
All uniform values can be changed freely (e.g. colors, textures, opacity, etc.); values are sent to the shader every frame.
GL-state-related parameters can also change at any time (depthTest, blending, polygonOffset, etc.).
The following properties can't be easily changed at runtime (once the material is rendered at least once):
numbers and types of uniforms
presence or not of
texture
fog
vertex colors
morphing
shadow map
alpha test
transparent
Changes in these require building a new shader program. You'll need to set:
material.needsUpdate = true;
Bear in mind this might be quite slow and induce jerkiness in framerate (especially on Windows, as shader compilation is slower in DirectX than OpenGL).
For smoother experience you can emulate changes in these features to some degree by having "dummy" values like zero intensity lights, white textures, or zero density fog.
You can freely change the material used for geometry chunks, however you cannot change how an object is divided into chunks (according to face materials).
If you need to have different configurations of materials during runtime:
If the number of materials / chunks is small, you could pre-divide the object beforehand (e.g. hair / face / body / upper clothes / trousers for a human, front / sides / top / glass / tire / interior for a car).
If the number is large (e.g. each face could be potentially different), consider a different solution, such as using attributes / textures to drive different per-face look.
InstancedMesh is a class for conveniently accessing instanced rendering in three.js. Certain library features like view frustum culling or
ray casting rely on up-to-date bounding volumes (bounding sphere and bounding box). Because of the way InstancedMesh works, the class
has its own boundingBox and boundingSphere properties that supersede
the bounding volumes on the geometry level.
Similar to geometries, you have to recompute the bounding box and sphere whenever you change the underlying data. In the context of InstancedMesh, that
happens when you transform instances via setMatrixAt(). You can use the same pattern as with geometries.
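A sketch of that pattern, assuming mesh is an existing InstancedMesh and matrix a THREE.Matrix4 holding the new transform for instance index:

```javascript
// write the new per-instance transform
mesh.setMatrixAt( index, matrix );
mesh.instanceMatrix.needsUpdate = true;

// recompute bounding volumes so culling and ray casting stay correct
mesh.computeBoundingBox();
mesh.computeBoundingSphere();
```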
SkinnedMesh follows the same principles as InstancedMesh with respect to bounding volumes, meaning the class has its own version of
boundingBox and boundingSphere to correctly enclose animated meshes.
When calling computeBoundingBox() and computeBoundingSphere(), the class computes the respective bounding volumes based on the current
bone transformation (in other words, the current animation state).
How to dispose of objects
An important part of improving performance and avoiding memory leaks in your application is the disposal of unused library entities.
Whenever you create an instance of a three.js type, you allocate a certain amount of memory. However, for specific objects
like geometries or materials, three.js also creates WebGL-related entities like buffers or shader programs, which are necessary for rendering. It's important to
highlight that these objects are not released automatically. Instead, the application has to use a special API in order to free such resources.
This guide provides a brief overview about how this API is used and what objects are relevant in this context.
Geometries
A geometry usually represents vertex information defined as a collection of attributes. three.js internally creates an object of type WebGLBuffer
for each attribute. These entities are only deleted if you call BufferGeometry.dispose(). If a geometry becomes obsolete in your application,
execute the method to free all related resources.
Materials
A material defines how objects are rendered. three.js uses the information of a material definition in order to construct a shader program for rendering.
Shader programs can only be deleted if the respective material is disposed. For performance reasons, three.js tries to reuse existing
shader programs if possible. So a shader program is only deleted if all related materials are disposed. You can indicate the disposal of a material by
executing Material.dispose().
Textures
The disposal of a material has no effect on textures. They are handled separately since a single texture can be used by multiple materials at the same time.
Whenever you create an instance of Texture, three.js internally creates an instance of WebGLTexture.
Similar to buffers, this object can only be deleted by calling Texture.dispose().
If you use an ImageBitmap as the texture's data source, you have to call ImageBitmap.close() at the application level to dispose of all CPU-side resources.
An automated call of ImageBitmap.close() in Texture.dispose() is not possible, since the image bitmap becomes unusable, and the engine has no way of knowing if the image bitmap is used elsewhere.
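Putting the three cases together, a sketch of a typical cleanup, assuming a mesh whose material holds a single color texture on its map property:

```javascript
// remove the object from the scene graph first
scene.remove( mesh );

// free GPU buffers and the shader program (if no other material shares it)
mesh.geometry.dispose();
mesh.material.dispose();

// textures are disposed separately, since a texture may be shared
// by several materials — only do this if nothing else uses it
if ( mesh.material.map ) mesh.material.map.dispose();
```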
There are other classes from the examples directory like controls or post processing passes which provide dispose() methods in order to remove internal event listeners
or render targets. In general, it's recommended to check the API or documentation of a class and watch for dispose(). If present, you should use it when cleaning things up.
FAQ
Why can't three.js dispose objects automatically?
This question was asked many times by the community, so it's important to clarify this matter. The fact is that three.js does not know the lifetime or scope
of user-created entities like geometries or materials. This is the responsibility of the application. For example, even if a material is currently not used for rendering,
it might be necessary for the next frame. So if the application decides that a certain object can be deleted, it has to notify the engine by calling the respective
dispose() method.
Does removing a mesh from the scene also dispose its geometry and material?
No, you have to explicitly dispose the geometry and material via dispose(). Keep in mind that geometries and materials can be shared among 3D objects like meshes.
Does three.js provide information about the amount of cached objects?
Yes. It's possible to evaluate WebGLRenderer.info, a special property of the renderer with a series of statistical information about the graphics board memory
and the rendering process. Among other things, it tells you how many textures, geometries and shader programs are internally stored. If you notice performance problems
in your application, it's a good idea to debug this property in order to easily identify a memory leak.
What happens when you call dispose() on a texture but the image is not loaded yet?
Internal resources for a texture are only allocated if the image has fully loaded. If you dispose a texture before the image was loaded,
nothing happens. No resources were allocated so there is also no need for clean up.
What happens when I call dispose() and then use the respective object at a later point?
That depends. For geometries, materials, textures, render targets and post processing passes the deleted internal resources can be created again by the engine.
So no runtime error will occur but you might notice a negative performance impact for the current frame, especially when shader programs have to be compiled.
Controls and renderers are an exception: instances of these classes cannot be used after dispose() has been called. You have to create new instances in this case.
How should I manage three.js objects in my app? How do I know when to dispose things?
In general, there is no definite recommendation for this. When calling dispose() is appropriate depends highly on the specific use case. It's important to highlight that
it's not always necessary to dispose of objects all the time. A good example is a game which consists of multiple levels: a good place for object disposal is when
switching levels. The app could traverse the old scene and dispose of all obsolete materials, geometries and textures. As mentioned in the previous section, disposing
an object that is actually still in use does not produce a runtime error. The worst thing that can happen is a performance drop for a single frame.
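As a sketch, such a level-switch cleanup might traverse the old scene and dispose whatever it finds. The helper below is a hypothetical function (not an official API); it assumes the usual Object3D.traverse / .geometry / .material shape and does not track sharing, so only call it when nothing else uses these resources:

```javascript
// Hypothetical helper: dispose geometries, materials and material
// textures found in a (sub)tree. Shared resources are not tracked here.
function disposeLevel( root ) {
	root.traverse( ( obj ) => {
		if ( obj.geometry ) obj.geometry.dispose();
		const materials = Array.isArray( obj.material ) ? obj.material : [ obj.material ];
		for ( const material of materials ) {
			if ( ! material ) continue;
			if ( material.map ) material.map.dispose(); // texture attached to the material
			material.dispose();
		}
	} );
}
```

Usage would then be `disposeLevel( oldScene );` right before building the next level.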
VRButton.createButton() does two important things: it creates a button which indicates
VR compatibility, and it initiates a VR session if the user activates the button. The only thing you have
to do is add the following line of code to your app.
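The line in question appends the button created by VRButton to the page (this assumes a WebGLRenderer instance named renderer and a browser DOM, so it only runs in a web page):

```javascript
document.body.appendChild( VRButton.createButton( renderer ) );
```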
Next, you have to tell your instance of WebGLRenderer to enable XR rendering.
renderer.xr.enabled = true;
Finally, you have to adjust your animation loop, since we can't use our well-known
window.requestAnimationFrame() function. For VR projects we use setAnimationLoop.
The minimal code looks like this:
renderer.setAnimationLoop( function () {
	renderer.render( scene, camera );
} );
Next Steps
Have a look at one of the official WebXR examples to see this workflow in action.
Many three.js applications render their 3D objects directly to the screen. Sometimes, however, you want to apply one or more graphical
effects like Depth-Of-Field, Bloom, Film Grain or various types of Anti-aliasing. Post-processing is a widely used approach
to implement such effects. First, the scene is rendered to a render target which represents a buffer in the video card's memory.
In the next step one or more post-processing passes apply filters and effects to the image buffer before it is eventually rendered to
the screen.
three.js provides a complete post-processing solution via EffectComposer to implement such a workflow.
Workflow
The first step in the process is to import all necessary files from the examples directory. The guide assumes you are using the official
npm package of three.js. For our basic demo in this guide we need the following files.
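For the chain described below, the composer and passes can be imported from the addons build like so (paths assume the official npm package layout):

```javascript
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';
import { GlitchPass } from 'three/addons/postprocessing/GlitchPass.js';
import { OutputPass } from 'three/addons/postprocessing/OutputPass.js';
```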
After all files are successfully imported, we can create our composer by passing in an instance of WebGLRenderer.
const composer = new EffectComposer( renderer );
When using a composer, it's necessary to change the application's animation loop. Instead of calling the render method of
WebGLRenderer, we now use the respective counterpart of EffectComposer.
function animate() {
	requestAnimationFrame( animate );
	composer.render();
}
Our composer is now ready so it's possible to configure the chain of post-processing passes. These passes are responsible for creating
the final visual output of the application. They are processed in order of their addition/insertion. In our example, the instance of RenderPass
is executed first, then the instance of GlitchPass and finally OutputPass. The last enabled pass in the chain is automatically rendered to the screen.
The setup of the passes looks like so:
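A minimal setup matching the order described above could look like this (a sketch assuming scene, camera and composer already exist, and that the pass classes have been imported from the addons build):

```javascript
const renderPass = new RenderPass( scene, camera );
composer.addPass( renderPass );

const glitchPass = new GlitchPass();
composer.addPass( glitchPass );

const outputPass = new OutputPass();
composer.addPass( outputPass );
```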
RenderPass is normally placed at the beginning of the chain in order to provide the rendered scene as an input for the next post-processing step. In our case,
GlitchPass is going to use this image data to apply a wild glitch effect. OutputPass is usually the last pass in the chain; it performs sRGB color space conversion and tone mapping.
Check out this live example to see it in action.
Built-in Passes
You can use a wide range of pre-defined post-processing passes provided by the engine. They are located in the
postprocessing directory.
Custom Passes
Sometimes you want to write a custom post-processing shader and include it into the chain of post-processing passes. For this scenario,
you can utilize ShaderPass. After importing the file and your custom shader, you can use the following code to setup the pass.
import { ShaderPass } from 'three/addons/postprocessing/ShaderPass.js';
import { LuminosityShader } from 'three/addons/shaders/LuminosityShader.js';

// later in your init routine
const luminosityPass = new ShaderPass( LuminosityShader );
composer.addPass( luminosityPass );
The repository provides a file called CopyShader which is a
good starting point for your own custom shader. CopyShader just copies the image contents of the EffectComposer's read buffer
to its write buffer without applying any effects.
Matrix transformations
Three.js uses matrices to encode 3D transformations: translations (position), rotations, and scaling. Every instance of Object3D has a matrix which stores that object's position, rotation, and scale. This page describes how to update an object's transformation.
Convenience properties and matrixAutoUpdate
There are two ways to update an object's transformation:
Modify the object's position, quaternion, and scale properties, and let three.js recompute
the object's matrix from these properties:
object.position.copy( start_position );
object.quaternion.copy( quaternion );
By default, the matrixAutoUpdate property is set to true, and the matrix will be automatically recalculated.
If the object is static, or you wish to manually control when recalculation occurs, better performance can be obtained by setting the property to false:
object.matrixAutoUpdate =false;
And after changing any properties, manually update the matrix:
object.updateMatrix();
Modify the object's matrix directly. The Matrix4 class has various methods for modifying the matrix:
object.matrix.setRotationFromQuaternion( quaternion );
object.matrix.setPosition( start_position );
object.matrixAutoUpdate = false;
Note that matrixAutoUpdate must be set to false in this case, and you should make sure not to call updateMatrix. Calling updateMatrix will clobber the manual changes made to the matrix, recalculating the matrix from position, scale, and so on.
Object and world matrices
An object's matrix stores the object's transformation relative to the object's parent; to get the object's transformation in world coordinates, you must access the object's Object3D.matrixWorld.
When either the parent or the child object's transformation changes, you can request that the child object's matrixWorld be updated by calling updateMatrixWorld().
An object can be transformed via Object3D.applyMatrix4. Note: Under-the-hood, this method relies on Matrix4.decompose, and not all matrices are decomposable in this way. For example, if an object has a non-uniformly scaled parent, then the object's world matrix may not be decomposable, and this method may not be appropriate.
Rotation and Quaternion
Three.js provides two ways of representing 3D rotations: Euler angles and Quaternions, as well as methods for converting between the two. Euler angles are subject to a problem called "gimbal lock," where certain configurations can lose a degree of freedom (preventing the object from being rotated about one axis). For this reason, object rotations are always stored in the object's quaternion.
Previous versions of the library included a useQuaternion property which, when set to false, would cause the object's matrix to be calculated from an Euler angle. This practice is deprecated; instead, you should use the setRotationFromEuler method, which will update the quaternion.
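For example, rotating 90 degrees about the Y axis via an Euler angle might look like this (a sketch; object stands for any Object3D instance):

```javascript
const euler = new THREE.Euler( 0, Math.PI / 2, 0, 'XYZ' );
object.setRotationFromEuler( euler ); // updates object.quaternion internally
```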
Animation system
Overview
Within the three.js animation system you can animate various properties of your models:
the bones of a skinned and rigged model, morph targets, different material properties
(colors, opacity, booleans), visibility and transforms. The animated properties can be faded in,
faded out, crossfaded and warped. The weight and time scales of different simultaneous
animations on the same object as well as on different objects can be changed
independently. Various animations on the same and on different objects can be
synchronized.
To achieve all this in one homogeneous system, the three.js animation system
was completely changed in 2015
(beware of outdated information!), and it now has an architecture similar to
Unity's and Unreal Engine 4's. This page gives a short overview of the main components of the
system and how they work together.
Animation Clips
If you have successfully imported an animated 3D object (it doesn't matter if it has
bones or morph targets or both) — for example exporting it from Blender with the
glTF Blender exporter and
loading it into a three.js scene using GLTFLoader — one of the response fields
should be an array named "animations", containing the AnimationClips
for this model (see a list of possible loaders below).
Each AnimationClip usually holds the data for a certain activity of the object. If the
mesh is a character, for example, there may be one AnimationClip for a walkcycle, a second
for a jump, a third for sidestepping and so on.
Keyframe Tracks
Inside of such an AnimationClip, the data for each animated property is stored in a
separate KeyframeTrack. Assuming a character object has a skeleton,
one keyframe track could store the data for the position changes of the lower arm bone
over time, a different track the data for the rotation changes of the same bone, a third
the position, rotation or scaling of another bone, and so on. It should be clear
that an AnimationClip can be composed of lots of such tracks.
Assuming the model has morph targets (for example one morph
target showing a friendly face and another showing an angry face), each track holds the
information as to how the influence of a certain morph
target changes during the performance of the clip.
Animation Mixer
The stored data forms only the basis for the animations; actual playback is controlled by
the AnimationMixer. You can imagine this not only as a player for animations, but
as a simulation of hardware like a real mixer console, which can control several animations
simultaneously, blending and merging them.
Animation Actions
The AnimationMixer itself has only very few (general) properties and methods, because it
can be controlled by the AnimationActions. By configuring an
AnimationAction you can determine when a certain AnimationClip shall be played, paused
or stopped on one of the mixers, if and how often the clip has to be repeated, whether it
shall be performed with a fade or a time scaling, and some additional things, such as
crossfading or synchronizing.
Animation Object Groups
If you want a group of objects to receive a shared animation state, you can use an
AnimationObjectGroup.
Supported Formats and Loaders
Note that not all model formats include animation (OBJ notably does not), and that only some
three.js loaders support AnimationClip sequences. Several that do
support this animation type:
Note that 3ds Max and Maya currently can't export multiple animations (meaning animations which are not
on the same timeline) directly to a single file.
Example
let mesh;

// Create an AnimationMixer, and get the list of AnimationClip instances
const mixer = new THREE.AnimationMixer( mesh );
const clips = mesh.animations;

// Update the mixer on each frame
function update () {
	mixer.update( deltaSeconds );
}

// Play a specific animation
const clip = THREE.AnimationClip.findByName( clips, 'dance' );
const action = mixer.clipAction( clip );
action.play();

// Play all animations
clips.forEach( function ( clip ) {
	mixer.clipAction( clip ).play();
} );
Color management
What is a color space?
Every color space is a collection of several design decisions, chosen together to support a
large range of colors while satisfying technical constraints related to precision and display
technologies. When creating a 3D asset, or assembling 3D assets together into a scene, it is
important to know what these properties are, and how the properties of one color space relate
to other color spaces in the scene.
Color primaries: Primary colors (e.g. red, green, blue) are not absolutes; they are
selected from the visible spectrum based on constraints of limited precision and
capabilities of available display devices. Colors are expressed as a ratio of the primary colors.
White point: Most color spaces are engineered such that an equally weighted sum of
primaries R = G = B will appear to be without color, or "achromatic". The appearance
of achromatic values (like white or grey) depends on human perception, which in turn depends
heavily on the context of the observer. A color space specifies its "white point" to balance
these needs. The white point defined by the sRGB color space is
D65.
Transfer functions: After choosing the color gamut and a color model, we still need to
define mappings ("transfer functions") of numerical values to/from the color space. Does r = 0.5
represent 50% less physical illumination than r = 1.0? Or 50% less bright, as perceived
by an average human eye? These are different things, and that difference can be represented as
a mathematical function. Transfer functions may be linear or nonlinear, depending
on the objectives of the color space. sRGB defines nonlinear transfer functions. Those
functions are sometimes approximated as gamma functions, but the term "gamma" is
ambiguous and should be avoided in this context.
Color model: Syntax for numerically identifying colors within the chosen color gamut —
a coordinate system for colors. In three.js we're mainly concerned with the RGB color
model, having three coordinates r, g, b ∈ [0,1] ("closed domain") or
r, g, b ∈ [0,∞] ("open domain"), each representing a fraction of a primary
color. Other color models (HSL, Lab, LCH) are commonly used for artistic control.
Color gamut: Once color primaries and a white point have been chosen, these represent
a volume within the visible spectrum (a "gamut"). Colors not within this volume ("out of gamut")
cannot be expressed by closed domain [0,1] RGB values. In the open domain [0,∞], the gamut is
technically infinite.
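As an illustration of nonlinear transfer functions, the piecewise sRGB functions can be written in a few lines. The constants below follow the standard sRGB definition (the same math three.js applies when converting between sRGB and Linear-sRGB), but this standalone sketch is not part of the public API:

```javascript
// sRGB decoding (display value -> linear light) and encoding (the inverse).
function SRGBToLinear( c ) {
	return ( c < 0.04045 ) ? c * 0.0773993808 : Math.pow( c * 0.9478672986 + 0.0521327014, 2.4 );
}
function LinearToSRGB( c ) {
	return ( c < 0.0031308 ) ? c * 12.92 : 1.055 * Math.pow( c, 0.41666 ) - 0.055;
}
// Note that 0.5 in sRGB corresponds to far less than half the physical
// light of 1.0: SRGBToLinear( 0.5 ) is roughly 0.214.
```

This makes the "Does r = 0.5 represent 50% less physical illumination?" question above concrete: in sRGB, it does not.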
Consider two very common color spaces: SRGBColorSpace ("sRGB") and
LinearSRGBColorSpace ("Linear-sRGB"). Both use the same primaries and white point,
and therefore have the same color gamut. Both use the RGB color model. They differ only in
the transfer functions — Linear-sRGB is linear with respect to physical light intensity.
sRGB uses the nonlinear sRGB transfer functions, and more closely resembles the way that
the human eye perceives light and the responsiveness of common display devices.
That difference is important. Lighting calculations and other rendering operations must
generally occur in a linear color space. However, linear colors are less efficient to
store in an image or framebuffer, and do not look correct when viewed by a human observer.
As a result, input textures and the final rendered image will generally use the nonlinear
sRGB color space.
ℹ️ NOTICE: While some modern displays support wider gamuts like Display-P3,
the web platform's graphics APIs largely rely on sRGB. Applications using three.js
today will typically use only the sRGB and Linear-sRGB color spaces.
Roles of color spaces
Linear workflows — required for modern rendering methods — generally involve more than
one color space, each assigned to a particular role. Linear and nonlinear color spaces are
appropriate for different roles, explained below.
Input color space
Colors supplied to three.js — from color pickers, textures, 3D models, and other sources —
each have an associated color space. Those not already in the Linear-sRGB working color
space must be converted, and textures must be given the correct texture.colorSpace assignment.
Certain conversions (for hexadecimal and CSS colors in sRGB) can be made automatically if
the THREE.ColorManagement API is enabled before initializing colors:
THREE.ColorManagement.enabled = true;
THREE.ColorManagement is enabled by default.
Materials, lights, and shaders: Colors in materials, lights, and shaders store
RGB components in the Linear-sRGB working color space.
Vertex colors: BufferAttributes store RGB components in the
Linear-sRGB working color space.
Color textures: PNG or JPEG Textures containing color information
(like .map or .emissiveMap) use the closed domain sRGB color space, and must be annotated with
texture.colorSpace = SRGBColorSpace. Formats like OpenEXR (sometimes used for .envMap or
.lightMap) use the Linear-sRGB color space indicated with texture.colorSpace = LinearSRGBColorSpace,
and may contain values in the open domain [0,∞].
Non-color textures: Textures that do not store color information (like .normalMap
or .roughnessMap) do not have an associated color space, and generally use the (default) texture
annotation of texture.colorSpace = NoColorSpace. In rare cases, non-color data
may be represented with other nonlinear encodings for technical reasons.
⚠️ WARNING: Many formats for 3D models do not correctly or consistently
define color space information. While three.js attempts to handle most cases, problems
are common with older file formats. For best results, use glTF 2.0 (GLTFLoader)
and test 3D models in online viewers early to confirm the asset itself is correct.
Working color space
Rendering, interpolation, and many other operations must be performed in an open domain
linear working color space, in which RGB components are proportional to physical
illumination. In three.js, the working color space is Linear-sRGB.
Output color space
Output to a display device, image, or video may involve conversion from the open domain
Linear-sRGB working color space to another color space. This conversion may be performed in
the main render pass (WebGLRenderer.outputColorSpace), or during post-processing.
renderer.outputColorSpace = THREE.SRGBColorSpace; // optional with post-processing
Display: Colors written to a WebGL canvas for display should be in the sRGB
color space.
Image: Colors written to an image should use the color space appropriate for
the format and usage. Fully-rendered images written to PNG or JPEG textures generally
use the sRGB color space. Images containing emission, light maps, or other data not
confined to the [0,1] range will generally use the open domain Linear-sRGB color space,
and a compatible image format like OpenEXR.
⚠️ WARNING: Render targets may use either sRGB or Linear-sRGB. sRGB makes
better use of limited precision. In the closed domain, 8 bits often suffice for sRGB
whereas ≥12 bits (half float) may be required for Linear-sRGB. If later pipeline
stages require Linear-sRGB input, the additional conversions may have a small
performance cost.
Custom materials based on ShaderMaterial and RawShaderMaterial have to implement their own output color space conversion.
For instances of ShaderMaterial, adding the colorspace_fragment shader chunk to the fragment shader's main() function should be sufficient.
Working with THREE.Color instances
Methods reading or modifying Color instances assume data is already in the
three.js working color space, Linear-sRGB. RGB and HSL components are direct
representations of data stored by the Color instance, and are never converted
implicitly. Color data may be explicitly converted with .convertLinearToSRGB()
or .convertSRGBToLinear().
With ColorManagement.enabled = true set (recommended), certain conversions
are made automatically. Because hexadecimal and CSS colors are generally sRGB, Color
methods will automatically convert these inputs from sRGB to Linear-sRGB in setters, or
convert from Linear-sRGB to sRGB when returning hexadecimal or CSS output from getters.
When an individual color or texture is misconfigured, it will appear darker or lighter than
expected. When the renderer's output color space is misconfigured, the entire scene may appear
darker (e.g. missing conversion to sRGB) or lighter (e.g. a double conversion to sRGB with
post-processing). In each case the problem may not be uniform, and simply increasing/decreasing
lighting does not solve it.
A more subtle issue appears when both the input color spaces and the output color
spaces are incorrect — the overall brightness levels may be fine, but colors may change
unexpectedly under different lighting, or shading may appear more blown-out and less soft
than intended. These two wrongs do not make a right, and it's important that the working
color space be linear ("scene referred") and the output color space be nonlinear
("display referred").
AnimationActions schedule the performance of the animations which are
stored in AnimationClips.
Note: Most of AnimationAction's methods can be chained.
For an overview of the different elements of the three.js animation system
see the "Animation System" article in the "Next Steps" section of the
manual.
mixer - the AnimationMixer that is controlled by
this action. clip - the AnimationClip that holds the animation
data for this action. localRoot - the root object on which this action is
performed. blendMode - defines how the animation is blended/combined
when two or more animations are simultaneously played.
Note: Instead of calling this constructor directly you should instantiate
an AnimationAction with AnimationMixer.clipAction since this method
provides caching for better performance.
Defines how the animation is blended/combined when two or more animations
are simultaneously played. Valid values are NormalAnimationBlendMode
(default) and AdditiveAnimationBlendMode.
If clampWhenFinished is set to true the animation will automatically be
paused on its last frame.
If clampWhenFinished is set to false, enabled will
automatically be switched to false when the last loop of the action has
finished, so that this action has no further impact.
Default is false.
Note: clampWhenFinished has no impact if the action is interrupted (it
has only an effect if its last loop has really finished).
Setting enabled to false disables this action, so that it has no
impact. Default is true.
When the action is re-enabled, the animation continues from its current
time (setting enabled to false doesn't reset the
action).
Note: Setting enabled to true doesn’t automatically restart the
animation. Setting enabled to true will only restart the animation
immediately if the following condition is fulfilled: paused
is false, this action has not been deactivated in the meantime (by
executing a stop or reset command), and neither
weight nor timeScale is 0.
THREE.LoopOnce - playing the clip once, THREE.LoopRepeat - playing the clip with the chosen
number of repetitions, each time jumping from the end of the clip
directly to its beginning, THREE.LoopPingPong - playing the clip with the chosen
number of repetitions, alternately playing forward and backward.
The local time of this action (in seconds, starting with 0).
The value gets clamped or wrapped to 0...clip.duration (according to the
loop state). It can be scaled relatively to the global mixer time by
changing timeScale (using setEffectiveTimeScale or setDuration).
The degree of influence of this action (in the interval [0,1]). Values
between 0 (no impact) and 1 (full impact) can be used to blend between
several actions. Default is 1.
Decelerates this animation's speed to 0 by decreasing timeScale gradually (starting from its current value), within the passed
time interval. This method can be chained.
Returns true if the action’s time is currently running.
In addition to being activated in the mixer (see isScheduled) the following conditions must be fulfilled: paused is equal to false, enabled is equal to true,
timeScale is different from 0, and there is no
scheduling for a delayed start (startAt).
Note: isRunning being true doesn’t necessarily mean that the animation
can actually be seen. This is only the case, if weight is
additionally set to a non-zero value.
Tells the mixer to activate the action. This method can be chained.
Note: Activating this action doesn’t necessarily mean that the animation
starts immediately: If the action had already finished before (by reaching
the end of its last loop), or if a time for a delayed start has been set
(via startAt), a reset must be executed
first. Some other settings (paused=true, enabled=false, weight=0, timeScale=0)
can prevent the animation from playing, too.
This method sets paused to false, enabled
to true, time to 0, interrupts any scheduled fading and
warping, and removes the internal loop count and scheduling for delayed
starting.
Note: .reset is always called by stop, but .reset doesn’t
call .stop itself. This means: If you want both, resetting and stopping,
don’t call .reset; call .stop instead.
# .setDuration ( durationInSeconds : Number ) : this
Sets the duration for a single loop of this action (by adjusting
timeScale and stopping any scheduled warping). This
method can be chained.
Sets the timeScale and stops any scheduled warping. This
method can be chained.
If paused is false, the effective time scale (an internal
property) will also be set to this value; otherwise the effective time
scale (directly affecting the animation at this moment) will be set to
0.
Note: .paused will not be switched to true automatically, if
.timeScale is set to 0 by this method.
Sets the weight and stops any scheduled fading. This method
can be chained.
If enabled is true, the effective weight (an internal
property) will also be set to this value; otherwise the effective weight
(directly affecting the animation at this moment) will be set to 0.
Note: .enabled will not be switched to false automatically, if
.weight is set to 0 by this method.
Defines the time for a delayed start (usually passed as
AnimationMixer.time + deltaTimeInSeconds). This method can be
chained.
Note: The animation will only start at the given time, if .startAt is
chained with play, or if the action has already been
activated in the mixer (by a previous call of .play, without stopping or
resetting it in the meantime).
Synchronizes this action with the passed other action. This method can be
chained.
Synchronizing is done by setting this action’s time and
timeScale values to the corresponding values of the
other action (stopping any scheduled warping).
Note: Future changes of the other action's time and timeScale will not
be detected.
# .warp ( startTimeScale : Number, endTimeScale : Number, durationInSeconds : Number ) : this
Changes the playback speed, within the passed time interval, by modifying
timeScale gradually from startTimeScale to
endTimeScale. This method can be chained.
Events
There are two events indicating when a single loop of the action
respectively the entire action has finished. You can react to them with:
mixer.addEventListener( 'loop', function ( e ) { … } ); // properties of e: type, action and loopDelta
mixer.addEventListener( 'finished', function ( e ) { … } ); // properties of e: type, action and direction
An AnimationClip is a reusable set of keyframe tracks which represent an
animation.
For an overview of the different elements of the three.js animation system
see the "Animation System" article in the "Next Steps" section of the
manual.
name - a name for this clip. duration - the duration of this clip (in seconds). If a
negative value is passed, the duration will be calculated from the passed
tracks array. tracks - an array of KeyframeTracks. blendMode - defines how the animation is blended/combined
when two or more animations are simultaneously played.
Note: Instead of instantiating an AnimationClip directly with the
constructor, you can use one of its static methods to create
AnimationClips: from JSON (parse), from morph target
sequences (CreateFromMorphTargetSequence,
CreateClipsFromMorphTargetSequences) or from animation hierarchies
(parseAnimation) - if your model doesn't already
hold AnimationClips in its geometry's animations array.
Defines how the animation is blended/combined when two or more animations
are simultaneously played. Valid values are NormalAnimationBlendMode
(default) and AdditiveAnimationBlendMode.
Returns an array of new AnimationClips created from the morph target
sequences of a geometry, trying to sort morph target names into
animation-group-based patterns like "Walk_001, Walk_002, Run_001, Run_002...".
Searches for an AnimationClip by name, taking as its first parameter
either an array of AnimationClips, or a mesh or geometry that contains an
array named "animations".
The AnimationMixer is a player for animations on a particular object in
the scene. When multiple objects in the scene are animated independently,
one AnimationMixer may be used for each object.
For an overview of the different elements of the three.js animation system
see the "Animation System" article in the "Next Steps" section of the
manual.
Returns an AnimationAction for the passed clip, optionally using a
root object different from the mixer's default root. The first parameter
can be either an AnimationClip object or the name of an
AnimationClip.
If an action fitting the clip and root parameters doesn't yet exist, it
will be created by this method. Calling this method several times with the
same clip and root parameters always returns the same clip instance.
Deallocates all memory resources for a root object. Before using this
method make sure to call AnimationAction.stop() for all related
actions or alternatively .stopAllAction() when the mixer operates
on a single root.
A group of objects that receives a shared animation state.
For an overview of the different elements of the three.js animation system
see the "Animation System" article in the "Next Steps" section of the
manual.
Usage:
Add objects you would otherwise pass as 'root' to the constructor or the
clipAction method of AnimationMixer and instead pass this object as 'root'.
Note that objects of this class appear as one object to the mixer, so
cache control of the individual objects must be done on the group.
Limitations
The animated properties must be compatible among all objects in the
group.
A single property can either be controlled through a target group or
directly, but not both.
A KeyframeTrack is a timed sequence of
keyframes, which are
composed of lists of times and related values, and which are used to
animate a specific property of an object.
For an overview of the different elements of the three.js animation system
see the "Animation System" article in the "Next Steps" section of the
manual.
In contrast to the animation hierarchy of the
JSON model format a KeyframeTrack doesn't store its single keyframes as
objects in a "keys" array (holding the times and the values for each frame
together in one place).
Instead of this there are always two arrays in a KeyframeTrack: the
times array stores the time values for all keyframes of this
track in sequential order, and the values array contains
the corresponding changing values of the animated property.
A single value, belonging to a certain point of time, can not only be a
simple number, but (for example) a vector (if a position is animated) or a
quaternion (if a rotation is animated). For this reason the values array
(which is a flat array, too) might be three or four times as long as the
times array.
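As a sketch with plain arrays: a track animating a position has one time per keyframe and three value components per keyframe, so the flat values array is three times as long as the times array. No three.js code is required to see the layout itself:

```javascript
// Flat keyframe layout for a position (vector) track:
// one time per keyframe, three value components (x, y, z) per keyframe.
const times = [ 0, 1, 2 ];                         // seconds
const values = [ 0, 0, 0,  5, 0, 0,  5, 5, 0 ];    // x,y,z | x,y,z | x,y,z
const valueSize = values.length / times.length;    // 3 for a vector track
```

In three.js these arrays would be passed to a subclass constructor such as `new VectorKeyframeTrack( '.position', times, values )`.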
Corresponding to the different possible types of animated values there are
several subclasses of KeyframeTrack, inheriting most of its properties and
methods:
The track's name can refer to morph targets or bones or
possibly other values within an animated object. See
PropertyBinding.parseTrackName for the forms of strings that can be
parsed for property binding:
The name can specify the node either using its name or its uuid (although
it needs to be in the subtree of the scene graph node passed into the
mixer). Or, if the track name starts with a dot, the track applies to the
root node that was passed into the mixer.
Usually after the node a property will be specified directly. But you can
also specify a subproperty, such as .rotation[x], if you just want to
drive the X component of the rotation via a float track.
You can also specify bones or multimaterials by using an object name, for
example: .bones[R_hand].scale; the red channel of the diffuse color of the
fourth material in a materials array - as a further example - can be
accessed with .materials[3].diffuse[r].
PropertyBinding will also resolve morph target names, for example:
.morphTargetInfluences[run].
Note: The track's name does not necessarily have to be unique. Multiple
tracks can drive the same property. The result should be based on a
weighted blend between the multiple tracks according to the weights of
their respective actions.
Creates a new DiscreteInterpolant from the
times and values. A
Float32Array can be passed which will receive the results. Otherwise a new
array with the appropriate size will be created automatically.
Creates a new LinearInterpolant from the
times and values. A
Float32Array can be passed which will receive the results. Otherwise a new
array with the appropriate size will be created automatically.
Creates a new CubicInterpolant from the
times and values. A
Float32Array can be passed which will receive the results. Otherwise a new
array with the appropriate size will be created automatically.
Performs minimal validation on the tracks. Returns true if valid.
This method logs errors to the console if a track is empty, if the value size is not valid, if an item in the times or values array is not a valid number, or if the items in the times array are out of order.
This has the layout: [ incoming | accu0 | accu1 | orig ]
Interpolators can use .buffer as their .result and the data then goes to
'incoming'. 'accu0' and 'accu1' are used frame-interleaved for the
cumulative result and are compared to detect changes. 'orig' stores the
original state of the property.
BooleanKeyframeTrack( name : String, times : Array, values : Array )
name - (required) identifier for the KeyframeTrack.
times - (required) array of keyframe times.
values - values for the keyframes at the times specified.
This keyframe track type has no interpolation parameter because the
interpolation is always InterpolateDiscrete.
A Track of keyframe values that represent color changes.
The very basic implementation of this subclass has nothing special yet.
However, this is the place for color space parameterization.
Constructor
ColorKeyframeTrack( name : String, times : Array, values : Array, interpolation : Constant )
name - (required) identifier for the KeyframeTrack.
times - (required) array of keyframe times.
values - values for the keyframes at the times specified, a flat array of color components between 0 and 1.
interpolation - the type of interpolation to use. See Animation Constants for possible values. Default is InterpolateLinear.
NumberKeyframeTrack( name : String, times : Array, values : Array, interpolation : Constant )
name - (required) identifier for the KeyframeTrack.
times - (required) array of keyframe times.
values - values for the keyframes at the times specified.
interpolation - the type of interpolation to use. See Animation Constants for possible values. Default is InterpolateLinear.
QuaternionKeyframeTrack( name : String, times : Array, values : Array, interpolation : Constant )
name - (required) identifier for the KeyframeTrack.
times - (required) array of keyframe times.
values - values for the keyframes at the times specified, a flat array of quaternion components.
interpolation - the type of interpolation to use. See Animation Constants for possible values. Default is InterpolateLinear.
StringKeyframeTrack( name : String, times : Array, values : Array )
name - (required) identifier for the KeyframeTrack.
times - (required) array of keyframe times.
values - values for the keyframes at the times specified.
This keyframe track type has no interpolation parameter because the
interpolation is always InterpolateDiscrete.
VectorKeyframeTrack( name : String, times : Array, values : Array, interpolation : Constant )
name - (required) identifier for the KeyframeTrack.
times - (required) array of keyframe times.
values - values for the keyframes at the times specified, a flat array of vector components.
interpolation - the type of interpolation to use. See Animation Constants for possible values. Default is InterpolateLinear.
// create an AudioListener and add it to the camera
const listener = new THREE.AudioListener();
camera.add( listener );

// create a global audio source
const sound = new THREE.Audio( listener );

// load a sound and set it as the Audio object's buffer
const audioLoader = new THREE.AudioLoader();
audioLoader.load( 'sounds/ambient.ogg', function ( buffer ) {
	sound.setBuffer( buffer );
	sound.setLoop( true );
	sound.setVolume( 0.5 );
	sound.play();
} );
Represents an array of
AudioNodes. Can be used to apply a variety of low-order filters to create
more complex sound effects. In most cases, the array contains instances of
BiquadFilterNodes. Filters are set via Audio.setFilter or
Audio.setFilters.
// create an AudioListener and add it to the camera
const listener = new THREE.AudioListener();
camera.add( listener );

// create an Audio source
const sound = new THREE.Audio( listener );

// load a sound and set it as the Audio object's buffer
const audioLoader = new THREE.AudioLoader();
audioLoader.load( 'sounds/ambient.ogg', function ( buffer ) {
	sound.setBuffer( buffer );
	sound.setLoop( true );
	sound.setVolume( 0.5 );
	sound.play();
} );

// create an AudioAnalyser, passing in the sound and desired fftSize
const analyser = new THREE.AudioAnalyser( sound, 32 );

// get the average frequency of the sound
const data = analyser.getAverageFrequency();
A non-zero power of two up to 2048, representing the size of the FFT (Fast
Fourier Transform) to be used to determine the frequency domain. See
this page for details.
The AudioListener represents a virtual
listener of all positional and non-positional audio effects in the
scene.
A three.js application usually creates a single instance of AudioListener. It is
a mandatory constructor parameter for audio entities like Audio and PositionalAudio.
In most cases, the listener object is a child of the camera. So the 3D
transformation of the camera represents the 3D transformation of the
listener.
Code Example
// create an AudioListener and add it to the camera
const listener = new THREE.AudioListener();
camera.add( listener );

// create a global audio source
const sound = new THREE.Audio( listener );

// load a sound and set it as the Audio object's buffer
const audioLoader = new THREE.AudioLoader();
audioLoader.load( 'sounds/ambient.ogg', function ( buffer ) {
	sound.setBuffer( buffer );
	sound.setLoop( true );
	sound.setVolume( 0.5 );
	sound.play();
} );
// create an AudioListener and add it to the camera
const listener = new THREE.AudioListener();
camera.add( listener );

// create the PositionalAudio object (passing in the listener)
const sound = new THREE.PositionalAudio( listener );

// load a sound and set it as the PositionalAudio object's buffer
const audioLoader = new THREE.AudioLoader();
audioLoader.load( 'sounds/song.ogg', function ( buffer ) {
	sound.setBuffer( buffer );
	sound.setRefDistance( 20 );
	sound.play();
} );

// create an object for the sound to play from
const sphere = new THREE.SphereGeometry( 20, 32, 16 );
const material = new THREE.MeshPhongMaterial( { color: 0xff2200 } );
const mesh = new THREE.Mesh( sphere, material );
scene.add( mesh );

// finally add the sound to the mesh
mesh.add( sound );
ArrayCamera can be used to efficiently render a scene with a
predefined set of cameras. This is an important performance aspect for
rendering VR scenes.
An instance of ArrayCamera always has an array of sub cameras. For each
sub camera, the viewport property must be defined, which determines the
part of the viewport that is rendered with that camera.
OrthographicCamera( left : Number, right : Number, top : Number, bottom : Number, near : Number, far : Number )
left — Camera frustum left plane.
right — Camera frustum right plane.
top — Camera frustum top plane.
bottom — Camera frustum bottom plane.
near — Camera frustum near plane.
far — Camera frustum far plane.
See the base Camera class for common properties.
Note that after making changes to most of these properties you will have
to call .updateProjectionMatrix for the changes to take effect.
The valid range is between 0 and the current value of the far plane. Note that, unlike for the PerspectiveCamera, 0 is a
valid value for an OrthographicCamera's near plane.
fullWidth — full width of multiview setup
fullHeight — full height of multiview setup
x — horizontal offset of subcamera
y — vertical offset of subcamera
width — width of subcamera
height — height of subcamera
Sets an offset in a larger
viewing frustum. This
is useful for multi-window or multi-monitor/multi-machine setups. For an
example on how to use it see PerspectiveCamera.
See the base Camera class for common properties.
Note that after making changes to most of these properties you will have
to call .updateProjectionMatrix for the changes to take effect.
Film size used for the larger axis. Default is 35 (millimeters). This
parameter does not influence the projection matrix unless .filmOffset is
set to a nonzero value.
Object distance used for stereoscopy and depth-of-field effects. This
parameter does not influence the projection matrix unless a
StereoCamera is being used. Default is 10.
The valid range is greater than 0 and less than the current value of the
far plane. Note that, unlike for the
OrthographicCamera, 0 is not a valid value for a
PerspectiveCamera's near plane.
Computes the 2D bounds of the camera's viewable rectangle at a given distance along the viewing direction.
Sets minTarget and maxTarget to the coordinates of the lower-left and upper-right corners of the view rectangle.
Computes the width and height of the camera's viewable rectangle at a given distance along the viewing direction.
Copies the result into the target Vector2, where x is width and y is height.
fullWidth — full width of multiview setup
fullHeight — full height of multiview setup
x — horizontal offset of subcamera
y — vertical offset of subcamera
width — width of subcamera
height — height of subcamera
Sets an offset in a larger frustum. This is useful for multi-window or
multi-monitor/multi-machine setups.
For example, if you have 3x2 monitors and each monitor is 1920x1080 and
the monitors are in grid like this:
+---+---+---+
| A | B | C |
+---+---+---+
| D | E | F |
+---+---+---+
const w = 1920;
const h = 1080;
const fullWidth = w * 3;
const fullHeight = h * 2;

// A
camera.setViewOffset( fullWidth, fullHeight, w * 0, h * 0, w, h );
// B
camera.setViewOffset( fullWidth, fullHeight, w * 1, h * 0, w, h );
// C
camera.setViewOffset( fullWidth, fullHeight, w * 2, h * 0, w, h );
// D
camera.setViewOffset( fullWidth, fullHeight, w * 0, h * 1, w, h );
// E
camera.setViewOffset( fullWidth, fullHeight, w * 1, h * 1, w, h );
// F
camera.setViewOffset( fullWidth, fullHeight, w * 2, h * 1, w, h );
Note there is no reason monitors have to be the same size or in a grid.
NoColorSpace defines no specific color space. It is commonly used
for textures including normal maps, roughness maps, metalness maps,
ambient occlusion maps, and other non-color data.
SRGBColorSpace (“srgb”) refers to the color space defined by the
Rec. 709 primaries, D65 white point, and nonlinear sRGB transfer
functions. sRGB is the default color space in CSS, and is often found in
color palettes and color pickers. Colors expressed in hexadecimal or CSS
notation are typically in the sRGB color space.
LinearSRGBColorSpace (“srgb-linear”) refers to the sRGB color space
(above) with linear transfer functions. Linear-sRGB is the working color
space in three.js, used throughout most of the rendering process. RGB
components found in three.js materials and shaders are in the Linear-sRGB
color space.
For further background and usage, see Color management.
The constants LEFT and ROTATE have the same underlying value. The
constants MIDDLE and DOLLY have the same underlying value. The constants
RIGHT and PAN have the same underlying value.
These work with all material types. First set the material's blending mode
to THREE.CustomBlending, then set the desired Blending Equation, Source
Factor and Destination Factor.
The usage constants can be used to provide a hint to the API regarding how
the geometry buffer attribute will be used in order to optimize
performance.
Which depth function the material uses to compare incoming pixels' Z-depth against the current Z-depth buffer value. If the result of the comparison is true, the pixel will be drawn.
NeverDepth will never return true.
AlwaysDepth will always return true.
EqualDepth will return true if the incoming pixel Z-depth is equal to the current buffer Z-depth.
LessDepth will return true if the incoming pixel Z-depth is less than the current buffer Z-depth.
LessEqualDepth is the default and will return true if the incoming pixel Z-depth is less than or equal to the current buffer Z-depth.
GreaterEqualDepth will return true if the incoming pixel Z-depth is greater than or equal to the current buffer Z-depth.
GreaterDepth will return true if the incoming pixel Z-depth is greater than the current buffer Z-depth.
NotEqualDepth will return true if the incoming pixel Z-depth is not equal to the current buffer Z-depth.
Which stencil function the material uses to determine whether or not to perform a stencil operation.
NeverStencilFunc will never return true.
LessStencilFunc will return true if the stencil reference value is less than the current stencil value.
EqualStencilFunc will return true if the stencil reference value is equal to the current stencil value.
LessEqualStencilFunc will return true if the stencil reference value is less than or equal to the current stencil value.
GreaterStencilFunc will return true if the stencil reference value is greater than the current stencil value.
NotEqualStencilFunc will return true if the stencil reference value is not equal to the current stencil value.
GreaterEqualStencilFunc will return true if the stencil reference value is greater than or equal to the current stencil value.
AlwaysStencilFunc will always return true.
Which stencil operation the material will perform on the stencil buffer pixel if the provided stencil function passes.
ZeroStencilOp will set the stencil value to 0.
KeepStencilOp will not change the current stencil value.
ReplaceStencilOp will replace the stencil value with the specified stencil reference value.
IncrementStencilOp will increment the current stencil value by 1.
DecrementStencilOp will decrement the current stencil value by 1.
IncrementWrapStencilOp will increment the current stencil value by 1. If the value increments past 255 it will be set to 0.
DecrementWrapStencilOp will decrement the current stencil value by 1. If the value decrements below 0 it will be set to 255.
InvertStencilOp will perform a bitwise inversion of the current stencil value.
Defines the type of the normal map. For TangentSpaceNormalMap, the
information is relative to the underlying surface. For
ObjectSpaceNormalMap, the information is relative to the object
orientation. Default is TangentSpaceNormalMap.
These define the WebGLRenderer's shadowMap.type property.
BasicShadowMap gives unfiltered shadow maps - fastest, but lowest quality.
PCFShadowMap filters shadow maps using the Percentage-Closer Filtering (PCF) algorithm (default).
PCFSoftShadowMap filters shadow maps using the Percentage-Closer Filtering (PCF) algorithm with better soft shadows, especially when using low-resolution shadow maps.
VSMShadowMap filters shadow maps using the Variance Shadow Map (VSM) algorithm. When using VSMShadowMap all shadow receivers will also cast shadows.
These define the WebGLRenderer's toneMapping property. This is used to approximate the appearance of high
dynamic range (HDR) on the low dynamic range medium of a standard computer
monitor or mobile device's screen.
THREE.LinearToneMapping, THREE.ReinhardToneMapping, THREE.CineonToneMapping, THREE.ACESFilmicToneMapping,
THREE.AgXToneMapping and THREE.NeutralToneMapping are built-in implementations of tone mapping.
THREE.CustomToneMapping expects a custom implementation by modifying GLSL code of the material's fragment shader.
See the WebGL / tonemapping example.
THREE.NeutralToneMapping is an implementation based on the Khronos 3D Commerce Group standard tone mapping.
EquirectangularReflectionMapping and EquirectangularRefractionMapping are for use with an equirectangular
environment map. Also called a lat-long map, an equirectangular texture
represents a 360-degree view along the horizontal centerline, and a
180-degree view along the vertical axis, with the top and bottom edges of
the image corresponding to the north and south poles of a mapped
sphere.
These define the texture's wrapS and
wrapT properties, which define horizontal and
vertical texture wrapping.
With RepeatWrapping the texture will simply repeat to
infinity.
ClampToEdgeWrapping is the default. The last pixel of the
texture stretches to the edge of the mesh.
With MirroredRepeatWrapping the texture repeats to
infinity, mirroring on each repeat.
Magnification Filters
THREE.NearestFilter
THREE.LinearFilter
For use with a texture's magFilter property,
these define the texture magnification function to be used when the pixel
being textured maps to an area less than or equal to one texture element
(texel).
NearestFilter returns the value of the texture element
that is nearest (in Manhattan distance) to the specified texture
coordinates.
LinearFilter is the default and returns the weighted
average of the four texture elements that are closest to the specified
texture coordinates, and can include items wrapped or repeated from other
parts of a texture, depending on the values of wrapS
and wrapT, and on the exact mapping.
For use with a texture's minFilter property,
these define the texture minifying function that is used whenever the
pixel being textured maps to an area greater than one texture element
(texel).
In addition to NearestFilter and LinearFilter, the following four functions can be used for
minification:
NearestMipmapNearestFilter chooses the mipmap that most
closely matches the size of the pixel being textured and uses the
NearestFilter criterion (the texel nearest to the center
of the pixel) to produce a texture value.
NearestMipmapLinearFilter chooses the two mipmaps that
most closely match the size of the pixel being textured and uses the
NearestFilter criterion to produce a texture value from
each mipmap. The final texture value is a weighted average of those two
values.
LinearMipmapNearestFilter chooses the mipmap that most
closely matches the size of the pixel being textured and uses the
LinearFilter criterion (a weighted average of the four
texels that are closest to the center of the pixel) to produce a texture
value.
LinearMipmapLinearFilter is the default and chooses the
two mipmaps that most closely match the size of the pixel being textured
and uses the LinearFilter criterion to produce a texture
value from each mipmap. The final texture value is a weighted average of
those two values.
For use with a texture's format property, these
define how elements of a 2d texture, or texels, are read by shaders.
AlphaFormat discards the red, green and blue components
and reads just the alpha component.
RedFormat discards the green and blue components and reads
just the red component.
RedIntegerFormat discards the green and blue components
and reads just the red component. The texels are read as integers instead
of floating point.
RGFormat discards the alpha, and blue components and reads
the red, and green components.
RGIntegerFormat discards the alpha, and blue components
and reads the red, and green components. The texels are read as integers
instead of floating point.
RGBAFormat is the default and reads the red, green, blue
and alpha components.
RGBAIntegerFormat reads the red, green,
blue and alpha components. The texels are read as integers instead of
floating point.
LuminanceFormat reads each element as a single luminance
component. This is then converted to a floating point, clamped to the
range [0,1], and then assembled into an RGBA element by placing the
luminance value in the red, green and blue channels, and attaching 1.0 to
the alpha channel.
LuminanceAlphaFormat reads each element as a
luminance/alpha double. The same process occurs as for the LuminanceFormat, except that the alpha channel may have values other than
1.0.
DepthFormat reads each element as a single depth value,
converts it to floating point, and clamps to the range [0,1]. This is the
default for DepthTexture.
DepthStencilFormat reads each element as a pair of depth
and stencil values. The depth component of the pair is interpreted as in
DepthFormat. The stencil component is interpreted based on
the depth + stencil internal format.
There are four S3TC formats available via this extension:
RGB_S3TC_DXT1_Format: A DXT1-compressed image in an RGB image format.
RGBA_S3TC_DXT1_Format: A DXT1-compressed image in an RGB image format with a simple on/off alpha value.
RGBA_S3TC_DXT3_Format: A DXT3-compressed image in an RGBA image format. Compared to a 32-bit RGBA texture, it offers 4:1 compression.
RGBA_S3TC_DXT5_Format: A DXT5-compressed image in an RGBA image format. It also provides a 4:1 compression, but differs from the DXT3 compression in how the alpha compression is done.
For use with a CompressedTexture's
format property, these require support for the
WEBGL_compressed_texture_pvrtc extension.
PVRTC is typically only available on mobile devices with PowerVR chipsets,
which are mainly Apple devices.
There are four PVRTC formats available via this extension:
RGB_PVRTC_4BPPV1_Format: RGB compression in 4-bit mode. One block for each 4×4 pixels.
RGB_PVRTC_2BPPV1_Format: RGB compression in 2-bit mode. One block for each 8×4 pixels.
RGBA_PVRTC_4BPPV1_Format: RGBA compression in 4-bit mode. One block for each 4×4 pixels.
RGBA_PVRTC_2BPPV1_Format: RGBA compression in 2-bit mode. One block for each 8×4 pixels.
R8_SNORM stores the red component on 8 bits. The component
is stored as normalized.
R8I stores the red component on 8 bits. The component is
stored as an integer.
R8UI stores the red component on 8 bits. The component is
stored as an unsigned integer.
R16I stores the red component on 16 bits. The component is
stored as an integer.
R16UI stores the red component on 16 bits. The component
is stored as an unsigned integer.
R16F stores the red component on 16 bits. The component is
stored as floating point.
R32I stores the red component on 32 bits. The component is
stored as an integer.
R32UI stores the red component on 32 bits. The component
is stored as an unsigned integer.
R32F stores the red component on 32 bits. The component is
stored as floating point.
RG8 stores the red and green components on 8 bits each.
RG8_SNORM stores the red and green components on 8 bits
each. Every component is stored as normalized.
RG8I stores the red and green components on 8 bits each.
Every component is stored as an integer.
RG8UI stores the red and green components on 8 bits each.
Every component is stored as an unsigned integer.
RG16I stores the red and green components on 16 bits each.
Every component is stored as an integer.
RG16UI stores the red and green components on 16 bits
each. Every component is stored as an unsigned integer.
RG16F stores the red and green components on 16 bits each.
Every component is stored as floating point.
RG32I stores the red and green components on 32 bits each.
Every component is stored as an integer.
RG32UI stores the red and green components on 32 bits each.
Every component is stored as an unsigned integer.
RG32F stores the red and green components on 32 bits each.
Every component is stored as floating point.
RGB8 stores the red, green, and blue components on 8 bits each.
RGB8_SNORM stores the red, green, and blue components on 8 bits each. Every component is stored as normalized.
RGB8I stores the red, green, and blue components on 8 bits
each. Every component is stored as an integer.
RGB8UI stores the red, green, and blue components on 8
bits each. Every component is stored as an unsigned integer.
RGB16I stores the red, green, and blue components on 16
bits each. Every component is stored as an integer.
RGB16UI stores the red, green, and blue components on 16
bits each. Every component is stored as an unsigned integer.
RGB16F stores the red, green, and blue components on 16
bits each. Every component is stored as floating point.
RGB32I stores the red, green, and blue components on 32
bits each. Every component is stored as an integer.
RGB32UI stores the red, green, and blue components on 32
bits each. Every component is stored as an unsigned integer.
RGB32F stores the red, green, and blue components on 32
bits each. Every component is stored as floating point.
R11F_G11F_B10F stores the red, green, and blue components
respectively on 11 bits, 11 bits, and 10 bits. Every component is stored as
floating point.
RGB565 stores the red, green, and blue components
respectively on 5 bits, 6 bits, and 5 bits.
RGB9_E5 stores the red, green, and blue components on 9
bits each.
RGBA8 stores the red, green, blue, and alpha components on
8 bits each.
RGBA8_SNORM stores the red, green, blue, and alpha
components on 8 bits. Every component is stored as normalized.
RGBA8I stores the red, green, blue, and alpha components
on 8 bits each. Every component is stored as an integer.
RGBA8UI stores the red, green, blue, and alpha components
on 8 bits. Every component is stored as an unsigned integer.
RGBA16I stores the red, green, blue, and alpha components
on 16 bits. Every component is stored as an integer.
RGBA16UI stores the red, green, blue, and alpha components
on 16 bits. Every component is stored as an unsigned integer.
RGBA16F stores the red, green, blue, and alpha components
on 16 bits. Every component is stored as floating point.
RGBA32I stores the red, green, blue, and alpha components
on 32 bits. Every component is stored as an integer.
RGBA32UI stores the red, green, blue, and alpha components
on 32 bits. Every component is stored as an unsigned integer.
RGBA32F stores the red, green, blue, and alpha components
on 32 bits. Every component is stored as floating point.
RGB5_A1 stores the red, green, blue, and alpha components
respectively on 5 bits, 5 bits, 5 bits, and 1 bit.
RGB10_A2 stores the red, green, blue, and alpha components
respectively on 10 bits, 10 bits, 10 bits and 2 bits.
RGB10_A2UI stores the red, green, blue, and alpha
components respectively on 10 bits, 10 bits, 10 bits and 2 bits. Every
component is stored as an unsigned integer.
SRGB8 stores the red, green, and blue components on 8 bits
each.
SRGB8_ALPHA8 stores the red, green, blue, and alpha
components on 8 bits each.
DEPTH_COMPONENT32F stores the depth component on 32 bits.
The component is stored as floating point.
DEPTH24_STENCIL8 stores the depth, and stencil components
respectively on 24 bits and 8 bits. The stencil component is stored as an
unsigned integer.
DEPTH32F_STENCIL8 stores the depth, and stencil components
respectively on 32 bits and 8 bits. The depth component is stored as
floating point, and the stencil component as an unsigned integer.
Used to define the color space of textures (and the output color space of
the renderer).
If the color space type is changed after the texture has already been used
by a material, you will need to set Material.needsUpdate to true to make the material recompile.
This class stores data for an attribute (such as vertex positions, face
indices, normals, colors, UVs, and any custom attributes ) associated with
a BufferGeometry, which allows for more efficient passing of data
to the GPU. See that page for details and a usage example. When working
with vector-like data, the .fromBufferAttribute( attribute, index )
helper methods on Vector2,
Vector3,
Vector4, and
Color classes may be helpful.
array -- Must be a
TypedArray. Used to instantiate the buffer.
This array should have
itemSize * numVertices
elements, where numVertices is the number of vertices in the associated
BufferGeometry.
itemSize -- the number of values of the array that should
be associated with a particular vertex. For instance, if this attribute is
storing a 3-component vector (such as a position, normal, or color), then
itemSize should be 3.
normalized -- (optional) Applies to integer data only.
Indicates how the underlying data in the buffer maps to the values in the
GLSL code. For instance, if array is an instance of
Uint16Array, and normalized is true, the values 0 - +65535 in the array data will be mapped to 0.0f - +1.0f in the GLSL
attribute. An Int16Array (signed) would map from -32768 - +32767 to -1.0f
- +1.0f. If normalized is false, the values will be
converted to floats unmodified, i.e. 32767 becomes 32767.0f.
Represents the number of items this buffer attribute stores. It is internally computed by dividing the array's length by the
itemSize. Read-only property.
Array of objects containing: start: Position at which to start
update. count: The number of components to update.
This can be used to only update some components of stored vectors (for
example, just the component related to color). Use the addUpdateRange
function to add ranges to this array.
Defines the intended usage pattern of the data store for optimization
purposes. Corresponds to the usage parameter of
WebGLRenderingContext.bufferData(). Default is StaticDrawUsage. See usage constants for all
possible values.
Note: After the initial use of a buffer, its usage cannot be changed.
Instead, instantiate a new one and set the desired usage before the next
render.
A representation of mesh, line, or point geometry. Includes vertex
positions, face indices, normals, colors, UVs, and custom attributes
within buffers, reducing the cost of passing all this data to the GPU.
To read and edit data in BufferGeometry attributes, see
BufferAttribute documentation.
Code Example
const geometry = new THREE.BufferGeometry();

// create a simple square shape. We duplicate the top left and bottom right
// vertices because each vertex needs to appear once per triangle.
const vertices = new Float32Array( [
	-1.0, -1.0,  1.0, // v0
	 1.0, -1.0,  1.0, // v1
	 1.0,  1.0,  1.0, // v2

	 1.0,  1.0,  1.0, // v3
	-1.0,  1.0,  1.0, // v4
	-1.0, -1.0,  1.0  // v5
] );

// itemSize = 3 because there are 3 values (components) per vertex
geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
const material = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
const mesh = new THREE.Mesh( geometry, material );
Code Example (Index)
const geometry = new THREE.BufferGeometry();

const vertices = new Float32Array( [
	-1.0, -1.0,  1.0, // v0
	 1.0, -1.0,  1.0, // v1
	 1.0,  1.0,  1.0, // v2
	-1.0,  1.0,  1.0, // v3
] );

const indices = [
	0, 1, 2,
	2, 3, 0,
];

geometry.setIndex( indices );
geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );

const material = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
const mesh = new THREE.Mesh( geometry, material );
This hashmap stores the geometry's attributes: the key is the attribute's
name and the value is the buffer attribute itself. Rather than accessing this
property directly, use .setAttribute and .getAttribute to
access attributes of this geometry.
Determines the part of the geometry to render. This should not be set
directly, instead use .setDrawRange. Default is
{ start:0, count:Infinity}
For non-indexed BufferGeometry, count is the number of vertices to render.
For indexed BufferGeometry, count is the number of indices to render.
Split the geometry into groups, each of which will be rendered in a
separate WebGL draw call. This allows an array of materials to be used
with the geometry.
Each group is an object of the form:
{ start: Integer, count: Integer, materialIndex: Integer }
where start specifies the first element in this draw call – the first
vertex for non-indexed geometry, otherwise the first triangle index. Count
specifies how many vertices (or indices) are included, and materialIndex
specifies the material array index to use.
Use .addGroup to add groups, rather than modifying this array
directly.
Every vertex and index must belong to exactly one group — groups must not
share vertices or indices, and must not leave vertices or indices unused.
Allows for vertices to be re-used across multiple triangles; this is
called using "indexed triangles". Each triangle is associated with the
indices of three vertices. This attribute therefore stores the index of
each vertex for each triangular face. If this attribute is not set, the
renderer assumes that every three contiguous positions
represent a single triangle. Default is null.
Hashmap of BufferAttributes holding details of the geometry's morph
targets.
Note: Once the geometry has been rendered, the morph attribute data cannot
be changed. You will have to call .dispose(), and create a new
instance of BufferGeometry.
Used to control the morph target behavior; when set to true, the morph
target data is treated as relative offsets, rather than as absolute
positions/normals. Default is false.
An object that can be used to store custom data about the BufferGeometry.
It should not hold references to functions as these will not be cloned.
Default is an empty object {}.
Computes the bounding box of the geometry, and updates the .boundingBox attribute.
The bounding box is not computed by the engine; it must be computed by your app.
You may need to recompute the bounding box if the geometry vertices are modified.
Computes the bounding sphere of the geometry, and updates the .boundingSphere attribute.
The engine automatically computes the bounding sphere when it is needed, e.g., for ray casting or view frustum culling.
You may need to recompute the bounding sphere if the geometry vertices are modified.
Calculates and adds a tangent attribute to this geometry.
The computation is only supported for indexed geometries and if position,
normal, and uv attributes are defined. When using a tangent space normal
map, prefer the MikkTSpace algorithm provided by
BufferGeometryUtils.computeMikkTSpaceTangents instead.
Computes vertex normals for the given vertex data. For indexed geometries, the method sets each vertex normal to be the average of the face normals of the faces that share that vertex.
For non-indexed geometries, vertices are not shared, and the method sets each vertex normal to be the same as the face normal.
Rotates the geometry to face a point in space. This is typically done as a
one time operation, and not during a loop. Use Object3D.lookAt for
typical real-time mesh usage.
Rotate the geometry about the X axis. This is typically done as a one time
operation, and not during a loop. Use Object3D.rotation for typical
real-time mesh rotation.
Rotate the geometry about the Y axis. This is typically done as a one time
operation, and not during a loop. Use Object3D.rotation for typical
real-time mesh rotation.
Rotate the geometry about the Z axis. This is typically done as a one time
operation, and not during a loop. Use Object3D.rotation for typical
real-time mesh rotation.
# .scale ( x : Float, y : Float, z : Float ) : this
Scale the geometry data. This is typically done as a one time operation,
and not during a loop. Use Object3D.scale for typical real-time
mesh scaling.
Sets an attribute to this geometry. Use this rather than the attributes
property, because an internal hashmap of .attributes is maintained
to speed up iterating over attributes.
Set the .drawRange property. For non-indexed BufferGeometry, count
is the number of vertices to render. For indexed BufferGeometry, count is
the number of indices to render.
Defines a geometry by creating a position attribute based on the given array of points. The array can hold
instances of Vector2 or Vector3. When using two-dimensional data, the z coordinate for all vertices is set to 0.
If the method is used with an existing position attribute, the vertex data are overwritten with the data from the array. The length of the
array must match the vertex count.
Return a non-indexed version of an indexed BufferGeometry.
# .translate ( x : Float, y : Float, z : Float ) : this
Translate the geometry. This is typically done as a one time operation,
and not during a loop. Use Object3D.position for typical real-time
mesh translation.
Gets the seconds passed since the clock started and sets .oldTime to
the current time.
If .autoStart is true and the clock is not running, also starts
the clock.
Gets the seconds passed since the time .oldTime was set and sets
.oldTime to the current time.
If .autoStart is true and the clock is not running, also starts
the clock.
// Adding events to a custom object
class Car extends EventDispatcher {
	start() {
		this.dispatchEvent( { type: 'start', message: 'vroom vroom!' } );
	}
};

// Using events with the custom object
const car = new Car();
car.addEventListener( 'start', function ( event ) {
	alert( event.message );
} );
car.start();
This buffer attribute class does not construct a VBO. Instead, it uses
whatever VBO is passed in constructor and can later be altered via the
buffer property.
It is required to pass additional params alongside the VBO. Those are: the
GL context, the GL data type, the number of components per vertex, the
number of bytes per component, and the number of vertices.
The most common use case for this class is when some kind of GPGPU
calculation interferes with, or even produces, the VBOs in question.
buffer — Must be a
WebGLBuffer.
type — One of
WebGL Data Types.
itemSize — The number of values of the array that should be associated
with a particular vertex. For instance, if this attribute is storing a
3-component vector (such as a position, normal, or color), then itemSize
should be 3.
elementSize — 1, 2 or 4. The corresponding size (in bytes) for the given
"type" param.
Defines how often a value of this buffer attribute should be repeated. A
value of one means that each value of the instanced attribute is used for
a single instance. A value of two means that each value is used for two
consecutive instances (and so on). Default is 1.
Defines the intended usage pattern of the data store for optimization
purposes. Corresponds to the usage parameter of
WebGLRenderingContext.bufferData().
The value of data.count. If the
buffer is storing a 3-component item (such as a position, normal, or
color), then this will count the number of such items stored.
A Layers object assigns an Object3D to 1 or more of 32
layers numbered 0 to 31 - internally the layers are stored as a
bit mask, and by
default all Object3Ds are a member of layer 0.
This can be used to control visibility - an object must share a layer with
a camera to be visible when that camera's view is
rendered.
All classes that inherit from Object3D have an
Object3D.layers property which is an instance of this class.
This is the base class for most objects in three.js and provides a set of
properties and methods for manipulating objects in 3D space.
Note that this can be used for grouping objects via the .add( object ) method which adds the object as a child, however it is better to
use Group for this.
Custom depth material to be used when rendering to the depth map. Can only
be used in the context of meshes. When shadow-casting with a
DirectionalLight or SpotLight, if you are modifying vertex
positions in the vertex shader you must specify a customDepthMaterial for
proper shadows. Default is undefined.
When this is set, it checks every frame if the object is in the frustum of
the camera before rendering the object. If set to false the object gets
rendered every frame even if it is not in the frustum of the camera.
Default is true.
The layer membership of the object. The object is only visible if it has
at least one layer in common with the Camera in use. This property
can also be used to filter out unwanted objects in ray-intersection tests
when using Raycaster.
When this is set, it calculates the matrix of position, (rotation or
quaternion) and scale every frame and also recalculates the matrixWorld
property. Default is Object3D.DEFAULT_MATRIX_AUTO_UPDATE (true).
If set, the renderer checks every frame whether the object and its
children need matrix updates. When it isn't set, you have to maintain all
matrices of the object and its children yourself. Default is
Object3D.DEFAULT_MATRIX_WORLD_AUTO_UPDATE (true).
This is passed to the shader and used to calculate lighting for the
object. It is the transpose of the inverse of the upper left 3x3
sub-matrix of this object's modelViewMatrix.
The reason for this special matrix is that simply using the
modelViewMatrix could result in a non-unit length of normals (on scaling)
or in a non-perpendicular direction (on non-uniform scaling).
On the other hand the translation part of the modelViewMatrix is not
relevant for the calculation of normals. Thus a Matrix3 is sufficient.
An optional callback that is executed immediately after a 3D object is
rendered. This function is called with the following parameters: renderer,
scene, camera, geometry, material, group.
Please notice that this callback is only executed for renderable 3D
objects. Meaning 3D objects which define their visual appearance with
geometries and materials like instances of Mesh, Line,
Points or Sprite. Instances of Object3D, Group
or Bone are not renderable and thus this callback is not executed
for such objects.
An optional callback that is executed immediately after a 3D object is
rendered to a shadow map. This function is called with the following parameters: renderer,
scene, camera, shadowCamera, geometry, depthMaterial, group.
Please notice that this callback is only executed for renderable 3D
objects. Meaning 3D objects which define their visual appearance with
geometries and materials like instances of Mesh, Line,
Points or Sprite. Instances of Object3D, Group
or Bone are not renderable and thus this callback is not executed
for such objects.
An optional callback that is executed immediately before a 3D object is
rendered. This function is called with the following parameters: renderer,
scene, camera, geometry, material, group.
Please notice that this callback is only executed for renderable 3D
objects. Meaning 3D objects which define their visual appearance with
geometries and materials like instances of Mesh, Line,
Points or Sprite. Instances of Object3D, Group
or Bone are not renderable and thus this callback is not executed
for such objects.
An optional callback that is executed immediately before a 3D object is
rendered to a shadow map. This function is called with the following parameters: renderer,
scene, camera, shadowCamera, geometry, depthMaterial, group.
Please notice that this callback is only executed for renderable 3D
objects. Meaning 3D objects which define their visual appearance with
geometries and materials like instances of Mesh, Line,
Points or Sprite. Instances of Object3D, Group
or Bone are not renderable and thus this callback is not executed
for such objects.
This value allows the default rendering order of
scene graph objects to be
overridden although opaque and transparent objects remain sorted
independently. When this property is set for an instance of Group, all descendant objects will be sorted and rendered together.
Sorting is from lowest to highest renderOrder. Default value is 0.
An object that can be used to store custom data about the Object3D. It
should not hold references to functions as these will not be cloned.
Default is an empty object {}.
Static properties and methods are defined per class rather than per
instance of that class. This means that changing
Object3D.DEFAULT_UP or Object3D.DEFAULT_MATRIX_AUTO_UPDATE
will change the values of up and matrixAutoUpdate for every instance of Object3D (or derived classes)
created after the change has been made (already created Object3Ds will not
be affected).
The default up direction for objects, also used as the default
position for DirectionalLight, HemisphereLight and
SpotLight (which creates lights shining from the top down).
Set to ( 0, 1, 0 ) by default.
Adds object as child of this object. An arbitrary number of objects may
be added. Any current parent on an object passed in here will be removed,
since an object can have at most one parent.
recursive -- If set to true, descendants of the object are copied next to the existing ones.
If set to false, descendants are left unchanged. Default is true.
Copies the given object into this object. Note: Event listeners and
user-defined callbacks (.onAfterRender and .onBeforeRender)
are not copied.
Searches through an object and its children, starting with the object
itself, and returns the first with a matching id.
Note that ids are assigned in chronological order: 1, 2, 3, ...,
incrementing by one for each new object.
name -- String to match to the children's Object3D.name property.
Searches through an object and its children, starting with the object
itself, and returns the first with a matching name.
Note that for most objects the name is an empty string by default. You
will have to set it manually to make use of this method.
name -- the property name to search for.
value -- value of the given property.
optionalTarget -- (optional) target to set the result.
Otherwise a new Array is instantiated. If set, you must clear this
array prior to each call (i.e., array.length = 0;).
Searches through an object and its children, starting with the object
itself, and returns all the objects with a property that matches the value
given.
Abstract (empty) method to get intersections between a casted ray and this
object. Subclasses such as Mesh, Line, and Points
implement this method in order to use raycasting.
callback - A function that is called with an Object3D as its first argument.
Like traverse, but the callback will only be executed for visible objects.
Descendants of invisible objects are not traversed.
Note: Modifying the scene graph inside the callback is discouraged.
force - A boolean that can be used to bypass
.matrixWorldAutoUpdate, to recalculate the world matrix of the
object and descendants on the current frame. Useful if you cannot wait for
the renderer to update it on the next frame (assuming
.matrixWorldAutoUpdate is set to true).
Updates the global transform of the object and its descendants if the
world matrix needs update (.matrixWorldNeedsUpdate set to true) or
if the force parameter is set to true.
This class is designed to assist with
raycasting. Raycasting is
used for mouse picking (working out what objects in the 3d space the mouse
is over) amongst other things.
Code Example
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

function onPointerMove( event ) {

	// calculate pointer position in normalized device coordinates
	// (-1 to +1) for both components
	pointer.x = ( event.clientX / window.innerWidth ) * 2 - 1;
	pointer.y = - ( event.clientY / window.innerHeight ) * 2 + 1;

}

function render() {

	// update the picking ray with the camera and pointer position
	raycaster.setFromCamera( pointer, camera );

	// calculate objects intersecting the picking ray
	const intersects = raycaster.intersectObjects( scene.children );

	for ( let i = 0; i < intersects.length; i ++ ) {

		intersects[ i ].object.material.color.set( 0xff0000 );

	}

	renderer.render( scene, camera );

}

window.addEventListener( 'pointermove', onPointerMove );
window.requestAnimationFrame( render );
Raycaster( origin : Vector3, direction : Vector3, near : Float, far : Float )
origin — The origin vector where the ray casts from.
direction — The direction vector that gives direction to the ray. Should be normalized.
near — All results returned are further away than near. Near can't be negative. Default value is 0.
far — All results returned are closer than far. Far can't be lower than near. Default value is Infinity.
The far factor of the raycaster. This value indicates which objects can be
discarded based on the distance. This value shouldn't be negative and
should be larger than the near property.
The near factor of the raycaster. This value indicates which objects can
be discarded based on the distance. This value shouldn't be negative and
should be smaller than the far property.
The camera to use when raycasting against view-dependent objects such as
billboarded objects like Sprites. This field can be set manually or
is set when calling "setFromCamera". Defaults to null.
Used by Raycaster to selectively ignore 3D objects when performing
intersection tests. The following code example ensures that only 3D
objects on layer 1 will be honored by the instance of Raycaster.
raycaster.layers.set( 1 );
object.layers.enable( 1 );
An object with the following properties:
{
	Mesh: {},
	Line: { threshold: 1 },
	LOD: {},
	Points: { threshold: 1 },
	Sprite: {}
}
Where threshold is the precision of the raycaster when intersecting
objects, in world units.
coords — 2D coordinates of the mouse, in normalized device coordinates (NDC) — X and Y components should be between -1 and 1.
camera — camera from which the ray should originate
object — The object to check for intersection with the ray.
recursive — If true, it also checks all descendants. Otherwise it only checks intersection with the object. Default is true.
optionalTarget — (optional) target to set the result. Otherwise a new Array is instantiated. If set, you must clear this array prior to each call (i.e., array.length = 0;).
Checks all intersections between the ray and the object with or without the
descendants. Intersections are returned sorted by distance, closest first.
An array of intersections is returned...
[ { distance, point, face, faceIndex, object }, ... ]
distance – distance between the origin of the ray and the intersection
point – point of intersection, in world coordinates
face – intersected face
faceIndex – index of the intersected face
object – the intersected object
uv – U,V coordinates at point of intersection
uv1 – Second set of U,V coordinates at point of intersection
normal – interpolated normal vector at point of intersection
instanceId – The index number of the instance where the ray intersects the InstancedMesh
Raycaster delegates to the raycast method of the
passed object, when evaluating whether the ray intersects the object or
not. This allows meshes to respond differently to ray casting
than lines and pointclouds.
Note that for meshes, faces must be pointed towards the origin of the
ray in order to be detected; intersections of the ray passing
through the back of a face will not be detected. To raycast against both
faces of an object, you'll want to set the material's
side property to THREE.DoubleSide.
objects — The objects to check for intersection with the ray.
recursive — If true, it also checks all descendants of the objects. Otherwise it only checks intersection with the objects. Default is true.
optionalTarget — (optional) target to set the result. Otherwise a new Array is instantiated. If set, you must clear this array prior to each call (i.e., array.length = 0;).
Checks all intersections between the ray and the objects with or without
the descendants. Intersections are returned sorted by distance, closest
first. Intersections are of the same form as those returned by
.intersectObject.
Each uniform must have a value property. The type of the value must
correspond to the type of the uniform variable in the GLSL code as
specified for the primitive GLSL types in the table below. Uniform
structures and arrays are also supported. GLSL arrays of primitive type
must either be specified as an array of the corresponding THREE objects or
as a flat array containing the data of all the objects. In other words,
GLSL primitives in arrays must not be represented by arrays. This rule
does not apply transitively. An array of vec2 arrays, each with a length
of five vectors, must be an array of arrays, of either five Vector2
objects or ten numbers.
(*) Same for an (innermost) array (dimension) of the same GLSL type,
containing the components of all vectors or matrices in the array.
Structured Uniforms
Sometimes you want to organize uniforms as structs in your shader code.
The following style must be used so three.js is able to process
structured uniform data.
Returns a clone of this uniform.
If the uniform's value property is an Object with a clone() method,
this is used, otherwise the value is copied by assignment. Array values
are shared between cloned Uniforms.
array -- this can be a typed or untyped (normal) array or an integer
length. An array value will be converted to the Type specified. If a
length is given, a new TypedArray will be created, initialized with all
elements set to zero.
itemSize -- the number of values of the array that should be associated
with a particular vertex.
normalized -- (optional) indicates how the underlying data in the buffer
maps to the values in the GLSL code.
This object defines what type of actions are assigned to the available mouse buttons.
It depends on the control implementation what kind of mouse buttons and actions are supported.
Default is { LEFT: null, MIDDLE: null, RIGHT: null }.
Possible buttons are: LEFT, MIDDLE, RIGHT.
Possible actions are defined in the Constants page.
This object defines what type of actions are assigned to what kind of touch interaction.
It depends on the control implementation what kind of touch interaction and actions are supported.
Default is { ONE: null, TWO: null }.
Possible buttons are: ONE, TWO.
Possible actions are defined in the Constants page.
data -- A flat array of vertex coordinates.
holeIndices -- An array of hole indices if any.
dim -- The number of coordinates per vertex in the input array.
Triangulates the given shape definition by returning an array of
triangles. A triangle is defined by three consecutive integers
representing vertex indices.
This class generates a Prefiltered, Mipmapped Radiance Environment Map
(PMREM) from a cubeMap environment texture. This allows different levels
of blur to be quickly accessed based on material roughness. Unlike a
traditional mipmap chain, it only goes down to the LOD_MIN level (above),
and then creates extra even more filtered 'mips' at the same LOD_MIN
resolution, associated with higher roughness levels. In this way we
maintain resolution to smoothly interpolate diffuse lighting while
limiting sampling computation.
Note: The minimum MeshStandardMaterial's roughness depends on the
size of the provided texture. If your render has small dimensions or the
shiny parts have a lot of curvature, you may still be able to get away
with a smaller texture size.
scene - The given scene.
sigma - (optional) Specifies a blur radius in radians to be applied to the scene before PMREM generation. Default is 0.
near - (optional) The near plane value. Default is 0.1.
far - (optional) The far plane value. Default is 100.
Generates a PMREM from a supplied Scene, which can be faster than using an
image if networking bandwidth is low. Optional near and far planes ensure
the scene is rendered in its entirety (the cubeCamera is placed at the
origin).
Pre-compiles the equirectangular shader. You can get faster start-up by
invoking this method during your texture's network fetch for increased
concurrency.
Scales the texture as large as possible within its surface without cropping or stretching the texture. The method preserves the original aspect ratio of the texture. Akin to CSS object-fit: contain.
Scales the texture to the smallest possible size to fill the surface, leaving no empty space. The method preserves the original aspect ratio of the texture. Akin to CSS object-fit: cover.
This value determines the amount of divisions when calculating the
cumulative segment lengths of a curve via .getLengths. To ensure
precision when using methods like .getSpacedPoints, it is
recommended to increase .arcLengthDivisions if the curve is very
large. Default is 200.
t - A position on the curve. Must be in the range [ 0, 1 ].
optionalTarget — (optional) If specified, the result will be
copied into this Vector, otherwise a new Vector will be created.
Returns a vector for a given position on the curve.
u - A position on the curve according to the arc length. Must be in the range [ 0, 1 ].
optionalTarget — (optional) If specified, the result will be copied into this Vector, otherwise a new Vector will be created.
Returns a vector for a given position on the curve according to the arc
length.
Update the cumulative segment distance cache. The method must be called
every time curve parameters are changed. If an updated curve is part of a
composed curve like CurvePath, .updateArcLengths() must be
called on the composed curve, too.
Given u in the range ( 0 .. 1 ), returns t also in the range
( 0 .. 1 ). u and t can then be used to give you points which are
equidistant from the ends of the curve, using .getPoint.
t - A position on the curve. Must be in the range [ 0, 1 ].
optionalTarget — (optional) If specified, the result will be
copied into this Vector, otherwise a new Vector will be created.
Returns a unit vector tangent at t. If the derived curve does not
implement its tangent derivation, two points a small delta apart will be
used to find its gradient which seems to give a reasonable approximation.
u - A position on the curve according to the arc length. Must be in the range [ 0, 1 ].
optionalTarget — (optional) If specified, the result will be copied into this Vector, otherwise a new Vector will be created.
Returns tangent at a point which is equidistant to the ends of the curve
from the point given in .getTangent.
divisions -- number of pieces to divide the curve into. Default is
12.
Returns an array of points representing a sequence of curves. The
divisions parameter defines the number of pieces each curve is divided
into. However, for optimization and quality purposes, the actual sampling
resolution for each curve depends on its type. For example, for a
LineCurve, the returned number of points is always just 2.
# .absarc ( x : Float, y : Float, radius : Float, startAngle : Float, endAngle : Float, clockwise : Boolean ) : this
x, y -- The absolute center of the arc.
radius -- The radius of the arc.
startAngle -- The start angle in radians.
endAngle -- The end angle in radians.
clockwise -- Sweep the arc clockwise. Defaults to false.
Adds an absolutely positioned EllipseCurve to the
path.
# .absellipse ( x : Float, y : Float, xRadius : Float, yRadius : Float, startAngle : Float, endAngle : Float, clockwise : Boolean, rotation : Float ) : this
x, y -- The absolute center of the ellipse.
xRadius -- The radius of the ellipse in the x axis.
yRadius -- The radius of the ellipse in the y axis.
startAngle -- The start angle in radians.
endAngle -- The end angle in radians.
clockwise -- Sweep the ellipse clockwise. Defaults to false.
rotation -- The rotation angle of the ellipse in radians, counterclockwise
from the positive X axis. Optional, defaults to 0.
Adds an absolutely positioned EllipseCurve to the
path.
# .arc ( x : Float, y : Float, radius : Float, startAngle : Float, endAngle : Float, clockwise : Boolean ) : this
x, y -- The center of the arc offset from the last call.
radius -- The radius of the arc.
startAngle -- The start angle in radians.
endAngle -- The end angle in radians.
clockwise -- Sweep the arc clockwise. Defaults to false.
# .bezierCurveTo ( cp1X : Float, cp1Y : Float, cp2X : Float, cp2Y : Float, x : Float, y : Float ) : this
This creates a bezier curve from .currentPoint with (cp1X, cp1Y)
and (cp2X, cp2Y) as control points and updates .currentPoint to x
and y.
# .ellipse ( x : Float, y : Float, xRadius : Float, yRadius : Float, startAngle : Float, endAngle : Float, clockwise : Boolean, rotation : Float ) : this
x, y -- The center of the ellipse offset from the last call.
xRadius -- The radius of the ellipse in the x axis.
yRadius -- The radius of the ellipse in the y axis.
startAngle -- The start angle in radians.
endAngle -- The end angle in radians.
clockwise -- Sweep the ellipse clockwise. Defaults to false.
rotation -- The rotation angle of the ellipse in radians, counterclockwise
from the positive X axis. Optional, defaults to 0.
Defines an arbitrary 2d shape plane using paths with optional holes. It
can be used with ExtrudeGeometry, ShapeGeometry, to get
points, or to get triangulated faces.
This creates a line from the currentPath's
offset to X and Y and updates the offset to X and Y.
# .quadraticCurveTo ( cpX : Float, cpY : Float, x : Float, y : Float ) : this
This creates a quadratic curve from the currentPath's offset to x and y with cpX and cpY as control point and
updates the currentPath's offset to x and y.
# .bezierCurveTo ( cp1X : Float, cp1Y : Float, cp2X : Float, cp2Y : Float, x : Float, y : Float ) : this
This creates a bezier curve from the currentPath's offset to x and y with cp1X, cp1Y and cp2X, cp2Y as control
points and updates the currentPath's offset
to x and y.
isCCW -- Changes how solids and holes are generated
Converts the subPaths array into an array of
Shapes. By default solid shapes are defined clockwise (CW) and holes are
defined counterclockwise (CCW). If isCCW is set to true, then those are
flipped.
Create a smooth 3d spline curve from a series of points using the
Catmull-Rom algorithm.
Code Example
// Create a closed wavey loop
const curve = new THREE.CatmullRomCurve3( [
	new THREE.Vector3( -10, 0, 10 ),
	new THREE.Vector3( -5, 5, 5 ),
	new THREE.Vector3( 0, 0, 0 ),
	new THREE.Vector3( 5, -5, 5 ),
	new THREE.Vector3( 10, 0, 10 )
] );

const points = curve.getPoints( 50 );
const geometry = new THREE.BufferGeometry().setFromPoints( points );

const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// Create the final object to add to the scene
const curveObject = new THREE.Line( geometry, material );
points – An array of Vector3 points
closed – Whether the curve is closed. Default is false.
curveType – Type of the curve. Default is centripetal.
tension – Tension of the curve. Default is 0.5.
Create a smooth 2d
cubic bezier curve, defined by a start point, endpoint and two control points.
Code Example
const curve = new THREE.CubicBezierCurve(
	new THREE.Vector2( -10, 0 ),
	new THREE.Vector2( -5, 15 ),
	new THREE.Vector2( 20, 15 ),
	new THREE.Vector2( 10, 0 )
);

const points = curve.getPoints( 50 );
const geometry = new THREE.BufferGeometry().setFromPoints( points );

const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// Create the final object to add to the scene
const curveObject = new THREE.Line( geometry, material );
Create a smooth 3d
cubic bezier curve, defined by a start point, endpoint and two control points.
Code Example
const curve = new THREE.CubicBezierCurve3(
	new THREE.Vector3( -10, 0, 0 ),
	new THREE.Vector3( -5, 15, 0 ),
	new THREE.Vector3( 20, 15, 0 ),
	new THREE.Vector3( 10, 0, 0 )
);

const points = curve.getPoints( 50 );
const geometry = new THREE.BufferGeometry().setFromPoints( points );

const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// Create the final object to add to the scene
const curveObject = new THREE.Line( geometry, material );
aX – The X center of the ellipse. Default is 0.
aY – The Y center of the ellipse. Default is 0.
xRadius – The radius of the ellipse in the x direction. Default is 1.
yRadius – The radius of the ellipse in the y direction. Default is 1.
aStartAngle – The start angle of the curve in radians starting from the positive X axis. Default is 0.
aEndAngle – The end angle of the curve in radians starting from the positive X axis. Default is 2 x Math.PI.
aClockwise – Whether the ellipse is drawn clockwise. Default is false.
aRotation – The rotation angle of the ellipse in radians, counterclockwise from the positive X axis (optional). Default is 0.
Create a smooth 2d
quadratic bezier curve, defined by a startpoint, endpoint and a single control point.
Code Example
const curve = new THREE.QuadraticBezierCurve(
	new THREE.Vector2( -10, 0 ),
	new THREE.Vector2( 20, 15 ),
	new THREE.Vector2( 10, 0 )
);

const points = curve.getPoints( 50 );
const geometry = new THREE.BufferGeometry().setFromPoints( points );

const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// Create the final object to add to the scene
const curveObject = new THREE.Line( geometry, material );
Create a smooth 3D quadratic Bézier curve, defined by a start point, an end point and a single control point.
Code Example
const curve = new THREE.QuadraticBezierCurve3(
	new THREE.Vector3( -10, 0, 0 ),
	new THREE.Vector3( 20, 15, 0 ),
	new THREE.Vector3( 10, 0, 0 )
);

const points = curve.getPoints( 50 );
const geometry = new THREE.BufferGeometry().setFromPoints( points );

const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// Create the final object to add to the scene
const curveObject = new THREE.Line( geometry, material );
Create a smooth 2D spline curve from a series of points. Internally this uses Interpolations.CatmullRom to create the curve.
Code Example
// Create a sine-like wave
const curve = new THREE.SplineCurve( [
	new THREE.Vector2( -10, 0 ),
	new THREE.Vector2( -5, 5 ),
	new THREE.Vector2( 0, 0 ),
	new THREE.Vector2( 5, -5 ),
	new THREE.Vector2( 10, 0 )
] );

const points = curve.getPoints( 50 );
const geometry = new THREE.BufferGeometry().setFromPoints( points );

const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// Create the final object to add to the scene
const splineObject = new THREE.Line( geometry, material );
Constructor
SplineCurve( points : Array )
points – An array of Vector2 points that define the curve.
BoxGeometry is a geometry class for a rectangular cuboid with a given 'width',
'height', and 'depth'. On creation, the cuboid is centred on the origin,
with each edge parallel to one of the axes.
Code Example
const geometry = new THREE.BoxGeometry( 1, 1, 1 );
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
const cube = new THREE.Mesh( geometry, material );
scene.add( cube );
width — Width; that is, the length of the edges parallel to the X axis.
Optional; defaults to 1.
height — Height; that is, the length of the edges parallel to the Y axis.
Optional; defaults to 1.
depth — Depth; that is, the length of the edges parallel to the Z axis.
Optional; defaults to 1.
widthSegments — Number of segmented rectangular faces along the width of
the sides. Optional; defaults to 1.
heightSegments — Number of segmented rectangular faces along the height of
the sides. Optional; defaults to 1.
depthSegments — Number of segmented rectangular faces along the depth of
the sides. Optional; defaults to 1.
Properties
See the base BufferGeometry class for common properties.
radius — Radius of the capsule. Optional; defaults to 1.
length — Length of the middle section. Optional; defaults to 1.
capSegments — Number of curve segments used to build the caps. Optional;
defaults to 4.
radialSegments — Number of segmented faces around the circumference of the
capsule. Optional; defaults to 8.
Properties
See the base BufferGeometry class for common properties.
CircleGeometry is a simple shape of Euclidean geometry. It is constructed from a
number of triangular segments that are oriented around a central point and
extend as far out as a given radius. It is built counter-clockwise from a
start angle and a given central angle. It can also be used to create
regular polygons, where the number of segments determines the number of
sides.
Code Example
const geometry = new THREE.CircleGeometry( 5, 32 );
const material = new THREE.MeshBasicMaterial( { color: 0xffff00 } );
const circle = new THREE.Mesh( geometry, material );
scene.add( circle );
radius — Radius of the circle, default = 1.
segments — Number of segments (triangles), minimum = 3, default = 32.
thetaStart — Start angle for first segment, default = 0 (three o'clock
position).
thetaLength — The central angle, often called theta, of the circular
sector. The default is 2*Pi, which makes for a complete circle.
Properties
See the base BufferGeometry class for common properties.
radius — Radius of the cone base. Default is 1.
height — Height of the cone. Default is 1.
radialSegments — Number of segmented faces around the circumference of the
cone. Default is 32.
heightSegments — Number of rows of faces along the height of the cone.
Default is 1.
openEnded — A Boolean indicating whether the base of the cone is open or
capped. Default is false, meaning capped.
thetaStart — Start angle for first segment, default = 0 (three o'clock
position).
thetaLength — The central angle, often called theta, of the circular
sector. The default is 2*Pi, which makes for a complete cone.
radiusTop — Radius of the cylinder at the top. Default is 1.
radiusBottom — Radius of the cylinder at the bottom. Default is 1.
height — Height of the cylinder. Default is 1.
radialSegments — Number of segmented faces around the circumference of the
cylinder. Default is 32.
heightSegments — Number of rows of faces along the height of the cylinder.
Default is 1.
openEnded — A Boolean indicating whether the ends of the cylinder are open
or capped. Default is false, meaning capped.
thetaStart — Start angle for first segment, default = 0 (three o'clock
position).
thetaLength — The central angle, often called theta, of the circular
sector. The default is 2*Pi, which makes for a complete cylinder.
Properties
See the base BufferGeometry class for common properties.
radius — Radius of the dodecahedron. Default is 1.
detail — Default is 0. Setting this to a value greater than 0 adds
vertices making it no longer a dodecahedron.
geometry — Any geometry object.
thresholdAngle — An edge is only rendered if the angle (in degrees)
between the face normals of the adjoining faces exceeds this value.
default = 1 degree.
Properties
See the base BufferGeometry class for common properties.
shapes — Shape or an array of shapes.
options — Object that can contain the following parameters.
curveSegments — int. Number of points on the curves. Default is 12.
steps — int. Number of points used for subdividing segments along the
depth of the extruded spline. Default is 1.
depth — float. Depth to extrude the shape. Default is 1.
bevelEnabled — bool. Apply beveling to the shape. Default is true.
bevelThickness — float. How deep into the original shape the bevel goes.
Default is 0.2.
bevelSize — float. Distance from the shape outline that the bevel
extends. Default is bevelThickness - 0.1.
bevelOffset — float. Distance from the shape outline that the bevel
starts. Default is 0.
bevelSegments — int. Number of bevel layers. Default is 3.
extrudePath — THREE.Curve. A 3D spline path along which the shape should
be extruded. Bevels not supported for path extrusion.
UVGenerator — Object. An object that provides UV generator functions.
This object extrudes a 2D shape to a 3D geometry.
When creating a Mesh with this geometry, if you'd like to have a separate
material used for its face and its extruded sides, you can use an array of
materials. The first material will be applied to the face; the second
material will be applied to the sides.
Properties
See the base BufferGeometry class for common properties.
radius — Default is 1.
detail — Default is 0. Setting this to a value greater than 0 adds more
vertices making it no longer an icosahedron. When detail is greater than
1, it's effectively a sphere.
Creates meshes with axial symmetry like vases. The lathe rotates around
the Y axis.
Code Example
const points = [];
for ( let i = 0; i < 10; i ++ ) {
	points.push( new THREE.Vector2( Math.sin( i * 0.2 ) * 10 + 5, ( i - 5 ) * 2 ) );
}
const geometry = new THREE.LatheGeometry( points );
const material = new THREE.MeshBasicMaterial( { color: 0xffff00 } );
const lathe = new THREE.Mesh( geometry, material );
scene.add( lathe );
points — Array of Vector2s. The x-coordinate of each point must be greater
than zero. Default is an array with (0,-0.5), (0.5,0) and (0,0.5) which
creates a simple diamond shape.
segments — the number of circumference segments to generate. Default is
12.
phiStart — the starting angle in radians. Default is 0.
phiLength — the radian (0 to 2PI) range of the lathed section; 2PI is a
closed lathe, less than 2PI is a portion. Default is 2PI.
This creates a LatheGeometry based on the parameters.
Properties
See the base BufferGeometry class for common properties.
radius — Radius of the octahedron. Default is 1.
detail — Default is 0. Setting this to a value greater than zero adds
vertices, making it no longer an octahedron.
width — Width along the X axis. Default is 1.
height — Height along the Y axis. Default is 1.
widthSegments — Optional. Default is 1.
heightSegments — Optional. Default is 1.
Properties
See the base BufferGeometry class for common properties.
A polyhedron is a solid in three dimensions with flat faces. This class
will take an array of vertices, project them onto a sphere, and then
divide them up to the desired level of detail. This class is used by
DodecahedronGeometry, IcosahedronGeometry,
OctahedronGeometry, and TetrahedronGeometry to generate
their respective geometries.
vertices — Array of points of the form [1,1,1, -1,-1,-1, ... ]
indices — Array of indices that make up the faces of the form
[0,1,2, 2,3,0, ... ]
radius — Float - The radius of the final shape
detail — Integer - How many levels to subdivide the geometry. The
more detail, the smoother the shape.
Properties
See the base BufferGeometry class for common properties.
innerRadius — Default is 0.5.
outerRadius — Default is 1.
thetaSegments — Number of segments. A higher number means the ring will be
more round. Minimum is 3. Default is 32.
phiSegments — Number of segments per ring segment. Minimum is 1. Default is 1.
thetaStart — Starting angle. Default is 0.
thetaLength — Central angle. Default is Math.PI * 2.
Properties
See the base BufferGeometry class for common properties.
Creates a one-sided polygonal geometry from one or more path shapes.
Code Example
const x = 0, y = 0;

const heartShape = new THREE.Shape();

heartShape.moveTo( x + 5, y + 5 );
heartShape.bezierCurveTo( x + 5, y + 5, x + 4, y, x, y );
heartShape.bezierCurveTo( x - 6, y, x - 6, y + 7, x - 6, y + 7 );
heartShape.bezierCurveTo( x - 6, y + 11, x - 3, y + 15.4, x + 5, y + 19 );
heartShape.bezierCurveTo( x + 12, y + 15.4, x + 16, y + 11, x + 16, y + 7 );
heartShape.bezierCurveTo( x + 16, y + 7, x + 16, y, x + 10, y );
heartShape.bezierCurveTo( x + 7, y, x + 5, y + 5, x + 5, y + 5 );

const geometry = new THREE.ShapeGeometry( heartShape );
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
const mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
radius — sphere radius. Default is 1.
widthSegments — number of horizontal segments. Minimum value is 3, and the
default is 32.
heightSegments — number of vertical segments. Minimum value is 2, and the
default is 16.
phiStart — specify horizontal starting angle. Default is 0.
phiLength — specify horizontal sweep angle size. Default is Math.PI *
2.
thetaStart — specify vertical starting angle. Default is 0.
thetaLength — specify vertical sweep angle size. Default is Math.PI.
The geometry is created by sweeping and calculating vertices around the Y
axis (horizontal sweep) and the Z axis (vertical sweep). Thus, incomplete
spheres (akin to 'sphere slices') can be created through the use of
different values of phiStart, phiLength, thetaStart and thetaLength, which
define the points at which we start (or end) calculating those vertices.
Properties
See the base BufferGeometry class for common properties.
radius — Radius of the tetrahedron. Default is 1.
detail — Default is 0. Setting this to a value greater than 0 adds
vertices making it no longer a tetrahedron.
radius - Radius of the torus, from the center of the torus to the center
of the tube. Default is 1.
tube — Radius of the tube. Default is 0.4.
radialSegments — Default is 12.
tubularSegments — Default is 48.
arc — Central angle. Default is Math.PI * 2.
Properties
See the base BufferGeometry class for common properties.
Creates a torus knot, the particular shape of which is defined by a pair
of coprime integers, p and q. If p and q are not coprime, the result will
be a torus link.
Code Example
const geometry = new THREE.TorusKnotGeometry( 10, 3, 100, 16 );
const material = new THREE.MeshBasicMaterial( { color: 0xffff00 } );
const torusKnot = new THREE.Mesh( geometry, material );
scene.add( torusKnot );
path — Curve - A 3D path that inherits from the Curve base
class. Default is a quadratic bezier curve.
tubularSegments — Integer - The number of segments that make up the
tube. Default is 64.
radius — Float - The radius of the tube. Default is 1.
radialSegments — Integer - The number of segments that make up the
cross-section. Default is 8.
closed — Boolean. Whether the tube is closed. Default is false.
Properties
See the base BufferGeometry class for common properties.
dir -- direction from origin. Must be a unit vector.
origin -- Point at which the arrow starts.
length -- length of the arrow. Default is 1.
hex -- hexadecimal value to define color. Default is 0xffff00.
headLength -- The length of the head of the arrow. Default is 0.2 * length.
headWidth -- The width of the head of the arrow. Default is 0.2 * headLength.
Properties
See the base Object3D class for common properties.
Helper object to graphically show the world-axis-aligned bounding box
around an object. The actual bounding box is handled with Box3,
this is just a visual helper for debugging. It can be automatically
resized with the BoxHelper.update method when the object it's
created from is transformed. Note that the object must have a
BufferGeometry for this to work, so it won't work with Sprites.
object -- (optional) the object3D to show the world-axis-aligned bounding box.
color -- (optional) hexadecimal value that defines the box's color. Default is 0xffff00.
Creates a new wireframe box that bounds the passed object. Internally this
uses Box3.setFromObject to calculate the dimensions. Note that this
includes any children.
Properties
See the base LineSegments class for common properties.
Methods
See the base LineSegments class for common methods.
Helper object to assist with visualizing a DirectionalLight's
effect on the scene. This consists of a plane and a line representing the
light's position and direction.
The color parameter passed in the constructor. Default is undefined. If
this is changed, the helper's color will update the next time
update is called.
Methods
See the base Object3D class for common methods.
size -- The size of the grid. Default is 10.
divisions -- The number of divisions across the grid. Default is 10.
colorCenterLine -- The color of the centerline. This can be a
Color, a hexadecimal value or a CSS color name. Default is 0x444444.
colorGrid -- The color of the lines of the grid. This can be a
Color, a hexadecimal value or a CSS color name. Default is 0x888888.
Creates a new GridHelper of size 'size' and divided into 'divisions' segments
per side. Colors are optional.
Methods
See the base LineSegments class for common methods.
radius -- The radius of the polar grid. This can be any positive number.
Default is 10.
sectors -- The number of sectors the grid will be divided into. This can
be any positive integer. Default is 16.
rings -- The number of rings. This can be any positive integer. Default is
8.
divisions -- The number of line segments used for each circle. This can be
any positive integer that is 3 or greater. Default is 64.
color1 -- The first color used for grid elements. This can be a
Color, a hexadecimal value or a CSS color name. Default is 0x444444.
color2 -- The second color used for grid elements. This can be a
Color, a hexadecimal value or a CSS color name. Default is 0x888888.
Creates a new PolarGridHelper of radius 'radius' with 'sectors' number of sectors
and 'rings' number of rings, where each circle is smoothed into
'divisions' number of line segments. Colors are optional.
Methods
See the base LineSegments class for common methods.
The color parameter passed in the constructor. Default is undefined. If
this is changed, the helper's color will update the next time
update is called.
plane -- the plane to visualize.
size -- (optional) side length of plane helper. Default is 1.
color -- (optional) the color of the helper. Default is 0xffff00.
Creates a new wireframe representation of the passed plane.
The color parameter passed in the constructor. Default is undefined. If
this is changed, the helper's color will update the next time
update is called.
The color parameter passed in the constructor. Default is undefined. If
this is changed, the helper's color will update the next time
update is called.
This light globally illuminates all objects in the scene equally.
This light cannot be used to cast shadows as it does not have a direction.
Code Example
const light = new THREE.AmbientLight( 0x404040 ); // soft white light
scene.add( light );
Constructor
AmbientLight( color : Integer, intensity : Float )
color - (optional) Numeric value of the RGB component of the color. Default is 0xffffff.
intensity - (optional) Numeric value of the light's strength/intensity. Default is 1.
A light that gets emitted in a specific direction. This light will behave
as though it is infinitely far away and the rays produced from it are all
parallel. The common use case for this is to simulate daylight; the sun is
far enough away that its position can be considered to be infinite, and
all light rays coming from it are parallel.
A common point of confusion for directional lights is that setting the
rotation has no effect. This is because three.js's DirectionalLight is the
equivalent to what is often called a 'Target Direct Light' in other
applications.
This means that its direction is calculated as pointing from the light's
position to the target's position
(as opposed to a 'Free Direct Light' that just has a rotation
component).
The reason for this is to allow the light to cast shadows - the
shadow camera needs a position to calculate shadows
from.
See the target property below for details on updating the
target.
Code Example
// White directional light at half intensity shining from the top.
const directionalLight = new THREE.DirectionalLight( 0xffffff, 0.5 );
scene.add( directionalLight );
DirectionalLight( color : Integer, intensity : Float )
color - (optional) hexadecimal color of the light. Default is 0xffffff (white).
intensity - (optional) numeric value of the light's strength/intensity. Default is 1.
If set to true light will cast dynamic shadows. Warning: This is
expensive and requires tweaking to get shadows looking right. See the
DirectionalLightShadow for details. The default is false.
The DirectionalLight points from its position to target.position. The default position of the target is (0, 0, 0).
Note: For the target's position to be changed to anything other than the default, it must be added to the scene using
scene.add( light.target );
This is so that the target's matrixWorld gets automatically updated each frame.
It is also possible to set the target to be another object in the scene
(anything with a position property), like so:
skyColor - (optional) hexadecimal color of the sky. Default is 0xffffff.
groundColor - (optional) hexadecimal color of the ground. Default is 0xffffff.
intensity - (optional) numeric value of the light's strength/intensity. Default is 1.
Abstract base class for lights - all other light types inherit the
properties and methods described here.
Constructor
Light( color : Integer, intensity : Float )
color - (optional) hexadecimal color of the light. Default is 0xffffff (white).
intensity - (optional) numeric value of the light's strength/intensity. Default is 1.
Creates a new Light. Note that this is not intended to be called directly
(use one of derived classes instead).
Properties
See the base Object3D class for common properties.
Light probes are an alternative way of adding light to a 3D scene. Unlike
classical light sources (e.g. directional, point or spot lights), light
probes do not emit light. Instead they store information about light
passing through 3D space. During rendering, the light that hits a 3D
object is approximated by using the data from the light probe.
Light probes are usually created from (radiance) environment maps. The
class LightProbeGenerator can be used to create light probes from
instances of CubeTexture or WebGLCubeRenderTarget. However,
light estimation data could also be provided in other forms e.g. by WebXR.
This enables the rendering of augmented reality content that reacts to
real world lighting.
The current probe implementation in three.js supports so-called diffuse
light probes. This type of light probe is functionally equivalent to an
irradiance environment map.
color - (optional) hexadecimal color of the light. Default is 0xffffff (white).
intensity - (optional) numeric value of the light's strength/intensity. Default is 1.
distance - Maximum range of the light. Default is 0 (no limit).
decay - The amount the light dims along the distance of the light. Default is 2.
If set to true light will cast dynamic shadows. Warning: This is
expensive and requires tweaking to get shadows looking right. See the
PointLightShadow for details. The default is false.
The amount the light dims along the distance of the light. Default is
2.
In context of physically-correct rendering the default value should not be
changed.
When distance is zero, light will attenuate according to inverse-square
law to infinite distance. When distance is non-zero, light will attenuate
according to inverse-square law until near the distance cutoff, where it
will then attenuate quickly and smoothly to 0. Inherently, cutoffs are not
physically correct.
RectAreaLight emits light uniformly across the face of a rectangular plane.
This light type can be used to simulate light sources such as bright
windows or strip lighting.
color - (optional) hexadecimal color of the light. Default is 0xffffff (white).
intensity - (optional) the light's intensity, or brightness. Default is 1.
width - (optional) width of the light. Default is 10.
height - (optional) height of the light. Default is 10.
color - (optional) hexadecimal color of the light. Default is 0xffffff (white).
intensity - (optional) numeric value of the light's strength/intensity. Default is 1.
distance - Maximum range of the light. Default is 0 (no limit).
angle - Maximum angle of light dispersion from its direction whose upper bound is Math.PI/2.
penumbra - Percent of the spotlight cone that is attenuated due to penumbra. Takes values between zero and 1. Default is zero.
decay - The amount the light dims along the distance of the light.
If set to true light will cast dynamic shadows. Warning: This is
expensive and requires tweaking to get shadows looking right. See the
SpotLightShadow for details. The default is false.
The amount the light dims along the distance of the light. Default is
2.
In context of physically-correct rendering the default value should not be
changed.
When distance is zero, light will attenuate according to inverse-square
law to infinite distance. When distance is non-zero, light will attenuate
according to inverse-square law until near the distance cutoff, where it
will then attenuate quickly and smoothly to 0. Inherently, cutoffs are
not physically correct.
The Spotlight points from its position to
target.position. The default position of the target is (0,0,0). Note: For the target's position to be changed to anything other than the
default, it must be added to the scene using
scene.add( light.target );
This is so that the target's matrixWorld gets
automatically updated each frame.
It is also possible to set the target to be another object in the scene
(anything with a position property), like so:
const targetObject = new THREE.Object3D();
scene.add( targetObject );
light.target = targetObject;
The spotlight will now track the target object.
A Texture used to modulate the color of the light. The spot light
color is mixed with the RGB value of this texture, with a ratio
corresponding to its alpha value. The cookie-like masking effect is
reproduced using pixel values (0, 0, 0, 1-cookie_value). Warning:
.map is disabled if .castShadow is false.
The light's view of the world. This is used to generate a depth map of the
scene; objects behind other objects from the light's perspective will be
in shadow.
Shadow map bias, how much to add or subtract from the normalized depth
when deciding whether a surface is in shadow.
The default is 0. Very tiny adjustments here (in the order of 0.0001) may
help reduce artifacts in shadows
The distribution map generated using the internal camera; an occlusion is
calculated based on the distribution of depths. Computed internally during
rendering.
A Vector2 defining the width and height of the shadow map.
Higher values give better quality shadows at the cost of computation time.
Values must be powers of 2, up to the
WebGLRenderer.capabilities.maxTextureSize for a given device,
although the width and height don't have to be the same (so, for example,
(512, 1024) is valid). The default is (512,512).
When set to true, shadow maps will be updated in the next render call.
Default is false. If you have set .autoUpdate to false, you
will need to set this property to true and then make a render call to
update the light's shadow.
Defines how much the position used to query the shadow map is offset along
the object normal. The default is 0. Increasing this value can be used to
reduce shadow acne especially in large scenes where light shines onto
geometry at a shallow angle. The cost is that shadows may appear
distorted.
Setting this to values greater than 1 will blur the edges of the
shadow.
High values will cause unwanted banding effects in the shadows - a greater
mapSize will allow for a higher value to be used here
before these effects become visible.
If WebGLRenderer.shadowMap.type is set to PCFSoftShadowMap,
radius has no effect and it is recommended to increase
softness by decreasing mapSize instead.
//Create a WebGLRenderer and turn on shadows in the renderer
const renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap; // default THREE.PCFShadowMap

//Create a DirectionalLight and turn on shadows for the light
const light = new THREE.DirectionalLight( 0xffffff, 1 );
light.position.set( 0, 1, 0 ); //default; light shining from top
light.castShadow = true; // default false
scene.add( light );

//Set up shadow properties for the light
light.shadow.mapSize.width = 512; // default
light.shadow.mapSize.height = 512; // default
light.shadow.camera.near = 0.5; // default
light.shadow.camera.far = 500; // default

//Create a sphere that casts shadows (but does not receive them)
const sphereGeometry = new THREE.SphereGeometry( 5, 32, 32 );
const sphereMaterial = new THREE.MeshStandardMaterial( { color: 0xff0000 } );
const sphere = new THREE.Mesh( sphereGeometry, sphereMaterial );
sphere.castShadow = true; //default is false
sphere.receiveShadow = false; //default
scene.add( sphere );

//Create a plane that receives shadows (but does not cast them)
const planeGeometry = new THREE.PlaneGeometry( 20, 20, 32, 32 );
const planeMaterial = new THREE.MeshStandardMaterial( { color: 0x00ff00 } );
const plane = new THREE.Mesh( planeGeometry, planeMaterial );
plane.receiveShadow = true;
scene.add( plane );

//Create a helper for the shadow camera (optional)
const helper = new THREE.CameraHelper( light.shadow.camera );
scene.add( helper );
Constructor
DirectionalLightShadow( )
Creates a new DirectionalLightShadow. This is not intended to be called directly - it is
called internally by DirectionalLight.
Properties
See the base LightShadow class for common properties.
The light's view of the world. This is used to generate a depth map of the
scene; objects behind other objects from the light's perspective will be
in shadow.
The light's view of the world. This is used to generate a depth map of the
scene; objects behind other objects from the light's perspective will be
in shadow.
The default is a PerspectiveCamera with
near clipping plane at 0.5. The
fov will track the angle
property of the owning SpotLight via the
update method. Similarly, the
aspect property will track the aspect of
the mapSize. If the distance
property of the light is set, the far
clipping plane will track that, otherwise it defaults to 500.
url — the path or URL to the file. This can also be a Data URI.
onLoad — Will be called when load completes. The argument will be the loaded animation clips.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called if load errors.
Begin loading from url and pass the loaded animation to onLoad.
Class for loading an
AudioBuffer.
This uses the FileLoader internally for loading
files.
Code Example
// instantiate a listener
const audioListener = new THREE.AudioListener();

// add the listener to the camera
camera.add( audioListener );

// instantiate audio object
const oceanAmbientSound = new THREE.Audio( audioListener );

// add the audio object to the scene
scene.add( oceanAmbientSound );

// instantiate a loader
const loader = new THREE.AudioLoader();

// load a resource
loader.load(
	// resource URL
	'audio/ambient_ocean.ogg',

	// onLoad callback
	function ( audioBuffer ) {
		// set the audio object buffer to the loaded object
		oceanAmbientSound.setBuffer( audioBuffer );

		// play the audio
		oceanAmbientSound.play();
	},

	// onProgress callback
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},

	// onError callback
	function ( err ) {
		console.log( 'An error happened' );
	}
);
url — the path or URL to the file. This can also be a Data URI.
onLoad — Will be called when load completes. The argument will be the loaded AudioBuffer.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called when load errors.
Begin loading from url and pass the loaded AudioBuffer to onLoad.
url — the path or URL to the file. This can also be a Data URI.
onLoad — Will be called when load completes. The argument will be the loaded BufferGeometry.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called when load errors.
Begin loading from url and call onLoad with the parsed response content.
url — the path or URL to the file. This can also be a Data URI.
onLoad (optional) — Will be called when load completes. The argument will be the loaded texture.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called when load errors.
Begin loading from url and pass the loaded texture to onLoad. The method also returns a new texture object which can directly be used for material creation.
CubeTextureLoader can be used to load cube maps. The loader returns an instance of CubeTexture and expects the cube map to
be defined as six separate images representing the sides of a cube. Other cube map definitions like vertical and horizontal cross,
column and row layouts are not supported.
The loaded CubeTexture is in sRGB color space, meaning the colorSpace
property is set to THREE.SRGBColorSpace by default.
Code Example
const scene = new THREE.Scene();
scene.background = new THREE.CubeTextureLoader()
	.setPath( 'textures/cubeMaps/' )
	.load( [
		'px.png', 'nx.png',
		'py.png', 'ny.png',
		'pz.png', 'nz.png'
	] );
urls — array of 6 urls to images, one for each side of the
CubeTexture. The urls should be specified in the following order: pos-x,
neg-x, pos-y, neg-y, pos-z, neg-z. They can also be
Data URIs.
Note that, by convention, cube maps are specified in a coordinate system
in which positive-x is to the right when looking up the positive-z axis --
in other words, using a left-handed coordinate system. Since three.js uses
a right-handed coordinate system, environment maps used in three.js will
have pos-x and neg-x swapped.
onLoad (optional) — Will be called when load completes. The argument will be the loaded texture.
onProgress (optional) — This callback function is currently not supported.
onError (optional) — Will be called when load errors.
Begin loading from url and pass the loaded texture to
onLoad. The method also returns a new texture object which can directly be
used for material creation.
Abstract base class to load generic binary textures formats (rgbe, hdr,
...). This uses the FileLoader internally for loading files, and
creates a new DataTexture.
Examples
See the
RGBELoader
for an example of a derived class.
url — the path or URL to the file. This can also be a Data URI.
onLoad (optional) — Will be called when load completes. The argument will be the loaded texture.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called when load errors.
Begin loading from url and pass the loaded texture to onLoad. The method
also returns a new texture object which can directly be used for material
creation.
A low level class for loading resources with Fetch, used internally by
most loaders. It can also be used directly to load any file type that does
not have a loader.
Code Example
const loader = new THREE.FileLoader();

// load a text file and output the result to the console
loader.load(
	// resource URL
	'example.txt',

	// onLoad callback
	function ( data ) {
		// output the text to the console
		console.log( data );
	},

	// onProgress callback
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},

	// onError callback
	function ( err ) {
		console.error( 'An error happened' );
	}
);
*Note:* The cache must be enabled using THREE.Cache.enabled = true;
This is a global property and only needs to be set once to be used by all
loaders that use FileLoader internally. Cache is a cache
module that holds the response from each request made through this loader,
so each file is requested once.
url — the path or URL to the file. This can also be a Data URI.
onLoad (optional) — Will be called when loading completes. The argument will be the loaded response.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called if an error occurs.
Load the URL and pass the response to the onLoad function.
Change the response type. Valid values are:
text or empty string (default) — returns the data as String.
arraybuffer — loads the data into an ArrayBuffer and returns that.
blob — returns the data as a Blob.
document — parses the file using the DOMParser.
json — parses the file using JSON.parse.
A loader for loading an Image as an
ImageBitmap. An ImageBitmap provides an asynchronous and resource
efficient pathway to prepare textures for rendering in WebGL.
Unlike FileLoader, ImageBitmapLoader does not avoid multiple concurrent
requests to the same URL.
url — the path or URL to the file. This can also be a Data URI.
onLoad — Will be called when load completes. The argument will be the loaded image.
onProgress (optional) — This callback function is currently not supported.
onError (optional) — Will be called when load errors.
Begin loading from url and return the image object that
will contain the data.
// instantiate a loader
const loader = new THREE.ImageLoader();

// load an image resource
loader.load(
	// resource URL
	'image.png',

	// onLoad callback
	function ( image ) {
		// use the image, e.g. draw part of it on a canvas
		const canvas = document.createElement( 'canvas' );
		const context = canvas.getContext( '2d' );
		context.drawImage( image, 100, 100 );
	},

	// onProgress callback currently not supported
	undefined,

	// onError callback
	function () {
		console.error( 'An error happened.' );
	}
);
Please note three.js r84 dropped support for ImageLoader progress events.
For an ImageLoader that supports progress events, see
this thread.
url — the path or URL to the file. This can also be a Data URI.
onLoad — Will be called when load completes. The argument will be the loaded image.
onProgress (optional) — This callback function is currently not supported.
onError (optional) — Will be called when load errors.
Begin loading from url and return the image object that will
contain the data.
url — A string containing the path/URL of the file to be loaded.
onProgress (optional) — A function to be called while the loading is in progress. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
This method is equivalent to .load, but returns a
Promise.
Whether the XMLHttpRequest uses credentials such as cookies, authorization
headers or TLS client certificates. See
XMLHttpRequest.withCredentials.
Note that this has no effect if you are loading files locally or from the
same domain.
url — The absolute or relative URL to resolve.
path — The base path for relative URLs to be resolved against.
Resolves relative URLs against the given path. Absolute paths, data URLs, and blob URLs will be returned as is. Invalid URLs will return an empty string.
url — the path or URL to the file. This can also be a Data URI.
onLoad — Will be called when load completes. The argument will be the loaded Material.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called when load errors.
This uses the FileLoader internally for loading files.
Code Example
const loader = new THREE.ObjectLoader();

loader.load(
	// resource URL
	'models/json/example.json',

	// onLoad callback
	// Here the loaded data is assumed to be an object
	function ( obj ) {
		// Add the loaded object to the scene
		scene.add( obj );
	},

	// onProgress callback
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},

	// onError callback
	function ( err ) {
		console.error( 'An error happened' );
	}
);

// Alternatively, to parse a previously loaded JSON structure
const object = loader.parse( a_json_object );
scene.add( object );
url — the path or URL to the file. This can also be a Data URI.
onLoad — Will be called when load completes. The argument will be the loaded object.
onProgress (optional) — Will be called while load progresses. The argument will be the ProgressEvent instance, which contains .lengthComputable, .total and .loaded. If the server does not set the Content-Length header, .total will be 0.
onError (optional) — Will be called when load errors.
Begin loading from url and call onLoad with the parsed response content.
onLoad — Will be called when parsing completes. The
argument will be the parsed object.
Parse a JSON structure and return a three.js object. This is used
internally by .load() but can also be used directly to parse a
previously loaded JSON structure.
json — required. The JSON source to parse.
geometries — required. The geometries of the JSON.
materials — required. The materials of the JSON.
animations — required. The animations of the JSON.
This is used by .parse() to parse any 3D objects in the JSON
structure.
Class for loading a texture. This uses the
ImageLoader internally for loading files.
Code Example
const texture = new THREE.TextureLoader().load( 'textures/land_ocean_ice_cloud_2048.jpg' );

// immediately use the texture for material creation
const material = new THREE.MeshBasicMaterial( { map: texture } );
Code Example with Callbacks
// instantiate a loader
const loader = new THREE.TextureLoader();

// load a resource
loader.load(
	// resource URL
	'textures/land_ocean_ice_cloud_2048.jpg',

	// onLoad callback
	function ( texture ) {
		// in this example we create the material when the texture is loaded
		const material = new THREE.MeshBasicMaterial( {
			map: texture
		} );
	},

	// onProgress callback currently not supported
	undefined,

	// onError callback
	function ( err ) {
		console.error( 'An error happened.' );
	}
);
Please note three.js r84 dropped support for TextureLoader progress
events. For a TextureLoader that supports progress events, see
this thread.
url — the path or URL to the file. This can also be a Data URI.
onLoad (optional) — Will be called when load completes. The argument will be the loaded texture.
onProgress (optional) — This callback function is currently not supported.
onError (optional) — Will be called when load errors.
Begin loading from the given URL and pass the fully loaded texture
to onLoad. The method also returns a new texture object which can
directly be used for material creation. If you do it this way, the texture
may pop up in your scene once the respective loading process is finished.
A global instance of the LoadingManager, used by
most loaders when no custom manager has been specified.
This will be sufficient for most purposes, however there may be times when
you desire separate loading managers for say, textures and models.
Code Example
You can optionally set the onStart,
onLoad, onProgress,
onError functions for the
manager. These will then apply to any loaders using the
DefaultLoadingManager.
Note that these shouldn't be confused with the similarly named functions
of individual loaders, as they are intended for displaying information
about the overall status of loading, rather than dealing with the data
that has been loaded.
Handles and keeps track of loaded and pending data. A default global
instance of this class is created and used by loaders if not supplied
manually - see DefaultLoadingManager.
In general that should be sufficient, however there are times when it can
be useful to have separate loaders - for example if you want to show
separate loading bars for objects and textures.
Code Example
This example shows how to use LoadingManager to track the progress of
OBJLoader.
In addition to observing progress, a LoadingManager can be used to
override resource URLs during loading. This may be helpful for assets
coming from drag-and-drop events, WebSockets, WebRTC, or other APIs. An
example showing how to load an in-memory model using Blob URLs is below.
// Blob or File objects created when dragging files into the webpage.
const blobs = { 'fish.gltf': blob1, 'diffuse.png': blob2, 'normal.png': blob3 };

const manager = new THREE.LoadingManager();

// Initialize loading manager with URL callback.
const objectURLs = [];
manager.setURLModifier( ( url ) => {
	url = URL.createObjectURL( blobs[ url ] );
	objectURLs.push( url );
	return url;
} );

// Load as usual, then revoke the blob URLs.
const loader = new GLTFLoader( manager );
loader.load( 'fish.gltf', ( gltf ) => {
	scene.add( gltf.scene );
	objectURLs.forEach( ( url ) => URL.revokeObjectURL( url ) );
} );
onLoad (optional) — this function will be called when all loaders are done.
onProgress (optional) — this function will be called when an item is complete.
onError (optional) — this function will be called when a loader encounters errors.
Creates a new LoadingManager.
This function will be called when loading starts. The arguments are:
url — The url of the item just loaded.
itemsLoaded — the number of items already loaded so far.
itemsTotal — the total amount of items to be loaded.
This function will be called when an item is complete. The arguments are:
url — The url of the item just loaded.
itemsLoaded — the number of items already loaded so far.
itemsTotal — the total amount of items to be loaded.
By default this is undefined, unless passed in the constructor.
regex — A regular expression.
loader — The loader.
Registers a loader with the given regular expression. Can be used to
define what loader should be used in order to load specific files. A
typical use case is to overwrite the default loader for textures.
// add handler for TGA textures
manager.addHandler( /\.tga$/i, new TGALoader() );
callback — URL modifier callback. Called with url argument,
and must return resolvedURL.
If provided, the callback will be passed each resource URL before a
request is sent. The callback may return the original URL, or a new URL to
override loading behavior. This behavior can be used to load assets from
.ZIP files, drag-and-drop APIs, and Data URIs.
Note: The following methods are designed to be called internally by
loaders. You shouldn't call them directly.
A material for drawing wireframe-style geometries.
Code Example
const material = new THREE.LineBasicMaterial( {
	color: 0xffffff,
	linewidth: 1,
	linecap: 'round', // ignored by WebGLRenderer
	linejoin: 'round' // ignored by WebGLRenderer
} );
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from LineBasicMaterial)
can be passed in here.
Materials describe the appearance of objects. They are
defined in a (mostly) renderer-independent way, so you don't have to
rewrite materials if you decide to use a different renderer.
The following properties and methods are inherited by all other material
types (although they may have different defaults).
Enables alpha hashed transparency, an alternative to .transparent or .alphaTest.
The material will not be rendered if opacity is lower than a random threshold.
Randomization introduces some grain or noise, but approximates alpha blending without
the associated problems of sorting. Using TAARenderPass can reduce the resulting noise.
Enables alpha to coverage. Can only be used with MSAA-enabled contexts
(meaning when the renderer was created with antialias parameter set to
true). Enabling this will smooth aliasing on clip plane edges and alphaTest-clipped edges.
Default is false.
Represents the alpha value of the constant blend color. Default is 0.
This property only has an effect when using custom blending with ConstantAlpha or OneMinusConstantAlpha.
Represents the RGB values of the constant blend color. Default is 0x000000.
This property only has an effect when using custom blending with ConstantColor or OneMinusConstantColor.
Blending equation to use when applying blending. Default is
AddEquation. See the blending equation
constants for all possible values.
The material's blending must be set to CustomBlending
for this to have any effect.
Blending source. Default is SrcAlphaFactor.
See the source factors constants for all
possible values.
The material's blending must be set to CustomBlending
for this to have any effect.
User-defined clipping planes specified as THREE.Plane objects in world
space. These planes apply to the objects this material is attached to.
Points in space whose signed distance to the plane is negative are clipped
(not rendered). This requires WebGLRenderer.localClippingEnabled to
be true. See the WebGL / clipping / intersection example. Default is null.
Whether to render the material's color. This can be used in conjunction
with a mesh's renderOrder property to create invisible
objects that occlude other objects. Default is true.
Custom defines to be injected into the shader. These are passed in the form
of an object literal with key/value pairs, e.g. { MY_CUSTOM_DEFINE: '', PI2: Math.PI * 2 }. The pairs are defined in both vertex and fragment shaders.
Default is undefined.
Whether to have depth test enabled when rendering this material. Default
is true. When the depth test is disabled, the depth write will also be
implicitly disabled.
Whether double-sided, transparent objects should be rendered with a single
pass or not. Default is false.
The engine renders double-sided, transparent objects with two draw calls
(back faces first, then front faces) to mitigate transparency artifacts.
There are scenarios however where this approach produces no quality gains
but still doubles draw calls e.g. when rendering flat vegetation like
grass sprites. In these cases, set the forceSinglePass flag to true to
disable the two pass rendering to avoid performance issues.
Whether stencil operations are performed against the stencil buffer. In
order to perform writes or comparisons against the stencil buffer this
value must be true. Default is false.
Which stencil operation to perform when the comparison function returns
false. Default is KeepStencilOp. See the stencil
operations constants for all possible values.
Which stencil operation to perform when the comparison function returns
true but the depth test fails. Default is KeepStencilOp.
See the stencil operations constants for all possible
values.
Which stencil operation to perform when the comparison function returns
true and the depth test passes. Default is KeepStencilOp.
See the stencil operations constants for all possible
values.
Float in the range of 0.0 - 1.0 indicating how transparent the
material is. A value of 0.0 indicates fully transparent, 1.0 is fully
opaque.
If the material's transparent property is not set to
true, the material will remain fully opaque and this value will only
affect its color.
Default is 1.0.
Defines which side of faces cast shadows. When set, can be THREE.FrontSide,
THREE.BackSide, or THREE.DoubleSide.
Default is null.
If null, the side casting shadows is determined as follows:
Defines whether this material is tone mapped according to the renderer's
toneMapping setting. It is ignored when rendering to a render target or using post processing.
Default is true.
Defines whether this material is transparent. This has an effect on
rendering as transparent objects need special treatment and are rendered
after non-transparent objects.
When set to true, the extent to which the material is transparent is
controlled by setting its opacity property.
Default is false.
Defines whether vertex coloring is used. Default is false. The engine
supports RGB and RGBA vertex colors depending on whether a three (RGB) or
four (RGBA) component color buffer attribute is used.
An object that can be used to store custom data about the Material. It
should not hold references to functions as these will not be cloned.
Default is an empty object {}.
An optional callback that is executed immediately before the shader
program is compiled. This function is called with the shader source code
as a parameter. Useful for the modification of built-in materials.
Unlike properties, the callback is not supported by .clone(),
.copy() and .toJSON().
This callback is only supported in WebGLRenderer (not WebGPURenderer).
In case onBeforeCompile is used, this callback can be used to identify
values of settings used in onBeforeCompile, so three.js can reuse a cached
shader or recompile the shader for this material as needed.
For example, if onBeforeCompile contains a conditional statement like:
if ( black ) {
	shader.fragmentShader = shader.fragmentShader.replace( 'gl_FragColor = vec4(1)', 'gl_FragColor = vec4(0)' );
}
then customProgramCacheKey should be set like this:
material.customProgramCacheKey = function () {
	return black ? '1' : '0';
};
Unlike properties, the callback is not supported by .clone(),
.copy() and .toJSON().
A material for drawing geometries in a simple shaded (flat or wireframe)
way.
This material is not affected by lights.
Constructor
MeshBasicMaterial( parameters : Object )
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
Intensity of the ambient occlusion effect. Range is 0-1, where 0
disables ambient occlusion. Where intensity is 1 and the .aoMap
red channel is also 1, ambient light is fully occluded on a surface.
Default is 1.
How much the environment map affects the surface; also see
.combine. The default value is 1 and the valid range is between 0
(no reflections) and 1 (full reflections).
The index of refraction (IOR) of air (approximately 1) divided by the
index of refraction of the material. It is used with environment mapping
modes THREE.CubeRefractionMapping and THREE.EquirectangularRefractionMapping.
The refraction ratio should not exceed 1. Default is 0.98.
A material for drawing geometry by depth. Depth is based off of the camera
near and far plane. White is nearest, black is farthest.
Constructor
MeshDepthMaterial( parameters : Object )
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
The displacement map affects the position of the mesh's vertices. Unlike
other maps which only affect the light and shade of the material the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
MeshDistanceMaterial is internally used for implementing shadow mapping with
PointLights.
Can also be used to customize the shadow casting of an object by assigning
an instance of MeshDistanceMaterial to Object3D.customDistanceMaterial. The
following example demonstrates this approach in order to ensure that
transparent parts of objects do not cast shadows.
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
The displacement map affects the position of the mesh's vertices. Unlike
other maps which only affect the light and shade of the material the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
A material for non-shiny surfaces, without specular highlights.
The material uses a non-physically based
Lambertian
model for calculating reflectance. This can simulate some surfaces (such
as untreated wood or stone) well, but cannot simulate shiny surfaces with
specular highlights (such as varnished wood). MeshLambertMaterial uses per-fragment
shading.
Due to the simplicity of the reflectance and illumination models,
performance will be greater when using this material over the
MeshPhongMaterial, MeshStandardMaterial or
MeshPhysicalMaterial, at the cost of some graphical accuracy.
Constructor
MeshLambertMaterial( parameters : Object )
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
Intensity of the ambient occlusion effect. Range is 0-1, where 0
disables ambient occlusion. Where intensity is 1 and the .aoMap
red channel is also 1, ambient light is fully occluded on a surface.
Default is 1.
The texture to create a bump map. The black and white values map to the
perceived depth in relation to the lights. Bump doesn't actually affect
the geometry of the object, only the lighting. If a normal map is defined
this will be ignored.
The displacement map affects the position of the mesh's vertices. Unlike
other maps which only affect the light and shade of the material the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
Set emissive (glow) map. Default is null. The emissive map color is
modulated by the emissive color and the emissive intensity. If you have an
emissive map, be sure to set the emissive color to something other than
black.
The texture to create a normal map. The RGB values affect the surface
normal for each pixel fragment and change the way the color is lit. Normal
maps do not change the actual shape of the surface, only the lighting. In
case the material has a normal map authored using the left handed
convention, the y component of normalScale should be negated to compensate
for the different handedness.
The index of refraction (IOR) of air (approximately 1) divided by the
index of refraction of the material. It is used with environment mapping
modes THREE.CubeRefractionMapping and THREE.EquirectangularRefractionMapping.
The refraction ratio should not
exceed 1. Default is 0.98.
MeshMatcapMaterial is defined by a MatCap (or Lit Sphere) texture, which encodes the
material color and shading.
MeshMatcapMaterial does not respond to lights since the matcap image file encodes
baked lighting. It will cast a shadow onto an object that receives shadows
(and shadow clipping works), but it will not self-shadow or receive
shadows.
Constructor
MeshMatcapMaterial( parameters : Object )
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
The texture to create a bump map. The black and white values map to the
perceived depth in relation to the lights. Bump doesn't actually affect
the geometry of the object, only the lighting. If a normal map is defined
this will be ignored.
The displacement map affects the position of the mesh's vertices. Unlike
other maps which only affect the light and shade of the material the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
The color map. May optionally include an alpha channel, typically combined
with .transparent or .alphaTest.
Default is null. The texture map color is modulated by the
diffuse .color.
The texture to create a normal map. The RGB values affect the surface
normal for each pixel fragment and change the way the color is lit. Normal
maps do not change the actual shape of the surface, only the lighting. In
case the material has a normal map authored using the left handed
convention, the y component of normalScale should be negated to compensate
for the different handedness.
A material that maps the normal vectors to RGB colors.
Constructor
MeshNormalMaterial( parameters : Object )
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
Properties
See the base Material class for common properties.
The texture to create a bump map. The black and white values map to the
perceived depth in relation to the lights. Bump doesn't actually affect
the geometry of the object, only the lighting. If a normal map is defined
this will be ignored.
The displacement map affects the position of the mesh's vertices. Unlike
other maps, which only affect the light and shade of the material, the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
The texture to create a normal map. The RGB values affect the surface
normal for each pixel fragment and change the way the color is lit. Normal
maps do not change the actual shape of the surface, only the lighting. In
case the material has a normal map authored using the left handed
convention, the y component of normalScale should be negated to compensate
for the different handedness.
A material for shiny surfaces with specular highlights.
The material uses a non-physically based
Blinn-Phong
model for calculating reflectance. Unlike the Lambertian model used in the
MeshLambertMaterial this can simulate shiny surfaces with specular
highlights (such as varnished wood). MeshPhongMaterial uses per-fragment shading.
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
Intensity of the ambient occlusion effect. Range is 0-1, where 0
disables ambient occlusion. Where intensity is 1 and the .aoMap
red channel is also 1, ambient light is fully occluded on a surface.
Default is 1.
The texture to create a bump map. The black and white values map to the
perceived depth in relation to the lights. Bump doesn't actually affect
the geometry of the object, only the lighting. If a normal map is defined
this will be ignored.
The displacement map affects the position of the mesh's vertices. Unlike
other maps, which only affect the light and shade of the material, the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
Set emissive (glow) map. Default is null. The emissive map color is
modulated by the emissive color and the emissive intensity. If you have an
emissive map, be sure to set the emissive color to something other than
black.
The color map. May optionally include an alpha channel, typically combined
with .transparent or .alphaTest.
Default is null. The texture map color is modulated by the
diffuse .color.
The texture to create a normal map. The RGB values affect the surface
normal for each pixel fragment and change the way the color is lit. Normal
maps do not change the actual shape of the surface, only the lighting. In
case the material has a normal map authored using the left handed
convention, the y component of normalScale should be negated to compensate
for the different handedness.
How much the environment map affects the surface; also see
.combine. The default value is 1 and the valid range is between 0
(no reflections) and 1 (full reflections).
The index of refraction (IOR) of air (approximately 1) divided by the
index of refraction of the material. It is used with environment mapping
modes THREE.CubeRefractionMapping and THREE.EquirectangularRefractionMapping.
The refraction ratio should not exceed 1. Default is 0.98.
The specular map value affects both how much the specular surface
highlight contributes and how much of the environment map affects the
surface. Default is null.
An extension of the MeshStandardMaterial, providing more advanced
physically-based rendering properties:
Anisotropy: Ability to represent the anisotropic property of materials
as observable with brushed metals.
Clearcoat: Some materials — like car paints, carbon fiber, and
wet surfaces — require a clear, reflective layer on top of another layer
that may be irregular or rough. Clearcoat approximates this effect,
without the need for a separate transparent surface.
Iridescence: Allows rendering the effect where hue varies
depending on the viewing angle and illumination angle. This can be seen on
soap bubbles, oil films, or on the wings of many insects.
Physically-based transparency: One limitation of
.opacity is that highly transparent materials
are less reflective. Physically-based .transmission provides a
more realistic option for thin, transparent surfaces like glass.
Advanced reflectivity: More flexible reflectivity for
non-metallic materials.
Sheen: Can be used for representing cloth and fabric materials.
As a result of these complex shading features, MeshPhysicalMaterial has a
higher performance cost, per pixel, than other three.js materials. Most
effects are disabled by default, and add cost as they are enabled. For
best results, always specify an environment map when using
this material.
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material and
MeshStandardMaterial) can be passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Red and green channels represent the anisotropy direction in [-1,1] tangent,
bitangent space, to be rotated by .anisotropyRotation. The blue channel
contains strength as [0,1] to be multiplied by .anisotropy. Default is null.
The rotation of the anisotropy in tangent, bitangent space, measured in radians
counter-clockwise from the tangent. When .anisotropyMap is present, this
property provides additional rotation to the vectors in the texture. Default is 0.0.
Density of the medium given as the average distance that light travels in
the medium before interacting with a particle. The value is given in world
space units, and must be greater than zero. Default is Infinity.
Represents the intensity of the clear coat layer, from 0.0 to 1.0. Use
clear coat related properties to enable multilayer materials that have a
thin translucent layer over the base layer. Default is 0.0.
Defines the strength of the angular separation of colors (chromatic aberration) transmitting through a relatively clear volume.
Any value zero or larger is valid; the typical range of realistic values is [0, 1].
Default is 0 (no dispersion).
This property can only be used with transmissive objects; see .transmission.
Degree of reflectivity, from 0.0 to 1.0. Default is 0.5, which
corresponds to an index-of-refraction of 1.5.
This models the reflectivity of non-metallic materials. It has no effect
when metalness is 1.0.
The intensity of the iridescence layer, simulating RGB color shift based on the angle between the surface and the viewer, from 0.0 to 1.0. Default is 0.0.
Array of exactly 2 elements, specifying minimum and maximum thickness of the iridescence layer.
Thickness of iridescence layer has an equivalent effect of the one .thickness has on .ior.
Default is [100,400].
If .iridescenceThicknessMap is not defined, iridescence thickness will use only the second element of the given array.
A texture that defines the thickness of the iridescence layer, stored in the green channel.
Minimum and maximum values of thickness are defined by .iridescenceThicknessRange array:
0.0 in the green channel will result in thickness equal to first element of the array.
1.0 in the green channel will result in thickness equal to second element of the array.
Values in-between will linearly interpolate between the elements of the array.
A float that scales the amount of specular reflection for non-metals only.
When set to zero, the model is effectively Lambertian. From 0.0 to
1.0. Default is 1.0.
The thickness of the volume beneath the surface. The value is given in the
coordinate space of the mesh. If the value is 0 the material is
thin-walled. Otherwise the material is a volume boundary. Default is 0.
Degree of transmission (or optical transparency), from 0.0 to 1.0.
Default is 0.0.
Thin, transparent or semitransparent, plastic or glass materials remain
largely reflective even if they are fully transmissive. The transmission
property can be used to model these materials.
When transmission is non-zero, opacity should be
set to 1.
A standard physically based material, using Metallic-Roughness
workflow.
Physically based rendering (PBR) has recently become the standard in many
3D applications, such as
Unity,
Unreal and
3D Studio Max.
This approach differs from older approaches in that instead of using
approximations for the way in which light interacts with a surface, a
physically correct model is used. The idea is that, instead of tweaking
materials to look good under specific lighting, a material can be created
that will react 'correctly' under all lighting scenarios.
In practice this gives a more accurate and realistic looking result than
the MeshLambertMaterial or MeshPhongMaterial, at the cost of
being somewhat more computationally expensive. MeshStandardMaterial uses per-fragment
shading.
Note that for best results you should always specify an environment map
when using this material.
For a non-technical introduction to the concept of PBR and how to set up a
PBR material, check out these articles by the people at
Marmoset:
Technical details of the approach used in three.js (and most other PBR
systems) can be found in this
paper from Disney
(pdf), by Brent Burley.
Constructor
MeshStandardMaterial( parameters : Object )
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
Intensity of the ambient occlusion effect. Range is 0-1, where 0
disables ambient occlusion. Where intensity is 1 and the .aoMap
red channel is also 1, ambient light is fully occluded on a surface.
Default is 1.
The texture to create a bump map. The black and white values map to the
perceived depth in relation to the lights. Bump doesn't actually affect
the geometry of the object, only the lighting. If a normal map is defined
this will be ignored.
The displacement map affects the position of the mesh's vertices. Unlike
other maps, which only affect the light and shade of the material, the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
Set emissive (glow) map. Default is null. The emissive map color is
modulated by the emissive color and the emissive intensity. If you have an
emissive map, be sure to set the emissive color to something other than
black.
The environment map. To ensure a physically correct rendering, you should
only add environment maps which were preprocessed by
PMREMGenerator. Default is null.
The color map. May optionally include an alpha channel, typically combined
with .transparent or .alphaTest.
Default is null. The texture map color is modulated by the diffuse .color.
How much the material is like a metal. Non-metallic materials such as wood
or stone use 0.0, metallic use 1.0, with nothing (usually) in between.
Default is 0.0. A value between 0.0 and 1.0 could be used for a rusty
metal look. If metalnessMap is also provided, both values are multiplied.
The texture to create a normal map. The RGB values affect the surface
normal for each pixel fragment and change the way the color is lit. Normal
maps do not change the actual shape of the surface, only the lighting. In
case the material has a normal map authored using the left handed
convention, the y component of normalScale should be negated to compensate
for the different handedness.
How rough the material appears. 0.0 means a smooth mirror reflection, 1.0
means fully diffuse. Default is 1.0. If roughnessMap is also provided,
both values are multiplied.
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
Intensity of the ambient occlusion effect. Range is 0-1, where 0
disables ambient occlusion. Where intensity is 1 and the .aoMap
red channel is also 1, ambient light is fully occluded on a surface.
Default is 1.
The texture to create a bump map. The black and white values map to the
perceived depth in relation to the lights. Bump doesn't actually affect
the geometry of the object, only the lighting. If a normal map is defined
this will be ignored.
The displacement map affects the position of the mesh's vertices. Unlike
other maps, which only affect the light and shade of the material, the
displaced vertices can cast shadows, block other objects, and otherwise
act as real geometry. The displacement texture is an image where the value
of each pixel (white being the highest) is mapped against, and
repositions, the vertices of the mesh.
How much the displacement map affects the mesh (where black is no
displacement, and white is maximum displacement). Without a displacement
map set, this value is not applied. Default is 1.
Set emissive (glow) map. Default is null. The emissive map color is
modulated by the emissive color and the emissive intensity. If you have an
emissive map, be sure to set the emissive color to something other than
black.
The color map. May optionally include an alpha channel, typically combined
with .transparent or .alphaTest.
Default is null. The texture map color is modulated by the diffuse .color.
The texture to create a normal map. The RGB values affect the surface
normal for each pixel fragment and change the way the color is lit. Normal
maps do not change the actual shape of the surface, only the lighting. In
case the material has a normal map authored using the left handed
convention, the y component of normalScale should be negated to compensate
for the different handedness.
const vertices = [];

for ( let i = 0; i < 10000; i ++ ) {

	const x = THREE.MathUtils.randFloatSpread( 2000 );
	const y = THREE.MathUtils.randFloatSpread( 2000 );
	const z = THREE.MathUtils.randFloatSpread( 2000 );

	vertices.push( x, y, z );

}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( vertices, 3 ) );

const material = new THREE.PointsMaterial( { color: 0x888888 } );
const points = new THREE.Points( geometry, material );

scene.add( points );
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
This class works just like ShaderMaterial, except that definitions
of built-in uniforms and attributes are not automatically prepended to the
GLSL shader code.
Code Example
const material = new THREE.RawShaderMaterial( {

	uniforms: {
		time: { value: 1.0 }
	},
	vertexShader: document.getElementById( 'vertexShader' ).textContent,
	fragmentShader: document.getElementById( 'fragmentShader' ).textContent

} );
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material and
ShaderMaterial) can be passed in here.
A material rendered with custom shaders. A shader is a small program
written in
GLSL
that runs on the GPU. You may want to use a custom shader if you need to:
implement an effect not included with any of the built-in materials
combine many objects into a single BufferGeometry in order to
improve performance
ShaderMaterial
A ShaderMaterial will only be rendered properly by
WebGLRenderer, since the GLSL code in the
vertexShader
and fragmentShader
properties must be compiled and run on the GPU using WebGL.
As of THREE r72, directly assigning attributes in a ShaderMaterial is no
longer supported. A BufferGeometry instance must be used instead,
using BufferAttribute instances to define custom attributes.
Built in attributes and uniforms are passed to the shaders along with
your code. If you don't want the WebGLProgram to add anything to
your shader code, you can use RawShaderMaterial instead of this
class.
You can use the directive #pragma unroll_loop_start and #pragma
unroll_loop_end in order to unroll a for loop in GLSL by the shader
preprocessor. The directive has to be placed right above the loop. The
loop formatting has to correspond to a defined standard.
You can specify two different types of shaders for each material:
The vertex shader runs first; it receives attributes, calculates /
manipulates the position of each individual vertex, and passes
additional data (varyings) to the fragment shader.
The fragment ( or pixel ) shader runs second; it sets the color of
each individual "fragment" (pixel) rendered to the screen.
There are three types of variables in shaders: uniforms, attributes, and
varyings:
Uniforms are variables that have the same value for all vertices -
lighting, fog, and shadow maps are examples of data that would be
stored in uniforms. Uniforms can be accessed by both the vertex shader
and the fragment shader.
Attributes are variables associated with each vertex - for instance,
the vertex position, face normal, and vertex color are all examples of
data that would be stored in attributes. Attributes can only be
accessed within the vertex shader.
Varyings are variables that are passed from the vertex shader to the
fragment shader. For each fragment, the value of each varying will be
smoothly interpolated from the values of adjacent vertices.
Note that within the shader itself, uniforms and attributes act like
constants; you can only modify their values by passing different values
to the buffers from your JavaScript code.
Built-in attributes and uniforms
The WebGLRenderer provides many attributes and uniforms to
shaders by default; definitions of these variables are prepended to your
fragmentShader and vertexShader code by the WebGLProgram when
the shader is compiled; you don't need to declare them yourself. See
WebGLProgram for details of these variables.
Some of these uniforms or attributes (e.g. those pertaining lighting,
fog, etc.) require properties to be set on the material in order for
WebGLRenderer to copy the appropriate values to the GPU - make
sure to set these flags if you want to use these features in your own
shader.
If you don't want WebGLProgram to add anything to your shader
code, you can use RawShaderMaterial instead of this class.
Custom attributes and uniforms
Both custom attributes and uniforms must be declared in your GLSL shader
code (within vertexShader and/or fragmentShader). Custom uniforms
must additionally be defined in the uniforms property of your
ShaderMaterial, whereas any custom attributes must be defined via
BufferAttribute instances. Note that varyings only need to be
declared within the shader code (not within the material).
To declare a custom attribute, please reference the
BufferGeometry page for an overview, and the
BufferAttribute page for a detailed look at the BufferAttribute
API.
When creating your attributes, each typed array that you create to hold
your attribute's data must be a multiple of your data type's size. For
example, if your attribute is a THREE.Vector3 type, and
you have 3000 vertices in your BufferGeometry, your typed array
value must be created with a length of 3000 * 3, or 9000 (one value
per-component). A table of each data type's size is shown below for
reference:
Note that attribute buffers are not refreshed automatically when their
values change. To update custom attributes, set the needsUpdate flag
to true on the BufferAttribute of the geometry (see
BufferGeometry for further details).
To declare a custom Uniform, use the uniforms property:
uniforms: {
	time: { value: 1.0 },
	resolution: { value: new THREE.Vector2() }
}
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
Properties
See the base Material class for common properties.
When the rendered geometry doesn't include these attributes but the
material does, these default values will be passed to the shaders. This
avoids errors when buffer data is missing.
this.defaultAttributeValues = {
	'color': [ 1, 1, 1 ],
	'uv': [ 0, 0 ],
	'uv1': [ 0, 0 ]
};
Defines custom constants using #define directives within the GLSL code
for both the vertex shader and the fragment shader; each key/value pair
yields another directive:
defines: {
	FOO: 15,
	BAR: true
}
yields the lines
#define FOO 15
#define BAR true
in the GLSL code.
An object with the following properties:
this.extensions = {
	clipCullDistance: false, // set to use vertex shader clipping
	multiDraw: false // set to use vertex shader multi_draw / enable gl_DrawID
};
Fragment shader GLSL code. This is the actual code for the shader. In the
example above, the vertexShader and fragmentShader code is extracted
from the DOM; it could be passed as a string directly or loaded via AJAX
instead.
An object of the form:
{
	"uniform1": { value: 1.0 },
	"uniform2": { value: 2 }
}
specifying the uniforms to be passed to the shader code; keys are uniform
names, values are definitions of the form
{ value: 1.0 }
where value is the value of the uniform. Names must match the name of
the uniform, as defined in the GLSL code. Note that uniforms are refreshed
on every frame, so updating the value of the uniform will immediately
update the value available to the GLSL code.
Vertex shader GLSL code. This is the actual code for the shader. In the
example above, the vertexShader and fragmentShader code is extracted
from the DOM; it could be passed as a string directly or loaded via AJAX
instead.
Generates a shallow copy of this material. Note that the vertexShader and
fragmentShader are copied by reference, as are the definitions of the
attributes; this means that clones of the material will share the same
compiled WebGLProgram. However, the uniforms are copied by value, which
allows you to have different sets of uniforms for different copies of the
material.
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
Properties
See the base Material classes for common properties.
parameters - (optional) an object with one or more
properties defining the material's appearance. Any property of the
material (including any property inherited from Material) can be
passed in here.
The exception is the property color, which can be
passed in as a hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally. SpriteMaterials are not
clipped by using Material.clippingPlanes.
Properties
See the base Material class for common properties.
The alpha map is a grayscale texture that controls the opacity across the
surface (black: fully transparent; white: fully opaque). Default is
null.
Only the color of the texture is used, ignoring the alpha channel if one
exists. For RGB and RGBA textures, the WebGL renderer
will use the green channel when sampling this texture due to the extra bit
of precision provided for green in DXT-compressed and uncompressed RGB 565
formats. Luminance-only and luminance/alpha textures will also still work
as expected.
min - (optional) Vector2 representing the lower (x, y) boundary of the
box. Default is ( + Infinity, + Infinity ).
max - (optional) Vector2 representing the upper (x, y) boundary of the
box. Default is ( - Infinity, - Infinity ).
Expands this box equilaterally by vector. The width of this
box will be expanded by the x component of vector in both
directions. The height of this box will be expanded by the y component of
vector in both directions.
Returns the intersection of this and box, setting the upper
bound of this box to the lesser of the two boxes' upper bounds and the
lower bound of this box to the greater of the two boxes' lower bounds.
Returns true if this box includes zero points within its bounds.
Note that a box with equal lower and upper bounds still includes one
point, the one both bounds share.
min - (required) Vector2 representing the lower (x, y) boundary of the
box.
max - (required) Vector2 representing the upper (x, y) boundary of the
box.
Sets the lower and upper (x, y) boundaries of this box.
Please note that this method only copies the values from the given
objects.
Unions this box with box, setting the upper bound of this box
to the greater of the two boxes' upper bounds and the lower bound of this
box to the lesser of the two boxes' lower bounds.
Represents an axis-aligned bounding box (AABB) in 3D space.
Code Example
const box = new THREE.Box3();

const mesh = new THREE.Mesh(
	new THREE.SphereGeometry(),
	new THREE.MeshBasicMaterial()
);

// ensure the bounding box is computed for its geometry
// this should be done only once (assuming static geometries)
mesh.geometry.computeBoundingBox();

// ...

// in the animation loop, compute the current bounding box with the world matrix
box.copy( mesh.geometry.boundingBox ).applyMatrix4( mesh.matrixWorld );
min - (optional) Vector3 representing the lower (x, y, z) boundary of the
box. Default is ( + Infinity, + Infinity, + Infinity ).
max - (optional) Vector3 representing the upper (x, y, z) boundary of the
box. Default is ( - Infinity, - Infinity, - Infinity ).
object - Object3D to expand the box by.
precise - (optional) expand the bounding box as little as necessary at the
expense of more computation. Default is false.
Expands the boundaries of this box to include object and
its children, accounting for the object's, and children's, world
transforms. The function may result in a larger box than strictly
necessary (unless the precise parameter is set to true).
Expands this box equilaterally by vector. The width of this
box will be expanded by the x component of vector in both
directions. The height of this box will be expanded by the y component of
vector in both directions. The depth of this box will be
expanded by the z component of vector in both directions.
Computes the intersection of this and box, setting the upper
bound of this box to the lesser of the two boxes' upper bounds and the
lower bound of this box to the greater of the two boxes' lower bounds. If
there's no overlap, makes this box empty.
Returns true if this box includes zero points within its bounds.
Note that a box with equal lower and upper bounds still includes one
point, the one both bounds share.
object - Object3D to compute the bounding box
of.
precise - (optional) compute the smallest world-axis-aligned bounding box
at the expense of more computation. Default is false.
Computes the world-axis-aligned bounding box of an Object3D
(including its children), accounting for the object's, and children's,
world transforms. The function may result in a larger box than strictly
necessary.
Computes the union of this box and box, setting the upper
bound of this box to the greater of the two boxes' upper bounds and the
lower bound of this box to the lesser of the two boxes' lower bounds.
A Color instance is represented by RGB components in the linear working
color space, which defaults to LinearSRGBColorSpace. Inputs
conventionally using SRGBColorSpace (such as hexadecimals and CSS
strings) are converted to the working color space automatically.
// converted automatically from SRGBColorSpace to LinearSRGBColorSpace
const color = new THREE.Color().setHex( 0x112233 );
Source color spaces may be specified explicitly, to ensure correct
conversions.
// assumed already LinearSRGBColorSpace; no conversion
const color = new THREE.Color().setRGB( 0.5, 0.5, 0.5 );

// converted explicitly from SRGBColorSpace to LinearSRGBColorSpace
const color = new THREE.Color().setRGB( 0.5, 0.5, 0.5, SRGBColorSpace );
If THREE.ColorManagement is disabled, no conversions occur. For details,
see Color management.
Iterating through a Color instance will yield its components (r, g, b) in
the corresponding order.
Code Examples
A Color can be initialised in any of the following ways:
// empty constructor - will default white
const color1 = new THREE.Color();

// Hexadecimal color (recommended)
const color2 = new THREE.Color( 0xff0000 );

// RGB string
const color3 = new THREE.Color( "rgb(255, 0, 0)" );
const color4 = new THREE.Color( "rgb(100%, 0%, 0%)" );

// X11 color name - all 140 color names are supported.
// Note the lack of CamelCase in the name
const color5 = new THREE.Color( 'skyblue' );

// HSL string
const color6 = new THREE.Color( "hsl(0, 100%, 50%)" );

// Separate RGB values between 0 and 1
const color7 = new THREE.Color( 1, 0, 0 );
r - (optional) If arguments g and b are defined, the red
component of the color. If they are not defined, it can be a
hexadecimal triplet (recommended), a CSS-style string, or another Color
instance.
g - (optional) If it is defined, the green component of the color.
b - (optional) If it is defined, the blue component of the color.
Note that the standard method of specifying color in three.js is with a
hexadecimal triplet, and that method is used throughout the rest of the
documentation.
When all arguments are defined then r is the
red component, g is the green component and b is
the blue component of the color.
When only r is defined:
color - color to converge on. alpha - interpolation factor in the closed interval [0,1].
Linearly interpolates this color's RGB values toward the RGB values of the
passed argument. The alpha argument can be thought of as the ratio between
the two colors, where 0.0 is this color and 1.0 is the first argument.
color1 - the starting Color. color2 - Color to interpolate towards. alpha - interpolation factor, typically in the closed
interval [0,1].
Sets this color to be the color linearly interpolated between color1
and color2 where alpha is the percent distance along
the line connecting the two colors - alpha = 0 will be color1,
and alpha = 1 will be color2.
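The per-component arithmetic behind this interpolation can be sketched in plain JavaScript. This is a simplified model using `{ r, g, b }` objects rather than actual Color instances, so it ignores color-space conversion:

```javascript
// Linear interpolation of a single scalar.
function lerp( x, y, alpha ) {

	return x + ( y - x ) * alpha;

}

// Sketch of lerpColors: interpolate each RGB component independently.
// Plain { r, g, b } objects stand in for THREE.Color instances here.
function lerpColors( c1, c2, alpha ) {

	return {
		r: lerp( c1.r, c2.r, alpha ),
		g: lerp( c1.g, c2.g, alpha ),
		b: lerp( c1.b, c2.b, alpha ),
	};

}

const mid = lerpColors( { r: 0, g: 0, b: 0 }, { r: 1, g: 0.5, b: 0 }, 0.5 );
// mid is { r: 0.5, g: 0.25, b: 0 }
```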
color - color to converge on. alpha - interpolation factor in the closed interval [0,1].
Linearly interpolates this color's HSL values toward the HSL values of the
passed argument. It differs from the classic .lerp by not
interpolating straight from one color to the other, but instead going
through all the hues in between those two colors. The alpha argument can
be thought of as the ratio between the two colors, where 0.0 is this color
and 1.0 is the first argument.
# .offsetHSL ( h : Float, s : Float, l : Float ) : this
Adds the given h, s, and l to this
color's values. Internally, this converts the color's r,
g and b values to HSL, adds h,
s, and l, and then converts the color back to
RGB.
r - (optional) If arguments g and b are defined, the red component of the color. If they are
not defined, it can be a hexadecimal triplet (recommended), a CSS-style string, or another Color instance. g - (optional) If it is defined, the green component of the color. b - (optional) If it is defined, the blue component of the color.
See the Constructor above for full details about possible arguments. Delegates to .copy,
.setStyle, .setRGB or .setHex depending on input type.
Sets this color from a CSS-style string. For example, "rgb(250, 0,0)",
"rgb(100%, 0%, 0%)", "hsl(0, 100%, 50%)", "#ff0000", "#f00", or "red" ( or
any X11 color name -
all 140 color names are supported ).
Translucent colors such as "rgba(255, 0, 0, 0.5)" and "hsla(0, 100%, 50%,
0.5)" are also accepted, but the alpha-channel coordinate will be
discarded.
Note that for X11 color names, multiple words such as Dark Orange become
the string 'darkorange'.
Subtracts the RGB components of the given color from the RGB components of
this color. If this results in a negative component, that component is set
to zero.
radius - distance from the origin to a point in the x-z
plane. Default is 1.0. theta - counterclockwise angle in the x-z plane measured in
radians from the positive z-axis. Default is 0. y - height above the x-z plane. Default is 0.
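The mapping from these cylindrical coordinates to Cartesian space can be sketched as follows. The plain-object return value is a simplification for illustration; the conventions (theta measured from +z, y as height) match the description above:

```javascript
// Convert cylindrical coordinates ( radius, theta, y ) to Cartesian ( x, y, z ).
// theta is measured counterclockwise in the x-z plane from the positive z-axis,
// so theta = 0 lies on +z and theta = PI / 2 lies on +x.
function cylindricalToCartesian( radius, theta, y ) {

	return {
		x: radius * Math.sin( theta ),
		y: y,
		z: radius * Math.cos( theta ),
	};

}

// theta = 0 sits on the positive z-axis:
const p = cylindricalToCartesian( 2, 0, 5 );
// p is { x: 0, y: 5, z: 2 }
```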
Euler angles describe a rotational transformation by rotating an object on
its various axes in specified amounts per axis, and a specified axis
order.
Iterating through a Euler instance will yield its components (x, y, z,
order) in the corresponding order.
Code Example
const a = new THREE.Euler( 0, 1, 1.57, 'XYZ' );
const b = new THREE.Vector3( 1, 0, 1 );
b.applyEuler( a );
Constructor
Euler( x : Float, y : Float, z : Float, order : String )
x - (optional) the angle of the x axis in radians. Default is
0. y - (optional) the angle of the y axis in radians. Default is
0. z - (optional) the angle of the z axis in radians. Default is
0. order - (optional) a string representing the order that the
rotations are applied, defaults to 'XYZ' (must be upper case).
The order in which to apply rotations. Default is 'XYZ', which means that
the object will first be rotated around its X axis, then its Y axis and
finally its Z axis. Other possibilities are: 'YZX', 'ZXY', 'XZY', 'YXZ'
and 'ZYX'. These must be in upper case.
Three.js uses intrinsic Tait-Bryan angles. This means that rotations are
performed with respect to the local coordinate system. That is, for
order 'XYZ', the rotation is first around the local-X axis (which is the
same as the world-X axis), then around local-Y (which may now be different
from the world Y-axis), then local-Z (which may be different from the
world Z-axis).
array of length 3 or 4. The optional 4th argument corresponds
to the order.
Assigns this euler's x angle to array[0].
Assigns this euler's y angle to array[1].
Assigns this euler's z angle to array[2].
Optionally assigns this euler's order to array[3].
Resets the euler angle with a new order by creating a quaternion from this
euler angle and then setting this euler angle with the quaternion and the
new order.
Warning: this discards revolution information.
# .set ( x : Float, y : Float, z : Float, order : String ) : this
x - the angle of the x axis in radians. y - the angle of the y axis in radians. z - the angle of the z axis in radians. order - (optional) a string representing the order that the
rotations are applied.
Sets the angles of this euler transform and optionally the order.
m - a Matrix4 of which the upper 3x3 of matrix is a
pure rotation matrix
(i.e. unscaled). order - (optional) a string representing the order that the
rotations are applied.
Sets the angles of this euler transform from a pure rotation matrix based
on the orientation specified by order.
q - a normalized quaternion. order - (optional) a string representing the order that the
rotations are applied.
Sets the angles of this euler transform from a normalized quaternion based
on the orientation specified by order.
Frustums are used to determine
what is inside the camera's field of view. They help speed up the
rendering process - objects which lie outside a camera's frustum can
safely be excluded from rendering.
This class is mainly intended for use internally by a renderer for
calculating a camera or shadowCamera's frustum.
p0 - (optional) defaults to a new Plane. p1 - (optional) defaults to a new Plane. p2 - (optional) defaults to a new Plane. p3 - (optional) defaults to a new Plane. p4 - (optional) defaults to a new Plane. p5 - (optional) defaults to a new Plane.
parameterPositions -- array of positions
sampleValues -- array of samples
sampleSize -- number of samples
resultBuffer -- buffer to store the interpolation results.
point - return the closest point on the line to this
point. clampToLine - whether to clamp the returned value to the
line segment. target — the result will be copied into this Vector3.
Returns the closest point on the line. If clampToLine is
true, then the returned value will be clamped to the line segment.
point - the point for which to return a point parameter.
clampToLine - Whether to clamp the result to the range [0,1].
Returns a point parameter based on the closest point as projected on the
line segment. If clampToLine is true, then the returned
value will be between 0 and 1.
# .inverseLerp ( x : Float, y : Float, value : Float ) : Float
x - Start point. y - End point. value - A value between start and end.
Returns the percentage in the closed interval [0,1] of the given value
between the start and end point.
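The underlying formula can be sketched in plain JavaScript (a sketch of the computation, not the three.js source itself):

```javascript
// Sketch of inverseLerp: the fraction of the way `value` sits between x and y.
function inverseLerp( x, y, value ) {

	if ( x === y ) return 0; // degenerate range

	return ( value - x ) / ( y - x );

}

const t = inverseLerp( 2, 10, 6 );
// t is 0.5, since 6 lies halfway between 2 and 10
```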
# .lerp ( x : Float, y : Float, t : Float ) : Float
x - Start point. y - End point. t - interpolation factor in the closed interval [0,1].
Returns a value linearly interpolated
from two known points based on the given interval -
t = 0 will return x and t = 1 will
return y.
# .damp ( x : Float, y : Float, lambda : Float, dt : Float ) : Float
x - Current point. y - Target point. lambda - A higher lambda value will make the movement more
sudden, and a lower value will make the movement more gradual. dt - Delta time in seconds.
Smoothly interpolate a number from x toward y in
a spring-like manner using the dt to maintain frame rate
independent movement. For details, see
Frame rate independent damping using lerp.
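A plain-JavaScript sketch of the idea: feeding `1 - exp( -lambda * dt )` into a standard lerp makes the damping compose across frames, so two half-length timesteps land exactly where one full timestep does:

```javascript
function lerp( x, y, t ) {

	return x + ( y - x ) * t;

}

// Sketch of damp: exponential decay toward y. Using exp( -lambda * dt )
// makes the motion independent of the frame rate.
function damp( x, y, lambda, dt ) {

	return lerp( x, y, 1 - Math.exp( - lambda * dt ) );

}

// Two half-steps land where one full step does (up to floating-point error):
const oneStep = damp( 0, 10, 2, 0.5 );
const twoSteps = damp( damp( 0, 10, 2, 0.25 ), 10, 2, 0.25 );
```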
x — Value to be mapped. a1 — Minimum value for range A. a2 — Maximum value for range A. b1 — Minimum value for range B. b2 — Maximum value for range B.
Linear mapping of x from range [a1, a2] to range [b1, b2].
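The remapping formula can be sketched as (a sketch of the computation, not the three.js source itself):

```javascript
// Sketch of mapLinear: remap x from range [ a1, a2 ] to range [ b1, b2 ].
function mapLinear( x, a1, a2, b1, b2 ) {

	return b1 + ( x - a1 ) * ( b2 - b1 ) / ( a2 - a1 );

}

const mapped = mapLinear( 5, 0, 10, 100, 200 );
// mapped is 150: halfway through [0, 10] maps to halfway through [100, 200]
```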
Deterministic pseudo-random float in the interval [0,1]. The integer
seed is optional.
# .smoothstep ( x : Float, min : Float, max : Float ) : Float
x - The value to evaluate based on its position between min
and max. min - Any x value below min will be 0. max - Any x value above max will be 1.
Returns a value between 0 and 1 that represents the percentage that x has
moved between min and max, but smoothed or slowed down the closer x is to
min and max.
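The classic smoothstep curve can be sketched as follows: clamp, normalize, then apply the Hermite polynomial 3t² - 2t³ (a sketch of the standard formula, not the three.js source itself):

```javascript
// Sketch of smoothstep: eased interpolation between 0 and 1.
function smoothstep( x, min, max ) {

	if ( x <= min ) return 0;
	if ( x >= max ) return 1;

	// Normalize x into [0, 1], then ease with the Hermite polynomial,
	// which has zero slope at both ends.
	const t = ( x - min ) / ( max - min );
	return t * t * ( 3 - 2 * t );

}

const half = smoothstep( 5, 0, 10 );
// half is 0.5; the curve passes through the midpoint unchanged
```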
q - the quaternion to be set a - the rotation applied to the first axis, in radians b - the rotation applied to the second axis, in radians
c - the rotation applied to the third axis, in radians order - a string specifying the axes order: 'XYX', 'XZX',
'YXY', 'YZY', 'ZXZ', or 'ZYZ'
Sets quaternion q from the
intrinsic Proper Euler Angles
defined by angles a, b, and c,
and order order.
Rotations are applied to the axes in the order specified by order:
rotation by angle a is applied first, then by angle
b, then by angle c. Angles are in radians.
The constructor and set() method take arguments in
row-major
order, while internally they are stored in the elements
array in column-major order.
This means that calling
m.set( 11, 12, 21, 22 );
will result in the elements array containing:
m.elements = [ 11, 21, 12, 22 ];
and internally all calculations are performed using column-major ordering.
However, as the actual ordering makes no difference mathematically and
most people are used to thinking about matrices in row-major order, the
three.js documentation shows matrices in row-major order. Just bear in
mind that if you are reading the source code, you'll have to take the
transpose of any matrices
outlined here to make sense of the calculations.
Creates a 2x2 matrix with the given arguments in row-major order. If no arguments are provided, the constructor initializes
the Matrix2 to the 2x2 identity matrix.
The constructor and set() method take arguments in
row-major
order, while internally they are stored in the elements
array in column-major order.
This means that calling
m.set( 11, 12, 13, 21, 22, 23, 31, 32, 33 );
will result in the elements array containing:
m.elements = [ 11, 21, 31, 12, 22, 32, 13, 23, 33 ];
and internally all calculations are performed using column-major ordering.
However, as the actual ordering makes no difference mathematically and
most people are used to thinking about matrices in row-major order, the
three.js documentation shows matrices in row-major order. Just bear in
mind that if you are reading the source code, you'll have to take the
transpose of any matrices
outlined here to make sense of the calculations.
Creates a 3x3 matrix with the given arguments in row-major order. If no arguments are provided, the constructor initializes
the Matrix3 to the 3x3 identity matrix.
Inverts this matrix, using the
analytic method.
A matrix with a determinant of zero cannot be inverted. If you
attempt this, the method produces a zero matrix instead.
v a translation transform from vector.
or x - the amount to translate in the X axis. y - the amount to translate in the Y axis.
Sets this matrix as a 2D translation transform:
Set this matrix to the upper 3x3 matrix of the Matrix4 m.
# .setUvTransform ( tx : Float, ty : Float, sx : Float, sy : Float, rotation : Float, cx : Float, cy : Float ) : this
tx - offset x ty - offset y sx - repeat x sy - repeat y rotation - rotation, in radians. Positive values rotate
counterclockwise cx - center x of rotation cy - center y of rotation
Sets the UV transform matrix from offset, repeat, rotation, and center.
array - (optional) array to store the resulting vector in. If
not given a new array will be created. offset - (optional) offset in the array at which to put the
result.
Writes the elements of this matrix to an array in
column-major format.
The most common use of a 4x4 matrix in 3D computer graphics is as a
Transformation Matrix.
For an introduction to transformation matrices as used in WebGL,
check out
this tutorial.
This allows a Vector3 representing a point in 3D space to undergo
transformations such as translation, rotation, shear, scale, reflection,
orthogonal or perspective projection and so on, by being multiplied by the
matrix. This is known as applying the matrix to the vector.
Object3D.matrix: This stores the local transform of the object.
This is the object's transformation relative to its parent.
Object3D.matrixWorld: The global or world transform of the
object. If the object has no parent, then this is identical to the local
transform stored in matrix.
Object3D.modelViewMatrix: This represents the object's
transformation relative to the camera's coordinate system. An object's
modelViewMatrix is the object's matrixWorld pre-multiplied by the
camera's matrixWorldInverse.
The constructor and set() method take arguments in
row-major
order, while internally they are stored in the elements array in column-major order.
This means that calling
const m = new THREE.Matrix4();
m.set( 11, 12, 13, 14, 21, 22, 23, 24, 31, 32, 33, 34, 41, 42, 43, 44 );
will result in the elements array containing:
m.elements = [ 11, 21, 31, 41, 12, 22, 32, 42, 13, 23, 33, 43, 14, 24, 34, 44 ];
and internally all calculations are performed using column-major ordering.
However, as the actual ordering makes no difference mathematically and
most people are used to thinking about matrices in row-major order, the
three.js documentation shows matrices in row-major order. Just bear in
mind that if you are reading the source code, you'll have to take the
transpose of any matrices
outlined here to make sense of the calculations.
Extracting position, rotation and scale
There are several options available for extracting position, rotation and
scale from a Matrix4.
Creates a 4x4 matrix with the given arguments in row-major order. If no arguments are provided, the constructor initializes
the Matrix4 to the 4x4 identity matrix.
Note: Not all matrices are decomposable in this way. For example, if an
object has a non-uniformly scaled parent, then the object's world matrix
may not be decomposable, and this method may not be appropriate.
Inverts this matrix, using the
analytic method.
A matrix with a determinant of zero cannot be inverted. If you
attempt this, the method produces a zero matrix instead.
axis — Rotation axis, should be normalized. theta — Rotation angle in radians.
Sets this matrix as rotation transform around axis by
theta radians.
This is a somewhat controversial but mathematically sound alternative to
rotating via Quaternions. See the discussion
here.
Sets the rotation component (the upper left 3x3 matrix) of this matrix to
the rotation specified by the given Euler Angle. The rest of
the matrix is set to the identity. Depending on the order
of the euler, there are six possible outcomes. See
this page for a complete list.
Sets the rotation component of this matrix to the rotation specified by
q, as outlined
here. The
rest of the matrix is set to the identity. So, given q =
w + xi + yj + zk, the resulting matrix will be:
xy - the amount to shear X by Y. xz - the amount to shear X by Z. yx - the amount to shear Y by X. yz - the amount to shear Y by Z. zx - the amount to shear Z by X. zy - the amount to shear Z by Y.
normal - (optional) a unit length Vector3 defining
the normal of the plane. Default is (1,0,0). constant - (optional) the signed distance from the origin to
the plane. Default is 0.
Apply a Matrix4 to the plane. The matrix must be an affine, homogeneous
transform.
If supplying an optionalNormalMatrix, it can be created
like so:
const optionalNormalMatrix = new THREE.Matrix3().getNormalMatrix( matrix );
line - the Line3 to check for intersection. target — the result will be copied into this Vector3.
Returns the intersection point of the passed line and the plane. Returns
null if the line does not intersect. Returns the line's starting point if
the line is coplanar with the plane.
# .set ( normal : Vector3, constant : Float ) : this
normal - a unit length Vector3 defining the normal
of the plane. constant - the signed distance from the origin to the plane.
Sets this plane's normal and constant
properties by copying the values from the given normal.
# .setComponents ( x : Float, y : Float, z : Float, w : Float ) : this
x - x value of the unit length normal vector. y - y value of the unit length normal vector. z - z value of the unit length normal vector. w - the value of the plane's constant
property.
Set the individual components that define the plane.
Translates the plane by the distance defined by the offset
vector. Note that this only affects the plane constant and will not affect
the normal vector.
Returns the rotational conjugate of this quaternion. The conjugate of a
quaternion represents the same rotation in the opposite direction about
the rotational axis.
Computes the squared
Euclidean length
(straight-line length) of this quaternion, considered as a 4 dimensional
vector. This can be useful if you are comparing the lengths of two
quaternions, as this is a slightly more efficient calculation than
length().
qb - The other quaternion rotation t - interpolation factor in the closed interval [0,1].
Handles the spherical linear interpolation between quaternions.
t represents the amount of rotation between this quaternion
(where t is 0) and qb (where t
is 1). This quaternion is set to the result. Also see the static version
of the slerp below.
// rotate a mesh towards a target quaternion
mesh.quaternion.slerp( endQuaternion, 0.01 );
m - a Matrix4 of which the upper 3x3 of matrix is a
pure rotation matrix
(i.e. unscaled).
Sets this quaternion from rotation component of m.
Adapted from the method
here.
Sets this quaternion to the rotation required to rotate direction vector
vFrom to direction vector vTo.
Adapted from the method
here. vFrom and vTo are assumed to be normalized.
array - An optional array to store the quaternion. If not
specified, a new array will be created. offset - (optional) if specified, the result will be copied
into this Array.
Returns the numerical elements of this quaternion in an array of format
[x, y, z, w].
dst - The output array. dstOffset - An offset into the output array. src0 - The source array of the starting quaternion. srcOffset0 - An offset into the array src0. src1 - The source array of the target quaternion. srcOffset1 - An offset into the array src1. t - Normalized interpolation factor (between 0 and 1).
This SLERP implementation assumes the quaternion data are managed in flat
arrays.
dst - The output array. dstOffset - An offset into the output array. src0 - The source array of the starting quaternion. srcOffset0 - An offset into the array src0. src1 - The source array of the target quaternion. srcOffset1 - An offset into the array src1.
This multiplication implementation assumes the quaternion data are managed
in flat arrays.
A ray that emits from an origin in a certain direction. This is used by
the Raycaster to assist with
raycasting. Raycasting is
used for mouse picking (working out what objects in the 3D space the mouse
is over) amongst other things.
origin - (optional) the origin of the Ray. Default
is a Vector3 at (0, 0, 0). direction - Vector3 The direction of the Ray.
This must be normalized (with Vector3.normalize) for the methods to
operate properly. Default is a Vector3 at (0, 0, -1).
v0 - the start of the line segment. v1 - the end of the line segment.
optionalPointOnRay - (optional) if this is provided, it receives the point
on this Ray that is closest to the segment.
optionalPointOnSegment - (optional) if this is provided, it receives the
point on the line segment that is closest to this Ray.
Get the squared distance between this Ray and a line segment.
a, b, c - The Vector3
points making up the triangle. backfaceCulling - whether to use backface culling. target — the result will be copied into this Vector3.
Intersect this Ray with a triangle, returning the intersection
point or null if there is no intersection.
point - Vector3 The point to clamp. target — the result will be copied into this Vector3.
Clamps a point within the sphere. If the point is outside the sphere, it
will clamp it to the closest point on the edge of the sphere. Points
already inside the sphere will not be affected.
Checks to see if the sphere is empty (the radius is set to a negative
number).
Spheres with a radius of 0 contain only their center point and are not
considered to be empty.
Computes the minimum bounding sphere for an array of points.
If optionalCenter is given, it is used as the sphere's
center. Otherwise, the center of the axis-aligned bounding box
encompassing points is calculated.
radius - the radius, or the
Euclidean distance
(straight-line distance) from the point to the origin. Default is
1.0. phi - polar angle in radians from the y (up) axis. Default is
0. theta - equator angle in radians around the y (up) axis.
Default is 0.
The poles (phi) are at the positive and negative y axis. The equator
(theta) starts at positive z.
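The mapping from these spherical coordinates to Cartesian space can be sketched as follows. The plain-object return value is a simplification for illustration; the conventions (phi from +y, theta starting at +z) match the description above:

```javascript
// Convert spherical coordinates ( radius, phi, theta ) to Cartesian ( x, y, z ).
// phi is the polar angle from the +y (up) axis; theta sweeps around +y,
// starting from the +z axis.
function sphericalToCartesian( radius, phi, theta ) {

	const sinPhi = Math.sin( phi );

	return {
		x: radius * sinPhi * Math.sin( theta ),
		y: radius * Math.cos( phi ),
		z: radius * sinPhi * Math.cos( theta ),
	};

}

// phi = 0 points straight up the y-axis:
const up = sphericalToCartesian( 1, 0, 0 );
// up is { x: 0, y: 1, z: 0 }
```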
a - the first corner of the triangle. Default is a
Vector3 at (0,0,0). b - the second corner of the triangle. Default is a
Vector3 at (0,0,0). c - the final corner of the triangle. Default is a
Vector3 at (0,0,0).
point - Position of interpolated point. p1 - Position of first vertex. p2 - Position of second vertex. p3 - Position of third vertex. v1 - Value of first vertex. v2 - Value of second vertex. v3 - Value of third vertex. target — Result will be copied into this Vector.
Returns the value barycentrically interpolated for the given point on the
triangle. Returns null if the triangle is degenerate.
attribute - The attribute to interpolate.
p1 - Index of first vertex.
p2 - Index of second vertex.
p3 - Index of third vertex.
barycoord - The barycoordinate value to use to interpolate. target — Result will be copied into this Vector.
Returns the value barycentrically interpolated for the given attribute and indices.
Class representing a 2D vector.
A 2D vector is an ordered pair of numbers (labeled x and y),
which can be used to represent a number of things, such as:
A point in 2D space (i.e. a position on a plane).
A direction and length across a plane. In three.js the length will
always be the Euclidean distance
(straight-line distance) from (0,0) to (x, y)
and the direction is also measured from (0,0) towards (x, y).
Any arbitrary ordered pair of numbers.
There are other things a 2D vector can be used to represent, such as
momentum vectors, complex numbers and so on, however these are the most
common uses in three.js.
Iterating through a Vector2 instance will yield its components (x, y) in
the corresponding order.
Code Example
const a = new THREE.Vector2( 0, 1 );

// no arguments; will be initialised to (0, 0)
const b = new THREE.Vector2();

const d = a.distanceTo( b );
Constructor
Vector2( x : Float, y : Float )
x - the x value of this vector. Default is 0. y - the y value of this vector. Default is 0.
Computes the squared distance from this vector to v. If you
are just comparing the distance with another distance, you should compare
the distance squared instead as it is slightly more efficient to
calculate.
Calculates the cross product
of this vector and v. Note that a 'cross-product'
in 2D is not well-defined. This function computes a geometric
cross-product often used in 2D graphics.
Computes the square of the
Euclidean length
(straight-line length) from (0, 0) to (x, y). If you are comparing the
lengths of vectors, you should compare the length squared instead as it is
slightly more efficient to calculate.
v - Vector2 to interpolate towards. alpha - interpolation factor, typically in the closed
interval [0,1].
Linearly interpolates between this vector and v, where
alpha is the percent distance along the line - alpha = 0 will be this
vector, and alpha = 1 will be v.
v1 - the starting Vector2. v2 - Vector2 to interpolate towards. alpha - interpolation factor, typically in the closed
interval [0,1].
Sets this vector to be the vector linearly interpolated between
v1 and v2 where alpha is the percent
distance along the line connecting the two vectors - alpha = 0 will be
v1, and alpha = 1 will be v2.
array - (optional) array to store this vector to. If this is
not provided, a new array will be created. offset - (optional) optional offset into the array.
Returns an array [x, y], or copies x and y into the provided array.
Class representing a 3D vector.
A 3D vector is an ordered triplet of numbers (labeled x, y, and
z), which can be used to represent a number of things, such as:
A point in 3D space.
A direction and length in 3D space. In three.js the length will always
be the Euclidean distance
(straight-line distance) from (0,0,0) to (x, y, z) and
the direction is also measured from (0,0,0) towards (x, y, z).
Any arbitrary ordered triplet of numbers.
There are other things a 3D vector can be used to represent, such as
momentum vectors and so on, however these are the most common uses in
three.js.
Iterating through a Vector3 instance will yield its components (x, y, z)
in the corresponding order.
Code Example
const a = new THREE.Vector3( 0, 1, 0 );

// no arguments; will be initialised to (0, 0, 0)
const b = new THREE.Vector3();

const d = a.distanceTo( b );
Constructor
Vector3( x : Float, y : Float, z : Float )
x - the x value of this vector. Default is 0. y - the y value of this vector. Default is 0. z - the z value of this vector. Default is 0.
Computes the squared distance from this vector to v. If you
are just comparing the distance with another distance, you should compare
the distance squared instead as it is slightly more efficient to
calculate.
Computes the square of the
Euclidean length
(straight-line length) from (0, 0, 0) to (x, y, z). If you are comparing
the lengths of vectors, you should compare the length squared instead as
it is slightly more efficient to calculate.
v - Vector3 to interpolate towards. alpha - interpolation factor, typically in the closed
interval [0,1].
Linearly interpolate between this vector and v, where alpha
is the percent distance along the line - alpha = 0 will be this vector,
and alpha = 1 will be v.
v1 - the starting Vector3. v2 - Vector3 to interpolate towards. alpha - interpolation factor, typically in the closed
interval [0,1].
Sets this vector to be the vector linearly interpolated between
v1 and v2 where alpha is the percent
distance along the line connecting the two vectors - alpha = 0 will be
v1, and alpha = 1 will be v2.
array - (optional) array to store this vector to. If this is
not provided a new array will be created. offset - (optional) optional offset into the array.
Returns an array [x, y, z], or copies x, y and z into the provided
array.
Class representing a 4D vector.
A 4D vector is an ordered quadruplet of numbers (labeled x, y, z,
and w), which can be used to represent a number of things, such as:
A point in 4D space.
A direction and length in 4D space. In three.js the length will always
be the Euclidean distance
(straight-line distance) from (0,0,0,0) to (x, y, z, w)
and the direction is also measured from (0,0,0,0) towards (x, y, z, w).
Any arbitrary ordered quadruplet of numbers.
There are other things a 4D vector can be used to represent, however these
are the most common uses in three.js.
Iterating through a Vector4 instance will yield its components (x, y, z, w) in the corresponding order.
Code Example
const a = new THREE.Vector4( 0, 1, 0, 0 );

// no arguments; will be initialised to (0, 0, 0, 1)
const b = new THREE.Vector4();

const d = a.dot( b );
Constructor
Vector4( x : Float, y : Float, z : Float, w : Float )
x - the x value of this vector. Default is 0. y - the y value of this vector. Default is 0. z - the z value of this vector. Default is 0. w - the w value of this vector. Default is 1.
array - the source array. offset - (optional) offset into the array. Default is 0.
Sets this vector's x value to be array[ offset + 0 ], y
value to be array[ offset + 1 ], z value to be array[ offset + 2 ]
and w value to be array[ offset + 3 ].
If index equals 0 returns the x value.
If index equals 1 returns the y value.
If index equals 2 returns the z value.
If index equals 3 returns the w value.
Computes the square of the
Euclidean length
(straight-line length) from (0,0,0,0) to (x, y, z, w). If you are
comparing the lengths of vectors, you should compare the length squared
instead as it is slightly more efficient to calculate.
v - Vector4 to interpolate towards. alpha - interpolation factor, typically in the closed
interval [0,1].
Linearly interpolates between this vector and v, where
alpha is the percent distance along the line - alpha = 0 will be this
vector, and alpha = 1 will be v.
v1 - the starting Vector4. v2 - Vector4 to interpolate towards. alpha - interpolation factor, typically in the closed
interval [0,1].
Sets this vector to be the vector linearly interpolated between
v1 and v2 where alpha is the percent
distance along the line connecting the two vectors - alpha = 0 will be
v1, and alpha = 1 will be v2.
array - (optional) array to store this vector to. If this is
not provided, a new array will be created. offset - (optional) optional offset into the array.
Returns an array [x, y, z, w], or copies x, y, z and w into the provided
array.
parameterPositions -- array of positions
sampleValues -- array of samples
sampleSize -- number of samples
resultBuffer -- buffer to store the interpolation results.
parameterPositions -- array of positions
sampleValues -- array of samples
sampleSize -- number of samples
resultBuffer -- buffer to store the interpolation results.
parameterPositions -- array of positions
sampleValues -- array of samples
sampleSize -- number of samples
resultBuffer -- buffer to store the interpolation results.
parameterPositions -- array of positions
sampleValues -- array of samples
sampleSize -- number of samples
resultBuffer -- buffer to store the interpolation results.
A special version of Mesh with multi draw batch rendering support. Use
BatchedMesh if you have to render a large number of objects with the same
material but with different geometries or world transformations. Using
BatchedMesh will help you reduce the number of draw calls and thus improve the overall
rendering performance in your application.
maxInstanceCount - the max number of individual instances planned to be added and rendered. maxVertexCount - the max number of vertices to be used by all unique geometries. maxIndexCount - the max number of indices to be used by all unique geometries. material - an instance of Material. Default is a new MeshBasicMaterial.
If true, the individual objects within the BatchedMesh are sorted to reduce overdraw-related artifacts.
If the material is marked as "transparent", objects are rendered back to front; if not, they are
rendered front to back. Default is true.
Computes the bounding box, updating .boundingBox attribute.
Bounding boxes aren't computed by default. They need to be explicitly
computed, otherwise they are null.
Computes the bounding sphere, updating .boundingSphere
attribute.
Bounding spheres aren't computed by default. They need to be explicitly
computed, otherwise they are null.
Takes a sort function that is run before render. The function takes a list of instances to sort and a camera. The objects
in the list include a "z" field to perform a depth-ordered sort with.
geometry: The geometry to add into the BatchedMesh.
reservedVertexRange: Optional parameter specifying the amount of vertex buffer space to reserve for the added geometry. This
is necessary if you plan to set a larger geometry at this index at a later time. Defaults to
the length of the given geometry's vertex buffer.
reservedIndexRange: Optional parameter specifying the amount of index buffer space to reserve for the added geometry. This
is necessary if you plan to set a larger geometry at this index at a later time. Defaults to
the length of the given geometry's index buffer.
Adds the given geometry to the BatchedMesh and returns the associated geometry id, to be used in other functions.
geometryId: The id of a geometry to remove from the BatchedMesh that was previously added via "addGeometry". Any instances referencing
this geometry will also be removed as a side effect.
geometryId: The id of a previously added geometry via "addGeometry" to add into the BatchedMesh to render.
Adds a new instance to the BatchedMesh using the geometry of the given geometryId, and returns a new id referring to the
instance, to be used by other functions.
geometryId: Which geometry id to replace with this geometry.
geometry: The geometry to substitute at the given geometry id.
Replaces the geometry at geometryId with the provided geometry. Throws an error if there is not enough space reserved for the geometry.
Calling this will change all instances that are rendering that geometry.
Resizes the available space in BatchedMesh's vertex and index buffer attributes to the provided sizes. If the provided arguments shrink the geometry buffers
but there is not enough unused space at the end of the geometry attributes then an error is thrown.
maxVertexCount - the max number of vertices to be used by all unique geometries to resize to.
maxIndexCount - the max number of indices to be used by all unique geometries to resize to.
Resizes the necessary buffers to support the provided number of instances. If the provided arguments shrink the number of instances but there are not enough
unused ids at the end of the list then an error is thrown.
maxInstanceCount - the max number of individual instances that can be added and rendered by the BatchedMesh.
A special version of the Group object that defines clipping planes for descendant objects.
ClippingGroups can be nested, with clipping planes accumulating by type: intersection or union.
Note: ClippingGroup is only supported with WebGPURenderer.
User-defined clipping planes specified as THREE.Plane objects in world
space. These planes apply to the objects that are children of this ClippingGroup.
Points in space whose signed distance to the plane is negative are clipped
(not rendered). See the webgpu / clipping example. Default is [].
This is almost identical to an Object3D. Its purpose is to
make working with groups of objects syntactically clearer.
Code Example
const geometry = new THREE.BoxGeometry( 1, 1, 1 );
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );

const cubeA = new THREE.Mesh( geometry, material );
cubeA.position.set( 100, 100, 0 );

const cubeB = new THREE.Mesh( geometry, material );
cubeB.position.set( -100, -100, 0 );

// create a group and add the two cubes
// These cubes can now be rotated / scaled etc as a group
const group = new THREE.Group();
group.add( cubeA );
group.add( cubeB );

scene.add( group );
Constructor
Group( )
Properties
See the base Object3D class for common properties.
A special version of Mesh with instanced rendering support. Use
InstancedMesh if you have to render a large number of objects with the same
geometry and material(s) but with different world transformations. The usage
of InstancedMesh will help you to reduce the number of draw calls and thus
improve the overall rendering performance in your application.
The number of instances. The count value passed into the constructor
represents the maximum number of instances of this mesh. You can change
the number of instances at runtime to an integer value in the range [0, count].
If you need more instances than the original count value, you have to
create a new InstancedMesh.
Computes the bounding box of the instanced mesh, and updates the .boundingBox attribute.
The bounding box is not computed by the engine; it must be computed by your app.
You may need to recompute the bounding box if an instance is transformed via .setMatrixAt().
Computes the bounding sphere of the instanced mesh, and updates the .boundingSphere attribute.
The engine automatically computes the bounding sphere when it is needed, e.g., for ray casting or view frustum culling.
You may need to recompute the bounding sphere if an instance is transformed via .setMatrixAt().
index: The index of an instance. Values have to be in the
range [0, count].
matrix: A 4x4 matrix representing the local transformation
of a single instance.
Sets the given local transformation matrix to the defined instance. Make
sure you set .instanceMatrix.needsUpdate
to true after updating all the matrices.
An array of weights typically from 0-1 that specify how much of the morph
is applied. Undefined by default, but reset to a blank array by
.updateMorphTargets().
Computes an array of distance values which are necessary for
LineDashedMaterial. For each vertex in the geometry, the method
calculates the cumulative length from the current point to the very
beginning of the line.
A continuous line that connects back to the start.
This is nearly the same as Line; the only difference is that it is
rendered using
gl.LINE_LOOP instead of
gl.LINE_STRIP,
which draws a straight line to each successive vertex and
connects the last vertex back to the first.
Level of Detail - show meshes with more or less geometry based on distance
from the camera.
Every level is associated with an object, and rendering can be switched
between them at the distances specified. Typically you would create, say,
three meshes, one for far away (low detail), one for mid range (medium
detail) and one for close up (high detail).
Code Example
const lod = new THREE.LOD();

// Create spheres with 3 levels of detail and create new LOD levels for them
for ( let i = 0; i < 3; i ++ ) {

	const geometry = new THREE.IcosahedronGeometry( 10, 3 - i );
	const mesh = new THREE.Mesh( geometry, material );
	lod.addLevel( mesh, i * 75 );

}

scene.add( lod );
Whether the LOD object is updated automatically by the renderer per frame
or not. If set to false, you have to call LOD.update() in the
render loop by yourself. Default is true.
Each level is an object with the following properties:
object - The Object3D to display at this level.
distance - The distance at which to display this level of detail.
hysteresis - Threshold used to avoid flickering at LOD boundaries, as a fraction of distance.

object - The Object3D to display at this level.
distance - The distance at which to display this level of detail. Default 0.0.
hysteresis - Threshold used to avoid flickering at LOD boundaries, as a fraction of distance. Default 0.0.
Adds a mesh that will display at a certain distance and greater. Typically
the further away the distance, the lower the detail on the mesh.
An instance of material derived from the Material base class or an
array of materials, defining the object's appearance. Default is a
MeshBasicMaterial.
An array of weights typically from 0-1 that specify how much of the morph
is applied. Undefined by default, but reset to a blank array by
updateMorphTargets.
An array of weights typically from 0-1 that specify how much of the morph
is applied. Undefined by default, but reset to a blank array by
updateMorphTargets.
A mesh that has a Skeleton with bones that can then be
used to animate the vertices of the geometry.
Code Example
const geometry = new THREE.CylinderGeometry( 5, 5, 5, 5, 15, 5, 30 );

// create the skin indices and skin weights manually
// (typically a loader would read this data from a 3D model for you)

const position = geometry.attributes.position;

const vertex = new THREE.Vector3();

const skinIndices = [];
const skinWeights = [];

for ( let i = 0; i < position.count; i ++ ) {

	vertex.fromBufferAttribute( position, i );

	// compute skinIndex and skinWeight based on some configuration data
	const y = ( vertex.y + sizing.halfHeight );

	const skinIndex = Math.floor( y / sizing.segmentHeight );
	const skinWeight = ( y % sizing.segmentHeight ) / sizing.segmentHeight;

	skinIndices.push( skinIndex, skinIndex + 1, 0, 0 );
	skinWeights.push( 1 - skinWeight, skinWeight, 0, 0 );

}

geometry.setAttribute( 'skinIndex', new THREE.Uint16BufferAttribute( skinIndices, 4 ) );
geometry.setAttribute( 'skinWeight', new THREE.Float32BufferAttribute( skinWeights, 4 ) );

// create skinned mesh and skeleton

const mesh = new THREE.SkinnedMesh( geometry, material );
const skeleton = new THREE.Skeleton( bones ); // see example from THREE.Skeleton

const rootBone = skeleton.bones[ 0 ];
mesh.add( rootBone );

// bind the skeleton to the mesh
mesh.bind( skeleton );

// move the bones and manipulate the model
skeleton.bones[ 0 ].rotation.x = -0.1;
skeleton.bones[ 1 ].rotation.x = 0.2;
Either AttachedBindMode or DetachedBindMode. AttachedBindMode means the skinned mesh
shares the same world space as the skeleton. This is not true when using DetachedBindMode
which is useful when sharing a skeleton across multiple skinned meshes.
Default is AttachedBindMode.
Computes the bounding box of the skinned mesh, and updates the .boundingBox attribute.
The bounding box is not computed by the engine; it must be computed by your app.
If the skinned mesh is animated, the bounding box should be recomputed per frame.
Computes the bounding sphere of the skinned mesh, and updates the .boundingSphere attribute.
The bounding sphere is automatically computed by the engine when it is needed, e.g., for ray casting and view frustum culling.
If the skinned mesh is animated, the bounding sphere should be recomputed per frame.
The sprite's anchor point, and the point around which the sprite rotates.
A value of (0.5, 0.5) corresponds to the midpoint of the sprite. A value
of (0, 0) corresponds to the lower left corner of the sprite. The default
is (0.5, 0.5).
Get intersections between a cast ray and this sprite.
Raycaster.intersectObject() will call this method. The raycaster
must be initialized by calling Raycaster.setFromCamera() before
raycasting against sprites.
The WebGL renderer displays your beautifully crafted scenes using
WebGL.
Constructor
WebGLRenderer( parameters : Object )
parameters - (optional) object with properties defining the
renderer's behavior. The constructor also accepts no parameters at all.
In all cases, it will assume sane defaults when parameters are missing.
The following are valid parameters:
canvas - A canvas where the renderer draws its output. This corresponds to the domElement property below. If not passed in here, a new canvas element will be created.
context - This can be used to attach the renderer to an existing RenderingContext. Default is null.
precision - Shader precision. Can be "highp", "mediump" or "lowp". Defaults to "highp" if supported by the device.
alpha - controls the default clear alpha value. When set to true, the value is 0. Otherwise it's 1. Default is false.
premultipliedAlpha - whether the renderer will assume that colors have premultiplied alpha. Default is true.
antialias - whether to perform antialiasing. Default is false.
stencil - whether the drawing buffer has a stencil buffer of at least 8 bits. Default is false.
preserveDrawingBuffer - whether to preserve the buffers until manually cleared or overwritten. Default is false.
powerPreference - Provides a hint to the user agent indicating what configuration of GPU is suitable for this WebGL context. Can be "high-performance", "low-power" or "default". Default is "default". See WebGL spec for details.
failIfMajorPerformanceCaveat - whether renderer creation will fail if low performance is detected. Default is false. See WebGL spec for details.
depth - whether the drawing buffer has a depth buffer of at least 16 bits. Default is true.
logarithmicDepthBuffer - whether to use a logarithmic depth buffer. It may be necessary to use this if dealing with huge differences in scale in a single scene. Note that this setting uses gl_FragDepth if available, which disables the Early Fragment Test optimization and can cause a decrease in performance. Default is false. See the camera / logarithmicdepthbuffer example.
reverseDepthBuffer - whether to use a reverse depth buffer. Requires the EXT_clip_control extension. This is faster and more accurate than a logarithmic depth buffer. Default is false.
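For illustration, these parameters are passed to the constructor as a single options object (the values chosen here are arbitrary examples, not recommendations):

```javascript
// options object as accepted by the WebGLRenderer constructor
const rendererOptions = {
	antialias: true,                     // smoother edges, slightly more GPU work
	alpha: true,                         // default clear alpha becomes 0
	powerPreference: 'high-performance', // hint to the user agent
	logarithmicDepthBuffer: false
};

// in a browser: const renderer = new THREE.WebGLRenderer( rendererOptions );
```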
An object containing details about the capabilities of the current
RenderingContext.
- floatFragmentTextures: whether the context supports the
OES_texture_float extension.
- floatVertexTextures: true if floatFragmentTextures
and vertexTextures are both true.
- getMaxAnisotropy(): Returns the maximum available
anisotropy.
- getMaxPrecision(): Returns the maximum available precision
for vertex and fragment shaders.
- isWebGL2: true if the context in use is a
WebGL2RenderingContext object.
- logarithmicDepthBuffer: true if the logarithmicDepthBuffer
was set to true in the constructor.
- maxAttributes: The value of gl.MAX_VERTEX_ATTRIBS.
- maxCubemapSize: The value of
gl.MAX_CUBE_MAP_TEXTURE_SIZE. Maximum height * width of cube map
textures that a shader can use.
- maxFragmentUniforms: The value of
gl.MAX_FRAGMENT_UNIFORM_VECTORS. The number of uniforms that can be used
by a fragment shader.
- maxSamples: The value of gl.MAX_SAMPLES. Maximum number
of samples in context of Multisample anti-aliasing (MSAA).
- maxTextureSize: The value of gl.MAX_TEXTURE_SIZE.
Maximum height * width of a texture that a shader can use.
- maxTextures: The value of gl.MAX_TEXTURE_IMAGE_UNITS.
The maximum number of textures that can be used by a shader.
- maxVaryings: The value of gl.MAX_VARYING_VECTORS. The
number of varying vectors that can be used by shaders.
- maxVertexTextures: The value of
gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS. The number of textures that can be
used in a vertex shader.
- maxVertexUniforms: The value of
gl.MAX_VERTEX_UNIFORM_VECTORS. The maximum number of uniforms that can
be used in a vertex shader.
- precision: The shader precision currently being used by
the renderer.
- reverseDepthBuffer: true if the reverseDepthBuffer
was set to true in the constructor and the context
supports the EXT_clip_control extension.
- vertexTextures: true if maxVertexTextures is greater than 0 (i.e. vertex textures can be used).
User-defined clipping planes specified as THREE.Plane objects in world
space. These planes apply globally. Points in space whose dot product with
the plane is negative are cut away. Default is [].
- checkShaderErrors: If it is true, defines whether
material shader programs are checked for errors during compilation and
linkage process. It may be useful to disable this check in production for
performance gain. It is strongly recommended to keep these checks enabled
during development. If a shader does not compile and link, it will not
work and its associated material will not render. Default is true.
- onShaderError( gl, program, glVertexShader,
glFragmentShader ): A callback function that can be used for custom error
reporting. The callback receives the WebGL context, an instance of
WebGLProgram as well as two instances of WebGLShader representing the vertex
and fragment shader. Assigning a custom function disables the default
error reporting. Default is null.
A canvas
where the renderer draws its output.
This is automatically created by the renderer in the constructor (if not
provided already); you just need to add it to your page like so: document.body.appendChild( renderer.domElement );
- get( extensionName : String ): Used to check whether
various extensions are supported and returns an object with details of the
extension if available. This method can check for the following
extensions:
An object with a series of statistical information about the graphics
board memory and the rendering process. Useful for debugging or just for
the sake of curiosity. The object contains the following fields:
memory:
geometries
textures
render:
calls
triangles
points
lines
frame
programs
By default this data is reset with each render call, but when there are
multiple render passes per frame (e.g. when using post processing) it can
be preferable to reset with a custom pattern. First, set autoReset to
false.
renderer.info.autoReset = false;
Call reset() whenever you have finished rendering a single frame.
renderer.info.reset();
This contains the reference to the shadow map, if used.
- enabled: If set, use shadow maps in the scene. Default is
false.
- autoUpdate: Enables automatic updates to the shadows in
the scene. Default is true.
If you do not require dynamic lighting / shadows, you may set this to
false when the renderer is instantiated.
- needsUpdate: When set to true, shadow maps in the scene
will be updated in the next render call. Default is false.
If you have disabled automatic updates to shadow maps
(shadowMap.autoUpdate =false), you will need to set this to true and
then make a render call to update the shadows in your scene.
- type: Defines shadow map type (unfiltered, percentage-closer
filtering, percentage-closer filtering with bilinear filtering in
shader). Options are:
Defines whether the renderer should sort objects. Default is true.
Note: Sorting is used to attempt to properly render objects that have some
degree of transparency. By definition, sorting objects may not work in all
cases. Depending on the needs of the application, it may be necessary to turn
off sorting and use other methods to deal with transparency rendering, e.g.
manually determining each object's rendering order.
Tells the renderer to clear its color, depth or stencil drawing buffer(s).
This method initializes the color buffer to the current clear color
value.
Arguments default to true.
Compiles all materials in the scene with the camera. This is useful to precompile shaders before the first rendering.
If you want to add a 3D object to an existing scene, use the third optional parameter for applying the target scene.
Note that the (target) scene's lighting and environment should be configured before calling this method.
Asynchronous version of .compile(). The method returns a Promise that resolves when the
given scene can be rendered without unnecessary stalling due to shader compilation.
This method makes use of the KHR_parallel_shader_compile WebGL extension.
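A sketch of using this to warm up shaders before the first frame; `renderer`, `scene` and `camera` are assumed to already exist, and `warmUpShaders` is a hypothetical helper name:

```javascript
// precompile all shaders, then render the first frame without compilation stalls
async function warmUpShaders( renderer, scene, camera ) {

	await renderer.compileAsync( scene, camera );
	renderer.render( scene, camera );

}
```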
Copies the pixels of a texture in the bounds 'srcRegion' into the destination texture, starting from the given position.
2D textures, 3D textures, or a mix of the two can be used as source and destination texture arguments, including for copying between layers of 3D textures.
The depthTexture and texture property of render targets are supported as well.
When using render target textures as srcTexture and dstTexture, you must make sure both render targets are initialized e.g. via .initRenderTarget().
Initializes the given texture. Useful for preloading a texture rather than
waiting until first render (which can cause noticeable lags due to decode
and GPU upload overhead).
Initializes the given WebGLRenderTarget memory. Useful for initializing a render
target so data can be copied into it using .copyTextureToTexture
before it has been rendered to.
buffer - Uint8Array is the only destination type supported in all cases;
other types are renderTarget- and platform-dependent. See
WebGL spec for details.
Reads the pixel data from the renderTarget into the buffer you pass in.
This is a wrapper around
WebGLRenderingContext.readPixels().
For reading out a WebGLCubeRenderTarget use
the optional parameter activeCubeFaceIndex to determine which face should
be read.
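A sketch of reading back a block of pixels; `renderer` and `renderTarget` are assumed to already exist, and `readPixels` is a hypothetical helper name:

```javascript
// read a width x height block of RGBA pixels into a Uint8Array
function readPixels( renderer, renderTarget, x, y, width, height ) {

	const buffer = new Uint8Array( width * height * 4 ); // 4 bytes per RGBA pixel
	renderer.readRenderTargetPixels( renderTarget, x, y, width, height, buffer );
	return buffer;

}
```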
Render a scene or another type of object
using a camera.
The render is done to a previously specified renderTarget
set by calling .setRenderTarget or to the canvas as usual.
By default render buffers are cleared before rendering but you can prevent
this by setting the property autoClear to
false. If you want to prevent only certain buffers being cleared you can
set either the autoClearColor,
autoClearStencil or
autoClearDepth properties to false. To
forcibly clear one or more buffers call .clear.
Can be used to reset the internal WebGL state. This method is mostly
relevant for applications which share a single WebGL context across
multiple WebGL libraries.
renderTarget -- The renderTarget that needs to be
activated. When null is given, the canvas is set as the active render
target instead.
activeCubeFace -- Specifies the active cube side (PX 0, NX 1, PY 2, NY 3,
PZ 4, NZ 5) of WebGLCubeRenderTarget. When passing a
WebGLArrayRenderTarget or WebGL3DRenderTarget this indicates
the z layer to render in to (optional).
activeMipmapLevel -- Specifies the active mipmap level (optional).
Enable or disable the scissor test. When this is enabled, only the pixels
within the defined scissor area will be affected by further renderer
actions.
Resizes the output canvas to (width, height) with device pixel ratio taken
into account, and also sets the viewport to fit that size, starting in (0,
0). Setting updateStyle to false prevents any style changes
to the output canvas.
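A common resize handler built on this method (a sketch; `renderer` and `camera` are assumed to already exist, and capping the pixel ratio at 2 is a common but optional choice):

```javascript
function onWindowResize( renderer, camera ) {

	camera.aspect = window.innerWidth / window.innerHeight;
	camera.updateProjectionMatrix();

	// cap the pixel ratio to keep the drawing buffer from growing too large
	renderer.setPixelRatio( Math.min( window.devicePixelRatio, 2 ) );
	renderer.setSize( window.innerWidth, window.innerHeight );

}

// window.addEventListener( 'resize', () => onWindowResize( renderer, camera ) );
```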
A
render target
is a buffer where the video card draws pixels for a scene
that is being rendered in the background. It is used in different effects,
such as applying postprocessing to a rendered image before displaying it
on the screen.
width - The width of the renderTarget. Default is 1.
height - The height of the renderTarget. Default is 1.
options - optional object that holds texture parameters for an
auto-generated target texture and depthBuffer/stencilBuffer booleans. For
an explanation of the texture parameters see Texture. The
following are valid options:
Defines whether the stencil buffer should be resolved when rendering into a multisampled render target.
This property has no effect when .resolveDepthBuffer is set to false.
Default is true.
width - the width of the render target, in pixels. Default is 1.
height - the height of the render target, in pixels. Default is 1.
depth - the depth of the render target. Default is 1.
options - optional object that holds texture parameters for an auto-generated target texture and depthBuffer/stencilBuffer booleans. See WebGLRenderTarget for details.
width - the width of the render target, in pixels. Default is 1.
height - the height of the render target, in pixels. Default is 1.
depth - the depth/layer count of the render target. Default is 1.
options - optional object that holds texture parameters for an auto-generated target texture and depthBuffer/stencilBuffer booleans. See WebGLRenderTarget for details.
size - the size, in pixels. Default is 1.
options - (optional) object that holds texture parameters for an
auto-generated target texture and depthBuffer/stencilBuffer booleans. For
an explanation of the texture parameters see Texture. The
following are valid options:
Call this to clear the renderTarget's color, depth, and/or stencil
buffers. The color buffer is set to the renderer's current clear color.
Arguments default to true.
src -- An object representing uniform definitions.
Clones the given uniform definitions by performing a deep-copy. That means
if the value of a uniform refers to an object like a
Vector3 or Texture, the cloned uniform will refer to a new
object reference.
uniforms -- An array of objects containing uniform definitions.
Merges the given uniform definitions into a single object. Since the
method internally uses .clone(), it performs a deep-copy when
producing the merged uniform definitions.
This class represents an abstraction of the WebXR Device API and is
internally used by WebGLRenderer. WebXRManager also provides a public
interface that allows users to enable/disable XR and perform XR-related
tasks such as retrieving controllers.
Returns an instance of ArrayCamera which represents the XR camera
of the active XR session. For each view it holds a separate camera object
in its cameras property.
The camera's fov is currently not used and does not reflect the fov of
the XR camera. If you need the fov at the app level, you have to compute
it manually from the XR camera's projection matrices.
Returns a Group representing the so-called target ray space of
the XR controller. Use this space for visualizing 3D objects that support
the user in pointing tasks like UI interaction.
Returns a Group representing the so-called grip space of the XR
controller. Use this space if the user is going to hold other 3D objects
like a lightsaber.
Note: If you want to show something in the user's hand AND offer a
pointing ray at the same time, you'll want to attach the handheld object
to the group returned by .getControllerGrip() and the ray to the
group returned by .getController(). The idea is to have two
different groups in two different coordinate spaces for the same WebXR
controller.
Returns a Group representing the so-called hand or joint space
of the XR controller. Use this space for visualizing the user's hands when
no physical controllers are used.
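A sketch of wiring up all three spaces for the first controller; `renderer` (with XR enabled) and `scene` are assumed to already exist, and `addControllerSpaces` is a hypothetical helper name:

```javascript
function addControllerSpaces( renderer, scene ) {

	const controller = renderer.xr.getController( 0 ); // target ray space: pointing
	const grip = renderer.xr.getControllerGrip( 0 ); // grip space: held objects
	const hand = renderer.xr.getHand( 0 ); // joint space: hand visualization

	scene.add( controller, grip, hand );
	return { controller, grip, hand };

}
```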
Specifies the scaling factor to use when determining the size of the
framebuffer when rendering to an XR device. The value is relative to the
default XR device display resolution. Default is 1. A value of 0.5
would specify a framebuffer with 50% of the display's native resolution.
Note: It is not possible to change the framebuffer scale factor while
presenting XR content.
Can be used to configure a spatial relationship with the user's physical
environment. Depending on how the user moves in 3D space, setting an
appropriate reference space can improve tracking. Default is
local-floor. Please check out the
MDN
for possible values and their use cases.
Updates the state of the XR camera. Use this method on app level if you
set .cameraAutoUpdate to false. The method requires the non-XR
camera of the scene as a parameter. The passed in camera's transformation
is automatically adjusted to the position of the XR camera when calling
this method.
Note: It is not possible to change the reference space type while
presenting XR content.
The maximum distance at which fog stops being calculated and applied.
Objects that are more than 'far' units away from the active camera won't
be affected by fog.
This class contains the parameters that define exponential squared fog,
which gives a clear view near the camera and fog whose density increases
faster than exponentially with distance from the camera.
Code Example
const scene =new THREE.Scene();
scene.fog =new THREE.FogExp2(0xcccccc,0.002);
Constructor
FogExp2( color : Integer, density : Float )
The color parameter is passed to the Color constructor to set the
color property. Color can be a hexadecimal integer or a CSS-style string.
Sets the blurriness of the background. Only influences environment maps
assigned to Scene.background. Valid input is a float between 0
and 1. Default is 0.
Sets the environment map for all physical materials in the scene. However,
it's not possible to overwrite an existing texture assigned to
MeshStandardMaterial.envMap. Default is null.
Describes that a specific layer of the texture needs to be updated.
Normally when needsUpdate is set to true, the
entire compressed texture array is sent to the GPU. Marking specific
layers will only transmit subsets of all mipmaps associated with a
specific depth in the array which is often much more performant.
CubeTexture is almost equivalent in functionality and usage to
Texture. The only differences are that the images are an array of 6
images as opposed to a single image, and the mapping options are
THREE.CubeReflectionMapping (default) or THREE.CubeRefractionMapping.
This creates a Data3DTexture with repeating data, 0 to 255
// create a buffer with some data
const sizeX = 64;
const sizeY = 64;
const sizeZ = 64;

const data = new Uint8Array( sizeX * sizeY * sizeZ );
let i = 0;

for ( let z = 0; z < sizeZ; z ++ ) {
	for ( let y = 0; y < sizeY; y ++ ) {
		for ( let x = 0; x < sizeX; x ++ ) {
			data[ i ] = i % 256;
			i ++;
		}
	}
}

// use the buffer to create the texture
const texture = new THREE.Data3DTexture( data, sizeX, sizeY, sizeZ );
texture.needsUpdate = true;
1 by default. Specifies the alignment requirements for the start of each
pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on
double-word boundaries). See
glPixelStorei for more information.
Creates an array of textures directly from raw data, width, height and
depth.
Constructor
DataArrayTexture( data, width, height, depth )
The data argument must be an
ArrayBufferView.
The properties inherited from Texture are the
default, except magFilter and minFilter default to THREE.NearestFilter.
The properties flipY and generateMipmaps are initially set to false.
The interpretation of the data depends on type and format: If the type is
THREE.UnsignedByteType, a Uint8Array will be useful for addressing the
texel data. If the format is THREE.RGBAFormat, data needs four values for
one texel; Red, Green, Blue and Alpha (typically the opacity).
For the packed types, THREE.UnsignedShort4444Type and
THREE.UnsignedShort5551Type all color components of one texel can be
addressed as bitfields within an integer element of a Uint16Array.
In order to use the types THREE.FloatType and THREE.HalfFloatType, the
WebGL implementation must support the respective extensions
OES_texture_float and OES_texture_half_float. In order to use
THREE.LinearFilter for component-wise, bilinear interpolation of the
texels based on these types, the WebGL extensions OES_texture_float_linear
or OES_texture_half_float_linear must also be present.
Code Example
This creates a DataArrayTexture where each texture has a different color.
// create a buffer with color data
const width = 512;
const height = 512;
const depth = 100;

const size = width * height;
const data = new Uint8Array( 4 * size * depth );

for ( let i = 0; i < depth; i ++ ) {

	const color = new THREE.Color( Math.random(), Math.random(), Math.random() );
	const r = Math.floor( color.r * 255 );
	const g = Math.floor( color.g * 255 );
	const b = Math.floor( color.b * 255 );

	for ( let j = 0; j < size; j ++ ) {

		const stride = ( i * size + j ) * 4;

		data[ stride ] = r;
		data[ stride + 1 ] = g;
		data[ stride + 2 ] = b;
		data[ stride + 3 ] = 255;

	}

}

// use the buffer to create a DataArrayTexture
const texture = new THREE.DataArrayTexture( data, width, height, depth );
texture.needsUpdate = true;
1 by default. Specifies the alignment requirements for the start of each
pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on
double-word boundaries). See
glPixelStorei for more information.
Describes that a specific layer of the texture needs to be updated.
Normally when needsUpdate is set to true, the
entire compressed texture array is sent to the GPU. Marking specific
layers will only transmit subsets of all mipmaps associated with a
specific depth in the array which is often much more performant.
The data argument must be an
ArrayBufferView.
Further parameters correspond to the properties
inherited from Texture, where both magFilter and minFilter default
to THREE.NearestFilter.
The interpretation of the data depends on type and format: If the type is
THREE.UnsignedByteType, a Uint8Array will be useful for addressing the
texel data. If the format is THREE.RGBAFormat, data needs four values for
one texel; Red, Green, Blue and Alpha (typically the opacity).
For the packed types, THREE.UnsignedShort4444Type and
THREE.UnsignedShort5551Type all color components of one texel can be
addressed as bitfields within an integer element of a Uint16Array.
In order to use the types THREE.FloatType and THREE.HalfFloatType, the
WebGL implementation must support the respective extensions
OES_texture_float and OES_texture_half_float. In order to use
THREE.LinearFilter for component-wise, bilinear interpolation of the
texels based on these types, the WebGL extensions OES_texture_float_linear
or OES_texture_half_float_linear must also be present.
Code Example
// create a buffer with color data
const width = 512;
const height = 512;

const size = width * height;
const data = new Uint8Array( 4 * size );
const color = new THREE.Color( 0xffffff );

const r = Math.floor( color.r * 255 );
const g = Math.floor( color.g * 255 );
const b = Math.floor( color.b * 255 );

for ( let i = 0; i < size; i ++ ) {

	const stride = i * 4;

	data[ stride ] = r;
	data[ stride + 1 ] = g;
	data[ stride + 2 ] = b;
	data[ stride + 3 ] = 255;

}

// use the buffer to create a DataTexture
const texture = new THREE.DataTexture( data, width, height );
texture.needsUpdate = true;
1 by default. Specifies the alignment requirements for the start of each
pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on
double-word boundaries). See
glPixelStorei for more information.
Default is THREE.UnsignedIntType. The following are the options and how they map to internal
gl depth format types, depending on the stencil format, as well:
THREE.UnsignedIntType -- Uses DEPTH_COMPONENT24 or DEPTH24_STENCIL8 internally.
THREE.FloatType -- Uses DEPTH_COMPONENT32F or DEPTH32F_STENCIL8 internally.
THREE.UnsignedShortType -- Uses DEPTH_COMPONENT16 internally. The stencil buffer is unsupported when using this type.
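A sketch of selecting one of these mappings at construction time (the texture dimensions are hypothetical; type is the third constructor argument of DepthTexture):

```javascript
// A FloatType depth texture maps to DEPTH_COMPONENT32F internally
// (or DEPTH32F_STENCIL8 when a stencil buffer is attached).
const depthTexture = new THREE.DepthTexture( 1024, 1024, THREE.FloatType );
```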
This is used to define the comparison function used when comparing texels in the depth texture to the value in the depth buffer.
Default is null which means comparison is disabled.
const pixelRatio = window.devicePixelRatio;
const textureSize = 128 * pixelRatio;

// instantiate a framebuffer texture
const frameTexture = new FramebufferTexture( textureSize, textureSize );

// calculate start position for copying part of the frame data
const vector = new Vector2();
vector.x = ( window.innerWidth * pixelRatio / 2 ) - ( textureSize / 2 );
vector.y = ( window.innerHeight * pixelRatio / 2 ) - ( textureSize / 2 );

// render the scene
renderer.clear();
renderer.render( scene, camera );

// copy part of the rendered frame into the framebuffer texture
renderer.copyFramebufferToTexture( frameTexture, vector );
When the property is set to true, the engine allocates the memory for the texture (if necessary) and triggers the actual texture upload to the GPU the next time the source is used.
This property is only relevant when .needsUpdate is set to true and provides more control over how texture data should be processed.
When dataReady is set to false, the engine performs the memory allocation (if necessary) but does not transfer the data into GPU memory. Default is true.
Create a texture to apply to a surface or as a reflection or refraction
map.
Note: After the initial use of a texture, its dimensions, format, and type
cannot be changed. Instead, call .dispose() on the texture and
instantiate a new one.
Code Example
// load a texture, set wrap mode to repeat
const texture = new THREE.TextureLoader().load( "textures/water.jpg" );
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
texture.repeat.set(4,4);
An image object, typically created using the TextureLoader.load
method. This can be any image (e.g., PNG, JPG, GIF, DDS) or video (e.g.,
MP4, OGG/OGV) type supported by three.js.
To use video as a texture you need to have a playing HTML5 video element
as a source for your texture image, and to continuously update this texture as
long as the video is playing - the VideoTexture class
handles this automatically.
How the image is applied to the object. An object type of THREE.UVMapping
is the default, where the U,V coordinates are used to
apply the map.
See the texture constants page for other mapping types.
This defines how the texture is wrapped vertically and corresponds to V
in UV mapping.
The same choices are available as for .wrapS.
NOTE: tiling of images in textures only functions if image dimensions are
powers of two (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, ...) in
terms of pixels. Individual dimensions need not be equal, but each must be
a power of two. This is a limitation of WebGL, not three.js.
How the texture is sampled when a texel covers more than one pixel. The
default is THREE.LinearFilter, which takes the four
closest texels and bilinearly interpolates among them. The other option is
THREE.NearestFilter, which uses the value of the closest
texel.
See the texture constants page for details.
How the texture is sampled when a texel covers less than one pixel. The
default is THREE.LinearMipmapLinearFilter, which uses
mipmapping and a trilinear filter.
The number of samples taken along the axis through the pixel that has the
highest density of texels. By default, this value is 1. A higher value
gives a less blurry result than a basic mipmap, at the cost of more
texture samples being used. Use renderer.capabilities.getMaxAnisotropy()
to find the maximum valid anisotropy value for the GPU; this value is usually a power of 2.
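For example, to request the strongest anisotropic filtering the GPU supports (renderer and texture are assumed to exist already):

```javascript
// Query the GPU's maximum supported anisotropy and apply it to the texture.
texture.anisotropy = renderer.capabilities.getMaxAnisotropy();
texture.needsUpdate = true; // required if the texture has already been uploaded
```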
The default value is obtained using a combination of .format and .type.
The GPU format allows the developer to specify how the data is going to be
stored on the GPU.
See the texture constants page for details regarding all
supported internal formats.
How many times the texture is repeated across the surface, in each
direction U and V. If repeat is set greater than 1 in either direction,
the corresponding Wrap parameter should also be set to THREE.RepeatWrapping
or THREE.MirroredRepeatWrapping to
achieve the desired tiling effect.
Whether to update the texture's uv-transform .matrix
from the texture properties .offset,
.repeat, .rotation, and
.center. True by default. Set this to false if you
are specifying the uv-transform matrix directly.
The uv-transform matrix for the texture. Updated by the renderer from the
texture properties .offset, .repeat,
.rotation, and .center
when the texture's .matrixAutoUpdate
property is true. When .matrixAutoUpdate
property is false, this matrix may be set manually.
Default is the identity matrix.
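A sketch of setting the uv-transform manually (the repeat, rotation, and center values are arbitrary illustrations):

```javascript
// Disable automatic matrix updates and set the uv-transform directly.
// Matrix3.setUvTransform( offsetX, offsetY, repeatX, repeatY, rotation, centerX, centerY )
texture.matrixAutoUpdate = false;
texture.matrix.setUvTransform( 0, 0, 2, 2, Math.PI / 4, 0.5, 0.5 );
```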
4 by default. Specifies the alignment requirements for the start of each
pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on
double-word boundaries). See
glPixelStorei for more information.
An object that can be used to store custom data about the texture. It
should not hold references to functions as these will not be cloned.
Default is an empty object {}.
The data definition of a texture. A reference to the data source can be
shared across textures. This is often useful in the context of spritesheets,
where multiple textures render the same data but with different texture
transformations.
Make a copy of the texture. Note this is not a "deep copy": the image is
shared. Also, cloning a texture does not automatically mark it for a
texture upload. You have to set .needsUpdate to
true as soon as its image property (the data source) is fully loaded or
ready.
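For example (assuming texture is an existing, fully loaded texture):

```javascript
const copy = texture.clone(); // shares the same image data as the original
copy.needsUpdate = true; // the clone must be marked for upload explicitly
```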
Note: After the initial use of a texture, the video cannot be changed.
Instead, call .dispose() on the texture and instantiate a new one.
Code Example
// assuming you have created an HTML video element with id="video"
const video = document.getElementById( 'video' );
const texture = new THREE.VideoTexture( video );
CCDIKSolver solves the inverse kinematics problem with the CCD algorithm.
CCDIKSolver is designed to work with SkinnedMesh, but can also be used with an MMDLoader or GLTFLoader skeleton.
Import
CCDIKSolver is an add-on, and must be imported explicitly.
See Installation / Addons.
mesh — SkinnedMesh for which CCDIKSolver solves the IK problem. iks — An array of Objects specifying the IK parameters. target, effector, and link index are integer indices into .skeleton.bones.
The bones relation should be "links[ n ], links[ n - 1 ], ..., links[ 0 ], effector" in order from parent to child.
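A sketch of the iks parameter for a hypothetical chain root → bone0 → bone1 → bone2 → effector, plus a target bone (the bone indices below are illustrative positions in mesh.skeleton.bones, not real values):

```javascript
const iks = [ {
	target: 5,   // index of the target bone the chain tries to reach
	effector: 4, // index of the end-effector bone
	links: [ { index: 3 }, { index: 2 }, { index: 1 } ] // from the effector's parent up toward the root
} ];

const ikSolver = new CCDIKSolver( mesh, iks );

// in the render loop:
ikSolver.update();
```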
MMDAnimationHelper handles animation of MMD assets loaded by MMDLoader, with MMD special features such as IK, Grant, and Physics.
It uses CCDIKSolver and MMDPhysics inside.
Import
MMDAnimationHelper is an add-on, and must be imported explicitly.
See Installation / Addons.
pmxAnimation - If it is set to true, the helper follows the complex and costly PMX animation system.
Try this option only if your PMX model animation doesn't work well. Default is false.
A WeakMap which holds the animation-related objects used by the helper for each object added to it. For example, you can access the AnimationMixer for an added SkinnedMesh with "helper.objects.get( mesh ).mixer".
Add a SkinnedMesh, Camera, or Audio to the helper and set up its animation. The animation durations of added objects are synchronized.
If a camera/audio has already been added, it'll be replaced with the new one.
mesh — SkinnedMesh which changes the posing. It doesn't need to be added to helper. vpd — VPD content obtained by MMDLoader.loadVPD params — (optional)
Arcball controls allow the camera to be controlled by a virtual trackball with full touch support and advanced navigation functionality.
Cursor/finger positions and movements are mapped over a virtual trackball surface,
represented by a gizmo, and translated into intuitive and consistent camera movements.
Dragging the cursor/fingers causes the camera to orbit around the center of the trackball in a conservative way (returning to the starting point
returns the camera to its starting orientation).
In addition to supporting pan, zoom, and pinch gestures, ArcballControls provides focus functionality: a double click/tap
intuitively moves the object's point of interest to the center of the virtual trackball.
Focus allows much better inspection and navigation in complex environments.
Moreover, ArcballControls allows FOV manipulation (in a vertigo-style method) and z-rotation.
Saving and restoring of the camera state is also supported through the clipboard
(use the ctrl+c and ctrl+v shortcuts to copy and paste the state).
Unlike OrbitControls and TrackballControls, ArcballControls doesn't require .update to be called externally in an animation loop when animations
are on.
To use this, as with all files in the /examples directory, you will have to
include the file separately in your HTML.
Import
ArcballControls is an add-on, and must be imported explicitly.
See Installation / Addons.
const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 10000 );

const controls = new ArcballControls( camera, renderer.domElement, scene );

controls.addEventListener( 'change', function () {
	renderer.render( scene, camera );
} );

//controls.update() must be called after any manual changes to the camera's transform
camera.position.set( 0, 20, 100 );
controls.update();
If true, the camera's near and far values will be adjusted every time zoom is performed, in an attempt to maintain the same visible portion
given by the initial near and far values ( PerspectiveCamera only ).
Default is false.
Set a new mouse action by specifying the operation to be performed and a mouse/key combination. In case of conflict, replaces the existing one.
Operations can be specified as 'ROTATE', 'PAN', 'FOV' or 'ZOOM'.
Mouse inputs can be specified as mouse buttons 0, 1 and 2 or 'WHEEL' for wheel notches.
Keyboard modifiers can be specified as 'CTRL', 'SHIFT' or null if not needed.
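For example (assuming controls is an existing ArcballControls instance):

```javascript
// Rotate with the left mouse button (no modifier), zoom with the wheel while holding SHIFT.
controls.setMouseAction( 'ROTATE', 0 );
controls.setMouseAction( 'ZOOM', 'WHEEL', 'SHIFT' );
```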
Removes a mouse action by specifying its mouse/key combination.
Mouse inputs can be specified as mouse buttons 0, 1 and 2 or 'WHEEL' for wheel notches.
Keyboard modifiers can be specified as 'CTRL', 'SHIFT' or null if not needed.
Returns the Raycaster object that is used for user interaction. This object is shared between all instances of
ArcballControls. If you set the .layers property of the ArcballControls, you will also want to
set the .layers property on the Raycaster with a matching value, or else the ArcballControls
won't work as expected.
This option only works if the DragControls.objects array contains a single draggable group object.
If set to true, DragControls does not transform individual objects but the entire group. Default is false.
Whether or not the camera's height influences the forward movement speed. Default is false.
Use the properties .heightCoef, .heightMin and .heightMax for configuration.
FlyControls enables navigation similar to the fly modes in DCC tools like Blender. You can arbitrarily transform the camera in
3D space without any limitations (e.g. no fixed focus on a specific target).
Import
FlyControls is an add-on, and must be imported explicitly.
See Installation / Addons.
MapControls is intended for transforming a camera over a map from a bird's eye perspective.
The class shares its implementation with OrbitControls but uses a specific preset for mouse/touch interaction and disables screen space panning by default.
Import
MapControls is an add-on, and must be imported explicitly.
See Installation / Addons.
This object contains references to the mouse actions used by the controls.
controls.mouseButtons = {
	LEFT: THREE.MOUSE.PAN,
	MIDDLE: THREE.MOUSE.DOLLY,
	RIGHT: THREE.MOUSE.ROTATE
}
Defines how the camera's position is translated when panning. If true, the camera pans in screen space.
Otherwise, the camera pans in the plane orthogonal to the camera's up direction.
Default is false.
Orbit controls allow the camera to orbit around a target.
To use this, as with all files in the /examples directory, you will have to
include the file separately in your HTML.
Import
OrbitControls is an add-on, and must be imported explicitly.
See Installation / Addons.
const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 10000 );

const controls = new OrbitControls( camera, renderer.domElement );

//controls.update() must be called after any manual changes to the camera's transform
camera.position.set( 0, 20, 100 );
controls.update();

function animate() {
	requestAnimationFrame( animate );
	// required if controls.enableDamping or controls.autoRotate are set to true
	controls.update();
	renderer.render( scene, camera );
}
Set to true to automatically rotate around the target. Note that if this is enabled, you must call .update()
in your animation loop. If you want the auto-rotate speed to be independent of the frame rate (the refresh rate of the display), you must pass deltaTime, in seconds, to .update().
How fast to rotate around the target if .autoRotate is true. Default is 2.0, which equates to 30 seconds
per orbit at 60fps. Note that if .autoRotate is enabled, you must call .update()
in your animation loop.
The damping inertia used if .enableDamping is set to true. Default is 0.05. Note that for this to work, you must
call .update() in your animation loop.
Set to true to enable damping (inertia), which can be used to give a sense of weight to the controls. Default is false.
Note that if this is enabled, you must call .update() in your animation loop.
Enable or disable horizontal and vertical rotation of the camera. Default is true.
Note that it is possible to disable a single axis by setting the min and max of the
polar angle or azimuth angle to the same value,
which will cause the vertical or horizontal rotation to be fixed at that value.
This object contains references to the keycodes for controlling camera panning. Default is the 4 arrow keys.
controls.keys = {
	LEFT: 'ArrowLeft', // left arrow
	UP: 'ArrowUp', // up arrow
	RIGHT: 'ArrowRight', // right arrow
	BOTTOM: 'ArrowDown' // down arrow
}
See KeyboardEvent.code for a full list of keycodes.
How far you can orbit horizontally, upper limit. If set, the interval [ min, max ] must be a sub-interval of [ - 2 PI, 2 PI ], with ( max - min < 2 PI ). Default is Infinity.
How far you can orbit horizontally, lower limit. If set, the interval [ min, max ] must be a sub-interval of [ - 2 PI, 2 PI ], with ( max - min < 2 PI ). Default is - Infinity.
This object contains references to the mouse actions used by the controls.
controls.mouseButtons = {
	LEFT: THREE.MOUSE.ROTATE,
	MIDDLE: THREE.MOUSE.DOLLY,
	RIGHT: THREE.MOUSE.PAN
}
Defines how the camera's position is translated when panning. If true, the camera pans in screen space.
Otherwise, the camera pans in the plane orthogonal to the camera's up direction.
Default is true.
The focus point of the .minTargetRadius and .maxTargetRadius limits. It can be updated manually at any point to change the center of interest for the .target.
Update the controls. Must be called after any manual changes to the camera's transform,
or in the update loop if .autoRotate or .enableDamping are set. deltaTime, in seconds, is optional,
and is only required if you want the auto-rotate speed to be independent of the frame rate (the refresh rate of the display).
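A sketch of a frame-rate-independent loop using a Clock to supply deltaTime (controls, renderer, scene, and camera are assumed to exist already):

```javascript
const clock = new THREE.Clock();

function animate() {
	requestAnimationFrame( animate );
	const delta = clock.getDelta(); // seconds elapsed since the previous frame
	controls.update( delta ); // keeps auto-rotate speed independent of the frame rate
	renderer.render( scene, camera );
}
animate();
```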
TrackballControls is similar to OrbitControls. However, it does not maintain a constant camera up vector.
That means if the camera orbits over the “north” and “south” poles, it does not flip to stay "right side up".
Import
TrackballControls is an add-on, and must be imported explicitly.
See Installation / Addons.
This class can be used to transform objects in 3D space by adapting a similar interaction model of DCC tools like Blender.
Unlike other controls, it is not intended to transform the scene's camera.
TransformControls expects that its attached 3D object is part of the scene graph.
Import
TransformControls is an add-on, and must be imported explicitly.
See Installation / Addons.
domElement: The HTML element used for event listeners. (optional)
Creates a new instance of TransformControls.
Events
change
Fires if any type of change (object or property change) is performed. Property changes
are separate events you can add event listeners to. The event type is "propertyname-changed".
mouseDown
Fires if a pointer (mouse/touch) becomes active.
mouseUp
Fires if a pointer (mouse/touch) is no longer active.
objectChange
Fires if the controlled 3D object is changed.
Properties
See the base Controls class for common properties.
By default, 3D objects are continuously rotated. If you set this property to a numeric value (radians), you can define in which
steps the 3D object should be rotated. Default is null.
By default, 3D objects are continuously translated. If you set this property to a numeric value (world units), you can define in which
steps the 3D object should be translated. Default is null.
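For example, to snap translation to whole world units and rotation to 15° steps (controls is an existing TransformControls instance):

```javascript
controls.translationSnap = 1; // move in steps of 1 world unit
controls.rotationSnap = THREE.MathUtils.degToRad( 15 ); // rotate in 15° steps
```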
Returns the Raycaster object that is used for user interaction. This object is shared between all instances of
TransformControls. If you set the .layers property of the TransformControls, you will also want to
set the .layers property on the Raycaster with a matching value, or else the TransformControls
won't work as expected.
ConvexGeometry can be used to generate a convex hull for a given array of 3D points.
The average time complexity for this task is considered to be O(n log(n)).
Import
ConvexGeometry is an add-on, and must be imported explicitly.
See Installation / Addons.
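A sketch of building a hull from a point cloud (here, random points in a cube; material and placement are illustrative):

```javascript
// Generate some points and wrap them in a convex hull geometry.
const points = [];
for ( let i = 0; i < 50; i ++ ) {
	points.push( new THREE.Vector3(
		Math.random() * 10 - 5,
		Math.random() * 10 - 5,
		Math.random() * 10 - 5
	) );
}

const geometry = new ConvexGeometry( points );
const mesh = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial( { color: 0x00ff00 } ) );
scene.add( mesh );
```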
DecalGeometry can be used to create a decal mesh that serves different kinds of purposes e.g. adding unique details
to models, performing dynamic visual environmental changes or covering seams.
Import
DecalGeometry is an add-on, and must be imported explicitly.
See Installation / Addons.
mesh — Any mesh object.
position — Position of the decal projector.
orientation — Orientation of the decal projector.
size — Size of the decal projector.
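A sketch of projecting a decal onto an existing mesh (the projector position, orientation, and size below are hypothetical values):

```javascript
const position = new THREE.Vector3( 0, 0, 1 );  // where the decal projector sits
const orientation = new THREE.Euler( 0, 0, 0 ); // rotation of the projector
const size = new THREE.Vector3( 1, 1, 1 );      // dimensions of the projector box

const decalGeometry = new DecalGeometry( mesh, position, orientation, size );
const decal = new THREE.Mesh( decalGeometry, new THREE.MeshBasicMaterial( { color: 0xff0000 } ) );
scene.add( decal );
```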
const geometry = new THREE.ParametricGeometry( THREE.ParametricGeometries.klein, 25, 25 );
const material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
const klein = new THREE.Mesh( geometry, material );
scene.add( klein );
func — A function that takes in a u and v value each between 0 and 1 and modifies a third Vector3 argument. Default is a function that generates a curved plane surface.
slices — The count of slices to use for the parametric function. Default is 8.
stacks — The count of stacks to use for the parametric function. Default is 8.
Properties
See the base BufferGeometry class for common properties.
size — Relative scale of the teapot. Optional; Defaults to 50.
segments — Number of line segments to subdivide each patch edge. Optional; Defaults to 10.
bottom — Whether the bottom of the teapot is generated or not. Optional; Defaults to true.
lid — Whether the lid is generated or not. Optional; Defaults to true.
body — Whether the body is generated or not. Optional; Defaults to true.
fitLid — Whether the lid is slightly stretched to prevent gaps between the body and lid or not. Optional; Defaults to true.
blinn — Whether the teapot is scaled vertically for better aesthetics or not. Optional; Defaults to true.
Properties
See the base BufferGeometry class for common properties.
A class for generating text as a single geometry. It is constructed by providing a string of text, and a set of
parameters consisting of a loaded font and settings for the geometry's parent ExtrudeGeometry.
See the FontLoader page for additional details.
Import
TextGeometry is an add-on, and must be imported explicitly.
See Installation / Addons.
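A sketch of loading a font and building text geometry from it (the font path and parameter values are illustrative; note that in recent three.js releases the extrusion parameter height has been renamed depth):

```javascript
const loader = new FontLoader();

loader.load( 'fonts/helvetiker_regular.typeface.json', function ( font ) {

	const geometry = new TextGeometry( 'Hello three.js!', {
		font: font,
		size: 80, // font size in world units
		height: 5, // extrusion depth
		curveSegments: 12
	} );

	const text = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial() );
	scene.add( text );

} );
```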
const light = new THREE.RectAreaLight( 0xffffbb, 1.0, 5, 5 );
const helper = new RectAreaLightHelper( light );
light.add( helper ); // helper must be added as a child of the light
The color parameter passed in the constructor. Default is undefined. If this is changed, the helper's color will update
the next time update is called.
Visualizes an object's vertex normals.
Requires that normals have been specified in a custom attribute or
have been calculated using computeVertexNormals.
Import
VertexNormalsHelper is an add-on, and must be imported explicitly.
See Installation / Addons.
object -- object for which to render vertex normals. size -- (optional) length of the arrows. Default is 1. color -- (optional) hex color of the arrows. Default is 0xff0000.
Properties
See the base LineSegments class for common properties.
Visualizes an object's vertex tangents.
Requires that tangents have been specified in a custom attribute or
have been calculated using computeTangents.
Import
VertexTangentsHelper is an add-on, and must be imported explicitly.
See Installation / Addons.
object -- object for which to render vertex tangents. size -- (optional) length of the arrows. Default is 1. color -- (optional) hex color of the arrows. Default is 0x00ffff.
Properties
See the base LineSegments class for common properties.
This adds functionality beyond Line, like arbitrary line width and the ability to define the width in world units.
It extends LineSegments2, simplifying the construction of segments from a chain of points.
Import
Line2 is an add-on, and therefore must be imported explicitly.
See Installation / Addons.
geometry — (optional) Pair(s) of vertices representing each line segment. material — (optional) Material for the line. Default is a LineMaterial with random color.
Properties
See the base LineSegments2 class for common properties.
A material for drawing wireframe-style geometries.
Unlike LineBasicMaterial, it supports arbitrary line widths and allows using world units instead of screen space units.
This material is used with LineSegments2 and Line2.
Lines are always rendered with round caps and round joints.
parameters - (optional) an object with one or more properties defining the material's appearance.
Any property of the material (including any property inherited from ShaderMaterial) can be passed in here.
The exception is the property color, which can be passed in as a number or hexadecimal string and is 0xffffff (white) by default.
Color.set( color ) is called internally.
Properties
See the base ShaderMaterial class for common properties.
The size of the viewport, in screen pixels.
This must be kept updated to make screen-space rendering accurate.
The LineSegments2.onBeforeRender callback performs the update for visible objects.
Default is [1,1].
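A sketch of keeping the resolution current (material is an existing LineMaterial instance):

```javascript
// Keep the material's resolution in sync with the viewport size.
material.resolution.set( window.innerWidth, window.innerHeight );

window.addEventListener( 'resize', () => {
	material.resolution.set( window.innerWidth, window.innerHeight );
} );
```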
A series of lines drawn between pairs of vertices.
This adds functionality beyond LineSegments, like arbitrary line width and the ability to define the width in world units.
Line2 extends this object, forming a polyline instead of individual segments.
Import
LineSegments2 is an add-on, and therefore must be imported explicitly.
See Installation / Addons.
geometry — (optional) Pair(s) of vertices representing each line segment. material — (optional) Material for the line. Default is a LineMaterial with random color.
Rhinoceros is a 3D modeler used to create, edit, analyze, document, render, animate, and translate NURBS curves, surfaces, breps, extrusions, point clouds, as well as polygon meshes and SubD objects.
rhino3dm.js is compiled to WebAssembly from the open source geometry library openNURBS.
The loader currently uses rhino3dm.js 8.4.0.
1 NURBS curves are discretized to a hardcoded resolution.
2 Types which are based on BREPs and NURBS surfaces are represented with their "Render Mesh". Render meshes might not be associated with these objects if they have not been displayed in an appropriate display mode in Rhino (i.e. "Shaded", "Rendered", etc), or are created programmatically, for example, via Grasshopper or directly with the rhino3dm library. As of rhino3dm.js@8.0.0-beta2, BrepFace and Extrusions can be assigned a mesh representation, but these must be generated by the user.
3 SubD objects are represented by subdividing their control net.
4 Whether a Rhino Document (File3dm) is loaded or parsed, the returned object is an Object3D with all Rhino objects (File3dmObject) as children. File3dm layers and other file level properties are added to the resulting object's userData.
5 All resulting three.js objects have useful properties from the Rhino object (i.e. layer index, name, etc.) populated in their userData object.
6 Rhino and three.js use different coordinate systems. Upon import, you should rotate the resulting Object3D by -90º in x, or set THREE.Object3D.DEFAULT_UP at the beginning of your application:
THREE.Object3D.DEFAULT_UP.set( 0, 0, 1 );
Keep in mind that this will affect the orientation of all of the Object3Ds in your application.
url — A string containing the path/URL of the .3dm file. onLoad — A function to be called after the loading is successfully completed. onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0. onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Begin loading from url and call the onLoad function with the resulting Object3d.
// Instantiate a loader
const loader = new Rhino3dmLoader();

// Specify path to a folder containing WASM/JS libraries or a CDN.
// For example, /jsm/libs/rhino3dm/ is the location of the library inside the three.js repository
// loader.setLibraryPath( '/path_to_library/rhino3dm/' );
loader.setLibraryPath( 'https://cdn.jsdelivr.net/npm/rhino3dm@8.4.0/' );

// Load a 3DM file
loader.load(
	// resource URL
	'model.3dm',
	// called when the resource is loaded
	function ( object ) {
		scene.add( object );
	},
	// called as loading progresses
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
buffer — An ArrayBuffer representing the Rhino File3dm document. onLoad — A function to be called after the loading is successfully completed. onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
Parse a File3dm ArrayBuffer and call the onLoad function with the resulting Object3d.
See this example for further reference.
import rhino3dm from 'https://cdn.jsdelivr.net/npm/rhino3dm@8.4.0'

// Instantiate a loader
const loader = new Rhino3dmLoader();

// Specify path to a folder containing WASM/JS libraries or a CDN.
loader.setLibraryPath( 'https://cdn.jsdelivr.net/npm/rhino3dm@8.4.0' );

const rhino = await rhino3dm();
console.log( 'Loaded rhino3dm.' );

// create a Rhino Document and add a point to it
const doc = new rhino.File3dm();
const ptA = [ 0, 0, 0 ];
const point = new rhino.Point( ptA );
doc.objects().add( point, null );

// create a copy of the doc.toByteArray data to get an ArrayBuffer
const buffer = new Uint8Array( doc.toByteArray() ).buffer;

loader.parse( buffer, function ( object ) {
	scene.add( object );
} );
value — Path to folder containing the JS and WASM libraries.
// Instantiate a loader
const loader = new Rhino3dmLoader();

// Specify path to a folder containing the WASM/JS library:
loader.setLibraryPath( '/path_to_library/rhino3dm/' );

// or from a CDN:
loader.setLibraryPath( 'https://cdn.jsdelivr.net/npm/rhino3dm@8.4.0' );
workerLimit - Maximum number of workers to be allocated. Default is 4.
Sets the maximum number of Web Workers
to be used during decoding. A lower limit may be preferable if workers are also used for other tasks
in the application.
A loader for geometry compressed with the Draco library.
Draco is an open source library for compressing and
decompressing 3D meshes and point clouds. Compressed geometry can be significantly smaller,
at the cost of additional decoding time on the client device.
Standalone Draco files have a .drc extension, and contain vertex positions,
normals, colors, and other attributes. Draco files do not contain materials,
textures, animation, or node hierarchies – to use these features, embed Draco geometry
inside of a glTF file. A normal glTF file can be converted to a Draco-compressed glTF file
using glTF-Pipeline. When
using Draco with glTF, an instance of DRACOLoader will be used internally by GLTFLoader.
It is recommended to create one DRACOLoader instance and reuse it to avoid loading and creating multiple
decoder instances.
Import
DRACOLoader is an add-on, and must be imported explicitly.
See Installation / Addons.
// Instantiate a loader
const loader = new DRACOLoader();

// Specify path to a folder containing WASM/JS decoding libraries.
loader.setDecoderPath( '/examples/jsm/libs/draco/' );

// Optional: Pre-fetch Draco WASM/JS module.
loader.preload();

// Load a Draco geometry
loader.load(
	// resource URL
	'model.drc',
	// called when the resource is loaded
	function ( geometry ) {
		const material = new THREE.MeshStandardMaterial( { color: 0x606060 } );
		const mesh = new THREE.Mesh( geometry, material );
		scene.add( mesh );
	},
	// called as loading progresses
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
url — A string containing the path/URL of the .drc file. onLoad — A function to be called after the loading is successfully completed. onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0. onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Begin loading from url and call the onLoad function with the decompressed geometry.
workerLimit - Maximum number of workers to be allocated. Default is 4.
Sets the maximum number of Web Workers
to be used during decoding. A lower limit may be preferable if workers are also used for other tasks
in the application.
Class for loading a font in JSON format. Returns a font, which is an
array of Shapes representing the font.
This uses the FileLoader internally for loading files.
url — the path or URL to the file. This can also be a
Data URI. onLoad — Will be called when loading completes. The argument will be the loaded font. onProgress — Will be called while loading progresses. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0. onError — Will be called when loading errors.
Begin loading from url and pass the loaded font to onLoad.
glTF (GL Transmission Format) is an
open format specification
for efficient delivery and loading of 3D content. Assets may be provided either in JSON (.gltf)
or binary (.glb) format. External files store textures (.jpg, .png) and additional binary
data (.bin). A glTF asset may deliver one or more scenes, including meshes, materials,
textures, skins, skeletons, morph targets, animations, lights, and/or cameras.
GLTFLoader uses ImageBitmapLoader whenever possible. Be advised that image bitmaps are not automatically garbage-collected when they are no longer referenced,
and they require special handling during the disposal process. More information can be found in the How to dispose of objects guide.
Import
GLTFLoader is an add-on, and must be imported explicitly.
See Installation / Addons.
// Instantiate a loader
const loader = new GLTFLoader();

// Optional: Provide a DRACOLoader instance to decode compressed mesh data
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath( '/examples/jsm/libs/draco/' );
loader.setDRACOLoader( dracoLoader );

// Load a glTF resource
loader.load(
	// resource URL
	'models/gltf/duck/duck.gltf',
	// called when the resource is loaded
	function ( gltf ) {
		scene.add( gltf.scene );

		gltf.animations; // Array<THREE.AnimationClip>
		gltf.scene; // THREE.Group
		gltf.scenes; // Array<THREE.Group>
		gltf.cameras; // Array<THREE.Camera>
		gltf.asset; // Object
	},
	// called while loading is progressing
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
When loading textures externally (e.g., using TextureLoader) and applying them to a glTF model,
the textures must be configured as follows. Textures referenced from the glTF model are configured automatically by
GLTFLoader.
// If texture is used for color information (.map, .emissiveMap, .specularMap, ...), set color space
texture.colorSpace = THREE.SRGBColorSpace;

// UVs use the convention that (0, 0) corresponds to the upper left corner of a texture.
texture.flipY = false;
Custom extensions
Metadata from unknown extensions is preserved as “.userData.gltfExtensions” on Object3D, Scene, and Material instances,
or attached to the response “gltf” object. Example:
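A sketch of reading such metadata; the extension name is hypothetical and `loader` is a GLTFLoader instance:

```javascript
loader.load( 'models/gltf/example.gltf', function ( gltf ) {
	// Metadata from unknown root-level extensions ends up here.
	const extensions = gltf.userData.gltfExtensions || {};
	// 'EXT_example_extension' is a made-up name for illustration.
	console.log( extensions[ 'EXT_example_extension' ] );

	// Per-object metadata is preserved on the objects themselves.
	gltf.scene.traverse( function ( object ) {
		if ( object.userData.gltfExtensions ) {
			console.log( object.name, object.userData.gltfExtensions );
		}
	} );
} );
```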
url — A string containing the path/URL of the .gltf or .glb file.
onLoad — A function to be called after the loading is successfully completed. The function receives the loaded JSON response returned from parse.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
Begin loading from url and call the callback function with the parsed response content.
ktx2Loader — Instance of KTX2Loader, to be used for loading KTX2 compressed textures.
# .parse ( data : ArrayBuffer, path : String, onLoad : Function, onError : Function ) : undefined
data — glTF asset to parse, as an ArrayBuffer, JSON string or object.
path — The base path from which to find subsequent glTF resources such as textures and .bin data files.
onLoad — A function to be called when parse completes.
onError — (optional) A function to be called if an error occurs during parsing. The function receives error as an argument.
Parse a glTF-based ArrayBuffer, JSON string or object and fire onLoad callback when complete. The argument to onLoad will be an Object that contains loaded parts: .scene, .scenes, .cameras, .animations, and .asset.
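As a sketch, .parse() can be combined with fetch when the asset bytes are obtained some other way; the URL is a placeholder, and an existing `scene` is assumed:

```javascript
const response = await fetch( 'models/gltf/example.glb' );
const buffer = await response.arrayBuffer();

const loader = new GLTFLoader();
loader.parse( buffer, '', function ( gltf ) {
	scene.add( gltf.scene );
}, function ( error ) {
	console.error( error );
} );
```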
KTX 2.0 is a container format for various GPU texture formats. The loader
supports Basis Universal GPU textures, which can be quickly transcoded to
a wide variety of GPU texture compression formats. While KTX 2.0 also allows
other hardware-specific formats, this loader does not yet parse them.
This loader parses the KTX 2.0 container and transcodes to a supported GPU compressed
texture format. The required WASM transcoder and JS wrapper are available from the
examples/jsm/libs/basis
directory.
Import
KTX2Loader is an add-on, and must be imported explicitly.
See Installation / Addons.
url — A string containing the path/URL of the .ktx2 file.
onLoad — A function to be called after the loading is successfully completed.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
Load from url and call the onLoad function with the transcoded CompressedTexture.
Detects hardware support for available compressed texture formats, to determine
the output format for the transcoder. Must be called before loading a texture.
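A typical setup, assuming an existing WebGLRenderer and material; the paths are placeholders:

```javascript
import { KTX2Loader } from 'three/addons/loaders/KTX2Loader.js';

const ktx2Loader = new KTX2Loader();
ktx2Loader.setTranscoderPath( 'examples/jsm/libs/basis/' );
// Must be called before load(), so the transcoder knows the target format.
ktx2Loader.detectSupport( renderer );

ktx2Loader.load( 'textures/example.ktx2', function ( texture ) {
	material.map = texture;
	material.needsUpdate = true;
} );
```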
An LDraw asset (a text file usually with extension .ldr, .dat or .txt) can describe
just a single construction piece, or an entire model.
In the case of a model the LDraw file can reference other LDraw files, which are loaded
from a library path set with setPartsLibraryPath. You usually download
the LDraw official parts library, extract it to a folder and point setPartsLibraryPath to it.
Library parts will be loaded by trial and error from the subfolders 'parts', 'p' and 'models'.
These file accesses are not optimal for a web environment, so a script tool has been made
to pack an LDraw file with all its dependencies into a single file, which loads much faster.
See section 'Packing LDraw models'. The LDrawLoader example loads several packed files.
The official parts library is not included due to its large size.
Import
LDrawLoader is an add-on, and must be imported explicitly.
See Installation / Addons.
// Instantiate a loader
const loader = new LDrawLoader();

// Optionally set library parts path
// loader.setPartsLibraryPath( path to library );

// Load an LDraw resource
loader.load(
	// resource URL
	'models/car.ldr_Packed.mpd',
	// called when the resource is loaded
	function ( group ) {
		// Optionally, use LDrawUtils.mergeObject() from
		// 'examples/jsm/utils/LDrawUtils.js' to merge all
		// geometries by material (it gives better runtime
		// performance, but building steps are lost)
		// group = LDrawUtils.mergeObject( group );
		scene.add( group );
	},
	// called while loading is progressing
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
To pack a model with all its referenced files, download the
Official LDraw parts library
and use the following Node script:
utils/packLDrawModel.js
It contains instructions on how to set up the files and execute it.
Metadata in .userData
LDrawLoader returns a Group object which contains an object hierarchy. Depending on each subobject's
type, its .userData member will contain the following members:
In a Group, the userData member will contain:
.numBuildingSteps: Only in the root Group. Indicates the total number of building steps in
the model. These can be used to set the visibility of objects to show different building steps, which is
done in the example.
.buildingStep: Indicates the building index of this step.
.category: Contains, if not null, the String category for this piece or model.
.keywords: Contains, if not null, an array of String keywords for this piece or model.
.code: Indicates the LDraw code for this material.
.edgeMaterial: Only in a Mesh material, indicates the LineBasicMaterial belonging to edges
of the same color code (in the LDraw format, each surface material is also related to an edge material).
.conditionalEdgeMaterial: Only in a LineSegments material, indicates the Material belonging
to conditional edges of the same color code.
url — A string containing the path/URL of the LDraw file.
onLoad — A function to be called after the loading is successfully completed. The function receives the loaded Group returned from parse.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
Begin loading from url and call the callback function with the parsed response content.
path — Path to library parts files to load referenced parts from. This is different from Loader.setPath, which indicates the path to load the main asset from.
This method must be called prior to .load, unless the model to load does not reference library parts (usually this means a model with all its parts packed in a single file).
map — Set a map from String to String which maps referenced library filenames to new filenames. If a fileMap is not specified (the default), library parts will be accessed by trial and error in subfolders 'parts', 'p' and 'models'.
# .parse ( text : String, path : String, onLoad : Function, onError : Function ) : undefined
text — LDraw asset to parse, as a String.
path — The base path from which to find other referenced LDraw asset files.
onLoad — A function to be called when parse completes.
Parse an LDraw file's contents as a String and fire the onLoad callback when complete. The argument to onLoad will be a Group containing a hierarchy of Group, Mesh and LineSegments objects (with other part data in .userData fields).
For an already loaded LDraw asset, returns the Material associated with the main color code.
This method can be useful to modify the main material of a model or part that exposes it.
The main color code is the standard way to color an LDraw part. It is '16' for triangles and '24' for edges. Usually
a complete model will not expose the main color (that is, no part uses the code '16' at the top level, because they
are assigned other specific colors). An LDraw part file, on the other hand, will expose the code '16' to be colored, and
can have additional fixed colors.
This async method preloads materials from a single LDraw file. In the official parts library there is a special
file (LDConfig.ldr) which is always loaded first and contains all the standard color codes. This method is
intended to be used with non-packed files, for example in an editor where materials are preloaded and parts are
loaded on demand.
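A sketch of the editor-style flow described above; the library path and file names are placeholders, and an existing `scene` is assumed:

```javascript
const loader = new LDrawLoader();
loader.setPartsLibraryPath( 'ldraw/' ); // path to the extracted parts library

// Preload the standard color codes once, then load parts on demand.
await loader.preloadMaterials( 'ldraw/LDConfig.ldr' );

loader.load( 'ldraw/models/car.ldr', function ( group ) {
	scene.add( group );
} );
```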
url — A string containing the path/URL of the .3dl file.
onLoad — (optional) A function to be called after the loading is successfully completed. The function receives the result of the parse method.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Parse a 3dl data string and fire onLoad callback when complete. The argument to onLoad will be an object containing the following LUT data: .size, .texture and .texture3D.
url — A string containing the path/URL of the .cube file.
onLoad — (optional) A function to be called after the loading is successfully completed. The function receives the result of the parse method.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
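A minimal sketch of loading a .cube LUT; the file path is a placeholder, and the resulting 3D texture would typically be fed to a LUT-based color-grading pass:

```javascript
import { LUTCubeLoader } from 'three/addons/loaders/LUTCubeLoader.js';

new LUTCubeLoader().load( 'luts/example.cube', function ( result ) {
	console.log( result.size );
	// result.texture3D can be assigned to a LUT-based post-processing pass.
} );
```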
MMDLoader creates three.js objects from MMD resources such as PMD, PMX, VMD, and VPD files.
See MMDAnimationHelper for MMD animation handling such as IK, grant, and physics.
If you want the raw content of MMD resources, use the .loadPMD/PMX/VMD/VPD methods.
// Instantiate a loader
const loader = new MMDLoader();

// Load a MMD model
loader.load(
	// path to PMD/PMX file
	'models/mmd/miku.pmd',
	// called when the resource is loaded
	function ( mesh ) {
		scene.add( mesh );
	},
	// called when loading is in progress
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
url — A string containing the path/URL of the .pmd or .pmx file.
onLoad — A function to be called after the loading is successfully completed.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
url — A string or an array of strings containing the path/URL of the .vmd file(s). If two or more files are specified, they'll be merged.
object — SkinnedMesh or Camera. The clip and its tracks will be fitted to this object.
onLoad — A function to be called after the loading is successfully completed.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes.
onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
Begin loading VMD motion file(s) from url(s) and fire the callback function with the parsed AnimationClip.
modelUrl — A string containing the path/URL of the .pmd or .pmx file.
vmdUrl — A string or an array of strings containing the path/URL of the .vmd file(s).
onLoad — A function to be called after the loading is successfully completed.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes.
onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
Begin loading the PMD/PMX model file and VMD motion file(s) from the given urls and fire the callback function with an Object containing the parsed SkinnedMesh and an AnimationClip fitted to it.
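A sketch of playing the loaded motion with an AnimationMixer; the file paths are placeholders, and an existing `scene` is assumed:

```javascript
const loader = new MMDLoader();
loader.loadWithAnimation( 'models/mmd/miku.pmd', 'motions/dance.vmd', function ( mmd ) {
	scene.add( mmd.mesh ); // SkinnedMesh

	const mixer = new THREE.AnimationMixer( mmd.mesh );
	mixer.clipAction( mmd.animation ).play();
	// Call mixer.update( delta ) in the render loop.
} );
```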
A loader for loading an .mtl resource, used internally by OBJLoader.
The Material Template Library format (MTL) or .MTL File Format is a companion file format to .OBJ that describes surface shading
(material) properties of objects within one or more .OBJ files.
url — A string containing the path/URL of the .mtl file.
onLoad — (optional) A function to be called after the loading is successfully completed. The function receives the loaded MTLLoader.MaterialCreator instance.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Begin loading from url and call onLoad with the loaded MTLLoader.MaterialCreator instance.
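MTLLoader is usually combined with OBJLoader, as sketched below; the file paths are placeholders, and an existing `scene` is assumed:

```javascript
import { MTLLoader } from 'three/addons/loaders/MTLLoader.js';
import { OBJLoader } from 'three/addons/loaders/OBJLoader.js';

new MTLLoader().load( 'models/example.mtl', function ( materials ) {
	materials.preload();

	new OBJLoader()
		.setMaterials( materials )
		.load( 'models/example.obj', function ( object ) {
			scene.add( object );
		} );
} );
```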
A loader for loading a .obj resource.
The OBJ file format is a simple data format that represents 3D geometry in a human-readable form: the position of each vertex, the UV position of
each texture coordinate vertex, vertex normals, and the faces that make up each polygon, defined as a list of
vertices and texture vertices.
// instantiate a loader
const loader = new OBJLoader();

// load a resource
loader.load(
	// resource URL
	'models/monster.obj',
	// called when resource is loaded
	function ( object ) {
		scene.add( object );
	},
	// called when loading is in progress
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
url — A string containing the path/URL of the .obj file.
onLoad — (optional) A function to be called after the loading is successfully completed. The function receives the loaded Object3D as an argument.
onProgress — (optional) A function to be called while the loading is in progress. The function receives a XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives error as an argument.
Begin loading from url and call onLoad with the parsed response content.
Returns an Object3D. It contains the parsed meshes as Mesh and lines as LineSegments.
All geometry is created as BufferGeometry. Default materials are created as MeshPhongMaterial.
If an obj object or group uses multiple materials while declaring faces, geometry groups and an array of materials are used.
// instantiate a loader
const loader = new PCDLoader();

// load a resource
loader.load(
	// resource URL
	'pointcloud.pcd',
	// called when the resource is loaded
	function ( points ) {
		scene.add( points );
	},
	// called when loading is in progress
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
url — A string containing the path/URL of the .pcd file.
onLoad — (optional) A function to be called after loading is successfully completed. The function receives the loaded Object3D as an argument.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Begin loading from url and call onLoad with the parsed response content.
// instantiate a loader
const loader = new PDBLoader();

// load a PDB resource
loader.load(
	// resource URL
	'models/pdb/caffeine.pdb',
	// called when the resource is loaded
	function ( pdb ) {
		const geometryAtoms = pdb.geometryAtoms;
		const geometryBonds = pdb.geometryBonds;
		const json = pdb.json;
		console.log( 'This molecule has ' + json.atoms.length + ' atoms' );
	},
	// called when loading is in progress
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
url — A string containing the path/URL of the .pdb file.
onLoad — (optional) A function to be called after loading is successfully completed. The function receives an object with the properties geometryAtoms, geometryBonds and json.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Begin loading from url and call onLoad with the parsed response content.
A loader for loading a .svg resource. Scalable Vector Graphics is an XML-based vector image format for two-dimensional graphics with support for interactivity and animation.
// instantiate a loader
const loader = new SVGLoader();

// load a SVG resource
loader.load(
	// resource URL
	'data/svgSample.svg',
	// called when the resource is loaded
	function ( data ) {
		const paths = data.paths;
		const group = new THREE.Group();

		for ( let i = 0; i < paths.length; i ++ ) {
			const path = paths[ i ];

			const material = new THREE.MeshBasicMaterial( {
				color: path.color,
				side: THREE.DoubleSide,
				depthWrite: false
			} );

			const shapes = SVGLoader.createShapes( path );

			for ( let j = 0; j < shapes.length; j ++ ) {
				const shape = shapes[ j ];
				const geometry = new THREE.ShapeGeometry( shape );
				const mesh = new THREE.Mesh( geometry, material );
				group.add( mesh );
			}
		}

		scene.add( group );
	},
	// called when loading is in progress
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when loading has errors
	function ( error ) {
		console.log( 'An error happened' );
	}
);
url — A string containing the path/URL of the .svg file.
onLoad — (optional) A function to be called after loading is successfully completed. The function receives an array of ShapePath as an argument.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Begin loading from url and call onLoad with the response content.
// instantiate a loader
const loader = new TGALoader();

// load a resource
const texture = loader.load(
	// resource URL
	'textures/crate_grey8.tga',
	// called when loading is completed
	function ( texture ) {
		console.log( 'Texture is loaded' );
	},
	// called when the loading is in progress
	function ( xhr ) {
		console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
	},
	// called when the loading fails
	function ( error ) {
		console.log( 'An error happened' );
	}
);

const material = new THREE.MeshPhongMaterial( {
	color: 0xffffff,
	map: texture
} );
url — A string containing the path/URL of the .tga file.
onLoad — (optional) A function to be called after loading is successfully completed. The function receives the loaded DataTexture as an argument.
onProgress — (optional) A function to be called while the loading is in progress. The argument will be the XMLHttpRequest instance, which contains .total and .loaded bytes. If the server does not set the Content-Length header, .total will be 0.
onError — (optional) A function to be called if an error occurs during loading. The function receives the error as an argument.
Begin loading from url and pass the loaded texture to onLoad. The texture is also directly returned for immediate use (but may not be fully loaded).
LensflareElement( texture : Texture, size : Float, distance : Float, color : Color )
texture - THREE.Texture to use for the flare.
size - (optional) size in pixels.
distance - (optional) (0-1) distance from the light source (0 = at light source).
color - (optional) the Color of the lens flare.
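A sketch of attaching flare elements to a light source; the texture path is a placeholder, and an existing `scene` is assumed:

```javascript
import * as THREE from 'three';
import { Lensflare, LensflareElement } from 'three/addons/objects/Lensflare.js';

const light = new THREE.PointLight( 0xffffff, 1.5, 2000 );

const texture = new THREE.TextureLoader().load( 'textures/lensflare0.png' );

const lensflare = new Lensflare();
lensflare.addElement( new LensflareElement( texture, 512, 0 ) ); // at the light
lensflare.addElement( new LensflareElement( texture, 60, 0.6 ) ); // further along
light.add( lensflare );

scene.add( light );
```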
Used to implement post-processing effects in three.js. The class manages a chain of post-processing passes
to produce the final visual result. Post-processing passes are executed in order of their addition/insertion.
The last pass is automatically rendered to screen.
Import
EffectComposer is an add-on, and must be imported explicitly.
See Installation / Addons.
Returns true if the pass for the given index is the last enabled pass in the pass chain.
Used by EffectComposer to determine when a pass should be rendered to screen.
Sets the device pixel ratio. This is usually used on HiDPI devices to prevent a blurry output.
The semantics of this method are therefore similar to WebGLRenderer.setPixelRatio().
width -- The width of the EffectComposer.
height -- The height of the EffectComposer.
Resizes the internal render buffers and passes to (width, height) with device pixel ratio taken into account.
The semantics of this method are therefore similar to WebGLRenderer.setSize().
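A minimal composer setup, assuming existing `renderer`, `scene` and `camera`:

```javascript
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';

const composer = new EffectComposer( renderer );
composer.addPass( new RenderPass( scene, camera ) );

composer.setPixelRatio( window.devicePixelRatio );
composer.setSize( window.innerWidth, window.innerHeight );

function animate() {
	requestAnimationFrame( animate );
	composer.render(); // replaces renderer.render( scene, camera )
}
animate();
```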
An exporter to compress geometry with the Draco library.
Draco is an open source library for compressing and
decompressing 3D meshes and point clouds. Compressed geometry can be significantly smaller,
at the cost of additional decoding time on the client device.
Standalone Draco files have a .drc extension, and contain vertex positions,
normals, colors, and other attributes. Draco files do not contain materials,
textures, animation, or node hierarchies – to use these features, embed Draco geometry
inside of a glTF file. A normal glTF file can be converted to a Draco-compressed glTF file
using glTF-Pipeline.
Import
DRACOExporter is an add-on, and must be imported explicitly.
See Installation / Addons.
object — Mesh or Points to encode.
options — Optional export options:
decodeSpeed - int. Indicates how to tune the encoder regarding decode speed (0 gives better speed but worse quality). Default is 5.
encodeSpeed - int. Indicates how to tune the encoder parameters (0 gives better speed but worse quality). Default is 5.
encoderMethod - int. Either sequential (very little compression) or Edgebreaker. Edgebreaker traverses the triangles of the mesh in a deterministic, spiral-like way which provides most of the benefits of this data format. Default is DRACOExporter.MESH_EDGEBREAKER_ENCODING.
quantization - Array of int. Indicates the precision of each type of data stored in the draco file in the order (POSITION, NORMAL, COLOR, TEX_COORD, GENERIC). Default is [ 16, 8, 8, 8, 8 ].
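A sketch using the options above; note that the Draco encoder library must be available globally before parsing, and `mesh` is assumed to be an existing Mesh:

```javascript
import { DRACOExporter } from 'three/addons/exporters/DRACOExporter.js';

// The Draco encoder (draco_encoder.js) must be loaded beforehand.
const exporter = new DRACOExporter();
const data = exporter.parse( mesh, {
	decodeSpeed: 5,
	encodeSpeed: 5,
	encoderMethod: DRACOExporter.MESH_EDGEBREAKER_ENCODING,
	quantization: [ 16, 8, 8, 8, 8 ]
} );
// 'data' holds the compressed bytes, suitable for saving as a .drc file.
```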
EXR (Extended Dynamic Range) is an
open format specification
for a professional-grade image storage format used in the motion picture industry. The purpose of
the format is to accurately and efficiently represent high-dynamic-range, scene-linear image data
and associated metadata. The library is widely used in host application software where accuracy
is critical, such as photorealistic rendering, texture access, image compositing, deep compositing,
and DI.
Import
EXRExporter is an add-on, and must be imported explicitly.
See Installation / Addons.
glTF (GL Transmission Format) is an
open format specification
for efficient delivery and loading of 3D content. Assets may be provided either in JSON (.gltf)
or binary (.glb) format. External files store textures (.jpg, .png) and additional binary
data (.bin). A glTF asset may deliver one or more scenes, including meshes, materials,
textures, skins, skeletons, morph targets, animations, lights, and/or cameras.
Import
GLTFExporter is an add-on, and must be imported explicitly.
See Installation / Addons.
// Instantiate an exporter
const exporter = new GLTFExporter();

// Parse the input and generate the glTF output
exporter.parse(
	scene,
	// called when the gltf has been generated
	function ( gltf ) {
		console.log( gltf );
		downloadJSON( gltf );
	},
	// called when there is an error in the generation
	function ( error ) {
		console.log( 'An error happened' );
	},
	options
);
Export objects (a new Scene will be created to hold all the objects):
exporter.parse( object1, ... )
exporter.parse( [ object1, object2 ], ... )
Mix scenes and objects (the scenes will be exported as usual, and a new scene will be created to hold all the single objects):
exporter.parse( [ scene1, object1, object2, scene2 ], ... )
Generates a .gltf (JSON) or .glb (binary) output from the input (Scenes or Objects).
This is just like the .parse() method, but instead of
accepting callbacks it returns a promise that resolves with the
result, and otherwise accepts the same options.
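For example, exporting a binary .glb with async/await; the download step is only indicated:

```javascript
const exporter = new GLTFExporter();
const result = await exporter.parseAsync( scene, { binary: true } );

// With { binary: true } the result is an ArrayBuffer.
const blob = new Blob( [ result ], { type: 'model/gltf-binary' } );
// e.g. trigger a download via an <a download> element pointing at the blob.
```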
// Instantiate an exporter
const exporter = new OBJExporter();

// Parse the input and generate the OBJ output
const data = exporter.parse( scene );
downloadFile( data );
PLY (Polygon or Stanford Triangle Format) is a
file format for efficient delivery and loading of simple, static 3D content in a dense format.
Both binary and ASCII formats are supported. PLY can store vertex positions, colors, normals and
uv coordinates. No textures or texture references are saved.
Import
PLYExporter is an add-on, and must be imported explicitly.
See Installation / Addons.
// Instantiate an exporter
const exporter = new PLYExporter();

// Parse the input and generate the ply output
const data = exporter.parse( scene, options );
downloadFile( data );
input — Object3D.
onCompleted — Will be called when the export completes. The argument will be the generated ply ASCII or binary ArrayBuffer.
options — Export options.
excludeAttributes - array. Which properties to explicitly exclude from the exported PLY file. Valid values are 'color', 'normal', 'uv', and 'index'. If triangle indices are excluded, then a point cloud is exported. Default is an empty array.
binary - bool. Export in binary format, returning an ArrayBuffer. Default is false.
Generates ply file data as a string or ArrayBuffer (ASCII or binary) output from the input object. The data that is returned is the same
that is passed into the "onCompleted" function.
If the object is composed of multiple children and geometry, they are merged into a single mesh in the file.
STL files describe only the surface geometry
of a three-dimensional object without any representation of color, texture or other common model attributes.
The STL format specifies both ASCII and binary representations, with binary being more compact.
STL files contain no scale information or indexes, and the units are arbitrary.
Import
STLExporter is an add-on, and must be imported explicitly.
See Installation / Addons.
const lut = new Lut( 'rainbow', 512 );
const color = lut.getColor( 0.5 );
Constructor
Lut( colormap : String, count : Number )
colormap - Sets a colormap from predefined colormaps. Available colormaps are: rainbow, cooltowarm, blackbody, grayscale. Default is rainbow.
count - Sets the number of colors used to represent the data array. Default is 32.
# .addColorMap ( name : String, arrayOfColors : Array ) : this
name — The name of the color map.
arrayOfColors — An array of color values. Each value is an array holding a threshold and the actual color value as a hexadecimal number.
Utility class for sampling weighted random points on the surface of a mesh.
Weighted sampling is useful for effects like heavier foliage growth in certain areas of terrain, or concentrated particle emissions from specific parts of a mesh. Vertex weights may be written programmatically, or painted by hand as vertex colors in 3D tools like Blender.
Import
MeshSurfaceSampler is an add-on, and must be imported explicitly.
See Installation / Addons.
// Create a sampler for a Mesh surface.
const sampler = new MeshSurfaceSampler( surfaceMesh )
	.setWeightAttribute( 'color' )
	.build();

const mesh = new THREE.InstancedMesh( sampleGeometry, sampleMaterial, 100 );

const position = new THREE.Vector3();
const matrix = new THREE.Matrix4();

// Sample randomly from the surface, creating an instance of the sample
// geometry at each sample point.
for ( let i = 0; i < 100; i ++ ) {
	sampler.sample( position );
	matrix.makeTranslation( position.x, position.y, position.z );
	mesh.setMatrixAt( i, matrix );
}

scene.add( mesh );
Creates a new MeshSurfaceSampler. If the input geometry is indexed, a non-indexed copy is made. After construction, the sampler is not able to return samples until build is called.
Specifies a vertex attribute to be used as a weight when sampling from the surface. Faces with higher weights are more likely to be sampled, and those with weights of zero will not be sampled at all. For vector attributes, only .x is used in sampling.
If no weight attribute is selected, sampling is randomly distributed by area.
Processes the input geometry and prepares to return samples. Any configuration of the geometry or sampler must occur before this method is called. Time complexity is O(n) for a surface with n faces.
Selects a random point on the surface of the input geometry, returning the position and optionally the normal vector, color and UV Coordinate at that point. Time complexity is O(log n) for a surface with n faces.
center — The center of the OBB. (optional)
halfSize — Positive halfwidth extents of the OBB along each axis. (optional)
rotation — The rotation of the OBB. (optional)
Applies the given transformation matrix to this OBB. This method can be used to transform the
bounding volume with the world matrix of a 3D object in order to keep both entities in sync.
This class is an alternative to Clock with a different API design and behavior.
The goal is to avoid the conceptual flaws that became apparent in Clock over time.
Timer has an .update() method that updates its internal state. That makes it possible to call .getDelta() and .getElapsed() multiple times per simulation step without getting different values.
The class uses the Page Visibility API to avoid large time delta values when the app is inactive (e.g. tab switched or browser hidden).
timestamp -- (optional) The current time in milliseconds. Can be obtained from the
requestAnimationFrame
callback argument. If not provided, the current time will be determined with
performance.now.
Updates the internal state of the timer. This method should be called once per simulation step
and before you perform queries against the timer (e.g. via .getDelta()).
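A typical render loop using Timer; `mesh`, `renderer`, `scene` and `camera` are assumed to exist:

```javascript
import { Timer } from 'three/addons/misc/Timer.js';

const timer = new Timer();

function animate( timestamp ) {
	requestAnimationFrame( animate );

	timer.update( timestamp );
	const delta = timer.getDelta(); // identical for every call this frame

	mesh.rotation.y += delta * 0.5;
	renderer.render( scene, camera );
}
requestAnimationFrame( animate );
```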
Using interpolated vertex normals, the mesh faces will blur at the edges and appear smooth.
You can control the smoothness by setting the cutOffAngle.
To try to keep the original normals, set tryKeepNormals to true.
Reference to the destination vertex. The origin vertex can be obtained by querying the destination of its twin, or of the previous half-edge. Default is undefined.
eyeVertex - The vertex that is added to the hull. horizonEdge - A single edge of the horizon.
Creates a face with the vertices 'eyeVertex.point', 'horizonEdge.tail' and 'horizonEdge.head' in CCW order.
All the half edges are created in CCW order, so the face always points outside the hull.
Adds a vertex to the hull with the following algorithm:
Compute the 'horizon', a chain of half edges. For an edge to belong to this group it must join a face that can see 'eyeVertex' and a face that cannot see 'eyeVertex'.
All the faces that can see 'eyeVertex' have their visible vertices removed from the assigned vertex list.
A new set of faces is created with each edge of the 'horizon' and 'eyeVertex'. Each face is connected with the opposite horizon face and the face on the left/right.
The vertices removed from all the visible faces are reassigned to the new faces if possible.
eyePoint - The 3D-coordinates of a point. crossEdge - The edge used to jump to the current face. face - The current face being tested. horizon - The edges that form part of the horizon in CCW order.
Computes a chain of half edges in CCW order called the 'horizon'. For an edge to be part of the horizon it must join a face that can see 'eyePoint' and a face that cannot see 'eyePoint'.
vertex - The vertex to remove. face - The target face.
Removes a vertex from the 'assigned' list of vertices and from the given face. It also makes sure that the link from 'face' to the first vertex it sees in 'assigned' is linked correctly after the removal.
CSS2DRenderer is a simplified version of CSS3DRenderer. The only transformation that is supported is translation.
The renderer is very useful if you want to combine HTML based labels with 3D objects. Here too, the respective DOM elements are wrapped into an instance of CSS2DObject and added to the scene graph.
CSS2DRenderer only supports 100% browser and display zoom.
Import
CSS2DRenderer is an add-on, and must be imported explicitly.
See Installation / Addons.
element - A HTMLElement
where the renderer appends its child-elements.
This corresponds to the domElement property below.
If not passed in here, a new div element will be created.
A HTMLElement where the renderer appends its child-elements.
This is automatically created by the renderer in the constructor (if not provided already).
CSS3DRenderer can be used to apply hierarchical 3D transformations to DOM elements
via the CSS3 transform property.
This renderer is particularly interesting if you want to apply 3D effects to a website without
canvas based rendering. It can also be used in order to combine DOM elements with WebGL
content.
There are, however, some important limitations:
It's not possible to use the material system of three.js.
It's also not possible to use geometries.
CSS3DRenderer only supports 100% browser and display zoom.
Import
CSS3DRenderer is an add-on, and must be imported explicitly.
See Installation / Addons.
element - A HTMLElement
where the renderer appends its child-elements.
This corresponds to the domElement property below.
If not passed in here, a new div element will be created.
A HTMLElement where the renderer appends its child-elements.
This is automatically created by the renderer in the constructor (if not provided already).
SVGRenderer can be used to render geometric data using SVG. The produced vector graphics are particularly useful in the following use cases:
Animated logos or icons
Interactive 2D/3D diagrams or graphs
Interactive maps
Complex or animated user interfaces
SVGRenderer has various advantages. It produces crystal-clear and sharp output which is independent of the actual viewport resolution.
SVG elements can be styled via CSS. And they have good accessibility since it's possible to add metadata like title or description (useful for search engines or screen readers).
There are, however, some important limitations:
No advanced shading
No texture support
No shadow support
Import
SVGRenderer is an add-on, and must be imported explicitly.
See Installation / Addons.
MikkTSpace -- Instance of examples/jsm/libs/mikktspace.module.js, or mikktspace npm package. Await MikkTSpace.ready before use.
negateSign -- Whether to negate the sign component (.w) of each tangent. Required for normal map conventions in some formats, including glTF.
Computes vertex tangents using the MikkTSpace algorithm.
MikkTSpace generates the same tangents consistently, and is used in most modelling tools and
normal map bakers. Use MikkTSpace for materials with normal maps, because inconsistent
tangents may lead to subtle visual issues in the normal map, particularly around mirrored
UV seams.
In comparison to this method, BufferGeometry.computeTangents (a
custom algorithm) generates tangents that probably will not match the tangents
in other software. The custom algorithm is sufficient for general use with a
ShaderMaterial, and may be faster than MikkTSpace.
Returns the original BufferGeometry. Indexed geometries will be de-indexed.
Requires position, normal, and uv attributes.
Returns the current position and normal attributes of a morphed/skinned Object3D whose geometry is a
BufferGeometry, together with the original ones, as an object with four properties:
positionAttribute, normalAttribute, morphedPositionAttribute and morphedNormalAttribute.
Helpful for raytracing or decals: a DecalGeometry applied to a morphed object
with a BufferGeometry would otherwise use the original BufferGeometry, not the morphed/skinned one,
producing an incorrect result.
Use this function to create a shadow Object3D from which the DecalGeometry can be generated correctly.
Interleaves a set of attributes and returns a new array of corresponding attributes that share
a single InterleavedBuffer instance. All attributes must have compatible types. If merge does not
succeed, the method returns null.
Merges a set of attributes into a single instance. All attributes must have compatible properties
and types, and InterleavedBufferAttributes are not supported. If merge does not succeed, the method
returns null.
geometry -- Instance of BufferGeometry to merge the vertices of.
tolerance -- The maximum allowable difference between vertex attributes to merge. Defaults to 1e-4.
Returns a new BufferGeometry with vertices for which all similar vertex attributes
(within tolerance) are merged.
geometry -- Instance of BufferGeometry.
drawMode -- The draw mode of the given geometry. Valid inputs are THREE.TriangleStripDrawMode and THREE.TriangleFanDrawMode.
Returns a new indexed geometry based on THREE.TrianglesDrawMode draw mode. This mode corresponds to the gl.TRIANGLES WebGL primitive.
Set a PerspectiveCamera's projectionMatrix and quaternion to exactly frame the corners of an arbitrary rectangle using Kooima's Generalized Perspective Projection formulation.
NOTE: This function ignores the standard parameters; do not call updateProjectionMatrix() after this! toJSON will also not capture the off-axis matrix generated by this function.
geometry -- The geometry for the set of materials.
materials -- The materials for the object.
Creates a new Group that contains a new mesh for each material defined in materials. Beware that this is not the same as an array of materials, which defines multiple materials for a single mesh.
This is mostly useful for objects that need both a material and a wireframe implementation.
object -- The object to traverse (uses traverseVisible internally).
func -- The binary function applied for the reduction. Must have the signature: (value: T, vertex: Vector3): T.
initialValue -- The value to initialize the reduction with. This is required
as it also sets the reduction type, which is not required to be Vector3.
Akin to Array.prototype.reduce(), but operating on the vertices of all the
visible descendant objects, in world space. Additionally, it can operate as a
transform-reduce, returning a different type T than the Vector3 input. This
can be useful for e.g. fitting a viewing frustum to the scene.
mesh -- InstancedMesh in which instances will be sorted.
compareFn -- Comparator function defining the sort order.
Sorts the instances within an InstancedMesh, according to a user-defined
callback. The callback will be provided with two arguments, indexA
and indexB, and must return a numerical value. See
Array.prototype.sort
for more information on sorting callbacks and their return values.
Because of the high performance cost, three.js does not sort
InstancedMesh instances automatically. Manually sorting may be
helpful to improve display of alpha blended materials (back to front),
and to reduce overdraw in opaque materials (front to back).
Clones the given object and its descendants, ensuring that any SkinnedMesh instances
are correctly associated with their bones. Bones are also cloned, and must be descendants of
the object passed to this method. Other data, like geometries and materials, are reused by
reference.
XREstimatedLight uses WebXR's light estimation to create
a light probe, a directional light, and (optionally) an environment map
that model the user's real-world environment and lighting.
As WebXR updates the light and environment estimation, XREstimatedLight
automatically updates the light probe, directional light, and environment map.
It's important to specify light-estimation as an optional or required
feature when creating the WebXR session, otherwise the light estimation
can't work.
See here
for browser compatibility information, as this is still an experimental feature in WebXR.
Import
XREstimatedLight is an add-on, and must be imported explicitly.
See Installation / Addons.
renderer.xr.enabled = true;

// Don't add the XREstimatedLight to the scene initially.
// It doesn't have any estimated lighting values until an AR session starts.
const xrLight = new XREstimatedLight( renderer );

xrLight.addEventListener( 'estimationstart', () => {
	scene.add( xrLight );
	if ( xrLight.environment ) {
		scene.environment = xrLight.environment;
	}
} );

xrLight.addEventListener( 'estimationend', () => {
	scene.remove( xrLight );
	scene.environment = null;
} );

// In order for lighting estimation to work, 'light-estimation' must be
// included as either an optional or a required feature.
document.body.appendChild( XRButton.createButton( renderer, {
	optionalFeatures: [ 'light-estimation' ]
} ) );
Constructor for the GLSL program sent to vertex and fragment shaders, including default uniforms and attributes.
Built-in uniforms and attributes
Vertex shader (unconditional):
// = object.matrixWorld
uniform mat4 modelMatrix;

// = camera.matrixWorldInverse * object.matrixWorld
uniform mat4 modelViewMatrix;

// = camera.projectionMatrix
uniform mat4 projectionMatrix;

// = camera.matrixWorldInverse
uniform mat4 viewMatrix;

// = inverse transpose of modelViewMatrix
uniform mat3 normalMatrix;

// = camera position in world space
uniform vec3 cameraPosition;

// default vertex attributes provided by BufferGeometry
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
Note that you can therefore calculate the position of a vertex in the vertex shader by:
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
or alternatively
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4( position, 1.0 );
Vertex shader (conditional):
#ifdef USE_TANGENT
	attribute vec4 tangent;
#endif

#if defined( USE_COLOR_ALPHA )
	// vertex color attribute with alpha
	attribute vec4 color;
#elif defined( USE_COLOR )
	// vertex color attribute
	attribute vec3 color;
#endif

#ifdef USE_MORPHTARGETS

	attribute vec3 morphTarget0;
	attribute vec3 morphTarget1;
	attribute vec3 morphTarget2;
	attribute vec3 morphTarget3;

	#ifdef USE_MORPHNORMALS

		attribute vec3 morphNormal0;
		attribute vec3 morphNormal1;
		attribute vec3 morphNormal2;
		attribute vec3 morphNormal3;

	#else

		attribute vec3 morphTarget4;
		attribute vec3 morphTarget5;
		attribute vec3 morphTarget6;
		attribute vec3 morphTarget7;

	#endif

#endif

#ifdef USE_SKINNING
	attribute vec4 skinIndex;
	attribute vec4 skinWeight;
#endif

#ifdef USE_INSTANCING
	// Note that modelViewMatrix is not set when rendering an instanced model,
	// but can be calculated from viewMatrix * modelMatrix.
	//
	// Basic Usage:
	//   gl_Position = projectionMatrix * viewMatrix * modelMatrix * instanceMatrix * vec4( position, 1.0 );
	attribute mat4 instanceMatrix;
#endif