This video is all about Blender vocabulary.
Have you ever watched a Blender tutorial or asked a question in a forum, and heard
someone casually throw out a term like “Inverse Kinematics” or “Driver,” and had
no idea what they were talking about? Hopefully, this video will help. A “mesh” in 3D software refers to the combination
of vertices, edges and faces that form the shape of an object. It also refers to the specific type
of object that uses this kind of geometry. Geometry, by the way, is another term that simply
describes the three-dimensional shapes you create. All meshes are objects, but not all objects
are meshes - like curves, for example. Primitives are the most basic shapes you start with, like planes, cubes, spheres, cones and, of course, monkey heads.
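By the way, if you ever script in Blender, the Python operators are named after these primitives. A minimal sketch using Blender's bpy module:

```python
import bpy

# A minimal sketch: adding primitives from Python.
bpy.ops.mesh.primitive_plane_add()
bpy.ops.mesh.primitive_cube_add(location=(3, 0, 0))
bpy.ops.mesh.primitive_uv_sphere_add(location=(6, 0, 0))
bpy.ops.mesh.primitive_monkey_add(location=(9, 0, 0))  # Suzanne, of course
```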
Planar is an adjective that basically
means flat or level. A four-sided plane will enter a Blender scene as
planar. But if you move a vertex, you can cause it to be non-planar. This
is generally bad. Tris are always planar. An Ngon is a face that is
made up of five or more sides. Faces with four sides are called quads.
Faces with three sides are called tris. Faces with two sides… well… they don’t exist
because that’s not possible. Generally “clean geometry” includes only tris and quads. Avoid
Ngons when you can. They can cause shading issues. Proportional editing is a way to have
the movement of selected vertices affect unselected vertices nearby, in proportion
to their distance. You turn it on with the icon up here, and there
are different ways in which the vertices can be affected. This is called “falloff.” The shortcut
“O” toggles proportional editing on and off. And while you’re moving, rotating or scaling
something, scrolling the middle mouse wheel up and down changes how far the proportional
editing will have an effect. It works in object mode too, where you can have unselected objects affected when you move a selected object.
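If you're scripting, these same toggles live on the scene's tool settings. A minimal sketch (the falloff names below are the standard enum options):

```python
import bpy

# A minimal sketch: proportional editing settings from Python.
ts = bpy.context.scene.tool_settings
ts.use_proportional_edit = True            # same as pressing "O" in edit mode
ts.proportional_edit_falloff = 'SMOOTH'    # e.g. 'SPHERE', 'LINEAR', 'SHARP', 'RANDOM'
ts.use_proportional_edit_objects = True    # the object mode equivalent
```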
Rendering (the more common name for “image
synthesis”) is the process of taking all of the data in your scene - the objects, materials
and lighting - and turning all of that into a 2D image. Rendering requires a “Render
Engine,” which is software that tells the computer how to render. Blender
has three built-in render engines (well technically four - I’ll get to that later).
But there are many more that can be set up with Blender. Each one calculates a little bit differently and at different speeds.
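Switching engines from a script is a one-liner. A minimal sketch, using the built-in engine identifiers as of Blender 3.x:

```python
import bpy

# A minimal sketch: choosing a render engine from Python.
bpy.context.scene.render.engine = 'CYCLES'             # ray-traced
# bpy.context.scene.render.engine = 'BLENDER_EEVEE'    # rasterized
# bpy.context.scene.render.engine = 'BLENDER_WORKBENCH'
```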
Ray tracing is a method of calculating light
by tracing a light’s path as it would travel in the real world. Ray tracing calculates what
happens to light when it hits an object. Is it reflected, is it refracted, or
does it get absorbed by the object? Ray tracing uses a lot of computer calculations
and is how the Cycles render engine works. Rendering in Eevee does not use ray tracing
but uses a process called “Rasterization,” which does a lot more guesswork and corner-cutting.
It’s faster but not as accurate. When you start off rendering in Cycles, you have
a grainy image. But Blender progressively reduces this graininess by repeatedly “sampling”
each pixel to determine how it should look. You can control how many attempts will
be made at this process, and each attempt is called a “sample.” Higher sample counts give you less noise but take longer to render.
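In a script, the Cycles sample counts look like this (property names as of Blender 3.x):

```python
import bpy

# A minimal sketch: setting Cycles sample counts.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 256          # samples per pixel for the final render
scene.cycles.preview_samples = 32   # samples for the viewport preview
```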
Fireflies are those annoying bright spots
left in a render either because not enough sampling was done or the light settings were
too much for the render settings being used. By the way, likes on this video -
very much appreciated, thank you! In geometry, a normal is a direction or line that
is perpendicular to something. In 3D software, “normals” represent the directions of parts of a
mesh. In Blender, there are actually three types of normals (face, vertex and split normals).
Normals also have an inside and an outside orientation.
If you want to see the normals of a mesh, go into edit mode and go to the overlays menu. Down at
the bottom, you can toggle the visibility of face normals, split normals and vertex normals. And
you can change the length of the display lines. You can also turn on “face orientation” to see
if you’re looking at the inside or the outside of a face. You can recalculate normals in edit mode by selecting your mesh and pressing Shift+N.
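The script equivalent of Shift+N is the “make normals consistent” operator. A minimal sketch:

```python
import bpy

# A minimal sketch: recalculating normals (the Shift+N operator).
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)  # inside=True points them inward
bpy.ops.object.mode_set(mode='OBJECT')
```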
UVs, or UV mapping, are necessary
when you want to project an image or material onto a 3D object. Images are
two-dimensional, and in order to get them to fit onto a 3D object, you have to do
something known as “UV unwrapping.” Imagine having to print the Coca-Cola
logo onto a cylindrical soda can and having to flatten it out first to do so.
UV unwrapping is like cutting the can open and laying it out flat so you can
get your image on it how you want. The U and the V represent the two axes of the
two-dimensional image, because X, Y and Z are already used on the 3D object and it would be confusing to have two X axes and two Y axes.
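Scripted, a basic unwrap looks like this (mark seams first if you want clean islands):

```python
import bpy

# A minimal sketch: UV unwrapping the active mesh from Python.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')
```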
The pivot point is a point in 3D
space around which all rotation, scaling and mirroring transformations are
centered. You can change the pivot point up here. Pivot points can be the individual
origins of each object you’re transforming, the average between those points, or the 3D cursor, among a few other options.
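From a script, that choice is a single setting. A minimal sketch:

```python
import bpy

# A minimal sketch: changing the pivot point from Python.
bpy.context.scene.tool_settings.transform_pivot_point = 'CURSOR'
# other options: 'MEDIAN_POINT', 'INDIVIDUAL_ORIGINS',
# 'ACTIVE_ELEMENT', 'BOUNDING_BOX_CENTER'
```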
The term “Falloff” can be seen in different
places. Whenever someone refers to “falloff” they’re talking about how an effect dissipates
either over time or distance. For example, when using the wave modifier, the falloff is
how far away the waves will start to die down. With proportional editing, falloff is both
how far away the vertices will be affected and in what way that effect will be diminished.
Most lights in Blender have falloff which is how the light becomes weaker and weaker the farther
you get from it. Sun lamps don’t have falloff. A spline is another name for a curve. Curve objects are controlled with
control points instead of vertices. The more common types of splines are Bézier
curves and NURBS curves. So spline means curve. Shading is the process of altering the color
of an object based on its angle to lights and distance from lights. In simple terms, it’s
creating the look of an object’s surface - also known as its material. You do this generally
within the shader editor using various nodes. Whenever you see “Alpha” in materials or when
rendering, it refers to a technique of mapping transparency and translucency. Certain file types
like PNGs can be rendered in RGB, which means there are red, green and blue channels. Or they
can be RGBA. The added “A” is for Alpha. Here’s a render with a transparent background but no alpha
channel. Here’s the same render but with an alpha channel that makes certain areas transparent.
You will see this in texturing, where you can map different areas of a texture to be transparent. So alpha always refers to transparency.
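To actually get an alpha channel out of a render, two settings matter: a transparent film and an RGBA output format. A minimal sketch:

```python
import bpy

# A minimal sketch: rendering a PNG with an alpha channel.
scene = bpy.context.scene
scene.render.film_transparent = True              # don't render the world background
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'   # 'RGB' would drop the alpha channel
```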
Ambient occlusion, or “AO,” is a shadowing
technique used to make 3D objects look more realistic by simulating soft shadows
where parts of the geometry touch. It’s the shading you see right here
where different parts come together. Baking, in general, is the act of pre-computing
something in order to speed up some other process later. There are different things you can
bake in Blender. You most commonly hear of baking material textures. This means you take
all of the information like color, lighting, roughness, ambient occlusion and more. You
process all of that information one time into a single image texture and it saves
computation time later. It’s especially handy in really big scenes, animations or video
games where things need to be more responsive. You can also bake things like
physics simulations and animations so that they don’t have to be recalculated on every single frame when you render.
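The bake itself is one operator call, though it needs setup in the UI first. A minimal sketch, assuming the active object's material has an Image Texture node selected to receive the bake:

```python
import bpy

# A minimal sketch: baking the combined pass in Cycles.
# Assumes the active object has a material whose selected
# Image Texture node will receive the baked result.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'   # baking is a Cycles feature
bpy.ops.object.bake(type='COMBINED')
```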
“Procedural.” When something is done procedurally, it means it’s done through computer calculations
based on instructions and there’s no manual input. It’s common in materials. A procedural
material is made up entirely of nodes mixed together with different value inputs.
PBR materials, on the other hand - PBR stands for “Physically Based Rendering” - typically use
image textures as inputs. The term “procedural” can also be used for other things. You might hear
someone talk about making “procedural waves” or “procedural hair.” It means it’s all done by entering
values into nodes with no manual work. A render farm is a service that allows you
to upload your 3D file and have it rendered on someone else’s computer network,
usually much, much faster than you could do yourself. There are many render
farms with different pricing structures. There are also render pools that you can join
to have your files rendered on other computers as long as you agree to use your computer
for rendering other people’s files. I tried out a render farm for the first time and made an
entire video about it if you want to check it out. Subscribe. Subscribe is not a Blender
term, but it is what I would love for you to do if you’ve made it this far and
are enjoying the video. Thank you so much! An HDRI or HDR Image is a high-dynamic
range image. They’re in a 360-degree format and can be placed into your world to provide
global and more realistic lighting. Often, they are images of skies but can also be
interior scenes and studio light setups. Probably the best place to get free ones is Polyhaven.com.
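Wired up by script, an HDRI is just an Environment Texture node feeding the world's Background node. A minimal sketch (the file path is a placeholder):

```python
import bpy

# A minimal sketch: lighting the world with an HDRI.
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
env = nodes.new('ShaderNodeTexEnvironment')
env.image = bpy.data.images.load("/path/to/sky.hdr")   # placeholder path
world.node_tree.links.new(env.outputs["Color"],
                          nodes["Background"].inputs["Color"])
```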
Culling. Culling basically means that Blender
ignores something and doesn’t display it. You see it in a few different places. In Eevee and
in solid view, you can turn on backface culling, which means the back sides of faces won’t be displayed.
Camera culling means things outside the camera’s view will be ignored by Blender.
Distance culling means things beyond a certain distance will be ignored.
So culling basically means excluding or ignoring something from either your render or your scene in general.
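As one concrete example, backface culling is a checkbox on the material. A minimal sketch (the material name is a placeholder):

```python
import bpy

# A minimal sketch: enabling backface culling on a material.
mat = bpy.data.materials["MyMaterial"]   # placeholder name
mat.use_backface_culling = True          # back sides of faces are no longer drawn
```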
You’ll see the term “seed” or “seed
value” when something can be randomized. Basically changing the seed value gives you a
different randomization. So, go make something random and if you don’t like how it looks, keep
changing the seed until you get what you want. If you see “clamp” or hear the term “clamping”
it means to limit something to a range. In the render settings under light paths,
clamping is an option to limit the maximum light value from a given sample - or
calculation point - when rendering. You would do this as a last resort
to remove artifacts in a render. But you may see clamping used in other areas, and it basically means “limiting.”
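In Cycles, the two clamp values live under the light paths settings. A minimal sketch:

```python
import bpy

# A minimal sketch: Cycles clamping, a last resort against fireflies.
scene = bpy.context.scene
scene.cycles.sample_clamp_direct = 0.0     # 0.0 means "don't clamp direct light"
scene.cycles.sample_clamp_indirect = 10.0  # cap indirect samples to tame hot pixels
```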
This little symbol you see in various
places stands for “Fake User” and in order to understand a fake user, I
guess I should explain what a real user is. Objects are data users when they have data from
something like a material or node setup assigned to them. But, when a material, for example,
is not being used by any object in the scene, Blender will purge it the next time it closes.
It won’t be saved. To keep this from happening, you can press the “Fake User” button and it’ll
treat the material as if it’s assigned to a real object so it doesn’t get deleted. It’s a strange thing to call it, I guess, but that’s what it is.
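From Python, the fake user is a single flag on the data-block. A minimal sketch (the material name is a placeholder):

```python
import bpy

# A minimal sketch: keeping an unused material from being purged.
mat = bpy.data.materials.get("MyMaterial")   # placeholder name
if mat:
    mat.use_fake_user = True                 # same as clicking the fake user button
```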
Volumetrics relates to volume, which,
if you remember from high school math, is the space within a three-dimensional
object. So, in Blender, it’s how you want the area inside your object to be
shaded. There are a handful of volume shaders that allow you to give the inside of
an object different effects like fog or smoke. Grease Pencil is the name of the 2D side of
Blender. A Grease Pencil object is a different type of object that allows you to draw in 3D
space. The most basic of these are strokes. If you open Blender, you can choose a 2D workspace
and actually draw and animate two-dimensionally. Freestyle is technically another render engine
inside of Blender. It takes a 3D scene and uses object information and depth to draw lines on
selected edges. It’s a really cool way to convert a 3D scene into 2D line art and you can change a
lot of settings to get different stroke effects. Pressure Sensitivity is what this symbol
means. Whenever you see it, the value can be controlled dynamically with a
pressure-sensitive drawing tablet. When you’re using a drawing tablet and have this box checked,
that means the harder you press down on the pen, the more this value will be added. You can use it
for strength, radius, opacity and other things in Blender. If you’re interested in a drawing tablet,
I will link to the one I use in the description. Constraints are a way of controlling or limiting
an object, often with data from another object. You add constraints to objects from the object
constraints tab in the properties panel. For example, you could have one
object’s transformation copy another’s, or you could limit an object’s movement to a path or a curve.
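Here's that first example as a script; “Follower” and “Leader” are placeholder object names:

```python
import bpy

# A minimal sketch: a Copy Location constraint between two objects.
follower = bpy.data.objects["Follower"]              # placeholder names
leader = bpy.data.objects["Leader"]
con = follower.constraints.new(type='COPY_LOCATION')
con.target = leader                                  # follower now tracks leader's location
```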
Modifiers are things that you add
to objects for some sort of effect, but that work non-destructively - meaning they
don’t affect the original data of the object and can be removed or changed later. You add modifiers
from the modifiers tab in the properties panel. There are over 50 modifiers and
each one does something different. The mirror modifier causes an object’s
geometry to be mirrored. The curve modifier allows you to bend objects along a curve. So many things you can do with modifiers.
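Adding them from a script is one call per modifier. A minimal sketch:

```python
import bpy

# A minimal sketch: stacking modifiers on the active object.
obj = bpy.context.active_object
obj.modifiers.new(name="Mirror", type='MIRROR')
subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 2   # non-destructive: change, reorder or remove these any time
```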
Particle Systems are a technique to simulate
certain things that you need lots and lots of in your scene - without manually placing
each one. You can add a particle system to an object in the particle settings tab found in
the properties panel. Common uses include leaves, strands of hair or fur, falling snow,
scattered debris, sparks. You get the picture. Each individual item created in a particle system is known as a particle.
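Under the hood, a particle system is a modifier too. A minimal sketch of a quick fur setup:

```python
import bpy

# A minimal sketch: a hair particle system on the active object.
obj = bpy.context.active_object
mod = obj.modifiers.new(name="Fur", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.type = 'HAIR'
settings.count = 500        # number of particles - strands, in this case
settings.hair_length = 0.2
```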
Instancing is a way of creating a copy
of something but using the data from the original object. You may hear that something
is an “instance” or an “instanced object.” It’s a more efficient way of duplicating something
because it doesn’t really add new data to a scene. An instanced copy basically tells Blender “I am an
object but to see what object I’m supposed to be, refer to this other object.” Instancing occurs
when you make a linked duplicate, when you make an array, when you make a collection instance, a particle system and in a lot of other areas.
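You can see how linked duplicates work in Python, because Object.copy() keeps pointing at the same mesh data. A minimal sketch:

```python
import bpy

# A minimal sketch: a linked duplicate - two objects, one mesh.
src = bpy.context.active_object
inst = src.copy()                           # new object, same mesh data-block
inst.location.x += 2.0
bpy.context.collection.objects.link(inst)   # edit the mesh and both objects change
```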
Parenting. One object can be parented to
another which means it follows the object and is transformed when the parent
object is transformed. The object being parented is known as the “child object,” or a child.
Bones are also often parented to other bones to create a rigged armature.
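In a script, parenting is an assignment, plus one extra line if you want the child to stay put; “Child” and “Parent” are placeholder names:

```python
import bpy

# A minimal sketch: parenting while keeping the child's world position.
child = bpy.data.objects["Child"]     # placeholder names
parent = bpy.data.objects["Parent"]
child.parent = parent
child.matrix_parent_inverse = parent.matrix_world.inverted()
```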
Motion Blur is an effect that occurs in cameras
- both in still images and in video - where fast moving objects appear blurry. By default,
Blender renders everything as a perfectly still scene. But, this doesn’t look realistic when
you have objects that are moving very fast. So, Blender allows you to add motion blur to a scene.
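Enabling it from a script takes two lines. A minimal sketch:

```python
import bpy

# A minimal sketch: turning on motion blur for renders.
scene = bpy.context.scene
scene.render.use_motion_blur = True
scene.render.motion_blur_shutter = 0.5   # fraction of a frame the "shutter" is open
```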
Depth of Field is a very important concept in
photography. Even if you don’t know what it is, you’ve probably noticed the
effect in pictures or movies. Cameras focus on objects at a certain distance
range and objects outside of that range - either closer or farther away - appear blurry. How
blurry they appear can be adjusted in cameras by increments called “F-Stops” and the entire
effect can be simulated in Blender cameras too. Adding Depth of Field, or DOF, to a scene is a great effect.
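On a Blender camera, it's a small block of settings. A minimal sketch:

```python
import bpy

# A minimal sketch: depth of field on the scene camera.
cam = bpy.context.scene.camera.data
cam.dof.use_dof = True
cam.dof.focus_distance = 3.0    # distance to the plane in sharp focus
cam.dof.aperture_fstop = 1.8    # lower f-stop = shallower focus, more blur
```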
Caustics are those light effects you see in real
life when light touches glass or water. In many 3D applications - including Blender - you can include
them in the calculation of your renders. There are actually two types of caustics: reflective
and refractive. They can be turned on or off by going to render properties and expanding the
light paths tab. They can be pretty draining on render computations so turn them off if you don’t
need them to save on render times and memory. IOR stands for “Index of Refraction” and it’s a
property of transparent and translucent materials. When light rays move from one medium to another,
they bend. The IOR relates to the angle at which the light bends and can be set when adding
transparent or translucent elements to a material. Many materials such as water,
ice and various types of glass have known IORs that you can look up and set to match the material you’re trying to create.
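On a Principled BSDF, it's a single input. A minimal sketch, assuming a material named “Glass” with its default Principled BSDF (input names as of Blender 3.x):

```python
import bpy

# A minimal sketch: setting the IOR on a Principled BSDF.
mat = bpy.data.materials["Glass"]                  # placeholder material name
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Transmission"].default_value = 1.0    # make it see-through
bsdf.inputs["IOR"].default_value = 1.45            # window glass; water is about 1.33
```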
MatCap. MatCap is short for “Material
Capture.” It’s a method of faking materials, lighting and reflections across your
entire scene using just a single image. They’re usually used for previews and
can be accessed here in preview mode. You can create or download your own and
you can add them in the preferences. You can actually render these out using the
workbench render engine too. So these are MatCaps. Drivers are math functions that can change
the characteristic of something based on the characteristic of something else. For example, you
could create a driver that causes the rotation of an object to be driven by the movement of
another object. The more the second object moves the more the first object rotates.
You can add mathematical functions so you could say that for every x amount of movement
by one object, the other object will rotate y amount. Math is confusing, but if you hear people talking about drivers, this is what they mean.
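Here's that exact idea as a scripted driver; “Dial” and “Knob” are placeholder object names, and the expression is where the math lives:

```python
import bpy

# A minimal sketch: drive one object's Z rotation from another's X location.
dial = bpy.data.objects["Dial"]                 # placeholder names
fcurve = dial.driver_add("rotation_euler", 2)   # index 2 = the Z axis
var = fcurve.driver.variables.new()
var.name = "x"
var.type = 'TRANSFORMS'
var.targets[0].id = bpy.data.objects["Knob"]
var.targets[0].transform_type = 'LOC_X'
fcurve.driver.expression = "x * 2.0"            # 2 radians per unit of movement
```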
A vertex group is a collection of vertices
from your object. These are used when you want to tell Blender to limit an operation
to only part of your mesh. Maybe to say where you want hair or grass to grow on an object or
what part of an object should be affected when a certain bone is moved. You can create them
by going to the object data properties panel, adding a new vertex group and then assigning vertices to that group.
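Those same steps in Python (the group name and the selection rule are just for illustration):

```python
import bpy

# A minimal sketch: create a vertex group and assign vertices to it.
obj = bpy.context.active_object
group = obj.vertex_groups.new(name="HairRoots")   # placeholder name
upper = [v.index for v in obj.data.vertices if v.co.z > 0.5]
group.add(upper, 1.0, 'REPLACE')                  # weight 1.0 = full influence
```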
Weight Painting is a process of giving
vertices within a vertex group a weight value between zero and one. These weights
can be used for different things later. Like density maps for particle systems or to
tell bones how much to affect parts of a mesh. You can paint these weight values in
weight paint mode. Blue is no weight, red is full weight and the other
colors are somewhere in between. Voxel is the cubic, 3D equivalent
of the square 2D pixel. The word is a combination of “volume” and “pixel.”
Physics simulations use voxels to store smoke and fire data. It’s also become a
fairly popular style of art. Thanks, Minecraft. Subsurface Scattering, or “SSS,” is an effect
that occurs when light passes through an object and then scatters underneath it. The best example
is human skin. The pinkish hue from skin is caused by light penetrating the skin and scattering under
the surface. So it’s called subsurface scattering. You can give materials subsurface scattering in
Blender to mimic this effect for more realism. In animation, keyframes are used to tell Blender
“this is how something should be at this frame” and “this is how something should be
at this other frame.” Blender, I need you to figure out what happens in between. To
animate this cube moving across the screen, we don’t tell Blender where it needs to
be at every single frame. We set a start and end keyframe and it figures out the
rest - a process known as interpolation. Interpolation is the filling in of frames between
two keyframes. For example, if a keyframe is set for this cube here on frame 1 and then over
here on frame 50, Blender will interpolate the movement of the cube over the 50 frames.
In the timeline, graph editor or dope sheet, you can change how Blender interpolates. By
default, Blender uses a “Bézier” interpolation, so the cube starts moving slowly, speeds up
and then slows down as it stops. Pressing “T” in any of the animation editors will
give you different interpolation options. Linear means the speed will remain the same throughout the animation.
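The moving-cube example, keyframed and switched to linear from Python:

```python
import bpy

# A minimal sketch: keyframe a cube's X location, then make it linear.
cube = bpy.context.active_object
cube.location.x = 0.0
cube.keyframe_insert(data_path="location", index=0, frame=1)
cube.location.x = 5.0
cube.keyframe_insert(data_path="location", index=0, frame=50)

# Blender fills in frames 2-49; swap the default Bézier easing for linear:
for fc in cube.animation_data.action.fcurves:
    for kp in fc.keyframe_points:
        kp.interpolation = 'LINEAR'
```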
An “F-Curve” is a curve that holds the
animation values of a specific property. In the graph editor, you can view the f-curves
of properties you have animated. As this cube moves up and then back down, its movement along
the “Z” axis is represented in this f-curve. Nonlinear Animation. Nonlinear animation
is an animation technique that allows you to edit motions as a whole and not just
as individual keyframes. You basically save a series of keyframes as an action
and in the non-linear animation editor (NLA editor for short), you can move those
actions around, change their speed and blend them together. Nonlinear animation is
an advanced and powerful way to animate. Shape Keys are an animation tool that
allows you to deform a mesh object. They may also be referred to as “morph
targets” or “blend shapes” in other software. When you add a shape key, you add a basis
shape for the mesh. Then you can add multiple deformations of that mesh and animate between them. I have a video on shape keys for more.
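A minimal sketch of shape keys in Python; the “Smile” key is a placeholder you'd actually deform in edit mode:

```python
import bpy

# A minimal sketch: add shape keys and animate between them.
obj = bpy.context.active_object
obj.shape_key_add(name="Basis")           # the undeformed reference shape
smile = obj.shape_key_add(name="Smile")   # placeholder; deform it in edit mode
smile.value = 0.0
smile.keyframe_insert(data_path="value", frame=1)
smile.value = 1.0
smile.keyframe_insert(data_path="value", frame=30)
```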
Mathematically, a vector is an object with
both magnitude and direction. You’ll see the term vector in Blender and a vector always
has three values representing the X, Y and Z axes. They’re used to denote the location of something in 3D space or directional information.
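Blender exposes these through its mathutils module. A minimal sketch:

```python
import bpy
from mathutils import Vector

# A minimal sketch: vectors in Blender's mathutils.
v = Vector((1.0, 2.0, 3.0))
print(v.length)          # magnitude
print(v.normalized())    # same direction, magnitude of 1
bpy.context.active_object.location += Vector((0.0, 0.0, 1.0))  # move up one unit
```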
Kinematics is a mechanical term referring to
the movement of objects and in Blender it’s used to describe how parented objects, or more
commonly bones, move. Forward Kinematics is when a parented bone controls its child bone. Inverse
Kinematics is when child bones actually control how the parent bones move. Here’s an example.
This leg is set up with forward kinematics. Moving the thigh moves all the bones below
it. If I move the foot, it does not affect the upper portions of the leg.
However, this rig is set up with inverse kinematics. With this, I move the
foot and the rest of the leg follows. The control is inverted. Both forward kinematics and inverse kinematics have their uses.
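Setting up IK from a script means adding an IK constraint to a pose bone; the bone names here are placeholders:

```python
import bpy

# A minimal sketch: an IK constraint on a leg bone.
arm = bpy.context.active_object     # an armature in pose mode
shin = arm.pose.bones["shin"]       # placeholder bone names
ik = shin.constraints.new(type='IK')
ik.target = arm
ik.subtarget = "foot_target"        # the control bone the foot follows
ik.chain_count = 2                  # solve two bones up: shin and thigh
```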
A cryptomatte is a feature in Blender
that allows you to isolate objects and materials in the compositor. You have to
have it turned on in the render passes section. But after you render, it can mask
objects or materials in your image, allowing you to make color adjustments on individual objects or by material.
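Turning the passes on from a script (property names as of recent Blender releases):

```python
import bpy

# A minimal sketch: enabling cryptomatte passes on the active view layer.
vl = bpy.context.view_layer
vl.use_pass_cryptomatte_object = True
vl.use_pass_cryptomatte_material = True   # read them with the compositor's Cryptomatte node
```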
Tessellation is the tiling of a plane
using one or more geometric shapes. It basically means replacing
a face with another object. The Tissue add-on allows you to tessellate. You can tell
the add-on to replace every face of one object with an instance
of another object. That’s tessellation. “Multiple Importance” is still confusing even
to me, but when using a ray-trace render engine like Cycles, it’s an option to have rays
sent directly toward the emissive materials instead of randomly finding them. In short,
sometimes using this method can reduce noise, but it can also reduce lighting from
other areas of your scene, and the only recommendation I can find is to try it on
a render and see if it’s better or not.