
How to Create a 3D Prop for a Video Game


three stylized game props: treasure chests with vines as part of the structure
an example of a 3D game prop

There was a time when 3D game development was out of reach for most independent game developers. Graphics programming involved a lot of complicated matrix math and a deep understanding of how graphics cards function.


Nowadays, with widely accessible 3D software and several robust game engines available, anyone can make a 3D game with some effort and practice.


Instead of learning how to mathematically transform points in three-dimensional space, or how to write an OpenGL shader from scratch, all you need to learn is how to create a 3D model and load it into an engine.


What is a 3D model?

At its heart, a 3D model is a file that contains a list of coordinate points. A coordinate point is called a vertex, the plural being "vertices".


Each set of three vertices describes a “triangle”: one flat face of the object. In most 3D software, two triangles are often displayed together as one “quad”, but under the hood they are still saved as triangles.


Building up all these points into a bunch of faces, you eventually have a 3D representation of a shape. There are many file types that encode this 3D information in different ways, but some of the most common 3D file types are OBJ, STL, and FBX.
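To make this less abstract, here is a short sketch in Python (my own illustration; nothing in this workflow actually requires code) that writes a one-triangle model in the OBJ format, which stores each vertex on a `v` line and each face on an `f` line of 1-based vertex indices:

```python
# A 3D model at its simplest: a list of vertices (coordinate points)
# and a list of faces, where each face references vertices by index.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
]
faces = [(1, 2, 3)]  # one triangle (OBJ indices start at 1)

def write_obj(vertices, faces):
    """Return the text of a minimal OBJ file."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += ["f " + " ".join(str(i) for i in face) for face in faces]
    return "\n".join(lines) + "\n"

print(write_obj(vertices, faces))
```

If you save that output to a `.obj` file, Blender and most other 3D tools can open it directly; the fancier formats like FBX add materials, animation, and more, but the vertices-plus-faces core is the same.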


a lumpy 3D shape in an editor with individual points and their faces visible
a basic 3D model with vertices and faces

Once you have a 3D model you can use it in a variety of ways:

  • You can import it into a game engine like Unity or Unreal and use it in a game

  • You can use a 3D rendering software like Blender or Maya to create beautiful scenes

  • You can use it in an animation, like a short film or marketing video

  • Some 3D models can even be used for 3D printing, if they are a shape that can physically be 3D printed

Which software should you use?

If you want to make a 3D model for yourself, you have a lot of options. Some 3D applications are free, and some are... very much not free.


Maya and ZBrush are industry favorites, used by a lot of large media companies, and their price tags are steep. As subscription models are becoming more the norm, the cost is slightly more affordable than it used to be, but I haven’t used Maya since I was a student with a free student copy.


SketchUp is an option that is more affordable, but can still be a significant investment for someone who is just learning, and I'm not familiar enough with it to know whether it's a good option for game assets.


Blender, on the other hand, is completely free and open-source for both personal and commercial use. It does everything the “professional” tools do, though the interface has a bit of a learning curve, and as an open-source project its features can lag slightly behind the ‘cutting edge’ of 3D.


I have been using Blender off-and-on for close to 15 years, and it just keeps getting better. I would heartily recommend Blender to anyone whether they are a beginner or not. It just gets the job done.


That being said, if you are vying for a corporate job in the 3D industry, it may be valuable to learn the more expensive tools so you’re a competitive candidate when applying for work. Make the decision that is best for you.


The modeling process

Once you have your 3D software installed and running, you can start creating. Every 3D software will have a viewport that shows your 3D model, and it will have ways to interact with the points in that model. Blender starts every new file with the beloved “default cube” in the center of a grid so you never have to stare at the dreaded "blank page".


You can use certain actions to select and move the coordinate points, add or remove parts of the model, or use software tools to generate or modify the model shape.


a screenshot of the 3D app "Blender" with an in-progress model of a treasure chest in the viewport
a screencap of 3D modeling in Blender

Think of this space as your canvas, or your lump of clay. You can use this workspace to create any kind of 3D model you want: a game prop, a character, scenery.


One critical consideration in 3D modeling is the size and complexity of the model, often measured in “tris” or “polys”. This refers to the number of faces, or the number of triangles that make up the model.


When a model is being displayed (rendered) by a game engine or other 3D software, more polys means more processing power. In games, too many polys can cripple performance or even grind the game to a halt.
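If you're curious what your poly count actually is, the OBJ format from earlier makes it easy to check. This little Python sketch (a rough illustration, not part of any engine) counts the triangles in an OBJ file's text, splitting quads and n-gons the way a renderer would:

```python
def count_tris(obj_text):
    """Count triangles in the text of an OBJ file.

    Faces with more than 3 vertices (quads, n-gons) are counted as the
    triangles they would split into: an n-sided face becomes n - 2 tris.
    """
    tris = 0
    for line in obj_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "f":
            n_vertices = len(parts) - 1
            tris += n_vertices - 2
    return tris

cube = "\n".join(["f 1 2 3 4"] * 6)  # a cube modeled as six quads
print(count_tris(cube))  # each quad splits into 2 tris, so 12
```

Most 3D software also shows these statistics in the interface (Blender has a "Statistics" overlay in the viewport), so you rarely need to count by hand.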


There are ways to create a very high-poly mesh and later build a lower-poly mesh from that original model (search “retopology” if you want to learn more about this), but I like to take an easier route.


Rather than creating realistic game objects, I embrace a low-poly style and build models that are very performant from the beginning. If you compare this process to other art media, this is similar to an artist opting for a simple style over a photo-realistic style.


Low-poly is not appropriate for every project (many AAA games use a detailed, realistic style, and it is expected in certain franchises), but as an indie game developer I have a lot of control over art style, especially when I am working on a solo project. It can be hard to find a balance between simplicity and flair, but using this process dramatically reduces the amount of work I have to do on each model.


The texturing process

Once you’ve created the shape of your 3D model by moving, adding, and tweaking coordinate points, you are not quite done.


Well, you might be. At this stage, your model may be ready for 3D printing, but for anything rendered digitally you still need to add color, a texture, a surface to your model.


In most software, if you don’t include a texture you will end up with a plain grey shape that responds poorly to digital lighting in the scene. So, how do we add color?


If we were to add color to our models by associating a color value with each coordinate point (each vertex), we would need to have a staggering number of points to create anything with detail.


a visual example of how many vertices a simple cube would need to have for per-vertex colors with detail
this is too many vertices for such a simple shape, and there's a better way!

Instead, we can use images. Graphics cards are really good at using images. In fact, they’re much better at it than rendering a million polys in the shape of a photo-realistic human face. The problem is that an image is two-dimensional and our model is three-dimensional.


This is where UVs come in. UVs are two-dimensional coordinates attached to each vertex (a three-dimensional coordinate) on our 3D model. They explain which point on an image texture that vertex should get its color from.


For example, remember that each face of our object consists of 3 vertices that make up a triangle. If we say “point 1 is at (0, 0) pixels on the image, point 2 is at (30, 50), and point 3 is at (0, 20)”, we are essentially drawing a triangle on the image from point 1 to point 2 to point 3, and that slice of the image is placed on that face.
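One note: the pixel coordinates above are a simplification. UVs are usually stored normalized, from 0 to 1, so the same UV layout works at any texture resolution. A quick Python sketch of the conversion (illustrative only; engines differ on details like whether V points up or down):

```python
def uv_to_pixel(uv, width, height):
    """Convert a normalized UV coordinate (0..1) to integer pixel coords.

    Real engines differ on conventions (V often points up, so it may
    need flipping), but the core idea is just scaling by the image size.
    """
    u, v = uv
    return (round(u * (width - 1)), round(v * (height - 1)))

# The triangle from the example above, as normalized UVs on a 64x64 texture:
triangle_uvs = [(0.0, 0.0), (30 / 63, 50 / 63), (0.0, 20 / 63)]
pixels = [uv_to_pixel(uv, 64, 64) for uv in triangle_uvs]
print(pixels)  # [(0, 0), (30, 50), (0, 20)]
```

This is why you can swap a 512x512 texture for a 2048x2048 version without touching the UVs: the normalized coordinates simply scale to the new resolution.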


It sounds complicated, but 3D software has tools to make this easy to do and it is well worth the learning curve. It allows us to have a much higher resolution of detail on the surface of the model without causing a huge increase in processing power needed to render it.


3D modeling software has a visual interface for setting up the UV coordinates, so you still don’t have to do any math here. In Blender there is a window that shows each vertex in a position on the 2D image that you can put side-by-side with your 3D viewport.


a screenshot of Blender with the UV editor screen up
on the right is a 3D creature model I created and on the left is the image texture with the UV coordinates shown

You can select parts of the mesh in either panel, move them around, and see how the model looks in real time. You can also use software tools to automatically "unwrap" the model, which will place each face evenly across the image texture.


Many 3D artists will have an image texture that looks more representative of what they're creating (e.g. a face would be visible on a character texture like this), but I use a slightly strange process called "Lazy UV unwrapping" that instead uses these blocks of color. Shoutout to Joyce (a.k.a. "Minion's Art") for sharing public tutorials on how to use this process. I also describe it in my Skillshare course linked below.


When editing the UVs for a model, you want to line up each piece of the model with the appropriate part of the image. If I were texturing a character's face, I could select the parts of the model that make up the nose and drag them in the UV editor until they line up with the nose in the image. I could grab the points on the model that make up the eyes and drag them over the eyes. In the end, I have connected each point to its appropriate place on the image and my model has been textured.


There are also workflows where you can paint directly onto the model in an editor. This still uses UVs behind the scenes, but instead of creating the image first and lining up your UVs later, you are doing the process in reverse: lining up the UVs and then editing the image.


What else can we do in 3D software?

If you’re a more advanced 3D artist, you can also use image textures for other things, using the same set of UV coordinates. To understand this, remember that images are just files that store a grid of information.


In most cases, we use that grid to save a picture. Each pixel stores four numbers: a red value, a green value, a blue value, and an alpha value (also known as the transparency value). These four values are usually the components of a color, but they don't have to be. We can store any number in here that is useful to us.
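As a tiny illustration (plain Python, no image library), here is an "image" as a grid of (R, G, B, A) tuples, with a non-color value, say a made-up roughness amount, stashed in the red channel:

```python
# An "image" is just a grid of pixels, each holding four numbers
# (R, G, B, A) in the 0-255 range. Nothing forces those numbers to
# actually be a color; we can store any per-pixel data in them.
width, height = 4, 4
image = [[(0, 0, 0, 255) for _ in range(width)] for _ in range(height)]

def store_value(image, x, y, value):
    """Encode a 0.0-1.0 value (here, an imaginary "roughness" amount)
    into the red channel of one pixel."""
    _, g, b, a = image[y][x]
    image[y][x] = (round(value * 255), g, b, a)

def read_value(image, x, y):
    """Decode the 0.0-1.0 value back out of the red channel."""
    return image[y][x][0] / 255

store_value(image, 2, 1, 0.5)
print(read_value(image, 2, 1))  # roughly 0.5 (stored as 128/255)
```

Game engines use exactly this trick: roughness maps, metallic maps, and ambient occlusion maps are all "images" whose channels hold surface data rather than color.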


You can use image files to store values for a “normal” map or “bump” map. In this case, instead of a color we are storing three numbers that represent a vector, or a direction. This direction is then used when rendering to determine how light reflects off the 3D object. Since the texture has a higher resolution than the 3D model, we can simulate "bumps" on a face even when its vertices form a flat triangle.
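The common encoding is simple: each vector component, which ranges from -1 to 1, is remapped into the 0-255 range of a color channel. A Python sketch of the round trip:

```python
def encode_normal(nx, ny, nz):
    """Map a unit direction (components in -1..1) into 0..255 color
    channels, the common normal-map encoding."""
    to_byte = lambda c: round((c * 0.5 + 0.5) * 255)
    return (to_byte(nx), to_byte(ny), to_byte(nz))

def decode_normal(r, g, b):
    """Recover the approximate direction from the stored colors."""
    from_byte = lambda c: c / 255 * 2 - 1
    return (from_byte(r), from_byte(g), from_byte(b))

# A direction pointing straight out of the surface, (0, 0, 1), encodes
# to the familiar purple-blue of a "flat" region on a normal map:
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```

That mapping is also why normal maps look mostly lavender: on a smooth surface nearly every stored direction points straight out, and (0, 0, 1) encodes to that purple-blue color.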


You can use textures in a similar way to mark parts of the model with emission, where the surface emits light in a certain color. This is just an overview of a more advanced topic, but understanding the connection between your 3D model and UV coordinates unlocks a lot of possibilities.


You can start learning right now!

Once you have your model’s shape (the 3D points) and its surface (the UV coordinates plus an image texture), you are done! From here, you can export your model from the 3D software and import it into the game engine of your choice. For Unity, importing can be as easy as dragging the file into the Assets folder in a project.


If you’re ready to get hands-on with 3D art, I do have a Skillshare course that covers my entire game prop workflow from start to finish with detailed demonstrations, and I only use free software in the course. Anyone with a computer can follow along.


You can get a free trial for Skillshare with my link here. Please note, this link is a referral link and I may receive a portion of your purchase if you decide to get a premium subscription. That being said, you can watch my course or any other Skillshare course during the free trial with no strings attached.


It’s exciting to see how accessible game development and 3D art have become in the past decade or so. I really believe that anyone can learn how to make beautiful 3D models, and I’m happy to help. Let me know if you have any questions or requests for additional writings!

