3D software that is available and how it is used within the industry
Some of the software that is available within the games industry is:
Maya
3DS MAX
Softimage
This software is used for polygon modelling: manipulating individual polygons and small groups of them. These programmes are good for making base models that are then added to in other programmes.
Mudbox
ZBrush
These are sculpting programmes. They use a huge number of polygons compared to the ones above, and are usually used when you want a lot of detail in a character or an object.
Blender
Sculptris (this uses dynamic tessellation rather than a fixed mesh)
--
In practice the software would typically be used like this: after the designs have been finalised, the artists would start in the programme
Maya. They use this software to build the basic model of whatever it is
they are creating, be it a character or an object like a car.
They would then import that model into the programme ZBrush
to get a higher density of detail into the character or object.
This uses an enormous number of polygons, because higher quality
is wanted within the project that is being worked on.
They would then bake this detail into a normal or bump map (these both serve much the same purpose), and also into ambient occlusion maps, which is where the software tries to approximate the way that light would bounce off an object in real life. Both help the finished model look more realistic within its environment.
The baked textures will then be brought back into Maya and applied to the model.
They do this because the low-polygon model uses far fewer polygons, so the rendering time will be faster. In effect they make a high quality model, flatten its detail into 2D maps, and wrap those around the lower-polygon model in Maya, so that it looks to have all the detail while in actuality using only the small number of polygons that baking allows.
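As a rough sketch of what a baked normal map actually stores: each surface direction is written out as a pixel colour. The exact channel conventions vary between tools, but a common mapping (an assumption here, not a description of how Maya or ZBrush specifically do it) looks like this:

```python
# Sketch: how a baked normal map stores surface directions as RGB colours.
# Each unit normal component in [-1, 1] is remapped to a byte in [0, 255].

def encode_normal(nx, ny, nz):
    """Map a unit normal vector to an RGB pixel (a common 0-255 encoding)."""
    to_byte = lambda c: round((c + 1.0) / 2.0 * 255)
    return (to_byte(nx), to_byte(ny), to_byte(nz))

def decode_normal(r, g, b):
    """Recover the approximate normal from a normal-map pixel."""
    return tuple(c / 255 * 2.0 - 1.0 for c in (r, g, b))

# A normal pointing straight out of the surface (0, 0, 1) becomes the
# familiar light-blue colour of tangent-space normal maps:
print(encode_normal(0.0, 0.0, 1.0))   # (128, 128, 255)
```

This is why the renderer can shade a low-polygon surface as if the high-polygon detail were still there: the directions are looked up from the texture instead of the geometry.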
Geometric theory
A polygon is usually a triangle. This is because three points always lie on a single flat plane, so a triangle can never be bent out of shape. If more points are used, the points in space can become distorted, and what might have started out as a square can end up looking more like an hourglass shape. However, it is hard to create objects with triangles; it is much easier to do so with four-sided polygons (quads). These are later converted into triangles, and this is easily done when building with quads.
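That conversion step is simple in practice: each quad is split along one of its diagonals into two triangles. A minimal sketch (the function name is my own, not from any particular package):

```python
def quad_to_triangles(quad):
    """Split a four-vertex face into two triangles along one diagonal.

    `quad` is a list of four vertex indices in winding order; the result
    preserves that winding so both triangles face the same way.
    """
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

# A square face with vertex indices 0..3 becomes two triangles:
print(quad_to_triangles([0, 1, 2, 3]))  # [(0, 1, 2), (0, 2, 3)]
```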
A triangle can be represented by three points in space along
the x, y, z axes. 3D modelling is basically manipulating these points into what
it is that you want to create. Each of these points is called a vertex. A vertex
contains the x, y, z coordinates of where it is within the 3D
space. It contains UV coordinates as well, used when doing texture
mapping: this is basically the same point but in 2D space rather than 3D,
because UVs are a 2D representation of a 3D model. It will also hold the
information of the normal.
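Put together, a vertex record might look like this (a hypothetical layout for illustration; real engines pack this data in their own formats):

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    """One vertex, holding the three pieces of data described above."""
    position: tuple  # (x, y, z) location in 3D space
    uv: tuple        # (u, v) location on the 2D texture
    normal: tuple    # (nx, ny, nz) surface direction at this point

v = Vertex(position=(1.0, 2.0, 0.5), uv=(0.25, 0.75), normal=(0.0, 0.0, 1.0))
print(v.position, v.uv, v.normal)
```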
A normal is an object such as a line or a vector that is perpendicular to a given object. This can be looked at in both 2D and 3D. In the 3D case, the surface normal (or, as I have been calling it, simply the normal) to a surface at a point P is a vector that is perpendicular to the tangent plane of the surface at P.
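For a flat triangle, the normal can be computed directly: take the two edge vectors from one corner and cross them. A small sketch with hand-rolled vector maths:

```python
def subtract(p, q):
    """Component-wise difference of two 3D points: the edge vector q -> p."""
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    """Cross product: a vector perpendicular to both u and v."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def normalize(v):
    """Scale a vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def triangle_normal(p0, p1, p2):
    """Normal of the plane through three points: cross the two edge vectors."""
    return normalize(cross(subtract(p1, p0), subtract(p2, p0)))

# A triangle lying flat in the XY plane has a normal pointing up the Z axis:
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```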
Mesh construction techniques
Box/Subdivision Modelling
Box modelling is a polygonal modelling technique in which
the creator starts with a geometric primitive, such as a cube, sphere,
cylinder, etc. They then model this shape, refining it until the final design
is achieved.
People who box model usually work in stages, starting with
a low resolution model. They refine the shape, then subdivide the
mesh to smooth out hard edges and add different bits of detail. This
method is repeated until the model contains enough detail to convey
the original design.
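The splitting step of that subdivision can be sketched as follows. This only splits faces at their midpoints; real subdivision schemes such as Catmull-Clark also move the new points to smooth the surface, which is omitted here:

```python
def midpoint(p, q):
    """The point halfway between p and q (any dimension)."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def subdivide_quad(a, b, c, d):
    """Split one quad face (corners given in order) into four smaller quads."""
    ab, bc, cd, da = midpoint(a, b), midpoint(b, c), midpoint(c, d), midpoint(d, a)
    centre = midpoint(ab, cd)  # middle of the face
    return [(a, ab, centre, da),
            (ab, b, bc, centre),
            (centre, bc, c, cd),
            (da, centre, cd, d)]

# One unit square becomes four quarter-size squares:
quads = subdivide_quad((0, 0), (1, 0), (1, 1), (0, 1))
print(len(quads))  # 4
```

Each pass multiplies the face count by four, which is why a few subdivision levels are enough to turn a blocky base mesh into a dense one.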
This is one of the most common forms of polygon modelling
and is often used in combination with edge modelling techniques.
Extrusion Modelling
This is a method where the user creates a 2D shape that
traces the outline of an object, possibly from a drawing. The user would then
create a second image of the subject from a different angle and extrude
the 2D shape into 3D, following the shape's outline. This is commonly used for
creating faces and heads. When doing this, an artist will generally model half
a head and then mirror it; this ensures that the model is
symmetrical.
Edge/Contour Modelling
This is also a polygon technique; however, it is
fundamentally different from its box modelling counterpart. Instead of
starting with a primitive shape (cube, sphere, etc.) and refining it,
a model like this is essentially built piece by piece. This is
done by placing loops of polygon faces along prominent contours, and then
filling any gaps between them.
This is a good way to create a face, because it is
harder to create one through box modelling alone. The precision of contour
modelling can be invaluable: instead of trying to shape a part of the face out of
a cube, which can be confusing, it is much easier to build an outline of, say,
a mouth or an eye and then model the rest from that.
Digital Sculpting
This is a way of modelling that helps free modellers from
the painstaking constraints of topology and edge flow, this is a type of
modelling that allows them to intuitively create 3D models from scratch, in a
similar fashion to sculpting digital clay.
This way, meshes are created organically, using a graphics
tablet to mould and shape the model, almost like a sculptor would use tools on clay.
This has taken creating creatures and characters to a whole new level. It
makes the process faster, and also allows designers to work with
high-resolution meshes that may contain millions of polygons, producing a
previously unthinkable amount of surface detail in the models that are created.
Procedural Modelling
This is modelling that is generated algorithmically, rather
than being created by hand: scenes or objects are created based on
user-definable rules or parameters.
This means plugging information about a scene into a
computer programme. Imagine there was a large number of trees that you
would like to create; using such an algorithm you would be able to create any
number of randomized trees based on a premise. This would make a forest or a
field or something along those lines.
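A minimal sketch of that forest idea, with made-up rules and parameter ranges purely for illustration: each "tree" is just a position and a couple of randomized dimensions, drawn from ranges that keep them plausible.

```python
import random

def make_forest(count, area=100.0, seed=42):
    """Generate `count` randomized trees from a few simple rules:
    a random position within a square area, and a height and trunk
    width drawn from fixed (arbitrary) ranges."""
    rng = random.Random(seed)  # fixed seed: the same forest every run
    trees = []
    for _ in range(count):
        trees.append({
            "x": rng.uniform(0.0, area),
            "y": rng.uniform(0.0, area),
            "height": rng.uniform(4.0, 12.0),
            "trunk_width": rng.uniform(0.2, 0.6),
        })
    return trees

forest = make_forest(500)
print(len(forest))  # 500
```

The point is that the artist only writes the rules once; the algorithm then produces as many varied trees as the scene needs.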
Displaying 3D objects
Direct3D (part of DirectX, made by Microsoft):
“The
Direct3D API imposes some constraints on the processing model in order to
achieve optimal rendering performance. Direct3D 11 introduces the Compute
Shader as a way to access this computational capability without so many
constraints. It opens the door to operations on more general data-structures
than just arrays, and to new classes of algorithms as well. Key features
include: communication of data between threads, and a rich set of primitives
for random access and streaming I/O operations. These features enable faster
and simpler implementations of techniques already in use, such as imaging and
post-processing effects, and also open up new techniques that become feasible
on Direct3D 11–class hardware.” http://www.microsoft.com/en-gb/download/details.aspx?id=23803
Direct3D is used to render three-dimensional graphics in
applications where performance is important; an example of this would be games.
Direct3D allows applications to run full screen instead of embedded in a
window, though they can still run in a window if programmed for that
feature. It uses hardware acceleration if it is available on the graphics card;
this allows hardware acceleration of the entire 3D rendering pipeline.
OpenGL –
OpenGL (Open Graphics Library) is a cross-language,
multi-platform application programming interface (API) for rendering 2D and
3D computer graphics, typically used to achieve hardware-accelerated rendering.
OpenGL is managed by a non-profit consortium (the Khronos Group).
There are also a number of other game engines that allow for
the displaying of 3D objects.
3D Rad
Ardor3D
Axiom Engine
Cafu Engine
Crystal Space
Grit
There are also many more game engines that could be mentioned, but there are too many to go into detail about in this report.
Level of Detail (LoD)
This involves decreasing the complexity of a 3D model's
representation as it moves away from the viewer, or according to metrics
such as an object's importance or position. LoD techniques increase the efficiency
of rendering by decreasing the workload at other stages, like the vertex
transformations. When the level of detail is decreased it usually goes
unnoticed, because of the object's appearance in the distance or when it is moving fast.
Recently, LoD techniques also included shader management to
keep control of the pixel complexity.
This is an image of a rabbit model going down in polygon count, losing some of its detail.
This is an image showing the same object in the distance and in the foreground; as you can see, you do not really need all the detail in the distant creature that you need in the one in the foreground.
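The selection logic behind LoD can be sketched very simply: pick which version of the mesh to draw from the camera distance. The threshold values here are arbitrary examples, not from any particular engine.

```python
def pick_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Choose a level of detail from the camera distance.

    LOD 0 is the full-detail mesh; each threshold crossed drops to a
    simpler version of the model.
    """
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # beyond the last threshold: the simplest mesh

print(pick_lod(5.0))    # 0 - close up, full detail
print(pick_lod(50.0))   # 2
print(pick_lod(200.0))  # 3 - far away, lowest-polygon version
```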
Pre-Rendered vs. Real-time Rendering
Pre-rendering is the process in which footage is not
rendered in real time. Instead, the video is a recording of footage that has
previously been rendered. Pre-rendered material may also be
outsourced by the developer to an outside production company.
Real-time rendering is one of the interactive areas of
computer graphics; it means that images are created fast enough that a
player or viewer can interact with the environment through a computer. Video
games are the most common place that real-time rendering is found. The rate at
which images are shown is measured in frames per second: the frame rate is the
frequency at which an imaging device produces unique consecutive
images.
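Frame rate translates directly into a time budget per frame, which is the number renderers actually work against. A tiny illustration:

```python
def frame_budget_ms(fps):
    """How many milliseconds each frame may take at a given frame rate."""
    return 1000.0 / fps

print(frame_budget_ms(30))  # ~33.3 ms per frame
print(frame_budget_ms(60))  # ~16.7 ms per frame
```

At 60 frames per second, everything (geometry, lighting, textures) has to fit in under 17 milliseconds, which is why real-time rendering has to cut corners that pre-rendering does not.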
An advantage of pre-rendering over real-time rendering is the ability to use graphic
models that are more complex and more intensive. This is due to the possibility
of using multiple computers over an extended period of time to render the end
result.
A disadvantage of pre-rendering compared to real-time, in the case of gaming, is the
lower level of interactivity with the player. While
pre-rendered scenes are playing in a game, a player can usually only watch, or
interact in a very minimal way.
Another disadvantage of pre-rendered assets is that changes
cannot be made during gameplay. A game that has pre-rendered backgrounds is
forced to use fixed camera angles, and a game with scenes like this
generally cannot reflect any changes the game characters might have undergone
previously in the game.
Lighting
Real-time rendering
To give a model a more realistic appearance, one or more
light sources are usually used in the scene when manipulating the model.
However, this stage cannot be reached until the 3D scene has been
transformed into view space.
Pre-rendered
This type of rendering in a game has the problem that the state of
the lighting cannot easily be changed in a convincing manner.
Pre-rendered lighting is a technique that is losing popularity. Instead, processor-intensive ray tracing algorithms can be used during a game's production to generate light textures, which are then simply applied on top of the usual hand-drawn textures.
Shaders
These are sets of instructions that are programmed into a computer
to work out how light bounces off a surface.
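The simplest example of such an instruction set is Lambert diffuse shading, where brightness comes from how directly a surface faces the light. Real shaders run on the GPU in a shading language; this is just the same maths sketched in Python:

```python
def lambert(normal, light_dir):
    """Diffuse brightness of a surface: the dot product of its unit
    normal with the unit direction towards the light, clamped at zero
    so surfaces facing away from the light are dark."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

# Light shining straight down onto an upward-facing surface: full brightness.
print(lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
# Light coming from the side, grazing the surface: no diffuse light at all.
print(lambert((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))  # 0.0
```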
Real-time rendering
This puts a lot more strain on the shaders, because there are
many more surfaces and angles that light will bounce off. This is especially
true in games, because a user can look around most environments from nearly
any position, and this changes the angles at which light hits each surface.
Pre-rendered
Shaders are put under less stress in pre-rendered scenes, because the
camera angles are fixed, so there are only so many surfaces that light
will bounce off.
Textures
Simply put, pre-rendered textures are higher resolution and
have a lot more detail in them compared to real-time rendered
textures. This is because real-time rendering does not have the memory or power
to run high resolution textures across whole assets and environments every frame.
Pre-rendered scenes can allow for the size because they are more of a video than
actual gameplay, so the extra detail and resolution can be put to use to make the
pre-rendered scenes better.
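The memory difference is easy to put numbers on. Assuming an uncompressed four-bytes-per-pixel (RGBA) format, a texture's footprint is just width times height times bytes per pixel:

```python
def texture_bytes(width, height, bytes_per_pixel=4):
    """Uncompressed memory footprint of one texture (RGBA, 4 bytes/pixel assumed)."""
    return width * height * bytes_per_pixel

# A 4096x4096 pre-rendered-quality texture vs a 512x512 in-game one:
print(texture_bytes(4096, 4096) // (1024 * 1024))  # 64 MB
print(texture_bytes(512, 512) // 1024)             # 1024 KB
```

Real games use compressed texture formats that shrink these numbers, but the ratio between the two resolutions stays the same, which is the point being made above.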
Polygon count
Real-time rendering
In real-time rendering the polygon count is low, because each
scene has to be rendered quickly and the frame rate has
to be high enough that the environment and the characters move
seamlessly through a scene.
The use of low-poly meshes is mostly confined to computer
games and other software in which an outside user must manipulate 3D objects in
real time, because processing power is limited on personal computers and games
consoles and these have a need for higher frame rates.
Pre-rendered
This uses a massive number of polygons compared to real-time
rendering. Because everything is rendered in advance, the finished
scene will run smoothly regardless of its complexity.
Films and still images can have a higher polygon count because
rendering does not need to be done in real time, which would require higher
frame rates. This type of rendering is usually not constrained by the
processing power of a single computer, because a large network of computers is
usually used, sometimes known as a render farm. The results are more detailed and
higher quality because of the number of polygons used.
This is the end of my 3D Report.