Assimp
v4.1 (December 2018)
The assimp library returns the imported data in a collection of structures. aiScene forms the root of the data; from here you gain access to all the nodes, meshes, materials, animations and textures that were read from the imported file. The aiScene is returned from a successful call to Assimp::Importer::ReadFile(), aiImportFile() or aiImportFileEx() - see the Usage page for further information on how to use the library.
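A minimal C++ sketch of such a call (the post-processing flags here are only examples; see the Usage page for the full set of options):

#include <cstdio>
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

const aiScene* LoadScene(Assimp::Importer& importer, const char* path)
{
    // The importer owns the returned scene, so it must outlive any use of the data.
    const aiScene* scene = importer.ReadFile(path,
        aiProcess_Triangulate | aiProcess_JoinIdenticalVertices);
    if (!scene) {
        // ReadFile() returns NULL on failure; GetErrorString() tells you why.
        std::printf("Import failed: %s\n", importer.GetErrorString());
    }
    return scene;
}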
By default, all 3D data is provided in a right-handed coordinate system, such as the one OpenGL uses. In this coordinate system, +X points to the right, +Y points upwards and +Z points out of the screen towards the viewer. Several modeling packages such as 3D Studio Max use this coordinate system as well (or a rotated variant of it). By contrast, some other environments use left-handed coordinate systems, a prominent example being DirectX. If you need the imported data to be in a left-handed coordinate system, supply the aiProcess_MakeLeftHanded flag to the ReadFile() function call.
The output face winding is counter clockwise. Use aiProcess_FlipWindingOrder to get CW data.
Output polygons can be literally anything: they may be concave, self-intersecting or non-planar, although the built-in triangulation (the aiProcess_Triangulate post-processing step) doesn't handle the latter two.
The output UV coordinate system has its origin in the lower-left corner:
Use the aiProcess_FlipUVs flag to get UV coordinates with the upper-left corner as the origin.
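For example, a DirectX-style setup (left-handed space, clockwise winding, upper-left UV origin) could combine these flags as follows, assuming an Assimp::Importer named importer as in the sketch above; aiProcess_ConvertToLeftHanded is a shortcut for the latter three flags:

const aiScene* scene = importer.ReadFile("model.fbx",
    aiProcess_Triangulate      |   // split polygons into triangles
    aiProcess_MakeLeftHanded   |   // +Z now points into the screen
    aiProcess_FlipWindingOrder |   // faces become clockwise
    aiProcess_FlipUVs);            // UV origin moves to the upper-left corner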
A typical 4x4 matrix including a translational part looks like this:

X1  Y1  Z1  T1
X2  Y2  Z2  T2
X3  Y3  Z3  T3
0   0   0   1

with (X1, X2, X3) being the local X base vector, (Y1, Y2, Y3) being the local Y base vector, (Z1, Z2, Z3) being the local Z base vector and (T1, T2, T3) being the offset of the local origin (the translational part). All matrices in the library use row-major storage order. That means that the matrix elements are stored row-by-row, i.e. they end up like this in memory: [X1, Y1, Z1, T1, X2, Y2, Z2, T2, X3, Y3, Z3, T3, 0, 0, 0, 1].
Note that this is neither the OpenGL format nor the DirectX format, because both of them specify the matrix layout such that the translational part occupies three consecutive addresses in memory (so those matrices end with [..., T1, T2, T3, 1]), whereas the translation in an Assimp matrix is found at offsets 3, 7 and 11 (spread across the matrix). You can transpose an Assimp matrix to end up with the format that OpenGL and DirectX mandate. To be very precise: the transposition has nothing to do with a left-handed or right-handed coordinate system but 'converts' between row-major and column-major storage order.
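As an illustration, transposing a node's matrix before passing it to an API that expects column-major storage might look like this; glUniformMatrix4fv is just an example consumer:

aiMatrix4x4 m = node->mTransformation; // row-major: translation at offsets 3, 7, 11
m.Transpose();                         // memory layout is now column-major: translation at 12, 13, 14
// The 16 floats can now be handed to a column-major API, e.g.
// glUniformMatrix4fv(location, 1, GL_FALSE, &m.a1);
// Alternatively, keep the matrix untransposed and pass GL_TRUE for the transpose flag.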
11.24.09: We changed the orientation of our quaternions to the most common convention to avoid confusion. However, if you're a previous user of Assimp and you update the library to revisions beyond SVNREV 502, you have to adapt your animation loading code to match the new quaternion orientation.
Nodes are little named entities in the scene that have a place and orientation relative to their parents. Starting from the scene's root node all nodes can have 0 to x child nodes, thus forming a hierarchy. They form the base on which the scene is built: a node can refer to 0..x meshes, can be referred to by a bone of a mesh or can be animated by a key sequence of an animation. DirectX calls them "frames", others call them "objects"; we call them aiNode.
A node can potentially refer to single or multiple meshes. The meshes are not stored inside the node, but instead in an array of aiMesh inside the aiScene. A node only refers to them by their array index. This also means that multiple nodes can refer to the same mesh, which provides a simple form of instancing. A mesh referred to in this way lives in the node's local coordinate system. If you want the mesh's orientation in global space, you have to concatenate the transformations from the referring node and all of its parents.
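A small sketch of that concatenation, walking from the node up to the scene root:

#include <assimp/scene.h>

aiMatrix4x4 GetGlobalTransform(const aiNode* node)
{
    aiMatrix4x4 transform = node->mTransformation;
    // Pre-multiply by each parent's transformation so the result maps from the
    // node's local space all the way into the scene's global space.
    for (const aiNode* parent = node->mParent; parent != nullptr; parent = parent->mParent)
        transform = parent->mTransformation * transform;
    return transform;
}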
Most file formats don't really support complex scenes, but only a single model. More complex formats such as .3ds, .x or .collada, however, may contain an arbitrarily complex hierarchy of nodes and meshes. For those I would suggest a recursive filter function such as the following pseudocode:
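A possible C++-flavoured sketch of such a function; SceneObject, addChild, setTransform and CopyMeshes stand in for your engine's own types and helpers:

void CopyNodesWithMeshes(const aiNode* node, SceneObject* targetParent,
                         const aiMatrix4x4& accTransform)
{
    SceneObject* parent;
    aiMatrix4x4 transform;   // a default-constructed aiMatrix4x4 is the identity

    // if the node has meshes, create a new scene object for it
    if (node->mNumMeshes > 0) {
        SceneObject* newObject = new SceneObject();
        targetParent->addChild(newObject);
        // copy the meshes referenced by this node (your own helper)
        CopyMeshes(node, newObject);
        // place the object using the accumulated transformation
        newObject->setTransform(accTransform * node->mTransformation);

        // the new object becomes the parent of all child nodes,
        // and the transformation chain restarts at identity
        parent = newObject;
    } else {
        // if no meshes, skip the node, but keep its transformation
        parent = targetParent;
        transform = accTransform * node->mTransformation;
    }

    // continue with all child nodes
    for (unsigned int i = 0; i < node->mNumChildren; ++i)
        CopyNodesWithMeshes(node->mChildren[i], parent, transform);
}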
This function copies a node into the scene graph if it has meshes: a new scene object is created for the imported node and the node's meshes are copied over. If the node has no meshes, no object is created; potential child objects will be added to the old targetParent, but their transformations will still be correct with respect to global space. This function also works well for filtering out bone nodes - nodes that form the bone hierarchy for another mesh/node but don't have any meshes themselves.
All meshes of an imported scene are stored in an array of aiMesh* inside the aiScene. Nodes refer to them by their index in the array and also provide the coordinate system for them. One mesh uses only a single material everywhere - if parts of the model use a different material, this part is moved to a separate mesh at the same node. The mesh refers to its material in the same way as the node refers to its meshes: materials are stored in an array inside aiScene, the mesh stores only an index into this array.
An aiMesh is defined by a series of data channels. The presence of these data channels is defined by the contents of the imported file: by default there are only those data channels present in the mesh that were also found in the file. The only channels guaranteed to be always present are aiMesh::mVertices and aiMesh::mFaces. You can test for the presence of other data by testing the pointers against NULL or use the helper functions provided by aiMesh. You may also specify several post processing flags at Importer::ReadFile() to let assimp calculate or recalculate additional data channels for you.
At the moment, a single aiMesh may contain a set of triangles and polygons. A single vertex always has a position. In addition it may have one normal, one tangent and bitangent, zero to AI_MAX_NUMBER_OF_TEXTURECOORDS (4 at the moment) texture coordinates and zero to AI_MAX_NUMBER_OF_COLOR_SETS (4) vertex colors. In addition a mesh may or may not have a set of bones described by an array of aiBone structures. How to interpret the bone information is described later on.
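For instance, copying the guaranteed channels and probing the optional ones before touching them might look like this:

#include <assimp/scene.h>

void ReadMesh(const aiMesh* mesh)
{
    for (unsigned int v = 0; v < mesh->mNumVertices; ++v) {
        aiVector3D position = mesh->mVertices[v];                 // always present
        aiVector3D normal   = mesh->HasNormals()                  // optional channel
                            ? mesh->mNormals[v] : aiVector3D(0.f, 0.f, 1.f);
        aiVector3D uv       = mesh->HasTextureCoords(0)           // first UV channel, if any
                            ? mesh->mTextureCoords[0][v] : aiVector3D();
        // ... convert position/normal/uv into your own vertex format
    }

    for (unsigned int f = 0; f < mesh->mNumFaces; ++f) {
        const aiFace& face = mesh->mFaces[f];                     // always present
        // with aiProcess_Triangulate, mNumIndices is always 3
        for (unsigned int i = 0; i < face.mNumIndices; ++i) {
            unsigned int index = face.mIndices[i];
            // ... append index to your index buffer
        }
    }
}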
See the Material System Page.
A mesh may have a set of bones in the form of aiBone objects. Bones are a means to deform a mesh according to the movement of a skeleton. Each bone has a name and a set of vertices on which it has influence. Its offset matrix declares the transformation needed to transform from mesh space to the local space of this bone.
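A sketch of collecting per-vertex influences from the bones of a mesh; the VertexWeight struct is just an illustration of your own skinning data:

#include <vector>
#include <assimp/scene.h>

struct VertexWeight { unsigned int boneIndex; float weight; };

void CollectBoneWeights(const aiMesh* mesh,
                        std::vector<std::vector<VertexWeight>>& weightsPerVertex)
{
    weightsPerVertex.clear();
    weightsPerVertex.resize(mesh->mNumVertices);

    for (unsigned int b = 0; b < mesh->mNumBones; ++b) {
        const aiBone* bone = mesh->mBones[b];
        // bone->mName identifies the matching aiNode in the hierarchy;
        // bone->mOffsetMatrix transforms from mesh space into this bone's local space.
        for (unsigned int w = 0; w < bone->mNumWeights; ++w) {
            const aiVertexWeight& vw = bone->mWeights[w];
            weightsPerVertex[vw.mVertexId].push_back({ b, vw.mWeight });
        }
    }
}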
Using the bone's name you can find the corresponding node in the node hierarchy. This node, in relation to the other bones' nodes, defines the skeleton of the mesh. Unfortunately there might also be nodes which are not used by a bone in the mesh, but still affect the pose of the skeleton because they have child nodes which are bones. So when creating the skeleton hierarchy for a mesh I suggest the following method, sketched in code further below:
a) Create a map or a similar container to store which nodes are necessary for the skeleton. Pre-initialise it for all nodes with a "no".
b) For each bone in the mesh:
b1) Find the corresponding node in the scene's hierarchy by comparing their names.
b2) Mark this node as "yes" in the necessityMap.
b3) Mark all of its parents the same way until you either 1) reach the mesh's node or 2) reach the parent of the mesh's node.
c) Recursively iterate over the node hierarchy
c1) If the node is marked as necessary, copy it into the skeleton and check its children
c2) If the node is marked as not necessary, skip it and do not iterate over its children.
Reasons: you need all the parent nodes to keep the transformation chain intact. For most file formats and modelling packages the node hierarchy of the skeleton is either a child or a sibling of the mesh node, but this is by no means a requirement, so you shouldn't rely on it. The node closest to the root node is your skeleton root; from there you start copying the hierarchy. You can skip every branch that does not contain a bone of the mesh - that's why the algorithm skips the whole branch if the node is marked as "not necessary".
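A possible sketch of steps a) and b); the necessity map is keyed by node name, and the recursive copy of step c) would then simply skip every node still marked false:

#include <map>
#include <string>
#include <assimp/scene.h>

// a) pre-initialise the map: no node is necessary yet
void InitNecessityMap(const aiNode* node, std::map<std::string, bool>& necessity)
{
    necessity[node->mName.C_Str()] = false;
    for (unsigned int i = 0; i < node->mNumChildren; ++i)
        InitNecessityMap(node->mChildren[i], necessity);
}

// b) for each bone, mark its node and all parents up to the mesh node (or its parent)
void MarkBoneNodes(const aiScene* scene, const aiMesh* mesh, const aiNode* meshNode,
                   std::map<std::string, bool>& necessity)
{
    for (unsigned int b = 0; b < mesh->mNumBones; ++b) {
        // b1) find the node corresponding to this bone by comparing names
        const aiNode* node = scene->mRootNode->FindNode(mesh->mBones[b]->mName);
        // b2) + b3) mark it and its parents until the mesh node or its parent is reached
        while (node && node != meshNode && node != meshNode->mParent) {
            necessity[node->mName.C_Str()] = true;
            node = node->mParent;
        }
    }
}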
You should now have a mesh in your engine with a skeleton that is a subset of the imported hierarchy.
An imported scene may contain zero to x aiAnimation entries. An animation in this context is a set of keyframe sequences where each sequence describes the orientation of a single node in the hierarchy over a limited time span. Animations of this kind are usually used to animate the skeleton of a skinned mesh, but there are other uses as well.
An aiAnimation has a duration. The duration as well as all time stamps are given in ticks. To get the correct timing, all time stamps thus have to be divided by aiAnimation::mTicksPerSecond. Beware, though, that certain combinations of file format and exporter don't always store this information in the exported file. In this case, mTicksPerSecond is set to 0 to indicate the lack of knowledge.
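A common guard for this case; the fallback of 25 ticks per second is an arbitrary assumption, not something Assimp defines:

#include <cmath>
#include <assimp/scene.h>

double AnimationTimeInTicks(const aiAnimation* anim, double timeInSeconds)
{
    // fall back to an assumed rate if the exporter did not write one
    const double ticksPerSecond = anim->mTicksPerSecond != 0.0 ? anim->mTicksPerSecond : 25.0;
    // wrap around the duration for looping playback
    return std::fmod(timeInSeconds * ticksPerSecond, anim->mDuration);
}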
The aiAnimation consists of a series of aiNodeAnim entries. Each bone animation affects a single node in the node hierarchy only; the name specifies which node is affected. For this node the structure stores three separate key sequences: a vector key sequence for the position, a quaternion key sequence for the rotation and another vector key sequence for the scaling. All 3D data is local to the coordinate space of the node's parent, that is, in the same space as the node's transformation matrix. There might be cases where animation tracks refer to a non-existent node by name, but this should not be the case in your every-day data.
To apply such an animation you need to identify the animation tracks that refer to actual bones in your mesh. Then for every track:
a) Find the keys that lie right before the current animation time.
b) Optional: interpolate between these and the following keys.
c) Combine the calculated position, rotation and scaling into a transformation matrix.
d) Set the affected node's transformation to the calculated matrix.
If you need hints on how to convert to or from quaternions, have a look at the Matrix&Quaternion FAQ. I suggest using logarithmic interpolation for the scaling keys if you happen to need them - usually you don't need them at all.
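A sketch of sampling one aiNodeAnim at a time given in ticks; scaling keys are interpolated linearly here rather than logarithmically to keep the example short:

#include <assimp/anim.h>
#include <assimp/matrix4x4.h>

aiMatrix4x4 SampleChannel(const aiNodeAnim* c, double t)
{
    // a) find the last key at or before 't' in each of the three tracks
    auto findKey = [t](const auto* keys, unsigned int count) {
        unsigned int i = 0;
        while (i + 1 < count && keys[i + 1].mTime <= t) ++i;
        return i;
    };
    const unsigned int p = findKey(c->mPositionKeys, c->mNumPositionKeys);
    const unsigned int r = findKey(c->mRotationKeys, c->mNumRotationKeys);
    const unsigned int s = findKey(c->mScalingKeys, c->mNumScalingKeys);

    // b) interpolate towards the following key, if there is one
    aiVector3D pos = c->mPositionKeys[p].mValue;
    if (p + 1 < c->mNumPositionKeys) {
        const aiVectorKey &k0 = c->mPositionKeys[p], &k1 = c->mPositionKeys[p + 1];
        const float f = float((t - k0.mTime) / (k1.mTime - k0.mTime));
        pos = k0.mValue + (k1.mValue - k0.mValue) * f;            // linear
    }
    aiQuaternion rot = c->mRotationKeys[r].mValue;
    if (r + 1 < c->mNumRotationKeys) {
        const aiQuatKey &k0 = c->mRotationKeys[r], &k1 = c->mRotationKeys[r + 1];
        const float f = float((t - k0.mTime) / (k1.mTime - k0.mTime));
        aiQuaternion::Interpolate(rot, k0.mValue, k1.mValue, f);  // spherical
        rot.Normalize();
    }
    aiVector3D scale = c->mScalingKeys[s].mValue;
    if (s + 1 < c->mNumScalingKeys) {
        const aiVectorKey &k0 = c->mScalingKeys[s], &k1 = c->mScalingKeys[s + 1];
        const float f = float((t - k0.mTime) / (k1.mTime - k0.mTime));
        scale = k0.mValue + (k1.mValue - k0.mValue) * f;          // linear, not logarithmic
    }

    // c) combine scaling, rotation and translation into a single matrix;
    // d) assign the result to the affected node's transformation.
    return aiMatrix4x4(scale, rot, pos);
}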
Normally, textures used by assets are stored in separate files; however, some file formats embed their textures directly into the model file. Such textures are loaded into an aiTexture structure.
In previous versions, the path returned by a query for AI_MATKEY_TEXTURE(textureType, index) would be *<index>, where <index> is the index of the texture in aiScene::mTextures. Now this call returns a file path for embedded textures in FBX files. To test whether it is an embedded texture, use aiScene::GetEmbeddedTexture. If the returned pointer is not null, the texture is embedded and can be loaded from the data structure; if it is null, search for a separate file. Other file types still use the old behaviour.
If you rely on the old behaviour, you can use Assimp::Importer::SetPropertyBool with the key AI_CONFIG_IMPORT_FBX_EMBEDDED_TEXTURES_LEGACY_NAMING to force the old behaviour.
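A sketch of resolving a diffuse texture with the new behaviour; material and scene are the aiMaterial and aiScene in question, and LoadTextureFromMemory / LoadTextureFromFile stand in for your own loading code:

aiString path;
if (material->GetTexture(aiTextureType_DIFFUSE, 0, &path) == AI_SUCCESS) {
    if (const aiTexture* tex = scene->GetEmbeddedTexture(path.C_Str())) {
        // embedded: if mHeight == 0, pcData holds a compressed image of mWidth bytes
        // (format hinted by achFormatHint); otherwise it is raw mWidth x mHeight texel data
        LoadTextureFromMemory(tex);
    } else {
        // not embedded: interpret the path as a file next to the model
        LoadTextureFromFile(path.C_Str());
    }
}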
There are two cases: