yarcunham Posted January 6

I have managed to reverse the 3D files fairly well. I can get the vertices, normals, faces and (multiple) UVs out of the files, but there is still some mystery data in there. The files are in little-endian order. This is a screenshot of the hex dump of a very minimal 3D model. It's a quad: 4 vertices, 2 faces, 4 UV coordinates. No normals, no skinning data.

The first 2 things I've surrounded here are some kind of asset metadata, a header and a footer. The only field whose meaning I know is the first 4 bytes, which is the size of the payload (320 in the screenshot, highlighted with a peach background). The first 120 bytes of the mesh file I have simply ignored; that has worked so far.

In this surrounded area, the first field, with the light blue background, is the size of the vertex data. In this case it is 48 bytes, or 0x30. The field's offset is 136 (0x88) from the start of the file including the meta header, 120 (0x78) from the start of the payload. The second field, with the green background, is the start of the face data, expressed as an offset from the start of the payload. Its value is 304, or 0x130. If you seek from the start of the file, add the header's size, 16 (0x10), to the offset. The third field, in darker blue, is the number of "face entries", 6 in this case. It's a number divisible by 3 because every group of 3 entries defines 1 face: 6/3 = 2, so 2 faces. The fourth field, in red, is the "vertex stride", 12 (0xC), which tells you how much space a single vertex takes in the file. The vertex data size above is evenly divisible by the vertex stride: 48 bytes / 12 bytes = 4, so 4 vertices. The next 18 16-bit values, starting from file offset 234, 0xEA (payload offset 218, 0xDA), tell what data is present in each vertex; 0xff means that the field is not present in the data.
The fields are, starting with the blue field whose value is 0x0193: position data type, then (red) normal data type. The next 8 fields are unknown but present in some data; I assume they are some kind of skinning data, but I haven't been able to figure them out. Then come (green, value 0x0191) the uv0 data type and (brown, light blue, green) the uv1, uv2 and uv3 data types. The last 5 fields have always been 0xff in the files I've seen.

These are the values I've observed in those fields, and their probable meanings. The sizes are accurate, meaning that if you add up the sizes of the vertex data present, it should add up to exactly the "vertex stride" defined above.

0xff: not present
0x0021: 2D vector with float values (x/y, u/v), size: 8 bytes
0x0022: 3D vector with float values (x/y/z), size: 12 bytes
0x0023: 4D vector with float values (x/y/z/w), size: 16 bytes
0x0083: (unsure) 4D vector with u8 values (x/y/z/w), size: 4 bytes
0x0103: (unsure) 2D vector with u16 values (x/y, u/v), size: 4 bytes
0x0183: (unsure) 3D vector with s8 values and 1 byte of padding (x/y/z), size: 4 bytes
0x0191: 2D vector with s16 values (x/y, u/v), size: 4 bytes
0x0192: 3D vector with s16 values (x/y/z), size: 6 bytes
0x0193: 4D vector with s16 values (x/y/z/w), size: 8 bytes

To convert any of the s16 vectors to float, divide their components by 32767f. With the 8-bit values, divide by 127f.

So, looking at the data table, we can see that the vertex data in this case is: positions using the 0x0193 4D vector (the W component is unused), and uv0 with the 0x0191 2D vector. Total size: 8 + 4 bytes = 12, which is equal to the vertex stride defined earlier in the file.

The next section has the vertex data; I have separately highlighted the first vertex. The purple, brown and pink fields are the x, y and z positions of the first vertex. The following 0 is unused/padding. The position of the first vertex (after dividing all the values by 32767) is 0.34864, 0, 0.87246.
After the position (and padding) come the U (blue) and V (orange) coordinates of the vertex: 0,0. The other 3 vertices follow the same pattern. The last section in the payload of this file is the face definitions. Nothing surprising here: 3 index values that refer to the vertices defined earlier, 2 times because there are 2 faces: 2->1->0 and 3->0->2. The last part of the whole file is the footer of the meta file container. I don't know what any of these values mean, but so far that hasn't been a problem.
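The walkthrough above can be reproduced with a short standalone sketch. The byte values here are reconstructed from the numbers quoted in the post (stride 12, type codes 0x0193 and 0x0191, face indices 2,1,0,3,0,2), not copied from the actual hex dump.

```python
import struct

# sizes of the vertex data types described above
TYPE_SIZES = {0xffff: 0, 0x0021: 8, 0x0022: 12, 0x0023: 16, 0x0083: 4,
              0x0103: 4, 0x0183: 4, 0x0191: 4, 0x0192: 6, 0x0193: 8}

def vertex_stride(field_codes):
    """Sum the per-field sizes; should equal the stride stored in the header."""
    return sum(TYPE_SIZES[code] for code in field_codes)

def read_faces(index_buffer, num_face_entries):
    """Decode u16 triangle indices, 3 entries per face."""
    indices = struct.unpack_from('<{0}H'.format(num_face_entries), index_buffer)
    return [indices[i:i + 3] for i in range(0, num_face_entries, 3)]

# the quad in the post: SHORT4 positions + SHORT2 uv0, all 16 other fields absent
fields = [0x0193] + [0xffff] * 9 + [0x0191] + [0xffff] * 7
assert vertex_stride(fields) == 12   # matches the stride field in the header

# face data from the post: 2->1->0 and 3->0->2
face_bytes = struct.pack('<6H', 2, 1, 0, 3, 0, 2)
print(read_faces(face_bytes, 6))     # [(2, 1, 0), (3, 0, 2)]
```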
bobo Posted January 7 (edited)

"Yes~ Although you have described this file in great detail, I think perhaps you should send a sample file so that people can observe it better."

Edited January 7 by bobo
yarcunham Posted January 7 Author

Two flaws I have noticed in my process: all the UV maps are flipped on the Y-axis and all the 3D models are flipped on the X-axis. I noticed the UV map flipping almost immediately, because the textures didn't line up, but I only noticed the 3D-model X-axis issue because I looked at some screenshots from the game and saw that the environments are flipped. So I suppose the Starlite engine uses left-handed coordinates: the positive X-axis goes "left" in 3D space. I'm not sure if it's Z-up or Y-up, because I'm outputting model files that I then import into Blender to check my work, and that conversion process can add its own changes. I suppose I could try doing the importing directly in Blender using Python. Anyway, left-handed coordinates also explain the UV map Y-axis issue, where 0,0 would be the top-left corner, X increases to the right and Y increases down.
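The fix-ups implied above can be sketched with plain Python (no Blender dependency): mirroring one axis flips handedness, which also requires reversing the triangle winding order, and a top-left UV origin maps to Blender's bottom-left by flipping V. The choice of X as the mirrored axis is an assumption here.

```python
def convert_position(p):
    """Mirror the X axis to change handedness (assumed axis)."""
    x, y, z = p
    return (-x, y, z)

def convert_triangle(tri):
    """Mirroring an axis turns the mesh inside out, so reverse the winding."""
    a, b, c = tri
    return (a, c, b)

def convert_uv(uv):
    """Top-left origin with V growing down -> bottom-left origin, V growing up."""
    u, v = uv
    return (u, 1.0 - v)

print(convert_position((1.0, 2.0, 3.0)))  # (-1.0, 2.0, 3.0)
print(convert_triangle((2, 1, 0)))        # (2, 0, 1)
print(convert_uv((0.25, 0.0)))            # (0.25, 1.0)
```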
yarcunham Posted January 7 Author (edited)

# see improved script in post below

This script makes a very rudimentary import from the file format to Blender. It reads normals and UVs, but does not set them on the mesh. It also reads and stores all the unknown data. Maybe it can be saved as attributes on the vertices and inspected that way. The "yourFileHere.dat" is one of the files you have extracted from the assets.bundle file.

assets.bundle has a simple format: u32 decompressed_size, u32 compressed_size, 64-bit hash value, [compressed_size bytes of potentially zstd-compressed data]. If the uncompressed and compressed sizes match, then that chunk is uncompressed. If the uncompressed size is 65536 bytes, then concatenate this chunk with the following chunks until you find one that is not 65536 bytes.

Edited January 8 by yarcunham: removed outdated script
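The assets.bundle layout described above can be sketched like this. The zstd step is left injectable so the chunking logic stands on its own; the u32 + u32 + u64 header layout follows the description in the post.

```python
import struct

CHUNK_HEADER = struct.Struct('<IIQ')  # decompressed_size, compressed_size, 64-bit hash

def read_bundle(data: bytes, decompress=None):
    """Split an assets.bundle into decompressed asset blobs.

    `decompress` is called as decompress(payload, decompressed_size) for
    chunks whose sizes differ (zstd-compressed in the real files).
    """
    assets = []
    pending = bytearray()  # 65536-byte chunks are joined until a shorter one ends the asset
    offset = 0
    while offset < len(data):
        dec_size, comp_size, _hash = CHUNK_HEADER.unpack_from(data, offset)
        offset += CHUNK_HEADER.size
        payload = data[offset:offset + comp_size]
        offset += comp_size
        if dec_size == comp_size:
            chunk = payload  # sizes match -> stored uncompressed
        else:
            if decompress is None:
                raise RuntimeError('compressed chunk but no decompress callback given')
            chunk = decompress(payload, dec_size)
        pending += chunk
        if dec_size != 65536:  # a short chunk terminates the current asset
            assets.append(bytes(pending))
            pending = bytearray()
    if pending:  # file ended on a 65536-byte chunk
        assets.append(bytes(pending))
    return assets
```

With real bundles you would pass something like `lambda payload, size: zstandard.ZstdDecompressor().decompress(payload, max_output_size=size)`; the `zstandard` package name and its API are my assumption, not from the post.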
yarcunham Posted January 8 Author (edited)

# updated again, below

This is an improved version of the import script. It fixes the up axis and handedness to match Blender's conventions. It also applies custom normals and UV maps. This is as far as I have gotten in figuring out the data format. I suppose I could try adding the extra vertex data as attributes to the vertices in Blender to try to figure out what's the deal with it.

Edited January 8 by yarcunham: updated the script again
yarcunham Posted January 8 Author

import bmesh
import os
import struct
from enum import Enum

import mathutils
import bpy

WRAPPER_HEADER_LENGTH = 16
WRAPPER_FOOTER_LENGTH = 28
WRAPPER_LENGTH = WRAPPER_HEADER_LENGTH + WRAPPER_FOOTER_LENGTH
MESH_HEADER_LENGTH = 256
WRAPPER_AND_MESH_HEADER_LENGTH = WRAPPER_LENGTH + MESH_HEADER_LENGTH
SHORT_DIVISOR = 32767.0


class VertexData(Enum):
    NOT_PRESENT = 0xffff
    FLOAT2 = 0x0021
    FLOAT3 = 0x0022
    FLOAT4 = 0x0023
    UBYTE4 = 0x0083
    USHORT2 = 0x0103
    SBYTE3_PAD = 0x0183
    SHORT2 = 0x0191
    SHORT3 = 0x0192
    SHORT4 = 0x0193


size_of_vertex_data = {
    VertexData.NOT_PRESENT: 0,
    VertexData.FLOAT2: 8,
    VertexData.FLOAT3: 12,
    VertexData.FLOAT4: 16,
    VertexData.UBYTE4: 4,
    VertexData.USHORT2: 4,
    VertexData.SBYTE3_PAD: 4,  # was missing; would raise a KeyError for meshes using 0x0183
    VertexData.SHORT2: 4,
    VertexData.SHORT3: 6,
    VertexData.SHORT4: 8,
}

vertex_data_to_attribute_type = {
    VertexData.NOT_PRESENT: None,
    VertexData.FLOAT2: 'FLOAT2',
    VertexData.FLOAT3: 'FLOAT_VECTOR',
    VertexData.FLOAT4: 'QUATERNION',
    VertexData.UBYTE4: 'QUATERNION',
    VertexData.USHORT2: 'FLOAT2',
    VertexData.SBYTE3_PAD: 'FLOAT_VECTOR',  # was missing; added for completeness
    VertexData.SHORT2: 'FLOAT2',
    VertexData.SHORT3: 'FLOAT_VECTOR',
    VertexData.SHORT4: 'QUATERNION',
}

# order of the 18 vertex-format fields starting at payload offset 218
VERTEX_FIELD_NAMES = ['position', 'normal',
                      'unkn0', 'unkn1', 'unkn2', 'unkn3', 'unkn4', 'unkn5', 'unkn6', 'unkn7',
                      'uv0', 'uv1', 'uv2', 'uv3',
                      'unkn8', 'unkn9', 'unkn10', 'unkn11']


def read_mesh_header(data: bytes):
    header_data = {
        'vertex_data_size': struct.unpack_from('<I', data, 120)[0],
        'face_start_offset': struct.unpack_from('<I', data, 144)[0],
        'num_face_entries': struct.unpack_from('<I', data, 152)[0],
        'vertex_stride': struct.unpack_from('<I', data, 180)[0],
        'vertex_data': {name: VertexData(struct.unpack_from('<H', data, 218 + 2 * i)[0])
                        for i, name in enumerate(VERTEX_FIELD_NAMES)},
    }

    failed_sanity_checks = []
    if header_data['vertex_data_size'] > len(data):
        failed_sanity_checks.append('vertex data size ({0}) too big'.format(header_data['vertex_data_size']))
    if header_data['face_start_offset'] > len(data):
        failed_sanity_checks.append(
            'face start offset ({0}) is past the end of file'.format(header_data['face_start_offset']))
    if header_data['num_face_entries'] % 3 != 0:
        failed_sanity_checks.append(
            'number of face entries ({0}) is not divisible by 3'.format(header_data['num_face_entries']))
    if header_data['face_start_offset'] + header_data['num_face_entries'] * 2 > len(data):
        failed_sanity_checks.append('number of faces ({0}) would run past the end of file at offset ({1})'.format(
            header_data['num_face_entries'], header_data['face_start_offset']))
    if header_data['vertex_stride'] > header_data['vertex_data_size']:
        failed_sanity_checks.append('vertex stride ({0}) is bigger than vertex data size ({1})'.format(
            header_data['vertex_stride'], header_data['vertex_data_size']))
    if header_data['vertex_data_size'] % header_data['vertex_stride'] != 0:
        failed_sanity_checks.append('vertex data size ({0}) is not evenly divisible by vertex stride ({1})'.format(
            header_data['vertex_data_size'], header_data['vertex_stride']))
    if header_data['vertex_stride'] > 288:  # 288 = 18 fields, each at most a 4-element float vector: 18 * 4 * 4 bytes
        failed_sanity_checks.append(
            'vertex stride ({0}) is suspiciously big (>288)'.format(header_data['vertex_stride']))
    calculated_vertex_data_size = sum(size_of_vertex_data[vtype] for vtype in header_data['vertex_data'].values())
    if calculated_vertex_data_size != header_data['vertex_stride']:
        failed_sanity_checks.append('vertex stride ({0}) does not match calculated vertex size ({1})'.format(
            header_data['vertex_stride'], calculated_vertex_data_size))
    if failed_sanity_checks:
        raise RuntimeError(', '.join(failed_sanity_checks))
    return header_data


def read_vertex_data(data: bytes, offset: int, vtype: VertexData) -> (mathutils.Vector, int):
    if vtype == VertexData.NOT_PRESENT:
        return None, offset
    if vtype == VertexData.FLOAT2:
        return mathutils.Vector(struct.unpack_from('<ff', data, offset)), offset + 8
    if vtype == VertexData.FLOAT3:
        return mathutils.Vector(struct.unpack_from('<fff', data, offset)), offset + 12
    if vtype == VertexData.FLOAT4:
        return mathutils.Vector(struct.unpack_from('<ffff', data, offset)), offset + 16
    if vtype == VertexData.UBYTE4:
        elems = struct.unpack_from('<BBBB', data, offset)
        return mathutils.Vector((elems[0] / 255.0, elems[1] / 255.0,
                                 elems[2] / 255.0, elems[3] / 255.0)), offset + 4
    if vtype == VertexData.USHORT2:
        elems = struct.unpack_from('<HH', data, offset)
        return mathutils.Vector((elems[0] / SHORT_DIVISOR, elems[1] / SHORT_DIVISOR)), offset + 4
    if vtype == VertexData.SBYTE3_PAD:
        elems = struct.unpack_from('<bbbb', data, offset)
        # 4th element (padding) discarded on purpose
        return mathutils.Vector((elems[0] / 127.0, elems[1] / 127.0, elems[2] / 127.0)), offset + 4
    if vtype == VertexData.SHORT2:
        elems = struct.unpack_from('<hh', data, offset)
        return mathutils.Vector((elems[0] / SHORT_DIVISOR, elems[1] / SHORT_DIVISOR)), offset + 4
    if vtype == VertexData.SHORT3:
        elems = struct.unpack_from('<hhh', data, offset)
        return mathutils.Vector((elems[0] / SHORT_DIVISOR, elems[1] / SHORT_DIVISOR,
                                 elems[2] / SHORT_DIVISOR)), offset + 6
    if vtype == VertexData.SHORT4:
        elems = struct.unpack_from('<hhhh', data, offset)
        return mathutils.Vector((elems[0] / SHORT_DIVISOR, elems[1] / SHORT_DIVISOR,
                                 elems[2] / SHORT_DIVISOR, elems[3] / SHORT_DIVISOR)), offset + 8
    raise RuntimeError("Unknown data type " + str(vtype))


def read_mesh_data(header: dict, data: bytes) -> dict:
    mesh_data = {'positions': [], 'normals': [], 'uv0': [], 'uv1': [], 'uv2': [], 'uv3': [],
                 'unknown': [[] for _ in range(12)], 'faces': []}
    current_offset = MESH_HEADER_LENGTH  # vertex data starts right after the mesh header
    vertex_data_end = current_offset + header['vertex_data_size']
    while current_offset < vertex_data_end:
        position, current_offset = read_vertex_data(data, current_offset, header['vertex_data']['position'])
        if position is not None:
            position.resize_3d()
            position = position.xzy  # convert left-handed y-up to right-handed z-up
            position.y = -position.y
            # position.x = -position.x
            mesh_data['positions'].append(position)
        normal, current_offset = read_vertex_data(data, current_offset, header['vertex_data']['normal'])
        if normal is not None:
            normal.resize_3d()
            normal = normal.xzy  # convert left-handed y-up to right-handed z-up
            normal.y = -normal.y
            # normal.x = -normal.x
            normal.normalize()
            mesh_data['normals'].append(normal)
        for i in range(8):
            unknown, current_offset = read_vertex_data(data, current_offset, header['vertex_data']['unkn' + str(i)])
            if unknown is not None:
                mesh_data['unknown'][i].append(unknown)
        for i in range(4):
            uv, current_offset = read_vertex_data(data, current_offset, header['vertex_data']['uv' + str(i)])
            if uv is not None:
                uv.resize_2d()
                mesh_data['uv' + str(i)].append(uv)
        for i in range(8, 12):
            unknown, current_offset = read_vertex_data(data, current_offset, header['vertex_data']['unkn' + str(i)])
            if unknown is not None:
                mesh_data['unknown'][i].append(unknown)
    current_offset = header['face_start_offset']
    face_data_end = header['face_start_offset'] + header['num_face_entries'] * 2
    while current_offset < face_data_end:
        face = struct.unpack_from('<HHH', data, current_offset)
        mesh_data['faces'].append((face[0], face[2], face[1]))  # as part of the handedness change, flip the winding
        current_offset += 6
    return mesh_data


def process_single(path: str):
    if os.path.getsize(path) < WRAPPER_AND_MESH_HEADER_LENGTH:
        raise RuntimeError('file {0} is too small'.format(path))
    with open(path, 'rb') as in_file:
        input_data = in_file.read()
    meta_header = struct.unpack_from('<IIII', input_data)
    if meta_header[0] > len(input_data):
        raise RuntimeError('size in file "{0}" header ({1}) is too big'.format(path, meta_header[0]))
    mesh_header = read_mesh_header(input_data[16:])
    mesh_data = read_mesh_data(mesh_header, input_data[16:])
    # print(str(mesh_header))
    # print(str(mesh_data))

    mesh = bpy.data.meshes.new("TestMesh")  # add the new mesh
    obj = bpy.data.objects.new(mesh.name, mesh)
    col = bpy.data.collections["Collection"]
    col.objects.link(obj)
    bpy.context.view_layer.objects.active = obj
    edges = []
    mesh.from_pydata(mesh_data['positions'], edges, mesh_data['faces'])
    mesh.normals_split_custom_set_from_vertices(mesh_data['normals'])
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bm = bmesh.from_edit_mesh(mesh)
    for uv in range(4):
        uv_name = 'uv' + str(uv)
        uv_indices = mesh_data[uv_name]
        if len(uv_indices):
            mesh.uv_layers.new(name=uv_name)
            mesh.uv_layers[uv_name].active = True
            uv_layer = bm.loops.layers.uv.verify()
            for face in bm.faces:
                for loop in face.loops:
                    loop[uv_layer].uv = uv_indices[loop.vert.index]
            bmesh.update_edit_mesh(mesh)
    if len(mesh_data['uv0']):
        mesh.uv_layers['uv0'].active = True
    bpy.ops.object.mode_set(mode='OBJECT')
    for i in range(12):
        unknown_data = mesh_data['unknown'][i]
        if len(unknown_data):
            v_data_type = mesh_header['vertex_data']['unkn' + str(i)]
            d_type = vertex_data_to_attribute_type[v_data_type]
            attribute = mesh.attributes.new(name='unknown' + str(i), type=d_type, domain='POINT')
            if d_type == 'QUATERNION':
                for j in range(len(mesh.vertices)):
                    attribute.data[j].value = unknown_data[j]
            else:
                for j in range(len(mesh.vertices)):
                    attribute.data[j].vector = unknown_data[j]


process_single('yourFileHere.dat')

Okay, this is as far as I have gotten. Now the unknown properties are stored as vertex attributes, so you can look at them in the geometry nodes spreadsheet and use them in the shader with the Attribute node. The UBYTE4 data type would probably be better stored as a byte color value rather than a quaternion, but whatever.
yarcunham Posted January 13 Author

import bpy
import mathutils
import os
import struct
from math import sqrt

bind_pose_translations: list[mathutils.Vector] = []
bind_pose_rotations: list[mathutils.Quaternion] = []
name_to_index: dict[str, int] = {}


def process_single(path: str):
    bind_pose_translations.clear()
    bind_pose_rotations.clear()
    name_to_index.clear()
    with open(path, 'rb') as in_file:
        input_data = in_file.read()
    num_bones = input_data[9]
    start_of_skeleton = (num_bones - 1) * 8 + 25  # skip unknown data
    if len(input_data) < start_of_skeleton + 12:
        raise RuntimeError('Too short')
    ozz_skeleton = input_data[start_of_skeleton:]
    offset = 0
    magic = ozz_skeleton[offset:offset + 12]
    offset = 13
    decoded = magic.decode('utf-8')
    if decoded != 'ozz-skeleton':
        raise RuntimeError('Magic doesn\'t match: {0}'.format(decoded))
    version = int.from_bytes(ozz_skeleton[offset:offset + 4], 'little')
    offset = 17
    if version != 1:
        raise RuntimeError('Wrong version: {0}'.format(version))
    num_joints = int.from_bytes(ozz_skeleton[offset:offset + 4], 'little')
    offset = 21
    if num_joints != num_bones:
        raise RuntimeError('Number of joints in ozz-skeleton ({0}) does not match the number in wrapper ({1})'.format(
            num_joints, num_bones))
    if num_joints > 1023:
        raise RuntimeError('Too many joints: {0}'.format(num_joints))
    chars_count = int.from_bytes(ozz_skeleton[offset:offset + 4], 'little')
    offset = 25
    if chars_count > len(ozz_skeleton) - offset:
        raise RuntimeError('Joint names run past the end of buffer: {0}'.format(chars_count))
    names = [name.decode('utf8') for name in ozz_skeleton[offset:offset + chars_count].split(b'\0') if name]
    for i, name in enumerate(names):
        name_to_index[name] = i
    offset = offset + chars_count
    if len(names) != num_joints:
        raise RuntimeError('Number of names ({0}) does not match number of joints ({1})'.format(len(names), num_joints))
    joint_property_version = int.from_bytes(ozz_skeleton[offset:offset + 4], 'little')
    if joint_property_version != 1:
        raise RuntimeError('wrong joint property version ({0}) at offset {1}'.format(joint_property_version, offset))
    offset += 4
    end_of_joint_properties = offset + num_joints * 3
    parents = []
    while offset < end_of_joint_properties:
        parents.append(int.from_bytes(ozz_skeleton[offset:offset + 2], 'little'))
        offset += 3  # the third byte is a boolean indicating 'leaf', which is not needed
    bind_pose_scales: list[mathutils.Vector] = []
    joints_left = num_joints
    while offset < len(ozz_skeleton):
        # translations (SoA layout: 4 x-values, 4 y-values, 4 z-values)
        txs = struct.unpack_from('<ffff', ozz_skeleton, offset)
        tys = struct.unpack_from('<ffff', ozz_skeleton, offset + 16)
        tzs = struct.unpack_from('<ffff', ozz_skeleton, offset + 32)
        for i in range(min(joints_left, 4)):
            # position = mathutils.Vector((tzs[i], -txs[i], tys[i]))  # possible alternative to change handedness
            position = mathutils.Vector((txs[i], tys[i], tzs[i]))
            bind_pose_translations.append(position)
        offset += 48
        # rotations
        rxs = struct.unpack_from('<ffff', ozz_skeleton, offset)
        rys = struct.unpack_from('<ffff', ozz_skeleton, offset + 16)
        rzs = struct.unpack_from('<ffff', ozz_skeleton, offset + 32)
        rws = struct.unpack_from('<ffff', ozz_skeleton, offset + 48)
        for i in range(min(joints_left, 4)):
            # rotation = mathutils.Quaternion((rws[i], -rzs[i], rxs[i], -rys[i]))  # possible alternative to change handedness
            rotation = mathutils.Quaternion((rws[i], rxs[i], rys[i], rzs[i]))
            bind_pose_rotations.append(rotation)
        offset += 64
        # scales
        sxs = struct.unpack_from('<ffff', ozz_skeleton, offset)
        sys = struct.unpack_from('<ffff', ozz_skeleton, offset + 16)
        szs = struct.unpack_from('<ffff', ozz_skeleton, offset + 32)
        for i in range(min(joints_left, 4)):
            bind_pose_scales.append(mathutils.Vector((sxs[i], sys[i], szs[i])))
        offset += 48
        joints_left -= 4  # was missing; without it the last partial SoA block appends padding joints

    basename = os.path.basename(path)
    armature = bpy.data.armatures.new('a-' + basename)
    obj = bpy.data.objects.new(basename, armature)
    bpy.data.collections["Collection"].objects.link(obj)
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bones = [armature.edit_bones.new(name) for name in names]
    for i, bone in enumerate(bones):
        bone.head = (0, 0, 0)
        bone.tail = (0, 0.05, 0)
        parent = parents[i]
        if parent != 1023:
            bone.parent = bones[parent]
        bone.transform(get_matrix(bone))
    bpy.ops.object.mode_set(mode='OBJECT')


rot_z_90 = mathutils.Quaternion((1 / sqrt(2), 0, 0, -1 / sqrt(2))).to_matrix().to_4x4()


def get_matrix(bone):
    if bone is None:
        return mathutils.Matrix.Identity(4)
        # return rot_z_90  # when changing handedness, the armature appears rotated 90 degrees; doing this as a fix seems wrong
    index = name_to_index[bone.name]
    local_matrix = bind_pose_rotations[index].to_matrix().to_4x4()
    local_matrix.translation = bind_pose_translations[index]
    parent_global_matrix = get_matrix(bone.parent)
    return parent_global_matrix @ local_matrix


process_single(r'your_skeleton_here.dat')

This skeleton import script works most of the way. It definitely gets the positions right, except the skeletons are oriented along the Y-axis and not the Z-axis, and the left and right sides of the skeleton are flipped. Same issue as with the meshes, but I'm less confident about flipping the skeleton, since rotations get really broken when they break. There is the unknown data that I skip while parsing the file; its length is 16 * (number_of_bones - 1), so it could actually be the orientation of all the bones except the root bone, maybe as a 4-component short or half-precision float.
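The half-precision/short guess in the last paragraph can be pressure-tested outside Blender: interpret each 16-byte per-bone record under a few candidate layouts and check which one produces plausible unit quaternions. Everything here is a sketch of that hypothesis; none of these layouts is confirmed.

```python
import math
import struct

def interpretations(record: bytes) -> dict:
    """Candidate decodings of one 16-byte per-bone record."""
    assert len(record) == 16
    return {
        'float4': struct.unpack('<4f', record),                              # 4 x f32
        'half8': struct.unpack('<8e', record),                               # 8 x f16
        'short8': tuple(v / 32767.0 for v in struct.unpack('<8h', record)),  # 8 x normalized s16
    }

def looks_like_unit_quat(values) -> bool:
    """True if the first 4 components have (roughly) unit length."""
    return abs(math.fsum(v * v for v in values[:4]) - 1.0) < 0.01

# synthetic check: a record that happens to hold an identity quaternion as 4 floats
record = struct.pack('<4f', 0.0, 0.0, 0.0, 1.0)
print(looks_like_unit_quat(interpretations(record)['float4']))  # True
```

Running this over the skipped block of a real skeleton file and counting which interpretation passes the unit-quaternion test most often would support or kill the "per-bone orientation" theory.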
bobo Posted January 13

Yes, it's very valuable for learning. I'll take a good look at this article. In fact, I've been paying attention to your research for a long time.
yarcunham Posted January 13 Author

import math
import os
import os.path
import struct

import mathutils
import bpy
from mathutils import Quaternion
from os import walk

MINIMUM_LENGTH = (14 +  # b'ozz-animation\0'
                  4 +   # u32 version
                  4 +   # float duration
                  4 +   # u32 number of tracks
                  4 +   # u32 translation count
                  4 +   # u32 rotation count
                  4)    # u32 scale count

kSqrt2 = 1.4142135623730950488016887242097  # copied from ozz-animation source
kInt2Float = 1.0 / (32767.0 * kSqrt2)
eps = 1e-16  # copied from ozz-animation source

# index by `largest` element, get the other 3
quat_mapping = [
    [1, 2, 3],
    [0, 2, 3],
    [0, 1, 3],
    [0, 1, 2],
]


def short_to_float(short: int) -> float:
    return short * kInt2Float


def elems_to_quat(largest: int, sign: int, a: float, b: float, c: float) -> Quaternion:
    quat_elems = [0, 0, 0, 0]
    a_index = quat_mapping[largest][0]
    b_index = quat_mapping[largest][1]
    c_index = quat_mapping[largest][2]
    quat_elems[a_index] = a
    quat_elems[b_index] = b
    quat_elems[c_index] = c
    dot = quat_elems[0] ** 2 + quat_elems[1] ** 2 + quat_elems[2] ** 2 + quat_elems[3] ** 2
    ww0 = max(eps, 1 - dot)
    w0 = ww0 / math.sqrt(ww0)
    if sign:
        w0 = -w0
    quat_elems[largest] = w0
    # not entirely sure if this sanity check should be enabled:
    # quat = mathutils.Quaternion((quat_elems[3], quat_elems[0], quat_elems[1], quat_elems[2]))
    # if math.fabs(quat.magnitude - 1) > 0.01:
    #     raise RuntimeError('Quaternion magnitude not 1: ({0}) for quaternion {1}. Converted from elems '
    #                        'largest: {2}, sign: {3}, a: {4}, b: {5}, c: {6}'.format(
    #                            quat.magnitude, quat, largest, sign, a, b, c))
    return mathutils.Quaternion((quat_elems[3], quat_elems[0], quat_elems[1], quat_elems[2]))


def analyze_single(path: str):
    if os.path.getsize(path) < MINIMUM_LENGTH:
        raise RuntimeError('file {0} is too small'.format(path))
    with open(path, 'rb') as in_file:
        input_data = in_file.read()
    offset = 0
    magic = input_data[offset:offset + 14]
    offset = 14
    if magic != b'ozz-animation\0':
        raise RuntimeError('Magic doesn\'t match: {0}'.format(magic))
    version = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset = 18
    if version != 3:
        raise RuntimeError('Incorrect version {0} (only version 3 is supported)'.format(version))
    duration = struct.unpack_from('<f', input_data[offset:offset + 4])[0]
    offset = 22
    num_tracks = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset = 26
    translation_count = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset += 4 + 12 * translation_count
    rotation_count = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset += 4 + 14 * rotation_count
    scale_count = int.from_bytes(input_data[offset:offset + 4], 'little')
    print('{6} bytes {0}: duration:{1}, number of tracks:{2}, number of keyframes: location {3}, rotation {4}, scale {5}'.format(
        path, duration, num_tracks, translation_count, rotation_count, scale_count, os.path.getsize(path)))


def is_zero_vector(vec: mathutils.Vector) -> bool:
    return vec.length_squared < 1e-10


def is_one_vector(vec: mathutils.Vector) -> bool:
    return math.fabs(vec.length_squared - 3) < 1e-10


def process_single(path: str):
    if os.path.getsize(path) < MINIMUM_LENGTH:
        raise RuntimeError('file {0} is too small'.format(path))
    with open(path, 'rb') as in_file:
        input_data = in_file.read()
    offset = 0
    magic = input_data[offset:offset + 14]
    offset = 14
    if magic != b'ozz-animation\0':
        raise RuntimeError('Magic doesn\'t match: {0}'.format(magic))
    version = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset = 18
    if version != 3:
        raise RuntimeError('Incorrect version {0} (only version 3 is supported)'.format(version))
    duration = struct.unpack_from('<f', input_data[offset:offset + 4])[0]
    offset = 22
    num_tracks = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset = 26
    translation_count = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset = 30
    print('translation count:{0}'.format(translation_count))
    for i in range(translation_count):
        time = struct.unpack_from('<f', input_data[offset:offset + 4])[0]
        track = struct.unpack_from('<H', input_data[offset + 4:offset + 6])[0]
        tx = struct.unpack_from('<e', input_data[offset + 6:offset + 8])[0]
        ty = struct.unpack_from('<e', input_data[offset + 8:offset + 10])[0]
        tz = struct.unpack_from('<e', input_data[offset + 10:offset + 12])[0]
        if track >= num_tracks:
            test_vec = mathutils.Vector((tx, ty, tz))
            if not is_zero_vector(test_vec):
                raise RuntimeError('track ({0}) >= num_tracks ({1}) and length != 0 ({2})'.format(
                    track, num_tracks, test_vec.length))
            offset += 12
            continue
        bone = bpy.context.active_object.pose.bones[track]
        bone.location = (tx, ty, tz)
        frame_offset = 1 + time * bpy.context.scene.render.fps
        bone.keyframe_insert('location', frame=frame_offset, keytype='GENERATED')
        offset += 12
    rotation_count = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset += 4
    for i in range(rotation_count):
        time = struct.unpack_from('<f', input_data[offset:offset + 4])[0]
        track = struct.unpack_from('<H', input_data[offset + 4:offset + 6])[0]
        largest = input_data[offset + 6]
        sign = input_data[offset + 7]
        r1 = short_to_float(struct.unpack_from('<h', input_data[offset + 7:offset + 9])[0])
        r2 = short_to_float(struct.unpack_from('<h', input_data[offset + 9:offset + 11])[0])
        r3 = short_to_float(struct.unpack_from('<h', input_data[offset + 11:offset + 13])[0])
        if track >= num_tracks:
            test_vec = mathutils.Vector((r1, r2, r3))
            if not is_zero_vector(test_vec):
                raise RuntimeError('track ({0}) >= num_tracks ({1}) and length != 0 ({2}), offset:0x{3:x}'.format(
                    track, num_tracks, test_vec.length, offset))
            offset += 14
            continue
        quat = elems_to_quat(largest, sign, r1, r2, r3)
        bone = bpy.context.active_object.pose.bones[track]
        bone.rotation_quaternion = quat
        frame_offset = 1 + time * bpy.context.scene.render.fps
        bone.keyframe_insert('rotation_quaternion', frame=frame_offset, keytype='GENERATED')
        offset += 14
    scale_count = int.from_bytes(input_data[offset:offset + 4], 'little')
    offset += 4
    for i in range(scale_count):
        time = struct.unpack_from('<f', input_data[offset:offset + 4])[0]
        track = struct.unpack_from('<H', input_data[offset + 4:offset + 6])[0]
        sx = struct.unpack_from('<e', input_data[offset + 6:offset + 8])[0]
        sy = struct.unpack_from('<e', input_data[offset + 8:offset + 10])[0]
        sz = struct.unpack_from('<e', input_data[offset + 10:offset + 12])[0]
        if track >= num_tracks:
            test_vec = mathutils.Vector((sx, sy, sz))
            if not is_one_vector(test_vec):
                raise RuntimeError('track ({0}) >= num_tracks ({1}) and not 1 scale ({2}), scale keyframe {3}, offset:0x{4:x}'.format(
                    track, num_tracks, test_vec, i, offset))
            offset += 12
            continue
        bone = bpy.context.active_object.pose.bones[track]
        bone.scale = (sx, sy, sz)
        frame_offset = 1 + time * bpy.context.scene.render.fps
        bone.keyframe_insert('scale', frame=frame_offset, keytype='GENERATED')
        offset += 12


def read_dir(path: str):
    for (dirpath, dirnames, filenames) in walk(path):
        for name in filenames:
            try:
                analyze_single(os.path.join(dirpath, name))
            except RuntimeError:
                pass


# read_dir(r'some_directory/somewhere/with/animations')
process_single(r'your_file_here.dat')

This script is meant to import animations onto an armature, but because the bone orientations are all kinds of broken, it might not work at all. It "works" in the sense that it reads the animation file and assigns keyframes to the bones in the selected armature using the values it reads from the animation file. But the animation itself is just all kinds of busted.
Potential causes: the animation is read correctly, but because the resting orientation of the bones is wrong, the animations break more and more the further down the bone chain you get; or the animation isn't even read correctly, and then the bone orientations multiply already-wrong rotations even more. And if the animation isn't read correctly, here are a couple of possible causes: the decompression of the animation from the ozz-animation file isn't right (I tried porting the C++ code to Python, but I could have misunderstood it), or it's the handedness and up-axis issue again. And lastly: ozz-animation orders its quaternions XYZW and Blender WXYZ, but I already reorder the W component, so maybe that isn't it.
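The smallest-three decompression used in `elems_to_quat` can at least be checked in isolation, without Blender. This mirrors the script's math with plain tuples, returning WXYZ order as the script does:

```python
import math

# index by `largest` element, get the indices of the other 3 (XYZW order)
quat_mapping = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]

def decode_quat(largest, sign, a, b, c):
    """Rebuild a quaternion from its three smallest components."""
    elems = [0.0, 0.0, 0.0, 0.0]
    for index, value in zip(quat_mapping[largest], (a, b, c)):
        elems[index] = value
    dot = sum(v * v for v in elems)
    w0 = math.sqrt(max(1e-16, 1.0 - dot))  # ww0 / sqrt(ww0) in the script is the same as sqrt(ww0)
    elems[largest] = -w0 if sign else w0
    return (elems[3], elems[0], elems[1], elems[2])  # reorder XYZW -> WXYZ for Blender

# identity rotation: w is the largest component, the other three are zero
print(decode_quat(3, 0, 0.0, 0.0, 0.0))  # (1.0, 0.0, 0.0, 0.0)
```

If this round-trips known quaternions correctly, the decompression port can be ruled out as the cause and the problem narrowed to rest pose or handedness.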
yarcunham Posted January 13 (Author) I could attach some example files for the model, the skeleton and the animation, but I don't want to break the rules
yarcunham Posted January 13 (Author) Oh yeah, one thing I have noticed while trying to figure out what the unknown vertex data means is that one of the data types only seems to contain values divisible by 3: 0, 3, 6, 9, 12, 15 and so on. It makes me feel like it's some kind of offset into an array whose elements are 3 values long, but I don't know, I'm flailing.
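One cheap way to test that hunch across all the files is a divisibility check over the suspect field (a sketch only; the function name and the heuristic itself are mine, nothing from the file format confirms it yet):

```python
def looks_like_index_stream(values, stride=3):
    """Heuristic: if every value of an unknown per-vertex field is a
    multiple of `stride`, it may be a byte- or element-offset into an
    array of `stride`-element records (e.g. 3-component vectors)."""
    return all(v % stride == 0 for v in values)

# Values observed so far in that field: 0, 3, 6, 9, 12, 15, ...
observed = [0, 3, 6, 9, 12, 15]
plausible = looks_like_index_stream(observed)
```

If the check holds for every mesh, dividing the values by 3 and treating them as indices into one of the unidentified data blocks would be the next experiment.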
yarcunham Posted Sunday at 08:11 PM (Author) The mesh files, skeleton and animations are in the attached 7z file. I have not been able to progress much in the last week or so. I have identified a couple of new pieces of data in the mesh files: at offset 0x50 there is a u32 that points to a data block containing some kind of 4x4 (presumably transformation) matrices, and offset 0x58 holds a u32 that gives the number of those matrices. Here is a screenshot of the matrices in the "corneas" mesh file: As you can see, the diagonal values are float 1 (00 00 80 3f). I don't know the relevance of this data, unfortunately. Offset 0x60 holds a u32 that gives the offset of some data block after the matrices, and a u32 at 0x68 probably gives the number of entries in that block, but I haven't been able to reliably figure out the size of a single entry. Offset 0x70 holds a u32 that gives the offset of some other data block after the first, and again a u32 at 0x78 gives the number of entries in it. The skeleton files also contain some unknown data before the ozz skeleton data. I have to assume that some combination of the unknown vertex data, the matrices, the two "after matrix" data blocks and maybe the extra data in the skeleton file allows binding the mesh to the skeleton. I have also not been able to animate the skeleton properly. I'm pretty sure the Starlite engine uses left-handed Y-up coordinates. Whatever the case, I have not been able to convert the transformations in the animation files into a form that produces anything except a glitchy mess. Attached: MTA2-angela.7z
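The header fields above can be pulled out with a short parser (a sketch under two assumptions: the offsets 0x50-0x78 are relative to the payload start like the earlier fields, and each matrix is 16 consecutive f32s; whether they are row- or column-major is unknown):

```python
import struct

def read_mesh_extras(payload: bytes):
    """Read the newly identified header fields from a mesh payload.
    All fields are little-endian u32; meanings are still guesses."""
    matrix_offset, = struct.unpack_from('<I', payload, 0x50)
    matrix_count,  = struct.unpack_from('<I', payload, 0x58)
    block1_offset, = struct.unpack_from('<I', payload, 0x60)
    block1_count,  = struct.unpack_from('<I', payload, 0x68)
    block2_offset, = struct.unpack_from('<I', payload, 0x70)
    block2_count,  = struct.unpack_from('<I', payload, 0x78)

    # Assume each matrix is a 4x4 of f32 = 64 bytes; split the 16 floats
    # into four 4-tuples (rows or columns, ordering unconfirmed).
    matrices = []
    for i in range(matrix_count):
        flat = struct.unpack_from('<16f', payload, matrix_offset + i * 64)
        matrices.append([flat[r * 4:r * 4 + 4] for r in range(4)])
    return matrices, (block1_offset, block1_count), (block2_offset, block2_count)
```

If the "corneas" matrices really are all identity, these may be inverse bind matrices for a mesh that sits at its bind pose, which would fit the skinning theory.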
yarcunham Posted Monday at 12:42 PM (Author) Lol, I guess "left-handed Y-up" is confirmed by this blog post: https://outfit7.com/blog/tech/building-the-ultimate-mobile-game-engine-starlite Relevant screenshot from the editor: Note the axis gadgets in the center and corner of the main editor screen: Y is up, Z is toward the bottom left, X is toward the top left.
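Going from left-handed Y-up to Blender's right-handed Z-up can be done with a single axis swap, since swapping two axes also flips handedness. A sketch (the exact mapping, Blender X = source X, Blender Y = source Z, Blender Z = source Y, is an assumption; Starlite could use a different one):

```python
def lh_yup_to_blender(p):
    """Map a point from a left-handed Y-up space into Blender's
    right-handed Z-up space by swapping Y and Z. The swap is an
    improper transform (det = -1), so it flips handedness by itself."""
    x, y, z = p
    return (x, z, y)

def lh_yup_quat_to_blender(q):
    """Apply the same basis change to a rotation quaternion in
    (w, x, y, z) order: permute the imaginary parts like the axes,
    then negate them, because conjugating by a reflection reverses
    the sense of rotation."""
    w, x, y, z = q
    return (w, -x, -z, -y)
```

The quaternion rule follows from conjugating the rotation by the swap matrix: the axis becomes (x, z, y) and the angle negates, which is the same as negating all three imaginary components.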
yarcunham Posted 17 hours ago (Author) I found one major problem in the animation import script: Blender does not preserve the indices of bones when you change their parents, so I have been assigning keyframes to entirely wrong bones the whole time. The first seven bones happen to keep the same index, then it gets wildly out of sync. This is some of the output of a debugging script, where the first number is the original index, the second number is the index Blender assigns to the bone, and the string is the bone's name:

```
(0, 0, 'root'), (1, 1, 'C_main_root__SET'), (2, 2, 'C_skin_joints__SET'),
(3, 3, 'c_root_uJnt'), (4, 4, 'c_addon_uJnt'), (5, 5, 'c_data_uJnt'),
(6, 6, 'c_offset_uJnt'), (7, 242, 'l_armAddon_uJnt'), (8, 243, 'c_camera_uJnt'),
(9, 244, 'r_armAddon_uJnt'), (10, 7, 'c_hip_00_uJnt'), (11, 8, 'rb_light_top_uJnt'),
(12, 12, 'cf_light_top_uJnt'), (13, 16, 'lf_light_top_uJnt'), (14, 20, 'cb_light_top_uJnt'),
(15, 24, 'cb_tight_top_uJnt'), (16, 28, 'lb_tight_top_uJnt'), (17, 32, 'ls_medium_top_uJnt'),
(18, 36, 'cf_tight_top_uJnt'), (19, 40, 'rf_light_top_uJnt'), (20, 44, 'cb_medium_top_uJnt'),
(21, 48, 'lb_light_top_uJnt'), (22, 52, 'r_femurRibbon_01_uJnt'), (23, 62, 'lf_medium_top_uJnt'),
(24, 66, 'c_tail00_uJnt'),
```
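The robust fix is to stop indexing Blender's bone collection by the file's track number at all: capture the bone-name list in file order before any reparenting, then resolve tracks by name. A sketch (the function name and the name-capture step are mine, not part of the engine's format):

```python
def build_track_map(original_order, blender_bones):
    """Map the animation file's track index to the bone name it should
    drive. `original_order` is the bone-name list in skeleton-file order,
    captured *before* reparenting; `blender_bones` is any name-keyed
    collection such as armature.pose.bones. Bones are then looked up by
    name, since Blender renumbers them when their parents change."""
    track_to_name = {i: name for i, name in enumerate(original_order)}
    # Sanity check: every track must resolve to a bone that still exists.
    missing = [n for n in track_to_name.values() if n not in blender_bones]
    if missing:
        raise RuntimeError('bones missing from armature: {}'.format(missing))
    return track_to_name
```

In the import loop, `bone = pose.bones[track_to_name[track]]` then replaces `bone = pose.bones[track]`, which should survive any index shuffling Blender does.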