
Leaderboard

Popular Content

Showing content with the highest reputation since 11/17/2025 in Posts

  1. I don't know if there are more models in that unpacked folder; you'll need to check by examining each file there. Just remember that characters use shorts in the vertex buffer. I think I saw another file with floats, but maybe that file isn't a character, or maybe it is but stored with floats, I really don't know, lol. Here is the script if you want to test it: fmt_black_ps2_prototype_DB.py (a quick shorts-vs-floats peek is sketched below this post)
    2 points
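A minimal sketch (not part of the Noesis script above) for peeking at a candidate vertex buffer both as signed shorts and as floats, to see which interpretation gives sane, clustered coordinates. The file name, offset and count are hypothetical placeholders, and the real per-vertex layout (6, 8 or 12 bytes) may differ.

import struct

def peek_vertices(path, offset=0x80, count=16):
    # Read enough bytes for `count` vertices if they were float32 XYZ (12 bytes each).
    with open(path, "rb") as f:
        f.seek(offset)
        raw = f.read(count * 12)
    as_shorts = struct.unpack_from("<%dh" % (count * 3), raw)  # int16 interpretation
    as_floats = struct.unpack_from("<%df" % (count * 3), raw)  # float32 interpretation
    print("shorts:", as_shorts[:9])
    print("floats:", [round(v, 3) for v in as_floats[:9]])

peek_vertices("unpacked/character_00.bin")  # hypothetical file name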
  2. The "repeated offsets" pointing to new blocks indicate that the main BIGFILE.CAT is acting as a master container that holds smaller, self-contained archives inside it.

The Master Index: points to large chunks of data (e.g., "Level 1 Data", "Level 2 Data").
The "New Block": when you go to that offset, you find a new header (signature 01 00 01 00).
The Inner Index: this new header has its own list of files. Because this block is treated as a standalone file by the game engine once loaded, its offsets start at 0 (relative to the start of that block), not relative to the start of the whole disc.

[ MASTER CAT (BIGFILE) ]
|-- Header
|-- Index Entry 1: Offset 1000 -> Points to "Level 1 Block"
|-- Index Entry 2: Offset 5000 -> Points to "Level 2 Block"
|
|... [Data at Offset 1000] ...
|
+-> [ NESTED CAT (Level 1) ]
    |-- Header (starts at Master Offset 1000)
    |-- Index Entry A: Offset 10 (Absolute: 1010)
    |-- Index Entry B: Offset 50 (Absolute: 1050)
    |-- Data...

Why did developers do this? (The Logic)
This approach was necessary due to the hardware limitations of the PlayStation 1 (PS1):
RAM Constraints: The PS1 has only 2 MB of RAM. It cannot keep a massive table of thousands of file offsets in memory at all times.
Modular Loading: The game loads the "Master Index" to find the location of the current level's data. It then streams that specific "Block" (Nested CAT) into memory.
Relative Addressing: Once the "Block" is loaded into a specific memory address, the game engine reads the inner offsets. Since these offsets are relative to the start of the block (0), the engine can easily calculate memory pointers without needing to know where the block was originally located on the CD. (A small offset-resolution sketch follows below this post.)
    2 points
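A minimal sketch of resolving nested offsets for the layout described above, assuming a hypothetical field arrangement: each nested block starts with the 01 00 01 00 signature, followed by a u32 entry count and u32 block-relative offsets. The field sizes are assumptions for illustration, not a verified spec of BIGFILE.CAT.

import struct

def resolve_nested_offsets(cat_path, master_entry_offset):
    with open(cat_path, "rb") as f:
        data = f.read()
    block_start = master_entry_offset                        # taken from the master index
    if data[block_start:block_start + 4] != b"\x01\x00\x01\x00":
        raise ValueError("no nested header signature here")  # signature mentioned in the post above
    count = struct.unpack_from("<I", data, block_start + 4)[0]    # assumed: count right after signature
    entries = []
    for i in range(count):
        rel = struct.unpack_from("<I", data, block_start + 8 + i * 4)[0]
        entries.append((rel, block_start + rel))              # (block-relative, absolute in BIGFILE.CAT)
    return entries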
  3. 2 points
  4. I've just released a new version of ImageHeat 🙂 https://github.com/bartlomiejduda/ImageHeat/releases/tag/v0.39.1
Changelog:
- Added new Nintendo Switch unswizzle modes (2_16 and 4_16)
- Added support for PSP_DXT1/PSP_DXT3/PSP_DXT5/BGR5A3 pixel formats
- Fixed issue with unswizzling 4-bit GameCube/Wii textures
- Added support for hex offsets (thanks to @MrIkso )
- Moved image rendering logic to a new thread (thanks to @MrIkso )
- Added Ukrainian language (thanks to @MrIkso )
- Added support for LZ4 block decompression
- Added Brazilian Portuguese language (thanks to @lobonintendista )
- Fixed ALPHA_16X decoding
- Adjusted GRAY4/GRAY8 naming
- Added support section in readme file
    2 points
  5. The textures are compressed with ZSTD; a type of 0 just means the whole file is stored uncompressed. But there doesn't seem to be any encryption once decompressed; it looks something like an ETC format (a small decompression sketch follows below this post):
    2 points
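A minimal sketch of the conditional decompression described above, assuming a hypothetical header where one byte holds the compression type (0 = stored, anything else = ZSTD) and the payload follows at a fixed offset. The field offsets are placeholders, not the game's actual layout; zstandard is the third-party pip package.

import io
import zstandard  # pip install zstandard

def load_texture_payload(path, type_offset=0x04, data_offset=0x10):
    with open(path, "rb") as f:
        raw = f.read()
    comp_type = raw[type_offset]
    payload = raw[data_offset:]
    if comp_type == 0:
        return payload                                   # type 0: stored as-is
    dctx = zstandard.ZstdDecompressor()
    with dctx.stream_reader(io.BytesIO(payload)) as reader:
        return reader.read()                             # ZSTD frame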
  6. Thanks for some of the info from here; I made a tool for unpacking and packing the localize map files, if someone is interested in it. https://github.com/dest1yo/wwm_utils
    2 points
  7. I'm trying my best to make it load somehow
    2 points
  8. It's been a while since this topic went up, and I have found a way to deal with this:
- Step 1: From the .farc files, use either the tool mentioned in the first post of this thread, or download QuickBMS and use the virtua_fighter_5 BMS script I included in the zip file below to extract them into .bin files.
- Step 2: Download Noesis and install the noesis-project-diva plugin (https://github.com/h-kidd/noesis-project-diva/tree/main , or in the included zip file) in order to view and extract the textures/models and use them in Blender or a 3D modeling software of your choice.
KancolleArcade.zip
    2 points
  9. This is actually very helpful, thank you. I saw the same repeating groups of ~10 unsigned 16-bit values and came to a similar conclusion. The constants like 0, 1632x (~0x3FCx), 21845 (0x5555), 39322 (0x999A), 43691 (0xAAAB), 52429 (0xCCCC), 56798 (0xDDDE), 63488 (0xF800) are classic fixed-point / normalized values (fractions of 0xFFFF), which is exactly what you'd expect for compressed curves (rotations, maybe scales or tangents)... I am not 100% sure though. The "step back" you are seeing is because multiple tracks/bones are interleaved via small index tables, or there's a separate header that says "keyframe list starts at X for track Y". I have not figured it out; I am only assuming, so treat it as an educated guess. 2d72 1100 looks smoother, probably because it is for the static pieces. That is also a guess; I don't really know. I haven't found a clean stride/format yet in any area, at least not to the extent that I was happy with the result. Thankfully your poking proves it isn't baked matrices. However, it might have "junk" data inside of it, or switch between two different formats on the fly, which would involve a call/read from the game engine ("elf statements/running"). For the last two days I have been looking for a header or track table and still haven't found one. I don't know the meaning/definition of each of the 10 values per entry (time? quat? s,t,r? tangents? flags?), and I haven't figured out how these numbers convert back to usable floats/matrices for a bone rig. You found the right haystack to look for the needle in, and I thank you for this. The repeated 16-bit values like 0x3FCx, 0x5555, 0xAAAA, 0xF800, and the way they change over "time", shed light on this. That matches my expectation that GARO is storing proper animation curves rather than just baked matrices, which most people would have assumed because of the static model additive animations. Sorry for repeating myself here; I have a client for this game who constantly has me repeat some of this information, and it has started to turn into a habit. Going back to the "step back" jumps you pointed out: I believe they show the timeline might be split into several blocks (per-bone or per-channel segments) instead of one clean linear stream, which is probably why tools that only understand standard RWANM fail on this game. I don't have much experience with your viewer; the tools I use are quite different, and I have been doing a lot of direct hex poking along with using RenderWare tools, so I'm still trying to figure out the parameters you showcase here. When you mention 2d72 1100 in step 3, is that essentially a stride / FVF setup you're using to visualize the data as a point cloud? That was my original assumption, but now I am second-guessing myself. Do you have any thoughts yet on how those 10-value records break down (e.g. time + rotation + something else), or on where the per-track headers might sit? You helped a lot with this and I am very thankful. (A tiny sketch converting those normalized 16-bit values to floats follows below this post.)
    1 point
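A tiny sketch of the conversion guessed at above: treat each value in a candidate record as an unsigned 16-bit fraction of 0xFFFF and print it as a float, so the curve-like patterns can be eyeballed. The file name, offset and the 10-values-per-record layout are taken from the discussion as assumptions, not a confirmed format.

import struct

def u16_to_unit(v):
    # 0x5555 -> ~0.333, 0xAAAB -> ~0.667, 0xCCCC -> 0.8, 0xFFFF -> 1.0
    return v / 0xFFFF

def dump_records(path, offset, record_count, values_per_record=10):
    with open(path, "rb") as f:
        f.seek(offset)
        raw = f.read(record_count * values_per_record * 2)
    for i in range(record_count):
        vals = struct.unpack_from("<%dH" % values_per_record, raw, i * values_per_record * 2)
        print(i, [round(u16_to_unit(v), 4) for v in vals])

dump_records("anim_block.bin", offset=0x0, record_count=20)  # hypothetical file/offset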
  10. For the selected values in rectangles you may start at 0x208, 0x20A or 0x20C - none of the point clouds looks promising.
    1 point
  11. Hi, I was wondering if there is a way to open the .arc files for The Sims on the original Xbox. I'm curious to see if the archived files are in .IFF format. I tried using The Sims 2 .arc QuickBMS script, but to no avail. EDIT: Found a script that works; ironically it was a Hulk .arc script. Thank you ikskoks
    1 point
  12. I fear there aren't too many people on this forum who have dealt with PS2 animations, me included. Do you have skeleton data? I checked the dff mesh and it looks doable, but usually I don't like to work from scratch:
    1 point
  13. If there are any unreleased models, they should be pretty easy to find. AFAIK most (probably all?) of the package files have the same checksums as the retail game (dvddata\aid\zpackage) - I can't say I've tested every single one, but of the ones I have, that has been the case. There are some other assets in the file system, but I don't know if any are different, e.g. dvddata\aid\bfdmodel\characters\SinglePlayerVehicles\MachineGunTurret is actually the model shown in my original post. They're not compressed, so you can easily use a texture viewer to see if anything looks interesting (almost all textures are DXT1/DXT5). Sometimes they're just the same model as the original but with a larger texture, e.g. 512x512 instead of 128x128; the demo version of Conker is an example of this as well, his texture in the demo is higher res than in the retail game. If anything, most of the "easter eggs" are probably buried in the original/actual game and just go unnoticed. For example, in one of my chats with Uber Winfrey he pointed out Berry's dresser has this on top of it: or the gargoyle statues: safe to say the Rareware guys had an interesting sense of humor. ------------------------------------------ As for any tool, the closest thing to it would be Uber Winfrey's Blender script. Most of my work has just been around documenting the structs/data, and while extracting some of the raw data is easy enough, there is still a ton of stuff that needs to be figured out. For example, animation data is still WIP, and the shaders would probably need to be translated for the model to look right (e.g. fur, cloth, shine, etc.), since it uses old VSH/PSH shaders instead of HLSL. They're pre-compiled, so they need to be decompiled from binary data, and then there's figuring out the shader params and what/how they're used for, etc. These aren't unsolvable problems, but it's definitely outside of my skillset, which means even if I find time for it, it's going to be slow.
    1 point
  14. There is a lot of overlap; in fact the Kameo alpha uses the exact same CAFF version as the Conker demo. The .lvl files are just an archive/container of sorts, while the assets themselves are packaged inside individual "CAFF" containers. For example, the file struct is fairly basic for the LVL container itself (for ImHex):

struct LVL_ENTRY {
    u32 unk_00;
    u32* address : u32;
};

struct LVL_FILE_TABLE {
    u32 count;
    LVL_ENTRY array[count];
};

LVL_FILE_TABLE table @ 0x00;

They're similar in the sense that Conker stores the assets inside a "ZPackage" CAFF container instead of using an external container (.LVL) to store the individual CAFF assets. Unlike the earlier games like GBTG, Kameo also stores a lot of data inside pushbuffer commands, including things like the triangles/shaders/shader params. But as far as models go they're very similar (that's what I've spent the most time on); not sure about the other assets though. (A small Python sketch of reading this table follows below this post.)
    1 point
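A minimal sketch of reading the LVL file table described by the ImHex pattern above: a u32 count followed by {u32 unk_00, u32 address} entries. Little-endian byte order (as on the original Xbox) and the meaning of "address" are assumptions; the file name is hypothetical.

import struct

def read_lvl_table(path):
    with open(path, "rb") as f:
        data = f.read()
    count = struct.unpack_from("<I", data, 0)[0]
    entries = []
    for i in range(count):
        unk_00, address = struct.unpack_from("<II", data, 4 + i * 8)
        entries.append({"unk_00": unk_00, "address": address})
    return entries

for e in read_lvl_table("example.lvl"):   # hypothetical file name
    print(hex(e["address"]), hex(e["unk_00"]))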
  15. There is a "thanks button" (just in case you didn't know).
    1 point
  16. Drag and drop .resources files onto the script; this will extract all of their content. (Developed for the Reloaded exe version, no errors there.)
Next steps:
BMD6MODEL/BMODEL - https://reshax.com/topic/18566-wolfenstein-the-new-order-wolfenstein-the-old-blood-bmd6modelbmodel-files/#comment-101213
BIMAGE - https://reshax.com/topic/18567-wolfenstein-the-new-order-wolfenstein-the-old-blood-bimage-files/#comment-101214
STREAMED.RESOURCES - https://reshax.com/topic/18568-wolfenstein-the-new-order-wolfenstein-the-old-blood-streamedresources/#comment-101215
VIRTUALTEXTURES - https://reshax.com/topic/18569-wolfenstein-the-new-order-wolfenstein-the-old-blood-pages-virtualtextures/#comment-101216
wolfesntein_resources.py wolfesntein_resources.zip
    1 point
  17. Yeah, if you are a fan of defining almost 800 resource types then go for it. I have faith in you...
    1 point
  18. Today I am going to show you how to reverse engineer any binary 3D model. It turns out this is not that hard, and it is actually one of the coolest things in reverse engineering! (Uncompressed and unencrypted models, obviously.)

+====TUTORIAL SECTION=====+

INTRODUCTION

But how do all those models store their 3D data? Well, the answer is simple, there is no magic here: all 3D models are just made up of *vertices*, *faces*, *vertex UV coordinates* and *vertex normals*. They are definitely somewhere in your file (that place is called a buffer) and there is absolutely no extra magic to it.

This is how the vertices look:
v 1.0 4.0 3.0 <= X, Y, Z coordinates (usually from 0.01 to 1000)
v 2.0 3.0 4.0 <= Point values, so they are usually floats
v 6.0 2.0 3.0 <= Usually stable, values don't vary too much between min and max

This is how the faces look:
f 1 2 3 <= Takes the previous vertices (by index) and makes a triangle out of them

This is how the UV vertex coords look:
vt 0.2 0.3 <= 2D coordinate of the first vertex (usually from 0.0 to 1.0)
vt 0.5 0.2 <= Point values, so they are usually floats
vt 0.3 0.1 <= Usually stable, values don't vary too much between min and max

This is how the vertex normals look: [not so important actually]
vn 0.745 0.845 0.360 <= X, Y, Z coordinates (usually from 0.01 to 1)
vn 0.320 0.625 0.270 <= Point values, so they are usually floats
vn 0.430 0.320 0.390 <= Usually stable, values don't vary much between min and max

The result is a simple triangle that has its own UV map too. This is how the simplest 3D model format, OBJ, stores its data. We can say that all binary models store essentially the same data as an OBJ file; there is just one more thing to it. Binary formats have only two ways of storing their 3D data (aside from faces): a separate way and a structured (interleaved) way. Here is how it looks:

Separate way:
vertex_buffer = [
v1 1.0 4.0 3.0 <= X, Y, Z coordinates (usually from 0.01 to 1000)
v2 2.0 3.0 4.0 <= Point values, so usually floats, so "v2 x, y, z"
v3 6.0 2.0 3.0 <= Usually stable, values don't vary too much between min and max
...
]
face_buffer = [
f1 1 2 3 <= Takes the previous vertices and makes a triangle out of them, so "f1 v1, v2, v3"
...
]
uv_coords_buffer = [
vt1 0.2 0.3 <= 2D coordinate of the first vertex (usually from 0.0 to 1.0)
vt2 0.5 0.2 <= Point values, so usually floats, so "vt2 u, v"
vt3 0.3 0.1 <= Usually stable, values don't vary too much between min and max
...
]
vertex_normals_buffer = [
vn1 0.745 0.845 0.360 <= X, Y, Z coordinates (usually from 0.01 to 1)
vn2 0.320 0.625 0.270 <= Point values, so usually floats, so "vn2 x, y, z"
vn3 0.450 0.310 0.390 <= Usually stable, values don't vary much between min and max
...
]

Structured way:
buffer = [
{v1 1.0 4.0 3.0, vt1 0.2 0.3, vn1 0.745 0.845 0.360}
{v2 2.0 3.0 4.0, vt2 0.5 0.2, vn2 0.320 0.625 0.270}
{v3 6.0 2.0 3.0, vt3 0.3 0.1, vn3 0.450 0.310 0.390}
...
]

BINARY DATA

The data in any file can be viewed as binary, no matter whether it looked readable, unreadable or even empty before; viewing it in binary immediately reveals everything. And while binary files are all just bytes, the way we read them changes everything drastically! To view your binary file you must dump hex from it or load it into a hex viewer.

Example file:
Address:   HEX bytes:                                        ASCII:
0012BFC0   48 53 68 61 70 65 5F 31 37 00 00 00 00 00 01 00   HShape_17.......   <= First line contains ASCII strings
0012BFD0   00 00 0A 00 00 00 22 00 00 10 00 00 00 00 0C 00   ......"........    <= Second line does not contain ASCII strings
0012BFE0   00 00 61 32 76 2E 6F 62 6A 43 6F 6F 72 64 01 00   ..a2v.objCoord..   <= Third line contains ASCII strings
0012BFF0   00 00 FF FF FF FF 02 00 00 00 47 04 00 00 82 56   ..........G....V   <= Fourth line contains an interesting "00 00 FF FF FF FF" buffer mark
0012C000   F9 40 39 94 59 43 76 26 13 41 BB 61 FB 40 5A A4   .@9.YCv&.A.a.@Z.   <= Fifth line starts containing the actual float vertex coordinates, but looks random as ASCII
0012C010   5B 43 95 B7 00 41 8F 70 CB 40 C1 4A 5B 43 31 08   [C...A.p.@.J[C1.   <= Sixth line contains actual float vertex coordinates, but looks random as ASCII
0012C020   12 41 8A 8E C9 40 E7 5B 59 43 E8 82 1D 41 90 A0   .A...@.[YC...A..   <= Seventh line contains actual float vertex coordinates, but still looks random as ASCII
0012C030   62 40 21 90 58 43 05 DD 1C 41 BC B3 78 40 D7 63   b@!.XC...A..x@.c   <= Eighth line contains actual float vertex coordinates, but again looks random as ASCII

But what are those floats, shorts and ASCII strings? Bits are the smallest units of computer data; each one is either 0 or 1. A byte, however, is a group of 8 bits that can actually start representing data: it ranges from 0 to 255, where 0 also counts as a valid value (so 256 combinations; in hex, one byte is written with two digits, 00-FF). Here is one byte for example: 10110111 (its bit place values are 128 64 32 16 8 4 2 1, giving 256 possible values). By combining bytes we can build multiple data types.

These are the data types:
Byte/Char => 1 byte, unsigned/signed (8 bits) | Example: 48 <= H | ASCII
Word/Short => 2 bytes, unsigned/signed (16 bits) | Example: 48 53 <= HS | ASCII
Dword/Int => 4 bytes, unsigned/signed (32 bits) | Example: 48 53 68 61 <= HSha | ASCII
ULONG32/Long => 4 bytes, unsigned/signed (32 bits) | Example: 48 53 68 61 <= HSha | ASCII
ULONG64/Long Long => 8 bytes, unsigned/signed (64 bits) | Example: 48 53 68 61 70 65 5F 31 <= HShape_1 | ASCII
Float => 4 bytes, for floating-point values (32 bits) | Example: 48 53 68 61 <= HSha | ASCII
Double => 8 bytes, for more precise floating-point values (64 bits) | Example: 48 53 68 61 70 65 5F 31 <= HShape_1 | ASCII
String => a sequence/array of characters terminated by the null character | Example: 48 53 68 61 70 65 5F 31 37 00 <= HShape_17 | ASCII

Big-endian vs little-endian: reading a 4-byte value in big-endian keeps the bytes in the order they appear, left to right: 48 53 68 61 = "HSha", whereas little-endian reads the bytes in reverse order, right to left: 61 68 53 48 = "ahSH". Big-endian was mainly used on PS3, Xbox 360 and Wii, while little-endian is the norm on Windows, PS4, Xbox One and Nintendo Switch.

TRYING TO REVERSE THE BINARY 3D FORMAT

But how do we actually apply this information to reverse engineering a binary 3D file format and even converting it into an OBJ model? Assuming you have an actual decompressed and unencrypted binary 3D model file, you can visualize the 3D geometry while analyzing its hex in real time! Model Researcher Ultimate is a program that makes this possible.

First, Level 1: Start with the vertices: count 500, type float. Carefully try different offsets while printing the values and rendering them, until you see a continuous, very stable output without insanely big or small values (from 0.001 to 1000). If nothing works, try a different endianness, then try a different type (unlikely). If the mesh appears but stray random vertices appear too, that means the data is structured (interleaved) and you need to try a different padding value.

Second, Level 2: Move on to the vertex UV coordinates: count exactly as many as the vertices, type float. Carefully try different offsets while printing the values and rendering them, until you see a continuous, stable output without insanely big or small values (from 0.0001 to 1.0). If nothing works, try a different type, since you already know the endianness and structure.

Third, Level 3: Move on to the faces. They are tightly linked to the vertices, so errors will constantly appear. Carefully try different offsets while printing the values; don't render them, it will often just throw errors. You are looking for whole values (no floating point) that are very stable, without huge or tiny values. If nothing works, try a different type or even a different index format.

Fourth, Level 4: [To be honest I didn't know what to write here; normals are pretty useless anyway, you can just flip or recalculate them very easily in programs like Blender in just a few clicks, so it's not worth your brainstorming!]

Practical steps:

Here is how BAD data looks: [random, disoriented pattern; extremely low and extremely big values occur]
v -0.0000 -0.0000 -184538016.0000
v -0.0000 15.7924 -158665664.0000
v -0.0000 90990377942005974930976407552.0000 -17551224.0000
v -0.0000 -3386287.2500 -115467744.0000
v -0.0000 15397417210601645679040601784320.0000 -22963316.0000
v -0.0000 15397417210601645679040601784320.0000 -22963316.0000
vt 0.0000 1785889664.0000
vt 0.0000 140283808776479363868647227392.0000
vt 0.0000 10997215558668704718782464.0000
vt 0.0000 -516472.2188
vt 0.0000 -0.0000
vt 0.0000 0.0000
f 57856 10240 3073
f 3073 64769 57856
f 31744 64768 3072
f 57857 64768 58112
f 57856 58112 58368
f 58112 59136 58368

Here is how GOOD data looks: [strong, continuous, repeating pattern; values are all very similar]
v -0.0733 0.0012 1.6030
v -0.0735 -0.0118 1.6023
v -0.0776 -0.0146 1.5900
v -0.0718 -0.0247 1.6005
v -0.0784 0.0009 1.5913
v -0.0784 0.0009 1.5913
vt 0.0008 0.6221
vt 0.0316 0.6229
vt 0.0344 0.6543
vt 0.0628 0.6246
vt 0.0008 0.6539
vt 0.9978 0.6533
f 226 296 268
f 268 253 226
f 124 253 268
f 226 253 227
f 226 227 228
f 227 231 228

Changing the offset (most often), the endianness or the type will instantly give different results, including BAD data drastically turning into GOOD data, so keep that in mind and play with those offsets.

There is just one small but very important step left: most of the time these binary files also store values like the vertex count (the UV coord and vertex normal counts are always the same as the vertex count), the face count, a buffer mark and even the vertex stride (vertex stride = vertex padding + 12, UV coord stride = UV coord padding + 8). They usually sit at the beginning of the mesh buffer, are pretty easy to find and are always laid out the same way. This time, however, I personally recommend finding them with a dedicated hex viewer; my recommendation is ImHex, truly the open-source king in terms of ease of use. (A small Python sketch of the Level 1 offset search follows below this post.)
    1 point
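A minimal sketch of the "Level 1" step above in plain Python instead of Model Researcher Ultimate: read candidate vertices as float32 triplets at a guessed offset/stride, score how sane they look, and dump them to OBJ for a preview. The file name, offset and stride are hypothetical placeholders for whatever you are probing.

import math
import struct

def read_candidate_vertices(path, offset, count=500, stride=12, endian="<"):
    # Interpret `count` records as XYZ float32 triplets starting at `offset`.
    with open(path, "rb") as f:
        data = f.read()
    verts = []
    for i in range(count):
        pos = offset + i * stride
        if pos + 12 > len(data):
            break
        verts.append(struct.unpack_from(endian + "3f", data, pos))
    return verts

def looks_sane(verts, limit=1000.0):
    # GOOD data: finite values of similar magnitude; BAD data: huge or NaN spikes.
    flat = [abs(c) for v in verts for c in v]
    return bool(flat) and all(math.isfinite(c) for c in flat) and max(flat) < limit

def dump_obj(verts, out_path):
    with open(out_path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")

verts = read_candidate_vertices("model.bin", offset=0x12C000, stride=12)  # hypothetical file/offset
print("plausible vertex buffer" if looks_sane(verts) else "try another offset/stride/endianness")
dump_obj(verts, "preview.obj")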
  19. *.abc is the font map. Maybe you can add some characters to it. Now for the texture: the original is 32-bit RGBA, yours is DXT5, which is not exactly the same as the original. Also, I noticed you didn't change the alpha channel of the character, which is crucial for correct display.
    1 point
  20. D:\88>py fgo_arcade_mot_parser.py --mot CHARA_POSE_SVT_0088_S01.mot --skl SVT_0088_S02.skl --bone arm_r --frames 100 0 0.000007 0.402854 0.002541 0.915261 1 0.000120 0.975284 0.052175 -0.214706 2 0.069035 0.000000 0.000000 -0.997614 3 0.000000 -0.276927 -0.000008 -0.960891 4 0.000000 -0.736034 -0.000022 -0.676945 5 0.000000 0.738174 0.000023 0.674610 6 0.000000 0.848413 0.000028 0.529335 7 0.000000 0.998036 0.000034 0.062648 8 0.000000 0.955551 0.000035 0.294825 9 0.000000 0.787267 0.000030 0.616612 10 0.000000 0.853046 0.000034 0.521836 11 0.000000 0.978375 0.000040 0.206837 12 0.000000 0.950974 0.000040 -0.309270 13 0.000000 0.988709 0.000043 -0.149846 14 0.000000 0.955742 0.000043 0.294205 15 0.000000 0.335155 0.000015 0.942163 16 0.000000 0.042013 0.000002 0.999117 17 0.000000 0.055729 0.000004 0.998446 18 0.000000 0.600873 0.000037 0.799344 19 0.000000 0.431736 0.000025 0.902000 20 0.000000 -0.474964 -0.000031 0.880005 21 0.000000 0.005943 0.000002 0.999982 22 0.000000 0.472832 0.000034 0.881153 23 0.000000 0.562161 0.000044 0.827028 24 0.000000 0.544627 0.000045 0.838678 25 0.000000 0.498805 0.000045 0.866714 26 0.000000 0.663745 0.000065 0.747959 27 0.000000 0.598301 0.000061 0.801271 28 0.000000 0.237227 0.000025 0.971454 29 0.000000 -0.314591 -0.000039 0.949227 30 0.000000 -0.813891 -0.000100 0.581017 31 0.000000 -0.989480 -0.000131 -0.144672 32 0.000000 -0.995146 -0.000143 -0.098406 33 0.000000 -0.920574 -0.000144 0.390568 34 0.000000 -0.220975 -0.000032 0.975279 35 0.000000 0.662326 0.000116 0.749216 36 0.000000 0.995018 0.000179 -0.099691 37 0.000000 0.960525 0.000182 0.278192 38 0.000000 0.592240 0.000118 0.805762 39 0.000000 -0.021210 -0.000011 0.999775 40 0.000000 -0.047665 -0.000008 0.998863 41 0.000000 0.314114 0.000081 0.949385 42 0.000000 0.421506 0.000118 0.906826 43 0.000000 0.740391 0.000225 0.672176 44 0.000000 0.996086 0.000322 -0.088392 45 0.000000 0.610455 0.000199 -0.792051 46 0.000000 -0.147819 -0.000063 -0.989014 47 0.000000 -0.758611 -0.000329 -0.651544 48 0.000000 -0.633201 -0.000271 -0.773987 49 0.000000 0.144974 0.000071 -0.989435 50 0.327163 0.016943 0.000008 -0.944816 51 0.000075 0.714520 0.031271 -0.698916 52 0.000000 0.231376 0.000223 0.972864 53 0.000000 0.589910 0.000409 0.807469 54 0.000000 0.814108 0.000585 0.580713 55 0.000000 0.510365 0.000383 0.859958 56 0.000000 0.119739 0.000083 0.992805 57 0.000000 -0.291270 -0.000258 0.956641 58 0.000000 -0.409780 -0.000399 0.912184 59 0.000000 -0.622229 -0.000662 0.782835 60 -0.000002 -0.941428 -0.001064 0.337213 61 -0.000002 -0.966760 -0.001144 0.255684 62 -0.000002 -0.940971 -0.001175 0.338484 63 -0.000002 -0.957375 -0.001293 0.288845 64 -0.000003 -0.922744 -0.001354 0.385411 65 -0.000003 -0.793428 -0.001291 0.608662 66 -0.000004 -0.998795 -0.001788 -0.049037 67 -0.000003 -0.836432 -0.001628 -0.548068 68 -0.000004 -0.931258 -0.001932 -0.364356 69 -0.000005 -0.993580 -0.002216 -0.113106 70 -0.000005 -0.983486 -0.002361 0.180970 71 -0.000005 -0.812511 -0.002115 0.582941 72 -0.000005 -0.710143 -0.002080 0.704054 73 -0.000006 -0.820454 -0.002625 0.571706 74 -0.000008 -0.999984 -0.003401 0.004570 75 -0.000007 -0.792709 -0.002817 -0.609593 76 -0.000002 -0.308776 -0.001126 -0.951134 77 0.000004 0.259678 0.001201 -0.965695 78 0.000009 0.733887 0.003312 -0.679263 79 0.000013 0.975185 0.004912 -0.221336 80 0.000014 0.997418 0.005142 0.071628 81 0.000013 0.998641 0.004884 0.051882 82 0.000012 0.999473 0.004623 0.032116 83 0.000012 0.999914 0.004360 0.012337 84 0.000011 0.999964 0.004096 -0.007446 85 0.000010 
0.999622 0.003830 -0.027227 86 0.000009 0.998889 0.003563 -0.046997 87 0.000009 0.997764 0.003294 -0.066748 88 0.000008 0.996250 0.003024 -0.086474 89 0.000007 0.994345 0.002753 -0.106165 90 0.000007 0.992051 0.002481 -0.125815 91 0.000006 0.989368 0.002208 -0.145416 92 0.000005 0.986298 0.001934 -0.164960 93 0.000004 0.982843 0.001659 -0.184439 94 0.000004 0.979002 0.001383 -0.203846 95 0.000003 0.974778 0.001107 -0.223173 96 0.000002 0.970173 0.000831 -0.242413 97 0.000001 0.965188 0.000554 -0.261558 98 0.000000 0.959825 0.000277 -0.280601 99 0.000000 0.954086 0.000000 -0.299534
    1 point
  21. SLES_526.07.ELF (PS2 game executable) SCRIPT.PTD (1,728,512 bytes) 1. INITIAL FILE ANALYSIS: hexdump -C SCRIPT.PTD | head -50 First 32 bytes: unknown header Bytes 0x20-0x11F: 256-byte table Rest: encrypted data With Radare2, I examined the ELF a bit: Found function at 0x0010da30 (file handling?) 3. CALL TRACING: 0x0010da30 → 0x10ccf0 → 0x10a3f8 → 0x0010d850 4. DECODING FUNCTION ANALYSIS (0x0010d850): 🔓 ALGORITHMS FOUND: 1. LZSS DECOMPRESSION: def yklz_lzss_decompress(data): param_byte = data[7] # Always 0x0A shift = param_byte - 8 # 2 mask = (1 << shift) - 1 # 3 decompressed_size = int.from_bytes(data[8:12], 'little') # ... standard LZSS implementation 2. DECRYPTION ALGORITHM (0x0010D850): def junroma_decrypt_exact(data): if len(data) <= 0x120: return data result = bytearray(data) t1 = 0 # Initialized to 0 base_ptr = 0 # Base pointer (t3 in code) for i in range(0x120, len(data)): encrypted_byte = data[i] # t7 = (t1 XOR encrypted_byte) + base_ptr t7 = (t1 ^ encrypted_byte) + base_ptr # decrypted_byte = TABLE[(t7 + 0x20) & 0xFF] idx = (t7 + 0x20) & 0xFF decrypted_byte = SUBSTITUTION_TABLE[idx] result[i] = decrypted_byte # t1 = (t1 - 1) & 0xFF t1 = (t1 - 1) & 0xFF return bytes(result) 3. SUBSTITUTION TABLE (0x0055F7A0): SUBSTITUTION_TABLE = bytes([ 0x82, 0x91, 0x42, 0x88, 0x35, 0xBB, 0x0F, 0x85, 0x96, 0x2C, 0x56, 0xFF, 0x8E, 0x3C, 0x7C, 0x0D, 0x61, 0xBF, 0xB8, 0xEF, 0xD1, 0x16, 0x07, 0xEE, 0x4F, 0x09, 0xCB, 0x0C, 0xE2, 0xB1, 0xDD, 0x12, 0xFB, 0x08, 0x89, 0x8B, 0x03, 0xC9, 0x27, 0x19, 0x6A, 0x32, 0x5D, 0xCD, 0x98, 0x17, 0xF4, 0xE7, 0x9F, 0x1A, 0xF9, 0x1B, 0x6C, 0x5C, 0x44, 0x3B, 0x6E, 0x3E, 0x60, 0xD5, 0x4D, 0x21, 0x43, 0x4E, 0x65, 0xFD, 0x0B, 0x92, 0x8C, 0x2B, 0x41, 0xED, 0x76, 0x22, 0xC1, 0x74, 0xA3, 0x47, 0x14, 0x67, 0xE0, 0xDE, 0x0A, 0xE3, 0x1E, 0x5F, 0x1C, 0x84, 0xEA, 0xA0, 0x02, 0x69, 0x52, 0xB9, 0xC5, 0x20, 0x6D, 0xC8, 0x79, 0xD0, 0x05, 0x77, 0xB3, 0xDA, 0x7F, 0xBA, 0xF1, 0xB2, 0x72, 0x9E, 0x9A, 0xB5, 0x6B, 0x1F, 0x58, 0xD2, 0x11, 0xA6, 0xD8, 0x80, 0x23, 0x46, 0x73, 0xB6, 0x2E, 0xE4, 0xAD, 0x81, 0xC6, 0xDB, 0x57, 0x95, 0x01, 0xEC, 0xC4, 0xF2, 0xEB, 0xDF, 0xC0, 0x28, 0x49, 0xE9, 0x37, 0x15, 0x5E, 0x34, 0x31, 0x00, 0xA8, 0x8D, 0x9C, 0xBC, 0xA2, 0x62, 0x90, 0xCA, 0x66, 0x3D, 0x70, 0x4C, 0x24, 0x48, 0xBE, 0xA9, 0x5A, 0x94, 0xD4, 0xF5, 0x1D, 0x38, 0x25, 0x8F, 0x26, 0xB4, 0x83, 0x45, 0x8A, 0x5B, 0xFC, 0x63, 0xA4, 0xFA, 0xAF, 0xF8, 0x10, 0xAB, 0x53, 0x54, 0x2F, 0xDC, 0xF6, 0xD3, 0x0E, 0x68, 0xE1, 0x59, 0xAA, 0x30, 0xC2, 0x51, 0xD7, 0xE6, 0xB0, 0xBD, 0x6F, 0x06, 0x93, 0x7D, 0x3A, 0xF7, 0x04, 0x78, 0x2D, 0x55, 0xA5, 0x2A, 0xA7, 0x40, 0x71, 0x9B, 0x7A, 0xC3, 0xD6, 0xFE, 0xCF, 0xE5, 0x4A, 0x7B, 0xC7, 0x99, 0xF0, 0xCC, 0x3F, 0xAC, 0xB7, 0x87, 0x7E, 0x33, 0x13, 0x97, 0xE8, 0x75, 0xCE, 0xA1, 0x50, 0x4B, 0x39, 0xD9, 0x86, 0x64, 0x9D, 0x29, 0x36, 0x18, 0xAE, 0xF3 ]) EXTRATION CODE (PYTHON) import os import struct # ==================== SUBSTITUTION TABLE ==================== SUBSTITUTION_TABLE = bytes([ 0x82, 0x91, 0x42, 0x88, 0x35, 0xBB, 0x0F, 0x85, 0x96, 0x2C, 0x56, 0xFF, 0x8E, 0x3C, 0x7C, 0x0D, # ... 
(same as above, shortened for brevity) 0xE8, 0x75, 0xCE, 0xA1, 0x50, 0x4B, 0x39, 0xD9, 0x86, 0x64, 0x9D, 0x29, 0x36, 0x18, 0xAE, 0xF3 ]) # ==================== LZSS DECOMPRESSION ==================== def yklz_lzss_decompress(data): if len(data) < 16: raise ValueError("File too small (smaller than the 16-byte header).") # Extract header parameters param_byte = data[7] shift = param_byte - 8 if shift < 0: shift = 4 mask = (1 << shift) - 1 # Decompressed size (uint32, little-endian) decompressed_size = int.from_bytes(data[8:12], 'little') # LZSS decompression src_pos = 16 output = bytearray() while len(output) < decompressed_size and src_pos < len(data): flags = data[src_pos] src_pos += 1 for bit in range(8): if len(output) >= decompressed_size or src_pos >= len(data): break is_reference = (flags & 0x80) != 0 flags = (flags << 1) & 0xFF if not is_reference: # Literal byte output.append(data[src_pos]) src_pos += 1 else: # Reference (match) if src_pos + 1 >= len(data): break b1 = data[src_pos] b2 = data[src_pos + 1] src_pos += 2 length = (b1 >> shift) + 3 offset = ((b1 & mask) << 8) | b2 offset += 1 start_index = len(output) - offset for i in range(length): idx = start_index + i if idx < 0: output.append(0) else: output.append(output[idx]) return bytes(output) # ==================== JRS DECRYPTION ==================== def junroma_decrypt_exact(data): """ EXACT implementation of the decryption algorithm (0x0010D850). Confirmed by debugger analysis. Initial t1 = 0 t3 (base_ptr) = 0 for our data Start offset: 0x120 """ if len(data) <= 0x120: return data result = bytearray(data) # Initial t1 = 0 (confirmed by debugger) t1 = 0 # base_ptr = 0 (t3 = base pointer to data) base_ptr = 0 for i in range(0x120, len(data)): encrypted_byte = data[i] # t7 = (t1 XOR encrypted_byte) + base_ptr t7 = (t1 ^ encrypted_byte) + base_ptr # decrypted_byte = TABLE[(t7 + 0x20) & 0xFF] idx = (t7 + 0x20) & 0xFF decrypted_byte = SUBSTITUTION_TABLE[idx] result[i] = decrypted_byte # t1 = (t1 - 1) & 0xFF t1 = (t1 - 1) & 0xFF return bytes(result) # ==================== JRS FILE EXTRACTION ==================== def extract_jrs_files(data): """Extracts individual .JRS files from decrypted data.""" jrs_magic = b'\x8F\x83\xDB\xCF' # JRS Magic: "純ロマ" files = [] pos = 0 while pos < len(data): idx = data.find(jrs_magic, pos) if idx == -1: break # Determine size (find next magic or use header size) next_magic = data.find(jrs_magic, idx + 4) if next_magic != -1: file_size = next_magic - idx else: file_size = len(data) - idx file_data = data[idx:idx + file_size] files.append({ 'offset': idx, 'size': file_size, 'data': file_data, 'is_jrs': True }) pos = idx + file_size return files # ==================== COMPLETE EXTRACTOR ==================== def extract_all_yklz_sections(): """Extracts and processes all YKLZ sections from SCRIPT.PTD.""" print("=== JUNJOU ROMANTICA PS2 EXTRACTOR ===") # Read file try: with open('SCRIPT.PTD', 'rb') as f: raw_data = f.read() print(f"File SCRIPT.PTD read: {len(raw_data):,} bytes") except FileNotFoundError: print("❌ ERROR: SCRIPT.PTD not found") return # Search for YKLZ sections yklz_signature = b'YKLZ' positions = [] pos = 0 while True: idx = raw_data.find(yklz_signature, pos) if idx == -1: break positions.append(idx) pos = idx + 1 print(f"YKLZ sections found: {len(positions)}") if not positions: print("❌ No YKLZ sections found") return # Create directories os.makedirs('EXTRACTED', exist_ok=True) os.makedirs('EXTRACTED/JRS_FILES', exist_ok=True) total_jrs = 0 # Process each section for i, pos in 
enumerate(positions): print(f"\n--- Processing section #{i:03d} (offset 0x{pos:08X}) ---") # Extract YKLZ data if i + 1 < len(positions): next_pos = positions[i + 1] yklz_data = raw_data[pos:next_pos] else: yklz_data = raw_data[pos:] try: # 1. Decompress LZSS decompressed = yklz_lzss_decompress(yklz_data) print(f" Decompressed: {len(decompressed):,} bytes") # Save decompressed version decomp_filename = f'EXTRACTED/section_{i:03d}_decompressed.bin' with open(decomp_filename, 'wb') as f: f.write(decompressed) # 2. Apply decryption (except section 0) if i == 0: # Section 0 is ASCII text without encryption decrypted = decompressed print(f" Section 0 (metadata) - not decrypted") else: decrypted = junroma_decrypt_exact(decompressed) print(f" Decryption applied (from offset 0x120)") # Save decrypted version decrypted_filename = f'EXTRACTED/section_{i:03d}_decrypted.bin' with open(decrypted_filename, 'wb') as f: f.write(decrypted) # 3. Extract JRS files if decrypted[:4] == b'\x8F\x83\xDB\xCF': print(f" ✅ Contains JRS file(s)") jrs_files = extract_jrs_files(decrypted) for j, jrs in enumerate(jrs_files): filename = f'EXTRACTED/JRS_FILES/section_{i:03d}_file_{j:03d}.jrs' with open(filename, 'wb') as f: f.write(jrs['data']) print(f" JRS file #{j}: {jrs['size']:,} bytes") total_jrs += 1 # Analyze JRS header if len(jrs['data']) >= 0x40: version = int.from_bytes(jrs['data'][4:8], 'little') declared_size = int.from_bytes(jrs['data'][12:16], 'little') print(f" Version: {version}, Declared size: {declared_size:,}") # 4. For section 0, show ASCII content if i == 0: try: text = decrypted.decode('ascii', errors='ignore').strip() if text: print(f" ASCII content: {text[:100]}...") # Save as text text_filename = f'EXTRACTED/section_000_metadata.txt' with open(text_filename, 'w', encoding='utf-8') as f: f.write(text) except: pass except Exception as e: print(f" ❌ Error: {e}") import traceback traceback.print_exc() print(f"\n{'='*60}") print("EXTRACTION COMPLETED!") print(f"Total JRS files extracted: {total_jrs}") print(f"Everything saved to: EXTRACTED/") # ==================== EXECUTION ==================== if __name__ == "__main__": extract_all_yklz_sections() NOW, THE RESULTING FILES ALL HAVE A HEADER WITH THE MAGIC NUMBER "JUNROMA" AND APPEAR TO BE ENCRYPTED. TRYING THE SAME ALGORITHMS OR APPLYING SHIFT-JIS JAPANESE DIDN'T WORK.
    1 point
  22. Please don't publish tutorials until you finish them. Also, Raw Texture Cooker is outdated. It's better to use ImageHeat https://github.com/bartlomiejduda/ImageHeat It supports more pixel formats etc.
    1 point
  23. Got a point-cloud character and some normals; edit: got rid of them:
    1 point
  24. When you choose "uncompressed" the file size should be bigger than for a DXT5 file. I'd try some other tool, maybe Gimp, for testing.
    1 point
  25. Hi, I need help ripping and reverse engineering the Geekjam, Toejam, and Earl models from ToeJam & Earl III for an animation. Below are the .funk files (which are located in the bdl folder for some reason, idk) and .bmt files for each character. files for toe jammin and fateral.zip
    1 point
  26. Found a bug with the Python script. To the left is the wrist_r data from the .mot file. To the right is the data I got from exporting the .mot file to .csv. Maybe they are actually SUPPOSED to be like this; I could be wrong, but I am curious.
    1 point
  27. I remember making a request in your GitHub about it. 👍 Somehow, we were not able to see these textures in ImageHeat, only after extraction and decompression. Anyway, for the Switch textures it seems to be an issue, as h3x3r said above, and I confirm it too. In the attachment you'll find all the textures in UNIFORM.TEX (including the jersey-color one) from the Switch version, already decompressed. The stock texture file is among the Switch files in the first post (UNIFORM.TEX). In the screenshot below you see the parameters for the jersey-color texture. Maybe it will be useful when you have time to check it, to help you fix ImageHeat. UNIFORM Switch decompressed.zip
    1 point
  28. I don’t know if this has any effect. https://web.archive.org/web/20230000000000fw_/https://www.zenhax.com/viewtopic.php?t=3547
    1 point
  29. I've moved this topic to graphic file formats, rather than 3d Models where you posted it. Also, please don't start another post for the exact same thing. I've deleted your other post as a duplicate. Please read the rules before posting again.
    1 point
  30. The game got an update and they hard-coded new text in the .mpk Lua scripts, because some words have many different meanings depending on the context. With packet sniffing, I observed that the game downloads some .pak files from easebar.com and puts them in .mpk files. These files are encrypted.
    1 point
  31. Decided to extract some key frames from that .mot file I shared earlier. Wondering if there is any insight. The NaNs are interesting, though.
    1 point
  32. It's Unity, but seems to have a protection layer so it can't be opened in Asset Studio. Game Assembly: https://www.mediafire.com/file/3i7kvobi4nacnbh/GameAssembly.zip/file THO.zip
    1 point
  33. fmt_FGOArcade_mot.py Still incomplete
    1 point
  34. tool.py Here is a working script that will output a JSON file with { hash: text, ... }
    1 point
  35. I made a Blender addon to import models, textures and animations for Dolphin Wave and other games that use the same engine. It can import .lzs and .lza files as-is; you don't need to decrypt or decompress the files. https://github.com/Al-Hydra/blenderBUM
    1 point
  36. thanks 2001_800xcsled.zip 1999_440xcrsled.zip
    1 point
  37. To whoever ends up here in the future: there is a really simple-to-use utility on GitHub to convert files from Xbox ADPCM to PCM and vice-versa: Sergeanur/XboxADPCM Thanks for the thread, I really thought the WAV files I had were lost forever due to an obsolete codec..! In my case, I am porting the PT-BR voiceover of Max Payne from PC to Xbox, which I am surprised wasn't done before.
    1 point
  38. Made some adjustments! Now it's working.
    1 point
  39. When I get home, I will compile the decompressor/compressor and the unpack/pack tool; it is one all-in-one tool.

#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<uint8_t> compressLZSSBlock(const std::vector<uint8_t>& input) {
    const int MIN_MATCH = 3;   // minimum length to become an (offset, length) pair
    const int MAX_MATCH = 17;  // (0xF + 2)
    const int DICT_SIZE = 4096;
    const size_t n = input.size();

    // Dictionary identical to the decompressor's
    std::vector<uint8_t> dict_buf(DICT_SIZE, 0);
    size_t dict_index = 1;     // same initial index as the decompressor

    size_t producedBytes = 0;  // how many bytes have already been "generated" (logical output)

    std::vector<uint32_t> flagWords;
    uint32_t curFlag = 0;
    int bitsUsed = 0;

    auto pushFlagBit = [&](bool isLiteral) {
        if (bitsUsed == 32) {
            flagWords.push_back(curFlag);
            curFlag = 0;
            bitsUsed = 0;
        }
        if (isLiteral) {
            // bit 1 = literal (same meaning as in the decompressor)
            curFlag |= (1u << (31 - bitsUsed));
        }
        ++bitsUsed;
    };

    std::vector<uint8_t> literals;
    std::vector<uint8_t> pairs;
    literals.reserve(n);
    pairs.reserve(n / 2 + 16);

    size_t pos = 0;
    while (pos < n) {
        size_t bestLen = 0;
        uint16_t bestOffset = 0;

        if (producedBytes > 0) {
            // maximum possible length for this match (must not run past the end of the input)
            const size_t maxMatchGlobal = std::min(static_cast<size_t>(MAX_MATCH), n - pos);

            // walk every possible dictionary offset
            for (int off = 1; off < DICT_SIZE; ++off) {
                if (dict_buf[off] != input[pos]) continue;

                // --- DYNAMIC SIMULATION OF THE DECOMPRESSOR FOR THIS OFFSET ---
                uint8_t candidateBytes[MAX_MATCH];
                size_t candidateLen = 0;

                for (size_t l = 0; l < maxMatchGlobal; ++l) {
                    const int src_index = (off + static_cast<int>(l)) & 0x0FFF;

                    // value at src_index, taking into account that this very block
                    // may overwrite dictionary positions (overlap)
                    uint8_t b = dict_buf[src_index];

                    // If src_index equals one of the write indices of this SAME pair
                    // (dict_index + j), use the already "generated" byte candidateBytes[j]
                    for (size_t j = 0; j < l; ++j) {
                        const int dest_index = (static_cast<int>(dict_index) + static_cast<int>(j)) & 0x0FFF;
                        if (dest_index == src_index) {
                            b = candidateBytes[j];
                            break;
                        }
                    }

                    if (b != input[pos + l]) {
                        // does not match the input, stop here
                        break;
                    }
                    candidateBytes[l] = b;
                    ++candidateLen;
                }

                if (candidateLen >= static_cast<size_t>(MIN_MATCH) && candidateLen > bestLen) {
                    bestLen = candidateLen;
                    bestOffset = static_cast<uint16_t>(off);
                    if (bestLen == static_cast<size_t>(MAX_MATCH)) break; // cannot do better
                }
            }
        }

        if (bestLen >= static_cast<size_t>(MIN_MATCH)) {
            // --- ENCODE AS AN (offset, length) PAIR ---
            pushFlagBit(false); // 0 = pair
            uint16_t lengthField = static_cast<uint16_t>(bestLen - 2); // 1..15
            uint16_t pairVal = static_cast<uint16_t>((bestOffset << 4) | (lengthField & 0x0F));
            pairs.push_back(static_cast<uint8_t>(pairVal & 0xFF));
            pairs.push_back(static_cast<uint8_t>((pairVal >> 8) & 0xFF));

            // Update the dictionary exactly like the DECOMPRESSOR does:
            // for (i = 0; i < length; ++i) {
            //     b = dict[(offset + i) & 0xFFF];
            //     out.push_back(b);
            //     dict[dict_index] = b;
            //     dict_index = (dict_index + 1) & 0xFFF;
            // }
            for (size_t i = 0; i < bestLen; ++i) {
                int src_index = (bestOffset + static_cast<uint16_t>(i)) & 0x0FFF;
                uint8_t b = dict_buf[src_index];
                dict_buf[dict_index] = b;
                dict_index = (dict_index + 1) & 0x0FFF;
            }

            pos += bestLen;
            producedBytes += bestLen;
        } else {
            // --- PLAIN LITERAL ---
            pushFlagBit(true); // 1 = literal
            uint8_t literal = input[pos];
            literals.push_back(literal);
            dict_buf[dict_index] = literal;
            dict_index = (dict_index + 1) & 0x0FFF;
            ++pos;
            ++producedBytes;
        }
    }

    // Terminator pair (offset == 0)
    pushFlagBit(false);
    pairs.push_back(0);
    pairs.push_back(0);

    // Flush the last flag word
    if (bitsUsed > 0) {
        flagWords.push_back(curFlag);
    }

    // Build the final block: [u32 off_literals][u32 off_pairs][flags...][literals...][pairs...]
    const size_t off_literals = 8 + flagWords.size() * 4;
    const size_t off_pairs = off_literals + literals.size();
    const size_t totalSize = off_pairs + pairs.size();

    std::vector<uint8_t> block(totalSize);

    auto write_u32_le = [&](size_t pos, uint32_t v) {
        block[pos + 0] = static_cast<uint8_t>(v & 0xFF);
        block[pos + 1] = static_cast<uint8_t>((v >> 8) & 0xFF);
        block[pos + 2] = static_cast<uint8_t>((v >> 16) & 0xFF);
        block[pos + 3] = static_cast<uint8_t>((v >> 24) & 0xFF);
    };

    write_u32_le(0, static_cast<uint32_t>(off_literals));
    write_u32_le(4, static_cast<uint32_t>(off_pairs));

    size_t p = 8;
    for (uint32_t w : flagWords) {
        block[p + 0] = static_cast<uint8_t>(w & 0xFF);
        block[p + 1] = static_cast<uint8_t>((w >> 8) & 0xFF);
        block[p + 2] = static_cast<uint8_t>((w >> 16) & 0xFF);
        block[p + 3] = static_cast<uint8_t>((w >> 24) & 0xFF);
        p += 4;
    }

    std::copy(literals.begin(), literals.end(), block.begin() + off_literals);
    std::copy(pairs.begin(), pairs.end(), block.begin() + off_pairs);

    return block;
}

@morrigan this is my compressor, try it, and let me know the results.
    1 point
  40. Actually, the LZSS provided above is wrong for these files. I reverse engineered the algorithm. Try the tool and see if the image comes out right. TenchuWoH_DeCompressor.zip
    1 point
  41. By the way, if you need the names of the audio files, put this script.zip in the AetherGazerLauncher\AetherGazer\AetherGazer_Data\StreamingAssets\Windows folder and run process.py; it will then rename every audio .ys file to its proper name.
    1 point
  42. Use my plugin for Noesis, arc_zlib_plzp_lang_vfs.py (which I mentioned earlier). It recursively unpacks all files; at the output you will get *.png, *.wav, *.pm3, *.vram, *.text, *.pvr, etc. You can also find a link to the plugin for 3D models (*.vram) above in the same topic. (*.pvr can be opened in PVRTexTool.)
    1 point
  43. I wrote a VFS unpack/pack program and a new decompressor/compressor for zlib. Now it accepts large files and there's no need to use QuickBMS anymore. The .vfs file can now be larger than the original. I did a test and it worked with the image below; maybe you can put music and sound effects in as well. If the .py file doesn't work, you must install tkinter via pip. VFS_PackUnpack_Tool.py zlib_DeCompressor.py
    1 point