
Leaderboard

Popular Content

Showing content with the highest reputation since 10/15/2024 in all areas

  1. Dear Reshax community,

    We’re grateful to have you as part of the Reshax family, and we hope you’ve found value in the discussions and resources our forum provides. As you may be aware, maintaining an active online community comes with costs—particularly for hosting and the necessary licenses to keep the forum running smoothly and securely. For almost a year, we’ve been covering these expenses ourselves to ensure the forum remains accessible to all.

    However, as the community continues to grow, so do the associated costs. We need your support to help cover the expenses for hosting services and licensing fees, which are essential for keeping the forum operational and maintaining its quality. Any contribution, no matter how small, will go directly towards these costs, allowing us to continue providing a reliable and engaging platform for everyone. Your support will help us ensure that the forum remains a place for learning, sharing, and connecting without disruption.

    We understand that not everyone is in a position to contribute, but if you are able to help, it would be greatly appreciated and would make a significant difference in keeping Reshax running strong.

    Thank you for being a part of our community and for considering supporting us. Together, we can sustain Reshax as a valuable resource for all.

    DONATION: https://reshax.com/clients/donations/1-license-and-hosting-cost/

    Your Reshax team forever :)
    7 points
  2. Sorry for the radio silence; when Lake House fully arrives along with the QOL update tomorrow, I'll make an update to the script (ETA TBD) that'll have updated offsets for hopefully all previously supported models (and new ones if applicable).
    5 points
  3. ImageHeat Download --> https://github.com/bartlomiejduda/ImageHeat
    3 points
  4. I did a bit of work on these formats. The BZZ headers do change slightly from game to game, but the compression algorithm seems to be the same. One day I might combine all the header formats into one script, but for now you should be able to use this Python script to decompress the data in The Grinch for further analysis. It's not perfect because the format is crap (e.g. it doesn't store the decompressed size that I can see, no proper filenames or file types, etc.). grinch_bzz.zip
    3 points
  5. Yay, but what really bothers me is that he gives very little to no feedback once you have helped him. (But in some way he helps keep the forum alive.)
    3 points
  6. I finally managed to do it, so here's a little guide in case anyone wants to do a similar translation in the future.
    1. Extract the files using BaTool.
    2. Delete the first 29 bytes from MainGame_enUS.lua using a hex editor.
    3. You can now access the editable text using any decompiler application. I used decompiler.com.
    4. When your translation is done, we need to compile the .lua file. I used this binary. After downloading it, you can compile from CMD with the command luac5.1 -o outputfilename.lua inputfilename.lua.
    5. We add back the 29 bytes we deleted previously to the compiled file.
    6. We create the pck file using BaTool.
    7. Voila!
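    If anyone wants to script the byte surgery, here's a minimal Python sketch of steps 2 and 5 (file names are placeholders; the 29-byte figure is from the guide above):

    HEADER_SIZE = 29  # per step 2 of the guide

    # Step 2: save the 29-byte header and write out the headerless Lua.
    data = open("MainGame_enUS.lua", "rb").read()
    open("header.bin", "wb").write(data[:HEADER_SIZE])
    open("stripped.lua", "wb").write(data[HEADER_SIZE:])

    # ... decompile stripped.lua, translate, recompile with luac5.1 ...

    # Step 5: splice the saved header back onto the recompiled file.
    compiled = open("outputfilename.lua", "rb").read()
    open("MainGame_enUS.lua", "wb").write(open("header.bin", "rb").read() + compiled)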
    2 points
  7. Hello. I wrote a simple export/import script for the game's text files. I don't know if there's a public tool for it, but I'll share it in case it's useful for others. 🙂 (Instructions for use can be found in the attached INFO.txt!) Stronghold Crusader 2 - Localization Tool - (N o o B).zip
    2 points
  8. Tell me if you encounter any other Pixel Format. If so, send me the file.

    from inc_noesis import *
    import noesis
    import rapi
    import os

    def registerNoesisTypes():
        handle = noesis.register("Death end re:Quest 2", ".xet")
        noesis.setHandlerTypeCheck(handle, noepyCheckType)
        noesis.setHandlerLoadRGBA(handle, noepyLoadRGBA)
        noesis.logPopup()
        return 1

    def noepyCheckType(data):
        bs = NoeBitStream(data)
        if len(data) < 20:
            return 0
        return 1

    def noepyLoadRGBA(data, texList):
        bs = NoeBitStream(data)
        baseName = rapi.getExtensionlessName(rapi.getLocalFileName(rapi.getInputName()))
        TexWidth = bs.readUShort()
        TexHeight = bs.readUShort()
        bs.read(2)
        TexPixelFormat = bs.readUByte()
        bs.read(5)
        if TexPixelFormat == 0:
            TexSize = TexWidth * TexHeight // 2  # DXT1: 4 bits per pixel
            print("Pixel Format > DXT1")
        elif TexPixelFormat == 32:
            TexSize = TexWidth * TexHeight  # DXT5: 8 bits per pixel
            print("Pixel Format > DXT5")
        else:
            print("Unknown Pixel Format >", TexPixelFormat)
            return 0  # bail out instead of crashing on an unknown format
        data = bs.readBytes(TexSize)
        if TexPixelFormat == 0:
            texFmt = noesis.NOESISTEX_DXT1
        elif TexPixelFormat == 32:
            texFmt = noesis.NOESISTEX_DXT5
        texList.append(NoeTexture(rapi.getInputName(), TexWidth, TexHeight, data, texFmt))
        return 1
    2 points
  9. Here's a Noesis Python script for *.rsb:

    from inc_noesis import *
    import noesis
    import rapi
    import os

    def registerNoesisTypes():
        handle = noesis.register("Ghost Recon", ".rsb")
        noesis.setHandlerTypeCheck(handle, noepyCheckType)
        noesis.setHandlerLoadRGBA(handle, noepyLoadRGBA)
        noesis.logPopup()
        return 1

    def noepyCheckType(data):
        bs = NoeBitStream(data)
        if len(data) < 20:
            return 0
        return 1

    def noepyLoadRGBA(data, texList):
        bs = NoeBitStream(data)
        baseName = rapi.getExtensionlessName(rapi.getLocalFileName(rapi.getInputName()))
        bs.read(4)
        TexWidth = bs.readUInt()
        TexHeight = bs.readUInt()
        bs.read(12)
        TexPixelFormat = bs.readUInt()
        if TexPixelFormat == 4:
            TexSize = TexWidth * TexHeight * 2  # b4g4r4a4, 16 bits per pixel
            print("Pixel Format > b4g4r4a4")
        elif TexPixelFormat == 0:
            TexSize = TexWidth * TexHeight * 2  # b5g6r5a0, 16 bits per pixel
            print("Pixel Format > b5g6r5a0")
        else:
            print("Unknown Pixel Format >", TexPixelFormat)
            return 0  # unknown format, bail out
        data = bs.readBytes(TexSize)
        if TexPixelFormat == 4:
            data = rapi.imageDecodeRaw(data, TexWidth, TexHeight, "b4 g4 r4 a4")
            texFmt = noesis.NOESISTEX_RGBA32
        elif TexPixelFormat == 0:
            data = rapi.imageDecodeRaw(data, TexWidth, TexHeight, "b5 g6 r5 a0")
            texFmt = noesis.NOESISTEX_RGBA32
        texList.append(NoeTexture(rapi.getInputName(), TexWidth, TexHeight, data, texFmt))
        return 1
    2 points
  10. I updated the script. It unpacks all QOB, CHR, and MAP files into working .obj files. There are no normals or UVs; maybe someone can take a look at the '.txt' files that get generated and tell me if they see any normals or UVs. The script should now be much easier to read and follow. The MAP02 and MAP03 files do have an error, but it is at the end of the file and doesn't involve any vertices that are used in the .obj files. I can't figure out the logic to unpack the complete file for those two, but I'll work on it when I have time. Meanwhile, if anyone can figure out what the rest of the vertices/float values are, I can update the script and learn something new along the way, as I am still quite new to 3D models. Ghost Recon_convert.py
    2 points
  11. Maybe they changed the encryption for the filenames, because the script should still work. Try the tool then:
    BATool -u data.pck
    BATool -r data.pck
    The repacked archive will be inside the data folder. BATool.rar
    2 points
  12. Version 1.3

    14 downloads

    A tool that extracts the SCEE London Studios PS3 PACKAGE (.PKF, .PKD, .PAK, .THEMES) files. It supports all of its known variants - plain, compressed, encrypted, 32/64-bit. Compiled for 64-bit Windows and Linux. Usage is simple: scee_london "/path/to/Pack0.pkd" "/path/to/out_dir" Alternatively, on Windows it's also possible to just drag and drop the PACKAGE file onto the executable.
    2 points
  13. Everyone here helps each other, they have helped each other before and they will help each other now. So please don't talk rubbish. Advertise elsewhere.
    2 points
  14. So, I do not have a lot of experience with 3D models, and that is an understatement. But I put this script together; it unpacks the first map (m01_caves), all characters, and all weapons. I am not slogging through or testing a second map. It does so horribly: only some vertices I could identify get written to the .obj, with some faces; there are no normals or UVs in the files. Everything else is in the txt file. Hope this helps you. Good luck. MAP_convert.py CHR_convert.py QOB_convert.py Edit: I was bored, but I cannot feed anybody all the answers; I am still learning. Have fun with it. Edit2: this script is standalone; it does not work with Noesis or Blender.
    2 points
  15. https://www.mediafire.com/file/d0w3dhv1zo338zd/Red_Dead_Redemption_TextTools.rar/file
    2 points
  16. Guess dblF has his answers already, 6 weeks later. If not, here's the link to the tri-strip generating code. Didn't check it now; Noesis swaps indices, btw. (The source of hex2obj is not public, other than that of Make_Obj, which is presumably what was meant.) edit: added a build_strips() snippet there
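    For context on the index swapping: when a triangle strip is expanded back into a triangle list, every odd triangle has two indices swapped to keep a consistent winding order. A minimal Python sketch of that decoding direction (not the strip building the linked code does):

    def strip_to_list(strip):
        tris = []
        for i in range(len(strip) - 2):
            a, b, c = strip[i], strip[i + 1], strip[i + 2]
            if a == b or b == c or a == c:
                continue  # skip degenerate triangles used as strip restarts
            # odd triangles get their first two indices swapped (the "index swap")
            tris.append((b, a, c) if i % 2 else (a, b, c))
        return tris

    print(strip_to_list([0, 1, 2, 3, 4]))  # [(0, 1, 2), (2, 1, 3), (2, 3, 4)]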
    2 points
  17. You can partially use this program, but it doesn't work properly and needs to be fixed. https://github.com/Foxxyyy/Magic-RDR
    2 points
  18. Hi! Files: Localization\*\*.locres Tool: https://github.com/amrshaheen61/UE4LocalizationsTool
    2 points
  19. Post doesn't follow the rules - please read them.
    1 point
  20. Try this sample, it will work for character model UVs: https://drive.google.com/file/d/1KTEz6fwL5lYPlp-usZPemBEUilKetRKI/view?usp=sharing
    1 point
  21. When? 20 years ago? From the "About" info: "It works with most DF1 3di files." But well, I'm not surprised that you can't think of anything better than to complain instead of saying thank you to FwO Raven. I'll leave you alone now with the 3di files. Good luck! Keep in mind that it's a voxel mesh while you wait for someone to create a script for you...
    1 point
  22. I am trying to find the UV's now, and got tired of looking for something to convert the rsb files.......... sooooooooo rsb_converter.py Works as far as I can see.
    1 point
  23. If you think the forum is being misused, then report it. Please don't bring a new Xentax (Discord) philosophy here. I've said it many times: people ask questions, and others may or may not answer. We are not pushing anyone to help. If I see that the forum no longer makes sense, it will be closed down. The real issue I see is people not giving credit, which is a huge red flag for me, and I will take strict action against it. On the other hand, if you think this place could be improved, apply to be a moderator and help make it better. I would be more than happy to give you a moderator role to help us out.
    1 point
  24. Sorry, I just wanted to draw the attention of friends who understand Chinese. I will pay special attention to this next time I post.
    1 point
  25. You need to translate all the text in .strtbl and all the text in .wst. The texts in .wst are duplicated from .strtbl. If you translate only .strtbl, the game can load untranslated text from .wst.
    1 point
  26. Yes, but you can also rewrite it in .bms form. Well, here we go. I'm not sure how to reverse the extension order though (for example "xet" > "tex"), so extensions will be in reversed order. Nothing critical... There are no file names either; I bind the offset as the file name.

    ############################################
    #   Death end re:Quest 2 *.DAT files       #
    ############################################
    get BaseFileName basename
    idstring "GDAT"
    get Files uint32
    for i = 0 < Files
        get Offset uint32
        get RawSize uint32
        savepos EndTable
        goto Offset
        savepos ExtOffset
        get ExtCheck byte
        if ExtCheck == 0
            getdstring Extension 0x3
        else
            goto ExtOffset
            getdstring Extension 0x4
        endif
        getdstring Dummy 0x7C
        endian big
        getdstring CompressionSign 0x4
        get Size uint32
        get ZSize uint32
        get Unknown uint32
        endian little
        savepos Offset
        string Name p "%s/%u.%s" BaseFileName Offset Extension
        clog Name Offset ZSize Size
        goto EndTable
    next i
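    If the reversed extensions bother you, a small Python pass over the extracted folder can flip them back afterwards (a sketch; it assumes every stored extension is simply the real one written backwards). QuickBMS may also be able to reverse a string in-script via its string operators, but I haven't verified that.

    import os, sys

    # Rename extracted files, reversing the stored extension
    # (e.g. 123456.xet -> 123456.tex).
    folder = sys.argv[1]
    for name in os.listdir(folder):
        stem, dot, ext = name.rpartition(".")
        if dot:
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, stem + "." + ext[::-1]))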
    1 point
  27. This is one of the first versions of the plugin; it is not fully developed. And as far as I remember, I removed it from public access on Xentax and asked that it not be distributed on the network.
    1 point
  28. I wonder how many threads/requests it will take until you change your tactic? The current one is not very successful, apparently. I'd suggest providing more background information. Do more searches on your own, for example in the xentax archives which id-daemon provided. What I found is http://forum.xentax.com/viewtopic.php?f=10&t=9897&hilit=Driv3r&start=15 for example. This link is not valid any more, but Durik256 gave good instructions on how to get posts like this one from the archives. -------------------------- Tools link, C#, needs to be compiled
    1 point
  29. I've been doing a Noesis script for the beta of Once Human. Still got a few things to do on it, but it should work for most of the models so far: Edit: Read the notes at the start of the script regarding the various files needed. once_human_mesh.zip
    1 point
  30. This game contains very little text + is also very complicated to translate. It's not worth your while. But... The texts in this file -->PILGRIM-Windows.ucas<-- are scattered in many places in .uasset files.... Export: https://fmodel.app/ Edit: https://github.com/amrshaheen61/UE4LocalizationsTool Import: https://github.com/rm-NoobInCoding/UnrealReZen
    1 point
  31. I've never seen the text in an exe file, so it's probably a mistake. Extract the files from the archives and look for the text again.
    1 point
  32. Success. Full name tables dumped for NG1-3, with a reasonable degree of automation. Ultimately I landed up parsing the EXE's PE headers, and then doing something like this:

    inline uint64_t NameTableExtractor::calculateRDataAddress(const uint32_t relative_to_data) const {
        return relative_to_data + (pe_info.rdata_raw_addr - pe_info.rdata_virt_addr);
    }

    inline uint64_t NameTableExtractor::calculateDataAddress(const uint32_t relative_to_text) const {
        return relative_to_text + (pe_info.data_raw_addr - pe_info.data_virt_addr);
    }

    inline uint64_t NameTableExtractor::extractOffsetDQAddress(uint8_t op[8]) const {
        return (op[2] << 16) | (op[1] << 8) | op[0];
    }

    The dumper just scanned the .rdata section, finding strings that conformed to the file name convention. These were then correlated, by address, with their reference locations in the data sections. Entries were clustered and indexed by their reference location, and a second, more permissive name-matching pass was run on gaps within clusters, just to make sure any paths that didn't contain a separator were filled in. Still a lot left to do. I've got preliminary big-endian support for unpacking archives from console versions, and have identified all content-type hints within the archive entries. A lot of the art assets share a similar header structure, and I'm pretty far along with the overall structure of those file types. Given the popularity of some of the formats, it shouldn't take too long to get decent bi-directional conversion done. Then it's just a matter of handling game-specific data, like movesets.
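    The same virtual-to-raw translation is easy to prototype in Python with the pefile library before committing to C++. A rough sketch of the .rdata string scan (the exe name and the "contains a dot" filter are stand-ins for the real file name convention):

    import re
    import pefile  # pip install pefile

    pe = pefile.PE("game.exe")  # placeholder input
    for section in pe.sections:
        if section.Name.rstrip(b"\x00") != b".rdata":
            continue
        raw = section.get_data()
        # find NUL-terminated printable-ASCII strings that look like asset paths
        for m in re.finditer(rb"[\x20-\x7e]{4,}\x00", raw):
            s = m.group()[:-1]
            if b"." in s:  # crude stand-in for the file name convention
                file_off = section.PointerToRawData + m.start()
                rva = section.VirtualAddress + m.start()
                print(hex(file_off), hex(pe.OPTIONAL_HEADER.ImageBase + rva), s.decode())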
    1 point
  33. There is an official way to localize the game: https://github.com/camposantogames/firewatch_localization
    1 point
  34. Thanks to both of you, I appreciate both your help and your time :)
    1 point
  35. Greetings guys. I'm trying to decode .imag files, which can be found in the .ca archives of the game Alien Nations 2 (Die Völker 2). I've been trying to understand how it's possible to convert these custom-formatted images into a commonly supported format, like PNG or GIF. And since I'm pretty new to reverse engineering, I would appreciate any help. If I'm getting it right, these images are 16-bit colour, which must be RGB565 or RGBA4444. The first and the third byte define the image size (e.g. 28x28); the fifth byte is some kind of flag (probably palette?), but it's always set to 1. And the pixel array offset is 24 bytes, again, if I get it right. Although, what confuses me a lot is that this pixel array is always shorter than it's supposed to be for a 16-bit image. It looks like it's definitely compressed in some way, so I'm looking for advice from people who know how to deal with such stuff. Hope you'll be able to give me some help or possible guidance. I've added examples of the .imag files I've been working with; they're relatively small images (28x28 and 24x24) [in attachment]. I've picked the `sausage` icon to start with: it's a relatively small image, and it's pretty easy to find all its appearances in the game. Interestingly enough, almost every icon in the archive is prefixed with `b, g, h, s, w`, e.g.: [bsausage, gsausage, hsausage, ssausage, wsausage]. I've added all of them to the example archive. The meaning of these prefixes is not yet quite clear to me, though. But it looks like all of them are used in different places in-game: "b, h" are 28x28; "g, s, w" are 24x24. I want to preserve the awesome artwork of this game and other old-school gems, so it won't be lost in time. I did try to somehow contact the programmers who worked on this title, but it was too long ago, I'm afraid, and either it's not possible to find their contacts, or they simply do not respond. Thanks in advance! examples.zip
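    Not an answer, but a minimal sketch for sanity-checking the RGB565 guess on any uncompressed region (Pillow assumed; per the post the pixel array seems compressed, so this alone won't produce a clean icon):

    import struct
    from PIL import Image  # pip install Pillow

    def decode_rgb565(data, width, height):
        img = Image.new("RGB", (width, height))
        px = img.load()
        data = data[:width * height * 2]
        data = data[:len(data) & ~1]  # guard against odd lengths
        for i, (v,) in enumerate(struct.iter_unpack("<H", data)):
            r, g, b = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
            # expand 5/6-bit channels to 8 bits
            px[i % width, i // width] = (r << 3 | r >> 2, g << 2 | g >> 4, b << 3 | b >> 2)
        return img

    raw = open("bsausage.imag", "rb").read()[24:]  # 24-byte header per the post
    decode_rgb565(raw, 28, 28).save("bsausage.png")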
    1 point
  36. + https://fmodel.app - unpack .pak
    + https://github.com/trumank/repak - repack .pak
    1 point
  37. Why do people nowadays so frequently demand that someone make a tool for them, as if it required not the slightest effort? A fellow like that is either too naive or a total egoist.
    1 point
  38. Hi, I hope this helps; please use it in conjunction with the script if anything is unclear in either. I got lazy in some parts of the script.

    Documentation for WGG files:

    Header - 88 Bytes (all pointers and offsets always start after the header)
    8 Bytes - Magic (HVSIWGG)
    4 Bytes - Version1, always 2
    4 Bytes - Version2, always 13
    4 Bytes - EndOffset, also filesize
    8 Bytes - Object separator, used to separate objects in the file
    4 Bytes - Last chunk pointer
    4 Bytes - Unknown, usually 0
    4 Bytes - Amount of tables
    4 Bytes - Size of object list
    4 Bytes - Number of vertex chunks
    4 Bytes - Pointer to start of vertex information
    4 Bytes - Number of face chunks
    4 Bytes - Pointer to start of face information
    4 Bytes - Number of vertex chunks with no face information, called "positional chunks" herein
    4 Bytes - Face EndOffset
    4 Bytes - Size of last chunk to EndOffset
    4 Bytes - Unknown, might be pointer to start of object list, always 0
    4 Bytes - Size of "unique face data"
    4 Bytes - Size of "unique face data" and "unique transformation data"
    4 Bytes - Size of unlisted chunk, mostly unused
    4 Bytes - Size of last chunk (duplicate)

    Object List (one entry per 16 bytes in "Size of object list"):
    8 Bytes - Object name
    8 Bytes - Object separator

    Tables (for each "Amount of tables"):
    Table header (24 bytes):
    4 Bytes - Number of entries in table
    4 Bytes - Size of "unique face data" in objects
    4 Bytes - Number of objects
    8 Bytes - Name of table
    4 Bytes - Size of "unique transformation data" in objects
    Table data (40 bytes, for each "Number of entries in table"):
    4 Bytes - Number of the vertex chunk containing data
    4 Bytes - Number of the face chunk containing data
    4 Bytes - Number of the positional chunk containing data
    4 Bytes - Number of vertices to skip in vertex chunk
    4 Bytes - Number of faces to skip in face chunk
    4 Bytes - Size of faces in object
    2 Bytes - Unknown
    2 Bytes - Number of vertices in object
    2 Bytes - Unknown
    2 Bytes - Which object from the object list this table belongs to
    2 Bytes - Number of faces in object
    2 Bytes - Unknown
    2 Bytes - Unknown
    2 Bytes - Unknown

    Vertex Chunk (for each "Number of vertex chunks"):
    Header (60 bytes):
    4 Bytes - Type of vertex chunk: 0 normal, 4 split, or 8 transformational
    4 Bytes - Count of vertex entries
    4 Bytes - Flags: 80 = only vertices, A0 = vertices and UVs, A8 = vertices, normals and UVs
    4 Bytes - Unknown
    2 Bytes - Unknown
    2 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    2 Bytes - Unknown
    2 Bytes - Unknown
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    2 Bytes - Unknown
    2 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Size of vertex entries
    Data (length = "Size of vertex entries" * "Count of vertex entries"; entry size is 12, 16, 20, 24, 32, 36, or 40):
    12 Bytes (3 floats) = vertices only
    16 Bytes (4 floats) = vertices plus an extra unknown float
    20 Bytes (5 floats) = vertices and UVs
    24 Bytes (6 floats) = vertices, an extra unknown float, and UVs
    32 Bytes (8 floats) = vertices, normals, and UVs
    36 Bytes (9 floats) = vertices, an extra unknown float, normals, and UVs
    40 Bytes (10 floats) = vertices, two extra unknown floats, normals, and UVs

    Face Chunk (for each "Number of face chunks"):
    Header (8 bytes):
    4 Bytes - Unknown
    4 Bytes - Size of face information
    Data (for the length of "Size of face information"):
    6 Bytes - faces 1, 2, and 3, 2 bytes each * ("Size of face information" / 3)

    Positional Chunk (for each "Number of positional chunks") - unlike other chunks, all headers are together; no data is between them.
    Header (72 bytes):
    4 Bytes - Count of entries
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    4 Bytes - Size of entries
    4 Bytes - Unknown
    Data (size = "Size of entries" * "Count of entries"; always starts at "Last chunk pointer"):
    12 Bytes (3 floats) = vertices only, * Count of entries

    If a vertex chunk has a 4 or 8 in its first 4 bytes, then a large portion of the file is dedicated to "strange" face and transformation data, called "unique" herein; there is never more than one such block. Sometimes with type 4 you can extract the data via the usual route through the tables, but type 8 never has table data accompanying the header. If "Size of unique face data" or "Size of unique transformation data" has a value other than 0, it refers to this chunk's data. The data starts at "Last chunk pointer" and runs for the length of "Size of unique face data"; what is odd is that it contains duplicates.

    Unique face data (count = "Size of unique face data" / 16):
    4 Bytes - Face 1
    4 Bytes - Face 2
    4 Bytes - Face 3
    2 Bytes - Unknown (seemed like an index number into the object list, but that is untrue for some/most files, unless some weird logic applies)
    1 Byte - Unknown
    1 Byte - Unknown

    After the face data comes the "unique transformation data". It is used as normal data in the script, but that is wrong, as some of the chunks that start with 4 have normal data in the vertex chunk.

    Unique transformation data:
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    4 Bytes - Unknown float
    4 Bytes - Index number for face data
    4 Bytes - Unknown
    4 Bytes - Index number for other transformation data, or 0xFFFFFFFF
    4 Bytes - Index number for next transformation data, or 0xFFFFFFFF

    Filling in any unknown data would be awesome and very much appreciated, though I am pretty much finished with this project. I am attaching all the files I have hereto. WGG Sample.zip WGG_convert - Final.py
    1 point
  39. It looks like the data is in blocks of 32x32 (0x200 bytes), so you need to untwiddle each block separately and then join them together to get your final image.
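    A sketch of the joining step in Python, assuming the untwiddled 32x32 blocks come in left-to-right, top-to-bottom order and the image width is a multiple of 32:

    BLOCK = 32

    def join_blocks(blocks, width, height):
        # blocks: list of 32x32 2D lists of pixels, already untwiddled
        image = [[0] * width for _ in range(height)]
        per_row = width // BLOCK
        for n, block in enumerate(blocks):
            bx, by = (n % per_row) * BLOCK, (n // per_row) * BLOCK
            for y in range(BLOCK):
                image[by + y][bx:bx + BLOCK] = block[y]
        return image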
    1 point
  40. Hi, here is the final version. As far as I can see and have tested, it extracts everything correctly. Some files have duplicate mesh data in 2 different places; in these cases, only one gets exported. When I have some time I'll write documentation on the layout for future use. I still can't figure out what the extra section is; it seems like transformation data. But it isn't needed to extract the mesh, so I am conveniently ignoring it, though I am saving it as "normals" in the obj file that is generated🙈😁. These normals might not be of any use to anybody; they can be easily removed from the script or from the .obj in question. WGG_convert - Final.py
    1 point
  41. OK. It seems all gs images are in linear format. They can be converted into bmp directly. So I made a quick and dirty BMS script to do the job. The converting process is a little bit slow, since it converts a single gs image at a time! Also, the 0x02 type (5551 pixel format) is not supported, since there are only a few. You can use another program like TextureFinder to convert them. How it works: a. convert gsp into gs (using GSPextract.bms); b. convert gs to bmp (using GS2BMP.bms). There are some batch files to help you run QuickBMS easily. Just edit the QuickBMS path in the batch files and they will work. Extract_GSP_to_GS(Drag_and_Drop_Here).bat and Convert_GS_to_BMP(Drag_and_Drop_Here).bat can be anywhere, and, as the names say, dropping the source file onto them will do. If you need batch conversion, place Batch_Convert_GS_to_BMP.bat in the same folder as the gs files and double-click the .bat file. G_Scripts2.rar PS: feel free to edit these scripts
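    If someone does need the 0x02 type, expanding 5551 pixels is simple enough in Python; a sketch, with the channel order assumed to be the usual GS layout (R in the low bits, A in the top bit), which may need adjusting:

    import struct

    def decode_5551(data):
        out = bytearray()
        data = data[:len(data) & ~1]  # guard against odd lengths
        for (v,) in struct.iter_unpack("<H", data):
            r, g, b = v & 0x1F, (v >> 5) & 0x1F, (v >> 10) & 0x1F
            a = 255 if v & 0x8000 else 0
            # expand 5-bit channels to 8 bits, emit RGBA8888
            out += bytes((r << 3 | r >> 2, g << 3 | g >> 2, b << 3 | b >> 2, a))
        return bytes(out)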
    1 point
  42. dress2_white dress2_white, part There were times when people were glad to get the mesh of one model. Nowadays they're expecting fully fledged plugins. The times they're changing...
    1 point
  43. I made my solution in C#, I understand that it's late, but I felt sorry to delete it, so I decided to post it too)) Urban_Chaos_Riot_Response_SAF.zip
    1 point
  44. Okay, this is a raw .bms script which should work. Open a text document, put the following text (quoted) inside, save, and rename it to *.bms. Open quickbms, choose the 4gb version, then run the script, choose rdg.bin, and save to any folder. Open the files with the g1m importer in Noesis.
    1 point
  45. New tutorial from me about translating Unity games https://ikskoks.pl/tutorial-how-to-translate-unity-games-using-uabea/
    1 point
  46. Try to decompress it with offzip http://aluigi.altervista.org/mytoolz/offzip.zip
    offzip.exe -a war_yan_a.x
    You should get 2 files out of this sample. Also, after decompressing, this doesn't look like graphics data, more like 3D model data.
    1 point
  47. 39 downloads

    Naraka:Bladepoint convert
    1 point
  48. Version 1.0.0

    730 downloads

    Here is a list of all (or almost all) Xentax topics archived by the Wayback Machine. Find the topic name with search or filter (see the "spiderman" screenshot as an example), then copy the URL for the list and open it. There you can read the whole topic, with instructions and comments, but there will be no attached files. You can get the attached files from archive.org - https://archive.org/details/xentax.7z - in the "attachments" folder. They are sorted by forum number and topic ID. So you have to look in the corresponding forum folder (16 = 3D models in this example) and topic folder (20634 for Spiderman PS4); there you can find all the files attached to that topic, for each post (if there are many).
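    Following that layout, a throwaway Python helper to build the attachments path for a topic (folder names per the post; the root is wherever you extracted the 7z):

    from pathlib import Path

    def attachments_dir(archive_root, forum_id, topic_id):
        # e.g. forum 16 (3D models), topic 20634 (Spiderman PS4)
        return Path(archive_root) / "attachments" / str(forum_id) / str(topic_id)

    print(attachments_dir("xentax", 16, 20634))  # xentax/attachments/16/20634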
    1 point
  49. Version 1.0

    58 downloads

    Tool to unpack/repack the braid.dat archive of Braid, Anniversary Edition. Usage: unpack: -u archive_name; repack: -r archive_name compression_level. When repacking you can optionally specify the compression level; legit values are from -4 (fastest) to 9 (slowest). The default value is 6 (the devs used it), but it's pretty slow, very slow I would say, so I decided to add this option, at least for testing purposes.
    1 point
  50. 🤔 What is the point of having a telephone if you can send postal mail? Each has its place. Discord is a poor medium for following a single thread (oftentimes you'll see 2 or 3 active conversations intermingled, which makes it quite confusing to follow later if you weren't one of the active participants), and it's bad for discoverability (nicely walled gardens that search engines know nothing about), but it's decent for higher-bandwidth quick chats while solving a problem. It's also sometimes useful to have a fallback in case there are ever issues with the main website (downtime, maintenance...), just to inquire what's going on. Personally, I'd promote asking questions in the forum *first*, so all later readers can easily benefit from the answers.
    1 point