The Full Throttle Remastered FMV Pipeline: Part 2

This is an overview of the full motion video pipeline for Full Throttle Remastered. Starting from the 20-year-old LucasArts archives, we'll investigate how we were able to achieve our remastered FMVs.

Trevor Diem, Blogger

August 29, 2018


...Welcome Back

In my previous blog post, The Full Throttle Remastered FMV Pipeline: Part 1, I talked about how we extracted the contents of the original FMV files and built some tools to dig through the ~67GB of archives in search of all of the intermediate, constituent parts that were used to create the FMVs. These parts formed the basis for creating the remastered FMV content and acted as a blueprint to get started.

As my previous blog post mentioned, the remastering pipeline is split into three routes: remastering the hand-painted frames, remastering the 3D models and remastering the audio. The following discussion will cover more specifics of the pipeline and the tricks we used to automate most of the video creation.

We upscaled all of the original hand-painted frames to fit within the 4K resolution (3840x2160). Adding additional width to the remastered scene and accounting for the fact that the game was displayed in non-square pixels, this meant that all remastered assets were authored at 4440x2400 pixels.

We chose to use Adobe Animate to remaster all hand-painted FMV frames since we already had a pipeline in place from work on Day of the Tentacle Remastered. The art team was already familiar with the process so it was a no-brainer.

Hand-Painted Remaster Example

The original 3D models within the archives were in 3D Studio Release 3 format. Luckily, modern versions of 3D Studio Max were able to import all mesh and cinematic keyframe data using yet another automation script. We then converted this intermediate file to be used with Autodesk Maya, where the artists would work their remaster magic.

New shaders were applied to the mesh surface to give it a new feel, higher fidelity textures were applied and the mesh data was greatly improved upon to give the model a smoother look. Additionally, the film gates for all cinematic cameras were widened to match our authoring resolution of 4440x2400 pixels since the original camera was in a shorter aspect ratio.

3D Model Remaster Example

And as for the audio, most of the original high-fidelity versions were found, with some exceptions. The English VO studio recordings were packed within the archives, but the other VO languages, managed by external partners, were not available. We also found the original music by The Gone Jackals used throughout the FMVs. As for the SFX, some were replaced by 'thicker' versions of a similar type of sound.

Below is a flow diagram roughly explaining how we treated the initial assets and mapped them to their remastered counterparts. The original extracted video frames (via SanExtract.exe) were used as a sort of ‘ground truth’ to compare against all of the archive data files. Archive manifest files were generated from a recursive search through all of the archive data; these were then used to easily find all unique files of a specific file type.

Our SanWrangler was used as a visual comparison between the original ‘ground truth’ frames and the archive data. A user would then visually map archive files to the original frames and save the result as an XML dependency map. Once the dependency map was created, it was a matter of using a Python script to auto-generate the hand-painted blueprint files and the Maya 3D blueprint files from the original assets. These files were a starting point for the art team to take over and add their special remastered sauce.

Original Asset Extraction & Blueprint Creation
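
To give a concrete flavor of the manifest step mentioned above, here's a minimal sketch of what a recursive archive scan could look like in Python. The function name, archive path and the use of an MD5 content hash to collapse duplicate copies are my own illustration, not the production tool:

import os
import hashlib

def build_manifest(archive_root, extension):
    # Recursively walk the archive and collect unique files of a given type.
    # Content hashing collapses duplicate copies scattered across the archives.
    manifest = {}
    for dirpath, _, filenames in os.walk(archive_root):
        for name in filenames:
            if not name.lower().endswith(extension):
                continue
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                digest = hashlib.md5(f.read()).hexdigest()
            manifest.setdefault(digest, path)  # first occurrence wins
    return sorted(manifest.values())

# e.g. find every unique animation file in the archives
nut_files = build_manifest('G:/FullThrottle_Backup', '.nut')

Python sketch of generating an archive manifest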

This was really our first step of many in order to end up with a finished, remastered FMV. Sure, now we’ve got a starting point of files that need to be remastered, but how do we even put all of these pieces back together?

The following will discuss the methods of automation used throughout the FMV pipeline. These methods aren't isolated to FMV generation or even this specific game; I feel they're quite universal and can be repurposed for many aspects of game development.

After all, like most art pipelines, this is going to be an iterative process. There might be a bug somewhere that an artist will have to fix in a source file, and something, somewhere, is going to have to re-export a bunch of asset-dependent files. I think we'd all prefer this work to be done by a computer rather than a fallible human.

For the purposes of Full Throttle Remastered, we knew exactly how the videos should look and sound; they just needed to be better looking and sounding. All of the videos were to match the originals, cut by cut, including camera paths, audio volume and pan adjustments, etc. And in order to do that, we needed to know how the original FMV pipeline might have worked. After all, the ~67GB of data from the LucasArts archives contained a lot of insight into how the original pipeline worked. This was a great starting point.

The Original FMV Pipeline

Now this is a bit nostalgic but I feel it's important to talk about the 'digital archeology' aspects of this kind of game remastering. After all, understanding the original pipeline answered a lot of questions and it provided insight as to how an asset got transformed into the end result. And when we construct the new remastered FMV, we need to apply the same transformations to our remastered source assets to ensure that the final product looks and feels as similar to the original as possible. This includes things like:

  • The placement of audio tracks on the timeline

  • The volume and pan adjustments of the audio tracks during runtime playback

  • The frame composition and mapping of each video cut to the final product

A tool called SMUSHFT (SMUSH for Full Throttle) allowed the FMV author to place video and audio resources on a timeline and then encode a resulting FMV (.san) to be consumed by the game engine. The videos were segmented into a series of cuts that were then stitched together when producing the final result. SMUSHFT allowed the user to visually place these resources along a timeline and iterate on a video if needed.

Now it goes without saying that I didn’t work on the original game. I can only infer things about how the original assets were authored by looking through the archive data and seeing what kind of file formats and executables were packaged within the archive data. Anyway, it appears that the 3D models were authored in Autodesk 3D Studio Release 3 while the hand-painted parts were created in DeluxePaint Animation v1.0. It’s unknown to me what steps were used in the generation of the waveform data for the audio, but each audio clip used (.sad) contains keyframed volume and stereo pan information embedded within so the audio mix could be generated at runtime.

The Original FMV Pipeline Flow

And once these individual frame parts were done being authored, there was a frame combination process. This combination process would combine the 3D frame renders with the hand-painted animation frames (amongst other things) resulting in the final product used by SMUSHFT (the .nut files). Once the project was ready to be encoded, the video was processed and the final result (.san) was ready for the game engine playback.

SMUSHFT did the final encoding for the original video file format (.san) and each video file had a project file (.pro) that defined how the video was put together (audio, video, subtitle locations). We wanted to extract this information so we could then generate an Adobe Premiere Pro project file to use for encoding the remastered 4K version of the video. This required us to reverse engineer the SMUSHFT project file.

Reversing File Formats

Having the source code is great because you can just read through the code and figure out how the project file was created/read. Without source code, it's a matter of opening up the project file in a hex editor to identify patterns within the file. This was our method used to extract the useful contents of the SMUSHFT project file.

Since we were able to run the original SMUSHFT in DOSBox, we were able to see the user facing interface of the program, which provided some insight into the file format. Consider this screenshot when opening up an original .pro file:

SMUSHFT Project Example

You'll notice a few things. There are named resources (2027.NUT, 2027.SAD, IN_06A.NUT, etc.). These named resources are most likely to be found as visible ASCII characters within the file. Also, you see frame counters at the top of the timeline and incrementing layer numbers on the left side of the timeline. And lastly, each resource within the timeline exists at a specific frame number and lasts for a certain duration. Being able to extract this information from the original project files allowed us to know where to auto-magically place the new assets on the timeline in Adobe Premiere Pro.

Adobe Premiere Pro Project Example

Opening up the original project file within a hex editor yields some quite useful information. Consider the hex representation of the above example:

SMUSHFT Project File in a Hex Editor

We can start visually scanning the .pro file using a hex editor (I love using Hexplorer) and looking for patterns. Easily found are the named resources in ASCII format, which are null-terminated. And around the same area in the file are a bunch of values stored as shorts (two-byte integers). Comparing numbers viewed through the SMUSHFT tool against numbers seen in the hex representation of the project file provided us with a basis to properly convert the original project file to a modern video editing suite like Adobe Premiere Pro.
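
To give a flavor of that process, here's a minimal sketch of such a scan in Python. The regular expression and the field names (frame, layer, duration) are illustrative assumptions on my part; the real offsets and meanings were worked out by comparing values against the SMUSHFT UI:

import re
import struct

def scan_pro_file(path):
    with open(path, 'rb') as f:
        data = f.read()

    # Resource names like '2027.NUT' appear as printable ASCII ending in a null
    for match in re.finditer(rb'[ -~]{2,}\.(NUT|SAD)\x00', data):
        name = match.group(0).rstrip(b'\x00').decode('ascii')
        # Read three little-endian shorts following the name
        # (field meanings here are hypothetical)
        frame, layer, duration = struct.unpack_from('<3H', data, match.end())
        print(name, frame, layer, duration)

Python sketch of scanning a binary project file for patterns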

The Automation Tool Belt

A majority of this pipeline was automated and hands-off. One reason is that the content of the videos was already set in stone by the originals; we were really doing a content upgrade. And that being the case, there wasn't much room to entirely change the format of the FMVs; we just needed to figure out a way to re-create the videos with higher-fidelity assets while minimizing production time.

First, I'll say that talking with the content creation (art, audio) team is an important first step before trying to automate anything. The reason being that most automation processes require content creators to adhere to a specific set of rules when setting up projects: file locations, which tools are to be used, etc. For this project, that meant agreeing on content authoring tools for hand-painted frames, 3D models and audio, and eventually the video authoring suite that would tie it all together. Another point to agree on is identifying which aspects of the pipeline are hands-on and which are hands-off (automated).

That said, we agreed on the following:

  • Hand-painted frames would be authored in Adobe Animate at 4440x2400 pixels

  • 3D Models & Animations would be authored in Autodesk Maya and would be rendered manually, also at 4440x2400 pixels

  • Audio files would be delivered in 48kHz 16-bit .wav format

  • Video segment files would be initially generated automatically, and an artist could modify that file any way they wanted (with some exceptions)

  • The final steps to stitch and encode the FMV would be automated

We used a few methods of automation to get the tools to be as 'automatable' as possible. Python was chosen as the 'glue' used to tie everything together since it's quite extensible with various binding libraries and it's easy to write, extend and maintain. We also made use of its built-in support for platform-agnostic file manipulation (copying, moving, deleting).
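
For example, staging intermediate files with the standard library's shutil module might look like this (the paths here are hypothetical):

import os
import shutil

# Stage a remastered render into the encode directory, creating
# intermediate folders as needed -- identical on any platform
src = 'renders/2027/frame_0001.png'
dst_dir = 'encode/2027'
os.makedirs(dst_dir, exist_ok=True)
shutil.copy2(src, dst_dir)

Python example of platform-agnostic file manipulation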

Python - Calling Standalone Executables, Retrieving Results

Python’s subprocess library is great since you can kick off another executable and wait for it to finish doing its thing. It allows retrieval of the program's return code as well as access to the stdout & stderr buffers.


import subprocess
 
# The command to execute
command = 'SanExtract.exe -f -i credits.san -o \"C:/output_dir/\" '
 
# Execute the command via subprocess
child = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 
 
# Wait for process to complete, returns stdout & stderr buffers
stdout, stderr = child.communicate() 
 
# Retrieve the return code from the process
return_code = child.returncode

Python example of interacting with executables

Python - Win32 API

The Win32 API is really helpful as it gives us the ability to send keyboard and mouse messages to the Windows OS from a script. For example, you can create a function to click the mouse at a specific X,Y screen location:


import win32api
import win32con
 
def ClickXY(x, y):
    win32api.SetCursorPos((x, y))
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, x, y, 0, 0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, x, y, 0, 0)

Python example simulating a mouse click

You can even send keyboard stroke events (with or without control modifiers):


import time
import win32api
import win32con
 
def PressKey(code, modifierCode=None):
    if modifierCode:
        win32api.keybd_event(modifierCode, 0, 0, 0)
 
    win32api.keybd_event(code, 0, win32con.KEYEVENTF_EXTENDEDKEY | 0, 0)
    time.sleep(0.021)
    win32api.keybd_event(code, 0, win32con.KEYEVENTF_EXTENDEDKEY | win32con.KEYEVENTF_KEYUP, 0)
 
    if modifierCode:
        win32api.keybd_event(modifierCode, 0, win32con.KEYEVENTF_KEYUP, 0)

Python example simulating keystrokes

There’s a lot more to it, but for our purposes, the above examples helped greatly. So, given any active Windows program, you can send it keyboard events and it’ll start typing just as if you were typing things into a keyboard, hotkeys included.
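
As a quick usage example, invoking the PressKey helper above to fire a hotkey might look like this (assuming the target application is the active window):

import win32con

# Send Ctrl+S to trigger a 'Save' hotkey; letter keys share their
# virtual-key code with their uppercase ASCII value
PressKey(ord('S'), win32con.VK_CONTROL)

# Press Enter to confirm a dialog
PressKey(win32con.VK_RETURN)

Python example invoking the keystroke helper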

Python - Computer Vision For Button Clicking

The most unique method was using computer vision software in areas where tools could not be automated through internal scripting. You see, most modern tools have some sort of scripting support but still require user intervention. For example, 3D Studio Max allows you to run MAXScript files via the command line. In this scenario, we've run a script to auto-import a 3D mesh file, at which point 3D Studio Max boots up and displays the Shape Import dialog that a user now has to click on:

Example of Shape Import Dialog

Ok, so you wrote a script to automate things and now you’ve got to sit there like a Drinking Bird to peck at the keys when it asks you!?!? Rather than have a human sit at a keyboard waiting to click on a popup, we can have our script take a screenshot, use the OpenCV Python bindings to search for a template button image and then auto-click it. Given the example above, here’s the template image we used.

Template Image for ok_button.png

Note that the template image contains additional features (the text for “Single Object” and “Multiple Objects”). This allows us to retrieve a more deterministic search result. Below is an example of the Python script used to auto-click on the found location of a template image:


import cv2
from PIL import ImageGrab  # Pillow's screenshot module (older PIL: 'import ImageGrab')
 
# "Constants"
TEMPLATE_THRESHOLD = 0.25
CLICK_OFFSET = 20
 
# Read the template image to search for
template_image = cv2.imread('images/ok_button.png', 0)
 
# Screenshot the current desktop and load it to a cv2 format
screen = ImageGrab.grab()
screen.save('screen.png')
screen_image = cv2.imread('screen.png', 0)
 
# Search for the template within the screenshot and retrieve search results
match_result = cv2.matchTemplate(screen_image, template_image, cv2.TM_SQDIFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(match_result)
 
# If below the threshold, it's likely we know where to click
# (ClickXY is the win32api helper defined earlier)
if min_val < TEMPLATE_THRESHOLD:
    ClickXY(min_loc[0]+CLICK_OFFSET, min_loc[1]+CLICK_OFFSET)

Python example clicking on a screen element using OpenCV
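
In practice, a dialog may not be on screen yet at the moment the script checks, so it's natural to wrap the template match in a polling loop. Here's a minimal sketch that re-uses cv2, ImageGrab, ClickXY and the constants from the example above; the timeout values are arbitrary:

import time

def wait_and_click(template_path, timeout_sec=60, poll_interval_sec=1.0):
    # Poll the screen until the template button appears, then click it
    template_image = cv2.imread(template_path, 0)
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        ImageGrab.grab().save('screen.png')
        screen_image = cv2.imread('screen.png', 0)
        match_result = cv2.matchTemplate(screen_image, template_image, cv2.TM_SQDIFF_NORMED)
        min_val, _, min_loc, _ = cv2.minMaxLoc(match_result)
        if min_val < TEMPLATE_THRESHOLD:
            ClickXY(min_loc[0] + CLICK_OFFSET, min_loc[1] + CLICK_OFFSET)
            return True
        time.sleep(poll_interval_sec)
    return False  # the dialog never showed up

Python sketch polling for a dialog before clicking it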

Now, the above examples are all Python-centric. But there are cases in which we needed closer control of the Windows OS windowing system. This led us to develop native tools which use the Windows Automation API.

Windows Native (C++) - Windows Automation API

The Windows Automation API exposes the legacy Microsoft Active Accessibility API (MSAA) as well as the Microsoft UI Automation API. For a good overview, feel free to consult Microsoft’s landing page on the subject.

At the end of the day, we are able to query certain Windows controls (buttons, text inputs, tabs, menu items) and figure out where those things are spatially located on-screen and then click or interact with them. The Windows SDK also comes with some testing tools which allow you to see which properties are exposed. This acted as a good starting point to map out what could be automated given a specific program.

Inspect.exe is quite useful for showing the Windows control hierarchy within a program, so you have a rough outline of where things like menu controls exist and how to refer to the window controls within the automation API calls.

Inspect.exe Example

That said, once you know the control hierarchy of a Windows program, you know how to find it from the main window handle and can start clicking things like menu items through the API like so:


#include <WinUser.h>
#include <UIAutomation.h>
 
// Click on a sub-menu item given the Window & Menu handles.
void ClickSubMenu(HWND hwnd, HMENU hmenu, const char *pMenuName)
{
    // Iterate through the menu items of the window
    int menu_item_count = GetMenuItemCount(hmenu);
    for(int menu_id = 0; menu_id < menu_item_count; ++menu_id)
    {
        char menu_name[MAX_PATH];
        int len = GetMenuString(hmenu, menu_id, reinterpret_cast<LPSTR>(&menu_name[0]), sizeof(menu_name), MF_BYPOSITION);
 
        // Look for the specific menu you're searching for and click it
        // Make sure to set the window active before doing it...
        if(!strcmp(pMenuName, menu_name)) {
            // now get the rect and click the center
            RECT rect;
            BOOL success = GetMenuItemRect(hwnd, hmenu, menu_id, &rect);
            if(success) {
                SetActiveWindow(hwnd);
                POINT point = GetMiddlePoint(rect); // project helper: returns the center of the rect
                SetCursorPos(point.x, point.y);
                mouse_event(MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_LEFTDOWN, point.x, point.y, 0, 0);
                mouse_event(MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_LEFTUP, point.x, point.y, 0, 0);
                Sleep(DO_TASK_INTERVAL_WAIT_MS); // project constant: settle time after UI actions
            }
        }
    }
}

C++ Example clicking on a Windows Menu control

And of course, sending keystrokes to an active window is as simple as:


#include <WinUser.h>
#include <UIAutomation.h>
 
// Type the character string to the given window handle
static void TypeCharacters(HWND window_handle, const char *pString) {
    int len = strlen(pString);
    for(int count = 0; count < len; ++count)
    {
        SendMessage(window_handle, WM_CHAR, (WPARAM)pString[count], (LPARAM)0);
        Sleep(CHARACTER_REPEAT_INTERVAL_MS); // project constant: delay between characters
    }
}

C++ Example simulating keystrokes

There’s certainly a lot more that those APIs have to offer. I’ve found using the Inspect.exe tool sheds light on what specific Window elements of any given program are accessible.

Intermediate Text Formats

Part of our process involved saving files as text representations and then modifying the values within those text representations. After all, tools contain a user interface for modifying the state of some sort of backing data. And if you know what that backing data is supposed to be, you don't need to go through the tool; you can just modify the backing data directly. The trick is, you need to know how to manipulate that backing data; this can be troublesome when trying to modify proprietary file formats. Wouldn't it be great if everything was just a text file you could go in and muck around with?

The trick comes with finding ways to circumvent the proprietary file formats that most tools have. The method usually involves taking advantage of the Import and Export options found in most modern commercial tools. Below are some examples:

Adobe Premiere Pro saves as a proprietary file format, but you can import/export projects as a Final Cut Pro XML. Once exported to the XML representation, it’s a matter of fixing up the XML to what we want it to be and then re-importing the project back into Adobe Premiere Pro.
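
As an illustration, generating such an XML from the reversed timeline data could look like the sketch below. This is a heavily simplified stand-in for the real Final Cut Pro (xmeml) schema, which requires many more elements (frame rates, media references, etc.):

import xml.etree.ElementTree as ET

def make_fcp_xml(sequence_name, clips, out_path):
    # Lay named clips on a single video track by frame number
    xmeml = ET.Element('xmeml', version='4')
    sequence = ET.SubElement(xmeml, 'sequence')
    ET.SubElement(sequence, 'name').text = sequence_name
    media = ET.SubElement(sequence, 'media')
    track = ET.SubElement(ET.SubElement(media, 'video'), 'track')
    for name, start, end in clips:
        clipitem = ET.SubElement(track, 'clipitem')
        ET.SubElement(clipitem, 'name').text = name
        ET.SubElement(clipitem, 'start').text = str(start)
        ET.SubElement(clipitem, 'end').text = str(end)
    ET.ElementTree(xmeml).write(out_path, encoding='UTF-8', xml_declaration=True)

# timeline positions would come from the reverse-engineered .pro data
make_fcp_xml('2027', [('2027.NUT', 0, 120), ('IN_06A.NUT', 120, 240)], '2027.xml')

Python sketch emitting a simplified Final Cut Pro XML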

Another example is fixing up texture references found in the legacy 3D mesh format of Autodesk 3D Studio Release 3. Upon importing the original mesh file, we saved the newly converted mesh as an intermediate ASCII .fbx file. Once in that format, it was a matter of scrubbing the text file and replacing all of the texture strings with valid ones, as sketched below.
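
Since the ASCII .fbx is plain text, the fix-up can be as simple as a string substitution pass. A minimal sketch, with a hypothetical mapping from legacy texture names to remastered ones:

def fix_texture_references(fbx_path, texture_map):
    # Replace legacy texture filenames in an ASCII .fbx with remastered ones
    with open(fbx_path, 'r') as f:
        contents = f.read()
    for old_name, new_name in texture_map.items():
        contents = contents.replace(old_name, new_name)
    with open(fbx_path, 'w') as f:
        f.write(contents)

# hypothetical mapping from a legacy 8.3 texture name to a modern image
fix_texture_references('BENBIKE.fbx', {'BIKETEX.GIF': 'biketex_4k.png'})

Python sketch scrubbing texture references in an ASCII .fbx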

Adobe Animate/Flash is funny since it turns out .fla files are really just .zip files which are kinda broken. The uncompressed representation is stored in XFL format, which can reference non-XFL objects (like bitmap images) from a local folder reference. The lead engineer at Double Fine, Oliver Franzke, provided a modified Python script to do ZIP compression/decompression on .fla files so we could create and manipulate those files.
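
Ignoring the quirks that required the modified script, the round trip is conceptually just zip handling. A minimal sketch using Python's standard zipfile module:

import os
import zipfile

def unpack_fla(fla_path, out_dir):
    # Expand a .fla into its XFL folder representation
    with zipfile.ZipFile(fla_path) as archive:
        archive.extractall(out_dir)

def pack_fla(src_dir, fla_path):
    # Re-pack an XFL folder back into a .fla
    with zipfile.ZipFile(fla_path, 'w', zipfile.ZIP_DEFLATED) as archive:
        for dirpath, _, filenames in os.walk(src_dir):
            for name in filenames:
                full = os.path.join(dirpath, name)
                archive.write(full, os.path.relpath(full, src_dir))

Python sketch treating a .fla as a zip archive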

Use-Case Examples

3D Studio Max

A modern version of 3D Studio Max was used to import the original .prj file into a scene and then save out an ASCII .fbx. For each .prj that needed to be converted, a MAXScript (.ms) file was auto-generated from a Python script and looked something like this:


importFile "G:\FullThrottle_Backup\FullThrottle_SourceAssets\BENBIKE.PRJ" #noPrompt

MAXScript example importing a 3D model file

And then that .ms file was simply invoked from a Python command by way of 3dsmax.exe:


3dsmax.exe -U MAXScript "C:\FullThrottleRemastered\import_prj.ms"

Console command example invoking an executable given a named MaxScript file
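
Putting those two steps together, the conversion can be driven entirely from Python: write the one-line MAXScript for each .prj, then hand it to 3dsmax.exe via subprocess. A minimal sketch with illustrative paths (and 3dsmax.exe assumed to be on the PATH):

import subprocess

def convert_prj(prj_path, script_path='C:/FullThrottleRemastered/import_prj.ms'):
    # Auto-generate the import script for this .prj...
    with open(script_path, 'w') as f:
        f.write('importFile "%s" #noPrompt\n' % prj_path)
    # ...then launch 3ds Max to run it
    child = subprocess.Popen(['3dsmax.exe', '-U', 'MAXScript', script_path])
    child.wait()
    return child.returncode

Python sketch generating and invoking a MAXScript import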

As stated above, this method would eventually cause 3D Studio Max to pop up a UI dialog box which needed to be clicked on. Our use of the OpenCV Python bindings aided in clicking this box so the original file could be imported without user intervention. After the file had been imported, a series of keyboard menu keys were pressed (via Python’s win32api) to run yet another MAXScript file that would export the model as an ASCII .fbx file. Once the .fbx was saved as a plaintext file, all of the model’s texture dependency strings were replaced with modern format image references. The newly modified .fbx file was then auto-loaded again in 3DSMax and exported as a .max file. At this point, the .max file was ready for an artist to remaster.

Adobe Animate/Flash

Adobe Animate/Flash was used to remaster all hand-painted FMV resources. We took the original hand-painted frames (320x200 pixels) found by SanWrangler and used those as the blueprint. The images were upscaled to fit the authoring size of 4440x2400 and then we automatically generated a .fla file using a Python script.

This .fla was auto-generated from scratch using existing knowledge of the Adobe Animate/Flash XFL file format. We were able to leverage the toolset already created by Oliver Franzke to generate a blueprint version of the hand-painted animation files.

Adobe Premiere Pro

The Windows Automation API was quite useful when determining where certain Premiere Pro Windows controls were on screen. And in some cases, there were no hotkey bindings. Upon retrieving the location of the menu control, it's a matter of moving the cursor and sending a click event to that location.

Now that was great, but some controls are rendered by other means that are not visible to the Windows Automation API. In this area we chose to use the OpenCV Python bindings so we could use OpenCV within our scripting environment. This was most useful for Adobe Premiere Pro since, although it does have some JavaScript scripting support, the type of control required was not available through its API.

Additionally, Adobe Premiere Pro project files are of a proprietary binary format. Therefore, we couldn’t just magically create a Premiere Pro file, but we could make use of the Import... functionality, which can bring in a Final Cut Pro project, an XML-based format. It was a matter of generating the correct XML file that lays all of the resources on the timeline correctly and then auto-importing that Final Cut Pro .xml to convert it to the format we needed. After that, we were able to auto-queue frame exports so they could be combined into the final video.

All The Steps

Below is a somewhat generalized flow diagram which identifies all of the automated parts in our new pipeline. Each automated segment is surrounded by a round-cornered rectangle with additional information about which automation techniques were used. 

Simplified Automation Flow for Remastered FMV

You’ll note that most of the work with Adobe Premiere Pro required the use of Python as well as specialized Windows native code. The reason was the complexity of Premiere Pro’s windowing structure; we needed the native Windows Automation API to ensure all of the dependent child windows of the application were properly interacted with.

Tied Together

Utilizing the above methods, we were able to set up multiple automation machines to split up the workload of all of the videos. A Slack bot was also integrated to post automation feedback to our Slack channel on the status of videos running through the pipeline, so we knew if something blew up.
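
Such a bot can be as simple as posting to a Slack incoming webhook. A minimal sketch, assuming the requests library and a placeholder webhook URL:

import requests

SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # placeholder

def notify_slack(message):
    # Post a pipeline status update to the team channel
    requests.post(SLACK_WEBHOOK_URL, json={'text': message})

notify_slack('FMV pipeline: credits.san finished encoding on machine 02')

Python sketch posting pipeline status to Slack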

Example Automation of Adobe Premiere Pro

Problems Encountered

This sounds great but throughout the project we did encounter issues. I'll just enumerate the main points.

1) Final audio mix iteration. All of the audio files were remastered on a piecemeal basis. Therefore, if there was something like a 'BANG!' SFX, the audio engineer had no idea where it would be placed within the mix; they would have to wait for the video to be encoded before finding out how it sounded.

2) Storage of uncompressed intermediate files. The frames were kept in an uncompressed format until the very end, when they were encoded into the final video. This meant that there were a lot of frames on local storage, and a subset of those were stored in source control. This type of storage bloat is not insignificant and can be quite costly depending on the source control used (we used Perforce).

3) Turnaround time. A good chunk of the pipeline was automated, freeing up engineers to work on other things. However, the turnaround time for a video could be a bit costly. The most time-intensive part was encoding the 4K frames. We did have methods to inspect the state of the assets within Perforce to determine which steps needed to be re-run, but the method wasn't as granular as it could have been.

Next Steps

Geez, that was a mouthful! Although our implementation of this pipeline was quite specific to the task at hand, I feel the individual automation methods can be used universally throughout game development. Now that this is out of the way, there's still the topic of runtime playback of the FMVs. This includes topics such as multi-lingual audio stream encoding as well as frame syncing of the classic FMV playback. Stay tuned for Part 3!
