
Book Excerpt - Implementing a Digital Asset Management System: Workflow Integration

Depending on the type of project you are working on, there may be a high potential for process automation and optimization, which is one of the ideal ways to profit from a digital asset management (DAM) system. This excerpt from Focal Press's Implementing a Digital Asset Management System focuses on integrating your workflow with a DAM system.

The following is a selected excerpt from Implementing a Digital Asset Management System (ISBN 0-240-80665-4) published by Focal Press.

--

Depending on the type of project you are working on, there may be a high potential for process automation and optimization. While this is not necessarily something to worry about from the start, it is one of the ideal ways to profit from a DAM system. Any kind of automation also has the advantage that it results in improvements for the users, giving them a reason to appreciate the extra work that they have to put into using the DAM.

How processes can be automated and improved depends largely on the DAM system you are using. Even if you are only using a well-managed directory structure on your server, you can still write scripts that maintain and control data.

Use metadata

Many DAM systems offer metadata storage. Metadata can be used to provide the users and the system with information about the assets stored in the database. The most common example is EXIF information stored in images made by digital cameras. This typically includes detailed information about when and with what settings the pictures were taken, but it can also tell the system how the camera was held when the picture was taken. This allows an image viewer to automatically rotate the image to display it on screen.
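To make the camera-orientation case concrete: the EXIF Orientation tag holds one of eight defined values, and a viewer maps each value to the transform needed to display the image upright. A minimal sketch of that mapping (the description strings are our own wording, not part of the EXIF standard):

```python
# Map EXIF Orientation tag values (1-8) to the transform a viewer
# must apply to show the image upright. The numeric values follow
# the EXIF specification; the labels are illustrative.
EXIF_ORIENTATION = {
    1: "no transform",
    2: "flip horizontal",
    3: "rotate 180",
    4: "flip vertical",
    5: "transpose",
    6: "rotate 90 CW",
    7: "transverse",
    8: "rotate 270 CW",
}

def upright_transform(orientation: int) -> str:
    """Return the display transform for an EXIF orientation value."""
    return EXIF_ORIENTATION.get(orientation, "unknown")
```

A photo taken with the camera turned on its side typically carries orientation 6 or 8, which is how a viewer knows to rotate it before display.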

Metadata is either automatically generated, as in a digital camera, or it can be added by the user. If you create an asset library with thousands of images, it's sensible to use metadata to describe the images so they can be found easily later on.

If you want to automate data management, you can also use metadata to tag files in your asset management system. For example, if you mark all files that should later be used for distribution, it is then very easy to set up tools to search for this information in the database and automatically extract all the assets that are tagged accordingly.
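The extraction step described above can be sketched in a few lines. The record shape below (a list of dicts with "path" and "tags" fields) is an assumption for illustration; a real DAM would return its own record format:

```python
# Hypothetical metadata records as a DAM query might return them;
# the field names ("path", "tags") are assumptions for illustration.
assets = [
    {"path": "textures/wall_01.png", "tags": ["distribution", "approved"]},
    {"path": "textures/wall_01.psd", "tags": ["source"]},
    {"path": "audio/theme.wav",      "tags": ["distribution"]},
]

def assets_for_distribution(records):
    """Return the paths of all assets tagged for distribution."""
    return [r["path"] for r in records if "distribution" in r["tags"]]
```

A packaging script could feed this list straight into an archive or copy step, so the distribution set is always derived from the tags rather than maintained by hand.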

Adding metadata can be a tedious task, and getting every user to faithfully add the relevant information can be difficult. Therefore, try to make this as easy as possible but also make sure that users can't work around the process. It helps to give users pre-defined values that they can choose from. One good way to encourage metadata entry is to use popup dialogs that ask for specific information upon import.

As an example, think about an image library. You want to make sure that images have enough metadata to be located quickly later. Using a text field that can be filled with any kind of description may lead to unsearchable information—words could be misspelled and descriptions may be subjective. Decide beforehand which criteria can be used to describe the images and then use selection boxes to allow the user to pick from these options. A free text field should only be used for additional information.
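The pick-from-a-list approach implies a validation rule: every record must use one of the pre-defined values. A minimal sketch, where the categories and allowed values are invented for illustration:

```python
# Controlled vocabularies for image metadata; the field names and
# allowed values here are illustrative assumptions, not a standard.
VOCABULARY = {
    "subject": {"character", "environment", "prop", "vehicle"},
    "style":   {"concept", "final", "reference"},
}

def validate_metadata(metadata: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    errors = []
    for field, allowed in VOCABULARY.items():
        value = metadata.get(field)
        if value is None:
            errors.append(f"missing required field: {field}")
        elif value not in allowed:
            errors.append(f"invalid value for {field}: {value!r}")
    return errors
```

Running a check like this at import time is what keeps the free-text field from becoming the only searchable information.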


Some DAM systems provide annotations: metadata that is used to mark change requests in assets

Having incomplete information in your DAM will make it unusable, and if users ever get the impression that the metadata is incomplete, they will stop relying on it. When rolling out your DAM, make sure that any metadata you want to use is already defined, and that it is clear to users what they are supposed to provide to the system and what they can search for.

Automate repetitive tasks

Repetitive tasks are the first thing that can be automated once you have a well-defined data structure. Imagine that you are creating textures for 3D visualization. This is typically done in a 2D application such as Photoshop and the result is then exported to the format of the 3D engine being used. Most 3D engines prefer textures in a specific resolution and bit depth, and in some cases you even need different resolutions of the texture. For users, this means that after they complete a change they have to save the original file, then make sure it is converted to the correct format, and finally move it to the correct target locations.

Most of this can easily be automated; you can even have scripts to check the result. All users have to do is to save their changed files and then tell the DAM to do the rest. With proper logic, the script can even check the texture for the correct format and send the user an error notification if anything is wrong.
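The format check mentioned above can be very small. A sketch of such a validation step, assuming (purely for illustration) that the target engine wants power-of-two dimensions and 8- or 16-bit channels:

```python
def is_power_of_two(n: int) -> bool:
    """True if n is a positive power of two (bit-trick check)."""
    return n > 0 and (n & (n - 1)) == 0

def check_texture(width: int, height: int, bit_depth: int) -> list:
    """Return a list of format problems for an exported texture.

    The constraints here (power-of-two sides, 8/16-bit depth) are
    illustrative assumptions; a real engine defines its own rules.
    """
    problems = []
    if not is_power_of_two(width) or not is_power_of_two(height):
        problems.append(f"dimensions {width}x{height} are not powers of two")
    if bit_depth not in (8, 16):
        problems.append(f"unsupported bit depth: {bit_depth}")
    return problems
```

If the returned list is non-empty, the script can refuse the export and send the user the list as the error notification.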

Automating with scripts is quite easy, even for the inexperienced programmer. Start with the single steps that you want to automate and then slowly build a set of tools to take the burden away from the user. Most operating systems have built-in automation tools that can interface with applications on different levels. Examples are shell scripts on Unix derivatives, the Windows Script Host, and AppleScript.


An automated conversion or compilation pipeline can automate data exchange between departments

Integrate your tools

Having external scripts helps the user significantly, but it is even better if the applications being used can directly interface with the DAM system. Many companies that work with in-house formats create their own tools; this is especially true for game companies and large CG productions. Since these specialized applications are created in-house, any aspect of them can be changed as needed. If the DAM system you are using is open enough, these in-house tools can interface directly with the DAM, ridding users of the need to work outside of them.

As soon as the content creation tools are tightly integrated with the DAM, production can also benefit from the additional information that can be stored in a DAM. If the DAM supports some kind of meta-information storage, the integrated applications can add information that they need to “understand” the assets.

To use 3D data as an example again, once a 2D application has finished exporting an image to the DAM in all required resolutions, it could then add information about the file it came from to each resulting image. If after two years of working you suddenly realize you have to change something in the exported texture, you can simply click on the texture in your DAM-integrated tool, and it will tell you which original file has to be modified to perform the changes you need.
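The provenance idea above amounts to storing a back-pointer on each exported file. A minimal sketch, where the store is a plain dict and the field names are assumptions for illustration:

```python
# Sketch of provenance metadata: each exported texture records the
# source document it was generated from. The dict-based store and
# the "source" field name are illustrative assumptions.
def record_export(store: dict, source: str, exports: list) -> None:
    """Attach the source file's path to every exported resolution."""
    for path in exports:
        store[path] = {"source": source}

def source_of(store: dict, export_path: str):
    """Look up which original file produced a given exported texture."""
    entry = store.get(export_path)
    return entry["source"] if entry else None
```

With this in place, the "click the texture, find the original" lookup described above is a single metadata query rather than a search through two years of project folders.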


Alienbrain offers plug-ins for most major 2D and 3D tools

Although integrating tools with a DAM creates an extremely powerful solution, it is only feasible for companies with experienced programmers. Most applications and tools require C programming experience to customize, and writing good, usable tool integrations requires considerable experience with the tool itself.


Integrate your workflow

Some DAM systems provide specific workflow support with varying degrees of flexibility. To determine how a DAM will accommodate your project, first analyze how data is created and modified in your projects, and then think how this data moves through the projects. If each file is only handled by one or two people, the workflow is pretty simple—trying to model it in the DAM probably wouldn't afford much added benefit. But if the data has a number of interdependencies and is modified and used by many team members, implementing a suitable workflow might lead to quality improvements.

Depending on the DAM system you are using, “workflow” has a variety of meanings. Systems geared towards software development usually make a distinction between different versions and branches (variations) of the files and folders, and the workflow is all about controlling the dependencies of these files. In document-oriented systems, the focus is more on controlling who has access to the files.

You should never try to force a brand-new workflow on your production; rather, look at how you are currently doing things and then try to reinforce and bolster these existing workflows using your DAM system. This is a good chance, however, to think about workflow and how it could be improved; perhaps this is the right moment to introduce some enhancements. If the DAM system chosen is tailored for a specific industry, it should reflect best practices in its defaults, so think before changing the factory workflow settings, but don't keep them if they don't fit your company's working culture.

One very logical place to integrate a workflow is at the end of the production, when it comes to reviewing the data that the content creators produced. With a DAM system you can mark the files that have been finished and approved to ensure that nothing leaves your company that has not been reviewed by the employees responsible.

Another typical usage is in asset creation. The lead artist has to review each asset that is produced to make sure that it fits into the general look of the production. Without a DAM with workflow support, this means that each artist has to show each asset to the lead artist, either on his machine or by sending him a link via e-mail. With a workflow system, all the artist has to do is to mark the asset as finished, automatically triggering a notification to the lead artist. He can then review the asset in the DAM and, if necessary, reject the asset, asking the artist to make some changes. This tracking guarantees that assets used in the later production are actually reviewed and approved.
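The review loop above is a small state machine: an asset moves from in-progress to finished, triggering a notification, and a reviewer then approves it or rejects it back to the artist. A minimal sketch, where the state names and the notify callback are illustrative assumptions:

```python
# Approval workflow as a state machine. States, transitions, and the
# notification hook are illustrative; a real DAM defines its own.
TRANSITIONS = {
    "in_progress": {"finished"},
    "finished":    {"approved", "rejected"},
    "rejected":    {"in_progress"},
}

class Asset:
    def __init__(self, name, notify=print):
        self.name = name
        self.state = "in_progress"
        self._notify = notify  # e.g. an e-mail hook to the lead artist

    def move_to(self, new_state):
        """Advance the asset, rejecting any transition not in the table."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        if new_state == "finished":
            self._notify(f"{self.name} is ready for review")
```

Because "approved" is only reachable from "finished", nothing can be marked approved without first passing through the review notification, which is exactly the guarantee described above.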


Typical approval process for artwork

Similar workflows can, of course, be extended through the whole production pipeline, depending on the capabilities of the system. If you are unable to model your complete process with the DAM tool of your choice, then try to break it down into manageable parts. Even controlling short processes like the approval step can help the production tremendously.


Example: Implementation plan in game production -- Case study of GreatGames

This short example follows one typical implementation of a DAM system. While this example won't go into too much detail, it contains enough information to serve as a rough guide. All of these topics are covered in this chapter, so if you need more help with each step, refer back to the previous sections. Information about choosing the right product has already been covered in previous chapters and won't be discussed here.

•  Test the solution
•  Prepare the setup
•  Prepare the users
•  Deploy the software
•  Automate your processes
•  Integrate your tools

We are once again looking at GreatGames, discussed in previous chapters. They are a mid-sized company with 50 users and a mixed setup of mainly Windows XP and some Mac OS X machines. The server will be a Linux machine, with a dedicated file server running Samba.

Testing the solution

After reviewing the requirements, GreatGames decides to use Superbranch, a (fictitious) Open Source SCM able to manage binary assets. The main advantages are the cost of the software (it's free) and good support of development tools. Since there is no specialized support for artists, their feedback is especially important during the testing.

Since GreatGames is not able to test a whole production spanning two years' runtime, the team decides to set up a small group of users who are going to test the software with assets from the previous production. Since there is currently a bug fix—a patch—in production, the users decide to produce the patch with the system.

The team consists of:

•  Two programmers
•  One 2D artist
•  Two 3D artists
•  One project manager

Adam and Albert, the administrators, set up the server with the data from the previous project, and everyone on the test team gets a client installed on their system.

During the test it becomes obvious that the system is difficult for artists to use.

Adam and Albert run extensive load tests on the system, using automated command line clients. The software behaves quite well under load and seems to handle the amount of data from the last project without any issues. Questions regarding the configuration and backup of the system are quickly answered with the help of the large user base on the Internet.

Peter, the programming leader, reports that programmers are extremely satisfied with the functionality and the integration into their programming tools. Most things work as expected and even though there is some missing functionality, what is there is enough for them to work with the system.

The project manager, Sam, is not convinced though, since there are few tools to help him to monitor the progress of the project. The patch is produced without any problems however, and after getting feedback from everyone, Sam gets the impression that even the artists were able to get used to the system after a while. Since the artist team at GreatGames is used to learning complicated tools, it seems it will not be such a problem for them. Although there are many missing features, the administrators report that the system is very stable, so Sam decides to give the software a try in the next project.

Preparation of the setup

Once the decision has been made, Adam and Albert start setting up the production server. Since they have already been through the configuration process, they finish quickly. But the production of the next project has already started, and they realize they can't just spend a day copying over data onto the production server and then installing the clients on each machine. Therefore, they decide to move the data onto the asset management system team by team. This allows them to install only a few machines per day, making sure that, at worst, only a few users are prevented from working.

Using the database from the software test, Adam sets up a backup system with both tape backups for long-term security and hard disk copies for fast data restoration. Since the data volume is quite high, he also updates the network backbone to a faster standard.

After running some tests on the system, doing backups, and trying out how long it takes them to restore a damaged system, Adam and Albert are confident that the system will work during production.

Preparing the users

Albert installs the clients on each machine. Since at least one user from each team participated in the test, these users are asked to train everyone on their team. Using the test setup that is still running, they explain all the features that are useful to their colleagues and conduct tutorial sessions with each of them. After a day of training, everyone is confident that they can start working.

Deployment of the software

With the clients installed and the servers ready, Adam starts moving the data to the DAM system. As soon as the move is started, the old file servers are made read-only, so changes will have to be made on the new system.

This works smoothly until he gets to the artists. The amount of data is so large that he has to halt production for half a day. Luckily the team is still in the concept phase and the artists can easily spend a productive workday using paper and pencils. From then on, Adam and Albert relocate data during the night, eating pizza and drinking plenty of coffee.

Review of the system

After using the new DAM for a month, Sam, the project manager, starts evaluating the setup. As he had already seen during the testing phase, the programmers had no problem with the system at all. Some artists were annoyed in the beginning though, and tried working around the system as much as they could. This led to situations where data was missing on the central server.

Sam spends some time in discussions with these “problematic” users and their team leaders and finally convinces them to accept the system. It takes some additional training and a few configuration adjustments from Albert for them to realize the system's benefits and start using it properly.

Some bugs in the software itself are slowing down production in some cases, and Adam starts looking on Superbranch's website for fixes. Since none can be found, he compiles a list of these issues and gives them to each user. With this list all users are able to work around the issues most of the time.

On the administration side, there were no serious issues except for one crash of the RAID system, which was replaced within two hours. With the hard disk backup, Adam was able to restore the system quickly, with only the data that was created within the six hours before the crash lost.

Automation of processes

When it becomes clear that the system itself is working well, Sam sets up a list of process improvements and automations together with David, the lead artist, and Peter from programming.

Peter then writes a set of scripts with the help of Adam to automate these processes. Not everything runs as desired, but some of the most tedious tasks can be automated. Sam decides to accept the fact that some things still have to be done manually.

Integration of tools

One big issue left is that the artists have to cope with unfamiliar tools, and thus they are not working efficiently. Since the DAM system chosen is not artist oriented, they sometimes even have to use command line-based tools to submit the data they are working on. In some cases they produce large amounts of files, which then have to be selectively imported into the DAM. To solve these problems, David pulls in two programmers who had been working on the in-house tools before. Together, they design an integration that helps the artists store and access data in the DAM without having to worry about the details. This actually works quite well, but the task consumes a large amount of the programmers' time. And even after one month of use there are still bugs that the programmers have to fix. But the artists are mostly happy!

Conclusion

Looking at the implementation at GreatGames, it becomes clear that not everything is ideal. For example, the DAM system was not the best solution for the artists, forcing them to get used to a complicated work environment. Integrating the in-house tools also cost a large amount of work time. On the other hand, there were few problems with the system itself, and the production was able to finish without any issues.

 
