Introduction – Adam Pletcher
Many of you might be wondering “how can I do that?” I know I can do more important things for my projects, but that’s not my studio’s culture. My advice is simple: Show them what a TA can do. Tech Artists have a unique view into the two major worlds of game development, and nobody is better equipped to bring change than you. Find the slowest, most hated tool or pipeline at your studio, carve out some time, learn what you need to, and create something better. Nothing sparks revolution faster than a working prototype. Show it to your artists, have them show it to the art directors. If you’ve made their lives easier, you’ve already won the battle.
You Have to Start Somewhere – Arthur Shek
How do tech artists make an impact? Everywhere I’ve been, the successful tech art team makes useful stuff for artists over and over and learns as it goes. The unifying talent is the ability to present simple and intuitive interfaces to complexity.
In building the Tech Art team at Turn 10, I found that in the battle between game features and artist tools, there is only sadness to be had. It’s tough to prioritize the two in the same bucket.
Instead, it is much easier to justify the need for tech artists when you present the fact that you can spend two weeks writing a tool that will save a man-year’s worth of time over the course of a project. Splitting up the problem by looking at separate cost savings helped justify my team, versus arguing the apples-to-oranges comparison of game features to artist tools. It also helped that allocating some resources to the Tech Art team enabled our studio to take artist support and various existing art problems off developer plates.
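The arithmetic behind that kind of pitch is easy to sketch. Every number below is hypothetical, but plugging in your own studio's figures makes the case concrete:

```python
# Back-of-envelope tool ROI calculation. All inputs are made-up examples;
# substitute your own studio's numbers.
ARTISTS = 40                 # artists who would use the tool
MINUTES_SAVED_PER_DAY = 15   # time the tool saves each artist daily
WORK_DAYS_PER_YEAR = 220
HOURS_PER_MAN_YEAR = 1760    # 220 days * 8 hours

# Yearly savings across the whole art team, in hours and man-years.
hours_saved_per_year = ARTISTS * MINUTES_SAVED_PER_DAY * WORK_DAYS_PER_YEAR / 60.0
man_years_saved = hours_saved_per_year / HOURS_PER_MAN_YEAR

# Cost of building the tool: two weeks of one tech artist.
tool_cost_hours = 2 * 40

print(f"Hours saved per year: {hours_saved_per_year:.0f}")
print(f"Man-years saved per year: {man_years_saved:.2f}")
print(f"Payback ratio: {hours_saved_per_year / tool_cost_hours:.1f}x")
```

With these example inputs, 80 hours of tool work buys back well over a man-year of artist time, which is exactly the kind of comparison management can act on.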
In the end, after presenting this information to our studio management, we have a Tech Art team made up of myself, two traditional tech artists, and two tools developers. I’m sure this balance will evolve, but the important thing is that looking at things through this lens helped us establish a tech art group.
In thinking about what I’d talk about for a more technical contribution, I thought about the diverse mix of skillsets I knew would be represented here. What can a tech artist who calls himself a rigger have in common with a VFX artist? As I mentioned before and you’ll hear throughout the day, the common thread binding us all is our ability to solve a wide range of problems with technical fundamentals. For me, having been involved in numerous portfolio reviews for riggers and other TD roles at Disney, what differentiates the great from the mediocre is the demonstrated ability to dive into at least basic programming to enhance your skillset or productivity.
In our industry we have mediocre Batmans and awesome Batmans. You become an awesome Batman by continually adding tricks and new weapons to your toolbelt. In the spirit of my topic, you have to “start somewhere”. For the remainder of my section, I want to demonstrate how a small investment in basic scripting can make you a more powerful tech artist, regardless of what you consider your primary focus. These fundamental skills are core to the functionality of my team.
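As one small illustration of the kind of "start somewhere" script meant here, consider enforcing a file naming convention across a folder of textures. The folder layout and the convention itself are invented for this example:

```python
# Minimal "start somewhere" script: normalize texture filenames to a
# hypothetical convention (lowercase, underscores instead of spaces).
import os
import re

def normalize_name(filename):
    """Lowercase the name, replace whitespace runs with a single underscore."""
    base, ext = os.path.splitext(filename)
    base = re.sub(r"\s+", "_", base.strip().lower())
    base = re.sub(r"_+", "_", base)  # collapse repeated underscores
    return base + ext.lower()

def rename_textures(folder, dry_run=True):
    """Print (and optionally apply) renames for every file in a folder."""
    for name in os.listdir(folder):
        fixed = normalize_name(name)
        if fixed != name:
            print(f"{name} -> {fixed}")
            if not dry_run:
                os.rename(os.path.join(folder, name),
                          os.path.join(folder, fixed))
```

An hour spent on a script like this replaces an afternoon of hand-renaming, and the `dry_run` default is the kind of safety habit worth building from day one.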
Building Technology – Rob Galanakis
I don’t think there’s a domain in game development that I haven’t heard of a Tech Artist infiltrating. We’re no longer holding things together from the shadows; we’re taking on mission-critical roles and features from start to finish.
Better tools, better pipelines, better workflows, a better type of development for artists and all members of our teams. And over the last few years our efforts have been a resounding success. I get the feeling for the first time that studios are by and large starting to understand us. This is important. Part of the reason Jeff Hanna organized the first TA Bootcamp last year was to address this issue of developers in general not knowing what to make of this uncategorizable phenomenon called Tech Art.
The first key is to get a process for support tasks into place. Unless your Tech Artists can focus and have room to breathe, they are never going to learn any of the other necessary skills.
The second key is to set up code review. I don’t think any subject has caused so many headaches for me but it is absolutely essential, and I’m going to talk about the how and why of reviewing code.
The last area is collaboration. Collaboration both in working on common projects, and team cohesiveness. To develop those big tools and systems effectively, you need to be able to apply more than one or two people to a single problem.
Providing support has side effects. The codebase grows. It grows much faster than it should, because code is rarely shared properly and is often copied and pasted, or existing code is unknown and unused. And the relationships TAs have with customers are different for every group. What we develop when we work like this is not shared property, and we never develop a shared identity. We don’t develop cohesive practices, a unified style, a vision or architecture. So every step further down the individualized support path we go, the more redundant work we do, and the harder things are to unwind. The stack of debt grows and grows.
This works as long as your TAs have time and success. But I’ve seen it with every competent TA I’ve worked with. You build up this stack so you can work effectively. And eventually it turns to shit, and takes all your effort to support it. And now you can’t really develop new features properly. Your truck factor becomes high. Your code becomes brittle. And your genius and work ethic can keep things together for a while. But at some point, you run out of hours in the day. Until then, you can give high quality support, which is expected: you’re a skilled bunch and ours is a labor of love. But as soon as you hit that wall, you start to stumble, and you fail hard. The end goal of a support process, what all the changes I propose hope to achieve, is to have the most productive Tech Artists possible.
Realize how much support you do, and trust others to do it. That’s it. Most people don’t realize just how much time they spend supporting artists and designers, answering programmer questions, adding features, fixing bugs, twiddling around with stuff. A support process forces a reckoning with this behavior.
And the idea is, once you realize how much time you actually spend in support, and once you have a process where other people can actually cross over and help you, you can start to breathe again. It is the first step down a long road to building technology.
My recommended support structure is something like this. There is of course the direct feedback, via face-to-face or other channels. That is not going away. But important here are the technical mechanisms. Build a way to catch when errors happen in your tools, and get those errors, along with callstack and logs, reported to your team as emails, or into a bug tracker, or whatever. I’d also suggest giving users a big shiny ‘HELP’ button when you can’t do it automatically. These things are not hard to set up.
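Catching tool errors with their callstacks really is simple to set up. Here is a minimal sketch in Python: the `report_to_team` function is a placeholder for whatever delivery mechanism a team chooses (email, bug tracker, chat bot), and the names are illustrative rather than from any particular pipeline:

```python
# Sketch of an automatic error reporter for tools: install a global
# exception hook that captures the callstack and hands it to a reporter.
import sys
import traceback

def format_report(exc_type, exc_value, exc_tb):
    """Build a one-line summary and the full callstack for a report."""
    details = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
    summary = f"{exc_type.__name__}: {exc_value}"
    return summary, details

def report_to_team(summary, details):
    # Placeholder: real code might email the team or file a bug here.
    print("REPORTED:", summary)

def tool_excepthook(exc_type, exc_value, exc_tb):
    summary, details = format_report(exc_type, exc_value, exc_tb)
    report_to_team(summary, details)
    # Still surface the error normally for the user.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

# Install for the lifetime of the tool session.
sys.excepthook = tool_excepthook
```

In a DCC app the hook point differs (Maya, for instance, has its own ways of intercepting script errors), but the shape of the solution is the same: one small interception layer, and every unhandled error becomes a report instead of a shrug.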
Everything goes into a single task list. Let the Tech Artists draw from this list, and manage it internally. Do not involve management or gate tasks by assigning them out in some bureaucratic fashion. And use daily standups and the task list to keep track of what people are doing.
A successful culture of review creates a mind meld between your team. You have shared domain knowledge, you feed off of each other’s skills, you develop a single cohesive set of standards and idioms. There are a hundred benefits to code review; and being here at GDC, at the forefront of the industry, probably means these benefits are obvious.
And herein lies the problem. It takes at least a few weeks for code reviews to start reaping benefits. It takes even longer when you have a team that has never built technology properly and, quite honestly, don’t care much for software and project management theory. So code review sounds great, people start doing it, but it takes time, and causes tension. Because they cause tension,
someone decides to make them optional as a way to diffuse the tension. Now because they’re optional, people don’t do them thoroughly or at all, so they have much less benefit. And if they have less benefit, they become more optional. This is a vicious cycle that results in people not doing reviews at all.
And there is a cure but it is a hard pill to swallow. Code reviews are mandatory, and a single Tech Artist has the final word. Let me be clear about this and state in no uncertain terms: I have never seen or heard of code reviews being successfully implemented on a Tech Art team without them being mandatory and without a single arbiter of correctness. It doesn’t matter who the arbiter is- it can be the lead, or a senior member, but there needs to be a single person responsible for the process and the quality of the reviews, at least at first. And it doesn’t matter whether you do over the shoulder reviews, or have software, or use email, you just need to make sure every Tech Artist is getting their code reviewed, and reviewing other people’s code. Just remember though that making code reviews mandatory allows them to be successful, it does not cause them to be successful.
But over the past few years, this myth has largely disappeared in the professional programming world. But we have our own version of it in the Tech Art world. We have the animation guy, the character guy, the environment guy. The guy that wrote system A and the girl that wrote system B. Each of us loves the idea of being a badass Sherlock Holmes in our field of expertise. We love having the freedom to do what we want to do, to not really have to answer to anyone, to do amazing things and get the silent glory. But as fun as it is, we need to be the police force as well. We need to be clever like Holmes, yes, but we also need to give out parking tickets and we need to be able to respond to crises or operate in force. There is a time and a place for us to act as a solitary genius, but it must become the exception, and not the rule.
But the support splits our focus- we need to provide support, and develop new tech. And when it comes down to it, support is going to win every time.
So we’re stuck providing full time support, but we all love doing those bigger things as well. So we work 50, 60, 70 hour weeks, and develop those bigger things in our spare time. And then something terrible happens. The new tech we develop comes with its own support burden, and suddenly we’re working overtime just to provide support.
I’ve been guilty of it and I’ve seen it time and time again. So not only is collaboration a requirement for building technology, it is also a requirement for keeping your own sanity. Being able to share responsibility means you can load balance your workload.
So we come back to fixing our support process. Remember this unique stack-per-TA? This is an impediment to collaboration. So we need to get rid of it. Now we have people that are actually sharing code. This is a form of collaboration, to be sure, but it is a reactive, not creative, form. It finds code that should be shared, and shares it. When problems come in, they can be fixed or informed by multiple people. It is collaboration for support. This is important but this isn’t the creative form of collaboration I’m talking about here.
But if we combine that sort of reactive collaboration with code review, we reinforce the ability of the team to work together, grow and learn together. In my experience, I’ve found that it isn’t until a team establishes code review that they establish a ‘team identity,’ where they turn from a group of individuals who have the same job title, into a team that can collaborate together to create new things, as a team, and handle the burdens of tech art, as a team.
So we combine these two things- reactive collaboration, and team unification- and something marvelous happens. We move past the point where we can only load balance support, and we can start to load balance new features, new tools, internal work, everything. Suddenly we have the bandwidth to build technology.
Collaboration is your ultimate weapon against the likelihood of failing as your team succeeds. As you get higher on this pyramid of responsibility, there is a rapid increase of the skills and amount of work required in the tasks you are asked to do, and want to do. And a Tech Art team, as a rule, rarely has the experience with the sort of tasks that are at the tip of this pyramid. These are studio-wide features and tools that usually have very little to do with the specific expertise of Tech Art. Or they are super-critical, paradigm-shifting, fundamental changes to art pipelines, that can hose a team if not executed properly.
I’m going to be straight with you. Making the changes I’m advocating for is going to rub some people the wrong way. Changing how they interact with artists, forcing them through procedural hoops for code reviews, putting them into collaborative projects where they feel awkward: there’s no way to get around the fact that this is going to have a negative impact on morale. You have to go into this with the belief that the suffering is temporary. Making these changes has led to huge gains, but I can say personally they’ve caused hardships, for me and for others. I’ll talk about some of those difficulties in a bit, but in all cases, we came through it together and were better off for it.
But the goal of this presentation isn’t to encourage you to become a programmer. We. Are. Tech Artists. We need to respect that history and who we are. If you’re uneasy with these changes, I’m not asking you to forsake what you love. I’m asking you to sharpen or relearn your programming skills, so that you may become a better Tech Artist. You can know how to program without being a programmer. And knowing how to program well will allow you to get stuff done quickly, which makes you a better Tech Artist.
Build it On Stone – Seth Gibson
The reason I feel this is a valuable topic is that, well, everyone is writing foundation code in the Tech Art world today, from students and junior Tech Artists all the way up to old men like myself. So I get the impression that everyone understands how to write tools and build pipelines for artists, but what I feel we’re missing is conversation around how to build tools and pipelines for Tech Artists. This can be a bit of an uncomfortable topic, because it really is the drab, unsexy work that you can’t really convince a producer to let you do, and you’re not gonna get that appreciative feedback from artists for writing the Big Red Button. And I know as Tech Artists, we like to dive in, we like to just get our hands dirty and go. But having worked on both ends of the spectrum, I can tell you right now that I don’t think I could ever work in an environment that DIDN’T have a Tech Art infrastructure, and I hope this is something we can all start to take to heart as a discipline if we aren’t already, because otherwise…
Jump back to today, though, and I feel like things like this have been pretty well settled; the idea of Tech Artists as programmers is definitely not the foreign concept it was back in those dark times. So that’s the big overarching premise here: we are very much software engineers nowadays, and we should start thinking, acting, and working like software engineers.
At some point in your career you may have written a really quick, dirty, one-off tool, in the privacy of your own cubicle, and of course you washed your hands afterwards, all the while thinking to yourself “ah, it’s just a one-off, we’ll just fix this content and call it good.” This in and of itself isn’t a bad thing, and it’s not even a bad thing if you write a lot of one-off tools and scripts. What differentiates good one-offs from bad is the underlying framework on which the tools are built. Much in the same way that we design UIs in our tools to be layers on top of separate core functionality, we can abstract that idea out to tools in general: what we call tools should really just be layers on top of our infrastructure and frameworks that manipulate content and data, and it’s bad infrastructure that keeps you from writing those one-off tools while preserving process and data integrity…
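The "tools as thin layers" idea can be sketched in a few lines. Everything below is illustrative (the asset format, the function names, the triangle budget); the point is the shape: core functions own the data handling, and the "tool" is just a short composition of them:

```python
# Sketch of "tools are thin layers over infrastructure".
# All names and data here are invented for illustration.

# --- core/infrastructure layer: owns loading, validation, reporting ---
def load_asset(path):
    """Load an asset record; real code would parse an actual file."""
    return {"path": path, "triangles": 12000}

def validate_asset(asset, max_triangles):
    """True if the asset fits the triangle budget."""
    return asset["triangles"] <= max_triangles

def save_report(lines):
    """Turn report lines into a single deliverable string."""
    return "\n".join(lines)

# --- a "one-off" tool: a thin layer composing core calls ---
def budget_check_tool(paths, max_triangles=10000):
    report = []
    for path in paths:
        asset = load_asset(path)
        if not validate_asset(asset, max_triangles):
            report.append(f"{path}: over budget ({asset['triangles']} tris)")
    return save_report(report)
```

Because the one-off never touches file formats or validation rules directly, throwing it away later costs nothing; the knowledge stays in the core layer.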
…What ends up happening instead is that without that roadmap of infrastructure to follow, each tool potentially becomes its own little mini-pipeline with its own data model and execution patterns. As we write more of these little one-off sovereigns, we end up diverging the content and data paths so much that when we try to wrangle things back in, we end up with these huge tools that are trying to account for every one of those little forks in the content path we created. So we get this thing that’s more akin to a Rube Goldberg device than, say, a waterslide: we’re standing at the top dropping content into one end and hoping it comes out one of the chutes that we can see the end of. And while that’s all good fun, or at least it sounds fun, the real tragedy here is that we’ve totally engineered flexibility out of the pipeline, because our content has to be conditioned so specifically, and likewise scalability, because even small changes require massive amounts of code, again to account for each one of these diverging content paths from all of our pipelines-in-a-tool.
So with all the bad news, fire, and brimstone from the repercussions of bad infrastructure behind us, now we can focus on happier things, like how a good infrastructure can not just make life easier for Tech Art, but can in fact affect the whole production all the way through ship. This actually has to do with how we think about and approach pre-production, as well as what we actually end up implementing, and the way to think about that is to never lose sight of production. One of the… I don’t want to say mistakes, but maybe one of the less pragmatic approaches we tend to take in tech art sometimes, is to get caught up in that whole headiness of pre-production along with our friends on the art team. So we do things like create these amazing shaders, for example, that in no way are going to run at frame rate, but it’s cool because it’s pre-pro and it’s throwaway… until it’s alpha and we have to ship with it, and we’re stuck trying to optimize this thing that really wasn’t built to be optimized. But if instead we approach everything with that infrastructure-first mentality, sure, we can build those crazy shaders, but we’ll build them in such a way that we can take them apart and reconfigure them when we need to get some cycles back. The same high-level paradigm should apply to our pipelines, and of course that starts with… good
Now, if we’ve gone through pre-pro infrastructure-first, we set ourselves up to go into production with a pretty good idea of what kind of game we’re making, which means we know what content we’re going to be building, and from this we should have an idea of what the pipelines might look like. While that might not be enough for us to dive in and start building tools yet, it puts us in a position where all the big questions should be answered and we can at least start asking the smaller questions, like: what should the tools look like? And if we’ve taken the opportunity in pre-production to start putting down some very rough infrastructure (that is, gathering external libraries, experimenting with different SDKs and patterns), I think we’ll find that developing our production infrastructure becomes a fairly simple task, along with our tool and pipeline development.
Building a new pipeline and toolset just becomes a matter of… writing more one-off scripts against our battle-tested infrastructure. And if our infrastructure was such that our content pipeline wasn’t the divergent, branching nightmare from our previous conversation, we even set up art and design to start in that same strong position, since they’ll have tons of content that they can easily strip down and re-purpose for the next project.
So what IS a Tech Art Director? Well, let’s first look at what a Tech Art Director shouldn’t be. The biggest fallacy I’ve seen is putting the Tech Art “Director” in a role that’s really just that of a more experienced Tech Artist. You tend to see this a lot at companies that start by hiring junior Tech Artists to fill the gaps (not to disparage junior Tech Artists at all), but the expectation becomes that a more experienced Tech Artist is just someone who can solve bigger problems faster, and is never really given the authority to set down the parameters of those problems. The issue there is that Tech Artists are suited to tackle a very unique set of problems, which, like Tech Art itself, don’t fall entirely in the domain of art or engineering. So what ends up happening is that a potentially production-changing resource is not leveraged properly, often at the expense of the production. And that doesn’t make anyone happy.
So we still haven’t answered the question, and sadly it’s not as simple as taking what a Tech Art Director is not and flipping it around. A good place to start is with the idea that Tech Art is a bridge: Tech Artists have feet in both the disciplines of art and engineering. That said, a good Tech Art Director needs to be both production artist and software engineer. Within the lower ranks, it’s probably permissible to be more focused in one direction, but by the time one gets up to the directorial level, you need to be able not just to understand the conversations on both sides of the fence, but to contribute and even push back. The corollary is that the Tech Art Director needs to be seen as an equal in management to both Art and Engineering Directors, as opposed to a catch-all for the whims of the two. Ultimately, the Tech Art Director needs to understand that the needs of the many outweigh the needs of the few; in this case, sometimes art has to take a back seat to the overall scope of production.
Now, when I talk about writing documentation, I don’t mean putting it on the wiki or whatever other systems often get set up with the idea that people are going to update them and of course people are going to read them, right? Because everyone reads the freakin’ manual, and I’m sure those of you who have worked with wikis or any other sort of communal documentation know that they come with… let’s say varying degrees of success. No, I’m talking about dedicated documentation systems like Robodoc or doxygen: serious documentation generators that use markup languages, hook into IDEs and build processes, and produce professional-looking documentation. As a Tech Art Director, lead, or otherwise an infrastructure builder, writing documentation should be a required task, if nothing else for the educational benefit. I remember when I started writing a style guide about 9 months ago, my thought was, “Oh, this’ll be easy, I’ll take some of the Google style, some PEP-8, change a few things that I don’t like, and we’ll be good to go”… and it’s when you actually step back and try writing code against your style guide that you start to realize it’s not that easy.
So it was one day when I was writing these unit tests that I realized I was writing tests that I knew would pass, because I knew the code worked. About an hour into writing a test, I realized exactly what I was doing: I was writing my setup methods to create specific environments in which the test would pass, and the complexity of my test cases made me realize that… aha!… this code is too complicated. So from an infrastructure standpoint, this is one of the things we use unit tests for. We don’t necessarily use them to catch bugs (well, not all the time); instead we use them to keep our interfaces simple, and in doing so we provide an extra layer of documentation, in the form of common use cases.
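A minimal sketch of that idea: when the function under test has a simple interface, the test needs almost no setup, and it doubles as a use-case example for the next Tech Artist. The function and budget numbers here are invented for illustration:

```python
# Sketch: a unit test as interface documentation. If the setup required
# to exercise a function gets complicated, that is a design smell.
import unittest

def resize_to_budget(width, height, max_dim):
    """Halve texture dimensions until both fit within max_dim."""
    while width > max_dim or height > max_dim:
        width //= 2
        height //= 2
    return width, height

class ResizeTests(unittest.TestCase):
    # Each test reads as a common use case, not just a pass/fail check.
    def test_oversized_texture_is_halved(self):
        self.assertEqual(resize_to_budget(2048, 1024, 1024), (1024, 512))

    def test_in_budget_texture_untouched(self):
        self.assertEqual(resize_to_budget(512, 512, 1024), (512, 512))
```

Run with `python -m unittest` as part of the build; no fixtures, no environment setup, because the interface did not demand any.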
The easiest way to know for sure how to handle an error is to set up a system wherein you always know exactly what the error you’re handling is. Sorcery? Blasphemy? Well, not quite. By coupling a good logging API with custom exceptions, we can pretty much catch, handle, and redirect any undesirable results in such a manner as to provide USEFUL feedback to both the user and the developer. Since we went to the lengths of setting up unit tests, we should also be able to pare out the more common cases of built-in exceptions, which we can handle ourselves. So, for instance, say we have a bit of functionality that we know, based on our unit tests, is going to raise a ValueError sometimes in situations that may be beyond our control (just for the sake of this conversation). Since we know what case raises that exception, we can create our own subclass of ValueError that handles our specific case and returns useful data. Obviously this is a very naïve and ideal situation, but you get the idea.
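Here is one way that ValueError-subclass idea might look. The exception name, logger name, and texture-budget scenario are all made up for the example:

```python
# Sketch: a custom exception subclassing a built-in, paired with logging,
# so handlers can recognize "our" known failure case and get useful data.
import logging

logger = logging.getLogger("techart.export")

class BadTextureSizeError(ValueError):
    """Raised when a texture's dimensions fall outside the budget."""
    def __init__(self, path, size, max_size):
        self.path, self.size, self.max_size = path, size, max_size
        super().__init__(f"{path}: {size} exceeds budget {max_size}")

def check_texture(path, size, max_size=2048):
    """Validate a texture size; log and raise our exception if over budget."""
    if size > max_size:
        err = BadTextureSizeError(path, size, max_size)
        logger.error(str(err))  # developer-facing record with full context
        raise err
    return True
```

Because `BadTextureSizeError` is still a `ValueError`, any existing `except ValueError` handlers keep working, while new code can catch the subclass specifically and read `path`, `size`, and `max_size` straight off the exception.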
…And with all that in place, we’ve now created an error reporting structure that starts with a user knowing exactly what’s happened, and maybe even how to talk to Tech Art about it. Once Tech Art steps in, we can very easily look at the traceback and know that we’ve caught one of our own exceptions. Couple this with our carefully built development sandboxes, and we can iterate with the affected artist directly, off-line, to resolve the issue while the rest of the team continues to work. We fix the problem, we merge that fix into the head branch, and production rolls merrily along.
Joining the Dark Side – Ben Cloward
This year, I’ve chosen to talk about a different kind of challenge that we faced at Bioware – and this is more of a social problem than a technical challenge.
Our programmers considered the art in the game to be the main cause of the performance issues because it was implemented in an inefficient way.
The artists, on the other hand, felt that getting the frame rate up was the job of the programmers, and they were frustrated at the programming department for delivering tools that were difficult to use and that caused a lot of headaches and lost time.
We joined the dark side! Two technical artists were chosen to move their desks into the room where the programmers worked and to work together with them in adding new features and tools, and in optimizing the game. In this talk, I’m going to share my experiences as an artist living and working among the programmers on our team – and show how this simple act of moving into the programmer space was a major part of the solution to our social issues.
Designing and building complex systems requires constant communication and collaboration between art and programming.
With tech artists embedded with the programmers, we changed this process around. First, I would collaborate with the artists to create a prototype tool that met all of the artists’ requirements. Through this collaboration with the artists, I would become familiar with what they really wanted and how they intended to use the tool. I was also able to control (to a certain extent) the size and scope of the system to make sure that it wouldn’t hurt the performance of the game and that the artists’ expectations for the tool didn’t get so high that the concept would be too complex or require too much programmer time.
Once the artists were happy with the prototype, I would document the requirements for the tool and show it to the programmers.
Once development started, since I was sitting with the programmers, it was very natural for me to watch and guide the process, ensuring that the original vision was maintained. Often, the programmer would come up with his own ideas about what the tool should be. Sometimes these ideas were improvements, but sometimes they changed the nature of the tool. We would discuss changes to the original prototype and I would make sure that the original intent was maintained and that the artists would get the tool that they asked for.
Once the tool was nearing completion, I wrote documentation for the tool and helped the artists learn how to use it. Since I had been involved with the original prototype of the tool and involved during the tool’s development, I was the best qualified to write the documentation and it was easy for me to teach.
I’d like to stress that this step – educating the artists – is important. It needs to be more than just an email that says “Hey, we have a new tool. Go read this document that I wrote about it.” You really need to sit with the artists and show them the tool, and then watch them use it. By doing this, you can make sure that the tool is doing what the artists need AND that the artists are using the tool correctly.
No matter how well a tool is designed, if you don’t teach the artists how to use it properly, they’ll find a way to make a mess with it – and it’s always easier to teach first than to clean up the mess later.
Having an art representative in the programmer area helped the artists in a couple of other ways too. When the programmers wanted to make an optimization to the game that might make an impact on the art, I was there to stand up for the artists and help the programmers know when an optimization was going too far or when the performance benefit out-weighed the small loss in quality.
Also, since I worked with the programmers every day, I was aware of all of the projects they were working on to improve performance and quality. When new builds of the game engine went out, I sent out an email to the art team to help them understand the new improvements in the build that would impact them. This increased visibility helped the artists to see that the programmers were working hard to improve the game.
As a result of these changes, the artists got tools that matched and sometimes exceeded their expectations, and their ability to create game art was improved. They also had more information about what the programmers were doing to improve the game. Their trust in the programmers increased.
Now I’d like to switch gears and talk about how the programmers benefited from having embedded tech artists join them. As I mentioned earlier, the major concern that the programmers had was that the artists were creating art that was wasteful and inefficient. They believed that the main cause of the low frame rate was that the art was too heavy. One thing that I had noticed before moving into the programmer area was that the programmers often found specific examples of bad art and said things like, “Man, this texture is huge. They need to fix this.” or “This model is super dense and you only ever see it from a distance.”
Since no one from the art department was present, these off-the-cuff complaints were basically just getting thrown out there with no one to respond to them. The programmers had a lot of built-up frustration at seeing inefficient assets that no one seemed to be fixing. After moving in with the programmers, I made it a point to jump up and respond whenever I heard a programmer complain about the art. I’d make a note about the asset in question and either fix it myself or pass it along to the right artist to optimize.
I basically become the programmer’s complaint department. Even if things didn’t get fixed right away, the programmers at least felt like someone was listening and responding to their complaints. This served to ease a lot of the tension that the programmers felt toward the artists.
We went several steps beyond just responding to off-the-cuff complaints. One of our major initiatives was a full audit of all the assets in the game.
We created frame-rate and memory usage heat maps of all of the planets, created lists of textures that were too large, and models that used too many triangles. We put a lot of effort into gathering all of the information that the art team needed to make the game run faster using less memory. Some of the items we were able to go in and fix ourselves but most of the time we would create a report of actionable items and pass it along to a lead artist so that the work could be divided up among his team.
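The audit step described above is essentially a scan over asset metadata against a set of budgets. A minimal sketch, with invented asset records and thresholds standing in for whatever a real engine would report:

```python
# Sketch of an asset-audit report: scan metadata, flag anything over
# budget, and emit an actionable list for a lead artist to divide up.
# Asset fields and budget numbers are hypothetical.
def audit_assets(assets, max_texture_dim=2048, max_triangles=20000):
    actionable = []
    for asset in assets:
        if asset.get("texture_dim", 0) > max_texture_dim:
            actionable.append((asset["name"], "texture too large"))
        if asset.get("triangles", 0) > max_triangles:
            actionable.append((asset["name"], "too many triangles"))
    return actionable
```

The value of the audit is less in the code than in the handoff: each flagged item names a specific asset and a specific reason, so the work can be divided up without further investigation.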
With the right information in hand, the art team was able to reduce texture memory usage significantly and increase frame rate.
Beyond texture size and triangle counts, we also investigated other resource drains such as the density and clip distance of terrain details, the complexity of our cloth simulations, and LOD settings on our character skeletons. In one case, I found that reducing the draw distance on the grass by about half raised our frame rate by 10 frames per second with no noticeable visual difference. It’s pretty exciting when you can find that kind of improvement.
I once saw that the tech artists at Crytek created t-shirts for themselves that said “Digital Janitor” on them. I think that describes this project pretty well.
After all of our work optimizing the game’s art, we wanted to make sure those optimizations would stick and that future assets would be created optimally from the start. Basically, we wanted to avoid having to do this type of clean-up project again.
The first thing that we did was to educate the artists. This mostly took place during the optimization process. As we passed lists of assets along to artists to optimize, we would also take some time to explain the metrics we were hoping to improve. We would show them the heat maps and help them understand what it was that caused the problem. Teaching the artists how to make efficient art, and how to check the on-screen metrics to make sure their work was within budget, was half of the solution.
The other half of the solution was to build smarter export tools. Our tech art team is responsible for the tools that export all assets into the game, including models and textures. This means that we have a point in the pipeline where we can add checks to see if assets are optimal. We took advantage of this opportunity mostly with texture maps. Our exporter already had some context for how each map would be used, so we taught it the dimensions that a texture should be for each usage case. When a texture is exported, the size defaults to our optimal size or the original size, whichever is smaller. We also have a maximum size built into the exporter. This prevents textures from getting exported at insane resolutions.
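The size rule is easy to express in code. This is a sketch of the logic only, under assumed usage categories and budgets; our actual exporter and its tables are not shown here:

```python
# Sketch of the export-time texture sizing rule described above.
# The per-usage optimal sizes and the hard maximum are assumed values.

OPTIMAL_SIZE = {"diffuse": 1024, "normal": 1024, "mask": 256}  # hypothetical
HARD_MAX = 2048  # absolute cap, regardless of usage

def export_size(usage, source_size):
    """Default to the optimal size for this usage or the source size,
    whichever is smaller, and never exceed the hard maximum."""
    optimal = OPTIMAL_SIZE.get(usage, HARD_MAX)
    return min(source_size, optimal, HARD_MAX)

print(export_size("mask", 1024))    # budgeted usage clamps the texture
print(export_size("diffuse", 512))  # already-small source is left alone
```

Because the clamp lives in the exporter, an artist can author at whatever resolution is convenient and the pipeline still guarantees what ships is within budget.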
In addition to optimizing the art assets, I was also given the task of writing the low-end shaders for the game. In the options, players can select low- or high-quality shaders. The low-end shaders are mostly for people with weaker hardware. This was an ideal task for me for several reasons. First of all, as an artist, I had a strong understanding of which shader features were core to the look and style of the game, so I could remove the right set of things without the art team rising up in arms against me. Second, I was able to off-load this large task from the programming team and free them up for other work.
Finally, on a personal note, I had a laptop at home that barely met our minimum hardware requirement and I wanted to be able the run the game on it. I set a personal goal to improve performance enough so that the game could run on my laptop, but without losing the distinct artistic style that we had defined.
After working on the project for awhile, I was able to reduce many of the shaders to around a quarter of their original instruction count by removing optional features, moving some math into the vertex shaders, and simplifying core functions. On our low-end target hardware, the game ran an average of 20 frames per second faster when using my optimized shaders. And most importantly, it ran well on my laptop.
Here, I want to pause for a minute and talk about the importance of measuring things. Your worth as a tech artist is determined by your ability to solve problems. Making the game run faster and making the artists’ work more efficient are two examples of the types of problems we solve. If you want to show your worth – and thus build the value of tech art as a discipline within your studio – you need to be able to put a number on it and use cold, hard facts. Before you start any project to solve a problem, measure the results as they currently stand. Figure out how long it takes an artist to accomplish a task with the current tool. Make a list of how much texture memory each level is using. Write down the instruction counts of all the shaders. Once you have this initial data, go to work to make things more efficient. When you’re done, take your measurements again. With this data in hand, you can show the team and the company – hey, I saved a week of artist time. Or – I increased the frame rate by 50%. Or – all of the levels are now within the memory budget, and the visuals still look as good as they did before. Having this type of fact-based information to share goes a long way toward increasing the respect that your company will have for the tech artists.
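The before/after comparison can be as simple as a dictionary of metrics and a percentage-change report. A minimal sketch, with metric names and numbers invented purely for illustration:

```python
# Toy before/after measurement report. Metric names and values are
# invented; substitute whatever your profiler or asset tools capture.

def improvement_report(before, after):
    """Compute the percentage change for each metric measured
    before and after an optimization pass."""
    report = {}
    for metric, old in before.items():
        new = after[metric]
        report[metric] = round((new - old) / old * 100.0, 1)
    return report

before = {"texture_mb": 512.0, "avg_fps": 24.0}
after = {"texture_mb": 384.0, "avg_fps": 36.0}
print(improvement_report(before, after))
# -> {'texture_mb': -25.0, 'avg_fps': 50.0}
```

Those two numbers – texture memory down 25%, frame rate up 50% – are exactly the kind of cold, hard facts that justify the work to management.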
I believe that the programming team learned to respect the art team more when they saw all of the effort that we were putting in to optimize the art and the shaders in the game. The artists earned the respect of the programmers by being an active part of the solution.
In summary, moving technical artists in to work with the programmers had several key benefits. First of all, the artists gained direct representation on the programming team. This was beneficial because we were able to participate in and guide the development of new tools, help the artists see what the programmers were doing to improve the game, and protect the interests of the art team from over-optimization. As a result of these changes, the artists now have a greater respect for the programming team and are more willing to work together and collaborate with them.
Second, the programmers gained direct access to the art team. Since I was sitting right there with them, whenever they had a question about the art, they could just ask. As we worked to improve the efficiency of the art assets and optimize our memory usage, the programmers saw the work that we were doing and experienced the performance gains that we achieved. I was also able to complete a couple of shader projects that the programmers would have had to do and free them up for other tasks. These changes helped increase the level of respect and trust that the programmers had for the artists.
This increased level of confidence and trust that the teams had for each other is important – because it means that they are more willing to work together. I’m not going to say that the relationships between the teams are perfect. We still have room for more improvement in some areas. But we have made significant progress – and that progress shows in the product that we shipped. I also don’t want to take the credit for these improvements. I was in the middle of the process, but it was really the willingness of the programmers to allow an artist to be a part of their team, and the willingness of the artists to work hard to improve performance that made this happen. So unlike in the movies where joining the dark side leads to hate, misery, and suffering, in our case joining the dark side led to greater trust, confidence, collaboration, and in the end – a better game.
So, after sharing some of my experiences working together with the programmers for the past two years, I’d like to leave you with four key ideas. The first is guide. When working to design a new tool or solution to a problem, you need to guide the artists so that their expectations for what the tool will be able to do don’t get out of control. Then you need to guide the process of creating the tool so that the end result matches the original vision and the tool accomplishes the intended task.
The second is teach. As a tech artist, it’s your job to make sure that the artists on your team know the pipeline, know the tools, and know what’s acceptable and what isn’t – in terms of creating art that will perform well. If you end up with a mess to clean up at the end, you can only blame yourself for not being a better teacher.
Next – measure. Capture data both before you begin a project and once you have completed it so that you can show the company – in hard facts – what benefit your efforts have achieved.
And finally, build trust. This one is the most important. Artists and programmers that trust each other, will work together and collaborate to build incredible games. Artists and programmers that don’t trust each other, won’t work together – and your project will suffer. Do all you can to help each team see what the other is doing. Bring the teams together and help them collaborate.
Lessons in Tool Development – Jason Hayes
Strategic is the high-level vision of a tool. At this level, you should be looking at the big picture of how the tool fits into your overall pipeline and how it affects the user’s workflow. This is also the most effective point at which you can save your company money and increase the productivity of your team. As our industry moves into another cycle of next-generation hardware, I believe pipelines will become larger and more complex, so we need to build things that are scalable and, most importantly, save our companies money. I don’t know how common it is for Technical Artists to think about the tools they write in this way: your company is paying you to write tools that make the content pipeline an efficient machine, so you must look for the best way to spend that money.
Tactical is the low-level view of a tool. This is where the architectural design, implementation and code reviews of tools happen.
I’d like to start the presentation by talking about the Strategic level of tool development, and the first part of that process for us is what I call Tool Briefs.
Tool briefs are short documents, typically one page or less that describe the need, criteria and scope of the proposed tool. They are a strategic document, and don’t delve into the details and logistics of how the tool will be implemented. At Volition, tool briefs are written and approved prior to any technical spec being drafted.
The primary purpose of the tool brief is to make sure that everyone is on the same page about what will be delivered. They give your Manager the opportunity to assess the cost of implementing the tool, and they also provide an easy point to make course corrections early in the design process. Moreover, it gives everyone involved an opportunity to ask questions, and provides your Director a window into how you are thinking.
At Volition, our tool briefs are made up of three simple questions. These questions are intentionally kept short and focused, to make the person writing the brief really think about and question what they are about to build, and to describe the tool in a way that communicates it to a wide audience.
•The Description (What is it?) This one is pretty straightforward. Here, you describe the need, criteria and scope of the tool.
•The Function (How might the end-user use the tool?) It’s easy to overlook how a tool might affect the end-user’s productivity. Keep in mind how the tool fits into the big picture of the overall process: it’s very easy to add new tools to a pipeline, but it’s very difficult to keep that pipeline running smoothly for your team.
•The Justification (Why does it need to exist?) This is the part of the brief that provides the rationale for why the tool should be created.
The following is a quote that I feel fits the bill of a Tool Brief perfectly:
Mapping out the workflow as a flow chart is a good way to get a high-level perspective on the end user’s process. If your pipeline is creating bottlenecks for your users, mapping it out should reveal where the problem areas are, and then you can create a plan of action to address them. A good approach when mapping out the pipeline is to annotate each part of the process with how long users spend on it.
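Even a crude version of that annotated map is useful. A minimal sketch, with step names and per-asset durations entirely made up for illustration:

```python
# Hypothetical annotated workflow map: each pipeline step paired with
# the time an artist spends on it. Steps and minutes are invented.

workflow = [
    ("model in Maya", 120),   # minutes per asset, assumed
    ("export to engine", 15),
    ("set up materials", 45),
    ("review in game", 30),
]

total = sum(minutes for _, minutes in workflow)

# Print steps sorted by cost, so the bottleneck surfaces first.
for step, minutes in sorted(workflow, key=lambda s: s[1], reverse=True):
    print(f"{step:20s} {minutes:4d} min  ({minutes / total:.0%})")
```

Sorting by cost makes the conversation concrete: the step at the top of the list is where a tool will buy back the most artist time.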
Another way to familiarize yourself with the end-user is to sit down with them and watch how they work. It’s very important to build those working relationships with the people we are supporting. They need to feel confident that the tools we are developing are there to make their lives better. Besides, if the person you are watching doesn’t get creeped out over this, it can be an eye-opening experience for a lot of Tech Artists. Sometimes you’ll see artists using your tools in ways you didn’t expect – ways that would probably only have surfaced by watching them work.
Something I like to do is hold bi-weekly dependency meetings with each art discipline on the team. The meetings are fact-finding missions to discover what the artists are working on and what they have coming up. At Volition, we use Hansoft to manage the project’s tasks and backlog, but trying to determine dependencies in that software is nearly impossible, so I avoid it altogether and meet for 30 minutes to talk.
The meetings are also a great opportunity for the artists to bring up any other issues they are having, and they have become my most valuable meetings. I would encourage you to implement these into your process if you don’t already.
Running behind schedule doesn’t always mean that something is getting overengineered. In most cases, we just underestimate how long something will take, usually because we discover things over the course of development. But it is a warning sign and if the person is running behind schedule, it’s probably a good time to step in and take a look at what’s going on.
Making the code too generic is another one of those tricky and subjective aspects of software development. One way to tell if something is being too generic is to look at the set of requirements. If you are expanding a focused set of requirements into something that can be a “jack of all trades”, you are probably overengineering something. For example, say you are tasked to build a simple tool that is supposed to track how shaders are used on content. Well, you decide that you want to turn it into a generic system that can track how any piece of data is being used on content. It would become incredibly complex and stop being good at showing how shaders are used on your content.
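To make the contrast concrete, the focused version of that hypothetical shader-tracking tool might be only a few lines. The data shapes here are assumed purely for illustration:

```python
# The focused version of the hypothetical shader-tracking tool: it answers
# exactly one question -- which assets use which shader -- and nothing more.
# Asset records are invented for illustration.

from collections import defaultdict

def shaders_by_usage(assets):
    """Map each shader name to the list of assets that use it."""
    usage = defaultdict(list)
    for asset in assets:
        usage[asset["shader"]].append(asset["name"])
    return dict(usage)

assets = [
    {"name": "crate", "shader": "standard"},
    {"name": "puddle", "shader": "water"},
    {"name": "barrel", "shader": "standard"},
]
print(shaders_by_usage(assets))
# The over-engineered alternative -- a generic "track any data on any
# content" framework -- would bury this one question under configuration.
```

The generic version would need schemas, plug-ins, and query languages before it could answer the single question the requirements actually asked.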
Another indicator of when something is being overengineered is when the code has become difficult to follow, which usually means it has also become difficult to maintain. A warning sign for this is when someone who is using your code is frequently asking questions about how it works. When this happens, the best method I’ve found to surface overengineering is to go to a white board and have the TA(s) write out what they are trying to do.
Probably the best way of detecting overengineering is when someone is designing too far into the future.
But at the end of the day, good design practice equals code that is easy to follow and maintain. Bad design practice ends in code that is too difficult to understand and maintain, and ends up costing you time and money.
Phase 1 – Basic design and implementation.
At this phase, the basic system is functional. Verification takes place based on the goal/task lists and associated expectations. The system is not polished, but sufficiently fleshed out to allow iteration work to start.
Phase 2 – First pass of iteration.
In this phase, the system or tool is sufficiently fleshed out that a decision can be made as to whether it fulfills its promise or exhibits fundamental issues. Validation focuses on whether the feature is “on track”. It’s also the point where the determination is made as to whether this is a tool or system that could potentially be cut.
Phase 3 – Second pass of iteration.
This is the first polish phase and the tool or system should be fully implemented. Features at this point are locked down and only minor additions are acceptable (subject to approval). An evaluation is made to assess whether or not the tool is successful and whether it should be kept or cut. You can only enter the final phase if the tool or system is being kept.
Phase 4 – Final polish pass.
Here, the features of the system or tool are locked down and it’s essentially production ready.
•Allow the team and QA to easily record bugs directly from the game.
•Give art an easy way to provide direction.
•Allow the team and QA to easily navigate to bugs in the game.
In order to make the end-user more productive, keep the number of clicks in your interface to a minimum. Keyboard shortcuts are also good, but be careful that they aren’t the only way to do something. I’ve used editors and tools in the past where this was the case, and certain functionality became lore of the tool, absent from any documentation.
Intelligent grouping and consistent alignment of controls will also make your interface appear clean and simple.
In Viewmaster, to make sure communication was kept simple and in clear paths, I created a Management layer. The Management layer is how subsystems interact with each other. It becomes important at the Class or Object level of the code. Instead of having to pass every subsystem to each other, I only need to pass the primary manager class.
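A minimal sketch of that Management-layer idea, with class names invented to echo the bug-reporting features listed above (this is not Viewmaster’s actual code):

```python
# Sketch of a Management layer: subsystems talk through one manager
# instead of holding references to each other. All names are hypothetical.

class BugDatabase:
    def record(self, description):
        return f"recorded: {description}"

class Screenshotter:
    def capture(self):
        return "screenshot.png"

class Manager:
    """Single object passed to every subsystem; owns all the others."""
    def __init__(self):
        self.bugs = BugDatabase()
        self.screens = Screenshotter()

class BugReporter:
    # Needs both the bug database and the screenshotter, but only
    # receives the manager rather than each subsystem separately.
    def __init__(self, manager):
        self.manager = manager

    def report(self, description):
        shot = self.manager.screens.capture()
        return self.manager.bugs.record(f"{description} [{shot}]")

reporter = BugReporter(Manager())
print(reporter.report("floating rock"))
# -> recorded: floating rock [screenshot.png]
```

The payoff is in the constructors: adding a new subsystem changes the Manager, not the signature of every class that might someday need it.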
The author closed by recommending Code Complete, 2nd edition.
Things like tool bug tracking, infrastructure, tool briefs, and code reviews should also be important parts of the job – all areas our team currently lacks.