A year and a half ago, I made the transition from XNA-based development to Unity-based development. One of the primary reasons, aside from the easy multi-platform support, was the ability to use C# as the primary scripting language. Not only would this make the transition easier since it was one less thing to learn, but I also hoped I could re-use a lot of what I’d done in XNA for my Unity-based games.
Now with 4 full games ported to 2 major new platforms (iOS and PC/Mac), an all-new game based on the same XNA-based game structure, and another larger game on the way, I think I can say the transition was a success. Many more XBLIG developers have made the transition from XNA to Unity, but some are still in the process, or are only considering it. For them, and for others who prefer a more code-based game structure than Unity normally provides, I’m now going to share more details of my particular XNA-to-Unity path.
Note that this is by no means the “best” path, as it has some limitations that others may not be able to live with, and there are probably more efficient ways of doing some things that I just don’t know about. So what works for me may not work for you. But there might be some ideas that are helpful even if you’re an experienced Unity developer.
Ported in 4 Days
My first XNA-to-Unity port was Super Crossfire. There was lots of new stuff to learn, so it took over a month before I had Crossfire 2 (what Super Crossfire is based on) really working on PC. It mostly resembled the Xbox 360 (aka XBLIG/XNA) version, yet it had lots of small visual artifacts, and sound was only minimally working. There was also no way it would run reasonably well on an iOS-based device. I’ll get more into why that was and how I “fixed” it in a later article.
By my third and fourth XNA-to-Unity ports (Ballistic and Fireball), I had the majority of the porting work done in 4 days for each of them. The final polish work and fixing up minor issues took longer, but the latter part of those 4 days was interface, resolution, and controller-related work. The early part, going from the XNA version to a playable Unity PC build, took about a day and a half. That day and a half is what I’ll focus on detailing, because that’s where a game goes from being XNA-based to Unity-based.
The Big Challenges
Some of the XNA-to-Unity process is pretty much Find and Replace. You need to make all the XNA Vectors, Math functions, and other framework-based calls compatible with Unity. What requires a bit more thinking is tackling the following issues (at least the first time you do them):
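To give a feel for the Find-and-Replace portion, here's a rough mapping I ended up applying over and over. The Unity-side calls are real API; the XNA side assumes you used the standard framework types, and your exact set of replacements will depend on which XNA helpers you leaned on:

```csharp
// Typical one-to-one replacements when moving framework calls over
// (a sketch, not an exhaustive list):

// XNA                                           Unity
// Vector2.Normalize(v)                    ->    v.normalized
// MathHelper.Clamp(x, a, b)               ->    Mathf.Clamp(x, a, b)
// MathHelper.Lerp(a, b, t)                ->    Mathf.Lerp(a, b, t)
// MathHelper.PiOver2                      ->    Mathf.PI * 0.5f
// (float)gameTime.ElapsedGameTime
//        .TotalSeconds                    ->    Time.deltaTime
```

Unity's Vector2/Vector3 also happen to share most member names with XNA's, which makes this step far less painful than it could be.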
1) Game Structure: This is actually not that hard. You have a single Unity game object that creates your old XNA-based Game object, and updates (and draws) every frame. In all my Unity-based games there is only 1 primary game object, and 1 or 2 cameras (the 2nd camera is for interface when the camera moves in the game).
2) Drawing: You probably used SpriteBatch in XNA. That doesn’t exist in Unity. This is the one area where my solution may not work for most people and it requires some manual work, but I’ve found it worthwhile. If you use 3D objects, my solution definitely won’t work, though you could modify it to handle them.
3) Audio: This isn’t too tricky, it’s just a little mind-numbing to align and test all the volume and filter settings. I used XACT for XNA (so I could use filters), but I’m not sure if most people use XACT, so part of this may be easier for you.
4) Interface and Controllers: Oh man is this *not* fun, particularly because I used 1 resolution in XNA (1280×720). I’m not sure how much detail I will give on this because it’s so dependent on your game, but I’ll try to highlight how I dealt with resolution issues on a couple different games, and how to deal with mouse/touch-based collision detection.
(If there’s 1 thing I miss most about XNA, it’s only having to deal with 1 resolution. Switching to a non-fixed framerate is also annoying, but doesn’t take nearly as long to handle.)
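For the framerate change, one common way to keep XNA-style fixed-step logic running under Unity's variable framerate is to accumulate `Time.deltaTime` and step the old update loop at its original rate. This is a minimal sketch of that idea, not necessarily what I do in every game; the class and `game.Update` hook are hypothetical names:

```csharp
using UnityEngine;

// A sketch: accumulate frame time and run the old 60Hz game logic
// as many fixed steps as fit, so gameplay speed stays framerate-independent.
public class FixedStepDriver : MonoBehaviour
{
    const float Step = 1f / 60f;   // XNA's default fixed timestep
    float accumulator;

    void Update()
    {
        accumulator += Time.deltaTime;
        while (accumulator >= Step)
        {
            accumulator -= Step;
            // game.Update(Step);  // hypothetical: the old fixed-step logic
        }
    }
}
```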
As I mentioned before, I have 1 primary game object in my game scene in Unity. This object has a transform (because all Unity objects do), a box collider component for mouse/touch input detection, and a script called UnityManager attached to it. The main object is at the origin, never moves, and doesn’t directly draw anything. This main object creates an RGame object, which is short for “Radiangame Game”. Name it whatever you want, but the RGame object is just the code I used for the main Game class from XNA (minus some stuff that doesn’t need to be there anymore).
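The UnityManager script is the glue between the two worlds. Sketched out, assuming an RGame class with the Update/Draw shape my XNA Game classes had (your names and signatures will differ), it looks something like this:

```csharp
using UnityEngine;

// A sketch of the single driver object: creates the ported XNA Game
// class once, then ticks it every frame. RGame is my own name for it.
public class UnityManager : MonoBehaviour
{
    RGame game;

    void Start()
    {
        game = new RGame();           // the old XNA Game class, minus XNA
    }

    void Update()
    {
        game.Update(Time.deltaTime);  // run game logic
        game.Draw();                  // fill the Render Layer meshes
    }
}
```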
Other objects that do the actual drawing are parented to the main object, and they have a transform and script called UnityRenderLayer. I call these objects “Render Layers”. Render Layers are basically there to generate and manage custom meshes, which is how Unity recommends doing custom/dynamic drawing.
There’s one Render Layer for each material change that I do during drawing, which for my games is a fixed number. In Slydris, for example, there are 3 Render Layers (gameplay/general, additive particles, and UI). I almost always use the same texture sheets for the different Layers, so it’s really only switching between additive and normal blending, doing special effects, or drawing with a different camera that requires another Render Layer. There will be a *lot* more about Render Layers in a future article.
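Until that article, here's the bare-bones shape of a Render Layer: a component that owns one dynamic Mesh and rebuilds it each frame from quads the game queues up. The QueueQuad signature is a hypothetical stand-in; the real versions batch sprites with per-vertex colors and UVs from a texture sheet:

```csharp
using System.Collections.Generic;
using UnityEngine;

// A minimal Render Layer sketch: one dynamic mesh, refilled every frame.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class UnityRenderLayer : MonoBehaviour
{
    Mesh mesh;
    readonly List<Vector3> verts = new List<Vector3>();
    readonly List<Vector2> uvs = new List<Vector2>();
    readonly List<int> tris = new List<int>();

    void Awake()
    {
        mesh = new Mesh();
        mesh.MarkDynamic();                       // hint: updated every frame
        GetComponent<MeshFilter>().mesh = mesh;
    }

    // Hypothetical API: the game calls this once per sprite during Draw.
    public void QueueQuad(Vector3[] corners, Vector2[] texCoords)
    {
        int i = verts.Count;
        verts.AddRange(corners);
        uvs.AddRange(texCoords);
        tris.AddRange(new[] { i, i + 1, i + 2, i, i + 2, i + 3 });
    }

    void LateUpdate()
    {
        mesh.Clear();
        mesh.SetVertices(verts);
        mesh.SetUVs(0, uvs);
        mesh.SetTriangles(tris, 0);
        verts.Clear(); uvs.Clear(); tris.Clear();
    }
}
```

The material on the MeshRenderer is what makes one Layer additive and another not, which is why each material change gets its own Layer.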
Then there are 1 or 2 cameras in the scene. If the game has a moving camera separate from the interface like Inferno, it has two cameras. If the gameplay is single-screen like Slydris, there is one camera. These camera objects in Unity have a transform component, a camera (obviously), an audio listener, and a script called UnityCamera. All the UnityCamera script does is set the aspect ratio/FOV at start-up, and move the camera based on where the main game tells it to go. In Super Crossfire, it does all sorts of crazy math to move it up and down based on where the player is, but there’s no reason those computations couldn’t be in the RGame object mentioned above.
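In sketch form, UnityCamera really is that small. The CameraTarget hook back into the game code is a hypothetical name, and the exact aspect/FOV values depend on your game:

```csharp
using UnityEngine;

// A sketch of the UnityCamera script: fix the projection at start-up,
// then follow whatever position the main game tells it to.
[RequireComponent(typeof(Camera))]
public class UnityCamera : MonoBehaviour
{
    public static Vector3 CameraTarget;   // written by the game each frame

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.aspect = 16f / 9f;            // match the old 1280x720 layout
        cam.fieldOfView = 60f;
    }

    void LateUpdate()
    {
        transform.position = CameraTarget;
    }
}
```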
The last Unity-based object is the Audio Layer. It’s parented to the main camera, and is just a transform and a script called UnityAudioLayer. UnityAudioLayer has lists of the sound effects and music in the Unity editor as an array of sound files and as an Enum in code. It handles all the actual playing and filtering of sound effects and music.
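The clip-array-plus-enum pairing looks roughly like this; the enum values are placeholders, and the filter handling (which the XACT-based versions needed) is omitted for brevity:

```csharp
using UnityEngine;

// A sketch of the audio layer: clips are assigned in the editor in the
// same order as the enum, and played through one AudioSource.
public enum SoundID { MenuClick, Explosion, Pickup }

[RequireComponent(typeof(AudioSource))]
public class UnityAudioLayer : MonoBehaviour
{
    public AudioClip[] clips;   // ordered in the editor to match SoundID
    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    public void Play(SoundID id, float volume = 1f)
    {
        source.PlayOneShot(clips[(int)id], volume);
    }
}
```

Keeping the array order and the enum order in sync is the one fragile part of this setup, but it makes playing a sound from game code a one-liner.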
And that’s it for what goes into Unity. There are a few objects in my game scene, scripts attached to each one, and all the real magic happens in RGame. The RGame class has an Update and Draw function just like my XNA-based Game classes did.
Almost all the same sub-classes are created and used the same on PC as they were on the Xbox (for me that’s ParticleManager, PlayerManager, EnemyManager, and more). The two code files that change the most from XNA to Unity are GameMain.cs and GameDraw.cs (both have parts of the RGame class), so there are more details to cover about how and why they change. The rest of my game scripts from XNA are now in the “Scripts/Game” folder in Unity, with many only having minor changes from the original XNA versions.
So that’s Part 1 of this series. Come back in a week or two for Part 2.
Also posted on radiangames.com