Planning Tech Art for Network Multiplayer

Developing solutions for known and anticipated challenges in art and tech art pipelines and tools

CONTENTS

  • process summary

  • problem/challenge identification process

  • identified problems / challenges

  • information gathering

  • solution formation

  • solution development plan

  • iteration and refinement plan

 

PROCESS SUMMARY

My approach to planning tech art solutions for this particular network multiplayer project follows a general iterative design process. I knew, going in, that I didn’t have enough experience to commit to a confident, rapid 90% solution, so I built a structured discovery phase into the process and planned for iteration and improvement cycles to refine solutions as the project evolved and new information was acquired.

On a larger project, or when working with a larger team, the discovery phase would require cross-functional input and assistance, but on this project I’m also wearing those other hats. I’ll include notes on how I would approach those steps with a larger team, but they were NOT a part of this project.

 

PROBLEM/CHALLENGE IDENTIFICATION PROCESS

  • [PC] This project’s main goal is to be a test bed for network multiplayer and Steam integration — two of us started this project with the explicit goal of learning more about a specific subset of tools (Photon Fusion and Steamworks). To that end, the first platform of consideration is PCs running Windows. PC builds in Unity are generally non-restrictive in terms of performance and features, but with a highly mature customer base, it was important to consider common player expectations for features, performance, and capabilities.

    [Browser/WebGL] From previous projects, I’ve found it helpful in the early testing stages of development (i.e., post-prototype, pre-alpha) to run playtests through browser-based deployments hosted on the web. Especially for small indie teams, this minimizes logistical overhead in distribution and version synchronization (which is especially important for multiplayer games). Unity’s WebGL builds and the browser ecosystem impose heavy performance limitations, so research and testing were done: specifically, into the single-threaded limitations of WebGL builds, the Unity systems known to benefit from multi-threading that would be impacted, and the performance relationship between a native build and a WebGL build on the same hardware, across multiple hardware profiles.

  • The game is cooperative, which is my personal area of interest. This also means the project could use a wide variety of network topologies. Without play or product design limitations, we explored options and settled on Photon Fusion for its ease of integration and ubiquity for our use case. Once committed to the tool, the focus shifted to translating core gameplay features to Fusion’s API and architecture, and to testing anticipated features at scale to understand how the API handled different types of anticipated stress.

    The advantage of a high-level tool like Fusion is that it handles much of the low-level functionality for you; the downside is that it obscures how that functionality is handled at the lowest levels. Generalized approaches are also designed with a general use case in mind, and you won’t know where your pipeline or approach is out of bounds until you hit a wall.

    To that end, I focused on the features we knew we’d be reliant on, and pushed them hard to see how far we could get before performance was impacted. This included input message sizes, animation synchronization, counts of network transforms and objects, RPCs, and players’ geographic distance from servers.

  • Overall, the idea was to do the best I could based on researched and documented solutions, identify what was becoming expensive and WHY, and, when all else failed, bias for action and iteration.

    The game’s design anticipates large numbers of mobs to create a sense of overwhelming odds, numerous animated 3D meshes, and a large number of particle-based effects. In many conventional offline PC games, this alone can cause rendering performance issues; coupled with the platform limitations above and network multiplayer considerations, it was important to identify potential problem areas, employ effective engine-provided solutions for the platforms we were deploying to, and test all assumptions.

    Given the scope and complexity of all of these factors, where known solutions could be found, they were employed. Otherwise, the easiest available first-pass solution was employed until it was shown to be inadequate.

 

IDENTIFIED PROBLEMS AND CHALLENGES

  • Fusion offers a high-level approach to synchronizing and interpolating a wide variety of common data types for a variety of use cases. This includes the current animation state for Skinned Mesh Renderers with Animator components. This allows the host/server to select the appropriate Animator values, meaning booleans and triggers can drive transitions, and changes in float-based properties can be synchronized and smoothly interpolated between update states.

    Based on their provided examples, this seems like an intended solution for player characters and other important, limited-count skinned meshes, but not for instances in very high numbers. Based on our testing methodology, we couldn’t determine whether this was an issue with configuration, hardware, or implementation, or if it was working as intended.

    The problem seemed to be the volume of inputs overwhelming the available bandwidth for a sufficiently local-feeling experience.

    Solution:

    Where possible, non-player animations were localized. To preclude the need for multiple solutions, Animators are driven by local code using locally observable parameters: changes in position for locomotion, events like player actions, or game-state-independent values like engine time for transform-based and shader-based animations.
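As a rough illustration of the locomotion case (sketched in Python rather than the project’s Unity C#; the class and parameter names are hypothetical), a speed parameter can be derived entirely from locally observed position changes, so no extra animation state ever crosses the network:

```python
import math

class LocalLocomotionAnimator:
    """Derives an animation speed parameter from observed position changes.
    Remote instances run this too, so no animation state is synchronized."""

    def __init__(self, smoothing: float = 0.2):
        self.smoothing = smoothing   # low-pass factor to hide network jitter
        self.prev_pos = None
        self.speed_param = 0.0       # would feed something like Animator.SetFloat("Speed", ...)

    def tick(self, position, dt: float) -> float:
        if self.prev_pos is None or dt <= 0.0:
            self.prev_pos = position
            return self.speed_param
        raw_speed = math.dist(position, self.prev_pos) / dt
        self.prev_pos = position
        # exponential smoothing keeps the blend parameter stable between updates
        self.speed_param += (raw_speed - self.speed_param) * self.smoothing
        return self.speed_param
```

The same pattern applies to any locally observable signal: the animation layer samples state it can already see instead of waiting for a replicated animation message.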

  • Effects, like animation state synchronization, can be driven using a variety of Fusion features. But recognizing the bandwidth bottleneck at the game’s anticipated entity counts, the effects pipelines rely, to the extent possible, on game-state-essential messages, events, and callbacks to trigger, position, and orient effects.

    Essential effects are the only feedback players have to understand the quality of their play, how the game is evaluating it, and how to refine their heuristics for mechanically interacting with the game’s rules — aka game-state feedback. In a shooter, these effects can communicate who the player has hit, whether the hit entities manifest a play-relevant change in state, and who or what has affected the player’s values or state, and HOW.

    Problem Part 1 - minimizing bandwidth

    Though using RPCs or bespoke inter-instance messaging can give the greatest control, it can also contribute to network congestion.

    Solution Part 1

    To the extent possible, effects are hooked up to existing events and messaging. For example, when an object is network-spawned, the spawning callback can be used to ensure that spawning effects fire at the right time.

    Problem Part 2 - keeping iteration and testing elegant

    Solution Part 2 -

    The VFX and SFX systems in the game have been generalized to allow for both local and remote triggering. This means we can iterate on how these systems hook into the game-state-affecting controllers, and, where appropriate, move effects from locally driven to remotely driven on an effect-by-effect basis.

    Also, by employing an event manager pattern, these choices don’t need to be managed at the programmer level. In this case it’s all just me, but on teams where these are split proficiencies, the programmer doesn’t need to know how to sequence all of the effects. They can simply raise events at the appropriate hookups.
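A minimal sketch of that event manager idea, in Python for illustration (the project itself is Unity C#; all names here are hypothetical): gameplay code raises named events, and effect authors subscribe handlers without the gameplay side knowing what plays.

```python
from collections import defaultdict
from typing import Any, Callable

class EffectEventBus:
    """Decouples gameplay code from effect sequencing: gameplay raises named
    events; effect authors subscribe handlers independently."""

    def __init__(self):
        self._handlers = defaultdict(list)  # event name -> list of handlers

    def subscribe(self, event: str, handler: Callable[..., Any]) -> None:
        self._handlers[event].append(handler)

    def raise_event(self, event: str, **payload: Any) -> None:
        # Every subscribed effect fires; gameplay code never enumerates them.
        for handler in list(self._handlers[event]):
            handler(**payload)

# Effect side: hook impact VFX/SFX without touching gameplay code.
bus = EffectEventBus()
bus.subscribe("projectile_hit", lambda position, target: print(f"spark at {position}"))

# Gameplay side: just announce what happened.
bus.raise_event("projectile_hit", position=(3.0, 1.0, 0.0), target="mob_12")
```

Whether a given event is raised by local logic or by a replicated network callback is invisible to the handlers, which is what makes the local-versus-remote choice swappable per effect.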

    Problem Part 3 - using callbacks and existing events with missing data

    Solution Part 3 -

    An intermediate effects event controller is used. This allows for the coordination of effects like playing a sound effect, and firing different particle systems in different directions.

    This also allows timing to be controlled through tasks. This is helpful when an object spawns in a position other than its first intended position: the controller can introduce imperceptible delays so objects arrive at their intended positions before effects fire.

    This also allows a local instance of the controller to observe the game state and pass along information that an event originating in a class of a different scope might not possess. For example, a dying mob might know its own type and position, but not the direction it was shot from, the trajectory of the projectile that hit it, or the positions of neighboring entities for contextual effects like “chain lightning”.
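The delay-and-enrich behavior of that intermediate controller can be sketched as follows (Python for illustration; the project’s version is Unity C#, and the names here are hypothetical): events are enriched with locally observable state and queued for a short moment before firing.

```python
import heapq

class EffectSequencer:
    """Intermediate effects controller: enriches sparse event payloads with
    locally observable game state, and defers playback briefly so spawned
    objects can reach their intended positions before effects fire."""

    def __init__(self, neighbor_lookup):
        self.neighbor_lookup = neighbor_lookup  # local game-state query, e.g. spatial search
        self._queue = []                        # (fire_time, seq, effect_fn, context)
        self._seq = 0                           # tiebreaker so the heap never compares functions

    def schedule(self, effect_fn, context, now, delay=0.05):
        # Add data the event's originator could not know (e.g. nearby entities
        # for contextual effects like chain lightning).
        context = dict(context)
        context["neighbors"] = self.neighbor_lookup(context.get("position"))
        heapq.heappush(self._queue, (now + delay, self._seq, effect_fn, context))
        self._seq += 1

    def tick(self, now):
        # Fire everything whose (imperceptible) delay has elapsed.
        while self._queue and self._queue[0][0] <= now:
            _, _, effect_fn, context = heapq.heappop(self._queue)
            effect_fn(**context)
```

In the real project the delay would be driven by engine time or async tasks rather than an explicit `tick`, but the structure is the same: one local controller owns sequencing, enrichment, and timing.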

  • In general, Unity seems to leverage multi-threading to a certain extent for Skinned Mesh Renderers. A rigged mesh animates by blending each vertex between the updated positions of the bones it is weighted to. This means a single rigged mesh can be difficult to multi-thread, but different meshes can be resolved across multiple threads with minimal issue.

    My assumption as to why this is constrained to a limited number of threads is that these skinned-mesh animations run at the Fixed Update rate because they have implications for the physics system.

    The implication is that skinned-mesh animation will remain the highest-quality real-time animation approach, but it is currently also the most expensive. The cost is compounded by the number of bones, the number of vertices in the mesh, the number of bones each vertex must blend between, and the cost of whatever drives the bone rotations, such as physics or inverse kinematics.

    Problem - Skinned meshes are generally expensive

    Solution Part 1 - vertex deformation

    To the extent possible, leverage other animation techniques such as shader-based vertex deformation. This works for environmental assets like foliage cards for trees and grass, but also for geometrically definable animation loops like fish, snakes, and jellyfish.
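The math behind a fish-style swim loop is simple enough to show directly. This is the per-vertex function a vertex shader would evaluate on the GPU, written out in Python so the logic is inspectable (amplitude, wavelength, and speed values are illustrative):

```python
import math

def swim_offset(vertex_x, time, amplitude=0.1, wavelength=1.5, speed=2.0):
    """Lateral offset for one vertex of a fish-style swim loop.
    A sine phase travels down the body from head (x=0) to tail; because it is
    driven by engine time alone, no network state is needed and every client
    computes an identical-looking animation."""
    phase = (vertex_x / wavelength - time * speed) * 2.0 * math.pi
    # taper: vertices near the head move less than the tail
    taper = max(0.0, vertex_x)
    return amplitude * taper * math.sin(phase)
```

Each client evaluates the same function against its own engine clock, so the animation stays smooth regardless of update rate or bandwidth, at the cost of the motion being decorative rather than game-state driven.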

    Solution Part 2 - mesh segmentation

    Where shaders are inadequate, use multiple meshes for linked segments. This can work for mechanical joints, insect limb segments, and upper/lower jaw halves. It effectively creates meshes with a single bone each, with the downside of requiring an additional draw call per segment, but the upside of being batchable in the shader as long as the scale doesn’t change.

    Solution Part 3 - particle based animations

    This is less a solution than a commitment: to the extent possible, particle systems will be used to drive effects. This allows engine-optimized management of the mesh lifecycle.

    One helpful approach to minimizing use of the garbage collector is to combine a dynamic object pool (where new resources are created when the pool is too small, then stored for reuse) with particle emitters that initialize with a pool of managed particles.

    For example, the same particle emitter simulating particle behavior in world space could be repositioned and re-fired for all of the same projectile types.
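The dynamic-pool half of that combination can be sketched as follows (Python for illustration; the project’s version would pool Unity emitter GameObjects, and all names here are hypothetical):

```python
class DynamicPool:
    """Grow-on-demand object pool: allocates a new instance only when no
    released one is available, avoiding per-shot allocations (and the
    garbage collector spikes they cause) for things like projectile emitters."""

    def __init__(self, factory):
        self.factory = factory
        self._free = []
        self.created = 0   # instrumentation: how many real allocations happened

    def acquire(self):
        if self._free:
            return self._free.pop()
        self.created += 1
        return self.factory()

    def release(self, obj):
        # Caller is responsible for resetting the object before reuse
        # (e.g. repositioning the emitter before re-firing it).
        self._free.append(obj)

# Usage: all projectiles of one type share a pool; an emitter is repositioned
# and re-fired rather than instantiated per shot.
pool = DynamicPool(factory=lambda: {"kind": "tracer_emitter"})
e1 = pool.acquire()
pool.release(e1)
e2 = pool.acquire()   # reuses e1's instance; no new allocation occurs
```

Pairing this with emitters that pre-allocate their own particle buffers means steady-state firing produces essentially zero garbage.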

Concept Art/Style Guide Stylized Lighting Breakdown
