UI is one area in Unreal games that results in a surprisingly large number of crashes and synchronization bugs. We have begun exploring how to reduce those problems.


To make things concrete, here is a typical health bar widget from an Unreal-based game:

void UHealthBarWidget::ConnectCharacterToWidget(AMyCharacter* CharacterToConnect)
{
    ConnectedCharacter = CharacterToConnect;
    PlayerStartingHealth = GetWorld()->GetGameState<AMyGameState>()->Stats->PlayerStartingHealth;
    ConnectedCharacter->OnHealthChangedDelegate.AddDynamic(this, &UHealthBarWidget::HealthBarVisualizeCharacterHealth);
}


This is pretty standard C++ support code for a widget. Why is this piece of code allegedly more problematic than standard C++ gameplay code?

Gameplay logic and UI logic are often written at different times, by different people. Someone working on UI logic will use whatever state is available in the game objects; someone writing game logic will expose a bunch of parameters to allow future UI logic development to access it, without needing to make further changes to the game logic. Someone working on UI needs a thorough understanding both of how to develop UI logic and of how the gameplay-related objects are structured. Conversely, someone working on gameplay needs a thorough understanding of how the UI logic works when making changes to gameplay-related objects.

The root of the problem is that the UI and gameplay logic operate in two different domains, but there is normally no clear separation between the two. When UI logic accesses gameplay state, it does so for a very specific purpose, and it usually needs only a small subset of the game state.

Game logic sometimes uses events to trigger ripple effects across sets of game objects. The same trigger mechanisms, sometimes even the same triggers, are often used to send events to the UI. This results in similar challenges to those already seen with game state: it is easy to send events to the UI from the middle of tight gameplay loops; the subsequently developed UI logic will be written to rely on being called at that exact point in time; and suddenly it is difficult to modify the surrounding gameplay code without accidentally breaking the UI.

We do not yet have any battle-tested solutions, but we are exploring several different options. Most of these are based on the principle of decoupling game state/logic and UI state/logic.



Instead of having UI logic dig into the bowels of gameplay state, create a number of objects that hold only the state that is necessary for the UI to update itself. Extend the gameplay code so that it writes the required information to the UI state objects when necessary. Allow UI logic to read from the UI state objects, but do not allow it to access gameplay state directly.

Minimize the state in the UI objects: for example, if gameplay objects have a “health” parameter each, but the UI only needs to know whether a gameplay object is dead or alive, the UI state object should only contain a bool, and the game logic should convert health into dead/alive status before updating the UI state.
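A minimal sketch of this idea in plain C++ (Unreal machinery omitted; `FHealthBarUIState` and all field and function names here are invented for illustration):

```cpp
#include <cassert>

// Invented UI state object: holds only what the health bar needs,
// already converted into UI-ready form by the gameplay side.
struct FHealthBarUIState
{
    bool bIsAlive = true;        // derived from health by gameplay code
    float HealthFraction = 1.0f; // 0..1, pre-normalized for the widget
};

// Gameplay side: converts raw gameplay state into UI state.
// Gameplay code decides when this runs and guarantees valid inputs.
void WriteHealthToUIState(float CurrentHealth, float MaxHealth, FHealthBarUIState& OutState)
{
    OutState.bIsAlive = CurrentHealth > 0.0f;
    OutState.HealthFraction = MaxHealth > 0.0f ? CurrentHealth / MaxHealth : 0.0f;
}

// UI side: reads only from the UI state object, never from gameplay objects.
bool ShouldShowDeathOverlay(const FHealthBarUIState& State)
{
    return !State.bIsAlive;
}
```

Note that the health-to-alive conversion happens on the gameplay side, so the UI never sees a raw health value it would have to interpret.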

The UI state objects represent a specification that programmers can read both from a gameplay programming and a UI programming point of view. It removes ambiguity.

The introduction of the UI state means that it is gameplay logic that walks its own data structures. The person who writes the gameplay logic is best suited to understand which pointers / gameplay state can be invalid or uninitialized at which moment, and that person will also tackle those problems before the data is written to the UI state.

UI objects sometimes have a lifetime that is not connected to the lifetime of the gameplay objects; some UI elements will pop up and disappear repeatedly based on user input; other UI elements will persist across level transitions, etc. By moving the determination of UI state to the gameplay side, the evaluation is always done in the context of gameplay logic processing, and it will be easier for programmers to know when the UI state needs to be updated to remain valid. This reduces the need to “Null pointer check everything” that otherwise seeps into UI logic (to avoid UI-related crashes in the short run), and which in turn would result in hard-to-debug UI sync bugs in the long run.

Finally, there is anecdotal evidence that having a separate UI state can increase overall performance of complex UIs, despite the extra memory footprint and extra work required on the gameplay side: the UI state is laid out linearly in memory, whereas traditional pointer chasing sometimes revisits similar state several times, and the separate UI state results in fewer cache misses once the UI grows large.


Let the gameplay layer express all desired changes to UI state via events, with all data included as event payloads. It is the responsibility of the UI side to maintain internal state dependent on these events. This makes the gameplay side very simple, but it results in implicit state machines on the UI side, with lots of room for strange errors. It seems like a simple strategy on the surface, but it requires lots of care on the UI side, especially when things get tricky.
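The shape of this approach, sketched in plain C++ (event and member names invented; Unreal's delegate machinery omitted). Note how the UI side reconstructs state purely from the event stream, which is exactly where the subtle bugs hide:

```cpp
#include <cassert>

// Invented event: everything the UI needs travels in the payload.
struct FDamageTakenEvent
{
    int InstigatorId;
    float NewHealth;
};

// UI side: keeps its own state, rebuilt entirely from events.
// If an event is missed or arrives out of order, this state silently drifts.
class FHealthBarView
{
public:
    void OnDamageTaken(const FDamageTakenEvent& Event)
    {
        LastKnownHealth = Event.NewHealth;
        bShowDeathOverlay = Event.NewHealth <= 0.0f; // implicit state machine
    }

    float LastKnownHealth = 100.0f;
    bool bShowDeathOverlay = false;
};
```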


If the contents of UI state objects can change at any time, then all UI widgets will constantly need to refresh themselves. It would be good if changes to UI state were accompanied by some form of events/notifications. One way of doing this is by making UI state changes transactional: the gameplay side declares when it begins and ends changing a UI state object, and the UI side reacts only when a set of changes is complete. Couple this with a delta check, and the UI will be able to do incremental updates with a minimum of extra code.
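One possible sketch of such a transactional wrapper with a delta check, in plain C++ (all names invented; a real implementation would be generic over the state type):

```cpp
#include <cassert>
#include <functional>

// Invented UI state object, with equality so we can do a delta check.
struct FHealthBarUIState
{
    bool bIsAlive = true;
    float HealthFraction = 1.0f;

    bool operator==(const FHealthBarUIState& Other) const
    {
        return bIsAlive == Other.bIsAlive && HealthFraction == Other.HealthFraction;
    }
};

// Gameplay brackets its writes with BeginUpdate/EndUpdate; the UI is
// notified at most once per completed transaction, and only on real change.
class FTransactionalHealthBarState
{
public:
    void BeginUpdate() { Snapshot = State; }

    FHealthBarUIState& Get() { return State; }

    void EndUpdate()
    {
        if (!(State == Snapshot) && OnChanged)
        {
            OnChanged(State); // delta check: fire only if something changed
        }
    }

    std::function<void(const FHealthBarUIState&)> OnChanged;

private:
    FHealthBarUIState State;
    FHealthBarUIState Snapshot;
};
```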


A naive introduction of UI state and change events results in UI state being updated in the middle of game logic. It would be better to separate the two: have gameplay logic write to an event stream, and let the UI logic process the stream once it is UI update time.
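A minimal event-stream sketch in plain C++ (names invented). The point is that the gameplay-side call is cheap and does no UI work; all UI processing happens when the stream is drained:

```cpp
#include <cassert>
#include <vector>

// Invented event type carried on the stream.
struct FUIEvent
{
    int WidgetId;
    float NewValue;
};

class FUIEventStream
{
public:
    // Called from gameplay code, possibly mid-loop; just buffers.
    void Push(const FUIEvent& Event) { Pending.push_back(Event); }

    // Called once per frame at UI update time; processes and clears the queue.
    template <typename FuncType>
    void Drain(FuncType&& Handler)
    {
        for (const FUIEvent& Event : Pending)
        {
            Handler(Event);
        }
        Pending.clear();
    }

private:
    std::vector<FUIEvent> Pending;
};
```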


Instead of having UI logic dig into gameplay objects and register itself as listening to various gameplay objects, use a publish-subscribe design: the UI logic registers listeners with a message broker, the gameplay logic posts messages to the message broker, and the broker delivers these to the appropriate callbacks in the UI logic.

This way, there is no direct link between gameplay logic and UI logic. This makes it easier to test each part in isolation. It also ties in well with non-instant UI updates; don’t dispatch messages immediately, but buffer them and replay them when it is UI update time. The main drawback is that the loose coupling makes it more difficult to follow the flow from gameplay logic through message bus to UI logic in source code.
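A toy message broker illustrating both the decoupling and the buffer-and-replay behavior, in plain C++ (names and the single-float payload are invented simplifications):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Invented minimal broker keyed by topic name. The UI registers callbacks;
// gameplay posts messages; neither side holds a pointer to the other.
class FMessageBroker
{
public:
    using FHandler = std::function<void(float)>;

    void Subscribe(const std::string& Topic, FHandler Handler)
    {
        Subscribers[Topic].push_back(std::move(Handler));
    }

    // Buffer instead of dispatching immediately (non-instant UI updates).
    void Publish(const std::string& Topic, float Payload)
    {
        Buffered.push_back({Topic, Payload});
    }

    // Replay all buffered messages at UI update time.
    void Flush()
    {
        for (const auto& [Topic, Payload] : Buffered)
        {
            for (const FHandler& Handler : Subscribers[Topic])
            {
                Handler(Payload);
            }
        }
        Buffered.clear();
    }

private:
    std::map<std::string, std::vector<FHandler>> Subscribers;
    std::vector<std::pair<std::string, float>> Buffered;
};
```

The string-keyed topics are also where the drawback shows up: grepping for `"Health"` is the only way to connect publisher and subscriber in source code.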


A typical Widget in an Unreal application serves the roles of View, sometimes also Model, sometimes also Controller. The MVC or MVVM patterns are well-understood and offer default strategies for how to introduce a separate UI state and how to standardize the communication paths to and from a widget.

It is not obvious how the objects involved in an MVC/MVVM model should get created by the Unreal application framework, however. This may also result in a lot of boilerplate. The jury is still out on this one.
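For orientation, the role split might look like this in plain C++ (names invented; this deliberately ignores the object-creation and boilerplate questions raised above):

```cpp
#include <cassert>
#include <string>

// Model: raw gameplay data.
struct FCharacterModel
{
    float Health = 100.0f;
    float MaxHealth = 100.0f;
};

// ViewModel: a view-ready representation derived from the model.
struct FHealthBarViewModel
{
    float BarFillFraction = 1.0f;
    std::string Label;
};

// The mapping step is the only code that knows about both sides;
// the View (the widget) would bind only to the ViewModel.
FHealthBarViewModel MakeViewModel(const FCharacterModel& Model)
{
    FHealthBarViewModel VM;
    VM.BarFillFraction = Model.MaxHealth > 0.0f ? Model.Health / Model.MaxHealth : 0.0f;
    VM.Label = std::to_string(static_cast<int>(Model.Health)) + " / "
             + std::to_string(static_cast<int>(Model.MaxHealth));
    return VM;
}
```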


With a good decoupling of gameplay and UI, we could use different test frameworks for them… if only there were effective testing frameworks for Unreal in the first place. Oh well, that sounds like something further along the roadmap!


There are many ways to approach gameplay and UI decoupling. Regardless of which method you choose, decoupling can reduce the overall complexity of your game and also reduce the number of crashes and bugs you encounter.

Do you want to work on these sorts of problems? We are hiring!