Getting started with Unity DOTS — Part 1: ECS
“Performance by default”
This is what the Unity development team is promising with the new Unity DOTS. The name stands for Data-Oriented Technology Stack and gives you a new way to build and structure your game for extreme performance.
Unity DOTS can be used to build any game you have in mind, but the new tech stack shines the brightest for games with tens of thousands of entities on screen at a time.
DOTS departs from the traditional Unity development cycle. It offers a different approach to architecting your game: the main idea is to take advantage of the CPU’s cache and its SIMD capabilities to achieve incredible performance output.
DOTS is a stack which means that it is a set of many packages:
- Entities (preview)
- C# Job System
- Burst Compiler
- Unity Physics (preview)
- Unity NetCode (preview)
- DSPGraph (experimental)
- Unity Animation (experimental)
- DOTS runtime (preview)
In this first part we will focus on Entities, which is Unity’s implementation of the Entity Component System design pattern.
There is also the C# Job System, which provides an easy way to introduce multithreading to your project, and Burst, a math-optimized compiler that translates IL/.NET code into highly efficient native code using LLVM.
Entity Component System — Overview
You can think of ECS as a pattern just like MVC (Model-View-Controller), MVVM (Model-View-View Model), etc. It is basically a way to structure game code in a data-oriented way for maximum performance.
In short, you have three different concepts to work with:
- Entity — you can think of this as the Unity DOTS game object substitute, only unlike game objects it does not have any default components, layers or tags. It is basically an identifier for a set of DOTS components.
- Component — DOTS components differ from Unity’s default ones. When we write code in a data-oriented way we don’t include logic in any of our components. They are only used to model the data that will be processed by our system.
- System — This is where the logic happens. Systems are scripts that typically read component data and compute the next state of your game.
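To make these three concepts concrete, here is a minimal sketch using the SystemBase API covered later in this article. The Position, Speed and MoveSystem names are illustrative, not from any Unity package:
using Unity.Entities;

// Components: plain data, no logic
public struct Position : IComponentData { public float X; }
public struct Speed : IComponentData { public float Value; }

// System: reads Speed and writes Position for every entity that has both
public class MoveSystem : SystemBase
{
    protected override void OnUpdate()
    {
        float dt = Time.DeltaTime;

        Entities.ForEach((ref Position position, in Speed speed) =>
        {
            position.X += speed.Value * dt;
        }).ScheduleParallel();
    }
}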
How it all works under the hood
Here is a diagram from Unity’s documentation:
In this example, we have 3 entities in our world: A, B and C. The flow can be described like this:
- A system fetches all translation and rotation components in our world.
- It performs calculations in bulk*.
- Finally, it writes the output result back to all of the fetched entities.
*In the above diagram T is an array of all translations, R is an array of all rotations. The multiplication operation represents a Hadamard product — the translation at index i is multiplied by the rotation at index i. The result is an array of values (L2W) computed in bulk.
Now, in the above example Entities A and B have a Renderer component, while Entity C does not. We could have easily told our system to fetch all entities with a Renderer, in which case we would have ignored Entity C. You can also do the reverse and fetch all entities that do not have a Renderer.
Archetypes
To achieve its great performance, ECS groups entities into archetypes. An archetype is basically a classification for entities that have the exact same components.
Archetypes are dynamic — you can add / remove components from entities at runtime to change their archetypes. Adding a Renderer to Entity C would make it of Archetype M.
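As a rough sketch of how an archetype changes at runtime, assuming a hypothetical RenderTag component standing in for the Renderer from the diagram and the Translation / Rotation components from Unity.Transforms:
using Unity.Entities;
using Unity.Transforms;

// Illustrative tag standing in for the Renderer component from the diagram
public struct RenderTag : IComponentData { }

public class ArchetypeExampleSystem : SystemBase
{
    protected override void OnCreate()
    {
        // Entity C starts with only Translation and Rotation (one archetype)
        var entityC = EntityManager.CreateEntity(typeof(Translation), typeof(Rotation));

        // Adding RenderTag moves it into the same archetype as entities A and B
        EntityManager.AddComponent<RenderTag>(entityC);

        // Removing the component moves it back to the original archetype
        EntityManager.RemoveComponent<RenderTag>(entityC);
    }

    protected override void OnUpdate() { }
}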
Memory Chunks
So, why are archetypes useful? The answer is that they map directly onto memory chunks. When ECS allocates memory, it does so in chunks (represented in code by an ArchetypeChunk object). Every chunk contains multiple entities of the same archetype.
This memory organisation provides a faster way to query entities via their components.
Chunk properties:
- When adding a new Entity X and all chunks are full, a new one is allocated for it that will contain only entities of X’s archetype.
- Adding / Removing components from an entity would result in it moving to a different chunk corresponding to its new archetype.
- Entities are not stored in a specific order inside chunks.
- Chunks are tightly-packed — ECS does its best to avoid leaving ‘holes’ in chunks. A new entity will take up the first available slot in existing chunks. The ‘hole’ left by a removed entity would get filled up by the last entity in the chunk.
- Shared components are another criterion for splitting chunks. All entities in a chunk have the same shared component field values. As an example, if all your entities have a shared component with a single enum field, they are grouped into chunks based on the enum value they hold (see the sketch below). You can use this to group similar entities together and optimize memory usage. However, if you overuse this and make everything shared, data that changes often will spread entities across many chunks and cause poor chunk utilization. When you make a component shared, its value lives in a single place in memory; entities do not keep an index to it — the chunk they reside in does.
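Here is a minimal sketch of the shared component described above, assuming an illustrative Team enum; entities that share the same Team value end up grouped in the same chunks:
using Unity.Entities;

public enum Team : byte { Red, Blue }

// Illustrative shared component: all entities in a given chunk have the same Team value,
// so Red and Blue entities never share a chunk
public struct TeamShared : ISharedComponentData
{
    public Team Value;
}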
How systems are organised
Systems are classes that inherit from the SystemBase class. It provides basic lifecycle functions, as well as access to querying and jobs.
public class ExampleSystem : SystemBase
{
protected override void OnCreate() { ... }
protected override void OnUpdate() { ... }
}
ECS organises systems by World and then by group. Every World contains an EntityManager and a set of ComponentSystems. Groups and systems are both ComponentSystems.
By default you get a World with a few predefined groups — initialization, simulation, presentation. Then, all available systems are instantiated and added to the predefined simulation group.
You can specify the update order of systems in the same group via attributes. Use the [UpdateInGroup] attribute to place a system in a group. Use [UpdateBefore] and [UpdateAfter] to order systems within groups. [DisableAutoCreation] prevents a system from being created during the default world initialization. Groups are nestable and can also be ordered. If there is no particular order of execution, ECS computes one deterministically.
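For example, a sketch of these attributes in use (the system names are illustrative):
using Unity.Entities;

[UpdateInGroup(typeof(SimulationSystemGroup))]
public class FirstSystemExample : SystemBase
{
    protected override void OnUpdate() { }
}

// Placed in the same group, but always updated after FirstSystemExample
[UpdateInGroup(typeof(SimulationSystemGroup))]
[UpdateAfter(typeof(FirstSystemExample))]
public class SecondSystemExample : SystemBase
{
    protected override void OnUpdate() { }
}

// Skipped during default world initialization; you would add it to a world manually
[DisableAutoCreation]
public class ManualSystemExample : SystemBase
{
    protected override void OnUpdate() { }
}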
You can also create as many worlds as you need, though most commonly you only have a simulation and rendering world.
Entities
Entities are managed by an EntityManager. The EntityManager is responsible for maintaining a list of all entities within a World and organising entity data for optimal performance.
When you add components to an entity, the EntityManager creates an EntityArchetype struct that you can then use as a template for creating other entities.
Creating entities
You can use the EntityManager.CreateEntity() function to create an entity from an:
- array of components
- EntityArchetype
- existing entity
You can also create arrays and chunks of entities at once.
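A minimal sketch of these creation paths, assuming the Translation and Rotation components from Unity.Transforms and an illustrative SpawnExampleSystem:
using Unity.Collections;
using Unity.Entities;
using Unity.Transforms;

public class SpawnExampleSystem : SystemBase
{
    protected override void OnCreate()
    {
        // 1. From an array of component types
        var single = EntityManager.CreateEntity(typeof(Translation), typeof(Rotation));

        // 2. From an archetype, which acts as a reusable template
        var archetype = EntityManager.CreateArchetype(typeof(Translation), typeof(Rotation));
        var fromArchetype = EntityManager.CreateEntity(archetype);

        // 3. From an existing entity, by copying it
        var clone = EntityManager.Instantiate(single);

        // Creating many entities at once
        var many = new NativeArray<Entity>(100, Allocator.Temp);
        EntityManager.CreateEntity(archetype, many);
        many.Dispose();
    }

    protected override void OnUpdate() { }
}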
Adding and removing components
Altering an entity’s components causes the EntityManager to recompute memory chunks. Therefore, this operation is more expensive than one might initially assume.
Note: You cannot modify shared component data or destroy entities from within a job. That is because jobs can be parallelized — if one thread modified a shared component, all other threads would be working with invalid data. Changes like this are instead performed by filling up an EntityCommandBuffer with commands and executing them after the job is done.
Entity queries
Entity queries are how you query your world for the entities you want to update.
Note: The main way to do this is via Entities.ForEach. However, under the hood, it converts your queries to Entity Queries.
Creating a query
var query = GetEntityQuery(
typeof(Health),
ComponentType.ReadOnly<Speed>()
);
When querying entities, you have to specify whether you will read from or write to each component. In the example above, the assumption is that we will write data to Health, while the Speed component will only be used for reading. Marking components as read-only results in more optimized code.
In lambda queries we use ref to mark that we will write to a parameter component and in to mark it as read-only. ref parameters always come before in parameters.
Entities.ForEach((ref A outputComponent, in B readComponent) => { ... });
EntityQueryDesc
You can specify more complex queries via EntityQueryDesc:
var description = new EntityQueryDesc
{
None = new ComponentType[]{ typeof(Health) },
Any = new ComponentType[]{ typeof(Strength), typeof(Agility) },
All = new ComponentType[]{ typeof(Stamina), typeof(Dexterity) }
};
var query = GetEntityQuery(description);
Do not check for optional components with a query. Use ArchetypeChunk.Has<TOptional>() instead. All entities in a chunk have the same archetype, so checking once per chunk is more efficient than checking every entity.
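A sketch of that per-chunk check inside an IJobChunk job (the IJobChunk API itself is covered later in this article; OptionalData is an illustrative component):
using Unity.Entities;

// Illustrative optional component
public struct OptionalData : IComponentData { public int Value; }

struct OptionalCheckJob : IJobChunk
{
    public ArchetypeChunkComponentType<OptionalData> OptionalType;

    public void Execute(ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex)
    {
        // One check per chunk instead of one per entity
        if (chunk.Has(OptionalType))
        {
            var values = chunk.GetNativeArray(OptionalType);
            // process the entities that do have OptionalData here
        }
    }
}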
Tags
Tags are simply empty components used to make querying easier.
public struct TagComponent : IComponentData { }
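For example, a tag can narrow a query without ever appearing in the lambda body (EnemyTag and the Health component below are illustrative):
using Unity.Entities;

// Tag: empty component, used purely for filtering
public struct EnemyTag : IComponentData { }

// Illustrative data component
public struct Health : IComponentData { public int Value; }

public class EnemyRegenSystem : SystemBase
{
    protected override void OnUpdate()
    {
        // The tag never appears in the lambda parameters; it only narrows the query
        Entities
            .WithAll<EnemyTag>()
            .ForEach((ref Health health) =>
            {
                health.Value += 1;
            }).ScheduleParallel();
    }
}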
WriteGroups
You can give a component a WriteGroup, which is a set of other components. WriteGroups are used to override the output of a system.
Use case: Say we have thousands of soldiers that regenerate their health every second. Some of the soldiers get a buff that doubles their health regeneration. We add a DoubleHealthRegenBuff component to them. Now, we already have a system that updates their health at 1x speed, but we want to replace it with a 2x system. We can use write groups for this.
public struct SoldierHealth : IComponentData { ... }

[WriteGroup(typeof(SoldierHealth))]
public struct DoubleHealthRegenBuff : IComponentData { ... }
// NormalHealthRegenSystem.cs:
...
Entities
.WithEntityQueryOptions(EntityQueryOptions.FilterWriteGroup)
.ForEach((ref SoldierHealth health) => {
// compute health regen at 1x speed
}).ScheduleParallel();
The NormalHealthRegenSystem will do the following when running the query:
- Detect that SoldierHealth is an output component (because of ref).
- Look up the SoldierHealth write group and find DoubleHealthRegenBuff.
- Exclude all soldiers with the DoubleHealthRegenBuff from the query.
This is useful when NormalHealthRegenSystem is in another package. We only need to enable write group filtering in it, and then we can freely add other components to its write group to exclude entities.
This approach also saves computational resources, because we are not calculating buffed soldiers’ health only to have it overridden by a different system.
Query options
Queries also have an optional Options parameter that can have the following values:
- None — default, no options.
- IncludePrefab — include archetypes that contain the special Prefab component
- IncludeDisabled — include archetypes that contain the special Disabled component
- FilterWriteGroup — enable write groups
Usage:
var queryDescription = new EntityQueryDesc
{
...
Options = EntityQueryOptions.FilterWriteGroup
};
Combining queries
var first = new EntityQueryDesc { ... };
var second = new EntityQueryDesc { ... };
var query = GetEntityQuery(new EntityQueryDesc[] { first, second });
Caching queries
Entity queries are typically cached by their systems. However, when comparing two queries, filters are not taken into consideration. If you want to cache a query together with its filters, create it in OnCreate():
public class ExampleSystem : SystemBase
{
private EntityQuery query;
protected override void OnCreate()
{
query = GetEntityQuery(...);
}
}
Filters
One way to look at shared components is that they group entities in memory according to the values of the fields in the shared component struct. If we have a shared component with a single integer field, entities with the same integer value would be in the same chunks in memory.
There are two types of filters: one filters entities based on their shared component field values, the other based on whether a component has changed.
Shared component filters
struct Shared : ISharedComponentData
{
    public int Group;
}

class ExampleSystem : SystemBase
{
    EntityQuery query;

    protected override void OnCreate()
    {
        query = GetEntityQuery(typeof(Shared));
    }

    protected override void OnUpdate()
    {
        query.SetFilter(new Shared { Group = 2 });
        ...
    }
}
Change filters
To filter out all entities whose component value has not changed, use Change filters:
query.SetFilterChanged(typeof(Health));
Note: Change filters work by detecting whether a system has declared write access to a component (via ref). It does not matter if any values actually changed, so always mark your components as read-only if they are not outputs.
Executing queries
You can execute a query with the following methods:
- .ToEntityArray() — returns an array of the selected entities.
- .ToComponentDataArray<T>() — returns the components of type T of the selected entities.
- .CreateArchetypeChunkArray() — the same as .ToEntityArray(), only you get all of the chunks that contain the entities.
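A sketch of these methods in use, assuming the illustrative Health component from the tag example above; each returned container must be disposed manually:
using Unity.Collections;
using Unity.Entities;

public class QueryReadSystem : SystemBase
{
    EntityQuery query;

    protected override void OnCreate()
    {
        query = GetEntityQuery(ComponentType.ReadOnly<Health>());
    }

    protected override void OnUpdate()
    {
        // Materialize the query results on the main thread
        var entities = query.ToEntityArray(Allocator.TempJob);
        var healths = query.ToComponentDataArray<Health>(Allocator.TempJob);
        var chunks = query.CreateArchetypeChunkArray(Allocator.TempJob);

        // ... read the data here ...

        entities.Dispose();
        healths.Dispose();
        chunks.Dispose();
    }
}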
You can also execute a query by passing it to a job that implements IJobChunk:
protected override void OnUpdate()
{
    var job = new ExampleJob() { ... };
    this.Dependency = job.ScheduleParallel(query, this.Dependency);
}
Queries are also computed internally using jobs. By passing the query to the Schedule() method, you schedule the query’s jobs alongside the system’s jobs and take advantage of parallel processing.
Components
Components are what hold the data in your game. We will take a look at the different component types provided by ECS.
Standard components
This is your standard data component. It is implemented via the IComponentData interface.
public struct ExampleComponent : IComponentData
{
public int value;
public float modifier; // reference types such as string are not allowed in a struct IComponentData
}
While traditional Unity components are classes, ECS components are structs. This means they are passed by value, not by reference, so modifying data must be done like this:
var transform = group.transform[i]; // Read
transform.position += deltaPosition; // Modify
group.transform[i] = transform; // Write
Shared components
The interface used to implement shared components is ISharedComponentData.
public struct Shared : ISharedComponentData { ... }
Shared components are automatically reference counted and, as a rule of thumb, should change rarely, as changing them leads to modifying the existing chunks.
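A minimal sketch of adding and changing shared component data through the EntityManager (the RenderGroup component is illustrative):
using Unity.Entities;

// Illustrative shared component
public struct RenderGroup : ISharedComponentData
{
    public int MaterialId;
}

public class SharedComponentExampleSystem : SystemBase
{
    protected override void OnCreate()
    {
        var entity = EntityManager.CreateEntity();

        // Adding or changing a shared value can move the entity to a different chunk
        EntityManager.AddSharedComponentData(entity, new RenderGroup { MaterialId = 1 });
        EntityManager.SetSharedComponentData(entity, new RenderGroup { MaterialId = 2 });
    }

    protected override void OnUpdate() { }
}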
System state components
System state components are implemented via the ISystemStateComponentData interface.
In essence, a state component is identical to a standard component, the only difference is its lifecycle.
When destroying a normal entity, all of the components associated with it are found and deleted, after which its entity id is recycled. However, state components are not deleted when an entity is destroyed, and the entity id is not recycled until they are removed.
Use case: Manually free up any resources you might have allocated for an entity.
Example: Say you have a system that contains a HashMap that keeps game metadata for every entity id. In a normal scenario that does not use state components, upon deleting an entity its id would get recycled for later use. However, our dictionary uses the entity’s id to keep metadata for it. We need to delete this record, so it does not get associated with a newly allocated entity.
public struct ExampleComponent : IComponentData { ... }
public struct ExampleStateComponent : ISystemStateComponentData { }

// ExampleSystem.cs
...
protected override void OnUpdate()
{
    // this query gives all entities that are active in our game
    var activeEntitiesQuery = new EntityQueryDesc()
    {
        All = new ComponentType[]{ typeof(ExampleComponent) },
        None = new ComponentType[]{ typeof(ExampleStateComponent) }
    };

    if (activeEntitiesCount > 0)
    {
        // perform example system logic
    }

    ...

    // here we decide to delete an entity
    PostUpdateCommands.AddComponent<ExampleStateComponent>(entity);
    PostUpdateCommands.DestroyEntity(entity);

    ...

    // this query gives all entities whose deletion still needs processing
    var deletedEntitiesQuery = new EntityQueryDesc()
    {
        All = new ComponentType[]{ typeof(ExampleStateComponent) },
        None = new ComponentType[]{ typeof(ExampleComponent) }
    };

    ...

    if (deletedEntitiesCount > 0)
    {
        // remove entity from hash map
        // Remove the state component; no need to destroy the entity again
        PostUpdateCommands.RemoveComponent<ExampleStateComponent>(entity);
    }
}
Because ExampleStateComponent does not get removed by DestroyEntity(), we can use it to track when an entity has been deleted and perform the cleanup manually.
Dynamic buffer components
Defined with IBufferElementData.
This component allows you to associate dynamic array data with an entity.
public struct ExampleBuffer : IBufferElementData
{
public int Value;
}
Adding this to an entity allocates a dynamic array for it, which can be accessed via an EntityCommandBuffer:
public EntityCommandBuffer.Concurrent ecb;
...
DynamicBuffer<ExampleBuffer> buffer = ecb.AddBuffer<ExampleBuffer>(i, entity);

// Reinterpret ExampleBuffer as a plain int buffer
DynamicBuffer<int> intBuffer = buffer.Reinterpret<int>();
intBuffer.Add(3);
intBuffer.RemoveAt(0);
Buffers resize dynamically, but you can set an initial internal capacity using the [InternalBufferCapacity(X)] attribute. When a buffer exceeds its internal capacity, it allocates heap memory outside the current chunk and moves all elements there.
If you add two buffers of the same type to an entity, references to the first one would be automatically invalidated.
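A sketch of a buffer element with an explicit internal capacity, accessed through the EntityManager (the Waypoint component is illustrative):
using Unity.Entities;

// The first 8 elements are stored inside the chunk; adding more moves the buffer to the heap
[InternalBufferCapacity(8)]
public struct Waypoint : IBufferElementData
{
    public float Value;
}

public class BufferExampleSystem : SystemBase
{
    protected override void OnCreate()
    {
        var entity = EntityManager.CreateEntity();

        DynamicBuffer<Waypoint> buffer = EntityManager.AddBuffer<Waypoint>(entity);
        buffer.Add(new Waypoint { Value = 1f });
        buffer.Add(new Waypoint { Value = 2f });

        // Re-fetch the buffer later through the EntityManager
        var sameBuffer = EntityManager.GetBuffer<Waypoint>(entity);
    }

    protected override void OnUpdate() { }
}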
Chunk components
Chunk components store data common for all entities in a specific chunk.
Use case: If you have objects that are close to each other stored in the same chunk, you can keep a collective bounding box for them in a chunk component. That way you can implement a more optimised frustum / occlusion culling.
Use case: Chunk invalidation. You can keep track of processing data in a chunk component, so you only process chunks that have been modified in some way.
Chunk components are stored on a per-chunk basis, however, they are still part of the entities’ archetype. Therefore, removing a chunk component would lead to moving the entities to a different chunk.
If you change the value of a chunk component via an entity, all entities in the same chunk are affected.
If you then change an entity’s archetype and move it to a different chunk that has the same chunk component, the value of that chunk component will not change.
Chunk components are declared like standard components, but you use different functions to work with them.
Creating a chunk component
public struct ChunkComponent : IComponentData { ... }
Adding a chunk component to an entity
EntityManager.AddChunkComponentData<ChunkComponent>(entity);
Querying
var readWrite = new EntityQueryDesc()
{
    All = new ComponentType[]
        { ComponentType.ChunkComponent<ChunkComponent>() }
};

var readOnly = new EntityQueryDesc()
{
    All = new ComponentType[]
        { ComponentType.ChunkComponentReadOnly<ChunkComponent>() }
};
Creating archetypes
You must specify chunk components explicitly when creating entities from an archetype.
EntityManager.CreateArchetype(
ComponentType.ChunkComponent(typeof(ChunkComponent)));
Reading chunk components
Some useful EntityManager functions when working with entities that have chunk components:
EntityManager.AddChunkComponentData<ChunkComponent>(entity);
EntityManager.SetChunkComponentData<ChunkComponent>(chunk, value);
EntityManager.HasChunkComponent<ChunkComponent>(entity);
EntityManager.GetChunkComponentData<ChunkComponent>(chunk);
EntityManager.GetChunkComponentData<ChunkComponent>(entity);
EntityManager.RemoveChunkComponent<ChunkComponent>(entity);
Systems
In general, all of your systems will extend the SystemBase class.
Creating systems
Every system has the following lifecycle:
All system events run on the main thread. Ideally, the OnUpdate() function should schedule the jobs that perform most of the work.
Scheduling jobs
There are four different mechanisms to schedule jobs from a system.
Using Entities.ForEach in your system
In short, Entities.ForEach executes a lambda function you define over all the entities selected by an entity query.
To execute it, use Schedule() or ScheduleParallel() to run the job on background threads, or Run() to run it on the main thread.
class ExampleSystem : SystemBase
{
protected override void OnUpdate()
{
Entities
.ForEach((ref ComponentA a, in ComponentB b) =>
{
// Read b and write to a
})
.Schedule();
}
}
Note: Notice the use of ref and in to mark write access to components. Entities.ForEach can automatically infer that you want to fetch these components based on your lambda’s parameters.
Similarly to an EntityQuery, Entities.ForEach supports All, Any and None:
Entities.WithAll<A>()
.WithAny<X, Y, Z>()
.WithNone<P>()
.ForEach((ref M output, in N input) => ...)
.Schedule();
You can also enable change filtering:
Entities
.WithChangeFilter<T>()
.ForEach(...)
.ScheduleParallel();
And shared component filtering:
Entities
.WithSharedComponentFilter(sharedComponent)
.ForEach(...)
.ScheduleParallel();
Defining the ForEach function
A typical Entities.ForEach function may look like this:
Entities.ForEach(
(Entity entity,
int entityInQueryIndex,
ref T componentT,
in H componentH) => { ... })
You may pass up to 8 parameters to ForEach, but they must be in the following order:
- Pass-by-value parameters (special parameters)
- Writable parameters via ref
- Read-only parameters via in
Components are structs, which are passed by value. That is why writing requires ref and also why writing to a component passed via in will not update it (because you would be modifying a copy).
Note: You cannot pass chunk components to Entities.ForEach.
Special ForEach parameters
ForEach supports a few special parameters that you have to specify at the beginning of the parameter list:
- Entity entity — an instance of the current entity.
- int entityInQueryIndex — the index of the entity in the list of all entities selected by the query.
- int nativeThreadIndex — a unique index for the thread executing the current iteration of the lambda function.
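A sketch that uses entityInQueryIndex to write each entity's value into a unique slot of a captured NativeArray. The SnapshotSystem name is illustrative and the Health component is the one sketched earlier; the job is scheduled with Schedule() to keep the example simple:
using Unity.Collections;
using Unity.Entities;

public class SnapshotSystem : SystemBase
{
    protected override void OnUpdate()
    {
        var query = GetEntityQuery(ComponentType.ReadOnly<Health>());
        var snapshot = new NativeArray<int>(query.CalculateEntityCount(), Allocator.TempJob);

        Entities
            .ForEach((Entity entity, int entityInQueryIndex, in Health health) =>
            {
                // entityInQueryIndex gives each entity its own slot in the captured array
                snapshot[entityInQueryIndex] = health.Value;
            }).Schedule();

        // Wait for the job before reading or disposing the captured container
        this.CompleteDependency();
        snapshot.Dispose();
    }
}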
Native containers
Native containers in ECS are data structures that provide a safe C# wrapper around native memory. They allow jobs to access main thread data directly instead of working with a copy. Containers have to be deallocated manually.
Example: NativeArray gives you a linear memory layout and is not garbage collected.
There are also NativeList, NativeHashMap, NativeMultiHashMap and NativeQueue. More on them in Part 2.
Capturing variables
A variable that is used inside a lambda expression, but declared outside of it is called a captured variable.
- Only native containers and blittable types can be captured.
- A job can only write to captured variables that are native containers. To return a single value, create a native array with one element.
Using Job.WithCode in your system
This function allows you to easily run a lambda as a single background job.
The following example computes the sum of the numbers from 1 to 10. The result has to be stored in a NativeArray to get data out of the job.
public class SumJob : SystemBase
{
    protected override void OnUpdate()
    {
        var result = new NativeArray<int>(1, Allocator.TempJob);

        Job.WithCode(() =>
        {
            for (int i = 1; i <= 10; i++)
            {
                result[0] += i;
            }
        }).Schedule();

        // wait for the job to finish before reading or disposing the captured array
        this.CompleteDependency();
        result.Dispose();
    }
}
Note: Job.WithCode does not take any parameters, but you can capture variables, like we did in the example above. Captured variables must be native containers or blittable types. You can ‘return’ data by writing it to a native array.
You can run the lambda function in two ways:
- Schedule() — runs the function as a single, non-parallel job. The code runs on a background thread and thus can take better advantage of available CPU resources.
- Run() — executes the function immediately on the main thread.
Note: Job.WithCode can be Burst compiled in most cases, so executing code there is faster.
Using IJobChunk jobs in your system
Implement the IJobChunk interface to iterate over your data chunk by chunk. You create the job and schedule it in the OnUpdate() function, passing an EntityQuery as a parameter.
Example: We will heal all our soldiers by 20 health points using IJobChunk.
// in a system class
[BurstCompile]
struct HealJob : IJobChunk
{
    public ArchetypeChunkComponentType<SoldierHealth> HealthType;

    public void Execute(ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex)
    {
        var soldierHealths = chunk.GetNativeArray(HealthType);
        for (var i = 0; i < soldierHealths.Length; i++)
        {
            soldierHealths[i] = new SoldierHealth
            {
                Health = soldierHealths[i].Health + 20
            };
        }
    }
}

protected override void OnUpdate()
{
    var isReadOnly = false;
    var job = new HealJob
    {
        HealthType = GetArchetypeChunkComponentType<SoldierHealth>(isReadOnly)
    };

    this.Dependency = job.ScheduleParallel(query, this.Dependency);
}
Note: Do not cache the value returned by GetArchetypeChunkComponentType<T>(); fetch it again in every OnUpdate() call.
Using manual iteration in a system
You can request all chunks explicitly and then use an IJobParallelFor job to process them in parallel.
public class HealthSystem : SystemBase
{
    [BurstCompile]
    struct HealingJob : IJobParallelFor
    {
        [DeallocateOnJobCompletion]
        public NativeArray<ArchetypeChunk> Chunks;
        public ArchetypeChunkComponentType<Health> HealthType;

        public void Execute(int chunkIndex)
        {
            var chunk = Chunks[chunkIndex];
            var healthChunk = chunk.GetNativeArray(HealthType);
            for (int i = 0; i < chunk.Count; i++)
            {
                var health = healthChunk[i];
                health.Value += 20;
                healthChunk[i] = health;
            }
        }
    }

    EntityQuery query;

    protected override void OnCreate()
    {
        var queryDesc = new EntityQueryDesc
        {
            All = new ComponentType[]{ typeof(Health) }
        };
        query = GetEntityQuery(queryDesc);
    }

    protected override void OnUpdate()
    {
        var healthType = GetArchetypeChunkComponentType<Health>();
        var chunks = query.CreateArchetypeChunkArray(Allocator.TempJob);

        var healingJob = new HealingJob
        {
            Chunks = chunks,
            HealthType = healthType,
        };

        this.Dependency = healingJob.Schedule(chunks.Length, 32, this.Dependency);
    }
}
Use this if you need to manage chunks in a more complicated manner where the normal simplified model would not be appropriate.
Job dependencies
Unity analyzes system dependencies based on the components each system reads and modifies. If a system reads data that another system then modifies, or vice versa, the second system depends on the first one. To prevent race conditions, the job scheduler makes sure that all the jobs a system depends on have finished before running that system’s jobs.
Sync points
Sync points are points in your code where execution has to wait for all scheduled jobs to finish. Sync points prevent you from using worker threads, so you should avoid them.
Sync points are caused by structural changes:
- Creating / Deleting entities
- Adding / Removing components from an entity
- Modifying shared component data
In other words, anything that changes an entity’s archetype or reorders entities in a chunk is a structural change. They can only happen on the main thread and, therefore, require a sync point.
Entity Command Buffers
ECBs allow you to queue up changes so they can later take effect on the main thread. They are helpful when:
- You do not have access to an EntityManager.
- Applying the change directly would create a sync point.
When using ECBs from parallel jobs, you must make sure you are using EntityCommandBuffer.Concurrent.
Entity Command Buffer Systems
ECB systems allow you to play back queued commands at a clearly defined point in the frame. The default initialization world contains 3 main system groups — initialization, simulation and presentation. Every group contains two ECB systems — one that runs before and one that runs after all other systems in the group. You can fetch these systems and play back your commands at that point in the update cycle.
Example: Deleting an entity at the end of the simulation group update.
struct SoldierHealth : IComponentData
{
public int Value;
}
class LifetimeSystem : SystemBase
{
EndSimulationEntityCommandBufferSystem ecbSystem;
protected override void OnCreate()
{
base.OnCreate();
ecbSystem = World
.GetOrCreateSystem<EndSimulationEntityCommandBufferSystem>();
}
protected override void OnUpdate()
{
var ecb = ecbSystem.CreateCommandBuffer().ToConcurrent();

Entities
.ForEach((Entity entity, int entityInQueryIndex, in SoldierHealth health) =>
{
if (health.Value <= 0)
{
ecb.DestroyEntity(entityInQueryIndex, entity);
}
}).ScheduleParallel();
// Make sure that the ECB system knows about our job
ecbSystem.AddJobHandleForProducer(this.Dependency);
}
}
Use case: Entity Command Buffer Systems are useful to batch all structural changes into a single ECB system, so you can have only one sync point.
Version numbers
Version numbers (generations) detect changes. They are used to skip calculations when no data has been modified.
Check whether Version B is more recent than Version A with the following logic:
bool result = (VersionB - VersionA) > 0;
There is no guarantee by how much a version number would increase.
The following ECS concepts use versioning:
- Entity — EntityId.Version increases every time an entity is destroyed. Entity ids are recycled, so a version mismatch means the previous entity was destroyed.
- World — World.Version increases every time a system is created / destroyed.
- EntityDataManager — EntityDataManager.GlobalVersion increases before every job component system update.
- System — System.LastSystemVersion takes the value of EntityDataManager.GlobalVersion after every job component system update.
- Chunk — Chunk.ChangeVersion is an array of versions, one for every component type in the chunk’s archetype. Each value is set to EntityDataManager.GlobalVersion every time the corresponding component type is accessed as writable by a system.
- EntityManager.m_ComponentTypeOrderVersion[]
- SharedComponentDataManager.m_SharedComponentVersion[]
Authoring
Authoring scripts are standard MonoBehaviour scripts attached to game objects that ECS then converts to entities.
During game object conversion, ECS takes existing components like Transform and replaces them with ECS components, like LocalToWorld.
You can implement IConvertGameObjectToEntity in a MonoBehaviour to customize the conversion steps.
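A minimal authoring sketch, assuming the illustrative Speed component from the overview section; the SpeedAuthoring name is not from any package:
using Unity.Entities;
using UnityEngine;

// Authoring MonoBehaviour: lives on a game object, produces ECS data during conversion
public class SpeedAuthoring : MonoBehaviour, IConvertGameObjectToEntity
{
    public float Value = 5f;

    public void Convert(Entity entity, EntityManager dstManager,
        GameObjectConversionSystem conversionSystem)
    {
        // Write the authored value into the converted entity's Speed component
        dstManager.AddComponentData(entity, new Speed { Value = Value });
    }
}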
There are two ways to convert game objects to entities:
- With a ConvertToEntity MonoBehaviour — this converts the game object at runtime.
- With a SubScene — by adding game objects to a subscene you can save them to disk as entities. Loading this data at runtime is very fast.
Note: The Entities documentation explicitly states that you should expect many changes to the current authoring workflow.
Generated authoring components
Say you create an ECS component and want to add it to a game object in your scene.
You can generate a corresponding authoring MonoBehaviour with the [GenerateAuthoringComponent] attribute.
[GenerateAuthoringComponent]
public struct ExampleComponent : IComponentData { ... }
You can also generate authoring components for IBufferElementData:
[GenerateAuthoringComponent]
public struct BufferElement: IBufferElementData { ... }
This generates a corresponding MonoBehaviour with a List field.
Rendering Entities
The Hybrid Renderer is responsible for rendering entities in DOTS.
Note: You need this package installed to render entities.
Common patterns
Some useful patterns to follow while writing DOTS code.
Static methods in Entities.ForEach
public class ExampleSystem : SystemBase
{
protected override void OnUpdate()
{
Entities
.WithName("ExampleSystem_ForEach")
.ForEach((ref Example example)
=> Calculate(ref example))
.ScheduleParallel();
}
static void Calculate(ref Example example) { ... }
}
This has the following benefits:
- The Calculate() logic is more reusable.
- OnUpdate() is cleaner and more readable.
- Calculate() can be Burst-compiled for better performance.
Encapsulate data and method in a captured value-type
public class ExampleSystem : SystemBase
{
    struct OperationData
    {
        float m_Data;
        float m_DeltaTime;

        public OperationData(float data, float deltaTime)
            => (m_Data, m_DeltaTime) = (data, deltaTime);

        public void Calculate(ref Example example) { ... }
    }

    protected override void OnUpdate()
    {
        var input = ...;
        var operation = new OperationData(input, Time.DeltaTime);

        Entities.ForEach((ref Example example)
            => operation.Calculate(ref example))
            .ScheduleParallel();
    }
}
This pattern helps organise data and logic into a single unit.
Note: This pattern copies data into your job structs. There can be overhead when working with large job structs, and that can be a sign that you need to split your job into smaller ones.
Conclusion
This concludes the introductory tutorial on Entities. Next up we will take a look into the C# Job System and the Burst compiler.