The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). Additional documentation and release notes are available at [Multiplayer Documentation](https://docs-multiplayer.unity3d.com).

## [1.1.0] - 2022-10-21

### Added

- Added the `NetworkManager.IsApproved` flag, which is set to `true` once a client has been approved. (#2261)
- `UnityTransport` now provides a way to set the Relay server data directly from the `RelayServerData` structure (provided by the Unity Transport package) through its `SetRelayServerData` method. This makes it possible to use the new APIs in UTP 1.3 that simplify integration of the Relay SDK (see the sketches after this changelog). (#2235)
- IPv6 is now supported for direct connections when using `UnityTransport`. (#2232)
- Added WebSocket support when using UTP 2.0. The `UseWebSockets` property on the `NetworkManager`'s `UnityTransport` component selects WebSockets for communication; when building for WebGL, this selection happens automatically (see the sketches after this changelog). (#2201)
- Added position, rotation, and scale to the `ParentSyncMessage`, which gives users the ability to specify the final values on the server side when `OnNetworkObjectParentChanged` is invoked, just before the message is created (when the `Transform` values are applied to the message). (#2146)
- Added the `NetworkObject.TryRemoveParent` method as a convenience, rather than having to cast `null` to either `GameObject` or `NetworkObject` (see the sketches after this changelog). (#2146)

### Changed

- Updated the `UnityTransport` dependency on `com.unity.transport` to 1.3.0. (#2231)
- The send queues of `UnityTransport` are now dynamically sized, so there should no longer be any need to tweak the 'Max Send Queue Size' value. This field has been removed from the inspector and is no longer serialized. It is still possible to set it manually through the `MaxSendQueueSize` property, but doing so is not recommended outside of specific needs (e.g. limiting the amount of memory used by the send queues in very constrained environments). (#2212)
- As a consequence of the above change, the `UnityTransport.InitialMaxSendQueueSize` field is now deprecated. There is no default value anymore, since send queues are dynamically sized. (#2212)
- The debug simulator in `UnityTransport` is now non-deterministic. Its random number generator used to be seeded with a constant value, leading to the same pattern of packet drops, delays, and jitter in every run. (#2196)
- `NetworkVariable<>` now supports managed `INetworkSerializable` types, as well as other managed types with serialization/deserialization delegates registered to `UserNetworkVariableSerialization<T>.WriteValue` and `UserNetworkVariableSerialization<T>.ReadValue` (see the sketches after this changelog). (#2219)
- `NetworkVariable<>` and `BufferSerializer<BufferSerializerReader>` now deserialize `INetworkSerializable` types in place, rather than constructing new ones. (#2219)

### Fixed

- Fixed `NetworkManager.ApprovalTimeout` timing out due to slower client synchronization times; it now uses the added `NetworkManager.IsApproved` flag to determine whether the client has been approved. (#2261)
- Fixed an issue caused when changing ownership of objects hidden from some clients. (#2242)
- Fixed an issue where an in-scene placed `NetworkObject` would not invoke `NetworkBehaviour.OnNetworkSpawn` if the `GameObject` was disabled when it was despawned. (#2239)
- Fixed an issue where clients were not rebuilding the `NetworkConfig` hash value for each unique connection request. (#2226)
- Fixed an issue where player objects were not taking the `DontDestroyWithOwner` property into consideration when a client disconnected. (#2225)
- Fixed an issue where `SceneEventProgress` would not complete if a client late-joins while it is still in progress. (#2222)
- Fixed an issue where `SceneEventProgress` would not complete if a client disconnects. (#2222)
- Fixed issues with detecting whether a `SceneEventProgress` has timed out. (#2222)
- Fixed issue #1924 where `UnityTransport` would fail to restart after a first failure (even if the cause of the initial failure was addressed). (#2220)
- Fixed an issue where `NetworkTransform.SetStateServerRpc` and `NetworkTransform.SetStateClientRpc` were not honoring local versus world space settings when applying the position and rotation. (#2203)
- Fixed an ILPP `TypeLoadException` on WebGL in the macOS Editor and potentially other platforms. (#2199)
- Implicit conversion of `NetworkObjectReference` to `GameObject` now returns `null` instead of throwing an exception if the referenced object could not be found (i.e., was already despawned). (#2158)
- Fixed a warning resulting from a stray `NetworkAnimator.meta` file. (#2153)
- Fixed the Connection Approval Timeout not working client-side. (#2164)
- Fixed an issue where the `WorldPositionStays` parenting parameter was not being synchronized with clients. (#2146)
- Fixed an issue where parented in-scene placed `NetworkObject`s would fail for late-joining clients. (#2146)
- Fixed an issue where scale was not being synchronized, which caused issues with nested parenting and scale when `WorldPositionStays` was `true`. (#2146)
- Fixed an issue with `NetworkTransform.ApplyTransformToNetworkStateWithInfo` where it was not honoring axis sync settings when `NetworkTransformState.IsTeleportingNextFrame` was `true`. (#2146)
- Fixed an issue with `NetworkTransform.TryCommitTransformToServer` where it was not honoring the `InLocalSpace` setting. (#2146)
- Fixed ClientRpcs always reporting in the profiler view as going to all clients, even when limited to a subset of clients by `ClientRpcParams`. (#2144)
- Fixed RPC codegen failing to choose the correct extension methods for `FastBufferReader` and `FastBufferWriter` when the parameters were a generic type (e.g., `List<int>`) and extensions for multiple instantiations of that type have been defined (e.g., `List<int>` and `List<string>`). (#2142)
- Fixed an issue where, when running a dedicated server (i.e., not a host), the second player would not receive updates unless a third player joined. (#2127)
- Fixed an issue where late-joining client transition synchronization could fail when more than one transition was occurring. (#2127)
- Fixed an exception thrown in `OnNetworkUpdate` causing other `OnNetworkUpdate` calls to not be executed. (#1739)
- Fixed synchronization when `Time.timeScale` is set to 0. Timing updates now use unscaled delta time, so the network update rate is independent of the local time scale. (#2171)
- Fixed not sending all `NetworkVariable`s to all clients when a client connects to a server. (#1987)
- Fixed `IsOwner`/`IsOwnedByServer` being wrong on the server after calling `RemoveOwnership`. (#2211)
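The sketches below illustrate a few of the entries above. They are minimal, hedged examples rather than official samples. First, the managed-type support in `NetworkVariable<>` (#2219): the `GuildInfo` type and the exact delegate shapes `(FastBufferWriter, in T)` / `(FastBufferReader, out T)` are assumptions for illustration; only the `UserNetworkVariableSerialization<T>.WriteValue` and `ReadValue` entry points come from the changelog.

```csharp
using Unity.Netcode;

// Hypothetical managed type used only for this example.
public class GuildInfo
{
    public string Name;
    public int MemberCount;
}

public static class GuildInfoSerialization
{
    // Call once during startup, before any NetworkVariable<GuildInfo> is used.
    public static void Register()
    {
        // Assumed delegate shapes: (FastBufferWriter, in T) and (FastBufferReader, out T).
        UserNetworkVariableSerialization<GuildInfo>.WriteValue = (FastBufferWriter writer, in GuildInfo value) =>
        {
            writer.WriteValueSafe(value.Name);
            writer.WriteValueSafe(value.MemberCount);
        };

        UserNetworkVariableSerialization<GuildInfo>.ReadValue = (FastBufferReader reader, out GuildInfo value) =>
        {
            value = new GuildInfo();
            reader.ReadValueSafe(out value.Name);
            reader.ReadValueSafe(out value.MemberCount);
        };
    }
}
```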
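For the Relay change (#2235), a minimal sketch of the intended call shape follows. The `RelayBootstrap` helper and the way the `RelayServerData` is obtained (e.g. from the Relay SDK's allocation utilities) are assumptions; only the `UnityTransport.SetRelayServerData(RelayServerData)` entry point comes from the changelog.

```csharp
using Unity.Netcode;
using Unity.Netcode.Transports.UTP;
using Unity.Networking.Transport.Relay;

public static class RelayBootstrap
{
    // Hypothetical helper: hand a RelayServerData (built elsewhere from a Relay
    // allocation or join code) directly to the transport before starting.
    public static void ConfigureRelay(RelayServerData relayServerData)
    {
        var transport = NetworkManager.Singleton.GetComponent<UnityTransport>();
        transport.SetRelayServerData(relayServerData);
    }
}
```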
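Similarly, the `UseWebSockets` (#2201) and `MaxSendQueueSize` (#2212) settings can be driven from code. This sketch assumes both are writable members on the `UnityTransport` component and that the transport is configured before `StartClient`/`StartServer` is called.

```csharp
using Unity.Netcode;
using Unity.Netcode.Transports.UTP;
using UnityEngine;

public class TransportSetup : MonoBehaviour
{
    private void Start()
    {
        var transport = NetworkManager.Singleton.GetComponent<UnityTransport>();

        // Only meaningful when running against UTP 2.0; on WebGL builds the
        // WebSocket path is selected automatically.
        transport.UseWebSockets = true;

        // Not recommended in general now that send queues grow dynamically, but
        // still available for memory-constrained scenarios.
        transport.MaxSendQueueSize = 512 * 1024;
    }
}
```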
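Finally, for the parenting additions (#2146), a small sketch of `NetworkObject.TryRemoveParent` compared with the previous null-cast approach. The surrounding component and the server-side check are illustrative assumptions only.

```csharp
using Unity.Netcode;

public class DetachOnRequest : NetworkBehaviour
{
    // Server-side detach. Previously this required something like
    // NetworkObject.TrySetParent((NetworkObject)null); the new method is more direct.
    public void Detach()
    {
        if (IsServer)
        {
            NetworkObject.TryRemoveParent();
        }
    }
}
```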
using System;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Networking.Transport;

namespace Unity.Netcode.Transports.UTP
{
    /// <summary>Queue for batched messages meant to be sent through UTP.</summary>
    /// <remarks>
    /// Messages should be pushed on the queue with <see cref="PushMessage"/>. To send batched
    /// messages, call <see cref="FillWriterWithMessages"/> or <see cref="FillWriterWithBytes"/>
    /// with the <see cref="DataStreamWriter"/> obtained from <see cref="NetworkDriver.BeginSend"/>.
    /// This will fill the writer with as many messages/bytes as possible. If the send is
    /// successful, call <see cref="Consume"/> to remove the data from the queue.
    ///
    /// This is meant as a companion to <see cref="BatchedReceiveQueue"/>, which should be used to
    /// read messages sent with this queue.
    /// </remarks>
    internal struct BatchedSendQueue : IDisposable
    {
        // Note that we're using NativeList basically like a growable NativeArray, where the length
        // of the list is the capacity of our array. (We can't use the capacity of the list as our
        // queue capacity because NativeList may elect to set it higher than what we'd set it to
        // with SetCapacity, which breaks the logic of our code.)
        private NativeList<byte> m_Data;
        private NativeArray<int> m_HeadTailIndices;
        private int m_MaximumCapacity;
        private int m_MinimumCapacity;

        /// <summary>Overhead that is added to each message in the queue.</summary>
        public const int PerMessageOverhead = sizeof(int);

        internal const int MinimumMinimumCapacity = 4096;

        // Indices into m_HeadTailIndices.
        private const int k_HeadInternalIndex = 0;
        private const int k_TailInternalIndex = 1;

        /// <summary>Index of the first byte of the oldest data in the queue.</summary>
        private int HeadIndex
        {
            get { return m_HeadTailIndices[k_HeadInternalIndex]; }
            set { m_HeadTailIndices[k_HeadInternalIndex] = value; }
        }

        /// <summary>Index one past the last byte of the most recent data in the queue.</summary>
        private int TailIndex
        {
            get { return m_HeadTailIndices[k_TailInternalIndex]; }
            set { m_HeadTailIndices[k_TailInternalIndex] = value; }
        }

        public int Length => TailIndex - HeadIndex;
        public int Capacity => m_Data.Length;
        public bool IsEmpty => HeadIndex == TailIndex;
        public bool IsCreated => m_Data.IsCreated;

        /// <summary>Construct a new empty send queue.</summary>
        /// <param name="capacity">Maximum capacity of the send queue.</param>
        public BatchedSendQueue(int capacity)
        {
            // Make sure the maximum capacity will be even.
            m_MaximumCapacity = capacity + (capacity & 1);

            // We pick the minimum capacity such that if we keep doubling it, we'll eventually hit
            // the maximum capacity exactly. The alternative would be to use capacities that are
            // powers of 2, but this can lead to over-allocating quite a bit of memory (especially
            // since we expect maximum capacities to be in the megabytes range). The approach taken
            // here avoids this issue, at the cost of not having allocations of nice round sizes.
            m_MinimumCapacity = m_MaximumCapacity;
            while (m_MinimumCapacity / 2 >= MinimumMinimumCapacity)
            {
                m_MinimumCapacity /= 2;
            }

            m_Data = new NativeList<byte>(m_MinimumCapacity, Allocator.Persistent);
            m_HeadTailIndices = new NativeArray<int>(2, Allocator.Persistent);

            m_Data.ResizeUninitialized(m_MinimumCapacity);

            HeadIndex = 0;
            TailIndex = 0;
        }

        public void Dispose()
        {
            if (IsCreated)
            {
                m_Data.Dispose();
                m_HeadTailIndices.Dispose();
            }
        }

        /// <summary>Write a raw buffer to a DataStreamWriter.</summary>
        private unsafe void WriteBytes(ref DataStreamWriter writer, byte* data, int length)
        {
#if UTP_TRANSPORT_2_0_ABOVE
            writer.WriteBytesUnsafe(data, length);
#else
            writer.WriteBytes(data, length);
#endif
        }

        /// <summary>Append data at the tail of the queue. No safety checks.</summary>
        private void AppendDataAtTail(ArraySegment<byte> data)
        {
            unsafe
            {
                var writer = new DataStreamWriter((byte*)m_Data.GetUnsafePtr() + TailIndex, Capacity - TailIndex);

                writer.WriteInt(data.Count);

                fixed (byte* dataPtr = data.Array)
                {
                    WriteBytes(ref writer, dataPtr + data.Offset, data.Count);
                }
            }

            TailIndex += sizeof(int) + data.Count;
        }

        /// <summary>Append a new message to the queue.</summary>
        /// <param name="message">Message to append to the queue.</param>
        /// <returns>
        /// Whether the message was appended successfully. The only way it can fail is if there's
        /// no more room in the queue. On failure, nothing is written to the queue.
        /// </returns>
        public bool PushMessage(ArraySegment<byte> message)
        {
            if (!IsCreated)
            {
                return false;
            }

            // Check if there's enough room after the current tail index.
            if (Capacity - TailIndex >= sizeof(int) + message.Count)
            {
                AppendDataAtTail(message);
                return true;
            }

            // Move the data to the beginning of m_Data. Either it will leave enough space for
            // the message, or we'll grow m_Data and will want the data at the beginning anyway.
            if (HeadIndex > 0 && Length > 0)
            {
                unsafe
                {
                    UnsafeUtility.MemMove(m_Data.GetUnsafePtr(), (byte*)m_Data.GetUnsafePtr() + HeadIndex, Length);
                }

                TailIndex = Length;
                HeadIndex = 0;
            }

            // If there's enough space left at the end for the message, now is a good time to trim
            // the capacity of m_Data if it got very large. We define "very large" here as having
            // more than 75% of m_Data unused after adding the new message.
            if (Capacity - TailIndex >= sizeof(int) + message.Count)
            {
                AppendDataAtTail(message);

                while (TailIndex < Capacity / 4 && Capacity > m_MinimumCapacity)
                {
                    m_Data.ResizeUninitialized(Capacity / 2);
                }

                return true;
            }

            // If we get here we need to grow m_Data until the data fits (or it's too large).
            while (Capacity - TailIndex < sizeof(int) + message.Count)
            {
                // Can't grow m_Data anymore. Message simply won't fit.
                if (Capacity * 2 > m_MaximumCapacity)
                {
                    return false;
                }

                m_Data.ResizeUninitialized(Capacity * 2);
            }

            // If we get here we know there's now enough room for the message.
            AppendDataAtTail(message);
            return true;
        }

        /// <summary>
        /// Fill as much of a <see cref="DataStreamWriter"/> as possible with data from the head of
        /// the queue. Only full messages (and their length) are written to the writer.
        /// </summary>
        /// <remarks>
        /// This does NOT actually consume anything from the queue. That is, calling this method
        /// does not reduce the length of the queue. Callers are expected to call
        /// <see cref="Consume"/> with the value returned by this method afterwards if the data can
        /// be safely removed from the queue (e.g. if it was sent successfully).
        ///
        /// This method should not be used together with <see cref="FillWriterWithBytes"/> since this
        /// could lead to a corrupted queue.
        /// </remarks>
        /// <param name="writer">The <see cref="DataStreamWriter"/> to write to.</param>
        /// <returns>How many bytes were written to the writer.</returns>
        public int FillWriterWithMessages(ref DataStreamWriter writer)
        {
            if (!IsCreated || Length == 0)
            {
                return 0;
            }

            unsafe
            {
                var reader = new DataStreamReader(m_Data.AsArray());

                var writerAvailable = writer.Capacity;
                var readerOffset = HeadIndex;

                while (readerOffset < TailIndex)
                {
                    reader.SeekSet(readerOffset);
                    var messageLength = reader.ReadInt();

                    if (writerAvailable < sizeof(int) + messageLength)
                    {
                        break;
                    }
                    else
                    {
                        writer.WriteInt(messageLength);

                        var messageOffset = readerOffset + sizeof(int);
                        WriteBytes(ref writer, (byte*)m_Data.GetUnsafePtr() + messageOffset, messageLength);

                        writerAvailable -= sizeof(int) + messageLength;
                        readerOffset += sizeof(int) + messageLength;
                    }
                }

                return writer.Capacity - writerAvailable;
            }
        }

        /// <summary>
        /// Fill the given <see cref="DataStreamWriter"/> with as many bytes from the queue as
        /// possible, disregarding message boundaries.
        /// </summary>
        /// <remarks>
        /// This does NOT actually consume anything from the queue. That is, calling this method
        /// does not reduce the length of the queue. Callers are expected to call
        /// <see cref="Consume"/> with the value returned by this method afterwards if the data can
        /// be safely removed from the queue (e.g. if it was sent successfully).
        ///
        /// This method should not be used together with <see cref="FillWriterWithMessages"/> since
        /// this could lead to reading messages from a corrupted queue.
        /// </remarks>
        /// <param name="writer">The <see cref="DataStreamWriter"/> to write to.</param>
        /// <returns>How many bytes were written to the writer.</returns>
        public int FillWriterWithBytes(ref DataStreamWriter writer)
        {
            if (!IsCreated || Length == 0)
            {
                return 0;
            }

            var copyLength = Math.Min(writer.Capacity, Length);

            unsafe
            {
                WriteBytes(ref writer, (byte*)m_Data.GetUnsafePtr() + HeadIndex, copyLength);
            }

            return copyLength;
        }

        /// <summary>Consume a number of bytes from the head of the queue.</summary>
        /// <remarks>
        /// This should only be called with a size that matches the last value returned by
        /// <see cref="FillWriterWithMessages"/> or <see cref="FillWriterWithBytes"/>. Anything else
        /// will result in a corrupted queue.
        /// </remarks>
        /// <param name="size">Number of bytes to consume from the queue.</param>
        public void Consume(int size)
        {
            // Adjust the head/tail indices such that we consume the given size.
            if (size >= Length)
            {
                HeadIndex = 0;
                TailIndex = 0;

                // This is a no-op if m_Data is already at minimum capacity.
                m_Data.ResizeUninitialized(m_MinimumCapacity);
            }
            else
            {
                HeadIndex += size;
            }
        }
    }
}
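A minimal sketch of the send flow the class remarks describe: push messages, fill a writer obtained from `NetworkDriver.BeginSend`, then consume only on success. This is not the actual `UnityTransport` send path; `m_Driver`, `connection`, and `m_SendQueue` are assumed to be set up elsewhere, and error handling is elided.

```csharp
// Assumed context: a NetworkDriver (m_Driver), a NetworkConnection (connection),
// and a BatchedSendQueue (m_SendQueue) that messages have already been pushed onto.
if (m_Driver.BeginSend(connection, out var writer) == 0)
{
    // Copy as many whole messages as fit in the writer; nothing is removed yet.
    var written = m_SendQueue.FillWriterWithMessages(ref writer);

    // EndSend returns a negative value on failure; only consume on success so the
    // data stays queued and can be retried otherwise.
    if (m_Driver.EndSend(writer) >= 0)
    {
        m_SendQueue.Consume(written);
    }
}
```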