mate.wrappers package

Wrapper classes for the Multi-Agent Tracking Environment.

class mate.wrappers.EnhancedObservation(env: BaseEnvironmentType, team: Literal['both', 'camera', 'target', 'none'] = 'both')[source]

Bases: ObservationWrapper

Enhance the agent’s observation by setting all observation masks to True. The targets can also observe the empty status of all warehouses, even when far away.

load_config(config: Dict[str, Any] | str | None = None) None[source]

Reinitialize the Multi-Agent Tracking Environment from a dictionary mapping or a JSON/YAML file.

observation(observation: Tuple[ndarray, ndarray]) Tuple[ndarray, ndarray][source]

Returns a modified observation.

class mate.wrappers.SharedFieldOfView(env: BaseEnvironmentType, team: Literal['both', 'camera', 'target', 'none'] = 'both')[source]

Bases: ObservationWrapper

Share the field of view among agents in the same team by applying the “or” operator over their observation masks. The target agents also share the empty status of warehouses.

load_config(config: Dict[str, Any] | str | None = None) None[source]

Reinitialize the Multi-Agent Tracking Environment from a dictionary mapping or a JSON/YAML file.

observation(observation: Tuple[ndarray, ndarray]) Tuple[ndarray, ndarray][source]

Returns a modified observation.
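
For illustration, a minimal sketch of stacking these two observation wrappers; the environment ID passed to mate.make() is an assumption and may differ in your installation:

    import mate

    # Create the base environment (the environment ID here is an assumption).
    env = mate.make('MultiAgentTracking-v0')

    # Give both teams full observation masks, then share the field of view
    # within each team ("or" over the per-agent observation masks).
    env = mate.wrappers.EnhancedObservation(env, team='both')
    env = mate.wrappers.SharedFieldOfView(env, team='both')

    camera_joint_observation, target_joint_observation = env.reset()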

class mate.wrappers.RescaledObservation(env: MultiAgentTracking)[source]

Bases: ObservationWrapper

Rescale all entity states in the observation to [-1., +1.]. (Not used in the evaluation script.)

observation(observation: Tuple[ndarray, ndarray] | ndarray) Tuple[ndarray, ndarray] | ndarray[source]

Returns a modified observation.

rescale_observation(observation: ndarray, team: Team) ndarray[source]
class mate.wrappers.RelativeCoordinates(env: MultiAgentTracking)[source]

Bases: ObservationWrapper

Convert all locations of other entities in the observation to relative coordinates (excluding the current agent itself). (Not used in the evaluation script.)

observation(observation: Tuple[ndarray, ndarray] | ndarray) Tuple[ndarray, ndarray] | ndarray[source]

Returns a modified observation.

convert_coordinates(observation: ndarray, team: Team) ndarray[source]
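
The two preprocessing wrappers above can be chained; a minimal sketch (the environment ID is an assumption):

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed

    # Express other entities' locations relative to each agent, then rescale
    # all entity states in the observation to [-1., +1.].
    env = mate.wrappers.RelativeCoordinates(env)
    env = mate.wrappers.RescaledObservation(env)
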
class mate.wrappers.MoreTrainingInformation(env: BaseEnvironmentType)[source]

Bases: Wrapper

Add more environment and agent information to the info field of step(), enabling full observability of the environment. (Not used in the evaluation script.)

step(action: Tuple[ndarray, ndarray]) Tuple[Tuple[ndarray, ndarray], Tuple[float, float], bool, Tuple[List[dict], List[dict]]] | Tuple[Tuple[ndarray, ndarray], Tuple[List[float], List[float]], Tuple[List[bool], List[bool]], Tuple[List[dict], List[dict]]][source]

Steps through the environment with action.
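
A sketch of inspecting the enriched info field; the exact keys added by the wrapper are not listed here, and sampling from env.action_space is an assumption about the base environment’s API:

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed
    env = mate.wrappers.MoreTrainingInformation(env)

    env.reset()
    joint_action = env.action_space.sample()  # assumes the joint action space supports sample()
    _, _, _, (camera_infos, target_infos) = env.step(joint_action)
    print(sorted(camera_infos[0]))  # extra per-camera keys added by the wrapper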

class mate.wrappers.DiscreteCamera(env: BaseEnvironmentType, levels: int = 5)[source]

Bases: ActionWrapper

Wrap the environment to allow cameras to use discrete actions.

load_config(config: Dict[str, Any] | str | None = None) None[source]

Reinitialize the Multi-Agent Tracking Environment from a dictionary mapping or a JSON/YAML file.

action(action: Tuple[ndarray, ndarray]) Tuple[ndarray, ndarray][source]

Convert joint action of cameras from discrete to continuous.

reverse_action(action: Tuple[ndarray, ndarray]) Tuple[ndarray, ndarray][source]

Convert joint action of cameras from continuous to discrete.

static discrete_action_grid(levels)[source]
class mate.wrappers.DiscreteTarget(env: BaseEnvironmentType, levels: int = 5)[source]

Bases: ActionWrapper

Wrap the environment to allow targets to use discrete actions.

load_config(config: Dict[str, Any] | str | None = None) None[source]

Reinitialize the Multi-Agent Tracking Environment from a dictionary mapping or a JSON/YAML file.

reset(**kwargs) Tuple[ndarray, ndarray][source]

Resets the environment with kwargs.

action(action: Tuple[ndarray, ndarray]) Tuple[ndarray, ndarray][source]

Convert joint action of targets from discrete to continuous.

reverse_action(action: Tuple[ndarray, ndarray]) Tuple[ndarray, ndarray][source]

Convert joint action of targets from continuous to discrete.

static discrete_action_grid(levels)[source]
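
A sketch of discretizing both teams’ action spaces (the environment ID is an assumption; levels=5 is the default shown in the constructors above):

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed

    # Discrete joint actions are mapped back to continuous ones via action();
    # reverse_action() performs the opposite conversion.
    env = mate.wrappers.DiscreteCamera(env, levels=5)
    env = mate.wrappers.DiscreteTarget(env, levels=5)
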
class mate.wrappers.AuxiliaryCameraRewards(env: BaseEnvironmentType | MultiCamera | MultiTarget, coefficients: Dict[str, float | Callable[[int, int, int, float, float], float]], reduction: Literal['mean', 'sum', 'max', 'min', 'none'] = 'none')[source]

Bases: Wrapper

Add additional auxiliary rewards for each individual camera. (Not used in the evaluation script.)

The auxiliary reward is a weighted sum of the following components:

  • raw_reward (the higher the better): team reward returned by the environment (shared, range in \((-\infty, 0]\)).

  • coverage_rate (the higher the better): coverage rate of all targets in the environment (shared, range in \([0, 1]\)).

  • real_coverage_rate (the higher the better): coverage rate of targets with cargoes in the environment (shared, range in \([0, 1]\)).

  • mean_transport_rate (the lower the better): mean transport rate of the target team (shared, range in \([0, 1]\)).

  • soft_coverage_score (the higher the better): the soft coverage score is proportional to the distance from the target to the camera’s boundary (individual, range in \([-1, N_{\mathcal{T}}]\)).

  • num_tracked (the higher the better): number of targets tracked by the camera (shared, range in \([0, N_{\mathcal{T}}]\)).

  • baseline: constant \(1\).

ACCEPTABLE_KEYS = ('raw_reward', 'coverage_rate', 'real_coverage_rate', 'mean_transport_rate', 'soft_coverage_score', 'num_tracked', 'baseline')
REDUCERS = {'max': numpy.amax, 'mean': numpy.mean, 'min': numpy.amin, 'sum': numpy.sum}
reset(**kwargs) ndarray[source]

Resets the environment with kwargs.

step(action: Tuple[ndarray, ndarray] | ndarray) Tuple[Tuple[ndarray, ndarray], Tuple[List[float], List[float]], Tuple[List[bool], List[bool]], Tuple[List[dict], List[dict]]] | Tuple[ndarray, List[float], List[bool], List[dict]][source]

Steps through the environment with action.

static compute_soft_coverage_scores(env) ndarray[source]

Compute the soft coverage scores for all individual cameras.

static compute_soft_coverage_score(camera, targets, tracked_bits: ndarray) List[float][source]

The soft coverage score is proportional to the distance from the target to the camera’s boundary.
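
A sketch of shaping the camera rewards; the coefficient values below are arbitrary illustrations, and the keys are taken from ACCEPTABLE_KEYS above (a coefficient may also be a callable, as typed in the constructor):

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed
    env = mate.wrappers.AuxiliaryCameraRewards(
        env,
        coefficients={
            'raw_reward': 1.0,           # keep the original team reward
            'coverage_rate': 0.5,        # encourage covering more targets
            'soft_coverage_score': 0.1,  # dense individual shaping term
            'baseline': -0.05,           # small constant penalty per step
        },
        reduction='none',                # keep one auxiliary reward per camera
    )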

class mate.wrappers.AuxiliaryTargetRewards(env: BaseEnvironmentType | MultiCamera | MultiTarget, coefficients: Dict[str, float | Callable[[int, int, int, float, float], float]], reduction: Literal['mean', 'sum', 'max', 'min', 'none'] = 'none')[source]

Bases: Wrapper

Add additional auxiliary rewards for each individual target. (Not used in the evaluation script.)

The auxiliary reward is a weighted sum of the following components:

  • raw_reward (the higher the better): team reward returned by the environment (shared, range in \([0, +\infty)\)).

  • coverage_rate (the lower the better): coverage rate of all targets in the environment (shared, range in \([0, 1]\)).

  • real_coverage_rate (the lower the better): coverage rate of targets with cargoes in the environment (shared, range in \([0, 1]\)).

  • mean_transport_rate (the higher the better): mean transport rate of the target team (shared, range in \([0, 1]\)).

  • normalized_goal_distance (the lower the better): the normalized distance to the destination, or to the nearest non-empty warehouse when the target is not loaded (individual, range in \([0, \sqrt{2}]\)).

  • sparse_delivery (the higher the better): a boolean value that indicates whether the target reaches the destination (individual, range in \(\{0, 1\}\)).

  • soft_coverage_score (the lower the better): the soft coverage score is proportional to the distance from the target to the camera’s boundary (individual, range in \([-1, N_{\mathcal{C}}]\)).

  • is_tracked (the lower the better): a boolean value that indicates whether the target is tracked by any camera (individual, range in \(\{0, 1\}\)).

  • is_colliding (the lower the better): a boolean value that indicates whether the target is colliding with obstacles, cameras’ barriers, or the terrain boundary (individual, range in \(\{0, 1\}\)).

  • baseline: constant \(1\).

ACCEPTABLE_KEYS = ('raw_reward', 'coverage_rate', 'real_coverage_rate', 'mean_transport_rate', 'normalized_goal_distance', 'sparse_delivery', 'soft_coverage_score', 'is_tracked', 'is_colliding', 'baseline')
REDUCERS = {'max': numpy.amax, 'mean': numpy.mean, 'min': numpy.amin, 'sum': numpy.sum}
reset(**kwargs) ndarray[source]

Resets the environment with kwargs.

step(action: Tuple[ndarray, ndarray] | ndarray) Tuple[Tuple[ndarray, ndarray], Tuple[List[float], List[float]], Tuple[List[bool], List[bool]], Tuple[List[dict], List[dict]]] | Tuple[ndarray, List[float], List[bool], List[dict]][source]

Steps through the environment with action.
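
An analogous sketch for the target team; the coefficient values are arbitrary illustrations chosen from ACCEPTABLE_KEYS above:

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed
    env = mate.wrappers.AuxiliaryTargetRewards(
        env,
        coefficients={
            'raw_reward': 1.0,                 # keep the original team reward
            'normalized_goal_distance': -0.5,  # penalize being far from the goal
            'is_tracked': -0.1,                # penalize being tracked by any camera
            'sparse_delivery': 10.0,           # bonus for reaching the destination
        },
        reduction='none',
    )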

mate.wrappers.group_reset(agents: Iterable[AgentType], joint_observation: ndarray | Iterable[ndarray]) None[source]

Reset a group of agents.

mate.wrappers.group_step(env: BaseEnvironmentType, agents: Iterable[AgentType], joint_observation: ndarray | Iterable[ndarray], infos: List[dict] | None = None, deterministic: bool | None = None) List[int | ndarray][source]

Helper function to perform an environment step for a group of agents.

mate.wrappers.group_observe(agents: Iterable[AgentType], joint_observation: ndarray | Iterable[ndarray], infos: List[dict] | None = None) List[int | ndarray][source]

Set the observation for a group of agents.

mate.wrappers.group_communicate(env: BaseEnvironmentType, agents: Iterable[AgentType]) None[source]

Send and receive messages between a group of agents and the environment.

mate.wrappers.group_act(agents: Iterable[AgentType], joint_observation: ndarray | Iterable[ndarray], infos: List[dict] | None = None, deterministic: bool | None = None) List[int | ndarray][source]

Get the joint action of a group of agents.
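
These helpers make it easy to roll out agents on both teams at once. A sketch of a full-environment rollout, where the environment ID, the GreedyCameraAgent / GreedyTargetAgent names, the agents’ spawn() helper, and the num_cameras / num_targets attributes are assumptions about the rest of the MATE API:

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed
    # Built-in agent classes, spawn(), and num_cameras / num_targets are assumptions.
    camera_agents = mate.GreedyCameraAgent().spawn(env.num_cameras)
    target_agents = mate.GreedyTargetAgent().spawn(env.num_targets)

    camera_joint_observation, target_joint_observation = env.reset()
    mate.wrappers.group_reset(camera_agents, camera_joint_observation)
    mate.wrappers.group_reset(target_agents, target_joint_observation)
    camera_infos = target_infos = None

    done = False
    while not done:
        # Let each team observe, communicate, and act through the helpers above.
        camera_joint_action = mate.wrappers.group_step(
            env, camera_agents, camera_joint_observation, camera_infos)
        target_joint_action = mate.wrappers.group_step(
            env, target_agents, target_joint_observation, target_infos)

        (
            (camera_joint_observation, target_joint_observation),
            (camera_team_reward, target_team_reward),
            done,
            (camera_infos, target_infos),
        ) = env.step((camera_joint_action, target_joint_action))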

class mate.wrappers.MultiCamera(env: BaseEnvironmentType, target_agent: TargetAgentBase)[source]

Bases: SingleTeamMultiAgent

Wrap the environment into a single-team multi-agent environment so that users can use the Gym API to train and/or evaluate their camera agents.

class mate.wrappers.SingleCamera(env: BaseEnvironmentType, other_camera_agent: CameraAgentBase, target_agent: TargetAgentBase)[source]

Bases: SingleTeamSingleAgent

Wrap the environment into a single-team single-agent environment so that users can use the Gym API to train and/or evaluate their camera agent.

class mate.wrappers.MultiTarget(env: BaseEnvironmentType, camera_agent: CameraAgentBase)[source]

Bases: SingleTeamMultiAgent

Wrap the environment into a single-team multi-agent environment so that users can use the Gym API to train and/or evaluate their target agents.

class mate.wrappers.SingleTarget(env: BaseEnvironmentType, other_target_agent: TargetAgentBase, camera_agent: CameraAgentBase)[source]

Bases: SingleTeamSingleAgent

Wrap the environment into a single-team single-agent environment so that users can use the Gym API to train and/or evaluate their target agent.
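
A sketch of turning the two-team environment into a single-team Gym-style environment for the camera side; the environment ID and the GreedyTargetAgent opponent name are assumptions:

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed

    # The camera team is exposed through the Gym API, while the opposing
    # target team is driven by the given built-in agent (name assumed).
    env = mate.wrappers.MultiCamera(env, target_agent=mate.GreedyTargetAgent())

    camera_joint_observation = env.reset()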

class mate.wrappers.MessageFilter(env: MultiAgentTracking, filter: Callable[[MultiAgentTracking, Message], bool])[source]

Bases: Wrapper

Filter the intra-team communication messages sent by agents. (Not used in the evaluation script.)

Users can use this wrapper to implement a communication channel with limited bandwidth, limited communication range, or random dropout. This wrapper can be applied multiple times with different filter functions.

Note

The filter function can also modify the message content. Users can use this to add channel signal noise, etc.

send_messages(messages: Message | Iterable[Message]) None[source]

Buffer the messages from an agent to others in the same team.

The environment will deliver the messages to the recipients through the method receive_messages(), and also via the info field of the step() results.
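
A sketch of a custom filter function; it only demonstrates the (env, message) -> bool signature, since the built-in RandomMessageDropout below already implements random dropout properly (the environment ID is an assumption):

    import random

    import mate

    def drop_half(env, message):
        # Return True to deliver the message, False to discard it.
        return random.random() < 0.5

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed
    env = mate.wrappers.MessageFilter(env, filter=drop_half)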

class mate.wrappers.RestrictedCommunicationRange(env: MultiAgentTracking, range_limit: float)[source]

Bases: MessageFilter

Add a restricted communication range to channels. (Not used in the evaluation script.)

static filter(env: MultiAgentTracking, message: Message, range_limit: float) bool[source]

Filter out messages beyond the range limit.

class mate.wrappers.RandomMessageDropout(env: MultiAgentTracking, dropout_rate: float)[source]

Bases: MessageFilter

Randomly drop messages in communication channels. (Not used in the evaluation script.)

static filter(env: MultiAgentTracking, message: Message, dropout_rate: float) bool[source]

Randomly drop messages.

class mate.wrappers.NoCommunication(env: MultiAgentTracking, team: Literal['both', 'camera', 'target', 'none'] = 'both')[source]

Bases: MessageFilter

Disable intra-team communications, i.e., filter out all messages.
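
A sketch of stacking the communication constraints above; the numeric values are arbitrary illustrations, and the environment ID is an assumption:

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed

    env = mate.wrappers.RestrictedCommunicationRange(env, range_limit=500.0)
    env = mate.wrappers.RandomMessageDropout(env, dropout_rate=0.1)

    # Alternatively, disable intra-team communication for both teams entirely:
    # env = mate.wrappers.NoCommunication(env, team='both')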

class mate.wrappers.ExtraCommunicationDelays(env: MultiAgentTracking, delay: int | Callable[[MultiAgentTracking, Message], int] = 3)[source]

Bases: Wrapper

Add extra message delays to communication channels. (Not used in the evaluation script.)

Users can use this wrapper to implement a communication channel with random delays.

reset(**kwargs) Tuple[ndarray, ndarray] | ndarray[source]

Resets the environment with kwargs.

send_messages(messages: Message | Iterable[Message]) None[source]

Buffer the messages from an agent to others in the same team.

The environment will deliver the messages to the recipients through the method receive_messages(), and also via the info field of the step() results.
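
A sketch of adding message delays; a constant delay or a callable (env, message) -> int can be supplied, per the constructor signature (the environment ID is an assumption):

    import random

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed

    # Fixed 3-step delay (the default), or a random delay per message:
    env = mate.wrappers.ExtraCommunicationDelays(
        env, delay=lambda env, message: random.randint(1, 5))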

class mate.wrappers.RenderCommunication(env: MultiAgentTracking, duration: int | None = 20)[source]

Bases: Wrapper

Draw arrows for intra-team communications in rendering results.

load_config(config: Dict[str, Any] | str | None = None) None[source]

Reinitialize the Multi-Agent Tracking Environment from a dictionary mapping or a JSON/YAML file.

reset(**kwargs) Tuple[ndarray, ndarray][source]

Resets the environment with kwargs.

step(action: Tuple[ndarray, ndarray]) Tuple[Tuple[ndarray, ndarray], Tuple[float, float], bool, Tuple[List[dict], List[dict]]][source]

Steps through the environment with action.

callback(unwrapped: MultiAgentTracking, mode: str) None[source]

Draw communication messages as arrows.

class mate.wrappers.RepeatedRewardIndividualDone(env: BaseEnvironmentType | MultiCamera | MultiTarget, target_done_at_destination=False)[source]

Bases: Wrapper

Repeat the reward field and assign an individual done field in the step() results, similar to the OpenAI Multi-Agent Particle Environment. (Not used in the evaluation script.)

step(action: Tuple[ndarray, ndarray] | ndarray) Tuple[Tuple[ndarray, ndarray], Tuple[List[float], List[float]], Tuple[List[bool], List[bool]], Tuple[List[dict], List[dict]]] | Tuple[ndarray, List[float], List[bool], List[dict]][source]

Steps through the environment with action.
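
A sketch of combining this wrapper with a single-team wrapper so that step() returns one reward and one done flag per camera; the environment ID, the GreedyTargetAgent name, and sampling from env.action_space are assumptions:

    import mate

    env = mate.make('MultiAgentTracking-v0')  # environment ID assumed
    env = mate.wrappers.MultiCamera(env, target_agent=mate.GreedyTargetAgent())  # agent class assumed
    env = mate.wrappers.RepeatedRewardIndividualDone(env)

    joint_observation = env.reset()
    joint_action = env.action_space.sample()  # assumes the joint action space supports sample()
    joint_observation, rewards, dones, infos = env.step(joint_action)
    # rewards and dones now hold one entry per camera instead of a single team value.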

mate.wrappers.WrapperMeta

alias of EnvMeta

class mate.wrappers.WrapperSpec(wrapper, *args, **kwargs)[source]

Bases: object

Helper class for creating environments with wrappers.