Co-simulation#

class invertedai.cosimulation.BasicCosimulation(location: str, ego_agent_mask: Optional[List[bool]] = None, monitor_infractions: bool = False, get_birdview: bool = False, random_seed: Optional[int] = None, traffic_lights: bool = False, **kwargs)[source]#

Stateful wrapper around the Inverted AI API to simplify co-simulation. All arguments to initialize() can be passed to the constructor here and a sufficient combination of them must be passed as required by initialize(). This wrapper caches static agent attributes and propagates the recurrent state, so that only the states of ego agents and NPCs need to be exchanged with it to perform co-simulation. Typically, each time step requires a single call to self.step() and a single read of self.npc_states.

This wrapper only supports a minimal co-simulation functionality. For more advanced use cases, call initialize() and drive() directly.
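
For orientation, the following is a minimal construction sketch rather than a definitive recipe: the location string is a placeholder, and agent_count is assumed here to be one of the initialize() arguments accepted through **kwargs. The constructor parameters themselves are documented below.

import invertedai as iai
from invertedai.cosimulation import BasicCosimulation

iai.add_apikey("<your API key>")  # authenticate with the Inverted AI API

# Minimal construction sketch; agent_count is assumed to be forwarded
# to initialize() via **kwargs, and the location string is a placeholder.
cosim = BasicCosimulation(
    location="<location name accepted by initialize()>",
    agent_count=10,                       # total number of agents to spawn
    ego_agent_mask=[True] + [False] * 9,  # the first agent is controlled externally
    monitor_infractions=True,
    random_seed=42,
)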

Parameters:
  • location – Location name as expected by initialize().

  • ego_agent_mask – List indicating which agents are ego, meaning that they are controlled by you externally. The order of this list should match the order of agents in the arguments to initialize().

  • monitor_infractions – Whether to monitor driving infractions, at a small increase in latency and payload size.

  • get_birdview – Whether to render the bird’s eye view of the simulation state at each time step. It drastically increases the payload received from Inverted AI servers and therefore slows down the simulation - use only for debugging.

  • random_seed – Controls the stochastic aspects of simulation for reproducibility.

property agent_attributes: List[AgentAttributes]#

The attributes (length, width, rear_axis_offset) for all agents, including ego.

property agent_count: int#

The total number of agents, both ego and NPCs.

property agent_states: List[AgentState]#

The predicted states for all agents, including ego.

property birdview: Image#

If get_birdview was set in the constructor, this is the image showing the current state of the simulation.

property ego_agent_mask: List[bool]#

Lists which agents are ego, meaning that you control them externally. It can be updated during the simulation, but see the caveats in the user guide regarding the quality of the resulting predictions.

property ego_attributes#

Returns the attributes of ego agents in order. The NPC agents are excluded.

property ego_states#

Returns the predicted states of ego agents in order. The NPC agents are excluded.

property infractions: Optional[List[InfractionIndicators]]#

If monitor_infractions was set in the constructor, lists infractions currently committed by each agent, including ego agents.
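
As a hedged illustration of consuming this property, the sketch below assumes that InfractionIndicators exposes boolean fields collisions, offroad, and wrong_way; verify the field names against the model definitions in your SDK version.

if cosim.infractions is not None:
    for agent_idx, infraction in enumerate(cosim.infractions):
        # collisions / offroad / wrong_way are assumed boolean indicators
        if infraction.collisions or infraction.offroad or infraction.wrong_way:
            print(f"Agent {agent_idx} is committing an infraction: {infraction}")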

property light_states: Optional[Dict[int, TrafficLightState]]#

Returns the traffic light states, if any exist on the map.

property location: str#

Location name as recognized by Inverted AI API.

property npc_states: List[AgentState]#

Returns the predicted states of NPCs (non-ego agents) in order. The predictions for ego agents are excluded.

step(current_ego_agent_states: List[AgentState]) None[source]#

Calls drive() to advance the simulation by one time step. Current states of ego agents need to be provided to synchronize with your local simulator.

Parameters:

current_ego_agent_states – States of ego agents before the step.

Returns:

None - read self.npc_states to retrieve predictions.
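
A sketch of the resulting per-step pattern is shown below. The functions get_ego_states_from_local_sim() and apply_npc_states_to_local_sim() are hypothetical stand-ins for your own simulator's interface.

for _ in range(100):  # number of co-simulation time steps
    # Hypothetical helper: returns List[AgentState] for the ego agents,
    # in the same order as ego_agent_mask.
    ego_states = get_ego_states_from_local_sim()
    cosim.step(ego_states)  # calls drive() internally and updates NPC predictions
    # npc_states is a property; push the new NPC states back into your simulator.
    apply_npc_states_to_local_sim(cosim.npc_states)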