Reinforcement Learning (RL) with Simulate

Simulate is designed to provide easy and scalable integration with reinforcement learning algorithms. The core abstraction is the RLEnv class, which wraps a Scene and allows an Actuator to be manipulated by an external agent or policy.

It is core to the design of Simulate that we are not creating Agents, but rather providing an interface for applications of machine learning and embodied AI. The core RL API constrains the information that flows from the Scene to the external agent through the Actuator abstraction.



At release, we include a set of pre-designed Actors that can act in or navigate a scene. An Actor inherits from Object3D and has sensors, actuators, and action mappings.

Core Classes

Actuator

class simulate.Actuator

( mapping: typing.List[simulate.assets.action_mapping.ActionMapping] actuator_tag: typing.Optional[str] = None n: typing.Optional[int] = None low: typing.Union[float, typing.List[float], numpy.ndarray, NoneType] = None high: typing.Union[float, typing.List[float], numpy.ndarray, NoneType] = None shape: typing.Optional[typing.List[int]] = None dtype: str = 'float32' seed: typing.Optional[int] = None )

Parameters

  • mapping (List[ActionMapping]) — a list of action mappings applied by this actuator
  • actuator_tag (str, optional) — tag identifying the actuator in the scene-level action space (we always have a scene-level gym dict space)
  • n (int or List[int]) — for discrete actions, the number of possible actions; for multi-binary actions, the number of possible binary actions or a list of the number of possible actions for each dimension
  • low (float or List[float] or np.ndarray, optional) — lower bound of continuous action space dimensions, either a float or a list of floats
  • high (float or List[float] or np.ndarray, optional) — upper bound of continuous action space dimensions, either a float or a list of floats
  • shape (List[int], optional) — shape of the continuous action space; should match low/high
  • dtype (str) — sampling type, for continuous action spaces only
  • seed (int, optional) — seed used when sampling from the action space
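As a rough illustration of how these arguments might select an action space, here is a hypothetical sketch (an assumption based on the parameter descriptions above, not Simulate's actual implementation): an integer `n` suggests a discrete space, a list `n` a multi-discrete space, and `low`/`high`/`shape` a continuous box.

```python
def infer_space(n=None, low=None, high=None, shape=None, dtype="float32"):
    # Hypothetical sketch: map Actuator-style arguments onto a gym-like
    # space description. Not Simulate's real logic.
    if n is not None:
        if isinstance(n, int):
            return ("Discrete", n)               # n possible discrete actions
        return ("MultiDiscrete", list(n))        # one action count per dimension
    if low is not None and high is not None:
        return ("Box", low, high, shape, dtype)  # bounded continuous space
    raise ValueError("need either `n` or `low`/`high` bounds")
```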

An Actuator can be used to move an Asset in the scene. It is designed to be part of an Actor that manipulates the scene.

RLEnv

class simulate.RLEnv


( scene_or_map_fn: typing.Union[typing.Callable, simulate.scene.Scene] n_maps: typing.Optional[int] = 1 n_show: typing.Optional[int] = 1 time_step: typing.Optional[float] = 0.03333333333333333 frame_skip: typing.Optional[int] = 4 **engine_kwargs )

RL environment wrapper for a Simulate scene. Uses functionality from the VecEnv in Stable Baselines 3; for more information on VecEnv, see https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html

reset


( ) → obs (Dict)

Resets the actors and the scene of the environment.

Returns

obs (Dict) — the observation of the environment after reset.

sample_action


( ) → action

Samples an action from the actors in the environment. This function loads the configuration of maps and actors to return the correct shape across multiple configurations.

Returns

action — a sampled action whose shape matches the current configuration of maps and actors.
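To make the nested layout concrete, here is a toy sketch (a hypothetical helper, not part of the library) that samples a dict of actions keyed by actuator tag, with the (n_show, n_actors, n_actions) shape that step() expects:

```python
import random

def sample_action_dict(actuator_tags, n_show, n_actors, n_actions):
    """Toy sketch of the nested action layout: one entry per actuator
    tag, each shaped (n_show, n_actors, n_actions)."""
    return {
        tag: [
            [[random.uniform(-1.0, 1.0) for _ in range(n_actions)]
             for _ in range(n_actors)]
            for _ in range(n_show)
        ]
        for tag in actuator_tags
    }
```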

step


( action: typing.Union[typing.Dict, typing.List, numpy.ndarray] ) → observation (Dict)

The step function for the environment; follows the API from OpenAI Gym.

Parameters

  • action (Dict or List) — a dict with actuator tags as keys and, as values, a tensor of shape (n_show, n_actors, n_actions)

Returns

observation (Dict) — the observation after the step
reward (float) — the reward obtained from the step
done (bool) — whether the episode has ended
info (Dict) — additional diagnostic information
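The reset/step contract can be mimicked with a minimal mock environment. `ToyEnv` below is purely illustrative (it stands in for RLEnv, which it is not) and shows the Gym-style rollout loop:

```python
import random

class ToyEnv:
    """Minimal mock (not Simulate itself) of the Gym-style contract that
    RLEnv.reset/step follow: obs dict out of reset, and a
    (obs, reward, done, info) tuple out of step."""
    def __init__(self, episode_length=5):
        self.episode_length = episode_length
        self.t = 0

    def reset(self):
        self.t = 0
        return {"camera": [0.0]}          # obs is a dict, as in RLEnv

    def step(self, action):
        self.t += 1
        obs = {"camera": [float(self.t)]}
        reward = random.random()
        done = self.t >= self.episode_length
        info = {}
        return obs, reward, done, info

# typical rollout loop
env = ToyEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(action=None)
    total_reward += reward
```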

ActionMapping

class simulate.ActionMapping


( action: str amplitude: float = 1.0 offset: float = 0.0 axis: typing.Optional[typing.List[float]] = None position: typing.Optional[typing.List[float]] = None use_local_coordinates: bool = True is_impulse: bool = False max_velocity_threshold: typing.Optional[float] = None )

Parameters

  • action (str) — name of the physical action to map to (e.g. a force, torque, or position change)
  • amplitude (float) — scaling factor applied to the input action
  • offset (float) — offset subtracted from the input action
  • axis (List[float], optional) — axis along which the action is applied
  • position (List[float], optional) — position at which the action is applied
  • use_local_coordinates (bool) — whether the action is expressed in local coordinates
  • is_impulse (bool) — whether the action is applied as an impulse
  • max_velocity_threshold (float, optional) — limit on the maximum resulting velocity or angular velocity after the action is applied

Maps an RL agent action to an actor physical action.

The conversion is as follows (where X is the RL input action and Y the physics engine action, e.g. force, torque, or position):

Y = Y + (X - offset) * amplitude

For discrete actions we assume X = 1.0, so that amplitude can be used to define the discrete value to apply.

max_velocity_threshold can be used to limit the maximum resulting velocity or angular velocity after the action is applied.
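The conversion rule can be sketched in plain Python. Only `Y = Y + (X - offset) * amplitude` and the discrete-action convention come from the text above; the exact clamping rule for `max_velocity_threshold` is an assumption for illustration.

```python
def apply_mapping(y, x, amplitude=1.0, offset=0.0, is_discrete=False):
    """Apply the documented conversion Y = Y + (X - offset) * amplitude.
    For discrete actions X is taken to be 1.0, so `amplitude` directly
    sets the value applied."""
    if is_discrete:
        x = 1.0
    return y + (x - offset) * amplitude

def clamp_velocity(v, max_velocity_threshold=None):
    """Sketch (an assumption, not Simulate's exact rule) of limiting the
    magnitude of the resulting velocity to max_velocity_threshold."""
    if max_velocity_threshold is None:
        return v
    return max(-max_velocity_threshold, min(max_velocity_threshold, v))
```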

Included Actors

class simulate.SimpleActor


( name: typing.Optional[str] = None position: typing.Optional[typing.List[float]] = None rotation: typing.Optional[typing.List[float]] = None scaling: typing.Union[float, typing.List[float], NoneType] = None transformation_matrix: typing.Optional[numpy.ndarray] = None material: typing.Optional[simulate.assets.material.Material] = None parent: typing.Optional[ForwardRef('Asset')] = None children: typing.Union[ForwardRef('Asset'), typing.List[ForwardRef('Asset')], NoneType] = None **kwargs )

Parameters

  • name (str) —
  • position — length-3 list of the position of the agent, defaults to (0, 0, 0)
  • rotation — length-3 list of the rotation of the agent, defaults to (0, 0, 0)
  • scaling —
  • transformation_matrix —
  • parent —
  • children —

Creates a bare-bones RL agent in the scene.

A SimpleActor is a sphere asset equipped with a default set of actuators and action mappings for moving in the scene.

class simulate.EgocentricCameraActor


( mass: float = 1.0 name: typing.Optional[str] = None position: typing.Optional[typing.List[float]] = None rotation: typing.Optional[typing.List[float]] = None scaling: typing.Union[float, typing.List[float], NoneType] = None camera_height: int = 40 camera_width: int = 40 camera_name: typing.Optional[str] = None transformation_matrix: typing.Optional[numpy.ndarray] = None material: typing.Optional[simulate.assets.material.Material] = None parent: typing.Optional[ForwardRef('Asset')] = None children: typing.Union[ForwardRef('Asset'), typing.List[ForwardRef('Asset')], NoneType] = None **kwargs )

Parameters

  • mass (float, optional) —
  • name (str) —
  • position — length-3 list of the position of the agent, defaults to (0, 0, 0)
  • rotation — length-3 list of the rotation of the agent, defaults to (0, 0, 0)
  • scaling —
  • camera_height — pixel height of first-person camera observations
  • camera_width — pixel width of first-person camera observations
  • transformation_matrix —
  • parent —
  • children —

Creates an egocentric RL actor in the scene, essentially a basic first-person agent.

An EgocentricCameraActor is a capsule asset with a first-person camera sensor and a default set of actuators and action mappings.

Future Applications

In the future, we intend to support more functionality, such as multi-agent RL and accelerated physics.