Simulate is designed to provide easy and scalable integration with reinforcement learning algorithms.
The core abstraction is the RLEnv class, which wraps a Scene.
The RLEnv allows an Actuator to be manipulated by an external agent or policy.
It is core to the design of Simulate that we are not creating agents, but rather providing an interface for applications of machine learning and embodied AI. The core API for RL applications is shown below; Simulate constrains the information that flows from the Scene to the external agent through the Actuator abstraction.
At release, we include a set of pre-designed Actors that can act in or navigate a scene. An Actor inherits from Object3D and has sensors, actuators, and action mappings.
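The composition described above can be sketched with simplified stand-in classes. These are hypothetical, minimal versions written only to illustrate the structure (an Actor is an Object3D that also carries sensors and actuators, and each actuator carries action mappings); they are not the real Simulate API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical, simplified stand-ins for Simulate's classes, written
# only to illustrate the composition described above (not the real API).

@dataclass
class ActionMapping:
    action: str          # name of a physics action, e.g. "add_force"
    amplitude: float = 1.0
    offset: float = 0.0

@dataclass
class Actuator:
    mapping: List[ActionMapping]

@dataclass
class Object3D:
    name: str = "object"
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)

@dataclass
class Actor(Object3D):
    # An Actor is an Object3D plus sensors, actuators, and action mappings.
    sensors: List[str] = field(default_factory=list)
    actuators: List[Actuator] = field(default_factory=list)

actor = Actor(
    name="agent",
    sensors=["camera"],
    actuators=[Actuator(mapping=[ActionMapping("add_force")])],
)
print(len(actor.actuators[0].mapping))  # 1
```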
Actuator(
    mapping: typing.List[simulate.assets.action_mapping.ActionMapping],
    actuator_tag: typing.Optional[str] = None,
    n: typing.Optional[int] = None,
    low: typing.Union[float, typing.List[float], numpy.ndarray, NoneType] = None,
    high: typing.Union[float, typing.List[float], numpy.ndarray, NoneType] = None,
    shape: typing.Optional[typing.List[int]] = None,
    dtype: str = 'float32',
    seed: typing.Optional[int] = None,
)
An Actuator can be used to move an Asset in the scene.
The actuator is designed to be part of an Actor that manipulates a scene.
We define:
RLEnv(
    scene_or_map_fn: typing.Union[typing.Callable, simulate.scene.Scene],
    n_maps: typing.Optional[int] = 1,
    n_show: typing.Optional[int] = 1,
    time_step: typing.Optional[float] = 0.03333333333333333,
    frame_skip: typing.Optional[int] = 4,
    **engine_kwargs,
)
RL environment wrapper for a Simulate scene. Uses functionality from the VecEnv in Stable Baselines 3. For more information on VecEnv, see the source: https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html
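As a small worked example of the timing parameters in the signature above, assuming the usual meaning of these names (frame_skip physics frames of length time_step are simulated per environment step, which is an assumption about the defaults shown, not a statement from the docs):

```python
# Default values from the signature above.
time_step = 1 / 30   # physics step length in seconds (0.0333...)
frame_skip = 4       # physics frames simulated per environment step (assumed meaning)

# Effective wall-clock time simulated between two agent actions.
control_period = time_step * frame_skip
print(round(control_period, 4), "seconds per agent action")  # 0.1333
```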
reset( ) → obs (Dict)

Resets the actors and the scene of the environment.

Returns
obs (Dict) — the observation of the environment after reset.
Samples an action from the actors in the environment. This function loads the configuration of maps and actors to return actions of the correct shape across multiple configurations.
step( action: typing.Union[typing.Dict, typing.List, numpy.ndarray] ) → observation (Dict)

The step function for the environment; it follows the API from OpenAI Gym.
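The reset/sample/step cycle above can be sketched with a minimal stand-in environment. This stub only mirrors the Gym-style calling convention described in this section (dict observations, vectorized rewards across n_show instances); all class names, observation keys, and shapes here are illustrative assumptions, not the real RLEnv.

```python
import numpy as np

# Minimal stub with the same API *shape* as the wrapper described above:
# reset() -> dict observation, step(action) -> (obs, reward, done, info).
# Illustrative only; not the real RLEnv implementation.
class StubEnv:
    def __init__(self, n_show: int = 1, action_dim: int = 3):
        self.n_show = n_show          # number of parallel map instances
        self.action_dim = action_dim

    def reset(self):
        # One camera-like observation per shown map instance (assumed key/shape).
        return {"CameraSensor": np.zeros((self.n_show, 3, 40, 40))}

    def sample_action(self):
        # Returns actions with the correct shape across the n_show instances.
        return np.random.uniform(-1.0, 1.0, size=(self.n_show, self.action_dim))

    def step(self, action):
        obs = {"CameraSensor": np.zeros((self.n_show, 3, 40, 40))}
        reward = np.zeros(self.n_show)
        done = np.zeros(self.n_show, dtype=bool)
        return obs, reward, done, {}

env = StubEnv(n_show=2)
obs = env.reset()
for _ in range(5):
    obs, reward, done, info = env.step(env.sample_action())
print(obs["CameraSensor"].shape, reward.shape)  # (2, 3, 40, 40) (2,)
```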
ActionMapping(
    action: str,
    amplitude: float = 1.0,
    offset: float = 0.0,
    axis: typing.Optional[typing.List[float]] = None,
    position: typing.Optional[typing.List[float]] = None,
    use_local_coordinates: bool = True,
    is_impulse: bool = False,
    max_velocity_threshold: typing.Optional[float] = None,
)
Maps an RL agent action to an actor's physical action.

The conversion is as follows, where X is the RL input action and Y the physics-engine action (e.g. force, torque, position):

Y = Y + (X - offset) * amplitude

For discrete actions we assume X = 1.0, so that amplitude can be used to define the discrete value to apply.

max_velocity_threshold can be used to limit the maximum resulting velocity or angular velocity after the action is applied.
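A sketch of this conversion in plain NumPy. The accumulation step is the formula above verbatim; the clamping shown for max_velocity_threshold is an assumption about intent (cap the resulting magnitude), not the library's exact implementation.

```python
import numpy as np

def apply_action(y, x, amplitude=1.0, offset=0.0, max_velocity_threshold=None):
    """Accumulate an RL action X into a physics quantity Y:
    Y = Y + (X - offset) * amplitude, then optionally clamp the result."""
    y = y + (x - offset) * amplitude
    if max_velocity_threshold is not None:
        # Assumed behavior: cap the resulting magnitude at the threshold.
        y = np.clip(y, -max_velocity_threshold, max_velocity_threshold)
    return y

# Continuous action: current velocity 0.5, RL input 0.2, amplitude 10.
v = apply_action(np.array([0.5]), np.array([0.2]), amplitude=10.0)
print(v)  # [2.5]

# Discrete action: X is taken to be 1.0, so amplitude sets the applied value.
f = apply_action(np.array([0.0]), 1.0, amplitude=3.0)
print(f)  # [3.]
```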
SimpleActor(
    name: typing.Optional[str] = None,
    position: typing.Optional[typing.List[float]] = None,
    rotation: typing.Optional[typing.List[float]] = None,
    scaling: typing.Union[float, typing.List[float], NoneType] = None,
    transformation_matrix: typing.Optional[numpy.ndarray] = None,
    material: typing.Optional[simulate.assets.material.Material] = None,
    parent: typing.Optional[ForwardRef('Asset')] = None,
    children: typing.Union[ForwardRef('Asset'), typing.List[ForwardRef('Asset')], NoneType] = None,
    **kwargs,
)
Creates a bare-bones RL agent in the scene.
A SimpleActor is a sphere asset with:
EgocentricCameraActor(
    mass: float = 1.0,
    name: typing.Optional[str] = None,
    position: typing.Optional[typing.List[float]] = None,
    rotation: typing.Optional[typing.List[float]] = None,
    scaling: typing.Union[float, typing.List[float], NoneType] = None,
    camera_height: int = 40,
    camera_width: int = 40,
    camera_name: typing.Optional[str] = None,
    transformation_matrix: typing.Optional[numpy.ndarray] = None,
    material: typing.Optional[simulate.assets.material.Material] = None,
    parent: typing.Optional[ForwardRef('Asset')] = None,
    children: typing.Union[ForwardRef('Asset'), typing.List[ForwardRef('Asset')], NoneType] = None,
    **kwargs,
)
Parameters
mass (float, Optional) —
name (str, Optional) —
position — length-3 list giving the position of the agent, defaults to (0, 0, 0)
rotation — length-3 list giving the rotation of the agent, defaults to (0, 0, 0)
scaling —
camera_height — pixel height of first-person camera observations
camera_width — pixel width of first-person camera observations
transformation_matrix —
parent —
children —
Creates an Egocentric RL Actor in the Scene, essentially a basic first-person agent.
An egocentric actor is a capsule asset with: