rlSimulinkEnv

Create a reinforcement learning environment using a dynamic model implemented in Simulink

Description


env = rlSimulinkEnv(mdl,agentBlock,obsInfo,actInfo) creates a reinforcement learning environment object env using the Simulink® model name mdl, the path to the agent block agentBlock, observation information obsInfo, and action information actInfo.

env = rlSimulinkEnv(___,'UseFastRestart',fastRestartToggle) creates a reinforcement learning environment object env with the additional option to toggle fast restart.
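For instance, a minimal sketch of the second syntax, reusing the pendulum model and specification objects constructed in the example below:

mdl = 'rlSimplePendulumModel';
agentBlk = [mdl '/RL Agent'];
obsInfo = rlNumericSpec([3 1]);
actInfo = rlFiniteSetSpec([2 1]);

% Disable fast restart when creating the environment.
env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo,'UseFastRestart','off');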

Examples


For this example, consider the rlSimplePendulumModel Simulink model. The model is a simple frictionless pendulum that is initially hanging in a downward position.

Open the model.

mdl = 'rlSimplePendulumModel';
open_system(mdl)

Assign the agent block path information, and create rlNumericSpec and rlFiniteSetSpec objects for the observation and action information. You can use dot notation to assign property values of the rlNumericSpec and rlFiniteSetSpec objects.

agentBlk = [mdl '/RL Agent'];
obsInfo = rlNumericSpec([3 1])
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [3 1]
       DataType: "double"

actInfo = rlFiniteSetSpec([2 1])
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [2x1 double]
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

obsInfo.Name = 'observations';
actInfo.Name = 'torque';

Create the reinforcement learning environment for the Simulink model using information extracted in the previous steps.

env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: []
    UseFastRestart: 'on'

You can also include a reset function using dot notation. For this example, consider randomly initializing theta0 in the model workspace.

env.ResetFcn = @(in) setVariable(in,'theta0',randn,'Workspace',mdl)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: @(in)setVariable(in,'theta0',randn,'Workspace',mdl)
    UseFastRestart: 'on'
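A reset function can also set several variables by chaining setVariable calls on the Simulink.SimulationInput object it receives. The following is a sketch; thetadot0 is a hypothetical model workspace variable, not one defined in this example:

% Chain setVariable calls to randomize the initial angle and zero a
% hypothetical initial velocity variable, thetadot0.
env.ResetFcn = @(in) setVariable( ...
    setVariable(in,'theta0',randn,'Workspace',mdl), ...
    'thetadot0',0,'Workspace',mdl);

% Optionally check the environment and reset function.
validateEnvironment(env)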

Input Arguments


mdl
Simulink model name, specified as a string or character vector.

agentBlock
Agent block path, specified as a string or character vector. The specified agent block can be inside a model reference.

For more information on configuring an agent block for reinforcement learning, see RL Agent.
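If you prefer not to hard-code the block path, one approach (a sketch, not part of this function's interface) is to search the model for the block by name:

% Find the agent block programmatically. This sketch assumes the block
% is named 'RL Agent', as in the example above.
mdl = 'rlSimplePendulumModel';
load_system(mdl)
blks = find_system(mdl,'Name','RL Agent');
agentBlk = blks{1};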

obsInfo
Observation information, specified as an array of one or more of the following:

- rlNumericSpec objects
- rlFiniteSetSpec objects

For more information, see getObservationInfo.
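For instance, a sketch of a continuous observation specification with element-wise limits (the limit values are illustrative, not taken from the pendulum model):

% Three continuous observations; bound only the third element.
obsInfo = rlNumericSpec([3 1], ...
    'LowerLimit',[-inf -inf -8]', ...
    'UpperLimit',[inf inf 8]');
obsInfo.Name = 'observations';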

actInfo
Action information, specified as an array of one or more of the following:

- rlNumericSpec objects
- rlFiniteSetSpec objects

For more information, see getActionInfo.
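For instance, a sketch of a discrete action specification whose elements are the allowed values of a scalar torque signal (the values are illustrative):

% The agent can select one of three torque values.
actInfo = rlFiniteSetSpec([-2 0 2]);
actInfo.Name = 'torque';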

fastRestartToggle
Option to toggle fast restart, specified as either 'on' or 'off'. Fast restart allows you to perform iterative simulations without compiling a model or terminating the simulation each time.

For more information on fast restart, see How Fast Restart Improves Iterative Simulations (Simulink).
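For an existing environment, the property display in the example above suggests that you can also toggle this setting with dot notation; a sketch, assuming UseFastRestart is settable in the same way as ResetFcn:

% Disable fast restart on an existing environment object.
env.UseFastRestart = 'off';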

Output Arguments


env
Reinforcement learning environment, returned as a SimulinkEnvWithAgent object.

For more information on reinforcement learning environments, see Create Simulink Environments for Reinforcement Learning.

Introduced in R2019a