This example shows how to extract action and observation specification information from a reinforcement learning environment, which you can then use to create other environments or agents.
The reinforcement learning environment for this example is the simple longitudinal dynamics for an ego car and a lead car. The training goal is to make the ego car travel at a set velocity while maintaining a safe distance from the lead car by controlling longitudinal acceleration and braking. This example uses the same vehicle model as the Adaptive Cruise Control System Using Model Predictive Control (Model Predictive Control Toolbox) example.
Open the model and create the reinforcement learning environment.
mdl = 'rlACCMdl';
open_system(mdl);
agentblk = [mdl '/RL Agent'];
% create the observation info
obsInfo = rlNumericSpec([3 1],'LowerLimit',-inf*ones(3,1),'UpperLimit',inf*ones(3,1));
obsInfo.Name = 'observations';
obsInfo.Description = 'information on velocity error and ego velocity';
% action Info
actInfo = rlNumericSpec([1 1],'LowerLimit',-3,'UpperLimit',2);
actInfo.Name = 'acceleration';
% define environment
env = rlSimulinkEnv(mdl,agentblk,obsInfo,actInfo)
env = 
SimulinkEnvWithAgent with properties:

           Model : rlACCMdl
      AgentBlock : rlACCMdl/RL Agent
        ResetFcn : []
  UseFastRestart : on
The reinforcement learning environment env is a SimulinkEnvWithAgent object with the above properties.
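The ResetFcn property is empty by default. If you want to vary initial conditions between training episodes, you can assign a reset function handle that modifies the Simulink.SimulationInput object before each simulation. The following is a minimal sketch; the localResetFcn helper and the workspace variable v0_lead are assumptions and should be adapted to the variables your model actually defines.
% optional: assign a reset function to randomize conditions between episodes
% (localResetFcn and 'v0_lead' are assumed names, not part of this example)
env.ResetFcn = @(in) localResetFcn(in);

function in = localResetFcn(in)
% set a random initial velocity for the lead car on the SimulationInput object
in = setVariable(in,'v0_lead',24 + 2*rand);
end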
Extract the action and observation information from the reinforcement learning environment env.
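Use getObservationInfo to return the observation specification object.
% extract the observation specification from the environment
obsInfoExt = getObservationInfo(env)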
obsInfoExt = 
  rlNumericSpec with properties:

     LowerLimit: [3x1 double]
     UpperLimit: [3x1 double]
           Name: "observations"
    Description: "information on velocity error and ego velocity"
      Dimension: [3 1]
       DataType: "double"
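Similarly, use getActionInfo to return the action specification; the resulting rlNumericSpec reflects the acceleration limits and name defined earlier.
% extract the action specification from the environment
actInfoExt = getActionInfo(env)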
The action information contains acceleration values, while the observation information contains the velocity and velocity error values of the ego vehicle.
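As a sketch of how the extracted specifications can be reused, you can pass them directly to an agent constructor. The rlDDPGAgent call below, which creates an agent with default actor and critic representations, is only one possible choice of agent type and is not part of this example.
% create a continuous-action agent with default representations from the
% extracted specifications (rlDDPGAgent is just one possible agent type)
agent = rlDDPGAgent(obsInfoExt,actInfoExt);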