Create parallel pool on cluster
parpool starts a parallel pool of workers using the default cluster profile. With default preferences, MATLAB® starts a pool on the local machine with one worker per physical CPU core, up to the preferred number of workers. For more information on parallel preferences, see Specify Your Parallel Preferences.
In general, the pool size is specified by your parallel preferences and the default profile. parpool creates a pool on the default cluster with its NumWorkers in the range [1, preferredNumWorkers] for running parallel language features. preferredNumWorkers is the value defined in your parallel preferences. For all factors that can affect your pool size, see Pool Size and Cluster Selection.
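For example, a minimal sketch of starting and inspecting a default pool (the size you see depends on your own preferences and profile):

p = parpool;                                   % size comes from preferences and the default profile
fprintf('Pool has %d workers\n', p.NumWorkers)
delete(p)                                      % shut down the pool when finished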
parpool enables the full functionality of the parallel language features in MATLAB by creating a special job on a pool of workers, and connecting the MATLAB client to the parallel pool. Parallel language features include parfor, parfeval, parfevalOnAll, spmd, and distributed. If possible, the working folder on the workers is set to match that of the MATLAB client session.
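As an illustrative sketch, a pool created with parpool is then used implicitly by a parfor loop:

parpool;                       % create the pool
n = 8;
results = zeros(1, n);         % preallocate the sliced output
parfor i = 1:n
    results(i) = i^2;          % iterations are distributed across the pool workers
end
delete(gcp('nocreate'))        % shut down the current pool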
parpool(poolsize) creates and returns a pool with the specified number of workers. poolsize can be a positive integer or a range specified as a 2-element vector of integers. If poolsize is a range, the resulting pool has a size as large as possible in the requested range.
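For instance (assuming your cluster can supply the requested workers):

p = parpool(4);            % exactly four workers; parpool waits until all four are available
delete(p)
p = parpool([2 6]);        % any pool size from two through six workers is acceptable
delete(p)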
Specifying the poolsize overrides the number of workers specified in the preferences or profile, and starts a pool of exactly that number of workers, even if it has to wait for them to be available. Most clusters have a maximum number of workers they can start. If the profile specifies a MATLAB Job Scheduler cluster, parpool reserves its workers from among those already running and available under that MATLAB Job Scheduler. If the profile specifies a local or third-party scheduler, parpool instructs the scheduler to start the workers for the pool.
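A sketch of targeting a specific cluster rather than the default profile, using a cluster object from parcluster:

c = parcluster;            % cluster object built from the default profile
p = parpool(c, 2);         % ask that cluster's scheduler for two workers
delete(p)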
parpool(___,Name,Value) applies the specified values for certain properties when starting the pool.
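For example, using the documented IdleTimeout and AttachedFiles properties (the attached file name here is hypothetical):

p = parpool(2, 'IdleTimeout', 120, ...        % shut the pool down after 120 idle minutes
    'AttachedFiles', {'myHelper.m'});         % hypothetical file copied to each worker
delete(p)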
poolobj = parpool(___) returns a parallel.Pool object to the client workspace representing the pool on the cluster. You can use the pool object to programmatically delete the pool or to access its properties. Use delete(poolobj)
to shut down the parallel pool.
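A common pattern, sketched below, uses gcp to reuse an existing pool and delete to shut it down:

poolobj = gcp('nocreate');     % get the current pool without creating a new one
if isempty(poolobj)
    poolobj = parpool;         % no pool yet, so create one
end
disp(poolobj.NumWorkers)       % access a property of the parallel.Pool object
delete(poolobj)                % shut down the parallel pool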
The pool status indicator in the lower-left corner of the desktop shows the client session connection to the pool and the pool status. Click the icon for a menu of supported pool actions.
The indicator icon differs depending on whether a pool is running or no pool is running.
If you set your parallel preferences to automatically create a parallel
pool when necessary, you do not need to explicitly call the
parpool
command. You might explicitly create a pool
to control when you incur the overhead time of setting it up, so the pool is
ready for subsequent parallel language constructs.
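As a sketch of paying the startup cost at a time of your choosing:

tic
parpool;                                       % incur the pool startup overhead now
fprintf('Pool startup took %.1f s\n', toc)
% later parfor, parfeval, and spmd constructs reuse the running pool immediately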
delete(poolobj)
shuts down the parallel pool. Without a
parallel pool, spmd
and parfor
run
as a single thread in the client, unless your parallel preferences are set
to automatically start a parallel pool for them.
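To see this, a small sketch (assuming automatic pool creation is disabled in your preferences):

delete(gcp('nocreate'))            % make sure no pool is running
parfor i = 1:4
    fprintf('iteration %d\n', i)   % with no pool available, runs serially in the client
end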
When you use the MATLAB editor to update files on the client that are attached to a
parallel pool, those updates automatically propagate to the workers in the
pool. (This automatic updating does not apply to Simulink® model files. To propagate updated model files to the workers,
use the updateAttachedFiles
function.)
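A sketch of both mechanisms, with a hypothetical attached file:

p = gcp;                               % current pool (creates one if none exists)
addAttachedFiles(p, {'myHelper.m'})    % hypothetical file the workers need
% edits saved to myHelper.m in the MATLAB Editor reach the workers automatically;
% updated Simulink model files must be pushed explicitly:
updateAttachedFiles(p)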
If possible, the working folder on the workers is initially set to match that of the MATLAB client session. Subsequently, the following commands entered in the client Command Window also execute on all the workers in the pool: cd, addpath, and rmpath.
This behavior allows you to set the working folder and the command search
path on all the workers, so that subsequent pool activities such as
parfor
-loops execute in the proper context.
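For example (the folder names are hypothetical):

cd('C:\projects\data')           % hypothetical folder; the cd also executes on every worker
addpath('C:\projects\utils')     % hypothetical folder; added to each worker's path as well
pctRunOnAll('pwd')               % display the working folder on the client and all workers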
When changing folders or adding a path with cd
or
addpath
on clients with Windows® operating systems, the value sent to the workers is the UNC
path for the folder if possible. For clients with Linux® operating systems, it is the absolute folder location.
If any of these commands does not work on the client, it is not executed
on the workers either. For example, if addpath
specifies
a folder that the client cannot access, the addpath
command is not executed on the workers. However, if the working folder can
be set on the client, but cannot be set as specified on any of the workers,
you do not get an error message returned to the client Command
Window.
Be careful of this slight difference in behavior in a mixed-platform
environment where the client is not the same platform as the workers, where
folders local to or mapped from the client are not available in the same way
to the workers, or where folders are in a nonshared file system. For
example, if you have a MATLAB client running on a Microsoft®
Windows operating system while the MATLAB workers are all running on Linux operating systems, the same argument to
addpath
cannot work on both. In this situation, you
can use the function pctRunOnAll to ensure that a command runs on all the workers.
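For example, a sketch with a hypothetical path that is valid on the workers:

pctRunOnAll('addpath /network/share/code')   % runs on the client and every worker

Note that pctRunOnAll evaluates the command on the client as well as on all the workers.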
Another difference between client and workers is that any
addpath
arguments that are part of the matlabroot
folder are not
set on the workers. The assumption is that the MATLAB install base is
already included in the workers’ paths. The rules for
addpath
regarding workers in the pool are:
- Subfolders of the matlabroot folder are not sent to the workers.
- Any folders that appear before the first occurrence of a matlabroot folder are added to the top of the path on the workers.
- Any folders that appear after the first occurrence of a matlabroot folder are added after the matlabroot group of folders on the workers’ paths.
For example, suppose that matlabroot
on the client is
C:\Applications\matlab\
. With an open parallel pool,
execute the following to set the path on the client and all workers:
addpath('P1', 'P2', 'C:\Applications\matlab\T3', 'C:\Applications\matlab\T4', 'P5', 'C:\Applications\matlab\T6', 'P7', 'P8');
Because T3, T4, and T6 are subfolders of matlabroot, they are not set on the workers’ paths. So on the workers, the pertinent part of the path resulting from this command is:
P1 P2 <worker original matlabroot folders...> P5 P7 P8
If you are using Macintosh or Linux, and see problems during large parallel pool creation, see Recommended System Limits for Macintosh and Linux.
See Also: Composite | delete | distributed | gcp | parallel.defaultClusterProfile | parallel.pool.Constant | parcluster | parfeval | parfevalOnAll | parfor | pctRunOnAll | spmd