CM FORTRAN USER'S GUIDE
Version 2.1, January 1994
Copyright (c) 1994 Thinking Machines Corporation.

CHAPTER 1: INTRODUCTION
************************

This manual provides information on compiling and executing CM Fortran programs on supported CM system configurations:

  o  CM-5 with or without vector units
  o  CM-200
  o  CM-2 with 64-bit floating-point accelerator

It also describes tools and utilities that assist program development, and notes some miscellaneous details of the implementation.

This chapter introduces the execution environment for CM Fortran programs, with emphasis on the Connection Machine model CM-5 and its CMOST operating system. Comparable information for users of the CM-2 and CM-200 systems appears in the CM User's Guide; users of the CM-5 should refer to the CM-5 User's Guide for more information.

The chapters that follow, which describe the compiler, the development tools and utilities, and various features of the CM Fortran implementation, apply to all supported CM hardware configurations. CM Fortran programmers may wish to consult the CM User's Guide and CM-5 User's Guide for information about development tools, file systems, and libraries available on the CM system.

1.1 INTRODUCING THE CM-5 SUPERCOMPUTER
---------------------------------------

A CM-5 system is a massively parallel, scalable supercomputer containing from a few processors to thousands. These processors fall into two categories: computational processors and control processors.

  o  Computational processors make up the vast majority of processors inside the CM-5 system. They do the actual computations on data in parallel, communicating with each other to share data as necessary.

  o  Control processors manage the CM-5's computational processors and I/O devices. They provide major OS services for the system, handling the system's user interface, its I/O and network interfaces, and its system administration and diagnostic interfaces.
A group of computational processors under the control of a single control processor is called a partition, and the control processor is called the partition manager. Interprocessor communication networks connect all processors, of both types, to provide rapid, high-bandwidth communication within and between user processes.

The CM-5 operating system provides for both spacesharing and timesharing among user processes.

  o  Spacesharing occurs when the system administrator partitions the CM-5, allotting a certain number of processors to one partition and a certain number to another. The system administrator also decides which users have access to a given partition. Administrators can change partition sizes or access rules to suit the needs of their sites.

  o  All partitions run CMOST, the CM Operating System Timeshared; timesharing is therefore the normal state on all partitions.

CMOST is an enhanced version of the UNIX operating system. Users of the CM-5 thus have access to all standard UNIX facilities, as well as to the special tools and utilities provided by CM software to facilitate parallel programming.

1.1.1 The User's View of the System
------------------------------------

Figure 1 diagrams a sample CM-5 system as it appears to a user. This particular system has two partition managers, named Mars and Venus, each currently managing a partition of 256 processing nodes (SPARC nodes). Since this system has the optional vector unit hardware, each SPARC node in turn manages four vector units. The system also has control processors managing some I/O peripherals, and one that is dedicated as a system console.

[ Figure Omitted ]
Figure 1. A sample CM-5 system.

1.1.2 Gaining Access
---------------------

You gain access to a CM-5 system through one or more of its partition managers. The CM-5 is usually accessed across a network, either by logging in remotely (via the UNIX rlogin command) or by running a remote shell (via the rsh command).
For example, the users in Figure 1 (shown at workstations "somewhere on the network") can log in remotely or use remote shells to run programs on either partition. Typical command sequences might be

  % rlogin mars
  [ login sequence ]
  % a.out

  % rsh venus my_program

Once you have logged in or established your shell, you are operating in the CMOST timesharing environment, with the following resources available to you:

  o  A partition manager (equivalent to a UNIX workstation). You initiate program execution on this processor, which uses computational processors and I/O devices as needed.

  o  All the parallel processors in the partition. Under the CMOST timesharing environment, all the nodes (and vector units) are available to, and used by, every parallel program running on that partition.

  o  All the I/O devices on the CM-5 (assuming you have been granted access to the appropriate file systems).

Execution and debugging of CM Fortran programs are initiated on one of the CM-5's partition managers, either from the CMOST shell or from the Prism development environment. Compilation can be done either on a CM-5 partition manager or on a separate workstation that has CMOST and the CM Fortran compiler installed.

1.1.3 Executing Programs
-------------------------

Programs on the CM-5 begin execution on the partition manager, which downloads program blocks to the nodes. You begin execution on CMOST in the same way as on the UNIX operating system:

  % a.out

You can submit interactive jobs directly or, if your site uses the Distributed Job Manager (DJM), with the jrun command. To submit a batch job, you can use the UNIX at and batch commands, the commands of the NQS system, or the DJM jsub command. (See the documentation for NQS for the CM system or for DJM.)

The partition manager runs the full CMOST operating system; it initiates program execution on the computational processors, performs I/O, and handles timesharing and other OS tasks for the partition.
Each SPARC node runs a stripped-down version of the operating system, which enables it to fetch instructions, to manage the processing of its local data, and to communicate with the partition manager and with the other nodes in the partition.

If a program needs access outside the partition (to read from an I/O device, for example, or to pass data to a process running on another partition), it goes through the partition manager to do so. (The partition manager, running in supervisor mode, can access any address in the system.) The partition manager passes a request to read data from a disk file to the processor controlling that disk; the data from the disk, however, is read directly into the processing nodes rather than passed through the partition manager, allowing a much more rapid transfer.

1.1.4 Checking System Status
-----------------------------

Partitions are not permanent. They are defined by the system administrator to meet the site's needs, and they can be changed by the administrator as needed. The system shown in Figure 1, for example, could be reconfigured as a single partition, with Venus controlling all the nodes and Mars either inactive or acting as a stand-alone compile server. Similarly, if some nodes needed to be taken out of service temporarily, the partition could be reconfigured around them.

The most common questions about system status on a CM-5, therefore, are:

  o  How large is this partition at this time?
  o  How many user processes are running on it?
  o  How much time and memory have been allocated to each process?

You can use the cmps command to answer these questions; cmps tells you how many vector units or nodes the partition manager currently controls and gives information about the jobs it is running. The UNIX ps command (on which cmps is modeled) provides similar information on the workload of the partition manager itself.

Note that the cmps command provides information only about the partition on which the command runs.
If you are logged into Mars, cmps provides information only about the partition controlled by Mars. To find out about conditions on Venus, you would use a remote shell and type

  % rsh venus cmps

1.1.5 A Note to CM-2 Users
---------------------------

Users familiar with the Connection Machine model CM-2 or CM-200 will notice certain differences in the user environment on the CM-5.

  o  The commands that attach the front-end program to the parallel processors on the CM-2, cmattach and cmcoldboot, are not needed (and do not exist) on the CM-5 system. A CM-5 partition manager is always "attached" to its parallel nodes.

  o  All the I/O devices on the system are accessible from any partition.

  o  The CM-2 informational commands cmfinger and cmlist are not available on the CM-5.

1.2 CM FORTRAN AND OTHER CM-5 SOFTWARE
---------------------------------------

The CM-5 provides a complete environment for developing, compiling, and executing CM Fortran programs. This environment includes:

  o  A run-time system that optimizes parallel execution, with functionality available to users through the CM Fortran language and libraries.

  o  Data parallel CM libraries, including the visualization libraries CM/AVS and CMX11, the scientific software library CMSSL, and the CM file system libraries SFS (for the Scalable Disk Array) and CMFS (for the DataVault and devices such as the CM-HIPPI).

  o  The message-passing library CMMD, which permits node-level programming in CM Fortran and other languages (CM-5 only).

  o  Prism, a programming environment that integrates debugging, performance analysis, and data visualization tools. Prism is available as a windowed environment on terminals and workstations that run the X Window System. It can also be used in command mode from the CMOST shell.

  o  A variety of development tools and facilities, described later in this manual, accessible from CMOST.
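The data parallel style that this environment supports is expressed in CM Fortran through Fortran 90 array syntax, in which whole-array operations are carried out elementwise across the parallel processing elements. The sketch below is illustrative only; the program name, array names, and array size are arbitrary choices, not part of any CM library interface.

```fortran
C     A minimal data parallel sketch in CM Fortran, which combines
C     Fortran 77 with Fortran 90 array syntax.  The names and the
C     array size here are arbitrary, for illustration only.
      PROGRAM SAXPY
      REAL X(1024), Y(1024)
C     Whole-array assignments: every element is set at once,
C     on the parallel processing elements.
      X = 1.0
      Y = 2.0
C     An elementwise operation over all 1024 elements in parallel.
      Y = 2.5 * X + Y
      PRINT *, Y(1)
      END
```

Arrays used in whole-array expressions like these are CM arrays in the terminology of Section 1.3; scalars and arrays referenced only element by element remain on the partition manager as front-end data.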
1.3 CM FORTRAN AND SYSTEM TERMINOLOGY
--------------------------------------

Some architectural differences between the CM-2 and CM-5 systems are reflected in their respective terms for system components. The CM-5 term partition manager corresponds roughly to the CM-2 term front-end processor, although the serial and parallel processing components are more closely integrated in the CM-5 system. Also, the CM-2 term CM refers to the parallel processors only (as distinct from the front end), whereas on the CM-5 the term CM includes both the control processors and the computational processors.

Some of the CM-2 system terminology has found its way into CM Fortran compiler switches, directives, and system messages. Since CM Fortran strives to reduce visible platform dependencies, such terms as front end, front-end array, and CM array continue to be used on the CM-5. CM-5 users should read front-end array to mean an array that is stored and processed on the partition manager (except in the nodal execution model, in which front-end arrays reside on the nodes), and should read CM array to mean an array that is stored and processed on the parallel processing elements (vector units or SPARCs).

*****************************************************************
The information in this document is subject to change without notice and should not be construed as a commitment by Thinking Machines Corporation. Thinking Machines reserves the right to make changes to any product described herein.

Although the information in this document has been reviewed and is believed to be reliable, Thinking Machines Corporation assumes no liability for errors in this document. Thinking Machines does not assume any liability arising from the application or use of any information or product described herein.
*****************************************************************

Connection Machine (r) is a registered trademark of Thinking Machines Corporation.
CM, CM-2, CM-200, CM-5, CM-5 Scale 3, and DataVault are trademarks of Thinking Machines Corporation.
CMOST, CMAX, and Prism are trademarks of Thinking Machines Corporation.
C* (r) is a registered trademark of Thinking Machines Corporation.
Paris, *Lisp, and CM Fortran are trademarks of Thinking Machines Corporation.
CMMD, CMSSL, and CMX11 are trademarks of Thinking Machines Corporation.
CMview is a trademark of Thinking Machines Corporation.
Scalable Computing (SC) is a trademark of Thinking Machines Corporation.
Scalable Disk Array (SDA) is a trademark of Thinking Machines Corporation.
Thinking Machines (r) is a registered trademark of Thinking Machines Corporation.
SPARC and SPARCstation are trademarks of SPARC International, Inc.
Sun, Sun-4, SunOS, Sun FORTRAN, and Sun Workstation are trademarks of Sun Microsystems, Inc.
UNIX is a trademark of UNIX System Laboratories, Inc.
The X Window System is a trademark of the Massachusetts Institute of Technology.

Copyright (c) 1991-1994 by Thinking Machines Corporation. All rights reserved.

This file contains documentation produced by Thinking Machines Corporation. Unauthorized duplication of this documentation is prohibited.

Thinking Machines Corporation
245 First Street
Cambridge, Massachusetts 02142-1264
(617) 234-1000