
ECperf™ README

1.0 Final Release

Copyright (c) 1998-2001 by Sun Microsystems, Inc. All Rights Reserved.
This note describes the directory structure of the ECperf kit and how to use it. The file you are currently viewing is README.html in the top-level ECperf directory. We will refer to this top-level directory as $ECPERF_HOME.
NOTE: This ECperf kit is distributed under the legal control and obligations of the Java Community Process (JCP). Public release of information contained in this draft of the ECperf specification or kit, or of results obtained by running the kit, is not permitted.
Portions of the code were developed by the Apache Software Foundation.

Contents

Assumptions

ECperf can be run on any Java platform. Most of the code is implemented in Java; however, a few scripts are used to start certain programs. These scripts are provided in .sh format for Unix platforms and .bat format for Windows.

Directory Structure

Java Package Structure

All ECperf Java classes are located in the com.sun.ecperf package. The following lists the sub-packages of com.sun.ecperf:

Packaging of Enterprise Java Beans

The beans adhere to the EJB 1.1 specification. There are currently two versions of some of the beans: CMP and BMP. The CMP version is to be used with Container Managed Persistence and the BMP version with Bean Managed Persistence. The BMP beans are sub-classed from the CMP beans. The source file for the Dummy version is named <bean>DumEJB.java; the CMP and BMP versions are named <bean>CmpEJB.java and <bean>BmpEJB.java respectively. The kit includes pre-compiled class files for all the beans (both CMP and BMP versions), which you can package into appropriate ejb-jars for your environment.
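The relationship is easiest to see in skeleton form. The classes below are purely hypothetical (they are not the kit's sources); they only illustrate the naming and subclassing convention just described, a public-field CMP bean per EJB 1.1 with a BMP subclass that supplies its own persistence:

    import javax.ejb.EntityBean;
    import javax.ejb.EntityContext;

    // <bean>CmpEJB.java: container-managed fields are public; the container
    // generates the persistence logic.
    public class ItemCmpEJB implements EntityBean {
        public String id;      // container-managed field
        public String name;    // container-managed field

        public String ejbCreate(String id, String name) {
            this.id = id;
            this.name = name;
            return null;       // CMP 1.1: the container derives the primary key
        }
        public void ejbPostCreate(String id, String name) {}
        public void setEntityContext(EntityContext ctx) {}
        public void unsetEntityContext() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbLoad() {}
        public void ejbStore() {}
        public void ejbRemove() {}
    }

    // <bean>BmpEJB.java (a separate source file): overrides the lifecycle
    // callbacks with hand-written JDBC.
    class ItemBmpEJB extends ItemCmpEJB {
        public void ejbLoad()   { /* SELECT ... WHERE id = ? via JDBC */ }
        public void ejbStore()  { /* UPDATE ... WHERE id = ? via JDBC */ }
        public void ejbRemove() { /* DELETE ... WHERE id = ? via JDBC */ }
    }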
Of the packages listed above, corp, orders, mfg, and supplier contain the beans for the 4 ECperf domains respectively. Each domain package contains a distinct sub-package for each EJB and its helper classes. Each EJB's package contains a sub-package named "ejb" which holds the bean implementation as well as the home and remote interfaces. For instance, the following shows the fully qualified class names for the OrderLineEnt entity bean:
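For illustration only (the exact sub-package name below is an assumption based on the convention just described, not copied from the kit), the names follow this pattern:

    com.sun.ecperf.orders.orderlineent.ejb.OrderLineEnt        // remote interface
    com.sun.ecperf.orders.orderlineent.ejb.OrderLineEntHome    // home interface
    com.sun.ecperf.orders.orderlineent.ejb.OrderLineEntCmpEJB  // CMP implementation
    com.sun.ecperf.orders.orderlineent.ejb.OrderLineEntBmpEJB  // BMP implementation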
The helper classes for the orders domain are in the helper sub-package under orders.

Other Java packages

Besides the 4 domain packages, several other packages are included:

NOTE: To run the charting application, you need JClass Chart from Sitraka Software.

Building, Deploying and Running ECperf

There are several components required to deploy, test and run ECperf. These are:
    An EJB Container to deploy all the ECperf Beans.
    A Web Container to deploy the web client as well as Delivery servlet in the Supplier Domain.
    An external Web Server to run the Supplier Emulator Servlet. Note that the emulator is not part of the SUT.
    The ECperf Driver that runs on external client machine(s).
A diagram showing the various components for a Standard Workload can be found in deploy_std.jpg in the top-level ECperf directory.


A diagram showing the various components for a Distributed Workload can be found in deploy_dist.jpg.

There are currently two mechanisms to build and deploy the benchmark. One uses makefiles and the other uses ant (http://jakarta.apache.org/ant).

 Build and Deployment Process using ant

 Build and Deployment Process using make

Build and Deployment process using ant

Build and Deployment process for the J2EE RI
Build and Deployment process for another appserver
Building the ECperf driver with ant

Build and Deployment Process on the J2EE RI with ant

    Create the database(s)
    These properties are reasonably straightforward, and the file contains further documentation.

     

Build and Deployment process on another appserver using ant

This example assumes the database is already created and that the supplier emulator and the ECperf EJBs will be deployed to the same running server on the same machine (this is the simplest of all cases).
    Choose a short name for your appserver. For this example, let's call your appserver "mango".
    Copy the file config/ri.env to config/mango.env and edit the properties particular to mango. It may well be that you won't need to make any changes to these settings at this time, but you may need to add settings later to support deploy actions.
    Copy the directory src/deploy/reference to src/deploy/mango with all its contents. Add vendor-specific deployment descriptors as necessary.
    Try a build by doing "ant -Dappserver=mango". This will do a full compile and try to build all the files needed for deployment.
    Edit the *db.env files in the config directory so they have the correct database driver info in preparation for doing a database load.
    Try a database load by doing "ant -Dappserver=mango loaddb". Once those are working, you should create a new file, mango.xml, which contains the deploy rules. These will vary widely, but you can use ri.xml as an example. You should try to create the following targets:

    ecperf-war-deploy
    ecperf-ejb-deploy
    supplier-war-deploy
    emulator-war-deploy
    When these are working, you should package up the mango.xml, config/mango.env, and src/deploy/mango/* files, as well as any of the config/*db.env files that reference native JDBC drivers, for convenience. Include instructions on how to deploy ECperf for your appserver.

Summary of steps required to run ECperf on a new appserver

1) Set up the appserver and database.
2) Add the ant bin directory to your path / classpath.
3) Adjust the settings in config/mango.env for your local environment.
4) On the main ECperf server, from $ECPERF_HOME do "ant -find mango.xml ecperf-ejb-deploy"
5) On the main web server, even if it is the same machine, from $ECPERF_HOME do
"ant -find mango.xml ecperf-war-deploy supplier-war-deploy"
6) On the emulator server, from $ECPERF_HOME do: "ant -find mango.xml emulator-war-deploy"
7) Load the database from $ECPERF_HOME by doing "ant -find mango.xml loaddb"

 

Building the ECperf Driver with ant

Source for the ECperf client driver is now packaged with the ECperf kit, and targets are included in the ant build.xml file to remove and re-build the driver.jar and launcher.jar files. These targets are:
clean-driver
driver

Build and Deployment Process using make

There are several steps that must be followed in order to run the ECperf benchmark.

Create the Database(s)

For the Standard Workload, you can create a single database that houses all 4 Domains. For the Distributed Workload, you must create 4 separate databases, one for each domain.
A database creation script is provided for Oracle, DB2 and Sybase in schema/<DBMS>/createdb.sh. These scripts may have to be modified slightly for different versions of the DBMS, and will have to be ported for other DBMS products.
Standard SQL scripts for creating the database schema are provided in schema/sql. These are intended to give a starting point for creating schemas for other database products. See schema/sybase/setup_ecperf for directions on how to create the databases for Sybase and schema/DB2/README to create the databases for DB2.
Create the database and tables for Oracle as follows:
    Read the comments in createdb.sh and edit the script if necessary.
    Run the script as many times as the number of databases that you want to create. Here is an example for the standard workload: createdb.sh ecperf /export/home/oracle/dbs/ecperf_db, where ecperf is the name of the database and the second argument is the directory that will contain the database files. Note that Oracle requires the database name to be <= 8 characters.
    The schema_?.sh (where ? is C, O, M or S) scripts create the schema for the 4 domains. Edit the section marked ######  datafiles ###### to set the path names of the Oracle devices appropriately. The default values will work for a file-system based database. Note that the tablespace sizes may have to be increased when creating larger databases. The scripts accept 2 arguments: the database name and the directory where the database files must be created. To create a single database named ecperf for all 4 domains and locate it in the directory /export/home/oracle/dbs/ecperf_db, use the commands:
    schema_C.sh ecperf /export/home/oracle/dbs/ecperf_db
    schema_O.sh ecperf /export/home/oracle/dbs/ecperf_db
    schema_M.sh ecperf /export/home/oracle/dbs/ecperf_db
    schema_S.sh ecperf /export/home/oracle/dbs/ecperf_db
    Run the schema_U.sh script in a similar manner to create the sequence tables used to generate primary keys. This script must be run once per database.

Generic Build Instructions for using make

The following steps need to be followed before attempting to compile or deploy the beans.

Build and Deployment Process for the J2EE RI version 1.2.1

Follow the steps outlined in Generic Build Instructions. In the src directory, two generic makefiles are provided. In addition, a J2EE RI-specific Makefile is provided.
To build the RI jar files and deploy them on the RI:

Building for Vendor-Specific Application Servers

Follow the steps outlined in Generic Build Instructions. Deployment descriptors are provided in the src/deploy/reference directory and must be used without modifications. Makefiles are only provided for the J2EE RI and need to be ported for other application servers. Note that although the reference xml files can be broken up, combined, etc., their contents cannot be modified in any way. If you wish to use the CMP beans, a mapping between the bean fields and database fields is provided in the file src/deploy/README.CMP. CMP versions of all the xmls are also provided in src/deploy/reference/*.xml.CMP. The RI makefiles and deployment descriptors can be used as a starting point:
    $ cd $ECPERF_HOME/src
    $ cp ri.mk <appserver>.mk
    $ cd deploy
    $ cp -r reference <appserver>
    Edit src/deploy/<appserver>/*.xml or add xml deployment descriptors as needed by the application server.
    Edit src/<appserver>.mk. Change it to build for the new application server.
    Edit src/Makefile to add the new server to the list defined by the variable SERVERS.
    Edit src/ecperfInclude.mk to check for the environment variables required for the new <appserver>.mk file.
    Generate and deploy the bean jar files for your appserver:


    $ cd $ECPERF_HOME/src
    $ make <appserver>

    Generate and deploy the emulator files. Note that you can deploy the emulator in the RI (or any other Web Container) if you choose to. There is no requirement that the Emulator be deployed in your appserver. Use the appropriate makefile:

    $ cd $ECPERF_HOME/src
    $ make <appserver>.emulator

The application, including the web interface and supplier emulator, should now be deployed.
NOTES:
    Using the CMP version of SequenceEnt requires that UtilDataSource, defined in util.xml, have its isolation level set to SERIALIZABLE. The CMP version will only work if your Container/DBMS supports the Serializable isolation level correctly. Consequently, we suggest that you use the BMP version of SequenceEnt.
    The webclient and emulator are deployed as .war files. If your appserver does not support these formats, then you will have to make appropriate changes to <appserver>.mk to compile and deploy them.

Load the Database(s)

    Now that the database and tables are  created, and the load programs are compiled, follow these steps to load the database:
    Edit the $ECPERF_HOME/config/appsserver file and replace the current name with the name of the application server being used.
    If the file $ECPERF_HOME/config/<appserver>.env does not exist, copy it from $ECPERF_HOME/config/ri.env.
    Edit the $ECPERF_HOME/config/<appserver>.env file and ensure the JDBC_CLASSPATH variable points to a valid JDBC driver and ECPERF_HOME points to the root of the ECperf distribution. Note that the other variables are not used during the load. (A quick way to sanity-check the JDBC settings is sketched after these steps.)
    Uncomment the jdbc URL and Driver class in the $ECPERF_HOME/config/*db.properties files, depending on which database and driver you are using to access it. In the centralized configuration, the contents of all the files are the same.
    Run the bin/loaddb.sh script from the $ECPERF_HOME directory to load all the databases. Read the script to understand the order in which the loads must be done. The LoadCorp program writes out data to temporary files in /tmp. These files are then read by the other load programs to populate their respective domains. If any of the load programs fail, follow these steps:
    If LoadCorp failed, all of the domains will have to be re-loaded. Re-run bin/loaddb.sh.
    If one of the other domain loads fails, you can selectively re-load only that domain as long as the /tmp/*pipe files created by LoadCorp have not been deleted.
    After successfully loading the database, you should re-start Oracle with the $ORACLE_HOME/dbs/init<db_name>.ora startup file. This file is created by the createdb.sh script and has the larger shared memory segment, processes, etc. required for benchmark runs.

    NOTE: If you reload the database, you need to restart the appserver, as the SequenceSes bean caches primary key values used to insert new rows into the database.
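If you want to sanity-check the JDBC driver and URL before running the load, a small stand-alone program along the following lines can help. It is not part of the kit; the class name and command line are hypothetical, and the four arguments are whatever driver class, URL, user and password you configured above:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JdbcCheck {
        // Usage: java -cp .:$JDBC_CLASSPATH JdbcCheck <driverClass> <jdbcURL> <user> <password>
        public static void main(String[] args) throws Exception {
            Class.forName(args[0]);                 // load and register the JDBC driver
            Connection conn = DriverManager.getConnection(args[1], args[2], args[3]);
            System.out.println("Connected to " + conn.getMetaData().getDatabaseProductName());
            conn.close();
        }
    }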

Testing the ECperf deployment using the Web interface

Assuming the application server is configured properly and the beans have been successfully deployed, you should be able to test the functionality of the ECperf application using the web interface. If the web pages are deployed under the ECperf context, as defined by the deployment descriptors supplied for the J2EE RI, you should be able to access them by pointing your browser at the URL


http://<ECPERF_HOST>:<ECPERF_PORT>/ECperf/

Test that all the functionality is in place by trying the following transactions through the web interface:
    In the Orders Domain:
          Create a new order with one of the items having a quantity of more than 100. This should cause a large order to be created in the Manufacturing Domain.
          Change the order
          Get order information
          Get information on all orders of a customer
          Cancel an order

    In the Manufacturing Domain:
          Get a large order and start processing a work order based on it
          Move the resulting work order through its various stages.
          Create a new work order
          Cancel the work order
To make sure the Delivery and Emulator servlets are functioning, verify that you can connect to the following URLs:
         http://<ECPERF_HOST>:<ECPERF_PORT>/Supplier/DeliveryServlet
         http://<EMULATOR_HOST>:<EMULATOR_PORT>/Emulator/EmulatorServlet
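A quick way to script this check is a small stand-alone program such as the one below. It is not part of the kit; the class name is hypothetical, and the host and port placeholders must be replaced with your ECPERF_HOST/ECPERF_PORT and EMULATOR_HOST/EMULATOR_PORT values:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ServletCheck {
        public static void main(String[] args) throws Exception {
            String[] urls = {
                "http://ecperf-host:8000/Supplier/DeliveryServlet",     // <ECPERF_HOST>:<ECPERF_PORT>
                "http://emulator-host:8000/Emulator/EmulatorServlet"    // <EMULATOR_HOST>:<EMULATOR_PORT>
            };
            for (int i = 0; i < urls.length; i++) {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(urls[i]).openConnection();
                System.out.println(urls[i] + " -> HTTP " + conn.getResponseCode());
                conn.disconnect();
            }
        }
    }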

If all of the above run successfully, you are ready to run the benchmark using the Driver.

 
Description of the Driver
The ECperf Driver consists of several Java programs and is designed to run on multiple client machines, using an arbitrary number of JVMs to ensure that the Driver has no inherent scalability limitations. Note that none of the client machines are part of the SUT.


We define the following terms:

The Driver consists of the following components:
    The actual applications that implement the workload as defined in the specifications. These are OrderEntry, PlannedLine and LargeOrderLine.
    The Agents, one per type of application. Each Agent manages all the threads of its application. You can configure as many Agents as you wish on any number of client machines. The Agents are OrdersAgent, MfgAgent and LargeOLAgent.
    The Controller, which runs on the Master machine and with which all the Agents register.
    The Driver, which runs on the Master machine and which is responsible for doing a benchmark run.

How The Driver Works

The Driver communicates with all the Agents using RMI. The Driver reads the run properties and configures the Agents appropriately. Each Agent then starts the threads of its respective workload; for example, the OrdersAgent runs OrderEntry threads. The number of threads is determined by the scaling rules of the specification and is equally distributed amongst all the Agents. Each thread runs independently, executing the workload according to the rules defined in the spec. When the run completes, the Driver co-ordinates with the Agents to retrieve all the stats and prints out the reports.
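As an illustration of how an "equally distributed" split works out (this is only a sketch of the arithmetic, not the kit's code; the class name is hypothetical):

    public class ThreadSplitSketch {
        // Splits totalThreads as evenly as possible across the given number of
        // Agents; any remainder goes to the first few Agents.
        public static int[] split(int totalThreads, int agents) {
            int[] perAgent = new int[agents];
            for (int i = 0; i < agents; i++) {
                perAgent[i] = totalThreads / agents + (i < totalThreads % agents ? 1 : 0);
            }
            return perAgent;
        }

        public static void main(String[] args) {
            int[] split = split(50, 4);   // e.g. 50 OrderEntry threads over 4 OrdersAgents
            for (int i = 0; i < split.length; i++) {
                System.out.println("Agent " + i + ": " + split[i] + " threads");  // 13, 13, 12, 12
            }
        }
    }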

The Driver's InitialContext and Lookups

In the past, some people have had problems with the way the Driver gets its InitialContext and does its lookups. This section explains exactly how this is done.


The Driver obtains the JNDI InitialContext by executing the following code in each of the Agent processes:
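In essence, each Agent creates the default InitialContext, which picks up its settings from jndi.properties on the CLASSPATH or from the -Djava.naming.* options described below (a sketch; the kit's actual code may differ in detail):

            Context ctx = new InitialContext();   // javax.naming.Context / javax.naming.InitialContext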

Note that this context is shared amongst all the threads of the application that the Agent starts.


Each of the bean home interfaces that the applications require is looked up using the absolute home name of the bean. An example is shown below:
            OrderSesHome orderSesHome =
                (OrderSesHome) PortableRemoteObject.narrow
                (ctx.lookup("OrderSesHome"), OrderSesHome.class);

If your app server needs extra properties to be set for the InitialContext, this can be done by setting them in jndi.properties in the CLASSPATH or by changing the variable JAVA in config/<appserver>.env. The example given below shows the generic form, followed by appserver-specific values:

     JAVA="$JAVA_HOME/bin/java

     -Djava.naming.factory.initial=vendor naming factory goes here

     -Djava.naming.provider.url=vendor url goes here
iPlanet
-Djava.naming.factory.initial=com.sun.jndi.cosnaming.CNCtxFactory
-Djava.naming.provider.url=iiop://${ECPERF_HOST}:${IIOP_PORT}
Web Logic
-Djava.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
-Djava.naming.provider.url=t3://${ECPERF_HOST}:${ECPERF_PORT}
WebSphere
-Djava.naming.factory.initial=com.ibm.ws.naming.ldap.WsnLdapInitialContextFactory
-Djava.naming.provider.url=iiop://${ECPERF_HOST}:${ECPERF_PORT}"
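For reference, these two settings correspond to the standard JNDI environment keys Context.INITIAL_CONTEXT_FACTORY and Context.PROVIDER_URL. The stand-alone sketch below (a hypothetical helper, not part of the kit) shows their programmatic equivalent, using the WebLogic values above as an example:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class VendorContextSketch {
        // Builds an InitialContext with the vendor factory and URL set explicitly,
        // instead of relying on jndi.properties or -D options on the JAVA variable.
        public static Context create(String host, String port) throws NamingException {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");           // vendor naming factory
            env.put(Context.PROVIDER_URL, "t3://" + host + ":" + port); // vendor URL
            return new InitialContext(env);
        }
    }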

Configuring and Running the Driver

Read the description of the Driver first to understand how it works. Follow these steps to configure the Driver for any particular application server:
    Edit $ECPERF_HOME/config/appsserver to contain the name of the application server you want to run.
    Edit $ECPERF_HOME/config/<appserver>.env to contain the correct values for the following variables:
    Edit $ECPERF_HOME/config/run.properties and set the ECperf run parameters appropriately. The parameters are described in comments in this file. Ensure that dumpStats is 0 if you are not running the charting application, as otherwise the Driver will hang waiting for the reader on the pipe. If you cannot meet the mix requirements for the OrderEntry application, these can be adjusted by changing various Weight parameters (custsWeight, ordsWeight, chgoWeight and newoWeight). All the parameters can be multiplied by a factor of 10 or 100 for better granularity.

    Edit $ECPERF_HOME/config/agent.properties. There are currently no values you can change in this file.

    Check that the method used by the Driver to get its  InitialContext and do lookups will work for your appserver.

    Now, run the driver by running the script bin/driver.sh [<driver host>] from the $ECPERF_HOME directory.

    You can abort a run at any time by pressing Ctrl-C. This will cause all the processes to exit gracefully.

     
Notes:
    If no <driver host> is specified, the local host is used by default.

    We have sometimes experienced binding exceptions at the start of the first run. For now, just cancel the script (by pressing Ctrl-C) and rerun it. This problem usually does not persist into the second run.

    The driver.sh script starts all Agents on the Master machine. If you want to start the Agents on different client systems, the script needs to be edited appropriately.
    We have also provided a driver.bat file; the instructions to run it are provided in NTDriver.html.

Driver Run Output

After a successful run, you should see a directory named after the run number created in the directory specified by the outDir property in config/run.properties. The run number starts at 1 and is incremented for each run. The current run's number is stored in the file ecperf.seq in the user's home directory. For example, if outDir = /export/home/ecperf/output, the first run's output will go into the directory /export/home/ecperf/output/1. In the run output directory, you will find the following files:

 

Run output files and their description


ECperf.summary - A summary file giving the final ECperf metrics
Audit.report - A summary file giving the Audit report
Orders.summary - A summary file giving the results of the OrderEntry application
Orders.detail - A detail file giving histogram data that can be used to produce graphs
ords.err - A log of any errors encountered during the run
Mfg.summary - A summary file giving the results of the Mfg applications
Mfg.detail - A detail file giving histogram data of the Mfg applications
plannedlines.err - A log of errors encountered by the PlannedLine application
loline.err - A log of errors encountered by the LargeOrderLine application
delivery.err - A log of errors encountered by the Supplier Domain servlet
emulator.err - A log of errors encountered by the Emulator servlet

NOTE: All errors encountered by the Agents/Applications will be logged in the respective .err file. Only errors encountered by the Driver will appear on the screen in which the Driver was started. As such, you should always check the error logs in the run output directory.
 

Interpreting Detail files

The Driver produces two detail files, Orders.detail and Mfg.detail, in the run output directory. Each of these files has throughput and response time data that are required to produce the graphs mentioned in Clause 4.10 of the specification.


Here is an example of the first few lines from an Orders.detail file:

Neworder Throughput
TIME    COUNT OF TX.
0       85
30      78
60      68
90      79

This means that 85 neworder transactions were completed in the first 30 seconds of the benchmark run, 78 in the next 30 seconds, 68 in the third 30-second interval, and so on.
Now, let us look at an example of response time data:

NEWORDER
0.000    147
0.100    140
0.200      8
0.300      0

This shows that 147 transactions completed within 0.1 seconds, 140 had a response time between 0.1 and 0.2 seconds, and only 8 transactions had a response time greater than 0.2 seconds. Note that response time data are gathered only during steady state, whereas throughput data are collected for the entire duration of the run.
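If you want to graph these numbers, a small stand-alone converter along the following lines can turn the throughput section into CSV. It is not part of the kit; the class name is hypothetical, and it assumes the two-column "time count" layout shown above:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.StringTokenizer;

    public class ThroughputToCsv {
        // Usage: java ThroughputToCsv Orders.detail > neworder_throughput.csv
        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            System.out.println("time_sec,tx_count");
            String line;
            while ((line = in.readLine()) != null) {
                StringTokenizer st = new StringTokenizer(line.trim());
                if (st.countTokens() != 2) continue;        // skip headers and blank lines
                String time = st.nextToken(), count = st.nextToken();
                try {
                    Integer.parseInt(time);                 // keep only the integer-time
                    Integer.parseInt(count);                // throughput rows
                } catch (NumberFormatException e) {
                    continue;
                }
                System.out.println(time + "," + count);
            }
            in.close();
        }
    }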
 

Running Atomicity Tests

Clause 4.11.1 of the specification details the requirements for atomicity of the transactions. The code to run the various atomicity tests is implemented via the Debug class in the OrderEnt and LargeOrderEnt beans. Debug level 4 is used for the atomicity tests. To run Atomicity Tests 1 and 2, change the environment variable debuglevel of OrderEnt to 4 in orders.xml. Similarly, to run Atomicity Test 3, change debuglevel of LargeOrderEnt to 4 in mfg.xml.