RunJobFlow creates and starts running a new job flow. The job flow will run the steps specified. Once the job flow completes, the cluster is stopped and the HDFS partition is lost. To prevent loss of data, configure the last step of the job flow to store results in Amazon S3. If the JobFlowInstancesDetail KeepJobFlowAliveWhenNoSteps parameter is set to TRUE, the job flow will transition to the WAITING state rather than shutting down once the steps have completed.
For additional protection, you can set the JobFlowInstancesDetail TerminationProtected parameter to TRUE to lock the job flow and prevent it from being terminated by API call, user intervention, or in the event of a job flow error.
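As an illustrative sketch only, the following Python code shows how both parameters might be set through the boto3 SDK, whose run_job_flow call maps to this action. The region, instance types, release label, roles, and S3 log URI are placeholder assumptions, not values taken from this reference.

import boto3

# Hypothetical client and configuration; every concrete value below is
# an assumption chosen for illustration.
emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="example-job-flow",
    LogUri="s3://my-bucket/emr-logs/",   # assumed bucket
    ReleaseLabel="emr-5.36.0",           # assumed release
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        # Transition to WAITING instead of shutting down after the
        # last step completes.
        "KeepJobFlowAliveWhenNoSteps": True,
        # Lock the job flow against termination by API call, user
        # intervention, or a job flow error.
        "TerminationProtected": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])

With KeepJobFlowAliveWhenNoSteps set to True, the cluster created by this call stays in the WAITING state so that further steps can be submitted later.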
A maximum of 256 steps are allowed in each job flow.
If your job flow is long-running (such as a Hive data warehouse) or complex, you may require more than 256 steps to process your data. You can bypass the 256-step limitation in various ways, including using the SSH shell to connect to the master node and submitting queries directly to the software running on the master node, such as Hive and Hadoop. For more information on how to do this, go to Add More than 256 Steps to a Job Flow in the Amazon Elastic MapReduce Developer's Guide.
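The SSH bypass can also be scripted. The sketch below uses the third-party paramiko library to run a Hive query directly on the master node; the master node DNS name, key file path, and query are assumptions for illustration (hadoop is the default SSH user on the master node).

import paramiko

# Placeholder connection details; the master public DNS name and the
# key file path are assumptions for this sketch.
MASTER_DNS = "ec2-xx-xx-xx-xx.compute-1.amazonaws.com"
KEY_FILE = "/path/to/my-key-pair.pem"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(MASTER_DNS, username="hadoop", key_filename=KEY_FILE)

# Submit a query straight to Hive on the master node; queries run this
# way do not count against the 256-step limit on the job flow itself.
stdin, stdout, stderr = client.exec_command(
    'hive -e "SELECT COUNT(*) FROM my_table;"'
)
print(stdout.read().decode())
client.close()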
For long-running job flows, we recommend that you periodically store your results.
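One way to persist results from a running job flow, sketched below under assumed names, is to submit an extra step that copies HDFS output to Amazon S3. boto3's add_job_flow_steps adds steps to an existing job flow; the S3DistCp invocation through command-runner.jar, along with the job flow ID, HDFS path, and bucket, are assumptions for illustration.

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # assumed region

# Hypothetical step that copies HDFS results to S3 with S3DistCp;
# the job flow ID and both paths are placeholders.
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",
    Steps=[
        {
            "Name": "persist-results-to-s3",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "s3-dist-cp",
                    "--src", "hdfs:///user/hadoop/output",
                    "--dest", "s3://my-bucket/results/",
                ],
            },
        }
    ],
)

Running a copy step like this at regular intervals means results survive even if the cluster is later terminated and the HDFS partition is lost.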