Container for the parameters to the BatchWriteItem operation. The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.

BatchWriteItem cannot update items. To update items, use the UpdateItem API.

The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however, BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.

Note that if none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem will return a ProvisionedThroughputExceededException.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.

For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.

With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.

If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem provides an alternative where the API performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.

Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.

If one or more of the following is true, DynamoDB rejects the entire batch write operation:

  • One or more tables specified in the BatchWriteItem request does not exist.

  • Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.

  • You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.

  • There are more than 25 requests in the batch.

  • Any individual item in a batch exceeds 400 KB.

  • The total request size exceeds 16 MB.
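
The retry pattern described above — loop on UnprocessedItems with exponential backoff — can be sketched as follows. This is a minimal sketch, assuming a constructed client implementing IAmazonDynamoDB; `BatchWriter`, `WriteAll`, and `BackoffDelayMs` are illustrative names, not part of the SDK.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

static class BatchWriter
{
    // Illustrative helper: exponential backoff (100 ms, 200 ms, 400 ms, ...)
    // capped at 20 seconds.
    public static int BackoffDelayMs(int attempt) =>
        (int)Math.Min(20000L, 100L * (1L << Math.Min(attempt, 16)));

    // Loop until DynamoDB reports no unprocessed items, delaying between
    // attempts as the documentation recommends.
    public static void WriteAll(IAmazonDynamoDB client,
                                Dictionary<string, List<WriteRequest>> items)
    {
        var request = new BatchWriteItemRequest { RequestItems = items };
        for (int attempt = 0; ; attempt++)
        {
            BatchWriteItemResponse response = client.BatchWriteItem(request);
            if (response.UnprocessedItems.Count == 0)
                return;                                // everything written
            // Resubmit only what the service could not process.
            request.RequestItems = response.UnprocessedItems;
            Thread.Sleep(BackoffDelayMs(attempt));     // exponential backoff
        }
    }
}
```

Note that the loop resubmits the map returned in UnprocessedItems verbatim; it is already in the shape a new BatchWriteItemRequest expects.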

Inheritance: AmazonDynamoDBRequest
        internal BatchWriteItemResponse BatchWriteItem(BatchWriteItemRequest request)
        {
            var marshaller = new BatchWriteItemRequestMarshaller();
            var unmarshaller = BatchWriteItemResponseUnmarshaller.Instance;

            return Invoke<BatchWriteItemRequest,BatchWriteItemResponse>(request, marshaller, unmarshaller);
        }
        /// <summary>
        /// Initiates the asynchronous execution of the BatchWriteItem operation.
        /// <seealso cref="Amazon.DynamoDBv2.IAmazonDynamoDB"/>
        /// </summary>
        /// 
        /// <param name="request">Container for the necessary parameters to execute the BatchWriteItem operation.</param>
        /// <param name="cancellationToken">
        ///     A cancellation token that can be used by other objects or threads to receive notice of cancellation.
        /// </param>
        /// <returns>The task object representing the asynchronous operation.</returns>
        public Task<BatchWriteItemResponse> BatchWriteItemAsync(BatchWriteItemRequest request, System.Threading.CancellationToken cancellationToken = default(CancellationToken))
        {
            var marshaller = new BatchWriteItemRequestMarshaller();
            var unmarshaller = BatchWriteItemResponseUnmarshaller.Instance;

            return InvokeAsync<BatchWriteItemRequest,BatchWriteItemResponse>(request, marshaller, 
                unmarshaller, cancellationToken);
        }
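A hedged usage sketch of the async overload above: one awaited call with a cancellation timeout. The client, the populated request, the 10-second timeout, and the `WriteOnceAsync` name are assumptions for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

static class BatchWriteExample
{
    // One awaited BatchWriteItem call that gives up after 10 seconds.
    // The caller is responsible for resubmitting whatever comes back
    // in UnprocessedItems (ideally with exponential backoff).
    public static async Task<Dictionary<string, List<WriteRequest>>> WriteOnceAsync(
        IAmazonDynamoDB client, BatchWriteItemRequest request)
    {
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
        {
            BatchWriteItemResponse response =
                await client.BatchWriteItemAsync(request, cts.Token);
            return response.UnprocessedItems;
        }
    }
}
```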
 /// <summary>
 /// Initiates the asynchronous execution of the BatchWriteItem operation.
 /// <seealso cref="Amazon.DynamoDBv2.AmazonDynamoDB.BatchWriteItem"/>
 /// </summary>
 /// 
 /// <param name="batchWriteItemRequest">Container for the necessary parameters to execute the BatchWriteItem operation on
 ///          AmazonDynamoDBv2.</param>
 /// <param name="callback">An AsyncCallback delegate that is invoked when the operation completes.</param>
 /// <param name="state">A user-defined state object that is passed to the callback procedure. Retrieve this object from within the callback
 ///          procedure using the AsyncState property.</param>
 /// 
 /// <returns>An IAsyncResult that can be used to poll or wait for results, or both; this value is also needed when invoking EndBatchWriteItem
 ///         operation.</returns>
 public IAsyncResult BeginBatchWriteItem(BatchWriteItemRequest batchWriteItemRequest, AsyncCallback callback, object state)
 {
     return invokeBatchWriteItem(batchWriteItemRequest, callback, state, false);
 }
 IAsyncResult invokeBatchWriteItem(BatchWriteItemRequest batchWriteItemRequest, AsyncCallback callback, object state, bool synchronized)
 {
     IRequest irequest = new BatchWriteItemRequestMarshaller().Marshall(batchWriteItemRequest);
     var unmarshaller = BatchWriteItemResponseUnmarshaller.GetInstance();
     AsyncResult result = new AsyncResult(irequest, callback, state, synchronized, signer, unmarshaller);
     Invoke(result);
     return result;
 }
 /// <summary>
 /// The <i>BatchWriteItem</i> operation puts or deletes multiple items in one or more
 /// tables. A single call to <i>BatchWriteItem</i> can write up to 16 MB of data, which
 /// can comprise as many as 25 put or delete requests. Individual items to be written
 /// can be as large as 400 KB.
 /// 
 ///  <note> 
 /// <para>
 /// <i>BatchWriteItem</i> cannot update items. To update items, use the <i>UpdateItem</i>
 /// API.
 /// </para>
 ///  </note> 
 /// <para>
 /// The individual <i>PutItem</i> and <i>DeleteItem</i> operations specified in <i>BatchWriteItem</i>
 /// are atomic; however <i>BatchWriteItem</i> as a whole is not. If any requested operations
 /// fail because the table's provisioned throughput is exceeded or an internal processing
 /// failure occurs, the failed operations are returned in the <i>UnprocessedItems</i>
 /// response parameter. You can investigate and optionally resend the requests. Typically,
 /// you would call <i>BatchWriteItem</i> in a loop. Each iteration would check for unprocessed
 /// items and submit a new <i>BatchWriteItem</i> request with those unprocessed items
 /// until all items have been processed.
 /// </para>
 ///  
 /// <para>
 /// Note that if <i>none</i> of the items can be processed due to insufficient provisioned
 /// throughput on all of the tables in the request, then <i>BatchWriteItem</i> will return
 /// a <i>ProvisionedThroughputExceededException</i>.
 /// </para>
 ///  <important> 
 /// <para>
 /// If DynamoDB returns any unprocessed items, you should retry the batch operation on
 /// those items. However, <i>we strongly recommend that you use an exponential backoff
 /// algorithm</i>. If you retry the batch operation immediately, the underlying read or
 /// write requests can still fail due to throttling on the individual tables. If you delay
 /// the batch operation using exponential backoff, the individual requests in the batch
 /// are much more likely to succeed.
 /// </para>
 ///  
 /// <para>
 /// For more information, see <a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations">Batch
 /// Operations and Error Handling</a> in the <i>Amazon DynamoDB Developer Guide</i>.
 /// </para>
 ///  </important> 
 /// <para>
 /// With <i>BatchWriteItem</i>, you can efficiently write or delete large amounts of data,
 /// such as from Amazon Elastic MapReduce (EMR), or copy data from another database into
 /// DynamoDB. In order to improve performance with these large-scale operations, <i>BatchWriteItem</i>
 /// does not behave in the same way as individual <i>PutItem</i> and <i>DeleteItem</i>
 /// calls would. For example, you cannot specify conditions on individual put and delete
 /// requests, and <i>BatchWriteItem</i> does not return deleted items in the response.
 /// </para>
 ///  
 /// <para>
 /// If you use a programming language that supports concurrency, such as Java, you can
 /// use threads to write items in parallel. Your application must include the necessary
 /// logic to manage the threads. With languages that don't support threading, such as
 /// PHP, you must update or delete the specified items one at a time. In both situations,
 /// <i>BatchWriteItem</i> provides an alternative where the API performs the specified
 /// put and delete operations in parallel, giving you the power of the thread pool approach
 /// without having to introduce complexity into your application.
 /// </para>
 ///  
 /// <para>
 /// Parallel processing reduces latency, but each specified put and delete request consumes
 /// the same number of write capacity units whether it is processed in parallel or not.
 /// Delete operations on nonexistent items consume one write capacity unit.
 /// </para>
 ///  
 /// <para>
 /// If one or more of the following is true, DynamoDB rejects the entire batch write operation:
 /// </para>
 ///  <ul> <li> 
 /// <para>
 /// One or more tables specified in the <i>BatchWriteItem</i> request does not exist.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// Primary key attributes specified on an item in the request do not match those in the
 /// corresponding table's primary key schema.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// You try to perform multiple operations on the same item in the same <i>BatchWriteItem</i>
 /// request. For example, you cannot put and delete the same item in the same <i>BatchWriteItem</i>
 /// request. 
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// There are more than 25 requests in the batch.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// Any individual item in a batch exceeds 400 KB.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// The total request size exceeds 16 MB.
 /// </para>
 ///  </li> </ul>
 /// </summary>
 /// <param name="requestItems">A map of one or more table names and, for each table, a list of operations to be performed (<i>DeleteRequest</i> or <i>PutRequest</i>). Each element in the map consists of the following: <ul> <li> <i>DeleteRequest</i> - Perform a <i>DeleteItem</i> operation on the specified item. The item to be deleted is identified by a <i>Key</i> subelement: <ul> <li> <i>Key</i> - A map of primary key attribute values that uniquely identify the item. Each entry in this map consists of an attribute name and an attribute value. For each primary key, you must provide <i>all</i> of the key attributes. For example, with a hash type primary key, you only need to provide the hash attribute. For a hash-and-range type primary key, you must provide <i>both</i> the hash attribute and the range attribute. </li> </ul> </li> <li> <i>PutRequest</i> - Perform a <i>PutItem</i> operation on the specified item. The item to be put is identified by an <i>Item</i> subelement: <ul> <li> <i>Item</i> - A map of attributes and their values. Each entry in this map consists of an attribute name and an attribute value. Attribute values must not be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests that contain empty values will be rejected with a <i>ValidationException</i> exception. If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition. </li> </ul> </li> </ul></param>
 /// 
 /// <returns>The response from the BatchWriteItem service method, as returned by DynamoDB.</returns>
 /// <exception cref="Amazon.DynamoDBv2.Model.InternalServerErrorException">
 /// An error occurred on the server side.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ItemCollectionSizeLimitExceededException">
 /// An item collection is too large. This exception is only returned for tables that have
 /// one or more local secondary indexes.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ProvisionedThroughputExceededException">
 /// The request rate is too high, or the request is too large, for the available throughput
 /// to accommodate. The AWS SDKs automatically retry requests that receive this exception;
 /// therefore, your request will eventually succeed, unless the request is too large or
 /// your retry queue is too large to finish. Reduce the frequency of requests by using
 /// the strategies listed in <a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#APIRetries">Error
 /// Retries and Exponential Backoff</a> in the <i>Amazon DynamoDB Developer Guide</i>.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ResourceNotFoundException">
 /// The operation tried to access a nonexistent table or index. The resource might not
 /// be specified correctly, or its status might not be <code>ACTIVE</code>.
 /// </exception>
 public BatchWriteItemResponse BatchWriteItem(Dictionary<string, List<WriteRequest>> requestItems)
 {
     var request = new BatchWriteItemRequest();
     request.RequestItems = requestItems;
     return BatchWriteItem(request);
 }
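A hedged usage sketch of the convenience overload above: building the requestItems map with one put and one delete. The table name "Thread" and the attribute names are illustrative; a real call also needs a constructed AmazonDynamoDBClient.

```csharp
using System.Collections.Generic;
using Amazon.DynamoDBv2.Model;

// One PutRequest and one DeleteRequest against an assumed table "Thread".
var requestItems = new Dictionary<string, List<WriteRequest>>
{
    ["Thread"] = new List<WriteRequest>
    {
        new WriteRequest
        {
            // PutRequest: the complete item to write, attribute name -> typed value.
            PutRequest = new PutRequest
            {
                Item = new Dictionary<string, AttributeValue>
                {
                    ["ForumName"] = new AttributeValue { S = "Amazon DynamoDB" },
                    ["Subject"]   = new AttributeValue { S = "New thread" }
                }
            }
        },
        new WriteRequest
        {
            // DeleteRequest: Key must contain every primary key attribute.
            DeleteRequest = new DeleteRequest
            {
                Key = new Dictionary<string, AttributeValue>
                {
                    ["ForumName"] = new AttributeValue { S = "Amazon DynamoDB" },
                    ["Subject"]   = new AttributeValue { S = "Old thread" }
                }
            }
        }
    }
};

// var response = client.BatchWriteItem(requestItems);  // client: AmazonDynamoDBClient
```

Because both requests target different items, they may appear in the same batch; two operations on the same item would cause the whole batch to be rejected, as noted above.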
 /// <summary>
 /// <para>The <i>BatchWriteItem</i> operation puts or deletes multiple items in one or more tables. A single call to <i>BatchWriteItem</i> can
 /// write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400
 /// KB.</para> <para><b>NOTE:</b> BatchWriteItem cannot update items. To update items, use the UpdateItem API. </para> <para>The individual
 /// <i>PutItem</i> and <i>DeleteItem</i> operations specified in <i>BatchWriteItem</i> are atomic; however <i>BatchWriteItem</i> as a whole is
 /// not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the
 /// failed operations are returned in the <i>UnprocessedItems</i> response parameter. You can investigate and optionally resend the requests.
 /// Typically, you would call <i>BatchWriteItem</i> in a loop. Each iteration would check for unprocessed items and submit a new
 /// <i>BatchWriteItem</i> request with those unprocessed items until all items have been processed.</para> <para>To write one item, you can use
 /// the <i>PutItem</i> operation; to delete one item, you can use the <i>DeleteItem</i> operation.</para> <para>With <i>BatchWriteItem</i> , you
 /// can efficiently write or delete large amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into
 /// Amazon DynamoDB. In order to improve performance with these large-scale operations, <i>BatchWriteItem</i> does not behave in the same way as
 /// individual <i>PutItem</i> and <i>DeleteItem</i> calls would. For example, you cannot specify conditions on individual put and delete
 /// requests, and <i>BatchWriteItem</i> does not return deleted items in the response.</para> <para>If you use a programming language that
 /// supports concurrency, such as Java, you can use threads to write items in parallel. Your application must include the necessary logic to
 /// manage the threads.</para> <para>With languages that don't support threading, such as PHP, you must update or delete the
 /// specified items one at a time. In both situations, <i>BatchWriteItem</i> provides an alternative where the API performs the specified put
 /// and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your
 /// application.</para> <para>Parallel processing reduces latency, but each specified put and delete request consumes the same number of write
 /// capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.</para>
 /// <para>If one or more of the following is true, Amazon DynamoDB rejects the entire batch write operation:</para>
 /// <ul>
 /// <li> <para>One or more tables specified in the <i>BatchWriteItem</i> request does not exist.</para> </li>
 /// <li> <para>Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key
 /// schema.</para> </li>
 /// <li> <para>You try to perform multiple operations on the same item in the same <i>BatchWriteItem</i> request. For example, you cannot put
 /// and delete the same item in the same <i>BatchWriteItem</i> request. </para> </li>
 /// <li> <para>There are more than 25 requests in the batch.</para> </li>
 /// <li> <para>Any individual item in a batch exceeds 400 KB.</para> </li>
 /// <li> <para>The total request size exceeds 16 MB.</para> </li>
 /// 
 /// </ul>
 /// </summary>
 /// 
 /// <param name="batchWriteItemRequest">Container for the necessary parameters to execute the BatchWriteItem service method on
 ///          AmazonDynamoDBv2.</param>
 /// 
 /// <returns>The response from the BatchWriteItem service method, as returned by AmazonDynamoDBv2.</returns>
 /// 
 /// <exception cref="ItemCollectionSizeLimitExceededException"/>
 /// <exception cref="ResourceNotFoundException"/>
 /// <exception cref="ProvisionedThroughputExceededException"/>
 /// <exception cref="InternalServerErrorException"/>
 public BatchWriteItemResponse BatchWriteItem(BatchWriteItemRequest batchWriteItemRequest)
 {
     IAsyncResult asyncResult = invokeBatchWriteItem(batchWriteItemRequest, null, null, true);
     return EndBatchWriteItem(asyncResult);
 }
		internal BatchWriteItemResponse BatchWriteItem(BatchWriteItemRequest request)
        {
            var task = BatchWriteItemAsync(request);
            try
            {
                return task.Result;
            }
            catch(AggregateException e)
            {
                ExceptionDispatchInfo.Capture(e.InnerException).Throw();
                return null;
            }
        }
        /// <summary>
        /// <para>The <i>BatchWriteItem</i> operation puts or deletes multiple items in one or more tables. A single call to <i>BatchWriteItem</i> can
        /// write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400
        /// KB.</para> <para><b>NOTE:</b> BatchWriteItem cannot update items. To update items, use the UpdateItem API. </para> <para>The individual
        /// <i>PutItem</i> and <i>DeleteItem</i> operations specified in <i>BatchWriteItem</i> are atomic; however <i>BatchWriteItem</i> as a whole is
        /// not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the
        /// failed operations are returned in the <i>UnprocessedItems</i> response parameter. You can investigate and optionally resend the requests.
        /// Typically, you would call <i>BatchWriteItem</i> in a loop. Each iteration would check for unprocessed items and submit a new
        /// <i>BatchWriteItem</i> request with those unprocessed items until all items have been processed.</para> <para>Note that if <i>none</i> of the
        /// items can be processed due to insufficient provisioned throughput on all of the tables in the request, then <i>BatchWriteItem</i> will throw a
        /// <i>ProvisionedThroughputExceededException</i> .</para> <para>To write one item, you can use the <i>PutItem</i> operation; to delete one
        /// item, you can use the <i>DeleteItem</i> operation.</para> <para>With <i>BatchWriteItem</i> , you can efficiently write or delete large
        /// amounts of data, such as from Amazon Elastic MapReduce (EMR), or copy data from another database into DynamoDB. In order to improve
        /// performance with these large-scale operations, <i>BatchWriteItem</i> does not behave in the same way as individual <i>PutItem</i> and
        /// <i>DeleteItem</i> calls would. For example, you cannot specify conditions on individual put and delete requests, and <i>BatchWriteItem</i>
        /// does not return deleted items in the response.</para> <para>If you use a programming language that supports concurrency, such as Java, you
        /// can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that
        /// don't support threading, such as PHP, you must update or delete the specified items one at a time. In both situations, <i>BatchWriteItem</i>
        /// provides an alternative where the API performs the specified put and delete operations in parallel, giving you the power of the thread pool
        /// approach without having to introduce complexity into your application.</para> <para>Parallel processing reduces latency, but each specified
        /// put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on
        /// nonexistent items consume one write capacity unit.</para> <para>If one or more of the following is true, DynamoDB rejects the entire batch
        /// write operation:</para>
        /// <ul>
        /// <li> <para>One or more tables specified in the <i>BatchWriteItem</i> request does not exist.</para> </li>
        /// <li> <para>Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key
        /// schema.</para> </li>
        /// <li> <para>You try to perform multiple operations on the same item in the same <i>BatchWriteItem</i> request. For example, you cannot put
        /// and delete the same item in the same <i>BatchWriteItem</i> request. </para> </li>
        /// <li> <para>There are more than 25 requests in the batch.</para> </li>
        /// <li> <para>Any individual item in a batch exceeds 400 KB.</para> </li>
        /// <li> <para>The total request size exceeds 16 MB.</para> </li>
        /// 
        /// </ul>
        /// </summary>
        /// 
        /// <param name="batchWriteItemRequest">Container for the necessary parameters to execute the BatchWriteItem service method on
        /// AmazonDynamoDBv2.</param>
        /// 
        /// <returns>The response from the BatchWriteItem service method, as returned by AmazonDynamoDBv2.</returns>
        /// 
        /// <exception cref="T:Amazon.DynamoDBv2.Model.ItemCollectionSizeLimitExceededException" />
        /// <exception cref="T:Amazon.DynamoDBv2.Model.ResourceNotFoundException" />
        /// <exception cref="T:Amazon.DynamoDBv2.Model.ProvisionedThroughputExceededException" />
        /// <exception cref="T:Amazon.DynamoDBv2.Model.InternalServerErrorException" />
        /// <param name="cancellationToken">
        ///     A cancellation token that can be used by other objects or threads to receive notice of cancellation.
        /// </param>
		public Task<BatchWriteItemResponse> BatchWriteItemAsync(BatchWriteItemRequest batchWriteItemRequest, CancellationToken cancellationToken = default(CancellationToken))
        {
            var marshaller = new BatchWriteItemRequestMarshaller();
            var unmarshaller = BatchWriteItemResponseUnmarshaller.GetInstance();
            return Invoke<IRequest, BatchWriteItemRequest, BatchWriteItemResponse>(batchWriteItemRequest, marshaller, unmarshaller, signer, cancellationToken);
        }
        /// <summary>
        /// Initiates the asynchronous execution of the BatchWriteItem operation.
        /// </summary>
        /// 
        /// <param name="request">Container for the necessary parameters to execute the BatchWriteItem operation on AmazonDynamoDBClient.</param>
        /// <param name="callback">An AsyncCallback delegate that is invoked when the operation completes.</param>
        /// <param name="state">A user-defined state object that is passed to the callback procedure. Retrieve this object from within the callback
        ///          procedure using the AsyncState property.</param>
        /// 
        /// <returns>An IAsyncResult that can be used to poll or wait for results, or both; this value is also needed when invoking EndBatchWriteItem
        ///         operation.</returns>
        public IAsyncResult BeginBatchWriteItem(BatchWriteItemRequest request, AsyncCallback callback, object state)
        {
            var marshaller = new BatchWriteItemRequestMarshaller();
            var unmarshaller = BatchWriteItemResponseUnmarshaller.Instance;

            return BeginInvoke<BatchWriteItemRequest>(request, marshaller, unmarshaller,
                callback, state);
        }
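A sketch of the Begin/End (APM) pattern exposed above. The client and populated request are assumed, and `ApmExample`/`WriteWithCallback` are illustrative names; only BeginBatchWriteItem and EndBatchWriteItem are SDK members, and the End call must be made on the same client instance.

```csharp
using System;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

static class ApmExample
{
    // Fire the request and handle completion in the AsyncCallback;
    // EndBatchWriteItem retrieves the response (or rethrows any error).
    public static void WriteWithCallback(AmazonDynamoDBClient client,
                                         BatchWriteItemRequest request)
    {
        client.BeginBatchWriteItem(request, asyncResult =>
        {
            BatchWriteItemResponse response = client.EndBatchWriteItem(asyncResult);
            Console.WriteLine("Unprocessed item groups: " + response.UnprocessedItems.Count);
        }, null);
    }
}
```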
Example #10
 /// <summary>
 /// The <i>BatchWriteItem</i> operation puts or deletes multiple items in one or more
 /// tables. A single call to <i>BatchWriteItem</i> can write up to 16 MB of data, which
 /// can comprise as many as 25 put or delete requests. Individual items to be written
 /// can be as large as 400 KB.
 /// 
 ///  <note> 
 /// <para>
 /// <i>BatchWriteItem</i> cannot update items. To update items, use the <i>UpdateItem</i>
 /// API.
 /// </para>
 ///  </note> 
 /// <para>
 /// The individual <i>PutItem</i> and <i>DeleteItem</i> operations specified in <i>BatchWriteItem</i>
 /// are atomic; however <i>BatchWriteItem</i> as a whole is not. If any requested operations
 /// fail because the table's provisioned throughput is exceeded or an internal processing
 /// failure occurs, the failed operations are returned in the <i>UnprocessedItems</i>
 /// response parameter. You can investigate and optionally resend the requests. Typically,
 /// you would call <i>BatchWriteItem</i> in a loop. Each iteration would check for unprocessed
 /// items and submit a new <i>BatchWriteItem</i> request with those unprocessed items
 /// until all items have been processed.
 /// </para>
 ///  
 /// <para>
 /// Note that if <i>none</i> of the items can be processed due to insufficient provisioned
 /// throughput on all of the tables in the request, then <i>BatchWriteItem</i> will return
 /// a <i>ProvisionedThroughputExceededException</i>.
 /// </para>
 ///  <important> 
 /// <para>
 /// If DynamoDB returns any unprocessed items, you should retry the batch operation on
 /// those items. However, <i>we strongly recommend that you use an exponential backoff
 /// algorithm</i>. If you retry the batch operation immediately, the underlying read or
 /// write requests can still fail due to throttling on the individual tables. If you delay
 /// the batch operation using exponential backoff, the individual requests in the batch
 /// are much more likely to succeed.
 /// </para>
 ///  
 /// <para>
 /// For more information, see <a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations">Batch
 /// Operations and Error Handling</a> in the <i>Amazon DynamoDB Developer Guide</i>.
 /// </para>
 ///  </important> 
 /// <para>
 /// With <i>BatchWriteItem</i>, you can efficiently write or delete large amounts of data,
 /// such as from Amazon Elastic MapReduce (EMR), or copy data from another database into
 /// DynamoDB. In order to improve performance with these large-scale operations, <i>BatchWriteItem</i>
 /// does not behave in the same way as individual <i>PutItem</i> and <i>DeleteItem</i>
 /// calls would. For example, you cannot specify conditions on individual put and delete
 /// requests, and <i>BatchWriteItem</i> does not return deleted items in the response.
 /// </para>
 ///  
 /// <para>
 /// If you use a programming language that supports concurrency, you can use threads to
 /// write items in parallel. Your application must include the necessary logic to manage
 /// the threads. With languages that don't support threading, you must update or delete
 /// the specified items one at a time. In both situations, <i>BatchWriteItem</i> provides
 /// an alternative where the API performs the specified put and delete operations in parallel,
 /// giving you the power of the thread pool approach without having to introduce complexity
 /// into your application.
 /// </para>
 ///  
 /// <para>
 /// Parallel processing reduces latency, but each specified put and delete request consumes
 /// the same number of write capacity units whether it is processed in parallel or not.
 /// Delete operations on nonexistent items consume one write capacity unit.
 /// </para>
 ///  
 /// <para>
 /// If one or more of the following is true, DynamoDB rejects the entire batch write operation:
 /// </para>
 ///  <ul> <li> 
 /// <para>
 /// One or more tables specified in the <i>BatchWriteItem</i> request does not exist.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// Primary key attributes specified on an item in the request do not match those in the
 /// corresponding table's primary key schema.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// You try to perform multiple operations on the same item in the same <i>BatchWriteItem</i>
 /// request. For example, you cannot put and delete the same item in the same <i>BatchWriteItem</i>
 /// request. 
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// There are more than 25 requests in the batch.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// Any individual item in a batch exceeds 400 KB.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// The total request size exceeds 16 MB.
 /// </para>
 ///  </li> </ul>
 /// </summary>
 /// <param name="requestItems">A map of one or more table names and, for each table, a list of operations to be performed (<i>DeleteRequest</i> or <i>PutRequest</i>). Each element in the map consists of the following: <ul> <li> <i>DeleteRequest</i> - Perform a <i>DeleteItem</i> operation on the specified item. The item to be deleted is identified by a <i>Key</i> subelement: <ul> <li> <i>Key</i> - A map of primary key attribute values that uniquely identify the item. Each entry in this map consists of an attribute name and an attribute value. For each primary key, you must provide <i>all</i> of the key attributes. For example, with a hash type primary key, you only need to provide the hash attribute. For a hash-and-range type primary key, you must provide <i>both</i> the hash attribute and the range attribute. </li> </ul> </li> <li> <i>PutRequest</i> - Perform a <i>PutItem</i> operation on the specified item. The item to be put is identified by an <i>Item</i> subelement: <ul> <li> <i>Item</i> - A map of attributes and their values. Each entry in this map consists of an attribute name and an attribute value. Attribute values must not be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests that contain empty values will be rejected with a <i>ValidationException</i> exception. If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition. </li> </ul> </li> </ul></param>
 /// <param name="cancellationToken">
 ///     A cancellation token that can be used by other objects or threads to receive notice of cancellation.
 /// </param>
 /// 
 /// <returns>The response from the BatchWriteItem service method, as returned by DynamoDB.</returns>
 /// <exception cref="Amazon.DynamoDBv2.Model.InternalServerErrorException">
 /// An error occurred on the server side.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ItemCollectionSizeLimitExceededException">
 /// An item collection is too large. This exception is only returned for tables that have
 /// one or more local secondary indexes.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ProvisionedThroughputExceededException">
 /// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests
 /// that receive this exception. Your request is eventually successful, unless your retry
 /// queue is too large to finish. Reduce the frequency of requests and use exponential
 /// backoff. For more information, go to <a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#APIRetries">Error
 /// Retries and Exponential Backoff</a> in the <i>Amazon DynamoDB Developer Guide</i>.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ResourceNotFoundException">
 /// The operation tried to access a nonexistent table or index. The resource might not
 /// be specified correctly, or its status might not be <code>ACTIVE</code>.
 /// </exception>
 public Task<BatchWriteItemResponse> BatchWriteItemAsync(Dictionary<string, List<WriteRequest>> requestItems, System.Threading.CancellationToken cancellationToken = default(CancellationToken))
 {
     var request = new BatchWriteItemRequest();
     request.RequestItems = requestItems;
     return BatchWriteItemAsync(request, cancellationToken);
 }
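The doc comment above describes the typical usage pattern: call <i>BatchWriteItem</i> in a loop, resubmitting <i>UnprocessedItems</i> with exponential backoff rather than retrying immediately. A minimal sketch of that loop against this overload; the wrapper name, delay constants, and cap are illustrative assumptions, not part of the SDK:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class BatchWriteRetrySample
{
    public static async Task WriteWithBackoffAsync(
        AmazonDynamoDBClient client,
        Dictionary<string, List<WriteRequest>> requestItems,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        int attempt = 0;
        while (requestItems.Count > 0)
        {
            BatchWriteItemResponse response =
                await client.BatchWriteItemAsync(requestItems, cancellationToken);

            // Anything DynamoDB could not process comes back in UnprocessedItems;
            // resubmit only those requests, after an exponentially growing delay.
            requestItems = response.UnprocessedItems;
            if (requestItems.Count > 0)
            {
                // 100 ms, 200 ms, 400 ms, ... capped at 5 seconds (arbitrary sample values).
                int delayMs = Math.Min(100 * (1 << Math.Min(attempt, 6)), 5000);
                await Task.Delay(delayMs, cancellationToken);
                attempt++;
            }
        }
    }
}
```

The backoff matters because immediate retries tend to hit the same per-table throttling that caused the items to go unprocessed in the first place.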
        internal BatchWriteItemResponse BatchWriteItem(BatchWriteItemRequest request)
        {
            var task = BatchWriteItemAsync(request);
            try
            {
                return task.Result;
            }
            catch (AggregateException e)
            {
                // Unwrap the AggregateException so the caller sees the original service exception.
                throw e.InnerException;
            }
        }
        /// <summary>
        /// Initiates the asynchronous execution of the BatchWriteItem operation.
        /// <seealso cref="Amazon.DynamoDBv2.IAmazonDynamoDB.BatchWriteItem"/>
        /// </summary>
        /// 
        /// <param name="request">Container for the necessary parameters to execute the BatchWriteItem operation.</param>
        /// <param name="cancellationToken">
        ///     A cancellation token that can be used by other objects or threads to receive notice of cancellation.
        /// </param>
        /// <returns>The task object representing the asynchronous operation.</returns>
		public async Task<BatchWriteItemResponse> BatchWriteItemAsync(BatchWriteItemRequest request, CancellationToken cancellationToken = default(CancellationToken))
        {
            var marshaller = new BatchWriteItemRequestMarshaller();
            var unmarshaller = BatchWriteItemResponseUnmarshaller.GetInstance();
            var response = await Invoke<IRequest, BatchWriteItemRequest, BatchWriteItemResponse>(request, marshaller, unmarshaller, signer, cancellationToken)
                .ConfigureAwait(continueOnCapturedContext: false);
            return response;
        }
        public void BatchSamples()
        {
            EnsureTables();

            {
                #region BatchGet Sample 1

                // Define attributes to get and keys to retrieve
                List<string> attributesToGet = new List<string> { "Author", "Title", "Year" };
                List<Dictionary<string, AttributeValue>> sampleTableKeys = new List<Dictionary<string, AttributeValue>>
                {
                    new Dictionary<string, AttributeValue>
                    {
                        { "Author", new AttributeValue { S = "Mark Twain" } },
                        { "Title", new AttributeValue { S = "The Adventures of Tom Sawyer" } }
                    },
                    new Dictionary<string, AttributeValue>
                    {
                        { "Author", new AttributeValue { S = "Mark Twain" } },
                        { "Title", new AttributeValue { S = "Adventures of Huckleberry Finn" } }
                    }
                };

                // Construct get-request for first table
                KeysAndAttributes sampleTableItems = new KeysAndAttributes
                {
                    AttributesToGet = attributesToGet,
                    Keys = sampleTableKeys
                };

                #endregion

                #region BatchGet Sample 2

                // Define keys to retrieve
                List<Dictionary<string, AttributeValue>> authorsTableKeys = new List<Dictionary<string, AttributeValue>>
                {
                    new Dictionary<string, AttributeValue>
                    {
                        { "Author", new AttributeValue { S = "Mark Twain" } },
                    },
                    new Dictionary<string, AttributeValue>
                    {
                        { "Author", new AttributeValue { S = "Booker Taliaferro Washington" } },
                    }
                };

                // Construct get-request for second table
                //  Skip setting AttributesToGet property to retrieve all attributes
                KeysAndAttributes authorsTableItems = new KeysAndAttributes
                {
                    Keys = authorsTableKeys,
                };

                #endregion

                #region BatchGet Sample 3

                // Create a client
                AmazonDynamoDBClient client = new AmazonDynamoDBClient();

                // Construct table-keys mapping
                Dictionary<string, KeysAndAttributes> requestItems = new Dictionary<string, KeysAndAttributes>();
                requestItems["SampleTable"] = sampleTableItems;
                requestItems["AuthorsTable"] = authorsTableItems;

                // Construct request
                BatchGetItemRequest request = new BatchGetItemRequest
                {
                    RequestItems = requestItems
                };

                BatchGetItemResult result;
                do
                {
                    // Issue request and retrieve items
                    result = client.BatchGetItem(request);

                    // Iterate through responses
                    Dictionary<string, List<Dictionary<string, AttributeValue>>> responses = result.Responses;
                    foreach (string tableName in responses.Keys)
                    {
                        // Get items for each table
                        List<Dictionary<string, AttributeValue>> tableItems = responses[tableName];

                        // View items
                        foreach (Dictionary<string, AttributeValue> item in tableItems)
                        {
                            Console.WriteLine("Item:");
                            foreach (var keyValuePair in item)
                            {
                                Console.WriteLine("{0} : S={1}, N={2}, SS=[{3}], NS=[{4}]",
                                    keyValuePair.Key,
                                    keyValuePair.Value.S,
                                    keyValuePair.Value.N,
                                    string.Join(", ", keyValuePair.Value.SS ?? new List<string>()),
                                    string.Join(", ", keyValuePair.Value.NS ?? new List<string>()));
                            }
                        }
                    }

                    // Some items may not have been retrieved!
                    //  Set RequestItems to the result's UnprocessedKeys and reissue request
                    request.RequestItems = result.UnprocessedKeys;

                } while (result.UnprocessedKeys.Count > 0);

                #endregion
            }


            {
                #region BatchWrite Sample 1

                // Create items to put into first table
                Dictionary<string, AttributeValue> item1 = new Dictionary<string, AttributeValue>();
                item1["Author"] = new AttributeValue { S = "Mark Twain" };
                item1["Title"] = new AttributeValue { S = "A Connecticut Yankee in King Arthur's Court" };
                item1["Pages"] = new AttributeValue { N = "575" };
                Dictionary<string, AttributeValue> item2 = new Dictionary<string, AttributeValue>();
                item2["Author"] = new AttributeValue { S = "Booker Taliaferro Washington" };
                item2["Title"] = new AttributeValue { S = "My Larger Education" };
                item2["Pages"] = new AttributeValue { N = "313" };
                item2["Year"] = new AttributeValue { N = "1911" };

                // Create key for item to delete from first table
                //  Hash-key of the target item is string value "Mark Twain"
                //  Range-key of the target item is string value "Tom Sawyer, Detective"
                Dictionary<string, AttributeValue> keyToDelete1 = new Dictionary<string, AttributeValue>
                {
                    { "Author", new AttributeValue { S = "Mark Twain" } },
                    { "Title", new AttributeValue { S = "Tom Sawyer, Detective" } }
                };

                // Construct write-request for first table
                List<WriteRequest> sampleTableItems = new List<WriteRequest>();
                sampleTableItems.Add(new WriteRequest
                {
                    PutRequest = new PutRequest { Item = item1 }
                });
                sampleTableItems.Add(new WriteRequest
                {
                    PutRequest = new PutRequest { Item = item2 }
                });
                sampleTableItems.Add(new WriteRequest
                {
                    DeleteRequest = new DeleteRequest { Key = keyToDelete1 }
                });

                #endregion

                #region BatchWrite Sample 2

                // Create key for item to delete from second table
                //  Hash-key of the target item is string value "Francis Scott Key Fitzgerald"
                Dictionary<string, AttributeValue> keyToDelete2 = new Dictionary<string, AttributeValue>
                {
                    { "Author", new AttributeValue { S = "Francis Scott Key Fitzgerald" } },
                };

                // Construct write-request for second table
                List<WriteRequest> authorsTableItems = new List<WriteRequest>();
                authorsTableItems.Add(new WriteRequest
                {
                    DeleteRequest = new DeleteRequest { Key = keyToDelete2 }
                });

                #endregion

                #region BatchWrite Sample 3

                // Create a client
                AmazonDynamoDBClient client = new AmazonDynamoDBClient();

                // Construct table-keys mapping
                Dictionary<string, List<WriteRequest>> requestItems = new Dictionary<string, List<WriteRequest>>();
                requestItems["SampleTable"] = sampleTableItems;
                requestItems["AuthorsTable"] = authorsTableItems;

                BatchWriteItemRequest request = new BatchWriteItemRequest { RequestItems = requestItems };
                BatchWriteItemResult result;
                do
                {
                    // Issue request and retrieve items
                    result = client.BatchWriteItem(request);

                    // Some items may not have been processed!
                    //  Set RequestItems to the result's UnprocessedItems and reissue request
                    request.RequestItems = result.UnprocessedItems;

                } while (result.UnprocessedItems.Count > 0);

                #endregion
            }
        }
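The samples above assume each batch already fits within <i>BatchWriteItem</i>'s limit of 25 put or delete requests per call; a larger workload must be split first. A hypothetical helper for that chunking step (the class and method names are not part of the SDK):

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2.Model;

public static class BatchChunker
{
    public const int MaxBatchSize = 25; // BatchWriteItem limit per call

    // Split an arbitrarily long list of WriteRequests into batches of at most 25.
    public static List<List<WriteRequest>> Chunk(List<WriteRequest> requests)
    {
        var batches = new List<List<WriteRequest>>();
        for (int i = 0; i < requests.Count; i += MaxBatchSize)
        {
            int count = Math.Min(MaxBatchSize, requests.Count - i);
            batches.Add(requests.GetRange(i, count));
        }
        return batches;
    }
}
```

For example, 60 requests would come back as three batches of 25, 25, and 10; each batch can then be sent through the retry loop shown in the samples. Note this only enforces the request-count limit, not the 16 MB total-size limit.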
 /// <summary>
 /// Initiates the asynchronous execution of the BatchWriteItem operation.
 /// </summary>
 /// 
 /// <param name="request">Container for the necessary parameters to execute the BatchWriteItem operation on AmazonDynamoDBClient.</param>
 /// <param name="callback">An Action delegate that is invoked when the operation completes.</param>
 /// <param name="options">A user-defined state object that is passed to the callback procedure. Retrieve this object from within the callback
 ///          procedure using the AsyncState property.</param>
 public void BatchWriteItemAsync(BatchWriteItemRequest request, AmazonServiceCallback<BatchWriteItemRequest, BatchWriteItemResponse> callback, AsyncOptions options = null)
 {
     options = options ?? new AsyncOptions();
     var marshaller = new BatchWriteItemRequestMarshaller();
     var unmarshaller = BatchWriteItemResponseUnmarshaller.Instance;
     Action<AmazonWebServiceRequest, AmazonWebServiceResponse, Exception, AsyncOptions> callbackHelper = null;
     if (callback != null)
     {
         callbackHelper = (AmazonWebServiceRequest req, AmazonWebServiceResponse res, Exception ex, AsyncOptions ao) =>
         {
             var responseObject = new AmazonServiceResult<BatchWriteItemRequest, BatchWriteItemResponse>(
                 (BatchWriteItemRequest)req, (BatchWriteItemResponse)res, ex, ao.State);
             callback(responseObject);
         };
     }
     BeginInvoke<BatchWriteItemRequest>(request, marshaller, unmarshaller, options, callbackHelper);
 }
 /// <summary>
 /// The <i>BatchWriteItem</i> operation puts or deletes multiple items in one or more
 /// tables. A single call to <i>BatchWriteItem</i> can write up to 16 MB of data, which
 /// can comprise as many as 25 put or delete requests. Individual items to be written
 /// can be as large as 400 KB.
 /// 
 ///  <note> 
 /// <para>
 /// <i>BatchWriteItem</i> cannot update items. To update items, use the <i>UpdateItem</i>
 /// API.
 /// </para>
 ///  </note> 
 /// <para>
 /// The individual <i>PutItem</i> and <i>DeleteItem</i> operations specified in <i>BatchWriteItem</i>
 /// are atomic; however <i>BatchWriteItem</i> as a whole is not. If any requested operations
 /// fail because the table's provisioned throughput is exceeded or an internal processing
 /// failure occurs, the failed operations are returned in the <i>UnprocessedItems</i>
 /// response parameter. You can investigate and optionally resend the requests. Typically,
 /// you would call <i>BatchWriteItem</i> in a loop. Each iteration would check for unprocessed
 /// items and submit a new <i>BatchWriteItem</i> request with those unprocessed items
 /// until all items have been processed.
 /// </para>
 ///  
 /// <para>
 /// Note that if <i>none</i> of the items can be processed due to insufficient provisioned
 /// throughput on all of the tables in the request, then <i>BatchWriteItem</i> will return
 /// a <i>ProvisionedThroughputExceededException</i>.
 /// </para>
 ///  <important> 
 /// <para>
 /// If DynamoDB returns any unprocessed items, you should retry the batch operation on
 /// those items. However, <i>we strongly recommend that you use an exponential backoff
 /// algorithm</i>. If you retry the batch operation immediately, the underlying read or
 /// write requests can still fail due to throttling on the individual tables. If you delay
 /// the batch operation using exponential backoff, the individual requests in the batch
 /// are much more likely to succeed.
 /// </para>
 ///  
 /// <para>
 /// For more information, go to <a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations">Batch
 /// Operations and Error Handling</a> in the <i>Amazon DynamoDB Developer Guide</i>.
 /// </para>
 ///  </important> 
 /// <para>
 /// With <i>BatchWriteItem</i>, you can efficiently write or delete large amounts of data,
 /// such as from Amazon Elastic MapReduce (EMR), or copy data from another database into
 /// DynamoDB. In order to improve performance with these large-scale operations, <i>BatchWriteItem</i>
 /// does not behave in the same way as individual <i>PutItem</i> and <i>DeleteItem</i>
 /// calls would. For example, you cannot specify conditions on individual put and delete
 /// requests, and <i>BatchWriteItem</i> does not return deleted items in the response.
 /// </para>
 ///  
 /// <para>
 /// If you use a programming language that supports concurrency, such as Java, you can
 /// use threads to write items in parallel. Your application must include the necessary
 /// logic to manage the threads. With languages that don't support threading, such as
 /// PHP, you must update or delete the specified items one at a time. In both situations,
 /// <i>BatchWriteItem</i> provides an alternative where the API performs the specified
 /// put and delete operations in parallel, giving you the power of the thread pool approach
 /// without having to introduce complexity into your application.
 /// </para>
 ///  
 /// <para>
 /// Parallel processing reduces latency, but each specified put and delete request consumes
 /// the same number of write capacity units whether it is processed in parallel or not.
 /// Delete operations on nonexistent items consume one write capacity unit.
 /// </para>
 ///  
 /// <para>
 /// If one or more of the following is true, DynamoDB rejects the entire batch write operation:
 /// </para>
 ///  <ul> <li> 
 /// <para>
 /// One or more tables specified in the <i>BatchWriteItem</i> request do not exist.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// Primary key attributes specified on an item in the request do not match those in the
 /// corresponding table's primary key schema.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// You try to perform multiple operations on the same item in the same <i>BatchWriteItem</i>
 /// request. For example, you cannot put and delete the same item in the same <i>BatchWriteItem</i>
 /// request. 
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// There are more than 25 requests in the batch.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// Any individual item in a batch exceeds 400 KB.
 /// </para>
 ///  </li> <li> 
 /// <para>
 /// The total request size exceeds 16 MB.
 /// </para>
 ///  </li> </ul>
 /// </summary>
 /// <param name="requestItems">A map of one or more table names and, for each table, a list of operations to be performed (<i>DeleteRequest</i> or <i>PutRequest</i>). Each element in the map consists of the following: <ul> <li> <i>DeleteRequest</i> - Perform a <i>DeleteItem</i> operation on the specified item. The item to be deleted is identified by a <i>Key</i> subelement: <ul> <li> <i>Key</i> - A map of primary key attribute values that uniquely identify the item. Each entry in this map consists of an attribute name and an attribute value. For each primary key, you must provide <i>all</i> of the key attributes. For example, with a hash type primary key, you only need to provide the hash attribute. For a hash-and-range type primary key, you must provide <i>both</i> the hash attribute and the range attribute. </li> </ul> </li> <li> <i>PutRequest</i> - Perform a <i>PutItem</i> operation on the specified item. The item to be put is identified by an <i>Item</i> subelement: <ul> <li> <i>Item</i> - A map of attributes and their values. Each entry in this map consists of an attribute name and an attribute value. Attribute values must not be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests that contain empty values will be rejected with a <i>ValidationException</i> exception. If you specify any attributes that are part of an index key, then the data types for those attributes must match those of the schema in the table's attribute definition. </li> </ul> </li> </ul></param>
 /// 
 /// <returns>The response from the BatchWriteItem service method, as returned by DynamoDB.</returns>
 /// <exception cref="Amazon.DynamoDBv2.Model.InternalServerErrorException">
 /// An error occurred on the server side.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ItemCollectionSizeLimitExceededException">
 /// An item collection is too large. This exception is only returned for tables that have
 /// one or more local secondary indexes.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ProvisionedThroughputExceededException">
 /// The request rate is too high, or the request is too large, for the available throughput
 /// to accommodate. The AWS SDKs automatically retry requests that receive this exception;
 /// therefore, your request will eventually succeed, unless the request is too large or
 /// your retry queue is too large to finish. Reduce the frequency of requests by using
 /// the strategies listed in <a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#APIRetries">Error
 /// Retries and Exponential Backoff</a> in the <i>Amazon DynamoDB Developer Guide</i>.
 /// </exception>
 /// <exception cref="Amazon.DynamoDBv2.Model.ResourceNotFoundException">
 /// The operation tried to access a nonexistent table or index. The resource might not
 /// be specified correctly, or its status might not be <code>ACTIVE</code>.
 /// </exception>
 public void BatchWriteItemAsync(Dictionary<string, List<WriteRequest>> requestItems, AmazonServiceCallback<BatchWriteItemRequest, BatchWriteItemResponse> callback, AsyncOptions options = null)
 {
     var request = new BatchWriteItemRequest();
     request.RequestItems = requestItems;
     BatchWriteItemAsync(request, callback, options);
 }
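A sketch of how the callback overload above might be invoked. The table name, key schema, and the assumption that the result object exposes `Exception` and `Response` properties are illustrative; check the `AmazonServiceResult` type in your SDK version:

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class CallbackSample
{
    public static void DeleteOneItem(AmazonDynamoDBClient client)
    {
        // "SampleTable" and its single hash key "Author" are illustrative assumptions.
        var writes = new Dictionary<string, List<WriteRequest>>
        {
            {
                "SampleTable", new List<WriteRequest>
                {
                    new WriteRequest
                    {
                        DeleteRequest = new DeleteRequest
                        {
                            Key = new Dictionary<string, AttributeValue>
                            {
                                { "Author", new AttributeValue { S = "Mark Twain" } }
                            }
                        }
                    }
                }
            }
        };

        // The callback runs when the service call completes, successfully or not.
        client.BatchWriteItemAsync(writes, result =>
        {
            if (result.Exception != null)
                Console.Error.WriteLine(result.Exception.Message);
            else if (result.Response.UnprocessedItems.Count > 0)
                Console.WriteLine("Some items were not processed; resubmit them with backoff.");
        });
    }
}
```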