C# Client Library
A C# Client Library for the AnalyzeRe REST API
Parameters to be used in a Large Data Upload operation. More...
Public Types

enum BinaryYELTUploadOptions { AutomaticBinaryConversionAndCompression, AutomaticCompressionOnly, None }
    Options for controlling whether to automatically convert a YELTLossSet's data to the binary format during upload. More...
Public Member Functions

Parameters ()
    Construct a new set of Default Large Data Upload Parameters.

Parameters (int? min_chunk_size=null, int? max_chunk_size=null, int? max_retries_per_chunk=null, int? chunk_timeout=null, PollingOptions commit_polling_options=null, HandleUploadInSessionStrategy? handle_existing_upload_strategy=null, bool? enable_compression=null)
    Construct a new set of Large Data Upload Parameters.
Properties

BinaryYELTUploadOptions binary_yelt_options [get, set]
    Options for controlling whether to automatically convert a YELTLossSet's data to the binary format during upload. By default, the assumption is that CSV data was provided and all conversion should be done on your behalf. Set to None if the file you're uploading is already in binary format and gzipped.

int chunk_timeout = 1000 * 60 [get, set]
    The timeout (in milliseconds) for a single chunk upload to complete. Default: 60,000 ms (1 minute).

PollingOptions commit_polling_options = DefaultCommitPollingOptions [get, set]
    Determines how the system polls the status of the data file while waiting for the data to be processed. By default, these options use exponential back-off and time out if the system hasn't processed the file in 3,600,000 ms (1 hour).

static Parameters Default [get]
    Get a new set of Default Large Data Upload Parameters.

static PollingOptions DefaultCommitPollingOptions [get]
    The default PollingOptions used when none are specified in an AnalyzeRe.LargeDataUpload.Parameters instance. By default, limits the maximum polling time to 1 hour. All other parameters use the default values for PollingOptions.

bool enable_compression [get, set]
    Whether to compress (gzip) the data during upload. Default: false. Set to true if the files you're uploading are quite large and/or you wish to decrease network usage. In practice, if your upload speed exceeds about 5 MiB/s, the overhead of compression and decompression on the server outweighs the reduction in uploaded data.

HandleUploadInSessionStrategy handle_existing_upload_strategy [get, set]
    The HandleUploadInSessionStrategy to employ if an existing upload session is already in progress. Default: RaiseError.

int max_chunk_size = 16 * 1024 * 1024 [get, set]
    The maximum size of a single uploaded chunk in bytes. Default: 2^24 bytes (16 MiB).

int max_retries_per_chunk = 3 [get, set]
    The maximum number of retries for a failed chunk upload before an error is thrown. Default: 3 retries.

int min_chunk_size = 4 * 1024 * 1024 [get, set]
    The minimum size of a single uploaded chunk in bytes. Set this larger if there is enough latency between you and the server that smaller chunk requests decrease your throughput. Set this smaller if your upload stream is expensive to produce and you want to reduce the time before the first chunk is sent over the network. Default: 2^22 bytes (4 MiB).
Parameters to be used in a Large Data Upload operation.
Definition at line 6 of file Parameters.cs.
Options for controlling whether to automatically convert a YELTLossSet's data to the binary format during upload.
Definition at line 66 of file Parameters.cs.
inline |
Construct a new set of Default Large Data Upload Parameters.
Definition at line 89 of file Parameters.cs.
inline |
Construct a new set of Large Data Upload Parameters.
Parameters:
    min_chunk_size: The minimum size of a single uploaded chunk in bytes. See min_chunk_size for details and defaults.
    max_chunk_size: The maximum size of a single uploaded chunk in bytes. See max_chunk_size for details and defaults.
    max_retries_per_chunk: The maximum number of retries for a failed chunk upload before an error is thrown. See max_retries_per_chunk for details and defaults.
    chunk_timeout: The timeout (in milliseconds) for a single chunk upload to complete. See chunk_timeout for details and defaults.
    commit_polling_options: Determines how the system polls the status of the data file while waiting for the data to be processed. See commit_polling_options for details and defaults.
    handle_existing_upload_strategy: The HandleUploadInSessionStrategy to employ if an existing upload session is already in progress. See handle_existing_upload_strategy for details and defaults.
    enable_compression: Whether to compress (gzip) the data during upload. See enable_compression for details and defaults.
Definition at line 110 of file Parameters.cs.
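For illustration, a minimal sketch of calling this constructor with named arguments (all arguments are optional; the values shown are arbitrary examples, not recommendations):

```csharp
using AnalyzeRe.LargeDataUpload;

// Sketch: tune chunking and retry behavior for an unreliable connection.
// Unspecified arguments keep the documented defaults.
var parameters = new Parameters(
    min_chunk_size: 8 * 1024 * 1024,   // 8 MiB minimum chunk
    max_retries_per_chunk: 5,          // retry each failed chunk up to 5 times
    chunk_timeout: 2 * 60 * 1000,      // 2-minute per-chunk timeout
    enable_compression: true);         // gzip data during upload
```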
getset |
Options for controlling whether to automatically convert a YELTLossSet's data to the binary format during upload. By default, the assumption is that CSV data was provided and all conversion should be done on your behalf. Set to None if the file you're uploading is already in binary format and gzipped.
Definition at line 82 of file Parameters.cs.
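For example, when the file on disk is already a gzipped binary YELT, conversion can be skipped entirely. A sketch, assuming the type and property names documented above:

```csharp
using AnalyzeRe.LargeDataUpload;

// The file being uploaded is already binary and gzipped,
// so no conversion or additional compression is needed.
var parameters = Parameters.Default;
parameters.binary_yelt_options = BinaryYELTUploadOptions.None;
```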
getset |
The timeout (in milliseconds) for a single chunk upload to complete. Default: 60,000 ms (1 minute).
Definition at line 41 of file Parameters.cs.
getset |
Determines how the system polls the status of the data file while waiting for the data to be processed. By default, these options use exponential back-off and time out if the system hasn't processed the file in 3,600,000 ms (1 hour).
Definition at line 47 of file Parameters.cs.
staticget |
Get a new set of Default Large Data Upload Parameters.
Definition at line 18 of file Parameters.cs.
staticget |
The default PollingOptions used when none are specified in an AnalyzeRe.LargeDataUpload.Parameters instance. By default, limits the maximum polling time to 1 hour. All other parameters use the default values for PollingOptions.
Definition at line 12 of file Parameters.cs.
getset |
Whether to compress (gzip) the data during upload. Default: false. Set to true if the files you're uploading are quite large and/or you wish to decrease network usage. In practice, if your upload speed exceeds about 5 MiB/s, the overhead of compression and decompression on the server outweighs the reduction in uploaded data.
Definition at line 59 of file Parameters.cs.
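As a sketch, compression can be enabled on an otherwise-default parameter set, which is worthwhile mainly on connections uploading well below ~5 MiB/s:

```csharp
using AnalyzeRe.LargeDataUpload;

// Trade client/server CPU time for reduced network usage on a slow uplink.
var parameters = Parameters.Default;
parameters.enable_compression = true;
```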
getset |
The HandleUploadInSessionStrategy to employ if an existing upload session is already in progress. Default: RaiseError.
Definition at line 51 of file Parameters.cs.
getset |
The maximum size of a single uploaded chunk in bytes. Default: 2^24 bytes (16 MiB).
Note that server-side chunk processing has roughly O(n^2) complexity, where n is the chunk size. A 128 MiB chunk can result in a timeout, and chunk sizes of 32-64 MiB are very slow to process.
Definition at line 33 of file Parameters.cs.
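Given the quadratic processing cost noted above, chunk sizes are best kept at or below the 16 MiB default. A sketch lowering both chunk-size bounds (the values are illustrative, not recommendations):

```csharp
using AnalyzeRe.LargeDataUpload;

// Smaller chunks: slow upload streams start sending sooner, and each
// chunk stays well within the per-chunk timeout.
var parameters = Parameters.Default;
parameters.min_chunk_size = 2 * 1024 * 1024; // 2 MiB
parameters.max_chunk_size = 8 * 1024 * 1024; // 8 MiB
```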
getset |
The maximum number of retries for a failed chunk upload before an error is thrown. Default: 3 retries.
Definition at line 37 of file Parameters.cs.
getset |
The minimum size of a single uploaded chunk in bytes. Set this larger if there is enough latency between you and the server that smaller chunk requests decrease your throughput. Set this smaller if your upload stream is expensive to produce and you want to reduce the time before the first chunk is sent over the network. Default: 2^22 bytes (4 MiB).
Definition at line 27 of file Parameters.cs.