Client: applicationautoscaling
Application Auto Scaling¶
Description¶
With Application Auto Scaling, you can configure automatic scaling for the following resources:
- Amazon AppStream 2.0 fleets
- Amazon Aurora Replicas
- Amazon Comprehend document classification and entity recognizer endpoints
- Amazon DynamoDB tables and global secondary indexes throughput capacity
- Amazon ECS services
- Amazon ElastiCache for Redis clusters (replication groups)
- Amazon EMR clusters
- Amazon Keyspaces (for Apache Cassandra) tables
- Lambda function provisioned concurrency
- Amazon Managed Streaming for Apache Kafka broker storage
- Amazon Neptune clusters
- Amazon SageMaker endpoint variants
- Amazon SageMaker inference components
- Amazon SageMaker serverless endpoint provisioned concurrency
- Spot Fleets (Amazon EC2)
- Pool of WorkSpaces
- Custom resources provided by your own applications or services
To learn more about Application Auto Scaling, see the Application Auto Scaling User Guide.
API Summary
The Application Auto Scaling service API includes three key sets of actions:
- Register and manage scalable targets - Register Amazon Web Services or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
- Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
- Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the register_scalable_target API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling (see the sketch after this list).
Usage¶
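The client constructor follows the standard paws pattern (a sketch consistent with the Arguments and Service syntax sections below):
applicationautoscaling(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)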
Arguments¶
config
Optional configuration of credentials, endpoint, and/or region.
credentials:
creds:
access_key_id: AWS access key ID
secret_access_key: AWS secret access key
session_token: AWS temporary session token
profile: The name of a profile to use. If not given, then the default profile is used.
anonymous: Set anonymous credentials.
endpoint: The complete URL to use for the constructed client.
region: The AWS Region used in instantiating the client.
close_connection: Immediately close all HTTP connections.
timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
credentials
Optional credentials shorthand for the config parameter
creds:
access_key_id: AWS access key ID
secret_access_key: AWS secret access key
session_token: AWS temporary session token
profile: The name of a profile to use. If not given, then the default profile is used.
anonymous: Set anonymous credentials.
endpoint
Optional shorthand for complete URL to use for the constructed client.
region
Optional shorthand for AWS Region used in instantiating the client.
Value¶
A client for the service. You can call the service's operations using syntax like svc$operation(...), where svc is the name you've assigned to the client. The available operations are listed in the Operations section.
Service syntax¶
svc <- applicationautoscaling(
config = list(
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string",
close_connection = "logical",
timeout = "numeric",
s3_force_path_style = "logical",
sts_regional_endpoint = "string"
),
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string"
)
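For instance, a minimal client might be constructed with only a region (the region value here is an illustrative assumption); all other settings fall back to the defaults described under Arguments:
svc <- applicationautoscaling(
  config = list(
    region = "us-east-1"
  )
)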
Operations¶
- delete_scaling_policy: Deletes the specified scaling policy for an Application Auto Scaling scalable target
- delete_scheduled_action: Deletes the specified scheduled action for an Application Auto Scaling scalable target
- deregister_scalable_target: Deregisters an Application Auto Scaling scalable target when you have finished using it
- describe_scalable_targets: Gets information about the scalable targets in the specified namespace
- describe_scaling_activities: Provides descriptive information about the scaling activities in the specified namespace from the previous six weeks
- describe_scaling_policies: Describes the Application Auto Scaling scaling policies for the specified service namespace
- describe_scheduled_actions: Describes the Application Auto Scaling scheduled actions for the specified service namespace
- list_tags_for_resource: Returns all the tags on the specified Application Auto Scaling scalable target
- put_scaling_policy: Creates or updates a scaling policy for an Application Auto Scaling scalable target
- put_scheduled_action: Creates or updates a scheduled action for an Application Auto Scaling scalable target
- register_scalable_target: Registers or updates a scalable target, which is the resource that you want to scale
- tag_resource: Adds or edits tags on an Application Auto Scaling scalable target
- untag_resource: Deletes tags from an Application Auto Scaling scalable target
Examples¶
## Not run:
svc <- applicationautoscaling()
# This example deletes a scaling policy for the Amazon ECS service called
# web-app, which is running in the default cluster.
svc$delete_scaling_policy(
PolicyName = "web-app-cpu-lt-25",
ResourceId = "service/default/web-app",
ScalableDimension = "ecs:service:DesiredCount",
ServiceNamespace = "ecs"
)
## End(Not run)
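As a further illustration (not from the original documentation; the service name, capacity limits, and policy settings are assumed for the example), a scalable target could be registered and given a target tracking scaling policy like this:
## Not run:
# Register the ECS service as a scalable target with capacity limits.
svc$register_scalable_target(
  ServiceNamespace = "ecs",
  ResourceId = "service/default/web-app",
  ScalableDimension = "ecs:service:DesiredCount",
  MinCapacity = 1,
  MaxCapacity = 10
)

# Attach a target tracking policy that keeps average CPU around 50%.
svc$put_scaling_policy(
  PolicyName = "web-app-cpu-target-tracking",
  ServiceNamespace = "ecs",
  ResourceId = "service/default/web-app",
  ScalableDimension = "ecs:service:DesiredCount",
  PolicyType = "TargetTrackingScaling",
  TargetTrackingScalingPolicyConfiguration = list(
    TargetValue = 50,
    PredefinedMetricSpecification = list(
      PredefinedMetricType = "ECSServiceAverageCPUUtilization"
    )
  )
)
## End(Not run)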