ABD201 - Big Data Architectural Patterns and Best Practices on AWS
In this session, we simplify big data processing as a data bus comprising various stages: collect, store, process, analyze, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
ABD202 - Best Practices for Building Serverless Big Data Applications
Serverless technologies let you build and scale applications and services rapidly without the need to provision or manage servers. In this session, we show you how to incorporate serverless concepts into your big data architectures. We explore the concepts behind and benefits of serverless architectures for big data, looking at design patterns to ingest, store, process, and visualize your data. Along the way, we explain when and how you can use serverless technologies to streamline data processing, minimize infrastructure management, and improve agility and robustness and share a reference architecture using a combination of cloud and open source technologies to solve your big data problems. Topics include: use cases and best practices for serverless big data applications; leveraging AWS technologies such as Amazon DynamoDB, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon Athena, and Amazon EMR; and serverless ETL, event processing, ad hoc analysis, and real-time analytics.
ABD203 - Real-Time Streaming Applications on AWS: Patterns and Use Cases
To win in the marketplace, businesses need to be able to use both live and historical data, continuously and in real time; they need streaming analytics. In this session, you learn best practices for implementing simple to advanced real-time streaming data use cases on AWS. First, we review decision points on batch versus real-time scenarios. Next, we take a look at streaming data architecture patterns that include Amazon Kinesis, Spark Streaming on Amazon EMR, and other open source libraries. Finally, we dive deep into the most common of these patterns and cover design and implementation considerations.
ABD204 - Accelerate Value from Your Big Data, Predictive Analytics, and IoT Initiatives with 1/10th the Effort — Real World Examples with C3 IoT and AWS
Many organizations are awash in different types of data yet struggle to utilize these assets to benefit their customers and optimize their operations. This session explores best practices from actual deployments at leading global organizations that have unlocked value with C3 IoT and AWS, including: improved fraud identification; reduced equipment downtime; improved lives through addressing pharmaceutical drug dependency; and significantly reduced energy costs and greenhouse gas emissions alongside improved sustainability reporting.
All organizations face challenges associated with: 1) unifying and federating disparate and ever-growing data sets, 2) efficiently mining these data for valuable insights using machine learning and other advanced analytic techniques, 3) operationalizing these insights into front-line applications, and finally, 4) driving change management. These challenges quickly overwhelm typical IT systems and approaches, and require a software platform that applies big data, predictive analytics, and IoT to deliver a new generation of predictive applications.
With C3 IoT and AWS, organizations have proven they can solve real-world problems in a fraction of the time that traditional approaches require. Armed with actionable predictions and access to all their data, these entities can prioritize and optimize resources across their value chains to enhance operational efficiency, enable new services, and better serve constituents.
Session sponsored by C3 IoT
ABD205 - Taking a Page Out of Ivy Tech’s Book: Using Data for Student Success
Data speaks. Discover how Ivy Tech, the nation's largest singly accredited community college, uses AWS to gather, analyze, and take action on student behavioral data for the betterment of over 3,100 students. This session outlines the process from inception to implementation across the state of Indiana and highlights how Ivy Tech's model can be applied to your own complex business problems.
ABD206 - Best Practices for Data Discovery & Visualization using Amazon QuickSight
Data visualization and collaboration of insights represent the last mile of business analytics. We all look for patterns to gain insights, patterns that are often not evident when we simply look at raw data. The right visualizations allow you to quickly gain a deep understanding of the data, along with the trends and anomalies within it. In this session, we take an in-depth look at how you can connect to various data sources (in the cloud or on-premises), create datasets, set up scheduled refreshes to keep the datasets updated, and build charts and dashboards for collaboration. During each step, we demonstrate the various implementation options available to you in Amazon QuickSight.
ABD207 - Leveraging AWS to Fight Financial Crime and Protect National Security
Banks aren’t known to share data and collaborate with one another. But that is exactly what the Mid-Sized Bank Coalition of America (MBCA) is doing to fight digital financial crime—and protect national security.
Using the AWS Cloud, the MBCA developed a shared data analytics utility that processes terabytes of non-competitive customer account, transaction, and government risk data. The intelligence produced from the data helps banks increase the efficiency of their operations, cut labor and operating costs, and reduce false positive volumes.
The collective intelligence also allows greater enforcement of Anti-Money Laundering (AML) regulations by helping members detect internal risks—and identify the challenges to detecting these risks in the first place. These improvements not only help banks build better relationships with regulatory agencies, but also aid law enforcement in identifying suspicious activity and tracing money trails to target and defuse threats.
This session demonstrates how the AWS Cloud supports the MBCA to deliver advanced data analytics, provide consistent operating models across financial institutions, reduce costs, and strengthen national security.
Session sponsored by Accenture
ABD301 - Analyzing Streaming Data in Real Time with Amazon Kinesis
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. In this session, we present an end-to-end streaming data solution using Kinesis Streams for data ingestion, Kinesis Analytics for real-time processing, and Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system.
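The ingestion step described above can be sketched in a few lines. The helper below only shapes events into the PutRecords request format; field names such as `user_id` are illustrative, and the actual call would go through a Kinesis client such as boto3's `put_records`:

```python
import json

def build_put_records_batch(events, partition_key_field="user_id"):
    # Records sharing a partition key land on the same shard, which
    # preserves per-key ordering within the stream.
    return [
        {
            "Data": json.dumps(event).encode("utf-8"),
            "PartitionKey": str(event[partition_key_field]),
        }
        for event in events
    ]

events = [{"user_id": 7, "action": "click"}, {"user_id": 9, "action": "view"}]
batch = build_put_records_batch(events)
```

Choosing a high-cardinality partition key such as a user ID spreads load evenly across shards, which matters for the throughput and cost estimation discussed in the session.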
ABD302 - Real-Time Data Exploration and Analytics with Amazon Elasticsearch Service and Kibana
In this session, we use Apache web logs as an example and show you how to build an end-to-end analytics solution. First, we cover how to configure an Amazon ES cluster and ingest data using Amazon Kinesis Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data. Then we demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we review approaches for generating custom, ad-hoc reports.
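Index rotation, mentioned above, is commonly implemented by writing each day's documents to a date-stamped index, so that retention becomes a cheap index deletion rather than per-document cleanup. A minimal sketch of the naming scheme (the prefix and retention window are illustrative):

```python
from datetime import date, timedelta

def rotated_index_name(prefix, day):
    # One index per day; dropping old data becomes an index delete.
    return f"{prefix}-{day:%Y.%m.%d}"

def indexes_to_retain(prefix, today, retention_days):
    # Names of the daily indexes still inside the retention window.
    return [rotated_index_name(prefix, today - timedelta(days=d))
            for d in range(retention_days)]

names = indexes_to_retain("apache-logs", date(2017, 11, 27), 3)
```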
ABD303 - Developing a Data Insights Platform – Sysco’s Journey from Disparate Systems to Data Lake and Beyond
Sysco has nearly 200 operating companies throughout the United States, Canada, Central and South America, and Europe. As the global leader in food services, Sysco identified the need to streamline the collection, transformation, and presentation of data produced by the distributed units and systems into a consolidated location as a single source of truth. Sysco's business intelligence team addressed these requirements by creating a data lake with scalable analytics and query engines built on AWS. In this session, Sysco will discuss the architecture, challenges, and lessons learned deploying a self-service data platform based on their data lake. They will walk through the design patterns they used and how they architected the solution to provide predictive analytics using Amazon Redshift Spectrum, Amazon S3, Amazon EMR, AWS Glue, Amazon Elasticsearch Service, and other AWS services.
ABD304 - Best Practices for Data Warehousing with Amazon Redshift & Redshift Spectrum
Most companies are overrun with data, yet they lack critical insights to make timely and accurate business decisions. They are missing the opportunity to combine large amounts of new, unstructured big data that resides outside their data warehouse with trusted, structured data inside their data warehouse. In this session, we take an in-depth look at how modern data warehousing blends and analyzes all your data, inside and outside your data warehouse without moving the data, to give you deeper insights to run your business. We will cover best practices on how to design optimal schemas, load data efficiently, and optimize your queries to deliver high throughput and performance.
ABD305 - Design Patterns and Best Practices for Data Analytics with Amazon EMR
Amazon EMR is one of the largest Hadoop operators in the world, enabling customers to run ETL, machine learning, real-time processing, data science, and low-latency SQL at petabyte scale. In this session, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters, and other Amazon EMR architectural best practices. We talk about lowering cost with Auto Scaling and Spot Instances, and security best practices for encryption and fine-grained access control. Finally, we dive into some of our recent launches to keep you current on our latest features.
ABD307 - Deep Analytics for Global AWS Marketing Organization
To meet the needs of the global marketing organization, the AWS marketing analytics team built a scalable platform that allows the data science team to deliver custom econometric and machine learning models for end user self-service. To meet data security standards, we use end-to-end data encryption and different AWS services such as Amazon Redshift, Amazon RDS, Amazon S3, Amazon EMR with Apache Spark and Auto Scaling. In this session, you see real examples of how we have scaled and automated critical analysis, such as calculating the impact of marketing programs like re:Invent and prioritizing leads for our sales teams.
ABD401 - How Netflix Monitors Applications in Real Time with Amazon Kinesis
Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights.
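The enrichment step described above is, at its core, a lookup join between raw flow records and an index of instance metadata. A simplified sketch of the idea (the field names and registry are illustrative, not Netflix's actual schema):

```python
def enrich_flow(flow, instance_index):
    # Attach application metadata to a raw flow log record by looking
    # up source and destination IPs in an instance registry.
    enriched = dict(flow)
    enriched["src_app"] = instance_index.get(flow["src_ip"], "unknown")
    enriched["dst_app"] = instance_index.get(flow["dst_ip"], "unknown")
    return enriched

registry = {"10.0.0.5": "edge-proxy", "10.0.1.9": "playback-api"}
record = enrich_flow(
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "bytes": 512},
    registry,
)
```

Aggregating the enriched (src_app, dst_app) pairs over time is what makes a service dependency map fall out of plain network logs.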
ALX301 - 5 AWS Services That Will Supercharge Your Alexa Skill
The most engaging Alexa skills have fresh content, continuous improvement, and personalized voice experiences. Learn how to improve your customer’s experience by making use of AWS services like Amazon S3, AWS IoT, Amazon Polly, Amazon API Gateway, and Amazon DynamoDB.
ALX302 - Alexa and IoT: Now with 100% More Voice
ALX303 - The Art and Science of Conversation Applied to Alexa Skills
It used to be the case that we only spoke to computers in their language. But more and more often, we’re interacting with them in ours. We are moving quickly into a world of computer conversation, and one in which, for many applications, the most natural interactions will be through spoken language. But how do you create engaging narrative and compelling, organic conversational interactions using the imprecise tools of speech recognition and intent resolution? In this session, we will look at the experience as a whole and take you through key learnings that you can use when building your skills. We cover issues like knowing your audience, creating compelling storylines, using a cast of characters, integrating voiceover, designing a soundscape, and finding those “magic moments”. For each of these, we share the design pattern, the backing AI or psychological science, and how to implement the experience with Alexa.
ALX401 - Advanced Alexa Skill Building: Conversation and Memory
This session walks you through some of the more advanced features offered in Alexa Skill Builder, like Dialog Management, Entity Resolution, state management, session persistence, and maintaining context. Using Dialog Management, you can engage skill users in a multi-turn dialog to elicit and confirm slots for an intent. Using Entity Resolution, you can greatly simplify slot management by mapping multiple synonyms of your slot to a unique ID. We couple these conversational techniques with the management of session state and persistence to enable memory and personalization.
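Entity Resolution surfaces the matched canonical ID inside the slot's resolutions block of the incoming skill request. A small sketch of reading it, with a fallback to the raw spoken value when no synonym matched (the sample slot payload is illustrative):

```python
def resolved_slot_id(slot):
    # Walk the entity resolution results attached to a slot; return the
    # canonical ID on a match, else fall back to the literal utterance.
    resolutions = slot.get("resolutions", {})
    for authority in resolutions.get("resolutionsPerAuthority", []):
        if authority["status"]["code"] == "ER_SUCCESS_MATCH":
            return authority["values"][0]["value"]["id"]
    return slot.get("value")

slot = {
    "value": "soda",
    "resolutions": {"resolutionsPerAuthority": [{
        "status": {"code": "ER_SUCCESS_MATCH"},
        "values": [{"value": {"name": "soft drink", "id": "DRINK_SOFT"}}],
    }]},
}
canonical = resolved_slot_id(slot)
```

Collapsing "soda", "pop", and "soft drink" onto one ID like this is what keeps slot-handling logic from multiplying with every synonym.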
AMF301 - Big Data & Analytics for Manufacturing Operations
Manufacturing companies collect vast troves of process data for tracking purposes. Using this data with advanced analytics can optimize operations, saving time and money. In this session, we explore the latest analytics capabilities to support your goals for optimizing the manufacturing plant floor. Learn how to build dashboards that connect to prediction models driven by sensors across manufacturing processes. Learn how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, AWS Identity and Access Management, and AWS Lambda. We also review a reference architecture that supports data ingestion, event rules, analytics, and the use of machine learning for manufacturing analytics.
AMF302 - Alexa, Where’s My Car? A Test Drive of the AWS Connected Car Reference
Today's trends in auto technology are all about connecting cars and their occupants to the outside world in a seamless and safe manner. In this session, we discuss how automotive companies are leveraging AWS for a variety of connected vehicle use cases. You'll leave this session with source code, architecture diagrams, and an understanding of how to apply the AWS Connected Vehicle Reference Architecture to build your own prototypes. You'll also learn how car companies can leverage Amazon services such as Alexa and AWS services such as AWS IoT, AWS Greengrass, AWS Lambda, Amazon API Gateway, and AWS Mobile Hub to rapidly develop and deploy innovative connected vehicle services.
AMF303 - Deep Dive into the Connected Vehicle Reference Architecture
At this fast-paced, interactive workshop, get hands-on with live data streaming from an actual car driving the streets of Las Vegas. Explore AWS IoT services, common patterns, and best practices for processing IoT data, and deploy a reference architecture to begin consuming and analyzing connected vehicle data in your own AWS account. Walk away from this workshop with the knowledge needed to connect your own vehicle to the cloud.
ARC201 - Scaling up to Your First 10 Million Users
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from one to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
ARC202 - Maximizing Your Move to AWS: 5 Key Lessons Learned from Cloud Technology Partners (CTP) and Broadridge
Hear from CTP’s Robert Christiansen and John Treadway on how to maximize the value of your AWS initiative. From building a Minimum Viable Cloud to establishing a robust cloud security and compliance posture, we’ll walk you through key client success stories and lessons learned. We also explore how CTP is helping Broadridge (the leading provider of investor communications and technology) to leverage AWS to delight customers, drive new revenue streams, and transform their business.
Session Sponsored by: CTP
ARC203 - How to Successfully Exploit the Power of the Matrix
The Leading Edge Forum (LEF) has labelled the synergistic combination of cloud computing and machine intelligence (MI) as ‘the Matrix’: the combination of cloud services such as IaaS, IoT, MI, and edge computing. For companies to thrive, they need to know the answers to the following questions: How are successful companies harnessing the power of the Matrix? How do they structure their organizations? What makes them so agile? How do they attract and retain skilled employees? LEF studies successful businesses and learns what makes them great. Our 6-month research program has dived deep with multiple AWS customers to understand not only their use of the technology, but also the business transformation program that allowed them to maximize the value that AWS provides. Attend this session to learn more about the research that has been done, client examples, observations that the LEF has made, and how these can be used to help drive your transformation program.
Session sponsored by DXC Technology
ARC204 - VMware Cloud on AWS: A World of Unique Integrations Between VMware and AWS
VMware Cloud on AWS brings VMware’s enterprise class Software-Defined Data Center software to the AWS Cloud, and enables customers to run production applications across vSphere-based private, public, and hybrid cloud environments. Delivered, sold, and supported by VMware as an on-demand service, customers can also leverage AWS services including storage, databases, analytics, and more. With the same architecture and operational experience on-premises and in the cloud, IT teams can now quickly derive instant business value from the AWS and VMware hybrid cloud experience.
Session sponsored by VMware
ARC206 - Disney’s Magic – A cloud transformation strategy in play
Creating a comprehensive, accelerated cloud strategy for a complex or federated organization requires a disciplined approach—one that balances the need for centralized governance with the opportunity to innovate across all engineering segments within the enterprise.
In this session, we will follow the Walt Disney Company’s journey to create an initial cloud value hypothesis and cloud business case, and then develop a structured approach towards cloud migrations and a "cloud-first" operating model.
Attendees will learn more about the key implications, risks and considerations of the company’s cloud transformation program; see examples of reference architectures and implementation guides; and understand the required activities that contributed to the success of the program. The patterns presented will be broadly applicable to complex organizations with global aspirations to make the journey to AWS cloud.
Session sponsored by Accenture
ARC207 - Monitoring Performance of Enterprise Applications on AWS: Understanding the Dynamic Nature of Cloud Computing
Applications running in a typical data center are static entities, but applications aren't static in the cloud. Dynamic scaling and resource allocation are the norm on AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility in building dynamic applications, and with this flexibility comes an opportunity to learn how an enterprise application functions optimally.
New Relic helps manage these applications without sacrificing simplicity.
In this session, we discuss changes in monitoring dynamic cloud resources. We'll share best practices we’ve learned working with New Relic customers on managing applications running in this environment to understand and optimize how they are performing.
Session sponsored by New Relic
ARC301 - Fitch Ratings: Migrating to the Cloud to Transform Business Services Delivery
This session is aimed at those who want to learn how a large enterprise organization adapted their teams, tooling, and methodology to successfully migrate and operate business critical functions in the cloud. You also learn how the organization used migration as a launchpad for additional innovation.
Session Sponsored by Cloudreach
ARC303 - Running Lean Architectures: How to Optimize for Cost Efficiency
Whether you’re a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We cover how to effectively combine Amazon EC2 On-Demand, Reserved, and Spot Instances to handle different use cases; leveraging Auto Scaling to match capacity to workload; and choosing the optimal instance type through load testing. We discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely by going serverless. Even if you already enjoy the benefits of serverless architectures, we show you how to select the optimal AWS Lambda memory setting and how to maximize networking throughput in order to minimize Lambda runtime and therefore execution cost. We also showcase simple tools to help track and manage costs, including Cost Explorer, billing alerts, and AWS Trusted Advisor. This session is your pocket guide for running cost effectively in the AWS Cloud.
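The Lambda memory-versus-cost tradeoff mentioned above comes down to GB-second arithmetic: duration is billed in 100 ms increments, so a larger memory size that shortens runtime can cost the same or less. A rough sketch of the calculation (the per-GB-second rate is the published 2017 figure, used here purely for illustration):

```python
import math

def lambda_compute_cost(invocations, avg_duration_ms, memory_mb,
                        price_per_gb_second=0.00001667):
    # Duration is rounded up to the next 100 ms increment, and cost
    # scales linearly with the memory allocated to the function.
    billed_seconds = math.ceil(avg_duration_ms / 100) * 0.1
    gb_seconds = invocations * billed_seconds * (memory_mb / 1024)
    return gb_seconds * price_per_gb_second

# 512 MB finishing in 90 ms beats 256 MB taking 400 ms for the same work.
small = lambda_compute_cost(1_000_000, 400, 256)
large = lambda_compute_cost(1_000_000, 90, 512)
```

The takeaway is to load test across memory sizes rather than assume the smallest allocation is cheapest, since CPU scales with memory and can collapse the billed duration.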
ARC304 - From One to Many: Evolving VPC Design
As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows this evolution of a single regional VPC into a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to Amazon S3, managing multi-tenant VPCs, integrating existing customer networks through AWS Direct Connect, and building a full VPC mesh network across global regions.
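Before meshing VPCs across regions as described above, the CIDR plan has to be conflict-free, since overlapping address ranges cannot be routed between peered VPCs. Python's standard ipaddress module makes the check trivial (the example blocks are illustrative):

```python
import ipaddress

def non_overlapping(cidrs):
    # Overlapping ranges cannot be routed between peered VPCs, so a
    # multi-VPC mesh needs every pair of CIDR blocks to be disjoint.
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return all(not a.overlaps(b)
               for i, a in enumerate(nets) for b in nets[i + 1:])

plan = ["10.0.0.0/16", "10.1.0.0/16", "10.2.0.0/16"]
```

Running this kind of check against on-premises ranges as well avoids painful re-addressing when AWS Direct Connect links are added later.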
ARC305 - How Toyota Racing Development Uses Amazon CloudFront, AWS CloudFormation, and Amazon ECS in Motorsports
Toyota Racing Development (TRD) uses AWS technology—including Amazon CloudFront, AWS CloudFormation, Amazon ECR, Amazon ECS, Elastic Load Balancing, Auto Scaling, AWS KMS, Amazon SNS, AWS Lambda, and Amazon Elasticsearch Service—with Slack to create a reliable, fast, and highly available infrastructure. TRD is a fast-paced environment with tight deadlines that require frequent and precise deployments before and during every weekend's race. With TRD’s containerized architecture, we can get new developers set up and writing code in a matter of minutes, not hours. Developers use the same architecture to develop locally as in production, which allows us to create reliable and repeatable builds for any environment. Using AWS technologies, we have been able to greatly improve our security by creating a separation of concerns for each environment using VPCs, security groups, and AWS KMS to provide strong yet easy-to-implement encryption.
ARC306 - High Resiliency & Availability of PlayStation Communities Using Multiple AWS Regions
With the increasing social footprint of PlayStation fans comes the responsibility of keeping community services highly available and resilient. With 200,000 users visiting PlayStation Communities every minute and 2.5 million active communities, our main requirement was to have isolated services per region so that we can serve our customers in each region in under 100 milliseconds. In this session, you learn how the PlayStation engineering team successfully migrated seven interconnected community services from single to multiple regions by using a combination of home-grown and AWS tools with no downtime.
ARC307 - From Monolithic to Microservices: Evolving Architecture Patterns in the Cloud
Gilt, a global e-commerce company, implemented a sophisticated microservices architecture on AWS to handle millions of customers visiting their site at noon every day. The microservices architecture pattern enables independent service scaling, faster deployments, better fault isolation, and graceful degradation. In this session, Adrian Trenaman, SVP of Engineering at Gilt, shares Gilt's experiences and lessons learned during their evolution from a single monolithic Rails application in a traditional data center to more than 300 Scala/Java microservices deployed in the cloud. Derek Chiles, AWS Solutions Architect, reviews current best practices and recommended architectures for deploying microservices on AWS.
ARC308 - Leveraging a Cloud Policy Framework - From Zero to Well Governed
After working with several hundred enterprise class companies globally, we’ve learned that governing cloud infrastructure at scale requires software that enables an organization to capture and drive management from their internal policies, best practices, and reference architectures. A policy-driven management and governance strategy is critical for successfully operating in cloud and hybrid environments. As your infrastructure grows, it can also be important to leverage knowledge that extends beyond the organization. An open source “cloud policy framework” enables users to leverage a community that can help define and tune best practice policies and, at the same time, help SaaS vendors and ISVs capture the best way to manage an application and share it with customers. The bottom line: a well-defined management and governance strategy gives you the ability to put automation in place that keeps your cloud running securely and efficiently without having to take it on as a full-time job.
This session discusses the development of a “cloud policy framework” that enables users to leverage open source rule definitions by which organizations can govern their cloud. We will discuss what it takes to go from zero to well governed, including best practice policies for managing all aspects of services, applications, and infrastructure across cost, availability, performance, security, and usage.
Session sponsored by CloudHealth Technologies
ARC401 - Serverless Architectural Patterns and Best Practices
As serverless architectures become more popular, customers need a framework of patterns to help them identify how they can leverage AWS to deploy their workloads without managing servers or operating systems. This session describes reusable serverless patterns while considering costs. For each pattern, we provide operational and security best practices and discuss potential pitfalls and nuances. We also discuss the considerations for moving an existing server-based workload to a serverless architecture. The patterns use services like AWS Lambda, Amazon API Gateway, Amazon Kinesis Streams, Amazon Kinesis Analytics, Amazon DynamoDB, Amazon S3, AWS Step Functions, AWS Config, AWS X-Ray, and Amazon Athena. This session can help you recognize candidates for serverless architectures in your own organizations and understand areas of potential savings and increased agility. What’s new in 2017: using X-Ray in Lambda for tracing and operational insight; a pattern for high performance computing (HPC) using Lambda at scale; how ad hoc queries can be achieved using Athena; Step Functions as a way to handle orchestration for both the Automation and Batch patterns; a pattern for security automation using AWS Config rules to detect and automatically remediate violations of security standards; how to validate API parameters in API Gateway to protect your API back ends; and a solid focus on CI/CD development pipelines for serverless that includes testing, deploying, and versioning (SAM tools).
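The Step Functions orchestration pattern mentioned above is driven by an Amazon States Language document. A minimal sketch chaining two Lambda tasks with a retry, built as a Python dict (the function ARNs are placeholders):

```python
import json

# Minimal Amazon States Language definition: two Lambda tasks in
# sequence, with a retry on the first to absorb transient failures.
state_machine = {
    "StartAt": "Extract",
    "States": {
        "Extract": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                       "IntervalSeconds": 2, "MaxAttempts": 3}],
            "Next": "Load",
        },
        "Load": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:load",
            "End": True,
        },
    },
}
definition = json.dumps(state_machine)  # payload for CreateStateMachine
```

Pushing retries and sequencing into the state machine keeps the Lambda functions themselves stateless and single-purpose, which is the core of the orchestration pattern.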
ATC301 - Developing a Well-Architected RTB Application
Real-time bidding applications are designed for very high scale and performance. A typical RTB deployment handles at least a million queries per second, with TP99 query processing latency of 25 ms. In this session, we first cover the end-to-end architecture of a real-time bidder application on AWS. Next, we talk about the challenges and best practices for implementing a durable and high-performance system. Finally, we provide some recommendations on minimizing infrastructure cost while operating at a very large scale.
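TP99, used above as the latency target, is simply the 99th percentile of observed query latencies. A nearest-rank sketch for computing it from samples (the sample values are illustrative):

```python
import math

def percentile(samples, p):
    # Nearest-rank percentile: the value at or below which p percent
    # of samples fall. This is the usual way TP99 is reported.
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [5, 7, 8, 9, 10, 11, 12, 14, 18, 40]
tp99 = percentile(latencies_ms, 99)
```

Note how the tail dominates: the median here is 10 ms while TP99 is 40 ms, which is why RTB systems budget against percentiles rather than averages.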
ATC302 - Building an Adtech Platform with Machine Learning on AWS
Learn how adtech platforms can use machine learning to predict the best budget allocation, channel allocation, and creative mix to control supply and demand for scaling marketplaces. Adtech platforms are highly automated and global from day one. They’re built heavily on AWS Lambda, AWS CloudFormation, AWS Elastic Beanstalk, Amazon Machine Learning (Amazon ML), Amazon EMR, Amazon Redshift, Amazon S3, and Amazon Aurora. A very small team can achieve all this with AWS.
ATC303 - Cache Me If You Can: Minimizing Latency While Optimizing Cost Through Advanced Caching Strategies
From Amazon CloudFront to Amazon ElastiCache to Amazon DynamoDB Accelerator (DAX), this is your one-stop shop to learn all your caching needs. What data do you cache and why? What are common side effects and pitfalls when caching? What is negative caching and how can it help you maximize your cache hit rate? How do you use DAX in practice? How can you ensure that data in your cache always stays current? We discuss these and many more topics in depth. We also share lessons learned and best practices from global real-time bidding architectures.
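Negative caching, raised above, means caching the fact that a lookup found nothing, so repeated misses don't hammer the backing store. A minimal in-memory TTL cache sketch illustrating the idea (this is a generic sketch, not any specific AWS API):

```python
import time

class TTLCache:
    # Caching a "not found" marker (negative caching) stops repeated
    # misses for the same key from hitting the backing store.
    MISS = object()  # sentinel for negative entries

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value_or_MISS, stored_at)

    def get(self, key, loader):
        entry = self._store.get(key)
        if entry is not None and self.clock() - entry[1] < self.ttl:
            return entry[0]
        value = loader(key)  # may return None, meaning "not found"
        self._store[key] = (self.MISS if value is None else value,
                            self.clock())
        return self._store[key][0]

calls = []
def slow_lookup(key):
    calls.append(key)
    return None  # simulate "not in the database"

cache = TTLCache(ttl_seconds=60)
first = cache.get("user:42", slow_lookup)
second = cache.get("user:42", slow_lookup)
```

The loader runs once despite two lookups; a shorter TTL for negative entries than positive ones is a common refinement so newly created items appear promptly.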
BAP201 - Realizing the Benefits of the AWS Cloud: Confident Decision Making Based on Insights and Control
To realize the benefits of the breadth and depth of services offered by the AWS Cloud, you need comprehensive visibility, contract flexibility, and full control over how your organization accesses the cloud environment. We outline our approach to building the enterprise-ready framework and intelligent platform that integrates the AWS Cloud seamlessly into your IT landscape. We show you how to drive costs down by matching system availability and SLAs to your differing environments and their requirements. We explore best practices in assigning systems to roles, and explain our methodology for improving efficiencies through automation.
Session sponsored by Capgemini
BAP401 - Delighting Customers Through Device Data with Salesforce and AWS
The Internet of Things produces vast quantities of data that promises a deep, always connected view into customer experiences through customer devices. But how do you ingest data at massive scale and develop meaningful insights about your customers? In this session, you learn how Salesforce IoT Cloud Einstein works in concert with the AWS IoT engine to ingest and transform all of the data generated by customers, partners, devices, and sensors into meaningful action. See also how our customers are using Salesforce and AWS together to augment massive quantities of device data with business insight, using simple, intuitive tools, to engage proactively with their customers in real time.
Session sponsored by Salesforce
CMP201 - Auto Scaling: The Fleet Management Solution for Planet Earth
Auto Scaling allows cloud resources to scale automatically in reaction to the dynamic needs of customers. This session shows how Auto Scaling offers an advantage to everyone, whether it's basic fleet management to keep instances healthy as an Amazon EC2 best practice, or dynamic scaling to manage extremes. We share examples of how Auto Scaling helps customers of all sizes and industries unlock use cases and value. We also discuss how Auto Scaling is evolving to scale different types of elastic AWS resources beyond EC2 instances. NASA Jet Propulsion Laboratory (JPL) and California Institute of Technology share how Auto Scaling is used to scale science data processing of Interferometric Synthetic Aperture Radar (InSAR) data from earth-observing satellite missions. At the same time, Auto Scaling reduces these teams' response times during hazard response events such as those from earthquakes, floods, and volcanoes. JPL also discusses how they are integrating their science data systems with the AWS ecosystem to expand into NASA's next two large-scale missions with remote-sensing radar-based observations. Learn how Auto Scaling is being used at a global scale, and beyond!
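The dynamic-scaling idea above can be sketched as a simple proportional rule: if the tracked metric (say, average CPU) is twice the target, roughly double the fleet. This is a simplification for illustration; the real Auto Scaling target-tracking algorithm also handles warm-up, cooldowns, and metric noise.

```python
import math

def desired_capacity(current_capacity, current_metric, target_metric,
                     min_size=1, max_size=100):
    """Sketch of the proportional rule behind target-tracking scaling.
    Simplified illustration, not the actual Auto Scaling algorithm."""
    if current_metric <= 0:
        return max(min_size, min(current_capacity, max_size))
    raw = current_capacity * (current_metric / target_metric)
    # round up (better to briefly over-provision than under-provision),
    # then clamp to the group's configured size bounds
    return max(min_size, min(math.ceil(raw), max_size))
```

For example, a 10-instance fleet running at 80% average CPU against a 40% target would scale to 20 instances under this rule.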
CMP202 - Getting the Most Bang for Your Buck with #EC2 #Winning
Amazon EC2 provides you the flexibility to cost-optimize your computing portfolio through purchasing models that fit your business needs. With the flexibility of mix-and-match purchasing models, you can grow your compute capacity and throughput and enable new types of cloud computing applications with the lowest TCO. In this session, we explore combining pay-as-you-go (On-Demand), reserve ahead of time for discounts (Reserved), and high-discount spare capacity (Spot) models for purchasing instances. We show how to optimize costs while maintaining high performance and availability for your applications. We look at common application examples to demonstrate how to best combine EC2's purchasing models. You'll leave the session with best practices you can immediately apply to your application portfolio.
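Mixing the three purchasing models comes down to simple blended-cost arithmetic: steady baseline load on Reserved Instances, predictable peaks On-Demand, and interruption-tolerant work on Spot. The prices below are hypothetical placeholders, not real AWS rates.

```python
def blended_hourly_cost(fleet):
    """Compute total and per-instance blended hourly cost for a mixed
    EC2 fleet. fleet maps purchase model -> (instance count, $/hour)."""
    total_instances = sum(count for count, _ in fleet.values())
    total_cost = sum(count * price for count, price in fleet.values())
    return total_cost, total_cost / total_instances

# Hypothetical fleet: steady base on Reserved, daily peak On-Demand,
# fault-tolerant batch work on Spot. Prices are made-up placeholders.
fleet = {
    "reserved":  (10, 0.060),   # (instance count, $/hour)
    "on_demand": (4,  0.100),
    "spot":      (6,  0.030),
}
```

In this made-up example the 20-instance fleet blends to well under the On-Demand rate per instance, which is the core of the "mix and match" argument.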
CMP203 - Amazon EC2 Foundations
Amazon EC2 changes the economics of computing and provides you complete control of your computing resources. It's designed to make web-scale cloud computing easier for developers. In this session, we take you on a journey, starting with the basics of key management and security groups. The journey continues with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies. We also discuss tools and best practices to help you build failure-resilient applications that take advantage of the scale and robustness of AWS Regions.
CMP204 - How Netflix Tunes Amazon EC2 Instances for Performance
At Netflix, we make the best use of Amazon EC2 instance types and features to create a high-performance cloud, achieving near bare-metal speed for our workloads. This session summarizes the configuration, tuning, and activities for delivering the fastest possible EC2 instances, and helps you improve performance, reduce latency outliers, and make better use of EC2 features. We show how to choose EC2 instance types, how to choose between Xen modes (HVM, PV, or PVHVM), and the importance of EC2 features such as SR-IOV for bare-metal performance. We also cover basic and advanced kernel tuning and monitoring, including the use of Java and Node.js flame graphs and performance counters.
CMP211 - Getting Started with Serverless Architectures
Serverless architectures let you build and deploy applications and services with infrastructure resources that require zero administration. In the past, you had to provision and scale servers to run your application code, install and operate distributed databases, and build and run custom software to handle API requests. Now, AWS provides a stack of scalable, fully managed services that eliminates these operational complexities. In this session, you learn about the concepts and benefits of serverless architectures and the basics of the serverless stack AWS provides (for example, AWS Lambda and Amazon API Gateway). We discuss use cases such as data processing, website backends, serverless applications, and "operational glue." After that, you get practical tips and tricks, best practices, and architecture patterns that you can take back and implement immediately.
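The building block of the serverless stack described above is a function handler: AWS Lambda invokes your code with an event and a context, and you return a response. The handler signature below follows the documented Python Lambda convention, and the response shape follows the API Gateway proxy integration; the greeting logic itself is just an illustrative placeholder.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: event dict in, response dict out.
    The statusCode/headers/body response shape is what the API Gateway
    proxy integration expects."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

There are no servers to manage here: Lambda provisions capacity per invocation, and API Gateway maps HTTP requests onto the `event` dict.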
CMP301 - Deep Dive on Amazon EC2 Instances, Featuring Performance Optimization Best Practices
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
CMP319 - Building Distributed Applications with AWS Step Functions
AWS Step Functions is a new, fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Step Functions provides a reliable way to coordinate components and step through the functions of your application. A graphical console helps you arrange and visualize the components of your application as a series of steps. Step Functions automatically triggers and tracks each step and retries when there are errors so that your application executes in order―and as expected―every time. This session shows how to use Step Functions to create, run, and debug multi-service applications in a matter of minutes. We also share how customers are using Step Functions to reliably build and scale multi-step applications such as order processing, report generation, and data transformation―and to innovate faster.
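The automatic retry behavior mentioned above is declared per state in the Amazon States Language using `IntervalSeconds`, `BackoffRate`, and `MaxAttempts` (these are the actual ASL field names). The sketch below simply computes the exponential wait schedule those fields imply; it is an illustration of the semantics, not the Step Functions service itself.

```python
def retry_intervals(interval_seconds=1, backoff_rate=2.0, max_attempts=3):
    """Return the seconds to wait before each retry attempt, mirroring
    a Step Functions Retry block: the first retry waits IntervalSeconds,
    and each subsequent wait is multiplied by BackoffRate."""
    return [interval_seconds * (backoff_rate ** attempt)
            for attempt in range(max_attempts)]
```

So a retrier declared with `"IntervalSeconds": 2, "BackoffRate": 2.0, "MaxAttempts": 3` waits 2, 4, then 8 seconds before giving up and (optionally) falling through to a `Catch` block.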
CMP323 - Introducing AWS Batch: Easy and Efficient Batch Computing on AWS
AWS Batch is a fully managed service that enables developers, scientists, and engineers to easily and efficiently run batch computing workloads of any scale on AWS. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, you don't need to install or manage batch computing software, so you can focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2, EC2 Spot Instances, and AWS Lambda. AWS Batch reduces operational complexities, saving time and reducing costs. In this session, Principal Product Managers Jamie Kinney and Dougal Ballantyne describe the core concepts behind AWS Batch and the details of how the service functions. The presentation concludes with relevant use cases and sample code.
CON201 - Containers - State of the Union
Just over four years after the first public release of Docker, and three years to the day after the launch of Amazon EC2 Container Service, containers have surged in adoption and now run a significant percentage of production workloads at startups and enterprise organizations. Join Deepak Singh, General Manager of Amazon Container Services, as we cover the state of containerized application development and deployment trends, new container capabilities on AWS available now, running containerized applications on AWS, and how AWS customers successfully run container workloads.