View More
View Less
System Message
An unknown error has occurred and your request could not be completed. Please contact support.
You've been added as a Walk-up
Personal Calendar
Conference Event
There aren't any available sessions at this time.
Conflict Found
This session is already scheduled at another time. Would you like to...
Please enter a maximum of {0} characters.
{0} remaining of {1} character maximum.
Please enter a maximum of {0} words.
{0} remaining of {1} word maximum.
must be 50 characters or less.
must be 40 characters or less.
Session Summary
We were unable to load the map image.
This has not yet been assigned to a map.
Search Catalog
Replies ({0})
New Post
Microblog Thread
Post Reply
Your session timed out.
Meeting Summary

I'm interested in this
I'm no longer interested

ABD201 - Big Data Architectural Patterns and Best Practices on AWS In this session, we simplify big data processing as a data bus comprising various stages: collect, store, process, analyze, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost. Breakout Session
ABD202 - Best Practices for Building Serverless Big Data Applications Serverless technologies let you build and scale applications and services rapidly without the need to provision or manage servers. In this session, we show you how to incorporate serverless concepts into your big data architectures. We explore the concepts behind and benefits of serverless architectures for big data, looking at design patterns to ingest, store, process, and visualize your data. Along the way, we explain when and how you can use serverless technologies to streamline data processing, minimize infrastructure management, and improve agility and robustness and share a reference architecture using a combination of cloud and open source technologies to solve your big data problems. Topics include: use cases and best practices for serverless big data applications; leveraging AWS technologies such as Amazon DynamoDB, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon Athena, and Amazon EMR; and serverless ETL, event processing, ad hoc analysis, and real-time analytics. Breakout Session
ABD203 - Real-Time Streaming Applications on AWS: Use Cases and Patterns To win in the marketplace and provide differentiated customer experiences, businesses need to be able to use live data in real time to facilitate fast decision making. In this session, you learn common streaming data processing use cases and architectures. First, we give an overview of streaming data and AWS streaming data capabilities. Next, we look at a few customer examples and their real-time streaming applications. Finally, we walk through common architectures and design patterns of top streaming data use cases. Breakout Session
ABD205 - Taking a Page Out of Ivy Tech’s Book: Using Data for Student Success Data speaks. Discover how Ivy Tech, the nation's largest singly accredited community college, uses AWS to gather, analyze, and take action on student behavioral data for the betterment of over 3,100 students. This session outlines the process from inception to implementation across the state of Indiana and highlights how Ivy Tech's model can be applied to your own complex business problems. Breakout Session
ABD206 - Building Visualizations and Dashboards with Amazon QuickSight Just as a picture is worth a thousand words, a visual is worth a thousand data points. A key aspect of our ability to gain insights from our data is to look for patterns, and these patterns are often not evident when we simply look at data in tables. The right visualization will help you gain a deeper understanding in a much quicker timeframe. In this session, we will show you how to quickly and easily visualize your data using Amazon QuickSight. We will show you how you can connect to data sources, generate custom metrics and calculations, create comprehensive business dashboards with various chart types, and set up filters and drill-downs to slice and dice the data. Breakout Session
ABD207 - Leveraging AWS to Fight Financial Crime and Protect National Security Banks aren’t known to share data and collaborate with one another. But that is exactly what the Mid-Sized Bank Coalition of America (MBCA) is doing to fight digital financial crime—and protect national security. Using the AWS Cloud, the MBCA developed a shared data analytics utility that processes terabytes of non-competitive customer account, transaction, and government risk data. The intelligence produced from the data helps banks increase the efficiency of their operations, cut labor and operating costs, and reduce false positive volumes. The collective intelligence also allows greater enforcement of Anti-Money Laundering (AML) regulations by helping members detect internal risks—and identify the challenges to detecting these risks in the first place. This session demonstrates how the AWS Cloud supports the MBCA to deliver advanced data analytics, provide consistent operating models across financial institutions, reduce costs, and strengthen national security. Session sponsored by Accenture Breakout Session
ABD208 - Gain real-time insights from your data at scale using Splunk and AWS AWS and Splunk are partnering to build a fully managed integration that allows you to ingest, transform, and analyze data in real time. Join us to find out more about how you can ingest AWS data at scale and leverage the power of the Splunk platform to gain real-time insights. Session sponsored by Splunk Breakout Session
ABD209 - Accelerating the Speed of Innovation with a Data Sciences Data & Analytics Hub at Takeda Historically, silos of data, analytics, and processes across functions, stages of development, and geography created a barrier to R&D efficiency. Gathering the right data necessary for decision-making was challenging due to issues of accessibility, trust, and timeliness. In this session, learn how Takeda is undergoing a transformation in R&D to increase the speed-to-market of high-impact therapies to improve patient lives. The Data and Analytics Hub was built, with Deloitte, to address these issues and support the efficient generation of data insights for functions such as clinical operations, clinical development, medical affairs, portfolio management, and R&D finance. In the AWS-hosted data lake, this data is processed, integrated, and made available to business end users through data visualization interfaces, and to data scientists through direct connectivity. Learn how Takeda has achieved significant time reductions—from weeks to minutes—to gather and provision data that has the potential to reduce cycle times in drug development. The hub also enables more efficient operations and alignment to achieve product goals through cross-functional team accountability and collaboration due to the ability to access the same cross-domain data. Session sponsored by Deloitte Breakout Session
ABD210 - Modernizing Amtrak: Serverless Solution for Real-Time Data Capabilities As the nation's only high-speed intercity passenger rail provider, Amtrak needs to know critical information to run their business, such as: Who’s onboard any train at any time? How are booking and revenue trending? Amtrak was faced with unpredictable and often slow response times from existing databases, ranging from seconds to hours; existing booking and revenue dashboards were spreadsheet-based and manual; multiple copies of data were stored in different repositories, lacking integration and consistency; and operations and maintenance (O&M) costs were relatively high. Join us as we demonstrate how Deloitte and Amtrak successfully went live with a cloud-native operational database and analytical datamart for near-real-time reporting in under six months. We highlight the specific challenges and the modernization of architecture on an AWS native Platform as a Service (PaaS) solution. The solution includes cloud-native components such as AWS Lambda for microservices, Amazon Kinesis and AWS Data Pipeline for moving data, Amazon S3 for storage, Amazon DynamoDB for a managed NoSQL database service, and Amazon Redshift for near-real-time reports and dashboards. Deloitte’s solution enabled “at scale” processing of 1 million transactions/day and up to 2K transactions/minute. It provided flexibility and scalability, largely eliminated the need for system management, and dramatically reduced operating costs. Moreover, it laid the groundwork for decommissioning legacy systems, anticipated to save at least $1M over 3 years. Session sponsored by Deloitte Breakout Session
ABD211 - Sysco’s Journey to Business Insight and Impact with Amazon Redshift This session details Sysco's journey from a company focused on hindsight-based reporting to one focused on insights and foresight. For this shift, Sysco moved from multiple data warehouses to an AWS ecosystem including Amazon Redshift, Amazon EMR, AWS Data Pipeline, and more. Working with Tableau, the team at Sysco gained agile insight across their business. Learn how Sysco decided to use AWS, how they scaled, and how they became more strategic with the AWS ecosystem and Tableau. Session sponsored by Tableau Breakout Session
ABD212 - SAP HANA: The Foundation of SAP’s Digital Core Learn how customers are leveraging AWS to better position their enterprises for the digital transformation journey. In this session, you hear about: operations and process; the SAP transformation journey, including architecting, migrating, and running SAP on AWS; complete automation and management of the AWS layer using AWS native services; and a customer example. We also discuss the challenges of migration to the cloud and a managed services environment; the benefits to the customer of the new operating model; and lessons learned. By the end of the session, you understand why you should consider AWS for your next SAP platform, how to get there when you are ready, and some best practices to manage your SAP systems on AWS. Session sponsored by DXC Technology Breakout Session
ABD213 - How to Tackle Your Dark Data Challenge with the AWS Glue Data Catalog As data volumes grow and customers store more data on AWS, they often have valuable data that is not easily discoverable and available for analytics. The AWS Glue Data Catalog provides a central view of your data across various data silos, making it readily available for analytics. We introduce key features of the AWS Glue Data Catalog and its use cases. Learn how crawlers can automatically discover your data, extract relevant metadata, and add it as table definitions to the AWS Glue Data Catalog. We also explore integrating the AWS Glue Data Catalog with Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Breakout Session
ABD214 - Real-time User Insights for Mobile and Web Applications with Amazon Pinpoint With customers demanding relevant and real-time experiences across a range of devices, digital businesses are looking to gather user data at scale, understand this data, and respond to customer needs instantly. This requires tools that can record large volumes of user data in a structured fashion, and then instantly make this data available to generate insights. In this session, we demonstrate how you can use Amazon Pinpoint to capture user data in a structured yet flexible manner. Further, we demonstrate how this data can be set up for instant consumption using services like Amazon Kinesis Firehose and Amazon Redshift. We walk through example data based on real world scenarios, to illustrate how Amazon Pinpoint lets you easily organize millions of events, record them in real-time, and store them for further analysis. Breakout Session
ABD217 - From Batch to Streaming: How Amazon Flex Uses Real-time Analytics to Deliver Packages on Time Reducing the time to get actionable insights from data is important to all businesses, and customers who employ batch data analytics tools are exploring the benefits of streaming analytics. Learn best practices to extend your architecture from data warehouses and databases to real-time solutions. Learn how to use Amazon Kinesis to get real-time data insights and integrate them with Amazon Aurora, Amazon RDS, Amazon Redshift, and Amazon S3. The Amazon Flex team describes how they used streaming analytics in their Amazon Flex mobile app, used by Amazon delivery drivers to deliver millions of packages each month on time. They discuss the architecture that enabled the move from a batch processing system to a real-time system, overcoming the challenges of migrating existing batch data to streaming data, and how to benefit from real-time analytics. Breakout Session
ABD218 - How EuroLeague Basketball Uses IoT Analytics to Engage Fans IoT and big data have made their way out of industrial applications, general automation, and consumer goods, and are now a valuable tool for improving consumer engagement across a number of industries, including media, entertainment, and sports. The low cost and ease of implementation of AWS analytics services and AWS IoT have allowed AGT, a leader in IoT, to develop their IoTA analytics platform. Using IoTA, AGT brought a tailored solution to EuroLeague Basketball for real-time content production and fan engagement during the 2017-18 season. In this session, we take a deep dive into how this solution is architected for secure, scalable, and highly performant data collection from athletes, coaches, and fans. We also talk about how the data is transformed into insights and integrated into a content generation pipeline. Lastly, we demonstrate how this solution can be easily adapted for other industries and applications. Breakout Session
ABD222 - How to Confidently Unleash Data to Meet the Needs of Your Entire Organization Where are you on the spectrum of IT leaders? Are you confident that you’re providing the technology and solutions that consistently meet or exceed the needs of your internal customers? Do your peers at the executive table see you as an innovative technology leader? Innovative IT leaders understand the value of getting data and analytics directly into the hands of decision makers, and into their own. In this session, Daren Thayne, Domo’s Chief Technology Officer, shares how innovative IT leaders are helping drive a culture change at their organizations. See how transformative it can be to have real-time access to all of the data that is relevant to YOUR job (including a complete view of your entire AWS environment), as well as understand how it can help you lead the way in applying that same pattern throughout your entire company. Session sponsored by Domo Breakout Session
ABD223 - IT Innovators: New Technology for Leveraging Data to Enable Agility, Innovation, and Business Optimization Companies of all sizes are looking for technology to efficiently leverage data and their existing IT investments to stay competitive and understand where to find new growth. Regardless of where companies are in their data-driven journey, they face greater demands for information by customers, prospects, partners, vendors, and employees. All stakeholders inside and outside the organization want information on demand or in “real time”, available anywhere on any device. They want to use it to optimize business outcomes without having to rely on complex software tools or human gatekeepers to relevant information. Learn how IT innovators at companies such as MasterCard, Jefferson Health, and TELUS are using Domo’s Business Cloud to help their organizations more effectively leverage data at scale. Session sponsored by Domo Breakout Session
ABD301 - Analyzing Streaming Data in Real Time with Amazon Kinesis Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. In this session, we present an end-to-end streaming data solution using Kinesis Streams for data ingestion, Kinesis Analytics for real-time processing, and Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system. Breakout Session
ABD302 - Real-Time Data Exploration and Analytics with Amazon Elasticsearch Service and Kibana In this session, we use Apache web logs as an example and show you how to build an end-to-end analytics solution. First, we cover how to configure an Amazon ES cluster and ingest data using Amazon Kinesis Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data. Then we demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we review approaches for generating custom, ad-hoc reports. Breakout Session
ABD303 - Developing an Insights Platform – Sysco’s Journey from Disparate Systems to Data Lake and Beyond Sysco has nearly 200 operating companies across its multiple lines of business throughout the United States, Canada, Central/South America, and Europe. As the global leader in food services, Sysco identified the need to streamline the collection, transformation, and presentation of data produced by the distributed units and systems into a central data ecosystem. Sysco's Business Intelligence and Analytics team addressed these requirements by creating a data lake with scalable analytics and query engines leveraging AWS services. In this session, Sysco will outline their journey from a hindsight-reporting-focused company to an insights-driven organization. They will cover solution architecture, challenges, and lessons learned from deploying a self-service insights platform. They will also walk through the design patterns they used and how they designed the solution to provide predictive analytics using Amazon Redshift Spectrum, Amazon S3, Amazon EMR, AWS Glue, Amazon Elasticsearch Service, and other AWS services. Breakout Session
ABD304 - Best Practices for Data Warehousing with Amazon Redshift & Redshift Spectrum Most companies are overrun with data, yet they lack critical insights to make timely and accurate business decisions. They are missing the opportunity to combine large amounts of new, unstructured big data that resides outside their data warehouse with trusted, structured data inside their data warehouse. In this session, we take an in-depth look at how modern data warehousing blends and analyzes all your data, inside and outside your data warehouse, without moving the data, to give you deeper insights to run your business. We will cover best practices on how to design optimal schemas, load data efficiently, and optimize your queries to deliver high throughput and performance. Breakout Session
ABD305 - Design Patterns and Best Practices for Data Analytics with Amazon EMR Amazon EMR is one of the largest Hadoop operators in the world, enabling customers to run ETL, machine learning, real-time processing, data science, and low-latency SQL at petabyte scale. In this session, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters, and other Amazon EMR architectural best practices. We talk about lowering cost with Auto Scaling and Spot Instances, and security best practices for encryption and fine-grained access control. Finally, we dive into some of our recent launches to keep you current on our latest features. Breakout Session
ABD307 - Deep Analytics for Global AWS Marketing Organization To meet the needs of the global marketing organization, the AWS marketing analytics team built a scalable platform that allows the data science team to deliver custom econometric and machine learning models for end user self-service. To meet data security standards, we use end-to-end data encryption and different AWS services such as Amazon Redshift, Amazon RDS, Amazon S3, Amazon EMR with Apache Spark and Auto Scaling. In this session, you see real examples of how we have scaled and automated critical analysis, such as calculating the impact of marketing programs like re:Invent and prioritizing leads for our sales teams. Breakout Session
ABD308 - Build a Serverless IoT Backend in Under 30 Minutes with AWS Lambda, Amazon Kinesis, and MongoDB Stitch In this session, we bring together AWS Lambda, Amazon Kinesis, and the MongoDB Stitch backend as a service in a serverless architecture for IoT that consumes, processes, and persists data in a fully managed database service. We demonstrate how to use Lambda functions triggered by updates made to the data, and cover best practices such as maximizing performance by keeping database endpoint connections hot in a mostly ephemeral architecture. Session sponsored by MongoDB Breakout Session
ABD309 - How Twilio Scaled Its Data-Driven Culture As a leading cloud communications platform, Twilio has always been strongly data-driven. But as headcount and data volumes grew—and grew quickly—they faced many new challenges. One-off, static reports work when you’re a small startup, but how do you support a growth-stage company to a successful IPO and beyond? Today, Twilio's data team relies on AWS and Looker to provide data access to 700 colleagues. Departments have the data they need to make decisions, and cloud-based scale means they get answers fast. Data delivers real business value at Twilio, providing a 360-degree view of their customer, product, and business. In this session, you hear firsthand stories directly from the Twilio data team and learn real-world tips for fostering a truly data-driven culture at scale. Session sponsored by Looker Breakout Session
ABD310 - How FINRA Secures Its Big Data and Data Science Platform on AWS FINRA uses big data and data science technologies to detect fraud, market manipulation, and insider trading across US capital markets. As a financial regulator, FINRA analyzes highly sensitive data, so information security is critical. Learn how FINRA secures its Amazon S3 Data Lake and its data science platform on Amazon EMR and Amazon Redshift, while empowering data scientists with tools they need to be effective. In addition, FINRA shares AWS security best practices, covering topics such as AMI updates, micro segmentation, encryption, key management, logging, identity and access management, and compliance. Breakout Session
ABD311 - Deploying Business Analytics at Enterprise Scale with Amazon QuickSight One of the biggest tradeoffs customers usually make when deploying BI solutions at scale is agility versus governance. Large-scale BI implementations with the right governance structure can take months to design and deploy. In this session, learn how you can avoid making this tradeoff using Amazon QuickSight. Learn how to easily deploy Amazon QuickSight to thousands of users using Active Directory and Federated SSO, while securely accessing your data sources in Amazon VPCs or on-premises. We also cover how to control access to your datasets, implement row-level security, create scheduled email reports, and audit access to your data. Breakout Session
ABD312 - Deep Dive: Migrating Big Data Workloads to AWS Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to AWS in order to save costs, increase availability, and improve performance. AWS offers a broad set of analytics services, including solutions for batch processing, stream processing, machine learning, data workflow orchestration, and data warehousing. This session will focus on identifying the components and workflows in your current environment, and on providing the best practices to migrate these workloads to the right AWS data analytics product. We will cover services such as Amazon EMR, Amazon Athena, Amazon Redshift, Amazon Kinesis, and more. We will also feature Vanguard, an American investment management company based in Malvern, Pennsylvania, with over $4.4 trillion in assets under management. Ritesh Shah, Sr. Program Manager for Cloud Analytics Program at Vanguard, will describe how they orchestrated their migration to AWS analytics services, including Hadoop and Spark workloads to Amazon EMR. Ritesh will highlight the technical challenges they faced and overcame along the way, as well as share common recommendations and tuning tips to accelerate the time to production. Breakout Session
ABD315 - Building Serverless ETL Pipelines with AWS Glue Organizations need to gain insight and knowledge from a growing number of Internet of Things (IoT), APIs, clickstreams, unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue, cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. We discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Breakout Session
ABD316 - American Heart Association: Finding Cures to Heart Disease Through the Power of Technology Combining disparate datasets and making them accessible to data scientists and researchers is a prevalent challenge for many organizations, not just in healthcare research. American Heart Association (AHA) has built a data science platform using Amazon EMR, Amazon Elasticsearch Service, and other AWS services, that corrals multiple datasets and enables advanced research on phenotype and genotype datasets, aimed at curing heart diseases. In this session, we present how AHA built this platform and the key challenges they addressed with the solution. We also provide a demo of the platform, and leave you with suggestions and next steps so you can build similar solutions for your use cases. Breakout Session
ABD327 - Migrating Your Traditional Data Warehouse to a Modern Data Lake In this session, we discuss the latest features of Amazon Redshift and Redshift Spectrum, and take a deep dive into its architecture and inner workings. We share many of the recent availability, performance, and management enhancements and how they improve your end user experience. You also hear from 21st Century Fox, who presents a case study of their fast migration from an on-premises data warehouse to Amazon Redshift. Learn how they are expanding their data warehouse to a data lake that encompasses multiple data sources and data formats. This architecture helps them tie together siloed business units and get actionable 360-degree insights across their consumer base. Breakout Session
ABD330 - Combining Batch and Stream Processing to Get the Best of Both Worlds Today, many architects and developers are looking to build solutions that integrate batch and real-time data processing, and deliver the best of both approaches. Lambda architecture (not to be confused with the AWS Lambda service) is a design pattern that leverages both batch and real-time processing within a single solution to meet the latency, accuracy, and throughput requirements of big data use cases. Come join us for a discussion on how to implement Lambda architecture (batch, speed, and serving layers) and best practices for data processing, loading, and performance tuning. Chalk Talk
ABD331 - Log Analytics at Expedia Using Amazon Elasticsearch Service Expedia uses Amazon Elasticsearch Service (Amazon ES) for a variety of mission-critical use cases, ranging from log aggregation to application monitoring and pricing optimization. In this session, the Expedia team reviews how they use Amazon ES and Kibana to analyze and visualize Docker startup logs, AWS CloudTrail data, and application metrics. They share best practices for architecting a scalable, secure log analytics solution using Amazon ES, so you can add new data sources almost effortlessly and get insights quickly. Breakout Session
ABD337 - Making the Shift from DevOps to Practical DevSecOps Agility is the cornerstone of the DevOps movement. Developers are working to continuously integrate and deploy (CI/CD) code to the cloud, to ensure applications are seamlessly updated and current. But what about security? Security best practices and compliance are now the responsibility of everyone in the development lifecycle, and continuous security is a critical component of the ongoing deployment process. Discover how to incorporate security best practices into your current DevOps operations, gain visibility into compliance posture, and identify potential risks and threats in your AWS environment. We demonstrate how to leverage the CIS AWS Foundations Benchmark within Sumo Logic to trigger alerts from your AWS CloudTrail and Amazon CloudWatch logs when risks or violations occur, such as unauthorized API calls, IAM policy changes, AWS Config configuration changes, and many more. Session sponsored by Sumo Logic Breakout Session
ABD401 - How Netflix Monitors Applications in Real Time with Amazon Kinesis Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights. Breakout Session
ABD402 - How Esri Optimizes Massive Image Archives for Analytics in the Cloud Petabyte-scale archives of satellite, plane, and drone imagery continue to grow exponentially. They mostly exist as semi-structured data, but they are only valuable when accessed and processed by a wide range of products for both visualization and analysis. This session provides an overview of how ArcGIS indexes and structures data so that any part of it can be quickly accessed, processed, and analyzed by reading only the minimum amount of data needed for the task. In this session, we share best practices for structuring and compressing massive datasets in Amazon S3, so they can be analyzed efficiently. We also review a number of different image formats, including GeoTIFF (used for the Public Datasets on AWS program, Landsat on AWS), cloud-optimized GeoTIFF, MRF, and CRF, as well as different compression approaches to show the effect on processing performance. Finally, we provide examples of how this technology has been used to help image processing and analysis for the response to Hurricane Harvey. Breakout Session
ABD403 - Best Practices for Distributed Machine Learning and Predictive Analytics Using Amazon EMR and Open-Source Tools In this session, we focus on common use cases and design patterns for predictive analytics using Amazon EMR. We address accessing data from a data lake, extraction and preprocessing with Apache Spark, analytics and machine learning code development with notebooks (Jupyter, Zeppelin), and data visualization using Amazon QuickSight. We cover other operational topics, such as deployment patterns for ad hoc exploration and batch workloads using Spot and multi-user notebooks. The intended audience for this session includes technical users who are building statistical and data analytics models for the business using tools such as Python, R, Spark, Presto, Amazon EMR, and notebooks. Breakout Session
ALX201 - It’s All Fun and Games with Alexa! Enriching Voice with Physical Gameplay Alexa is always getting smarter, but what about making it more fun? In this session, hear from the general manager of Alexa Gadgets—the team responsible for a new category of connected products and developer tools that enhance voice interactions with compatible Echo devices. This session will provide an overview of Alexa Gadgets and what it means to developers, detail the history of Amazon’s first reference product (Echo Buttons), and highlight the initial third-party integrations with Musicplode Media (the makers of Beat the Intro) and Gemmy Industries (the makers of Big Mouth Billy Bass). Breakout Session
ALX202 - Integrate Alexa voice technology into your product with the Alexa Voice Service (AVS) In this session, we’ll teach you how to use the Alexa Voice Service (AVS) and its suite of development tools to bring your first Alexa-enabled product to market. You’ll learn how commercial device manufacturers are getting to market faster using the new AVS Device SDK. To ensure your customers have the best voice experience, we’ll teach you how to choose an Audio Front End and client-side hardware from a range of commercial-grade Development Kits. You’ll walk out of this session with the knowledge required to design products with optimized Alexa-enabled voice experiences around your unique design requirements. Breakout Session
ALX203 - How Voice Technology Is Moving Higher Education to a New Era In this presentation, hear from John Rome, Arizona State University’s Deputy CIO, and Jared Stein, Instructure’s VP of Higher Ed Strategy, on how voice technology is bringing higher education to a new era. Come learn how institutions are adopting Alexa on campus and in their curriculum to serve students in new, innovative ways and how Instructure is rethinking the delivery of education for millions of customers through their Canvas skill for Alexa. Breakout Session
ALX301 - Five AWS Services to Supercharge Your Alexa Skills The most engaging Alexa skills have fresh content, continuous improvement, and personalized voice experiences. Learn how to improve your customer’s experience by making use of AWS services like Amazon S3, AWS IoT, Amazon Polly, Amazon API Gateway, and Amazon DynamoDB. Chalk Talk
ALX301-R - [REPEAT] Five AWS Services to Supercharge Your Alexa Skills The most engaging Alexa skills have fresh content, continuous improvement, and personalized voice experiences. Learn how to improve your customer’s experience by making use of AWS services like Amazon S3, AWS IoT, Amazon Polly, Amazon API Gateway, and Amazon DynamoDB. Chalk Talk
ALX302 - Alexa and AWS IoT: Now with 100% More Voice In this session, we show you how an Alexa skill can push information to a web page. The same technique could push data to a mobile application, Unity, or a Fire TV. Such a push might be used to update an order entry system, a dashboard, a chat-based system, a game, and so on. Learn how to use AWS IoT to set up a bidirectional communication channel between your Alexa skill and your applications. You'll also learn how an Alexa skill can send messages to AWS IoT and how an HTML/JavaScript application can receive these messages and act upon them. Chalk Talk
ALX302-R - [REPEAT] Alexa and AWS IoT: Now with 100% More Voice In this session, we show you how an Alexa skill can push information to a web page. The same technique could push data to a mobile application, Unity, or a Fire TV. Such a push might be used to update an order entry system, a dashboard, a chat-based system, a game, and so on. Learn how to use AWS IoT to set up a bidirectional communication channel between your Alexa skill and your applications. You'll also learn how an Alexa skill can send messages to AWS IoT and how an HTML/JavaScript application can receive these messages and act upon them. Chalk Talk
ALX303 - The Art and Science of Conversation Applied to Alexa Skills It used to be the case that we only spoke to computers in their language. But more and more often, we’re interacting with them in ours. We are moving quickly into a world of computer conversation, one in which, for many applications, the most natural interactions will be through spoken language. But how do you create engaging narrative and compelling, organic conversational interactions using the imprecise tools of speech recognition and intent resolution? In this session, we look at the experience as a whole and take you through key learnings that you can use when building your skills. We cover issues like knowing your audience, creating compelling storylines, using a cast of characters, integrating voiceover, designing a soundscape, and finding those “magic moments”. For each of these, we share the design pattern, the backing AI or physiological science, and how to implement the experience with Alexa. Chalk Talk
ALX304 - Five Ways Artificial Intelligence Will Reshape How Developers Think Thinking in terms of AI and conversation changes the way you approach building web services and customer experiences. In this session, we discuss five trends that we’re seeing right now in artificial intelligence and conversational UI as we work with people building new experiences with Alexa. Chalk Talk
ALX304-R - [REPEAT] Five Ways Artificial Intelligence Will Reshape How Developers Think Thinking in terms of AI and conversation changes the way you approach building web services and customer experiences. In this session, we discuss five trends that we’re seeing right now in artificial intelligence and conversational UI as we work with people building new experiences with Alexa. Chalk Talk
ALX306 - Build a Game Skill for the Recently Launched Echo Buttons Have you built Alexa Game skills before and are you looking to create something new? In this workshop, deep dive into how to build engaging experiences with the recently launched Echo Buttons. Participate in this interactive session and create your own Echo Button skill that leverages the new dynamic input modality in the Alexa Skills Kit! Bring your laptop, AWS account, and Amazon Developer Portal credentials. Workshop
ALX306-R - [REPEAT] Build a Game Skill for the Recently Launched Echo Buttons! Have you built Alexa Game skills before and are you looking to create something new? In this workshop, deep dive into how to build engaging experiences with the recently launched Echo Buttons. Participate in this interactive session and create your own Echo Button skill that leverages the new dynamic input modality in the Alexa Skills Kit! Bring your laptop, AWS account, and Amazon Developer Portal credentials. Workshop
ALX307 - Integrate Alexa into Your Product Using the AVS Device SDK In this hands-on workshop, learn how to build voice-enabled devices with the Alexa Voice Service (AVS), Amazon’s intelligent voice recognition and natural language understanding service. Key topics include: a technology overview, AVS development tools for commercial developers, tips for prototyping with AVS, how to build a robust C++ client using the AVS Device SDK, and how to test your AVS device. Expect to understand the process for bringing hands-free voice services to any connected device and walk out with a working prototype of an Alexa-enabled device on a Raspberry Pi. Workshop