Talks by users for users:
A day to shorten
the distance between
data and decision making
Speakers from our Fall 2021 Data Agility Day
Thibaut Ceyrolle
Advisor; Founder, Snowflake EMEA
Christian Heinzmann
Sr. Director, Data @ Indigo
Watch all of the Data Agility Day Fall 2021 Sessions On-Demand
Tomorrow.io's custom stack that gives them rapid access to consumer app insights
Tomer Coreanu, General Manager B2C @ Tomorrow.io
Check out how Tomer builds an agile data stack from scratch. Tomer will explain how his stack powered his company's growth from start-up to unicorn status and beyond.
Using Rivery, BigQuery, and Tableau, Tomer will show in-depth technical code examples and the dashboards used to drive Tomorrow.io's growth in the subscription economy for a product that delivers real-time data insights.
Companies should expect to work faster
Itamar Ben Hamo, Co-Founder & CEO @ Rivery
Adam Conway, SVP Products @ Databricks
How can companies deliver true data agility? What are key growth engines? If looking at dashboards and having static BI was the legacy way of operating, what is the future of analytics?
Join Itamar Ben Hamo of Rivery and Adam Conway of Databricks as they explore answers to these questions and their implications for the speed at which companies will be expected to operate.
Why you don't need to overengineer your data stack
Naomi Miller, Director of Data Engineering @ NBC Universal
How StuffThatWorks turned 51.6 million data points into health care insights
Yossi Synett, Chief Data Scientist @ StuffThatWorks
Yossi Synett, Chief Data Scientist and Co-founder of StuffThatWorks, will talk about building a business based on crowdsourced data pulled from over 1 million contributors. Leveraging machine learning and big data digestion, he will discuss what it takes to stay agile and respond to data-driven trends.
How Freshly is scaling business metrics observability with AI
David Drai, Co-Founder & CEO @ Anodot
David Ashirov, VP of Data @ Freshly
We put so much effort into building the perfect data infrastructure, but few of us put the same thought into the analytics stack. What good is a race car without a trained driver? Join VP of Data David Ashirov as he shares how Freshly is monitoring hundreds of thousands of business metrics in real time, and the impact that has had on operations, sales, and overall revenue.
Reverse ETL - How to power multi-directional data flows
Taylor McGrath, Head of Customer Solutions @ Rivery
Taylor McGrath, Head of Customer Solutions at Rivery, will talk about why BI and dashboards are necessary for analyzing trends, and how Reverse ETL takes insights one step further: once an insight is realized, the data point can be pushed to another system, where action can be taken immediately.
Attendees will walk away with an understanding of how to implement and deploy their own Reverse ETL pipelines.
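As a rough illustration of the pattern Taylor describes (a hypothetical sketch, not Rivery's implementation; the warehouse table, CRM endpoint, and field names are assumptions), the snippet below reads records that a BI layer has already flagged and pushes them to an operational system via its API.

```python
# Minimal reverse ETL sketch: pull "realized" insights from the warehouse
# and push them to a downstream system where action can be taken.
# All table, endpoint, and field names below are hypothetical.
import sqlite3      # stands in for a warehouse connection
import requests     # pushes records to the operational SaaS API

WAREHOUSE = "warehouse.db"
CRM_ENDPOINT = "https://api.example-crm.com/v1/contacts"  # hypothetical target


def extract_realized_insights(conn):
    """Fetch rows the analytics layer has flagged, e.g. churn-risk accounts."""
    return conn.execute(
        "SELECT account_id, email, churn_score "
        "FROM churn_scores WHERE churn_score > 0.8"
    ).fetchall()


def push_to_crm(rows):
    """Send each data point to the operational system so teams can act on it."""
    for account_id, email, score in rows:
        requests.post(
            CRM_ENDPOINT,
            json={"account_id": account_id, "email": email,
                  "tags": ["churn-risk"], "score": score},
            timeout=10,
        )


if __name__ == "__main__":
    with sqlite3.connect(WAREHOUSE) as conn:
        push_to_crm(extract_realized_insights(conn))
```

In practice the extract step would run against the production warehouse and the sync would handle batching, retries, and deduplication; the sketch only shows the directional flow from warehouse to operational tool.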
Unifying analytics – changing data architecture to unite BI and data science
Paige Roberts, Open Source Relations Manager @ Vertica
The data warehouse has been an analytics workhorse for decades for business intelligence teams. Unprecedented volumes of data, new types of data, and the need for advanced analyses like machine learning brought on the age of the data lake. Now, many companies have a data lake for data science, a data warehouse for BI, or a mishmash of both, possibly combined with a mandate to go to the cloud.
- Look at successful data architectures from companies like Philips, The Trade Desk, and Climate Corporation
- Learn to eliminate duplication of effort between data science and BI data engineering teams
- See a variety of ways companies are getting AI and ML projects into production where they have a real impact, without bogging down essential BI
Picking the right data tools: when to stop writing custom code pipelines
Ben Rogojan, Consultant @ Seattle Data Guy
Ben is a data engineering influencer, consultant, and data engineer at Facebook. He joins us to discuss his experience with a range of data pipeline systems, from 100% custom code to drag-and-drop low-code tools.
He will discuss the modern challenges that data engineers face and how different tools can help address each of them. The focus will be on striking the right balance between build and buy solutions for the data stack, especially as data sources continue to explode.
Attendees will learn how to prioritize their time to deliver maximum value to their companies, and when relying on off-the-shelf solutions makes them better data engineers.
Balance agility and governance with the Data Cloud and DataOps
Kent Graziano, Chief Technical Evangelist @ Snowflake
DataOps is the application of DevOps concepts to data. The DataOps Manifesto outlines what that means, much as the Agile Manifesto outlines the goals of the Agile software movement. But as the demand for data governance has increased, and the push to do “more with less” and be more agile has put more pressure on data teams, we all need more guidance on how to manage it all.
Seeing that need, a small group of industry thought leaders and practitioners got together and created a DataOps philosophy to describe the best way to deliver DataOps by defining the core pillars that must underpin a successful approach. Combining this approach with an agile and governed platform like Snowflake’s Data Cloud allows organizations to indeed balance these seemingly competing goals while still delivering value at scale.
How to layer your data the right way
Christian Heinzmann, Senior Director of Data @ Indigo
Building data marts is tricky enough by itself. When you throw a fast-growing industry on top of it, with constantly shifting data inputs and business requirements, it becomes nigh impossible. Learn how Indigo Ag layers its data to deal with these constant challenges.
HealthEdge's approach to multi-dimensional data observability
Rohit Choudhary, Founder & CEO @ Acceldata
Krishnan Bhagavath, VP of Engineering @ HealthEdge
Rohit and Krishnan will sit down to discuss how HealthEdge has used multi-dimensional data observability to reduce complexity, improve reliability, and leverage AI/ML to improve engineering productivity, all while reducing costs. This session will review the technical and business benefits of:
- Successfully architecting, operating, and optimizing complex data systems at scale
- Full visibility into data processing, data, and data pipelines
- Using ML to automate data identification, quality, and management
How to build an agile data strategy: A conversation with Alex Tverdohleb, VP of Data Services at Fox
Molly Vorwerck, Head of Content & Communications @ Monte Carlo
Alex Tverdohleb, VP of Data Services @ Fox
In this fireside chat, Alex Tverdohleb, VP of Data Services at Fox, sits down with Molly Vorwerck, a founding team member at Monte Carlo, the data observability company, to discuss his experience leading and defining data strategy at hyper-growth companies.
Alex will discuss: how his team aligns their goals and objectives to Fox's company-wide KPIs; how to structure your data organization; how to weigh building vs. buying your stack; and best practices for building a culture of data trust - at scale.
Agile can't get analytics finished
Aash Viswanathan, Senior Data Scientist @ Atlassian
Data analytics is knowledge work, so as long as there are questions left to answer, there is no "done." Analysis exists to understand things we don't know yet, which makes access to insights very difficult to schedule.
This talk should help those who've been asked for analytics ETAs, or those being asked to prioritize other projects for "after analytics is done." Aash will walk through how he has learned to handle strategy and processes for data analytics projects at Atlassian, LinkedIn, Lime, and other organizations, so that confident analysis can be drawn from the data and analytics can grow alongside the business.
What we're learning from capturing and tracing the lineage of key datasets at Northwestern Mutual
Kevin Mellott, Asst. Director Data Engineering @ Northwestern Mutual
Julien Le Dem, Co-Founder & CTO @ Datakin
Understanding open source communities using the modern data stack
Srini Kadamati, Senior Data Scientist @ Preset
Open source communities have powered the last decade of modernization in the data ecosystem, but we are just now beginning to understand how these types of communities are started, nurtured, and grown.
Srini joined Preset and the Apache Superset community 18 months ago specifically to help with developer advocacy and community building. Since joining, his team has used open source data tools to catalog and visualize community data to find better ways to understand and support the Superset community.
In this talk, Srini will share the lessons he's learned about open source and about growing communities over the last 18 months.
How we lose trust in data and the struggle of regaining it
Chaim Mazal, VP Information Security, CISO @ Kandji
Ben Herzberg, Chief Data Scientist @ Satori
In an agile data environment, it’s easy to develop trust issues, unless you adapt.
In this expert discussion, we will dig into these changes, the trust challenges they create, and ways to overcome them, based on experience leading complex data organizations.
How to make cloud migration painless and achieve organizational data agility
Khalil Sheikh, EVP Solutions & Strategy @ Saxon Global
Khalil will teach attendees how to move their data to the cloud and build an agile data stack that can scale for any size organization.
Together, we'll examine some pitfalls to avoid and Khalil's best practices when performing a cloud migration.
Data lakehouse, data mesh, and data fabric (the alphabet soup of data architectures)
James Serra, Data Platform Architecture Lead @ EY
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session, James covers all of them in detail and compares the pros and cons of each from his perspective as a seasoned Data Platform Architecture Lead at EY.
Each may sound great in theory, but he'll dig into the concerns you need to be aware of before taking the plunge. James will also include use cases so you can see what approach will work best for your big data needs.
How Grubhub uses data to prioritize product roadmaps
Seth Rosenstein, Sr. Product Manager @ Grubhub
See firsthand how Grubhub uses customer and market data to prioritize its product roadmap, and how to filter out data noise to reach insights that drive value. By backtracking from revenue numbers, attendees will learn how to identify the KPIs that should drive the roadmap.
This data-led product roadmap has helped Grubhub achieve a 48% year-over-year revenue growth.
5 pointers for building agile lakehouses that don’t suck up dev resources
Christian Romming, Founder & CEO @ Etleap
Building a data lakehouse can be a headache. More often than not, they end up becoming data swamps: bottomless pits that suck up dev resources and obscure data transparency in their murky waters. But when done correctly, data lakehouses can be valuable wells of knowledge that transform your business.
We’ll deep-dive into concrete enterprise lakehouse stack examples, and give you 5 refreshing pointers for keeping the dev workload sustainable and ensuring data usability for the long term.
What is the cost to attend the virtual sessions?
Data Agility Day was, and remains, free and open for all to attend and watch.
What is Data Agility Day?
Simply put, Data Agility Day was a day to gather and examine ways to improve value extraction from data.
Many companies still struggle with delivering data projects on time, at scale, and with useful results.
Our community mission is the persistent evolution of agile data methods, strategies, and team enablement.
Sessions covered how individuals and teams at data-savvy organizations are achieving the agility that enables faster decision-making and creates competitive advantages.
What is data agility?
Data agility is the ability to shorten the distance between data and the decision-making that drives action and empowers businesses to be insight-driven.
As the need for insights grows, data teams are looking to scale their data management workflows and effectiveness - in essence, to run data-agile organizations.
Who came to Data Agility Day 2021?
Data engineers, data developers, data architects, data scientists as well as BI teams, marketing analytics professionals, innovation-focused executives, and other data science practitioners and leadership.
Session topics included:
- Data transformation
- Data orchestration
- Data visualization
- Data governance
- Data management
- Data ingestion