Data Engineering
The Pipes and Plumbing

Data Engineers (DEs) build the pipes and plumbing that empower Business Intelligence Engineers (BIEs) to deliver insights.

Unorganized and unclean data can lead to inaccurate analysis and flawed conclusions. Data Engineering involves collecting, cleaning, storing, and preparing data for end users. Through processes like Extract, Transform, and Load (ETL), DEs ensure data platforms are accurate, scalable, and easy to use—critical for Business Intelligence Engineers and analysts.
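The ETL steps above can be sketched in miniature. This is a minimal illustration, not any specific pipeline from the use cases below: the CSV data, table schema, and function names are all hypothetical, and a real pipeline would log or route bad rows rather than silently drop them.

```python
import csv
import io
import sqlite3

# Hypothetical raw extract: expense rows as CSV, including one malformed amount.
RAW_CSV = """date,team,amount
2024-01-05,FP&A,1200.50
2024-01-06,FP&A,not_a_number
2024-01-07,Sales,980.00
"""

def extract(text):
    """Extract: parse the raw CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and drop rows with unparseable amounts."""
    clean = []
    for row in rows:
        try:
            clean.append((row["date"], row["team"], float(row["amount"])))
        except ValueError:
            continue  # a real pipeline would flag this row back to the source
    return clean

def load(rows, conn):
    """Load: write cleaned rows into a queryable store."""
    conn.execute("CREATE TABLE expenses (date TEXT, team TEXT, amount REAL)")
    conn.executemany("INSERT INTO expenses VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM expenses").fetchone()[0]
```

After the load step, end users query the clean table directly instead of re-parsing raw files, which is what makes the downstream analysis accurate and repeatable.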

Use Cases

FP&A Data Lake

When Tony joined the Central FP&A team at a major tech company, all reporting was done manually in Excel, with PDF reports emailed to senior leaders. This process was time-consuming, error-prone, and inefficient.

Tony established an AWS account and integrated it with existing data infrastructure using AWS Lake Formation and S3. He transformed the data using Python and SQL with AWS Glue, creating a robust data lake. This architecture became the foundation for dashboards serving hundreds of users across the organization, including the T&E Dashboard and Click-Through Financials Web Application.
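A data lake like the one described typically stores files in Hive-style partitioned paths (e.g. `table/dt=YYYY-MM-DD/part-0000.csv`), a layout AWS Glue can crawl and catalog. The sketch below shows only that partitioning idea with standard-library Python; the records, column names, and file layout are hypothetical, and the actual architecture used Glue jobs writing to S3 rather than a local directory.

```python
import os
import tempfile

# Hypothetical expense records to land in the lake.
records = [
    {"dt": "2024-01-05", "team": "FP&A", "amount": "1200.50"},
    {"dt": "2024-01-05", "team": "Sales", "amount": "980.00"},
    {"dt": "2024-01-06", "team": "FP&A", "amount": "310.25"},
]

def write_partitioned(records, root):
    """Group rows by date and write one file per dt=... partition directory."""
    by_dt = {}
    for r in records:
        by_dt.setdefault(r["dt"], []).append(r)
    paths = []
    for dt, rows in sorted(by_dt.items()):
        part_dir = os.path.join(root, f"dt={dt}")
        os.makedirs(part_dir, exist_ok=True)
        path = os.path.join(part_dir, "part-0000.csv")
        with open(path, "w") as f:
            f.write("team,amount\n")
            for row in rows:
                f.write(f"{row['team']},{row['amount']}\n")
        paths.append(path)
    return paths

root = tempfile.mkdtemp()
paths = write_partitioned(records, root)
```

Partitioning by date lets dashboards scan only the partitions they need, which keeps queries fast as the lake grows.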

Fund Data Architecture

One of the world’s largest ETF providers struggled to access data across the $7T ETF market. Analysts relied on Bloomberg Terminals to manually download data into Excel—a slow process prone to errors, particularly when senior management requested competitor fund analyses.

Tony built automated data pipelines that ingested data from Bloomberg and other financial providers, reconciled discrepancies, and flagged errors back to the sources. He organized the cleaned data into lightweight files optimized for ingestion into an ETF Dashboard. This setup provided users with near real-time data and significantly improved efficiency and accuracy.
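The reconcile-and-flag step can be illustrated with a small sketch. The tickers, values, and tolerance here are all invented for the example; the real pipelines compared many more fields across Bloomberg and other providers.

```python
# Hypothetical fund values from two feeds (e.g. Bloomberg vs. another provider).
bloomberg = {"FUNDA": 105.0, "FUNDB": 88.2, "FUNDC": 43.1}
other_feed = {"FUNDA": 105.0, "FUNDB": 90.0, "FUNDC": 43.1}

def reconcile(primary, secondary, tolerance=0.01):
    """Split tickers into values that agree within a relative tolerance
    and mismatches to flag back to the data sources."""
    clean, flagged = {}, {}
    for ticker, value in primary.items():
        other = secondary.get(ticker)
        if other is not None and abs(value - other) / value <= tolerance:
            clean[ticker] = value
        else:
            flagged[ticker] = (value, other)
    return clean, flagged

clean, flagged = reconcile(bloomberg, other_feed)
```

Only the clean values flow into the lightweight dashboard files; the flagged discrepancies go back to the providers for correction, which is how the pipeline keeps the ETF Dashboard both current and trustworthy.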