Data Engineering: The Foundation of Data Science Efforts
Data engineering can be broadly defined as the acquisition and transformation of data into a form tailored for a data science effort: the process and set of tools used to turn disorganized, unstructured, and often unclean data into something organized, structured, and maintainable.
Data engineering solutions may follow established recipes for retrieving and transforming data, but they often require bespoke pipelines (the start-to-finish lifecycle of data) that meet the specific challenges of the data being used and the requirements of the data science approaches being explored. There is no cookie-cutter solution to data engineering; as the saying goes, “If you have seen one data engineering pipeline, you have seen one data engineering pipeline.”
This discussion walks through the journey of data engineering from source through secure transport, storage, and versioning to ultimate deployment.
Data Sources: Where Does Data Come From?
A wide range of expert-curated resources should be carefully selected and interrogated to comprehensively explore a clinical question of interest. For example, clinical pathways are constructed by leveraging data-driven knowledge. Sources include, but are not limited to, electronic health records (EHRs, which are more comprehensive than EMRs), payers (claims data), patient-reported outcomes (PROs), peer-reviewed biomedical literature (eg, PubMed/MEDLINE), clinical study findings (eg, ClinicalTrials.gov), and National Drug Code (NDC) directories. With data sources like these, pathways can track adherence, provide decision support, surface clinical intelligence, and ultimately facilitate evidence-based standards of care. Other resources available for biomedical investigations include, but are not limited to, imaging archives (eg, the National Biomedical Imaging Archive), medical devices and mobile apps, wearables (eg, wristbands), and paper-based clinical notes.
The diversity of multimodal data collected today makes data engineers an integral part of a data science team. They are tasked with providing high-quality data and a dependable infrastructure, enabling the extraction of meaningful predictive and prescriptive insights that feed into the design of innovative health care solutions.
Data Formats: Structure vs Topology
Many people think of data as it exists in a spreadsheet such as Microsoft Excel, but data can exist in numerous file formats and contain various types and attributes. A data format describes the structure of data as it is physically stored; data stored on a server, in cloud storage, or on a laptop is said to be “at rest.” Data is commonly found in structures such as flat files (JSON, CSV, text), database records, documents, and data frames, among others. More advanced structures support specific analytical needs; for example, graphs are rapidly becoming important for population analytics and insights from time series of patient data. The state of graph databases in 2020 is reviewed in Graph Technology Landscape 2020.
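To make the contrast concrete, here is a minimal sketch in Python with pandas, using hypothetical patient records, that expresses the same data in two flat-file formats and loads each into a common data frame:

```python
from io import StringIO
import json

import pandas as pd

# The same two (hypothetical) patient records at rest in two flat-file formats.
csv_text = "patient_id,age\nP001,54\nP002,61\n"
json_text = '[{"patient_id": "P001", "age": 54}, {"patient_id": "P002", "age": 61}]'

# Each format loads into a common in-memory structure: a data frame.
df_from_csv = pd.read_csv(StringIO(csv_text))
df_from_json = pd.DataFrame(json.loads(json_text))

print(df_from_csv.equals(df_from_json))  # True: same data, different formats
```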
The topology, or shape, of data is a description of the properties of the data contained in a structured format. A simple example, keeping with the spreadsheet analogy, is how many rows and columns the spreadsheet has. Important properties include the volume of data (eg, 1 million rows), data types (eg, string, integer, float, and varchar), and dimensions (eg, 256 rows by 256 columns), among others. Knowing the topology of a data set guides its transportation, preparation, storage, and consumption by end users (eg, data scientists, analysts, clinicians, and businesses).
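Keeping with that example, the topology of a data frame can be inspected directly. The data set below is a hypothetical stand-in for a much larger one:

```python
import pandas as pd

# A tiny, hypothetical data set standing in for a much larger one.
df = pd.DataFrame({
    "patient_id": ["P001", "P002", "P003"],
    "age": [54, 61, 47],
    "risk_score": [0.82, 0.67, 0.91],
})

print(df.shape)   # dimensions: 3 rows by 3 columns -> (3, 3)
print(len(df))    # volume: number of rows
print(df.dtypes)  # data types: object (string), int64, float64
```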
In the past, enterprises attempted to shoehorn data of different structures and topologies into one single tool or solution, such as a database. More modern tools such as Hive, Presto, Impala, and many others allow indexing, discovery, and even joining of data stored in dramatically different formats. This mitigates transformation problems and enables multiple data formats to be used simultaneously, each tailored for a specific purpose.
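As a rough illustration of the idea (a sketch, not a production setup), the PySpark snippet below, with hypothetical file paths and column names, reads a CSV file and a JSON file and joins them directly, without first converting either to a common format:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multi-format-join").getOrCreate()

# Hypothetical files: claims exported as CSV, outcomes captured as JSON.
claims = spark.read.csv("claims.csv", header=True, inferSchema=True)
outcomes = spark.read.json("outcomes.json")

# Join across formats without converting either file first.
joined = claims.join(outcomes, on="patient_id", how="inner")
joined.show()
```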
Data Storage and Transport
Data is stored in a format on a given medium, whether that is cloud-based storage (eg, AWS S3, Microsoft Azure Storage, Box, or Google Drive), a flat file on a laptop, a database located in a data center, or part of a patient’s medical record in an EHR. Some common storage locations for data engineering include the following (a small sketch contrasting two of them appears after the list):
- Data Lake
- Data Warehouse
- Relational Database
- Graph Database
- Document Store
- Spark Database
- Key Value Store
- Columnar Store
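As a small illustration of two of these options, the following sketch, with hypothetical file and table names, writes the same records both as a flat file and as rows in a relational database, with SQLite standing in for a production system:

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"patient_id": ["P001", "P002"], "age": [54, 61]})

# At rest as a flat file on a laptop...
df.to_csv("patients.csv", index=False)

# ...and as records in a relational database (SQLite as a stand-in).
with sqlite3.connect("patients.db") as conn:
    df.to_sql("patients", conn, if_exists="replace", index=False)
    print(pd.read_sql("SELECT * FROM patients", conn))
```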
Data is seldom used from its source location in data science efforts. Data generally requires transportation to another location where it can be cleaned, explored, and anonymized when necessary, and then transported again into a final location before any analysis occurs. This need for transportation is the first indication that data science requires infrastructure in order to perform its basic functions. Transportation may involve negotiating access permissions, licensing, building infrastructures, establishing or following standards, scheduling, and setting up certifications, data usage agreements, and validations. Each data source requires a specific tool, skillset, process, and infrastructure to transport.
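A minimal transport sketch might look like the following, assuming AWS credentials and access permissions are already in place and using hypothetical bucket and key names:

```python
import boto3

# Hypothetical buckets: a raw landing zone and a curated staging area.
s3 = boto3.client("s3")

# Pull the source extract down for preparation...
s3.download_file("example-raw-bucket", "ehr/extract.csv", "/tmp/extract.csv")

# ...(cleaning, exploration, and anonymization would happen here)...

# ...then push the prepared file to its next location in the pipeline.
s3.upload_file("/tmp/extract.csv", "example-staging-bucket", "ehr/extract_clean.csv")
```

Every source tends to need its own variant of this step, which is part of why each pipeline ends up bespoke.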
You likely have noticed one theme present in this post: data engineering requires unique solutions for each data set. Data engineering is just the beginning of the recognized challenges in undertaking a data science initiative.
Secure Data: Is Your Data Safe?
Data safety comes in part from adhering to regulations and following best practices. Data security can be complex and is easy to implement improperly. Ensuring data is secure at rest and in transit is a special skill that often requires a security engineer once data science projects evolve past small local data sets that do not contain Protected Health Information (PHI), financial, or strategically sensitive data. It is best to demonstrate an overabundance of caution and security, lest your data science project turn into a data breach with significant financial, legal, and business implications. Data security may be the subject of a future post in this series. Some general guidelines for data security, though not comprehensively covering the topic, include the resources below (followed by a small encryption sketch):
- Best Practices for PHI
- Cloud Adoption Best Practices
- Security Standards (Amazon AWS, but generally applicable)
- Best Practices for Avoiding Code Injection of Web Apps
- Principle of Least Privilege Philosophy
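To ground one of these guidelines, here is a minimal sketch of encrypting a file at rest using the cryptography package’s Fernet recipe; the file names are hypothetical, and a real project should keep keys in a managed key service rather than alongside the data:

```python
from cryptography.fernet import Fernet

# Generate a key; in practice, store it in a managed key service,
# never in a file next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the flat file so the data at rest is unreadable without the key.
with open("patients.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("patients.csv.enc", "wb") as f:
    f.write(ciphertext)

# Only a holder of the key can recover the plaintext.
plaintext = fernet.decrypt(ciphertext)
```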
Data Versioning: The Past, Present, and Future of a Data Set
Just like a text document or slide presentation, data sets change over time through their transports, transformations, and data science experiments. To support knowing all the states of a data set, versions are kept starting with the initial source, continuing through each transformation, and ending with the final version. An extension of this concept is that each experimental analysis and interpretation of results should reference the versions of data used to create them. Data versioning can help support audits and explain how data science results were achieved in the event of an internal or external challenge of those results.
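One lightweight way to pin an analysis to an exact version of a data set is to record a content hash alongside provenance metadata, as in the sketch below (Python standard library, hypothetical file name); purpose-built tools such as DVC build on the same idea:

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """Content hash that uniquely identifies this version of a data set."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record exactly which state of the data an experiment used.
manifest = {
    "source": "patients.csv",
    "sha256": fingerprint("patients.csv"),
    "recorded_at": datetime.now(timezone.utc).isoformat(),
    "step": "after de-identification",
}
print(json.dumps(manifest, indent=2))
```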
Next Blog Post Topic
We have reviewed a few of the topics in data engineering in this blog post. In the next article, we will provide more detail about how data is acquired, transformed, and prepared for data science efforts, as well as made available to data science teams.
About David Hughes
David Hughes is the Principal Machine Learning Data Engineer for Octave Bioscience. He develops cloud-based architectures and solutions for surfacing clinical intelligence from complex medical data. He leverages his interest in graph-based data and population analytics to support data science efforts. David is using his experience leading clinical pathways initiatives in oncology to facilitate stakeholder engagement in the development of pathways in neurodegenerative diseases. With Octave, he is building a data-driven platform for improving patient experience, mitigating cost, and advancing health care delivery for patients and families.
About Octave Bioscience
The challenges of MS are significant, the issues are overwhelming, and the needs are mostly unmet. That is why Octave is creating a comprehensive, measurement-driven Care Management Platform for MS. Our team is developing novel measurement tools that feed into structured analytical data models to improve patient management decisions, create better outcomes, and lower costs. We are focused on neurodegenerative diseases, starting with MS.