
Data Engineer III (Senior Data Quality Engineer)

  • On-site
    • Cincinnati, Ohio, United States
    • Blue Ash, Ohio, United States
  • $60 - $70 per hour
  • Information Technology

Seeking a Senior Data Quality Engineer, hands-on with Azure, Databricks, Spark, SQL, Python, and Kafka, to build pipelines, ensure data quality, and deliver actionable insights.

Job description

For over half a decade, Hudson Manpower has been a trusted partner in delivering specialized talent and technology solutions across the IT, Energy, and Engineering industries worldwide. We work closely with startups, mid-sized firms, and Fortune 500 clients to support their digital transformation journeys. Our teams are empowered to bring fresh ideas, shape innovative solutions, and drive meaningful impact for our clients. If you're looking to grow in an environment where your expertise is valued and your voice matters, then Hudson Manpower is the place for you. Join us and collaborate with forward-thinking professionals who are passionate about building the future of work.

About the Role
The End-to-End Fresh team is dedicated to ensuring that produce, meat, and seafood are delivered to customers with the highest level of freshness—from farm to fork. We leverage cutting-edge technologies, including Azure Data Platform, Databricks, Power BI, and Android-based tools, to monitor supplier performance, collect new data streams, and transform that data into actionable insights.

This role is ideal for a hands-on data engineer with strong experience in Azure, Databricks, and modern data architectures, who wants to design, build, and optimize scalable data pipelines and quality frameworks that directly impact customer experience.

Key Responsibilities

  • Design and implement scalable data pipelines using Azure Data Lake, Databricks, and Unity Catalog.

  • Develop and maintain data quality frameworks to monitor accuracy, freshness, and reliability (an illustrative check is sketched after this list).


  • Partner with cross-functional teams to translate business requirements into data-driven solutions.

  • Implement real-time and batch data integrations using Spark, Kafka, IBM MQ, and EventHub.

  • Conduct complex data analysis, identify trends, and deliver insights via reporting tools such as Power BI.

  • Define data migration and modernization strategies to evolve current platforms.

  • Champion data governance, security, and best practices to promote reusable data assets.

  • Collaborate with third-party vendors to integrate and validate data solutions.
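
To make the data quality responsibility above concrete, here is a minimal, illustrative PySpark sketch of the kind of check this role would own, not an actual team implementation. The table name supplier_deliveries, the columns delivery_ts and temperature_c, and every threshold below are hypothetical placeholders chosen only for the example.

# Minimal data-quality sketch (assumed table/column names; thresholds are illustrative only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("freshness-quality-check").getOrCreate()

# Hypothetical source table registered in the metastore / Unity Catalog.
deliveries = spark.table("supplier_deliveries")

metrics = deliveries.agg(
    F.count("*").alias("row_count"),
    # Completeness: share of rows missing the delivery timestamp.
    F.avg(F.col("delivery_ts").isNull().cast("int")).alias("null_ts_ratio"),
    # Validity: share of rows outside an assumed cold-chain range of -2 to 8 degrees C.
    F.avg((~F.col("temperature_c").between(-2, 8)).cast("int")).alias("out_of_range_ratio"),
    # Freshness: hours elapsed since the most recent delivery record.
    ((F.unix_timestamp(F.current_timestamp())
      - F.unix_timestamp(F.max("delivery_ts"))) / 3600).alias("hours_since_latest"),
).collect()[0]

# Fail the run if any illustrative threshold is breached.
assert metrics["null_ts_ratio"] < 0.01, "too many rows missing delivery_ts"
assert metrics["out_of_range_ratio"] < 0.05, "cold-chain temperature violations above threshold"
assert metrics["hours_since_latest"] < 24, "source data is stale"

In practice, a check like this would typically run as a scheduled Databricks job step and publish its metrics to a monitoring table or Power BI report rather than raise bare assertions.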

Job requirements

Minimum Qualifications

  • Proven experience building cost-effective, performance-driven solutions using Microsoft Azure and Databricks.

  • Strong proficiency in SQL, Spark, and Python.

  • Experience with SQL and NoSQL data stores on big data platforms.

  • Hands-on experience with streaming technologies (Kafka, IBM MQ, EventHub).

  • Solid understanding of data principles, patterns, and architectures.

  • Experience with relational data modeling and database design.

  • Strong analytical skills with attention to detail and accuracy.

  • Ability to own assignments, communicate clearly, and deliver results.

  • Basic understanding of network and data security architecture.

Desired Skills & Experience

  • Exposure to machine learning, AI, or operational data science solutions.

  • Azure Data Engineer Certification (or equivalent).

  • Experience with cloud-native architectures (Azure, GCP, or multi-cloud).

  • Experience with data science platforms.

  • Degree in Computer Science, IT, or related field.
