Databricks Data Engineer
Job Description
Job Title: Data Engineer - Databricks
Location: Onsite - Toronto, Canada
Employment Type: Contract
About the Role
We are seeking an experienced Data Engineer with a strong background in Databricks, Apache Spark, and modern cloud data platforms. The ideal candidate has over 5 years of experience designing, developing, and maintaining scalable data pipelines and lakehouse architectures in enterprise environments. You will work closely with solution architects, analysts, and cross-functional teams to build robust, high-performance data solutions supporting analytics and machine learning workloads.
Key Responsibilities
• Design and implement ETL/ELT pipelines using Databricks and Apache Spark for batch and streaming data.
• Develop and maintain Delta Lake architectures to unify structured and unstructured data.
• Collaborate with data architects, analysts, and data scientists to define and deliver scalable data solutions.
• Implement data governance, access control, and lineage using Unity Catalog, IAM, and encryption standards.
• Integrate Databricks with cloud services on AWS, Azure, or GCP (e.g., S3, ADLS, BigQuery, Glue, Data Factory, or Dataflow).
• Automate workflows using orchestration tools such as Airflow, dbt, or native cloud schedulers.
• Tune Databricks jobs and clusters for performance, scalability, and cost optimization.
• Apply DevOps principles for CI/CD automation in data engineering workflows.
• Participate in Agile ceremonies, providing updates, managing risks, and driving continuous improvement.
Required Qualifications
• 5+ years of professional experience in data engineering or data platform development.
• Hands-on experience with Databricks, Apache Spark, and Delta Lake.
• Experience with at least one major cloud platform - AWS, Azure, or GCP.
• Strong proficiency in Python or Scala for data processing and automation.
• Advanced knowledge of SQL, query performance tuning, and data modeling.
• Experience with data pipeline orchestration tools (Airflow, dbt, Step Functions, or equivalent).
• Understanding of data governance, security, and compliance best practices.
• Excellent communication skills and ability to work onsite in Toronto.
Preferred Skills
• Certifications in Databricks, AWS/Azure/GCP Data Engineering, or Apache Spark.
• Experience with Unity Catalog, MLflow, or data quality frameworks (e.g., Great Expectations).
• Familiarity with Terraform, Docker, or Git-based CI/CD pipelines.
• Prior experience in finance, legal tech, or enterprise data analytics environments.
• Strong analytical and problem-solving mindset with attention to detail.
How to Apply
Ready to start your career as a Databricks Data Engineer at CloudTech Innovations?
- Click the "Apply Now" button below.
- You will be redirected to the employer's official portal to complete your application.
- Ensure your resume and cover letter are tailored to the job description using our AI tools.
Frequently Asked Questions
Who is hiring?
This role is with CloudTech Innovations in Toronto.
Is this a remote position?
No. This is an on-site role based in Toronto, Canada, as stated in the job details above.
What is the hiring process?
After you click "Apply Now", you will be redirected to the employer's official site to submit your resume. You can typically expect to hear back within 1-2 weeks if shortlisted.
How can I improve my application?
Tailor your resume to the specific job description. You can use our free Resume Analyzer to see how well you match the requirements.
What skills are needed?
Refer to the "Required Qualifications" and "Preferred Skills" sections above for a detailed list.