Experience Required

  • Experience as a Databricks developer with strong knowledge of implementing data pipelines.
  • Experience designing, developing, and maintaining scalable data pipelines and solutions using Databricks.
  • Experience hosting Databricks on the AWS platform.

Roles & Responsibilities

  • Design and develop scalable data pipelines and ETL processes using Databricks and Apache Spark.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Optimize and tune data pipelines for performance and scalability.
  • Implement data quality checks and validations to ensure data accuracy and consistency.
  • Monitor and troubleshoot data pipelines to ensure reliable and timely data delivery.
  • Develop and maintain documentation for data pipelines, processes, and solutions.
  • Implement best practices for data security, governance, and compliance.

Qualifications

  • Experience in data engineering or a related field.
  • Strong experience with Databricks and Apache Spark.
  • Proficiency in programming languages such as Python, Scala, or Java.
  • Strong SQL skills and experience with relational databases.
  • Knowledge of data warehousing concepts and technologies.
  • Experience with version control systems such as Git.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Desired (non-essential) qualifications:

  • Experience with Delta Lake and Databricks Delta.
  • Experience with data visualization tools such as Power BI, Tableau, or Looker.
  • Strong initiative and ability to set stretch goals.
  • Strong consultative advisory skills.
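As an illustration of the data quality checks and validations this role calls for, here is a minimal sketch in plain Python. In a Databricks pipeline these rules would typically be expressed over Spark DataFrames or as Delta table constraints; the record fields and rules below are hypothetical examples, not part of the posting.

```python
from datetime import date

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors for one record (empty = valid).

    Hypothetical rules: order_id present, amount non-negative,
    order_date not in the future.
    """
    errors = []
    if not record.get("order_id"):
        errors.append("order_id is missing")
    amount = record.get("amount")
    if amount is None or amount < 0:
        errors.append("amount must be a non-negative number")
    order_date = record.get("order_date")
    if order_date is None or order_date > date.today():
        errors.append("order_date must not be in the future")
    return errors

def run_quality_checks(batch: list[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split a batch into valid records and quarantined (record, errors) pairs."""
    valid, quarantined = [], []
    for record in batch:
        errors = validate_record(record)
        if errors:
            quarantined.append((record, errors))
        else:
            valid.append(record)
    return valid, quarantined
```

Quarantining failed rows with their error reasons, rather than dropping them silently, is one common way to keep data delivery auditable.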

Salary

$100,000 - $135,000

Project-based

Remote Job

Worldwide

Job Overview

  • Job Posted: 1 year ago
  • Job Type: Contractual
  • Job Role: Any
  • Education: Any
  • Experience: Any
  • Total Vacancies: -


Location

United States