Role: Database/Data Engineer
Location: Charlotte, NC (Fully onsite)
Rate: Market
Duration: Long term
The primary role of the Developer – Database/Data Engineer is to function as a critical member of a data team by designing data integration solutions that deliver business value in line with the company’s objectives. They are responsible for the design and development of data/batch processing, data manipulation, data mining, and data extraction/transformation/loading into large data domains using Python/PySpark and AWS tools.
Responsibilities:
Provide scoping, estimating, planning, design, development, and support services to a project.
Identify requirements and develop the detailed technical design document.
Work with developers and business areas to design, configure, deploy, and maintain custom ETL Infrastructure to support project initiatives.
Design and develop data/batch processing, data manipulation, data mining, and data extraction/transformation/loading (ETL Pipelines) into large data domains.
Document and present solution alternatives to clients, which support business processes and business objectives.
Work with business analysts to understand and prioritize user requirements.
Design, develop, test, and implement application code.
Follow proper software development lifecycle processes and standards.
Perform quality analysis of the products; responsible for defect tracking and classification.
Track progress and intervene as needed to eliminate barriers and ensure delivery.
Resolve or escalate problems and manage risk for both development and production support.
Coordinate vendors and contractors for specific projects or systems.
Maintain deep knowledge and awareness of technical & industry best practices and trends, especially in technology & methodologies.
Skills and Knowledge:
Developer experience with a specific focus on data engineering.
Hands-on development experience using Python and PySpark as ETL tools.
Experience with AWS services such as Glue, Lambda, MSK (Kafka), S3, Step Functions, RDS, and EKS.
Experience with databases such as PostgreSQL, SQL Server, Oracle, and Sybase.
Experience with SQL database programming, SQL performance tuning, relational model analysis, queries, stored procedures, views, functions, and triggers.
Strong technical experience in design (mapping specifications, HLD, LLD) and development (coding, unit testing).
Knowledge of UNIX scripting and Oracle SQL/PL-SQL development.
Experience with data models, data mining, data analysis, and data profiling.
Experience working with REST APIs.
Experience with workload automation tools such as Control-M and AutoSys.
Good knowledge of CI/CD and DevOps processes and tools such as Bitbucket, GitHub, and Jenkins.
Strong experience with Agile/SCRUM methodology.
Experience with other ETL tools (DataStage, Informatica, Pentaho, etc.).
Knowledge of MDM, data warehousing, and data analytics.