This job expired a while ago. Please use your discretion.

Data Engineer

Hyderabad
Posted on: 03 Oct 2021

Job details


  • NATURE OF JOB Hybrid — WFH / WFO / CW
  • CATEGORY Programming
  • SALARY RANGE Best in the industry (annual)
  • EXPERIENCE 1 – 3 Years
  • JOB TYPE Full Time
  • REGIONAL PREFERENCES India
  • NO. OF VACANCIES 1

Job description


Summary

We at Apple are looking for dedicated individuals to join our team to build the data foundations and tools that craft the future of commerce and Apple Pay! You will design and implement scalable, extensible and highly available data pipelines over large-volume data sets that enable impactful insights and strategy for payment products. We believe in getting things done iteratively and rapidly, with open feedback and debate along the way. Analytics is a team sport, but we strive for independent decision-making and taking smart risks. Our team collaborates deeply with partners across product, design, engineering, and business teams.

Our mission is to drive innovation by providing our business and data science partners with outstanding systems and technology so they can make decisions that improve the customer experience of our services. This includes working with large data sources, helping derive meaningful insights, delivering multifaceted analyses, and bringing our data to life via compelling visualizations. Partnering with the head of Wallet Payments & Commerce Data Engineering & BI, you will collaborate with data analysts, instrumentation professionals and engineering teams to identify requirements that drive the creation of data pipelines. You will also work closely with the application server engineering team to understand the architecture and internal APIs involved in upcoming and ongoing Apple Pay projects.

We are seeking an outstanding person to play a pivotal role in helping analysts and business users make decisions using data and visualizations. You will partner with key associates across the engineering, analytics and business teams as you design and create query-friendly data structures! The ideal candidate is a self-motivated team member, skilled in a broad set of data processing techniques, with the ability to adapt and learn quickly, deliver results with limited direction, and choose the best possible data processing solution.
Key Qualifications

  • 5+ years of professional experience with big data systems, data pipelines and data processing
  • Practical hands-on experience with technologies such as Apache Hadoop, Apache Pig, Apache Hive, Apache Sqoop and Apache Spark
  • Ability to understand API specs, identify relevant API calls, extract data, and implement data pipelines and SQL-friendly data structures
  • Ability to identify and execute data validation rules and alerts based on data publishing specifications, for data integrity and anomaly detection
  • Understanding of distributed file formats such as Apache Avro and Apache Parquet, and of common data transformation methods
  • Expertise in Python, Unix shell scripting and dependency-driven job schedulers
  • Expertise in Core Java, Oracle, Teradata and ANSI SQL
  • Familiarity with Apache Oozie and PySpark
  • Knowledge of Scala and Splunk is good to have
  • Familiarity with rule-based tools and APIs for multi-stage data correlation on large data sets is a plus

Description

  • Translate business requirements from the business team into data and engineering specifications
  • Build scalable data sets from available raw data and derive business metrics and insights (a rough illustrative sketch follows this description)
  • Work with engineering and business partners to define and implement the data engagement relationships required with partners
  • Understand and identify the server APIs that need to be instrumented for analytics reporting, and align the server events for execution in already established data pipelines
  • Explore and understand data sets, and identify and formulate correlational rules between heterogeneous data sources
  • Process, clean and validate the integrity of data used for analysis
  • Develop Python and shell scripts for data ingestion from external data sources for business insights
  • Work hand in hand with the DevOps team to develop monitoring and alerting scripts for the various data pipelines and jobs

Education & Experience

Minimum of a bachelor's degree, preferably in Computer Science, Information Technology or EE; relevant proven expertise is helpful.

Additional Requirements

Apple is an equal opportunity employer that is committed to inclusion and diversity. We take affirmative action to ensure equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Apple is committed to working with and providing reasonable accommodation to applicants with physical and mental disabilities.

Role Number: 200288365
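
The description above centers on Spark-based pipelines over Parquet data, validation rules, and derived business metrics. The sketch below is a minimal PySpark illustration of that kind of work, not part of the posting; the paths and column names (transaction_id, amount_usd, event_ts) are hypothetical.

    # Illustrative only: read raw Parquet, apply a simple validation rule,
    # derive a daily metric, and write the result back for downstream BI.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("payments_daily_metrics").getOrCreate()

    # Hypothetical raw transaction data landed as Parquet by an upstream ingestion job.
    txns = spark.read.parquet("/data/raw/payments/transactions")

    # Basic data-validation rule: flag rows with missing keys or non-positive amounts.
    invalid = txns.filter(F.col("transaction_id").isNull() | (F.col("amount_usd") <= 0))
    bad_rows = invalid.count()
    if bad_rows > 0:
        # In a real pipeline this would feed the monitoring/alerting scripts
        # mentioned in the description rather than just printing.
        print(f"Validation warning: {bad_rows} suspect rows detected")

    # Derive a simple daily metric from the clean rows.
    daily_metrics = (
        txns.subtract(invalid)
            .groupBy(F.to_date("event_ts").alias("event_date"))
            .agg(
                F.countDistinct("transaction_id").alias("txn_count"),
                F.sum("amount_usd").alias("gross_volume_usd"),
            )
    )

    # Write back as Parquet, partitioned by date, for analysts and visualization tools.
    daily_metrics.write.mode("overwrite").partitionBy("event_date").parquet(
        "/data/derived/payments/daily_metrics"
    )

In practice a job like this would be scheduled by a dependency-driven scheduler such as Apache Oozie, with the validation step wired into alerting rather than standard output.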
