Big Data ( Hadoop) Developer
Location:
US-CA-El Segundo
Jobcode:
11d36502ab59fde4c021a6bd5751269c-122020

EzData Systems, Inc., 2025 Gateway Place, Ste. 392, San Jose, CA 95110.



Date of Job Posting: November 3, 2020

Job: Big Data (Hadoop) Developer



Job Id: BDH-Dev #11032020



Work Location: El Segundo, California



Duration: 36 months

Start Date: March 31, 2021

Payrate: Market rate



EzData Systems, Inc. is a San Jose, CA-based IT solutions and services organization serving customers throughout North America, with a focus on Big Data Analytics and Cloud BI. We are currently looking for an experienced Big Data (Hadoop) Developer to join our Big Data Analytics team.



Role/Responsibilities:




  • Collaborate with the BI teams, project managers, and analysts to finalize business needs and build data integration and visualization applications using Spark, Tableau, SAS, Teradata, and ETL tools such as Informatica.
  • Design the Big Data strategy to produce deep, actionable analytics, monetize data as a product, and expand predictive analytics through statistical modeling for targeted customer personalization in the AT&T digital apps.
  • Design and build mission-critical applications, namely a comprehensive overview of customer viewership, customer preferences, and customer retention, building robust cloud analytics with state-of-the-art AI/ML models such as Inverse Reinforcement Learning, Deep Reinforcement Learning, and probabilistic inference.
  • Design, develop, and deploy Business Intelligence solutions using Big Data frameworks such as Spark, MapReduce, Pig, Hive, HBase, and Sqoop, and languages such as Java, Scala, Python, and shell scripting.
  • Play a key role in delivering ML/DL projects: building and validating predictive models, and deploying completed models using Spark ML, AML, and DL frameworks (TensorFlow, Theano, CNTK, Keras).
  • Lead implementation of data-tagging solutions on various client properties, including native mobile applications, websites, connected devices, and set-top applications.
  • Implement Kafka high-level consumers to pull data from Kafka partitions into HDFS, and implement POCs to migrate MapReduce programs into Spark transformations using Spark and Scala.
  • Develop custom input formats and data types to parse and process unstructured and semi-structured input data, mapping them into key-value pairs to implement business logic in MapReduce.
  • Build sandbox solutions by designing data pipelines from AWS S3 to the Hadoop data lake infrastructure using Kafka and Data Router to perform ad hoc and real-time data analysis for users.
  • Build reports and capture insights for all streaming metrics across multiple products and devices, including engagement, performance, customer behavior, acquisition, and retention.
  • Play a key role in building executive dashboards and revenue-critical business reports (ad hoc, forecasting) on viewership data to monitor performance, providing insights and actionable recommendations to media/digital marketing leadership and product teams.
  • Manage and maintain the CDO (Chief Data Office) infrastructure, including installation and configuration of advanced analytics tools, to build a user-friendly Open Video platform.



Required qualifications:



The position requires a Bachelor’s degree in Computer Science, Information Technology, Computer Information Systems, or a combination of education and experience equivalent to a Bachelor’s degree in one of the aforementioned subjects. The candidate must have at least 2-3 years of hands-on industry experience developing Big Data applications using Spark, MapReduce, Pig, Hive, HBase, Sqoop, Java, Scala, Python, and shell scripting. In-depth hands-on experience with Spark ML, AML, MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras, NoSQL (Cassandra, MongoDB), Tableau, SAS, Teradata, and ETL tools (Informatica) is a must. The candidate must have worked on BI and Data Warehouse/Data Mart projects using Big Data technologies (MapReduce, Pig, Hive, HBase, Sqoop, Spark) and Machine Learning, building BI dashboards with MicroStrategy/Power BI/Tableau, Teradata, Vertica, Informatica, and cloud computing. Experience with Spark ML, AML, scikit-learn, NumPy, and SciPy is also required.



 




If interested, please send your resume to (e-mail removed).


EzData Systems, Inc.
