
Cloud Data Architect
Location: US-NY-New York City
Jobcode: 3605951

Role: Cloud Data Architect
Location: Onsite in NYC
Hire Type: Full time / Permanent

Job Summary:
As a Cloud Data Architect, you are responsible for designing and implementing scalable, secure data solutions using cloud technologies. You will work with business stakeholders and data analysts to understand data requirements and translate them into logical and physical data models. You will also leverage your expertise in cloud services, data engineering, and data governance to optimize the performance, reliability, and quality of data pipelines and platforms.

Years of Experience Needed:
- Experience level: 15+ years overall, with at least 5 years in data architecture, data engineering, or data analysis.
- Extensive experience architecting and engineering data warehouse, data lake, ODS, and OLTP data platforms and processing layers on cloud and on-premises platforms.
- Good knowledge of data management, with expertise in building Spark components for data file ingestion and data pipelines that validate, standardize, process, cleanse, transform, and store data (see the sketch following this listing).
- Expertise in data engineering with big data technologies, both on-premises and on cloud platforms, specifically AWS, Azure, and GCP.
- Knowledge of data modeling techniques and best practices, such as ER diagrams, dimensional modeling, and data normalization.
- Familiarity with data quality, data security, and data governance standards and frameworks.
- Ability to communicate effectively with technical and non-technical stakeholders.

Technical Skills (Tools & Technologies Required):
- Architecting BI / DW / data lake platforms for large enterprises.
- Data modeling for OLTP/OLAP applications: entity-relationship modeling, domain modeling, dimensional modeling, modeling data in NoSQL and GraphDB, and concept modeling for GraphDB.
- Hadoop ecosystem: HDFS, Hive, HBase, Spark, Hue/Ambari, Impala, Sqoop, Kafka.
- Hadoop distributions: Cloudera, Hortonworks, MapR, Databricks.
- AWS data platform: EMR, EC2, Kinesis, Redshift, RDS, DMS, CloudWatch.
- Experience working on Databricks.
- Apache Airflow; messaging with Apache Kafka.

Secondary Skills:
- ETL technologies: Informatica, Ab Initio, DataStage, Talend.
- Databases: Oracle, DB2, MySQL, Postgres, Teradata, MongoDB, Cassandra, Snowflake.

Process Skills:
- Well versed in big data tools and technologies, including Hadoop ecosystem components such as Hive, HBase, Sqoop, Kafka, Hue, and Spark, with proficiency in at least one of Java, Python, or Scala.
- Good conceptual understanding of SMP and MPP systems and of data processing on these platforms, using lambda and kappa architectures for streaming and batch processing.
- Experience working across the AWS ecosystem as a whole, including migrating ETL pipelines from Talend, Informatica, or Ab Initio to Hadoop, Glue, or Spark.
- Expertise in handling structured, semi-structured (JSON, XML, etc.), and unstructured data, including data schema drift, and understanding of big data file formats such as Avro, Parquet, and ORC.
- Knowledge of modern and traditional database systems (NoSQL vs. RDBMS).
- Knowledge of distributed file systems is a must.

Behavioral Skills:
- Competent to design and develop architectures for data migration, data ingestion, and data storage; build data lakes and their various layers; and implement ETL using Hadoop tools like Spark and AWS tools like Glue and EMR.
- Well versed in performance tuning of ETL pipelines, including Spark.
- Architecture and design experience building highly scalable, enterprise-grade applications.
- Expert in designing data integrations using ETL and other data integration patterns.
- Advanced knowledge of business processes.
- Able to guide the client and management on which tools and technologies should be applied in a particular scenario for best utilization and cost optimization.

Certifications Needed:
- Education qualification: B.Tech, BE, BCA, MCA, M.Tech, or an equivalent technical degree from a reputed college.

Skills:
- PRIMARY COMPETENCY: Data Engineering
- PRIMARY SKILL: Data Architect
- PRIMARY SKILL PERCENTAGE: 100
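As an illustration of the Spark pipeline pattern referenced above (ingest, validate, standardize, cleanse, transform, store), here is a minimal PySpark sketch. The paths, schema, and column names are hypothetical placeholders, not part of the role description.

# Minimal sketch of the ingest/validate/cleanse/store pattern, assuming a
# JSON feed of order records. All paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("ingest-cleanse-store").getOrCreate()

# Explicit schema avoids fragile type inference on the raw feed.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("created_at", TimestampType()),
])

# Ingest: semi-structured JSON landed by an upstream feed (hypothetical path).
raw = spark.read.schema(schema).json("s3://landing-bucket/orders/")

# Validate and cleanse: drop records missing the key, standardize strings,
# default missing amounts, and deduplicate on the key.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("customer_id", F.trim(F.col("customer_id")))
       .withColumn("amount", F.coalesce(F.col("amount"), F.lit(0.0)))
       .dropDuplicates(["order_id"])
)

# Store: columnar Parquet, partitioned by ingest date for downstream pruning.
(clean.withColumn("ingest_date", F.to_date(F.col("created_at")))
      .write.mode("append")
      .partitionBy("ingest_date")
      .parquet("s3://curated-bucket/orders/"))

The same skeleton extends naturally to the other formats named in the posting: swapping .json() for .parquet() or an Avro reader, and Parquet output for ORC, changes only the read/write calls.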

Posted by: tanishasystems



 