Software Engineer for Data
Location: IN-Chennai
Jobcode: 2493061

Job Description


Software Engineer with Data Skills:


An extremely hands-on individual with, at a minimum, knowledge of software architecting and engineering in the Azure or AWS space.

Hard Skills:


Strong software development skills:

Have a solid understanding of software development principles and be able to write code in multiple programming languages. This skill is crucial to creating practical and realistic architectural designs.


Knowledge of software design patterns:

Deep understanding of software design patterns and how they can be applied in different contexts. This skill allows them to identify potential problems and provide effective solutions.
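For illustration, here is a minimal Python sketch of one classic design pattern (Strategy), which lets behaviour be swapped at runtime behind a single interface; all names are invented for the example.

```python
# Strategy pattern sketch: interchangeable pricing behaviours behind one interface.
from abc import ABC, abstractmethod

class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float:
        ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class PercentDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

def checkout(price: float, strategy: DiscountStrategy) -> float:
    # The caller picks the behaviour at runtime instead of branching on type.
    return strategy.apply(price)

print(checkout(100.0, PercentDiscount(10)))  # 90.0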


Knowledge of software testing, testing frameworks, and automated testing:

Deep understanding of testing, testing frameworks, and automated testing as applied in different contexts.

Software testing:

Have a solid understanding of testing methodologies, including manual testing, automated testing, and test-driven development. They should be able to create comprehensive test plans and execute tests to ensure the quality of the software.
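As a rough test-first illustration using pytest (run with `pytest`); the slugify function and its expected behaviour are invented for this sketch.

```python
# Test-driven sketch: the tests encode the spec, the function satisfies it.
import re

def slugify(text: str) -> str:
    """Lower-case the text and replace runs of non-alphanumerics with '-'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("  Data & ETL!  ") == "data-etl"
```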


Testing frameworks:

Testing frameworks such as Selenium, JUnit, TestNG, NUnit, and PyTest are widely used in software testing. Should be familiar with these frameworks and have experience implementing them to automate tests, validate code changes, and ensure software quality.
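A minimal Selenium sketch in Python, assuming Selenium 4+ and a local Chrome install; the target page and assertion are illustrative only.

```python
# UI-level smoke test: load a page and check its heading.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text  # simple end-to-end check
finally:
    driver.quit()
```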


Automated testing:

Automated testing involves using tools and frameworks to automate the testing process. Should have experience with continuous integration and continuous testing, which involves automatically running tests after each code change to ensure the software remains functional. They should also be familiar with test automation frameworks such as Cucumber, Behave, and Robot Framework.
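As a small Behave sketch (Behave pairs Gherkin .feature files with Python step definitions); the feature text and step names are invented for the example.

```python
# features/discount.feature (illustrative):
#   Feature: Discounts
#     Scenario: Ten percent off
#       Given a price of 100
#       When a 10 percent discount is applied
#       Then the total is 90

# features/steps/discount_steps.py:
from behave import given, when, then

@given("a price of {price:d}")
def step_price(context, price):
    context.price = price

@when("a {percent:d} percent discount is applied")
def step_discount(context, percent):
    context.total = context.price * (1 - percent / 100)

@then("the total is {expected:d}")
def step_total(context, expected):
    assert context.total == expected
```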



Experience with data modeling, data engineering, and data architecture:

A strong understanding of data modeling and data architecture is essential. Be able to design and develop data models that accurately reflect the needs of the business and ensure that the data architecture supports the company's goals.

Data processing and transformation:

Responsible for designing and implementing data processing pipelines to extract, transform, and load (ETL) data from various sources. Have experience with tools such as Apache Spark, Apache Kafka, and Azure Data Factory to build data processing pipelines.
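A minimal PySpark ETL sketch, assuming pyspark is installed; paths and column names are illustrative stand-ins (extract from CSV, transform, load to partitioned Parquet).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV with a header row.
orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

# Transform: de-duplicate, filter invalid rows, derive a partition column.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write curated Parquet partitioned by date.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")
spark.stop()
```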


Database design and management:

Have a deep understanding of database design principles and be able to design and manage databases such as MySQL, Oracle, and PostgreSQL. They should also be familiar with NoSQL databases such as MongoDB and Cassandra.
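As one small relational-design example using Python's stdlib sqlite3 module; the one-to-many schema is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.executescript("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        amount      REAL NOT NULL CHECK (amount > 0)
    );
""")
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders (customer_id, amount) VALUES (1, 250.0)")
row = conn.execute(
    "SELECT c.name, SUM(o.amount) FROM customer c "
    "JOIN orders o ON o.customer_id = c.id GROUP BY c.name"
).fetchone()
print(row)  # ('Acme', 250.0)
```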


Cloud computing:

Be familiar with cloud computing platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to leverage cloud services for data storage, processing, and analysis.
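As one small cloud-storage example, a hedged boto3 sketch for AWS S3; it assumes AWS credentials are configured, and the bucket and object keys are illustrative.

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file into a (hypothetical) data bucket.
s3.upload_file("local/report.csv", "example-data-bucket", "raw/report.csv")

# List what landed under the prefix.
resp = s3.list_objects_v2(Bucket="example-data-bucket", Prefix="raw/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```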


Big data technologies:

Expertise in big data technologies such as Hadoop, Hive, Pig, and Spark to manage and process large volumes of data.


Data warehousing:

Have experience designing and building data warehousing solutions on top of Azure and Databricks, including Azure Synapse and Databricks data warehousing concepts.


Scripting and programming:

Have strong scripting and programming skills to automate data processes and build custom tools. This involves using languages such as Python, Java, Scala, and SQL.



Knowledge of data lakes, ETL/ELT, and Data-as-a-Service:

Have extensive experience with data technologies such as data lakes, ETL/ELT, and Data-as-a-Service. This knowledge enables them to make informed decisions about the company's data architecture and how it can be used to provide value.

Databricks knowledge:

Experts in data lakes should have a strong understanding of Databricks, a cloud-based platform used for data engineering, machine learning, and analytics. They should be able to use Databricks to build and deploy data pipelines, create data models, and perform advanced analytics.


Jupyter notebooks:

Be familiar with Jupyter notebooks, which are interactive coding environments that allow users to create and share code, visualizations, and data analyses. They should be able to use Jupyter notebooks to develop and test data pipelines, create and execute machine learning models, and perform exploratory data analysis.


Spark clusters:

Have extensive experience working with Spark clusters, which are high-performance computing clusters used for big data processing. They should be able to configure and manage Spark clusters, optimize their performance, and use them to process large volumes of data.
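A minimal sketch of tuning a Spark session for cluster work; the values are illustrative, and real settings depend on cluster size and workload.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cluster-tuning-sketch")
    .config("spark.executor.memory", "4g")          # memory per executor
    .config("spark.executor.cores", "4")            # cores per executor
    .config("spark.sql.shuffle.partitions", "200")  # shuffle parallelism
    .getOrCreate()
)
print(spark.sparkContext.defaultParallelism)
spark.stop()
```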


Data ingestion and processing:

Deep understanding of data ingestion and processing techniques used in data lakes. They should be able to design and develop data pipelines that can ingest data from various sources, transform and process data using tools such as Apache Spark, and store data in data lakes such as Amazon S3, Azure Data Lake Storage, or Google Cloud Storage.


Data security:

Be familiar with data security best practices, including access control, data encryption, and network security. They should be able to design and implement data security measures to protect data in transit and at rest.
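A minimal encryption-at-rest sketch using the `cryptography` package's Fernet recipe; key handling is deliberately simplified for illustration.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a secrets manager
f = Fernet(key)

token = f.encrypt(b"patient_id=12345")  # ciphertext is safe to persist
print(f.decrypt(token))                 # b'patient_id=12345'
```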


Data governance:

Have a good understanding of data governance principles and practices. They should be able to create and enforce data policies, establish data quality standards, and ensure compliance with relevant data regulations. Have experience with data quality practices, including implementing data quality checks, to ensure the accuracy, completeness, and reliability of data.
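As a rough illustration of programmatic data quality checks with pandas; the rules and column names are invented stand-ins for real governance policies.

```python
import pandas as pd

# Sample data deliberately containing a duplicate id, a null, and a negative value.
df = pd.DataFrame({"id": [1, 2, 2], "amount": [10.0, None, -5.0]})

checks = {
    "ids are unique":       df["id"].is_unique,
    "no missing amounts":   df["amount"].notna().all(),
    "amounts are positive": (df["amount"].dropna() > 0).all(),
}

for rule, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {rule}")
```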


Cloud computing:

Have experience working with cloud computing platforms such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform. They should be able to leverage cloud services to create scalable and cost-effective data lake architectures.


Big data technologies:

Be familiar with other big data technologies such as Hadoop, Hive, Pig, and Kafka. They should be able to use these technologies in conjunction with data lakes to create robust and scalable data processing solutions.
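As one Kafka example, a hedged sketch using the kafka-python package; it assumes a broker at localhost:9092, and the topic and payload are illustrative.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 1, "amount": 250.0})
producer.flush()  # block until the message is actually delivered
producer.close()
```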




Soft Skills:


Excellent problem-solving skills:

Have excellent problem-solving skills and be able to identify and address potential problems with the software architecture before they become major issues.


Communication and collaboration:

Effective communication and collaboration skills are critical. Be able to clearly communicate complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams.


Leadership and management:

As a key member of the technical team, have strong leadership and management skills. Be able to lead technical teams and manage projects effectively, ensuring that the software architecture is delivered on time and to a high standard.

About Philips
We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. Learn more about our business. Discover our rich and exciting history. Learn more about our purpose. If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our commitment to diversity and inclusion here.

Philips
