Would you like to be part of an exciting start-up project for a well-known, international, and stable company in Brno? Are you an experienced SW engineer with knowledge of Java, and do you have experience with programming data pipelines and ETL technologies? If YES, this is a great career opportunity for you!
Analyzes, designs, develops, and maintains software for the organization's products and systems. Performs system integration of software and hardware to maintain throughput and program consistency. Develops, validates, and tests software structures and user documentation. Work is evaluated upon completion to ensure objectives have been met. Determines and develops approaches to solutions.
Responsibilities:
- Big Data Processing:
- Design, develop, and maintain scalable data pipelines using technologies such as Apache Airflow and Google Cloud Composer.
- Implement ETL (Extract, Transform, Load) processes to ingest, clean, and process large volumes of data from various sources.
- Strong knowledge of object storage such as Google Cloud Storage, or similar (S3, HDFS).
- Strong knowledge of streaming ETL technologies such as Apache Beam, Google Dataflow, Apache Spark, or Databricks.
- Strong knowledge of SQL/OLAP databases such as BigQuery, or similar (Hive).
- Optimize data processing workflows to ensure high performance and efficiency.
- Good to have: knowledge of Cloud Functions, Cloud Build/Jenkins, and cloud networking.
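For illustration only (this sketch is not part of the role description, and the class and method names are invented), a batch ETL step of the kind described above, extracting raw records, cleaning them, and loading the result, can be written in core Java with the Stream API:

```java
import java.util.List;
import java.util.stream.Collectors;

/** Minimal ETL-style sketch: extract raw records, clean them, "load" the result. */
public class EtlSketch {

    /** Transform step: trim whitespace, drop blank records, normalize case. */
    public static List<String> clean(List<String> raw) {
        return raw.stream()
                .map(String::trim)          // normalize whitespace
                .filter(s -> !s.isEmpty())  // drop empty records
                .map(String::toUpperCase)   // canonical form
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> raw = List.of("  brno ", "", "java", "  etl  "); // "extract"
        List<String> cleaned = clean(raw);                            // "transform"
        cleaned.forEach(System.out::println);                         // "load": here, just print
    }
}
```

In a real pipeline, frameworks like Apache Beam or Spark apply the same map/filter shape to distributed collections rather than in-memory lists.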
- Desired Skills:
- REST API development with Spring Boot and WebFlux.
- Knowledge of web security.
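As a dependency-free illustration of the kind of REST endpoint the role involves (the role itself uses Spring Boot / WebFlux; this sketch instead uses the JDK's built-in `com.sun.net.httpserver` so it is self-contained, and all names here are invented):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

/** Dependency-free sketch of a tiny REST endpoint using the JDK's built-in
 *  HTTP server, illustrating only the request/response shape. */
public class HealthEndpoint {

    /** Start a server exposing GET /health on the given port (0 = ephemeral). */
    public static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/health", exchange -> {
                byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            return server;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Fetch the health endpoint body, wrapping checked exceptions. */
    public static String fetchHealth(int port) {
        try {
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://localhost:" + port + "/health")).build();
            return HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        HttpServer server = start(0);
        System.out.println(fetchHealth(server.getAddress().getPort()));
        server.stop(0);
    }
}
```

In Spring Boot the same endpoint would be a `@GetMapping`-annotated controller method, with the framework handling the server wiring.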
- Collaboration and Communication:
- Work closely with data scientists, engineers, and business analysts to understand data requirements and deliver solutions.
- Participate in code reviews, provide constructive feedback, and ensure coding standards are met.
- Communicate effectively with team members and stakeholders to ensure project goals and timelines are met.
Qualifications:
- Strong knowledge of core Java, Spring Boot, REST API development, and functional programming concepts.
- Strong knowledge of Google Pub/Sub or Kafka.
- Strong knowledge of streaming ETL technologies such as Apache Beam, Google Dataflow, or Apache Spark.
- Strong knowledge of object storage such as Google Cloud Storage, or similar (S3, HDFS).
- Strong knowledge of GKE and Docker.
- Good knowledge of Cloud Functions, Cloud Build/Jenkins, and cloud networking.
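To illustrate the publish/subscribe pattern behind Google Pub/Sub and Kafka mentioned above, here is an invented, in-memory sketch in core Java (it is not a client for either system, only a picture of the topic/subscriber relationship):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

/** In-memory publish/subscribe sketch illustrating the pattern behind
 *  Google Pub/Sub or Kafka (invented example, not a real client). */
public class MiniBroker {
    // topic name -> registered subscriber callbacks
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    /** Register a subscriber callback for a topic. */
    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    /** Deliver a message to every subscriber of the topic. */
    public void publish(String topic, String message) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(message));
    }

    public static void main(String[] args) {
        MiniBroker broker = new MiniBroker();
        broker.subscribe("orders", msg -> System.out.println("received: " + msg));
        broker.publish("orders", "order-42");
    }
}
```

Real brokers add the parts this sketch omits: durable storage, partitioning, delivery acknowledgements, and consumers running in separate processes.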
If the position interests you, apply directly or contact me for more information about the role!