
Recruiter
Anastasiya Ganulich
aganulich@binariks.com
Requirements:
5+ years of experience in a Data Engineer role
Strong proficiency in Python, with the ability to mentor and assist the team in solving complex Python-related questions
Experience in data visualization for complex datasets, especially large-scale datasets and time-series data, with a strong understanding of tools and techniques such as Power BI
Expertise in SQL, PySpark, and Dask for data engineering and analysis
Proficiency in working with relational and cloud databases, including PostgreSQL and Redshift
Familiarity with cloud technologies (AWS, Snowflake)
Experience with multimodal time-series data from biosensors (e.g., accelerometer, ECG, PPG, EEG)
Excellent communication skills for collaborating and presenting technical concepts
Responsibilities:
Design, build, and maintain data pipelines, ensuring seamless integration and high-performance processing of large-scale datasets
Design and build Power BI dashboards to present complex datasets to our stakeholders
Provide Python expertise, supporting team members with queries and troubleshooting, while driving best practices in code quality and development
Communicate results through reports, presentations, and documentation
Will be a plus:
Familiarity with LLMs and the ability to drive technical innovation by implementing generative AI technologies such as RAG (Retrieval-Augmented Generation) and exploring their applications to digital health data
Knowledge of GPU computing, high-performance computing, and cloud-native applications
Knowledge of containerization tools like Docker and their application in deploying data workflows
Experience managing and optimizing cloud infrastructure (AWS & Snowflake), including databases, Kubernetes clusters, AWS Bedrock, Athena, and S3 integration
Familiarity with cardiovascular, neuroscience, or epidemiology data
Experience in FDA submissions, validation, and working within GxP environments