Arcus Search's Post

Job Spotlight 📣
Kubernetes Expert | Contract | Quant Tech | London, hybrid | up to £800pd inside IR35 | Contact: Dean Mathie
Windows Automation Engineer | Permanent | Quant Tech | Dallas, hybrid | $DOE | Contact: Dean Mathie
Senior Data Engineer – SQL/Azure/Data Lakehouse | Permanent | London, UK – hybrid | Contact: Dean Mathie
BI Engineer – SQL/Power BI/DAX/Semantic Model/Azure – Data Lakehouse | Permanent | York/London, UK – hybrid | Contact: Dean Mathie
Solutions Architect – Azure | Insurance | Permanent | London, UK – hybrid | Contact: Dean Mathie
Environment & Build Lead | Permanent | Insurance | Hybrid, London | up to £80k + benefits | Contact: Paul Parker
Senior SRE | Permanent | FinTech | 4 days on-site | London | up to £105k + benefits | Contact: Paul Parker
Solution Architect | Permanent | Insurance | Hybrid, London | up to £125k + benefits | Contact: Paul Parker
#hiring
More Relevant Posts
-
Chief Risk Systems Architect 💷 £135-185k TC 📍 London

This role requires a seasoned developer eager to make an impact in a high-stakes, compliance-driven environment. Join a team shaping the future of risk technology solutions.

🐍 Python Development: Drive rapid Python development for investment risk reporting and liquidity forecasting.
📊 Data Engineering: Manage ETL processes, data pipelines, and databases for advanced risk analytics.
☁️ Cloud Integration: Transform infrastructure with cloud platforms such as AWS, Azure, or GCP for scalable systems.
🏗️ Architectural Strategy: Lead the migration from legacy models (Excel, Access) to modern frameworks.
🤖 Machine Learning & Automation: Harness ML for predictive modeling, anomaly detection, and process optimization (illustrated in the sketch below).
🤝 Stakeholder Collaboration: Partner with project teams, business stakeholders, and IT to deliver seamless solutions.

This role is designed for someone who thrives in an environment where innovation meets precision. If you're ready to lead the charge in risk technology transformation, this opportunity is for you!

#PythonDevelopment #SQL #ETL #DataEngineering #CloudComputing #AWS #Azure #GCP #InvestmentRisk #LiquidityForecasting #MachineLearning #Automation #ArchitecturalStrategy #RiskManagement #Compliance #FinancialRegulations #RiskAnalytics #AssetClasses #InvestmentTechnology #TechLeadership #RiskTech #FCA #PRA #MiFIDII #IFPR #FSMA #MarketIntegrity #FinancialStability #RiskReporting #Governance
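For readers less familiar with the anomaly-detection responsibility named above, here is a minimal, illustrative sketch using scikit-learn's IsolationForest on synthetic liquidity metrics. The column names, thresholds, and data are hypothetical and are not part of the role description.

```python
# Illustrative sketch only: flag unusual days in synthetic liquidity/risk metrics.
# All column names and values are made up for demonstration.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic daily risk features (hypothetical)
df = pd.DataFrame({
    "net_outflow": rng.normal(1.0, 0.2, 500),
    "bid_ask_spread": rng.normal(0.05, 0.01, 500),
    "var_99": rng.normal(2.5, 0.4, 500),
})
# Inject a handful of unusual days so there is something to detect
df.loc[::97, ["net_outflow", "var_99"]] *= 3

features = ["net_outflow", "bid_ask_spread", "var_99"]
model = IsolationForest(contamination=0.02, random_state=0)
df["anomaly"] = model.fit_predict(df[features])   # -1 marks an outlier

flagged = df[df["anomaly"] == -1]
print(f"{len(flagged)} days flagged for review out of {len(df)}")
```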
-
#immediatehire #linkedin

Azure Databricks Admin (Onsite)
Location: Daytona Beach, FL. No remote; must work from the Daytona office.
Duration: Long-term contract

Scope of Work
Deep understanding of the Azure cloud platform and engineering: infrastructure setup and configuration, tools and services, data access and sharing, and hands-on work. Key objectives include designing and maintaining Databricks infrastructure, optimizing data engineering workflows, and ensuring data quality and governance.
- Deep understanding of Databricks tools and products such as Delta Tables, Delta Live Tables, Unity Catalog, and serverless execution, including setup and configuration experience.
- Technical skills and expertise required for the role: proficiency in Databricks, Apache Spark, cloud platforms (Azure), programming languages (Python, Scala, Java), data modeling, and database design.
- Collaboration with other team members, such as data scientists, data engineers, and business stakeholders, with an emphasis on effective communication, documentation, and knowledge sharing within the team.
- Expected outcomes and performance indicators for the Databricks Engineer could include metrics such as data processing efficiency, data quality improvements, and successful project completion.

Skill Set
- 3-4 years of working experience on the Azure platform (infrastructure, tools, and services).
- 2-3 years of experience optimizing and tuning Databricks clusters for performance, scalability, and reliability for data engineering workloads.
- Develop and maintain ETL pipelines to extract, transform, and load data from various sources into Databricks (a minimal PySpark sketch follows this post).
- Hands-on working experience with Python/PySpark using Databricks notebooks and workspaces.
- Experience troubleshooting and resolving Databricks-related issues and providing technical support to users.
- Architect and design the Databricks deployment strategy; experience with data engineering tasks, including data processing, transformation, and integration.
- Implement Azure data engineering best practices, including data quality checks, data lineage, and data governance, within the Databricks environment.
- Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure.

Contact: kvasanth@futransolutions
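As a point of reference for the ETL work described above, below is a minimal sketch of a PySpark extract-transform-load into a Delta table, of the kind typically run in a Databricks notebook. The storage path, column names, and Unity Catalog table name are hypothetical placeholders.

```python
# Minimal Databricks-style ETL sketch (hypothetical paths, schema, and table name).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # in a Databricks notebook, `spark` already exists

# Extract: read raw CSV files landed in cloud storage (placeholder path)
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://landing@examplestorage.dfs.core.windows.net/orders/"))

# Transform: basic cleansing and a simple data-quality filter
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date"))
         .filter(F.col("amount") > 0))

# Load: write to a Delta table registered in Unity Catalog (hypothetical three-part name)
(clean.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("main.sales.orders_clean"))
```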
-
#hiring Information Technology - Data Integration Engineer, Raleigh, United States, full-time #jobs #jobseekers #careers #Raleighjobs #NorthCarolinajobs #ITCommunications
Apply: https://lnkd.in/gZ5cCwB9

Data Integration Engineer | 12+ months contract | 100% remote

Skills required:
- Knowledge of pipelines, particularly Azure DevOps pipeline work, and of managing database backups and restores; the role involves a lot of database migration across projects.
- Assigned to a project, and each project has different stages, one of which is provisioning users; needs to understand how authentication works.
- Integration and preparation: understanding how file transfers work and how SFTP gets set up, mainly data and file transfer protocols.
- Database manipulation: receive a database and run a series of pipelines to manipulate the data and add it into the configuration for the specific project, then follow up with the necessary people.
- Adding users and refreshing the database 2-3 times during the project; maintaining these tasks until the project goes live, and preparing the scripts for go-live (a minimal refresh sketch follows this post).
- Running pipelines and creating SQL scripts to update information for the user or customer.
- Knowledge of Azure, SQL, Azure databases, Azure concepts, T-SQL, and Microsoft SQL query language; knowledge of how to upload data into the cloud.
- Able to query or troubleshoot pipelines in ADO/Azure DevOps and write pipeline files.
- Understanding how a cloud application works, and managing cloud applications, will set candidates apart.
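To make the database-refresh duties above concrete, here is a hedged sketch of a scripted restore plus a post-restore configuration update driven from Python. The connection string, backup path, database, and table names are hypothetical, and it assumes a SQL Server instance reachable over ODBC rather than a PaaS Azure SQL Database.

```python
# Hypothetical database refresh sketch: restore a project database from a backup,
# then run a small post-restore update. Requires `pip install pyodbc` and a SQL
# Server instance; all names and credentials below are placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=project-sql.example.com;UID=deploy_user;PWD=change_me;TrustServerCertificate=yes"
)

restore_sql = """
RESTORE DATABASE ProjectDb
FROM DISK = N'/var/opt/mssql/backups/ProjectDb.bak'
WITH REPLACE, RECOVERY;
"""

# RESTORE cannot run inside a transaction, so connect with autocommit enabled
with pyodbc.connect(conn_str, autocommit=True) as conn:
    cursor = conn.cursor()
    cursor.execute(restore_sql)

    # Post-restore step: point the refreshed database at the right environment
    cursor.execute(
        "UPDATE ProjectDb.dbo.ProjectConfig SET environment = ? WHERE config_id = ?",
        "staging", 1,
    )
```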
-
We take pride in our exceptional team of engineers who consistently deliver outstanding results for our clients. Today, we are excited to introduce one of our star data engineers, who is ready to take on a new challenge!

Meet Our Expert Data Engineer (AWS Certified)
Our data engineer has extensive experience in building robust data pipelines, transforming data formats, and leveraging AWS services to drive business success. Here's a snapshot of what he brings to the table:

Project Expertise:
- Data Pipeline Development: Recently completed a complex project converting unstructured data into CSV, integrating with AWS Glue, and processing data using Lambda and Step Functions (a minimal sketch of this step follows this post).
- Infrastructure as Code: Expert in transitioning manual setups to Terraform, ensuring seamless deployment across sandbox, staging, and production environments via CI/CD.
- AWS Integration: Proficient in utilizing AWS Secrets Manager, AWS Macie, and other essential AWS services for secure and efficient data processing.
- Alerting and Monitoring: Implemented robust alerting systems to monitor data pipelines, ensuring timely issue detection and resolution to maintain data integrity and operational efficiency (CloudWatch, CloudTrail, Security Hub).

Technical Skills:
- AWS Glue, Lambda, S3, Terraform, CI/CD
- Python, SQL, Data Transformation
- AWS Security Tools (Secrets Manager, Macie)

Proven Track Record:
- Successfully optimized data processing workflows, resulting in improved efficiency and cost savings.
- Delivered high-quality solutions on time, meeting stringent project requirements and client expectations.

Why Partner with Us?
We understand the importance of reliable and scalable data solutions. By partnering with us, you'll benefit from:
- Tailored Solutions: Customized data engineering services to meet your unique business needs.
- Expert Team: Access to skilled engineers with a proven track record in delivering successful projects.
- Client-Centric Approach: Dedicated to ensuring client satisfaction and achieving project goals.

Ready to elevate your data operations? Let's connect and discuss how our expert data engineer can help you achieve your business objectives. Reach out to Sujata Savekar to get connected to our engineering team!

#DataEngineering #AWS #CloudSolutions #DataPipelines #AWSGlue #DataTransformation #Hiring #ProjectOpportunity #Data #jobs #project
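The unstructured-to-CSV conversion mentioned above can be pictured as a small Lambda handler that a Step Functions state might invoke. This is a hedged illustration, not the engineer's actual project code; the bucket, key prefixes, and event shape are hypothetical.

```python
# Illustrative AWS Lambda handler: convert a JSON-lines object in S3 to CSV.
# Bucket names, key layout, and the event shape are hypothetical placeholders.
import csv
import io
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Assumed state input from Step Functions: which object to convert
    bucket = event["bucket"]
    key = event["key"]                      # e.g. "raw/2024/09/records.jsonl"

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    records = [json.loads(line) for line in body.splitlines() if line.strip()]

    # Flatten to CSV using the union of keys seen across records
    fieldnames = sorted({k for rec in records for k in rec})
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames, restval="")
    writer.writeheader()
    writer.writerows(records)

    csv_key = "converted/" + key.rsplit(".", 1)[0] + ".csv"
    s3.put_object(Bucket=bucket, Key=csv_key, Body=out.getvalue().encode("utf-8"))

    # Returned value becomes the input of the next state (e.g. a Glue job trigger)
    return {"bucket": bucket, "csv_key": csv_key, "rows": len(records)}
```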
-
Hi Connections, I am hiring an Azure Databricks Admin for one of our clients in Daytona Beach, FL (day-one onsite).

Position: Azure Databricks Admin (local candidates only)
Location: Daytona Beach, FL (day-one onsite)
Duration: Long-term contract
If interested, please reach me at ☎ 609-945-3918 or 📧 kkunjan@futransolutions.com

Position Description:
- Scope of work: deep understanding of the Azure cloud platform and engineering, infrastructure setup and configuration, tools and services, data access and sharing, and hands-on work.
- Key objectives include designing and maintaining Databricks infrastructure, optimizing data engineering workflows, and ensuring data quality and governance.
- Deep understanding of Databricks tools and products such as Delta Tables, Delta Live Tables, Unity Catalog, and serverless execution, including setup and configuration experience.
- Technical skills and expertise required for the role: proficiency in Databricks, Apache Spark, cloud platforms (Azure), programming languages (Python, Scala, Java), data modeling, and database design.
- Collaboration with other team members, such as data scientists, data engineers, and business stakeholders, with an emphasis on effective communication, documentation, and knowledge sharing within the team.
- Expected outcomes and performance indicators for the Databricks Engineer could include metrics such as data processing efficiency, data quality improvements, and successful project completion.

Skill Set:
- Azure Data Architect with 3-4 years of strong Azure Databricks experience, Azure cloud, and experience with ETL and Python.
- 3-4 years of working experience on the Azure platform (infrastructure, tools, and services).
- 2-3 years of experience optimizing and tuning Databricks clusters for performance, scalability, and reliability for data engineering workloads.
- Develop and maintain ETL pipelines to extract, transform, and load data from various sources into Databricks.
- Hands-on working experience with Python/PySpark using Databricks notebooks and workspaces.
- Experience troubleshooting and resolving Databricks-related issues and providing technical support to users.
- Architect and design the Databricks deployment strategy; experience with data engineering tasks, including data processing, transformation, and integration.
- Implement Azure data engineering best practices, including data quality checks, data lineage, and data governance, within the Databricks environment.
- Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure.

#azuredatabricksarchitect #azuredatabricksadmin #azuredatabricks #etl #python
-
🚀 Data Engineering: Crafting Solutions for the Future 🚀

Data is the fuel that powers modern businesses, and as a Data Engineer, I'm dedicated to designing and implementing solutions that make that data work smarter, faster, and more efficiently.

🔧 From building scalable data pipelines with Azure Data Factory and Databricks to optimizing real-time data processing with AWS Glue, I've had the privilege to work on projects that drive innovation and unlock the potential of big data.

✨ Recently, I completed a project where I enhanced real-time data analytics capabilities, reducing data processing times and empowering business leaders to make quicker, more informed decisions. It's incredibly rewarding to see the impact of well-engineered data solutions in action!

📢 I'm currently open to new C2C/C2H opportunities and excited to collaborate with teams looking to drive their business forward with the power of data. If you're looking for a data expert with hands-on experience in cloud platforms and real-time data pipelines, let's connect!

Always eager to connect with fellow data enthusiasts and professionals – feel free to reach out! 😊

#DataEngineering #OpenToWork #Azure #AWS #Databricks #BigData #RealTimeAnalytics #DataPipelines #C2C #C2H #CloudSolutions #DataTransformation
-
Senior/Lead DBT Developer with AWS | Remote role
Experience: 12+ years (must)
Visa: USC/H4 EAD/TN Visa/H1B (PP is a must for all except USC)

The client is looking for a Senior/Lead DBT Developer with good AWS experience; some DBT admin experience is also required.

Technical Skills:

DBT Proficiency:
- Model development: experience creating complex DBT models, including incremental models, snapshots, and documentation; ability to write and maintain DBT macros for reusable code.
- Testing and documentation: proficiency in implementing DBT tests for data validation and quality checks; familiarity with generating and maintaining documentation using DBT's built-in features.
- Version control: experience managing DBT projects using Git, including implementing CI/CD processes from scratch.

AWS Expertise:
- Data storage solutions: in-depth understanding of AWS S3 for data storage, including best practices for organization and security; experience with AWS Redshift for data warehousing and performance optimization.
- Data integration: familiarity with AWS Glue for ETL processes and orchestration; experience with AWS Lambda for serverless data processing tasks (nice to have).
- Workflow orchestration: proficiency in using Apache Airflow on AWS to design, schedule, and monitor complex data flows; ability to integrate Airflow with AWS services and DBT models, such as triggering a DBT model or EMR job, or reading from S3 and writing to Redshift (a minimal sketch follows this post).
- Data lakes and data warehousing: understanding of the architecture of data lakes vs. data warehouses and when to use each; experience with Amazon Athena for querying data directly in S3 using SQL.
- Monitoring and logging: familiarity with AWS CloudWatch for monitoring pipelines and setting up alerts for workflow failures.
- Cloud security: knowledge of AWS security best practices, including IAM roles, encryption, and DBT profile access configurations.

Programming Skills:
- Python: proficiency in Pandas and NumPy for data analysis and manipulation; ability to write scripts for automating ETL processes and scheduling jobs using Airflow; experience creating custom DBT macros using Jinja and Python, allowing for reusable components within DBT models; knowledge of how to implement conditional logic in DBT through Python.
- SQL: advanced SQL skills, including complex joins, window functions, CTEs, and subqueries; experience optimizing SQL queries for performance.

Apply to get considered at reena@ettalent.net
#hiring
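To illustrate the Airflow-plus-DBT orchestration described above, here is a hedged sketch of an Airflow DAG that runs `dbt run` followed by `dbt test`. The DAG id, schedule, and project path are hypothetical; it assumes Airflow 2.4+ and dbt are installed on the worker and that dbt profiles are already configured.

```python
# Hypothetical Airflow DAG: build and test dbt models on a daily schedule.
# DAG id, schedule, and DBT_DIR are placeholders; assumes Airflow 2.4+ and dbt
# available on the worker, with profiles.yml already set up for the warehouse.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dbt/analytics_project"   # hypothetical project location

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",          # run once a day, after raw loads land in S3/Redshift
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run",
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )

    dbt_run >> dbt_test   # only test once the models have built successfully
```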
-
Great to see Daniel Thornton, a highly experienced recruiter, having a go at the incredibly low rates on offer to contractors. See the comments of the attached post. Clients ultimately lose offering these low rates - the top people are unlikely to work for them. And every formerly high earning contractor I knew in my contracting days has left the market* - they're either retired, perm or taken their skills to other countries. #ir35 *a market requires buyers and sellers, so I use the term loosely, as there are almost no buyers remaining, so no functioning market https://lnkd.in/epSMnCmi
Urgent new contract requirement for a Lead Data Engineer (GCP): 3-month contract, inside IR35, on a remote basis. Looking to get a specific project up and running that has ground to a halt. We need skills in GCP, Python, DBT, and Google Analytics.
Day rate: £500-£550 per day.
Please get in touch with Rebecca Myers for more information. We can interview next week.
-
#hiring #w2 #remote Microsoft Fabric ETL Developer - Fully Remote
Email: ajay@foxprotech.com

Skills:
- High proficiency in Microsoft Fabric and related ETL tools (e.g., Azure Data Factory).
- Knowledge of database systems (e.g., SQL Server, Azure SQL Database, Synapse Analytics) and understanding of data warehousing concepts and architecture.
- Experience with data modeling and schema design.
- Familiarity with programming languages used in ETL processes (e.g., Python, PySpark).
- Strong understanding of data engineering principles, including data modeling, data transformation, and data optimization.
- Strong SQL skills for data extraction, transformation, and querying.
- Knowledge of accounting principles and logic is highly beneficial.

#hiringalert #hiringnow #w2 #w2jobs #MicrosoftFabric #ETL #etldeveloper #azure #azuresynapse #dataengineering #sql #remote #remotejobs #remotehiring #remotework #fullyremote
Comment from a Consultant, DevOps / Management (3w): I'm interested