We're #hiring a new DevSecOps Engineer - Full time in Lexington, Massachusetts. Apply today or share this post with your network.
-
Sharing the original post for reach: Shawn Hunter Turnbull is good people, and those are good roles at JFrog. As part of my "sharing for reach" value add for 2025, each time I share a post like this I'll also add the Datadog integration for that software offering. It should be a busy year in that regard: Datadog has over 800 out-of-the-box integrations (a healthy and growing number), such as this one for JFrog, significantly reducing time to value. Datadog - JFrog Integration: https://lnkd.in/eVvjq7HX All Datadog Integrations: https://lnkd.in/gjdKEyt6 Simple, not simplistic. Datadog is unified observability and security. If you have challenges correlating operational, development, security, AND experience signals, team to team, system to system (and most teams do), we can help almost immediately. Ready for 2025! See you there, friends.
Customer Success Executive focused on profitability and long term growth | Technology can fuel human connections - that's success. | Ex-Red Hat | Open Source | SaaS | PaaS | DevSecOps | Security | Databases | AI | MLOps
If you are a Developer Support Engineer, Solutions Architect, or Site Reliability Engineer in the DevSecOps space (or you would like to be one), JFrog is hiring! Reach out to me! #hiring #SRE #DevOps #DevSecOps #SupportEngineer #SolutionsArchitect #SalesEngineer
-
🚀 Bridging the Gap in DevSecOps, Platform, and SRE Roles!
In my journey of screening candidates for DevSecOps, Platform, and SRE roles, one thing has become clear: there's a huge gap between what the industry needs and the skills most candidates bring to the table. These roles aren't just about certifications or buzzwords; they require real production-level expertise in:
🔥 Cloud Infrastructure Design
🔥 Infrastructure as Code (IaC)
🔥 Automation
🔥 Security at Scale
Yet the hands-on experience critical to excelling in high-pressure, real-world environments is often missing.
💡 My Mission
I'm on a mission to build a brigade of world-class professionals: a team of individuals who are not just skilled but battle-tested in production environments. Together, we'll master the art of building resilient, secure, and scalable systems.
If you're passionate about transforming your career and want to gain true, production-grade expertise, let's connect!
📩 Send me your CV with a brief introduction, and let's build the future of DevSecOps, one expert at a time.
🌟 Ready to level up? Let's make it happen! #DevSecOps #platformengineering #linkedin #jobs
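To make "Security at Scale" less of a buzzword, here is a minimal sketch of the kind of production automation these roles tend to involve: a Python script that audits every S3 bucket in an AWS account for a missing public-access block. It assumes boto3 is installed and AWS credentials are configured in the environment; the remediation step is just a log line here, and the function name is our own.

```python
# audit_s3_public_access.py - minimal sketch of security-at-scale automation:
# flag S3 buckets that have no public-access block configured. Assumes boto3
# is installed and AWS credentials are available in the environment.
import boto3
from botocore.exceptions import ClientError

def find_unguarded_buckets() -> list[str]:
    """Return names of buckets with no PublicAccessBlock configuration."""
    s3 = boto3.client("s3")
    unguarded = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            # AWS raises this error code when no configuration exists at all.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                unguarded.append(name)
            else:
                raise
    return unguarded

if __name__ == "__main__":
    for name in find_unguarded_buckets():
        # In a real pipeline this would open a ticket or auto-remediate.
        print(f"Bucket without public-access block: {name}")
```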
-
Have you missed a call from me today? If so, it's either because I needed advice on building my console table OR I wanted to see if you're interested in hiring a Multi-Cloud DevSecOps Engineer! Luckily, Keith's home now to handle the building, so I'm free to put my feet up and reach out to all my contacts (it's a bit late, so I'll drop you a message now instead...) until I secure this candidate's next role.
Experience: It's hard to summarise this without writing out his full CV, because there's nothing he hasn't done. He has always been genuinely responsible for, and hands-on in, developing, building, releasing, and deploying platforms across cloud providers and bare metal. It's also worth mentioning that he is exceptional in highly regulated environments.
So, if you're hiring or interested in viewing a video from our pre-screening round, drop me a message and I can send it across and run you through his experience! #BuildProcess #KnowYourMarket #DevSecOps #Cloud #Hiring
-
CrowdStrike is looking for DevOps Engineers now; it seems the world understands our importance 😍
CrowdStrike, a cybersecurity giant, recently faced a massive disruption due to an update gone wrong. Millions of Microsoft Windows machines went down, and the impact was huge (and still is). 🤯
Now, they're posting job openings for DevOps engineers to prevent this from happening again. Honestly, I'm not surprised. A strong DevOps team is like having a superhero on call, always ready to save the day with sound practices for testing, building, and releasing. 🦸♀️
This incident is a harsh reminder of how important DevOps is for business continuity. It's not just about tech; it's about preventing chaos and keeping things running smoothly.
So, to all the DevOps engineers out there, your work is more important than ever! 👏
What are your thoughts? Let me know in the comments! #DevOps #CrowdStrike #Cybersecurity #Tech #BusinessContinuity #DevOpsEngineers #Hiring #TechCareers #Innovation #Teamwork
-
An #infra / #devops opening with one of the more interesting companies we do AI work with! For folks curious about jumping from helping FAANG sell more ads and toys faster to using data & AI to save lives, while getting an inside look at how government really works, this is worth a look.
We're Hiring: Senior DevOps Engineer! Disaster Tech is looking for a seasoned DevOps Engineer to join our innovative team in Alexandria, VA. This hybrid role requires U.S. citizenship and eligibility for security clearance, ideally with prior clearance up to the SECRET level. Apply Today! https://hubs.ly/Q02GPS6W0
-
🚨 DevSecOps Engineer Available for New Role! 🚨
A good friend of mine, and one of the best engineers I know, has recently moved back home to Ireland from Switzerland, and he is actively looking for a new opportunity. He's surprisingly not on LinkedIn, but he trusts that I will find him the *right* role before the end of the month! So, before I start badgering my clients (if you have a missed call from me, this is why), I'm reaching out to you lovely folks here!!!
Over the years I've known him, he has always played pivotal roles in complex, microservices-based projects and cloud transformations, and he would be brought in by clients to focus on streamlining and automating infrastructure. In his recent projects, he has:
Migrated applications from private to public cloud environments (AWS), ensuring smooth transitions with minimal downtime while also improving cost efficiency and scalability.
Led the CI/CD process for 40+ microservices, ensuring continuous integration, continuous regression testing, and automated validation for deployments across multiple environments. (Personally, I couldn't fault his work here, and neither could the team.)
Implemented Kubernetes and Helm charts for automating deployments in production environments, significantly reducing manual work and human error in release processes.
He has always helped clients with their end-to-end cloud journey, building infrastructure from the ground up and automating their entire deployment pipelines using Terraform, Docker, and Ansible. BUT, most importantly, he has built security into the foundations of everything!!
Are you interested? If so, drop me a message and we can book you in for a call! #DevSecOps #KnowYourMarket #Hiring #DevOps #Migration
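As a hedged illustration of what leading CI/CD for 40+ microservices can look like, here is a minimal sketch of a deployment driver that promotes a set of Helm releases through environments in order and fails fast if any rollout does not become healthy. The service names, chart paths, values files, and environment list are all hypothetical, not details from the candidate's actual projects.

```python
# deploy.py - minimal sketch of a multi-environment Helm rollout driver.
# Assumes helm is installed and kube contexts/namespaces are configured;
# all service, chart, and environment names below are hypothetical.
import subprocess
import sys

ENVIRONMENTS = ["dev", "staging", "prod"]          # promote in this order
SERVICES = {
    "orders-api": "charts/orders-api",             # release name -> chart path
    "billing-worker": "charts/billing-worker",
}

def helm_deploy(release: str, chart: str, env: str) -> None:
    """Install or upgrade one release; --atomic rolls back on failure."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", env,
            "--values", f"values/{env}.yaml",
            "--atomic",          # roll back automatically if the upgrade fails
            "--wait",            # block until resources report ready
            "--timeout", "5m",
        ],
        check=True,              # raise CalledProcessError on non-zero exit
    )

if __name__ == "__main__":
    for env in ENVIRONMENTS:
        for release, chart in SERVICES.items():
            print(f"Deploying {release} to {env}...")
            try:
                helm_deploy(release, chart, env)
            except subprocess.CalledProcessError:
                # Fail fast: never promote a broken release to the next env.
                sys.exit(f"Deployment of {release} to {env} failed; stopping.")
```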
-
SRE team spends 2 hours debugging access issues. Every. Single. Week. (But it doesn't have to be this way.)
In DevOps and SRE, access management is critical.
✅ Engineers need secure, seamless access to infrastructure.
✅ You need to avoid downtime and reduce incident response time.
But let's be honest:
❌ Manual provisioning wastes time.
❌ Poor access control causes security gaps.
❌ Endless login workflows frustrate everyone.
Enter Okta for DevOps. Here's what happens when SRE teams integrate Okta:
Centralized Identity Management → Say goodbye to scattered credentials.
Just-in-Time Access → Give engineers the right access only when they need it.
Automation-Friendly → Integrate with Terraform, Kubernetes, and CI/CD pipelines.
The result? Fewer incidents. Faster recovery. Happier engineers.
DevOps is about speed and reliability. Identity shouldn't slow you down.
Is your team still stuck juggling access issues?
P.S. Would you prioritize access automation? Share your thoughts below.
#DevOps #SRE #AccessManagement #Okta #Automation #IdentityAndAccessManagement #CyberSecurity #CloudInfrastructure #DevSecOps #Productivity #DevOpsEngineer #C2C #C2H InfoDataWorx JudgeGroup.US KTek Resourcing Experis
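For readers wondering what "Just-in-Time Access" can look like in practice, below is a minimal sketch against Okta's Groups API: grant a user temporary membership in a group that maps to infrastructure access, then revoke it after a TTL. The domain, token, group ID, and user ID are placeholders, and a production version would schedule revocation through an event or workflow engine rather than sleeping in-process.

```python
# jit_access.py - minimal sketch of just-in-time access via Okta's Groups API.
# Placeholder domain, token, group, and user IDs; not production-ready
# (a real system would schedule revocation instead of sleeping in-process).
import time
import requests

OKTA_DOMAIN = "https://example.okta.com"      # hypothetical Okta org
API_TOKEN = "00a-redacted"                    # an Okta API token (SSWS scheme)
HEADERS = {"Authorization": f"SSWS {API_TOKEN}"}

def grant_temporary_access(group_id: str, user_id: str, ttl_seconds: int) -> None:
    """Add the user to an access-granting group, then remove them after the TTL."""
    url = f"{OKTA_DOMAIN}/api/v1/groups/{group_id}/users/{user_id}"

    # PUT adds the user to the group (Okta returns 204 No Content on success).
    requests.put(url, headers=HEADERS, timeout=10).raise_for_status()
    print(f"Granted: user {user_id} in group {group_id} for {ttl_seconds}s")

    try:
        time.sleep(ttl_seconds)               # stand-in for a real scheduler
    finally:
        # DELETE removes the membership, revoking whatever access the group confers.
        requests.delete(url, headers=HEADERS, timeout=10).raise_for_status()
        print(f"Revoked: user {user_id} removed from group {group_id}")

if __name__ == "__main__":
    grant_temporary_access("00g_example_group", "00u_example_user", ttl_seconds=3600)
```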
-
CrowdStrike lesson: the recent outage shows the impact of risk tolerance decisions. Prioritizing rapid product delivery can compromise quality and stability. #CrowdStrike #RiskManagement #LessonsLearned
UPD: Here's the apology from CrowdStrike: https://lnkd.in/dHsF5cVX However, it's unclear whether they will change how releases are tested before deployment.
UPD2: CrowdStrike plans to improve their Rapid Response Content testing to prevent anything like this from happening again: https://lnkd.in/dF3Shte5
You can't build quality into products. You must start with a quality product then ensure you don't screw it up as you build it. I help minimize your chances of screwing it up.
RE: CrowdStrike
I have so far not seen any verifiable information that the outage was caused by CrowdStrike firing all or most of their QA staff. Roughly two years ago, CrowdStrike did announce a round of layoffs, but it was a general layoff that seemed to include people from most organizations. I will not further the rumor mill's seeming intention to lump the outage into an existing narrative.
Here is what I will comment:
1) CrowdStrike is a DevOps shop that prioritizes rapid product delivery. They have invested time and money into building out their rapid release process. This includes testing. I have not been able to verify what is meant by "testing" when that word is used.
2) I have been able to verify the existence of professional testers with experience and no lack of skill, but I have no idea what role they play in the entire product release cycle, nor how much authority or impact they have on it.
My Conclusion: When a company prioritizes speed, they sacrifice initial quality. Every DevOps company makes this choice. As I am fond of saying, this is a risk tolerance decision. DevOps companies choose to open themselves up to greater production-issue risk when they prioritize time to market. There are different strategies you can implement to mitigate these risks, but you cannot remove them.
This was not the first issue CrowdStrike faced after pushing something to production. This was not even the first outage for a large segment of their user base. This past April, an update pushed to services running on Debian Linux machines caused hard crashes after corrupting the kernel. An internal review blamed the absence of the most up-to-date version of Debian from their automated release testing matrix. Anyone who works in a DevOps shop will most likely understand how unsatisfying I find that answer.
Opinion: I strongly believe, based on my professional experience, that what happened over the weekend was caused by a shallow understanding of risk tolerance. CrowdStrike tolerated issues caused by rapid releases because they only concerned themselves with product and revenue risk. To respond to a rapidly evolving threat landscape in electronic security, they built a process that pushed updates out as fast as possible. This strategy worked for them until it didn't. Because their core value proposition rested on assumptions of stability and trust, they eventually violated that trust.
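One of the mitigation strategies the post alludes to, without removing the underlying risk, is a staged (ring-based) rollout: ship an update to a small canary population first and promote it outward only while health signals stay within a threshold. A minimal sketch follows, with the ring sizes, health check, and failure threshold all invented for illustration; this is not a description of CrowdStrike's actual release process.

```python
# staged_rollout.py - minimal sketch of a ring-based rollout gate.
# The rings, health check, and threshold are illustrative assumptions,
# not a description of any vendor's actual release process.
import random

RINGS = [
    ("canary", 100),        # internal/opt-in machines first
    ("early", 10_000),      # small customer slice
    ("broad", 1_000_000),   # general population
]
MAX_FAILURE_RATE = 0.001    # halt promotion above 0.1% failures

def deploy_to_ring(ring_name: str, population: int) -> float:
    """Deploy the update to one ring and return the observed failure rate.

    Stubbed with a random draw; a real gate would aggregate crash and
    telemetry data from machines in the ring over a soak period.
    """
    print(f"Deploying to ring '{ring_name}' ({population} machines)...")
    return random.uniform(0.0, 0.002)

def rollout() -> bool:
    for ring_name, population in RINGS:
        failure_rate = deploy_to_ring(ring_name, population)
        if failure_rate > MAX_FAILURE_RATE:
            # Stop here: the blast radius stays limited to this ring.
            print(f"Halting: {failure_rate:.4%} failures in '{ring_name}'")
            return False
        print(f"Ring '{ring_name}' healthy ({failure_rate:.4%}); promoting.")
    return True

if __name__ == "__main__":
    rollout()
```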
-
Dang it, Curtis Stuehrenberg, you just summarized half of the blog post I'm going to publish tomorrow with the phrase "risk tolerance decision." Seriously, though, y'all need to read his post above. 👆