📢 Announcing Ori Inference Endpoints, an easy and scalable way to deploy state-of-the-art machine learning models as API endpoints. ⚡ Deploy your favorite AI model in a single click. Ori Inference Endpoints makes inference effortless. 📊 Automatically scale inference up or down based on demand, from thousands of GPUs all the way down to zero. ⚖️ Per-minute pricing helps you keep your inference infrastructure affordable and costs predictable. Discover more about Ori Inference Endpoints 🔗 https://lnkd.in/gvjmNDU8
About us
Ori is building the AI backbone for the future, enabling teams to advance groundbreaking technology. We believe the true potential of AI will be unlocked by how seamlessly teams can access and deploy the infrastructure they need to train, serve, and scale models. As a trusted partner, Ori provides end-to-end AI infrastructure—combining powerful GPU compute with a flexible software layer that optimizes resources. With Ori, teams can explore, build, and scale AI innovations effortlessly, transforming how software and hardware work together to drive the next wave of AI breakthroughs.
- Website
- https://ori.co
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- London
- Type
- Privately Held
- Founded
- 2018
- Specialties
- Cloud Computing, AI Infrastructure, GPU, MLOps, Kubernetes, and Security
Locations
- Primary: Tintagel House, 92 Albert Embankment, London, SE1 7TY, GB
Employees at Ori
- Laurence Sawbridge-Praties
- Scott Collins: Scientific Computing, Cloud, AI and HPC Solutions Professional
- Jacob Smith: Former musician, ex-Packet, ex-GTM at Equinix. Volunteer and board member. Currently building new things.
- Richard Tame: Chief Financial Officer – Investor & Board Relations / Fundraising / Debt Financing / Financial Planning & Analysis / Budgeting & Forecasting /…
Updates
-
We're excited to announce that Jacob Smith has joined Ori's board of directors! Jacob brings a wealth of experience in scaling tech companies and fostering innovation in the digital infrastructure space. Jacob's insights will be instrumental as we continue to empower AI businesses with cutting-edge infrastructure. Welcome to the Ori family, Jacob! https://lnkd.in/dEuaX-p5
Ori Welcomes Jacob Smith to Its Board of Directors
blog.ori.co
-
We are thrilled to share that Ori will be the first AI infrastructure provider to deploy NVIDIA's H200 GPUs in a UK data centre. This milestone reflects our commitment to providing cutting-edge solutions that empower businesses to achieve their AI ambitions. The H200, with 141GB of HBM3e memory, offers almost twice the capacity of the H100, while its 4.8TB/s bandwidth is 1.4x that of the H100. If you'd like to try them out, contact us! https://lnkd.in/ek5Q2YAg
Ori becomes first UK data centre to deploy Nvidia H200 chips
uktech.news
-
Llama 3.3 70B, AI at Meta's newest model, is as powerful as the flagship #Llama 3.1 405B, providing state-of-the-art performance with a nimble footprint. Looking to try Llama 3.3 on a cloud GPU instance? Here's how you can do it in just a few short steps - https://lnkd.in/g_y9XdSV #llm #GenAI
How to run Llama 3.3 70B with Ollama on a cloud GPU
blog.ori.co
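The linked tutorial's steps typically boil down to something like the following (a minimal sketch, assuming a Linux GPU instance with NVIDIA drivers already installed; the example prompt is an assumption, the model tag is Ollama's published `llama3.3:70b`):

```shell
# Install Ollama via its official install script
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server if it isn't already running as a service
ollama serve &

# Pull and run Llama 3.3 70B; the default quantized weights are a ~40GB download,
# so allow for disk space and download time on first run
ollama run llama3.3:70b "Explain GPU memory bandwidth in one sentence."
```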
-
Ori's CEO, Mahdi Yahya, makes a compelling case for establishing Special Compute Zones to help the UK develop sovereign cloud infrastructure and harness its unique strengths in AI. In his latest piece for DatacenterDynamics, he explains why modernizing power and cloud infrastructure is vital for the UK to realize its ambitious AI goals and stay competitive on the global stage. Read the full article here: https://lnkd.in/gsP826mC
Special Compute Zones will drive sovereign cloud infrastructure in the UK
datacenterdynamics.com
-
Ori reposted this
Hi all! At Ori, we are hiring our first Graphic Designer to join our amazing marketing team. You will work together with Daniel Van den Berghe to manage and execute the design function for Ori. This is a dynamic, hands-on role where no two days are the same! We are growing and have a lot of exciting things coming up in 2025! 🌟 Click the job description in the comments for more info and details on how to apply 📩
-
Love the simplicity of Ollama but need to scale automatically for larger workloads? With KEDA (Kubernetes Event-Driven Autoscaling) and Ori Serverless Kubernetes, you can seamlessly scale your LLM deployments based on GPU usage—enabling optimal performance without having to manage infrastructure yourself. 🔗 Explore our step-by-step tutorial to see how easy scaling can be: https://lnkd.in/gxXixZbM
KEDA Autoscaling | Ori Global Cloud
docs.ori.co
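At its core, the setup in the tutorial pairs a GPU utilization metric with a KEDA ScaledObject. A minimal sketch (the resource names, Prometheus address, DCGM metric query, and threshold below are assumptions; the linked tutorial has the exact values for Ori Serverless Kubernetes):

```yaml
# Hypothetical KEDA ScaledObject scaling an Ollama Deployment on GPU utilization
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: ollama-gpu-scaler
spec:
  scaleTargetRef:
    name: ollama            # Deployment running the LLM server (assumed name)
  minReplicaCount: 0        # scale to zero when idle
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # assumed in-cluster address
        query: avg(DCGM_FI_DEV_GPU_UTIL)                   # GPU utilization via the DCGM exporter
        threshold: "80"                                    # add a replica above ~80% utilization
```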
-
As GPUs get more powerful, it's equally important to have ample and fast memory to make the most of performance gains. That’s why we’re excited about NVIDIA’s H200 GPU. With 76% more memory and 43% higher bandwidth compared to the H100, it's built to handle today’s larger models and deliver faster inference. Curious how the H200 stacks up? Check out our blog for all the details, from features to use cases and comparisons: https://lnkd.in/degWPdbB #nvidia #h200 #gpu
An overview of the NVIDIA H200 GPU
blog.ori.co
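Those percentages follow directly from the published spec sheets (H100 SXM: 80GB HBM3, 3.35TB/s; H200 SXM: 141GB HBM3e, 4.8TB/s); a quick check:

```shell
# Compute the H200's gains over the H100 from the published specs
awk 'BEGIN {
  printf "Memory: +%.0f%%, Bandwidth: +%.0f%%\n", (141/80 - 1) * 100, (4.8/3.35 - 1) * 100
}'
# prints: Memory: +76%, Bandwidth: +43%
```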
-
In this article for Verdict, Ori CEO Mahdi Yahya underscores the critical need for the UK's technology leaders to champion British cloud providers. By prioritizing homegrown cloud infrastructure, the UK can drive innovation, ensure data sovereignty, and significantly reduce cloud expenditure. Read the full article 🔗 https://lnkd.in/gesYm-qv
Opinion: UK tech leaders should back British cloud providers
verdict.co.uk