hosted.ai has rolled out a major platform update, v2.2.1, that makes it easier for teams to deploy, manage, and scale AI infrastructure across GPU services, virtual machines, and bare metal.
Let’s get into it!
#BuildingInPublic – the platform story so far
This update brings together several improvements that matter directly to operators, platform teams, and AI builders:
- better performance and scalability in core platform orchestration
- a more consistent provisioning and management experience across GPU services and VMs
- new bare metal GPU instance support
- browser-based SSH access across compute types
- stronger automation for GPU service deployments
- real-time Kubernetes service health visibility
The result is a platform that is easier to operate, more flexible for customers, and better suited for modern AI workloads ranging from inference and application hosting to distributed training and Kubernetes-based deployments.
New to hosted.ai? Learn more about our GPU cloud platform or get in touch for a demo.
What’s new in hosted.ai v2.2.1?
1. Unified provisioning across GPU services and VMs
hosted.ai now delivers a more consistent deployment and runtime experience across GPU services and virtual machines.
This means teams can use a more standardized way to provision workloads, reuse service templates more effectively, and manage operational capabilities more consistently across compute environments.
Why is this important?
- reduces operational complexity
- improves reliability and consistency
- makes it easier to scale GPU services across regions and deployment types
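To make the idea concrete, here is a minimal sketch of what a unified provisioning model can look like. This is illustrative only: the `ProvisionRequest` shape, the `from_template` helper, and the compute-type names are assumptions for this example, not hosted.ai's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical compute types; one request shape covers all of them,
# so the same template can back GPU services, VMs, and bare metal.
COMPUTE_TYPES = {"gpu_service", "vm", "bare_metal"}

@dataclass
class ProvisionRequest:
    name: str
    compute_type: str          # one of COMPUTE_TYPES
    region: str
    template: dict = field(default_factory=dict)

    def __post_init__(self):
        if self.compute_type not in COMPUTE_TYPES:
            raise ValueError(f"unknown compute type: {self.compute_type}")

def from_template(template: dict, **overrides) -> ProvisionRequest:
    """Reuse a service template, overriding fields per deployment."""
    merged = {**template, **overrides}
    return ProvisionRequest(**merged)

base = {"name": "inference-api", "compute_type": "gpu_service",
        "region": "eu-west", "template": {"gpu": "1x L40S"}}

# Same template, redeployed as a VM in another region.
req = from_template(base, compute_type="vm", region="us-east")
print(req.compute_type, req.region)  # vm us-east
```

The point of the sketch is the single request shape: when GPU services, VMs, and bare metal share one provisioning model, templates become reusable across compute types instead of being rewritten per environment.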
Sound interesting? Learn more about our GPU cloud platform or get in touch for a demo.
2. Bare metal GPU instances
hosted.ai now supports bare metal GPU instances as a first-class compute option.
Customers can provision dedicated hardware with lifecycle management, hardware discovery, integrated pricing and billing, and direct access from the hosted.ai platform.
Why it matters:
- creates a smoother path for customers moving into higher-performance or specialized environments
- supports workloads that require full hardware control
- expands deployment flexibility beyond containers and VMs
Is this something you’re looking for? Learn more about our GPU cloud platform or get in touch for a demo.
3. Browser-based SSH access
Users can now open an SSH session directly from the browser for GPU services, VMs, and bare metal instances.
This removes the need to install external SSH clients or work around restricted environments just to access infrastructure.
Why is this important?
- reduces friction for first-time users
- speeds up time-to-access
- creates a more consistent access experience across compute types
Learn more about our GPU cloud platform or get in touch for a demo.
4. Stronger automation for GPU service deployment
hosted.ai has expanded automation support, so teams can configure GPU service environments more consistently during deployment.
This includes stronger support for software stack customization and operational setup as part of workload provisioning.
Why is this important?
- reduces manual setup during deployment
- makes GPU service environments more reproducible
- lowers the risk of configuration drift across deployments
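As an illustration of deployment-time automation, the sketch below renders a provision-time setup script from a reusable template, so every GPU service starts from the same software stack. Everything here is hypothetical: the template fields, the script contents, and the version strings are placeholders, not hosted.ai's actual provisioning format.

```python
from string import Template

# Illustrative provision-time setup template; the commands and
# parameters are placeholders, not a real provisioning format.
SETUP = Template("""\
#!/bin/sh
# software stack setup rendered at deployment time
install_driver $driver
install_runtime $runtime
start_service $service --port $port
""")

def render_setup(driver: str, runtime: str, service: str, port: int) -> str:
    """Fill the shared template with per-deployment values."""
    return SETUP.substitute(driver=driver, runtime=runtime,
                            service=service, port=port)

script = render_setup(driver="550.90", runtime="cuda-12.4",
                      service="vllm", port=8000)
print(script)
```

Templating the setup step is what turns "works on this one deployment" into a repeatable environment: the stack is defined once and every new GPU service renders the same script with its own parameters.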
Want to learn more? Visit our GPU cloud platform or get in touch for a demo.
5. Expanded support for containerized and Kubernetes-style workloads
hosted.ai now supports more advanced runtime patterns for GPU services, including capabilities that make it easier to run container-based and Kubernetes-oriented workloads in a more VM-like way.
Why is this important?
- expands the range of workloads customers can run without needing a full VM for every case
- helps providers package more usable GPU services for end users
- creates more flexibility in how AI infrastructure is delivered
Want to learn more? Visit our GPU cloud platform or get in touch for a demo.
6. Real-time Kubernetes service health monitoring
hosted.ai now includes a Kubernetes service integrity dashboard in the admin experience, with live service visibility, warnings, and audit tracking.
Why is this important?
- improves operational visibility
- speeds up issue detection and troubleshooting
- reduces reliance on manual diagnostics
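The kind of rollup such a dashboard performs can be sketched in a few lines: take per-service readiness counts and classify each service into a health state. The state names, the snapshot shape, and the thresholds below are assumptions for illustration, not hosted.ai's actual dashboard logic.

```python
# Hypothetical health rollup: classify services by ready vs. desired
# replica counts into healthy / degraded / down.
def rollup(services: dict) -> dict:
    """services: {name: {"ready": int, "desired": int}} -> {name: state}."""
    status = {}
    for name, s in services.items():
        if s["ready"] >= s["desired"]:
            status[name] = "healthy"
        elif s["ready"] > 0:
            status[name] = "degraded"   # partial outage -> warning
        else:
            status[name] = "down"       # nothing serving -> alert
    return status

snapshot = {
    "ingress":   {"ready": 2, "desired": 2},
    "inference": {"ready": 1, "desired": 3},
    "metrics":   {"ready": 0, "desired": 1},
}
print(rollup(snapshot))
# {'ingress': 'healthy', 'inference': 'degraded', 'metrics': 'down'}
```

A live dashboard adds streaming updates and audit trails on top, but the core value is the same: surfacing "degraded" before it becomes "down", without anyone running diagnostics by hand.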
Want to learn more? Visit our GPU cloud platform or get in touch for a demo.
7. Better platform performance and scalability
Under the hood, hosted.ai has also reworked key orchestration components and improved concurrency in internal execution flows.
These changes help the platform process compatible operations more efficiently and improve long-term scalability.
Why is this important?
- improves performance under scale
- reduces blocking from long-running operations
- creates a stronger foundation for future platform growth
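The concurrency idea can be illustrated with a small sketch: operations that touch different resources are "compatible" and can overlap, while operations on the same resource stay serialized in order. This is a generic pattern using Python's `concurrent.futures`, not hosted.ai's internal orchestration code.

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(ops):
    """ops: list of (resource, fn). Compatible operations (different
    resources) run concurrently; same-resource operations stay ordered.
    Returns {resource: [results in submission order]}."""
    by_resource = {}
    for resource, fn in ops:
        by_resource.setdefault(resource, []).append(fn)

    def run_chain(fns):
        return [fn() for fn in fns]       # same-resource ops: sequential

    with ThreadPoolExecutor() as pool:    # different resources: parallel
        futures = {r: pool.submit(run_chain, fns)
                   for r, fns in by_resource.items()}
        return {r: f.result() for r, f in futures.items()}

ops = [("vm-1", lambda: "resize"),
       ("vm-2", lambda: "start"),
       ("vm-1", lambda: "snapshot")]
print(run_batch(ops))
# {'vm-1': ['resize', 'snapshot'], 'vm-2': ['start']}
```

Grouping by resource is what prevents one long-running operation from blocking unrelated work: a slow snapshot on `vm-1` no longer delays a start on `vm-2`.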
Want to learn more? Visit our GPU cloud platform or get in touch for a demo.
Additional improvements customers will notice
Alongside the major platform capabilities above, this update also improves day-to-day usability and billing accuracy across the platform.
This includes improvements such as:
- improved branding support in system email delivery
- cleaner service creation flows
- more reliable edit and clone behavior
- more accurate workspace billing visibility
- preservation of billing history after workspace deletion
- fairer billing behavior for pending subscriptions
- stronger scheduling reliability
- better support for complex node and network configurations
Want to learn more? Visit our GPU cloud platform or get in touch for a demo.
Who benefits most from this update
This update is especially relevant for:
- AI teams running GPU-backed applications and services
- infrastructure providers offering GPU services to customers
- operators managing mixed environments across GPU services, VMs, and bare metal
- teams deploying Kubernetes, distributed training, or advanced containerized workloads
A stronger foundation for what comes next
This release is not just about adding features. It is about making hosted.ai more consistent, more flexible, and more scalable across the full infrastructure lifecycle.
With unified provisioning, stronger automation, browser-based access, bare metal support, and deeper operational visibility, hosted.ai is continuing to make it easier for teams to deliver AI infrastructure that works in the real world.
If you want to see these capabilities in action, contact the hosted.ai team for a walkthrough.
It’s in final testing now – subscribe to get updates!



