Category: News

  • hosted·ai v2.0.1 – now it’s easy to optimize GPUaaS

    We’re excited to announce the availability of hosted·ai v2.0.1, with new features to make GPUaaS even easier to manage and sell. Let’s get into it!

    [Screenshot: the hosted·ai control panel showing GPU infrastructure orchestration]


    What’s new in v2.0.1?

    1. Tune your GPU pools for different workloads


    Introducing… the new GPU optimization slider.

    When you create a GPU pool, you assign GPUs to the pool and choose the sharing ratio – i.e. how many tenants the resources of the pool can be allocated/sold to. For any setting above 1, the new optimization slider becomes available.

    Behind this simple slider is a world of GPU cloud flexibility. The slider enables providers to configure each GPU pool to suit different customer use cases. Here’s a quick demo from Julian Chesterfield, CTO at hosted·ai:

    • GPUaaS optimized for security
      Temporal scheduling is used. The hosted·ai scheduler switches user tasks completely in and out of physical GPUs in the pool, zeroing the memory each time. At no point do any user tasks co-exist on the GPU. This is the most secure option, but carries the highest performance overhead.
    • GPUaaS optimized for performance
      Spatial scheduling is used. The hosted·ai scheduler assigns user tasks simultaneously to make optimal use of the GPU resources available. There is no memory zeroing. This is the highest-performance option, but it doesn’t isolate user tasks – they are allocated to GPUs in parallel.
    • Balanced GPUaaS
      Temporal scheduling is used, but without fully enforced memory zeroing. This provides a blend of performance and security.
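
    The three modes above can be pictured with a short scheduling sketch. This is a hypothetical model for illustration only, not the hosted·ai scheduler or its API: temporal scheduling gives each tenant the whole GPU in turn (optionally zeroing memory between tenants), while spatial scheduling lets all tenants' tasks share the GPU in a single time slot.

```python
def run_temporal(tasks, zero_between=True):
    """Temporal scheduling: one tenant's task resident on the GPU at a time."""
    memory = {}                          # stand-in for GPU memory state
    timeline = []
    for task in tasks:
        memory[task] = "resident"
        timeline.append([task])          # exactly one task per time slot
        if zero_between:
            memory.clear()               # security mode: wipe memory between tenants
    return timeline

def run_spatial(tasks):
    """Spatial scheduling: all tenants' tasks co-resident on the GPU at once."""
    return [list(tasks)]                 # one time slot, no memory zeroing

tasks = ["tenant-a", "tenant-b", "tenant-c"]
secure   = run_temporal(tasks, zero_between=True)    # optimized for security
balanced = run_temporal(tasks, zero_between=False)   # balanced mode
fast     = run_spatial(tasks)                        # optimized for performance

print(len(secure), "time slots vs", len(fast))       # 3 time slots vs 1
```

    The trade-off falls out of the model: temporal modes need one time slot per tenant (more overhead, full isolation), while spatial mode packs everything into one slot (best throughput, no isolation).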

    2. Self-service GPU / end user enhancements


    Also in this release, some handy improvements for end users running their applications in your hosted·ai environments:

    GPU application service exposure

    We’re made it easier to expose ports for end user applications and services through the hosted·ai admin panel (and coming soon, through the user panel).

    Now your customers can choose how they present their application services to the outside world, through configurable ports.

    Self-service GPU pool management

    We’ve added new management tools for your customers too. Each GPU resource pool they subscribe to can be managed through their user panel, with visibility of the status of each pod; the ability to start, stop and restart pods; and logs with information about the applications using GPU.

    3. Furiosa device integration


    Now service providers can create regions with clusters based on Furiosa, as well as NVIDIA. Once a region has been set up for Furiosa, it can be managed, priced and sold using the same tools hosted·ai makes available for NVIDIA – and in the future, other accelerator devices.

    Coming next:


    • Full stack KVM – complete implementation, replacing Nexvisor
    • Scheduler credit system – expanding GPU optimization with a credit system to deliver consistent performance for inference in mixed-load environments
    • Billing enhancements – more additions to the hosted·ai billing and metering engine – more ways to monetize your service
    • InfiniBand support

  • hosted·ai and Maerifa form strategic partnership to provide a one-stop shop for Neocloud creation at scale

    Santa Clara, CA – 30th September 2025 – hosted·ai has signed a strategic partnership with Maerifa Solutions, a leading digital infrastructure company focused on the provision of technology design, deployment and supply chain management services. The partnership aims to facilitate the rapid creation and scaling of Neoclouds – cloud services built around GPU infrastructure for AI – by providing a one-stop shop for infrastructure advice, hardware, procurement and finance, and efficient, profitable GPU orchestration using hosted·ai software.

    Maerifa simplifies Neocloud creation through its relationships with AI cloud infrastructure OEMs such as NVIDIA, Supermicro and Lenovo, and supply chain and finance partners who can support hardware procurement and purchasing. With hosted·ai, Maerifa can now also provide turnkey software for Neocloud orchestration and monetization, with easy-to-use tools for GPU cloud service design, pricing, metering, billing and self-service.

    “The demand for GPU infrastructure is growing by leaps and bounds, however, there remains little focus on developing multi-faceted Neoclouds with the ability to deliver the full catalogue of this infrastructure to end customers in a way that is economically viable long-term. Together with hosted·ai we have a solution that enables rapid scalability and will provide these companies with a way of focusing on what they are best at, attracting customers and providing innovative software solutions. We are already working on a number of projects together and invite others looking to grow their platforms to see how we can help,” said Rahul Kumar, Senior Executive Officer, Maerifa Solutions.

    “There is huge demand for AI training and inference infrastructure, but Neoclouds face quite a few challenges to deliver the scale that the market needs,” said Narendar Shankar, Chief Commercial Officer at hosted·ai. “Our partnership with Maerifa is exciting news for companies in this space, because they now have one expert partner for sourcing and delivering GPU infrastructure, and getting help with financing; and combined with hosted·ai, the software to manage, provision and bill for AI cloud services while making those services efficient and profitable.”

    hosted·ai was founded to make GPU cloud efficient, easy and profitable for service providers, by creating a turnkey GPUaaS platform designed specifically for companies in this market. hosted·ai was launched in 2024 by a team with deep experience of owning, running, and building solutions for AI and for service providers, at businesses including VMware, NVIDIA, Expedia, XenSource, OnApp, Sunlight and UK2.

    Maerifa Solutions was conceived, incubated and launched by Aethlius Holdings to create an ecosystem of Tier-1 partners across digital infrastructure, with related financing solutions delivered by those partners to address the funding gap in acquiring hard-to-access GPU server technology. Since its launch in Q3 2024 it has already partnered with leading players in the industry and is in discussions to deliver multi-million dollars’ worth of hardware and associated solutions to projects in Europe, the Middle East, Africa and Southeast Asia.

    About hosted·ai
    hosted·ai provides software to make AI infrastructure hosting simple and profitable for service providers. The hosted·ai platform is a turnkey AI cloud / GPUaaS stack that gives service providers the tools they need to create, manage and monetize GPU cloud infrastructure. hosted·ai was founded in 2024, launched publicly in 2025 and has teams across the US, EMEA and Asia-Pacific. For more information, visit https://hosted.ai


    About Maerifa Solutions
    Maerifa Solutions is an ADGM-registered digital infrastructure company that, in collaboration with its extensive ecosystem, brings expertise in technology design and deployment, supply chain management, data centers, and power solutions. This, combined with Maerifa Solutions’ deep financial acumen, enables it to deliver creative investment solutions that help clients realise the full potential of AI infrastructure. By offering innovative funding mechanisms and access to hardware and hosting capacity, Maerifa Solutions ensures the long-term scalability and capital efficiency of AI projects.

  • FuriosaAI and hosted·ai Form Strategic Partnership to Deliver Industry-Leading AI Infrastructure Powered by Tensor Contraction Processors

    Redefining price/performance/power for AI cloud deployments with next-generation inference hardware

    Santa Clara, CA – 8th July 2025 – FuriosaAI, a pioneering leader in next-gen AI semiconductors, today announced a strategic partnership with hosted·ai to deliver ultra-efficient, high-performance AI infrastructure built on Furiosa’s Tensor Contraction Processor (TCP) architecture. The hosted·ai cloud platform will fully support Furiosa’s flagship RNGD (pronounced “Renegade”) processors, enabling service providers to leverage TCP-powered infrastructure for hosting AI workloads.

    hosted·ai is a turnkey AI cloud platform for service providers. It delivers multi-tenant virtualization of infrastructure for AI inference and training, with full software-defined control and oversubscription of hardware accelerators such as RNGD and GPUs. This enables service providers to pool the resources of multiple accelerators, provision those resources on demand to multiple clients, and sell 4x-10x the physical capacity available. As a result, they can price their offerings competitively and improve unit economics, achieving higher revenue at higher average margins.
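
    A back-of-the-envelope sketch shows how oversubscription changes the unit economics. All prices and costs here are hypothetical placeholders, not hosted·ai or Furiosa pricing; only the 4x oversubscription factor comes from the range quoted above.

```python
# Hypothetical oversubscription economics (illustrative numbers only).
physical_gpus = 100           # accelerators the provider actually owns
oversubscription = 4          # sell 4x physical capacity (source quotes 4x-10x)
price_per_vgpu_month = 500.0  # assumed price per virtual GPU per month, USD
cost_per_gpu_month = 900.0    # assumed all-in cost per physical GPU per month, USD

virtual_gpus = physical_gpus * oversubscription       # capacity actually sold
revenue = virtual_gpus * price_per_vgpu_month         # 400 * 500 = 200,000
cost = physical_gpus * cost_per_gpu_month             # 100 * 900 = 90,000
margin = (revenue - cost) / revenue                   # gross margin fraction

print(f"revenue=${revenue:,.0f} cost=${cost:,.0f} margin={margin:.0%}")
```

    With these placeholder figures, selling each physical GPU four times over turns a loss-making price point (500 revenue vs. 900 cost per physical unit) into a 55% gross margin, which is the "cost/revenue/margin equation" the platform aims to change.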

    The new partnership will add support for Furiosa’s flagship RNGD Processor for LLM and agentic AI inference to the hosted·ai platform. RNGD leverages Furiosa’s Tensor Contraction Processor (TCP) chip architecture, which solves the fundamental hardware challenge of running AI algorithms: providing not just raw compute power, but also using that compute effectively and efficiently to deliver excellent real-world performance. 

    “We’re excited by this partnership and its potential to transform the cost and impact of AI infrastructure,” said Furiosa’s SVP of Product and Business, Alex Liu. “Furiosa’s processors are purpose-built for AI and represent a huge leap forward in performance per watt compared to GPUs thanks to our Tensor Contraction Processor (TCP) architecture. hosted·ai has the same devotion to efficiency and performance in its AI cloud software stack, enabling service providers to properly virtualize the accelerator and maximize utilization. This unique combination delivers the best solution for sovereign service provider AI clouds.” 

    “This partnership is an important step in our mission to make AI infrastructure accessible and affordable for service providers and their customers,” said Ditlev Bredahl, CEO of hosted·ai. “Together we’ll bring new ways for service providers to accelerate AI workloads with reduced hardware CAPEX and OPEX, optimal utilization, sustainable profitability for their business, and the best price/performance for their customers.”

    Availability of Furiosa RNGD support in the hosted·ai platform is expected by the end of 2025. Looking ahead, the two companies plan to develop an off-the-shelf appliance for service provider AI cloud, combining hosted·ai software, Furiosa accelerators, and rack server modules for easy turnkey adoption by service providers.

    About hosted·ai
    hosted·ai provides software to make AI infrastructure and GPUaaS simple and profitable for service providers. The hosted·ai platform fully virtualizes AI datacenter infrastructure, including GPUs and other hardware accelerators. This makes it possible to share and utilize 100% of hardware resources with users in a secure multi-tenant environment, which reduces the overall hardware requirement, minimizes idle resources, and dramatically changes the cost/revenue/margin equation for AI cloud service providers. For more information, visit https://hosted.ai.  

    About furiosa.ai
    FuriosaAI is building a new class of AI processor for enterprise and data center workloads. Powered by the Tensor Contraction Processor (TCP) architecture, Furiosa delivers sustainable, high-efficiency AI compute designed from the ground up for modern inference applications. Its mission is to democratize powerful AI through AI-native ASICs and an accompanying software stack, giving everyone on Earth access to powerful AI. For more information, please visit furiosa.ai.