
Trustwise’s Optimize:ai Launches, All Eyes on Gen AI Safety and Efficiency

In this episode of the SecurityANGLE, our series focused on all things security, the topic is generative AI (shocker, I know) and the role technology can play in AI application performance and risk management — and, along the way, in reducing compute costs.

I’m joined today by Manoj Saxena, founder and chairman of Trustwise. Trustwise is officially launching this week with $4M in seed funding and the company’s flagship product is Trustwise Optimize:ai.

Optimize:ai is an API-based generative AI performance and risk management solution that enterprise developers can use to lower the cost, increase the safety, and improve the “green factor” of the AI models they’re using.

Watch the full episode here: Trustwise’s Optimize:ai Launches, All Eyes on Gen AI Safety and Efficiency

Trustwise Founder is Not New to the AI Game

Trustwise founder Manoj Saxena is in no way a newcomer to the AI game; in fact, Trustwise is his sixth startup. Manoj was the first GM of IBM Watson. He’s also the founder of the Responsible AI Institute, which is focused on driving the adoption of responsible AI through independent AI conformity assessments and certification. He has built and sold four companies and holds 34 software patents in AI and web services. I know what you’re thinking: solid underachiever, right? Ha!

When prepping for this conversation, I popped over to Manoj’s LinkedIn profile, and this description told me everything I needed to know about him: I like building things, going fast, and helping brilliant people build great companies. It also showed me how much we have in common, as I’ve spent a career doing that as well.

Why Championing Responsible AI is Important

Manoj speaks of Responsible AI as his life’s work and his true passion. That’s what drove him to start the Responsible AI Institute back when he was still at IBM Watson, as he began to realize the double-edged nature of technology in general and AI in particular. He compared AI to the Industrial Age: just as industrialization augmented our arms and legs and changed everything about how we lived and worked, AI is going to do the same for our minds, our skills, the work we do, and the lives we live. As a result, he realized that leaving technology to the technology companies alone did not bode well for humanity. The Responsible AI Institute is, to his way of thinking, the JD Power of AI. The organization is designed so that companies can use the nonprofit for independent assessments, benchmarking, and certification, helping make sure they get it right as they embrace and work to integrate AI into their business operations. The organization has grown ~400% in the last year and boasts a community of some 30,000 people committed to applying AI technology in a safe, reliable, and societally beneficial way.

The AI Risk Trifecta

Manoj and I discussed what he calls “the AI Risk Trifecta,” which comprises three things:

  • Cost — Concerns about the cost of AI are reported to have surged 14x over the past year

  • Risk — Risk factors include risk of data leakage, hallucinations, bias, etc.

  • Environment — AI models require massive compute power, which will have a massive environmental impact

With the rapid rise of ChatGPT and other generative AI solutions, it’s quickly become clear that these models are incredibly expensive to build and run. Today, data centers are estimated to consume about 2.5% of American electricity; by 2030, projections put that figure as high as 25%, with data centers powering AI models driving the growth. Manoj shared that a typical AI model generates as much carbon as five American cars emit over their lifetimes — and of course, we are going to see tens of thousands of AI models. This aspect of societal impact, the environmental impact, is something Manoj considers part of the responsible AI equation.

As we know, there are other aspects to consider related to self-learning models. Once these models start learning, they could also easily propagate biases and mistakes. Without the proper guardrails and the ability to align to regulations and ensure governance and compliance, safety becomes an outsized concern.

Manoj shared that today he has shifted his AI Risk Trifecta from “Cost, Risk, Environment” to “Cost, Compliance, and Sustainability,” and that’s exactly what Trustwise and its newly launched Optimize:ai were built to address.

Trustwise Launches Optimize:ai

This is a big week for Trustwise. The company just closed a $4 million seed funding round and has officially launched its flagship product: Optimize:ai.

Manoj shared that he and the team at Trustwise have spent the past two years building a system for generative AI, working with some of the most highly regulated companies in the world — think insurance companies, banks, and healthcare — to test their theories and the performance of the system.

They’ve been focused on solving for how you can take a generative AI system and a large language model (LLM) into a high-stakes environment and then extract business value from it. They found the three biggest issues companies struggle with when it comes to scaling and getting value out of generative AI are fairly simple:

  • The need to make sure the output is not hallucinating or leaking sensitive data

  • The need to make sure the output of an LLM is aligned with their internal business policies and applicable regulations

  • The need to ensure that the cost of delivering these things, both financial and environmental, is sustainable

As the team worked to solve these problems, they built Trustwise as an application performance and risk management system for generative AI. Their focus is to provide a single API that can fit into any AI toolchain and work with any model, in any cloud or even on a laptop, serving almost like a spell checker — only for trust: a “trust checker.” It reminded me of the way Grammarly works: it sits on top of everything you’re doing and works behind the scenes, whether you’re writing an email, a proposal, a social media post, or a text message, watching you work and providing guidance along the way, resulting in better, more effective written communication. What Grammarly does for writing, Optimize:ai does for app performance and risk management in an organization’s AI initiatives.
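Trustwise hasn’t published its API details, but conceptually, a “trust checker” that sits between an LLM and the user might look something like this sketch. Every name, pattern, and check below is a hypothetical illustration of the middleware pattern, not Trustwise’s actual interface:

```python
import re

# Hypothetical "trust checker" middleware sketch. The function name,
# patterns, and policy mechanism are illustrative assumptions only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def trust_check(llm_output: str, banned_phrases=()) -> dict:
    """Annotate an LLM response with simple safety findings, Grammarly-style."""
    findings = []
    # Flag possible sensitive-data leakage in the model's output.
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(llm_output):
            findings.append(f"possible {label} leakage")
    # Flag output that conflicts with internal policy phrasing rules.
    for phrase in banned_phrases:
        if phrase.lower() in llm_output.lower():
            findings.append(f"policy violation: '{phrase}'")
    return {"output": llm_output, "findings": findings, "safe": not findings}

report = trust_check("Contact me at jane@example.com for the claim details.")
```

Because the check wraps the output rather than the model, the same middleware can sit in front of any LLM, in any cloud — which is the architectural point of a single-API approach.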

Safety-as-a-Service, Trust-as-a-Service?

As we will most assuredly quickly reach a point where generative AI will be woven into all workflows, Manoj and the Trustwise team believe they can and will play an important role in helping define the cyber safety and trust space.

Whether users are writing content, working on drug discovery, supporting clinical practice in healthcare, or processing insurance claims, AI will be a driving force. As we discussed this evolution, Manoj offered an analogy: think of Trustwise almost as if Grammarly and HTTPS had a baby — guardrails that are already guiding you. Is what we’re doing safe, compliant, and cost-effective? How can you ensure that, so that every prompt can be optimized? That’s the problem he and the Trustwise team are working to solve.

In the industry, when we talk about success with generative AI, no matter who I’m having the conversation with — vendor partners, fellow analysts, or IT decision-makers — trust and security play vitally important roles. That’s why I was so interested in hearing from Manoj about Trustwise and Optimize:ai: they are on to something. And I can absolutely see the value of AI-focused Safety-as-a-Service and Trust-as-a-Service positioning here.

How Trustwise Optimize:ai Can Do the Heavy Lifting on Adherence to Global AI Standards, Policies, and Regulations

Speaking of regulations and policies, today we have a growing set of global standards and regulations governing AI, with more undoubtedly to come. These include the EU AI Act, the RAISE safety and alignment benchmarks, the ISO Software Carbon Intensity (SCI) standard, the NIST AI Risk Management Framework, the UK FCA’s Consumer Duty regulation, and more.

Our conversation shifted to explore how Trustwise Optimize:ai can help handle some of the heavy lifting with corporate adherence to global AI standards, policies, and regulations.

While hallucinations and data leakage are obvious AI-related concerns, the other big problem organizations are working to solve is alignment. There’s a very real need to ensure that the output of prompts put into the system is aligned with three things:

  • Alignment of the AI output to corporate requirements (taking into consideration an organization’s corporate values and corporate comms strategy)

  • Alignment of AI to regulatory requirements of a given industry

  • Alignment of AI to the end customer

Today, no technology exists that helps people steer the output of an LLM. That’s another problem Trustwise Optimize:ai has been developed to solve: making sure outputs are safe, that they are compliant and personalized/targeted to end users, and, lastly, making sure organizations are delivering them at the lowest cost, both financial and environmental.

Trustwise Optimize:ai Tackles Climate Change

I believe AI’s impact on sustainability goals will become a giant problem in the not-too-distant future — but it’s something few are thinking about in these early, giddy days of gen AI. As mentioned earlier, the data centers used to train these AI models have massive carbon, energy, and water consumption footprints — and we’ve barely gotten started. Today, we are seeing rapidly rising costs for building models, but we’ve yet to realize the financial impact of tuning and running inference on these models, which Manoj estimates to be easily 4x the cost of building a model.

Trustwise is working with customers today who are looking ahead and trying to ensure that their AI efforts align with their corporate sustainability policies and objectives. They are interested in measuring and decarbonizing use cases, and that’s where Trustwise Optimize:ai comes in.

Trustwise has a layer of software that optimizes AI initiatives, much like the way we might use an application like WinZip to quickly and easily compress files. Trustwise can help customers select the right LLM for their particular use cases based not just on the cost of the LLM, but also on its carbon footprint. They then use separate compression techniques to adjust pipeline settings (such as RAG pipelines) and work with customers to configure a pipeline so that it uses less compute overall, not just a smaller model.
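To make the model-selection idea concrete, here’s a minimal sketch of scoring candidate models on both dollar cost and carbon intensity. The model names, prices, and emissions figures are invented placeholders, and the weighting scheme is my own illustration, not Trustwise’s method:

```python
# Illustrative cost-plus-carbon model selection. All figures are made up.
CANDIDATES = [
    {"model": "large-llm", "usd_per_1k_tokens": 0.0300, "gco2e_per_1k_tokens": 4.0},
    {"model": "mid-llm",   "usd_per_1k_tokens": 0.0020, "gco2e_per_1k_tokens": 0.9},
    {"model": "small-llm", "usd_per_1k_tokens": 0.0004, "gco2e_per_1k_tokens": 0.2},
]

def pick_model(candidates, cost_weight=0.5, carbon_weight=0.5):
    """Return the candidate with the lowest weighted cost + carbon score,
    normalizing each dimension against the worst candidate."""
    max_cost = max(c["usd_per_1k_tokens"] for c in candidates)
    max_co2 = max(c["gco2e_per_1k_tokens"] for c in candidates)

    def score(c):
        return (cost_weight * c["usd_per_1k_tokens"] / max_cost
                + carbon_weight * c["gco2e_per_1k_tokens"] / max_co2)

    return min(candidates, key=score)

best = pick_model(CANDIDATES)
```

In practice a real system would also weigh accuracy on the target use case — a cheaper, greener model only wins if it still meets the quality bar — but the two-axis trade-off is the core idea.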

The last part of the equation is that Trustwise helps customers select the right endpoint, whether that’s a cloud endpoint, a dedicated data server, or a particular processor card. The example Manoj provided: working with the Green Software Foundation, Trustwise has access to a library of datasets that can tell you, if you’re running an insurance-policy gen AI application, whether the best place to run it is Ireland, Poland, or perhaps South London, as each will yield a different carbon footprint. Change the Ireland endpoint to a dedicated data center, or to an NVIDIA processor, for example, and it will show that you can reduce your carbon footprint by 28%. This is a type of “red teaming” that’s been used in safety and cybersecurity for a long time, and Trustwise is using it to help customers design optimized AI strategies and systems.

As a result, Trustwise can tell companies that, based on their AI workloads, these configurations will deliver the best safety, the best alignment, the lowest financial cost, and the smallest carbon footprint.

Wrapping Up

I walked away from my conversation with Manoj thinking of Trustwise Optimize:ai as both a giant safety net and a flashing red “EASY” button that organizations can use as they embark on their generative AI journeys — one that could potentially save time, money, resources, and maybe even the planet. While many companies are in the early stages of experimenting with, developing, and deploying AI use cases, in far too many instances they have no idea how safe those use cases are, how much they will cost, what the business value or ROI is, or what impact these initiatives might have on their corporate sustainability goals and objectives.

A tool like Trustwise Optimize:ai addresses many of the issues we see customers wrestling with daily. It can help them make better-informed decisions about what to fund, what to scale, how to improve the safety of the models they’re developing, how to mitigate costs, and how to ensure that AI doesn’t eat corporate sustainability objectives for lunch.

Embracing and integrating generative AI into business operations is not an IT initiative; it’s a corporate-wide initiative involving many different stakeholders with many different concerns, spanning IT, business, risk and compliance, and audit.

While we didn’t spend a lot of time discussing it, Manoj mentioned that the Trustwise API Command Center was developed to address those individual stakeholders and their disparate concerns, providing a view into the system’s overall performance, compliance, and cost-effectiveness. This allows different stakeholders within the organization to see how the AI system is operating, which not only provides peace of mind around corporate gen AI initiatives but also enables better strategic decision-making going forward.

In almost every conversation I have, keynote I hear at an event, customer conversation, or vendor briefing I attend, the topic of AI is front and center. The importance of trusted, ethical, responsible AI cannot be overstated: customers expect it, employees expect it, vendor partners expect it, and business leaders must deliver. Add to that the reality that we cannot unleash AI on the world and reap its benefits while simultaneously destroying the planet; sustainability and mitigating the climate impact of AI workloads are truly mission critical for business.

I’m excited to watch Trustwise step up to the plate with Optimize:ai. I think it solves many of the challenges facing customers working to deploy and experiment with generative AI solutions — some of which they haven’t even begun to consider — and can no doubt make the process of embracing generative AI much less onerous. I’ll be tracking Trustwise’s progress as the company rolls out Optimize:ai, and I’m going to go out on a limb and say I expect good things ahead from Manoj Saxena and team.

Image source: Pexels / This is Engineering

See more of my coverage here:

Qlik Connect 2024: Where There’s Data, There’s Opportunity

CISA’s Secure By Design Pledge Continues to Build Momentum. Is it Basic? Maybe, But It’s a Start
