

C-Suite Perspectives On AI: Vall Herard Of Saifr On Where to Use AI and Where to Rely Only on Humans

An Interview With Kieran Powell

Humans should always be capable of replicating AI’s work. If AI cannot reliably replicate or approximate the work of leaders in your field, it’s not serving you. At Saifr, we validate our AI regularly by testing our system’s outputs against the suggestions of compliance leaders. If the outputs don’t stack up against those suggestions, there’s likely a data issue that needs to be solved.

As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Vall Herard, CEO of Saifr.

Vall Herard is the CEO of Saifr.ai, a Fidelity Labs company. He brings extensive experience and subject matter expertise to this topic and can shed light on where the industry is headed, as well as what industry participants should anticipate for the future of AI. Throughout his career, he has seen the evolution of AI use within the financial services industry. Vall has previously worked at top banks such as BNY Mellon, BNP Paribas, and UBS Investment Bank. He holds an MS in Quantitative Finance from New York University (NYU), a certificate in data & AI from the Massachusetts Institute of Technology (MIT), and a BS in Mathematical Economics from Syracuse and Pace Universities.

Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

My journey into the world of finance and AI has been an exciting and fulfilling one. After starting my career on Wall Street working in various roles involving quantitative risk management and financial engineering, I became increasingly invested in the potential of AI and machine learning to transform the financial industry. This ultimately led me to launch Saifr, with the mission of empowering financial institutions to navigate regulatory complexities through advanced AI. At Saifr, we’ve assembled an incredible team of experts in AI, data science, and financial services compliance to develop cutting-edge solutions that automate manual compliance tasks, detect potential risks in real time, and provide actionable insights for risk mitigation. Right now we solve two main problems: content review and anti-money laundering/know your customer (AML/KYC) screening.

SaifrReview® and SaifrScan® enable marketing, legal, and compliance teams within financial services to reduce risk in content creation, approval, and filing processes by highlighting potential brand and compliance risks in marketing content; explaining why something was flagged; proposing alternative language; and suggesting disclosures. Saifr’s AI can flag 90% of what a human would.

SaifrScreen® enables firms to accurately, efficiently, and continuously screen for potential threats in full customer and vendor populations by crawling and indexing over 227K structured and unstructured internet data sources worldwide 24/7. Saifr’s machine learning (ML) and natural language processing (NLP) can detect 7x more bad actors with 47% fewer false positives.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

When I was a young professional working in finance, I spent one late night finalizing models in an Excel spreadsheet. This was before simple but critical automation features like auto-save. The program froze, as it often did, and hours of work were lost. To make matters worse, there was a winter weather advisory in New York City that night. I spent the next four hours redoing my work until it was 2 a.m. and I was practically snowed in. I ended up walking the entire way home.

I’ll admit this story only became funny in hindsight, but it taught me a valuable lesson: Never lose your appreciation for everyday conveniences, AI-powered or otherwise. They can be life-changing.

My secondary lesson? Always save your work.

Are you working on any exciting new projects now? How do you think that will help people?

Most AI applications today solve single-step problems: for example, a simple prompt such as “Write an email to a client that covers [these topics].” However, few problems that we face are that straightforward. We are currently working on expanding our agentic AI capabilities to tackle more complex, multi-step problems in financial services compliance. By enabling AI systems to break down intricate tasks and delegate the pieces to both AI and non-AI agents, we believe we can automate processes that have traditionally required significant human intervention.
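To make the multi-step idea concrete, here is a minimal sketch of the general agentic pattern: a planner decomposes a request into steps and routes each step to an AI or non-AI handler. Everything here (the `plan` function, the `AGENTS` registry, the stub handlers) is illustrative and is not Saifr’s implementation.

```python
# Minimal sketch of the agentic pattern: a planner breaks a request into
# steps, and each step is routed to an AI or non-AI "agent". All names are
# illustrative; this is not Saifr's implementation.
from typing import Callable

# Registry mapping step types to handlers. An "agent" can be an LLM call,
# a rules engine, or a plain function; the orchestrator doesn't care which.
AGENTS: dict[str, Callable[[str], str]] = {
    "draft": lambda task: f"[LLM draft for: {task}]",          # AI agent (stub)
    "check_compliance": lambda text: f"[flags for: {text}]",   # rules/ML agent (stub)
    "file": lambda text: f"[filed: {text}]",                   # non-AI system call (stub)
}

def plan(request: str) -> list[str]:
    """Decompose a request into an ordered list of steps.
    A real planner might itself be an LLM; here it is hard-coded."""
    return ["draft", "check_compliance", "file"]

def run(request: str) -> str:
    result = request
    for step in plan(request):
        result = AGENTS[step](result)  # pipe each step's output into the next
    return result

print(run("Write a client email about the new fund"))
```

In production, each handler would be a real system and the planner itself might be an LLM; the point is that the orchestration, not a single prompt, carries the multi-step work.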

This will help financial institutions streamline their compliance efforts, reduce costs, and ultimately provide better customer service.

Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?

One of the most challenging aspects of integrating AI into our business has been solving agentic use cases given the technology’s current limitations. We’re working toward more complex, multi-step AI, but humans in the loop remain critical and will remain so for the foreseeable future. Remember, an AI capability is a model, and a model only approximates reality. Models will make mistakes that humans can catch with judgment and intuition. We’ve learned that the key is to use AI to augment and support human decision-making rather than trying to replace it entirely. This requires ongoing collaboration and communication between our AI teams and subject matter experts.

Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?

I’ll take this opportunity to address an industry-wide misconception. Large language models (LLMs) are incredibly powerful, and their capacity to generate content enticed many organizations to mass-produce and disseminate content early in the technology’s lifespan. However, we now know that content creation at scale in heavily regulated industries, especially content “penned” by AI, can be highly dangerous. The nuances of complying with internal rules and external regulators are too complex to rely solely on LLM logic. Luckily, AI compliance tools can provide the needed guardrails, letting leaders leverage the efficiency of LLMs without risking non-compliance.

How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?

I suggest that leaders considering AI remember the importance of data provenance. AI operates on the data it is trained on, so biased or otherwise unrepresentative data will lead AI to perpetuate those biases down the line. For example, if hiring managers use data on previous applicants to define “the perfect candidate” and then train an AI system on that data, they risk encoding historical bias: perhaps the pool of past candidates skews overwhelmingly toward one narrow segment of the population. Humans can spot these troubling patterns; AI, less so. Thus, human-vetted data is critical.
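Concretely, that kind of vetting can start with a simple skew check before any model is trained. The dataset, field names, and 60% threshold below are all hypothetical; the point is only the pattern of inspecting training data for a dominant segment.

```python
# Hypothetical pre-training check: flag any field where one value dominates
# the historical hiring data, the skew the interviewee warns about.
from collections import Counter

past_hires = [
    {"school": "A", "gender": "M"}, {"school": "A", "gender": "M"},
    {"school": "A", "gender": "F"}, {"school": "B", "gender": "M"},
]

def skew_report(rows, field, threshold=0.6):
    """Return values of `field` covering more than `threshold` of rows."""
    counts = Counter(r[field] for r in rows)
    n = len(rows)
    return {v: c / n for v, c in counts.items() if c / n > threshold}

for field in ("school", "gender"):
    dominated = skew_report(past_hires, field)
    if dominated:
        print(f"warning: '{field}' skews toward {dominated}; vet before training")
```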

At Saifr, we implement these best practices by continuously vetting the quality of our system’s inputs and outputs. Furthermore, we have access to millions of proprietary data points vetted by experts, so our AI engines are incredibly informed.

Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?

AI’s primary “improvement” over human ability is efficiency. So, we’re most impressed by our engine’s ability to regularly produce outputs that match the expertise of our subject matter experts (SMEs). That isn’t by accident but by design: we have SMEs embedded in the data science team, and we vet models regularly as we build them. This helps our models reach the same conclusions and make the same suggestions as high-performing human compliance officers. When that’s true, AI accomplishes the same level of work as humans, just much faster. That’s synergy at its best.

Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career?

1. Humans should always be capable of replicating AI’s work. If AI cannot reliably replicate or approximate the work of leaders in your field, it’s not serving you. At Saifr, we validate our AI regularly by testing our system’s outputs against the suggestions of compliance leaders. If the outputs don’t stack up against those suggestions, there’s likely a data issue that needs to be solved.
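In practice, that validation can be as simple as an agreement check between model flags and expert labels. The function below is a sketch; the toy labels and the 90% bar are assumptions for illustration, not Saifr’s actual thresholds.

```python
# Sketch of the validation loop in point 1: compare model flags against
# labels from human compliance experts and measure agreement.
def agreement_rate(model_flags: list[bool], expert_flags: list[bool]) -> float:
    assert len(model_flags) == len(expert_flags)
    matches = sum(m == e for m, e in zip(model_flags, expert_flags))
    return matches / len(expert_flags)

# Toy labels: True means "this sentence needs a compliance flag".
model  = [True, False, True, True, False]
expert = [True, False, True, False, False]

rate = agreement_rate(model, expert)
print(f"model/expert agreement: {rate:.0%}")
if rate < 0.9:  # hypothetical bar; a miss suggests a data issue, per point 1
    print("below bar: investigate the training data before trusting outputs")
```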

2. Know the provenance of your model’s data and avoid using AI for ethically ambiguous tasks. LLMs with opaque datasets can be dangerous, opening organizations up to non-compliance fines. The safest way to avoid these situations is to only partner with AI providers operating on robust and proprietary data. (And if you’re not sure about your current AI partners, I highly recommend asking them about their data’s provenance.)

Additionally, don’t rely on AI to make decisions in ethically ambiguous situations. As discussed earlier, AI can perpetuate biases. For example, a judge might use AI to help make sentencing consistent: same crime, same punishment. However, the historical data used to build such models likely has embedded biases, which would need to be addressed before the model could deliver on the promise of fairness.

3. Classify AI use cases into “risk buckets.” As AI and data leaders consider the appropriateness of using AI, it’s wise to categorize different use cases into risk buckets. Questions to ask here include:

  • Will using AI in this situation have a tangible and potentially negative impact on a human? If the answer is “yes,” you’re dealing with a high-risk use case.
  • Is there an incredibly strong benefit to using AI over a human in this situation? If the answer is “no” and the answer to the first question is “yes,” it’s probably best to avoid AI.

As a rule of thumb, anything that denies a human the freedom of choice should be considered high-risk — for example, the final decision on a loan or job opportunity.
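Those two questions, plus the freedom-of-choice rule of thumb, translate directly into a triage function. The bucket names and rules below mirror the prose; the function itself is an illustrative sketch, not a formal framework.

```python
# Sketch of the "risk bucket" triage described above.
def risk_bucket(affects_human: bool, strong_ai_benefit: bool,
                denies_choice: bool) -> str:
    if denies_choice:  # rule of thumb: removing freedom of choice is high-risk
        return "high-risk: keep a human decision-maker"
    if affects_human and not strong_ai_benefit:
        return "high-risk, low benefit: avoid AI here"
    if affects_human:
        return "high-risk: use AI only with human review"
    return "low-risk: AI is a reasonable fit"

print(risk_bucket(affects_human=True, strong_ai_benefit=False, denies_choice=False))
print(risk_bucket(affects_human=True, strong_ai_benefit=True, denies_choice=True))
print(risk_bucket(affects_human=False, strong_ai_benefit=True, denies_choice=False))
```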

4. Ensure your AI model is explainable. Understanding your AI model’s operational data and rationale for outputs is critical. Too many AI systems currently operate as black boxes, providing little to no evidence for their conclusions. At Saifr, we implement explainability by showing users the variables that were considered during the decision-making process. Our models explain why they flagged a sentence as problematic; they will tell you whether the language was promissory, not fair and balanced, and so on.
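As a sketch of what explainable output looks like, the toy flagger below returns a reason alongside each flag. The categories come from the prose; the keyword rules and trigger words are invented stand-ins for Saifr’s actual models.

```python
# Toy explainable flagger: return not just a verdict but the reason.
RULES = {
    "promissory": ["guaranteed", "will double", "risk-free"],
    "not fair and balanced": ["only upside", "no downside"],
}

def flag_sentence(sentence: str) -> list[str]:
    """Return the compliance categories this sentence trips, with triggers."""
    lowered = sentence.lower()
    reasons = []
    for category, triggers in RULES.items():
        hits = [t for t in triggers if t in lowered]
        if hits:
            reasons.append(f"{category} (triggered by: {', '.join(hits)})")
    return reasons

print(flag_sentence("Returns are guaranteed and risk-free."))
# -> ['promissory (triggered by: guaranteed, risk-free)']
```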

5. Remember that empathetic tasks require human oversight. There is something irreplaceable about human compassion. AI, though incredibly advanced, cannot provide the emotional impact that a human can. Therefore, certain jobs are better left to human professionals. This is especially true during high-touch moments when people feel more vulnerable than usual (for example, during therapy or counseling sessions).

Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?

Looking ahead, AI will have the most transformative impact in the areas of financial crime detection and regulatory reporting. By leveraging advanced machine learning techniques and natural language processing, we can identify patterns and anomalies that would be nearly impossible for humans to detect manually. This will help financial institutions better combat money laundering, fraud, and other illicit activities.
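For a flavor of the anomaly-detection side, here is a generic sketch using scikit-learn’s IsolationForest on toy transaction features. This is an off-the-shelf technique chosen for illustration, not a description of Saifr’s models, and every feature and number is invented.

```python
# Generic anomaly detection on toy transaction data with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: [amount_usd, transfers_last_24h]
normal = rng.normal(loc=[100.0, 2.0], scale=[30.0, 1.0], size=(500, 2))
suspicious = np.array([[9000.0, 40.0], [7500.0, 35.0]])  # injected outliers
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # -1 marks an anomaly
print("flagged rows:", np.where(labels == -1)[0])
```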

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

Creating technology that makes the world a better place by making knowledge, and by extension education, more accessible. This may sound cliché, since every company talks about changing the world, but I see that much promise in AI.

The invention of writing fundamentally changed humanity’s evolutionary path by making knowledge more accessible. Yet even as the world becomes more digital and complex, only about 40% of the world’s population has some level of tertiary education. Even within that 40%, the overwhelming majority are from wealthy families who can afford the associated cost. While the level of primary and secondary education has generally increased, this is insufficient to keep up with an increasingly complex world. Technology, including AI, that can make tertiary education a zero-cost pursuit will reach the greatest number of people and improve the world.

How can our readers further follow your work online?

Readers can follow our work at Saifr through our website and LinkedIn. We regularly publish thought leadership content and case studies that showcase the impact of AI on financial services compliance. I also frequently speak at industry conferences and events, sharing my insights on the latest trends and best practices in AI governance, ethics and more.

This was very inspiring. Thank you so much for joining us!

About The Interviewer: Kieran Powell is the EVP of Channel V Media, a New York City public relations agency with a global network of agency partners in over 30 countries. Kieran has advised more than 150 companies in the Technology, B2B, Retail and Financial sectors. Prior to taking over business operations at Channel V Media, Kieran held roles at Merrill Lynch, PwC, and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can be leveraged to achieve concrete business goals.


C-Suite Perspectives On AI: Vall Herard Of Saifr On Where to Use AI and Where to Rely Only on Humans was originally published in Authority Magazine on Medium.