
C-Suite Perspectives On AI: Duri Chitayat On Where to Use AI and Where to Rely Only on Humans

An Interview With Kieran Powell

Experimentation Framework: Ensure the organization has a mechanism for experimenting in a low-risk, high-reward way. Most traditional investment processes force premature decision-making and incremental mindsets. AI adoption has great promise, but not every idea will work out. At Safeguard Global, we embraced Dojos and Innovation Sprints as ways to spark and celebrate change and risk-taking as part of the culture. AI solutions require continuous refinement. Organizations that bring more people into the conversation with AI will create additional value by providing multiple perspectives.

As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Duri Chitayat, CTO of Safeguard Global.

As Chief Technology Officer, Duri leads Safeguard’s technology team to deliver products and experiences that improve people’s lives, including ChatSG, which brings AI to payroll and HR. Safeguard Global is a future of work company that helps workers and companies thrive in the global economy. Backed by a data-rich technology platform, local expertise, and industry-leading experience, Safeguard Global provides end-to-end solutions to manage people and scale operations.

Duri holds degrees from Boston College and NYU Stern and is currently earning his master’s degree in computer science from Johns Hopkins University. He has developed and directed high-performance engineering organizations in AdTech, MedTech, Banking, and Finance on three continents.

Duri Chitayat LinkedIn
Safeguard Global LinkedIn

Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

My journey in the tech world began by watching and learning from my father’s work as an inventor. His serial inventions were a wonder and included the Linear Motor. He founded and served as president of Anorad Corporation, now part of Rockwell. After seeing his work and growing my enthusiasm for the space, I spent my early career working with robotics and manufacturing companies. One of my first software programs was an enterprise resource planning system for tracking the thousands of tiny components that are needed in printed circuit board manufacturing. After developing this program, I went on to co-found a startup and took the insights I gained in technology, process and leadership to a consulting career. During this time, I focused on the people, technology and processes that enable organizations to compete in high-speed and constantly evolving industries.

As CTO at Safeguard Global, I help lead our teams to develop products that improve workplaces and hiring processes. With Safeguard Global, organizations can recruit, hire, pay, analyze and manage workers and their operations anywhere in the world, at any scale.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

Early in my career, I didn't understand or agree with some aspects of traditional corporate culture, particularly the need to wear a formal suit and tie just to sit in meetings. More than once, I was kicked out of conference rooms for not showing up in the expected suit-and-tie attire.

At the beginning of the pandemic, I was working at a wealth management company, and I had to wear formal clothing to the office. When the pandemic hit, I asked myself if I’d ever wear those clothes again. Now, four years later, they’re still gathering dust in my closet! Our culture at Safeguard Global invites you to show up as you are. I think it’s a wonderful thing to be able to be comfortable at work, to be able to express myself in the way I choose and to be judged for my work, not what I’m wearing.

Are you working on any exciting new projects now? How do you think that will help people?

Safeguard Global places significant emphasis on innovation to improve our clients’ experience. We staff teams around all our critical products to make small, gradual improvements every day. That’s one of the reasons we’re able to release updates frequently. We also put special focus on breakthrough innovation: radical, game-changing innovation that fundamentally changes something or introduces a new service altogether.

One of the things that makes global HR challenging is that it is extremely information intensive. And it's not enough to learn something once; you must keep up with it, because legislation, customs and talent markets change. To address this, our team is extending Global Unity (our global workforce enablement platform) to include more "Intelligent HR" capabilities. For example, we released ChatSG, a conversational AI for the EOR market. ChatSG leverages our proprietary HR and payroll knowledge base, giving everyone fast, reliable answers.

Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?

AI is weird. What I mean is, until you have had enough practical firsthand experience designing, building and using these systems, your intuitions about it will often be wrong. That is as true for executives as it is for engineers as it is for operations. The problem is that what we imagine is often limited to our experience. So, despite our best efforts, our intuition is naturally biased towards incremental innovation. I believe this is why most people imagine AI as an automation tool — it does not challenge the nature of the system, simply who or what is doing a given task. However, there are many breakthrough innovation opportunities, and they require a holistic reimagination of the system. The first, most fundamental problem to solve is getting enough of the right people with the knowledge and expertise to start your AI transformation journey. That’s the ‘Talent Gap’.

At Safeguard Global, our approach is to deliberately develop the skills in the organization through short, cross-functional learn-by-doing workshops called Dojos. And when we start a new initiative, we do so initially as an “innovation sprint” — a lightweight, time-bound and cross-functional effort. The concept is simple: when you bring people together who have an interest in learning and put their hands on the keyboard in a low-risk environment where creativity and learning are the goals, people’s biases and fears are more easily overcome. Then they know enough to take the next step.

The second, related gap is the 'Culture Gap'. Culture is about the values that drive decisions without direction, ideally without us even thinking of them as decisions. It's just "how we do things here". In most organizations, people perform work that is routine; even when it's complicated, it is usually work we have done before. In the future, with AI, the emphasis for people shifts away from the routine towards the more novel and strategic. The concept of expertise will move away from the fast, intuitive and automatic mode of thinking gained through 10,000 hours of experience and towards the slower, more deliberate and effortful mode of thinking.

We invest in AI as a partner to our people, and at the same time we must invest in our people to be effective partners to AI. In the operations space, this can mean reskilling HR managers to research and feed compliance updates to AI systems, helping to grow and maintain our knowledge base.

There are also ethical and practical considerations that require "human in the loop" strategies and scrutiny of the behavior of AI-powered systems. In the recruiting space, this means AI completes a first review of candidates' applications and handles confirmation questions and initial steps, while our recruiting team analyzes the data the AI systems produce and periodically spot-checks the entire flow. Part of the human element is the constant analysis and back-testing of AI outputs. In AI circles, this is known as the "alignment problem": it is essential that AI systems reflect our values and intentions, and that we take responsibility for their behavior. This requires a commitment to understanding the technology and training it with unbiased content. Beyond IT, all departments must invest in understanding these tools so that they can properly partner with them.

Bottom line: AI-powered systems should be viewed as sociotechnical systems — a blend of people and technology engaged in complex, interdependent collaboration. In these systems, people bring at least two essential ingredients: values and critical thinking.
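The human-in-the-loop recruiting flow described here can be sketched in a few lines. This is a minimal illustration only; the scoring function, thresholds and spot-check rate are hypothetical assumptions, not Safeguard Global's actual pipeline:

```python
import random

def ai_screen(application):
    # Hypothetical stand-in for a model call: returns a score in [0, 1].
    # A real system would invoke a screening model or API here.
    return min(1.0, len(application.get("skills", [])) / 5)

def screen_with_oversight(applications, spot_check_rate=0.2, seed=42):
    """AI performs the first-pass review; a human review queue receives
    every borderline case plus a random sample of others for spot checks."""
    rng = random.Random(seed)
    advanced, human_review = [], []
    for app in applications:
        score = ai_screen(app)
        if 0.4 <= score < 0.7 or rng.random() < spot_check_rate:
            human_review.append(app)   # borderline or sampled: humans decide
        elif score >= 0.7:
            advanced.append(app)       # clear pass from the AI first review
    return advanced, human_review
```

The key design point is that the human queue is fed two ways: deterministically for borderline scores, and randomly for back-testing, so the team continuously audits the AI's judgments rather than only its uncertain ones.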

Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?

AI has been used to automate recruiting and hiring tasks for years, including resume screening and initial candidate evaluations. Used correctly, it can accelerate hiring and improve the applicant experience.

AI's limitations come from the risk of overlooking qualified candidates when the system's algorithm fails to recognize the full range of relevant skills and experiences, or the versatile nature of people's careers.

This is why AI must be backed by human oversight. AI keeps daily tasks fast and the experience positive, while the human element ensures a holistic view of each candidate.

How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?

Peter Drucker said, “The best way to predict the future is to create it.” A corollary is: The best way to protect the future is to create it.

I believe that AI innovation will inevitably disrupt most jobs. So, the fact that there will be job displacement is a given. But jobs are not people, jobs are just what people do. And given the right vision and support, people adapt. I’m lucky that Safeguard Global already has a strong commitment to the best talent. And because our organization is dedicated to knowledge and expertise, our people will always be our greatest competitive advantage. Moreover, we are innovators, which means we bring our values and judgment to these problems and evolve with them.

Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?

Global HR requires significant knowledge and research. ChatSG uses advanced natural language processing to interpret questions and retrieve relevant data in seconds in easy-to-understand language. With this technology, questions that would traditionally take several days to be answered can be accomplished in seconds. Behind the scenes, there’s a lot of effort and care put in by people to develop and maintain this knowledge base. Moreover, we’re continuously responding to feedback from users (internal and external) to improve and extend the knowledge areas.

One of the more interesting parts of this example is its emergent nature. We started with the knowledge base we’d developed and used internally, but quickly realized the critical nature of the feedback loop between questions asked and the need to integrate new data sources or create new content. We identified patterns of who was participating and who was not. We designed incentives and nudges to make it fun and to recognize people who leaned in.

Perhaps saying we “build” an AI-powered system is misleading. Maybe the better way to say it is that we “grow” an AI-powered system. A successful AI-powered system involves complex interactions between people and technology. Predicting precisely how these interactions will go in advance is always difficult, and sometimes impossible. In a way, those responsible must operate like gardeners: observe carefully the system and respond with probing adjustments. Amplify what works and dampen what doesn’t.
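The feedback loop described above, retrieval backed by human curation of the gaps, can be sketched as a toy Python class. The class name, matching logic and method names are illustrative assumptions, not ChatSG's actual architecture:

```python
class KnowledgeBase:
    """Toy retrieval-plus-feedback loop: answers come from curated
    entries; questions with no match are logged as content gaps for
    human experts to fill, closing the loop between what users ask
    and what the knowledge base covers."""

    def __init__(self):
        self.entries = {}   # topic -> answer text, maintained by people
        self.gaps = []      # questions we could not answer yet

    def add_entry(self, topic, answer):
        self.entries[topic.lower()] = answer

    def ask(self, question):
        q = question.lower()
        for topic, answer in self.entries.items():
            if topic in q:          # naive substring match for illustration
                return answer
        self.gaps.append(question)  # feed the human curation loop
        return None
```

In this "gardener" framing, the `gaps` list is what the team observes and responds to: unanswered questions signal where to integrate new data sources or create new content.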

Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career?

Okay, I’m going to repeat a few things and introduce a couple of considerations.

  1. Talent Gap: The essential ingredient to adopting AI is not technical, it’s people. Organizations need enough of the right people, with an understanding of both AI and the organization itself. At Safeguard Global, we address the talent gap by fostering an innovative learning culture through immersive, cross-functional workshops. These bring employees from different departments together, creating a collaborative environment where everyone can gain firsthand experience with AI technologies. This not only enhances the AI literacy of our workforce but also cultivates a proactive, innovation-focused mindset throughout the organization. Targeted educational programs like these bridge the talent gap, ensuring our team is prepared and equipped to navigate the challenges and opportunities presented by AI. This is crucial to maintaining our competitive edge in the global market.
  2. AI is Weird: Don’t trust your intuition until you have firsthand practical experience designing, building and maintaining an AI-powered system yourself. The latest wave of AI doesn’t follow a standard set of business rules. Its power is in its general-purpose application and its generative nature. It can blend with human workflows to form sociotechnical systems that increase in effectiveness over time. For example, ChatSG illustrates this blend perfectly, using AI for rapid data processing while people enrich the system with depth, context and continuous learning from nuanced professional experiences.
  3. Innovation not Automation: Breakthrough innovation potential lies in the reimagination of whole services and even business models. Consider how the blending of people and technology enables new dynamic capabilities for your customers. In developing solutions like ChatSG at Safeguard Global, we realized that emergent interactions were a vital part of fostering a responsive and adaptable AI system. By treating AI development as a collaborative endeavor, both the creators and the end-users help the system to continuously improve.
  4. Ethical Considerations: AI, particularly large language models like ChatGPT, repeats patterns learned from broad sources, not only your database. AI doesn’t understand your organization’s culture or values; it lacks the capacity for empathy and the nuanced understanding of context that people possess. Whether people perform the task or provide oversight, the alignment problem requires people to remain involved and keep a critical eye on aligning the system’s behavior with the organization’s values and culture. For example, at Safeguard Global, we place special emphasis on cross-departmental collaboration, education and back-testing, and on the people involved in these systems. In recruitment, we periodically reevaluate the system for fairness, diversity and accuracy. AI assists in resume screening, but people play a crucial role in evaluating candidates’ broader potential and ensuring a fit with organizational culture, guaranteeing a comprehensive approach to talent acquisition.
  5. Experimentation Framework: Ensure the organization has a mechanism for experimenting in a low-risk, high-reward way. Most traditional investment processes force premature decision-making and incremental mindsets. AI adoption has great promise, but not every idea will work out. At Safeguard Global, we embraced Dojos and Innovation Sprints as ways to spark and celebrate change and risk-taking as part of the culture. AI solutions require continuous refinement. Organizations that bring more people into the conversation with AI will create additional value by providing multiple perspectives.

Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?

I believe the entire HR ecosystem will be reimagined as an Intelligent HR platform, and that’s precisely what we’re doing. HR is complicated, but we can make it easier to understand and more valuable. Anything routine will swiftly become a technical responsibility. But as discussed earlier, parallel processes will emerge that require creativity and careful problem solving.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

I believe that reading, especially across diverse genres such as science fiction, fantasy, non-fiction and philosophy, can significantly expand one’s perspectives, enhance decision-making skills and foster innovation and empathy in leadership.

Drawing inspiration from my own journey as an avid reader, I’ve found that the imaginative worlds of science fiction and fantasy stimulate creative thinking and problem-solving abilities. These genres encourage us to envision alternate realities and solutions, fostering a mindset that is adaptable and innovative — qualities essential for effective leadership in today’s rapidly changing world.

Moreover, non-fiction and philosophical works, particularly those by Karl Popper and Thomas Kuhn, have profoundly influenced my management style. Popper’s emphasis on the philosophy of science, with its principles of falsifiability and critical rationalism, has taught me the importance of maintaining an open mind and being willing to challenge and revise my assumptions. Similarly, Kuhn’s ideas on paradigm shifts in scientific revolutions have highlighted the value of embracing change and the transformative power of adopting new perspectives in overcoming obstacles.

How can our readers further follow your work online?

You can learn more about Safeguard Global on our website, https://www.safeguardglobal.com/. If you’re interested in learning more or testing our ChatSG tool, visit https://www.app.safeguardglobal.com/explore/chatsg.

Feel free to connect with me on LinkedIn at https://www.linkedin.com/in/durichitayat/.

This was very inspiring. Thank you so much for joining us!

About The Interviewer: Kieran Powell is the EVP of Channel V Media, a New York City public relations agency with a global network of agency partners in over 30 countries. Kieran has advised more than 150 companies in the Technology, B2B, Retail and Financial sectors. Prior to taking over business operations at Channel V Media, Kieran held roles at Merrill Lynch, PwC and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can be leveraged to achieve concrete business goals.


C-Suite Perspectives On AI: Duri Chitayat On Where to Use AI and Where to Rely Only on Humans was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.