C-Suite Perspectives On AI: Kevin Dominik Korte Of mPath AI On Where to Use AI and Where to Rely Only on Humans

An Interview With Kieran Powell

Do humans think of it as boring?

One of the biggest determinants of where to use AI will be how engaging a task is for the person performing it. Everyone has repetitive tasks they would rather offload. Yet we can’t, as no one else is there to perform them. Thus, we rush through them and potentially make little, careless mistakes.

Reimbursements and receipt capturing are areas where machine learning has made huge strides. These days, we can capture almost any receipt with our phones. No more mistyped totals or wrong item listings.

As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Kevin Dominik Korte.

Kevin Dominik Korte serves on the Boards of mPath AI and Market Intend and on the Advisory Boards of The Backpackster, TARGA, and Moxey. As a board member, Kevin is responsible for the companies’ strategic oversight and risk management. He focuses on supporting entrepreneurs in navigating the changing world of technology, cybersecurity, and AI by providing a strong combination of technical understanding and financial acumen. Kevin’s forward-facing personality and broad knowledge make him a central figure in AI. He is a trusted voice for both technical questions and the C-Suite.

He previously led the US team at Univention, helping clients use open-source identity management systems. Univention provides standardized identity management systems for organizations ranging from 5 people to 5 million. Kevin’s team provided sales, support, and professional services for clients in the USA, Canada, and Mexico, with a particular focus on optimizing clients’ identity and access management and helping them reach their goals more efficiently.

Kevin gained his initial experience in Univention’s Professional Service Team, where he was primarily responsible for rolling out the world’s first commercial Samba 4 implementations. Kevin earned his MSc and BSc in Computer Science from Jacobs University Bremen. A German native, Kevin moved to Seattle in 2013, where he enjoys a seldom quiet life with his wife, two children, and a family of brown bears that often walk through the garden.

Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

Growing up in Germany as a millennial, I experienced the advent of our second information age firsthand. Cellphones went from a thousand dollars for a brick to 200 dollars for a smartphone and back to a thousand dollars for a phablet. Social media went from the exciting ping of an AOL instant message to Facebook and the horror of seeing people surrender fully to social media. Living through this societal change has taught me a valuable lesson — the need to control your data, life, and destiny.

Accepting that I can only control myself has been one of the drivers behind my decision-making. It has helped me develop a keen focus on what I can control, because even with outside events, you can control how you respond. Since then, I have enabled others to take control of their world, whether as a board member for my alumni association, at Toastmasters, or by assisting people in keeping control over their organizations’ IT. This desire led me to Univention, where I spent the longest stretch of my career. As a company, it strives to “be open” and promote open source and digital sovereignty.

Ultimately, my ideals have brought me to angel investing, board membership, and working as a mentor for founders. After all, there are few greater achievements than controlling your company and improving the world we all share.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

One of my first investments in AI happened in 2017. The company specialized in using hidden Markov models, a particular form of machine learning, to predict network traffic behavior and find abusive patterns. The software looked at your network traffic and flagged unusual changes in the overall pattern. While it sounds relatively simple, in a network with partly remote workers, traveling sales employees, and executives attending meetings and conferences, it is rather complex to find malicious changes.
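The idea behind that product can be illustrated with a tiny hidden Markov model. This is a toy sketch, not the company’s actual software: the two hidden states (“office hours” vs. “off hours”), the discretized traffic levels, and all probabilities are invented for illustration; a real system would learn its parameters from historical traffic.

```python
import math

# Toy HMM over discretized traffic levels: 0 = low, 1 = medium, 2 = high.
# Hidden states: 0 = "office hours", 1 = "off hours".
# All probabilities below are invented for illustration.
START = [0.5, 0.5]
TRANS = [[0.9, 0.1],        # state-transition probabilities
         [0.1, 0.9]]
EMIT = [[0.1, 0.3, 0.6],    # office hours: mostly high traffic
        [0.7, 0.2, 0.1]]    # off hours: mostly low traffic

def log_likelihood(obs):
    """Scaled forward algorithm: log P(obs) under the toy HMM."""
    alpha = [START[s] * EMIT[s][obs[0]] for s in range(2)]
    ll = 0.0
    for o in obs[1:]:
        scale = sum(alpha)            # accumulate probability mass so far
        ll += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[p] * TRANS[p][s] for p in range(2)) * EMIT[s][o]
                 for s in range(2)]
    return ll + math.log(sum(alpha))

def is_anomalous(obs, threshold=-1.0):
    """Flag a window whose average per-step log-likelihood is too low."""
    return log_likelihood(obs) / len(obs) < threshold
```

A steady high-traffic window scores well under this model, while traffic that flips between extremes at every step scores poorly and gets flagged; a production system would tune the threshold against observed baselines.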

The company founders and their team built an impressive piece of software and acquired a few customers. In August 2022, the company received a purchasing offer from a leading networking solution provider. I participated in some of the meetings as an investor representative.

Just at the end of the due diligence analysis, where the buyer evaluates the company’s state, OpenAI released ChatGPT. Suddenly, AI was on everyone’s mind, and every executive pretended to understand this new technology.

It initially meant that our company valuation suddenly multiplied, and the buyer significantly increased their offer. Yet, it also created one significant road bump. The founders, being scientists used to precise terminology, kept referring to our technology as machine learning and categorically refused to use the term AI.

During a Zoom meeting, the difference between AI and machine learning again threatened to derail the business discussion. I was frustrated and said we needed more human intelligence in the room. Unfortunately, my mic was on, and everyone heard my outburst. On the upside, it moved the discussion away from arguing over wording and into more productive pathways.

Ultimately, it has shown me that it is crucial to speak the same language and be clear about what terms to use. Otherwise, even if the words make sense, you might not understand each other. It also taught me that sometimes, these embarrassing yet human moments can go a long way in cooling things down and making everyone reconsider their positions.

Are you working on any exciting new projects now? How do you think that will help people?

Coaching and mentoring have long been hallmarks of successful business leaders. They are the secret weapons we use to get ahead, maintain our drive and development, and bounce ideas off. Yet, mentors are also an expensive investment. Even on successful marketplaces, rates run to 500 USD per month or more.

Most don’t realize that a lot of the time mentors spend is used for these little daily check-ins by text or small affirmations to keep on going between the major meetings. These little tasks can easily be automated using AI. After all, simple prompts like: “Good luck reaching out to your contact today!” or “Remember, you’ll feel great after completing the cold call!” can quickly be asked by AI. Yet, they can give the mentee a significant push in reaching success.

I joined mPath AI as an investor and board member about a year ago. mPath AI combines the best human coaches with multiple AI technologies. Thus, we can reduce coaching costs while still providing the benefits of personalized attention and real human beings.

Thus, mentoring and coaching, two critical tools for long-term success, will become more accessible and democratized, allowing upcoming leaders access to the development tools they need.

Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?

The biggest challenge is dealing with user expectations. Especially outside of the tech sector, we expect AI to be closer to C-3PO from Star Wars or Data from Star Trek. These androids are highly accurate and competent and dish out a slight dose of sarcasm and situational humor.

Unfortunately, today’s AI isn’t at that level. It doesn’t learn from mistakes or create new knowledge unless reprogrammed. Worse, our human drive to create equitable solutions has created AI systems that are neither accurate nor trustworthy. Google’s diverse Nazi soldiers and Native American Vikings will long remain an example of what can happen to AI if corporate politics are valued above accurate information.

As AI (thankfully) isn’t self-aware, it is up to humans to set the guardrails around AI and ensure that the results match the user’s expectations. For example, I expect a correct answer if I ask Air Canada’s chatbot questions about airline policy. I don’t expect it to adopt United Airlines’ policy on the fly, even if that was in the training data. Setting AI boundaries and testing the results will remain a human task for a long time, especially as we are great at behaving in unexpected ways.

Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?

Many of our interactions with HR are policy related. We ask for details about leaves, open enrollment, and corporate education. These questions have standardized answers that a computer can efficiently provide. With a language model, you can process employee questions even if they don’t use the exact terms, e.g., “the rules about going on holiday” vs. “vacation policy.”
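The paraphrase-matching idea can be shown with a deliberately simple sketch. A real system would presumably use a language model; this toy version only normalizes a few hand-picked synonyms and picks the policy title with the most overlapping words. All topics, synonyms, and policy texts below are invented.

```python
# Toy stand-in for language-model matching: map a free-form question
# onto a known policy topic via keyword overlap after synonym
# normalization. All entries are illustrative.
SYNONYMS = {"holiday": "vacation", "holidays": "vacation", "pto": "vacation",
            "rules": "policy", "regulations": "policy"}

POLICIES = {
    "vacation policy": "Employees accrue 1.5 vacation days per month.",
    "parental leave policy": "Twelve weeks of paid parental leave.",
}

def normalize(text):
    """Lowercase, strip question marks, and map words to canonical synonyms."""
    words = text.lower().replace("?", "").split()
    return {SYNONYMS.get(w, w) for w in words}

def best_policy(question):
    """Return the policy title sharing the most normalized words with the question."""
    q = normalize(question)
    return max(POLICIES, key=lambda title: len(q & normalize(title)))
```

With this, “the rules about going on holiday” normalizes to words containing “policy” and “vacation,” so it maps to the vacation policy even though neither original word appears in the title.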

Yet, when an acquaintance of mine implemented such a system with a test client, he got a wholly unexpected result. While the system provided the correct answers, HR satisfaction went down. In fact, during the quarterly survey, employees named the AI system the number one problem with their corporate culture.

The policy questions often provided a general entry point to deeper conversations between the HR representative and the employee. Even where the question didn’t lead to any more profound issues, the casual conversation around the question allowed the two employees to build a short connection, making them feel part of a team.

Ultimately, the product pivoted from giving the employee the answer to helping the HR representative find the correct information. This setup combined the satisfaction of human interaction with AI’s strength in getting information quickly and efficiently.

The most important lesson here was that humans are complex. Notably, when we deal with feelings such as satisfaction, a seemingly optimal technical solution can have a negative outcome. Thus, we shouldn’t underestimate the importance of user testing.

How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?

One of the best plays on morality and scientific advancement is “The Physicists” by the Swiss author Friedrich Dürrenmatt. In the play, the inventor of a new nuclear technology commits himself to a psychiatric institution to keep his invention hidden. Unknown to the inventor, the owner of the institution has stolen and monetized the invention, and others have continued developing new forms.

The same is true with AI. We won’t be able to stop its development. Someone will always want to utilize the inventions. What we have to do is balance the societal advancements with the risks. While we might get it wrong sometimes, that is part of being human.

However, we have to acknowledge when we get it wrong. The reason Google’s diverse Nazi images were so harmful is that the company was very reluctant to admit it got things wrong and to say what it would do better in the future. That opened the way for right-wing radicals to present the images as facts, significantly multiplying the damage done. If, instead, Google had clearly stated that applying DEI adjustments to the model after the fact, rather than curating the training data, was the mistake, it could have moved the discussion to a technical level. Instead, we kept hearing about the problems of DEI and how Google shows that Nazis were diverse and not evil. Being humble would have blunted the impact and allowed a better focus on the ethical implications.

Lastly, the Industrial Revolution shows that society was shaped by the whole of society, not just the industry leaders or the workers. We shouldn’t leave the ethical considerations up to inventors and CEOs. We must unite as citizens to decide the guardrails and moral grounds on which we want the new technology to stand.

Thus, my three considerations are: First, we must acknowledge that we cannot stop technological advancements. Second, we must enable society to decide how we want our communities to look. Third, if we get it wrong — and we will get at least parts wrong — we must acknowledge our errors and learn from them instead of trying to hide our failures.

Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?

One of my portfolio companies, Market Intend, focuses on making sales more efficient. To achieve this, we have AI research the target companies so that sales employees don’t have to search through dozens of websites and build their own profiles. Instead, humans can focus on connecting with the target clients and closing the deal.

Taking the data one step further, AI can find prospects similar to the customers where you were already successful, taking much of the guesswork out of defining the target group.

The product is successful because it combines the strengths of humans and computers. The system is excellent at combing through data, making it possible to quickly answer queries like “Find all metalworking companies that ship with UPS but not with USPS or FedEx.” On the other hand, humans are great at building a connection with prospects. After all, the last thing we need is more computer-generated spam, even if AI might generate better spam. It also builds on the point that we prefer to buy more expensive items from people we have built a rapport with rather than from an anonymous computer.
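At its core, the carrier query above is set filtering. A minimal sketch, with company records and carrier sets invented purely for illustration:

```python
# Invented company records: an industry tag plus a set of shipping carriers.
companies = [
    {"name": "Acme Metalworks", "industry": "metalworking",
     "carriers": {"UPS", "DHL"}},
    {"name": "Bolt & Forge", "industry": "metalworking",
     "carriers": {"UPS", "FedEx"}},
    {"name": "Fresh Greens", "industry": "agriculture",
     "carriers": {"UPS"}},
]

def ships_with(company, wanted, excluded):
    """True if the company uses every wanted carrier and no excluded one."""
    c = company["carriers"]
    return wanted <= c and not (c & excluded)

# "Metalworking companies that ship with UPS but not with USPS or FedEx."
matches = [c["name"] for c in companies
           if c["industry"] == "metalworking"
           and ships_with(c, {"UPS"}, {"USPS", "FedEx"})]
```

Here only Acme Metalworks matches: Bolt & Forge is excluded for using FedEx, and Fresh Greens fails the industry filter. The hard part in practice is assembling accurate carrier and industry data, which is where the AI-driven research comes in.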

Thus, combining better data for sales, quicker research, and the human touch helps reduce spam and make sales a smoother experience.

Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career? Please share a story or an example for each.

1. Does it require a human connection?

User expectations are the biggest determinant of whether or not to assign a task to AI. If a task requires significant empathy, humans are much more likely to succeed. AI doesn’t have genuine empathy. While individual responses might seem empathetic, they are still entirely rational choices.

The HR example above can quickly provide a treasure trove of examples where an employee would interpret the same responses completely differently if they came from a human or a computer.

Employee: “Could you tell me about maternity leave?”

HR: “Are you currently expecting?”

Employee: “Yes”

HR: “Congratulations”

If HR were a human, especially in a face-to-face or video meeting, the employee would likely feel a connection and genuine appreciation for the good wishes. The same response from a chatbot would likely create an eye roll. Likewise, a human HR representative might make the employee feel cared for by the company. At the same time, the bot might make the user feel like another cog in the machine. Thus, user expectations and our wish for a human-to-human connection can significantly alter the decision to let AI handle a job or rely on humans.

2. Does non-verbal communication matter?

The question of non-verbal communication is closely related to our expectation of a human connection. Our facial expressions, gestures, and body posture can significantly alter the meaning and perception of our spoken words. In settings like job interviews, non-verbal communication can be the difference between accepting a candidate and rejecting them. Yet, interpreting non-verbal cues is extremely difficult because many environmental factors affect our behavior. For example, a candidate might be fidgeting because they are nervous. They might also have been caught in a lie. Alternatively, their shoes might not fit right, or the room temperature might be outside their comfort zone. Millions of years of evolution and cultural development have trained us to intrinsically pick up and evaluate some of these cues. Yet, for AI training, we would need to make them quantifiable without simultaneously spoiling the intuition behind them.

Thus, humans might be far from perfect whenever you need to evaluate non-verbal communication. However, we are still significantly better than AI.

3. Do humans think of it as boring?

One of the biggest determinants of where to use AI will be how engaging a task is for the person performing it. Everyone has repetitive tasks they would rather offload. Yet we can’t, as no one else is there to perform them. Thus, we rush through them and potentially make little, careless mistakes.

Reimbursements and receipt capturing are areas where machine learning has made huge strides. These days, we can capture almost any receipt with our phones. No more mistyped totals or wrong item listings.

4. Does it involve irrational actors?

One of the most challenging areas for AI is when one participant isn’t acting rationally. Yet, where irrationality slips into abuse, AI can also be a great way to protect your employees.

Thus, we might see very different approaches depending on the expected behavior of unpredictable actors. For example, concierge services and personal assistance, especially for high-net-worth individuals, will stay in human hands. Dealing with irrational requests and personal quirks is too difficult to program into an AI.

Airline and telco customer service, where emotions often rise to the point that both sides are irrational, might be better in the hands of an AI that ignores any kind of abuse.

5. Does it require high levels of creativity?

The current state of AI in business focuses primarily on predictive engines and large language models. On the most fundamental level, their output comes close to the average of the input data. Unfortunately, most humans aren’t 100% average, nor do we enjoy something that is just the average.

For example, if you ask AI to create a chicken dish, it will most likely involve a curry-like spice mixture, as most eaten chicken dishes likely involve a similar mix. The AI is unlikely to create something unique to try. We still need and will continue to require humans, like restaurant chefs, to do the creative heavy lifting.

Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?

You will find many dull tasks if you mentally walk through your work week. Most of us think of a task as boring if it’s repetitive, seems to provide low value, yet requires attention. Monitoring computer systems, reconciling bank accounts, or answering routine documents in calls for proposals are some of the work in this category. Yet, these tasks are easily automatable, sometimes even without AI. What AI has done here is give us a renewed drive to let computers take them over. Some systems, like programs filling out routine documents in calls for proposals, will require today’s language models. Others, like IT systems monitoring for anomalies, require different styles of machine learning. Still others, like software for reconciling bank accounts, might only need some AI work on the fringes. Yet, we will see more automation in all of them and AI in some.

Management, in contrast, will remain solely in the realm of humans. Bad management can sink a company, and AI is a lousy average of all its inputs. Additionally, humans strive for interpersonal relationships. Thus, we will see management, especially people management, remain a human domain.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

For many leaders, succession planning is only an afterthought. Consequently, we see older CEOs returning to take the helm and board members serving well into retirement age. If I could start a movement, it would encourage anyone in a leadership position, from congressmen and board members down to team leaders, to consider and train their replacement. Even if the trainees weren’t selected or elected to replace their mentor, it would give us a new generation of trained leaders.

How can our readers further follow your work online?

You can find more about me on my personal website at https://www.korte.co, which links to my recent articles and social media profiles.

This was very inspiring. Thank you so much for joining us!

Thank you for having me.

About The Interviewer: Kieran Powell is the EVP of Channel V Media, a New York City public relations agency with a global network of agency partners in over 30 countries. Kieran has advised more than 150 companies in the Technology, B2B, Retail and Financial sectors. Prior to taking over business operations at Channel V Media, Kieran held roles at Merrill Lynch, PwC and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can be leveraged to achieve concrete business goals.


C-Suite Perspectives On AI: Kevin Dominik Korte Of mPath AI On Where to Use AI and Where to Rely… was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.