
C-Suite Perspectives On AI: Michael Burns Of Attainable Edge On Where to Use AI and Where to Rely Only on Humans

An Interview With Kieran Powell

Focus AI on the right things: Are you paying people to do data entry? To reconcile documents? Those are AI tasks. Are you interviewing candidates, where you need to read body language to assess candor and charisma? That’s a task for a person. Using AI to assess personality or other human characteristics can feel dehumanizing, so keep the human perspective front and center.

As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Michael Burns.

Michael Burns is the Co-Founder and CEO of Attainable Edge, LLC, a company dedicated to expanding the benefits of AI technologies to a broader audience. Before founding Attainable Edge, Burns served as the Chief Information Officer at Benco, the largest privately held dental distributor in North America. In this role, Burns focused on driving innovation across the enterprise in areas such as strategy, process optimization, e-commerce, business intelligence, and AI.

Now at Attainable Edge, Burns, alongside co-founder Alan Czysz, is at the forefront of developing StratSimple, a B2B AI-powered software that assists non-profits and small to medium-sized businesses in understanding their environment, setting better goals, and equipping team members with the tools they need to deliver results. An acclaimed speaker, he actively shares his expertise with diverse audiences on topics relating to AI implementation and strategic planning.

Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

I began my career as a software developer in the insurance industry, working for Guard Insurance, a subsidiary of Berkshire Hathaway. My work focused on building systems to determine whether treatments for an injury were appropriate for a diagnosis — this was before the advent of AI as we know it today, but the impact was very similar: how can we use technology to augment nurses in using data to make better decisions while managing a claim?

Through a bit of luck and good timing, I was able to move into a management role where I was responsible for managing team members and planning development priorities for the company. From there, I moved to Benco Dental as the Director of Application Development, a position I held for about five years. Surprisingly, I was then asked to leave the world of technology completely and report directly to the Chief Revenue Officer as the Director of National Sales Operations. In that role, I led areas like sales compensation, sales training, and sales processes for a nationwide sales team of over 400 team members.

After just two years, I was invited back into the IT fold to step into the role of Chief Information Officer, overseeing all technology strategy and implementation globally. I also enjoyed the opportunity to continue stretching beyond traditional IT as we built centers of excellence for enterprise continuous improvement, data science, and project management. I was in that role for six years when I became so fascinated with the pace and capability of modern AI that I knew I wanted to focus my career on it.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

Wow, that’s a fun question. I love the quote that success is stumbling from failure to failure without any loss of enthusiasm! I’ve had my fair share of missteps over the years… One that immediately comes to mind is from my days as a developer when I accidentally deleted the table containing all of the insurance company’s clients from the production SQL server. This was back in the days before we had rigorous controls like we do today, and you had to manually keep track of whether you were working on the production or development environments.

I still remember the panic when the entire software platform stopped working, and the errors popped up: “table does not exist.” My first thought was, “Oh man, that’s not good,” immediately followed by, “Oh man, I think I did that. That’s REALLY not good.” Luckily, I had the right instinct to immediately run to my manager, explain what happened, and rally the team to restore the data and put things back as good as new. The whole ordeal probably lasted only about 10 minutes from start to resolution, and I was sure I would be fired. However, credit to my manager, who calmly asked, “What did you learn?” She didn’t raise her voice or make me feel in trouble; she just coached me to ensure I learned from the mistake.

That lesson has stuck with me and has become a huge part of my leadership style, which is grounded firmly in the concept of encouraging psychological safety within a team. We all make mistakes; the goal is to learn from them and not repeat them. When a team builds up trust that everyone is in this together, that everyone has each other’s back, well, at that point, that team can achieve anything.

Are you working on any exciting new projects now? How do you think that will help people?

We’re really excited about the work we’re doing with StratSimple (https://www.stratsimple.com) to raise the bar for what planning can look like within an organization. We help organizations improve their end-to-end planning-to-execution process. It starts with using AI to help get better inputs during stakeholder surveys, uses AI to help leaders set better goals, and provides an OKR-based solution to help teams stay accountable and transparent as they work towards accomplishing their objectives.

We believe that humans will always have a vital role in strategic planning — and we’re using AI to augment our uniquely human abilities to help improve the performance of everyone involved in the process. For instance, we use AI to eliminate drudgery and administrative work from the process and to serve as a thinking partner that can ask questions and propose solutions, so people don’t have to start from scratch every time. Because the AI understands the context of an organization’s environment and its overarching goals, it can help lower-level teams stay aligned with the big picture.

Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?

I’ve witnessed significant impact in integrating AI into business operations, both from my role at Benco and through consulting with clients on their AI strategies. The pace of AI adoption lags astonishingly behind its capabilities, with the biggest barriers to implementation revolving around change management. The most common apprehension I encounter is the notion that “AI can’t do my job.” And to that, I say they are correct. The initial clarification I provide to companies looking to implement AI solutions is that AI doesn’t automate jobs; it automates tasks. More often than not, it automates the very tasks people are least interested in doing.

To successfully integrate AI into an organization, there must be executive leadership with a cohesive strategy. You simply can’t allow everyone to figure it out on their own. Questions that need addressing include: How will you train employees to leverage off-the-shelf tools to enhance their productivity? How will you ensure they do it safely? As full automation integrates, how will job displacement be handled? Even though we automate tasks and not jobs, the reality is that in large teams, as tasks get automated, fewer people are sometimes needed to accomplish the same amount of work. How do you plan for these displacements? Re-training? Downsizing through attrition? The worst thing you can do is to let a team assume that their jobs are at risk if they embrace automation. That’s a sure-fire way to make sure your implementation fails. Continuous improvement principles apply here — people always come first!

Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?

Knowing the capabilities of AI helps to avoid this — by learning up front what AI is good at and bad at, you can avoid implementations that fail, because those failures are very predictable. One common area where we see large language models like ChatGPT fail is anything to do with math, even simple math. For instance, we have helped clients clean up their product data using large language models, and they do an amazing job of synthesizing all of the incoming data into a really well-written product description — but ask that same AI to make sure the description comes in under a certain length, and good luck to you: large language models can’t count!
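Since counting characters is exactly what LLMs fail at, the practical fix is to enforce length constraints in code after generation rather than trusting the model. A minimal Python sketch of such a guardrail (the function name and truncation strategy are illustrative, not a description of any client implementation):

```python
def enforce_max_length(description, max_chars):
    """Hard-enforce a length cap the model cannot be trusted to respect.

    LLMs are unreliable at counting characters, so instead of asking the
    model to stay under a limit, validate (and truncate) after generation.
    """
    if len(description) <= max_chars:
        return description
    cut = description[: max_chars - 1]  # leave room for the ellipsis
    space = cut.rfind(" ")
    if space > 0:  # prefer breaking at a word boundary
        cut = cut[:space]
    return cut.rstrip() + "…"
```

In practice you might instead re-prompt the model when the output is too long, but a deterministic post-check like this is the only way to guarantee the constraint holds.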

How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?

It goes back to having top-level leadership, training, and transparency. It’s crucial to have a policy in place that works for your organization. Unfortunately, while there are standard best practices for sure, there is no one-size-fits-all solution here. Yes, we believe there will be job displacement from AI, just like there has been from every previous significant technology. The real measure of a company’s integrity is how they handle that displacement — whether through slowing down hiring, re-training staff, or other means that prioritize respect for people.

In my view, the unethical aspect arises when companies demonstrate no loyalty to their team members, eliminating positions automated by AI without forewarning or a plan to support those impacted. Such actions can swiftly erode trust within the remaining workforce, diminishing their willingness to contribute to further automation and improvement initiatives.

I advocate for incentivizing team members to identify automation opportunities, whether through rewards for ideas or recognizing individuals who significantly streamline processes. When approached correctly, it’s possible to engage the team actively in the automation and improvement process. The key is to work with them, and not “do it to them”.

Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?

Absolutely, this is the core of what we’re building with StratSimple. I’ll give you two examples.

1) Imagine you are getting back survey responses from stakeholders as the first step of building a strategic plan. Every employee, even the CEO, has biases that influence the themes and findings. Instead, we can reliably use AI to extract the themes and insights and to identify each respondent’s sentiment, allowing humans to come in afterward and get a high-level, unbiased perspective without having to read novels’ worth of individual responses.

2) Or think about goal setting — it’s hard. Really hard. But AI becomes an excellent brainstorming partner with almost perfect memory — if we give the AI context about an organization’s environment, priorities, and past performance, it can provide gentle guidance that helps everyone stay aligned across the organization. How many times have you been on a team that does retrospectives after major projects? How often does anyone actually go back and use those learnings in a systematic way? AI can empower teams by bringing back the most relevant learnings from their own reflections at just the right time to helpfully influence active decisions and planning.

Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career?

1. Have a top-level plan: What is your objective for AI automation? How will you measure success? Both in terms of the financial, but also human perspectives. What data privacy and security concerns need to be addressed in your environment? How will you define ethical use?

2. Focus AI on the right things: Are you paying people to do data entry? To reconcile documents? Those are AI tasks. Are you interviewing candidates, where you need to read body language to assess candor and charisma? That’s a task for a person. Using AI to assess personality or other human characteristics can feel dehumanizing, so keep the human perspective front and center.

3. Have a dedicated AI leader: If AI is everyone’s job, it will be no one’s job. AI technology is moving too fast to keep up with emerging capabilities part-time. If you can’t dedicate a full-time person to this role, find a partner who can advise you. Falling behind the AI curve is going to hurt, as the pace of this change is only accelerating — and building a strategic plan for how to use AI is exactly the kind of thing that AI cannot do.

4. Start training team members now: Team members who do not learn to increase their own effectiveness with AI are the most likely to be negatively affected by it. At the same time, team members need to understand how to work safely with AI and proprietary data. They need to understand AI bias, both in using and building AI tools. And they need to understand AI limitations, so you don’t unintentionally have someone asking an LLM to tackle a complex mathematical problem without knowing that they are expected to take responsibility for the final results.

5. Anticipate and plan for failures: Technology is fallible — there are outages, there are bugs. If you build a process entirely dependent on AI, what will the impact on customers be? Consider segmenting your processes into customer-critical and non-critical. For customer-critical processes, you may want to keep a “human in the loop” who reviews output and validates that things are working as expected. Even if they are just sampling the AI output, you want to be thinking ahead to what happens when things go wrong.
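The “human in the loop” sampling mentioned in point 5 can be as simple as routing a random fraction of AI outputs to a reviewer queue. A minimal, hypothetical Python sketch (the function name and review rate are illustrative only):

```python
import random

def sample_for_review(outputs, rate=0.05, seed=None):
    """Select a random subset of AI outputs for human spot-checking.

    `rate` is the fraction to review; at least one item is always chosen
    so a customer-critical batch never goes entirely unreviewed.
    """
    if not outputs:
        return []
    rng = random.Random(seed)  # seedable, so audits are reproducible
    k = max(1, round(len(outputs) * rate))
    return rng.sample(outputs, k)
```

The key design choice is that the sampling happens outside the AI system itself, so the review process keeps working even when the model misbehaves.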

Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?

If you can document the “right way” to do a task, it’s going to be impacted by AI and automation technologies — estimates are that 50% of tasks within normal business operations are already fully automatable. But business is fundamentally about people working together: customer to provider, leader to follower, team to team. Thinking about our business, AI can help with the purely rational work of building a plan, but it’s always going to take a person to inspire others with their vision. It’s going to take a person to facilitate people coming together — reading body language to see whether people are really bought in or not. And it requires a human to empathize with another person’s pain and invent a new idea that has never existed before to solve it. Ultimately, as AI automates tasks, it’s going to allow all of us to focus on the things that make us human, and that is exciting to me.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

If I could start a movement that would benefit the greatest number of people, it would focus on empowering mission-oriented non-profit organizations to be more intentional and effective in their endeavors. We’ve come so far as a global society, but there are so many people still in need of the basics that many people reading this take for granted — and honestly, in the messy middle of AI implementations worldwide, it’s likely more need will be created. We aspire to help mission-oriented non-profits make the world a better place by being more intentional and more effective in their own work.

So many of the nonprofits out there do amazing work, but they feel so busy with the day-to-day that they can’t take a step back to plan for the future. I wish we could help those organizations understand that failing to plan is planning to fail — their work matters, and if they can increase their own effectiveness, they will be able to help more people and improve countless lives!

How can our readers further follow your work online?

You can find me online at www.linkedin.com/in/michaelburns4

This was very inspiring. Thank you so much for joining us!

About The Interviewer: Kieran Powell is the EVP of Channel V Media, a New York City public relations agency with a global network of agency partners in over 30 countries. Kieran has advised more than 150 companies in the technology, B2B, retail, and financial sectors. Prior to taking over business operations at Channel V Media, Kieran held roles at Merrill Lynch, PwC, and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can be leveraged to achieve concrete business goals.


C-Suite Perspectives On AI: Michael Burns Of Attainable Edge On Where to Use AI and Where to Rely… was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.