Perspectives
Takes on technology leadership
Point-of-view writing on the topics that matter most to technology leaders at small and mid-size companies. No sponsored content, no product recommendations.
A CEO came back from a conference convinced AI was going to transform the business. When I asked where, he did not know. He just knew that he wanted it to.
The operating model that breaks AI strategy the fastest is the one where business leaders work out the strategy in one room and hand it to technology to execute. It rarely works with mainstream enterprise software. It almost never works with AI, because AI runs on your data, and the knowledge about what data exists, where it lives, and how clean it is sits with technology, not leadership. MIT research puts the number of enterprise AI pilots that fail to reach production at 95 percent, most often because of unfit or untrusted data. Strategy built without technology at the table mostly contributes to that statistic.
MIT Sloan Management Review
9 Mistakes Leaders Make With AI Strategy
The companies that struggle most with technology decisions are not the ones that lack technical talent. They are the ones where no one has formal accountability for the outcomes of technology investments.
In most of the organizations I have worked with, technology decisions get made by whoever has the most momentum in a given meeting. A senior leader approves a vendor contract, a project manager drives an implementation, and the CTO finds out three months later when something breaks. The problem is not that these people made bad decisions. The problem is that no one had a clear mandate to own the outcome and course-correct when things went sideways. Formal accountability for technology investments means someone has both the authority to make decisions and the obligation to stand behind the results. Without that, organizations end up with sprawling vendor relationships, inconsistent architecture, and no one who can explain what was spent or what it produced.
Harvard Business Review
Who Owns Your Company's Technology Decisions?
Slow engineering delivery is rarely an engineering problem. It is almost always a requirements problem that engineering gets blamed for.
I have watched talented engineering teams get labeled as underperformers when the real issue was that requirements were changing weekly and no one in the business was being held to a decision. Engineers work best when the scope is clear and stable enough to build against. When it is not, they slow down, they rework, and they get frustrated in ways that eventually affect retention. Leadership usually sees the delay and concludes the team is not capable. The actual diagnosis is that the intake and requirements process has no structure and no ownership. Fixing delivery problems without fixing how work enters the engineering queue is one of the most common and most expensive mistakes I see.
McKinsey & Company
Yes, You Can Measure Software Developer Productivity
Most small companies will experience a security incident before they experience a formal audit. The ones that recover well are the ones that had the basics in place before it happened.
The organizations I have seen handle security incidents well all had something in common before the incident happened. They had basic controls in place: patching discipline, access reviews, endpoint protection, and some version of an incident response plan they had actually walked through. The companies that struggled were not necessarily negligent. They had just been deferring the fundamentals because no incident had happened yet. A security incident at a small company does not just create a technical problem. It creates a customer trust problem, a regulatory notification obligation, and often a leadership credibility problem all at once. The cost of basic hygiene before an incident is a fraction of the cost of recovery after one.
CISA
Known Exploited Vulnerabilities Catalog
The first 90 days in a technology leadership role determine whether the next three years go well or poorly. Most executives underinvest in the listening and assessment phase and overpromise in the first week.
I have made this mistake myself. In an early leadership role I came in with a clear diagnosis and a set of commitments within the first two weeks. Some of those commitments turned out to be wrong because I had not yet understood the full picture. Walking them back cost more credibility than staying quiet and learning would have. The first 90 days should be spent understanding the environment, building relationships with the people who actually know where the problems are, and forming a view of what the real constraints are. Commitments made before that work is done tend to be based on what the organization says it needs rather than what it actually needs. The two are often different.
MIT Sloan Management Review
How New Leaders Can Hit the Ground Running
Most companies will discover their AI exposure through a compliance question they cannot answer, not a strategic planning session.
Shadow AI adoption follows the same pattern as shadow IT from fifteen years ago, except the data exposure risks are considerably higher. Staff are using consumer AI tools with sensitive client data because the tools are good and no one told them not to. The problem is not malice; it is the absence of policy in a fast-moving environment. The organizations that get ahead of this are not the ones that ban AI outright. They are the ones that define what is and is not acceptable, audit what is already in use, and set clear expectations before something goes wrong.
Rochester Business Journal
The Risks of Shadow AI in the Workplace
SOC 2 is not a checkbox. Organizations that treat it as one find out the hard way.
I have guided three organizations through SOC 2 Type II. In each case, leadership came in thinking it was primarily a documentation project. It is not. SOC 2 requires that controls are designed, implemented, and operating effectively over time. That means process changes, tooling decisions, and sustained operational discipline, not just a policy binder. The organizations that succeed are the ones that treat the readiness process as a genuine improvement to their security posture, not a certification to hang on the wall. The ones that treat it as paperwork usually fail their first audit.
SecurifyAI
SOC 2 Compliance in 2026: What Startups and SMEs Need to Know
A technology roadmap that lives in a shared drive is not a roadmap. It is a document that someone spent time creating.
The difference between a roadmap that gets used and one that does not comes down to two things: who was in the room when it was built, and who owns it after it is delivered. When the roadmap is built by a consultant working from interviews and handed to a leadership team that was not deeply involved in the synthesis, it rarely survives first contact with Q2 priorities. The roadmap has to reflect how the organization actually makes decisions, not how it says it does. That means the building process matters as much as the output.
CIO.com
Whatever Happened to the Three-Year IT Roadmap?
The gap between companies using AI and companies benefiting from AI is almost entirely explained by data readiness. The tool is rarely the problem.
I have seen this pattern in healthcare technology repeatedly. An organization invests in an AI tool, goes through implementation, and then the project stalls or produces results that no one trusts. When you dig into why, the answer is almost always the same: the underlying data is incomplete, inconsistent, or not structured in a way the tool can use reliably. AI tools in healthcare settings are particularly sensitive to this because clinical and operational data tends to be fragmented across systems that were never designed to work together. Getting value from AI requires solving the data foundation problem first. That is slower and less exciting than buying a tool, which is why so many organizations skip it and wonder why their results are disappointing.
McKinsey & Company
The State of AI in 2024
A technology assessment is not a report. It is a conversation that forces alignment on what is actually true about your technology environment, which is often different from what leadership assumes.
The most valuable thing a technology assessment produces is not the findings document. It is the moment when a leadership team looks at the same set of facts together for the first time and realizes they have been operating on different assumptions. I have run assessments where the CEO believed the company had a modern, well-integrated technology stack and the head of operations thought the infrastructure was one incident away from a significant outage. Both were working from partial information. Getting to a shared and accurate picture of the current state is the prerequisite for any meaningful conversation about where to go next. Without it, roadmap conversations tend to be optimistic in ways that do not survive contact with reality.
CIO.com
What Is a Technology Assessment and Why Does Your Business Need One?
Engineering culture is set at the top. If the CTO or VP of Engineering tolerates low accountability, the team will reflect that within six months regardless of what the values doc says.
I have seen this play out more than once in both directions. A strong engineering leader who holds the team to clear standards around code quality, delivery commitments, and how problems get escalated creates a team that operates that way consistently. A leader who tolerates chronic lateness, unclear ownership, and post-mortems that produce no action creates a team that reflects those norms just as consistently. The values document does not matter. The behavior that gets modeled and rewarded does. When I am assessing an engineering organization and I want to understand the culture, I do not start with the team. I start with what the senior technology leader spends time on and what they let slide.
MIT Sloan Management Review
The Culture Factor
HIPAA compliance and good security are not the same thing. You can pass a HIPAA audit and still have a security program that would not survive a real incident. The organizations that confuse the two find out the difference at the worst possible time.
HIPAA sets a minimum floor for what covered entities and business associates are required to do, and that floor is lower than most people assume. Meeting the requirements gets you through an audit, but it does not necessarily mean your security program would perform well under real attack conditions or during an actual breach investigation. In healthcare organizations I have worked with, the audit prep process often focuses on documentation and policy rather than on whether controls are actually working. A good security program goes beyond what is required and focuses on whether the controls in place would actually detect, contain, and recover from an incident. Those are two different standards, and confusing them is a risk that shows up in breach statistics every year.
HHS Office for Civil Rights
HIPAA Security Rule Guidance
The decision to hire a fractional CIO is not primarily about cost. It is about what stage the company is at and what kind of problem needs solving.
A full-time CIO hire makes sense when you need someone to build and own a technology organization over the long term. A fractional arrangement makes sense when you need experienced judgment applied to a specific set of decisions over a defined period. These are not the same thing, and using the wrong model creates problems in both directions. I have seen companies bring in full-time leaders too early, before the scope of the role was clear, and I have seen companies use fractional arrangements to avoid making a hire they actually needed. The starting question is not how much the person will cost. It is what outcome you are trying to produce and over what timeframe.
Harvard Business Review
How to Make Fractional Leadership Work
Technical debt is not a developer problem. When it gets bad enough, it becomes a leadership problem that shows up in financial results.
Non-technical leaders often hear about technical debt as an internal engineering complaint, and they tune it out. That is a mistake. Technical debt accumulates when teams make expedient decisions under deadline pressure and never revisit them. At a certain threshold it stops being a background friction and starts affecting delivery timelines, incident rates, and the ability to hire and retain good engineers. By the time it shows up in missed commitments and escalating infrastructure costs, the underlying problem has been compounding for years. The conversation leaders should be having is not how to eliminate debt. It is how to understand the current level and what the team needs to stop making it worse.
MIT Sloan Management Review
How to Manage Tech Debt in the AI Era
The most expensive technology mistake most companies make is hiring the wrong person into the first senior technology role. The second most expensive is waiting too long to make the hire at all.
The first senior technology hire is usually made under pressure, often because something broke or a board member asked a pointed question. That is the worst possible time to make a consequential hire. The role often gets defined too narrowly around the immediate problem rather than around what the company will need eighteen months out. I have seen companies hire a strong infrastructure operator when they needed someone who could build and lead an engineering team, and vice versa. The interview process for senior technology leaders also tends to overweight technical depth and underweight judgment, communication, and the ability to work with non-technical stakeholders. Those are the skills that determine whether the hire works. The technical credentials just get you in the room.
First Round Review
How to Interview a CTO
PE-backed companies post-acquisition almost always have technology debt, integration gaps, and compliance exposure that were not visible during due diligence. The first 180 days are when that becomes someone's problem.
I have worked with several PE-backed companies in the period after close, and the pattern is consistent. Due diligence identifies known risks, but the technology picture that emerges in the first 90 days of actual operation is almost always more complicated than what showed up in the data room. Integration requirements that seemed manageable turn out to require significant architectural work. Compliance controls that appeared adequate do not hold up under closer review. The first 180 days are also when the pressure to hit operational targets is highest, which means technology problems that should be prioritized often get deferred. The organizations that navigate this well are the ones that run a structured technology assessment in the first 60 days and treat the findings as a board-level issue, not an internal IT task.
Bain & Company
Technology Due Diligence in Private Equity
Follow for more
Short takes on technology leadership posted regularly on LinkedIn.