In a recent episode of Deloitte Decision Point, Moumita Sarker, Partner – Analytics (AI and GenAI Specialisation) at Deloitte India, and Darshan Gunawardena, Director – Engineering, AI and Data at Deloitte Sri Lanka and the Maldives, sat down to examine what it really takes for organisations to prepare for artificial intelligence.
The conversation covered a wide range of topics, from the building blocks of AI readiness and the importance of data foundations to the role of leadership alignment, evolving platform strategies, and how organisations can measure the impact of their AI investments. Drawing from their experience advising enterprises across industries and geographies, they offered practical guidance on how organisations can make informed choices around AI adoption, avoid common pitfalls, and build scalable, responsible AI solutions that deliver tangible business value. Excerpts of the discussion follow:
What does it mean to be AI-ready? And what are the core elements organisations require for such readiness?
Moumita: AI can seem mythical and all-encompassing, influencing everything from data engineering to business decision-making. But to truly integrate AI into daily operations, organisations must become AI-ready. This involves several key elements: a reliable and structured data environment to effectively productionise AI use cases; leadership that recognises, supports and invests in AI initiatives for them to succeed; widespread understanding and acceptance of AI across the organisation; internal talent with the right skills to reduce dependency on external resources; a clear understanding of when and where AI is truly needed; and the right platforms and infrastructure, selected according to the specific needs of the chosen AI use cases.
Another key consideration is the classic ‘build versus buy’ decision. Organisations need to assess whether to develop AI capabilities in-house or leverage external solutions. Thinking strategically about these choices is a strong indicator that you’re moving in the right direction towards becoming AI-ready.
Finally, there’s the crucial element of continuous measurement of success. Over time, one of the biggest misconceptions about AI, or even misuses, has been the idea of a magical black box where something mysterious happens and results magically appear. That’s not reality. To truly succeed, organisations must consistently demonstrate progress through quick wins and clear, measurable outcomes.
How important is it to have a strong data foundation as the basis for building an effective AI strategy? How should organisations get started on building a robust data foundation?
Darshan: They say data is the new oil, and it’s clear that to build robust AI, we need high-quality data. Let’s look at a simple example to illustrate this. One of the long-standing goals in marketing is to personalise outreach to each individual customer, to tailor marketing in a way that truly resonates with that specific person. But how do we achieve that level of personalisation? To do this effectively, we need to understand the customer deeply. Take the common example of a bank wanting to promote a credit card or another product to a specific individual. To market that product appropriately, the bank must first understand who the customer is and what they need.
Banks already have a wealth of data, spread across various systems. Some of it sits in core banking platforms like customer profiles, savings and current account details, and even pledged assets. Additionally, there’s a rich stream of transactional data from credit card usage. Broadly, this data falls into two categories: Master Data, which includes static, descriptive details such as age, occupation, and other demographic information, and Transactional Data, which covers behaviours like how often money is withdrawn, salary patterns, and spending habits.
The real value emerges when we can bring all this data together and link it accurately to a single individual. This golden record gives us a 360-degree view of the customer. With this unified perspective, we can then build AI models that segment customers more effectively and enable deeper behavioural analysis. Ultimately, this forms the foundation for truly personalised marketing and intelligent decision-making. Reaching that level of data-driven AI is a lot of hard work, and that’s where the real challenge lies.
When it comes to building AI solutions, about 80% of the effort typically goes into data engineering: collecting, cleaning, integrating, and preparing the data. This is often the most time-consuming and complex part of the process. A major issue we see in many organisations, especially in sectors like banking, is poor data quality. Over time, institutions may have undergone several changes in their core applications, leading to inconsistencies and gaps in the data. For example, there may have been a time when capturing something as crucial as an identity card number wasn’t mandatory. Today, however, that missing data can become a major roadblock when trying to create a unified customer record.
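The master/transactional split and the "golden record" linking Darshan describes can be sketched in code. The following is a minimal Python illustration, not an actual implementation; all field names, identifiers, and sample values are hypothetical, and it deliberately shows how a missing identity number (the data-quality gap mentioned above) blocks unification:

```python
# Minimal sketch: linking master and transactional data into a "golden record".
# All field names and values below are hypothetical, for illustration only.

master = [  # Master Data: static, descriptive details
    {"customer_id": "C001", "nic": "851234567V", "age": 39, "occupation": "Engineer"},
    {"customer_id": "C002", "nic": None, "age": 52, "occupation": "Teacher"},  # ID never captured
]

transactions = [  # Transactional Data: behavioural records, e.g. card usage
    {"nic": "851234567V", "amount": 120.50, "category": "groceries"},
    {"nic": "851234567V", "amount": 45.00, "category": "fuel"},
]

def build_golden_records(master, transactions):
    """Join transactional behaviour onto each master profile by national ID."""
    records = {}
    for profile in master:
        nic = profile["nic"]
        record = dict(profile, transactions=[], linkable=nic is not None)
        if nic is not None:
            # Accurate linkage to a single individual is what creates the 360° view
            record["transactions"] = [t for t in transactions if t["nic"] == nic]
        records[profile["customer_id"]] = record
    return records

golden = build_golden_records(master, transactions)
print(len(golden["C001"]["transactions"]))  # 2 linked transactions
print(golden["C002"]["linkable"])           # False: legacy gap blocks unification
```

In practice this joining happens across core banking systems at scale, which is why roughly 80% of the effort lands in data engineering rather than modelling.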
Moumita: In the context of AI use cases, especially with the rise of Generative AI, there has been a noticeable shift over the past few years. The structured approach we’ve traditionally followed remains critical, particularly for production-grade implementations. However, the audience needs to understand that not all use cases require waiting until all your data is consolidated into one place. There are scenarios where quicker, more agile solutions are possible. In such cases, you can work with small, siloed data sets. These can be handled nimbly and don’t always need centralised, fully structured datasets. Thanks to Generative AI, these silos can now be accessed conversationally. This eliminates a major hurdle: the need for front-line workers to sift through structured reports or dashboards. They no longer need to filter or interpret complex data just to get started with their day. With these new tools, we can now build use cases on the fly, even if the data exists in multiple formats or structures. That flexibility is truly powerful.
How critical is leadership alignment in preparing an organisation for AI? And based on your experience working with organisations worldwide, what advice would you offer to others looking to get that executive support?
Moumita: There are a few pitfalls to avoid, and lessons that are especially relevant, when trying to drive AI adoption within organisations.
First, it’s essential to secure top management buy-in early on. Since AI is still a relatively new concept in many organisations, even though some may be more mature than others, it’s critical for the C-suite and their immediate teams to experience it, not just hear about it. They need to be able to touch and feel the use case. One thing I’ve seen work like magic is ensuring that every AI use case includes a clear, tangible view tailored specifically for top management. As practitioners, we sometimes fall into the trap of keeping things in the lab, working in isolation and taking pride in the technical build. But that’s a risky approach. Leadership needs visibility and context from the start.
Second, you simply cannot involve leadership only at the end or midway through the process. That’s a mistake, almost a cardinal sin. They need to be part of the journey from the beginning: involved at kickoff, consulted at key checkpoints, and aligned at every milestone. It’s about co-owning the initiative.
Third, and very importantly, let’s talk about investments. We can’t ignore the reality that AI requires experimentation, sometimes even failure. It’s not magic. It takes training, iteration, and above all, patience.
It’s crucial to set expectations with leadership right from the start. Clearly lay out: the use case, what’s expected, the milestones, and the level of iteration and refinement involved. If you preempt those conversations early, it helps avoid snap judgments. Once leadership sees progress, it becomes easier to unlock further investment. Capital for innovation rarely comes on Day One. It builds confidence as the results grow.
These factors have truly made or broken key parts of the AI journey for many organisations. The most successful ones I’ve seen have adopted a nimble approach, bringing their top management along from the start, and it’s made a real difference.
Many boards still lack a clear understanding of AI’s potential. In contrast, younger or mid-level managers may already be more comfortable with the technology. How do we bridge that gap? How do we equip different layers of the organisation, each with varying needs and starting points, with the right skills to move forward together?
Darshan: I believe AI awareness must start at the board level and cascade down throughout the organisation. At the very least, everyone needs a basic understanding of what AI is and what it’s capable of. That foundational awareness is the starting point. Right now, with the rise of Generative AI, AI is suddenly on everyone’s radar. People are curious; they want to know what it is and how it works.
Tools like ChatGPT have made it easy for anyone to type a prompt and get a response, which is great in some ways, but it’s also created confusion. In fact, this ease of use has somewhat diluted the broader understanding of what AI truly involves.
Many conversations I’ve had recently reveal a common trend: people want a ChatGPT-like interface that connects to their ERP or core systems, thinking that by just typing a prompt, they’ll get intelligent insights. The reality is, for that kind of experience to work meaningfully, the organisation still needs to build a strong data foundation.
Semantic understanding of the data has to be in place first. You can’t skip that part and expect quality results from a simple prompt. And this brings us back to an important point: if the board and C-suite lack a nuanced understanding of AI’s capabilities and limitations, they may inadvertently pressure teams, like the CIO or head of IT, into delivering outcomes that aren’t realistically achievable without foundational work.
So, awareness is critical, but not just surface-level hype. It needs to be informed, grounded, and adopted organisation-wide. That awareness also helps in setting a more realistic picture—what’s required, what the timelines might look like, and importantly, how we ensure trust and build responsible AI. This aspect is becoming increasingly critical.
With many GenAI tools today, users often enter personal, sensitive, or even company information without fully understanding the risks. It’s now so simple. You can just take a photo of your account balance, upload it, and have the tool perform tasks. That convenience raises serious concerns around data privacy and responsible usage.
The concept of responsible AI isn’t optional—it’s essential. And for that, a baseline awareness across the organisation is a must. Everyone should be informed about key considerations: privacy, data handling, ethical use, and security. We also need to think about how we operationalise this. Do we build or buy? How do we equip our IT teams to make informed decisions and take the lead? IT teams, in particular, need to be empowered not just with technical tools but with the mindset to start small, iterate, and scale responsibly.
It’s also essential that teams truly understand what’s happening because the pace of innovation today is incredibly fast. I come from an era where, if you needed a new machine, you had to order it and wait six months for it to arrive. But with the shift to cloud, that’s completely changed. If you want to scale up infrastructure now, it’s often as simple as flipping a few switches.
Similarly, many AI models are readily available today. But fine-tuning them and integrating them effectively still requires deep technical expertise. That’s why it’s so important for organisations to ensure their employees are continuously building the right skills. And this can’t be treated as a one-time effort. You can’t just send people to a training programme and assume the job is done. Skill development has to be an ongoing process that evolves along with technology.
We’ve talked about building awareness at the board and C-suite levels, but there’s another important group to consider: functional users across departments. These users also need to be educated on AI capabilities. When they understand what’s possible, they can proactively identify opportunities. For instance, someone in HR or finance might say, “I’m doing this repetitive task every day, like scanning documents and extracting data manually. I think AI could help here.”
That kind of awareness is powerful. It helps create a pull from the business side and encourages collaboration with IT. And it’s not limited to corporate functions. Take a doctor, for example. They don’t need to know how to build AI systems, but if they recognise a potential use case, they can flag it: “Here’s a problem I face regularly. Could AI help solve this?” That’s enough to start a conversation with the technical team and potentially co-create a solution.
Success in any journey, whether it’s a project, an initiative, or a broader transformation, ultimately comes down to the choices you make. And preparing an organisation for AI is no different; it’s about making the right choices. What kinds of choices do organisations face when it comes to AI? And based on your experience, what have you seen as the right choices that lead to success?
Moumita: Ultimately, success with AI comes down to making the right choices. The types of choices these organisations face can vary widely depending on where they are. One of the most critical choices in the AI space is selecting the right use cases. This is fundamental. At the end of the day, resources—especially people’s time and leadership focus—are limited. It’s not just about money or infrastructure.
That’s why it’s important to avoid the trap of running 20 proof-of-concepts just to tick a box. That approach rarely adds value. Instead, organisations should have a structured governance mechanism to assess whether a use case truly calls for AI or GenAI. Understanding the basic distinction between traditional AI and GenAI can help teams prioritise better. One framework I often recommend is a simple but powerful use case choice matrix.
Imagine a process like sanctioning loans or issuing credit cards. The task is largely automation-focused and needs to scale across, say, 400 branches. It could involve scanning forms using computer vision, cutting processing time from three days to just three hours. That’s a high-impact use case. The customer is delighted, and it creates a lasting impression, which becomes a driver of brand loyalty. These are the kinds of impact-oriented choices that matter. If it’s simple, involving just one stakeholder or solving a niche problem, maybe a lightweight AI or point ML solution will do the job.
Now let’s add another layer—data availability. Some use cases, like fraud or anomaly detection, don’t always have enough historical data to learn from. That’s where GenAI could be useful in generating synthetic examples or filling in data gaps to support better model training. You’re really looking at a three-dimensional framework: What’s the potential business or customer value? How difficult is the problem, and how widely will the solution be deployed? Is there enough data, or do we need to innovate around that?
Every potential use case should be evaluated through this lens before committing resources. Of course, innovation should always have room to thrive. For instance, consider enabling sales conversations with AI that could save time and add value. But is it mission-critical? It depends. In insurance, where agents are often contracted, such a solution might be vital. But in a smaller company with just 40 in-house salespeople, a manual approach might suffice.
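The three-dimensional framework described above (business value, difficulty and scale of deployment, data availability) can be turned into a simple scoring sketch. The weights, scales, and candidate use cases below are illustrative assumptions, not a Deloitte methodology; the point is only that the matrix forces an explicit comparison before committing resources:

```python
# Hypothetical sketch of a use case choice matrix with three dimensions:
# value, difficulty/scale, and data availability. Weights are assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_value: int   # 1 (niche) .. 5 (high customer/business impact)
    scale: int            # 1 (one stakeholder) .. 5 (e.g. 400 branches)
    data_readiness: int   # 1 (sparse history) .. 5 (rich, clean data)

def priority(uc: UseCase) -> float:
    """Weighted score; low data readiness drags a use case down unless
    synthetic data (e.g. via GenAI) can close the gap."""
    return 0.5 * uc.business_value + 0.3 * uc.scale + 0.2 * uc.data_readiness

candidates = [
    UseCase("Loan form scanning across branches", 5, 5, 4),
    UseCase("Fraud detection (little history)", 4, 3, 1),
    UseCase("AI-assisted sales notes (40 reps)", 2, 1, 3),
]

for uc in sorted(candidates, key=priority, reverse=True):
    print(f"{priority(uc):.1f}  {uc.name}")
```

A real assessment would, of course, weigh regulatory constraints, cost, and strategic fit as well; the matrix is a prioritisation aid, not a decision engine.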
Darshan: When we talk about technology choices, the landscape has evolved significantly. If you look back five or six years, AI was typically developed outside of core enterprise applications. You would extract data from your ERP, CRM, or other systems, then move it to an on-prem data centre or a cloud environment, often with one of the major hyper-scalers, because of the high compute requirements for AI workloads. But today, things have changed. AI is increasingly embedded within core enterprise platforms like ERPs and CRMs. Many of these applications now come with built-in AI capabilities that can be customised and extended.
So, a key question becomes: Do I still need to extract my data and build AI externally, or can I leverage what’s already integrated into my existing systems? If you do need to build outside your core systems, the next decision is where to build it: On-premise, or on the cloud, like AWS, Microsoft Azure, or Google Cloud. Regulatory considerations often drive this decision. For example, in some countries, laws around data sovereignty might prohibit storing or processing data outside national borders.
In Sri Lanka, for instance, none of the major hyper-scalers currently have local data centres, so that limitation can heavily influence platform decisions. In environments where regulations do allow cloud use, things become simpler. Cloud platforms offer scalability, ready-to-use infrastructure, and access to a broad range of pre-built AI models, especially for Generative AI.
What we’re seeing more often now is not a single-platform strategy, but rather a multi-platform, hybrid approach combining different technologies based on specific goals, constraints, and existing system affinities. For example, if your current enterprise apps are hosted on Azure, it may make sense to build on Azure for ease of integration. But if cost or capability leads you elsewhere, that’s a valid path too. GenAI has also added flexibility, with the availability of open-source models, more tooling options, and greater experimentation potential. You can start small, test out ideas, validate value, and then scale gradually.
Moumita: One of the most important points here is that you need to make technology choices based on the use case, not the other way around. We now have the flexibility in the system to support this kind of multi-model or multi-cloud approach. It’s smart to use different platforms or tools based on what the use case demands. If one platform offers better performance or model capabilities in that space, then go with it. The idea that you have to stay married to one platform is outdated. The focus should always be on the end outcome, not the tool. This case-driven approach is not new; it’s been around for decades. What’s changed is the richness of options and the increasing need for strategic clarity.
Beyond the platform and tooling decisions, I want to highlight two other key choices.
The first is Build versus Buy: You need to carefully assess whether building a solution in-house or buying an existing one makes more sense. In many cases, the problem you’re trying to solve may already have proven solutions in the market. If that’s the case, don’t reinvent the wheel. Borrow and scale. Leverage external expertise to accelerate your outcomes. There are core processes, especially in sectors like financial services, where you may want to retain control and build the solution internally.
The second is Platform Fit versus Flexibility: You may already have affinities with certain platforms due to legacy systems or existing licenses. Don’t let that limit your thinking. Choose what fits best for the specific challenge, and always be conscious of cost implications because some platforms may scale differently in terms of cost as your usage grows.
I think the most important question from a board perspective or even an investor perspective is measurement. Where is the organisation on its journey to be ready for AI? What are the ROIs on the investments we have made? What more do we have to make? How does an organisation go about it?
Moumita: I think it’s particularly challenging when it comes to AI because, at times, its impact can be intangible. There are two key factors that make measurement in this space a bit grey.
First, some aspects of AI drive efficiency, while others have a direct impact. The key point here is that when you start a use case, and I’ve seen organisations overlook this, it’s crucial to have a use case charter. For example, if you’re working on an AI use case where the goal is to take all your documents and summarise them, such as in a finance function, it’s important to clarify upfront that the objective is efficiency. The same applies to making SOPs or other documentation more conversational and easier to onboard, which ultimately enhances efficiency. In such cases, the use case charter should clearly state that the goal is an efficiency driver. What does this mean?
When it comes to measuring ROI, it’s important to define what investment entails, whether it’s in terms of cost, time, or the resources required for a proof of concept (POC). Once you have these aspects clarified, the next step is to show the organisation that the AI solution will save a specific amount of man-hours. These savings can be translated into indirect revenue. There are two main approaches to measurement: direct and indirect.
If the use case is directly customer-facing, such as recommending the next best products (which is common in banking, FMCG, or retail), the measurement should be direct. In these scenarios, AI engines run in the background to deliver recommendations that encourage customers to buy products they haven’t purchased before, which directly increases the range and relationship value of each customer.
On the other hand, if the use case is focused on improving efficiencies, like saving man-hours or reducing costs, the impact is more indirect. However, you need to convert those saved man-hours into a conversation about indirect revenue. The key is to demonstrate how those hours can be redirected to more value-driven work, which will eventually have a direct business impact. For example, in a trade promotion, any cost savings from AI automation could be reinvested into marketing, improving visibility, and driving more customers to the shop floor, which ultimately leads to more sales. This indirect savings thus gets converted into direct revenue.
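The man-hours-to-indirect-revenue conversion described above is ultimately simple arithmetic. Here is a back-of-the-envelope Python sketch; every figure in it is a hypothetical assumption for illustration, not a benchmark or a Deloitte estimate:

```python
# Back-of-the-envelope sketch of indirect ROI: hours saved by an AI
# solution expressed as a cost-avoided revenue equivalent.
# All numbers below are hypothetical assumptions.

def indirect_roi(hours_saved_per_month: float, loaded_hourly_cost: float,
                 total_investment: float, months: int) -> float:
    """Express man-hour savings as value recovered, then as a simple ROI ratio."""
    value_recovered = hours_saved_per_month * loaded_hourly_cost * months
    return (value_recovered - total_investment) / total_investment

# e.g. a document-summarisation POC: 400 hours/month freed at $30/hour,
# against a $100,000 build, measured over 12 months
roi = indirect_roi(hours_saved_per_month=400, loaded_hourly_cost=30,
                   total_investment=100_000, months=12)
print(f"{roi:.0%}")  # 44% return over the period
```

This is why the use case charter matters: efficiency-driven use cases only produce a defensible ROI figure if the hours saved, their loaded cost, and the measurement window are agreed upfront.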
Darshan: We start with certain use cases, and sometimes they may not be successful, but we shouldn’t get discouraged. Instead, we should focus on taking what we learned from those experiences, going back, and understanding why it didn’t work. From there, we can assess whether we can still make use of the investment, both in terms of time and effort. When you go down that journey, you inevitably learn something. There is also an ROI on that investment, simply because you’ve committed resources and time and gained valuable insights from the process. I believe this constitutes an indirect ROI on the investment.